SCDM Certified Clinical Data Manager CCDM Exam Practice Test

Page: 1 / 14
Total 150 questions
Question 1

A study is collecting ePRO assessments as well as activity-monitoring data from a wearable device. Which data should be collected from the ePRO and activity-monitoring devices to synchronize the device data with the visit data entered by the site?



Answer : B

To synchronize data from electronic patient-reported outcomes (ePRO) and wearable activity-monitoring devices with site-entered visit data, both the study subject identifier and date/time are essential.

According to the GCDMP (Chapter: Data Management Planning and Study Start-up), each dataset must contain key identifiers that allow for accurate data integration and temporal alignment. In studies involving multiple digital data sources, time-stamped subject identifiers are necessary to ensure that the device-generated data correspond to the correct subject and study visit.

The subject identifier ensures data traceability and linkage to the appropriate participant, while date/time allows synchronization of device data (e.g., activity or physiological measurements) with the corresponding site-reported visit or event. Geo-spatial data (options C and D) are typically not relevant to study endpoints and pose unnecessary privacy risks under HIPAA and GDPR guidelines.
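The linkage described above can be sketched in code. This is a minimal illustration, not an actual ePRO integration; the field names (subject_id, timestamp, visit_date) and the rule of matching on subject plus calendar date are assumptions for the example.

```python
# Sketch: aligning device readings with site-entered visit data using
# subject identifier + date/time as the join keys. All field names
# and matching rules are hypothetical.
from datetime import date, datetime

visit_data = [
    {"subject_id": "1001", "visit": "Week 4", "visit_date": date(2024, 3, 10)},
]
device_data = [
    {"subject_id": "1001", "timestamp": datetime(2024, 3, 10, 8, 30), "steps": 5240},
    {"subject_id": "1001", "timestamp": datetime(2024, 3, 11, 9, 0), "steps": 6100},
]

def link_device_to_visit(visits, readings):
    """Attach each device reading to the visit with the same subject
    and calendar date; readings with no matching visit stay unlinked."""
    linked = []
    for r in readings:
        match = next(
            (v for v in visits
             if v["subject_id"] == r["subject_id"]
             and v["visit_date"] == r["timestamp"].date()),
            None,
        )
        linked.append({**r, "visit": match["visit"] if match else None})
    return linked

for row in link_device_to_visit(visit_data, device_data):
    print(row["subject_id"], row["timestamp"], row["visit"])
```

Without both keys the join is ambiguous: the subject identifier alone cannot place a reading at the right visit, and the timestamp alone cannot place it with the right participant.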

Reference (CCDM-Verified Sources):

SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Integration and eSource Data, Section 5.2 -- Data Alignment and Synchronization Principles

FDA Guidance for Industry: Use of Electronic Health Record Data in Clinical Investigations, Section 4.2 -- Data Linking and Synchronization

ICH E6 (R2) GCP, Section 5.5.3 -- Data Traceability and Integrity


Question 2

If a data manager generated no additional manual queries on data in an EDC system and the data were deemed clean, why might the data appear not to be clean during the next review?



Answer : A

In an Electronic Data Capture (EDC) system, even after a data manager completes all manual queries and marks data as 'clean,' the data may later appear unclean if the site (study coordinator) makes subsequent updates in the system after re-reviewing the source documents.

According to the Good Clinical Data Management Practices (GCDMP, Chapter: Electronic Data Capture Systems), site users maintain the authority to modify data entries as long as the system remains open for data entry. The EDC system audit trail captures such changes, which can automatically invalidate prior data reviews, triggering new discrepancies or changing system edit-check statuses.

This situation commonly occurs when the site identifies corrections in the source (e.g., wrong date or lab result) and updates the EDC form accordingly. These post-cleaning changes require additional review cycles to ensure the database reflects accurate and verified information before final lock.

Options B, C, and D are incorrect --- CRAs and medical monitors cannot directly change EDC data; they can only raise queries or request updates.
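The mechanism can be sketched as follows. This is a simplified model, not an actual EDC implementation; the record structure, the reviewed flag, and the audit-trail shape are assumptions for illustration.

```python
# Sketch: why "clean" data can later appear unclean. A site update
# after review is captured in the audit trail and resets the reviewed
# status, so the record reappears in the next cleaning cycle.
# All field names are hypothetical.
record = {"value": "120", "reviewed": True, "audit_trail": []}

def site_update(rec, new_value, reason):
    """A site correction is logged in the audit trail and
    invalidates the prior review status."""
    rec["audit_trail"].append(
        {"old": rec["value"], "new": new_value, "reason": reason}
    )
    rec["value"] = new_value
    rec["reviewed"] = False  # prior "clean" status no longer applies

site_update(record, "122", "corrected per source document")
print(record["reviewed"])  # False: the record now needs re-review
```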

Reference (CCDM-Verified Sources):

SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture Systems, Section 6.3 -- Post-Cleaning Data Changes and Audit Trails

ICH E6 (R2) GCP, Section 5.5.3 -- Data Integrity and Change Control

FDA 21 CFR Part 11 -- Electronic Records: Change Documentation Requirements


Question 3

Which of the following factors can be tested through a second test transfer?



Answer : B

In the context of database design and external data management, a test data transfer (or trial data load) is performed to ensure the proper configuration, structure, and integrity of data imported from an external vendor or system. The second test transfer is specifically useful to confirm that data structures and formats remain aligned between the sending and receiving systems after adjustments based on the first test have been made.

According to the Good Clinical Data Management Practices (GCDMP), the file format --- including variables, data types, field lengths, delimiters, and encoding --- must be validated during test transfers to confirm compatibility and ensure accurate loading into the target database. Once the initial test identifies and corrects errors (e.g., mismatched variable names or data types), the second transfer verifies that the corrections have been implemented correctly and that the file structure functions as intended.

Testing change management (A) involves procedural controls, not data transfers. The transfer method (C) and transfer frequency (D) are validated during initial process setup, not during subsequent test transfers.

Therefore, option B (File format) is correct, as the second test transfer verifies the technical integrity of the file structure before live production transfers begin.
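The kind of file-format checks a second test transfer repeats can be sketched as below. The expected column layout and the numeric-result rule are hypothetical; in practice they would come from the data transfer agreement.

```python
# Sketch: file-format validation on a test transfer. Checks column
# names, field count, and that the result field parses as numeric.
# The expected layout is a hypothetical example.
csv_header = "SUBJID,LBTEST,LBORRES,LBDTC"
csv_row = "1001,GLUCOSE,5.4,2024-03-10"

expected_columns = ["SUBJID", "LBTEST", "LBORRES", "LBDTC"]

def check_file_format(header_line, data_line, delimiter=","):
    """Return a list of format issues; empty list means the file
    matches the agreed structure."""
    issues = []
    cols = header_line.split(delimiter)
    if cols != expected_columns:
        issues.append(f"column mismatch: {cols}")
    values = data_line.split(delimiter)
    if len(values) != len(expected_columns):
        issues.append("wrong number of fields")
    else:
        try:
            float(values[2])  # LBORRES must be numeric
        except ValueError:
            issues.append(f"non-numeric result: {values[2]}")
    return issues

print(check_file_format(csv_header, csv_row))  # [] when the format matches
```

Running the same checks on the second transfer confirms that corrections from the first test (renamed variables, fixed delimiters, adjusted types) actually took effect.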

Reference (CCDM-Verified Sources):

SCDM Good Clinical Data Management Practices (GCDMP), Chapter: External Data Transfers and Data Integration, Section 5.2 -- Test Transfers and File Validation

FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 -- Data Import and Validation Controls


Question 4

Query rules were tested with test data for each logic condition within each rule. Which of the following types of testing was conducted?



Answer : C

Testing query rules with test data inputs to confirm expected outputs without examining the underlying program logic is an example of black box testing.

According to the GCDMP (Chapter: Data Validation and System Testing), black box testing is a functional testing approach used to verify that the system performs correctly from the end-user's perspective. In this method, testers input various conditions and observe outputs to ensure the system behaves as intended --- for instance, that edit checks trigger correctly when data fall outside predefined limits.

In contrast, white box testing involves examining internal logic, code, and algorithm structures. Because data managers typically validate edit checks through data-driven test cases rather than code inspection, black box testing is the appropriate and industry-standard method. This ensures compliance with validation documentation standards as outlined in FDA 21 CFR Part 11, Section 11.10(a) and ICH E6 (R2) system validation expectations.
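A black box test of a query rule can be sketched as below: one test case per logic condition, judged purely by inputs and outputs, with no inspection of the rule's internals. The rule itself (a systolic blood pressure range of 60-200 mmHg) is a hypothetical example.

```python
# Sketch: black box testing of an edit check. The tester supplies
# inputs covering each logic condition and verifies the outcome
# without examining the rule's code. The range rule is hypothetical.
def sbp_edit_check(value):
    """Fires a query when systolic BP is outside 60-200 mmHg."""
    return not (60 <= value <= 200)

# One test case per logic condition: below range, lower boundary,
# in range, upper boundary, above range.
test_cases = [
    (59, True),    # below range -> query fires
    (60, False),   # lower boundary -> no query
    (120, False),  # in range -> no query
    (200, False),  # upper boundary -> no query
    (201, True),   # above range -> query fires
]

for value, expected in test_cases:
    assert sbp_edit_check(value) == expected, value
print("all edit-check conditions behave as expected")
```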

Reference (CCDM-Verified Sources):

SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Validation and Testing, Section 4.1 -- Testing Approaches (Black Box and White Box)

FDA 21 CFR Part 11 -- System Validation Requirements

ICH E6 (R2) GCP, Section 5.5.3 -- Computerized Systems Validation


Question 5

Which type of edit check would be implemented to check the correctness of data present in a text box?



Answer : C

A front-end check is a real-time validation performed at the point of data entry, typically within an Electronic Data Capture (EDC) system or data entry interface. It is designed to ensure that the data entered in a text box (or any input field) are valid, logically correct, and within expected parameters before the user can proceed or save the record.

According to the Good Clinical Data Management Practices (GCDMP, Chapter on Data Validation and Cleaning), edit checks are essential components of data validation that ensure data accuracy, consistency, and completeness. Front-end checks are implemented within the data collection interface and are triggered immediately when data are entered. They prevent invalid entries (such as letters in numeric fields, out-of-range values, or improper date formats) from being accepted by the system.

Examples of front-end checks include:

Ensuring a numeric field accepts only numbers (e.g., weight cannot include text characters).

Validating that a date is within an allowable range (e.g., not before the subject's date of birth).

Requiring mandatory fields to be completed before moving forward.

This differs from back-end checks or programmed checks, which are typically run later in batch processes to identify data inconsistencies after entry. Manual checks are human-performed reviews, often for context or data that cannot be validated automatically (e.g., narrative assessments).

Front-end edit checks are preferred wherever possible because they prevent errors at the source, reducing the number of downstream data queries and cleaning cycles. They contribute significantly to data quality assurance, regulatory compliance, and efficiency in data management operations.
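The examples above can be sketched as entry-time validators. The specific rules (a 20-300 kg weight range, visit date not before birth date) are illustrative assumptions, not drawn from any particular EDC system.

```python
# Sketch: front-end checks fired at entry time, before a value is
# saved. A non-None return represents the message the user would see;
# the field rules themselves are hypothetical.
from datetime import date

def validate_weight(text):
    """Reject non-numeric or out-of-range weight at entry time."""
    try:
        kg = float(text)
    except ValueError:
        return "Weight must be a number"
    if not (20 <= kg <= 300):
        return "Weight out of expected range (20-300 kg)"
    return None  # passes the front-end check

def validate_visit_date(visit_date, birth_date):
    """Reject a visit date earlier than the date of birth."""
    if visit_date < birth_date:
        return "Visit date cannot precede date of birth"
    return None

print(validate_weight("abc"))    # entry blocked with a message
print(validate_weight("72.5"))   # None: value accepted
print(validate_visit_date(date(2024, 3, 1), date(1980, 5, 2)))  # None
```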

Reference (CCDM-Verified Sources):

Society for Clinical Data Management (SCDM), Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.2 -- Edit Checks and Real-Time Data Validation

FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6 -- Data Entry and Verification Controls

ICH E6 (R2) Good Clinical Practice, Section 5.5 -- Data Handling and Record Integrity

CDISC Operational Data Model (ODM) Specification -- Edit Check Implementation Standards


Question 6

Which is the best way to identify sites with high subject attrition?



Answer : A

The best method to identify sites with high subject attrition is to calculate, by site, the proportion of patients for whom two visit periods have passed without data.

According to the GCDMP (Chapter: Data Quality Assurance and Control), subject attrition is an important performance indicator for data completeness and site compliance. Evaluating missing or delayed data across multiple consecutive visit periods allows for early detection of potential dropouts or site-level operational issues.

By assessing this proportion at the site level, the Data Manager can distinguish between random missing data and systematic site underperformance. Counting or proportioning late visits (options B and C) identifies scheduling delays, not attrition. Looking at missing data without site context (option D) fails to identify site-specific patterns, limiting corrective action.

This metric aligns with risk-based monitoring (RBM) practices recommended by ICH E6 (R2) and FDA RBM Guidance, which promote proactive identification of sites at risk of data loss.
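The metric can be sketched as below. The visit records (True meaning data are present) and the four-visit schedule are hypothetical; the point is that the proportion is computed per site, over subjects with two consecutive missed visit periods.

```python
# Sketch: flagging sites with high attrition as the proportion of
# subjects with two consecutive expected visits missing data.
# The schedule and subject records are hypothetical.
expected_visits = ["V1", "V2", "V3", "V4"]

site_data = {
    "Site 01": {"1001": {"V1": True, "V2": True, "V3": True, "V4": True},
                "1002": {"V1": True, "V2": False, "V3": False, "V4": False}},
    "Site 02": {"2001": {"V1": True, "V2": True, "V3": True, "V4": True},
                "2002": {"V1": True, "V2": True, "V3": True, "V4": False}},
}

def two_consecutive_missing(visits):
    """True if any two consecutive expected visits lack data."""
    flags = [not visits.get(v, False) for v in expected_visits]
    return any(a and b for a, b in zip(flags, flags[1:]))

def attrition_by_site(data):
    """Proportion of subjects per site with two consecutive
    missed visit periods."""
    return {
        site: sum(two_consecutive_missing(v) for v in subjects.values())
              / len(subjects)
        for site, subjects in data.items()
    }

print(attrition_by_site(site_data))  # Site 01 is flagged; Site 02 is not
```

A single missed visit (subject 2002) does not flag the site, which is what separates attrition from ordinary scheduling gaps.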

Reference (CCDM-Verified Sources):

SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 5.4 -- Site Performance Metrics

ICH E6 (R2) Good Clinical Practice, Section 5.18 -- Monitoring and Site Performance Evaluation

FDA Guidance for Industry: Oversight of Clinical Investigations -- Risk-Based Monitoring, Section 6 -- Site Performance Metrics


Question 7

In a study, data are key entered by one person after which a second person enters the data without knowledge of or seeing the values entered by the first. The second person is notified during entry if an entered value differs from first entry and the second person's decision is retained as the correct value. Which type of entry is being used?



Answer : A

The described process is blind verification, a form of double data entry in which two independent operators enter the same data and the second operator is blinded to the first entry to avoid bias. When the entries differ, the system flags the discrepancy during entry, and the second operator's decision is retained as the correct value.

According to GCDMP (Chapter: Data Entry and Data Tracking), blind double data entry is used primarily in paper-based studies to minimize transcription errors and ensure data accuracy.

Single entry (D): Only one operator enters data.

Manual review (B): Involves post-entry checking, not during entry.

Third-party compare (C): Used for reconciling external data sources, not CRF data.

Hence, option A (Blind verification) is the correct and CCDM-defined process.
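The workflow can be sketched as below: two independent entries, a field-by-field comparison, and retention of the second operator's value on any discrepancy. The CRF fields and values are hypothetical.

```python
# Sketch: blind double data entry. The second operator never sees the
# first entry; on mismatch the system flags the field and retains the
# second operator's value. Data are hypothetical.
first_entry = {"weight": "72", "sbp": "120", "pulse": "68"}
second_entry = {"weight": "72", "sbp": "121", "pulse": "68"}

def blind_verify(first, second):
    """Compare the two independent entries field by field; the second
    entry is retained, and discrepancies are logged for review."""
    final, discrepancies = {}, []
    for field in first:
        if first[field] != second[field]:
            discrepancies.append((field, first[field], second[field]))
        final[field] = second[field]  # second person's decision retained
    return final, discrepancies

final, flagged = blind_verify(first_entry, second_entry)
print(final["sbp"])  # the second entry is kept
print(flagged)       # the sbp discrepancy is logged
```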

Reference (CCDM-Verified Sources):

SCDM GCDMP, Chapter: Data Entry and Data Tracking, Section 5.1 -- Double Data Entry and Verification Methods

ICH E6(R2) GCP, Section 5.5.3 -- Data Entry and Verification Controls

FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.2 -- Data Accuracy and Verification

