Auditing the Lab
The sponsor is ultimately responsible for data coming from a laboratory. When central labs are used, companies will typically audit the lab or will refer to a past, recent audit. For the most part, this is a cursory audit since large central laboratories are audited constantly and are likely to have reasonable practices and be in compliance with regulations. When a small laboratory or investigator site is needed for a study, the sponsor must audit much more carefully and pay particular attention to the computer systems that will be used for collection and storage of the data. There must be assurance that the data is reliable and reflects the actual results.

In a very common scenario, the lab runs a specialized assay on a sample using some equipment they own or they follow an analysis procedure they have developed. The equipment is likely to be validated and reliable and it will often print out or display a result. These results may be taped into a lab notebook or filed in some other
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
of studies:
• Pathogen identification in studies of infections
• Interactive voice response system (IVRS) data for randomized trials
• Pharmacokinetic (PK) data in early phase trials
• Data from electronic patient-reported outcome devices

These types of data are critical to the analysis of the study, and just like CRF data, they must be accurate and complete, and the sponsor must be able to show that steps have been taken to ensure the integrity of the data. In this chapter we will discuss how non-CRF data is received and stored in compliance with regulations and how it is cleaned to show evidence of completeness and quality.

RECEIVING ELECTRONIC FILES FROM A VENDOR

Clinical data in electronic format is subject to 21 CFR (Code of Federal Regulations) Part 11 requirements since it will likely be used as part of a submission to the Food and Drug Administration (FDA). Sponsor and contract research organization (CRO) computer systems used for clinical data management must meet the requirements of the rule, and this is also true of any laboratory or vendor providing data associated with a clinical trial. In Chapter 9 we saw the danger in small independent labs that may not be using Part 11–compliant systems, but even when the lab or vendor systems are compliant, integrity and security must be maintained when the data is transferred to the sponsor or CRO and then moved through the data management processes for the trial.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Transferring Files
Clinical data should not be sent by email without additional security. The federal regulation 21 CFR Part 11 considers email to be part of an open system and advises additional security such as encryption and password protection for data sent by email. The rule in Section 11.30 says, “Persons who use open systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, as appropriate, the confidentiality of electronic records from the point of their creation to the point of their receipt” (emphasis added). At a minimum, the sender should use a zip utility and password-encrypt the file to prevent unauthorized decompression. The password should never be sent in the same email as the file; ideally, it is agreed upon prior to the transfer or set separately for each transfer and communicated by phone. Other secure methods of transmitting clinical data include sending CDs by tracked carrier, using secure drop boxes reached electronically, and giving the vendor access to the appropriate secure closed networks of the sponsor or CRO.

Once the data is received, it must also be stored securely in such a way as to be able to provide evidence that the data received from the vendor was not purposely or inadvertently modified without audit trail. Many companies load vendor data into a clinical data management (CDM) database or other data warehouse to provide such assurance. Other options include creating secure read-only areas on servers to provide the gold copy of the data. (Gold copy or golden master refers to the original release or shipped copy of software or data.) Any later use of the data could then be compared to the original to show that it had not been altered.
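Comparing later use of the data to the gold copy is straightforward to automate with file checksums. The following is a minimal sketch rather than a method prescribed by the text; it assumes local file paths and SHA-256, and the function names are invented for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_gold_copy(gold_copy: Path, working_copy: Path) -> bool:
    """Return True if the working copy is byte-identical to the gold copy."""
    return sha256_of(gold_copy) == sha256_of(working_copy)
```

Recording the digest of the gold copy at receipt and recomputing it before each later use provides the comparison evidence described above.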
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Formatting the Data
We talk about sending data from the vendor to the sponsor or CRO as an electronic file—but what kind of file is it? Is it Microsoft Word or Excel? Is it an SAS® transfer file? Is it a simple ASCII, comma-delimited format? What data is in the file? How are the subjects and samples identified? Because the vendor needs to know what to send and the receiver needs to know what is coming in, it has become industry standard practice to establish file transfer agreements. These agreements specify the format and content of a transfer and usually also identify the frequency and method of transfer. Both the vendor and the receiver should approve the document.
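A file transfer agreement lends itself to a mechanical check on every receipt. Below is a minimal sketch assuming a comma-delimited file; the agreement excerpt and the column names are invented for illustration:

```python
import csv

# Hypothetical excerpt of a file transfer agreement: the file format,
# delimiter, and the exact header expected in every transfer.
AGREEMENT = {
    "delimiter": ",",
    "columns": ["PROTOCOL", "SITE", "SUBJECT", "VISIT", "TEST", "RESULT", "UNITS"],
}

def check_transfer_header(path: str) -> list[str]:
    """Return a list of problems; an empty list means the header conforms."""
    with open(path, newline="") as f:
        header = next(csv.reader(f, delimiter=AGREEMENT["delimiter"]), None)
    if header is None:
        return ["file is empty"]
    if header != AGREEMENT["columns"]:
        return [f"header {header} does not match agreement {AGREEMENT['columns']}"]
    return []
```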
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Loading Data
Loading non-CRF data into a central CDM or warehouse database is done either through programs written specifically for each study or by configuring a utility within the system. As noted in previous chapters, whether users write a program or configure an application, if it affects clinical data, it should be subject to a validation process. In addition, whenever clinical data is copied or transferred, it is subject to 21 CFR Part 11, and loading would be considered a copy.

The validation process for any application starts with a specification. A mapping of the layout of the electronic file as described in the file transfer agreement to the database storage structures provides the basis of the specification. The specification will also have to address data issues as described in the following text. The validation continues with the program being written according to good company practices or the application being configured according to guidelines and manuals. Documented testing, release, and specific user instructions round out
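Because the specification is a mapping from the file layout to the database storage structures, it can be written down as data and tested directly. A sketch under assumed names follows; the vendor columns and target fields are hypothetical:

```python
# Hypothetical mapping specification: transfer-file column -> (target
# database field, conversion function). Names are invented for illustration.
MAPPING = {
    "SUBJECT": ("subject_id", str),
    "TEST": ("lab_test_code", str),
    "RESULT": ("result_value", float),
    "UNITS": ("result_units", str),
}

def map_record(raw: dict[str, str]) -> dict[str, object]:
    """Apply the mapping specification to one record from the transfer file."""
    loaded = {}
    for source_col, (target_field, convert) in MAPPING.items():
        try:
            loaded[target_field] = convert(raw[source_col])
        except (KeyError, ValueError) as exc:
            # Loading is a regulated copy, so never alter or drop data
            # silently; surface the problem for the error log instead.
            raise ValueError(f"cannot load column {source_col!r}: {exc}") from exc
    return loaded
```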
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Non-CRF Data
and the question will be whether the data was not transferred or never available. So, the one field that should appear on the CRF to match lab data is an indicator to be marked if for some reason the sample was not done or is in some way not analyzable. Companies that drop those fields in large Phase III trials with a significant amount of external data will find a very large number of reconciliation queries to the site when there are missing records in the vendor data but no way to know if the vendor missed something or the site did not perform that test.

As the previous example implies, any questions about differences in expected data and received data could be due either to problems at the vendor or to problems at the site. Typically, inquiries to the vendor are informal, such as email or shared spreadsheets to track discrepancies. If the vendor confirms everything is correct to their knowledge, then the site must be queried via creation of a manual query (see Chapter 8).

QUALITY ASSURANCE FOR EXTERNAL DATA

Data from a vendor must be transferred and stored according to the requirements of 21 CFR Part 11. If a company does nothing else with electronic data received from a vendor, it must still ensure the integrity of the data received. This requirement cannot be overstated and should never be overlooked. The completeness of the data might be considered part of data integrity, but the steps to ensure it are also steps that provide confidence in the data’s quality. Data reconciliation as previously described is used to ensure that the company receives all the expected data and also no extra unexpected records. Data reconciliation against fields on the CRF or eCRF provides confidence that the data reported for a given subject and sample or reading is in fact the right data for that subject; that is, no samples have been inappropriately assigned to another subject.
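This record-level reconciliation reduces to set comparisons over record keys. A minimal sketch, assuming records are keyed by (subject, visit, test) tuples and that the CRF carries the “not done” indicator discussed above:

```python
def reconcile(expected: set[tuple], not_done: set[tuple],
              received: set[tuple]) -> dict[str, set[tuple]]:
    """Compare expected CRF records against records received from the vendor.

    Keys are (subject, visit, test) tuples; 'not_done' holds records the
    site flagged as not done or not analyzable on the CRF.
    """
    return {
        # Expected and not flagged "not done", yet absent from the vendor
        # file: ask the vendor, or query the site.
        "missing": expected - not_done - received,
        # Present in the vendor file even though the site said "not done".
        "unexpected": received & not_done,
        # Vendor records with no matching CRF entry at all.
        "extra": received - expected,
    }
```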
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
SOPs FOR NON-CRF DATA
Because transfer and copying of clinical data has to meet 21 CFR Part 11, a standard operating procedure (SOP) should be in place to show that procedures used for receiving and loading electronic files from external vendors are in compliance with the rule. This SOP should either require transfer via a closed system or require that extra security is in place if the open Internet or email is used. That electronic file transfer SOP, or a separate SOP on data loading, should also require a transfer specification for every type of data and vendor. The transfer specification will act as the specification against which test transfers and loads will take place. Initial configuration and programming of loading programs must be tested to show that the data is not altered during the load (copy). It is also wise to require some kind of review or check of an error log for every transfer, even when the receipt of data becomes routine during the conduct of the study, because nearly all companies have had the experience of receiving files from a vendor without problems for a period of time and then having the vendor inexplicably change the data format. Finally, a study cannot be locked (see Chapter 13) until all the external data has been processed and reconciled.

WHEN NON-CRF DATA IS OUTSIDE DATA MANAGEMENT

At some companies, non-CRF data becomes the responsibility of groups outside of data management. For example, if the external data is being sent as SAS datasets, the results may go directly to the SAS programmers. If this is the case, the data management plan or other company data handling agreement should make clear who is responsible for the reconciliation of the data against CRF data and how any discrepancies found in that reconciliation are to be handled. Even though data management is responsible for the completeness and accuracy of the data, this situation may make it impossible for data management to carry out tasks because they do not have access to SAS or SAS dataset storage folders. There is a danger that no accounting and reconciliation happens in this case. There is another danger that other groups may not be as aware as data management of 21 CFR Part 11 requirements for transfer and storage. To avoid data problems, data management must take the lead in bringing these matters up to the study team for the good of the trials.

Collecting Adverse Event Data
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Adverse Event Forms
AE forms will collect several kinds of data, including the following:
• The text used to describe the event
• Start dates (and possibly times)
• Stop dates (and possibly times) or an indicator that the event is continuing
• A variety of indicators including severity, relationship to drug, outcome, and action taken
• Additional comments or additional treatment or medication information

It is very common to have the AE form include a question that asks: Is this a serious adverse event? This is the trigger that lets clinical data management know that SAE reconciliation, as described in the following text, will be required. The example AE form in Figure 11.1 shows typical fields. For all studies, the investigator will ask the subject about any adverse events since the last visit or check point. For some studies, companies will transcribe these events

[Figure 11.1, an example AE CRF page. Header fields: Protocol N013_06, Site, Subject, Page 12. Question 1: “Were any AEs reported from the time of study drug administration to final discharge from the study? Yes/No”]
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
if SAE
1 □ 2 □ 3 □ Relationship to Study Drug 1 – Probably/Possibly Related 2 – Not Related
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Intensity
1 – Mild 2 – Moderate 3 – Severe Action Taken with Study
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Med
1 – None 2 – Interrupted 3 – Discontinued
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
See Chapter 11, “Collecting Adverse Event Data,” for further discussion of SAEs and the reconciliation process.

Another common source of duplicate storage occurs when a trial uses an interactive voice response system (IVRS) to randomize the subject or assign a treatment via a kit number. The IVRS will store responses and the assigned treatment group in its database system, and the CRF or eCRF design may also require that the assignment be recorded so it is also stored in the clinical database. Reconciliation between the CRF and the IVR system is a very good idea—in a trial of any size, expect that some of the information will not match. In fact, it is very important to know if the IVRS assigned one kit number but the site provided a subject with a different kit number (for whatever reason).

Companies have also found themselves reconciling against paper in cases where sites are asked to provide additional information to the medical monitor for adverse events being reported more frequently than expected. That is, when the medical
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
SOPs FOR AE DATA
Good process for managing safety data cannot be emphasized enough. Normal data management SOPs will cover most activities except SAE reconciliation and AE/SAE coding. SAE reconciliation involves the safety group, the medical monitor, and clinical operations in addition to data management, so SAE reconciliation SOPs should be coordinated and signed off by every group involved in the process. The procedures should clearly spell out responsibilities for the steps in SAE reconciliation, including providing database listings, safety systems listings, discrepancy reports, site queries, and data updates to both systems.

Traditionally, reconciliation has only been done after all the data has been collected, before study lock. As we see a higher expectation that companies be aware of possible safety problems with their treatments, most companies are going toward more frequent, even monthly, SAE reconciliation for Phase II and III studies, with final reconciliation again at study lock. The sheer volume of SAE reports from a Phase III study with seriously ill subjects will necessitate processing them throughout the course of a trial in order to keep up, but a Phase I trial may have no SAEs at all. No matter when reconciliation takes place, evidence of the reconciliation must be in the data management or clinical files. Ideally, a medical monitor signs off on the final reconciliation prior to lock, if not the intermediate ones. The data management plan is often the place where the frequency of reconciliation is recorded for a given study.

The SOPs and guidelines governing coding will be presented in Chapter 26. If coding is performed by different groups in the two systems (clinical and safety), then it might be necessary to have two SOPs.

IMPACT OF AEs ON DATA MANAGEMENT

While adverse event data is, in many ways, like any other data collected during a clinical trial, it is critical to the evaluation of the safety and efficacy of the treatment. In particular, adverse events add coding of reported terms and reconciling of serious adverse events to the data management process of a study. Both of these tasks tend to be particularly active as the close of the study nears. Reconciling, in particular, may not be possible earlier in the course of the study, and even if performed earlier, will have to be repeated at the end of the study. The effect, then, of adverse event data as a whole can be to impact the close of a study. Data managers may not be able to change this, but they can be aware of it and plan for it. When sign-off on coding and SAE reconciliation is required to lock a study, data management must notify the medical monitor or responsible party that his or her attention will be required in order to avoid surprises and/or delays in lock.

Creating Reports and Transferring Data
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
DATA TRANSFERS
Transfers of data are different from reports in that the data is copied and sent elsewhere (either within or external to the company) to be analyzed, reviewed, and reported on. Transfers of data nearly always involve or include some safety and efficacy data. Because of this, data transfers should be guided by the requirements of 21 CFR (Code of Federal Regulations) Part 11 for validation of software and for verified copies.

There are two elements to ensuring accurate transfers. The first is to create the extract program that pulls the data out of whatever database it resides in and puts it, perhaps with some reformatting, into a target file. This extract program or script must be validated to show that it works properly and that it creates an accurate copy during testing. The second element of transfer is the sending of the resulting file or files. The copy of the data is made into a target file or files that are then transmitted to some receiver. In addition to requiring evidence that the copy is accurate, 21 CFR Part 11 requires extra security if the data goes into an open system such as the Internet via email. (See Chapter 10 for further discussion of sending and receiving data by electronic files.) Two techniques for ensuring accurate copies and secure transfers are the use of transfer checklists and the creation of transfer metrics.
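One simple piece of evidence that the extract program created an accurate copy is a record-count comparison between source and target. A sketch with illustrative names, assuming the target is a comma-delimited file with a header row:

```python
import csv

def count_target_records(path: str) -> int:
    """Count the data rows (excluding the header) in the extracted file."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f)) - 1

def verify_extract(source_record_count: int, target_path: str) -> None:
    """Fail loudly if the target file does not contain every source record."""
    target_count = count_target_records(target_path)
    if target_count != source_record_count:
        raise RuntimeError(
            f"extract mismatch: {source_record_count} records in source, "
            f"{target_count} in {target_path}"
        )
```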
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Transfer Checklists
Even if the transfer is a one-time occurrence, a checklist of the steps needed to produce it helps assure that nothing is overlooked. If the transfer is repeated during the course of a study or studies, the checklist is essential to assure consistency and completeness at each transfer. The checklist should be created before the very first transfer, even if all the steps are not known until a few test runs have been completed: the act of documenting before doing helps point out undefined areas and allows cross-checks to be built in from the start.

Figure 12.1 illustrates the steps that might be in a checklist for transfer from a computer to a CD. Because a CD is used in the example, the compressed files are not password protected. Files sent via email would need to be protected or encrypted. In this particular checklist, the person overseeing the transfer manually creates a transfer file and copies in the data metrics (see text that follows). A much better approach would be to have the extraction programs or scripts create all or nearly all of the information required for the transfer.

Just creating the checklist as a tip sheet is usually not enough to make sure all the steps are followed. Even the most conscientious data manager may overlook a step. Guidelines that require printing out the checklist and initialing each step on completion usually help ensure that all steps are followed. The checklist provides excellent documentation for the transfer and can be filed with a copy of the transferred data.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Transfer Metrics
Transfer metrics are numbers that help verify that the data was completely extracted to the transfer file(s). Exactly which metrics will be useful for any given transfer will depend on the format and number of the transfer files and on the type of data being transferred. Some of the common transfer metrics include:
• Number of files
• File sizes
• Number of subjects per file
• Number of records per subject
• Number of records per table or file
Some companies also create a checksum (a number generated by an algorithm that is unique to the contents of the file) for each file. Checksums can help detect corruption of the contents of the file. Data managers review these metrics once the transfer files have been created to get a sense of whether the transfer program put the correct data in the file. This is useful even if the data manager does not know exactly how many subjects or records to
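Most of these metrics can be produced by the extraction script itself rather than compiled by hand. A minimal sketch for delimited files follows; the SUBJECT column name is an assumption:

```python
import csv
import hashlib
import os
from collections import Counter

def transfer_metrics(paths: list[str]) -> dict:
    """Compute common transfer metrics for a set of delimited files."""
    metrics = {"number_of_files": len(paths), "files": {}}
    for path in paths:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        per_subject = Counter(row["SUBJECT"] for row in rows)  # assumed column
        with open(path, "rb") as f:
            checksum = hashlib.sha256(f.read()).hexdigest()
        metrics["files"][path] = {
            "file_size_bytes": os.path.getsize(path),
            "number_of_records": len(rows),
            "number_of_subjects": len(per_subject),
            "records_per_subject": dict(per_subject),
            "sha256": checksum,
        }
    return metrics
```

Generating these numbers at extraction time, and again on receipt, gives both sides the same figures to compare.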
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Study Closeout
As the date for the last subject visit—most commonly referred to as last patient out (LPO) or last patient last visit (LPLV)—approaches, study closeout activities begin. Study closeout includes final cleaning and review of data. When the data is deemed clean and complete enough for analysis, the database records are locked against any changes. Database lock is the trigger for the data to be unblinded and extracted for analysis. After the database lock, additional activities complete the study closeout. For electronic data capture (EDC) studies, one of the most important activities is to create and distribute copies of the electronic case report forms (eCRFs) to the sites and to prepare a version for inclusion in the trial master file.

Study Database Lock

As the last subjects near their final visit, the race to close and lock the study begins. Locking means that no data will be changed; a locked database defines the point at which final analysis can start and conclusions can be drawn. Because there is usually high pressure to make those analyses (and related decisions) as soon as possible, companies frequently keep track of the time to database lock as a corporate metric and work constantly to minimize that time. The pressure to quickly lock a database for analysis comes up against a long list of time-consuming tasks that need to be performed first. The list includes many individual steps: collecting the final data, resolving outstanding queries, and performing final quality control checks. In this chapter we will look at the most common steps performed in preparation for study database lock in both paper-based and electronic data capture (EDC) studies and address some ways in which the time to study lock can be reduced. The next chapter discusses activities that happen after the database is locked and touches on what needs to be done if a data change is needed after lock.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
FINAL DATA
Before a study can be locked, all the clinical data generated by the study must be present. The data, first and foremost, is the original data from the subject reported on case report forms (CRFs) or through electronic CRFs (eCRFs), but there is other data as well: corrections from the sites, calculated values, codes for reported terms, data from central labs, and any other clinical data from external reading centers. Any of this final data may generate discrepancies that will require resolution before study lock.

To account for all the original data, data management uses tracking information to ensure that all expected CRF or eCRF data has been received; there should be no missing pages in a paper study or empty forms in an eCRF study. In addition, the data manager or lab data administrator checks that all central laboratory data was received and that any other electronic loads are complete. Once in the central database, this data will go through the cleaning process, which may generate discrepancies. (See also Chapter 10, “Non-CRF Data.”)

As the final data comes in, the final calculated values also must be derived. Discrepancies raised by calculated values are usually traced back to problems with the reported data and may have to go back to the site. All reported terms (such as adverse events and medications) must be coded, and any changes to terms that come in as corrections must also be rerun through the coding process. When a term cannot be coded, a query may have to be sent to the site, but close to study lock, some companies will permit a medical monitor or clinical research associate (CRA) to make limited corrections to reported terms to allow them to be coded. Just to be sure everything is in a final, coded state, many companies rerun coding over the entire set to catch cases where the assigned code changed due to a change in the dictionary or synonyms table and cases where the term was changed but the code did not receive an update (see Chapter 26 for more information on the coding process).
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
FINAL QUERIES
Resolutions for new discrepancies identified as final data is collected, as well as for those queries and discrepancies still outstanding from earlier in the study, are also required for completeness of the data. Generally, all outstanding queries must have a resolution before a study can be locked—even if the resolution indicates that a value is not and never will be available. Getting these last resolutions and the required investigator signatures from the site can hold up the entire closure process, so CRAs frequently get involved in calling or visiting the sites to speed corrections. Because of the difficulties and time pressures at the end of the study, companies may choose not to pursue noncritical values at this stage of the data handling. Ideally, the list of critical values will have been identified at the start of the study in the protocol or data management plan and can be referred to when faced with getting a resolution from an uncooperative site right before study lock.

Some companies call the point at which the last CRF data comes in from the site soft lock or freeze, but most companies wait until the last query resolution is in to declare a soft lock. In either case, this is the point at which the real work of assuring quality begins. The data is not locked yet because there may still be changes that come out of the quality activities, such as database audits for paper studies or final data review for any study, but the number of changes is expected to be small. The data is in a near-final state.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
FINAL QUALITY CONTROL
The quality of the data will affect the quality of the analyses performed on the data. At the close of the study, there is a particularly strong emphasis on checking the quality of the data that is about to be handed over to a biostatistics group. Because there is, or should be, a high barrier to getting a study unlocked, it is worth making an effort to check the data thoroughly. All kinds of review of the data help provide assurance as to its quality and correctness, but study closure checklists frequently include these specific kinds of checks:
• Audits of the database
• Summary reviews of the data
• Reconciliation against other systems
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Database Audits
Data transcribed from a paper CRF or other source into the database is usually checked for accuracy through a database audit. Data managers compare data in the
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Summary Review
There are certain kinds of cleaning or discrepancy checks that are better performed near the close of a study when the data is more complete. These include listing reviews, summary reports, and simple analyses of the data as a whole. The goal of these reviews is to detect unusual values that stand out in the context of a full set of data but that might otherwise pass cleaning rules or other discrepancy identification methods.

A listing review of text fields is a good example of how trained humans pick up inconsistencies that cannot be programmed into edit checks. In paper studies, data managers may review listings of text fields to check for nonsensical words that are introduced because entry operators are focusing on what they see rather than the meaning of a phrase. For both paper and EDC studies, a separate listing review by CRAs is often required for study lock. The CRAs may notice nonsensical phrases, but more importantly, they may find problems with protocol compliance. For example, they may review medications and find some listed that are not permitted by the protocol. Or, they may find medications listed in the medical history section. They may even find serious safety problems listed in comments associated with lab results or in adverse event reports. Humans are very good at detecting patterns or unusual values.

Listing reviews of numeric values may also work for smaller studies to detect unusual values or outliers. For large studies, summary reports created from ad hoc queries or simple statistics performed on the data can identify unusual patterns or outliers by looking at the following:
• Number of records or values per subject
• Highest, lowest, and mean for numeric values
• Distribution of values for coded fields (e.g., how many of each code)
• Amount of missing data

These summary reviews can be run by data management staff, but in some companies, clinical programmers will look at the data using SAS®. Graphs of lab and efficacy data or other simple displays or analyses can also identify possible problems with units, decimal places, and different methods of data collection that might not otherwise be caught by simple cleaning checks. These graphs and listings will probably come out of the programming or statistical group. In the end, the best review of the data is to run the planned analysis programs on the data even before it is locked. The goal is to have no surprises when the final programs run!
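These summary reviews are easy to script. The sketch below uses pandas rather than the SAS programs mentioned above, with invented column names, purely as an illustration of the four checks in the list:

```python
import pandas as pd

def summary_review(df: pd.DataFrame, numeric_cols: list[str],
                   coded_cols: list[str]) -> None:
    """Print simple whole-study summaries used to spot outliers and oddities."""
    # Number of records per subject (assumes a SUBJECT column).
    print(df.groupby("SUBJECT").size().describe())
    # Highest, lowest, and mean for numeric values.
    print(df[numeric_cols].agg(["min", "max", "mean"]))
    # Distribution of values for coded fields (how many of each code).
    for col in coded_cols:
        print(df[col].value_counts(dropna=False))
    # Amount of missing data per column.
    print(df.isna().sum())
```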
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Reconciling
In the best case, clinical data is stored in a single location and extracted for review or analysis from one location. However, in the setting of a drug or device trial, it is not unusual for some data to be stored in more than one location—and for very good reasons. When this is true, reconciliation may be necessary to assure consistency between the systems. The most common reconciliation with external systems is for serious adverse events (SAEs). Data on SAEs is typically stored in both the clinical data management system and also in a separate SAE system. When reconciling at study close, data management staff look for the following (a sketch of these checks appears after the list):
• Cases found in the SAE system but not in the clinical data management (CDM) system
• Events found in the CDM system but not in the SAE system
• Deaths reported in one but not the other—perhaps because of updates to the SAE report
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
• Instances where the basic data matched up but where there are differences, such as in onset date
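Each of these checks is a comparison between the case lists of the two systems. A minimal sketch, assuming each system can export cases keyed by (subject, event term) tuples with a death flag; the field names are illustrative:

```python
def reconcile_saes(cdm_cases: dict[tuple, dict],
                   sae_cases: dict[tuple, dict]) -> dict[str, list]:
    """Compare SAE cases between the CDM system and the safety system.

    Keys are (subject, event_term) tuples; each value is a dict of case
    fields such as a 'death' flag.
    """
    cdm_keys, sae_keys = set(cdm_cases), set(sae_cases)
    findings = {
        "in_safety_system_only": sorted(sae_keys - cdm_keys),
        "in_cdm_system_only": sorted(cdm_keys - sae_keys),
        "death_mismatch": [],
    }
    for key in cdm_keys & sae_keys:
        if cdm_cases[key].get("death") != sae_cases[key].get("death"):
            findings["death_mismatch"].append(key)
    return findings
```

The same structure extends to other matched fields, such as onset date, by adding further mismatch checks over the shared keys.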
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
FINAL STEPS FOR EDC
In a paper study, a clinical research associate (CRA) visits the site, checks the paper CRF against source documents, and then gets the separated pages to the data entry center. So, by definition, if the data is in the database, it has been monitored. In an EDC trial, the data is in the database right away, but it helps to know whether or not it has been monitored in order to judge how accurate it is. Most EDC systems build in a feature that allows the CRA to mark data that has undergone source document verification (SDV). Before database lock, all the data listed in the monitoring plan as requiring SDV must be marked as having undergone SDV. The CRAs can keep an eye on this throughout the study, but a final check prior to lock is still required.

All EDC systems require that the investigators sign for the data according to 21 CFR (Code of Federal Regulations) Part 11 compliant features. The study is not final until the principal investigator has signed for all of the data. It is important to understand that when a discrepancy is discovered during study closeout activities, and it is added as a query, a change to the data by the site will “break” the investigator signature. (In paper studies, the investigator just signs the query form to indicate knowledge of the data change.) The investigator has to re-sign. Because of this, all EDC studies need a check for investigator signature prior to locking. (See Chapter 8 for more information on principal investigator signatures in EDC studies.)

USING A CHECKLIST TO LOCK A STUDY

The final data collection and final cleaning steps are all critical to ensuring the quality of the data. A checklist of procedures that must be completed prior to database lock is a standard tool in data management organizations. Generic checklists can be easily applied across studies and even across companies (see Figures 13.1 and 13.2 for examples); the more specific the list, the more valuable it is. If the database system or integrations with the database impose a certain order on the closeout procedures, reflect that in the database lock checklist. Such a checklist can be included as an attachment or appendix to the standard operating procedure (SOP) for study lock. Consider allowing the checklist to be modified to include study-specific elements such as sample data for a specialized substudy.

Checklists work better when they require action by the user; just printing a list of steps to be completed prior to lock is not as effective as requiring a data manager to initial and date each step. (The date also provides useful information on the time required to complete each step, which can be used in planning future locks.) Most
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
All CRFs received
All CRFs entered and verified
All external data received
All external data reconciled
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
All data coded
Coding reviewed and approved
SAEs reconciled and approved
All queries resolved or closed
Site permissions set to read only
Approval to lock obtained
All records marked as locked
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Initial and Date
Sites have filled in all necessary forms
SDV marked as per study requirements
All PI signatures present
All external data received
All external data reconciled
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Request site PDFs^a
^a See Chapter 14 for discussion of postlock activities for EDC.

FIGURE 13.2 An example of a generic CDM lock checklist for an EDC study. A study-specific version of a lock checklist would list all the sources of external data individually or add steps to disable specific integrations being used to transfer data.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
COMPLETE STUDY FILES
The lock of a database is such an important milestone in a study that staff will push hard to meet the deadline and then breathe a sigh of relief because the bulk of the work is done. Before everyone moves on to new projects and forgets details, data managers should allocate time to make sure the study documentation is complete, submit items to the trial master file, and record feedback from the study.

Most data management groups try to keep the data management study file reasonably current, but it is a very good idea to have a study file audit shortly after lock. The lead data manager should check files to ensure that all required materials and documents are present. Documents should be the most recent version (or all versions, if required by SOP) and have current signatures if required. If a list of who worked on the study and what they did is being stored, it should reflect the final status. This is also a good time to add notes to file to record any unusual circumstances or data issues related to the study.

Some of the documents created as part of clinical data management need to be submitted to the trial master file (TMF).* For example, the data management plan is now a standard element in the TMF and many companies consider study validation documents to be good clinical practice (GCP) records (see also Chapter 5). Other documents that support clinical data management (CDM) activities, such as final tracking reports, may be filed in offsite or online storage for some number of years without attempting to permanently archive them. A postlock step to complete filing will ensure that the documents are available later should they be needed.

* The trial master file is a file maintained by the sponsor that contains the essential documents associated with the trial. Many of the required documents are specifically called out in ICH E6 (GCP) Section 8, but others, such as the data management plan, are more of an industry standard.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
ASSESS STUDY CONDUCT
Very few companies schedule feedback meetings after a study lock. Unfortunately, people forget details and problems pretty quickly as they move on to new studies, and so these problems are repeated in future studies. Right after lock is a great time to review the CRF or eCRF for fields that caused an unusual amount of trouble. Those fields or modules should be modified if possible, not just reused for the next study. Similarly, edit checks that did not provide the results expected should be examined. This is also a good time to review the metrics from the study such as:
• Total number of manual discrepancies
• The percentage of discrepancies resolved in-house for paper studies
• The top ten types of queries
• Average time to resolve queries
• Time from last query received to study lock
The more information a data management group has from a past study, the more accurately the forecasts and estimates for the next study will be.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
SITE ECRF COPIES
Section 4 of ICH E6 GCP requires that sites retain a permanent copy of subject records for at least two years after the last marketing application or the discontinuation of clinical development. EDC applications allow sites access to eCRF data until study closeout, but at some point the application needs to be shut down or be moved off a production server. At that point, the site will need access to archival copies of the eCRF data. Some companies print copies of eCRFs populated with subject data, but the more current process involves creating PDF files with audit trail records and sending those to sites on CDs. Sites are instructed to check that the disc is correct and readable and to file that disc per site standard operating and archiving procedures. Sites should be asked to return a confirmation form so that the sponsor has a record that its obligations have been met.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
UNLOCKING
Once the study is locked and analysis begins, it is not uncommon to find problems with the data that require corrections. (This is particularly true if final quality control does not include the summary reviews of the data or draft runs of the analysis programs as we discussed in the previous chapter.) New information regarding adverse event data found during site closeout or site audits may also require updates or additions to the data. Because database lock triggers a cascade of other activities, unlocking the database has a serious impact in that many or all of the postlock activities will have to be rerun. Also, while the Food and Drug Administration (FDA) seems to accept an unlock here or there as being normal, if multiple unlocks of a critical study come to their attention, they will question the quality of the data. Unlocking, then, is a serious matter that should first be avoided by good locking practices; if unavoidable, the unlock must be approved at a high level and conducted with care.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Avoiding Unlocks
If, after a lock, data analysis or site closeout procedures identify data that is incorrect, it is not always necessary to unlock the database and make the correction there.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
QUALITY ASSURANCE
Study file audits after study lock are a quality process that ensures all the required documents are present and filed with the TMF or in other central records storage. Study feedback sessions close the loop and provide feedback that will actually enhance quality of trial conduct over time. Good lock practices will reduce the need to unlock study databases and so avoid the difficulties and resources associated with the unlock process.

SOPs FOR STUDY DATABASE UNLOCK

As noted in the previous chapter on lock, the procedures for unlocking a study may be combined with lock procedures in a single standard operating procedure (SOP) or may be split into a stand-alone SOP. Standard procedures for unlock should require a high-level approval prior to opening the database to changes. Those procedures should also specify that the unlock form, the relock form, and all evidence to show that the unlock updates were appropriately limited should be filed in the data management study files.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
AVOID UNLOCKS
Unlocks should be few and far between. Good lock practices are needed to keep the number of unlocks to a minimum—but those practices should be put into place not just in clinical data management but also in clinical operations and biostatistics. Clinical data management focuses efforts on completeness and quality of the data, but that is not enough. Good practice and training in clinical operations will help ensure that monitoring is thorough to avoid missed safety or medication data. Biostatistics plays a role by being involved in defining required data cleaning and quality checks and by reviewing data using planned analysis programs prior to lock.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
SOPs ON SOPs
Yes, there should be an SOP on SOPs! This procedure is usually developed at the corporate level and typically would contain references to the required sections of an SOP or point to a company template. Ideally, the SOP on SOPs would also give some guidance as to what processes are documented in SOPs as opposed to in guidelines or work instructions. Many SOPs on SOPs do not include the process for approval and how to determine the needed signatures. This is unfortunate as it can result in inconsistencies on approvals for SOPs developed by different groups. There will always be cases where the process cannot be or was not followed, so the SOP on SOPs must also include a process to follow for deviations and prospective waivers. Finally, the SOP should require review of each SOP within a period of time (typically on the order of two years) from the time it becomes active—which leads us to the next topic: work on any given SOP must always be considered to be ongoing.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Role                1. SOPs on:                    2. CDM System           3. Guidelines                                                   Test Required?
First Pass Entry    • CRF Workflow • Data Entry    Entry Menu              “Handling Pages with No Identifier,” “Data Entry Guidelines”    …
Second Pass Entry   • CRF Workflow • Data Entry    Verification Features   “Handling Pages with No Identifier,” “Data Entry Guidelines”    …
…                   …                              Discrepancy Menu        “Discrepancy Management”                                        No, work review only
CRF Designer        • Designing CRFs               N/A                     “Managing CRF Files”                                            No, work review only

FIGURE 16.1 An example of a training matrix for conducting paper-based studies. The roles identify the kinds of tasks being performed. The columns list the different kinds of training from three different areas. The final column indicates whether a formal test is required to qualify to do the work on an actual study.

training has been completed. (How to document this training is discussed in a following section.)
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
HOW TO TRAIN
Only large companies have training groups, and even then those groups often focus on companywide training. They may train on corporate SOPs, good clinical practice (GCP) in general, and other topics that pertain to groups beyond data management. If a data management group is large enough and lucky enough, it may have a data management trainer or CDM training group knowledgeable in data management and the systems used, but it is extremely common for data management staff to be responsible for training its own staff on top of other duties and expectations. In the latter case, it may be necessary to make it a yearly goal for some data managers to provide such training support and to recognize those who do it well.

If a data management group is relatively small with low turnover, one-on-one training may be the most efficient approach. While many small groups assign new staff members to a buddy or mentor for training, this has been found to lead to wide variability in the quality of training. Ideally, one person in the group who is both interested in and good at training will be the designated trainer. If the group is growing, periodic formal training sessions may become worthwhile. In this case, the quality of the training will probably improve and be more consistent, but there are always issues about holding the classes when they are needed. Computer-based training may provide the consistency of training found lacking in mentor-based training, support part-time trainers, and make courses available as they are needed.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
FIGURE 16.2  An example of the first part of a training matrix for EDC studies, with the list
of all training components in the first column. Training components include courses (which may be computer-based or live), SOPs, and guidelines. The columns give the role, but in this case, the role is similar to a title but indicates what kinds of activities the employee is expected to perform. The full matrix would include all key data management roles and man- agers as well.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Training
153 provide the consistency of training found lacking in mentor-based training, support part-time trainers, and make courses available as they are needed.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
SOPs ON TRAINING
Not all companies have SOPs on training, but they do usually have very specific instructions on maintaining training documentation. If no company policy is in place, each data management group can set up a practice of having training plans for new employees. The training plan can cover the three areas of training: SOPs, department guidelines, and system training as recommended in this chapter. Study-specific training can be enforced through data management plans or requirements for the study files.

ALLOTTING TIME FOR TRAINING

Probably the biggest mistake smaller companies make is not allotting time for training. They specifically hire experienced people and then expect each person to jump in and begin work. After all, that person has done this work before. That person may have done the work before, and they may even have used the same clinical data management or EDC system, but they have not done the work at the company in question. As we saw in the earlier chapters of this book, there are many options for performing clinical data management and for using clinical data management systems. Each person needs to understand how the task is to be performed in each group’s unique combination of procedures and system configuration. This takes time and may mean the new hire sits around a bit while waiting for training or review of work. But it is well worth the investment for all involved. State up front to each new hire (and the group that person joins) that new staff members should not expect to do production work for the first week or two (or even more). When the expectation is clear, no one will feel the new person is wasting time.

Controlling Access and Security
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
17 Controlling Access and Security
The Food and Drug Administration (FDA) is very concerned, and rightly so, about the quality and integrity of data associated with clinical trials. In 21 CFR (Code of Federal Regulations) Part 11 and in guidance documents, the agency frequently repeats the phrase "authenticity, integrity, and confidentiality of electronic records" to emphasize its interest. The regulation is clear; Section 11.10 requires controls and procedures including: "(d) Limiting system access to authorized individuals" and "(g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand."

Limiting access (who can get in) is achieved through proper account management. Authority checks (who can do what) are set up via access control or access rights. Account management deals with assigning and maintaining usernames and passwords—or biometric identifiers. Access control defines how those users are given access to particular features of the clinical data management system or electronic data capture (EDC) application and how and when that access is revoked. Good account management and access control have to be achieved through a combination of the features of the software being used and procedures to make sure the features are used properly and to fill in gaps in the systems' abilities. While clinical data management (CDM) staff does not usually create accounts for sites in EDC systems, they are commonly responsible for assigning accounts in classic clinical data management systems and, in either case, need to understand good practices around assigning and managing accounts.
ACCOUNT MANAGEMENT
Accounts that are used to access clinical data management systems provide the user with varying degrees of power, or control, over data stored in the system. When the account permits the user to enter, modify, or delete data in electronic records, the username and password together constitute one kind of electronic signature. It is not the kind of signature that is the equivalent of a handwritten signature (such as the principal investigator signature for electronic case report forms (eCRF) in EDC) but rather is the kind of signature that makes the change to the data attributable to a particular person. All data management systems automatically associate the person who is responsible for the entry or change with the data, usually through the username. By thinking of the username and password as the way to make actions on the data attributable, it is easier to put procedures into place that govern the username and password in compliance with 21 CFR Part 11. The username must uniquely define a person, and the combination of username and password constitutes a signature.
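To make attribution concrete: every create or change action carries the username, which the system maps to a real person. A minimal sketch of such an audit trail record follows; the field names are invented for illustration, not taken from any particular CDM system:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AuditRecord:
        """One audit trail entry; field names are invented for illustration."""
        username: str        # uniquely identifies the person (never reused)
        field: str           # which data item was touched
        old_value: str
        new_value: str
        reason: str          # reason for change, often required by procedure
        timestamp: str       # when the action occurred (UTC)

    def record_change(username, field, old, new, reason):
        # The system, not the user, supplies username and timestamp,
        # which is what makes the action attributable.
        return AuditRecord(username, field, old, new, reason,
                           datetime.now(timezone.utc).isoformat())

    entry = record_change("jsmith", "DIASTOLIC_BP", "82", "88", "transcription error")
    print(entry.username, entry.field, entry.old_value, "->", entry.new_value)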
Usernames
The statement "usernames must uniquely identify a person" sounds simple enough, and most data management and EDC systems enforce unique account names, but there are some important, real-life cases to consider. For example, if the username associated with a common actual name (e.g., jsmith for Julia Smith) is removed when that person leaves the group or the company, then the possibility exists that that account name would be reused sometime in the future (e.g., when John Smith joins the group). In reviewing the data at a later time, it would then not be possible to immediately tell which person entered a particular record without referring to dates of employment and dates the record was created. It is best, therefore, to leave all account names in the system to prevent their reuse. When a person leaves, access permissions are removed but the account name remains to prevent reuse. Since most systems store not only the username but also the full name of the person using that account (thereby making the association of username and person), permanently retaining the account also keeps the connection to the actual person. If a system does not do this automatically, the connection must be retained through some paper method. In fact, many data management groups keep a paper account record of the person's full name, signature, initials, and username when a new account is created.

The need to keep the connection to a real name brings up one more common problem: having two people with the same name working at the same time. This is not the same case that we just considered. In that case, the username may be reused, but the first person was Julia Smith and the second person coming along later was John Smith. In this case, we have people with the same name, two Julia Smiths, for example, working in the group at the same time. Because the system enforces unique usernames, one of the usernames may be jsmith and the other may be jsmith2. Recording the owners of both of those names as Julia Smith would probably not be considered sufficient identification for a serious audit—after all, which Julia Smith was it? When tracking usernames, either on paper or electronically, record the middle initial, a birth date, or some other information to differentiate the two. This same differentiation probably has to be carried over to training records, signatures on documents, initials on actions performed, and so forth. It is rather a bother for the individuals involved, but the idea of attribution is important in the eyes of the FDA.
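A sketch of the deactivate-don't-delete rule, using an invented in-memory account store, might look like the following. The point is that usernames are never freed for reuse and the link to the full person is kept:

    # Invented in-memory account store illustrating deactivate-don't-delete.
    accounts = {}  # username -> {"full_name": ..., "active": ...}

    def create_account(full_name):
        """Create a unique username; usernames are never reused, even after departure."""
        parts = full_name.split()
        base = (parts[0][0] + parts[-1]).lower()
        username, n = base, 1
        while username in accounts:      # jsmith taken (active or not)?
            n += 1
            username = f"{base}{n}"      # -> jsmith2, jsmith3, ...
        accounts[username] = {"full_name": full_name, "active": True}
        return username

    def deactivate(username):
        # Remove access but keep the record: the name cannot be reassigned and
        # the username stays attributable to a real person.
        accounts[username]["active"] = False

    u1 = create_account("Julia Smith")   # -> jsmith
    deactivate(u1)                       # Julia leaves the company
    u2 = create_account("John Smith")    # -> jsmith2, never jsmith again
    print(u1, u2, accounts[u1]["full_name"])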
Passwords
The password in combination with the username is what makes the attribution work. It is equivalent to the signature of the person doing the work and says, "Yes, I performed this action." Most companies in the industry are aware of good password procedures and implement them for corporate-level access to networks. Standard practices include the following:

• Only the user knows the password, not the administrator.
• The password has a minimum length (generally eight characters or more).
• The password must be changed on a regular basis (definitely less than a year) and cannot be immediately reused.
• Many companies also require at least one character in passwords that is not a letter and that the passwords not appear in a dictionary.
• No one should do work while signed in as another person. That can be construed as falsification should the data come into question.

These procedures comply with the FDA's recommendations (found in the responses to comments in the preface to 21 CFR Part 11), which recommend enforcing procedures to make it less likely that a password could be compromised. While IT groups regularly enforce these standards for the company network, not all data management system administrators configure their applications the same way. A surprising number of companies, when audited, are found not to require regular password changes, nor do they enforce minimum lengths. A bit of digging has also turned up cases where employees change their passwords repeatedly when they expire to get back to their favorite password.

In the same comments section of the regulation, the FDA says, "Although FDA agrees that employee honesty cannot be ensured by requiring it in a regulation, the presence of strong accountability and responsibility policies is necessary to ensure that employees understand the importance of maintaining the integrity of electronic records and signatures." Some groups try to make clear to their employees (temporary as well as permanent) the importance of good password control by making them sign a statement acknowledging that they understand the seriousness of not adhering to company account management policies, but many still do not see it, and it is hard for managers to detect poor password maintenance.
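These rules translate directly into configuration or, where a system cannot enforce them, into a small check such as the sketch below. The thresholds are the ones listed above; the dictionary check is reduced to a tiny invented word list:

    COMMON_WORDS = {"password", "welcome", "clinical"}  # stand-in for a real dictionary

    def password_ok(new_pw, previous_pws, min_length=8):
        """Return a list of policy violations; an empty list means acceptable."""
        problems = []
        if len(new_pw) < min_length:
            problems.append(f"shorter than {min_length} characters")
        if new_pw.isalpha():
            problems.append("contains only letters; needs a non-letter character")
        if new_pw.lower() in COMMON_WORDS:
            problems.append("appears in a dictionary")
        if new_pw in previous_pws:
            problems.append("reuses a previous password")
        return problems

    print(password_ok("welcome", ["s3cret!XY"]))
    # -> ['shorter than 8 characters',
    #     'contains only letters; needs a non-letter character',
    #     'appears in a dictionary']

Keeping a history of previous passwords, as the last check requires, is also what blocks the change-it-repeatedly-to-get-it-back trick mentioned above.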
Account Timeouts
Another often neglected situation is that of an employee walking away from their computer to get coffee. If they are logged on at that point, someone else could sit down and access the data. This is clearly not desirable, as the comments section of the rule also states: "The agency's concern here is the possibility that, if the person leaves the workstation, someone else could access the workstation (or other computer device used to execute the signing) and impersonate the legitimate signer by entering an identification code or password." The agency goes on to recommend an automatic disconnect or locking of the screen so that the user has to sign on again to continue. At some companies, this is a network setting; at others it is a setting of the data management application. However it is accomplished, such controls should be in place, and the idle time setting, the number of minutes after which the system locks, should not be set too high or it is of no value. New technology should help in the future, as there are already devices that will lock computer activity when the user, who is wearing or carrying a signal device in a badge, moves a given distance away from the computer.
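Where the application exposes the idle timeout as a setting, the check itself is simple. A sketch follows; the setting name is invented and the 15-minute value is chosen only as an example, to be set per company policy:

    import time

    IDLE_TIMEOUT_SECONDS = 15 * 60  # example value; set per policy, not too high

    class Session:
        def __init__(self, username):
            self.username = username
            self.last_activity = time.monotonic()

        def touch(self):
            """Call on every user action to reset the idle clock."""
            self.last_activity = time.monotonic()

        def is_locked(self):
            # After the idle period the user must sign on again to continue.
            return time.monotonic() - self.last_activity > IDLE_TIMEOUT_SECONDS

    s = Session("jsmith")
    print(s.is_locked())  # False right after activity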
ACCESS CONTROL
For the purposes of this chapter, access control is the combination of systems and procedures that define what access rights or permissions users have to the clinical data in a clinical data management system. (Access rights for EDC systems are a bit different, but the information in this section can easily be extended to cover those as well.) Data management systems all have support for granting and revoking access to studies, but the systems available may not meet all the needs of a data management group. In that case, other procedures, with appropriate controls, should be put in place to fill the gaps. No matter how it is accomplished, systems and procedures should be able to answer these questions:

• Who had access to a particular study?
• When did they have access?
• What were they allowed to do?

This is recommended in the FDA guidance "Computer Systems Used in Clinical Investigations."
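If the system itself cannot answer these three questions, a simple grant/revoke log kept alongside it can. Here is a minimal sketch with invented usernames, study identifiers, and role names:

    from datetime import date

    # Each entry: (username, study, role, granted, revoked); None = still active.
    access_log = [
        ("jsmith",  "STUDY-001", "data entry",         date(2010, 1, 4), date(2010, 6, 30)),
        ("jsmith2", "STUDY-001", "discrepancy review", date(2010, 2, 1), None),
    ]

    def who_had_access(study, on_date):
        """Answer: who had access to this study, when, and to do what?"""
        return [(user, role, granted, revoked)
                for user, s, role, granted, revoked in access_log
                if s == study and granted <= on_date
                and (revoked is None or on_date <= revoked)]

    for row in who_had_access("STUDY-001", date(2010, 3, 15)):
        print(row)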
THE CRO MYTH
It is a myth that using a CRO means the sponsor offloads all of the work involved in the project or that portion of the project that is contracted out. Contracting with a CRO to carry out data management for a particular study does not mean that the sponsor's data management group is freed from involvement with the study! It is only through close involvement from the time of study setup, throughout study conduct, and through database lock and final transfer that a sponsor can feel confident in the quality of the data associated with the study.

It is by establishing a base knowledge of the CRO's compliance with regulations and industry standards that the relationship gets underway. This baseline is established via an audit of the CRO. Then, for each project, both sides must clearly define their responsibilities so that no critical data management step is overlooked. To really understand the data and its quality, the sponsor liaison must stay closely involved in the project through ongoing review of materials, oversight of milestones, and constant discussions about the handling of problem data. After study closeout, the sponsor must ensure that the CRO transfers all data and items for the trial master file. To provide a sponsor contact and to keep closely involved with the study, a sponsor's data management group should designate a CRO liaison from data management who is an experienced data manager knowledgeable in all aspects of clinical data management (CDM).
AUDITING CROs
A sponsor is ultimately responsible for the quality and integrity of the data coming from a CRO. It is generally accepted in the industry that a key component of taking on that responsibility is to audit the CRO before (or at least around the time of) beginning substantial work with that CRO. Sometimes a company will have the resources to maintain an audit group who will review a CRO's procedures in all areas of interest to the sponsor. This should include someone with data management experience if the CRO will conduct some or all of the data management functions for a study. In smaller companies, or when special expertise is required, data management may be asked to designate an auditor (in-house or contracted) to look specifically at data management functions at that CRO.

Auditing usually involves a review of written policies and procedures as well as interviews with CRO staff. Often, the auditor will work off a checklist or question list so that no key items are forgotten. By reviewing documents and talking with staff, the auditor gets an idea of whether the CRO performs up to industry standards and complies with regulations. Needless to say, if required to review data management practices in detail, the auditor must be very experienced in the field of data management and understand acceptable variations in practices. The auditor's aim should be to ascertain if the CRO performs data management in an acceptable way, not that the CRO performs data management exactly as the sponsor does.

After the audit, the auditor will write up an audit report and highlight any significant findings—both good and bad. The auditor must be careful to differentiate between noncompliance with regulations and variations in practices. In the first case, immediate action would be expected and the CRO should reply with a detailed remediation plan with timelines. In the latter case, the sponsor may have a different opinion about what is best practice in a particular area, but the CRO may still be using a fully acceptable approach. When this comes up, and the sponsor wants to continue to work with the CRO, the companies will usually work together to formulate a plan or compromise specific to the study.
Working with CROs
CROs AS FUNCTIONAL SERVICE PROVIDERS

The discussion of using CROs to this point has dealt with activities that are fully outsourced. That is, the CRO is using its systems, its standard operating procedures (SOPs), and its staff to perform the contracted activities. Larger companies use CROs to manage staffing of projects, and they have another option in addition to fully outsourcing the CDM activities. They can contract in CRO staff to work on the sponsor systems under sponsor oversight and sponsor SOPs. When a CRO is used in this way, it is often called a functional service provider. Essentially, the CRO is supplying staff and expertise but not systems. It is very much like hiring a local contractor who works onsite at the sponsor's offices, except that the CRO staff is typically located at the CRO facilities. Staff members working as functional service providers need the same kinds of training as any in-house staff or contractors performing the same activities.

SOPs FOR WORKING WITH CROs

Many companies will have a corporate-level SOP that lays out the bid process and specific requirements for contracting with a CRO. This would typically include the audit requirement mentioned previously. If that SOP does not exist, data management can still do the right thing at a department level and push for an audit and develop an appropriate responsibility matrix as part of the contracting process.

Data management groups that work frequently (or even exclusively) with CROs can also develop a CRO manual or a CRO oversight SOP to lay out data management's expectations explicitly. The document would, for example, require a data management plan from the CRO, along with all edit check specifications and data management self-evident corrections. It would also provide recommended workflows for coding, SAE reconciliation, listing review, and the issuing of manual discrepancies. Besides setting clear expectations for the CRO, a manual such as this provides consistency within data management when different data managers are working on projects with different CROs.
CLOSING THE STUDY
1. Closeout procedures plus any study-specific tasks
2. For paper studies: database audit plan
3. Approval process needed to lock

Associated document(s): Any documents required by the process, database audit results (paper), lock approval form
EDC VENDORS AS CROs
When small- or medium-sized data management groups make the leap to EDC, they often contract with an EDC vendor to build the study, manage accounts, and support the technical side of the study. In some ways, the EDC vendor then becomes a CRO whose responsibilities are limited to a portion of clinical data management activities around database creation and support. The vendor should be treated like a CRO, and the suggestions for working well with CROs apply to the EDC vendor as well. In particular, expect to be heavily involved in the start-up activities. Keep in mind that EDC vendors are software companies, not drug development firms. While they will have hired staff familiar with clinical trials, they are not a biopharmaceutical company and may have limited knowledge of how a clinical trial really works and what is involved in clinical data management.
CDM Systems
All data managers work with computer software applications. In the chapters that follow, we will look at the characteristics of traditional clinical data management (CDM) systems and electronic data capture (EDC) systems and then explore the activities around those systems. Given the fast pace of software development and the rapid growth (and failure) of EDC vendors, many data managers are likely to be involved, at least peripherally, in vendor selection and validation of new systems. All data managers who design and/or build study databases need to be aware of change control requirements on study applications and all CDM software. A chapter on software used to code adverse events (AEs) and medications rounds out this section as background for those data managers responsible for the activity. Discussions of system selection, implementation, and validation could take up an entire book in their own right. The following chapters aim to provide an overview without going into exhaustive detail or extensive procedures.
SOPs FOR CDM SYSTEMS
The SOPs for CDM systems themselves are those that cover implementation, validation, and change control. These SOPs are often written for an IT group. In smaller or newer companies, data management may have to create these SOPs themselves, and the concepts needed are discussed in the chapters that follow.

CDM SYSTEMS ARE FOR MORE THAN DATA ENTRY

Just as data management is more than data entry, CDM systems are more than data collection tools or data entry applications. Even the smaller vendor products will support the key data management tasks for many studies at a time, with lots of data, while being 21 CFR Part 11 compliant. The larger, more complex (and expensive) systems add on more features for more tasks, greater flexibility, and further options for configuration. They will also be able to handle even larger volumes of data and studies. When looking at CDM systems, it is easy to focus on the screens and entry aspects because they are visible and concrete, but it is the complex functions that support the complex work of data management that should be the deciding factors.

20 EDC Systems

Electronic data capture (EDC) systems deliver clinical trial data from the investigator sites to the sponsor through electronic means rather than paper case report forms (CRFs). As in paper studies, site staff copies information from source records for most of the clinical trial information for a given subject, but they copy into electronic CRFs (eCRFs) rather than paper ones.

In most current EDC systems, the site is online with a central computer and the data is stored only on a central computer. These systems work like, and feel like, the familiar websites we visit to shop for books or shoes. As soon as we make a selection and provide payment information, the order is known to the vendor. Some EDC vendors do provide systems that also allow sites to work offline. These store the data locally until the site initiates a connection. This approach is much like the one used when we enter information on our smartphones and then synch it up with our personal computers later via special applications. There are pros and cons to both of these approaches that we will discuss later in this chapter.

EDC systems are optimized for site activities during a clinical trial and typically feature:

• eCRFs for the entry of data
• Support for single-field and cross-field checks on the data that generate queries
• Tools to allow sites to review and resolve queries
• Ways for the sponsor to raise manual queries while reviewing data
• True electronic signatures so the investigator can "sign" for the data
• Record or subject locks on the data
• Tools to assist monitoring
• Reports about subjects for the sites and reports for the sponsor about sites
• A portal that provides information about the study to the sites
• A variety of ways to extract the data for review and analysis

WHAT MAKES EDC SYSTEMS DIFFERENT?

If EDC systems collect the data and manage discrepancies, why aren't they considered just another kind of clinical data management (CDM) system? The reasons are in fact a bit subtle and hinge on certain aspects of performing data management for clinical trials. Some of the key differences between the two kinds of systems are found in:

• Managing multiple data streams
• How coding is handled
• Location of servers with trial data
• Workflow for study setup
• The need for data repositories
Multiple Data Streams
Today, all but the smallest trials have several streams or sources of data. These include interactive voice response system (IVRS) data, lab data from central labs, laboratory normal values, EKG readings, and even electronic subject diary data. This data must be cross-checked against the subject’s eCRF data during the conduct of the trial to identify discrepancies or unusual occurrences. EDC systems do not support multiple data streams very well. If the data is loaded into the EDC front end, then it must be hidden from the sites or protected so that the data cannot be changed. Adding edit checks to loaded data does not provide much value because any queries would not be directed to the sites but rather to the data source or vendor first. If the data is not loaded, but kept as SAS® datasets or stored in another database, then the only way to do reconciliation with the eCRF data is to first extract that data. Also, if the electronic data is kept separately, then special procedures must be in place to lock it at the time the EDC data is locked.
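A sketch of the kind of cross-check involved follows: comparing a central-lab transfer against eCRF visit data to flag lab results with no matching visit. The subject numbers, visit names, and field names are invented for illustration:

    # Flag central-lab records that have no matching eCRF visit (invented data).
    ecrf_visits = {("1001", "WEEK4"), ("1001", "WEEK8"), ("1002", "WEEK4")}

    lab_records = [
        {"subject": "1001", "visit": "WEEK4", "test": "ALT", "value": 31},
        {"subject": "1002", "visit": "WEEK8", "test": "ALT", "value": 28},  # no eCRF visit
    ]

    def reconcile(lab_records, ecrf_visits):
        """Return discrepancies to route to the lab vendor or the site."""
        return [rec for rec in lab_records
                if (rec["subject"], rec["visit"]) not in ecrf_visits]

    for rec in reconcile(lab_records, ecrf_visits):
        print(f"Query: lab {rec['test']} for subject {rec['subject']} "
              f"at {rec['visit']} has no matching eCRF visit")

Note that, as the text explains, a discrepancy like this would go first to the data source or vendor, not to the site.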
Coding
Strong support for the coding process, including management of the coding dictionary, is a typical feature of the larger CDM systems. Those systems support robust automatic coding and maintenance of synonyms and provide tools for making manual assignments. (See Chapter 11, "Collecting Adverse Event Data," and Chapter 26, "Coding Dictionaries and Systems," for more on coding.) Many current EDC systems may only support import of codes after terms have been coded externally; a few are starting to introduce better support for coding.

Where the Servers Are—Hosting

Another way that EDC systems differ from classic CDM systems is that the central server storing the data from the trial is typically not the sponsor's server. EDC applications are most commonly "hosted" by the vendor or by some other third party. There are two main reasons for this:

1. There is still some uncertainty as to the interpretation of regulations regarding the requirement that a site own or have control of the subjects' data. Many companies feel that having the data under the total control of the sponsor by having it on a sponsor server violates this regulation. Some others feel that a sponsor's own IT department could be considered an appropriate trusted third party. This discussion will certainly continue for a few more years, but for now, most data managers will see externally hosted EDC applications.
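Returning to the coding support described at the start of this section, here is a minimal sketch of automatic coding against a dictionary with a curated synonym list. The miniature dictionary and synonym entries are invented stand-ins; real autocoders run against full dictionaries such as MedDRA or WHO Drug:

    # Invented miniature dictionary; real autocoders use MedDRA or WHO Drug.
    DICTIONARY = {"headache": "10019211", "nausea": "10028813"}
    SYNONYMS = {"head ache": "headache", "feeling sick": "nausea"}  # curated over time

    def autocode(verbatim):
        """Return (dictionary term, code) or None if manual assignment is needed."""
        term = verbatim.strip().lower()
        term = SYNONYMS.get(term, term)    # apply the synonym list first
        if term in DICTIONARY:
            return term, DICTIONARY[term]
        return None                        # falls to a coder for manual review

    for reported in ["Headache", "feeling sick", "dizzy upon standing"]:
        print(reported, "->", autocode(reported))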
EDC Systems
… a given data manager will participate in will change when a company begins to use EDC, but there is still a need to have coordination of data storage and cleaning efforts. Data managers are also very likely to continue to play a role in design of the eCRF, database, and edit checks. They may also play more of a role in coordinating the various data streams from a study to assure quality and timeliness of data from labs, coding groups, and IVR systems, as well as the EDC host. At many companies, data managers will continue to play their important role as overseers of data quality as they review data listings, run aggregate checks, and perform simple analyses on the datasets.

Some companies that use EDC systems widely have reported that the profile of their data management group changes. They go from having many lower-level staff members for data entry and discrepancy management and fewer senior data managers to define databases and oversee data collection to exactly the opposite. With EDC, data managers are more involved in study setup and more complex checking, and there is less need for junior or less-experienced staff for entry or discrepancy management. Companies also require more technical expertise in their data managers than previously. This should be heartening to data managers, as it shows a trend to more interesting, senior-level positions being available as we go forward with EDC.
STUDY SETUP
1. Computer system(s) to be used
2. Process to design, build, and test the study database
3. Procedures for release for production
4. Other systems or integrations to be configured

Associated document(s): Study database design document; other configuration documents, test plans and results, approval for production use
SOPs FOR EDC
Because EDC study workflow is different enough from that for paper studies, standard operating procedures (SOPs) specifically for EDC will be necessary for the usual topics in data management (see Chapter 15 and Appendix B). One SOP that is needed and is not common to paper studies is the procedure to manage the accounts and access for sites. Since groups other than data management, such as IT or even the EDC host, are responsible for accounts, they may be responsible for that SOP.
MAKING EDC SUCCESSFUL
To make EDC truly successful, we need to understand how it changes the way a study is conducted. The work is no longer as sequential and well separated among clinical, data management, and biostatistics. The work the groups are doing overlaps more, and there is more room for duplication or, much worse, for a step in the process to be overlooked. In particular, if data management alone is given the task to go implement EDC, the project is likely to fail or at least bring little value. It is when the three groups work together to decide what works given the company philosophy and available resources that the project gets off to a good start. Continuing to work together and reevaluating the workflow during the initial studies will have a positive impact on the outcome of the early studies and on the company's view of EDC as a whole.

21 Choosing Vendor Products
EVALUATING RESPONSES
When the responses to the RFIs arrive, the evaluation team combines the new information with the initial assessments from the demos and tries to come to some kind of conclusion. This can be a surprisingly difficult task. Each software package will be strong in some areas and weak in others. It should not even be surprising if all the products under consideration end up being weak in an area that the company considers very important. Deciding which of two needs, both initially labeled as "very important," is going to be very difficult if the products don't support those needs equally well.

Some companies have tried complex systems of priorities and weighting. Each requirement in the RFI is given a priority, and the product responses are weighted as to how well they meet the requirement. The company then performs calculations or even statistical analyses on the outcomes in an attempt to come up with a clear numeric winner. These numbers help, but the final decision of which product to go with—or to decide not to go with any—will probably come down to a qualitative feel about the product and the vendor rather than a pure score based on features. Many people have found that a gut reaction to vendors based on a demo and an RFI results in the same outcome as a complex numerical analysis.

EXTENDED DEMOS AND PILOTS

If the goal of the vendor and product evaluation process is to learn as much as possible about whether the product would be successful in the company's environment, then the list of business needs and the evaluation of responses may not be enough. Many companies find that they need some amount of hands-on time with candidate products to really understand if it will work. If time is short in the evaluation period, an extended hands-on demo is a good option. If time and resources permit, a full pilot of the product (before purchase) may be possible. Neither demo nor pilot would normally be carried out with more than two candidate products—and frequently these tools are used as a final check only of the most probable choice.
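The priority-and-weight calculation is simple arithmetic. A sketch with invented requirements and scores (priority 1 to 3, fit 0 to 5) shows why ties are common and why the numbers rarely decide by themselves:

    # Invented requirements: (name, priority 1-3) and each product's fit score 0-5.
    requirements = [("double data entry", 3), ("coding support", 3), ("reporting", 2)]
    fit = {
        "Product A": {"double data entry": 5, "coding support": 2, "reporting": 4},
        "Product B": {"double data entry": 3, "coding support": 4, "reporting": 4},
    }

    def weighted_score(product):
        return sum(priority * fit[product][req] for req, priority in requirements)

    for product in fit:
        print(product, weighted_score(product))
    # Product A: 3*5 + 3*2 + 2*4 = 29; Product B: 3*3 + 3*4 + 2*4 = 29.
    # A dead heat, so the decision falls back to qualitative judgment.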
Hands-On Demos
A hands-on demo takes place either at the vendor or on-site at the company and typically lasts from two to five days depending on the complexity of the system. Having a demo on-site allows more of the company staff to attend all or part of the session. On the other hand, it can be hard to corral all the group members of the evaluation team for the entire period and also keep them focused. Visits to the vendor may incur significant travel expenses, but they do keep the group more focused. They also provide the group access to more than just one or two of the vendor staff members.

The idea behind the hands-on demo is to see if the product would work in the business environment of the company by using data or examples from actual studies. Another goal is to give the evaluation team a real sense of how the product would be used on a day-to-day basis. The evaluation team comes to the demo with sample data or studies to try out in the candidate system. The vendor can perform the more complex tasks with the evaluation team looking on, then turn over the keyboard as much as possible for other tasks. Turning the demonstration into a standard training session usually does not meet the goals of the evaluation team.

The success of the hands-on demo will rely on the quality of the people sent by the vendor and on the data or examples chosen by the evaluation team. The examples should reasonably represent actual data or structures that would be used in the product after implementation. When appropriate, the evaluation team should provide the vendor staff with the data and examples before the demo so that the vendor can do some preparation or setup to keep the hands-on time focused. Note that for complex systems, it would be impossible to touch on all parts or features of the product in depth during the demo period, so the evaluation team should identify ahead of time which features they most want to see.
ESSENTIAL PREPARATION
The preparation needed before implementation of a system can begin involves getting all the necessary pieces in place. This means acquiring all of the hardware and software identified in the overview. It also means installing that hardware and software and configuring the software application being implemented. At a minimum, installed software should undergo an installation qualification to demonstrate that it has been properly installed.

One common problem in the preparation phase is underestimating the time needed to acquire and install systems. In the case of hardware, implementation teams may be unaware that there could well be a wait time due to vendor lead times for delivery of popular server configurations. Even if there is no delay expected in shipping, the process of ordering and purchasing hardware within companies has become so complex that it, by itself, introduces significant lead time. In the case of software, it may be immediately available, but contract negotiations may take quite a while—especially if special conditions, extensions, or future expectations are in discussion.

The biggest risk in this preparation phase is forgetting the configuration task altogether. Most large software systems (and many small systems) allow some variation in the way the product can be used. Each installing company decides how to use it and configures the system appropriately. The configuration may take the form of assigning values to system and user parameters, or it may require more complex setup, such as these for clinical data management software:

• Deciding on and setting workflow states
• Adding company logos
• Developing algorithms (as for autocoders)
• Loading large coding dictionaries
• Providing company-specific information (e.g., addresses, protocols, company drugs)

The configuration tasks needed for a given software product are frequently difficult to judge and difficult to perform because new users may not understand a product well enough to know what to configure and how. Implementation teams aware of the potential for problems in configuration can try to plan for it by working closely with the vendor to determine what needs to be done and roughly how long it should take.

INTEGRATION AND EXTENSIONS

The overview of the implementation plan in Appendix D has placeholders for all of the integration links and extensions that are considered integral parts of the system. Figure 22.1 lists some examples of integration points and extensions that might apply to a clinical data management system. An individual section for each integration point or extension allows the implementation team to deal with and track each one.

FIGURE 22.1  Examples of integration points and extensions that might apply to a clinical data management system.

Type of Integration or Extension | Purpose
Connect to external AE and drug coding applications | An integration; terms to be coded are extracted into the coding application; results are returned to the data management system
Build CRF tracking applications for paper studies | An extension when the CDM system does not provide detailed tracking information
Use external data-checking programs | An integration when external programs such as SAS are used; data is extracted to SAS and discrepancies are loaded back into the system
Connect imaging systems for paper CRFs | An integration so that data is connected to the CRF image and query forms are connected to query records in the database
Use Clinical Trial Systems with EDC | An integration to allow site information to be loaded into the EDC system or allow visits to trigger site payments
Support SAE reconciliation | An extension or integration that supports or simplifies SAE reconciliation by producing combined reports from the safety and CDM systems
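The "use external data-checking programs" row is the round trip data managers most often script themselves. A sketch of that loop, written here in Python rather than SAS and using invented field names and limits, looks like this:

    import csv, io

    # Step 1: extract study data (here, CSV in memory; SAS datasets in practice).
    extract = io.StringIO("subject,visit,sbp\n1001,WEEK4,210\n1002,WEEK4,118\n")

    # Step 2: run the external check and collect discrepancies.
    def check_sbp(rows, low=80, high=200):
        """Flag systolic BP values outside an expected range (invented limits)."""
        for row in rows:
            sbp = int(row["sbp"])
            if not (low <= sbp <= high):
                yield {"subject": row["subject"], "visit": row["visit"],
                       "message": f"SBP {sbp} outside {low}-{high}; please verify"}

    # Step 3: load the discrepancies back into the CDM system's query queue
    # (represented here by a plain list).
    query_queue = list(check_sbp(csv.DictReader(extract)))
    print(query_queue)
    # -> one discrepancy for subject 1001 at WEEK4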
System Validation
… would be somewhat less and focus on customization and configuration. The effort for small user-written programs and application-built systems would likely be minimal. However, all validation efforts do need to cover the same points and to provide the same assurance of quality. They must include:

• Requirements and specifications
• A plan to carry out the validation
• Test procedures to provide evidence of proper function
• A summary stating that requirements were met
• Change control to provide continued validated functioning

Many may be surprised to hear that they need a validation plan for every study database. This is the case, but there are shortcuts for such things. In the case of a database application built in a validated clinical data management system, the validation requirements can be met as follows:

• An annotated CRF plus the protocol can act as the specification
• An SOP on building and testing a study database acts as the validation plan
• The entry and edit check testing is filed as the evidence
• A ready-for-production form acts as the summary of testing
• Change control should be in effect for study databases

See Chapter 5, "Preparing to Receive Data," for additional discussion of study-level validation and Chapter 12, "Creating Reports and Transferring Data," for validation of reports.

It is worth finishing this section with a reminder to apply risk assessment appropriately. It really is not necessary to go through validation for little reports or programs that simply provide information. The developer will, of course, test these programs in the normal way, but they need not go through formal validation. Reports, utilities, extensions, and customizations that might have an impact on the data should be validated, but at a level appropriate to the risk they pose to the integrity and interpretation of the data.
Assumptions and Risks
The assumptions and risk assessment section may well be the most important section of the entire document. The FDA is heavily promoting an assessment of how critical the software is and, from that, judging the appropriate level of validation and testing required. Section 4.8 of the FDA guidance on "Principles of Software Validation" states: "The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use." In other words, work harder on validating the systems and features that are critical to the integrity of the data and less hard on features that are administrative or those that have built-in checks, either through technical means or through process procedures.

Business Requirements and Functional Specification

It is not possible to verify that a system meets its requirements in a known manner without stating what those requirements are. Most validation plans now have two sections: one for business needs or requirements and one for functional specifications. As we saw in Chapter 21, "Choosing Vendor Products," the business-needs document or list of requirements may already be available from the vendor selection process. For vendor-supplied systems, it is common to refer to the user manual supplied with the system as the functional specification. The manual serves as a description of how the system is supposed to work. It may also be appropriate to include references to release notes, known bug lists, and other supplementary material provided by the vendor to describe the current state of the software.
Installation
The installation section of a validation plan serves to document how a system is prepared for use. Planning and documenting the installation procedure has proven to be extremely useful to every company that has done it. Even for the smallest systems, a checklist of what must be done to make the system available can help a company avoid unpleasant lapses—assuming the checklist is written before the installation. Note that the installation is often carried out at least twice: once in the testing area or environment and once in the production area. The checklist or procedure also ensures that the installation is performed the same way both times.

For larger systems that come with a detailed installation procedure or guide, the installer should document what choices were taken or where deviations from the standard installation were made. Those notes provide a reference of those choices for future installations. When independent consultants or vendor technical staff perform the installation, companies should make clear that an installation checklist and installation notes are required as a deliverable.

Many companies require an installation qualification (IQ) process and some also require an operational qualification (OQ) process. Very few require performance qualification (PQ). (There are some very interesting comments on IQ/OQ/PQ in the FDA guidance on validation.) The meanings for these terms are not universal, but the intent is to assure that the system is completely and properly in place and does seem to work. The qualification, installation or operational, usually requires some light level of testing or verification of system output. This can be performed using vendor-supplied or custom-built test scripts or using system test programs. If the installer encounters any discrepancies or problems, these must be documented. These discrepancies may be as simple as forgetting a step and having to go back or as serious as a complete inability to install. Very, very few installations take place just once, and these notes on problems will be invaluable at the next installation.
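Part of an installation qualification is verifying that the installed files are exactly what the vendor shipped. One way to script that check, sketched here with an invented manifest format, is to compare file checksums against a manifest:

    import hashlib
    from pathlib import Path

    def sha256(path):
        """Checksum a file in chunks so large installs don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_install(install_dir, manifest):
        """manifest: {relative_path: expected_sha256}; returns list of deviations."""
        deviations = []
        for rel_path, expected in manifest.items():
            target = Path(install_dir) / rel_path
            if not target.exists():
                deviations.append(f"MISSING: {rel_path}")
            elif sha256(target) != expected:
                deviations.append(f"MODIFIED: {rel_path}")
        return deviations

    # Usage: record the result (an empty list means the step passed) in the
    # IQ document, along with any deviations and their resolutions.

Running the same script in the test and production environments also documents that both installations were performed the same way.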
Testing Overview
We now recognize that the testing portion of the validation process is only one step of many, but it is frequently the most time- and effort-consuming step of the entire validation process. Controlled testing, with expected results compared to actual output, will provide the evidence that shows that the system performs in a known way. For complex systems, the test procedures (test scripts) would be found in one or more separate documents. In that case, the testing section of the validation plan would typically provide a high-level overview of the testing approach and pointers to the other documents. (See Chapter 24 for further discussion of testing procedures.)
Vendor Audit
Since the ultimate responsibility for the validation of a system falls on the user, each company must consider whether or not they believe a vendor has provided a quality product when they choose to acquire rather than build a system. Most companies, large and small, conduct vendor audits or surveys to help document their decision to rely on the vendor's software verification processes. Ideally, the vendor audit would take place before the product decision has been made—more typically, it takes place after the decision but before the system is put into production.

The vendor audit should be performed by an auditor experienced with software development as well as the FDA's guidance documents. This is not a GCP audit in the usual sense; this is an audit of the vendor's development practices as viewed both from the general software industry practices and from the applicable regulations. The audit may be performed by in-house IT or regulatory staff. Or, the task may be contracted out to a consultant specializing in such audits. Companies must be …
Security Plan
Not all validation plans include information on security approaches for new systems. Some companies cover this under general SOPs; others include it in implementation plans or user guidelines specific to the system. When a validation plan does include a security section, the intent is usually to document how security fits into the picture of assuring that the system will run correctly and to show how security will maintain integrity of the data in the new system as per 21 CFR (Code of Federal Regulations) Part 11. This is particularly important when the system in question can’t fulfill all the needs of tracking who had what kind of access and when, and the group must implement specific procedures to fill the gaps (see also Chapter 17, “Controlling Access and Security”).
SOPs and Guidelines
In recognition of the fact that a system includes not just computers and software but also people and processes, some companies include a section on SOPs and guidelines in their validation plans. The goal of the section is to identify which SOPs or specific guidelines will apply to the process, including which need to be updated and reviewed. This is meant to provide evidence that the new system is used appropriately.
TRACEABILITY MATRIX
We can't even begin to write test scripts until we know what it is that will be tested during this validation and what it is supposed to do when we try it. The introduction and scope sections of the validation plan tell us what is in and what is out of scope for testing. The user requirements and functional specifications list what the system can do or is supposed to do. Reconciling these two together provides us with a list of features to be tested. We add to this list requirements from 21 CFR (Code of Federal Regulations) Part 11, such as access control or automatic audit trail, which are not explicitly listed in the product's features. The combined list of all items to test becomes the first column of a table or matrix known as the traceability matrix or testing matrix that will provide an index or overview to test scripts. Additional columns in this matrix indicate where the expected behavior for that feature or function can be found in the specifications. As the scripts are written or laid out, the applicable script and procedure identifiers are added to each row of the matrix. So, for example, if a data management group needs to be able to perform double data entry with third-party arbitration, a few rows of the test matrix may look like those found in Figure 24.1. For user acceptance testing, the matrix might include rows as shown in Figure 24.2.
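The matrix lends itself to a simple data structure, which also makes gaps in coverage easy to find. A minimal sketch follows, using the two rows from Figure 24.1 plus one invented Part 11 row with no script assigned yet:

    # Each row: requirement -> (specification reference, test procedure).
    # The first two rows follow Figure 24.1; the third is invented.
    matrix = {
        "Two-pass Entry":        ("Data Entry Manual, Ch. 2", "Test Script 2, Proc. 1-2"),
        "Arbitration":           ("Data Entry Manual, Ch. 3", "Test Script 2, Proc. 3"),
        "Audit trail (Part 11)": ("Admin Guide, Ch. 7", None),  # script not yet written
    }

    def untested(matrix):
        """Requirements with no test procedure yet, i.e., gaps in test coverage."""
        return [req for req, (_, procedure) in matrix.items() if procedure is None]

    print(untested(matrix))  # -> ['Audit trail (Part 11)']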
Each script or test procedure has a header. That header repeats information from the traceability matrix to identify what is being tested and what functional specification applies. The header may also include information on the applicable software version, the original script writer, and perhaps script version information. Scripts also need to identify the prerequisites. This should include what other scripts must be run first, what test data is required, and whether any external files are needed. By reading the header and prerequisites, the tester should know exactly what is needed before the script can be run.

The different ways companies choose to specify the actual steps to follow in the script vary in their level of specificity. Some companies may choose to specify the actions in such detail that someone only lightly familiar with the systems can still carry them out. Other companies will describe the actions at a much higher level for knowledgeable system users. For example, an action described at a high level might be: "Enter the test data for the first 10 forms of the eCRF." The same action with more detailed steps might be written as:

1. Select the Enter function
2. From the Subject menu, select Register
3. Register the test subject
4. Select form 1, Demog
5. Enter the data as shown and click SUBMIT

And so on. The level of detail describing the action is dependent not only on the expected tester but also on the level of detail required by the outcome of the test.

FIGURE 24.1  Two rows from a traceability matrix covering double data entry for a purchased clinical data management system. This example assumes that the correct storage and retrieval of the entered data is tested separately.

Feature/Requirement | Specification Reference | Test Script/Procedure
Two-pass Entry | Data Entry Manual, Chapter 2, pages 22–28 | Test Script 2, Procedures 1 (first pass) and 2 (second pass)
Arbitration | Data Entry Manual, Chapter 3, pages 31–40 | Test Script 2, Procedure 3

FIGURE 24.2  Several rows from a traceability matrix for user acceptance testing of an EDC study application; entries include an eCRF row referencing Study Database Specification section 1 and the test identifier IVRS-01. The broader categories are further refined in the actual test procedures.
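The header-plus-prerequisites bookkeeping described above lends itself to a structured representation. A sketch follows, with invented field names, that a group could use to generate script cover pages or check run order; the script entry reuses the Arbitration row from Figure 24.1:

    from dataclasses import dataclass, field

    @dataclass
    class TestScript:
        """Test script header fields; names are invented for illustration."""
        script_id: str
        feature: str                 # what is being tested (traceability matrix row)
        spec_reference: str          # where expected behavior is documented
        software_version: str
        author: str
        prerequisites: list = field(default_factory=list)  # script IDs to run first
        test_data: list = field(default_factory=list)      # required files/data sets

    script3 = TestScript(
        script_id="TS-2-3",
        feature="Arbitration",
        spec_reference="Data Entry Manual, Chapter 3, pages 31-40",
        software_version="4.2",
        author="jsmith",
        prerequisites=["TS-2-1", "TS-2-2"],   # both entry passes must exist first
        test_data=["subjects_pass1.csv", "subjects_pass2.csv"],
    )
    print(f"{script3.script_id}: run after {', '.join(script3.prerequisites)}")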
Test Procedures
While going through the scripts, the reviewer should read all the user outcomes and make sure they make sense—there is always a possibility that the tester misunderstood the step instructions and went astray without realizing it. The reviewer may also spot cases where the tester reported an outcome that seemed OK at the time but looks suspicious in the context of the results of test completion. This may be a discrepancy that should be added to the incident log for this script.

After reviewing the results and output, the reviewer turns to any discrepancies or incidents associated with the script. The reviewer adds to the incident description any additional contextual information and begins to research the cause. Discrepancies are not necessarily bugs in the system. They may be due to user errors or script errors. They may also be surprising, but expected, behaviors. And of course, a discrepancy may be a real bug or flaw in the system.

In addition to determining the cause of the discrepancy, the reviewer determines the appropriate action. For user errors, the script may or may not have to be rerun. For script errors, the script may have to be revised and may, or may not, have to be rerun. For surprising behaviors or bugs, the reviewer may need to be in contact with the vendor and consider some way to document the behavior for future users. When the incident is a real bug, the resolution would be a bug report and a workaround if any (including training). On occasion, there may be no workaround and an entire feature may have to be declared off limits and business plans appropriately revised. Most rarely, a bug would be so serious as to prevent the system from being used.

Clearly, the reviewer's job is a critical one. A person assigned to the role of reviewer should be one of the people most experienced with the system being tested. That person should also have established a means of contacting the vendor (via hotline or email) before the testing begins. In some cases, companies may want to arrange special technical support from the vendor for the reviewer during testing.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
TRAINING FOR TESTERS
Plan on training the testers on how to test, not just on how to use the system. Testers need to understand that finding problems is a good thing, not a bad thing to be avoided. The goal is to identify all possible issues to make sure there are no large problems hiding behind seemingly insignificant discrepancies in expected results. Train the testers to read the header of the test script first and then review all the steps before carrying them out. Before starting a test script, the tester must make sure that all the prerequisite steps have been met and that all test data and external files are present. The tester should also be sure he or she understands the steps to be carried out and the kinds of output the script requires.

Testers also need to be instructed on how to fill out the test script results and label output documents. Explain to the testers that these are regulated documents. They should always use pen and sign and initial as required. They should use actual dates. Testers should never:
• Use pencil or white-out
• Transcribe their messy original results to a clean copy of the script
• Rerun the script without checking with a reviewer

All printed output from the testing must be labeled with enough information so that it can be linked to the step that was carried out to create it. Proper identification also applies to electronic (file) output, but in this case, the identification may be through the filename, as the user should not modify the actual output file. All testers should know what to do if there is a discrepancy between the expected outcome of a step and the actual outcome. Training should emphasize that in reporting the incident, the tester must provide enough information for a reviewer to understand what happened and what steps came right before the incident. Testers need to know that they should not continue after a system error message. Rather, they should stop and contact the responsible reviewer to see whether that one test, or even all testing, must be halted.
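Since file output must be identifiable through its filename, one simple approach is a naming convention that encodes the script, step, tester, and date. The sketch below assumes a hypothetical convention; the function name and format are inventions for illustration, not a standard.

```python
# Hypothetical naming convention linking an output file back to the
# test script and step that produced it, as recommended above.
from datetime import date

def output_filename(script_id: str, step: int, tester: str, ext: str = "txt") -> str:
    """Build a filename that identifies the script, step, tester, and run date."""
    return f"{script_id}_step{step:02d}_{tester}_{date.today():%Y%m%d}.{ext}"

print(output_filename("TS2", 3, "jdoe"))  # e.g., TS2_step03_jdoe_20110415.txt
```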
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
REVIEWING RESULTS
Most validation testing does not need to be witnessed as it is conducted. That is, a reviewer typically does not need to be looking over the shoulder of the tester to confirm results on the spot. However, the idea of an observer or on-the-spot reviewer may have value at very high-risk points where the outcome would affect all other procedures. This situation tends to come up more during installation and operational testing phases than it does during user validation testing. For validation testing, a reviewer should go through all the scripts, associated output, and incident reports as soon after the script was run as is feasible. While going
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Change Control
or data management electronic folders. Providing a link to the change log (which has the overview and association to any protocol amendments) through a change number associated with all the documents makes retrieving evidence easy in case of an audit.
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Assess the Impact
For each change, a user, programmer, or IT staff member assesses the impact on the system or study. The key point is to assess the risks that the change will introduce to existing data or data management activities. In making the assessment of the impact, include information on:
• The amount of time the production system will be unavailable
• Any required changes to documentation
• What, if any, user training is required
• Whether any SOPs are affected
• All interfaces or reports that are affected
• The level of testing or validation required

Going through the questions of impact assessment may help clear up the question of whether opening a validation plan is warranted. The greater the impact, the higher the risk, and the more systems affected, the more likely the need for a full revalidation. For small, localized system changes, the change control system may fill the need for documentation and testing, whereas it would not be sufficient to cover the information needed for a more widespread change.
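The assessment questions above lend themselves to a structured record so that every change is assessed consistently. This is a minimal sketch under assumed field names; the rule of thumb in needs_validation_plan is an invented illustration, not a regulatory threshold.

```python
# Sketch of a structured impact assessment covering the questions
# listed above; field names and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpactAssessment:
    change_id: str
    downtime_hours: float                      # time production is unavailable
    documentation_changes: List[str] = field(default_factory=list)
    training_required: bool = False
    sops_affected: List[str] = field(default_factory=list)
    interfaces_reports_affected: List[str] = field(default_factory=list)
    testing_level: str = "targeted"            # targeted / broad / full revalidation

    def needs_validation_plan(self) -> bool:
        # Rough heuristic: the wider the impact, the more likely a full plan.
        return (self.testing_level == "full revalidation"
                or len(self.interfaces_reports_affected) > 2)
```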
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Plan Testing
Testing is essential for nearly every change, at an appropriate level. Specific bug fixes to the system can be tested against the conditions that caused the bug originally and against the normal working conditions. System changes that have wider impact may require light testing across many areas or features. New releases of an entire system would typically require complete retesting of the system. For study databases, it is typical to require testing of any new or changed elements similar to that performed prior to production use. As with the assessment of impact, an assessment of the level of testing may help determine whether or not a validation plan is warranted. If a lot of testing is required across several areas, then a validation plan may be warranted.

For some changes, defining appropriate testing may be surprisingly difficult because they are meant to fix bugs that show up only under rare or unusual circumstances. In these cases, it may be sufficient to find some way to demonstrate that the change was properly and completely implemented and installed and then to confirm that the normal behavior is unaffected. Checking the implementation may include printing out new configuration parameters, displaying the changed source code, or showing that a patch is now recognized by the system via a special version number.
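One way to show that a patch was completely installed, per the last point above, is to compare the version the system reports against the expected patched version and record the result. The version strings and function name in this sketch are hypothetical.

```python
# Sketch of confirming a patch installation via the reported version
# number; the version strings here are invented for illustration.
EXPECTED_VERSION = "4.5.2-patch7"

def confirm_patch(reported_version: str) -> bool:
    """Record pass/fail evidence that the system recognizes the patch."""
    ok = reported_version == EXPECTED_VERSION
    print(f"Reported {reported_version!r}; expected {EXPECTED_VERSION!r}: "
          + ("PASS" if ok else "FAIL"))
    return ok

confirm_patch("4.5.2-patch7")
```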
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
Document the Outcome
When a validation plan is used as part of a system change, the plan will have test scripts associated with it. The validation plan, test scripts, test outcomes, and validation summary all need to be retained. It is very likely that systems will have more than one change, and more than one set of testing associated with them, before a complete revalidation. The change control log or system provides the overview, and the details are supported with other documentation. Making a link between the change and its documentation is very helpful for systems and can be done by tagging any validation plan or independent test results with a change number. A similar technique is called for when study database changes are made. Phase II and III studies frequently have multiple changes associated with them over the course of a trial. As previously noted, each change will have testing and associated documentation. These documents will go into the data management study binder
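The tagging idea above — every supporting document carries the change number — makes retrieval trivial. Here is a minimal sketch; the change number format, description, and filenames are all invented examples.

```python
# Sketch of linking change-control entries to supporting documents
# via a change number; all entries here are hypothetical.
change_log = {
    "CHG-2011-014": {
        "description": "Added lab range check to study database",
        "documents": ["validation_plan_CHG-2011-014.pdf",
                      "test_results_CHG-2011-014.pdf",
                      "validation_summary_CHG-2011-014.pdf"],
    },
}

def documents_for(change_number: str) -> list:
    """Retrieve all evidence filed under a change number, e.g., for an audit."""
    return change_log.get(change_number, {}).get("documents", [])

print(documents_for("CHG-2011-014"))
```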
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
RELEASING CHANGES
Whenever possible, the implementation and testing of changes should take place “on the side” so that they do not affect production use until after they have been tested and approved. This is not always possible, so testing may have to take place in the production environment. Clearly the system should not be in use at that time, and there must be a way to get back to the original state in case testing turns up a problem. When the testing is complete and all other requirements for documentation, training, review, and so on are met, then the change can be released for production use. The release may involve duplicating the change in the production environment or making the production environment available for use. It is important to record the actual date and time the change was released for production and to record who made (and reviewed) the final release.

For most, but not all, changes to classic CDM systems, the change is applied to the software or, in the case of a study database change, to the data in place. That is, the study data is not moved or copied. This is not true for all EDC systems, some of which require a copy or migration of the data. Some significant updates to the databases underlying CDM systems may also require a migration of the data for studies built with the system. Whenever data is transferred or copied, 21 CFR (Code of Federal Regulations) Part 11 comes into play. When data is migrated or copied, it must be shown to be complete and unchanged. This is usually done by comparing a prechange snapshot of the data to a postchange snapshot. See Chapter 27 for a further discussion of migrating data.
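One common way to carry out the pre/post snapshot comparison mentioned above is to checksum an exported snapshot from each environment and compare the digests. The sketch below assumes hypothetical snapshot filenames; it is an illustration of the comparison technique, not the book's prescribed procedure.

```python
# Sketch of a pre/post migration comparison using file checksums to
# show data was copied complete and unchanged (filenames hypothetical).
import hashlib

def checksum(path: str) -> str:
    """Compute the SHA-256 digest of a snapshot file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

pre = checksum("study123_pre_migration.csv")
post = checksum("study123_post_migration.csv")
print("Snapshots match" if pre == post else "DIFFERENCE FOUND - investigate")
```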
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null
MedDRA
MedDRA is a clinically validated medical terminology used to classify adverse event information associated with biopharmaceuticals and medical devices. MedDRA was developed under the auspices of the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). An international maintenance and support services organization (MSSO) maintains and updates MedDRA, provides licensing arrangements, and distributes the dictionary. In March 2003, the FDA issued a proposed rule mandating that MedDRA be used for postmarketing submission of individual safety reports. The European Medicines Agency requires that all serious adverse event (SAE) reports and all periodic safety update reports (PSURs) be submitted using MedDRA codes. Japan and Canada also require or recommend the use of MedDRA coding. The sophisticated structure of this dictionary supports reported or low-level terms that map to single preferred terms; these preferred terms in turn have associations with higher-level groups and system organ classes.
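The hierarchy described above (low-level terms mapping to preferred terms, which roll up to system organ classes) can be sketched as nested mappings. The terms in this example are illustrative only and are not taken from a licensed MedDRA release.

```python
# Simplified sketch of the MedDRA hierarchy; example terms are
# illustrative, not from an actual licensed dictionary release.
llt_to_pt = {"Pain in head": "Headache", "Cephalalgia": "Headache"}
pt_to_soc = {"Headache": "Nervous system disorders"}

def code_term(reported: str) -> tuple:
    """Map a reported (low-level) term to its preferred term and SOC."""
    pt = llt_to_pt.get(reported)
    return pt, pt_to_soc.get(pt)

print(code_term("Pain in head"))  # ('Headache', 'Nervous system disorders')
```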
[ "Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf" ]
Susanne_Prokscha_Practical_Guide_to_Clinical_Data_Management_Third_Edition-CRC_Press_2011.pdf
pdf
glossary.json
null
null