13 Jan 2026
Pediatric Clarity in CTCAE v6: What Changed From CTCAE v5 (and Why It Reduces Variability)
CTCAE v6.0 (released July 22, 2025 by the NCI) is the successor to CTCAE v5.0 (2017). It is not a total rewrite, but it contains several high-impact, documented changes that affect real trial operations: explicit MedDRA Lowest Level Term (LLT) anchoring (MedDRA 28.0), targeted hematology updates (notably neutrophils, platelets, and lymphopenia), clearer pediatric language in global grade definitions, and explicit guidance to avoid certain kinds of double reporting when a marrow diagnosis is used.
This article focuses on practical, verifiable differences between CTCAE v5 and CTCAE v6, and it separates what CTCAE defines (terms and grades) from what your protocol defines (DLTs, dose modifications, and reporting triggers).
Confirmed change: pediatric activity language in global definitions
CTCAE v6 global grade definitions explicitly reference impact on age-appropriate normal daily activity in pediatric patients. This clarifies how functional impact should be interpreted for children.
Why this matters
Pediatric trials often show site-to-site variability when adult-centric activity definitions are applied to children. Explicit pediatric language reduces ambiguity and supports more consistent grading.
Operationalizing pediatric grading
Provide age-specific examples in training, and ensure documentation captures functional impact. For analytics, interpret changes in variability cautiously when comparing v5-era and v6-era pediatric datasets.
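For the analytics piece, a minimal sketch of that stratified comparison is below (pandas; the column names ctcae_version, site_id, and grade are assumptions about your own data model, not a standard layout). The point is simply to keep v5-era and v6-era pediatric records separated before computing any variability metric.

```python
import pandas as pd

# Assumed flat export of graded pediatric adverse events; column names are illustrative.
ae = pd.read_csv("pediatric_adverse_events.csv")

# Site-level grade distributions computed within each CTCAE version, never pooled,
# so a definitional change between v5 and v6 is not misread as a change in site behavior.
by_version_site = (
    ae.groupby(["ctcae_version", "site_id", "grade"])
      .size()
      .rename("n_events")
      .reset_index()
)

# One crude variability signal: spread of site mean grades, reported separately per version.
site_means = ae.groupby(["ctcae_version", "site_id"])["grade"].mean()
print(site_means.groupby("ctcae_version").std())
```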
Implementation notes (what to do next)
CTCAE v6 changes are targeted, but they are enough to break assumptions in protocols, EDC logic, analytics, and automation. The safest practice is to treat CTCAE version as part of the measurement method, then align every downstream step to that version: protocol language, CRFs, edit checks, safety database reconciliation, dashboards, and any AI-assisted grading tools.
When teams must compare CTCAE v5 and CTCAE v6 programs, avoid comparing grades as if they are equivalent. Instead, compare stable primitives (raw laboratory nadirs and durations) and clinically meaningful sequelae (febrile neutropenia, infection admissions, transfusions, bleeding requiring intervention). If version-mixed reporting is unavoidable, document the limitation clearly and, where feasible, re-grade lab-derived endpoints under CTCAE v6 thresholds when raw values exist. This is especially feasible for neutrophil-driven endpoints because ANC values are typically captured as structured data.
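A hedged sketch of that re-grading step for ANC is below. The threshold table is deliberately an input, to be loaded from your validated CTCAE v6 reference rather than hardcoded here, and names like regrade_anc and the tuple layout are illustrative assumptions, not a standard API.

```python
def regrade_anc(anc_cells_per_mm3: float, thresholds: list[tuple[float, int]]) -> int:
    """Map a raw ANC nadir to a grade using a version-specific threshold table.

    `thresholds` is a list of (upper_bound_exclusive, grade) pairs sorted ascending,
    sourced from your controlled CTCAE v6 reference -- no grade boundaries are
    embedded in this sketch on purpose.
    """
    for upper_bound, grade in thresholds:
        if anc_cells_per_mm3 < upper_bound:
            return grade
    return 0  # above every bound: not a graded neutrophil decrease

# Usage, assuming v6_anc_thresholds was loaded from a validated lookup:
# regraded = [regrade_anc(value, v6_anc_thresholds) for value in raw_anc_nadirs]
```

Deterministic re-grading like this only works for endpoints driven by a single structured lab value; qualifier-dependent grades (transfusion, bleeding requiring intervention) cannot be reconstructed the same way and are better stratified by CTCAE version.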
Train humans as deliberately as you train systems. A large fraction of grading variation comes from habit. Short, case-based training that contrasts CTCAE v5 and CTCAE v6 using concrete examples (for example ANC 300, platelet 18,000 with and without transfusion, lymphopenia as present) is often the fastest way to restore consistency across sites and reviewers.
Quality control checks that catch CTCAE version errors
Build a small set of “unit tests” for safety data. For neutrophils, scan for cases where ANC is between 100 and 500 and the dataset reports Grade 4 on a CTCAE v6 study; those are strong candidates for v5 logic leaking into a v6 build. For platelets, scan for thrombocytopenia grades that appear to be derived solely from a single lab value without supporting transfusion or bleeding context; those are candidates for inconsistent qualifier capture. For lymphopenia, scan for numeric grades; under CTCAE v6 they should generally not exist unless your team created protocol-defined analytic categories and labeled them explicitly.
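A minimal sketch of those three scans is below, assuming a flat adverse-event export with columns such as term, grade, ctcae_version, anc_value, transfusion_given, and bleeding_event (all names are assumptions about your own schema):

```python
import pandas as pd

ae = pd.read_csv("safety_events.csv")  # assumed flat export of graded AEs with lab context
v6 = ae[ae["ctcae_version"] == "6.0"]

# 1. Neutrophils: ANC between 100 and 500 reported as Grade 4 on a v6 study
#    is a strong candidate for v5 logic leaking into the v6 build.
suspect_anc = v6[
    v6["term"].str.contains("neutrophil", case=False, na=False)
    & v6["anc_value"].between(100, 500, inclusive="left")
    & (v6["grade"] == 4)
]

# 2. Platelets: a grade with no transfusion or bleeding context captured is a
#    candidate for inconsistent qualifier capture (a review flag, not necessarily an error).
no_qualifiers = ~(
    v6["transfusion_given"].fillna(False).astype(bool)
    | v6["bleeding_event"].fillna(False).astype(bool)
)
suspect_plt = v6[
    v6["term"].str.contains("platelet|thrombocytopenia", case=False, na=False) & no_qualifiers
]

# 3. Lymphopenia: numeric grades should generally not exist under v6 unless they are
#    explicitly labeled protocol-defined analytic categories.
suspect_lymph = v6[
    v6["term"].str.contains("lymphopenia|lymphocyte", case=False, na=False) & v6["grade"].notna()
]

for name, hits in [("ANC", suspect_anc), ("Platelets", suspect_plt), ("Lymphopenia", suspect_lymph)]:
    print(f"{name}: {len(hits)} record(s) flagged for manual review")
```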
In addition, add “version pins” to your downstream exports. A surprisingly common failure mode is that the study uses CTCAE v6, but a downstream analytics mart or visualization layer assumes v5 because that was the default in older pipelines. If you persist the CTCAE version as a required field, this class of silent error becomes detectable and auditable.
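One way to make the pin enforceable rather than advisory is a small guard at export time; the function and field names below are assumptions about your own pipeline, not a standard interface.

```python
import pandas as pd

def assert_ctcae_version_pinned(export: pd.DataFrame, declared_version: str) -> None:
    """Fail loudly instead of letting a mart silently default to an older CTCAE version."""
    if "ctcae_version" not in export.columns:
        raise ValueError("Export is missing the required ctcae_version column")
    if export["ctcae_version"].isna().any():
        raise ValueError("Some records carry no ctcae_version value")
    observed = set(export["ctcae_version"].unique())
    if observed != {declared_version}:
        raise ValueError(
            f"Study is declared CTCAE {declared_version} but the export contains {sorted(observed)}"
        )

# e.g. assert_ctcae_version_pinned(analytics_mart, declared_version="6.0") in the build step
```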
Communicating the change (so stakeholders do not misread the data)
When you present safety tables to internal leadership, DSMBs, or external partners, include one sentence that makes the definitional point: select hematologic grading boundaries differ between CTCAE v5 and CTCAE v6, so grade distributions cannot be compared naïvely across versions. Then immediately follow with stable comparators (raw labs and clinically meaningful sequelae). This framing prevents misinterpretation and supports confident decision-making without inflating the apparent complexity of the change.
If your program includes both v5 and v6 trials, consider a short “methods appendix” that documents your harmonization strategy. The appendix does not need to be long, but it should be explicit about what you did (for example, re-graded ANC-based events under v6 thresholds when raw ANC existed, otherwise stratified by CTCAE version) and what you did not do (for example, did not attempt to reconstruct lymphopenia grades under v6). This is the kind of clarity that de-risks audits and publications.