13 Jan 2026

Regulatory Futures: Governing AI-Driven CTCAE Without Paralyzing Innovation

Can we modernize safety rules without sidelining human judgment?

As AI moves into oncology decision support, regulators and health systems face a tricky balancing act. They must encourage innovation that can reduce harm while preventing over-reliance on opaque systems that may fail in subtle, dangerous ways.

CTCAE-based automation sits in the middle of this tension. It promises earlier detection of severe toxicity and more consistent grading. It also introduces new modes of failure that traditional frameworks did not anticipate.

Why CTCAE automation challenges existing regulatory models

Traditional safety oversight assumes:

  • Humans generate and document CTCAE grades.

  • Systems are essentially passive recorders and transmitters.

  • Audits can reconstruct the human reasoning from notes, training, and protocol language.

CTCAE automation breaks those assumptions.

Now, algorithms participate in the reasoning, suggesting terms and grades, filtering evidence, and shaping what humans see. Documentation may reflect joint human–machine decisions, not purely human deliberation.

Regulators must be able to answer:

  • What role did AI play in each CTCAE decision?

  • How was the AI validated, monitored, and updated?

  • How do we attribute responsibility when automated suggestions influence outcomes?

Principles for a human-first regulatory approach

Rather than treat AI as a co-equal actor, regulatory frameworks should preserve a clear hierarchy:

  1. Human accountability remains primary.

Clinicians and organizations remain responsible for CTCAE decisions, regardless of AI involvement.

  2. AI is treated as high-risk decision support, not an autonomous system.

Regulations should reflect that AI can materially influence safety, even if it does not act alone.

  3. Transparency and auditability are mandatory.

It must be possible to reconstruct, for any AE, what the AI suggested, what the human decided, and what evidence was visible at the time.
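To make that requirement concrete, here is a minimal sketch (in Python) of the kind of per-AE audit record an organization could log. The class and field names (AISuggestion, HumanDecision, AEAuditRecord, overrode_ai) are illustrative assumptions, not an existing standard or vendor schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    # Sketch only: structure and field names are assumptions, not a CTCAE
    # or regulatory standard.

    @dataclass
    class AISuggestion:
        ctcae_term: str            # e.g. "Diarrhea"
        suggested_grade: int       # CTCAE grade proposed by the model
        model_version: str         # exact model build that made the suggestion
        evidence_shown: list[str]  # IDs of the notes/labs visible to the clinician

    @dataclass
    class HumanDecision:
        final_grade: int           # grade actually recorded
        decided_by: str            # clinician identifier
        overrode_ai: bool          # True if the final grade differs from the suggestion
        rationale: Optional[str] = None

    @dataclass
    class AEAuditRecord:
        patient_id: str
        ae_id: str
        human: HumanDecision
        ai: Optional[AISuggestion] = None  # None when no AI suggestion was shown
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

A record like this is enough to answer, for any adverse event, what the model proposed, what evidence the clinician could see, and whether the final grade diverged from the suggestion.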

Policy levers for safer CTCAE automation

Several policy levers can support this approach:

  • Pre-market expectations for CTCAE AI, including external validation, subgroup performance analysis, and explicit documentation of intended use (for example, "suggestion-only, not auto-commit").

  • Post-market surveillance that monitors performance drift, error rates, and patterns of human override (a monitoring sketch follows below).

  • Documentation standards that require logging AI involvement in safety decisions, even if only as a metadata layer.

  • Governance requirements at the institutional level: oversight committees, change-control processes, and escalation paths.

These levers do not require regulators to understand every modeling detail. They require evidence that organizations have built and are enforcing a robust safety framework around the technology.
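As one illustration of the post-market surveillance lever above, the sketch below (which assumes audit records shaped like the AEAuditRecord example earlier) tracks how often clinicians override AI suggestions each month and flags abrupt shifts. The function names and the 10-percentage-point threshold are arbitrary choices for illustration.

    from collections import defaultdict

    def monthly_override_rates(records):
        """Share of AI-assisted AE decisions per month in which the clinician's
        final grade differed from the AI suggestion."""
        shown = defaultdict(int)
        overridden = defaultdict(int)
        for r in records:
            if r.ai is None:              # skip AEs graded without an AI suggestion
                continue
            month = r.timestamp.strftime("%Y-%m")
            shown[month] += 1
            if r.human.overrode_ai:
                overridden[month] += 1
        return {m: overridden[m] / shown[m] for m in sorted(shown)}

    def flag_shifts(rates, threshold=0.10):
        """Months whose override rate moved more than `threshold` versus the
        previous month. Either direction deserves review: a spike may signal
        model drift, a collapse toward zero may signal rubber-stamping."""
        months = sorted(rates)
        return [m for prev, m in zip(months, months[1:])
                if abs(rates[m] - rates[prev]) > threshold]

The point is not the specific statistic; it is that the signal exists, is logged, and is reviewed by someone with the authority to act on it.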

Avoiding two extremes: paralysis and blind faith

There are two regulatory failures to avoid:

  • Paralysis, where fear of AI leads to blanket prohibition or hyper-conservative rules that prevent beneficial tools from being deployed.

  • Blind faith, where excitement about innovation leads to weak oversight and over-dependence on vendor assurances.

The middle path is conditional trust: AI is welcome in CTCAE workflows, provided it meets well-defined requirements for validation, transparency, and governance, and provided that human judgment stays in the driver's seat.

Human skills as a protected asset

Finally, regulatory frameworks should treat human expertise itself as a protected asset.

That means:

  • Encouraging periodic "AI-off" evaluation of CTCAE grading skills (see the sketch after this list).

  • Guarding against workflows that make it impossible to disagree with AI suggestions in practice.

  • Supporting training programs that teach clinicians how to work with AI critically, not passively.
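As a hypothetical sketch of what an "AI-off" check could compute, the function below compares a clinician's unaided CTCAE grades against adjudicated reference grades for a fixed set of test cases. The two metrics (exact agreement and mean absolute grade difference) are illustrative, not a validated competency measure.

    def ai_off_agreement(clinician_grades, reference_grades):
        """Compare unaided clinician grades with adjudicated reference grades.
        Inputs are parallel lists of integer CTCAE grades."""
        if len(clinician_grades) != len(reference_grades) or not reference_grades:
            raise ValueError("expected two non-empty lists of equal length")
        n = len(reference_grades)
        exact = sum(c == r for c, r in zip(clinician_grades, reference_grades))
        total_diff = sum(abs(c - r) for c, r in zip(clinician_grades, reference_grades))
        return {"exact_agreement": exact / n, "mean_abs_grade_diff": total_diff / n}

Run periodically, a check like this tells an institution whether grading skill is holding steady or quietly atrophying behind the automation.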

If regulations succeed, AI will not erode CTCAE expertise; it will sit on top of it, enhancing what humans can see and synthesize.

In that world, clinical decision support remains exactly what the name promises: support, not substitution. AI becomes a powerful ally in predicting and preventing patient harm, while CTCAE and human judgment remain the core of oncology safety.

Marc Saint-jour, MD

marc@burna.ai
