Bots are Users Too: Why HTI-5 Changes Everything

HTI-5: The “Digital Power of Attorney”

We were all expecting HTI-5 to have a strong focus on AI. Honestly, I was hoping they would formally adopt the Model Context Protocol (MCP) to standardize how AI connects to health systems. MCP did get a mention as a “creative solution” for the future, but the biggest story in HTI-5 is not about which standard we use.

It is about access.

How you get the data, the technical method, matters less than whether you are allowed to get it in the first place. That is the real shift. HTI-5 moves the conversation away from technical standards and toward something far more consequential: the legal definition of who, or what, is allowed to act on behalf of a clinician or patient.

Below is a breakdown of how this rule shifts the focus from “Standards” to “Access.”


1. Access Is Defined by Outcome, Not Technology

Since Meaningful Use, interoperability debates have largely been technical. The questions were familiar:

  • Do you support C-CDA?

  • Do you expose a FHIR API?

If the answer was “yes,” the obligation was considered satisfied.

HTI-5 dismantles that defense. The rule proposes rewriting the legal definitions of “Access” and “Use” to explicitly include automation technologies such as Robotic Process Automation (RPA) and autonomous artificial intelligence systems.

The key takeaway is this: developers and providers no longer have to wait for a perfectly structured FHIR resource to be built. If an AI agent or RPA bot can retrieve the data, even if it must navigate a screen the way a human would, that retrieval is now a protected form of access.

This reframes interoperability as an outcome-based obligation rather than a format-based compliance exercise.
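To make the outcome-based framing concrete, here is a minimal sketch in Python. Both retrieval functions are hypothetical stubs, not real EHR integrations: the point is only that the obligation attaches to whether authorized retrieval succeeds, not to which path produced the data.

```python
# Hypothetical sketch of outcome-based access: prefer the structured
# FHIR path, but fall back to an RPA-style path when the API falls short.
# Neither function below is a real integration; both are stand-ins for
# whatever access paths a given EHR actually exposes.

def fetch_via_fhir(patient_id: str) -> dict:
    """Preferred path: a structured FHIR read (stubbed as unavailable)."""
    raise ConnectionError("FHIR endpoint incomplete for this resource")

def fetch_via_ui_automation(patient_id: str) -> dict:
    """Fallback path: automated retrieval from the same screens a
    clinician would view (stubbed)."""
    return {"patient": patient_id, "source": "ui-automation"}

def fetch_record(patient_id: str) -> dict:
    """Under an outcome-based reading, a failed API call does not end
    the obligation; the authorized agent may take the analogous path."""
    try:
        return fetch_via_fhir(patient_id)
    except ConnectionError:
        return fetch_via_ui_automation(patient_id)

record = fetch_record("pat-123")
print(record["source"])
```

The fallback chain is the whole argument in miniature: the format of the successful path is secondary to the fact that access, once authorized, was actually delivered.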


2. “Bots Are Users Too” and the Analogous Standard

This is where the “Digital Power of Attorney” concept becomes real.

Previously, vendors could block bots by arguing that they did not support that specific “manner” of access. If the access path differed from what was originally intended, the request could be denied under existing exceptions.

HTI-5 changes the standard for the “Manner Exception Exhausted” condition, which is part of the Infeasibility Exception, from requiring the “Same” access to requiring “Analogous” access.

In practice, this means:

  • If a human doctor is allowed to view a screen, their AI assistant must be allowed to view it analogously.

  • If information is accessible through the interface, automation cannot be categorically excluded simply because it is automated.

This legitimizes screen scraping and automated workflows when APIs fall short. The AI agent, acting as the clinician's digital twin, now has a defensible right to represent the human user at the keyboard.

This is not simply a technical adjustment. It is a recognition that AI agents can function as extensions of licensed professionals.


3. Unlocking Write Access

The biggest constraint on AI agents has not been reading data. It has been acting on it.

EHR vendors have often blocked “write” access by claiming it was infeasible to allow third parties to modify the chart. The Infeasibility Exception provided a broad shield against external modification requests.

HTI-5 removes the “Third Party Seeking Modification Use” condition from that exception. As a result, the Infeasibility safe harbor narrows significantly.

This opens the door for more legitimate write-based workflows, including:

  • Draft note insertion

  • Prior authorization preparation and submission

  • Order preparation

  • Referral workflows

  • Care gap documentation

  • Administrative task delegation

If read access enables insight, write access enables action. That distinction is critical. It marks the transition from AI as a passive analytical tool to AI as operational infrastructure embedded within clinical and administrative workflows.
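As one illustration of the first workflow above, a draft note can be expressed as a standard FHIR R4 DocumentReference whose `docStatus` is `preliminary`, keeping it out of the signed record until a clinician reviews it. This is a sketch of the resource construction only; the endpoint, authorization flow, and vendor-specific profiling are all assumptions that vary by EHR.

```python
import base64
import json

def draft_progress_note(patient_id: str, note_text: str) -> dict:
    """Build a FHIR R4 DocumentReference representing a *draft* note.

    docStatus 'preliminary' marks the note as unsigned, so an AI agent
    can stage content for clinician review without touching the legal
    record. The LOINC code 11506-3 is the standard code for a progress
    note.
    """
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "preliminary",  # draft until clinician sign-off
        "type": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "11506-3",
                "display": "Progress note",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "content": [{
            "attachment": {
                "contentType": "text/plain",
                # FHIR attachments carry inline content as base64
                "data": base64.b64encode(note_text.encode()).decode(),
            }
        }],
    }

note = draft_progress_note("pat-123", "AI-drafted visit summary for review.")
# In a live integration this JSON would be POSTed to the EHR's FHIR
# base URL (POST {base}/DocumentReference); details vary by vendor.
print(json.dumps(note, indent=2))
```

The design choice worth noticing is the separation of staging from signing: the agent writes, but a licensed human still promotes the note from `preliminary` to `final`, which is exactly the delegation boundary the rule contemplates.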


The Larger Signal: Standards Matter, but Permission Matters More

HTI-5 effectively declares that waiting for the perfect standard is no longer a sufficient excuse for blocking innovation. By explicitly protecting RPA and AI agents within the definition of Access, ASTP/ONC acknowledges that enforcement must focus on outcomes rather than implementation purity.

If the front door, the API, is incomplete or selectively constrained, this rule strengthens the argument that the side window cannot be arbitrarily sealed off.

The method matters less. The permission matters more.


What This Means for Builders and Health Systems

For AI builders, this rule is not just about connectivity. It is about legitimacy. It signals that AI agents can serve as lawful representatives of clinicians and patients, provided they operate within the guardrails of authorization and compliance.

For health systems and EHR vendors, the bar shifts from “Did we expose the API?” to “Did we meaningfully enable access and use?”

The next wave of healthcare AI will not be defined solely by model performance. It will be defined by which organizations can responsibly build, deploy, and govern AI representatives that both read and act within clinical environments.

The strategic question now becomes: which workflows will be delegated first, and who will become the trusted infrastructure layer that carries that delegated authority?

