When the Machine Gets It Wrong, Who Pays the Price?

Across the country, judges have begun sending a message that is both blunt and unmistakable. Lawyers may use artificial intelligence, but they may not hide behind it. When an AI system hallucinates facts, fabricates citations, or mangles the record, “the software did it” is not a defense. The obligation to verify, authenticate, and understand what is being filed remains firmly human.

That principle is no longer theoretical. Courts have sanctioned attorneys for submitting AI-generated briefs riddled with fictitious case law. Judges have required sworn certifications that filings were personally reviewed. Bar associations have reiterated that professional responsibility does not evaporate merely because a task was assisted—or even largely performed—by software. In the eyes of the law, tools do not dilute duty.

Yet while the legal profession has begun to absorb this lesson, a parallel transformation is unfolding in court reporting and litigation support, one that has not yet been tested as rigorously but almost certainly will be. Automated speech recognition, AI-generated summaries, and digitally captured records are increasingly being positioned as substitutes for certified human stenographers. The efficiency gains are real. The risks are often downplayed. And the liability framework remains unsettled.

The question is no longer whether ASR systems make mistakes. They do. The question is who will bear responsibility when those mistakes materially affect a case.

The Error That Changed the Outcome

Consider a scenario that is becoming increasingly common in civil litigation.

In a personal injury case involving significant medical harm, a treating physician testified during deposition about the plaintiff’s prognosis. Among the details discussed was a crucial point: the patient might require future surgical intervention. It was not dramatic testimony, nor was it emphasized with flourish. It was a clinical statement, offered in measured terms, and it mattered.

The deposition was captured using an ASR-based system. An AI-generated summary was produced and circulated internally. That summary failed to include the doctor’s testimony regarding potential future surgery.

Relying on the summary rather than the full transcript, the insurance adjuster assigned a markedly lower value to the case. Settlement negotiations stalled. The matter proceeded to trial. At trial, the full scope of the medical testimony came into focus, including the likelihood of future surgical care. The jury’s verdict reflected that reality. The insurer lost. The attorneys lost. The consequences were financial, professional, and reputational.

What did not happen is just as instructive.

No claim was brought against the ASR provider. No vendor was sued for producing an incomplete or misleading summary. No indemnity was sought for reliance on a flawed technological tool. The responsibility—and the fallout—rested entirely with the attorneys who had trusted the output.

This is the pattern courts are already enforcing. If you rely on AI, you own the result.

A Standard the Law Already Knows

From a legal perspective, this outcome is not surprising. The law has long rejected the notion that professionals can outsource responsibility along with labor.

Lawyers cannot blame junior associates for missed deadlines. Physicians cannot shift liability to diagnostic devices. Architects cannot blame drafting software for structural failures. In regulated professions, accountability follows licensure, not delegation.

Artificial intelligence has not altered that principle. It has merely tested how firmly courts are willing to apply it.

In litigation, the duty of competence and candor is nondelegable. Attorneys are expected to understand the materials they submit, the evidence they rely upon, and the representations they make. Tools may assist in that process, but they do not replace professional judgment. When something goes wrong, the presence of AI does not soften the standard of care.

What remains unresolved is how that logic will be applied when the error originates not in a brief or filing, but in the record itself.

The Fragile Chain of the Record

For generations, responsibility for the legal record was straightforward. A licensed court reporter created the record, certified its accuracy, and stood behind it. That responsibility was ethical, professional, and legal. When a transcript was challenged, there was a clearly identifiable individual who could answer for every word.

That clarity begins to dissolve when stenographers are replaced with digital recording systems.

In those environments, responsibility fractures across a diffuse and often opaque chain. When a word is missed or a sentence disappears, it becomes difficult to identify who is accountable. Was the failure caused by the agency that sold the service? The contractor tasked with monitoring the recording? The transcriptionist working from compromised audio? The individual who failed to activate the recording equipment at the right moment? Or the automated system that attempted to process speech it never fully captured?

The industry frequently responds to these questions with silence or deflection. Accountability is buried in layered contracts and boilerplate disclaimers. Responsibility is shifted downstream, even as the outputs are marketed as reliable enough to support litigation strategy and settlement valuation.

Courts, however, do not resolve disputes by parsing disclaimers alone. They examine duty, control, and reliance. They ask who invited trust and who benefited from it. Those principles will determine liability when the record fails, regardless of how modern or efficient the technology may be.

And reliance on these systems is growing.

Attorneys are increasingly encouraged—sometimes aggressively—to accept AI-assisted transcripts, automated summaries, and near-real-time outputs as substitutes for certified human records. The argument is always the same. The process is faster. The cost is lower. The result is good enough.

That logic holds only until it does not.

When “Good Enough” Fails

Errors in ASR systems are not hypothetical. Accents confuse models. Medical terminology collides with homophones. Overlapping speech disappears. Context is flattened. Summaries prioritize what an algorithm predicts to be important, not what the law later determines was decisive.
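The arithmetic of why those omissions slip through is worth making concrete. Below is a minimal sketch in Python, using invented transcript text rather than data from any real case: it computes a standard word error rate and shows that deleting the one decisive clause from a long stretch of routine testimony barely moves the aggregate score.

```python
# Hypothetical illustration: aggregate accuracy can look excellent even when
# the single decisive clause is gone. All transcript text here is invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

routine = "counsel asked the witness to describe the treatment and she did " * 30
decisive = "the patient may require future surgical intervention "
reference = routine + decisive + routine   # what was actually said
hypothesis = routine + routine             # the decisive clause never made it out

print(f"overall WER: {word_error_rate(reference, hypothesis):.1%}")  # roughly 1%
print("'surgical' in output:", "surgical" in hypothesis)             # False
```

A vendor could truthfully advertise ninety-nine percent accuracy on that deposition and still have deleted the testimony that decided the case.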

Most of the time, those errors are invisible. They surface only when a dispute escalates, when testimony is revisited, or when a jury hears something that never appeared in the machine-generated materials.

When that happens, the consequences are rarely evenly distributed. Attorneys face malpractice exposure. Clients question strategy. Insurers reassess risk. Careers suffer.

What has not yet happened at scale is a reckoning upstream.

Vendors, Tools, and the Limits of Disclaimers

ASR and AI vendors typically position their products as tools, not replacements. They emphasize that outputs must be reviewed. They disclaim responsibility for final use. Contractually, those distinctions matter.

Legally, they may not always be decisive.

When technology is marketed into a consequential, regulated environment like litigation, courts will eventually examine whether it was reasonably fit for the purpose it was sold to perform. That inquiry will not turn on innovation or intent. It will turn on harm, reliance, and foreseeability.

As failures accumulate and losses become harder to absorb, attorneys and firms will begin asking whether the risk allocation currently borne entirely by licensed professionals is sustainable.

The Question No One Has Answered Yet

Consider ASR-enhanced tools that integrate with modified CAT workflows, including products such as DepoDash. These systems often involve multiple layers of automation and human intervention. Speech is captured by software, processed algorithmically, and then reviewed or edited by scopists or transcriptionists.

If a material error survives that pipeline and causes demonstrable harm, where does responsibility lie?
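Answering that question presupposes knowing who touched each piece of the record, and when. As a thought experiment, here is a minimal sketch of the kind of per-segment audit trail such a pipeline would need before fault could be traced at all; the names, fields, and workflow are hypothetical, not a description of DepoDash or any other vendor's actual internals.

```python
from dataclasses import dataclass, field

# Hypothetical per-segment audit record for a generic ASR -> scopist -> QC
# pipeline. Every field and actor below is invented for illustration.

@dataclass
class Segment:
    start_sec: float                  # offset into the audio
    text: str                         # current text of the segment
    asr_confidence: float             # the model's own score, 0.0 to 1.0
    history: list[str] = field(default_factory=list)  # every hand that touched it

    def log(self, actor: str, action: str) -> None:
        self.history.append(f"{actor}: {action}")

# One low-confidence segment moving through the chain described above.
seg = Segment(start_sec=1843.2,
              text="patient may require future surgical intervention",
              asr_confidence=0.41)
seg.log("asr-engine", "initial recognition, confidence 0.41")
seg.log("scopist", "reviewed against audio, no change")
seg.log("agency-qc", "spot check skipped; no low-confidence escalation rule")

for entry in seg.history:
    print(entry)
```

Without something like this, a court asked to apportion fault among the layers has nothing to parse but the contracts.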

At present, the answer defaults to the licensed professional closest to the courtroom or filing. But history suggests that allocation will not remain static. As precedent develops, courts will be asked to determine whether vendors who design, market, and profit from these systems bear any responsibility when predictable failures occur.

A Reckoning on the Horizon

What seems most likely is not a single landmark lawsuit, but a gradual shift. Documented failures will accumulate. Internal audits will become more common. Disputes between firms and vendors will quietly increase. Eventually, a case will present facts too stark to ignore—a missed recording, a lost appeal, a verdict turned on an absent sentence.

When that case arrives, courts will be asked to draw lines the industry has thus far avoided drawing itself.

And when those lines are drawn, the standard will look familiar.

You may use artificial intelligence. You may automate portions of your workflow. But if you rely on it, and it gets the record wrong, you will be expected to answer for the consequences.

For now, that burden rests squarely on attorneys and licensed professionals. Whether it remains there is not a question of technology. It is a question of accountability.

And accountability, in the end, always catches up.


The Tiger and the Buffalo

There is a useful way to understand where this trajectory may lead, and it comes not from the courtroom, but from nature.

In the wild, a tiger ambush is a study in precision. The predator waits, calculates distance and timing, and commits only when the odds are overwhelming. A single misstep—a snapped twig, a mistimed lunge, a shift in the wind—and the balance reverses. The buffalo does not need to outrun the tiger. It only needs to survive the first mistake. When that happens, the tiger pays the price.

In the emerging legal-technology ecosystem, automated speech recognition and AI vendors increasingly resemble the tiger. Their products are fast, powerful, and marketed as decisive advantages. Attorneys, by contrast, are the buffalo: large, valuable, and deeply invested in moving forward efficiently, trusting that the tools they rely on will perform as promised.

For now, the tiger has the advantage. Attorneys absorb the risk. When AI summaries omit facts, when ASR misses testimony, when errors surface too late to correct, courts place responsibility squarely on the lawyer. The buffalo stumbles, and the tiger escapes unnoticed.

But that balance depends on one condition: that the errors remain survivable.

The moment a single mistake causes demonstrable, outsized harm—a lost case, a blown settlement, a malpractice judgment—the calculus changes. At that point, the attorney who relied on the tool will not be asking whether they should have double-checked the output. They will be asking whether the product was fit for the purpose it was sold to perform, whether its limitations were adequately disclosed, and whether the risk allocation was reasonable.

That is the moment the buffalo turns.

History suggests that when professionals consistently bear losses caused by third-party tools, liability does not remain static. It migrates. Attorneys who are sanctioned today quietly accept the consequences. Attorneys who lose clients tomorrow will begin seeking contribution. Attorneys who face malpractice exposure will look upstream. Not out of malice, but out of necessity.

In that future, the question will no longer be whether AI can assist legal work. Courts have already answered that. The question will be whether vendors who profit from automation can indefinitely avoid responsibility when predictable failures cause foreseeable harm. That is not a technological question. It is a legal one.

And like a tiger ambush, it may take only one mistake for the balance to shift.


Collective Power and the Rewriting of Responsibility

There is one additional dynamic in the natural world that sharpens the analogy further. A buffalo facing danger does not always flee alone. When threatened, it can call for backup. The herd gathers. Strength multiplies. What an individual animal cannot withstand, a unified group can repel.

In the legal ecosystem, attorneys function much the same way. They are not isolated actors. They are members of a powerful, organized profession capable of acting collectively when their interests are challenged. When that collective force is mobilized, it can reshape entire systems.

The Stop the SoCal Stip movement illustrates this dynamic with unusual clarity.

That effort did not originate with attorneys. It was initiated by court reporters, who moved to halt a four-decade-old practice in which attorneys routinely stipulated to relieve the court reporter of the duty of holding the original transcript until trial. In practice, the stipulation let the parties share a single transcript, effectively depriving reporters of compensation for additional copies. For more than forty years, the practice had been normalized, even though it shifted costs onto the individual creating and certifying the record.

When reporters collectively refused to continue honoring those stipulations, they were not seeking leverage. They were enforcing the boundaries of their professional labor. They put their foot down and stopped giving away work product without compensation.

The response from segments of the plaintiffs’ bar was swift and structural.

Rather than renegotiate practices or accept the new boundary, attorneys turned to the legislature. The law was changed to expand the use of alternative methods of transcript production, including digital recording and non-stenographic capture, effectively bypassing court reporters altogether. What had begun as a labor and compensation dispute ended as a regulatory shift.

The message was unmistakable. When one professional group acts in concert to protect its economic interests, another group with greater institutional power can respond by changing the rules of the game.

That history matters now because it reveals how quickly collective attorney action can alter the landscape when a shared interest is threatened.

In the context of ASR and AI-driven tools, the same dynamics are quietly taking shape. Attorneys are currently absorbing the risk when automation fails. Courts place responsibility on the lawyer. Losses are individualized. Errors are tolerated as long as they remain manageable.

But that equilibrium depends on scale.

If AI-assisted transcript production begins to generate consistent, material errors that affect outcomes, valuations, or malpractice exposure, attorneys will not respond as isolated practitioners indefinitely. They will compare experiences. They will involve insurers. They will consult trade groups. They will ask whether the tools they were encouraged to rely upon were reasonably fit for their intended use.

That is when the herd moves.

The same profession that once used collective power to change the law in response to a labor dispute will not hesitate to use collective power again if technological reliance becomes a systemic liability. At that point, the focus will not be on whether attorneys should have double-checked the output. It will be on who else should share responsibility for predictable failures at scale.

In nature, a predator can survive one miscalculation against a solitary animal. It does not fare as well when the herd is alert, organized, and moving together.

The lesson is not moral. It is structural. Power consolidates when risk becomes shared. And when that happens, accountability has a way of migrating to where it has not previously been assigned.


Attorney Sidebar

What Courts Already Expect of You

Courts have made one principle unmistakably clear: the use of artificial intelligence does not diminish professional responsibility. Attorneys remain fully accountable for the accuracy, completeness, and reliability of any materials they submit, rely upon, or distribute—regardless of whether those materials were generated, summarized, or assisted by software.

Judges evaluating AI-related errors focus on familiar factors, not novelty. They ask whether the attorney exercised reasonable diligence, independently reviewed the output, and understood its limitations. Courts do not accept disclaimers, vendor marketing, or appeals to automation as substitutes for professional judgment.

In practice, this means attorneys must treat AI-assisted transcripts, summaries, and litigation tools as unverified drafts, unless and until they are personally reviewed against the underlying record. Reliance without verification is increasingly being viewed not as efficiency, but as risk.
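One way to operationalize that discipline, supplementing rather than replacing a read against the record, is a crude screen for transcript terms that never reached the summary. The sketch below is illustrative only; the term list, texts, and matching rules are placeholders invented for this example, not a vetted checklist.

```python
# Minimal sketch of treating an AI summary as an unverified draft: flag
# transcript terms that never made it into the summary. Keyword screening
# is crude and does not replace reading the transcript itself.

CRITICAL_TERMS = {"surgery", "surgical", "prognosis", "permanent", "future care"}

def missing_from_summary(transcript: str, summary: str) -> set[str]:
    """Return critical terms present in the transcript but absent from the summary."""
    t, s = transcript.lower(), summary.lower()
    return {term for term in CRITICAL_TERMS if term in t and term not in s}

transcript = "The doctor testified the patient may require future surgical intervention."
summary = "The doctor testified the patient is recovering as expected."

print(missing_from_summary(transcript, summary))  # {'surgical'}
```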


Publication Disclaimer

This article is for informational and educational purposes only and does not constitute legal advice. It reflects general trends and publicly documented developments in law and legal technology. References to products, technologies, or industry practices are descriptive, not accusatory, and are based on publicly available information and hypothetical scenarios. Readers should consult qualified legal counsel regarding specific legal or professional obligations.

Published by stenoimperium

We exist to fortify the stenography profession and ensure its survival for the next hundred years! As court reporters, we've handed the relationship with our customers, the attorneys, over to the agencies and their sales reps. That has done a lot of damage to our industry. It has taken away our ability to build those relationships, to be humanized and valued. We've become a replaceable commodity. Merely saying we are the “Gold Standard” tells them that we're the best, but that there are alternatives. Who we are, though, is much more powerful than that: we are the Responsible Charge.

“Responsible Charge” means responsibility for the direction, control, supervision, and possession of stenographic and transcription work, to assure that the work product has been critically examined and evaluated for compliance with appropriate professional standards by a licensee in the profession. By sealing and signing the documents, the professional stenographer accepts responsibility for the stenographic or transcription work those documents represent and affirms that applicable professional standards have been met.

This designation exists in other professions, such as engineering, land surveying, public water works, landscape architecture, fire prevention, geology, architecture, and more. In the case of professional engineers, the engineering association adopted a Responsible Charge position statement that says, “A professional engineer is only considered to be in responsible charge of an engineering work if the professional engineer makes independent professional decisions regarding the engineering work without requiring instruction or approval from another authority and maintains control over those decisions by the professional engineer's physical presence at the location where the engineering work is performed or by electronic communication with the individual executing the engineering work.”

If we were to adopt a Responsible Charge position statement for our industry, a first draft could read: “A professional court reporter, or stenographer, is only considered to be in responsible charge of court reporting work if the professional court reporter makes independent professional decisions regarding the court reporting work without requiring instruction or approval from another authority and maintains control over those decisions by the professional court reporter's physical presence at the location where the court reporting work is performed or by electronic communication with the individual executing the court reporting work.”

Shared Purpose

The cornerstone of a strategic narrative is a shared purpose: the outcome that you and your customer are working toward together. It is more than a value proposition of what you deliver to them, or a mission of what you do for the world. It is the journey you are on with them. With a shared purpose, the relationship shifts from consumer to co-creator. In court reporting, our mission is to bring justice to every litigant in the U.S. That purpose is shared by everyone involved in the litigation process: judges, attorneys, everyone. Who we are is the Responsible Charge. How we live it is by Protecting the Record.
