
As realtime access to testimony has become an expectation rather than a luxury, a growing number of technology vendors have begun advancing a provocative claim: that licensed court reporters are no longer essential to record creation. Their proposed alternative follows a familiar formula—digitally capture audio, run it through automated speech recognition, and have someone downstream “clean it up.”
On paper, the output may resemble a transcript. In practice, the approach strips away nearly every safeguard that gives a legal record its reliability, neutrality, and legitimacy.
This shift is not merely technical. It is structural. It changes who controls the record, who is accountable for it, and how sensitive information is captured, stored, and potentially exposed.
Tools Are Not Licenses
Court reporters have never been opposed to technology. The profession has consistently adopted tools that enhance accuracy and efficiency—computer-aided transcription software, realtime feeds, remote proceedings, and audio backups are now standard. Some of the most widely used tools in stenography were built by technologists deeply familiar with the field.
One example comes from the ProCAT ecosystem itself. ProCAT’s Flash writer machine and Winner CAT software are widely used by stenographic court reporters nationwide. Both products were created and are owned by Bob Bakva, a software engineer with roughly four decades of experience working in and around the court reporting industry. His familiarity with reporting workflows, CAT software, and transcript production tools is well established.
DepoDash, an automated speech recognition platform, was also created by Bakva.
What is equally clear—and undisputed—is that Bakva is not, and never has been, a licensed court reporter. That distinction matters. Industry proximity and technical expertise do not confer professional standing, licensure, or accountability. Tools can support licensed court reporters; they cannot replace licensure itself.
Yet some ASR-driven models, including DepoDash, have been positioned as functional substitutes for human court reporters, premised on the idea that automated output can later be rendered sufficient through post-processing by scopists or editors. That premise misunderstands both the role of the court reporter and the legal function of a certified transcript. Licensure is not a formatting step that can be added later. It is the foundation on which the legal record rests.
This is where the model breaks down.
The Myth of “Just Cleaning It Up”
Scopists play an important role in stenographic reporting, but that role is often misunderstood by those advocating ASR-first workflows. A scopist works under the supervision of a licensed court reporter. They do not create the record. They do not certify it. They do not assume legal responsibility for its accuracy.
They also do not perform many of the core functions that courts depend on: creating the index, preparing the cover page, executing the certificate page, managing exhibits, or making real-time judgment calls about speakers, interruptions, and record clarity. Scopists are not licensed, are not regulated, and are not accountable to a state authority.
Even highly skilled scopists are not expected to match the reporter’s command of a proceeding. Their work refines a record that already rests on a verified stenographic foundation. Remove that foundation, and the scopist is left editing guesses generated by software—without an independent, contemporaneous record to verify against.
That is not court reporting. It is post hoc interpretation.
Realtime Editing and the Illusion of Control
The risks deepen when ASR systems are paired with live monitoring and realtime “correction.” In some digital reporting setups, a human operator edits the realtime feed as testimony unfolds, altering words, punctuation, or speaker attribution on the fly.
Unlike stenographic realtime—which is derived from shorthand notes that remain intact and reviewable—these edits may overwrite what was originally captured. There is often no version history, no disclosure to counsel, and no way to reconstruct what first appeared on the screen.
Once words appear in realtime, they influence questioning, objections, and judicial rulings. Silent alteration turns the record from a neutral capture into a curated product. That shift alone should give courts pause.
The Cloud Problem No One Wants to Discuss
Beyond questions of licensure and realtime editing lies an even more consequential issue: where ASR systems live, and what they permanently collect.
Today, the large-scale ASR used in legal settings is provided by a small number of vendors, roughly four dominant players, each operating cloud-based language models. These systems do not simply transcribe sound. They record audio, transmit it off-site, and process it through predictive language engines designed to recognize and learn speech patterns.
The cloud is indiscriminate.
ASR systems do not understand courtroom norms. They cannot distinguish between “on the record” and “off the record.” They do not know when a sidebar begins, when a bench conference ends, or when counsel believes the record has paused. If speech is captured, it is processed. If it is processed, it exists beyond the physical courtroom.
That distinction becomes critical during jury trials—particularly during voir dire.
Voir Dire Is Not Just Testimony
Jury voir dire is among the most sensitive phases of any trial. Prospective jurors are routinely asked to disclose deeply personal information: names, occupations, family details, prior experiences with crime, medical histories, personal beliefs, and potential biases. These disclosures are compelled by law, offered in good faith, and historically treated with restraint.
When voir dire is captured by an always-listening ASR system, those disclosures are digitized, transmitted, and stored within systems designed to ingest language at scale. Even when vendors promise safeguards or deletion, the architecture of cloud-based language processing raises unresolved questions about permanence, replication, and downstream exposure.
Unlike a stenographic record—created locally by a licensed professional and governed by established rules regarding sealing, access, and redaction—cloud-based ASR introduces uncertainty. Who has access to the raw data? How long does it exist? Can it truly be erased? And what happens when that data becomes part of a broader language ecosystem?
These are not abstract concerns. The perception alone is enough to change behavior.
The Chilling Effect on Jury Participation
The jury system depends on trust. Prospective jurors must believe that fulfilling their civic duty will not result in permanent digital exposure. If individuals begin to fear that their names and personal disclosures could be captured, stored indefinitely, and potentially resurfaced through unknown technological pathways, candor will erode.
Some may withhold information. Others may seek to avoid service altogether. In extreme cases, distrust could contribute to jury disengagement or jury nullification—not as protest, but as a byproduct of fear.
Courts have long recognized the need to protect juror privacy to preserve the integrity of the process. Introducing indiscriminate cloud capture into voir dire risks undermining that protection before the legal system has fully weighed the consequences.
Accountability Is Not Optional
When a licensed court reporter prepares a transcript, responsibility is clear. Their name and license number appear on the certificate page. They are subject to discipline. They can be called to account for their work.
ASR-plus-scopist models diffuse that responsibility. Is the software vendor accountable? The editor? The agency? The answer is often unclear. That ambiguity alone should disqualify such models from being treated as equivalent to licensed reporting.
Legal transcripts are not commodities. They are evidentiary documents that may be relied upon decades later. The chain of custody matters. The integrity of the record matters.
A Line Worth Holding
None of this requires bad intent. The risks arise from architecture, not motive. But courts cannot ignore architecture simply because technology is new or convenient.
Automation can assist court reporters. It cannot replace licensure, judgment, neutrality, or accountability. And cloud-based systems should not be allowed to capture sensitive legal proceedings, particularly jury voir dire, without a serious examination of privacy, permanence, and public trust.
The legal record must remain passive, not predictive. Local, not indiscriminate. And above all, controlled by professionals who are licensed to bear the responsibility that comes with it.
That line has protected the integrity of the justice system for generations. It is not obsolete. It is essential.
DISCLAIMER
This article reflects the author’s professional analysis and opinion based on experience in court reporting and publicly available information about legal technology practices. It is intended for educational and policy discussion purposes only and does not allege misconduct by any individual or company. The views expressed do not constitute legal advice and are offered to encourage informed dialogue about record integrity, privacy, and accountability in legal proceedings.
Regulatory Clarification:
References to specific software products and their creators are included solely to illustrate structural and regulatory distinctions between licensed court reporters and technology vendors. No allegation of misconduct, illegality, or ethical violation is asserted. The analysis addresses professional roles, licensure requirements, and systemic risk considerations relevant to record integrity, evidentiary reliability, and public trust.
Note:
This discussion concerns licensure, accountability, and record-creation frameworks. It does not assess product quality, intent, or compliance with any existing contractual or statutory obligations.