
For more than a decade, Silicon Valley has promised that artificial intelligence would soon replace large swaths of human labor. Court reporters—especially stenographers—have been among the professions most frequently described as “inevitable casualties” of automation. The pitch has been relentless: AI is faster, cheaper, tireless, and improving exponentially. Human stenographers, the argument goes, are merely a temporary bridge between analog past and automated future.
That story is beginning to fracture. Not because of sentiment, nostalgia, or professional self-interest—but because of math.
A recent academic paper by Vishal Sikka and his son Varin Sikka, a current Stanford student, lays out a structural limitation in large language models (LLMs) that AI marketing departments would prefer to keep obscure. Their conclusion is blunt: AI agents cannot do what Silicon Valley claims they will do, not in the general case, and not at scale. The limitation is not philosophical. It is not about ethics or alignment. It is computational and absolute.
This matters profoundly for any profession built around accuracy, verification, judgment, and accountability—especially court reporting.
Vishal Sikka is not a fringe critic. He was the CEO of Infosys, the former CTO of SAP, the architect behind SAP HANA, and a Stanford-trained AI researcher. He sits on the boards of Oracle, BMW, and GSK. He is, by any measure, deeply embedded in the very ecosystem selling AI’s inevitability.
Together with his son, Sikka argues that every LLM—no matter how advanced—operates under a hard ceiling: a fixed maximum amount of computation it can perform per response. That ceiling is determined by the model’s architecture. It does not disappear with scale, marketing, or funding. If a task requires more computation than the model can perform in a single inference, the model cannot solve it. When pushed beyond that boundary, it does what all LLMs do: it guesses.
This is not a bug. It is how these systems work.
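To make the ceiling concrete, here is a back-of-the-envelope sketch. The figures are illustrative assumptions, not numbers from the Sikka paper: the common rule of thumb that a transformer spends roughly two floating-point operations per parameter for each generated token, and a hypothetical model size and output cap. The point is only the shape of the bound.

```python
# Illustrative back-of-the-envelope estimate. The ~2 FLOPs per parameter
# per generated token figure is an assumed rule of thumb, not a number
# from the Sikka paper; real costs vary with architecture and context.
def max_compute_per_response(n_params: float, max_output_tokens: int) -> float:
    """Rough upper bound on floating-point operations one response can use."""
    flops_per_token = 2 * n_params  # fixed by the architecture, not by the task
    return flops_per_token * max_output_tokens

# A hypothetical 1-trillion-parameter model capped at 8,192 output tokens:
budget = max_compute_per_response(n_params=1e12, max_output_tokens=8192)
print(f"Compute ceiling: roughly {budget:.2e} FLOPs per response")
# The ceiling is a constant. It does not grow because the question got harder.
```

Whatever the exact constants, the bound is set in advance, before the model ever sees the question.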
To illustrate the problem, the paper uses the Traveling Salesman Problem, a classic in computer science. Checking that a given route is truly the shortest requires, in the brute-force case, ruling out every alternative ordering of the stops, and the number of orderings grows factorially with the number of stops. The verification alone, never mind the discovery, quickly exceeds what any LLM can compute within its fixed inference window.
An AI agent asked to confirm the optimality of such a route does not “almost” fail. It cannot succeed. The math makes that impossible.
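A small sketch makes the arithmetic vivid. The snippet below illustrates the brute-force case only (it is not code from the paper), and the budget constant simply reuses the assumed ceiling from the earlier sketch. It counts the distinct round-trip routes for a given number of stops and notes where that count alone overtakes a fixed per-response budget, even under the generous assumption that each route costs a single operation to rule out.

```python
import math

# Assumed per-response ceiling from the earlier sketch (~1.6e16 operations);
# the exact figure does not matter, only that it is fixed.
BUDGET = 1.6e16

def distinct_routes(n_stops: int) -> int:
    """Distinct round-trip routes through n stops: (n - 1)! / 2."""
    return math.factorial(n_stops - 1) // 2

for n in range(5, 30, 5):
    routes = distinct_routes(n)
    status = "within budget" if routes <= BUDGET else "EXCEEDS budget"
    print(f"{n:>2} stops: {routes:.3e} candidate routes -> {status}")

# Around 20 stops, the count of routes alone (about 6e16) passes the fixed
# ceiling, before a single per-route distance has even been computed.
```

Smarter algorithms prune far better than brute force, but the point stands: the work the task demands scales with the problem, while the model's budget does not.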
This exposes an uncomfortable truth about the glossy AI demos flooding legal, corporate, and government sectors. They work because the tasks are carefully curated to remain under the model’s computational ceiling. Real-world workflows—especially those involving long records, procedural nuance, layered verification, and downstream consequences—do not politely stay within those bounds.
Court reporting is a perfect example.
A stenographer is not merely transcribing words. They are producing a certified legal record in real time while simultaneously monitoring speaker identification, overlapping testimony, procedural context, evidentiary boundaries, and error correction. They resolve ambiguities as they occur. They maintain continuity across hours, days, and volumes. They do so under oath, with professional liability, and with the understanding that the transcript may later determine the outcome of an appeal, a motion, or a verdict.
None of that maps cleanly onto a single AI inference.
Automatic speech recognition systems excel when conditions are controlled: clean audio, limited speakers, predictable vocabulary, and low legal stakes. But legal proceedings are adversarial by design. Attorneys interrupt, talk over witnesses, shift registers, introduce technical language, and exploit ambiguity. Judges issue rulings mid-sentence. Exhibits are referenced obliquely. The meaning of a word often depends on what was said three hours earlier or three days earlier.
This is precisely the kind of cumulative, context-dependent computation that blows past an LLM’s ceiling.
When an AI system cannot compute the correct output, it does not pause and ask for clarification. It does not flag uncertainty in a legally meaningful way. It hallucinates a plausible answer. In casual applications, that is an inconvenience. In a legal record, it is catastrophic.
This is why replacing stenographers with AI is not analogous to replacing typists with word processors or switchboard operators with digital routing. Those earlier transitions removed mechanical bottlenecks. AI introduces epistemic ones.
To be clear, none of this means AI is useless. Even if progress stopped today, the tools already available will permanently reshape how professionals work. AI can assist with indexing, rough drafts, searchability, and workflow optimization. Used properly, it can reduce administrative burden and increase access.
But assistance is not replacement.
The difference matters, especially when the marketing claims suggest inevitability rather than augmentation. The gap between what AI is mathematically capable of and what it is sold as capable of is widening, not shrinking.
This may explain a quieter but equally telling phenomenon: the steady departure of senior engineers and executives from OpenAI and similar firms to start their own ventures. If artificial general intelligence were truly months away—the most consequential technological breakthrough in human history—it would be irrational to walk away from that equity and influence. People do not abandon the invention of fire to start a candle company.
These departures suggest something else: the insiders see the ceiling. They understand where AI is transformative and where it will stall. And they are positioning themselves accordingly.
For stenographers, this is not an abstract debate. Courts and agencies are being pressured to adopt digital recording and AI transcription on the assumption that humans are an obsolete cost center. The Sikka paper dismantles that premise at its foundation. The problem is not that AI is imperfect today. The problem is that certain classes of work—work that requires bounded accuracy under unbounded complexity—are structurally resistant to full automation.
Legal records live in that category.
Human stenographers do not operate under a fixed inference window. They can ask for clarification. They can correct the record. They can be cross-examined about their methodology. They can certify accuracy and be held accountable for it. No AI system can do those things, not because engineers have failed to try, but because the architecture does not allow it.
The future of court reporting is not AI versus humans. It is humans who understand AI versus institutions seduced by marketing. The math is no longer on the side of inevitability. It is on the side of limits.
And limits, once proven, change everything.
AI will absolutely transform the legal system. But it will not do so by erasing stenographers. It will do so by making the value of human judgment, accountability, and precision harder—not easier—to ignore.
Disclaimer
This article reflects analysis and opinion based on publicly available research and industry developments. It is not intended as legal advice. References to artificial intelligence capabilities address current architectural limitations and should not be construed as commentary on future research directions or specific vendors.