The Otter Lawsuit – AI, Privacy, and the Fight Over Consent

A groundbreaking lawsuit has been filed against Otter.ai, a company known for its AI-powered meeting transcription and recording services. At the center of the complaint is a fundamental question: Can AI companies record, transcribe, and train on private conversations without the explicit consent of all participants?

The lawsuit, which touches multiple federal and state privacy laws, may set a legal precedent not only for Otter but for the entire AI industry—particularly the booming sector of AI meeting assistants. It highlights what privacy expert Luiza Jarovsky, PhD, calls another instance of AI exceptionalism: a tendency to treat AI technologies as though they deserve exemptions from long-standing legal and ethical norms.

The Allegations Against Otter

Otter.ai offers a meeting assistant tool that can automatically join virtual meetings, record conversations, and generate transcripts. While marketed as a productivity enhancer, this tool raises serious privacy concerns.

The lawsuit alleges that Otter:

  1. Fails to obtain consent from all participants.
    While account holders grant permission when they sign up, non-users who participate in meetings are often unaware that their voices are being recorded, transcribed, and stored.
  2. Shifts responsibility onto users.
    Instead of ensuring compliance itself, Otter instructs users to secure consent from others—a responsibility many ignore or misunderstand.
  3. Uses private conversations to train AI models.
    According to the complaint, Otter trains its AI on user-generated content, which may contain sensitive or confidential information, raising additional risks of misuse and data leakage.

The plaintiffs argue that these practices violate a suite of federal and California state laws designed to protect communications, privacy, and consumer rights.

The Legal Landscape

The lawsuit invokes an impressive list of statutes and doctrines, showing the breadth of potential violations. Among them:

  • Electronic Communications Privacy Act (ECPA): Prohibits the unauthorized interception or recording of wire, oral, and electronic communications. Federal law generally permits recording with one party’s consent, but many states require consent from all participants.
  • Computer Fraud and Abuse Act (CFAA): Targets unauthorized access to digital systems and data. Recording without consent can be framed as unauthorized access.
  • California Invasion of Privacy Act (CIPA): California is a “two-party consent” state, meaning all participants must agree before a recording is made.
  • California Comprehensive Computer Data Access and Fraud Act: Focuses on digital trespass and unauthorized data collection.
  • Torts of Intrusion Upon Seclusion and Conversion: Common law claims that protect individuals from invasive surveillance and the misappropriation of their personal data.
  • California Unfair Competition Law (UCL): Prohibits business practices that are unlawful, unfair, or fraudulent.

By bringing such a broad set of claims, plaintiffs are signaling that AI companies may not be shielded from the legal duties that govern traditional recording technologies.

Consent and the Myth of Delegated Responsibility

One of the most striking aspects of the Otter case is the way it spotlights a common industry practice: shifting privacy responsibilities onto users.

Most AI meeting assistants—including those offered by competitors—ask the account holder to “make sure everyone consents.” On paper, this sounds like compliance. In reality, it is a recipe for noncompliance. Many users are unaware of the legal requirements in their state, while others assume that clicking through terms of service covers all participants.

This is where AI exceptionalism comes into play. Imagine a human stenographer secretly transcribing a private call without telling everyone present. It would clearly violate privacy law. Yet when the same act is done by an AI assistant, some companies act as though the rules are negotiable.

The Otter lawsuit may be the first high-profile test of whether courts will hold AI companies accountable for these evasions.

Ethical Implications – Beyond the Law

While the lawsuit is grounded in legal claims, the ethical stakes are just as high. Recording private conversations without consent undermines trust—not only between meeting participants but also between the public and technology companies.

Several risks flow from Otter’s alleged practices:

  • Exposure of sensitive information. Many meetings involve confidential discussions—legal strategy, business negotiations, or personal health matters. Recording without consent jeopardizes this confidentiality.
  • Training data exploitation. Using recorded conversations to train AI models compounds the violation. Personal data is not only captured but repurposed for profit.
  • Data leakage and cybersecurity threats. Every additional dataset becomes a target for hackers. In an era of escalating cyberattacks, AI companies bear heightened responsibility for data protection.

As Jarovsky points out, AI should not be treated as an exception to basic principles of consent and transparency. If anything, AI’s scale and opacity demand higher standards of accountability.

The Broader Context – Productivity vs. Privacy

AI meeting assistants are part of a larger wave of productivity-enhancing AI tools. From note-taking to drafting emails, AI is increasingly embedded in daily workflows. Yet the drive for efficiency often overshadows privacy concerns.

This is not the first time technology companies have downplayed consent in the name of innovation. Social media platforms once normalized pervasive data collection under the guise of “improving the user experience.” The backlash that followed—culminating in regulations like the EU’s GDPR and California’s CCPA—shows that societies eventually demand accountability.

The Otter lawsuit may be a similar inflection point for AI meeting assistants. It forces the question: Can convenience justify eroding privacy?

Potential Outcomes and Precedents

If the plaintiffs succeed, the consequences could reshape the AI landscape:

  1. Stricter compliance obligations. AI companies may be forced to build explicit consent mechanisms directly into their products, rather than leaving the responsibility to users.
  2. Limits on AI training practices. Courts may restrict the use of private conversations as training data without informed, opt-in consent.
  3. Increased litigation risk. Other companies offering similar services could face lawsuits, especially in “two-party consent” states.
  4. Policy reforms. Legislatures may introduce new AI-specific privacy laws to fill gaps left by older statutes.

Even if Otter settles, the case will likely serve as a warning shot for the entire industry.

Lessons for Businesses and Users

For businesses, the takeaway is clear: do not assume AI products are legally or ethically compliant just because they are popular. Before deploying AI assistants in sensitive contexts, organizations must ensure all participants are informed and consenting.

For individual users, awareness is equally critical. Using AI meeting assistants without securing explicit consent exposes you to liability—not just the company. A participant who feels wronged may sue both the platform and the meeting organizer.

In practice, this means:

  • Always disclose when using AI assistants.
  • Obtain written or recorded consent from all participants.
  • Avoid using AI assistants in meetings involving privileged, confidential, or regulated information.

The Role of Court Reporters – Protecting the Record and Educating Attorneys

One often-overlooked dimension of this debate is the role of court reporters, who are uniquely positioned to safeguard against these risks. Unlike AI assistants, stenographic reporters already operate under strict ethical and legal frameworks requiring transparency, consent, and accuracy. Court reporters can educate attorneys about the dangers of unauthorized recording and transcription tools, reminding them that using AI meeting assistants without consent could expose lawyers to lawsuits, sanctions, or even malpractice claims. By reinforcing best practices—such as insisting on human stenographers for depositions, hearings, and sensitive meetings—reporters not only protect the integrity of the record but also shield themselves from being entangled in litigation involving unconsented AI recordings.

Drawing the Line

The Otter lawsuit is not just about one company. It is about drawing a line in the sand: AI tools must operate within the boundaries of privacy law, not outside them.

Technology may evolve rapidly, but the principles of consent, transparency, and respect for personal autonomy are not negotiable. If AI meeting assistants are to become a staple of modern productivity, they must first earn the trust of the people whose voices and words they capture.

As courts and regulators grapple with the Otter case, one thing is certain: the future of AI-powered meetings will be shaped not only by innovation, but also by accountability.

StenoImperium
Court Reporting. Unfiltered. Unafraid.

Disclaimer

This article includes analysis and commentary based on observed events, public records, and legal statutes.

The content of this post is intended for informational and discussion purposes only. All opinions expressed herein are those of the author and are based on publicly available information, industry standards, and good-faith concerns about privacy, technology, and professional ethics. No part of this article is intended to defame, accuse, or misrepresent any individual or organization. Readers are encouraged to verify facts independently and to engage constructively in dialogue about leadership, transparency, and accountability in the court reporting profession.

  • The content on this blog represents the personal opinions, observations, and commentary of the author. It is intended for editorial and journalistic purposes and is protected under the First Amendment of the United States Constitution.
  • Nothing here constitutes legal advice. Readers are encouraged to review the facts and form independent conclusions.

***To unsubscribe, just smash that UNSUBSCRIBE button below — yes, the one that’s universally glued to the bottom of every newsletter ever created. It’s basically the “Exit” sign of the email world. You can’t miss it.

Published by stenoimperium

We exist to facilitate the fortifying of the Stenography profession and ensure its survival for the next hundred years! As court reporters, we’ve handed the relationship role with our customers, the attorneys, over to the agencies and their sales reps. This has done a lot of damage to our industry. It has taken away our ability to have those relationships, the ability to be humanized and valued. We’ve become a replaceable commodity. Merely saying we are the “Gold Standard” tells them that we’re the best, but there are alternatives. Who we are, though, is much, much more powerful than that! We are the Responsible Charge.

“Responsible Charge” means responsibility for the direction, control, supervision, and possession of stenographic and transcription work, as the case may be, to assure that the work product has been critically examined and evaluated for compliance with appropriate professional standards by a licensee in the profession. By sealing and signing the documents, the professional stenographer accepts responsibility for the stenographic or transcription work represented by the documents, and affirms that applicable stenographic and professional standards have been met. This designation exists in other professions, such as engineering, land surveying, public water works, landscape architecture, fire prevention, geology, architecture, and more.
In the case of professional engineers, the engineering association adopted a Responsible Charge position statement that says, “A professional engineer is only considered to be in responsible charge of an engineering work if the professional engineer makes independent professional decisions regarding the engineering work without requiring instruction or approval from another authority and maintains control over those decisions by the professional engineer’s physical presence at the location where the engineering work is performed or by electronic communication with the individual executing the engineering work.”

If we were to adopt a Responsible Charge position statement for our industry, we could start with a draft that looks something like this: “A professional court reporter, or stenographer, is only considered to be in responsible charge of court reporting work if the professional court reporter makes independent professional decisions regarding the court reporting work without requiring instruction or approval from another authority and maintains control over those decisions by the professional court reporter’s physical presence at the location where the court reporting work is performed or by electronic communication with the individual executing the court reporting work.”

Shared purpose

The cornerstone of a strategic narrative is a shared purpose. This shared purpose is the outcome that you and your customer are working toward together. It’s more than a value proposition of what you deliver to them, or a mission of what you do for the world. It’s the journey that you are on with them. By having a shared purpose, the relationship shifts from consumer to co-creator. In court reporting, our mission is “to bring justice to every litigant in the U.S.” That purpose is shared by all involved in the litigation process – judges, attorneys, everyone. Who we are is the Responsible Charge. How we do that is by Protecting the Record.
