Voice actor speaking into a microphone in a professional studio. Photo by Amin Asbaghipour on Pexels.

Paul Alcock spent over 30 years refining his voice through training, recordings, and performances. Now, at 62, he claims Google used his voice without permission in its Assistant products, pulling from samples he provided years ago. The dispute, centered in California courts, points to a larger clash between voice artists and big tech firms over AI voice use.

Background

Paul started his career in the 1980s, working on radio ads, audiobooks, and phone systems. He built a library of voice samples that companies licensed for their projects. In the early 2000s, he partnered with a firm that supplied audio to tech developers, including samples of his clear, neutral tone.

In 2016, Google rolled out its voice assistant on phones, speakers, and cars. Paul first noticed something off when friends played clips of the Assistant and said it sounded just like him. He dug into it and found matches between his old recordings and the AI voice. For years, he tried to contact Google through emails and lawyers, but got no clear answers.

The issue blew up in 2019 after reports surfaced about Google devices picking up talks without the 'Hey Google' wake word. Paul linked this to his own story, filing a suit that year. He argued Google not only grabbed his voice clips but trained its models on them, creating a digital version that mimicked him perfectly.

Court files show Paul kept detailed logs of his work. He has tapes from sessions where he read common phrases, numbers, and commands. These went to a middleman company, which Paul says passed them to Google without his okay for ongoing use.

Key Details

Paul's suit claims Google breached contracts and privacy rules by using his voice beyond the original license. He says the deal allowed one-time use in a phone app, not endless AI training.

How the Voice Was Used

Google Assistant voices come from blended samples to sound natural. Paul points to specific patterns in pitch, rhythm, and word choice that match his style. Experts he hired analyzed waveforms and confirmed overlaps.

In depositions, Google staff admitted pulling from public and licensed audio pools to build voices. One engineer described 'false accepts,' recordings captured when devices activated without a wake word, which were fed back into training data. Paul says this loop stole his identity.

The case gained steam amid wider privacy suits. In 2019, a Dutch report revealed Google Home units sending chats to contractors for review. US users reported devices catching talks on money woes, job hunts, and family matters.

Paul's lawyers argued Google knew of glitches but used the data anyway. They cited internal memos showing teams debated voice sourcing but pushed ahead for better products.

"I poured my life into this voice. It's not just a sound—it's me. Google took that and made billions while I fight to get credit." – Paul Alcock

Google fought back, saying no single voice owns the Assistant sound. They moved for dismissal, claiming policies allowed audio collection. But before a judge ruled, talks led to a settlement.

What This Means

The $68 million deal, filed last month in San Jose federal court, covers Paul and thousands of users. It sets up a fund for claims on up to three devices per person. Payouts depend on claim numbers, with lawyers taking a cut for fees.

Judge Beth Labson Freeman must approve it. If she does, checks go out in months. Google admits no fault, but the payout signals pressure from courts.

This echoes Apple's $95 million Siri settlement, where users got $8 to $40 each. Both cases spotlight 'false accepts'—devices grabbing talks on background noise. Users described shock at ads popping up for discussed items, like job changes or vacations.

For voice pros like Paul, it raises alarms on AI ethics. More actors now watermark samples or limit licenses. Tech firms face rules on data use, with Europe probing voice scraping.

Paul plans to keep speaking out. He wants laws requiring consent for voice cloning. Google updated Assistant in 2023 to cut false recordings by 70%, per their reports, but trust lags.

The settlement wraps up the main suit but leaves room for appeals. Paul says he'll watch how Google handles voices going forward. Users will be able to file claims online soon and should check the settlement site for updates.

Broader fallout hits smart home makers. Amazon and others tweak privacy settings after similar gripes. Sales dipped briefly post-reports, but devices stay popular.

Paul reflects on decades of work. From studio booths to courtrooms, his voice fight shows tech's reach into daily life. As AI voices grow, cases like his may set rules for what's fair use.

Author

  • Vincent K

    Vincent Keller is a senior investigative reporter at The News Gallery, specializing in accountability journalism and in-depth reporting. With a focus on facts, context, and clarity, his work aims to cut through noise and deliver stories that matter. Keller is known for his measured approach and commitment to responsible, evidence-based reporting.
