Implications of AI Voice Replication


It’s your sweetheart’s birthday. In anticipation of the blessed event, you go to a card store to pick out something that will really say how you feel. On the novelty rack, you find just the thing. When you open the card, Beyoncé belts out a refrain of “Happy Birthday.” Wow. Then you notice the disclaimer in tiny print on the back of the card, with the name of the person impersonating Beyoncé.

The card is a big hit with your lady. You enjoy a leisurely dinner together at a groovy Thai fusion place. As dessert is served, your ear catches a familiar voice crooning an unfamiliar song. It’s Frank Sinatra singing a George Jones classic, “He Stopped Loving Her Today.” You’re pretty sure Frank Sinatra never recorded country music, much less anything by George Jones. The next day, digging in the coal mine of YouTube, your suspicions are confirmed. A jolly joker in Portland took a karaoke backing track and, after training an AI program on Frank Sinatra songs, produced the George Jones/Sinatra track. The estates of George Jones and Frank Sinatra would be interested to know the track also streamed on Spotify. Somebody was cashing in.

I am kidding, of course. This sort of trickery was once the domain of science fiction or James Bond villains. But fiction is now fact, and AI is poised to transform art and culture in ways that are profound and spooky. Unless they are blocked or corralled, chatbots can easily dredge through entire libraries of music, literature, poetry and art and clone the “voice” or style of authors and opera singers alike.

Labels hope that fans will continue to prize the work of artists, including the real Drake, above that of A.I.-generated imitations.
~Adam Riding

Voice modeling isn’t just for identity theft in the arts. It can also be used in deepfake phone scams and political disinformation campaigns. A recent NY Times article profiled a well-known voice actor, George Marston, who discovered his voice was being used in all kinds of ways he never authorized in his contracts, and for which he was never properly compensated. “Thanks to artificial intelligence, IBM was able to sell Mr. Marston’s decades-old sample to websites that are using it to build a synthetic voice that could say anything. Mr. Marston recently discovered his voice emanating from the Wimbledon website during the tennis tournament.”

AI outfits argue that their training programs are allowed under “fair use” provisions of the Copyright Act and are setting up shop. “While creators of quality content are contesting how their work is being used,” continues the Times article, “dubious A.I.-generated content is stampeding into the public sphere. NewsGuard has identified 475 A.I.-generated news and information websites in 14 languages. A.I.-generated music is flooding streaming websites and generating A.I. royalties for scammers. A.I.-generated books — including a mushroom foraging guide that could lead to mistakes in identifying highly poisonous fungi — are so prevalent on Amazon that the company is asking authors who self-publish on its Kindle platform to also declare if they are using A.I.”

AI strikes at elemental values of trust and authenticity. Artists should be able to control their content, give consent before it is used in AI-adulterated material sold to consumers, and be properly compensated for that use. More importantly, though, when you start asking questions like “who owns your voice,” you eventually reach the bigger questions of “who owns your DNA” and “can you patent yourself” to protect your DNA from being copied or manipulated for profit. I think the Supreme Court has already opined.

Using Beyoncé’s or any celebrity’s voice without consent will incur legal ramifications, even if the voice is generated by AI. AI voices are subject to strict legal requirements, such as obtaining consent and adhering to intellectual property rights.
