
Taylor Swift has taken steps to protect her voice and image from artificial intelligence misuse, filing multiple trademark applications in the United States amid growing concern over deepfakes and impersonation technology.
The pop star has lodged three trademark applications: one tied to a photo of her performing during the Eras Tour, and two based on audio clips of her introducing herself.
The recordings feature Swift saying “Hey, it’s Taylor” and “Hey, it’s Taylor Swift”, originally used to promote her most recent album, The Life of a Showgirl, on streaming platforms such as Spotify and Amazon Music.
The image included in the filing depicts Swift on stage holding a pink guitar with a black strap while wearing a multi-coloured iridescent bodysuit and silver boots.
The photograph has previously been used as part of promotional material for the Disney+ film version of the Eras Tour.
The move is widely seen as an effort to guard against the increasing use of AI-generated content that mimics high-profile figures. In recent years, AI versions of Swift have circulated online in various forms, including sexually explicit deepfakes and a fabricated political advertisement that falsely appeared to show her endorsing Donald Trump.
Legal experts say trademarking specific elements of her voice and likeness could offer broader protection than traditional copyright measures.
Trademark lawyer Josh Gerben, who first reported the filings, said this approach could allow Swift to challenge not just direct copies, but also AI-generated imitations that are “confusingly similar”, a key threshold in trademark law.
The filings come as the entertainment industry grapples with the rapid rise of generative AI tools, which can replicate voices, faces and performances with increasing accuracy.
Earlier this year, Matthew McConaughey became one of the first high-profile figures to pursue a similar legal strategy, registering recordings of his own voice and image with the United States Patent and Trademark Office (USPTO) in a bid to establish stronger legal protections.
The issue has gained urgency following multiple high-profile incidents.
In January 2024, explicit AI-generated images of Swift went viral across platforms including X and Telegram, attracting millions of views and sparking widespread backlash.
More recently, concerns have been raised about AI tools allegedly producing inappropriate content without user prompting.
According to a report by The Verge, a video generator developed by xAI was capable of generating explicit content featuring Swift without being asked.
The company’s own policies prohibit such material, yet critics argue enforcement has been inconsistent.
Clare McGlynn, a law professor at Durham University, said the issue reflects deeper structural problems within AI systems. “This is not misogyny by accident, it is by design,” she said, adding that platforms could have prevented such outcomes if they had chosen to do so.
The wider debate around AI and identity rights has also involved other Hollywood figures.
Representatives for Scarlett Johansson previously confirmed she pursued legal action after an advert used her likeness alongside an AI-generated version of her voice without consent.
In response to mounting concerns, some US states have begun introducing legislation aimed at tackling AI misuse. Notably, Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act, signed into law in March 2024, offers targeted protections for artists against unauthorised digital replication.