15.ai
Real-time text-to-speech AI tool
15.ai is a free non-commercial web application and research project that uses artificial intelligence to generate text-to-speech voices of fictional characters from popular media. Created by a pseudonymous artificial intelligence researcher known as 15, who began developing the technology as a freshman during their undergraduate research at the Massachusetts Institute of Technology, the application allows users to make characters from video games, television shows, and movies speak custom text with emotional inflections. The platform is able to generate convincing voice output using minimal training data; the name "15.ai" references the creator's statement that a voice can be cloned with just 15 seconds of audio. It was an early example of an application of generative artificial intelligence during the initial stages of the AI boom.
Launched in March 2020, 15.ai became an Internet phenomenon in early 2021 when content utilizing it went viral on social media and quickly gained widespread use among Internet fandoms, such as the My Little Pony: Friendship Is Magic, Team Fortress 2, and SpongeBob SquarePants fandoms. The service featured emotional context through emojis, precise pronunciation control, and multi-speaker capabilities. Critics praised 15.ai's accessibility and emotional control but criticized its technical limitations in prosody options and non-English language support, with mixed results depending on character complexity. 15.ai is credited as the first platform to popularize AI voice cloning in memes and content creation.[a]
Voice actors and industry professionals debated 15.ai's implications, raising concerns about employment impacts, voice-related fraud, and potential misuse. In January 2022, it was discovered that Voiceverse NFT had generated voice lines using 15.ai without attribution, promoted them as the product of their own technology, and sold them as non-fungible tokens (NFTs) without permission.[b] News publications universally characterized this incident as the company having "stolen" from 15.ai.[c] The service went offline in September 2022 due to legal issues surrounding artificial intelligence and copyright. Its shutdown was followed by the emergence of commercial alternatives whose founders have acknowledged 15.ai's pioneering influence in the field of deep learning speech synthesis. On May 18, 2025, 15 launched 15.dev as the sequel to 15.ai.
History
[...] The website has multiple purposes. It serves as a proof of concept of a platform that allows anyone to create content, even if they can't hire someone to voice their projects.
It also demonstrates the progress of my research in a far more engaging manner – by being able to use the actual model, you can discover things about it that even I wasn't aware of (such as getting characters to make gasping noises or moans by placing commas in between certain phonemes).
It also doesn't let me get away with picking and choosing the best results and showing off only the ones that work [...] Being able to interact with the model with no filter allows the user to judge exactly how good the current work is at face value.
Background
The field of speech synthesis underwent a significant transformation with the introduction of deep learning approaches. In 2016, DeepMind's publication of the WaveNet paper marked a shift toward neural network-based speech synthesis, which enabled higher audio quality by modeling raw waveforms with causal convolutional neural networks. Previously, concatenative synthesis—which worked by stitching together pre-recorded segments of human speech—was the predominant method for generating artificial speech, but it often produced robotic-sounding output with audible artifacts at the joins between segments.[39] In 2018, Google AI's Tacotron 2 showed that neural networks could produce highly natural speech synthesis but required substantial training data (typically tens of hours of audio) to achieve acceptable quality. When trained on two hours of data, its output quality degraded while remaining intelligible; with 24 minutes of training data, Tacotron 2 failed to produce intelligible speech.[40] These were followed by HiFi-GAN, a generative adversarial network (GAN)-based vocoder that improved the efficiency of waveform generation while producing high-fidelity speech,[41] and by Glow-TTS, which introduced a flow-based approach that allowed both fast inference and voice style transfer.[42] Chinese technology companies such as Baidu and ByteDance also contributed advances that further developed the field.[43]
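The following is a minimal, purely illustrative PyTorch sketch of the dilated causal convolutions that WaveNet-style models stack to generate audio one sample at a time; it is a conceptual aid only and is not drawn from WaveNet's published implementation or from 15.ai.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Module):
        # A 1-D convolution that only sees past samples (padding is applied on the left only).
        def __init__(self, channels, kernel_size, dilation):
            super().__init__()
            self.left_pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

        def forward(self, x):                        # x: (batch, channels, time)
            x = F.pad(x, (self.left_pad, 0))         # pad the time axis on the left
            return self.conv(x)                      # output length equals input length

    # Layers with exponentially growing dilation give a large receptive field cheaply.
    stack = nn.Sequential(*[CausalConv1d(16, kernel_size=2, dilation=2 ** i) for i in range(4)])
    waveform = torch.randn(1, 16, 8000)              # one example, 16 channels, 8000 time steps
    output = stack(waveform)                         # shape: (1, 16, 8000)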
2016–2020: Conception and development
15.ai was conceived in 2016 as a research project in deep learning speech synthesis by a developer known as 15, then aged 18,[45] during their freshman year at the Massachusetts Institute of Technology (MIT) as part of its Undergraduate Research Opportunities Program (UROP).[46] 15 was inspired by DeepMind's WaveNet paper and continued development through their studies as Google AI released Tacotron 2 the following year. By 2019, they had demonstrated at MIT the ability to replicate the results of WaveNet and Tacotron 2 using 75% less training data than previously required.[43] The name "15.ai" is a reference to the developer's statement that a voice can be cloned with as little as 15 seconds of data.[47]
15 had originally planned to pursue a PhD based on their undergraduate research, but opted to work in the tech industry instead after their startup was accepted into the Y Combinator accelerator in 2019. After leaving the startup in early 2020, 15 returned to their voice synthesis research and began implementing it as a web application. According to a post on X from 15, instead of using conventional voice datasets like LJSpeech that contained simple, monotone recordings, they sought out more challenging voice samples that could demonstrate the model's ability to handle complex speech patterns and emotional undertones.[tweet 1] During this phase, 15 discovered the Pony Preservation Project, a collaborative effort started by /mlp/, the My Little Pony board on 4chan. Contributors to the project had manually trimmed, denoised, transcribed, and emotion-tagged thousands of voice lines from My Little Pony: Friendship Is Magic, compiling them into a dataset that provided ideal training material for 15.ai.[43]
2020–2022: Release and operation
15.ai was released in March 2020[45] as a free and non-commercial web application that did not require user registration to use, but did require the user to accept its terms of service before proceeding.[12] At the time of its launch, the platform had a limited selection of available characters, including those from My Little Pony: Friendship Is Magic and Team Fortress 2.[48] Users were permitted to create any content with the synthesized voices under two conditions: they had to properly credit 15.ai by including "15.ai" in any posts, videos, or projects using the generated audio;[49] and they were prohibited from mixing 15.ai outputs with other text-to-speech outputs in the same work to prevent misrepresentation of the technology's capabilities.[50]
More voices were added to the website in the following months. In late 2020, 15 implemented a multi-speaker embedding in the deep neural network, which enabled the simultaneous training of multiple voices.[43] Following this, the website's roster expanded from eight to over fifty characters.[45] In addition, this implementation allowed the deep learning model to recognize common emotional patterns across different characters, even when certain emotions were missing from the characters' training data.[51]
By May 2020, the site had served over 4.2 million audio files to users.[52] In early 2021, the application gained popularity after skits, memes, and fan content created using 15.ai went viral on Twitter, TikTok, Reddit, Twitch, Facebook, and YouTube.[53] At its peak, the platform incurred operational costs of US$12,000[54] per month for the AWS infrastructure needed to handle millions of daily voice generations; despite receiving offers from companies to acquire 15.ai and its underlying technology, the website remained independent and was funded out of the developer's personal earnings from their previous startup.[43]
2022: Voiceverse NFT controversy

On January 14, 2022, 15 discovered that Voiceverse NFT had generated voice lines using 15.ai, falsely showcased them on Twitter as a demonstration of their own voice technology without permission or attribution,[c] and sold them as NFTs.[b] This came shortly after 15 had stated in December 2021 that they had no interest in incorporating NFTs into their work.[55] A screenshot of the log files posted by 15 showed that Voiceverse had generated audio of characters from My Little Pony: Friendship Is Magic using 15.ai and pitched them up to make them sound unrecognizable,[56] a violation of 15.ai's terms of service, which explicitly prohibited commercial use and required proper attribution.[57]
When confronted with evidence, Voiceverse stated that their marketing team had used 15.ai without proper attribution while rushing to create a demo.[58] In response, 15 tweeted "Go fuck yourself,"[59] which went viral, amassing hundreds of thousands of retweets and likes on Twitter in support of the developer.[43] The tweets showcasing the stolen voices were subsequently deleted.[14]
Troy Baker (@TroyBakerVA) tweeted on January 14, 2022:[60]
"I'm partnering with @VoiceverseNFT to explore ways where together we might bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IP's they create. We all have a story to tell. You can hate. Or you can create. What'll it be?"
Aftermath
The controversy raised concerns about NFT projects, which, according to critics, were frequently associated with intellectual property theft and questionable business practices.[61] The incident was documented in the AI Incident Database (AIID)[23] and the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository,[24] and was also featured on Molly White's Web3 Is Going Just Great website.[17] Pavel Khibchenko of Skillbox cited the incident as an example of NFT-related fraud.[62] Voice actor and YouTuber Yong Yea criticized voice NFTs for their potential impact on the voice acting industry[25] and stated in a YouTube video that Voiceverse deliberately plagiarized 15.ai's superior technology to falsely market voice NFTs.[video 1]: 15:54–16:13 In a 2024 class action lawsuit filed against LOVO, Inc., the parent company of Voiceverse, court documents cited the company's prior theft of 15.ai's technology as part of the case.[37]
Voice actor Troy Baker, who had announced his partnership with Voiceverse alongside their promotion of the stolen AI voices, faced mounting criticism for supporting an NFT project and for his confrontational announcement tone.[63] Following continued backlash and the plagiarism revelation, Baker acknowledged that his original tweet was "antagonistic"[64] and on January 31, announced that he would discontinue his partnership with Voiceverse.[65]
2022–present: Inactivity and revival
In September 2022, 15.ai was taken offline due to legal issues surrounding artificial intelligence and copyright.[66] In a post on Twitter, 15 suggested that a future version would better address copyright concerns from the outset.[43] During this time, voice AI startups continued to cite 15.ai as a major influence on the field.[67]
On May 18, 2025, 15 launched 15.dev as the official sequel to 15.ai.[68][tweet 2] Fandom news site Equestria Daily reported that the website included "almost every voiced pony in the show" with "a dropdown for various emotions you want to generate."[69]
Features

15.ai is non-commercial, has no advertisements, generates no revenue, and operates without requiring user registration or accounts.[70] Users generate speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions. Each request produces three audio variations with distinct emotional deliveries.[71] Available voices included multiple characters from Team Fortress 2 and My Little Pony: Friendship Is Magic, including the Mane Six and Derpy Hooves; GLaDOS, Wheatley, and the Sentry Turret from the Portal series; SpongeBob SquarePants; Kyu Sugardust from HuniePop; Rise Kujikawa from Persona 4; Daria Morgendorffer and Jane Lane from Daria; Carl Brutananadilewski from Aqua Teen Hunger Force; Steven Universe from Steven Universe; Sans from Undertale; Madeline and multiple characters from Celeste; the Tenth Doctor from Doctor Who; the Narrator from The Stanley Parable; and HAL 9000 from 2001: A Space Odyssey.[72] Silent characters such as Chell and Gordon Freeman could be selected but produced silent audio files when any text was submitted.[73] Characters from Undertale and Celeste did not produce spoken words but instead generated their games' distinctive text-beep sounds when text was entered.[74]

Since 2020, 15.ai has generated audio at a 44.1 kHz sampling rate, higher than the 16 kHz standard used by most deep learning text-to-speech systems of that period. The higher fidelity produces more detailed audio spectrograms and greater audio resolution, with the tradeoff that imperfections in the synthesis are more noticeable.[76] 15.ai processes speech using customized deep neural networks and specialized audio synthesis algorithms.[77] While its underlying technology can produce 10 seconds of audio in less than 10 seconds of processing time (i.e. faster than real time), the user experience often involves longer waits, as the servers manage thousands of simultaneous requests and sometimes take more than a minute to deliver results.[78]
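As a rough illustration of these figures (the numbers below are hypothetical, not measurements of 15.ai), a higher sampling rate means roughly 2.8 times as many samples per second of speech, and a real-time factor below 1 means the audio is generated faster than it plays back:

    # Hypothetical arithmetic only; not measured values from 15.ai.
    clip_seconds = 10
    samples_44k = 44_100 * clip_seconds        # 441,000 samples at 44.1 kHz
    samples_16k = 16_000 * clip_seconds        # 160,000 samples at 16 kHz
    print(samples_44k / samples_16k)           # about 2.76x more samples to synthesize

    generation_seconds = 8                     # assumed processing time for the clip
    real_time_factor = generation_seconds / clip_seconds
    print(real_time_factor)                    # 0.8 < 1, i.e. faster than real time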
Due to its nondeterministic design, 15.ai produces variations in its speech output. 15.ai introduced the concept of emotional contextualizers, which allowed users to specify the emotional tone of generated speech through guiding phrases.[79] The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the MIT Media Lab that processed emoji embeddings from 1.2 billion Twitter posts to analyze their emotional content.[49] If an input into 15.ai contained additional context (specified by a vertical bar), the additional context following the bar would be used as the emotional contextualizer.[80] For example, given the input Today is a great day!|I'm very sad., the selected character would speak the sentence "Today is a great day!" in the emotion one would expect from someone saying the sentence "I'm very sad."[81]
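A minimal sketch of how such an input could be handled, assuming a hypothetical pipeline (the function below and the surrounding flow are illustrative, not 15.ai's actual code):

    # Illustrative only; the vertical-bar convention is documented above,
    # but this parsing function is an assumption, not 15.ai's implementation.
    def parse_request(raw: str):
        if "|" in raw:
            text, contextualizer = raw.split("|", 1)
        else:
            text, contextualizer = raw, raw    # no contextualizer: the text conditions itself
        return text.strip(), contextualizer.strip()

    text, contextualizer = parse_request("Today is a great day!|I'm very sad.")
    # A DeepMoji-style sentiment model would then embed the contextualizer, and that
    # embedding would steer the emotional delivery of the synthesized text.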

15.ai uses pronunciation data from the Oxford Dictionaries API, Wiktionary, and the CMU Pronouncing Dictionary, the last of which uses ARPABET phonetic transcriptions. Users can input ARPABET transcriptions by enclosing phoneme strings in curly braces to correct mispronunciations.[49] 15.ai's interface uses color-coding to indicate pronunciation certainty[49] and also displays technical metrics, graphs, and comprehensive model analytics, which have included sentiment analysis and automatic improvements to the vocoder.[45] The platform limits prompts to 200 characters; users can combine multiple generations for longer speech sequences.[82]
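For illustration, a hypothetical prompt using the curly-brace convention described above might look like the following (the ARPABET transcription is taken from the CMU Pronouncing Dictionary, with stress digits omitted):

    # Hypothetical prompt, not an official example from 15.ai.
    prompt = "I'd like a {T AH M EY T OW} sandwich."   # forces a specific pronunciation of "tomato"
    assert len(prompt) <= 200                           # prompts are limited to 200 characters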
Later versions of 15.ai introduced multi-speaker capabilities. Rather than training separate models for each voice, 15.ai uses a unified model that learns multiple voices simultaneously through speaker embeddings: numerical representations that capture each character's unique vocal characteristics.[43] Along with the emotional context conferred by DeepMoji, this allows the deep learning model to learn shared patterns across different characters' emotional expressions and speaking styles, even when characters lack examples of certain emotions in their training data.[83]
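A toy PyTorch sketch of this idea follows: one text encoder is shared across all voices, and a learned embedding vector selects the speaker. This is a deliberately simplified, assumption-based illustration, not 15.ai's actual architecture.

    import torch
    import torch.nn as nn

    class MultiSpeakerEncoder(nn.Module):
        # One shared text encoder; one learned embedding vector per character voice.
        def __init__(self, vocab_size=100, num_speakers=50, dim=128):
            super().__init__()
            self.token_embedding = nn.Embedding(vocab_size, dim)
            self.speaker_embedding = nn.Embedding(num_speakers, dim)
            self.encoder = nn.GRU(dim, dim, batch_first=True)

        def forward(self, tokens, speaker_id):
            hidden, _ = self.encoder(self.token_embedding(tokens))     # shared across voices
            speaker = self.speaker_embedding(speaker_id).unsqueeze(1)  # (batch, 1, dim)
            return hidden + speaker    # a real system would feed this to a decoder and vocoder

    model = MultiSpeakerEncoder()
    tokens = torch.randint(0, 100, (1, 12))        # a 12-token phoneme sequence
    features = model(tokens, torch.tensor([3]))    # the same text, rendered as voice number 3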
Reception
Critical reception
Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results.[84] Natalie Clayton of PC Gamer wrote that SpongeBob SquarePants' voice was replicated well, but described challenges in mimicking the Narrator from The Stanley Parable: "the algorithm simply can't capture Kevan Brighting's whimsically droll intonation."[85] Zack Zwiezen of Kotaku reported that "[his] girlfriend was convinced it was a new voice line from GLaDOS' voice actor".[86] Taiwanese newspaper United Daily News also highlighted 15.ai's ability to recreate GLaDOS's mechanical voice, alongside its diverse range of character voice options.[87] Yahoo! News Taiwan reported that "GLaDOS in Portal can pronounce lines nearly perfectly", but also criticized that "there are still many imperfections, such as word limit and tone control, which are still a little weird in some words."[88] Chris Button of Byteside called the ability to clone a voice with only 15 seconds of data "freaky," but also described the tech behind it as "impressive."[89] Robin Lamorlette of Clubic described the technology as "devilishly fun" and wrote that Twitter and YouTube were filled with creative content from users experimenting with the tool.[90] The platform's voice generation capabilities were regularly featured on Equestria Daily with documented updates, fan creations, and additions of new character voices. In a post introducing new character additions to 15.ai, Equestria Daily's founder Shaun Scotellaro wrote that "some of [the voices] aren't great due to the lack of samples to draw from, but many are really impressive still anyway."[91] Chinese My Little Pony fan site EquestriaCN also documented 15.ai's development and its updates, though they criticized some of its bugs and long queue wait times.[92]
Peter Paltridge of Anime Superhero News opined that "voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech," but also stated that "In some ways, SAM is still more advanced than this. It was possible to affect SAM's inflections by using special characters, as well as change his pitch at will. With 15.ai, you're at the mercy of whatever random inflections you get."[93] Conversely, Lauren Morton of Rock, Paper, Shotgun praised the depth of pronunciation control—"if you're willing to get into the nitty gritty of it".[94] Similarly, Eugenio Moto of Qore.com wrote that "the most experienced of users can change parameters like the stress or the tone."[95] Takayuki Furushima of Den Fami Nico Gamer highlighted the "smooth pronunciations", and Yuki Kurosawa of AUTOMATON wrote that its "rich emotional expression" was a major feature; both Japanese authors mentioned the lack of Japanese-language support.[96] Renan do Prado of Arkade and José Villalobos of LaPS4 remarked that while users could create amusing results in Portuguese and Spanish respectively, the generation performed best in English.[97] Chinese gaming news website GamerSky called the app "interesting", but also criticized the word count limit of the text and the occasional lack of intonations.[98] Machine learning professor Yongqiang Li wrote that 15.ai "perfectly preserves the rhythm and characteristics of the speaker," and remarked that the application was still free despite having 5,000 people generating voices concurrently at the time of writing.[99] Marco Cocomello of GLITCHED remarked that despite the 200-character limitation, the results "blew [him] away" when testing the app with GLaDOS's voice.[100] Spanish author Álvaro Ibáñez wrote in Microsiervos that he found the rhythm of the AI-generated voices interesting and that 15.ai was able to adapt its delivery based on the text's meaning.[101]
Technical publications provided more in-depth analysis of 15.ai's capabilities and limitations compared to other text-to-speech technologies of the time. Google DeepMind senior research scientist Alex Irpan wrote that when 15.ai launched in 2020, it was "arguably the highest quality voice generation model in the world" and superior to models developed by Google AI.[102] Rionaldi Chandraseta of Towards Data Science wrote that voice models trained on larger datasets created more convincing output with better phrasing and natural pauses, particularly for extended text.[77] Bai Feng of XinZhiYuan on QQ News highlighted the technical achievement of 15.ai's high-quality output despite using minimal training data and wrote that it was of significantly higher quality than typical deep learning text-to-speech implementations. Feng also acknowledged that while some pronunciation errors occurred due to the limited training data, it was understandable given that contemporary deep learning models typically required 40 or more hours of audio.[103] Similarly, Parth Mahendra of AI Daily wrote that while the system "does a good job at accurately replicating most basic words," it struggled with more complex terms, noting that characters would "absolutely butcher the pronunciation" of certain words.[52] Ji Yunyo of NetEase News called the technology behind 15.ai "remarkably efficient" but also criticized its emotional limitations, writing that the emotional expression was relatively "neutral" and that "extreme" emotions couldn't be properly synthesized, making it less suitable for not safe for work applications.[104] Ji also wrote that while many deepfake videos required creators to extract and edit material from hours of original content for very short results, 15.ai could achieve similar or better effects with only a few dozen minutes of training data per character.[105]
Reactions from voice actors of featured characters

Some voice actors whose characters appeared on 15.ai have publicly shared their thoughts about the platform. In an April 2021 interview, John Patrick Lowrie—who voices the Sniper in Team Fortress 2—said that he had discovered 15.ai when a prospective intern showed him a skit she had created using AI voices of the Team Fortress 2 characters.[video 2]: 0:51:50 Lowrie commented:
"The technology still has a long way to go before you really believe that these are just human beings, but I was impressed by how much [15.ai] could do. [...] You certainly don't get the delivery that you get from an actual person who's analyzed the scene, [...] but I do think that as a fan source—for people wanting to put together mods and stuff like that—that it could be fun for fans to use the voices of characters they like."[video 2]: 0:53:12
He drew an analogy to synthesized music, adding:
"If you want the sound of a choir, and you want the sound of an orchestra, and you have the money, you hire a choir and an orchestra. And if you don't have the money, you have something that sounds pretty nice; but it's not the same as a choir and an orchestra."[video 2]: 1:01:10
In a 2021 live broadcast on his Twitch channel, Nathan Vetterlein—the voice actor of the Scout from Team Fortress 2—listened to an AI recreation of his character's voice and commented: "It's interesting; it's all right. There's some stuff in there".[video 3]
Ethical concerns
Other voice actors had mixed reactions to 15.ai's capabilities. While some industry professionals acknowledged the technical innovation, others raised concerns about the technology's implications for their profession.[106] When voice actor Troy Baker announced his partnership with Voiceverse NFT, which had misappropriated 15.ai's technology, critics raised concerns that automated voice acting could reduce employment opportunities for voice actors, enable voice impersonation, and be misused in explicit content.[107] Ruby Innes of Kotaku Australia wrote that "this practice could potentially put voice actors out of work considering you could just use their AI voice rather than getting them to voice act for a project and paying them."[12] In her coverage of the Voiceverse controversy, Edie WK of Checkpoint Gaming raised the concern that "this kind of technology has the potential to push voice actors out of work if it becomes easier and cheaper to use AI voices instead of working with the actor directly."[29]
While 15.ai limited its scope to fictional characters and did not reproduce voices of real people or celebrities,[49] computer scientist Andrew Ng commented that similar technology could be used to do so, including for nefarious purposes. In his 2020 assessment of 15.ai, he wrote:
"Voice cloning could be enormously productive. In Hollywood, it could revolutionize the use of virtual actors. In cartoons and audiobooks, it could enable voice actors to participate in many more productions. In online education, kids might pay more attention to lessons delivered by the voices of favorite personalities. And how many YouTube how-to video producers would love to have a synthetic Morgan Freeman narrate their scripts?
While discussing potential risks, he added:
"...but synthesizing a human actor's voice without consent is arguably unethical and possibly illegal. And this technology will be catnip for deepfakers, who could scrape recordings from social networks to impersonate private individuals."[48]
Legacy

15.ai was an early pioneer of audio deepfakes, and its popularity led to the emergence of AI speech synthesis-based memes during the initial stages of the AI boom in 2020. 15.ai is credited as the first platform to popularize AI voice cloning in Internet memes and content creation,[a] particularly through its ability to generate convincing character voices in real-time without requiring extensive technical expertise.[108] The platform's impact was especially large in fan communities, such as the My Little Pony: Friendship Is Magic, Portal, Team Fortress 2, and SpongeBob SquarePants fandoms, where it enabled the creation of viral content that garnered millions of views on social media.[109] Team Fortress 2 content creators also used the platform to produce both short-form memes and complex narrative animations using Source Filmmaker. Fan creations included skits and fan animations,[110] crossover content,[111] recreations of viral videos,[112] adaptations of fan fiction,[45] music videos, and musical compositions.[45] Some fan creations gained mainstream attention: a viral video that replaced Donald Trump's cameo in Home Alone 2: Lost in New York with the Heavy Weapons Guy's AI-generated voice was featured on a daytime CNN segment in January 2021.[113] Some users integrated 15.ai with voice command software to create personal assistants.[114]

Its influence since its launch has been publicly recognized, with commercial alternatives like ElevenLabs[d] and Speechify emerging to fill the void after its initial shutdown.[116] Contemporary generative voice AI companies have acknowledged 15.ai's pioneering role.[102] Y Combinator startup PlayHT called the debut of 15.ai "a breakthrough in the field of text-to-speech (TTS) and speech synthesis".[117] Cliff Weitzman, the founder and CEO of Speechify, credited 15.ai for "making AI voice cloning popular for content creation by being the first [...] to feature popular existing characters from fandoms".[118] Mati Staniszewski, co-founder and CEO of ElevenLabs, wrote that 15.ai was transformative in the field of AI text-to-speech.[119]
15.ai established technical precedents that influenced subsequent developments in AI voice synthesis. Its integration of DeepMoji for emotional analysis demonstrated the viability of incorporating sentiment-aware speech generation,[120] while its support for ARPABET phonetic transcriptions set a standard for precise pronunciation control in public-facing voice synthesis tools.[43] The platform's multi-speaker model, which enabled simultaneous training of diverse character voices, allowed the system to recognize emotional patterns across different voices even when certain emotions were absent from individual character training sets.[83] 15.ai also contributed to the reduction of training data requirements for speech synthesis. Contemporary models like Tacotron 2 required tens of hours of audio to produce acceptable results and failed to generate intelligible speech with less than 24 minutes of training data.[40] In contrast, 15.ai demonstrated the ability to generate speech with substantially less training data; the name "15.ai" refers to the creator's statement that a voice can be cloned with just 15 seconds of data.[43] The 15-second benchmark became a reference point for subsequent voice synthesis systems; the original statement that only 15 seconds of data is required to clone a human's voice was corroborated by OpenAI in 2024.[121]
See also
- Character.ai – AI chatbot service
- Ethics of artificial intelligence – Challenges related to the responsible development and use of AI
Explanatory footnotes
- Attributed to multiple references: Rock Paper Shotgun,[1] Clubic,[2] GLITCHED,[3] United Daily News,[4] Analytics India Magazine,[5] Inverse,[6] Speechify,[7] The Guardian,[8] Independent,[9] and Alex Irpan.[10]
- Attributed to multiple references: AI Incident Database,[23] AI, Algorithmic, and Automation Incidents and Controversies,[24] Gamereactor,[16] The Journal,[25] Eurogamer,[13] and GameGuru.[34]
- Attributed to multiple references: The Mary Sue,[11] Kotaku Australia,[12] Eurogamer,[13] NME,[14] Muropaketti,[15] Gamereactor,[16] Web3 Is Going Just Great,[17] StopGame,[18] iXBT Games,[19] DTF,[20] Sport.es,[21] FZ,[22] AI Incident Database,[23] AI, Algorithmic, and Automation Incidents and Controversies,[24] The Journal,[25] LevelUp,[26] Stevivor,[27] PlayStation Universe,[28] Checkpoint Gaming,[29] Tech Times,[30] Mobidictum,[31] OtakuPT,[32] Gamebrott,[33] GameGuru,[34] Shazoo,[35] Geek Culture,[36] and Lehrman v. LOVO.[37]
References
External links