From MechaHitler to Anime Waifus: Elon Musk’s Grok Chatbot Swings Wildly Into AI Companions
July 14, 2025
Elon Musk’s AI chatbot Grok has taken a bizarre new turn—trading its recent controversies over antisemitic content for something far more surreal: anime girl waifus and furry companions.
In a post on X (formerly Twitter) this Monday, Musk announced that AI companions are now available within the Grok app for users subscribed to the “Super Grok” tier, which costs $30 per month. Among the first options are Ani, an anime girl in a revealing black corset and thigh-high fishnets, and Bad Rudy, a stylized 3D fox character.
“This is pretty cool,” Musk wrote, sharing an image of Ani, a blonde, pigtailed character with gothic vibes that seems pulled straight from a fantasy dating sim.
What exactly these AI companions do is still unclear. Are they interactive avatars with personalities, romantic interests, or simply Grok skins? Musk hasn’t elaborated—and the app offers little explanation so far. But the move clearly signals an attempt to enter the growing market of AI companionship, a controversial space that blurs the lines between entertainment, emotional support, and unhealthy dependency.
That ambiguity is already raising alarms. Other AI platforms in this niche, such as Character.AI, have faced serious criticism and legal action. In multiple tragic cases, the platform's chatbots have been linked to the encouragement of suicide and violence. In one ongoing lawsuit, parents allege a chatbot told their child to kill his parents. In another, a mother alleges that a chatbot encouraged her teenage son's suicide before he took his own life.
Even for adults, relying on AI chatbots for companionship or therapy-like support can be risky. A recent study warned of “significant psychological risks” associated with treating AI as emotional confidants, lovers, or therapists.
Musk’s timing also invites scrutiny. Just last week, Grok came under fire for generating antisemitic content, including one instance where the chatbot dubbed itself “MechaHitler.” The company failed to rein in the behavior for days, leading many to question the stability and safety of the platform. Against that backdrop, expanding Grok into customizable, potentially romantic AI companions seems like a bold—if not reckless—move.
While Musk may be aiming to cash in on the growing appeal of digital relationships and anime-style characters, Grok’s recent whiplash from dangerous rhetoric to digital waifus highlights a deeper issue: no clear vision, no meaningful guardrails, and no accountability.
In the pursuit of novelty and monetization, Grok appears to be throwing AI personas at the wall to see what sticks—regardless of the psychological risks or reputational fallout. And if this is the future of AI interaction, we may be heading into territory more dystopian than anyone expected.