Grok 'Undress': Deepfake, Dangerous, Perverse
- Independent Ink

- Jan 22

Users have exploited the bot to create deepfake images that ‘undress’ photographs of women and children posted online, causing intense psychological trauma to the victims. Any pervert, stalker, predator, or someone holding a grudge can easily upload a photo to it. However, is making ourselves digitally invisible really a solution?
By Warsha Mishra
The generative artificial intelligence chatbot on X has been widely misused to target women, children, and other vulnerable groups. Users have exploited the bot to create deepfake images that ‘undress’ photographs posted online, causing intense embarrassment and psychological trauma to the victims. A public outcry over this issue has prompted Elon Musk’s social media platform to promise corrective action.
Being bombarded daily with news about women in our country being subjected to heinous sexual violence is taking a toll on many of us, leading to feelings of numbness and exhaustion. Some have actively begun avoiding it and looking away to preserve their sanity, even as the number of these cases grows day by day.
Given the current situation, a question arises: why does it feel like the world has suddenly taken a turn for the worse?
In reality, women have been victims of humiliation and brutality worldwide throughout history, often justified by religious, political, and social reasoning. We were taught to believe that with economic and technological progress, the situation would improve, but that promise now seems like a mirage.

High-profile cases came as a shock to many: the Telegram rape chats exposed by a German investigation, in which 70,000 men shared tips and videos on assaulting women, and the extensive, organised rings in South Korea that used AI to distribute deepfake pornography.
Technological advancement should have been used to provide tools for safety and empowerment to women, children and other vulnerable sections, but, instead, it is also being weaponised against us.
Grok, the artificial intelligence chatbot on the social media platform X (formerly Twitter), is the latest addition to this list. In recent days, a large number of requests to undress women and children were made to the bot, and, shockingly, it complied.
A point to note is that mainstream AI chatbots and platforms do not normally accept explicit requests. Guidelines can be built in to reject obscene prompts, as ChatGPT does, but Grok had no such safeguards. As a result, photos of women and little girls, taken from X without their consent, became fodder for the bot to generate explicit images. There is no limit to the harm and damage this kind of technology can cause to individuals and their families.

When Elon Musk was tagged in one such post about the harm Grok was causing, instead of taking it seriously, he, typically, brushed it off with a joke. For days, neither the platform nor the authorities took meaningful action. Complaints made to cyber cells also went nowhere, proving that the existing systems are not prepared to deal with the misuse of such technology.
Following constant online backlash for over a week, the platform finally rolled back the image-generation feature for free users on Saturday. However, paid users can still access it, and when the cost of a paid account is so low, this largely defeats the purpose.
When the issue was first raised by the victims, a predictable response was that once a photo is posted publicly, it ceases to be the owner’s property, and there is no guarantee it won’t be misused.
Instead of tackling the issue at a systemic level and holding the app owner and developers accountable, the responsibility was once again shifted onto the victim, even though this entire situation could have been easily avoided by following the appropriate guidelines during development and deployment. It's common knowledge that anti-social elements and professional predators, who would use such technology in disturbing ways, exist in large numbers in our society.
Under nearly every post about this issue, the common suggestion was for women to stop posting their own or their children’s photos online, with some women expressing relief that they never post their photos on X or don’t have a public profile.
However, is making ourselves digitally invisible really a solution to this?

Even with such precautions, women would still need to maintain an online presence for professional purposes. And even if the photos are not on a public account, there is still no guarantee that someone won't take images from a family member or friend's private account and misuse them.
Any pervert, stalker or someone holding a grudge can easily click a photo and upload it to a deepfake app.
While it is wise to be careful about the things we post online, it is quite alarming that such unregulated technology was intentionally and recklessly made widely accessible to the general public. Giving power over this kind of technology to anyone and everyone without the necessary safety constraints is an obvious hazard, a dangerous trend, and something every developer should be required to consider for legal and security reasons before deployment.
Society repeatedly tells women that if they take up less space and make themselves less visible, online and offline, they can avoid becoming targets. But in the case of Grok, even those who dress modestly and are not active users of the platform, including children and hijab-clad women, were still subjected to degradation. Women who were fully dressed were being undressed using Grok, while those who were comfortable showing their bodies were being covered up.
This exposes the rampant spread of rape culture, in which incels treat women’s bodies as their property and view all of this as a degenerate game of power and control.
This incident is hauntingly reminiscent of the well-known quote by Margaret Atwood:
“Male fantasies, male fantasies, is everything run by male fantasies? Up on a pedestal or down on your knees, it's all a male fantasy: that you're strong enough to take what they dish out, or else too weak to do anything about it.”

After days of outrage confined to social media, mainstream media outlets gradually began covering the story as public backlash grew. As a result, governments across the world, including those in Europe and Asia, have issued warnings to X to remove the vulgar content or face legal repercussions. Malaysia and Indonesia were the first countries to temporarily block Grok AI.
India’s IT ministry also intervened, stating that Grok was being misused to create, host and share obscene, predatory and paedophilic images. It directed X “to comply with the rules and submit an Action Taken Report (ATR) within 72 hours”, warning that failure to do so would result in the loss of its exemption from liability under Section 79 of the IT Act and further legal action.
Following the warnings and backlash, Elon Musk has finally responded that anyone using Grok to create illegal content will face the same consequences as if they uploaded it.
The Grok situation raises serious concerns about the unethical ways in which AI tools collect and manipulate data to generate realistic images, videos, and audio without the consent of the individuals involved. Tech visionaries and enthusiasts constantly try to justify and defend the use of copyrighted content to produce new material, even though it is clearly illegal and exploitative.
If no rules or regulations are followed while developing these apps, and the actions carried out through them lie beyond ethical and legal bounds, then the promise of civility and progress made to us over the years begins to lose meaning. It also makes us wonder whether the moral lesson we were all taught in childhood, that actions have consequences, still applies in an era of unchecked and rapidly growing technology, driven mostly by an infinite greed for financial profit and corporate power.
The cost of development, technological, economic or social, should not be paid by the most vulnerable sections of society. Progress that comes at the expense of women, children, queer people and creatives is no progress at all.

Warsha Mishra is a graphic designer with a background in computer science and software systems, and professional experience in large-scale tech environments that incorporate machine learning and AI. As a designer, she has contributed work to environmental initiatives such as the Green Climate Fund and to organisations supporting women and young girls, including MJAS in Rajasthan. With experience working with AI, both as a developer and an end user, she is dedicated to promoting the ethical use of technology that prioritises the well-being of people and the environment.
