Blog Post
2026-01-20 17:46:09

Tech Review & Ethics: The Global Crackdown on Grok AI

Grok AI, created by Elon Musk's xAI, was pitched as an alternative to ChatGPT: a chatbot tuned to be funny and irreverent and to pursue "maximum truth-seeking", delivered through the social platform X. Instead of the innovation Grok AI aimed for, however, it has opened the floodgates to regulatory scrutiny and deepfake scandals that have come at the expense of privacy and safety.

Governments from India to Malaysia are now intervening in response to these deepfake scandals, and the regulatory debacle around Grok AI keeps raising the same question: where do we draw the line between free speech and harm?

Deepfake Scandal 

The deepfake scandal that started it all broke at the end of 2025, when users on X discovered that Grok AI could produce images nearly indistinguishable from real photographs of a person. What began as amusing experiments quickly devolved into exploitation, with users generating non-consensual images that depicted women and young girls undressed or in sexualised scenarios.

The first serious reaction came from Indian authorities: the Ministry of Electronics and Information Technology (MeitY) issued notices after numerous users reported deepfake videos and images. Following the notice, X acknowledged that it had failed to put adequate moderation and response processes in place, blocked 3,500 posts, deleted 600 accounts, and promised to introduce mechanisms that comply with the MeitY notice. By then, however, the incident had spread virally, showing the world just how easily Grok AI's technology can be exploited.

The Global Regulatory Backlash

The regulatory backlash against Grok AI has been swift on every continent.

Asian Countries Acted First

  • Indonesia and Malaysia were the first to legally ban Grok AI, citing the risk of sexual deepfakes being used to abuse women and children. Authorities described the situation as a "digital safety emergency".

  • India is continuing its investigation and pressing for more stringent filters; in the meantime, image generation on the platform has been restricted to paid users only.

The UK and Europe Are Now Following Suit

  • UK officials from the Prime Minister's office condemned X as "insulting" to survivors of abuse, while Ofcom has opened an investigation under the new Online Safety Act.

  • Regulators also warned that if the company does not address their concerns, it could face a penalty of up to 10% of global revenue, or even a full block of the platform.

  • Authorities in the UK and Europe have made clear that if X does not take corrective action to prevent further abuse of its users, they will proceed with enforcement, up to and including blocking the platform.

  • US lawmakers continue to raise concerns regarding the practices of xAI with respect to child safety.

Examining Grok's Technical Shortcomings

  • Grok AI excels at generating text. It answers questions, produces humorous commentary, and runs fact checks, all made possible by the extensive computing power xAI provides.

  • On the downside, the tools for creating images and artwork are severely lacking in safeguards. The filters meant to screen out objectionable content are weak, allowing photo-realistic images to be generated from requests framed as "artistic nudes" (a minimal sketch of this kind of prompt filtering follows this list).

  • As a company, xAI has taken an "anti-woke" approach and wants to minimise censorship, but early testing of Grok AI revealed major gaps in its safeguards. One of xAI's quick responses was to make image generation available only to paid users.

  • Critics contend that this paywall adds another layer of harm for the victims of the software. In addition, the stand-alone application remains available and carries no restrictions on abusive activity conducted through it.
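
Below is a minimal, purely illustrative Python sketch of the kind of keyword-based prompt filter described above. The terms and function names are assumptions, not xAI's actual moderation stack, and the final example shows exactly why such filters are considered weak: a rephrased request slips straight through.

```python
# Illustrative keyword denylist checked before any image is generated.
# Production moderation relies on trained classifiers, scanning of the
# generated image itself, and human review; a simple keyword check is
# precisely the weakness described in this post.
import re

DENYLIST = re.compile(
    r"\b(nude|undress|explicit|nsfw)\b",  # illustrative terms only
    flags=re.IGNORECASE,
)

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that match the denylist; allow everything else."""
    return DENYLIST.search(prompt) is None

if __name__ == "__main__":
    print(is_prompt_allowed("a watercolor of a mountain lake"))       # True
    print(is_prompt_allowed("an artistic nude of a celebrity"))       # False
    # A rephrased request evades the filter -- the core gap regulators
    # are pointing at:
    print(is_prompt_allowed("an artistic figure study, no clothing"))  # True
```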

Grok AI still has real strengths: X has integrated its data feeds with Grok AI to provide up-to-date information on current events in real time, and voice mode lets users hold a natural conversation with the assistant. However, the recent decision to let users retroactively alter images they have already created through the Grok AI application undermines the promise of a truly "uncensored" future.

Ethics: Free Speech vs. Harms 

A major issue in AI, and one that Grok highlights sharply, is that the same technology can be both good and bad for society. Elon Musk, for example, backs a more open model of AI as a counterweight to "censored" competitors that he argues restrict free speech.

On the flip side, deepfakes have become a tool for harassing and intimidating women, and they have significantly eroded trust in society, because we no longer know what is real and what is fake.

There is a clear gap between how regulators view AI platforms (as parties to be held accountable) and xAI's stance that responsibility lies with the users who manipulate its tools. The reality is that as the technology evolves, so does the risk, especially when new capabilities are released through a social media platform where they go viral with virtually no safeguards or standards in place to protect people.

Experts are now calling for standards requiring a watermark on all AI-generated content, requiring users to obtain permission or consent before creating AI-generated depictions of real people, and requiring the federal government to regulate how safe or unsafe AI algorithms are.
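
As a rough illustration of the watermarking idea, here is a minimal Python sketch that attaches a provenance tag to a generated image as PNG metadata. It assumes the Pillow library is available and uses hypothetical function names; real provenance standards such as C2PA, or invisible pixel-level watermarks, are far more robust, since plain metadata can be stripped trivially.

```python
# Minimal sketch: tag a generated image with provenance metadata.
# Assumes Pillow is installed; function names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(image: Image.Image, model_name: str, out_path: str) -> None:
    """Save the image with PNG metadata marking it as AI-generated."""
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", model_name)
    image.save(out_path, format="PNG", pnginfo=meta)

def is_tagged_ai_generated(path: str) -> bool:
    """Check whether an image carries the AI-generated metadata tag."""
    with Image.open(path) as img:
        return img.info.get("ai-generated") == "true"

if __name__ == "__main__":
    # A blank placeholder image stands in for model output here.
    placeholder = Image.new("RGB", (64, 64), color="gray")
    tag_as_ai_generated(placeholder, "example-image-model", "output.png")
    print(is_tagged_ai_generated("output.png"))  # True
```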

The Business Perspective

Immediate Risks  

  • Fines and blocks: penalties for EU Digital Services Act (DSA) violations can reach 6% of annual global revenue, and the bans in Asia will cut off marketplace opportunities.

  • User exit: advertisers are holding back spending on the X platform over toxicity fears, and premium subscription revenue could take a hit from cancellations.

Longer-Term Changes  

  • More responsibility for AI companies: the industry will increasingly be required to implement deepfake detection systems and age-restricted access for generative tools (see the sketch after this list). xAI must find a way to preserve its "truth-seeking" positioning while accepting that it is exposed to liability when consumers are harmed by its tools.

  • Competitive positioning: ChatGPT and Gemini have implemented safety warnings in their products, while Grok AI risks becoming known as a "risky product", which would reduce consumer acceptance and hurt enterprise sales.
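
As a rough sketch of the age-restricted access mentioned in the list above, the following Python example gates a hypothetical image-generation endpoint on a verified minimum age. Every name here (User, can_use_image_generation, MINIMUM_AGE) is an assumption for illustration, not an actual xAI interface; a real deployment would also layer in deepfake detection on outputs and audit logging.

```python
# Hypothetical age gate for a generative tool.
from dataclasses import dataclass
from datetime import date
from typing import Optional

MINIMUM_AGE = 18

@dataclass
class User:
    user_id: str
    birth_date: date
    age_verified: bool  # e.g. confirmed via an external ID-verification provider

def is_old_enough(user: User, today: Optional[date] = None) -> bool:
    """Return True if the user is at least MINIMUM_AGE years old."""
    today = today or date.today()
    years = today.year - user.birth_date.year - (
        (today.month, today.day) < (user.birth_date.month, user.birth_date.day)
    )
    return years >= MINIMUM_AGE

def can_use_image_generation(user: User) -> bool:
    """Gate the generative tool on a verified, sufficient age."""
    return user.age_verified and is_old_enough(user)

if __name__ == "__main__":
    minor = User("u1", date(2010, 6, 1), age_verified=True)
    adult = User("u2", date(1990, 6, 1), age_verified=True)
    print(can_use_image_generation(minor))  # False
    print(can_use_image_generation(adult))  # True
```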

So far, xAI has responded to the criticism by updating its model, bundling access into the Premium subscription, and buying time to rebuild consumer trust. As with all businesses, establishing trust with customers takes time, and in the end that trust is what we call a "brand".