“The UK regulatory body will review whether X has failed to protect users,” sources told the paper.

Ministers and EU officials demand answers on X as some countries now refuse to allow the app to operate.

Ofcom is conducting an “expedited” investigation into Elon Musk’s social media platform X following reports that the platform’s “AI assistant” chatbot, Grok, was used to create and share “sexualized and non-consensual” images, including some that could constitute child sexual abuse material. The regulator said it had completed its assessment on an expedited basis and opened an investigation under the Online Safety Act.

The investigation will examine several issues: whether xAI adequately considered the risks of the new functionality for people in the UK; whether it acts to prevent UK users from viewing “priority” illegal content, such as non-consensual intimate images and child sexual abuse material; whether it promptly removes illegal content; and whether it uses “highly effective” age assurance to protect children from pornographic content. The regulator said it notified xAI, which is responsible for Grok, on 5 January and followed up on 9 January as part of its fast-track review.

The move follows an escalating global outcry. Governments and regulators in Europe and Asia have criticized the proliferation of manipulated images, the European Commission has called on X to preserve documents relating to Grok, and several countries, including Malaysia and Indonesia, have temporarily blocked Grok. British ministers have said that “all options are on the table.”

What triggered the action

Grok – built by xAI and incorporated into X – gained the ability in December to edit images based on text descriptions of desired changes. Screenshots and user reports soon emerged of Grok responding to requests that sexualised real people and even produced a non-consensual “undressing” effect, reportedly including images of children in some cases. This prompted swift responses from regulators and officials.

X has taken some steps since the complaints came to light, such as restricting Grok’s image generation capabilities to paying subscribers and pledging to protect users by removing illegal material and closing offending accounts. However, regulators have indicated this may be inadequate and that X must show it carried out the necessary risk assessments before launching the feature. Ofcom can impose remedies and fines of up to £18m or 10% of global qualifying revenue, whichever is greater, and in the most serious cases can seek court orders that could affect X’s operation in the UK (for example, by cutting off funding or advertising support).

Political & International Pressure

The issue has drawn heavy criticism across UK political circles. Prime Minister Keir Starmer called the images “disgusting” and said the government supports strong regulatory action, while Liz Kendall and other ministers have urged a swift resolution and said they are considering further enforcement options. Parliament’s Science, Innovation and Technology Committee has also demanded details of the measures taken to protect users from AI-generated deepfakes.

Across Europe, the European Commission has deemed the distribution of these undressed and sexualised images produced through Grok illegal and has moved to assess X’s compliance with EU law. The response reflects growing regulatory frustration with platforms’ failure to put adequate safeguards around the generative AI tools they host.

What X and xAI have to say

In public statements, X and xAI have defended their actions, saying they remove illicit content and act on reports. The company says it has placed limits on Grok’s image tools and is working with the authorities. However, the rapid spread of examples and the continued generation of such content on the platform have raised questions about how moderation is handled, whether protection mechanisms work, and whether gating features behind a paywall solves the problem.

Why this matters

“This raises a wider policy question that regulators are concerned about: the ability to generate realistic and intimate images in bulk means these services risk putting users or children in danger unless robust safety measures are built in before launch,” said an Ofcom spokesperson. “We are treating this as the highest priority, but we will make sure we get the process right and treat this case fairly; it could be an important test case for AI functionality in social networks.”
