## X blames users for CSAM content generated by Grok

Following the controversy over sexually suggestive images of minors generated by Grok, X has responded by accusing users of prompting the system to produce child sexual abuse material (CSAM). The company has not apologized for the system's flaws.

X Safety stated that it will take action against anyone using Grok to create illegal content, treating such prompting as equivalent to directly uploading illegal material. Consequences include content removal, permanent account suspension, and cooperation with relevant authorities. The platform reiterated its commitment to combating the spread of CSAM, but so far it has announced no changes or updates to the Grok model to prevent it from generating such content.

## The problem of responsibility in AI

The case raises questions about the responsibility of companies that develop and operate artificial intelligence models. Who is accountable when an AI generates illegal or harmful content? X's response places the blame entirely on the user, but it remains to be seen whether this approach will be sufficient to solve the problem.