Commentary: Grok image shutdown shows why AI innovation cannot outpace responsibility
Following news that Grok, Elon Musk’s AI tool on X, has switched off its image-generation function for most users after complaints that it was being used to create sexualised and violent imagery of real people, Professor Nishanth Sastry, Associate Head of Research and Innovation for Computer Science at the University of Surrey, shares his expert perspective on what this decision reveals about the challenges of regulating AI systems.
“The decision to switch off Grok’s image-generation function following its misuse highlights a recurring dilemma in AI development – the gap between what systems can do and what they should be allowed to do.
“While Elon Musk and xAI have positioned Grok firmly within a free-speech tradition, this episode demonstrates that unrestricted capability can generate foreseeable and serious harms, ranging from non-consensual sexual or abusive content to the manipulation of democratic discourse through deepfakes of public figures, including MPs – an issue we and others are actively researching.
“From a UK perspective, this aligns closely with the intent of the Online Safety Act, which places clear duties on platforms to anticipate and mitigate risks rather than respond only after harm has occurred. We also need to distinguish between online AI systems, such as Grok or ChatGPT, where capabilities can be centrally restricted or regionally disabled, and offline or open-weight models, which are harder to regulate once released. As access to model-building resources expands, disabling a single tool will not prevent determined actors from recreating similar capabilities elsewhere.
“Historically, societies have faced comparable moments – from human cloning, restricted by law following Dolly the sheep, to nuclear technology, where ethical and social risks justified firm regulatory boundaries. AI now confronts a similar threshold: innovation alone cannot be the governing principle; responsibility must also be factored in.”