Ofcom has issued a warning to tech companies about their new safety responsibilities under the upcoming Online Safety Act, as the regulator published an open letter on generative AI and chatbots.
The online safety regulator said it was publishing its letter to remind firms about how generative AI and chatbots will be regulated under the new rules after a number of reports of “distressing incidents” involving the technology in recent weeks and months.
Ofcom highlighted a case in the US where a teenager died after developing a relationship with a chatbot based on a Game Of Thrones character, and a case first reported by the Daily Telegraph where users of a generative AI chatbot platform created bots to act as virtual clones of real people and dead children – including Molly Russell and Brianna Ghey.
In its letter, Ofcom said it wanted to remind firms that any user-to-user site or app that enables people to share content generated by a chatbot on that site with others will be in the scope of the Online Safety Act, as will sites which allow users to create their own chatbots that could be made available to others.
The new online safety rules, which will start coming into full force next year, will require social media and other platforms that host user-created content to protect users, particularly children, from illegal and other harmful material.
The rules will require the largest platforms to create systems to proactively remove illegal and other potentially harmful material, while also providing clear reporting tools to users and carrying out risk assessments, among other new duties.
Those who fail to comply with the new duties face fines which could reach billions of pounds for the biggest sites.
Generative AI and AI-powered chatbots have exploded in popularity since the launch of ChatGPT in November 2022, with many rival chatbots and other tools now available online.
In its letter on generative AI and chatbot technology, Ofcom said any AI-generated text, audio, images or videos shared on a user-to-user service is considered “user-generated content” and therefore within scope of the new online safety rules.
The regulator also noted that generative AI tools which “enable the search of more than one website and/or database” are considered search services and therefore within the reach of Ofcom under the Act.
“Where the above scenarios apply to your service, we would strongly encourage you to prepare now to comply with the relevant duties,” the letter said.
“For providers of user-to-user services and search services, this means, among other requirements, undertaking risk assessments to understand the risk of users encountering harmful content; implementing proportionate measures to mitigate and manage those risks; and enabling users to easily report illegal posts and material that is harmful to children.
“The duties set out in the Act are mandatory. If companies fail to meet them, Ofcom is prepared to take enforcement action, which may include issuing fines.”
From December, online services must begin carrying out risk assessments around illegal online harms, with the codes of practice around those harms currently expected to come into effect in March next year – the first phase of the Online Safety Act’s implementation.