Italian regulators on Friday issued a temporary ban on ChatGPT, effective immediately, over privacy concerns and said they had opened an investigation into how OpenAI, the US company behind the popular chatbot, uses data.
Italy’s data protection agency said users lacked information about how their data is collected and that a data breach involving ChatGPT had been reported on March 20.
‘There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,’ the agency said.
The Italian regulator also expressed concern over the lack of age verification for ChatGPT users, arguing that this ‘exposes children to receiving responses that are absolutely inappropriate to their age and awareness.’ The platform is intended for users older than 13, it noted.
The data protection agency said OpenAI would be barred from processing the data of Italian users until it ‘respects the privacy regulation.’
OpenAI has been given 20 days to communicate the measures it will take to comply with Italy’s data rules. Otherwise, it could face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.
Since its public release four months ago, ChatGPT has become a global phenomenon, amassing millions of users impressed with its ability to craft convincing written content, including academic essays, business plans, and short stories.
But concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society, putting pressure on regulators around the world to act.