Italy lifts the ban on ChatGPT after OpenAI makes several concessions to the Data Protection Authority

Last Friday, April 28, OpenAI announced in a statement to the Associated Press that it had made the changes to ChatGPT requested by Italy’s privacy regulator. It should be remembered that the regulator had restricted use of the artificial intelligence tool over the absence of age-verification controls and its collection of online users’ data.

Although OpenAI was given an April 30 deadline to fix these issues, the move ultimately came sooner. To regain the Italian market, the company implemented a series of concessions demanded by the Italian Data Protection Authority.

Chief among the measures that secured ChatGPT’s return to Italy is clearer information for users about how ChatGPT collects their data. A new form lets them freely decide whether or not to remove their data from the tool’s training algorithms. Users can also disable chat history and refuse data export, greatly restricting what can be done with their information.

Similarly, Italian users must now provide their date of birth when registering, implementing a thorough age check. This allows ChatGPT to block users under 13 and to request parental permission for those under 18.
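The age rule described above can be sketched as a simple check. This is an illustrative sketch only; the function name, return values, and exact comparison logic are assumptions, not OpenAI’s actual implementation:

```python
from datetime import date
from typing import Optional

def check_signup_age(birth_date: date, today: Optional[date] = None) -> str:
    """Illustrative age gate: block under-13s, require parental
    permission for 13-17, allow 18+ (not OpenAI's real code)."""
    today = today or date.today()
    # Age in whole years: subtract one if the birthday hasn't occurred yet this year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "blocked"
    elif age < 18:
        return "parental_permission_required"
    return "allowed"
```

For example, a user born in 2008 registering in May 2023 would be 15 and therefore fall into the parental-permission bracket under this sketch.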

Despite these two important measures, the Italian Data Protection Authority is not entirely satisfied and is asking OpenAI to address the remaining demands, such as launching an advertising campaign to inform users about how the tool works and how to opt out of sharing their data. For the regulator, the key lies in combining technological progress with respect for people’s rights.

Since the restrictions imposed on ChatGPT on March 31, following the massive data leak of March 20, OpenAI has been working very actively to avoid losing not only Italy but the entire European market.

The impact of ChatGPT in Italy

In little more than two months after its launch, ChatGPT attracted more than 100 million active users per month, becoming the fastest-growing application of all time and surpassing the likes of TikTok and Instagram, which took nine months and two and a half years, respectively, to reach a comparable audience.

The impact on other countries

The preliminary investigation led by Italy against OpenAI is being carried out within the committee that brings together the European Union’s privacy authorities. However, despite opening its own line of inquiry, Spain has not yet blocked access to ChatGPT.

The privacy regulators of France, Ireland, and Germany have contacted their Italian counterparts to obtain more information on the matter and to study a possible block of ChatGPT over the risks it poses to data security.

Many voices are calling for reasonable control of AI, which currently lies in the hands of companies competing to dominate the market. The European Commission, the European Parliament, and the Council of the EU are considering redrafting the Artificial Intelligence Act, given the enormous power that emerging technologies like ChatGPT possess.

In the United States, the case for drafting a new AI law is less clear. The Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission (FTC) issued a joint statement affirming that their existing legal authorities apply to automated systems built on new technologies.

This is intended to prevent harms such as fraud, automated unlawful discrimination, and algorithms that perpetuate illegal bias. Experts maintain that self-regulation is the most effective way to innovate at a competitive pace, but always within the bounds of existing laws.

Along the same lines, China moved to regulate AI technologies shortly after Joe Biden announced audit rules for them. The Cyberspace Administration of China unveiled draft regulatory measures for generative artificial intelligence services and said it wants companies to submit security assessments to the authorities before launching their products publicly.

Faced with this situation, large technology companies such as Google and Microsoft are lobbying EU legislators to exclude AI from the ‘high risk’ designation.
