
OpenAI improved ChatGPT by paying Kenyan workers less than $2 an hour

ChatGPT can undoubtedly be considered one of the most impressive and compelling technology innovations of the past year. It has been immensely popular since its launch last November: in less than a week it had a million users, and demand still saturates the platform today.

Its creator, OpenAI, is currently negotiating funding that would raise its valuation to around €29 billion. Among these investments, Microsoft is rumored to be preparing one worth $10 billion. But this success story is not all highlights: it also has several shadows. Among them is how OpenAI managed to reduce ChatGPT's toxicity by outsourcing the necessary work to Kenyan workers who were paid less than two dollars an hour, according to Time.

This work, vital to OpenAI, made ChatGPT much less toxic than GPT-3, which was known for its tendency to produce racist, sexist or violent statements. This was because the underlying model had been trained on hundreds of billions of words pulled from the internet; that is, on human language. As a result, GPT-3 had impressive language capabilities, but also a very high level of toxic output.

Eliminating this language from the training data turned out to be impossible, so an additional AI-powered safety mechanism was needed to create a chatbot suitable for everyday use: ChatGPT. To build this detector, which would be integrated into ChatGPT, OpenAI fed it labeled examples of violence, hate speech and abuse of all kinds, so that it could learn to detect such content instantly.
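To illustrate the general idea of training a detector on human-labeled examples, here is a minimal sketch in Python. It is not OpenAI's actual system: the library (scikit-learn), the TF-IDF plus logistic regression model, and the placeholder examples and labels are all illustrative assumptions, standing in for the undisclosed model trained on annotators' labels.

```python
# Minimal sketch (not OpenAI's actual system): training a toxicity classifier
# on human-labeled examples, in the spirit of the detector described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = harmful content, 0 = benign content.
texts = [
    "example of hateful text",    # placeholder for a labeled harmful passage
    "example of violent text",    # placeholder for a labeled harmful passage
    "a friendly product review",  # placeholder for a benign passage
    "a recipe for banana bread",  # placeholder for a benign passage
]
labels = [1, 1, 0, 0]

# Simple bag-of-words features plus a linear classifier, deliberately basic.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# At inference time, the detector can flag or filter candidate text.
print(classifier.predict(["some new text to screen"]))
```

The key point the sketch conveys is that the quality of such a filter depends entirely on the labeled examples, which is exactly the work that was outsourced.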

But doing so required a lot of work, and investment. So, starting in November 2021, OpenAI sent tens of thousands of snippets of text containing all kinds of abuse, from bestiality to torture, murder and child abuse, to an outsourcing company operating in Kenya.

OpenAI outsourced the work of improving ChatGPT to an external company

OpenAI did it in collaboration with an external company based in San Francisco, Sama. This company has employees in Kenya, Uganda and India who label data for large technology companies such as Google, Meta and Microsoft. Sama bills itself as an “ethical Artificial Intelligence” company and claims to have helped lift more than 50,000 people out of poverty. But the staff it employed were paid between $1.32 and $2 an hour for this work, depending on their experience and performance.

In a statement, an OpenAI spokesperson confirmed that Sama employees in Kenya helped train a tool the company was developing to detect toxic content, which has since been integrated into ChatGPT. The statement also points out that this work contributed to efforts to remove toxic data from ChatGPT's training data sets, recalling that the company's mission is “ensuring that artificial general intelligence benefits all of humanity”.

However, despite the importance of the work, it was done under exploitative conditions in developing countries, with workers also handling content that could be harmful to their mental health. In fact, the traumatic nature of the work faced by workers in Kenya led Sama to cancel its work for OpenAI in February 2022, eight months earlier than planned.

In total, OpenAI signed three contracts with Sama in late 2021, worth a combined $200,000. They establish that the work consists of labeling textual descriptions of sexual abuse, hate speech and violence. About three dozen employees were assigned to this work, divided into three teams, each focused on one of these themes. According to some employees, they were expected to read between 150 and 250 passages of text in each nine-hour shift, with passages ranging from around 100 to 1,000 words.

All of the workers TIME interviewed say they suffered mental health consequences from the work. Although they were required to attend sessions with wellness counselors, all said these sessions did not help them and were very sporadic, given the pressure to be more productive at work. Two of the interviewees say they were only given the option of group sessions, and one of them recalls that his requests to see counselors privately were repeatedly denied by Sama's management.

The company has denied that employees only had access to group sessions, and says that both individual and group sessions were available to them, with “licensed mental health therapists with professional experience” who were accessible at any time.

The signed contracts establish that OpenAI would pay Sama $12.50 an hour for the work, far more than the Sama employees who did it received. The agents, the lower-level labelers who made up the majority of the three teams, were paid a base salary of just $170 a month, company employees have confirmed. They also received a monthly bonus of around $70 because of the nature of their work, and could earn commissions for reaching certain performance targets, such as accuracy and speed.

As a result, an agent working nine-hour shifts could expect a take-home wage of $1.32 an hour after tax, rising to $1.44 an hour if they exceeded all their targets. Quality analysts, the higher-level labelers whose job was to check the agents' work, were paid $2 an hour if they met all of their targets.

Sama has claimed, however, that workers had to label 70 pieces of text for every nine hours worked, not as many as 250, and that workers could earn between $1.46 and $3.74 per hour after taxes.

However, the company has not specified which positions earned more than two dollars an hour, adding that the “$12.50 an hour rate covers all costs, including infrastructure expenses, salary and benefits for associates and their full-time team of QA analysts and team leaders”. OpenAI, for its part, has indicated that it did not impose productivity targets and that Sama was responsible for managing payments and everything related to the employees' mental health.

The end of Sama’s work for OpenAI

In February 2022, the relationship between Sama and OpenAI briefly deepened, only to break down shortly afterwards. That month, Sama began working on a pilot program for another OpenAI project: collecting sexual and violent images, some of them illegal under US law, to deliver to OpenAI. This image-labeling work appears to be entirely unrelated to ChatGPT, and it is not known what purpose the company pursued with the images, although OpenAI has indicated that labeling them was necessary to make its Artificial Intelligence tools safer.

Thus, in February, according to an invoice, Sama delivered to OpenAI a sample batch of 1,400 images that included all kinds of abuse and violence. But within weeks, Sama canceled all of its work with OpenAI, in some cases earlier than the contracts stipulated. According to Sama, its agreement to collect images for OpenAI did not include references to illegal content, and only after the work had started did OpenAI send additional instructions referring to some illegal categories.

At that point, according to the company, the East Africa team raised its concerns with company executives; Sama ended the image classification pilot and gave notice that it would cancel the rest of the OpenAI projects. “The people working with the client did not review the request through the proper channels. After reviewing the situation, those individuals were fired and new sales team review safeguards and policies were put in place.”

OpenAI's version of events says only that it did not need one type of the images it was sent, those related to child abuse, and that it expressly instructs its employees to avoid such material.

Sama's decision to stop working with OpenAI meant that its employees no longer had to view harmful content, but it did have an impact on their livelihoods. Some were let go, and others were transferred to even lower-paying teams. Workers were also given other reasons for the cancellation of the OpenAI contracts.

In February 2022, Time published a story about the working conditions at the outside companies Facebook uses in Africa. Sama appeared in the report, which detailed what the jobs of the content moderators it employed for the social network were like.

These workers had to view images and videos of executions, child abuse and rape for a wage of just $1.50 an hour. Four Sama employees say they were told that the reporting behind that piece is what led the company to stop working with OpenAI, and therefore on improving the AI for ChatGPT: as a result of the article, other clients began asking for explanations and asking to terminate their contracts with Sama, as Lufthansa specifically did. This led Sama to decide to abandon this type of work and dedicate itself to activities unrelated to reviewing harmful content.
