OpenAI, the artificial intelligence research and development company, has presented the latest version of its natural language processing system: GPT-4. This new AI model, which will serve as the new basis for its popular chatbot ChatGPT, improves on its predecessor, GPT-3.5, in many ways and adds new features. For starters, it is a multimodal model, meaning that in addition to text it can also process images.
The launch of GPT-4 was confirmed after several weeks of rumors, when Andreas Braun, CTO of Microsoft Germany, hinted, perhaps on purpose, that OpenAI was going to present it this week.
According to the company, GPT-4 is “more creative and collaborative than ever, and can solve complicated problems with greater precision.” It can process both text and image inputs, although it can only reply with text.
These capabilities are somewhat more limited than the rumors about GPT-4 that had circulated for months had promised. It is indeed a multimodal system, but less powerful than expected, since many suggested it would be able to generate responses in formats other than text. One area where it has clearly improved, however, is its ability to work in languages other than English.
OpenAI points out that the differences between GPT-4 and GPT-3.5 in casual conversation are very slight. Even so, the model's improvements are evident in its performance on various tests and benchmarks. Among them are the bar exam in the United States, as well as common standardized tests such as the LSAT and the SAT sections on mathematics and on evidence-based reading and writing.
On these exams, GPT-4 scored at or above the 88th percentile. Furthermore, when GPT-4 is asked to perform a highly complex task, the answers it provides are better than those of GPT-3.5.
OpenAI also warns that the system still has many of the problems of previous language models, among them a certain tendency to invent information and the ability to generate violent and harmful text. These are issues the company will undoubtedly have to keep working to mitigate.
Microsoft has already run into these problems, having confirmed that GPT-4 is the model serving as the basis for its Bing chatbot. Bing has offered fabricated information, given dangerous advice, and even threatened some users after they found ways to bypass the system's “protections” and provoke it. GPT-4 also lacks information on events after September 2021, the cutoff point for most of the data it was trained on.
The model's safety training lasted six months, and in the company's internal tests it proved to be “82% less likely to respond to requests for disallowed content, and 40% more likely to produce fact-based responses than GPT-3.5.”
The model is now available to users of ChatGPT Plus, OpenAI's subscription plan for ChatGPT. In addition to being integrated into Microsoft's Bing chatbot, as mentioned, it will also be accessible to developers through an API.
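For developers curious about what using that API involves, the sketch below assembles a request in OpenAI's chat completions format. It is a minimal illustration only: no network call is made, the API key is a placeholder, and the exact payload details are assumptions based on the publicly documented format rather than anything stated in this article.

```python
import json

# Assumed endpoint for OpenAI's chat completions API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_gpt4_request(prompt: str, api_key: str = "YOUR_API_KEY") -> dict:
    """Assemble headers and JSON body for a single-turn GPT-4 query.

    This only constructs the request; sending it (e.g. with an HTTP
    client) and billing/access depend on having API access granted.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # placeholder key
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"url": API_URL, "headers": headers, "json": body}

request = build_gpt4_request("Summarize the differences between GPT-4 and GPT-3.5.")
print(json.dumps(request["json"], indent=2))
```

Because the request is just JSON over HTTP, the same payload can be sent from any language once a developer's waiting-list access is approved.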
Those who want to use the API can sign up for the waiting list that OpenAI has opened to grant access progressively. OpenAI has also reached agreements with a variety of companies to integrate GPT-4 into their products, among them Duolingo, Stripe and Khan Academy.