The 7 Artificial Intelligence news stories that marked its progress in 2022

In this year that is about to end, Artificial Intelligence has occupied a prominent position in news and developments; it was even just named "word of the year." And not only because of its abundance, but also because of its importance. Undoubtedly, this year we have reached a remarkable level of Artificial Intelligence capable of generating creative works with text, images, sound and video. After a decade of research, deep learning AI has come to the fore and begun to enter commercial applications.

In this way, millions of people have been able to try this kind of technology for the first time, producing marvelous and controversial creations that have left few people indifferent. Accompanied by news that generated quite a bit of controversy, these are the seven most notable Artificial Intelligence developments of 2022.

DALL-E 2: Dreams in Pictures

Last April, OpenAI announced DALL-E 2, a deep learning image synthesis model capable of generating images from text prompts as if by magic. Trained on hundreds of millions of images pulled from the Internet, DALL-E 2 knows how to combine them into novel compositions thanks to a technique called latent diffusion.
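DALL-E can now also be reached programmatically. A minimal sketch, assuming the 2022-era openai Python library (the pre-1.0 Image endpoint, in beta since November 2022) and an API key in the OPENAI_API_KEY environment variable:

```python
# Generate an image with OpenAI's Image endpoint.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a teddy bear painting a self-portrait, oil on canvas",
    n=1,                  # number of images to generate
    size="1024x1024",
)
print(response["data"][0]["url"])  # temporary URL of the generated image
```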

Shortly after its release, social networks filled up with compositions made by the model, which overcame all the shortcomings of its first version. At launch, OpenAI allowed only 200 beta testers to try it, and added filters to the system to block violent and sexual prompts. Little by little it admitted more users, reaching a million people in a closed test phase, until it announced general availability last September.

A Google engineer believes that LaMDA is sentient

At the beginning of last July, Google suspended the engineer Blake Lemoine from his job (though with pay). The Mountain View company made the decision after Lemoine became convinced that the LaMDA model (Language Model for Dialogue Applications), developed by Google, is sentient and therefore deserves the same rights as a human.

While working in Google's Responsible AI division, Lemoine began chatting with LaMDA about philosophy and religion, and came to believe that there was real intelligence behind the text. Speaking to the Washington Post, he said that he saw a person when he talked to the model, and that "it doesn't matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them, and I hear what they have to say, and that is how I decide what is and isn't a person."

Google countered that LaMDA was only telling Lemoine what he wanted to hear and that it was not, in fact, sentient. Like the GPT-3 text generation model, LaMDA had been trained on millions of websites and books. It replied to Lemoine's text inputs by predicting the most likely next words, without any deep understanding.
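LaMDA itself is not public, but that next-word mechanism is easy to illustrate with an open model such as GPT-2. A minimal sketch, assuming PyTorch and Hugging Face's transformers library are installed:

```python
# Next-token prediction: the model assigns a score to every vocabulary token
# and "speaks" by emitting the most likely continuation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I talk to them and listen to what they have to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence length, vocabulary)

next_token_id = logits[0, -1].argmax().item()  # highest-probability next token
print(tokenizer.decode(next_token_id))
```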

Meanwhile, Lemoine allegedly violated Google's confidentiality policy by telling third parties details about his work group. In late July, Google fired Lemoine outright for violating its data security policies.

DeepMind AlphaFold predicts almost all known protein structures

Also last July, DeepMind announced that its Artificial Intelligence model AlphaFold had predicted the structures of virtually all known proteins from almost every organism on Earth whose genome has been sequenced. The model had already made headlines in the summer of 2021, when it predicted the shapes of all human proteins. A year later, its protein database had grown to hold more than 200 million protein structures.

DeepMind has made these protein structures available to anyone who wants to examine them, in a public database at the European Bioinformatics Institute of the European Molecular Biology Laboratory (EMBL-EBI). The institution allows researchers anywhere in the world to access the data and use it for research in biology and medicine.
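The database can also be queried programmatically. A minimal sketch, assuming the alphafold.ebi.ac.uk prediction endpoint and its pdbUrl response field (check the EMBL-EBI documentation for the current API); P69905 is the UniProt accession for human hemoglobin subunit alpha:

```python
# Fetch AlphaFold's predicted structure for one protein from the public database.
import requests

uniprot_id = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
entry = resp.json()[0]  # one record per predicted model

# Download the predicted structure as a PDB file.
pdb_text = requests.get(entry["pdbUrl"], timeout=30).text
with open(f"{uniprot_id}.pdb", "w") as f:
    f.write(pdb_text)
```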

Image synthesis goes open source with Stable Diffusion

In August, Stability AI and CompVis released the image synthesis model Stable Diffusion 1.4. It is similar to DALL-E 2, but while that model was released as a closed service with many restrictions, Stable Diffusion is an open source project with source code and checkpoint files. Its openness allows anyone to generate any synthesized content without restrictions, and it can be used locally and privately on computers with a powerful GPU.
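Running it locally takes only a few lines. A minimal sketch, assuming Hugging Face's diffusers library, PyTorch and a CUDA-capable GPU:

```python
# Generate an image locally with Stable Diffusion 1.4 via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```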

This move by Stability AI, however, has not been equally well received by everyone. Since its announcement, the model has drawn considerable criticism over its potential for generating political misinformation, non-consensual pornography, child abuse material and fabricated alternative histories.

Artists have complained that it could put them out of work, and the bias of the dataset used in its training has also been criticized. In addition, the techniques used to build its image datasets have caused other controversies, as when several people reported that their private medical photos had been scraped from the Internet.

Regardless, a number of hobbyists have embraced the model and very quickly built an open source ecosystem around it. Some products have already integrated its engine into their apps and websites, while Stable Diffusion has continued to evolve: version 2.1 is now available.

AI-generated art wins contest in Colorado

Also in August, Colorado resident Jason Allen submitted three AI-generated images, created with the commercial Midjourney model, to the Colorado State Fair's fine art contest.

At the end of the month, one of those works won first prize in the Digitally Manipulated Photography category. When the news became public, it caused a stir and sparked an intense debate on social networks about the nature of art and what it means to be an artist.

Cicero, from Meta, wins playing Diplomacy

In November, Meta announced the Artificial Intelligence agent Cicero, capable of beating humans at the strategy board game Diplomacy in online matches played on webDiplomacy.net. It is a game that requires persuasion, cooperation and negotiation with other players in order to win, and Cicero managed it: Meta developed a bot that could fool humans into thinking they were playing against another human.

To hone its negotiating skills, Meta trained Cicero's large-scale language model component on text taken from the Internet and on transcripts of 40,000 Diplomacy games played by humans on the aforementioned website. Additionally, Meta developed a strategic component that could observe the state of a game and predict how other players might behave in order to act accordingly.
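A purely illustrative skeleton of that two-part design, with hypothetical names throughout (Meta's actual implementation, open-sourced alongside the Cicero paper, is far more involved):

```python
# Hypothetical sketch of Cicero's described architecture: a strategic
# component that reads the board and anticipates opponents, feeding a
# dialogue component that negotiates in natural language.
from dataclasses import dataclass

@dataclass
class GameState:
    units: dict            # each power's armies and fleets
    supply_centers: dict   # who controls which supply centers

def predict_opponent_moves(state: GameState) -> dict:
    """Strategic component: estimate each player's likely orders."""
    ...

def plan_own_moves(state: GameState, predictions: dict) -> list:
    """Choose orders that work well against the predicted behavior."""
    ...

def draft_messages(state: GameState, plan: list) -> dict:
    """Dialogue component: generate negotiation messages consistent with the plan."""
    ...
```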

Meta believes it can apply what it has learned with Cicero to a new generation of games with smarter NPCs that break down the barriers between humans and AI in multi-session conversations. The same technique applied to other social scenarios, however, could manipulate or deceive humans when an AI pretends to be a person.

ChatGPT talks to the world

At the end of November, OpenAI announced ChatGPT, a chatbot based on a language model in the GPT-3.5 series. OpenAI made it freely available to the public on its website so it could collect data and feedback on how to fine-tune the model to produce more accurate and potentially less harmful results. Five days after launch, ChatGPT already had one million users.
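ChatGPT itself launched without a public API, but the GPT-3 family it descends from could already be queried programmatically. A minimal sketch, again with the pre-1.0 openai Python library and an API key:

```python
# Query a GPT-3.5-series completion model through OpenAI's API.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-series completion model
    prompt="Explain what a Linux console session is, in one sentence.",
    max_tokens=80,
    temperature=0.7,  # moderate randomness in the sampled continuation
)
print(response.choices[0].text.strip())
```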

People have used it to help with programming tasks, simulate a Linux console session, generate recipes and write poetry, among other uses. It impresses with its apparent ability to understand complex questions, but the reliability of its answers still needs to improve. OpenAI's CEO has admitted as much, noting that ChatGPT is a work in progress. Even so, it has already given us a first glimpse of what a future with Artificial Intelligence involved could look like.
