
Generative Artificial Intelligence: the evolution of AI to become a creator

In recent days we have heard the expression Generative Artificial Intelligence, or generative AI, everywhere. The reason lies, above all, in OpenAI's latest advances in its GPT model, Microsoft's improvements to its products, and systems like Claude. But few have stopped to explain in detail what generative Artificial Intelligence involves, or what it is based on. Even so, it powers more and more systems, from ChatGPT to DALL-E.

What is Generative Artificial Intelligence?

This type of Artificial Intelligence is known as generative because it is able to create something that did not exist before. That is its main difference from discriminative Artificial Intelligence, which is dedicated to distinguishing between different types of inputs and tries to answer questions that involve identifying something or choosing between options.

For example, a discriminative AI will be able to answer a question about an image, such as whether it shows one thing or another. But it won't be able to create an image from simple instructions. That is something generative AI can do.

Despite having made a lot of noise lately, generative AI has been around for quite some time. It goes back, in fact, to the appearance of Eliza, a chatbot that was quite popular years ago and pretended to be a therapist you could talk to. It was created at MIT and launched in 1966. It was a revolution at the time, despite being quite rudimentary. And years of work and research have advanced generative AI so much that Eliza now looks like the work of beginners.

The arrival of DALL-E, Stable Diffusion and, above all, ChatGPT has turned Artificial Intelligence upside down, as well as the general public's perception of it. The first two allow the generation of realistic images from simple instructions.

The third is even capable of holding a text conversation with humans and providing certain types of information. It is also likely to become multimodal soon, thanks to the evolution of its underlying model, GPT, to version 4. For now it only responds with text, but in the future it may also be able to work with multimedia elements.

We usually refer to these systems, and others like them, as models. The name is no accident: all three are attempts to simulate, or model, some aspect of the real world based on a set of information about it, sometimes a very large one.

How does Generative Artificial Intelligence work?

This type of Artificial Intelligence uses machine learning to process large amounts of data, such as images or text. Most of this information is taken from the Internet. After processing it, the system is able to determine which things are most likely to appear alongside others it has already seen. That is, it generates text by predicting which word is most likely to come after the words it has already produced.
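The idea of predicting the most likely next word can be illustrated with a deliberately tiny sketch. The corpus and the bigram (word-pair) counting below are illustrative assumptions; real models learn far richer statistics from vastly more data.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text the article mentions.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" most often
```

Chaining such predictions, always appending the most likely next word, already generates text; large language models do the same thing with enormously more context and a learned, rather than counted, probability estimate.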

Most of the programming work behind generative AI is dedicated to creating algorithms that can distinguish the things that interest the AI's creators. In the case of ChatGPT, those are words and phrases. In the case of DALL-E, graphics and visual elements.

But above all, bear in mind that this type of AI generates responses and outputs based on an assessment of the huge dataset used to train it. With that data, it responds to requests and instructions with images or phrases that, based on what is in the dataset, it judges are likely to be appropriate.

The autocomplete that appears when you type on your smartphone, or in Gmail, suggesting words or parts of sentences, is a low-level generative Artificial Intelligence system. ChatGPT and DALL-E are quite a bit more advanced.

Training generative AI models

The process by which models are built to capture and process all the data they need to function is known as training. Two techniques are usually used for this, each more or less suitable depending on the model. For example, ChatGPT uses what is known as a Transformer (hence the T in its name).

A transformer derives meaning from large chunks of text. In this way, the model comes to understand what the different words and semantic components are and how they relate to each other. It can also determine how likely they are to appear next to each other. These transformers run unsupervised on a large set of natural language text, in a process called pretraining (the P in ChatGPT). Once that process is finished, the humans in charge of the model fine-tune it through interactions with it.
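The mechanism a transformer uses to relate words to each other is called attention. The sketch below is a minimal, illustrative implementation of scaled dot-product attention over made-up token vectors, not ChatGPT's actual code: each token's output becomes a weighted mix of all the tokens, with the weights measuring how strongly they match.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how strongly it matches each key."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three toy 2-dimensional token vectors; each token attends over all three.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)
```

In a real transformer the queries, keys and values are learned projections of the token embeddings, and many such attention layers are stacked, but the core "who relates to whom, and how strongly" computation is this one.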

Another technique used to train generative Artificial Intelligence models is the Generative Adversarial Network, or GAN. With it, two algorithms are set to compete with each other. One generates text or images based on probabilities derived from a large dataset. The other is a discriminative Artificial Intelligence, trained by humans to assess whether a given output is real or generated by Artificial Intelligence.

The generative AI repeatedly tries to fool the discriminative one, automatically adapting to produce outputs that succeed. Once it achieves a consistent, solid win over the discriminative model, the latter is tuned again by humans, and the process starts all over again.
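The adversarial loop can be caricatured with a deliberately toy example. Everything here is an illustrative assumption: real GANs pit two neural networks against each other and adapt via gradient descent, whereas this sketch uses a single number the generator nudges whenever the discriminator catches its output as fake.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data lives around this value

def discriminator(x, boundary):
    """Crude stand-in discriminator: anything above the boundary looks real."""
    return x > boundary

def generator_sample(g):
    """The generator produces values around its current parameter g."""
    return g + random.uniform(-0.5, 0.5)

g = 0.0  # generator starts far from the real data
for step in range(200):
    boundary = (REAL_MEAN + g) / 2  # discriminator re-tunes between the two
    fake = generator_sample(g)
    if not discriminator(fake, boundary):
        g += 0.1  # caught as fake -> adapt toward the real data

# After the loop, the generator's outputs sit near the real data,
# so the discriminator can no longer reliably tell them apart.
```

The essential point survives the caricature: the generator only ever improves by failing to fool the discriminator and adjusting, which is why so many iterations, and so much computing power, are needed in the real thing.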

In this type of training, keep in mind that although humans are involved, most of the learning and adaptation happens automatically. That means many iterations are needed to get the models to the point where they produce interesting results. Also bear in mind that the process requires a lot of computing power.

Negative points and use cases

Although the results of Generative Artificial Intelligence models are impressive, not everything about them is good or beneficial. Besides having limitations, they can have many negative impacts in different sectors. They are also still prone, especially the text models, to what have been called hallucinations, which can lead them to make false claims and even to insult humans.

As for the negative impacts, it is clear that the ability to create content easily and cheaply can affect the writing of content that is not especially creative, and in many cases such content can fool the humans who read it. Many students have already used these tools for their schoolwork, even at the university level. Email spammers are also using them to write their messages and send them to thousands of people with minimal effort.

A whole debate has also arisen around the intellectual property of images or texts generated by a generative AI. There is much discussion about who owns them, and the legal issues involved are only beginning to be debated.

Another negative point of these AIs is that, in many cases, they may be biased, since their answers are entirely conditioned by the data they were trained on. If that data is biased and the models work without rules or limits, we can end up with sexist, racist or classist AIs, for example. To avoid this, OpenAI, the creator of ChatGPT, equipped its model with safeguards against these biases before giving the public access to the chatbot.

But despite all this, generative AI has multiple use cases. ChatGPT, for example, can extract information from very large datasets to give useful answers to questions asked in natural language. Because of this, it can be very useful for search engines, which are already rushing to test its integration. Not only the general-purpose ones, which were the first to approach it: it can also be useful for more specific, sector-focused search engines.
