Prompt Engineer: what is it and what skills do you need to become one?

The immense popularity achieved by chatbots and generative AI systems such as ChatGPT has meant that many companies and organizations need personnel with the training and skills to get the most out of the answers these systems give to the questions they are asked. So much so that a new professional category has emerged: the prompt engineer. But what exactly is a prompt engineer, what is their mission, and what skills are needed to become one?

What is a prompt engineer?

A prompt engineer is a professional responsible for creating and managing interactions with different generative AI tools. These interactions can be conversational, through text, as is the case with ChatGPT. But they can also be programmatic, which is what happens when prompts (or requests) are integrated into code.

The latter works in much the same way as API calls: instead of calling a routine from a library to run some computation, the prompt engineer uses a library routine to communicate with a large language model.
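In practice, the programmatic side usually amounts to assembling a request payload and handing it to an SDK. A minimal sketch of that idea, assuming a chat-style API; the model name and the `llm_client` object mentioned in the final comment are illustrative stand-ins, not any specific vendor's API:

```python
def build_request(system_instructions: str, user_prompt: str) -> dict:
    """Assemble a chat-style request payload, the shape most LLM APIs expect."""
    return {
        "model": "example-model",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,  # low temperature for more predictable answers
    }

request = build_request(
    "You are a concise assistant that answers in one sentence.",
    "Summarize what a prompt engineer does.",
)
# In real code this would become something like:
# response = llm_client.complete(**request)
```

The system message carries standing instructions, while the user message carries the actual request; keeping the two separate is what makes the interaction programmable rather than a one-off conversation.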

It is a profession that, although recent, is already commanding quite significant salaries. In the United States, a prompt engineer earns between 175,000 and 300,000 dollars a year. And their responsibilities go far beyond knowing how to ask a bot questions: they also need knowledge of several subjects, including AI, programming, problem solving, and languages. Those who are going to create images even need some knowledge of art, as we will see below.

Skills needed to be a good prompt engineer

Basic skills needed to become a prompt engineer include knowledge of artificial intelligence, machine learning, and natural language processing. In addition, you need to know the basics of large language models: the types of models that exist, what they do well, and the areas where they are weak and fail, so you can compensate for those weaknesses with your own knowledge.

That does not mean becoming a model expert capable of developing one from scratch, but you do need a good understanding of how a model works and how it is built. To get there, it is worth taking both general AI courses and more specific ones, like this free one from DeepLearning.AI, as well as reading articles and academic publications on the subject. That way you will stay up to date with what is happening in generative AI.

In addition to technical knowledge, other types of skills are also necessary. For example, communicating clearly and being able to ask specific, detailed questions about what you want to obtain from a generative AI system in general, and from each particular interaction with it.

It is also necessary to learn to explain to the system the context of the request, what you want to achieve, and the limits within which it must answer, when it is necessary to set them. You also need to know the limits of the language models well and, where possible, how to work within or around them.
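One way to make those three elements explicit is to structure every prompt around them. A sketch of that habit as a small helper; the function and parameter names are illustrative, not part of any framework:

```python
def build_prompt(context: str, goal: str, limits: list[str]) -> str:
    """Join context, goal, and explicit limits into one structured prompt."""
    constraints = "\n".join(f"- {limit}" for limit in limits)
    return (
        f"Context: {context}\n"
        f"Task: {goal}\n"
        f"Constraints:\n{constraints}"
    )

prompt = build_prompt(
    context="You are helping the marketing team of a car manufacturer.",
    goal="Draft three slogan ideas for a new electric SUV.",
    limits=["Each slogan under eight words", "No technical jargon"],
)
```

Writing the constraints as an explicit list, rather than burying them mid-sentence, makes it easier for both the model and the engineer to check whether an answer respected them.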

For example, if you want to generate a detailed report, it may be better to create its index first and then work through each of its points separately. It is also worth remembering that a good prompt does not have to be brief, far from it: generally, the longer and more detailed a prompt is, the more accurate and relevant the responses tend to be.
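The outline-first approach described above can be sketched as two rounds of prompting. Here `ask` is a hypothetical callable standing in for a real model call, and `fake_ask` is a deterministic stand-in so the flow can be run offline:

```python
def draft_report(topic: str, ask) -> dict:
    """First request an outline, then expand each heading in its own prompt."""
    outline = ask(f"List the section headings for a report on {topic}, one per line.")
    headings = [line.strip() for line in outline.splitlines() if line.strip()]
    # One focused request per section tends to yield more depth than one big ask.
    return {h: ask(f"Write the '{h}' section of a report on {topic}.") for h in headings}

# Offline demo with a stand-in for the model:
def fake_ask(prompt: str) -> str:
    if prompt.startswith("List"):
        return "Introduction\nMarket overview\nConclusion"
    return f"(model text for: {prompt})"

report = draft_report("electric vehicle batteries", fake_ask)
```

Splitting the work this way also makes each section easy to regenerate on its own if one answer comes back weak, without redoing the whole report.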

Because working in prompt engineering closely resembles holding collaborative, highly detailed conversations, a good professional in this category needs remarkable communication skills. You must be able to describe to the LLM very precisely what is needed, what form it should take, and what it will be used for. Language models do not feel, but they can communicate much like a co-worker does, or like someone working under the direction of the person making the requests.

The requests will be very varied, and they will not always concern topics in which a development or hardware expert has experience. That is why it is more than advisable to have some notion of how other departments of the company work, and knowledge of sales, negotiation, or marketing.

As we mentioned, dealing with generative AI capable of creating images, like Midjourney, requires some knowledge of art. That knowledge makes it possible to obtain images in different artistic styles, or with a certain retro air from the early decades of the last century. It is also useful to know something about literature and its different styles, which comes in handy when generating different types of text with document-producing AIs.

Of course, if the professional is going to use these LLMs for tasks related to a specific area of knowledge or sector, such as the automotive industry, they must know that field inside out. Only by understanding what is done and used in the sector can the appropriate requests be made to the models.

Know your model well, learn to program, and be patient

The skills above relate to what any generic prompt engineer needs. But each engineer has to work with a specific model, so it is essential to know the LLM you work with well. That way you can fine-tune your requests as much as possible. And not only its operation: knowing its available extensions and plugins is crucial to getting the most out of it.

Many of the tasks a prompt engineer performs with a chatbot go no further than conversation. But in other cases it will be necessary to integrate AI prompts into applications and software. These are the tasks that earn prompt engineering specialists the highest salaries, which is why programming skills are needed.

This does not mean that these professionals have to develop complete programs. It is best to develop only the part of the code that handles the interaction with the model, integrate it into the rest of the codebase to test how the prompts behave in the context of the application or system, debug the code, and take part in the development process. For a development team, this way of working is much simpler and faster than developing this part independently and integrating it afterwards.
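A common way to keep that model-facing code small and testable inside a larger application is to hide the call behind a thin interface, so a fake client can stand in while the rest of the code is developed and debugged. A sketch with illustrative names (no real SDK is assumed):

```python
class SummaryService:
    """Wraps the model interaction so the rest of the application depends
    only on this small interface, not on any particular SDK."""

    def __init__(self, client):
        self.client = client  # anything with a .complete(prompt) -> str method

    def summarize(self, text: str) -> str:
        prompt = f"Summarize the following in one sentence:\n{text}"
        return self.client.complete(prompt)


class FakeClient:
    """Deterministic stand-in used while developing the surrounding code."""

    def complete(self, prompt: str) -> str:
        return "stub summary"


service = SummaryService(FakeClient())
summary = service.summarize("A long product description...")
```

Swapping `FakeClient` for a real client later requires no changes to the rest of the application, which is what makes this division of labor between prompt engineer and development team workable.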

Lastly, you need to be very patient. The results of a request will rarely be useful on the first try, and it will usually take several attempts to get an adequate response. That is why you have to think carefully about what to ask the LLM, prepare several variants of a question, and not despair when faced with wrong, sparse, or imprecise answers. Patience helps you repeat the task over and over until you get the desired results.
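When an answer can be checked programmatically, that trial-and-error loop can even be automated. A minimal sketch, with `ask` again a hypothetical stand-in for a real model call; the demo `flaky_ask` simulates a model that only produces a usable answer on the third attempt:

```python
def ask_until_valid(ask, prompt: str, is_valid, max_attempts: int = 3):
    """Re-issue a prompt until the answer passes a check, up to a limit."""
    for _ in range(max_attempts):
        answer = ask(prompt)
        if is_valid(answer):
            return answer
    return None  # the caller decides how to handle repeated failures

# Offline demo: a stand-in model that fails twice before succeeding.
attempts = {"n": 0}

def flaky_ask(prompt: str) -> str:
    attempts["n"] += 1
    return "good answer" if attempts["n"] >= 3 else "bad"

result = ask_until_valid(
    flaky_ask,
    "Explain prompt engineering in one sentence.",
    is_valid=lambda answer: "good" in answer,
)
```

In real use, `is_valid` might check the answer's length, format, or presence of required fields; when no attempt passes, the `None` return signals that the prompt itself probably needs rewording.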
