We all know that the way we ask for something can drastically change the outcome. This has become even more evident since AI-based language models such as ChatGPT and Bard came into widespread use around the world to generate content of all kinds. Just as in real life, the accuracy of the output of these generative AI systems depends on the quality of the prompt: unclear or poorly formulated queries are highly likely to produce irrelevant or hallucinated output.
By Luigi Simeone, Chief Technology Officer Moxoff
What is Prompt Engineering?
Prompt Engineering is a technique for improving the performance of language models. It is based on the idea that the more specific a prompt is, the higher the probability of obtaining the desired output, whether in text generation, language translation, or creative writing. A prompt is a short string of text that must be clear and concise in order to give the language model the information it needs to generate accurate and relevant output.
For example, to generate creative text, such as a story on a given topic or an article, it is necessary to provide a high-quality, stimulating, and specific prompt. This does not mean overloading the language model with details: what matters is the quality of the prompt, obtained by selecting the most appropriate words, phrases, sentence structures, and punctuation. Prompt Engineering could therefore be described as the art of guiding an artificial intelligence, making it increasingly efficient and comprehensive at Natural Language Processing (NLP), the capability that allows a generative AI algorithm to decipher human language and produce useful results.
Without going too far into technicalities, it is worth noting that the same name, prompt engineering, is also used in a broader sense to refer to the branch of research aimed at creating and applying particular functions (called prompts) to the input of an LLM (Large Language Model) [2] in order to adapt it to a particular NLP (Natural Language Processing) problem. From this point of view, the topic of this article, namely the textual form of the input provided by the user, represents a particular case.
Why was Prompt Engineering born?
The need to communicate with artificial intelligence and obtain adequate answers arose when the general public gained access to generative AI language systems. ChatGPT in particular has allowed everyone to experience this innovative form of dialogue between humans and artificial intelligence. Its widespread use, however, has also highlighted the problem of how questions are asked and how responses are controlled, of the AI’s understanding of the human linguistic context, and therefore of how to use specific inputs to steer the algorithm towards the desired outputs. Since generative NLP models produce the output words considered most likely given a context, they guarantee the plausibility of the result but not its adherence to the data used in the training phase; when only the former holds and not the latter, the phenomenon of so-called hallucination occurs.
Precisely to limit hallucinations and allow the model to achieve its maximum performance, the user plays a fundamental role through the process of interaction with the machine [3], which should be guided by two cardinal principles:
- formulating unambiguous instructions [4], using delimiters where necessary to make the input clearer, specifying if needed the desired output structure or format (JSON, XML, etc.), and providing examples of the expected response directly in the prompt (few-shot);
- giving the model time and support for its “thinking”, by breaking a complex problem down into a sequence of simpler questions [6], or by defining intermediate logical sub-steps which, once solved, lead to the final result (chain of thought), possibly instructing the model to verify its own response with further checks.
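The principles above can be sketched as a small prompt-building helper. The function name, the sentiment-classification task, and the exact wording of the added instructions are illustrative assumptions, not prescriptions from the cited literature:

```python
# Minimal sketch of a prompt builder applying the two cardinal principles:
# unambiguous instructions with delimiters, a declared output format,
# few-shot examples, and a chain-of-thought style instruction.
# The task and all wording below are illustrative, not canonical.

def build_prompt(instruction, input_text, examples=None,
                 output_format=None, chain_of_thought=False):
    """Assemble a prompt string following common prompt-engineering guidelines."""
    parts = [instruction]
    if output_format:
        # Explicitly state the desired output structure (JSON, XML, etc.).
        parts.append(f"Return the answer as {output_format}.")
    if chain_of_thought:
        # Ask the model to work through intermediate reasoning steps.
        parts.append("Reason step by step before giving the final answer.")
    # Few-shot: include example input/output pairs directly in the prompt.
    for example_in, example_out in (examples or []):
        parts.append(f"Example input: ```{example_in}```\n"
                     f"Example output: {example_out}")
    # Triple backticks delimit the user input unambiguously.
    parts.append(f"Input: ```{input_text}```")
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Classify the sentiment of the text between triple backticks.",
    input_text="The product arrived late and damaged.",
    examples=[("Great service!", '{"sentiment": "positive"}')],
    output_format="JSON with a single key 'sentiment'",
    chain_of_thought=True,
)
print(prompt)
```

The resulting string would then be sent to whatever language model the company uses; the builder only makes the structure of a well-formed prompt explicit.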
The need to improve and guide AI learning therefore gave rise to prompt engineering and the profession of the prompt engineer or prompt manager, i.e. a communicator capable of formulating “prompts” or instructions that guide the AI towards the production of appropriate responses in every area, from healthcare to retail, from education to security.
Why is Prompt Engineering important for companies that want to use AI?
Prompt Engineering aims to verify the output of the language model and to build a context of additional information, such as keywords, instructions, and rules, designed both to obtain specific results from the artificial intelligence and to adapt language models, with targeted inputs, to different business, corporate, and professional domains. Each field uses its own specific, technical language and terminology to create content, from communication to social media to marketing, which demands continuous creativity but also respect for the company’s tone of voice. Generative AI systems, properly guided with prompt engineering techniques, can generate an enormous amount of content, in many variations, at the click of a button.
However, a stable prompt must be designed so that the resulting text, video, or image content follows and replicates the linguistic style and tone of voice set out in the guidelines approved by each company’s management.
Examples of applications
Following the basic criteria of unambiguous interpretation of the request and subdivision of a topic into smaller, more manageable problems, the aim of prompt engineering is to outline the fundamental characteristics that an input text (the prompt) must have in order to improve the machine’s performance on some particularly important capabilities. These include, for example: adopting a particular point of view in analyzing a topic; generating questions back to the user to better frame a problem; identifying multiple alternative approaches to a result; exposing the elements that led to a certain conclusion; and providing answers in a particular format.
Since the subject is of great interest and topicality, research in this area is flourishing and some prompt schemes to be used for the indicated purposes have already been outlined in the literature.
For example, with regard to the perspective the language model should adopt in evaluating an issue, the suggestion is to explicitly tell the machine which human figure, professional category, or even inanimate entity to identify with, and to ask it to produce its response in the same way that figure would in reality. Prompts of this type include the following [1]:
“From now on, act as a security reviewer. Pay close attention to the security details of any code that we look at. Provide outputs that a security reviewer would regarding the code” and “You are going to pretend to be a Linux terminal for a computer that has been compromised by an attacker. When I type in a command, you are going to output the corresponding text that the Linux terminal would produce”.
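Prompts of this kind follow a recognizable pattern, so a company could keep them as a reusable template. The helper below is a hypothetical sketch that instantiates the persona pattern quoted above; the template wording follows the first quoted prompt, while the function and parameter names are illustrative:

```python
# Hypothetical template for the persona pattern: tell the model explicitly
# which figure to identify with and how to shape its outputs.
PERSONA_TEMPLATE = (
    "From now on, act as {persona}. "
    "{focus} "
    "Provide outputs that {persona_short} would regarding the input."
)

def persona_prompt(persona, focus, persona_short=None):
    """Fill the persona pattern with a role and a focus instruction."""
    return PERSONA_TEMPLATE.format(
        persona=persona,
        focus=focus,
        persona_short=persona_short or persona,
    )

print(persona_prompt(
    persona="a security reviewer",
    focus="Pay close attention to the security details of any code that we look at.",
))
```

The same template can be reused for any role, from a Linux terminal to a domain expert, simply by changing the arguments.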
For the automatic generation of follow-up questions, on the other hand, the user instructs the LLM to propose alternative questions considered better than the original at setting the stage for solving a problem, as in the prompt [1] “From now on, whenever I ask a question about a software artifact’s security, suggest a better version of the question to use that incorporates information specific to security risks in the language or framework that I am using and ask me if I would like to use your question instead”.
Applying this kind of guidance to the language models adopted in companies, with their domain-specific requests, calls for experts trained in prompt engineering: a profession that, today and tomorrow, will be increasingly in demand by companies of all kinds.
Sources:
[1] J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, D.C. Schmidt, A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv:2302.11382, 2023.
[2] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, G. Neubig, Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 1-35, 2023.
[3] L. Henrickson, A. Meroño-Peñuela. Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT. AI & Soc. 2023.
[4] A. Bozkurt, R.C. Sharma. Generative AI and Prompt Engineering: The Art of Whispering to Let the Genie Out of the Algorithmic World. Asian Journal of Distance Education, 18(2), 2023.
[5] J. Kocoń, I. Cichecki, O. Kaszyca, M. Kochanek, D. Szydło, J. Baran, J. Bielaniewicz, M. Gruza, A. Janz et al. ChatGPT: Jack of all trades, master of none. Information Fusion, 99, 2023.
[6] T.F. Heston, C. Khun. Prompt Engineering in Medical Education. Int. Med. Educ., 2, 198–205, 2023.
[7] J. Hutson, B. Robertson. Exploring the Educational Potential of AI Generative Art in 3D Design Fundamentals: A Case Study on Prompt Engineering and Creative Workflows. Global Journal of Human-Social Science: A Arts & Humanities – Psychology, 23(2), 2023.
[8] R. Peres, M. Schreier, D. Schweidel, A. Sorescu. On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40, 269–275, 2023.
[9] M. Wong, Y.-S. Ong, A. Gupta, K.K. Bali, C. Chen. Prompt Evolution for Generative AI: A Classifier-Guided Approach. In 2023 IEEE Conference on Artificial Intelligence (CAI), 226-229, 2023.
[10] S. Arvidsson, J. Axell. Prompt engineering guidelines for LLMs in Requirements Engineering. Thesis, University of Gothenburg / Chalmers University of Technology, June 2023.
[11] J. Qadir. Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. In 2023 IEEE Global Engineering Education Conference (EDUCON), 1-9, 2023.
[12] F. Fui-Hoon Nah, R. Zheng, J. Cai, K. Siau, L. Chen. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277-304, 2023.