Last week, I shared our progress in developing a business-specific Generative AI Chatbot, designed to accelerate productivity by streamlining common tasks.

The Generative AI Chatbot includes business-specific features and controls to address security, privacy, legal, and compliance requirements. Specifically, we have enabled a feature that allows the user to declaratively customise the initial prompt, through a technique known as prompt engineering, resulting in a custom URL that pre-loads the desired prompt.

Prompt engineering is a technique that involves designing and optimising prompts to improve the performance of a large language model. Prompts are short pieces of text used to guide the model’s output, specifying the task to perform, the desired output format, and any other relevant information or constraints.
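For illustration, a minimal prompt covering those elements might look like the sketch below. The wording is my own example, not a prompt taken from our chatbot.

```python
# Illustrative only: a minimal prompt that states the task,
# the desired output format, and the constraints.
prompt = (
    "Summarise the attached meeting notes "             # task
    "as five bullet points, each under 20 words. "      # output format
    "Only use information contained in the notes; "     # constraints
    "if something is unclear, say so rather than guessing."
)
```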

Our default prompt stipulates that the Generative AI Chatbot is a business assistant and should attempt to respond professionally, using all available data sources. The prompt engineering feature allows for a new custom prompt to be defined (replacing the default), resulting in a new URL that can be used for more specific, tightly defined tasks.
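To make the URL mechanics concrete, here is a rough sketch of how a custom prompt can be encoded into a pre-loading link. The endpoint (chatbot.example.com) and the query parameter name ("prompt") are assumptions for illustration only; the real chatbot generates its own custom URL.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter name, for illustration only;
# the real chatbot generates its own custom URL.
BASE_URL = "https://chatbot.example.com/chat"

custom_prompt = (
    "You are an assistant for the travel and expenses policy. "
    "Answer questions using the policy documentation and cite the relevant section."
)

# URL-encode the prompt so it is pre-loaded when the link is opened.
custom_url = f"{BASE_URL}?{urlencode({'prompt': custom_prompt})}"
print(custom_url)
```

Opening such a link would start the session with the encoded prompt in place of the default.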

The custom prompt can be used to target a specific persona, for example Service Desk (Help Desk), Corporate Communications, Investor Relations, Marketing, Paralegal, or Software/Data Engineering. This approach helps improve the accuracy of the responses, whilst ensuring they are consistent with the expectations of the specific persona.
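As a sketch (the wording below is illustrative, not our production prompts), a persona-targeted custom prompt might replace the default along these lines.

```python
# Illustrative only: the default prompt (paraphrased) versus a
# persona-specific custom prompt for the Service Desk.
default_prompt = (
    "You are a business assistant. Respond professionally, "
    "using all available data sources."
)

service_desk_prompt = (
    "You are a Service Desk analyst. Help employees resolve IT incidents "
    "and requests. Respond with numbered troubleshooting steps, reference "
    "the relevant knowledge base article where one exists, and escalate "
    "anything you cannot resolve."
)
```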

As Generative AI gains popularity, prompt engineering has become a more common topic of discussion. Some believe the skills associated with prompt engineering will warrant a dedicated role. As a general rule, I disagree; instead, I believe the techniques associated with prompt engineering will need to become common across all roles, helping each individual maximise the value of Generative AI capabilities.

Arguably, this is similar to the competency of browsing the web. Individuals who are very good at searching and extracting information from the web are often considered smart when in reality they have simply honed a skill that unlocks efficiency and effectiveness.

Therefore, I thought I would share a few good reference materials that I use to support my prompt engineering requirements.

In addition to these reference materials, outlined below are some high-level principles to consider when prompt engineering, followed by a short sketch showing how several of them can be combined.

  • Clear Instructions: Provide relevant details, context and constraints, limiting the possible outcomes.

  • Reference Text: Provide reference text as a basis for the response.

  • Personas: Ask the model to adopt a specific persona/role, which will shape the tone and style of the responses.

  • Break Down Tasks: Keep things simple by stepping through requirements instead of providing them all at once.

  • Iterate: The quality of the prompt will directly impact the quality of the response. If the first response is not as desired, try again with a different structure or context.
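Putting several of these principles together, namely a persona, reference text, and clear instructions, a prompt might be structured as in the sketch below. The policy extract is invented sample data, purely for illustration.

```python
# Illustrative only: combining a persona, reference text, and
# clear instructions in a single prompt. The policy extract is invented.
reference_text = """\
Travel policy extract:
- Economy class for flights under six hours.
- Hotel cap of 150 GBP per night in the UK.
- All bookings via the approved travel portal.
"""

prompt = (
    "You are a Corporate Communications assistant. "              # persona
    "Using only the policy extract below, "                       # reference text
    "draft a three-sentence reminder about travel bookings, "     # clear instructions
    "in a friendly but professional tone.\n\n"
    + reference_text
)
```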

It is also possible to ask the model to provide its chain of reasoning, which is useful when troubleshooting why a response may not be meeting your expectations.
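One simple way to do this is to request the reasoning explicitly in the prompt, for example along these lines (illustrative wording).

```python
# Illustrative only: asking the model to show its chain of reasoning,
# which helps when troubleshooting an unexpected response.
question = "Which of the three policy options has the lowest total cost?"

prompt = (
    f"{question}\n\n"
    "Explain your reasoning step by step, then give your final answer "
    "on a separate line starting with 'Answer:'."
)
```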

These principles, alongside the best practices outlined in the reference materials, can be used in isolation or combined. In theory, they should help reduce hallucination (also known as confabulation or delusion), the term used to describe a confident response that is not justified by the model’s training data.