Prompt engineering is the practice of crafting inputs (prompts) to guide an AI model’s behaviour and maximise the likelihood of getting the output you want. Different prompt techniques can be used to achieve different results. In this article, I present the Zero-Shot, One-Shot, Few-Shot and In-Context Learning (ICL) techniques.
Prompt
A prompt is a sequence of input text, like a sentence or multiple sentences in natural language, blocks of code, etc., which you can use to interact with the AI. Avoiding ambiguity is essential in prompt engineering. When writing prompts, being clear, precise and unambiguous will help the model to understand exactly what is being asked, leading to more reliable, accurate and useful outputs. When instructions are vague or open to multiple interpretations, the model may produce inconsistent or unexpected results.
To get the most out of a Large Language Model (LLM), you can apply several structured techniques. The way you write the prompt matters: choosing the right technique for the output you expect reduces hallucinations and helps the model understand the context and constraints of the task. The following sections cover some of these techniques.
Zero-shot
Zero-shot prompting is the most straightforward way to interact with an LLM. It consists of providing a task or a question without giving any prior examples of the expected output. In this scenario, you rely entirely on the model’s pre-trained knowledge and its ability to follow instructions based on its internal logic. For example: “What is .NET?”, “Convert the following JSON to a C# class”, etc.
To optimise Zero-Shot results, you can shift from vague questions to direct, task-oriented commands; this way, you significantly reduce the chance of hallucinations. For instance, instead of asking the model:

Analyse this code

You can be more explicit and say:

Explain what this code does, and identify performance bottlenecks

You can further improve the output by specifying how you want the response to be formatted. For example:
Explain what this code does, and identify performance bottlenecks.
Answer using bullet points, one for each performance issue.

By clearly defining both what the model should do and how the result should look, you increase the accuracy and usefulness of the output.
Some benefits of this method are:
- Low preparation costs, as it requires minimal preparation for the prompt and lower token consumption.
- It can be a great option for standard tasks, and for fast experimentation and “out-of-the-box” testing.
However, this convenience comes with significant trade-offs:
- Relies entirely on the model’s internal pre-training, which may not align with the user’s specific intent.
- It often struggles with highly complex, niche, or infrequent tasks that require specific domain logic.
- The model may generate factually incorrect content while maintaining a tone of high certainty.
One-Shot/Few-Shot
Few-shot prompting is a technique where we provide a small number of examples (typically between 1 and 5) to help the model understand the input-output pattern before generating a response. The model “learns” from these examples on the fly (within the prompt’s context), without any permanent re-training. If you provide exactly one example, it is called One-Shot Prompting.
One-Shot and Few-Shot are good strategies for when the model needs a “north star” to follow. You should use these when the task has multiple valid execution paths, and you want to enforce a specific one, or when you need the output to follow a specific guideline. For example:
<Inform the command/action to be executed>
For that, follow these examples:
Example 1:
Input: <add the input example>
Output: <add an example of the expected output>
Example 2:
Input: <add another input example>
Output: <add another example of the expected output>

Few-Shot prompting excels at:
- Increased Precision: Examples help the model understand the specific nuances of a task that are difficult to explain via instructions alone.
- Style Consistency: Very effective for generating standardised code, documentation, or creative writing that must follow a specific template.
- Low Engineering Cost: Much simpler and faster than fine-tuning or training a custom model, as it only requires a few well-written examples.
However, some downsides of this method are:
- Higher Token Costs: The examples consume space within the prompt, which increases costs and reduces the available “room” for the actual task context.
- Quality Dependency: If the examples are ambiguous, poorly formatted, or factually incorrect, the model will replicate those flaws in its output.
- Order Sensitivity: Models can be biased toward the most recent example provided; changing the sequence of examples can unexpectedly alter the final performance.
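The template above can be expressed in code. Here is a hypothetical sketch (all names are illustrative, not from any specific library) that assembles a few-shot prompt from input/output example pairs, then appends the real task in the same Input/Output pattern:

```python
def build_few_shot_prompt(
    instruction: str,
    examples: list[tuple[str, str]],
    task_input: str,
) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the real input."""
    parts = [instruction, "For that, follow these examples:"]
    for i, (example_input, example_output) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nInput: {example_input}\nOutput: {example_output}")
    # The real task comes last, mirroring the Input/Output pattern of the examples
    # so the model continues the established pattern.
    parts.append(f"Input: {task_input}\nOutput:")
    return "\n\n".join(parts)


prompt = build_few_shot_prompt(
    "Classify the sentiment of the review as Positive or Negative.",
    [
        ("The build pipeline is fast and reliable.", "Positive"),
        ("The API keeps timing out under load.", "Negative"),
    ],
    "Documentation is clear and the samples compile first try.",
)
```

Passing a single example pair turns this into One-Shot prompting; the structure is otherwise identical.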
In-Context Learning (ICL)
In-Context Learning (ICL) is a method of guiding the model to perform a task by providing structured context and instructions in the prompt. The model “learns” what you expect from the instructions/the context you provide, and produces responses accordingly (it uses the prompt as a temporary guide to infer the task and generate the expected output).
To make ICL effective, you can clearly define a persona, a specific objective, and the desired output format, helping the model “infer” the task more accurately. For that, in your prompt, you can:
- Assign a Role: Tell the model who it should be (e.g., "You are a Senior .NET Architect"), defining its persona.
- Define the Goal: Clearly state the task (e.g., "Your objective is to analyse this code and find performance bottlenecks").
- Define the Output Format: Don’t leave the style to chance; instead, specify if you want the output as JSON, bullet points, a C4 diagram, etc.
Furthermore, you can also define constraints that act as guardrails, ensuring the output remains aligned with your specific requirements. For example:
- Define the Tone: set the tone of the response, ranging from professional and formal to casual or even sarcastic, for example.
- Define Restrictions: add explicit restrictions for the response; for example, you can instruct the model to avoid emojis, omit specific topics (e.g., ‘do not talk about xyz’), or adhere to strict character limits.
Here’s a well-structured in-context instruction example:
// Role:
You are a senior software engineer specialised in .NET.

// Goal:
Your task is to analyse the code snippet below and suggest performance improvements.

// Tone:
Maintain a professional tone in your response.

// Restrictions:
Do not include emojis. Avoid discussing unrelated topics or themes outside the code analysis.

// Output format:
Respond in bullet points and justify each suggestion with technical reasoning.

<add-the-code-to-be-analysed>

This approach reduces ambiguous responses and hallucinations while increasing the relevance and practical usefulness of the output.
You can also provide examples in the input, by using the Few-Shot technique, and the model will read the examples and “learn” from them to apply the pattern in the response.
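Putting the pieces together, the structured instruction above can be assembled programmatically. This is a hypothetical sketch (the function and parameter names are mine): each labelled section acts as a guardrail, and the code to be analysed is appended at the end.

```python
def build_icl_prompt(
    role: str,
    goal: str,
    tone: str,
    restrictions: str,
    output_format: str,
    code: str,
) -> str:
    """Assemble a structured in-context prompt with labelled sections."""
    sections = [
        ("Role", role),
        ("Goal", goal),
        ("Tone", tone),
        ("Restrictions", restrictions),
        ("Output format", output_format),
    ]
    # Each labelled section makes one expectation explicit for the model.
    body = "\n\n".join(f"// {label}:\n{text}" for label, text in sections)
    return f"{body}\n\n{code}"


prompt = build_icl_prompt(
    role="You are a senior software engineer specialised in .NET.",
    goal="Analyse the code snippet below and suggest performance improvements.",
    tone="Maintain a professional tone in your response.",
    restrictions="Do not include emojis. Stay within the scope of the code analysis.",
    output_format="Respond in bullet points and justify each suggestion.",
    code="<add-the-code-to-be-analysed>",
)
```

Keeping the sections as separate parameters also makes it easy to reuse the same persona and constraints across many tasks while swapping only the goal and the code.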
Conclusion
Learning about Prompt Engineering is essential for those who use AI. Writing a prompt using the right technique has a direct impact on the quality of the answer that will be generated. By choosing the correct technique, you can get better outputs and maximise their usefulness.
In the next article, I’m going to present more advanced prompt engineering techniques like Chain-of-Thought (CoT), Skeleton-of-Thought (SoT) and Tree-of-Thought (ToT).
References
Prompt engineering in .NET — Microsoft