{"id":142961,"date":"2026-04-13T15:46:14","date_gmt":"2026-04-13T15:46:14","guid":{"rendered":"\/au\/tutorials\/prompt-engineering-for-developers"},"modified":"2026-04-13T15:46:14","modified_gmt":"2026-04-13T15:46:14","slug":"prompt-engineering-for-developers","status":"publish","type":"post","link":"\/au\/tutorials\/prompt-engineering-for-developers","title":{"rendered":"Prompt engineering for developers: Techniques, examples, and best practices"},"content":{"rendered":"<p>Prompt engineering for developers is the practice of writing structured instructions that tell AI models exactly what you need and getting usable, reliable results back. You move beyond vague questions and, instead, design inputs that produce outputs your code can actually work with.<\/p><p>How you phrase a prompt directly affects the quality of the model&rsquo;s output. <strong>A specific, well-structured prompt can generate working code, debug a function, or process data in seconds.<\/strong> A vague one gives you filler you&rsquo;ll end up throwing away.<\/p><p>To get reliable results from every prompt, you&rsquo;ll want to:<\/p><ul class=\"wp-block-list\">\n<li>Use prompting techniques like zero-shot, few-shot, and chain-of-thought to get consistent output<\/li>\n\n\n\n<li>Adapt real Python code examples for your own OpenAI API calls<\/li>\n\n\n\n<li>Follow a clear process for writing prompts that return structured, accurate results<\/li>\n\n\n\n<li>Avoid the common mistakes that waste tokens and time<\/li>\n\n\n\n<li>Wire prompts into your applications with chaining, tool calling, and templates<\/li>\n<\/ul><h2 class=\"wp-block-heading\" id=\"h-what-is-prompt-engineering-for-developers\">What is prompt engineering for developers?<\/h2><p>As a developer, you use <a href=\"\/au\/tutorials\/prompt-engineering\" data-wpel-link=\"internal\" rel=\"follow\">prompt engineering<\/a> to control what AI 
models produce by shaping the input they receive. That input is called a prompt: a question, a set of instructions, or a block of context you send to a large language model (LLM).<\/p><p>The model reads your prompt and generates a response based on patterns from its training data. For everyday users, this might mean asking ChatGPT a better question. For you, it means designing prompts that run inside applications, handle edge cases, and produce structured output your code can parse.<\/p><p>Think of it like writing a function. You define clear inputs and expected outputs. Prompt engineering follows the same logic, except your &ldquo;function&rdquo; is an LLM like GPT-4o, Claude, or an open-source model like LLaMA.<\/p><p>You won&rsquo;t write one perfect prompt and walk away. You test, adjust, and refine until the model consistently delivers what your application needs.<\/p><h2 class=\"wp-block-heading\" id=\"h-what-are-the-benefits-of-prompt-engineering-for-developers\">What are the benefits of prompt engineering for developers?<\/h2><p>Well-crafted prompts give you more control over AI output without touching the model itself. In many cases, you don&rsquo;t need to fine-tune or retrain anything. You just write better instructions.<\/p><p>That one change gives you:<\/p><ul class=\"wp-block-list\">\n<li><strong>More accurate output.<\/strong> Specific prompts reduce vague or off-topic responses. When you tell the model exactly what format, types, and structure you need, you get code that compiles, JSON that validates, and data that matches your schema.<\/li>\n\n\n\n<li><strong>Fewer hallucinations.<\/strong> LLMs sometimes generate information that sounds right but isn&rsquo;t. Clear constraints and context in your prompt lower the chance of made-up facts or functions that don&rsquo;t exist. 
Hallucinations are when the model confidently presents false information as if it were true.<\/li>\n\n\n\n<li><strong>Faster development cycles.<\/strong> A good prompt can replace hours of manual coding for tasks like generating boilerplate, writing documentation, or transforming data formats. You can prototype features in minutes that would normally take days to build from scratch.<\/li>\n\n\n\n<li><strong>Built-in automation.<\/strong> You can use prompts to power workflows that run without human input, like generating changelog entries from git commits, auto-documenting new API endpoints, or converting error responses into user-friendly messages.<\/li>\n\n\n\n<li><strong>Lower API costs.<\/strong> Shorter, focused prompts use fewer tokens, which are the chunks of text an API charges you for. Fewer tokens mean a smaller bill.<\/li>\n<\/ul><p>But to write prompts that consistently deliver these results, you need to understand how the model actually processes your input.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-prompt-engineering-works-in-large-language-models\">How prompt engineering works in large language models<\/h2><p>LLMs generate text by predicting the next token based on all the tokens that came before it. Your prompt is the starting point for that prediction, so the way you write it directly affects the response.<\/p><p>Three core settings control how the model handles your prompt:<\/p><p><strong>Tokens<\/strong> are the basic units that the model reads and writes. A token is roughly 4 characters or about three-quarters of a word in English. The sentence &ldquo;How do I fix this bug?&rdquo; is about seven tokens. You pay per token on most APIs, so shorter prompts and responses cost less.<\/p><p><strong>Context window<\/strong> is the total number of tokens the model can handle in a single request, your prompt plus the response combined. 
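The four-characters-per-token rule of thumb above makes quick budget checks easy before you ever call an API. A minimal sketch, assuming that heuristic ratio and an illustrative (not official) per-1,000-token price:

```python
# Rough token and cost estimator. The 4-characters-per-token ratio is a
# heuristic for English text, not a real tokenizer, and the default price
# below is a made-up placeholder, not any provider's actual rate.
def estimate_tokens(text: str) -> int:
    """Approximate token count: roughly 4 characters per token."""
    return max(1, round(len(text) / 4))

def estimate_cost(prompt: str, expected_response_chars: int,
                  usd_per_1k_tokens: float = 0.005) -> float:
    """Approximate cost of one request (prompt plus response), in USD."""
    total = estimate_tokens(prompt) + max(1, round(expected_response_chars / 4))
    return total / 1000 * usd_per_1k_tokens

print(estimate_tokens("How do I fix this bug?"))  # → 6 by this heuristic
```

Real tokenizers split text differently for each model, so treat these numbers as ballpark figures for sizing prompts, not billing-grade counts.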
GPT-4o supports a 128,000-token context window, but the maximum number of tokens the model can generate in its response is a separate, smaller cap.<\/p><p>If your prompt is too long, the model either cuts it short or loses track of details at the beginning. For complex tasks, stay well under the limit so the model has enough room for a full response.<\/p><p><strong>Temperature<\/strong> controls how creative or predictable the output is. It&rsquo;s typically a value between 0 and 1. Setting it to 0 gives you the most consistent answers, which is good for code generation. Values around 0.7-1.0 work better for creative tasks like brainstorming. Going above 1 increases randomness further but is rarely useful in production.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\"><figure data-wp-context='{\"imageId\":\"69dd47586d3a3\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/04\/1776091043067-0.jpeg\" alt=\"Token prediction system diagram\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 
2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>While these three settings give you direct control over how the model behaves, the right prompting techniques put that control to use.<\/p><h2 class=\"wp-block-heading\" id=\"h-core-prompt-engineering-techniques-developers-use\">Core prompt engineering techniques developers use<\/h2><p>Five techniques cover the majority of what you&rsquo;ll need as a developer working with LLMs.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\"><figure data-wp-context='{\"imageId\":\"69dd47586d799\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/04\/1776091043083-1.jpeg\" alt=\"Core prompt engineering techniques that developers use\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><h3 class=\"wp-block-heading\">Zero-shot prompting<\/h3><p>Zero-shot prompting gives the model a 
task with no examples, just a clear instruction. The model relies entirely on its training to understand what you need.<\/p><p>Use this for straightforward tasks where the expected output is obvious.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Classify the following error log entry as \"database\", \"authentication\", or \"network\".\n\nEntry: \"Connection refused: failed to reach replica at 10.0.3.12:5432 after 3 retries.\"<\/pre><div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<p><strong>Output:<\/strong><\/p>\n<\/div><\/div><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">database<\/pre><p>Once the task gets more nuanced, you&rsquo;ll want to add examples, which is where few-shot prompting comes in.<\/p><h3 class=\"wp-block-heading\">Few-shot prompting<\/h3><p>Few-shot prompting includes a handful of examples in your prompt so the model can follow a pattern. 
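In application code you would rarely hard-code a few-shot prompt as one big string; you would assemble it from stored example pairs. A minimal sketch, where build_few_shot_prompt is a hypothetical helper and the product data is made up:

```python
# Assemble a few-shot prompt from (input, output) example pairs.
# The helper and the example data are illustrative, not from any library.
def build_few_shot_prompt(instruction, examples, new_input):
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Description: {example_input}")
        parts.append(f"JSON: {example_output}")
        parts.append("")  # blank line between examples
    parts.append(f"Description: {new_input}")
    parts.append("JSON:")  # leave the answer slot open for the model
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Convert these product descriptions to JSON format.",
    [('"Blue running shoes, size 10, $89.99"',
      '{"product": "running shoes", "color": "blue", "size": "10", "price": 89.99}')],
    '"Black wireless headphones, $129.50"',
)
```

Keeping the examples in a list makes it trivial to add, remove, or swap them without touching the prompt logic.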
You show the model what &ldquo;good output&rdquo; looks like before asking it to produce something new.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Convert these product descriptions to JSON format.\n\nDescription: \"Blue running shoes, size 10, $89.99\"\nJSON: {\"product\": \"running shoes\", \"color\": \"blue\", \"size\": \"10\", \"price\": 89.99}\n\nDescription: \"Red leather wallet, $45.00\"\nJSON: {\"product\": \"leather wallet\", \"color\": \"red\", \"size\": null, \"price\": 45.00}\n\nDescription: \"Black wireless headphones, $129.50\"\nJSON:<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{\"product\": \"wireless headphones\", \"color\": \"black\", \"size\": null, \"price\": 129.50}<\/pre><p>The model learns the pattern from your examples and applies it to new inputs. This works especially well for formatting tasks, data extraction, and classification, anywhere you need consistent structure across many inputs.<\/p><p>Two to three examples usually give you a solid improvement over zero-shot, but more than five rarely add much.<\/p><p>Zero-shot and few-shot both control <em>what<\/em> you tell the model. But some tasks need more than examples. They need the model to reason through a problem, not just match a pattern.<\/p><h3 class=\"wp-block-heading\">Chain-of-thought prompting<\/h3><p>Chain-of-thought prompting asks the model to reason through a problem step by step before giving a final answer. 
You&rsquo;ll want this for tasks that involve logic, math, or multi-step decisions.<\/p><p>Without step-by-step instructions, the model often jumps straight to an answer and gets it wrong. With chain-of-thought, it walks through the reasoning and catches mistakes along the way.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">A developer has 3 microservices. Service A takes 200ms, Service B takes 350ms, and Service C takes 150ms. Services A and B run in parallel, then Service C runs after both finish.\n\nThink through this step by step. What is the total response time?<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Step 1: Services A and B run in parallel. The slower one (Service B at 350ms) determines the wait time. Parallel time = 350ms.\n\nStep 2: Service C runs after both A and B finish. It takes 150ms.\n\nStep 3: Total response time = 350ms + 150ms = 500ms.<\/pre><p>Adding &ldquo;think through this step by step&rdquo; to a prompt is one of the simplest upgrades you can make. It works especially well for debugging logic, calculating costs, and evaluating trade-offs in code architecture.<\/p><p>Chain-of-thought controls how the model <em>thinks<\/em>. But you can also control <em>who<\/em> it speaks as.<\/p><h3 class=\"wp-block-heading\">Role-based prompting<\/h3><p>Role-based prompting assigns the model a specific identity or expertise level before you give it a task. 
A &ldquo;senior backend developer&rdquo; prompt produces a very different response than a plain, unframed one.<\/p><p><strong>Without a role:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Explain how to handle API rate limiting.<\/pre><p><strong>Output:<\/strong> A generic, surface-level overview that reads like a textbook entry.<\/p><p><strong>With a role:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">You are a senior backend developer mentoring a junior teammate.\n\nExplain how to handle API rate limiting in a Node.js application.\n\nInclude practical code patterns they can use right away.<\/pre><p><strong>Output:<\/strong> A specific, actionable explanation with retry logic, exponential backoff patterns, and code snippets, written at a level the junior developer can actually follow.<\/p><p>A &ldquo;senior developer&rdquo; prompt produces different output than a &ldquo;technical writer&rdquo; prompt, even for the same topic. In the OpenAI API, this role usually lives in the <code>system<\/code> message you send with each request. 
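As a sketch, the request body for a role-framed prompt is just a two-message list; the helper name here is hypothetical, but the system/user pair is the standard Chat Completions message structure:

```python
# Build the message list for a role-based request. Only the list construction
# is shown; sending it requires an OpenAI client and an API key.
def role_messages(role_description, task):
    return [
        {"role": "system", "content": role_description},  # persona and constraints
        {"role": "user", "content": task},                # the actual request
    ]

messages = role_messages(
    "You are a senior backend developer mentoring a junior teammate.",
    "Explain how to handle API rate limiting in a Node.js application.",
)
```

Keeping the role in the system message lets you swap personas per feature without rewriting the user-facing prompts.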
Use this technique whenever the audience or expertise level matters.<\/p><p>A role shapes the persona, but you often need to go further and control the output itself: its format, its length, and exactly what it should or shouldn&rsquo;t include.<\/p><h3 class=\"wp-block-heading\">Instruction-based prompting<\/h3><p>Instruction-based prompting provides the model with explicit rules that serve as guardrails for the response.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Analyze the following Python function for potential issues.\n\nRules:\n\n- List each issue as a numbered item\n- For each issue, explain the risk in one sentence\n- Suggest a fix in one sentence\n- Do not rewrite the entire function\n- Keep your response under 200 words\n\ndef get_user(id):\n\n    user = db.query(f\"SELECT * FROM users WHERE id = {id}\")\n\n    return user[0]<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">1. SQL injection vulnerability. User input is inserted directly into the query string, which lets attackers execute arbitrary SQL.\n\n   Fix: Use parameterized queries &mdash; db.query(\"SELECT * FROM users WHERE id = %s\", (id,)).\n\n2. No error handling for empty results. If the query returns no rows, user[0] raises an IndexError.\n\n   Fix: Check if user is empty before accessing the first element.\n\n3. Using SELECT * instead of specific columns. 
This returns unnecessary data and exposes sensitive fields.\n\n   Fix: Select only the columns your application needs.<\/pre><p>Use this technique when the model&rsquo;s response feeds directly into your app or another process.<\/p><h2 class=\"wp-block-heading\" id=\"h-prompt-engineering-examples-for-developers\">Prompt engineering examples for developers<\/h2><p>These examples show how the techniques above work inside real code, from API calls to debugging to automation. Understanding <a href=\"\/au\/tutorials\/how-to-write-ai-prompt\" data-wpel-link=\"internal\" rel=\"follow\">how to write effective AI prompts<\/a> makes these patterns easier to apply across different projects.<\/p><h3 class=\"wp-block-heading\">Using prompt engineering with the OpenAI API<\/h3><p>You can send a structured prompt to the OpenAI API with just a few lines of Python. You define the model&rsquo;s role in the <code>system<\/code> message and your request in the <code>user<\/code> message.<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from openai import OpenAI\n\nclient = OpenAI()\n\nresponse = client.chat.completions.create(\n\n    model=\"gpt-4o\",\n\n    temperature=0.2,\n\n    messages=[\n\n        {\n\n            \"role\": \"system\",\n\n            \"content\": \"You are a Python expert. Return only code with brief comments. No explanations outside the code block.\"\n\n        },\n\n        {\n\n            \"role\": \"user\",\n\n            \"content\": \"Write a function that validates email addresses using regex.\"\n\n        }\n\n    ]\n\n)\n\nprint(response.choices[0].message.content)<\/pre><p>Notice three things. 
The <code>system<\/code> message sets the role and output constraints. The low <code>temperature<\/code> (0.2) keeps the output consistent. And the instruction &ldquo;No explanations outside the code block&rdquo; prevents the model from adding filler text you don&rsquo;t need.<\/p><p>You&rsquo;ll use this same structure, system role + focused user prompt + low temperature, for most API calls.<\/p><h3 class=\"wp-block-heading\">Generating code with structured prompts<\/h3><p>You get much better code from an LLM when your prompt spells out what the code should do, what tools to use, and how to structure the output.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Write a Python function called \"fetch_weather\" that:\n\n- Accepts a city name as a string parameter\n\n- Makes a GET request to the OpenWeatherMap API\n\n- Returns a dictionary with keys: \"city\", \"temperature_celsius\", \"description\"\n\n- Handles HTTP errors with try\/except and returns None on failure\n\n- Uses the requests library<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import requests\n\ndef fetch_weather(city: str) -&gt; dict | None:\n\n    api_key = \"YOUR_API_KEY\"\n\n    url = f\"https:\/\/api.openweathermap.org\/data\/2.5\/weather?q={city}&amp;appid={api_key}&amp;units=metric\"\n\n    try:\n\n        response = requests.get(url)\n\n        response.raise_for_status()\n\n        data = response.json()\n\n        return {\n\n            \"city\": data[\"name\"],\n\n            \"temperature_celsius\": data[\"main\"][\"temp\"],\n\n            
\"description\": data[\"weather\"][0][\"description\"]\n\n        }\n\n    except requests.exceptions.RequestException:\n\n        return None<\/pre><p>The prompt worked because it listed every requirement as a bullet point. The model didn&rsquo;t have to guess about the return format, error handling, or which library to use. In production, you&rsquo;d load the API key from an environment variable and set a request timeout, but the prompt itself stays the same.<\/p><p>Structured prompts don&rsquo;t just help you write new code. You can use the same approach to find and fix bugs in code you&rsquo;ve already written.<\/p><h3 class=\"wp-block-heading\">Debugging code using prompts<\/h3><p>You can find bugs faster with a prompt than by manually tracing through logic. Paste the broken code, describe the expected behavior, and let the model find the issue.<\/p><p><strong>Prompt:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">This function should return the average of a list of numbers, but it returns 3.33 instead of 20 for the input [10, 20, 30]. Find the bug and explain the fix.\n\ndef calculate_average(numbers):\n\n    total = 0\n\n    for num in numbers:\n\n        total += num\n\n        average = total \/ len(numbers)\n\n        return average<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Bug: The lines \"average = total \/ len(numbers)\" and \"return average\" are inside the for loop.\n\nThe function divides an incomplete total and returns after the first iteration: 10 \/ 3 = 3.33.\n\nFix: Move the division and the return outside the loop.\n\ndef calculate_average(numbers):\n\n    total = 0\n\n    for num in numbers:\n\n        total += num\n\n    average = total \/ len(numbers)\n\n    return average<\/pre><p>Describe what the code should do and what it actually does. That contrast helps the model find the problem.<\/p><p>Beyond writing and fixing code, you can also use prompts to handle repetitive tasks that would otherwise eat up your afternoon.<\/p><h3 class=\"wp-block-heading\">Automating tasks with prompts<\/h3><p>You can turn repetitive manual work into automated pipelines with prompts. Two common examples:<\/p><p><strong>Data processing, converting CSV rows to JSON:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Convert each row of this CSV data into a JSON object. 
Return a JSON array.\n\nname,email,role\n\nSarah Chen,sarah@example.com,developer\n\nJames Park,james@example.com,designer<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">[\n\n  {\"name\": \"Sarah Chen\", \"email\": \"sarah@example.com\", \"role\": \"developer\"},\n\n  {\"name\": \"James Park\", \"email\": \"james@example.com\", \"role\": \"designer\"}\n\n]<\/pre><p><strong>Documentation generation, creating docstrings from code:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Generate a Google-style docstring for this Python function:\n\ndef retry_request(url, max_retries=3, delay=1.0):\n\n    for attempt in range(max_retries):\n        try:\n\n            response = requests.get(url)\n\n            response.raise_for_status()\n\n            return response.json()\n\n        except requests.exceptions.RequestException:\n\n            if attempt &lt; max_retries - 1:\n\n                time.sleep(delay * (attempt + 1))\n\n    return None\n<\/pre><p><strong>Output:<\/strong><\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def retry_request(url, max_retries=3, delay=1.0):\n\n    \"\"\"Sends a GET request with automatic retry on failure.\n\n    Attempts the request up to max_retries times, with increasing\n\n    delay between attempts (linear backoff).\n\n    Args:\n\n        url: The endpoint URL to request.\n\n        max_retries: Maximum number of attempts. 
Defaults to 3.\n\n        delay: Base delay in seconds between retries. Defaults to 1.0.\n\n    Returns:\n\n        Parsed JSON response as a dictionary, or None if all retries fail.\n\n    \"\"\"<\/pre><p>Both of these tasks take minutes to do by hand. With a well-structured prompt, they take seconds. And you can embed the prompt in a script that runs independently.<\/p><h2 class=\"wp-block-heading\" id=\"h-prompt-engineering-best-practices-for-developers\">Prompt engineering best practices for developers<\/h2><p>A consistent process matters more than any single technique. These <a href=\"\/au\/tutorials\/prompt-engineering-best-practices\" data-wpel-link=\"internal\" rel=\"follow\">prompt engineering best practices<\/a> give you a repeatable system for writing prompts that work across models and projects, from first draft to tested result.<\/p><ol class=\"wp-block-list\">\n<li><strong>Describe the exact output you need.<\/strong> &ldquo;Write a function&rdquo; gives you something generic. &ldquo;Write a Python function that takes a list of dictionaries and returns only entries where the &lsquo;status&rsquo; key equals &lsquo;active&rsquo;&rdquo; gives you something usable. More detail means less guessing.<\/li>\n\n\n\n<li><strong>Include your project context in every prompt.<\/strong> Tell the model what your project does, what stack you&rsquo;re using, and what constraints apply. &ldquo;I&rsquo;m building a REST API in FastAPI with PostgreSQL&rdquo; gives the model enough background to write relevant code.<\/li>\n\n\n\n<li><strong>Specify the response format.<\/strong> If you need JSON, say &ldquo;return valid JSON.&rdquo; If you need a numbered list, say so. 
When the response feeds into another system, spell out the exact structure: field names, data types, nesting.<\/li>\n\n\n\n<li><strong>Add examples when the task is nuanced.<\/strong> Two or three input-output pairs show the model what &ldquo;correct&rdquo; looks like far better than a paragraph of instructions. This is especially true for classification, formatting, and data extraction.<\/li>\n\n\n\n<li><strong>Separate instructions from data with clear markers.<\/strong> Use triple backticks, XML tags, or labeled sections to tell the model which part is instructions and which part is content to process.<\/li>\n\n\n\n<li><strong>Replace vague words with measurable criteria.<\/strong> Words like &ldquo;good,&rdquo; &ldquo;better,&rdquo; and &ldquo;relevant&rdquo; mean different things to different models. &ldquo;Write concise code&rdquo; is vague. &ldquo;Keep the function under 20 lines with no nested loops&rdquo; is clear.<\/li>\n\n\n\n<li><strong>Test the output and fix the prompt.<\/strong> Your first prompt is rarely your best. Test the output, spot where it falls short, and adjust. Add constraints, rephrase instructions, or break complex prompts into smaller steps.<\/li>\n<\/ol><h2 class=\"wp-block-heading\" id=\"h-common-prompt-engineering-mistakes-to-avoid\">Common prompt engineering mistakes to avoid<\/h2><p>Common prompt engineering mistakes include vague instructions, missing project context, overloaded prompts, and skipping the output format. You usually won&rsquo;t spot these in the prompt itself. They show up when the model&rsquo;s response comes back wrong.<\/p><ul class=\"wp-block-list\">\n<li><strong>Vague prompts return vague output.<\/strong> &ldquo;Make this code better&rdquo; sounds reasonable, but the model doesn&rsquo;t know if you mean performance, readability, or security. 
You get a mix of random changes, some helpful, some not, and spend more time sorting through the output than you saved.<\/li>\n\n\n\n<li><strong>Missing context sends the model guessing.<\/strong> A prompt asking to &ldquo;handle authentication&rdquo; returned a PHP session example for a project built in Node.js with JWT. One line of context (&ldquo;I&rsquo;m using Node.js with JWT&rdquo;) would have prevented a completely unusable response.<\/li>\n\n\n\n<li><strong>Overloading a single prompt dilutes everything.<\/strong> When you ask the model to refactor a function, add tests, write docs, and suggest performance improvements in one go, each part gets less attention. You end up with mediocre versions of five things instead of a solid version of one.<\/li>\n\n\n\n<li><strong>Skipping output format breaks your parser.<\/strong> Without format instructions, the same prompt can return JSON one time, a markdown table the next, and a paragraph the third. If your code expects <code>JSON.parse()<\/code> to work, a surprise paragraph crashes the pipeline.<\/li>\n\n\n\n<li><strong>Not testing with varied inputs hides edge cases.<\/strong> A prompt that parses addresses perfectly for your test case might break on international formats, missing fields, or unusual characters. Test your prompts with the messiest inputs your users will actually send.<\/li>\n\n\n\n<li><strong>Trusting model output without validation.<\/strong> The model can return code that looks correct but doesn&rsquo;t compile, or JSON with missing keys your schema requires. Always validate model output in your code before passing it downstream.<\/li>\n<\/ul><p><div class=\"protip\">\n                    <h4 class=\"title\"><\/h4>\n                    <p><strong>Pro Tip: <\/strong>Treat your prompts like unit tests. Each prompt should have a clear expected output. 
When the result doesn't match, debug the prompt the same way you'd debug code: isolate the issue, change one thing, and test again.<\/p>\n                <\/div><\/p><p>Writing solid prompts is one half of the work. The other half is connecting them to your actual codebase so they run as part of your application.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-developers-integrate-prompt-engineering-into-applications\">How developers integrate prompt engineering into applications<\/h2><p>You connect prompt engineering to your application through a simple pipeline: your app collects user input, wraps it in a prompt template, sends that prompt to an AI model&rsquo;s API, and returns the parsed response. You can further<a href=\"\/au\/tutorials\/prompt-tuning\" data-wpel-link=\"internal\" rel=\"follow\"> <\/a><a href=\"\/au\/tutorials\/prompt-tuning\" data-wpel-link=\"internal\" rel=\"follow\">optimize prompts with prompt tuning<\/a> to improve reliability over time.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\"><figure data-wp-context='{\"imageId\":\"69dd47586ecfb\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/04\/1776091043089-2.jpeg\" alt=\"Diagram showing User Input &rarr; Prompt Template (adds role, context, constraints) &rarr; API Call &rarr; Model Response &rarr; App Parses and Displays Result\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" 
data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Most production applications need more than a single prompt-and-response cycle, so you&rsquo;ll combine chaining, tool calling, and templates.<\/p><h3 class=\"wp-block-heading\">Prompt chaining in multi-step workflows<\/h3><p>Prompt chaining breaks a complex task into smaller prompts that run one after another. The output from one prompt becomes the input for the next.<\/p><p>Say you&rsquo;re building a feature that turns customer feedback into task items. You use three chained prompts to handle this cleanly:<\/p><ol class=\"wp-block-list\">\n<li>First prompt &rarr; Summarize the raw feedback into key themes<\/li>\n\n\n\n<li>Second prompt &rarr; Classify each theme by urgency (high, medium, low)<\/li>\n\n\n\n<li>Third prompt &rarr; Generate a task list with assigned priority levels<\/li>\n<\/ol><p>Each prompt handles one task, which means less room for error than cramming everything into a single request. The trade-off is extra API calls, which add latency and token costs, so use chaining when accuracy matters more than speed.<\/p><p>Chaining handles multi-step logic, but sometimes a step needs live data from an outside source. Tool calling lets the model reach out to your functions mid-conversation to get that data.<\/p><h3 class=\"wp-block-heading\">Using tools and function calling with prompts<\/h3><p>Modern LLM APIs let the model call external functions during a conversation. 
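As a rough sketch of how that wiring can look with the OpenAI-style tools format (the get_order_status lookup and its hardcoded "shipped" result are stand-ins for a real database query):

```python
import json

# Stand-in for a real order lookup; a production app would query its database here.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# Tool schema in the shape OpenAI-style chat APIs expect in their "tools" list.
ORDER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}

# Map tool names the model may request to the local functions that implement them.
AVAILABLE_TOOLS = {"get_order_status": get_order_status}

def run_tool_call(name: str, arguments_json: str) -> str:
    """Execute the function the model asked for and return a JSON string
    you can send back to the model in a follow-up message."""
    func = AVAILABLE_TOOLS[name]
    result = func(**json.loads(arguments_json))
    return json.dumps(result)
```

In a live conversation, the name and JSON arguments come from the model's tool-call response; your code runs run_tool_call, appends the result as a tool message, and lets the model write the final answer.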
Instead of making up an answer, the model recognizes when it needs real data and triggers a function you&rsquo;ve defined.<\/p><p>For example, you give the model access to a <code>get_order_status(order_id)<\/code> function. When a user asks, &ldquo;Where&rsquo;s my order #4521?&rdquo;, the model doesn&rsquo;t make up a shipping status. It calls your function, gets the actual data, and builds a response from it.<\/p><p>In the OpenAI API, you define available tools in your request. The model returns a structured description of which function to call and with what arguments (via a <code>tool_calls<\/code> field in today&rsquo;s API). Your code runs the function, sends the result back, and the model writes a natural-language response.<\/p><p>You can use this pattern to build AI assistants that check databases, call third-party APIs, or run calculations.<\/p><p>As you add chains and tool calls, the number of prompts in your codebase grows fast. You need a way to manage them without digging through source code every time something changes.<\/p><h3 class=\"wp-block-heading\">Creating reusable prompt templates<\/h3><p>A prompt template separates the fixed instructions from the variable input, so you can update one without touching the other:<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">REVIEW_TEMPLATE = \"\"\"You are a senior code reviewer.\n\nReview the following {language} code for:\n\n- Security vulnerabilities\n\n- Performance issues\n\n- Readability improvements\n\nRespond in JSON format with keys: \"issues\" (array of objects with\n\n\"type\", \"line\", \"description\", \"suggestion\").\n\nCode to review:\n\n{code}\n\n\"\"\"\n\ndef review_code(language: str, code: str) -&gt; str:\n\n    prompt = REVIEW_TEMPLATE.format(language=language, code=code)\n\n# Send prompt to API and 
return the reply; a sketch with the OpenAI client:\n    # from openai import OpenAI\n    # client = OpenAI()\n    # response = client.chat.completions.create(\n    #     model=\"gpt-4o\",\n    #     messages=[{\"role\": \"user\", \"content\": prompt}],\n    # )\n    # return response.choices[0].message.content<\/pre><p>You can store templates as separate files, version them in Git, and swap them without redeploying your application.<\/p><p>At this point, you have the prompting techniques, the API patterns, and the tools to wire them into production. The remaining question is how to keep sharpening these skills.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-can-developers-become-prompt-engineers\">How can developers become prompt engineers?<\/h2><p>A developer can<a href=\"\/au\/tutorials\/how-to-become-prompt-engineer\" data-wpel-link=\"internal\" rel=\"follow\"> <\/a><a href=\"\/au\/tutorials\/how-to-become-prompt-engineer\" data-wpel-link=\"internal\" rel=\"follow\">become a prompt engineer<\/a> by treating prompts like production code. You build them into your applications, version them, test them against edge cases, and improve them over time.<\/p><ul class=\"wp-block-list\">\n<li><strong>Start experimenting now.<\/strong> Pick a task you do manually, like writing docs, reviewing code, or formatting data, and try automating it with prompts. One afternoon of testing teaches you more than a week of reading.<\/li>\n\n\n\n<li><strong>Use AI APIs directly.<\/strong> Sign up for an OpenAI, Anthropic, or open-source model API and start building. Writing prompts inside your own code teaches you faster than any playground.<\/li>\n\n\n\n<li><strong>Build a small project.<\/strong> Create a CLI tool that summarizes git diffs, a Slack bot that answers questions from your docs, or try<a href=\"\/au\/tutorials\/prompting-with-hostinger-horizons\" data-wpel-link=\"internal\" rel=\"follow\"> <\/a><a href=\"\/au\/tutorials\/prompting-with-hostinger-horizons\" data-wpel-link=\"internal\" rel=\"follow\">prompting with Hostinger Horizons<\/a> to build a full web app using natural language. A working project forces you to solve real problems.<\/li>\n\n\n\n<li><strong>Track what works and what doesn&rsquo;t.<\/strong> Keep a log of your prompts and results. 
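A log does not need special tooling to be useful; here is a minimal pass/fail record as a sketch (the JSON-keys check is one hypothetical example of an expected output):

```python
import json
from datetime import datetime, timezone

def log_prompt_result(log: list, prompt: str, raw_output: str, required_keys: set) -> bool:
    """Record whether one prompt run produced the JSON keys you expected."""
    try:
        parsed = json.loads(raw_output)
        passed = isinstance(parsed, dict) and required_keys <= parsed.keys()
    except json.JSONDecodeError:
        passed = False
    log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "passed": passed,
    })
    return passed
```

Reviewing which prompts keep failing shows you exactly where to tighten instructions or add examples.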
Over time, you&rsquo;ll develop a feel for what works with different models and tasks.<\/li>\n\n\n\n<li><strong>Learn the full stack.<\/strong> Prompt engineering connects to API design, data processing, and application architecture. The more you understand how AI fits into a real system, the better your prompts get.<\/li>\n<\/ul><p>That&rsquo;s the long game. But you can start seeing results today. Pick one technique you just learned, like few-shot prompting or chain-of-thought, and apply it to something you&rsquo;re already working on. Swap a vague prompt for a structured one in an existing API call, or try chaining two prompts where you&rsquo;ve been cramming everything into one.<\/p><p>You don&rsquo;t need a new project to start. You just need one prompt that&rsquo;s already underperforming.<\/p><figure class=\"wp-block-image size-large\"><a class=\"hgr-tutorials-cta hgr-tutorials-cta-horizons\" href=\"\/au\/horizons\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"300\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2025\/03\/Horizons-in-text-banner-no-code-website-builder-1024x300.png\" alt=\"\" class=\"wp-image-129223\"  sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Prompt engineering for developers is the practice of writing structured instructions that tell AI models exactly what you need and getting usable, reliable results back. You move beyond vague questions and, instead, design inputs that produce outputs your code can actually work with. 
How you phrase a prompt directly affects the quality of the model&rsquo;s [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"\/au\/tutorials\/prompt-engineering-for-developers\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":624,"featured_media":142962,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"Prompt engineering for developers: Guide and examples","rank_math_description":"Learn prompt engineering for developers with practical techniques, code examples, and tips to build better AI-powered applications.","rank_math_focus_keyword":"prompt engineering for developers","footnotes":""},"categories":[22651],"tags":[],"class_list":["post-142961","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-web-app"],"hreflangs":[{"locale":"en-US","link":"https:\/\/www.hostinger.com\/tutorials\/prompt-engineering-for-developers","default":1},{"locale":"en-PH","link":"https:\/\/www.hostinger.com\/ph\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-MY","link":"https:\/\/www.hostinger.com\/my\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-UK","link":"https:\/\/www.hostinger.com\/uk\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-IN","link":"https:\/\/www.hostinger.com\/in\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-CA","link":"https:\/\/www.hostinger.com\/ca\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-AU","link":"https:\/\/www.hostinger.com\/au\/tutorials\/prompt-engineering-for-developers","default":0},{"locale":"en-NG","link":"https:\/\/www.hostinger.com\/ng\/tutorials\/prompt-engineering-for-developers","default":0}],"_links":{"self":[{"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/posts\/142961","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/w
ww.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/users\/624"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/comments?post=142961"}],"version-history":[{"count":0,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/posts\/142961\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/media\/142962"}],"wp:attachment":[{"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/media?parent=142961"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/categories?post=142961"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hostinger.com\/au\/tutorials\/wp-json\/wp\/v2\/tags?post=142961"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}