{"id":131602,"date":"2025-07-28T11:07:56","date_gmt":"2025-07-28T11:07:56","guid":{"rendered":"\/tutorials\/?p=131602"},"modified":"2026-03-09T19:16:19","modified_gmt":"2026-03-09T19:16:19","slug":"n8n-ollama-integration","status":"publish","type":"post","link":"\/ng\/tutorials\/n8n-ollama-integration","title":{"rendered":"How do you integrate n8n with Ollama for local LLM workflows?"},"content":{"rendered":"<?xml encoding=\"utf-8\" ?><p>Integrating n8n with Ollama enables you to harness various AI models into your automation workflow, allowing it to perform complex operations that would otherwise be impossible.<\/p><p>However, the process can be tricky because you need to configure various settings on both tools in order for them to work seamlessly.<\/p><p>As long as you already have n8n and Ollama installed on your server, you can integrate them in four simple steps:<\/p><ol class=\"wp-block-list\">\n<li>Add the Ollama chat model node <\/li>\n\n\n\n<li>Choose the AI model and adjust its runtime settings<\/li>\n\n\n\n<li>Configure the AI agent node&rsquo;s prompt settings<\/li>\n\n\n\n<li>Send a test prompt to verify functionality<\/li>\n<\/ol><p>After completing those steps, you&rsquo;ll have a functional, Ollama-powered AI processing workflow, which you can integrate into a more complete automation system. For example, you can connect messaging apps like WhatsApp to create a functional AI chatbot.<\/p><p>Moreover, running it locally on a private server like a Hostinger VPS provides you with a higher level of control over your data. This makes the integration suitable for automating tasks involving sensitive information, like summarizing internal documents or creating an in-house chatbot.<\/p><p>Let&rsquo;s explore how to connect Ollama with n8n in detail and create a chatbot based on this integration. 
Towards the end, we&rsquo;ll also explain the popular use cases for this integration and expand its capabilities using the LangChain nodes.<\/p><p>\n\n\n\n<\/p><h2 class=\"wp-block-heading\" id=\"h-prerequisites\"><strong>Prerequisites<\/strong><\/h2><p>To integrate n8n with Ollama, you need to fulfill the following prerequisites:<\/p><ul class=\"wp-block-list\">\n<li><strong>Ollama must be installed locally<\/strong>. Make sure you&rsquo;ve <a href=\"\/ng\/tutorials\/how-to-install-ollama\">installed Ollama<\/a> locally on a virtual private server (VPS). The host must have enough hardware to run your desired AI models, which might require over <strong>8 GB<\/strong> of RAM. <\/li>\n\n\n\n<li><strong>n8n must be set up and accessible<\/strong>. <a href=\"\/ng\/tutorials\/how-to-install-n8n\">Install n8n<\/a> on a VPS and create an account. We recommend configuring it on the same server as Ollama so the two tools can communicate reliably. <\/li>\n\n\n\n<li><strong>Ensure the necessary ports are open<\/strong>. Verify that ports <strong>11434<\/strong> and <strong>5678<\/strong> on your server are open so that Ollama and n8n are accessible. If you host them on a Hostinger VPS, you can check and configure the ports by simply asking our <a href=\"\/blog\/kodee\">Kodee AI assistant<\/a>. <\/li>\n\n\n\n<li><strong>Basic JSON knowledge<\/strong>. Learn how to read JSON because n8n nodes primarily exchange data in this format. Understanding it helps you select data and troubleshoot errors more efficiently.<\/li>\n<\/ul><p><div><p class=\"important\"><strong>Important!<\/strong> We highly recommend installing both n8n and Ollama in the same Docker container for better isolation. This is the method we used when testing this tutorial, so it is verified to work.<br>\nIf you use a Hostinger VPS, you can start by installing either n8n or Ollama in a Docker container by simply selecting the corresponding OS template &ndash; the application will be installed in a container by default. 
Then, you&rsquo;ll need to install the other app in the same container.<\/p><\/div>\n\n\n\n<\/p><h2 class=\"wp-block-heading\" id=\"h-how-to-set-up-ollama-integration-in-n8n\"><strong>How to set up Ollama integration in n8n<\/strong><\/h2><p>Connecting Ollama with n8n involves adding the necessary node and configuring several settings. In this section, we will explain the steps in detail, including how to test the integration&rsquo;s functionality.<\/p><h3 class=\"wp-block-heading\" id=\"h-1-add-the-ollama-chat-model-node\"><strong>1. Add the Ollama Chat Model node<\/strong><\/h3><p>Adding the Ollama Chat Model node enables n8n to connect with large language models (LLMs) on the AI platform via a conversational agent.<\/p><p>n8n offers two Ollama nodes: <strong>Ollama Model <\/strong>and <strong>Ollama Chat Model<\/strong>. The<strong> Ollama Chat Model<\/strong> is specifically designed for conversation and has a built-in <strong>Basic LLM Chain <\/strong>node<strong> <\/strong>that forwards your message to the chosen model. Meanwhile, the<strong> Ollama Model <\/strong>node is suitable for more general tasks with other <strong>Chain<\/strong> nodes &ndash; we&rsquo;ll discuss this further in the LangChain section.<\/p><p>In this tutorial, we&rsquo;ll use the Ollama Chat Model node as it&rsquo;s easier to use and integrate with a more complete workflow. Here&rsquo;s how to add it to n8n:<\/p><ol class=\"wp-block-list\">\n<li>Access your n8n instance. You should be able to open it in a web browser using your <strong>VPS&rsquo;s hostname<\/strong> or <strong>IP address<\/strong>, depending on your configuration. <\/li>\n\n\n\n<li>Log in to your n8n account. <\/li>\n\n\n\n<li>Create a new workflow by clicking the button on the top right of your n8n main page. 
<\/li>\n<\/ol><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c48d9ae\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1460\" height=\"743\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/public\" alt=\"n8n's main page with the workflow creation button highlighted\" class=\"wp-image-131603\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/w=1460,fit=scale-down 1460w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-main-page-workflow-creation-button.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1460px) 100vw, 1460px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" 
data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><ol start=\"4\" class=\"wp-block-list\">\n<li>Click the <strong>plus<\/strong> icon and search for <strong>Ollama Chat Model<\/strong>.<\/li>\n\n\n\n<li>Add the node by clicking it. <\/li>\n<\/ol><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c48efe4\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"936\" height=\"664\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-model-nodes-in-n8n.png\/public\" alt=\"Ollama model nodes in n8n\" class=\"wp-image-131604\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-model-nodes-in-n8n.png\/w=936,fit=scale-down 936w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-model-nodes-in-n8n.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-model-nodes-in-n8n.png\/w=150,fit=scale-down 150w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-model-nodes-in-n8n.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 936px) 100vw, 936px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>The node configuration window will appear. Let&rsquo;s proceed to the next step to set it up.<\/p><h3 class=\"wp-block-heading\" id=\"h-2-choose-your-model-and-runtime-settings\"><strong>2. Choose your model and runtime settings<\/strong><\/h3><p>Before choosing an AI model and configuring its runtime settings, connect n8n with your self-hosted Ollama instance. 
Here&rsquo;s how to do it:<\/p><ol class=\"wp-block-list\">\n<li>On the node configuration window, expand the <strong>Credential to connect with<\/strong> drop-down menu.<\/li>\n\n\n\n<li>Select <strong>Create new credential.<\/strong> <\/li>\n<\/ol><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c490726\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"958\" height=\"600\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-chat-node-new-credential-creation-button.png\/public\" alt=\"Ollama Chat node's new credential creation button\" class=\"wp-image-131605\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-chat-node-new-credential-creation-button.png\/w=958,fit=scale-down 958w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-chat-node-new-credential-creation-button.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-chat-node-new-credential-creation-button.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-chat-node-new-credential-creation-button.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 958px) 100vw, 958px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" 
data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><ol start=\"3\" class=\"wp-block-list\">\n<li>Enter the base URL of your Ollama instance. Depending on your hosting environment, it might be <strong>localhost <\/strong>or the <strong>name of your Ollama Docker container<\/strong>. <\/li>\n\n\n\n<li>Hit <strong>Save<\/strong>.<\/li>\n<\/ol><p>If the connection is successful, you&rsquo;ll see a confirmation message. Otherwise, make sure the address is correct and your Ollama instance is running.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c491e0a\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1460\" height=\"834\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/public\" alt=\"A confirmation message confirming n8n connection with Ollama\" class=\"wp-image-131606\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/w=1460,fit=scale-down 1460w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/a-confirmation-message-confirming-n8n-connection-with-ollama.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1460px) 100vw, 1460px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Once connected, you can choose the LLM to use in your Ollama model node. To do so, simply expand the <strong>Model<\/strong> drop-down menu and select one from the list. If it is greyed out, refreshing n8n will resolve the issue.<\/p><p>Note that n8n currently only supports older models like Llama 3 and DeepSeek R1. 
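If you want to confirm which models your Ollama instance actually serves before checking the drop-down menu, you can query Ollama's REST API directly. This is a minimal sketch, assuming Ollama listens on the default <strong>http://localhost:11434</strong> (swap in your Docker container name for <strong>localhost</strong> if needed):

```shell
# List the models installed on the Ollama instance via its /api/tags endpoint.
# If Ollama is not reachable, report that instead of printing nothing.
models=$(curl -s http://localhost:11434/api/tags || true)
if [ -n "$models" ]; then
  echo "$models"
else
  echo "Ollama not reachable on localhost:11434"
fi
```

The response is a JSON object listing every locally installed model; any model you pull afterwards should appear both here and, after a refresh, in n8n's <strong>Model</strong> menu.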
If the <strong>Model <\/strong>menu shows an error and an empty list, it is most likely because your Ollama instance only has incompatible models installed.<\/p><p>To resolve this, simply <a href=\"https:\/\/ollama.com\/library\" target=\"_blank\" rel=\"noreferrer noopener\">download other Ollama models<\/a>. In the Ollama CLI, do this by running the following command in your Ollama environment, replacing <strong>model-name<\/strong> with the model you want:<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ollama pull model-name<\/pre><p>You can also use a model with custom runtime settings, such as a higher temperature. Here&rsquo;s how to create one in the <a href=\"\/ng\/tutorials\/ollama-cli-tutorial\">Ollama CLI<\/a>:<\/p><ol class=\"wp-block-list\">\n<li>Access your Ollama installation. If you use Docker, use the following command, replacing <strong>ollama<\/strong> with your container&rsquo;s actual name:<\/li>\n<\/ol><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">docker exec -it ollama bash<\/pre><ol start=\"2\" class=\"wp-block-list\">\n<li>Create a new <a href=\"https:\/\/ollama.readthedocs.io\/en\/modelfile\/#format\" target=\"_blank\" rel=\"noreferrer noopener\">modelfile<\/a> defining your model&rsquo;s runtime settings. 
For example, we&rsquo;ll set the temperature of our Llama 3 model to <strong>0.7<\/strong>:<\/li>\n<\/ol><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">echo \"FROM llama3\" &gt; Modelfile<\/pre><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">echo \"PARAMETER temperature 0.7\" &gt;&gt; Modelfile<\/pre><ol start=\"3\" class=\"wp-block-list\">\n<li>Run the following command to apply the modelfile configuration to the base Llama 3 model, creating a custom LLM called <strong>llama3-temp07<\/strong>:<\/li>\n<\/ol><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ollama create llama3-temp07 -f Modelfile<\/pre><p>Once you&rsquo;ve completed these steps, n8n should list your new Llama 3 model with the custom <strong>0.7<\/strong> temperature.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c49344b\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"972\" height=\"1182\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" 
src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/public\" alt=\"a custom Ollama model on n8n\" class=\"wp-image-131607\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/w=972,fit=scale-down 972w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/w=247,fit=scale-down 247w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/w=842,fit=scale-down 842w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/w=123,fit=scale-down 123w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/custom-ollama-model-on-n8n.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 972px) 100vw, 972px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p><div class=\"protip\">\n                    <h4 class=\"title\">Managing Ollama GUI<\/h4>\n                    <p>If you use <a href=\"\/ng\/tutorials\/ollama-gui-tutorial\">Ollama GUI<\/a>, check out our tutorial to learn more about its interface and how to 
manage your models.<\/p>\n                <\/div>\n\n\n\n<\/p><h3 class=\"wp-block-heading\" id=\"h-3-configure-prompt-settings\"><strong>3. Configure prompt settings<\/strong><\/h3><p>Configuring prompt settings enables you to customize how the <strong>Basic LLM Chain<\/strong> node modifies your input before passing it to Ollama for processing. While you can use the default settings, you should change them depending on your tasks.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c494ad6\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1188\" height=\"914\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/public\" alt=\"Basic LLM Chain's prompt source options\" class=\"wp-image-131608\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/w=1188,fit=scale-down 1188w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/w=150,fit=scale-down 150w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/basic-llm-chain-prompt-source-options.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1188px) 100vw, 1188px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Here are two ways you can modify the LLM chain node&rsquo;s prompt settings and their example use cases.<\/p><p><strong>Connected Chat trigger node<\/strong><\/p><p>The <strong>Connected Chat trigger node <\/strong>option uses messages from the default <strong>Chat<\/strong> node as input for Ollama. It is the chosen mode by default and passes messages as they are.<\/p><p>However, you can include additional prompts along with the messages to modify Ollama&rsquo;s output. To do this, click the<strong> Add Prompt<\/strong> button in the <strong>Chat Messages (if Using a Chat Model)<\/strong> setting and choose from three additional prompt options:<\/p><ul class=\"wp-block-list\">\n<li><strong>AI<\/strong>. Enter an example of the expected response in the <strong>Message<\/strong> field. The AI model will try to respond in the same way as the provided text. <\/li>\n\n\n\n<li><strong>System<\/strong>. Write a message that guides the model&rsquo;s responses. 
For example, you can define the tone the AI will use or the words it should avoid when responding.<\/li>\n\n\n\n<li><strong>User<\/strong>. Add a sample of the user input for the AI, such as a message, URL, or image. Giving the AI a sample of what to expect from users helps it return more consistent responses. <\/li>\n<\/ul><p><strong>Define below<\/strong><\/p><p>The <strong>Define below<\/strong> option is suitable if you want to enter a reusable pre-typed prompt. It is also ideal for forwarding dynamic data because you can capture it using <a href=\"https:\/\/docs.n8n.io\/code\/expressions\/\" target=\"_blank\" rel=\"noreferrer noopener\">Expressions<\/a> &ndash; n8n&rsquo;s JavaScript-based syntax for manipulating input data or selecting specific fields.<\/p><p>For example, suppose the previous node fetches data about your VPS resource usage, and you want to analyze it using AI. In this case, the prompt remains the same, but the usage metrics will continually change.<\/p><p>Your prompt might look like the following, with <strong>{{ $json.metric }}<\/strong> being the field containing dynamic data about your server resource usage:<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"raw\" data-enlighter-theme=\"atomic\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">The latest usage of my server is {{ $json.metric }}. Analyze this data and compare it with the previous usage history to check if this is abnormal.<\/pre><p>Note that you can still add the same additional prompts as in the previous mode to give the AI more context.<\/p><h3 class=\"wp-block-heading\" id=\"h-4-send-a-test-prompt\"><strong>4. Send a test prompt<\/strong><\/h3><p>Sending a test prompt verifies that your Ollama model works properly when receiving input via n8n. 
The easiest way to do this is by entering a sample message via these steps:<\/p><ol class=\"wp-block-list\">\n<li>Save your workflow by clicking the button on the top right of your canvas. <\/li>\n\n\n\n<li>Hover over the <strong>Chat<\/strong> trigger node and click<strong> Open chat<\/strong>. <\/li>\n\n\n\n<li>On the<strong> Chat interface<\/strong>, send a test message. <\/li>\n<\/ol><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c496437\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1460\" height=\"857\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/public\" alt=\"The Chat trigger interface on n8n canvas\" class=\"wp-image-131609\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/w=1460,fit=scale-down 1460w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/w=150,fit=scale-down 150w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/the-chat-trigger-interface-on-n8n-canvas.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1460px) 100vw, 1460px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Wait until the workflow finishes processing your message. During our testing, the workflow got stuck a few times. If you encounter the same issue, simply reload n8n and send a new message.<\/p><p>If the test is successful, all the nodes will turn green. You can read each node&rsquo;s JSON input and output by double-clicking it and checking the panes on both sides of the configuration window.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-to-create-a-chatbot-workflow-using-ollama-and-n8n\"><strong>How to create a chatbot workflow using Ollama and n8n<\/strong><\/h2><p>Integrating Ollama into n8n enables you to automate various tasks with LLMs, including <a href=\"\/ng\/tutorials\/how-to-build-ai-workflows-in-n8n\">creating an AI-powered workflow in n8n<\/a> that responds to user queries, like a chatbot. 
This section will explore the steps for developing one.<\/p><p>If you want to create an automation system for other tasks, check our <a href=\"\/ng\/tutorials\/n8n-workflow-examples\">n8n workflow examples<\/a> for inspiration.<\/p><h3 class=\"wp-block-heading\" id=\"h-1-add-a-trigger-node\"><strong>1. Add a trigger node<\/strong><\/h3><p>The trigger node in n8n defines the event that will start your workflow. Among several options, here are the most common ones for creating a chatbot:<\/p><p><strong>Chat trigger<\/strong><\/p><p>By default, the Ollama chat model node uses <strong>Chat message<\/strong> as the trigger, which initiates your workflow upon receiving a message.<\/p><p>This default<strong> Chat<\/strong> node is perfect for developing a chatbot. To get it working, all you need to do is make the chat interface available to the public.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c497c7f\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"784\" height=\"830\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-chat-trigger-make-public-toggle.png\/public\" alt=\"n8n Chat trigger's Make Chat Publicly Available toggle\" class=\"wp-image-131610\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-chat-trigger-make-public-toggle.png\/w=784,fit=scale-down 784w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-chat-trigger-make-public-toggle.png\/w=283,fit=scale-down 283w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-chat-trigger-make-public-toggle.png\/w=142,fit=scale-down 142w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-chat-trigger-make-public-toggle.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 784px) 100vw, 784px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>To do this, open the <strong>Chat <\/strong>node and click the <strong>Make Chat Publicly Available<\/strong> toggle. You can then embed this chat functionality into your custom chatbot with a user interface.<\/p><p><strong>Messaging app trigger nodes<\/strong><\/p><p>n8n has trigger nodes that take input from popular messaging apps like <strong>Telegram<\/strong> and <strong>WhatsApp<\/strong>. They are suitable if you want to create a bot for such applications.<\/p><p>Configuring these nodes is rather tricky because you need a developer account and authentication keys to connect to their APIs. Refer to their documentation to learn more about how to configure them.<\/p><p><strong>Webhook trigger<\/strong><\/p><p>The <strong>Webhook<\/strong> trigger starts your workflow when its endpoint URL receives an HTTP request. 
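<\/p><p>For example, assuming the workflow is active and the <strong>Webhook<\/strong> node&rsquo;s path is set to <strong>chatbot<\/strong> (the host and path below are placeholders &ndash; use the production URL shown in your own node), you can fire the trigger from any HTTP client:<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">curl -X POST \"http:\/\/your-vps-address:5678\/webhook\/chatbot\" \\\n  -H \"Content-Type: application\/json\" \\\n  -d '{\"content\": \"Hello, bot!\"}'<\/pre><p>Because the request is sent with a JSON content type, n8n parses it and exposes the payload to the next node under the <strong>body<\/strong> field, so the message above becomes accessible as <strong>body.content<\/strong>.<\/p><p>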
This trigger is suitable if you want to start your chatbot with events other than an incoming chat message, such as a button click.<\/p><p>In the steps below, we&rsquo;ll use this node to start our workflow whenever a Discord chatbot receives a message. If you wish to follow along, first check out our <a href=\"\/ng\/tutorials\/how-to-integrate-n8n-with-discord\">integrating n8n with Discord<\/a> tutorial to learn how to create a Discord bot.<\/p><p><div><p class=\"important\"><strong>Important!<\/strong> If your webhook URL starts with <strong>localhost<\/strong>, change it to your VPS&rsquo; domain, hostname, or IP address. You can do this by modifying <a href=\"https:\/\/docs.n8n.io\/hosting\/configuration\/environment-variables\/endpoints\/\">n8n&rsquo;s WEBHOOK_URL environment variable<\/a> inside its configuration file.<\/p><\/div>\n\n\n\n<\/p><h3 class=\"wp-block-heading\" id=\"h-2-connect-the-ollama-node\"><strong>2. Connect the Ollama node<\/strong><\/h3><p>Connecting the Ollama node allows the trigger node to forward user input for processing.<\/p><p>The <strong>Ollama Chat Model<\/strong> node doesn&rsquo;t connect directly to trigger nodes and only integrates with an AI node. The default one is the <strong>Basic LLM Chain<\/strong> node, but you can also use other <strong>Chain <\/strong>nodes for more complex processing.<\/p><p>Some <strong>Chain<\/strong> nodes support additional tools for processing your data. 
For example, the<strong> AI Agent <\/strong>node lets you add a parser to reformat the output or include a memory to store the previous responses.<\/p><p>For a chatbot that doesn&rsquo;t require complex data processing, like our Discord chatbot, the <strong>Basic LLM Chain<\/strong> is enough.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c4995c9\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1460\" height=\"806\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/public\" alt=\"Ollama and Basic LLM Chain node cluster with the Webhook trigger connected\" class=\"wp-image-131611\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/w=1460,fit=scale-down 1460w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/w=150,fit=scale-down 150w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/ollama-basic-llm-chain-node-cluster-with-webhook-trigger.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1460px) 100vw, 1460px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>So, connect the trigger node with the <strong>Basic LLM Chain <\/strong>node and define how to pass the input. Use <strong>Fixed<\/strong> to pass the message as the prompt verbatim. Alternatively, select <strong>Expression<\/strong> to reference dynamic data or manipulate the input before forwarding it to Ollama.<\/p><p>For example, we use the following <strong>Expression<\/strong> to read the prompt from the <strong>body.content<\/strong> JSON field, whose value changes with each incoming Discord message:<\/p><pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{{ $json.body.content }}<\/pre><h3 class=\"wp-block-heading\" id=\"h-3-output-the-response\"><strong>3. Output the response<\/strong><\/h3><p>Outputting the response from the <strong>AI Agent<\/strong> or <strong>Basic LLM Chain<\/strong> node lets users see your bot&rsquo;s reply. 
At this point, you can only read the output from the chat interface or the node&rsquo;s output pane.<\/p><p>To send the response, use the same node as your trigger. For example, if you are developing a WhatsApp chatbot, connect the <strong>WhatsApp send message<\/strong> node.<\/p><p>If you use the default <strong>Chat<\/strong> trigger, you can use the <strong>Webhook<\/strong> node to forward the message to your custom-coded bot or chatbot interface.<\/p><p>Since our Discord bot&rsquo;s workflow uses the <strong>Webhook <\/strong>trigger, we can also use the <strong>Webhook <\/strong>node for the output. Alternatively, we can use the same bot to send the answer by connecting the Discord <strong>Send a Message<\/strong> node and integrating it with our chatbot. The completed workflow will look like this:<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c49ab3d\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1460\" height=\"533\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/public\" alt=\"n8n's Ollama-powered Discord bot workflow\" class=\"wp-image-131612\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/w=1460,fit=scale-down 1460w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/w=300,fit=scale-down 300w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/n8n-ollama-discord-bot-workflow.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1460px) 100vw, 1460px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p><div class=\"protip\">\n                    <h4 class=\"title\">Not sure how to create a complete workflow?<\/h4>\n                    <p>n8n provides various ready-to-use workflows that you can easily import to your canvas. 
This enables you to create an AI-powered automation system without developing the workflow from scratch.<br>\nCheck out our <a href=\"\/ng\/tutorials\/best-n8n-templates\">best n8n templates<\/a> tutorial to discover curated, ready-to-use workflows for various purposes.<\/p>\n                <\/div>\n\n\n\n<\/p><h2 class=\"wp-block-heading\" id=\"h-what-are-the-best-use-cases-for-n8n-ollama-integration\"><strong>What are the best use cases for n8n Ollama integration?<\/strong><\/h2><p>n8n is one of the most powerful <a href=\"\/ng\/tutorials\/best-ai-automation-tools\">AI automation tools<\/a>, and its integration with Ollama&rsquo;s customizable LLMs enables you to automate a wide range of tasks.<\/p><p>Here are examples of tasks you can <a href=\"\/ng\/tutorials\/what-can-you-automate-with-n8n\">automate with n8n<\/a> and AI:<\/p><ol class=\"wp-block-list\">\n<li><strong>Automated customer support workflow. <\/strong>Use Ollama&rsquo;s LLMs to generate replies to customer queries, summarize tickets, or route issues on platforms like Zendesk and Intercom, all via n8n.<\/li>\n\n\n\n<li><strong>Context-aware email drafting.<\/strong> Automatically draft emails for different contexts or tasks using Ollama. For example, different events can trigger messages that onboard a new lead, remind customers about an expiring subscription, or announce product updates. <\/li>\n\n\n\n<li><strong>Internal knowledge base assistant.<\/strong> Use n8n to query internal documentation, like Notion, Confluence, or Airtable, and feed the data into Ollama to generate intelligent answers or summaries for internal team queries.<\/li>\n\n\n\n<li><strong>Data extraction and summarization.<\/strong> Use n8n to watch incoming text documents, extract their text, and pull key information from them with Ollama &ndash; useful for summarizing reports, invoices, or legal documents.<\/li>\n\n\n\n<li><strong>Automated content production pipeline<\/strong>. 
<a href=\"\/ng\/tutorials\/how-to-use-n8n-to-generate-content\">Generate content using n8n<\/a> and Ollama by creating a workflow that automates the keyword research, writing, and editing process. <\/li>\n\n\n\n<li><strong>Secure chatbots for internal use. <\/strong>Create in-house chatbots that work with sensitive internal data, where n8n handles the orchestration, and Ollama runs the LLM completely offline for security and privacy. <\/li>\n<\/ol><h3 class=\"wp-block-heading\" id=\"h-why-should-you-host-your-n8n-ollama-workflows-with-hostinger\"><strong>Why should you host your n8n-Ollama workflows with Hostinger?<\/strong><\/h3><p>Hosting your n8n-Ollama workflows with Hostinger brings various advantages over using a personal machine or the official hosting plan. Here are some of the benefits:<\/p><ul class=\"wp-block-list\">\n<li><strong>Higher control<\/strong>. Hostinger&rsquo;s <a href=\"\/ng\/self-hosted-n8n\">n8n VPS hosting<\/a> service provides users full root access to their server settings and data. This enables you to configure your n8n and Ollama hosting environments to your specific preferences.<\/li>\n\n\n\n<li><strong>Improved privacy<\/strong>. Since you&rsquo;ll be hosting n8n and Ollama on a server over which you have complete control, you&rsquo;ll have the freedom to fine-tune  access limits and security settings.<\/li>\n\n\n\n<li><strong>Scalability<\/strong>. Hostinger VPS plans are easily upgradable without downtime and offer the n8n queue mode template that enables you to offload your task to multiple workers. <\/li>\n\n\n\n<li><strong>Streamlined setup<\/strong>. Our VPS templates enable you to install n8n or Ollama in one click, making the process more efficient.<\/li>\n\n\n\n<li><strong>Easy management<\/strong>. Managing a Hostinger VPS is easy with the intuitive hPanel control panel or the built-in browser terminal. Beginners can also ask our AI assistant, <strong>Kodee<\/strong>, to perform system administration tasks via chat. 
<\/li>\n<\/ul><?xml encoding=\"utf-8\" ?><figure class=\"wp-block-image size-large\"><a class=\"hgr-tutorials-cta hgr-tutorials-cta-vps-hosting\" href=\"\/ng\/vps-hosting\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"300\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/public\" alt=\"\" class=\"wp-image-77934\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=768,fit=scale-down 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><h2 class=\"wp-block-heading\" id=\"h-using-langchain-s-lm-ollama-node-in-n8n\"><strong>Using LangChain&rsquo;s LM Ollama node in n8n<\/strong><\/h2><p><strong>LangChain <\/strong>is a framework that facilitates the integration of LLMs into applications. 
In n8n, this implementation involves connecting different tool nodes and AI models to achieve particular processing capabilities.<\/p><p>In n8n, the LangChain feature uses<strong> Cluster nodes <\/strong>&ndash; a group of interconnected nodes that work together to provide functionality in your workflow.<\/p><div class=\"wp-block-image\"><figure data-wp-context='{\"imageId\":\"69e199c49cf3d\"}' data-wp-interactive=\"core\/image\" class=\"aligncenter size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1707\" height=\"1523\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/public\" alt=\"The concept of cluster, root, and sub-nodes in n8n's LangChain implementation\" class=\"wp-image-132612\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=1707,fit=scale-down 1707w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=768,fit=scale-down 768w, 
https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2025\/07\/VPS-cluster-root-sub-nodes-illustration.png\/w=1536,fit=scale-down 1536w\" sizes=\"auto, (max-width: 1707px) 100vw, 1707px\" \/><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Cluster nodes consist of two parts: <strong>root nodes <\/strong>that define the main functionality, and<strong> sub-nodes<\/strong> that add the LLM capability or extra features.<\/p><p>The most important part of LangChain implementation in n8n is the <strong>Chain<\/strong> inside the <strong>root nodes<\/strong>. It brings together and sets up the logic for different AI components, like the Ollama model and the parser node, to create a cohesive system.<\/p><p>Here are the<strong> Chains <\/strong>in n8n and their functions:<\/p><ul class=\"wp-block-list\">\n<li><strong>Basic LLM Chain<\/strong>. Enables you to set the prompt that the AI model will use and an optional parser to reformat the response.<\/li>\n\n\n\n<li><strong>Retrieval Q&amp;A Chain<\/strong>. Allows you to retrieve AI-processed data using vector stores, databases designed to store information in numerical format.<\/li>\n\n\n\n<li><strong>Summarization Chain<\/strong>. Summarizes the content of multiple documents or inputs. <\/li>\n\n\n\n<li><strong>Sentiment Analysis<\/strong>. 
Analyzes the sentiment of the input text and classifies it into categories like positive, neutral, and negative.<\/li>\n\n\n\n<li><strong>Text Classifier<\/strong>. Sorts input data into different user-defined categories based on the specified criteria and parameters. <\/li>\n<\/ul><p>When creating a workflow in n8n, you may also encounter <strong>Agents<\/strong> &ndash; subsets of<strong> Chains<\/strong> with the ability to make decisions. While <strong>Chains<\/strong> operate based on a set of predetermined rules, an <strong>Agent<\/strong> uses the connected LLM to determine the next actions to take.<\/p><h2 class=\"wp-block-heading\" id=\"h-what-s-next-after-connecting-n8n-with-ollama\"><strong>What&rsquo;s next after connecting n8n with Ollama?<\/strong><\/h2><p>As <a href=\"\/ng\/tutorials\/automation-trends\">automation trends<\/a> continue to evolve, implementing an automatic data processing system will help you stay ahead of the competition. Coupled with AI, you can create a system that takes your project development and management to the next level.<\/p><p>Integrating Ollama into your n8n workflow brings AI-powered automation beyond the built-in node&rsquo;s capabilities &ndash; and Ollama&rsquo;s compatibility with various LLMs enables you to choose and tailor different AI models to best suit your needs.<\/p><p>Understanding how to connect Ollama to n8n is only the first step in implementing AI-powered automation in your project. Given the sheer number of possible use cases, the next step is to experiment and develop a workflow that best fits your goals.<\/p><p>If it&rsquo;s your first time working with n8n or Ollama, Hostinger is the place to start. 
Aside from feature-packed VPS plans, we have a comprehensive <a href=\"\/ng\/tutorials\/vps\/automation\">catalog of tutorials about n8n<\/a> that will help you start your automation journey.<\/p><p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Integrating n8n with Ollama enables you to harness various AI models into your automation workflow, allowing it to perform complex operations that would otherwise be impossible. However, the process can be tricky because you need to configure various settings on both tools in order for them to work seamlessly. As long as you already have [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"\/ng\/tutorials\/n8n-ollama-integration\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":337,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"How to integrate n8n with Ollama?","rank_math_description":"How to integrate n8n with Ollama: 1. Add the Ollama chat model node, 2. Choose and adjust your AI model, 3. 
Configure prompt settings + more.","rank_math_focus_keyword":"n8n ollama","footnotes":""},"categories":[22644],"tags":[],"class_list":["post-131602","post","type-post","status-publish","format-standard","hentry","category-vps"],"hreflangs":[{"locale":"en-US","link":"https:\/\/www.hostinger.com\/tutorials\/n8n-ollama-integration","default":0},{"locale":"fr-FR","link":"https:\/\/www.hostinger.com\/fr\/tutoriels\/n8n-ollama","default":0},{"locale":"en-CA","link":"https:\/\/www.hostinger.com\/ca\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-UK","link":"https:\/\/www.hostinger.com\/uk\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-PH","link":"https:\/\/www.hostinger.com\/ph\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-MY","link":"https:\/\/www.hostinger.com\/my\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-IN","link":"https:\/\/www.hostinger.com\/in\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-AU","link":"https:\/\/www.hostinger.com\/au\/tutorials\/n8n-ollama-integration","default":0},{"locale":"en-NG","link":"https:\/\/www.hostinger.com\/ng\/tutorials\/n8n-ollama-integration","default":0}],"_links":{"self":[{"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/posts\/131602","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/users\/337"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/comments?post=131602"}],"version-history":[{"count":8,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/posts\/131602\/revisions"}],"predecessor-version":[{"id":143128,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/posts\/131602\/revis
ions\/143128"}],"wp:attachment":[{"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/media?parent=131602"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/categories?post=131602"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hostinger.com\/ng\/tutorials\/wp-json\/wp\/v2\/tags?post=131602"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}