
🤫 Effortlessly Switch Between AI Models Like GPT-4, Claude & More – No Code Required

Unlock the hidden power of n8n to dynamically route AI models like a pro - save money, boost efficiency, and automate like never before!

So, you’ve got a fancy AI model (or ten), and you’re wondering how to make it work with n8n? Maybe you want your AI chatbot to switch between different models on the fly, or you’re just curious about how to build a supercharged AI agent. Either way, you’re in the right place!

Today, we’re going to build an LLM Router Agent in n8n that lets you use any model dynamically - GPT-4, Claude, Grok, or even a fine-tuned custom model. No coding required! Let’s get started. 🚀

Step 1: What’s an LLM Router Agent (and Why Do You Need One)?

Before we start clicking around in n8n, let’s answer the big question: What is an LLM Router Agent?

Think of it like a smart AI traffic controller. Instead of sending every query to a single model, an LLM Router:

  • Chooses the best AI model based on the request (e.g., GPT-4 for creative tasks, Claude for reasoning, or Mistral for speed).

  • Saves money by using cheaper models for simple queries and premium ones for complex tasks.

  • Enhances flexibility, allowing you to integrate multiple AI providers effortlessly.

Imagine asking, “Summarize this article”, and the system decides to use GPT-4-Turbo because it’s great at summarization. Then, when you ask, “Give me a Python script”, it switches to a model specialized in coding. Neat, right? 😎
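To make the idea concrete, here is a toy Python sketch of that routing decision. The keywords and model IDs are illustrative assumptions only - in the real workflow we're about to build, an LLM makes this choice, not hard-coded rules:

```python
def route_query(query: str) -> str:
    """Toy keyword router. In the real n8n workflow an LLM makes this
    decision; the keywords and model IDs here are illustrative only."""
    q = query.lower()
    if "summarize" in q or "summary" in q:
        return "openai/gpt-4-turbo"            # strong at summarization
    if "code" in q or "script" in q:
        return "anthropic/claude-3.5-sonnet"   # strong at coding
    return "mistralai/mistral-7b-instruct"     # cheap default for simple queries
```

Swap in whatever keywords and models fit your use case - the point is simply that different queries land on different models.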

Step 2: Setting Up n8n and Creating a Workflow

First, open up n8n and create a new workflow. This will serve as the brain of your AI agent.

  • Start a new workflow in n8n.

  • Add a Chat Trigger node – This captures the user’s message. Once it’s connected to an AI Agent or any LLM node, you can hit Chat in n8n and test the workflow just as if you were using ChatGPT.

  • This will serve as the entry point for user input.


Step 3: Add an AI Agent (Chat Model) Node

Once you’ve set up a Trigger, you have limitless possibilities for what comes next - you can integrate with other apps, process data, or even run custom code.


But, as I said at the beginning, today we’ll integrate multiple AI models into a single workflow. This is where an LLM comes in.

  • Start by adding an AI Chat Model node and connect it to the Chat Trigger so it can process user input. There are three key connections to understand:

    • Chat model: Acts as the AI Agent’s brain, handling the conversation.

    • Memory: Enables the AI to retain context from past interactions, making responses more intelligent.

    • Tool (optional): Functions as add-ons, allowing the AI to access additional context or resources when needed.

  • Next, generate API credentials from your selected AI provider - here, we’ll use the OpenRouter Chat Model. OpenRouter lets you connect to many different models through a single account.

  • Double-click on the OpenRouter Chat Model node and link your OpenRouter account by adding an API key, which you can obtain from OpenRouter’s website.

  • Paste the API key into n8n’s credential configuration.

  • Now you can access almost any model on OpenRouter. You can view all the models on OpenRouter’s homepage.


  • In the model settings, you can choose which model you want to use. I’ll choose openai/o3-mini as an example because it’s cost-effective and fast.

  • You can also rename a node by right-clicking it and selecting Rename.

  • Next is adding memory - as noted above, memory enables the AI to retain context from past interactions. Here, all you need is the Simple Memory node.

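Under the hood, the OpenRouter Chat Model node talks to OpenRouter’s OpenAI-compatible chat-completions endpoint using your API key. For the curious, here is a rough Python sketch of that request - n8n handles all of this for you, so this is purely to show what the node is doing:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenRouter chat-completions request (OpenAI-compatible schema)."""
    payload = {
        "model": model,  # e.g. "openai/o3-mini" - any model ID from OpenRouter's catalog
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires a real key and network access):
# response = urllib.request.urlopen(build_request(my_key, "openai/o3-mini", "Hi"))
```

Because every OpenRouter model sits behind this one endpoint, switching models is just a matter of changing the `model` string - which is exactly what our router will exploit.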

Learn How to Make AI Work For You!

Transform your AI skills with the AI Fire Academy Premium Plan - FREE for 14 days! Gain instant access to 500+ AI workflows, advanced tutorials, exclusive case studies, and unbeatable discounts. No risk, cancel anytime.

Start Your Free Trial Today >>

Step 4: Connecting Multiple AI Models

Now comes the fun part - integrating different AI models! Here’s how:

  1. Add multiple AI Chat Model nodes – Each one will represent a different AI provider.

  • By adding multiple chat models, you can leverage each model’s strengths and weaknesses and let the agent decide which model to use based on your query.

  • Now open the AI Agent node’s settings, click Add Option, and choose System Message.

  • Then open Expression mode and enter your prompt.

  • I’ll share my basic prompt template below, explaining everything as I go. You can adjust the prompt based on how many models you add. In this article, I’ll use Perplexity (live web search), ChatGPT (advanced reasoning), and Claude (coding) as examples.

(To get a model ID for the Model part, open OpenRouter, search for your model, copy the ID shown below its name, and paste it into your prompt.)

You are a routing agent.

Your job is to take in user queries and decide which model best fits each use case the user may have. You will have 3 models you can select from for the user query:

1.) "perplexity/sonar"

2.) "openai/o3-mini-high"

3.) "anthropic/claude-3.5-sonnet"

Each model has its strengths and can give better answers for specific user questions and commands. Let's dive into the strengths of each model so you can decide which is best for the user question:

Model: perplexity/sonar

Strengths:
- Built-in search.
- Features citations and the ability to customize sources.
- Can search the web for live data.

Model: openai/o3-mini-high

Strengths:
- o3-mini is a cost-efficient language model optimized for advanced reasoning tasks.
- Excels in science and mathematics.
- Best when careful, well-thought-out responses are needed for problems with multiple variables or connections.

Model: anthropic/claude-3.5-sonnet

Strengths:
- Coding: Scores ~49% on SWE-Bench Verified, higher than the previous best score, without any fancy prompt scaffolding.
- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights.
- Visual processing: Excels at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond the text alone.
- Agentic tasks: Exceptional tool use, making it great at agentic tasks (i.e., complex, multi-step problem-solving tasks that require engaging with other systems).

You are to pass the user query along and decide on a model, using a structured JSON object format like this:

JSON Object Structure:
{
    "userQuery": "user query here",
    "model": "selected model here"
}

Example 1:
{
    "userQuery": "Search the web and find cool dog breeds.",
    "model": "perplexity/sonar"
}

Example 2:
{
    "userQuery": "Create me a business plan in order to create the next Google.",
    "model": "openai/o3-mini-high"
}

Example 3:
{
    "userQuery": "Give code to create the game Tetris with Python",
    "model": "anthropic/claude-3.5-sonnet"
}
  • Don’t forget to turn on Require Specific Output Format.

  • After that, go back to your prompt and copy the JSON Object Structure. Then click Output Parser in the AI Agent node’s settings and select Structured Output Parser.

  • Now paste it into the JSON Example field.


  • The last thing in this step is to test your project.


*Warning: I highly recommend keeping it under 10 models. If you add more, it becomes hard for the AI Agent to reliably decide which model is best for your query.
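If you ever want to sanity-check the router outside n8n, the Structured Output Parser’s job can be sketched in a few lines of Python: parse the JSON and verify it names one of the allowed models (the model list is copied from the prompt above; this is an illustration, not n8n’s actual implementation):

```python
import json

# Allowed model IDs, matching the three models listed in the routing prompt
ALLOWED_MODELS = {
    "perplexity/sonar",
    "openai/o3-mini-high",
    "anthropic/claude-3.5-sonnet",
}

def parse_router_output(raw: str) -> dict:
    """Parse the router's JSON output and reject malformed or unknown models."""
    data = json.loads(raw)
    if set(data) != {"userQuery", "model"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if data["model"] not in ALLOWED_MODELS:
        raise ValueError(f"unknown model: {data['model']}")
    return data
```

This kind of validation is exactly why the Structured Output Parser matters: without it, a free-text answer from the router would be unusable downstream.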

Step 5: Setting Up the Routing Logic

Once the router has selected a model, we still need to actually process the query, right?

  • First, add a new AI Agent node to receive the query, and map a chat model to it.

  • Instead of using Connected Chat Trigger Node, switch the source to Define below so we control where the prompt comes from.

  • In Text, switch to Expression mode and map the userQuery field from the router’s output.

  • Now map another OpenRouter chat model. You can reuse the exact same OpenRouter account, unless you want a separate one to track usage limits.

  • Change the model based on what the first AI Agent decided for the query: all we need to do is map the model field into the OpenRouter node’s Model expression.

  • Next, add a Simple Memory node, switch it to Define below, and drag sessionId into the Key field.

  • Now, test it out.


Your AI agent is now intelligent enough to pick the right model for every query! 🎉
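Conceptually, the Simple Memory node keyed by sessionId behaves like a per-session message store. A minimal Python stand-in (an illustration of the idea, not n8n’s actual implementation) looks like this:

```python
from collections import defaultdict

class SimpleMemory:
    """Minimal stand-in for n8n's Simple Memory node: each sessionId
    gets its own running message history, so conversations don't mix."""

    def __init__(self):
        self._sessions = defaultdict(list)

    def add(self, session_id: str, role: str, content: str) -> None:
        """Append one message to the given session's history."""
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list:
        """Return the full history for a session (empty if unseen)."""
        return list(self._sessions[session_id])
```

Keying memory on sessionId is what lets the second AI Agent stay coherent across turns even though each query may be answered by a different underlying model.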

🌟 Final Thoughts: You Just Built a Multi-Model AI Agent!

Congratulations! 🎉 You’ve just built a dynamic AI router that intelligently picks the best LLM for any situation. Here’s what your agent can now do:

✅ Switch between AI models on demand 

✅ Optimize costs by using the right model for the job 

✅ Handle a variety of tasks more effectively 

✅ Work without a single line of code!

The best part? You can keep adding models as new AI systems come out. Now go forth and automate - your AI-powered future awaits! 🚀

If you are interested in other topics and how AI is transforming different aspects of our lives, or even in making money using AI with more detailed, step-by-step guidance, you can find our other articles here:

