Agent

Runs your AI agent with prompts and functions to generate a response.

The Agent block allows you to create an AI-powered conversational step inside your flow. It uses Large Language Models (LLMs) and your knowledge sources to generate answers, while also supporting rules, paths, and functions.

When to use the Agent block

Use the Agent block when you want the chatbot to:

  • Answer questions dynamically using your knowledge base

  • Connect with external APIs or functions for real-time data

  • Branch into different paths depending on user intent

  • Hand over to a human agent if needed

If your use case only requires simple, predefined answers, use a Message block instead.

Creating an AI Agent

  • Add the AI agent block

  • Click 'Create AI Agent' in the AI agent dropdown

Configure the core components of the AI agent:

  1. Prompt — instructions the AI follows.

  2. Functions — APIs/external logic

  3. Paths — branching logic

  4. LLM settings — temperature, model selection, etc.

Prompt

The prompt tells the AI how to behave and what style of answers to give.

  • Keep instructions short, clear, and specific

  • Include tone (e.g., professional, friendly, concise)

  • Add context (“You are a banking support assistant helping customers with credit card queries”)

Example

  • ❌ Poor prompt: Answer user questions

  • ✅ Good prompt: You are a support assistant for ACME Bank. Answer in a professional, polite tone. If unsure, guide the user to contact support.

Prompt section when creating an Agent

Here’s what makes a strong prompt:

  • Clear Directives: Define exactly what the agent’s role is and how it should behave in various scenarios.

  • Personality Design: Set the tone, formality, and communication style your agent should follow.

  • Knowledge Scope: Outline what the agent should know and when to refer to external sources.

  • Context Handling: Provide guidance for sensitive topics and when to escalate to a human.

You can learn to create a prompt here.
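
Putting the elements above together, a complete prompt might look like this sketch (the bank and its policies are illustrative):

```
You are a support assistant for ACME Bank.

Role: Answer customer questions about credit cards only.
Tone: Professional, polite, and concise.
Knowledge: Use the connected knowledge base; do not invent card features,
fees, or rates. If the answer is not in the knowledge base, say so and
guide the user to contact support.
Escalation: For account-specific or sensitive requests (lost card, fraud,
disputes), offer to transfer the user to a human agent.
```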

Functions

Functions allow the Agent to call external APIs or run logic before answering.

  • Example: Check order status from an API, then respond with the delivery date.

  • Use functions when the answer requires live data instead of static knowledge

Learn about creating a function here.

Function section when creating an Agent
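
As a rough sketch of what such a function wraps (the endpoint and field names below are hypothetical, not part of the product), an order-status lookup is typically a small API call:

```python
import requests

def get_order_status(order_id: str) -> dict:
    """Hypothetical order-status function: fetch live data the Agent
    can use in its answer (status and delivery date)."""
    resp = requests.get(
        f"https://api.example.com/orders/{order_id}",  # placeholder endpoint
        timeout=5,
    )
    resp.raise_for_status()  # surface API errors instead of answering blindly
    data = resp.json()
    return {
        "status": data.get("status"),                # e.g. "shipped"
        "delivery_date": data.get("delivery_date"),  # e.g. "2025-07-01"
    }
```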

Path

Paths let you branch the flow based on what the user says.

  • Define triggers (keywords or intent) that send users down specific paths

  • Example: If user says “talk to human” → route to Live Chat

Path section when creating an Agent

Path Examples:

Human Handoff Path

  • Path Name: Transfer to Human Agent

  • Description: Trigger this path when the user explicitly requests to speak with a real person. Phrases like “I want to talk to a human,” “Can I speak to an agent?” or “I need help from a real person” should activate this path.

  • Required Variable: None

  • LLM Description: This path allows the agent to hand over the conversation to a human agent. Ensure the system recognizes phrases requesting human interaction and promptly transfers the conversation.

Book Demo Path

  • Path Name: Schedule Product Demo

  • Description: Trigger this path when the user expresses interest in booking a product demo or meeting. Look for phrases such as “I want to book a demo,” “Can I schedule a meeting to see the product?” or “How can I try the product?”

  • Required Variables:

    • Name

    • Email

  • Variable Descriptions:

    • Name: Collect the user’s full name so the meeting can be scheduled properly.

    • Email: Capture the user’s email to send demo details and confirmation. Ensure the email is in a valid format (e.g., contains @ symbol and proper domain).

This path ensures that when a user wants to book a demo, all necessary details like their name and email are captured before proceeding with the demo scheduling.
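
Expressed as data, the Book Demo path maps onto a structure like the following sketch (the field names are illustrative; you configure their equivalents in the Path section of the builder):

```python
book_demo_path = {
    "name": "Schedule Product Demo",
    "description": (
        "Trigger when the user wants to book a product demo or meeting, "
        "e.g. 'I want to book a demo' or 'Can I schedule a meeting?'"
    ),
    "required_variables": {
        "Name": "Collect the user's full name so the meeting can be scheduled.",
        "Email": (
            "Capture the user's email to send demo details; it must be in a "
            "valid format (contains an @ symbol and a proper domain)."
        ),
    },
}
```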

LLM settings

The LLM Settings section allows you to customize how the language model behaves:

  • LLM Model — Choose which language model (e.g., GPT-3, GPT-4) you want to use for generating responses. Supported providers:

    • OpenAI

    • Anthropic

    • Google Gemini

    • Custom Models (Available on Enterprise plan)

  • Max Tokens — This setting controls the maximum length of the response. A higher token limit allows the model to generate longer responses, while a lower limit results in shorter replies.

  • Temperature — This adjusts the randomness of the model’s responses.

    • Lower = safer, more predictable answers

    • Higher = more creative, varied answers

These settings help fine-tune how the AI interacts with users based on your desired response style and length.

You can view all the supported AI models here.

LLM settings when creating an agent
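
These settings correspond to standard LLM API parameters. The sketch below uses the OpenAI Python SDK purely to illustrate what each setting controls; the Agent block configures them for you, and the model name shown is just an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",     # LLM Model: which model generates the answer
    max_tokens=300,     # Max Tokens: caps the length of the response
    temperature=0.2,    # Temperature: lower = predictable, higher = creative
    messages=[
        {"role": "system", "content": "You are a support assistant for ACME Bank."},
        {"role": "user", "content": "What credit cards do you offer?"},
    ],
)
print(response.choices[0].message.content)
```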

Creating a knowledge base

To ensure your AI agent answers only the questions you want it to, create a knowledge base.

  • Turn on the Use data from knowledge base toggle

  • Click Create knowledge base from the knowledge base dropdown

  • Add data sources to the knowledge base

Chunks

In Retrieval-Augmented Generation (RAG), long documents are too large to pass directly to an AI model, so they are broken down into smaller pieces of text called chunks.

Each chunk is just a section of the original content (like a paragraph or a few sentences) that can be stored and searched easily.

When a user asks a question, the system finds the most relevant chunks instead of searching through the entire document. These chunks are then passed to the AI model, so it has the right context to generate an accurate answer.
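
A minimal sketch of this chunk-and-retrieve idea, using a toy word-overlap score (real RAG pipelines rank chunks with vector embeddings instead):

```python
def split_into_chunks(text: str, max_words: int = 150) -> list[str]:
    """Break a long document into fixed-size pieces of text."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

def top_chunks(question: str, chunks: list[str], k: int = 5) -> list[str]:
    """Keep the k chunks most relevant to the question; these are
    passed to the LLM as context for the answer."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]
```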

NOTE:

  • Sending more chunks to the LLM generally results in better answers.

  • The recommended chunk count is 5.


Other configurations

These are the toggles/settings that customize how the Agent interacts with users.

Answer source

When the AI generates a response, it often relies on external information—like documents or web pages. To make this process transparent, we show the Answer Source.

The Answer Source is a link to the original webpage (or document) that was used to generate the response. This way, users can:

  • Verify the accuracy of the information

  • Read the full content for more details

  • Trust that the answer isn’t coming from a “black box,” but from a clear, traceable source
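
Conceptually, a response with Answer Source enabled carries its citation alongside the text, roughly like this sketch (the field names and URL are illustrative):

```python
answer = {
    "text": "You can block a lost card from the mobile app under Cards > Block.",
    "sources": [
        "https://docs.example.com/cards/blocking",  # page the answer was drawn from
    ],
}
```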

Execute only once

By default, the AI Agent block runs in a loop, meaning the flow keeps executing within the same block until an exit path condition is met.

If you’d like the AI Agent to run just once and then automatically continue to the next step in the flow, enable the Execute Only Once setting.

Collect feedback

Shows 👍 / 👎 buttons on agent answers so users can rate responses.

This feedback helps admins identify where answers fall short, so they can:

  • Add more data sources to the knowledge base

  • Update or refine existing content

In short, Collect Feedback gives admins a direct way to learn from users and continuously enhance the bot’s performance.

Show results as

By default, the AI Agent displays its response as plain text in the chat.

If you turn on any of the options—Buttons, Carousel, or File—the response will also include those formats for a richer experience.

Example:

A user asks: “Show me the available credit card options.”

  • Text (default):

    “Here are the available credit cards: Gold, Platinum, Titanium.”

  • Buttons (if enabled):

    Along with the text, users will see buttons like Gold, Platinum, Titanium that they can click.

  • Carousel (if enabled):

    Each option is shown as a card with an image, a short description, and a “Select” button that users can scroll through.

  • File (if enabled):

    The agent attaches a downloadable PDF with full details of all credit cards.
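
As a rough sketch, the same answer with rich formats enabled could be represented like this (the structure is illustrative, not the platform's exact payload):

```python
response = {
    "text": "Here are the available credit cards: Gold, Platinum, Titanium.",
    "buttons": ["Gold", "Platinum", "Titanium"],  # rendered if Buttons is on
    "carousel": [
        {
            "title": "Gold",
            "description": "Entry-level rewards card",
            "image": "https://example.com/gold.png",  # placeholder image URL
            "button": "Select",
        },
        # ...one card per option
    ],
    "file": "credit-cards.pdf",  # attached if File is on
}
```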


Best Practices & Tips

  • Keep prompts short and action-oriented

  • Avoid overlapping path triggers (use clear keywords/intents)

  • Test with real queries to validate output

  • If using functions, confirm the API is reliable and handles errors gracefully

  • Use “Exit paths” to prevent loops

Common Pitfalls

  • Forgetting to capture variables before using them in paths/functions

  • Setting temperature too high, causing irrelevant answers

  • Not linking fallback paths for when AI cannot find a match

Checklist before going live

  • Prompt is clear and instructive

  • Paths are defined and tested

  • Functions return correct data

  • Human handoff works smoothly

  • Edge cases tested with sample queries
