Agent
Runs your AI agent with prompts and functions to generate a response.
WotNot's "Agent" action block is its most advanced feature for building smart, conversational AI agents. This guide will show you how to use it to manage complex interactions, pull from a knowledge base, follow custom logic, and run functions—all in one streamlined step.
The Agent step provides a comprehensive solution for creating AI agents that can:
Intelligently respond to user queries
Search the knowledge base for relevant information
Follow specific conversation paths based on user intent
Execute functions to interact with external services
When using an "Agent" step, you need to configure the following:
Effective prompting is key to building a high-performing AI agent. The instructions you give act as the agent’s “brain,” shaping how it engages with users across different situations.
Here’s what makes a strong prompt:
Clear Directives: Define exactly what the agent’s role is and how it should behave in various scenarios.
Personality Design: Set the tone, formality, and communication style your agent should follow.
Knowledge Scope: Outline what the agent should know and when to refer to external sources.
Context Handling: Provide guidance for sensitive topics and when to escalate to a human.
Good prompting helps your agent stay on track—knowing what to prioritize, when to tap into the knowledge base, and how to respond consistently. Investing time in clear, detailed instructions not only boosts performance but also simplifies your flow.
Example:
Instead of saying: “This is a support agent.”
Try: “You’re a helpful, friendly e-commerce support agent who assists with order tracking, returns, and account questions. Keep a positive tone, offer suggestions when possible, and be specific in your responses.”
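For illustration, here is how a fuller prompt might combine all four elements above. The agent name, company, and policies are hypothetical:

```
You are Maya, a friendly and professional support agent for Acme Store, an
online retailer.

Role: Help customers with order tracking, returns, and account questions.

Tone: Warm, positive, and concise. Offer concrete next steps whenever possible.

Knowledge: Answer from the Acme Store knowledge base. If the answer is not in
the knowledge base, say so rather than guessing.

Escalation: For billing disputes, legal questions, or frustrated customers,
offer to transfer the conversation to a human agent.
```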
Functions allow your agent to connect with external services and retrieve or update data.
Choose from the available functions in the dropdown menu, or create a new one by navigating to AI Studio > Functions.
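Functions are configured entirely in the UI, so no code is required. Purely for illustration, the sketch below expresses the same concept as an OpenAI-style function schema; the `get_order_status` name, parameters, and description are hypothetical:

```python
# Illustrative sketch only: WotNot functions are configured in
# AI Studio > Functions, not in code. This shows the underlying concept
# as an OpenAI-style function schema with hypothetical names.
get_order_status = {
    "name": "get_order_status",
    "description": "Look up the current status of a customer's order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The customer's order number, e.g. 'ORD-12345'.",
            }
        },
        "required": ["order_id"],
    },
}
```

When the agent decides the user is asking about an order, it calls the function with the collected order ID and uses the result in its reply.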
Paths help your agent manage targeted conversation flows and respond with the right actions at the right time.
Here’s what Paths enable:
Automatic Routing: The agent identifies user intent and directs the conversation to the relevant path.
Data Gathering: You can specify which variables must be collected before a path is triggered.
Natural Flow: Paths fit smoothly into conversations without needing exact commands from the user.
To create a Path:
Open the Agent step editor.
Click “Add Path” and give it a clear, descriptive name.
Write a brief description outlining when this path should activate.
(Optional) Add required variables that need to be collected first.
Link the path to the next steps in your chat flow.
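Although paths are created in the editor rather than in code, it can help to see the five steps above as data. A minimal sketch, with hypothetical field names standing in for the editor's inputs:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: paths are configured in the Agent step editor.
# Field names here are hypothetical stand-ins for the editor's inputs.
@dataclass
class Path:
    name: str                    # clear, descriptive name (step 2)
    description: str             # when the path should activate (step 3)
    required_variables: list[str] = field(default_factory=list)  # step 4
    next_step: str = ""          # the step the path links to (step 5)

demo_path = Path(
    name="Schedule Product Demo",
    description="Trigger when the user wants to book a product demo or meeting.",
    required_variables=["Name", "Email"],
    next_step="demo_scheduling_flow",
)
```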
Path Examples:
Human Handoff Path
Path Name: Transfer to Human Agent
Description: Trigger this path when the user explicitly requests to speak with a real person. Phrases like “I want to talk to a human,” “Can I speak to an agent?” or “I need help from a real person” should activate this path.
Required Variable: None
LLM Description: This path allows the agent to hand the conversation over to a human agent. Ensure the system recognizes phrases requesting human interaction and promptly transfers the conversation.
Book Demo Path
Path Name: Schedule Product Demo
Description: Trigger this path when the user expresses interest in booking a product demo or meeting. Look for phrases such as “I want to book a demo,” “Can I schedule a meeting to see the product?” or “How can I try the product?”
Required Variables:
Name
Email
Variable Descriptions:
Name: Collect the user’s full name so the meeting can be scheduled properly.
Email: Capture the user’s email to send demo details and confirmation. Ensure the email is in a valid format (e.g., contains @ symbol and proper domain).
This path ensures that when a user wants to book a demo, all necessary details like their name and email are captured before proceeding with the demo scheduling.
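The email format check mentioned above can be expressed as a simple validation rule. A minimal sketch in Python, assuming a basic pattern (production validation, or a confirmation email, may be stricter):

```python
import re

# Basic format check matching the guidance above: the address must contain
# an @ symbol and a domain with a top-level part. Intentionally simple.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def is_valid_email(email: str) -> bool:
    return bool(EMAIL_PATTERN.match(email))

print(is_valid_email("jane@example.com"))  # True
print(is_valid_email("jane@example"))      # False: missing top-level domain
```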
The LLM Settings section allows you to customize how the language model behaves:
LLM Model: Choose which language model (e.g., GPT-3, GPT-4) you want to use for generating responses. Each model has different capabilities, with newer models generally offering more advanced features.
Max Tokens: This setting controls the maximum length of the response. A higher token limit allows the model to generate longer responses, while a lower limit results in shorter replies.
Temperature: This adjusts the randomness of the model’s responses. A higher temperature (e.g., 0.7) makes responses more creative and varied, while a lower temperature (e.g., 0.2) makes them more deterministic and focused.
These settings help fine-tune how the AI interacts with users based on your desired response style and length.
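WotNot applies these settings for you behind the scenes. Purely for illustration, here is how the same three controls map onto a direct call to an LLM API, using the OpenAI Python SDK as an example:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative only: WotNot manages this call for you. The three settings
# described above map directly onto these parameters.
response = client.chat.completions.create(
    model="gpt-4",       # LLM Model: which model generates the response
    max_tokens=300,      # Max Tokens: upper bound on response length
    temperature=0.2,     # Temperature: lower = focused, higher = more varied
    messages=[
        {"role": "system", "content": "You are a friendly e-commerce support agent."},
        {"role": "user", "content": "Where is my order?"},
    ],
)
print(response.choices[0].message.content)
```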
Related guides cover how to create a prompt, how to create a function, and the full list of supported AI models.