Answer AI

Uses an LLM-based AI model to generate an answer for the question asked.

The Answer AI action block uses LLM models to generate an answer to the user's question, using the knowledge base as its reference.

How Answer AI works:

  1. User asks a question

  2. AI generates the answer from a knowledge base
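The two-step flow above can be sketched as follows. This is a toy illustration: `answer` is a hypothetical function, and simple keyword matching stands in for real knowledge-base retrieval and LLM generation.

```python
def answer(question, knowledge_base):
    # 1. Fetch passages relevant to the question (keyword match stands in
    #    for real knowledge-base retrieval).
    keywords = [w.strip("?.,").lower() for w in question.split() if len(w) > 3]
    relevant = [doc for doc in knowledge_base
                if any(k in doc.lower() for k in keywords)]
    # 2. An LLM would generate an answer from these passages; here the
    #    retrieved context itself is returned as a placeholder.
    return " ".join(relevant) or "I don't know."

kb = ["WotNot offers a free trial.", "Pricing starts at $99/month."]
print(answer("What is the pricing?", kb))
```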

Step 1: Define the question

This message will prompt the user to ask their question.

For example, your message could be,

👋 Hey! I'm an AI assistant, how can I help you?

Learn more about prompts

Step 2: Choose a knowledge base

Choose the knowledge base from which AI will generate an answer.

Learn more

Step 3: Configure your AI for accurate answers

LLM models require guidance to generate relevant, accurate answers. You can use the following tools to guide your AI:

Instructions/Prompt

A prompt is an instruction that tells the LLM what to keep in mind and follow while generating an answer.

The clearer and more concise your instructions, the more accurate the AI's answers will be.

Things to write in your prompt:

  • Objective - What the AI is meant to accomplish.

  • Output format - Typically HTML or Markdown.

  • Writing style - How the answers should be written.

  • Don'ts - Clear instructions on what to avoid.

  • Examples - Sample question-and-answer pairs.
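Putting the parts above together, a prompt might look like the following. The wording and product details are illustrative, not a required template.

```python
# A hypothetical system prompt covering each part listed above.
PROMPT = """\
Objective: Answer customer questions about WotNot using only the knowledge base.
Output format: Markdown, with short paragraphs and bullet points.
Writing style: Friendly, concise, and professional.
Don'ts: Do not invent features; do not discuss competitors.
Example:
  Q: Does WotNot have a free trial?
  A: Yes, WotNot offers a free trial. You can sign up on the website.
"""
print(PROMPT)
```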

Some more advanced settings available are:

LLM model

Select the LLM model that will be used to parse your training data and generate an answer.

See the supported models list

Restrict answer size

The maximum length of the answer, defined in tokens.

Tokens can be thought of as pieces of words; roughly, 1 token ≈ 4 characters. Learn more
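The 4-characters-per-token rule of thumb gives a quick way to estimate how many tokens a piece of text will use (real tokenizers vary by model, so this is only an approximation):

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb from the docs: ~4 characters per token.
    return max(1, len(text) // 4)

print(estimate_tokens("Tokens can be thought of as pieces of words."))
```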

Temperature

Controls whether you get a varied response each time a question is asked or the same response every time.

Define a number between 0 and 1. Values nearer to 1 give more varied responses; values nearer to 0 give more consistent responses. By default, the value is 0.5.
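Under the hood, temperature rescales the model's probabilities before a word is sampled. The sketch below shows the standard temperature-scaled softmax to illustrate the effect; it is not WotNot's implementation.

```python
import math

def sample_weights(logits, temperature):
    # Temperature-scaled softmax: low temperature sharpens the distribution
    # (near-deterministic output), high temperature flattens it (more varied).
    # In practice, temperature 0 is treated as greedy/argmax to avoid
    # dividing by zero.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

print(sample_weights([2.0, 1.0, 0.1], 0.1))  # near one-hot: first option dominates
print(sample_weights([2.0, 1.0, 0.1], 1.0))  # flatter: more variety possible
```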

Chunks

When the AI is asked a question, it first fetches relevant content from the knowledge base, which is then sent to the LLM model to generate an answer.

The more relevant the fetched content, the higher the chance of an accurate response.

This number sets how many relevant chunks are fetched from the knowledge base.
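The chunk-fetching step can be pictured as a top-k selection over the knowledge base. In the toy sketch below, word overlap stands in for the semantic similarity scoring a real retrieval system would use:

```python
def top_chunks(question, chunks, k=3):
    # Score each chunk by word overlap with the question
    # (real retrieval uses embeddings; keyword overlap is a toy stand-in).
    def words(text):
        return {w.strip("?.,!$") for w in text.lower().split()}
    q = words(question)
    scored = sorted(chunks, key=lambda c: -len(q & words(c)))
    return scored[:k]

chunks = [
    "WotNot pricing starts at $99 per month.",
    "The dashboard shows chat analytics.",
    "Annual pricing plans include a discount.",
]
print(top_chunks("What are the pricing plans?", chunks, k=2))
```

Raising `k` sends more context to the LLM, which can improve accuracy at the cost of a larger request.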

Include chat history

Enabling this includes the entire chat history of the conversation when answering any question.

This helps when a user asks a follow-up question such as "What's its pricing?": with history included, the LLM knows the previous question was about WotNot, so "its pricing" means "WotNot's pricing".
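Conceptually, including chat history means prepending prior turns to the request sent to the LLM. The message format below is illustrative, not a specific provider's API:

```python
def build_messages(question, history, include_history=True):
    # Prepend prior turns so the LLM can resolve references like "its".
    messages = list(history) if include_history else []
    messages.append({"role": "user", "content": question})
    return messages

history = [
    {"role": "user", "content": "What is WotNot?"},
    {"role": "assistant", "content": "WotNot is a no-code chatbot builder."},
]
# With history on, "its pricing" can be resolved to WotNot's pricing.
print(build_messages("What's its pricing?", history))
```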

Collect feedback

For every AI-generated response, the user can submit feedback about the answer.

This helps you know how helpful the AI's answers are.
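Collected feedback can be summarized as a simple helpfulness rate, for example:

```python
def helpful_rate(feedback):
    # feedback: list of booleans, True = user marked the answer helpful.
    if not feedback:
        return 0.0
    return sum(feedback) / len(feedback)

print(helpful_rate([True, True, False, True]))  # 0.75
```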

Answer source

Displays the URL of the content source the answer was referenced from. This only works for knowledge bases built from a website URL.
