GPT Agent
GPT Agents use OpenAI-compatible Large Language Models (LLMs) to power multi-turn interactions with users.
Agent Methodology
The GPT Agent receives a user message and decides whether to produce a tool call or an AI response.
- The agent evaluates the available tools against the user message to decide what to do next.
- More than one tool can be triggered in a single turn.
- For example, if a user provides their first and last name in the same user message (e.g., “Hi, my name is Jane Doe and I wanted to…”), that might trigger both the get_first_name and get_last_name tools.
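This flow maps naturally onto OpenAI-compatible function calling. The sketch below is illustrative only: the endpoint, model name, and the get_first_name/get_last_name tool definitions are assumptions used to show how a single user message can produce more than one tool call; it is not the platform's internal implementation.

```python
# Minimal sketch of the tool-call decision, using the openai Python client
# against an OpenAI-compatible endpoint. Endpoint, key, model, and tool
# definitions below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY")  # hypothetical endpoint

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_first_name",
            "description": "Capture the caller's first name from the conversation.",
            "parameters": {
                "type": "object",
                "properties": {"first_name": {"type": "string"}},
                "required": ["first_name"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_last_name",
            "description": "Capture the caller's last name from the conversation.",
            "parameters": {
                "type": "object",
                "properties": {"last_name": {"type": "string"}},
                "required": ["last_name"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any OpenAI-compatible model
    messages=[
        {"role": "system", "content": "You are a scheduling assistant."},
        {"role": "user", "content": "Hi, my name is Jane Doe and I wanted to book an appointment."},
    ],
    tools=tools,
)

# A single user turn can yield multiple tool calls, e.g. both name tools here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```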
Channel Compatibility
The GPT Agent is channel-agnostic (e.g., compatible with inbound or outbound voice, SMS, etc.).
Use Cases
GPT Agents are most commonly used for intent detection (“How may I help you?”) and for the subsequent tasks in a conversation. Popular uses include:
- Scheduling appointments
- Paying a bill
- Ordering items from a menu
- Collecting address information
Configurations
When you build a GPT Agent, the following sections appear under Configuration and can be customized for your use case:
- System Prompt
- User Prompt
- Tools
- Knowledge
  Note: Does not impact the performance of your GPT Agent.
- Parameters
Tips and Best Practices
- Add flow arguments to customize your GPT Agent’s responses, for example, business name, address, etc.
  - A flow argument (e.g., {arg_agent_name}) is dynamically inserted at run time. For example, an agent could respond to the user message “hi” with “Hi, my name is Nick, how can I help you today?”, where “Nick” is our agent name flow argument included in the System Prompt. See the first sketch after this list.
    Note: You must use the {arg_x} format for your flow argument. A format such as $arg_x would not work as expected when configured in your prompt and would not be replaced at run time.
- We recommend testing, testing, and more testing for your GPT Agents. Your first attempt may not be the perfect solution, but continuously making tweaks based on your test results should help make your agent as effective as possible.
  - Use the Tasks page to create and manage an evaluation framework, then measure the performance of your agent.
- Make sure your tools are properly defined so the agent can identify and fire them as expected.
  - We recommend verifying that your tool names, descriptions, and properties are defined as expected.
  - Test out various scenarios to ensure your tools behave as expected; a rough sketch of this kind of scenario check follows this list.
  - If you continue having issues with tools not firing when expected, firing when not expected, or more than one tool firing unexpectedly, we recommend working with a teammate to peer review your agent.
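As a rough illustration of the {arg_x} substitution described in the flow-argument tip above, the sketch below replaces placeholders in a System Prompt before it reaches the model. The argument names, values, and substitution code are assumptions chosen for illustration; the platform performs this replacement for you at run time.

```python
import re

# Hypothetical flow-argument values; names follow the required {arg_x} format.
flow_args = {"arg_agent_name": "Nick", "arg_business_name": "Acme Dental"}

system_prompt = (
    "You are {arg_agent_name}, a virtual assistant for {arg_business_name}. "
    "Greet callers by introducing yourself."
)

def render_prompt(template: str, args: dict) -> str:
    """Replace each {arg_x} placeholder with its runtime value."""
    return re.sub(r"\{(arg_\w+)\}", lambda m: args.get(m.group(1), m.group(0)), template)

print(render_prompt(system_prompt, flow_args))
# -> "You are Nick, a virtual assistant for Acme Dental. Greet callers by introducing yourself."

# A $arg_agent_name-style placeholder would be left untouched by the {arg_x}
# replacement, which is why the documented format matters.
```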
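For the tool tips, here is a minimal sketch of scenario testing outside the platform, reusing the `client` and `tools` from the first sketch above. The test utterances and expected tool sets are assumptions; the goal is simply to check which tools fire for each scenario.

```python
# Rough scenario checks for tool firing, reusing `client` and `tools` from the
# earlier sketch. Utterances and expectations below are illustrative only.
test_cases = [
    ("Hi, my name is Jane Doe", {"get_first_name", "get_last_name"}),
    ("I'd like to pay my bill", set()),  # expect neither name tool to fire
]

for utterance, expected in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a scheduling assistant."},
            {"role": "user", "content": utterance},
        ],
        tools=tools,
    )
    fired = {c.function.name for c in (response.choices[0].message.tool_calls or [])}
    status = "OK" if fired == expected else "CHECK"
    print(f"{status}: '{utterance}' fired {fired or 'no tools'}, expected {expected or 'no tools'}")
```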