Agentic

From start to finish, set up an app using Generative Journey in agentic mode with NLX

What's an agentic app?

NLX's Generative Journey brings intelligence and flexibility to your conversational AI apps. When Generative Journey is set to Agentic mode, it uses an LLM to decide which tools to call (like custom or managed data requests, modalities, knowledge bases, or even other flows) based on what’s happening in the conversation.

It can gather details naturally from users (like a name, order number, or size) without requiring rigid, step-by-step mapping across several flows.

Imagine a customer says:

"I’d like to exchange my shirt for a larger size."

With agentic Generative Journey:

  • A flow is invoked that is designed to handle an order return

  • When the Generative Journey node is hit, the agent powering it understands the intent (exchange request) and gathers needed details like order number, shirt size, and preferred shipping option directly from the user in natural conversation

  • The agent then may use a custom data request to retrieve the order from your backend system

  • If available, the agent uses a modality (like a carousel of available sizes/colors) to let the customer pick a replacement

  • The agent may reference a knowledge base for return policies and automatically shares that info with the customer

  • Finally, the agent executes the exchange by triggering another flow or integration tool

In practice, this one node can replace dozens of traditional workflow nodes, giving you far more flexibility and making your flows cleaner, smarter, and easier to maintain.

Step 1: Set up tools

Tools are the building blocks the agent can call on to complete tasks. When assigned to the agent, tools give the agent both the capability and the context it needs to autonomously decide what to use and in what order to fulfill the user’s request.

  • Custom/managed integration: Connects your agent to real-time data from external systems or services

  • Modalities: Presents information in different formats (e.g., a carousel, a list, a map)

  • Knowledge base: Pulls from stored documentation or reference material

  • Flows: Triggers other workflows in your workspace

Begin by identifying the tools your agent must use, then complete their setup:

Custom data request

  1. Select Resources from the workspace menu > Choose Data requests > Add a new data request

  2. Enter a descriptive name (no spaces or special characters) > Select Save

  3. Provide a Response model schema and/or Request model schema (if entering a Request model schema, include a description for each property so the agent understands what information is needed for the payload)

  4. On the Settings tab of the data request, provide a description so the agent understands the purpose of the tool > Click Save
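As an illustration, a Request model schema for a hypothetical order-lookup data request might look like the fragment below (the property names are examples, not a required shape):

```json
{
  "type": "object",
  "properties": {
    "orderNumber": {
      "type": "string",
      "description": "The customer's order number, used to look up the order"
    },
    "email": {
      "type": "string",
      "description": "Email address on the order, used to verify the customer"
    }
  },
  "required": ["orderNumber"]
}
```

Because each property carries a description, the agent knows what information to gather from the user before calling the tool.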

Managed integration

  1. Select Resources from the workspace menu > Choose Integrations > Add a new integration

  2. Enter a descriptive name > Choose an integration type from the Managed list

  3. Complete the required fields > Click Create integration

Flow

  1. Select Resources from the workspace menu > Choose Flows > Add a new flow or select an existing one

  2. Choose the flow's settings icon in the toolbar > Enter an AI description in the Routing tab explaining the purpose of the workflow

  3. Choose the MCP tab > Toggle ON MCP. Optionally, enter an input schema for the flow's MCP properties that the agent will need to collect from a user before executing the workflow

  4. Click Save
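For example, the input schema for a hypothetical exchange flow exposed as an MCP tool might declare the values the agent must collect before triggering it (property names here are illustrative):

```json
{
  "type": "object",
  "properties": {
    "orderNumber": {
      "type": "string",
      "description": "Order number for the item being exchanged"
    },
    "newSize": {
      "type": "string",
      "description": "Replacement size the customer wants, e.g. S, M, L, XL"
    }
  },
  "required": ["orderNumber", "newSize"]
}
```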

Modality

  1. Select Resources from the workspace menu > Choose Modalities > Add a new modality

  2. Select the auto-generate option to provide the output schema that your front-end will render when sent by the agent

  3. Click Save

Knowledge base

  1. Select Resources from the workspace menu > Choose Knowledge bases > Add a new knowledge base or select an existing one

  2. Choose a type > Provide content or an external integration

  3. Provide a description in the settings > Click Save

Step 2: Create flow

Identify the overall task your agent is completing and add a flow. Each flow is invoked when your chosen AI model identifies user intent from a user's utterance ("I want to order room service") and matches it to a flow you've created (e.g., OrderRoomService). The match is based on the routing data you provide for your AI model.

Agentic Generative Journey node
1. New flow

  1. Select Resources from the workspace menu and choose the Flows card

  2. Click New flow and enter a name (no spaces or special characters)

2. Set up your flow

  1. Complete Flow setup by providing routing data

  2. Add a Generative Journey (GJ) node to the Canvas and connect it to the Start node

  3. In the GJ node's configuration panel, select Agentic

3. Configure your agent

  1. Write a short prompt explaining the agent’s goal, the task to complete, tone or messaging style, and anything to avoid. Define when the agent should exit the node successfully (the Success condition)

4. Add tools

  1. Attach one or more tools. Optionally expand each to include:

    • A custom prompt for each tool call

    • Input schema: Choose LLM prompt, Explicit value, or Placeholder variable from the dropdown

5. Finish and connect

  1. Choose an LLM integration best suited for your delivery channel

  2. Connect the Success path to the next node (like a Basic node or Redirect node to a goodbye flow)

  3. Click Save to finish

🧠 Looking for more? See Generative Journey® node and Intro to flows

Step 3: Add app

  1. Select Applications in your workspace menu

  2. Click New application

  3. Choose Blank application from the available options and select Custom

  4. Enter a name for your application and click Create

Your new app will open on the Configuration tab.

Step 4: Configure

The Configuration tab defines your AI engine, delivery channels, and connected workflows.

Configuration tab of application
1. AI Engine

An AI engine helps disambiguate human speech for both language understanding and intent recognition, and helps construct a package of your application whenever a new build is made. The built-in NLX model is provided for seamless setup.

  1. Choose NLX’s built-in model or a managed provider you previously integrated into your workspace (Lex, Dialogflow, etc.)

2. Delivery

Channels determine where your users interact with your app. Every custom core app includes an API channel by default for easy installation in web or mobile environments.

  1. [Optional] For MCP setup, choose the API channel and enable MCP interface

  2. Select + Add channel and choose one or more channels from the list where you want your application deployed

    • Channel name: Defaults to the chosen channel type, but you may overwrite with a custom name

    • Integration: For channels requiring a one-time workspace integration, select from applicable integration(s) created

    • Custom conversation timeout: Customize the timeout period (in minutes) for a conversation session to end on the selected channel

    • Escalation: If any escalation channels have been created in your workspace, they will be listed here for selection

  3. Remaining fields are specific to the channel type selected. See list of channels for complete instructions

3. Functionality

Flows define your application’s behavior. Attach the flow that contains your agentic Generative Journey:

  1. Click + Add flow and select one or more flows from your workspace for your app

  2. Click Default behavior and assign a flow to run during the following situations:

    • Welcome: Runs when a new conversation session starts. Use it to greet the user, set expectations, and collect any essentials (e.g., name or intent)

    • Fallback: Runs on timeouts, integration failures, state breaks, or when the incomprehension count is exceeded and no knowledge base handles the utterance. Route here to recover gracefully and guide the user forward

    • Unknown: Runs when the AI cannot match the user's response to any flow or provided choices. Use it to invoke a knowledge base and check the question against a repository of information

    • Escalation (optional): Runs when a node hits the escalation path. Route here to transfer to a human agent

  3. [Optional] You may view all Linked resources associated with your app and select one to navigate to its edit page

  4. Click Save

Step 5: Deploy

Deploying an application allows you to construct a build that contains a package of the flows, AI engine, settings, and delivery details in the state they exist at the time the build is created. You may then deploy a successful build to make it live or roll back to a previous deployment.

1. Build

  1. Click deployment status in upper right and choose Build and deploy

  2. Review the Validation check for critical errors or detected UX issues in custom flows. Before each new build initiates, a validation check runs to preview potential errors that may cause a failed build. Detected issues are listed with descriptions and potential solutions

  3. [Optional] Provide a Description of notable build edits as a changelog

  4. [Optional] Choose Enable NLX Boost to enhance the performance of your AI engine with generative AI intent classification

    • Allow NLX Boost to override the NLP: Enabling this relies on NLX's built-in generative AI to detect user intent (based on your routing data) and route to a flow accordingly, regardless of the AI engine's detected match

  5. Click Create build

You can now test your newest build in your workspace using any of the test chats.

2. Deploy build

Channel(s) provide the frontend interface (how users experience your app). Deploying a build pushes your updates through any delivery channels set up on the app, effectively making your app live outside of your NLX workspace.

  1. Click deployment status in upper right and select All builds

  2. Choose Deploy on a successful build

    • [Optional] Deployment languages: Select the languages to include in the deployment, if multiple are available

    • [Optional] Hosting: Host your application as a Touchpoint app via the conversational.app domain. Ideal for previewing its final look during development and sharing your app externally with collaborators. Enable the Hosting option to configure the URL (e.g., mybusiness.conversational.app)

  3. Click Create deployment

Enable One-click deploy to auto-deploy every new build. From the deployment status, open Deployment settings and turn One-click deploy on. All future builds will deploy automatically.

Once a build is made, flows can be further edited without affecting a deployed application. Only deploying a new build will impact live applications. Only one build can be deployed at a time and deploying any build deactivates the previous one. You can freely alternate between newer and older builds using Rollback or Deploy.

3. Implement

  1. Click the Configuration tab of your application and choose any channel assigned in the Delivery section

  2. Choose the Setup instructions tab and follow instructions for installing to your chosen communication channel

Once you deploy a build, you may use your agentic app outside the NLX workspace in two ways:

  • Delivery channel: Interact with the app through the channel where it's installed (e.g., web chat via API channel, voice, SMS)

  • NLX hosted: Open the hosting URL from a deployed build to chat with the app. Hosting must be enabled when you deploy

App settings

AI settings

Input a brief description of the application for Model Context Protocol (MCP) Clients to reference when using your app as an MCP server

Advanced
  • Child-directed: For applications subject to COPPA. Utterance information will not be stored if enabled

  • Autocorrection: For supported NLPs, spell correction will be applied to written user responses

  • Repeat on incomprehension: If a user's response is unclear, the application will repeat its last message to the user

  • NLP confidence threshold: When an utterance is at or above this value, your application assumes it matched to a flow with certainty

  • Negative sentiment threshold: The flow assigned to the Frustration default will be triggered when negative sentiment is detected above this threshold by the NLP (e.g., profanity, sarcasm, etc.)

  • Incomprehension count: Define the number of sequential utterances that the application asks for clarity on before an escalation or Unknown flow is triggered

  • Conversation timeout (min): Sets the timeout period for all channels on your application. If a timeout was also configured for a channel when managing channels, that channel's setting will take precedence over the application's setting

  • Default project ID: Default Project ID when using Google's Dialogflow NLP
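The confidence threshold and incomprehension count interact roughly as sketched below. This is a conceptual illustration of the behavior described above, not NLX source code, and the specific values are illustrative:

```python
# Conceptual sketch: how an NLP confidence threshold and an
# incomprehension count interact. Names and values are illustrative.

CONFIDENCE_THRESHOLD = 0.75   # at or above: assume a certain flow match
INCOMPREHENSION_LIMIT = 2     # unclear turns before Unknown/escalation

def route(turns: list[float]) -> str:
    """Each turn is the NLP's match confidence for one user utterance."""
    unclear = 0
    for confidence in turns:
        if confidence >= CONFIDENCE_THRESHOLD:
            return "matched-flow"        # certain match: route immediately
        unclear += 1                     # otherwise ask for clarification
        if unclear >= INCOMPREHENSION_LIMIT:
            return "unknown-flow"        # hand off to Unknown or escalation
    return "awaiting-clarification"

print(route([0.4, 0.9]))   # second utterance clears the threshold
print(route([0.4, 0.3]))   # two unclear turns in a row trigger Unknown
```

Raising the threshold makes routing stricter (more utterances fall through to clarification); raising the incomprehension count gives users more retries before the Unknown flow or escalation takes over.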

Automated tests (enterprise)

See Automated tests for complete setup instructions

Languages (check the list of supported languages in your workspace)
  • Choose which languages your application supports when in production. Though the flows attached to your application may be set up and developed in several languages, your application ultimately decides which to provide when released.

  • Expand either Main language or any Supported language to view advanced settings:

    • Use native NLP: Sends a user's utterance to the NLP directly without translation

    • Region: Global is chosen by default but may be toggled to EU for compliance and performance when using Dialogflow's NLP

    • Dialogflow project ID: If using Dialogflow's NLP, enter the project ID generated on the application's deployment tab

    • Amazon Lex Voice: If using Amazon's Lex NLP for voice channels, select the conversational AI voice to be used from the available Amazon Polly voices

When an application is created, it also inherits the list of workspace languages that have been pushed to all resources. Under Main language, the default language is English (US), while Supported languages lists any additional workspace languages applied via Translations.

Need to adjust languages and translations at the workspace level? See Translations.
