Generative Journey®

From start to finish, set up a flow using Generative Journey in agentic mode with NLX

What's Generative Journey® in agentic mode?

The Generative Journey node brings intelligence and flexibility to your application. When set to Agentic mode, it uses an LLM to decide which tools to call (like custom or managed data requests, modalities, knowledge bases, or even other flows) based on what’s happening in the conversation.

It can gather details naturally from users (like a name, order number, or size) without requiring rigid, step-by-step mapping across several flows.

Imagine a customer says:

"I’d like to exchange my shirt for a larger size."

With agentic Generative Journey:

  • A flow is invoked that is designed to handle order exchanges and returns

  • When the Generative Journey node is hit, the agent powering it understands the intent (exchange request) and gathers needed details like order number, shirt size, and preferred shipping option directly from the user in natural conversation

  • The agent may then use a custom data request to retrieve the order from your backend system

  • If available, the agent uses a modality (like a carousel of available sizes/colors) to let the customer pick a replacement

  • The agent may reference a knowledge base for return policies and automatically shares that info with the customer

  • Finally, the agent executes the exchange by triggering another flow or integration tool

In practice, this one node can replace dozens of traditional workflow nodes, giving you far more flexibility and making your flows cleaner, smarter, and easier to maintain.

Set up tools

Tools are the building blocks the agent can call on to complete tasks. When assigned to the agent, tools give the agent both the capability and the context it needs to autonomously decide what to use and in what order to fulfill the user’s request.

Custom/managed integration

Connects your agent to real-time data from external systems or services

Modalities

Presents information in different formats (e.g., a carousel, a list, a map)

Knowledge base

Pulls from stored documentation or reference material

Flows

Triggers other workflows in your workspace

Begin by identifying the tools your agent needs, then complete their setup:

  • Select Resources from workspace menu > Choose Data requests > Add a new data request

  • Enter a descriptive name (no spaces or special characters) > Select Save

  • Provide a Response model schema and/or a Request model schema. If entering a Request model schema, include a description for each property so the agent understands what information is needed for the payload

  • Provide a description on the data request's Settings tab so the agent understands the tool's purpose > Click Save

  • Select Resources from workspace menu > Choose Integrations > Add a new integration

  • Enter a descriptive name > Choose integration type from Managed list

  • Complete required fields > Click Create integration

  • Select Resources from workspace menu > Choose Flows > Add a new flow or select existing

  • Choose the flow's settings icon in the toolbar > Enter an AI description in the Routing tab explaining the purpose of the workflow

  • Choose MCP tab > Toggle ON MCP. Optionally, enter an input schema for the flow's MCP properties defining the information the agent must collect from a user before executing the workflow

  • Click Save

  • Select Resources from workspace menu > Choose Modalities > Add a new modality

  • Select the auto-generate option to provide the output schema that your front end will render when sent by the agent

  • Click Save

  • Select Resources from workspace menu > Choose Knowledge bases > Add a new knowledge base or select an existing one

  • Choose type > Provide content or an external integration

  • Provide a description in the settings > Click Save
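As an illustration, a Request model schema for a hypothetical CheckOrderStatus data request might look like the following standard JSON Schema sketch (the property names are invented for this example; adapt the shape to your own backend). Note how each property carries a description so the agent knows what information to collect:

```json
{
  "type": "object",
  "properties": {
    "orderNumber": {
      "type": "string",
      "description": "The customer's order number, e.g. ORD-12345"
    },
    "email": {
      "type": "string",
      "description": "The email address used to place the order"
    }
  },
  "required": ["orderNumber"]
}
```

The descriptions do double duty: they document the payload for you, and they give the agent the context it needs to ask the user for the right details before calling the tool.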

Create flow

Identify the overall task your agent is completing and add a flow. Each flow is invoked when your chosen AI model identifies user intent from a user's utterance ("I want to order room service") and matches it to a flow you've created (e.g., OrderRoomService). This match is based on the routing data you provide for your AI model.

Agentic Generative Journey node
1

New flow

Select Resources from workspace menu > Choose Flows card > Click New flow and name it

2

Set up your flow

Complete Flow setup by providing routing data, then add a Generative Journey (GJ) node to the Canvas and connect it to the Start node. In the GJ node's configuration panel, select Agentic

3

Configure your agent

Write a short prompt explaining the agent’s goal, the task to complete, tone or messaging style, and anything to avoid. Define when the agent should exit the node successfully (the Success condition)
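For the shirt-exchange scenario above, a prompt might look like the following. The wording and structure are illustrative, not a required format:

```
Goal: Help the customer exchange an item from a recent order.
Task: Collect the order number, the item to exchange, and the desired
size or color, then complete the exchange using the available tools.
Tone: Friendly and concise; confirm details before acting.
Avoid: Quoting refund amounts or promising delivery dates.
Success: The exchange has been submitted and the customer has received
a confirmation message.
```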

4

Add tools

Attach one or more tools. Optionally expand each to include:

  • A custom prompt for each tool call

  • Input schema: Choose LLM prompt, Explicit value, or Placeholder variable from the dropdown

5

Finish and connect

Choose an LLM integration best suited for your delivery channel. Connect the Success path to the next node (like a Basic node or Redirect node to a goodbye flow) and click Save to finish

🧠 Looking for more? See Generative Journey® node and Intro to flows

Create app

The Configuration tab allows you to provide the necessary NLP engine, flows, and communication method for your application.

Configuration tab of custom application
1

Create app

Select Applications in your workspace menu > Choose New application. Click Blank application from the available options > Choose Custom and name your app

2

AI Engine

An AI engine helps disambiguate human speech for language understanding and intent recognition, and also helps construct a package of your application whenever a new build is made.

  • Provider: Select the engine in the Provider dropdown. You may choose the built-in NLX model or select a managed NLP provider you previously integrated into your workspace

3

Delivery

Channels are the modes through which users interface with your application. Each new custom application you create in your workspace comes with a prebuilt API channel for easy installation on web or mobile frontend services. Select any channel added to your app's delivery section for access to advanced settings and installation instructions.

  • For MCP setup, choose the API channel > Enable MCP interface

  • + Add channel: Select one or more channels from the list where you want your application deployed

    • Channel name: Defaults to the chosen channel type, but you may overwrite with a custom name

    • Integration: For channels requiring a one-time workspace integration, select from applicable integration(s) created

    • Custom conversation timeout: Customize the timeout period (in minutes) for a conversation session to end on the selected channel

    • Escalation: If any escalation channels have been created in your workspace, they will be listed here for selection

    Remaining fields are specific to the channel type selected. See list of channels for complete instructions

4

Functionality

  • Custom applications are defined by the flows they contain. Here, you'll attach the workflows that your application will have access to, as well as which flows to default to during certain scenarios.

    • + Add flow: Select one or more flows from your workspace that the application can use

    • Click Default behavior > Assign a flow to run during a situation:

      • Welcome: Runs when a new conversation session starts. Use it to greet the user, set expectations, and collect any essentials (e.g., name or intent)

      • Fallback: Runs on timeouts, integration failures, state breaks, or exceeded incomprehension events that are not handled by a knowledge base. Route here to recover gracefully and guide the user forward

      • Unknown: Runs when the AI cannot match the user's response to any flow or provided choices. Use to invoke a knowledge base and check a question against a repository of information

      • Escalation (optional): Runs when a node hits the escalation path. Route here to transfer to a human agent

    • Linked resources: View all workspace resources associated with your application. Select one to navigate to its edit page

  • Click Save

Deployment

Deploying an application allows you to construct a build that contains a package of the flows, AI engine, settings, and delivery details in the state they exist at the time the build is created. You may then deploy a successful build to make it live or roll back to a previous deployment.

1

Build

  • Click deployment status in upper right > Select Build and deploy

  • Review the Validation check for critical errors or detected UX issues in custom flows. Before each new build initiates, a validation check runs to preview issues that may cause a failed build. Detected issues are listed with descriptions and potential solutions

  • You may provide a Description of notable build edits as a changelog

  • Enable NLX Boost (optional): Enhances the performance of your engine with generative AI intent classification

    • Allow NLX Boost to override the NLP: Enabling this relies on NLX's built-in generative AI to detect user intent (based on your routing data) and route to a flow accordingly, regardless of the AI engine's detected match

  • Click Create build

You may now test your newest build in your workspace using any of the test chats.

2

Deploy

Channels provide the frontend interface (how users experience your app). Deploying a build pushes your updates through any delivery channels set up on the app, effectively making your app live outside of your NLX workspace.

  • Click deployment status in upper right > Select All builds > Choose Deploy on successful build

    • Deployment languages (optional): Select the languages to include in the deployment, if multiple are available

    • Hosting (optional): Host your application as a Touchpoint app via the conversational.app domain. Ideal for previewing its final look during development and sharing your app externally with collaborators. Enable the Hosting option to configure the URL (e.g., mybusiness.conversational.app)

  • Click Create deployment

Enable One-click deploy to auto-deploy every new build. From the deployment status, open Deployment settings and turn One-click deploy on. All future builds will deploy automatically.

Once a build is made, flows can be further edited without affecting a deployed application. Only deploying a new build will impact live applications. Only one build can be deployed at a time and deploying any build deactivates the previous one. You can freely alternate between newer and older builds using Rollback or Deploy.

3

Implement

  • Click the Configuration tab of your application > Click any channel assigned in the Delivery section

  • Choose the Setup instructions tab and follow instructions for installing to your chosen communication channel

Once you deploy a build, you may use your application outside the NLX workspace in two ways:

  1. Delivery channel: Interact with the app through the channel where it’s installed (e.g., web chat via API channel, voice, SMS)

  2. Touchpoint (hosted chat): Open the hosting URL from a deployed build to chat with the app. Hosting must be enabled when you deploy
