Generative Journey® (Slots)

From start to finish, set up a flow using Generative Journey in slot capture mode with NLX

What's Generative Journey in slot capture mode?

The Generative Journey® node is a powerful building block in your flow that brings a large language model (LLM) into the process. In slot capture mode, it's designed to gather multiple pieces of information (slots) from a user in a natural, conversational way. Instead of relying on rigid question-and-answer sequences, the LLM flexibly interprets how users provide details: some users give all the information in one sentence, some skip around, and others answer out of order. This makes the experience more fluid and efficient, while reducing the number of separate nodes you need to build into a flow.

Imagine a customer uses your conversational app and says: "I need a hotel in Boston this weekend."

With slot-capture Generative Journey:

  • A flow designed to handle hotel booking is invoked

  • When the Generative Journey node is hit, the LLM recognizes and fills location = Boston and dates = this weekend without the user needing to explicitly answer a question

  • It notices that other required slots (number of guests, room type, budget) are still missing

  • It naturally asks follow-up questions like, "How many people will be staying?"

  • It allows the user to answer in any order (e.g., “It’s for two adults, and under $200 a night would be great”)

Once all slots are collected, the node ejects from the Success path, and the resolved slot variables can be used to complete the rest of the workflow. All of this happens within one node, keeping the flow streamlined while still giving the user freedom to provide details conversationally.
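
To picture what the node hands off on its Success path, here's a minimal sketch of the kind of slot values the hotel-booking example might resolve. The interface and field names below are illustrative assumptions, not NLX's schema; the actual variables are defined by the slots you attach to the flow.

```typescript
// Illustrative only: the variable names and shapes come from the slots you
// attach to the flow, not from a fixed schema.
interface HotelBookingSlots {
  location: string;       // "Boston"
  dates: string;          // "this weekend"
  numberOfGuests: number; // 2
  roomType: string;       // "double"
  budget: string;         // "under $200 a night"
}

// Once every required slot is filled, downstream nodes can reference the
// resolved values, e.g. to call a booking service or confirm with the user.
const resolved: HotelBookingSlots = {
  location: "Boston",
  dates: "this weekend",
  numberOfGuests: 2,
  roomType: "double",
  budget: "under $200 a night",
};
```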


Checklist

You'll complete the following to successfully launch a workflow powered by Generative Journey in slot capture mode:


Step 1: Create flow

Begin by identifying the tasks your conversational AI application will automate and organizing them into individual topics, each handled by a flow. Determine the sequence of steps and messaging the conversational application follows to assist a user with the task. The conversation workflow is assembled in the Canvas builder with a pattern of nodes similar to a flow diagram.

Each flow is invoked when your chosen AI model identifies user intent from a user's utterance ("I want to order room service") and matches it to a flow you've created (e.g., OrderRoomService). This match is based on the training data you provide for your AI model.

  • Select Resources from workspace menu > Choose Flows > Click New flow

  • Enter a descriptive name (no spaces or special characters) > Select Save

  • Complete Flow setup by attaching training data and slots

  • Add nodes by right-clicking on the Canvas & choosing New node (see available node types)

  • Connect first node to Start node > Connect sequential nodes via node edges or stacking

  • Add conversational AI messages to nodes by selecting + Add message on a node's side panel

  • Place a Generative Journey node on the Canvas

  • Select Slot capture

  • Provide a succinct prompt to the LLM describing the flow the task belongs to, the end goal of the task, any specific messaging or branding requirements, and anything to avoid (a sketch of such a prompt appears after this list)

  • Assign all Required slots needed to complete the task

  • Link from the node's Success edge to the next node in the flow

  • Click Save

Repeat for any additional flows your application will help automate.
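
As a reference while filling in the node's side panel, here is a rough sketch of what a succinct prompt and required-slot list might look like for the hotel-booking example. It is written as a TypeScript object purely for illustration; in the workspace you type the prompt and assign slots through the node's side panel, and the flow name, slot names, and branding line here are invented for the example.

```typescript
// Purely illustrative: in the workspace these values are entered through the
// Generative Journey node's side panel, not written as code. The flow name,
// slot names, and branding are made up for this example.
const generativeJourneySetup = {
  mode: "Slot capture",
  prompt:
    "You are completing the BookHotel flow. Goal: gather everything needed " +
    "to reserve a hotel room for the user. Keep messaging friendly and " +
    "concise, consistent with Acme Hotels branding. Avoid quoting exact " +
    "prices or confirming availability.",
  requiredSlots: ["location", "dates", "numberOfGuests", "roomType", "budget"],
};
```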

🧠 Looking for more? See Intro to flows


Step 2: Deploy

Now you'll create the conversational AI application users will interface with. This step involves attaching all flows you want your application to access, defining flows to handle certain behaviors, setting up the channels where your application will be installed, and deploying.

  • Select Applications from workspace menu > Choose New application

  • Click Blank application from the available options > Choose Custom

  • Provide a name for your application > Click Create application

  • On the Configuration tab of the application

    • Under the Functionality section, attach one or more flows created in the previous step to make them available to your application > Click Save

    • Click Default behavior > Assign any attached flow to the application's behaviors > Click Save

A build now constructs a package of your application with a snapshot of the current state of your flows, languages, and application setup. A deployment then pushes a successful build to the communication channels where your app is installed:

  • Click deployment status in upper right > Select Build and deploy

    • Review the Validation check for critical errors or detected UX issues in custom flows. Before each new build initiates, a validation check runs to preview potential errors that may cause failed builds. Detected issues are listed with descriptions and potential solutions

    • You may provide a Description of notable build edits as a changelog

    • Click Create build

  • Click deployment status in upper right > Select All builds > Choose Deploy on successful build > Click Create deployment


Step 3: Install

  • Click the Configuration tab of your application > Click any channel assigned in the Delivery section

    • To finalize MCP, choose the API channel on your application's Configuration tab, then open the Setup instructions tab for directions on setting up your MCP Client (ensure the MCP interface was turned ON before building and deploying)

  • Choose the Setup instructions tab and follow the instructions for installing to your chosen communication channel

Once you deploy a build, you can use your application outside the NLX workspace in two ways:

  1. Delivery channel: Interact with the app through the channel where it’s installed (e.g., web chat via API channel, voice, SMS, MCP Client); see the sketch below for a programmatic example

  2. Touchpoint (hosted chat): Open the hosting URL from the deployment details to chat with the app. Hosting must be enabled when you deploy
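
For the delivery-channel route, here is a minimal sketch of what programmatic access through the API channel could look like. The URL, header name, and payload shape are placeholders, not NLX's actual API; substitute the endpoint, credentials, and request format shown on your channel's Setup instructions tab.

```typescript
// Placeholder values throughout: replace the endpoint, credentials, and
// request format with what your API channel's Setup instructions tab shows.
const CONVERSATION_URL = "https://example.com/your-api-channel-endpoint";
const API_KEY = "<your-channel-api-key>";

async function sendMessage(text: string): Promise<unknown> {
  const response = await fetch(CONVERSATION_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Header name is a placeholder; check the Setup instructions tab.
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ message: text }),
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

// Example: kick off the hotel-booking flow from your own front end.
sendMessage("I need a hotel in Boston this weekend").then(console.log);
```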
