# Nodes

All conversation flows consist of nodes that define logic, messaging, API calls, and turns that can be followed in a particular sequence. There are a variety of node types, each with their own function (e.g., routing a user, providing a message, generating an output, listening for user intent, etc.).

* Choose the *Add* option in the toolbar, or right-click the Canvas and choose *New node* from the shortcut menu
* Select a node type to add it to the Canvas

Nodes are linked together via their node paths. Every node has one or more paths, one for each direction the flow may take.

{% hint style="success" %}
Connect the first node of your flow to the Canvas' [*Start* node](#start).
{% endhint %}

{% @arcade/embed flowId="q9DIk5khyArhCxxvSCW5" url="<https://app.arcade.software/share/q9DIk5khyArhCxxvSCW5>" %}

* To connect nodes, click a node's path and drag the line to the recipient node
* To disconnect nodes, click the line linking them, or drag a node away from a stack and select the linked line

{% hint style="info" %}
Connected lines are blue and show directional flow. Dashed lines indicate the recipient node is positioned behind or in line with the origin node on the Canvas. Moving the recipient node in front turns the line solid.
{% endhint %}

To tighten the space used on the Canvas, you may also stack nodes together:

{% @arcade/embed flowId="OVTUTDhNhcoeDtuyfgVG" url="<https://app.arcade.software/share/OVTUTDhNhcoeDtuyfgVG>" %}

* To connect nodes through stacking, click and drag a node onto the top or bottom of another
* The path turns into an arrow, indicating the flow's direction

{% hint style="info" %}
To delete a node, select it and press *Delete*, or right-click and choose *Delete* from the shortcut menu.
{% endhint %}

***

### Configuring

Selecting a node on your Canvas opens a side panel that displays information about the node and provides options for adding or refining its actions. Clicking outside of the node automatically closes the panel.

{% hint style="info" %}
Every node has a non-editable ID so the node may be referenced across the NLX workspace. To show the *Node ID*, click the three-dot menu in the upper right of the node's side panel.
{% endhint %}

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FoDeaIFXvK1C9kJ4GIZzB%2Fimage.png?alt=media&#x26;token=c08ccae7-8424-4608-8a9c-965d85dafed3" alt=""><figcaption><p>Node side panel</p></figcaption></figure>

#### Messaging

{% hint style="info" %}
Not all nodes have a *Messages* section. In these cases, you may place a [Basic node](#basic) before or after to relay messages.
{% endhint %}

Your conversational AI relays messages to users when they reach a node with *Messages* entered. Below are options to help enhance your use of this feature:

<details>

<summary>Message</summary>

* Select a node > Click *+ Add message* on the node's side panel to add a message
* Repeat the above as often as needed on a single node
* To delete a message, choose the three-dot menu beside the message and select *Delete*

Easily reference [variables](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/flows-and-variables#variables-in-flows), including generated outputs, in your messaging so it's unique to each conversation:

* Enter an open curly brace `{` while typing in a message field and select a variable from the menu

{% hint style="info" %}
If you do not see a *Slot* variable that exists in your workspace, remember to [attach it](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#attached-slots) within your flow's *Settings*. If you do not see a *Data request* variable from a *List* schema, you may need to iterate over the list by placing a *Loop* node set to *List* before your *Data request* node.
{% endhint %}
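For illustration, a message referencing a variable might read like this once the placeholder is inserted (the `firstName` slot name is hypothetical):

```
Thanks, {firstName}! Your reservation is confirmed.
```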

NOTE: Adding more than one message to a node breaks up large messages into a series of smaller ones, making it easier for users to consume on text interfaces

</details>

<details>

<summary>Variations</summary>

{% hint style="warning" %}
Message variations do not get translated. If developing for multilingual AI applications, avoid using message variations.
{% endhint %}

Adding variations may be favorable when users regularly interact with the same conversational AI application or may traverse a node(s) in a flow more than once during the same session, such as with repeat processes or retries. The NLX NLU chooses from your variations randomly, so users experience different phrasing.

* Select a node > After adding a message using a node's side panel, click *+Add variation*
* To delete a variation, choose the three-dot menu beside it and select *Delete*

</details>

<details>

<summary>SSML</summary>

{% hint style="info" %}
SSML tags are visible in text-based channels if you reuse the same flow and sequence of nodes for both voice and chat. Use a Split node to branch the flow in those cases.
{% endhint %}

When drafting messages for voice channels, you can use Speech Synthesis Markup Language (SSML) tags supported by your NLP. Inserting SSML into your messages can control volume, pitch, pauses, emphasis, breathing, and more.

For Amazon Lex, see the [supported SSML tags guide](https://docs.aws.amazon.com/polly/latest/dg/supportedtags.html).
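As an illustration, the snippet below uses two commonly supported SSML tags, a pause and an emphasis; confirm each tag against your NLP provider's supported list before relying on it:

```
<speak>
  Your order total is <emphasis level="strong">forty dollars</emphasis>.
  <break time="500ms"/>
  Would you like a receipt?
</speak>
```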

</details>

<details>

<summary>Best practices</summary>

* Add *Basic* nodes with messaging before custom API nodes (i.e., *Actions*, *Data requests*) to indicate a process is about to happen, particularly if it may take a moment
* Be mindful of the placement of messaging in case a series of nodes (or stack) might be cycled through twice. You may want to add a *Loop* node set to *Range* that ejects after a certain count so the messaging from any nodes isn't repeated unnecessarily
* Plan for failure when collecting information from a user or asking them to make a choice by adding failure messages with *Basic* nodes

</details>

#### Functionality

Several advanced actions can be applied to nodes on the Canvas. Simply select *+ Add functionality* on a node's side panel and select from the following:

<details>

<summary>Analytics tags</summary>

Assign tags to nodes for tracking and later evaluating their performance in your flow using [Canvas analytics](https://docs.nlx.ai/platform/nlx-platform-guide/monitoring/analytics/canvas-analytics). While reviewing Canvas analytics, you can view the number of unique visits to the node from conversations with users.

Choose from three available system tags or create custom tags through the [*Analytics tags*](https://docs.nlx.ai/platform/nlx-platform-guide/monitoring/analytics/analytics-tags) resource menu. Note that new tags applied to nodes are not retroactively applied to previous conversations with deployed flows.

* Select a node > Click *Add functionality* menu on a node's side panel > Choose *Analytics tags*
* Search or select a tag from the dropdown to assign to the node
* Repeat as often as needed
* To delete a tag, click the delete icon beside it

</details>

<details>

<summary>Collect feedback</summary>

{% hint style="info" %}
Requires creating at least one [user feedback collection](https://docs.nlx.ai/platform/nlx-platform-guide/monitoring/user-feedback) before referencing in a node.
{% endhint %}

*Collect feedback* adds a structured user feedback mechanism to the AI's response (i.e., 👍/👎 controls). Because a feedback collection is a global workspace resource, you can reuse the same one across any end-of-turn node in your flow (i.e., User choice or User input node).&#x20;

* Select a node > Click *Add functionality* on a node's side panel > Choose *Collect feedback*
* Select a user feedback collection from your workspace in the dropdown
* Click *Save*

</details>

<details>

<summary>Modality</summary>

{% hint style="info" %}
Requires creating at least one [*Modality* in your workspace](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities) before referencing in a node.
{% endhint %}

Modalities allow you to reuse a pre-defined payload structure across the flows of your workspace once enabled on a node of your choice. They can handle rich UI components or other information to relay to a user. NOTE: *Modalities* only display on end-of-turn nodes. Make sure the node is a *User choice*, *User input*, or *Agentic Generative Journey* node, or is not connected to any following nodes.

* Select a node > Click *Add functionality* on a node's side panel > Choose *Modality*
* Select a modality from your workspace in the dropdown
* Enter payload details or assign dynamic sources into applicable fields > Click *Save*

</details>

<details>

<summary>Node payload</summary>

*Node payload* holds payload commands executed in conversation. Use cases include specifying which call queue should be used during escalation to an agent, whether to include a transcript during escalation, or sending a control message to the NLP to allow users to interrupt the conversational AI.

* Select a node > Click *Add functionality* menu on a node's side panel > Choose *Node payload*

{% hint style="info" %}
Specify each item as a key-value pair in the following format: `key=value`. To specify multiple items, use an ampersand (&) as a separator.
{% endhint %}
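For example, two Amazon Lex voice settings from the table below combined into a single payload (the hint strings here are placeholders):

```
x-amz-lex:allow-interrupt:*:*=True&nlx:hints=billing,returns,cancellation
```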

<table><thead><tr><th width="221">Key</th><th width="325">Value</th><th width="94">Usage</th><th>Notes</th></tr></thead><tbody><tr><td><code>nlx_transferPhoneNumber</code></td><td>E.164 phone number</td><td>Amazon Connect</td><td>The phone number used to transfer to in the event of an escalation.</td></tr><tr><td><code>escalation_PSTN</code></td><td>E.164 phone number</td><td>Amazon Chime SDK</td><td>The phone number used to transfer to in the event of an escalation.</td></tr><tr><td><code>escalation_VC</code></td><td>Amazon Chime SDK Voice Connector URL</td><td>Amazon Chime SDK</td><td>The voice connector used to transfer a call.</td></tr><tr><td><code>x-amz-lex:</code><strong><code>*</code></strong></td><td>Varies. Include <code>=True</code> at the end of your Lex syntax to enable; for example, <code>x-amz-lex:allow-interrupt:*:*=True</code></td><td>Amazon Lex with a voice channel</td><td>Configure timeouts for user input, user interruption (barge-in), and other behaviors. Check out the <a href="https://docs.aws.amazon.com/lexv2/latest/dg/session-attribs-speech.html">official Amazon docs</a> for supported syntax.</td></tr><tr><td><code>nlx:hints</code></td><td>A comma separated list of strings</td><td>Amazon Lex with a voice channel</td><td>Improve disambiguation over voice when there is an anticipated list of options.</td></tr></tbody></table>

</details>

<details>

<summary>Send context</summary>

Enabling *Send context* allows the conversation's context variables to be included as part of the response payload. If you are sending context to an API, the context is added to the response payload.

</details>

<details>

<summary>State modifications</summary>

*State modifications* allow you to apply advanced state changes to dynamic variables from *Slots, Context variables, System variables,* or *Data requests* that are referenced in a flow:

* *Clear*: Erases any specified variable(s) so the same variables can be re-captured later (preventing auto-traversal), or when looping users back through a step during retries or revisits in the same conversation session, including null, *No match*, or repeat instances
* *Set*: Allows you to establish and set a specified variable from that point forward in the conversation session (until a further state modification is applied)
* *Increment*: Automatically increases a specified numerical variable when the node containing the increment is visited. Useful with *Split* node logic and/or counting with a counter [*Context variable*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/context-variables) during retries; use in conjunction with the *Set* state mod if needing to establish the base value prior
* *Decrement*: Automatically decreases a specified numerical variable when the node containing the decrement is visited. Useful with *Split* node logic and/or counting with a counter *Context variable* during retries; use in conjunction with the *Set* state mod if needing to establish the base value prior
* *Append to*: Add one or more variables to a list/array
* *Remove first item from*: Clear the first variable from a list/array
* *Remove last item from*: Clear the last variable from a list/array
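The modifications above behave roughly like the following Python sketch. This is purely illustrative: the variable store and function names are not NLX platform APIs, only a model of each modification's effect on conversation state.

```python
# Illustrative only: models the effect of each state modification
# on a conversation's variable store. Not an NLX API.
state = {}

def set_var(name, value):       # Set: establish a value from this point on
    state[name] = value

def clear(name):                # Clear: erase a captured value so it can be re-captured
    state.pop(name, None)

def increment(name):            # Increment: +1, e.g. a retry counter
    state[name] = state.get(name, 0) + 1

def decrement(name):            # Decrement: -1
    state[name] = state.get(name, 0) - 1

def append_to(name, value):     # Append to: add an item to a list variable
    state.setdefault(name, []).append(value)

def remove_first(name):         # Remove first item from a list variable
    if state.get(name):
        state[name].pop(0)

def remove_last(name):          # Remove last item from a list variable
    if state.get(name):
        state[name].pop()
```

For example, pairing *Set* (to establish the base value) with *Increment* on a retry path lets a *Split* node eject the user after a fixed number of attempts.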

{% hint style="success" %}
Stream state modifications applied to your *Data requests* during a conversation by using a [Lifecycle hook](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/lifecycle-hooks) and enabling the [*Stream state modification* setting](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup#custom-app-settings).
{% endhint %}

</details>

<details>

<summary>Voice+</summary>

Currently in beta.

</details>

***

### Nodes

### Start

The *Start* node is the home node of every flow. It's non-editable, provides information on how the flow is reached by users, and should be attached to the first node added to your Canvas.

{% tabs %}
{% tab title="Use case" %}

* Every flow, or page created within a flow, automatically populates a *Start* node as a beginning point. A node added to the Canvas can then be attached to *Start* through a line or by stacking
  {% endtab %}

{% tab title="Path" %}

* *Next*: Link to the first node added to the flow
  {% endtab %}
  {% endtabs %}

***

### Action

{% hint style="info" %}
Use of this node requires [setting up an *Action*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/actions) in your workspace. This feature is currently available to enterprise tiers only.
{% endhint %}

*Action* nodes initiate an external task.

{% tabs %}
{% tab title="Use case" %}

* An email or text sent to the user for a survey, confirmation, or set of instructions
* Triggering the creation of a service ticket, work order, purchase, etc.
* Scheduling an appointment or calendar meeting
  {% endtab %}

{% tab title="Paths" %}

* *Success*: Link to the next node in the flow if the action executed properly

> Optional:
>
> * *Timeout*: Link to a node if the action does not return a status before the timeout
> * *Failure*: Link to a node if the action was not able to connect as configured
>   {% endtab %}

{% tab title="Side panel" %}

* *Invoke*: Select from available *Actions* already created in the workspace
* *Payload*: Each payload field is auto-generated by the [*Request* model](https://docs.nlx.ai/platform/nlx-platform-guide/advanced/actions#request-model) set up when the *Action* was created. Enter the desired payload into each field, if applicable

> Optional:
>
> * *Always retrigger*: Enable toggle to retrigger the action, even if the node is revisited during the same conversation session
> * *Timeout*: Enable toggle to adjust timeout period in seconds. Default is 5s with a maximum of 30s
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Use a [*Loop* node](#loop) before the *Action* node to specify a number of retries allowed for failure events
* Consider linking a [*Basic* node](#basic) to the Action's *Success* edge to acknowledge (via messaging) that the process was successful
  {% endtab %}
  {% endtabs %}

***

### Basic

*Basic* nodes are the easiest to include in any part of the flow. They can act as a messaging node, a processing node when given [added functionality](#add-functionality), or end a call when left unconnected at the end of a flow.

{% tabs %}
{% tab title="Use case" %}

* Indicate via message that a process is about to occur before continuing the flow
* Offer a greeting or confirmation before/after a question or capture step
* Clear a variable (from a slot or *Data request* source) in a retry cycle for a *User choice* node
* End the call or conversation by leaving the node unattached
  {% endtab %}

{% tab title="Path" %}

* *Next*: Link to the next node in the flow, or leave unconnected to end a conversation or call
  {% endtab %}

{% tab title="Side panel" %}

> Optional:
>
> * *+ Add message*: Add one or more messages that are relayed to the user when reaching this node
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Add to the end of a flow to properly end a conversation with a user using a goodbye message
* Link *No match* path to a *Basic* node, which can explain to users why something may have been an incorrect choice or incorrect input before continuing with a retry or escalation
  {% endtab %}
  {% endtabs %}

***

### Data request

*Data request* nodes allow you to trigger an API request during a conversation to execute a specific external action or send or retrieve data for use in a conversation. These nodes are strategically placed in a flow to run call(s) when needed.

#### Managed

*Managed* mode allows you to assign an API request from a managed integration service set up in the workspace that triggers when the node is traversed in the flow.

{% hint style="info" %}
Use of this mode requires first setting up a [managed integration](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/managed-integrations) in your workspace.
{% endhint %}

{% tabs %}
{% tab title="Use case(s)" %}

* Create a calendar event
* Push a post to a social media platform
* Generate a ticket in a CRM system
  {% endtab %}

{% tab title="Paths" %}

* *Success*: Link to the next node in the flow if the request executed properly

> Optional:
>
> * *In progress*: Link to a node that loops back to the *Data request* node while a request call is being resolved; useful to avoid timeout restrictions from voice channel providers on API calls that take longer. Subject to the maximum timeout of 30s
> * *Timeout*: Link to a node if the request does not return a status before the timeout
> * *Failure*: Link to a node if the request was not able to connect as configured
>   {% endtab %}

{% tab title="Side panel" %}

* Ensure toggle is set to *Managed*
* *Provider*: Choose from managed service providers
* *Action*: Choose from the available actions related to the selected service. The list updates automatically when a different provider is selected
* *Resolve to*: The output name that may be referenced in the rest of the flow (includes any nested variables retrieved from the API call). No spaces or special characters

> Optional:
>
> * *Settings*
>   * *Custom timeout*: Enable toggle to adjust call timeout period in seconds. Default is 5s with a maximum of 30s
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Place a *Data request* node BEFORE other nodes in a flow that must relay, reference, or use [conditional logic](#split) based on the properties sent/retrieved or event executed by your *Data request*
* Reference information in text fields by entering an open curly bracket `{` and selecting from *Data requests* in the workspace. Managed integrations and their variables (properties) are color-coded purple
  {% endtab %}
  {% endtabs %}

#### Custom

*Custom* mode allows you to assign a *Data request* from your workspace that triggers the custom API request when the node is traversed in the flow.

{% hint style="info" %}
Use of this mode requires first setting up a custom [*Data request*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests) in your workspace.
{% endhint %}

{% tabs %}
{% tab title="Use case" %}

* Checking available rooms to assist with a hotel reservation
* Pulling a list of local restaurants to match a user's criteria
* Fetching the user's profile to authenticate and customize the conversation
* A payload of information sent to a database for storage or updating (e.g., user profiles, passwords, etc.).
  {% endtab %}

{% tab title="Paths" %}

* *Success*: Link to the next node in the flow if the request executed properly; generally follow with a node where the *Data request* or properties of the *Data request* are being referenced

> Optional:
>
> * *In progress*: Link to a node that loops back to the *Data request* node while a request call is being resolved; useful to avoid timeout restrictions from voice channel providers on API calls that take longer. Subject to the maximum timeout of 30s
> * *Timeout*: Link to a node if the request does not return a status before the timeout
> * *Failure*: Link to a node if the request was not able to connect as configured
>   {% endtab %}

{% tab title="Side panel" %}

* Ensure toggle is set to *Custom*
* *Request*: Choose from any custom integration (data request) created in your workspace

> Optional:
>
> * *Payload*: Specify the payload to be retrieved; fields are [set up under the Request model](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests#response-request-models) when creating the data request
> * *Always retrigger*: Enable toggle to retrigger the data request, even if the node is revisited during the same conversation session
> * *Settings*
>   * *Custom timeout*: Enable toggle to adjust timeout period in seconds. Default is 5s with a maximum of 30s. If increasing the timeout for custom webhooks, link the node's *In progress* connector to a *Basic* node that then links back to the *Data request* node. This avoids a premature timeout imposed by your communication channel before your request call is resolved
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Place a *Data request* node BEFORE other nodes in a flow that must relay, reference, or use [conditional logic](#split) based on the properties sent/retrieved or event executed by your *Data request*
* Reference information in text fields of other nodes by entering an open curly bracket `{` and selecting from *Data request* properties. Custom *Data requests* and their schema are color-coded orange
* If you do not see properties from your schema in the placeholder menu, they are likely defined as an array. Link to a [*Loop* node](#loop) set to *List* and assign your *Data request* to process and extract them for reference
* If increasing the default timeout in the side panel for API calls that take a while to resolve, it's recommended that you link the *Data request* node's *In progress* connector to a *Basic* node with "One moment..." messaging that then links back to the *Data request* node. This avoids a premature timeout imposed by your voice provider channel before your request call is resolved and provides a better user experience
  {% endtab %}
  {% endtabs %}

***

### Define

*Define* nodes allow you to define and set ephemeral values for use in a single flow. In essence, they work similarly to [*Context variables*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/context-variables) but are instead localized to a single flow and do not persist beyond the flow.

{% tabs %}
{% tab title="Use case" %}

* A value captured or set in a flow that shouldn't be retained outside of the current flow
* A value captured from the user that must carry a different meaning in other flows in the same conversation session
* Parsing a date to extract its component parts (minutes, seconds, hours, days, etc.). See the *How-to* tab for instructions
  {% endtab %}

{% tab title="Path" %}

* *Next*: Link to the next node added to the flow
  {% endtab %}

{% tab title="Side panel" %}

* *Output variable*: Give the defined value a name that's used whenever referenced in the flow downstream (no spaces)
* *Simple / Schema* toggle: Swap between defining a simple value property or a schema
* *Value*: Set the value's meaning (swap the property type from the right of the dropdown). If swapping to dynamic property type `{x} Placeholders`, select from those in the workspace using the dropdown

> Optional:
>
> * If defining a value set from `{x} Placeholders`, you may also choose an operator to the left of the dropdown field to manipulate the value:
>   * *First*: Grabs the first element of a *Data request* *Response* body set to *List*
>   * *Length*: Sets the value to the number of items in a list or the number of characters, including whitespace, in a string \[e.g., LENGTH("12345") returns 5]
>   * *Increment*: Increases the value by +1
>   * *Decrement*: Decreases the value by -1
>   * *Lowercase*: Converts a value to all lowercase styling
>   * *Trim*: Removes leading or trailing whitespace characters
>   * *Lowercase + trim*: Converts a value to all lowercase styling and removes leading or trailing whitespace characters
>   * *Parse date*: Extracts all date components from a value (millisecond, second, minute, day, hour, ISOstring, month, year, timezone, timestamp)
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}
{% @arcade/embed flowId="aozojgxMliproSHBHHfF" url="<https://app.arcade.software/share/aozojgxMliproSHBHHfF>" %}

* Use a *Define* node with *Parse date* for dynamic messaging specific to your flow
* *Example*: A user sets up an account that also provides them monthly rewards for being an active member. The slot type `NLX.Text` or `NLX.Date` captures a user providing an activation date of 11/15/2023 through a *User choice* node. Follow with a *Define* node to name the value 'Rewards' and define it as a parsed date of the activation date slot. Later messaging can choose to reference it as "Your monthly reward will be sent on day `{Rewards.day}` of every month!" = "Your monthly reward will be sent on day 11 of every month!"
  {% endtab %}
  {% endtabs %}
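The operators above can be approximated in Python as follows. This is only a conceptual sketch of each operator's effect, not the platform's implementation; function names are illustrative:

```python
from datetime import datetime

# Conceptual equivalents of the Define node operators (illustrative only).
def length(value):
    # Length: items in a list, or characters (incl. whitespace) in a string
    return len(value)

def lowercase_trim(value):
    # Lowercase + trim: lowercase styling plus leading/trailing whitespace removed
    return value.strip().lower()

def parse_date(iso_string):
    # Parse date: extract date components from an ISO 8601 value
    dt = datetime.fromisoformat(iso_string)
    return {"year": dt.year, "month": dt.month, "day": dt.day,
            "hour": dt.hour, "minute": dt.minute, "second": dt.second}
```

For instance, `length("12345")` returns 5, matching the *Length* example in the side panel description above.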

***

### Escalate

*Escalate* nodes immediately initiate the escalation transfer for the flow's communication channel.

{% hint style="info" %}
Use of this node requires an [escalation channel set up](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/escalation-channels) in your workspace *Settings.*
{% endhint %}

{% tabs %}
{% tab title="Use case" %}

* Business procedure requires that a user change their registered email with an agent only
* Your conversational AI is not yet enabled with the ability to perform a task requested by the user
* Technical processes involving APIs/webhooks failing
  {% endtab %}

{% tab title="Paths" %}

> Optional:
>
> * *Timeout*: Link to a node if the escalation does not transfer before the timeout
> * *Failure*: Link to a node if the escalation was not able to connect as configured
> * *Continuation*: Link to the next node of the current flow if the escalation returns to the conversational application
>   {% endtab %}

{% tab title="Side panel" %}

> Optional:
>
> * *+ Add message*: Add one or more messages that are relayed to the user when reaching this node
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* You may reuse an *Escalate* node anywhere in a flow, or link to the same *Escalate* node from multiple points in the flow
* In some instances, you may wish to develop an escalation intent that handles an extended process before ending with the *Escalate* node (e.g., authenticating a user, sharing current wait times, or presenting a final pitch to complete a task with the conversational AI instead of an agent)
  {% endtab %}
  {% endtabs %}

***

### Generative Journey® v2

*Generative Journey®* v2 nodes allow you to employ a large language model (LLM) to collect several variables from a user and execute multiple tools as part of a cohesive task with multiple exit conditions you can define. Tools you can assign include custom APIs set up through [*Data requests*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests), managed APIs set up through [*Integrations*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/managed-integrations), [modalities](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities), [knowledge bases](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/knowledge-bases), or [flows](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks). Your agent will determine when to call tools based on the overall prompt you provide.

{% tabs %}
{% tab title="Use case(s)" %}

* Restaurant ordering: Collect name + order, call ordering API, send confirmation SMS/email
* Travel servicing: Collect record locator + passenger name, check eligibility, cancel flight, confirm changes
* Support automation: Collect issue + account identifier, call ticketing API, summarize and file ticket, confirm
  {% endtab %}

{% tab title="Paths" %}

* *Exit conditions*: Add and briefly define one or more outcomes that could exit the node, then link each to the next node in the flow
* *Data capture*: If slots are entered that the agent must capture AND *Exit when complete* is enabled, link to the next node in the flow

> Optional:
>
> * *Timeout*: Link to a node if the agent or tools do not respond before the timeout
> * *Failure*: Link to a node if the agent or tools were not able to connect as configured
>   {% endtab %}

{% tab title="Side panel" %}

* *Prompt*: A prompt given to the LLM agent that identifies the context of the overall task and the required information to extract. Brand voice guidelines and additional rules may be provided
* *+ Add exit condition*: Add one or more acceptable endings that help the agent determine when to eject from the node and where the conversation proceeds through the remainder of the flow
* *Tools*: Assign one or more *managed* [*data requests* from Integrations](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types), [custom *Data requests*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests), [flows](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/flows-and-variables), [modalities](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities), or [knowledge bases](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/knowledge-bases) from your workspace for the agent to use. Expand a tool after adding it to include a brief prompt for using the tool

  * For any *Input schema* (payload) your *Tools* require, you may swap between *LLM prompt*, *Explicit values*, or *Placeholder variables* for each property field by selecting the menu to the right of each field
  * Add an optional *Interim* message: The agent delivers this message before a tool is called (e.g., "Let me take a look," or "One moment while I check"). Expand an attached tool to add it

  <figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FAvP5SwecFDUDIb2P5deg%2FFlows_%20Generative%20Journey%20agentic%20input%20schema.png?alt=media&#x26;token=640a93e7-19ab-4575-bbab-b2e050b87e59" alt=""><figcaption><p>Input schema property fields with description dropdown options</p></figcaption></figure>

  * *Managed data requests*: Choose the applicable action you want executed for that service. Expand the tool after adding to enter a tool prompt (when to use and any additional guidelines not explained in the general prompt) as well as applicable payload variables
  * *Custom data requests*: Expand the tool after adding to enter a tool prompt (when to use and any additional guidelines not explained in the general prompt) as well as applicable payload variables
  * *Flows*: Agentic nodes support two ways to use flows as tools, depending on whether you want the flow to take over the conversation or act as a callable tool the agent orchestrates.
    * Handover (flow takes control): A handover flow can be any flow in your workspace. When the agent invokes it, control shifts to that flow, and the conversation follows the flow's nodes and messaging exactly as designed. To return control, add a *Redirect* node at the end of the handover flow and set its destination back to the agent node: choose the flow where the Generative Journey node exists, choose *Custom node*, and enter the Generative Journey node's ID (select the three-dot menu on the node to copy the ID)
    * MCP (agent stays in control): Assign any MCP-enabled flow from your workspace. When the agent invokes it, the agent remains in control of the conversation and uses the flow as a structured workflow to execute
  * *Modalities:* Choose from [modalities](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities) created in your workspace. Expand the tool after adding to enter a tool prompt (when to use and any additional guidelines not explained in the general prompt) as well as applicable payload variables
  * *Knowledge bases*: Expand the tool after adding to enter a tool prompt (when to use and any additional guidelines not explained in the general prompt)
* *Data capture*:&#x20;
  * *Required slots*: Include required slots that the agent must collect from a user (e.g., check-in date, checkout date, number of guests, number of rooms, etc.). Only slots [attached to the flow](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#attached-slots) appear for selection
  * *Optional slots*: Include optional slots the agent should incorporate if the user provides them (e.g., preferred room, number of pets traveling, etc.)
  * *Exit upon completion*: Creates an automatic node pathway (similar to an exit condition) that can be linked to the rest of the flow. The pathway will be triggered after all resolved slots are collected
* *LLM Model*: Choose from one of the provided models to power your agent's intelligence. Note that some models are better suited for specific tasks and communication channels:
  * *Amazon Nova Lite*: A fast, inexpensive model that is great for simple tool calls over voice
  * *Anthropic Haiku*: A balance of speed and accuracy well-suited for most tool calls
  * *Anthropic Opus*: A slower but more powerful model for complex tool calls; best for chat-based applications
  * *Cerebras OSS 120B*: The fastest inference model; best suited for low-latency applications
  * *Custom model* (enterprise only): Select the LLM service [integrated in your workspace](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services) to power task execution

> * *Settings*:
>   * *Max steps*: Limits how many times the agent can think and act (e.g., decide, call a tool, reassess) before it must stop and respond in one turn. Use this to prevent long loops and control latency/cost. Default: 10
>   * *Zero-turn mode:* The LLM will resolve any required variables for task completion detected in the last logged user utterance and automatically exit from the *Success* path
>   * *Timeout*: Set a custom time limit (in seconds) for how long the LLM has to respond before it times out
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}
{% hint style="success" %}
Check out our [Guide](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/types/agentic) to implement Generative Journey.
{% endhint %}

1. Add and connect a Generative Journey v2 node to your flow
2. Write the prompt: task + constraints + what completion means
3. Add tools (KB, data requests, flows, modalities) and provide brief tool prompts
4. Add Data capture slots if the task requires structured values to be collected from the user
5. Add exit conditions for the outcomes you want your agent to account for
6. Link all node paths to the next nodes (ending, timeout, fallback, etc.)
7. Test in the sandbox and refine prompts, tool descriptions, and exit conditions
   {% endtab %}
   {% endtabs %}

***

### Generative text

*Generative text* nodes employ a large language model (LLM) to generate virtually any output, including crafting messages, applying logic, or quickly reformatting information based on provided context of the conversation.

{% hint style="info" %}
Use of this node requires a [one-time LLM integration](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services) in your workspace *Settings.*
{% endhint %}

{% tabs %}
{% tab title="Use case" %}

* Generate required messaging dynamically (welcome message, apology, error, etc.)
* Provide rules to generate logic based on given input and check in a [*Split*](#split) node for the result
* Reformat a date or do quick math&#x20;
  {% endtab %}

{% tab title="Paths" %}

* *Success*: Link to a node that then references the name given in the *Generate* field to provide the generated output. Use the open curly bracket *`{`* in any text field of a subsequent node in the flow to insert it

> Optional:
>
> * *Timeout*: Link to a node if the LLM does not respond before the timeout
> * *Failure*: Link to a node if the LLM was not able to connect as configured
>   {% endtab %}

{% tab title="Side panel" %}

* *Prompt*: The prompt given to the LLM that specifies what generated output you'd like
* *Output variable*: Provide a simple name or acronym to use the generated output downstream as a variable locally in the flow (no spaces)
* *Integration*: The [workspace integration name for the LLM](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services) being used
* *Temperature*: Controls randomness of word choice in generated text. Lower values (e.g., 0–0.3) are more deterministic; higher values (e.g., 0.8–1.2) are more creative and variable. Default is set to 1
* *Max tokens*: The upper limit on the model’s output length (in tokens). If reached, the response is cut off. Default is set to 4000
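The *Temperature* setting behaves like standard LLM sampling temperature: lower values concentrate probability on the likeliest wording, while higher values spread it out. A minimal, illustrative Python sketch of that mechanism (not NLX platform code):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Scale logits by 1/temperature, softmax, then sample one token index.
    Lower temperatures concentrate probability on the top-scoring choice."""
    rng = rng or random.Random()
    if temperature <= 0:
        # Treat 0 as fully deterministic: always pick the highest-scoring token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r <= cumulative:
            return i
    return len(logits) - 1
```

At `0`, the sketch always returns the top-scoring index; raising the value makes the other indices increasingly likely, which reads as more varied word choice.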

> Optional:
>
> * *Settings:*&#x20;
>   * *Include transcript as context*: When enabled, transcript of the conversation between your conversational AI and user up to that point is sent to the LLM along with the prompt for added context
>   * *Timeout*: Set a custom time limit (in seconds) for how long the LLM has to respond before it times out
>   * *Auto-translate*: Detects the session's language and translates the output to match
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FKu7f0ggR0ms8jSQPwcZK%2Fimage.png?alt=media&#x26;token=35145af4-37d1-4f76-9e61-918a100de1f6" alt=""><figcaption><p>Sample flow using Generative text node and response</p></figcaption></figure>

1. Place a *Generative text* node on the Canvas > Using the node's side panel, assign [an LLM integration](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services) previously set up in the workspace
2. Enter a name for the prompt using the *Generate* field (this name will be referenced in the flow when wanting to trigger the LLM in conversation) > Provide a succinct prompt to the LLM in the *Generative text* node within its *Prompt* field
3. Add a node with messaging (in this example, a standard *Basic* node is used) and link the *Success* edge of your *Generative text* node to the new node
4. Click *+ Add message* on your new node and enter the placeholder name given to the *Generative text* prompt using the open curly brace { menu

{% hint style="info" %}
Good AI prompts include an explanation of the AI's purpose, a prescribed limit to the length of its responses, and any topics or words it should avoid.
{% endhint %}
{% endtab %}
{% endtabs %}

***

### Knowledge base

*Knowledge base* nodes allow you to invoke a response from a digital library of information. They're valuable in answering questions or relaying information to users on common topics that don't require multiple flows to address.

{% hint style="info" %}
Use of this node requires [setting up a Knowledge base](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/knowledge-bases) in your workspace.
{% endhint %}

{% tabs %}
{% tab title="Use case" %}

* Provide common FAQ responses on policies, service offerings, and helpful links (if chat) to guide users
* Provide instructions and how-tos on topics for staff or customers
  {% endtab %}

{% tab title="Paths" %}

* *Match*: Link to a node in the flow when the user utterance was matched to data in the knowledge base. Generally follow with a *Basic* or *User choice* node containing the variable `{GenerateName.answer}` in its messaging to provide the response
* *No match*: Link to a node in the flow when the user utterance was not matched to information in the knowledge base

> Optional:
>
> * *In progress*: Link to a node (may contain progress message or silence breaks) that then loops back to the *Knowledge base* node while a request call is being resolved; useful to avoid timeout restrictions from voice channel providers on API calls that take longer. Be sure to also define a custom timeout under the *Knowledge base* node's *Settings*
> * *Timeout*: Link to a node if the request does not return a status before the timeout
> * *Failure*: Link to a node if the request was not able to locate a response from the user's utterance
>   {% endtab %}

{% tab title="Side panel" %}

* *Question*: Add the [variable](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/flows-and-variables#variables-in-flows) necessary to aid the AI model in its answer search (default is `{System.utterance}`). Provide context if sending more than one variable (e.g., the user is a `{userProfile.RewardsTier}` member and wants to know about `{system.Utterance}`)
* *Knowledge base*: Select from available knowledge bases created in the workspace
* *Prompt* (*optional*): If desired, provide specific instructions, rules, brand voice, additional variables, or other guidelines to follow when your conversational AI relays answers from your knowledge base
* *Output variable*: Provide a simple name or acronym to use the knowledge base response as a variable locally in the flow (no spaces)
* *IF filters*: If [*Metadata*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/knowledge-bases/ingest-content) has been provided when setting up a *Q\&A-* or *External-*&#x74;ype *Knowledge base*, you may provide IF comparison statements ("IF x equals y") when referencing any properties of your metadata schema. This will be sent along in the retrieval request as a filter when your *Knowledge base* is called
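Conceptually, the *Question* field works like a template whose `{…}` placeholders are filled from conversation variables before the retrieval call. A rough Python sketch of that substitution (illustrative only; the variable names are hypothetical and this is not how NLX resolves variables internally):

```python
import re

def render_question(template, variables):
    """Fill {Var.path} placeholders from a dict of conversation variables,
    leaving unknown placeholders untouched."""
    def replace(match):
        return str(variables.get(match.group(1), match.group(0)))
    return re.sub(r"\{([^{}]+)\}", replace, template)

question = render_question(
    "the user is a {userProfile.RewardsTier} member and wants to know about {system.Utterance}",
    {"userProfile.RewardsTier": "Gold", "system.Utterance": "late checkout fees"},
)
```

Sending surrounding context alongside each variable, as in the example above, gives the model more to anchor its search on than a bare utterance.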

> Optional
>
> * *Settings*:
>   * *Auto-translate:* Enable toggle to translate the programmed response to the detected language of the user
>   * *Timeout*: Enable toggle to adjust timeout period of answer retrieval in seconds. Default is 5s
>   * *Include citation*: Enable toggle to include in-line citations with responses. Must use a [modality](https://docs.nlx.ai/platform/nlx-platform-guide/knowledge-bases/support-citations#create-citation-modality) to render citations on a *User choice* or *User input* node linked from the *Match* edge
>   * *Custom minimum confidence score*: Enable toggle to adjust the minimum confidence score previously set at the application level; only impacts response and utterance matching done through knowledge base interaction. Supported through [*Q\&A implementation*](https://docs.nlx.ai/platform/nlx-platform-guide/knowledge-bases/ingest-content#q-and-a) of knowledge base content
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Add a *Knowledge base node* to the Canvas
* On the node's side panel, enter `system.Utterance` (or any [relevant variable](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/flows-and-variables#variables-in-flows) type you wish to use) in the *Question* field and assign a *Knowledge base* from your workspace in the dropdown
* Link a *Basic* node from the *Knowledge base* node's *Match* edge > Enter the placeholder `{"yourKBname".answer}` as a message on the *Basic* node
* Proceed from the *Basic* node to either redirect to another flow, capture user intent, or provide a selection of choices to a user to help with follow-up after an answer has been relayed
* Click *Save*
  {% endtab %}
  {% endtabs %}

***

### Loop

*Loop* nodes either set an allowed number of retries for a user's response or an API call (such as with a *Data request* or *Action*), or they iterate over items in a data array (set to List schema).

{% tabs %}
{% tab title="Use case" %}
*Range* mode

* Prevent infinite looping by allowing one retry for a user to provide their PIN before escalating to an agent

*List* mode

* Loop over a list of resorts and provide matches from a user's preference of 5-star establishments in Tokyo
  {% endtab %}

{% tab title="Paths" %}

* *Next:* Link to the next node in the flow (usually a *Data request* node for processing an array \[List schema], or a *User choice* or *User input* node for setting input retries)
* *Completed*: Once the loops have concluded without a desired result, the flow ejects to another node that may handle the fail state (*Escalate* or *Redirect*, for example)&#x20;
  {% endtab %}

{% tab title="Side panel" %}

* *For each*: Give the loop iteration a name (e.g., `Retry` or `Name.UserProfile`)
* *+ Add functionality*: See [Node functionality](#node-functionality)

*Range*:

* *From/To*: Enter range of allowed looped retries before ejecting from the *Completed* edge

*List*:

* *From*: Select *Data request* and data property (if applicable) to be used as the array for scanning and retrieving. Your *Data request*'s high-level [*Response model*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests#response-request-models) type must be set to *List*
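The two modes behave like familiar control structures: *Range* is a bounded retry loop, and *List* is an iteration over an array. A rough Python analogy with hypothetical data (illustrative only, not platform code):

```python
def range_loop(max_attempts, attempt):
    """Range mode: re-run `attempt` until it returns a value or the allowed
    retries run out. Returning None signals the Completed (fail-state) edge."""
    for _ in range(max_attempts):
        result = attempt()
        if result is not None:
            return result
    return None  # eject via the Completed edge

def list_loop(items, matches):
    """List mode: survey each item in a Data request array and keep matches."""
    return [item for item in items if matches(item)]

# Hypothetical array, as if returned by a Data request with a List response model
resorts = [
    {"name": "Aoyama Grand", "city": "Tokyo", "stars": 5},
    {"name": "Bayview", "city": "Osaka", "stars": 4},
    {"name": "Chiyoda Palace", "city": "Tokyo", "stars": 3},
]
tokyo_five_star = list_loop(resorts, lambda r: r["city"] == "Tokyo" and r["stars"] == 5)
```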
  {% endtab %}

{% tab title="How-to" %}

* Stack or place a *Loop* node before a *Data request* or *User choice* node
* Use [State modifications](#add-functionality) on either the *Loop* node or a node that precedes the *Loop* in a retry cycle to *Clear* the user selection. This prevents retaining information from the first pass and creating repeated *No match* incidents
  {% endtab %}
  {% endtabs %}

***

### Note

*Notes* are for adding freeform text to the Canvas and do not impact the conversation flow. From the Canvas shortcut menu (right-click anywhere on the Canvas) or by selecting *Add note* from the small toolbar menu, you may add a text box for internal-facing text.

{% tabs %}
{% tab title="Use case" %}

* Provide instructions or explanations to a teammate about items on the flow
* Document different node stacks or sections of a flow for organization or visual mapping
* Enter descriptions or thoughts about areas or whole pages of a flow
  {% endtab %}

{% tab title="Side panel" %}
*Body*: Enter note message in this field. You may also reference dynamic values created in the workspace with an open curly brace `{`&#x20;

> Optional:
>
> * *Title*: Enter a brief title to describe the note or call attention to it
>   {% endtab %}

{% tab title="How-to" %}

* There's no wrong way to use *Notes*, but you can explore more ways to [stay organized](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/stay-organized)
  {% endtab %}
  {% endtabs %}

***

### Redirect

*Redirect* nodes route users to a different page of a flow or to a different flow.

{% tabs %}
{% tab title="Use case" %}

* An unregistered user who needs to complete a profile before proceeding with the main flow (redirect to a page of the same flow or another flow)
* After asking a user what they need help with (*User input* node) and routing them based on a matched training phrase
* Moving a user from a flow covering booking policies at a resort to a flow that books a resort reservation
  {% endtab %}

{% tab title="Path" %}

> Optional
>
> * *Continuation*: Link to the next node of the current flow if the redirect returns back to this point
>   {% endtab %}

{% tab title="Side panel" %}

* *Page*: If [multiple pages](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/stay-organized#create-pages) of an intent's flow exist, specify which page to take the conversation to
* *Flow*: Select from other flows available in the workspace
* *Recognized flow*: The captured utterance from a preceding *User input* node is matched by the NLP to a training phrase of a flow attached to the application
* *Previous redirect*: Returns to the previous *Redirect* node that rerouted the conversation and continues the flow
* *Parent application*: Transfers back to the original application if currently in a new application's flow from an [*application handoff*](#application-handoff-beta) scenario

> Optional
>
> *+ Add functionality*: See [Node functionality](#node-functionality)
> {% endtab %}

{% tab title="How-to" %}

* If multiple flows are referenced, ensure all are attached to the application before deployment
* Only choose *Flow* over *Recognized flow* when certain the new flow accomplishes the user's need based on their set of choices or path taken. *Recognized flow* is active user navigation, while *Flow* drives the direction without the user
  {% endtab %}
  {% endtabs %}

***

### Split

*Split* nodes divide users into different paths based on either a set of defined criteria (conditions) or chance distribution.&#x20;

#### Conditions

*Conditions* mode allows you to control the pathways of a conversation based on provided conditional logic that can be defined manually as IF statements or using an LLM with a simple logic prompt.

{% tabs %}
{% tab title="Use case" %}

* Diverting the flow to new messaging or processes based on the channel type users are communicating through (e.g., Twilio Voice vs API). Use with the [system variable](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/flows-and-variables#list-of-system-variables) `channelType`
* Sort users based on different variables (e.g., rewards level, credit card type, authenticated, etc.)
  {% endtab %}

{% tab title="Paths" %}

* *If condition*: Each path appears as conditions are added to the *Split* node. Each path may be linked to a different node specified in a flow

> Optional
>
> * *else*: Link to a node in the flow when any established condition(s) are not matched
>   {% endtab %}

{% tab title="Side panel" %}

* *If condition*: Each dropdown provides options to craft a conditional statement. You may also use the + *And* link to add more modifiers to a statement and create a single grouping of conditions
* *Generative IF condition*: Provide a prompt for each generative condition that must be met (e.g., "If user's intent is to find out about our rewards program but they are not currently a member. Intent: `system.Utterance`. Member profile: `user.Profile`"). Requires selecting the LLM integration (may use built-in NLX LLM)
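Because conditions are evaluated top to bottom, a *Split* in *Conditions* mode behaves like an ordered if/else-if chain that falls through to *else*. A minimal Python sketch with hypothetical condition names and context fields (illustrative only, not platform code):

```python
def split(conditions, context):
    """Evaluate named (name, predicate) conditions in order and route to the
    first path whose condition matches; otherwise fall through to `else`."""
    for name, predicate in conditions:
        if predicate(context):
            return name
    return "else"

conditions = [
    ("voice-channel", lambda ctx: ctx.get("channelType") == "Twilio Voice"),
    ("gold-member", lambda ctx: ctx.get("rewardsTier") == "Gold"),
]
route = split(conditions, {"channelType": "API", "rewardsTier": "Gold"})
```

Note that a user matching both conditions is routed by whichever appears first, which is why reordering conditions in the side panel changes behavior.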

> Optional:
>
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Enter a name for each *If condition* on the side panel. When collapsing the node on the Canvas, only the name of the group appears, decluttering large *Split* nodes and providing better visual organization
* Adding separate conditions on a *Split* rather than one grouping indicates that if the first condition isn't met, it will proceed to the next condition
* The NLX NLU processes each condition in the order they appear in the side panel, so reorder them as desired using the three-dot menu next to each condition
  {% endtab %}
  {% endtabs %}

#### Chance

*Chance* mode randomly routes users to different paths based on assigned percentages.

{% tabs %}
{% tab title="Use case(s)" %}

* Conducting A/B testing on new messaging or new process and diverting a portion of user traffic to test
  {% endtab %}

{% tab title="Paths" %}

* *Chance*: Each path appears as chance conditions are added to the *Split* node. Each path may be linked to a different node specified in a flow

> Optional
>
> * *else*: Not applicable in *Chance* mode
>   {% endtab %}

{% tab title="Side panel" %}

* Select *+ Add condition* to create multiple splits in traffic that divert a percentage of users for each pathway. Percentages may be adjusted manually using the slider or may be calculated evenly across pathways by clicking *Distribute equally*
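Under the hood, *Chance* mode amounts to weighted random routing. A short, illustrative Python sketch (not platform code; the path names and percentages are hypothetical):

```python
import random

def chance_split(paths, rng):
    """Route a user at random according to assigned percentages,
    e.g. {"control": 90, "variant": 10}."""
    names = list(paths)
    return rng.choices(names, weights=[paths[n] for n in names], k=1)[0]

# Simulate 1,000 users hitting a 90/10 A/B split
rng = random.Random(42)
counts = {"control": 0, "variant": 0}
for _ in range(1000):
    counts[chance_split({"control": 90, "variant": 10}, rng)] += 1
```

Over many sessions the traffic converges on the assigned percentages, though any individual user's path is random.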

> Optional:
>
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}
>   {% endtabs %}

***

### Transform

*Transform* nodes reshape, filter, or remap data retrieved from data requests, context variables, or knowledge base citations. They let you apply deterministic logic or generative reasoning to restructure arrays, filter items, or convert data into the exact schema needed for flows, modalities, or agentic tools.

{% tabs %}
{% tab title="Use case" %}

* Filtering a list of items (e.g., showing only Window or Aisle seat options)
* Removing unavailable or irrelevant results before a user sees them
* Reshaping raw API data into the schema required by a modality (e.g., carousel cards)
* Adding or modifying properties (e.g., appending `"isVIP": true` for recognized customers)
  {% endtab %}

{% tab title="Path" %}

* *Next*: Link to the next node added to the flow
  {% endtab %}

{% tab title="Side panel" %}

* *Input variable*: Select the source variable you want to transform
* *Output variable*: Name the transformed output so it can be referenced easily downstream in the flow
* *+ Add transformation*: Choose one or more transformations to apply:
  * *Filter*: Apply one or more IF conditions to keep only items that match your criteria. Example: `seatType == "Window" OR seatType == "Aisle"`
  * *Generative filter*: Use an LLM to evaluate or classify items in the list based on a prompt.\
    Great for fuzzy or semantic filtering (e.g., “return only packages that include breakfast”).
  * *Map*: Provide a desired schema, and manually rewrite each individual item in the input list to match that schema. Ideal when input and output shapes differ on a per-item basis
  * *Generative map*: Provide a desired schema, and an LLM automatically rewrites each individual item in the input list to match that schema. Ideal when input and output shapes differ on a per-item basis or require semantic interpretation. Need to restructure an entire dataset, aggregate items, or perform a transformation that isn’t one-to-one? Use Morph instead
  * *Morph*: Rewrite an entire list of data into a completely new structure using freeform instructions. An LLM reshapes the data based on a natural-language prompt and a selected output schema
  * *Sort*: Sort a list in ascending or descending order by a particular value in the data structure
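The deterministic transformations (*Filter*, *Sort*, *Map*) are conceptually ordinary list operations. A rough Python sketch using the seat example above (illustrative only; the field names and card schema are hypothetical, not NLX's):

```python
# Hypothetical input variable, e.g. the result of a data request
seats = [
    {"seatType": "Window", "row": 14, "available": True},
    {"seatType": "Middle", "row": 14, "available": True},
    {"seatType": "Aisle", "row": 2, "available": True},
]

# Filter: keep only items matching the IF conditions
filtered = [s for s in seats if s["seatType"] in ("Window", "Aisle")]

# Sort: ascending order by a particular value in the data structure
filtered.sort(key=lambda s: s["row"])

# Map: rewrite each item into the schema a modality (e.g., a carousel card) expects
cards = [{"title": f"{s['seatType']} seat", "subtitle": f"Row {s['row']}"} for s in filtered]
```

The generative variants (*Generative filter*, *Generative map*, *Morph*) cover the cases this style of deterministic logic can't express, such as semantic matching or restructuring that isn't one-to-one.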

> Optional:
>
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}
>   {% endtabs %}

***

### User choice

*User choice* nodes prompt users to either make a selection or provide information for your conversational AI to follow up with a relevant action, such as routing, refining, or externally passing along the info.

For *User choice* nodes, the NLX NLU will attempt the following (in order):

1. Match the user's utterance to a value from the assigned source (custom slot, built-in slot, or *Data request* array)
2. Match the user's utterance to training data belonging to other flows attached to the application
3. Eject out of the *No match* edge (if no logic or messaging is connected, the NLU will default to [Unknown or Fallback](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup#default-behavior))
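That order of precedence can be sketched as a simple cascade. This is illustrative only: the real NLU matches with a language model, not substring checks, and all names below are hypothetical:

```python
def resolve_user_choice(utterance, slot_values, flow_training):
    """Conceptual matching order for a User choice node."""
    text = utterance.lower()
    # 1. Try to match a value from the assigned source (slot / Data request)
    for value in slot_values:
        if value.lower() in text:
            return ("match", value)
    # 2. Try training data belonging to other flows on the application
    for flow, phrases in flow_training.items():
        if any(phrase.lower() in text for phrase in phrases):
            return ("flow", flow)
    # 3. Otherwise eject out of the No match edge
    return ("no-match", None)

result = resolve_user_choice(
    "I'd like the window seat, please",
    ["Window", "Aisle"],
    {"CancelBooking": ["cancel my booking"]},
)
```

The key takeaway is that slot values win over flow recognition, so an utterance that happens to resemble another flow's training phrase still resolves the choice first.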

{% hint style="info" %}
Requires: [*Attaching a slot*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#attached-slots) to the flow or setting up a [*Data request*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests) to assign a variable that the node will extract from a user.
{% endhint %}

{% tabs %}
{% tab title="Use case" %}

* Ask a user for the start date of their reported service outage
* Ask a user a Yes/No question
* Ask a user to choose from their available credit cards on file
* Ask a user to provide a short explanation of their complaint
  {% endtab %}

{% tab title="Paths" %}

* *Match* or value paths: Link to the next node in a flow based on the user's selection. You may set the node's pathways to mirror a *Slot*'s values (max. 10) and direct each path to different nodes, or simplify with [*Match*/*No match* paths](#side-panel-11)

> Optional
>
> * *No match*: Link to a node in the flow when a user's choice is invalid
>   {% endtab %}

{% tab title="Side panel" %}

* *Options*: Choose from *Slots* that are [attached to the current flow](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#attached-slots) or from *Data requests* that exist in the workspace&#x20;
* *+ Add message*: Add a message(s) that is relayed to the user when reaching this node

{% hint style="info" %}
If assigning a *Data request* to your *User choice* node, first invoke it using a *Data request* node and ensure the *Data request*'s overarching [*Response model type*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests#response-request-models) is set to *List*
{% endhint %}

> Optional
>
> * *Elicitation* (visible on [voice channels](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#add-flow) only)*:* Indicates a user should provide their choice by spelling it letter by letter (e.g., A, B, C) or by words (e.g., A as in apple, B as in boy)&#x20;
> * *Additional slots*: [NLX Boost](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup#ai-engine) must first be enabled on your application before using this feature. If the user's choice does not match any of the primary options, your application will resort to one or more [built-in slots](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#built-in) attached to the flow and assigned on the user choice node. Choices matching options from additional slots will exit out of the *No match* edge but a *Split* node may be linked to check if they exist. See *How-to* tab for context
>
> *Settings*:
>
> * *Show choices*: Displays values (max. 10) in chat or reads them aloud in voice for users to choose
> * *Auto-select only choice*: If only one choice is available, the conversational application will make the selection on the user's behalf and automatically traverse to the next node in the flow
> * *Reset choices to match/no match* (visible when using a [custom slot](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/slots-custom)): Reset node connectors to *Match/No match* values
> * *Reset choices to values* (visible when using a [custom slot](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/slots-custom)): Reset node connectors to the individual slot values + *No match*
> * *Choice label* (visible when resolving from a [*Data request*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests) source): allows you to specify the *Data request* property to display for user selection. The *Data request's* overarching [*Response* *model*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests#response-request-models) type must be set to *List*
>
> *+ Add functionality*: See [Node functionality](#node-functionality)
> {% endtab %}

{% tab title="How-to" %}

* If reusing a slot or data request's values in another *User choice* node in the conversation, be sure to place a *Basic* node beforehand that uses a [State modification](#add-functionality) to *Clear* the user's previous selection. Otherwise, the application will retain the user's choice and auto-traverse through the second *User choice* node believing it to be the same
* If applicable, link the *No match* edge to a *Basic* node that [*Clears* the selection](#add-functionality), explains why a response may be invalid, and links back to the *User choice* node to allow a user a retry
* *Additional slots* are useful in cases where a user's selection would normally be supported despite their selection not matching choices from the initial primary source. For example, a user may be provided three available appointment options for today's date from a *Data request* source, but the user instead requests an appointment for tomorrow. An additional system slot, such as `NLX.Date` or `NLX.Text`, may be assigned to the same node to handle this request. The user's request will follow along the *No match* edge on the node where a *Split* node should be used to detect if an additional slot(s) entry exists and proceed from there
  {% endtab %}
  {% endtabs %}

***

### User input

*User input* is a listening and capture node that collects a user's utterance to match to a flow's training data, or collects a keyword(s) that alters or routes the conversation using *Split* node logic.

{% tabs %}
{% tab title="Use case" %}

* Asking a user what they need help with and routing them to a flow that matches training data
* Asking a user an open-ended question to detect keyword(s) and route them using *Split* and *Redirect* nodes
  {% endtab %}

{% tab title="Paths" %}

* *Flow recognized*: Link to a node, usually [*Redirect*](#redirect), that routes a user to a flow based on their detected intent (flow recognition is provided to your AI model via the [training data](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#training-phrases) set on each flow)
* *No flow recognized*: Link to a node in the flow when a user's intent is unrecognized
  {% endtab %}

{% tab title="Side panel" %}

* *+ Add message*: Add a message(s) that is relayed to the user when reaching this node

> Optional:
>
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}

* Be sure to assess the [efficacy of training data](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup/automated-tests) for all flows attached to an application. This ensures the steps in flows that direct users with *User input* nodes work as intended
  {% endtab %}
  {% endtabs %}

***

### Voice+™

Starts Voice+ in the current conversation and activates the selected interaction mode.

#### Agentic

*Agentic* mode enables bi-directional Voice+ experiences in web or mobile via the NLX Touchpoint SDK, allowing the user to speak naturally and the AI to take on-screen actions.

{% tabs %}
{% tab title="Use case" %}

* A user starts a webchat session that can navigate webpages, complete forms, and answer FAQs, all through a hands-free, voice-driven experience
  {% endtab %}

{% tab title="Paths" %}
{% hint style="info" %}
Enable the corresponding action if also using a [scripted Voice+ experience](https://docs.nlx.ai/platform/nlx-platform-guide/advanced/voice+-scripts#add-steps) to support the Continuation or Escalation paths.
{% endhint %}

> Optional:
>
> * *Continuation:* Link to the next node in the flow to continue from your Voice+ experience. (Only applicable if also using a Voice+ script.)
> * *Timeout*: Link to a node if the agent does not respond after the timeout period
> * *Failure*: Link to a node if the agent or tools were not able to connect as configured
> * *Escalation*: Link to the next node in the flow to begin an escalation process or trigger an immediate transfer using an *Escalation* node. (Only applicable if also using a Voice+ script.)
>   {% endtab %}

{% tab title="Side panel" %}

* *Tools*: Assign one or more [custom *Data requests*](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests), managed [*data requests* from Integrations](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types), [modalities](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities), [knowledge bases](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/knowledge-bases), [MCP-enabled flows](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#model-context-protocol-mcp), or *Data capture* from your workspace for the agent to use. After adding, choose the applicable resource from the dropdown.

  For any *Input schema* (payload) your *Tools* require, you may switch between *LLM prompt*, *Explicit values*, or *Placeholder variables* for each property field by selecting the menu to the right of the field.

  * *Managed data requests*: Choose the action you want executed for that service, then select the check icon to attach it to the node. After attaching, expand the integration to enter a tool prompt describing the tool's purpose. The input schema has default descriptions, but you may provide additional rules in each field, if desired
  * *Custom data requests*: Select the check icon after choosing to attach it to the node. Provide a description on the *Settings* tab of the *Data request* to give the agent context, as well as descriptions on the individual properties of the *Request model* schema. These help the agent determine what information to collect, what to send, and in what order to execute. The input schema defaults to the descriptions on your data request's *Request model* properties, but you may provide additional rules in each field, if desired
  * *Flows*: Agentic nodes support two ways to use flows as tools, depending on whether you want the flow to take over the conversation or act as a callable tool the agent orchestrates.
    * *Handover* (flow takes control): A handover flow can be any flow in your workspace. When the agent invokes it, control shifts to that flow and the conversation follows the flow's nodes and messaging exactly as designed. Add a Redirect node at the end of the handover flow and set the destination back to the agent node. On the redirect, choose the flow where the Voice+ node exists, choose Custom node, and enter the Voice+ node's ID (select the three-dot menu on the node to copy the ID)
    * *MCP* (agent stays in control): Assign any [MCP-enabled flow](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/setup#model-context-protocol-mcp) from your workspace. When the agent invokes it, the agent remains in control of the conversation and uses the flow as a structured workflow to execute
  * *Modalities:* Choose from [modalities](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/modalities) created in your workspace. Optionally select other tools as one or more triggers that must be completed before the modality is sent. Input schema, if applicable, defaults to the descriptions provided for the modality's properties, but you may provide additional rules in each field, if desired
  * *Knowledge bases*: Defaults to the description provided on the [knowledge base's general settings](https://docs.nlx.ai/platform/nlx-platform-guide/knowledge-bases#knowledge-base-settings), but you may provide additional rules, if desired
  * *Data capture*: The AI extracts values from the user into variables defined either as variables from a *Data request* (which must also be attached as a custom data request tool) or as slots
* *Actions*: Add one or more *Custom*, *Form fill*, or *Navigation* actions that your agent can perform on your web or mobile app. Using the [Touchpoint SDK](https://app.gitbook.com/s/2VnkvXtkrR2qhkVBmB1l/voice+-api/context#example-form-fields), you can define the scope tags of the pages where the action is permitted
  * For *Custom* actions, like filtering, checking radio boxes, selecting, etc., you may provide the name of the action and a prompt for when to apply the custom behavior. Enable the *Custom schema* toggle to enter output schema that's sent to your frontend

> Optional:
>
> * *Settings:*
>   * *Session start timeout*: The amount of time (in seconds) to wait for the user to trigger the first Voice+ script step from your digital asset (web or mobile application), if also using a Voice+ script
>   * *Inactivity timeout*: The amount of time (in seconds) to wait for a user to proceed between Voice+ script steps, if also using a Voice+ script
>   * *Max steps*: Limits how many times the agent can think and act (e.g., decide, call a tool, reassess) before it must stop and respond in one turn. Use this to prevent long loops and control latency/cost. Default: 10
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}
>   {% endtabs %}
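
When a *Custom* action fires with the *Custom schema* toggle enabled, the schema you define is sent to your frontend as a structured payload. As a purely illustrative sketch (the action name, payload fields, and handler below are hypothetical examples, not part of the Touchpoint SDK API), a frontend might validate and dispatch such a payload like this:

```typescript
// Hypothetical payload shape for a custom "filterResults" action.
// Define your own output schema on the node's Custom schema toggle
// to match whatever your frontend actually needs.
interface FilterResultsPayload {
  field: string;   // which result attribute to filter on
  values: string[]; // accepted values for that attribute
}

// Minimal runtime check before acting on data from the agent.
function isFilterResultsPayload(data: unknown): data is FilterResultsPayload {
  if (typeof data !== "object" || data === null) return false;
  const d = data as Record<string, unknown>;
  return (
    typeof d.field === "string" &&
    Array.isArray(d.values) &&
    d.values.every((v) => typeof v === "string")
  );
}

// Example dispatcher: apply the action only when the payload is valid.
function handleCustomAction(name: string, data: unknown): boolean {
  if (name === "filterResults" && isFilterResultsPayload(data)) {
    // ...apply the filter in your UI here...
    return true;
  }
  return false; // unknown action or malformed payload
}
```

Validating the payload at runtime, rather than trusting it blindly, keeps a malformed or unexpected agent response from breaking your page.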

#### Scripted

*Scripted* mode enables exclusive use of [Voice+ scripts](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/voice+-scripts), which synchronize predefined, unidirectional voice prompts over an active phone call with visual assets from your web or mobile application.

{% tabs %}
{% tab title="Use case" %}

* A user calling to request a service during heavy call volumes is texted a link and guided through completing an online form
  {% endtab %}

{% tab title="Paths" %}
{% hint style="info" %}
Enable the corresponding action on a [*Voice+* script step](https://docs.nlx.ai/platform/nlx-platform-guide/advanced/voice+-scripts#voice-script-settings) to support the Continuation or Escalation paths.
{% endhint %}

> Optional:
>
> * *Continuation:* Link to the next node in the flow to continue from your Voice+ experience (generally triggered by the last step of your *Voice+ Script*)
> * *Timeout*: Link to a node if the Voice+ step does not respond after the timeout period
> * *Escalation*: Link to the next node in the flow to begin an escalation process or trigger an immediate transfer using an *Escalation* node (generally triggered by an appropriate step requiring agent transfer in your Voice+ experience)
>   {% endtab %}

{% tab title="Side panel" %}

> Optional:
>
> * *Settings:*
>   * *Session start timeout*: The amount of time (in seconds) to wait for the user to trigger the first Voice+ script step from your digital asset (web or mobile application)
>   * *Inactivity timeout*: The amount of time (in seconds) to wait for a user to proceed between Voice+ script steps. Default is 10 minutes
> * *+ Add functionality*: See [Node functionality](#node-functionality)
>   {% endtab %}

{% tab title="How-to" %}
To set up predefined *Voice+ scripts*, see [this guide](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/voice+-scripts)
{% endtab %}
{% endtabs %}

