# Testing

Testing in NLX lets you simulate real user conversations before deploying your application. You can run full end-to-end tests from an app’s initialization flow or isolate specific flows for focused debugging.

Through your workspace's built-in test feature, you can interact with your AI application in text or voice mode, review conversation turns, inspect variable state, and track events in real time. For deeper troubleshooting, the test's *Debugger* log highlights each operation that occurred during a turn and pinpoints where issues arise.

{% hint style="success" %}
Enterprise users can also streamline testing with [Automated tests](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup/automated-tests), which validate user intent recognition and flow performance at scale.
{% endhint %}

<table data-view="cards"><thead><tr><th></th><th></th><th></th></tr></thead><tbody><tr><td><i class="fa-cube">:cube:</i></td><td><strong>Application test</strong></td><td>Run a full end-to-end simulation of your app to confirm that routing, responses, and integrations work as expected</td></tr><tr><td><i class="fa-code-branch">:code-branch:</i></td><td><strong>Flow test</strong></td><td>Quickly test and refine a single flow in isolation, adjusting logic, variables, and translations in real time</td></tr><tr><td><i class="fa-arrows-rotate-reverse">:arrows-rotate-reverse:</i></td><td><strong>Automated tests (Enterprise)</strong></td><td>Validate intent recognition and flow accuracy at scale through automated test cases for faster QA</td></tr></tbody></table>

## Application test

After completing an application build, you can launch a test session that begins a conversation from the flow assigned to the app's *Welcome* default behavior.

To start a test, choose an app from your workspace and select the play icon in the upper right. Type your responses directly, or, for voice-enabled applications, activate *Voice mode* using the microphone icon at the bottom of the tester:

{% @arcade/embed flowId="7vU3MN0kwIIU1FJJcIuL" url="<https://app.arcade.software/share/7vU3MN0kwIIU1FJJcIuL>" %}

## Flow test

*Test* mode lets you isolate a single conversation flow for evaluation. While conversing with your application, you can debug the flow and try out different variables and translations in real time.

To launch the test widget, select your flow and enable a test conversation from the toolbar:

{% @arcade/embed flowId="0qDFjn1J92yTWuwMKxwW" url="<https://app.arcade.software/share/0qDFjn1J92yTWuwMKxwW>" %}

1. Select *Resources* from your workspace menu > Choose *Flows* > Select an existing flow
2. Click the test icon (play button) in the Canvas toolbar to launch a test conversation session
3. Type your responses directly, or for voice-enabled applications, activate *Voice mode* using the microphone icon at the bottom of the tester

You can continue to modify and build with your flow while in Test mode. Simply select the Canvas to adjust the flow and save changes. Re-select the *Test* icon and refresh the chat to test changes in real time.

{% hint style="info" %}
You may also save your inputs for faster testing with the use of [automated flow inputs](https://docs.nlx.ai/platform/nlx-platform-guide/setup/automated-tests#conversation-flow-test).
{% endhint %}

## Troubleshoot

When testing conversations, you may notice errors or unanticipated behavior from the application. These issues usually stem from problems with state and variables in a node of a turn:

* Missing required variables used in application messaging, payloads, or state modifications
* Uncleared variables in retry or revisit scenarios

If NLX's state tracker detects an issue, the entire turn fails to execute or triggers the application's fallback behavior. Within a [test chat](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/testing) in your NLX workspace, you can engage in conversation with your AI application to troubleshoot or view potential errors.

Note the four turns that have taken place in this test chat so far (application messages in grey):

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FqodC9crMtvJICU9rRV7F%2FScreenshot%202025-06-26%20at%203.57.08%E2%80%AFPM.png?alt=media&#x26;token=d6d16480-353f-4219-b609-ca0051d03b10" alt=""><figcaption><p>Turns (boxed in yellow) in the Test chat</p></figcaption></figure>

To troubleshoot unexpected errors in a test chat, click the most recent application message to open the *Debugger*. This shows a top-down timeline of events for that turn, allowing you to expand each logged event and identify the cause of the issue. The debugger also highlights the nodes involved in the turn directly on the Canvas of the current flow:

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FtUhA3VEBAQ0THK8GfRT5%2Fimage.png?alt=media&#x26;token=6a64a48d-c715-45b5-9381-62fe5d954f88" alt=""><figcaption><p>Debugger panel opened from a selected message in the test chat view</p></figcaption></figure>

{% hint style="info" %}
Keep in mind that a single turn can span multiple flows if *Redirect* nodes are used.
{% endhint %}

If you have trouble pinpointing which variable's state is causing an issue, start debugging from the first node in the affected turn. Detach it from all subsequent nodes in the turn and connect it to a simple *Basic* node with a success message. If the turn reaches the success message, iterate by reconnecting one node at a time, saving, and testing again. When the turn fails, you'll know the issue lies in the last node you reconnected.

:brain: Want to learn more? Read all about [conversation turns & state](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/turns-and-state)

## Configure & debug

Within a test session's panel, select the gear icon to set context variables and other settings for your test, or select the bug icon to review all logged events from the test transcript for troubleshooting.

{% hint style="warning" %}
Selecting any AI message (in grey) within your test chat provides a sequential, top-down list of all tracked operations and events that occurred up to the selected turn.
{% endhint %}

* *Customize data*: If your conversation uses [context variables](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/context-variables), open the test chat's *Settings*, enter values into the corresponding context variable fields, and click *Save and refresh*
* *Event details*: To debug events in your flow, select a conversational AI message (in grey) in the transcript window, then click the expand caret on any item to view the NLU's details
* *Environment*: Change the endpoint for the [Data request](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/data-requests) environment between *Production* and *Development*


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/testing.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
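The query mechanism above can be sketched in a few lines of Python. This is a minimal, hypothetical example: it assumes only that the endpoint accepts a URL-encoded `ask` query parameter and returns a plain-text answer body, as described above; the helper names are illustrative, not part of any NLX SDK.

```python
from urllib.parse import quote
from urllib.request import urlopen

# Page URL from the documentation; the `ask` parameter carries the question.
BASE_URL = "https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/testing.md"

def build_ask_url(question: str) -> str:
    """Return the documentation query URL with the question URL-encoded."""
    return f"{BASE_URL}?ask={quote(question)}"

def ask_docs(question: str) -> str:
    """Perform the GET request and return the response body (requires network)."""
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")

url = build_ask_url("How do I enable Voice mode in a flow test?")
print(url)
```

Keeping URL construction separate from the request makes the encoding step easy to verify before any network call is made.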
