Bidirectional Voice+

From start to finish, set up a hands-free Voice+ experience with NLX

What's Bidirectional Voice+?

Bidirectional Voice+ delivers a fully hands-free, conversational experience by providing a voice AI assistant to handle real-time navigation, form filling, and knowledge base answers on your website or mobile app. Instead of following a predefined script over a phone channel, users can ask open-ended questions or give natural commands while your AI dynamically responds with both spoken answers and contextual actions on the page. Adding this type of Voice+ experience to your frontend asset makes interactions more fluid and task-driven without requiring users to click or tap.

With bidirectional Voice+, you can:

  • Navigate hands-free: Voice commands automatically trigger page changes (“Take me to pricing”)

  • Fill forms by voice: Spoken input is mapped to form fields on the webpage (“My address is 31 Hudson Yards”)

  • Enhance knowledge base answers: Responses not only include an answer but can support relevant navigation or form-fill actions

  • Trigger custom commands: Send structured data to the frontend for domain-specific tasks (“Add the Hokas size 9 to my cart”)

  • Use real-time context: The Touchpoint SDK continuously analyzes pages to improve navigation and form recognition

  • Combine scripted responses: Use predefined voice messages for consistent and custom responses

  • Extend with MCP tools: Fetch live data or run workflows to pull in information from outside your domain

User: What are your store hours?
AI: We’re open Monday through Saturday from 10AM to 8PM, and Sundays from 11AM to 6PM.

> Thanks to custom commands added as metadata in a knowledge base response, you
can support automatic navigation to the Store Hours & Location page for this query.
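
For example, the knowledge base article answering this question could carry navigation metadata along these lines (the URL is a placeholder; point it at your own Store Hours & Location page):

nlx:destination: https://www.yourstore.com/store-hours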

Want to try a bidirectional Voice+ experience? Head to nlx.ai and select the Touchpoint widget to see it in action.


Checklist

You'll complete the following to successfully launch your bidirectional Voice+ application:


Step 1: Import template

Import the Hands-free AI browsing template into your workspace:

🔽 Import template


Step 2: Review knowledge base & add metadata

A Q&A knowledge base (KB) provides answers to commonly asked questions that users may pose while engaged with your conversational AI. But it also serves another significant purpose: supporting navigation and custom actions that can be triggered in the frontend when appropriate.

User: What do you have that's vegetarian on the lunch menu?

Knowledge base answer: Our lunch menu includes a variety of sandwiches and salads

Action: Menu page is loaded

Custom action: Vegetarian options are shown in view pane

To support these actions, metadata is added to relevant KB responses as key-value pairs. In the above example, the following metadata would be provided:

nlx:destination: https://www.restaurant.com/menu
nlx:action: showLunch
nlx:actionPayload.vegetarian: true
nlx:actionPayload.glutenFree: false

Thus, when the KB answer is delivered, Touchpoint navigates to the menu page and passes the custom “showLunch” action, together with its structured payload, to your frontend.
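
The dot-notation nlx:actionPayload.* keys are delivered to your frontend as a single payload object alongside the action name. As a rough, illustrative sketch of the custom command produced by the metadata above (field names follow the table below; the exact shape and value types may differ):

const command = {
  classification: "custom",  // command type
  action: "showLunch",       // from nlx:action
  payload: {                 // merged from the nlx:actionPayload.* keys
    vegetarian: true,
    glutenFree: false,
  },
};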

Supported metadata keys

| Key | Classification | Action | Description |
| --- | --- | --- | --- |
| nlx:destination | navigation | page_custom | Navigate to a specific page or section |
| nlx:action | custom | Custom action name | Send custom actions to the frontend |
| nlx:actionPayload | custom | Custom action data | Optional value; only taken into account if the nlx:action key is present. Sent as the payload key to the frontend along with command type custom and action = nlx:action key value |
| nlx:uri | n/a | Custom behavior (see Advanced section below) | Limit responses and actions to specific URIs within your client application |

  • Select Resources > Knowledge bases in your workspace menu

  • Choose the Voice+ KB generated from the template, or create a new Q&A type knowledge base

  • On the Q&As tab, upload or manually enter new articles

  • Select Metadata tab

    • Choose the Auto-generate option to input a sample schema, or manually enter metadata properties > Click Save

  • On the Q&As tab, expand any question that should trigger navigation or custom actions on your frontend when asked by a user

    • Click + Add metadata

    • Input values into relevant metadata key fields > Click Save

  • Choose Publish tab of your knowledge base > Publish new version

Advanced: Using nlx:uri

In the food menu example, you might provide a different answer to the question if the user is already on the menu page. That's where nlx:uri can change how Voice+ behaves:

Add a new article in your Q&A knowledge base with the following content:

  • Question: How do I know which lunch options are vegetarian?

  • Answer: Look for the green leaf icon next to listed menu options.

Then attach nlx:uri metadata to the article:

| Metadata key | Value |
| --- | --- |
| nlx:destination | N/A |
| nlx:action | showLunch |
| nlx:actionPayload | {} |
| nlx:actionPayload.vegetarian | true |
| nlx:actionPayload.glutenFree | false |
| nlx:uri | /menu |

Because nlx:uri is set to /menu, this article only applies when the user is already on the menu page. In that case, Touchpoint receives the custom action payload but no navigation command, and the voice assistant delivers a spoken response that is contextually aware of the current page.

🧠 Want to learn more? Explore all customization options with bidirectional Voice+


Step 3: Refine flow with Voice+ node

The Welcome flow generated from the template kickstarts your voice assistant's workflow. It delivers the initial greeting and activates Voice+ mode within your conversational AI application. Once active, your frontend (where the application is installed) is ready to receive API calls from NLX, enabling it to handle navigation, fill forms, trigger custom UI actions, and provide answers to user questions.

  • Select Resources > Flows in your workspace menu

  • Choose the Welcome flow nested in the Hands-free folder that was generated from the template

  • Adjust the Basic node containing the welcome message to fit your use case

  • Select the Voice+ node > Ensure Bidirectional mode is turned ON, and your knowledge base is assigned

    • Optionally enable Generative response to support small talk and responses to questions not detected in your knowledge base

    • Optionally assign MCP-enabled flows as Tools, if you would like your assistant to retrieve external data or execute a workflow during conversation

  • Click Save on Canvas toolbar


Step 4: Deploy application

Now you'll construct a new build of your conversational AI application with your updated changes.

  • Select Applications from workspace menu > Choose Hands-free application created from onboarding

  • Click the Flows tab of the application > Attach one or more flows you've created to make them available to your application > Click Attach selected, then Save

  • Select Default behaviors tab of application > Assign any attached flows to the application's behaviors

  • Click Channels tab of application > Expand API channel

    • Choose Edit channel > Select a TTS provider, and choose the voice persona you would like to use

  • Click Save

A build constructs the array of workflows that make up your application and updates any changes made to your flows, while deploying makes a successful build live:

  • Click Deployment tab of application > Select Create or Review & build

  • Wait for validation to complete > Select Create build

  • When satisfied with a successful build, click Deploy


Step 5: Install Touchpoint

You'll need to install NLX's Touchpoint SDK on each screen of your digital asset so it can receive the navigation, form-fill, and custom commands you've defined:

  • After a successful deployment of your application, select the Details link next to the Deployed status

  • Under Setup instructions, expand API, and click Open Touchpoint configurator

  • Toggle ON Voice mini under Input in the SDK configuration

  • Toggle ON Bidirectional Voice+ in the SDK configuration

  • Define handlers for navigation, form fill, and custom commands in your frontend code, as shown in the examples below

    • Use provided metadata to route each command to the correct UI behavior

Example:

function handleShowLunch(payload) {
  // Payload keys correspond to the nlx:actionPayload.* metadata keys
  setVegetarianOption(payload.vegetarian);
  setGlutenFreeOption(payload.glutenFree);
}

function handleCustomCommand(action, payload) {
  // Example: Voice-enabled lunch menu navigation
  if (action === "showLunch") {
    handleShowLunch(payload);
  }
}

const touchpointOptions = {
  config: {
    applicationUrl: "YOUR_APPLICATION_URL",
    headers: {
      "nlx-api-key": "YOUR_API_KEY",
    },
    languageCode: "en-US",
  },
  input: "voiceMini", // Enables voice input with bidirectional support
  bidirectional: {
    custom: handleCustomCommand, // Invoked when a custom command arrives
  },
};
  • Install the Touchpoint SDK into your website or app
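
For reference, here is a rough sketch of what navigation and form-fill handlers could look like alongside the custom handler above. The handler keys and argument shapes shown here are assumptions for illustration, not the verbatim SDK contract; treat the snippet generated by the Touchpoint configurator as the source of truth.

// Hedged sketch: handler names, keys, and argument shapes below are assumptions.
function handleNavigation(action, destination) {
  // e.g., a "page_custom" navigation action carrying the URL from nlx:destination
  if (action === "page_custom" && destination) {
    window.location.href = destination;
  }
}

function handleFormFill(fields) {
  // Assumed shape: a collection of { id, value } pairs matched to page inputs
  fields.forEach(({ id, value }) => {
    const element = document.getElementById(id);
    if (element) element.value = value;
  });
}

const touchpointOptionsWithAllHandlers = {
  ...touchpointOptions,
  bidirectional: {
    navigation: handleNavigation, // assumed handler key
    input: handleFormFill,        // assumed handler key
    custom: handleCustomCommand,
  },
};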
