Custom Voice+

From start to finish, set up a custom hands-free Voice+ experience with NLX

What's Voice+?

Voice+ delivers a fully hands-free, conversational experience by providing a voice AI assistant to handle real-time navigation, form filling, and knowledge base answers on your website or mobile app. Instead of following a predefined script over a phone channel, users can ask open-ended questions or give natural commands while your AI dynamically responds with both spoken answers and contextual actions on the page. Adding this type of Voice+ experience to your frontend makes interactions more fluid and task-driven without requiring users to click or tap.

  • Navigate hands-free: Voice commands automatically trigger page changes (“Take me to pricing”)

  • Fill forms by voice: Spoken input is mapped to form fields on the webpage (“My address is 31 Hudson Yards”)

  • Enhance knowledge base answers: Responses not only include an answer but can also trigger relevant navigation or form-fill actions

  • Trigger custom commands: Send structured data to the frontend for domain-specific tasks (“Add the Hokas size 9 to my cart”)

  • Use real-time context: The Touchpoint SDK continuously analyzes pages to improve navigation and form recognition

  • Combine scripted responses: Use predefined voice messages for consistent and custom responses

  • Extend with MCP tools: Fetch live data or run workflows to pull in information from outside your domain

User: What are your store hours?
AI: We’re open Monday through Saturday from 10AM to 8PM, and Sundays from 11AM to 6PM.

> Thanks to metadata added to the knowledge base response, this query can also trigger automatic navigation to the Store Hours & Location page.
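
For example, the KB article behind this answer could include a navigation metadata key so Touchpoint knows which page to load (the URL below is illustrative):

nlx:destination: https://www.example.com/store-hours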

Features

Voice+ can be delivered either as a readymade Voice+ app or as a custom app built around a Voice+ node (the approach this guide covers). Capabilities include:

  • Custom Welcome message & process

  • Combine scripted responses

  • Turn off small talk

  • UI navigation

  • Fill forms

  • Use real-time context

  • Trigger commands

  • Answer FAQs

  • MCP tools

Head to nlx.ai and select the Touchpoint widget to try a Voice+ experience for yourself.

Requirements

Create KB & add metadata

A Q&A knowledge base (KB) provides answers to commonly asked questions that users may pose while engaged with your conversational AI. It also serves another significant purpose: supporting navigation and custom actions that can be triggered in the frontend when appropriate.

User: What do you have that's vegetarian on the lunch menu?

Knowledge base answer: Our lunch menu includes a variety of sandwiches and salads

Action: Menu page is loaded

Custom action: Vegetarian options are shown in view pane

To support these actions, metadata is added to relevant KB responses as key-value pairs. In the above example, the following metadata would be provided:

nlx:destination: https://www.restaurant.com/menu
nlx:action: showLunch
nlx:actionPayload.vegetarian: true
nlx:actionPayload.glutenFree: false

Thus, when the KB answer is delivered, Touchpoint navigates to the menu page and sends a structured payload for the custom showLunch action, including the vegetarian and glutenFree values.
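
For reference, here's a minimal sketch of how that command could reach your frontend, assuming the bidirectional custom handler signature shown in the Touchpoint example later in this guide (an action name plus a payload object assembled from the nlx:actionPayload.* keys):

function handleCustomCommand(action, payload) {
  // action comes from nlx:action; payload fields mirror the nlx:actionPayload.* keys
  if (action === "showLunch") {
    // payload: { vegetarian: true, glutenFree: false }
    showLunchMenu(payload); // showLunchMenu is a hypothetical UI helper
  }
}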

Supported metadata keys

| Key | Classification | Action | Description |
| --- | --- | --- | --- |
| nlx:destination | navigation | page_custom | Navigate to a specific page or section |
| nlx:action | custom | Custom action name | Send custom actions to the frontend |
| nlx:actionPayload | custom | Custom action data | Optional; only taken into account if the nlx:action key is present. Sent as the payload key to the frontend along with command type custom and action set to the nlx:action key value |
| nlx:uri | n/a | Custom behavior (see Advanced section below) | Limit responses and actions to specific URIs within your client application |

  • Select Resources > Knowledge bases in your workspace menu

  • On the Q&As tab, upload or manually enter new articles

  • Select the Metadata tab

    • Choose the Auto-generate option to input a sample schema, or manually enter metadata properties > Click Save

  • On the Q&As tab, expand any question that requires navigation or custom actions to be executed on your frontend when asked by a user

    • Click + Add metadata

    • Input values into the relevant metadata key fields > Click Save

  • Choose the Publish tab of your knowledge base > Publish new version

Advanced: Using nlx:uri

In the food menu example, you might provide a different answer to the question if the user is already on the menu page. That's where nlx:uri can change how Voice+ behaves:

Create a new article in your Q&A knowledge base with the following content:

  • Question: How do I know which lunch options are vegetarian?

  • Answer: Look for the green leaf icon next to listed menu options.

Then add the following metadata to the article, including an nlx:uri key:

| Metadata key | Value |
| --- | --- |
| nlx:destination | N/A |
| nlx:action | showLunch |
| nlx:actionPayload | {} |
| nlx:actionPayload.vegetarian | true |
| nlx:actionPayload.glutenFree | false |
| nlx:uri | /menu |

With nlx:uri set to /menu, this answer and its actions apply only when the user is already on the menu page: Touchpoint receives the custom action payload, but no navigation command is sent. The voice assistant also adjusts its spoken response to be contextually aware of the current page.

🧠 Want to learn more? Explore all customization options with Voice+

Create flow

The Welcome flow kickstarts your voice assistant's workflow. It delivers the initial greeting and activates Voice+ mode within your conversational AI application. Once active, your frontend (where the application is installed) is ready to receive API calls from NLX, enabling it to handle navigation, fill forms, trigger custom UI actions, and provide answers to user questions.

  • Select Resources > Choose Flows card

  • Create a new flow > Provide a name

  • Add a Basic node containing a welcome message to fit your use case > Attach it to the Start node

  • Add a Voice+ node > Link it to the Basic node's Next path

    • Ensure Bidirectional mode is turned ON

    • Assign the knowledge base created in Step 1

    • Optionally enable Generative response to support small talk and responses to questions not detected in your knowledge base

    • Optionally assign MCP-enabled flows as Tools, if you would like your assistant to retrieve external data or execute a workflow during conversation

  • Click Save on the Canvas toolbar

Deploy

Now you'll create your conversational AI application, attach your flow, and deploy your changes.

  • Select Applications from workspace menu

  • Create an application > Choose Blank application > Select the Custom app type and provide a name

  • On Configuration tab:

    • Choose API channel > Select General tab in modal > Whitelist your domain

    • Select Voice tab in modal > Choose a TTS provider and the voice persona you would like to use (choose from built-in Inworld AI, ElevenLabs, Hume, OpenAI, or Amazon Polly)

    • Attach your flow made in Step 2 to the Functionality section > Select the Default behavior cog and assign your flow to the Welcome default

A build packages your application with a snapshot of the current state of its flows, languages, and application setup. A deployment then pushes a successful build to the delivery channels where your app will be installed:

  • Click deployment status in upper right > Select Build and deploy

    • Review the Validation check for critical errors or detected UX issues in custom flows. Before each new build starts, a validation check runs to preview potential errors that could cause the build to fail. Detected issues are listed with descriptions and potential solutions

    • Click Create build

  • Click deployment status in upper right > Select All builds

  • Choose Deploy on successful build > Click Create deployment

Install Touchpoint SDK

  • Click the Configuration tab of your application > Select the API channel under Delivery

  • Choose the Setup instructions tab > Click Open Touchpoint configurator

  • On Configurator site:

    • Toggle ON Voice mini under Input in the SDK configuration

    • Toggle ON Bidirectional Voice+ in the SDK configuration

    • Define handlers for navigation, form fill, and custom commands in your frontend code

      • Use provided metadata to route each command to the correct UI behavior

Example:

function handleShowLunch(payload) {
  // Payload fields mirror the nlx:actionPayload.* metadata keys on the KB response
  setVegetarianOption(payload.vegetarian);
  setGlutenFreeOption(payload.glutenFree);
}

function handleCustomCommand(action, payload) {
  // Example: voice-enabled lunch menu navigation
  if (action === "showLunch") {
    handleShowLunch(payload);
  }
}

const touchpointOptions = {
  config: {
    applicationUrl: "YOUR_APPLICATION_URL",
    headers: {
      "nlx-api-key": "YOUR_API_KEY",
    },
    languageCode: "en-US",
  },
  input: "voiceMini", // Enables voice input with bidirectional support
  bidirectional: {
    custom: handleCustomCommand,
  },
};
  • Install the Touchpoint SDK into your website or app
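
If you're embedding Touchpoint directly in a web page, initialization might look like the sketch below. This assumes the @nlxai/touchpoint-ui package and its create export as described in the SDK setup instructions; confirm the exact snippet against the Touchpoint configurator output.

// Minimal sketch: initialize Touchpoint with the options defined above.
// Package name and create() export are assumptions based on the SDK setup instructions.
import { create } from "@nlxai/touchpoint-ui";

const touchpoint = await create(touchpointOptions);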

Want to take your app offline? Click the deployed build > Select the Deployment tab in the modal > Scroll to the Danger zone and click Delete deployment. The app stays offline until you redeploy.
