Customizing Voice+
Quick implementation guides for Voice+ bidirectional functionality.
Implementation guides
How Voice+ works
Voice+ operates through three core components:
VoiceMini Input Mode: A compact floating voice interface that doesn't obstruct your UI
Bidirectional Handlers: Functions that respond to voice commands by updating your page
Automatic Context Analysis: Intelligent scanning of page forms and navigation options
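For intuition about the handler component, the sketch below shows one way a bidirectional handler could map an incoming voice command to a page update. The function name and command shape here are illustrative assumptions, not the actual Voice+ payload.

```javascript
// Hypothetical handler sketch (names and command shape are assumptions,
// not the real Voice+ payload).
function handleVoiceCommand(command, page) {
  switch (command.classification) {
    case "navigation":
      // e.g. "Go to dashboard" -> update the current route
      return { ...page, route: command.destination };
    case "custom":
      // e.g. "Add to cart" -> record an application-specific action
      return { ...page, lastAction: command.action };
    default:
      // Unrecognized commands leave the page unchanged
      return page;
  }
}
```

The handler returns a new page state rather than mutating the old one, which keeps it easy to test and to plug into state-driven frameworks.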
Configuration Overview
To enable Voice+ in your Touchpoint implementation, use the voiceMini
input mode with bidirectional configuration:
import { create } from "@nlxai/touchpoint-ui";

const touchpoint = await create({
  config: {
    applicationUrl: "YOUR_APPLICATION_URL",
    headers: {
      "nlx-api-key": "YOUR_API_KEY",
    },
    languageCode: "en-US",
  },
  input: "voiceMini", // Required for Voice+
  bidirectional: {},
});
Command Types
Voice+ classifies incoming voice commands into distinct types, including:
Navigation: Page navigation and routing. Examples: "Go back", "Next page", "Go to dashboard"
Custom: Application-specific actions. Examples: "Search products", "Add to cart", "Save draft"
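For intuition only, the example phrases above can be arranged as a small lookup from phrase to command type. Voice+ performs this classification itself; this table is just a local illustration.

```javascript
// Illustrative phrase -> command-type lookup mirroring the examples above.
const commandExamples = {
  navigation: ["Go back", "Next page", "Go to dashboard"],
  custom: ["Search products", "Add to cart", "Save draft"],
};

// Return the command type for a known example phrase, or "unknown".
function classifyExample(phrase) {
  for (const [type, phrases] of Object.entries(commandExamples)) {
    if (phrases.includes(phrase)) return type;
  }
  return "unknown";
}
```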
Getting Started
Voice+ requires minimal setup: enable the voiceMini
input mode and provide bidirectional handlers. The system automatically:
Analyzes your page forms for voice-fillable fields
Detects navigation possibilities
Manages conversation context across interactions
Handles framework-specific event dispatching
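To illustrate the first step above, a form scan could look roughly like the sketch below: collect labeled, visible inputs as candidate voice-fillable fields. Voice+ does this analysis automatically; this function is only a simplified assumption for intuition.

```javascript
// Simplified sketch of page-form analysis (not the actual Voice+ internals):
// gather non-hidden form controls as candidate voice-fillable fields.
function findFillableFields(doc) {
  return Array.from(doc.querySelectorAll("input, select, textarea"))
    .filter((el) => el.type !== "hidden")
    .map((el) => ({
      name: el.name || el.id, // how the field would be referred to by voice
      tag: el.tagName.toLowerCase(),
    }));
}
```

In a browser you would pass `document`; the `doc` parameter also makes the sketch easy to exercise with a stub.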
Ready to implement Voice+? Check out our configuration guide and working examples to get started.