
Custom NLP

Quickly set up your custom NLP in your NLX workspace


Why use a custom NLP?

Off-the-shelf NLP providers, such as Amazon Lex or Google Dialogflow, may not offer all of the functionality a business requires for handling conversations with its users. Large enterprises often turn to tailored solutions built around a custom NLP model.


How does NLX handle my custom NLP?

Just as Amazon Lex or Google Dialogflow provide APIs to ensure their NLPs are compatible with conversational AI builders, NLX provides you with simple API specifications that allow your custom NLP to handle the same actions. These include the build and deployment of an application as well as essential conversation runtime operations, such as disambiguating a user utterance.

Just as with off-the-shelf NLPs, you can test utterances through our automated test suite, architect and build intent flows using the Canvas builder, and track the performance of conversations with analytics when using a custom NLP.

After your custom NLP's API is made compatible and integrated into NLX, you may select it as the engine of choice when deploying your application.
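
At a glance, the specification below defines four operations your custom NLP must expose: POST /builds, GET /builds/{buildId}, PUT /builds/{buildId}, and POST /disambiguate. The following is a minimal sketch of such a service, assuming Python with FastAPI; the framework choice, handler names, and placeholder logic are illustrative, not part of the NLX specification.

from fastapi import FastAPI, Header

app = FastAPI()

# Sketch of the four endpoints NLX calls on a custom NLP.
# Response fields (buildId, status, errorMessage, intentId, slots, sentiment,
# confidenceScore) follow the specification below; all logic is a placeholder.

@app.post("/builds")
def create_build(body: dict, x_api_key: str = Header(...)):
    # Download the artifact from body["signedUrl"] and prepare the model here.
    return {"buildId": body.get("buildId"), "status": "built", "errorMessage": ""}

@app.get("/builds/{build_id}")
def get_build(build_id: str, x_api_key: str = Header(...)):
    # Report the current status of the build.
    return {"buildId": build_id, "status": "built", "errorMessage": ""}

@app.put("/builds/{build_id}")
def update_build(build_id: str, body: dict, x_api_key: str = Header(...)):
    # body["action"] carries the deployment action, e.g. "deploy".
    return {"buildId": build_id, "status": "built", "errorMessage": ""}

@app.post("/disambiguate")
def disambiguate(body: dict, x_api_key: str = Header(...)):
    # Run your model against body["utterance"] and return the matched intent.
    return {"intentId": "unknown", "slots": [], "sentiment": "positive", "confidenceScore": 1}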


Create compatible API

Your custom NLP's API interfaces with NLX to build your conversation flows and process user utterances.

As a result, before you can link your custom NLP in NLX, an API endpoint must be configured using our specification. During a build, NLX provides your API with a zipped artifact containing the following top-level files and directories (see the parsing sketch after this list):

  1. intents - A directory with information about the bot's intents.

    • Includes a sub-directory for each languageCode.

    • Within each languageCode, there is one JSON file per intent containing metadata such as utterances, intentId, and slots, with each utterance translated to the respective languageCode.

  2. slotTypes - A directory with information about the bot's slots.

    • Includes a sub-directory for each languageCode.

    • Within each languageCode, there is one JSON file per slot with metadata like values and synonyms translated to the relevant languageCode.

  3. manifest.json - A JSON file with metadata about the build, including attributes like botId, buildId, supported languageCodes and createdTimestamp.
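
As a rough illustration, a build artifact extracted to a local directory could be read like this. This is a Python sketch: the function name and return shape are illustrative assumptions, while the file and directory names follow the layout above.

# Walk an extracted build artifact laid out as described above:
#   manifest.json, intents/<languageCode>/<intent>.json, slotTypes/<languageCode>/<slot>.json
import json
from pathlib import Path

def load_build(artifact_dir: str) -> dict:
    root = Path(artifact_dir)
    manifest = json.loads((root / "manifest.json").read_text())

    intents, slot_types = {}, {}
    for language_dir in (root / "intents").iterdir():
        intents[language_dir.name] = [
            json.loads(f.read_text()) for f in language_dir.glob("*.json")
        ]
    for language_dir in (root / "slotTypes").iterdir():
        slot_types[language_dir.name] = [
            json.loads(f.read_text()) for f in language_dir.glob("*.json")
        ]

    return {"manifest": manifest, "intents": intents, "slotTypes": slot_types}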

Make sure to set up an API key for your API, as it will be required during the integration step. If you require private connectivity between NLX and your on-premises API, please contact your NLX Customer Success Manager.

Download the full OpenAPI specification: nlx-custom-nlp-openapi.yaml (5KB)


Integrate API

Once your API is ready for integration, navigate to Integrations in your workspace menu:

  • Click + Add integration > Choose Custom NLP from the dropdown

  • Give your integration a name

  • Enter both your Endpoint URL and API key

  • Select Create integration

If desired, at any time you can edit or delete your custom NLP integration by expanding the integration name and selecting the edit or delete icons.

The signedUrl attribute in the POST /builds request body refers to a presigned S3 URL for accessing the bot's build metadata. The URL expires in 5 minutes, which should suffice for downloading and caching the metadata within your custom NLP. Your custom NLP may cache and use the build metadata for disambiguation requests. The artifact is a zipped file containing the top-level files and directories described above under Create compatible API.
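
A minimal sketch of that download-and-cache step, assuming Python; the helper name, in-memory cache, and target directory are illustrative, not part of the specification.

# Download the build artifact from the presigned URL in the /builds request body
# and cache it before the URL expires (about 5 minutes).
import io
import zipfile
import urllib.request

BUILD_CACHE = {}  # buildId -> extracted artifact location, reused by /disambiguate

def ingest_build(build_id: str, signed_url: str, target_dir: str) -> None:
    with urllib.request.urlopen(signed_url) as response:
        archive = zipfile.ZipFile(io.BytesIO(response.read()))
    archive.extractall(target_dir)      # intents/, slotTypes/, manifest.json
    BUILD_CACHE[build_id] = target_dir  # later disambiguation requests can reuse it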

You can import and visualize the NLX OpenAPI specifications using Swagger Editor.

Your integration is now available for selection when choosing an engine under your application's Deployment tab.

(Diagram: Integration architecture)

POST /disambiguate

Disambiguate unstructured text using the NLP function.

Authorizations: x-api-key (header)

Body
  • buildId (string, optional)
  • utterance (string, optional)
  • languageCode (string, optional): en-US, es-ES, etc.
  • context (object, optional): context attributes available in the NLX conversation

Responses
  • 200: Successful operation (application/json)
  • 400: Bad request
  • 500: Internal error

Example request:
POST /disambiguate HTTP/1.1
Host: 
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 193

{
  "buildId": "text",
  "utterance": "text",
  "languageCode": "text",
  "state": {
    "intentId": "text",
    "slotToElicit": {
      "intentId": "text",
      "slotId": "text",
      "slotType": "text"
    },
    "conversationId": "text"
  },
  "context": {}
}

Example response:

{
  "intentId": "text",
  "slots": [
    {
      "slotId": "text",
      "value": "text"
    }
  ],
  "sentiment": "positive",
  "confidenceScore": 1
}
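
For a quick check of your endpoint before integrating it, the example request above can be reproduced with a short Python snippet; the host URL, API key, and utterance are placeholders, while the header and body fields come from the specification.

import requests

# Call the /disambiguate endpoint of your custom NLP service.
response = requests.post(
    "https://your-custom-nlp.example.com/disambiguate",  # hypothetical host
    headers={"x-api-key": "YOUR_API_KEY"},
    json={
        "buildId": "text",
        "utterance": "I want to check my order status",
        "languageCode": "en-US",
        "context": {},
    },
    timeout=10,
)
print(response.json())  # expects intentId, slots, sentiment, confidenceScore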

POST /builds

Create a new NLP build.

Authorizations: x-api-key (header)

Body
  • buildId (string, optional)
  • signedUrl (string, optional)

Responses
  • 200: Successful operation (application/json)
  • 400: Bad request
  • 500: Internal error

Example request:
POST /builds HTTP/1.1
Host: 
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 37

{
  "buildId": "text",
  "signedUrl": "text"
}

Example response:

{
  "buildId": "text",
  "status": "built",
  "errorMessage": "text"
}

PUT /builds/{buildId}

Update the deployment status of a build.

Authorizations: x-api-key (header)

Path parameters
  • buildId (string, required)

Body
  • action (string, enum, optional): e.g. "deploy"

Responses
  • 200: Successful operation (application/json)
  • 400: Bad request
  • 404: Not found
  • 500: Internal error

Example request:
PUT /builds/{buildId} HTTP/1.1
Host: 
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 19

{
  "action": "deploy"
}

Example response:

{
  "buildId": "text",
  "status": "built",
  "errorMessage": "text"
}

GET /builds/{buildId}

Retrieve the status of a build.

Authorizations: x-api-key (header)

Path parameters
  • buildId (string, required)

Responses
  • 200: Successful operation (application/json)
  • 400: Bad request
  • 404: Not found
  • 500: Internal error

Example request:
GET /builds/{buildId} HTTP/1.1
Host: 
x-api-key: YOUR_API_KEY
Accept: */*

Example response:

{
  "buildId": "text",
  "status": "built",
  "errorMessage": "text"
}