Quickly set up your custom NLP in your NLX workspace
This feature is currently available to enterprise tiers only.
Why use a custom NLP?
Off-the-shelf NLP providers, such as Amazon Lex or Google Dialogflow, may not offer the functionality a business requires for handling conversations with its users. Large enterprises often resort to tailored solutions built on a custom NLP model.
How does NLX handle my custom NLP?
Just as Amazon Lex or Google Dialogflow provide APIs to ensure their NLPs are compatible with conversational AI builders, NLX provides you with simple API specifications that allow your custom NLP to handle the same actions. These include the build and deployment of an application as well as essential conversation runtime operations, such as disambiguating a user utterance.
As with off-the-shelf NLPs, a custom NLP lets you test utterances through our automated test suite, architect and build flows using the Canvas builder, and track the performance of conversations with analytics.
After your custom NLP's API is made compatible and integrated with NLX, you may select it as the engine of choice when deploying your application.
Create compatible API
Your custom NLP's API interfaces with NLX to build your conversation flows and process user utterances.
Integration architecture
Before you can link your custom NLP in NLX, its API endpoints must be configured according to our specification:
POST /disambiguate
Disambiguate unstructured text using NLP

Authorizations
x-api-key (string, required)

Body
buildId (string, optional)
utterance (string, optional)
languageCode (string, optional): en-US, es-ES, etc.
context (object, optional): context attributes available in the NLX conversation

Responses (application/json)
200: Successful operation
400: Bad request
500: Internal error
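A minimal handler for this endpoint might look like the sketch below (pure Python, no web framework, so it stays framework-agnostic). The spec above does not show the 200 response schema, so the matched-intent shape, the model logic, and the API key value are all illustrative assumptions:

```python
# Assumption: the API key you configure for NLX to send in x-api-key.
API_KEY = "example-secret"

def handle_disambiguate(headers: dict, body: dict) -> tuple[int, dict]:
    """Sketch of POST /disambiguate: validate the API key, then run the
    custom model over the utterance for the requested build."""
    # The spec above lists 400/500 error responses; a failed key check is
    # treated as a 400 here for simplicity.
    if headers.get("x-api-key") != API_KEY:
        return 400, {"message": "Bad request: invalid x-api-key"}
    utterance = body.get("utterance")
    if not isinstance(utterance, str) or not utterance.strip():
        return 400, {"message": "Bad request: utterance is required"}
    # Placeholder for the real model call; a production handler would load
    # the cached build metadata for body["buildId"] and score each intent.
    matched = "OrderStatus" if "order" in utterance.lower() else "Fallback"
    # Assumed response shape -- the actual schema is defined by your API.
    return 200, {
        "buildId": body.get("buildId"),
        "languageCode": body.get("languageCode", "en-US"),
        "intent": matched,
        "confidence": 0.9 if matched != "Fallback" else 0.1,
    }
```

In practice this function would sit behind whatever HTTP server you run on-premises; only the request/response contract matters to NLX.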
POST /builds
Create a new NLP build

Authorizations
x-api-key (string, required)

Body
buildId (string, optional)
signedUrl (string, optional)

Responses (application/json)
200: Successful operation
400: Bad request
500: Internal error
The signedUrl attribute in the request body refers to a presigned S3 URL for accessing the bot's build metadata. The URL expires in 5 minutes, which should suffice for downloading the metadata; your custom NLP may cache the build metadata and use it for subsequent disambiguation requests. The artifact is a zipped file containing the following top-level files and directories:
intents - A directory with information about the bot's intents.
Includes a sub-directory for each languageCode.
Within each languageCode, there is one JSON file per intent containing metadata such as utterances, intentId, and slots, with each utterance translated to the respective languageCode.
slotTypes - A directory with information about the bot's slots.
Includes a sub-directory for each languageCode.
Within each languageCode, there is one JSON file per slot with metadata like values and synonyms translated to the relevant languageCode.
manifest.json - A JSON file with metadata about the build, including attributes like botId, buildId, supported languageCodes and createdTimestamp.
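The layout above can be unpacked with standard tooling. The sketch below (standard library only) indexes the artifact's bytes, as downloaded from the presigned URL, into a dictionary keyed the same way as the directory structure; the file contents used in the usage notes are hypothetical:

```python
import io
import json
import zipfile

def load_build_artifact(zip_bytes: bytes) -> dict:
    """Index a build artifact laid out as described above: manifest.json at
    the root, plus intents/<languageCode>/*.json and
    slotTypes/<languageCode>/*.json."""
    build = {"manifest": None, "intents": {}, "slotTypes": {}}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            parts = name.split("/")
            if name == "manifest.json":
                build["manifest"] = json.loads(zf.read(name))
            elif (
                len(parts) == 3
                and parts[0] in ("intents", "slotTypes")
                and name.endswith(".json")
            ):
                top, language_code, filename = parts
                key = filename.removesuffix(".json")
                build[top].setdefault(language_code, {})[key] = json.loads(zf.read(name))
    return build
```

Caching the returned dictionary keyed by buildId lets the disambiguation endpoint look up intents and slot types without re-fetching the artifact after the 5-minute URL expiry.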
GET /builds/{buildId}
Retrieve the status of a build

Authorizations
x-api-key (string, required)

Path parameters
buildId (string, required)

Responses (application/json)
200: Successful operation
400: Bad request
404: Not found
500: Internal error
PUT /builds/{buildId}
Update the deployment status of a build

Authorizations
x-api-key (string, required)

Path parameters
buildId (string, required)

Body
action (string enum, optional)

Responses (application/json)
200: Successful operation
400: Bad request
404: Not found
500: Internal error
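Taken together, the three build endpoints form a simple lifecycle: create a build, poll its status, then flip its deployment state. A minimal in-memory sketch follows; the status value and the action names are placeholders, since the spec above does not list the enum's possible values:

```python
class BuildStore:
    """Sketch of the build lifecycle behind POST /builds,
    GET /builds/{buildId}, and PUT /builds/{buildId}."""

    def __init__(self):
        self._builds = {}

    def create(self, build_id: str, signed_url: str) -> tuple[int, dict]:
        # POST /builds: kick off a build from the presigned metadata URL.
        if not build_id:
            return 400, {"message": "Bad request: buildId is required"}
        self._builds[build_id] = {
            "status": "BUILDING",  # placeholder status name
            "signedUrl": signed_url,
            "deployed": False,
        }
        return 200, {"buildId": build_id}

    def status(self, build_id: str) -> tuple[int, dict]:
        # GET /builds/{buildId}: report build progress.
        build = self._builds.get(build_id)
        if build is None:
            return 404, {"message": "Not found"}
        return 200, {"buildId": build_id, "status": build["status"]}

    def update_deployment(self, build_id: str, action: str) -> tuple[int, dict]:
        # PUT /builds/{buildId}: "deploy"/"undeploy" stand in for the
        # spec's unlisted action enum values.
        build = self._builds.get(build_id)
        if build is None:
            return 404, {"message": "Not found"}
        if action not in ("deploy", "undeploy"):
            return 400, {"message": "Bad request: unknown action"}
        build["deployed"] = action == "deploy"
        return 200, {"buildId": build_id, "deployed": build["deployed"]}
```

A real implementation would run the build asynchronously and move the status from an in-progress value to a terminal one, which is what NLX polls for via the GET endpoint.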
Make sure to set up an API key for your API, as it will be required during the integration step. If you require private connectivity between NLX and your on-premises API, please contact your NLX Customer Success Manager.