Security

Learn how to protect your NLX workspace and ensure your apps are compliant and secure

How NLX handles security

NLX protects your workspace and the applications you deploy by controlling access, reducing risk, and ensuring sensitive data is handled appropriately. Its security features and settings help you manage authentication, enforce workspace standards, and apply guardrails that keep conversations compliant and safe across environments.

Security in NLX is supported through:

  • Access control (who can do what)

  • Data protection (what gets stored, logged, or redacted)

  • Runtime protections (input/output checks, abuse prevention)

  • Auditability (what changed, when, and by whom)

Access control

Use roles and permissions to ensure only approved users can edit flows, deploy builds, manage integrations, or change workspace settings.

Common governance goals

  • Limit who can deploy to production

  • Restrict who can create/edit integrations (data requests, actions)

  • Allow read-only access for reviewers and stakeholders
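
The exact roles available are configured in your workspace settings. As a minimal sketch of the least-privilege mapping behind these goals, assuming hypothetical role and permission names (not NLX's actual role model):

```typescript
// Illustrative only: role and permission names below are hypothetical,
// not NLX's actual role model, which is managed in workspace settings.
type Permission = "deployProduction" | "editIntegrations" | "editFlows" | "read";

const rolePermissions: Record<string, Permission[]> = {
  admin: ["deployProduction", "editIntegrations", "editFlows", "read"],
  builder: ["editFlows", "read"], // can build, but not deploy or edit integrations
  reviewer: ["read"],             // read-only access for stakeholders
};

function can(role: string, permission: Permission): boolean {
  return rolePermissions[role]?.includes(permission) ?? false;
}

console.log(can("builder", "deployProduction")); // false — deploys stay restricted
console.log(can("reviewer", "read"));            // true
```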

Related: Roles & permissions

Sensitive data handling

NLX lets you reduce exposure of sensitive information by controlling what appears in logs and transcripts.

Examples

  • Mark schema fields as Sensitive in integrations so values are redacted

  • Use Guardrails to detect and mask PII patterns (credit cards, IDs, emails)

  • Ensure production logs don’t store data your policies prohibit
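
NLX applies masking through Guardrails and Sensitive field settings rather than custom code; the sketch below, using illustrative regular expressions and hypothetical labels, shows the general redaction pattern those features apply:

```typescript
// Illustrative only: patterns and labels are examples, not NLX's
// built-in PII detectors.
const piiPatterns: Array<{ label: string; pattern: RegExp }> = [
  { label: "CARD",  pattern: /\b(?:\d[ -]?){13,16}\b/g },       // credit card numbers
  { label: "EMAIL", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },  // email addresses
];

function maskPii(text: string): string {
  return piiPatterns.reduce(
    (masked, { label, pattern }) => masked.replace(pattern, `[REDACTED_${label}]`),
    text,
  );
}

console.log(maskPii("Card 4111 1111 1111 1111, contact jane@example.com"));
// "Card [REDACTED_CARD], contact [REDACTED_EMAIL]"
```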

Related: Guardrails (Mask enforcement), Data requests (Sensitive fields)

Secure integrations and secrets

Integrations are often the strongest boundary between your AI application and external systems. NLX supports security best practices through structured schemas and secrets management.

Best practices

  • Store keys and tokens as Secrets rather than hardcoding them in request headers

  • Use separate Development and Production endpoints where possible

  • Validate payloads with strict request/response schemas
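
In NLX, secrets are stored via the Secrets feature and referenced from integrations. The same ideas are shown below as a rough sketch, where the endpoint, environment variable names, header, and response schema are all hypothetical:

```typescript
// Illustrative only: ORDERS_API_KEY, ORDERS_API_URL, the endpoint path,
// and the response schema are hypothetical.
interface OrderStatusResponse {
  orderId: string;
  status: "pending" | "shipped" | "delivered";
}

async function getOrderStatus(orderId: string): Promise<OrderStatusResponse> {
  const apiKey = process.env.ORDERS_API_KEY; // never hardcode the token
  if (!apiKey) throw new Error("Missing ORDERS_API_KEY secret");

  // Separate endpoints per environment, selected via configuration.
  const baseUrl = process.env.ORDERS_API_URL ?? "https://api.example.com";

  const res = await fetch(`${baseUrl}/orders/${encodeURIComponent(orderId)}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Upstream error: ${res.status}`);

  // Strict response validation: reject payloads that don't match the schema.
  const body = (await res.json()) as Partial<OrderStatusResponse>;
  const validStatus = ["pending", "shipped", "delivered"].includes(body.status ?? "");
  if (typeof body.orderId !== "string" || !validStatus) {
    throw new Error("Response failed schema validation");
  }
  return body as OrderStatusResponse;
}
```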

Related: Integrations, Data requests, Secrets

Runtime protections

Security also covers what happens while users interact with your AI app. NLX guardrails can evaluate and control messages at runtime, both before user inputs reach an NLP or LLM and before outputs reach users.

Guardrails can enforce:

  • Prompt injection prevention and jailbreak detection (Input)

  • PII detection/masking (Input/Output)

  • Brand/legal compliance checks (Output)

  • Redirection of risky turns to escalation flows
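
These checks run inside the NLX platform; as a conceptual sketch of the input/output checkpoint pattern, with hypothetical check functions, verdict shape, and detection patterns:

```typescript
// Illustrative only: the verdict shape and detection patterns are
// hypothetical, not NLX's guardrail implementation.
type Verdict = { allowed: boolean; reason?: string };

// Input-side check, run before the message reaches the NLP/LLM.
function checkInput(message: string): Verdict {
  const injectionHints = [/ignore (all )?previous instructions/i, /system prompt/i];
  if (injectionHints.some((p) => p.test(message))) {
    return { allowed: false, reason: "possible prompt injection" };
  }
  return { allowed: true };
}

// Output-side check, run before the model's reply reaches the user.
function checkOutput(reply: string): Verdict {
  if (/\b(?:\d[ -]?){13,16}\b/.test(reply)) {
    return { allowed: false, reason: "PII detected in output" };
  }
  return { allowed: true };
}

async function handleTurn(
  userMessage: string,
  callModel: (m: string) => Promise<string>,
): Promise<string> {
  if (!checkInput(userMessage).allowed) {
    return "Let me connect you with an agent."; // risky turn redirected to escalation
  }
  const reply = await callModel(userMessage);
  return checkOutput(reply).allowed ? reply : "Let me connect you with an agent.";
}
```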

Related: Guardrails

Auditability and governance review

For governance and compliance workflows, NLX provides visibility into workspace changes and activity.
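
The precise shape of audit entries is not documented here; as a rough sketch with hypothetical field names, a compliance review typically relies on records capturing what changed, when, and by whom:

```typescript
// Illustrative only: field names and values are hypothetical, not the
// actual schema of NLX's Audit trail.
interface AuditEntry {
  timestamp: string;   // when the change happened (ISO 8601)
  actor: string;       // who made it
  action: string;      // what changed, e.g. "flow.updated", "build.deployed"
  resourceId: string;  // the affected flow, integration, or setting
  environment: "development" | "production";
}

const example: AuditEntry = {
  timestamp: "2024-05-01T14:32:00Z",
  actor: "jane@example.com",
  action: "build.deployed",
  resourceId: "flow-checkout",
  environment: "production",
};
```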

Related: Audit trail, Versioning
