Security is not a checkbox on our proposal template. It's the reason most of our clients come to us in the first place — they can't use public LLM APIs because of regulatory or data-sensitivity constraints. This page describes how we think about and operate security across every engagement.
Our security posture
Data minimization
We request only the data strictly required for the engagement, using test data wherever possible and synthetic data when feasible.
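For a sense of what this looks like in practice, here is a minimal sketch of synthetic data generation using the open-source Faker library; the field names and record shape are hypothetical, not drawn from a real engagement:

```python
# Minimal sketch: generating synthetic records so real customer data
# never needs to leave the client's systems. Fields are hypothetical.
from faker import Faker

fake = Faker()

def synthetic_customer() -> dict:
    """Produce one synthetic customer record with realistic-looking values."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "signup_date": fake.date_this_decade().isoformat(),
    }

# A small evaluation set built entirely from synthetic data.
eval_set = [synthetic_customer() for _ in range(100)]
```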
Your infra, your keys
Production credentials never touch our laptops. We operate through bastion hosts or break-glass accounts your team provisions.
Encryption everywhere
At rest with AES-256, in transit with TLS 1.3. Model weights encrypted on disk. PII redacted at ingestion.
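To illustrate the ingestion step, here is a minimal sketch of regex-based PII redaction; a production pipeline would pair this with a trained PII detector, and the patterns and placeholder labels below are assumptions for the example:

```python
# Minimal sketch of PII redaction at ingestion. The patterns are
# illustrative; real deployments also use a dedicated PII/NER detector.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach me at [EMAIL] or [PHONE].
```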
Audit trails
Every model inference logged with request, response, timestamp, user. Retention configurable per engagement.
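A minimal sketch of what one such audit record can look like, assuming an append-only JSON-lines log; the field names and the `log_inference` helper are illustrative, not our production schema:

```python
# Minimal sketch of per-inference audit logging. Field names are
# illustrative; real engagements align with the client's logging stack.
import json
import time
import uuid

def log_inference(user: str, request: str, response: str,
                  log_path: str = "audit.jsonl") -> None:
    """Append one inference event to an append-only JSON-lines audit log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "request": request,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_inference("analyst@client.example", "Summarize the Q3 report", "Q3 revenue...")
```

Because the log is append-only, retention reduces to rotating or pruning files on whatever schedule the engagement specifies.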
Access control
SSO via your IdP (Okta, Azure AD, Google Workspace). Role-based permissions. Just-in-time elevation, never standing admin.
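To make the just-in-time idea concrete, here is a minimal sketch in which every admin grant is explicitly time-boxed and expires on its own; the `Grant` type, names, and durations are hypothetical:

```python
# Minimal sketch of just-in-time elevation: grants are explicit and
# time-boxed, and there is deliberately no permanent-admin code path.
import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    role: str
    expires_at: float  # epoch seconds

_grants: list[Grant] = []

def elevate(user: str, role: str, minutes: int = 30) -> Grant:
    """Grant a role for a bounded window only."""
    grant = Grant(user, role, time.time() + minutes * 60)
    _grants.append(grant)
    return grant

def has_role(user: str, role: str) -> bool:
    """A role check passes only while an unexpired grant exists."""
    now = time.time()
    return any(g.user == user and g.role == role and g.expires_at > now
               for g in _grants)

elevate("ops@client.example", "admin", minutes=15)
assert has_role("ops@client.example", "admin")  # true only for 15 minutes
```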
Incident response
Documented IR playbook, 24-hour notification commitment for security events, quarterly tabletop exercises.
Compliance frameworks we build to
- GDPR (EU) — DPA available, data residency options, right-to-be-forgotten support
- DPDP (India) — compliant handling of personal data per the Digital Personal Data Protection Act 2023
- HIPAA (US healthcare) — BAA available for healthcare engagements, PHI isolation architecture
- SOC 2 Type II — system architecture designed to pass SOC 2 audits; we can support your Type II engagement
- ISO 27001 — ISMS-compatible logging and controls
- PCI-DSS — when card data is in scope (we recommend keeping it out of scope where possible)
Prompt injection & LLM-specific threats
Traditional security models don't cover LLM-specific attack vectors — prompt injection, jailbreaks, data exfiltration via model output, training data leakage. Every system we ship includes the following defenses (sketched in code after the list):
- Input sanitization — filtering and rewriting of suspicious prompt patterns before they hit the model
- Output filtering — PII detection, code execution blocking, URL extraction, competitor-mention redaction as needed
- Constrained generation — structured output schemas that prevent free-form exfiltration
- Rate limiting & anomaly detection — unusual query patterns trigger review
- System prompt hardening — protection against prompt leakage and instruction override
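To make these layers concrete, here is a minimal sketch combining input sanitization, output filtering, and schema validation standing in for constrained generation; the injection patterns, schema, and sample strings are illustrative assumptions, not our production rule set:

```python
# Minimal sketch of three defense layers: input sanitization, output
# filtering, and schema-validated output. Patterns are placeholders.
import json
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def sanitize_input(prompt: str) -> str:
    """Reject prompts matching known injection patterns before inference."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected by input sanitizer")
    return prompt

def validate_schema(raw_model_output: str) -> dict:
    """Accept only JSON matching the expected schema, never free-form text."""
    data = json.loads(raw_model_output)
    if set(data) != {"answer", "confidence"}:
        raise ValueError("output does not match schema")
    return data

def filter_output(text: str) -> str:
    """Redact PII (here: just emails) before anything leaves the system."""
    return EMAIL.sub("[REDACTED]", text)

prompt = sanitize_input("Summarize the attached policy document.")
model_output = '{"answer": "Contact a@b.example for details.", "confidence": 0.9}'
result = validate_schema(model_output)
result["answer"] = filter_output(result["answer"])
print(result)  # {'answer': 'Contact [REDACTED] for details.', 'confidence': 0.9}
```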
What happens to your data after engagement ends
By default, your data never leaves your infrastructure in the first place. When we do need temporary access (e.g., a sample for model evaluation), it's scoped, time-boxed, and destroyed on engagement close. We provide a signed data destruction certificate within 30 days of project end.
Responsible disclosure
Found a security issue in our code, documentation, or public-facing systems? Email info@adorbistech.com with subject line "Security". We commit to acknowledging reports within 48 hours and providing a full response within 7 days. We do not currently run a bug bounty, but we will credit responsible disclosures publicly with your consent.