Ariso.ai’s Ari is a context-aware AI productivity assistant that understands a company, its work, and its priorities. It integrates with calendars, documents, and collaboration tools to deliver personalized drafts, insights, and automation that move the day forward.
Ari, a multi-tenant AI assistant and coach, needed to encrypt millions of sensitive records — messages, meeting transcripts, personal reflections, and third-party credentials — while maintaining strict cryptographic isolation between tenants, users, and sessions. Ariso adopted HashiCorp Vault's Transit secrets engine to implement envelope-based encryption across 21 database tables. By combining Vault's context-based key derivation with an intelligent data encryption key caching layer, the platform achieved:
- Sub-millisecond Vault transit latency — 0.46ms median (p50), 0.63ms at p99, measuring Vault-side processing only
- 8:1 encrypt-to-decrypt ratio confirming effective DEK caching
- Three isolation levels — organization, user, and session — derived from a single master key
- Zero plaintext in production across all sensitive data

The organization chose HCP Vault Dedicated to eliminate the operational burden of self-managing Vault infrastructure.
»1. The challenge
»Prior state
Before Vault, Ariso relied on application-level encryption with manually managed keys stored in environment variables. This created several problems:
- No tenant isolation — a single compromised key exposed all organizations' data
- No key rotation — rotating keys required coordinated downtime and re-encryption
- No audit trail — cryptographic operations were invisible to security teams
- Key sprawl — each service managed its own keys independently
Together, these limitations created unacceptable business risk: broad tenant exposure, operational fragility, and no auditable controls.
»What the platform handles
The AI agent platform processes sensitive data across multiple categories daily:
- Direct messages and channel conversations
- Meeting transcripts and coaching sessions
- Personal reflections and memories
- Third-party integration credentials (OAuth tokens, API keys)
- Survey responses and behavioral analytics
Each piece of data belongs to a specific user within a specific organization. Some data is shared across the org (meeting transcripts), some is private to a user (personal reflections), and some should be cryptographically inaccessible after a session ends (ephemeral conversations).
"We needed cryptographic isolation between tenants, between users within a tenant, AND for ephemeral session data — all without managing thousands of individual keys." — Ariso.ai CEO
»2. The solution: Transit engine and envelope encryption
»Why Vault transit
Ariso evaluated cloud-native KMS options (AWS KMS, Azure Key Vault, Google Cloud KMS) but needed two capabilities none of them offered natively: encryption as a service with context-based key derivation and multi-cloud portability. Vault's Transit Engine was the only solution that provided both.
»Architecture: Two-layer envelope encryption
Rather than sending all data through Vault (which would create a bottleneck), Ariso uses envelope encryption — Vault wraps the keys, the application encrypts the data locally.

Layer 1 — Data Encryption Key (DEK): A random AES-128-GCM key generated per encryption context. The application uses this key to encrypt data locally at memory speed.
Layer 2 — Key Encryption Key (KEK): Managed by Vault's Transit Engine. Vault wraps (encrypts) the DEK so it can be safely stored alongside the encrypted data. The DEK is never stored in plaintext — only Vault can unwrap it.
This separation means Vault only handles small key payloads (16 bytes), not the actual data. The bulk encryption happens locally, keeping latency low regardless of payload size.
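The two layers can be sketched in a few lines, assuming Node's built-in crypto for the local (Layer 1) work; the Vault wrap step is shown only as a comment mirroring the transit API shape, and all names here are illustrative rather than Ariso's actual code.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Layer 1: a fresh 16-byte DEK encrypts the payload locally with AES-128-GCM.
interface LocalEnvelope {
  iv: Buffer;         // 96-bit GCM nonce
  ciphertext: Buffer;
  authTag: Buffer;    // GCM authentication tag
}

function encryptLocally(plaintext: Buffer, dek: Buffer): LocalEnvelope {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-128-gcm", dek, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

function decryptLocally(env: LocalEnvelope, dek: Buffer): Buffer {
  const decipher = createDecipheriv("aes-128-gcm", dek, env.iv);
  decipher.setAuthTag(env.authTag);
  return Buffer.concat([decipher.update(env.ciphertext), decipher.final()]);
}

// Layer 2 (not shown as a live call): only the 16-byte DEK goes to Vault, e.g.
//   POST /v1/transit/encrypt/master-kek  { plaintext: base64(dek), context: ... }
// and the returned "vault:v1:..." ciphertext is stored beside the envelope.
const dek = randomBytes(16);
const envelope = encryptLocally(Buffer.from("meeting transcript"), dek);
const restored = decryptLocally(envelope, dek);
```

Note that the payload never leaves the process; no matter how large the transcript, the only bytes crossing the network are the wrapped 16-byte DEK.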
»3. Key derivation: Multi-level isolation from a single key
»The problem
Ariso needed cryptographic separation at three levels — organization, user, and session — but managing individual keys for each would mean tens of thousands of keys with complex lifecycle management.
»The solution: Context-based key derivation
Vault's Transit Engine supports derived keys: a single master KEK combined with a context parameter produces a unique cryptographic key for each context. Different contexts yield mathematically independent keys, but all derive from one manageable master key.
// A single master KEK with derivation enabled
const kekConfig = {
  kek_name: 'master-kek',
  kek_type: 'aes128-gcm96',
  derived: true,
};

type EncryptionContext =
  | { type: 'organization'; orgId: string }
  | { type: 'user'; userId: string }
  | { type: 'session'; sessionId: string };

const base64 = (value: string): string => Buffer.from(value).toString('base64');

// Context determines which derived key Vault uses
function deriveContext(ctx: EncryptionContext): string {
  switch (ctx.type) {
    case 'organization':
      return base64(`org:${ctx.orgId}`);
    case 'user':
      return base64(`user:${ctx.userId}`);
    case 'session':
      return base64(`session:${ctx.sessionId}`);
  }
}
Each context derives a unique encryption key per organization, per user, or per session. This gives Ariso billions of unique keys without any key management overhead.
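The derived context rides along with every wrap and unwrap call. The path and body fields below follow Vault's transit encrypt API, but the helper itself is a hypothetical sketch, not Ariso's client code.

```typescript
// Illustrative request shape for wrapping a DEK under a derived key.
function buildWrapRequest(kekName: string, dek: Buffer, context: string) {
  return {
    method: "POST" as const,
    path: `/v1/transit/encrypt/${kekName}`,
    body: {
      plaintext: dek.toString("base64"), // transit expects base64 plaintext
      context,                           // base64 context selects the derived key
    },
  };
}
```

Two requests that differ only in `context` are served by mathematically independent derived keys, so a ciphertext wrapped under one context cannot be unwrapped under another. That property is what enforces the org, user, and session boundaries.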
»How each level is used
| Data type | Encryption context | Isolation benefit |
| --- | --- | --- |
| Organization settings | organization | Shared across all users in org |
| Meeting transcripts | organization | Accessible to all org members with permission |
| Personal messages | user | Only decryptable by specific user's key |
| Personal reflections | user | Cryptographically private to individual |
| Ephemeral conversation | session | Cannot decrypt after session end |
»Session-level encryption: Forward secrecy
Session-level derivation provides forward secrecy for ephemeral interactions. When a session ends, the DEK cache entry is evicted. Without the cached DEK, the data requires a Vault unwrap call with the exact session context — and since session IDs are unique UUIDs, old session data becomes cryptographically inaccessible once the session context is no longer active. This directly supports "forget me" compliance requirements.
»Why key derivation matters
- No key sprawl. One master KEK serves billions of derived keys across all tenants, users, and sessions.
- Simple key rotation. Rotate the master KEK once; all derived keys update automatically. Zero downtime.
- Compliance flexibility. Different retention and access policies per context type without separate key management.
- Forward secrecy. Session-scoped keys become inaccessible after session expiration.
»4. Performance at scale
»The caching strategy
Calling Vault for every encrypt/decrypt operation would add network latency to every database read and write. Ariso uses an in-memory DEK cache to avoid this.
On a cache hit (95.8% of operations), the DEK is already in memory — encryption and decryption happen locally at AES hardware speed with no network call. On a cache miss, a single Vault API call unwraps the DEK, which is then cached for subsequent operations.
Cache entries are keyed by {kek_name}:{context}:{vault_version}, so key rotation automatically invalidates stale entries via version mismatch.
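A minimal sketch of such a cache follows; the size limit, TTL, and method names are assumptions rather than Ariso's implementation, but the composite key and the version-mismatch invalidation behave as described above, and `evictContext` illustrates the session-end eviction from Section 3.

```typescript
interface CacheEntry {
  dek: Buffer;
  expiresAt: number; // epoch ms
}

class DekCache {
  private entries = new Map<string, CacheEntry>();

  constructor(private maxSize = 10_000, private ttlMs = 60 * 60 * 1000) {}

  private key(kekName: string, context: string, vaultVersion: number): string {
    return `${kekName}:${context}:${vaultVersion}`;
  }

  get(kekName: string, context: string, vaultVersion: number): Buffer | undefined {
    const k = this.key(kekName, context, vaultVersion);
    const entry = this.entries.get(k);
    if (!entry || entry.expiresAt < Date.now()) {
      this.entries.delete(k); // expired or absent: caller falls back to Vault
      return undefined;
    }
    // Map iteration order is insertion order; re-insert to mark recently used.
    this.entries.delete(k);
    this.entries.set(k, entry);
    return entry.dek;
  }

  set(kekName: string, context: string, vaultVersion: number, dek: Buffer): void {
    if (this.entries.size >= this.maxSize) {
      // Evict the least recently used entry (first in iteration order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(this.key(kekName, context, vaultVersion), {
      dek,
      expiresAt: Date.now() + this.ttlMs,
    });
  }

  // Session end: drop every DEK for that context, enforcing forward secrecy.
  evictContext(context: string): void {
    for (const k of this.entries.keys()) {
      if (k.split(":")[1] === context) this.entries.delete(k);
    }
  }
}
```

Because the Vault key version is part of the cache key, rotating the master KEK makes every lookup miss, and the next unwrap call repopulates the cache under the new version with no explicit flush.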
»Production latency (from Vault audit logs)
Latency is measured from Vault's own audit log timestamps — the time between receiving a transit request and emitting the response. These numbers reflect Vault-side processing only, excluding network round-trip from the application.
| Metric | Encrypt (182 ops sampled) | Decrypt (42 ops sampled) | All Transit |
| --- | --- | --- | --- |
| p50 | 0.46ms | 0.47ms | 0.46ms |
| p90 | 0.54ms | 0.59ms | 0.55ms |
| p95 | 0.57ms | 0.61ms | 0.59ms |
| p99 | 0.66ms | 0.63ms | 0.63ms |
| Max | 2.01ms | 0.63ms | 2.01ms |
| Avg | 0.47ms | 0.47ms | 0.47ms |
Overall, Vault transit operations average 0.47ms, with 0.63ms at p99.
Source: HCP Vault audit logs streamed to CloudWatch.
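Percentiles like those above can be derived from audit-log timestamp pairs with a small helper. This is a generic nearest-rank sketch, not Ariso's actual CloudWatch pipeline; the record shape is an assumption.

```typescript
// Hypothetical: request/response timestamps for one transit operation, epoch ms.
interface AuditPair {
  requestTime: number;
  responseTime: number;
}

// Nearest-rank percentile over an ascending-sorted array.
function percentile(sorted: number[], p: number): number {
  const rank = Math.max(1, Math.ceil((p / 100) * sorted.length));
  return sorted[rank - 1];
}

function latencyStats(pairs: AuditPair[]) {
  const deltas = pairs
    .map((x) => x.responseTime - x.requestTime)
    .sort((a, b) => a - b);
  return {
    p50: percentile(deltas, 50),
    p99: percentile(deltas, 99),
    avg: deltas.reduce((sum, d) => sum + d, 0) / deltas.length,
  };
}
```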
The 8:1 encrypt-to-decrypt ratio confirms the DEK caching strategy is working — most decrypt operations are served from the in-memory cache without calling Vault. Only cache misses (new keys, cache evictions, key rotations) require a Vault decrypt call.
The combination of sub-millisecond Vault transit latency, local AES encryption via envelope encryption, and aggressive DEK caching means Ariso's encryption layer adds negligible overhead to database operations.
»5. Results
»Data protection coverage
| Data category | Example | Encryption context | Key derivation |
| --- | --- | --- | --- |
| Organization data | org_settings, team_configs, meeting transcripts | organization | org:{orgId} |
| User messages | messages, conversations | user | user:{userId} |
| Session data | live_transcripts, ephemeral_notes | session | session:{sessionId} |
»Security posture
- Zero plaintext in production — all sensitive fields cleared after encryption
- Multi-level cryptographic isolation — org, user, and session-level key derivation from a single master key
- Forward secrecy — session-scoped encryption prevents retrospective access to ephemeral data
- Complete audit trail — 100% of cryptographic operations logged with context metadata
- Key rotation readiness — zero-downtime rotation tested across all context levels
»Infrastructure
Ariso chose HCP Vault Dedicated to avoid the operational burden of self-hosting (HA configuration, backup, patching, unseal management, certificate rotation). HCP Vault Dedicated includes Vault Enterprise features — performance replication, disaster recovery, namespaces, and Sentinel policies — at no additional license cost.
»6. Lessons learned
»Do
- Use envelope encryption — local AES encryption with Vault-wrapped keys keeps latency low regardless of payload size.
- Implement DEK caching — an LRU cache with 1-hour TTL reduced Vault API calls by 96%.
- Design context-based key derivation from the start — retrofitting isolation levels is significantly harder than building them in.
- Use session-scoped encryption for ephemeral data — it provides forward secrecy without additional key management.
- Design for key rotation on day one — include KEK version tracking in your schema so rotation is zero-downtime.
- Use HCP Vault unless you have a dedicated security infrastructure team.
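One way to track KEK versions in the schema is to store them explicitly on each record. The field names below are assumptions for illustration, though the `vault:vN:` ciphertext prefix is the transit engine's real output format.

```typescript
// Illustrative envelope record; field names are assumptions.
interface EncryptedRecord {
  ciphertext: string;  // base64 AES-GCM output (local encryption)
  iv: string;          // base64 GCM nonce
  authTag: string;     // base64 GCM tag
  wrappedDek: string;  // transit ciphertext, e.g. "vault:v2:..."
  kekName: string;     // which transit key wrapped the DEK
  kekVersion: number;  // tracked explicitly so rotation is zero-downtime
  context: string;     // base64 derivation context
}

// The transit ciphertext itself carries the wrapping key version in its prefix.
function kekVersionOf(wrappedDek: string): number {
  const match = /^vault:v(\d+):/.exec(wrappedDek);
  if (!match) throw new Error("not a transit ciphertext");
  return Number(match[1]);
}
```

After rotating the master KEK, records whose `kekVersion` lags the current version can be located with a simple query and rewrapped in the background, with reads continuing to succeed throughout.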
»Don't
- Don't send large payloads directly to Transit — use envelope encryption; Vault should only handle DEK wrapping.
- Don't cache unwrapped DEKs indefinitely — balance performance with security via appropriate TTLs.
- Don't ignore audit logs — they are your forensic trail for cryptographic operations.
- Don't hardcode Vault tokens — use dynamic authentication methods (Kubernetes auth, AWS IAM auth).
»Migration strategy
For teams adopting Vault encryption on existing data:
Phase 1: Schema Preparation
- Add nullable encrypted JSONB columns alongside existing plaintext columns
- Deploy Vault-integrated application code

Phase 2: Dual-Write
- Encrypt new data, write to both columns
- Backfill existing records in batches

Phase 3: Read Migration
- Prefer encrypted column if populated
- Fall back to plaintext during transition

Phase 4: Cleanup
- Verify 100% encryption coverage
- Clear plaintext columns
- Remove fallback code
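The Phase 3 read path can be sketched as a small helper; the row shape and column names here are illustrative, not a real schema.

```typescript
// Illustrative dual-read during migration: prefer the encrypted column,
// fall back to plaintext only while the transition window is open.
interface MigratingRow {
  body_plaintext: string | null;
  body_encrypted: string | null; // serialized envelope
}

function readBody(
  row: MigratingRow,
  decrypt: (envelope: string) => string,
): string {
  if (row.body_encrypted !== null) return decrypt(row.body_encrypted);
  if (row.body_plaintext !== null) return row.body_plaintext; // removed in Phase 4
  throw new Error("row has neither encrypted nor plaintext body");
}
```

Keeping the fallback in one function makes Phase 4 mechanical: once coverage reaches 100%, the plaintext branch is deleted and any remaining null-encrypted row fails loudly instead of silently serving unencrypted data.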
»Conclusion
HashiCorp Vault's Transit Secrets Engine solved the core challenge: encrypting large volumes of records daily across a multi-tenant platform with cryptographic isolation at organization, user, and session levels — without managing thousands of individual keys.
The three capabilities that made this possible:
- Envelope encryption — Vault wraps keys at 0.46ms (p50), the application encrypts data locally, adding negligible overhead to database operations.
- Context-based key derivation — billions of unique keys derived from a single master KEK, with org/user/session isolation and zero per-key management overhead.
- High-performance architecture — an 8:1 encrypt-to-decrypt ratio confirms DEK caching is effective, with ~97K transit operations per week handled by a 3-node STANDARD_SMALL cluster at sub-millisecond latency.
For multi-tenant applications processing sensitive data at scale, this pattern — envelope encryption, context-based key derivation, and intelligent caching — delivers both strong security and production-grade performance.
HCP Vault Dedicated directly mitigated the risks present in Ariso’s prior state by centralizing key governance, enforcing tenant- and session-level cryptographic isolation, and providing a complete audit trail for all encryption operations. At the same time, the managed service removed operational failure modes like unseal handling, patching, replication, and disaster recovery that would otherwise sit on the critical path of the platform’s security model.
This case study reflects a production implementation processing millions of encrypted records across a multi-tenant AI platform from Ariso Intelligence.