Deployment Architecture

Deployment architecture overview of the Revenue Recovery platform, including the four-layer model, deployment options, tenant isolation, and security architecture

This document provides a technical overview of the Revenue Recovery platform architecture to help system administrators, IT teams, and decision-makers understand the infrastructure components, deployment models, and security principles. This information is essential for planning your deployment and aligning the platform with your organization's infrastructure strategy.

Overview

The [Ai]levate Revenue Recovery platform is built on a cloud-native, four-layer architecture designed to deliver AI-powered denied claims management with enterprise-grade security, scalability, and flexibility. The platform supports two deployment models — SaaS and On-Premise — each leveraging the same core architecture but with different management responsibilities.

Four-Layer Architecture

The platform is organized into four distinct layers, each with specific responsibilities:

| Layer | Purpose | SaaS Model | On-Premise Model |
| --- | --- | --- | --- |
| Cloud Services Layer | Orchestration, workflows, authentication, RCM applications, agentic AI platform, metadata services | Managed by [Ai]levate | Managed by [Ai]levate |
| AI Compute Layer | AI model execution (reasoning, classification, text generation) on dedicated Tenstorrent hardware with vLLM interface (hosted within [Ai]levate Colo for SaaS, customer site/colo for On-Premise) | Managed by [Ai]levate | Managed by Customer |
| Database Storage Layer | Elastic datastore for structured claim data, metadata, and operational logs (AES-256 encrypted, tenant-isolated) | Managed by [Ai]levate | Managed by Customer |
| Relay Service Layer | Secure bridge between customer EHR and [Ai]levate platform (outbound-only connectivity) | Managed by Customer | Managed by Customer |

Key Architectural Principles:

  • Dedicated Resources: Each customer receives their own dedicated AI Warehouse and Elastic datastore, ensuring complete tenant isolation
  • No Data Persistence in Compute: AI Warehouses process tasks without persisting customer data
  • Outbound-Only Relay: The Relay Service uses outbound-only connectivity (TLS 1.2+ on port 443), eliminating the need for inbound firewall rules in SaaS deployments
  • Separation of Storage and Compute: Strict architectural boundary between data storage and processing layers
  • Data Residency: SaaS deployments in Azure East US region; on-premise deployments in customer-selected location (more SaaS regions to come)

For a comprehensive introduction to these architectural layers, see the Welcome & Product Introduction.

Deployment Models

The [Ai]levate Revenue Recovery platform supports flexible deployment models based on which infrastructure components the customer chooses to manage. The Cloud Services Layer is always managed by [Ai]levate, while the Database Storage Layer (Elastic datastore) and AI Compute Layer (Tenstorrent hardware) can be either [Ai]levate-managed (SaaS) or customer-managed (On-Premise).

Important Note: The term "on-premise" refers to customer-managed infrastructure components, not the location of the entire platform. The Cloud Services Layer always operates in [Ai]levate's Azure environment, regardless of deployment model. From an architectural perspective, the Elastic datastore is always external to the platform cluster—the key difference is whether [Ai]levate or the customer provisions and manages it.

This flexibility enables four possible deployment configurations:

  1. Full SaaS: [Ai]levate manages Elastic and AI Warehouse
  2. Hybrid (ES On-Prem): Customer manages Elastic, [Ai]levate manages AI Warehouse
  3. Hybrid (TT On-Prem): [Ai]levate manages Elastic, customer manages AI Warehouse
  4. Full On-Premise: Customer manages both Elastic and AI Warehouse

SaaS Deployment

In the SaaS model, [Ai]levate manages the complete infrastructure stack, providing a fully managed service with minimal customer operational overhead. The Cloud Services Layer runs in Azure, while the AI Warehouse runs in [Ai]levate's Colo facility connected via Megaport to Azure ExpressRoute.

flowchart TD
    subgraph Customer["Customer Environment"]
        EHR["EHR System"]
        RelayVM["Relay Service VM<br/>Outbound-only"]
    end
    
    subgraph AilevateCloud["Ailevate Cloud - Azure (East US)"]
        subgraph AKS["Azure Kubernetes Service (AKS) Cluster"]
            direction TB
            subgraph CloudServices["Cloud Services Layer"]
                RCM["RCM Orchestration"]
                Auth["Authentication<br/>(Entra ID / Magiclink)"]
                Apps["Revenue Recovery App"]
                AgenticAI["Agentic AI Platform<br/>(Workers)"]
            end
        end
        
        subgraph Storage["Database Storage Layer"]
            Elastic["Dedicated Elastic Datastore<br/>(AES-256 Encrypted)"]
        end
    end
    
    subgraph Colo["Ailevate Colo"]
        subgraph AICompute["AI Compute Layer"]
            Warehouse["Dedicated AI Warehouse<br/>(Tenstorrent Hardware + vLLM)"]
        end
    end

    EHR <-->|"⚡ LAN<br/>SQL/API"| RelayVM
    RelayVM -->|"🔒 TLS 1.2+<br/>Port 443"| CloudServices
    AgenticAI <-->|"LLM API Calls<br/>(via ExpressRoute)"| Warehouse
    CloudServices <-->|"Data Access"| Elastic
    
    style Customer fill:#fff5e6,stroke:#ff9800,stroke-width:2px
    style AilevateCloud fill:#e3f2fd,stroke:#2196f3,stroke-width:2px
    style Colo fill:#e8f5e9,stroke:#4caf50,stroke-width:2px
    style AKS fill:#bbdefb,stroke:#1976d2,stroke-width:2px
    style CloudServices fill:#90caf9,stroke:#1565c0,stroke-width:1px
    style AICompute fill:#c8e6c9,stroke:#388e3c,stroke-width:2px
    style Storage fill:#f8bbd0,stroke:#c2185b,stroke-width:2px

Deployment Comparison:

| Aspect | SaaS Deployment | On-Premise Deployment |
| --- | --- | --- |
| Customer Responsibilities | • Deploy and manage Relay Service VM<br>• Provide EHR database connectivity<br>• Configure identity integration | • Procure and deploy Tenstorrent AI Warehouse<br>• Provision and manage Elastic datastore (>=8.19.1)<br>• Deploy and manage Relay Service VM<br>• Configure inbound network access<br>• Handle infrastructure security, patching, backup, DR<br>• Provide EHR database connectivity |
| [Ai]levate Responsibilities | • Provision dedicated Elastic datastore<br>• Provision dedicated AI Warehouse in Colo (connected via Megaport to Azure ExpressRoute)<br>• Manage Cloud Services Layer (AKS) including agentic AI platform<br>• Platform maintenance and security updates<br>• Ensure HIPAA compliance and data residency | • Manage Cloud Services Layer (AKS) including agentic AI platform<br>• Provide AI model container images<br>• Connect to customer-managed infrastructure<br>• Ensure platform orchestration |
| Key Benefits | • Minimal operational overhead<br>• Rapid deployment<br>• Automatic scaling and updates<br>• No inbound firewall rules required | • Complete infrastructure control<br>• Full data sovereignty<br>• Custom security configurations<br>• Integration with existing infrastructure<br>• Flexibility in hardware decisions |
| Network Requirements | • Outbound-only from Relay Service | • Outbound from Relay Service<br>• Inbound to AI Warehouse and Elastic |
| Operational Complexity | Low | Moderate |

For detailed SaaS deployment procedures, see the SaaS Deployment Guide.

On-Premise Deployment

In the On-Premise model, customers manage the Database Storage Layer, the AI Compute Layer, or both within their own infrastructure, while [Ai]levate continues to manage the Cloud Services Layer in Azure. "On-premise" refers specifically to the customer-managed infrastructure components (Elastic datastore and/or Tenstorrent AI Warehouse), not to the Cloud Services Layer, which remains in [Ai]levate's Azure environment.

Key Distinction: The primary difference from SaaS is not the architectural design, but rather who provides the infrastructure credentials and endpoints. In SaaS, [Ai]levate generates and manages these details; in on-premise deployments, the customer provides them during platform configuration.

flowchart TD
    subgraph AilevateCloud["Ailevate Cloud - Azure (East US)"]
        subgraph AKS["Azure Kubernetes Service (AKS) Cluster"]
            direction TB
            subgraph CloudServices["Cloud Services Layer"]
                RCM["RCM Orchestration"]
                Auth["Authentication<br/>(Entra ID / Magiclink)"]
                Apps["Revenue Recovery App"]
                AgenticAI["Agentic AI Platform<br/>(Agents/Workers)"]
            end
        end
    end

    subgraph Customer["Customer Site or Colo"]
        EHR["EHR System"]
        RelayVM["Relay Service VM<br/>Outbound-only"]

        subgraph AICompute["AI Compute Layer"]
            Warehouse["AI Warehouse<br/>(Tenstorrent Hardware + vLLM)<br/>[WARNING] Inbound access required"]
        end
        
        subgraph Storage["Database Storage Layer"]
            Elastic["Elastic Datastore<br/>[WARNING] Inbound access required"]
        end
    end

    EHR <-->|"⚡ LAN<br/>SQL/API"| RelayVM
    RelayVM -->|"🔒 TLS 1.2+<br/>Port 443"| CloudServices
    AgenticAI <-->|"LLM API Calls<br/>(Inbound TLS)"| Warehouse
    CloudServices <-->|"Data Access<br/>(Inbound TLS)"| Elastic
    
    style Customer fill:#fff5e6,stroke:#ff9800,stroke-width:2px
    style AilevateCloud fill:#e3f2fd,stroke:#2196f3,stroke-width:2px
    style AKS fill:#bbdefb,stroke:#1976d2,stroke-width:2px
    style CloudServices fill:#90caf9,stroke:#1565c0,stroke-width:1px
    style AICompute fill:#c8e6c9,stroke:#388e3c,stroke-width:2px
    style Storage fill:#f8bbd0,stroke:#c2185b,stroke-width:2px

Important Difference: Unlike SaaS, on-premise deployments require inbound network access to the AI Warehouse and Elastic datastore so [Ai]levate Cloud Services can interact with these customer-managed components.

For comparison of responsibilities and benefits across both deployment models, see the table in the SaaS Deployment section above.

For detailed on-premise deployment procedures, see the On-Premise Deployment Guide.

Cloud Services Layer

The Cloud Services Layer is always managed by [Ai]levate and serves as the orchestration hub for the entire platform. This layer runs in an Azure Kubernetes Service (AKS) cluster and includes the Agentic AI Platform that coordinates all AI operations.

Core Responsibilities

The Cloud Services Layer handles the complete orchestration of the Revenue Recovery platform, acting as the central nervous system that coordinates all operations across the four architectural layers.

Orchestration and Workflow Management:

This layer manages the end-to-end workflow of denied claim processing, from initial data extraction through AI analysis to final remediation.

  • Coordinates claims processing workflows across all layers
  • Manages task queuing and execution with priority handling
  • Handles workflow state and transaction management for consistency
  • Orchestrates interactions between EHR, AI Warehouse, and Elastic datastore

Agentic AI Platform:

The Cloud Services Layer includes an integrated agentic AI platform with agents and workers that coordinate AI operations.

  • AI agents and workers that orchestrate LLM interactions
  • Task distribution and coordination for AI processing
  • API calls to AI Warehouse (Tenstorrent hardware) for model inference
  • Context management and prompt engineering for optimal AI performance

Authentication and Security:

Security is deeply integrated through enterprise-grade authentication and comprehensive access controls.

  • Integrates with Microsoft Entra ID (formerly Azure AD) for OIDC/SAML authentication
  • Supports Magic Link email authentication for simplified access
  • Manages encryption keys and secrets via Azure Key Vault

Customer-Facing Applications:

All user-facing applications are delivered through this layer:

  • Revenue Recovery web application for claim management
  • Dashboard and insights visualization for performance monitoring
  • Claims insights for detailed claim analysis
  • Administrative console for platform configuration
  • API endpoints for third-party integration

Metadata and Query Services:

The layer provides critical data access services that span all platform components.

  • Query optimization and execution for efficient data retrieval
  • Metadata management for claims and workflows
  • Secure data access without exposing raw storage
  • Cross-layer coordination ensuring seamless operation

AKS Cluster Architecture

The platform leverages Azure Kubernetes Service for enterprise-grade container orchestration, providing the foundation for scalable, resilient operations.

Key Capabilities:

| Capability | Description | Benefit |
| --- | --- | --- |
| High Availability | Multi-node cluster with pod replication | Ensures continuous operation with automatic failover |
| Automatic Scaling | Horizontal and vertical scaling | Adapts to workload demands without manual intervention |
| Service Mesh | Secure service-to-service communication with mutual TLS | Zero-trust security within the cluster |
| Monitoring Integration | Azure Monitor, Application Insights, custom telemetry | Comprehensive visibility into system health and performance |
| Zero-Downtime Deployments | Rolling updates and canary deployments | Maintains service continuity during updates |

Tenant Isolation Model

The Revenue Recovery platform implements strict single-tenant isolation to ensure complete separation of customer data, compute resources, and operational environments.

Dedicated Resources Per Customer

The platform ensures complete tenant isolation through dedicated infrastructure for each customer, eliminating the "noisy neighbor" problem and guaranteeing predictable performance.

AI Compute Isolation:

Each customer operates their own dedicated AI Warehouse with exclusive compute resources.

  • Dedicated Tenstorrent hardware—no sharing between customers
  • Exclusive vLLM interface per customer tenant
  • Stateless compute environment—no data persistence on AI hardware
  • Guaranteed performance without resource contention

Database Storage Isolation:

Every customer receives their own completely separate Elastic datastore. Regardless of deployment model (SaaS or on-premise), the Elastic datastore is always external to the platform cluster—the difference is only in who provisions and manages it.

  • Dedicated Elastic instance with separate indices and access policies
  • Individual encryption keys per customer managed via Azure Key Vault
  • Independent backup schedules tailored to customer requirements
  • No cross-tenant data access—complete logical and physical separation
  • Always requires connection credentials (endpoint, CA cert, username, password)
    • SaaS: [Ai]levate generates and manages these credentials
    • On-Premise: Customer provides these credentials during setup
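
To make the credential requirement above concrete, the sketch below shows how a connection to a dedicated Elastic datastore is typically established with the official Python Elasticsearch client. This is illustrative only; the endpoint, CA certificate path, and service-account names are placeholders rather than values issued by the platform.

# Illustrative example: connecting to a dedicated, tenant-isolated Elastic datastore.
# Endpoint, CA certificate path, and credentials are placeholders; in SaaS deployments
# [Ai]levate supplies these values, and on-premise customers provide their own.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://elastic.example-tenant.internal:9200",  # placeholder endpoint
    ca_certs="/etc/ssl/certs/tenant-ca.pem",         # CA certificate for TLS verification
    basic_auth=("revenue_recovery_svc", "REPLACE_WITH_SECRET"),  # service account credentials
    request_timeout=30,
)

# Verify the connection before handing the details to the platform configuration.
print(es.info()["version"]["number"])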

Network Isolation:

Network segmentation creates clear security boundaries between customers.

  • Separate virtual networks or VPN tunnels per customer
  • Instance-specific DNS entries and SSL certificates
  • Dedicated API endpoints ensuring traffic separation
  • Network security groups (NSGs) enforcing strict traffic isolation

Operational Isolation:

Management and monitoring are isolated to provide complete operational independence.

  • Independent monitoring and alerting systems per customer
  • Separate audit logs and compliance reporting
  • Custom feature flags and configuration per tenant
  • Isolated deployment pipelines preventing cross-customer impact

Hierarchical Key Model

The platform implements a hierarchical encryption key model for defense-in-depth security, protecting data at multiple levels.

Key Hierarchy:

  1. Per-Tenant Keys: Top-level keys securing each customer tenant
  2. Per-Service Keys: Service-specific keys within a tenant for granular control
  3. Per-Document Keys: Individual keys for sensitive documents requiring additional protection

Key Management Features:

  • All keys stored securely in Azure Key Vault with HSM backing
  • Automatic key rotation policies ensuring keys are refreshed regularly
  • Comprehensive key lifecycle management and audit logging
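
The sketch below illustrates the envelope pattern behind a per-tenant, per-service, per-document hierarchy in plain Python. It is purely conceptual: the platform's actual keys live in Azure Key Vault with HSM backing and are never handled in application code like this.

# Conceptual illustration of hierarchical (envelope) encryption:
# a tenant key wraps a service key, which wraps a per-document data key.
# The platform's real keys are stored and rotated in Azure Key Vault; this is not production code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(wrapping_key: bytes, key_to_protect: bytes) -> tuple[bytes, bytes]:
    """Encrypt (wrap) one key under another using AES-256-GCM."""
    nonce = os.urandom(12)
    return nonce, AESGCM(wrapping_key).encrypt(nonce, key_to_protect, None)

tenant_key = AESGCM.generate_key(bit_length=256)    # per-tenant key (top of the hierarchy)
service_key = AESGCM.generate_key(bit_length=256)   # per-service key within the tenant
document_key = AESGCM.generate_key(bit_length=256)  # per-document key for a sensitive record

wrapped_service_key = wrap(tenant_key, service_key)
wrapped_document_key = wrap(service_key, document_key)

# Only the wrapped forms are stored alongside the data; compromising a single
# document key never exposes the service or tenant level above it.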

Relay Service Architecture

The Relay Service is a mandatory component in both SaaS and On-Premise deployments. It acts as a secure bridge between the customer's EHR system and the [Ai]levate platform.

Key Characteristics

The Relay Service serves as the secure connection point between your on-premise EHR system and the [Ai]levate platform, designed with a security-first architecture that minimizes attack surface.

Architectural Role:

  • Mandatory component for all deployments (SaaS and On-Premise)
  • Customer-deployed Linux VM within customer's network environment
  • Outbound-only connectivity eliminating need for inbound firewall rules
  • Zero data persistence operating as pure pass-through proxy
  • Managed via Azure Arc enabling secure remote management
  • Flexible EHR connectivity supporting both SQL and API-based integrations

Security Model:

The Relay enforces end-to-end encryption (TLS 1.2+), certificate-based authentication, and comprehensive audit logging while maintaining a minimal attack surface through outbound-only connections.
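
The outbound-only posture can be checked from the Relay VM itself. The sketch below opens a TLS 1.2+ session to a placeholder platform endpoint on port 443 and reports the negotiated protocol; the hostname is illustrative and should be replaced with the endpoint supplied during onboarding.

# Illustrative outbound connectivity check from the Relay Service VM.
# The hostname below is a placeholder, not a real platform endpoint.
import socket
import ssl

PLATFORM_HOST = "relay.ailevate.example"  # placeholder endpoint
PLATFORM_PORT = 443

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # enforce the platform's TLS 1.2+ requirement

with socket.create_connection((PLATFORM_HOST, PLATFORM_PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=PLATFORM_HOST) as tls:
        print("Negotiated protocol:", tls.version())           # e.g. TLSv1.3
        print("Server certificate subject:", tls.getpeercert()["subject"])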

📘 Complete Specifications: For detailed VM requirements, network configuration, and operational runbooks, see the Relay Service Deployment Guide.

Relay Service Operation

The Relay Service establishes a secure tunnel to [Ai]levate Cloud Services, allowing the platform to query the EHR database or call EHR APIs as if they were local, while the EHR system remains completely private within the customer network.

sequenceDiagram
    participant EHR as EHR System (SQL/API)
    participant Relay as Relay Service (Customer Network)
    participant Cloud as Cloud Services (Azure)
    participant Elastic as Elastic Datastore
    participant AI as AI Warehouse

    Note over Relay,Cloud: Relay initiates outbound connection (443/TLS)
    Relay->>Cloud: Establish secure tunnel
    Cloud->>Relay: Query/API request (proxied)
    Relay->>EHR: Execute query or API call (LAN)
    EHR->>Relay: Return data
    Relay->>Cloud: Encrypted data transmission
    Cloud->>AI: LLM API call (via ExpressRoute or Direct)
    AI->>Cloud: LLM results (no data persistence)
    Cloud->>Elastic: Store metadata and results

For complete Relay Service specifications and deployment procedures, see the Relay Service Deployment Guide.

AI Warehouse Architecture

The AI Warehouse is the platform's dedicated AI compute layer, running on Tenstorrent hardware with a vLLM interface for model serving. This infrastructure runs outside of Azure—in [Ai]levate's Colo facility for SaaS deployments, or at the customer's site for On-Premise deployments.

Key Characteristics

The AI Warehouse leverages specialized Tenstorrent hardware to deliver AI-powered claim analysis with exceptional performance and efficiency. The agentic AI platform in the Cloud Services Layer (Azure) makes API calls to the AI Warehouse for LLM inference.

Hardware Specifications:

| Component | Specification | Purpose |
| --- | --- | --- |
| Processor | Tenstorrent Wormhole™ or Blackhole™ | Optimized for AI/LLM operations |
| AI Memory | 96GB+ dedicated pool | Eliminates memory bottlenecks for large models |
| Interconnect | QSFP-DD 800GbE | High-speed data transfer between components |
| Form Factor | Rack-mounted | Enterprise data center compatibility |
| Power | Redundant power supplies | Enterprise reliability and uptime |
| Cooling | Active cooling with redundancy | Optimal thermal management |

Software:

[Ai]levate provides optimized software stacks for seamless model deployment and serving.

  • Pre-configured AI model container images with all dependencies
  • vLLM interface standardizing model serving and inference
  • Support for reasoning, classification, and text generation workloads
  • Optimized for healthcare claim analysis and remediation tasks
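
Because the warehouse is fronted by vLLM, inference is typically reachable through vLLM's standard OpenAI-compatible HTTP API. The sketch below shows what such a call might look like; the endpoint, model name, and prompt are assumptions for illustration, not platform-confirmed details.

# Hypothetical LLM inference call against a vLLM-served endpoint.
# vLLM exposes an OpenAI-compatible HTTP API; the endpoint and model name here are placeholders.
import requests

AI_WAREHOUSE_URL = "https://ai-warehouse.tenant.internal:8000"  # placeholder endpoint
payload = {
    "model": "claims-reasoning-model",   # placeholder model identifier
    "messages": [
        {"role": "system", "content": "Classify the denial reason for the claim below."},
        {"role": "user", "content": "CARC 197: precertification/authorization absent."},
    ],
    "temperature": 0.0,
}

response = requests.post(
    f"{AI_WAREHOUSE_URL}/v1/chat/completions",
    json=payload,
    timeout=60,
    verify="/etc/ssl/certs/warehouse-ca.pem",  # TLS verification against the warehouse CA
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])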

Isolation and Security:

Complete customer isolation ensures predictable performance and data protection.

  • One AI Warehouse per customer—zero resource sharing
  • Fully isolated compute environment preventing cross-tenant access
  • No data persistence—processes tasks and returns results only
  • All persistent storage handled separately in Elastic datastore

Deployment:

Management responsibilities and location vary by deployment model.

  • SaaS: Provisioned, managed, and maintained by [Ai]levate in Colo facility (connected via Megaport to Azure ExpressRoute)
  • On-Premise: Procured and managed by customer at their site, with [Ai]levate software support

For detailed AI Warehouse hardware specifications and setup, see the AI Warehouse Deployment Guides.

Database Storage Layer

The Database Storage Layer uses Elasticsearch as the datastore for all structured claim data, metadata, and operational logs.

Key Characteristics

Elasticsearch provides scalable, high-performance storage for all claim data, metadata, and operational logs; management responsibilities differ by deployment model, as summarized below.

Elasticsearch Specifications:

| Aspect | SaaS | On-Premise |
| --- | --- | --- |
| Version | [Ai]levate-managed (continuously optimized) | Elasticsearch >=8.19.1 (required) |
| Cluster Size | Dynamically scaled by [Ai]levate | Minimum 3 nodes recommended for HA |
| Management | Fully managed—clustering, indexing, scaling | Customer provisions and manages |
| Maintenance | Automated by [Ai]levate | Customer responsibility |
| Monitoring | Included with comprehensive telemetry | Customer implements monitoring |
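
For customer-managed clusters, the version and node-count requirements above can be confirmed against the standard Elasticsearch REST API. The sketch below is a pre-flight check; the endpoint, credentials, and CA path are placeholders.

# Illustrative pre-flight check for a customer-managed Elasticsearch cluster.
# Endpoint and credentials are placeholders; the thresholds mirror the requirements above.
import requests

ES_URL = "https://elastic.customer.internal:9200"   # placeholder endpoint
AUTH = ("revenue_recovery_svc", "REPLACE_WITH_SECRET")
CA = "/etc/ssl/certs/customer-ca.pem"

version = requests.get(ES_URL, auth=AUTH, verify=CA, timeout=10).json()["version"]["number"]
health = requests.get(f"{ES_URL}/_cluster/health", auth=AUTH, verify=CA, timeout=10).json()

assert tuple(map(int, version.split("."))) >= (8, 19, 1), f"Elasticsearch {version} is below 8.19.1"
if health["number_of_nodes"] < 3:
    print(f"Warning: {health['number_of_nodes']} node(s); 3+ recommended for high availability")
print(f"Cluster status: {health['status']}, version: {version}")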

Data Isolation:

Complete data separation ensures no cross-tenant access.

  • One dedicated Elastic datastore per customer—complete physical isolation
  • Separate indices, access policies, and resource allocation per tenant
  • Architectural prevention of cross-tenant data access

Encryption:

Multi-layer encryption protects data at rest and in transit.

  • AES-256 encryption for all data at rest
  • TLS encryption for all data in transit between components
  • Centralized key management via Azure Key Vault
  • Encrypted backups ensuring data protection in recovery scenarios

Backup and Recovery:

Comprehensive data protection with flexible recovery options.

  • Automated backup schedules tailored to customer requirements
  • Point-in-time recovery to any specific moment
  • Independent retention policies meeting varying regulatory needs
  • Tested restore procedures ensuring rapid recovery
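
In on-premise deployments, where backup is the customer's responsibility, point-in-time protection is commonly implemented with Elasticsearch's snapshot API. The sketch below registers a shared-filesystem repository and takes a snapshot; repository path, names, and credentials are placeholders, and SaaS backups are handled by [Ai]levate.

# Illustrative on-premise backup using Elasticsearch's snapshot API.
# Repository path, names, and credentials are placeholders.
import datetime
import requests

ES_URL = "https://elastic.customer.internal:9200"   # placeholder endpoint
AUTH = ("backup_admin", "REPLACE_WITH_SECRET")
CA = "/etc/ssl/certs/customer-ca.pem"

# Register a shared-filesystem snapshot repository (the location must be allowed by path.repo).
requests.put(
    f"{ES_URL}/_snapshot/claims_backups",
    json={"type": "fs", "settings": {"location": "/mnt/es-backups"}},
    auth=AUTH, verify=CA, timeout=30,
).raise_for_status()

# Take a point-in-time snapshot of all indices and wait for completion.
snapshot_name = "daily-" + datetime.date.today().isoformat()
requests.put(
    f"{ES_URL}/_snapshot/claims_backups/{snapshot_name}?wait_for_completion=true",
    json={"indices": "*", "include_global_state": False},
    auth=AUTH, verify=CA, timeout=3600,
).raise_for_status()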

Deployment:

Management varies based on deployment model.

  • SaaS: Fully managed by [Ai]levate—clustering, indexing, scaling, and maintenance
  • On-Premise: Customer provisions and manages cluster (minimum 3 nodes recommended for HA)

For on-premise Elastic sizing and configuration details, see the On-Premise Deployment Guide.

Data Flow Architecture

Understanding how data flows through the platform is essential for security planning and troubleshooting.

Denied Claim Remediation Flow

sequenceDiagram
    participant EHR as EHR System (SQL/API)
    participant Relay as Relay Service
    participant Cloud as Cloud Services / Agentic AI (Azure)
    participant AI as AI Warehouse (Colo or Customer Site)
    participant Elastic as Elastic Datastore

    rect rgb(230, 248, 255)
    note over EHR,Relay: Customer Environment
    end

    rect rgb(245, 235, 255)
    note over Cloud: [Ai]levate Azure
    end
    
    rect rgb(240, 255, 240)
    note over AI: [Ai]levate Colo (SaaS) or Customer Site (On-Prem)
    end
    
    rect rgb(255, 245, 238)
    note over Elastic: [Ai]levate Managed (SaaS) or Customer Managed (On-Prem)
    end

    EHR->>Relay: Denied claim data
    Relay->>Cloud: Secure outbound transmission (443/TLS)
    Cloud->>Elastic: Store claim metadata
    Cloud->>AI: LLM API call (reasoning, classification, recommendation)
    AI->>Cloud: LLM results (no data persistence)
    Cloud->>Elastic: Store recommendations and audit logs
    Cloud->>Relay: Remediation instructions
    Relay->>EHR: Update claim / add remediation notes

Flow Breakdown:

The claim remediation process follows a carefully orchestrated sequence ensuring data security and integrity at every step.

  1. Data Extraction: Cloud Services queries EHR via Relay Service to retrieve denied claim data (using SQL queries or API calls)
  2. Metadata Storage: Claim metadata stored in customer's dedicated Elastic datastore
  3. AI Processing: Agentic AI platform (in Azure Cloud Services) makes LLM API calls to the dedicated AI Warehouse (in Colo or customer site) for reasoning, classification, and recommendations
  4. Result Storage: AI results and audit logs stored in Elastic (AI Warehouse persists nothing)
  5. Remediation: Cloud Services sends instructions through Relay to update EHR
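
The simplified, self-contained Python sketch below ties the five steps together. Every function and data structure is a hypothetical stand-in for the platform's internal services, shown only to make the sequence and the no-persistence rule on the AI Warehouse concrete; it is not an actual [Ai]levate API.

# Hypothetical sketch of the remediation sequence. All services are replaced by in-memory
# stubs; real extraction, inference, and storage happen in the platform layers described above.
from typing import Any

ELASTIC: dict[str, dict[str, Any]] = {"claims": {}, "recommendations": {}, "audit": {}}

def relay_query_ehr(claim_id: str) -> dict[str, Any]:
    # Stand-in for a SQL/API pull through the Relay Service.
    return {"claim_id": claim_id, "denial_code": "CARC 197", "payer": "ExamplePayer"}

def ai_warehouse_infer(claim: dict[str, Any]) -> dict[str, Any]:
    # Stand-in for an LLM call to the dedicated AI Warehouse; nothing is persisted there.
    return {"classification": "missing_authorization",
            "remediation_notes": "Attach prior authorization and resubmit."}

def relay_update_ehr(claim_id: str, notes: str) -> None:
    # Stand-in for writing remediation notes back to the EHR through the Relay Service.
    print(f"EHR updated for {claim_id}: {notes}")

def remediate_denied_claim(claim_id: str) -> None:
    claim = relay_query_ehr(claim_id)                          # 1. Data extraction
    ELASTIC["claims"][claim_id] = claim                        # 2. Metadata storage
    analysis = ai_warehouse_infer(claim)                       # 3. AI processing (stateless compute)
    ELASTIC["recommendations"][claim_id] = analysis            # 4. Result and audit storage
    ELASTIC["audit"][claim_id] = {"event": "claim_analyzed"}
    relay_update_ehr(claim_id, analysis["remediation_notes"])  # 5. Remediation instructions

remediate_denied_claim("CLM-0001")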

Key Security Principles:

  • All data transmission encrypted with TLS 1.2+ end-to-end
  • AI Warehouse operates as stateless compute—no data persistence
  • Strict separation between storage and compute layers
  • Complete audit trail of all data access and transformations

Security Architecture

The platform implements a "secure by design" approach with multiple layers of security controls.

Encryption Everywhere

The platform enforces encryption at every layer, protecting data both at rest and in transit.

Encryption Standards:

| Protection Type | Standard | Application | Key Features |
| --- | --- | --- | --- |
| Data in Transit | TLS 1.2+ | All network communications | Certificate-based authentication, Perfect Forward Secrecy (PFS) |
| Data at Rest | AES-256 | All stored data | Elasticsearch indices, backups, container images |
| Database Backups | AES-256 | Backup snapshots | Encrypted during backup and at rest |
| Container Images | AES-256 | Application layer | Secured container registry |

Key Management:

  • Centralized management via Azure Key Vault with enterprise controls
  • Hierarchical key model (per-tenant, per-service, per-document) for granular protection
  • Automatic key rotation ensuring keys are refreshed regularly
  • HSM-backed protection providing hardware-level key security

Compute and Storage Separation

A foundational architectural principle enforces strict separation between compute and storage layers, preventing unauthorized data access.

Separation Principles:

  • AI Warehouses never persist data—process tasks and return results only
  • Cloud Services never store raw claim data—only metadata and orchestration state
  • Elastic datastore is the sole persistent storage—customer-dedicated and isolated

This architecture ensures compute resources cannot be exploited for data exfiltration, and storage resources cannot perform unauthorized processing.

Access Control

The platform implements multi-layered access controls ensuring users access only authorized resources with comprehensive audit trails.

Role-Based Access Control (RBAC):

Fine-grained permissions control what users can access and do within the platform.

  • Organization-level permissions controlling access to specific customer data
  • Custom roles and permission sets aligning with organizational structure
  • Separation of duties through predefined roles (admin, contributor, user)
  • Comprehensive audit logging tracking all access attempts and actions

Network Access Control:

Network-level security provides defense in depth across the infrastructure.

  • Virtual network isolation per customer preventing lateral movement
  • Network security groups (NSGs) with fine-grained traffic rules
  • Private endpoints and VPN support for enhanced network isolation
  • IP allowlisting restricting access to known network ranges
  • DDoS protection via Azure Front Door safeguarding availability
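
The sketch below illustrates the allowlisting concept in plain Python: only source addresses inside approved CIDR ranges are admitted. The ranges are placeholders, and in the platform this control is enforced at the network layer (NSGs and allowlists) rather than in application code.

# Conceptual illustration of IP allowlisting; CIDR ranges are placeholders.
import ipaddress

ALLOWED_RANGES = [ipaddress.ip_network(cidr) for cidr in ("203.0.113.0/24", "10.20.0.0/16")]

def is_allowed(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in network for network in ALLOWED_RANGES)

print(is_allowed("203.0.113.25"))   # True: inside an approved customer range
print(is_allowed("198.51.100.7"))   # False: outside every allowlisted range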

Authentication:

Enterprise-grade authentication integrates with existing identity systems.

| Method | Technology | Use Case | Features |
| --- | --- | --- | --- |
| Enterprise SSO | OIDC/SAML via Microsoft Entra ID | Organizations with existing identity systems | Single sign-on, centralized management |
| Magic Link | Email-based authentication | Simplified access, no passwords | Passwordless, time-limited tokens |
| Multi-Factor Authentication | MFA via Entra ID integration | Enhanced security requirements | Additional verification layer provided by identity provider |
| Session Management | Token-based | All authentication methods | Configurable timeout policies |
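
For service-to-service integration against Entra ID, token acquisition typically follows the OIDC client-credentials flow. The sketch below uses the MSAL Python library; the tenant ID, client ID, secret, and scope are placeholders, and the platform's actual app registration details are provided during identity integration.

# Hypothetical sketch of Entra ID (OIDC) client-credentials authentication with MSAL.
# Tenant ID, client ID, secret, and scope are placeholders.
import msal

TENANT_ID = "00000000-0000-0000-0000-000000000000"      # placeholder directory (tenant) ID
CLIENT_ID = "11111111-1111-1111-1111-111111111111"      # placeholder app registration
CLIENT_SECRET = "REPLACE_WITH_SECRET"
SCOPE = ["api://revenue-recovery/.default"]             # placeholder application scope

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

result = app.acquire_token_for_client(scopes=SCOPE)
if "access_token" in result:
    print("Token acquired; expires in", result["expires_in"], "seconds")
else:
    print("Authentication failed:", result.get("error_description"))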

Compliance

The platform supports healthcare compliance requirements through comprehensive security controls and documentation.

HIPAA Compliance:

  • Business Associate Agreement (BAA) establishing compliance responsibilities
  • Encryption requirements met at rest and in transit (AES-256, TLS 1.2+)
  • Access controls and audit logging providing required accountability
  • Incident response procedures ensuring rapid response to security events

Data Residency:

  • SaaS deployments: Azure East US region (additional regions coming soon)
  • On-premise deployments: Customer controls infrastructure location entirely
  • Data residency enforcement ensuring data remains in designated location
  • No cross-border data transfer without explicit customer consent

Audit and Monitoring:

  • Comprehensive audit logs for all data access and system actions
  • Security event monitoring with real-time alerting
  • Compliance reporting supporting regulatory requirements
  • Azure Security Center integration for centralized security management

For detailed network and security configurations, see Network Security Considerations.

Deployment Considerations

Choosing Between SaaS and On-Premise

Both deployment models deliver identical functionality but differ in infrastructure management and control. Consider your organization's operational capabilities, compliance requirements, and strategic priorities.

Choose SaaS if:

The SaaS model minimizes operational overhead and accelerates deployment.

  • You prefer [Ai]levate to handle infrastructure management
  • You want rapid deployment with minimal internal resources
  • You're comfortable with managed infrastructure in Azure
  • You don't require on-premise data storage
  • You want automatic scaling, updates, and maintenance

Choose On-Premise if:

The On-Premise model provides maximum control and flexibility for specific requirements.

  • You have strict data residency or sovereignty mandates
  • You require complete infrastructure control for compliance
  • You have existing infrastructure investments to leverage (Elastic, specialized hardware)
  • You need custom security configurations beyond SaaS offerings
  • You have in-house expertise for Elasticsearch and Tenstorrent hardware management

Regional Deployment

The platform's regional deployment options differ based on your chosen deployment model.

SaaS Deployments:

Currently, SaaS deployments provision the Cloud Services Layer in Azure East US region, with AI Warehouses in [Ai]levate's Colo facility connected via Megaport to Azure ExpressRoute. This provides optimal performance for North American customers and meets HIPAA compliance requirements.

  • Cloud Services Layer: Azure East US (default for all SaaS deployments)
  • AI Warehouse: [Ai]levate Colo facility
  • Future Expansion: Additional Azure regions are planned and will become available based on customer demand and compliance requirements
  • Data Residency: All customer data remains within the Azure East US region
  • Compliance: HIPAA, US data protection laws

On-Premise Deployments:

On-premise deployments offer complete flexibility in infrastructure location since you manage the Database Storage Layer (Elasticsearch) and AI Compute Layer (Tenstorrent hardware) directly at your site or Colo.

  • Cloud Services Layer: [Ai]levate Azure (always managed by [Ai]levate, includes agentic AI platform)
  • AI Warehouse & Elastic: Customer site or Colo (customer managed)
  • Location Control: Deploy AI Warehouse and Elastic in any datacenter, cloud provider (Azure, AWS, GCP), or private infrastructure
  • Data Sovereignty: Full control over data residency and geographic location
  • Compliance Flexibility: Align infrastructure location with your specific regulatory requirements
  • Network Architecture: Custom security configurations to meet organizational policies

Note: The [Ai]levate Cloud Services Layer (including the agentic AI platform) operates in Azure, but for on-premise deployments, your sensitive claim data and AI compute resources reside entirely within your chosen infrastructure location.

Requesting Alternative Regions:

If your organization requires SaaS deployment in a specific Azure region not currently available, please contact your [Ai]levate representative or email [email protected].

Data Residency Guarantees:

  • All customer data remains within the selected region
  • No cross-border data transfer occurs
  • Compliance with regional data protection regulations (HIPAA, GDPR)

Shared Responsibility Model

| Area | SaaS | On-Premise |
| --- | --- | --- |
| Cloud Services Layer | [Ai]levate | [Ai]levate |
| AI Warehouse Provisioning | [Ai]levate | Customer |
| AI Warehouse Management | [Ai]levate | Customer |
| Elastic Datastore Provisioning | [Ai]levate | Customer |
| Elastic Datastore Management | [Ai]levate | Customer |
| Relay Service Provisioning | Customer | Customer |
| Relay Service Management | Customer | Customer |
| EHR Connectivity | Customer | Customer |
| Identity Integration | Customer | Customer |
| Infrastructure Security | [Ai]levate | Customer |
| Application Security | [Ai]levate | [Ai]levate |
| Backup and DR (Elastic, AI) | [Ai]levate | Customer |
| Monitoring and Logging | [Ai]levate | Shared |

Next Steps

To proceed with your deployment planning:

  1. Review System Requirements: Understand the technical prerequisites for your chosen deployment model

  2. Complete Pre-Deployment Checklist: Gather necessary information and validate prerequisites

  3. Plan Network and Security: Understand network requirements and security considerations

  4. Choose Your Deployment Path: Follow the SaaS Deployment Guide or the On-Premise Deployment Guide for your selected model

  5. Deploy Relay Service: All deployment models require Relay Service configuration

  6. Configure AI Warehouse (On-Premise Only): If deploying on-premise, set up Tenstorrent hardware

For questions about deployment architecture or to discuss your specific requirements, contact your [Ai]levate representative or reach out to [email protected].