Rapid Deploy
Trust Center

Trundl's Rapid Deploy operating model (referred to below as the "Platform") is built for organizations that rely on secure, reliable integrations, migrations, and configuration automation. This Trust Center explains how we protect your data, how we operate the Rapid Deploy platform, and what you can expect from us as your partner.

Scope & Overview

Platform Core Capabilities

  • Safe, repeatable custom configuration, deployment and rollback automation.
  • Large-scale migrations and ongoing, secure, bi‑directional synchronization across tools, including issues, tickets, tasks, and related metadata.
  • Discovering, analyzing, and preparing source environments for migration or integration.
  • Centralized observability of jobs, logs, and system health.

Hosting & Key Subprocessors

  • Infrastructure hosting: Amazon Web Services (AWS)
  • AI features: OpenAI (Business / Enterprise plan, including ChatGPT Business and ChatGPT Enterprise)
  • Used for selected AI‑driven features (e.g., suggestions, mappings, summaries), with strict controls on what data is sent and how it is handled.
  • Reference: Enterprise privacy at OpenAI

Security

Application & Data Security

  • Credential Protection
    • API tokens and external service credentials are encrypted at rest using strong industry-standard encryption (e.g., AES‑256).
    • Credentials are never stored in plaintext in application logs or source code.
    • Support for credential rotation and scoped access per integration.
  • Authentication & Sessions
    • Authentication uses secure, modern standards (e.g., token-based sessions).
    • Passwords (if used) are hashed with strong, one-way hashing algorithms (e.g., bcrypt).
    • Single Sign-On (SSO) options (such as SAML or OIDC) can be supported for enterprise customers.
  • Transport Security
    • All communication with the Platform uses HTTPS with modern TLS.
    • Secure protocol versions and strong cipher suites are enforced.
    • Connections to AWS services and OpenAI APIs are also made over TLS‑secured channels.
  • Secrets Management
    • Internal secrets (database credentials, encryption keys, API keys, including the OpenAI API key) are stored in AWS‑managed secret storage.
    • Access to secrets is tightly controlled via IAM policies and is audited.
  • Web Security Controls
    • AWS Web Application Firewall (WAF) helps protect against common attack classes (e.g., SQL injection, cross‑site scripting).
    • Rate limiting and abuse detection at the edge.
    • Strict controls on cross‑origin requests and other browser-based risks.
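
The one-way password hashing described above can be sketched in a few lines. This is an illustrative stand-in rather than the Platform's actual implementation: it uses Python's standard-library PBKDF2-HMAC with a per-user random salt, analogous to the bcrypt approach mentioned above.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> str:
    """One-way hash with a per-user random salt (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the digest from the stored salt; compare in constant time."""
    _, iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Verification recomputes the digest from the stored salt and compares it in constant time, so neither the plaintext password nor a reusable hash is ever stored.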

Infrastructure Security

  • Cloud-Native Architecture on AWS
    • The underlying Rapid Deploy platform runs on AWS using containerized services and managed databases (e.g., Amazon RDS, ECS/EKS).
    • Trundl leverages VPCs, Security Groups, IAM roles, and managed encryption as standard.
  • Network Segmentation
    • Public access is limited to the Rapid Deploy UI and external APIs (typically via AWS Application Load Balancers and/or Amazon CloudFront).
    • Databases, internal services, and message queues reside in private subnets and are not directly exposed to the internet.
  • Hardened Services
    • Workloads run in isolated containers with least‑privilege permissions.
    • Base images and dependencies are regularly updated and patched.
  • Access Control
    • Operational access to production AWS environments is restricted to a small number of authorized personnel using strong authentication and least‑privilege IAM roles.
    • All administrative and privileged operations are logged and monitored (e.g., via AWS CloudTrail and CloudWatch).

Data Protection & Privacy

This section covers:

  • Data Handling and OpenAI (ChatGPT Business / Enterprise) Privacy
  • Data Processed Within Platform (Hosted on AWS)

The Platform processes:

  • Work items and related metadata (e.g., issues, tasks, tickets, fields, comments, and optionally attachments) to perform syncs, migrations, deployments, and discovery.
  • Configuration data such as connection details, mapping rules, deployment definitions, and job configurations.
  • Discovery and analytics information about your environments (e.g., project structures, usage patterns).
  • Operational data such as logs and metrics required to operate and troubleshoot the platform.

This data is stored and processed on AWS, with tenant isolation and encryption as described in this Trust Center.

Use of OpenAI (Business / Enterprise, Including ChatGPT Business)

The Platform uses OpenAI's Business/Enterprise offerings (for example, ChatGPT Business and ChatGPT Enterprise) only for specific, opt‑in AI features, such as:

  • Generating or suggesting mappings, transformations, or configuration patterns.
  • Producing summaries or explanations (e.g., migration or discovery summaries).
  • Providing AI-assisted insights to help design or validate Platform configurations.

When These AI Features Are Used:

  • Only limited, scoped text or metadata is sent to OpenAI, strictly what is required for that feature.
  • The Platform does NOT send:
    • Customer passwords, API keys, or other secrets.
    • Database connection strings.
    • Unnecessary personal data beyond what the feature requires.
  • Where practical, data passed to OpenAI is minimized and may be pseudonymized or partially redacted.
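
This minimization step can be pictured as an outbound filter applied before any payload leaves the Platform. The patterns and the `redact_prompt` helper below are illustrative assumptions, not the Platform's actual redaction rules:

```python
import re

# Illustrative patterns only; a production filter would be tuned to the
# customer's own data-classification rules.
SECRET_PATTERNS = [
    # key/value pairs that look like credentials, e.g. "api_key: abc123"
    re.compile(r"(?i)\b(api[_-]?key|token|password|secret)\b\s*[:=]\s*\S+"),
    # email addresses
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
]

def redact_prompt(text: str) -> str:
    """Replace secret-like values and emails before a payload leaves the Platform."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```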

How OpenAI Treats the Platform's Data (Summary of Enterprise Privacy / ChatGPT Business)

The Platform's use of OpenAI relies on OpenAI's Enterprise Privacy commitments.

There is NO Training on Your Business Data By Default

Prompts and responses sent from the Platform to OpenAI under Business/Enterprise plans are not used to train or improve OpenAI's public models by default.

You Own Your Inputs and Outputs (Where Allowed by Law)

Your organization retains ownership of prompts (inputs) and responses (outputs) generated via the Platform's AI features.

Enterprise Controls and Retention

OpenAI provides enterprise controls over data retention (for example, ChatGPT Enterprise).

Within the Platform, AI‑related logs and outputs stored on AWS follow our own retention policies, aligned with your contractual and regulatory requirements.

Access Control & Authentication

  • OpenAI supports enterprise-level authentication (e.g., SAML SSO) and fine‑grained access controls.
  • Within the Platform, administrators can control:
    • Which users or roles may access AI features.
    • Which environments are allowed to call OpenAI.
    • What types of data are excluded from prompts.
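
A minimal sketch of how such administrator controls might gate an outbound AI call; the `AiPolicy` class is hypothetical and for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class AiPolicy:
    """Illustrative (hypothetical) tenant policy gating Platform calls to OpenAI."""
    allowed_roles: set = field(default_factory=set)  # roles that may use AI features
    allowed_envs: set = field(default_factory=set)   # environments that may call OpenAI

    def may_call(self, role: str, env: str) -> bool:
        # Deny by default: both the role and the environment must be allow-listed.
        return role in self.allowed_roles and env in self.allowed_envs
```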

Security & Compliance

OpenAI's enterprise offerings are designed with:

  • SOC 2–aligned controls.
  • AES‑256 encryption at rest.
  • TLS 1.2+ encryption in transit.

Where Your Data Lives

  • Core Platform data (configurations, logs, states, discovery results) is stored on AWS in your designated region(s).
  • AI processing occurs via encrypted API calls from the Platform (in AWS) to OpenAI.
  • OpenAI does not have direct access to your AWS databases or infrastructure.
  • Only minimal, scoped payloads are transmitted for each AI feature invocation.

Customer Control Over AI Usage

You can:

  • Enable or disable AI-powered features globally, or by environment/role (where supported).
  • Define internal rules for what data is allowed in prompts.
  • Request stricter minimization of data sent to OpenAI as part of your security and privacy requirements.

For more information on OpenAI's enterprise privacy commitments, see: Enterprise privacy at OpenAI

Encryption

In Transit

  • All data between your systems and the Platform is encrypted using TLS.
  • Calls between the Platform and AWS services, and between the Platform and OpenAI, are also encrypted.

At Rest

  • Databases, volumes, and object storage on AWS are encrypted at rest (e.g., AWS KMS with AES‑256).
  • Backups and snapshots follow the same encryption standards.

Credentials & Secrets

  • Integration credentials, API keys (including OpenAI), and other secrets are stored in encrypted form using AWS secret management services.

Access to Customer Data

  • Access is limited to authorized personnel and only for:
    • Resolving your support requests.
    • Investigating and mitigating incidents.
  • All such access is logged and governed by internal policies.
  • OpenAI receives only minimal, scoped data required to provide AI features, and does not have access to the Platform's AWS infrastructure.

Compliance & Governance

Security Governance

  • Security‑by‑design across the Platform product lifecycle.
  • AI-related features undergo security and privacy review before production release.
  • AWS's compliance posture (e.g., ISO 27001, SOC, PCI) forms part of our underlying control environment.

Policies & Practices

  • Secure coding practices aligned with OWASP guidelines.
  • Formal change management for production deployments and AWS configuration changes.
  • Regular review of access rights, monitoring, and logging configurations.

Compliance Posture

  • Controls designed with the expectations of standards such as ISO 27001 and SOC 2 in mind.
  • Data protection aligned with applicable privacy laws (e.g., GDPR).
  • Data Processing Agreements (DPAs) and security exhibits can list AWS and OpenAI as subprocessors where applicable.

Availability, Reliability & Performance

Platform Availability

  • The Platform is designed on AWS for high availability suitable for business‑critical workloads.
  • Enterprise SLAs
    • Can be defined contractually.
  • Redundancy & fault tolerance
    • Highly available application services and multi‑AZ databases.
    • Stateless services with horizontal scaling.
  • Scalability
    • AWS autoscaling supports fluctuating loads (e.g., migration bursts or sync spikes).
    • Rate limits and quotas protect the platform and external APIs (including OpenAI).
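
Rate limits of the kind mentioned above are commonly implemented as a token bucket: tokens refill at a steady rate and each request spends one, which absorbs short bursts while capping sustained throughput. A minimal sketch, not the Platform's actual limiter:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter of the kind used to protect external APIs."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```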

Monitoring & Incident Response

Monitoring

  • Application performance, infrastructure health, job throughput, and failures are monitored.
  • Centralized logging of application and infrastructure events, including AI usage where applicable.

Alerting

  • Automated alerts for errors, degraded performance, abnormal activity, and potential AI misuse.

Incident Response

  • Documented incident response process (detection, triage, containment, remediation, communication).
  • Post‑incident reviews for significant events.

Platform-Specific Security & Controls

The Platform and Sync Capabilities

Secure Orchestration

  • Sync and migration jobs run on AWS and strictly follow configured scopes and permissions.
  • Clear separation between credentials, mapping rules, and execution.

Mappings & Transformations

  • Mappings are centrally managed and can be versioned.
  • Validation and “dry‑run” options are supported.
  • Optional OpenAI-powered mapping suggestions use minimal metadata (no secrets).
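
A dry run can be thought of as computing the change plan and validation errors without writing anything to the target system. The `plan_mapping` helper below is hypothetical, shown only to illustrate the idea:

```python
def plan_mapping(items, field_map):
    """Dry run: compute the changes a mapping would make without applying them."""
    plan, errors = [], []
    for item in items:
        mapped = {}
        for src, dst in field_map.items():
            if src not in item:
                # Validation finding instead of a silent partial write.
                errors.append(f"{item.get('id', '?')}: missing source field '{src}'")
            else:
                mapped[dst] = item[src]
        plan.append({"id": item.get("id"), "target": mapped})
    return plan, errors
```

Reviewing the plan and errors before a real run is what lets mappings be validated and versioned safely.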

Auditability

  • Detailed logs for items processed, changes applied, errors, and conflicts.
  • Traceability to the user or configuration that triggered each operation.

The Platform and Deployment Capabilities

Controlled Deployments

  • Deployments are performed via dedicated AWS-based services with validations and optional approvals.
  • Rollback to previous known‑good states is supported.
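
Rollback to a known-good state generally amounts to retaining deployment history and restoring the most recent healthy configuration; a hedged sketch with a hypothetical `DeploymentHistory` class:

```python
class DeploymentHistory:
    """Hypothetical record of deployments used to pick a rollback target."""

    def __init__(self):
        self._history = []  # (config, healthy) tuples, oldest first

    def record(self, config: dict, healthy: bool) -> None:
        self._history.append((config, healthy))

    def rollback_target(self) -> dict:
        # Walk backwards past the current deployment to the last known-good config.
        for config, healthy in reversed(self._history[:-1]):
            if healthy:
                return config
        raise LookupError("no earlier known-good deployment")
```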

Change Tracking

  • Every deployment is logged with configuration, environment, and initiating user or pipeline.

Optional AI Assistance

  • OpenAI can be used to suggest templates or configuration changes.
  • Prompts exclude credentials or highly sensitive data.

The Platform and Discovery Capabilities

Safe Analysis

  • Read‑oriented by design; operates under permissions you configure in your systems.
  • Focused on metadata, structures, and usage patterns.

Data Minimization

  • Collection limited to what is needed for assessment, optimization, and migration planning.
  • Collected data is stored and protected on AWS.

AI‑enhanced Insights (Optional)

  • Optional use of OpenAI to summarize findings or highlight patterns from discovery data (without credentials).

Reporting & Export

  • Discovery outputs can be exported and reviewed for planning and governance.

Customer Responsibilities & Best Practices

To use the Platform securely and effectively, follow the customer responsibilities and best practices set out in the Shared Responsibilities Model below.

Data Residency & Regional Options

Primary Hosting Regions

The Platform is hosted in selected AWS regions. Region choice can reflect your residency and regulatory needs.

Data Location

Customer data is stored and processed in the AWS region(s) agreed in your contract, with limited cross‑region replication for resilience where applicable.

AI Data Residency

OpenAI may process data in its own infrastructure locations under the terms described at: Enterprise privacy at OpenAI

The Platform minimizes data sent to OpenAI and can document data flows and regions during security due diligence.

Shared Responsibilities Model

Trundl Responsibilities

  • Securing the platform and AWS infrastructure.
  • Managing encryption, access controls, logging, and monitoring.
  • Governing and securing the OpenAI integration.
  • Meeting defined availability and incident response commitments.

Customer Responsibilities

  • Managing identities and access in your own systems (including SSO/IdP).
  • Protecting credentials used for Platform integrations.
  • Defining scopes, mappings, deployment rules, discovery scopes, and retention policies.
  • Establishing internal policies for AI usage and sensitive data handling.

Contact & Security Requests

Security Questions & Due Diligence

  • Your dedicated Customer Success Manager from Trundl

  • General Contact: contact@trundl.com

Incident or Vulnerability Reporting

Contact:

Get in Touch