Available for Enterprise customers.
Kadoa automatically pushes workflow results to your cloud storage after each run. This is ideal for data pipelines, warehousing, and archival.

Supported Providers

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage

Data Formats

Choose which formats to export:
Format    Description                       Use Case
Parquet   Columnar, compressed              Analytics, Snowflake, Spark
JSONL     JSON Lines, one record per line   Streaming, log processing
CSV       Comma-separated values            Excel, legacy systems
Parquet is recommended for analytics workloads. It’s compressed and optimized for columnar queries.
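
All three formats load cleanly into standard tooling. A minimal sketch with pandas, assuming the exported files have been downloaded locally (the pyarrow package is needed for Parquet support):

import pandas as pd

# Parquet: columnar and compressed, suited to analytics workloads
df = pd.read_parquet("data.parquet")

# JSONL: one JSON record per line, convenient for streaming pipelines
df = pd.read_json("data.jsonl", lines=True)

# CSV: widest compatibility, e.g. Excel and legacy systems
df = pd.read_csv("data.csv")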

File Organization

Data is organized by team, workflow, and run:
s3://your-bucket/kadoa/{teamId}/{workflowId}/{timestamp}-{jobId}/
├── data.parquet
├── data.jsonl
└── data.csv

Path Variables

Variable       Description           Example
{teamId}       Your team UUID        a1b2c3d4-...
{workflowId}   Workflow identifier   wf_abc123
{timestamp}    Run timestamp         2025-01-15T10-30-00Z
{jobId}        Job identifier        job_xyz789
{date}         Date only             2025-01-15
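
Because each run's folder begins with {timestamp}-{jobId}, keys sort lexicographically in chronological order, which makes the latest run easy to find. A minimal sketch with boto3; the bucket name and team ID are placeholders:

import boto3

s3 = boto3.client("s3")

BUCKET = "your-bucket"                     # placeholder
PREFIX = "kadoa/your-team-id/wf_abc123/"   # placeholder {teamId}/{workflowId}

# First page only (up to 1,000 keys); use a paginator for large buckets
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
keys = sorted(obj["Key"] for obj in resp.get("Contents", []))

# {timestamp} sorts lexicographically, so the last key belongs to the newest run
print(keys[-1] if keys else "no exports yet")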

S3 Setup

Contact Kadoa support to configure the data connector with:
  • Bucket name
  • Region
  • Access method (bucket policy or IAM credentials; a bucket-policy sketch follows this list)
  • Desired export formats
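
If you choose the bucket-policy route, the policy looks roughly like the sketch below. The principal ARN is a placeholder; Kadoa support provides the actual value during setup.

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowKadoaExports",
        "Effect": "Allow",
        # Placeholder principal; use the ARN provided by Kadoa support
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/kadoa-export"},
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::your-bucket/kadoa/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="your-bucket", Policy=json.dumps(policy))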

Export Behavior

Automatic Push

After each workflow run completes successfully:
  1. Data is converted to requested formats
  2. Files are uploaded to your bucket
  3. Export is logged for auditing
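
Because each run lands as new objects in your bucket, downstream processing can be event-driven rather than polled. A minimal sketch of an AWS Lambda handler, assuming the bucket is configured to send s3:ObjectCreated:* notifications to the function:

def handler(event, context):
    """React to new Kadoa exports arriving in the bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Act once per run by keying off the Parquet file
        if key.endswith("data.parquet"):
            print(f"New export ready: s3://{bucket}/{key}")
            # ... trigger downstream processing here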

Retry Logic

Failed uploads are automatically retried:
  • 3 attempts with exponential backoff (1s, 2s, 4s), as sketched below
  • Failures are logged and can trigger alerts
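
This schedule corresponds to a standard exponential backoff loop. For illustration, a sketch of the equivalent logic, where upload is a hypothetical stand-in for the actual upload call:

import time

def upload_with_retry(upload, retries=3, base_delay=1.0):
    for attempt in range(retries + 1):
        try:
            return upload()
        except Exception:
            if attempt == retries:
                raise  # final failure: logged, can trigger alerts
            time.sleep(base_delay * 2 ** attempt)  # sleeps 1s, 2s, 4s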

Metadata

Each uploaded file includes S3 object metadata; a read-back sketch follows this list:
  • x-kadoa-workflow-id: Workflow identifier
  • x-kadoa-job-id: Job identifier
  • x-kadoa-format: File format
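
The metadata can be read back with a HEAD request. A sketch with boto3, where the bucket and key are placeholders; note that boto3 strips the x-amz-meta- wire prefix from user-defined metadata, so the key names you see may differ slightly from the header form above:

import boto3

s3 = boto3.client("s3")
resp = s3.head_object(
    Bucket="your-bucket",  # placeholder
    Key="kadoa/your-team-id/wf_abc123/2025-01-15T10-30-00Z-job_xyz789/data.parquet",
)
print(resp["Metadata"])  # workflow id, job id, and format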

Use with Snowflake

Cloud storage is the foundation for Snowflake External Tables.
Kadoa → S3 → Snowflake External Table
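
A sketch of the Snowflake side, run here through the snowflake-connector-python package: an external stage over the export prefix, then an external table over the Parquet files. The integration, stage, and table names are placeholders, and the storage integration is assumed to already exist.

import snowflake.connector

conn = snowflake.connector.connect(
    account="your-account", user="your-user", password="your-password",
)
cur = conn.cursor()

# Stage pointing at the Kadoa export prefix (integration name is a placeholder)
cur.execute("""
    CREATE STAGE IF NOT EXISTS kadoa_stage
      URL = 's3://your-bucket/kadoa/'
      STORAGE_INTEGRATION = kadoa_s3_integration
""")

# External table reading the Parquet exports in place
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS kadoa_raw
      LOCATION = @kadoa_stage
      FILE_FORMAT = (TYPE = PARQUET)
""")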

Use with Other Data Warehouses

The same S3 data can feed other warehouses:
Warehouse    Integration Method
Snowflake    External Tables
BigQuery     External Tables or Data Transfer
Redshift     COPY command or Spectrum
Databricks   Direct S3 access
