Available for Enterprise customers.
Kadoa automatically pushes workflow results to your cloud storage after each run. This is ideal for data pipelines, warehousing, and archival.

Supported Providers

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage

Data Formats

Choose which formats to export:
| Format  | Description                     | Use Case                    |
| ------- | ------------------------------- | --------------------------- |
| Parquet | Columnar, compressed            | Analytics, Snowflake, Spark |
| JSONL   | JSON Lines, one record per line | Streaming, log processing   |
| JSON    | JSON with metadata envelope     | APIs, integrations          |
| CSV     | Comma-separated values          | Excel, legacy systems       |
Parquet is recommended for analytics workloads. It’s compressed and optimized for columnar queries.
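
If you consume the exports with pandas, reading each format is a one-liner. This is a minimal sketch; it assumes the files (named as in the layout below) have already been downloaded locally:

```python
# Reading the exported files with pandas; local file names follow the
# File Organization layout below and are assumed to be already downloaded.
import pandas as pd

df_parquet = pd.read_parquet("data.parquet")       # needs pyarrow or fastparquet
df_jsonl = pd.read_json("data.jsonl", lines=True)  # one JSON record per line
df_csv = pd.read_csv("data.csv")                   # for Excel / legacy tooling

print(df_parquet.head())
```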

File Organization

Data is organized by team, workflow, and run:
s3://your-bucket/kadoa/{teamId}/{workflowId}/{runDatetimeSafe}-{runId}/
├── data.parquet
├── data.jsonl
├── data.json
└── data.csv
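
A minimal sketch of discovering run folders for one workflow with boto3; the bucket name is a placeholder and the IDs are the example values from the table below:

```python
# List the per-run folders for a single workflow using the path layout above.
import boto3

s3 = boto3.client("s3")

team_id = "a1b2c3d4-5e6f-7a8b-9c0d-e1f2a3b4c5d6"      # {teamId}
workflow_id = "b2c3d4e5-6f7a-8b9c-0d1e-f2a3b4c5d6e7"  # {workflowId}
prefix = f"kadoa/{team_id}/{workflow_id}/"

# Delimiter="/" groups keys by run folder ({runDatetimeSafe}-{runId}/)
resp = s3.list_objects_v2(Bucket="your-bucket", Prefix=prefix, Delimiter="/")
for run in resp.get("CommonPrefixes", []):
    print(run["Prefix"])
```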

Path Variables

| Variable          | Description             | Example                              |
| ----------------- | ----------------------- | ------------------------------------ |
| {teamId}          | Your team UUID          | a1b2c3d4-5e6f-7a8b-9c0d-e1f2a3b4c5d6 |
| {workflowId}      | Workflow identifier     | b2c3d4e5-6f7a-8b9c-0d1e-f2a3b4c5d6e7 |
| {runId}           | Run identifier          | c3d4e5f6-7a8b-9c0d-1e2f-a3b4c5d6e7f8 |
| {runDatetimeSafe} | Filename-safe datetime  | 2025-01-15_10-30-00Z                 |
| {runDate}         | Run date                | 2025-01-15                           |
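
Resolving those variables gives the full object key for a single run. A sketch using the example values above and a placeholder bucket name:

```python
# Download one run's Parquet file; all IDs are the example values from the
# table and stand in for your own run metadata.
import boto3

s3 = boto3.client("s3")
key = (
    "kadoa/"
    "a1b2c3d4-5e6f-7a8b-9c0d-e1f2a3b4c5d6/"                         # {teamId}
    "b2c3d4e5-6f7a-8b9c-0d1e-f2a3b4c5d6e7/"                         # {workflowId}
    "2025-01-15_10-30-00Z-c3d4e5f6-7a8b-9c0d-1e2f-a3b4c5d6e7f8/"    # {runDatetimeSafe}-{runId}
    "data.parquet"
)
s3.download_file("your-bucket", key, "data.parquet")
```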

S3 Setup

Submit a request through the Support Center to configure the data connector with:
  • Bucket name
  • Region
  • Access method (bucket policy or IAM credentials)
  • Desired export formats
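
If you choose the bucket-policy access method, the grant typically looks like the sketch below. The principal ARN, bucket name, and exact set of actions are placeholders, not Kadoa's real values; use the details confirmed during setup.

```python
# Hedged example: grant a Kadoa-provided AWS principal write access to the
# export prefix via a bucket policy. The principal ARN is a placeholder.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKadoaExports",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/kadoa-export"},  # placeholder
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::your-bucket/kadoa/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="your-bucket", Policy=json.dumps(policy))
```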

Export Behavior

Automatic Push

After each workflow run completes successfully:
  1. Data is converted to requested formats
  2. Files are uploaded to your bucket
  3. Export is logged for auditing

Retry Logic

Failed uploads are automatically retried:
  • 3 attempts with exponential backoff (1s, 2s, 4s)
  • Failures are logged and can trigger alerts
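
The schedule is the standard exponential-backoff pattern. Purely as an illustration (Kadoa runs this server-side; you do not implement it yourself):

```python
# Illustrative sketch of the retry schedule described above: 3 attempts
# with 1s, 2s, 4s delays between them.
import time

def upload_with_retry(upload, attempts=3, base_delay=1.0):
    for attempt in range(attempts):
        try:
            return upload()
        except Exception:
            if attempt == attempts - 1:
                raise                              # final failure is logged / alerted
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s
```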

Metadata

Each uploaded file includes S3 metadata:
  • x-kadoa-workflow-id: Workflow identifier
  • x-kadoa-job-id: Job identifier
  • x-kadoa-format: File format
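
A sketch of reading those keys back with boto3, assuming they are stored as S3 user-defined metadata; the bucket and object key are placeholders:

```python
# Inspect the Kadoa metadata on an uploaded object.
import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="your-bucket", Key="your-object-key")
meta = head["Metadata"]  # user-defined metadata, without the x-amz-meta- prefix

print(meta.get("x-kadoa-workflow-id"))
print(meta.get("x-kadoa-job-id"))
print(meta.get("x-kadoa-format"))
```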

Additional Fields

You can enrich exported rows with extra metadata columns. This is available for CSV, JSONL, and JSON formats (not Parquet). Each additional field has a custom name (the column header) and a value that can be:
  • A static string — e.g. Kadoa or production
  • A dynamic variable — e.g. {workflowId} or {runDate}, resolved at export time
Added fields appear as extra columns after the existing data columns. You can configure different additional fields per export format (CSV, JSONL, JSON).

Examples

| Field Name | Value         | Result                                                      |
| ---------- | ------------- | ----------------------------------------------------------- |
| source     | Kadoa         | Every row gets source = "Kadoa"                             |
| workflow   | {workflowId}  | Every row gets workflow = "b2c3d4e5-..."                    |
| exportedAt | {runDatetime} | Every row gets exportedAt = "2025-01-15T10:30:00.000Z"      |
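
Putting the three example fields together, an enriched JSONL record might look like the sketch below; the title and price columns are hypothetical workflow data:

```python
# Hypothetical exported JSONL record after enrichment. The first two keys are
# made-up data columns; the last three come from the additional fields above.
import json

row = {
    "title": "Example product",                            # data column (hypothetical)
    "price": 19.99,                                        # data column (hypothetical)
    "source": "Kadoa",                                     # static additional field
    "workflow": "b2c3d4e5-6f7a-8b9c-0d1e-f2a3b4c5d6e7",    # {workflowId}
    "exportedAt": "2025-01-15T10:30:00.000Z",              # {runDatetime}
}
print(json.dumps(row))
```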

Available Variables

| Variable      | Description          | Example                              |
| ------------- | -------------------- | ------------------------------------ |
| {teamId}      | Your team UUID       | a1b2c3d4-5e6f-7a8b-9c0d-e1f2a3b4c5d6 |
| {workflowId}  | Workflow identifier  | b2c3d4e5-6f7a-8b9c-0d1e-f2a3b4c5d6e7 |
| {runId}       | Run identifier       | c3d4e5f6-7a8b-9c0d-1e2f-a3b4c5d6e7f8 |
| {runDate}     | Run date             | 2025-01-15                           |
| {runDatetime} | ISO 8601 datetime    | 2025-01-15T10:30:00.000Z             |

Use with Snowflake

Cloud storage is the foundation for Snowflake External Tables.
Kadoa → S3 → Snowflake External Table
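
A hedged sketch of one way to wire this up with snowflake-connector-python; the stage, table, and storage-integration names and the connection details are assumptions, and the dedicated Snowflake External Tables guide covers the supported setup:

```python
# Point a Snowflake external table at the exported Parquet files.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your-account",
    user="your-user",
    password="your-password",
    warehouse="your-warehouse",
    database="your-database",
    schema="your-schema",
)
cur = conn.cursor()

# Stage over the Kadoa export prefix (storage integration assumed to exist).
cur.execute("""
    CREATE STAGE IF NOT EXISTS kadoa_stage
      URL = 's3://your-bucket/kadoa/'
      STORAGE_INTEGRATION = your_s3_integration
""")

# External table over every run's data.parquet; query it via the VALUE column
# and refresh manually with: ALTER EXTERNAL TABLE kadoa_runs REFRESH
cur.execute("""
    CREATE OR REPLACE EXTERNAL TABLE kadoa_runs
      LOCATION = @kadoa_stage
      AUTO_REFRESH = FALSE
      PATTERN = '.*data[.]parquet'
      FILE_FORMAT = (TYPE = PARQUET)
""")
```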

Use with Other Data Warehouses

The same S3 data can feed other warehouses:
| Warehouse  | Integration Method                |
| ---------- | --------------------------------- |
| Snowflake  | External Tables                   |
| BigQuery   | External Tables or Data Transfer  |
| Redshift   | COPY command or Spectrum          |
| Databricks | Direct S3 access                  |
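
For the Databricks row, a minimal PySpark sketch, assuming the cluster already has read access to the bucket (bucket path is a placeholder):

```python
# Read all exported Parquet runs directly from S3 on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read
    .option("recursiveFileLookup", "true")   # descend into the per-run folders
    .parquet("s3://your-bucket/kadoa/")
)
df.show(5)
```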

Next Steps