Kadoa automatically pushes workflow results to your cloud storage after each run. This is ideal for data pipelines, warehousing, and archival.
## Supported Providers
- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
## Export Formats

Choose which formats to export:
| Format | Description | Use Case |
|---|---|---|
| Parquet | Columnar, compressed | Analytics, Snowflake, Spark |
| JSONL | JSON Lines, one record per line | Streaming, log processing |
| CSV | Comma-separated values | Excel, legacy systems |
Parquet is recommended for analytics workloads. It’s compressed and optimized for columnar queries.
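For analytics, a Parquet export can be loaded straight into a DataFrame. A minimal sketch, with a placeholder bucket and run path; reading `s3://` URLs with pandas requires the `pyarrow` and `s3fs` packages:

```python
import pandas as pd

# Placeholder key following the export layout described below;
# substitute your bucket, team, workflow, and run identifiers.
path = "s3://your-bucket/kadoa/TEAM_ID/WORKFLOW_ID/RUN_FOLDER/data.parquet"

# pandas delegates to pyarrow for Parquet and s3fs for s3:// URLs.
df = pd.read_parquet(path, engine="pyarrow")
print(df.head())
```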
## File Organization
Data is organized by team, workflow, and run:
```
s3://your-bucket/kadoa/{teamId}/{workflowId}/{timestamp}-{jobId}/
├── data.parquet
├── data.jsonl
└── data.csv
```
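Because runs share a predictable prefix, any S3 client can enumerate a workflow's exports. A minimal boto3 sketch, with placeholder bucket and IDs:

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: substitute your bucket, team UUID, and workflow ID.
bucket = "your-bucket"
prefix = "kadoa/TEAM_ID/WORKFLOW_ID/"

# Walk every exported object for this workflow, across all runs.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["LastModified"])
```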
### Path Variables
| Variable | Description | Example |
|---|---|---|
| `{teamId}` | Your team UUID | `a1b2c3d4-...` |
| `{workflowId}` | Workflow identifier | `wf_abc123` |
| `{timestamp}` | Run timestamp | `2025-01-15T10-30-00Z` |
| `{jobId}` | Job identifier | `job_xyz789` |
| `{date}` | Date only | `2025-01-15` |
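Filling the template with the example values from the table yields a concrete object key:

```python
# Path template from above, resolved with the table's example values.
template = "kadoa/{teamId}/{workflowId}/{timestamp}-{jobId}/data.parquet"

key = template.format(
    teamId="a1b2c3d4-...",  # truncated example UUID from the table
    workflowId="wf_abc123",
    timestamp="2025-01-15T10-30-00Z",
    jobId="job_xyz789",
)
print(key)
# kadoa/a1b2c3d4-.../wf_abc123/2025-01-15T10-30-00Z-job_xyz789/data.parquet
```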
## S3 Setup
Contact Kadoa support to configure the data connector with the following details (a sketch of the bucket-policy option follows the list):
- Bucket name
- Region
- Access method (bucket policy or IAM credentials)
- Desired export formats
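If you opt for the bucket policy route, the policy must allow Kadoa's writer principal to put objects under the `kadoa/` prefix. A hedged sketch applied with boto3; the principal ARN is a placeholder that Kadoa support will replace with the real value:

```python
import json
import boto3

bucket = "your-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKadoaExports",
            "Effect": "Allow",
            # Placeholder ARN: Kadoa support provides the actual principal.
            "Principal": {"AWS": "arn:aws:iam::000000000000:role/kadoa-exporter"},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/kadoa/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```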
## Export Behavior

### Automatic Push
After each workflow run completes successfully:
- Data is converted to requested formats
- Files are uploaded to your bucket
- Export is logged for auditing
### Retry Logic
Failed uploads are automatically retried (the pattern is sketched below):
- 3 attempts with exponential backoff (1s, 2s, 4s)
- Failures are logged and can trigger alerts
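For reference, that schedule matches the standard exponential backoff pattern, read here as three retries after the initial attempt. An illustrative sketch, not Kadoa's internal code; `upload` is a hypothetical callable:

```python
import time

def upload_with_retry(upload, max_retries=3, base_delay=1.0):
    """Run `upload`, retrying up to 3 times with 1s, 2s, 4s backoff."""
    attempt = 0
    while True:
        try:
            return upload()
        except Exception:
            if attempt >= max_retries:
                raise  # out of retries: log the failure and trigger alerts
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s
            attempt += 1
```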
### File Metadata

Each uploaded file includes S3 metadata (the sketch below shows how to read it back):
- `x-kadoa-workflow-id`: Workflow identifier
- `x-kadoa-job-id`: Job identifier
- `x-kadoa-format`: File format
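Any S3 client can read these back. A boto3 sketch with placeholder bucket and key; boto3 surfaces user-defined object metadata in the `Metadata` dict of a `head_object` response, so the exact key names may differ slightly from the raw headers above:

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: substitute a real bucket and exported object key.
resp = s3.head_object(Bucket="your-bucket", Key="kadoa/.../data.parquet")

# User-defined metadata is returned here with the x-amz-meta- prefix stripped.
print(resp["Metadata"])
```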
## Use with Snowflake
Cloud storage is the foundation for Snowflake External Tables.
Kadoa → S3 → Snowflake External Table
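A hedged sketch of the Snowflake side, run here through the `snowflake-connector-python` package; connection details, stage name, and table name are all placeholders, and production setups typically use a storage integration instead of inline credentials:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    warehouse="YOUR_WH", database="YOUR_DB", schema="PUBLIC",
)
cur = conn.cursor()

# Stage pointing at the Kadoa export prefix (placeholder credentials).
cur.execute("""
    CREATE OR REPLACE STAGE kadoa_stage
      URL = 's3://your-bucket/kadoa/'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")

# Schemaless external table over the Parquet exports; rows are exposed
# through the VALUE variant column until explicit columns are defined.
cur.execute("""
    CREATE OR REPLACE EXTERNAL TABLE kadoa_results
      LOCATION = @kadoa_stage
      AUTO_REFRESH = FALSE
      FILE_FORMAT = (TYPE = PARQUET)
""")
```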
## Use with Other Data Warehouses
The same S3 data can feed other warehouses:
| Warehouse | Integration Method |
|---|---|
| Snowflake | External Tables |
| BigQuery | External Tables or Data Transfer |
| Redshift | COPY command or Spectrum |
| Databricks | Direct S3 access |
## Next Steps