Kadoa pushes extracted data as Parquet files to an S3 bucket. Snowpipe automatically loads new files into your Snowflake tables as they arrive.

Setup Steps

1. Request an S3 Bucket

Submit a request through the Support Center to set up a Snowflake data connector.
Kadoa will create a dedicated S3 bucket and provide you with a role ARN and a bucket name. You’ll need both in the next step.
See Cloud Storage for more details on data connector configuration.

2. Create a Storage Integration

Using the role ARN and bucket name from step 1, create a storage integration in your Snowflake account:
use role accountadmin;

create storage integration kadoa_s3_integration
    type = external_stage
    storage_provider = 's3'
    enabled = true
    storage_aws_role_arn = '<ROLE_ARN_FROM_KADOA>'
    storage_allowed_locations = ('s3://<BUCKET_NAME_FROM_KADOA>/');

-- Grant usage to your working role
grant usage on integration kadoa_s3_integration to role sysadmin;
Then retrieve the two values Kadoa needs to finalize the connection:
desc integration kadoa_s3_integration;
From the output, send these back to Kadoa:
  • STORAGE_AWS_IAM_USER_ARN — the IAM user ARN that Snowflake generated
  • STORAGE_AWS_EXTERNAL_ID — the external ID for the trust policy
Kadoa will update the S3 bucket’s trust policy using these values to grant your Snowflake account access. We’ll let you know once this is done.
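If you want to pull just those two values out of the DESC output rather than scanning all rows, you can filter the previous result with RESULT_SCAN (an optional convenience; the property names are standard columns in Snowflake's DESC INTEGRATION output):

desc integration kadoa_s3_integration;

select "property", "property_value"
from table(result_scan(last_query_id()))
where "property" in ('STORAGE_AWS_IAM_USER_ARN', 'STORAGE_AWS_EXTERNAL_ID');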

3. Create a Stage

Once Kadoa confirms the bucket is ready, create a stage that points to it. The stage acts as a bridge between your S3 bucket and Snowflake, letting you reference the files in SQL. You’ll also need a file format that tells Snowflake how to read the Parquet files Kadoa produces.
use role sysadmin;

create or replace stage kadoa_s3_stage
    storage_integration = kadoa_s3_integration
    url = 's3://<BUCKET_NAME_FROM_KADOA>/'
    directory = (enable = true);

create or replace file format parquet_format type = 'parquet';
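Once the stage exists, you can optionally verify that Snowflake can reach the bucket by listing its contents:

list @kadoa_s3_stage;

If the trust policy is configured correctly, this returns the Parquet files Kadoa has written so far (or an empty result if none exist yet). An access error here usually means the trust policy update from step 2 hasn't completed.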

4. Create a Target Table

Use INFER_SCHEMA to automatically derive the table columns from the Parquet files:
create or replace table my_data
    using template (
        select array_agg(object_construct(*))
        within group (order by order_id)
        from table(
            infer_schema(
                location => '@kadoa_s3_stage/your-workflow-path/',
                file_format => 'parquet_format'
            )
        )
    );
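If you'd like to inspect the columns INFER_SCHEMA detects before creating the table, you can query it directly; COLUMN_NAME, TYPE, and NULLABLE are standard columns in its output:

select column_name, type, nullable
from table(
    infer_schema(
        location => '@kadoa_s3_stage/your-workflow-path/',
        file_format => 'parquet_format'
    )
);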

5. Set Up Snowpipe

Create a pipe to automatically ingest new files as Kadoa pushes them:
create or replace pipe my_data_pipe
    auto_ingest = true
    as copy into my_data
    from @kadoa_s3_stage/your-workflow-path/
    file_format = (format_name = 'parquet_format')
    match_by_column_name = case_insensitive;
Get the SQS queue ARN from Snowpipe; it appears in the notification_channel column of the output below, and you'll configure it as an S3 event notification on your bucket:
show pipes;
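To pull the queue ARN out of the SHOW PIPES output without scanning all columns, you can filter the previous result with RESULT_SCAN (assuming the pipe name from this step):

select "name", "notification_channel"
from table(result_scan(last_query_id()))
where "name" = 'MY_DATA_PIPE';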
Once the S3 event notification is configured, trigger an initial load of existing files:
alter pipe my_data_pipe refresh;
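To confirm the pipe is healthy and see how many files are queued, you can use SYSTEM$PIPE_STATUS:

select system$pipe_status('my_data_pipe');

The result is a JSON string: executionState should be RUNNING, and pendingFileCount shows the number of files waiting to be loaded.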

6. Query Your Data

Once the pipe has processed files, query the table directly:
select * from my_data;
New extractions from Kadoa will be loaded automatically within minutes.
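To verify that loads are happening as expected, the COPY_HISTORY table function shows recent ingestion activity for the table (the 24-hour window below is just an example):

select file_name, last_load_time, row_count, status
from table(information_schema.copy_history(
    table_name => 'MY_DATA',
    start_time => dateadd(hours, -24, current_timestamp())
));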

Next Steps