Quickstart

This guide walks you through the essential steps to get data flowing through Rime: creating a project, connecting Snowflake, setting up a connector, and running your first extraction.

Prerequisites

  • A Snowflake account (any edition)
  • Credentials for at least one data source (a database, SaaS application, or CSV/JSON files)
  • A Rime account (create one on the sign-up page if you don't have one yet)

Step 1: Create a project

After signing in, you land on the Dashboard. Projects are the top-level organisational unit in Rime — they group connectors, infrastructure, transformations, and pipelines together.

  1. Click New Project on the Dashboard or in the sidebar
  2. Enter a project name (e.g., “Production Data” or “Analytics”)
  3. Click Create

You are redirected to the project overview page.

Step 2: Connect Snowflake

Rime needs access to your Snowflake account to manage databases and schemas and to load data.

  1. Navigate to Settings > Snowflake within your project
  2. Enter your Snowflake account identifier (e.g., xy12345.ap-southeast-2)
  3. Provide authentication credentials:
    • Password authentication: enter your Snowflake username and password
    • Key pair authentication: upload or paste your private key
  4. Click Test Connection to verify access
  5. Click Save
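Account identifiers in the legacy locator format look like `xy12345.ap-southeast-2` (locator, then region). A quick sanity check such as the hypothetical helper below (not part of Rime, and it does not cover newer org-based `org-account` identifiers) can catch typos before you hit Test Connection:

```python
import re

# Matches a legacy-format Snowflake account identifier:
# an alphanumeric locator followed by one or more dotted,
# hyphenated region segments, e.g. "xy12345.ap-southeast-2".
IDENTIFIER_RE = re.compile(r"^[A-Za-z0-9]+(\.[a-z0-9]+(-[a-z0-9]+)*)+$")

def looks_like_account_identifier(value: str) -> bool:
    """Return True if value has the locator.region shape."""
    return bool(IDENTIFIER_RE.match(value))

print(looks_like_account_identifier("xy12345.ap-southeast-2"))  # True
print(looks_like_account_identifier("xy12345"))                 # False (region missing)
```

This only checks the shape of the string; Test Connection remains the authoritative check.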

Rime encrypts your credentials at rest using AES-256-GCM. They are never stored in plain text.

Step 3: Provision infrastructure

Before extracting data, Rime needs a destination in Snowflake and an S3 bucket for staging.

  1. Go to Infrastructure in the project sidebar
  2. Click Add Resource and select Snowflake Database
    • Name it (e.g., RAW_DATA)
    • Add a schema (e.g., PUBLIC)
  3. Click Add Resource again and select S3 Bucket
    • Rime generates a unique bucket name
    • An IAM role is created automatically for Snowflake access
  4. Click Plan Changes to see what Rime will create
  5. Review the change preview and click Apply

Rime provisions the database, schema, S3 bucket, IAM role, and Snowpipe configuration. This takes 1-2 minutes.
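Plan Changes works like a dry run: Rime compares the resources you have declared against what already exists and only creates the difference. A minimal stdlib sketch of that idea (resource types and names here are illustrative, not Rime's actual API; the bucket name stands in for the one Rime generates):

```python
# Toy illustration of plan/apply semantics: compute which declared
# resources are missing and would be created on Apply.
desired = {
    ("snowflake_database", "RAW_DATA"),
    ("snowflake_schema", "RAW_DATA.PUBLIC"),
    ("s3_bucket", "rime-staging-a1b2c3"),
}
existing = {
    ("snowflake_database", "RAW_DATA"),  # already provisioned earlier
}

def plan(desired, existing):
    """Return the (type, name) resources that Apply would create."""
    return sorted(desired - existing)

for kind, name in plan(desired, existing):
    print(f"+ create {kind} {name}")
```

Because the plan is a set difference, re-running Apply when everything already exists is a no-op.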

Step 4: Create a connector

Connectors pull data from your source systems.

  1. Go to Connectors in the project sidebar
  2. Click New Connector
  3. Select your source type (e.g., PostgreSQL)
  4. Enter connection details:
    • Host, port, database name
    • Username and password (encrypted at rest)
  5. Click Test Connection to verify
  6. Rime discovers available tables and columns
  7. Select the tables you want to sync
  8. Set a sync schedule (e.g., every 6 hours) or leave it as manual
  9. Click Create Connector
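The settings above boil down to a small configuration object. A hypothetical representation for a PostgreSQL connector (field names are illustrative, not Rime's actual schema; credentials are omitted here because Rime stores them encrypted, never inline):

```python
import json

# Hypothetical connector config mirroring the steps above.
connector = {
    "type": "postgresql",
    "host": "db.example.com",
    "port": 5432,
    "database": "app",
    "tables": ["users", "orders"],     # tables selected after discovery
    "schedule": "every 6 hours",       # or None for manual syncs
}

payload = json.dumps(connector, indent=2)
print(payload)
```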

Step 5: Run your first sync

  1. On the connector detail page, click Sync Now
  2. Watch the progress in real time:
    • Tables are extracted in parallel
    • Row counts update as data flows
    • Any errors appear immediately
  3. When the sync completes, check the run summary for per-table row counts, then query the raw tables in Snowflake to verify the data

The extraction pipeline is: source database → Apache Arrow → Parquet file → S3 → Snowpipe → Snowflake raw table.
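The stages above can be sketched end to end. In this toy stdlib version the Arrow, Parquet, S3, and Snowpipe steps are stand-ins (an in-memory CSV buffer instead of a Parquet file on S3), just to show how rows move through in batches:

```python
import csv
import io

def extract(source_rows, batch_size=2):
    """Yield batches of rows from the source (stand-in for Arrow record batches)."""
    batch = []
    for row in source_rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def stage(batch):
    """Serialize one batch (stand-in for writing a Parquet file to S3)."""
    buf = io.StringIO()
    csv.writer(buf).writerows(batch)
    return buf.getvalue()

source = [("1", "alice"), ("2", "bob"), ("3", "carol")]
staged_files = [stage(b) for b in extract(source)]
rows_loaded = sum(f.count("\n") for f in staged_files)  # stand-in for Snowpipe's load
print(f"staged {len(staged_files)} files, {rows_loaded} rows")
```

The real pipeline batches columns rather than rows and loads continuously via Snowpipe, but the flow is the same: extract in chunks, stage each chunk as a file, count what lands.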

What’s next

You now have data flowing from a source system into Snowflake through Rime. From here:

  • Transform your data — set up Kimball or Data Vault models to shape raw data into analytics-ready tables
  • Build a pipeline — create a DAG pipeline that chains extraction, transformation, and validation steps
  • Set up monitoring — configure alert rules to catch failures and anomalies
  • Enable governance — review masked-by-default settings and classify sensitive columns