Data Imports

Guide to integrate your Data Sources with Sprinkle


The basics

Your data may reside in various systems. Sprinkle helps you bring all the data together by ingesting it into your data warehouse.

Data Imports is a scheduled pipeline that replicates data from different sources into your cloud data warehouse, providing a centralized platform for creating and monitoring your data ingestion pipelines.

When setting up a Data Import, it's crucial to understand the following concepts:

  • Connection: Holds the source endpoint details for your source system. Save a connection during the 'Establish Connection' step and reuse it to configure different Data Import pipelines.

  • Dataset: A single data source typically comprises multiple datasets. Each table you wish to replicate is configured as a dataset. During the 'Select Datasets' step, you can add datasets to your Data Import pipeline.

📢 With Data Imports, you can effortlessly define source endpoints, select the tables (datasets) for ingestion, and then run, schedule, and monitor the ingestion process.
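To make these two concepts concrete, here is a minimal Python sketch of how they relate. The class and field names are illustrative only, not Sprinkle's actual API:

```python
from dataclasses import dataclass, field

# Illustrative model only -- not Sprinkle's API. It mirrors the concepts
# above: a saved Connection is reused across pipelines, and each pipeline
# replicates one or more Datasets (tables).

@dataclass
class Connection:
    """Source endpoint details, saved once in 'Establish Connection'."""
    name: str
    host: str
    port: int
    username: str

@dataclass
class Dataset:
    """One table to replicate, added in 'Select Datasets'."""
    table_name: str

@dataclass
class DataImport:
    """A scheduled pipeline: one connection, many datasets."""
    connection: Connection
    datasets: list[Dataset] = field(default_factory=list)
    schedule: str = "daily"  # anywhere from real-time to monthly

# The same saved connection can back several Data Import pipelines.
orders_db = Connection("prod-mysql", "db.example.com", 3306, "reader")
pipeline = DataImport(orders_db, [Dataset("orders"), Dataset("customers")], "hourly")
```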

Watch Video

📺 Data Imports: Explanation & Feature Walkthrough

Steps to set up a Data Import Pipeline
  • Click on 'Ingest' in the left navigation menu and navigate to 'Data Imports'.

  • Click on the '+ Setup Sources' button to create a new Data Import.

Select the source type to begin. Sprinkle supports ingestion from 100+ sources, including databases, files, events, and applications (marketing, CRM, etc.).

The journey to set up a Data Import consists of three steps: 'Establish Connection', 'Select Datasets', and 'Run & Schedule'. Progress can be tracked from the status header at the top.

1. Establish Connection

In this step, you provide and test the connection endpoints for the source type selected above.

You can create a new connection or use a saved connection. Fill in the endpoints and click on 'Test Connection' to check if the connection can be established with the endpoints provided.

  • Test Connection: Checks whether a connection can be established and displays the status: passed or failed.

  • Test & Save: Tests the connection and saves the endpoints. This is required before proceeding to the next step.

Once the connection endpoints are saved and the Test Connection status is passed, you can proceed to the next step.
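For intuition, the first thing such a test does is confirm the endpoint is reachable at all. Below is a minimal sketch of that reachability half, using a placeholder host and port; this is not Sprinkle's implementation, which also validates credentials and permissions:

```python
import socket

def test_connection(host: str, port: int, timeout: float = 5.0) -> str:
    """Reachability check: can a TCP connection be opened to the
    source endpoint? A real test would also validate credentials."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "passed"
    except OSError as exc:
        return f"failed: {exc}"

# 'db.example.com' and port 5432 are placeholder values for illustration.
print(test_connection("db.example.com", 5432))
```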

2. Select Datasets

Here you can select all the datasets (tables) you want included in the ingestion. Refer to the individual data source pages under the Databases, Files, Events, and Applications (marketing, CRM, etc.) categories to see which datasets (tables) each source supports.

After selecting at least one dataset (table) to ingest from the source, you can move to the next step, Run & Schedule.

3. Run & Schedule

In this final step, you can run the Data Import job and schedule future runs as needed.

  • Run Now: Pushes the job to the queue for immediate execution.

  • Autorun: Enable Autorun to schedule the ingestion as needed. The run frequency can be chosen from multiple options, ranging from real-time to monthly (see the sketch below).
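As a rough illustration of what those frequencies mean, the mapping below expresses common options as cron schedules. Sprinkle presents frequencies as UI choices; the cron expressions are assumed equivalents, not Sprinkle configuration:

```python
# Illustrative only: Autorun frequencies expressed as cron schedules.
FREQUENCY_TO_CRON = {
    "hourly":  "0 * * * *",   # top of every hour
    "daily":   "0 2 * * *",   # 02:00 every day
    "weekly":  "0 2 * * 1",   # 02:00 every Monday
    "monthly": "0 2 1 * *",   # 02:00 on the 1st of each month
}

for freq, cron in FREQUENCY_TO_CRON.items():
    print(f"{freq:>8}: {cron}")
```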

After running the job, the job table appears below, displaying details such as tables ingested, time taken, number of records, bad records, and more.

