Ingest

An overview of how to ingest data spread across various systems into your data warehouse with Sprinkle

Ingest: The basics

Your data may be spread across many different systems. Sprinkle helps you bring it all together by ingesting it into your data warehouse.
The Ingest module in Sprinkle replicates data from these source systems into your cloud data warehouse.
It's important to know the following concepts when setting up your Data Sources for Ingestion in Sprinkle:
  • Connection: The source endpoint details. A single connection can be shared across multiple Data Imports.
  • Data Imports: A scheduled pipeline that replicates data from your sources to your data warehouse. Here you define which tables to replicate, the frequency, and so on.
  • File Uploads: Ingest data from CSV and Excel files into your warehouse.
  • API Pulls: Ingest data into the data warehouse from applications or in-house REST APIs.
  • Webhooks: Push real-time data from your applications to the data warehouse via a webhook (see the sketch after this list).
Other Terms
  • Dataset: A single Data Import typically imports multiple datasets into your destination warehouse. Each table that you want to replicate is configured as a dataset.
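
As a rough illustration of the Webhooks option above, the sketch below pushes a JSON event to a webhook endpoint with a plain HTTP POST. The endpoint URL, token, and payload fields are hypothetical placeholders for illustration only, not Sprinkle's actual API; use the URL and credentials shown in your own webhook configuration.

```python
import requests  # pip install requests

# Hypothetical webhook endpoint and token -- substitute the values from
# your own webhook configuration.
WEBHOOK_URL = "https://example.com/webhooks/ingest/orders"
AUTH_TOKEN = "replace-with-your-token"

# An example event; in practice your application emits one of these
# whenever something you want to track happens (an order, a signup, etc.).
event = {
    "order_id": "ORD-1001",
    "amount": 499.0,
    "currency": "INR",
    "created_at": "2024-01-15T10:32:00Z",
}

response = requests.post(
    WEBHOOK_URL,
    json=event,
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    timeout=10,
)
response.raise_for_status()  # fail loudly if the push was rejected
```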

Next: Transform Your Data

Sprinkle follows the modern ELT approach. The data is transformed after arriving in your data warehouse. This decouples the transformation logic from data ingestion, allowing you to change the logic easily and independently.
You also retain both raw and derived tables in your data warehouse, giving you a central data lake/warehouse that can be used in other tools and for data science purposes as well.
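
To make the ELT idea concrete, here is a minimal sketch of transforming data after it has landed in the warehouse: the raw ingested table is kept untouched, and a derived table is built from it with a query. Python's built-in sqlite3 stands in for your warehouse, and the raw_orders and daily_revenue table names are hypothetical examples.

```python
import sqlite3

# sqlite3 stands in for the cloud data warehouse in this sketch.
conn = sqlite3.connect(":memory:")

# 1. "Load": raw data lands in the warehouse exactly as ingested.
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount REAL, created_at TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [
        ("ORD-1001", 499.0, "2024-01-15"),
        ("ORD-1002", 199.0, "2024-01-15"),
        ("ORD-1003", 899.0, "2024-01-16"),
    ],
)

# 2. "Transform": a derived table is built from the raw table afterwards.
#    Changing this logic never requires re-ingesting the source data.
conn.execute("""
    CREATE TABLE daily_revenue AS
    SELECT created_at AS day, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY created_at
""")

print(conn.execute("SELECT * FROM daily_revenue ORDER BY day").fetchall())
# [('2024-01-15', 698.0), ('2024-01-16', 899.0)]
```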
Learn more about Transformations here.