
Google BigQuery

Guide to integrating your BigQuery with Sprinkle


Datasource Concepts

Before setting up the datasource, learn about datasource concepts in the Sprinkle documentation.

Step by Step Guide

STEP-1: Configure BigQuery Connection

To learn about Connections, refer to the Connections documentation.

  • Log in to the Sprinkle application

  • Navigate to Datasources -> Connections Tab -> New Connection

  • Select BigQuery

  • Provide all the mandatory details

    • Name: A name to identify this connection

    • Project Id: The GCP project ID in which the BigQuery instance is created

    • Dataset Id: Datasets are top-level containers that are used to organize and control access to your tables

  • Connect to Google: Authorise Sprinkle to connect

  • Test Connection

  • Create
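
Before entering them in Sprinkle, you can sanity-check the Project Id and Dataset Id with a quick query in the BigQuery console. This check is optional, and the project and dataset names below are hypothetical:

    -- Lists the tables in the dataset that Sprinkle will read from.
    -- Replace my-gcp-project and analytics_dataset with your own IDs.
    SELECT table_name
    FROM `my-gcp-project.analytics_dataset.INFORMATION_SCHEMA.TABLES`
    ORDER BY table_name;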

STEP-2: Configure BigQuery datasource

To learn about Datasources, refer to the Datasources documentation.

  • Navigate to Datasources -> Datasources Tab -> Add

  • Select BigQuery

  • Provide the name -> Create

  • Connection Tab:

    • From the drop-down, select the name of the connection created in STEP-1

    • Update

STEP-3: Create Dataset

In the Datasets Tab, add a Dataset for each table that you want to replicate, providing the following details. To learn about Datasets, refer to the Datasets documentation.

  • Database Name (Required): Database name to select the table from

  • Schema Name (Required): Schema name to select the table from

  • Table Type (Required):

    • Table: Select Table if you want to replicate a source table

      • Source Table (Required): Name of the BigQuery table

      • Exclude Columns (Optional): List of columns to exclude

      • Mask Columns (Optional): List of columns to be masked using SHA256 (see the example after this list)

      • Mode (Required): To know more about Ingestion Modes, refer to the Ingestion Modes documentation.

        • Complete: Ingest the full data from the source table in every ingestion job run. Choose this option if your table is small (under 1 million rows) and you want to ingest it infrequently (a few times a day)

        • Incremental: Ingest only the changed or newly inserted rows in every ingestion job run. Choose this option if your table is large and you want to ingest in realtime mode. This requires a time column and a unique key to be present in the table.

          • Remove Duplicate Rows: Duplicate rows are removed by keeping the latest row for each unique key (a conceptual SQL sketch appears after this list)

            • Unique Keys: List of columns that together form the unique key

            • Fetch With Delay (in minutes): Use this to apply a delay while downloading records. When a non-zero value N is specified, only records inserted or updated on the source more than N minutes ago are ingested, instead of data up to the current time. This is useful when the most recent data is not yet fully reflected on the source due to heavy load. Leave it at 0 if not needed.

          • Time Column: Rows with a newer value in this column are ingested in each run

          • Destination Table Partition: Creates partitions of the destination table according to the specified column (for example, a date column). The default is NO. If YES, the destination table will be partitioned; otherwise a normal materialized table is created. This feature is only applicable when the destination warehouse is BigQuery (see the illustration after this list).

            • Partition Column: Enter the partition column name. Partitions are created on the destination table using this column's values. The selected column's values should remain static even if other fields are updated, and the column should be of Date type or easily convertible to a Date format. If no column is selected, partitions are created based on the job run time.

            • Partition Date Format: Enter the date format of the partition column.

          • Periodic Full Fetch: If YES, the full table is periodically fetched from the source. This is useful if data gets deleted from the source and you want to keep the warehouse in sync with it. If NO, only incremental data is pulled in every run.

      • Filter clause: Filter the data while ingesting. The ingestion runs select * from table_name where CONDITION; provide only the CONDITION here (see the example after this list).

    • Query: Select Query if you want to ingest the output of a BigQuery query into the data warehouse

      • Query (SQL) (Required): The query that will be executed on BigQuery in each ingestion job run. If the schema is not the default one, use fully qualified names (mention the schema_name in the query), e.g. select * from schema_name.table_name; (see the example after this list)

  • Destination Schema (Required): Data warehouse schema into which the table will be ingested

  • Destination Table name (Required): The table name to be created in the warehouse. If not given, Sprinkle will create it as ds_<datasourcename>_<tablename>

  • Destination Create Table Clause: Provide additional clauses for the warehouse create-table query, such as clustering, partitioning, and more, useful for optimizing DML statements (see the example after this list)

  • Create
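
Mask Columns replaces values with their SHA256 hash. As a rough illustration of what a masked column could look like (the exact encoding of the hash is up to Sprinkle; the email column and table name are hypothetical):

    -- BigQuery's SHA256 returns bytes; hex encoding shown for readability.
    SELECT TO_HEX(SHA256(email)) AS email
    FROM source_dataset.users;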
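
A minimal SQL sketch of the Remove Duplicate Rows behaviour in Incremental mode, assuming a hypothetical orders table with Unique Key order_id and Time Column updated_at. The actual deduplication is handled by Sprinkle during ingestion; this only illustrates the "latest row per unique key" rule:

    -- Keep only the most recent row for each unique key.
    SELECT * EXCEPT (rn)
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (
               PARTITION BY order_id        -- Unique Keys
               ORDER BY updated_at DESC     -- Time Column, newest first
             ) AS rn
      FROM ingested_orders
    )
    WHERE rn = 1;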
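
To illustrate Destination Table Partition, here is roughly what a date-partitioned destination table looks like in BigQuery when the option is set to YES. Table and column names are hypothetical, and Sprinkle generates the actual DDL:

    -- order_date is the Partition Column (a DATE column).
    CREATE TABLE warehouse_schema.ds_bigquery_orders
    PARTITION BY order_date
    AS SELECT * FROM source_staging.orders;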
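
An example of the Filter clause field, with a hypothetical condition: if you enter status = 'shipped' AND order_date >= '2024-01-01', the ingestion effectively runs:

    SELECT *
    FROM table_name
    WHERE status = 'shipped' AND order_date >= '2024-01-01';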
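
An example of the Query (SQL) field using a fully qualified table name; the dataset and table names are hypothetical:

    -- schema_name (the BigQuery dataset) is mentioned explicitly.
    SELECT order_id, customer_id, amount
    FROM sales_dataset.orders;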
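
An example of a Destination Create Table Clause for a BigQuery warehouse; the column names are hypothetical, and the clauses your destination warehouse accepts may differ:

    -- Appended to the CREATE TABLE statement Sprinkle generates.
    PARTITION BY DATE(created_at)
    CLUSTER BY customer_id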

STEP-4: Run and schedule Ingestion

In the Ingestion Jobs tab:

  • Trigger the job using the Run button

  • To schedule it, enable Auto-Run and change the frequency if needed
