🤖 API Pulls

A guide to ingesting data using your application's REST APIs

With API Pulls, you can pull data from any application that exposes REST APIs and ingest it into the data warehouse without any coding.


Datasource Concepts

Before setting up the datasource, learn about datasource concepts here.

Step-by-Step Guide

Step 1: Configure REST API datasource

To learn about datasources, refer here.

  • Navigate to Ingest -> API Pulls Tab -> Create API Pulls

Step 2: Create a Dataset

Datasets Tab: To learn about datasets, refer here. Add a dataset for each API from which you want to ingest data, providing the following details:

  • Name (Required): Provide a name for the dataset.

  • URL with Params (Required): Provide the base URL (with parameters, if any). For example, https://base_url_example?params_key1=params_value1&params_key2=params_value2 (see the request sketch after this list).

  • Sensitive Params (Optional): Provide sensitive parameters if required, for example, params_key1=params_value1&params_key2=params_value2.

  • Request Method (Required)

    • GET

    • POST

      • Body: Raw Data

        • Content Type: Select from the drop-down

        • Raw Data

  • Headers (Optional): The header format is JSON, for example, {"key1":"value1","key2":"value2"}.

  • Data Root (Optional): Provide the JSON path from which data should be extracted (see the Data Root sketch after this list).

    • For example, given {"country":[{"state":"st1","city":"abc"},{"state":"st2","city":"xyz"}],"offset":1}, if the data root is country, then {"state":"st1","city":"abc"} and {"state":"st2","city":"xyz"} are stored as two different rows in the warehouse table. Otherwise, the whole JSON is flattened and stored in a single row.

    • For complex types, give the keys separated by dots (.). For example, given {"book":{"writer":[{"name":"abc"},{"name":"xyz"}]},"offset":1}, a data root of book.writer gives {"name":"abc"} and {"name":"xyz"} in two separate rows.

  • Flatten JSON (Required): Whether to flatten the JSON into a flat schema (see the flattening sketch after this list).

    • No

    • Yes

      • Flatten Level (Required): Select One Level or Multi Level. With One Level, flattening is not applied to complex types; they are stored as strings. With Multi Level, flattening is applied to complex types recursively until they become simple types.

  • Destination Schema (Required): The data warehouse schema into which the table will be ingested.

  • Destination Table Name (Required): The name of the table to be created in the warehouse. If not given, Sprinkle will create it as ds_<datasourcename>_<tablename>.

  • Destination Create Table Clause: Provide additional clauses for the warehouse's create-table query, such as clustering and partitioning; these are useful for optimizing DML statements (learn more on how to use this field).

  • Click 'Create'
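
The URL, params, headers, and request method fields together describe a plain HTTP call. The following is a minimal sketch of how those fields map onto a request, using the placeholder URL, parameter, and header values from the examples above (not a real endpoint):

```python
import requests

base_url = "https://base_url_example"                         # URL with Params (placeholder)
params = {"params_key1": "params_value1",                     # query parameters
          "params_key2": "params_value2"}
headers = {"key1": "value1", "key2": "value2"}                 # Headers (JSON)

# GET: parameters are appended to the URL as a query string
response = requests.get(base_url, params=params, headers=headers)

# POST with Body: Raw Data and an explicit Content-Type
raw_data = '{"field": "value"}'
response = requests.post(base_url, params=params, data=raw_data,
                         headers={**headers, "Content-Type": "application/json"})

print(response.status_code, response.text)
```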
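
The Data Root behaves like a dot-separated path into the API response. Below is a minimal sketch (not Sprinkle's implementation) of how it selects the records that become warehouse rows, using the examples from the list above:

```python
def extract_records(payload, data_root=None):
    """Walk the dot-separated data_root; each element of the resulting list
    becomes one warehouse row. Without a data_root, the whole payload is one row."""
    if not data_root:
        return [payload]
    node = payload
    for key in data_root.split("."):      # e.g. "book.writer"
        node = node[key]
    return node if isinstance(node, list) else [node]

response = {"country": [{"state": "st1", "city": "abc"},
                        {"state": "st2", "city": "xyz"}],
            "offset": 1}
print(extract_records(response, "country"))
# -> [{'state': 'st1', 'city': 'abc'}, {'state': 'st2', 'city': 'xyz'}]  (two rows)

nested = {"book": {"writer": [{"name": "abc"}, {"name": "xyz"}]}, "offset": 1}
print(extract_records(nested, "book.writer"))
# -> [{'name': 'abc'}, {'name': 'xyz'}]  (two rows)
```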
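
The flatten level determines how nested values end up as columns. Below is a minimal sketch (not Sprinkle's implementation) of the difference between One Level and Multi Level, using a hypothetical record:

```python
import json

def flatten(record, multi_level=False, prefix=""):
    """One Level: nested values are kept as JSON strings.
    Multi Level: nested objects are flattened recursively until every
    column holds a simple value."""
    flat = {}
    for key, value in record.items():
        column = prefix + key
        if isinstance(value, dict) and multi_level:
            flat.update(flatten(value, multi_level=True, prefix=column + "."))
        elif isinstance(value, (dict, list)):
            flat[column] = json.dumps(value)   # complex type stored as a string
        else:
            flat[column] = value
    return flat

record = {"state": "st1", "address": {"city": "abc", "pin": 560001}}
print(flatten(record))                     # One Level
# {'state': 'st1', 'address': '{"city": "abc", "pin": 560001}'}
print(flatten(record, multi_level=True))   # Multi Level
# {'state': 'st1', 'address.city': 'abc', 'address.pin': 560001}
```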

Step 3: Run and Schedule Ingestion

In the Ingestion Jobs tab:

  • Trigger the Job using the 'Run' button.

  • To schedule, enable Auto-Run. Change the frequency if required.
