# API Pulls

With API Pulls, you can pull data from any application that exposes REST APIs and ingest it into the data warehouse without any coding.

### Watch Video :tv:

{% embed url="https://youtu.be/DZUU0pleUQE" %}
API Pulls : Explanation & Feature Walkthrough
{% endembed %}

## Datasource Concepts

Before setting up the datasource, review the datasource concepts [here](/product/ingesting-your-data/pipelines.md).

## Step by Step Guide

### Step 1: Configure REST API datasource

To learn about datasources, refer [here](/product/ingesting-your-data/pipelines.md).

* Navigate to **Ingest** -> **API Pulls** tab -> **Create API Pulls**

### Step 2: Create a Dataset

**Datasets Tab**: To learn about datasets, refer [here](/product/ingesting-your-data/pipelines.md). Add a dataset for each API from which you want to ingest data, providing the following details:

* **Name** *(Required)*: Provide a name for the dataset.
* **URL with Params** *(Required)*: Provide the base URL (with parameters, if any). For example, `https://base_url_example?params_key1=params_value1&params_key2=params_value2`.
* **Sensitive Params** *(Optional)*: Provide sensitive parameters if required, for example, `params_key1=params_value1&params_key2=params_value2`.
* **Request Method** *(Required)*
  * *GET*
  * *POST*
    * *Body:* Raw Data
      * *Content Type*: Select from the drop-down
      * *Raw Data*: Provide the request body.
* **Headers** *(Optional)*: Headers are given in JSON format, for example, `{"key1":"value1","key2":"value2"}`.
* **Data Root** *(Optional)*: Give the JSON path from which data should be extracted.
  * For example, given `{"country":[{"state":"st1","city":"abc"},{"state":"st2","city":"xyz"}],"offset":1}`, if the data root is `country`, then `{"state":"st1","city":"abc"}` and `{"state":"st2","city":"xyz"}` are stored as two separate rows in the warehouse table. Otherwise, the whole JSON is flattened and stored in a single row.
  * For nested types, give the keys separated by dots (`.`). For example, given `{"book":{"writer":[{"name":"abc"},{"name":"xyz"}]},"offset":1}`, the data root `book.writer` yields `{"name":"abc"}` and `{"name":"xyz"}` as two separate rows.
* **Flatten Json** *(Required)*: Choose whether to flatten the JSON into a flat schema.
  * No
  * Yes
    * **Flatten Level** *(Required)*: Select **One Level** or **Multi Level**. With one-level flattening, complex types are not flattened; they are stored as strings. With multi-level flattening, complex types are flattened recursively until every field is a simple type.
* **Destination Schema** *(Required)*: The data warehouse schema into which the table will be ingested.
* **Destination Table Name** *(Required)*: The name of the table to be created in the warehouse. If not given, Sprinkle creates a name like `ds_<datasourcename>_<tablename>`.
* **Destination Create Table Clause** *(Optional)*: Provide additional clauses for the warehouse's create-table query, such as clustering and partitioning, which are useful for optimizing DML statements ([Learn more](https://docs.sprinkledata.com/product/integrating-your-data/data-imports/destination-create-table-clause) on how to use this field).
* Click 'Create'
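The Data Root and Flatten Json options above can be illustrated in plain Python. This is only a sketch of the documented behavior, not Sprinkle's actual implementation; the helper names are hypothetical, and how lists are handled under multi-level flattening is an assumption here.

```python
import json

def extract_data_root(payload: dict, data_root: str) -> list:
    """Walk a dot-separated key path and return the rows found there,
    mirroring the documented Data Root behavior (hypothetical helper)."""
    node = payload
    for key in data_root.split("."):
        node = node[key]
    # Each element of the resulting list becomes one warehouse row.
    return node if isinstance(node, list) else [node]

def flatten(row: dict, multi_level: bool, prefix: str = "") -> dict:
    """One-level flattening stores complex values as strings; multi-level
    flattening recurses into nested objects until values are simple types.
    Lists are kept as JSON strings here (an assumption of this sketch)."""
    out = {}
    for key, value in row.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict) and multi_level:
            out.update(flatten(value, True, f"{name}."))
        elif isinstance(value, (dict, list)):
            out[name] = json.dumps(value)  # stored as a string column
        else:
            out[name] = value
    return out

payload = {"country": [{"state": "st1", "city": "abc"},
                       {"state": "st2", "city": "xyz"}],
           "offset": 1}
rows = extract_data_root(payload, "country")
print(rows)  # two rows, one per list element
```

With data root `country`, the two state objects land in separate rows, matching the example above; without a data root, the whole payload would be flattened into one row.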

### Step 3: Run and Schedule Ingestion

In the **Ingestion Jobs** tab:

* Trigger the Job using the '**Run**' button.
* To schedule, enable **Auto-Run**. Change the frequency if required.
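Before triggering the job, it can help to confirm that the URL, parameters, and headers you configured in Step 2 describe the request you expect. A minimal sketch using Python's standard library that builds (without sending) the equivalent GET request; the URL, parameter, and header values are placeholders from the examples above, not real endpoints:

```python
import urllib.parse
import urllib.request

# Placeholder values matching the dataset fields from Step 2.
base_url = "https://base_url_example"
params = {"params_key1": "params_value1", "params_key2": "params_value2"}
headers = {"key1": "value1", "key2": "value2"}

# Assemble the same GET request the connector would issue (not sent here).
url = f"{base_url}?{urllib.parse.urlencode(params)}"
request = urllib.request.Request(url, headers=headers, method="GET")

print(request.full_url)
print(request.get_method())
```

Calling `urllib.request.urlopen(request)` would then return the raw JSON response, which you can inspect to choose a suitable Data Root before the first ingestion run.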

