# Azure Table Storage

## Pipeline Concepts

Before setting up the Pipeline, learn about Pipeline concepts [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

## Step by Step Guide

### STEP-1: Configure Azure Table Storage Connection

To learn about Connection, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

* Log into Sprinkle application
* Navigate to Ingest -> Connections Tab -> New Connection
* Select Azure Table Storage
* Provide all the mandatory details

  * *Name*: Name to identify this connection
  * *Connection String*: Provide it in the following format:

  `DefaultEndpointsProtocol=https;AccountName=XXXXXX;AccountKey=XXXXXXXXXXXXXXXXX;EndpointSuffix=core.windows.net`

  * *Table Type*: Select the Table API Type
    * *Azure Table*
    * *Azure Cosmos Table*
* Test Connection
* Create
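Before pasting the connection string, it can help to check that it is well-formed. Below is a minimal, stdlib-only sketch (not part of Sprinkle) that splits the `Key=Value;Key=Value` format shown above and reports any missing required keys; the account name and key shown are placeholders, not real credentials:

```python
# Sketch: validate the Key=Value;... shape of an Azure storage connection string.
# The XXXXXX values below are placeholders, not real credentials.
def parse_connection_string(conn_str: str) -> dict:
    """Split 'Key=Value;Key=Value' pairs into a dict (values may contain '=')."""
    parts = {}
    for segment in conn_str.strip().strip(";").split(";"):
        key, _, value = segment.partition("=")  # split on the FIRST '=' only
        parts[key] = value
    return parts

conn = parse_connection_string(
    "DefaultEndpointsProtocol=https;AccountName=XXXXXX;"
    "AccountKey=XXXXXXXXXXXXXXXXX;EndpointSuffix=core.windows.net"
)
required = {"DefaultEndpointsProtocol", "AccountName", "AccountKey", "EndpointSuffix"}
missing = required - conn.keys()
print("missing keys:", missing or "none")  # prints: missing keys: none
```

Splitting on the first `=` only matters because `AccountKey` values are base64-encoded and may themselves end in `=` characters.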

### STEP-2: Configure Azure Table Storage Pipeline

To learn about Pipeline, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

* Navigate to Ingest -> Pipeline Tab -> Add
* Select Azure Table Storage
* Provide the name -> Create
* **Connection Tab**:
  * From the drop-down, select the name of the connection created in STEP-1
  * Update

### STEP-3: Create Dataset

**Datasets Tab**: To learn about Datasets, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines). Add a Dataset for each table that you want to replicate, providing the following details:

* *Azure Table* (Required): Choose the source table from the dropdown.
* *Ingestion Mode* (Required):

  * *Complete*: Ingests the full data from the source table in every ingestion job run. Choose this option if your table is small (<1 million rows) and you want to ingest it infrequently (a few times a day).
  * *Incremental*: Ingests only the changed or inserted rows in every ingestion job run. Choose this option if your table is large and you want to ingest in real-time mode.

  *To know more about Ingestion Modes, refer* [*here*](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/databases/features/ingestion-modes)
* *Date Type*: Ingestion runs from this start date or number of days. In Incremental mode, only the first run pulls data from this date; subsequent runs pull only changed or new rows.
  * *Start Date*: Provide in the format YYYY-MM-DD
  * *No of days*
* *Destination Schema* (Required): The data warehouse schema where the table will be ingested.
* *Destination Table name* (Required): The table name to be created in the warehouse. If not given, Sprinkle will create it as ds\_\<Pipelinename>\_\<tablename>
* *Destination Create Table Clause*: Provide additional clauses to warehouse-create table queries such as clustering, partitioning, and more, useful for optimizing DML statements. [Learn more](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/databases/features/destination-create-table-clause) on how to use this field.
* Create

### STEP-4: Run and schedule Ingestion

In the **Ingestion Jobs** tab:

* Trigger the job using the Run button
* To schedule, enable Auto-Run. Change the frequency if needed
