# Azure EventHub

## Pipeline Concepts

Before setting up the Pipeline, learn about Pipeline concepts [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

## Step by Step Guide

### STEP-1: Configure EventHub Connection

To learn about Connection, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

* Log into Sprinkle application
* Navigate to Datasources -> Connections Tab -> New Connection
* Select Azure Eventhub
* Provide all the mandatory details
  * *Name*: Name to identify this connection
  * *EventHub Connection String*: Provide in the format -> Endpoint=sb://\<namespace>.servicebus.windows.net/;SharedAccessKeyName=XXXX;SharedAccessKey=XXXXXXXXXXXXX
* Test Connection
* Create
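An Event Hubs connection string is a semicolon-delimited list of key=value pairs. As a sanity check before pasting the value in, it can be split into its parts; the sketch below is illustrative only (the namespace, key name, and key are placeholder values, not real credentials):

```python
def parse_connection_string(conn_str):
    """Split a semicolon-delimited Azure connection string into key/value pairs."""
    parts = {}
    for segment in conn_str.strip().rstrip(";").split(";"):
        # partition on the first '=' only, since key values may themselves contain '='
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Placeholder values for illustration
example = ("Endpoint=sb://mynamespace.servicebus.windows.net/;"
           "SharedAccessKeyName=RootManageSharedAccessKey;"
           "SharedAccessKey=XXXXXXXXXXXXX;"
           "EntityPath=myhub")
print(parse_connection_string(example))
```

If any expected key is missing from the parsed result, the Test Connection step will likely fail.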

### STEP-2: Configure EventHub Pipeline

To learn about Pipeline, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines)

* Navigate to Ingest -> Pipeline Tab -> Add
* Select Azure Eventhub
* Provide the name -> Create
* **Connection Tab**:
  * From the drop-down, select the name of the connection created in STEP-1
  * Update

### STEP-3: Create Dataset

**Datasets Tab**: To learn about Dataset, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines). Add a Dataset for each table that you want to replicate, providing the following details:

* *Stream Name* (Required)
* *Automatic Schema* (Required):
  * *Yes*: Schema is automatically discovered by Sprinkle (Recommended)
    * *Flatten Level* (Required): Select One Level or Multi Level. With One Level, flattening is not applied to complex types; they are stored as strings. With Multi Level, complex types are flattened recursively until only simple types remain.
  * *No*: Schema needs to be provided
    * *Warehouse Schema*: Provide the row schema
* *Destination Schema* (Required): Data warehouse schema where the table will be ingested
* *Destination Table Name* (Required): Name of the table to be created in the warehouse. If not given, Sprinkle will create it as ds\_\<Pipelinename>\_\<tablename>
* *Destination Create Table Clause*: Provide additional clauses to warehouse-create table queries such as clustering, partitioning, and more, useful for optimizing DML statements. [Learn more](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/databases/features/destination-create-table-clause) on how to use this field.
* Create
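The One Level vs Multi Level distinction above can be sketched in Python. This is a simplified illustration of the idea, not Sprinkle's actual implementation; the `_` separator and sample field names are assumptions:

```python
import json

def flatten(obj, level="multi", prefix=""):
    """Flatten a nested event record.
    level="one":   nested objects are kept as JSON strings (complex types as strings).
    level="multi": nested objects are flattened recursively into simple-typed columns."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            if level == "one":
                out[name] = json.dumps(value)  # complex type stored as a string column
            else:
                out.update(flatten(value, level, prefix=f"{name}_"))
        else:
            out[name] = value
    return out

event = {"id": 1, "user": {"name": "a", "geo": {"city": "x"}}}
print(flatten(event, "one"))    # {'id': 1, 'user': '{"name": "a", "geo": {"city": "x"}}'}
print(flatten(event, "multi"))  # {'id': 1, 'user_name': 'a', 'user_geo_city': 'x'}
```

Multi Level yields one simple-typed column per leaf field, while One Level keeps each top-level complex field as a single string column you would parse later in the warehouse.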

### STEP-4: Run and schedule Ingestion

In the **Ingestion Jobs** tab:

* Trigger the job using the Run button
* To schedule, enable Auto-Run. Change the frequency if needed
