# Klaviyo

## Pipeline Concepts

Before setting up the pipeline, familiarize yourself with the Pipeline concepts.

## Step by Step Guide

### STEP-1: Configure Connection

To learn about Connections, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/applications/broken-reference).

* Log in to the Sprinkle application
* Navigate to Ingest -> Connections Tab -> New Connection
* Select Klaviyo
* Provide all the mandatory details
  * *Name*: Name to identify this connection
  * *API Key*: Enter the API key provided by Klaviyo. To find out how to get the API key in Klaviyo, [click here](https://help.klaviyo.com/hc/en-us/articles/115005062267-Manage-Your-Account-s-API-Keys)
* Test Connection
* Create
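If you want to sanity-check the API key before configuring the connection, a minimal sketch is shown below. The endpoint and auth style are assumptions based on Klaviyo's legacy v2 REST API (private key passed as an `api_key` query parameter); check the current Klaviyo API reference before relying on it.

```python
# Hypothetical sketch for sanity-checking a Klaviyo private API key.
# Endpoint and auth style are assumptions (Klaviyo legacy v2 API).
import json
import urllib.parse
import urllib.request

API_BASE = "https://a.klaviyo.com/api/v2"

def build_lists_url(api_key: str) -> str:
    """Build the GET /v2/lists URL carrying the private API key."""
    return f"{API_BASE}/lists?{urllib.parse.urlencode({'api_key': api_key})}"

def fetch_lists(api_key: str):
    """Return the account's lists; a bad key raises urllib.error.HTTPError."""
    with urllib.request.urlopen(build_lists_url(api_key)) as resp:
        return json.load(resp)
```

A successful call should return the same `id`/`name` records that the Lists dataset ingests; an authorization error means the key will also fail Sprinkle's Test Connection.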

### STEP-2: Configure Pipeline

To learn about Datasources, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/applications/broken-reference).

* Navigate to Ingest -> Pipeline Tab -> Add
* Select Klaviyo
* Provide the name -> Create
* **Connection Tab**:
  * From the drop-down, select the name of the connection created in STEP-1
  * Update

### STEP-3: Create Dataset

**Datasets Tab**: To learn about Datasets, refer [here](https://docs.sprinkledata.com/product/ingesting-your-data/pipelines/applications/broken-reference). Add a Dataset for each report/dataset that you want to integrate, providing the following details:

* *Report Type* (Required): Select from the drop-down
  * *Global\_Exclusions*
  * *Lists*
  * *Segments*
  * *Campaigns*
  * *Email\_Templates*
* *Flatten Level* (Required): Select One Level or Multi Level. With one-level flattening, complex (nested) types are not flattened; they are stored as strings. With multi-level flattening, complex types are flattened recursively until every field is a simple type.
* *Destination Schema* (Required): Data warehouse schema into which the table will be ingested
* *Destination Table Name* (Required): Name of the table to be created in the warehouse. If not provided, Sprinkle creates a name of the form ds\_\<datasourcename>\_\<tablename>
* *Destination Create Table Clause*: Provide additional clauses to warehouse-create table queries such as clustering, partitioning, and more, useful for optimizing DML statements. [Learn more](https://docs.sprinkledata.com/product/integrating-your-data/data-imports/destination-create-table-clause) on how to use this field.
* Create
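The difference between the two flatten levels can be illustrated with a short sketch. The helper names are hypothetical (this is not Sprinkle's actual implementation), but the nested-key separators follow the style of the `lists_0__id` columns shown under Dataset Fields:

```python
# Illustrative sketch of one-level vs multi-level flattening of a
# nested record, such as a Klaviyo campaign with an embedded list.
import json

record = {"id": "abc", "lists": [{"id": "L1", "name": "news"}]}

def flatten_one_level(rec):
    # One Level: complex values (dicts/lists) are kept as-is,
    # serialized into strings.
    return {k: json.dumps(v) if isinstance(v, (dict, list)) else v
            for k, v in rec.items()}

def flatten_multi_level(rec, prefix=""):
    # Multi Level: recurse until every value is a simple type;
    # nested keys are joined into columns like lists_0__id.
    out = {}
    for k, v in rec.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten_multi_level(v, f"{key}__"))
        elif isinstance(v, list):
            for i, item in enumerate(v):
                if isinstance(item, dict):
                    out.update(flatten_multi_level(item, f"{key}_{i}__"))
                else:
                    out[f"{key}_{i}"] = item
        else:
            out[key] = v
    return out
```

Here one-level flattening keeps `lists` as a single string column, while multi-level flattening expands it into `lists_0__id` and `lists_0__name` columns.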

### STEP-4: Run and schedule Ingestion

In the **Ingestion Jobs** tab:

* Trigger the job using the Run button
* To schedule it, enable Auto-Run and change the frequency if needed

### Dataset Fields

Users can pick the following fields in datasets:

<details>

<summary>email_templates</summary>

* object
* id
* name
* html
* is\_writeable
* created
* updated

</details>

<details>

<summary>metric_timeline</summary>

* object
* id
* name
* html
* is\_writeable
* created
* updated

</details>

<details>

<summary>campaigns</summary>

* object
* id
* name
* subject
* from\_email
* from\_name
* lists\_0\_\_object
* lists\_0\_\_id
* lists\_0\_\_name
* lists\_0\_\_list\_type
* lists\_0\_\_created
* lists\_0\_\_updated
* lists\_0\_\_person\_count
* excluded\_lists
* status
* status\_id
* status\_label
* send\_time
* created
* updated
* num\_recipients
* campaign\_type
* is\_segmented
* message\_type
* template\_id
* sent\_at

</details>

<details>

<summary>segments</summary>

* object
* id
* name
* list\_type
* created
* updated
* person\_count

</details>

<details>

<summary>lists</summary>

* object
* id
* name
* list\_type
* created
* updated
* person\_count

</details>

<details>

<summary>global_exclusions</summary>

* object
* email
* timestamp
* reason

</details>
