Branch

Guide to integrating your Branch data source with Sprinkle

Datasource Concepts

Before setting up the datasource, learn about datasource concepts here

Step by Step Guide

STEP-1: Configure Connection

To learn about Connection, refer here

  • Log into the Sprinkle application

  • Navigate to Datasources -> Connections Tab -> New Connection ->

  • Select Branch

  • Provide all the mandatory details

    • Name: Name to identify this connection

    • Branch Key: Navigate to the Branch dashboard -> Account Settings. In the General tab, under Branch Key and Secret, copy the Branch Key.

    • Branch Secret: Similarly, navigate to the Branch dashboard -> Account Settings. In the General tab, under Branch Key and Secret, copy the Branch Secret.

    • Advanced Settings: You can also provide advanced settings such as connection timeout, retry limit, retry sleep time, version, max records, and incremental batch size while setting up the connection. Refer here

    • You can test the connection using the Test Connection button; clicking the Create button then creates the Branch connection. (A sketch of what such a credential check looks like follows this list.)
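
For reference, a connection test of this kind boils down to checking that the Branch Key and Secret pair is accepted by Branch's API. The sketch below is a minimal illustration, not Sprinkle's implementation: it assumes the Branch Query API endpoint (https://api2.branch.io/v1/query/analytics) and payload field names, which should be verified against the Branch API documentation.

```python
import requests

# Hypothetical credential check against the Branch Query API.
# Endpoint and payload shape are assumptions; confirm them in the Branch docs.
BRANCH_QUERY_URL = "https://api2.branch.io/v1/query/analytics"

def test_branch_connection(branch_key: str, branch_secret: str, timeout_s: int = 30) -> bool:
    """Return True if the Branch Key/Secret pair is accepted by the API."""
    payload = {
        "branch_key": branch_key,
        "branch_secret": branch_secret,
        # A minimal, cheap query whose only purpose is to exercise authentication.
        "start_date": "2024-01-01",
        "end_date": "2024-01-01",
        "data_source": "eo_install",
        "aggregation": "total_count",
        "granularity": "all",
        "dimensions": ["name"],
    }
    resp = requests.post(BRANCH_QUERY_URL, json=payload, timeout=timeout_s)
    # 401/403 indicate bad credentials; other 4xx/5xx responses point to a different problem.
    return resp.status_code == 200
```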

STEP-2: Configure Datasource

To learn about datasource, refer here

  • Navigate to Datasources -> Datasources Tab -> Add ->

  • Select Branch

  • Provide the name -> Create

  • Connection Tab:

    • From the drop-down, select the name of the connection created in STEP-1 for the Branch data source.

    • Click on Update

STEP-3: Create Dataset

Datasets Tab: To learn about Datasets, refer here. Add a dataset, providing the following details.

Available datasets from the Branch data source are:

  • eo_impression

  • eo_click

  • xx_impression

  • xx_click

  • eo_web_to_app_auto_redirect

  • eo_branch_cta_view

  • eo_sms_sent

  • eo_open

  • eo_install

  • eo_reinstall

  • eo_web_session_start

  • eo_pageview

  • eo_commerce_event

  • eo_custom_event

  • eo_content_event

  • eo_dismissal

  • eo_user_lifecycle_event

Refer to the Branch API documentation for more information on the datasets.
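
Each dataset name above corresponds to a data_source value in the Branch Query API, and the Aggregation, Dimensions, Granularity, and Start Date fields described below map onto the same request. The sketch that follows is purely illustrative (Sprinkle makes these calls for you); the endpoint, field names, and option values are assumptions to be checked against the Branch Query API documentation.

```python
import requests

# Illustrative Branch Query API call for the eo_install dataset.
# Endpoint, field names, and option values are assumptions based on the
# Branch Query API docs; verify them there before relying on this.
payload = {
    "branch_key": "<BRANCH_KEY>",
    "branch_secret": "<BRANCH_SECRET>",
    "data_source": "eo_install",       # any dataset name from the list above
    "aggregation": "unique_count",     # unique_count | total_count | revenue | ...
    "dimensions": ["name", "origin"],  # grouping keys for the returned counts
    "granularity": "day",              # "day" = one count per day, "all" = one total
    "start_date": "2024-01-01",
    "end_date": "2024-01-31",
}

resp = requests.post("https://api2.branch.io/v1/query/analytics", json=payload, timeout=30)
resp.raise_for_status()
# The response is assumed to carry a "results" list of grouped counts.
for row in resp.json().get("results", []):
    print(row)
```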

In the Datasets Tab:

Fill out the form below to add each dataset.

  • Branch Datasource Name

  • Aggregation: How events are accounted for in the final count. The options available are unique_count, total_count, revenue, cost, and cost_in_local_currency.

  • Dimensions: Fields of events used as dividing lines in the query. Counts are returned grouped by events that match the key for each dimension. The options available are name, origin, timestamp, deep_linked, from_desktop, and attributed.

  • Granularity: Multiple events can be rolled into a single result count over a period of time. For example, a value of "day" returns counts for each day separately, whereas "all" returns a single count for the whole time span.

  • Ingestion Mode: Complete or Incremental. Read here about ingestion modes.

  • Start Date: A timestamp representing the oldest date for which to return data. The date format must be YYYY-MM-DD.

  • Flatten Level: With one-level flattening, complex (nested) fields are not flattened; they are stored as strings. With multi-level flattening, complex fields are flattened recursively until all values become simple types. (See the sketch after this list.)

  • Destination Schema: Choose the destination Schema on the data warehouse.

  • Destination Table Name: Name of the table to be created in the warehouse.

  • Destination Create Table Clause: Provide additional clauses to warehouse-create table queries such as clustering, partitioning, and more, useful for optimizing DML statements. Learn more on how to use this field.
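
To make the Flatten Level options concrete, the sketch below shows how a hypothetical nested event record could be represented under each option: complex fields kept as JSON strings (one level) versus expanded into simple columns (multi level). The record and helper function are illustrative only, not Sprinkle's implementation.

```python
import json

# Hypothetical nested Branch event record.
event = {
    "name": "INSTALL",
    "timestamp": 1718000000,
    "last_attributed_touch_data": {"campaign": "summer_sale", "channel": "email"},
}

# One level: complex values are kept as JSON strings in a single column.
one_level = {
    k: (json.dumps(v) if isinstance(v, (dict, list)) else v) for k, v in event.items()
}
# -> {'name': 'INSTALL', 'timestamp': 1718000000,
#     'last_attributed_touch_data': '{"campaign": "summer_sale", "channel": "email"}'}

def flatten(record: dict, parent: str = "") -> dict:
    """Multi level: expand nested dicts into <parent>_<child> columns until all values are simple."""
    flat = {}
    for key, value in record.items():
        column = f"{parent}_{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, column))
        else:
            flat[column] = value
    return flat

multi_level = flatten(event)
# -> {'name': 'INSTALL', 'timestamp': 1718000000,
#     'last_attributed_touch_data_campaign': 'summer_sale',
#     'last_attributed_touch_data_channel': 'email'}
```

Multi-level flattening produces more columns but lets you query nested attributes directly in the warehouse; one-level flattening keeps the table narrow and defers parsing to query time.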

STEP-4: Run and Schedule Ingestion

In the Ingestion Jobs tab:

  • Trigger the job using the Run button

  • To schedule, enable Auto-Run. Change the frequency as required.

Advanced Connection Settings

  • API Read Timeout (in seconds): Maximum time of inactivity between two data packets while waiting for the server's response. The default value is 30 seconds.

  • API Connection Timeout (in seconds): Time period within which a connection between the client and the server must be established.

  • Retry Limit: Number of retries allowed when an API call fails. For example, if an API call fails and the retry limit is 5, the call is retried up to 5 times and stops as soon as it succeeds.

  • Retry Sleep Time (in milliseconds): Time to wait before retrying a failed API call.

  • Version: The version of the Branch API being used.

  • Max Records: The maximum number of records that can be downloaded in each API call.

  • Incremental Batch Size (in days): Number of days of data downloaded in each batch during incremental ingestion. (A sketch of how the retry and batching settings interact follows this list.)
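
As a rough mental model (not Sprinkle's actual code), the sketch below shows how a retry limit, retry sleep time, max records, and incremental batch size typically interact when data is pulled in date-ranged batches. All names and values are assumptions for illustration.

```python
import time
from datetime import date, timedelta

RETRY_LIMIT = 5             # number of retries after a failed call
RETRY_SLEEP_MS = 2000       # wait between retries, in milliseconds
INCREMENTAL_BATCH_DAYS = 7  # days of data fetched per incremental batch
MAX_RECORDS = 5000          # cap on records per API call

def call_with_retries(fetch, *args):
    """Retry a failing call up to RETRY_LIMIT times, sleeping between attempts."""
    for attempt in range(RETRY_LIMIT + 1):
        try:
            return fetch(*args)
        except Exception:
            if attempt == RETRY_LIMIT:
                raise
            time.sleep(RETRY_SLEEP_MS / 1000)

def incremental_ingest(fetch, start: date, end: date):
    """Walk the [start, end] window in INCREMENTAL_BATCH_DAYS-sized slices."""
    batch_start = start
    while batch_start <= end:
        batch_end = min(batch_start + timedelta(days=INCREMENTAL_BATCH_DAYS - 1), end)
        rows = call_with_retries(fetch, batch_start, batch_end, MAX_RECORDS)
        yield batch_start, batch_end, rows
        batch_start = batch_end + timedelta(days=1)
```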
