Appsflyer
Guide to integrate your Appsflyer data with Sprinkle
Before setting up the datasource, learn about datasource concepts.
To learn about Connections, refer to the Connections documentation.
Log into Sprinkle application
Navigate to Datasources -> Connections Tab -> New Connection ->
Select Appsflyer
Provide all the mandatory details
Name: Name to identify this connection
App Name: On the homepage, you can find the apps added to your account. Copy the App ID of the app you want to fetch data from, e.g., Play Store: com.publisher.name, Apple App Store: id123456789
API Token: In the top right of the AppsFlyer homepage, click the drop-down next to your username, then click API tokens. Copy the API token V1.0 and use it (see the verification sketch after these steps)
Date Type: Choose whether Sprinkle pulls data from a specific start date or for a number of days back
Start Date
No. of days
Advanced Settings: Refer to the Advanced Settings section below
Test Connection
Create
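As an optional sanity check before clicking Test Connection, you can query the AppsFlyer Pull API directly with the same App ID and API token. The sketch below is only an illustration and is not part of Sprinkle; the endpoint path, report name, and parameter names are assumptions based on AppsFlyer's Pull API and should be verified against the current AppsFlyer documentation.

```python
import requests

# Hypothetical values: replace with the App ID and API token V1.0 from your account.
APP_ID = "com.publisher.name"
API_TOKEN = "<your-api-token-v1>"

# Assumed Pull API endpoint and report name; confirm against AppsFlyer docs.
url = f"https://hq.appsflyer.com/export/{APP_ID}/installs_report/v5"
params = {
    "api_token": API_TOKEN,
    "from": "2022-06-06",  # report window start (YYYY-MM-DD)
    "to": "2022-06-06",    # report window end (YYYY-MM-DD)
}

resp = requests.get(url, params=params, timeout=30)
# 200 suggests the App ID and token are valid; 401/403 usually indicates a bad token.
print(resp.status_code)
print(resp.text[:500])
```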
Navigate to Datasources -> Datasources Tab -> Add ->
Select Appsflyer
Provide the name -> Create
Connection Tab:
From the drop-down, select the name of the connection created in Step 2
Update
Report Type (Required)
Performance Reports
Raw Reports
Batch Size In Minutes: Use this to download records in minute-sized batches. If set to Yes, the Minutes field is displayed.
Minutes: The difference between the from and to timestamps, in minutes. For example, if the start date is 2022-06-06 and Minutes is 120, from will be 2022-06-06 00:00:00 and to will be 2022-06-06 02:00:00. The default value is 0 (see the window sketch after this list).
Postback Reports
Retargeting Reports
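To make the Minutes field concrete, the sketch below reproduces the from/to window from the example above. It only illustrates the arithmetic and is not Sprinkle's implementation.

```python
from datetime import datetime, timedelta

start_date = datetime(2022, 6, 6)  # ingestion start date
minutes = 120                      # value of the Minutes field

window_from = start_date                             # 2022-06-06 00:00:00
window_to = start_date + timedelta(minutes=minutes)  # 2022-06-06 02:00:00

print(window_from, "->", window_to)
```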
Flatten Level (Required): Select One Level or Multi Level. With One Level, flattening is not applied to complex types; they are stored as strings. With Multi Level, complex types are flattened recursively until only simple types remain (see the flattening sketch after this list).
Destination Schema (Required) : Data warehouse schema where the table will be ingested into
Destination Table Name (Required): The name of the table to be created in the warehouse. If not provided, Sprinkle creates a name of the form ds_<datasourcename>_<tablename>
Create
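The difference between One Level and Multi Level flattening can be seen on a small nested record. The sketch below is a simplified illustration of the idea, not Sprinkle's actual flattening logic, and the field names are made up.

```python
import json

record = {"event": "install", "device": {"os": {"name": "android", "version": "12"}}}

# One Level: complex (nested) values are kept, but stored as strings.
one_level = {k: (json.dumps(v) if isinstance(v, (dict, list)) else v)
             for k, v in record.items()}
# {'event': 'install', 'device': '{"os": {"name": "android", "version": "12"}}'}

# Multi Level: nested values are flattened recursively until only simple types remain.
def flatten(value, prefix=""):
    out = {}
    if isinstance(value, dict):
        for key, val in value.items():
            out.update(flatten(val, f"{prefix}{key}_"))
    else:
        out[prefix.rstrip("_")] = value
    return out

multi_level = flatten(record)
# {'event': 'install', 'device_os_name': 'android', 'device_os_version': '12'}
```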
In the Ingestion Jobs tab:
Trigger the job using the Run button
To schedule, enable Auto-Run. Change the frequency if needed
API Read Timeout (In seconds) : Maximum time of inactivity between two data packets when waiting for the server's response. The default value is 30 seconds.
API Connection Timeout (In seconds) : Time period within which a connection between a client and a server must be established.
Retry Limit : Number of retries allowed when an API call fails. For example, if an API call fails and the retry limit is 5, the call is retried up to 5 times; retrying stops as soon as a call succeeds.
Retry Sleep Time (In milliseconds) : Time to wait before retrying after a failed API call (see the retry sketch after this list).
Version : The version of the AppsFlyer API being used.
Incremental Batch Size (In days) : Number of days of data downloaded in one batch during incremental ingestion.
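The retry and timeout settings interact roughly as in the sketch below. This is illustrative only, not Sprinkle's code; the URL and request call are placeholders.

```python
import time
import requests

RETRY_LIMIT = 5          # Retry Limit
RETRY_SLEEP_MS = 2000    # Retry Sleep Time (in milliseconds)
CONNECT_TIMEOUT_S = 10   # API Connection Timeout (in seconds)
READ_TIMEOUT_S = 30      # API Read Timeout (in seconds)

def fetch_with_retries(url, params):
    last_error = None
    for attempt in range(RETRY_LIMIT):
        try:
            resp = requests.get(url, params=params,
                                timeout=(CONNECT_TIMEOUT_S, READ_TIMEOUT_S))
            resp.raise_for_status()
            return resp  # success: stop retrying
        except requests.RequestException as err:
            last_error = err
            time.sleep(RETRY_SLEEP_MS / 1000.0)  # wait before the next attempt
    raise last_error
```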
To learn more about datasources, refer to the Datasource documentation
Datasets Tab: To learn about Datasets, refer to the Dataset documentation. Add a Dataset for each report you want to integrate, providing the following details
Category (Performance Reports): Aggregate reports (AppsFlyer provides a comparison of aggregate reporting and analytics tools/APIs). For more details about the category, refer to the AppsFlyer documentation.
Category (Raw Reports): Row-level data describing specific events like installs, in-app events, website visits, Protect360 blocked installs, ad revenue, and postbacks sent to partners. For more details about the category, refer to the AppsFlyer documentation.
Additional Category: To limit (filter) the call to a specific media source. For more details, refer to the AppsFlyer documentation.
Media Source: To limit (filter) the call to a specific media source. For more details, refer to the AppsFlyer documentation.
Additional Field: To get additional fields in addition to the default fields. Provide the additional fields as a comma-separated list. For more details, refer to the AppsFlyer documentation (see the request sketch after this list).
Category (Postback Reports): For more details about the category, refer to the AppsFlyer documentation.
Additional Field: For more details, refer to the AppsFlyer documentation.
Category (Retargeting Reports): Retargeting reports consist of users who engaged with a retargeting campaign and performed a re-engagement or re-attribution. For more details about the category, refer to the AppsFlyer documentation.
Destination Create Table Clause: Provide additional clauses for the warehouse create-table query, such as clustering and partitioning, which are useful for optimizing DML statements. Refer to the documentation on how to use this field.
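For reference, the sketch below shows how the Media Source and Additional Field values typically map onto an AppsFlyer raw-data (Pull API) request. The endpoint, report name, and parameter names are assumptions drawn from AppsFlyer's Pull API and may not match what Sprinkle sends internally; verify them against the AppsFlyer documentation.

```python
import requests

# Hypothetical values for illustration only.
APP_ID = "com.publisher.name"
API_TOKEN = "<your-api-token-v1>"

url = f"https://hq.appsflyer.com/export/{APP_ID}/in_app_events_report/v5"
params = {
    "api_token": API_TOKEN,
    "from": "2022-06-01",
    "to": "2022-06-07",
    "media_source": "facebook",                      # Media Source filter (assumed parameter name)
    "additional_fields": "device_model,keyword_id",  # Additional Field values, comma separated
}

resp = requests.get(url, params=params, timeout=30)
print(resp.status_code)
```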