Databricks
Guide to integrate Databricks with Sprinkle
This page covers the details about integrating Databricks with Sprinkle.
When setting up a Databricks connection, Sprinkle additionally requires a cloud bucket. This guide covers the role of each component and the steps to set them up.
  • Integrating Databricks: All analytical data is stored in and queried from the Databricks warehouse
  • Cloud Bucket: Sprinkle stores all intermediate data and report caches in this bucket

Step by Step Guide

Integrating Databricks

STEP-1: Allow Databricks to accept connection from Sprinkle

Allow inbound connections on the Databricks JDBC port (default is 443) from the Sprinkle IPs (34.93.254.126 and 34.93.106.136).
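How you open this port depends on where your Databricks deployment runs. As one hedged example, if the workspace sits behind an AWS security group, a boto3 sketch like the one below could allow the Sprinkle IPs (the security group ID is a placeholder; on Azure, add an equivalent NSG inbound rule instead):

import boto3

# Hypothetical sketch: allow inbound HTTPS (443) from the Sprinkle IPs
# on the security group protecting your Databricks deployment.
ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="<your-security-group-id>",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [
            {"CidrIp": "34.93.254.126/32", "Description": "Sprinkle"},
            {"CidrIp": "34.93.106.136/32", "Description": "Sprinkle"},
        ],
    }],
)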

STEP-2: Configure Databricks Connection

  • Log in to the Sprinkle application
  • Navigate to Admin -> Drivers -> Create Warehouse
  • Select Databricks
  • Provide all the mandatory details
    • Distinct Name: Name to identify this connection
    • Databricks JDBC URL: Refer to the Databricks documentation to get the URL and provide it in the format below. While copying the JDBC URL from Databricks, make sure to remove the UID and PWD parameters (see the sketch after this list). jdbc:spark://xxxxxxxxxxxxxx:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/xxxxxxxx/xxxxxxxx;AuthMech=3
    • Username: The ID with which you log in to Databricks
    • Password: Personal access token. To generate one, see here.
    • Storage Mount Name: Storage that will be used by Databricks. See the Creating Storage Mount section below for more details.
  • Test Connection
  • Create
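As an illustration of the UID/PWD cleanup mentioned above, this hypothetical snippet strips those parameters from a copied JDBC URL (the URL value is a placeholder, not a working endpoint):

# Hypothetical example: remove the UID and PWD parameters from a JDBC URL copied from Databricks
raw_url = "jdbc:spark://xxxxxxxxxxxxxx:443/default;transportMode=http;ssl=1;UID=token;PWD=<personal-access-token>;httpPath=sql/protocolv1/o/xxxxxxxx/xxxxxxxx;AuthMech=3"
cleaned_url = ";".join(part for part in raw_url.split(";") if not part.upper().startswith(("UID=", "PWD=")))
print(cleaned_url)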

Creating Storage Mount

Go to the Databricks home page, click the Create button on the right side, and select Notebook. Select the cluster you want to configure with Sprinkle and select Python as the default language.
Sprinkle currently supports Databricks on the Azure and AWS clouds. Depending on your cloud, run the corresponding Python code below to create the mount.

Azure blob

dbutils.fs.mount(
  source = "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net",
  mount_point = "/mnt/<mount-name>",
  extra_configs = {"fs.azure.account.key.<storage-account-name>.blob.core.windows.net": "<storage key>"})
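Optionally, you can verify the mount right away (a minimal check, using the same mount name as above):

display(dbutils.fs.ls("/mnt/<mount-name>"))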

S3

AccessKey = "<Access_Key>"
SecretKey = "<Secret_Key>"
# URL-encode any slashes in the secret key so it can be embedded in the mount URI
SecretKey = SecretKey.replace("/", "%2F")
aws_bucket_name = "<Bucket_Name>"
mount_name = "<mount_name>"
dbutils.fs.mount("s3a://%s:%s@%s" % (AccessKey, SecretKey, aws_bucket_name), "/mnt/%s" % mount_name)
display(dbutils.fs.ls("/mnt/%s" % mount_name))

Create a Cloud Bucket

The cloud bucket should be created in the same cloud as your Databricks deployment. Sprinkle supports creating a bucket in AWS or Azure. Refer to the respective documents for creating and configuring the cloud bucket.
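As a hedged illustration for AWS, the bucket could be created with boto3 (bucket name and region are placeholders; on Azure, create a storage account and blob container instead):

import boto3

# Hypothetical sketch: create the cloud bucket that Sprinkle will use
s3 = boto3.client("s3", region_name="<region>")
s3.create_bucket(
    Bucket="<bucket-name>",
    CreateBucketConfiguration={"LocationConstraint": "<region>"},
)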