📓 Python Notebooks
Overview
You can use your favourite notebooks on Sprinkle. 🖱️ Click on Notebooks in the navigation panel to get started.
A notebook 📔 is an open-source web application that allows you to create and share documents that contain live code, narrative text, equations, and visualizations 📊💹.
Use notebooks for data cleaning, transformations, numerical simulation, statistical modeling, data visualization, and machine learning.
Watch Video 📺
Feature Walkthrough 🚶
Create Python Notebook
🖱️ Click on Transform -> Python Notebooks in the left navigation pane to start using the Python Notebooks feature on Sprinkle. The listing page shows all the Python Notebooks that have been created.
🖱️ Click on Create New Notebook in the top right corner of the page to create a new Python Notebook.
Provide a name for the Python Notebook.
Select Kernel (Optional): Select python3 from the dropdown.
Select VM Size (Optional): Select one of the below CPU and virtual machine memory options from the dropdown for the Python Notebook.
Option 1 - 1 CPU & 1700 Mi (mebibytes) of memory.
Option 2 - 2 CPUs & 1800 Mi (mebibytes) of memory.
User API Key and User API Secret are optional fields in this form; however, providing them is mandatory if you want to use the Sprinkle SDK functions. They can also be provided in the settings after the Python Notebook is created.
To generate an API Key and Secret, click on your user icon in the top right, then Account -> API Keys. 🖱️ Click on Generate new to create a new API Key and Secret for yourself.
Using Sprinkle SDK
The Sprinkle SDK enables you to import your data from Sprinkle's SQL Explores and Reports for use in the notebook.
Import Sprinkle SDK
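The exact package name is not shown on this page, so the import below is a minimal sketch assuming the SDK is exposed as sprinkle_sdk; use the import statement provided in your notebook environment.

```python
# Assumed package name and alias, for illustration only —
# confirm the actual import path in your Sprinkle notebook.
import sprinkle_sdk as sp
```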
Read Report
Reads data from the specified Report into a data frame.
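A hedged sketch, reusing the assumed sp alias from above; the function name read_report and the report name are hypothetical placeholders.

```python
# Hypothetical call: loads the named Report into a pandas DataFrame.
report_df = sp.read_report("monthly_revenue")  # placeholder report name
report_df.head()
```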
Read SQL Explore
Reads data from the specified SQL Explore into a data frame.
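Likewise a sketch with an assumed function name and a placeholder SQL Explore name.

```python
# Hypothetical call: loads the named SQL Explore result into a pandas DataFrame.
explore_df = sp.read_sql_explore("customer_churn")  # placeholder explore name
explore_df.head()
```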
Once the data is imported, you can run all kinds of analyses on it in your Notebook.
Create a table or update an existing table in the warehouse using a data frame
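A sketch under the same assumptions; the function name, data frame, and table name below are hypothetical stand-ins for the SDK's table-creation call.

```python
# Hypothetical call: writes the DataFrame to the warehouse as a table,
# creating it if it does not exist or updating it if it does.
sp.create_table(report_df, "churn_scores")  # placeholder table name
```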
Multiple tables can be created in a single Data Import. The Data Import created using the above function can be seen under Ingest -> File Uploads.
Drop the table from the warehouse
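Again a sketch with a hypothetical function name, mirroring the creation call above.

```python
# Hypothetical call: removes the previously created table from the warehouse.
sp.drop_table("churn_scores")  # placeholder table name
```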
How to work with Spark session operations?
Get a Spark session with the default configuration
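The page does not spell out the exact call, so here is the standard PySpark pattern for obtaining a session with default settings.

```python
from pyspark.sql import SparkSession

# getOrCreate() returns the running session if one exists,
# otherwise it creates a new one with default configuration.
spark = SparkSession.builder.getOrCreate()
```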
Change the Spark app name while creating the default Spark session
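Using the same standard PySpark builder; the app name is a placeholder.

```python
from pyspark.sql import SparkSession

# appName() sets the name shown in the Spark UI.
spark = SparkSession.builder.appName("my-notebook-app").getOrCreate()
```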
Get a Spark session where the user can customize the configuration
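Again with the standard PySpark builder; the settings shown are examples, not recommendations.

```python
from pyspark.sql import SparkSession

# config() overrides individual Spark settings before the session is created.
spark = (
    SparkSession.builder
    .appName("my-notebook-app")                   # placeholder app name
    .config("spark.sql.shuffle.partitions", "8")  # example setting
    .config("spark.executor.memory", "1g")        # example setting
    .getOrCreate()
)
```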