Apache Spark is supported in Zeppelin with the Spark interpreter group, which consists of four interpreters.
Name | Class | Description |
---|---|---|
%spark | SparkInterpreter | Creates a SparkContext and provides a Scala environment |
%pyspark | PySparkInterpreter | Provides a Python environment |
%sql | SparkSQLInterpreter | Provides an SQL environment |
%dep | DepInterpreter | Dependency loader |
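Each notebook paragraph selects its interpreter with one of these names on the first line; a minimal sketch:
%spark
println("Hello from the Spark interpreter")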
Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow two simple steps.
In conf/zeppelin-env.sh, export the SPARK_HOME environment variable with your Spark installation path.
For example:
export SPARK_HOME=/usr/lib/spark
You can optionally export HADOOP_CONF_DIR and SPARK_SUBMIT_OPTIONS:
export HADOOP_CONF_DIR=/usr/lib/hadoop
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"
After starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type: for example, local[*] for local mode, spark://masterhost:7077 for a standalone cluster, yarn-client for YARN client mode, or mesos://host:5050 for a Mesos cluster.
That's it. This way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. (The Zeppelin 0.5.5-incubating release works with Spark versions up to 1.5.1.)
Note that without exporting SPARK_HOME, Zeppelin runs in local mode with an included version of Spark. The included version may vary depending on the build profile.
SparkContext, SQLContext, and ZeppelinContext are automatically created and exposed as the variables 'sc', 'sqlContext', and 'z', respectively, in both the Scala and Python environments.
Note that the Scala and Python environments share the same SparkContext, SQLContext, and ZeppelinContext instances.
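For instance, a minimal sketch using the pre-created variables (the table name demo is just an illustration):
%spark
// 'sc' and 'sqlContext' already exist; do not create your own
val rdd = sc.parallelize(1 to 100)
println(rdd.sum())   // 5050.0

// register a DataFrame so that %sql paragraphs can query it
val df = sqlContext.createDataFrame(Seq((1, "spark"), (2, "zeppelin"))).toDF("id", "name")
df.registerTempTable("demo")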
There are two ways to load external libraries in the Spark interpreter. The first is using Zeppelin's %dep interpreter, and the second is setting Spark properties.
When your code requires an external library, instead of downloading, copying, and restarting Zeppelin, you can easily load libraries with the %dep interpreter.
The %dep interpreter leverages the Scala environment, so you can write any Scala code there. Note that the %dep interpreter should be used before %spark, %pyspark, and %sql.
Usage:
%dep
z.reset() // clean up previously added artifact and repository
// add maven repository
z.addRepo("RepoName").url("RepoURL")
// add maven snapshot repository
z.addRepo("RepoName").url("RepoURL").snapshot()
// add credentials for private maven repository
z.addRepo("RepoName").url("RepoURL").username("username").password("password")
// add artifact from filesystem
z.load("/path/to.jar")
// add artifact from maven repository, with no dependency
z.load("groupId:artifactId:version").excludeAll()
// add artifact recursively
z.load("groupId:artifactId:version")
// add artifact recursively, except for the comma-separated groupId:artifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")
// exclude with pattern
z.load("groupId:artifactId:version").exclude(*)
z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
z.load("groupId:artifactId:version").exclude("groupId:*")
// local() skips adding artifact to spark clusters (skipping sc.addJar())
z.load("groupId:artifactId:version").local()
Once SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the Spark interpreter runner. spark-submit supports two ways to load configurations. The first is command line options such as --master; Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-defaults.conf. The Spark properties that users can set to distribute libraries are:
spark-defaults.conf | SPARK_SUBMIT_OPTIONS | Applicable Interpreter | Description |
---|---|---|---|
spark.jars | --jars | %spark | Comma-separated list of local jars to include on the driver and executor classpaths. |
spark.jars.packages | --packages | %spark | Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. Will search the local Maven repo, then Maven Central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version. |
spark.files | --files | %pyspark | Comma-separated list of files to be placed in the working directory of each executor. |
Note that adding jars to %pyspark is only available via the %dep interpreter at the moment.
Here are a few examples:
SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
SPARK_HOME/conf/spark-defaults.conf
spark.jars /path/mylib1.jar,/path/mylib2.jar
spark.jars.packages com.databricks:spark-csv_2.10:1.2.0
spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
Zeppelin automatically injects ZeppelinContext as the variable 'z' in your Scala and Python environments. ZeppelinContext provides some additional functions and utilities.
ZeppelinContext extends Map and is shared between the Scala and Python environments, so you can put an object in Scala and read it from Python, and vice versa.
Put object from Scala:
%spark
val myObject = ...
z.put("objName", myObject)
Get object from Python:
%pyspark
myObject = z.get("objName")
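Note that in the Scala environment z.get returns Object, so you will usually cast the value back to its concrete type; a minimal sketch (the name and type are assumptions):
%spark
// cast the shared object back to its expected type
val myString = z.get("objName").asInstanceOf[String]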
ZeppelinContext provides functions for creating forms. In the Scala and Python environments, you can create forms programmatically.
%spark
/* Create text input form */
z.input("formName")
/* Create text input form with default value */
z.input("formName", "defaultValue")
/* Create select form */
z.select("formName", Seq(("option1", "option1DisplayName"),
("option2", "option2DisplayName")))
/* Create select form with default value*/
z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
("option2", "option2DisplayName")))
In the SQL environment, you can create a form using a simple template.
%sql
select * from ${table=defaultTableName} where text like '%${search}%'
To learn more about dynamic forms, check out Dynamic Form.