Spark

Apache Spark is supported in Zeppelin with the Spark interpreter group, which consists of four interpreters.

Name       Class                 Description
%spark     SparkInterpreter      Creates a SparkContext and provides a Scala environment
%pyspark   PySparkInterpreter    Provides a Python environment
%sql       SparkSQLInterpreter   Provides a SQL environment
%dep       DepInterpreter        Dependency loader


SparkContext, SQLContext, ZeppelinContext

SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables 'sc', 'sqlContext' and 'z', respectively, in both the Scala and Python environments.

Note that the Scala and Python environments share the same SparkContext, SQLContext and ZeppelinContext instances.
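
For example, the pre-created 'sc' and 'sqlContext' can be used directly in a %spark paragraph. This is a minimal sketch; the sample data and column names are made up for illustration.

%spark
// 'sc' and 'sqlContext' are created by Zeppelin; there is no need to instantiate them
val rdd = sc.parallelize(1 to 100)
println(rdd.sum())

// the same sqlContext can be used to build DataFrames
val df = sqlContext.createDataFrame(Seq((1, "apple"), (2, "banana"))).toDF("id", "name")
df.show()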



Dependency Management

There are two ways to load external libraries in the Spark interpreter: using Zeppelin's %dep interpreter, and setting Spark properties.

1. Dynamic Dependency Loading via %dep interpreter

When your code requires an external library, instead of downloading it, copying it and restarting Zeppelin, you can use the %dep interpreter to:

  • Load libraries recursively from a Maven repository
  • Load libraries from the local filesystem
  • Add additional Maven repositories
  • Automatically add libraries to the Spark cluster (this can be turned off)

The %dep interpreter runs in the Scala environment, so you can write any Scala code here. Note that the %dep interpreter must be used before %spark, %pyspark and %sql.

Here are some usage examples:

%dep
z.reset() // clean up previously added artifact and repository

// add maven repository
z.addRepo("RepoName").url("RepoURL")

// add maven snapshot repository
z.addRepo("RepoName").url("RepoURL").snapshot()

// add credentials for private maven repository
z.addRepo("RepoName").url("RepoURL").username("username").password("password")

// add artifact from filesystem
z.load("/path/to.jar")

// add artifact from maven repository, with no dependency
z.load("groupId:artifactId:version").excludeAll()

// add artifact recursively
z.load("groupId:artifactId:version")

// add artifact recursively except comma separated GroupID:ArtifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")

// exclude with pattern
z.load("groupId:artifactId:version").exclude(*)
z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
z.load("groupId:artifactId:version").exclude("groupId:*")

// local() skips adding artifact to spark clusters (skipping sc.addJar())
z.load("groupId:artifactId:version").local()


2. Loading Spark Properties

Once SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the Spark interpreter runner. spark-submit supports two ways to load configuration. The first is command line options such as --master; Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-defaults.conf. The Spark properties that users can set to distribute libraries are:

spark-defaults.conf    SPARK_SUBMIT_OPTIONS    Applicable Interpreter    Description
spark.jars             --jars                  %spark                    Comma-separated list of local jars to include on the driver and executor classpaths.
spark.jars.packages    --packages              %spark                    Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. Searches the local Maven repo, then Maven Central, then any additional remote repositories given by --repositories. The coordinate format is groupId:artifactId:version.
spark.files            --files                 %pyspark                  Comma-separated list of files to be placed in the working directory of each executor.

Note that adding a jar to %pyspark is only available via the %dep interpreter at the moment.


Here are a few examples:

0.5.5 and later
  • SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh

    export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
    
  • SPARK_HOME/conf/spark-defaults.conf

    spark.jars              /path/mylib1.jar,/path/mylib2.jar
    spark.jars.packages     com.databricks:spark-csv_2.10:1.2.0
    spark.files             /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
    
0.5.0
  • ZEPPELIN_JAVA_OPTS in conf/zeppelin-env.sh

    export ZEPPELIN_JAVA_OPTS="-Dspark.jars=/path/mylib1.jar,/path/mylib2.jar -Dspark.files=/path/myfile1.dat,/path/myfile2.dat"
    




ZeppelinContext

Zeppelin automatically injects ZeppelinContext as the variable 'z' in your Scala/Python environments. ZeppelinContext provides some additional functions and utilities.


Object exchange

ZeppelinContext extends Map and is shared between the Scala and Python environments, so you can put an object from Scala and read it from Python, and vice versa.

Put an object from Scala

%spark
val myObject = ...
z.put("objName", myObject)

Get the object from Python

%pyspark
myObject = z.get("objName")
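
Because the same ZeppelinContext instance is shared, the object can also be read back from a later Scala paragraph ('objName' here is just the hypothetical key used in the example above).

%spark
val myObject = z.get("objName")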


Form creation

ZeppelinContext provides functions for creating forms. In the Scala and Python environments, you can create forms programmatically.

%spark
/* Create text input form */
z.input("formName")

/* Create text input form with default value */
z.input("formName", "defaultValue")

/* Create select form */
z.select("formName", Seq(("option1", "option1DisplayName"),
                         ("option2", "option2DisplayName")))

/* Create select form with default value */
z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
                                    ("option2", "option2DisplayName")))

In the SQL environment, you can create a form using a simple template.

%sql
select * from ${table=defaultTableName} where text like '%${search}%'

To learn more about dynamic forms, check out Dynamic Form.