This article covers Spark in Standalone mode.
Spark Configuration
After installing Spark, first make sure its configuration is correct. Spark's configuration files are stored in the $SPARK_HOME/conf/ directory:
spark-env.sh.template
slaves
spark-defaults.conf.template
spark-env.sh.template
- #!/usr/bin/env bash
- # This file is sourced when running various Spark programs.
- # Copy it as spark-env.sh and edit that to configure Spark for your site.
- # Options read when launching programs locally with
- # ./bin/run-example or ./bin/spark-submit
- # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
- # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
- # - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
- # - SPARK_CLASSPATH, default classpath entries to append
- # Options read by executors and drivers running inside the cluster
- # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
- # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
- # - SPARK_CLASSPATH, default classpath entries to append
- # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
- # - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
- # Options read in YARN client mode
- # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
- # - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
- # - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
- # - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
- # - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
- # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
- # - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
- # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
- # - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.
- # Options for the daemons used in the standalone deploy mode:
- # - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
- # - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
- # - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
- # - SPARK_WORKER_CORES, to set the number of cores to use on this machine
- # - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
- # - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
- # - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
- # - SPARK_WORKER_DIR, to set the working directory of worker processes
- # - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
- # - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
- # - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
- # - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
Following the hints in the comments, copy the template file above, rename it by dropping the .template suffix, and modify its contents as follows.
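For reference, a minimal sketch of the copy-and-rename step (assuming $SPARK_HOME points at your Spark installation):
- cd $SPARK_HOME/conf
- cp spark-env.sh.template spark-env.sh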
spark-env.sh
- export JAVA_HOME=/xxxxxxx
- export SPARK_MASTER_IP=<master IP or hostname>
- export SPARK_WORKER_CORES=1
- export SPARK_WORKER_INSTANCES=1
- export SPARK_WORKER_MEMORY=512M # maximum amount of memory each worker may use
- export SPARK_MASTER_PORT=8888
- export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops"
The last entry, SPARK_JAVA_OPTS, has been changed in newer Spark versions and no longer needs to be added.
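In newer versions, equivalent JVM options can instead be set in spark-defaults.conf; a hedged sketch using the standard Spark properties, with the same GC flags as the example above:
- spark.executor.extraJavaOptions  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
- spark.driver.extraJavaOptions    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps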
slaves
- # A Spark Worker will be started on each of the machines listed below.
- localhost
List each slave's IP address or hostname here, one per line.
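For example, a two-worker cluster's slaves file might look like this (worker1 and worker2 are placeholder hostnames):
- worker1
- worker2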
Starting Spark
Before running Spark jobs, the Spark cluster itself must be started. If Hadoop HDFS is needed, HDFS must be started as well.
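If HDFS is used, it can typically be brought up with Hadoop's own scripts, for example (assuming $HADOOP_HOME points at your Hadoop installation):
- $HADOOP_HOME/sbin/start-dfs.sh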
Start the master:
- ./sbin/start-master.sh
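Once the master is running, it can be verified through its web UI, which listens on port 8080 by default (unless SPARK_MASTER_WEBUI_PORT is set otherwise); masterip below is the same placeholder used elsewhere in this article:
- http://masterip:8080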
Register the workers; several workers can be registered on the same machine:
- ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://masterip:7077 &
The spark:// URL must use the port configured through SPARK_MASTER_PORT (Spark's default is 7077; with the example configuration above it would be spark://masterip:8888).
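Alternatively, once conf/slaves has been filled in, the bundled helper scripts can start the workers (or the whole cluster) in one step; a sketch assuming password-less SSH from the master to each slave:
- ./sbin/start-slaves.sh
- ./sbin/start-all.sh   # starts the master and all workers together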