Reference: Version 1.6 Updates

Configuration

The update to Spark 1.6 introduces several memory configuration changes that developers need to be aware of. Spark 1.6 moves away from a fixed split between storage and shuffle memory to a unified region shared by storage and execution, with the ability to spill to disk when needed. For more information, please consult the Spark documentation. Qubole recommends enabling legacy mode, as highlighted below, if the previous configuration method is preferred.

spark.memory.useLegacyMode

When enabled, Spark uses the memory configuration settings available prior to the Spark 1.6 release.

spark.memory.fraction

The fraction of JVM heap memory reserved for execution and storage.

spark.memory.storageFraction

The fraction of the region set by spark.memory.fraction that is reserved for storage and immune to eviction by execution.
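As a sketch, these settings could be placed in spark-defaults.conf. The values shown for the two fractions are the Spark 1.6 defaults; enabling legacy mode (first line) makes the other two irrelevant, so only one approach would be used in practice.

```
# Option A: restore pre-1.6 memory behavior
spark.memory.useLegacyMode   true

# Option B: keep the 1.6 unified model (useLegacyMode false, the default)
# and tune the unified region instead:
spark.memory.fraction        0.75
spark.memory.storageFraction 0.5
```

With Option B, execution may borrow from the storage portion of the region, but storage blocks within the spark.memory.storageFraction share are not evicted to make room for execution.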

 

AutoScaling

The update to Spark 1.6 deprecates the num-executors setting in favor of total-executor-cores. The new setting is simply the product of the number of executors and the number of cores per executor. The max-executors setting still applies, so Spark executor autoscaling will continue to function as expected.

 

num-executors

The default number of executors to use (pre-1.6).

executor-cores

The number of cores per executor.

max-executors

The maximum number of executors to use.

total-executor-cores

The total number of cores across all executors (1.6).
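To illustrate the conversion: a job that previously requested 10 executors with 4 cores each would now request their product, 40 total cores. A hedged spark-submit sketch (the application jar name is a placeholder):

```
# Pre-1.6 style: 10 executors, 4 cores each
spark-submit --num-executors 10 --executor-cores 4 your-app.jar

# Spark 1.6 style: request the product, 40 cores in total
spark-submit --total-executor-cores 40 --executor-cores 4 your-app.jar
```

Either way, Spark ends up with 10 executors of 4 cores each, subject to the max-executors ceiling enforced by autoscaling.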
