Reference: Version 1.6 Updates


The update to Spark 1.6 introduces several memory-configuration changes that developers need to be aware of. The memory model shifts from a fixed split between storage and shuffle to a unified region shared by storage and execution, with the potential for Spark to spill onto disk. For more information, please consult the Spark documentation. Qubole recommends enabling legacy mode, as highlighted below, if the previous configuration method is preferred.


spark.memory.useLegacyMode
When enabled, Spark will use the memory configuration settings available prior to the Spark 1.6 release.
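To keep the pre-1.6 behavior, legacy mode can be enabled at submit time. A minimal sketch, assuming a hypothetical application (com.example.MyJob and myjob.jar are placeholders), with the pre-1.6 storage and shuffle fractions set explicitly at their old defaults:

```shell
# Placeholder job submission; enables the pre-1.6 memory model.
spark-submit \
  --conf spark.memory.useLegacyMode=true \
  --conf spark.storage.memoryFraction=0.6 \
  --conf spark.shuffle.memoryFraction=0.2 \
  --class com.example.MyJob myjob.jar
```

With legacy mode enabled, the pre-1.6 spark.storage.memoryFraction and spark.shuffle.memoryFraction settings are honored again and the unified-memory settings below are ignored.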


spark.memory.fraction
The fraction of JVM heap memory reserved for storage and execution.


spark.memory.storageFraction
The fraction of the heap memory reserved by spark.memory.fraction that is immune to eviction (set aside for storage).
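The two unified-memory settings can be passed at submit time. A minimal sketch using their Spark 1.6 defaults, assuming a placeholder application (com.example.MyJob, myjob.jar):

```shell
# Unified memory (1.6+): 75% of usable heap shared by execution and storage;
# half of that region is protected from eviction for cached blocks.
spark-submit \
  --conf spark.memory.fraction=0.75 \
  --conf spark.memory.storageFraction=0.5 \
  --class com.example.MyJob myjob.jar
```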



The update to Spark 1.6 deprecates the num-executors setting and replaces it with the total-executor-cores setting, which is simply the product of the number of executors and the number of cores per executor. The max-executors setting still applies, so Spark Executor AutoScaling will continue to function as expected.



num-executors
The default number of executors to use (pre-1.6).


executor-cores
The number of cores per executor.


max-executors
The maximum number of executors to use.


total-executor-cores
The total number of cores across all executors (1.6).
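Putting the product rule above into practice: where a pre-1.6 submission asked for 10 executors with 4 cores each, a 1.6 submission requests 10 × 4 = 40 total cores instead. A minimal sketch, assuming a placeholder application (com.example.MyJob, myjob.jar):

```shell
# 10 executors x 4 cores per executor -> 40 total executor cores.
spark-submit \
  --executor-cores 4 \
  --total-executor-cores 40 \
  --class com.example.MyJob myjob.jar
```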
