Reference: Qubole AutoScaling Methods

Cluster Throughput

To meet the peaks of batch workloads, it has historically been necessary to over-provision infrastructure, which leaves that infrastructure at a low utilization rate the rest of the time. The highly scalable nature of the cloud offers a direct solution to this problem, since computational resources can be provisioned and de-provisioned to match computational demand. By expanding and contracting the cluster quickly and automatically in proportion to the workload, Qubole auto-scaling keeps utilization close to full at all times. In addition, because the cloud bills by the hour or by the minute, auto-scaling enables customers to realize significant cost savings. The platform chooses the best available option between On-Demand and Spot instances based on the processing need and the configuration settings of the cluster. Qubole runs background processes that continuously monitor customer workloads and can reduce the cluster to its configured minimum number of nodes as demand falls. If no queries or sessions are active, the cluster is terminated altogether, provided Qubole is permitted to terminate inactive clusters.
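
At a high level, this behavior amounts to a periodic control loop that matches capacity to demand. The sketch below is illustrative only, assuming a toy Cluster type with made-up sizing numbers; it is not Qubole's actual implementation or API.

```python
# Minimal sketch of a workload-proportional autoscaling loop. The Cluster
# type, its fields, and the capacity numbers are illustrative assumptions,
# not Qubole's real implementation.
from dataclasses import dataclass

@dataclass
class Cluster:
    size: int
    min_size: int
    max_size: int
    node_capacity: int  # units of work one node can absorb

    def add_nodes(self, n: int) -> None:
        self.size = min(self.size + n, self.max_size)

    def remove_nodes(self, n: int) -> None:
        self.size = max(self.size - n, self.min_size)

def autoscale_step(cluster: Cluster, pending_work: int) -> None:
    """One iteration of the control loop: match capacity to demand."""
    capacity = cluster.size * cluster.node_capacity
    if pending_work > capacity:
        deficit = pending_work - capacity
        cluster.add_nodes(-(-deficit // cluster.node_capacity))  # ceiling division
    elif pending_work < capacity:
        surplus = capacity - pending_work
        cluster.remove_nodes(surplus // cluster.node_capacity)

cluster = Cluster(size=2, min_size=2, max_size=10, node_capacity=100)
autoscale_step(cluster, pending_work=550)  # bursty demand: grows to 6 nodes
autoscale_step(cluster, pending_work=0)    # idle: contracts back to the minimum of 2
```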

Cluster Storage (HDFS)

There may be situations where the current cluster size cannot hold the intermediate Mapper output; when this occurs, Qubole can AutoScale the cluster to meet the job's demand. During HDFS AutoScaling, only On-Demand instances are added to the cluster, since these are faster to provision than Spot instances. The exception is a cluster configured to use 100% Spot instances, in which case Qubole adds Spot instances during HDFS AutoScaling.
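
The purchasing rule above reduces to a simple decision. In the sketch below, spot_percentage is an illustrative name for the cluster's configured share of Spot nodes, not a documented Qubole parameter.

```python
def hdfs_upscale_instance_type(spot_percentage: int) -> str:
    """Pick the purchasing option for nodes added during HDFS AutoScaling.

    HDFS upscaling is storage-urgent, so On-Demand is preferred for its
    faster provisioning -- unless the cluster is configured as 100% Spot.
    """
    return "spot" if spot_percentage == 100 else "on-demand"
```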

Cluster Storage (EBS)

Hadoop2 and Spark clusters that use EBS volumes can now dynamically expand their storage capacity. This relies on Logical Volume Management (LVM): when the feature is enabled, a volume group is created from the instance's initial EBS volumes, and a single logical volume is created on top of this volume group. When the logical volume approaches full capacity, additional EBS volumes are attached to the instance and added to the logical volume, and the filesystem is resized to take advantage of the additional capacity. This feature is not enabled by default and can be configured through the API.
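
The expansion sequence can be pictured as the sketch below. The LVM commands (vgextend, lvextend, resize2fs) are standard Linux tools, but the volume-attachment helper, the names, and the threshold are illustrative assumptions, not Qubole's actual implementation.

```python
# Illustrative sketch of LVM-based EBS upscaling. The mount point, volume
# group/logical volume names, 90% threshold, and attach_new_ebs_volume()
# are assumptions for illustration only.
import shutil
import subprocess

VG, LV, MOUNT = "data_vg", "data_lv", "/media/ephemeral0"
USAGE_THRESHOLD = 0.90  # expand when the logical volume is ~90% full

def attach_new_ebs_volume() -> str:
    """Create and attach a fresh EBS volume; return its device path.
    (Stand-in for an EC2 CreateVolume + AttachVolume call.)"""
    raise NotImplementedError

def maybe_expand() -> None:
    usage = shutil.disk_usage(MOUNT)
    if usage.used / usage.total < USAGE_THRESHOLD:
        return
    device = attach_new_ebs_volume()
    subprocess.run(["vgextend", VG, device], check=True)           # grow the volume group
    subprocess.run(["lvextend", "-l", "+100%FREE",
                    f"/dev/{VG}/{LV}"], check=True)                # grow the logical volume
    subprocess.run(["resize2fs", f"/dev/{VG}/{LV}"], check=True)   # grow the filesystem
```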

Executor AutoScaling

Executors are an integral part of Spark processing, and in addition to optimizing the number of nodes in a cluster, QDS autoscales the number of executors running on each node. Each node can host only a certain number of executors, determined by the configuration settings and the capacity of the node's instance type. When the ideal configuration is uncertain, a developer can set a range for execution; QDS polls the progress rate of the jobs and, if necessary, increases the number of executors up to the maximum configured by an administrator. Spark-level autoscaling responds to the needs of the jobs within an application, while cluster-level autoscaling responds to the needs of the application as the number of executors grows.
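
A minimal PySpark sketch of this kind of min/max executor range, using open-source Spark's standard dynamic-allocation properties (the mechanism executor autoscaling builds on). The values are illustrative, and exact defaults and property handling in QDS may differ.

```python
# Minimal PySpark sketch using Spark's standard dynamic-allocation
# properties. Values are illustrative; QDS defaults may differ.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("executor-autoscaling-demo")
    # Let Spark grow and shrink the executor count with the workload.
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")
    # Ceiling set by an administrator: executors never exceed this.
    .config("spark.dynamicAllocation.maxExecutors", "20")
    # Required for dynamic allocation without an external shuffle
    # service (Spark 3+).
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    # Per-executor sizing bounds how many executors fit on one node.
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "4g")
    .getOrCreate()
)
```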
