Qubole Release Notes - 17-Mar-2015

Major Enhancements

Restricted IP Support for Login to QDS

QDS now supports whitelisting the IP addresses that can be used to log in. Please reach out to help@qubole.com to use this feature.


QDS supports scripts with Japanese (UTF-8) characters

QDS supports Hive scripts with UTF-8 characters. Please reach out to help@qubole.com to use this feature.

QDS Now Offers Support for Isolating Bad Jobs in Hadoop Clusters

See this document for details.

All other minor changes and bug fixes are listed below.

Qubole Data Service

  • UI-1214: Command latency and error distribution charts have been added to QDS.

  • UI-1401: Fixed a command ID link regression on the Overview page.

  • HAD-326: Ganglia metrics can now be accessed via APIs.

  • UI-1211: Enhancements to the comments UI in Analyze.

  • QBOL-3890: Fixed DB Query support in Workflow.

  • UI-1332: Fixed a browser crash on invalid CSV files.

  • UI-1374: Results now show the correct column headers for the query.

  • UI-1356, QBOL-3847: Fixed a regression in CSV and TSV file handling.

  • HAD-369: Instances launched for validating credential information are now tagged with the 'Qubole' key.

  • UI-1296: Fixed the command templates history view not remembering the user's selection.

  • UI-1209: Partial fix for Avro sample data in Explore.

  • HAD-355: Blacklisted nodes are now added back into the cluster after a default interval of 2 hours. Previously, blacklisting was permanent.

  • SPAR-187: Increased Spark's metastore timeout to 5 minutes. Especially when the cluster is in a VPC, a single metastore call can take a significant amount of time to complete.
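    As a rough illustration, a 5-minute metastore client timeout corresponds to a Hive client setting like the one below. The property name hive.metastore.client.socket.timeout is the standard Hive option; these notes do not confirm it is the exact knob QDS changed, so treat this as an assumption:

    ```
    <!-- hive-site.xml: raise the metastore client timeout to 5 minutes (300 s) -->
    <!-- Assumed property name; not confirmed by the release notes above. -->
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>300</value>
    </property>
    ```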

  • SPAR-174: Allowed JsonSerde to work with Spark SQL.

  • Minor fixes to the cluster usage report.

  • SCHED-54: Fixed a scheduler bug when concurrency is greater than 1.


  • The Qubole query ID is now available in Hadoop jobs (via the qubole.command.id property in job.xml).

  • Hadoop jobs launched by shell commands now include their parent/child information in job.xml.
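    Since job.xml is a standard Hadoop configuration document, the command ID can be recovered by parsing it. A minimal sketch in Python, assuming the usual <configuration>/<property> layout; the sample XML is illustrative, and only the qubole.command.id property name comes from the notes above:

    ```python
    import xml.etree.ElementTree as ET

    def read_job_property(job_xml, name):
        """Return the value of a named property from a Hadoop job.xml document."""
        root = ET.fromstring(job_xml)
        for prop in root.iter("property"):
            if prop.findtext("name") == name:
                return prop.findtext("value")
        return None

    # Illustrative job.xml fragment; a real file holds many more properties.
    sample = """
    <configuration>
      <property>
        <name>qubole.command.id</name>
        <value>123456</value>
      </property>
    </configuration>
    """

    print(read_job_property(sample, "qubole.command.id"))  # -> 123456
    ```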

  • SPAR-198: Container log links now work from the Spark Web UI in yarn-client mode. The Hadoop user is picked up in the SparkContext itself.

  • SPAR-197: Prepended local: to all jars in /usr/lib/spark/lib. This allows yarn-cluster mode to also process jars correctly.

  • SPAR-174: Configured /usr/lib/spark/lib as the standard jar destination for Spark. Any jar in this directory is available to both the driver and the executors. This mechanism also makes hive_contrib.jar (JsonSerde) accessible to Spark SQL.
