Qubole Release Notes - 11-May-2015

Major Enhancements

Roles and Groups Support in QDS

QDS now allows administrators to grant users access to specific features. Documentation here.

QDS now supports HBase as a Service

  • Support for full/incremental snapshot and restore
  • Run Hannibal to monitor HBase activity
  • Integration with Zeppelin and the HBase shell
  • APIs to add/replace/remove nodes in an HBase cluster

Spark enhancements

  • QDS now offers Spark-1.3.1, the latest release version of Spark.
  • A new SQL subcommand has been introduced in the Spark Command UI. With this you can directly specify SQL statements and run them on Spark.
  • Python is now supported in Spark Notebooks. You can write your Spark code in Python and run it in the Notebook UI.
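As a sketch, the new SQL subcommand amounts to submitting a Spark command whose payload carries a SQL string rather than program code. The helper below only builds an illustrative JSON body; the field names are assumptions for illustration, not the exact QDS API schema:

```python
import json

def spark_sql_payload(sql, cluster_label="spark"):
    """Build an illustrative JSON body for a Spark SQL submission.

    The field names below are assumptions for illustration only,
    not the exact QDS commands API schema.
    """
    return json.dumps({
        "command_type": "SparkCommand",  # assumed command-type name
        "sql": sql,                      # the SQL statement to run on Spark
        "label": cluster_label,          # assumed target-cluster label
    })

print(spark_sql_payload("SELECT count(*) FROM logs"))
```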

All other minor changes and bug fixes are listed below.

Hadoop

  • HAD-334: Options to automatically infer the number of reducers for Hadoop jobs
  • QBOL-1824: Encoding of task logs URL for JobTracker Proxy
  • HADTWO-224: Don't retry shell command AM on failure.
  • HAD-348 Fix rules to publish to S3.

Hadoop2

  • HADTWO-197: Use com.hadoop.compression.lzo.LzopCodec as default codec for compressing history files.
  • HADTWO-253 Fix aggressive downscaling issue for hadoop2
  • HADTWO-149 Support for using reserved disks (EBS) for hadoop2 / mapreduce

Hive

  • HIVE-703 Better error message when simple fetch optimization times out
  • HIVE-634: Enhance Logging to bubble up at the UI 

HIVE-0.13.1

  • QTEZ-4, HIVE-10569: Hive CLI gets stuck when hive.exec.parallel=true; and some exception happens during SessionState.start
  • QBOL-3978: Made mongo jars compatible with Hive13
  • HIVE-586 Bubbling up Hive Logs to UI

Presto

  • PRES-357: Fatal error logs of the Presto server are now available for offline analysis
  • PRES-375: Presto uses the G1 garbage collector now
  • PRES-358: Presto’s backend logs are easily accessible now; the path is displayed in query logs

Spark

  • Autoscaling: Spark executors can be downscaled if they only have broadcast blocks.
  • Packaging: The spark-avro jar is now part of the Spark assembly.
  • Fix: Automatically restart the Spark context in Spark Notebooks.
  • Fix: Honour driver-memory in yarn-client mode.
  • Fix: Do not retry Spark jobs at the YARN level.
  • Fix: Delay Zeppelin startup so that it reads the correct AWS credentials.
  • Fix: Notebooks no longer get disconnected if left idle for more than 10 minutes.

QDS

  • QBOL-3817: Fix the 500 errors on the Ganglia page.
  • QBOL-3784, QBOL-3807: Clean up files older than 36 hours in /tmp/sqoop for data import/export jobs
  • UI-1661: Add sql support in UI for spark commands
  • QBOL-3871: api to check for waiting status in query_hists
  • QPIG-12: Add a cron job to delete files older than 36 hours from HDFS for Pig commands
  • EBS Support in Hadoop2
  • QBOL-3803: Notification for long waiting adhoc commands
  • HADTWO-134: Ganglia Metrics for Hadoop2 clusters.
  • UI-1214: Critical bug fixes for latency charts on the Overview page
  • UI-1217: [Firefox] Placement of Database refresh icon on Analyze and Explore
  • QBOL-3989: Handle the case when gateway_id is nil.
  • SCHED-57: Initial instance and nominal time were not in the same time zone in a comparison
  • HAD-399: For Hadoop jobs, by default, do not accept new tasks if all disks are below 2 GB. Do not select a disk for writing data if it is below 1 GB
  • HAD-397: Use lzo compression codec for history files. The codec can be overridden using hadoop.job.history.completed.codec property
  • UI-1635 Analyze Results: Multiple spaces get condensed
  • UI-1627 - On creating a new cluster, the region and AWS availability zones mismatch
  • QBOL-3938: hive commands should pick up settings from recommended hadoop configuration
  • Add API support for the tunnel server
  • UI-1545 - Trying to create a cluster from the "no clusters" message in templates
  • UI-1583 - If there are no accounts, the page throws a 500 error
  • UI-1520: Reset cluster labels list on edit mode
  • UI-1350 Allows date selections which are not real
  • UI-1543 UI-1544 UI-1547 UI-1556 UI-1552 - template confirm query fixes
  • QBOL-3992: Special handling of results for accounts which have result encryption enabled
  • UI-1197 Confirm Query in templates
  • UI-1141 - Error saying more clusters cannot be created is shown during save instead of earlier
  • UI-1434 Add tooltip for email in Manage Users
  • QBOL-3923: Fix timeouts while rendering details of large jobs from history
  • UI-1294 Hadoop jar Command in Scheduled Jobs shows the jar name within Arguments
  • Follow up fix to reset comments tab header while switching between queries
  • QBOL-3728: Presto now supports queries via a script at a given location.
  • UI-1374: Don’t pick the schema of insert overwrite commands
  • UI-1203 - Zendesk Integration
  • UI-1222 Changed “Repository” term to “Datastore”
  • After deleting a cluster, it is still shown in the toolbar
  • Add meaningful error messages to cluster update and create APIs
  • Add support for persistent security groups
  • Don’t open Qubole Security Groups to world for SSH
  • HIVE-522: Hive queries with only comments now pass (with empty results)
  • UI-1293 Validation message for key length is not consistent
  • UI-1166 - Timeout accepts negative values and strings
  • QBOL-3764: Convert "select*" (without space(s)) into insert overwrite, as we do for “select *” (with space(s))
  • More robust error handling for Eventual Consistency issues during cluster start
  • UI-1231 Fixed pagination buttons for reports
  • HADTWO-223 Python 2.7 support for hadoop2
  • QBOL-3909: Allow multiple domains in accounts allowed_domain list
  • HAD-366 Tagging the EBS volume attached to the instances launched by Qubole.  
  • UI-1561 Icon for support when one clicks on the user profile on top right
  • UI-1529 Change zendesk icon in popup
  • HADTWO-211: Added NameNode URL to Cluster page.
  • Enable killing of orphan child jobs by default
  • Added a new resource tab to see the JobTracker URL for all completed jobs
  • HADTWO-143 Hadoop2 configs can now be pushed to cluster
  • UI-1261 Validation fix for EBS Volume Size
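The `select*` normalization noted in QBOL-3764 above comes down to tolerating missing whitespace after the SELECT keyword. A minimal sketch of such a check follows; the pattern and function are illustrative, not Qubole's actual parser:

```python
import re

# Match SELECT followed by '*', with or without intervening whitespace,
# case-insensitively -- so "select*" is treated the same as "select *".
# Illustrative pattern only, not Qubole's actual implementation.
SELECT_STAR = re.compile(r"^\s*select\s*\*", re.IGNORECASE)

def is_select_star(query):
    """Return True if the query starts with a (possibly unspaced) SELECT *."""
    return bool(SELECT_STAR.match(query))

print(is_select_star("select* from t"))   # -> True
print(is_select_star("SELECT * FROM t"))  # -> True
print(is_select_star("select id from t")) # -> False
```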