Issue 12023
Tune occurrence cube backfill for better performance
Reporter: trobertson
Type: Task
Summary: Tune occurrence cube backfill for better performance
Priority: Minor
Resolution: Fixed
Status: Closed
Created: 2012-10-15 09:19:19.832
Updated: 2013-12-17 15:17:05.553
Resolved: 2013-10-03 16:08:54.516
Description: The occurrence cube backfill exhibits unexpected performance characteristics. It starts incredibly quickly (22 million input rows in 6 minutes) but degrades badly, taking 12+ hours in total to finish.
The backfill operates as follows:
- A single mapper-only job using a table scanner
- For each row many cube writes are identified
- The writes are batched (not HBase batching) within the cube client code
- The batches are then flushed to HBase (and HBase client batch sizes come into play here)
Initially, therefore, the job is likely to ramp up quickly, before any batches are actually flushed to HBase. However, it gets through roughly 1/3 of the input data within the first hour and then takes an age to do the rest.
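For concreteness, a minimal sketch of this flow, assuming the HBase 0.94-era client API. The batch size, the cube table name, and the toCubeWrites() expansion are placeholders; the real batching lives in the datacube cube client code referenced below:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;

// Sketch only: a mapper-only job scanning the occurrence table, expanding
// each row into many small cube writes, batching them in client code and
// flushing the batches to HBase.
public class BackfillMapperSketch extends TableMapper<NullWritable, NullWritable> {

  private static final int BATCH_SIZE = 1000; // assumed value, not from the issue
  private final List<Put> batch = new ArrayList<Put>(BATCH_SIZE);
  private HTable cubeTable;

  @Override
  protected void setup(Context context) throws IOException {
    cubeTable = new HTable(context.getConfiguration(), "occurrence_cube"); // hypothetical table name
    cubeTable.setAutoFlush(false); // let the client write buffer accumulate the tiny PUTs
  }

  @Override
  protected void map(ImmutableBytesWritable key, Result row, Context context)
      throws IOException, InterruptedException {
    // each source row expands into many small cube writes
    for (Put p : toCubeWrites(row)) {
      batch.add(p);
      if (batch.size() >= BATCH_SIZE) {
        flush();
      }
    }
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    flush();
    cubeTable.close(); // flushes any remaining buffered writes
  }

  private void flush() throws IOException {
    cubeTable.put(batch); // hands the batch to the HBase client write buffer
    batch.clear();
  }

  // Hypothetical expansion of one occurrence row into its cube increments;
  // the real logic lives in the cube client code.
  private List<Put> toCubeWrites(Result row) {
    return new ArrayList<Put>();
  }
}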
All timeouts are configured to be incredibly long for this job because the cube writes are not transactional: a broken mapper can leave bad counts in the cube. Any task failure after a cube write has been issued means the whole process must be considered a failure, hence the long timeouts as a safeguard.
The PUTs to HBase are very small rows.
This is an operation we are likely to perform relatively often (new counts in the cube, disaster recovery), so it is worth understanding.
Author: trobertson@gbif.org
Created: 2012-10-15 09:22:18.722
Updated: 2012-10-15 09:22:18.722
In the code (https://code.google.com/p/gbif-metrics/source/browse/metrics/trunk/cube/src/main/java/org/gbif/metrics/cube/occurrence/backfill/BackfillCallback.java?r=71#30):
// These are set high, as any failure means inaccurate cube counts
job.getConfiguration().set("mapred.task.timeout", "1800000"); // 30 mins
job.getConfiguration().set("hbase.regionserver.lease.period", "1800000");
// We are doing a huge amount of very tiny PUTs so reduce the buffer, as the default of 2MB means HBase
// is being asked to process huge numbers of rows, and tasks start to timeout. These will likely hit
// 1 region if the cube is initially empty.
job.getConfiguration().set("hbase.client.write.buffer", "262144"); // 256k
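As a back-of-envelope illustration of what the smaller buffer changes, assuming an average serialized Put of ~100 bytes (an assumption; the issue only says the rows are very small):

public class WriteBufferMath {
  public static void main(String[] args) {
    final long defaultBuffer = 2L * 1024 * 1024; // HBase default hbase.client.write.buffer: 2 MB
    final long tunedBuffer = 256L * 1024;        // the value set above: 256 KB
    final long avgPutBytes = 100;                // ASSUMPTION: average serialized Put size

    System.out.println("Puts per flush at 2 MB:   ~" + (defaultBuffer / avgPutBytes)); // ~20,971
    System.out.println("Puts per flush at 256 KB: ~" + (tunedBuffer / avgPutBytes));   // ~2,621
    // A smaller buffer means each flush asks the (initially single) region to
    // process far fewer rows per call, keeping individual calls well short of
    // the long timeouts configured above.
  }
}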