Issue 16235

Put hadoop tmp directory on a bigger disk

Reporter: omeyn
Type: Task
Summary: Put hadoop tmp directory on a bigger disk
Priority: Major
Resolution: Fixed
Status: Closed
Created: 2014-08-08 11:03:50.45
Updated: 2017-10-06 15:23:13.593
Resolved: 2017-10-06 15:23:13.572
        
Description: During the map cube backfill, 2 jobs failed with the error below. Google suggests this happens when the mapreduce tmp dir fills its disk with intermediate files. In our prod cluster the tmp dir is /tmp/mapred/system, which sits on a 20G partition on the main system drive. I think we should try pointing it at the same separate disk that /logs/hadoop is now mounted on (450GB). Start by testing in the dev cluster.

Somewhat cryptic references: https://issues.apache.org/jira/browse/HADOOP-6092
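For reference, a sketch of the config change this would involve (Hadoop 1.x property names, matching the org.apache.hadoop.mapred classes in the trace below; the /logs/hadoop/... paths are hypothetical examples of directories on the bigger mount, not the actual values chosen). The "No space left on device" during merge typically comes from the local spill directory, which defaults to a subdirectory of hadoop.tmp.dir:

```xml
<!-- mapred-site.xml: where map tasks spill/merge intermediate output
     (the directory that filled up in the error below) -->
<property>
  <name>mapred.local.dir</name>
  <value>/logs/hadoop/mapred/local</value> <!-- example path on the 450GB disk -->
</property>

<!-- core-site.xml: default parent for Hadoop's local scratch space -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/logs/hadoop/tmp</value> <!-- example path on the 450GB disk -->
</property>
```

Changes to mapred.local.dir would presumably need a TaskTracker restart to take effect.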

2014-08-08 00:04:08,170 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2014-08-08 00:04:08,173 FATAL org.apache.hadoop.mapred.Child: FSError from child
org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:220)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at org.apache.hadoop.mapred.IFileOutputStream.write(IFileOutputStream.java:84)
	at org.apache.hadoop.io.compress.BlockCompressorStream.compress(BlockCompressorStream.java:150)
	at org.apache.hadoop.io.compress.BlockCompressorStream.finish(BlockCompressorStream.java:140)
	at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:99)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:227)
	at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:157)
	at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:517)
	at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:399)
	at org.apache.hadoop.mapred.Merger.merge(Merger.java:77)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1571)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1199)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:609)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:675)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:345)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:218)
	... 25 more
    


Author: mblissett
Comment: Haven't seen this; no longer relevant anyway.
Created: 2017-10-06 15:23:13.591
Updated: 2017-10-06 15:23:13.591