I'm trying to understand how YARN allocates memory to containers, and how performance changes with different hardware configurations.
The machine has 30 GB of RAM; I allocated 24 GB to YARN and left 6 GB for the system.
Then I followed http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-220.127.116.11/bk_installing_manually_book/content/rpm-chap1-11.html to come up with values for the map and reduce task memory settings.
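For context, everything in that guide hangs off a RAM-per-container number. A rough sketch of the calculation for my machine (CORES=6 and DISKS=4 are placeholder assumptions, since I haven't listed my exact hardware; MIN_CONTAINER_SIZE=2048 MB is the guide's recommendation for boxes with more than 24 GB of RAM):

    containers        = min(2*CORES, 1.8*DISKS, Total_RAM / MIN_CONTAINER_SIZE)
                      = min(12, 7.2, 24576 / 2048)  ->  7
    RAM-per-container = max(MIN_CONTAINER_SIZE, Total_RAM / containers)
                      = max(2048, 24576 / 7)        ->  ~3510 MB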
Per that guide, I left two of the settings at their default values and changed the other two; the memory-related properties the guide derives are listed below for reference.
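These are the formulas as the guide gives them (with RAM-per-container as sketched above):

    yarn.nodemanager.resource.memory-mb  = containers * RAM-per-container
    yarn.scheduler.minimum-allocation-mb = RAM-per-container
    yarn.scheduler.maximum-allocation-mb = containers * RAM-per-container
    mapreduce.map.memory.mb              = RAM-per-container
    mapreduce.reduce.memory.mb           = 2 * RAM-per-container
    mapreduce.map.java.opts              = 0.8 * RAM-per-container
    mapreduce.reduce.java.opts           = 0.8 * 2 * RAM-per-container
    yarn.app.mapreduce.am.resource.mb    = 2 * RAM-per-container
    yarn.app.mapreduce.am.command-opts   = 0.8 * 2 * RAM-per-container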
But when I submit a job with these settings, I get an error and the job is forcibly killed:
    2015-03-10 17:18:18,019 ERROR [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1426006703004_0004_r_000000_0
The only configuration that has worked for me so far is setting the reducer memory to 12 GB or less. But why is that? Why can't I allocate more memory, up to 2 * RAM-per-container?
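Concretely, this is what I observe (12288 MB is the 12 GB that works; 13312 is just an illustration, since anything above 12 GB fails the same way):

    mapreduce.reduce.memory.mb = 12288   # 12 GB: the job runs
    mapreduce.reduce.memory.mb = 13312   # 13 GB: the job is killed with the error above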
So what am I missing here? Is there anything else I need to consider when setting these values for better performance?