vm.overcommit_memory = 2, vm.overcommit_ratio = 0

Do you know this experience: a program, in my case subversion, has a bug and starts to eat memory. You cannot interact with your system any more; you can only watch the memory and swap run full (if you have a display for that). Then it takes a while until the kernel kills the (hopefully right) program. Things start to move again, until they are fully recovered from swap and you can continue your work. Or the kernel somehow does not kill the right program, and you are screwed.

During regular work, though, your swap is hardly ever needed. Only after a while are a few megabytes of never-used RAM swapped out, to make room for using the RAM as a file cache.

I’d like the kernel not to give out more memory to processes than there is physical memory, because that is plenty for normal work, and if more is requested, something is most likely wrong. But I still want the kernel to use the rest of the memory for caching files, and also to move some unused RAM pages to the swap file.

Unfortunately, there does not seem to be a setting that achieves this directly. But if you happen to have swap of about the same size as your RAM, then these settings, when written to /etc/sysctl.d/vm.conf, will do the job:

vm.overcommit_memory = 2
vm.overcommit_ratio = 0

The first setting makes sure the kernel does not hand out more memory than you tell it to, and the second makes sure that it hands out at most (swap size + 0 × RAM size) to processes.
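To illustrate the arithmetic, here is a small sketch of the commit limit the kernel enforces under these settings. The sizes are made-up example values, not measurements from any real machine:

```shell
# Sketch of the kernel's commit-limit formula:
#   CommitLimit = swap size + (overcommit_ratio / 100) * RAM size
ram_kb=1048576    # assumed example: 1 GiB of RAM
swap_kb=1048576   # assumed example: swap of the same size
ratio=0           # vm.overcommit_ratio = 0

limit_kb=$(( swap_kb + ratio * ram_kb / 100 ))
echo "$limit_kb"  # → 1048576: with ratio=0, the limit equals the swap size
```

With swap sized like the RAM, the limit comes out equal to the physical memory, which is exactly the behaviour described above.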

Beware that things go wrong if you happen to have no swap any more for some reason, because then the kernel will hand out zero memory! Therefore, you need to make sure that these settings are applied after swap has been enabled. On a Debian machine, rename /etc/rcS.d/S30procps to /etc/rcS.d/S37procps. This workaround would not be necessary if you could also specify the ratio of swap to be used: then I could set that to zero and the RAM ratio to 100.

If anyone knows better ways to achieve this, I’m interested to hear them.

Update: For my qemu-based armel package builder, this does not seem to be enough. I’m now running it with overcommit_ratio = 50.


I enabled overcommit_memory=2 and used the default for overcommit_ratio (=50) with 760M RAM and 400M swap. The result was that things that normally work suddenly stopped working, like spawning more terminals or having Firefox and Epiphany open at the same time.

It is probably a good idea, but the vm.overcommit_ratio parameter must be tuned, and the right value is individual. So a more conservative setting, combined with some heuristics/tuning, would be better, to save the user some work; for me it’s not worth it.
#1 ulrik (Homepage) am 2008-09-06T13:19:46+00:00
With your settings, you should get 400MB + 384MB = 784MB of memory allocated to processes. I’m surprised that this is not always enough, but yes, with some programs it might not be.

Maybe a ulimit on the virtual memory size is a better way to prevent programs from using up all memory...
#2 Joachim Breitner (Homepage) am 2008-09-06T14:13:10+00:00
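A minimal sketch of what such a per-process limit could look like; the 1 GiB cap is an arbitrary example value, not a recommendation:

```shell
# Cap the virtual address space of this shell and its children at 1 GiB
# (ulimit -v takes the value in KiB). Allocations beyond the cap then fail
# with ENOMEM in that process, instead of invoking the global OOM killer.
ulimit -v 1048576

ulimit -v   # → 1048576 (prints the current limit)
```

Unlike the sysctl approach, this confines a single runaway program without restricting the commit limit for the whole system.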
Looking at cat /proc/meminfo, I think Committed_AS is the amount the kernel thinks it has committed.

Using the normal heuristics, I push this over my normal RAM size without free -m ever reporting less than 300M free (used in buffers), which means that there is much more RAM left yet.
#3 ulrik (Homepage) am 2008-09-06T14:22:09+00:00
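To observe the values this comment refers to on a Linux machine, the relevant /proc/meminfo fields can be read directly:

```shell
# Show how much memory the kernel has committed (Committed_AS) next to
# the limit it enforces under overcommit_memory=2 (CommitLimit).
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
```

Comparing the two lines shows how close the system is to the point where further allocations would be refused.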
Hrm. Funny, subversion is the only app I know that has malloced its way to OOM death on my desktop, too. The reason it got that far is that it blocks signals (bad subversion *BAD*).
#4 Jon (Homepage) am 2008-09-07T21:50:33+00:00
1. You can apparently achieve the same without using a swap partition at all:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100

Note that I have heard it can still run out of RAM because of some overhead with memory pages.

2. Do you realize that this way all memory can never be used? There is always more committed than actually used, and by limiting the committed amount to the RAM size you make the usable amount much less than that.

I think it's better to disable overcommitting, but to have swap so that all RAM can be used, while still avoiding OOM killing of a normal program when another one runs amok.
#5 alex am 2011-08-27T15:52:14+00:00

Have something to say? You can post a comment by sending an e-Mail to me at <mail@joachim-breitner.de>, and I will include it here.