In Unix-like systems, the ulimit command lets you check and set resource limits. When the system-wide defaults are updated, they apply only to sessions started after the change.
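To see that session-scoped behavior in action, here is a minimal sketch; the specific numbers are illustrative:

```shell
# Query one limit instead of the whole table.
ulimit -n        # soft limit on open files
ulimit -H -n     # hard limit on open files

# Lower the soft limit; this affects only the current shell and the
# processes it starts. A fresh login still gets the system defaults.
ulimit -S -n 64
ulimit -n        # now reports 64
```

Note the soft/hard distinction: a non-root user can raise a soft limit back up to the hard limit at any time, but lowering the hard limit (-H) is one-way for that session.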
$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) 524288
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) 335262
max memory size         (kbytes, -m) 1004356
open files                      (-n) 128
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 4096
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64
virtual memory          (kbytes, -v) 528384
The limit you probably want to pay the most attention to is “max user processes”. Lowering it far enough will keep a fork bomb from making your system unresponsive. However, setting it too low will keep users from doing normal work, especially anything that involves compiling programs. Depending on the system’s purpose, and on whether the restrictive process limit is imposed on all users or just a few, anywhere from about 50 to 300 is a reasonable choice.
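One way to experiment with a value from that range without constraining your whole session is to lower the limit in a subshell; 300 here is just an illustrative figure, and the change assumes the hard limit allows it:

```shell
# Run the change in a subshell so the tighter limit dies with it.
(
  ulimit -S -u 300   # soft cap on processes for this subshell
  ulimit -u          # reports 300 (assuming the hard limit permits it)
)
ulimit -u            # the parent shell's limit is unchanged
```

Since a soft limit can be re-raised up to the hard limit, this is a safe way to test whether a candidate value still leaves room for normal work.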
Linux and Unix OSs tend to err on the side of permissiveness where process limits are concerned.
$ ulimit -u
34290
Oops. Better fix that.
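On Linux, one way to make the fix stick across logins is pam_limits. A sketch of the relevant entries in /etc/security/limits.conf follows; the values, and the choice to apply them to every user, are illustrative, and as noted above they take effect only in sessions started after the change:

```text
# /etc/security/limits.conf  (read by pam_limits at login)
# <domain>  <type>  <item>  <value>
*           soft    nproc   300
*           hard    nproc   500
```

For the current shell only, running ulimit -u with the desired value achieves the same cap without editing any files.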