Ticket #4074 (closed bug: fixed)
Forking not possible with large processes
|Reported by:|nomeata|Owned by:|simonmar|
|Type of failure:|None/Unknown|Difficulty:|Easy (less than 1 hour)|
|Test Case:||Blocked By:||
If a Haskell program requires a lot of memory, trying to fork() fails: because of the process size, the clone() syscall takes so long that it is interrupted by the GHC runtime timer. The syscall is then restarted, only to be interrupted again.
This happens repeatedly with GHC 6.12, which seems to require noticeably more memory than 6.10 when building large Haskell programs on slower architectures, and it causes some problems with Haskell in Debian.
The problem can also be observed by running a simple C program that mallocs a lot of memory (in the range of 1 GB) and then tries to fork() with an interval timer enabled, analogous to the RTS profiling timer.
In the corresponding Debian bug report against libc ( http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=575534), which also has the demo C code, it was suggested that it might be the program’s responsibility to disable such timers while clone() runs.
Do you agree with that? Is it something you can do? Might this be related to #1882 (which mentions timers and fork)?