Fix #10968: spawnProcess fails when RLIMIT_NOFILE exceeds int.max #10969

Open
Poita wants to merge 2 commits into dlang:master from Poita:fix-unlimited-fd-limit

Conversation

@Poita Poita commented Mar 5, 2026

Fix #10968

Summary

When RLIMIT_NOFILE is unlimited or exceeds int.max, spawnProcess fails with "Cannot allocate memory" due to integer overflow in the fd-closing code after fork().

Changes

  • Use long instead of cast(int) for maxDescriptors to preserve the actual rlim_cur value
  • Always try /dev/fd (or /proc/self/fd) enumeration first, regardless of the limit value; this is the most efficient path and avoids the malloc entirely (see the sketch after this list)
  • Cap the brute-force fallback to 1M descriptors when the limit is huge
  • Add unittest that sets RLIMIT_NOFILE above int.max and verifies process spawning works
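
The sketch below shows the shape of the fixed flow in D; it is not the actual Phobos diff. The helper names (closeViaDevFd, closeInheritedFds) are invented for illustration, the 1M cap is taken from the list above, and a real implementation running after fork() must stay async-signal-safe (no GC allocation, so no std.file), which this sketch ignores for brevity.

import core.sys.posix.sys.resource : getrlimit, rlimit, RLIMIT_NOFILE;
import core.sys.posix.unistd : close;

/// Try to close descriptors >= firstFd by reading /dev/fd.
/// Returns false if the directory is unavailable so the caller can fall back.
bool closeViaDevFd(int firstFd)
{
    import std.algorithm : filter, map;
    import std.array : array;
    import std.conv : to;
    import std.file : dirEntries, SpanMode;
    import std.path : baseName;
    try
    {
        // Collect first, then close, so the directory stream's own
        // descriptor is not closed out from under the iteration.
        auto fds = dirEntries("/dev/fd", SpanMode.shallow)
            .map!(e => e.name.baseName.to!int)
            .filter!(fd => fd >= firstFd)
            .array;
        foreach (fd; fds)
            close(fd);
        return true;
    }
    catch (Exception)
    {
        return false; // /dev/fd unavailable; fall back to brute force
    }
}

void closeInheritedFds(int firstFd)
{
    // 1. Always try /dev/fd enumeration first: no allocation, and it
    //    works no matter how large (or infinite) RLIMIT_NOFILE is.
    if (closeViaDevFd(firstFd))
        return;

    // 2. Keep the limit in a long so RLIM_INFINITY never wraps to -1
    //    the way cast(int) did.
    rlimit r;
    getrlimit(RLIMIT_NOFILE, &r);
    long maxDescriptors = cast(long) r.rlim_cur;

    // 3. Cap the brute-force fallback so an unlimited limit doesn't
    //    mean iterating over billions of descriptors.
    enum long cap = 1024 * 1024; // 1M, per the list above
    if (maxDescriptors < 0 || maxDescriptors > cap)
        maxDescriptors = cap;

    foreach (fd; firstFd .. cast(int) maxDescriptors)
        close(fd);
}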

Root cause

cast(int) of RLIM_INFINITY (2^63-1) wraps to -1, which:

  1. Skips the /dev/fd enumeration path (since -1 < 128K)
  2. Falls into the poll() path, which tries to malloc a huge buffer and fails (demonstrated below)
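
To see the wrap concretely, here is a minimal, self-contained D demonstration (the 128K threshold is quoted from the description above, not read from the source):

import std.stdio;

void main()
{
    // RLIM_INFINITY is 2^63-1 on macOS; on other platforms it may be
    // 2^64-1. Either way the low 32 bits are all ones, so cast(int)
    // truncates the value to -1.
    ulong rlimCur = long.max; // 0x7FFF_FFFF_FFFF_FFFF
    int maxDescriptors = cast(int) rlimCur;
    writeln(maxDescriptors);              // prints -1
    writeln(maxDescriptors < 128 * 1024); // true: /dev/fd path skipped
}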

Testing

  • Standalone test confirms the bug reproduces with the system std.process and passes with the fix
  • Added an in-tree unittest for regression coverage (sketched below)
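
As a rough illustration of the second point, a regression unittest could look like the sketch below; the names and structure are illustrative, not the in-tree test:

version (Posix) unittest
{
    import core.sys.posix.sys.resource;
    import std.process : spawnProcess, wait;

    rlimit saved;
    getrlimit(RLIMIT_NOFILE, &saved);
    scope (exit) setrlimit(RLIMIT_NOFILE, &saved);

    // Raise the soft limit past int.max; skip the test when the hard
    // limit (or lack of privileges) forbids it.
    rlimit r = saved;
    r.rlim_cur = cast(rlim_t) long.max;
    if (setrlimit(RLIMIT_NOFILE, &r) != 0)
        return;

    // Before the fix, this failed with "Cannot allocate memory".
    auto pid = spawnProcess(["true"]);
    assert(wait(pid) == 0);
}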

Poita and others added 2 commits March 5, 2026 08:30
When RLIMIT_NOFILE is set to unlimited (RLIM_INFINITY), r.rlim_cur is
a huge value (e.g. 2^63-1). The cast(int) on this value wraps to -1,
which causes the /dev/fd fast path to be skipped (since -1 < 128K)
and the poll() path to attempt a massive malloc that fails.

This manifests as "Failed to allocate memory (Cannot allocate memory)"
on any process spawn, making dub completely unusable on systems with
unlimited file descriptor limits (common on macOS).

Fix by:
- Using long instead of cast(int) for maxDescriptors
- Always trying /dev/fd enumeration first (it's the most efficient
  path and works regardless of the limit value)
- Capping the slow close() fallback to 1M descriptors to avoid
  iterating over billions when the limit is huge

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Regression test that sets RLIMIT_NOFILE above int.max and verifies
process spawning still works. Without the previous fix, this triggers
"Failed to allocate memory" due to integer overflow in the fd-closing
code.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@dlang-bot
Contributor

Thanks for your pull request and interest in making D better, @Poita! We are looking forward to reviewing it, and you should be hearing from a maintainer soon.
Please verify that your PR follows this checklist:

  • My PR is fully covered with tests (you can see the coverage diff by visiting the details link of the codecov check)
  • My PR is as minimal as possible (smaller, focused PRs are easier to review than big ones)
  • I have provided a detailed rationale explaining my changes
  • New or modified functions have Ddoc comments (with Params: and Returns:)

Please see CONTRIBUTING.md for more information.


If you have addressed all reviews or aren't sure how to proceed, don't hesitate to ping us with a simple comment.

Bugzilla references

Your PR doesn't reference any Bugzilla issue.

If your PR contains non-trivial changes, please reference a Bugzilla issue or create a manual changelog.

Testing this PR locally

If you don't have a local development environment set up, you can use Digger to test this PR:

dub run digger -- build "master + phobos#10969"

@Poita Poita marked this pull request as ready for review March 5, 2026 12:04

Development

Successfully merging this pull request may close these issues.

std.process: spawnProcess fails with "Cannot allocate memory" when RLIMIT_NOFILE is unlimited
