forkrun is the culmination of a 10-year journey focused on one question: how to make shell parallelization fast. What started as a standard "fork jobs in a loop" has turned into a lock-free, CAS-retry-loop-free, SIMD-accelerated, self-tuning, NUMA-aware, shell-based stream parallelization engine that is (mostly) a drop-in replacement for xargs -P and GNU Parallel.

On my 14-core/28-thread i9-7940x, forkrun achieves:

* 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)
* ~95–99% CPU utilization across all 28 logical cores, even when the workload is non-existent (bash no-ops / `:`) (vs ~6% for GNU Parallel)
* Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)

These benchmarks are intentionally worst-case (near-zero work per task) because they measure the capability of the parallelization framework itself, not how much work an external tool can do.

A few of the techniques that make this possible:

* Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them. Each NUMA node only claims work that is already born-local on its node; stealing from other nodes is permitted under some conditions when no local work exists.
* SIMD scanning: per-node indexers/scanners use AVX2/NEON to find line boundaries (delimiters) at speeds approaching memory bandwidth, and publish byte offsets and line counts into per-node lock-free rings.
* Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.
* Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

…and that’s just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead, increase throughput, and reduce latency at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec.

forkrun ships as a single bash file with an embedded, self-extracting C extension — no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings).

Trying it is literally two commands:

    . frun.bash
    frun shell_func_or_cmd < inputs
For benchmarking scripts and results, see the BENCHMARKS dir in the GitHub repo: https://github.com/jkool702/forkrun/blob/main/BENCHMARKS

For an architecture deep-dive, see the DOCS dir in the GitHub repo: https://github.com/jkool702/forkrun/blob/main/DOCS

Happy to answer questions.
Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)
A drop-in replacement for xargs -P and GNU parallel, offering 50x-400x faster performance, 200,000+ batch dispatches/sec, and ~95-99% CPU utilization for high-frequency, low-latency workloads.
Product Positioning & Context
AI Executive Synthesis
This product addresses a critical performance bottleneck in shell scripting and data processing pipelines. The reported 50x-400x speedup and significantly higher dispatch rates over GNU Parallel represent a substantial improvement for compute-intensive, low-latency workloads. The focus on NUMA awareness, SIMD acceleration, and lock-free mechanisms targets fundamental system-level inefficiencies, directly impacting CPU utilization and throughput. For B2B SaaS, this translates to reduced infrastructure costs for data processing, faster ETL operations, and improved responsiveness for real-time analytics or batch jobs. Organizations dealing with large-scale data ingestion, transformation, or scientific computing will find this compelling. The 'drop-in replacement' aspect minimizes adoption friction, making it a viable upgrade for existing systems struggling with parallelization overhead. This targets a niche but high-value segment where performance directly correlates with operational efficiency and cost savings.
Community Voice & Feedback
I like it, and I hope it's soon going to be available in various Linux distributions, along with other modern tools such as fd instead of find, ripgrep instead of grep, and fzf, for instance.
I guess I've never really used parallel for anything that was bound by the dispatch speed of parallel itself. I've always used parallel for running stuff like ffmpeg in a folder of 200+ videos, and the speed with which parallel decides to queue up the jobs is going to be very thoroughly eaten by the cost of ffmpeg itself.

Still, worth a shot.

I have to ask, was this vibe-coded though? I ask because I see multiple em dashes in your description here, and a lot of "no X, no Y..." notation that Codex seems to be fond of.

ETA: Not vibe-coded, I see stuff from four years ago... my mistake!
Generally when I want to run something with so much parallelism I just write a small Go program instead, and let Go's runtime handle the scheduling. It works remarkably well and there's no execve() overhead either.
Discovery Source
Hacker News. Aggregated via automated community intelligence tracking.
Although to adapt to your style I did this instead:

    ls 0* | frun -- jq -rf my_program.jq

In a directory containing 14k data files. I think your reference should be rush, not a Perl script.