Question Details

No question body available.

Tags

python algorithm parallel-processing multiprocessing

Answers (6)

February 25, 2026 Score: 2 Rep: 1,175 Quality: Low Completeness: 20%

For parallel processing, the first question is whether you are CPU bound or not. Since these are all pure-CPU steps, you are probably looking at multiprocessing rather than threading. However, given the overhead of multiprocessing, depending on how many fractions you have it might not even be worth it. I would also ask why you perform a gcd at every step: just perform it occasionally, when the numerator and denominator are getting large. And if you really are pursuing speed, why not switch to C++ or Cython? It is hard to propose a suitable solution without knowing your bounds and constraints.
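A minimal single-process sketch of the "don't gcd every step" idea might look like the following. The function name, the tuple representation, and the bit-length threshold are all illustrative choices, not anything from the question:

```python
from math import gcd

def sum_fractions_lazy(fractions, reduce_bits=10_000):
    """Sum (numerator, denominator) pairs, reducing by gcd only
    when the running denominator grows past reduce_bits bits."""
    num, den = 0, 1
    for n, d in fractions:
        num = num * d + n * den
        den *= d
        if den.bit_length() > reduce_bits:  # reduce only occasionally
            g = gcd(num, den)
            num //= g
            den //= g
    g = gcd(num, den)  # one final reduction to lowest terms
    return num // g, den // g
```

The trade-off is that between reductions the integers grow, so multiplication gets slower; where the sweet spot lies depends on your bounds, which is exactly the point about constraints above.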

February 25, 2026 Score: 2 Rep: 15,666 Quality: Low Completeness: 10%

Instead of computing the gcd at every iteration, maybe you could compute the lcm of all the denominators in a first pass, rewrite all fractions over that common denominator, then add the numerators.
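A sketch of that two-pass idea, using `math.lcm` (Python 3.9+); the function name is made up for illustration:

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

def sum_with_common_denominator(fractions):
    """First pass: lcm of all denominators. Second pass: rescale
    each numerator to that denominator and add plain ints."""
    common = 1
    for _, d in fractions:
        common = lcm(common, d)
    total = sum(n * (common // d) for n, d in fractions)
    g = gcd(total, common)  # one reduction at the very end
    return total // g, common // g
```

This does only one gcd for the whole sum, at the cost of working with a potentially large common denominator throughout.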

February 25, 2026 Score: 1 Rep: 3,855 Quality: Low Completeness: 30%

The basic idea of combining them in pairs should be simple enough to get right. Do it in scalar form first to obtain a prototype. You might also want to presort them by denominator value before starting to combine.
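A scalar prototype of the pairwise combination might look like this; the tournament-style reduction keeps the operand sizes balanced instead of one huge running total (names are illustrative):

```python
from math import gcd

def add_pair(a, b):
    """Add two fractions given as (numerator, denominator) tuples."""
    (an, ad), (bn, bd) = a, b
    n, d = an * bd + bn * ad, ad * bd
    g = gcd(n, d)
    return n // g, d // g

def tree_sum(fractions):
    """Combine fractions pairwise, tournament style, halving the
    list each round until one fraction remains."""
    level = list(fractions)
    while len(level) > 1:
        nxt = [add_pair(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:        # carry the odd element up a level
            nxt.append(level[-1])
        level = nxt
    return level[0] if level else (0, 1)
```

The presorting by denominator mentioned above would just be a sort of the input list before the loop; pairing similar denominators first tends to keep intermediate gcds effective.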

Like the others, I do wonder what problem you are trying to solve that requires such exquisite numerical precision.

When using arbitrary precision rational arithmetic you quickly become memory bound unless you are very careful about reducing to lowest common denominator. What works OK for a small number of terms summed will get much slower as the number of terms increases.

This question would be better off asked as a normal how-to question. No one is going to spend time providing a detailed answer here (not your fault that the site is now a mess with these bogus, vague "Best Practices" questions).

What level of accuracy do you actually need? There are fast multiple-precision floating-point packages for most languages, and lighter-weight ones using pairs of double-precision variables to get ~107 effective mantissa bits.

I tend to favour Julia's BigFloat for nearly effortless, fast, high-precision real arithmetic. You can specify the number of bits you want in the mantissa. YMMV.

February 25, 2026 Score: 1 Rep: 15,666 Quality: Low Completeness: 40%

Python function calls and object creation are slow.

As in, really slow.

The fractions module may be really well written, but any code that creates many Fraction objects, then combines them into intermediate Fraction objects as it adds them, is bound to be slower than code that just does the arithmetic and avoids creating objects.
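To see the difference, one can run the same gcd arithmetic on bare int pairs and on Fraction objects; both do equivalent work per term, but the tuple version allocates no intermediate Fraction objects. The timing is printed rather than asserted, since the exact ratio is machine-dependent:

```python
import time
from fractions import Fraction
from math import gcd

terms = [(1, i) for i in range(1, 2001)]

t0 = time.perf_counter()
frac_total = sum(Fraction(n, d) for n, d in terms)
t_frac = time.perf_counter() - t0

t0 = time.perf_counter()
num, den = 0, 1
for n, d in terms:
    num, den = num * d + n * den, den * d
    g = gcd(num, den)
    num, den = num // g, den // g  # same reduction Fraction does internally
t_ints = time.perf_counter() - t0

print(f"Fraction objects: {t_frac:.4f}s, int pairs: {t_ints:.4f}s")
```

Both paths produce the same reduced fraction; only the object churn differs.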

February 25, 2026 Score: 1 Rep: 15,666 Quality: Low Completeness: 30%

You might also be interested in the quicktions module. From what I understand, it is a faster reimplementation by one of the contributors to the standard fractions module. It uses Cython and tries to avoid the overhead of creating intermediate objects.
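quicktions advertises its Fraction as a drop-in replacement for the stdlib one, so a conditional import lets the same code run whether or not it is installed (the fallback here is an assumption about how you would wire it up, not from the question):

```python
# Use quicktions.Fraction if available, else the stdlib Fraction.
try:
    from quicktions import Fraction
except ImportError:
    from fractions import Fraction

total = Fraction(0)
for i in range(1, 100):
    total += Fraction(1, i)
```

Since the API matches, the rest of the code does not need to know which implementation it got.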

February 25, 2026 Score: 0 Rep: 122,730 Quality: Low Completeness: 50%

That number of decimal digits is about enough to express the diameter of the Solar System with 1 mm precision. Perhaps you are solving the wrong problem? Just wondering.

Anyway, you get NameError: name 'gcd' is not defined because addfractions executes in the context of a fresh Python interpreter with no imports (which suggests you are on Windows --- Python on systems with fork shouldn't do that), so your top-level from math import gcd has no effect in that context. Try adding from math import gcd inside addfractions.
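A sketch of the fix; the body of addfractions is a guess at your setup (the question's code isn't shown), but the point is the import inside the worker:

```python
from multiprocessing import Pool

def addfractions(chunk):
    # The import lives inside the function so it exists even when
    # multiprocessing starts the worker in a fresh interpreter
    # ("spawn", the default on Windows) with none of the parent's
    # top-level imports.
    from math import gcd
    num, den = 0, 1
    for n, d in chunk:
        num, den = num * d + n * den, den * d
        g = gcd(num, den)
        num, den = num // g, den // g
    return num, den

if __name__ == "__main__":
    chunks = [[(1, i) for i in range(lo, lo + 10)] for lo in (1, 11, 21)]
    with Pool(3) as pool:
        print(pool.map(addfractions, chunks))
```

The alternative fix is to put the import at module level of the file that defines the worker, since spawn re-imports that module in each child; imports made interactively or under the `if __name__ == "__main__"` guard are the ones that vanish.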