Question Details

No question body available.

Tags

python multithreading gui automation multiprocessing

Answers (2)

November 21, 2025 Score: 4 Rep: 139,531

I need to do this in multiple instances of the application (across multiple servers).

Note that multithreading (and Python's multiprocessing) is about what runs on a single machine. It has nothing to do with distributing the load across multiple servers.

The app is not very resource intensive. So, currently I am bound by the speed of fields filling in and waiting for UI elements to appear. Would multithreading or multiprocessing be a suitable approach for this case?

You have to check what exactly takes time, i.e. what resource is being exhausted.

If, for instance, you have a machine with sixteen cores, and you see that, when you start running the thing, one of the cores peaks at around 100% while the others remain as before, you're CPU bound. Parallelizing could help here (note, though, that in CPython the GIL keeps pure-Python threads from using more than one core for CPU-bound work, so you would typically reach for multiprocessing).

If, on the other hand, you see that all the cores remain mostly idle, but the hard disk usage rises to 100%, no amount of multithreading will help; quite the opposite, usually, multithreading would make things slower. Essentially, multithreading distributes the load among multiple CPU cores. Hitting a disk that is already at its maximum twice as hard won't, as you can guess, make anything faster.
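A quick way to watch which resource saturates while your script runs is a small monitor loop; a minimal sketch, assuming the third-party psutil package is installed (Task Manager or perfmon would tell you the same thing):

```python
# Sample per-core CPU load and disk throughput while the automation runs,
# to see whether you are CPU bound or disk bound.
import psutil  # third-party: pip install psutil

prev = psutil.disk_io_counters()
for _ in range(30):  # sample roughly once a second for 30 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    prev = cur
    print(f"cores: {per_core}  disk: {read_mb:.1f} MB/s read, "
          f"{write_mb:.1f} MB/s write")
```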

I have never used either

Multithreading is tricky. But it's also fun and very useful. I would advise starting with a few very basic problems and trying to parallelize them. Say, multiply two large matrices and see what happens when you do it on all cores at once. Or search for prime numbers (to see that not all problems can easily be made parallel), as sketched below. Or write a simple script that reads files (either a few very large ones, or many small ones) and see how the performance changes when running it in parallel (you will have surprises, I promise).
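For the prime-number exercise, a minimal sketch (trial division over independent ranges, which happens to split cleanly across processes; a sieve would be harder to parallelize; the range bounds are arbitrary):

```python
# Count primes in several ranges, serially and then in parallel, and compare.
# CPU-bound pure-Python work, so processes (not threads) are used to get
# around CPython's GIL.
import time
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [(i, i + 100_000) for i in range(0, 800_000, 100_000)]

    t0 = time.perf_counter()
    serial = sum(count_primes(c) for c in chunks)
    t1 = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        parallel = sum(pool.map(count_primes, chunks))
    t2 = time.perf_counter()

    print(f"serial:   {serial} primes in {t1 - t0:.2f}s")
    print(f"parallel: {parallel} primes in {t2 - t1:.2f}s")
```

Later chunks contain larger numbers and take longer, so the speedup will not be a clean factor of the core count; that imbalance is itself one of the lessons.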

Then, once you have a rough idea of how things work, start poking at your project, iteratively, constantly measuring what makes it faster and what only degrades performance.

waiting for UI elements to appear

Be sure to keep an eye on what makes the GUI application slow to show its elements, i.e. why the UI elements take time to appear.

November 22, 2025 Score: 3 Rep: 220,779

I have no experience with pywinauto, but I do with comparable GUI robots (like AutoHotKey) on Windows, so what follows is based on the assumption that pywinauto does not behave much differently.

So, currently I am bound by the speed of fields filling in and waiting for UI elements to appear.

From what you wrote, my educated guess is that this is your bottleneck, not the pywinauto process. So when you start to parallelize this, each new instance of your GUI application will be another process. Whether the controlling program then uses a corresponding Python process for each GUI application process, or just another thread, is unlikely to make a noticeable difference.

One thing you have to be careful with: on Windows, in the context of a single user session, AFAIK there is just one instance of the windowing system running, and only one window control at a time can have the focus. Maybe having the focus is not important for your GUI application, but if it is, forget about parallelizing within the same session. Actually, in such a case, it may be hard even for a single-process pywinauto script to behave stably: other programs running on the machine, like a virus scanner or some update reminder, can steal the focus at any time.

What I actually don't understand: from what you reported, you seem to have a working pywinauto script for controlling one of your GUI instances. Then why the heck don't you, instead of asking strangers on the internet, just run the pywinauto script two or three times in parallel and measure the performance? This requires zero programming, just two or three command-line windows where you start your Python interpreter with the pywinauto script in parallel. If three manually started instances finish too quickly to measure, implement a one-liner shell script which runs a dozen or more of your pywinauto scripts in parallel (see the sketch below). I am sure you will get a far more reliable answer for your case than from people who can only make blind guesses, because they don't have access to your systems.
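A minimal Python stand-in for that one-liner, assuming the existing script is called automate_one.py (a placeholder name):

```python
# Launch N copies of the already-working pywinauto script in parallel
# and time how long the whole batch takes.
import subprocess
import sys
import time

N = 3  # arbitrary number of parallel instances for the experiment

t0 = time.perf_counter()
procs = [subprocess.Popen([sys.executable, "automate_one.py"]) for _ in range(N)]
for p in procs:
    p.wait()
print(f"{N} instances finished in {time.perf_counter() - t0:.1f}s")
```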

When your experiment shows that parallelizing brings the benefit you expect, then it is time to do this programmatically. Maybe you build it directly into the Python program which controls pywinauto, maybe you build a robust shell script which starts the different pywinauto processes (based on the prototypical one-liner), whatever works best for your case.
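If it goes into the controlling Python program, a hedged sketch of the process-per-instance idea (the executable path, window title, and task count are placeholders, and this assumes your application tolerates several instances in one session, see the focus caveat above):

```python
# Sketch: one worker process per GUI application instance.
# "target.exe" and the window title are placeholders; adapt to your app.
from concurrent.futures import ProcessPoolExecutor
from pywinauto.application import Application

def drive_instance(task_id):
    # Each worker starts and automates its own copy of the application.
    app = Application(backend="uia").start("target.exe")
    main = app.window(title="Main Window")  # placeholder window specification
    main.wait("ready")                      # block until the UI is usable
    # ... fill fields, click buttons, etc., for this task ...
    app.kill()
    return task_id

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:  # arbitrary worker count
        for done in pool.map(drive_instance, range(4)):
            print(f"task {done} finished")
```

If the application needs keyboard focus, this will not be reliable within one session, no matter how it is orchestrated.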