Question Details

No question body available.

Tags

c#

Answers (4)

March 20, 2026 Score: 9 Rep: 120,078 Quality: Medium Completeness: 0%

Wouldn't this be upscaling, not downscaling? Also, when you do this, you are basically making up detail so the resulting images should not be relied on for anything.

March 20, 2026 Score: 5 Rep: 8,072 Quality: Low Completeness: 50%

Just about any image upscaling tool should be able to do the job, no? You're trying to upscale the image from a 10 by 10 meter square per pixel to a 5 by 5 meter square per pixel. You can simply interpolate the pixels (resample the image) to the new resolution: GDAL can do this via its Python API or the gdalwarp CLI tool, and if you want a visual tool, QGIS and ArcGIS can both do it as well.
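As a sanity check of what interpolation actually does, here is a minimal pure-Python sketch of 2x bilinear upsampling along one row of pixels. This is the same kind of resampling `gdalwarp -tr 5 5 -r bilinear input.tif output.tif` performs, just stripped of all the geospatial metadata handling (the function name and toy values are mine, for illustration only):

```python
def upsample_2x(row):
    """Double the sampling rate of one pixel row by linear interpolation."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # new pixel: plain average of its neighbours
    out.append(row[-1])          # keep the final original pixel
    return out

print(upsample_2x([0, 10, 20]))  # [0, 5, 10, 15, 20]
```

Note that every new pixel is just a weighted average of existing ones, so the output can never contain a value (or a feature) that was not already implied by the input, which is exactly why this makes the image smoother but not more detailed.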

Otherwise, as you mentioned, there are ML-based super-resolution tools like Sen2Res.

But neither of those actually reveals any more detail; they just make the image (slightly) less blotchy. If you actually need higher-resolution detail, you'll have to pay for higher-resolution images. There are multiple vendors offering imagery at 30 cm resolution or better; a quick DuckDuckGo search turns up, among others: European Space Imaging, Maxar and Airbus.

March 20, 2026 Score: 4 Rep: 16,526 Quality: Medium Completeness: 20%

How do I downscale my Sentinel-2 satellite imagery from the standard 10‑meter resolution to 5 meters?

Basically: you cannot. This is real life, not the Bourne franchise. You can interpolate, you can adjust and filter for atmospheric corrections, but you cannot add information that you don't have. You can make it "look better" - but that's it.

Right now, I’m already retrieving the bands and the TCI image, running Sen2Cor for atmospheric correction, and then just zooming into the picture with the most detail and clarity I can get from the image.

Then that's the best that's possible without starting from higher-resolution images to begin with.

March 20, 2026 Score: 1 Rep: 3,875 Quality: Low Completeness: 30%

With a sufficiently good signal-to-noise ratio, and if (and only if) you know the point spread function of the imaging system very accurately, then super-resolution of about 2-3x is theoretically possible. The snag is that it requires an enormous amount of computing power to deliver that additional resolution, and the resulting image will have artefacts introduced by such aggressive post-processing.

That sort of deconvolution code dates back to the early HST era, when the telescope was hopelessly myopic with severe spherical aberration and everything had to be deconvolved to make any sense of it.

The actual resolution of the reconstructed image varies with local signal to noise and artefacts can be obtrusive so you have to decide how far you are prepared to push it.

In other words there is no free lunch.

The zooming-in sequence in Blade Runner is simply not possible! (at least not in this universe)

It works a lot better for bright stars on a mostly black background, where the positivity constraint on images is very effective. It is notoriously difficult to do on a bright object with fine dark detail.

In either case it is an ill-posed deconvolution problem, and you have to apply some heuristic regularisation or AI to sort the wheat from the chaff. Maximum entropy is one such non-linear technique that can do this, at a computational price. AI has a nasty tendency to hallucinate.
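To make the idea concrete, here is a toy pure-Python sketch of Richardson-Lucy deconvolution in 1D, the classic positivity-preserving iterative scheme that was heavily used on early HST data. It assumes a known, normalised PSF and circular boundary handling, and is an illustration of the principle rather than anything production-grade:

```python
def cconv(x, k):
    """Circular convolution of signal x with an odd-length kernel k."""
    n, half = len(x), len(k) // 2
    return [sum(x[(i + j - half) % n] * k[j] for j in range(len(k)))
            for i in range(n)]

def richardson_lucy(observed, psf, iterations):
    """Iterative deconvolution; multiplicative updates keep the estimate non-negative."""
    n = len(observed)
    est = [sum(observed) / n] * n   # flat, strictly positive starting estimate
    psf_flipped = psf[::-1]         # correlation = convolution with the flipped kernel
    for _ in range(iterations):
        blurred = cconv(est, psf)                                   # forward model
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = cconv(ratio, psf_flipped)
        est = [e * c for e, c in zip(est, correction)]              # multiplicative update
    return est

# Toy example: a single point source blurred by a simple 3-tap PSF.
psf = [0.25, 0.5, 0.25]
truth = [0.0] * 9
truth[4] = 1.0
observed = cconv(truth, psf)                 # the blurred "image" we actually measure
recovered = richardson_lucy(observed, psf, 200)
```

With noiseless data and an exactly known PSF the estimate re-concentrates the blurred flux back onto the point source, which is the "bright star on a black background" case where this works best; with real noise the iterations must be stopped early or regularised, or the artefacts mentioned above take over.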

Take your pick.