Delta of the Eyre river flowing into Arcachon Bay. First image: original data (Sentinel-2 band 1, violet 443 nm, 60 m/pixel). Second image: super-resolution at 10 m/pixel, inferring details from other spectral bands while preserving the reflectance.
What this is about (with context, in not-too-technical terms)
This is a method for improving satellite images. These images come with a resolution (i.e. a pixel size on the ground) that results from technical compromises. Imagine taking a picture with a camera at night. You have two options: either wait for enough light to come in (long exposure), or trigger the sensor with too little light (high ISO value). The first choice produces blurry images of objects in motion, while the second produces snow-like noise.

Satellites move along their orbit, so they can hardly wait for long. And poor-quality, noisy images are not an option either. What some satellite makers do is take the light from all colors at the same time and produce black-and-white images: more light is collected than when each color is filtered separately. But this is not an option when the goal is to survey, for example, how green vegetation evolves in an area compared against bare brown soil. In that case, precise colors are required. Yet the more filtering is performed to isolate a very specific color, the less light remains.

A common choice is then to reduce the image resolution. If, instead of measuring light coming from square pixels 10×10 meters wide, it is accumulated on pixels of 20×20 meters, then there is 4 times more light per pixel. This is unfortunate, as details become less visible, but at least we can trust the measurements, since the snow-like noise stays under control. So, very often, satellite images are provided with pixel sizes that depend on the color being measured (e.g. red, green, blue, infrared, ...). This is the case for the Sentinel-2 satellites, which present both cases: less precise colors at high pixel resolution and more precise colors at lower pixel resolution.
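The 4× figure above is just the ratio of pixel areas. A minimal sketch (not part of the article's software; the function name is my own) showing how summing each 2×2 block of 10 m pixels yields one 20 m pixel with four times the accumulated light:

```python
def bin2x2(image):
    """Sum each 2x2 block of a 2D list of pixel values,
    halving the resolution in each direction."""
    h, w = len(image), len(image[0])
    return [
        [
            image[r][c] + image[r][c + 1]
            + image[r + 1][c] + image[r + 1][c + 1]
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

# A uniform scene: each 10 m pixel collects 100 photons.
fine = [[100] * 4 for _ in range(4)]
coarse = bin2x2(fine)  # each 20 m pixel now collects 400 photons
```

Since photon noise grows only with the square root of the signal, the 4× light per pixel roughly halves the relative noise, which is why the coarser measurements can be trusted more.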
The method presented in the article shows how to take the details from the pixels with the highest resolution and propagate them to all the other colors. As a result, we get an image in which all colors (i.e. "spectral bands") have the best resolution.
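To give an intuition of "propagating details while preserving reflectance", here is a deliberately naive toy sketch (this is not the article's algorithm, which is more sophisticated; the function name is mine). It modulates each 2×2 block of a high-resolution band so that the block mean matches the corresponding low-resolution pixel: sub-pixel geometry comes from the high-resolution band, while the coarse reflectance of the low-resolution band is preserved exactly.

```python
def propagate_details(hi, lo):
    """Toy detail propagation for a 2x scale factor.

    hi: square 2D list, high-resolution band (source of geometry).
    lo: 2D list half the size, low-resolution band (source of reflectance).
    Returns a band at hi's resolution whose 2x2 block means equal lo.
    """
    n = len(hi)
    out = [[0.0] * n for _ in range(n)]
    for r in range(0, n, 2):
        for c in range(0, n, 2):
            # Mean of the high-resolution 2x2 block.
            m = (hi[r][c] + hi[r][c + 1]
                 + hi[r + 1][c] + hi[r + 1][c + 1]) / 4.0
            target = lo[r // 2][c // 2]
            # Rescale the block so its mean becomes the coarse value.
            for dr in (0, 1):
                for dc in (0, 1):
                    out[r + dr][c + dc] = target * hi[r + dr][c + dc] / m
    return out

hi = [[1.0, 3.0],
      [1.0, 3.0]]   # high-resolution band: a vertical edge
lo = [[10.0]]       # one low-resolution pixel covering the 2x2 area
sharp = propagate_details(hi, lo)  # edge kept, block mean stays 10.0
```

The edge pattern of the high-resolution band reappears in the output, yet averaging the output back down recovers the low-resolution measurement, which is the "preserving the reflectance" constraint in miniature.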
Abstract and article
This resolution enhancement method is designed for multispectral and multiresolution images, such as those provided by the Sentinel-2 satellites (but not only). Starting from the highest-resolution bands, band-dependent information (reflectance) is separated from information that is common to all bands (geometry of scene elements). This model is then applied to unmix low-resolution bands, preserving their reflectance, while propagating band-independent information to preserve the sub-pixel details.
Software, with application to Sentinel-2
The super-resolution algorithm is written and usable in C++, and it is also wrapped as a Python module. A ready-to-use Python script for super-resolving Sentinel-2 images is also provided. The code was tested only under Linux (Debian testing/unstable). I have neither the time nor the motivation to support other operating systems, but please mail me if you port it. I may then link your port here or include it in a future release (with due credit). You can download the latest version here.
This super-resolution algorithm is released as Free/Libre software under the LGPL v2.1 (or any more recent version) or, at your choice, under the Free/Libre CeCILL-C licence. The source code is maintained in my source repository; check it for updates.