Enhancement of old colour photographs using Generative Adversarial Networks

It’s almost Christmas, I haven’t posted anything in a while and I see that WordPress has an Image Compare feature, so let’s have some colourful fun.

When I’m not at the computer writing R code, I can often be found at the computer processing photographs. Or at the computer browsing Twitter, which is how I came across Stuart Humphryes, a digital artist who enhances autochromes. Autochromes are early colour photographs, generated using a process patented by the Lumière brothers in 1903. You can find and download many examples of them online. Stuart uses a variety of software tools to clean, enhance and balance the colours, resulting in bright, vivid images that often have a contemporary feel, whilst retaining the somewhat “dreamy” quality of the original.

Having read that one of his tools uses neural networks, I was keen to discover how easy it is to achieve something similar using freely-available software found online. The answer is “quite easy” – although achieving results as good as Stuart’s is somewhat more difficult. Here’s how I went about it.

First, head over to GitHub and install Real-ESRGAN. It’s a suite of software, written in Python, that uses a type of neural network called a GAN (Generative Adversarial Network) for image processing. Pretty much everything I know about GANs can be found at that Wikipedia link, so this is a practical rather than a theoretical guide.

You can install either the Python framework and associated libraries, or a slightly less functional executable file for macOS, Linux or Windows. I had no trouble installing either, on both macOS and Windows, using conda to manage Python. The project is very well documented. You’ll also need to download a model, again as explained in the documentation.
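For reference, the Python route looks roughly like this. I’m paraphrasing the project’s README here, so treat the repository’s documentation (in particular the model download link and destination folder) as the source of truth:

# create and activate a fresh conda environment (Python version is illustrative)
conda create -n realesrgan python=3.9
conda activate realesrgan

# clone the repository and install its dependencies
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
pip install basicsr facexlib gfpgan
pip install -r requirements.txt
python setup.py develop

# download a pre-trained model (the README gives the current link and folder)
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models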

Next, grab yourself some autochrome images, easily found via a Google image search. Larger images take longer to process and I have an old computer (a 2015 iMac), so I restricted myself to images between about 600 and 900 pixels on one side.
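Alternatively, rather than hunting only for small images, you could batch-shrink larger downloads first. ImageMagick isn’t part of Real-ESRGAN; it’s just one tool that can do this in a single line, if you happen to have it installed:

# shrink anything larger than 900 pixels on its longest side, in place
# (the ">" means "only ever shrink, never enlarge")
mogrify -resize "900x900>" path/to/input/folder/*.jpg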

Drop your images into a folder, but don’t have subfolders inside it or you’ll get an error. Then it is as simple as typing, from the Real-ESRGAN directory:

python inference_realesrgan.py -n RealESRGAN_x4plus -i path/to/input/folder -o path/to/output/folder --face_enhance

And wait. When the code has finished, the output folder contains your enhanced images. By default they are upscaled 4x. Note that in some cases, face enhancement may generate unsatisfactory results (by which I mean weird-looking distorted faces), in which case just omit the --face_enhance flag.
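In other words, run the same command again with face enhancement switched off:

python inference_realesrgan.py -n RealESRGAN_x4plus -i path/to/input/folder -o path/to/output/folder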

It would be nice if we could automate the next steps too: adjustment of levels, contrast and colour. However, every image is different and requires manual processing to get it looking how you want. I like to use Photoscape X for this. I often find that simply running “Auto Levels” followed by “Magic Color” gets me most of the way to an attractive image. I also use the Film filters: one of Velvia, Provia, Portra or Gourmand usually results in an effect that I like.
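That said, if you want a quick, rough automatic first pass before the manual tweaking, ImageMagick’s auto-level plus a small saturation boost is a crude stand-in for the sort of thing “Auto Levels” and “Magic Color” achieve. This is only my approximation, not what Photoscape X actually does, and it is no substitute for adjusting by eye:

# stretch the levels, then boost saturation by ~15%
# (-modulate takes brightness,saturation,hue as percentages)
convert enhanced_image.png -auto-level -modulate 100,115,100 adjusted_image.png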

Now let’s look at some before/after comparisons. Once you’ve experimented with a few images, you start to get a feel for which ones are likely to give good results, and why. I’m using quite large images in this blog post, as the results are more difficult to see if images are scaled down too small.

In our first example, the original contained plenty of colour under the dark, faded exterior. Also, the subject must have been sitting very still as her face is sharp and in focus. This provides good input for the face enhancement algorithm. Face enhancement can sometimes lead to an over-smooth “wax dummy” effect but in this case, I think the result is quite striking.

Girl with a bunch of flowers (1908) by Etheldreda Janet Laing

The next example contained an enormous amount of colour to begin with. All that’s required is a light “clean” by the GAN, and rebalancing the levels and colour does the rest.

A young woman admires flowers in a Baden garden in Germany (1928) by Wilhelm Tobien, National Geographic

Another example where the subject must have been sitting very still and the focus is good. Here, face enhancement has worked very well. Patterned fabrics also often respond well to upscaling.

A young woman in Galway, Ireland (1913) by Marguerite Mespoulet

Many old images have a distinct colour tint – often yellow, but other colours too. The “Auto Levels” tool in Photoscape X often does an excellent job of removing tints. This is another example where enhancement created a nice sharp image because the subject was sitting still.

Native American Man (1910) by Mrs Benjamin F. Russell, George Eastman House Collection

Once again, all the elements needed to generate a very striking enhanced image were present in the original: strong lighting, good focus and still subjects.

Two Bishari Girls (1914) by Auguste Léon, Albert Kahn Collection

Another example where upscaling and colour balancing have brought out great detail in the fabric patterns.

Woman in Greece (1920?) source unknown

Here, the GAN has accentuated the strong lines and the stone. There’s also a nice richness of colour in the sky and the camels.

Giza, Egypt (1925) by Gervais Courtellemont and W. Robert Moore, National Geographic.

Architectural subjects often respond well to the GAN, presumably in large part because they are still. Geometric features and text are often brought sharply into focus.

Moulin Rouge, Paris (1914), Albert Kahn Collection

Our final example. Once more we have good original colours, still subjects and sharp focus. Face enhancement has worked well here. There are cases where it does not: faces turned away from the camera are often distorted. Sometimes (not here) children’s faces take on a disturbing “old man” aspect.

Vedenisov Family (1910) by Peter Vedenisov

In summary: enhancing old photographs using GANs and other software is fun, addictive and quite readily accessible. You needn’t use autochromes of course; perhaps you have your own collection of old low-resolution prints or scans to try. Not everything works well and I think the basic rule is that the processed image is only as good as the starting material. It’s also quite easy to “over-process”, resulting in images with a weird and unnatural appearance. But with a light touch and the right image, you can achieve some interesting and artistic results.

One thought on “Enhancement of old colour photographs using Generative Adversarial Networks”

  1. Great, as usual, and interesting, as photography has become my hobby in recent years.
    Real-ESRGAN seems to be doing a similar job to DXO-PureRAW.
    All the best for 2022.
