# color-film-auto-fixer

Analog film shot today still has to be scanned before the photos can be used, and the scans often come back from the photo lab looking dull and color-inaccurate. Over the last few weeks I have developed a program that automates the tedious steps of editing these digital scans. It takes a folder of unedited photos and a destination folder, and batch-processes everything in between. The program is written in Python and uses a number of computing, image-processing, and machine-learning libraries: Matplotlib for most of the computing, OpenCV for most of the image processing, TensorFlow with a pretrained MTCNN for the machine learning, and Tkinter for the user interface.
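As a rough sketch, the batch-processing shape looks like the following. The names `process_folder` and `process_image` are hypothetical stand-ins, not the repository's actual code; `process_image` is a placeholder for the pipeline described below.

```python
import os
import cv2

def process_image(image):
    # Placeholder for the editing pipeline described below
    # (green-cast fix, white balance, gamma correction, blur).
    return image

def process_folder(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif")):
            continue
        image = cv2.imread(os.path.join(src_dir, name))  # BGR, uint8
        if image is not None:
            cv2.imwrite(os.path.join(dst_dir, name), process_image(image))
```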

The first step reduces the green level of each RGB pixel to remove the green cast the scans carry. In my testing, a 5% reduction in the green channel proved sufficient.

Next the program applies an automatic white-balance algorithm called the Gray World Assumption, which assumes that the image is, on average, neutral gray. The program converts the image to the LAB color space, where L is the lightness of the image, A is the range of colors between green and magenta, and B is the range of colors between blue and yellow. It then effectively normalizes the A and B channels toward neutral, weighted by the image's lightness. This produces an image where white objects have no color shift, and it broadly fixes the color of the image.

Next the program applies a gamma correction with gamma = 0.9. This darkens the blacks, deepens the shadows, and creates more contrast. For the red values of the pixels of an image, gamma correction works like this: Output[R] = (Input[R] / 255)^(1 / gamma) * 255. The same formula is applied to the green and blue channels.

Next the program uses an MTCNN with a pretrained model to classify the images as portraits. MTCNN stands for Multi-task Cascaded Convolutional Network, which, put simply, is a set of convolutional neural networks chained together, each discerning different features. I chose a pretrained model because the most powerful machine I have is a dual-core laptop with no GPU; training the network myself was not feasible, so a pretrained model seemed like the obvious solution. In my testing the detector was at least 90% accurate. It essentially draws a bounding box around each face in the image, and I use those boxes to calculate the percentage of the image's area that the faces take up. If the faces cover at least 0.8% of the image, it is classified as a portrait; I chose this threshold to weed out false positives. Rough sketches of each of these steps follow below.
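The green-cast step can be sketched in a few lines of NumPy. The function name is mine, and it assumes the BGR uint8 layout that OpenCV loads (channel index 1 is green in both BGR and RGB):

```python
import numpy as np

def reduce_green_cast(image, factor=0.95):
    """Scale the green channel down by 5% to counter the green cast."""
    out = image.astype(np.float32)
    out[:, :, 1] *= factor  # channel 1 is green in BGR and RGB
    return np.clip(out, 0, 255).astype(np.uint8)
```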
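A common OpenCV recipe for Gray World balancing in LAB looks roughly like this. The 1.1 scaling factor and per-pixel lightness weighting are one conventional variant and an assumption on my part, not necessarily the exact math this project uses:

```python
import cv2
import numpy as np

def gray_world_white_balance(image):
    """Gray World Assumption white balance in the LAB color space."""
    lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB).astype(np.float32)
    avg_a = lab[:, :, 1].mean()
    avg_b = lab[:, :, 2].mean()
    # Pull A (green-magenta) and B (blue-yellow) toward neutral (128),
    # weighting the shift by each pixel's lightness.
    lab[:, :, 1] -= (avg_a - 128) * (lab[:, :, 0] / 255.0) * 1.1
    lab[:, :, 2] -= (avg_b - 128) * (lab[:, :, 0] / 255.0) * 1.1
    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```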
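The gamma formula maps directly onto a 256-entry lookup table, which is the usual fast way to apply it with OpenCV; with gamma = 0.9 the exponent 1/gamma is about 1.11, which is what pulls midtones and shadows down:

```python
import cv2
import numpy as np

def gamma_correct(image, gamma=0.9):
    """Apply Output = (Input / 255) ** (1 / gamma) * 255 to every channel."""
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(image, table)  # applies the table per channel
```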
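Assuming the TensorFlow-backed `mtcnn` package (the README only says a pretrained MTCNN, so the exact package is my guess), the portrait check might be sketched like this:

```python
import cv2
from mtcnn import MTCNN  # pip install mtcnn

detector = MTCNN()

def is_portrait(image, min_face_fraction=0.008):
    """Classify as a portrait if faces cover at least 0.8% of the image.

    detect_faces expects RGB, while OpenCV loads BGR, hence the convert.
    Summing the bounding-box areas is my reading of the percentage-area
    heuristic described above.
    """
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(rgb)
    h, w = image.shape[:2]
    face_area = sum(f["box"][2] * f["box"][3] for f in faces)
    return face_area / (w * h) >= min_face_fraction
```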

Finally, if an image is classified as a portrait, it is blurred with a 7x7 Gaussian blur to soften skin and reduce grain; this was a stylistic choice I made early on. If the image is not classified as a portrait, it gets only a 3x3 Gaussian blur, which reduces the harshness of the grain while preserving as much detail as possible.
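This last step reduces to choosing a kernel size. A minimal sketch, assuming OpenCV's GaussianBlur with sigma derived automatically from the kernel size:

```python
import cv2

def finish(image, portrait):
    """Soften portraits more aggressively than other shots."""
    ksize = (7, 7) if portrait else (3, 3)
    return cv2.GaussianBlur(image, ksize, 0)  # sigma 0: derived from ksize
```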

Demo Video: https://youtu.be/R9Zm3-Jjmew
