Edit: Crap, I go to all the trouble of writing a long post with a ton of formatting and then call it "film socks"...

This is a very long post, but the main results are tabulated near the top.

My goal with this post is to explore how common film stocks compare in resolution to modern digital sensors and to each other. The subjects of resolution and sharpness are vast, and quantifying *perceived* sharpness and resolution can be difficult if not impossible.

u/tach has suggested a couple of resources: *Image Clarity* by John B. Williams and *Basic Photographic Materials and Processes* by Nanette Salvaggio. I will be writing from my scientific and technical background, and will therefore present the quantitative and empirical measurements of sharpness that are most accessible, along with example photos, and let you make your own judgments about perceived sharpness.

I’m going to start by simply sharing side-by-side comparisons of an original digital photo taken on a 24 MP sensor next to a copy that has been processed to simulate the resolution of various film stocks.

**To be clear, I have only simulated the ability of the film to resolve detail; I have not simulated color, grain, halation, or other film effects**. The idea is that if I took the exact same photo on film, with the exact same lens and exact same conditions, then did a *perfect* scan of the film and color-corrected it to look the same as the digital photo, they would look like the simulated photos (neglecting grain and halation). After the sample photos, I will explain how I performed these simulations and do some more detailed analysis. Tabulated below are full resolution photos along with side-by-side comparisons with the original at a 100% crop.

You may find that some film simulation photos, zoomed out, look at least as sharp as the original or sharper, but at 100% look distinctly less detailed. **More on that below.** This is the distinction between perceived sharpness and technical, empirical sharpness. Which matters more for photography? That depends on the application. For a print hanging on a wall, perceived sharpness definitely matters more, as the photo will be viewed from a distance.

The original photo used in the simulations and for comparison was taken with a Carl Zeiss Jena 50mm f/2.8 Tessar at f/5.6 at ISO 400 on a Canon DSLR. It was color corrected, but it was not sharpened, and the texture/clarity sliders weren’t used. It’s not a great photo, but it is one of the sharpest and most detailed images in my library.

In my opinion, all of these simulations have plenty of resolution for prints up to 15” at least. The TMax simulation is probably good to print up to 30”, and is nearly as sharp as the original!

One detail that I left out is that the original photo was actually taken on a 1.6x crop sensor (Canon 80D). For the sake of the simulation, I “pretended” that it was a full-frame photo. If we simulate the same photo taken on a crop frame of the sharpest of the color films, Velvia 100, it looks like this, and here's the side-by-side. The lateral resolution is effectively lowered by the crop factor, but I didn't do this just by resizing the photo; I simulated it as though it were a smaller frame and rendered it at the same digital (pixel) resolution as the original photo.

Let me know in the comments which of these (excluding the crop simulation) looks like the sharpest and which one looks the softest, I'm curious if there will be variety in the answers! Now I'll move on to the details. This is the long part, and it involves a little math.

**INTRODUCTION** Let me begin by defining the way I use the words *image* and *photo* in this post. When I refer to an *image*, I am talking about the exact pattern of light that a photographic lens focuses onto the sensor or film. When I refer to a *photo* or *picture*, I am talking about the recording of that image made by the sensor or film. You can think of the image as the physical, real representation of the scene you are trying to capture, as projected by the lens, and the photo as a data recording of that image.

The reason for making this distinction is that, whatever medium you use, there is a loss of information in the transcription of the image to the photo. (Note: the image itself is a lossy representation of the real scene, because 1) depth has been lost — the image is a 2D projection of a 3D scene — and 2) the lens doesn’t do a perfect job.) The photo will be discussed in this post as a piece of data. After all, once it makes it onto your computer, it’s basically a grid of numbers, each number representing the intensity of red, green, or blue light which fell onto a particular pixel (this is an oversimplification, due to the specific way that non-Foveon sensors record color images). For film, the data is recorded as a pattern of metallic silver particles, converted by light from transparent, dye-sensitized silver halide crystals. In principle, one could perform a very sophisticated IR microspectroscopy experiment to measure the location of each individual metallic silver particle (and, in color film, which color layer it is embedded in) and recreate the image digitally from that recorded data; in practice this would take days per scan, so we just use an image scanner to “take a picture” of the film.

**FOURIER TRANSFORMS** To understand the way that film resolution has been simulated above, it is first necessary to understand the mathematical concept of the Fourier transform.

Here is a good YouTube video that explains it at length. I would also direct you to the Wikipedia page on the subject, or even just this animation. But let me summarize: the fundamental concept of signal analysis is that any signal or data series can be represented as a sum of sine and cosine waves with different oscillation amplitudes A and different oscillation frequencies f. When you calculate the Fourier transform of a piece of data, you are explicitly calculating the amplitude A that corresponds to any given frequency; in other words, you find a function A(f), the amplitude of the data set’s constituent sine waves as a function of frequency.
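To make A(f) concrete, here is a minimal numpy sketch (the post's actual computations were done in Matlab; Python is used here purely for illustration). It builds a signal out of one known sine wave and recovers that wave's frequency and amplitude from the FFT:

```python
import numpy as np

# A test signal: a single sine wave with amplitude 2 and frequency 5,
# sampled at 1000 points over one unit interval.
n = 1000
t = np.linspace(0, 1, n, endpoint=False)
signal = 2.0 * np.sin(2 * np.pi * 5 * t)

# rfft returns the complex amplitude at each non-negative frequency,
# which is all we need for real-valued data.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(n, d=1.0 / n)  # frequency axis, cycles per unit

# A(f): amplitude of each constituent sine wave (rfft scaling is n/2).
A = np.abs(spectrum) / (n / 2)

print(freqs[np.argmax(A)])  # 5.0 -> the frequency we put in
print(round(A.max(), 6))    # 2.0 -> the amplitude we put in
```

For a signal made of several superposed sine waves, A(f) would show one peak per component, which is exactly the decomposition described above.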

In the case of a photo, which is a 2D data set, the Fourier transform decomposes the photo into a series of sine waves which oscillate along the horizontal direction and a series of sine waves which oscillate along the vertical direction. The Fourier transform therefore produces a function A(fx, fy), where fx and fy are the frequencies along the horizontal and vertical directions.

Low frequencies of oscillation correspond to large features in the photo, while high frequencies of oscillation correspond to fine detail. It is useful to talk about the frequencies in photos in units of cycles per mm (in photographic jargon, that might be called lines per mm or line pairs per mm). That is to say, according to the size of the original photo (36x24mm for 135 film or full frame sensors), how many oscillations of the sine wave take place over the span of 1mm. The smaller the number of cycles per mm, the larger the detail. The larger the number of cycles per mm, the finer the detail.

See for example this pair of simulated full frame, 36x24mm photos:

The first one is a photo of a sine wave, represented in black and white, with a frequency of 1 cycle/mm. If you count them, you’ll find that it has 36 black bars and 36 white bars. Since it's representing a frame that is 36 mm wide, that means it has 1 cycle per mm.
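A pattern like this can be generated in a few lines. The following is a numpy sketch (the post's own code was Matlab), and the 100-samples-per-mm grid is an arbitrary choice of mine for illustration:

```python
import numpy as np

# Model a 36x24 mm full-frame image as a grid sampled at 100 samples/mm.
samples_per_mm = 100
width_mm, height_mm = 36, 24
x_mm = np.arange(width_mm * samples_per_mm) / samples_per_mm  # position in mm

def stripe_pattern(cycles_per_mm):
    """A vertical-bar sine grating, mapped to brightness values in 0..1."""
    row = 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_mm * x_mm)
    return np.tile(row, (height_mm * samples_per_mm, 1))

one_cpm = stripe_pattern(1)    # 36 dark bars and 36 light bars
three_cpm = stripe_pattern(3)  # 108 of each
print(one_cpm.shape)           # (2400, 3600)
```

Changing `cycles_per_mm` regenerates the pattern at any spatial frequency.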

The second one is a photo of a sine wave with a frequency of 3 cycles/mm, and so it has 3 x 36 = 108 black bars and 108 white bars. So what does a Fourier transform of a photo look like?

Here is a photo that is composed primarily of large features. The light on the wall is a smooth gradient, and the lamp fills much of the frame and doesn’t have a lot of texture or detail.

Here is its fast Fourier transform (FFT—a specific algorithm for computing Fourier transforms), with the spatial frequencies in cycles per mm written along the axes. The upper right corner corresponds to low spatial frequencies along the horizontal and vertical direction, and the lower right corner corresponds to high spatial frequencies. Brighter yellows correspond to the dominant frequencies, while darker blues correspond to frequencies which are mostly absent from the photo.

On the other hand, here is a photo with lots of fine details, and here is its Fourier transform. Notice that compared to the lamp photo, there is less structure and less intensity in the upper right corner (low frequencies) of the FFT plot, and more intensity in the middle and bottom right, corresponding to the more dominant fine features in the photo.

It’s also worth noting that a Fourier transform can be reversed, through an operation called an inverse Fourier transform. If we perform the inverse Fourier transform on the Fourier-transformed photo of the lamp, the original photo will be recovered with almost perfect fidelity. In fact, you probably won’t be able to tell the difference between the original and the inverse-transformed photo.
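That round trip is easy to verify numerically. A numpy sketch (again standing in for the Matlab used in the post), with random data in place of a real photo:

```python
import numpy as np

rng = np.random.default_rng(0)
photo = rng.random((240, 360))  # stand-in for a grayscale photo

# Forward 2D FFT, then the inverse, with nothing modified in between.
recovered = np.fft.ifft2(np.fft.fft2(photo)).real

# The round trip is lossless up to floating-point rounding.
max_error = np.abs(photo - recovered).max()
print(max_error < 1e-12)  # True
```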

At this point, you might have noticed that the photos being used for examples are black and white. Black and white photos make for a simpler example, but to extend the concept to color photos, all you need to do is compute the Fourier transforms of the red, green, and blue channels separately.

Here’s a photo that has lots of fine detail in red, but dominantly very coarse detail in blue. Now here is the color FFT, which is basically an FFT plot made by combining the separate FFTs of the red, green, and blue channels of the original photo into a new red/green/blue color photo. Notice that the low-frequency data (upper right) has a bluish hue, while the high-frequency data has a reddish hue, as one would expect from the broad features of the blue sky and the fine red features of the tree leaves.
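A per-channel "color FFT" of this kind can be sketched as follows in numpy. The log scaling and per-channel normalization are my own display choices, not necessarily what was used for the linked plot:

```python
import numpy as np

def color_fft(photo_rgb):
    """FFT each color channel separately, then recombine the shifted,
    log-scaled magnitudes into a viewable RGB 'FFT photo'."""
    out = np.empty(photo_rgb.shape, dtype=float)
    for c in range(3):
        mag = np.abs(np.fft.fftshift(np.fft.fft2(photo_rgb[..., c])))
        logmag = np.log1p(mag)          # compress the huge dynamic range
        out[..., c] = logmag / logmag.max()
    return out

rng = np.random.default_rng(1)
demo = rng.random((64, 64, 3))          # stand-in for a color photo
fft_view = color_fft(demo)
print(fft_view.shape)                   # (64, 64, 3), viewable as RGB
```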

**SIMULATING FILM RESOLUTION** Now, finally, on to sharpness and resolution. A photo that is soft and lacking in fine detail, whether due to blur or low resolution, is going to have essentially no content in the high-frequency part of the FFT. This also means that we can make a photo softer and blurrier by removing the high-frequency components from its FFT, like I've done here to the FFT of the red tree (black = 0, data deleted). After computing the inverse FFT to turn it back into a normal image, it now looks like this, much blurrier! You’ll also notice that edges in the photo have weird oscillating distortions outlining them. This is known as the Gibbs phenomenon in signal processing, and it occurs whenever you have an abrupt frequency cutoff in your signal.
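The Gibbs phenomenon is easy to reproduce in one dimension: low-pass a hard edge with an abrupt frequency cutoff and the result overshoots and rings. A numpy sketch (the cutoff bin of 40 is arbitrary):

```python
import numpy as np

# A hard edge, like the boundary between a dark and a bright region.
n = 512
edge = np.zeros(n)
edge[n // 2:] = 1.0

# Delete everything above an abrupt frequency cutoff (black = 0, deleted).
spectrum = np.fft.rfft(edge)
spectrum[40:] = 0.0
filtered = np.fft.irfft(spectrum, n)

# The filtered edge overshoots its original 0..1 range and oscillates
# near the transition: the Gibbs phenomenon.
print(filtered.max() > 1.0, filtered.min() < 0.0)  # True True
```

A smooth, gradual frequency roll-off (like a real film MTF) avoids this ringing, which is why the film simulations below don't show it.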

We now introduce the modulation transfer function, or MTF. This is a general concept from signal analysis which characterizes a measurement’s frequency-dependent response to the input data; it is also sometimes called a response function. Put more plainly, any measurement device (e.g., a camera’s image sensor or photographic film) responds differently to different data frequencies. In general, most instruments lose sensitivity as frequency increases, and this is the case for photographic systems. Your digital sensor certainly can’t resolve detail that is smaller than its pixels, and for a variety of reasons, film generally can’t resolve detail that is smaller than about 0.01 mm on the film plane (though this varies quite a lot from film to film). The characterization of an instrument’s frequency-dependent sensitivity is its MTF.

Here is a compilation of MTFs for a few common film stocks. These charts can be found by searching for “[film name] MTF”; the MTFs for most Kodak and Fuji professional films are supplied by the manufacturers.

The way to interpret a film MTF curve is as follows: imagine you use a *perfect* lens to take a photo of a series of perfectly black and perfectly white stripes (and you nail the exposure). Then you very carefully measure the difference in opacity of the film between the bright and dark stripes (using a technique called densitometry) and calculate the contrast ratio (bright divided by dark). You then repeat this for black and white bars of various widths/spacings and make a graph of contrast ratio vs. the width/spacing of the bars, with the contrast ratio of a fully white and fully black exposed frame defined as 1, or 100%. This is essentially the MTF. What is done in practice, however, is that the MTF is calculated by imaging a pattern of bars (or sine waves) whose spatial frequency gradually increases across the frame.

This is what such a pattern looks like before accounting for a film’s loss of sensitivity to fine detail (1 cycle/mm on the left, 140 cycles/mm on the right), and this is what it looks like simulating the sensitivity of Kodak T-Max 100. (NOTE: for these test strip images, you have to zoom WAY in to see the stripes at the right edge.) The contrast ratio is simply measured across the film strip at various points and plotted to calculate the MTF. Alternatively, the MTF can be calculated by performing a 1D Fourier transform of a digitized version of the film strip.
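Here is a numpy sketch of how such a sweep pattern and a pointwise contrast measurement might look. The sampling density, the window size, and the modulation definition (max - min)/(max + min), which is the conventional normalization that makes a full black-to-white swing equal 1, are my choices:

```python
import numpy as np

# A 36 mm test strip whose frequency sweeps from 1 cycle/mm on the left
# to 140 cycles/mm on the right, heavily oversampled so the sampling
# itself doesn't reduce the measured contrast.
samples_per_mm = 2000
x = np.arange(36 * samples_per_mm) / samples_per_mm  # position in mm

f0, f1, span = 1.0, 140.0, 36.0
# Frequency grows linearly with x, so the phase is its integral.
phase = 2 * np.pi * (f0 * x + (f1 - f0) * x**2 / (2 * span))
strip = 0.5 + 0.5 * np.sin(phase)

def local_contrast(strip, x, x0_mm, window_mm=0.5):
    """Modulation (max - min)/(max + min) in a window around x0_mm."""
    chunk = strip[(x > x0_mm - window_mm) & (x < x0_mm + window_mm)]
    return (chunk.max() - chunk.min()) / (chunk.max() + chunk.min())

# Before any MTF is applied, the strip has (nearly) full contrast everywhere.
print(local_contrast(strip, x, 5.0) > 0.95,
      local_contrast(strip, x, 30.0) > 0.95)  # True True
```

Applying a film MTF to this strip and re-running `local_contrast` at many positions would trace out the MTF curve itself.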

The film simulations in this post are done by first digitizing the manufacturer-provided MTF curve, then multiplying it by the Fourier transform of a photo, and finally performing the inverse FFT on that product. That process is illustrated here: in the left frame is the 2D version of the Ektachrome MTF, and in the middle is the FFT of the hill photo. On the right is the product of the two, and as you can see, the bottom right corner of the product, which corresponds to fine detail, is somewhat darker; we have thrown away fine-detail information from the photo by multiplying it by a lossy film MTF. The result, after taking the inverse Fourier transform, is a very specific type of blur applied to the photo, the exact form of which depends on the film stock’s MTF. It’s not exactly a Gaussian blur, although when you perform a Gaussian blur in Photoshop it does use essentially this exact process, only with a Gaussian-shaped MTF.
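A numpy sketch of that pipeline (the post used Matlab and digitized manufacturer curves; the `toy_mtf` below is a made-up stand-in, not a real film's MTF):

```python
import numpy as np

def simulate_film(photo, mtf, frame_width_mm=36.0):
    """Apply a film-style MTF to a grayscale photo (2D float array):
    FFT, multiply by a radially symmetric 2D version of the MTF curve,
    inverse FFT."""
    h, w = photo.shape
    mm_per_px = frame_width_mm / w
    # Spatial-frequency axes in cycles/mm for every FFT bin.
    fx = np.fft.fftfreq(w, d=mm_per_px)
    fy = np.fft.fftfreq(h, d=mm_per_px)
    f_radial = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    filtered = np.fft.fft2(photo) * mtf(f_radial)
    return np.fft.ifft2(filtered).real

def toy_mtf(f):
    """Made-up MTF: unity at DC, rolling off around 5 cycles/mm
    (deliberately low so the effect is visible at this sample size)."""
    return 1.0 / (1.0 + (f / 5.0) ** 2)

rng = np.random.default_rng(2)
photo = rng.random((240, 360))   # stand-in for a grayscale photo
soft = simulate_film(photo, toy_mtf)

# High-frequency content is attenuated, so the "film" photo varies less,
# while the mean brightness (the DC component, where MTF = 1) is preserved.
print(soft.std() < photo.std())  # True
```

For a color photo, the same filter would simply be applied to each channel, with a per-channel MTF where the manufacturer supplies one.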

You’ll notice that for some of the MTF curves shown earlier, the MTF values exceed 100% at certain spatial frequencies. This is due to grain structure. Grain tends to emphasize detail that occurs at the same size/spatial frequency as the grain itself. Film grain size is not fixed; there’s a wide range of grain sizes on a given film stock, so there’s generally a range of spatial frequencies which are emphasized and enhanced by grain. That effect is captured by the MTF, and therefore by the above simulations. Basically, by setting the high-frequency part of the MTF to a value above 100%, sharpening occurs. This is also how your computer performs sharpening operations in Lightroom/Photoshop/etc. There are other, more sophisticated types of sharpening, but this is the basic version.

**QUANTIFYING DETAIL** A measure of the detail contained in a piece of data, frequently used in information science and signal processing, is its entropy. The definition of entropy is complex and not especially intuitive, but the larger the entropy, the more fine detail the data contains. Below is a table of calculated log(entropy) for the different film simulations. Please note that an entropy difference of even 1% represents a huge change in the level of detail, because entropy is presented on a logarithmic scale.

| Simulation | log(entropy) |
|---|---|
| Original photo (color) | 7.62 |
| Portra 160 | 7.53 |
| Portra 400 | 7.61 |
| Portra 800 | 7.41 |
| Velvia 50 | 7.61 |
| Velvia 100 | 7.61 |
| Pro 400H | 7.53 |
| Ektar 100 | 7.55 |
| Ektachrome e100 | 7.57 |
| Original photo (B&W) | 7.50 |
| TMax 100 | 7.50 |

There are some unintuitive results in this table. For example, the entropy of Portra 400 is higher than that of Portra 160. My guess is that the MTF of Portra 400 is actually slightly higher than that of Portra 160 at 20 cycles/mm, and most likely there’s a lot of detail in this photo at roughly the 20 cycles/mm mark which is enhanced by Portra 400. Another unintuitive result is that the entropies of Portra 400 and Velvia 50/100 are almost identical to that of the original photo (the original photo edges them out by only about a part in ten thousand). I believe that this is, again, because the MTF curves of these films generally exceed 1 in the 15-30 cycles/mm range, where the photo has a lot of detail; hence they have a bit of a sharpening effect. That isn’t completely obvious in the side-by-side comparisons because there is a lot of extremely fine detail which gets blurred out in the film simulations. But for the actual structure of the photo, the leaves and rocks and tufts of grass, that 15-30 cycles/mm range is very important. So, pixel peeping aside, I think that entropy does a good job of capturing perceived sharpness. Lastly, the MTF curve of TMax 100 is quite impressive and remains above 1 all the way up to 50 cycles/mm!
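For the curious, here is one common way to compute an image's entropy: the Shannon entropy of its gray-level histogram. This is a numpy sketch; the post doesn't specify which estimator its Matlab code used, so treat this as illustrative of the concept rather than a reproduction of the table:

```python
import numpy as np

def image_entropy(photo, bins=256):
    """Shannon entropy (in bits) of the gray-level histogram of a
    float image with values in 0..1."""
    hist, _ = np.histogram(photo, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(3)
sharp = rng.random((200, 200))        # noise: maximal fine "detail"
# Soften it by averaging each pixel with three neighbors; blurring
# narrows the brightness distribution and lowers this entropy measure.
soft = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)
        + np.roll(sharp, (1, 1), axis=(0, 1))) / 4.0
print(image_entropy(sharp) > image_entropy(soft))  # True
```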

**SIDE NOTES** All computations and simulations were performed in Matlab. Film MTF curves were digitized manually and interpolated with a cubic algorithm in a fully logarithmic space. The curves were extrapolated out to 200 cycles/mm with a linear function (linear within the log-log space). For MTF curves supplied with per-channel data, the curves were independently digitized and then averaged in the log-log space.

A note regarding the units of cycles per mm, lines per mm, and line pairs per mm: lines per mm and line pairs per mm are often used interchangeably, but the astute reader will have noticed that there should technically be a factor of two difference between the two. Which of these two measures is more indicative of resolution? That's situation dependent.

Line pairs per mm is perhaps more useful when talking about a subject where detail comes through in texture. To resolve individual grains of sand on a beach, it is necessary to see the faint shadow which outlines each grain, and each grain is defined by a bright spot and a dark edge; thus, to resolve a single grain of sand, the grain must be at least as large as the minimum resolvable line pair. Lines per mm, or dots per inch (you might think of this as the size of individual pixels on a sensor), are more indicative of resolution when detail is defined by hard edges: transitions between continuous bodies in the composition which have significant contrast between them. A good example might be a photo of a tree where the leaves are large enough to take up many pixels (or many "lines" or "dots") and stand out against a contrasting background; in this case, a leaf will appear sharp if the transition between leaf and background is as abrupt as possible, which in terms of line pairs or cycles corresponds to the *transition* between the light line and the dark line.

I am far, far, far from a photography expert. I’ve only been seriously interested in photography for about a year, and in film photography for six months. The experience I will draw from instead is my experience as an optical physicist. My research concerns optical microscopy, high-resolution spectroscopy, and super-resolution imaging of defects in 2-dimensional semiconductors and nanoscale magnetic domain walls in 2-dimensional magnets. The specific concepts that I have discussed above, which many readers may know of as concepts from photography, are actually quite general and are ideas that imaging science borrowed from the more general theory of *signal processing*, which is central in optics, electronics, and information science. So, while I may not have much specific experience in photography, I hope that I can use my relevant experience in optics, signal processing, and imaging to explore the topic of resolution and sharpness in an informative and interesting way.