Who can explain the relationship.....

Discussion in 'Casual Photo Conversations' started by fotolopithecus, Mar 4, 2018.

  1. But if the lens is the same, meaning the same physical lens at the same focal length, how do you get the same framing with different size sensors?

    In some cases you can change the distance, but if everything is not on a flat plane then the playing field is no longer level.

    IF your subject is a flat plane (perpendicular to the lens axis), then you can change the distance to match the framing and expect both 16 megapixel sensors to record the same amount of subject detail. Assuming, as before, that neither the lens nor the AA filter is the limiting factor.

    I mention the lens limitation to rule out the use of a very small physical aperture, in which case the smaller sensor's image would be degraded more.
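
    A rough sketch of the distance change needed to keep the same framing with the same lens, using the thin-lens approximation and illustrative numbers (a 50mm lens and a 1.8 m wide subject, neither taken from the thread):

        # Thin-lens sketch: distance needed to keep the same framing with the
        # same focal length on two different sensor sizes (illustrative values).
        focal_length_mm = 50.0
        subject_width_mm = 1800.0   # assume a ~1.8 m wide subject filling the frame
        sensor_widths_mm = {"full frame": 36.0, "APS-C (1.5x)": 24.0}

        for name, sensor_w in sensor_widths_mm.items():
            magnification = sensor_w / subject_width_mm           # image size / subject size
            distance_mm = focal_length_mm * (1.0 + 1.0 / magnification)
            print(f"{name}: shoot from about {distance_mm / 1000:.2f} m")
        # The APS-C distance comes out roughly 1.5x the full-frame distance,
        # i.e. the crop factor, which is why the playing field stops being
        # level as soon as the scene has depth.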
  2. The "input" side of this conversation might be (a bit of) a red herring. I know: garbage in, garbage out. Still, it might be better to look at resolution from the point of view of output. I know photographers who use digital Leicas with the best lenses in the world, and their only form of output is JPEGs on Instagram. Seems like overkill, but if you can afford it and don't care, that's cool too. And isn't digital printing at labs limited to 260 dpi anyway?

    Back on the input side, we now have lenses that can out-resolve sensors and vice versa. But unless you're making very fine prints, how much does it even matter? Beyond that, ISO is going to have some impact on detail for a lot of reasons, including noise and dynamic range.

    I don't mean to sound dismissive of a very interesting subject but isn't photography about making pictures? It's fascinating to think about what's going on under the hood but I suspect most of us are staring at Porsche engines trying to understand why they are better performers than VW beetle engines, which share a similar design.

    Just my two cents.
    movingfinger likes this.
  3. To answer the first question, you move the camera position until the framing is the same. You're complicating things beyond what I'm asking. Let's say everything is on the same plane and no AA filter is involved; we're talking all things being equal. It's the sensors I'm asking about, not the lenses or anything else, but what the sensors themselves are capable of when put in the same situation: resolving fine detail like hair, or the lines that make up what appears to be solid black in the area around George Washington's head on the dollar bill, etc.
  4. No, no, no. You keep throwing complications into the mix, like requiring the same focal length lens so the angle of view has to change. I'm putting in all the conditions so that you can't say, "Well, I shot a landscape at f/22, then backed up a half mile to keep the framing the same," etc. (In that case, the larger sensor will win.)

    When you keep the playing field level (same subject framing, and image quality not limited by the lens), the two 16 megapixel sensors should both produce the same image detail.

    Just think of the scene being broken into 16 million little dots; in one case the dots are simply packed into a smaller space. How can they be different?
    fotolopithecus likes this.
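
    A back-of-the-envelope check on the "16 million dots" argument, with assumed numbers (a 1 m wide framed subject and a ~16 MP sensor that is 4928 pixels across; neither figure comes from the thread):

        # Same subject framing, two 16 MP sensors of different physical size:
        # how many pixels land on a 10 mm wide detail of the subject?
        subject_width_mm = 1000.0   # the framed subject is 1 m wide in both shots
        detail_width_mm = 10.0      # a fine detail we want to resolve
        pixels_across = 4928        # horizontal pixel count of a ~16 MP sensor

        for sensor_name, sensor_width_mm in [("full frame", 36.0), ("APS-C", 24.0)]:
            pixel_pitch_mm = sensor_width_mm / pixels_across
            detail_on_sensor_mm = detail_width_mm * sensor_width_mm / subject_width_mm
            pixels_on_detail = detail_on_sensor_mm / pixel_pitch_mm
            print(f"{sensor_name}: pitch {pixel_pitch_mm * 1000:.1f} um, "
                  f"{pixels_on_detail:.0f} pixels across the detail")
        # Both sensors put the same ~49 pixels on the detail: the smaller
        # sensor's finer pitch is exactly cancelled by the smaller projected image.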
  5. Well, I may be wrong, but intuitively it seems like smaller pixels would render fine detail better. I think you've answered the question, though, which is that it should be the same.
  6. How big is your print size?
  7. The smaller pixels would resolve finer detail if the projected image on the sensor were also the same size. For the same view, the FF lens and camera present larger details to the sensor, so the sensor doesn't have to resolve as finely in order to produce the same amount of detail across the entire frame.
    So if you use the D810 with a 50mm lens and the D7200 with a 33mm lens, you have the same view, and you can see more detail in the D810 image. But if you use the same 33mm lens on the D810 and cut out a portion the same size as the D7200's sensor, it would have less detail than the D7200 image. It's a simple thing, but what I am really lacking is a term for what I earlier called "definition".
    fotolopithecus likes this.
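
    A rough sketch of the numbers behind that crop comparison, using the approximate published pixel counts for the two cameras and a simple 1.5x crop:

        # Crop a D810 frame down to APS-C size and compare its pixel count
        # with a native APS-C D7200 frame (same lens, same portion of the scene).
        d810_pixels = (7360, 4912)    # full frame, ~36 MP
        d7200_pixels = (6000, 4000)   # APS-C, ~24 MP
        crop_factor = 1.5

        d810_cropped_mp = (d810_pixels[0] / crop_factor) * (d810_pixels[1] / crop_factor) / 1e6
        d7200_mp = d7200_pixels[0] * d7200_pixels[1] / 1e6

        print(f"D810 cropped to APS-C: ~{d810_cropped_mp:.0f} MP")   # ~16 MP
        print(f"D7200 native APS-C:    ~{d7200_mp:.0f} MP")          # ~24 MP
        # With the same lens and the same crop of the scene, the D7200 keeps
        # more pixels on it, because its pixel pitch is finer than the D810's.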
  8. The Luminous Landscape website currently has an article discussing various relationships that influence resolution.
    fotolopithecus likes this.
  9. They should be the same if we ignore external factors like the AA filter and lens quality. With the same framing, 16 MP is 16 MP: both images have the same number of pixels when viewed at any size. In theory, an edge could be defined over a span of one pixel, so the pixel spacing determines the ultimate limit of resolution for that sensor.

    In practice, it's a little more complicated. In the simplest case, not all lines (edges) in the subject are parallel to the rows or columns of the sensor array. That introduces a degree of uncertainty as to whether a given pixel is exposed or not. At best they will alternate, resulting in what is described as "staircasing." Resolution therefore depends on the orientation of the subject.

    Another phenomenon is aliasing. When repeated details in the subject are finer at the focal plane than the spacing of the pixels, a pseudo-resolution appears when the detail spacing is an integral fraction of the cell spacing. In a color image, colored bands appear, described as Moire patterns. This effect depends on absolute spacing, so an image from a small sensor must be enlarged more, along with any Moire patterns. Oddly, this is more likely to occur with lenses having significantly better resolution than the sensor; lesser lenses blur the aliasing away, much as an AA filter does. If two sensors have the same pixel spacing and one is larger (more pixels), the larger one will exhibit less Moire for the same subject. This is why a 40-50 MP sensor can get by without an AA filter, but the filter is essential at 12 MP.

    There are other factors at play which depend on scale. Diffraction blur is proportional to the relative aperture, so the effect is greater on a smaller sensor at the same resolution. The cells are not perfectly isolated from each other, in part because of the micro lens array used to improve efficiency as the angle of incidence increases; the Bayer filter and its alignment with the cells play a part as well. No optical surface is perfect, so the filter stack causes some scattering. Again, the effect is inversely proportional to scale.

    Stars are effectively points of light (~0.02 arc-seconds) and should be captured on a single pixel. In practice, with a first-rate lens and a full-frame sensor, star images I've made are three pixels wide, with a strong center and two weaker side lobes. This is likely due to diffraction, but scattering is possible. Elongation due to the earth's rotation is a separate issue.
    fotolopithecus likes this.
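
    A quick sanity check on the diffraction point above, assuming green light (~550 nm) and typical pixel pitches of roughly 4.9 um for a 36 MP full-frame sensor and 3.9 um for a 24 MP APS-C sensor (assumed values, not from the post):

        # Airy disk diameter (to the first dark ring) is about 2.44 * wavelength * f-number.
        wavelength_um = 0.55   # green light, roughly mid-visible

        for f_number in (2.8, 8.0, 16.0):
            airy_um = 2.44 * wavelength_um * f_number
            for sensor, pitch_um in [("full frame 36 MP", 4.9), ("APS-C 24 MP", 3.9)]:
                print(f"f/{f_number:>4}: Airy ~{airy_um:4.1f} um vs {sensor} "
                      f"pitch {pitch_um} um ({airy_um / pitch_um:.1f} pixels wide)")
        # By around f/8 the Airy disk already spans two to three pixels of a
        # fine-pitch sensor, so a star image a few pixels wide at moderate
        # apertures is consistent with diffraction alone.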
  10. Well, I think the example of the 50mm and 33mm lenses giving the same view on the two different cameras gets me back to my original thought. So in this example, the fact that the D810 has lower pixel density than the D7200 doesn't prevent it from having better resolution? That's the part that's counterintuitive to my boggled brain. ;)
  11. "This is why a 40-50 MP sensor can get by without an AA filter, but the filter is essential at 12 MP."

    More malarkey, Ed. When does it end? Tell us how Fujifilm's 16-24 MP X-Trans sensors and Nikon's 24 MP APS-C sensors somehow manage to produce great images without an AA filter?
  12. Often, overcompensation. Or, a desire to hone debating skills. :rolleyes: NOT ALWAYS, of course.
    Reading through forum threads, my answer would be: obviously not.

    The trees sometimes obscure the view of the forest, even though the detailed texture of their bark can be astonishing at times.
    Nick D. likes this.
  13. I’m not sure the term photography technically applies after the image is captured.
    ..... ;) ;) ;)
  14. I'm sure the Fuji and Nikon APS-C sensors do a fine job for most applications. That wasn't my point. There are differences due to physics. If you find errors in my analysis, feel free to point them out. "Malarkey" is the exclusive domain of our elected (or hopeful) officials.

    I picked two extremes. I don't believe any 12 MP camera is without an AA filter, and I have no examples of Moire from one. However, it is a serious issue with a 16 MP Hasselblad (large pixels, no AA filter), and it can't be easily removed without creating equally annoying artifacts. My Leica M9 (18 MP) does not have a filter, yet I've never found Moire to be a problem. The higher the resolution, the smaller and less obtrusive any Moire. A lot depends on your subject matter. Moire only occurs when you have fine, repetitive details comparable to the spacing of cells on the sensor. Things like fences, railings, corrugated iron and fabric are prime offenders. None of these conditions is likely to occur in nature, at least without the hand of man. Diffraction is real and ubiquitous, but largely invisible unless you magnify the image to the point where individual pixels are easily distinguished. At that point, you can see it even with the lens wide open. If you stop down to f/11 or smaller, you see it as a loss of crispness and detail in the image as a whole.

    If you want to see how your favorite camera behaves, check the test results at www.DPReview.com. Their test bench includes several resolution charts in addition to more mundane objects, and you can do a side-by-side comparison between two cameras at once.
  15. I said it gives less resolution and yet higher image quality, providing more detail. I don't know what word to use to describe the amount of detail in the whole frame. Resolution is the amount of detail per unit length or area of the sensor. I would say the D810 has 36 MP, but that doesn't mean it has the resolution of 36 MP; I don't call that the resolution. But whatever, it's only terminology and not science. The science is simple.
    Look at a typical cell phone with a 1920x1200 screen (Samsung, not Apple): it has a resolution of more than 400 ppi. Yet compared to a 4K display it can show less detail. But a 60" 4K display would be nowhere near that high a resolution in ppi.
    fotolopithecus likes this.
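
    A quick ppi calculation for that display comparison, assuming a 5.5" phone panel and a 60" 4K television (the screen sizes are assumptions, not from the post):

        import math

        def ppi(h_pixels, v_pixels, diagonal_in):
            # pixels per inch along the panel diagonal
            return math.hypot(h_pixels, v_pixels) / diagonal_in

        phone = ppi(1920, 1200, 5.5)    # ~5.5" phone screen (assumed size)
        tv_4k = ppi(3840, 2160, 60.0)   # 60" 4K display

        print(f"phone: ~{phone:.0f} ppi, {1920 * 1200 / 1e6:.1f} MP total")
        print(f"4K TV: ~{tv_4k:.0f} ppi, {3840 * 2160 / 1e6:.1f} MP total")
        # The phone wins on pixels per inch (~412 vs ~73), but the 4K panel
        # holds about 3.6x as many pixels overall, so it can show more total detail.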
  16. Yes, that's exactly what I've been trying to get at, but apparently I haven't expressed myself in a way people can understand. So if your statement is correct, then I get it, and it makes sense to me.
  17. The finest detail you can register is the width of two pixels, the minimum required to distinguish an edge from a solid area. The closer together the pixels are, the finer the detail that can be registered.

    Projecting an image on a sensor requires a lens, and in many cases the lens is the limiting factor for resolution. If the subject is the same size on both sensors, you have to crop the print or image from the larger sensor to display the subject at the same size. Cropping from full frame to APS-C cuts the linear resolution by a factor of about 1.5 and the number of pixels by a factor of about 2.25, to less than half. I'm not sure that has any practical application, at least on a routine basis.
    Last edited: Mar 5, 2018
    fotolopithecus likes this.
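
    A short sketch of that crop arithmetic, assuming a 24 MP full-frame sensor of 6000x4000 pixels (an illustrative figure, not from the post) and the two-pixels-per-line-pair (Nyquist) limit:

        # Cropping full frame to APS-C: linear resolution drops by the crop
        # factor, pixel count by its square.
        full_frame = (6000, 4000)   # assumed 24 MP full-frame sensor
        crop_factor = 1.5
        cropped = (full_frame[0] / crop_factor, full_frame[1] / crop_factor)

        for name, (w, h) in [("full frame", full_frame), ("APS-C crop", cropped)]:
            nyquist_lp = h / 2   # line pairs per picture height: two pixels per pair
            print(f"{name}: ~{w * h / 1e6:.1f} MP, Nyquist limit ~{nyquist_lp:.0f} lp/ph")
        # Linear resolution falls by 1.5x; pixel count falls by 2.25x,
        # i.e. to a bit under half.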
  18. After reading all of the above posts, I am glad that I don't care about the definition or physics of resolution. It's very similar to my view on gravity: I know it exists but don't understand the physics of it at all, and don't really care. When I buy a new lens or camera (not often), I do some research and get what appears to have the qualities I want, and it's never the sharpest lens or the camera with the most megapixels. Keeping it simple and making the best of the equipment I have is my goal.
  19. Think you got caught out over-generalizing and BS-ing again about gear you've never shot.
  20. The fact that it takes at least two pixels (or two lines on film) to describe an edge is the reason resolution is often given as "line pairs per inch" (lp/in or lppi). It is numerically 1/2 the traditional value of "lines per inch" (lpi). Because they are based on the same type of measurement, they can be converted back and forth, as long as you state the units. If resolution is limited by some other factor, like diffraction or the lens, the transition from dark to light may occur over two or three pixels. Film is also subject to uncertainty independent of the optics, including scattering of light in the emulsion (halation), varying grain sizes of silver halide (used to improve the dynamic range), and the development process (e.g., chemical diffusion).

    These effects can be easily seen if you magnify a digital image until the individual pixels are visible, or use a 10x or 20x magnifier on film. In the lab, resolution is determined by measuring the contrast between repeated light and dark lines at different spacings. The results (MTF) can be plotted in many ways, usually contrast vs. distance from the center of the image at selected spatial frequencies, or contrast vs. frequency at one position in the image.
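
    A tiny unit-conversion sketch for those resolution figures, using an assumed example of 4000 pixels per inch on the sensor:

        # One line pair = one dark line plus one light line = two pixels minimum.
        pixels_per_inch = 4000                      # assumed example value

        lines_per_inch = pixels_per_inch            # the traditional "lines" figure
        line_pairs_per_inch = pixels_per_inch / 2   # two pixels per line pair

        print(f"{lines_per_inch} lines/inch = {line_pairs_per_inch:.0f} lp/inch")
        # 4000 lines/inch and 2000 lp/inch describe the same limit; the factor
        # of two lives entirely in the unit.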
