Who can explain the relationship.....



Well, I may be wrong, but intuitively it seems like smaller pixels would render fine detail better. I think you've answered the question, though, which is that it should be the same.

The smaller pixels would resolve finer detail if the projected image on the sensor were also the same size. For the same view, the FF lens and camera present larger details to the sensor, so the sensor doesn't have to resolve as finely in order to produce the same amount of detail across the entire frame.

So if you use the D810 with a 50mm lens and the D7200 with a 33mm lens, you have the same view, and you can see more detail in the D810 image. But if you use the same 33mm lens on the D810 and cut out a portion the same size as the D7200 frame, it would have less detail than the D7200 image. It's a simple thing, but what I am really lacking is a term for what I called earlier "Definition".
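For anyone who wants to see the arithmetic behind that, here is a rough sketch in Python. The pixel counts and sensor dimensions are the commonly quoted figures for these bodies and are only approximate, and the variable names are just mine.

# D810: ~7360x4912 px on a ~35.9x24.0 mm sensor; D7200: ~6000x4000 px
# on a ~23.5x15.6 mm sensor (approximate published specs).
d810_px_w, d810_mm_w = 7360, 35.9
d7200_px_w, d7200_px_h = 6000, 4000
crop_mm_w, crop_mm_h = 23.5, 15.6              # a DX-sized crop of the FF frame

pitch_mm = d810_mm_w / d810_px_w               # D810 pixel pitch, ~0.0049 mm
crop_megapixels = (crop_mm_w / pitch_mm) * (crop_mm_h / pitch_mm) / 1e6
print(round(crop_megapixels, 1))               # ~15.4 MP left in the DX-sized crop
print(d7200_px_w * d7200_px_h / 1e6)           # 24.0 MP on the D7200

So the cropped D810 frame keeps only about 15 MP over the same field of view that the D7200 covers with 24 MP, which is the point above.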


When you keep the playing field level (same subject framing, and image quality not limited by the lens), the two 16-megapixel sensors should both produce the same image detail.

They should be the same if we ignore external factors like an AA filter and the lens quality. 16 MP with the same framing will have the same number of pixels when viewed at any size. In theory, edges could be defined over a span of one pixel, so the pixel spacing would determine the ultimate limit of resolution for that sensor.
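A small sketch of that idea, assuming two idealized 16 MP sensors (3:2 aspect ratio, no AA filter, no lens losses); the function name and dimensions below are only illustrative:

import math

def sensor_limits(mp, width_mm):
    px_w = math.sqrt(mp * 1e6 * 3 / 2)            # pixels along the long edge (3:2 aspect)
    pitch_mm = width_mm / px_w                    # pixel spacing
    nyquist_lp_mm = 1 / (2 * pitch_mm)            # finest resolvable line pairs per mm
    lp_per_frame = nyquist_lp_mm * width_mm       # line pairs across the whole frame width
    return round(pitch_mm * 1000, 1), round(nyquist_lp_mm), round(lp_per_frame)

print(sensor_limits(16, 36.0))   # FF:    ~7.3 um pitch, ~68 lp/mm, ~2450 lp per frame width
print(sensor_limits(16, 23.5))   # APS-C: ~4.8 um pitch, ~104 lp/mm, ~2450 lp per frame width

The smaller sensor resolves more line pairs per millimetre, but both frames hold the same number of line pairs, which is why the final image detail comes out the same.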

 

In practice, it's a little more complicated. In the simplest case, not all lines (edges) in the subject are parallel to the rows or columns of the sensor array. That introduces a degree of uncertainty as to whether a given pixel is exposed or not. At best the pixels will alternate, resulting in what is described as "staircasing." Resolution therefore depends on the orientation of the subject.

 

Another phenomenon is "aliasing." When repeated details in the subject are projected onto the focal plane at a spacing finer than the pixel spacing, a pseudo-resolution appears when that detail spacing is an integral fraction of the cell spacing. If it is a color image, colored bands appear, described as Moire patterns. This effect depends on the absolute spacing, so an image from a small sensor must be enlarged more, along with any Moire patterns. Oddly, this is more likely to occur with lenses having significantly better resolution than the sensor; lesser lenses blur the aliasing, much as an AA filter does. If two sensors have the same pixel spacing, and one is larger (more pixels), the larger one will exhibit less Moire for the same subject. This is why a 40-50 MP sensor can get by without an AA filter, but the filter is essential at 12 MP.
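Here's a tiny numerical sketch of that pseudo-resolution effect (plain NumPy, no particular camera assumed): a repeating detail slightly finer than the pixel spacing shows up, after sampling, as a much coarser false pattern.

import numpy as np

pixel_pitch = 1.0                       # arbitrary units
detail_period = 0.9 * pixel_pitch       # subject detail finer than the pixel spacing
x = np.arange(200) * pixel_pitch        # pixel centers
samples = np.sin(2 * np.pi * x / detail_period)

# The sampled data repeats with a false period of 1 / |1/0.9 - 1/1.0| = 9 pixels.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=pixel_pitch)
print(round(1 / freqs[spectrum.argmax()], 1))   # ~9: the aliased (Moire) period in pixels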

 

There are other factors at play which depend on scale. Diffraction is proportional to the relative aperture, so the effect is greater on a smaller sensor at the same resolution. The cells are not perfectly isolated from each other, in part because of the micro-lens array used to improve efficiency as the angle of incidence increases. The Bayer filter contributes as well, along with its alignment with the cells. No optical surface is perfect, so the filter stack causes some scattering. Again, the effect is inversely proportional to scale.
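To put a number on the diffraction part: the classic Airy-disk estimate is diameter ≈ 2.44 x wavelength x f-number. A quick sketch, assuming green light and nothing camera-specific:

wavelength_um = 0.55                            # green light, in microns
def airy_diameter_um(f_number):
    return 2.44 * wavelength_um * f_number      # first-null diameter of the Airy disk

for n in (2.8, 5.6, 8, 11, 16):
    print(f"f/{n}: {airy_diameter_um(n):.1f} um")
# f/8 is already ~10.7 um, bigger than a ~4.8 um APS-C pixel;
# f/16 is ~21.5 um, soft even on ~7.3 um full-frame pixels.

Since the smaller sensor has to be enlarged more for the same print, the same absolute blur costs it more resolution, which is the scale effect described above.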

 

Stars are a point of light (~0.02 arc-seconds) and should be captured on a single pixel. In practice, with a first-rate lens and a full-frame sensor, star images I've made are three pixels wide, with a strong center and two weak sidecars. This is likely due to diffraction, but scattering is possible. Elongation due to the earth's rotation is a separate issue.


The smaller pixels would resolve finer detail if the projected image on the sensor were also the same size. For the same view, the FF lens and camera present larger details to the sensor, so the sensor doesn't have to resolve as finely in order to produce the same amount of detail across the entire frame.

So if you use the D810 with a 50mm lens and the D7200 with a 33mm lens, you have the same view, and you can see more detail in the D810 image. But if you use the same 33mm lens on the D810 and cut out a portion the same size as the D7200 frame, it would have less detail than the D7200 image. It's a simple thing, but what I am really lacking is a term for what I called earlier "Definition".

 

Well, I think the example of the 50mm and 33mm lenses giving the same view on the two different cameras gets me back to my original thought. So in this example, the fact that the D810 has lower pixel density than the D7200 doesn't prevent it from having better resolution? That's the part that's counterintuitive to my boggled brain. ;)


This is why a 40-50 MP sensor can get by without an AA filter, but the filter is essential at 12 MP.

More malarkey, Ed. When does it end? Tell us how Fujifilm's 16-24mp X-Trans sensor and Nikon's 24mp APS-C sensors somehow manage to produce great images without an AA filter?


Seems like overkill

Often, overcompensation. Or, a desire to hone debating skills. :rolleyes: NOT ALWAYS, of course.

isn’t photography about making pictures?

Reading through forum threads, my answer would be, obviously not.

 

The trees sometimes obscure the view of the forest, even though the detailed texture of their bark can be astonishing at times.


More malarkey, Ed. When does it end? Tell us how Fujifilm's 16-24mp X-Trans sensor and Nikon's 24mp APS-C sensors somehow manage to produce great images without an AA filter?

I'm sure the Fuji and Nikon APS-C sensors do a fine job for most applications. That wasn't my point. There are differences due to physics. If you find errors in my analysis, feel free to point them out. "Malarky" is the exclusive domain of our elected (or hopeful) officials.

 

I picked two extremes. I don't believe any 12 MP camera is without an AA filter, and have no examples of Moire. However it is a serious issue with a 16 MP Hasselblad (large pixels, no AA), and can't be easily removed without creating equally annoying artifacts. My Leica M9 (18 MP) does not have a filter, yet I've never found Moire to be a problem. The higher the resolution, the smaller and less obtrusive any Moire. A lot depends on your subject matter. Moire only occurs when you have fine, repetitive details comparable to the spacing of cells on the sensor. Things like fences, railings, corrugated iron and fabric are prime offenders. None of these conditions are likely to occur in nature, at least without the hand of man. Diffraction is real and ubiquitous, but largely invisible unless you magnify the image to where individual pixels are easily distinguished. At that point, you can see it with the lens wide open. If you stop down to f/11 or smaller, you see it as a loss of crispness and detail in the image as a whole.
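As a rough cross-check on the f/11 remark, you can invert the same Airy-disk estimate to ask when the diffraction blur spans one, two, or three pixels. Using a ~4.9 um pitch (D810-class) purely as an illustration:

pitch_um, wavelength_um = 4.9, 0.55
for pixels_spanned in (1, 2, 3):
    f_number = pixels_spanned * pitch_um / (2.44 * wavelength_um)
    print(pixels_spanned, round(f_number, 1))   # ~f/3.7, ~f/7.3, ~f/11

By around f/11 the blur covers roughly three pixels, which is about where the overall loss of crispness starts to become visible, as noted above.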

 

If you want to see how your favorite camera behaves, check the test results at www.DPReview.com. Their bench has several resolution test charts in addition to more mundane objects, and you can do a side by side comparison between two cameras at once.


Well, I think the example of the 50mm and 33mm lenses giving the same view on the two different cameras gets me back to my original thought. So in this example, the fact that the D810 has lower pixel density than the D7200 doesn't prevent it from having better resolution? That's the part that's counterintuitive to my boggled brain. ;)

 

I said it gives less resolution yet higher image quality, providing more detail. I don't know what word to use to describe the amount of detail in the whole frame. Resolution is the amount of detail per unit length or area of the sensor. I would say the D810 has 36MP but it doesn't have the resolution of 36MP, so I don't call that resolution. But whatever, it's only terminology and not the science. The science is simple.

Look at a typical cell phone with a 1920x1200 display (Samsung, not Apple): it has a resolution of more than 400 ppi. Yet compared to a 4K display it can show less detail. And a 60" 4K display wouldn't have anywhere near that high a pixel density.
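The arithmetic, assuming a ~5.5 inch phone screen (the exact screen size wasn't given, so that part is a guess):

import math

def ppi(px_w, px_h, diagonal_in):
    return math.hypot(px_w, px_h) / diagonal_in   # pixels along the diagonal / inches

print(round(ppi(1920, 1200, 5.5)))   # ~412 ppi on the phone
print(round(ppi(3840, 2160, 60)))    # ~73 ppi on a 60" 4K TV

The phone has far more pixels per inch, yet the 4K panel, with more total pixels, shows more detail overall.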


The smaller pixels would resolve finer detail if the projected image on the sensor were also the same size.

Yes that's exactly what I've been trying to get at, but apparently haven't expressed myself in a way people can understand. So if your statement is correct, then I get it, and it makes sense to me.


The finest detail you can register is the width of two pixels, the minimum required to distinguish an edge from a solid area. The closer the pixels are together, the finer the detail that can be registered.

 

Projecting an image on a sensor requires a lens. In many cases, the lens is the limiting factor for resolution. If the subject is the same size on both sensors, you have to crop the print or image from the larger sensor to display the subject at the same size. Cropping from full frame to APS-C cuts the linear resolution by a factor of 1.5 (or so), and the number of pixels by a factor of about 2.25. I'm not sure that has any practical application, at least on a routine basis.
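The same numbers in a couple of lines of arithmetic, ignoring the exact sensor dimensions and treating the crop factor as exactly 1.5:

crop_factor = 1.5
area_factor = crop_factor ** 2        # 2.25x fewer pixels in the crop
print(36 / area_factor)               # a 36 MP FF frame leaves ~16 MP in a DX-sized crop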


After reading all of the above posts, I am glad that I don't care about the definition or physics of resolution. It's very similar to my view on gravity: I know it exists, but I don't understand the physics of it at all and don't really care. When I buy a new lens or camera (not often), I do some research and get what appears to have the qualities I want, and they are never the sharpest lens or the camera with the most megapixels. Keeping it simple and making the best of what equipment I have is my goal.

I'm sure the Fuji and Nikon APS-C sensors do a fine job for most applications. That wasn't my point. There are differences due to physics. If you find errors in my analysis, feel free to point them out. "Malarky" is the exclusive domain of our elected (or hopeful) officials.

 

I picked two extremes. I don't believe any 12 MP camera is without an AA filter, and have no examples of Moire. However it is a serious issue with a 16 MP Hasselblad (large pixels, no AA), and can't be easily removed without creating equally annoying artifacts. My Leica M9 (18 MP) does not have a filter, yet I've never found Moire to be a problem. The higher the resolution, the smaller and less obtrusive any Moire. A lot depends on your subject matter. Moire only occurs when you have fine, repetitive details comparable to the spacing of cells on the sensor. Things like fences, railings, corrugated iron and fabric are prime offenders. None of these conditions are likely to occur in nature, at least without the hand of man. Diffraction is real and ubiquitous, but largely invisible unless you magnify the image to where individual pixels are easily distinguished. At that point, you can see it with the lens wide open. If you stop down to f/11 or smaller, you see it as a loss of crispness and detail in the image as a whole.

 

If you want to see how your favorite camera behaves, check the test results at www.DPReview.com. Their bench has several resolution test charts in addition to more mundane objects, and you can do a side by side comparison between two cameras at once.

 

Think you got caught out over-generalizing and BS-ing again about gear you've never shot.


The fact that it takes at least two pixels (or two lines on film) to describe an edge is the reason resolution is often given as "line pairs per inch (lp/in or lppi)". It is numerically 1/2 the traditional value of "lines/inch (lpi)." Because they are based on the same type of measurement, they can be used interchangeably, as long as you state the units. If resolution is limited by some other factor, like diffraction or the lens, the transition from dark to light may occur over two or three pixels. Film is also subject to uncertainty independent of the optics, including scattering of light in the emulsion (halation), varying grain sizes of silver halide (in order to improve the dynamic range), and the development process (e.g., chemical diffusion).

 

These effects can be easily seen if you magnify a digital image until the individual pixels are visible, or use a 10x or 20x magnifier on film. In the lab, resolution is determined by measuring the contrast between repeated light and dark lines at different spacings. The results (MTF) can be plotted in many ways, usually contrast vs. distance from the center of the image at selected frequencies, or frequency vs. contrast at one position in the image.


A bit OT, but if I have a sensor A that is 4 times the area of sensor B, will camera A have 4 times the light gathering power of camera B, or twice the power?

 

I want to compare the Oly TG-5 sensor size ~ 28mm sq with the RX0 sensor ~ 117 mm sq at f/2 and f/4 respectively. The Oly is 12mp, the Sony 15mp. Ignore differences due to other factors.

 

edit: both are ~ 24mm (35mm equiv) focal lengths at their widest


A bit OT, but if I have a sensor A that is 4 times the area of sensor B, will camera A have 4 times the light gathering power of camera B, or twice the power?

 

Sensor A will have 4 times the light gathering power. But from your point of view there won't be any speed advantage, as the extra light gathered falls on areas outside those covered by the smaller sensor.

 

If this doesn't make sense, consider the same situation with a film camera. Imagine that it is a native 4x5" camera, but you have a roll-film back as well. The 4x5 films have (very roughly) 4 times the area of the roll film, so they can actually gather roughly 4 times the light. But obviously you don't change exposure settings, right?

 

I want to compare the Oly TG-5 sensor size ~ 28mm sq with the RX0 sensor ~ 117 mm sq at f/2 and f/4 respectively.

 

When you compare two different sized sensors, but make the fields of view match by using the appropriate focal lengths, the lens f-number is the great equaliser. At the same f-number, any focal length lens will deliver the same light energy to a given unit area on the sensor. (The physical aperture sizes will vary relative to the focal length so as to cancel out the image magnification effect.)
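Using the rough areas quoted earlier in the thread (28 mm sq at f/2 versus 117 mm sq at f/4), the total light collected scales as sensor area divided by the square of the f-number. A quick sketch:

def relative_light(sensor_area_mm2, f_number):
    # same scene, shutter speed and field of view assumed
    return sensor_area_mm2 / f_number ** 2

print(relative_light(28, 2.0))    # TG-5-ish at f/2: 7.0
print(relative_light(117, 4.0))   # RX0-ish at f/4:  ~7.3

So at those apertures the two combinations end up gathering nearly the same total light, which lines up with the f-number-as-equaliser point.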


If by "light gathering power" you mean baseline sensitivity, then the relative area of the light cell would be the determining factor, not the total size of the sensor. There's not a direct relationship, because the net sensitivity is affected by signal processing. Some designers trade sensitivity for low noise and bit depth (medium format sensors, for example). The cell size is not necessarily proportional to the sensor size, since the percent coverage can vary widely, from about 75% to over 85%. Backlit sensors have close to 100% coverage. The cell diameter is often listed in the camera's specifications. Another factor in cell design is its capacity for electrons (triggered by photons). In order to achieve 16-bit depth, the cell would have to hold at least 65,536 electrons. I've never seen that in specifications, but MF digital often cites 16-bit depth. Without careful noise control, those low bits would be lost in the mud.
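The bit-depth point in numbers (nothing sensor-specific, just powers of two):

for bits in (12, 14, 16):
    print(bits, 2 ** bits)    # 4096, 16384, 65536 distinguishable levels
# The cell has to hold (and the electronics resolve) at least that many
# electrons for the lowest bits to mean anything above the noise floor.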

 

I know that my digital Hasselblad can pull details out of shadows that were completely dark to my eye when I took the shot. The sensitivity is only ISO 50 to 400. The cells are 9 microns in diameter. A Sony A7S II (12 MP) has cells approximately 8 microns in diameter, and delivers ISO 100 to 409,600 (200,000 fairly cleanly). Choices!

 

Are the Oly and RX0 competing for your dollars? You can attach the RX0 to a mic stand and run it remotely from a smart phone, or put it on your hat when skiing downhill. I don't think MF would be appropriate in either case. MF would be better for product photography and portraits.


Are the Oly and RX0 competing for your dollars

They were. I've just looked at this site

 

Compare digital camera sensor sizes: 1″-Type, 4/3, APS-C, full frame 35mm

 

and it suggests the Oly has a crop factor of 6 whereas the Sony has a crop factor of 2.7, which suggests they are similar (12 vs 10.8), so I'll keep my Oly. But that Sony is so cute.


and it suggests the Oly has a crop factor of 6 whereas the Sony has a crop factor of 2.7, which suggests they are similar (12 vs 10.8), so I'll keep my Oly. But that Sony is so cute.

I'm sorry, but you said the Oly had a square, 28 mm sensor, which would put it at the far low end of medium format. In fact, it is a 1/2.3" sensor with a 5.5x cropping factor. The RX0 has a 1" sensor with a 3.1x cropping factor. Both sizes reflect an archaic video designation (related to the diameter of a vacuum tube video sensor). 4/3 is also misleading; in fact it is a 17.3x13 mm format with a crop factor of 2.0. The Olympus has a BSI (back illuminated) sensor, which is more efficient than the Sony, which has a conventional CMOS sensor.
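For what it's worth, the crop factor is just the ratio of sensor diagonals. A sketch using the usual nominal dimensions for these sensor types; published crop factors vary a little depending on the exact active area used, so treat the outputs as approximate:

import math

FF_DIAGONAL = math.hypot(36.0, 24.0)             # ~43.3 mm

def crop_factor(width_mm, height_mm):
    return FF_DIAGONAL / math.hypot(width_mm, height_mm)

print(round(crop_factor(6.17, 4.55), 2))   # 1/2.3"-type: ~5.6x
print(round(crop_factor(13.2, 8.8), 2))    # 1"-type:     ~2.7x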


I don’t know how to write “to the power of 2” on the internet :)

I looked this up to see if there were any keyboard shortcuts to accomplish this. I could only find things to do if one is in a specific word processing program like Word. But I did discover something new to me. Turns out, accepted convention among mathematicians and scientists (in other words, PN photographers:rolleyes:) is to use the caret symbol to indicate an exponent, to wit:

 

28mm^2

 

. . . would indicate 28mm squared.


I understand. The Olympus sensor has an AREA of 28 mm^2, or about 4.3 mm x 6.5 mm, and 12 MP. That corresponds to a 1/2.3" sensor. Each pixel has an area of approximately 2.3 microns^2. The Sony has a 1.0" sensor, which has an area of 90 mm^2 and 21 MP. Each pixel has an area of 4.2 microns^2. Even considering the benefit of a BSI sensor, the Olympus would have only a little over half the baseline sensitivity of the Sony. All else being equal, the same light exposure (intensity x time) would release nearly twice as many electrons in the Sony, compared to the Olympus. Of course, all else is NEVER equal.
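Those per-pixel areas follow directly from sensor area divided by pixel count; a quick check using the figures quoted in this post:

def pixel_area_um2(sensor_area_mm2, megapixels):
    return sensor_area_mm2 * 1e6 / (megapixels * 1e6)   # mm^2 -> um^2 per pixel

print(round(pixel_area_um2(28, 12), 1))   # ~2.3 um^2 per pixel (the Olympus)
print(round(pixel_area_um2(90, 21), 1))   # ~4.3 um^2 per pixel (close to the 4.2 quoted above)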

 

A sensor cell is basically a capacitor in which the charge is generated by exposure to light. The voltage (or incremental voltage) is equal to the charge (coulombs) divided by the capacitance (farads). Measuring that tiny voltage is where the fun begins.
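Purely as an illustration of that V = Q / C relationship (the full-well count and sense-node capacitance below are made-up round numbers, not any real sensor's specs):

ELECTRON_CHARGE = 1.602e-19                 # coulombs
electrons = 65536                           # full well needed for 16 true bits
capacitance = 5e-15                         # an illustrative few-femtofarad sense node

full_scale_volts = electrons * ELECTRON_CHARGE / capacitance   # V = Q / C
per_electron_uV = ELECTRON_CHARGE / capacitance * 1e6
print(round(full_scale_volts, 2), round(per_electron_uV, 1))   # ~2.1 V swing, ~32 uV per electron

Each captured electron nudges the cell voltage by only tens of microvolts, which is why reading it out cleanly is where the fun begins.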

