Posts posted by sven keil

  1. <p>It may not show up on the D70 because the D70 is not full frame, so the artifact may simply lie outside its image region. I had a quite similar artifact on a Contax T3 some time ago; it was replaced without problems. If you could take a shot with your lens on a D700, D3, D3x, or D3s (i.e., a full-frame digital body), you would be able to verify definitively whether the artifact is due to the lens or not.</p>
  2. <p><em>when you combine several images together, you are starting with more information to begin with, but post-processing is still a destructive process where you are eliminating information.</em></p>

    <p>Mathematically, it depends on whether the function used for post-processing is invertible. If it is, you won't lose anything. However, your statement is true in general, because many post-processing operations are not invertible; for example, adjusting levels (to make an image more punchy) usually implies thresholding operations. Similarly, I have never seen the inverse function of a noise-reduction algorithm (normally this is also of no interest - you want the algorithm to behave stably and to converge, so the focus is instead on finding the associated energy/Lyapunov function). But Shun, I guess you know these things anyway.<br>

    <em>"With the current photoshop version you cannot recover lost detail." I would say that is not entirely true.</em><br>

    I spoke mathematically, but your example about recovering shadows/highlights from JPEGs is a different problem, which is related to tone mapping: you want to map a, say, 14-bit image space (RAW file) to 8 bits (monitor/JPEG). If you do that using global operations (the same for each region in the image), you likely end up with a many-to-one (non-injective) mapping: shadow detail is mapped to 0 in the JPEG, so you see only black; analogously for highlights. Of course, and there you are right, mapping 14 bits to 8 bits and saving only the latter implies loss of information (even in Shannon's strict sense!).<br>


    <em>That is not the sort of thing that the target user of the 18-200 is going to be able or willing to do.</em><br /> Sure not ;-)<br /> <br />I do not use an 18-200 lens, precisely because of the mentioned trade-offs in optical quality associated with a super-zoom. However, I have an 18-55 kit lens, which at 35mm has optical performance similar to my AiS 35mm/f2 at f3.5 (I did that test a long time ago with a D70s). I don't like it on the long end though, because it is very soft.</p>
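The many-to-one collapse of a global mapping can be sketched in a few lines of Python (the values are illustrative, not from any real camera):

```python
import numpy as np

# Five hypothetical 14-bit raw values (range 0..16383), two of them
# distinct shadow tones.
raw = np.array([10, 200, 4096, 12000, 16383], dtype=np.uint16)

# Global linear mapping to 8 bits: the identical operation for every pixel.
jpeg = (raw.astype(np.float64) / 16383 * 255).astype(np.uint8)

print(jpeg.tolist())  # [0, 3, 63, 186, 255]
```

With this mapping, every raw value from 0 through 64 lands on JPEG value 0, so shadow detail in that range is irrecoverably gone once only the 8-bit file is kept.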

  3. <p>With the current Photoshop version you cannot recover lost detail. But I know two image-processing methods which can return a higher resolution than the original. Again, there is no free lunch. (i) <em>Superresolution</em> --> you need several pictures of the same subject, which ideally have a slight jitter of around one pixel (across pictures). (ii) <em>Deblurring</em> --> you need to know the point spread function (PSF) of the lens. With a zoom, the PSF is a function of focal length, so the only way to do that is to measure the PSF before/after taking the photo. And perhaps the biggest disadvantage is that you need to be versed in mathematics (optimization theory, variational calculus, signal processing). Just take a look at the corresponding IEEE journals if you are interested. Perhaps in the future, superresolution and deblurring algorithms will be standard in every camera.</p>
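The shift-and-add idea behind superresolution can be illustrated with a toy example (the scene and the exact one-pixel jitter are synthetic assumptions; real algorithms must estimate the shifts and cope with noise):

```python
import numpy as np

# Toy shift-and-add superresolution: four low-res frames of the same scene,
# each offset by one sub-pixel step, interleaved back onto a 2x finer grid.
hi_res = np.arange(64, dtype=np.float64).reshape(8, 8)  # the "true" scene

frames = {}
for dy in (0, 1):
    for dx in (0, 1):
        # Each low-res frame samples every second high-res pixel,
        # starting at a different sub-pixel offset (the "jitter").
        frames[(dy, dx)] = hi_res[dy::2, dx::2]

# Reconstruction: interleave the frames back onto the fine grid.
rec = np.zeros_like(hi_res)
for (dy, dx), f in frames.items():
    rec[dy::2, dx::2] = f

print(np.array_equal(rec, hi_res))  # True in this noiseless toy case
```

In the noiseless case with exactly known shifts the recovery is perfect; with noise and unknown shifts, the optimization machinery mentioned above takes over.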
  4. <p>Dear all,<br /> Some comments on the posts:<br /> 1) Current evidence suggests that the retina has at least three mechanisms for adaptation to a scene's luminance range (not <em>brightness</em>; brightness means <em>perceived</em> luminance). The first operates at the photoreceptor level ("global tone curve adjustment"), another at the network level, and a third (which functions similarly to<a href="../nikon-camera-forum/00R3bF?start=20"> D-Lighting</a>) at the horizontal-bipolar cell level, with feedback to photoreceptors. Inner retinal adaptation is thought to reduce redundancy ("edge enhancement").<br /> 2) Lowpass filtering for adjusting luminance levels leads to well-known artifacts (see <a href="http://dragon.larc.nasa.gov/retinex/consumer/consumer.html">RETINEX</a>). State-of-the-art algorithms make use of, for example, anisotropic diffusion or erosion/dilation operators (morphological ops).<br /> 3) D-Lighting does <em>not</em> adjust the tone curve globally (which would just be tantamount to contrast reduction). It uses morphological image-processing operations to define regions, and an individual tone curve is computed for each region.<br /> 4) Have fun.</p>
  5. <p>Currently I am living in Barcelona. As said, watch out on the Ramblas, at the metro entrances/exits at the seaside, and inside the metro as well. It is not a good idea to visit the small streets of the Barrio Gotico and El Raval after 22:00, although the main streets should normally be safe. As a general rule, avoid streets where nobody else is walking but you.<br /><br />A common trick in the metro is that someone reads a newspaper right in front of you, and you cannot see what is happening under it with your wallet or your camera bag. <br /><br />Sometimes they use a cutter to open your backpack while you are walking or standing. Others just run up behind you and take your cell phone or camera (or whatever hangs loose) on the fly.<br /><br />Don't let people talk to you - what would a local want from you? Just don't get into conversations; walk on or away. Even I do that. In the metro, there is another annoyance en vogue at the moment: people pass through the barrier with you - sometimes pushing you from behind - without validating a ticket. And if you use the metro/bus/FGC a lot, then the T10 1-zone ticket will be the best option.<br /><br />Ok - lens choice... I never took anything other than a 35mm/f1.4 or a 50mm with me. For me, sufficient for Barcelona.<br /><br />Apart from the usual sightseeing (Sagrada Familia, La Pedrera, Plaza Espanya y Montjuic, La Boqueria), you may get to the PEU DE FUNICULAR station (FGC S1/S2 from Plaza Catalunya to Terrassa/Sabadell), get out, change onto the funicular, and get off at the middle station, carretera de les aigues. Take the path to the left and walk for one or two hours, and you have a nice view across the city. Only recommended in good weather. The faster option is to visit Tibidabo, where you have similar views and do not have to hike.<br /><br />For the 29th of this month a general strike is planned.<br /><br /><br />Ok - have fun and enjoy.</p><div>00XLDK-283297584.thumb.jpg.ac788081cd279aeddccb3d20983f8b0c.jpg</div>
  6. <p>D-Lighting is not just simple global curve adjustment. <a href="../nikon-camera-forum/00R3bF"> Copy-pasted my answer from an earlier discussion on D-Lighting:</a><br /> Nikon D-Lighting is based on a patented method for dynamic range compression. Nikon licensed the patent from V. Chesnokov (WO 02/089060, <a rel="nofollow" href="http://www.wikipatents.com/gb/2417381.html" target="_blank">http://www.wikipatents.com/gb/2417381.html</a>). The problem of dynamic range compression is to map an input of, say, 14 bits (input range) to a much smaller range of, say, 8 bits (output range), thereby doing better than simply clipping values which exceed 8 bits. If you do it by manipulating curves, you apply the same operation to each pixel (global adjustment). D-Lighting, on the other hand, identifies regions and applies different curves to these regions (local adjustment). Regions may be found by algorithms such as anisotropic diffusion.<br /> Nikon D-Lighting does an area-based dynamic range compression (citing from the patent):<br /> A method of image processing comprising altering an input image using a non-linear image transform to generate an output image, the process comprising correcting an image on an area-by-area basis to generate an output image intensity (I') of an area which is different from an input image intensity (I) of the area, the output image intensity (I') of an area being related to the input image intensity (I) of the area by the ratio: amplification coefficient = I'/I, wherein the image processing method produces an output image in which the amplification coefficient of a given area is varied in dependence upon the amplification coefficient of at least one neighbouring area, in order that, in at least part of the image, the local contrast of the input image is at least partially preserved in the output image.<br /> This is not simply a histogram modification, as that would be a global operation. D-Lighting is more local, that is, region-based. However, there now exist algorithms which give more pleasing results than D-Lighting.<br /> Another thing: two versions of D-Lighting exist. One acts on the sensor's full dynamic range (14 bits for the D300, D700, D3); this should be Active D-Lighting. The other is a post-sensor version, which acts on a smaller dynamic range.</p>
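A minimal sketch of the area-based idea (the block size, gain law, and neighbour smoothing below are my own illustrative choices, not Nikon's actual algorithm):

```python
import numpy as np

# Area-based dynamic range compression in the spirit of the patent's
# "amplification coefficient" I'/I: dark areas get a larger gain than
# bright areas, and gains are smoothed so neighbouring areas agree,
# which partially preserves local contrast across block borders.
img = np.clip(np.random.default_rng(0).gamma(2.0, 40.0, (64, 64)), 1, 255)

B = 16  # block ("area") size - an illustrative choice
means = img.reshape(64 // B, B, 64 // B, B).mean(axis=(1, 3))  # per-area mean

target = 128.0
gain = np.clip(target / means, 0.5, 4.0)  # amplification coefficient per area

# Smooth gains across neighbouring areas (simple 3x3 box average).
pad = np.pad(gain, 1, mode="edge")
gain = sum(pad[i:i + 4, j:j + 4]
           for i in range(3) for j in range(3)) / 9.0

# Upsample gains to pixel resolution and apply: I' = gain * I
gain_full = np.kron(gain, np.ones((B, B)))
out = np.clip(img * gain_full, 0, 255)
print(out.shape)  # (64, 64)
```

Because each area's gain depends on its own mean but is averaged with its neighbours', the result is neither a global curve nor a hard per-block correction.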
  7. <p>Dear Gunnar,<br /> I have experienced similar issues now and then, with Windows XP + NX2 (!). However, in my case NX2 says that saving failed, and what I usually do is save to another directory (typically upward in the hierarchy), which usually works. Another workaround I remember was to set the properties of the respective folder and all contained files to read and write. But I do not have the slightest idea why NX2 shows such weird behavior, and even less why NX does.</p>

    <p>Best,</p>

    <p>MS</p>

  8. <p>Are you really seriously considering spending the nights editing photos while on a (recreational?) trip? A memory card provides more security than a laptop - just drop both and guess which of them will still work. Also, you may stay in a hotel where you don't know whether your laptop is safe, and carrying it around all the time increases the risk of failure. 32GB would be enough for me for 10 days - just get another 32GB card beforehand (maybe the same model which you already know well, or from a trusted brand, e.g. SanDisk or Kingston), and you have more than sufficient space to shoot everything in RAW. This is a better solution than buying another netbook.<br>

    At the end of each day, just review your daily photos on the camera display and erase the worst shots. If you are in doubt, just keep the shot and decide at home.<br>

    BTW, as already mentioned here, memory cards are the safest way of storing information. But buy them from a trusted brand.</p>

  9. <p>Dear Wouter,<br>

    This wasn't aimed at you! However, such statements are often heard in this context, and at least from a technical viewpoint they stand on shaky ground. Personal preferences, however, cannot be measured objectively on a technical scale ;-)<br>

    Best,<br>

    MS</p>

  10. <p>Frankly, I do not understand statements such as "50mm on DX is not right for anything". Why not? You get a 75mm equivalent on DX, while 85mm is considered standard for portraits. So the only thing one would have to do is move a little closer to the subject, if one adhered to standards. And 10mm of focal length shouldn't make a big difference in the result.<br /> Another question is of course bokeh (in this respect, I would prefer the 85mm/f1.4 on FX over the 50mm/f** on DX).<br>

    As for me, I have used this "not right" 50mm/f**+DX combo for situations ranging from portraits, through architecture, to landscapes. On the other hand, I have used an 85mm to portray landscapes.<br>

    In the end everything just boils down to the photographer's creativity. For this reason I usually do not use zoom lenses (I have only one, which once upon a time came attached to the camera).<br /> Perhaps a little off topic, but anyway.<br /> Best,</p>

    <p>MS</p>

  11. <p>Is it really necessary to put <em>VR</em> in a non-tele prime...? Well, I would like to have a <strong>300mm/f2.8</strong> with <em>VR</em> (FX), and if they remake the 85mm/f1.4, then make it an <strong>85mm/f1.2</strong> (but without <em>VR</em>). Also, some sort of <strong>Noct-Nikkor around 50mm with f1.2</strong> or faster. For me it is not so important whether they are screwdriver-driven or have a built-in AF-S motor. For me, optical quality at wide apertures is decisive, as is weight.<br>

    Of course I would not re-buy any lenses that I already have. For example, my AF-D 180mm/f2.8 is really excellent, but I like it more on DX than on FX. Thus, for FX I would like to see some equivalent around 300mm, and with <em>VR</em>.<br>

    Best,<br>

    MS</p>

  12. <p>Well, I guess that my style of photographing would not depend critically on a Leica versus a Nikon (I mostly use primes, and a great part of them are manual focus). True, I could get a better photograph from the Leica 35mm/f1.4 at f1.4 (if it exists...) than from Nikon's equivalent lens. However, with the D700 I can double the ISO value and take the photo at f2, with only a little more noise. So, when shooting RAW and using, e.g., Lightroom for conversion (i.e., the same software), I guess that any difference in quality would get smaller. In short, a digital Leica has more high-ISO noise but better lenses, and the D700 has worse lenses but better high-ISO performance. In the end, it boils down to individual preferences, given that you do not have to count your money. (A little off topic, though.)</p>
  13. The only limit is banding. The methods for noise reduction continuously improve, and if you still have your RAW files from some years ago, you will see a notable improvement in quality upon reprocessing them with current software. For that reason I still shoot my D70s, which can also be "pushed" to 3200 (by setting EV to -1). If banding does not occur, you can get some acceptable results after processing with modern software (leaving luminance noise alone, reducing only chroma).
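The chroma-only approach can be sketched as follows (the BT.601 luma/chroma split is standard; the box blur is only a stand-in for a real chroma denoiser):

```python
import numpy as np

# Chroma-only noise reduction: convert RGB to luma + two chroma channels,
# smooth only the chroma, and convert back. Luminance detail is untouched.
rgb = np.random.default_rng(1).uniform(0.0, 1.0, (32, 32, 3))

# BT.601 luma/chroma split
y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
cb = (rgb[..., 2] - y) * 0.564
cr = (rgb[..., 0] - y) * 0.713

def box_blur(c, k=3):
    """Average each pixel over a k x k neighbourhood (edge-padded)."""
    p = k // 2
    cp = np.pad(c, p, mode="edge")
    return sum(cp[i:i + c.shape[0], j:j + c.shape[1]]
               for i in range(k) for j in range(k)) / (k * k)

cb_s, cr_s = box_blur(cb), box_blur(cr)

# Back to RGB; by construction the recomputed luma equals the original y.
r = y + cr_s / 0.713
b = y + cb_s / 0.564
g = (y - 0.299 * r - 0.114 * b) / 0.587
den = np.stack([r, g, b], axis=-1)
print(den.shape)  # (32, 32, 3)
```

Because only Cb/Cr are smoothed, colour blotches are suppressed while the luminance channel, which carries most of the perceived detail, is left alone.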
  14. <p><strong>John</strong>, at least in digital, this also depends on the color space which you are using to represent your image. Different spaces have different volumes, and these volumes are indicative of the number of colors you can ultimately represent.<br>

    <strong>Shun</strong>, at this resolution level it is impossible to tell film and digital apart. The dynamic range (DR) you get from a Nikon D3s or a Fuji S5 should be comparable to film. In order to take full advantage of digital DR, however, you need an adequate tone-mapping algorithm, which reduces DR for display (otherwise you blow out highlights and/or get shadows without detail). As soon as film is scanned, it is constrained in a similar way: you again need a tone-mapping procedure in order to display the full color range and DR, respectively, of your negatives. Until high-DR displays are widespread, we depend on good tone-mapping procedures.</p>

  15. <p>I just saw a German store offering it for 2,149.00 euros. The old 28mm/f1.4 will likely drop in value if the new one is significantly better optically, because previously there simply was no alternative to it. Now, with an optically better (likely...) lens soon available, many people will buy the newer one, leaving the older one interesting only to a reduced group of collectors. And less demand means lower prices.</p>
  16. <p>Hi,<br /> (If this was already commented on, I apologise.)<br /> You can find some samples of the new 24mm/f1.4 with the D3s here (dpreview.com):<br /> <em><strong>PMA 2010:</strong> Sample images from Nikon's 24mm F1.4 AF-S wide-angle prime lens. We're not sure if these are the world's first independent sample shots with this lens but, given how hard it was to wrestle it away from the guys on the Nikon stand at the PMA Sneak Peek event, there can't be many others.</em> <br /> http://www.dpreview.com/galleries/reviewsamples/albums/nikon-af-s-24mm-f1-4-g-preview-samples/slideshow</p>

    <p>In the future, I would be interested in knowing how it compares to the good old<em> AiS 35mm/f1.4, </em> especially when stopped down. Wide open, the <em>24mm F1.4 AF-S </em> seems to have more contrast/less flare.<br>

    <br /> Best,<br /> MS Keil</p>

  17. <p>As to physics, you can optimise the microlens array with the aim of reducing the spacing between lenses. But still, bigger pixels capture more photons at a time and, as a consequence, have a higher signal-to-noise ratio. Downsampling ("downsizing") amounts to low-pass filtering: you throw away the highest spatial frequencies. As noise is typically located at these highest spatial frequencies, and if the noise is not correlated in space, you might get a better SNR. Still, however, you will be better off with bigger pixels from the beginning, because not all noise has an ideal Gaussian distribution, and larger spatial correlations will still be present (e.g. color blobs).</p>
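The uncorrelated-noise case can be checked numerically in a few lines (purely synthetic white noise; real sensor noise is only approximately like this):

```python
import numpy as np

# Downsampling as low-pass filtering: averaging 2x2 blocks of pixels
# carrying uncorrelated (white) Gaussian noise halves the noise standard
# deviation. With spatially correlated noise ("color blobs"), the gain
# would be smaller, which is the caveat made above.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))

# 2x downsample by block averaging
down = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(noise.std(), 2), round(down.std(), 2))  # roughly 1.0 vs 0.5
```

The factor of 2 follows from averaging four independent samples: the standard deviation of the mean of n i.i.d. samples shrinks by sqrt(n).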
  18. <p>Why does one want to upgrade? Because one's camera or lens limits one's current style of taking photographs. "Upgrading" won't make you a better photographer. At the beginning you have a new toy and will possibly use it with great interest, but after a month or so it just boils down to whether a new body or lens is needed to enhance your photographic style. I have newer cameras, but I still use my D70s, and as noise-suppression algorithms have improved since then, with an up-to-date Lightroom or Capture NX2 I now obtain results from D70s RAW files that do not make it look so old. So, D90 RAW files should still give you plenty of freedom for the future.</p>