Are image-only cameras on their final leg?

Discussion in 'Nikon' started by WAngell, Dec 30, 2017.

  1. Ed_Ingold said: "they don't know how to use it properly"
    Matthew Quigley: Said I didn't have much use for 'em, never said I didn't know how to use 'em.

    Used video with various video cameras for a number of years while teaching martial arts. Slow, tedious, expensive and time consuming. In the final analysis those factors outweighed any benefits and I stopped. Bad enough to look at vacation slides or home movies back in the day - just when you thought it was safe, something else to fear: being subjected to amateur video "productions".
    Don't know if you've seen the shirt - "We need more Sax and Violins"

    There will always be those who prefer still images, and some who like video. More I believe will look at images than sit through a video. As to lack of media, plenty of people use antique processes by making their own.
    Not where I live, and likely not in my lifetime for either.
     
  2. The idea that the image is fixed by exposure at the time of shooting ended a very long time ago - in the late 1800s.

    As Ansel Adams said when comparing a photograph to music: the negative [exposure at the time of shooting] is the score; the print is the performance. We usually look at the "performance" - either a print or a manipulated image on a screen. Most professional prints were heavily manipulated with dodging, burning, and changes of contrast. If you are ever in Tucson, Arizona, go to the Art Department at the University of Arizona. They have a large Adams collection, not only prints but also the original negatives and some of his test prints with printing notation on them. The final prints have been heavily manipulated.

    Or read Tim Rudman's book, "The Photographer's Master Printing Course" - over a hundred and fifty pages of how to change the way an image looks. (You may have to get it from the library; the book has been out of print for several years.)

    Photographers have been changing the way a final image looks from "the point of exposure" for many years. Digital just makes it easier.
     
  3. So I don't know how to make movies. I bought a Beaulieu back in the '80s trying to make movies, but I found out I don't have the skills. So when I buy my cameras I don't want a camera that has video. I am a dummy, but is there anyone who knows everything?
     
  4. This has been an interesting discussion!

    Mr. Vongries - I agree with your view - I do carry a phone because my wife doesn't like the idea of me being unreachable. A big part of that is the extinction of public telephones caused by the cell phone - if you don't carry your own, it's very difficult to find a public-use phone nowadays. And I carry a real camera, too. Nothing fitted into a cell phone will ever be able to compete with the capabilities of what we all call a real camera.

    Mr. Darton - I will quibble with your statement that the still-only camera is dead. It not only exists, it's thriving. It has just grown an appendage called video. That appendage is largely ignored by still-only photographers. It may be correctly called a still+video camera instead of still-only, but it's a still-only camera to those who ignore the video (like me).

    JDMvW - That 1902 Sears catalog also had cameras for sale. Very different from today's cameras, but cameras designed to allow capturing an image for display later. That hasn't gone away.

    Still cameras will continue as long as there are people who value and will use the still image. I have photos I took in Ireland, Norway, Denmark, and New York City on my walls, and cannot see any way or reason to replace them with videos.
     
  5. I wouldn't presume to speak for "most people," but I've been trained on and have used studio video cameras on a local weekly program, have used the video feature on my Nikons when I had to and, ages ago, I used my father's 8mm equipment. I don't like doing it. It's a different process, which makes different demands and provides different rewards.

    Exactly. And it doesn't mean I have to do it.
     
  6. Tony Parsons (Norfolk and Good)

    I have Pentax K10D and K20D for stills, and a Canon Legria FS306 for video. All I want, all I need. The more it can do, the more there is to go wrong.
     
  7. Spearhead (Moderator, Staff Member)

    A big part of adding features and functionality is attracting new users. Asking existing users what they like isn't particularly useful for this.

    And having worked in a Hollywood studio for a while, it's clear that at least some camera companies made a good decision adding video. Almost all of our production was done with Canon DSLRs. Previously it had been done with much more expensive Sony video cameras. We weren't the only ones; Canon put a service center on the studio lot to handle all the increased DSLR usage.
     
  8. What's really interesting to me is that if you correct for inflation, the most expensive and cheapest cameras listed in the contemporary catalogs remain almost constant in cost per (corrected) dollar. Ditto for stoves (although in that case going from wood to electric), and many other items....
     
  9. A full-frame, or even an APS-C, camera has a larger detector than most cinematic video cameras. The chief advantage of a large sensor is a shallow depth of field (inversely proportional to sensor size). A DSLR with video capability is far from ideal in the location of controls, viewfinder, focus and zoom control. However, the image quality is very high and compatible with dedicated, professional video footage. Although DSLRs are limited to 29-minute clips (for tax reasons), this has little effect when stories are built on 10-minute (or 10-second) clips. My next video camera will probably be a Sony which uses a Super 35 sensor (~ APS-C) and takes E mount lenses. I would not be averse to using Sony PZ lenses for a main camera.

    News photographers are an endangered species. The ability to capture video clips along with still photos enhances their value to news organizations. It's not a bad feature to have when photographing grandchildren either.

    I do take front and rear video in my car too. AFAIK, it has only been viewed once - when someone turned left in front of me. It's amazing how truthful people can be when there's physical evidence. It certainly worked with a certain FBI director ;)
     
  10. Most of you are concentrating on "image quality," but there are other reasons people like cameras. I like the historical aspects of using a camera, and the actual process of using them. So, what was my very first photo related purchase for 2018? I just ordered a box of 4x5 dry plates and two plate holders. These were popular from 1880 to 1930. I'm fascinated by plate photography and want to try it myself. To me it's sort of a craft. All my pioneer photographer heroes shot dry plate--WH Jackson in Omaha, Stanley Morrow in Yankton, Dakota, Solomon Butcher in western Nebraska, and FJ Haynes in Fargo. Using a phone to take photos just isn't the same for me.


    Kent in SD
     
  11. Great discussion.

    My headline didn't say anything about 'still image'; it said 'image only'. Today a DSLR records only a fairly static image: a bunch of pixels and, for each pixel, color and intensity. Video is nothing but a gob of these still images.

    My iPhone X, OTOH, adds a distance element. I suddenly no longer have a static image. This gives me a whole new world of creative potential. I can shoot for exposure and then vary the depth of field afterwards. I can also easily make a background B&W. Or completely change the background (e.g., Apple's new 'Clips' app that I've not yet played with). It brings a new paradigm to HDR. Simply knowing the distance for each pixel opens up a gob of new creative options (and makes others easier, much to the chagrin of some folks).
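
    Roughly what I mean, as a minimal sketch (the per-pixel depth map is the kind a phone can export with a portrait shot; the 2 m subject cutoff and the function name are just illustrative):

```python
# Minimal sketch: re-styling the background after capture using a per-pixel
# depth map. `image` is an HxWx3 float array in [0, 1]; `depth` is an HxW array
# of distances in metres. The 2 m cutoff and blur strength are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def restyle_background(image, depth, subject_max_m=2.0, blur_sigma=8.0,
                       desaturate=True):
    """Blur (and optionally desaturate) everything farther away than the subject."""
    background = depth > subject_max_m                       # HxW boolean mask

    # Blur each channel of the whole frame once, then composite by the mask.
    blurred = np.stack([gaussian_filter(image[..., c], blur_sigma)
                        for c in range(3)], axis=-1)

    if desaturate:
        gray = blurred.mean(axis=-1, keepdims=True)          # crude luma stand-in
        blurred = np.repeat(gray, 3, axis=-1)

    out = image.copy()
    out[background] = blurred[background]
    return out
```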

    Are there other attributes besides color, intensity and distance that could be useful? Lytro also records the angle of light rays. What if we knew the reflectance of the element that each pixel records? And, as an extension of reflectance, could we know whether something is itself a light source?

    BTW, I never suggested replacing my DSLRs with iPhones (though for many photos I do use my iPhone). The question is: when will Nikon catch up to Apple and also begin recording distance information for each pixel? Or other attributes?

    Just as painting and sculpture have not given way to photography and holography, I don't think still images will ever fully give way to video or anything else. Similarly, new tools will not necessarily replace older tools. Sometimes the new tools are easier but don't necessarily produce something better, and sometimes we stick with the old ways simply because we enjoy the more traditional way of doing something. I've shot a number of woodworkers, and some still use hand tools primarily because they enjoy using them, even for tasks that can be done as well or better by power tools.

    Finally, I don't think that 'the old way of doing things' is in conflict with new technology. I am a quite content Luddite who loves new stuff. I overlapped film and digital for a number of years and still refuse to sell my film bodies. I sometimes lug my DSLRs and lenses around and sometimes use only my iPhone. Or sometimes I will use my wife's Nikon 1. I don't see iPhones or new capabilities as threatening so much as opening up new creative options.
     
  12. Ummm... We have two issues with this. First, the range of values is limited and must be set for the entire image. Second, we must, at the time of exposure, select the range of values that we want to record. We can do this manually or with the aid of the camera's CPU. HDR tries, rather messily, to correct for the lack of dynamic range and, on an exceptionally limited basis, for the exposure being fixed for the entire image.

    Now imagine a camera that does this individually for each pixel. We would no longer need to worry about exposure at the time of image recording. Every element in the scene (since we are also recording distance and perhaps other attributes, this is no longer just an image) is recorded as we see it. Effectively we have infinite dynamic range. We can then manipulate this later to create an image that is overexposed or blown out in certain areas, or whatever we desire. Since we know distance, we can blow out just the background.
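
    As a rough sketch of the idea (no such camera exists yet, so the linear radiance map, the depth map and the EV numbers below are all hypothetical):

```python
# Sketch only: if every pixel were recorded as a linear, effectively unbounded
# radiance value, "exposure" becomes a post-capture choice and can even vary by
# region. `radiance` is an HxWx3 linear float array, `depth` is HxW metres; the
# EV offsets and the 2 m subject cutoff are made up for illustration.
import numpy as np

def expose(radiance, ev):
    """Apply an exposure offset in stops, then clip to a displayable [0, 1] range."""
    return np.clip(radiance * (2.0 ** ev), 0.0, 1.0)

def expose_by_depth(radiance, depth, subject_ev=0.0, background_ev=3.0,
                    subject_max_m=2.0):
    """Expose the subject normally while deliberately blowing out the background."""
    out = expose(radiance, subject_ev)
    background = depth > subject_max_m
    out[background] = expose(radiance, background_ev)[background]
    return out
```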
     
  13. Form factor transitions are imminent. I can't help but think Nikon's D850 is the end of the line for their flagship DSLRs. There's a Nikon MILC, maybe several, in the wings, which suggests Nikon belatedly got what customers want/need. Fuji's MILCs are arguably the strongest argument for ditching the DSLR. What features and functions the future holds are mostly speculation, but Nikon's big, kludgy DSLRs probably won't be around 2-3 years out.
     
  14. Actually, the dynamic range on some cameras is enormous. HDR can be used effectively to balance light and dark areas of a single such image. The effect can range from natural to bizarre, depending on your needs and taste.

    I too doubt that video will ever replace still photography, even as both are combined in the same camera. It may be one of many features on a modern camera that we don't use, or take time learning to use. The mechanical knowledge of doing video is easy enough, but why, when and where to use it is subjective. The video button is often inconveniently located, presumably to prevent accidental use. This makes it hard to start and, in particular, stop the video smoothly without some sort of rig or adaptation. The Sony A9 has a much more convenient video button, so much so that I disabled it after pressing it several times by accident. For the moment, my video is relegated to a single mode (video) and triggered via the standard shutter release button. That takes away the ease of use, but with less annoyance. I can always change it back.

    Video is extremely popular amongst users less "pure" than those frequenting these pages. Use of DSLRs to video school performances is nearly on a par with the use of ubiquitous cell phones and tablets. It doesn't have to be one or the other. Look for the creative possibilities (or memories) that video offers. Each frame in a 4K video is roughly 8 MP, at up to 120 fps. A few years ago, 12 MP was about as good as it got for still photos.
     
  15. "Nikon's big, kludgy DSLRs probably won't be around 2-3 years out."

    Well, you might be surprised to find that some of us find most mirrorless cameras too small to hold and use comfortably; the buttons are too small and difficult to press. In fact I find even the D850 to have too thin a grip to be comfortable. The D5 (as well as the older versions in the series) by contrast is "just right" in terms of shape and size for my hands and works quite well with gloves on, too (which is important when photographing in difficult winter conditions). I have no plans on purchasing a mirrorless camera any time soon.
     
  16. DSLRs won't be around (in 2, 3, 5... 10 years) in the same way you can no longer buy film today... In other words, they'll probably be around, even if not as ubiquitous as today. And that's only a good thing, even if for some companies it could be a difficult or dangerous transition from mass product to niche.
     
  17. It is all very strange. Some like stills, some video and many prefer not to record anything at all. As I like stills, I would like to see still cameras in stores in the future too. Film is not dead, DSLRs are strong, mirrorless are coming. Obviously mobile phones are in the hands of all ages and provide most of the content of the daily image flow.
     
  18. Not so. There is simple dodging and burning that affects limited areas of the print. For more complex situations there is a physical mask that can be constructed and laid over an area of the photographic paper to restrict adjustments. I've never tried it, but it is described in Rudman's book in some detail. He also goes into blending different negatives, which addresses your second point.

    Many of the digital effects we have today are wet darkroom techniques that have been used for years by Master print makers. Digital makes the process easier and allows us non-masters to use them.
     
  19. Sorry for my delay in responding to this thread - I wanted to wait until I was at a keyboard so I could do it justice.

    Firstly, WAngell's points about capturing more than a single 2D image (with Lytro's light field, Apple's multi-camera, etc.):

    Apple certainly aren't alone in multiple cameras - quite a lot of Android phones have multiple cameras for much the same purposes (indeed, there are devices going back to at least 2011, when the HTC Evo had an LCD autostereo display and dual cameras; Sharp had one too). They have the benefit that the stereo disparity can be used to perform depth segmentation on the image, which allows artificial background blur, as used in many computer games, to emulate depth of field. Because you're missing the information from "behind" the captured pixel (whereas a wide camera aperture captures all the light seen from anywhere on the entrance aperture) artifacts appear - although you get to avoid some optical aberrations in return. A combination of AI and algorithmic tweaks lets this work somewhat better. Without multiple lenses, you can do this temporally, with multiple exposures - which is a trick that was also used by the Pentax Q to achieve decent depth of field from a tiny sensor. You are, of course, starting with simple captures, and arguably you'd be better saving the originals for off-line processing. Even a pair of captures at different depths gives you quite a lot of information (hence Panasonic's "depth from defocus" technology).

    I'm glad to see Nikon add focus stacking to the D850, even if they didn't include focus bracketing, which would actually have been helpful. (Fortunately, after years of maligning it, I had a go at re-calibrating my Sigma 35mm for my D810 last night. It's set to -20, but it's actually acceptably accurate in phase-detect now. Yay.) Focus stacking isn't rocket science in software.
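
    For what it's worth, the naive version really is only a few lines - this sketch assumes the frames are already aligned and just keeps, per pixel, the frame with the strongest local Laplacian response; real tools add alignment and smarter blending:

```python
# Naive focus stacking sketch: `frames` is a list of aligned HxWx3 float images
# focused at different distances; for each pixel, keep the frame in which that
# pixel is locally sharpest (largest smoothed Laplacian magnitude).
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def focus_stack(frames):
    stack = np.stack(frames)                                  # N x H x W x 3
    gray = stack.mean(axis=-1)                                # N x H x W

    # Per-frame sharpness map, lightly smoothed so the per-pixel winner
    # doesn't flicker between frames at noise level.
    sharpness = np.stack([gaussian_filter(np.abs(laplace(g)), 2.0) for g in gray])

    best = sharpness.argmax(axis=0)                           # H x W frame index
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                            # H x W x 3 result
```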

    Canon have had the ability to apply retrospective refocus from their "dual-pixel" sensors for a while. It's very limited as light field captures go, but it's there. Obviously it greatly increases the amount of data being stored. Lytro have essentially the same issue, magnified - their consumer cameras had very low spatial resolution in order to capture a reasonable light field resolution. They've tried to work around that with a high-end device that captures vast amounts of data for video, but obviously that presents workflow issues (and is impractical for consumers). I presume the intent is for the user to have a dynamic ability to change focus (or maybe slight head movement) with eye tracking, since it's a vast amount of effort to go to if the videographer is supposed to be in control. That said, "professional" focus can sometimes miss - I found the focus errors very distracting in the IMAX version of Les Mis, for example - so arguably a little retrospective focus grading might be good. Plus it simplifies the creation of stereo content, although 3D is still generally seen as very much a gimmick (especially for home use). I'd love to see sensors reach the point of holographic video capture, but that's a lot of effort for a very limited benefit.

    I do have some envy of the Sony/Pentax et al. sensor-shift debayering approach, maybe with sensor-shift VR. It's not as critical for a DSLR, but that doesn't make it useless. Plus Pentax's star tracking is cool. I envy the PhaseOne binning system, too.
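
    The principle is simple enough to sketch: with four exposures shifted by one photosite, every site gets sampled through red, blue and (twice) green, so there's nothing to interpolate. The RGGB layout and shift convention below are assumptions, and real implementations also have to deal with subject motion:

```python
# Sketch of sensor-shift "debayering": combine four raw RGGB exposures, each
# taken with the sensor shifted by one photosite, into a full-colour image with
# no demosaicing. `frames` maps the (dy, dx) shift to an HxW raw mosaic.
import numpy as np

CFA = np.array([['R', 'G'],     # filter colour at (row % 2, col % 2), no shift
                ['G', 'B']])

def combine_pixel_shift(frames):
    h, w = frames[(0, 0)].shape
    rows, cols = np.indices((h, w))
    rgb = np.zeros((h, w, 3))
    g_sum = np.zeros((h, w))

    for (dy, dx), raw in frames.items():
        # Which filter colour each pixel saw in this shifted exposure.
        colour = CFA[(rows + dy) % 2, (cols + dx) % 2]
        rgb[..., 0][colour == 'R'] = raw[colour == 'R']
        rgb[..., 2][colour == 'B'] = raw[colour == 'B']
        g_sum[colour == 'G'] += raw[colour == 'G']

    rgb[..., 1] = g_sum / 2.0   # every site was sampled through green twice
    return rgb
```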

    Regarding other means of capture, as others said, modern sensors are very good at dynamic range capture. Before that was the case, Fuji had dual-pixel SuperCCD sensors with a more- and less-sensitive sensor pair that collectively gave good dynamic range (or, optionally, resolution). Magic Lantern for Canon has been known to drive different scan lines at different ISO amplifications, which helped their dynamic range significantly (especially for the 5D3). Supposedly the first-generation Pixel camera processed sub-pixels at different sensitivities, which helped it out. There have been presentations about tricks such as RGBW sensors (for greater light capture) and running the RGB pixels at a different (electronic) shutter speed. Not to mention X-Trans, Foveon and so on. Nikon mostly claim that they want the cleanest possible conventional still image, and don't do anything odd to the sensor as a consequence; this clearly hurts them a bit, since the A7RIII is very close to the D850's sensor performance (arguably better in low light) and doesn't obviously suffer from having PDOS.

    Most "HDR" displays are more limited than they'd suggest. The original specifications for HDR TVs assumed 4000 nits display typical, or sometimes 1000 nits, which is actually pretty good going for many current devices (especially OLEDs). Dolby Vision (or at least, PQ) can represent up to 10,000 nits. The HDR component of a display is intended to be a small region (highlights) or displayed briefly (flashes) - otherwise the display overheats. There is highly likely to be some intelligent and proprietary processing going on to represent HDR content on any common display, and there's a load of metadata provided to try to get that right (especially when it comes to getting heuristics to behave consistently across a scene - which is very tricky for a live stream). Meanwhile, even SDR BT.2020 for UHDTV defined an enormous gamut that most displays can't come close to touching - last I heard, if you want to get close, you're looking at a laser projector. That said, NTSC happened to have a huge gamut with very saturated but dim phosphors, which modern displays with brighter output often struggled to reach. In summary: while I look forward to providing content with a greater dynamic range and gamut than the standard sRGB displays, there are still highly likely to be some significant limitations and practical display will involve some gamut and dynamic range mapping, beyond whatever may be wanted for artistic intent. More per-pixel detail is useful for image processing too, of course; I wonder whether Nikon will at some point feel limited by their 14-bit format, although we can't expect huge improvements at this point.

    As others have said, any input to output mapping involves some processing. That processing continues to get more advanced (especially with recent neural networks). More data has to be a good thing, but there comes a point where it compromises 99% of users, so there'll always be a compromise.

    As for video, basic video encoding is a relatively solved problem (you've been able to do it on a cellphone budget for years), and cameras tend to need at least a partial video feed for live view. Few cameras have live view but no video - the D700 predated Nikon having the processor, the Df and some Leicas don't have it for moral reasons, but it's effectively a free add at this point. There'll always be a few cameras that deliberately ignore the functionality because there are customers who'd rather have functionality that they'd only use once in a couple of years missing in return for having to ignore a switch and/or menu option. Most people would rather have the ability to do video just in case. With an SLR, Nikon obviously haven't prioritised video AF because it doesn't help stills (much); with mirrorless, it's the same problem, so video is basically going to be there. As HDR TVs and maybe 8K start to appear, camera manufacturers are likely to get the hang of including the standards (at least the ones that are actually standardised); Nikon have historically not been a video camera company (and since their lenses stopped being fully manual, they're not even used as widely as they once were), so it's no surprise they're behind. I'd be very surprised if video shooting overtook stills for large sensor cameras, though - it's just too inconvenient to the user.

    So... plenty of room for technology to improve, most cameras will have increasing functionality, someone will probably keep making cameras containing nothing but stills technology even if it's only Leica, but not because they couldn't do more.

    End brain dump. :)
     
  20. I meant to say... I do have a little trivial computational photography idea that I was hoping to have working in time to share with the group for Christmas. Best laid plans... I'll get it done eventually. I'd say more, but there's a sporting chance it won't work and I don't want anyone to get even marginally excited. :)
     
