
david_r._edan

Members
  • Posts

    237
  • Joined

  • Last visited

Everything posted by david_r._edan

  1. <p>Hi.<br> I'm trying to throw together a makeshift setup for off-site use. It's a special project I'll be working on for a long time. Basically, everything just needs to be sitting there at this other place, ready for me to come in and do a shoot every now and then. I'm on a tight budget here, so at this point I'm still figuring out which way to go about the lights and the whole setup, really, because I can get all kinds of old gear for next to nothing and some of it even for free (part of the arrangement).<br> First, I should probably figure out how to sync for shooting and metering. The Yongnuo RF-603II seems to be a fairly decent trigger. No TTL, so no remote control of the output, but at this point I don't even know what kind of strobes I'll have. It may appear that I'm doing this backwards, but here's the thing: first, I will need to take a fair amount of incident readings, and second, most of the time I'll have to be away from the camera while taking the pictures, so I need a remote shutter release too.<br> I have my old Sekonic L-608 that I can "sacrifice" for this project. It has the radio module installed. So I thought that by getting 3 Yongnuo transceivers I might get what I need. I just have a few things to confirm:<br> 1. Will the Sekonic L-608 meter (with the radio module installed) trigger a flash unit via the Yongnuo RF-603II transceiver?<br> 2. Does the flash unit need to be connected via the hot shoe in order to be tripped? Can't I just secure the Yongnuo to the light stand and relay the signal to the strobe via a short PC cord?<br> 3. Given that one Yongnuo is tripping the flash and another is sitting in the camera's hot shoe, can I use a 3rd unit to remotely focus the camera and release the shutter, which in turn should trigger the flash? *In this configuration the Yongnuo that's connected to the camera would have to perform 2 functions.
a: act as a receiver of the "focus then shoot" signal and b: act as a transmitter of the "fire the flash" signal. I think I might be asking too much but you never know....<br> 4. In order for the camera to signal to the Yongnuo transceiver, does the unit have to sit in the hot shoe, or can I just secure it to the tripod and connect it to the camera's PC port?<br> 5. Same for the remote shutter and focus. I know that the Yongnuo has to be connected to the 10-pin port (in my case, Nikon) but does it also have to occupy the hot shoe? Can't I just secure it to the tripod?<br> So, where do I stand in regard to connectivity/compatibility? I know I'm probably asking too much but if this works out, it'll be sweet!<br> <br />Thanx!</p>
  2. <p>A while ago I actually stumbled upon an online demonstration where the guy was using different opacity values for each of the layers. I didn't dwell on it because I figured there must be a more straightforward approach. It would have been nice to have dedicated software for this, or at least a Photoshop or Lightroom plugin that would do all the figuring out. Given that there are 4, 8, 16, 32 or whatever number of exposures of the same thing, proper noise-removal algorithms could really work wonders... Did I say "super-resolution"? Oh, well...<br> Anyway, by now I know that there's no such thing and I'll have to adapt my technique to the tools available in Photoshop. Wouldn't be the first time.<br> Thank you, Frode! I will try what you suggested as soon as I'm free.</p>
  3. <p>Leszek: We're not even on the same page.<br /> Mike: I was really hoping I wouldn't have to explain myself. That's why I was trying to keep things simple and to the point. But to satisfy your curiosity, and maybe open other people's eyes, I will describe a few scenarios where this technique is very useful. Not "would be useful" but "is useful", as I personally used multiple exposures in many scenarios back when I was still shooting film.<br /> 1. Imagine that you need to photograph a cityscape at night. The main point of interest is a road or a bridge. You're interested in portraying the street as 'busy' by capturing many light trails created by vehicles' head- and tail-lights. As you arrive at the scene, you realize that there are hardly any cars at all. In a 30-second exposure you would get 4, maybe 5 cars. One solution would be to set your camera to 6 exposures, each lasting 5 seconds. You would then open the shutter only when there were 3 or 4 cars in the frame. Thus in the merged image you would have light trails from about 20 cars, versus 4 or 5. This approach was more useful with film; with digital you can get lots of light trails without underexposing, or even without shooting in the multiple-exposure mode at all.<br /> <br />2. You are photographing a distant landscape with a 300mm lens. The light is getting low and you realize that with the lens stopped down to f/11 and the ISO already at 200, you would still need to expose at 1/15 of a second. From your previous tests you know that even with the mirror locked up, the shutter alone introduces enough vibration to impart noticeable blur to your photo. To negate this effect, one could simply raise the shutter speed to 1/125 and make 8 separate exposures, each of which would be grossly underexposed, but when merged together they would provide ONE properly exposed image, free of any motion blur.<br /> 3. In a studio: this is what I call "flash banking".
Imagine you're photographing a piece of jewelry with a macro lens. The magnification is close to 1:1, so the compensation for tube extension is enormous. The DOF dictates an aperture of f/16. The strobes are right over your subject, but they still can't put out enough light. Instead of bumping up the ISO, one could do the following: set the camera to 4 exposures and take the same picture 4 times to compensate for the 2 stops of light that your strobes can't put out. Out of those 4 exposures the camera would create one good picture... Not really good, because it would be blurry. The reason is that each time the shutter is released it introduces enough vibration, which in turn offsets the image plane. The flash 'freezes' the picture, but each of those 'pictures' is taken from a slightly different angle because the lens is moved in each of those instances. Everything is as stable as it can be, and with the mirror locked up there is hardly any vibration at all. However, at this kind of high magnification the offset of the image plane is very, very noticeable in the picture, and it manifests itself as a lack of image sharpness. Fortunately, there is a way around this, which I've used with film. Make sure the studio is completely (or almost completely) dark. Open the shutter in bulb mode and fire the strobes manually as many times as you need. When done, close the shutter. This way the camera is not moved in any way between the "exposures", which in this case are the flashes of light. With film you can keep the shutter open for a week; as long as it's completely dark, it won't matter. With digital you have a fairly narrow window to fire your strobes before bad things start to happen to your picture.
Of course, there's the long-exposure NR, but it introduces its own artifacts that get progressively worse with the exposure time.</p> <p>I hope this was enough.<br /> If merging exposures in-camera can be done with such high-quality results, why can't it be done by a proper computer with even better results?</p>
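The "flash banking" arithmetic above is worth spelling out: light adds linearly, so each doubling of the number of identical pops buys one stop. A minimal sketch of that bookkeeping (the function names are mine, purely for illustration):

```python
import math

def pops_needed(stops_missing: float) -> int:
    """Identical manual flash pops needed to make up the given number
    of missing stops (light adds linearly, stops are log2 of light)."""
    return math.ceil(2 ** stops_missing)

def stops_gained(n_pops: int) -> float:
    """Stops of extra exposure gained from n identical pops."""
    return math.log2(n_pops)

# The jewelry example above: strobes 2 stops short -> 4 pops
print(pops_needed(2))   # 4
print(stops_gained(4))  # 2.0
```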
  4. <p>I've touched on this subject before; however, I pretty much ended up nowhere. I want to attack this bastard from a different angle.<br /> There's this very nice feature in my Nikon D800 camera. It's called 'Multiple exposure' and its applications are countless.<br /> What I need is to 'merge exposures'. I will now illustrate my point.<br /> Let's say that a certain (static) scene dictates an exposure of 8 seconds. We capture the image, name it "Picture A" and set it aside. Then we put the camera into the multiple-exposure mode and set the number of exposures to 8. We change the shutter speed to 1 second and make 8 exposures. Upon completion, the camera does its thing and we get a single photo, combined from those 8 exposures, which we call "Picture B".<br /> Upon close inspection, Picture A and Picture B look almost identical. Any variations are trivial at most. Apparently this works just as well as it used to with film. Obviously there are several issues with this technique. One would be:<br /> If the camera is moved ever so slightly between the exposures, the resulting image (merged from 8 different exposures) will register this misalignment.<br /> In a real-world scenario, especially when the tripod is on unstable ground (such as sand), every mirror flip and every shutter click will offset the image plane, thus each of the 8 images will be recorded from 8 different camera positions.<br /> The Nikon D800 does NOT align the images taken in the multiple-exposure mode. The solution would be to take 8 separate (RAW), underexposed photos and merge them outside the camera.<br /> To my surprise, Photoshop does a marvelous job of aligning such images automatically, at what appears to be the sub-pixel level.<br /> You go to File>Scripts>Load Files into Stack. You then select the photos and check the "Attempt to Automatically Align Source Images" box.
I also check "Create Smart Object after Loading Layers".<br /> Once it's ready I go to Layer>Smart Objects>Stack Mode>Summation.<br /> That's the closest I'm able to get to the results obtained 'in camera'. Unfortunately, the resulting image is overexposed and all the tones look wrong, although the shadow detail appears to be all there. So I figured this was not the right way to go about merging exposures.<br /> I've tried using blend modes such as "Linear Dodge" and "Screen", but the picture still doesn't look right.<br /> Previously I made a serious attempt with DeepSkyStacker, software designed for astrophotography. Needless to say, I got nowhere with it.<br /> Once I realized that PS can properly align several images, I figured that it should also have the proper tools for merging exposures. Maybe the "Smart Object" and "blend modes" route is not the right way to go about this. Can anyone shed some light on the subject? I'm not married to Adobe, so if there's some other software that can merge exposures, I'd be glad to hear about it! Although I'd much rather stick to what I'm familiar with.<br /> Thanks in advance!</p>
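One likely reason a straight Summation comes out overexposed with the tones looking wrong: stack modes operate on the encoded (gamma-corrected) pixel values, while exposures only add up correctly in linear light. A minimal numpy sketch of the idea, assuming aligned 8-bit sRGB frames — an illustration of the principle, not Photoshop's actual implementation:

```python
import numpy as np

def srgb_to_linear(u):
    # standard sRGB decode, u in [0, 1]
    return np.where(u <= 0.04045, u / 12.92, ((u + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    # clip: summed highlights beyond full scale clip to white
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def sum_stack(frames):
    """Merge N aligned, equally underexposed frames by summing in
    *linear* light, then re-encoding. Summing the gamma-encoded
    values directly is what distorts the tones."""
    linear = sum(srgb_to_linear(f.astype(np.float64) / 255.0) for f in frames)
    return (linear_to_srgb(linear) * 255.0).round().astype(np.uint8)
```

With 8 frames each 3 stops under, the linear sum lands on the intended exposure; the same sum done on the encoded values overshoots badly, which matches the "overexposed, tones wrong" result.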
  5. <p>Edward: I know I don't have to tell you that producing a 'keeper' involves a lot more than just clicking the shutter. What you're describing is fairly cumbersome. I don't shoot video of any kind, so adding this kind of step to my already extremely complicated workflow would be an 'overkeel'... Yeah, it would make me 'keel over'.<br /> As luck would have it, getting what I need is actually pretty easy. I just want to make it A LOT easier. But if you have Lightroom, try this. You'll understand exactly what I'm talking about. <br /> Find a WAV file (just about any type will do).<br /> Make a copy. Put that copy in the same folder where you store a few of your Lightroom photos.<br /> Give that WAV file a new name. It should bear the exact name of one of the photos with only the extension being different (WAV).<br /> Run LR and select the photo that you borrowed the filename from.<br /> Now, in the metadata panel, if LR is set to display all the sidecar files, you should see that that photo indeed has a 'WAV' sidecar (and possibly an XMP too). There should be a little arrow right next to it. Clicking that arrow will play the audio file. It's that simple. No import or anything. *Don't forget to delete the WAV file when you're done.<br /> With my system, the key is figuring out how to batch rename the WAV files on a mass scale.</p> <p>And Barry: I believe that I've described a pretty solid solution for your needs. If all you have is a couple dozen voice memos per day, simply renaming and then moving a few files to a different folder should be relatively painless. Get a voice recorder with an accurate clock and make sure to sync it with your camera's. I wouldn't worry about it, but it's probably a good idea to check that the audio format(s) is compatible with LR, prior to the purchase. 
There's a dedicated Lightroom forum for this kind of inquiry.<br /> Just a heads-up: if you rename the photos (even in LR), you lose the link to the voice memos, at least until you rename them to match again.</p>
  6. <p>Thank you, Edward. Prior to posting my inquiry here, I did run a pretty extensive search on Google myself and it felt like I hit a brick wall. Before that, I imagined that syncing voice memos with photos was common practice.<br /> The whole point of this is, I guess, convenience but then there are times when I simply can't take any written notes. And even if I do manage to scribble something down, how exactly am I supposed to match that piece of paper to the actual photo? Look up the filename in the camera which will be changed on import to LR anyway? Start recording the time and date for each note and see if that helps? One can realize how the simple task of annotating a few hundred photos can quickly turn into a complete cluster****.<br /> BTW, I'm not ranting. Just trying to get my point across.<br /> And I do thank you, Edward, for understanding. The voice memos are intended to serve their purpose and be discarded. The information is meant to be eventually written down (typed in) elsewhere and I myself do type almost as fast as I think (in 3 languages I might add).<br /> Everyone else, do take note that any advice that has anything to do with iPhones and the like would not work in my case. Even if I did own a smartphone (which I don't), it would be of no use to me in the <strong>wilderness</strong> where there's zero network coverage (and no AC outlets to charge it). All the electronics I bring with me run on AA batteries, including my camera. This makes a small voice recorder the perfect candidate for what I need accomplished. One that can timestamp the recordings, anyway.<br /> Having all the photos in Lightroom along with the corresponding voice memos is my goal. A big part of it is getting there painlessly. As I see it, all it takes is a cheap voice recorder and a smart file renaming utility and I know for a fact that there's at least a ton of those for every platform and every decade of that platform's existence. 
I'll probably have to post an inquiry in a computer forum. Anyway, I'll keep looking.<br /> Back in my coding days I would have been able to knock this baby out in less than a week. Well, this is not the '90s anymore. Who knows? Maybe I will have to actually pay someone to code this thing for me.<br /> Can't wait to get the real specs on the Nikon D5, because having the voice memos simply built into the camera would have been so much easier.</p>
  7. <p>My mistake. I wasn't clear enough. I'm not looking to create any sort of slideshows, presentations or Google Earth tours. The audio annotations would be for my personal use only. No one else is supposed to listen to them. EVER.<br /> Since we're talking thousands of photos + hundreds of voice memos per trip/hike, the audio annotations have to be easily linked to the photos and then be readily available for playback from inside Lightroom.</p>
  8. <p>More and more I find myself out in the field... and I do mean some hardcore wilderness. Right now it looks like I'm going to be at it for many years to come. So mostly I've been busy with the trekking and camping side of it, but a few issues with the photo gear have to be addressed too.<br /> <br />Often what happens is that I need to annotate the photos in some way, and jotting down that info on a piece of paper isn't that practical or even doable. Apparently someone thought of this a long time ago. Nikon D4 owners know what I'm talking about. Well, all I have is my lousy D800 (not really, no...). How hard was it to implement the voice memo feature anyway?<br /> The good news is that keeping the audio annotations next to the photos in Lightroom is super-easy. As long as the audio file has the same name as the photo (minus the extension, of course), it's always just a mouse click away. Just to make it clear: the voice memo is kept as a sidecar. And theoretically this "voice memo" can come from just about any source.<br /> I figure that what I need is a utility that can automatically rename the audio files based on the NEF and (preferably) TIFF filenames, taking the timestamps into account.<br /> And by the way, if this thing works out, I'm planning on getting a dedicated voice recorder, which means I'll be able to get something that fits the bill just right.<br /> I realize that the syncing part can be done manually, but then I picture myself getting back from a trip with thousands of photos and hundreds of audio files and just going: "naaahhh......". Who needs that kind of cluster**** on their hands?<br /> There has got to be something that can remedy the absence of this very basic feature in our very capable, yet apparently somewhat lacking, cameras.</p>
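The renaming utility described above could be sketched roughly like this. A hedged sketch: the filename patterns and the two-minute matching window are my assumptions, and file modification times stand in for the real EXIF/recording timestamps, which presumes the recorder's clock is synced to the camera's.

```python
import os
from pathlib import Path

def rename_memos(photo_dir, memo_dir, max_gap_s=120):
    """Rename each WAV voice memo to match the photo (NEF) whose
    timestamp is closest, so Lightroom picks the WAV up as a sidecar.
    Modification times stand in for EXIF capture times here."""
    photos = sorted(Path(photo_dir).glob("*.NEF"), key=os.path.getmtime)
    for wav in sorted(Path(memo_dir).glob("*.WAV"), key=os.path.getmtime):
        t = os.path.getmtime(wav)
        nearest = min(photos, key=lambda p: abs(os.path.getmtime(p) - t),
                      default=None)
        if nearest is None or abs(os.path.getmtime(nearest) - t) > max_gap_s:
            continue  # no photo close enough in time; leave the memo alone
        target = Path(photo_dir) / (nearest.stem + ".WAV")
        if not target.exists():  # never clobber an existing sidecar
            wav.rename(target)
```

The nearest-timestamp match is deliberately conservative: anything outside the window is left untouched rather than guessed at.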
  9. <p>I actually switched over to Hugin from a "commercial software" a few years back. Right now I wouldn't even consider anything else, paid or free. Hugin is that good.<br> Currently I don't expect any new features to be added to Hugin, especially since what I need already appears to be there.<br> I knew this was a long shot, this forum not being saturated with Hugin users. And I did kind of imagine this discussion turning into a not very productive back and forth.<br> From the very beginning I should have pointed out that I seek the advice of people who know exactly what I'm talking about, more specifically: regular Hugin users, such as myself.<br> Thank you in advance.</p>
  10. <p>I already do that, Jeff, and what I'm trying to do is eliminate this extra step in my workflow.<br> The time is the only parameter that would be different for each photo. The focal length, the exposure, the ISO and even the GPS are (or should be) identical.<br> I do not wish to play a silly game with the software and worry about which value it's going to pick. I should be able to TELL IT.<br> For example, if the timestamp is different (as it should be) for each of the frames, I'll be just fine with the final product having the timestamp of the first frame in the sequence, or the last, whichever is more convenient.<br> If the shutter speed is different for each frame in an HDR stack, then I won't mind not having those values at all. But the focal length, the aperture and even the GPS should remain the same (except for the 'heading', if it's even recorded).<br> It appears that someone's already thought of this, and I know I'm not reinventing the wheel.<br> All I need is someone to tell me how to prevent Hugin from stripping the EXIF.</p>
  11. <p>This should be a common question; however, their forum (forum.hugin.com) comes up with nothing relevant when you run a search for "EXIF". Plus, for some reason they don't let me register, so I couldn't post the question there.<br> <br />The problem is this:<br> The EXIF/metadata is completely stripped from the output files.<br> *My "output" is individual frames that I take into Photoshop and assemble there. None of the files contain any type of EXIF/metadata.<br> I've been using Hugin for a long time and just assumed that there was nothing to be done about it. Today I actually realized that preserving the EXIF data is probably possible.<br> Under "File>Preferences>Stitching(2)" there are a couple of options that suggest that copying parameters from the original EXIF header is actually doable.<br> I've created a default template in the form of an ARG file, but the EXIF data remains absent from the output files.<br> I must be doing something wrong or completely wasting my time. If this is possible, can someone please tell me how it's done?<br> <br />I would like to retain (copy) the following parameters:<br> Shutter speed, aperture, ISO, focal length, date/time and the GPS coordinates.<br> I can't be the only one who needs this.</p>
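While sorting out Hugin's own options, one workaround is to copy the tags back onto the output after stitching with exiftool (which is, as far as I can tell, also the tool that the ARG template mechanism mentioned above is meant to drive). A sketch, assuming exiftool is installed; the filenames are placeholders for your own:

```shell
# Copy the wanted tags from one source frame onto the stitched output.
exiftool -TagsFromFile source_frame.NEF \
    -ExposureTime -FNumber -ISO -FocalLength \
    -DateTimeOriginal -gps:all \
    -overwrite_original stitched_output.tif
```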
  12. <p>Tom: The photos you are referring to were not taken by me but rather by the gentleman who's selling the software.</p> <p>Q.G.: "Why not just expose longer?" How long can you go with a DSLR? I have a Nikon D800 and I wouldn't expose for anything longer than a couple of minutes. The artifacts are bad enough at 30 seconds (and I do know how to turn on the long-exposure NR).<br> Here's a link for you and everyone else:<br> https://photographylife.com/astrophotography-tutorial<br> Skip to the 'General Procedure' and read the first paragraph. Pay attention to the last sentence.<br> I've been wanting to investigate this technique for many years. I couldn't get around to it because it wasn't high on the list.<br> It's been out there for many years. Obviously, we regular folk do not synthesize exposures. Someone out there does. There are pictures.<br> I realize now that I must turn to a forum dedicated to astrophotography and take it from there.</p> <p>Thanks to everyone who tried to help.</p>
  13. <p>I am familiar with HDR photography, probably more so than most people who utilize it, as I've been perfecting it for years. I even put my own twist on it (actually a few of those). To take it even further I want to see if synthesizing exposures works for what I have in mind.<br /> What I've been talking about all along is <strong>not</strong> HDR.<br /> Here is a gallery. Underneath each picture there is a description of the technique that was used. Short version: this ain't HDR.<br /> http://www.tawbaware.com/maxlyons/cgi-bin/image.pl?showFileName=ST_PICT4183_4187_Stacked_Rosslyn.jpg&gallery=8&pwd=</p>
  14. <p>I have PhotoAcute and I sometimes use it to slightly increase the sharpness of my images. I don't see how it can be used to stack exposures.<br /> I'm not sure that we're on the same page here. I'm addressing all the kind gentlemen who have contributed to this discussion thus far.<br /> I want to learn and master the technique of stacking the <strong>exposure</strong>, not focus and not anything else.<br />It's similar to the technique used to photograph nebulae through a telescope, and landscapes lit only by moonlight (that look like they've been taken during the afternoon). I want to try this technique for something entirely different, and that is why I am not sure that software aimed at astrophotography is what I need.<br /> I'm not 100% certain how this technique can obtain detail in areas where there is none, but I assure you that it takes more than a little tweaking in Photoshop.<br /> I am not talking about shooting star trails, where you can just capture a nice set of pictures, stack them on top of each other in CS6 and apply the "Lighten" blending mode.<br /> I wouldn't use CS6 even for star trails. As you may or may not be aware, there will be slight misregistration between each and every image in the stack, even under the most sterile conditions. In a real-world scenario these misregistrations will be far from negligible. They can be caused by the camera's mirror+shutter+diaphragm rattling the whole setup with EACH exposure. They can be caused by the elements, such as wind and seismic vibrations. Bear in mind that I'm not talking about camera shake but rather the fact that for ALL of the images the camera will be in a slightly different position. When you begin merging those frames in Photoshop, the sharpness of the composite image will degrade with EACH additional frame.</p> <p>I need dedicated software that can ALIGN all the frames prior to merging them.
By "align" I mean: shift left or right, up or down, and rotate, on a <strong>sub-pixel</strong> level. And by 'merging' I mean increasing the effective exposure of the composite, i.e. adding detail to the final image that is not present in any single frame in the stack, because all of those frames are effectively <strong>underexposed</strong>.</p> <p>I'll rephrase that. I want to take a bunch of grossly <strong>underexposed </strong>pictures and combine all of them to get 1 good, properly exposed picture.<br> <br />DeepSkyStacker (http://deepskystacker.free.fr) can supposedly do that, but it's aimed at astrophotography, and astrophotography is not what I'm about. There are all kinds of software out there, so I thought I'd ask what's best for what I need to do.<br /> What I need to do is demonstrated on this page: http://www.tawbaware.com/imgstack.htm<br /> <br /> Any input is welcome.</p>
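For the ALIGN step, the standard building block (used, to my knowledge, by the astro stackers) is FFT phase correlation. A minimal numpy sketch, whole-pixel translation only; real tools refine the correlation peak to sub-pixel precision and also handle rotation, which this does not:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the whole-pixel translation between two frames via
    FFT phase correlation. Returns (dy, dx) such that
    np.roll(img, (dy, dx), axis=(0, 1)) re-aligns img with ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    # normalize to pure phase, then the inverse FFT peaks at the shift
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Align every frame to the first one this way, then merge; without the alignment the composite softens with each added frame, exactly as described above.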
  15. <p>Here's a page that explains what I'm talking about.<br> http://www.tawbaware.com/imgstack.htm<br> <br />Basically, if, for example, the correct/desired exposure of the scene is 2 minutes, one can capture 4 separate images, each exposed for 30 seconds, and have them 'stacked' to achieve the desired exposure.<br> I have zero experience with this and I want to explore this technique.<br> The "Image Stacker" (link above) is dated, plus it doesn't work (I get some crazy error on my PC).<br> There are at least several alternatives, and I thought I'd ask what's best for what I want to do, as I don't like installing random stuff on my system.<br> For example, there's DeepSkyStacker. It looks impressive and I think it's free, but I'm not sure if it's good for processing low-light cityscapes.<br> I'm looking for something that can input and output 16-bit TIFFs. I don't mind if the software costs money as long as I can properly evaluate it.<br> <br />If anyone wants to share their insights on this topic in general, that'd be great!</p>
  16. <p>2: That is why I've never used more than 1 extension tube at a time. And that's actually why it is a good idea to have more than just several lengths to choose from. But then, if you're that serious about macro, you might as well get extension bellows.<br> <br />1: As per the actual diameter and vignetting... I've never owned/used any extension tubes made by Nikon so I never had anything to compare mine to. If anyone cares enough about this, I'm sure this data is easy enough to come by. As for me, the last time I used any extension tubes was 6-7 years ago.</p>
  17. <p>My question is why bother with their tubes at all? There are plenty of (better) alternatives to the OEM ones.<br> For a while there I was using an extension tube made by Kenko, but then I got me this nifty set:<br> http://www.bhphotovideo.com/c/product/787224-REG/vello_ext_nd_auto_focus_ext_tube.html<br> All the tubes that I've ever used relayed the CPU data and the AF mechanics (!). Since no actual glass is involved, I don't see why anyone would bother with that ancient garbage made by Nikon.<br> PS: I have the 50/1.8D myself and at one point I did try to use it in combination with extension tubes. I remember not being impressed, but I can't say why. It was that long ago.</p>
  18. <p>Spyder2??? I think I still have one myself, somewhere... but that one's ooooold. What monitor are you using? I'm only asking because the Spyder2 was designed for CRTs and 1st- and 2nd-generation LCD displays, not the wide-gamut panels that we have today.<br /> Have you tried calibrating overnight? That's what I do. I use a custom target, something like 900 patches, and select the settings that take the longest. Full calibration and profiling take something like 7 hours with my Spyder3. I set everything up and leave the computer on for the night. You can make the screen display 'black' and adjust the target area to something very small.<br /> You've mentioned that you tweak the brightness and contrast. In general, the contrast should be left at the panel's native setting. Brightness is more complicated, as it actually affects the contrast ratio. What's your target white point?</p>
  19. <p>Spyder Express comes with proprietary software which is 'garbage'. The fact that the device is not supported by dispcalGUI renders the whole package as 'mediocre' at best. Your experience may vary.</p>
  20. <p>The Express is garbage.<br /> The Pro and Elite are exactly the same device. The only difference is the software, which is upgradable from 'Pro' to 'Elite'. Unfortunately, the software provided (even the "Elite") is utter rubbish.<br /> I hate to burst your bubble, but there is no 'magic wand', $300-type answer to color management, accuracy and calibration. It is something that first has to be learned, understood on a deep level and then practiced. This is true despite what Datacolor and others have led you to believe.<br /> Purchasing a Spyder and using its OEM software will lull you into thinking that your colors are accurate, your panel is fully optimized and that you're doing everything right color-wise. And that's exactly the intent. Why should Datacolor tell you that color management is hard and time-consuming (at least at first)? They just want to make more sales.<br /> I wonder how long it'll take before we see another post from you, asking <strong>us</strong> 'how come your B&W photos look green in the shadows and pink in the highlights', 'why you get banding' and 'why everything clips to black below RGB 8,8,8'? You won't have the tools or the knowledge to address those kinds of issues yourself. Calling 911 won't help you either. Only <strong>you </strong>can fix your colors.<br /> And if you actually care about them colors, buy the Spyder4 Pro (not Elite) but do not install the shitty software that comes with it. Not even the drivers.<br /> Go to: http://dispcalgui.hoech.net/ and enter the true world of color accuracy. It's for people who know what they're doing or <strong>want to</strong> <strong>know </strong>what they're doing.</p>
  21. <p>Right? I mean all we're talking about are some stupid 2D images. Not 3D worlds and not even 'ancient' 1080p video! It's 2015 and processing still pictures is still such a PITA. Maybe that's why they call them 'stills'. Because it <strong>still </strong>sucks.<br /> Ridiculous...</p>
  22. <p>Richard, I don't know what you were looking at but the unit I posted the link to <strong>is </strong>a 3.5" reader. It appears to be identical to the XM-37U, except for the "clown makeup".<br> Notwithstanding, AFT does manufacture 5.25" USB3 internal reader/s.</p>
  23. <p>Thank you, Richard. Actually I did have my eye set on the AFT XM-37U but I guess I just didn't like the color coding. On second look, the product actually turned out to be better than the BitFenix reader. First of all, the issue with the bent pins in the CF port is addressed by AFT. And, as it turns out, this is a true USB3 reader (judging from the benchmarks). The vibe-killing, ugly bezel made me dig a little deeper and I found out that there's a pretty much identical product which is just plain-black.<br> http://www.atechflash.com/AtechFlash_Product_PRO-37U_gallery.htm<br> So, in the end, it looks like I'm going to order the AFT Pro-37U.</p>
  24. <p>Chuck: Thank you for pointing out the issue of the CF slots not being deep enough. I really wouldn't have thought about that one on my own. It's hard to tell from photos and videos on YouTube, but some CF slots appear to be slightly deeper than others. There's really no way around it; I'll just have to be mindful of the alignment every time I insert a CF card into the slot. Worst-case scenario: it's the cheap reader that gets busted, not a $$$ CF card with precious photos in it.</p> <p>Jeff: You have a very valid point with that 'statistics' approach. I like the way you laid it out.</p> <p>Eric: To me, an <strong>internal </strong>memory card reader is about convenience, nothing else. I'm not trying to set any kind of speed record.</p> <p>Thanks to everyone!<br /> <br />I've spent a few additional hours on research and came up with a crucial piece of information. As it turns out, not every USB3 card reader is actually wired to use the USB3 protocol. In most cases, when it says 'USB3 reader', what is actually implied is that the standard USB plug is compliant with USB3; it does NOT apply to the actual memory card reader's interface, which is USB2.<br /> I've selected a card reader for myself, and if no one has anything to say against it, it is probably what I'll order. Here's the link: http://www.bitfenix.com/global/en/products/accessories/usb3-card-reader</p>