Bill C

Members
  • Posts

    1,719
  • Joined

  • Last visited

Everything posted by Bill C

  1. Hi, I've explained it before, but the depth of the explanation really depends on how much understanding the "explainee" has of general image-forming optics. AND how much effort they are willing to put into it. If you really wanna get some understanding, try doing this... take a camera lens, by itself, and set it in some sort of stand. For example, perhaps a folded towel on your kitchen table. Most photographers realize that the lens can form an image on the film/sensor inside of the camera. But if the lens is removed from the camera it will still project such an image. If there is nothing there, such as a piece of film, or a white card, etc., to intercept such an image, it is known as an "aerial image." An aerial image cannot generally be seen UNLESS you get directly behind it and preferably view it with some sort of magnifier. If you know roughly where the image is, and roughly where your magnifier is focused, it should be easy enough for you to see it through the magnifier. (You will only see it against the clear aperture of the camera lens.) This is essentially what these cell phone digital "camera backs" are doing - they are photographing an aerial image formed by the film camera's lens. But... the size of such an image is severely limited. To understand this part you probably need to sketch some light rays coming from the lens. Most simply, draw some straight lines coming from an object (say a tree, or candle, or cereal box, or whatever suits your fancy) and passing through a pinhole lens. Next, sketch in a representation of your magnifier and eye. You can probably realize that all of the image-forming rays that miss your magnifier cannot possibly be seen. Right? So this explains why you can only see a small part of the aerial image. Finally, the part that most photographers won't be able to understand... the use of a so-called field lens - it increases the field of view. 
If you made a sketch of the light rays earlier, now put in a large diameter lens where the image is ideally formed. This lens will ideally bend the peripheral rays - those that missed your magnifier - back where the magnifier can see them. Because the field lens is placed on the image plane it essentially doesn't change magnification, etc. - it primarily enlarges the field of view. That's the whole gist of the thing, but you need to have a rudimentary grasp of ray-tracing to understand it.
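The field-lens sketch described above can also be done as arithmetic. Below is a minimal paraxial (thin-lens) ray trace with invented illustrative numbers: an aerial image 100 mm behind the camera lens, a magnifier 150 mm behind the image, and a field lens chosen to image the camera lens onto the magnifier (1/f = 1/100 + 1/150, so f = 60 mm). The traced ray heads to an off-axis image point; without the field lens it sails past a small magnifier, with it the ray comes back through the magnifier's aperture.

```python
# Paraxial ray-trace sketch of the field-lens idea. All distances and focal
# lengths are made-up illustrative values, not from any real lens.

def propagate(y, u, d):
    """Advance a ray of height y (mm) and slope u by a distance d (mm)."""
    return y + u * d, u

def refract(y, u, f):
    """Bend a ray at a thin lens of focal length f (mm)."""
    return y, u - y / f

# Chief ray to an off-axis image point 15 mm from the axis: it leaves the
# center of the camera lens (y = 0) with slope 0.15.
y, u = propagate(0.0, 0.15, 100.0)      # at the aerial image plane: y = 15 mm

# Without a field lens the ray keeps climbing and misses a small magnifier:
y_no_field, _ = propagate(y, u, 150.0)  # 37.5 mm off-axis at the magnifier

# With a 60 mm field lens right at the image plane, the same ray is bent
# back through the magnifier's aperture:
y_field, _ = propagate(*refract(y, u, 60.0), 150.0)  # 0.0 mm: dead center

print(y_no_field, y_field)
```

Note the field lens acts at the image plane, where the ray height (and hence the image itself) is unchanged; only the slope is altered, which is exactly why it widens the visible field without changing magnification.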
  2. No, I'm pretty doubtful. Vaporware? Are you quite certain? I actually sat down with a couple of those guys at the PMA show, and saw the so-called prototypes. At the place where I worked we were interested in a digital conversion, of sorts, to our long-roll studio cameras. We had an appointment with them to do some test shots the next morning, before the show opened. But later in the day they canceled, saying that the second prototype had now also failed. As I recall, the internet rumor mill later had it that their earlier demos had been faked. Were they? I dunno. I've never seen convincing evidence to support this. Could've been, I guess; I'm just not convinced one way or the other. They did also present a technical paper at an IS&T conference, going through the technological issues. I didn't see anything that would have prevented success, within reason. Fwiw "we" did actually end up building our own ground-up digital camera. Initially we hoped to have it as an interchangeable magazine, giving us the option of shooting either digital or film by just swapping magazines. But the physical layout of the sensor package was not conducive to this. (We were not willing to tap into only the central area of the frame as the Silicon Film people, or whatever they were called, were doing.) I really suspect that what ultimately killed the Silicon Film idea was the rapid drop in the cost of digital cameras. At that time high-quality digital backs, in a nominal 35mm film frame size, were selling for around $25,000 US. But in only a couple of years DSLR cameras, same sensor size, dropped below $10,000 or so. When one's competition is dropping prices, the writing seems to be on the wall. What once might have been a lucrative product loses a lot of its glitter. Anyway, this is my guess as to why things didn't work out.
Regarding this current kickstarter project, it would seem to be essentially a field lens assembly inside of a box, with the image being photographed by a camera phone. Maybe they have an improved sort of field lens, or whatever, but I just don't see any viable future for it, other than just a fun gimmick.
  3. Probably yes, I think. In the Kodak world they liked to use the term "thermal dye transfer," although "dye sub" was common slang. In the Sony world, sales literature commonly referred to "Sony's dye sublimation technology." My printer won its class the first year I made the DIMA shoot-out prints; they gave me one of the award plaques, on which DIMA clearly printed "DYE SUBLIMATION" as part of the category name (I'm blacking out the specifics, for no good reason). Anyway, people can call it what they wish, but anyone in the industry will immediately understand what you mean if you say "dye sub."
  4. If someone told me that they had a "thermal transfer" printer, first thing I'd ask is for a clarification - I'd think they were talking about those "wax crayon-type" printers. So I think it's probably better to say either the full term, "thermal dye transfer," or just plain dye-sub; everyone in the industry will know what you mean (I think). Regarding the high-gloss, in the portrait business this is not generally seen as specifically desirable. Not really as undesirable either. It's just about the only surface finish. Now, some of the makers have used a technology where they dither the overcoat layer, and this can make a really nice appearance. The now-defunct Sony UP DR80 printer did a beautiful job of this. But I would not personally use such function for long-term prints.
  5. Yeah, I vaguely remember those. I don't remember the exact model but in the early 90s we put another cut-sheet model in about a thousand of our studios. I was the lab QC manager at the time, only indirectly involved with the studio division. But we had a full-time QC inspector checking all the photo equipment coming through our in-house camera shop (long story, but it took the blame off the camera shop when a studio received non-working equipment). There was some variability in the output, especially when replacing print heads, so we had to come up with a way to quantify the output characteristics so that the repair techs knew what to shoot for. Basically we gave them a test image which they would print, measure with a densitometer, and then modify some print head voltage value according to the results. At a later time we converted to all Sony UPD-70 machines (that's one model I DO remember). All these machines were SLOW (relative to the modern machines), taking something like 3 minutes to print an 8x10". Plus a handful of seconds to transmit the image over a SCSI connection. Then there was the incessant troubleshooting by phone related to the use of "terminators" and DIP switch settings on each machine (in a busy studio we could daisy-chain the printers, requiring different DIP switch settings, removing a terminator, etc.). This brings back memories of a lot of headaches. But at the time 3-minute color prints were something of a minor miracle. We used 'em to print 6-up proof sheets from our portrait sessions. The "real" portraits were shot on film, but there was a simultaneous, under flash, video grab that was used for the proofs. Most people probably don't remember this, but in the good old days it was not unusual to find that the "best" shots had someone in the middle of a blink. So these simultaneous flash grabs meant that the customer KNEW they would have good expressions before leaving the studio.
So no more disappointment when they came back a week or two later to pick up their prints. Back in a time when they WANTED prints.
  6. Hi, I've never used a Selphy, but at one time I had done trials with just about all the 8x10"-capable pro-grade units (prior to the current DNP models). Those units are capable of very high-quality color, but... a high-quality ICC profile is the key. I would hazard a guess that a better ICC profile is what you really need for your Selphy. But how to get it is the question. Ed suggests making your own. But really good dye sub profiles are not that easy to make. (I've made hundreds of them, and in fact, I used to make the "competition" prints for a particular little-known printer company back when the PMA used to show results of "printer shoot-outs.") The main problem is that the results on one end of the print head can be affected by the tones printed prior to it. What I used to do was to make 4 sets of the profile targets, each rotated 90 degrees. So every test patch has 4 different readings that are more or less "averaged" together. Profiles made from the averaged data can be very good. But if you don't do this, the profile may give some odd (unfavorable) results. I would say that the pro-grade dye sub printers are not for everyone. There are some comments in the thread linked below: Any Experience with Mitsubishi CP-M1 Dye Sublimation Printer
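The four-rotations averaging described above amounts to simple per-patch bookkeeping. Here is a minimal sketch: print the same profiling target four times, each rotated 90 degrees, read every patch, then average the four readings for each patch before feeding them to the profiling software. The patch IDs and values below are invented (a single number per patch for brevity; real readings would be Lab triples).

```python
# Sketch of averaging patch readings from 4 rotated profiling targets.
# The variation between runs stands in for the print-head "history" effect
# that the rotations are meant to average out. All values are invented.

def average_rotated_readings(readings):
    """readings: list of dicts, each mapping patch_id -> measured value."""
    patches = readings[0].keys()
    return {p: sum(r[p] for r in readings) / len(readings) for p in patches}

runs = [{"A1": 52.0, "A2": 20.0},
        {"A1": 50.0, "A2": 21.0},
        {"A1": 49.0, "A2": 19.0},
        {"A1": 53.0, "A2": 20.0}]
print(average_rotated_readings(runs))  # {'A1': 51.0, 'A2': 20.0}
```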
  7. Yeah, there was a time when you pretty much HAD to do that with some of the older Kodak pro color neg films. CPS (a C-22 process film) was the first one I knew of. This was due to a lower contrast. (ISO film speeds are based on an exposure point just slightly above "film base + fog," but the "density aim points," gray card, flesh highlights, etc., are much higher up the characteristic curve.) If the film has a low-ish contrast it's not possible to reach the aim flesh density values without increasing the camera exposure. When the C-41 process came out, VPS II film also had a somewhat low contrast (I'm going from memory), so increased exposure was still needed. VPS III started out the same, but in the first several years of its production run its contrast was tweaked upwards slightly. After this, the exposure increase was no longer necessary. (I know how it went cuz my department used to do sensitometric screening on every new emulsion we got; not that many since the full emulsion runs were reserved for us. So we did see the entire production lifespan of VPS III.) Going on, Portra 160NC carried on from there, and was dead-on-the-money with respect to the "box speed." Still, with those films it doesn't hurt to have a slightly increased exposure, mainly as a safety factor. Especially when shooting in non-ideal light, meaning not daylight-equivalent.
  8. Most of what Daniel Milnor, the guy in the video, seems to talk about is transparency films. Something I have almost no experience with. Maybe he's right, maybe not, I dunno. But when he gets onto Portra films, all of 'em, being "not true to box speed" (about 4:45 in), he's wrong. How do I know? A somewhat tech rundown in the next three paragraphs... Well, the outfit where I worked used to run several miles per day of Portra 160 from a large portrait outfit (I spent a number of years as the lab QC manager, with 5 or 6 people in my department). We did extensive testing on every film/paper combination being considered for use. This included actual shooting tests on a half-dozen or so subjects, encompassing a wide range of complexions and hair color. We shot these over a wide range of exposures, under studio lighting with professional electronic flash gear. The exposure range was from roughly 2 f-stops under to maybe 5 stops over, in half-stop increments. Processed along with "process control strips" in well-controlled machines, so we knew the film processing was on the money. Next question is, how does one determine "proper exposure?" Well, anyone who did commercial processing in the days of optical printing knew that Kodak (and other makers) supplied "printer setup negatives," covering a moderate exposure range, including one that was defined as "normal." Additionally, Kodak film data sheets gave density aim values for portraits (red-filter density ranges for both skin tones and gray cards). So the so-called "normal" exposures were in agreement. Further, when we used known-calibrated Minolta incident meters with our studio cameras, the exposures came out right on the money (within a tenth of a stop, or so). Finally, what is the result of exposures being off? OK, we made optical prints from all of our test exposures, color balancing the flesh highlights to match to within 1cc of color (this is a very tight tolerance).
We'd evaluate 8x10" prints in a color booth, maybe a set of 16x20" prints in a more limited selection. The results: in general, from one stop under to maybe 3 or 4 stops over, the color was nearly a dead match. That is to say, a professional color corrector could not tell them apart. Beyond a stop of underexposure the darkest parts of the print ("black") were beginning to get "grainy," a consequence of the "higher-sensitivity" portion of each color layer becoming dominant. Anyway, it doesn't hurt Portra to be overexposed somewhat, so it probably does little harm when people tell you to rate it below box speed. Now, if you shoot in off-color lighting (Portra is daylight balanced) it's useful to increase exposure, to make sure your slowest color layer gets "up on the curve" a bit. But it's just not right to say that it's "not true to box speed." The guy in the video says he worked for Kodak Professional, and when he gave talks, he was "the Kodak guy," so I wonder how he could be so off on the Portra films. If you look online for his name you can find some brief bios. For example... Daniel Milnor: Photographing On His Own Terms - The Leica Camera Blog Elsewhere it looks like he took a job with Kodak, in LA, for nearly 5 years; see link... Sunday Focus: Daniel Milnor Anyway, take his video(s) as you will. I think he's great as a fast-talking promoter. I always say that everyone's personal experience is perfectly valid, but they should be careful when they try to generalize beyond it. And I think it's a healthy thing to carry a lot of skepticism about whoever one sees as an "expert" in certain areas. It took me too long to learn this.
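The bracketing series described above is easy to lay out as numbers: half-stop steps from 2 stops under to 5 stops over "normal," each step's relative exposure factor doubling per full stop (1.0 = normal). A quick sketch:

```python
# The half-stop test-exposure series: -2.0 to +5.0 stops relative to normal.
stops = [s / 2 for s in range(-4, 11)]   # 15 exposures in 0.5-stop steps
factors = [2 ** s for s in stops]        # relative exposure (1.0 = normal)

print(len(stops))                # number of exposures in the series
print(factors[0], factors[-1])   # factors at the two extremes
```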
  9. Hi, actually it's possible in many cases, and not really that difficult. But it's not a method that most photographers would expect. Some years back I did some trials to see how closely I could estimate on portraits. (It was sort of an experiment to see how well I could judge "perspective" in a photo.) As I recall I could often estimate focal length to within 20 or 30%. The basic idea is that there is a "correct viewing distance" for photos, where the viewer essentially duplicates the angle of view of the original lens. And at this correct viewing distance there can be a strong sense of "realism." This is something that was once well-known in photography, maybe 1960s or earlier, not so much today. Anyway, I wanted to test how good my judgment of finding this viewing distance was on some random portraits. This was more curiosity about how large the "sweet spot" was, etc., than about the focal length. So I viewed a number of images, using one eye, moving forward and back until the image just looked "right." Then, whatever my viewing angle was should essentially match the original camera viewing angle. Which it mostly did, to a reasonable approximation. If anyone wants some further understanding of the "correct perspective" principles, I recommend Rudolf Kingslake's book, "Optics in Photography." The pertinent parts can be read as a preview in Google Books.
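The "correct viewing distance" rule makes the estimation concrete: a print viewed from distance D = f × m (lens focal length times the enlargement from the original frame to the print) reproduces the camera's angle of view, so finding the distance where perspective looks "right" lets you back out the focal length. The numbers below are purely illustrative.

```python
# Back out a focal-length estimate from the "correct viewing distance."
# All specific values here are invented for illustration.

def estimate_focal_length(viewing_distance_mm, print_width_mm, frame_width_mm):
    m = print_width_mm / frame_width_mm   # enlargement factor, frame -> print
    return viewing_distance_mm / m        # f = D / m

# An 8x10" print (254 mm wide) from a 36 mm-wide full-frame negative that
# looks "right" from about 600 mm suggests roughly an 85 mm lens:
print(round(estimate_focal_length(600, 254, 36)))  # -> 85
```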
  10. I'm not a Nikon guy, and can't say for sure. But I'd certainly expect it WOULD have moire problems. Earlier you were thinking that the original (very high-quality?) lens might still have moire issues at f/8 (even with its diffraction). So it looks like you are suggesting that a (presumably) lesser-quality lens at f/2.8 cannot produce a smaller spot of focused light than the f/8 diffraction limit. I wouldn't argue against the older lens having less contrast and lower resolution out towards the edges of the frame, but... failure to deliver smaller spot sizes near the center of the frame than an f/8 diffraction-limited lens? I have a real hard time believing that.
  11. Well, your other main option is to find out exactly what magnification of a specific fabric causes the problem, and then avoid that. I'm not saying that you have to sit down and calculate things out with different lenses. Rather, just go from your experience with how much something fills the frame. With your problem shirt, it happens when the guy's body is filling roughly 1/2 to 2/3 of the horizontal frame, so just avoid shooting that subject in that range. Head-and-shoulder shots, or 3/4 body (and longer) shots are probably OK. If you need in-between, crop in on the wider shot. Of course, you are still subject to problems when another fabric shows up. It just kinda depends on what you're willing to deal with. As I said earlier, I once spent time screening potential cameras for a studio chain outfit. Virtually every shot had some sort of fabric in it. So I started out screening for moire. If any camera showed moire like this it was immediately out of the running; we just didn't wanna deal with it. (There were cameras available that were completely immune to the pinpoint Oxford fabric moire.) If you do a different sort of work, well, you probably weigh your decisions differently.
  12. Definitely I could be wrong about this. It's largely a guess. Here's how I came up with f/8. I found the pixel pitch for this specific camera, then estimated the "blur circle" diameter needed to prevent aliasing. ("Blur circle" being the diameter of the smallest detail point.) Since Shun rarely (?) has encountered moire I presume that the camera setup, with blur filter (aka AA filter), is very near to "good enough" with respect to preventing moire. So if the "blur circle" can be very slightly enlarged it should be possible to eliminate very nearly ALL moire. I estimate that stopping down from f/2.8 to f/8 will increase the blur circle diameter by about 20%, and am completely GUESSING that this will be enough. Alternatively, going all the way to f/11 would increase the blur circle diameter by about 33%, which makes things even more favorable. (Note that these are not general rules - they're specific to a 6 micron pixel pitch and a nearly "good enough" blur filter.) In the real world I'd probably bet a beer at the local tavern that f/8 is good enough. Meaning that I think it's better than a 50:50 likelihood. Going down to f/11 is a significantly better likelihood. And so on. But Shun would probably be the one that would have to determine this, using the same test subject.
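One plausible back-of-the-envelope model for estimates like these (my assumption, not necessarily how the 20%/33% figures above were derived) is to take the diffraction (Airy) spot diameter, 2.44 × wavelength × f-number, and add it in quadrature to a fixed sensor/AA-filter blur. The 12-micron fixed blur below is a pure guess for a 6-micron-pitch sensor; with it, this crude model gives somewhat larger growth than the figures above, which mostly shows how sensitive such estimates are to the assumed fixed blur.

```python
# Quadrature blur model: total = sqrt(fixed_blur^2 + airy^2).
# The 12-micron fixed blur is an assumed placeholder, not a measured value.

WAVELENGTH_UM = 0.55   # green light, in microns

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

def total_blur_um(f_number, fixed_blur_um=12.0):
    return (fixed_blur_um ** 2 + airy_diameter_um(f_number) ** 2) ** 0.5

base = total_blur_um(2.8)
for n in (5.6, 8, 11):
    print(n, round(total_blur_um(n) / base, 2))  # blur growth vs. f/2.8
```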
  13. Hi, I don't really see a specific question, but there seems to be a lot of misunderstanding about what's going on. First, this is not really about a specific lens, it's about any high-quality lens at f/2.8 (or wider), AND a specific magnification of that shirt. Nearly 20 years ago I was doing some pretty substantial testing of this sort of thing, on a number of digital cameras that were being considered for the place where I worked. We did mass-market portrait work, so obviously every type of fabric was gonna appear sooner or later, and there were a variety of magnifications, so we were especially concerned about moire (aliasing). And because of the very high volume, virtually no moire would be acceptable - the costs of hand-reworking are too high. What I largely ended up doing was to use a blue pinpoint Oxford shirt as the base test target. (If a camera was gonna produce moire, this was nearly infallible as a test target.) What I would do is to first measure the thread pattern of the fabric (using a magnifier with a measuring reticle), then get the pixel pitch of the sensor (from specs). Then I'd estimate how large the shirt must be, roughly, in the frame, in order for the thread pattern to roughly match the pixel pitch. Then, a series of test shots scattered around that size range to see if moire would show up. (This was done in the aperture range that we might foreseeably use.) In fact, the tests were not unlike Shun's, except probably better controlled with respect to magnification. As a note, having the shirt wrap around the body, or having wavy folds in it, is helpful in that this gives some variety in the spacing (aka frequency) of the thread pattern. This means that you don't need an exact magnification; just get close and the variability in the wrapping and tilt of the fabric will take care of the rest. From what Shun is seeing, I'd guess that the moire is just showing up in marginal situations.
Many photographers today realize that stopping their lens down can limit the maximum resolving power (read up on Airy disc diameter, related to f-number). So in this case stopping down the lens a bit will most likely prevent this moire completely. Here I'm estimating that stopping down to about f/8 will likely get rid of any possible moire. F/5.6, maybe, maybe not. (I'm presuming your camera has about 24 MP on a "full-size" sensor, pixel pitch around 6 microns.) But if you carry out some more testing you can probably establish a breakpoint which will be "safe" for any potential moire situation. This should work for any lens - only the f-number is significant (provided the lens is high quality). Fwiw the pixel pitch varies diagonally, meaning that it is possible to eliminate aliasing on vertical and horizontal architectural components, yet still get it on diagonal patterns, as in a conventional layout the diagonal pixel pitch is a little longer. And, on a Bayer-pattern array, the red and blue pixels are spaced wider and more sparsely. So if you photographed a detailed pattern in relatively "pure" red or blue colors, moire could still show up.
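The screening arithmetic described above can be sketched in a few lines: the worst case for moire is when the fabric's thread pitch, scaled by the shooting magnification, lands near the sensor's pixel pitch. The 0.2 mm thread pitch below is an assumed value; measure the actual fabric with a reticle, as described.

```python
# Find the scene width (how much of the subject fills the frame) at which a
# fabric's thread pattern lands on the sensor at roughly one thread per
# pixel - the worst case for aliasing. Thread pitch is an assumed value.

def moire_frame_width_mm(thread_pitch_mm, pixel_pitch_mm, sensor_width_mm=36.0):
    magnification = pixel_pitch_mm / thread_pitch_mm   # image size / object size
    return sensor_width_mm / magnification             # scene width filling frame

# 0.2 mm threads, 6-micron pixels, full-frame (36 mm wide) sensor:
print(moire_frame_width_mm(0.2, 0.006))  # ~1200 mm of scene across the frame
```

Because the real danger zone is a band around this magnification (the wrapped, tilted fabric varies the effective pitch), test shots should be scattered around this frame width rather than taken at exactly one distance.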
  14. Just as a demo, here's what happens using the sequence I described. (If color is too weak, probably cuz I desaturated too much in the last step.) The major change, in my view, was due to the green channel not having any values below about 30 or 35, or so. I used a "Levels" control to drag it down near zero. The white balance is essentially unchanged, as evidenced in the rectangular white things on the right. As I said before, I have no idea why the green values didn't go lower - it seems like they should have. Below is the original, the first one in post 22, above.
  15. Just for fun I took a closer look at the first image you posted. Here's an odd thing - in RGB the green channel never gets lower than about 35 or 38 in pixel values (but red and blue DO get down close to 0). If you use something like Photoshop "Levels" or "Curves," remapping green pixel values so that about 35 maps down to 0, that overly green foliage becomes more normal. From that point the tree shadows still look too weak to me. So I would pull down the overall "Curve" in the low areas. For example, pull a pixel value of about 25 down to about 12 or so. I actually made this part of an overall "S" curve which crosses at about 75 (the upper part bulges slightly up). The color then becomes much too strong, so desaturate to taste. I think you'd find these changes much improved over the original. But the real issue, aside from the poor tonal response, seems to be the failure of the green channel to go low enough. I have no idea why this would happen; it really shouldn't (this is not a result of white balance settings). Maybe there's some setup information in your camera manual?
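The "Levels" move described above is just linear arithmetic on one channel: map a green input of 35 to 0 (and 255 to 255), leaving red and blue untouched. A minimal sketch with NumPy, assuming an 8-bit RGB array:

```python
# Drag the green channel's black point: input 35 -> 0, 255 -> 255, linear.
# Red and blue are left alone, so white balance is essentially unchanged.
import numpy as np

def pull_green_black_point(img, black_in=35):
    out = img.astype(np.float64)
    g = out[..., 1]
    out[..., 1] = np.clip((g - black_in) * 255.0 / (255.0 - black_in), 0, 255)
    return out.round().astype(np.uint8)

# A tiny 1x2 test image: a "black" pixel whose green is stuck at 35, and white.
img = np.array([[[10, 35, 10], [255, 255, 255]]], dtype=np.uint8)
print(pull_green_black_point(img)[0, 0, 1])  # green now reaches 0
```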
  16. I don't really have any specific suggestions for online resources, but I've seen enough of Jochen's posts, in the past, to say that his suggestions are more than likely pretty good. I AM a big proponent of the book Light Science and Magic, but it may be a bit too deep for a novice - I dunno. It could be a useful reference if she tries to understand how the light sources "work," but again, I dunno. Fwiw I've spent my entire adult work life in photography, largely high-volume portrait work, although I eventually moved primarily into the technical side. Regarding the book, one of my brothers recently (meaning in the past 15 or 20 years) started getting involved in landscape photography. As a side interest to his normal job. (He's had prints for sale in 3 or 4 different galleries, continuously in one for over a dozen years.) Anyway, I thought he could learn something from the book, but... he just doesn't seem to get it. So anyway, I'm not sure it's a book for everyone. What I would personally suggest, for someone who intends to be serious about portrait work, is to learn to select good shooting locations. The stronger light should preferably be lighting the frontal part of the subject's face, and I personally prefer the background to be darker. There could be all sorts of variations on this, but the point is to get a "feel" for the light. And fwiw, the human eye tends to minimize the difference in light - what looks like only a slight difference to a person makes a much larger difference in a photo. Once they have picked a location, I'd recommend manually setting both a "custom white balance" (see the camera manual) and exposure (a combination of ISO speed, shutter speed, and aperture; these are all in the camera manual). Then finally, preferably, mount the camera on a tripod and use a remote release. This allows the photographer to interact with the subject, which to me is one of the key things in portrait work.
This will entail a great deal of reframing the camera and perhaps resetting focus, depending on how you work. And yes, this IS a lot of work, going back and forth from viewfinder to subject. It's helpful to have the subject "nailed" in place, perhaps sitting on a bench, or on a log, or leaning over a fence, or that sort of thing. But this is about the only way to have serious interaction with the subject, to manipulate expressions and "posing," etc. (Unless you have an assistant operating the camera, framing per your direction, and keeping focus.) The things I am suggesting definitely take work - much more difficult than just following someone around, with one's eye glued to the viewfinder. But it's a far more efficient way of getting good-quality portraits. You set it up and make it happen as opposed to waiting for "lucky" shots. If you're shooting only adults a handheld camera may be more or less OK, but you do interrupt the interaction and "rapport" when your eye is in the viewfinder. Practice and practice is the key. The ideal situation is to have someone with experience coaching you, but lacking this one has to find their own way. I suspect that your family-member shooter will not pay much attention to this - hardly anyone does. The only thing that seems to really convince people is when they see you breeze right through something that they have struggled with. Up until that time they tend to attribute another's success to having "better subjects," and that sort of thing. So I don't generally bother trying to explain over the internet - it's mostly a waste of time. If you want a little background on my views, here's an older thread where Fred G (aka Norma Desmond) and I discuss "teaching" someone how to shoot portraits. Fred thinks that it is possible to "teach" such a skill through text and images, and that if someone doesn't "learn" in this situation it is a failure of the student, not of the teacher.
Whereas I see the "text and images" method as inadequate, where a "failure of the student" is a result of an inadequate teaching method. Anyway, although I have taught a handful of people certain "techniques," I see this as a near-impossible task without in-person, hands-on demonstration. Partially because the "student" likely won't make the full commitment necessary unless they believe in it. And a half-hearted attempt doesn't generally work. The link is below... Film revival? Best of luck to your budding portrait shooter.
  17. Hi, I'm mainly a still-photo guy, going to paper prints, so the requirements won't be quite the same. But the first thing that sticks out to me is a very flat tonal response. In fact, I first thought that your sample shots were taken on an overcast day. But looking closer I see that the trees, etc., have long shadows with hard edges. Yet the shadows are very weak, almost as though a too-strong fill-flash came from the camera. To my way of thinking these shadows should be getting much darker than they are. I didn't examine the images any deeper than this, but this is the first place I would start with these images - finding why the tonal response is so weak. Maybe there is a certain video profile needed, in the same way that ICC profiles are used with still cameras. You might try shooting the same scene with both your drone camera and one of your Canons to see the tonal difference. (If in doubt as to the "aim," just use a standard jpg from the Canon.) I wouldn't bother looking into color issues too much at this point (although I'm always a fan of doing a manual white balance when possible). When you start increasing contrast in an RGB system (via "curves," etc.) the color saturation starts climbing with it. Thus my suggestion to deal with the tonal response first. That's how I'd approach it anyway.
  18. Fwiw I doubt that the room layout has much effect on those little hot-shoe flashes. They get good power efficiency by putting most of their light into a narrow zone that the lens is looking at. So what I'm suggesting is that there is not much extra light to be spilled onto the walls of the room, reflective or not. (No, I haven't tried this to see, but I have, in the past, had a look at the flash illumination patterns at various zoom settings; the more recent "dedicated" hot-shoe flashes set their zoom settings to match the camera lens.) A note in defense of the ISO standards... in the past I've used a number of ANSI standards (the ISO standards seem to be essentially the same, or at least very similar). And I've found them to be pretty good, without loopholes for the most part. I suspect what is going on here is not an ISO issue, but rather people presuming that their flash unit guide numbers conform to ISO standards. So here's a question for the users of overrated flash units... does the manual for your flash say it uses ISO-based guide numbers, or in any way conform to ISO?
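The ISO guide-number arithmetic behind that question is simple: a guide number (in meters, at a stated ISO, usually 100) is just distance times f-number, and it scales with the square root of the ISO speed. An easy honesty check is to compute the aperture the maker's GN predicts at a known distance and compare it against a flash-meter reading. The specific GN and distance below are illustrative.

```python
# Guide-number relationships: GN = distance x f-number (at a stated ISO).

def aperture_from_gn(guide_number_m, distance_m):
    return guide_number_m / distance_m

def gn_at_iso(gn_iso100, iso):
    # Guide numbers scale with the square root of the ISO speed.
    return gn_iso100 * (iso / 100.0) ** 0.5

print(aperture_from_gn(40, 5))   # a GN-40 (m, ISO 100) flash at 5 m predicts f/8
print(gn_at_iso(40, 400))        # the same flash is GN 80 at ISO 400
```

If the metered aperture falls well short of the predicted one, the flash's stated guide number is optimistic, regardless of what standard the spec sheet does or doesn't cite.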
  19. Personally I'm pretty doubtful, but I really have very little knowledge in this area. But my understanding is that alpha particles have very low penetrating power, often quoted as being easily stopped by a sheet of paper, etc. And it's very unlikely that you would have a sensor without at least a cover glass over it. I recall an old post here where the author wondered about the effect of cosmic rays on CMOS sensors. There were a couple of links included in the replies (I didn't bother reading them, though). See this link: Long-term radiation damage to CMOS sensors? But back to my first post, I doubt it's gonna be very helpful unless you have somehow put a strong static charge on the sensor cover plate. And I really dunno how you would do that. My experience with such ionizers has been primarily with film. Where I worked we used to use a couple dozen such devices under yearly licenses from the US NRC. They were very effective at knocking down static charges. Something I would occasionally do to demonstrate (to the doubters) was to wipe a short strip of film with a cloth, then pass it over an ashtray (yep, back in the days when cigarette smoking was very common in offices). A lot of ash would leap up and stick to the film. Then I could move one of the smaller ionizing devices to within about 6 inches of the film. Within one or two seconds the bulk of the ash would just drop off. But... a large amount of tiny dust particles would stay attached. Now, the "charge" situation with film is very different, given the makeup of the gelatin, so it likely doesn't correlate with how a glass cover plate acts, but the tiniest particles just didn't wanna let go. That's a different story, though; I'm just making the point that the bulk charges are neutralized in just a couple of seconds, so any exposure beyond that is kind of pointless. And given the protective cover glass, along with a likely blur filter package, I'm pretty skeptical that sensor damage would be done.
Fwiw, if you're in a room with say, 50% relative humidity, any static charge is gonna likely bleed off in 2 or 3 minutes, to the same extent that the air ionizer would do. Just my guesses and opinions. Maybe rodeo_joe has some insight; I have the impression that he had worked in a fab plant at one time.
  20. Pretty doubtful, in my opinion. I don't know the exact mechanism by which dust attaches to a sensor, but once it is attached it is nearly impossible to remove the tiniest particles solely with a blower. Larger particles - no problem. But the very, very tiny ones, not much success. There is a principle in fluid dynamics that air flowing through a tube, for example, has a speed gradient where it slows down near the walls of the tube. And right at the walls themselves there is a very narrow zone where the air barely moves. So if the dust particles are very tiny they can sit in this protected zone. I know this sounds like baloney, but anyone who has used the high-pressure spray at a car wash has probably seen how this works. You have some very fine road grime down near the bottom that simply will not be blasted off by the high-pressure stream. It's not stuck tightly - you could even use your fingertip to write your name in it. But just a quick wipe with a wet cloth or brush takes it right off. So I see this as similar to the situation of fine dust on a sensor. I think you're gonna have to physically dislodge the very tiny particles. I'm from an outfit that had a large number of company-owned cameras. In our internal camera repair shop we had a full-time tech, sometimes two, cleaning sensors. We pretty much found wet cleaning the best way to go. It does take a certain technique to get it just right. There's another method, using what they call a "stamp tool," which has a sort of tacky material on the end of a handle. You press the stamp end gently against the sensor, and its tacky nature is strong enough to take all the particles away from the sensor. Then the stamp is pressed against a cleaning pad which is even tackier, and is able to clean the stamp tool for further use. It worked well on dust. But our techs found that a certain amount of oily particles were present on some sensors. So even after the stamp tool, a test photo might show some oily debris on some sensors.
Which then had to be wet cleaned anyway. So they decided it was better to just start out with the full wet cleaning and be done with it.
  21. Here's an oldie, but it fits the genre:
  22. Hi, tough luck. I don't know the camera and its interlocks, etc., but the first thing I'd try is to do a "time exposure" with lens removed so that you can see if there's anything hanging down in the top part of the film frame. Looks to me like a transparent sort of material partially jammed in the film gate. Maybe the same thing is somehow causing the center of the film to bulge out. If the camera doesn't allow operation with the lens off then do the same test with lens aperture wide open; use a small flashlight (aka "torch") to look inside.