Everything posted by steven_clark

  1. It's a good time to buy SSDs; the teething problems with TLC flash SSDs are mostly over by now. It's also a pretty good time to buy CPUs: 8-core consumer CPUs are now a reasonable thing thanks to AMD's newest offerings forcing Intel to compete again. Graphics are meh, but at least the run on them for mining nerd-scrip seems mostly on the wane, so prices should be acceptable. RAM is still in the middle of a giant bout of price fixing; the standing advice is don't buy any more than you need. That's your weather report for PC parts.
  2. Vuescan...is its own thing. Think of it like a piece of factory machinery: it does the steps it does very well and with minimum fuss once you've built it into your workflow, but it takes months or more to get used to all its little idiosyncrasies, and it's not a general-purpose piece of software like Photoshop or even Lightroom. Vuescan is geared very directly toward scanners, and scanners that output high bit depth at that. It's a great way to get strips of film scanned to files with the color mask pretty much perfectly removed and no information you care about lost to clipping. If you're adventurous you can even have it do your levels and get automatic color. It has camera raw features, but they're mostly an afterthought and lack nice camera-oriented features like really good vignetting correction. In your case I would process using camera raw software that can fix the vignetting, and/or stop down your lens to f/5.6 or f/8, and output to high-bit TIFF with as little contrast processing as possible. You could feed that into Vuescan afterward and try to use it to fix the colors, but once you've got all 3 channels with good unclipped histograms you can invert and do levels/curves yourself probably just as easily in the next tool down the line. As long as it's not Elements.
  3. There's a tool used in the disassembly of things like cell phones called a spudger that's meant for pulling apart cases like this. As long as you know the direction of the clips you can often slowly "unzip" a casing, prying it open clip by clip. You're still very likely to break at least one of the clips, but usually enough survive to put the item back together. That's assuming it's not ultrasonically welded, in which case super glue is about the closest replacement for that joint once broken.
  4. Cross contamination, maybe? Check that all the bands of the nozzle check are the colors they should be, paying special attention to the cyans and any inks next to magenta. Run a purge print and see if there's a color shift as you go down the page. Count the number of patches in the nozzle check: if the entire last ink in the chart is missing, people have been known not to notice it because the pattern looks complete at first glance.
  5. Makes sense to me; a shorter, hotter process would be harder to get right just by time-and-motion standards. If you don't have automation to help and you don't have every action down to reflex, you're going to have variation in when actions happen, and the shorter the process, the larger the proportion of it that variation represents. Looking at your images, and scanning color film myself, I'd recommend running your curves on the individual color channels to get your color balance right instead of trying to just white balance it. Also, negatives are already a very low contrast medium, and stand development would tend to reduce contrast even further. On one hand this probably reduces the chance of actually losing information to differing development rates; on the other it means you're pulling an image from an even smaller portion of the brightness range of your "scan". I don't know if it's enough to posterize, but you still might benefit from even a Pacific Image/Reflecta or Plustek film scanner if you find gaps in your histogram (and you can stand the time commitment of a scanning process).
  6. Some people have built digital backs for 35mm cameras. These homebrews are huge and bulky, but they do function. With full manufacturing it's probably possible to build a digital back system that only increases the thickness by a fraction of an inch, but the economics are just impossible. First there are the development and material costs. Even APS-C sensors are expensive parts, so every prototype is gonna be the better part of a grand just in the bill of materials, before assembly. At this kind of size we're talking the same many-layer PCB tech used for smartphones instead of tablets, so there's $500 per unit or something right there on top of whatever the sensor costs. Then you'd have to build light-tight back shells to adapt to cameras, and that probably means metal instead of plastic, which will require hydraulic press stamping tooling for each camera supported. So expect to see support for like the AE-1 and whatever the next 4 best-selling cameras are, and nothing else. And then the film camera market isn't even in maintenance, it's shrinking. The profit margins aren't even there for rescuing SLRs from thrift stores and refurbishing them. Most people shooting film cameras want to shoot film, and for shooting digital a device with an actual UI for shooting digital is a better experience. We've already had things like the Epson R-D1 series and the Olympus Pen, which probably take a huge chunk out of the retro-experience market. And mirrorless cameras take a huge chunk of the people who want to shoot old lenses, with price ranges from full-frame Sonys to cheap discarded EOS-Ms.
  7. I don't think there's anything optical that would prevent scanning larger transparencies with this scanner, but it's certainly possible for that to be the case. For example, if this is one of the models that uses a dual-focus system, maybe it hasn't been rigged up for the full width of the V370. And if the scanner does have a dual-focus system, the Epson software is only going to enable it when you scan with the transparency unit setting, which probably restricts the area to the known lit region. I'm assuming that if you have a scanner this inexpensive you probably didn't drop near its cost again on a Vuescan Pro license. Otherwise you might have to fluid mount to the scan bed directly (which is not necessarily sealed for it) to get at the on-glass focal point of the reflective scanning mode. If you're lucky (and maybe using single frames of 645) you might be able to use the existing light without severe dropoff; beyond that, I'm guessing the vignetting is probably so severe the image data in the dark part is completely unrecoverable. At least it's easier to build a backlight these days with LED lighting; separate fluorescent lamps tend to be unsynchronized with the scanning process, leading to banding as the light waxes and wanes from one millisecond to the next. This also makes PWM, the most common way to dim LED lights, probably a bad idea for a scanning backlight, because it would also pulse the light. And there's a catch: in reflective mode the calibration strip for the sensor on Epson scanners is underneath the glass, internal to the scanner, and so completely unrelated to the intensity of the light you're scanning with. You'll probably need to tune the backlight to match (a current-limiting potentiometer would be my approach) and hope the reflection of the scan head light off the emulsion itself isn't going to drown out your image.
By the time you've gone and done basic electrical and mechanical engineering for yourself to build a custom diffusion head, maybe it's time to question whether your time is worth enough that you should trade some money for a V550/V600, which has a light source wide enough for 120. The DigitaLIZA system doesn't seem like a bad way to get film holders, just keep some things in mind. In transparency mode an Epson scanner will calibrate using an open space immediately before the area being scanned, so give yourself that space; this may require making a spacing block to align the holder from the side of the scanner instead of the top or bottom. And it's not going to do anything to solve your lighting problems: people seem to mostly be using these holders to scan beyond the normal frames into the sprockets of 35mm and frame markings of 120 on much higher-end scanners, for a sort of photography "kitsch". In short, probably possible, just a pain in the rear.
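For the current-limiting approach mentioned above, the sizing math is just Ohm's law applied to the resistor's share of the supply voltage. A back-of-the-envelope sketch; the supply, forward-voltage, and current figures below are made-up examples, not from any particular scanner build:

```python
# Series resistor sizing for a flicker-free (non-PWM) LED backlight:
# the resistor drops whatever voltage the LED doesn't, at the chosen
# current. Example values are illustrative only.

def series_resistance(v_supply, v_forward, current_a):
    """Ohm's law: R = (Vsupply - Vled) / I, in ohms."""
    return (v_supply - v_forward) / current_a

def resistor_power(v_supply, v_forward, current_a):
    """Power dissipated in the resistor, for picking a wattage rating."""
    return (v_supply - v_forward) * current_a

# e.g. a 12 V supply driving a 3.0 V white LED at 20 mA needs about
# 450 ohms, dissipating roughly 0.18 W.
```

A potentiometer in place of the fixed resistor gives the tunable brightness the post suggests, at the cost of wasting the dropped voltage as heat.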
  8. Without an auto-winder, and with self-processed black and white, it's pretty easy: you don't open them. By sound and feel you wait until the film detaches from the takeup spool on rewinding and roll it in only a little bit more, leaving the leader of the film outside the can. Then, when you wind it onto the processing reel, you cut off the film outside the can at the end, with enough space left to tape new film onto the leftover stub. It's pretty easy with enough practice. If you do have an autowinder you could use one of those film leader extractor tools, but I never got the hang of those. Also, if you've got access to an (increasingly rare) dip-and-dunk pro film lab, you might convince them to do the same thing for color rolls (but it's probably a pain for their workflow).
  9. If you're going to be doing a very large number of images, you might consider a Vuescan license, if only because it can be rigged up for one-button scanning to help with workflows. Unfortunately it's got a learning curve that makes Epson Scan look positively easy, and it sounds like you're still picking up the scanning jargon. But if you're going to find yourself scanning some gigantic archive, building a workflow might be worth the learning.
  10. Looks like I was confusing this with the very similar Canolite D. I'm guessing auto was made the default because you'd so rarely want to override it. The principle is still the same as a Canonet with the Canolite, but with manual metering to let you screw things up by trying to outguess physics.
  11. These were paired with Canonet compact rangefinders (or apparently a selection of FD lenses?). They are manual flashes, but when paired with compatible systems the aperture on the lens is set automatically from the focus distance and the guide number. In many ways they're significantly better than an auto flash, since there's no autoexposure process to be fooled by the tone of the subject. A white subject exposes as white, a black subject exposes as black; the exposure is determined by basic math and the laws of physics. Given there's no light sensor, I don't even know what the auto setting does.
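The guide-number arithmetic those cameras do is simple enough to sketch. A minimal illustration; the guide number and distance values are made-up examples, and the units (and the flash's rated ISO) are assumed to match:

```python
# Guide-number flash exposure: f-number = guide number / subject distance.
# Both values must be in the same units (meters or feet), at the ISO the
# guide number is rated for.

def flash_aperture(guide_number, distance):
    """f-number for correct exposure at a given subject distance."""
    return guide_number / distance

# e.g. a GN 14 (meters, ISO 100) flash with the subject focused at 2.5 m
# wants f/5.6: flash_aperture(14, 2.5) -> 5.6
```

This is exactly why a coupled manual flash can't be fooled by subject tone: the aperture depends only on distance, not on any light reading.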
  12. I think this was mostly a quirk of the workflow for testing with a computer. The formatting procedures in a camera would obviously be written for the type of card encountered; otherwise there'd be a big performance disadvantage for any camera company whose format didn't trigger TRIM and/or garbage collection. On a PC, though, the system doesn't really know the card is an SD card (architecturally it's just another USB storage device like any other) and wouldn't know to use SD-specific features to force erasure of deleted blocks (instead of just having the file system forget the addresses, as with a hard disk). Hence the SD Association's tool. Using a tool to fully erase the drive to restore performance on later runs is a pretty standard procedure in SSD reviews, which are something of a specialty of Anandtech.
  13. Keep in mind many browsers do not enable color management by default on PC, or at least that used to be the case. Also, websites you upload images to often process them, stripping out metadata like the color profile and color space tags when resizing for thumbnails and the like. From what I can tell, tagging images with a color space is nowhere near as effective in web browsers as embedding the profile. Both sets of images look the same on my little Chromebook here, and some tests online indicate it has color management fully enabled; that suggests your images do retain color profiles (embedded or just tagged, I'm not sure) and possibly your browser doesn't have color management enabled. You should probably not depend on everyone having a color-managed browser yet, and should continue to save all web-bound images to sRGB. Maybe make separate pages with high-gamut images if you need to show off your colors.
  14. Have you checked the cable linking the lid to the scanner body? I don't know if it's detachable on that model, but if it is and it's come loose, it would look a lot like this.
  15. The first piece of advice for people having trouble scanning negatives with Epson Scan is to go into the Professional mode settings and disable auto thumbnailing if it isn't working for you. Making "raw" film scans whose settings you can tweak later, and other workflow-oriented features, are the kind of thing people buy Vuescan for. Full auto is always bad; I never use it. Reversing in Photoshop/Lightroom can be done, but if your scanning software is at all decent (and Epson's, in Professional mode, is at least effective) it can probably do a better job of the reversal than you can. Let the scan software know which kind of film you're using: as you discovered, it needs to know whether the unexposed regions are light or dark in order to effectively find the edges of the frames. As a rule I try to scan so that the contrast range of the film is covered by one exposure, without the highlights clipping to full brightness or the shadows clipping to full black, scan and output at high bit depth (16/48-bit), and then round out the top and bottom in curves (an s-curve) to increase contrast in my editing software, often after cropping to exclude the frame from the histogram.
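The invert-then-s-curve step can be sketched on bare 16-bit sample values. A pure-Python toy, with a smoothstep standing in for a hand-drawn curve; a real workflow would do this on image arrays in an editor, and all function names here are mine:

```python
# Toy model of the workflow above: invert a 16-bit negative sample into
# a positive, then apply a gentle s-curve to round out the highlights
# and shadows. Operates on flat lists of samples for illustration.

MAX16 = 65535

def invert(sample):
    """Reverse a 16-bit negative sample into a positive."""
    return MAX16 - sample

def s_curve(sample):
    """Smoothstep-style contrast curve: 3t^2 - 2t^3 over [0, 1]."""
    t = sample / MAX16
    return round((3 * t * t - 2 * t * t * t) * MAX16)

def process(samples):
    return [s_curve(invert(s)) for s in samples]
```

The point of scanning at 16 bits first is that this curve stretches the midtones; at 8 bits the stretch would open visible gaps in the histogram.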
  16. The 5th revision of my anaglyph process seems to work in general. It looks like red-cyan glasses are the right kind. There's enough less ghosting that I can afford to have some color in my image. I still expect it to break down on really hot images, but I haven't checked yet.
  17. Library Side-Path

    The first example of version 5 of my stereo anaglyph process. I finally found the right kind of glasses.
  18. I have a pair of red/cyan glasses on order. If they work better, I might reduce crosstalk enough to allow some color. As far as I can tell, neither of those how-tos solves a problem I haven't already solved. The big unsolved problem is getting output from an inkjet printer to work with the color filters of the glasses. It's not a color problem but a spectrophotometric problem (I think?), which is why I've been disabling color management: I don't need the color accurate, I need the color channels to not use each other's inks. Overall it tends to work much better on a screen (where red is red and green is green) than in print (where less red is more of an aqua than a cyan, even though less green is almost exactly magenta). If I could replace the printer driver with software that drove the inks itself and let me address the channels separately (a RIP made for the job), then I could eventually just find the right shades of filter gel and it would work perfectly. In theory.
  19. In terms of raw mechanics you could use public-key encryption: pre-distribute your public key. With that public key anyone can encrypt messages to you, but nobody can decrypt them without the private key. You can likewise send messages to someone using their public key. Therein lies the problem: you can only communicate with someone who has already given you their public key somehow. If the (public-only) keys aren't posted at some trusted central repository, then you need a two-way communication medium and some system for verifying you're talking with the person you intend and not an eavesdropper. This introduces the need for certificates or some similar system of establishing trust through an existing trusted authority. All this overhead is part of why practically nobody uses encrypted email: the pool of reachable people is just so small unless you arrange it ahead of time, in which case you could just distribute one-time pads and have perfect encryption.
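The "anyone can encrypt, only the key-holder can decrypt" mechanics can be shown with the classic textbook small-number RSA example (p=61, q=53). This is a toy for illustration only; real systems use enormous primes, padding, and a vetted library, never hand-rolled arithmetic:

```python
# Toy RSA with the standard textbook parameters, purely to illustrate
# the asymmetry described above. NOT usable for actual secrecy.

p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

def encrypt(m, pub=(e, n)):
    """Anyone holding the public key (e, n) can do this."""
    return pow(m, pub[0], pub[1])

def decrypt(c, priv=(d, n)):
    """Only the holder of the private exponent d can do this."""
    return pow(c, priv[0], priv[1])

# encrypt(65) -> 2790, and decrypt(2790) -> 65
```

Note that nothing here authenticates who sent the message, which is exactly the gap certificates exist to fill.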
  20. Another wheel reinvented for the process: all white or all black for the spare channel (blue, in my case) in an anaglyph isn't the best choice. To get a monochrome anaglyph with color fringing that keeps the most dynamic range, you need a composite of the left and right channels. A lighten composite, in my approach, makes an image with cyan and magenta fringes. However, given my inability to get cyan and magenta to print entirely with cyan or magenta ink (RGB cyan is really more of an aqua?), I still often get too much crosstalk between eyes with these images and the stereo effect doesn't quite work. If the blue channel is a darken composite instead, the fringing is red and green and the tints tend more toward yellow, which reads as white through both color filters. There's still some crosstalk, but it's faint enough to see through. The process is still only functional with monochrome images, and uses a pair of red-green glasses meant for testing color blindness rather than the red-cyan or magenta-green glasses usually used for anaglyphs. There's also a lot of rote "magic" steps left in the process, so I consider it fragile at best. But it's kinda working now.
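The two blue-channel composites described above come down to a per-pixel min or max. A minimal sketch on single 8-bit grayscale values (real use would run over whole image arrays in an editor; the function name is mine):

```python
# Assemble one anaglyph pixel from left/right eye grayscale values.
# Left eye drives red, right eye drives green; the spare blue channel
# is a composite of the two. "lighten" = max (cyan/magenta fringes),
# "darken" = min (red/green fringes, tints pulled toward yellow).

def anaglyph_pixel(left, right, blue_mode="darken"):
    blue = min(left, right) if blue_mode == "darken" else max(left, right)
    return (left, right, blue)  # (R, G, B)

# Where the eyes disagree (a fringe), darken mode yields pure red or
# green: anaglyph_pixel(200, 50) -> (200, 50, 50), i.e. mostly red.
```

Where the eyes agree, both modes reduce to a neutral gray, which is why the choice only shows up in the fringes.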
  21. An update: I printed out a nozzle check to get pure ink colors. The red lens is actually perfect: yellow and magenta are invisible and cyan looks black. The green lens almost works: yellow is again invisible and magenta looks black, but the cyan is too dark. It's not as dark as the magenta or black, but it's still dim. In brighter light the effect is less pronounced and the ghost image is less dark than the signal image, so the print still works, but it's not optimal. As for getting the print working, here are some workflow takeaways: 1. Turn off color management; the LUT breaks things or something. 2. Experiment until you find printer settings that work; in my case that's "automatic" color in the Epson driver. 3. Black out your blue channel for better contrast. 4. Use curves on red and green to lighten the heck out of everything. 5. If things are light enough, either pair of glasses will work. 6. If I can find a better shade of filter for the magenta channel, it will solve a LOT of problems. 7. If there's such a thing as a free RIP that might let me disable inks, that would be great too.
  22. I recently came into possession of a Stereo Realist, so I've been shooting 3D pictures and scanning them. I've found most people can NOT comfortably cross their eyes enough to get 3D out of side-by-side prints, so I've been trying to build a workflow for making and viewing anaglyphs instead. My first thought was a pair of red-green glasses, my reasoning being that cyan is the subtraction of red and magenta the subtraction of green. This works perfectly on a monitor, especially if I leave the blue channel at 0. It only almost-sorta works in print, though. Green is perfect: it seems great at ignoring yellow and cyan and seeing magenta. Red, on the other hand, doesn't work; it seems to see cyan and magenta with similar densities. I tried to compensate by ordering a more standard pair of green-magenta glasses, and not only did they not improve things, they made on-screen images stop working. So, what's next? Would yellow-blue work (I'd hate to be depending on yellow ink to form density, but desperate times)? Or is there such a thing as a cheap RIP that might let me really separate the inks better (looking at a purge sheet I just printed, it looks like red CAN ignore magenta if I can get it printed right)?
  23. I think archival is more-or-less enhanced matte. It's been a long time since I printed to Glossy Photo Paper, but I'm pretty sure it's older than concerns about image stability. I very much doubt it's acid free (but I don't think the RC papers are either). The surface coating is a lot less stiff than the RC papers in my mind, so maybe it's less scuff resistant.
  24. I know in the A-1 at least the shutter button can be unscrewed and disassembled. Maybe the plunger in it got knocked out of place, or the contacts need cleaning.
  25. 400 for parts and labor on a nonstandard card sounds reasonable to me. 200 for a hard disk replacement may be high, unless they're imaging the old system onto the new disk, in which case there's a lot of waiting time with systems or people stuck on the task; you can probably get better results on that one by substituting your own labor. Otherwise, yes, a Core 2 Duo is a bit underpowered to be worth repair; in the PC world they've been practically giving them away as refurbs for years. That MacBook Air with an SSD in it would probably be more responsive than the iMac much of the time, and with a Thunderbolt port there should really be nothing preventing you from hooking up a bigger screen/keyboard/mouse to use it as a desktop, which might be a better use of similar amounts of money.