
fhmillard

PhotoNet Pro

Posts posted by fhmillard

  1. Yes, check and/or calibrate your monitor as the first step in your workflow.

    It is my understanding that most desktop printers (1) and pro lab printing systems (2) use CMYK -- (1) RGB is converted by the printer driver to CMYK; and (2) professional printing systems use CMYK directly.(?)

     

    The NX issue might be related to lots of things -- "enhancements" in the Optimize Image menu, differences between your NX and the lab's color profiles ...

     

    Are your Picture Project and NX default color profiles the same?
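    To see why "RGB converted by the driver" and "CMYK directly" can disagree, here is the textbook, profile-free RGB-to-CMYK formula as a minimal Python sketch. The function name and the example values are mine; real printer drivers and lab systems use ICC color profiles rather than this naive math, which is one reason screen and print can differ.

```python
def rgb_to_cmyk(r, g, b):
    """Naive, profile-free RGB -> CMYK conversion (textbook formula).

    r, g, b are 0-255 channel values; returns (c, m, y, k) in 0.0-1.0.
    Real drivers apply ICC profiles instead of this simple math.
    """
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)  # pure black: all K, no CMY ink
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)          # black replaces the common gray component
    c = (1 - r_ - k) / (1 - k)
    m = (1 - g_ - k) / (1 - k)
    y = (1 - b_ - k) / (1 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(255, 0, 0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```

    Because the formula ignores ink behavior and paper, two systems doing "the same" conversion with different profiles will still produce visibly different prints.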

  2. My experience with D200 (EN-EL3e) batteries is that they do last somewhat longer than the D70's (EN-EL3). The EN-EL3e is 1500 mAh and the EN-EL3 is 1400 mAh, so the D200 battery delivers a bit more juice. Also, the EN-EL3e has three terminals instead of the EN-EL3's two: two for charging and power delivery, and one for the battery capacity and life measurements displayed in the D200 menu. I left my D200 on for three days and the battery still had 85% capacity, but the lens cap was on, so the meter was not draining power. A couple of days' shooting with a D70 is about right for me (I still use mine) at 10 to 20 shots/day, but I can shoot for another day at that rate with my D200. You might look into the dual battery grip for the D70, which uses two EN-EL3's serially (cheaper than a D200), so when one battery is used up the other takes over. I hope this helps.
  3. Assuming you have not already done this, try resubmitting one of your highly rated photos (high 5's or better) for critique. Wait about a month between submissions so that the photos do not appear in the critique forum at the same time; you might want to do this several times. Compare the rating distributions. Since PN provides weighted rating distributions, you would be able to (at least):

    1. Determine a "reference" rating distribution from all the ratings

    2. Determine how each individual rating distribution differs in shape.

    3. Use ANOVA to determine whether each individual rating set could have come from the same set of raters, or whether the same criteria were used for rating -- I do not know how to account for the same raters using different criteria for each submission. At least, using ANOVA, you would be able to determine whether each submission's ratings differ significantly from the total rating set.

     

    You could also do this with a low-rated photo.
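    The ANOVA test in step 3 can be sketched with standard-library Python. The rating lists below are hypothetical, and the function name is mine; one would compare the resulting F statistic against the critical value of the F(k-1, n-k) distribution at a chosen significance level (or use a library such as scipy.stats.f_oneway to get a p-value directly).

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic for k groups of ratings.

    A large F suggests the group means differ more than chance
    would explain, i.e. the rating sets may not come from the
    same population of raters/criteria.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # variation of individual ratings around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical rating sets from three resubmissions of one photo
sub1 = [6, 5, 6, 7, 6, 5]
sub2 = [5, 5, 6, 6, 5, 4]
sub3 = [6, 6, 7, 6, 6, 5]
print(round(one_way_anova([sub1, sub2, sub3]), 2))  # -> 2.28
```

    With 2 and 15 degrees of freedom, an F of 2.28 falls below the usual 5% critical value (about 3.68), so these hypothetical rating sets would not be judged significantly different.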

  4. Using the suggested metrics assumes little or no turbulence exists in the rating system's input -- the raters. Since ratings are "subjective", we operate without a reference standard for each category on the rating scale. Such a standard could be attained statistically, but I cannot think of any compelling reason to do so, since rating turbulence might be high due to the inclusion of new raters and changes in the individual rating criteria of current and older raters.

     

    I agree that the raters' PN link should be included, because I might want to rate them.
