
D850 underexposing in Aperture priority mode


jason_min


Here are additional photos from the D850 in Aperture priority mode. If the scene is not dark, the D850 is either normal or overexposed compared to the D610, which exposes normally in all situations.

How do you account for this D850 Aperture priority mode behavior?

 

Thank you very much.

 

[Six attached sample images]

 

The first shot was blue. The last shot was yellow. I think the matrix metering system wants to make yellow lighter than gray and blue darker than gray.



I think it applies to manual mode too

Yes it does.

 

I was more meaning for you to see if it was already set up with a bias.

 

I bought a 2nd hand D3S and it had -0.7EV Exp Correction set 'internally'...and it doesn't show anywhere visibly. Thus Nikon's new warning when you play with B5 or B7.


The first shot was blue. The last shot was yellow. I think the matrix metering system wants to make yellow lighter than gray and blue darker than gray.

I can believe that for Matrix. Makes for a nicer sandy beach shot against a blue sky!

 

Trying to out-think Matrix Metering is where madness lies.....;)

 

Stick with neutral grey; it takes an uncontrollable variable out of the mix.


Yes, there are still too many variables - colours, angles of view, etc. To pinpoint the problem, you would still need to photograph an evenly illuminated plain white or grey surface, de-focused, with the same lens at the same zoom and aperture settings, and not in unpredictable matrix mode.

Thank you very much for the reply. It is more or less a 0.7 to 1 stop underexposure. Can I ask why the focal length matters? I thought it did not make any difference.

Focal length matters because the two images cover somewhat different parts of the sky, and your sky is not a totally evenly lit, cloudless blue sky. There is white cloud in different patterns in the two sample images. That can affect the exposure a bit.

 

If you are that picky, I would just capture a sample image or two and use the histogram to fine tune your exposure manually.


Jason, thanks for starting a stimulating conversation. For me, matrix metering has always been a leap of faith, but one I believe has been a good one - certainly better than bungee jumping. When I shot film it was manual everything, and I spot-metered Velvia at ISO 40 rather than its native 50 because Velvia did not like being underexposed. The only time I ever tested a body against the sky was to meter the northern sky using sunny sixteen, or with a calibrated Wallace ExpoDisc pointed in the vicinity of the sun on a sunny day.

With matrix metering I assume there will be variability built in, both by the nature of a process I don't understand well and between different camera bodies. The matrix meter can also, I assume, account for different colours in a differential way. I really like the phrase "apply magic", because I have never really believed that matrix exposes a scene evenly, as compared to spot metering, which exposes for a point in the scene and tries to make that point 18%. Since I started using auto ISO for action shooting, my leap of faith has become bigger while my belief that I can predict exposure consistently has become the reciprocal. But if you were to ask me whether I am satisfied with the results, I would have to say yes.

When I used to shoot underwater with film, I would use underwater guide numbers with estimated distance (I could do math in my head back then). Nowadays I adjust the flashes manually to balance the lighting and check for blinkies. There is considerably less discipline in the process. I still occasionally spot meter manually, but more and more I have surrendered to automation.

If you want to check your system to see whether the problem is in the camera and not in where you are pointing it, I would recommend eliminating all variables and using a calibrated Wallace ExpoDisc, which in the film days was used for 18% grey metering but nowadays is more often used for white balance. They are not that expensive, and I still keep one in my kit the way I would keep a 35mm film extractor tool around to save the day. Stay frosty.

Jason, please see the explanation of matrix metering from the Nikon web site, cut and pasted below. It explains that matrix metering not only handles simple exposure - the push toward 18% grey that has always been implicit in the design of modern camera meters - but also takes into account distance and colour, based on a preset database, before determining "the best exposure." That is perhaps why, any time you point your camera in a different direction or at a different colour, no matter how subtle the change, things shift. I think this accounts for the Aperture priority behaviour that concerns you. We look at a cerulean blue sky but seldom pay attention to the subtle differences in both colour and brightness that appear from north to south. I have a D3s as well as a D850, and I assume their matrix databases may differ in a not-so-subtle way; I have always accepted the differences Nikon has programmed into the cameras, the technical aspects of which are well beyond my simple mind. Good hunting.

3D Color Matrix Metering II

 

 

Matrix metering evaluates multiple segments of a scene to determine the best exposure by essentially splitting the scene into sections, evaluating either 420-segments or 1,005 segments, depending on the Nikon D-SLR in use.

 

The 3D Color Matrix Meter II takes into account the scene's contrast and brightness, the subject's distance (via a D- or G-type NIKKOR lens), the color of the subject within the scene and RGB color values in every section of the scene. 3D Color Matrix Metering II also uses special exposure-evaluation algorithms, optimized for digital imaging, that detect highlight areas. The meter then accesses a database of over 30,000 actual images to determine the best exposure for the scene. Once the camera receives the scene data, its powerful microcomputer and the database work together to provide the finest automatic exposure control available.


D610 with the same setting

You are not using the same settings - the WB setting, for example, is different between the D850 and the D610. And who knows how much the JPG settings differ between the two - in both cases the VI (Vivid) Picture Control is used, with the one on the D610 obviously altered.

 

The differences seem to be mostly 1/3 of a stop - and your histograms don't have the same shape, indicating that you are not metering the same scene and most likely not processing it the same way.

 

From my experience, I would not expect a pro Nikon and a consumer Nikon camera of different vintage to show the same metering behavior anyway.


3D Color Matrix Metering II

 

Matrix metering evaluates multiple segments of a scene to determine the best exposure by essentially splitting the scene into sections, evaluating either 420-segments or 1,005 segments, depending on the Nikon D-SLR in use.

 

And, for reference, those are old figures. The F5 has a 1005-segment meter! (Although so did the D1-D3 series, D200, D300 and D700.) It's 2016 segments in the D610, and 180,000 on the D850 - which is why the D850 has a usable face-detect metering mode, although it probably helps more with the colour tracking for autofocus.

 

The 3D Color Matrix Meter II takes into account the scene's contrast and brightness, the subject's distance (via a D- or G-type NIKKOR lens), the color of the subject within the scene and RGB color values in every section of the scene. 3D Color Matrix Metering II also uses special exposure-evaluation algorithms, optimized for digital imaging, that detect highlight areas. The meter then accesses a database of over 30,000 actual images to determine the best exposure for the scene. Once the camera receives the scene data, its powerful microcomputer and the database work together to provide the finest automatic exposure control available.

 

I've always taken "accesses a database of over 30,000 actual images" with a large pinch of salt. The F5 claimed 30,000 images, and while I'm not going to undersell the F5 too much, fitting 30,000 images into its "database" (of 1996-vintage memory) is unlikely to be what actually happens. My assumption from the wording has always been that Nikon train a neural net with 30,000 images (hopefully they've updated the database since) to provide a mapping from the input intensities to a "good exposure". Feeding meter data into a trained neural net and getting an exposure out of the end is then a relatively lightweight operation that can be done in real time with 1996-vintage hardware.

The general downside of neural networks is that it's quite hard to prove what you've trained them to do (although the classic example is disappointingly apocryphal). They generally work well, so matrix metering does often give you a good value - but actually working out when it's going to go wrong is very complicated. You might find the exposure off by a mile because it's recognised a window as looking a bit like a face, for example, where "looks a bit like a face" is as unpredictable as AI deep dreams. And, while I've never had it confirmed, this would explain why I've never seen Nikon detail their metering algorithms in any way.
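If that guess is anywhere near right, the asymmetry is the point: all the heavy lifting happens offline, and the in-camera step is just a few small matrix multiplies. Here's a minimal Python/NumPy sketch of that inference step - the layer sizes, weights and the single "EV offset" output are all my own invented placeholders, not anything Nikon has published:

```python
import numpy as np

# Hypothetical sizes: a 1,005-segment meter feeding a small hidden layer.
N_SEGMENTS = 1005
HIDDEN = 32

rng = np.random.default_rng(0)

# In the hypothesis above these weights would come from offline training on
# ~30,000 reference images; here they are just random placeholders.
W1 = rng.standard_normal((HIDDEN, N_SEGMENTS)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal(HIDDEN) * 0.1
b2 = 0.0

def exposure_offset(segments: np.ndarray) -> float:
    """Forward pass: two small matrix multiplies and a non-linearity.

    Input: per-segment luminance readings (normalised 0..1).
    Output: an exposure correction in EV, relative to a plain average.
    """
    h = np.tanh(W1 @ segments + b1)   # hidden layer
    return float(W2 @ h + b2)         # single output value

# A flat, mid-grey scene as a dummy meter reading.
scene = np.full(N_SEGMENTS, 0.5)
print(exposure_offset(scene))
```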

 

Or there could be a simple and consistent algorithm that has nothing to do with this, that I've never seen documented.

 

But it does frustrate me that a genuine ETTR mode, which should be predictable, is missing. I don't mind missing a shot occasionally, but I generally wish it was my fault when I do...


Um. Actually, now I look, I have to admit there is some documentation on what the original Nikon FA 5-segment matrix meter did. Other than knowing that face detection is an optional part of modern metering, I concede that it's possible that the data from the large number of segments is fed into a 35-year-old algorithm - the documentation does mention "tens of thousands of images". I stand by "it might be a neural net" - but I'm guessing how the modern version works (partly based on how I'd be inclined to solve the problem...), and I'm prepared to be told I'm talking gibberish. Again.

Andrew, you're scaring me. "Neural net!!!" You know, some of us watched the Terminator movies and actually paid attention.

Someone once told me that my brain was like a neural ring.

 

Is that a compliment?

 

Um. Possibly not? Although I'm going to tell that to some friends of mine that work in the area. :)

 

Neural nets scare me as well, but it's got a lot more to do with not trusting algorithms that I can't analyse than with the risk of them taking over the world. An AI can take over the world for far simpler reasons. Strategically wanting people to stop paying attention to Skynet would, of course, explain a lot about some of the more recent Terminator films (although the Sarah Connor Chronicles shouldn't have been cancelled); I guess I'll rest easier if the next one is better. I actually saw T2 introduced by James Cameron (whom, after watching Future Man, I now mentally think of in SIGOURN-E's voice as "diabolican Canadian, James Cameron", "groundbreaking multi-hyphenate, James Cameron" or "certified door expert, James Cameron"...) at a conference once, as part of a memorial tribute to Stan Winston.

 

Also, my wife, who is a veterinarian, gets very confused whenever I talk about AI. It means something very different in her "field" (think animal husbandry). Whether Nikon was aware of this when they decided on their lens nomenclature is an open question.


Thank you for the reply. Why would the lens make a difference?

Because different lenses have different amounts of vignetting at full aperture, where the metering is done when using the optical viewfinder.

 

Vignetting effectively gives an image that's brighter in the centre and dimmer at the edges, and matrix metering tries to even up the exposure across the frame. With a non-CPU or unrecognised lens, you'll almost certainly get an image that's over-bright in the centre.
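To see why uncorrected vignetting skews things, here's a toy model in Python (my own illustration with a made-up cos⁴-style falloff, not Nikon's lens data): the same evenly lit scene meters progressively darker toward the edges, so anything that tries to "even up" the frame will push the centre brighter.

```python
import numpy as np

# Toy model: an evenly lit grey scene seen through a lens with cos^4-style falloff.
h, w = 24, 36
y, x = np.mgrid[0:h, 0:w]
r = np.hypot((y - h / 2) / (h / 2), (x - w / 2) / (w / 2)) / np.sqrt(2)
falloff = np.cos(np.clip(r, 0, 1) * np.pi / 4) ** 4   # illustrative, not measured

scene = 0.5 * np.ones((h, w))          # true scene luminance
metered = scene * falloff              # what the meter "sees" wide open

print("true mean:", scene.mean())
print("metered mean:", metered.mean())             # lower: edges look dark
print("metered centre:", metered[h // 2, w // 2])  # close to the true value
# A meter that compensates for the apparently dark edges will overexpose
# the centre unless it knows the lens's falloff profile.
```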

 

As a long time Nikon DSLR user, you just have to accept that Nikon's viewfinder metering basically sucks!

 

I believe this is because Nikon's development team doesn't have any real photographers on it. Years ago, I read that the developer of the metering system was a rank amateur who simply liked to take snapshots of his family. What a great qualification for designing a 'sophisticated' metering system!

 

Matrix metering has improved with more recent models, but it's not perfect. So you just have to apply EC to matrix metering when necessary, or use a different metering regimen. Live View metering, OTOH, is pretty darned good - but it still gets fooled by small bright or dark subjects in a contrasting frame, and that's where spot metering or a handheld incident meter comes in.

 

Also, you do realise that you have to 'blind' the viewfinder when your eye isn't covering it? Otherwise stray light gets in through the eyepiece and messes up the exposure.

Edited by rodeo_joe|1

After 35 years in IT, I'll add a little to the discussion of the "30,000 images" claim from Nikon... There are not 30,000 images stored in the camera - at least not what we would think of as images. The camera has 30,000 samples, each containing information on 1,005 segments from an assortment of images. This is for the F5, the last camera I studied this on. So these "images" are the equivalent of roughly 1K-pixel images: very primitive for viewing, but very informative in terms of optimizing exposure.

 

Going back to the first two images posted, the exposure difference was only 1/3 stop (1/100 vs. 1/125). It may be even less, as I believe the camera has an infinitely variable shutter and displays the nearest nominal shutter speed in the EXIF data - these could have been exposures of 1/115 and 1/117! The image that was given slightly less exposure appears to have a slightly larger, brighter patch of cloud, which could easily explain the difference.
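For reference, the gap between those two nominal speeds works out to about a third of a stop; a quick check (assuming the EXIF values are the rounded nominal speeds):

```python
import math

# Exposure difference in stops between two shutter speeds.
def stop_difference(t_a: float, t_b: float) -> float:
    return math.log2(t_a / t_b)

print(stop_difference(1 / 100, 1 / 125))   # ~0.32 stops between the nominal values
print(stop_difference(1 / 115, 1 / 117))   # ~0.02 stops if the real speeds were these
```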

 

I don't see any problem with the original images from either camera. The two Matrix meters use different sets of sample images and will always likely produce slightly different exposures.

 

This really seems like a search for a problem that doesn't exist.


After 35 years in IT, I'll add a little to the discussion of the "30,000 images" claim from Nikon... There are not 30,000 images stored in the camera - at least not what we would think of as images. The camera has 30,000 samples, each containing information on 1,005 segments from an assortment of images. This is for the F5, the last camera I studied this on. So these "images" are the equivalent of roughly 1K-pixel images: very primitive for viewing, but very informative in terms of optimizing exposure.

 

And a picture is worth a thousand words (assuming the word holds the meter value). Still, the old FA documentation I linked, talking about the five-segment meter, says "By using the data analysed in Phase IV, the given scene is classified into one of 20 computer-simulated scene patterns (drawn from a visual assessment of tens of thousands of pictures, computer analysis of the relationship between brightness patterns, optimum contrast, etc., and human evaluation)[...]"

 

I would be very surprised if the camera had 30,000 low-res images held inside it (more so with a modern 180k pixel sensor). It feels strongly like a classification problem, where you feed in a meter image and the output is the desired meter setting - the kind of thing that neural nets are known for being good at. There are other ways of going about it (including trying to break the image into regions and meter based on contrast, tone, discarding excess regions, etc.) which kind of tally with the description given on the FA - but you could get the same result by capturing 30,000 images and noting what the exposure should have been, then training a network with the result. I'm not a machine learning expert, and maybe there's a fundamental reason the approach is inappropriate, but comparing meter settings against thousands of images one at a time is not likely to be an efficient approach - whereas evaluating a neural net is just a big matrix multiply. The training system needs to have the meter values (or possibly just a captured image with exposure information and guidance from a human to tweak the result, at some point) - but there's a big asymmetry in the amount of work required to train a neural net and the amount required to apply one. The meter has to respond quickly, so the Expeed in a modern body doesn't have very long to do whatever it's going to do with 180,000 values.

 

That said, I have absolutely no internal knowledge of how Nikon do it. So Ed, if you do, please educate me - I'll be very interested!

 

Joe: Well, you persuaded me that if I'm doing it wrong, I'm not the only one. :) Looking up the FA information, Ken gushes about how infallible the matrix is. Of course, he also gushes about how perfect a centre-weighted meter is...


The 8008 had, I believe, the same five-segment meter as the FA. It worked very well . . .

 

It's much more likely that 30,000 low-res samples are stored and searched than that there is a neural network built into the camera electronics. With the correct storage and search, you can find what you want among 30,000 records in 15 compares or fewer; in simpler cases, the problem is actually trivial using binary search techniques. Twenty "computer-simulated scene patterns" would be plenty for a five-segment meter. A 1,005-segment meter would require far more - somewhere around 30,000?
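For what it's worth, the fifteen-compare figure falls straight out of a balanced binary search, since 2^15 = 32,768 > 30,000. A minimal sketch of that search over single-value keys (nothing camera-specific):

```python
import math
from bisect import bisect_left

records = list(range(30000))               # 30,000 sorted single-value keys
print(math.ceil(math.log2(len(records))))  # 15 compares suffice in the worst case

# bisect_left performs that binary search; finding any key examines
# at most ~15 of the 30,000 entries.
print(bisect_left(records, 12345))
```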


Well, you can select between 30,000 options with a tree-based search in fifteen compares of single values. Fifteen compares of 1,005 values each, let alone 180,000, is a bit more prohibitive.

 

All a neural net does is provide a relatively efficient and compressed "correct storage and search" - but I don't deny there are other ways to do it. I don't really see a reason to store the images, though.

 

Basic neural networks have been around for a very long time, and they gained popularity in the '80s, which would be about the right time frame for Nikon to pick the idea up. The latest deep neural nets are a bit more modern, but I don't think they're necessary for a basic matrix meter. They all have the property of being relatively lightweight to evaluate, which is why I figured they were plausible for F5-vintage hardware.

 

Anyway, I'm prepared to believe neural nets are nothing to do with it, and my guess was off. I'm not sure why it seemed so unlikely, though? If the metering is deterministic and there's a logical algorithm behind it which would allow us to predict metering failures, I'd love to know about it.


So, you've gathered this group of imaging professionals - or not - to view and assess the exposure of 10 to 30 thousand images. What then?

 

(If the internet is anything to go by, 99.9% of those pictures would have been of cats anyway!)

 

I suspect that Nikon's claims of their metering system 'searching and comparing' such a huge catalogue of images is simple marketing BS. Or, once you get beyond a few dozen scenarios, it all blurs into random noise.

 

Whatever. It ain't working Nikon. Does the acronym KISS not exist in Japanese?


:-) I've got to admit that it's generally more or less fine, and I'm sure it beats centre-weighted metering on average - it just fails often enough to bother me. (I'd be more confident if there were a raw histogram.) If the basic operation hasn't changed much since the days when it had no way to tell between slide film and a wide-gamut negative, we could argue for an update.

 

I'd settle for getting cats right. My old tabby with a white nose and paws perpetually had blow-outs if I didn't compensate (my black cat with a white petticoat was worse). My phone's no better, and I think its limited exposure compensation really doesn't play properly with flash.

 

I don't want them to get rid of the meter. I just want some ETTR variants too.

 

Ed - sorry if I sounded cantankerous. I meant to impart that I wasn't sure why you found a neural net implausible; if I'm missing a reason one would be inappropriate, I'd like to learn it.


Well, you can select between 30,000 options with a tree-based search in fifteen compares of single values. Fifteen compares of 1,005 values each, let alone 180,000, is a bit more prohibitive.

 

All a neural net does is provide a relatively efficient and compressed "correct storage and search" - but I don't deny there are other ways to do it. I don't really see a reason to store the images, though.

 

If the data is organized correctly, there's no need to compare all 1,005 values before rejecting an image for use. Then, even 180,000 isn't prohibitive. A neural net is preferred when the goal is to have the system "learn" or reprogram itself when exposed to new data. This clearly isn't done since it requires a firmware update to change the processing.
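One standard way to avoid touching all 1,005 values for every candidate is early abandonment: stop accumulating a distance as soon as it can no longer beat the best match found so far. A small sketch of that general technique (my illustration only - no claim that this is what the camera actually does):

```python
import numpy as np

def nearest_pattern(reading, patterns):
    """Find the stored pattern closest to the current meter reading,
    abandoning each candidate as soon as it cannot beat the best so far."""
    best_idx, best_dist = -1, float("inf")
    for i, p in enumerate(patterns):
        dist = 0.0
        for a, b in zip(reading, p):
            dist += (a - b) ** 2
            if dist >= best_dist:       # early abandon: no need to finish this one
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx

rng = np.random.default_rng(1)
patterns = rng.random((2000, 1005))     # toy database of stored segment patterns
reading = rng.random(1005)              # current meter reading
print(nearest_pattern(reading, patterns))
```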

 

One of the keys to remember here is that not all images should be evaluated to an 18% grey. That was the goal when all the meter could do was look at the luminance of some area of the image or at the amount of light falling on a dome. The idea behind matrix metering is that it can ignore small areas of bright light or small areas of darkness instead of averaging them in. It can guess that the bright area at the top of the image is sky, and that because the subject is only ten feet away the sky is less important than the rest of the image - and many, many other variables can be taken into account.
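To make the "ignore small bright areas" idea concrete, here is a toy outlier-resistant average in Python - a deliberately crude weighting I made up for illustration, far simpler than whatever real matrix metering does:

```python
import numpy as np

def robust_exposure_target(segments: np.ndarray) -> float:
    """Average the scene while downweighting outlier segments
    (e.g. a small patch of bright sky or a deep shadow)."""
    median = np.median(segments)
    spread = np.median(np.abs(segments - median)) + 1e-6
    # Segments far from the typical brightness contribute less.
    weights = 1.0 / (1.0 + ((segments - median) / (3 * spread)) ** 2)
    return float(np.average(segments, weights=weights))

scene = np.full(100, 0.18)      # mostly mid-toned subject
scene[:5] = 0.95                # small bright area, e.g. sky at the top
print(scene.mean())                     # plain average is pulled up by the sky
print(robust_exposure_target(scene))    # weighted value stays near the subject
```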


Neural nets scare me as well, but it's got a lot more to do with not trusting algorithms that I can't analyse...

 

I don't know much about neural nets or their internal mechanisms, but I did have occasion to work with one a bit. At the chain outfit where I worked, we did our own internal "image correction" software, mainly as an exposure-correction trial. (None of the commercial versions was good enough.)

 

We ended up using a neural net, not that it was necessary, but more out of deference to one of the software guys who was dying to try out the concept in the real world (he was relatively "new," perhaps a half-dozen years out of school).

 

Essentially, our initial method was to collect about 6 or 8 parameters from an image, then use these to determine, more or less, how far away from "ideal" the exposure was. As a portrait specialist, one of the first steps was to locate potential flesh tones. Next we tried to judge how light or dark the complexion was. One possible method was to look for any "white" (clothing) in the scene; if found then the flesh tones could be compared against this. (If a large difference then we presume a dark complexion, and consequently it should be printed somewhat lighter than otherwise indicated.) Of course the details were more complicated than this, but basically there was a certain flow of logic, with various alternate methods of trying to derive certain information. We used an image-processing guy within our group to shake everything out.

 

When it came to integrating it into our lab production system, we borrowed one of the systems guys who could connect our methods into the standard production workflow. He is the one who wanted to try out a neural net, and he was practically begging for the chance. Since he was willing to do all of the setup work and handle problems later (and the IT group agreed to make him available), we were willing to go along with him.

 

He described the general operation of the neural net like so: we would train the system by feeding it data for, say, several hundred images. For each image we would feed the neural net the handful of collected parameters mentioned previously, along with the final printing data actually used (as called out by the professional color correctors, and actually printed). He said that in the training phase these neural nets would essentially look for relationships between those parameters and the print data, and the "learned" relationships would later be used for future images. But we would not be able to know what relationships the system was using - if we were not happy with the results, we'd have to either add parameters or do more training.
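In modern terms that training loop might look something like the sketch below: a handful of per-image parameters in, the correction the human correctors actually applied out, and a small neural-network regressor fitted in between. The data and parameter meanings are invented; this is a stand-in for the workflow being described, not the original lab code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical training data: 800 production images, 8 parameters each
# (e.g. estimated flesh-tone density, presence of white clothing, ...),
# and the density correction the human correctors actually applied.
X = rng.random((800, 8))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * rng.standard_normal(800)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)                          # "training": find the relationships

new_image_params = rng.random((1, 8))
print(net.predict(new_image_params))   # "inference": suggest a correction
```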

 

Note: I wasn't quite ready to post, so more to follow...


Yes; if the system is a neural net, it's certainly (well, probably) not learning in the field. But there are many cases where training is separated from inference (applying the trained model to data), to the extent that even dedicated hardware for doing this varies: training often requires higher precision and performance, and desktop graphics cards are getting optimised to do this, but mobile phones are acquiring dedicated hardware to do the inference step that can be faster, uses lower precision, and is much less power-hungry. Neural nets have been applied a lot recently for noise reduction, for example - NVIDIA have been doing this for their ray tracing graphics, and Google have, I believe, used it for denoising images. Topaz Labs will sell you something that does it in software, and in my brief experience it's very effective, except when it's weird. :-)

 

A trained neural net is just a weighting function that takes a lot of inputs and maps to some number of outputs. There are other ways to do it, but you can certainly train a neural network by showing it 30,000 images (...of meter readings) and 30,000 "correct" exposure values, and telling it to correlate the two. The result would likely be surprisingly good in a lot of tricky conditions, but because nobody coded in "do this under these circumstances" in an explicit way, there's a sporting chance it will occasionally be weird and there would be no helpful documentation to explain why or how this could happen. Which is roughly the way I think of Nikon's metering. To be fair, I have exactly the same impression of Microsoft Word, and I don't think it uses a neural network for most of its behaviour (I occasionally wonder about any intelligence, never mind the artificial kind).

 

Agreed that the 18% grey thing isn't the goal - and I think that's where Jason was confused. I don't know whether the "many other variables" are explicit (as they seem to have been for the FA) or implicit (in a neural net) - but I would like another mode where I can control them. The matrix, even in highlight priority, sometimes leaves quite large areas of the scene blown out. Nikon don't seem to be aware of the dynamic range of their own sensors, or that there are times when a sensor actually could adequately capture a high-dynamic-range scene if the user wanted to process it afterwards. One workaround would be the "emergency shift exposure" idea - if, after capture, there are blown pixels, take a bracketed image with a shorter shutter speed. I suggested this to them some years back...
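That "emergency shift" suggestion is simple to express in code. A hedged sketch of the logic, with a fake_capture() stand-in for whatever the firmware would actually do:

```python
import numpy as np

def ev_to_shorter_shutter(shutter: float, stops: float) -> float:
    """Reduce exposure time by the given number of stops."""
    return shutter / (2 ** stops)

def fake_capture(shutter: float) -> np.ndarray:
    """Stand-in for the camera: a scene whose highlights clip at longer shutters."""
    rng = np.random.default_rng(3)
    scene = rng.random((100, 100)) * shutter * 200   # arbitrary brightness model
    return np.clip(scene, 0.0, 1.0)

def capture_with_highlight_rescue(metered_shutter: float,
                                  clip_threshold: float = 0.01,
                                  rescue_stops: float = 1.0):
    """Take the metered shot; if too many pixels are blown, immediately take a
    second frame at a shorter shutter speed - the 'emergency shift' idea above."""
    image = fake_capture(metered_shutter)
    if (image >= 1.0).mean() > clip_threshold:       # fraction of blown pixels
        rescue = fake_capture(ev_to_shorter_shutter(metered_shutter, rescue_stops))
        return image, rescue
    return image, None

first, rescue = capture_with_highlight_rescue(1 / 100)
print("rescue frame taken:", rescue is not None)
```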


(Continued from post #46)...

 

So we pulled perhaps 600 to 800 production images for "training" the net, then we actually printed 8x10" prints of each (we evaluated via the "acceptability" of the images, as opposed to whether the same "printing data" was matched). As I recall, we found that about 85 to 90% of the results would be within our production tolerances - not good enough to bypass the human correctors, who had something like a 99%-plus pass rate with tighter overall tolerances.

 

We ran the auto system in front of the human correctors for some time (with vastly higher volumes). But with no way to confirm what caused the failures (within the "magic" neural net), there was no obvious way to improve it. Then the programmer, who was "gonna be available" to handle problems, took a job at another company, leaving no one behind who knew how to deal with his neural net (an important lesson for me). Anyway, it seemed unlikely that the system would have been good enough to completely bypass the human color correctors, which was our only real cost justification, so we shelved the project.

 

Anyway, that's pretty much my only experience with a neural net - I think it was detrimental to our intentions. Mainly because OUR system was relatively simple, such that we could understand the ramifications. I can see that metering under a wide variety of conditions, with a wide variety of subjects, could be tremendously more complicated. So... I dunno.

 

I have no idea how fast it would have been relative to following a logic tree IN OUR SITUATION.

Edited by Bill C

Yes. I believe sufficiently simple neural nets are difficult to train to do anything useful; there were some relatively large steps forward in the last decade or two. The main advantage of using one would, to me, be that nobody needs to spend a lot of time coding a special-case algorithm to handle all the possible cases robustly. The main disadvantage is that nobody would have spent a lot of time coding a special-case algorithm to handle all the possible cases robustly. :-) Sometimes handling the simple case predictably is more useful; computers don't always help. Classic example: one version of Windows started ordering the programs on the Start menu by which was most recently used (I think - it may have been most frequently). The result was that where you needed to click to select a program changed each time you went to the menu, and it took longer to do things than having a fixed ordering. Microsoft fixed it, a bit, in the next Windows release. My number one demand of most recent software is "stop helping".

 

I'm not that keen on them as a concept, but they seem to be a popular topic these days. It just struck me that the situations I'd seen them used in seemed to have something in common with a metering problem, especially one phrased in terms of "a lot of images", so I made an assumption. And it could be just as unpredictably wrong as a neural net!


I think the real issue that needs addressing is to answer the question: "What would an experienced human photographer do to meter this scene?"

 

And I think that whatever the answer, it certainly isn't to mentally flick through a list of all pictures previously taken and latch onto a similar one. We might create an archetype of the scene that pulls elements from previous experience, but finding a near match from past exposures certainly isn't my priority, at least I believe not.

 

The Sainted Ansel would have us believe it's finding a single 'important' tone to hang the exposure on. Errrr, maybe. But that only works if you're prepared to juggle the rest of the tones in (post)processing to get the tonal range wanted.

 

I find I'm more and more relying on having a huge dynamic range to dig shadow detail out of, and using a basic maxim that's the exact opposite of shooting film: "Expose for the highlights, and let the shadows take care of themselves."

 

Simple, but very effective, and a process that could easily be totally automated - including the post-processing to elevate the shadows.
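As a rough sketch of how that could be automated (a toy model operating on a linear, raw-like float image - not a description of any camera's firmware): choose the exposure that puts a chosen highlight percentile just below clipping, then lift the shadows afterwards.

```python
import numpy as np

def expose_for_highlights(linear_scene: np.ndarray,
                          highlight_percentile: float = 99.5,
                          headroom: float = 0.95) -> float:
    """Return the gain that puts the chosen highlight percentile just below
    clipping (expose to the right while protecting the highlights)."""
    highlight = np.percentile(linear_scene, highlight_percentile)
    return headroom / highlight

def lift_shadows(image: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Crude 'post-processing': a gamma curve that brightens the shadows."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)

rng = np.random.default_rng(4)
scene = rng.random((100, 100)) ** 3          # dark scene with a few bright spots

gain = expose_for_highlights(scene)
exposed = np.clip(scene * gain, 0.0, 1.0)

print("clipped fraction:", (exposed >= 1.0).mean())   # highlights protected
print("shadow mean before/after lift:",
      exposed.mean(), lift_shadows(exposed).mean())
```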

