It's clear from the article that this AI is processing photos taken by people, according to norms it's "learned" from a dataset of photos taken by people, which people deemed to have a statistically "correct" internal computer representation. Granted, this is cooler than just applying auto-levels, and I'm sure we'll eventually have cameras with that kind of smarts for the benefit of interfaces (aka people) without their own taste, but that's a long way off from a machine that knows what a photograph is, much less wants to take one.

Do not forget in these AI scenarios that machines don't want to take photos, or cure cancer, or enslave mankind. Machines don't know anything and are therefore interested in nothing, not even in being powered on. In that sense the most powerful supercomputer is not even as smart as an amoeba, which at least knows what it likes. Maybe you can attach a smart camera to a Roomba to take pictures "autonomously", but they'll only be good pictures by accident, and then only because a sentient human being said they're good. Is this technology smarter than artfully dropping your camera on its shutter? Yeah, but not essentially.