21 March 2017

The New York Times: “The Great A.I. Awakening”

AI Awakening cover

It is important to note, however, that the fact that neural networks are probabilistic in nature means that they’re not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss”, that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.


A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer. The same network that hit on our concept of cat also became enthusiastic about a pattern that looked like some sort of furniture-animal compound, like a cross between an ottoman and a goat.

Gideon Lewis-Kraus
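
The point about fallible labels is concrete enough to sketch in a few lines of toy code. Everything below is invented for illustration – the features, the "loan approval" framing, the data – but it shows the mechanism the article describes: a learner fit to labels that encode a biased decision will reproduce that bias on new inputs, because the labels are the only ground truth it ever sees.

```python
# Toy illustration (invented data): a learner trained on biased labels reproduces the bias.
from collections import Counter

# Each applicant: (has_prior_conviction, repaid_previous_loan) -> historical "approve" label.
# Suppose the historical labels penalised convictions regardless of repayment record.
history = [
    ((True,  True),  False),   # reliable payer, still rejected because of a conviction
    ((True,  False), False),
    ((False, True),  True),
    ((False, False), False),
] * 100  # repeated to mimic a larger dataset

def train(rows):
    """Memorise the majority label for each feature combination."""
    votes = {}
    for features, label in rows:
        votes.setdefault(features, Counter())[label] += 1
    return {features: counts.most_common(1)[0][0] for features, counts in votes.items()}

model = train(history)

# A new applicant with a spotless repayment record but a prior conviction:
print(model[(True, True)])   # False -- the historical bias carries straight through
```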

Great story – though a bit too long – about Google’s efforts to integrate machine learning into translation. Lately I have used Google Translate more than usual and the improvement in quality does show, though mostly on longer sentences. When translating single words, by contrast, I regularly run into cases where Translate fails to recognize words that, when queried through Google Search, have clear definitions. Evidently Translate could still use some work if Google Search does a better job of identifying individual words.

The article is interesting on several other levels too; it highlights the challenges and caveats of machine learning relatively well and makes a muted case for immigration, as most of the engineers and mathematicians working on these models come from outside the United States.

The more important takeaway for me, though, was this: the opening sections describe two different approaches to artificial intelligence. One of them is, let’s say, ‘top-down’ – teaching machines all the high-level concepts and symbols that humans built up over our cultural evolution, and then letting the machine recognize them in real situations. The other, the more recent machine learning, works from the bottom up, starting with vast amounts of data and asking the machines to discover their own patterns, without assigning any logical significance to those patterns. It occurs to me that both models are incomplete, and that the human mind uses both modes simultaneously: as we acquire data about our environment, we start building patterns that we memorize and later apply to new situations (the bottom-up approach); at the same time, we abstract those patterns into rules that can be applied in completely different contexts (the symbolic approach) – that is how mathematics, physics, and basically all sciences work. A machine intelligence may then never be ‘complete’ without both of these abilities – but then again, there is little incentive to build one.
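
The contrast between the two approaches is easy to caricature in code. The sketch below is entirely my own toy example, not anything from the article: the task (is a 2-D point inside the unit circle?) and all function names are made up. The ‘top-down’ classifier is handed the human-derived rule directly, while the ‘bottom-up’ one only ever sees labeled examples and memorizes patterns from them, never the rule itself.

```python
# A toy contrast between the two approaches described above (invented example).
import random

# --- Top-down / symbolic: the rule is handed to the machine directly. ---
def symbolic_classifier(x, y):
    """Apply the human-derived rule x^2 + y^2 < 1."""
    return x * x + y * y < 1.0

# --- Bottom-up / learned: only labeled examples are given, never the rule. ---
def make_training_data(n=2000, seed=0):
    """Generate labeled points; the labels implicitly encode the rule."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x, y = rng.uniform(-1.5, 1.5), rng.uniform(-1.5, 1.5)
        data.append(((x, y), symbolic_classifier(x, y)))  # labels supplied by a "human"
    return data

def learned_classifier(x, y, training_data):
    """1-nearest-neighbour: predict the label of the closest example seen so far."""
    nearest = min(training_data,
                  key=lambda item: (item[0][0] - x) ** 2 + (item[0][1] - y) ** 2)
    return nearest[1]

if __name__ == "__main__":
    data = make_training_data()
    for x, y in [(0.1, 0.2), (1.2, 0.9), (0.7, 0.7), (0.99, 0.0)]:
        print((x, y),
              "rule:", symbolic_classifier(x, y),
              "learned:", learned_classifier(x, y, data))
```

The learned classifier tracks the rule closely where training data is dense, yet it holds no abstraction it could transfer to a different context – which is roughly the gap the article, and the paragraph above, is pointing at.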
