WSJ x Capital in English

Commentary: Without humans, artificial intelligence is pretty dumb

Employees at the Competence Call Center in Essen (North Rhine-Westphalia) work at computers on November 23, 2017. On behalf of Facebook, they remove criminal and offensive posts. Photo: dpa

WSJ x Capital in English – under this label, you can regularly read the best Wall Street Journal content: current and in-depth, free of charge, and in the original English.

If you want to understand the limitations of the algorithms that control what we see and hear—and base many of our decisions upon—take a look at Facebook Inc.’s experimental remedy for revenge porn.

To stop an ex from sharing nude pictures of you, you have to share nudes with Facebook itself. Not uncomfortable enough? Facebook also says a real live human will have to check them out.

Without that human review, it would be too easy to exploit Facebook’s anti-revenge-porn service to take down legitimate images. Artificial intelligence, it turns out, has a hard time telling the difference between your naked body and a nude by Titian.

The internet giants that tout their AI bona fides have tried to make their algorithms as human-free as possible, and that’s been a problem. It has become increasingly apparent over the past year that building systems without humans “in the loop”—especially in the case of Facebook and the ads it linked to 470 “inauthentic” Russian-backed accounts—can lead to disastrous outcomes, as actual human brains figure out how to exploit them.

Whether it’s winning at games like Go or keeping watch for Russian influence operations, the best AI-powered systems require humans to play an active role in their creation, tending and operation. Far from displacing workers, this combination is spawning new nonengineer jobs every day, and the preponderance of evidence suggests the boom will continue for the foreseeable future.

Facebook is hardly alone

Facebook, of course, is now a prime example of this trend. The company recently announced it would add 10,000 content moderators to the 10,000 it already employs—a hiring surge that will impact its future profitability, said Chief Executive Mark Zuckerberg.

And Facebook is hardly alone. Alphabet Inc.’s Google has long employed humans alongside AI to eliminate ads that violate its terms of service, ferret out fake news and take down extremist YouTube videos. Google doesn’t disclose how many people are looped into its content moderation, search optimization and other algorithms, but a company spokeswoman says the figure is in the thousands—and growing.

Twitter has its own teams to moderate content, though the company is largely silent about how it accomplishes this, other than touting its system’s ability to automatically delete 95% of terrorists’ accounts.

Almost every big company using AI to automate processes has a need for humans as a part of that AI, says Panos Ipeirotis, a professor at New York University’s Stern School of Business. America’s five largest financial institutions employ teams of nonengineers as part of their AI systems, says Dr. Ipeirotis, who consults with banks.

AI’s constant hunger for human brainpower stems from our growing demand for services. The more we ask of these systems, the less likely it is that a computer algorithm can go it alone, while the human-machine combination can be more effective and efficient. Bank workers who previously read every email in search of fraud, for example, now make better use of their time by investigating only the emails the AI flags as suspicious, says Dr. Ipeirotis.
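As a rough illustration of that triage pattern, here is a minimal sketch of a human-in-the-loop review queue: a model scores each email for fraud risk, and only messages above a cutoff are routed to a human investigator. Everything here is hypothetical—the scoring function, the threshold, and the sample data are placeholders, not any bank's actual system.

```python
# Hypothetical human-in-the-loop triage: an automated scorer flags emails,
# and only the flagged ones land in a human investigator's review queue.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this


@dataclass
class Email:
    sender: str
    body: str


def score_fraud_risk(email: Email) -> float:
    """Placeholder for a trained model; here, a crude keyword heuristic."""
    suspicious_terms = ("wire transfer", "urgent", "gift cards")
    hits = sum(term in email.body.lower() for term in suspicious_terms)
    return min(1.0, hits / len(suspicious_terms))


def triage(emails: list[Email]) -> list[tuple[Email, float]]:
    """Return only the emails a human should investigate, with their scores."""
    scored = [(e, score_fraud_risk(e)) for e in emails]
    return [(e, s) for e, s in scored if s >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    inbox = [
        Email("colleague@example.com", "Lunch tomorrow?"),
        Email("unknown@example.net", "URGENT: wire transfer needed, buy gift cards now"),
    ]
    for email, score in triage(inbox):
        print(f"Review queue: {email.sender} (risk {score:.2f})")
```

The point of the sketch is the division of labor: the algorithm narrows thousands of messages down to a short list, and human judgment is spent only where the model is most suspicious.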