The succession of biographies is too mechanical, and the explanations of the technologies too cursory, for Genius Makers to fulfill its purpose. How can one understand how the "mavericks" made artificial intelligence without understanding what it is all about? Cade Metz writes for a relatively informed audience, one that already grasps the ins and outs of the debate between connectionists and computationalists, and has mastered the fundamentals of artificial neural networks and deep learning.

Metz walks a tightrope along the line of technocriticism: he does not fall for Elon Musk's con game, and he clearly casts doubt on the chimera of artificial general intelligence, but he skims over the ethical issues raised by algorithmic black boxes and completely ignores the mechanical Turks who nonetheless prop up these learning systems. A book as passionately frustrating as the field it describes, in a way.


The book's most scathing critique:

As Google and other tech giants adopted the technology, no one quite realized it was learning the biases of the researchers who built it. These researchers were mostly white men, and they didn’t see the breadth of the problem until a new wave of researchers — both women and people of color — put a finger on it. As the technology spread even further, into healthcare, government surveillance, and the military, so did the ways it might go wrong. Deep learning brought a power even its designers did not completely understand how to control, particularly as it was embraced by tech superpowers driven by an insatiable hunger for revenue and profit.

Artificial intelligence's original sin:

Though the field’s founding fathers thought the path to re-creating the brain would be a short one, it turned out to be very long. Their original sin was that they called their field “artificial intelligence.” This gave decades of onlookers the impression that scientists were on the verge of re-creating the powers of the brain when, in reality, they were not.

The European connection:

The connectionist community was tiny. Its leaders were European — English, French, German. Even the political, religious, and cultural beliefs harbored by these researchers sat outside the American mainstream. Hinton was an avowed socialist. Bengio gave up his French citizenship because he didn’t want to serve in the military. LeCun called himself a “militant atheist.” Though this was not the kind of language Hinton would ever use, he felt much the same. He often recalled a moment from his teenage years when he was sitting in the chapel at his English public school, Clifton College, listening to a sermon. From the pulpit, the speaker said that in Communist countries, people were forced to attend ideological meetings and weren’t allowed to leave. Hinton thought: “That is exactly the situation I am in.” He would retain these very personal beliefs — atheism, socialism, connectionism — in the decades to come, though after selling his company to Google for $44 million, he enjoyed calling himself “gauche caviar.” “Is that the right term?” he would ask, knowing full well that it was.

One Silicon Valley giant is missing from the roll call:

One year, according to DeepMind’s annual financial accounts in Britain, its staff costs totaled $260 million for only seven hundred employees. That was $371,000 per employee. Even young PhDs, fresh out of grad school, could pull in half a million dollars a year, and the stars of the field could command more, in part because of their unique skills, but also because their names could attract others with the same skills. As Microsoft vice president Peter Lee told Bloomberg Businessweek, the cost of acquiring an AI researcher was akin to the cost of acquiring an NFL quarterback. Adding to this cutthroat atmosphere was the rise of another player. After Facebook unveiled its research lab and Google acquired DeepMind, it was announced that Andrew Ng would be running labs in both Silicon Valley and Beijing — for Baidu.

John Giannandrea:

Soon, Amit Singhal, the head of Google Search, who had so vehemently resisted deep learning when approached by Andrew Ng and Sebastian Thrun in 2011, acknowledged that Internet technology was changing. He and his engineers had no choice but to relinquish their tight control over how their search engine was built. In 2015, they unveiled a system called RankBrain, which used neural networks to help choose search results. It helped drive about 15 percent of the company’s search queries and was, on the whole, more accurate than veteran search engineers when trying to predict what people would click on. Months later, Singhal left the company after being accused of sexual harassment, and he was replaced as the head of Google Search by the head of artificial intelligence: John Giannandrea.

Elon Musk's con game:

Even as Musk sounded the alarm that the race for artificial intelligence could destroy us all, he was joining it. For the moment, he was chasing the idea of a self-driving car, but he would soon chase the same grandiose idea as DeepMind, creating his own lab in pursuit of AGI. For Musk, it was all wrapped up in the same technological trend. First image recognition. Then translation. Then driverless cars. Then AGI. He was part of a growing community of researchers, executives, and investors who warned against the dangers of superintelligence even as they were trying to build it. This included the founders and early backers of DeepMind as well as many of the thinkers drawn into their orbit. To outside experts, it seemed like nonsense. There was no evidence that superintelligence was anywhere close to reality. Current technologies were still struggling to reliably drive a car or carry on a conversation or just pass an eighth-grade science test. Even if AGI was close, the stance of people like Musk seemed like a contradiction. “If it was going to kill us all,” the voices asked, “why build it?” But to those on the inside of this tiny community, it was only natural to consider the risks of what they saw as a uniquely important set of technologies. Someone was going to build superintelligence. It was best to build it while guarding against the unintended consequences.

I had never thought about speciesism in that sense:

Page worried that paranoia over the rise of AI would delay this digital utopia, even though it had the power to bring life to worlds well beyond the Earth. Musk pushed back, asking how Page could be sure this superintelligence wouldn’t end up destroying humanity. Page accused him of being “specieist” because he favored carbon-based life-forms over the needs of new species built with silicon.

On the importance of working with the machine:

Like AlphaGo before him, Lee Sedol had reached a new level, and he said as much during a private meeting with Hassabis on the last day of the match. The Korean said that playing the machine had not only rekindled his passion for Go but opened his mind, giving him new ideas. “I have improved already,” he told Hassabis, an echo of what Fan Hui had said several days earlier. Lee Sedol would go on to win his next nine matches against top human players.

The car as a vehicle* for putting artificial intelligence technologies to the test:

Myriad tech companies and carmakers had a long head start with their autonomous vehicles, and Lu wasn’t exactly sure how Microsoft would enter this increasingly crowded market. But that wasn’t the issue. His argument wasn’t that Microsoft should sell a driverless car. It was that Microsoft should build one. This would give the company the skills and the technologies and the insight it needed to succeed in so many other areas.

Artificial intelligence as a return to rumor:

As these methods improved, he explained, they would end the era where images were proof that something had happened. “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened,” he said. “We used to actually have to think through a story about who said what and who has the incentive to say what, who has credibility, on which issue. And it seems like we’re headed back towards those kinds of times.” But that would be a hard transition to make. “Unfortunately, people these days are not very good at critical thinking. And people tend to have a very tribalistic idea of who’s credible and not credible.” There would, at the very least, be a period of adjustment. “There’s a lot of other areas where AI is opening doors that we’ve never opened before. And we don’t really know what’s on the other side,” he said. “In this case, it’s more like AI is closing some of the doors that our generation has been used to having open.”

And I quite agree with Geoffrey Hinton:

AGI, he believed, was a task far too big to be solved in the foreseeable future. “I’d much rather focus on something where you can figure out how you might solve it,” he said during a visit to Google headquarters in Northern California that spring. But he also wondered why anyone would want to build it. “If I have a robot surgeon, it needs to understand an awful lot about medicine and about manipulating things. I don’t see why my robot surgeon needs to know about baseball scores. Why would it need general-purpose knowledge? I would have thought you would make your machines to help us,” he said. “If I want a machine to dig a ditch right, I’d rather have a backhoe than an android. You don’t want an android digging a ditch. If I want a machine to dispense some money, I want an ATM. One thing I believe is that we probably don’t want general-purpose androids.” When asked if belief in AGI was something like a religion, he demurred: “It’s not nearly as dark as religion.”