San Francisco: Sundar Pichai isn’t a manager who shoves his way into the limelight with bombastic promises. But when it comes to artificial intelligence (AI), the head of Google’s parent company Alphabet regularly sets aside his usual restraint.
“AI is one of the most important things humanity is working on. More profound than electricity or fire,” he said in early 2018. Pichai has committed his own company to an “AI first” strategy: AI is to touch every Google product and not merely answer people’s questions, but actively help them in their everyday lives.
To speak of far-reaching consequences for mankind is no exaggeration. The Alphabet group has nine products with more than a billion monthly active users each, from Internet search and Google Maps to YouTube and Google Photos. Each of those user counts corresponds to more than one eighth of the world’s population.
All the big Silicon Valley companies are researching AI, but the bets made by the search-engine giant from Mountain View are often a little bolder, crazier, and further from the core business than what scientists at Microsoft, Facebook, or Apple dream up.
From Waymo’s self-driving cars to the acquisition of the British AI firm DeepMind, Google has consistently made billion-dollar bets on AI that will take years to recoup. If they ever do.
Under CEO Pichai, the group has become more rational and economical, but without breaking with the tradition of “alpha bets,” the huge wagers on scientific quantum leaps. Even research on “general artificial intelligence” – the controversial concept of an AI that is as versatile as a human brain – is being pushed further at Google.
This is another reason why it was so controversial when Google parted ways with AI ethics researcher Timnit Gebru in December, after she pointed out in a research paper the dangers of the huge data sets used to train neural networks – collections of images, videos, and search queries that, after all, feed every division of Google.
Google’s artificial-intelligence initiatives, however, are a good indicator of what is technologically and economically possible with AI today. They show how hard even small advances in AI are fought for. Which dreams are realistic. And which tasks humans still do, even at Google.
Andrei Lupas and his team at the Max Planck Institute for Developmental Biology in Tübingen spent almost a decade researching a specific protein structure. At the end of November, the researcher celebrated a major breakthrough: “It will change medicine. It will change research. It will change everything.”
Lupas is one of the first researchers to use the AlphaFold software. The artificial intelligence from DeepMind, Google’s London-based sister company, solves a problem that has plagued biologists for more than half a century: What does a protein look like in three dimensions? Many of a protein’s functions follow from its shape, or folding, but until now it has been very difficult for researchers to determine it. In a competition, AlphaFold correctly predicted not all, but most of the known protein structures.
Now the scientists hope to be able to determine the structure of many proteins faster and more cheaply in the future. That could massively accelerate pharmaceutical research.
Since Google acquired DeepMind in 2014 for a reported $600 million, the company has made headlines time and again: in 2017, when its AlphaGo program defeated the world champion in the highly complex board game Go, or in 2018, when one of its programs, after 450,000 games, mastered the shooter Quake III Arena better than trained human players.
Much of this work attracts a lot of attention, and some of it is even scientifically revolutionary. But it does not earn any money. “If DeepMind were independent today, it probably would have failed. None of this is commercializable,” investor Humayun Sheikh, one of DeepMind’s early funders, said recently.
However, this could change with the breakthrough in medicine.
Christian Frank can’t sing very well. “I suspect that’s why I should give this presentation,” the Google developer told a small group of journalists. That, however, is not the real reason: Frank played a key role in developing the “Hum-to-Search” feature at Google’s office in Zurich.
Frank starts humming. A few notes up, then a few down. The screen reads “Don’t Stop Me Now” by Queen. So Frank hits the notes well enough after all.
The hum feature on smartphones can name catchy tunes of which users have only a melody fragment in their heads. It is similar to the Shazam app, which shows title and artist when an unfamiliar song is playing in a café or at a friend’s house – except that with Google, a hum is enough.
To develop the feature, Frank’s team collected clips of hummed tunes from Google employees. The researchers then multiplied the training material for their artificial intelligence with simulations in which, for example, the pitch of a clip was varied. The system translates these clips and the original songs into mathematical representations, so-called “embeddings,” and the model learns from every hit and every miss.
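The matching step described above can be sketched as a nearest-neighbor search over embedding vectors. The titles, the four-dimensional vectors, and the similarity measure below are toy assumptions for illustration – real systems use embeddings with hundreds of dimensions produced by a trained network, and Google’s actual matching pipeline is not public:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for the learned song representations.
song_embeddings = {
    "Don't Stop Me Now": np.array([0.9, 0.1, 0.3, 0.2]),
    "Bohemian Rhapsody": np.array([0.1, 0.8, 0.2, 0.5]),
}

# Hypothetical embedding of the user's hummed fragment.
hummed = np.array([0.85, 0.15, 0.25, 0.3])

# Pick the catalog song whose embedding lies closest to the hum.
best_match = max(song_embeddings, key=lambda t: cosine_similarity(hummed, song_embeddings[t]))
print(best_match)  # prints: Don't Stop Me Now
```

The point of the embedding trick is that humming and studio recording sound completely different as audio, but a network trained on hit/miss feedback maps both into the same vector space, where simple geometric distance suffices for retrieval.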
“It’s not a very important problem you’re solving. Are there other applications for this?” a journalist asks after Frank’s presentation. Frank has no ready answer. Google users love the feature and use it en masse, he says. Of course the team keeps thinking about other applications, but he cannot name any at the moment.
Geoffrey Hinton dreams big

In Silicon Valley, being “contrarian” – a nonconformist – is a badge of honor. People like Apple founder Steve Jobs or Tesla CEO Elon Musk, who think against the current and turn out to be right, are the patron saints of Silicon Valley.
Geoffrey Hinton is one of those people, at least part of the time: “My problem is: I have a belief that no one else has, and five years later everyone has it,” the Brit said in November at a conference of the science magazine MIT Technology Review. “My beliefs from the 1980s are now widely accepted.”
Hinton’s convictions have made him a legend among AI researchers: as a professor at Carnegie Mellon University, during what is now known as the “AI winter,” he laid the foundations for self-learning neural networks, which mimic the structure of the brain and thus approach human intelligence. In 2018 he received the Turing Award, the Nobel Prize of artificial intelligence.
For the past seven years, Hinton has divided his time between the University of Toronto and Google’s deep-learning division, Brain.
Hinton believes neural networks can do far more than they do today: “One day, deep learning will be able to do everything,” says the 73-year-old. To him it is mainly a question of computing capacity: the human brain has about 100 trillion synapses. The GPT-3 language model, which can formulate plausible texts and conduct conversations, has only 175 billion parameters, making it several hundred times less complex.
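Taking the article’s two figures at face value (they are rough orders of magnitude, not precise measurements), the scale gap works out like this:

```python
# Scale comparison using the figures quoted in the text.
brain_synapses = 100e12   # ~100 trillion synapses in a human brain
gpt3_parameters = 175e9   # GPT-3's reported parameter count

ratio = brain_synapses / gpt3_parameters
print(round(ratio))  # prints: 571
```

So by this crude synapse-to-parameter comparison, GPT-3 sits roughly 500–600 times below the brain’s connection count – with the caveat that a synapse and a model parameter are not directly equivalent units.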
However, as impressed as many experts are by GPT-3, the system can only write, nothing else. Google recently developed a robot that could open a drawer, take out a pad, and then say: “I opened a drawer and took out a pad.”
Could this be the beginning of general artificial intelligence, the controversial concept of an AI as versatile and rational as a human being? In 2018, Hinton was still skeptical that such an AGI would be achievable in the near future.
But Google is not only working on artificial intelligence; it is also working on quantum computers, which promise far greater computing power than conventional machines for certain problems. At the end of 2019, the group’s scientists presented a quantum chip that outperformed every previous supercomputer on one specific task.
The AGI concept was popularized in the early 2000s by AI researcher Ben Goertzel, who later developed the humanoid robot “Sophia.” Today, Goertzel says: “I am impressed by how deeply Google and DeepMind think about AGI. If any big corporation can do it, it’s them.”
But despite all the attention Google devotes to artificial intelligence, the group also seems to have understood something else: Algorithms have limits, at least for the time being.
“What is music?” was the question Doug Ford asked himself when he started as music program director at YouTube in late 2019. The New Yorker has spent his career in the industry. Before YouTube, he shaped the playlist strategy of the music app Spotify for six years.
The Google subsidiary aims to attract subscribers to its premium music service, which lags far behind Spotify in paid subscribers.
At Ford’s new employer, though, the question of what music is proves harder to answer than at his old one, because on YouTube there is an image along with the sound. The world’s largest video platform hosts thousands upon thousands of professional music videos, but also countless live recordings and cover songs of good and not-so-good quality. Any fan who has filmed thirty shaky seconds of a Phil Collins concert on their cell phone can upload it to YouTube. Thrown into a Genesis playlist, one lopsided track can ruin the whole listening experience. Other live recordings, on the other hand, are better than the studio version.
Recommendation algorithms can solve such problems only to a limited extent because they lack a sense of context. When South Korean rapper Psy’s “Gangnam Style” became the most-viewed music video on YouTube in 2012, the algorithm began suggesting the video to practically every user. But because viewers were drawn by the song’s oddity rather than by any evergreen quality, the constant recommendation became more of an annoyance.
Ford’s solution: algorithms work together with humans. The right level is “less human curation than a DJ, but more than an inscrutable algorithm,” he recently told the tech portal Protocol. Music editors compile the “Released” playlist with the 50 best tracks of the past week, while an algorithm assembles the “New Release Mix” tailored to each user.