AI has us at another watershed moment: Google’s Prabhakar Raghavan
“Years ago, it might have seemed like science fiction to pull out a phone, take a picture of a broken bike part, and ask, ‘How do I fix this?’ Today, you can use Google Lens, find out what’s broken, where to pick up a replacement part, and how to repair it yourself, all in a matter of seconds,” says Prabhakar Raghavan, senior vice president at Google, who believes we are at another watershed moment in the journey of web search.
It is a space Google knows all too well, having dominated it for what is now a 25-year journey. So much so that Google became to search what WhatsApp is to text messaging, or what Xerox was to photocopiers for years. A common theme – many tried to compete, but none came close. Numbers illustrate Google’s dominance: Microsoft’s Bing has around 3% market share compared with Google’s 91%, according to research firm Statcounter. Yet artificial intelligence (AI) gives rivals a chance.
“Part of our responsibility and our opportunity is to keep pace with users’ expectations and technological advancements, which are both constantly changing,” says Raghavan, who is clear about Google’s way forward. Yet there is little doubt the company faces the kind of challenges it hasn’t encountered in the past.
Bing’s roll-out as an AI-based search engine gives it a first-mover advantage. The partnership with OpenAI for ChatGPT integration, and more recently with Meta for its Meta AI assistant, adds to that. The mention of chatbots and assistants is as good an illustration as any that the very definition of search is widening in scope, and isn’t restricted to the web alone. Yet whether market share sees a significant change in the coming months remains anyone’s guess.
There is potential to appeal to millions of users too. Case in point: Microsoft’s Copilot. The AI chatbot, deeply integrated into Windows 11, which runs on millions of PCs globally, can search not only the internet with Bing but also local files and folders. Google’s response with Bard is to let users find answers to the questions they pose within their Gmail, Docs or Drive files.
Google isn’t leaving anything to chance, or giving away years of definitive advantage to new-age tech such as chatbots, which the likes of Microsoft’s Bing and OpenAI’s ChatGPT are relying on heavily to get on the radar of a broader demographic of users. “With the increasing sophistication of generative AI, we’re at another watershed moment,” Raghavan opines.
Things have changed quickly in the past year with the emergence of generative AI and the tools it has spawned, including text-to-image creators. Google’s evolving too.
None of this fazes Raghavan. “The reality is that this AI evolution didn’t happen overnight. AI has been around for more than 70 years, and we’ve been meaningfully applying it to our products for decades,” he says.
Google’s Search Generative Experience (SGE) for users in India adds a generative AI layer to Search, which enables contextual, visual results and conversational follow-ups. Puneesh Kumar, general manager for Google Search in India, confirms multiple large language models (LLMs) are in play, including an advanced version of Google’s MUM and PaLM 2.
“Consider our generative AI-powered Search experience (SGE) experiment, which provides snapshots of information and allows for conversational follow-ups. By creating entirely new ways to search, we’re helping people find the answers to previously unaskable questions and understand the world around them in new ways,” he says.
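As a rough idea of what a “conversational follow-up” involves under the hood, the sketch below shows one naive way a follow-up query could be resolved against earlier turns before any search is run. The function and the substitution rule are hypothetical stand-ins invented for illustration; a system like SGE would rely on a large language model such as PaLM 2 working over the full conversation history.

    # A naive sketch of follow-up resolution: vague references in a new turn are
    # replaced with the topic of the previous turn before the query is searched.
    # A production system would use an LLM over the whole history instead.
    import re

    def rewrite_followup(history: list[str], followup: str) -> str:
        if not history:
            return followup
        topic = history[-1]  # the last resolved query stands in for the topic
        return re.sub(r"\b(it|that|this|they|them)\b", topic, followup, flags=re.IGNORECASE)

    history: list[str] = []
    for turn in ["best trekking routes near Manali", "how difficult are they"]:
        resolved = rewrite_followup(history, turn)
        history.append(resolved)
        print(resolved)
    # The second turn is searched as "how difficult are best trekking routes near Manali".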
Over time, search engines not only had to tap into and index newer sources of information, particularly with a focus on mobile devices, but also had to tackle a growing tide of misinformation. “We’ve moved from understanding text to understanding different types of information, like images, videos, audio, and even a combination of modalities,” he says.
Language models such as BERT understand words in context, rather than as isolated words on a page, which led us to conversational search. Enter voice assistants such as Google Assistant, and chatbots like Bard, with which Google joined a frenetic space alongside ChatGPT and Bing.
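To make “understanding words in context” concrete, here is a minimal sketch using the publicly released bert-base-uncased checkpoint via the Hugging Face transformers library. It is only an illustration of contextual representations, not Google Search’s own pipeline: the same surface word ends up with a different vector depending on the sentence around it, something a plain keyword index cannot capture.

    # Minimal illustration of contextual embeddings: the word "bank" gets a
    # different vector in each sentence.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def vector_for(sentence: str, word: str) -> torch.Tensor:
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        return hidden[tokens.index(word)]

    river = vector_for("she sat on the bank of the river", "bank")
    money = vector_for("she deposited money at the bank", "bank")
    print(torch.cosine_similarity(river, money, dim=0).item())  # noticeably below 1.0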
The ecosystem is seeing rapid evolution, and so are LLMs. Take, for instance, how web searches work in the Bard AI chatbot. There is a “Google it” button to check for and collect further sources on the web for a query you may have asked.
A tap of the “G” icon below a response gets Bard to call upon a search to find more relevant content. “When a statement can be evaluated, a user can click the highlighted phrases and learn more about supporting or contradicting information found by Google Search,” Raghavan describes.
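A hedged sketch of what that kind of grounding step could look like: the response is broken into claims, each claim is checked against search results, and anything without support is flagged. The web_search stub, the Verdict record and the word-overlap test are placeholders invented for illustration only, and the sketch simplifies the feature by not distinguishing contradicting evidence; Bard itself relies on Google Search and far more sophisticated evaluation.

    # Hypothetical sketch of response grounding: check each claim in a draft
    # answer against search snippets and mark it supported or unverified.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        claim: str
        status: str          # "supported" or "unverified"
        sources: list[str]

    def web_search(query: str) -> list[dict]:
        # Stand-in for a real search backend; returns canned snippets so the
        # sketch runs end to end.
        return [{"url": "https://example.com/eiffel",
                 "snippet": "The Eiffel Tower is about 330 metres tall."}]

    def ground_response(draft: str) -> list[Verdict]:
        verdicts = []
        for claim in (s.strip() for s in draft.split(".") if s.strip()):
            hits = web_search(claim)
            # Naive overlap test; a real system would use an entailment model.
            supporting = [h["url"] for h in hits
                          if len(set(claim.lower().split()) & set(h["snippet"].lower().split())) >= 3]
            verdicts.append(Verdict(claim, "supported" if supporting else "unverified", supporting))
        return verdicts

    for v in ground_response("The Eiffel Tower is about 330 metres tall. It was painted gold in 2023."):
        print(v.status, "-", v.claim)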
“We see a virtuous cycle where user expectations rise and propel technological advancements forward, and vice versa. Something that always amazes me is that 15% of searches we see every day are ones we’ve never seen before. Think about that – it means just to keep up with people’s curiosity, we must be constantly innovating,” Raghavan believes.
Kumar cited the Gboard keyboard app as one of the areas where Google is innovating. “Language needs in India remain dynamic with mixed language usage, and we see that the majority of Google users in India prefer to use more than one language online,” he said. On that note, the SGE experiment begins with an understanding of English and Hindi. No doubt it will take inspiration from traditional Google Search, which supports more than nine Indian languages, including Kannada, Marathi and Bengali.
More ways to search, on phones and beyond
OpenAI’s ChatGPT will now search for current information to provide answers to queries posed by users. Its search engine is Bing. “ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021,” OpenAI says in a statement.
The caveat: this feature is available only to Plus and Enterprise subscribers. Google Bard relies on its mammoth search engine and is free to use. Similar contours apply to Microsoft’s Bing chatbot. The tech giant has the unique advantage of Bing being the underlying search engine for two popular chatbots that users are increasingly turning to.
Then there is the small matter of the Copilot assistant that’s rolling out to millions of PCs running the Windows 11 operating system. There’s a reasonable advantage to that approach, since users don’t always go specifically to a search web page.
Then there is Meta’s tryst with chatbots and conversational AI. The Meta AI assistant, which will roll out in the coming months, will use Microsoft’s Bing to enable real-time web searches within some of the most popular apps globally – WhatsApp, Instagram and Messenger.
There is no ambiguity about the challenges that await Google’s evolution with search. “We’ve always thought about our mission more expansively than just traditional text search. That’s why we invest in experiences like Discover, Assistant, Bard, Lens, Google Maps and more. We want to make it easy for people to find and engage with information – in ways beyond typing a query into a search box,” says Raghavan.
The tenets, he believes, will be to make search more natural and intuitive, and to enable it to answer increasingly complex questions.
There is a sense that Google knows user preferences differ across age demographics. Raghavan believes younger generations are more likely to collaborate with AI as a thought and creation partner.
“Much of the Indian population leapfrogged desktop computing and went straight to mobile, which also leads to different user behaviours than we might see in Western Europe or America. For instance, we see a stronger preference for visual and video content and a much higher usage of voice for search,” he points out.
Turns out, Google’s data indicates the percentage of Indians doing daily voice queries is nearly twice the global average.
While there is pressure for speed, Google’s not willing to compromise on what it calls responsible AI. Being last to join the chatbot space was an example of that. “The biggest difference in recent months is that AI has gone from working primarily behind the scenes to moving up the technological stack, directly into the user interface,” says Raghavan.
Limiting certain sensitive and knowledge-critical elements, such as medical advice, in the SGE experiment is another. “AI is still an emerging technology, and like any emerging technology, it’s critical that we bring user experiences rooted in these models to the world in the most responsible way possible,” he adds.