San Francisco – Google is injecting more artificial intelligence into its search engine, letting people ask spoken questions about images and occasionally having the technology organize an entire page of results, despite its past missteps with misleading information.
The latest changes, announced Thursday, herald the next step in the AI-driven revamp Google launched in mid-May, when it began responding to some queries with snippets written by the technology at the top of its influential results pages. The summaries, known as “AI Overviews,” have raised concerns among publishers that fewer people will click on search links to their sites, undermining the traffic they need to sell the digital ads that help finance their operations.
Google has been addressing some of those ongoing concerns by inserting more links to other websites within its AI Overviews, but the summaries have still reduced traffic to general news publishers like The New York Times and technology review specialists like TomsGuide.com, according to an analysis published last month by search traffic specialist BrightEdge.
The same study found that the citations within AI Overviews are driving more traffic to highly specialized websites such as Bloomberg.com and the National Institutes of Health.
Google’s decision to inject more artificial intelligence into the search engine that remains the crown jewel of its $2 trillion empire leaves little doubt that the Mountain View, California-based company is tying its future to a technology propelling the biggest industry shift since Apple launched the first iPhone 17 years ago.
The next phase of Google’s AI development builds on its seven-year-old Lens feature, which handles queries about objects in pictures. Lens currently generates more than 20 billion queries per month and is especially popular among users ages 18 to 24. This is the young demographic Google is trying to cultivate as it faces competition from AI-powered alternatives such as ChatGPT and Perplexity that position themselves as answer engines.
People will now be able to use Lens to ask questions in English about whatever they see through their camera lens – just as if they were talking about it with a friend – and get search results. Users who sign up to test Google Labs’ new voice-activated search features will also be able to take videos of moving objects, such as fish swimming around an aquarium, while asking conversational questions and getting answers through an AI Overview.
“Our overall goal is, can we make search easier for people to use and more available, so people can search anywhere, anytime, and any way they want,” said Rajan Patel, Google’s vice president of search engineering and a co-founder of the Lens feature.
While advances in artificial intelligence have the potential to make search more convenient, the technology can sometimes spit out bad information – a risk that could damage the credibility of Google’s search engine if the errors become too frequent. Google’s AI Overviews had some embarrassing missteps shortly after launch, including suggesting that people put glue on pizza and eat rocks. The company blamed those missteps on data gaps and deliberate attempts by online troublemakers to steer its AI technology in the wrong direction.
Google is now so confident it has fixed some of the AI’s blind spots that it will rely on the technology to decide what types of information to feature on some results pages. Despite the technology’s earlier bad culinary advice about pizza and rocks, the AI will initially be used to organize results for English-language queries about recipes and meal ideas entered on mobile devices. These AI-organized results pages are meant to be broken into different sections that include photos, videos, and articles about the topic.