Microsoft initiated a turf war of sorts in the tech space by spearheading the integration of OpenAI’s ChatGPT technology into its Bing search engine, prompting Google, Alibaba, and Baidu to disclose AI initiatives of their own over the past few days.
Earlier this week, Google announced that Bard, an experimental conversational AI service, was available for testing by a select group of trusted testers, with a wider public rollout of the service expected in the coming weeks.
But that was apparently not enough. Today, Google is showcasing its AI capabilities by holding a live event in Paris.
As a refresher, OpenAI’s ChatGPT is built on the company’s GPT family of large language models, transformer-based neural networks fine-tuned with human feedback, to carry out realistic and contextually pertinent conversations with humans. Similarly, Google’s Bard is an experimental conversational AI service based on Google’s Language Model for Dialogue Applications (LaMDA).
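For readers who want a feel for what "conversational AI powered by a large language model" means in practice, here is a minimal sketch using the openly available DialoGPT model as a stand-in; neither ChatGPT's nor LaMDA's weights are public, so the model choice here is purely illustrative.

```python
# Minimal sketch of a single chat turn with a dialogue-tuned language model.
# DialoGPT is used as an openly available stand-in; it is not ChatGPT or LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's message, append the end-of-sequence token, and generate a reply.
prompt = "What can the James Webb Space Telescope see?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern, scaled up to far larger models and wrapped in safety and grounding layers, is what sits behind products like ChatGPT and Bard.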
This brings us to the crux of the matter. The tech giant began today’s event by highlighting the myriad ways in which Google Search leverages AI to surface more contextually relevant content for users within the search page itself, be it the synthesis of answers to complex queries or the translation of images into pertinent search results.
In its quest to create a “new search experience,” Google highlighted Google Translate’s support for 133 languages, 33 of which are available offline. Here, Google expounded on Zero-Shot Machine Translation, which lets a model translate between a language pair it was never given explicit paired training examples for. Google also highlighted the ability of Google Lens to translate visual cues into pertinent search queries with a single click, including the tool’s ability to contextualize the entire image rather than just a single snippet. The ability to swap the language of text in an image while retaining the original image structure is now rolling out to Android devices globally.
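To make the many-to-many translation idea concrete, the sketch below uses the open M2M100 checkpoint as a stand-in; this model choice is an assumption for illustration only and is not the system behind Google Translate.

```python
# Sketch of direct many-to-many translation with a single multilingual model.
# M2M100 translates between language pairs without pivoting through English.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Declare the source language, then force the decoder to start in the target language.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est belle.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("de"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Because one shared model covers every language, such systems can generalize to pairs that were sparsely represented, or absent, in the training data, which is the essence of the zero-shot behavior Google described.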
In a major development, Google Lens will now allow users to search whatever is on their phone’s screen, including photos and videos displayed within apps, by bringing up Google Assistant and tapping “search screen.” Moreover, Google’s multisearch functionality allows users to refine image-based searches with text-based cues. This functionality is now available globally in over 70 languages.
Google then went on to highlight the Transformer, a neural network architecture built around a self-attention mechanism that serves as the foundation for much of the generative AI activity currently taking place. Google also highlighted the utility of incorporating Bard directly into its search engine. According to the tech giant, this conversational AI tool simplifies user interactions and adds context and utility to search results in a seamless manner.
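For the curious, the core of the Transformer’s self-attention mechanism boils down to a few matrix operations. The toy NumPy sketch below shows single-head scaled dot-product attention with randomly initialized weights; production models add learned multi-head projections, masking, and feed-forward layers on top of this.

```python
# Toy single-head scaled dot-product self-attention, the building block of the Transformer.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ v                                 # each token becomes a weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))                      # 5 tokens, 16-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(tokens, w_q, w_k, w_v).shape)     # -> (5, 16)
```

Every token attends to every other token in one step, which is what lets Transformer-based models like GPT and LaMDA track context across long passages of text.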
*GOOGLE $GOOGL CHATGPT CLONE, ‘BARD AI’ GIVES INCORRECT RESPONSE DURING DEMONSTRATION
User query: “What new discoveries from the JWST can I tell my 9 year old about?”
Bard response: “JWST took the very first pictures of a planet outside of our own solar system.” – not true.
— Stock Talk Weekly (@stocktalkweekly) February 8, 2023
Nonetheless, Bard did commit a faux pas of sorts by giving out wrong information. As detailed in the tweet above, Bard claimed that the James Webb Space Telescope (JWST) was the first telescope to take pictures of an exoplanet.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
For those wondering what exactly happened, Google posted a video of Bard in action. In the video, Bard is asked about the new discoveries made by the JWST. In response, Bard pumps out the erroneous answer that the telescope captured the first images of an exoplanet, when that credit actually belongs to the European Southern Observatory’s Very Large Telescope (VLT), which captured those shots back in 2004.
BREAKING NEWS: GOOGLE $GOOGL has removed its live steam video for its #AI EVENT this morning from Youtube!!
3 Strikes & your out!! pic.twitter.com/w7L8UfYHAp
— ⟁Vinnie Moura (@vinniemourax) February 8, 2023
And Google has since pulled the YouTube live stream of the event. Stay tuned as we try to figure out what happened; the company’s shares are predictably tanking.