At Google’s I/O 2024 developer conference, the company talked about AI basically the entire time (as expected). During the keynote, Google detailed how artificial intelligence will be integrated into Android going forward, with Gemini taking on the same virtual assistant role as Google Assistant, but in a more integrated and contextual way.
Also: Android 15 is here: 8 exciting (and useful) features coming to your phone
After launching the Gemini app in February, Google has started fleshing out Gemini for Android with a set of new features that can bring AI to more aspects of daily life.
Google tried to pitch these changes as a fundamental rethinking of Android, but at least for now they’re complementary to the larger Android experience.
Gemini itself is a prime example. Google is tweaking Gemini’s design so that it floats above whatever you’re doing, rather than taking over the entire screen as it currently does. That’s the same way Google Assistant appears. With Gemini, however, you also get a large text field for typing prompts, which shifts the focus away from voice commands.
The new overlay represents a deeper integration with the apps you’re using and is intended to provide context-aware controls. Google gave the example of watching a YouTube video: pull up Gemini and you’ll see an “Ask this video” button, which lets you use the video as a knowledge base to ask questions or summarize its content. The same works with PDFs, as long as you subscribe to Gemini Advanced, which has a longer context window.
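For the curious, the building blocks behind this kind of “ask this document” experience are already exposed through Google’s Gemini API. Below is a minimal Kotlin sketch using the official google-generative-ai client SDK; the model name is our assumption, and pulling the text out of a PDF is left to you, since the overlay itself isn’t something third-party code can invoke.

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Sketch: feed extracted document text into a long-context Gemini model and
// ask a question about it, roughly what "Ask this PDF" does behind the scenes.
suspend fun askThisDocument(
    apiKey: String,       // a Gemini API key from Google AI Studio
    documentText: String, // text you've already extracted from the PDF
    question: String
): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-pro", // long-context model; exact choice is an assumption
        apiKey = apiKey
    )
    val response = model.generateContent(
        "Answer using only this document:\n\n$documentText\n\nQuestion: $question"
    )
    return response.text // null if the model returned no text
}
```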
Also: 3 new Gemini Advanced features announced at Google I/O 2024
Gemini also makes it easier to move content between apps, with support for drag and drop. In its keynote, Google demonstrated asking the chatbot to generate an image, then dragging and dropping the result into a messaging app to send it to a friend.
Google says that over time, Gemini will become more contextually aware of the apps on your phone and use dynamic suggestions to help you navigate them more easily.
Google is also upgrading Circle to Search, already available on more than 100 million Android devices, to help with homework. Specifically, the feature helps students better understand complex physics and math word problems they’re stuck on. You’ll get a detailed breakdown of how to solve the problem, without having to tap over to a digital info sheet or syllabus.
Also: Google just teased AR smart glasses, but we already know how the software will work
This new feature leverages Google’s LearnLM model, which aims to make learning easier with AI. Over time, Google says you’ll be able to use Circle to Search to solve more complex problems involving symbolic formulas, diagrams, graphs, and more.
Circle to Search is already available on major smartphones, including the Samsung Galaxy S24.
Google also announced that Gemini Nano, the model built directly into Android (albeit on a very small number of devices), will receive an upgrade called “Gemini Nano with Multimodality.” With this updated LLM, you’ll be able to interact with Gemini using various media inputs, such as text, photos, videos, and audio, to get answers to your questions, information about what you’re looking at, and more.
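Google hasn’t publicly documented Gemini Nano’s on-device API yet, but the multimodal idea is easy to picture with the cloud-backed Gemini client SDK for Android. A minimal sketch, assuming the same google-generative-ai Kotlin library and an API key (the model name is illustrative):

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Sketch: a multimodal prompt that mixes an image with text, the same
// pattern "Gemini Nano with Multimodality" brings on-device.
suspend fun describePhoto(apiKey: String, photo: Bitmap): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // cloud model; Nano itself isn't reachable this way
        apiKey = apiKey
    )
    val response = model.generateContent(
        content {
            image(photo) // image input
            text("What's in this photo, and what should I know about it?")
        }
    )
    return response.text
}
```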
Also: Introducing Gemini AI Live: Like FaceTiming with your know-it-all friends
This model will power features such as TalkBack, Android’s screen reader, which will use it to generate descriptions of unlabeled images, as well as real-time spam alerts during phone calls, for when someone ringing from an unknown number insists you need to wire $1 million to an Egyptian prince.
These are just some of the AI features Google plans to bring to Android 15 and beyond. Some will launch first on Pixel phones, while others will be available to anyone who downloads the Gemini app. We don’t yet know exactly how everything will shake out, but if you own and use an Android smartphone, it’s clear it’s about to get even more powerful.