
On-device AI is a big priority for Android going forward, and Google shared more developer resources at I/O 2024.
The I/O 2024 session “Inside the inner workings of Android on-device AI” outlined the main use cases for on-device generative AI:
- Consume: Summarize or provide an overview of text.
- Create: Suggest replies or generate/paraphrase text in messaging apps.
- Classify: Detect emotions/mood in conversations and text.
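The three patterns above can be sketched as prompt templates in app code. This is a minimal illustration only: `run_on_device_model` is a hypothetical placeholder, not a real API, standing in for whatever local runtime the app actually calls (for example, Gemini Nano via Android's AICore).

```python
def run_on_device_model(prompt: str) -> str:
    """Hypothetical stand-in for a local ~2-3B-parameter model call.

    A real app would invoke an on-device runtime here; this stub just
    echoes the prompt so the example is self-contained.
    """
    return f"<model output for: {prompt[:40]}>"


def consume(text: str) -> str:
    """Consume: summarize or give an overview of the text."""
    return run_on_device_model(f"Summarize the following text:\n{text}")


def create(message: str) -> str:
    """Create: suggest a reply in a messaging app."""
    return run_on_device_model(f"Suggest a short reply to this message:\n{message}")


def classify(text: str) -> str:
    """Classify: detect the emotion/mood of the text."""
    return run_on_device_model(
        f"Label the mood (positive/negative/neutral) of:\n{text}"
    )
```

The point is that all three use cases reduce to different prompt framings over the same small local model, which is why one on-device foundation model can serve them all.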
Common benefits include secure local processing, offline availability, low latency, and no additional (cloud) costs. The main limitation is the small parameter count of 2-3 billion, “almost an order of magnitude smaller than its cloud-based counterpart.” The session also noted that the context window is smaller and the model is less generalized; therefore, “fine tuning is important to obtain good accuracy.”
Gemini Nano is Android’s foundation model for building on-device GenAI experiences, but the system can also run Gemma and other open models.
So far, only Google apps (summaries in Pixel Recorder, Magic Compose in Google Messages, and Gboard Smart Reply) take advantage of this, but Google is running an early access program for developers with compelling on-device GenAI use cases, and is actively working with those who have unique ones. These apps are expected to launch in 2024.
Meanwhile, Google will soon be using Gemini Nano for TalkBack captions, Gemini dynamic suggestions, and spam alerts, with multimodality updates “starting on Pixel” later this year.
Google also reviewed the state of on-device generative AI a year ago and the improvements made since then, including hardware acceleration.
