Apple’s artificial intelligence features in iOS 18 will reportedly rely on on-device processing, with large language models running locally rather than in the cloud as its competitors’ do.
Apple’s update, which features generative AI services similar to Microsoft Copilot and Google Gemini, will run on the onboard Neural Engine, according to a report by Bloomberg’s Mark Gurman, with cloud services reserved for some more advanced functionality.
“The first wave of features appears to be fully on-device,” Gurman wrote, meaning the company’s large language model, the software that powers the new functionality, has no cloud processing component. This approach has advantages in both speed and privacy.
The iPhone maker has long emphasized privacy and has sought alternatives to sending vast amounts of potentially sensitive data to the cloud. That effort includes new AI models such as Ferret-UI, which can analyze what is on a phone’s screen, made possible by advanced AI-capable chips such as the M4.
Gurman said Apple’s first large language model will not involve cloud processing. This is the underlying software that powers text analysis, summarization, and other AI capabilities, and the approach could make Apple a leader in on-device AI.
How will Apple move AI away from the cloud?
Apple has invested heavily in AI-enabled hardware over the years. It also has a number of AI services running on top of the Neural Engine, including some Siri features and transcription.
Over the past few months, Apple’s AI research division has announced multiple new models focused on maximizing the processing power of the company’s silicon, with on-device models that can match, and in some cases exceed, the performance of big players like OpenAI’s GPT-4 on certain features.
This means everything happens on your phone, rather than text prompts and data being sent over the internet to a large language or image model running in the cloud, then returned to appear on your device.
On-device AI is less powerful and can actually be slower than sending a request to the cloud when that request is complex, but it is more secure and can run offline.
What does this mean for iOS 18?
“As the world awaits Apple’s big AI announcement on June 10th, it looks like the first wave of features will work entirely on devices,” Gurman wrote in the Power On newsletter. That means “there is no cloud processing component in the company’s large language model,” he says.
Some of the more advanced features rumored for Siri 2.0, such as image generation and long-form text generation in Messages and tools like Keynote, will still require cloud processing and might be delivered through third parties such as Google Gemini and OpenAI’s ChatGPT.
Apple is rumored to have held multiple partnership meetings with AI companies such as OpenAI, Google, and China’s Baidu for potential integration into its ecosystem.
Multiple AI features and integrations are rumored as part of the larger iOS 18 upgrade, including in the Health app, where the data is sensitive enough that Apple would likely need to keep processing on the device.
Other updates may include generating playlists in Apple Music, creating Keynote slides, generating text in Pages, and more, similar to features Google Docs and Microsoft Word already offer.