CUPERTINO, United States — Apple has revealed new details about how the next generation of Siri will work, confirming that the voice assistant will be powered in part by Google’s Gemini artificial intelligence models in what the company describes as the most significant upgrade to Siri since its launch.
The revamped assistant forms part of Apple’s broader Apple Intelligence strategy and is designed to be more conversational, context-aware, and capable of handling complex tasks, while maintaining Apple’s long-standing emphasis on user privacy.
Apple stressed that Gemini’s involvement is a collaboration rather than a replacement for Apple’s own artificial intelligence systems.
According to the company, Gemini will function as a foundation model that enhances Siri’s reasoning and language understanding, while Apple continues to build and control its own AI layers on top of it.
“This approach allows us to move faster on intelligence while retaining control over how Siri behaves, integrates with apps, and protects user data,” Apple said in briefing materials accompanying the announcement.
The company disclosed that the new Siri will operate using a hybrid processing model. Simple tasks such as setting reminders, sending messages, or adjusting device settings will be handled directly on the user’s device.
More complex requests, including multi-step actions, advanced queries, and deeper contextual understanding, will be processed through Apple’s Private Cloud Compute system.
Apple said Private Cloud Compute relies on Apple-designed servers that do not store user data and are built to meet the company’s privacy standards.
Even when Gemini models are involved, Apple insists that user requests will not be sent to third-party servers.
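To make the described hybrid model concrete, the sketch below shows one way such a routing decision could be pictured. It is purely illustrative: the types and function names (SiriRequest, ProcessingTarget, route) are hypothetical, Apple has not published an API for this, and the actual implementation is not public.

```swift
// Illustrative sketch only: hypothetical types, not an Apple API.
// Models the described split between on-device handling and
// Private Cloud Compute for more complex requests.

enum RequestComplexity {
    case simple   // reminders, messages, device settings
    case complex  // multi-step actions, advanced queries, deeper context
}

struct SiriRequest {
    let utterance: String
    let complexity: RequestComplexity
}

enum ProcessingTarget {
    case onDevice
    case privateCloudCompute  // Apple-designed servers that do not store user data
}

func route(_ request: SiriRequest) -> ProcessingTarget {
    switch request.complexity {
    case .simple:
        // Handled entirely on the user's device.
        return .onDevice
    case .complex:
        // Processed through Private Cloud Compute; per Apple, requests
        // are not sent to third-party servers even when Gemini models
        // are involved.
        return .privateCloudCompute
    }
}

// Example usage
let reminder = SiriRequest(utterance: "Remind me to call Dana at 5pm", complexity: .simple)
let planning = SiriRequest(utterance: "Plan a dinner with this email thread's participants", complexity: .complex)
print(route(reminder))  // onDevice
print(route(planning))  // privateCloudCompute
```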
With Gemini integration, Siri is expected to become more natural and flexible in conversation. Users will be able to ask follow-up questions, refine commands mid-interaction, and complete tasks that previously required several separate instructions.

The upgraded assistant will also be able to understand personal context across apps such as Mail, Calendar, and Messages, recognise what is currently displayed on the screen, and perform multi-step actions like planning events, organising information, or retrieving contextual details.
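One way to picture the cross-app, multi-step behaviour described above is as a chain of small steps, each drawing on context gathered from a different app. The sketch below is hypothetical throughout (ContextItem, Step, and runPlan are invented names) and does not reflect Apple's internal design or any published framework.

```swift
// Hypothetical illustration of a multi-step action that pulls
// personal context from several apps. Invented names; not Apple APIs.

struct ContextItem {
    let source: String   // e.g. "Mail", "Calendar", "Messages", "Screen"
    let content: String
}

typealias Step = ([ContextItem]) -> ContextItem

func runPlan(steps: [Step], context: [ContextItem]) -> [ContextItem] {
    var gathered = context
    for step in steps {
        // Each step can read everything gathered so far, mirroring the
        // cross-app contextual understanding described in the announcement.
        gathered.append(step(gathered))
    }
    return gathered
}

// Example: "Plan dinner with the people on this email thread."
let initialContext = [
    ContextItem(source: "Mail", content: "Thread participants: Dana, Priya"),
    ContextItem(source: "Screen", content: "Viewing: 'Team offsite' email")
]

let steps: [Step] = [
    { _ in ContextItem(source: "Calendar", content: "Free slot: Friday 7pm") },
    { ctx in ContextItem(source: "Messages", content: "Draft invite to \(ctx[0].content)") }
]

for item in runPlan(steps: steps, context: initialContext) {
    print("[\(item.source)] \(item.content)")
}
```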
Apple said the upgrade will also significantly improve the accuracy of Siri’s responses to general knowledge queries.
Privacy, Apple reiterated, remains central to the redesign.
The company said user data will not be used to build advertising profiles or stored for long-term AI training. Processing will occur either on the device or within Apple’s secure cloud infrastructure.
Apple has not announced a firm release date, but reports indicate that Gemini-powered Siri features will begin rolling out in 2026, starting with limited functionality and expanding gradually through software updates.
The move signals Apple’s most ambitious push yet into advanced generative AI, as competition intensifies among major technology firms racing to redefine digital assistants.