This is what the new Siri with Gemini will be like: dates, features, and agreements between Apple and Google

  • Apple will launch a new Siri powered by Gemini in iOS 26.4, with a beta planned for February and a rollout in the spring.
  • The integration will allow Siri to understand context and on-screen content and to execute complex actions within apps, bringing it closer to an advanced chatbot.
  • Gemini will operate within Apple's infrastructure (on the device and Private Cloud Compute), without sending data to Google servers.
  • The agreement with Google is worth around $1 billion annually and acts as a temporary solution while Apple develops its own models.

new Siri with Gemini on Apple devices

Siri's next big evolution runs through Google Gemini. After years of complaints about the stagnation of its voice assistant, Apple has opted for a strategic alliance with Google to inject advanced artificial intelligence models into its ecosystem. The result will be a much more contextual Siri, capable of better understanding what we want and handling complex tasks without the user having to jump from app to app.

This change is not limited to a simple software tweak: it involves a profound rewriting of Siri's "brain", relying on Gemini's technology but under Apple's privacy and control rules. The Cupertino company, heavily criticized for lagging behind rivals like Google and OpenAI in the AI race, needs a game-changer, and it intends to achieve it precisely with this "new Siri with Gemini", which will arrive first on the iPhone and then on the rest of its devices.

When will the new Siri with Gemini arrive, and on which devices will it be available?

Apple is managing a two-phase timeline for deploying the new Siri. The first wave will arrive with iOS 26.4, whose beta is scheduled for February and whose stable release is expected in the spring, sometime between March and April. This version will finally put the generative AI capabilities that the company previewed under the Apple Intelligence umbrella into users' hands.

In that first update, Siri will begin to lean on Gemini for advanced comprehension tasks, but its classic interface won't yet be completely overhauled. We'll see a smarter Siri "on the inside," one that better understands our requests and can navigate between apps, but it won't yet be the full conversational chatbot Apple is preparing for the next stage.

That bigger leap will come with iOS 27, iPadOS 27, and macOS 27. In that generation, expected by the end of the year after its presentation at WWDC in June, the idea is for Siri to stop being just a traditional voice assistant and instead behave like a full conversational chatbot. In the style of ChatGPT or Gemini itself, long conversations, persistent context, the ability to type or dictate requests, and the understanding of complex chains of commands will form part of the core experience.

Regarding availability by model, Apple will continue to prioritize its latest devices. The most advanced features of the new Siri with Gemini will focus on the latest-generation iPhones and on devices powerful enough to handle hybrid AI models (on-device and in Apple's cloud), especially in Europe, where regulatory and privacy demands weigh more heavily.

What specific changes will Siri bring when Gemini is integrated?

The big difference compared to the current Siri will be its ability to understand context, on-screen content, and personal data. Instead of limiting itself to isolated commands, Apple wants the assistant to truly act "on behalf of the user" within the system and applications, without requiring users to manually open menus or search for functions.

Among the features expected for this new Siri with Gemini, several key areas stand out. For one, understanding what appears on the screen: the assistant will be able to interpret the content visible on the iPhone (an email, a chat, a website, a document) and use it as context to respond or take action. It will also be able to use personal information, such as emails about flights or messages with reservations, to answer more complex questions like "what time does my mother's flight arrive?" without the user having to specify every detail.

Another pillar will be the chained execution of actions within applications. Instead of asking for very specific things ("open Mail", "write a message"), Siri will be able to receive higher-level requests such as "prepare a summary of the report I was sent yesterday and send it to my boss" and chain together the necessary steps, using Gemini when the task requires more sophisticated reasoning.

Apple is also working on what has been internally described as "World Knowledge Answers": responses based on general knowledge gathered from the internet, accompanied by references or citations, similar to what ChatGPT or Gemini already offer. This will allow users to ask more open-ended questions without leaving Siri.

Integration with apps will also take a significant leap. Siri will be able to activate specific functions within applications using only your voice: searching for specific photos, writing notes, creating complex reminders, or composing long emails, all with a grasp of nuance closer to that of current AI chatbots than to the traditional assistant.
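On the developer side, Apple already exposes this kind of in-app action through its App Intents framework, which is widely assumed to be the plumbing the new Siri will lean on. The snippet below is a minimal, hypothetical sketch of how a notes app might declare an action an assistant could invoke and chain with others; the type and parameter names are illustrative, not part of any announced API.

```swift
import AppIntents

// Hypothetical intent a notes app might expose so an assistant can create
// a note from a spoken request. Names and parameters are illustrative only,
// not Apple's actual implementation of the new Siri.
struct CreateNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Note"
    static var description = IntentDescription("Creates a note with the given title and body.")

    @Parameter(title: "Title")
    var noteTitle: String

    @Parameter(title: "Body")
    var noteBody: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // A real app would write to its data store here;
        // this sketch simply returns a confirmation string.
        let summary = "Created note \"\(noteTitle)\""
        return .result(value: summary)
    }
}
```

Because intents like this one declare their inputs and outputs, an assistant can in principle compose them, feeding the result of one step (a summary generated from a document, say) into the next (sending an email), which is exactly the kind of chaining described above.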

Gemini as Siri's "secondary brain": how tasks are divided

The way Apple will integrate Gemini will not be based on completely replacing Siri, but on using it as a "secondary brain" to turn to when requests become too complex. For simple and routine tasks, the usual system will continue to operate, processing requests very quickly on the device itself with local models.

When the user makes a request that requires reasoning, deeper natural language interpretation, or handling of large amounts of information, Siri will quietly offload part of the work to Gemini. In practice, the user will notice that the assistant can now summarize long documents, analyze news, prepare trips, chain multiple steps across different apps, or handle instructions with many nuances.
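To make the division of labor concrete, here is a purely illustrative routing sketch in Swift. None of these types exist in Apple's SDKs; they stand in for whatever internal dispatcher decides whether a request stays on-device, goes to Private Cloud Compute, or reaches the Gemini model hosted there.

```swift
// Illustrative only: a stand-in for the "secondary brain" routing
// described above. The enum cases and heuristics are assumptions.
enum SiriBackend {
    case onDeviceModel          // fast, local, for routine commands
    case privateCloudCompute    // Apple's servers for heavier Apple-model tasks
    case geminiViaPCC           // Gemini, reached through Apple's infrastructure
}

struct SiriRequest {
    let text: String
    let needsLongContext: Bool   // e.g. summarizing documents or long chats
    let stepsAcrossApps: Int     // e.g. "summarize the report and email it" = 2
}

func route(_ request: SiriRequest) -> SiriBackend {
    // Routine, single-step commands stay on the device.
    if !request.needsLongContext && request.stepsAcrossApps <= 1 {
        return .onDeviceModel
    }
    // Moderately heavy work without long context goes to Private Cloud Compute.
    if !request.needsLongContext && request.stepsAcrossApps <= 2 {
        return .privateCloudCompute
    }
    // Complex reasoning or long multi-app chains fall through to Gemini,
    // still executed inside Apple's infrastructure per the article.
    return .geminiViaPCC
}
```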

One of the interesting points for European users will be the relationship between the new Siri with Gemini and the Google ecosystem. The integration will allow users to directly request things that currently require opening Google apps: locating a file in Drive, composing or replying to an email in Gmail, or preparing a draft in Docs, all through Siri commands without having to navigate through menus.

Beyond text, Google's model is natively multimodal, which means it can understand text, audio, and images in combination. By bringing this capability to the iPhone's camera, Gemini will be able to analyze what the device sees in real time, with Siri delivering the answer to the user. It's an evolution of current visual search, but with a significant leap forward in understanding and context.

In any case, Gemini's participation will be transparent and optional for the user. Apple will implement the integration in a way that allows each user to decide whether to link their account and authorize Siri to consult Google's systems. Those who choose not to will continue using a more limited version of Siri, without that external connection.

Privacy, data, and the Private Cloud Compute infrastructure

One of the most sensitive points of "Siri with Gemini" is what happens to the data. Apple has insisted that, despite the use of Google technology, Gemini will not function as a classic external service within the iPhone. Instead of sending requests to Google servers, queries will be executed on the device itself or on Apple's own infrastructure, known as Private Cloud Compute.

This hybrid model is based on a combination of Apple-designed chips and proprietary servers, where AI tasks that require more processing power are handled without exposing the data to third parties. Before leaving the device, the information is anonymized and personal identifiers are removed, and the company assures that nothing is stored in a way that can be linked to a specific user.
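The exact Private Cloud Compute pipeline is not public, but the scrubbing step the article describes could look conceptually like the sketch below, where a query loses its device identifier and obvious personal identifiers before it leaves the phone. Everything here is an assumption for illustration, not Apple's actual code.

```swift
import Foundation

// Illustrative-only sketch of pre-upload scrubbing. The real pipeline
// and its data structures are not public; these are assumed for demonstration.
struct OutboundQuery {
    var text: String
    var deviceIdentifier: String?   // e.g. a stable per-device ID
}

func anonymize(_ query: OutboundQuery) -> OutboundQuery {
    var scrubbed = query
    // Drop anything that could tie the request to a specific user or device.
    scrubbed.deviceIdentifier = nil
    // Redact obvious personal identifiers, such as email addresses, in the text.
    scrubbed.text = scrubbed.text.replacingOccurrences(
        of: #"[\w.+-]+@[\w-]+\.[\w.]+"#,
        with: "[redacted email]",
        options: .regularExpression
    )
    return scrubbed
}
```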

As Tim Cook has emphasized, the agreement with Google has been specifically designed to maintain the privacy standards that Apple has championed for years. The company emphasizes that neither Google nor other partners will have access to the content users generate when interacting with Siri, even when that interaction leverages advanced language models like Gemini.

In practice, this means that European users will continue to have the same data protection framework as before, but with a much more capable assistant. Apple retains control over the experience and security, while Google provides the algorithmic muscle that enables the leap in language understanding and generation.

This architecture also fits with European regulations and the growing demands regarding data sovereignty. For Apple, it is crucial to be able to say that it is not "handing over" user information to Google even though it benefits from Google's technology, and that is why it has designed the Gemini integration very differently from how the chatbot is offered through Google's own services.

The economic agreement with Google and the long-term strategy

The least visible aspect of this entire operation is the economic component. Several sources indicate that Apple will pay Google around $1 billion a year for access to a customized version of Gemini tailored to the needs of Siri and Apple Intelligence.

This figure is in addition to the revenue Google already earns from being the default search engine in Safari, an agreement valued at somewhere between $18 billion and $20 billion annually. From a business perspective, expanding the relationship into the field of AI was much simpler than starting from scratch with another partner, both in cost and in time.

Apple internally describes this agreement as a temporary solution. The company continues to develop its own artificial intelligence models with the idea of, in the medium or long term, reducing dependence on external partners and regaining full control over key technology, as it has done in other areas (chips, hardware design, operating systems).

Until relatively recently, Apple explored alternatives such as a deeper alliance with Anthropic or an expansion of its collaboration with OpenAI. However, negotiations with some of these players were complicated by cost and conditions, while market pressure for a clear answer in AI left little room for further delays.

Ultimately, the choice of Gemini is a sign of pragmatism: Apple acknowledges that, in this area, being late means losing relevance, and it has chosen to rely on a partner that already has a mature and scalable model while it finishes building its own alternative. For users, this translates into tangible improvements without having to wait several years for a fully in-house solution to be ready.

Siri as an advanced chatbot: the next step with iOS 27

The version of Siri we'll see with iOS 26.4 will be, to a large extent, a deep upgrade of the internal engine more than a radical redesign of the experience. The real facelift, the one that will transform Siri into a full-fledged chatbot, is reserved for the next generation of the system.

With iOS 27, Apple plans for Siri to behave like a full conversational assistant, capable of maintaining long dialogues, retrieving context from previous messages, remembering preferences, and accepting written instructions in addition to voice commands. This evolution will align Apple's assistant with what ChatGPT and Gemini already offer in their web interfaces and dedicated apps.

At that stage, the user will be able to ask the assistant to generate texts, summarize documents, analyze files, and assist with programming tasks, all from within the system itself. It is also expected to perform integrated internet searches, generate images, and work with content that the user uploads or has stored on the device.

The key will be in how Apple transforms that power into something easy to use and consistent with its ecosystem. It's not just about having a powerful model, but about integrating it into the operating system, native apps, and daily workflows so that it's perceived not as an afterthought but as a natural extension of what we already do with the iPhone, iPad, or Mac.

The company is aware that its reputation with Siri isn't exactly stellar and that any failure in this new phase will be closely scrutinized. That's why it has opted for a gradual transition: first strengthening the "brain" with Gemini, then deploying the more ambitious interface once it has a clear understanding of how users react and where the friction points are.

What's coming with the new Siri with Gemini is, in practice, a category change for Apple's assistant. It will go from being a tool that many used only for basic tasks to becoming a central part of the experience on their devices, provided that the implementation meets expectations and respects the promises of privacy, something especially sensitive in markets like Europe.

Everything points to the fact that, in the coming months, iPhone users in Spain and the rest of Europe will begin to notice a different Siri: more attentive to context, more capable of moving between applications, and better prepared to understand complex requests, thanks to a Gemini that works in the background but within Apple's infrastructure, in a delicate balance between AI power, business, and data protection.
