Apple just confirmed what we've been waiting months for: Siri is getting a complete transformation powered by Google's Gemini AI. The new Siri will use a 1.2-trillion-parameter model running on Apple Private Cloud Compute, with capabilities including on-screen awareness, cross-app actions, and personal context. It arrives with iOS 26.4 in late March 2026. After testing every version of Siri since its 2011 launch, I can say this is the most radical change in its history.
What is the Apple-Google deal?
According to 9to5Mac, Apple signed a multi-year deal with Google in January 2026, paying approximately $1 billion annually for Gemini integration. Most importantly: it's white-labeled, meaning you won't see any Google branding. From the user's perspective, it's still Siri, just dramatically smarter.
The 4 key features of the new Siri
| Feature | What it does | Practical example |
|---|---|---|
| On-Screen Awareness | Siri sees what's on your screen | Reading an email about a meeting → Siri adds it to your calendar automatically |
| Personal Context | Accesses notes, emails, and messages | "Siri, what did John say about the project?" → searches your messages |
| In-App Actions | Controls apps by voice without opening them | "Siri, increase brightness in Lightroom" → executes the action directly |
| Multi-Step Chains | Chains up to 10 sequential actions | "Book dinner for 2, send the address to Maria, and set an alarm to leave" |
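Apple hasn't published the developer story yet, but In-App Actions and Multi-Step Chains are widely expected to build on the existing App Intents framework. As a rough sketch, here's what exposing the Lightroom-style brightness action from the table could look like; the intent name, parameter, and dialog are my own illustrative choices, not a confirmed API surface.

```swift
import AppIntents

// Hypothetical example: an app exposes one action to Siri through
// App Intents. Everything app-specific here (the type name, the
// parameter, the dialog) is illustrative, not a confirmed interface.
struct IncreaseBrightnessIntent: AppIntent {
    static var title: LocalizedStringResource = "Increase Brightness"

    // How much to raise the current edit's brightness, in percent.
    @Parameter(title: "Amount", default: 10)
    var amount: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real photo editor would adjust the active edit here.
        return .result(dialog: "Increased brightness by \(amount)%.")
    }
}
```

If Multi-Step Chains work the way App Intents already does, Siri's job would be sequencing up to 10 of these, feeding each action's result into the next.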
How privacy works
Apple implemented a three-layer architecture that protects your data:
- Local layer: Simple tasks are processed on-device (A18/M5 chip)
- Apple Private Cloud Compute: Complex requests go to Apple's servers with end-to-end encryption
- Privacy buffer: Only the most demanding queries reach Gemini, passing through an anonymization layer before touching Google's servers
According to Gadget Hacks, this multi-layer architecture is the most sophisticated Apple has implemented for an AI service. In my experience with virtual assistants, it's the most robust privacy approach I've seen.
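To make the flow concrete, here's a minimal sketch of that three-layer routing in Swift. Apple hasn't published any of this logic, so the tier names, the complexity score, and the thresholds are all assumptions for illustration only.

```swift
// A minimal sketch of the three-layer routing described above.
// The tiers, the 0...100 complexity score, and the cutoffs are
// assumptions, not Apple's actual implementation.
enum ProcessingTier {
    case onDevice         // local layer: runs on the A18/M5 Neural Engine
    case privateCloud     // Apple Private Cloud Compute, encrypted
    case geminiViaBuffer  // anonymized first, then sent to Gemini
}

struct SiriRequest {
    let text: String
    let complexity: Int   // 0...100, an assumed internal difficulty score
}

func route(_ request: SiriRequest) -> ProcessingTier {
    switch request.complexity {
    case ..<30: return .onDevice         // timers, settings, simple lookups
    case ..<70: return .privateCloud     // summaries, personal-context search
    default:    return .geminiViaBuffer  // open-ended, heavyweight reasoning
    }
}
```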
Old Siri vs new Siri: comparison
| Capability | Current Siri | Siri with Gemini (iOS 26.4) |
|---|---|---|
| AI model | Basic, predefined rules | Gemini (1.2T parameters) |
| Screen context | No | Yes (on-screen awareness) |
| Personal data access | Limited | Notes, emails, messages |
| Cross-app actions | Basic (open app) | Up to 10 chained actions |
| Natural conversation | Poor | Full chatbot (iOS 27) |
| Privacy | On-device | 3 layers + anonymization |
When it arrives and on which devices
The confirmed timeline:
- iOS 26.4 (late March 2026): Personal Context, On-Screen Awareness, In-App Actions
- iOS 26.5 (May 2026): Additional features not ready for March
- iOS 27 (September 2026): Siri as a full conversational chatbot, Apple Foundation Models v11
Compatible devices: iPhone 15 Pro and later, plus iPads and Macs with an M1 chip or later. Older models will get limited support.
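If you want to check programmatically where a given iPhone falls, the standard trick is reading the hardware model identifier via sysctl. A sketch below, with the cutoff (iPhone16,x, i.e. the 15 Pro family) mirroring the list above; the function names are mine.

```swift
import Foundation

// Rough sketch: feature-flag the new Siri capabilities by device
// model, the way a third-party app might. The sysctl lookup is a
// standard technique; the eligibility rule is my simplification.
func deviceModelIdentifier() -> String {
    var size = 0
    sysctlbyname("hw.machine", nil, &size, nil, 0)
    var machine = [CChar](repeating: 0, count: size)
    sysctlbyname("hw.machine", &machine, &size, nil, 0)
    return String(cString: machine)
}

func supportsNewSiri() -> Bool {
    // "iPhone16,1" / "iPhone16,2" are the 15 Pro and 15 Pro Max;
    // anything newer (iPhone17,x and up) also qualifies.
    let id = deviceModelIdentifier()            // e.g. "iPhone16,1"
    guard id.hasPrefix("iPhone"),
          let major = Int(id.dropFirst(6).prefix(while: { $0.isNumber }))
    else { return false }                       // iPads/Macs need an M1+: separate check
    return major >= 16
}
```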
Common issues
Some features may be delayed
According to 9to5Mac, Apple is facing internal challenges with the new Siri's reliability. Some features may be pushed to iOS 26.5 or iOS 27. The core trio (Personal Context + On-Screen Awareness + In-App Actions) is still on track for March.
My iPhone isn't compatible
If you have an iPhone 14 or earlier, you won't get the full new Siri. Apple may offer limited functionality with heavier cloud dependency, but the complete experience requires an iPhone 15 Pro or later. I've been using the 15 Pro for a while, and its Neural Engine makes a real difference for features like these.
Am I giving my data to Google?
Not directly. Apple uses a privacy buffer that anonymizes data before sending it to Gemini. Plus, most queries are processed locally or on Apple Private Cloud Compute, not on Google's servers.
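As a toy illustration of what that buffer could do, here's a redaction pass built on Foundation's NSDataDetector. This is a stand-in I wrote for this article; Apple hasn't documented the real anonymization mechanism.

```swift
import Foundation

// Toy illustration of a "privacy buffer": strip obvious identifiers
// (phone numbers, links/emails, addresses) before a query could
// leave Apple's infrastructure. Not Apple's actual mechanism.
func redactIdentifiers(in query: String) -> String {
    let types: NSTextCheckingResult.CheckingType = [.phoneNumber, .link, .address]
    guard let detector = try? NSDataDetector(types: types.rawValue) else {
        return query
    }
    var redacted = query
    let range = NSRange(query.startIndex..., in: query)
    // Replace matches back-to-front so earlier ranges stay valid.
    for match in detector.matches(in: query, range: range).reversed() {
        if let r = Range(match.range, in: redacted) {
            redacted.replaceSubrange(r, with: "[REDACTED]")
        }
    }
    return redacted
}

// "Email maria.lopez@example.com about dinner at 555-0123" might
// come out as "Email [REDACTED] about dinner at [REDACTED]".
```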