Circuit & Soul

Apple Intelligence and Industry Standards: A Microcosmic Perspective

06 Jun 2025

As much as I’ve appreciated Apple’s tiered architecture for Apple Intelligence (on-device, then private cloud, and finally third-party models), it’s no secret that the rollout has been disappointing for many, with repeated delays and even a class action lawsuit over advertising claims. However, my thoughts are less about grappling with the entire system and more about understanding it in parts, in this case by looking at a feature like Siri’s pending On-Screen Awareness alongside growing industry standards like the Model Context Protocol (MCP).

Context sharing

Of course, MCP and Apple’s protocols differ in design and purpose. The former defines a sweeping set of rules for how clients and servers may interact with LLMs and share task-relevant resources, while the latter exists only on Apple’s most recent devices to equip Siri with additional, context-dependent knowledge.
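To make that a bit more concrete: as I read the spec, a resource that an MCP server hands back to a client is deliberately generic, essentially a URI plus some text or a base64-encoded blob carried over JSON-RPC. Here is a rough Codable sketch of the resources/read result shape; the struct names are mine, not an official SDK’s.

```swift
import Foundation

// Rough sketch of the MCP "resources/read" result, per my reading of the spec.
// Struct names are mine; only the field names come from the protocol docs.
struct ResourceReadResult: Codable {
    let contents: [ResourceContents]
}

struct ResourceContents: Codable {
    let uri: String        // which resource this content came from
    let mimeType: String?  // optional hint about the content type
    let text: String?      // text resources arrive as plain strings...
    let blob: String?      // ...and binary resources as base64-encoded blobs
}
```

That genericity is much of the point: the client doesn’t need to understand the server’s domain at all; it just ferries strings and blobs into the model’s context.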

That said, there is clearly some overlap in how they may be used: both are task-dependent and concerned with getting information from one part of a user’s workflow into the LLM. My impression is that MCP’s coverage of both tools for server APIs and resources for data exchange leaves it less focused on resource format (its Resources documentation discusses data generically as files and strings), while Apple’s App Entities give developers more control over app-specific concepts, but I’ve not done a deep dive.
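For contrast, here is a minimal sketch of how an app-specific concept is exposed through Apple’s App Intents framework today. The note type and its fields are hypothetical; the conformances are the actual requirements from AppIntents.

```swift
import AppIntents
import Foundation

// A hypothetical note exposed to the system as an App Entity.
// The entity is a typed, app-defined concept rather than a generic file or string.
struct NoteEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Note")
    static var defaultQuery = NoteQuery()

    var id: UUID
    var title: String
    var body: String

    // How the system labels this entity when surfacing it to the user.
    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title)")
    }
}

// The system resolves entity identifiers back into values through a query.
struct NoteQuery: EntityQuery {
    func entities(for identifiers: [NoteEntity.ID]) async throws -> [NoteEntity] {
        // Look the notes up in the app's own store; stubbed out for this sketch.
        []
    }
}
```

Unlike the generic resource above, the system gets a typed object it can display, reference, and hand back to the app, which is roughly what I mean by more control over app-specific concepts.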

Despite some differences and the very limited nature of this discussion, I believe we can learn a little about the greater system by better understanding even its lesser parts. I can easily imagine Apple extending their App Intents and Entities model into a broader protocol for sharing context with models, but I can also see them adopting something closer to Anthropic’s Model Context Protocol throughout their ecosystem.

A complicated history with graphics standards

I can’t help but be reminded of examples from history. For instance, in the mid-1990s, there was Apple’s insistence on developing QuickDraw 3D, or QD3D, as their primary advanced graphics layer rather than adopting OpenGL (perhaps not entirely dissimilar from the tale of Microsoft’s Direct3D). More recently, in keeping with our graphics-framework theme, you could look at Apple’s Metal API versus Vulkan, the “new” OpenGL.

In the case of QD3D, Apple dropped it and ultimately adopted OpenGL, whereas today Apple continues to use Metal exclusively rather than adopting Vulkan, although they have supported some efforts, like the Game Porting Toolkit, to play nicer with Direct3D.

There are a lot of external factors. QD3D was being developed while Apple was struggling, and Steve Jobs wanted to simplify their tech stack around industry standards upon his return. I also recall a funny story in Richard Moss’s The Secret History of Mac Gaming (which is currently being offered as an expanded special edition!) in which Brian Greenstone, founder of Pangea Software and one of the best-known Mac game developers of the day, had to quit Apple to work for Apple… After being hired by Apple to work on QuickDraw 3D, he was told that “Phil Schiller wants another Nanosaur” since it had helped sell so many iMacs, but his immediate boss needed someone working on the graphics API, not making video games at the VP’s behest! [pg 325] Either way, they dropped QD3D, and the rest is history.

Conversely, Vulkan didn’t exist when Apple announced Metal. Perhaps they would have been willing to adopt OpenGL’s successor had the option existed when they needed it, but after building out their entire software and hardware ecosystem around Metal, there appears to be no incentive for them to switch.

As an aside, there is an extensive write-up by Dave DeGraw over on catskull.net about the history of Apple’s graphics APIs: https://catskull.net/history-and-overview-of-graphics-apis-on-apple-macos.html.

Full circle

Getting back to AI, I see the same pattern in Apple’s approach to Apple Intelligence, from small things like On-Screen Awareness all the way up to their overall stance on AI and its adoption. Yes, MCP did not exist when they announced some of their initiatives, but many other models and industry best practices did. Also, unlike the situation with Metal (or even QD3D for that matter), Apple’s product isn’t technically released: On-Screen Awareness and many other features of Apple Intelligence have been delayed.

The parallel to graphics APIs is imperfect, but we’ve seen that Apple’s approach to new technology standards has always been complex and evolving. As we near another developer conference and announcements of new features, I’m left with the question: Will Apple continue on their own path for AI, or will we see them change course and adopt more industry standards?

We already expect third-party AI integrations to expand in the upcoming iOS release, and I anticipate that how Apple handles adopting MCP versus maintaining their own frameworks will be an indicator of how they roll out Apple Intelligence more broadly. Either way, the future will soon tell…


P.S. It’s been a while since I’ve posted here, so it’s good to get back to a bit of writing. Part of my motivation has been playing around with more AI tools, like Cursor, to help rework my site, and it can even double as a reader and editor for my blog posts. We live in interesting times indeed!