WWDC 2024: Apple Intelligence, OS Updates And Other Highlights

By Anshel Sag - June 28, 2024
CUPERTINO, CALIFORNIA – JUNE 10: Apple CEO Tim Cook delivers remarks at the start of the Apple Worldwide Developers Conference (WWDC) on June 10, 2024 in Cupertino, California. (Photo by Justin Sullivan/Getty Images) GETTY IMAGES

Apple opened its annual WWDC keynote with a bang, starting with a whimsical skydiving entrance video featuring Tim Cook standing on Apple Park’s roof to introduce the company’s newest software improvements, including Apple Intelligence, the company’s approach to AI.

Being Apple, the company focused Apple Intelligence heavily on privacy, which has long been a core tenet for the company in everything it does. Apple has also focused Apple Intelligence on simplicity, with the goal of making AI more accessible and easier to use than many other alternatives available today. While Apple has clearly decided to brand its AI strategy differently, the important point is that the company is following an on-device-first approach for AI that it touts as inherently more secure and private.

VisionOS 2: Not A Full OS Upgrade

Apple led off the keynote with VisionOS 2, which is really a collection of all the features that didn’t make it into the initial VisionOS launch. I would be much more comfortable calling this VisionOS 1.5 or something like that; nevertheless, there are still lots of improvements. One of those features is the ability to convert 2-D photos into 3-D; this capability already exists thanks to an app called Immersity, but Apple now integrates it into the OS. I have had a chance to try this on my Vision Pro, and it works incredibly well to turn any 2-D photo into a 3-D one, instantly bringing anyone’s library of flat photos to life as stunning 3-D images.

Speaking of visual content, as part of the VisionOS 2 update Apple announced the fruit of its collaboration with Canon and Blackmagic Design: new cameras for easily creating high-fidelity content for the Vision Pro. Apple also added new features including Multiview in the TV app as well as Travel Mode on trains. I’m not quite sure why Travel Mode needed specific train support, but I guess trains are different enough from planes to warrant separate treatment.

Apple also added a lot more gesture controls to access basic things including the control panel and home screen, which previously required multi-step actions. These improvements definitely feel like they’re borrowed from Meta, much like we’ve seen Meta respond to Apple’s better passthrough video quality. Apple also finally made it possible to reorganize your home screen icons; it’s ridiculous but true that you couldn’t do this before now.

One major improvement in VisionOS 2 is the addition of higher-resolution and wider-aspect-ratio virtual screens in MacOS sharing, which will likely enable new types and sizes of virtual displays. Another new feature that seems like it should have shipped at launch is support for Bluetooth mice; previously, the Magic Trackpad was the only pointer device other than your hand that worked in VisionOS.

Apple also introduced three new APIs for VisionOS: volumetric APIs, TabletopKit for building tabletop AR games (similar to those from Tilt Five), and Enterprise APIs. One major development coming with the Enterprise APIs is that Apple will open up access to the headset’s camera for enterprise developers. I spoke about this with Campfire CEO Jay Wright, who is developing a Vision Pro version of the Campfire app, and he said, “Camera access makes Vision Pro an alternative to HoloLens for the most widespread enterprise use case: remote assistance. We’re excited to use it in Campfire and hope to see a similar capability in [Meta’s] Horizon OS.” Apple seems to want to keep this feature, and the privacy and security concerns associated with it, away from consumers, putting that burden on business app developers. Regardless, I believe these new APIs help broaden Vision Pro’s appeal to developers and should enable more capable VisionOS apps.

A customer tries his Vision Pro at the launch of the Apple Vision Pro at Apple The Grove in Los Angeles, California, on February 2, 2024. The Vision Pro, the tech giant’s $3,499 headset, is its first major release since the Apple Watch nine years ago. (Photo by David Swanson/AFP via Getty Images) AFP VIA GETTY IMAGES

iOS Gets Upgraded Messaging And AirPods Get Gestures

For iOS 18, most of the real improvements came later in the event with the discussion of Apple Intelligence, so the major points of interest early on were the new levels of customization in the home screen, control center and lock screen, which all feel very Android-esque. Apple has also overhauled the Photos app across all platforms.

Apple also announced the ability to send and receive messages via satellite, including SMS messages to non-iMessage users. Speaking of SMS, Apple also briefly—in a momentary flash on the screen—brought up RCS support, but users without iMessage will still be branded with the green bubble in group chats.

Apple announced voice isolation for AirPods Pro to deliver improved voice quality in noisy environments; it also talked about lower-latency AirPods connections for gaming. Apple also announced Siri interactions on AirPods Pro, which allow for head-gesture controls. I still believe that AirPods Pro are the best value for earbuds on the market, even though I have a hard time keeping them in my ears.

WatchOS Gains Fitness Granularity And iPadOS Adds Math

Apple’s biggest improvements for WatchOS come in its new Vitals app, which is now the focal point for health tracking on the Apple Watch. The Vitals app will notify users when data falls outside the bounds of their regular health metrics. Apple also added more features to its Health app’s cycle tracking for Apple Watch, which can now help women track their pregnancies. Apple has also added a double-tap API, which enables wearers to use their own fingers to select things on the watch without touching the screen. The watch detects the gesture by combining its accelerometer, gyroscope and optical heart rate sensor to pick up the subtle wrist movements and blood-flow changes that occur when you tap your fingers together. I believe that double-tap will eventually be integrated into VisionOS for haptic feedback when making selections in spatial apps.

iPadOS adds many of the same improvements as iOS 18, including the new Photos app and customizations. It also gains an interesting new Math Notes app and, for the first time, a calculator app, whose long absence has been a point of contention for iPad users. Apple is also leaning further into tablet gaming with Game Mode for iPadOS, which should enable better gaming experiences. Unfortunately, iPadOS didn’t get much in terms of the bigger improvements that people—including me—were looking for. After using the new M4 iPad Pro and feeling like it could use a major UI overhaul to make it more capable, I was disappointed to see so little done for iPadOS in this release. The new iPads are extremely powerful and have gorgeous displays, but feel extremely limited by iPadOS. I believe the new iPads could be real competitors for productivity use cases if Apple had something like “MacOS Mode” or some kind of desktop mode.

MacOS Sequoia Gains Deeper Smartphone Integration

Speaking of desktops, Apple announced the latest version of MacOS, called Sequoia, with few major improvements. MacOS does finally get better window tiling, a.k.a. window snapping, which has been a feature on Windows forever. I believe that one of the best features Apple introduced was iPhone mirroring, which I can’t believe wasn’t already a feature in MacOS. I believe that Apple’s implementation of phone mirroring will probably be cleaner than what we’ve seen from other sources, for example Screenovate’s tool that Dell used in its old Dell Mobile Connect app. (Intel eventually purchased Screenovate and used its technology to create Unison.) Apple also introduced unified notifications across iOS and MacOS, which should simplify people’s lives. However, it might also increase the total volume of notifications and could play right into Qualcomm’s new Snapdragon X Elite ad campaign about notification overload. Apple also touted the new ability for developers to port games across all of Apple’s silicon and leaned on Ubisoft to announce that new titles would be coming to MacOS and iPadOS.

Apple Intelligence: AI Rebranded and Simplified

Apple spent about one-third of the WWDC keynote talking about Apple Intelligence and the company’s approach to AI. This approach is based on five conceptual pillars: powerful, intuitive, integrated, personal and private. These five pillars support the categories of features and apps in Apple Intelligence that leverage language, images, actions and personal context.

The first big step in implementing Apple’s AI approach is a major overhaul of Siri, now with better understanding of and context for users’ questions. Interestingly, Apple didn’t mention that Siri has gotten faster or more accurate; this is funny (but not “ha-ha funny”) considering that my experiences with Siri have been quite poor compared to Google’s Gemini and Amazon’s Alexa—and I’ve used all of them a lot. Apple also talked about how it will be deploying a private cloud built on Apple silicon to enable more compute-heavy AI experiences. Still, it said that most things would be processed on-device, with the OS deciding how to split up the compute using Apple’s own in-house foundation models. Apple benchmarked its models against Microsoft’s Phi-3 models, which are used in Copilot+ PCs, indicating a clear focus from Apple on attacking Windows.
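Conceptually, this on-device-first split works like a routing policy: run the request locally whenever the local model can handle it, and escalate to the private cloud only when it can’t. The sketch below is purely illustrative; the function names, fields and threshold are my own assumptions for explanation, not anything Apple has published.

```python
# Illustrative sketch of an on-device-first AI routing policy.
# All names and thresholds here are hypothetical assumptions --
# Apple has not disclosed how this decision is actually made.

from dataclasses import dataclass

@dataclass
class Request:
    tokens: int                   # size of the prompt plus context
    needs_world_knowledge: bool   # e.g., open-ended general questions

ON_DEVICE_TOKEN_BUDGET = 2048     # assumed capacity of the local model

def route(req: Request) -> str:
    """Prefer the on-device model; escalate only when necessary."""
    if not req.needs_world_knowledge and req.tokens <= ON_DEVICE_TOKEN_BUDGET:
        return "on-device"            # private, no network round-trip
    return "private-cloud-compute"    # larger server model on Apple silicon

print(route(Request(tokens=512, needs_world_knowledge=False)))   # on-device
print(route(Request(tokens=8000, needs_world_knowledge=True)))   # private-cloud-compute
```

The design point the sketch captures is that privacy falls out of the default: most requests never leave the device, and the cloud path is the exception rather than the rule.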

Apple’s comparison of performance across competing small language models APPLE

Apple also introduced some generative AI capabilities, including Genmoji (generative + emoji) and Image Playground, which are likely to deliver capabilities similar to products from startups such as Midjourney. Apple also introduced AI-enabled writing tools, including proofreading, which I believe will likely displace popular proofing apps such as Grammarly. Apple also introduced image-based tools including natural language search for photos, Image Wand and Clean Up in Photos.

None of these tools are necessarily novel implementations of AI, but rather they are platform-level tools that are free to users and take advantage of on-device computing capabilities. I believe that the best AI tools are free to consumers, come pre-loaded with the OS and don’t require cloud computing to run, and this is why I believe Apple Intelligence will be seen as successful. It seems clear that Apple has made conscious decisions about which low-hanging fruit it should implement with AI itself; beyond that, it will let developers come up with novel ideas using Apple development tools such as Xcode, which is itself getting AI enhancements. Apple also talked about third-party AI tools including ChatGPT, which will be integrated across iOS, iPadOS and MacOS but won’t be foundational to the Apple Intelligence experience.

One major thing to note is that Apple has limited Apple Intelligence’s reach to iPhone 15 Pro and newer smartphones, Apple silicon Macs and iPads. This means that if smartphone users want access to Apple’s latest and greatest AI capabilities, they will have to acquire the latest and greatest hardware, which seems like a good way for Apple to monetize Apple Intelligence. It’s unclear as yet how much Apple Intelligence will accelerate the refresh cycle for its users, but Apple needs to do a better job of subtly letting people know that these features require the latest hardware.

A laptop keyboard and Apple Intelligence on website displayed on a phone screen are seen in this illustration photo taken in Krakow, Poland on June 11, 2024. (Photo by Jakub Porzycki/NurPhoto via Getty Images) NURPHOTO VIA GETTY IMAGES

Apple Intelligence And Everything Else

Apple has finally laid down its long-awaited AI strategy, and Apple Intelligence is at the heart of it. It was clear during the WWDC presentation that Apple glossed over many of the improvements to its other platforms in its haste to get to the Apple Intelligence section. Yet there are holes in Apple Intelligence’s rollout; one of the biggest oversights was that Apple didn’t announce any Apple Intelligence capability for VisionOS, even though the Vision Pro ships with an M2 processor and should theoretically be just as capable of running Apple Intelligence as any other Apple silicon device.

Apple wants users and developers to know that it has staked its claim in the AI wars with Apple Intelligence and will move forward with its own brand for AI. It isn’t the least bit surprising that the company decided to take an on-device-first approach to AI, and it will be interesting to see more details about its cloud-based private servers based on Apple silicon and how those will scale. Apple’s partnership with OpenAI also gives it access to more powerful LLMs without compromising on the capabilities that are already built into the OS.

I believe that Apple’s approach will further push the industry in the direction of on-device AI, which we predicted late last year. As Apple’s approach shows, hybrid experiences will always be necessary, given the economics of cloud computing at scale and the need for privacy, depending on the application and workload. Apple’s approach is extremely security- and privacy-centric, which once again aligns very well with the company’s public image and investments. I am personally excited to see how these capabilities stack up against other AI implementations, and I am thankful that I got an iPhone 15 Pro Max last year. I know I’m one of the lucky ones, however; one thing that feels like a disconnect for me is that Apple claimed that Apple Intelligence is “AI for the rest of us” . . . but it will only be available on the company’s most expensive gear.

Anshel Sag
VP & Principal Analyst | Website

Anshel Sag is Moor Insights & Strategy’s in-house millennial with over 15 years of experience in the IT industry. Anshel has extensive experience working with consumers and enterprises, managing both B2B and B2C relationships and gaining empathy for and understanding of what users really want. Some of his earliest experience goes back as far as his childhood, when he started PC gaming at the ripe old age of 5, built his first PC at 11 and learned his first programming languages at 13.