Apple’s Long-Awaited Rollout of AI

This past Monday was the first day of Apple’s annual Worldwide Developers Conference (WWDC), commonly referred to as Dub Dub.

It is Apple’s time to show off their latest innovations and their plans for the fall. This year Apple’s big announcement is that AI is coming to their platforms, though they cheekily rebranded it as Apple Intelligence (AI). And although this is the first time they have used that term in a keynote, Apple has actually been doing ML (machine learning, the less buzzy label for much of what we now call AI) for years, and has even built dedicated silicon for on-device ML processing (the Neural Engine) into the iPhone dating all the way back to 2017.

So why have they suddenly changed their tune? Well, aside from all the market and industry pressure to jump on the AI bandwagon, Apple might just finally be ready to ship what they have been working on for the last 10+ years: their version of AI.

Since its inception, Apple has had a knack for setting trends in the tech industry. They have had such great success reinventing major products like the personal computer (Mac), music players (iPod), and the smartphone (iPhone) that their name is synonymous with innovation. Back in March they launched an entirely new computing platform (also 10+ years in the making) with their Apple Vision Pro goggles (read my review here). With this new high-end hardware they released an entirely new operating system, visionOS, for what they call Spatial Computing. They are again trying to reinvent this space to be more in line with their vision of the future. And they may very well succeed, because their real strength lies in their ability to take cutting-edge technology to the next level and then slap on a coat of signature Apple polish, which tends to make it usable for all the non-nerds.

Apple Intelligence

Now with the launch of iOS 18 and macOS Sequoia, Apple has once again tried to redefine artificial intelligence in their own image. During the AI segment of the keynote they put up a slide with the words Powerful, Intuitive, and Integrated on the screen. These are less differentiators than a restatement of Apple’s current reputation in the industry, in the sense that everyone already expects Apple to make cutting-edge devices that are easy to use and work within their ecosystem. The next two bullet points, Personal and Private, are why they actually have an advantage in this space. The more AI knows about you, the more useful it is, and there are some valid concerns about opening up all of your private data to companies like OpenAI, Google, and Microsoft. OpenAI may have the best LLM (Large Language Model) on the market, but it lacks an ecosystem of hardware or software. Google has hardware and software, but their main source of income is advertising built on your data, which is not very private. Microsoft has more acceptance than Apple in the business world, but they are missing a mobile platform, which is problematic because mobile is where AI will really shine, with all the cameras, sensors, and contextual awareness already built into smartphones.


Apple has been playing it safe, purposely distancing themselves from the AI hype until they were ready to ship a more useful, safe, and polished version, which they call Apple Intelligence, and from the demos they have released, it seems like they have accomplished this. So now, with all the cards on the table, it looks like Apple’s powerful ecosystem, combined with the inherent advantage of having more of your data while being able to keep it more private, gives them a clear competitive edge.

iPad Improvements 

There were also significant improvements to iPadOS, the iPad-only version of their mobile operating system. And although they didn’t really address many of the complaints in the Apple community about the limited capabilities of the iPad platform, they did introduce some pretty innovative features like “Math Notes” and much-improved handwriting support. These two features in particular are more in line with where I see Apple going with the platform: they don’t want to replicate what is possible with “old fashioned” desktop computers. They want to make new native software experiences that change what is even possible with “personal computers” in the first place.

Math Notes 

Smart Script – Handwriting Support

As you can see, these features are kind of “magic” and go a long way toward further differentiating the iPad from the existing tablets on the market. This is one more step towards the iPad finding its own niche in the future of computing, which is ultimately better for Apple’s bottom line because they can sell you a desktop computer, a tablet, and a phone. It is also better for consumers (if they can afford multiple devices) because these devices work well together while still maintaining their individual strengths, so you can grab whichever is better for your specific need at the time. This seems much more in line with Apple’s already stated long-term strategy than trying to combine these platforms.

Video Killed the Radio Star

It is also interesting to think about the cultural effects of rolling out this LLM-based writing technology on such a massive scale. I feel it has the potential to change writing itself. We are still seeing the downstream effects that smartphones (in their current form) have had on our culture and language. And now, within the next six months, Apple, Google, and Microsoft will have added this new set of LLM-based tools to everyone’s computers, tablets, and smartphones. This is not just your dad’s spelling and grammar check: these tools can read what you wrote, understand what you are actually trying to say, and then instantly summarize or completely rewrite the text in a different writing style (or at a different reading level). And there is currently no good way to detect whether these LLM tools have been used, so it is really like giving people a whole new way to write and read. That may diminish people’s ability to write for themselves, but it could also free up their minds and help remove certain types of creative and technical constraints.
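To make that mechanism a little more concrete, here is a minimal sketch of how a “rewrite this in a different style” tool works under the hood. To be clear, this is not Apple’s Writing Tools implementation (that code isn’t public); it just illustrates the general pattern, in Swift, using OpenAI’s public chat-completions endpoint. The whole trick is that your text gets wrapped in an instruction prompt, and the model does the rewriting.

```swift
import Foundation

// Sketch of the general "rewrite my text" mechanism, NOT Apple's actual
// Writing Tools code. Uses OpenAI's public chat-completions endpoint.

struct ChatMessage: Codable {
    let role: String     // "system" sets the behavior, "user" carries the text
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatResponse: Codable {
    struct Choice: Codable { let message: ChatMessage }
    let choices: [Choice]
}

/// Rewrites `text` in the requested style by wrapping it in an instruction
/// prompt; the model does the "understanding", we just frame the request.
func rewrite(_ text: String, inStyle style: String, apiKey: String) async throws -> String {
    let body = ChatRequest(
        model: "gpt-4o",
        messages: [
            .init(role: "system",
                  content: "Rewrite the user's text in a \(style) style. Preserve the meaning."),
            .init(role: "user", content: text)
        ]
    )

    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(body)

    let (data, _) = try await URLSession.shared.data(for: request)
    let response = try JSONDecoder().decode(ChatResponse.self, from: data)
    return response.choices.first?.message.content ?? text
}
```

Swap “professional” for “fifth-grade reading level” in the style string and you get the reading-level feature; the code doesn’t change at all, which is exactly why these tools feel so flexible.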

I heard a podcaster on the TWIT network the other day discussing how painters in the early 1900s were declaring that “painting is dead” with the advent of photography, which basically let anyone (with the right equipment) instantly capture an image that would take a master painter weeks of work to render. Contrary to that prediction, painting not only didn’t die, it went through one of its most creative periods, one that basically gave birth to what we think of as modern art today. In the same sense, will these AI tools inspire new movements in art, culture, and literature? Will this lead to another renaissance, or just make us lazy and bad at writing?

Siri Gets Another Shot 

Another big announcement was the long-awaited set of improvements to Apple’s voice assistant, Siri. I am one of the few people I know who uses Siri on a daily basis (mostly because I understand Siri’s limitations and know how “she” likes to be spoken to), but I think most people try it once or twice and have a frustrating enough experience that they never use it again. With this new rollout of Siri + AI, Apple has made major improvements to their voice-based assistant, which will potentially make her much more useful to many more people. Specifically, she will be better at interpreting what you are asking for, even if you stammer or don’t ask in a concise way. And she will also remember what you say, be able to clarify misunderstandings, and ask follow-up questions.


Although this may sound like a pretty basic improvement, it will enable Siri to actually have a normal back-and-forth conversation, allowing for a whole new type of user interface (or UI) that is purely conversational and does not require even looking at a screen. With the release of GPT-4o, OpenAI introduced a voice mode that also promises this type of “conversational interface”. I believe Apple’s new Siri will at least be able to match their conversational ability (see demo), but she will have the added benefit of being able to control your phone, so I suspect Siri will be much more functional.
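The key ingredient behind those follow-up questions is memory. We don’t know Siri’s actual internals, but the standard pattern for any LLM-backed assistant is simple: keep a running history of turns and send the whole thing with every new request, so an ambiguous follow-up can be resolved against earlier context. Here is a minimal Swift sketch of that pattern, with the model itself stubbed out:

```swift
import Foundation

// Why "remembering what you say" enables follow-up questions: every request
// carries the full conversation so far. This is the common pattern for
// LLM-backed assistants, not a description of Siri's actual internals.

struct Turn {
    enum Role { case user, assistant }
    let role: Role
    let text: String
}

final class Conversation {
    private var history: [Turn] = []

    /// Each new question is answered with the whole history attached, so an
    /// ambiguous follow-up like "what about tomorrow?" can be resolved
    /// against the earlier turn "what's the weather in Denver today?".
    func ask(_ question: String, using model: ([Turn]) -> String) -> String {
        history.append(Turn(role: .user, text: question))
        let answer = model(history)   // the model sees all prior turns
        history.append(Turn(role: .assistant, text: answer))
        return answer
    }
}

// Usage with a stub "model" that just reports how much context it received.
let chat = Conversation()
let stubModel: ([Turn]) -> String = { turns in
    "(answered with \(turns.count) turn(s) of context)"
}
print(chat.ask("What's the weather in Denver today?", using: stubModel))
print(chat.ask("What about tomorrow?", using: stubModel)) // sees both turns
```

A screenless, purely conversational UI falls out of this almost for free: once the assistant carries context, you never have to restate yourself, which is what makes talking to it feel like a conversation instead of a series of one-shot commands.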

Remote Control, Platform-less Apps and the Future of Software  

I am excited for this coming wave of innovations. It makes me thankful to be in Apple’s ecosystem, despite it being weirdly constrained in some areas and a bit of a walled garden in others. Also coming in this release are some new features focused on giving us Apple users better remote control of our smart devices.

I am asked more and more often to remotely view a client’s iPhone or iPad screen so I can walk them through whatever task they need help with. When iOS 18 is officially released in the fall, it will be the first time I will be able to remotely “tap” their screen for them (using FaceTime), which will be a huge improvement for my workflow and a huge boon for ‘every grandparent in need of tech support’ in the world. Apple is also implementing a feature they call “iPhone Mirroring”, which is more of a way to see and control your own iPhone’s apps remotely from your Mac. I believe this feature has huge potential to change how we interact with our smart devices, and it also points to a possible shift in Apple’s strategy for software as a whole.

iPhone Mirroring will allow you to leverage the power of your iPhone from your Mac without even having to take it out of your pocket. This gets the full version of every iPhone app back onto the Mac, a goal Apple has been chasing for a very long time. It also builds on Apple’s suite of Continuity features, which are already very useful in their current form, and it will allow you to drag files from the iPhone and drop them onto your Mac and vice versa. I believe Apple will soon extend this feature to the Vision Pro, which would make that new platform much more useful. And I can even imagine them extending this functionality in the other direction, allowing us to effectively keep our MacBooks in a laptop bag running the actual app while we interact with it on an iPhone, iPad, or Vision device. This would really be a paradigm shift in the sense that it would allow you to run any app natively on, let’s say, macOS, but then interact with the app from whichever device you are currently in front of.

I could even see this mirroring feature changing the widely held opinion that the iPad and Vision Pro are “being constrained by their software”. It could be used to unify all the platforms so that, in the future, you would not have to think about the operating system at all when you go to get something done. You would be able to just use the device that is most convenient for the task at hand, while natively running the software on whichever platform is best for that app, which would be a real game changer in my mind.