In 1996, Apple bought NeXT, not only bringing Steve Jobs back to the company he co-founded, but also bringing with him the engineers and technology that would be used to replace the classic Macintosh operating system with Mac OS X, now macOS.
Catalina, which has been available in beta since its introduction back in June, and which goes into general release today, is version 10.15 of that operating system, the brain that runs every modern Mac.
It's legitimately one of the most exciting macOS launches ever. But it's also one of the most turbulent.
Let me explain.
macOS Catalina review: In Brief
Because there's no next NeXT for Apple to move to, the company has been doing what a responsible, mature platform company should do: step by step, year over year, methodically replacing older components with new ones. 32-bit to 64-bit, OpenGL to Metal, Objective-C to Swift, HFS+ to APFS, AppKit to SwiftUI. These are the biggest examples, but there are innumerable smaller ones.
It's a process that takes time, and one that can be messy and painful. Especially for people who have known and depended on the older technologies and implementations for years, for decades even. But also for those waiting for the new technologies to mature, to be polished up.
Not every app moved to 64-bit or was re-written in Swift year one. Not every volume was migrated to APFS immediately. And many developers won't be able to make full use of this version's new technologies like Catalyst, never mind SwiftUI, just because Catalina has shipped. We're in the midst of multiple massive paradigm shifts, each of which will take several versions to play out. Some, we're finishing up with Catalina. Others, we're just beginning.
Likewise, security. Gone are the days when the ubiquity of Windows provided the best virus protection imaginable for the Mac. The threat levels have changed, as have the threats themselves. So, Apple is being forced to go back and try to balance the traditional openness of the Mac while retro-fitting the defense-in-depth that was built into iOS from the beginning. And, while they try to find that balance, casual users might be annoyed by increased privacy disclosures and power users, increased root-level restrictions.
At the same time, Apple's scope is growing. Services new and established, from Arcade to Music, TV+ to News, are launching and venerable old iTunes simply can't keep up. So, it's now been broken up into its constituent parts. Many will love the new sleekness but some may well miss the old monolith.
Throw in some new apps, updates, and ports, new user-facing features that continue to leverage the full Apple ecosystem by letting iPads serve as secondary displays and iPhones work as a massively distributed Find My network for Macs, and awesome new accessibility features like Voice Control, and we have macOS Catalina.
An update that shows Apple is getting better and better at leveraging everything that's come before while continuing to push forward, but still struggling with just how many landings they can really stick, year over year.
I've already done the deep technical dive. You can find 11,000 words and a full hour of video on that linked below. What I want to do here, now, is talk about the implementation and the experience.
Not how it works, which is chock full of really smart, really cool technology, but how well it works for the humans at the other end of the machine.
macOS Catalina review: macOS Catalina Beta
Quick note: If you've been on the macOS Catalina Beta train but now, with the official release, you want to get off, just launch System Preferences, click on Software Update, click on the tiny Details… text at the bottom left under the big gear icon, and then click on Restore Defaults.
Do that, and your next update will be the next release version. Otherwise, you'll stay on the beta and can keep testing for as long as you like.
macOS Catalina review: The End of 32-Bit Apps
The Mac has been shipping with 64-bit processors since before the Intel transition of 2006, and 64-bit was a headline feature of macOS going all the way back to Leopard in 2007.
Now, with macOS Catalina, the transition is complete. 32-bit apps, dragged along this past decade, will now launch no more.
You can see if you're still running any 32-bit laggards prior to updating by clicking the Apple icon in the menu bar, choosing About This Mac, hitting System Report, clicking Applications, and then going through the list of apps. If any of the 64-Bit (Intel) fields say No, that app isn't going to survive the upgrade.
If you don't do that, or if you upgrade anyway, the macOS Catalina installer will still check for you and give you a list of apps that you can't take with you. At that point, you can choose to stop and wait until you can find updates or alternatives, or to continue and leave them behind.
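Under the hood, checks like these come down to reading each executable's Mach-O header: the first four bytes identify a binary as 32-bit, 64-bit, or a multi-architecture "fat" file. As a rough, unofficial illustration (this is not Apple's actual installer code), here's what that magic-number check looks like:

```python
import struct

# Mach-O magic numbers: the first 4 bytes of any Mach-O executable.
MH_MAGIC    = 0xFEEDFACE  # 32-bit, native byte order
MH_CIGAM    = 0xCEFAEDFE  # 32-bit, byte-swapped
MH_MAGIC_64 = 0xFEEDFACF  # 64-bit, native byte order
MH_CIGAM_64 = 0xCFFAEDFE  # 64-bit, byte-swapped
FAT_MAGIC   = 0xCAFEBABE  # "fat" binary holding multiple architectures
FAT_CIGAM   = 0xBEBAFECA  # fat, byte-swapped

def macho_kind(header: bytes) -> str:
    """Classify a binary by the Mach-O magic number at its start."""
    (magic,) = struct.unpack(">I", header[:4])
    if magic in (MH_MAGIC, MH_CIGAM):
        return "32-bit"
    if magic in (MH_MAGIC_64, MH_CIGAM_64):
        return "64-bit"
    if magic in (FAT_MAGIC, FAT_CIGAM):
        return "fat"
    return "not Mach-O"
```

On a real Mac you'd point this at the executable inside an app bundle's Contents/MacOS folder; for most people, System Report remains the simpler route.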
I'm sure there are all sorts of critical, niche apps that, despite a decade of warning, still haven't gone 64-bit. Especially abandonware. And those affected by it will no doubt be super salty that they're now officially at the end of the line.
But, killing the past is what Apple does to better embrace the future and, more than a decade of writing-on-the-wall later, 32-bit apps deserve to die.
macOS Catalina review: The End of iTunes
Imagine iTunes as a bridge that carries not just billions of media files across it a year, but billions of dollars in transactions. At any given moment, it has to let you buy Taylor Swift's new album, rent the latest John Wick movie, stream millions of songs at the drop of a Siri, download the latest episode of the Vector podcast and sync it to that old iPod nano you still use to go running with every morning, rip and catalog all of Jim Dalrymple's 84 versions of every Ozzy song, make available all the new media Sony just uploaded to launch, and maintain a ton of playlists, backups, and other data – all at the click of a button. Oh, and be compiled to run on Windows as well.
Yeah, it was a quirky old battleship of an app that would somehow still beachball on an 18-core Xeon box, but it all still worked and we all understood, more or less, how. And one thing you just don't do with a working bridge is tear it down. Not before you've even begun construction on a new one.
Well, macOS Catalina finally has that new bridge. Several new bridges, actually.
Now, I've said before, nothing is being deleted. Nothing is being canceled. Apple isn't closing the iTunes Music or Movie or TV Store. Apple isn't coming to your house to delete all your downloads.
Everything you could do in iTunes yesterday you can still do with macOS Catalina today. There's just no monolithic, Mesozoic iTunes app to do it all in any more.
Now, it's split across new (and new-ish) Music, TV, Podcasts, and Books apps, and an extended Finder app. In other words, it's more like iOS now. Just on the Mac.
The Music and TV apps are still traditional Mac apps. So much so, they feel like the old tabs were just torn off and given new, separate shells. But, with that legacy code also comes legacy support for everything from ripping CDs to transcoding audio.
It's all been fine for me, but I'm not a heavy Music user. I subscribe to Apple Music and my use case is almost exclusively "Siri, play…" whatever I feel like listening to at any given moment. I don't do playlists and, while I still subscribe to iTunes Match, and that's all still there as well, I really can't recall what I ripped a decade ago and what's just in the online catalog today.
Yeah, I'm the music worst.
So, there may well be database features and views that haven't all been brought over from iTunes to Music, and I really wouldn't notice. But I'm willing to bet a bunch of other reviews will cover all that in excruciating detail for you. So, I'll retweet them as I find them.
For me, the only thing really missing is polish. The interface is in its awkward teenage years. For example, click into any content and a whole new row appears up top just to contain a single, solitary back button. The loneliest back button.
In iOS, the way it works is similar but the implementation is much better: When you tap in, the master page title shrinks and becomes the back button label in the new detail view. And the pages don't fade white to transition, they slide over to maintain a sense of spatial positioning.
The new TV app is similar. It's terrific to have it on macOS. It just doesn't have the astonishingly great card experience of the Apple TV version of the app. Also, because regional licensing deals are demented, the Apple TV app will often kick me out into a local app, like CTV, that'll then demand a cable subscription before it'll play the show I want to watch. Which is infuriating.
Podcasts is both new to the Mac and built using new technology for the Mac: it's a port of the iPad Podcasts app. More on that later. The highest compliment I can give it is that it looks and feels pretty much identical to the traditionally built Music and TV apps.
It even goes so far as to implement that lonely, lonely back button the same way the Music and TV apps do.
Again, that's a nitpick and these are early days, but that I'm picking nits shows how far it's all come in just a year.
I really like that device management has moved over to the Finder. It makes sense being there, and it just works like it's always worked. Plug in, and your iOS device shows up in the sidebar. Click on it, click the blue Trust button, then unlock your device, tap Trust, and punch in your passcode, and you're in business.
You've got a General tab for management, which includes updates, restores, and backups, and then tabs for all your content. It's not pretty UI by any stretch of the pixels, but it's functional.
And, I think, for anyone who hasn't given up the cable for the cloud already, that's the important part.
macOS Catalina review: Apple Arcade
I posted a complete Arcade preview back in June. Link and video below. But, with macOS Catalina, Apple's new subscription gaming service comes fully to the Mac, and that's kind of a big deal.
iOS has never had a problem with games, quantity or quality. Though it's always been skewed almost completely towards the casual. The Mac, though, has spent pretty much all of its existence in the shadow of PCs and consoles both.
Arcade doesn't change the latter. There's nothing new that would really appeal to the hardcore shooters or simulators, custom rig builders or VR drivers. But it does begin to address the former.
For years, developers had to worry about whether or not their more creative, more experimental, more up-front game ideas could even survive in the increasingly free-to-play, franchise-driven, micro-transaction laden economy of mobile app stores. Never mind when or if they could one day even think about bringing a version to the Mac.
Now, with Arcade, indie and even studio developers are being incentivized by Apple to not just make the games they want to make, great games, but to make them great across all of Apple's platforms — including the Mac.
That includes its own tab and placement opportunities in the App Store, some of the most valuable real-estate in software.
So, you probably won't get the biggest, baddest franchises in the game, but you are getting some of the biggest labors of love by indies and studios both. And you can play them with the keyboard, the mouse or trackpad, and in many cases, the Xbox or PlayStation controllers macOS Catalina has just added support for.
I'm not sure how many people will pay the $4.99 per family subscription fee just to have Arcade on the Mac, but I think a lot of people who subscribe for iOS will find that, like Apple TV, also having Arcade on the Mac is an absolute delight.
macOS Catalina review: New and Updated Apps
In addition to all the new and updated architectures and features — and there are plenty of both still to go over — some of the biggest updates in macOS Catalina are the other new, or newly updated apps.
The updated Photos app is great. Like the similarly updated iOS version, the main views now all filter out duplicates and clutter like screenshots or document captures. Then, they use machine-learned "saliency" (a fancy way to say relevancy) to focus on the people, faces, and highlights of each photo. Finally, they grid them up in different sizes, with videos and Live Photos in motion, to make for a really immersive browsing experience.
The idea is to present you with your "best shots". And, in an age where most of us take so many photos, so often, that we can barely remember what we took this week, never mind over the last few years, it really cuts through the clutter to make something more akin to old-fashioned albums. Not in how it works but in what it means for our memories.
Speaking of which, Apple's auto-generated Memories movies can now be tweaked on the Mac as well. Make them dreamy, make them epic, make them extreme. Long, medium, or short, all with a couple of clicks.
Sadly, we're not getting all the new video editing capabilities Apple just added to Photos for iOS. They are simply nowhere to be found in Photos for macOS. No rotating videos, no adjustments or filters.
And to really paper cut us and pour some lemon juice on it, the edit button is right there. It even lets you click on it. But, all the actual editing features are grayed out as soon as you do.
Hopefully, at the very least, that means they'll be coming in a future update.
Also, while computer vision powered search keeps getting better and better, I still find it weird that I can't type in iPhone or iPad, or even tablet, and find all the photos of Apple gear. And typing in Phone is just kind of broken. I realize that's a problem only people like me probably encounter, but when so much works, the stuff that doesn't really stands out.
Mail lets you mute threads now, which is great but please don't tell any of my family, friends, and colleagues about it. Ok?
I've also been using the new block senders and unsubscribe from mailing lists features left and right. It hasn't really put a dent in the tons of ham email I get every day, but it's just so damn cathartic to do.
Safari has an updated start page that blends your frequently visited sites with Siri suggestions from your history, bookmarks, reading list, iCloud tabs, and links sent to you over iMessage. Basically, a machine learning surfaced buffet of what it hopes will be all the places on the web you're likely to want to go.
It's been kind of hit and miss for me, though. Sometimes it shows relics of devices and tabs past; other times it pulls up something I was just messaged. Given how many devices I review and re-review all the time, my usage pattern is probably all the deviations away from normal, so I'm interested to see how it tries to learn and cope going forward.
QuickTime has a souped-up inspector that shows color space, including HDR format, bit depth, aspect ratio, and scale. If a video has timecode embedded, QuickTime will show it right in the onscreen controls. It also now supports alpha channels, so you can export from ProRes 4444 to HEVC and preserve transparency.
Open Image Sequence is also making its triumphant return, so you can create H.264, HEVC, or ProRes movies in the resolution and frame-rate of your choice simply by opening a folder filled with sequential images. So good.
Also, QuickTime now has a picture in picture mode so, with a click, you can leave QuickTime and keep watching your video in a floating, resizable, reposition-able window all its own.
Notes for macOS has always had a gallery view but, previously, it was restricted to images and sketches. Now, in Catalina, it shows every note, and more of each note, as a thumbnail, so if you're more keyed into visuals than snippets of text, you can more easily find the note you're looking for. Especially, you know, if that note contains images or sketches.
There's also a new search that uses similar computer vision technology to Photos to find objects or scenes within your notes, which is great. And there's OCR (optical character recognition) for text in images, including notes you've photographed from real life or receipts you've scanned in. Which is beyond great. It was the one feature I really liked in Evernote, and now it works in Apple Notes.
If you like to organize your notes into folders, you can now share an entire folder if you want to share all the notes inside it. Yeah, it's all or nothing when it comes to folders, so organize and share carefully and accordingly.
I've been doing a ton of writing in Notes and while it still lacks the plain text mode that I would personally cherish, and sometimes the sync between devices isn't as fast as I'd like, it's been a terrific workhorse.
The only thing really holding it back on the Mac, at least for me, is how abysmally side-by-side apps have been left to languish on the platform. There's a new interface for creating workspaces in Catalina, which is nice. But you still can't swap apps around within them. You have to destroy them and start over. Every time. It's depressing, especially in comparison to how much love they've gotten in iPadOS over the years.
There's an all-new Reminders app in Catalina. It uses natural language parsing to translate what you write into actionable tasks and to-dos. Even if you're writing in the Messages app, if it reads like something that would be good to remember, Siri will offer it up as a suggested reminder.
If you tag someone in a reminder, the next time you're messaging with them, you'll be reminded about whatever it is you tagged them into. Also, you can add sub-tasks to your reminders and add attachments like photos, docs, or links that'll help you get all your things done. And there's a new edit button that lets you quickly and easily update and expand your reminders with times, dates, locations, flags, attachments, tasks — whatever it is.
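Apple's parser is private and far more sophisticated, but the basic idea (pull a time out of free text and treat the remainder as the task) can be sketched in a few lines. This is an illustrative toy, not Apple's implementation:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reminder:
    task: str
    hour: Optional[int]   # 24-hour clock; None if no time was found
    minute: int = 0

# Matches times like "at 5pm", "at 5:30 pm", "at 17:00"
TIME_RE = re.compile(r"\bat\s+(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\b", re.IGNORECASE)

def parse_reminder(text: str) -> Reminder:
    """Split free text like 'pick up milk at 5pm' into a task plus a time."""
    m = TIME_RE.search(text)
    if not m:
        return Reminder(task=text.strip(), hour=None)
    hour = int(m.group(1))
    minute = int(m.group(2) or 0)
    meridiem = (m.group(3) or "").lower()
    if meridiem == "pm" and hour < 12:   # 5pm -> 17:00
        hour += 12
    if meridiem == "am" and hour == 12:  # 12am -> 00:00
        hour = 0
    task = (text[:m.start()] + text[m.end():]).strip()
    return Reminder(task=task, hour=hour, minute=minute)
```

A real parser also has to handle dates, locations, recurrence, and ambiguity, which is exactly why doing this well system-wide is a genuine feature.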
I'm not a huge to-do user. I've always felt they were the Gym membership of apps. A New You resolution you sign up for every once in a while to feel better about ultimately not doing much else.
I use them kind of how I use Music — I tell Siri what I want to be reminded about and when and then hope and pray I actually get reminded about it at the right time. Which is almost always the case.
But, for people who do want more granularity without making task management yet one more task to be managed, and for whom Things or OmniFocus are still too much to-do, Reminders now strikes a much better balance.
Find My Mac and Find My Friends have merged, yes, like Voltron, to make something that feels like it will eventually be much greater than the sum of its parts.
Like Podcasts, it's a port of the iPad version. And, the interface actually translates over really well. Functionally, it's everything I expect. People and devices, all pinned in place.
I especially love that I can see battery status for my devices, so I know if I need to hurry up and track them down so I can charge them up.
The sidebar does reorder when it refreshes, seemingly at random, though, which also makes the map zoom in and out for no good reason. That could be the quirks of GPS, but if Apple could keep it from dancing it'd be a lot less distracting. Not that I leave it open much.
Rumor has it there'll be a lot more coming to the Find My app and Apple's Find My network in the future, and I can't wait.
There's Voice Control, which in theory will let you do everything on your Mac with power words, like right out of Dune or Dungeons and Dragons. It's still a little hit and miss for me, and I have to resort to calling up the numbers more often than I'd like, but as it solidifies it'll be one of the best accessibility enhancements ever to hit the platform.
Screen Time is also now on the Mac. So, no more hiding your social stalking or gaming breaks from the tracker.
It's not a standalone app. Rather, it's in the system preferences much as the iOS version is in settings. It's also got all the same features. So, depending on how much control you want, especially parental control, it still may not be enough for you.
But, I'll repeat this part again: What I like about Apple's approach to Screen Time compared to some others is that it doesn't feel like it's pandering or infantilizing anyone.
It gives you data and then helps you act on it when and how you want to, for yourself and for your family.
macOS Catalina review: Sidecar
Sidecar is a lot of fun. In most cases, I'm just fine using my iPad on its own. It's the best truly mobile computer I've ever owned. But, in two specific cases, Sidecar lets it perform double duty in ways that truly enhance the Mac.
The first is as a second screen. I don't usually do this on the Mac because I dislike turning my head from side to side while I work. But, if I'm working in Final Cut Pro X, which I do a lot, and I need to jump into a conference call or keep track of something in Twitter or Slack, offloading those windows to the iPad display makes a world of difference. Just so great. Especially while traveling, where it's much easier to carry an iPad with you than a Pro Display.
The second is as a drawing pad. I used to use Wacom tablets back when I worked as a designer. They were great, but Apple Pencil just blew them away. No digitizer, no air gap, no reticule. Just Pencil-to-iPad goodness.
For Continuity Markup and Sketch, which are good examples of Apple really leveraging the integration between devices and operating systems, I can see the convenience if you're already using your Mac and you don't want to switch devices. But, honestly, I'm fine just using the iPad as an iPad.
Where it comes in handy, though, is for the apps that aren't on the iPad. Then, I can just pick up my iPad, in Sidecar mode, walk over to the sofa, sit down, Pencil-push things around, and then go back to my Mac. At least in theory. In practice, that will come as more and more apps add better and better support for it.
macOS Catalina review: Security
Apple's lofty goal for macOS is to make the system as secure as iOS while maintaining all the traditional flexibility of the Mac. And… that's easier said than done. With iOS, Apple got to start fresh and lock everything down from day one. The Mac, by starkest of contrasts, has been relatively open for decades.
For many years, that was fine. Thanks to the market share and attack surface of Windows, it made far more economic sense for bad actors to go after Microsoft users and leave Apple users alone.
But, now we have the web, we have phishing and spear phishing, ransomware and spyware, we even have ad trackers and social networks and the ability and eagerness of bad actors and unscrupulous companies to target any and every platform and person, including those of us on the Mac.
So, Apple has been carefully hardening macOS against exactly those kinds of attacks. Carefully, because people who are used to the Mac being open are concerned — legitimately sometimes, completely paranoid others — that Apple is going to impose the same type of control it has over iOS.
In a perfect world, Apple would be able to go back and implement everything from bit one. But we don't live in a perfect world, and so, while I think some will find these new defensive layers necessary and responsible, others will find them blunt, maybe even aggressive.
For experts, there's going to be a lot to parse, from Gatekeeper now checking bundles launched via Terminal as well as files opened in the GUI, to the read-only system partition and the firmlinks that try to make the process as transparent as possible, to DriverKit and Extensions kicking just about everything right out of kernel space.
Where casual users are going to see and feel it more are with all the new privacy protection alerts. Basically, if an app — any app — wants to access anything it didn't create itself, it has to ask. And ask. And ask. And ask. And… you get the idea. Ask. Ask. Ask. Ask.
On one hand, you want this type of accountability so that malware can't just grab anything it wants, any time it wants, without your express permission. On the other hand, it can feel so annoying, and be so overwhelming, that you won't even bother reading it any more and just click away as fast and furiously as you can.
To avoid a Windows Vista-like death-by-a-thousand-dialogs situation, Catalina will try to leave you alone if you do something deliberate and intentional, like double-clicking a file in the Finder, dragging and dropping a file, or using the standard open or save dialogs.
But, if you try to download a file from the web, it will ask you to confirm. For every site. Depending on your point of view, that can be a huge safety net. Or a huge annoyance.
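Conceptually, this is the same ask-once pattern behind the system's TCC (Transparency, Consent, and Control) database: the first time an app touches a protected resource you get a dialog, and the answer is recorded so you aren't asked again. A minimal sketch of that pattern (the class and prompt callback here are hypothetical illustrations, not Apple API):

```python
from typing import Callable, Dict, Tuple

class ConsentStore:
    """Remembers per-(app, resource) decisions so each question is asked once."""

    def __init__(self, prompt: Callable[[str, str], bool]):
        self.prompt = prompt  # asks the user, returns True (allow) or False (deny)
        self.decisions: Dict[Tuple[str, str], bool] = {}

    def allowed(self, app: str, resource: str) -> bool:
        key = (app, resource)
        if key not in self.decisions:          # first access: show the dialog, remember
            self.decisions[key] = self.prompt(app, resource)
        return self.decisions[key]             # every later access: no dialog at all
```

The real consent store is a system-protected SQLite database that apps can't touch directly; the point here is just the prompt-once, persist-the-answer shape that keeps the dialogs from repeating forever.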
It's at its worst when you first update to macOS Catalina, because literally everything will ask you for permission as soon as it can. It never goes away completely, though.
And that's good and bad. For example, I want to be notified and protected against key loggers. I don't want to be made scared of hotkeys. That requires really good explanations from legitimate app makers, not as to what they want, but as to why they want it.
Nerdier, Mac-as-POSIX-box users are going to hate it, as they've hated a lot of the security changes over the last few years. For example, the ability to set Gatekeeper to allow any app download off any website, signed or not, is gone from the GUI. To re-enable it, if you really want to, you have to go to the Terminal. I think that's a pretty damn clever compromise, but I realize not everyone will.
Overall, though, for the vast majority of users, I think it's all for the best. Complacency is the enemy of security. But it's going to take some fine tuning, a little from Apple, a lot from developers, to make it not just as tolerable as possible for users, but as valuable and as insightful as possible, going forward.
There are some cool new conveniences to mitigate some of the new security measures, like the new, unified Apple ID pane. Or, my favorite, Authorize with Apple Watch.
Previously, Apple Watch could unlock your Mac and approve your Apple Pay transactions. Now it can do almost anything Touch ID can do. Which is especially great on all the Macs Apple still hasn't added Touch ID to, which includes every desktop Mac.
macOS Catalina review: Catalyst
Problem: There are a ton of apps available for the iPad that simply don't exist on the Mac. Developers may have Mac apps on their nice-to-have lists, but since their iPad apps are built using UIKit and Mac apps are built using AppKit, and since no one has the time or resources to learn and support yet-another-thing, especially for a smaller market, that's just exactly where they stay — on the nice-to-have list.
Solution: Make it easier to bring iPad Apps to the Mac.
With macOS Catalina, Apple is starting to do just that, in two very different ways. First, with Catalyst, which essentially lets UIKit apps run on the Mac. Second, with SwiftUI, which will one day abstract away a lot of the interface differences between iOS and the Mac.
SwiftUI is probably the right set of tools for the job, but Catalyst is the right now set. Or, at least, the right-now-er.
In theory, you check a box in Xcode, your iPad app builds almost instantly for the Mac, and then you polish it up to be a great Mac app.
Like I said in the preview, there are three broad classes of apps that are good candidates for Catalyst:
- iPad apps that either don't have Mac equivalents or whose Mac equivalents have lain fallow or been previously abandoned. For them, a unified codebase makes creating or replacing the Mac app far more efficient. CARROT Weather is an example of the former, Twitter for Mac the latter.
- iPad apps that have relied on a website for Mac support. Here, native frameworks allow for far more features and far better performance. Netflix is on a lot of wishlists here, but TripIt is shipping now.
- And then there are the cursed Electron apps. The ones actively wasting my memory and destroying my battery life just to wrap themselves in Chromium for that oh-so-not-so-native look and feel anyway. Those, like Slack and Skype, desperately need to switch to Catalyst and fast. But there's no sign they're doing anything of the kind. At least not yet.
I say in theory because Catalyst feels a lot like Swift and APFS: Something that will take a few years to really solidify.
There are a bunch of Catalyst apps coming at Catalina's launch. But they're not the ones I've been looking forward to most.
Those, the ones by the really artisanal indie developers, haven't arrived yet. Because, not every framework they need is there yet, and what's there isn't always supported or documented the way they need it to be yet, to make the apps they want to make — and we want them to make.
Also, they can't offer universal bundles yet, so they can't sell one app for all Apple platforms, all at once. Not every developer wants that, not at all, but for the ones that do, it currently works for all apps except the Mac apps.
Apple also hasn't gone back to redo the interfaces for the original Catalyst test apps from last year, the odd assortment of Home and News and Voice Recorder and Stocks. And they haven't brought over apps that are still missing basic functionality compared to their iOS counterparts and, arguably, need Catalyst the most — looking at you, Messages. (sent with Lasers).
If Apple is serious about Catalyst, the way they were about 32-bit apps, Swift, and APFS, all of that should change for the much better over the next year or two.
That is, unless and until SwiftUI becomes the future of all of Apple's platforms. More on that as it develops.
macOS Catalina review: Conclusion
Last year, Apple put a lot on pause to really deliver not just a few new features but a ton of refinement. This year, it feels like Apple tried to make up for some lost time and maybe bit off just a bit more than they could chew, with different updates coming at different times and a whole lot of bug fixes coming in one after the other. Next year, hopefully, they'll find a better balance.
For now, macOS Catalina is one of the most important updates we've ever gotten. It not only dismantles iTunes and sets up all of Apple's new services, but with Catalyst and SwiftUI, it sets up the future of apps, and ensures the Mac will continue to be a first-class part of that future.
I love the technology, the vision, and the direction, but it's still unpolished in parts and frustrating in others, and that'll take continued time and effort to fix.
If you're at all concerned about release version bugs or worried about app compatibility, by all means, wait and see. Let the rest of us be your testers, and when the point releases start coming, choose whichever one makes you comfortable enough to jump on.
For me, I'm all on board. I even updated my production video machine to Catalina, I missed it enough when I wasn't using it, and that's something I usually wait months to do.
What's more, with so much of the foundation of macOS now re-set, I'm most curious to see what Apple will build on it next.
Disclaimer: This is a very long post, so if you intend to read through it in its entirety instead of just skimping through some points and proceeding to argue with me in the comments, you might want to grab some fried chicken and popcorn to keep your stomach full while you read through this post. I've divided the post up into sections to make it easier to read but I won't be providing a TL:DR because doing that imo takes away a lot of context and doesn't adequately explain things. I've tried to explain detailed things in simple terms, so analogies won't be perfect but they should give you a general idea and understanding of the things I'm trying to explain. I'm also not claiming to be a technical savant, so if you spot something wrong, don't just yell about how it's wrong and how I should be ashamed to even live on this planet. Explain why I'm wrong so that I don't make the same mistake in the future. Thanks for reading!
Over the past week I have been poring over every single detail that Sony and Microsoft revealed about their next-gen consoles, grinning gleefully the whole time, because these consoles sound amazing and are packed with innovative features and solutions that I can't wait to see implemented in games. That's exactly why I find it so annoying and infuriating that the conversation around these consoles is dominated by the number of TERAFLOPS™ (trademarked by fanboys worldwide) each console has.
Don't get me wrong, a console's power is very important, but beyond a certain point it doesn't really matter (for me at least). Both these consoles are powerful enough to run games at 4K60. The Series X is more powerful, yes, but that was already the case this generation with the launch of the One X. We'll see that extra power being utilised in games to deliver higher or more stable framerates (Microsoft said that 4K60 is their base target and they want games to be able to run up to 120 FPS, kinda like how the One X targets 4K30 and can run games up to 60 FPS), and maybe somewhat prettier or more detailed games, but the power gap doesn't seem to be so large that games will run at a higher resolution on the Xbox (as was the case with some games early this gen) or at significantly higher FPS. I'm not dunking on the Xbox here; Microsoft has done an astounding job and delivered on their promises, but imo, a year into next gen, games are pretty much gonna look the same across both platforms, as has been the case in the past.
To me, the power of these consoles, while amazing, is the least exciting thing that was revealed about them. Other features of these consoles are much much more interesting and innovative but due to the somewhat technical nature of these features, everyone seems to be ignoring them.
So in this post I'll be highlighting most of these features, for both consoles, and explaining how these features are going to have an effect on the games we play and what it is that makes these features innovative.
Before we start though, I have to say that I'm very happy about the directions these consoles are going. I think most people, including me, assumed that they're basically going to be extremely similar to each other and to PCs but after the reveals it's quite clear that not only are they quite different from each other in terms of how they achieve the shared target that they have, but they're also different from PCs. This brings us to our first point of discussion:
PROCESSING POWER AND COOLING
I'm not gonna talk about the processors because they're fairly similar and standard components. What I want to talk about is how they're configured.
The Series X uses a tried and tested design. Microsoft set a specific frequency target for both the CPU and the GPU, and the chip is supplied with increasing amounts of power until it hits those frequencies. This power draw is not uniform, i.e. it is a variable range: in certain game scenarios more power is required to hit that frequency, while in others less is required. (To be noted here: an increase in power doesn't correspond to an equal increase in frequency, i.e. it does not scale linearly. To hit higher frequencies you need to put in more and more power, to the point where it becomes a case of diminishing returns.) Power consumed translates directly into heat output, so the heat the chip gives off is also a variable range, just like the power consumed (you can actually hear the variance in power consumption in real time from the amount of noise the fan is generating; if it's spinning faster and louder, more power is being consumed). The design and engineering teams therefore have to build a cooling system that works well across a range of temperatures. The catch is that they're making a prediction about how much power games might draw, and that prediction might not hold for certain games, i.e. the cooling system might not be sufficient for them, which means the console is gonna run hot and loud while those games are being played. The Series X designers have clearly taken note of this, because the entire design of the console is built around the cooling system. I think it's fair to say that the Series X has one of the most unique designs out there, and that's to accommodate the cooling. How well it'll run remains to be seen, but the design team seems very confident and I think we can trust them.
Now onto the PS5. The PS5 eschews years of traditional console design and goes for variable frequencies, or boost clocks. There is, however, a huge misconception here because of the words 'boost clocks'. To understand it, let's take a look at something that has boost clocks in the traditional sense: a PC processor (GPU or CPU, both will do). A PC processor usually has two clockspeed targets, a base clock (lower frequency) and a boost clock (higher frequency). Variable power is supplied to the processor so that it consistently hits the base clock (so, basically what the Series X is doing). However, if there is thermal headroom (i.e. more power can be supplied to the chip without it overheating), the processor is given more power so that it hits the boost clock. At that boost clock, temperatures start to rise, and eventually the processor has to come back down to the base clock to prevent overheating. In this type of configuration, both the power and the frequency are variable. The PS5's boost clock is not the same as this. Lots of people have been saying that the PS5's teraflop number is a sham because the PS5 won't be able to run at this 'boost clock' most of the time. That is simply false, and here's why:
The PS5 has a specific power limit, i.e. its power consumption is not variable and stays at a consistent figure at all times. Thus the heat the chip outputs is the same at all times, which allows the designers to build a cooling system around that exact heat output. What this means is that the PS5's fan is not gonna spin much faster or much slower depending on how much power is being consumed. It's going to spin based on ambient temperature, since the thermal output of the processor is already known and they only need to account for the variance in ambient temps (which makes the temperature range that the cooling system has to handle much more predictable and exact, i.e. a one-degree rise in ambient temperature can be accounted for far more accurately than a swing in power consumption). So all the people who suffer through the obnoxiously loud and hot PS4s and PS4 Pros, rejoice, for you have to suffer no longer (that is, if you're getting a PS5)!
How is Sony achieving this, though? This is where the variable frequency part comes in. The PS5's processor can see what the game is actually doing, i.e. what activity is going on in-game, and when game scenarios occur where power consumption would spike, it downclocks. Game developers will be able to tell exactly when that power consumption goes up and, as such, account for the reduced frequency at those moments (so in a way you get the predictability and reliability that comes with setting a specific frequency target, but it also means devs will have to work a bit harder to fine-tune and optimise things). The crucial thing here is that it doesn't have to downclock by a lot. Remember what I said a couple of paragraphs above:
An increase in power doesn't correspond to an equal increase in frequency, i.e. it does not scale linearly. To hit higher frequencies you need to put in more and more power, to the point where it becomes a case of diminishing returns.
Well, the opposite is happening here: a small decrease in frequency yields a disproportionately large decrease in power consumption. So a 2-3% decrease in frequency (that's about 40-70 MHz in the PS5's case) can deliver something like a 10% decrease in power consumption. What this basically means is that the PS5 is going to be hitting its target clockspeed of 2.23 GHz most of the time (unlike a PC processor with boost clocks); when it does downclock, it won't be by a significant amount (again, unlike a PC processor); and while doing all this it is going to remain cool and quiet. Quite an innovative and novel concept, huh? This is why the PS5's variable frequency is unlike the boost clocks found on PC and shouldn't be compared to them.
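To make that arithmetic concrete, here's a rough sketch in Python. The cubic exponent is my assumption, not a figure Sony published: it's a common rule of thumb for dynamic power under DVFS (power scales with frequency times voltage squared, and voltage tracks frequency), and it already lands in the ballpark of the quoted ~10% saving.

```python
# Rough sketch: why a small downclock buys a large power saving.
# Assumption (not from Sony): dynamic power scales roughly with the
# cube of frequency under DVFS, since voltage tracks frequency.

def relative_power(freq_ratio: float, exponent: float = 3.0) -> float:
    """Power relative to maximum, for a given frequency ratio."""
    return freq_ratio ** exponent

base_ghz = 2.23  # the PS5's target GPU clock

for drop_pct in (2, 3):
    ratio = 1 - drop_pct / 100
    saving = 1 - relative_power(ratio)
    print(f"{drop_pct}% downclock ({base_ghz * ratio:.2f} GHz) "
          f"-> roughly {saving * 100:.0f}% less power")
```

The exact exponent varies by chip and operating point, but the shape of the curve is the point: trimming a couple of percent off the clock recovers far more than a couple of percent of power.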
So the summary for this section:
- Xbox Series X: Variable power/temps but constant frequency
- PC: Variable power/temps and variable frequency (unless you overclock it, in which case it'll perform like the Series X processor)
- PS5: Constant power/temps but variable frequency
The exciting thing for me and maybe others who are interested in hardware and engineering is that these are three different ways to achieve the same general target. Just goes to show that these companies are putting lots of effort into the design and engineering and not just copying things and definitely not skimping on anything.
Now lets move on to the next and imo the most innovative part of these next generation consoles:
SOLID STATE DRIVES
This is it, folks. This is the game changer. This is what's going to deliver a generational leap. If you're worried that the change from 8th gen to 9th gen consoles will be like the change from 7th gen to 8th gen, a jump in graphics but not really a jump in generations, this should allay your fears. It's disappointing to see people reduce the SSD to just faster loading times and worry about the size of storage, because the inclusion of an SSD can change the way games are made. But before I get to how, I want to talk about the PS5's SSD in particular, both because it's just insane and because, while Microsoft has mainly been pushing faster loading times and the ability to suspend and resume multiple games in quick succession, Sony is pushing the message that this is going to be game changing.
When Xbox revealed their 'Velocity Architecture' and their SSD solution, I was extremely impressed: 2.4 GB/s of raw IO throughput is cutting edge. Microsoft has also done a lot of work on integrating the SSD into the system. They built a hardware decompressor that allows for 4.8 GB/s of compressed IO throughput, made changes to the DirectX API, and wrote specialised code to ensure the SSD's performance is amazing. I was sure then that Sony was going to have egg on their face, just like I was sure they were going to embarrass themselves when they claimed their SSD was faster than any SSD on PC.
But the madlads actually went ahead and did it. The PS5's SSD is easily the most impressive part of either console. In fact, I would go as far as to say that it's one of the most impressive pieces of hardware made in the past few years, alongside the likes of AMD's Zen architecture.
Just to show you how impressive the SSD is: no consumer SSD on PC will be able to match or exceed its raw speed (5.5 GB/s) for another year or so. No consumer SSD on PC will be able to match its compressed speed (8-9 GB/s) until the next revision of PCIe gets adopted by manufacturers (which is many years away) or until a CPU manufacturer implements a dedicated hardware decompressor in its processor (which is extremely unlikely to happen). And no consumer SSD on PC will be able to match its performance in games, because the storage interface Sony has built for its SSD is better than the NVMe spec that is standard on PC.
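To put those throughput numbers in perspective, here's a back-of-the-envelope sketch of how long it takes to refill a 16 GB working set (roughly a full console RAM's worth) from storage. The HDD figure is my assumption for a typical 5400 RPM console drive; the rest are the published peak speeds:

```python
# Time to stream a 16 GB working set at published peak speeds.
# The 0.1 GB/s HDD figure is an assumed typical value for a
# 5400 RPM console drive, not an official number.

drives_gbps = {
    "Console HDD (assumed)":    0.1,
    "Series X raw":             2.4,
    "Series X compressed":      4.8,
    "PS5 raw":                  5.5,
    "PS5 compressed (low end)": 8.0,
}

working_set_gb = 16

for name, speed in drives_gbps.items():
    print(f"{name:26s}: {working_set_gb / speed:6.1f} s")
```

Peak figures never hold perfectly in practice, but the gap between minutes on an HDD and a couple of seconds on either console's SSD is the order-of-magnitude difference the rest of this section is about.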
Now, back to how such super-fast storage is going to change the way games are made. The largest impact the SSD will have is on game and level design. Currently, and ever since we stopped using cartridges, games have had to be designed with the speed limit of HDDs (and, before that, discs) in mind. You can actually see these limitations while playing a game: all those super long corridors and long elevator rides are there not because the game designer loves corridors and elevators, but because designers are forced to implement them by the speed limit of HDDs. That's still an abstract explanation, so let's see the impact on an actual game:
One of the most requested features from fans of Horizon Zero Dawn for its sequel is the ability to fly by mounting robots which can fly. However, this simply would not be possible now, because the game was designed with the slowness of a HDD in mind. Here's why it wouldn't work. Let's consider Meridian, the large and detailed city in the middle of the map, sitting on top of a mesa, with one end connected to the adjoining hills by long bridges and the other connected to the flatlands below by tall elevators. Designing a city like that was surely a creative decision, but it was also a technical one. Because the city is so detailed and large, with a large number of NPCs each with their own dialogue and music, the data for the city cannot exist in system memory at the same time as the data for the rest of the game world. So while you're walking towards the city, the game is dumping all the data for the open world and loading the data for the city, but since HDDs are slow, the bridge has to be longer and the elevators have to be taller to give the game enough time to load it all. The devs also have to put artificial limits in place: for example, Aloy's running speed on the bridge is slower than out in the open world, and she cannot ride a mount (the game's controllable robot horses) into the city because it would be too fast (this is also why you cannot ride a horse into Athens in Assassin's Creed Odyssey). Similarly, flight cannot be a thing, because the game, or rather the HDD, simply cannot keep up with the speed of flight. If you tried to fly into Meridian, the game would just get stuck until it's all loaded up, and that's something no game dev will allow.
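The bridge trick is easy to express as arithmetic. Every number below is made up for illustration; the point is the shape of the calculation, not the exact figures:

```python
# How long does a 'hidden loading bridge' have to be?
# All figures are illustrative, not taken from the actual game.

city_data_gb = 3.0     # data that must be resident before entering (made up)
hdd_gbps = 0.1         # assumed sustained HDD read speed
walk_speed_mps = 4.0   # the slowed on-bridge walking speed (made up)
fly_speed_mps = 30.0   # a hypothetical flying mount's speed

load_time_s = city_data_gb / hdd_gbps          # seconds of streaming needed
bridge_walk_m = load_time_s * walk_speed_mps   # bridge length at walking pace
bridge_fly_m = load_time_s * fly_speed_mps     # 'bridge' needed at flight speed

print(f"Streaming time needed: {load_time_s:.0f} s")
print(f"Bridge length at walking pace: {bridge_walk_m:.0f} m")
print(f"Equivalent runway at flight speed: {bridge_fly_m:.0f} m")
print(f"Same data over a 5.5 GB/s SSD: {city_data_gb / 5.5:.2f} s")
```

At flight speed the disguised loading zone balloons to absurd lengths, which is exactly why the feature gets cut; on the SSD the whole problem collapses to a fraction of a second.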
Another example is GTA V. Don't you think it's odd that while the map is so large and detailed, you cannot enter most buildings, and the city itself is underpopulated and feels rather empty? GTA V was a 7th gen game, so it had limitations like the amount of RAM (the PS3 only has 512 MB, which is just fucking crazy) and rendering power, in addition to the limitation of the HDD. With the current gen, those first two limitations went away, and yet we ended up with a game that's very similar to its last-gen counterparts, and that's because of slow HDDs.
The improvements aren't just limited to open-world games either; linear games will also see a big change. Let's take The Order: 1886 as an example. It's a game with extremely detailed settings; in fact, its graphical fidelity is among the best of the games released this gen. However, almost all of it felt like a facade, because it was. For example, in one section of the game you enter a bar/brothel and walk down a long corridor full of doors. You couldn't enter any of those rooms, you couldn't see what was inside, you couldn't interact with anything. Why? Because the corridor is actually a hidden loading screen: as you're walking across it, the game is unloading everything from the last big area and loading everything required for the next big section. It simply cannot load anything else. So if you felt that linear games were just movies that occasionally let you press buttons on your controller, this should change that.
So far we've talked about how the SSD will remove limitations forced upon game devs. Now let's talk about how it'll let designers expand their vision, and for that I want to highlight one game in particular: Beyond Good and Evil 2. With BGE2, the creative director, Michel Ancel, wants players to explore a solar system, and on some of its planets there are vast cities, larger than any Ubisoft has built so far. Ancel wants people to be able to be in space, seamlessly travel down to a city on the face of a planet, explore its depths, and then seamlessly travel back to space (i.e. he wants all this to happen without loading screens). This type of game simply would not be possible on platforms without SSDs unless the creative vision were seriously compromised (by adding loading screens, making things so slow that space travel doesn't feel like space travel anymore, or by significantly reducing the size of the cities).
With the PS5's SSD, Sony wants everything to happen in a snap. They want things to be immediate; they want you to be able to ride a flying robot dinosaur into a large city, shoot some unruly robot in the face with an arrow, then mount a horse-like robot and exit the city at breakneck speed towards your next objective, all without artificially stopping you in your tracks, without making you walk across long bridges or wait through a long elevator ride. They want loading times to be 1 second, not 10 seconds. They want you to be able to get back to playing after dying without having to look at 'tips' for 2 minutes. They want your fast-travel subway rides across New York to be over in 1 second, not 3 minutes. It might not seem like much to you now, but once you experience that sense of immediacy, of things happening instantly, going back to anything slower is going to be annoying.
So after all this, you can understand why I'm so excited for the SSDs, specifically Sony's solution and why I was using terms and statements that seem hyperbolic at first to describe these SSDs.
Before moving on to the next section, I want to mention a couple more things about the SSD.
First, it's going to reduce game sizes, or at the very least ensure they stay the same. Developers often copy the same asset multiple times across a HDD to ensure it's nearby when they need it. Now, instead of having 200 postboxes on storage, they only need 1, since it can be accessed in an instant. Also, a vast portion of a game's file size is taken up by uncompressed audio. For example, when No Man's Sky launched it had a file size of 5 GB iirc, and around 4 GB of that was audio. With great hardware decompressors present on both platforms, devs can now compress audio down to a much smaller size, reducing the total file size.
Second, it's going to make game development slightly easier. Devs won't have to spend a lot of time carefully budgeting memory usage and fine-tuning everything. They won't have to spend time getting the length of a bridge and the character's walking speed across it just right to ensure everything loads in time.
Third, games aren't going to get slower after release. You might've noticed that games get 'slower' over time once they've received a lot of updates/patches. This is again because of HDDs. With an SSD, this isn't going to happen.
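The first point above is easy to put in numbers. Everything here is a made-up illustration except the No Man's Sky figures already quoted, and the 4:1 audio compression ratio is purely an assumed example:

```python
# Illustrative sketch of the two install-size savings described above.
# All values are made-up examples, except the No Man's Sky figures
# quoted in the text; the 4:1 audio ratio is an assumption.

# 1) Asset deduplication: an HDD layout stores many copies of the
#    same asset near the data that needs it; an SSD needs just one.
asset_mb = 2.0
copies_on_hdd = 200
print(f"Postbox on HDD: {asset_mb * copies_on_hdd:.0f} MB, "
      f"on SSD: {asset_mb:.0f} MB")

# 2) Audio compression: with a fast hardware decompressor, audio no
#    longer needs to ship uncompressed.
nms_total_gb = 5.0
nms_audio_gb = 4.0
compressed_audio_gb = nms_audio_gb / 4  # assumed 4:1 ratio
new_total_gb = nms_total_gb - nms_audio_gb + compressed_audio_gb
print(f"No Man's Sky-style install: {nms_total_gb:.0f} GB -> "
      f"{new_total_gb:.0f} GB")
```

Even with conservative assumptions, the duplicated-asset saving alone dwarfs the raw speed difference people fixate on when comparing storage sizes.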
IMMERSION AND PRESENCE
This is a PS5-specific thing. Key tenets of Sony's next-gen console are 'immersion' and 'presence', the feeling that you are inside the game you're playing. How are they going to achieve that feeling? Through two things: reactive interaction and immersive sound.
The first, reactive interaction, is a term I just came up with to describe the haptics on the controller. Sony didn't talk about this in their presentation, but if you read the Wired interview, you'd know they're going all in on haptics. They don't just want the controller's vibrations to be 'strong' or 'soft'; they want them to have 'texture' and 'weight'. In the article, the journalist describes a PS5 version of GT Sport with enhanced haptics: while driving a car, he could feel the difference between driving on an asphalt surface and driving on grass. Another example was a platforming game where one could feel the difference between walking on snow and walking on mud.
The second, immersive sound, seems to be the second biggest feature of the PS5 after the SSD. Now, I'm not going to go too in-depth on the PS5's Tempest 3D audio tech, because a) it's quite complicated, frankly I don't fully understand parts of it, and it probably deserves a post of its own, and b) Cerny again does a better job of explaining it than I ever will. What I can say, however, is that it's the most complicated and advanced sound solution I know of that has been put into a consumer product.
Over the past week I've seen a lot of comments saying this is basically useless and doesn't work, or that the technology already exists in AMD's GPUs as AMD TrueAudio, or that the Series X also has it through an implementation of Microsoft's Project Acoustics. All of these notions are false. The PS5's Tempest audio tech does work (in theory), and it's much more advanced than, and different from, both TrueAudio and Project Acoustics.
Before I go into why it should work, let's talk about how it differs from TrueAudio and Project Acoustics. TrueAudio is basically a dedicated chip that processes audio instead of using the CPU or GPU to do so. AFAIK this technology is already used in the PS4 and Xbox One, and it hasn't really been put to much use in AMD's GPUs or the consoles apart from VR games. To explain Microsoft's Project Acoustics, we need to take a slight detour into Physics 101. You might be aware that light has a dual nature, i.e. it behaves both like a particle and like a wave. The prime difference between particles and waves is that particles travel along straight lines (i.e. rays) while waves can 'bend around' objects. While light behaves like both, sound only displays wave nature, i.e. it is a wave and displays the properties of waves. In games, however, wave properties of sound like occlusion and reverberation haven't been properly implemented. Devs code sound to behave like a ray: it originates from one specific source and travels outward the way a ray does. They try to simulate wave effects like reverberation, but it isn't very accurate. Project Acoustics changes that and implements real-life wave-like behaviour of sound, like occlusion, obstruction and reverberation, in-game.
Now the tech I mentioned above isn't quite specific to 3D audio but should lead to a general improvement in audio quality. Sony's Tempest audio tech meanwhile, is trying to achieve this and much more, specifically in the department of 3D audio. Now, 3D audio isn't a new idea whatsoever, in fact you've probably already experienced it if you've been in a theatre with Dolby Atmos or own a Atmos certified sound system or if you have a particularly good piece of surround sound equipment. However, Sony's implementation of 3D audio is quite different and more advanced than the consumer implementations we've seen before. To see how it's different, I'm going to compare it to Dolby Atmos:
First off, it should be noted that the way we perceive sound is unique to each of us because of differences in the shapes of our heads and ears. AFAIK, Atmos doesn't account for these differences, while Sony's tech is trying to, the key word being 'trying'. The obvious caveat is that, since the way each of us perceives sound is unique, there is simply no way Sony can program a unique algorithm for each of the millions upon millions of people who are going to buy a PS5. What Sony is doing instead is scanning as many people as they can and building 5 general algorithms from that data, and the end user can select whichever of the 5 suits them best. What this means is that for most people, game sound should be extremely close to how they perceive sound irl; for some people it's going to be exactly how they perceive sound irl; while for some outliers, whose perception of sound is very different from most people's, the improvement won't be perceivable. This is just at launch, though. Sony has said it's a multi-year project, so they'll be building more algorithms as they collect more data from people. This is what Cerny meant by 'send me nudes of your ears'.
The second difference is quite straightforward. Atmos uses 36 individual sources of sound. Sony's tech can use hundreds or even thousands of individual sound sources, and the quality of the sound streams themselves can be much higher.
The third difference is that Atmos requires specialised, certified hardware that matches Dolby's spec. Sony's audio tech doesn't require certified hardware; it will work with any set of headphones, TV speakers or surround sound system. Not everything will be supported at launch, though. All headphones will be supported, and Cerny says headphones will be the gold standard; they're currently working on support for TV speakers, and support for multi-speaker surround systems will come later.
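The 'pick the best of five generic profiles' idea from the first difference can be sketched as a nearest-match search. Everything here is hypothetical: the measurement vectors, the Euclidean distance rule, and the profile data are illustrative stand-ins, not Sony's actual method.

```python
# Hypothetical sketch of choosing one of five generic HRTF profiles
# for a user. The feature vectors (stand-ins for ear/head
# measurements) and the nearest-match rule are illustrative only.
import math

# Five generic profiles, each an averaged (made-up) measurement vector.
profiles = {
    "A": [5.9, 3.1, 1.4],
    "B": [6.4, 3.3, 1.6],
    "C": [5.5, 2.9, 1.2],
    "D": [6.8, 3.6, 1.8],
    "E": [6.1, 3.0, 1.5],
}

def best_profile(user: list) -> str:
    """Return the profile whose vector is closest to the user's."""
    return min(profiles, key=lambda p: math.dist(profiles[p], user))

print(best_profile([6.05, 3.0, 1.48]))
```

In the console the user would presumably just audition the options by ear, but the underlying idea is the same: map a continuum of individual anatomy onto a small set of precomputed profiles, then grow that set as more people are scanned.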
So all this is quite exciting stuff. There is, however, an issue. While the SSDs will deliver an objective improvement in games, the sound technology and the controller are much more subjective, and the improvements they offer aren't exactly clear. For example, you might not like the haptics at all and disable them completely, or you might be one of the outliers in how you perceive sound and hence won't be able to tell much of a difference. As for me, my opinion on the PS5's audio system is gonna depend entirely on whether or not it comes with a 100-hour-long Mark Cerny reads the LOTR trilogy ASMR audio file.
So until we see, or rather experience games built for the PS5, it's impossible to tell if all this will have much of an effect. Regardless, the concept alone is quite innovative and exciting and I have to give Sony props for following through with this despite the complexity of it all.
ENHANCED CONTINUITY

This is a Series X feature, at least for now. What I mean by enhanced continuity is basically enhanced backwards compatibility. The backwards compatibility features of the Series X, courtesy of the amazing Xbox backwards compatibility team at Microsoft, are easily the most impressive part of the Series X for me (again, the power of the Series X is jaw-dropping, but beyond a certain point it doesn't really do much for me; meanwhile, the PS5's SSD kinda overshadows the still-impressive Velocity Architecture of the Series X).
Sony talked about 'machine learning' in one of their Wired interviews, but it was just a vague mention, and so far we've not seen anything about the machine learning capabilities of the PS5, or even whether they're actually present. Meanwhile, Xbox has actually shown off an impressive use of machine learning. The fact that the Series X can play thousands of Xbox One, 360 and OG Xbox titles is amazing, but its ability to automatically run those games at higher resolutions and framerates, and even apply HDR to them, is just downright jaw-dropping. Of course, the caveat is that not every previous-gen Xbox game is going to work, but because the backwards compatibility team has been working on this for the past 4-5 years, we can play thousands of previous-generation games at launch on the Xbox, and that's just frickin awesome.
The PS5's backwards compatibility situation was kinda blown out of proportion, with many people assuming it isn't backwards compatible at all, or that only 100 titles would be backwards compatible (which just goes to show how little attention people were paying during the event), when in fact Sony is just doing what Xbox has been doing for the past 5 years: testing, certifying and improving games on a title-by-title basis. What is disappointing, though, is that it took them so long to start, and now they're just so far behind Microsoft with regard to backwards compatibility.
This idea of enhanced continuity also extends to the peripherals. All Xbox One peripherals will be compatible with the Series X, that's all well and good, but they'll also receive a software update that brings with it the Latency Reduction features that Microsoft has baked into the Series X controller and that is awesome.
SERIES X SPECIFIC FEATURES
There are a couple more features Microsoft announced that, while innovative, I can't really say much about because there isn't much to say yet. So I'll just list them out.
Another thing to note is that these are Series X-specific features for now. Unlike major components such as the SSD, processor or audio chip, these features already exist in the base AMD processors that both Sony and Microsoft are using, or can be implemented via software updates. So far only Microsoft has talked about them, so I'm gonna assume that only the Series X has them unless Sony says otherwise. Here's the list of features:
- Minimising Input Lag
- Variable Refresh Rate
- Elimination of screen tearing
- Variable Rate Shading
- Smart Delivery
So there you have it: why I believe this generation of consoles might be the most innovative in decades. Now, I understand that this is a very big claim to make.
Some of you might argue that back when consoles used specialised, purpose built hardware like the PS3's Cell Processor, consoles were more innovative, but to that I respond by asking, how does it matter if those consoles were packed with innovation when developing games for them was so hard that you couldn't even make full use of many of those innovations?
Others might also say that, just as in the past, no developer will actually make use of these features, and I understand why people might be so skeptical. In the past, console manufacturers have promised a lot of things and almost always ended up under-delivering. However, I believe this generation will be different. For one, both manufacturers are listening to developers and implementing features that developers are asking for, rather than creating their own features and forcing developers to use them. Case in point: the SSD. Devs seem more excited about the SSD, specifically Sony's SSD, than about any of the other improvements and features, and according to Cerny the number 1 requested feature was the inclusion of an SSD. One of the top priorities for the console manufacturers is also to ensure that developers have an extremely easy time developing games for the system and that they get to grips with it as soon as possible.
Secondly, there's a refreshing lack of BS from both Microsoft and Sony. Cerny outlined his vision for the system and explained in detail why they made the technical choices they made, while Xbox has been extremely upfront about their vision, aims and achievements with the Series X. There is just a lot more 'real' and honest communication coming from both manufacturers (at least so far; it could all change as we near launch). During the pre-launch period of the current-gen consoles we kept hearing about the cloud, or TV, or other completely unrelated BS, so it's nice and refreshing to have Sony and Microsoft just focus on what matters: the games.
I'm also very happy to see that these consoles are different from each other, each with their own unique features. I really cannot wait to see the games that are being built for these consoles and I cannot wait to get my hands on them when they launch.