I think it is concerning that every single Apple Intelligence feature they've shipped thus far has been not just mediocre, but bad. Being last to the party is a very normal Apple thing; quality and Doing The Right Thing take time. Announcing something and then taking months to ship it is very not-Apple, but it has happened a few times. That thing they finally ship being bad is, geeze, horribly un-Apple.
One of the few examples I can think of, however, is Apple Maps. And it did get better; a lot better, some say better than Google Maps nowadays. So I generally do have hope for Apple Intelligence. At the end of the day, there are some disparate competing utilities in this class on the Samsung and Google phones, but no one is shipping something that is obviously game-changing and in first place; they all kinda suck, they're all tech demos, and it'll inevitably take many years to hone this technology into something that is truly useful to consumers.
There is a lot in the Apple universe that is shoddy. iTunes, for instance.
iOS has a refinement that Android lacks, but I am unimpressed with MacOS. Windows is stuffed full of terrible crapplets, and Windows users largely recognize that these are terrible crapplets and don't use them. Apple users have a fixed belief that everything Apple does is brilliant and fashionable, so they do use them, which has a deadly effect on the market for third-party software. (No good music players for MacOS, for instance.)
Even Apple fans have lately been claiming it's gotten worse over the last few years.
(That said, I love the innovation in the M-series chips from Apple just as much as I appreciate Microsoft's commitment to the long-term viability of Windows for all of us who invest in it. Occasionally at work we still use Access '98 to handle old files, and it works great; the installer works great; in fact Office still tries to take over the desktop the way it did back in the day. Clippy still works. The borderless windows look just a little funny because the compositor changed. No way you could run Linux binaries or MacOS classic binaries from '98.)
> No way you could run Linux binaries or MacOS classic binaries from '98
The key problem for desktop Linux is that nobody knows exactly how to build binaries that will run on any reasonable Linux desktop system today, so it's hard to keep that non-existent reasonable subset of ABI stable for an extended period.
That said, you CAN do this. The kernel itself does present a mostly pretty stable ABI to userland applications, so you can grab a Debian chroot from 1998 and be on your way. Debian even still serves repositories for everything on archive.debian.org, and Dockerhub has OCI images you can `docker run` for Debian from 1999, under the debian/eol repo. You can `docker run` and `apt-get install` 25+ year old binaries on modern Linux!
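For example, something along these lines works on a current machine (the exact tag under debian/eol is an assumption on my part, so check what Docker Hub actually lists; the images' apt sources should already point at archive.debian.org):

    # pull and enter a ~25-year-old Debian userland (tag name is illustrative)
    docker run -it --rm debian/eol:potato bash
    # inside the container, install and run period software
    apt-get update
    apt-get install xkobo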
What would be sweet is if we could build and ship compatibility tools that make these old binaries work mostly transparently. Today, double clicking a binary on Linux won't do anything particularly sophisticated, and there are no compatibility options. But actually, it would be totally doable to write a variety of useful compatibility shims without doing anything horribly grotesque. The PT_INTERP and DT_NEEDED fields of binaries would often give sufficient information for how you might get such a binary to run. It's not like it would be that useful, but I would personally be very pleased if you could just double click e.g. some old Kylix application and have it just run, perhaps after downloading some (shims for?) old libraries. You could extend this to transparently running CPU emulators too, not unlike the tricks people do with binfmt_misc, just possibly with more batteries included (and a bit less transparency.)
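To make that concrete: everything such a shim would need to decide is already sitting in the ELF headers, and you can do the whole dance by hand today with stock binutils. A rough sketch (the /opt/old-root path is just a stand-in for wherever the old libraries would get downloaded):

    # what the binary says it needs
    readelf -l ./old-binary | grep 'program interpreter'   # PT_INTERP, e.g. /lib/ld-linux.so.2
    readelf -d ./old-binary | grep NEEDED                   # DT_NEEDED shared libraries
    # the crude manual version of the shim: run it against an old root's loader and libraries
    /opt/old-root/lib/ld-linux.so.2 --library-path /opt/old-root/lib:/opt/old-root/usr/lib ./old-binary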
Another really great feature would be useful error messages when executing an application fails. Today, if the PT_INTERP interpreter is missing, it looks like the binary itself can't be found, since the kernel returns the same errno, and you won't see linker errors at all if you execute a file from a GUI file explorer. What a great improvement it would be if all of that could be fixed, and there is no technical reason it can't be.
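Even a dumb check in the launcher would beat the kernel's bare ENOENT. A sketch of the kind of wrapper a file manager could call before giving up (the script name is hypothetical; the tools are stock):

    #!/bin/sh
    # friendlier-exec: explain *why* an ELF binary won't start
    bin="$1"
    interp=$(readelf -l "$bin" 2>/dev/null | sed -n 's/.*program interpreter: \(.*\)]/\1/p')
    if [ -n "$interp" ] && [ ! -e "$interp" ]; then
        echo "cannot run $bin: it needs the ELF interpreter $interp, which is not installed" >&2
        exit 127
    fi
    exec "$bin"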
Of course, frustratingly, for more reasons than just this, the more likely thing to happen is that nobody bothers since containers are the future anyways, and Win32 instead becomes cemented as the true stable ABI of Linux. Which, in my opinion, is a bummer. We could always have two stable ABIs of Linux...
I'm happy for WSL2 users that are getting what they want, but I don't even particularly care about the things WSL2 brings to Windows; what keeps me from using Windows is just Microsoft.
I have been thinking about the parallels to OS/2, though, and I really do wonder if it's going to go that way. Much like debates about which economic systems are actually viable, there's no real reason to believe aping someone else's ABI can't work other than that it didn't in the 90s. But boy, the game sure has changed a lot, and I'm not so sure it will play out that way anymore. While Valve has been shipping the Windows ABI on Linux commercially, the way they've been doing so is definitely a bit different than how it was done in the past. So far it seems like they're actually succeeding, and the question is somewhat more of how much they can succeed with it.
> Apple users have a fixed belief that everything Apple does is brilliant and fashionable so they do use them which has a deadly effect on the market for third-party software. (No good music players for MacOS for instance)
Many of the long term users of macOS/OS X/etc etc are highly critical of its downfalls, but still use it because, of the available options, it's the one they prefer. Myself included. You can use something while also being aware of its shortcomings.
Have to strongly disagree on this point. Cog¹ is my music player of choice on macOS; not only does it have a clean GUI, but it supports almost every format² I’ve ever wanted to listen to audio in, including game music in formats like GBS (Game Boy Sound System) and 2SF (Nintendo DS Sound Format).
Vox has a UI reminiscent of a Windows XP skin and gobs of macOS oddities. I feel like they're still trying to catch up to macOS UI changes from 5 years ago. Don't get me started on how inaccessible the entire interface is... They have Tab bound not to focus changes but to switching the, yes, tab...
There is a point where supporting legacy software stops being impressive and just starts to be counterproductive. I'm glad my Mac doesn't support Mac OS 8 software anymore, especially since it means I can use a much faster processor architecture.
Apple could easily spend a few million to support some old hackers keeping ancient software alive; they choose not to do it because they either don’t care, don’t know they can do it, or don’t think it’s profitable.
They certainly have an effect on one another. You can’t just say “old hackers!”. ALL codebases are tied down by legacy design decisions after a long enough period of time. Microsoft’s commitment to backwards compatibility has inarguably resulted in Windows flaws hanging around for longer than they’d otherwise need to. The argument is around whether or not it’s worth it.
You can’t throw all your engineering knowhow out the window just because you’re discussing something politically charged. This is simply how code works.
I don’t understand your argument. There are (presumably) basement dwellers keeping ancient gaming console emulators working on modern platforms with little more resources than free food from their mothers.
> No way you could run Linux binaries or MacOS classic binaries from '98
Can you give me a few examples of Linux binaries from '98? I would like to give this a go; I think I have a pretty reasonable way to go about achieving this.
No, most of them run in X-Windows, using protocols that are still supported even by xwayland, because they weren't originally written for Linux (or IBM-compatibles) at all. Some of them fail to cope with TrueColor visuals but most are fine. You can get old source code from http://www.ibiblio.org/pub/Linux/X11/games/. Binaries are maybe trickier; maybe start with http://archive.debian.org/debian-archive/debian/dists/slink/...? Most of those are going to depend on old shared libraries for things like Xlib, though. (Notice that half the game names start with "x"!)
Debian Slink was formally released in 01999, but most of the packages in it are from 01998. I just installed xkobo from http://archive.debian.org/debian-archive/debian/dists/slink/... with `sudo dpkg -i` but now it wants i386 versions of libc6, libstdc++2.9, and xlib6g. ldd says:
libXext.so.6 => not found
libX11.so.6 => not found
libstdc++-libc6.0-1.so.2 => not found
If anyone is adept enough with Debian to explain how I can do this without breaking my amd64 install, I'd be obliged. (Maybe I need to debootstrap in a chroot or something? It's probably still possible to download slink CD install images.)
Ah, most of what you've got there looks like small "desktop toy" games, like minesweeper games and whatnot. I was thinking more along the lines of full-screen, possibly 3D, games of the sort that got ported by companies like Loki back in the day. Those took a long time to become runnable under X11.
> Maybe I need to debootstrap in a chroot or something?
Probably. Even then you might run into some compatibility issues - IIRC, some of the really old code paths used for system calls (like the vsyscall DSO) have been removed in modern Linux kernels.
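Though for a one-off like this you may not even need debootstrap: unpack the old i386 .debs into a scratch directory and run the binary against that directory's loader and libraries, which leaves the host's package database untouched. A rough sketch (the package list and paths are illustrative, and it assumes the kernel will still execute 32-bit i386 binaries):

    mkdir slink-root && cd slink-root
    # unpack xkobo plus the libc6, libstdc++2.9 and xlib6g debs it asked for
    for deb in ../*.deb; do dpkg-deb -x "$deb" .; done
    # run it with the old dynamic loader and only the old libraries
    ./lib/ld-linux.so.2 --library-path "$PWD/lib:$PWD/usr/lib" ./usr/games/xkobo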
xkobo in particular is a twitchy 2-D bullet-hell shooter, and the current version of it (kobodeluxe) is built with SDL and supports fullscreen mode. But I'm pretty sure xkobo did run in a window. Certainly the other games there I remember (xjewel, xgalaga, etc.) did.
I'm not familiar with Loki's work at all, though the name seems familiar.
I think the removed-code-paths thing is mostly an issue with libc5, isn't it?
I ran into a problem with really old code paths in August when I tried to compile PFE 0.9.14 (a Forth implementation, not a game) from 01995; it was trying to call `uselib`, which I think has never existed on amd64.
FWIW `ldd` reported that xkobo had successfully mapped "linux-gate.so.1 (0xf7f3c000)" (with no filename); as I understand it, this is the vDSO that replaced the vsyscall mechanism, so at least for stuff built for Debian Slink I don't think that problem in particular will occur.
Which games ported by Loki didn't require X11? I remember playing Quake 3, Unreal Tournament, and Majesty on Linux in mid-00s, and they were all X11 ports.
> Apple users have a fixed belief that everything Apple does is brilliant and fashionable so they do use them which has a deadly effect on the market for third-party software. (No good music players for MacOS for instance)
Nah, Apple users knew from the beginning that Siri sucked and still sucks. Almost no one I know uses Siri except for setting alarms and asking for the weather forecast.
To be fair, Google Assistant started out better but has since been neglected, actively undermined, and had features taken away. My understanding is the whole team was folded into Gemini, and it's a classic case of "old thing deprecated, new thing not ready yet".
I don't do the Amazon ecosystem so can't say about Alexa.
There was a period where Google Assistant was so useful. It could control phone functions, set events, reminders, and alarms, change the volume, and reliably message someone. It's wild to think through all the things it used to do well that it simply cannot do today.
Please don’t spoil this enjoyable nuanced conversation about Apple’s flaws with the usual laundry list of, frankly, unintelligent copy-pasted 2000s Mac vs PC online forum flame war talking points.
You refer to iTunes in the present tense when it hasn’t existed for years, refer to ‘Apple fans’ as some sort of completely separate group of clearly defined people, and spend an unjustifiable amount of your comment talking about your quite niche professional Windows backwards-compatibility use case.
We don’t need to rehash this whole thing. Please. Don’t take all the oxygen out of the room.
assuming you mean on macOS, given your comment after this. (there is no iOS app called iTunes anyway)
yes, on both iOS and macOS, Apple intentionally hobbled iTunes/Music, incrementally making it worse each update, after Apple Music gained an initial foothold.
not that it was ever "great" after 1.0 maybe 2.0, but it certainly used to be good. Now, if you're not an Apple Music subscriber, you're left with a pretty basic player. I get that they want to segment the market, but to remove features and actively make it worse? horrible.
Mentioning "iTunes is bad" is like a trigger word for me because it's so misinformed at this point.
For one thing, the iTunes name doesn't technically exist anymore except on Windows. And anyone complaining about it being bad on Windows...I mean, that's like complaining that Microsoft Remote Desktop (Now called the Windows app for some reason) sucks on Mac, right? Like, can we just put the Windows version aside please? Even then, I'm not really sure what specific thing iTunes for Windows sucks at besides not looking like a Windows app. People just say that because they were saying it in 2005.
On Mac, the Music app (not to be confused with the streaming service) is fantastic and has supported Apple's "classic" digital music workflow longer than anyone else has been willing to support their users. The Apple TV app (again, not to be confused with the TV+ subscription service) is now the home for the movie/TV show store/rental place and the home of your TV/movie library, which is a big improvement over shoving that functionality into iTunes. In that sense, Apple has cleanly separated use cases and functionality in a way that iTunes didn't previously, which is one reason why a lot of people said "iTunes sucks."
I have a family member who recently switched to Android because of frustration with Apple as a whole. They are a big digital music collector, they don't believe in streaming or "renting" their content.
I tried to help them with their music collection on Android. Theoretically it should be easier right? No weird restrictions on sync direction, basically dump your stuff on an SD card/transfer over USB-C and you're off to the races.
But still, they switched back to Apple secondarily because it's the only place left that actually makes that "purchased digital music" experience user-friendly, or possible at all. (Primarily they switched back to iPhone because the modem in their Google Pixel sucks and/or is poorly tested with their major US carrier and would drop international calls every 15 minutes exactly for no reason)
Google Play's music store doesn't exist anymore. Every jukebox app on Android depends on 100% manual file management. None of them have the polish of the Music app (the app not the service). Almost none of them have decent jukebox companion apps available on desktop computers. A whole bunch of other digital music stores have closed entirely.
Apple's system for synchronizing content is actually pretty amazing for continuing to support an offline cloudless workflow. You still just hit one button/plug in your device to sync your music, movies, audiobooks, ebooks, and photos content. It also supports WiFi syncing, and it furthermore supports every iPod that ever existed so long as you have the right cable/adapter.
You can back up your iPhone's full image to your computer if you don't want to use iCloud backups just like it was an iPod. You can synchronize your Photos library and avoid iCloud storage fees, deleting synchronized photos from your phone to free up space to take new photos and videos. It works just like you were using a digital camera in 2005. Yep, you can still rip and burn CDs!
Furthermore, the way Apple moved device synchronization functions to Finder and split out Music from Podcasts and Audiobooks is helpful for organizing the whole process. It used to be that iTunes was the home for all this synchronizing of non-music-related content, but now it more sensibly exists in Finder.
I think a lot of people don't realize that Apple basically still allows you to send over personally owned non-DRMed or even pirated content to Apple's own modern apps very easily this way; you just have to be willing to synchronize using "the old way", as if your iPhone were an iPod. They've even kept ancient hosted services like iTunes Match going just in case you still need that sort of thing (it essentially allows you to sync music to your iPhone that is either pirated or not part of a known label's music catalog via a cloud service, rather than having to do a local sync via cable or WiFi).
And this workflow is very simple for non-technical users who don't really know how to traverse complicated file management structures. Yes, I would really like it if apps like Photos were more flexible about file management, but on the other hand, if you follow the prescribed workflow, the results are quite user friendly for someone who really doesn't want the cloud but also can't handle setting up a home NAS. In this use case you have a reasonable photo storage system by syncing your device and then backing up your computer in a relatively hands-off manner using Time Machine.
One final point here is that Apple Music the subscription service can be hidden entirely from the app. Apple will just give you a 100% owned music jukebox app. Google doesn't do that, and with Microsoft you're probably using a legacy app like Windows Media Player that looks like it belongs on Windows Vista.
Microsoft Remote Desktop is 100% great on iOS (if not MacOS) in my opinion. I never feel so stylish as when I show up at a hackathon with a tablet + $20 Bluetooth mouse and $30 Bluetooth keyboard (how did they convince people to spend a few hundred on a special keyboard or to buy a 'hybrid' computer that will leave the airline stewardess at a loss to know if you can stuff it in the pouch in front of you?) and the MacBooks and gaming laptops look clunky in comparison. And that's backed with a 16-core machine with 128GB of RAM and a 4080 if it's my home machine, and I can rent something much larger for a few $ an hour in the cloud. My only beef is they want to call it the "Windows App" now.
(At the least the Apple AAC encoder is good. That plus a Python script can copy music files to a USB stick in the right order so they display properly in the music app for my car... And that's what a good music app is to me, not something that wants to push me to buy a $1000 phone and $100 a month plan so I can crash my car screwing around with my phone.)
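The script itself is nothing clever; the trick is just that many car head units list files in FAT directory-entry order rather than sorted, so you copy tracks in the order you want them shown. Roughly this, in shell terms (the real one is Python, and the paths here are illustrative):

    src=~/Music/SomeAlbum
    dst=/media/usbstick/SomeAlbum
    mkdir -p "$dst"
    # copy in sorted filename order so the FAT directory entries land in track order
    find "$src" -maxdepth 1 -name '*.m4a' | sort | while read -r f; do
        cp -v "$f" "$dst/"
    done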
> Every jukebox app on Android depends on 100% manual file management.
You've got some great points in there, but as for this one, file management is one of Apple Music's weaker points. I absolutely hate apps that graft their own "library" over top of my already-working filesystem. As someone who's meticulously laid out the directory structure for all of my movies and music, I'm paranoid that some opinionated software is going to just go run roughshod over it, moving things around the way they think files should be organized. I already have a "library". It's an NFS mount on my NAS.
Fortunately, Apple Music still allows you to disable this misfeature, but to do it you have to go into Settings and uncheck a bunch of things. Easy to forget.
> On Mac, the Music app (not to be confused with the streaming service) is fantastic
Strong disagree. I find Apple Music (the app) on MacOS to be terrible.
A good half of the main screen is taken up by a view with a random mix of artwork from the playlist. I find it useless, and it can't be hidden. Also, there is no way to set the default view to just show songs, instead of the crappy "playlist view".
Search. It's hidden on the sidebar of playlists/sources and you have to scroll to the top to get to it. And then, the choice of whether to search local/Apple Music is on a toggle button on the other side of the screen.
Lyrics - you can't change the font, or adjust the size in normal mode, and when played in fullscreen, the background colours often obscure the lyrics so they're unreadable.
And finally Apple can't seem to decide between a Heart or a Star for songs that you love.
My favorite is that for ~8 years after they killed their theater movie showings service at movies.google.com, that subdomain still pointed to the deprecation notice. They finally must have noticed earlier this year, because it changed to a hacky redirect to https://www.google.com/search?q=movies and now much more sanely points to the Youtube movie storefront. (This drove me crazy for years because I'd always instinctively type in movies.google.com thinking it'd intuitively take me to the Play Store... I was so excited when it finally changed off the deprecation notice.)
The iTunes functionality carried over to Apple Music is just as slow and confusing and non-portable as it ever was, now with added cloud sync that will mysteriously tell you that anything it doesn't recognize isn't available in your region.
The streaming bit may be good but the rest is not.
There's something I must not understand about the Music app because when I drag & drop files on it (or let's get crazy, a folder), it does not read them or even add them to the current playlist?
> On Mac, the Music app (not to be confused with the streaming service) is fantastic
I’d love to live in your alternate reality, not in mine where Music.app is slow, doesn’t do filtering to find specific content very well, doesn’t let you view album covers in a reasonable size, and shortcuts and buttons are inconsistent with the rest of the OS.
Also, syncing (about 350GB of) content to my iPhone has been hit and miss for at least 9 years now, where consistently the same tracks just disappear from the phone and maybe - just maybe - eventually get synced again, taking a few hours in the process. This has been going on across at least three Macs and about six iPhones.
I understand that streaming via Apple Music is the thing now, and us users from the “Rip, Mix, Burn” era are considered legacy now. I’d love to switch to something better, but haven’t found anything yet.
I love the Apple Music app too but the trick is to mainly use the songs tab and playlists and enable the column browser. Also, of the current widely-used options, Apple Music is second to none at this point: old apps like Amarok were nicer but they practically don’t exist anymore. Spotify, Tidal, Qobuz, etc. are all much more annoying than Apple Music (in part because they are Electron Apps).
I do use Music.app like this already, and while it's definitely okay-ish due to lack of alternatives, it's still lacking a lot.
It has also been stagnating for at least 10 years without any changes - apart from making the UI less consistent with the rest of the OS (e.g. "Reveal in Finder" being ⇧⌘R instead of ⌘R everywhere else [0], or the dialog asking whether I really want to edit metadata for several files defaulting to "Cancel" on hitting "Enter", while "OK" is displayed as the default button).
I agree that it's better than the rest, but that's easy :) It's hard for any 3rd party app to compete, as us nerds with large, well curated libraries are a determined and dedicated bunch, but still a quite small market.
[0] I'm aware that this is a relic of the short-lived iTunes Ping network, where ⌘R was used for something else
The severe badness of iTunes on Windows halted any momentum I may have had in migrating from a Windows ecosystem towards Apple.
It was just so horribly bad. Apple's disrespect for the dominant competing operating system made Apple look incompetent. I liked the iPads until I had to work out how to transfer files on and off them, to and from my existing infrastructure. It was goddamn painful, like going back to a previous era of esoteric computer usability.
> One of the few examples I can think of, however, is Apple Maps. And it did get better; a lot better, some say better than Google Maps nowadays.
This depends on where (which country) you live. For all the ways Apple has been vocal about the Indian market and local production, Apple Maps literally sucks even in major cities in India. Google Maps is decades ahead and gets updated very quickly. Apple Maps cannot even find regular addresses or places.
Apple has its share of incompetencies and willful blind spots, and that shows up in specific areas often related to its services (Apple Intelligence is also a service). The organization and its people are not built for handling these effectively or quickly.
That said, I have more hope in Apple Intelligence improving quicker (at least in English, while competitors are already ahead in other languages, including several Indian languages) than I have in Apple Maps improving in India.
Google Maps also sucked in India until a couple of engineers flew there to figure out all the idiosyncrasies of mapping/routing and spent a bunch of time implementing regionalized fixes for them. Apple expresses some very clear preferences in which regions they support well in Apple Maps, and those exclude most "difficult" areas.
The common wisdom is that Apple Maps works significantly better in the Bay Area than anywhere else on Earth, because the engineers file bugs they encounter on their commute.
Yeah, so much of Google's moat in mapping is just the sheer amount of human time thrown at the problem of all the little regional idiosyncrasies all around the world. Getting that right is what makes it so hard.
Who can forget the time that Apple Maps took me down a road that had never been finished.. that became gravel in a field.. and I realized I had driven into a homeless camp. Ever seen a zombie movie where they swarm a car? It’s happened to me.
It's the same in the UK. Also I have been trying to get them to list my address for two years now. Google were able to update it but any requests to Apple seem to go into a black hole.
I can't speak to Europe, but in the US in a very rural location, I never have speed limit issues to begin with. I believe I have also submitted a couple of changes for small things, and they've all been handled so far as I can tell as I have not run into them again.
Probably typical in that Apple's services in the US are generally better than elsewhere, but just wanted to add a positive with my experience and acknowledge that it's likely better here due to location within the US.
> That thing they finally ship being bad is, geeze, horribly un-Apple.
It was actually quite common for Apple during the days when Steve Jobs was no longer at the helm; they weren't even able to create a new OS and had to buy another company to rescue them.
And had they gone with Be instead of NeXT, most probably we would be talking about Apple in the past tense nowadays.
Nowadays they might have more money than ever, but it won't last forever if they cannot do anything other than reboots of existing products.
A smart assistant that can understand and speak to me like Advanced Voice Mode, use a vast knowledge database, be tailored to my needs, and act on my behalf.
And it would be great if it’s able to run locally.
I would say Gemini Live is getting there. It's lacking integration with NotebookLM and Keep. It would be amazing if, when I started a project conceptually and wanted to move on to code, it could fire up VS Code and let me get to work.
Gemini's home automation works nicely and it can understand comments like it's too dark in here or it's cold inside and act appropriately. This is using the Android app as an assistant, not live mode.
OpenAI's implementation is apparently similar but I haven't tried the voice mode as a free user.
I haven't tried Apple Intelligence yet on my M1 and don't have an iPhone, so I can't compare.
I've been looking at offline capabilities with open weight models but they aren't there either. A full speech-to-speech model [1] working on an M1 Mac would be incredible.
If that is simple, start a company to build it and become a billionaire.
I don’t think any company has a smart assistant that’s reliable enough to act on your behalf except for some very constrained tasks (examples: dish washers, auto-parking cars)
> concerning that every single Apple Intelligence feature they've shipped thus far has been not just mediocre, but bad
The very initial success of Microsoft was that everything was reliably mediocre. Most things Microsoft delivered that were truly bad were fixed within a few major versions. It was a superpower.
The same model works for most purchases on a bad|average|best spectrum: we never want to buy bad, best is difficult to buy, so we settle for average quality.
Aside: I think MS has gone downhill and is now bad on multiple dimensions for me
It still really isn't that close to Google Maps; with public transit in particular Apple Maps is pretty much useless. Google Maps is typically more complete with paths and building data outside of North America too.
Apple sent me to a rural cornfield once instead of a church where a baptism was taking place. It was funny because we weren’t the only ones; everyone using Apple Maps was sent to the same cornfield.
These types of rare but common-enough edge cases make me super hesitant to use it in the Midwest.
I have had a really good experience using Apple Maps for public transit. Earlier this year I went to NYC for the first time as an adult and it was super easy to use for finding which train to get on. Had a similar experience in Europe this fall as well.
I was able to navigate the transit systems in Tokyo, Osaka, and Yokohama near-flawlessly with Apple Maps in 2018. I only recall encountering one correctness issue.
Apple has a potentially interesting use case for generative AI in their professional creative apps: heavy integration in Logic Pro or in Final Cut. Perhaps even create simpler tools with similar functionality but aimed at non-professional users.
The problem is that this risks antagonising everyone in the arts/humanities, and most other use cases are really unneeded - who needs text summarizing for something as simple as personal texts from friends? Casual use is not really complex enough to warrant assistance.
Author of the article here. I do video work occasionally and I use DaVinci Resolve to do it. DaVinci Resolve uses generative AI as tools to help you. It makes all my subtitles, and if I'm not going into domain-specific terminology that often, it'll be 95% of the way there in about 15 minutes. This is massive, especially when combined with "edit by word" editing.
FWIW: Speech-to-text falls under "AI", but is not considered generative AI. (Note that systems with capabilities that go beyond STT with capabilities such as summaries or translation may incorporate generative AI.)
> The problem is that this risks antagonising everyone in the arts/humanities
I don’t anticipate this being a problem. Have you used generative fill in photoshop or lightroom? It’s a complete game changer. In Egyptian mythology they weigh your soul against a feather when you entered the afterlife, and with professional tools I think moral hangups about AI are going to get about the same weight. It’s just too good not to use.
I have this deep feeling that engineers have a fundamental misunderstanding of the arts, which is reinforced when there is a suggestion that "heavy integration" of generative AI into multimedia production apps is somehow desirable. It's not just contrary to the design and use of these applications, but contrary to art as an endeavor - and users find it revolting.
Apple already has simpler tools aimed at non professionals, they don't need generative AI either.
>It's not just contrary to the design and use of these applications, but contrary to art as an endeavor - and users find it revolting.
As far as speaking purely about art goes, I think there is a wide debate to be had there - a ruler helping a line be straight is help to an artist but not seen as contrary to his work, while pressing a button and getting a full painting is clearly not art creation. But where in the middle lies the spot where automation stops being ok? I think it's a spectrum and we'll see a shift in perception there, gradually.
But that debate completely sidesteps the elephant in the room - most artists nowadays don't make a living making art, just making art-adjacent content, where the artistic value is not really super appreciated by the buyer - photographers creating stock photos, graphic designers making app icons, background music for ads and the like.
Artists hate tools that automate this process because it significantly removes that source of income, but they're not the main target of these products. The target is the clients currently paying them and seeing an opportunity to get a product that, while lacking artistic quality, works for them just as well.
This is another place where I think technologists miss the forest for the trees. You're looking at the outputs and results, searching for a middle ground, but misunderstanding that the problem of generative AI in art is the act of creation itself.
People don't generally take issue with tools that automate or make their jobs easier, even if it may reduce the value of the output. However if the tools limit what they can create themselves and make it difficult to fix or fine tune when something is not how they envision things in their mind before creating it, then they're not good tools. Even worse are the tools that take away their ability to create at all.
Really, I think what technologists don't understand about art is that, in engineering, tools are a means to an end and only the outputs matter. If you can get a program to spit something out and say "look, isn't that good enough?" you have missed the entire point of art.
>However if the tools limit what they can create themselves and make it difficult to fix or fine tune when something is not how they envision things in their mind before creating it, then they're not good tools. Even worse are the tools that take away their ability to create at all.
I might be wrong, but I think you're picturing all-or-nothing use cases here. It's not all just 'draw me a picture'; Think smaller scope and maybe you see that middle ground. Take as an example, for a writer, clicking on a phrase like 'he raised his eyebrows' and being suggested alternative wordings so he can avoid repetition. Is that interfering with his act of creation any differently than checking a thesaurus?
Consider being able to have an interaction with an LLM that you can ask 'is the plot of my thriller so far leaving any plot hole?'. That does not seem so different from a back-and-forth with an editor or an early reader, in terms of affecting creative freedom.
>If you can get a program to spit something out and say "look, isn't that good enough?" you have missed the entire point of art.
Again, I get that, but art is not what tech companies are trying to substitute. If a music generator can give you background music for studying, there is no art creation involved, but neither the owner of the YouTube channel making ad money nor the listeners give a shit.
I'm not defending that position necessarily, mind you, just pointing out that the business interests in 'not art, but just content, that happens to need artist's skills to create' far surpasses the interest in actual art.
As an analogy: Many musicians will scoff at mainstream pop artists and how every song is just the same four chords. But is the business in pop or in avant garde jazz?
Shrug. If I had to go back to desktop Linux, and I could pay to have Preview, Safari, Terminal(! yep, I like it better than my Linux options), Digital Color Meter, Apple's office-alike suite, Notes, and various other first-party Mac apps, on Linux, I'd absolutely click the "buy" button. And I spent 20 years on Windows and Linux before seriously giving Mac a shot, and still regularly use both for various reasons, so it's not that I don't know what else is out there—Apple's first-party apps are my favorites in their categories more often than not (big, glaring exception for Xcode, hahaha). They're mostly really good, stable, and don't eat my battery like it's free.
Can't speak for all regions of Apple Maps, but here in Canada I still get many errors when using it - especially when using bikes, buses and so on. It remains impossible to confidently use compared to Google Maps. When it comes to Apple AI stuff - too much work was put on Apple Vision and this was a tragically bad strategic decision from Executives at Apple. I wouldn't be surprised if it is presented in the future as one of the greatest misses from Tim and his gang.
> too much work was put on Apple Vision and this was a tragically bad strategic decision from Executives at Apple.
I think it is more complicated than that. I think the Apple Vision is a kind of albatross. No one wanted this thing. I happen to think the executives didn't want it either. For all the years and effort put into it (and, well, there was project "Titan" before that) killing it might have hurt worse than their lackluster shipping of it.
Flush with cash (and I can't think of a phrase that really carries the weight of just how flush with cash they are — embarrassingly wealthy?), it was a rounding error for Apple to hire everyone they could in The Valley and keep them busy (and filing patent applications as they worked). It kept them away from the competitors.
And I don't believe you could have instead put the engineering hires to "fixing Maps" or whatever pet peeve you and I have about the current Apple ecosystem. You're 1) likely not hiring the type of engineers for those tasks and 2) just throwing more people on the thing is not necessarily going to be the right answer (The Mythical Man-Month, too many cooks (ha ha) and all that).
On the whole I think Tim has steered the Apple ship to align with the times we have been living in.
I think the only reason the Vision Pro exists is for the OS. I wouldn't be surprised if Apple internally considers it the final OS, since it's the one that exists in the physical world. Their task for the next two decades will be to bring that OS to invisible devices like glasses.
In Metro Vancouver, Los Angeles and the state of Washington, my experience with Apple Maps has been far better than Google Maps; the latter seems to have stagnated completely.
Apple Maps provides me with more accessible info. e.g. "turn right at the next traffic light", "stay in the second lane from the left" vs. "In 200 metres, turn right onto 1st Avenue" (where it's always off by 50m) and nothing about lanes
I can have a whole human-like conversation with ChatGPT via their app on the same iPhone where Siri is still total horse-poo. I have an iPhone 15 Pro running 18.3... Siri is so pathetic.
I chat with GPT (especially in the car) to get things done; it's an assistant and a knowledge base. Siri gives me nerd rage (lol) when I try to use her the same way.
If GPT came out with an AI Phone, Apple would be out of my life. I want an AI Phone where on the lock screen I see a FaceTime-like call with my AI Assistant (I can skin how they look to be whoever). They do everything for me via voice, text, hand gestures, facial expressions, etc. It would be less icon-focused and way more AI-focused as a UX.
I think it's much easier for Apple to sort out their AI and add it to the iPhone than it is for OpenAI to figure out an entire mobile ecosystem where Apple has a ~15-year head start and use their AI in it.
I agree Siri isn't good, but adding good AI into the existing ecosystem is clearly where the market is headed, and I don't think it will be long before Apple gets there.
The "bicycle for the mind" goal, and the Steve Jobs quote that inspired it, is really just another restatement of McLuhan's idea of media as being extensions of man. The bicycle (or wheel, more generally) is an extension of the legs, a phone is an extension of your voice, etc.
The problem with interpreting AI through that lens is that AI, as it is being used here, is not an extension of your mind. Plenty of other things are (organizers for example), but AI does not extend your thoughts. It replaces them. Its notification summary feature does not improve your ability to quickly digest lots of notification information, it replaces it with its own attempt, which, not being your own judgment, can and does easily err.
There are some uses of AI that do act more like a McLuhanesque medium. Some copilot applications, in which suggestions are presented that a user accepts and refines, are examples of this. But a lot of the uses of both image generation and LLM tools serve to limit what your mind does rather than expand it.
This post makes the point that the foundations of Apple Intelligence are really well designed. I think anytime you make the right underlying technology choices, there is always hope for the product.
It's also worth noting that Apple traditionally is not a first mover and looks for "inspiration" from smaller competitors. In this case, there is no comp to reference. There is no startup mobile OS innovating in integrated AI. That, and the supposedly rushed timetable, probably explains a lot.
I wish the same. That being said, given how useless Apple Intelligence is, how it isn't deployed in the EU, and how it's gatekept by newer hardware, it's still very easy to ignore it. It's even easier on the Mac, where new versions don't bring anything worth upgrading to for a non-Apple-only developer (still running Sonoma).
But the point is that the foundation gives it hope to be a class-leading product in the future.
The lengths Apple went to build a secure and private system will make it stand out and help it hold up to regulatory scrutiny. Doing this now is better than doing it later.
In, let's say, 3 years, when the features are more legitimately useful, the gatekeeping on new hardware will be a non-issue. In 3 years the majority of Apple's users will have an iPhone 15 Pro/iPhone 16 or newer. They are probably already mostly on M1 or newer Macs.
On the other hand, I totally agree that it's pretty useless as it stands right now. I also think that if their launch strategy is to have an amazing WWDC and then deliver 5-10% of the features in September, that's going to turn Apple into just another company that promises the moon and delivers gimmicks.
On the contrary, in three years we will be used to AI that requires much newer tech than is available today. The current tech will be obsolete.
We have seen that play out with all the previous AI chips (mostly Google).
The software and its requirements are moving faster than devices can be shipped and accumulate significant marketshare.
This is not to say that Google and Apple haven't or won't be able to ship some minor models such as voice recognition or translation, but for frontier-level AI the local chips just won't suffice.
"Clean Up is best explained by this famous photo editing example . . . This tool allows you to capture a moment in time as you wish it happened, not as it actually happened."
FALSE. Apple defines a photo as a record of something that actually happened. iPhones take photos. They don't auto-swap a high-res moon in for the real one like Samsung phones do.
Clean Up (like crop) is just an editing feature, manually applied after a photo already exists, and using it effectively changes the image from a photo into an "edited image", the same way using Photoshop does.
Definitions of What a Photo Is:
Apple - "Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened.
Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, It’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated."
- John McCormack, VP of Camera Software Engineering @ Apple
Samsung - "Actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene — is it real? Or is it all filters? There is no real picture, full stop."
- Patrick Chomet, Executive VP of Customer Experience @ Samsung
Google - "It’s about what you’re remembering,” he says. “When you define a memory as that there is a fallibility to it: You could have a true and perfect representation of a moment that felt completely fake and completely wrong. What some of these edits do is help you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond."
- Isaac Reynolds, Product Manager for Pixel Cameras @ Google
I hate that we have spent the last 20 or so years advancing digital cameras to the point where everyone has an amazing DSLR in their pocket, and now we're at the point in history where we have to define what a photograph is, because everyone is trying to shoehorn some shitty AI image-gen thing into our cameras for a quick profit.
We're at that point in history BECAUSE everyone has a DSLR in their pocket.
Image quality on modern phones is in large part due to a lot of image processing done by the phone. Multiple photos being taken, combined for least blur and best dynamic range, colour balanced to best represent skin tones etc.
The line between the sort of algorithms that run on an iPhone and inserting a moon is largely philosophical rather than technical. It's an extremely important philosophical line! But the sort of things that have been added are the logical continuation of the sort of work that the camera teams have been doing.
[For context, I'm a Google Pixel owner, but of those three statements the Apple one is the one I agree the most with]
Frustrating to me, too, as someone who has recently gotten back into photography; it's difficult to know whether the photos I am using as inspiration are actually real or so highly edited that I'd never be able to achieve something similar.
It's one thing to use masks to edit highlights/shadows/color balance for certain areas (skies, buildings, people, etc) but it's an entirely different thing to completely replace the sky, or remove objects because they aren't "appealing"
> It's one thing to use masks to edit highlights/shadows/color balance for certain areas (skies, buildings, people, etc) but it's an entirely different thing to completely replace the sky, or remove objects because they aren't "appealing"
Almost as long as we've had photos, we've been removing "unappealing" things from them. Famously Stalin had Nikolai Yezhov removed from a photo after he was "purged", but the Soviet Union in general is full of these instances.
More lightheartedly, Disney supposedly (though this seems to be subject to some debate) has airbrushed a number of photographs of Walt Disney to remove cigarettes from them. And perhaps most famously of all, Han only shot first if you were born before 1997.
I’d agree with your assessment of what Apple considers a photograph if I could turn off the post-processing that turns everything in the background into a smeary, blobby mess.
That’s Portrait Mode. You only get that if you change from Photo mode to Portrait mode in the Camera app, or in later OSes by retroactively applying a Portrait effect in Edit mode. The addition of the latter feature also made it possible to retroactively remove the Portrait Mode effect from a photo, as long as you have the actual source asset and not a rendered JPEG/HEIC with the effect baked in.
If you mean the fake bokeh that blurs the background, turn off portrait mode.
You can toggle ProRAW in the default camera app. It captures a lot more information from the sensor. The files are a lot larger; it isn't throwing away information that isn't visible, like detail in the shadows. This gives you more flexibility, like changing color balance and exposure after the fact, because of that extra data. But there is still some sharpening and post-processing.
You can use the camera app inside Lightroom, or ProCamera, or other apps and take raw photos, where it records all of the sensor data without any post-processing. Most people don't want this; you need to develop the images using software like Lightroom to make them look good.
This comment and the article they came from are a perfect snapshot of this moment. The fact that major players at each company have made public statements about the philosophical definition of what a photo is. I mean, of course they have. Of course. The times be wild.
They failed IMO as soon as they started marketing this stuff as AI. Nobody _actually_ cares about that. They just want new and exciting device capabilities. The fact that they haven’t done anything compelling enough to detach it from the AI moniker means they don’t have anything. Yet. The overall foundation is sound. Maybe in a generation or two we will have a truly compelling use case. Part of me doubts it, though, because Apple isn’t the company they used to be when it comes to innovating.
Interesting note: they don't market it as AI. They market it as Apple Intelligence. I think this is deliberate. So it seems they've already taken your advice (albeit with the 'wink' of using the same initials).
> Many companies want to make computers that you can use to do computer things. Apple makes tools that you use as an extension of your body in order to do creative things. They don't just sell computers, they sell something that helps enable you to create things that just so happen to be computers.
An Apple marketing executive is smiling somewhere. Brainwashed another one!
Right, this is such cringe. Almost like Apple's "What's a computer?" kid come to life. Apple sells overpriced tech to people that don't know any better.
Sometimes I wish I could check an alternative universe where Apple refused to acknowledge AI slop and just doubled down on privacy and protections of the user. It almost feels like they could have just ignored the trend and let everyone else burn money until the hype dies down.
To be honest, Apple's approach to AI seems pretty close to this recommendation. I don't see much more than "here are a few nice features added to some apps" -- vs. MS's "now all our machines must be AI capable so that we can scan 1000s of screenshots of your device".
As someone who has gone back and forth between Windows and Mac OS... if Apple would stop trying to force me to log in to use features I don't need, they would have a more compelling case. Windows is indeed a horrific data mining operation, but Apple's own endless push to enmesh me in their ecosystem is about as irritating. And Windows has WSL, which has value to me at work.
(And outside of work, Apple punted on games, which means I will always buy a Windows or Linux computer.)
You're comparing eating unflavored oatmeal with eating bleach. One of these things is clearly in a league far and away worse than the other, and you're doing nobody any favors pretending otherwise.
No, I just don't agree. At work, that's a decision the corporate overlords make. At home, I will continue to prefer games + WSL over Apple's offering... despite having started in computing, back in the 1980s, on a Mac, and having owned a number of Macs over the years.
Apple has consistently made choices that make having a Mac less palatable to me -- killing all 32-bit binaries destroyed most of what was available on Steam for the Mac. Hassling me to log on is another unforced error. Eventually I dumped my MacBook Air and bought a Windows laptop.
Apple hardware is really good. I probably spend more on Windows laptops because I replace them more often. But it's a better experience. (And I can get a more adequate amount of memory.)
Sorry, but Macs require an online account only if you want to use optional online services offered by Apple, just as is the case with Microsoft and something like OneDrive or Office 365.
You aren't required to use the optional services on either platform.
The difference is that Microsoft wants to force you to use an online Microsoft account to log into your own local computer. Macs do not require that.
You cannot install any apps on an iOS device without registering and logging in with an AppleID, which requires both an email address and a telephone number.
> The difference is that Microsoft wants to force you to use an online Microsoft account to log into your own local computer. Macs do not require that.
I have no OS or tech-giant loyalty, but I think the pushiness is a wash between MacOS and Windows. I still haven't found a way to stop my Mac from nagging me to log into iCloud; I am currently dealing with a bug where the iCloud login modal shows up immediately after I dismiss it, in an unceasing cycle, and I have to launch the App Store and then quit it before the nagging stops.
I recently set up a Windows 11 laptop over the holidays and found out (from Google) about an esoteric "oobe*.exe" command I had to run on the CLI on first boot (pre-setup) that showed the option to create a local account in the UI. I didn't get any nags post-setup.
At any rate, Microsoft has made multiple changes over the years designed to hide the option of using a local user account in Windows, which is a bridge too far.
No, I'm not required to log on, and I didn't, but they absolutely do prompt for it and otherwise make ignoring their services less convenient. I put up with it for years, but I hated it.
I agree with you here. My solution was to use Linux on my desktop PC, and it's been great for work and gaming, and I have an M1 Mac with Asahi Linux on it for on the go. I love it and I'm really happy with it. I've used Windows most of my life and it has really worn me down, and I tried MacOS for around a year, and it is equally as painful but in different ways. Linux has some rough edges in certain areas, but I like that I can easily sync all my devices, emulation files (it pairs well with the Steam Deck), and configurations... and I don't have to put up with being a data cow or being constantly badgered to engage in vendor lock-in.
On Windows 10 there used to be a button to create a local account, but they removed it and made it so you had to disconnect from WiFi to get to that screen. Then they changed it to make it so that you have to disconnect from WiFi AND run some arcane terminal command. At this point your average user is just going to use an online Microsoft account. What an atrocious user experience.
Longtime Mac and Windows user who feels similarly.
I recently tried Linux as my primary desktop after 10+ years. It’s amazing. It just works and there is no slop, no advertising, no ecosystem to log into, no notifications. So nice. I can actually get work done! And games often just work.
I'm sorry to say, but logging in with a centralized account to use a device is a given in 2025. The benefits are far too good to disregard. Apple is no longer in the business of selling you devices, they're in the business of selling you add-ons to your Apple Account.
And people always say that we risk our accounts being locked out, but when was the last time Apple was heard closing Apple Accounts?
No, this is garbage. I have plenty of real accounts I need to log into already, I don't need another one that is there just so the vendor can track me and try to enmesh me in their ecology.
That's your point of view. I see access to so many services because I have a phone, laptop, watch and headphones. Apple doesn't use accounts only to track you, it uses accounts to do things like seamlessly switch your AirPods between devices.
You couldn't just pair the device to all of them and then seamlessly switch? The only thing the account is adding is removing that initial step on first pairing, and even that could be done without an account if the new device C was paired to host A and came into proximity to host B that had been connected at some point with host A.
> I'm sorry to say, but logging in with a centralized account to use a device is a given in 2025.
It's really not. Of all the devices I own, the only one that really wants a centralized account is a Chromebook on its way out. Even Android is willing to work without a Google account.
I mean, it's a given that it will be pushed, but I really don't see the benefit of logging into Windows with a Microsoft account for most people. I do it because it enables parental controls, but if I didn't want those, I don't see the point. The regular people in my life do it because Microsoft pushes it hard, and they don't care. The Windows app store works just as well (which is to say, not very well) whether you log in to it in the app or through your Windows account.
It's basically required on a chrome OS device, although my MIL was using the guest account for months when she changed her password instead of referencing her password book and then later couldn't log in from her book. Chrome OS isn't awful with no persistent storage.
Blessedly, I haven't had to use a Mac in many years, but a local account didn't seem to impact anything of note --- you could log in to the App Store in the event you needed something from there, but there wasn't much that needed it other than Xcode; maybe that's changed.
The iPhone with no App Store is fairly useless, so yeah, you've got to log in to that. An Android with no Google Play is a little bit less useless, depending on what apps you want to run; some of them distribute APKs directly.
An Android without Google account is fine. I use Aurora store to download apps. Most of the features of Google Play Services (notification, cell location) still work when you're not logged in. I always use my android devices like this.
I dunno, my family likes it. They largely standardized on storing all their stuff in their OneDrive. When they get a new device, they just log in with the same username/password as their other computer and a lot of their settings are already configured. All their stuff is just there in their OneDrive.
For shared computers it's really nice. I log in with my account, my wife logs in with hers, regardless of whatever computer we have handy. If I'm lounging on the couch I might grab her Surface; if she wants to sit down and work on a bigger project she can hop over to the bigger gaming PC; if we're on a trip and want to sync photos we just grab the laptop. It's our same accounts, same usernames and passwords, same customization settings we like, regardless of whatever computer we use. The NAS at home has its permissions tied to our Microsoft accounts, so our accounts log in seamlessly regardless of what computer we're on.
Meanwhile all our devices are encrypted and have our backup keys to decrypt synced there.
I probably would never want to have any personal Windows machine use a local account going forward.
Several large companies could benefit from ignoring GenAI. Unfortunately, "benefit" would only mean "save money and produce better products for customers" instead of "make stock price go up".
Instead, all of these companies are effectively forced to play hype ball.
The keynote where they announced these features was pretty awkward. The typical conviction they delivered things with just wasn't there. You could almost feel that their hand was forced and they gave it a go, but deeply resented not being the ones to make that choice.
I don't doubt the pressure they are feeling at the upper levels at Apple is real — but I agree, this initial rollout is an also-ran and they underdelivered.
I disagree that Apple should have sat on their hands. LLM apps are already shipping and beginning to integrate with the OS (looking at you, ChatGPT icon in my MacOS menu bar). How much user privacy should Apple allow me (well, their customers generally) to cede before they step in?
I'd greatly prefer users installing these functions and apps themselves. Instead Apple has me double-checking settings on every update so I can toggle off stuff that should have been opt-in. At least I can continue to refuse to opt into the MacOS AI implementation for now.
Now they've gone and doubled[0] the recommended space AI needs on base model phones shipping with under-spec'd storage in the first place. If it goes up any more you'll be looking at giving up nearly 10% of a base model 128GB phone's storage to AI.
There needs to be a setting in the OS with a slider that lets you pick a privacy level from 1-10, one that all settings in all applications and all future updates to that device are bound by. Each level would have a well defined meaning, such as "1 - no interaction with internet-based services at all, opt out of everything", "2 - only online security services used, opt out of everything else", all the way up to "10 - this device is a loaner but I want all of my content available to me on any device anywhere and I want all of my content screened for 'safety', opt in to everything."
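To make "bound by" concrete, here's a minimal sketch of the idea; every name in it is made up and no OS exposes anything like this today. The point is a single level that all capability checks have to consult, so neither an app nor a later update can quietly opt you into more than the slider allows:

    // Hypothetical sketch only: PrivacyLevel, NetworkCapability, and allows()
    // are invented names, not any real Apple API.
    enum NetworkCapability {
        case securityUpdates   // certificate revocation, security patch checks
        case cloudSync         // settings/content sync across devices
        case telemetry         // usage analytics and diagnostics
        case contentScanning   // server-side "safety" scanning of user content
    }

    struct PrivacyLevel {
        let value: Int         // 1 = fully offline ... 10 = opted in to everything

        // A capability is only allowed at or above a fixed threshold, so an app
        // (or a future OS update) can never enable more than the slider permits.
        func allows(_ capability: NetworkCapability) -> Bool {
            switch capability {
            case .securityUpdates: return value >= 2
            case .cloudSync:       return value >= 5
            case .telemetry:       return value >= 7
            case .contentScanning: return value >= 10
            }
        }
    }

    // Example: a level-2 device gets security services and nothing else.
    let level = PrivacyLevel(value: 2)
    print(level.allows(.securityUpdates)) // true
    print(level.allows(.cloudSync))       // false

The thresholds are the whole trick: an update could add new capabilities below your chosen level, but it could never flip something on above it without you moving the slider yourself.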
The particularly strange thing about this is that Apple has been shipping neural network accelerators since the iPhone X. The one actual selling point of Microsoft's "Copilot PCs" is copying Apple and putting an NPU into Windows laptops so they can have all the AI features Apple had already shipped way before they coined the term "Apple Intelligence".
The LLM hype is so powerful that it got people to think the market leader in consumer-facing AI features was falling behind because you couldn't ask your iPhone to summarize notifications poorly.
It's that sentiment all over again. Specifically, a naive belief that the specs/tech of something are just as important as the execution. Maybe they are if you are very online about tech, but for most people, tech/specs don't matter as much.
Ultimately, this is another distortion of reality by the comment section.
It's what people always say: when the 5S came out, pundits were saying that was the death of the iPhone because the Samsung Galaxy was much bigger. Next year, the 6 was bigger and somehow Apple survived a year with a smaller device.
Apple is so derivative and driven by marketing types now that it even got into the VR shit, after it had already come to nothing for Facebook and the others peddling it for years. Given that, there was zero chance that they'd skip AI slop.
I'm not sure, they've been revolving around "AI" stuff for ages; Siri, photo manipulation, identifying people and things in photos (but on your own device), all of which are widely popular. This feels like a logical next step for the path they were on for ages.
It would have been okay if they had continued to release features that just happened to use ML under the hood. But instead they are expressly marketing “Apple Intelligence”, with most of the features released under that umbrella so far not really working well (notification summaries, mail categories, Genmoji, …).
I do wish they had continued on this track instead of Apple Intelligence. I should be able to click "generate audiobook" in iBooks and have an audiobook made in any voice of my choosing. Open source solutions are far too slow on macOS, and subpar in other languages.
There are. But until the current crop of AI can actually do something useful, it isn't one. Right now it's hype driven development in search of a compelling use case.
In my own field (biomedical research), AI has already been a revolution. Everyone - and I mean everyone - is using AlphaFold, for example. It is a game changer, a true revolution.
And every day I use AI for mundane things, like summarization, transcription, and language translation. All supremely useful. And there is a ton more. So I never understand the "hype" thing. It deserves to be hyped imo, as it has already become essential.
I think there is a group of people who don't want AI to be useful and think that by telling other people it's not useful, those other people will believe them. Unfortunately for them, more and more people are finding value in AI. It literally saved my father-in-law's life. It's become my kid's best school tutor. But there will still be people telling me that it has no value.
It boggles my mind this is still being written over and over here on HN. Like an echo chamber, everyone fears AI will replace them or some BS like that. I cannot even begin to tell you how much AI has been useful to me, to my entire team, to my wife, to my daughter, and to most of my friends in various industries whom I personally guided towards using it. On my end, roughly 50% of the things I used to have to do are now fully automated (some agents, some using my help along the way)…
In every thread here on HN there will be X number of people posting exactly what you wrote and Y (where Y is much smaller than X) number of people posting “look, this shit is fucking amazing, I do amazing shit with it.” If I were in group X I would stop and think long and hard about what I need to do in order to get myself into group Y…
It’s already ridiculously useful. It does at least 80% of the work in the teams that report to me and writes/refines almost all the documentation I produce. That doesn’t even cover all the hobbies I use it for: ideas for new furniture to build, pattern generation for wood carving, ideas for my oil paintings, etc.
Honestly, I think Apple played their cards perfectly. They didn’t try to be first to market with R&D, but they’ve launched just enough features to excite customers about new phones, while appeasing investors and subduing potential competitors like OpenAI who are rumored to be working on hardware devices, too.
> It almost feels like they could have just ignored the trend
I wish they had but on the other hand, can you imagine how many think pieces there'd be about how Apple was stagnating and how they were "such, like, a 20th Century Company(tm)" and you'd probably get activist investors bleating about how Apple were leaving money on the table and and and etc.
The “play the podcast my wife sent me the other day” example is interesting to me. That shouldn’t be difficult to do without AI. Yeah asking a thing is always gonna be quicker (provided it works), but a well designed app should make that possible within like ten seconds.
I can’t help but wonder if the reason “agentic” systems seem so appealing to people is because as an industry we’ve spent the past fifteen years making software harder to use.
I had a similar example the other day. I was visiting Arizona for the first time and was driving in a rental car from Phoenix south to Tucson.
I have the latest Google Pixel, and was using Google Maps to navigate.
I pressed the "voice search" button from within Google Maps and said "What is the name of the mountain on the left that I'm about to drive past?"
Instead of a context-aware answer, my phone simply did a Google search for that exact phrase and showed the results to me. The top hit was a Reddit thread about some mountain near Seattle. :)
I'm wondering if you actually thought that there was a chance that Google Maps was going to answer you correctly?
I guess we're entering an age where people might have a reasonable expectation that any app is a context-aware LLM, but personally I don't have that assumption yet.
I’m not sure. If the word “podcast” wasn’t used then it might be tricky, but an LLM might figure it out from context.
It also depends on the context of the action. If you’re sitting in front of your computer, yeah, no big deal. Type “podcast” in the search field of the messages app and click the thing it finds. But if you’re busy cooking dinner or cleaning out the cat box, it’s a pain to get your phone out and poke around. The main draw of voice assistants (at least to me) is that they let you do things quickly when you don’t already have a device in your hands.
The problem is that this only works if your spouse used an Apple application to tell you this information. If she used Messenger/IG chat/WhatsApp or Gmail or her work email, there’s no way for Siri to know about it.
> The core of why ChatGPT works as a product isn't the AI. It's the experience of each word being typed one at a time by the AI and saving your conversations with the AI for later.
Really loved this article overall, but I have to super-disagree here. The core of ChatGPT is you can have a conversation with a computer program.
Take away saving history and you can still have a conversation with a computer program (see ephemeral chats).
Take away typing one word at a time and you still have a conversation with a computer program (see non-streaming API / batch API).
But major props for writing this live on a twitch stream, benefit of the doubt there my friend.
Article author here. Thanks! I've found writing on stream to be really hard, but it's getting easier with practice. One of the more frustrating parts is trying to get the pure thought nuance out onto the page in a way that reflects the nuance as it is in my head. I think I'm getting closer to it, but who knows.
These models are stochastic though, so saving conversations that have built up elaborate context (so they're actually useful in the domain you need them for) is a fundamental feature. The alternative is keeping a document to copy-paste in as a "startup prompt" of things the model needs to know contextually, which is kind of silly.
I agree with the premise that it is not the text stream or the conversation history that makes ChatGPT useful - it is multifaceted. It's an interface, and also an intelligence at your disposal. Even if you took some abilities away from ChatGPT, such as spitting out real-world facts, it could still be used for summarizing text.
While I agree with the post's sentiment, its assessment of Apple Intelligence overlooks its incomplete rollout, with most of the meaningful, step-change features (e.g., Siri’s contextual awareness) scheduled for release in 2025 at the earliest.
In my view, Private Cloud Compute and Apple Intelligence, together with the ubiquity of Apple devices, position Apple as a leading candidate to realize the widespread, AI-enabled transformation of personal computing discussed in the post — with tasks requiring less energy and cognitive load than they do today for the general consumer.
Execution matters. Microsoft was best-positioned to reap the widespread, mobile-enabled transformation of personal computing. They didn't, and today with the discontinuation of the Surface Duo, they have literally zero strategy in mobile.
Worse, because of that bad decision they don't have any endpoints to sell AI and XBox services.
When Windows Phone was killed it had about 10% market share in Europe; it was slowly becoming the device for people who didn't like Android and lacked the funds for an iPhone.
Turns out 10% is better than 0%, but they decided otherwise, and selling Microsoft Android was always hard to swallow.
I agree. The idea of being short Apple's ability to create an AI product seems to miss the one thing they are excellent at: the simple things. What they understand is you don't want AI to be impressive, you just want the things you already do today to work really well and be satisfying to use, and not know or care if there is AI behind it.
Steve Jobs understood you have to do really hard things very well to do anything simple beautifully. I'm pretty sure most of their AI play is still in front of them.
> Word processors like MacWrite absolutely transformed the ways that everyone used computers.
MacWrite was released 5 years after WordPerfect, which itself is predated by WordStar. I don't get why Apple fans have this obsession with pretending Apple invents these things.
Apple refines what others have attempted before, that's what they're good at. Part of the reason people are disappointed with Apple these days is because of this fantasy image of Apple as an inventor.
Author does say word processors "like" MacWrite, so falls short of saying it was the first at anything (and it wasn't), but WordPerfect and WordStar are interesting choices for comparison. Of the three, only MacWrite is a GUI-based WYSIWYG word processor that would be immediately familiar to modern audiences. (WordPerfect wouldn't get GUI until 1991.)
It's interesting to note that when WordPerfect got a GUI, it had to be worked up separately for each platform --- the NeXTstep version was quite nice, took full advantage of Display PostScript, and was coded up by a couple of programmers in six weeks or so.
That’s how cross-platform apps were. Middleware libraries that you could use to put the same GUI code on different OSes didn’t exist, and there wasn’t room for them anyway. Even sharing the core could be a tough proposition. I know that at least MS Word for Mac vs Windows was two completely separate programs that happened to share a name and a feature set. That continued until Mac MS Word 6.0 in 1994. That was a port of the Windows version, and very much disliked among the Mac userbase for its poor performance and for not really behaving like a Mac app should.
I don’t think cross-platform UI code really took off on the Mac until maybe 20 years later. Plenty of Mac apps were built that way before that point, but they tended to be ones in the “you’re stuck using this, so it doesn’t have to be very nice” category, like Word ended up being. It doesn’t really seem to have tipped until Electron came along, and somehow web apps that use half a gigabyte of RAM to show some text became totally accepted as good enough.
Incidentally, the Cocoa UI framework that macOS uses was originally a cross-platform framework that could deploy to Windows and various other OSes, back when it was made by NeXT. Apple killed that off and it became Mac-exclusive. I wonder what the world would look like if they had kept support for other OSes. Maybe we’d have a good selection of cross-platform apps that actually look nice and perform well.
- Pages.app by Pages --- amazing DTP tool which was bought by Anderson Financial Services, but then killed off when Rhapsody went away
- Macromedia Freehand --- unfortunately, they continued with their in-house toolkit to make the Mac/Windows versions, since it would have been too much work to revive the old Altsys Virtuoso code
- Quantrix Financial Modeller --- at least this still survived, but be sure to take a seat before looking up the cost per seat
- FrameMaker --- the NeXT version was the nicest one I ever used, and with Display PostScript, was far nicer to work with
- WordPerfect --- the NeXT version was far nicer than the Windows, would have been nice to see that come back
- Stone Create (and the other Stone apps) --- nice assortment of various tools which would have been quite nice to have
Lots of other way cool NeXT apps which should have done better in the market.
This is the real damage of Microsoft's monopoly power. They have a severe case of Not Invented Here syndrome and, even for free Open Source Software, refuse to ship any of it natively unless they can fork it and pretend they wrote it (early TCP code was based off BSD IIRC).
It's horrendously tough to target cross-platform when you have to bring the entire GUI toolkit along and adapt it individually to every target, even the 95%-of-the-market gorilla that is whichever versions of desktop Windows are the most recent three.
WordPerfect and WordStar were always pretty yuck IMO once newer generation products came along. I was pretty much a fan of Microsoft word even in the DOS days. (Even Multimate which was basically a DOS clone of a Wang product.)
I loved WP5.1 for DOS because of one feature: "show codes" - it made it trivial to understand why the formatting looked the way it did, and to fix formatting problems. Other than that it was not outstanding software :-)
I never used Word in the DOS days so I can't compare, but it was obvious that Word for Windows was written natively for Windows and it "felt" much more natural in the Windows of that time.
The NeXTstep version was _very_ nice --- looked and felt like a native app, but still had "Reveal Codes" --- nicest version of WordPerfect I ever used.
It was implied by the phrase 'transformed the ways everyone used computers'. True, to younger computer users MacWrite would be the most familiar of the three. However, in terms of total unit sales and percentage of users for their day, MacWrite was practically rounding error in the word processor market. It was WordStar and then WordPerfect that dominated (and therefore 'transformed...') until the early/mid-90's when MS Word took over.
But the example functionality is backspacing, which of course in no way requires a GUI. You could just as easily cite AtariWriter or Bank Street Writer, which came out before WordPerfect.
And before Macwrite, or Macintosh computers existed, there was Xerox PARC and their GUI-based WYSIWYG editor called "Bravo", which Steve Jobs no doubt would have seen when he visited PARC.
I noticed Alan Kay himself in the comments, indicating that about the only accurate thing in the scene is that the guy playing Steve Jobs (Noah Wyle) looks/sounds similar to the real Jobs. Everything else was pure Hollywood fluff.
I mean, look, WordStar was a huge step up from a typewriter. It (and programs like it) made it possible for people like me to write.
WordStar let you get the words right. WordPerfect let you do "you asked for it, you got it" layout, which was a step up. But MacWrite and programs like it let you do WYSIWYG layout, which was huge. It was like going from typewriter to WordStar, but for layout and appearance. (The words are still more important, but the presentation also matters.)
I always wonder how many people actually used typewriters before talking about them in comparison to word processors.
By the late 1970s or early 1980s, typewriters had electronic memory. They had error correction "tape" so you could erase mistakes. You could set them to center or right justify text. You could create tables of justified text.
Early word processing software was amazing because it gave you more memory to work with, and didn't force you to keep so much of what you previously wrote in your head. But it was only a large step forward compared to typewriters of the time.
I wrote papers in graduate school using a typewriter before switching to a Mac Plus. It was a Smith-Corona and used a cartridge "ribbon" and a special "correction ribbon" that lifted off the text (instead of using white-out).
It wasn't a Selectric, though, so mostly you just used it as a typewriter. Selectrics were expensive and mostly used in business rather than by individuals.
Even the IBM Selectrics were nothing like as good as having a word processor, which is why the transition was so fast. A friend of mine was in commercial real-estate in that period (mid-1980s) and discovered that with a computer, he could type his own offer letters and not have to bother with a secretary. My brother's law firm went through something similar. In a handful of years, there was huge adoption.
A student organization I was an officer in had a Selectric with a correcting ribbon and I ended up using it for a lot of papers latterly. The newspaper had Royal typewriters and I used those. We did a bunch of literal copy/pasting and then stuff was typed into a huge typesetting system.
Grad school (starting 1979), there was a mainframe with DecWriter terminals and that was a big improvement but still nothing like terminal GUIs.
Selectrics never had any word processor-like functionality aside from the ability to erase. They were very expensive for being just amazing pieces of mechanical engineering but the daisy wheel products worked well enough and could be infused with a little computerized help because they were already electronic.
> By the late 1970s or early 1980s, typewriters had electronic memory. They had error correction "tape" so you could erase mistakes.
Sure, there were electronic typewriters where you could type an entire line or two into memory before printing it. And, as you say, typewriters with whiteout reels that allowed you to backspace.
But if you've started a new paragraph, and you realise you want to go back and edit the previous one, and everything else needs to move down a line to accommodate? Good chance you're throwing the page away and starting from scratch.
Or you're cutting and pasting in the literal sense, using scissors and glue.
Typewriters were difficult enough that "typist" was a professional job - and many workers wouldn't do their own typing, instead recording messages onto tiny tape cassettes for a typist to type up later on. There were even foot-pedal-controlled cassette players, so typists could type with their hands and control the dictaphone tape with their feet!
Even Word Perfect was much, much better at this than a typewriter. What made it hard to use was that you had to memorize a bunch of special function keybindings, as I recall. Yes, GUI word processors were much better than this, but even Word Perfect looked compelling compared to using a typewriter.
I wrote a bunch of papers using a typewriter. Standard operating procedure was to plan out the complete paper, basically as a tree of bullet points (I used index cards), and then turn those into sentences at the typewriter. All the writing/reorganization had to happen before you sat down to type.
Early word-processing software changed very little of that - the problem was cultural, not technological. Big offices had a typing pool where professional typists would type up memos and documents. It took a while for that culture to change where people realized they could, and should, do their own typing. Small business led the way because they couldn't afford a dedicated typing pool.
It's also that technology was much less ubiquitous back then and less evenly distributed, at least in my experience.
I went from a mechanical typewriter, which had the whiteout tape as its only "advanced" feature, directly to a WYSIWYG editor. It was absolutely night and day. I only saw an "advanced" electronic typewriter later in life at my grandparents' house, and it was used rarely and only for my grandmother's accounting business, as it was so expensive when they first bought it.
As far as I know my experience was pretty much normal for my peer group - as my school had a similar setup. As a kid in school, it was amazing going from having to totally re-type a rough draft to being able to make some casual edits and hit print. Hours saved for each paper, especially with the typing skills I had back then!
Most advanced 70s-80s typewriters were just not regular household material, even among white collar workers. My parents (and mini me) worked on very mediocre typewriters, until my dad got a PC from work with WordPerfect. What's interesting from an economic perspective is that the 'top' typewriters were a lot more affordable than early PCs. People just didn't buy them. What they had was good enough, regardless of features. My dad ended up so enthusiastic about PCs that he later spent more than a monthly wage on a PC for the family (actually: me). Every two years. Incredible, especially compared to the affordability of digital devices nowadays.
Word processing software was to writing as smartphones were to photography, or as the printing press was to writing. Perhaps not transformational on a per-feature basis, but transformational in terms of the possibilities and market it unlocked.
I learned to type on an electric typewriter :) It wasn't a fancy word processing one with memory, but just one where the hammers struck the paper using a motor, making it easier to type. It also made a very satisfying 'thunk' which I would randomly trigger while the teacher was talking, which in turn caused me to get thrown out of class a number of times.
> By the late 1970s or early 1980s, typewriters had electronic memory.
Were such fancy machines actually common? I never saw one. For me, it was all "CHUNK CHUNK CHUNK CHUNK oops! damn", until it was a whole new world with MacWrite and the ImageWriter.
Such typewriters were used by business, not students or individuals at home. Those machines cost several thousand dollars in today's dollars, just to put a perspective on things. Whereas a "normal", manual typewriter cost several hundred dollars in today's dollars. Most of us therefore had a manual typewriter. Touch typing was out of the question! Let alone any fancy-schmancy word processing!
Boy, oh boy were the early gen word processors a godsend!
I learned typing on an IBM Selectric, my parents made me take a typing course before they would buy a computer. White-out or correction tape was how we fixed typos (or just didn't make them). If you were smart you wrote your words out longhand before typing them; you didn't "think" while typing.
The early computerized typewriters kinda sucked - you could do line level edits but the print quality was dot matrix or worse.
I did. I used my mom's typewriter, which she typed her thesis on in the 1950s, and switched to word processing on a college owned TRS-80 before getting an MS-DOS machine of my own.
The big breakthrough was editing. WYSIWYG didn't solve a problem in that space. College papers didn't need different fonts, and it was sufficient to let the computer take care of formatting, like using Markdown.
I'm not dismissing the Apple, but it was priced out of my reach in 1984.
In Southern Europe, Apple hardware has always been the most expensive among systems for home users.
Hence I had to wait until university to actually see them outside computer magazines, and even then only one room in the computer labs had them, versus the whole campus filled with PCs and UNIX terminals.
Hi, author of the article here. I do most of my drafting of these longer articles on a Freewrite Alpha, which is effectively an electronic typewriter. When I use it I have one rule: the backspace key is banned. This makes me restate my thoughts if I make a typo or just bulldoze through it. I find that this makes me make better drafts in the process.
You might be blessed with a brain for written communication.
If I did not allow myself revision and improvement to my written text, everything I write would read like a spoken monologue, and be multiple times longer than necessary to convey my message.
On the other hand, Xena might also have a brain broken for written communication, and this is the best way to deal with it!
I have only recently learned that I have ADHD, and have been trying to iron out all the implications of that (well, that and autism, which I also only recently learned about) -- I cannot help but wonder if a workflow like this would help me in my writing....
Lemme tell you, it functions like a gift but feels like a curse in terms of how it affects my daily life. ADHD medicine doesn't help consistently, it sucks lol
There is a reason I do _extensive_ editing after the fact, here's the draft for Soylent Green is people [1]: https://gist.github.com/Xe/3fe0236412c1ce16389bfcd6c6562d7a The differences I added in editing are _vast_ and really transform the work from a ranty mess that approaches readability to something that's worthy of publishing.
I have another article about AI coming out about how we could use generative AI to make art that we've never seen before, but it's mostly used for AI slop. I'm in the middle of ranting it out into the typewriter. It's bad currently, but I make it bad first so I can remake it better later. Here's an excerpt of the intro that I'm going to be rewriting. This is the "raw clay" that I mold in editing.
> I like creating things. There's a lot of joy in being able to sit there, think about a thing, and then make that thing come into existence. This is something I really enjoy doing and I'm blessed to be able to do that as my job in DevRel.
> One of the core conflicts that i end up having with the stuff I create is that I am a bit more artistically minded than people would expect out of the gate. I mean, I get it. Tech isn't really known for *art*, it's a lot more known for being the barrier between you and artists you want to follow.
> Howeever I've ended up seeing kind of a disturbing pattern with AI tools that are meant or at least intended to aid people in the creation of art: they're almost always used to create infinite slop machines without a lick of art in the process. Today I'm going to talk about this fundamental conflict between two categories: art and content. Art is that which conveys, inspires and tells stories. Content is what goes between the ads so that media moguls can see their profit lines go up. I want to argue that a lot of what AI tools are actually being used for is content that is gussied up as if it is art.
If you want to see what the entire writing process looks like after it festers for a while in my head, I wrote out the holy grail article on Twitch: https://youtu.be/N_KNpVujAL8
MacWrite was a WYSIWYG word processor. You could change the fonts or other formatting and see the results updated on the screen.
WordStar and WordPerfect were for DOS. They were not WYSIWYG. Sure they were powerful word processors for professionals, but they were not “like” MacWrite. MacWrite was a tool for regular people.
I absolutely would have described WordStar as a WYSIWYG editor. [0] You have no content markings, you do have pages in sight, and flowing text. You set the text to bold, and the font displayed is bold, etc.
I don't think I'm alone in that judgement, as Wiki says:
"WordStar was the first microcomputer word processor to offer mail merge and textual WYSIWYG." [1]
So... You might need to expand why you think that this is not true.
Textual means that it ran in text mode. That doesn't mean that it did not have fonts. It... Did. There's two fonts in the screenshot I linked beforehand.
The main body text is Courier, and the titlebar text is generally called "CCSID 437" or the OEM font. Both standard IBM fonts from the era. They're not the same fontface.
WordStar had full support for the PC-8 Graphic set. It is pre-TrueType Fonts, but so was the software. By the time WordStar landed on Windows, it had support for everything you'd expect from anyone else.
Honestly, I do not see the different fonts in the image. I see the status line showing the Courier font name. If it is different from the body text, I cannot distinguish it.
Obviously WordStar was limited by what DOS could render, so the variety of fonts available had to work in a drastically constrained bitmap. I could also not find samples of the PC-8 graphic set.
But for comparison, here are the original 1984 Macintosh system fonts:
Okay... Let's try a different approach here. "DISPFONT.EXE" and "DISPFONT.OVR" are key files you'll find in WordStar's archive [0].
I don't have a CP/M emulator on hand to fire up the original to show you an example, as WordStar pre-existed DOS.
But this is a quote from v3, the first DOS version, from the manual:
---
Screen Fonts for Preview
At the Add or Remove a Feature screen, you can install three
different types of screen fonts for Preview. The screen font
options are Code page 437, Code page 850, and PostScript fonts.
If you want to install PostScript fonts, install both PostScript
and code page 850 fonts. (Be sure to set the code page to 850 in
DOS. See your DOS manual for instructions.)
---
Hi, actual WordStar-in-practice, wrote several hundred thousand words on it in both CP/M and DOS, user here (related: I am old). I think you're confusing WordStar's preview display with its editing display here. Later versions of WordStar for DOS (I think it started with version 5.0, but I wouldn't swear to it) could generate a surprisingly good for the day print preview using PostScript fonts, as described above in the text you're quoting. But that was a specific read-only mode. When editing, WordStar ran in DOS text modes and was limited to what DOS text modes were able to display: monospaced fonts, usually with the ability to display boldface text with "bright" text and sometimes -- not always -- with the ability to display underlined text with actual underlines. (This depended on your video hardware; IIRC, XyWrite seemed to be able to do that pretty reliably in DOS, but WordStar didn't). But you couldn't display proportional type in editing, or italics, or different typefaces.
Now, you could argue that WordStar anticipated WYSIWYG editors, because it did its best to faithfully reproduce margins, indents, line spacing, justification, etc. in its text editing mode -- but that attempt came from the era when printers could only output monospaced type, usually just one typeface, no italics, etc. Once printers got better, WordStar really wasn't WYSIWYG anymore, just "best effort within limitations". IIRC, the only major DOS-based word processor to actually attempt a WYSIWYG editing display was WordPerfect 6.2 in the late 1990s.
Thanks for this. I've never used WordStar, and my knowledge of early word processing software is certainly incomplete, but this is the first time I've seen such a screenshot that predates 1984.
This is why the Apple LaserWriter was such a big deal. It came out 1 year after the Macintosh, merging Canon’s laser printer engine, Adobe’s PostScript, and the Mac’s bitmapped display with proportional fonts.
WYSIWYG means "What You See Is What You Get." If you want the title to be Courier 24 point Bold then that's what you see on the screen. If you are writing in a proportional font with proper kerning then that's what you see on the screen. You don't see a fixed width substitute.
This is what enables you to do proper typesetting and page layout for a document, and using a PostScript printer such as the Apple LaserWriter you could do desktop publishing [1]. Desktop publishing was invented by Xerox PARC but the revolution began when Apple made it available to the masses with the Macintosh and LaserWriter.
Apple didn't invent any of these technologies but they were the first to put them all together into a package for the mass market and made them incredibly easy to use. Suddenly, grandma had a tool she could use to write and typeset the weekly church newsletter from home, and even print it on her LaserWriter at home. If she wanted to do a newsletter like that just two years prior she would have had to hire the services of a print shop to do both typesetting and layout as well as printing.
She could still have used WordStar or WordPerfect and printed with a dot matrix printer, but that doesn't get you large, proportional fonts or layout.
MacWrite was released in 1984. 7 years before comparable graphical, WYSIWYG competitors.
WordPerfect for Windows was released in 1991. Prior versions were MS-DOS and not graphical.
WordStar for Windows was released in about 1991. Prior versions were MS-DOS and not graphical.
What made MacWrite special, is that it was graphical, highly WYSIWYG (especially when paired with an Apple printer), had a lot of great fonts, and the software was intuitive.
Of course Apple didn't invent any of it, but they made one of the best word processing products at the time.
How do you measure best?
Based on usage, sales, or fraction of published text created - Wordperfect or Wordstar were way ahead of MacWrite.
MacWrite was the best in a niche market of personal newsletters that wanted graphical elements. More professional desktop publishers used Aldus PageMaker.
And big publishing houses didn't use PCs or software, they used offset lithography presses.
It's like you decided to completely disregard that the successor of MacWrite ended up completely eating all the use cases for making documents outside of the large-scale professional uses.
MacWrite begat Microsoft Word 3.01, which is when Word overtook MacWrite on Mac and became truly competitive with WordPerfect on DOS.
So your basis for saying "one of the best word processing products at the time." is that a different company made a different product inspired by it (and WordPerfect)? I don't think that is a persuasive argument.
If you said MacWrite was an innovative word processor, I would not have replied. Most innovative is not always the best product at the time.
I am the actual OP. If you're concerned by my use of the word "best", please replace that word with "innovative for consumers". If you have an issue with this new wording, then we'll have to agree to disagree.
While MacWrite may or may not have been "niche" as you said, it most definitely heavily influenced another "niche" word processor available today: Microsoft Word. Word for Mac (1985) was the first graphical version of Word and it was heavily inspired by MacWrite.
With all due respect, I think you are falling into the nostalgia trap. MacWrite was never a true WYSIWYG editor because of the way it relied on Quickdraw for the on-screen rendering but the big deal at the time was the Laserwriter being the first Postscript printer (Adobe still had a proprietary lock on Postscript which lasted until around the end of the 80's). In 1985 Steve Jobs left Apple and started NeXT. One of their first products was the WriteNow word processor which was ported back to the Mac platform by a company called T/Maker (the Silicon Valley rumor mill of the time was that Steve Jobs and Heidi Roizen were an item for a while). WriteNow was the first one to offer a polished experience with proper font rendering and kerning that didn't look rasterized. God forbid you tried to print from MacWrite with font smoothing turned on, a one-page print job could take several minutes to render because of how the Laserwriter had to execute all that Postscript code in real-time.
> With all due respect, I think you are falling into the nostalgia trap. MacWrite was never a true WYSIWYG editor because of the way it relied on Quickdraw for the on-screen rendering
I read your whole comment, but I still don't understand what this means. E.g., why does relying on Quickdraw for on-screen rendering not make it a "true WYSIWYG editor"?
> MacWrite was released in 1984. 7 years before comparable graphical, WYSIWYG competitors.
You realize there were other systems besides PC and Mac?
Signum for the Atari ST came out in 1986. It was a fully fledged WYSIWYG text processor with special printer drivers for regular dot matrix printers. Even with a 9-pin you could create great looking output, if you had the patience (a single page took minutes to print). Signum was way ahead of MacWrite and was very popular with people needing special fonts in science/math and the humanities (you could quite easily design your own fonts). Also, it allowed for Right-to-left text, and of course the Atari ST was way cheaper than the Mac.
Name a few things you consider to be inventions first to get the ball rolling.
For example, you could claim that nothing new in CMOS manufacturing exists because it’s all just the existing idea of a transistor. Or the transistor is just a quantum mechanic version of the vacuum tube. Or the vacuum tube just an electric version of the Babbage machine. Repeat for Internet vs packet switching.
Basically come up with a definition that doesn’t require a “I know it when I see it step” and I’ll easily fit many things Apple did in there unless it’s such a restrictive definition that no one invents anything.
“If I have seen further, it is by standing on the shoulders of giants”
On some level, all invention is a novel arrangement of existing components. All the way down to the physics, if you're willing. So, does anyone "invent" anything?
But if we accept "patentable" as a proxy for "invented", then obviously Apple invents a lot.
But then there are those of us who think the entire idea of "patentability" is a joke in and of itself.
All these fuzzy lines behind this "who invented what" is a major reason I consider patents to be evil -- the entire system sets up artificial and harmful barriers keeping ideas from fertilizing each other and growing beyond our wildest imaginations.
(And as someone who strongly dislikes all things Apple -- but not quite as much as all things Windows -- I cannot help but observe that both Apple and Microsoft simultaneously deserve both more and less acknowledgement, all depending on how one looks at things, for their inventiveness).
The alternative isn't some utopian free sharing of inventions. The alternative is tightly held trade secrets and lots of inventions dying with their inventors.
We had this system for most of human history. It sucked. A decade and a half of exclusivity is a fair trade to avoid it.
Software patents maybe shouldn’t exist, as they didn’t for a very long time. Indeed, the way patents are actually written for software ends up as a convoluted mess trying to remain as generic as possible to cover all possible extensions of a core idea (and you can’t patent ideas). So while patents show a benefit, it’s pretty clear the current system in the US has a lot of flaws that need attention.
WordPerfect and WordStar were text-based at the time MacWrite came out as a WYSIWYG word processor.
Not to say MacWrite wasn't based on prior work (it was, though not actual products that I know of), but it isn't really comparable to prior text-based word processors.
>I don't get why Apple fans have this obsession with pretending Apple invents these things.
This is entirely tangential and probably a pointless gripe in this thread, but...
For some reason, it's always really annoyed me that Apple took MP3 players, called them an 'iPod', and suddenly everyone ate them up like they were the second coming of Christ and we'd never had them before.
As someone who was There At The Time, the UX of the iPod was really, really good. There were few, if any, other companies that could match it. The vast majority of competing music players were what DankPods calls "nuggets" - i.e. barely functional e-waste that were either saddled down with horrible software (e.g. anything Sony made), had horrible controls, were bulky and painful to use, or some combination of those above dealbreakers. A lot of companies treated developing an MP3 player like any other kind of music player, and ignored the fact that these things could hold 100x as much music as anything else on the market, which necessitated a completely new UX.
To be clear, there were good non-Apple MP3 players, but they were either marketed poorly, or late arrivals (e.g. the Toshiba player that got rebadged into the Zune). By the time those existed (and tech companies started hiring UI/UX people), Apple was doing a complete reset of another product category: smartphones.
I suspect history would have been different had, say, MiniDisc not failed horribly in America[0]. Pre-iPod, portable music in the US was either compact cassettes with all the downsides of tape, or CD players that could just barely fit in your pocket. The iPod was such a step up from either that it all but became a genericized trademark. Had we had a competing technology that wasn't from the 1980s, we probably wouldn't have thought the iPod was so great. Or at least, people I knew who had MiniDisc looked at the iPod the way I look at all the e-waste that was trying to compete with the iPod.
[0] Yes, I know that Sony was basically trying to avoid a repeat of DAT getting banned
The iPod is the only Apple product I have ever purchased. I could easily operate it without having to look at it constantly, which was great for bike rides or car rides. No fiddling and taking my eyes off the road.
The ipod was a very welcome step in the portable music player tech evolution at the time, but it also coincided with a bunch of people that were suddenly Very Into Music for a few years. I don't fault anyone for thinking the previous portables were just not good enough to every day carry, but they also never seemed to notice that the OG white earbuds were more painful and sounded much worse than a decent brand of $15 black earbuds. Maybe never finding good earbuds explains why they gave up on their Passion for portable music within a few years.
Have you used the previous generation of MP3 players? I had one with a tiny LCD screen that would only fit half the song title (no space for the artist). To go to the next song, you had to press the "next" button (which makes sense). Except that action would take at least 0.5s. You press next, you wait, you see the display refresh with the next song's partial name. Not the song I want, press next again. Very quickly, skipping 10 songs takes 10 seconds of effort. It was a painful device to use.
The iPod came with a large screen and a click wheel. I could find songs on it. That was a revolution for me.
MP3 was the enabling technology (if you can't fit many songs on a small device, then this is moot).
> MP3 was the enabling technology (if you can't fit many songs on a small device, then this is moot).
As others have alluded to, MP3 alone didn't seem to be enough. I remember passing on early MP3 players because they only had 32-64MB of storage, not even really enough to store a single album. Snatching up those tiny 1.8" hard drives right away and integrating them was probably as important as the UI improvements because it solved that problem.
If I remember my first mp3 player correctly, it had 16MB onboard + a 16MB smartmedia flash card (and uploading was via PIO parallel, and the software would occupy the whole system until finished), I needed to experiment with my comfort level between low bitrate and a worthwhile amount of songs. I must have ended up around 46-64kbps.
As soon as I had the funds I quickly moved onto a variety of CD/HDD based players, although I've only recently bought (and modded) an iPod - there's definitely reasons they were so popular. I can appreciate why they went to the common platform with the phone and were later phased out entirely, but as a task-dedicated non-smart device they would be last-man-standing.
In addition to the already very thorough and well-considered comment replying to this, I just wanted to say that the iPod was one of the first MP3 players that was widely available with a full-blown hard-drive. The vast majority of MP3 players at the time had like 32-64MB of flash memory, if that (and still cost hundreds of dollars). The iPod had 5GB and 10GB models. Suddenly you could bring your entire CD collection with you anywhere. Yeah, there were a couple of competing models with similarly-sized hard drives, but the other comment covers why people spending >$400 on a fancy new gadget preferred the iPod at the time for its excellent UI/UX.
I only remember the ones about the size of a Discman with 2.5" laptop hard drives. The iPod was, I think, the first one with a 1.8" HDD. When the MacBook Air first came out it used the same 1.8" HDD.
That 1G of flash storage at the time was huge for a phone. This was before everybody had an iPod Touch or iPhone, of course. iPhones came out the next year but in my area hardly anyone had AT&T, so because of the exclusivity the iPod Touch became popular way before the iPhone in my area.
My mp3 player around 2001 had 700MB of removeable storage. Buying additional storage was pretty cheap too and there were standardized cases to store a lot of that format.
In addition to the points made by the sibling comment, the iPod was a quality product well executed, early competing MP3 players were not great.
Flash based players were smaller but limited in size and expansion media was expensive. Hard drive players were hobbled with USB 1.1 connections and an obsession with drag and drop for management.
The iPod by default just synced with your iTunes library. The FireWire (and eventually USB 2.0) connection did so quickly. The navigation was only as good as the metadata, which iTunes made easy to edit. The UX on the device made scrolling through long lists of songs very easy.
The iPod made using an MP3 player easy and approachable for normal people. The Rio, Nomad, and a multitude of others did not. They included a bunch of checklist features but didn't focus on usability until Apple dominated the market.
Yes, but having used both in that time period... tools on the mac were graphical and easily explored (open a menu and see what was available). People made plastic keyboard templates for WordPerfect and WordStar just to remember the commands. On top of that, you could easily task switch on a Mac. There were some tools for doing that with DOS, but they were awful by comparison.
I worked in a lab that used PCs while I owned a Mac Plus. There was no comparison.
Siri has had so many years to iterate and get refined that by now I'd have assumed it would be as omnipresent an assistant as the one in the movie "Her" (without the negative impact, of course), but look where we are today: I still use it only for weather and setting alarms, and even then it sometimes works and sometimes does not.
> MacWrite was released 5 years after WordPerfect, which itself is predated by WordStar. I don't get why Apple fans have this obsession with pretending Apple invents these things.
Um, hello, Wang OIS? WordStar and WordPerfect didn't invent anything. They were copies of terminal-based word processors.
But MacWrite was different in two important ways. First, like Bravo and Gypsy before it, it was WYSIWYG, a million times better than WordStar/WordPerfect. And it worked with the LaserWriter. But more importantly: it was free. This made MacWrite revolutionary.
I think what Apple excels at is providing a set of software development tools, a platform, and an audience, for third-party developers to then use to build innovative products, the best of which either become platforms of their own if they're lucky (e.g., Adobe suite), or are copied by Apple and made part of their operating systems (https://en.wikipedia.org/wiki/Sherlock_(software)).
Excel, Photoshop, Illustrator, Sketch, Lightroom, Premiere, and PowerPoint are all examples of software developed and released first for the Mac that then went on to become software behemoths in their own right (too big to Sherlock). (Well, Sketch turned out differently, because Figma happened, but it's still a great example of third-party innovation facilitated by "a set of software development tools, a platform, and an audience".)
The point here being I think of Apple as more providing a platform for innovation rather than innovating themselves (but I'm aware that's probably a minority opinion).
Inventions are pretty cheap without the refinement that directly drives customer demand, backed by robust supply chains and delivery of cutting-edge tech like the M-series lineup of chips, tremendous camera quality, battery life, reliability of the operating system, a specially curated app store, security and privacy, etc. Inventions are not what people want to pay for; people pay for value added in all sorts of forms. Apple creates products for humans, and people pay back by seeing the offering as a higher-valued product.
You're basically explaining why the Macintosh stayed niche when it came out. People saw the barebones WYSIWYG of WordPerfect 4.0 on PC in 1984 compared to the true WYSIWYG of MacWrite and they said: It's what we already have, why would I need a Mac?
I get not loving the "Apple invented everything" mantra some people have, but the iPhone genuinely redefined the smart phone category. The industry has 100% coalesced on the model invented by Apple. Nothing like this existed as a full package before the iPhone and now are almost universal:
- No physical keyboard + touch keyboard
- Modern OS kernel (not embedded specific kernel)
- Desktop browser engine
- Capacitive touchscreen + finger instead of stylus - one or two phones had capTouch before, but they were far from standard, and they still had physical keyboards for typing
- Vertical by default orientation
- Short 1 day battery life in favour of more power/features (weird to list, but was a bold move everyone mocked then followed)
They totally took out the existing market (Blackberry, Windows Mobile, Symbian, a variety of OEM OSs). Android succeeded but came after, still had keyboards on its flagships the years after iPhone came out (G1, Droid), and took these design cues from iPhone.
The Mac GUI with mouse+keyboard+windows was also huge. Admittedly not first to invent it (Xerox PARC), but first to ship it as a package is still hugely impressive. Few people commercialize a new product before it existed in some lab.
- No physical keyboard + touch keyboard (Windows Mobile had this first)
- Modern OS kernel (not embedded specific kernel) (Blackberry had this first)
- Desktop browser engine (iOS didn't have a "desktop" browser engine, it had a stripped-down mobile browser engine. But on this note, Windows Mobile did support desktop browser engines.)
- Capacitive touchscreen + finger instead of stylus - one or two phones had capTouch before, but they were far from standard, and they still had physical keyboards for typing (LG Prada had the first capacitive touchscreen)
- Vertical by default orientation (Almost every smartphone at this point was vertical by default, with horizontal-by default being the exception.)
- Short 1 day battery life in favour of more power/features (weird to list, but was a bold move everyone mocked then followed) (Windows Mobile had this years before Apple)
Literally everything that Apple is credited for with the iPhone...others had it first. The true genius of the iPhone was the marketing...Apple still gets credit today for "inventing" features that Android phones have had for years (zoom cameras? AI? notes? custom emojis? embedded fingerprint readers? integrated payment?)
Apple has always been the follower: it copies what others have done, and makes minor improvements, then markets the hell out of those minor improvements to make them seem revolutionary.
None of the phones listed looks remotely like a modern smartphone. The iPhone does.
I worked on WinMo at MSFT at that time. You are comparing devices with physical keyboard and a crappy virtual keyboard that required a stylus to modern smartphones?
I mentioned the LG Prada - yes, it had cap touch, but not touch typing (physical slide-out keyboard).
Almost every WM, BB and Symbian SKU had horizontal screens (over keyboards).
Blackberry integrated QNX post iPhone.
First iPhone had WebKit.
All of these facts seem to be incorrect.
Again: the combination of these was a huge shift, and everyone followed it.
I still have my HP IPAQ and it is most definitely a vertical screen (they all were, and only some of them had keyboards). But sure, if you exclude all previous vertical phones, Apple was the first vertical phone ever...
Also, you have your devices mixed up. The LG Prada first gen (2007) did not have a keyboard; the 2nd generation (LG Prada II, 2008) had the slide-out keyboard. And on that note, the iPhone shares so many design elements with the original mockup of the Prada from LG's initial announcement of the device that most tech reviewers thought Apple copied the Prada. It's a good thing for Apple that LG failed to file timely design patents.
So, it seems that all of your facts are the incorrect ones.
I started to have doubts about the article as soon as seeing the Samsung Galaxy vs iPhone comparison. The author exaggerates things and rewrites history too much.
IIRC, all the Google/Samsung phones had keyboards because they copied the Blackberry. Once the iPhone was released with the screen keyboard, all the Google phones changed to that.
They didn't clone everything, but they cloned a lot in the early days. Rounded corners was another one. Now it seems like Apple is cloning Google/Samsung more.
The first versions of Android that the public saw were very similar to the OS on a BlackBerry or Danger HipTop. The G1 even used the same mechanism to deploy the keyboard.
As far as the rounded corners, I remember seeing a reduced Google Reader view of Engadget later that year that had every device looking the same from the top third up. I really wish I had a screenshot.
There is now a lot of cross inspiration and features that are copied in both directions, as well as both implementing the same thing at around the same time (Apple Intelligence and Gemini).
On the Android side, Pixel gets most new features first while Samsung offers their own take. Samsung is generally ahead of their direct competitors in terms of hardware.
Nit: the Danger mechanism was waaaaay cooler than the G1's. It did this amazing spin I've always missed. The G1 was a little 2-hinge flip-up that was satisfying, but didn't do the amazing 180 that the sidekick did.
Apple fans do this to justify their decision to buy Apple products to their circles and most of all, to themselves. Or it is just delusion from not actually knowing better.
Calling what they do as mere refinement may be equally understating it in some cases.
They get it right. For the masses. They create beginners.
Anyone can start with an Apple, because it’s what it’s designed to do.
I have my own biases that extreme usability started earlier with other movements like WebOS and Palm contributing to it as well.
Still, if we look for watershed moments where huge numbers of people in the mainstream adopt technology, whether it’s the iPhone, iPad, Apple TV, watch, laptops, they don’t need to be the first, just the best for the most number of people.
Being able to integrate hardware and software closely creates a different and reliable result for the many, as much as I might not like having complete agency.
If anything, Apple helps invent beginners in the mainstream.
> They get it right. For the masses. They create beginners.
I used to think that about Apple, which is why I got the iPhone 4s when it came out, assuming that I would be able to use it via voice while driving and have a good experience. I disabled it after the first day and never went back. It was not remotely ready for a wide release. I still use a Mac as my main device, but I've been on Android since the 4s.
> Apple refines what others have attempted before, that's what they're good at.
Exactly. And often the first version is kind of meh (iPhone, Watch, Vision Pro), but they keep iterating and later versions become really good. Sometimes it's a hit right away (M1), but even then it's still very iterative on whatever came before.
PowerPC was pretty great for a while. Then Motorola and IBM started dropping the ball (or rather, stopped putting resources into what was not a particularly big customer) and Apple switched away.
They were, but Motorola and IBM designed and manufactured the CPUs. Apple presumably had some input, and they designed hardware platforms for the alliance, and of course software.
I am hoping the foundation they built will lead to greater things. So far it has definitely fallen flat compared to their demos. All I wanted was a way to talk/text to Siri in a natural way to get things done. It's better, but far from perfect. I want to be able to easily create calendar events and interact with other native iOS APIs.
They could be doing a lot better but it's a bit of a cursed problem.
Natural language as input doesn't give you any information about where the boundaries are or what's possible. Meanwhile natural language can express anything, most of which any current implementation won't be able to do.
So the user gets a blank canvas and all the associated problems with learning what to do, except it's worse because many things they think up will fail.
And the main tool we have to guide the user through this fraught path is LLM output. Oof.
I am hopeful that Apple will demonstrate their expertise in using some traditional UI to help alleviate some of these problems.
I guess you meant to say the competition is better. I think it is, but that doesn’t mean their product isn’t a huge improvement.
A simple example is text search in Photos.app. It probably misses text in some photos, but it helps me find quite a few photos. Similarly, face recognition in Photos.app is far from perfect, but way better than not having that feature.
> I want to be able to easily create calendar events and interact with other native iOS APIs
It’s not in Apple’s DNA to release a product here that only mostly works. Chances are they’re working on something like that but don’t find it good enough to release it yet.
What surprises me, though, is that they released the “get an AI summary of this web page” feature. That definitely produces some results that are very bad.
Not talking about the competition. They failed to deliver what they had in demos. Clearly they were hoping to get it right at the last minute, but that seems far from the truth. All I wanted was a better Siri that played really well with native iOS apps. We don't have that yet, and I am not sure if it's purely a lack of power with on-device models or if Apple completely missed the mark in implementation.
I normally really like this blog, but what’s all this noise about? They’ve done the hard part of lining their entire foundation up, now all they have to do is build the applications on top, and let others do so as well.
In the meantime, they’re shipping the best non-workstation computers, by far, to run models locally.
They don’t have to be the ones to implement all of this themselves, you can install ollama right now, and BoltAI can integrate those models into other parts of the OS. And Apple will watch, and Sherlock the best parts of what others do into the OS, and sell gobs of machines.
They haven’t squandered anything, the foundations are still there.
Y'all know I'm a fast, if error prone, writer. I still enjoy using AI writing assistants to help me with the occasional phrase that's awkward, grammar detail ("it's lower g in 'god' if I am talking about Thor or Huxian right?") and choice of words ("I need a word for agriculture that starts with C...")
LLMs make different mistakes than I do so I've thought about using one as a copy editor but I've had terrible experiences with copy editors: I've hired more than one when I was writing marketing copy who injected more errors than they fixed. (A friend of mine wrote an article for The New York Times that got terribly mangled and barely made sense after the editors made it read like an NYT article.)
The "Math Notes" thing is absolutely infuriating to me. I use TextEdit on macOS for various notes and the forced math autocomplete (with no way to turn it off) has pushed me away from TextEdit entirely.
Forced AI garbage seems to be Notes team's SOP at this point. They destroyed the handwriting experience on iPads, in iOS 18, with an incompetent spellchecker that can't be turned off. At least this math thing is somewhat unobtrusive, the spellchecker straight-up destroys notes. No acknowledgement of radars and no fix in sight, as expected of Apple I suppose.
Anecdotally even palm rejection seems to have gone to absolute shit in Notes with iOS 18. When I go to write now there’s like a 50% chance the scroll position flies up the document.
I also tried their “handwriting improvement” feature that claims to clean up lines a bit while still looking like your own writing. All it did was turn legible writing into total gibberish.
This smelled like one of those things where they've applied a certain behavior to an entire class of widget for consistency's sake, and TextEdit just happens to use that widget. If so, it'd be controlled at the system settings level.
Sure enough: System Settings -> Keyboard -> (under the "text input" area) Edit... (button next to your primary keyboard language) -> toggle "Show Inline Predictive Text"
If you want to easily switch between having it and not, I bet you can set a second keyboard with the same language but a different setting there and use the quick keyboard switcher widget/shortcuts (I did not try this, though). Or there's probably a way to shortcut it with AppleScript or some other automation thingy with ten minutes of effort (mostly googling).
But that still doesn't disable it. I can type in "cos(23 deg) =" and it will autocomplete it, even though I have "Show inline predictive text" disabled. I can post a screencast if anyone would like.
Weird, I tried "1+1=" before and after and it disabled it for me.
[EDIT] A quirk: I do have to hit "done" on the window before it seems to apply the change, toggling doesn't do it until I hit "done" (I just tried again to double-check and noticed this)
[EDIT 2] Nb I don't not-believe you, we could be on different OS versions (I'm on 15.1) or something else could be causing the difference.
Currently it's not autocompleting some simple algebra. But if I enter more "complex" equations ("3 / 4 =", "3 * 5 - 2 - 1 =", "tan(pi) =", etc.) then it autocompletes those. I can't figure out why it's inconsistent. And I've definitely checked and confirmed "Show inline predictive text" is disabled and I've rebooted to try and give everything a fresh start.
One thing I've noticed is that if I enter the same equation multiple times it might stop suggesting for that specific equation, so I suggest trying multiple different equations.
I've felt for some time they're overdue for an "almost nothing but bug fixes and performance improvements" major release like we got a couple times in the 20-teens :-/
I don't recall the name of the app, but drawing formulas, equations etc with a finger was a thing on Android like 10 years ago. And it did things like a mix of fractions and square roots etc just fine.
Assume words don't contain numbers and numbers don't contain words, then provide a convenient UI for selecting alternatives?
For fielded input matching known patterns, recognition can also be constrained by pattern matching and general validation rules (e.g., VINs are 17 characters long, cannot contain the letters I, O, or Q, and, given prior information in other fields, can be further constrained by manufacturer code, model year, and by requiring a correct check digit).
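For example, here is a minimal sketch of that kind of constrained validation for a VIN field. The transliteration table and weights are the standard published check-digit values, but treat the exact figures (and the sample VINs) as something to verify rather than gospel:

```python
# Sketch: constrain recognized text for a VIN field instead of accepting raw OCR output.
# VIN rules: 17 characters, no I/O/Q, position 9 is a check digit (standard algorithm;
# double-check the transliteration/weight tables before relying on this).
TRANSLIT = {**{str(d): d for d in range(10)},
            **{c: v for c, v in zip("ABCDEFGH", range(1, 9))},
            **{c: v for c, v in zip("JKLMN", range(1, 6))},
            "P": 7, "R": 9,
            **{c: v for c, v in zip("STUVWXYZ", range(2, 10))}}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def is_plausible_vin(text: str) -> bool:
    vin = text.strip().upper()
    if len(vin) != 17 or any(c in "IOQ" for c in vin) or not all(c in TRANSLIT for c in vin):
        return False
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    check = total % 11
    expected = "X" if check == 10 else str(check)
    return vin[8] == expected

# A recognizer could rank candidate readings and keep only those that validate:
candidates = ["1HGCM82633A004352", "1HGCM82633AOO4352"]  # the second confuses 0 with O
print([c for c in candidates if is_plausible_vin(c)])
```

The same idea extends to any fielded input: let the recognizer propose alternatives, then keep only the readings the domain rules allow.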
The point is that it's an algebra system; it's quite good for unit conversion, budgeting, etc. - lots of things a spreadsheet does great but feels a bit overkill for. Algebra is a big improvement over basic arithmetic calculator power.
Was going to say the same thing. Wolfram Alpha has been doing MathNotes for 16 years now. I seriously doubt AppInt will ever come close to the depth that Wolfram has.
It's a bit funny that his favourite "Apple Intelligence feature" is something that, I suspect, doesn't even invoke the actual model at all under the hood.
Parsing text for variables when it sees an equals sign and running basic calculations on them? I feel that could have been a novel feature 30 years ago.
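To be concrete about how little machinery that needs, here's a rough sketch of the "variables plus a trailing equals sign" behaviour. Purely illustrative - I have no idea how Apple actually implements Math Notes, and this only handles +, -, *, /:

```python
import ast
import operator

# Toy "note math": remember assignments, and when a line ends with "=", compute the answer.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mul: operator.mul, ast.Div: operator.truediv}

def eval_expr(node, env):
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_expr(node.left, env), eval_expr(node.right, env))
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return env[node.id]
    raise ValueError("unsupported expression")

def run_note(note: str) -> list[str]:
    env, out = {}, []
    for line in note.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.endswith("="):                  # "price * qty =" -> append the computed answer
            value = eval_expr(ast.parse(line[:-1], mode="eval").body, env)
            out.append(f"{line} {value}")
        elif "=" in line:                        # "price = 4" -> remember the variable
            name, expr = line.split("=", 1)
            env[name.strip()] = eval_expr(ast.parse(expr, mode="eval").body, env)
            out.append(line)
        else:
            out.append(line)
    return out

print("\n".join(run_note("price = 4\nqty = 3\nprice * qty =")))
```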
Minor (in terms of how relevant this is to your comment, not in importance) correction: the author is a woman (and actually prefers they/them, according to their GitHub: https://github.com/Xe).
I also struggled with this for similar reasons. His favorite AI feature is essentially writing valid js for calculations (his example is literally valid js if you just drop the equals on the last line - you can paste it right into the console on his site and see the answer).
The whole article feels like it suffers from a similar lack of coherence. Ex - I am hardly an apple fanboy (I strongly dislike the company) but the complaints here are basically
- The service sometimes has outages
- The image gen is not as customizable as he'd like
- He's morally opposed to cleanup in photos
- Notification summaries are bad (and how dare I get my texts 5 seconds slower).
---
None of that is really related in any way to the security footprint of the tooling he discusses up front, and it's also hardly distinct from most other current AI offerings, and it's not really a consistent complaint about the tech.
My opinion of Apple is that they do a crappy job with the vast majority of their apps...
They build good hardware, and they abuse their small hardware footprint to make decent device experiences and a decent (but getting worse) OS - but their actual applications are generally mediocre at best (mediocre copies of a previously successful, usually better, app that they will put out of business through shady store practices if I'm being blunt).
---
If anything, the failure here is that Apple marketed a thing that AI can't really do (yet, maybe at all), and most of the things AI can do without being incredibly invasive aren't actually all that useful to most folks. Very useful to a handful of power users in specific circumstances, but otherwise essentially novelty apps.
So... it's not an implementation failure. It's a marketing failure. And this is hardly unique to Apple right now. The only difference is that usually Apple doesn't play their hand until this inflection point with new tech is over, so it's more obvious this time around just how bad the product fit is for general use.
Minor (in terms of how relevant this is to your comment, not in importance) correction: the author is a woman (and actually prefers they/them, according to their GitHub: https://github.com/Xe).
There is also the possibility that LLMs (which is what Apple Intelligence mainly leverages) are overblown and honestly a local minimum for AI.
I was an early user of Macs; the first computer I owned was a Mac Plus. I didn't own a PC until the 1990s.
Most of the intro to this is credulous hooey. Macs weren't "bicycles for the mind" in some magic way that was different from PCs. What the early ones had was 1) a better and much more standardized interface, and 2) task switching that worked.
As for the AI tools, image generation might occasionally be useful for a D&D game, but otherwise nothing on offer at the moment has much value. And the value of image generation (for me) is pretty small.
They just built a trusted/secure backend to push compute to, and it luckily coincided with the AI craze. They just packaged their backend as Apple Intelligence and exploited the situation. It doesn't look like they have anything worthwhile to showcase that backend with, though. They will get there eventually; this is Apple, after all.
It's astonishing to see Apple settle for DALL-E 3 (or worse?! these remind me of the Bing samples) for the image generator part. Hasn't the incredible extent of mode-collapse and the horrible DALL-E 3 style become universally known and disliked yet?
No, it's worse than DALL-E 3, it's an on-device model that can only reproduce placid soulless images. The ones I put in the article have been heavily cherry-picked. The worst ones get far worse. DALL-E 3 can at least do text.
Image Playground uses a tiny on device model. It certainly isn't DALL-E 3. In no universe are on device models intended to compete with massive cloud models.
Macs being able to export PDFs for free was a huge deal back in the Acrobat days :-P
I'm hoping that what they've built in infrastructure and custom chips is a step towards making personal LLMs highly available to non-technical people as well. I think this is where Apple has always shined - making things not just better, but accessible and grokable for normal people.
It feels very apparent to me that Apple was caught completely flat footed by ChatGPT’s release. As a result, they were very far behind and have been unable to catch up.
What they released as Apple Intelligence wasn’t a well-planned, cohesive product as much as the only thing they could possibly do, given the timelines they were up against. Maybe they’ll catch up, but they’re definitely behind and it’s a shocking thing to behold.
What's interesting is how I opened Safari's reader mode to digest this 5000+ word polemic, and then noticed for the first time a new option to summarize its contents. A few seconds later I had a clear idea of what the author's thesis is, without being under the false impression that it had conveyed its finer points.
I read the article. After reading this comment I tried to get it summarised.
The result wasn't pretty. The summary claimed that there is skepticism over the security of Private Cloud computing, which the article actually praises.
Strange that we received two very different summaries. There was a part in the article where the author mentioned that Apple's private cloud compute claims were "literally impossible", but that was hardly the general takeaway of the article.
Mine basically said Apple has fallen short of their vision because of the inherent limitations in relying on web services.
Stepping back the way I see it is it’s a cultural thing. Apple loves to own the whole stack and even though circumstances forced them to use the hot new thing because it’s useful, Apple as an organization doesn’t really wanna do that. It wants to have its own stuff, and so it could never really make something succeed that wasn’t its own and that’s why it failed. So far.
I was giving this my attention until the author included a long quote from Steve Jobs from the author's own dream.
Sorry, what?
Apart from the level of dream-detail recalled being highly dubious, quoting your own hallucination of Steve Jobs to help with your argument about generative AI being useless (and missing the irony) is downright weird.
Also Math notes is basically the same thing search engines have been able to do for over a decade now. Enter a sum, get an answer.
No, I'm aware of the irony. I also wrote it down when I woke up, and you can see on stream that I copy it from a Discord message when I'm looking for it (https://youtu.be/N_KNpVujAL8?t=14677).
I figured you'd rather read something my brain made up (albeit unconsciously) than something a machine made up using linear algebra without understanding any of the words that it's using.
> I figured you'd rather read something my brain made up (albeit unconsciously) than something a machine made up using linear algebra without understanding any of the words that it's using.
I mean, any article written by a human is something the brain made up. And I'm fine with that. But it reads like trying to give your opinion extra weight by associating it with Steve Jobs. It just came off weird.
I'm with you on most points. I'm not an iphone user but I can certainly appreciate that Apple Intelligence does not match the hype. That seems to be a recurring theme with AI though. Release a thing, shout from the rooftops about how great it is, and then wait for people to start posting about glue in pizza recipes or urging people to kill themselves or generating fictional news alerts.
So far it feels very unfinished, but having followed Apple for a long time I've seen many products launched and iterate over time. Maps, for instance, had a fairly disastrous early period but eventually became my preferred navigation app.
That used to be the standard apology for Microsoft products, where Mac OS app developers "sweated the pixels", i.e., delivered products that were pretty much on target at launch.
Yea, I'm not saying it's great or that this is the preferred approach. Just highlighting that it's not the end of the world as many frame it every time something like this happens.
Agreed – they've even said as much. But some of the marketing is conflicting there, and I've had friends IRL confused that their new phones don't contain all the Apple Intelligence features they've heard of.
This article makes great points, but the outcome seems to be “in progress” and not final. They have the tech stack and the right philosophy, they just botched the execution out of the gate. This is the same company that didn’t think 3rd party apps on the iPhone would be a thing. They managed to course correct.
Apple Maps is still not good compared to Google Maps. If I want to go somewhere for the first time, I can't trust Apple Maps. (I've tried a lot, and always come back to Google.)
Depends on the place I'd think. I almost exclusively use Apple Maps except for when I need data about a business. On the other hand Google has also sent me to wrong places a few times, sent me to a closed road etc.
The biggest problem I have with Apple Intelligence is battery life. Since Apple has no software chops in building LLMs, I expect them to throw hardware solutions at the battery life problem.
But the demands of intelligence and the general trajectory mean no amount of hardware - storage, RAM, or battery - would be enough to generate the high-fidelity experiences or solutions that fans and customers have come to expect from the company.
Power consumption is the defining characteristic of AI. US power consumption had plateaued at around 4,000 billion kilowatt-hours from 2000 through 2023. That will likely accelerate by 20% or more with 2024/2025 data. It's probably one of the few guardrails. Electricity is about five times more expensive in the UK than in the US, so the US is the natural home for the models and other regions are not.
> Power consumption is the defining characteristic of AI.
I'm not so convinced. I've been playing with running Ollama + llama3.1 8B on my 2023 M2 MacBook Air with 24GB of RAM, and I don't notice much difference in battery life with or without it. I'm not querying it continually with a shell script in a loop or anything like that, but neither am I shy about throwing all kinds of prompts at it. My laptop keeps chugging away on battery.
Training AI may be ferociously resource intensive, but I haven't seen that querying those models is especially bad. I'd think that a model that Apple had tailored specifically to run well on its hardware would be relatively "light".
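If anyone wants to reproduce that kind of informal test, here's roughly how I poke at the local model through Ollama's HTTP API. This assumes Ollama is running on its default port and the model has already been pulled (`ollama pull llama3.1:8b`); the prompt is just a throwaway example:

```python
import json
import time
import urllib.request

# Informal local-inference test against Ollama's default HTTP endpoint.
def ask(prompt: str, model: str = "llama3.1:8b") -> str:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

start = time.time()
print(ask("Summarize the plot of Moby-Dick in two sentences."))
# Wall-clock time is a crude proxy; actual battery impact needs Activity Monitor / powermetrics.
print(f"took {time.time() - start:.1f}s")
```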
I'm kind of skeptical the US grid can even handle this industry growth if it becomes the only realistic place these models are run. A lot of the infrastructure is pretty wobbly. And forget the tax benefits and cheap land from Texas; their private grid is liable to bust at any time.
Considering how quickly the definition of "AI" changes (several times per decade now?), and what happened to smartphones, it's possible that she becomes right in a few years...
Hah I’ve been having the same issue as the author with those scammy “package delay” texts getting summarized in my notifications.
Didn’t realize how widespread that type of spam was until now. Why hasn’t someone implemented better spam detection at Apple like we have for email? It would be nice if they could classify texts as spam, promotions, etc and organize them the way Gmail does.
My guess: that requires bigger models than can run on local hardware, and the appetite for sending emails out to a server for classification is negative zero.
I could pick on quite a few nits here, but I'm going to focus on one in particular that I'm very familiar with as a photographer and mass media studies student.
> I want the data coming off of the sensor to be the data that makes up the image. I want to avoid as much processing as possible and I want the photo to be a reflection of reality as it is, not reality as it should have been. Sure, sometimes I'll do some color correction or cropping in post, but that doesn't change the content of the image, only its presentation.
First nit: the iPhone camera, and all digital cameras, are deeply influenced by computational photography techniques. What this means is that you essentially never get the raw pixel values, although there are exceptions. The image you get is already significantly manipulated.
Second nit: color correction, color in general, dynamic range, focus, depth of field, and more are all manipulations made by default, even long before digital cameras when film was king. There is no "correct" image version of what our eyes see, there is only what is pleasing to the photographer and the audience.
An example: the negative for Ansel Adams' well-known "Moonrise Over Hernandez, New Mexico" looks, at first glance, like something a professional would trash for lacking detail.
I will mention, but won't even get into, a topic that will surely bait HN commenters: Kodak designed and standardized its color film to represent Caucasian skin tones. It wasn't until chocolate and furniture makers complained that everything looked like the same gross mud in their expensively-produced product catalogs that Kodak took a look at rendering dark brown/red/yellow tones more pleasingly. Notice I said "more pleasingly", not "correctly".
> Kodak designed and standardized its color film to represent Caucasian skin tones.
That may be an urban legend. There was a popular reference card that had a light-skinned woman, but that wasn't the problem. This is closer to the old Technicolor vs. Eastmancolor debate.
Here's the trailer for Disney's "Song of the South" (1946) [1] That's three-strip Technicolor. Good dynamic range, with each of three colors on its own strip of black and white film. Here's the trailer from "Shaft" (1971) [2]. That's single-strip Eastmancolor. Dynamic range is not as good, as you can see in the street scenes where lighting wasn't controlled. Eastmancolor took over in the 1950s because the cameras are much smaller and production is easier and cheaper.
NTSC Color TV really did have something to standardize skin tones. NTSC color TV transmits an intensity value and two vector components which determine the color. The color vector components are rather low bandwidth (only about 10 full range color changes across the width of the tube), and so some NTSC receivers had a gimmick which, when the vector is near the "skin tone line" for a standard skin tone, pulled it to a fixed value.[3] The effect was that all faces had roughly the same skin tone in UV color space, but intensity could vary.
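A toy model of that receiver gimmick might look like the following. The skin-tone angle and tolerance here are illustrative values (the "flesh tone" axis is commonly quoted as roughly 123° on a vectorscope), not taken from any real set's service manual:

```python
import math

# Toy "flesh tone correction": if the chroma vector (U, V) is near the skin-tone axis,
# snap its hue to that axis while keeping its magnitude (intensity is carried separately).
SKIN_TONE_DEG = 123.0   # roughly where skin tones sit on a vectorscope; approximate
TOLERANCE_DEG = 15.0    # how far off-axis a color can be and still get pulled in; made-up value

def flesh_tone_correct(u: float, v: float) -> tuple[float, float]:
    magnitude = math.hypot(u, v)
    hue = math.degrees(math.atan2(v, u)) % 360.0
    if abs((hue - SKIN_TONE_DEG + 180.0) % 360.0 - 180.0) <= TOLERANCE_DEG:
        hue = SKIN_TONE_DEG   # every "near-skin" color collapses to the same hue
    rad = math.radians(hue)
    return magnitude * math.cos(rad), magnitude * math.sin(rad)

# A slightly reddish and a slightly yellowish skin tone end up with the same hue:
print(flesh_tone_correct(-0.05, 0.09))
print(flesh_tone_correct(-0.07, 0.09))
```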
That quote intrigued me too. Surely RAW (which can be produced with an iPhone [and others...]) is what the author is looking for. Case of not RTFM'ing?
I shoot in raw from my iPhone and Canon EOS R6 mark 2. For my iPhone I usually use Halide's Process Zero to remove all the computational photography garbage that I can from my images.
Their problem is that they are just behind now, because they released too early. They haven’t finished iOS 18 yet. They are still working on it, which likely means that iOS 19 isn’t going to get much attention either, because they should be working on iOS 19 now, not still developing iOS 18.
They have set themselves up for a loser in the next year or two, because they can’t double their resources to catch back up to a normal release schedule.
Apple has a history of releasing early proofs of product concept or basic table-stakes products built with high enough quality that they feel like they're "done," then proceeding to methodically iterate on them for years to decades until they fade into the background. It's hard to imagine the iPhone essentially started as a click wheel iPod - comparing the two is night and day in terms of capabilities, function, and form. They take weird sideways jaunts - but generally shift back onto a path that is sensible.
I was interested to see what they did with Apple Intelligence, but assumed it would be the establishing of a basic set of capabilities, the effective proposal of APIs, and the seeding of product discussions with their customers over a long time. People seem to think that a few years into the current cycle of AI technology we are seeing the final fruits rather than the infancy - for those who develop these sorts of tools it feels very much like iPhone gaming felt in 2009.
At some point over the decade we will hit the peak, then descend into total enshittification (and yes, those who think AI has already reached peak enshittification: you are totally wrong). Along the way, though, we will see a lot more truly stunning advances on the way to the final arrival at pervasive exploitation.
I'm going to admit that I just skimmed past 90% of this article. Being dismissive of AI is currently easy content, so there's too much noise in the space.
Having said that, I actually paid attention to the image playground criticism. Image playground is literally a playground. It is meant to make fun, low-effort images for friends and family, largely for social type interactions.
"It uses a placid corporate artstyle and communicates nothing." It's a hot taco holding a beer. What is it SUPPOSED to communicate? Looks like a pretty great image to me. But of course this piece was leading into the anti- angle, so suddenly it's "horrifying". I guess I didn't get the special training to understand what was wrong with a clearly lighthearted, fun image.
Similar asinine, overly-jaded complaints about the cartoonish, memoji style portrait generation. I think the image is actually pretty hilarious. Actually used image playground to make my social media image, and I care not what this guy thinks about it, or that it is "soulless" (as if a cartoonish representation is supposed to be soulful?)
It's an AI-generated taco smoking beer; I don't think it really needs a defense. If I were to create such an illustration for my blog from scratch, I'd probably use it to communicate absurdism in a light-hearted manner. I'd also probably make the hand-hooves consistent, or at least plausible by cartoon logic. At the very least it would mean:
* The taco holding the beer glass to its lips and sipping on it to smoke it
* Consistent eye shapes (likely the eyes would be closed to smoke the beer)
* Better bokeh for the elements in the background (if that is the stylistic choice I'd go for)
* Have the smoke coming out of the beer, not out of the taco (it's a "taco smoking beer")
* Stylize the image such that it has individual flair, there's something about the Apple Intelligence artstyle that just has an unperson corporate vibe that I don't like.
* The levity comes from "hey tacos don't have faces, hooves, arms, or legs and you can't smoke a beer", this would be used to communicate absurdism https://en.wikipedia.org/wiki/Absurdism, specifically by means of the taco smoking a beer whilst holding it in its hooves
Maybe I've just been exposed to way more AI imagery than you, but guacamole does not look like that in the image. There's more fever dream images that I have locally, but I didn't want to saturate my article with them and haven't fully implemented "image gallery" support yet.
And yes, a cartoon is normally meant to communicate something, quite literally the definition of soulful. Look at this for example: https://bsky.app/profile/yasomi.xeiaso.net/post/3ldgzieehjc2... When I made it, I was trying to communicate a lo-fi peaceful vibe accentuated through traditional artstyles. In a more finished piece I'd probably recreate this through watercolor in Procreate and apply a bokeh effect (emulating the depth of field for a subject with forward light being looked at from an 85mm portrait lens at about f/2.8).
The point of art is to communicate something. If a work does not communicate something, it is categorically not art.
"Apple Intelligence failed?" The far-reaching project that just had its initial release like 60 days ago? Why would anyone read beyond a first sentence like that?
The purpose of a thesis statement is generally to establish a conclusion; over the rest of the article, the goal is to build up evidence to support that conclusion.
The thing is for a product to fail means the failure is widely accepted, like how Stadia failed or how AirPower failed. The verb "to fail" is simply wrong in the case of a product when you really intend "to be bad". It can't be established yet if Apple Intelligence has failed.
Because it's true. When you ship a product and people look at it and go "meh", the product launch failed. There is literally no value in launching products people don't find value in.
It does seem a little sloppy, but the actually interesting part of Apple Intelligence isn't out yet so I'd withhold judgement, even on the initial release.
I think part of this issue is that people expect a lot from Apple. They exposed the technology to many people who aren't early adopter types, but more "mainstream" types. With an entirely new product and brand, the tolerance for bugs is higher, but here the expectation is that of other Apple products.
In the end, even if the features aren't perfect, they still raise the bar for competitors, so Apple is less in danger of being disrupted.
Also, there are plenty of AI-driven features that people don't talk about, but those "just work", so you don't notice them as much.
> In the end, even if the features aren't perfect, they still raise the bar for competitors
Honestly I'm not sure that they do. Everything I've seen with notification summaries, for instance, has given me the feeling of "wow, I guess I'm really not missing much". LLMs as an answer engine have the big benefit that they feel fast and fluid even when they're wrong (and many won't bother verifying), but with notification summaries most users in messaging contexts will eventually go back to the conversation and see the responses in full detail. Mistakes in that context are identifiable by mainstream users as having made the product worse.
I agree. Case in point is that the author jumped in at the iPhone 7 which is a lot closer to current era iPhones than the original iPhone and had been refined over that many generations. My first iPhone was the 5s, my first Apple Watch was the 6. I tend to hang back and wait a few generations before adopting a new product from Apple. I suspect Apple Intelligence will be a lot better 1-2 years from now.
I find them better than the multiple Echoes we have. Though they don't do as much, what HomePods do do is generally better than Amazon's attempts. Can't comment on Google's equivalent.
The Apple bicycle is the Apple user.
They know how to ride it well.
And how to take the Money.
Generative AI is a dud. ML has a lot of applications.
I use my smartphone for calls, chat and banking apps.
I use my iPad only for drawing, mail and some games.
For everything else, I have computers. With a real OS.
Well - while Apple has made rough starts in the past (Maps on iOS devices comes to mind) - they do have a solid track record.
Having used "smart devices" since the Apple Newton 2.0 days, followed by Windows Mobile, a very brief Android excursion (Motorola Milestone - early enough Android that I was often frustrated trying to copy/paste text between apps), then another side-pivot into Windows Phone for awhile (mainly because the development was incredibly easy - and Microsoft gave me a free one), I have been in the iOS mobile phone ecosystem ever since the iPhone 6.
And - the software has gotten increasingly better over time - I wouldn't have (for me) a lot of content/subscribers on TikTok if iMovie on my phone did not exist - attempting to edit videos using OpenShot was taking forever (while I have DaVinci Resolve installed, it seems "daunting" for someone who doesn't want to be a professional videographer/editor).
But then I tried iMovie "Magic Movie" on my phone and ... "it just works". Still not great for long-form YouTube style content, but for quick things, slice-of-life videos - it does the job rather well.
... I expect that Apple will improve these AI offerings dramatically over the next couple of years as people upgrade their devices.
<glances at recent story reconfirming Apple as the most highly valued private company the world has ever known>
This is sardonic, as yes, Apple could have chosen different monopolies than it currently has, at different points, and had a different (maybe not better?) trajectory. Some of us are old enough to remember the antipathy towards Microsoft when it added Explorer as a default alongside the Office suite.
But also, maybe our system fundamentally rewards the "wrong" things if one's definition of "right" includes things like innovation. Or maybe, the welfare of the commons and the common good.
I wish they'd sort out the rendering of road names (and this isn't specific to Apple, mind) - they're still seemingly stuck in the olden times rules for rendering street names ("only put a road name if the road is wide enough and only every N inches and starting at M inches from a junction") rather than "can we put a road name on this road that's visible on screen without it going over something else?" which would be 500% more useful.
My hypothesis is that certain products need users and feedback to be good. Maps is one of those, hence why they had to release it in a ‘bad’ state. Apple AI I think is another such product.
Apple needs to review whichever firm they outsourced review of Maps locations to, because my report got my local hackerspace marked "permanently closed" when all I did was correct the address, map pointer, and capitalisation of the name.
I haven’t had any issues with the Maps review. Seems like perhaps you submitted a change that was normalized as an entirely new business due to having a different address, location and name. Have you tried to zoom in and see if there’s a new marker with the info you submitted?
You can imagine, if a business has changed its address, location, and name, that users would appreciate a “Closed” pin for the previous name and location instead of wondering what happened to the business that used to be there.
> Seems like perhaps you submitted a change that was normalized as an entirely new business due to having a different address, location and name. Have you tried to zoom in and see if there's a new marker with the info you submitted?
I agree. I think we believe that Apple's days of long-term skunkworks development are over... I don't think it's as dramatic as, say, the years since the PA Semi acquisition, or the "secret" Intel port, but they do some long-term planning.
(Apple Originals, their production house, is also an example. A huge bank of original prestige TV, subsidized by iPhones... they're just still finding a way to market it.)
Though, in fairness, I don't find any of the voice assistants very useful. Siri is probably not quite as good as Alexa though. I mostly care more about Siri because I use CarPlay when driving.
Years ago I remember a detailed comparison of Apple and Google maps, showing a lot of flaws with Apple Maps around contrast, lack of detail, misleading iconography, and other issues.
Has it improved that much? Does anyone remember what I'm talking about?
That's Justin O'Beirne's site. It has mostly improved but he wasn't really that negative on it. He just used to work there so he was being extra critical since he knew them.
On the contrary he often seemed undeservedly uncritical when he talked about Google, like going "wow this is so detailed, they must be super geniuses who did this with computers" about something like POI locations they'd actually done by hiring a ton of contractors to do by hand.
It still sucks. No amount of fancier graphics can make up for their lack of ground truth in terms of opened and closed businesses. I just spot-checked the newest cafe in my neighborhood, which opened 3 weeks ago, and it's still not present on Apple Maps, and another place that closed months ago remains on Apple Maps. It is a demonstration of the fact that you can abuse your monopoly to push a third-rate product on 10% of your users.
Google Maps does seem to be more complete with respect to businesses although I prefer Apple Maps for in-car navigation. OSM has both way beat with respect to hiking trails and the like.
In your other comment [0] you mentioned you also updated the name and location of the business. I’ve never had a single issue with Maps review and I find my corrections are usually accepted in under a week.
This is because a) Google is a data-harvesting company and Apple is not, and b) the vast majority of businesses only update their info on Google due to market share (if they update their info anywhere).
You can verify this for yourself by looking up a business on Apple Maps and seeing if there’s a “Claim this Place” button.
It is very easy to submit corrections to Apple Maps and they usually accept them within a week.
For me personally, I would much rather use a superior maps app for maps, and use the data harvesting/advertising company’s website as a business directory, or ideally get the hours directly from the business’ website or social media profile since, like I mentioned previously, they often fail to update their info even on the data harvesting website.
Big business hours have always been reliable for me in Apple Maps.
I always call small businesses to ensure they are open. Why trust a small business operator to update Google/Apple in real time when I can spend 20 seconds to press the listed phone number and confirm it?
The small business has even more incentive to keep it updated. I don’t speak the language of every place I go and it’s a huge waste of time to hope they’re open. And updating business hours might take a few minutes, but fielding phone calls takes a lot more effort and time. If I have to call a business to find out basic information, I’m not going there.
They have the incentive, but not the technical capability or trust for line level staff to be able to login to the business’s Apple or Google account and change the hours.
> They have the incentive, but not the technical capability or trust for line level staff to be able to login to the business’s Apple or Google account and change the hours.
> If you have any modicum of site reliability experience, this seems like an unsatisfiable set of constraints. It seems literally impossible, yet here they are claiming that they have done it.
His first instinct was right. It seems impossible, because it is. Unless I can run the entirety of "Private Cloud Compute" on my own hardware in my own firewalled network, I 100% believe that the pipeline is compromised; our data is siphoned off and sold to advertisers, especially now that they know they can do it and get less than a slap on the wrist: https://news.ycombinator.com/item?id=42578929
> Hell, the iPhone is a fully capable cinema camera these days
No, it's not. The sensor in an iPhone is AI/ML'd up the ass to hide all the noise because it has 1µm sensor wells.
A Panasonic video-oriented mirrorless micro 4/3rds (so not even anywhere near 35mm) like the GH5 is 3-4x that.
A Sony Alpha 7 III? six times the sensor well size.
I don't care how many megabits of video bandwidth you throw at it or how fancy you think "raw" shooting is, or how fancy your sensor technology is; nobody these days has anything that is even close to 2x better than anyone else. The top sensor from all the major players are pushing the limits of physics, and have been for a long time.
No amount of AI/ML shit will give you depth of field and bokeh that looks as nice as a big sensor and a fast lens with nice shutter leaf shape.
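To put rough numbers on the pixel-pitch comparison above: pitch is just sensor width divided by horizontal pixel count. The sensor widths and resolutions below are approximate and from memory, so treat them as assumptions rather than spec-sheet values:

```python
# Back-of-the-envelope pixel pitch: sensor width divided by horizontal pixel count.
# Figures are approximate and assumed; adjust for the exact sensor you care about.
sensors = {
    "iPhone main camera": (9.8, 8064),   # ~1/1.28" sensor, 48MP, ~8064 px across (assumed)
    "Panasonic GH5":      (17.3, 5184),  # Micro Four Thirds, 20MP
    "Sony A7 III":        (35.6, 6000),  # full frame, 24MP
}

for name, (width_mm, px_across) in sensors.items():
    pitch_um = width_mm / px_across * 1000
    print(f"{name}: ~{pitch_um:.1f} µm pixel pitch")
```

With those assumed figures the GH5 lands around 3.3 µm and the A7 III around 5.9 µm, which is where the "3-4x" and "roughly six times" comparisons come from.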
I’m withholding judgement on Apple Intelligence until iOS 18.4 is shipped in May. That is when they plan to release a revamped Siri with better contextual and personalized responses. For instance, AI/Siri will be aware of what is currently on the screen when responding and also integrate personal data across apps.
Ultimately Apple’s strategy of a privacy focused AI will be a winner for a consumer device with access to sensitive personal information. It’s a question of whether they can pull it off technically.
>Then they casually dropped the holy grail of trusted compute...
By which I think he means the AI stuff runs on your machine rather than in the cloud. For me that's not a holy grail at all, or even something I'm terribly interested in. I downloaded Apple Intelligence on the MacBook, found it quite meh, and am now seeing if there is a way to remove it as it uses quite a few GB of memory. I can see how, for someone wanting to use LLMs on confidential corporate data, that would be important, but that's a specialist use case that I don't think Apple Intelligence is particularly good for.
That corporate context is a good candidate, yes, but I think it's simpler than that. Assuming whatever Apple and Google cook up these next few years is essentially identical from a user standpoint, you can assume that Google's will be selling off every microscopic datum they collect from you, and know for a verifiable fact that Apple's will not (and cannot).
This is very funny. The picture of Stalin and Yezhov was exactly what was going through my mind as an Apple Store sales guy explained the photo clean up feature to me. It felt rude to bring it up though.
I find it super interesting that Apple is x-raying the PCBs for their compute nodes. Guess they took Supermicro inserting malicious devices into their servers very seriously.
The whole episode is fascinating to me in that Bloomberg is a reasonable quality news organization and something obviously convinced editors there to stand their ground in spite of no obvious (presented) evidence. I agree though that absolutely no proof has come to light which makes me seriously question the whole thing.
Apple shipped “Apple Intelligence” before Apple even invented the term.
Before the AI craze you could search Photos on iOS based on content and metadata. You could lift subjects off photos with a long tap and copy them, recognize faces and make montages based on inferred relationship with them. And all of this is done on-prem, on your device.
These are very subtle, nice features. Apple had to put a name on all these otherwise there would be no marketing material.
Yes, Apple does a lot of feature-related marketing these days, which changes their naming priorities.
I remember reading it was a post-Jobs transition thing, where the key message about products transitioned away from vibes and overarching slogans ("shuffle", "the internet computer") to features ("iPhone X", "iPhone XS", "iPhone XR", "iPhone XS Pro (??)").
I'm sure that a lot of the old-guard exec team knows it's a loss... I'm just not sure how they feel about it or why.
This is true. Craig Federighi and others have said in the past that they’ve added machine learning based features for a long time in the OS (with examples). It’s just that generative AI took off very quickly and now some people are imagining that it’s the only (or major) AI.
They have now shipped half-baked features, released only to fill "Apple Intelligence" with something, that they likely wouldn't have released in their current state otherwise.
Is Microsoft that strong? They've got a stranglehold on medium to large businesses but that's about it. Very few people actually want to use their products, they just think they have to...
But yes, it's a great time for startups. I'd argue it always is and always has been.
All the companies the parent was talking about have similarly massive valuations, yet none seem to have unassailable positions.
Which is what the whole thread seems to be about (Apple squandering an opportunity and naturally others who are doing the same), not their current market cap.
There are a very large number of steam games that were written for Windows. There are similarly large numbers of commercial products with value for particular companies.
Could they be emulated? Sure. Maybe not 100% (see Linux), but mostly yes. But then you have to make that work, ensure that the emulations keep working, etc.
That is the real wall around Windows. Office has similar walls -- large numbers of spreadsheets, for example, many of which are critical and which do complex things. There are lots of programs that can read Excel spreadsheets, but perfect compatibility is difficult.
And there are lots of people who know these products -- re-educating them is a secondary wall, because it represents a lot of work for the customers.
Oh I know all about the Excel wall... It's hard to convince boomers there's something better because it's all they know but when I was in university most of my professors only accepted Google Sheets/Docs documents lol.
Microsoft cloud syncing is absolutely atrocious and when people pass around Excel spreadsheets they inevitably get messed up or half the information is lost because there's no single source of truth that everyone adds to. One can argue Google Docs probably isn't technically better but collaboration is 100x easier.
> large numbers of spreadsheets, for example, many of which are critical and which do complex things
And which all need to be rewritten into database backed apps, IMO.
> There are a very large number of steam games that were written for Windows. There are similarly large numbers of commercial products with value for particular companies.
> Could they be emulated? Sure. Maybe not 100% (see Linux), but mostly yes. But then you have to make that work, ensure that the emulations keep working, etc.
Wine and Proton do an excellent job at emulating to keep old binaries alive.
For new apps, Android and iOS are now enormous markets. Consoles are huge. Windows gaming is big enough, but I don't think targeting only Windows is worthwhile. Is it really more difficult to use SDL + Vulkan (or insert any other multi-platform graphics API) versus Windows APIs + D3D12? When everyone is building for multiple platforms it makes that moat a lot thinner...
For me, the success of the Steam Deck shows that "desktop" Linux can be successful, can be used by the masses. Game companies are even tweaking their games to work better on Proton or straight up porting them, very few are philosophically Windows-only.
And remember how Android absolutely destroyed Windows Phone even though Microsoft bought the largest cell phone manufacturer in the world... Not saying it will happen to Windows but I think it's a possibility...
You might see a need, but it isn't going to happen, especially in small and medium sized businesses.
Wine and Proton work for many things, but far from everything. Plenty of games don't work well on the Steam Deck.
Ergo, Windows isn't in any near-term danger. As far as Android and iOS being markets, sure. So what? It doesn't threaten Microsoft that there are additional markets.
> Plenty of games don't work well on the Steam Deck
Are you just referring to the games with anti-cheat? If so, I do agree, though I think with the success of the Steam Deck the anti-cheat providers are (or better be if they don't want to get their asses kicked) going to be looking seriously into options.
Outside of anti-cheat, I've yet to find a game that doesn't work on Steam Deck. Even the ones with the worst ratings will usually launch and you can play if you plug in a mouse and keyboard. Obviously not a great experience, but those games would have the exact same problem on any PC, Windows or Linux. It just happens that most Windows PCs have a keyboard and mouse already plugged in.
Well of course a giant company like this is probably not in any near-term danger... it isn't impossible though: companies that get too big can be split up by force - look at Standard Oil or Bell.
But longer term nothing is certain: look what happened to IBM or typewriters.
Apple wishes they had anything close to Windows (or Android's) market share. Yes, people want PCs because a lot of software just won't run on Apple hardware, and Apple hardware is too expensive.
Yea I mean they are financially doing better than any group of companies at any time in human history, kind of like the exact opposite of this downhill claim
Even with Google's monopoly legal issues, they are more valuable than ever.
Outside of various bubbles, Microsoft still dominates desktop computing and Azure has pretty strong market share itself. I actually find it fairly remarkable that, in spite of the Windows OS not mattering as much any longer--especially on the server--and Microsoft absolutely tanking in mobile, the company is still very strong and relevant.
Given the lack of a single good supporting example (what, did PCs have no word processors that reacted to backspace keys?) it seems like these are fantasy bicycles...
And since no evidence is needed to believe, you can of course believe in Intelligence that can act better than your brain (what are "those pics from San Francisco", you've snapped a hundred there, which 5 would you like to post?)
And yet the disillusionment comes a bit faster than expected, why not give Apple a few more decades to iron out some kinks on such a revolutionary fantasy path?
The author has strong opinions about where GenAI should and shouldn't be used, e.g. they don't like the feature that removes a person from a photo. That's respectable, but I can see many people feeling differently.
Apple was always going to fail at this, and even more so going forward.
LLMs are built on data, and copious amounts of it. Apple has been on a decade-long marketing campaign to make data radioactive. That has now permeated the culture so much that Apple CANNOT build a proprietary, world-class AI product without compromising on their outspoken positions.
It is a losing battle, because the more Apple wants to do it, the more users are going to punish them, and meanwhile other companies (ChatGPT, Anthropic) are going to extract maximum value.
All the LLM advances these days are from synthetic or explicitly created data too. You need public data mostly because it contains facts about the world, or because it's easier to talk about a book when it's "read" the book. But for a known topic area (as opposed to open Q&A) it's not critical since you can go and create or license it.
No, Apple’s privacy stance is about giving users control over data in ways they understand. Posting on Reddit or Arxiv is not a blank check to have your words be reused for LLM training, even if it’s technically public.
Apple’s slogan is “what happens on your iPhone stays on your iPhone”. I think “I published a paper” or “I posted on Reddit” are clearly out of scope - those things are happening in public.
I'm still having to look for a Lightning cable all around my house to charge that one freaking device, the iPhone, when everyone else switched to USB-C 10 years ago.
How is Apple Intelligence going to help me with that?
I'm just astonished, reading this article, that Apple can spend a huge amount of money and a lot of time making a private cloud LLM yet can't be arsed to implement simple things like USB-C without literally having Europe force them to do it.
I feel like they have a plan, get the backend up to scratch to appease the tech people and they're leaving the ways to interact with it purposely vague and incomplete for those non-tech folk.
They don't want to scare any part of their audience away from future uses of Apple intelligence. Their audience is tech and non-tech folk alike.
If the tech folk say it's safe and the non-tech folk get comfortable with the basic AI features then they're onto a winner.
How many people's parents/grandparents have iPhones because they're simpler for them to understand, and are also scared of or don't understand this 'AI thing'? I think Apple have been quite savvy in introducing it slowly and are probably watching the metrics like a hawk!
I suspect image playground is so creepy in an attempt to mark the images as clearly AI generated when they get posted to social media?
I think it is concerning that every single Apple Intelligence feature they've shipped thus far has been not just mediocre; but bad. Being last to the party is a very normal Apple thing; quality and Doing The Right Thing takes time. Announcing something then taking months to ship it is very not-Apple, but it has happened a few times. That thing they finally ship being bad is, geeze, horribly un-Apple.
One of the few examples I can think of however is Apple Maps. And it did get better; a lot better, some say better than Google Maps nowadays. So I generally do have hope for Apple Intelligence. At the end of the day, there are some disparate competing utilities in this class on the Samsung and Google phones, but no one is shipping something that is obviously game-changing and in first place; they all kinda suck, they're all tech demos, and it'll inevitably take many years to get this technology honed in to something that is truly useful to consumers.
There is a lot in the Apple universe that is shoddy. iTunes, for instance.
iOS has a refinement that Android lacks but I am unimpressed with MacOS. Windows is stuffed full of terrible crapplets and Windows users largely recognize that these are terrible crapplets and don't use them. Apple users have a fixed belief that everything Apple does is brilliant and fashionable so they do use them which has a deadly effect on the market for third-party software. (No good music players for MacOS for instance)
Even Apple fans lately claim it's been getting worse in the last few years.
(That said, I love the innovation in the M-series chips from Apple just as much as I appreciate Microsoft's commitment to the long-term viability of Windows for all of us who invest in it. Occasionally at work we still use Access '98 to handle old files and it works great, the installer works great, in fact Office still tries to take the desktop over the way it did back in the day. Clippy still works. The borderless windows look just a little funny because the compositor changed. No way you could run Linux binaries or MacOS classic binaries from '98)
> No way you could run Linux binaries or MacOS classic binaries from '98
The key problem for desktop Linux is that nobody knows exactly how to build binaries that will run on any reasonable Linux desktop system today, so it's hard to keep that non-existent reasonable subset of ABI stable for an extended period.
That said, you CAN do this. The kernel itself does present a mostly pretty stable ABI to userland applications, so you can grab a Debian chroot from 1998 and be on your way. Debian even still serves repositories for everything on archive.debian.org, and Dockerhub has OCI images you can `docker run` for Debian from 1999, under the debian/eol repo. You can `docker run` and `apt-get install` 25+ year old binaries on modern Linux!
What would be sweet is if we could build and ship compatibility tools that make these old binaries work mostly transparently. Today, double clicking a binary on Linux won't do anything particularly sophisticated, and there are no compatibility options. But actually, it would be totally doable to write a variety of useful compatibility shims without doing anything horribly grotesque. The DT_INTERP and DT_NEEDED fields of binaries would often give sufficient information for how you might get such a binary to run. It's not like it would be that useful, but I would personally be very pleased if you could just double click e.g. some old Kylix application and have it just run, perhaps after downloading some (shims for?) old libraries. You could extend this to transparently running CPU emulators too, not unlike the tricks people do with binfmt_misc, just possibly with more batteries included (and a bit less transparency.)
Another really great feature would be useful error messages when executing an application fails. Today if the DT_INTERP is missing, it looks like the binary itself can't be found since it returns the same errno, and you won't see linker errors if you execute a file in a GUI file explorer. What a great improvement it would be if all of that could be fixed, and there is no technical reason it can't be.
Of course, frustratingly, for more reasons than just this, the more likely thing to happen is that nobody bothers since containers are the future anyways, and Win32 instead becomes cemented as the true stable ABI of Linux. Which, in my opinion, is a bummer. We could always have two stable ABIs of Linux...
The real bummer is that, as IBM discovered with OS/2, when your ABI is a copy of someone else's, everyone would rather use the real thing.
Hence, the irony that WSL is the actual Year of Linux Desktop, followed by the macOS Virtualization framework.
OEMs would rather ship crippled Chromebooks, or Android, than proper GNU/Linux (or BSD) devices.
So Linux rules where UNIX was originally designed to live: headless computing and timesharing systems.
I'm happy for WSL2 users that are getting what they want, but I don't even particularly care about the things WSL2 brings to Windows, what keeps me from using Windows is just Microsoft.
I have been thinking about the parallels to OS/2, though, and I really do wonder if it's going to go that way. Much like debates about which economic systems are actually viable, there's no real reason to believe aping someone else's ABI can't work other than that it didn't in the 90s. But boy, the game sure has changed a lot, and I'm not so sure it will play out that way anymore. While Valve has been shipping the Windows ABI on Linux commercially, the way they've been doing so is definitely a bit different than how it was done in the past. So far it seems like they're actually succeeding, and the question is somewhat more of how much they can succeed with it.
> Apple users have a fixed belief that everything Apple does is brilliant and fashionable so they do use them which has a deadly effect on the market for third-party software. (No good music players for MacOS for instance)
Many of the long-term users of macOS/OS X/etc. are highly critical of its downfalls, but still use it because, of the available options, it's the one they prefer. Myself included. You can use something while also being aware of its shortcomings.
> (No good music players for MacOS for instance)
Have to strongly disagree on this point. Cog¹ is my music player of choice on macOS; not only does it have a clean GUI, but it supports almost every format² I’ve ever wanted to listen to audio in, including game music in formats like GBS (Game Boy Sound System) and 2SF (Nintendo DS Sound Format).
――――――
¹ — https://github.com/losnoco/cog
² — https://cog.losno.co/
+1 for Cog, it's pretty awesome to be able to play N64 music files natively!
Vox is also pretty great for FLAC
https://vox.rocks/mac-music-player
Vox has a UI reminiscent of a Windows XP skin and gobs of macOS oddities. I feel like they're still trying to catch up to macOS UI changes from 5 years ago. Don't get me started on how inaccessible the entire interface is... They have Tab bound not to focus changes but to switching the, yes, tabs...
iPadOS is truly terrible and wastes the amazing hardware of apple silicon iPads
Hot take.
I was unhappy with the state of music players on macOS too so I wrote my own:
https://www.plastaq.com/minimoon
There is a point where supporting legacy software stops being impressive and just starts to be counterproductive. I'm glad my Mac doesn't support Mac OS 8 software anymore, especially since it means I can use a much faster processor architecture.
These things are by no means mutually exclusive.
Apple could easily spend a few million to support some old hackers keeping ancient software alive; they choose not to because they either don't care, don't know they can, or don't think it's profitable.
They certainly have an effect on one another. You can't just say "old hackers!". ALL codebases are tied down by legacy design decisions after a long enough period of time. Microsoft's commitment to backwards compatibility has inarguably resulted in Windows flaws hanging around for longer than they'd otherwise need to. The argument is about whether or not it's worth it.
You can’t throw all your engineering knowhow out the window just because you’re discussing something politically charged. This is simply how code works.
I don’t understand your argument. There are (presumably) basement dwellers keeping ancient gaming console emulators working on modern platforms with little more resources than free food from their mothers.
> No way you could run Linux binaries or MacOS classic binaries from '98
Can you give me a few examples of Linux binaries from '98? I would like to give this a go; I think I have a pretty reasonable way of achieving it.
I'd just use an old game (for a really hard test). Like Quake maybe: https://github.com/Jason2Brownlee/QuakeOfficialArchive Or the server for easy mode.
Let us know how it went. ;)
Games of that age are probably getting well into the era of "requires SVGAlib, must run as root". They haven't aged well.
No, most of them run in X-Windows, using protocols that are still supported even by xwayland, because they weren't originally written for Linux (or IBM-compatibles) at all. Some of them fail to cope with TrueColor visuals but most are fine. You can get old source code from http://www.ibiblio.org/pub/Linux/X11/games/. Binaries are maybe trickier; maybe start with http://archive.debian.org/debian-archive/debian/dists/slink/...? Most of those are going to depend on old shared libraries for things like Xlib, though. (Notice that half the game names start with "x"!)
Debian Slink was formally released in 01999, but most of the packages in it are from 01998. I just installed xkobo from http://archive.debian.org/debian-archive/debian/dists/slink/... with `sudo dpkg -i` but now it wants i386 versions of libc6, libstdc++2.9, and xlib6g. ldd says:
If anyone is adept enough with Debian to explain how I can do this without breaking my amd64 install, I'd be obliged. (Maybe I need to debootstrap in a chroot or something? It's probably still possible to download slink CD install images.)
Ah, most of what you've got there looks like small "desktop toy" games, like minesweeper games and whatnot. I was thinking more along the lines of full-screen, possibly 3D, games of the sort that got ported by companies like Loki back in the day. Those took a long time to become runnable under X11.
> Maybe I need to debootstrap in a chroot or something?
Probably. Even then you might run into some compatibility issues - IIRC, some of the really old code paths used for system calls (like the vsyscall DSO) have been removed in modern Linux kernels.
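For what it's worth, here's a rough sketch of the chroot route. It's hedged: modern debootstrap may no longer ship a script for a release as old as slink, the archive.debian.org mirror path and the debian/eol Docker tag below are assumptions you should check, and you'll need 32-bit (i386) execution support on an amd64 host.

    # Sketch only; release names, paths, and tags below are assumptions.
    sudo apt-get install debootstrap
    # Bootstrap an i386 slink tree from the Debian archive (if your
    # debootstrap still knows how to handle a release this old):
    sudo debootstrap --arch=i386 slink /srv/slink http://archive.debian.org/debian/
    # Then run the old binaries from inside the chroot:
    sudo chroot /srv/slink /bin/bash
    # Or skip debootstrap entirely and use the archived Docker Hub images
    # mentioned upthread (check which tags the debian/eol repo actually has):
    docker run --platform linux/386 -it debian/eol:slink bash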
xkobo in particular is a twitchy 2-D bullet-hell shooter, and the current version of it (kobodeluxe) is built with SDL and supports fullscreen mode. But I'm pretty sure xkobo did run in a window. Certainly the other games there I remember (xjewel, xgalaga, etc.) did.
I'm not familiar with Loki's work at all, though the name seems familiar.
I think the removed-code-paths thing is mostly an issue with libc5, isn't it?
I ran into a problem with really old code paths in August when I tried to compile PFE 0.9.14 (a Forth implementation, not a game) from 01995; it was trying to call `uselib`, which I think has never existed on amd64.
FWIW `ldd` reported that xkobo had successfully mapped "linux-gate.so.1 (0xf7f3c000)" (with no filename); as I understand it, this is the vDSO that replaced the vsyscall mechanism, so at least for stuff built for Debian Slink I don't think that problem in particular will occur.
Which games ported by Loki didn't require X11? I remember playing Quake 3, Unreal Tournament, and Majesty on Linux in mid-00s, and they were all X11 ports.
> Apple users have a fixed belief that everything Apple does is brilliant and fashionable so they do use them which has a deadly effect on the market for third-party software. (No good music players for MacOS for instance)
Nah, Apple users knew from the beginning that Siri sucked and still sucks. Almost no one I know uses Siri except for setting alarms and asking for weather forecast.
If I’m ever feeling too peaceful or content, I’ll ask Siri something. In the blink of an eye, I’m enraged and cursing.
I use Siri for dictation a lot. Don’t have to worry if I have an internet connection either.
But it fails in the car. Too much noise. And that's where it would be the most useful.
To be fair, Google Assistant started out better but now has just been neglected and actively undermined and had features taken away. My understanding is the whole team was folded into Gemini and it's a classic "old thing deprecated new thing not ready yet"
I don't do the Amazon ecosystem so can't say about Alexa.
There was a period where Google Assistant was so useful. It could control phone functions, set events, reminders, and alarms, change volume, reliably message someone. It's wild to think through all the things it used to do well that it simply cannot do today.
I thought I was alone here. Apparently not.
> (No good music players for MacOS for instance)
What are the good music players off MacOS?
I figured the entire field had withered and died from lack of public interest.
foobar2000 (Windows only), Elisa, Amberol, Exaile, Tauon, Sayonara...
I didn't mention Clementine because it's on macOS too.
foobar2000 is actually available for Mac.
But Clementine is dead. Not only is Clementine dead, its predecessor Amarok and its successor Strawberry are both dead too.
Please don’t spoil this enjoyable nuanced conversation about Apple’s flaws with the usual laundry list of, frankly, unintelligent copy-pasted 2000s Mac vs PC online forum flame war talking points.
You've referred to iTunes in the present tense when it hasn't existed for years, referred to 'Apple fans' as some sort of completely separate group of clearly defined people, and spent an unjustifiable amount of your comment talking about your quite niche professional Windows backwards-compatibility use case.
We don’t need to rehash this whole thing. Please. Don’t take all the oxygen out of the room.
> iTunes, for instance.
assuming you mean on macOS, given your comment after this. (there is no iOS app called iTunes anyway)
yes, on both iOS and macOS, Apple intentionally hobbled iTunes/Music, incrementally making it worse each update, after Apple Music gained an initial foothold.
not that it was ever "great" after 1.0 maybe 2.0, but it certainly used to be good. Now, if you're not an Apple Music subscriber, you're left with a pretty basic player. I get that they want to segment the market, but to remove features and actively make it worse? horrible.
Mentioning "iTunes is bad" is like a trigger word for me because it's so misinformed at this point.
For one thing, the iTunes name doesn't technically exist anymore except on Windows. And anyone complaining about it being bad on Windows...I mean, that's like complaining that Microsoft Remote Desktop (Now called the Windows app for some reason) sucks on Mac, right? Like, can we just put the Windows version aside please? Even then, I'm not really sure what specific thing iTunes for Windows sucks at besides not looking like a Windows app. People just say that because they were saying it in 2005.
On Mac, the Music app (not to be confused with the streaming service) is fantastic and has supported Apple's "classic" digital music workflow longer than anyone else has been willing to support their users. The Apple TV app (again, not to be confused with the TV+ subscription service) is now the home for the movie/TV show store/rental place and the home of your TV/movie library, which is a big improvement over shoving that functionality into iTunes. In that sense, Apple has cleanly separated use cases and functionality in a way that iTunes didn't previously, which is one reason why a lot of people said "iTunes sucks."
I have a family member who recently switched to Android because of frustration with Apple as a whole. They are a big digital music collector, they don't believe in streaming or "renting" their content.
I tried to help them with their music collection on Android. Theoretically it should be easier right? No weird restrictions on sync direction, basically dump your stuff on an SD card/transfer over USB-C and you're off to the races.
But still, they switched back to Apple secondarily because it's the only place left that actually makes that "purchased digital music" experience user-friendly, or possible at all. (Primarily they switched back to iPhone because the modem in their Google Pixel sucks and/or is poorly tested with their major US carrier and would drop international calls every 15 minutes exactly for no reason)
Google Play's music store doesn't exist anymore. Every jukebox app on Android depends on 100% manual file management. None of them have the polish of the Music app (the app not the service). Almost none of them have decent jukebox companion apps available on desktop computers. A whole bunch of other digital music stores have closed entirely.
Apple's system for synchronizing content is actually pretty amazing for continuing to support an offline cloudless workflow. You still just hit one button/plug in your device to sync your music, movies, audiobooks, ebooks, and photos content. It also supports WiFi syncing, and it furthermore supports every iPod that ever existed so long as you have the right cable/adapter.
You can back up your iPhone's full image to your computer if you don't want to use iCloud backups just like it was an iPod. You can synchronize your Photos library and avoid iCloud storage fees, deleting synchronized photos from your phone to free up space to take new photos and videos. It works just like you were using a digital camera in 2005. Yep, you can still rip and burn CDs!
Furthermore, the way Apple moved device synchronization functions to Finder and split out Music from Podcasts and Audiobooks is helpful for organizing the whole process. It used to be that iTunes was the home for all this synchronizing of non-music-related content, but now it more sensibly exists in Finder.
I think a lot of people don't realize that Apple basically still allows you to send over personally owned non-DRMed or even pirated content to Apple's own modern apps very easily this way, you just have to be willing to synchronize using "the old way" like your iPhone is an iPod. They've even kept ancient hosted services like iTunes Match going just in case you still need that sort of thing (it essentially allows you to sync music to your iPhone that is either pirated or not part of a known label music catalog via a cloud service rather than having to do a local sync via cable or WiFi).
And this workflow is very simple for non-technical users who don't really know how to traverse complicated file-management structures. Yes, I would really like it if apps like Photos were more flexible about file management, but on the other hand, if you follow the prescribed workflow the results are quite user-friendly for someone who really doesn't want the cloud but also can't handle setting up a home NAS. In this use case you get a reasonable photo storage system by syncing your device and then backing up your computer in a relatively hands-off manner using Time Machine.
One final point here is that Apple Music the subscription service can be hidden entirely from the app. Apple will just give you a 100% owned music jukebox app. Google doesn't do that, and with Microsoft you're probably using a legacy app like Windows Media Player that looks like it belongs on Windows Vista.
Microsoft Remote Desktop is 100% great on iOS (if not macOS) in my opinion. I never feel so stylish as when I show up at a hackathon with a tablet plus a $20 Bluetooth mouse and a $30 Bluetooth keyboard (how did they convince people to spend a few hundred on a special keyboard, or to buy a 'hybrid' computer that will leave the airline stewardess at a loss to know whether you can stuff it in the pouch in front of you?), and the MacBooks and gaming laptops look clunky in comparison. And that's backed by a 16-core machine with 128GB of RAM and a 4080 if it's my home machine, and I can rent something much larger for a few dollars an hour in the cloud. My only beef is they want to call it the "Windows App" now.
(At least the Apple AAC encoder is good. That plus a Python script can copy music files to a USB stick in the right order so they display properly in the music app for my car; a sketch of that idea follows below. And that's what a good music app is to me, not something that wants to push me to buy a $1000 phone and a $100-a-month plan so I can crash my car screwing around with my phone.)
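Here is a minimal sketch of that copy-in-order step, written as shell rather than the Python mentioned above. The key assumption (which depends on your head unit) is that the car stereo plays files in the order their FAT directory entries were created, so copying in sorted filename order fixes playback order. The paths and the file extension are placeholders.

    # Assumptions: tracks are named with sortable prefixes ("01 - Intro.m4a"),
    # the stick is mounted at /Volumes/CARSTICK, and the head unit plays files
    # in directory-entry order rather than alphabetical order.
    SRC="$HOME/Music/car-mix"
    DST="/Volumes/CARSTICK"
    find "$SRC" -name '*.m4a' | sort | while IFS= read -r f; do
      cp "$f" "$DST/"
    done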
That tablet is $500+. I bought a 2019 Dell Latitude for $140. Not as nice, but I don't have to remote in for anything. And it's fully supported by Linux.
> Every jukebox app on Android depends on 100% manual file management.
You've got some great points in there, but as for this one, file management is one of Apple Music's weaker points. I absolutely hate apps that graft their own "library" over top of my already-working filesystem. As someone who's meticulously laid out the directory structure for all of my movies and music, I'm paranoid that some opinionated software is going to just go run roughshod over it, moving things around the way they think files should be organized. I already have a "library". It's an NFS mount on my NAS.
Fortunately, Apple Music still allows you to disable this misfeature, but to do it you have to go into Settings and uncheck a bunch of things. Easy to forget.
> that's like complaining that Microsoft Remote Desktop (Now called the Windows app for some reason) sucks on Mac, right
Well, not quite, since RDP on Mac still works better than the native (VNC-based) desktop sharing feature.
> On Mac, the Music app (not to be confused with the streaming service) is fantastic
Strong disagree. I find Apple Music (the app) on MacOS to be terrible.
A good half of the main screen is taken up by a view with a random mix of artwork from the playlist. I find it useless, and it can't be hidden. Also, there is no way to set the default view to just show songs instead of the crappy "playlist view".
Search. It's hidden on the sidebar of playlists/sources and you have to scroll to the top to get to it. And then, the choice of whether to search local/Apple Music is on a toggle button on the other side of the screen.
Lyrics - you can't change the font, or adjust the size in normal mode, and when played in fullscreen, the background colours often obscure the lyrics so they're unreadable.
And finally Apple can't seem to decide between a Heart or a Star for songs that you love.
yep. Music is bad on all fronts. It's designed to push you into an Apple Music subscription and that's it.
My favorite is that for ~8 years after they killed their theater movie showings service at movies.google.com, that subdomain still pointed to the deprecation notice. They finally must have noticed earlier this year, because it changed to a hacky redirect to https://www.google.com/search?q=movies and now much more sanely points to the Youtube movie storefront. (This drove me crazy for years because I'd always instinctively type in movies.google.com thinking it'd intuitively take me to the Play Store... I was so excited when it finally changed off the deprecation notice.)
FWIW iTunes on Windows has finally been replaced by a true Apple Music app.
The iTunes functionality carried over to Apple Music is just as slow and confusing and non-portable as it ever was, now with added cloud sync that will mysteriously tell you that anything it doesn't recognize isn't available in your region.
The streaming bit may be good but the rest is not.
There's something I must not understand about the Music app because when I drag & drop files on it (or let's get crazy, a folder), it does not read them or even add them to the current playlist?
Also, flac?
> On Mac, the Music app (not to be confused with the streaming service) is fantastic
I’d love to live in your alternate reality, not in mine where Music.app is slow, doesn’t do filtering to find specific content very well, doesn’t let you view album covers in a reasonable size, and shortcuts and buttons are inconsistent with the rest of the OS.
Also, syncing (about 350GB of) content to my iPhone has been hit and miss for at least 9 years now, where consistently the same tracks just disappear from the phone and maybe - just maybe - eventually get synced again, taking a few hours in the process. This has been going on across at least three Macs and about six iPhones.
I understand that streaming via Apple Music is the thing now, and us users from the “Rip, Mix, Burn” era are considered legacy now. I’d love to switch to something better, but haven’t found anything yet.
I love the Apple Music app too but the trick is to mainly use the songs tab and playlists and enable the column browser. Also, of the current widely-used options, Apple Music is second to none at this point: old apps like Amarok were nicer but they practically don’t exist anymore. Spotify, Tidal, Qobuz, etc. are all much more annoying than Apple Music (in part because they are Electron Apps).
I do use Music.app like this already, and while it's definitely okay-ish due to lack of alternatives, it's still lacking a lot.
It has also been stagnating for at least 10 years without any changes - apart from making the UI less consistent with the rest of the OS (e.g. "Reveal in Finder" being ⇧⌘R instead of ⌘R everywhere else [0], or the dialog asking whether I really want to edit metadata for several files defaulting to "Cancel" on hitting "Enter", while "OK" is displayed as the default button).
I agree that it's better than the rest, but that's easy :) It's hard for any 3rd party app to compete, as us nerds with large, well curated libraries are a determined and dedicated bunch, but still a quite small market.
[0] I'm aware that this is a relic of the short-lived iTunes Ping network, where ⌘R did something there
The severe badness of iTunes on Windows halted any momentum I may have had in migrating from the Windows ecosystem towards Apple.
It was just so horribly bad. Apple's disrespect for the dominant competing operating system made Apple look incompetent. I liked the iPads until I had to work out how to transfer files on and off them, to and from my existing infrastructure. It was goddamn painful, like going back to a previous era of esoteric computer usability.
> People just say that because they were saying it in 2005.
I can confirm that QuickTime works wonders on Windows!
Source: me being amazed by it on Windows 3.11.
> One of the few examples I can think of however is Apple Maps. And it did get better; a lot better, some say better than Google Maps nowadays.
This depends on where (which country) you live. For all the ways Apple has been vocal about the Indian market and local production, Apple Maps literally sucks even in major cities in India. Google Maps is decades ahead and gets updated very quickly. Apple Maps cannot even find regular addresses or places.
Apple has its share of incompetencies and willful blind spots, and that shows up in specific areas often related to its services (Apple Intelligence is also a service). The organization and its people are not built for handling these effectively or quickly.
That said, I have more hope in Apple Intelligence improving quicker (at least in English, while competitors are already ahead in other languages, including several Indian languages) than I have in Apple Maps improving in India.
Google maps also sucked in India until a couple of engineers flew there to figure out all the idiosyncrasies of mapping/routing and spent a bunch of time implementing regionalized fixes for them. Apple expresses some very clear preferences in what regions they support well in Apple maps, which exclude most "difficult" areas.
The common wisdom is that Apple Maps works significantly better in the Bay Area than anywhere else on Earth, because the engineers file bugs they encounter on their commute.
Yeah, so much of Google's moat in mapping is just the sheer amount of human time thrown at the problem of all the little regional idiosyncrasies all around the world. Getting that right is what makes it so hard.
Who can forget the time Apple Maps took me down a road that had never been finished... that became gravel in a field... and I realized I had driven into a homeless camp. Ever seen a zombie movie where they swarm a car? It's happened to me.
While I do like Apple Maps, and I agree it has improved, it constantly gets speed limits very wrong in France (as recently as two weeks ago).
The public transport part of Maps got much better in Germany, though.
It's the same in the UK. Also I have been trying to get them to list my address for two years now. Google were able to update it but any requests to Apple seem to go into a black hole.
I can't speak to Europe, but in the US in a very rural location, I never have speed limit issues to begin with. I believe I have also submitted a couple of changes for small things, and they've all been handled so far as I can tell as I have not run into them again.
Probably typical in that Apple's services in the US are generally better than elsewhere, but just wanted to add a positive with my experience and acknowledge that it's likely better here due to location within the US.
> That thing they finally ship being bad is, geeze, horribly un-Apple.
It was actually quite common for Apple during the days when Steve Jobs was no longer at the helm; they weren't even able to create a new OS and had to buy another company to rescue them.
And had they gone with Be instead of NeXT, we would most probably be talking about Apple in the past tense nowadays.
Nowadays they might have more money than ever, but it won't last forever if they cannot do anything other than reboots of existing products.
What I want is simple:
A smart assistant, that can understand and speak to me like Advanced Voice Mode, use a vast knowledge database, is tailored to my needs and can act on my behalf.
And it would be great if it’s able to run locally.
I would say Gemini Live is getting there. It's lacking integration with NotebookLM and Keep. It would be amazing if, when I started a project conceptually and wanted to move to code, it could fire up VS Code and let me get to work.
Gemini's home automation works nicely and it can understand comments like it's too dark in here or it's cold inside and act appropriately. This is using the Android app as an assistant, not live mode.
OpenAI's implementation is apparently similar but I haven't tried the voice mode as a free user.
I haven't tried Apple Intelligence yet on my M1 and don't have an iPhone, so I can't compare.
I've been looking at offline capabilities with open weight models but they aren't there either. A full speech-to-speech model [1] working on an M1 Mac would be incredible.
[1] https://arxiv.org/abs/2410.00037
Whisper is pretty good if you take the large model with GPU acceleration. But it's not instant like Advanced Voice Mode.
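For anyone who wants to try this locally, a minimal sketch using the open-source Whisper CLI; the file name is a placeholder, the large model is a multi-gigabyte download on first run, and on Apple Silicon a port like whisper.cpp may well be faster than the stock PyTorch path.

    # Install the reference Whisper package and transcribe a recording.
    # GPU acceleration is used automatically when a supported torch backend is available.
    pip install -U openai-whisper
    whisper voice-memo.m4a --model large --language en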
How is a corporation going to profit off that?
If that is simple, start a company to build it and become a billionaire.
I don’t think any company has a smart assistant that’s reliable enough to act on your behalf except for some very constrained tasks (examples: dish washers, auto-parking cars)
> concerning that every single Apple Intelligence feature they've shipped thus far has been not just mediocre; but bad
The very initial success of Microsoft was that everything was reliably mediocre. Most things Microsoft delivered that were truly bad were fixed within a few major versions. It was a superpower.
The same model works for most purchases on a bad|average|best spectrum: we never want to buy bad, best is difficult to buy, so we settle for average quality.
Aside: I think MS has gone downhill and is now bad on multiple dimensions for me
It still really isn't that close to Google Maps; with public transit in particular, Apple Maps is pretty much useless. Google Maps is typically more complete with paths and building data outside of North America too.
Apple Maps sent me to a rural cornfield once instead of the church where a baptism was taking place. It was funny because we weren't the only ones; everyone using Apple Maps was sent to the same cornfield.
These kinds of rare but common-enough edge cases make me super hesitant to use it in the Midwest.
I have had a really good experience using Apple Maps for public transit. Earlier this year I went to NYC for the first time as an adult and it was super easy to use for finding which train to get on. Had a similar experience in Europe this fall as well.
I was able to navigate the transit systems in Tokyo, Osaka, and Yokohama near-flawlessly with Apple Maps in 2018. I only recall encountering one correctness issue.
Huh. Just used it fine for public transport in 8 European cities recently.
Google maps won’t even work properly when there’s no data.
Huh? Public transit has been better on Apple Maps. Does Google even have station entrances/exits yet?
depends on the city
Gemini is pretty good on Android nowadays. No real complaints
I agree that Apple Intelligence generally stinks, but I'm not seeing anything actually generally more useful from anyone else.
If no one is good enough, does it really matter who's the worst?
Google Assistant and Gemini have been great.
Apple has a potentially interesting use case for generative AI in their professional creative apps: heavy integration in Logic Pro or Final Cut. Perhaps even create simpler tools with similar functionality but aimed at non-professional users.
The problem is that this risks antagonising everyone in the arts/humanities, and most other use cases are really unneeded. Who needs text summarization for something as simple as personal texts from friends? Casual use is not really complex enough to warrant assistance.
Author of the article here. I do video work occasionally and I use DaVinci Resolve to do it. DaVinci Resolve uses generative AI as tools to help you. It makes all my subtitles, and if I'm not going into domain-specific terminology that often, it'll be 95% of the way there in about 15 minutes. This is massive, especially when combined with "edit by word" editing.
FWIW: Speech-to-text falls under "AI", but is not considered generative AI. (Note that systems with capabilities that go beyond STT with capabilities such as summaries or translation may incorporate generative AI.)
> The problem is that this risks antagonising everyone in the arts/humanities
I don't anticipate this being a problem. Have you used generative fill in Photoshop or Lightroom? It's a complete game changer. In Egyptian mythology they weighed your soul against a feather when you entered the afterlife, and with professional tools I think moral hangups about AI are going to get about the same weight. It's just too good not to use.
I have this deep feeling that engineers have a fundamental misunderstanding of the arts, which is reinforced when there is a suggestion that "heavy integration" of generative AI into multimedia production apps is somehow desirable. It's not just contrary to the design and use of these applications, but contrary to art as an endeavor - and users find it revolting.
Apple already has simpler tools aimed at non professionals, they don't need generative AI either.
>It's not just contrary to the design and use of these applications, but contrary to art as an endeavor - and users find it revolting.
As far as speaking purely about art goes, I think there is a wide debate to be had there - a ruler helping a line be straight is help to an artist but not seen as contrary to his work, while pressing a button and getting a full painting is clearly not art creation. But where in the middle lies the spot where automation stops being ok? I think it's a spectrum and we'll see a shift in perception there, gradually.
But that debate completely sidesteps the elephant in the room - most artists nowadays don't make a living making art, just making art-adjacent content, where the artistic value is not really super appreciated by the buyer - photographers creating stock photos, graphic designers making app icons, background music for ads and the like.
Artists hate tools that automate this process because it significantly removes that source of income, but they're not the main target of these products. The target is the clients currently paying them and seeing an opportunity to get a product that, while lacking artistic quality, works for them just as well.
This is another place where I think technologists miss the forest for the trees. You're looking at the outputs and results, searching for a middle ground, but misunderstanding that the problem with generative AI in art is the act of creation itself.
People don't generally take issue with tools that automate or make their jobs easier, even if it may reduce the value of the output. However if the tools limit what they can create themselves and make it difficult to fix or fine tune when something is not how they envision things in their mind before creating it, then they're not good tools. Even worse are the tools that take away their ability to create at all.
Really I think what technologists don't understand about art is that in engineering tools are a means to an end and only the outputs matter. If you can get a program to spit something out and say "look, isn't that good enough?" you have missed the entire point of art.
>However if the tools limit what they can create themselves and make it difficult to fix or fine tune when something is not how they envision things in their mind before creating it, then they're not good tools. Even worse are the tools that take away their ability to create at all.
I might be wrong, but I think you're picturing all-or-nothing use cases here. It's not all just 'draw me a picture'; Think smaller scope and maybe you see that middle ground. Take as an example, for a writer, clicking on a phrase like 'he raised his eyebrows' and being suggested alternative wordings so he can avoid repetition. Is that interfering with his act of creation any differently than checking a thesaurus?
Consider being able to have an interaction with an LLM you can ask, 'Is the plot of my thriller so far leaving any plot holes?' That does not seem so different from a back-and-forth with an editor or an early reader, in terms of affecting creative freedom.
>If you can get a program to spit something out and say "look, isn't that good enough?" you have missed the entire point of art.
Again, I get that but art is not what tech companies are trying to substitute. If a music generator can give you background music for studying there is no art creation involved, but neither the owner of the youtube channel making ad money nor the listeners give a shit.
I'm not defending that position necessarily, mind you, just pointing out that the business interest in 'not art, but just content that happens to need an artist's skills to create' far surpasses the interest in actual art.
As an analogy: Many musicians will scoff at mainstream pop artists and how every song is just the same four chords. But is the business in pop or in avant garde jazz?
Shrug. If I had to go back to desktop Linux, and I could pay to have Preview, Safari, Terminal(! yep, I like it better than my Linux options), Digital Color Meter, Apple's office-alike suite, Notes, and various other first-party Mac apps, on Linux, I'd absolutely click the "buy" button. And I spent 20 years on Windows and Linux before seriously giving Mac a shot, and still regularly use both for various reasons, so it's not that I don't know what else is out there—Apple's first-party apps are my favorites in their categories more often than not (big, glaring exception for Xcode, hahaha). They're mostly really good, stable, and don't eat my battery like it's free.
Is it actually that bad?
I've been really enjoying the AI notification summaries, they're a nice combination of time saving and comedy
MobileMe/iCloud sucked for half a decade at least, it was well known internally and was something Jobs supposedly bitched about a lot.
Jobs seemed very tolerant in this case; he was quite mad at Eddy Cue at times, but did not fire him.
Can't speak for all regions of Apple Maps, but here in Canada I still get many errors when using it, especially with bikes, buses and so on. It remains impossible to use confidently compared to Google Maps. When it comes to Apple AI stuff: too much work was put into Apple Vision, and that was a tragically bad strategic decision from executives at Apple. I wouldn't be surprised if it is presented in the future as one of the greatest misses from Tim and his gang.
> too much work was put on Apple Vision and this was a tragically bad strategic decision from Executives at Apple.
I think it is more complicated than that. I think the Apple Vision is a kind of albatross. No one wanted this thing. I happen to think the executives didn't want it either. For all the years and effort put into it (and, well, there was project "Titan" before that) killing it might have hurt worse than their lackluster shipping of it.
Flush with cash (and I can't think of a phrase that really carries the weight of just how flush with cash they are — embarrassingly wealthy?) it was a rounding error for Apple to hire everyone they could in The Valley and keep them busy (and filing patent applications as they worked). It kept them from the competitors.
And I don't believe you could have instead put the engineering hires to "fixing Maps" or whatever pet peeve you and I have about the current Apple ecosystem. You're 1) likely not hiring the type of engineers for those tasks and 2) just throwing more people on the thing is not necessarily going to be the right answer (The Mythical Man-Month, too many cooks (ha ha) and all that).
On the whole I think Tim has steered the Apple ship to align with the times we have been living in.
I think the only reason the Vision Pro exists is for the OS. I wouldn't be surprised if Apple internally considers it the final OS, since it's the one that exists in the physical world. Their task for the next two decades will be to bring that OS to invisible devices like glasses.
In Metro Vancouver, Los Angeles and the state of Washington, my experience with Apple Maps has been far better than Google Maps; the latter seems to have stagnated completely.
Apple Maps provides me with more accessible info. e.g. "turn right at the next traffic light", "stay in the second lane from the left" vs. "In 200 metres, turn right onto 1st Avenue" (where it's always off by 50m) and nothing about lanes
MobileMe.
I can have a whole human-like conversation with ChatGPT via their app on the same iPhone where Siri is still total horse-poo. I have an iPhone 15 Pro running 18.3... Siri is so pathetic.
I chat with GPT (especially in the car) to get things done; an assistant and a knowledge base. Siri makes me have nerd rage (lol) when I try to use her the same way.
If GPT came out with an AI Phone, Apple would be out of my life. I want an AI Phone where on the lock screen I see a FaceTime-like call with my AI assistant (you could skin how they look to be whoever). They do everything for me via voice, text, hand gestures, facial expressions, etc. It would be less icon-focused and way more AI-focused of a UX.
I think it's much easier for Apple to sort out their AI and add this to iPhone than it is for OpenAI to figure out an entire mobile ecosystem where Apple has a ~15 year headstart and use their AI in it.
I agree Siri isn't good, but adding good AI into the existing ecosystem is clearly where the market is headed, and I don't think it will be long before Apple gets there.
The "bicycle for the mind" goal, and the Steve Jobs quote that inspired it, is really just another restatement of McLuhan's idea of media as being extensions of man. The bicycle (or wheel, more generally) is an extension of the legs, a phone is an extension of your voice, etc.
The problem with interpreting AI through that lens is that AI, as it is being used here, is not an extension of your mind. Plenty of other things are (organizers for example), but AI does not extend your thoughts. It replaces them. Its notification summary feature does not improve your ability to quickly digest lots of notification information, it replaces it with its own attempt, which, not being your own judgment, can and does easily err.
There are some uses of AI that do act more like a McLuhanesque medium. Some copilot applications, in which suggestions are presented that a user accepts and refines them, are examples of this. But a lot of the uses of both image generation and LLM tools serve to limit what your mind does rather than expand it.
This post makes the point that the foundations of Apple Intelligence are really well designed. I think anytime you make the right underlying technology choices, there is always hope for the product.
It's also worth noting that Apple traditionally is not a first mover and looks for "inspiration" from smaller competitors. In this case, there is no comp to reference. There is no startup mobile OS innovating in integrated AI. That, and the supposedly rushed timetable, probably explains a lot.
I wish the same. That being said, given how useless Apple Intelligence is, how it isn't deployed in the EU, and how it's gatekept behind newer hardware, it's still very easy to ignore. It's even easier on the Mac, where new versions don't bring anything worth upgrading to for a non-Apple-only developer (I'm still running Sonoma).
But the point is that the foundation gives it hope to be a class-leading product in the future.
The lengths Apple went to build a secure and private system will make it stand out and help it hold up to regulatory scrutiny. Doing this now is better than doing it later.
In, let's say, 3 years, when the features are more legitimately useful, the gatekeeping on new hardware will be a non-issue. In 3 years the majority of Apple's users will have an iPhone 15 Pro/iPhone 16 or newer. They are probably already mostly on M1 or newer Macs.
On the other hand, I totally agree that it's pretty useless as it stands right now. I also think that if their launch strategy is to have an amazing WWDC and then deliver 5-10% of the features in September, that's going to turn Apple into just another company that promises the moon and delivers gimmicks.
On the contrary, in three years we will be used to AI that requires much newer tech than is available today. The current tech will be obsolete.
We have seen that play out with all the previous AI chips (mostly Google's).
The software and its requirements are moving faster than devices can be shipped and accumulate significant market share.
This is not to say that Google and Apple haven't or won't be able to ship some minor models, such as voice recognition or translation, but for frontier-level AI the local chips just won't suffice.
"Clean Up is best explained by this famous photo editing example . . . This tool allows you to capture a moment in time as you wish it happened, not as it actually happened."
FALSE. Apple defines a photo as a record of something that actually happened. iPhones take photos. They don't auto-swap a high-res moon in for the real one like Samsung phones do.
Clean Up (like crop) is just an editing feature, manually applied after a photo already exists, and using it effectively changes the image from a photo into an "edited image", the same way using Photoshop does.
Definitions of What a Photo Is:
Apple - "Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened. Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, It’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated." - John McCormack, VP of Camera Software Engineering @ Apple
Samsung - "Actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene — is it real? Or is it all filters? There is no real picture, full stop." - Patrick Chomet, Executive VP of Customer Experience @ Samsung
Google - "It’s about what you’re remembering,” he says. “When you define a memory as that there is a fallibility to it: You could have a true and perfect representation of a moment that felt completely fake and completely wrong. What some of these edits do is help you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond." - Isaac Reynolds, Product Manager for Pixel Cameras @ Google
Definitions via https://www.theverge.com/2024/9/23/24252231/lets-compare-app...
> This tool allows you to capture a moment in time as you wish it happened, not as it actually happened.
> the Clean Up tool gives users a way to remove distracting elements while staying true to the moment as they intended to capture it.
idk, seems like the author described it the exact same way Apple does in their marketing copy.
https://www.apple.com/newsroom/2024/10/apple-intelligence-is...
I hate that we have spent the last 20 or so years advancing digital cameras to the point where everyone has an amazing DSLR in their pocket, and now we're at the point in history where we have to define what a photograph is, because everyone is trying to shoehorn some shitty AI image-gen thing into our cameras for a quick profit.
We're at that point in history BECAUSE everyone has a DSLR in their pocket.
Image quality on modern phones is in large part due to a lot of image processing done by the phone. Multiple photos being taken, combined for least blur and best dynamic range, colour balanced to best represent skin tones etc.
The line between the sort of algorithms that run on an iPhone and inserting a moon is largely philosophical rather than technical. It's an extremely important philosophical line! But the sort of things that have been added are the logical continuation of the sort of work that the camera teams have been doing.
[For context, I'm a Google Pixel owner, but of those three statements the Apple one is the one I agree the most with]
Frustrating to me, too, as someone who has recently gotten back into photography: it's difficult to know whether the photos I am using as inspiration are actually real, or so highly edited that I'd never be able to achieve something similar.
It's one thing to use masks to edit highlights/shadows/color balance for certain areas (skies, buildings, people, etc) but it's an entirely different thing to completely replace the sky, or remove objects because they aren't "appealing"
> It's one thing to use masks to edit highlights/shadows/color balance for certain areas (skies, buildings, people, etc) but it's an entirely different thing to completely replace the sky, or remove objects because they aren't "appealing"
Almost as long as we've had photos, we've been removing "unappealing" things from them. Famously Stalin had Nikolai Yezhov removed from a photo after he was "purged", but the Soviet Union in general is full of these instances.
More lightheartedly, Disney supposedly (though this seems to be subject to some debate) has airbrushed a number of photographs of Walt Disney to remove cigarettes from them. And perhaps most famously of all, Han only shot first if you were born before 1997.
I don't mind so much if Photoshop has these abilities, but putting them inside the camera app is just such a backward step for creativity.
Camera app with RAW mode?
Camraw
I’d agree with your assessment of what Apple considers a photograph if I could turn off the post-processing that turns everything in the background into a smeary, blobby mess.
https://halide.cam/ has Process Zero, which is about as close as you can get to straight off the sensor. Here's a photo I took with it: https://bsky.app/profile/xeiaso.net/post/3le3dd53zlk2c
Easily the best camera app I've ever purchased. It makes me not want to pull out my mirrorless camera as much to get decent photos.
That’s Portrait Mode. You only get that if you change from Photo mode to Portrait mode in the Camera app, or in later OSes by retroactively applying a Portrait effect in Edit mode. The addition of the latter feature also made it possible to retroactively remove the Portrait Mode effect from a photo, as long as you have the actual source asset and not a rendered JPEG/HEIC with the effect baked in.
If you mean the fake bokeh that blurs the background, turn off portrait mode.
You can toggle "ProRAW" in the default Camera app. It captures a lot more information from the sensor. The files are a lot larger; it isn't throwing away information that isn't visible, like detail in the shadows. This gives you more flexibility, like changing color balance and exposure after the fact, because of that extra data. But there is still some sharpening and post-processing.
You can use the camera inside Lightroom or "ProCamera" or other apps and take RAW photos, where it records all of the sensor data without any post-processing. Most people don't want this; you need to develop the images using software like Lightroom to look good.
This comment and the article they came from are a perfect snapshot of this moment. The fact that major players at each company have made public statements about the philosophical definition of what a photo is. I mean, of course they have. Of course. The times be wild.
They failed IMO as soon as they started marketing this stuff as AI. Nobody _actually_ cares about that. They just want new and exciting device capabilities. The fact that they haven’t done anything compelling enough to detach it from the AI moniker means they don’t have anything. Yet. The overall foundation is sound. Maybe in a generation or two we will have a truly compelling use case. Part of me doubts it, though, because Apple isn’t the company they used to be when it comes to innovating.
Funniest part is that the bar is so low. People just wanted Siri not to be utterly stupid.
Interesting note: they don't market it as AI. They market it as Apple Intelligence. I think this is deliberate. So it seems they've already taken your advice (albeit with the 'wink' of using the same initials).
> Many companies want to make computers that you can use to do computer things. Apple makes tools that you use as an extension of your body in order to do creative things. They don't just sell computers, they sell something that helps enable you to create things that just so happen to be computers.
An Apple marketing executive is smiling somewhere. Brainwashed another one!
Right, this is such cringe. Almost like Apple's "What's a computer?" kid come to life. Apple sells overpriced tech to people that don't know any better.
Uh … the new base model Mac mini is an unbelievable deal. I hope Apple doesn’t read this …
Sometimes I wish I could check an alternative universe where Apple refused to acknowledge AI slop and just doubled down on privacy and protections of the user. It almost feels like they could have just ignored the trend and let everyone else burn money until the hype dies down.
To be honest, Apple's approach to AI seems pretty close to this recommendation. I don't see much more than "here are a few nice features added to some apps" -- vs. MS's "now all our machines must be AI capable so that we can scan 1000s of screenshots of your device".
As someone who has gone back and forth between Windows and Mac OS... if Apple would stop trying to force me to log-in to use features I don't need, they would have a more compelling case. Windows is indeed a horrific data mining operation, but Apple's own endless push to enmesh me in their ecosystem is about as irritating. And Windows has WSL, which has value to me at work.
(And outside of work, Apple punted on games, which means I will always buy a Windows or Linux computer.)
You're comparing eating unflavored oatmeal with eating bleach. One of these things is clearly in a league far and away worse than the other, and you're doing nobody any favors pretending otherwise.
No, I just don't agree. At work, that's a decision the corporate overlords make. At home, I will continue to prefer games + WSL over Apple's offering... despite having started in computing, back in the 1980s, on a Mac, and having owned a number of Macs over the years.
Apple has consistently made choices that make having a Mac less palatable to me -- killing all 32-bit binaries destroyed most of what was available on Steam for the Mac. Hassling me to logon is another unforced error. Eventually I dumped my Macbook Air and bought a Windows laptop.
Apple hardware is really good. I probably spend more on Windows laptops because I replace them more often. But it's a better experience. (And I can get a more adequate amount of memory.)
> Hassling me to logon is another unforced error.
Sorry, but Macs require an online account only if you want to use optional online services offered by Apple, just as is the case with Microsoft and something like OneDrive or Office 365.
You aren't required to use the optional services on either platform.
The difference is that Microsoft wants to force you to use an online Microsoft account to log into your own local computer. Macs do not require that.
You cannot install any apps on an iOS device without registering and logging in with an AppleID, which requires both an email address and a telephone number.
Macs don’t run iOS.
> The difference is that Microsoft wants to force you to use an online Microsoft account to log into your own local computer. Macs do not require that.
I have no OS or tech giant loyalty, but I think the pushiness is a wash between MacOS and Windows. I still haven't found a way to stop my Mac from nagging me to log into iCloud; I'm currently dealing with a bug where the iCloud login modal shows up immediately after I dismiss it, in an unceasing cycle, and I have to launch the App Store and then quit it before the nagging stops.
I recently set up a Windows 11 laptop over the holidays and found out (from Google) about an esoteric "oobe*.exe" command I had to run on the CLI on first boot (pre-setup) that revealed the option to create a local account in the UI. I didn't get any nags post-setup.
I don't use iCloud, so I simply removed it from the list of applications that are allowed to send notifications in the system settings.
As far as annoying reminders go, Microsoft takes the lead by using repeated full screen ads.
> Though this is a fresh wave of full-screen update reminders, it's far from the first time Microsoft has used this tactic.
https://arstechnica.com/gadgets/2024/11/microsoft-pushes-ful...
At any rate, Microsoft has made multiple changes over the years designed to hide the option of using a local user account in Windows, which is a bridge too far.
No, I'm not required to log on, and I didn't, but they absolutely do prompt for it and otherwise make ignoring their services less convenient. I put up with it for years, but I hated it.
I agree with you here. My solution was to use Linux on my desktop PC, and it's been great for work and gaming, and I have an M1 Mac with Asahi Linux on it for on the go. I love it and I'm really happy with it. I've used Windows most of my life and it has really worn me down, and I tried MacOS for around a year, and it is equally as painful but in different ways. Linux has some rough edges in certain areas, but I like that I can easily sync all my devices, emulation files (it pairs well with the Steam Deck), and configurations... and I don't have to put up with being a data cow or being constantly badgered to engage in vendor lock-in.
Wouldn't you rather have a nice breakfast though?
I thought Microsoft has also been aggressive in pushing login using a Microsoft account.
Both make very sure you are aware of optional online services they offer.
Only Microsoft has actively hidden the option to use your own computer without a Microsoft account.
On win 10 there used to be a button to create a local account, but they removed it and made it so you had to disconnect from WiFi to get to that screen. Then they changed it to make it so that you have to disconnect from WiFi AND run some arcane terminal command. At this point your average user is just going to use an online Microsoft account. What an atrocious user experience
I think he's saying both are so aggressive that you end up almost having to sign in on either of them. But MS's data collection is a lot worse.
Longtime Mac and Windows user who feels similarly.
I recently tried Linux as my primary desktop after 10+ years. It’s amazing. It just works and there is no slop, no advertising, no ecosystem to log into, no notifications. So nice. I can actually get work done! And games often just work.
(Pop_OS on a System76 Adder)
Apple makes it way easier to use macOS without an Apple account than Microsoft does with Windows requiring a Microsoft account.
Pretty low bar
You can disable this on install pretty easily for Windows.
I'm sorry to say, but logging in with a centralized account to use a device is a given in 2025. The benefits are far too good to disregard. Apple is no longer in the business of selling you devices, they're in the business of selling you add-ons to your Apple Account.
And people always say that we risk our accounts being locked out, but when was the last time you heard of Apple closing Apple Accounts?
No, this is garbage. I have plenty of real accounts I need to log into already; I don't need another one that is there just so the vendor can track me and try to enmesh me in their ecosystem.
> that is there just so the vendor can track me
That's your point of view. I see access to so many services because I have a phone, laptop, watch and headphones. Apple doesn't use accounts only to track you, it uses accounts to do things like seamlessly switch your AirPods between devices.
You do not need an internet account to coordinate devices.
You couldn't just pair the device to all of them and then seamlessly switch? The only thing the account adds is removing that initial step on first pairing, but even that could be done without an account if the new device C was paired to host A and came into proximity of host B, which had connected at some point with host A.
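To make that concrete, here's a toy sketch (plain Python, entirely hypothetical, not how Apple actually implements AirPods switching) of pairing records propagating between hosts that already trust each other, with no central account anywhere:

    # Hypothetical model: hosts that have connected before share pairing
    # records peer-to-peer, so an accessory paired to one host can be
    # adopted by another without any central account.
    class Host:
        def __init__(self, name):
            self.name = name
            self.peers = set()   # hosts this host has connected to before
            self.pairings = {}   # accessory name -> pairing key

        def connect(self, other):
            self.peers.add(other)
            other.peers.add(self)

        def pair(self, accessory, key):
            self.pairings[accessory] = key

        def adopt_nearby(self, accessory):
            # The accessory comes into proximity: ask trusted peers whether
            # they already hold a pairing record for it.
            for peer in self.peers:
                if accessory in peer.pairings:
                    self.pairings[accessory] = peer.pairings[accessory]
                    return True
            return False

    laptop, phone = Host("A"), Host("B")
    laptop.connect(phone)                  # A and B have met before
    laptop.pair("airpods", "shared-secret")
    print(phone.adopt_nearby("airpods"))   # True -- no account needed

The account version just collapses the "have these hosts met before" check into "are they signed into the same identity".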
> I'm sorry to say, but logging in with a centralized account to use a device is a given in 2025.
It's really not. Of all the devices I own, the only one that really wants a centralized account is a Chromebook on its way out. Even Android is willing to work without a Google account.
Apple's devices are also willing to work without an account.
>I'm sorry to say, but logging in with a centralized account to use a device is a given in 2025.
The plague was a given in 1666 too; that doesn't necessarily mean it's a good thing.
I mean, it's a given that it will be pushed, but I really don't see the benefit of logging into Windows with a Microsoft account for most people. I do it because it enables parental controls, but if I didn't want those, I don't see the point. The regular people in my life do it because Microsoft pushes it hard and they don't care. The Windows app store works just as well (which is to say, not very well) whether you log in to it in the app or through your Windows account.
It's basically required on a Chrome OS device, although my MIL used the guest account for months after she changed her password instead of referencing her password book and then later couldn't log in with what the book said. Chrome OS isn't awful even with no persistent storage.
Blessedly, I haven't had to use a Mac in many years, but a local account didn't seem to impact anything of note --- you could log in to the App Store in the event you needed something from there, but there wasn't much that needed it other than Xcode; maybe that's changed.
The iPhone with no App Store is fairly useless, so yeah, you've got to log in for that. An Android with no Google Play is a little bit less useless; depending on what apps you want to run, some of them distribute APKs directly.
An Android without a Google account is fine. I use the Aurora Store to download apps. Most of the features of Google Play Services (notifications, cell location) still work when you're not logged in. I always use my Android devices like this.
I dunno, my family likes it. They largely standardized to storing all their stuff in their OneDrive. When they get a new device, they just log in with the same username/password as their other computer and a lot of their settings are already configured. All their stuff is just there in their OneDrive.
For shared computers it's really nice. I log in with my account, my wife logs in with hers, regardless of whatever computer we have handy. If I'm lounging on the couch I might grab her Surface; if she wants to sit down and work on a bigger project she can hop onto the bigger gaming PC; if we're on a trip and want to sync photos, just grab the laptop. It's our same accounts, same usernames and passwords, same customization settings we like, regardless of whatever computer we use. The NAS at home has its permissions tied to our Microsoft accounts, so our accounts log in seamlessly regardless of what computer we're on.
Meanwhile all our devices are encrypted and have our backup keys to decrypt synced there.
I probably would never want to have any personal Windows machine use a local account going forward.
Replacing emojis with stickers, with no way to tell them apart, didn't feel so nice.
Several large companies could benefit from ignoring GenAI. Unfortunately, "benefit" would only mean "save money and produce better products for customers" instead of "make stock price go up".
Instead, all of these companies are effectively forced to play hype ball.
The keynote where they announced these features was pretty awkward. The typical conviction they deliver things with just wasn't there. You could almost feel that their hand was forced and they gave it a go, but they deeply resented not being the ones to make that choice.
I don't doubt the pressure they are feeling at the upper levels at Apple is real — but I agree, this initial rollout is an also-ran and they underdelivered.
I disagree that Apple should have sat on their hands. LLMs are already shipping apps that are beginning to integrate with the OS (looking at you, ChatGPT icon in my MacOS menu bar). How much user privacy should Apple allow me (well, their customers generally) to cede before they step in?
I'd greatly prefer users installing these functions and apps themselves. Instead, Apple has me double-checking settings on every update so I can toggle off stuff that should have been opt-in. At least I can continue to refuse to opt into the MacOS AI implementation for now.
Now they've gone and doubled[0] the recommended space AI needs on base model phones that ship with under-spec'd storage in the first place. If it goes up any more, you'll be looking at giving up nearly 10% of a base model 128GB phone's storage to AI.
[0]https://9to5mac.com/2025/01/03/apple-intelligence-now-requir...
For the time being, if you want an easy way to switch off AI, you can just change the language of your device to English (CA).
There needs to be a setting in the OS with a slider that lets you set a 1-10 value for your privacy preferences, which all settings in all applications and future updates on that device are bound by. Each level would have a well-defined meaning, such as "1 - no interaction with internet-based services at all; opt out of everything", "2 - only security-related online services used; opt out of everything else", all the way up to "10 - this device is a loaner but I want all of my content to be available to me on any device anywhere and I want all of my content screened for 'safety'; opt in to everything."
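To illustrate, a purely hypothetical sketch in Python of what such a policy table might look like (the level names and capability flags are made up for the example):

    # Hypothetical privacy dial: one number the whole device is bound by.
    # Levels 3-9 would sit somewhere between these extremes.
    PRIVACY_LEVELS = {
        1:  {"online_services": False, "security_services": False,
             "cloud_sync": False, "content_scanning": False},
        2:  {"online_services": False, "security_services": True,
             "cloud_sync": False, "content_scanning": False},
        10: {"online_services": True,  "security_services": True,
             "cloud_sync": True,  "content_scanning": True},
    }

    def is_allowed(level, capability):
        # Every app and every future OS update would have to consult this
        # before opting the user into anything.
        policy = PRIVACY_LEVELS.get(level, PRIVACY_LEVELS[1])  # fail closed
        return policy.get(capability, False)

    print(is_allowed(2, "cloud_sync"))    # False
    print(is_allowed(10, "cloud_sync"))   # True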
Step into what? Most users don't use or want to use these apps.
Just last year everyone was saying they were “behind in technology” because they didn’t “understand AI”.
The particularly strange thing about this is that Apple has been shipping neural network accelerators since the iPhone X. The one actual selling point of Microsoft's "Copilot PCs" is copying Apple and putting an NPU into Windows laptops so they can have all the AI features Apple had already shipped way before they coined the term "Apple Intelligence".
The LLM hype is so powerful that it got people to think the market leader in consumer-facing AI features was falling behind because you couldn't ask your iPhone to summarize notifications poorly.
> "No wireless. Less space than a Nomad. Lame."
It's that sentiment all over again. Specifically, a naive belief that the specs/tech of something are just as important as the execution. Maybe they are if you are very tech-online, but for most people, tech/specs don't matter as much.
Ultimately, this is another distortion of reality by the comment section.
It's what people always say: when the 5S came out, pundits were saying that was the death of the iPhone because the Samsung Galaxy was much bigger. Next year, the 6 was bigger and somehow Apple survived a year with a smaller device.
Apple is so derivative and marketing-type driven now, that it even got into the VR shit, after it had already resulted in nothing for Facebook and others peddling it for years. Given that, there was zero chance that they'd skip AI slop.
I'm not sure, they've been revolving around "AI" stuff for ages; Siri, photo manipulation, identifying people and things in photos (but on your own device), all of which are widely popular. This feels like a logical next step for the path they were on for ages.
It would have been okay if they had continued to release features that just happened to use ML under the hood. But instead they are expressly marketing “Apple Intelligence”, with most of the features released under that umbrella so far not really working well (notification summaries, mail categories, Genmoji, …).
I do wish they had continued on this track instead of Apple Intelligence. I should be able to click "generate audiobook" in iBooks and have an audiobook made in any voice of my choosing. Open source solutions are far too slow on macOS, and subpar in other languages.
Yeah, they've been a bit slow on LLMs, but "machine learning" has been one of their buzzwords for years now.
You hear other companies saying they're pivoting to making AI-ready CPUs soon and I'm like dude, my iPhone SE 2 has machine learning processing!
Calling it a trend in a casual way is not going to make it go away.
At this point, anyone calling AI a trend sounds like someone calling electricity a trend.
Believe it or not, there are paradigm shifts in technology every now and then. Not everything is 3D television.
There are. But until the current crop of AI can actually do something useful, it isn't one. Right now it's hype driven development in search of a compelling use case.
I hear this a lot, but don't understand it.
In my own field (biomedical research), AI has already been a revolution. Everyone - and I mean everyone - is using AlphaFold, for example. It is a game changer, a true revolution.
And every day I use AI for mundane things, like summarization, transcribing, and language translation. All supremely useful. And there is a ton more. So I never understand the "hype" thing. It deserves to be hyped imo, as it has already become essential.
I think there is a group of people that don't want AI to be useful and think that by telling other people that it's not useful, these other people will believe them. Unfortunately for them, more and more people are finding value with AI. It literally saved my father-in-law's life. It's become my kid's best school tutor. But there will still be people telling me that it has no value.
it boggles my mind this is still being written over and over here on HN. like an echochamber everyone like fears AI will replace them or some BS like that. I cannot even begin to tell you how much AI has been useful to me, to my entire team, to my wife, to my daughter and to most of my friends that are in various industries that I personally guided towards using it. on my end, roughly 50% of things that I used to have to do are now fully automated (some agents, some using my help along the way)…
in every thread here on HN there will be X number of people posting exactly what you wrote and Y (where Y is much smaller than X) number of people posting “look, this shit is fucking amazing, I do amazing shit with it.” if I was in group X I would stop and think long and hard what I need to do in order to get myself into group Y…
It’s already ridiculously useful. It does at least 80% of the work in the teams that report to me and writes/refines almost all the documentation I write. That doesn’t even cover all the hobbies I use it for: ideas for new furniture to build, pattern generation for wood carving, ideas for my oil paintings, etc.
I think AI tools will have a little less impact than electrification.
Honestly, I think Apple played their cards perfectly. They didn’t try to be first to market with R&D, but they’ve launched just enough features to excite customers about new phones, while appeasing investors and subduing potential competitors like OpenAI who are rumored to be working on hardware devices, too.
> Apple... doubled down on privacy and protections of the user
You mean like is documented on https://support.apple.com/guide/iphone/apple-intelligence-an...?
> It almost feels like they could have just ignored the trend
I wish they had but on the other hand, can you imagine how many think pieces there'd be about how Apple was stagnating and how they were "such, like, a 20th Century Company(tm)" and you'd probably get activist investors bleating about how Apple were leaving money on the table and and and etc.
So? They had been getting 10 such pieces a day when Jobs ran things too...
The “play the podcast my wife sent me the other day” example is interesting to me. That shouldn’t be difficult to do without AI. Yeah asking a thing is always gonna be quicker (provided it works), but a well designed app should make that possible within like ten seconds.
I can’t help but wonder if the reason “agentic” systems seem so appealing to people is because as an industry we’ve spent the past fifteen years making software harder to use.
I had a similar example the other day. I was visiting Arizona for the first time and was driving in a rental car from Phoenix south to Tucson.
I have the latest Google Pixel, and was using Google Maps to navigate.
I pressed the "voice search" button from within Google Maps and said "What is the name of the mountain on the left that I'm about to drive past?"
Instead of a context-aware answer, my phone simply did a Google search for that exact phrase and showed the results to me. The top hit was a Reddit thread about some mountain near Seattle. :)
I'm wondering if you actually thought that there was a chance that Google Maps was going to answer you correctly?
I guess we're entering an age where people might have a reasonable expectation that any app is a context-aware LLM, but personally I don't have that assumption yet.
I’m not sure. If the word “podcast” wasn’t used then it might be tricky, but an LLM might figure it out from context.
It also depends on the context of the action. If you’re sitting in front of your computer, yeah, no big deal. Type “podcast” in the search field of the messages app and click the thing it finds. But if you’re busy cooking dinner or cleaning out the cat box, it’s a pain to get your phone out and poke around. The main draw of voice assistants (at least to me) is that they let you do things quickly when you don’t already have a device in your hands.
The problem is that this only works if your spouse used an apple application to tell you this information. If she used messenger/IG chat/whatsapp or gmail or her work email there’s no way for Siri to know about it.
> The core of why ChatGPT works as a product isn't the AI. It's the experience of each word being typed one at a time by the AI and saving your conversations with the AI for later.
Really loved this article overall, but I have to super-disagree here. The core of ChatGPT is you can have a conversation with a computer program.
Take away saving history and you can still have a conversation with a computer program (see ephemeral chats).
Take away typing one word at a time and you still have a conversation with a computer program (see non-streaming API / batch API).
But major props for writing this live on a twitch stream, benefit of the doubt there my friend.
Yep you can tell it that it gave you the wrong answer and it will apologise and give you a different wrong answer.
Article author here. Thanks! I've found writing on stream to be really hard, but it's getting easier with practice. One of the more frustrating parts is trying to get the pure thought nuance out onto the page in a way that reflects the nuance as it is in my head. I think I'm getting closer to it, but who knows.
These models are stochastic, though, so saving conversations with elaborate context - so they're actually useful in the domain you need them for - is a fundamental feature. The alternative is saving a document to copy-paste in as a "startup prompt" of things it needs to know contextually, which is kind of silly.
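For what it's worth, the "startup prompt" workaround is trivial to wire up against the plain API; a minimal sketch, assuming the `openai` Python package and a hypothetical notes file named context.md:

    # Minimal sketch: no streaming, no saved history, just a reusable
    # "startup prompt" loaded from a local file (context.md is hypothetical).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("context.md") as f:
        startup_prompt = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": startup_prompt},
            {"role": "user", "content": "Summarize what changed since last week."},
        ],
    )
    print(resp.choices[0].message.content)

Whether doing that by hand is less silly than letting the vendor store the whole conversation is exactly the product question being raised here.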
I agree with the premise that it is not the text stream or the conversation history that makes ChatGPT useful - it is multifaceted. It's an interface, and also an intelligence at your disposal. Even if you took some abilities away from ChatGPT, such as spitting out real-world facts, it could still be used for summarizing text.
While I agree with the post's sentiment, its assessment of Apple Intelligence overlooks its incomplete rollout, with most of the meaningful, step-change features (e.g., Siri’s contextual awareness) scheduled for release in 2025 at the earliest.
In my view, Private Cloud Compute and Apple Intelligence, together with the ubiquity of Apple devices, position Apple as a leading candidate to realize the widespread, AI-enabled transformation of personal computing discussed in the post — with tasks requiring less energy and cognitive load than they do today for the general consumer.
Execution matters. Microsoft was best-positioned to reap the widespread, mobile-enabled transformation of personal computing. They didn't, and today with the discontinuation of the Surface Duo, they have literally zero strategy in mobile.
Worse, because of that bad decision they don't have any endpoints to sell AI and XBox services.
When Windows Phone was killed it had about 10% market share in Europe; it was slowly becoming the device for people who didn't like Android and lacked the funds for an iPhone.
Turns out 10% is better than 0%, but they decided otherwise, and selling Microsoft Android was always hard to swallow.
Meanwhile Siri is worse than before and can’t find my flights to add to my calendar any longer.
I agree. The idea of being short Apple's ability to create an AI product seems to miss the one thing they are excellent at: the simple things. What they understand is that you don't want AI to be impressive, you just want the things you already do today to work really well and be satisfying to use - and not know or care if there is AI behind it.
Steve Jobs understood you have to do really hard things very well to do anything simple beautifully. I'm pretty sure most of their AI play is still in front of them.
> Word processors like MacWrite absolutely transformed the ways that everyone used computers.
MacWrite was released 5 years after WordPerfect, which itself is predated by WordStar. I don't get why Apple fans have this obsession with pretending Apple invents these things.
Apple refines what others have attempted before, that's what they're good at. Part of the reason people are disappointed with Apple these days is because of this fantasy image of Apple as an inventor.
Author does say word processors "like" MacWrite, so falls short of saying it was the first at anything (and it wasn't), but WordPerfect and WordStar are interesting choices for comparison. Of the three, only MacWrite is a GUI-based WYSIWYG word processor that would be immediately familiar to modern audiences. (WordPerfect wouldn't get GUI until 1991.)
It's interesting to note that when WordPerfect got a GUI, it had to be worked up separately for each platform --- the NeXTstep version was quite nice, took full advantage of Display PostScript, and was coded up by a couple of programmers in six weeks or so.
That’s how cross-platform apps were. Middleware libraries that you could use to put the same GUI code on different OSes didn’t exist, and there wasn’t room for them anyway. Even sharing the core could be a tough proposition. I know that at least MS Word for Mac vs Windows was two completely separate programs that happened to share a name and a feature set. That continued until Mac MS Word 6.0 in 1994. That was a port of the Windows version, and it was very much disliked among the Mac userbase for its poor performance and for not really behaving like a Mac app should.
I don’t think cross-platform UI code really took off on the Mac until maybe 20 years later. Plenty of Mac apps were built that way before that point, but they tended to be ones in the “you’re stuck using this, so it doesn’t have to be very nice” category, like Word ended up being. It doesn’t really seem to have tipped until Electron came along, and somehow web apps that use half a gigabyte of RAM to show some text became totally accepted as good enough.
Incidentally, the Cocoa UI framework that macOS uses was originally a cross-platform framework that could deploy to Windows and various other OSes, back when it was made by NeXT. Apple killed that off and it became Mac-exclusive. I wonder what the world would look like if they had kept support for other OSes. Maybe we’d have a good selection of cross-platform apps that actually look nice and perform well.
To see that, look at the apps which almost were:
- Pages.app by Pages --- amazing DTP tool which was bought by Anderson Financial Services, but then killed off when Rhapsody went away
- Macromedia Freehand --- unfortunately, they continued with their in-house toolkit to make the Mac/Windows versions, since it would have been too much work to revive the old Altsys Virtuoso code
- Quantrix Financial Modeller --- at least this still survived, but be sure to take a seat before looking up the cost per seat
- FrameMaker --- the NeXT version was the nicest one I ever used, and with Display PostScript, was far nicer to work with
- WordPerfect --- the NeXT version was far nicer than the Windows, would have been nice to see that come back
- Stone Create (and the other Stone apps) --- nice assortment of various tools which would have been quite nice to have
Lots of other way cool NeXT apps which should have done better in the market.
This is the real damage of Microsoft's monopoly power. They have a severe case of Not Invented Here syndrome, and even for free open source software they refuse to ship any of it natively unless they can fork it and pretend they wrote it (early TCP code based off BSD, IIRC).
It's horrendously tough to target a cross-platform application when you have to bring the entire GUI toolkit along and adapt it individually to every target, even the 95%-of-the-market gorilla that is whichever three versions of desktop Windows are the most recent.
Real damage is called Electron.
WordPerfect's GUI releases were yuck. Late to the scene and lost the essence of WordPerfect 5.1 for DOS. I'm not surprised that Word won that battle.
WordPerfect and WordStar were always pretty yuck IMO once newer generation products came along. I was pretty much a fan of Microsoft word even in the DOS days. (Even Multimate which was basically a DOS clone of a Wang product.)
I loved WP5.1 for DOS because of one feature: "show codes" - it made it trivial to understand why the formatting looked the way it did, and to fix formatting problems. Other than that it was not outstanding software :-)
I never used Word in the DOS days so I can't compare, but it was obvious that Word for Windows was written natively for Windows and it "felt" much more natural in the Windows of that time.
Neither WordPerfect nor WordStar even connected for me. No disrespect for anyone for whom they did.
Never really loved Word for Windows to be honest, though I used it a lot over the years. Though liked the DOS version.
The NeXTstep version was _very_ nice --- looked and felt like a native app, but still had "Reveal Codes" --- the nicest version of WordPerfect I ever used.
It was implied by the phrase 'transformed the ways everyone used computers'. True, to younger computer users MacWrite would be the most familiar of the three. However, in terms of total unit sales and percentage of users for their day, MacWrite was practically rounding error in the word processor market. It was WordStar and then WordPerfect that dominated (and therefore 'transformed...') until the early/mid-90's when MS Word took over.
The point stands—we don't use the descendants of TUI word processors today. We essentially all use GUIs.
WordPerfect for classic MacOS came out in 1988. It always felt like they bought someone else's product but apparently it was an in-house port.
But the example functionality is backspacing, which of course in no way requires a GUI. You could just as easily cite AtariWriter or Bank Street Writer, which came out before WordPerfect.
And before Macwrite, or Macintosh computers existed, there was Xerox PARC and their GUI-based WYSIWYG editor called "Bravo", which Steve Jobs no doubt would have seen when he visited PARC.
'Bravo' was produced at Xerox PARC by Butler Lampson, Charles Simonyi and colleagues in 1974.
Then Charles went to Microsoft where he started and led Microsoft's applications group, where he built the first versions of Microsoft Office
That historic moment has been dramatized at least once.
https://www.youtube.com/watch?v=2u70CgBr-OI
I noticed Alan Kay himself in the comments, indicating about the only thing in the scene that is accurate is the guy playing Steve Jobs (Noah Wyle) looks/sounds similar to the real Jobs. Everything else was pure Hollywood fluff.
Yeah, this.
I mean, look, WordStar was a huge step up from a typewriter. It (and programs like it) made it possible for people like me to write.
WordStar let you get the words right. WordPerfect let you do "you asked for it, you got it" layout, which was a step up. But MacWrite and programs like it let you do WYSIWYG layout, which was huge. It was like going from typewriter to WordStar, but for layout and appearance. (The words are still more important, but the presentation also matters.)
I always wonder how many people actually used typewriters before talking about them in comparison to word processors.
By the late 1970s or early 1980s, typewriters had electronic memory. They had error correction "tape" so you could erase mistakes. You could set them to center or right justify text. You could create tables of justified text.
Early word processing software was amazing because it gave you more memory to work with, and didn't force you to keep so much of what you previously wrote in your head. But it was only a large step forward compared to typewriters of the time.
WYSIWYG blew all of that out of the water.
I wrote papers in graduate school using a typewriter before switching to a Mac Plus. It was a Smith-Corona and used a cartridge "ribbon" and a special "correction ribbon" that lifted off the text (instead of using white-out).
It wasn't a Selectric, though, so mostly you just used it as a typewriter. Selectrics were expensive and mostly used in business rather than by individuals.
Even the IBM Selectrics were nothing like as good as having a word processor, which is why the transition was so fast. A friend of mine was in commercial real-estate in that period (mid-1980s) and discovered that with a computer, he could type his own offer letters and not have to bother with a secretary. My brother's law firm went through something similar. In a handful of years, there was huge adoption.
A student organization I was an officer in had a Selectric with a correcting ribbon and I ended up using it for a lot of papers latterly. The newspaper had Royal typewriters and I used those. We did a bunch of literal copy/pasting and then stuff was typed into a huge typesetting system.
Grad school (starting 1979), there was a mainframe with DecWriter terminals and that was a big improvement but still nothing like terminal GUIs.
Selectrics never had any word processor-like functionality aside from the ability to erase. They were very expensive for being just amazing pieces of mechanical engineering but the daisy wheel products worked well enough and could be infused with a little computerized help because they were already electronic.
> By the late 1970s or early 1980s, typewriters had electronic memory. They had error correction "tape" so you could erase mistakes.
Sure, there were electronic typewriters where you could type an entire line or two into memory before printing it. And, as you say, typewriters with whiteout reels that allowed you to backspace.
But if you've started a new paragraph, and you realise you want to go back and edit the previous one, and everything else needs to move down a line to accommodate? Good chance you're throwing the page away and starting from scratch.
Or you're cutting and pasting in the literal sense, using scissors and glue.
Typewriters were difficult enough that "typist" was a professional job - and many workers wouldn't do their own typing, instead recording messages onto tiny tape cassettes for a typist to type up later on. There were even foot-pedal-controlled cassette players, so typists could type with their hands and control the dictaphone tape with their feet!
Early word-processing software changed very little of that. I maintain that this kind of software was like typewriters on steroids, with a screen.
WYSIWYG software was an entirely new paradigm, and changed everything.
Even Word Perfect was much, much better at this than a typewriter. What made it hard to use was that you had to memorize a bunch of special function keybindings, as I recall. Yes, GUI word processors were much better than this, but even Word Perfect looked compelling compared to using a typewriter.
I wrote a bunch of papers using a typewriter. Standard operating procedure was to plan out the complete paper, basically as a tree of bullet points (I used index cards), and then turn those into sentences at the typewriter. All the writing/reorganization had to happen before you sat down to type.
you had to memorize a bunch of special function keybindings
Keyboard templates were pretty ubiquitous.
Early word-processing software changed very little of that - the problem was cultural, not technological. Big offices had a typing pool where professional typists would type up memos and documents. It took a while for that culture to change where people realized they could, and should, do their own typing. Small business led the way because they couldn't afford a dedicated typing pool.
It's also that technology was much less ubiquitous back then and less evenly distributed, at least in my experience.
I went from a mechanical typewriter, which had whiteout tape as its only "advanced" feature, directly to a WYSIWYG editor. It was absolutely night and day. I only saw an "advanced" electronic typewriter later in life at my grandparents' house, which was used rarely and only for my grandmother's accounting business, as it was so expensive when they first bought it.
As far as I know my experience was pretty much normal for my peer group - as my school had a similar setup. As a kid in school, it was amazing going from having to totally re-type a rough draft to being able to make some casual edits and hit print. Hours saved for each paper, especially with the typing skills I had back then!
Most 70s-80s advanced typewriters were just not regular household material, even among white-collar workers. My parents (and mini me) worked on very mediocre typewriters, until my dad got a PC from work with WordPerfect. Interesting from an economic perspective is that the 'top' typewriters were a lot more affordable than early PCs. People just didn't buy them. What they had was good enough, regardless of features. My dad ended up so enthusiastic about PCs that he later spent more than a monthly wage on a PC for the family (actually: me). Every two years. Incredible, especially compared to the affordability of digital devices nowadays.
Word processing software was to writing as smartphones were to photography, or as the printing press was to writing. Perhaps not transformational on a per-feature basis, but transformational in terms of the possibilities and the market it unlocked.
I learned to type on an electric typewriter :) It wasn't a fancy word processing one with memory, but the hammers struck the paper using a motor, making it easier to type. It also made a very satisfying 'thunk', which I would randomly trigger while the teacher was talking, which in turn caused me to get thrown out of class a number of times.
> By the late 1970s or early 1980s, typewriters had electronic memory.
Were such fancy machines actually common? I never saw one. For me, it was all "CHUNK CHUNK CHUNK CHUNK oops! damn", until it was a whole new world with MacWrite and the ImageWriter.
Such typewriters were used by business, not students or individuals at home. Those machines cost several thousand dollars in today's dollars, just to put a perspective on things. Whereas a "normal", manual typewriter cost several hundred dollars in today's dollars. Most of us therefore had a manual typewriter. Touch typing was out of the question! Let alone any fancy-schmancy word processing!
Boy, oh boy were the early gen word processors a godsend!
I learned typing on an IBM Selectric, my parents made me take a typing course before they would buy a computer. White-out or correction tape was how we fixed typos (or just didn't make them). If you were smart you wrote your words out longhand before typing them; you didn't "think" while typing.
The early computerized typewriters kinda sucked - you could do line level edits but the print quality was dot matrix or worse.
I did. I used my mom's typewriter, which she typed her thesis on in the 1950s, and switched to word processing on a college owned TRS-80 before getting an MS-DOS machine of my own.
The big breakthrough was editing. WYSIWYG didn't solve a problem in that space. College papers didn't need different fonts, and it was sufficient to let the computer take care of formatting, like using Markdown.
I'm not dismissing the Apple, but it was priced out of my reach in 1984.
In Southern Europe Apple hardware has always been the most expensive one among systems for home users.
Hence why I had to wait until university to actually see them outside computer magazines, and only one room in the computer labs had them, versus the whole campus filled with PCs and UNIX terminals.
> I used my mom's typewriter, which she typed her thesis on in the 1950s
The point was that later typewriters had features like memory, automatic indentation, and erasing that were not available in 1950s-era typewriters.
Yes, but nobody was buying those for personal or student use, and the really cheap ones came later.
Hi, author of the article here. I do most of my drafting of these longer articles on a Freewrite Alpha, which is effectively an electronic typewriter. When I use it I have one rule: the backspace key is banned. This makes me restate my thoughts if I make a typo or just bulldoze through it. I find that this makes me make better drafts in the process.
Yikes.
You might be blessed with a brain for written communication.
If I did not allow myself revision and improvement to my written text, everything I write would read like a spoken monologue, and be multiple times longer than necessary to convey my message.
On the other hand, Xena might also have a brain broken for written communication, and this is the best way to deal with it!
I have only recently learned that I have ADHD, and have been trying to iron out all the implications of that (well, that and autism, which I also only recently learned about) -- I cannot help but wonder if a workflow like this would help me in my writing....
Lemme tell you, it functions like a gift but feels like a curse in terms of how it affects my daily life. ADHD medicine doesn't help consistently, it sucks lol
There is a reason I do _extensive_ editing after the fact, here's the draft for Soylent Green is people [1]: https://gist.github.com/Xe/3fe0236412c1ce16389bfcd6c6562d7a The differences I added in editing are _vast_ and really transform the work from a ranty mess that approaches readability to something that's worthy of publishing.
I have another article about AI coming out about how we could use generative AI to make art that we've never seen before, but it's mostly used for AI slop. I'm in the middle of ranting it out into the typewriter. It's bad currently, but I make it bad first so I can remake it better later. Here's an excerpt of the intro that I'm going to be rewriting. This is the "raw clay" that I mold in editing.
> I like creating things. There's a lot of joy in being able to sit there, think about a thing, and then make that thing come into existence. This is something I really enjoy doing and I'm blessed to be able to do that as my job in DevRel.
> One of the core conflicts that i end up having with the stuff I create is that I am a bit more artistically minded than people would expect out of the gate. I mean, I get it. Tech isn't really known for *art*, it's a lot more known for being the barrier between you and artists you want to follow.
> Howeever I've ended up seeing kind of a disturbing pattern with AI tools that are meant or at least intended to aid people in the creation of art: they're almost always used to create infinite slop machines without a lick of art in the process. Today I'm going to talk about this fundamental conflict between two categories: art and content. Art is that which conveys, inspires and tells stories. Content is what goes between the ads so that media moguls can see their profit lines go up. I want to argue that a lot of what AI tools are actually being used for is content that is gussied up as if it is art.
If you want to see what the entire writing process looks like after it festers for a while in my head, I wrote out the holy grail article on Twitch: https://youtu.be/N_KNpVujAL8
[1]: https://xeiaso.net/blog/2024/soylent-green-people/
As someone who was alive back then: they were also quite expensive, and only a few people had them.
I bought my typewriter around 1990, a plain classical one, where I threw lots of paper away and learned to use correction tape and ink to save pages.
MacWrite was a WYSIWYG word processor. You could change the fonts or other formatting and see the results updated on the screen.
WordStar and WordPerfect were for DOS. They were not WYSIWYG. Sure they were powerful word processors for professionals, but they were not “like” MacWrite. MacWrite was a tool for regular people.
I absolutely would have described WordStar as a WYSIWYG editor. [0] You have no content markings, you do have pages in sight, and flowing text. You set the text to bold, and the font displayed is bold, etc.
I don't think I'm alone in that judgement, as Wiki says:
"WordStar was the first microcomputer word processor to offer mail merge and textual WYSIWYG." [1]
So... You might need to expand why you think that this is not true.
[0] https://cdn.arstechnica.net/wp-content/uploads/2017/03/words...
[1] https://en.wikipedia.org/wiki/WordStar
> textual WYSIWYG
You said it.
WordStar did not have fonts. MacWrite did.
WYSIWYG without fonts is ... I don't know ... "textual WYSIWYG"? Whatever that is.
Textual means that it ran in text mode. That doesn't mean that it did not have fonts. It... Did. There's two fonts in the screenshot I linked beforehand.
I'd call those two "styles" of the same font family.
The Macintosh system software shipped with about 12 distinctly-different fonts. Most of which also had italic, bold, outline, etc styles.
These fonts could also be rendered at different point sizes.
The main body text is Courier, and the titlebar text is generally called "CCSID 437" or the OEM font. Both standard IBM fonts from the era. They're not the same fontface.
WordStar had full support for the PC-8 Graphic set. It is pre-TrueType Fonts, but so was the software. By the time WordStar landed on Windows, it had support for everything you'd expect from anyone else.
Without commenting on the rest: Courier, which is a serif font, does not appear in your screenshot.
https://upload.wikimedia.org/wikipedia/commons/9/9b/IBMCouri...
Honestly, I do not see the different fonts in the image. I see the status line showing the Courier font name. If it is different from the body text, I cannot distinguish it.
Obviously WordStar was limited by what DOS could render, so the variety of fonts available had to work in a drastically constrained bitmap. I could also not find samples of the PC-8 graphic set.
But for comparison, here are the original 1984 Macintosh system fonts:
https://upload.wikimedia.org/wikipedia/commons/2/29/Original...
Okay... Let's try a different approach here. "DISPFONT.EXE" and "DISPFONT.OVR" are key files you'll find in WordStar's archive [0].
I don't have a CP/M emulator on hand to fire up the original to show you an example, as WordStar pre-existed DOS.
But this is a quote from v3, the first DOS version, from the manual:
---
Screen Fonts for Preview
At the Add or Remove a Feature screen, you can install three different types of screen fonts for Preview. The screen font options are Code page 437, Code page 850, and PostScript fonts. If you want to install PostScript fonts, install both PostScript and code page 850 fonts. (Be sure to set the code page to 850 in DOS. See your DOS manual for instructions.)
[0] https://sfwriter.com/ws7.htm
Hi, actual WordStar-in-practice, wrote several hundred thousand words on it in both CP/M and DOS, user here (related: I am old). I think you're confusing WordStar's preview display with its editing display here. Later versions of WordStar for DOS (I think it started with version 5.0, but I wouldn't swear to it) could generate a surprisingly good for the day print preview using PostScript fonts, as described above in the text you're quoting. But that was a specific read-only mode. When editing, WordStar ran in DOS text modes and was limited to what DOS text modes were able to display: monospaced fonts, usually with the ability to display boldface text with "bright" text and sometimes -- not always -- with the ability to display underlined text with actual underlines. (This depended on your video hardware; IIRC, XyWrite seemed to be able to do that pretty reliably in DOS, but WordStar didn't). But you couldn't display proportional type in editing, or italics, or different typefaces.
Now, you could argue that WordStar anticipated WYSIWYG editors, because it did its best to faithfully reproduce margins, indents, line spacing, justification, etc. in its text editing mode -- but that attempt came from the era when printers could only output monospaced type, usually just one typeface, no italics, etc. Once printers got better, WordStar really wasn't WYSIWYG anymore, just "best effort within limitations". IIRC, the only major DOS-based word processor to actually attempt a WYSIWYG editing display was WordPerfect 6.2 in the late 1990s.
Not the original version, because DosBox is simpler than CP/M, but here you go: [0]
[0] https://imgur.com/CZxblEd
Thanks for this. I've never used WordStar, and my knowledge of early word processing software is certainly incomplete, but this is the first time I've seen such a screenshot that predates 1984.
Most printers at the time didn't have fonts either.
Consumer printers were dot matrix at the time, and they were not adequately high density to print fonts. That's correct.
But the ImageWriter printer was higher DPI and could render the Mac's fonts.
And then of course the Apple LaserWriter came a few years later.
This is why the Apple LaserWriter was such a big deal. It came out 1 year after the Macintosh, merging Canon’s laser printer engine, Adobe’s PostScript, and the Mac’s bitmapped display with proportional fonts.
WYSIWYG means "What You See Is What You Get." If you want the title to be Courier 24 point Bold then that's what you see on the screen. If you are writing in a proportional font with proper kerning then that's what you see on the screen. You don't see a fixed width substitute.
This is what enables you to do proper typesetting and page layout for a document, and using a PostScript printer such as the Apple LaserWriter you could do desktop publishing [1]. Desktop publishing was invented by Xerox PARC but the revolution began when Apple made it available to the masses with the Macintosh and LaserWriter.
Apple didn't invent any of these technologies but they were the first to put them all together into a package for the mass market and made them incredibly easy to use. Suddenly, grandma had a tool she could use to write and typeset the weekly church newsletter from home, and even print it on her LaserWriter at home. If she wanted to do a newsletter like that just two years prior she would have had to hire the services of a print shop to do both typesetting and layout as well as printing.
She could still have used WordStar or WordPerfect and printed with a dot matrix printer, but that doesn't get you large, proportional fonts or layout.
[1] https://en.wikipedia.org/wiki/Desktop_publishing
MacWrite was released in 1984. 7 years before comparable graphical, WYSIWYG competitors.
WordPerfect for Windows was released in 1991. Prior versions were MS-DOS and not graphical.
WordStar for Windows was released in about 1991. Prior versions were MS-DOS and not graphical.
What made MacWrite special, is that it was graphical, highly WYSIWYG (especially when paired with an Apple printer), had a lot of great fonts, and the software was intuitive.
Of course Apple didn't invent any of it, but they made one of the best word processing products at the time.
How do you measure best? Based on usage, sales, or fraction of published text created - Wordperfect or Wordstar were way ahead of MacWrite.
MacWrite was the best in a niche market of personal newsletters that wanted graphical elements. More professional desktop publishers used Aldus PageMaker. And big publishing houses didn't use PCs or software, they used offset lithography presses.
It's like you decided to completely disregard that the successor of MacWrite ended up completely eating all the use cases for making documents outside of the large-scale professional uses.
MacWrite begat Microsoft Word 3.01, which is when Word overtook MacWrite on Mac and became truly competitive with WordPerfect on DOS.
So your basis for saying "one of the best word processing products at the time." is that a different company made a different product inspired by it (and WordPerfect)? I don't think that is a persuasive argument.
If you said MacWrite was an innovative word processor, I would not have replied. Most innovative is not always the best product at the time.
You're replying to a commenter here, not the OP.
I am the actual OP. If you're concerned by my use of the word "best", please replace that word with "innovative for consumers". If you have an issue with this new wording, then we'll have to agree to disagree.
While MacWrite may or may not have been "niche" as you said, it most definitely heavily influenced another "niche" word processor available today: Microsoft Word. Word for Mac (1985) was the first graphical version of Word and it was heavily inspired by MacWrite.
OP to the rescue!
Except if we disregard the tiny market share Apple enjoyed across the globe in those days.
7 years?
Microsoft Word for Mac was released in 1985. It was comparable to MacWrite.
Also Microsoft Write for Windows was released in 1985, but it was clunkier.
Yep, I goofed there. Important to note that Word for Mac was heavily inspired by MacWrite. And became a better word processing app as a result of it.
Thanks for the correction.
With all due respect, I think you are falling into the nostalgia trap. MacWrite was never a true WYSIWYG editor because of the way it relied on Quickdraw for the on-screen rendering but the big deal at the time was the Laserwriter being the first Postscript printer (Adobe still had a proprietary lock on Postscript which lasted until around the end of the 80's). In 1985 Steve Jobs left Apple and started NeXT. One of their first products was the WriteNow word processor which was ported back to the Mac platform by a company called T/Maker (the Silicon Valley rumor mill of the time was that Steve Jobs and Heidi Roizen were an item for a while). WriteNow was the first one to offer a polished experience with proper font rendering and kerning that didn't look rasterized. God forbid you tried to print from MacWrite with font smoothing turned on, a one-page print job could take several minutes to render because of how the Laserwriter had to execute all that Postscript code in real-time.
> With all due respect, I think you are falling into the nostalgia trap. MacWrite was never a true WYSIWYG editor because of the way it relied on Quickdraw for the on-screen rendering
I read your whole comment, but I still don't understand what this means. E.g., why does relying on Quickdraw for on-screen rendering not make it a "true WYSIWYG editor"?
It was WYSIWYG when used with an ImageWriter printer.
> MacWrite was released in 1984. 7 years before comparable graphical, WYSIWYG competitors.
You realize there were other systems besides PC and Mac?
Signum for the Atari ST came out in 1986. It was a fully fledged WYSIWYG text processor with special printer drivers for regular dot matrix printers. Even with a 9-pin you could create great looking output, if you had the patience (a single page took minutes to print). Signum was way ahead of MacWrite and was very popular with people needing special fonts in science/math and the humanities (you could quite easily design your own fonts). Also, it allowed for Right-to-left text, and of course the Atari ST was way cheaper than the Mac.
But in terms of influence, MacWrite had more (eg. Microsoft Word for Mac).
For general consumer use, I found MacWrite much better than Signum. But I can see your point in terms of academics and non-English speakers.
BTW, if anyone's curious, I believe the actual first WYSIWYG word processor was probably Bravo[1]. And Bravo somewhat influenced MacWrite.
1. https://en.wikipedia.org/wiki/Bravo_(editor)
LisaWrite predates MacWrite by about one year.
Apple has always done both -- invented and refined.
Claiming that Apple does not invent is as gobsmackingly wrong as claiming that Apple invented everything.
What did Apple invent? Is it really more than a couple of things? (Applying existing ideas to new systems isn't inventing.)
Name a few things you consider to be inventions first to get the ball rolling.
For example, you could claim that nothing new in CMOS manufacturing exists because it’s all just the existing idea of a transistor. Or the transistor is just a quantum mechanic version of the vacuum tube. Or the vacuum tube just an electric version of the Babbage machine. Repeat for Internet vs packet switching.
Basically come up with a definition that doesn’t require a “I know it when I see it step” and I’ll easily fit many things Apple did in there unless it’s such a restrictive definition that no one invents anything.
“If I have seen further, it is by standing on the shoulders of giants”
On some level, all invention is a novel arrangement of existing components. All the way down to the physics, if you're willing. So, does anyone "invent" anything?
But if we accept "patentable" as a proxy for "invented", then obviously Apple invents a lot.
But then there are those of us who think the entire idea of "patentability" is a joke in and of itself.
All these fuzzy lines behind "who invented what" are a major reason I consider patents to be evil -- the entire system sets up artificial and harmful barriers keeping ideas from fertilizing each other and growing beyond our wildest imaginations.
(And as someone who strongly dislikes all things Apple -- but not quite as much as all things Windows -- I cannot help but observe that both Apple and Microsoft simultaneously deserve both more and less acknowledgement, all depending on how one looks at things, for their inventiveness).
The alternative isn't some utopian free sharing of inventions. The alternative is tightly held trade secrets and lots of inventions dying with their inventors.
We had this system for most of human history. It sucked. A decade and a half of exclusivity is a fair trade to avoid it.
Software patents maybe shouldn’t exist, as they didn’t for a very long time. Indeed, the way patents are actually written for software ends up as a convoluted mess trying to remain as generic as possible to cover all possible extensions of a core idea (and you can’t patent ideas). So while patents show a benefit, it’s pretty clear the current system in the US has a lot of flaws that need attention.
WordPerfect and WordStar were text-based at the time MacWrite came out as a WYSIWYG word processor.
Not to say MacWrite wasn't based on prior work (it was, though not actual products that I know of), but it isn't really comparable to prior text-based word processors.
>I don't get why Apple fans have this obsession with pretending Apple invents these things.
This is entirely tangential and probably a pointless gripe in this thread, but...
For some reason, it's always really annoyed me that Apple took MP3 players, called them an 'iPod', and suddenly everyone ate them up like they were the second coming of Christ and we'd never had them before.
As someone who was There At The Time, the UX of the iPod was really, really good. There were few, if any, other companies that could match it. The vast majority of competing music players were what DankPods calls "nuggets" - i.e. barely functional e-waste that were either saddled down with horrible software (e.g. anything Sony made), had horrible controls, were bulky and painful to use, or some combination of those above dealbreakers. A lot of companies treated developing an MP3 player like any other kind of music player, and ignored the fact that these things could hold 100x as much music as anything else on the market, which necessitated a completely new UX.
To be clear, there were good non-Apple MP3 players, but they were either marketed poorly, or late arrivals (e.g. the Toshiba player that got rebadged into the Zune). By the time those existed (and tech companies started hiring UI/UX people), Apple was doing a complete reset of another product category: smartphones.
I suspect history would have been different had, say, MiniDisc not failed horribly in America[0]. Pre-iPod, portable music in the US was either compact cassettes with all the downsides of tape, or CD players that could just barely fit in your pocket. The iPod was such a step up from either that it all but became a genericized trademark. Had we had a competing technology that wasn't from the 1980s, we probably wouldn't have thought the iPod was so great. Or at least, people I knew who had MiniDisc looked at the iPod like I look at all the e-waste that was trying to compete with the iPod.
[0] Yes, I know that Sony was basically trying to avoid a repeat of DAT getting banned
I had an iRiver player, and as far as UX went, it was perfect. The only problem was small storage space.
> the UX of the iPod was really, really good.
The iPod is the only Apple product I have ever purchased. I could easily operate the iPod without having to look at it constantly, which was great for bike rides or car rides. No fiddling and taking eyes off the road.
The iPod was a very welcome step in the portable music player tech evolution at the time, but it also coincided with a bunch of people that were suddenly Very Into Music for a few years. I don't fault anyone for thinking the previous portables were just not good enough for everyday carry, but they also never seemed to notice that the OG white earbuds were more painful and sounded much worse than a decent brand of $15 black earbuds. Maybe never finding good earbuds explains why they gave up on their Passion for portable music within a few years.
Have you used the previous generation of MP3 players? I had one with a tiny LCD screen that would only fit half the song title (no space for the artist). To go to the next song, you had to press the "next" button (which makes sense). Except that action would take at least 0.5s. You press next, you wait, you see the display refresh with the next song's partial song name. Not the song I want, press next again. Very quickly, skipping 10 songs takes 10 seconds of effort. It was a painful device to use.
The iPod came with a large screen and a click wheel. I could find songs on it. That was a revolution for me.
MP3 was the enabling technology (if you can't fit many songs on a small device, then this is moot).
> MP3 was the enabling technology (if you can't fit many songs on a small device, then this is moot).
As others have alluded to, MP3 alone didn't seem to be enough. I remember passing on early mp3 players because they only had 32-64MB of storage, not even really enough to store a single album. Snatching up those tiny 1.8" hard drives right away and integrating them was probably as important as the UI improvements because it solved that problem.
If I remember my first mp3 player correctly, it had 16MB onboard + a 16MB SmartMedia flash card (and uploading was via PIO parallel, and the software would occupy the whole system until finished), so I had to find a comfortable tradeoff between low bitrate and a worthwhile number of songs. I must have ended up around 46-64kbps.
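For a rough sense of that tradeoff, here's a back-of-the-envelope sketch (TypeScript; illustrative numbers, not my actual files):

```ts
// Rough playtime math for a ~32 MB player at a fixed MP3 bitrate.
const storageBits = 32 * 1024 * 1024 * 8; // ≈ 268 million bits of storage
const bitrateBps = 64_000;                // 64 kbps MP3

const seconds = storageBits / bitrateBps; // ≈ 4194 seconds
console.log(`${(seconds / 60).toFixed(0)} minutes of audio`); // ≈ "70 minutes of audio"
```

Halving the bitrate roughly doubles the minutes, at the cost of audio quality.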
As soon as I had the funds I quickly moved on to a variety of CD/HDD based players, although I've only recently bought (and modded) an iPod - there are definitely reasons they were so popular. I can appreciate why they went to the common platform with the phone and were later phased out entirely, but as a task-dedicated non-smart device they would be the last man standing.
In addition to the already very thorough and well-considered comment replying to this, I just wanted to say that the iPod was one of the first MP3 players that was widely available with a full-blown hard-drive. The vast majority of MP3 players at the time had like 32-64MB of flash memory, if that (and still cost hundreds of dollars). The iPod had 5GB and 10GB models. Suddenly you could bring your entire CD collection with you anywhere. Yeah, there were a couple of competing models with similarly-sized hard drives, but the other comment covers why people spending >$400 on a fancy new gadget preferred the iPod at the time for its excellent UI/UX.
I only remember the ones about the size of a Discman with 2.5" laptop hard drives. The iPod was, I think, the first one with a 1.8" HDD. When the MacBook Air first came out it used the same 1.8" HDD.
In 2006 I got myself one of these: https://www.gsmarena.com/samsung_x830-review-123.php
That 1G of flash storage at the time was huge for a phone. This was before everybody had an iPod Touch or iPhone, of course. iPhones came out the next year but in my area hardly anyone had AT&T, so because of the exclusivity the iPod Touch became popular way before the iPhone in my area.
My mp3 player around 2001 had 700MB of removable storage. Buying additional storage was pretty cheap too and there were standardized cases to store a lot of that format.
In addition to the points made by the sibling comment, the iPod was a quality product well executed, early competing MP3 players were not great.
Flash based players were smaller but limited in size and expansion media was expensive. Hard drive players were hobbled with USB 1.1 connections and an obsession with drag and drop for management.
The iPod by default just synched with your iTunes library. The FireWire (and eventually USB 2.0) did so quickly. The navigation was as good as the metadata which iTunes made easy to edit. The UX on the device made scrolling through long lists of songs very easy.
The iPod made using an MP3 player easy and approachable for normal people. The Rio, Nomad, and a multitude of others did not. They included a bunch of checklist features but didn't focus on usability until Apple dominated the market.
Yes, but having used both in that time period... tools on the mac were graphical and easily explored (open a menu and see what was available). People made plastic keyboard templates for WordPerfect and WordStar just to remember the commands. On top of that, you could easily task switch on a Mac. There were some tools for doing that with DOS, but they were awful by comparison.
I worked in a lab that used PCs while I owned a Mac Plus. There was no comparison.
Siri has had so many years to iterate and get refined that by now I'd have assumed it would be as omnipresent an assistant as the one in the movie "Her" (without the negative impact, of course), but look where we are today: I am still using it only for weather and setting alarms, and even for that it sometimes works and sometimes doesn't.
> MacWrite was released 5 years after WordPerfect, which itself is predated by WordStar. I don't get why Apple fans have this obsession with pretending Apple invents these things.
Um, hello, Wang OIS? WordStar and WordPerfect didn't invent anything. They were copies of terminal-based word processors.
But MacWrite was different in two important ways. First, like Bravo and Gypsy before it, it was WYSIWYG, a million times better than WordStar/WordPerfect. And it worked with the LaserWriter. But more importantly: it was free. This made MacWrite revolutionary.
https://en.wikipedia.org/wiki/Wang_Laboratories
https://en.wikipedia.org/wiki/Bravo_(editor)
https://en.wikipedia.org/wiki/Gypsy_(software)
I think what Apple excels at is providing a set of software development tools, a platform, and an audience, for third-party developers to then use to build innovative products, the best of which either become platforms of their own if they're lucky (e.g., Adobe suite), or are copied by Apple and made part of their operating systems (https://en.wikipedia.org/wiki/Sherlock_(software)).
In addition to Watson (https://en.wikipedia.org/wiki/Karelia_Watson), there's also Cover Flow (https://en.wikipedia.org/wiki/Cover_Flow), Shortcuts (https://en.wikipedia.org/wiki/Shortcuts_(Apple)), Konfabulator (https://en.wikipedia.org/wiki/Yahoo_Widgets), Growl (https://growl.github.io/growl/), and of course my favorite LaunchBar (https://www.obdev.at/products/launchbar/index.html, LaunchBar would be my pick as the most innovative app of the last 25 years), for apps that were incorporated into macOS.
Meanwhile Excel, Photoshop, Illustrator, Sketch, Lightroom, Premiere, and PowerPoint are all examples of software developed and released first for the Mac that then went on to become software behemoths in their own right (too big to Sherlock). (Well, Sketch turned out differently, because Figma happened, but it's still a great example of third-party innovation facilitated by "a set of software development tools, a platform, and an audience".)
The point here being I think of Apple as more providing a platform for innovation rather than innovating themselves (but I'm aware that's probably a minority opinion).
Inventions are pretty cheap without the product refinement that directly drives customer demand, backed by robust supply chains and the delivery of cutting-edge tech like the M-series chips, tremendous camera quality, battery life, a reliable operating system, a specially curated App Store, security and privacy, etc. Inventions are not what people want to pay for; people pay for value added in all sorts of forms. Apple creates products for humans, and people pay back by seeing the offering as a higher-value product.
You're basically explaining why the Macintosh stayed niche when it came out. People saw the barebones WYSIWYG of WordPerfect 4.0 on PC in 1984 compared to the true WYSIWYG of MacWrite and they said: It's what we already have, why would I need a Mac?
> I don't get why Apple fans have this obsession with pretending Apple invents these things.
Happens all the time with the iPhone. Apple gets celebrated as an innovator for adding features that have existed for years on Android.
It goes on.
> the new standard that companies like Samsung and Google would clone the same way they cloned the hardware and software design of the iPhone.
That's misrepresenting history in ways it's not even funny.
Stopped reading at that point. The article took too long to even sketch its main point anyway.
I get not loving the "Apple invented everything" mantra some people have, but the iPhone genuinely redefined the smart phone category. The industry has 100% coalesced on the model invented by Apple. Nothing like this existed as a full package before the iPhone and now are almost universal:
- No physical keyboard + touch keyboard
- Modern OS kernel (not embedded specific kernel)
- Desktop browser engine
- Capacitive touchscreen + finger instead of stylus - one or two phones had capTouch before, but they were far from standard, and they still had physical keyboards for typing
- Vertical by default orientation
- Short 1 day battery life in favour of more power/features (weird to list, but was a bold move everyone mocked then followed)
They totally took out the existing market (Blackberry, Windows Mobile, Symbian, a variety of OEM OSs). Android succeeded but came after, still had keyboards on its flagships the years after iPhone came out (G1, Droid), and took these design cues from iPhone.
The Mac GUI with mouse+keyboard+windows was also huge. Admittedly not first to invent it (Xerox PARC), but first to ship it as a package is still hugely impressive. Few people commercialize a new product before it existed in some lab.
- No physical keyboard + touch keyboard (Windows Mobile had this first)
- Modern OS kernel (not embedded specific kernel) (Blackberry had this first)
- Desktop browser engine (iOS didn't have a "desktop" browser engine, it had a stripped-down mobile browser engine. But on this note, Windows Mobile did support desktop browser engines.)
- Capacitive touchscreen + finger instead of stylus - one or two phones had capTouch before, but they were far from standard, and they still had physical keyboards for typing (LG Prada had the first capacitive touchscreen)
- Vertical by default orientation (Almost every smartphone at this point was vertical by default, with horizontal-by default being the exception.)
- Short 1 day battery life in favour of more power/features (weird to list, but was a bold move everyone mocked then followed) (Windows Mobile had this years before Apple)
Literally everything that Apple is credited for with the iPhone...others had it first. The true genius of the iPhone was the marketing...Apple still gets credit today for "inventing" features that Android phones have had for years (zoom cameras? AI? notes? custom emojis? embedded fingerprint readers? integrated payment?)
Apple has always been the follower: it copies what others have done, and makes minor improvements, then markets the hell out of those minor improvements to make them seem revolutionary.
None of the phones listed looks remotely like a modern smartphone. The iPhone does.
I worked on WinMo at MSFT at that time. You are comparing devices with physical keyboards and crappy virtual keyboards that required a stylus to modern smartphones?
I mentioned the LG Prada - yes had cap touch, but not touch typing (physical slide out keyboard).
Almost every WM, BB and Symbian SKU had horizontal screens (over keyboards).
Blackberry integrated QNX post iPhone.
First iPhone had WebKit.
All of these facts seem to be incorrect.
Again: the combination of these was a huge shift, and everyone followed it.
I still have my HP IPAQ and it is most definitely a vertical screen (they all were, and only some of them had keyboards). But sure, if you exclude all previous vertical phones, Apple was the first vertical phone ever...
Also, you have your devices mixed up. The LG Prada first gen (2007) did not have a keyboard; the 2nd generation (LG Prada II, 2008) had the slide-out keyboard. And on that note, the iPhone shares so many design elements from the original mockup of the Prada from LG's initial announcement of the device that most tech reviewers thought Apple copied the Prada. It's a good thing for Apple that LG failed to file timely design patents.
So, it seems that all of your facts are the incorrect ones.
I started to have doubts about the article as soon as seeing the Samsung Galaxy vs iPhone comparison. The author exaggerates things and rewrites history too much.
IIRC, all the Google/Samsung phones had keyboards because they copied the Blackberry. Once the iPhone was released with the screen keyboard, all the Google phones changed to that.
They didn't clone everything, but they cloned a lot in the early days. Rounded corners was another one. Now it seems like Apple is cloning Google/Samsung more.
The first versions of Android that the public saw were very similar to the OS on a BlackBerry or Danger HipTop. The G1 even used the same mechanism to deploy the keyboard.
As far as the rounded corners, I remember seeing a reduced Google Reader view of Engadget later that year that had every device looking the same from the top third up. I really wish I had a screenshot.
There is now a lot of cross-inspiration and features that are copied in both directions, as well as both implementing the same thing at around the same time (Intelligence and Gemini).
On the Android side, Pixel gets most new features first while Samsung offers their own take. Samsung is generally ahead of their direct competitors in terms of hardware.
Nit: the Danger mechanism was waaaaay cooler than the G1's. It did this amazing spin I've always missed. The G1 was a little 2-hinge flip-up that was satisfying, but didn't do the amazing 180 that the sidekick did.
There were no Google phones until 2010, unless you count the HTC Dream, which was not designed to be used for anything but development.
Apple fans do this to justify their decision to buy Apple products to their circles and most of all, to themselves. Or it is just delusion from not actually knowing better.
Apple certainly doesn’t invent everything.
Calling what they do as mere refinement may be equally understating it in some cases.
They get it right. For the masses. They create beginners.
Anyone can start with an Apple, because it’s what it’s designed to do.
I have my own biases that extreme usability started earlier with other movements like WebOS and Palm contributing to it as well.
Still, if we look for watershed moments where huge numbers of people in the mainstream adopt technology, whether it’s the iPhone, iPad, Apple TV, watch, laptops, they don’t need to be the first, just the best for the most number of people.
Being able to integrate hardware and software closely creates a different and reliable result for the many, as much as I might not like having complete agency.
If anything, Apple helps invent beginners in the mainstream.
> They get it right. For the masses. They create beginners.
I used to think that about Apple, which is why I got the iPhone 4S when it came out, assuming that I would be able to use it via voice while driving and have a good experience. I disabled it after the first day and never went back. It was not remotely ready for a wide release. I still use a Mac as my main device, but I've been on Android since the 4S.
> Apple refines what others have attempted before, that's what they're good at.
Exactly. And the first version is often kind of meh (iPhone, Watch, Vision Pro), but they keep iterating and later versions become really good. Sometimes it's a hit right away (M1), but even then it's very iterative on whatever came before.
And of course the M1 Air is basically another in the line of Macbook Airs but with a better processor.
>Also, Apple’s processors were always the best, until they switched to Intel, then those were the best.
That can be true…
PowerPC was pretty great for a while. Then Motorola and IBM started dropping the ball (or rather, stopped putting resources into what was not a particularly big customer) and Apple switched away.
Wasn't Apple part of the PowerPC alliance?
They were, but Motorola and IBM designed and manufactured the CPUs. Apple presumably had some input, and they designed hardware platforms for the alliance, and of course software.
> I don't get why Apple fans have this obsession with pretending Apple invents these things.
This is the famous "reality distortion field" you may have heard of. It's basically a form of tribalism taken to an absurd level.
I am hoping the foundation they built will lead to greater things. So far it definitely has fallen flat compared to their demos. All I wanted was a way to talk/text to Siri in a natural way to get things done. It's better but far from perfect. I want to be able to easily create calendar events and interact with other native iOS APIs.
They could be doing a lot better but it's a bit of a cursed problem.
Natural language as input doesn't give you any information about where the boundaries are or what's possible. Meanwhile natural language can express anything, most of which any current implementation won't be able to do.
So the user gets a blank canvas and all the associated problems with learning what to do, except it's worse because many things they think up will fail.
And the main tool we have to guide the user through this fraught path is LLM output. Oof.
I am hopeful that Apple will demonstrate their expertise in using some traditional UI to help alleviate some of these problems.
> It's better but far from perfect.
I’m not aware of any competition that’s perfect.
I guess you meant to say the competition is better. I think it is, but that doesn’t mean their product isn’t a huge improvement.
A simple example is text search in Photos.app. It probably misses text in some photos, but it helps me find quite a few photos. Similarly, face recognition in Photos.app is far from perfect, but way better than not having that feature.
> I want to be able to easily create calendar events and interact with other native iOS APIs
It’s not in Apple’s DNA to release a product here that mostly works. Chances are they’re working on something like that but don’t find it good enough to release it yet.
What surprises me, though, is that they released the “get an AI summary of this web page” feature. That definitely produces some results that are very bad.
Not talking about the competition. They failed to deliver what they had in demos. Clearly they were hoping to get it right at the last minute, but that seems far from the truth. All I wanted was a better Siri that played really well with native iOS apps. We don't have that yet, and I am not sure if it's purely a lack of power with on-device models or if Apple completely missed the mark in implementation.
I normally really like this blog, but what’s all this noise about? They’ve done the hard part of lining their entire foundation up, now all they have to do is build the applications on top, and let others do so as well.
In the meantime, they’re shipping the best non-workstation computers, by far, to run models locally.
They don’t have to be the ones to implement all of this themselves, you can install ollama right now, and BoltAI can integrate those models into other parts of the OS. And Apple will watch, and Sherlock the best parts of what others do into the OS, and sell gobs of machines.
They haven’t squandered anything, the foundations are still there.
Y'all know I'm a fast, if error prone, writer. I still enjoy using AI writing assistants to help me with the occasional phrase that's awkward, grammar detail ("it's lower g in 'god' if I am talking about Thor or Huxian right?") and choice of words ("I need a word for agriculture that starts with C...")
LLMs make different mistakes than I do so I've thought about using one as a copy editor but I've had terrible experiences with copy editors: I've hired more than one when I was writing marketing copy who injected more errors than they fixed. (A friend of mine wrote an article for The New York Times that got terribly mangled and barely made sense after the editors made it read like an NYT article.)
I don't get the "Math Notes" example. Why does this need AI? Isn't this just Calca[1] but with extra steps?
[1] https://apps.apple.com/us/app/calca/id635757879
Calca with fewer steps, technically. No extra apps or text syntax to learn. Just drawing the math you learned in school as if on paper.
The extra steps are the (probably at least) 10x compute this requires to do the same thing.
The "Math Notes" thing is absolutely infuriating to me. I use TextEdit on macOS for various notes and the forced math autocomplete (with no way to turn it off) has pushed me away from TextEdit entirely.
Forced AI garbage seems to be Notes team's SOP at this point. They destroyed the handwriting experience on iPads, in iOS 18, with an incompetent spellchecker that can't be turned off. At least this math thing is somewhat unobtrusive, the spellchecker straight-up destroys notes. No acknowledgement of radars and no fix in sight, as expected of Apple I suppose.
Anecdotally even palm rejection seems to have gone to absolute shit in Notes with iOS 18. When I go to write now there’s like a 50% chance the scroll position flies up the document.
I also tried their “handwriting improvement” feature that claims to clean up lines a bit while still looking like your own writing. All it did was turn legible writing into total gibberish.
Do you mean Notes? TextEdit doesn't support this functionality at all.
I mean TextEdit. It definitely supports this to some degree, at least. Try typing “4 * 2 =” on a new line.
Heck even Safari on my phone autofilled “8” when typing this comment.
Ah - you can turn this off with keyboard - text input - input sources - edit - show inline predictive text
Sweet mercy I don't know how I didn't find that or how to thank you enough!
Wait actually that doesn't work. TextEdit still autocompletes the friggin' math.
This smelled like one of those things where they've applied a certain behavior to an entire class of widget for consistency's sake, and TextEdit just happens to use that widget. If so, it'd be controlled at the system settings level.
Sure enough: System Settings -> Keyboard -> (under the "text input" area) Edit... (button next to your primary keyboard language) -> toggle "Show Inline Predictive Text"
If you want to easily switch between having it and not, I bet you can set a second keyboard with the same language but a different setting there and use the quick keyboard switcher widget/shortcuts (I did not try this, though). Or there's probably a way to shortcut it with AppleScript or some other automation thingy with ten minutes of effort (mostly googling).
But that still doesn't disable it. I can type in "cos(23 deg) =" and it will autocomplete it, even though I have "Show inline predictive text" disabled. I can post a screencast if anyone would like.
Weird, I tried "1+1=" before and after and it disabled it for me.
[EDIT] A quirk: I do have to hit "done" on the window before it seems to apply the change, toggling doesn't do it until I hit "done" (I just tried again to double-check and noticed this)
[EDIT 2] Nb I don't not-believe you, we could be on different OS versions (I'm on 15.1) or something else could be causing the difference.
I'm on macOS 15.2.
Currently it's not autocompleting some simple algebra. But if I enter more "complex" equations ("3 / 4 =", "3 * 5 - 2 - 1 =", "tan(pi) =", etc.) then it autocompletes those. I can't figure out why it's inconsistent. And I've definitely checked and confirmed "Show inline predictive text" is disabled and I've rebooted to try and give everything a fresh start.
One thing I've noticed is that if I enter the same equation multiple times it might stop suggesting for that specific equation, so I suggest trying multiple different equations.
Oh god, that's deeply weird.
I've felt for some time they're overdue for an "almost nothing but bug fixes and performance improvements" major release like we got a couple times in the 20-teens :-/
Also seen in Soulver.
Soulver is text entry not drawn
But this still doesn't need AI, unless we're calling OCR "AI" now
If by "AI", you mean "neural network", rather than "program/machine", then still : hasn't OCR already been using neural networks for many decades ??
Show us any prior OCR that can read hand drawn formulas beyond the most basic single line expressions
I don't recall the name of the app, but drawing formulas, equations etc with a finger was a thing on Android like 10 years ago. And it did things like a mix of fractions and square roots etc just fine.
How else do you get the O's and 0's correct? Building a handwriting recognizer is one of the first things you learn to do in an intro AI class.
Assume words don't contain numbers and numbers don't contain words, then provide a convenient UI for selecting alternatives?
For fielded input matching known patterns, recognition can also be constrained by pattern matching and general validation rules (e.g., VINs are 17 characters long, cannot contain the letters I, O, or Q, and, given prior information in other fields, can be further constrained by manufacturer code, model year, and by requiring a correct check digit).
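To make the pattern-constraint idea concrete, here's a minimal sketch (TypeScript, my own illustration, not any shipping recognizer) of the VIN rules above: 17 characters, no I/O/Q, and a check digit at position 9. A recognizer could use a check like this to reject or re-rank candidate readings that confuse O with 0.

```ts
// Minimal VIN plausibility check: length, forbidden letters, and the
// position-9 check digit. Illustrative only; a real recognizer would use
// it to re-rank OCR/handwriting candidates rather than hard-reject them.
const TRANSLIT: Record<string, number> = {
  A: 1, B: 2, C: 3, D: 4, E: 5, F: 6, G: 7, H: 8,
  J: 1, K: 2, L: 3, M: 4, N: 5, P: 7, R: 9,
  S: 2, T: 3, U: 4, V: 5, W: 6, X: 7, Y: 8, Z: 9,
};
const WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2];

function isPlausibleVIN(vin: string): boolean {
  if (!/^[A-HJ-NPR-Z0-9]{17}$/.test(vin)) return false; // 17 chars, no I, O, or Q
  const sum = [...vin].reduce((acc, ch, i) => {
    const value = /\d/.test(ch) ? Number(ch) : TRANSLIT[ch];
    return acc + value * WEIGHTS[i];
  }, 0);
  const remainder = sum % 11;
  const expected = remainder === 10 ? "X" : String(remainder);
  return vin[8] === expected; // position 9 holds the check digit
}

// A candidate reading that confuses 0 with O fails outright:
console.log(isPlausibleVIN("1M8GDM9AXKP042788")); // true (a commonly cited sample VIN)
console.log(isPlausibleVIN("1M8GDM9AXKPO42788")); // false: contains the letter O
```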
Why does this need AI? Isn't this just dividing by two, which you can easily do in your head?
(I get that not everybody bothers doing vaguely complex mental arithmetic, but dividing by two? Come on!)
The point is the algebra system; it’s quite good for unit conversion, budgeting, etc - lots of things a spreadsheet does great but seems a bit overkill for. Algebra is a big improvement over basic arithmetic calculator power.
Calca is text entry not drawn
Math notes also works with text entry.
Indeed, but that's obviously not the impressive part. Wolfram alpha demonstrated the text-entry version a decade ago...
Was going to say the same thing. Wolfram Alpha has been doing MathNotes for 16 years now. I seriously doubt AppInt will ever come close to the depth that Wolfram has.
It's a bit funny his favourite "Apple Intelligence feature" is something that wouldn't surprise me if it doesn't even invoke the actual model at all under the hood.
Parsing text for variables when it sees an equals sign and running basic calculations on them? I feel that could have been a novel feature 30 years ago.
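For what it's worth, the non-AI version of that feature really is small. A toy sketch of the "variables plus equals sign" idea (TypeScript; emphatically not Apple's implementation, and it leans on Function() as a stand-in for a real expression parser, so don't feed it untrusted input):

```ts
// Toy "math notes" evaluator: "name = expr" defines a variable, and a line
// ending in "=" asks for an answer. Illustrative only.
function evalNotes(notes: string): string[] {
  const vars: Record<string, number> = {};
  // Bind the known variables, then evaluate the arithmetic expression.
  const evaluate = (expr: string): number =>
    Function(...Object.keys(vars), `"use strict"; return (${expr});`)(
      ...Object.values(vars),
    );
  const answers: string[] = [];
  for (const line of notes.split("\n")) {
    const query = line.match(/^\s*(.+?)\s*=\s*$/);            // "expr ="
    const def = line.match(/^\s*([a-zA-Z_]\w*)\s*=\s*(.+)$/); // "name = expr"
    if (query) answers.push(`${line}${evaluate(query[1])}`);
    else if (def) vars[def[1]] = evaluate(def[2]);
  }
  return answers;
}

console.log(evalNotes("flour = 500\nwater = flour * 0.7\nflour + water = "));
// -> [ "flour + water = 850" ]
```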
Minor (in terms of how relevant this is to your comment, not in importance) correction: the author is a woman (actually prefers they/them, according to their GitHub: https://github.com/Xe).
Soulver has indeed been doing this without large language models (so far as I know) for many years: https://soulver.app/
I mean, some of these Math Notes equation features have been in OneNote since, what, 2007? Maybe all the way back to 2003?
I also struggled with this for similar reasons. His favorite AI feature is essentially writing valid js for calculations (his example is literally valid js if you just drop the equals on the last line - you can paste it right into the console on his site and see the answer).
The whole article feels like it suffers from a similar lack of coherence. Ex - I am hardly an Apple fanboy (I strongly dislike the company) but the complaints here are basically
The service sometimes has outages
The image gen is not as customizable as he'd like
He's morally opposed to cleanup in photos
Notifications summaries are bad (and how dare I get my texts 5 seconds slower).
---
None of that is really related in any way to the security footprint of the tooling he discusses up front, and it's also hardly distinct from most other current AI offerings, and it's not really a consistent complaint about the tech.
My opinion of Apple is that they do a crappy job with the vast majority of their apps...
They build good hardware, and they abuse their small hardware footprint to make decent device experiences and a decent (but getting worse) OS - but their actual applications are generally mediocre at best (mediocre copies of a previously successful, usually better, app that they will put out of business through shady store practices if I'm being blunt).
---
If anything, the failure here is that Apple marketed a thing that AI can't really do (yet, maybe at all), and most of the things AI can do without being incredibly invasive aren't actually all that useful to most folks. Very useful to a handful of power users in specific circumstances, but otherwise essentially novelty apps.
So... it's not an implementation failure. It's a marketing failure. And this is hardly unique to Apple right now. The only difference is that usually Apple doesn't play their hand until this inflection point with new tech is over, so it's more obvious this time around just how bad the product fit is for general use.
Minor (in terms of how relevant this is to your comment, not in importance) correction: the author is a woman (actually prefers they/them, according to their GitHub: https://github.com/Xe).
There is also the possibility that LLM models (which is what Apple Intelligence is mainly leveraging) are overblown and honestly a local minimum for AI.
I was an early user of Macs; the first computer I owned was a Mac Plus. I didn't own a PC until the 1990s.
Most of the intro to this is credulous hooey. Macs weren't "bicycles for the mind" in some magic way that was different from PCs. What the early ones had was 1) a better and much more standardized interface, and 2) task switching that worked.
As for the AI tools, image generation might occasionally be useful for a D&D game, but otherwise nothing on offer at the moment has much value. And the value of image generation (for me) is pretty small.
They just built a trusted/secure backend to push compute to, and it luckily coincided with the AI craze. They just packaged their backend as Apple Intelligence and exploited the situation. It doesn't look like they have anything worthwhile to showcase that backend with, though. They will get there eventually; this is Apple, after all.
It's astonishing to see Apple settle for DALL-E 3 (or worse?! these remind me of the Bing samples) for the image generator part. Hasn't the incredible extent of mode-collapse and the horrible DALL-E 3 style become universally known and disliked yet?
No, it's worse than DALL-E 3, it's an on-device model that can only reproduce placid soulless images. The ones I put in the article have been heavily cherry-picked. The worst ones get far worse. DALL-E 3 can at least do text.
Original prompt to Image Playground: "Enron logo" with the Enron logo as img2img input https://bsky.app/profile/yasomi.xeiaso.net/post/3lf472e2yfc2...
Image Playground uses a tiny on device model. It certainly isn't DALL-E 3. In no universe are on device models intended to compete with massive cloud models.
Macs being able to export PDFs for free was a huge deal back in the Acrobat days :-P
I’m hoping what they’ve built in infrastructure and custom chips is a step towards making personal LLMs highly available to non-technical people too. I think this is where Apple has always shined - making things not just better, but accessible and grokable for normal people.
It feels very apparent to me that Apple was caught completely flat footed by ChatGPT’s release. As a result, they were very far behind and have been unable to catch up.
What they released as Apple Intelligence wasn’t a well-planned, cohesive product as much as the only thing they could possibly do, given the timelines they were up against. Maybe they’ll catch up, but they’re definitely behind and it’s a shocking thing to behold.
What's interesting is how I opened Safari's reader mode to digest this 5000+ word polemic, and then noticed for the first time a new option to summarize its contents. A few seconds later I had a clear idea of what the author's thesis is, without being under the false impression that it had conveyed its finer points.
I read the article. After reading this comment I tried to get it summarised. The result wasn't pretty. The summary claimed that there is skepticism over the security of Private Cloud computing, which the article actually praises.
Strange that we received two very different summaries. There was a part in the article where the author mentioned that Apple's private cloud compute claims were "literally impossible", but that was hardly the general takeaway of the article.
Mine basically said Apple has fallen short of their vision because of the inherent limitations in relying on web services.
Stepping back the way I see it is it’s a cultural thing. Apple loves to own the whole stack and even though circumstances forced them to use the hot new thing because it’s useful, Apple as an organization doesn’t really wanna do that. It wants to have its own stuff, and so it could never really make something succeed that wasn’t its own and that’s why it failed. So far.
I was giving this my attention until the author included a long quote from Steve Jobs from the author's own dream
Sorry, what?
Apart from the level of dream-detail recalled being highly dubious, quoting your own hallucination of Steve Jobs to help with your argument about generative AI being useless (and missing the irony) is downright weird.
Also Math notes is basically the same thing search engines have been able to do for over a decade now. Enter a sum, get an answer.
No, I'm aware of the irony. I also wrote it down when I woke up, and you can see on stream that I copy it from a Discord message when I'm looking for it (https://youtu.be/N_KNpVujAL8?t=14677).
I figured you'd rather read something my brain made up (albeit unconsciously) than something a machine made up using linear algebra without understanding any of the words that it's using.
> I figured you'd rather read something my brain made up (albeit unconsciously) than something a machine made up using linear algebra without understanding any of the words that it's using.
I mean, any article written by a human is something the brain made up. And I'm fine with that. But it reads like trying to give your opinion extra weight by associating it with Steve Jobs. It just came off weird.
I'm with you on most points. I'm not an iphone user but I can certainly appreciate that Apple Intelligence does not match the hype. That seems to be a recurring theme with AI though. Release a thing, shout from the rooftops about how great it is, and then wait for people to start posting about glue in pizza recipes or urging people to kill themselves or generating fictional news alerts.
The example he gave was multi-line with variables. That's somewhat different from what search engines do.
Also, it's almost exactly what Soulver has done for years.
So far it feels very unfinished, but having followed Apple for a long time I've seen many products launched and iterate over time. Maps, for instance, had a fairly disastrous early period but eventually became my preferred navigation app.
That used to be the standard apology for Microsoft products, where Mac OS app developers "sweated the pixels", i.e., delivered products that were pretty much on target at launch.
I mean, Apple Intelligence looks good ha.
Yea, I'm not saying it's great or that this is the preferred approach. Just highlighting that it's not the end of the world as many frame it every time something like this happens.
Software sometimes takes a few iterations.
It is unfinished, several parts haven't shipped yet.
Agreed – they've even said as much. But some of the marketing is conflicting there, and I've had friends IRL confused that their new phones don't contain all the Apple Intelligence features they've heard of.
This article makes great points, but the outcome seems to be “in progress” and not final. They have the tech stack and the right philosophy, they just botched the execution out of the gate. This is the same company that didn’t think 3rd party apps on the iPhone would be a thing. They managed to course correct.
Ref: both apps and Apple Maps.
Apple Maps is still not good compared to Google Maps. If I want to go somewhere for the first time I can't trust to use Apple Maps. (tried a lot, always come back to Google)
Depends on the place I'd think. I almost exclusively use Apple Maps except for when I need data about a business. On the other hand Google has also sent me to wrong places a few times, sent me to a closed road etc.
It's not just about the man on a bicycle, it's a man on a bicycle on a flat road.
The biggest problem I have with Apple Intelligence is battery life. Since Apple has no software chops in building LLM models, I expect them to throw hardware solutions at the battery life problem.
But the demands of intelligence and the general trajectory means no amount of hardware - storage, RAM or battery size would be enough to generate the high fidelity experiences or solutions that fans and customers have come to expect from the company.
Power consumption is the defining characteristic of AI. US power consumption plateaued at around 4,000 billion kilowatt-hours from 2000 through 2023. That will likely accelerate by 20% or more with 2024/2025 data. It's probably one of the few guardrails. Electricity is about five times more expensive in the UK than in the US, so the US is the natural home for the models and other regions are not.
> Power consumption is the defining characteristic of AI.
I'm not so convinced. I've been playing with running Ollama + llama3.1 8B on my 2023 M2 MacBook Air with 24GB of RAM, and I don't notice much difference in battery life with or without it. I'm not querying it continually with a shell script in a loop or anything like that, but neither am I shy about throwing all kinds of prompts at it. My laptop keeps chugging away on battery.
Training AI may be ferociously resource intensive, but I haven't seen that querying those models is especially bad. I'd think that a model that Apple had tailored specifically to run well on its hardware would be relatively "light".
I'm kind of skeptical the US grid can even handle this industry growth if it becomes the only realistic place these models are run. A lot of the infrastructure is pretty wobbly. And forget the tax benefits and cheap land from Texas; their private grid is liable to bust at any time.
A somewhat tangential observation:
Apple has done such a good job with marketing that my 10 year old thinks that AI stands for Apple Intelligence.
We live in the Bay Area so she's seen a bunch of billboards with that. I have to constantly remind her that that is not what it means in most cases.
Considering how quickly the definition of "AI" changes (several times per decade now ?), and what happened to smartphones, it's possible that she becomes right in a few years...
> Why Apple Intelligence failed even though everything it's built upon is nearly perfect
Because that’s a laughably false premise.
Hah I’ve been having the same issue as the author with those scammy “package delay” texts getting summarized in my notifications.
Didn’t realize how widespread that type of spam was until now. Why hasn’t someone implemented better spam detection at Apple like we have for email? It would be nice if they could classify texts as spam, promotions, etc and organize them the way Gmail does.
My guess: that requires bigger models than can run on local hardware, and the appetite for sending texts out to a server for classification is negative zero.
The spam filtering for texts in Google Messages is run on device and in my experience works pretty well
SpamBayes worked great as an Outlook plugin back in the day.
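Right - SpamBayes-style filtering is tiny by modern standards, and a word-count naive Bayes model fits comfortably on a phone. A toy sketch of the idea (TypeScript, illustrative only; nothing to do with Google's or Apple's actual pipelines):

```ts
// Toy naive-Bayes spam scorer, SpamBayes in miniature. Real on-device
// filters add better tokenization, sender signals, and tuned thresholds.
type Counts = Record<string, number>;

const tokenize = (text: string): string[] =>
  text.toLowerCase().match(/[a-z0-9$]+/g) ?? [];

function train(messages: string[]): Counts {
  const counts: Counts = {};
  for (const msg of messages)
    for (const tok of tokenize(msg)) counts[tok] = (counts[tok] ?? 0) + 1;
  return counts;
}

// log P(tokens | spam) - log P(tokens | ham), with a crude +1 smoothing so
// unseen words don't zero things out. Positive leans spam, negative leans ham.
function spamScore(text: string, spam: Counts, ham: Counts): number {
  const total = (c: Counts) => Object.values(c).reduce((a, b) => a + b, 0);
  const spamTotal = total(spam);
  const hamTotal = total(ham);
  return tokenize(text).reduce((score, tok) => {
    const pSpam = ((spam[tok] ?? 0) + 1) / (spamTotal + 2);
    const pHam = ((ham[tok] ?? 0) + 1) / (hamTotal + 2);
    return score + Math.log(pSpam) - Math.log(pHam);
  }, 0);
}

const spam = train(["your package is delayed, pay $1 customs fee now"]);
const ham = train(["lunch tomorrow?", "package arrived, thanks!"]);
console.log(spamScore("pay $1 customs fee now", spam, ham) > 0);  // true
console.log(spamScore("lunch tomorrow at noon?", spam, ham) > 0); // false
```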
I could pick on quite of few nits here but I'm going to focus on one in particular that I'm very familiar with as a photographer and mass media studies student.
> I want the data coming off of the sensor to be the data that makes up the image. I want to avoid as much processing as possible and I want the photo to be a reflection of reality as it is, not reality as it should have been. Sure, sometimes I'll do some color correction or cropping in post, but that doesn't change the content of the image, only its presentation.
First nit: the iPhone camera, and all digital cameras, are deeply influenced by computational photography techniques. What this means is that you essentially never get the raw pixel values, although there are exceptions. The image you get is already significantly manipulated.
Second nit: color correction, color in general, dynamic range, focus, depth of field, and more are all manipulations made by default, even long before digital cameras when film was king. There is no "correct" image version of what our eyes see, there is only pleasing to the photographer and the audience.
An example: the negative for Ansel Adams' well-known "Moonrise, Hernandez, New Mexico" looks like, at first glance, something a professional would trash for lacking detail.
Here's the contact print vs the version most of us will probably recognize: https://images.squarespace-cdn.com/content/v1/5f5fe5ca8d6a35...
Here are four different versions Adams printed over the course of 3 decades: https://images.squarespace-cdn.com/content/v1/5f5fe5ca8d6a35...
I will mention, but won't even get into, a topic that will surely bait HN commenters: Kodak designed and standardized its color film to represent Caucasian skin tones. It wasn't until chocolate and furniture makers complained that everything looked like the same gross mud in their expensively-produced product catalogs that Kodak took a look at rendering dark brown/red/yellow tones more pleasingly. Notice I said "more pleasingly", not "correctly".
> Kodak designed and standardized its color film to represent Caucasian skin tones.
That may be an urban legend. There was a popular reference card that had a light-skinned woman, but that wasn't the problem. This is closer to the old Technicolor vs. Eastmancolor debate.
Here's the trailer for Disney's "Song of the South" (1946).[1] That's three-strip Technicolor. Good dynamic range, with each of three colors on its own strip of black and white film. Here's the trailer from "Shaft" (1971).[2] That's single-strip Eastmancolor. Dynamic range is not as good, as you can see in the street scenes where lighting wasn't controlled. Eastmancolor took over in the 1950s because the cameras are much smaller and production is easier and cheaper.
NTSC color TV really did have something that standardized skin tones. NTSC transmits an intensity value and two vector components which determine the color. The color vector components are rather low bandwidth (only about 10 full-range color changes across the width of the tube), and so some NTSC receivers had a gimmick which, when the vector was near the "skin tone line" for a standard skin tone, pulled it to a fixed value.[3] The effect was that all faces had roughly the same skin tone in UV color space, but intensity could vary.
[1] https://www.youtube.com/watch?v=NxwqH47Ne70
[2] https://www.youtube.com/watch?v=pFlsufZj9Fg
[3] https://bramstout.nl/en/webbooks/vectorscopes/#yiq-and-the-s...
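A rough sketch of what that flesh-tone gimmick amounts to, using the standard RGB-to-YIQ transform (TypeScript; the tolerance and the exact correction behavior varied by receiver, so treat this as illustrative only):

```ts
// NTSC encodes luma (Y) plus two low-bandwidth chroma components (I and Q).
// Standard FCC RGB -> YIQ transform, with R, G, B in 0..1:
function rgbToYiq(r: number, g: number, b: number): [number, number, number] {
  const y = 0.299 * r + 0.587 * g + 0.114 * b;
  const i = 0.596 * r - 0.274 * g - 0.322 * b;
  const q = 0.211 * r - 0.523 * g + 0.312 * b;
  return [y, i, q];
}

// "Flesh-tone correction", roughly: if the chroma vector's hue is close to
// the skin-tone axis (which sits near +I), snap the hue onto that axis while
// keeping luma and saturation. The tolerance here is made up for illustration.
function fleshToneCorrect(y: number, i: number, q: number): [number, number, number] {
  const hue = Math.atan2(q, i); // angle in the I/Q plane
  const sat = Math.hypot(i, q);
  const toleranceRad = 0.35;    // ~20 degrees either side (assumed)
  if (Math.abs(hue) < toleranceRad) {
    return [y, sat, 0];         // pulled onto the +I ("skin tone") axis
  }
  return [y, i, q];
}

console.log(fleshToneCorrect(...rgbToYiq(0.9, 0.6, 0.5))); // a skin-ish tone gets snapped
```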
That quote intrigued me too. Surely RAW (which can be produced with an iPhone [and others...]) is what the author is looking for. Case of not RTFM'ing?
I shoot in raw from my iPhone and Canon EOS R6 mark 2. For my iPhone I usually use Halide's Process Zero to remove all the computational photography garbage that I can from my images.
Halide's Process Zero literally adds "computational photography garbage", what do you think de-Bayering actually is?
"Apple Intelligence failed"
"Apple Intelligence" is less than a year old. Give it some time, for chripe's sakes.
How about Vision Pro, did that fail?
Their problem is that they are just behind now, because they released too early. They haven’t finished iOS 18 yet. They are still working on it, which likely means that iOS 19 isn’t going to get much attention either, because they should be working on iOS 19 now, not still developing iOS 18.
They have set themselves up for a loser in the next year or two, because they can’t double their resources to catch back up to a normal release schedule.
Apple has a history of releasing early proofs of concept or basic table-stakes products built with high enough quality that they feel "done," then methodically iterating on them for years to decades until they fade into the background. It's hard to imagine that the iPhone essentially started as a click-wheel iPod - comparing the two is night and day in terms of capabilities, function, and form. They take weird sideways jaunts, but generally shift back onto a sensible path. I was interested to see what they did with Apple Intelligence, but assumed it would be the establishment of a basic set of capabilities, effective proposals for APIs, and the seeding of product discussions with their customers over a long time. People seem to think that a few years into the current cycle of AI technology we are seeing the final fruits rather than the infancy - for those who develop these sorts of tools it feels very much like iPhone gaming felt in 2009. At some point over the decade we will hit the peak, then descend into total enshittification (and yes, those of you who think AI has already reached peak enshittification are totally wrong). Along the way, though, we will see a lot more truly stunning advances on the way to the final arrival at pervasive exploitation.
I'm going to admit that I just skimmed past 90% of this article. Being dismissive of AI is currently easy content, so there's too much noise in the space.
Having said that, I actually paid attention to the image playground criticism. Image playground is literally a playground. It is meant to make fun, low-effort images for friends and family, largely for social type interactions.
"It uses a placid corporate artstyle and communicates nothing." It's a hot taco holding a beer. What is it SUPPOSED to communicate? Looks like a pretty great image to me. But of course this piece was leading into the anti- angle, so suddenly it's "horrifying". I guess I didn't get the special training to understand what was wrong with a clearly lighthearted, fun image.
Similarly asinine, overly jaded complaints about the cartoonish, memoji-style portrait generation. I think the image is actually pretty hilarious. I actually used Image Playground to make my social media image, and I care not what this guy thinks about it, or that it is "soulless" (as if a cartoonish representation is supposed to be soulful?)
> as if a cartoonish representation is supposed to be soulful?
That is basically the entire point of a cartoon, so yeah
It's an AI-generated taco smoking beer; I don't think it really needs a defense. If I were to create such an illustration for my blog from scratch, I'd probably use that to communicate absurdism in a light-hearted manner. I'd also probably make the hand-hooves consistent, or at least plausible by cartoon logic. At the very least it would mean:
* The taco holding the beer glass to its lips and sipping on it to smoke it
* Consistent eye shapes (likely the eyes would be closed to smoke the beer)
* Better bokeh for the elements in the background (if that is the stylistic choice I'd go for)
* Have the smoke coming out of the beer, not out of the taco "taco smoking beer"
* Stylize the image such that it has individual flair, there's something about the Apple Intelligence artstyle that just has an unperson corporate vibe that I don't like.
* The levity comes from "hey tacos don't have faces, hooves, arms, or legs and you can't smoke a beer", this would be used to communicate absurdism https://en.wikipedia.org/wiki/Absurdism, specifically by means of the taco smoking a beer whilst holding it in its hooves
Maybe I've just been exposed to way more AI imagery than you, but guacamole does not look like that in the image. There's more fever dream images that I have locally, but I didn't want to saturate my article with them and haven't fully implemented "image gallery" support yet.
And yes, a cartoon is normally meant to communicate something, quite literally the definition of soulful. Look at this for example: https://bsky.app/profile/yasomi.xeiaso.net/post/3ldgzieehjc2... When I made it, I was trying to communicate a lo-fi peaceful vibe accentuated through traditional artstyles. In a more finished piece I'd probably recreate this through watercolor in Procreate and apply a bokeh effect (emulating the depth of field for a subject with forward light being looked at from an 85mm portrait lens at about f/2.8).
The point of art is to communicate something. If a work does not communicate something, it is categorically not art.
"Apple Intelligence failed?" The far-reaching project that just had its initial release like 60 days ago? Why would anyone read beyond a first sentence like that?
The purpose of a thesis statement is generally to establish a conclusion and then over the rest of the article, the goal is to build up evidence to support that conclusion.
There has to be some plausible path to actually doing so. Unless this person is writing in the future, there isn’t.
The thing is, for a product to fail means the failure is widely accepted, like how Stadia failed or how AirPower failed. The verb "to fail" is simply wrong in the case of a product when what you really mean is "to be bad". It can't be established yet whether Apple Intelligence has failed.
Because it's true. When you ship a product and people look at it and go "meh", the product launch failed. There is literally no value in launching products people don't find value in.
Is there any actual empirical evidence that this is the case?
It does seem a little sloppy, but the actually interesting part of Apple Intelligence isn't out yet so I'd withhold judgement, even on the initial release.
I think part of this issue is that people expect a lot from Apple. They exposed the technology to many people who aren't early adopter types, but more "mainstream" types. With an entirely new product and brand, the tolerance for bugs is higher, but here the expectation is that of other Apple products.
In the end, even if the features aren't perfect, they still raise the bar for competitors, so Apple is less in danger of being disrupted.
Also, there is plenty of AI driven features that people do not talk about, but those "just work" so you don't see them as well.
> In the end, even if the features aren't perfect, they still raise the bar for competitors
Honestly I'm not sure that they do. Everything I've seen with notification summaries for instance has given me the feelings of "wow I guess I'm really not missing much". LLMs as an answer engine has a big benefit that it feels fast and fluid even when its wrong (and many won't bother verifying) but with notification summaries most users in messaging contexts will eventually go back to the conversation and see the responses in full detail. Mistakes in that context are identifiable by mainstream users as having made the product worse.
It's what Apple does. The first version of a product often sucks somehow, but is generally usable. They're masters at iterating.
I agree. Case in point is that the author jumped in at the iPhone 7 which is a lot closer to current era iPhones than the original iPhone and had been refined over that many generations. My first iPhone was the 5s, my first Apple Watch was the 6. I tend to hang back and wait a few generations before adopting a new product from Apple. I suspect Apple Intelligence will be a lot better 1-2 years from now.
This is true, but sometimes the opposite is also true. Case in point, HomePod sucked so much they never became useful. It was discontinued
Which one was discontinued?
This one: https://www.apple.com/shop/buy-homepod/homepod Or this one: https://www.apple.com/shop/buy-homepod/homepod-mini
I find them better than the multiple Echoes we have. Though they don't do as much, what HomePods do do is generally better than Amazon's attempts. Can't comment on Google's equivalent.
The Apple bicycle is the Apple user. They know how to ride it well. And how to take the Money.
Generative AI is a dud. ML has a lot of applications. I use my smartphone for calls, chat and banking apps. I use my iPad only for drawing, mail and some games.
For everything else, I have computers. With real OS.
Math Notes is a Fortran REPL. It’s only impressive to someone who has never seen a calculator.
Well - while Apple has made rough starts in the past (Maps on iOS devices comes to mind) - they do have a solid track record.
Having used "smart devices" since the Apple Newton 2.0 days, followed by Windows Mobile, a very brief Android excursion (Motorola Milestone - early enough Android that I was often frustrated trying to copy/paste text between apps), then another side-pivot into Windows Phone for awhile (mainly because the development was incredibly easy - and Microsoft gave me a free one), I have been in the iOS mobile phone ecosystem ever since the iPhone 6.
And - the software has gotten increasingly better over time - I wouldn't have (for me) a lot of content/subscribers on TikTok if iMovie on my phone did not exist - attempting to edit videos using OpenShot was taking forever (while I have DaVinci Resolve installed, it seems "daunting" for someone who doesn't want to be a professional videographer/editor).
But then I tried iMovie "Magic Movie" on my phone and ... "it just works". Still not great for long-form YouTube style content, but for quick things, slice-of-life videos - it does the job rather well.
... I expect that Apple will improve these AI offerings dramatically over the next couple of years as people upgrade their devices.
The Image Playground app is an embarrassment.
<glances at recent story reconfirming Apple as the most highly valued private company the world has ever known>
This is sardonic, as yes, Apple could have chosen different monopolies than it currently has, at different points, and had a different (maybe not better?) trajectory. Some of us are old enough to remember the antipathy towards Microsoft when it added a default Explorer on top of the Office suite.
But also, maybe our system fundamentally rewards the "wrong" things if one's definition of "right" includes things like innovation. Or maybe, the welfare of the commons and the common good.
Apple is not a private company.
Apple also squandered maps ... until they didn't.
I wish they'd sort out the rendering of road names (and this isn't specific to Apple, mind) - they still seem stuck on the olden-times rules ("only put a road name if the road is wide enough, and only every N inches, starting M inches from a junction") rather than "can we put a road name on this road that's visible on screen without it going over something else?", which would be 500% more useful (a rough sketch of that simpler rule is below).
e.g. https://imgur.com/a/1Y7HviK - what rules govern this half-arsed speckling of road names?
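For what it's worth, the "simpler" rule wished for above is easy to sketch. Here is a hypothetical greedy placement pass in Python - the `Box`, `label_box_for`, and road names are mine, purely for illustration, and this is not how Apple's or Google's renderers actually work: label every road visible in the viewport once, as long as the label's bounding box doesn't collide with anything already placed.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box in screen coordinates."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        # True if the two boxes intersect.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def place_road_labels(visible_roads, label_box_for, occupied=()):
    """Greedy pass: one label per visible road, skipped only if it would
    overlap something already on screen (other labels, icons, etc.).
    `label_box_for(road)` is a hypothetical helper returning the candidate
    Box for that road's name at the current zoom, or None if it can't fit."""
    placed = list(occupied)
    labels = []
    for road in visible_roads:
        box = label_box_for(road)
        if box is not None and not any(box.overlaps(p) for p in placed):
            labels.append((road, box))
            placed.append(box)
    return labels
```

Real renderers juggle much more (curved labels, tile boundaries, priority classes), which is presumably where the speckling comes from, but "fill the screen unless it collides" is not an exotic requirement.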
My hypothesis is that certain products need users and feedback to be good. Maps is one of those, which is why they had to release it in a ‘bad’ state. Apple Intelligence, I think, is another such product.
Apple needs to review whichever firm they outsourced review of Maps locations to, because my report got my local hackerspace marked "permanently closed" when all I did was correct the address, map pointer, and capitalisation of the name.
I haven’t had any issues with the Maps review. Seems like perhaps you submitted a change that was normalized as an entirely new business due to having a different address, location and name. Have you tried to zoom in and see if there’s a new marker with the info you submitted?
You can imagine, if a business has changed its address, location, and name, that users would appreciate a “Closed” pin for the previous name and location instead of wondering what happened to the business that used to be there.
> *Seems like perhaps you submitted a change that was normalized as an entirely new business due to having a different address, location and name. Have you tried to zoom in and see if there’s a new marker with the info you submitted?*
Nope, no new marker at all.
I agree. I think we believe that Apple's days of long-term skunkworks development are over... I don't think it's as dramatic as, say, the years since the PA Semi acquisition, or the "secret" Intel port, but they do some long-term planning.
(Apple Originals, their production house, is also an example. A huge bank of original prestige TV, subsidized by iPhones... they're just still finding a way to market it.)
Or Siri.. no wait.
Though, in fairness, I don't find any of the voice assistants very useful. Siri is probably not quite as good as Alexa though. I mostly care more about Siri because I use CarPlay when driving.
Depends on where you live. In SE Asia it's useless.
They still "did"
Years ago I remember a detailed comparison of Apple and Google maps, showing a lot of flaws with Apple Maps around contrast, lack of detail, misleading iconography, and other issues.
Has it improved that much? Does anyone remember what I'm talking about?
That's Justin O'Beirne's site. It has mostly improved but he wasn't really that negative on it. He just used to work there so he was being extra critical since he knew them.
On the contrary, he often seemed undeservedly uncritical when he talked about Google, going "wow, this is so detailed, they must be super geniuses who did this with computers" about something like POI locations that Google had actually gotten by hiring a ton of contractors to do the work by hand.
It still sucks. No amount of fancier graphics can make up for their lack of ground truth in terms of opened and closed businesses. I just spot-checked the newest cafe in my neighborhood, which opened 3 weeks ago, and it's still not present on Apple Maps, and another place that closed months ago remains on Apple Maps. It is a demonstration of the fact that you can abuse your monopoly to push a third-rate product on 10% of your users.
I think businesses update Google Maps but don’t bother with Apple Maps, and that leads to this state.
Google Maps does seem to be more complete with respect to businesses although I prefer Apple Maps for in-car navigation. OSM has both way beat with respect to hiking trails and the like.
I submitted an address update for a hackerspace to Apple Maps last week, which only got the business marked as "permanently closed".
Whichever firm Apple's contracting out to review Apple Maps reports isn't doing a good job.
In your other comment [0] you mentioned you also updated the name and location of the business. I’ve never had a single issue with Maps review and I find my corrections are usually accepted in under a week.
[0] https://news.ycombinator.com/item?id=42611345
This is because a) Google is a data-harvesting company and Apple is not, and b) the vast majority of businesses only update their info on Google due to market share (if they update their info anywhere).
You can verify this for yourself by looking up a business on Apple Maps and seeing if there’s a “Claim this Place” button.
It is very easy to submit corrections to Apple Maps and they usually accept them within a week.
For me personally, I would much rather use a superior maps app for maps, and use the data harvesting/advertising company’s website as a business directory, or ideally get the hours directly from the business’ website or social media profile since, like I mentioned previously, they often fail to update their info even on the data harvesting website.
Big business hours have always been reliable for me in Apple Maps.
I always call small businesses to ensure they are open. Why trust a small business operator to update Google/Apple in real time when I can spend 20 seconds to press the listed phone number and confirm it?
And you would find the business's phone number on Google I assume, because the business I mentioned is simply not present on Apple Maps.
The small business has even more incentive to keep it updated. I don’t speak the language of every place I go and it’s a huge waste of time to hope they’re open. And updating business hours might take a few minutes, but fielding phone calls takes a lot more effort and time. If I have to call a business to find out basic information, I’m not going there.
They have the incentive, but not the technical capability or trust for line level staff to be able to login to the business’s Apple or Google account and change the hours.
> They have the incentive, but not the technical capability or trust for line level staff to be able to login to the business’s Apple or Google account and change the hours.
TIL Apple Business Connect exists to update Apple Maps info: https://businessconnect.apple.com
> If you have any modicum of site reliability experience, this seems like an unsatisfiable set of constraints. It seems literally impossible, yet here they are claiming that they have done it.
His first instinct was right. It seems impossible, because it is. Unless I can run the entirety of the "Private Cloud Compute" on my own hardware in my own firewalled network, I 100% believe that the pipeline is compromised; our data is siphoned off and sold to advertisers, especially now that they know they can do it and get less than a slap on the wrist: https://news.ycombinator.com/item?id=42578929
> Hell, the iPhone is a fully capable cinema camera these days
No, it's not. The sensor in an iPhone is AI/ML'd up the ass to hide all the noise because it has 1µm sensor wells.
A Panasonic video-oriented Micro Four Thirds mirrorless (so not even anywhere near 35mm) like the GH5 has 3-4x that.
A Sony Alpha 7 III? Six times the sensor well size (a quick back-of-envelope on those pixel pitches is below).
I don't care how many megabits of video bandwidth you throw at it, how fancy you think "raw" shooting is, or how fancy your sensor technology is; nobody these days has anything that is even close to 2x better than anyone else. The top sensors from all the major players are pushing the limits of physics, and have been for a long time.
No amount of AI/ML shit will give you depth of field and bokeh that looks as nice as a big sensor and a fast lens with nice shutter leaf shape.
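For context on those ratios, a back-of-envelope check (my figures, taken from approximate published sensor dimensions and resolutions, not from the comment above): pixel pitch is roughly sensor width divided by horizontal pixel count.

```python
# Rough pixel-pitch comparison. Sensor widths and resolutions are
# approximate published figures (assumptions for illustration only).
sensors = {
    'iPhone 48MP main (1/1.28" type)':          (9.8, 8064),   # width in mm, horizontal pixels
    "Panasonic GH5 (20MP, Micro Four Thirds)":  (17.3, 5184),
    "Sony A7 III (24MP, full frame)":           (35.8, 6000),
}

for name, (width_mm, h_pixels) in sensors.items():
    pitch_um = width_mm / h_pixels * 1000  # mm -> micrometres
    print(f"{name}: ~{pitch_um:.1f} µm pitch")
```

That lands at roughly 1.2 µm vs ~3.3 µm vs ~6 µm, i.e. the same ballpark as the ratios quoted above (the exact iPhone figure depends on which camera and generation, and on whether you count the binned quad-bayer pitch).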
I’m withholding judgement on Apple Intelligence until iOS 18.4 is shipped in May. That is when they plan to release a revamped Siri with better contextual and personalized responses. For instance, AI/Siri will be aware of what is currently on the screen when responding and also integrate personal data across apps.
Ultimately Apple’s strategy of a privacy focused AI will be a winner for a consumer device with access to sensitive personal information. It’s a question of whether they can pull it off technically.
>Then they casually dropped the holy grail of trusted compute...
By which I think he means the AI stuff runs on your machine rather than in the cloud. For me that's not a holy grail at all, or even something I'm terribly interested in. I downloaded Apple AI on the MacBook, found it quite meh, and am now seeing if there is a way to remove it, as it uses quite a lot of GB of memory. I can see that for someone wanting to use LLMs on confidential corporate data that would be important, but that's a specialist use case that I don't think Apple Intelligence is particularly good for.
That corporate context is a good candidate, yes, but I think it's simpler than that. Assuming whatever Apple and Google cook up these next few years is essentially identical from a user standpoint, you can assume that Google's will be selling off every microscopic datum they collect from you, and know for a verifiable fact that Apple's will not (and cannot).
> If this OS were shipped to consumers, you would have a nearly unhackable system that would make it basically impossible to tinker with.
I mean you can hack it the same way you would hack any other Darwin platform
I more meant untinkerable than unhackable. Oops, oh well.
Imagine how much better you would be at typing if you didn't have a backspace key.
This is very funny. The picture of Stalin and Yezhov was exactly what was going through my mind as an Apple Store sales guy explained the photo clean up feature to me. It felt rude to bring it up though.
This is why I switched to Guix lol
I find it super interesting that Apple is x-raying the PCBs for their compute nodes. Guess they took Supermicro inserting malicious devices into their servers very seriously.
* allegedly inserting malicious devices according to a report that was never substantiated
I, personally, am convinced it has and is happening.
It's really amusing to me that people are willing to believe it happened/is happening without proof to back it up.
This is the same argument that was used for decades to suggest that NSA was not hoovering up all voice communications though.
It is technically possible. There is adequate motivation by capable parties to do so.
Apple would be negligent if they did not make a serious effort to validate the hardware.
The whole episode is fascinating to me in that Bloomberg is a reasonable-quality news organization, and something obviously convinced editors there to stand their ground in spite of no obvious (presented) evidence. I agree, though, that absolutely no proof has come to light, which makes me seriously question the whole thing.
There is abundant evidence that these type of hardware inserts exist and are being deployed. Why do you think it isn't happening?
Apple shipped “Apple Intelligence” before Apple even invented the term.
Before the AI craze you could search Photos on iOS based on content and metadata. You could lift subjects off photos with a long tap and copy them, recognize faces and make montages based on inferred relationship with them. And all of this is done on-prem, on your device.
These are very subtle, nice features. Apple had to put a name on all of these, otherwise there would be no marketing material.
Yes, Apple does a lot of feature-related marketing these days, which changes their naming priorities.
I remember reading it was a post-Jobs transition thing, where the key message about products transitioned away from vibes and overarching slogans ("shuffle", "the internet computer"), to features ("iPhone X", "iPhone XS", "iPhone XR", "iPhone XS Pro (??)").
I'm sure that a lot of the old-guard exec team knows it's a loss... I'm just not sure how they feel about it or why.
This is true. Craig Federighi and others have said in the past that they’ve added machine learning based features for a long time in the OS (with examples). It’s just that generative AI took off very quickly and now some people are imagining that it’s the only (or major) AI.
Aspects of AI normalize over time. There was a time when the route finding in digital maps would have been considered almost magic.
They also (poorly) implemented address recognition in their app --- cf. an application which had this at its core:
https://simson.net/ref/sbook5/
(and the source of which is available)
They have now shipped half-baked features only to fill “Apple Intelligence” with something, features that they likely wouldn’t have released in their current state otherwise.
Apple started going downhill around the time they released the Vision Pro.
So that means all of the big tech companies are going down: Facebook, Google, Apple.
Only Microsoft remains strong though for how much longer remains to be seen.
A great time for startups.
Is Microsoft that strong? They've got a stranglehold on medium to large businesses but that's about it. Very few people actually want to use their products, they just think they have to...
But yes, it's a great time for startups. I'd argue it always is and always has been.
> that’s about it
Just 3 trillion or so dollars, that’s about it
All the companies the parent was talking about have similarly massive valuations, yet none seem to have unassailable positions.
Which is what the whole thread seems to be about (Apple squandering an opportunity and naturally others who are doing the same), not their current market cap.
There are a very large number of steam games that were written for Windows. There are similarly large numbers of commercial products with value for particular companies.
Could they be emulated? Sure. Maybe not 100% (see Linux), but mostly yes. But then you have to make that work, ensure that the emulations keep working, etc.
That is the real wall around Windows. Office has similar walls -- large numbers of spreadsheets, for example, many of which are critical and which do complex things. There are lots of programs that can read Excel spreadsheets, but perfect compatibility is difficult.
And there are lots of people who know these products -- re-educating them is a secondary wall, because it represents a lot of work for the customers.
Oh, I know all about the Excel wall... It's hard to convince boomers there's something better, because it's all they know, but when I was in university most of my professors only accepted Google Sheets/Docs documents lol.
Microsoft cloud syncing is absolutely atrocious, and when people pass around Excel spreadsheets they inevitably get messed up, or half the information is lost because there's no single source of truth that everyone adds to. One can argue Google Docs probably isn't technically better, but collaboration is 100x easier.
> large numbers of spreadsheets, for example, many of which are critical and which do complex things
And which all need to be rewritten into database backed apps, IMO.
> There are a very large number of steam games that were written for Windows. There are similarly large numbers of commercial products with value for particular companies. Could they be emulated? Sure. Maybe not 100% (see Linux), but mostly yes. But then you have to make that work, ensure that the emulations keep working, etc.
Wine and Proton do an excellent job of that kind of emulation, keeping old binaries alive.
For new apps, Android and iOS are now enormous markets. Consoles are huge. Windows gaming is big enough, but I don't think targeting only Windows is worthwhile. Is it really more difficult to use SDL + Vulkan (or insert any other multi-platform graphics API) versus Windows APIs + D3D12? When everyone is building for multiple platforms, it makes that moat a lot thinner (a minimal sketch of the SDL route is below)...
For me, the success of the Steam Deck shows that "desktop" Linux can be successful, can be used by the masses. Game companies are even tweaking their games to work better on Proton or straight up porting them, very few are philosophically Windows-only.
And remember how Android absolutely destroyed Windows Phone even though Microsoft bought the largest cell phone manufacturer in the world... Not saying it will happen to Windows but I think it's a possibility...
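On the SDL + Vulkan point above, a minimal sketch of what the platform-agnostic entry point looks like. I'm using the pysdl2 Python bindings purely for illustration (a real game would more likely be C or C++, and would obviously do far more); the same few lines open a Vulkan-capable window on Windows, Linux, or macOS (via MoltenVK), assuming SDL2 and a Vulkan driver are installed.

```python
import ctypes
import sdl2  # pip install pysdl2 (plus SDL2 itself)

def main():
    # Initialise SDL's video subsystem; the call is identical on every platform.
    if sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO) != 0:
        raise RuntimeError(sdl2.SDL_GetError().decode())

    # Ask for a window that can back a Vulkan surface.
    window = sdl2.SDL_CreateWindow(
        b"cross-platform window",
        sdl2.SDL_WINDOWPOS_CENTERED, sdl2.SDL_WINDOWPOS_CENTERED,
        1280, 720,
        sdl2.SDL_WINDOW_VULKAN,
    )
    if not window:
        raise RuntimeError(sdl2.SDL_GetError().decode())

    event = sdl2.SDL_Event()
    running = True
    while running:
        while sdl2.SDL_PollEvent(ctypes.byref(event)):
            if event.type == sdl2.SDL_QUIT:
                running = False
        sdl2.SDL_Delay(16)  # don't spin the CPU; a real loop would render here

    sdl2.SDL_DestroyWindow(window)
    sdl2.SDL_Quit()

if __name__ == "__main__":
    main()
```

From there, Vulkan instance and surface creation go through SDL_Vulkan_GetInstanceExtensions and SDL_Vulkan_CreateSurface, which are the same calls everywhere; the D3D12 path, by contrast, only buys you Windows and Xbox.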
You might see a need, but it isn't going to happen, especially in small and medium sized businesses.
Wine and Proton work for many things, but far from everything. Plenty of games don't work well on the Steam Deck.
Ergo, Windows isn't in any near-term danger. As far as Android and iOS being markets, sure. So what? It doesn't threaten Microsoft that there are additional markets.
> Plenty of games don't work well on the Steam Deck
Are you just referring to the games with anti-cheat? If so, I do agree, though I think with the success of the Steam Deck the anti-cheat providers are (or better be if they don't want to get their asses kicked) going to be looking seriously into options.
Outside of anti-cheat, I've yet to find a game that doesn't work on Steam Deck. Even the ones with the worst ratings will usually launch and you can play if you plug in a mouse and keyboard. Obviously not a great experience, but those games would have the exact same problem on any PC, Windows or Linux. It just happens that most Windows PCs have a keyboard and mouse already plugged in.
Well, of course a giant company like this is probably not in near-term danger... it isn't impossible though: companies that get too big can be split by force; look at Standard Oil or Bell.
But longer term, nothing is certain: look what happened to IBM or typewriters.
Nothing comes close to Microsoft Excel. It's bizarre to be honest how bad the competition is in core business applications.
Apple wishes they had anything close to Windows (or Android's) market share. Yes, people want PCs because a lot of software just won't run on Apple hardware, and Apple hardware is too expensive.
Doomposting aside, why do you actually think all these companies are going downhill?
I think they're all doing pretty well at the moment.
Yeah, I mean they are financially doing better than any group of companies at any time in human history, kind of like the exact opposite of this downhill claim.
Even with Google's monopoly legal issues, they are more valuable than ever.
You know, there’s a lot to be said about this topic, but…. “Microsoft is strong”???? Lmao
Outside of various bubbles, Microsoft still dominates desktop computing and Azure has pretty strong market share itself. I actually find it fairly remarkable that, in spite of the Windows OS not mattering as much any longer--especially on the server--and Microsoft absolutely tanking in mobile, the company is still very strong and relevant.
> They sell bicycles for the mind.
Given the lack of a single good supporting example (what, did PCs have no word processors that reacted to backspace keys?) it seems like these are fantasy bicycles...
And since no evidence is needed to believe, you can of course believe in Intelligence that can act better than your brain (what are "those pics from San Francisco", you've snapped a hundred there, which 5 would you like to post?)
And yet the disillusionment comes a bit faster than expected, why not give Apple a few more decades to iron out some kinks on such a revolutionary fantasy path?
It’s ok to just say you don’t like Apple Intelligence.
The article definitely has a lot more depth of meaning than this gives it credit for.
The author has strong opinions about where GenAI should be used vs. not, e.g. they don’t like the feature to remove a person from a photo. That’s respectable, but I imagine many people feel differently.
There's a big difference between features that journalists should never use vs. consumers cleaning up their vacation snaps.
Author isn’t being reductive.
Apple was always going to fail at this, and even more so going forward.
LLMs are built on data, and copious amounts of it. Apple has been on a decade-long marketing campaign to make data radioactive. It has now permeated the culture so much that Apple CANNOT build a proprietary, world-class AI product without compromising on their outspoken positions.
It is a losing battle, because the more Apple wants to do it, the more users are going to punish them, and meanwhile other companies (ChatGPT, Anthropic) are going to extract maximum value.
LLMs are mostly trained on public data while Apple's privacy stance applies to private data. There's no conflict between them.
(Meta can probably train on private data but OpenAI and Anthropic seem to be doing OK without it as far as we know.)
All the LLM advances these days are from synthetic or explicitly created data too. You need public data mostly because it contains facts about the world, or because it's easier to talk about a book when it's "read" the book. But for a known topic area (as opposed to open Q&A) it's not critical since you can go and create or license it.
No, Apple’s privacy stance is about giving users control over data in ways they understand. Posting on Reddit or Arxiv is not a blank check to have your words be reused for LLM training, even if it’s technically public.
Apple’s slogan is “what happens on your iPhone stays on your iPhone”. I think “I published a paper” or “I posted on Reddit” are clearly out of scope - those things are happening in public.
I'm still having to look for a Lightning cable all around my house to charge that one freaking device, the iPhone, when everyone else switched to USB-C 10 years ago.
How is Apple Intelligence going to help me with that?
Your comment is a bit pointless on a post that has nothing to do with charging, but regardless, the iPhone 15 and 16 models both have USB-C.
Oh, I'm just astonished, reading this article, at how Apple can spend a huge amount of money and a lot of time making a private cloud LLM, yet can't be arsed to implement simple things like USB-C without literally having Europe force them to do it.
I feel like they have a plan: get the backend up to scratch to appease the tech people, while leaving the ways to interact with it purposely vague and incomplete for the non-tech folk.
They don't want to scare any part of their audience away from future uses of Apple intelligence. Their audience is tech and non-tech folk alike.
If the tech folk say it's safe and the non-tech folk get comfortable with the basic AI features then they're onto a winner.
How many people's parents/grandparents have iPhones because they're simpler for them to understand, and are also scared of or don't understand this 'AI thing'? I think Apple have been quite savvy in introducing it slowly and are probably watching the metrics like a hawk!
I suspect Image Playground is so creepy as a deliberate attempt to mark the images as clearly AI-generated when they get posted to social media?