I have a confession to make. I prefer Apple TV’s 2015 remote:
The remote was universally ridiculed for its “which way is up?” problem – too much vertical symmetry, which didn’t give your hand enough cues to know whether you’re picking it up the right way or the wrong way.
Apple tried a half-measure first; in 2017 they broke the symmetry by making the MENU button slightly distinct in visual and tactile ways. Hindsight is 4K, but I don’t think it had a chance of working – the tactile cues were too subtle, and the visual ones do not matter when you’re not looking:
So Apple overshot – the subsequent 2021 edition was a full-measure-and-then-a-half:
The remote shrank the touch surface but otherwise drastically increased the volume, and added four arrows, two new buttons, and a strange iPod-inspired click wheel interaction on top. And to me it started feeling a bit complicated, inching toward the very TV remotes that earlier designs ridiculed. (It also wasn’t as pleasant to touch, as the buttons feel a bit rougher.)
But the reason I like the 2015 remote is primarily because it introduced one of my favourite gestures in recent history: tap to see progress.
It’s hard to describe how wonderfully light this interaction feels every time I use it. You just tap anywhere on the remote’s top half, you see where you are in the video via a subtle UI, and then wait a few seconds for it to disappear. After this, doing the same in every other player – YouTube, Netflix, HBO Max, anything on a Mac or even the iPhone – feels clunky and heavy. In many of them, you can’t even see where you are without stopping the video!
It gets better. Tap a second time, and the elapsed time gets replaced by the current time, and the remaining time by what the clock will say whenever you’re done watching. I thought this was delightful and clever, sneaking in clock functionality without showing it all the time.
There is also this really nice gestural separation. While you’re watching a video, taps and swipes are safe. Anything “destructive” – anything that stops, rewinds, or fast-forwards the video – is on the “click” layer: press harder on the center to pause, or on either side to move forward or back.
What I’m describing feels mechanically similar to other input devices, but the devil is in the details. On smartphones, everything is a tap, so you don’t really get anything lighter. On a Mac, tap as a gesture could only be available to people who opt in to press to click on their trackpad (like I do) – but the fact that tap is the default for clicking means that can never realistically happen.
The Apple TV tap feels conceptually like Mac’s hover instead, but so much more pleasant and elegant and simple. (I want to prototype tap on a Mac as a lightweight “explainer,” showing tooltips there instead of on hover.)
To be fair, the tap gesture still exists in the still-current 2021 Apple TV remote, too – but the tap area is much smaller.
And just in case you were curious, these are the first two editions: the 2005 remote – shipped with the iMac, predating Apple TV – and the 2010 remote. (I’m referring to model years, because Apple’s own names are so confusing.)
I don’t have access to Apple’s user feedback, but I guess that Apple’s 2021 design was likely the right thing to do. But looking at four-and-a-half of these models side by side, I am still in the 2015 model’s minimalistic, unusual, innovative corner.
I was inspired by the video, and really enjoyed its exploration of a demanding game that’s composed of just a few mechanics that are done really, really well:
The number of inputs are small, but the expression those inputs allow is deceptively expansive. […] Derelict Star’s various areas are all built to explore the way movement systems function and even interact with one another.
I think of user interfaces similarly, and of their need to build a certain consistent vocabulary of names, gestures, interface elements, concepts, and so on. Perhaps in an enterprise app you right-click and discover something useful in a menu, and this will teach you about the usefulness of right-click menus in general. Maybe pressing ⌥ to get to alternate symbols on your keyboard would inspire you (either consciously or not!) to try holding ⌥ in said menus, only to discover this brings up useful alternative options. Maybe seeing a keyboard shortcut next to one of these options will suggest doing that next time, and so on, and so on.
I really loved this bit in the video that could apply to a lot more software than just videogames:
It took me maybe an hour to do this, but right on the other side is a checkpoint. The game is hard, but it isn’t cruel. It’s designed to challenge you, but it has faith in your ability to complete it.
The narrator uses the term “ludocentrism” to refer to games that ruthlessly prioritize the mechanics and gameplay over narrative, aesthetics, and so on. (“Ludic” meaning “relating to play.”)
Of course, the calculus of what videogames care about will be different from the goals of creative software or enterprise software; no one cares about the hero’s journey of the largest number in your Excel spreadsheet. But I think some version of ludocentrism applies to “boring” software as well. My beliefs here are probably something like this:
you can’t reduce everything to just functionality or just efficiency,
especially in creative moments of software use,
and people use software creatively much more often than we suspect, including software not thought of as “for creatives.”
Screwdriver handles evolved over the decades in response to user needs and usage patterns, with a few clever affordances: some for everyone, some for specific use cases that might not be obvious.
I think by now all the basic onscreen UI elements – input fields, pop-up menus, checkboxes, buttons, top menus, sliders, and so on – have similar richness, as do all the core input devices like a keyboard, a mouse, a trackpad, or a touch screen.
That doesn’t mean that everything is set in stone, that no changes are possible, and that stuff that fell out of favour can never be taken away – after all, computer usage, input devices, and conventions are evolving much faster than screws at this point – but that one has to be aware of the history so that the changes are intentional, not accidental.
A few select comments from under the video that I found interesting:
The Craftsman handles are also different colors for Phillips and slotted screwdrivers.
The fluted handle was patented. So anyone else wanting to make a screwdriver would have to pay the patent holder. So they tried alternatives to make more money. That is the real reason until the patent expired. Plus if they invented a “better” way and held the patent, others would have to pay THEM.
The Swedish word for screwdriver is “skruvmejsel,” which literally translates as “screw chisel.”
It’s hard to be in charge of continuity on a movie set. It would already be difficult under the best of circumstances: after all, you can’t freeze the sun in the sky, prevent hot drinks from going cold, cigarettes from extinguishing themselves, or entropy in general from doing all the stuff it loves doing.
But on top of that, scenes are shot out of sequence, and movies are shot out of sequence. There are pick-ups if you’re lucky, and reshoots when you’re not. About the only time your job will be noticed is if you mess up: cue Superman’s reverse CGI moustache, Josh Trank’s Fantastic Four wig situation, Commando’s damaged-then-pristine Porsche, and so on and so on. (This 7-minute YouTube video is a great walkthrough from an expert.)
Apple famously freezes time on their phones in all the promotional materials to be 9:41am. The specific moment they chose is a celebration of the first iPhone being unveiled at around that time, but it also makes production easy – while people won’t mind that the time on the screen doesn’t match the current time, or even that it doesn’t seem to advance at a normal rate, they will definitely notice if you happen to splice two screenshots with different times side by side, just because you didn’t anticipate that splice as you were preparing them. So it’s easiest to just avoid this situation altogether.
But what I didn’t realize until today as I was recording the previous post’s screengrab is that 9:41am is also enforced whenever you record your phone’s screen via QuickTime. It’s a peculiar feeling: Start recording, and the time on your phone jumps to 9:41. Yank the USB cord out, and it’s back in sync with the universe:
Oh yeah, the date changes too, for the same reason – to January 9, 2007.
In a time-honored Apple tradition, I can’t decide whether I’m annoyed at it (there seems to be no option to turn it off), or admire it.
First of all, correction for part 1 – the “focus mode” wasn’t removed. It was renamed to “quiet mode” and relocated to a different part of the UI, and I failed to spot it there. It’s still slightly perplexing, shiftily capitalized, and I doubt fully effective, but the effort is there:
I also want to warn you there will be no more positive things I say in this post.
Now that I’ve experienced the dialog myself in Photoshop 2026, and a few other dialogs that have been upgraded toward what Adobe calls “modern user interface,” how did it fare?
These are 2025 windows and their 2026 equivalents:
On the surface, it feels like a lateral move. I do not personally find the new design language (Spectrum) attractive, or even particularly “modern.” The gestalt remains off and things are still generally misaligned – they’re just misaligned in net new ways.
But it was digging into the window below that showed all the problems in the still-wet foundations…
…and a lot of them have to do with focus.
1.
The first field is not focused, so you cannot start typing the number after opening this window. You need to immediately move your hand to the mouse.
2.
If you click on any field, the value is not pre-selected, so you cannot start typing a new number then.
Principle: Defaults within fields should be easy to “blow away”
When a user activates a field, the current entry should be auto-selected so that pressing Backspace/Delete or starting to type will eliminate the current entry. Users can click within the field to deselect the whole, dropping the text pointer exactly where the user has clicked. The select-on-entry rule is generally followed today. (Sloppy coding, however, has resulted in the text cursor dropping at various unpredictable locations.)
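Tog’s principle above maps onto surprisingly little code. Here’s a minimal model of it as a pure function so the two activation paths are easy to see – a sketch with names of my own invention, not Photoshop’s (or anyone’s) actual code:

```javascript
// Minimal model of the "select on entry" rule.
function selectionOnActivate(value, via, caretOffset = 0) {
  if (via === 'keyboard') {
    // Tabbing into the field selects everything, so typing or pressing
    // Backspace blows the default away in one stroke.
    return { start: 0, end: value.length };
  }
  // Clicking instead drops a collapsed caret exactly where the user
  // clicked, ready for a surgical edit.
  return { start: caretOffset, end: caretOffset };
}

// In a browser, the keyboard half is essentially one line:
//   input.addEventListener('focus', () => input.select());
// (the mouseup that follows a click then collapses the selection to a
//  caret, which matches the click behavior described above).
```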
3.
Clicking on parts of the input field doesn’t bring it into focus even though the hover state promises it. (Discrepancies between hover and focus handling are a horrible new thing I’m starting to see more in recent interfaces.)
4.
Simply backspacing through the field shows a crude error modal and – to add a second injury to the first injury – the dialog removes focus from the field!
5.
Tabbing now goes through “Pixels” menu on the way from Width to Height, making it harder to type width → press Tab → type height → press Enter, in a nice quick keyboard gesture.
I recognize this is a tricky one, because it exposes a core tension with tabbing: some people use it for comprehensive keyboard access, but others want an accelerator “express train” with only relevant stops. However, macOS already has a “Keyboard navigation” setting for that – you can choose whether tabbing should go through all the controls, or only those you get to type in. Not only does Photoshop ignore that preference, but it’s inconsistent with itself – you can see that you cannot get to Anchor via tabbing anyway!
6.
Clicking on the “relative” checkbox or canvas extension color does not restore focus to the last control like it used to.
7–∞.
There are tons of other transgressions. Some are downwind from focus; for example, undoing after moving a slider no longer works, because the ⌘Z keystroke is now swallowed by a UI element that doesn’t know what to do with it. Some are unrelated: Pull-downs are now of the slower kind, pressing ⌥P results in more blinking, and this tooltip below feels so cheap that I’m surprised it’s not a talking point of the current U.S. administration:
I am tired even just noticing all this. (What is that weird clump of pixels on the left of the bottom edge!? Did no one spot it before launch?)
So now what?
I generally avoid such harsh labels on this blog, but: this is awful work.
I’m angry. (Clearly.) We should all be angry in the face of stuff like this. This is how people get fed up with software – because it feels unstable and deteriorates on its own without needing to.
I know I brought up that an existing power user base can be a huge pain in the ass, and I am a decades-old Photoshop power user. But this is different from other examples where the product needs, or at least wants, to evolve past its core audience or toward a different market. For Photoshop here, nothing I see indicates any change in course or clientele – and yet all of these good moments in UI that used to help me out no longer exist.
Plus, all those transgressions are solved problems. Those issues are not buried in pages of heavily litigated patents, or in the seven collective brains of world-class interface designers whose driveways are presently occupied by cash-filled trucks sent over by frontier companies. This isn’t some long lost art that requires archaeologists to decipher. This feels like carelessness and laziness in the face of basic UI engineering; in a likely internally-motivated effort to refresh the interface, the team threw out an entire nursery’s worth of babies with the bathwater.
It’s not just about disservice to craft. It’s not even about disrespect for change management, trivialization of institutional memory, and disinvestment in quality assurance. This isn’t only, in Tog’s words above, “sloppy coding.” This is also a failure of imagination. It’s not that hard to picture people spending 8+ hours a day going through these windows for years if not decades to come, and it’s not hard to add and multiply all of these microfrustrations into numbers that should make one pause. With this many paper cuts, you need to start thinking about establishing a blood bank. How can you expect people to use a professional tool effectively if you throw in so many roadblocks?
In an internally-motivated UI refresh like this, you not only need to meet users where they used to be, you also ideally have to give them more to cover for the pains of change. Sometimes that “more” is better storytelling – here, no one even tried to really sell me on the new interface – but ideally “more” means actual felt improvements. I’m not on the team, but it’s not that hard to imagine some of them:
Change those annoying modals that announce typing errors into something lighter and more modern, like attached tooltips.
Add more comprehensive equation support so e.g. I could type “660*2” like I can in more and more apps.
Add a bit of memory/stickiness to some options (like Use Legacy in the first window), so I don’t have to keep toggling them over and over again.
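That equation idea is cheap to prototype, too. Here’s a minimal sketch of equation support for a numeric field – the function name is mine, and it deliberately skips parentheses and unary minus, handling just the usual +−*/ precedence:

```javascript
// Minimal sketch of "equation support" in a numeric field: evaluates
// simple expressions like "660*2" or "100+20*3". No eval(), no
// parentheses -- just enough for a dialog box.
function evalFieldExpression(text) {
  const tokens = text.match(/\d+(?:\.\d+)?|[+\-*/]/g) ?? [];
  // First pass: fold * and / into their left neighbor, so they bind
  // tighter than + and -.
  const terms = [];
  let i = 0;
  while (i < tokens.length) {
    let value = parseFloat(tokens[i++]);
    while (tokens[i] === '*' || tokens[i] === '/') {
      const op = tokens[i++];
      const rhs = parseFloat(tokens[i++]);
      value = op === '*' ? value * rhs : value / rhs;
    }
    terms.push(value);
    if (tokens[i] === '+' || tokens[i] === '-') terms.push(tokens[i++]);
  }
  // Second pass: fold + and - left to right.
  let result = terms[0];
  for (let j = 1; j < terms.length; j += 2) {
    result = terms[j] === '+' ? result + terms[j + 1] : result - terms[j + 1];
  }
  return result;
}
```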
I started this post talking about a setting, and there is another setting in Photoshop, buried on the last page – you can turn off this “modern user interface” that feels so underbaked the moment you start actually using it. But is that a real solution to anything? Toggle it on and the existential dread comes back: Am I going to miss out on some good stuff? When is the hammer going to drop? It’s not a tax break, it’s only a tax extension.
Even this view above shows so little care, it would ordinarily deserve its own post.
Before dark mode became mainstream in the late 2010s, there were two main customers of dark UI themes: programming and photo/video production. But, to the best of my knowledge, they arrived at that preference from two very different angles.
Programmers’ fondness for dark mode was a result of decades of bad display technologies. The early CRTs were so awful, the burn-in risks so real, and the pixels so fuzzy and headache-inducing, that you wanted as little of the screen to light up as possible – hence, defaulting to black backgrounds for everything computers did.
These challenges were there all the way through the 1980s, really, teaching generations of coders that computers meant light letters on dark backgrounds. Games moved away from being “in space” or “at night” as quickly as they could, text editing and spreadsheets went for paper-like livery soon after that, but programming never meaningfully existed on paper, and so the skeuomorphic pull wasn’t really there.
(Have you ever heard of the term “reverse video”? What’s kind of confusing about it is that its meaning was reversed around that time.)
AV professionals took a different route. They already had CRT calibration, gray walls, and monitor hoods so that light from outside wouldn’t contaminate content colors – and when computer UI started appearing on those CRTs, it was likewise best to keep it as dark and as neutral as possible.
Today, things are more flexible. Many people prefer one theme over the other for any of many legitimate reasons, most leave dark theming synced to daylight, and display technology can handle all themes so well that it jumped ahead of our brains, which still have some interesting asymmetries in processing light shapes next to dark ones.
As users celebrated dark mode appearing in popular apps and services in the 2010s, some had to catch up the other way: Apple TV added light mode (for some reason) in 2017, and Affinity apps celebrated a new light UI option just earlier this year.
Most programming text editors still default to dark, but allow you to switch; as a software category they were probably the first to fully embrace color theming.
But what led me to writing this post was a delightful discovery today of this setting:
Why, of all apps, would iOS Photos allow you to switch to dark mode, and only while editing to boot?
I think this might be because of the above tradition of pro AV apps, where we learned it’s good for visuals to be surrounded by black; a little nod to its earlier professional roots – similar, perhaps, to the story of the Clear button in calculators.
But I had two more thoughts. First, for all the reasons above, to me at least dark mode still has connotations of “professionalism,” and toggling the option makes me feel like a bad-ass pro whenever I’m editing a photo. I wonder if others feel that way, too.
Second, dark mode looks different. Dark UI only when editing means it’s easier to spot whether I’m editing or just browsing, and be ever so slightly better oriented.
(In general, apps today are much more similar-looking, and I’m surprised that neither iOS nor Android allows you to switch the theme per app, just so it’s easier to know where you are as you move around quickly.)
To follow up from yesterday’s post, in Figma, object selection actually goes onto the undo stack. This is because in a professional tool with objects in multiple levels of hierarchy, it might take a while to construct a selection to work on – and since selection is always just one accidental click away from being completely cleared, undoable selection is extra protection.
However, at the same time renaming a file – or changing settings like file access – is not undoable. This is in part because we didn’t feel people would understand they could cancel out their rename this way (Safari too used to have “reopen last tab” under ⌘Z, until it reverted to Chrome’s ⌘⇧T), but mostly because you could accidentally undo through a file rename during regular work if you were not careful, without noticing, and that felt like it’d have more profound consequences.
In some ways, it helped me to think of these not as “ineligible for undo” but rather “living outside of time.” The moment a file is renamed, it will always have been named that way. (For the purposes of undo, at least. You can acknowledge anything you want on the version history screen.)
I’m not saying these are universally correct choices – as a matter of fact, some users find undoable selection (at least initially) pretty confusing! – but mostly sharing these as examples of intentional thinking about what deserves undo, and what should be exempt from it and taken care of elsewhere.
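To make the distinction concrete, here’s a toy sketch of such an undo stack – all names invented for illustration, not Figma’s actual architecture – where selecting is undoable but renaming “lives outside of time”:

```javascript
// Toy sketch of an undo stack where selection changes are undoable
// but renames are deliberately exempt.
class Editor {
  constructor() {
    this.selection = [];
    this.fileName = 'Untitled';
    this.undoStack = [];
  }
  select(ids) {
    // Selection goes onto the stack: a deep multi-object selection is
    // real work to rebuild, and one stray click can clear it.
    const previous = this.selection;
    this.undoStack.push(() => { this.selection = previous; });
    this.selection = ids;
  }
  rename(name) {
    // Deliberately NOT pushed: for the purposes of undo, the file
    // "will always have been named that way."
    this.fileName = name;
  }
  undo() {
    const step = this.undoStack.pop();
    if (step) step();
  }
}
```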
The gist of it is simple: the mechanics of following a link are not important, and should be replaced by something that can make the link stand on its own. This is important for screen readers, but also for basic scannability: a “click here” label has a lousy scent and requires you to take in the surroundings to understand what it really does. The rule is, in effect, a variant of “show, don’t tell.”
(In modern days, you can also add another transgression: on touch devices one cannot click, but only tap.)
There is a similar rule about button copy design. Button labels, too, should be self-sustainable. Below is a good example (just reading the button lets me understand what I’ll achieve by clicking it), juxtaposed with the bad one (“OK” is so generic you have to read the rest of the window).
Earlier this week, I was passing some train cars on my coffee walk, and saw this bit of UI:
Why are these okay, and “click here” is not? Here’s why, I think: Yes, the ultimate goal is to move a train car, or empty it, or send it on its way. But here, the mechanics matter, too. They’re dangerous. They require preparations. No one says “I’m going to open my laptop and start clicking on links,” but I imagine people say “we have to jack this car” or “we need to lift it.” Even “here” has depth: these are specific tool mounting points. Choosing the wrong “here” will have consequences.
But, going back to the web, avoiding “click here” in strings isn’t always easy. Imagine trying to put a link in the sentence “To change your avatar, visit the profile page.” I’m personally never sure how to linkify it well:
To change your avatar, visit the profile page.
To change your avatar, visit the profile page. To change your avatar, visit the profile page.
Linking “change your avatar” seems correct since it points to the eventual outcome, but then it leaves the actual destination dangling and unlinked – like putting an accent on the wrong syllable. “Visit the profile page” is better than “click here,” but it’s still not scannable. Linking the entire sentence seems strange and complicated to me, and I also disagree with Tim Berners-Lee, who on the page I linked to above seems to suggest this should be…
To change your avatar, visit the profile page.
…just because this might make a user think there are two separate destinations and actions, and contribute to a wrong mental model.
You could, of course, simplify this to “Change your avatar,” but while that would work in a UI string, it wouldn’t within a larger paragraph of text, or a blog post.
(This is one of the meta posts about this very blog. If that’s not interesting to you, skip to the next one!)
I thought I’d share a few of the small design details I am proud of for this small blog!
1.
After years of being annoyed at Slack for mishandling image sizes, it was important for me to show the screenshots (at least of the desktop UI) at their precise 100% size, if possible. I think that helps to get a better sense of the scale and feel of things. This was harder than I expected (since I still want images not to grow too wide or too tall), but hopefully works well now.
2.
I wrote some extra code so that if an image has edge transparency or even soft shadows, it will be aligned accounting for all that. I think that feels elegant – especially on a blog that practices asymmetry probably to a fault.
3.
If the images or videos blend too much into the background, they get a lil border to separate them – but only in light or dark mode as needed. This is so that the whole page rhythm holds better together. (Manually assigned so far. Would be curious if one can make this automatic.)
4.
Speaking of dark mode, I almost figured out how to make videos with transparent pixels so that they look good in both dark and light mode. (Chrome only. Still working on it for Safari.)
5.
I want autoplay videos (without sound!) so that it’s easier to see interaction design – basically, a modern version of what GIFs used to provide. This has been challenging and required adding some JavaScript, and is still not done! But it’s starting to feel nice.
6.
Given all the quotations I do, I added hanging quotes to the text. Wildly, they are still not really supported in CSS (Safari is the sole exception), so that required some manual intervention.
7.
Short lists are (automatically) spaced differently than long lists. I’ve always wanted to try that.
8.
I’m having a blast with the pixel fonts I recreated from PC/GEOS. I keep adjusting the glyphs, adding kerning pairs, etc. It’s fun to keep improving a font as you’re improving its surroundings; I just redrew the @ glyph you can see above!
The 2021 revision of the Mini Cooper ramped up its Britishness by introducing Union Jack flag-inspired turn signals. They looked okay when stationary:
But when actually indicating an intention to turn, people started realizing what happens when you have two types of mapping fight each other:
On one hand, the left-turn indicator was on the customary left side. On the other, the light looked like an arrow – and the arrow was pointing to the right.
I don’t know how many people were actually confused by it, but it made for a few spicy pieces with “stupidest turn signal ever” and “most annoying thing” in their titles. The company’s official response was:
Mini has chosen the Union Jack lights to highlight Mini’s British heritage, and has been using them for a while. With regard to the turn indicator light pattern, there should be no trouble at all for a driver to understand, when seeing the full rear of the car, which direction is being indicated.
Mini has not heard any concerns from customers regarding the rear turn indicators, and has in fact received positive feedback about the taillight design.
It didn’t help that one of the worst cars this side of the Cybertruck did something similar in the 1950s:
Drama aside, I did agree with this commenter (emphasis mine):
It doesn’t cause massive confusion, but taillights should cause no confusion for anyone.
I can think of one modern version of a similar issue. If you use the iPad in landscape mode, the volume buttons seem to go “the wrong way”:
Is this anything? Probably not. I imagine it’s better to be consistent and allow motor memory to develop between all the iPad orientations, and throw in the iPhones, too. But if you only ever use your iPad in landscape, this might feel, perhaps, like “the stupidest volume controls ever.”
Oh, and the subsequent Mini revamp in 2024 solved the issue by making the turn signals less like arrows:
Wakamaifondue is a web tool to inspect font contents, and it starts by you dropping a font file (.ttf, .otf, or .woff) into a browser.
It handles file dropping so thoughtfully, it’s worth pausing and recognizing it:
Here’s what’s great about it:
You can drop the file anywhere. There is no designated small drop area like in some other apps; every last pixel of the window is ready to receive your file, so you can drop without worrying.
You get a hover state confirming you are safe to drop.
You can drop the file on other screens, too!
Why is all this important? Because dropping a file into a browser is a notoriously frustrating experience. If the tab doesn’t claim the file, the browser left to its own devices will do anything from replacing the current tab with the contents of the file, through opening a new tab, to… starting to download the file you just dropped and asking you for its new location!
It is frustrating when the failure mode of an action is not just that action failing – already costly here, since repeating a drag is more work than e.g. repeating a keystroke – but also having to do extra clean-up steps.
Wakamaifondue gets this right, and allowing you to drop a file on any screen in particular is very thoughtful. Your cursor holding a file indicates your intentions rather strongly – when you see a person wearing a wedding dress, you don’t think “I wonder what they’re up to today?” – so there should be no need to switch to a certain mode or to navigate to an “import screen” beforehand.
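For the record, the drop-anywhere pattern itself is small. Here’s a sketch of how a page like this might wire it up – handler and function names are mine, not Wakamaifondue’s actual code:

```javascript
// The pure part: recognize a file drag from DataTransfer.types, so the
// page only lights up (and only swallows the event) for actual files.
function isFileDrag(types) {
  return Array.from(types ?? []).includes('Files');
}

// In the page, the rest is attaching to window -- not to a small drop
// target -- and calling preventDefault() so the browser never falls
// back to navigating to (or downloading) the dropped file:
//
//   window.addEventListener('dragover', (e) => {
//     if (isFileDrag(e.dataTransfer.types)) {
//       e.preventDefault();               // "you are safe to drop here"
//       document.body.classList.add('drop-hover');
//     }
//   });
//   window.addEventListener('drop', (e) => {
//     if (!isFileDrag(e.dataTransfer.types)) return;
//     e.preventDefault();                 // don't let the tab navigate away
//     handleFontFile(e.dataTransfer.files[0]);
//   });
```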
Right next to the generic function to delete photos by going through them one by one, my camera has a specific version – Delete All With This Date:
Below the actions to close the tab, and close all other tabs, Chrome has a specific version called Close Tabs To The Right:
In After Effects, next to typical save options, there is this – Increment And Save – which saves a file and changes the number at the end to be one notch higher (Project 2 → Project 3, and so on):
I’m mildly fascinated by these strangely specific accelerators.
The one in the camera is genuinely useful. Photo projects are often day-long affairs where you download the photos at the end of the workday, but might still keep them on the card just in case. Being able to quickly delete a day’s worth of photos makes a lot of sense, saving you from having to go through them one by one in an interface not suited for that kind of operation.
Chrome’s “Close Tabs to the Right” takes a bit of figuring out, but I believe it’s meant to make it easy to clean up after a fruitful research session where you kept ⌘-clicking and opening tabs to learn more, and those tabs now fulfilled their purpose. (Curiously, Firefox also has “Close Tabs To Left” which I don’t understand.)
After Effects’s “Increment and Save” is… I don’t know. Maybe it’s cheap? Maybe it’s honest? A proper version history would be nicer, but that’s a tall order. This is simple and, most importantly, reliable. I still often do the “poor man’s version control” elsewhere…
…so this works for me.
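The increment logic itself is tiny. A sketch – with the naming behavior guessed from the Project 2 → Project 3 example, not lifted from After Effects:

```javascript
// "Increment and Save"-style naming: bump the trailing number if there
// is one, otherwise start a new series at 2.
function incrementName(name) {
  const match = name.match(/^(.*?)(\d+)$/);
  if (match) {
    // Keep any leading zero padding ("Shot 09" -> "Shot 10").
    const next = String(Number(match[2]) + 1).padStart(match[2].length, '0');
    return match[1] + next;
  }
  return name + ' 2';
}
```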
It’s always interesting to me to think whether these kinds of oddly-specific examples are nice gestures toward the user, or treating symptoms in lieu of fixing actual problems. Either way, I don’t think an interface can survive too many of these, as their obscurity and weirdness add up and can contaminate the entire UI.
Would love if you sent me more of these kinds of commands from the apps you use!
One of the casualties of Apple’s otherwise brilliantly executed transition to retina pixels has been the mouse pointer, which remains aligned to what “traditional pixels” used to be, rather than the retina/physical/smaller pixels.
Turn on the zoom gesture from a few weeks ago, and you can see the challenge. The gridlines are ½ logical pixel and 1 physical pixel wide:
This limitation is inherited by most tools: Photoshop, Affinity, xScope, even the built-in Digital Color Meter. It’s not the end of the world, of course, but it can be maddening if you are trying to sample a color from a “half pixel” and the cursor stubbornly skips it no matter how delicately you move. Here it is in Figma:
Of the few tools I tested, only Pixelmator allows you to sample at the correct, precise level:
I was curious how a truly precise cursor would feel in general – would there be any disadvantages? – so I built a little simulator that allows a regular arrow cursor to be aligned to “half pixels” or “retina pixels.”
In the process, I discovered that both Chrome and Firefox already receive sub-traditional-pixel measurements for mousing events, so this was even easier to build than I expected. Now, precise targeting in Chrome and Firefox becomes possible:
I don’t personally see any big difference in terms of either upsides or downsides, and I’m curious if you do. iPadOS and its Safari already seem to support the precise mouse pointer, too. That makes me curious: why isn’t it available in macOS? I imagine you could even turn it on by default for apps – or, if you want to be more conservative, make it opt-in.
Pixelmator also shows that apps can do this without waiting for macOS, as the data is already there; they would just need to render the cursor on their own with more precision.
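At its heart, the simulator is one rounding decision. A sketch (the function name is mine; in a real page you’d feed it event.clientX and window.devicePixelRatio):

```javascript
// Snap a fractional CSS-pixel coordinate to the physical (retina) pixel
// grid instead of to whole CSS pixels. Browsers already hand you
// fractional clientX/clientY, so a "precise cursor" mostly means
// rounding to 1/devicePixelRatio rather than to 1.
function snapToDevicePixels(cssCoord, devicePixelRatio) {
  // With devicePixelRatio = 2, this snaps to halves: 10.3 -> 10.5.
  // With devicePixelRatio = 1, it degrades to ordinary rounding.
  return Math.round(cssCoord * devicePixelRatio) / devicePixelRatio;
}
```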
While both of these services have changed a lot since the essays were written, they are still worth reading. They might be the closest thing to modern reviews of software that I can think of, and the way the essays are constructed also teaches us storytelling lessons – from nice visualizations and comparisons, to rich footnotes. There is also a great balance of high-level overview, and then jumping into specifics that reinforce it.
Here’s one example of cool tooling O’Beirne used to make his points more sticky:
I wrote a script that takes monthly screenshots of Google and Apple Maps. And thirteen months later, we now have a year’s worth of images:
The result is informative and mesmerizing:
Among the essays, I’d particularly recommend these:
The back-and-forth of Google Maps’s Moat and New Apple Maps: Reverse engineering areas of interest, thinking of how the slow changes in visuals lead up to strategy, good visual comparison of competition, and small fascinating anecdotes of places like Parkfield, California. (And a great example of the old adage: don’t get into the business of predicting the future as this will age your writing the most.)
Why is there a short wait if you press a button on your headphone remote or your AirPods to pause the music? Because the interface has to let a bit of time pass to figure out if you’re going to press the button again, making it a double press (advance to next track) instead of a single press.
This kind of disambiguation delay is everywhere for simple gestures.
Why is there a short wait if you press a button twice in that situation? The double press processing also has to be delayed, because there is a chance it might become a triple press (go to previous track).
Why is there a short wait if you press a button to go to the next track on your car’s steering wheel? It’s a delay of a different kind, but the same principle: the function cannot kick in on press down, because press down and hold means “fast forward.” So, the software has to wait for the button-up event to go to the next track (which feels a bit slower than acting on button down), or for enough time to pass that we’re certain it’s a press-and-hold rather than a slow press. Here, both interactions pay a penalty for coexisting.
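The waiting described above can be sketched as a tiny classifier: gather presses, and only commit to an action once the threshold has passed without another press arriving. This is a hedged illustration – the names and the 300ms threshold are my assumptions, not any vendor’s actual values:

```javascript
// Hypothetical sketch of multi-press disambiguation: a press cannot be
// acted on immediately, because within the threshold it might grow into
// a double or triple press.
const THRESHOLD_MS = 300;

// Given timestamps of button presses (ms, ascending), group them into
// gestures: presses closer together than the threshold are one gesture.
// The action for a gesture can only fire THRESHOLD_MS after its last
// press - which is exactly the delay you feel on a single press.
function classifyPresses(timestamps, thresholdMs = THRESHOLD_MS) {
  const gestures = [];
  let count = 0;
  let last = -Infinity;
  for (const t of timestamps) {
    if (t - last <= thresholdMs) {
      count += 1; // extends the current gesture
    } else {
      if (count > 0) gestures.push(count);
      count = 1; // starts a new gesture
    }
    last = t;
  }
  if (count > 0) gestures.push(count);
  // 1 = play/pause, 2 = next track, 3 = previous track
  return gestures;
}
```

Note how the single-press delay falls out of the structure itself: there is no way to emit a gesture before the threshold elapses, no matter how clever the implementation.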
The most infamous of these disambiguation delays exists in mobile browsers. Ever since that famous 2007 iPhone presentation, a double tap can zoom into the page – which means every single tap on a link or elsewhere has to be delayed by about 300ms. This has been a source of contention since it does make the web feel a bit slower, and today browsers suspend double tapping on sites designed for mobile, trading zooming affordances for higher interaction speed – after all, you can still zoom in by pinching. But if you always wondered why older websites tend to be a bit sluggish to interact with, now you know.
Different tradeoffs are possible. In the Finder, clicking on icons isn’t slowed down even though double clicking exists, because selecting an icon is compatible with opening it! So in effect it’s not a choice between a faster A and a slower B – it’s A or A+B.
Even in the iPhone presentation above, you can see the interface highlights the link on double tap, to at least make it feel snappier, at the expense of the highlight being “wrong” and potentially distracting – or even confusing – when you end up double tapping. (You can imagine smartphones pausing on the first remote/headset button press, too. It feels like it would be compatible with advancing to the next track, but I think it might also feel too “choppy,” too chaotic, in practice.)
Lastly, why is there a short wait if you press a button on your hotel TV to increase the volume? Oh, I think that one is just sluggish for no good reason.
One of the readers (thank you, Peter!) reminded me that there is a version of a blink comparator that all of us are exposed to perhaps every day: many photo editing apps – Apple Photos, Darkroom, Aphera, I imagine others – allow you to quickly compare the photo as-shot and with your edits. Sometimes it’s a tap, sometimes an onscreen button, and in the case of Lightroom it is a backslash key. Here’s that feature on a color-graded photo with some dust removed:
But these blink comparators are smart. If you, for example, rotate the photo, the comparison will show the original rotated as well, so the pixels still map to each other 1:1 – even if you rotated the photo as the last step in your editing process:
I think this is a brilliant example of understanding the spirit of a feature rather than its letter. A naïve blink comparator would show an unrotated photo, but in doing so it would cease to be a blink comparator.
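That “spirit over letter” logic can be sketched in a few lines, assuming edits are stored as an ordered list of operations (the operation names and the data shape are hypothetical):

```javascript
// Hypothetical sketch: when generating the "before" image for a blink
// comparison, replay only the geometric edits (rotate, crop, flip) on
// the original and skip the color edits - so that the before/after
// pixels still map to each other 1:1.
const GEOMETRIC_EDITS = new Set(["rotate", "crop", "flip"]);

function comparisonBaseEdits(editStack) {
  return editStack.filter((edit) => GEOMETRIC_EDITS.has(edit.type));
}
```

A naïve comparator would pass an empty list here; the smart one keeps the geometry and drops everything else, regardless of the order in which the user applied the edits.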
It’s a fun listen (perhaps if you skip a bit of a bummer 9-minute beginning), covering the four listed things in more detail:
generous mouse paths (especially in menus)
coyote time for modifier keys
optical alignments
tooltip timing details
There were a few interesting things that caught my attention:
Figma does have “coyote time” in the very interaction the hosts are talking about, perhaps showcasing that the details of the details can make or break them.
“Should modifier keys be reversible” and “should modifier keys be consistent with one another” are interesting challenges; some more recent graphic tools have changed the long-standing behaviour here, making modifier keys more “sticky.”
Wholeheartedly agree with how frustrating it feels that the menu interactions are not yet baked into browsers as primitives. “The fact that the companies keep having to implement it themselves manually is maddening.” It is.
Good observation that some people associate animations with “feeling premium” (see also: the quote I put in the title).
A few years ago, I suggested adding a new interaction to Figma. If your text cursor was on a misspelled word (anywhere inside, or at the edges), you could press Tab to quickly accept the suggested correction, without even seeing it:
Independently, Google Docs approached it from a slightly different angle, but landing on a similar interaction – in their version there’s a small visual callout, although you can still press Tab (and then Enter) to accept the suggestion:
I know the Tab key has a lot of jobs – from indenting bullet points to jumping through GUI elements – but in this context this new addition doesn’t seem to be in conflict.
(Should I write a long photoessay about the Tab key, similar to the ones I wrote for Return/Enter and Fn keys?)
Since we added it, I’ve really loved how it feels. From various typeaheads and autocompletes elsewhere, Tab has a strong “forward movement” energy so it makes conceptual sense, and it’s just really fun to go around and quickly fix your writing this way.
I think a lot about how to make keyboard interactions feel superpower-y: a good keyboard shortcut on a large key, a tight interaction, a blink-of-an-eye velocity – something that’s eminently designed to lodge itself in your motor memory as quickly as possible, as it builds on top of prior motor memory. I’m biased, of course, but I like the “no scope” Figma version more, and it has that feeling to me.
I liked the details both in the implementation – for example, making sure the kerning is preserved! – and in the presentation. I particularly enjoyed Schulz making the component demo itself, rather than using prerecorded videos. (I was delighted to discover that even the first large “picture” of the component is actually interactive!)
A small comment to this bit:
Unfortunately, not all browsers expose the selection or accent color of an operating system. For example, if a user would set the accent color in macOS to pink, the special CSS keyword color “Highlight” will still result in a light blue color in Safari. In other browsers like Chrome, the color will match the user preference. But since this is an attack vector for user tracking / fingerprinting, Apple made the right choice to hide the user preference from developers.
From my understanding, this is not necessarily correct. For example, in theory, the purple visited-link color could be used for fingerprinting, by quietly building a profile in the background of whether or not I visited each of hundreds of popular websites.
The way browsers solve this is to never expose the color programmatically back to JavaScript – if your code asks for a link color, it will be blue regardless of whether the link was visited or not. It seems to me that the Highlight color could be used the same way here. Given that CSS now supports things like color-mix(in srgb, Highlight 20%, white), it would even allow a designer to riff on the color without ever knowing what it is.
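For illustration, color-mix in sRGB boils down to a per-channel weighted average. Here’s a sketch, with a made-up pink accent value standing in for Highlight – the whole point being that the real value would stay hidden from the page:

```javascript
// Illustration of what color-mix(in srgb, Highlight 20%, white)
// computes: a per-channel linear interpolation in sRGB. The pink
// accent value below is a made-up example.
function mixSrgb(colorA, weightA, colorB) {
  const weightB = 1 - weightA;
  return colorA.map((channel, i) =>
    Math.round(channel * weightA + colorB[i] * weightB));
}

const highlight = [255, 45, 85]; // hypothetical pink accent, as RGB
const white = [255, 255, 255];

const tint = mixSrgb(highlight, 0.2, white); // 20% Highlight, 80% white
```

The browser could perform this mix internally for rendering while still reporting a fixed fallback color to any script that asks, just as it does for visited links.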
…but where I thought it really shone was the first iPods:
This was perhaps the most fun you could ever have navigating a hierarchy of things; it made sense what left/right/up/down meant in this universe, to a point you could easily build a mental model of what goes where, even if your viewport was smaller than ever.
It was also a close-to-ideal union of software and hardware, admirable in its simplicity and attention to detail. This is where Apple practiced momentum curves, haptics (via a tiny speaker, doing haptic-like clicks), and handling touch programmatically (only the first iPod had a physically rotating wheel, later replaced by stationary touch-sensitive surfaces) – all necessary to make iPhone’s eventual multi-touch so successful. And, iPhone embraced column views wholesale, for everything from the Music app (obvi), through Notes, to Settings.
Well, sometimes you don’t appreciate something until it’s taken away. Here are settings in the iOS version of Google Maps:
I am not sure why the designers chose to deviate from the standard, replacing a clear Y/X relationship with a more confusing Y/Z-that-looks-very-much-like-Y. They kept the chevrons hinting at the original orientation – and they probably had to, as vertical chevrons have a different connotation, but perhaps this was the warning sign right here not to change things.
I think the principle is, in general: if you’re reinventing something well-established, both your reasoning and your execution have to be really, really solid. I don’t think that has happened here. (Other Google apps seem to use the standard column-view model.)
Connecting to public wi-fi networks with their captive portals is always a bit of a wonky proposition, and nothing makes public wi-fi wonkier than using it on a plane.
I believe that the resurgence of HTTPS made things harder – if the captive portal doesn’t kick in, no secure traffic can happen – and over time I just started remembering that “captive.apple.com” is a reliable HTTP-only destination to visit.
But I noticed this week that United’s onboard wi-fi network is called “Unitedwifi.com” as a reminder where to go once you are connected, to avoid that problem. I thought this was a nice touch.
Software engineering has long had a concept of “premature optimization” – overbuilding things too early in anticipation of a future that might or might not come.
I feel design has a version of that, too. Here’s the viewer menu hierarchy in Google Drive:
One should always feel very uneasy about a menu with just one item, like Insert here. Even within the View menu, one could imagine streamlining all the commands to be in one main menu, rather than two tiny submenus (coupled with pretty excessive width that makes for an interaction that feels like walking a tightrope).
These are the menus for a PNG image. It’s entirely possible other file types offer more options and this menu structure earns its keep then, paying off in consistency over a long run – but I tried a few file formats, and the menus all looked similarly sparse.
As a counterpoint, here’s an example I just spotted in the context/right-click menu in Apple’s Notes:
When you have one device, the three options get appended to the ground floor of the menu. But if you have more than one, they all get ejected into a submenu.
I like this soft consistency of introducing hierarchy only when it’s needed – or in reverse, flattening/streamlining it as necessary.
I have mixed feelings about this one particular use, however. This menu is already very long (and seemingly abandoned – look at table and checklist and link options), so in this case perhaps a consistent submenu would be overall better. Also, the “Insert from iPhone and iPad” label is long and makes the entire menu slightly wider.
But as a pattern, it’s worth considering. (Just for completeness’s sake, you could also half-streamline by adding a submenu for the iPhone and another one for the iPad. But in this particular case, it’d also likely be a bad idea.)
…and I wanted to share a response by Nikita Prokopov, because he had a great point about those Dropbox Paper placeholders that I didn’t consider:
For me it’s […] confusing placement. Like if somebody writes “Have a nice day” on a door instead of “Push” or “Pull”. I don’t mind seeing “Have a nice day” message somewhere neutral, in a place not occupied by any other function, but not where I expect very specific help.
I was reminded of Prokopov’s comment when I saw this at the airport yesterday:
I remember, eons ago, how impressed I was when one of the Chrome designers was telling me how all of these error pages were specifically designed to feel like liminal spaces and not like destinations. These were, in a way, placeholder content.
But “Press space to play” feels like a strange thing to put in here. (Previously, the message said “No internet” or “There is no Internet connection.”) I understand that this is Chrome’s popular mascot, but this is still an error page whose purpose is to tell me what’s wrong, rather than serve as an entry point to a minigame.
Also, a few days ago, I stumbled upon this fun example of a placeholder collapse – where a temporary text becomes permanent:
If you are curious, this is what it looks like if you don’t forget to set the message. And funnily enough, given where we started, it says “Have a nice day”:
I feel like social media and recently the slate of AI-powered “tell me what’s here” features continue to show us the power and longevity of screenshots. After all, nothing beats a more or less approachable shortcut and a file format that works literally everywhere.
But screenshots have issues, and I liked how Bear (a note-taking app) brilliantly integrated OCR of the text inside images into its flows. This just worked for regular ⌘F finding without me having to do anything:
The recognized text also appears when you search through notes, and so on. It’s just great peace of mind that you’re not going to miss text just because you happened to screenshot it.
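One way to picture how this could work: recognized text becomes just another searchable field on the note. A minimal sketch, with an entirely hypothetical data shape:

```javascript
// Hypothetical sketch: a note's searchable text includes both its body
// and the OCR'd text of any attached images, so a query matches either.
function noteMatches(note, query) {
  const haystack = [note.body, ...note.images.map((img) => img.ocrText)]
    .join("\n")
    .toLowerCase();
  return haystack.includes(query.toLowerCase());
}
```

The nice property of this structure is that the rest of the search UI doesn’t need to know or care whether a match came from typed text or from a screenshot.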
Apple operating systems have had detection of text inside images for a while – I know on iOS in particular it sometimes gets in the way of normal gestures – so I thought it was just that, but curiously this doesn’t work as nicely in Apple’s own Notes.
To be fair, I am traveling and haven’t looked for solid evidence or citation that this works for people, but I personally like this approach: in lieu of a separate language selector button, each option here itself is both a language selector and a commit button.
The labels themselves are not the name of the language, but a call to action; I imagine recognizing the one label that means something to you should be easy if the other nine look like gibberish.
And, a thoughtful moment by one exhibit: Not only showing you where you are in the sequence of three videos, but even within the currently-playing video.
It is common knowledge that Luigi is just a palette-swapped Mario, and that the characters facing left are the same characters as those facing right, only rendered mirrored.
Suddenly, a character with a claw on one hand, or a patch on one eye, becomes a more complex situation – without redrawing, the claw or the patch moves from one side of the body to the other. Then there’s the issue of an open stance toward the player, turning left-handed characters into right-handed ones the moment they switch to the other side.
3D fighting games can, in theory, fix all of this with more ease, as instead of redrawing hundreds of sprites they can just introduce one change to a model… but they often choose not to. Enter the issues of 2.5D fighters vs. 3D fighters, 2D characters in 3D spaces, and lateralized control schemes.
It’s a small thing that quickly becomes a huge thing.
Here’s an object in Figma with one rounded corner. Notice how the UI always tries to match the rounded corner value based on where it is physically on the screen…
…which makes for a fun demo and feels smart, but: why don’t width and height do the same?
Turns (heh) out that this involves a similar set of considerations to those in fighting games: thinking deeply about what is an intrinsic vs. a derived property of an object, and what is the least confounding thing to present to the user. Since objects usually have a noticeable orientation – text inside, or another visual property – width still feels like width and height like height even when they’re rotated. The same, however, isn’t necessarily true for the four rounded corners. Or, perhaps, the remapping of four “physical” corners to four “logical” corners is simply more error-prone.
Then, of course, there’s a question of what to do when the object doesn’t have a noticeable orientation. Like with many of the things on this blog, there are no “correct” answers. This too is a small thing that quickly becomes a huge thing.
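To see why the remapping is error-prone, here’s a sketch of mapping a corner’s on-screen (“physical”) position back to its stored (“logical”) index, for rotations in 90° steps. All the names are my own, and getting the sign of the rotation right took a double-check – exactly the kind of subtle bug this remapping invites:

```javascript
// Corner indices in clockwise order: 0 = top-left, 1 = top-right,
// 2 = bottom-right, 3 = bottom-left.
// After rotating the object clockwise by s quarter turns, logical
// corner i appears at physical position (i + s) % 4 - so we invert
// that relationship here.
function physicalToLogicalCorner(physicalIndex, rotationDegrees) {
  const quarterTurns = (((rotationDegrees / 90) % 4) + 4) % 4;
  return (physicalIndex - quarterTurns + 4) % 4;
}
```

So a UI that labels corners by where they sit on screen has to run every read and write through a function like this, while width and height can be left alone.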
This is a typical iOS Gmail dialog that allows you to snooze an email so it resurfaces later:
If you invoke that function on an email that’s an order receipt, a new option appears:
It’s great to see this clever and thoughtful button which is likely the best option here. But:
It reshuffles everything else, preventing motor memory from building. At this point, you can no longer rely on “bottom left” to always be “custom date,” and so on with other buttons. (One idea would be to put it at the back but draw attention to it visually, or at least make it span the entire row.)
It doesn’t show you the inferred date, even though there already is a precedent for doing that – especially important here as the feature seems to be powered by AI, which can get things wrong.
The icon heavily promotes the AI association, which is not that useful. It would probably be better to show a truck or some other visual signifier of “delivery.”
I think about some aspects of interface design as sugar.
This is how you adjust the photo in Photos app in the previous version of iOS:
And this is the same view in the current version:
The difference is in the delayed/animated falling of the notches.
I don’t think it’s great. It’s “delightful” in a rudimentary and naïve sense, but like sugar, you cannot just add it to your daily diet without consequences. This extra animation serves no functional purpose, and the sugar high wears off quickly. What remains is constant distraction and overstimulation, the feeling of inherent slowness, and maybe even a bit of confusion.
It pairs nicely with the previous post about avoiding complexity and rewarding simplicity. I often see this kind of stuff as related to a designer’s experience. Earlier in your career, you are proud you’ve thought about this extra detail, you’ve figured out how to make this animation work and how to fine-tune the curves, and you’ve learned how to implement it or convince an engineer to get excited about it.
Later in your experience, you are proud you resisted it.
Night mode is a mode inside the iOS camera app where the app takes a longer-exposure photo in low-light conditions, but “stabilizes” it programmatically, to achieve something similar to holding a camera on a tripod for the same amount of time.
I noticed a little detail that might be new to iOS 26: the night mode icon will now show you how many seconds it expects you’ll have to hold it, ahead of pressing the shutter button.
This is me turning the light on and off in the hotel room. The icon is in the upper right corner:
It’s hard for me to know how useful this is in practice, but the gesture seems nice. What I like about it, too, is density. By my calculation, this is 10-point type, smaller even than the battery percentage at about 12. (The standard interface elements usually go for 15–17.) Retina displays allow you to add text this small and have it still be legible.
I enjoy little lists like these, and the presentation here is also delightful. From design engineer Jakub Krehel, Details that make interfaces feel better. A few of these stood out to me:
Make your animations interruptible. […] Users often change their intent mid-interaction. For example, a user may open a dropdown menu and decide they want to do something else before the animation finishes.
Yes. Never make the user wait for your animation to finish, unless the animation itself is meant to cause friction and slow the user down (which is very rare).
Make exit animations subtle. Exit animations usually work better when they’re more subtle than enter animations.
I love asymmetric transitions. My go-to analogy for this is “in real life, you don’t open the door the same way you close it.”
Add outline to images. A visual tweak I use a lot is adding a 1px black or white (depending on the mode) outline with 10% opacity to images.
This is very nice and (both literally and figuratively) sharp. In some contexts, I bet you could even try to go for 0.5px.
This video from Marblr about adding fall damage to Overwatch is really intense – 45 minutes long and full of footage of frantic gameplay – but really informative, too.
It’s a great case study of how something seemingly really simple – deducting health from the player as they fall from height – can be a complicated thing to figure out in all the detail.
I never played Overwatch and rarely play videogames anymore, but many of the lessons here are universal for any sort of UI and system design:
You will have to introduce tactical inconsistencies for the system to feel consistent, but be careful, as there might be a point where those inconsistencies start to outweigh the whole thing.
Wanna learn how you and others feel about something? Overcrank it to make the feelings come out more easily. (And to find bugs.)
There will always be tensions between what the data says and how you feel about something. (I was surprised how often the word “intuitive” entered the picture.)
Also, it’s just a really well-made video, filled with little presentation and storytelling details that elevate it. I wish more videos like this existed for UI mechanics.
But maybe the most important takeaway? You don’t have to choose between rigor and fun. You can have both.
I just stumbled upon a nice little power-user innovation in Chrome’s Web Inspector.
In Safari, and previously in Chrome, when editing CSS properties, you’d get the usual editing typeahead for the property name, and then the same on the other side for the property value.
In newer versions of Chrome, the typeahead menu works as before on the right side. However, the menu on the left side now also includes entries from the right side – suggesting property values alongside property names.
I think this is really clever in this context – not just to speed you up, but also to aid understanding. Just like the inert mouse up and down in the previous post could serve as a safe “peek” into the values, this new interaction can quickly allow you to explore the CSS space if you are curious, or if you only lightly remember part of the name, or even just one of the values.
This blog is authored in Apple Notes, and some time ago Notes added quick linking via typing >>, and that has a similar effect: The interactions are so nimble and precise that it is very easy to link to something, but a nice side effect is that it also feels very welcoming just to type a few letters to remind yourself of a title of an article, and then cancel out.
The downside of the Chrome change is, well, more stuff matching, but I think the audience for this UI is going to be okay with that.
I know we’re probably collectively a bit tired of talking about macOS Tahoe, but I just noticed something that I think is a good example of how small details can ladder up to bigger things.
This is macOS Sequoia (the pre-Tahoe release) and a typical pop-up button:
One clever thing macOS has been doing since basically the dawn of GUIs is that upon clicking on a button like this, the currently selected row will be in the same place as before you clicked. (As opposed to, for example, the entire menu appearing below like it would from a top menu bar.)
This has interesting and often underappreciated consequences. It allows you to orient yourself quicker, since you don’t have to find the selected option again. And, it saves you movement overall: the next or previous option will always be at the absolutely shortest possible distance. (Of course, the approach also has some challenges, for example if the button is positioned close to the top or bottom of the screen.)
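The alignment rule can be sketched as a one-liner plus a clamp, with the clamp being exactly the screen-edge challenge mentioned above. Names and numbers here are illustrative, not macOS internals:

```javascript
// Hypothetical sketch: position a pop-up menu so that the currently
// selected item lands exactly where the closed button was, then clamp
// so the menu stays on screen. The clamped case is the one that breaks
// the alignment - the "near the screen edge" challenge.
function menuTop(buttonTop, selectedIndex, itemHeight, screenHeight, menuHeight) {
  const ideal = buttonTop - selectedIndex * itemHeight;
  return Math.max(0, Math.min(ideal, screenHeight - menuHeight));
}
```

When the clamp doesn’t kick in, the adjacent options end up exactly one item-height away from the cursor, which is what makes the flick-of-a-mouse selection possible.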
There’s another clever thing that happens throughout macOS: All the menus work using a classic click-to-open and click-to-select sequence, but they are also usable via the slightly more advanced, but faster mousedown-drag-mouseup gesture.
These building blocks work together and mean that selecting the next option can be as simple as a little flick of a mouse.
Now, check out macOS Tahoe (current release):
You will notice that iCloud Drive, upon clicking, is now misaligned both horizontally and vertically.
On the surface, this feels just like a visual blemish – slightly embarrassing, but without much consequence. But check out what happens if you hold your mouse button at a certain position, and then release it without moving:
The stability of macOS’s interface and the thoughtful set of aforementioned rules allowed for an emergent fast behaviour: mouse down and up meant you could “peek” into a menu safely, or you could change your mind right after seeing what’s inside. In a bigger sense, it created a certain trust between you and the operating system: it’s worth learning those gestures, as they will be rewarded.
In Tahoe, some of that learned behaviour – by the way, I see it in all of these buttons, not just this one – will now work against you. Now, you can accidentally change an option without intending to do so.
Is it a big deal? No, not really. This likely – hopefully! – simply fell through the cracks in the rush to get Liquid Glass out the door, rather than being a sign that no one was there to care, or that no one understood how all these gestures add up in aggregate, creating a GUI that feels fast, trustworthy, and catering to your motor memory in a way that elevates your experience with the interface in the long run.
But I’d feel better if it wasn’t almost half a year since the release, and if we hadn’t already seen other things exactly like it.
The breathing light – officially “Sleep Indicator Light” – debuted in the iconic iBook G3 in 1999.
It was originally placed in the hinge, but soon was moved to the other side for laptops, and eventually put in desktop computers too: Power Mac, the Cube, and the iMac.
The green LED was replaced by a white one, but “pulsating light indicates that the computer is sleeping” buried the nicest part of it – the animation was designed to mimic human breathing at 12 breaths per minute, and feel comforting and soothing:
Living through that era, it was interesting to see improvements to this small detail.
The iMac G5 gained a light sensor under the edge of the display in part so that the sleep indicator light wouldn’t be too bright in a dark room (and for older iMacs, the light would just get dimmer during the night based on the internal clock).
In later MacBooks, the light didn’t even have an opening. The aluminum was thinned and perforated so it felt like the sleep light was shining through the metal:
And, for a while, Apple promoted their own display connector that bundled video and power – but also a bit of data, which allowed it to do this:
Back when I had a Powermac G4 plugged into an Apple Cinema Display, I noticed something that was never advertised. When the Mac went to sleep, the pulsing sleep light came on, of course, but the sleep light on the display did too... in sync with the light on the Mac. I’ve tested that so many times, and it was always the same; in sync.
Just a little detail that wouldn’t sell anything, but just because.
To do this I shifted the first gaussian curve so that its domain starts at 0 and remains positive. Since the time domain is 5 seconds total and the I:E ratio is known, it was trivial to pick the split point and therefore the mean. By manipulating sigma I was able to get the desired up-take and fall-off curves; by manipulating factor “c” I was able to control for peak intensity.
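The curve described in the quote can be sketched directly; the mean, sigma, and peak values below are my illustrative guesses, not the author’s actual parameters:

```javascript
// A gaussian-shaped brightness curve over one 5-second breath
// (12 breaths per minute). Shifting the mean away from the midpoint
// of the cycle is what would produce an asymmetric inhale/exhale
// (I:E) ratio; sigma controls the up-take and fall-off steepness.
function breathBrightness(t, mean = 2, sigma = 0.8, peak = 1) {
  return peak * Math.exp(-((t - mean) ** 2) / (2 * sigma ** 2));
}

// Sample one cycle at 10 Hz, as you might to drive an LED:
const samples = [];
for (let t = 0; t <= 5; t += 0.1) {
  samples.push(breathBrightness(t));
}
```

The appeal of a gaussian here is that it rises and falls smoothly with no corners, which is a large part of why the light reads as breathing rather than blinking.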
But at that point, in the first half of 2010s, the breathing light was gone, victim to the same forces that removed the battery indicator and the illuminated logo on the lid.
I know each person would find themselves elsewhere on the line from “the light was overkill to begin with” to “I wished to see what they would do after they introduced that invisible metal variant.”
I know where I would place myself.
This blog is all about celebrating functional and meaningful details, and there were practical reasons for the light to be there. This was in the era where laptops often died in their sleep – so knowing your computer was actually sleeping safe and sound was important – and the first appearance of the light after closing the lid meant that the hard drives were parked and the laptop could be moved safely.
The breathing itself, however, was purely a humanistic touch, and I miss that quirkiness of this little feature. If a save icon can survive, surely so could the breathing light.
After James Moylan’s death in December, we were reminded again of the Moylan Arrow, the little arrow telling you which side of your car has the little fuel door:
I started wondering: what would be the conceptual equivalent of this in software? My best guess would be iOS offering to fill the one-time code from a recent SMS:
This is what it has in common with the Moylan Arrow:
everyone benefits from it
it happens all the time
it solves an actual little (but not too little) frustration
it’s there at the right place at the right time
it is relatively low-tech (it’s not an overdesigned or an overengineered solution)
once you know it’s there, you will love it forever
Curtosis on Mastodon unearthed the original 2019 Twitter thread from the creator of the iOS feature, Ricky Mondello (link to XCancel), which I’m reproducing here:
The idea for Security Code AutoFill came out of a small group of software engineers working on what we thought was a much more ambitious project. It wasn’t a PM, it wasn’t just one person, and it wasn’t what we set out to do initially.
It started as a small side idea we had while designing something very different. We jotted it down, tabled it for weeks, and then picked it up after the “more ambitious” project wasn’t panning out. It was hard, but I’m so glad we changed focus.
Even with a gem of an idea, it was still just an idea. Ideas are obviously super important — they’re necessary, but not sufficient. Here, the end result came from the idea, teamwork, and execution.
Years later, I’m still so proud of the team for making this feature happen. The team combined expertise from several areas to ship magic that worked on day 1, while asking nothing of app and website developers, without giving anyone your text messages. This still inspires me!
To every one of the folks who made this happen, I’m still in awe. Y’all are the best. <3
Addendum: FAQs
- “SMS is bad.”
↪ I know.
- “MITM.”
↪ I know.
- “FIDO is better.”
↪ It’s complicated, but acknowledged; I totally get it.
- “Android did it first.”
↪ Nah. Details matter. Privacy matters. And clipboard != AutoFill.
- *negativity*
↪ Not now. :)
I asked others on social and here are some other contenders I liked:
The indicator that alerts you of Caps Lock when typing passwords
Even though this blog is about software, I might occasionally post some inspiration from real life. I saw this today outside of an RTA transit station in Cleveland. I have not seen it light up, but I imagine it would blink when the train is near the station, which would mean: hurry up if you want to catch the next train.
It reminded me of this disambiguation detail in Finder in a way: a tiny but thoughtful detail at the right moment can go a long way.
In Kraków last year, I saw a great variant of this: A tram waiting at the terminus would show exactly when it departs, so you can choose to rush when it’s close, or to run a quick errand if it’s not.
(I know a lot of countries have extremely user-friendly transit systems where those details were hot news 30 years ago, but I do not take them for granted.)
One of the frustrating patterns for me is a dialog box that doesn’t offer a “skip this next time” option, or at least default to remembering your choice.
My go-to examples? Apple’s Remote Desktop, which always throws this thing up on connection:
And this in Photoshop upon saving a PNG file, which has been there forever:
I never change these options. These are flow-killers; trees have grown to maturity as I have spent collective hours in those dialogs over the years/decades, even though they serve no purpose for me.
(The worst part might be if you forget this dialog waits, and move on to do other things, and the operation you thought was completed never actually finishes.)
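The fix is an old, simple pattern: check a saved preference before ever showing the dialog, and save the answer if the user ticks “don’t ask again.” A minimal sketch, assuming `prefs` stands in for whatever persistent store the app uses (localStorage, NSUserDefaults, etc.); the names are illustrative:

```javascript
// "Don't show this again" wrapper around a confirmation dialog.
// `askUser` shows the dialog and returns { answer, remember }.
function confirmOnce(prefs, key, askUser) {
  const saved = prefs[key];
  if (saved !== undefined) return saved;   // preference exists: skip the dialog
  const { answer, remember } = askUser();  // show the dialog once
  if (remember) prefs[key] = answer;       // persist if "don't ask again" was checked
  return answer;
}
```

Once the user has answered with “remember” checked, `confirmOnce` returns the stored answer without ever calling `askUser` again, which is exactly the flow those dialogs are missing.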
A thoughtful moment in Buttondown. Gmail’s truncation has been going on for decades, and I have no idea why they still do this. Even the overflow interface for a truncated email is awful – the rest of it doesn’t appear in situ, but opens in a new window where you have to start from the top.
One thing I really admired in earlier versions of Windows was also, in a way, its weak point: the keyboard orientation.
I miss the old tradition in Windows where many commands had underlined letters, and you could simply press Alt and that letter to jump to it:
If I remember correctly, eventually this got simplified so that the underlines only appeared when you held Alt (although I bet there was an option to keep showing them all the time).
Opening Windows 11 today, it feels like the system got less elegant. I can still press Alt and stuff happens, but it doesn’t look nearly as good or tightly integrated, and the two alternate entry points (Alt and the keyboard shortcuts) become muddled:
In the meantime, on the Mac, apps in various places reinvent the wheel with their own approaches.
I just saw this in Nova, the code editor, which is very thoughtful; those shortcuts only exist within this dialog (and one wonders whether they couldn’t just be plain letters, without modifiers).
A little more old-fashioned example from Photoshop, and the same question: could they not just be digits, without requiring ⌥?
Previously, I mentioned yet another idea from DevonThink.
I appreciate these gestures toward moving faster via a keyboard, but I wonder if we lost something that already used to work well in old Windows.
An extremely thoughtful moment in DaVinci Resolve. When you drop the first video clip into a new project, it offers to update the settings of the entire project, on the correct assumption that the first media might set the tone for the whole thing.
“You can’t undo this action” is scary and kind of… untrue? But by then I’ve stopped reading. I press Enter, and it saves me a trip to a complex project settings dialog box whose location I always forget.
I recently joined Mintlify as a part-time design engineer. […] I started a daily thread sharing UI fixes and improvements that I was shipping. I also invited people to share any UI bugs they noticed.
People responded. I fixed things in near real-time. It was fun and I learned a lot.
I enjoy little posts with updates like this.
(However, a small thing: I wouldn’t use text-shadow this way. It’s veering into the territory of faux bolding, and looks bad. And, in this case, it feels like it’s not solving a problem.)
A really interesting convention I just spotted in DevonThink: it shows the shortcuts as soon as you hold ⌘, although the execution feels a bit clunky and cheap.
(The main worry here for me would be that it’s distracting if you already know the shortcuts. I haven’t noticed it disappear if you use it, but maybe it does after a while.)