The best thing the crypto industry coined might have been the expression “rug pull,” but I’m not happy about that. To me, it perfectly describes how it feels when an app or a website randomly changes your scroll position, with no rhyme or reason.
You’ve seen it so many times before:
you start reading a webpage, but it throws you back to the top when JavaScript finishes loading
you start reading a webpage, and ads or other late-loading stuff appear and shove you up and down
you press the Back button and it takes you to the previous page… but to its top, rather than to where you actually were
you zoom in or out, the position isn’t recalculated properly, and suddenly you see a different part of the page and lose your orientation
To me, the scroll position is as sacred as the mouse pointer position, given the two are related whether Scroll Lock is around or not: one is you, the other is the world around you.
But there are moments when software scrolling with the user or even for the user is appropriate, and here’s one example:
When you switch tabs, the content below should always scroll to the top, but it doesn’t here.
Here’s an even worse example, also from Settings:
Why should the content scroll to the top here? Because in these situations, the fact that the content container gets reused is just a technical quirk of the implementation. From the user’s perspective, this is all new content, and new content should always start at the top. Otherwise, things will get confusing really fast; imagine it especially in the default configuration without scrollbars, where you might assume result number 6 is the first result, or completely miss the most important, topmost options.
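As a concrete illustration – a minimal sketch with hypothetical names, not anyone’s actual implementation – the fix for a reused container is simply to reset its scroll offset whenever the content is swapped, because the browser won’t do it for you:

```ts
// A minimal sketch with hypothetical names: one scrollable container whose
// contents get swapped out when the user picks a different tab.
function showTab(container: HTMLElement, renderTab: () => Node): void {
  container.replaceChildren(renderTab());
  // From the user's perspective this is brand-new content, so it should start
  // at the top; a reused container otherwise keeps its old scroll offset.
  container.scrollTop = 0;
}
```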
This is perhaps my favourite feature in Lightroom. You press ⇧T, you draw a few lines, and presto – your photo is now even:
This is doubly magical to me. The first part is that this is even possible – that you can straighten the photo in both dimensions after the fact, and, save for some parallax nuances, the viewer won’t know any better.
For decades, this has been the domain of tilt-shift lenses, but if you’ve ever tried to use one, you know how harrowing an exercise it is. A tilt-shift lens looks more like a medical device and less like a piece of photography equipment:
The “obvious” way to emulate a tilt-shift lens in software is a bunch of sliders, and Lightroom has those also…
…but that’s still pretty cumbersome in practice, abstracted in strange ways, like piloting a plane by pulling the linkages connected to the flight surfaces: you will admire someone who can do that, but you won’t ever want to do it yourself.
Hence the second magical moment: The team created the new interface I showed at the beginning, where you point to things that should be straight directly, and the necessary tilt-shift calculations happen behind the scenes.
Alas, Lightroom didn’t fully stick the landing. The interface is a bit jittery, and it’s missing the transitions that could help you understand what’s going on. But what brought me here was this unpleasant interaction:
What’s wrong with it? If you want to play along, stop here and ponder: How would you improve it? Because this is a classic UI exercise where there are symptoms, and there are problems, and there are principles under the hood of it all.
The first possible improvement: Don’t do a dialog like this. These are ancient and so annoying. Every time I see a centered dialog covering everything, popping up in response to a delicate mouse operation, I want to shout “read the room!” It’s better to drop a little tooltip next to the cursor that automatically disappears: more modern, and more “compatible” with mousing.
Then: Why am I allowed to start and finish an action that the machine already knows won’t go anywhere? Disable the drawing option, put a little “verboten” icon on the mouse pointer, or do something else that will prevent me from drawing a line to begin with.
But that brings us to point three, and how I would approach this as a designer. Because I would – counterintuitively – go the other way: allow the user to draw as many lines as they wanted, and simply not let them commit the entire operation if there were more than four lines on the screen.
Why is that?
It’s the same principle as you see in all the social media compose fields, and in well-trained forms: do not constrain the editing process.
This field is limited to 300 characters, but it’s clever enough to only enforce its limits when you try to post. There is no downside to allowing you more room in the editing process. Maybe you write by constructing a few sentences first and only then combining them into one, maybe you want to see two riffs one below the other to choose the better one, or maybe – this is most likely – you’re not even paying attention and your motor memory is doing the editing for you, instinctively. Use any text editor for just a few months, and cut, copy, and paste, word swapping, and splitting sentences become second-nature gestures – that is, until the UI starts throwing in some arbitrary barriers.
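As a sketch of what “only enforce at the end” can look like in code – the limit, element ids, and wiring below are hypothetical, not any particular site’s implementation – the field itself never refuses a keystroke; only the posting step cares about the length:

```ts
// A sketch of "validate on post, not while editing": the field never blocks
// input; only the submit step enforces the limit. LIMIT and the element ids
// are hypothetical.
const LIMIT = 300;
const field = document.querySelector<HTMLTextAreaElement>("#composer")!;
const postButton = document.querySelector<HTMLButtonElement>("#post")!;

field.addEventListener("input", () => {
  // Surface the overage instead of preventing it; editing stays unconstrained.
  const over = field.value.length - LIMIT;
  postButton.disabled = over > 0;
  postButton.textContent = over > 0 ? `${over} characters over the limit` : "Post";
});
```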
Above in Lightroom, it might actually be easier for me to draw a fifth line and then delete a previous one, instead of doing things in the precise order Lightroom desires – or to move an existing line by dragging it, instead of creating a new one.
Maybe an overarching principle would be this: If you are aiming to build something as delightfully rooted in direct manipulation as Lightroom did here, you have to fully commit to that stance, even deep in the weeds. Because every time I see a 1990s dialog appear when my fingers are flying fast, I feel like this:
I know this won’t have the same effect on you just watching. What happened was that, after I clicked on the Disable button, Lightroom moved the mouse pointer for me.
I don’t think I have ever seen anything like this, and it provoked many thoughts and emotions:
This feels wrong. If the mouse is the extension of my fingers, and the mouse pointer the extension of the mouse, this is in effect the app grabbing my hand and moving it.
I did not know this was even possible. I can see how moving the mouse pointer programmatically can be useful in very specific situations (like scrubbing, or accessibility), but… not like this.
If you do something for the user, won’t that make it harder for them to remember how to do it themselves?
I’ve seen this kind of thing many times in my career: Someone genuinely asks “hey, if this is such a huge transgression, why wasn’t it codified somewhere in the style guide?” But to me the challenge is that it’s hard to imagine everything that needs to be preemptively captured and prohibited. I have to imagine this stuff for a living, and I literally did not think anyone would just move a mouse pointer like this.
So seeing this now, yeah, I’d bundle this inside the “some interactions are 100% sacred” bucket, alongside focus never being hijacked randomly (especially in the middle of typing), avoiding scrolling anything until I specifically ask, undo and copy/paste needing utmost protection, and a few more.
In the opposite camp, here’s a fun new project by Neal Agarwal (only worth clicking on a computer with a mouse). This is a situation where it feels perfectly fine for a cursor to be hijacked; as a matter of fact, there is something really interesting about a mouse pointer feeling less like a deity floating above it all, and more like a regular in-game actor.
This reminded me of that time, in the earlier days of Figma, when I prototyped an interaction where you could select someone else’s pointer and press Backspace to delete it:
We didn’t seriously consider it because it felt just too weird, and not that effective at solving the “other person’s cursor is distracting me” problem. But today it feels like it belongs to the same category as the two examples above.
I’ll let you decide if it’s closer to Agarwal’s delight or Lightroom’s terror.
Wakamaifondue is a web tool to inspect font contents, and it starts with you dropping a font file (.ttf, .otf, or .woff) into the browser.
It handles file dropping so thoughtfully, it’s worth pausing and recognizing it:
Here’s what’s great about it:
You can drop the file anywhere. There is no designated small drop area like in some other apps; every last pixel of the window is ready to receive your file, so you can drop without worrying.
You get a hover state confirming you are safe to drop.
You can drop the file on other screens, too!
Why is all this important? Because dropping a file into a browser is a notoriously frustrating experience. If the tab doesn’t claim the file, left to its own devices the browser will do anything from replacing the current tab with the contents of the file, through opening a new tab, to… starting to download the file you just dropped and asking you for its new location!
It is frustrating when the failure mode of an action is not just that action failing – which is already bad, since repeating a drag is more work than, say, repeating a keystroke – but also you having to do extra clean-up steps.
Wakamaifondue gets this right, and allowing you to drop a file on any screen is particularly thoughtful. Your cursor holding a file indicates your intentions rather strongly – when you see a person wearing a wedding dress, you don’t think “I wonder what they’re up to today?” – so there should be no need to switch to a certain mode or to navigate to an “import screen” beforehand.
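The mechanics behind this are simple enough to be worth showing. Here’s a rough reconstruction of the idea – my own sketch, not Wakamaifondue’s actual code: listen on the whole window, show a hover state, and cancel the browser’s default handling so a drop never navigates away or starts a download:

```ts
// A sketch of window-wide file dropping; not Wakamaifondue's actual code.
function inspectFont(file: File): void {
  // Placeholder for the real work of parsing the dropped font.
  console.log(`Received ${file.name} (${file.size} bytes)`);
}

window.addEventListener("dragover", (event) => {
  event.preventDefault();                   // without this, "drop" never fires
  document.body.classList.add("dropping");  // hover state: "you are safe to drop"
});

window.addEventListener("dragleave", () => {
  document.body.classList.remove("dropping");
});

window.addEventListener("drop", (event) => {
  event.preventDefault();                   // stop the browser from opening or downloading the file
  document.body.classList.remove("dropping");
  const file = event.dataTransfer?.files[0];
  if (file) inspectFont(file);
});
```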
One of the casualties of Apple’s otherwise brilliantly executed transition to retina pixels has been the mouse pointer, which remains aligned to what “traditional pixels” used to be, rather than the retina/physical/smaller pixels.
Turn on the zoom gesture from a few weeks ago, and you can see the challenge. The gridlines are ½ logical pixel and 1 physical pixel wide:
This limitation is inherited by most tools: Photoshop, Affinity, xScope, even the built-in Digital Color Meter. It’s not the end of the world, of course, but it can be maddening if you are trying to sample a color from a “half pixel” and the cursor stubbornly skips it no matter how delicately you move. Here it is in Figma:
Of the few tools I tested, only Pixelmator allows you to sample at the correct, precise level:
I was curious how a truly precise cursor would feel in general – would there be any disadvantages? – so I built a little simulator that allows a regular arrow cursor to be aligned to “half pixels” or “retina pixels.”
In the process, I discovered that both Chrome and Firefox already receive sub-traditional-pixel measurements for mousing events, so this was even easier to build than I expected. Now, precise targeting in Chrome and Firefox becomes possible:
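If you’re curious why: mouse events already arrive with fractional coordinates, so the whole trick boils down to snapping a drawn cursor to whatever granularity you want. A minimal sketch, with a hypothetical #fake-cursor element standing in for the simulated pointer:

```ts
// A minimal sketch: snap a simulated cursor element to physical ("retina")
// pixels. The #fake-cursor element is hypothetical; event.clientX/Y already
// carry sub-CSS-pixel precision in Chrome and Firefox.
const cursor = document.querySelector<HTMLElement>("#fake-cursor")!;

window.addEventListener("mousemove", (event) => {
  const step = 1 / window.devicePixelRatio;          // one physical pixel, in CSS units
  const x = Math.round(event.clientX / step) * step;
  const y = Math.round(event.clientY / step) * step;
  cursor.style.transform = `translate(${x}px, ${y}px)`;
});
```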
I don’t personally see any big difference in terms of either upsides or downsides, and I’m curious if you do. iPadOS and its Safari already seem to support the precise mouse pointer, too. That makes me curious: why isn’t it available in macOS? I imagine you could even turn it on by default for apps – or, if you want to be more conservative, make it opt-in.
Pixelmator also shows that apps can do it without waiting for macOS, as the data is already there; they would just need to render the cursor on their own, with more precision.
Two great posts about interaction latency on the hardware and software side. First is from Ink & Switch:
There is a deep stack of technology that makes a modern computer interface respond to a user’s requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate.
There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can’t keep up with them.
We believe fast software empowers users and makes them more productive. We know today’s software often lets users down by being slow, and we want to do better. We hope this material is helpful for you as you work on your own software.
I loved the slow-motion videos comparing what is normally impossible to notice:
I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months.
I feel both of these essays are fantastic, and important for developing some sense of the specific numeric thresholds separating fast from slow – also in the context of being able to have an informed conversation with a front-end engineer. (Luu subsequently links to even more articles in the “Other posts on latency measurement” section, if you are curious.)
Otherwise, from my observation, the two most quoted laws of user-facing latency are still Jakob Nielsen’s response time limits, and the Doherty Threshold. But the Jakob Nielsen 100/1000/10000ms rule is from 1993 and, as far as I understand, is concerned primarily with UX flows: reactions to clicking a button, responses to typing a command, and so on. And the Doherty Threshold is even older. Both are simply not enough, especially not for things related to typing, multitouch, or mousing, where for a great experience you have to go way below 100ms, occasionally even down to single-digit milliseconds.
(My internal yardstick is “10 for touch, 30 for mousing, 50 for typing.” Milliseconds, of course.)
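If you want a rough sense of how your own UI stacks up against numbers like these without carrying around a high-speed camera, the browser can approximate it. A sketch using the Event Timing API (Chromium-based browsers; note that, by default, only events slower than roughly 100 ms get reported – already well past any of the budgets above):

```ts
// A rough sketch: log input events with a long delay between the hardware
// timestamp and the next paint, via the Event Timing API. By default the
// browser only reports events slower than roughly 100 ms.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`${entry.name}: ~${Math.round(entry.duration)} ms from input to paint`);
  }
});
observer.observe({ type: "event", buffered: true });
```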
At the end of his essay, Luu writes:
It’s not clear what force could cause a significant improvement in the default experience most users see.
Perhaps one challenge is that these posts are dense and informative, but only appeal to people who care? Maybe latency eradication needs a PR strategy, with a few memorable rules and – perhaps arbitrary, but well-informed – numbers that come with some great names attached? I know in the context of web loading some of the metric names like FCP (First Contentful Paint) broke through at least to some extent, but those still feel more on the nerdy side. Even Nielsen’s otherwise fun 2019 video about response time limits didn’t stick the landing – why focus on slowing down an arbitrary label appearing above the glass when the ping sound was right there for the taking?!
I can’t help but dream of interaction speed’s “enshittification” moment.
The Parc mouse cursor appearance was done (actually by me) because in a 16x16 grid of one-bit pixels (what the Alto at Parc used for a cursor) this gives you a nice arrowhead if you have one side of the arrow vertical and the other angled (along with other things there, I designed and made many of the initial bitmap fonts).
Then it stuck, as so many things in computing do.
And boy, did it stick.
But let’s rewind slightly. The first mouse pointer, during Doug Engelbart’s 1968 Mother Of All Demos, was an arrow facing straight up, which was the obvious symmetrical choice:
(You can see two of them, because Engelbart didn’t just invent a mouse – he also thought of a few steps after that, including multiple people collaborating via mice.)
But Kay’s argument was that on a pixelated screen, it’s impossible to do this shape justice, as both slopes of the arrow will be jagged and imprecise. (A second unvoiced argument is that the tip of the arrow needs to be a sharp solitary pixel, but that makes it hard to design a matching tail of the cursor since it limits your options to 1 or 3 or 5 pixels, and the number you want is probably 2.)
Kay’s solution was straightening the left edge rather than the tail, and that shape landed in Xerox Alto in the 1970s:
Interestingly enough, the top-facing cursor returned as one of the variants in Xerox Star, the 1981 commercialized version of Alto…
…but Star failed, and Apple’s Lisa in 1983 and Mac in 1984 followed in Alto’s footsteps instead. Then, 1985’s Windows 1.0 grabbed a similar shape – only with inverted colors – and the cursor has looked the same ever since.
That’s not to say there haven’t been innovations since (mouse trails, useful on the slow LCD displays of the 1990s; the shake-to-locate that Apple added in 2015), or the more recent battles with the hand mouse pointer popularized by the web.
But the only substantial attempt at redesigning the mouse pointer that I am aware of came from Apple in 2020, during the introduction of trackpad and mousing to the iPad. The mouse pointer a) was now a circle, b) morphed into other shapes, and c) occasionally morphed into the hovered objects themselves, too:
The 40-minute deep dive video is, today, a fascinating artifact. On one hand, it’s genuinely exciting to see someone take a stab at something that’s been around forever. Evolving some of the physics first tried in Apple TV’s interface feels smart, and the new inertia and magnetism mechanics are fun to think about.
But the high production value and Apple’s detached style rob the video of some authenticity. This is “Capital D Design,” and one always has to remain slightly suspicious of highly polished design videos and the inherent propensity for bullshit that comes with the territory. Strip away the budget and the arguments don’t fully coalesce (why would the same principles that made the text pointer snap vertically not extend to its horizontal movement?), and one has to wonder about things left unsaid (wouldn’t the pointer transitions be distracting and slow people down?).
Yet, I am speaking with the immense benefit of hindsight. Actually using that edition of the mouse pointer on my iPad didn’t feel like the revolution the video suggested, and barely even like an evolution. (Seeing Apple TV’s tilting buttons for the first time was a lot more enthralling.) And Apple ended up undoing a bunch of the changes five years later anyway. The pointer went back to a familiar Alan Kay-esque shape…
We looked at just bringing the traditional arrow pointer over from the Mac, but that didn’t feel quite right on iPadOS. […] There’s an inconsistency between the precision of the pointer and the precision required by the app. So, while people generally think about the pointer in terms of giving you increased precision compared to touch, in this case, it’s helpful to actually reduce the precision of the pointer to match the user interface.
2025:
Everything on iPad was designed for touch. So the original pointer was circular in shape, to best approximate your finger in both size and accuracy. But under the hood, the pointer is actually capable of being much more precise than your finger. So in iPadOS 26, the pointer is getting a new shape, unlocking its true potential. The new pointer somehow feels more precise and responsive because it always tracks your input directly 1 to 1.
(That “somehow” in the second video is an interesting slip-up.)
I hope this doesn’t come across as making fun of the presenters, or even of the to-me-overdesigned 2020 approach. We try things, sometimes they don’t work, and we go back to what worked before.
I just wish Apple opened itself up a bit more; there are limits to the “we’ve always been at war with Eastasia” PR approach they practice in these moments, and I would genuinely be curious what happened here: Did people hate the circular pointer? Was it hard to adopt by app developers? Was it just a random casualty of Liquid Glass’s visual style, or perhaps the person who was the biggest proponent of it simply left Apple? We could all learn from this.
But the most interesting part to me is the resilience of the slanted mouse pointer shape. In a post-retina world, one could imagine a sharp edge at any angle, and yet we’re stuck with Kay’s original sketch – refined to be sure, but still sporting its slightly uncomfortable asymmetry.
But one comment under that video in particular caught my attention:
Honestly, I’ve never thought of the mouse cursor as an arrow, but rather its own shape. My mind was blown when I realized that it was just an arrow the whole time.
…because maybe this is actually the answer. Maybe the mouse pointer went on the same journey the floppy disk icon did, and transcended its origins. It’s not an arrow shape anymore. It’s the mouse pointer shape, and it forever will be.
Many designers and engineers have Apple products with their flawless and praiseworthy trackpads. By default on macOS, a trackpad means only “shy” (iPhone-like) scrollbars are shown. Shy scrollbars become half-visible while you two-finger scroll, and fully visible only when you hover over them.
To anyone working on front-end, I encourage you to toggle this setting to “Always,” and convince half of your team to do the same. Your macOS will now pretend you have a mouse connected, and show more traditional scrollbars, all the time.
Why? Because you might already be accidentally generating spurious scrollbars without realizing it. Here’s something I just spotted in Coda today:
This scrollbar serves no purpose, so it will become visual noise for a lot of your users. But when you yourself use “shy” scrollbars, you might not even notice.
Of course, the scrollbar is just a symptom of a bigger problem – an accidentally scrolling surface that will be janky to everyone regardless of their scrollbar visibility status.
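If you want to hunt for these proactively, a few lines in the console can surface them. This is just my own debugging sketch, and the “a few pixels” threshold is arbitrary:

```ts
// A debugging sketch: list scrollable elements whose content overflows their
// box by only a few pixels – a common signature of an accidentally scrolling
// surface. The 4 px threshold is arbitrary.
for (const el of Array.from(document.querySelectorAll<HTMLElement>("*"))) {
  const overflowY = getComputedStyle(el).overflowY;
  const extra = el.scrollHeight - el.clientHeight;
  if ((overflowY === "auto" || overflowY === "scroll") && extra > 0 && extra <= 4) {
    console.log(`${extra}px of accidental overflow:`, el);
  }
}
```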
Always-visible scrollbars make it easier to spot these, not to mention also being helpful in spotting:
scrollbars mismatched in theme (e.g. light scrollbars on dark-theme surfaces) or accidentally left unstyled
scrollbars not fully nestled into their correct edge, accidentally being offset from the top or the right
using the wrong CSS setting for overflow (or not knowing about the -x and -y variants), and consequently showing both scrollbars when one would suffice
the loading state or skeletons not anticipating a scrollbar appearing later
that most frustrating occasional math/measurement issue where the appearance of a vertical scrollbar reduces the horizontal space, and as a result also makes a horizontal scrollbar appear (see also: scrollbar-gutter, and the sketch after this list)
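For that last item, reserving the scrollbar’s space ahead of time breaks the cascade. A sketch – the “#results” container is hypothetical, and the equivalent plain CSS is simply scrollbar-gutter: stable on the scrolling element:

```ts
// A sketch: reserve room for the vertical scrollbar up front, so its later
// appearance can't shrink the content width and trigger a horizontal one.
// The "#results" container is hypothetical.
const container = document.querySelector<HTMLElement>("#results");
container?.style.setProperty("scrollbar-gutter", "stable");
```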
Since upgrading to macOS Tahoe, I’ve noticed that quite often my attempts to resize a window are failing. This never happened to me before in almost 40 years of using computers. So why all of a sudden?
I understand this might be a casualty of the absurdly large border radii in the new macOS.
The little video in the middle made me laugh:
(I do think there is room for gestures triggered “outside” a window, and we’ve seen rotation and some specific flavors of resizing or cropping work this way in drawing and design apps across the last few decades – but one has to be careful. Often, those are secondary and/or for power users.)
I was wondering the other day why there aren’t more mouse pointer museums, and here’s one – the Amiga Pointer Archive! (Amiga was a 16-bit home computer especially popular in Europe.)
Doesn’t work so well on mobile, but it’s fun on desktop. I recommend zooming the page to 200%.