A preview of the future

In his latest video, Shelby from Tech Tangents unpacked, installed, and put to use a truly forgotten product: the IBM 3119, one of the first consumer flatbed scanners.

The setup was a small nightmare, requiring a rare hardware card installed in a specific computer, an ultra-particular combination of two operating systems working in lockstep, and even some careful memory balancing.

Even after all that, a 300dpi page scanner in the late 1980s was still a force to be reckoned with. It’s hard to remember how enormous scanned files were compared to anything else then, even on a black-and-white scanner like this one. The video shows a simple 90-degree image rotation at the highest quality requiring over 9 hours, and I believe it.

But deep inside the video, at precisely 19:31, for only ten seconds, something appears that is absolutely worth celebrating. The nascent scanner software has a “curves” feature that allows you to redraw the shades of gray to capture shadows, highlights, and midtones exactly how you want them. Today, the feature would look something like this, with a real-time preview:

There would be absolutely no way to do something like this in the late 1980s, when just rotating an image was an overnight operation, right? And yet:

How was this accomplished? Absolutely brilliantly. Remember the palette swapping technique? Here, the entire screen’s palette is 256 shades of gray. It’s a very particular kind of linear palette, and so you can easily take that line and… well, turn it into a curve. Since palette swapping happens on the graphics card, it takes as little as one frame of time, allowing it to react to mouse movements as they happen.
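In modern terms, the gist of the trick fits in a tiny Python sketch (my own illustration, of course – not the scanner software’s actual code). The image’s pixels never change; only the 256-entry lookup table mapping each pixel value to a displayed shade of gray does:

    def linear_palette():
        """The default state: pixel value i is displayed as gray level i."""
        return list(range(256))

    def apply_curve(curve):
        """Rebuild all 256 palette entries by running them through a curve.

        That's 256 tiny writes to the palette registers – cheap enough to
        redo on every mouse movement, no matter how large the image is,
        since the pixels themselves are never touched."""
        return [min(255, max(0, round(curve(i)))) for i in range(256)]

    # Example: a gamma curve that lifts the midtones.
    preview_palette = apply_curve(lambda i: 255 * (i / 255) ** 0.6)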

This must have been mind-blowing to experience in the moment. Sure, it’s only a preview, and actually applying curves to the image would take many minut—

No. This is the wrong frame of mind. Here’s my hot take: There are moments in software where the preview is more important than the feature following it. That’s because the preview making things faster isn’t just the difference between finishing something sooner or later. It’s the difference between doing something and not doing it at all. Would you even attempt to use curves if each adjustment took minutes or hours, especially in a land without undo?

I love this preview that hints at what the future will be. I like this clever use of extremely limited technology and the tight collaboration between engineering and design. It must have been nice to be in the room when someone had the flash of insight to use palette swapping this way.

“Plain text has been around for decades and it’s here to stay.”

There’s a category of “plain text” or “ASCII” diagramming and UI design tools:

  • Mockdown – works immediately on the web, even on mobile
  • Wiretext – works on the web, but desktop only
  • Monodraw – a Mac app

I believe these are used by people who prefer intentionally limited visual choices – for low-key diagramming to put in source code and, increasingly, as an entry point to gen AI.

They’re so interesting from the standpoint of this blog:

  • Fun to see a contemporary take on something that peaked between the 1970s and 1980s – you can look up TUIs and Turbo Vision if you want – but (just like Mario the other day) now with modern sensibilities, performance, web access, mouse and trackpad affordances, and so on.
  • It’s interesting simply as an exercise in constraint. I believe constraint practice will become more and more important as computers become more and more capable. It’s already useful to constrain yourself in order to make things easier for you. With the rise of AI, self-constraint will become important to make things harder, as well.
  • There is a certain power and longevity to monospace plain text that’s worth celebrating – not just because the file format is portable, but because text editing as interface is so well-known and potent.

Also, ASCII spray in Mockdown is just really fun:

(Caveat: These tools are “ASCII” in a colloquial sense, the same way people use “GIF” to refer to a certain category of looping animations.)

“The fancy software figures it out for you.”

I want to tell you about something that might seem oddly specific and perhaps too technical, but a) at the end of it you will have a useful phrase somewhere in your brain that will pay off one day, and b) I swear I will make it worth your while.

Have you ever seen this problem?

The screenshot on the left is fine. But there is something wrong with the one on the right. In light mode, the shadow is wispy and weird. In dark mode, things are even stranger, and the shadow is almost… a glow?

I’ve stumbled upon this problem occasionally for years now – there are a few screenshots on the blog with this weird problem, even – but it never felt like a deal breaker. However, I finally sat down to figure it out today.

Turns out, there are two approaches to alpha channel/transparency. The normal one we all know well is called “straight alpha.” But on the right, we were looking at “premultiplied alpha” – something entirely more complicated, where the background is baked into transparency for… reasons. Premultiplied alpha is conceptually – and often literally – dirtier, but it also has benefits: more flexibility, better filtering, sometimes better performance. As far as I understand, premultiplied alpha exists primarily in the world of video and VFX, but occasionally it rears its unconventionally attractive head in our boring static 2D world of screenshots, too.
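If you like seeing the math, here’s a rough sketch of how the two representations relate, with channels normalized to 0..1 (my own toy code, not any particular library’s):

    def premultiply(r, g, b, a):
        """Straight alpha -> premultiplied alpha: the color channels get
        scaled ("dirtied") by the coverage, typically against black."""
        return (r * a, g * a, b * a, a)

    def unpremultiply(r, g, b, a):
        """Premultiplied alpha -> straight alpha. Undefined where a == 0."""
        if a == 0:
            return (0.0, 0.0, 0.0, 0.0)
        return (r / a, g / a, b / a, a)

    # The same 50%-transparent pure red pixel in both representations:
    print(premultiply(1.0, 0.0, 0.0, 0.5))  # (0.5, 0.0, 0.0, 0.5)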

In my case, I finally figured out this was happening whenever I pasted the screenshot from the clipboard into Photoshop instead of Preview – for some reason, the screenshot then got an alpha channel premultiplied against a white background. But I wouldn’t be surprised if it happens to some of you under other conditions, too.

So, “premultiplied alpha.” That’s the useful phrase. What was the other thing?

This is an absolutely hilarious 7-minute video by Captain Disillusion that talks about various challenges with the alpha channel:

Captain Disillusion (or Alan Melikdjanian) is one of my favourite YouTube educators. His work is mostly debunking fake videos – his most well-known one is about the Cicret bracelet, although my personal fav is one about laminar flow – and they’re just constantly interesting and hilarious at the same time.

Disillusion also occasionally does a more straightforward “let’s talk about some technical aspects of video production” episode which he bundles under a “CD/” umbrella. Here’s a handy list of all of them:

I am sharing this list because you should watch them all. Most are under 10 minutes, they are consistently entertaining, and even though none of them are about UX design, there is enough overlap between the two universes that you will come out of it all a lot smarter.

Pragmatically, in my case, I searched for [premultiplied alpha] + [Photoshop] and quickly learned of a new-to-me menu option: Layer > Matting > Remove White Matte. It turns premultiplied alpha back into straight alpha and fixes the screenshot.
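My assumption is that under the hood, this boils down to inverting the matting equation per pixel and channel – a sketch of the idea, certainly not Photoshop’s actual code:

    def remove_white_matte(matted, a):
        """Recover a straight-alpha color channel (0..1) from one that was
        matted against white: matted = straight * a + (1 - a) * 1.0.
        Undefined where a == 0, i.e. for fully transparent pixels."""
        if a == 0:
            return 0.0
        straight = (matted - (1.0 - a)) / a
        return min(1.0, max(0.0, straight))  # clamp away rounding errors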

Non-pragmatically? If you want to really understand premultiplied alpha, the least I can do is suggest another great internet educator, Bartosz Ciechanowski, who has a more comprehensive interactive web explainer. There will be math. There will also be sliders. You decide.

“Area connected to a given node in a multi-dimensional array with some matching attribute”

Anyone who used old computers for graphics remembers the strangeness of “flood fill”:

Computers of the 1950s and 1960s were so sluggish that the blinking lights on their consoles were not just for show; operations were slow enough that you could still follow the lights in real time.

This ceased to be true soon afterwards. The microcomputer revolution temporarily reset some computing progress, but by the 1980s and 1990s more and more things were happening too fast for us to keep up.

But here (this above is Paint in Windows 1.0, and you can try it for yourself in a browser!) was one example where you could still see an algorithm working hard. It was mesmerizing and educational, and it was a rare example where perhaps you didn’t mind the computer taking its sweet time. Even messing up like I did above – maybe especially messing up – ended up being fascinating to watch.

Wikipedia has examples of a few different flood fill algorithms, which are even more interesting:
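For reference, the simplest of them – a four-way, queue-based fill – takes just a few lines of Python (a sketch of the textbook algorithm, not any particular app’s implementation):

    from collections import deque

    def flood_fill(image, x, y, new_color):
        """Four-way flood fill on a 2D grid of colors, starting at (x, y)."""
        old_color = image[y][x]
        if old_color == new_color:
            return
        queue = deque([(x, y)])
        while queue:
            cx, cy = queue.popleft()
            if image[cy][cx] != old_color:
                continue  # already filled, or never matched
            image[cy][cx] = new_color
            # Spread to the four neighbors still inside the image.
            for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                           (cx, cy + 1), (cx, cy - 1)):
                if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                    queue.append((nx, ny))

(The scanline and span-based variants on Wikipedia trade this simplicity for speed.)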

A few years later, Minesweeper had a very memorable flood fill, too (also available in a web emulator today):

But by now, Minesweeper has retired from sweeping mines, and today computers are so fast that it’s hard for me to imagine any flood fill being anything but a flash flood…

…except this is what I just saw in Pixelmator on my Mac:

I don’t know if this is a nod toward a classic flood fill, or just a nice unrelated transition. But I found it genuinely delightful, and it’s fast enough that I would imagine it doesn’t bother pros who need to do it often.

Sometimes it’s nice to see a computer working when there’s a good reason; some apps, like banking apps, even insert artificial, visible delays after crucial operations, just so that users feel comfortable knowing their important transaction went through.

But sometimes it’s nice to see a computer working for no reason at all.

“So, what makes 3D so scary and different?”

It is common knowledge that Luigi is just a palette-swapped Mario, and that characters facing left are the same as those facing right, simply rendered mirrored.

This interesting 9-minute video from Core-A Gaming explains how this can be kind of tricky for fighting games in particular:

Suddenly, a character with a claw on one hand, or a patch on one eye, becomes a more complex situation – without redrawing, the claw or the patch moves from one side of the body to the other. Then there’s the issue of keeping an open stance toward the player, which turns left-handed characters into right-handed ones whenever they switch sides.

3D fighting games can, in theory, fix all of this more easily – instead of redrawing hundreds of sprites, they can just introduce one change to a model… but they often choose not to. Enter the issues of 2.5D fighters vs. 3D fighters, 2D characters in 3D spaces, and lateralized control schemes.

It’s a small thing that quickly becomes a huge thing.

Here’s an object in Figma with one rounded corner. Notice how the UI always tries to match the rounded corner value based on where it is physically on the screen…

…which makes for a fun demo and feels smart, but: why don’t width and height do the same?

Turns (heh) out that this is a similar set of considerations as in fighting games: both thinking deeply about what is an intrinsic vs. derived property of an object, and what is the least confounding thing to present to the user. Since objects usually have a noticeable orientation – text inside, or another visual property – width still feels like width and height like height even if they’re rotated. The same, however, isn’t necessarily true for four rounded corners. Or, perhaps, the remapping of four “physical” corners to four “logical” corners can be more error-prone.
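To make that remapping concrete, here’s a toy sketch – hypothetical, not Figma’s actual code – which also hints at where the errors would creep in:

    LOGICAL = ["top-left", "top-right", "bottom-right", "bottom-left"]

    def logical_corner_at(physical_index, rotation_degrees):
        """Which logical corner currently sits at a given physical position
        on screen, snapping rotation to the nearest quarter turn.
        Assumes positive rotation means clockwise."""
        quarter_turns = round(rotation_degrees / 90) % 4
        return LOGICAL[(physical_index - quarter_turns) % 4]

    # After a 90° clockwise turn, the physically top-left corner (index 0)
    # holds the object's logical bottom-left one:
    print(logical_corner_at(0, 90))  # bottom-left

Every radius edit has to round-trip through a mapping like this, and the mapping has no clean answer for, say, a 45° rotation.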

Then, of course, there’s a question of what to do when the object doesn’t have a noticeable orientation. Like with many of the things on this blog, there are no “correct” answers. This too is a small thing that quickly becomes a huge thing.

“Just because it’s consistent doesn’t mean it’s consistently right.”

I mentioned before how the old-fashioned pixels on CRT screens have little in common with pixels of today. The old pixels were huge and imprecise, blending into each other and requiring a very different design approach.

Some years ago, the always-excellent Technology Connections also had a great video about how in the era of analog television, pixels didn’t even exist.

But earlier this month, MattKC published a fun 8-minute video arguing that for early video games it wasn’t just pixels that were imprecise. It was also colors.

What was Mario’s original reference palette? Which shade of blue is the correct one? Turns out… there isn’t one.

Come to learn some details about how the American NTSC TV standard (“Never The Same Color”) worked, stay for a cruel twist about PAL, its European equivalent.

“Simultaneously old-fashioned and futuristic at the same time”

Before computer graphics, movies relied on matte paintings to extend or flesh out the background. This is perhaps my favourite matte painting, from the end credits of Die Hard 2:

Turns out, videogames do something similar, except the result is called a skybox, since it has to encompass the player from all sides. It’s another way to use cheap trickery to pretend the world is larger than it is.

This 9-minute video by 3kliksphilip shows a few more advanced skybox tricks from Counter-Strike games using the Source engine:

I particularly liked two discoveries:

  • In the real world of the map, you wouldn’t style the back-facing parts, because the player will never be allowed to see them from the other side. Here, you don’t even have to render them.
  • Modern skyboxes have layers and layers of deception: more realistic 3D buildings closer to you, and completely flat bitmaps far away. It almost feels like each skybox contains the history of skybox technology that preceded it.

On the other hand, seeing clouds as flat bitmaps was really disappointing.

“So, I made another tool.”

Palette cycling is an interesting technique born out of the limitations of old graphics cards. Today, any pixel can have any color it wants. In the 1970s and 1980s, you were limited to just a few fixed colors: as few as 2 for monochrome displays, or 4, or 8, or – if you were lucky – 16. Some of those fixed palettes, like CGA’s, became iconic:

But there was an interesting hybrid period between then and now where you were still only allowed 4 or 8 or 16 or 256 color choices in total, but you could assign any of them at will from a much bigger palette.

So, as an example, each one of these three is made out of 16 colors, but each one is 16 different colors:

Moving pixels was slow. But palette swaps were so fast and easy that they led to a technique known as palette cycling. This is probably the best-known example, from an Atari ST program called NEOchrome.

Despite so much apparent movement, no pixels are changing location, as that’d be prohibitively slow in 1985. Only the palette is changing. If you watch the same animation with the UI visible, you can clearly see which colors are “static,” and which are moving around:
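In pseudo-Python, one frame of the whole illusion amounts to something like this (a sketch; on the Atari ST, the equivalent happens in the display hardware’s palette registers):

    def cycle_step(palette, first, last):
        """One frame of palette cycling: rotate entries first..last by one
        slot, wrapping around. No pixel data is touched – pixels that
        reference these entries simply appear to move."""
        segment = palette[first:last + 1]
        palette[first:last + 1] = segment[-1:] + segment[:-1]

    # Example: cycling four "water" blues at the end of a 16-color palette.
    palette = ([(0, 0, 0)] * 12 +
               [(0, 0, 64), (0, 0, 128), (0, 64, 192), (64, 128, 255)])
    cycle_step(palette, 12, 15)
    print(palette[12:])  # the same four blues, shifted by one slot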

But this was 1985, so why am I mentioning it 40 years later?

I like looking at old computers for a few reasons. Some of these seemingly-ancient techniques are inspiring and remind me that limitations are often in the eye of the beholder. Seeing someone really good pushing a platform to its limits is just a good thing to load into your neurons – this could be you next time! And, believe it or not, some tips and tricks can still be relevant.

For example, this is a 9-minute video by Steffest from just earlier this year that walks through a modern attempt to make a palette cycling animation, including starting on an iPad:

The end result goes much harder than I expected. It was interesting to see again the technique of dithering to simulate transparency (we’ve seen it before, but this one is more advanced). But what particularly stood out to me here was the artist making his own little tools to aid in the creative process; I’ve always loved the notion that a computer is really just meant to be an accelerant, making it easy for you to avoid drudgery.
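That dithering trick, by the way, is simple enough to sketch: draw the overlay on only every other pixel in a checkerboard, and the background peeks through the gaps. A toy Python version, with no alpha channel in sight (presumably the more advanced take in the video varies the pattern density for more levels of “transparency”):

    def dither_blend(background, overlay_color):
        """Fake ~50% transparency in indexed color: apply overlay_color to
        a checkerboard of pixels, letting the background show through."""
        return [
            [overlay_color if (x + y) % 2 == 0 else pixel
             for x, pixel in enumerate(row)]
            for y, row in enumerate(background)
        ]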

“Kinda love this error message on the bus”

“Ugly in a way that’s pretty”

I gave a talk about the craft of pixel fonts at Config last year, and this fresh YouTube video by Noodle seems to be a great and quirky companion to the whole issue of “how did pixels look on old CRTs,” including many examples from modern games.