I’ve been writing code for over 12 years, and for a while I’ve been disgusted by the sorry state of programming. That’s why I felt a deep kinship with Nikita Prokopov’s article, Software Disenchantment. It captures the intense feeling of frustration I have with the software industry as a whole and the inability of modern programmers to write anything even remotely resembling efficient systems.
Unfortunately, much of it is wrong.
While I wholeheartedly agree with what Nikita is saying, I am afraid that he doesn’t seem to understand the details behind modern computers. This is unfortunate, because a misinformed article like this weakens both our positions and makes it more difficult to convince people that software bloat is a problem. Most of the time, his frustrations are valid, but his reasoning is misdirected. So, in this article, I’m going to write counterpoints to some of the more problematic claims.
1. Smooth Scroll
One of my hobbies is game development, and I can assure you that rendering anything at 4K resolution at 60 FPS on a laptop is insanely hard. Most games struggle to hit 4K 60 FPS even with powerful GPUs, and 2D games are usually graphically simple: they can have lots of fancy artwork, but drawing 200 fancy images on the screen with hardly any blending is not very difficult. A video is just a sequence of 4K images displayed 60 times a second (plus interpolation), which is trivial. Web renderers can’t do that, because HTML has extremely specific composition rules that will break a naïve graphics pipeline. There is also another crucial difference between a 4K video, a video game, and a webpage: text.
High-quality anti-aliased and sometimes sub-pixel-hinted text at 10 different sizes on different colored backgrounds blended with different transparencies is just really goddamn hard to render. Games don’t do any of that. They have simple UIs with one or two fonts that are often either pre-rendered or approximated with signed distance fields. A web browser is rendering arbitrary Unicode text that could include emoji and god knows what else. Sometimes it’s even doing transformation operations on SVG vector images in real time. This is hard, and getting it to run on a GPU is even harder. One of the most impressive pieces of modern web technology is Firefox’s WebRender, which pushes the entire composition pipeline to the GPU, allowing it to serve most webpages at a smooth 60 FPS. This is basically as fast as you can possibly get, which is why this is a particularly strange complaint.
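To make the signed-distance-field idea concrete, here is a minimal sketch (the function name, the smoothing width, and the specific smoothstep ramp are my own illustrative choices, not any particular engine’s API): each pixel looks up a precomputed distance to the glyph edge and converts it into an anti-aliased coverage value.

```python
def sdf_alpha(distance, smoothing=0.5):
    """Map a signed distance (negative inside the glyph, positive outside)
    to an anti-aliased coverage value in [0, 1] using a smoothstep ramp.
    In a real renderer this runs per pixel in a fragment shader."""
    # Clamp the distance into the smoothing band straddling the edge.
    t = max(0.0, min(1.0, (smoothing - distance) / (2.0 * smoothing)))
    # Smoothstep yields a soft, resolution-independent edge.
    return t * t * (3.0 - 2.0 * t)

# Pixels deep inside the glyph are opaque, pixels far outside are
# transparent, and pixels near the edge blend smoothly between the two.
print(sdf_alpha(-1.0))  # deep inside the glyph  -> 1.0
print(sdf_alpha(1.0))   # far outside the glyph  -> 0.0
print(sdf_alpha(0.0))   # exactly on the edge    -> 0.5
```

Because the distance field scales cleanly, one small texture can approximate a glyph at many sizes, which is exactly why games favor it over rendering true outlines.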
I think the real issue here, and perhaps what Nikita was getting at, is that the design of modern webpages is so bloated that the web browsers can’t keep up. They’re getting inundated with ads, trackers, and mountains of scripts that have nothing to do with actually rendering the page.
2. Latency

Latency is one of the least understood aspects of computing. It is true that many text editors have abysmal response times caused by terrible code, but latency is a lot easier to screw up than people realize. While CPUs have gotten faster, the latency between hardware components hasn’t improved at all, and in many cases cannot possibly improve, because latency is dominated by physical separation and the connective material in between. The speed of light hasn’t changed in the past 48 years, so why would the minimum latency?
The problem is that you can’t put anything on top of a system without increasing its latency, and you can’t decrease the latency unless you bypass a system. That 42-year-old emacs system was basically operating at its theoretical maximum because there was barely anything between the terminal and the keyboard. It is simply physically impossible to make that system more responsive, no matter how fast the CPU gets. Saying it’s surprising that a modern text editor is somehow slower than a system operating at the minimum possible latency makes absolutely no sense, because the more things you put between the keyboard and the screen, the higher your latency will be. This has literally nothing to do with how fast your GPU is. Nothing.
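The "layers only add" claim can be made concrete with a toy model. All of the stage names and millisecond figures below are invented for illustration (real numbers vary wildly by hardware); the point is only that total input-to-screen latency is a sum over every layer, so adding a layer can never reduce it.

```python
# Hypothetical per-stage delays in milliseconds; purely illustrative.
terminal_stack = {"keyboard scan": 1.0, "terminal driver": 0.5, "text draw": 1.0}
modern_stack = {"keyboard scan": 1.0, "USB polling": 4.0, "OS input queue": 1.0,
                "toolkit event loop": 2.0, "compositor": 8.0, "display flip": 8.0}

def total_latency(stack):
    # Latency is additive: every layer between keypress and photon contributes,
    # so the only way down is to remove (bypass) a layer entirely.
    return sum(stack.values())

print(total_latency(terminal_stack))  # 2.5
print(total_latency(modern_stack))    # 24.0
```

No amount of CPU or GPU speed changes this arithmetic; it only shrinks individual terms that are already small.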
It’s actually much worse, because old computers didn’t have to worry about silly things like composition. They’d do v-sync themselves, manually, drawing the cursor or text in between vertical blanks of the monitor. Modern graphics draw to a separate buffer, which is then flipped to the screen on its next refresh. The consequence, however, is that a frame drawn right after a vertical blank ignores all the input received during that frame! You can only incorporate input processed before you start drawing, so once you start, it’s game over. This means that if a vertical blank happens every 16.6 ms and you start drawing at the beginning of a frame, you have to wait 16.6 ms to process the user input, then draw the next frame and wait another 16.6 ms for the new buffer to be flipped to the screen!
That’s 33 ms of latency right there, and that’s if you don’t screw anything up. A single badly handled async call can easily introduce another frame of lag. As modern hardware connections get more complex, they introduce more latency. Wireless peripherals introduce even more. Hardware abstraction layers, badly written drivers, and even the motherboard BIOS can all hurt latency, and we haven’t even gotten to the application yet. Again, the only way to lower latency is to bypass the layers that add it. At best, perfectly written software would add negligible latency and approach the latency of that emacs terminal, but it could never surpass it (unless we start using graphene).
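The two-refresh figure falls straight out of the arithmetic; a sketch, assuming a 60 Hz display and input that arrives just after a frame has started drawing:

```python
REFRESH_MS = 1000.0 / 60.0  # ~16.67 ms between vertical blanks at 60 Hz

def worst_case_latency(refresh_ms=REFRESH_MS):
    """Input arriving just after a frame starts drawing must wait for
    that frame to finish, then for the next frame to be drawn and
    flipped to the screen: two full refresh intervals."""
    wait_for_current_frame = refresh_ms  # input missed this frame's draw
    draw_and_flip_next = refresh_ms      # next frame is drawn, then flipped
    return wait_for_current_frame + draw_and_flip_next

print(round(worst_case_latency(), 1))  # 33.3
```

And this is the floor for a naively double-buffered pipeline before any application code, driver, or compositor adds its own delay on top.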
We should all be pushing for low-latency systems, but Electron apps are simply the worst offender. This is something everyone, from the hardware to the OS to the libraries, has to cooperate on if we want responsive computers.
3. Features

It seems silly to argue that computers today have no new features. Of course they have new features. Many of them are features I don’t use, but new features do arrive, and occasionally they are actually nice. I think the real problem is that each new feature, for some inexplicable reason, requires exponentially more resources than the one before it, often for no apparent reason. Other times, basic features that are trivial to implement are left out, also for no apparent reason.
For example, Discord still doesn’t know how to de-duplicate resent client messages over a spotty connection despite this being a solved problem for decades, and if a message is deleted too quickly, the deletion packet is received before the message itself, and the client just… never deletes it. This could be trivially solved with a tombstone or even just a temporary queue of unmatched deletion messages, yet the client instead creates a ghost message that you can’t get rid of until you restart the client. There is absolutely no reason for this feature to not exist. It’s not even bloat, it’s just ridiculous.
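The tombstone fix really is that simple. Here is a minimal sketch (all class and method names are hypothetical, not Discord’s actual client code): keep a set of deletions that arrived before their message, and check it whenever a message comes in.

```python
class MessageStore:
    """Toy client-side store that tolerates a deletion packet arriving
    before the message it refers to."""

    def __init__(self):
        self.messages = {}            # message id -> text
        self.pending_deletes = set()  # tombstones for not-yet-seen messages

    def on_message(self, msg_id, text):
        if msg_id in self.pending_deletes:
            # The delete beat the message; drop it instead of showing a ghost.
            self.pending_deletes.discard(msg_id)
            return
        self.messages[msg_id] = text

    def on_delete(self, msg_id):
        if msg_id in self.messages:
            del self.messages[msg_id]
        else:
            # Remember the delete until the matching message shows up.
            self.pending_deletes.add(msg_id)

store = MessageStore()
store.on_delete(42)           # deletion packet arrives first...
store.on_message(42, "oops")  # ...then the message: no ghost message remains
print(store.messages)  # {}
```

A real client would expire old tombstones eventually, but even this handful of lines prevents the unkillable ghost message.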
However, a few other comparisons in here really don’t make any sense. For example, an installation image of Windows 10 is only 4 GB because of extreme amounts of compression, yet the article compares this with a 6 GB uncompressed basic install of Android. Windows 10 actually takes up 16 GB once installed (20 GB for 64-bit). While software bloat is a very real problem, these kinds of comparisons are just nonsense.
4. Compile Times

Now, this one I really don’t understand. Any language other than C++ or Rust basically compiles instantly until you hit 100k lines of code. At work, we have a C# monstrosity that’s half a million lines of code, and it compiles in 20 seconds. That’s pretty fast. Many other languages are interpreted or JIT-compiled, so you can just run them instantly. Even then, you don’t really want to optimize for compile time in Release mode unless you’re just removing unnecessary bloat, and many modern compilers take a long time because they’re doing ridiculously complex optimizations that can require solving NP-hard optimization problems, which some actually do.
The original C language and Jonathan Blow’s language compile really fast because they don’t do much. They don’t help you: they have an incredibly basic type system, no advanced memory analysis, and none of the absurd optimizations needed to take advantage of the labyrinthine x86-64 instruction set. Languages in the 1990s compiled instantly because they had no features. Heck, sometimes compilation is actually disk-bound, which is why getting an SSD can dramatically improve compile times for large projects. That has nothing to do with the compiler!
My question here is: what on earth are you compiling that takes hours?! The only projects I’m aware of that take that long are all C++ projects, and it’s always because of header files, because header files are terrible, and thankfully no other language ever made that mistake again. I am admittedly disappointed in Rust’s compilation times, but most other languages try to ensure that at least debug compilation is ridiculously fast.
Hopefully, WebAssembly will eliminate this problem, at least on the web. As for everywhere else, unless you’re using a complex systems programming language, compilation times usually aren’t that bad, and even when they are, they exist for a reason.
Why would you complain about memory usage and then, in the next breath, complain about compilation times? Language features are not free. Rust’s lifetime analysis is incredibly difficult to perform, but it frees the programmer from worrying about memory errors without falling back on a stop-the-world garbage collector, which matters when garbage collection uses 2–3 times more memory than it actually needs (depending on how much performance you want).
I care very deeply about the quality of code that our industry is putting out, and I love the Better World Manifesto that Nikita has proposed. However, it is painful for me to read an article criticizing bad engineering that gets the technical details wrong. A good engineer makes sure he understands what he’s doing; that’s the whole point of the article! If we truly want to be better engineers, we need to understand the problems we face and what we can do to fix them. We need to understand how our computers work and the fundamental algorithmic trade-offs we make when comparing problem-solving approaches.
Years ago, I wrote my own article on this, and I feel it is still relevant today. I asked if anyone actually wants good software. At least now, I know some people do. Maybe together, we can do something about it.