Cutting through to what matters

Behold the Thermomix®. Home cooks love it because it does everything! It has an electric heating element and a motorized blade, with functions for steaming, emulsifying, blending, precise heating, mixing, milling, whipping, kneading, chopping, weighing, grinding and stirring. Its makers have “reinvented simplicity, again!”

Professional chefs don’t use the Thermomix. They are living in the stone age, almost literally! Their tools of choice are the controlled use of fire and the eight-inch steel chef’s knife. The controlled use of fire is 400,000 years old, and the chef’s knife is little more than the stone tools used by our Australopithecus garhi ancestors 2.6 million years ago! Talk about old tech!

Why do chefs live in the past while cooks embrace the “reinvented simplicity” of the present? If your friend shows an interest in cooking, should you buy them a Thermomix? Is it better to be a chef or a cook?

I can’t answer any of these questions! But as a long-time cooker of code, I can tell you:

  1. Why experienced software engineers embrace old ideas;
  2. How you should evaluate new technologies; and,
  3. Why you should be a producer of technology rather than just a consumer of it.

If I’ve done my job, you'll leave this essay with an intense desire to master bash, understand the Unix process model, read the RFCs defining TCP and friends, become a computer history enthusiast and maybe invent the future!

The Thermomix, reinventing simplicity, again

Cooks and chefs

The word “chef” is a shortening of the phrase chef de cuisine meaning “head of the kitchen”. It is a role quite different to that of the cook, who merely… cooks.

Tim Urban does a great job of explaining the difference:

Everything you eat—every part of every cuisine we know so well—was at some point in the past created for the first time. Wheat, tomatoes, salt, and milk go back a long time, but at some point, someone said, “What if I take those ingredients and do this…and this…..and this……” and ended up with the world’s first pizza. That’s the work of a chef.

Since then, god knows how many people have made a pizza. That’s the work of a cook.

The chef reasons from first principles, and for the chef, the first principles are raw edible ingredients. Those are her puzzle pieces, her building blocks, and she works her way upwards from there, using her experience, her instincts, and her taste buds.

The cook works off of some version of what’s already out there—a recipe of some kind, a meal she tried and liked, a dish she watched someone else make.

Tim uses this distinction to explain why Elon Musk is so much more effective than the rest of us. Not everybody can be or wants to be Elon Musk, but many of us want to play at least a small part in inventing the future of technology. If that includes you (and I hope it does!) this article should give you a warm fuzzy sense of validation! More importantly, we’ll explore some of the principles that might help you on your journey.

Finding the edge

You may worry that building the expertise required to make an impact in technology will take too much time. This is an understandable instinct: most of the experts we see in other fields take decades to reach the cutting edge. But computing is different in that it’s young and relatively neglected. You may not make a dent in mathematics, say, or physics, without decades of study and research, but that’s because the prior art takes so long to navigate. Computing has less prior art, and when we focus on just the truly foundational and important, we find that it’s possible to reach the cutting edge of some field—even a complex one such as computer vision or graphics—in just a year or two.

Computer history is filled with stories of dedicated individuals and small teams who did innovative, high-impact work from start to finish in a surprisingly short amount of time. Bram Cohen, for instance, created the BitTorrent protocol by himself over approximately 12 months. BitTorrent is no small feat: it required a strong understanding of UDP and other aspects of networking, but Cohen was able to obtain and apply that knowledge in the time that some folks dedicate to mastering an intricate platform like WordPress.

John Carmack was only 19 when he invented adaptive tile refresh and used it to build the engine for Commander Keen, the first commercial side scroller game for the PC. He was at the cutting edge of computer graphics throughout his early 20s, being either the inventor or first to implement techniques such as ray casting, binary space partitioning, surface caching and Carmack’s reverse. These helped the company he co-founded to produce one hit after another: Wolfenstein, Doom and Quake among them. Carmack had been programming for some time before this, but he was new to computer graphics. Considering that he was in his early 20s, he certainly didn’t spend decades working to reach the cutting edge. A few years of intense dedication and thinking from first principles was enough for him.

Bret Victor recently wrote a great article detailing the many things that technologists can do about climate change. What surprised me is that many of these things, as critical as they are, are rather small. With enough dedication, an average programmer could solve some of these problems in less time than it takes to complete a PhD.

A young John Romero (left) and John Carmack

An unknown error has occurred

Until this point in the essay, I was writing it on Medium, a web publishing platform. Medium’s web-based editor is one of the nicest I’ve seen, but it’s broken.

The WYSIWYG editing resulted in annoying, unpredictable behavior when inserting whitespace around non-text content or deleting it, including a bug I experienced when I placed an embedded tweet next to an image. I couldn’t delete the tweet, which would have been fine except that I didn’t want it! Failing to find any way to delete it, I opened up Chrome DevTools and optimistically removed the corresponding DOM node, but of course an “unknown error” then occurred. So now I’m writing markdown in a text editor and will just publish the HTML version myself. As Myles points out, an added benefit of this approach is that it will still actually render in five years time.

Six commands

In 1986 the Turing Award winner, Art of Computer Programming author and “father of the analysis of algorithms” Donald Knuth was asked to contribute a program to the “Programming Pearls” column of the Communications of the ACM. The task he was given was: read a text file, record the frequency of each word, then generate a report of the most frequently used words. Knuth’s code appeared in the June 1986 issue of the publication (unfortunately paywalled) and is reproduced as an image here, to give you a sense of its size and shape.

A literate program, by Knuth

The purpose of the article was to demonstrate a style of programming—pioneered by Knuth—called “literate programming”, which allows the developer to reorder the code, and mix it with prose to make the program’s source read more like a piece of literature.

The next edition of the “Programming Pearls” column was to be a “literary review” of the program, and Doug McIlroy—one of the creators of Unix—was asked to write it.

After pointing out some bugs in Knuth’s code, McIlroy provided a six command shell pipeline to achieve the same outcome:

tr -cs A-Za-z '\n' |   # transliterate runs of non-letters to newlines: one word per line
tr A-Z a-z |           # downcase every word
sort |                 # sort, bringing identical words together
uniq -c |              # collapse each run of identical words, prefixing it with its count
sort -rn |             # sort numerically by count, most frequent first
sed ${1}q              # quit after printing the first $1 lines

I’ve paraphrased McIlroy’s line-by-line explanation as the comments above.

The power of McIlroy’s program comes from Unix pipelines, with which he has some familiarity given that he invented them (along with the tr and sort tools he used above).

With pipes, each of the small pieces of McIlroy’s program become reliable pieces of a production line for data, feeding the standard output of one station into the standard input of another.

Each individual piece is responsible for very little: sort simply sorts the lines, uniq removes duplicates and in this case includes a count. The minimal responsibility of each unit means that they are reliable, predictable, and lasting—most were written in the 70s and their interfaces and implementations have hardly changed since standardization in POSIX in the late 80s. Pipes have allowed decades of subsequent tools to interface seamlessly with their ancient peers, but only if the authors understand pipelines and participate in the contract!
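The contract is simple to honor: read lines on standard input, write lines on standard output, and keep no other state. Here’s a minimal sketch of a new tool joining its ancient peers (the filter and its name are my own, purely illustrative):

```shell
# trim: a hypothetical filter that strips leading and trailing
# whitespace from each input line. Lines in on stdin, lines out on
# stdout, nothing else -- that's the whole contract.
trim() {
  sed 's/^[[:space:]]*//; s/[[:space:]]*$//'
}

# Because it speaks the same interface, it composes seamlessly with
# tools written decades earlier:
printf '  pipes  \n  endure \n' | trim | sort | uniq
```

Any tool you write this way slots into a pipeline next to sort and uniq as if it had been there since the 70s.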

This isn’t just an argument for learning to use the shell, although you should do that ASAP! (Zed Shaw’s Command Line Crash Course is a great place to start and will only take you a couple of days; this is a decent primer on pipes.)

The power and longevity of pipes comes from a simple interface that permits small, simple units, which is what experienced engineers tend to strive for in their own systems. Yet so many recent technologies and systems forget this: their intricate, complicated interfaces produce less reliable components, limit flexibility, and ultimately shorten lifespans.

Dr. Drang has a great post about McIlroy’s reply, titled More shell, less egg. He named it after one of the more visual of McIlroy’s retorts:

Knuth has shown us here how to program intelligibly, but not wisely. I buy the discipline. I do not buy the result. He has fashioned a sort of industrial-strength Fabergé egg—intricate, wonderfully worked, refined beyond all ordinary desires, a museum piece from the start.

When you are next assessing or building an intricate system, imagine a Fabergé egg, and ask if you really do want the intricate museum piece “refined beyond all ordinary desires”.

An unstable platform

In an industry of rapid change, platforms die in front of one’s eyes. As a junior engineer this is hard to see, because you’ve only been coding for a short period of time, during which the Meteor JavaScript platform has experienced a meteoric rise in popularity! “Meteoric rise” is a great phrase, considering what a meteor does.

Next time somebody mentions the meteoric rise of a JavaScript platform, I want you to imagine its meteoric descent and incineration in the earth’s atmosphere.

Unfortunately, software changes fast enough that no software platform can maintain its upward trajectory for more than a few years. If you look at Wikipedia’s list of software platforms you’ll notice that most are at various stages of plummeting down to earth.

I worked at a company that used Adobe Flex for its web app (mercifully not on my team). While the original decision may be excusable, the subsequent years of investing in Flex “experts” were not: those experts declined to learn HTML and JavaScript, experienced JavaScript engineers at the company felt marginalized and quit, and the inevitable death march to move off Flex led to 12+ months of negligible feature development, burnout and layoffs.

Experienced engineers have been around long enough to see many software platforms die, and so are reluctant to invest much time in the latest shiny thing. Junior engineers have much more energy; they are like kids with puppies.

I have a puppy. Or at least, my wife generously calls her a puppy, as a reminder of the good old days. She is now six and gets tired easily and has gray hair (my dog, that is, not my wife). This is what she looks like:

Living with a dog is a painful reminder of one’s mortality. Delilah has a perfect diet, exercises regularly and has stayed lean her whole life. But she is still aging before our eyes and nothing will stop the hand of death.

This is what Delilah looked like only 6 years ago:

If you were a child spending time with Delilah as a puppy, you would have no idea how quickly she would age. It surprised even me.

Experienced engineers have seen enough puppies age, or enough meteors fall to earth, that they are skeptical about the latest shiny framework that more junior engineers are excited about. This is not a general disdain for the new: they want to see innovation as much as anybody, but are skeptical of newborn frameworks, tools and technologies that fail to embody the timeless principles that they’ve found most valuable.

A meteoric rise

Broken, again

When Medium broke for me, I started writing this article in a text editor instead. In this case I chose the Atom editor, because it has a nifty feature to preview markdown rendering as you type. Unfortunately, the longer the article grew, the slower typing became. It now takes a few hundred milliseconds between typing “a few hundred milliseconds” and actually seeing it on the screen. And the markdown preview window has decided that it has rendered more than enough of my page: a couple of paragraphs ago it started rendering emptiness for anything else that I write.

I could invest the time in learning the architecture of this editor in order to troubleshoot and fix these issues, but it would inevitably feel like opening up a Thermomix (voiding the warranty!) and poking around.

Instead, from this point onwards I’ll be taking my own advice and using more reliable tech: an older editor, a tiny command line tool called wach that Myles created to just watch a directory then do something, pandoc to convert the file to html, and another tiny command line tool by Myles called rld which just reloads the frontmost browser tab as needed. Connecting these tools together, I get a live reloading development environment with minimal effort:

wach 'pandoc -f markdown -t html knife-skills.md > index.html && rld chrome'

These tools are reliable and stable and I’ve used them all before so I’m confident I can actually get to the end of the essay without them breaking.
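If you don’t have Myles’s tools, the same loop can be approximated with nothing but a POSIX shell, at the cost of polling. This is a rough sketch of my own, not a replacement; rld in particular has no standard equivalent, so you’d reload the browser yourself:

```shell
# stamp: a cheap change detector -- the file's size and modification
# time as reported by ls. Any edit to the file changes the stamp.
stamp() {
  ls -l "$1"
}

# watch_and_run FILE CMD...: re-run CMD whenever FILE's stamp changes.
# A rough polling stand-in for wach; press Ctrl-C to stop.
watch_and_run() {
  file=$1; shift
  last=""
  while true; do
    now=$(stamp "$file")
    if [ "$now" != "$last" ]; then
      last=$now
      "$@"
    fi
    sleep 1
  done
}

# Example invocation (mirrors the wach command above):
# watch_and_run knife-skills.md sh -c \
#   'pandoc -f markdown -t html knife-skills.md > index.html'
```

It’s cruder than wach, but it’s made of the same small, boring, reliable pieces, so it will still work in five years.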

The best computer science book ever written

The Structure and Interpretation of Computer Programs is 25 years old, but I agree with Brian Harvey that it is the best computer science book ever written. And it’s not just Brian Harvey and lecturers like him at the country’s top computer science programs who have believed this over the last 25 years: if you google for computer science textbook recommendations you’ll see SICP high on most lists written by experienced, practicing software engineers.

Why is it that SICP has experienced such longevity when thousands of more modern books have failed to make an impact? Surely the more modern books ought to have been able to build on SICP, retaining some principles and bringing others into the context of modern computing?

In the words of Brian Harvey:

Before SICP, the first CS course was almost always entirely filled with learning the details of some programming language. SICP is about standing back from the details to learn big-picture ways to think about the programming process… Usually, a book lasts only as long as the language fad to which it is attached. SICP has been going strong for over 25 years and shows no sign of going out of print. Computing has changed enormously over that time, from giant mainframe computers to personal computers to the Internet on cell phones. And yet the big ideas behind these changes remain the same, and they are well captured by SICP.

This is not just a suggestion to read and re-read SICP (although you should, right now! It’s free to read online). More importantly, I hope you seek out knowledge, tools and technologies that are similarly about “standing back from the details to learn big-picture ways to think about the programming process”. If you do you’ll be rewarded with long term value and the power of good abstractions.

The principles in SICP are knives for the mind: with some practice, these simple, flexible tools will serve you for your entire career.

Framework salad

One meal you can make with a Thermomix is framework salad. I recently attended the presentation of final projects at a Bootcamp in San Francisco and every project was a framework salad (in fact when I mentioned this to one of the instructors it was he who introduced me to this wonderful phrase).

Bootcamps are great for upward mobility, but in compressing software education into 10-ish weeks and optimizing for immediate employment prospects, the students learn just enough to wrangle complicated frameworks into derivative projects. Every project from that cohort was a CRUD app, most just presenting a collection of existing data on a map. The presenters mostly just rationalized why they chose Rails over Sinatra, or Angular over React. I was hoping that there’d be some probing questions from the audience; the first (from a student in a later cohort) was “how did you choose between Google Maps and Mapbox?”

I didn’t expect to see true innovation after 10-ish weeks, but I did hope to see the acorn. If students had been encouraged to work on smaller projects from scratch, they would have had enough time to go into depth, to approach the edge of some small idea and start thinking about making their own unique mark on it. If you were a student there, wouldn’t you find that more interesting than yet another CRUD app, too?

Frameworks descend and burn up in the atmosphere like meteors. Technology changes fast enough that today’s problems are never tomorrow’s problems. Historically, our greatest technologies have been created by those who kept digging until they hit bedrock; who understood foundational ideas and technologies well enough to improve upon them.

What are the important problems in your field?

If you work hard on important problems, there may eventually be a concept in computing named after you. If you work very hard on very important problems, you may become Richard Hamming. He was responsible for “the Hamming code (which makes use of a Hamming matrix), the Hamming window, Hamming numbers, sphere-packing (or Hamming bound), and the Hamming distance.” He also worked on the Manhattan Project and was awarded the Turing Award in 1968.

In short, Hamming knew how to do high-impact work. Graciously, he also gave a talk on how to do high-impact work, which you should obviously watch (or read):

One story from the talk stood out to me:

Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, “Do you mind if I join you?” They can’t say no, so I started eating with them for a while. And I started asking, “What are the important problems of your field?” And after a week or so, “What important problems are you working on?” And after some more time I came in one day and said, “If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?” I wasn’t welcomed after that; I had to find somebody else to eat with! That was in the spring.

In the fall, Dave McCall stopped me in the hall and said, “Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven’t changed my research,” he says, “but I think it was well worthwhile.” And I said, “Thank you Dave,” and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles. They were unable to ask themselves, “What are the important problems in my field?”

I can’t convince you that you should do high-impact work. Nor can Hamming, although as he does point out: “as far as I know, each of you has one life to live. Even if you believe in reincarnation it doesn’t do you any good from one life to the next!”

Provided you do wish to do high-impact work, wouldn’t it make sense to focus on the important problems in your field? And if you do focus on the most important problems, what’s the chance that the breakthrough will come through mastering the latest shiny framework?

Go forth!

In short:

  1. Be a chef: work on important problems, do unique work;
  2. It’s possible: it will take only a few years to get to the edge of a sub-field of computing;
  3. It’s necessary: otherwise your expertise will become redundant; and,
  4. Doing unique work at the edge of a field requires foundational knowledge and basic, timeless tools.

By Ozan Onay, an instructor at Bradfield. Thanks to Myles Byrne, Dion Almaer, Anthony Marcar, Omar Rayward and Dave Newman for feedback on drafts of this.

© 2016 Bradfield School of Computer Science