What I Learned About Software Development in Flight School
I don't talk about it online often, but I've had my pilot's license for a few years now.
As I progressed through flight training, I realized how well some of the mental models a pilot uses apply to software engineering. This was one of the first times I saw how powerful a mental model can be, and I was immediately able to draw parallels to my work as a developer. Here are a few of those models.
Quick note: the models below apply to single-engine propeller planes (which is all I'm trained to fly). Conceptually they extend beyond that, but may be shaped a little differently for other types of airplanes. But this article isn't really about flying anyway.
1. You can't stretch the glide
This is one of my favorite mental models to talk about. It is based on a simple concept in physics: conservation of energy. Boiled down to its simplest form, energy cannot be created or destroyed; this is true of the universe as a whole, and it is also true of smaller isolated systems.
An airplane is "gliding" when it is relying purely on potential energy (in the form of height). This happens when the engine is at idle, or (critically) when the engine fails.
The process of gliding to the earth is a simple transfer of potential energy into (mostly) kinetic energy. Without getting into too much detail, flying at a particular airspeed (called best glide speed) maximizes the horizontal distance you cover for every foot of altitude you give up.
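To make that concrete, here's a rough sketch of the arithmetic. The ~9:1 glide ratio is an illustrative assumption (in the ballpark of a small trainer), and `glide_range_nm` is a made-up helper, not a flight-planning tool:

```python
# Rough glide arithmetic with illustrative numbers (not for flight planning).
GLIDE_RATIO = 9        # assumed ~9:1: nine feet forward per foot of altitude lost
FEET_PER_NM = 6076     # feet in one nautical mile

def glide_range_nm(altitude_agl_ft: float, glide_ratio: float = GLIDE_RATIO) -> float:
    """Best-case horizontal glide distance, in nautical miles."""
    return altitude_agl_ft * glide_ratio / FEET_PER_NM

# From 2,000 ft above the ground, that's roughly 3 NM. Pulling back on the
# yoke doesn't change this math; flying slower than best glide speed only
# lowers the effective glide ratio and shortens the distance.
print(f"{glide_range_nm(2000):.1f} NM")  # ~3.0 NM
```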
A common error in engine failure scenarios is an attempt to "stretch the glide." This often happens when a pilot is trying to make an emergency landing, say, in a field, but realizes they are going to come up short of that field (perhaps heading for trees instead). They are already gliding at the airspeed that maximizes their distance, but then they make the critical mistake: they pull back on the yoke.
This simple gut-based reaction is the cause of many fatal accidents. The otherwise maximally efficient glide is made less efficient, and against all intuition, it shortens their glide even further. Pulling back hard enough causes a stall, usually too close to the ground to recover.
Lesson: Don't pull back on the yoke; it will shorten your glide.
Energy is conserved. When you try to work extra on the weekends or late nights, you are pulling back on the yoke. Eventually, what feels like the right decision (overworking and "hustling") will end poorly. Don't try to stretch your glide! This is how technical debt happens, it's how testing gets skipped, and it's how developer health declines.
2. The best pilots always use the checklist
There are a handful of behaviors that great pilots have in common. They can be summarized by this: great pilots follow the checklist.
Aviation safety is heavily dependent on checklists. If a checklist is not followed, critical mistakes can occur. For example, not long ago, a passenger jet crashed shortly after takeoff. The reason? The trim was set for landing, causing an excessive pitch-up. (Setting the trim correctly is a checklist item.)
Unfortunately, human ego makes us believe that somehow not following procedure is a sign of expertise.
Occasionally that's even true: deviating from a checklist because you have a nuanced understanding and are choosing to do something different based on experience is not the same as forgetting an item on the checklist.
More often than not, though, we simply have a false sense of superiority.
- This is why we excuse ourselves from writing thorough tests.
- This is why we believe it's better to write our own package than to rely on someone else's.
- This is why we have a hard time predicting failure scenarios.
- This is why we skip over documentation even though it probably has the answer to our question.
Lesson: Ego is dangerous. Don't break the rules purely out of self-confidence.
There's not always a perfect procedure. But when a procedure exists, you are probably better off trusting it by default, and departing only when you can justify your reasoning beyond your own intuition. Even then, do so sparingly; breaking procedure is unexpected behavior, and the developers who encounter your work in the future will have a harder time if you depart from the normal path.
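In software, the most reliable checklist is one a machine runs for you. Here's a minimal sketch of that idea; the script and the specific commands are hypothetical, assuming a project that happens to use pytest and ruff:

```python
# A hypothetical pre-release checklist encoded as a script, so "forgetting
# an item" becomes impossible rather than merely discouraged.
import subprocess
import sys

CHECKLIST = [
    ("tests pass",    ["pytest", "-q"]),        # assumed test runner
    ("lint is clean", ["ruff", "check", "."]),  # assumed linter
]

def run_checklist() -> bool:
    all_ok = True
    for name, cmd in CHECKLIST:
        passed = subprocess.run(cmd).returncode == 0
        print(f"[{'PASS' if passed else 'FAIL'}] {name}")
        all_ok = all_ok and passed
    return all_ok

if __name__ == "__main__":
    # Refuse to release unless every single item passes.
    sys.exit(0 if run_checklist() else 1)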
3. Always have a backup plan, even when the risks are low
Every flight I take, I am required to have at least 30 minutes of extra fuel on board.
If I fly at night, I have to have 45 minutes of extra fuel on board.
It doesn't matter if the weather is perfect and there's not a single other pilot in the sky. It doesn't matter if I have 5 parachutes on board and I'm flying high enough to glide across the state.
If I file a plan to fly IFR (in the clouds with only instrument reference), unless the weather at the destination is forecast to be very good, I have to have an alternate airport. I need enough fuel to make it to my planned destination, then to my alternate, and then another 45 minutes.
Why? Because an emergency is, by definition, unexpected. The cost of preparation is much lower than the cost of disaster. "Better to have it and not need it than to need it and not have it."
What happens if weather does something unpredictable, and you have to fly to another airport? What if an airport service vehicle breaks down in the middle of the runway, and you can't land there? What if an earthquake splits the runway in half?
Lesson: Sometimes things don't go as planned.
And unfortunately, sometimes this ends in disaster. Planning is cheap. Take the time and resources needed to prepare for the unexpected.
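In software, the equivalent of reserve fuel and an alternate airport is a fallback you build before you need it. A minimal sketch of that pattern, assuming hypothetical primary and backup endpoints:

```python
# The "alternate airport" pattern: decide on the fallback before takeoff,
# not after the primary has already failed. URLs and timeout are illustrative.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.example.com/status",     # planned destination
    "https://backup.example.com/status",  # filed alternate
]

def fetch_status(timeout_s: float = 2.0) -> bytes:
    last_error = None
    for url in ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as err:
            last_error = err  # note the failure and divert to the alternate
    raise RuntimeError("primary and alternate both unreachable") from last_error
```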
4. Pilot error is far more common than mechanical failure
According to a study published in 1984, human error accounted for over half of all airplane crashes. Statistically, that means human error accounts for more crashes than all other causes combined. This isn't a failure of the system; this is a reality that comes with being human. We make mental errors all the time. (Remember model #2.) Being reminded of the fallibility of human effort helps us understand why we should rely on machines more than on ourselves whenever possible.
Lesson: When something goes wrong, assume it's a human problem first.
We typically have a tendency to blame a tool or a process rather than a person. But humans, even highly skilled ones, are most often the cause of failure in software. The sooner we can acknowledge this, the sooner we can design our systems around human limitations rather than denying them.
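One way to do that is to make the mistake impossible to express, rather than relying on vigilance. A small illustrative sketch; the unit types and function names here are made up for this example:

```python
# Make the human error unrepresentable: a static type checker (e.g. mypy)
# will flag feet passed where meters are expected. Names are illustrative.
from typing import NewType

Feet = NewType("Feet", float)
Meters = NewType("Meters", float)

def feet_to_meters(h: Feet) -> Meters:
    return Meters(float(h) * 0.3048)

def set_target_altitude(altitude: Meters) -> None:
    print(f"target altitude: {altitude:.0f} m")

set_target_altitude(feet_to_meters(Feet(3000.0)))  # OK: ~914 m
# Passing Feet(3000.0) directly would be caught by the type checker
# before the mistake ever runs.
```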
The next four models will be added in the near future. Check back here soon.