Best practices are constantly evolving, sometimes so much so that today we hear the opposite of what we heard yesterday.

But some fundamentals tend to stay the same. One that most of us can agree on is that bugs can be expensive. More specifically, the later we find a bug, the more costly it is to fix. On a whiteboard, the cost of fixing a bug is just about nothing; it can be as simple as replacing one diagram with another. But in production, the costs can be catastrophic: loss of current or potential customers, direct losses in financial applications, or even human lives at stake.

On one side of the spectrum, we have the school of thought that tries to solve as many problems as possible up front. After all, if it's cheaper to fix bugs earlier in the process, let's just spend more time earlier in the process. In practice, this doesn't work so well, for a number of reasons. The biggest is probably that requirements change during the process itself, and there's not much we can do about that. The other pitfall is that most tough problems in programming are relatively new. We don't have much experience with them, so it's difficult to account for, or even predict, the tough spots. We usually have much more insight into a problem after we've tried to solve it the wrong way a few times.

I think what typically ends up happening with this approach is that some areas get overdesigned while others don't get the attention they deserve. That can lead to getting the abstractions wrong, and leaky abstractions make bugs difficult to prevent. Not knowing which invariants a system needs to uphold can be a big source of errors.
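
To make that concrete, here's a minimal sketch of what it looks like when an invariant is written down rather than left implicit. The `Account` class and its non-negative balance rule are hypothetical, purely for illustration:

```python
class Account:
    """A hypothetical account whose invariant (balance >= 0) is made explicit."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance
        self._check_invariant()

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Because the invariant is stated in code, a violation fails loudly
        # at the moment it happens, instead of surfacing later as corrupted
        # state in production.
        assert self._balance >= 0, "invariant violated: negative balance"
```

When the invariant is explicit, a change that breaks it announces itself immediately; when it lives only in someone's head, the first symptom is usually a bug much further downstream.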

The extreme/agile approach looks at it from a different perspective. Instead of trying to imagine all the scenarios and details up front, we do what we can to discover them early in the process. To oversimplify, it boils down to ignoring complexity in some areas in order to get a working prototype sooner. The trick, then, is knowing which parts of the problem to ignore and which to focus on. Tackling the hard parts first is usually a solid strategy, except that sometimes the hard parts aren't what we think they are.

Agile development also claims to enable quick changes. If the abstractions still fit, and we have a testing suite underneath us, then yes, changes can be really fast. But if the abstractions fit, any system is fast to change. When a requirement change breaks the current model, though, or forces an interface change, I would argue that making changes can be even slower, because they propagate out to more code. We not only have to refactor our logic, but the logic in all the relevant tests as well. Some would argue that means the tests aren't written well or aren't testing at the right level. But writing tests at the right level is hard. It takes time to learn how to write effective tests, just like it takes time to learn how to write effective code.
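
Here's a minimal sketch of what I mean by test level. The `quote` function and its `_base_price` helper are hypothetical; the first test is coupled to *how* the code works, the second only to *what* it promises:

```python
import unittest
from unittest import mock


def _base_price(item: str) -> float:
    """Hypothetical internal helper that looks up a list price."""
    return {"widget": 100.0}.get(item, 0.0)


def quote(item: str, rate: float) -> float:
    """Hypothetical public API: price an item with a discount applied."""
    return round(_base_price(item) * (1 - rate), 2)


class ImplementationCoupledTest(unittest.TestCase):
    # Asserts that quote calls _base_price exactly once. Inlining or
    # renaming that helper breaks this test even though the behavior
    # callers see is unchanged.
    def test_calls_internal_helper(self):
        with mock.patch(f"{__name__}._base_price", return_value=100.0) as helper:
            quote("widget", 0.2)
        helper.assert_called_once_with("widget")


class BehaviorLevelTest(unittest.TestCase):
    # Asserts only the observable result. This survives internal refactors
    # and fails only when the public contract itself changes.
    def test_discounted_quote(self):
        self.assertEqual(quote("widget", 0.2), 80.0)


if __name__ == "__main__":
    unittest.main()
```

An interface change forces both kinds of tests to be rewritten, but a purely internal refactor breaks only the first kind. Knowing where to draw that line is the hard-won skill.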

But the biggest concern I have with agile development is that it can be used as an excuse for poor judgement. Being lazy is not agile; it's just being lazy. Always taking the easy way out usually catches up with us, and then it can be painful to dig out of that hole.

Here are my takeaways:

  1. If we don't understand the problem, we can't process our way out of it. In other words, if we don't know what the issues really are, no amount of process can help us.
  2. We learn best from our own mistakes. That informs our future decisions far more than any process could.
  3. Context matters most. What works for one team, one problem, or one stage of a project may not work for another.