Robert Marianski · Reflections on Software · Tue, 17 Mar 2015 01:25:56 GMT Vintage Tech <blockquote> Weak things must boast of being new, like so many German philosophies. But strong things can boast of being old. -- G.K. Chesterton (@GKCDaily) December 7, 2012 <br /> Obviously Chesterton was talking about software; scholars are divided as to whether he was talking about lisp or Debian Stable. -- Phil Hagelberg (@technomancy) December 7, 2012 </blockquote> <p> With respect to technology, things can move fast. It can be hard to sift through what's valuable and what should be ignored. But some things have not changed much. I think it's worth taking a step back and reviewing these, as they are likely the "classics" of programming. </p> <p> I just have to start with lisp. It's been around since 1958, and yet it remains not only relevant, but continues to push the forefront of what's possible. The funny thing about it is that every time I look at it, it gets better. </p> <blockquote> Lisp is the most important idea in computer science. -- Alan Kay </blockquote> <p> Arguably, it's simultaneously the simplest and the most powerful programming language. In fact, its power is driven by its simplicity. Naturally, I'm referring to its simple, consistent, uniform syntax, which is what makes macros possible. Admittedly, new macros are rarely called for, but when one is necessary and implemented tastefully, it is incredibly difficult to imagine a better solution. </p> <blockquote> The greatest single programming language ever designed. -- Alan Kay (about Lisp) </blockquote> <p> It's not just that lisp has survived; it has thrived in recent years in new forms. Clojure has taken lisp and altered it in subtle but crucial ways. Not only has it become more approachable through small but important syntax changes, it adds data structure advancements that, in my opinion, other languages will continue to emulate for years to come.
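</p> <p> To make this concrete, here's a minimal sketch of a persistent list in Python. This only illustrates structural sharing, the core idea; Clojure's actual collections use far more sophisticated tree representations, and all the names here are my own invention. </p>

```python
# A persistent (immutable) list built from immutable cells.
# "Updating" never mutates anything; it returns a new list that
# shares structure with the old one.
from collections import namedtuple

# One cell: a value plus the rest of the list (another Node, or None).
Node = namedtuple("Node", ["head", "tail"])

def prepend(value, lst):
    """Return a new list with value at the front; lst itself is untouched."""
    return Node(value, lst)

def to_pylist(lst):
    """Walk the cells into an ordinary Python list, for inspection."""
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

base = prepend(2, prepend(3, None))   # the list (2 3)
extended = prepend(1, base)           # the list (1 2 3)

# base is completely unchanged, and extended.tail is literally base:
# both versions coexist safely, with no copying and no locks.
```

<p>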
These persistent data structures simultaneously provide thread safety and facilitate a natural functional style. Either of these alone is beneficial; having both is transformative. </p> <blockquote> You can reach a point with Lisp where, between the conceptual simplicity, the large libraries, and the customization of macros, you are able to write only code that matters. And, once there, you are able to achieve a very high degree of focus, such as you would when playing Go, or playing a musical instrument, or meditating. And then, as with those activities, there can be a feeling of elation that accompanies that mental state of focus. -- Rich Hickey </blockquote> <p> Having this level of direct feedback is hard to describe to anyone who isn't used to it. </p> <p> Another item worthy of mention is <a href="">SICP</a>. Amazingly, it goes from 0 to forking time in just a few chapters, but there's one particular item that continues to blow my mind: being able to represent a list data structure with nothing but closures. This isn't a major topic in the book, but I think it's worth highlighting because it's so profound. </p> <p> For those who aren't used to this, lists are often represented recursively in functional languages as either an empty list, or a value combined with a list. cons is the traditional name for the function that constructs a pair from a value and the rest of the list; car retrieves the first element, and cdr retrieves the remaining elements. </p> <p> It turns out that this can be modelled using only functions. No data type facilities are necessary. This is known as "Church encoding". Here's the Scheme implementation: </p> <code> (define (cons x y) (lambda (m) (m x y))) ; a pair is a function awaiting a selector <br /> (define (car z) (z (lambda (p q) p))) ; apply the selector that keeps the first part <br /> (define (cdr z) (z (lambda (p q) q))) ; apply the selector that keeps the second part </code> <p> I'm not trying to argue that lisp is "the one true way". However, I think it would be foolish to ignore any programming technology that has persisted for so many years.
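</p> <p> As a quick sanity check that none of this depends on Scheme, the same encoding translates directly into any language with first-class closures. Here's the equivalent sketch in Python, using the same names as the Scheme version: </p>

```python
# Church-encoded pairs: a pair is just a closure waiting for a selector.
def cons(x, y):
    return lambda m: m(x, y)

def car(z):
    return z(lambda p, q: p)   # pass a selector that keeps the first part

def cdr(z):
    return z(lambda p, q: q)   # pass a selector that keeps the second part

# Build the list (1 2 3) out of nothing but functions,
# with None standing in for the empty list.
lst = cons(1, cons(2, cons(3, None)))

first = car(lst)        # 1
second = car(cdr(lst))  # 2
```

<p> No class, struct, or built-in container is involved; the data lives entirely inside the closures.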
We'll see what the next few dozen years will hold. </p> Mon, 16 Mar 2015 04:00:00 GMT Software Quality <blockquote> First intention, then enlightenment. -- Buddhist maxim </blockquote> <p>With software, quality is paramount. When quality begins to suffer, everything else follows suit. We see higher defect rates, longer delays in implementing new features, and, generally speaking, a more fragile code base. It also leads to a broken windows effect, where modifications and updates will tend to be poor in quality as well.</p> <p>But what does it mean to say that a piece of software has high quality? I think it means the code should be easy to maintain. And what does it mean to be maintainable? Maintainable code has the property that its intent is clear. Meaning, it's easy to understand what the code is doing, what its goal is, and why it's there. Quality software has additional characteristics as well, but I believe that most of these fall out of its maintainability. One litmus test is to read a piece of code and think about how you would add a new feature to it. If the path forward is clear, then the code itself is clear.</p> <p>A different way of phrasing this is to say that high-quality code doesn't violate the principle of least surprise. Code should behave like it looks. When it's surprising, that's a sign that it's not as clear as it should be. A corollary is that two pieces of code that look the same should behave similarly. If they look different, they should behave differently.</p> <p>So, how do we go about writing code that's high in quality? In my opinion, the best way to improve software quality is simply to care about it. Great code is usually difficult to get right, and it often takes a few failed attempts before a cleaner design can emerge. But these epiphanies will only appear after deliberate thought. And these thoughts will only come to us if we care about improvement in the first place.</p> <p>Certainly, great code emerges from great effort.
There are differing opinions on how to get there, but asking the right questions is the crucial part. It's the intention to improve that leads to enlightenment.</p> Mon, 03 Feb 2014 05:00:00 GMT Languages, Libraries, and Environments <p>Usually, a problem can be solved in multiple ways, with each being an effective solution. This is also true of programming languages. Many problems can be solved with multiple languages; in fact, in theory, all Turing-complete languages should be able to solve the same problems.</p> <p>Nevertheless, we tend to stick to the language that we are most comfortable with, whether that's the first language we learnt, or just the one we currently work with most. Learning a new language requires investment, not just in the language itself, but in its libraries, idioms, and environment as well.</p> <p>When we choose a solution to a problem, the choice can usually be boiled down to some combination of what I'm calling the language itself, the libraries, and the environment.</p> <h3>Language</h3> <p>Language here means the medium for expressing solutions to problems. It is the syntax, but more importantly the vocabulary of concepts that can be used as mechanisms to construct solutions. In the general sense, this is the most crucial, because, as with human language, it constrains the concepts that are available for solutions.</p> <h3>Libraries</h3> <p>These are the existing bodies of code that are available to be readily consumed. Most languages have a suitable standard library that covers the necessities. Naturally, the necessities vary a bit based on audience and problem, but there are certain agreed-upon operations that are so ubiquitous that they are taken for granted. But the breadth and depth of the libraries around a language can vary tremendously.</p> <h3>Environment</h3> <p>By environment, I refer to the tooling surrounding the language and libraries.
This could be the IDEs that are used, but more importantly, it's the ecosystem of tools that supports a programmer's daily operations. These include things like debugging, build/package management, deployment, run-time characteristics, and many others.</p> <p>The decision as to whether a language can be used involves weighing the combination of these three elements. We could have the perfect language, but it won't be picked up if there aren't libraries to support day-to-day operations. And if we have a great language and even better library support, it may be dismissed if it can't run in our production environment. Note that the opposite can also be true. Sometimes a library is selected because of what it can do, and the language and environment simply come along for the ride.</p> <p>There are social elements to consider as well. If our team is particularly well versed in a certain area, chances are good that we won't veer too far from our domain of expertise. Additionally, if some knowledge is highly esoteric, that can play a major role in the decision as well.</p> <p>What's interesting to me is that the decision is rarely just the sum of its parts. There's a certain minimum score required in each area. The minimum varies based on context, but nevertheless, a champion in one area that flops in the others will most likely not be chosen.</p> <p>Although in the abstract there are many factors to consider, in practice a select few typically emerge as the important players. It's important to identify which of these are significant early on, and to focus on those when we make our choices.</p> Fri, 24 May 2013 04:00:00 GMT Thoughts on Language <p>We use language as a means to communicate. It gives shape to how we express ourselves. But it also feeds back and informs what we express, and how we express it.
If a language lacks words to express a concept, we probably won't consider that concept when we think in that language.</p> <p>Programming languages are no different. They are the tools we use to communicate with the computer, and also with each other. And like human languages, some have words to express certain concepts, and others don't. Also like human language, the programming language we use helps define the concepts that appear in the program itself.</p> <p>Another parallel is that all languages have subtleties and nuances behind their expression. These lend themselves nicely to poetry or comedy. But with programming languages, these multiple interpretations can have disastrous consequences. Computers need very precise instructions, and when there is ambiguity, they tend to prefer failure over resilience.</p> <p>Clearly we need to use a higher-level language than that of machine instructions. Although these lower-level languages are very precise, they lack methods of expression that are natural for humans, and are tedious for us to use directly. But using human language is too complex and vague for computers to make sense of. Our challenge then is to strike a balance where we can more naturally express our intent, while simultaneously offering a precise definition for the computer to execute.</p> <p>Taking a quick survey of programming languages shows that we've created our own Tower of Babel. In retrospect, it's not surprising that we ended up here, though. Different problem domains call for different types of solutions. For some, performance is crucial, and therefore it's imperative to have access to the lower-level mechanics of the machine. For others, the user experience is vital and performance is a non-issue. Programming languages run the gamut from low-level memory manipulation to high-level meta-languages.
And as our hardware has evolved, our languages have tried to adapt by making the appropriate trade-offs as well.</p> <p>Programming languages tend to be general by definition, and offer general-purpose constructs to solve a wide variety of problems. Coming up with effective general solutions is critical, and requiring them is the norm. But classes of problems do exist that demand certain particulars, and a language's ability to frame some of these solutions directly affects the shape that the solution will take. Sometimes this means a domain-specific language; sometimes it simply means picking a different programming style. For example, some problems are elegantly solved using logic programming. Others benefit from using a rules engine. Sometimes modeling with states and then following a transition graph is a good fit. Some of these can be provided as a library within a language, but I would argue that languages offer different levels of support for these different styles, and it's more natural to reach for these kinds of solutions in languages where the concepts are a better fit.</p> <p>When evaluating languages, brevity is often used as a measure of expressiveness. Being able to model a solution using fewer parts generally means that it is easier to reason about, which ultimately means the system is easier to maintain. We have to be careful here though, because the important piece is that the system is easier to reason about, not that it's shorter. Just because something is terse doesn't mean it's easy to understand. It could be conceptually dense with implicit meaning, which is the sort of thing that can lead to errors. If anything, we should strive to use constructs that prevent errors and encourage good practices. That being said, solutions modeled with fewer constructs tend to be more elegant.
They often achieve this by using concepts that are better fits for the problem.</p> <p>One mistake I often make, though, is that as I learn a new programming language, I begin to think of it as the one true language that everything should be written in. Armed with my new hammer, I try to find problems to apply this language to, and end up using it in many situations where it isn't the best choice. I think that programmers are guilty of this kind of thinking in general, though. Part of it is just that learning a new language is a significant investment, and we attempt to justify our investment by using it more. It's important to recognize when we have more faith in our language of choice than we should.</p> <p>Particularly exciting for me is that we are now in a time when languages, tools, and libraries can evolve extremely rapidly, and can be shared seamlessly. Our tools for facilitating experiments with languages and approaches are only improving, and many different paradigms are surfacing as a result. I look forward to how our existing languages will continue to evolve, and to the languages of the future.</p> Tue, 25 Dec 2012 05:00:00 GMT The Success of Failures <blockquote> Once a group of ten blind masseuses were travelling together in the mountains, and when they began to pass along the top of the precipice, they all became very cautious, their legs shook, and they were in general struck with terror. Just then the leading man stumbled and fell off the cliff. Those that were left all wailed, "Ahh, ahh! How piteous!" But the masseuse who had fallen yelled up from below, "Don't be afraid. Although I fell, it was nothing. I am now rather at ease. Before falling I kept thinking 'What will I do when I fall?' and there was no end to my anxiety. But now I've settled down. If the rest of you want to be at ease, fall quickly!" </blockquote> <p>To err is human.
But we often think of mistakes as necessary evils, actions or situations that could have been avoided if we had the foresight. After all, some mistakes can be painful. Yet, I would argue that we learn best from our own mistakes. Even if it's not a conscious thought in our mind, when faced with a similar situation or problem, our intuition can help us avoid repeating the same mistake. If I touch a hot stove once, I'm probably not going to touch it again.</p> <p>For easier problems, the cause-and-effect relationship between the mistake and the outcome is readily apparent. But as things get more complicated, it's not always easy to see what the actual mistake was that generated the failure. From a software development perspective, the actual underlying cause can elude us, and everybody can be left drawing their own conclusions as to what happened. We need to be careful here though, because the wrong lesson can guide us down the wrong path in the future. Once we are burnt by a hot stove, we'll never touch a hot stove again. But if we learn the wrong lesson, we may never touch a cold one either.</p> <p>One interesting idea is that the <a href="" title="Cynefin framework">problem space itself can dictate the strategy</a> used to solve it. When all variables are known, we simply use the answer for our given permutation. However, some problems don't have an easy "if this, do that" answer. For these problems, we can set up safe-to-fail experiments, where each one is an attempt at a solution from a different angle, but no failure is catastrophic. Recovering from the failures is the key here. In fact, many early failures can lead to a better outcome in the end, because each one informs the ultimate solution.</p> <p>From a business perspective, this can be a hard sell though. How can you justify allocating resources to what you know will mostly end up being a failure? Isn't that just a waste?
What we need to admit first is that we may not know enough about the particular problem to be in a position to recommend a single solution that has a good chance of success. And the best way to learn more may be to attempt to solve it in multiple ways, many of which will fail. Naturally, nobody wants to hear this kind of news. The immediate reaction could be, "well, let me try to find somebody who knows more about this." But for problems that are relatively new, experts can be hard to come by.</p> <p>Some will also try to rely on a <a href="/practice-of-process" title="practice of process">process</a> to get through the problem. And for known problems, relying on best practice is a fine approach. By definition though, best practice is past practice, so we can't expect to have best practice for all situations, especially for new problems that we don't fully understand.</p> <p>Accepting that failures occur is an important step. Instead of focusing on preventing them completely, we can create environments that are more tolerant of our failures. And we shouldn't simply tolerate mistakes, but accept them as an integral part of the process, and of how we continue to improve ourselves.</p> <blockquote> At the time when there was a council concerning the promotion of a certain man, the council members were at the point of deciding that promotion was useless because of the fact that the man had previously been involved in a drunken brawl. But someone said, “If we were to cast aside every man who had made a mistake once, useful men could probably not be come by. A man who makes a mistake once will be considerably more prudent and useful because of his repentance. I feel that he should be promoted.” Someone else then asked, “Will you guarantee him?” The man replied, “Of course I will.” The others asked, “By what will you guarantee him?” And he replied, “I can guarantee him by the fact that he is a man who has erred once.
A man who has never once erred is dangerous.” This said, the man was promoted. </blockquote> Sat, 31 Dec 2011 05:00:00 GMT