Void

Solving Problems

When faced with a problem, the way we should think is:
1. What is the quick and dirty solution?
2. What is the long-term solution?

Most of the time, one tends to do only one, conveniently ignoring the other. Either our mind goes into overdrive and quickly implements a hacky solution, and we forget the problem only to find it recurring; or we dawdle over implementing a long-term solution while the problem drags on.

One of the reasons we might find it difficult to implement both is that they require fundamentally different kinds of thinking. The quick and dirty solution involves hustling and running around to get things done; somehow, by hook or by crook, one wants to plug the hole. A long-term solution requires one to analyze the problem from all angles, think deeply about it, and figure out a well-rounded solution. A loose comparison would be System 1 versus System 2 thinking.

Both are equally important – neglecting either is a recipe for disaster.

Switching Languages

Many are apprehensive about switching programming languages. It is perfectly fine to have preferences – I am heavily biased towards statically typed languages with great tooling support, but being dogmatic is not something one should aim for.

What could be the downsides of switching programming languages? I am disregarding the psychological aversion to change and sticking to hard facts.

1. One will lose fluency (syntax).
This is a non-issue; syntax is similar to muscle memory, and one will get it back in a day or two. It is akin to swimming or driving after an extended break: it naturally comes back.

2. One will forget the way of doing things.
Every language has a culture and a community-accepted way of getting things done. Regaining this might not be as easy as recalling syntax, but with some effort and thought, one should recover it.

3. One will not be up to date with the language.
Languages keep evolving, but the core ideas and philosophy remain the same. The standard library might become more expansive, the VM might become faster, and some earlier prescribed way of doing things might be anathema now, but the foundational principles remain intact.

4. There is no demand for this language.
As long as your fundamentals are good, this should not be a concern. There are roles that require deep language know-how, but these are few and far between. In fact, it is the opposite: the more languages in your kitty, the more opportunities.

The biggest upside to learning a new language is the exposure to new ideas and thought processes. Any new language immensely expands one’s horizons. For example, the way Java approaches concurrency is very different from Go’s take on it, as the sketch below illustrates. Having this sort of diverse exposure helps one build robust systems and mental models.
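
As a rough illustration (a minimal sketch, not idiomatic best practice in either language), Java typically expresses this kind of concurrency with threads and executors, whereas Go would reach for goroutines and channels:

//A minimal sketch of Java-style concurrency using an ExecutorService.
//Go would typically express the same idea with goroutines and channels
//(e.g. go doWork() plus a channel to collect results) instead of futures.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrencySketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        //Submit two independent tasks; each runs on a pool thread.
        Future<Integer> first = pool.submit(() -> 1 + 1);
        Future<Integer> second = pool.submit(() -> 2 + 2);

        //get() blocks until the corresponding task completes.
        System.out.println(first.get() + second.get());
        pool.shutdown();
    }
}

Neither approach is better in the abstract; being exposed to both shapes how you think about structuring concurrent work.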

Programming languages should be viewed as a means to an end, not an end in themselves. There are cases where the choice of language makes a difference; otherwise, there would not be so many around, with new ones cropping up now and then. But you are doing yourself a disservice by restricting yourself to a few.

Pay The Price

Obstacle racer Amelia Boone says that she is not able to devote enough time to friends and family due to the demands of her tough training regime. That is the price she pays for being at the top of her sport.

In the movie Heat, Robert De Niro’s character says, “Don’t let yourself get attached to anything you are not willing to walk out on in 30 seconds flat if you feel the heat around the corner.” That is the price he pays for being a master thief.

Michael Mauboussin says his quest for knowledge means he misses out on the latest series like Game of Thrones. That is the price he pays for being a crème de la crème investor.

I feel one of the reasons people give up on something too soon or midway is that they have not figured out the price they have to pay for doing it.

Everything that one does has a price. Sometimes it is implicit, sometimes not. Better to figure it out beforehand.

Oops, I did it again

A packed elevator, occupants rubbing shoulders. It stops at a floor; the door opens. A lady wants to get in, but there is no room. Annoyance plays on her face. The elevator moves on.

We all know that worrying over things we cannot control is a pointless exercise. We are aware of the many cognitive biases we have, yet we still fall prey to them. Why does this happen? There is a huge difference between knowing something and internalizing it.

Daniel Kahneman says that in spite of studying biases throughout his life, he is no better at avoiding them than anyone else. Dan Ariely believes Kahneman was playing to the audience with that remark and that we do get better at recognizing cognitive biases and sidestepping them.

Two simple practices that I find useful in becoming more aware of my emotions and biases:
1. Carrying out a daily audit. Every night, I go over the circumstances of that day where I believe I could have reacted better. Along with this, I also ruminate on situations where my cognitive biases one-upped me.
2. Whenever I know that I am getting into an unpleasant situation, even something as mundane as getting stuck in a traffic jam, I keenly observe my emotions.

I am not sure whether anyone can completely eliminate these, but I believe we can get incrementally better at it. Minuscule daily improvements compound into mammoth changes over time.

 

Conventions

Most programming languages have conventions. These could be for naming or code patterns.

How does this help?

A simplistic view is that it helps to keep code consistent, especially when multiple people work on it.

A deeper way to look at this, I believe, is in terms of reducing cognitive load.

In cognitive psychology, cognitive load refers to the amount of working memory being used.

If you have conventions, it is one less thing to think about. You do not have to spend mental capacity deliberating whether to name variables in lowercase, uppercase, camel case, with hyphens, underscores, etc.; you simply rely on the convention. The same applies to code patterns: you look at the pattern and automatically grok the idea, without expending grey cells.
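
As an illustration, the hypothetical Java snippet below follows the language’s widely used naming conventions; the casing alone tells a reader what each name is, so no working memory is spent decoding it:

//Hypothetical snippet illustrating common Java naming conventions:
//PascalCase for classes, camelCase for methods and variables,
//UPPER_SNAKE_CASE for constants. The casing itself carries the meaning.
public class OrderProcessor {

    private static final int MAX_RETRY_COUNT = 3; //constant

    private int pendingOrders; //instance variable

    public void processPendingOrders() { //method
        for (int attempt = 0; attempt < MAX_RETRY_COUNT; attempt++) {
            //retry logic elided
        }
    }
}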

I strongly believe that all tech teams should have conventions wherever possible, outside code too. Freeing up any amount of working memory for things that matter will go a long way towards increasing productivity.

 

Anti-features

When evaluating a new technology, framework, or library, a lot of importance is given to the salient features. While it is important to know the positives, the negatives usually tend to be glossed over. Being aware of the shortcomings of a framework gives one the ability to anticipate problems down the road.

For example, let us take NoSQL databases. A lot of time is spent singing paeans to the scalability, malleability, etc. of NoSQL databases while hardly thinking about the negatives that come with them.

Two simple techniques that give good visibility into anti-features:
1. The very obvious one: Google for the shortcomings. Someone will have written a blog post on the interwebs highlighting how a framework or technology let them down. For example, take this post by Uber on how Postgres did not work as expected for them.
2. Comb through GitHub and/or JIRA, peeking at the bugs raised and the enhancements requested.

Both of the above will provide a good picture of the shortcomings. If you are evaluating a closed-source, proprietary technology, the above may not be feasible.

Once a mental note is made of the negatives, ponder the scenarios where they might affect your usage. It helps to spend quality time on this, as it will save a lot of future trouble.

This might sound very obvious, but it tends to be highly neglected. We get so caught up in the positives of something that the negatives are ignored, and this usually comes back to bite us later.

Luck

I read an interesting article by Richard Wiseman on luck, which I would highly encourage everyone to read. The gist of the article is that people make their own luck and being lucky is something that can be learned.

An excerpt from the article:

Lucky people generate their own good fortune via four basic principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophesies via positive expectations, and adopt a resilient attitude that transforms bad luck into good.

Patrick O’Shaughnessy, in his podcast “Invest Like the Best”, talks to interesting people. He mainly concentrates on investors who have made it big, but once in a while he also chats with people from other walks of life. A common theme that keeps repeating in his interviews is how these people jumped at opportunities that others had shunned, their optimism, and their attitude of continuous learning and development. These qualities eerily match what Richard Wiseman says makes one lucky.

In a recent Farnam Street podcast, behavioral economist Dan Ariely says, “I gamble with my time. I take risks, I do things that do not seem like the right things to do.”

Two of the most successful and richest people of our times, Bill Gates and Warren Buffett, are gung-ho about the future. Bill Gates actively champions positive thinking and wants all of us to cultivate it.

Probably luck is not luck after all. I am sure it is more nuanced than this, but it is something to ponder.

Taking calls

Making decisions is part and parcel of being a leader. It might feel empowering to take calls, but the hallmark of true leadership is enabling others to do so. The smoother the decision-making process and the fewer the blockers, the better it is for the organization.

One route to get there is to create frameworks, rules, and principles for decision making. When your team wants to do something and is confused about how to get there, they simply fall back on the principles and use them. For example, take hiring. Having a clear-cut hiring framework that covers all aspects, from what questions to ask and how many rounds of interviews to conduct, to how to accept or reject candidates and what qualities to look for, aids the hiring decision process. With this in place, teams can make hiring decisions on their own.

Also, when taking calls, openly articulate your thought process. Make it clear what assumptions you made, what questions you asked, what data you looked at, and what trade-offs you accepted. Laying out in the open how you arrived at a decision helps others traverse the same path on their own the next time.

To summarise, instead of just taking calls on behalf of others, go the extra mile to create a framework that enables them to do so independently the next time. Also, laying out the decision-making process in the open gives everybody an opportunity to peek at your thought process so that they can borrow it the next time.

Testing legacy applications

When contemplating introducing automated testing in legacy applications, it is easy to get bogged down in terminology: unit testing, integration testing, regression testing, black box testing, white box testing, stress testing, etc. Quite a bit of time is spent on debates about unit testing versus integration testing; I have written about this before too.

A practical way to approach testing legacy applications is to first scope out the intention behind the test. Is it to test the behavior of a particular method, an API response, or how an application behaves after an HTTP form submit? The next step is to jot down everything that has to be done to enable this. For example, if a database is involved, it can be mocked, or a test database with bootstrapped data can be used.

The gamut of changes needed to inject testability into an application that has never seen testing should not be underestimated. The way you structure testable code is markedly different from code written with no thought of testing.

Take a look at the code below. How would you unit test the getUser method without creating a database connection?

public class Foo {
    DbConnection connection = null;

    public Foo() {
        //Establish the db connection; stands in for the real connection setup
        connection = new DbConnection();
    }

    public User getUser(int id) {
        //Query db and get user data
        User user = new User();
        //Fill user with data from db
        return user;
    }
}

To mould this into testable code, DbConnection creation needs to be decoupled from object creation, like below:

public class Foo {
    DbConnection dbConnection = null;
    public Foo(DbConnection dbConnection) {
        this.dbConnection = dbConnection;
    }

    public User getUser(int id) {
        //Query db and get user data
        User user = new User();
        //Fill user with data from db
        return user;
    }
}

Since the DbConnection is no longer created inside the constructor, it can be mocked to unit test any method in the class. An application written without testing in mind will be replete with code like the first version above. Code patterns like these are one of the biggest hurdles in testing legacy applications.
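
As a minimal sketch of what this enables, assuming JUnit 5 and Mockito are available (and using the placeholder DbConnection and User types from above), a unit test can now hand Foo a mock instead of a real database:

//A minimal unit test sketch, assuming JUnit 5 and Mockito.
//What getUser actually queries is elided above, so no stubbing is shown;
//the point is that no real database connection is ever established.
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.mockito.Mockito.mock;

import org.junit.jupiter.api.Test;

public class FooTest {

    @Test
    public void getUserWorksWithoutARealDatabase() {
        DbConnection mockConnection = mock(DbConnection.class);

        Foo foo = new Foo(mockConnection);

        //Exercise the method in isolation; stub the mock as the real query logic demands.
        User user = foo.getUser(42);
        assertNotNull(user);
    }
}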

The next step is to eliminate the resistance to testing. This means setting up all the infrastructure and libraries needed to carry out testing and having a reference readily available to follow. Bunch test cases into categories: unit tests, tests that need a mocked object, tests that need a mocked database, tests that need a database seeded with data, tests that need a web server, etc. Then implement one test case for each of these categories. This serves a dual purpose: the setup is ready for each category, and a reference is readily available for others to emulate.

One aspect that is usually neglected is the effect of testing on the product release cycle. As a result of testing, more code, dependencies, and infrastructure are introduced, all of which need to be maintained. Along with working on new features, the effort of writing tests for them also has to be taken into account. While refactoring, it is not just the code that has to be refactored; the test cases have to be refactored too. This is a trade-off between time to market on one hand and maintainability and reliability on the other.

Testing is no longer the chore it used to be; testing tools and frameworks have grown by leaps and bounds. With the advent of Docker, headless browsers, Selenium, etc., testing is very much within reach of most teams, provided the intention is there and the effort is put in.

Build versus buy

Consciously or unconsciously, as software engineers, we perennially make build versus buy decisions. It might be as trivial as copy-pasting code from somewhere versus racking our brains to write our own; using an already available library or writing one from scratch; using a time-tested framework versus designing one; or building a piece of software internally as opposed to buying one.

The way we account for the build versus buy decision varies. Some of the frivolous reasons for building in-house are NIH (not invented here) syndrome, hubris, and the planning fallacy. We generally tend to overemphasize our expertise, knowledge, and capability, which naturally leads to building internally. We also underestimate the amount of work involved in creating software; only once we get our feet wet does reality set in. A very valid reason for building internally is cost, but when accounting for cost, we usually overlook the hidden costs of building software. Buying software has an upfront monetary cost, whereas by building internally we pay in the form of opportunity cost, talent cost, feature cost, etc.

Build versus buy arguments are full of qualitative speak like “This is not our core expertise; we should be concentrating on solving our business problems”, “This is going to cost us a bomb; let us build in-house”, “We should have had this yesterday; building in-house will cost us another 6 months”, “Will that external product be able to handle our scale?”, “Can we trust them with our data?”, and so on. In most cases, build versus buy decisions are qualitative; it is not an easy exercise to quantify them.

When evaluating a product that is already on the market versus building something similar, a cardinal mistake people commit is mapping features one to one. Even though having 100 different features looks rosy and attractive, we usually end up using only a select few. Instead of trying to match an external product feature for feature, scope out the features you need or would probably use, and then estimate the effort. Another consideration is refinement. An external product will be refined and polished, but you may not need the same level of polish. For example, you might not need a web interface for the product; a terminal interface might work fine for your use case.

When faced with the build versus buy decision, asking the following helps:

  1. Is this my core expertise or is it something I can let others do for me?
  2. What is the cost of getting this done externally versus hiring people to build this?
  3. How much control do I need over this, i.e., can I live with some errors, downtime, or opaqueness?
  4. Will I really do a better job building this internally?
  5. Do I have the expertise needed to build this?
  6. Once I build this, will I be able to maintain and enhance it?
  7. What is the opportunity cost of having this sometime in the future versus having it now?

Use the answers to the above as a beacon for the build versus buy decision.