When faced with a problem, we should ask ourselves two questions:
1. What is the quick and dirty solution?
2. What is the long-term solution?
Most of the time, we tend to do only one of these, conveniently ignoring the other. Either our mind goes into overdrive, quickly implements a hacky solution, and we forget the problem only to find it recurring. Or we languish over a long-term solution while the problem drags on.
One of the reasons we might find it difficult to do both is that they require fundamentally different kinds of thinking. The quick and dirty solution involves hustling and running around to get things done; by hook or by crook, one wants to plug the hole. A long-term solution requires one to analyze the problem from all angles, think deeply about it, and work out a well-rounded answer. A rough analogy would be System 1 thinking versus System 2.
Both are equally important – neglecting either is a recipe for disaster.
Obstacle racer Amelia Boone says that she is not able to devote enough time to friends and family due to the demands of her tough training regime. That is the price she pays for being at the top of her sport.
In the movie Heat, Robert De Niro's character says, “Don’t let yourself get attached to anything you are not willing to walk out on in 30 seconds flat if you feel the heat around the corner.” That is the price he pays for being a master thief.
Michael Mauboussin says his quest for knowledge means he misses out on the latest series, like Game of Thrones. That is the price he pays for being a crème de la crème investor.
I feel one of the reasons people give up on something too soon or midway is that they have not figured out the price they have to pay for doing it.
Everything that one does has a price. Sometimes it is implicit, sometimes not. Better to figure it out beforehand.
It is a packed elevator. Occupants are rubbing shoulders. Stops at a floor. Door opens. A lady wants to get in but there is no room. Annoyance plays on her face. Elevator moves on.
We all know that worrying over things we cannot control is a pointless exercise. We are aware of the many cognitive biases we have, yet we still fall prey to them. Why does this happen? There is a huge difference between knowing something and internalizing it.
Daniel Kahneman says that in spite of studying biases throughout his life, he is no better at avoiding them than others. Dan Ariely believes Kahneman was playing to the audience with that remark, and that we do get better at recognizing cognitive biases and sidestepping them.
Two simple practices that I find useful in becoming more aware of my emotions and biases:
1. Carrying out a daily audit. Every night, I go over circumstances that day where I believe I could have reacted better. Along with this, I also ruminate on situations where my cognitive biases one-upped me.
2. Whenever I know that I am getting into an unpleasant situation, I keenly observe my emotions. This might range from something as mundane as getting stuck in a traffic jam to a difficult conversation.
I am not sure whether anyone can completely eliminate these biases, but I believe we can get incrementally better at it. Minuscule daily improvements compound into mammoth changes over a long time.
When evaluating a new technology, framework, or library, a lot of importance is given to the salient features. While it is very important to know the positives, the negatives usually tend to be glossed over. Being aware of the shortcomings of a framework gives one the ability to anticipate problems down the road.
For example, let us take NoSQL databases. A lot of time is spent singing paeans to the scalability, malleability, etc. of NoSQL databases while hardly thinking about the negatives that come with them.
Two simple techniques that give good visibility into anti-features:
1. The very obvious one: Google for the shortcomings. Someone will have written a blog post on the interwebs highlighting how a framework or technology let them down. For example, take this post by Uber on how Postgres did not work as expected for them.
2. Comb through GitHub and/or JIRA, peeking at the bugs raised and enhancements requested.
Both of the above will provide a good picture of the shortcomings. If you are evaluating closed-source proprietary technology, the above may not be feasible.
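As a rough sketch of the second technique, GitHub's public issue-search API can be queried for open bug reports against a project. The repository name below is a placeholder, and this assumes the project labels its bugs with the conventional "bug" label:

```python
# Sketch: build a GitHub issue-search API URL that surfaces open,
# bug-labeled issues for a repository under evaluation.
from urllib.parse import quote

def github_bug_search_url(repo: str, label: str = "bug") -> str:
    """Return the GitHub search API URL for open issues carrying
    the given label in the given repository."""
    query = f'repo:{repo} is:issue is:open label:"{label}"'
    return "https://api.github.com/search/issues?q=" + quote(query)

# Hypothetical repository name, for illustration only.
print(github_bug_search_url("example-org/example-db"))
```

Fetching that URL returns a JSON payload whose `total_count` and issue titles give a quick sense of where the project hurts.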
Once a mental note is made of the negatives, ponder on the scenarios where this might affect your usage. It helps to spend quality time on this as this will save one from a lot of future trouble.
If you think about it, this might sound very obvious, but it tends to be highly neglected. We get so caught up in the positives of something that the negatives get ignored, and this usually comes back to bite us later.
Making decisions is part and parcel of being a leader. It might feel empowering to take calls, but the hallmark of true leadership is enabling others to do this. The smoother the decision-making process and the fewer the blockers, the better it is for the organization.
One route to get there is to create frameworks, rules, and principles for decision making. When your team wants to do something and is confused about how to get there, they simply fall back on the principles and use them. For example, take hiring. A clear-cut hiring framework that covers all aspects, from what questions to ask, to how many rounds of interviews, to how to reject or accept candidates, to what qualities to look for, aids the hiring decision process. Having this in place lets teams make hiring decisions on their own.
Also, when taking calls, openly articulate your thought process. Make it clear what assumptions you made, what questions you asked, what data you looked at, and what trade-offs you accepted. Laying out in the open the way you arrived at a decision helps others traverse the same path on their own the next time.
To summarise, instead of just taking calls on behalf of others, go the extra mile to create a framework that enables them to do this independently the next time. Also, laying out the decision-making process in the open gives everybody a chance to peek at your thought process so that they can borrow it later.
Sapiens, the book, gives a fantastic perspective on the context in which today’s religions, societies, and social practices evolved, and how, in the current context, a lot of these are irrelevant. One of the core ideas presented in the book is that humanity, during its evolution, favored social stability over individual liberty because trust was necessary for human advancement, and the basis of this trust was a common belief in the same God and shared social practices. Today, science and technology, as well as robust social institutions and ideas like democracy, liberalism, and capitalism, form the basis of trust. In short, the old context no longer holds, but we continue with age-old practices and beliefs. We can draw a parallel between this and the way organizations blindly adopt technology, frameworks, and processes from other places without understanding the context in which these evolved.
In my professional life, I have heard a lot along these lines: Netflix does microservices, let us also do that; Google and Facebook subject interview candidates to data structure and algorithm questions, let us adopt the same. Embracing something without understanding the context is a recipe for disaster. As a thought experiment, let us take microservices. Microservices evolved in tech organizations with complex products handled by multiple independent teams craving autonomy and control without stepping on each other’s toes. Also, for microservices to succeed, you need to put a lot of effort into alerting, monitoring, orchestration, and devops. Without these, microservices are a bomb waiting to explode.
When borrowing technology and processes from other places, one needs to put significant effort into understanding the context around which these evolved and also the required pre-conditions for these. Blind adoption usually leads to unmitigated disaster.
Have you observed the way Google Maps asks for info about local joints and places? It is worded in such a manner that it sounds like you are helping others make an informed decision, along the lines of “Give us more info to help others”. What they are doing, in effect, is appealing to the altruism in all of us to generate more info and make their product better.
I think this is a great way to ask for more data in this world of user-generated content. Instead of asking users to review a restaurant, how about wording it as “Review this place so that others can discover great food”? Instead of asking people to rate your app, how about saying “Help your friends discover the app on the Play Store, rate us”? It would be interesting to A/B test this and see the results.
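Evaluating such an A/B test could be as simple as a two-proportion z-test on the conversion rates of the two wordings. A minimal sketch, with entirely made-up counts for the two prompt variants:

```python
# Sketch: two-proportion z-test (normal approximation) comparing the
# conversion rate of two prompt wordings. All counts are hypothetical.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates, using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: "Rate us"; variant B: "Help your friends discover the app".
z, p = two_proportion_z(120, 4000, 165, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers the altruistic wording converts at 4.1% versus 3.0%, and the small p-value would suggest the difference is unlikely to be noise.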