Does code quality matter?

What role does code quality play in the outcome of a business?

"I know it when I see it," a US Supreme Court justice famously said of pornography. Quality code is the same: you know it when you see it. It is challenging to define what quality code is, and tough to come up with quantifiable metrics for it. But the instant you see quality code, you know it.

Another reason quality code is difficult to verbalize is that it lies in the eyes of the beholder. One person's high quality is another person's low quality.

Whenever you talk of quality code, a question pops up: how important is code quality to the success of a business (startup)?


I have wrestled with this question for ages. After spending years meditating on it under a Bodhi tree in the Silicon Valley of India, Bangalore, I have arrived at an enlightened (also flippant) answer: you cannot quantify it, but it matters for sure.

Whenever you talk of quality code and business success, someone usually points to a business that succeeded despite horrible code.

Businesses are messy. Code is only a part of the story. There are other things that matter to a business’s success. It would be specious to claim a business succeeded despite bad code.

Many businesses that succeed with lousy code are in markets so good, and have their timing so right, that they would have hit a home run anyhow. With quality code under their belt, the journey to the podium would have been more pleasant.

A parallel I can think of is the importance of good health and habits. Conventional wisdom says that healthy habits keep you disease-free and lead to a long life. I can always point to a person who smoked and drank her way to a ripe old age. Conventional wisdom says that good habits lead to success. I can always point to a successful person with terrible habits.

Does it mean that good health and habits are immaterial?

Another problem with code quality is that you see its benefits gradually. It is a compounding effect.


The human brain finds it difficult to grasp compound interest. Albert Einstein is said to have called compound interest the eighth wonder of the world: he who understands it, earns it; he who doesn't, pays it.

Compounding is tail-heavy. During the initial days, it does not make your eyes pop. As time goes on, compounding gains momentum, reaching a crescendo at the end. It is the same with quality code.

Good code compounds positively. Bad code compounds negatively. Bad code gradually drags your business down, making it slow and sluggish.
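A toy calculation makes this tail-heaviness concrete; the 1% daily gain and drag below are illustrative numbers, not a measurement of code quality.

# Illustrative only: a 1% daily gain vs. a 1% daily drag, compounded.
for days in (30, 180, 365):
    gain = 1.01 ** days  # good code compounding positively
    drag = 0.99 ** days  # bad code compounding negatively
    print(f"{days} days: gain {gain:.2f}x, drag {drag:.2f}x")

After a year, the gain is roughly 38x, while the drag has decayed to almost nothing; most of the movement happens in the last stretch.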


Photo by Volodymyr Hryshchenko on Unsplash

Communication Architecture

Organizations do not pay attention to their internal communication architecture. Internal communication evolves organically. Deliberately designing it makes a difference.

By internal communication architecture, I mean:

  1. How does information flow?
  2. How do team members communicate with each other?
  3. When do they communicate?
  4. What is the medium they use for communication?

A decoupled (push-based or broadcast) structure for communication works best.


Guiding principles for a decoupled communication structure:

  1. People should not be polling each other for information.
  2. There should be a specific place for information lookup.
  3. There should be pre-defined contracts for the above.

Let us go through an example. 

Imagine an organization with a development (dev) team and a quality assurance (QA) team. The dev team deploys a build for testing. After the deployment, the QA team starts testing.

One way for the QA team to know the deployment status is to poll the dev team periodically and ask whether they have deployed the build.

Another way is to create a contract for the dev team to send a Slack message in a channel once they deploy the build. 

The latter broadcasting style of information dissemination is decoupled. No one has to poll each other for information. As long as the dev team adheres to the contract, and the QA team knows the place to look for this information, it works.

A simple test to figure out your organization’s communication structure:

If a person asks you for information, and you redirect them to a person instead of telling them the steps to find the information, your organization practices the polling style of communication. 

Polling Based:

Hey, how can I get this report?

You can ask Shyam to generate it for you.

Broadcast based:

Hey, how can I get this report?

Add yourself to this email group, and you will receive it regularly.

Polling based communication has the following downsides:

  1. It is anxiety-inducing for the person seeking the information.
  2. It irritates the person who is supposed to give the information.
  3. It does not scale as the team grows.
  4. All the above lead to unnecessary confusion and aggravation.

Push-based communication leads to automation. In the dev-QA example, the dev team can automate the publishing of the Slack message, as in the sketch below.
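A minimal sketch of that automation, assuming a hypothetical Slack incoming-webhook URL for the channel; the URL, build id, and function name are illustrative.

import requests

# Hypothetical incoming-webhook URL for the deployments channel.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def announce_deployment(build_id, environment):
    # The last step of the deploy script: broadcast, do not wait to be polled.
    message = f"Build {build_id} deployed to {environment}. QA can start testing."
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

announce_deployment("1.4.2", "staging")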

For the push model to work, everyone needs to adhere to the established contract. If one does not do that, the system collapses. 

Communication forms the cornerstone of organizational culture. Internal communication can make or break organizations.


Photo by Franck V. on Unsplash

When Not to Abstract

Software developers love abstractions. It is satisfying to hide the perceived ugliness and complexity of the underlying system behind a so-called easy-to-use, beautiful interface. Also, abstractions are considered good design practice.

The abstractions that I am referring to are those along the lines of structural design patterns like adapter and facade. ORM is a classic example.

The flip side of abstractions is the complexity they bring. Abstractions have bitten every software developer at one point or another. Cursing ORMs is a rite of passage for web developers.

Poorly written abstractions cause more problems than they solve. Not to mention the countless hours wasted in designing, writing, and maintaining them.

Knowing when not to abstract is as vital as knowing when to abstract. 


When not to abstract?

  1. The abstraction does not add value.
  2. The underlying thing that you are abstracting is not a standard and is rapidly evolving.
  3. You are prototyping.
  4. You are short on time.

Before writing an abstraction, always introspect: what am I solving with this abstraction? What value does it add?

Abstractions work well when they are written over components adhering to well-defined, comprehensive standards and specifications. For example, SQL is a well-defined standard. SQL might evolve, but slowly. Hence, ORMs are ubiquitous and work well for a majority of use cases.

When you try to abstract a non-standard, rapidly evolving platform, it becomes a game of catch-up, with drastic design changes each time the underlying platform changes. Since you cannot know the direction in which the platform will evolve, every such change is disruptive to the users of your abstraction.

When you are prototyping, concentrate on proving the idea as quickly as possible. Do not waste time writing layers of abstraction, which you might throw away in the future if the concept does not work.

Abstractions that add value require deep focus and ample time to design. If you are short on time, you will do a lousy job with the abstraction. It will cause more problems than it solves.

In all the above cases, instead of structural abstractions, concentrate on utility functions. Writing utility functions to simplify recurring patterns will give you more bang for the buck; see the sketch below.
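A hedged sketch of what this looks like, assuming a recurring fetch-and-decode pattern; fetch_json and its parameters are illustrative, not from this post. It is a plain function: no interface to design, easy to inline or delete.

import requests

def fetch_json(url, retries=3, timeout=5.0):
    # Utility function for a recurring pattern: GET a URL, retry on
    # failure, decode JSON. No class hierarchy, no layer to maintain.
    last_error = None
    for _ in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as error:
            last_error = error
    raise last_error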


Photo by Lucas Benjamin on Unsplash

Fighting FUD

FUD stands for fear, uncertainty, and doubt. FUD is the strategy of influencing perception by spreading false and dubious information. Fighting FUD takes energy, leaving no steam for real work.

Marc Andreessen, a Silicon Valley venture capitalist, recently wrote a post saying: It's Time to Build. The gist of it: in the US, people are no longer innovating and building core infrastructure in health, banking, transportation, finance, education, etc. The article is a call to arms to get back to building ambitious foundational projects.

While Marc Andreessen’s writing is in the context of the US, it is true, albeit to a smaller extent, in India too.


What has led us to this?

We were in a homestay on a green estate that had a stream running through the plantation. The owner of the property told us that the government allows him to build and operate a compact hydroelectric plant on the stream, but he does not want to. He said that as soon as he constructed the plant, environmentalists would raise a hue and cry and create a ruckus, and he would end up spending all his energy on that front, leaving little for anything else.

The above is anecdotal, but you see a parallel whenever our government announces big-ticket, ambitious projects. There is always a cacophony of protests. We live in an age where everyone has a strong opinion on everything, and if a person is creative with words, she can multiply her reach, thanks to technology. Nowadays, spreading FUD is just a click away. Plus-oneing the protest satisfies the modern age's thirst for wokeness and fuels the FUD. In such a situation, a government that lives under the Damocles sword of public opinion gets into perception management mode, leaving little energy for solving problems.

Organizations are not immune to FUD. As far back as 1944, US intelligence published a manual with tactics to sabotage workplace productivity. The manual was a field guide to be used against the Axis powers in the world war.

Some gems from this manual:

  • Never permit shortcuts to expedite decisions.
  • Talk frequently and at great length.
  • Refer all matters to committees.
  • Bring up irrelevant issues.
  • Haggle over precise wordings.
  • Advocate caution.

The above is how one creates an environment of FUD. One can see such behaviors to varying degrees at workplaces. In a culture where this is extreme, fighting FUD and perception management replace work. Once this culture takes hold, it is impossible to weed out; productivity nosedives to zero.


Photo by Snapwire from Pexels.

The Three Pillars of Scalability

The three pillars of scalability are statelessness, idempotency, and coding to interfaces.

If you keep the above three in mind, your application can scale a long way with your users. Of course, I am not implying these are the only three things to keep in mind while designing scalable applications.


Statelessness:

If an application does not store persistent state locally, one can scale it by adding servers.

Let us take the example of an application that requires users to sign in. Once a user signs in, the application has to remember that this particular user has logged in. You have the option of storing the logged-in state of the users in the application servers’ memory. When a subsequent request comes, the application looks up in its memory and acts accordingly.

If you are following the above scheme, you are storing the persistent state locally—in servers’ memory. The upside of this approach is its simplicity. The downside is that you cannot elastically scale the application by dynamically adding and removing application servers based on the load.

To figure out whether your application is stateless or not, ask the question: If the next request landed on a different instance of the server, will my operation fail? If the answer is yes, the application is not stateless.
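A minimal sketch of keeping the logged-in state out of the servers' memory, assuming a shared Redis instance; the names and the one-hour expiry are illustrative.

import uuid
import redis

store = redis.Redis(host="localhost", port=6379)

def log_in(user_id):
    # Persist the logged-in state in a shared store, not process memory.
    session_token = str(uuid.uuid4())
    store.setex(f"session:{session_token}", 3600, user_id)
    return session_token

def authenticated_user(session_token):
    # Any server instance can answer this lookup, so the next request
    # can land anywhere.
    user_id = store.get(f"session:{session_token}")
    return user_id.decode() if user_id else None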

Idempotency:

An operation is said to be idempotent if it produces the same result when executed multiple times.

Example:

a, b = 1, 2

a + b is idempotent: irrespective of how many times you evaluate it, the result is always 3.

a += 1 is not: each time you execute it, a has a different value.

If your application is idempotent, you can retry failed requests. 

Applications can fail momentarily, especially under load. When this happens, ideally, you should retry the failed request. But you can do this only if the application is idempotent. With idempotency, you do not have the unintended side effect of retrying a request.

You are trying to create a user. You hit the user-creation API. For some reason, you do not get a response; this could be due to anything: a temporary network glitch, an application error, or something else. The bottom line is that you are not sure whether the user was created. If the application is not idempotent, you cannot retry the request; you might end up creating multiple users with the same identity. Not so if the application is idempotent: you can retry with abandon, as in the sketch below.
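A hedged sketch of an idempotent user-creation operation; the in-memory dictionary stands in for a database, and the names are illustrative.

users_by_email = {}

def create_user(email, name):
    # Creating the same user twice returns the existing record instead
    # of duplicating it, so clients can retry after an ambiguous failure.
    existing = users_by_email.get(email)
    if existing is not None:
        return existing
    user = {"email": email, "name": name}
    users_by_email[email] = user
    return user

first = create_user("ada@example.com", "Ada")
retry = create_user("ada@example.com", "Ada")  # safe to retry
assert first == retry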

Coding to interfaces:

Coding to interfaces lets us swap components.

You are using a cache in your application. Instead of using the cache provider's API directly, you hide it behind an interface of your own. In the future, when you have a deluge of users, if you find the cache lacking, you can swap it for a more performant one without incurring a huge maintenance cost. You can do this only if you decouple your application from the specific cache provider's API and abstract it away, as in the sketch below.
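A minimal sketch of hiding the cache behind your own interface; Cache and InMemoryCache are illustrative names, and the real provider could be anything.

from abc import ABC, abstractmethod

class Cache(ABC):
    # The interface the application codes against.
    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def set(self, key, value): ...

class InMemoryCache(Cache):
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

# The application depends only on Cache; swapping providers means
# writing one new subclass, not touching every call site.
cache = InMemoryCache()
cache.set("greeting", "hello")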

Conclusion:

It is tough to foresee scalability problems. Following the above generic principles will help you to develop adaptable applications that you can cheaply scale while buying time to create sophisticated scaling strategies specific to your needs.



How do I Know I am Right?

TL;DR: there is no way.

One thing the Coronavirus crisis has made vivid is that no one knows anything for sure. The fact that no one knows anything bubbles up every time there is a crisis. This time though, due to the severity of the mess, it is stark; in your face.

Experts used to say that eating fat is unhealthy. Now, not so much. Not long ago, scientists believed that the adult human brain is static: once we enter adulthood, our intelligence stops improving. Today, everyone talks about brain plasticity and how the brain keeps growing with the right input even in adulthood and adapts well into old age. The scientific community is staring at a replication crisis: researchers are not able to consistently reproduce experimental results. The Marshmallow experiment, one of the most cited psychology experiments, is in doubt.


When I started putting my thoughts in public, I was hesitant. I always had a voice in the back of my mind asking: How do you know you are right? I face the same when someone comes to me for advice. I am guarded with my advice.

How do I reconcile with this?

I have benefitted immensely from the thoughts of others. I am thankful to all the people who take the pain to put their ideas in front of everyone, especially in the current environment where trolling is a given. Today, it is fashionable to call anyone and everyone a virtue signaller. Thankfully, I have not gone through the trolling experience, as I have a tiny audience.

What is wisdom?

Wisdom is knowledge synthesized with life experiences. By studying, observing, and paying attention, one gains knowledge. One accumulates life experiences by doing. When you mesh the two together and contemplate, you gain wisdom. If no one broadcasted their thoughts, the world would be a sad place.

Strong opinions, Weakly held

An excellent framework for better thinking is: Strong opinions, weakly held. The idea is not to be married to your views. In the face of disconfirming evidence, update your beliefs. Interestingly, even this maxim is under debate.

There is no way for you to be a hundred percent sure of anything; this applies when you give and receive advice. The best you can do is color your knowledge with your life experiences and share it with others in the hope that the other person takes something positive out of it: a small way for you to give back to society.

Always be skeptical.



Should I or Should I Not

This post walks you through a framework for adopting new technologies. Microservices is a placeholder in this post. It is a generic framework that you can apply to any new technology that you are planning to adopt.

Should we do microservices?

The above question plagues the minds of software developers.

Popular programming culture made microservices the de facto way to build software. Now, many are second-guessing their choice.

Here is a post from Segment on why they consolidated their microservices into a monolith.

Microservices is archetypal of software trends following the Hype Cycle. In the Hype Cycle, microservices has passed the "Peak of Inflated Expectations."

Hype Cycle

A framework for making new technology choices

Before adopting any new technology, you have to:

  1. Clearly define the problem you are trying to solve with the novel technology.
  2. Understand how the new technology solves the problem.
  3. Build perspective by studying the evolution of the technology.
  4. List the supporting structures needed to make the new technology work.


The above may sound meh, but taking the pains to define them to a T is the key to the success of new technology adoption.


Clearly define the problem you are trying to solve

Nailing down the problem is the first step. You would be surprised by the number of people who try to solve a problem without defining it formally.

When the problem that you are trying to solve is vague, it becomes tough to find a solution to it. How many times has it happened to you that you describe a problem to someone, and in the process of doing so, you get closer to the solution?

Clearly define the problem that you are trying to solve with microservices. Is it a performance problem with the application? Are you trying to increase the productivity of the team with microservices?

When you do this, sometimes you find non-disruptive ways to solve the problem. Better communication between teams might be the solution, not microservices.

Understand how the new technology solves the problem

Understanding how the new technology solves the problem will help you to evaluate it objectively. Defining the problem, as stated in the first step of the framework, is essential for this.

There are two broad reasons for microservices adoption—technical and logistical.

Technical

The application has grown in complexity and has workloads vying for different types of resources. You are not doing justice to any of these workloads by packing them into a monolith. For example, some workloads might be CPU intensive, some IO heavy, and others hungry for memory. By extracting each of these workloads into a microservice, you have the freedom to host them on different servers conducive to their demands.

The application has grown in complexity and has workloads better solved in different programming languages. Breaking the monolith into microservices gives you the ability to code them in the programming language of your choice.

Logistical

The application has grown, and so has your company. Different teams are responsible for different areas of the application. You want these teams to be independent. If you break the monolith into microservices that mimic the team structure, you will achieve this independence. These teams can work independently without stepping on each other's toes, thus being more productive.

Build perspective by studying the evolution of the technology

When you dig up the history, keep in mind that you are not going after the rigorous academic definition of the term but the cultural context of its evolution. The common definition of a term may not match its formal description. For example, when people say microservices, they are usually referring to Service-Oriented Architecture (SOA) and not microservices in particular.

Microservices exploded due to big companies like Amazon and Netflix evangelizing (maybe unintentionally) them. These companies have thousands of employees and divisions. Once you understand this and build a perspective, you will naturally ask: is this applicable to me? If you are a small startup whose tech team you can count on one hand, in all probability, the answer is no. It is tough to build this perspective without studying the evolution of the technology.

Supporting structures needed to make the new technology work

Whenever you introduce a new technology, you might have to make some changes to the way you work. Some of these changes might be inconsequential, and others extensive.

For microservices to be successful, you will have to invest in tooling. You will have to have a robust monitoring system because, with microservices, you are treading into distributed computing where failure is a given. I will stop here as this requires a post in itself.

In many circumstances, these changes might be far-reaching, negating the benefits of the new technology. Be keenly aware of this trade-off.

Summary

Doing this might sound time-consuming, but it pays off by preventing unmitigated disasters down the line once you are in the middle of adopting the new technology. Many new technology choices bomb because someone did not do the above painstakingly enough.


Hype Cycle image By Jeremykemp at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10547051.

Photo by Miguel Á. Padriñán from Pexels

Let go of Stereotypes

The key to building a great team lies in ejecting the stereotypical portrayal of the role from your mind, objectively figuring out the qualities needed for success in the role, and ruthlessly going after that.


What is the stereotype of a leader?

A charismatic extrovert who can spellbind an audience with her talk.

Leadership is not about how charismatic you are or how good you are at public speaking. Popular culture has narrowly defined leadership to be so.

In the book, The Little Book of Talent, the author Daniel Coyle writes: Most great teachers/coaches/mentors do not give long-winded speeches. They do not give sermons or long lectures. Instead, they give short, unmistakably clear directions; they guide you to a target.

What is the stereotype of a developer?

This Twitter thread does an excellent job of it.

Being a good developer is not about which editor you use or how socially awkward you are. These are urban legends devoid of any real substance.

Leaders and developers come in all shapes and sizes. Take a step back and think of all the great people you have worked with. Do they stick to the stereotypes associated with their role? Can you pigeonhole them into a mold?

The movie Moneyball is the best illustration of this line of thinking. The plot revolves around the real-life story of a manager who assembles a successful baseball team analytically, by ignoring the mythical stereotypes associated with what makes one a successful baseball player. This approach to building the team was no cakewalk; he met with resistance from all sides for his radically different line of thinking.

Peter Thiel talks of startup hiring as finding the talent that the market has mispriced (I am paraphrasing from memory).

If you stick to stereotypes while hiring and promoting, you are:

  1. Artificially restricting the available talent pool.
  2. Pursuing the same set of people that everyone else is.
  3. Going after qualities that you do not need.

We are sympathetic to underdogs, but we do not bet on them. Betting on them is the not-so-secret strategy for building a great team.



Wild Wild World of External Calls

Today, while developing software, external calls are a given: your code talks to external HTTP services, databases, and caches. These communications happen over networks that are fast and work well most of the time. Once in a while, networks show their true colors: they become slow, congested, and unreliable. Even the external services themselves can get overloaded, slow down, and start throwing errors. The code one writes to interface with external services should stand steady under these circumstances.


In this post, I will go through some of the basics one should keep in mind while calling external services. I will use the Python Requests library to demonstrate this with external HTTP calls. The concepts remain almost the same irrespective of the programming language, library, or the kind of external service. This post is not a Python Requests tutorial.


I have created a Jupyter Notebook so that you can read and run the code interactively. Click here, then click on the file WildWildWorldOfExternalCalls.ipynb to launch the Jupyter Notebook. If you are not familiar with executing code in a Jupyter Notebook, read about it here. You can find the Notebook source here.


Let us call api.github.com using Requests.
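A minimal sketch of the call; note that Requests applies no timeout by default, so a naive call can wait indefinitely.

import requests

# Naive call: no timeout configured.
response = requests.get("https://api.github.com")
print(response.status_code)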

External calls happen in two stages. First, the library asks for a socket connection from the server and waits for the server to respond. Then, it asks for the payload and waits for the server to respond. In both of these interactions, the server might choose not to respond. If you do not handle this scenario, you will be stuck indefinitely, waiting on the external service.

Timeouts to the rescue. Most libraries have a default timeout, but it may not be what you want.
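With Requests, you can pass the timeout as a tuple; the values below are illustrative.

import requests

# (socket connect timeout, read timeout), in seconds.
response = requests.get("https://api.github.com", timeout=(3, 6))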

The first element in the timeout tuple is the time we are willing to wait to establish a socket connection with the server. The second is the time we are willing to wait for the server to respond once we make a request.

Let us see the socket timeout in action by connecting to github.com on a random port. Since the port is not open(hopefully), github.com will not accept the connection resulting in a socket timeout.
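A sketch that mirrors the output below, assuming port 81 as the closed port and connect timeouts of 3 and 6 seconds.

import time
import requests

for connect_timeout in (3, 6):
    start = time.monotonic()
    try:
        requests.get("https://github.com:81", timeout=(connect_timeout, 6))
    except requests.ConnectTimeout:
        elapsed = time.monotonic() - start
        print(f"Time spent waiting for socket connection – {elapsed} Seconds")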

The output.

Time spent waiting for socket connection – 3.42826354 Seconds
Time spent waiting for socket connection – 6.4075264999999995 Seconds

As you can see from the output, Requests waited till the configured socket timeout to establish a connection and then errored out.

Let us move on to the read timeout.

We will use httpbin service, which lets us configure read timeouts.
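A sketch mirroring the output below: httpbin's /delay endpoint holds the response for 9 seconds while our read timeout is 6.

import time
import requests

start = time.monotonic()
try:
    requests.get("https://httpbin.org/delay/9", timeout=(3, 6))
except requests.ReadTimeout:
    print(f"Timed out after {time.monotonic() - start} Seconds")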

The output.

Timed out after 6.941002429 Seconds

In the above, we are asking httpbin to delay the response by 9 seconds. Our read timeout is 6 seconds. As you can see from the output, Requests timed out after 6 seconds, the configured read timeout.

Let us change the read timeout to 11 seconds. We no longer get a ReadTimeout exception.
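A sketch with the longer read timeout:

import requests

# Read timeout (11s) now exceeds the server's delay (9s); no exception.
response = requests.get("https://httpbin.org/delay/9", timeout=(3, 11))
print(response.status_code)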

A common misconception about the read timeout is that it is the maximum time the code spends receiving and processing the response. That is not the case. The read timeout is the time the client waits, between sending the request and receiving the first byte of the response from the external service. After that, if the server keeps responding for hours, our code will be stuck reading the response.

Let me illustrate this.
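A sketch mirroring the output below, using httpbin's /drip endpoint to stream a response for roughly 30 seconds; the parameter values are illustrative.

import time
import requests

start = time.monotonic()
# Stream 30 bytes spread over 30 seconds; read timeout is 15 seconds.
response = requests.get(
    "https://httpbin.org/drip",
    params={"duration": 30, "numbytes": 30},
    timeout=(3, 15),
)
print(f"Time spent waiting for the response – {time.monotonic() - start} Seconds")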

The output.

Time spent waiting for the response – 28.210101459 Seconds

We are asking httpbin to send data for 30 seconds by passing the duration parameter. Requests read timeout is 15 seconds. As evident from the output, the code spends much more than 15 seconds on the response.

If you want to bound the total time spent on the response, you will have to use a thread or a process and stop the execution once a deadline passes.
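A hedged sketch of bounding the call with a separate process; the 20-second bound matches the output below.

import time
from multiprocessing import Process

import requests

def fetch():
    requests.get(
        "https://httpbin.org/drip",
        params={"duration": 30, "numbytes": 30},
        timeout=(3, 15),
    )

if __name__ == "__main__":
    start = time.monotonic()
    worker = Process(target=fetch)
    worker.start()
    worker.join(timeout=20)  # wait at most 20 seconds
    if worker.is_alive():
        worker.terminate()  # stop the call once the deadline passes
    print(f"Time spent waiting for the response – {time.monotonic() - start} Seconds")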

The output.

Time spent waiting for the response – 20.012269603 Seconds

Even though we receive the HTTP response for 30 seconds, our code terminates after 20 seconds.

In many real-world scenarios, we might be calling an external service multiple times in a short duration. In such a situation, it does not make sense for us to open the socket connection each time. We should be opening the socket connection once and then re-using it subsequently.
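First, the connection-per-call version, with DEBUG logging turned on so that urllib3's connection handling is visible.

import logging
import requests

logging.basicConfig(level=logging.DEBUG)

for _ in range(5):
    requests.get("https://api.github.com")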

The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

As you can see from the output, Requests started a new connection each time; this is inefficient and non-performant.

We can prevent this by using HTTP Keep-Alive as below. Using Requests Session enables this.
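The same five calls through a Session, which keeps the underlying connection alive across calls:

import requests

session = requests.Session()
for _ in range(5):
    session.get("https://api.github.com")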

The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

Now, Requests established the socket connection only once and re-used it subsequently.

In a real-world scenario, where multiple threads call external services simultaneously, one should use a pool.
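A sketch of a pool of two connections shared by four concurrent calls; the pool size and thread count mirror the output below.

from concurrent.futures import ThreadPoolExecutor

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount("https://", HTTPAdapter(pool_connections=2, pool_maxsize=2))

with ThreadPoolExecutor(max_workers=4) as executor:
    for _ in range(4):
        executor.submit(session.get, "https://api.github.com")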

The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (2): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

As we have created a pool of size two, Requests created only two connections and re-used them, even though we made four external calls.

Pools also help you to play nice with external services as external services have an upper limit to the number of connections a client can open. If you breach this threshold, external services start refusing connections.

When calling an external service, you might get an error. Sometimes, these errors might be transient. Hence, it makes sense to re-try. The re-tries should happen with an exponential back-off.

Exponential back-off is a technique in which clients re-try failed requests with increasing delays between the re-tries. Exponential back-off ensures that the external services do not get overwhelmed, another instance of playing nice with external services.
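A sketch mirroring the output below, wiring urllib3's Retry into Requests: three re-tries with an exponential back-off factor, re-trying on HTTP 500.

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(total=3, backoff_factor=1, status_forcelist=[500])
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

try:
    session.get("https://httpbin.org/status/500")
except requests.exceptions.RetryError:
    print("Gave up after the configured re-tries")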

The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org:443
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=1, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=0, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0

In the above, we are asking httpbin to respond with an HTTP 500 status code. We configured Requests to re-try thrice, and from the output, we can see that Requests did just that.

Client libraries do a fantastic job of abstracting away the flakiness of external calls and lull us into a false sense of security. But all abstractions leak at one time or another. These defenses will help you tide over the leaks.

No post on external services can be complete without mentioning the Circuit Breaker design pattern. It helps one build a mental model of many of the things we talked about and gives a common vocabulary to discuss them. All programming languages have libraries to implement Circuit Breakers. I believe Netflix popularised the term with its library Hystrix.


Image by RENE RAUSCHENBERGER from Pixabay

Centralization and Decentralization

Top management loves centralization. The rank and file prefer decentralization.

Why?

Imagine you are the CEO of a company with multiple teams. 

Teams need software to do their work. When the need arises for a piece of software, someone from each team talks to the software company, negotiates a price, and procures the software.

As the CEO, you observe this and see it as a duplication of effort: a waste of time, energy, and resources. You think you can improve efficiency by centralizing the software procurement process. Only one team will do the work: the software procurement team. This team will also be able to negotiate better prices due to volume, remove redundancy, manage licenses better, and block unnecessary software spending.

Since software cost is a real expense, you can quantify the gain from this exercise.


What about the downside?

Earlier, each team could independently procure the software they saw fit. Now, the individual teams have to go through the centralized procurement team and justify the need; this leads to back and forth and delays. The delay affects the cadence of work leading to employee dissatisfaction. Employee dissatisfaction leads to reduced quality of work, which in turn negatively affects the bottom line.

It is not easy to quantify the second-order effects of centralization, sometimes impossible.

The CEO, due to the broad nature of her work, sees the duplication everywhere. She also witnesses the expenses as a result of this; it is in her face. She wants to eliminate this and bring efficiency and cost-saving to the organization. Hence, she champions centralization. 

The rank and file are hands-on; they have to deal with management policies to do their work. They experience the second-order effects of centralization day in and day out. They instinctually develop an anti-centralization spidey sense.

Unlike the rank and file, the CEO does not have a ringside view of the second-order side effects of centralization. The rank and file do not see the duplication the CEO sees because they do not have her expansive view.

Centralization efforts have a quantifiable impact. If not entirely measurable, you can do some mental gymnastics to get an idea.

The downsides of centralization are unquantifiable. The unquantifiable plays a crucial role in success, sometimes much more than the quantifiable.

Morgan Housel calls this the McNamara Fallacy.

McNamara Fallacy: A belief that rational decisions can be made with quantitative measures alone, when in fact the things you can’t measure are often the most consequential. Named after Defense Secretary McNamara, who tried to quantify every aspect of the Vietnam War.

Let us flip the earlier scenario. Imagine that the centralized procurement team does bring in efficiency and reduce cost, albeit at a minor loss of productivity. The software procurement expense as a whole is never on the mind of the rank and file; they do not look at it as closely as the CEO does; it is not always in their face. Hence, the rank and file still view centralization as a bane, even when it brings advantages.

The consensus is that a decentralized way of working trumps a centralized approach; this applies to the military too. Jocko Willink, a former US Navy SEAL, champions decentralized command.

There are valid cases for centralization, especially when the talent required to do something is in short supply, and there are legitimate gains to be had from economies of scale. But, when you centralize, think hard of the unquantifiable second-order effects of the decision.
