Wild Wild World of External Calls

Today, while developing software, external calls are a given—your code talks to external HTTP services, databases, and caches. These communications happen over networks that are fast and work well most of the time. But once in a while, networks show their true colors—they become slow, congested, and unreliable. External services themselves can get overloaded, slow down, and start throwing errors. The code one writes to interface with external services should stand steady under these circumstances.


In this post, I will go through some of the basics one should keep in mind while calling external services. I will use the Python Requests library to demonstrate this with external HTTP calls. The concepts remain almost the same irrespective of the programming language, library, or the kind of external service. This post is not a Python Requests tutorial.


I have created a Jupyter Notebook so that you can read and run the code interactively. Click here, then click on the file WildWildWorldOfExternalCalls.ipynb to launch the Jupyter Notebook. If you are not familiar with executing code in a Jupyter Notebook, read about it here. You can find the Notebook source here.


Let us call api.github.com using Requests.

import requests
r = requests.get("https://api.github.com")


External calls happen in two stages. First, the library establishes a socket connection with the server and waits for the server to accept it. Then, it sends the request and waits for the server to respond. In either of these stages, the server might choose not to respond. If you do not handle this scenario, you will be stuck indefinitely, waiting on the external service.

Timeouts to the rescue. Most libraries have a default timeout, but it may not be what you want.

import requests
r = requests.get("https://api.github.com", timeout=(3.2, 3.2))


The first element in the timeout tuple is the time we are willing to wait to establish a socket connection with the server. The second is the time we are willing to wait for the server to respond once we make a request.

Let us see the socket timeout in action by connecting to api.github.com on a random port. Since the port is (hopefully) not open, the server will not accept the connection, resulting in a socket timeout.

import requests
from timeit import default_timer as timer
from requests import exceptions as e

start = timer()
try:
    requests.get("https://api.github.com:88", timeout=(3.4, 20))
except e.ConnectTimeout:
    end = timer()
    print("Time spent waiting for socket connection -", end - start, "Seconds")

start = timer()
try:
    requests.get("https://api.github.com:88", timeout=(6.4, 20))
except e.ConnectTimeout:
    end = timer()
    print("Time spent waiting for socket connection -", end - start, "Seconds")

The output.

Time spent waiting for socket connection – 3.42826354 Seconds
Time spent waiting for socket connection – 6.4075264999999995 Seconds

As you can see from the output, Requests waited till the configured socket timeout to establish a connection and then errored out.

Let us move on to the read timeout.

We will use the httpbin service, which lets us ask for an artificially delayed response.

import requests
from timeit import default_timer as timer
from requests import exceptions as e

try:
    start = timer()
    r = requests.get("https://httpbin.org/delay/9", timeout=(6.4, 6))
except e.ReadTimeout:
    end = timer()
    print("Timed out after", end - start, "Seconds")


The output.

Timed out after 6.941002429 Seconds

In the above, we are asking httpbin to delay the response by 9 seconds. Our read timeout is 6 seconds. As you can see from the output, Requests timed out after 6 seconds, the configured read timeout.

Let us change the read timeout to 11 seconds. We no longer get a ReadTimeout exception.

import requests
r = requests.get("https://httpbin.org/delay/9", timeout=(6.4, 11))


A common misconception about the read timeout is that it is the maximum time the code spends receiving or processing the response. That is not the case. The read timeout is the time the client waits for the first byte of the response after sending the request. After that, if the server keeps on responding for hours, our code will be stuck reading the response.

Let me illustrate this.

import requests
from timeit import default_timer as timer
from requests import exceptions as e

start = timer()
r = requests.get("https://httpbin.org/drip?duration=30&delay=0", timeout=(6.4, 6))
end = timer()
print("Time spent waiting for the response – ", end - start, "Seconds")

The output.

Time spent waiting for the response – 28.210101459 Seconds

We are asking httpbin to send data for 30 seconds by passing the duration parameter. The Requests read timeout is 6 seconds. As evident from the output, the code spends far more than 6 seconds on the response.

If you want to bound the processing time to 20 seconds, you will have to use a thread/process and stop the execution after 20 seconds.

import requests
from multiprocessing import Process
from timeit import default_timer as timer

def call():
    r = requests.get("https://httpbin.org/drip?duration=30&delay=0", timeout=(6.4, 20))

p = Process(target=call)
start = timer()
p.start()
p.join(timeout=20)
p.terminate()
end = timer()
print("Time spent waiting for the response – ", end - start, "Seconds")

The output.

Time spent waiting for the response – 20.012269603 Seconds

Even though the server keeps sending the HTTP response for 30 seconds, our code terminates the call after 20 seconds.

In many real-world scenarios, we call an external service multiple times in a short duration. In such a situation, it does not make sense to open a new socket connection for every call. We should open the socket connection once and re-use it for subsequent calls.

import requests
import logging

logging.basicConfig(level=logging.DEBUG)
for _ in range(5):
    r = requests.get('https://api.github.com')


The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

As you can see from the output, Requests started a new connection for each call; this is inefficient and hurts performance.

We can prevent this by using HTTP Keep-Alive, as below. Using a Requests Session enables this.

import requests
import logging

logging.basicConfig(level=logging.DEBUG)
s = requests.Session()
for _ in range(5):
    r = s.get('https://api.github.com')

The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

Now, Requests established the socket connection only once and re-used it subsequently.

In a real-world scenario, where multiple threads call external services simultaneously, one should use a connection pool.

import requests
from requests.adapters import HTTPAdapter
import threading
import logging

logging.basicConfig(level=logging.DEBUG)

s = requests.Session()

def call(url):
    s.get(url)

s.mount("https://", HTTPAdapter(pool_connections=1, pool_maxsize=2))

t0 = threading.Thread(target=call, args=("https://api.github.com", ))
t1 = threading.Thread(target=call, args=("https://api.github.com", ))
t0.start()
t1.start()
t0.join()
t1.join()

t2 = threading.Thread(target=call, args=("https://api.github.com", ))
t3 = threading.Thread(target=call, args=("https://api.github.com", ))
t2.start()
t3.start()
t2.join()
t3.join()


The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (2): api.github.com:443
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496
DEBUG:urllib3.connectionpool:https://api.github.com:443 “GET / HTTP/1.1” 200 496

As we have created a pool of size two, Requests created only two connections and re-used them, even though we made four external calls.

Pools also help you play nice with external services, which have an upper limit on the number of connections a client can open. If you breach this threshold, external services start refusing connections.

When calling an external service, you might get an error. Sometimes, these errors might be transient. Hence, it makes sense to re-try. The re-tries should happen with an exponential back-off.

Exponential back-off is a technique in which clients re-try failed requests with increasing delays between the re-tries. Exponential back-off ensures that the external services do not get overwhelmed, another instance of playing nice with external services.
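To make the idea concrete, here is a hand-rolled sketch of exponential back-off. It is purely illustrative; the function name, delay values, and retry count are my own and not part of Requests.

import time
import requests

def get_with_backoff(url, attempts=3, base_delay=0.5):
    # Purely illustrative: retry a GET with exponentially increasing delays.
    for attempt in range(attempts):
        try:
            response = requests.get(url, timeout=(3.2, 3.2))
            if response.status_code < 500:
                return response
        except requests.exceptions.RequestException:
            pass
        if attempt < attempts - 1:
            # Wait 0.5s, then 1s, then 2s, ... between attempts.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("All attempts failed")

In practice, you do not have to write this yourself; urllib3's Retry, plugged into Requests through HTTPAdapter, does it for you, as below.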

import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter
import logging

logging.basicConfig(level=logging.DEBUG)

s = requests.Session()
retries = Retry(total=3,
                backoff_factor=0.1,
                status_forcelist=[500])
s.mount("https://", HTTPAdapter(max_retries=retries))
s.get("https://httpbin.org/status/500")


The output.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org:443
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=1, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0
DEBUG:urllib3.util.retry:Incremented Retry for (url=’/status/500′): Retry(total=0, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Retry: /status/500
DEBUG:urllib3.connectionpool:https://httpbin.org:443 “GET /status/500 HTTP/1.1” 500 0

In the above, we are asking httpbin to respond with an HTTP 500 status code. We configured Requests to re-try thrice, and from the output, we can see that Requests did just that.

Client libraries do a fantastic job of abstracting away the flakiness of external calls and lull us into a false sense of security. But all abstractions leak at one time or another. These defenses will help you tide over the leaks.

No post on external services can be complete without talking about the Circuit Breaker design pattern. The Circuit Breaker pattern helps one build a mental model of many of the things we talked about and gives us a common vocabulary to discuss them. Most programming languages have libraries for implementing Circuit Breakers. I believe Netflix popularised the term with its Hystrix library.
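If you are curious about the mechanics, here is a highly simplified, hypothetical sketch of a circuit breaker; the class, thresholds, and behaviour are illustrative, and real libraries offer far more (thread safety, half-open probes, metrics).

import time
import requests

class CircuitBreaker:
    # Toy example: after max_failures consecutive failures, the circuit
    # "opens" and calls fail fast until reset_after seconds have passed.
    def __init__(self, max_failures=3, reset_after=30):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, url):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("Circuit open - failing fast")
            self.opened_at = None  # allow a trial call through
        try:
            response = requests.get(url, timeout=(3.2, 3.2))
            response.raise_for_status()
        except requests.exceptions.RequestException:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return response

The point of the pattern is to fail fast when an external service is struggling, instead of piling more requests onto it.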

Get articles on coding, software and product development, managing software teams, scaling organisations and enhancing productivity by subscribing to my blog

Image by RENE RAUSCHENBERGER from Pixabay

Centralization and Decentralization

Top management loves centralization. Rank and file prefer decentralization.

Why?

Imagine you are the CEO of a company with multiple teams. 

Teams need software to do their work. When the need for software arises, someone from each team talks to the software company, negotiates a price, and procures the software.

As a CEO, you observe this and see it as a duplication of effort – a waste of time, energy, and resources. You think you can improve efficiency by centralizing the software procurement process. Only one team will be doing the work – the software procurement team. Also, this team will be able to negotiate a better price due to multiple orders, remove redundancy, manage licenses better, and block unnecessary software spending.

Since software cost is a real expense, you can quantify the gain from this exercise.


What about the downside?

Earlier, each team could independently procure the software they saw fit. Now, the individual teams have to go through the centralized procurement team and justify the need; this leads to back-and-forth and delays. The delays affect the cadence of work, leading to employee dissatisfaction. Employee dissatisfaction leads to reduced quality of work, which in turn negatively affects the bottom line.

It is not easy to quantify the second-order effects of centralization; sometimes it is impossible.

The CEO, due to the broad nature of her work, sees the duplication everywhere. She also witnesses the expenses as a result of this; it is in her face. She wants to eliminate this and bring efficiency and cost-saving to the organization. Hence, she champions centralization. 

The rank and file are hands-on; they have to deal with management policies to do their work. They experience the second-order effects of centralization day in and day out. They instinctually develop an anti-centralization spidey sense.

Unlike the rank and file, the CEO does not have a ringside view of the second-order side effects of centralization. The rank and file do not see the duplication the CEO sees because they do not have her expansive view.

Centralization efforts have a quantifiable impact. If not entirely measurable, you can do some mental gymnastics to get an idea.

The downsides of centralization are unquantifiable. The unquantifiable plays a crucial role in success, sometimes much more than the quantifiable.

Morgan Housel calls this the McNamara Fallacy.

McNamara Fallacy: A belief that rational decisions can be made with quantitative measures alone, when in fact the things you can’t measure are often the most consequential. Named after Defense Secretary McNamara, who tried to quantify every aspect of the Vietnam War.

Let us flip the earlier scenario. Imagine that the centralized procurement team does bring in efficiency and reduce cost, albeit at a minor loss of productivity. The overall software procurement expense is never on the mind of the rank and file; they do not look at it as closely as the CEO; it is not in their face. Hence, the rank and file still view centralization as a bane, even when it brings advantages.

The consensus is that a decentralized way of working trumps a centralized approach; this applies to the military too. Jocko Willink, a former US Navy SEAL, champions decentralized command.

There are valid cases for centralization, especially when the talent required to do something is in short supply and there are legitimate gains to be had from economies of scale. But when you centralize, think hard about the unquantifiable second-order effects of the decision.

Get articles on coding, software and product development, managing software teams, scaling organisations and enhancing productivity by subscribing to my blog

 

Working Hard To Be Lazy

The programming world heralds laziness as one of the virtues of a programmer.

Larry Wall, the creator of Perl, says – Most of you are familiar with the virtues of a programmer. There are three, of course: laziness, impatience, and hubris.

What no one tells you is that this laziness does not come for free; one has to work hard to imbibe this trait.

 


 

In practical terms, what does being lazy translate to?

  1. Doing as little as possible, never more than needed.
  2. Instead of doing things yourself, delegating to well-established tools, libraries, and frameworks.

Let us work with some concrete examples.

You want to parse a CSV file.

You think: let me load the file, parse it line by line, and split each line on a comma. You roll up your sleeves and code this. You feel smug having solved the problem yourself without anyone’s help.

Trouble starts when the CSV you parse has a header. Now you add an if condition to detect the first line. Later, someone uploads a CSV separated by a tab instead of a comma. You add another if condition to accommodate this. Another person uploads a CSV that has quoted fields. You start doubting yourself and asking how many such “unknown unknowns” there are when it comes to parsing a CSV.

Unknown unknowns are risks that come from situations that are so unexpected that they would not be considered.

CSV parsing might have a lot of “unknown unknowns” for you – a person who is not well versed with the intricacies of CSV format. But there are experts out there who know the CSV format and have written libraries to handle all the edge cases and surprises that it might throw. You hedge your “unknown unknown” risk by delegating the CSV parsing to one of these libraries.
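For instance, Python ships with a csv module that already understands headers, delimiters, and quoted fields. Here is a minimal sketch, assuming a tab-separated file whose name is made up for illustration:

import csv

# Delegate the parsing; the csv module handles headers, delimiters,
# and quoted fields for you.
with open("uploads.tsv", newline="") as f:
    reader = csv.DictReader(f, delimiter="\t")  # header row becomes the dict keys
    for row in reader:
        print(row)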

In short, be lazy, do as little as possible, and delegate to well-established libraries.

“Fools say that they learn by experience. I prefer to profit by others’ experience.”

― Otto von Bismarck

Let us consider another scenario.

You want to store a counter in a database. One approach is: when you want to increment the count, you get the current count from the database, add one to it and store the new count back in the database.

Do you see the problem with this approach?

What if many threads are doing this in parallel? You will end up with a wrong count. A better approach is to delegate the task of incrementing the count to the database by leveraging SQL’s arithmetic operators. This approach makes the counter increment atomic. Many threads trying to increment the count is no longer a concern.
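As an illustration, here is a minimal sketch using SQLite from Python; the table and column names are made up, and the same idea applies to any SQL database:

import sqlite3

conn = sqlite3.connect("counters.db")
conn.execute("CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, count INTEGER)")
conn.execute("INSERT OR IGNORE INTO counters VALUES ('page_views', 0)")

# The read-modify-write happens atomically inside the database,
# instead of fetching the count, adding one, and writing it back.
conn.execute("UPDATE counters SET count = count + 1 WHERE name = 'page_views'")
conn.commit()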

By doing less yourself and delegating the task of incrementing the counter to the database, you have saved yourself from bugs.

Why is this hard work?

This sort of thinking does not come easy; you have to work hard to identify what you can delegate, where, and to whom.

The Dunning–Kruger effect might have a role to play in this. We believe we are the experts, best suited to do everything ourselves.

In the field of psychology, the Dunning–Kruger effect is a cognitive bias in which people assess their cognitive ability as greater than it is. It is related to the cognitive bias of illusory superiority and comes from the inability of people to recognize their lack of ability.

While coding, most of the time, you are solving a problem that someone else has already solved, probably in a different context. Be aware of your biases and always ask: Is this something I have to code myself, or can I offload it to an already written, well-established, and well-tested library or framework?

“Learn from the mistakes of others. You can’t live long enough to make them all yourself.”

― Eleanor Roosevelt

Get articles on coding, software and product development, managing software teams, scaling organisations and enhancing productivity by subscribing to my blog

Image by Clker-Free-Vector-Images from Pixabay

The Million Dollar Question

What is the point of life?

All of us have pondered over this question. Luminaries have devoted their lives to the pursuit of an answer. Philosophers have written voluminous texts trying to answer it.

I am no Yogi, but that does not disqualify me from trying to answer this profound question. Beware, my answer might leave you with a feeling of meh.

During a holiday, a group of friends and I played a weird game of football. We were randomly dribbling the ball, passing, and tackling each other – no teams, rules, goals, or referees. This pointless pursuit of the ball was fun.

What is the difference between kids and adults?


Kids involve themselves in pointless pursuits. They are always engaged in one activity or another. These activities consume them. We, the self-critical adults, try to see a point in everything. Few things consume us.

Give a cardboard box to a kid. She can keep herself occupied with the box for hours—an adult dreads the thought of this.

When a child is young, she loves to draw irrespective of whether she is good at drawing or not. As she grows older, she pursues drawing only if she finds herself good at it. Enter adulthood, she becomes self-critical and continues her hobby only if she sees a point in it.

As an adult, try to remember the last time you were engaged in and consumed by a pointless activity.

A child actively indulges in role-play, creating stories in her head and acting them out. An adult passively watches role-play in TV series and movies. A child plays a variety of games. An adult passively enjoys sports, watching others play.

As we age, we move from an active to a passive life. We try to seek a point in everything.

A child has no time to search for meaning. She is busy indulging herself in everything. The activity is the end; it is not a means to an end. I believe the same goes for life.

The point of life is not to search for meaning but to indulge in it. It is a pointless existence, and there is a joy to be had in understanding this. It is liberating.

Get articles on coding, software and product development, managing software teams, scaling organisations and enhancing productivity by subscribing to my blog

Photo by Emily Morter on Unsplash