As engineers, we should always strive to write simple code.
One common pitfall, found in many programming languages and not just Python, is the misuse of long if-elif chains.
If you want to tackle this issue and improve your code's readability and maintainability, stick around: this article is for you!
Request Handler Use Case
A customer management system receives requests.
Each request contains an action and a customer name. There are four types of action: creating a new customer, activating a customer, suspending a customer, and deleting a customer. Each type is handled differently.
Each request is a Python dictionary:
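The original example was shown as an image and isn't preserved here; a minimal sketch of what such a request might look like (the exact field names are assumptions):

```python
# Illustrative request shape; "action" and "customer_name" are assumed field names
request = {
    "action": "create",  # one of: "create", "activate", "suspend", "delete"
    "customer_name": "Jane Doe",
}
```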
Here is a potential flow diagram that tackles this use case:
And equivalent Python code:
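The original code was shown as an image and isn't preserved here; a minimal sketch of the if-elif version, with assumed handler names such as `_handle_create_request`:

```python
# Hypothetical handlers; in a real system these would call the customer service
def _handle_create_request(name: str) -> str:
    return f"created {name}"

def _handle_activate_request(name: str) -> str:
    return f"activated {name}"

def _handle_suspend_request(name: str) -> str:
    return f"suspended {name}"

def _handle_delete_request(name: str) -> str:
    return f"deleted {name}"

def handle_request(request: dict) -> str:
    action = request.get("action")
    name = request.get("customer_name")
    if action is None or name is None:
        raise ValueError("invalid request")
    # The long if-elif chain this article argues against:
    if action == "create":
        return _handle_create_request(name)
    elif action == "activate":
        return _handle_activate_request(name)
    elif action == "suspend":
        return _handle_suspend_request(name)
    elif action == "delete":
        return _handle_delete_request(name)
    raise ValueError(f"unknown action: {action}")
```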
Can you spot the issue? There are many ifs and elifs that make the code more complex, less readable (too long!), and harder to maintain.
In addition, to handle an action the code sometimes has to fall through several if branches that won't match, which doesn't scale well as the number of actions grows (a complexity of O(n), where n is the number of actions).
Thankfully, there is a straightforward solution!
Say no to If-elif!
As you can see in the code below, the if-elif section is removed in favor of a dictionary, ACTION_MAPPING, that maps each action to its corresponding handler function.
Now, the handler for any action is selected immediately from the dictionary (an O(1) lookup) and then called. For example, for an 'activate' action, _handle_activate_request is selected.
Side note: since the input is validated first, the action is always found in the ACTION_MAPPING dictionary.
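The refactored code was also shown as an image; here is a minimal sketch of the dictionary-dispatch version, reusing the same assumed handler names:

```python
# Hypothetical handlers, as in the if-elif version
def _handle_create_request(name: str) -> str:
    return f"created {name}"

def _handle_activate_request(name: str) -> str:
    return f"activated {name}"

def _handle_suspend_request(name: str) -> str:
    return f"suspended {name}"

def _handle_delete_request(name: str) -> str:
    return f"deleted {name}"

# Map each action to its handler function
ACTION_MAPPING = {
    "create": _handle_create_request,
    "activate": _handle_activate_request,
    "suspend": _handle_suspend_request,
    "delete": _handle_delete_request,
}

def handle_request(request: dict) -> str:
    action = request.get("action")
    name = request.get("customer_name")
    # Validate first, so the action is guaranteed to exist in the mapping
    if action not in ACTION_MAPPING or name is None:
        raise ValueError("invalid request")
    handler = ACTION_MAPPING[action]  # O(1) dictionary lookup
    return handler(name)
```

Adding a fifth action now means adding one handler and one dictionary entry, with no change to `handle_request` itself.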
What is the Best Practice?
If-elif chains aren't evil; they're an essential part of the language.
However, as with everything else in life, you shouldn't overdo it.
My rule of thumb is that one if-elif is enough. If you require more, use the dictionary mapping mechanism.
And lastly, use a code complexity scanner such as Radon or Xenon. These tools can pinpoint problematic areas and help you refactor your code into a masterpiece.
Want to learn more about Serverless?
Check out my website https://www.ranthebuilder.cloud
About Me
Hi, I’m Ran Isenberg, an AWS Community Builder (Serverless focus), a Cloud System Architect, and a public speaker based in Israel.
I see myself as a passionate Serverless advocate with love for innovation, AWS, and smart home applications.
Top comments (8)
Great tip! Thank you for sharing
If you work with python 3.10, you should check out the new pattern matching!
Yup! But I think the main purpose of this tutorial is to highlight the power of hash maps over traditional conditional branching (whether using if-else or switch), given that accessing a key-value pair is O(1). Know what I'm saying?
Yes, that's right. I just wanted to add another interesting and powerful alternative for whoever reads the post. Btw, isn't dictionary access O(log(n))?
Python 3.7 and later versions introduced a major new feature when it comes to dictionaries: they are ordered by insertion order by default. This means that items added to a dictionary are stored in the same order as they were added, a bit like appending to a list, so items can be retrieved in a predictable order. Furthermore, since Python 3.8 this order is reversible, just like with collections.OrderedDict.
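A quick sketch of that ordering behavior:

```python
# Keys come back in the order they were inserted (Python 3.7+)
d = {}
d["b"] = 1
d["a"] = 2
d["c"] = 3

print(list(d))            # insertion order, not sorted order
print(list(reversed(d)))  # dicts are reversible since Python 3.8
```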
The introduction of this ordering feature has sparked debates about its implications for other features introduced since Python 3.6, such as keyword arguments (kwargs), which are now ordered, and the namespace passed to a metaclass being ordered as well.
In theory, the newer version of the dictionary also improved performance, because it uses two arrays: one contains all entries of the dictionary in insertion order (PyDictKeyEntry), and the other acts as a hash table containing the indices of those entries.
So, Python dictionaries are ordered hash tables stored in a contiguous chunk of memory, allowing for an O(1) lookup on average. This means the time it takes to look up an item is constant regardless of the number of items in the dictionary, which makes dictionaries highly efficient for many tasks. In the worst case, however, a lookup is O(n).
When a collision occurs, the open addressing method kicks in.
Each key-value pair entry is represented as a C struct with three fields, as shown in the snippet below:
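The original snippet wasn't preserved here; this is an approximate reconstruction of the struct from CPython's pycore_dict.h, with stand-in typedefs (in CPython, Py_hash_t and PyObject come from Python.h) so it is self-contained:

```c
#include <stdint.h>

/* Stand-ins for CPython's real types, for illustration only */
typedef intptr_t Py_hash_t;
typedef struct _object PyObject;

/* Approximate reconstruction of CPython's dict entry struct */
typedef struct {
    Py_hash_t me_hash;   /* cached hash code of me_key */
    PyObject *me_key;
    PyObject *me_value;  /* only meaningful for combined tables */
} PyDictKeyEntry;
```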
Code taken from pycore_dict.
If you want to see how the lookup function is implemented, you can refer to the official CPython implementation.
I had no idea! Thanks for the detailed answer!
Throwback to my early days doing Open Source. It feels like ages ago.