Python is a versatile programming language with a wide range of applications. Nonetheless, as with any language, it is essential to write efficient, performant code. With this in mind, here are several useful tips and tricks for optimizing Python code performance:
Caching
Caching is a method of storing frequently used data in memory so it can be retrieved quickly. In Python, several options are available, including the built-in functools.lru_cache decorator, Redis, and Memcached, which offer a straightforward and flexible way to cache data in your code.
Redis is a commonly used caching solution in Python applications. It is an in-memory data store that can cache regularly accessed data. Below is an example of using Redis for caching:
import redis

# Connect to a Redis server running locally on the default port
r = redis.Redis(host='localhost', port=6379, db=0)

def expensive_function(arg1, arg2):
    # Check if the result is already in the cache
    cache_key = str(arg1) + ':' + str(arg2)
    cached_result = r.get(cache_key)
    if cached_result is not None:
        # Cache hit: Redis returns bytes, so convert back to int
        return int(cached_result)
    # Cache miss: perform the expensive computation
    result = arg1 * arg2
    # Store the result in the cache for future calls
    r.set(cache_key, result)
    return result
In this example, the redis library is used to connect to a Redis server running on the same machine on port 6379. The function expensive_function stands in for a computationally expensive operation on the input arguments arg1 and arg2 (here it simply multiplies them).
Prior to executing the computation, the function first verifies whether the Redis cache already contains the result. This is achieved by constructing a cache key that is based on the input arguments. In the event that the result is present in the cache, it is retrieved instantly. If not, the function proceeds to perform the resource-intensive computation, and subsequently stores the result in the cache for future reference.
Memoization
Memoization is a widely used technique for optimizing the performance of functions: the output of previous computations is cached and retrieved whenever the same input occurs again. It can be particularly helpful for expensive function calls that are repeated frequently. Python's built-in functools module provides memoization out of the box through the lru_cache decorator, which lets you cache the results of function calls with a single line and noticeably improve the performance of your code.
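As a minimal sketch of functools.lru_cache, here is a naive recursive Fibonacci function memoized with the decorator; without it, the call below would take an impractically long time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache the result for every distinct argument
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed almost instantly
```

Each distinct n is computed only once; repeated calls are served straight from the cache. maxsize=None means the cache grows without bound, which is fine here because only 51 entries are ever stored.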
Memcached is a caching system that distributes data across multiple servers and stores it in memory, providing efficient handling of large volumes of read and write requests with minimal latency. Memcached is commonly used in web applications to cache frequently accessed data, including database queries, API responses, and HTML templates, allowing for rapid retrieval and delivery of this information.
To use Memcached from Python, you first need to install the pymemcache client library using pip:
pip install pymemcache
Once pymemcache has been successfully installed, you can utilize the pymemcache.client module of the library to efficiently cache results and enhance the performance of your code.
from pymemcache.client.base import Client
import time
# Connect to Memcached server
client = Client(('localhost', 11211))
In the code snippet provided, a client object has been instantiated to establish a connection with a Memcached server that is running on the local machine and listening on port 11211. Prior to running the code, it is necessary to install the Memcached server software and launch it on the system in order to execute the example code below.
To demonstrate how memoization can optimize a recursive function, I have utilized the most fundamental example of all - the Fibonacci sequence. This sequence is comprised of numbers where each subsequent number is the sum of the two preceding numbers, starting from 0 and 1.
I have developed two Fibonacci functions: the first is a conventional recursive function that is widely available on the internet (expensive_fib), while the second is an optimized Fibonacci function (cached_fib) that leverages Memcached to cache the output and improve performance.
def expensive_fib(n):
    if n <= 1:
        return n
    return expensive_fib(n-1) + expensive_fib(n-2)

def cached_fib(n):
    # Look the value up in Memcached first
    key = str(n)
    value = client.get(key)
    if value is not None:
        # Cache hit: Memcached returns bytes, so convert back to int
        return int(value)
    # Cache miss: compute recursively and store the result
    cur_value = cached_fib(n-1) + cached_fib(n-2)
    client.set(key, cur_value)
    return cur_value
# Seed the base cases
client.set("0", 0)
client.set("1", 1)

n = 35  # the naive recursion is far too slow for n = 50, so a smaller n is used here

start_time = time.time()
result = expensive_fib(n)
end_time = time.time()
execution_time = end_time - start_time
print("Execution time of expensive fib:", execution_time)

start_time = time.time()
result = cached_fib(n)
end_time = time.time()
execution_time = end_time - start_time
print("Execution time of cached fib:", execution_time)
Running the preceding code shows the cached version finishing almost instantly, while the naive recursive version takes orders of magnitude longer.
Generators
Python generators offer the ability to produce large data sets on the fly, which is far more memory-efficient than storing an entire data set in memory at once. A generator function uses the yield keyword to produce a sequence of values lazily, one at a time. When invoked, it returns a generator object that can be iterated over to obtain each value in the sequence. This approach is ideal for generating long sequences of values or conducting large computations while keeping memory usage low.
To illustrate how a generator can enhance the efficiency of a mathematical sequence, consider the following example. Suppose you wished to generate the first 100,000 prime numbers. Rather than generating the entire sequence simultaneously, which would necessitate storing all 100,000 numbers in memory, you could utilize a generator function to generate each prime number sequentially, as needed.
def is_prime(n):
    if n <= 1:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

def primes(n):
    count = 0
    i = 2
    while count < n:
        if is_prime(i):
            yield i
            count += 1
        i += 1
In the example above, primes() uses is_prime() to test each candidate for primality and yields each prime as soon as it is found. Calling primes(100000) produces each prime on demand instead of storing all 100,000 numbers in memory at once, eliminating the excessive memory usage.
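To make the memory saving concrete, here is the same generator in compact form (collapsing is_prime with all() is just a stylistic variation) together with a usage sketch showing lazy consumption:

```python
import sys

def is_prime(n):
    if n <= 1:
        return False
    return all(n % i for i in range(2, int(n ** 0.5) + 1))

def primes(n):
    count, i = 0, 2
    while count < n:
        if is_prime(i):
            yield i
            count += 1
        i += 1

# The generator object itself is tiny, no matter how many primes it will yield
gen = primes(100_000)
print(sys.getsizeof(gen))

# Values are produced one at a time as you iterate
print(list(primes(10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that sys.getsizeof(gen) reports only a few hundred bytes regardless of n, whereas a list of 100,000 primes would occupy several megabytes.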
In the next post, I will discuss more performance optimization methods such as Cython, multithreading, and multiprocessing.