Working with large datasets in Python often reveals memory constraints that can cripple application performance. I've encountered these challenges countless times in production environments where gigabytes of data need processing without overwhelming system resources. Through extensive experimentation and real-world application, I've identified eight essential memory optimization techniques that transform how Python handles large-scale data processing.
## Memory-Mapped Files for Dataset Processing
Memory-mapped files revolutionize how we handle datasets exceeding available RAM. Instead of loading entire files into memory, the operating system manages data transfer between disk and memory automatically. This technique feels like working with in-memory data while processing files many times larger than system RAM.
```python
import mmap
import os
from contextlib import contextmanager

import numpy as np


@contextmanager
def memory_mapped_array(filename, shape, dtype=np.float64, mode='r'):
    """Context manager for memory-mapped numpy arrays"""
    itemsize = np.dtype(dtype).itemsize
    expected_size = int(np.prod(shape)) * itemsize
    access_mode = mmap.ACCESS_WRITE if mode == 'w' else mmap.ACCESS_READ
    file_mode = 'r+b' if mode == 'w' else 'rb'

    with open(filename, file_mode) as f:
        with mmap.mmap(f.fileno(), expected_size, access=access_mode) as mm:
            array = np.frombuffer(mm, dtype=dtype).reshape(shape)
            yield array


def process_huge_dataset(filename, chunk_size=50000):
    """Process datasets larger than RAM using memory mapping"""
    file_stats = os.stat(filename)
    total_floats = file_stats.st_size // 8  # Assuming float64
    rows = total_floats // 100              # Assuming 100 columns

    results = []
    with memory_mapped_array(filename, (rows, 100), np.float64) as data:
        for i in range(0, rows, chunk_size):
            end_idx = min(i + chunk_size, rows)
            chunk = data[i:end_idx]
            # Statistics run on a view; the OS pages data in on demand
            chunk_stats = {
                'mean': np.mean(chunk, axis=0),
                'std': np.std(chunk, axis=0),
                'min': np.min(chunk, axis=0),
                'max': np.max(chunk, axis=0)
            }
            results.append(chunk_stats)
    return results


# Create a test data file
def create_test_data(filename, size_gb=2):
    """Generate a large test dataset of float64 rows with 100 columns"""
    rows = int(size_gb * 1024**3) // (100 * 8)  # 100 columns of float64
    with open(filename, 'wb') as f:
        for i in range(0, rows, 10000):
            batch_size = min(10000, rows - i)
            data = np.random.randn(batch_size, 100).astype(np.float64)
            f.write(data.tobytes())
```
Memory-mapped files work exceptionally well for time-series data, scientific datasets, and any scenario where you need random access to large files. The operating system's virtual memory system handles the complexity of moving data between disk and RAM based on access patterns.
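To see the pieces working together, here is a minimal end-to-end sketch using the helpers above; `sample.bin` is a hypothetical scratch file and the sizes are kept deliberately small:

```python
if __name__ == "__main__":
    # Write roughly 85 MB of float64 test data (0.08 of a GiB keeps the demo quick)
    create_test_data("sample.bin", size_gb=0.08)

    # Stream per-chunk statistics over the file without ever loading it whole
    stats = process_huge_dataset("sample.bin", chunk_size=20000)
    print(f"processed {len(stats)} chunks")
    print("mean of column 0 in first chunk:", stats[0]['mean'][0])
```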
## Generator-Based Data Processing
Generators transform memory usage patterns by processing data streams instead of loading complete datasets. This approach maintains constant memory footprint regardless of input size, making it ideal for ETL pipelines and data transformation workflows.
```python
import csv
from itertools import islice
from typing import Any, Dict, Generator


def chunked_file_reader(filename: str, chunk_size: int = 1000) -> Generator[list, None, None]:
    """Read large CSV files in manageable chunks"""
    with open(filename, 'r', encoding='utf-8') as file:
        reader = csv.DictReader(file)
        while True:
            chunk = list(islice(reader, chunk_size))
            if not chunk:
                break
            yield chunk


def process_streaming_data(data_generator: Generator) -> Generator[Dict[str, Any], None, None]:
    """Process a data stream with memory-efficient transformations"""
    for chunk in data_generator:
        for record in chunk:
            # Apply transformations without storing intermediate results
            processed_record = {
                'id': record.get('id'),
                'normalized_value': float(record.get('value', 0)) / 100.0,
                'category': record.get('category', '').lower().strip(),
                'timestamp': record.get('timestamp')
            }
            # Yield immediately instead of accumulating
            yield processed_record


def aggregate_streaming_results(data_stream: Generator) -> Dict[str, Any]:
    """Compute aggregates from streaming data"""
    stats = {
        'count': 0,
        'sum_values': 0.0,
        'categories': set(),
        'min_value': float('inf'),
        'max_value': float('-inf')
    }
    for record in data_stream:
        stats['count'] += 1
        value = record['normalized_value']
        stats['sum_values'] += value
        stats['min_value'] = min(stats['min_value'], value)
        stats['max_value'] = max(stats['max_value'], value)
        stats['categories'].add(record['category'])

    # Convert the set to a list for JSON serialization
    stats['categories'] = list(stats['categories'])
    stats['average'] = stats['sum_values'] / stats['count'] if stats['count'] > 0 else 0
    return stats


# Pipeline example
def run_memory_efficient_pipeline(input_file: str) -> Dict[str, Any]:
    """Complete pipeline using generators"""
    chunks = chunked_file_reader(input_file, 5000)
    processed_stream = process_streaming_data(chunks)
    return aggregate_streaming_results(processed_stream)
```
This generator approach processes arbitrarily large files while maintaining minimal memory usage. The key insight is that we never store more than one chunk in memory at any time, allowing the garbage collector to reclaim memory immediately after processing each chunk.
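If you want to confirm the flat memory profile, a quick check with `tracemalloc` works well; `events.csv` is a hypothetical input with id, value, category, and timestamp columns:

```python
import tracemalloc

tracemalloc.start()
results = run_memory_efficient_pipeline("events.csv")
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(results['count'], "records aggregated")
# Peak stays bounded by one 5000-row chunk, not by the file size
print(f"peak traced memory: {peak / 1024 / 1024:.1f} MB")
```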
## Object Pool Pattern for Resource Management
Object pools eliminate the overhead of repeatedly creating expensive objects like database connections, compiled regular expressions, or complex data structures. This technique reduces garbage collection pressure and improves performance in high-throughput scenarios.
```python
import queue
import re
import threading
from contextlib import contextmanager
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar('T')


class ObjectPool(Generic[T]):
    """Thread-safe object pool for expensive-to-create objects"""

    def __init__(self, factory: Callable[[], T], max_size: int = 10,
                 reset_func: Optional[Callable[[T], None]] = None):
        self.factory = factory
        self.reset_func = reset_func
        self.pool = queue.Queue(maxsize=max_size)
        self.max_size = max_size
        self.current_size = 0
        self.lock = threading.Lock()

    @contextmanager
    def get_object(self):
        """Get an object from the pool with automatic return"""
        obj = self._acquire()
        try:
            yield obj
        finally:
            self._release(obj)

    def _acquire(self) -> T:
        """Get an object from the pool or create a new one"""
        try:
            # Try the pool first
            return self.pool.get_nowait()
        except queue.Empty:
            pass
        with self.lock:
            # Create a new object if we are still under the limit
            if self.current_size < self.max_size:
                self.current_size += 1
                return self.factory()
        # Pool exhausted: block (outside the lock) until an object is returned
        return self.pool.get()

    def _release(self, obj: T):
        """Return an object to the pool"""
        if self.reset_func:
            self.reset_func(obj)
        try:
            self.pool.put_nowait(obj)
        except queue.Full:
            # Pool is full; let the object be garbage collected
            with self.lock:
                self.current_size -= 1


# Example: regular expression pool
class RegexPool:
    """Pool for compiled regular expressions"""

    def __init__(self):
        self.pattern_pools = {}

    def get_pool(self, pattern: str, flags: int = 0) -> ObjectPool:
        """Get or create a pool for a specific regex pattern"""
        key = (pattern, flags)
        if key not in self.pattern_pools:
            self.pattern_pools[key] = ObjectPool(
                factory=lambda: re.compile(pattern, flags),
                max_size=5
            )
        return self.pattern_pools[key]

    @contextmanager
    def compiled_regex(self, pattern: str, flags: int = 0):
        """Context manager yielding a pooled compiled regex"""
        pool = self.get_pool(pattern, flags)
        with pool.get_object() as regex:
            yield regex


# Usage example
regex_pool = RegexPool()


def process_text_data(text_lines: list, patterns: dict) -> dict:
    """Process text using pooled regex objects"""
    results = {pattern_name: [] for pattern_name in patterns}
    for line in text_lines:
        for pattern_name, pattern in patterns.items():
            with regex_pool.compiled_regex(pattern) as regex:
                if regex.search(line):
                    results[pattern_name].append(line)
    return results


# Example: database connection pool
class DatabasePool:
    """Simple database connection pool"""

    def __init__(self, connection_factory, max_connections=10):
        self.pool = ObjectPool(
            factory=connection_factory,
            max_size=max_connections,
            reset_func=self._reset_connection
        )

    def _reset_connection(self, conn):
        """Reset connection state before returning it to the pool"""
        try:
            conn.rollback()  # Roll back any pending transaction
        except Exception:
            pass  # Ignore errors during reset

    @contextmanager
    def get_connection(self):
        """Get a database connection from the pool"""
        with self.pool.get_object() as conn:
            yield conn
```
Object pools shine in scenarios with repeated object creation costs. Database connections, network clients, and compiled regular expressions benefit significantly from pooling, often reducing allocation overhead by 70-80% in high-throughput applications.
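Here is a quick illustration of the pooled-regex path above; the log lines and patterns are just placeholders:

```python
log_lines = [
    "2024-01-15 10:00:01 ERROR disk full on /dev/sda1",
    "2024-01-15 10:00:02 INFO checkpoint written",
    "2024-01-15 10:00:03 WARN latency above 250ms",
]
patterns = {
    "errors": r"\bERROR\b",
    "warnings": r"\bWARN(ING)?\b",
}

matches = process_text_data(log_lines, patterns)
print(len(matches["errors"]), "error lines")      # 1
print(len(matches["warnings"]), "warning lines")  # 1
```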
## Weak References for Memory Leak Prevention
Weak references prevent memory leaks in scenarios involving caches, observer patterns, or circular references. These references don't prevent garbage collection, allowing objects to be cleaned up when no strong references remain.
```python
import time
import weakref
from collections import defaultdict
from typing import Any, Callable, Optional, Set


class WeakCache:
    """Memory-efficient cache using weak references"""

    def __init__(self, max_size: int = 1000):
        self._cache = weakref.WeakValueDictionary()
        self._access_count = defaultdict(int)
        self.max_size = max_size

    def get(self, key: Any, factory: Optional[Callable] = None):
        """Get an item from the cache, creating it with factory if provided"""
        try:
            item = self._cache[key]
            self._access_count[key] += 1
            return item
        except KeyError:
            if factory:
                item = factory()
                self.put(key, item)
                return item
            return None

    def put(self, key: Any, value: Any):
        """Add an item to the cache (the value must be weak-referenceable)"""
        if len(self._cache) >= self.max_size:
            self._evict_least_used()
        self._cache[key] = value
        self._access_count[key] = 1

    def _evict_least_used(self):
        """Remove the least frequently used item"""
        if not self._access_count:
            return
        min_key = min(self._access_count, key=self._access_count.get)
        # Remove from both the cache and the access counter
        self._cache.pop(min_key, None)
        del self._access_count[min_key]


class Observable:
    """Subject in an observer pattern using weak references"""

    def __init__(self):
        self._observers: Set[Any] = weakref.WeakSet()

    def attach(self, observer):
        """Add an observer via weak reference"""
        self._observers.add(observer)

    def detach(self, observer):
        """Remove an observer"""
        self._observers.discard(observer)

    def notify(self, event_data: Any):
        """Notify all observers"""
        # Copy to a list to avoid "set changed size during iteration" errors
        for observer in list(self._observers):
            try:
                observer.update(event_data)
            except Exception as e:
                print(f"Observer notification failed: {e}")


class ProcessingResult(dict):
    """dict subclass: plain dicts cannot be weakly referenced, but subclasses can"""


class DataProcessor(Observable):
    """Example data processor with observer notifications"""

    def __init__(self):
        super().__init__()
        self.cache = WeakCache(max_size=500)

    def process_data(self, data_key: str, raw_data: list):
        """Process data with caching and notifications"""
        # Try the cache first
        cached = self.cache.get(data_key)
        cache_hit = cached is not None
        processed = cached if cache_hit else self._expensive_processing(raw_data)
        if not cache_hit:
            self.cache.put(data_key, processed)

        # Notify observers
        self.notify({
            'key': data_key,
            'processed_data': processed,
            'cache_hit': cache_hit
        })
        return processed

    def _expensive_processing(self, data: list) -> ProcessingResult:
        """Simulate expensive data processing"""
        return ProcessingResult(
            sum=sum(data),
            mean=sum(data) / len(data) if data else 0,
            count=len(data),
            processed_at=time.time()
        )


# Memory-efficient callback registry
class CallbackRegistry:
    """Registry for callbacks using weak references"""

    def __init__(self):
        # Plain functions work here; bound methods would need weakref.WeakMethod
        self._callbacks = defaultdict(weakref.WeakSet)

    def register(self, event_type: str, callback):
        """Register a callback for an event type"""
        self._callbacks[event_type].add(callback)

    def unregister(self, event_type: str, callback):
        """Unregister a callback"""
        self._callbacks[event_type].discard(callback)

    def trigger(self, event_type: str, *args, **kwargs):
        """Trigger all callbacks for an event type"""
        for callback in list(self._callbacks[event_type]):
            try:
                callback(*args, **kwargs)
            except Exception as e:
                print(f"Callback execution failed: {e}")

    def cleanup_dead_references(self):
        """Drop event types whose callback sets have emptied out"""
        for event_type in list(self._callbacks):
            if not self._callbacks[event_type]:
                del self._callbacks[event_type]
```
Weak references excel in scenarios where you need references that don't prevent garbage collection. Caches, observer patterns, and parent-child relationships benefit from this approach, preventing common memory leak patterns while maintaining clean object lifecycles.
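A small standalone sketch shows the core behavior; `Payload` is a throwaway class used here only because built-in containers such as `list` and `dict` cannot be weakly referenced directly:

```python
import gc
import weakref


class Payload:
    """Throwaway holder; instances of user-defined classes support weak references"""
    def __init__(self, data):
        self.data = data


cache = weakref.WeakValueDictionary()
obj = Payload([1, 2, 3])
cache['report'] = obj

print('report' in cache)  # True while a strong reference to obj exists
del obj                   # drop the only strong reference
gc.collect()              # CPython frees it immediately; collect() just makes it explicit
print('report' in cache)  # False: the cache entry disappeared with the object
```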
## Slots Optimization for Class Instances
The `__slots__` mechanism reduces memory overhead for classes with many instances by eliminating the per-instance `__dict__`. This optimization can reduce memory usage by 40-50% for data-heavy classes.
```python
import time


# Traditional class without slots
class RegularDataPoint:
    def __init__(self, x, y, z, timestamp, category, value):
        self.x = x
        self.y = y
        self.z = z
        self.timestamp = timestamp
        self.category = category
        self.value = value


# Optimized class with slots
class OptimizedDataPoint:
    __slots__ = ['x', 'y', 'z', 'timestamp', 'category', 'value']

    def __init__(self, x, y, z, timestamp, category, value):
        self.x = x
        self.y = y
        self.z = z
        self.timestamp = timestamp
        self.category = category
        self.value = value

    def __repr__(self):
        return (f"OptimizedDataPoint(x={self.x}, y={self.y}, z={self.z}, "
                f"timestamp={self.timestamp}, category={self.category}, value={self.value})")


# Slots with inheritance
class BaseDataPoint:
    __slots__ = ['x', 'y', 'timestamp']

    def __init__(self, x, y, timestamp):
        self.x = x
        self.y = y
        self.timestamp = timestamp


class ExtendedDataPoint(BaseDataPoint):
    __slots__ = ['z', 'category', 'value']  # Only the new attributes

    def __init__(self, x, y, z, timestamp, category, value):
        super().__init__(x, y, timestamp)
        self.z = z
        self.category = category
        self.value = value


# Memory-efficient data container with slots
class TimeSeriesPoint:
    __slots__ = ['_timestamp', '_values', '_metadata']

    def __init__(self, timestamp, values, metadata=None):
        self._timestamp = timestamp
        self._values = tuple(values)  # Immutable for memory efficiency
        self._metadata = metadata or {}

    @property
    def timestamp(self):
        return self._timestamp

    @property
    def values(self):
        return self._values

    def get_value(self, index):
        return self._values[index] if 0 <= index < len(self._values) else None

    def add_metadata(self, key, value):
        # Create a new dict to preserve immutability
        new_metadata = self._metadata.copy()
        new_metadata[key] = value
        return TimeSeriesPoint(self._timestamp, self._values, new_metadata)


# Factory for creating optimized data structures
class DataPointFactory:
    """Factory for creating memory-optimized data points"""

    @staticmethod
    def create_batch_optimized(data_list):
        """Create a batch of optimized data points"""
        return [OptimizedDataPoint(
            x=item.get('x', 0),
            y=item.get('y', 0),
            z=item.get('z', 0),
            timestamp=item.get('timestamp', 0),
            category=item.get('category', ''),
            value=item.get('value', 0.0)
        ) for item in data_list]

    @staticmethod
    def create_time_series_batch(timestamps, value_arrays):
        """Create a batch of time series points"""
        return [TimeSeriesPoint(ts, values)
                for ts, values in zip(timestamps, value_arrays)]


# Memory comparison utility
def compare_memory_usage():
    """Compare memory usage between regular and slots classes"""
    import tracemalloc

    tracemalloc.start()

    # Test with the regular class
    regular_objects = []
    for i in range(100000):
        obj = RegularDataPoint(i, i * 2, i * 3, time.time(), 'category', i * 0.5)
        regular_objects.append(obj)

    regular_snapshot = tracemalloc.take_snapshot()
    regular_memory = sum(stat.size for stat in regular_snapshot.statistics('lineno'))

    # Clear and test with the slots class
    regular_objects.clear()
    slots_objects = []
    for i in range(100000):
        obj = OptimizedDataPoint(i, i * 2, i * 3, time.time(), 'category', i * 0.5)
        slots_objects.append(obj)

    slots_snapshot = tracemalloc.take_snapshot()
    slots_memory = sum(stat.size for stat in slots_snapshot.statistics('lineno'))
    tracemalloc.stop()

    print(f"Regular class memory: {regular_memory / 1024 / 1024:.2f} MB")
    print(f"Slots class memory: {slots_memory / 1024 / 1024:.2f} MB")
    print(f"Memory reduction: {(1 - slots_memory / regular_memory) * 100:.1f}%")


# Advanced slots pattern for data processing
class ProcessingNode:
    __slots__ = ['_node_id', '_inputs', '_outputs', '_processor_func', '_cache']

    def __init__(self, node_id, processor_func):
        self._node_id = node_id
        self._inputs = []
        self._outputs = []
        self._processor_func = processor_func
        self._cache = {}

    def add_input(self, input_node):
        self._inputs.append(input_node)
        input_node._outputs.append(self)

    def process(self, data):
        # Use a cache to avoid recomputation
        cache_key = hash(str(data))
        if cache_key in self._cache:
            return self._cache[cache_key]
        result = self._processor_func(data)
        self._cache[cache_key] = result
        return result

    def clear_cache(self):
        self._cache.clear()
```
Slots provide the most benefit for classes instantiated frequently or stored in large collections. Data processing nodes, coordinate points, and configuration objects see substantial memory improvements with slots optimization.
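A rough per-instance check with `sys.getsizeof` makes the saving visible; `PlainPoint` and `SlotPoint` are throwaway classes, and exact byte counts vary between Python versions:

```python
import sys


class PlainPoint:
    def __init__(self, x, y):
        self.x, self.y = x, y


class SlotPoint:
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x, self.y = x, y


p, s = PlainPoint(1.0, 2.0), SlotPoint(1.0, 2.0)
# Regular instance: object header plus its attribute dict
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))
# Slotted instance: object only, no per-instance __dict__
print(sys.getsizeof(s))
```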
## Numpy Structured Arrays for Heterogeneous Data
Numpy structured arrays eliminate Python object overhead for heterogeneous data by storing mixed data types in contiguous memory blocks. This approach provides significant memory and performance improvements for tabular data.
```python
import time
from typing import Dict, List

import numpy as np
import pandas as pd


# Define the structured array data type
def create_structured_dtype():
    """Create a numpy dtype for the structured array"""
    return np.dtype([
        ('id', 'i4'),            # 32-bit integer
        ('timestamp', 'f8'),     # 64-bit float (double)
        ('value', 'f4'),         # 32-bit float
        ('category', 'U20'),     # Unicode string, max 20 chars
        ('active', '?'),         # Boolean
        ('coordinates', '3f4'),  # Array of three 32-bit floats
        ('metadata', 'O')        # Python object (use sparingly)
    ])


class StructuredDataProcessor:
    """Efficient data processor using numpy structured arrays"""

    def __init__(self, initial_capacity=10000):
        self.dtype = create_structured_dtype()
        self.data = np.empty(initial_capacity, dtype=self.dtype)
        self.size = 0
        self.capacity = initial_capacity

    def add_record(self, record_dict):
        """Add a single record to the structured array"""
        if self.size >= self.capacity:
            self._resize()
        # Map dictionary keys to structured array fields
        self.data[self.size] = (
            record_dict.get('id', 0),
            record_dict.get('timestamp', 0.0),
            record_dict.get('value', 0.0),
            record_dict.get('category', ''),
            record_dict.get('active', False),
            record_dict.get('coordinates', [0.0, 0.0, 0.0]),
            record_dict.get('metadata', {})
        )
        self.size += 1

    def add_batch(self, records: List[Dict]):
        """Add multiple records efficiently"""
        needed_capacity = self.size + len(records)
        while self.capacity < needed_capacity:
            self._resize()

        # Build a temporary array for the batch
        batch_data = np.empty(len(records), dtype=self.dtype)
        for i, record in enumerate(records):
            batch_data[i] = (
                record.get('id', 0),
                record.get('timestamp', 0.0),
                record.get('value', 0.0),
                record.get('category', ''),
                record.get('active', False),
                record.get('coordinates', [0.0, 0.0, 0.0]),
                record.get('metadata', {})
            )

        # Copy the batch into the main array
        self.data[self.size:self.size + len(records)] = batch_data
        self.size += len(records)

    def _resize(self):
        """Double the array capacity"""
        new_capacity = self.capacity * 2
        new_data = np.empty(new_capacity, dtype=self.dtype)
        new_data[:self.size] = self.data[:self.size]
        self.data = new_data
        self.capacity = new_capacity

    def get_active_data(self):
        """Get a view of active records only"""
        active_mask = self.data['active'][:self.size]
        return self.data[:self.size][active_mask]

    def filter_by_category(self, category: str):
        """Filter records by category"""
        category_mask = self.data['category'][:self.size] == category
        return self.data[:self.size][category_mask]

    def compute_statistics(self):
        """Compute statistics using vectorized operations"""
        active_data = self.get_active_data()
        if len(active_data) == 0:
            return {}

        return {
            'count': len(active_data),
            'value_mean': np.mean(active_data['value']),
            'value_std': np.std(active_data['value']),
            'timestamp_range': (
                np.min(active_data['timestamp']),
                np.max(active_data['timestamp'])
            ),
            'categories': np.unique(active_data['category']).tolist()
        }

    def to_dataframe(self):
        """Convert scalar fields to a pandas DataFrame for analysis
        (pandas cannot ingest nested or object-typed structured fields directly)"""
        scalar_fields = [name for name in self.dtype.names
                         if self.dtype[name].shape == () and self.dtype[name].kind != 'O']
        view = self.data[:self.size]
        return pd.DataFrame({name: view[name] for name in scalar_fields})


# Advanced structured array operations
class TimeSeriesStructured:
    """Time series data using structured arrays"""

    def __init__(self):
        # Define a dtype for time series with multiple metrics
        self.dtype = np.dtype([
            ('timestamp', 'datetime64[ms]'),
            ('open', 'f4'),
            ('high', 'f4'),
            ('low', 'f4'),
            ('close', 'f4'),
            ('volume', 'i8'),
            ('indicators', '10f4')  # Room for 10 technical indicators
        ])
        self.data = np.array([], dtype=self.dtype)

    def load_from_csv(self, filename, chunk_size=10000):
        """Load time series data from CSV efficiently"""
        chunks = []
        for chunk_df in pd.read_csv(filename, chunksize=chunk_size):
            # Convert the pandas chunk to a structured array
            chunk_structured = np.empty(len(chunk_df), dtype=self.dtype)
            chunk_structured['timestamp'] = pd.to_datetime(chunk_df['timestamp']).values.astype('datetime64[ms]')
            chunk_structured['open'] = chunk_df['open'].astype(np.float32)
            chunk_structured['high'] = chunk_df['high'].astype(np.float32)
            chunk_structured['low'] = chunk_df['low'].astype(np.float32)
            chunk_structured['close'] = chunk_df['close'].astype(np.float32)
            chunk_structured['volume'] = chunk_df['volume'].astype(np.int64)
            # Initialize the indicators array with zeros
            chunk_structured['indicators'] = np.zeros((len(chunk_df), 10), dtype=np.float32)
            chunks.append(chunk_structured)

        # Concatenate all chunks
        self.data = np.concatenate(chunks)

    def compute_moving_average(self, window=20):
        """Compute a moving average using numpy operations"""
        if len(self.data) < window:
            return np.array([])

        # Use numpy's convolve for an efficient moving average
        prices = self.data['close']
        weights = np.ones(window) / window
        ma = np.convolve(prices, weights, mode='valid')

        # Store in the indicators array (first indicator slot is the MA)
        start_idx = window - 1
        self.data['indicators'][start_idx:start_idx + len(ma), 0] = ma
        return ma

    def get_date_range(self, start_date, end_date):
        """Get data for a specific date range"""
        start_ts = np.datetime64(start_date)
        end_ts = np.datetime64(end_date)
        mask = (self.data['timestamp'] >= start_ts) & (self.data['timestamp'] <= end_ts)
        return self.data[mask]


# Memory comparison utility
def compare_structured_vs_objects():
    """Compare memory usage of a structured array vs. a list of dicts"""
    import tracemalloc

    num_records = 100000
    tracemalloc.start()

    # Baseline: the same records as a list of Python dicts
    test_data = [
        {
            'id': i,
            'timestamp': time.time() + i,
            'value': i * 0.5,
            'category': f'cat_{i % 10}',
            'active': i % 3 == 0,
            'coordinates': [i, i * 2, i * 3],
            'metadata': {'source': 'test'}
        }
        for i in range(num_records)
    ]
    dict_memory, _ = tracemalloc.get_traced_memory()

    # Structured array approach
    processor = StructuredDataProcessor(num_records)
    processor.add_batch(test_data)
    # Contiguous buffer only; objects referenced by the 'metadata' field are not counted
    structured_memory = processor.data.nbytes
    tracemalloc.stop()

    print(f"List of dicts memory: {dict_memory / 1024 / 1024:.2f} MB")
    print(f"Structured array memory: {structured_memory / 1024 / 1024:.2f} MB")
    print(f"Data shape: {processor.data[:processor.size].shape}")
    print(f"Statistics: {processor.compute_statistics()}")
```
Structured arrays excel for tabular data, time series, and scientific datasets where you have mixed data types but consistent structure. The memory savings become substantial with large datasets, often reducing memory usage by 60-80% compared to Python objects.
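As a rough sanity check on those numbers, the dtype's `itemsize` gives the exact per-record cost of the contiguous layout (the object-typed `metadata` field is omitted here because the array stores only a pointer for it):

```python
import numpy as np

row_dtype = np.dtype([
    ('id', 'i4'), ('timestamp', 'f8'), ('value', 'f4'),
    ('category', 'U20'), ('active', '?'), ('coordinates', '3f4'),
])
print(row_dtype.itemsize)                        # bytes per record in the packed layout
print(row_dtype.itemsize * 1_000_000 / 1024**2)  # approximate MB for one million records
```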
## Memory Profiling and Tracking
Effective memory optimization requires accurate measurement tools. Python provides several profiling options that help identify bottlenecks and track memory usage patterns throughout the application lifecycle.
```python
import os
import tracemalloc

import psutil


class MemoryProfiler:
    """Comprehensive memory profiling utility"""

    def __init__(self, trace_malloc=True):
        self.trace_malloc = trace_malloc
        self.snapshots = []
        self.peak_memory = 0
        self.baseline_memory = 0
        if trace_malloc:
            tracemalloc.start()

    def _get_process_memory(self) -> int:
        """Resident set size of the current process, in bytes"""
        return psutil.Process(os.getpid()).memory_info().rss

    def take_snapshot(self, label: str = None):
        """Take a memory snapshot with an optional label"""
        if self.trace_malloc:
            snapshot = tracemalloc.take_snapshot()
            current_memory = self._get_process_memory()
            self.peak_memory = max(self.peak_memory, current_memory)
            self.snapshots.append({
                'label': label,
                'snapshot': snapshot,
                'process_memory': current_memory
            })
```
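A minimal usage sketch, assuming the class as defined above and a throwaway list allocation as the workload:

```python
profiler = MemoryProfiler()
profiler.take_snapshot("baseline")

payload = [list(range(1000)) for _ in range(10000)]  # throwaway workload
profiler.take_snapshot("after allocation")

for entry in profiler.snapshots:
    print(entry["label"], f"{entry['process_memory'] / 1024 / 1024:.1f} MB RSS")
```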