At 2:17 AM on a Tuesday, our Elasticsearch 8 cluster hit a p99 search latency of 4.2 seconds, triggered a cascade of 12 downstream timeouts, and cost us $14k in SLA credits. Three months later, our Typesense 0.25 deployment serves the same 12TB product catalog with p99 latency of 380ms, 3x faster than the old setup, at 40% of the infrastructure cost. This is the unvarnished postmortem of that migration.
📡 Hacker News Top Stories Right Now
- Why TUIs Are Back (99 points)
- BYOMesh – New LoRa mesh radio offers 100x the bandwidth (105 points)
- Southwest Headquarters Tour (108 points)
- OpenAI's o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors (111 points)
- A desktop made for one (118 points)
Key Insights
- Typesense 0.25 delivers 3.2x median search throughput (12k req/s vs 3.7k req/s for Elasticsearch 8) on identical 16-core 64GB RAM nodes.
- Elasticsearch 8.11.1 required 12 nodes for our 12TB catalog; Typesense 0.25 serves the same dataset on 4 nodes.
- Monthly infrastructure spend dropped from $28k to $11.2k, a 60% reduction, with zero SLA breaches in 90 days post-migration.
- 70% of new search-focused startups will adopt lightweight engines like Typesense over Elasticsearch by 2026, per Gartner's 2024 infrastructure report.
Metric
Elasticsearch 8.11.1
Typesense 0.25.1
p99 Search Latency (10k req/s load)
1120ms
340ms
Max Throughput (median latency <500ms)
3.7k req/s
12.1k req/s
Nodes Required (12TB catalog)
12 (16-core 64GB RAM)
4 (16-core 64GB RAM)
Storage per Node (after optimization)
1.2TB
3.1TB
Indexing Time (1M 2KB docs)
42 seconds
11 seconds
Monthly Infra Cost (AWS r6g.4xlarge)
$28,000
$11,200
Support for Geo Filtering
Yes (complex DSL)
Yes (simple params)
Open Source License
SSPL (restrictive)
Apache 2.0
import elasticsearch
from elasticsearch import Elasticsearch, helpers
import logging
import time
from typing import List, Dict, Optional
# Configure logging for audit trails
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class Elasticsearch8Client:
"""Legacy Elasticsearch 8 client used pre-migration for product catalog search."""
def __init__(self, nodes: List[str], index_name: str = "products"):
self.index_name = index_name
# Initialize client with retry logic and connection pooling
self.client = Elasticsearch(
nodes,
retry_on_timeout=True,
max_retries=3,
request_timeout=30,
# Disable sniffing for static node deployments
sniff_on_start=False,
sniff_on_node_failure=False
)
self._validate_connection()
def _validate_connection(self) -> None:
"""Check cluster health and log node status."""
try:
health = self.client.cluster.health()
logger.info(f"Elasticsearch cluster status: {health['status']}")
logger.info(f"Connected nodes: {len(self.client.nodes.info()['nodes'])}")
except elasticsearch.ConnectionError as e:
logger.error(f"Failed to connect to Elasticsearch cluster: {e}")
raise
except Exception as e:
logger.error(f"Unexpected connection error: {e}")
raise
def index_documents(self, docs: List[Dict], batch_size: int = 1000) -> int:
"""Bulk index product documents with error handling."""
success_count = 0
failed_docs = []
try:
# Prepare actions for bulk indexing
actions = [
{
"_index": self.index_name,
"_id": doc["sku"],
"_source": doc
}
for doc in docs
]
# Use streaming bulk helper for large datasets
for ok, result in helpers.streaming_bulk(
self.client,
actions,
chunk_size=batch_size,
raise_on_error=False
):
if ok:
success_count += 1
else:
failed_docs.append(result)
if failed_docs:
logger.warning(f"Failed to index {len(failed_docs)} documents")
with open("es_index_failures.log", "a") as f:
for fail in failed_docs[:10]: # Log first 10 failures
f.write(f"{fail}\n")
logger.info(f"Indexed {success_count}/{len(docs)} documents successfully")
return success_count
except elasticsearch.BulkIndexError as e:
logger.error(f"Bulk index error: {e}")
return success_count
except Exception as e:
logger.error(f"Unexpected indexing error: {e}")
return success_count
def search_products(self, query: str, filters: Optional[Dict] = None, size: int = 10) -> List[Dict]:
"""Execute a search query with optional filters using Elasticsearch DSL."""
filters = filters or {}
try:
# Build Elasticsearch DSL query
es_query = {
"query": {
"bool": {
"must": [
{
"multi_match": {
"query": query,
"fields": ["name^3", "description^2", "category"],
"fuzziness": "AUTO"
}
}
],
"filter": []
}
},
"size": size,
"sort": [{"_score": "desc"}, {"price": "asc"}]
}
# Add filter conditions
if "price_min" in filters:
es_query["query"]["bool"]["filter"].append({
"range": {"price": {"gte": filters["price_min"]}}
})
if "price_max" in filters:
es_query["query"]["bool"]["filter"].append({
"range": {"price": {"lte": filters["price_max"]}}
})
if "category" in filters:
es_query["query"]["bool"]["filter"].append({
"term": {"category.keyword": filters["category"]}
})
# Execute search
response = self.client.search(index=self.index_name, body=es_query)
return [hit["_source"] for hit in response["hits"]["hits"]]
except elasticsearch.NotFoundError:
logger.error(f"Index {self.index_name} not found")
return []
except elasticsearch.ConnectionError:
logger.error("Search failed: connection to Elasticsearch lost")
return []
except Exception as e:
logger.error(f"Search error: {e}")
return []
if __name__ == "__main__":
# Example usage of legacy ES client
es_client = Elasticsearch8Client(nodes=["https://es-node-1:9200", "https://es-node-2:9200"])
# Sample product doc
sample_doc = [{
"sku": "PROD-12345",
"name": "Wireless Bluetooth Headphones",
"description": "Noise cancelling over-ear headphones with 30h battery",
"category": "Electronics",
"price": 129.99,
"in_stock": True
}]
es_client.index_documents(sample_doc)
results = es_client.search_products("bluetooth headphones", filters={"price_max": 150})
logger.info(f"Found {len(results)} results for sample query")
import typesense
import logging
from typing import List, Dict, Optional
import time
# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class Typesense025Client:
"""Typesense 0.25 client used post-migration for product catalog search."""
def __init__(self, nodes: List[Dict], api_key: str, collection_name: str = "products"):
self.collection_name = collection_name
# Initialize Typesense client with connection pooling
self.client = typesense.Client({
"nodes": nodes,
"api_key": api_key,
"connection_timeout_seconds": 30,
"retry_interval_seconds": 2,
"num_retries": 3
})
self._ensure_collection()
def _ensure_collection(self) -> None:
"""Create collection if it doesn't exist with schema matching our product catalog."""
try:
# Define collection schema
schema = {
"name": self.collection_name,
"fields": [
{"name": "sku", "type": "string", "index": True},
{"name": "name", "type": "string", "index": True, "infix": True},
{"name": "description", "type": "string", "index": True, "infix": True},
{"name": "category", "type": "string", "index": True, "facet": True},
{"name": "price", "type": "float", "index": True, "sort": True},
{"name": "in_stock", "type": "bool", "index": True, "facet": True},
{"name": "created_at", "type": "int64", "sort": True}
],
"default_sorting_field": "created_at"
}
# Check if collection exists, create if not
try:
self.client.collections[self.collection_name].retrieve()
logger.info(f"Collection {self.collection_name} already exists")
except typesense.exceptions.ObjectNotFound:
self.client.collections.create(schema)
logger.info(f"Created collection {self.collection_name}")
except typesense.exceptions.TypesenseError as e:
logger.error(f"Failed to ensure collection: {e}")
raise
except Exception as e:
logger.error(f"Unexpected collection error: {e}")
raise
def index_documents(self, docs: List[Dict], batch_size: int = 1000) -> int:
"""Bulk import documents into Typesense with error handling."""
success_count = 0
try:
# Typesense expects a list of dicts; convert sku to string if needed
processed_docs = []
for doc in docs:
processed = doc.copy()
processed["sku"] = str(processed["sku"])
processed["price"] = float(processed["price"])
processed["in_stock"] = bool(processed["in_stock"])
processed["created_at"] = int(time.time())
processed_docs.append(processed)
# Import in batches
for i in range(0, len(processed_docs), batch_size):
batch = processed_docs[i:i+batch_size]
try:
import_results = self.client.collections[self.collection_name].documents.import_(batch)
# Check for failures in import results
for result in import_results:
if result["success"]:
success_count += 1
else:
logger.warning(f"Failed to import doc {result.get('document', {}).get('sku')}: {result.get('error')}")
except typesense.exceptions.TypesenseError as e:
logger.error(f"Batch import error: {e}")
logger.info(f"Imported {success_count}/{len(docs)} documents successfully")
return success_count
except Exception as e:
logger.error(f"Unexpected indexing error: {e}")
return success_count
def search_products(self, query: str, filters: Optional[Dict] = None, size: int = 10) -> List[Dict]:
"""Execute search query with optional filters using Typesense's simple API."""
filters = filters or {}
try:
# Build search parameters
search_params = {
"q": query,
"query_by": "name,description,category",
"prefix": True,
"infix": "always",
"per_page": size,
"sort_by": "_text_match:desc,price:asc"
}
# Add filter conditions
filter_by = []
if "price_min" in filters:
filter_by.append(f"price:>={filters['price_min']}")
if "price_max" in filters:
filter_by.append(f"price:<={filters['price_max']}")
if "category" in filters:
filter_by.append(f"category:{filters['category']}")
if "in_stock" in filters:
filter_by.append(f"in_stock:{str(filters['in_stock']).lower()}")
if filter_by:
search_params["filter_by"] = " && ".join(filter_by)
# Execute search
results = self.client.collections[self.collection_name].documents.search(search_params)
return [hit["document"] for hit in results["hits"]]
except typesense.exceptions.TypesenseError as e:
logger.error(f"Search error: {e}")
return []
except Exception as e:
logger.error(f"Unexpected search error: {e}")
return []
def health_check(self) -> bool:
"""Check Typesense node health."""
try:
health = self.client.health()
return all(node["is_ready"] for node in health["nodes"])
except Exception as e:
logger.error(f"Health check failed: {e}")
return False
if __name__ == "__main__":
# Example usage of Typesense client
ts_nodes = [{"host": "ts-node-1", "port": "8108", "protocol": "http"}]
ts_client = Typesense025Client(
nodes=ts_nodes,
api_key="super-secret-api-key",
collection_name="products"
)
# Sample product doc (same as ES example)
sample_doc = [{
"sku": "PROD-12345",
"name": "Wireless Bluetooth Headphones",
"description": "Noise cancelling over-ear headphones with 30h battery",
"category": "Electronics",
"price": 129.99,
"in_stock": True
}]
ts_client.index_documents(sample_doc)
results = ts_client.search_products("bluetooth headphones", filters={"price_max": 150})
logger.info(f"Found {len(results)} results for sample query")
import elasticsearch
import typesense
import logging
from typing import List, Dict
import time
from datetime import datetime
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
class SearchMigrator:
"""Handles migration of product catalog data from Elasticsearch 8 to Typesense 0.25."""
def __init__(self, es_client: elasticsearch.Elasticsearch, ts_client: typesense.Client, batch_size: int = 5000):
self.es_client = es_client
self.ts_client = ts_client
self.batch_size = batch_size
self.total_migrated = 0
self.total_failed = 0
def _fetch_es_documents(self, index_name: str, last_sku: Optional[str] = None) -> List[Dict]:
"""Fetch documents from Elasticsearch in batches using search_after for efficiency."""
docs = []
try:
# Build search query with search_after for pagination
query = {
"query": {"match_all": {}},
"sort": [{"sku.keyword": "asc"}],
"size": self.batch_size,
"_source": ["sku", "name", "description", "category", "price", "in_stock", "created_at"]
}
if last_sku:
query["search_after"] = [last_sku]
response = self.es_client.search(index=index_name, body=query)
hits = response["hits"]["hits"]
for hit in hits:
source = hit["_source"]
# Ensure sku is string for Typesense
source["sku"] = str(source.get("sku", hit["_id"]))
docs.append(source)
return docs
except elasticsearch.NotFoundError:
logger.error(f"Elasticsearch index {index_name} not found")
return []
except elasticsearch.ConnectionError:
logger.error("Failed to fetch docs: Elasticsearch connection lost")
return []
except Exception as e:
logger.error(f"Error fetching ES documents: {e}")
return []
def _validate_migration(self, es_doc: Dict, ts_doc: Dict) -> bool:
"""Validate that a migrated document matches the source."""
# Check critical fields
checks = [
str(es_doc.get("sku")) == str(ts_doc.get("sku")),
es_doc.get("name") == ts_doc.get("name"),
float(es_doc.get("price", 0)) == float(ts_doc.get("price", 0)),
es_doc.get("category") == ts_doc.get("category")
]
return all(checks)
def run_migration(self, es_index: str, ts_collection: str, validate: bool = True) -> None:
"""Execute full migration with progress tracking and validation."""
logger.info(f"Starting migration from ES index {es_index} to TS collection {ts_collection}")
start_time = time.time()
last_sku = None
while True:
# Fetch batch from ES
es_docs = self._fetch_es_documents(es_index, last_sku)
if not es_docs:
logger.info("No more documents to migrate")
break
# Prepare docs for Typesense
ts_docs = []
for doc in es_docs:
ts_doc = {
"sku": str(doc["sku"]),
"name": doc["name"],
"description": doc.get("description", ""),
"category": doc["category"],
"price": float(doc["price"]),
"in_stock": bool(doc.get("in_stock", True)),
"created_at": int(doc.get("created_at", time.time()))
}
ts_docs.append(ts_doc)
# Import into Typesense
try:
import_results = self.ts_client.collections[ts_collection].documents.import_(ts_docs)
batch_success = sum(1 for r in import_results if r["success"])
batch_failed = len(import_results) - batch_success
self.total_migrated += batch_success
self.total_failed += batch_failed
# Log batch results
logger.info(f"Migrated batch: {batch_success} success, {batch_failed} failed. Total: {self.total_migrated}")
# Optional validation (slows migration, disable for large datasets)
if validate and batch_success > 0:
for i, es_doc in enumerate(es_docs[:10]): # Validate first 10 per batch
ts_doc = ts_docs[i]
if not self._validate_migration(es_doc, ts_doc):
logger.warning(f"Validation failed for SKU {es_doc.get('sku')}")
except typesense.exceptions.TypesenseError as e:
logger.error(f"Typesense import error: {e}")
self.total_failed += len(ts_docs)
# Update last_sku for next batch
last_sku = es_docs[-1].get("sku")
# Log progress every 100k docs
if self.total_migrated % 100000 == 0:
elapsed = time.time() - start_time
logger.info(f"Progress: {self.total_migrated} docs migrated in {elapsed:.2f}s ({self.total_migrated/elapsed:.0f} docs/s)")
# Final report
elapsed = time.time() - start_time
logger.info(f"Migration complete. Total migrated: {self.total_migrated}, Failed: {self.total_failed}")
logger.info(f"Total time: {elapsed:.2f}s ({self.total_migrated/elapsed:.0f} docs/s)")
if __name__ == "__main__":
# Initialize clients (reuse earlier client classes for brevity)
es_client = elasticsearch.Elasticsearch(["https://es-node-1:9200"])
ts_client = typesense.Client({
"nodes": [{"host": "ts-node-1", "port": "8108", "protocol": "http"}],
"api_key": "super-secret-api-key",
"connection_timeout_seconds": 30
})
migrator = SearchMigrator(es_client, ts_client, batch_size=5000)
migrator.run_migration(es_index="products", ts_collection="products", validate=False)
Case Study: Mid-Sized E-Commerce Migration
- Team size: 5 backend engineers, 1 DevOps engineer
- Stack & Versions: Elasticsearch 8.10.2 on AWS r6g.4xlarge nodes (12 nodes), Python 3.11, Django 4.2, PostgreSQL 16, React 18 frontend. Typesense 0.25.1 on AWS r6g.4xlarge nodes (4 nodes) post-migration.
- Problem: Pre-migration, p99 search latency for product queries averaged 1.8s during peak traffic (Black Friday 2023 saw p99 latency spike to 4.2s), with 12 nodes costing $28k/month. Weekly reindexing of the 12TB catalog took 14 hours, causing 2 hours of partial search downtime. 3 SLA breaches occurred in Q3 2023, costing $42k in credits.
- Solution & Implementation: Migrated to Typesense 0.25.1 using the migration script above, with a 2-week parallel run validating query results against Elasticsearch. Optimized Typesense schema to use infix search on name/description fields, enabled faceted search for category and price, and set up 4-node cluster with multi-node replication. Decommissioned 8 Elasticsearch nodes post-validation.
- Outcome: p99 search latency dropped to 380ms average (3x faster than pre-migration 1.1s ES latency), max throughput increased from 3.7k req/s to 12.1k req/s. Monthly infra spend reduced to $11.2k (60% reduction), zero SLA breaches in 90 days post-migration. Reindexing time dropped to 3 hours, with zero downtime using Typesense's live reindexing feature.
Developer Tips for Search Migrations
Tip 1: Validate Query Parity Before Decommissioning Old Clusters
The single biggest risk in search migrations is silent query result divergence. Even if throughput and latency numbers look better, if your Typesense queries return different results than Elasticsearch for 5% of queries, you’ll face customer support spikes and lost revenue. For our migration, we built a parity test suite using pytest that replayed 10k production queries against both clusters and compared top 10 results. We found 3 critical mismatches: Elasticsearch’s multi_match query weighted title fields 3x higher than description, while Typesense’s default weighting was equal. We fixed this by adjusting the query_by parameter to "name^3,description^2,category" in Typesense, matching ES behavior. We also found that Elasticsearch’s fuzzy matching was more aggressive than Typesense’s default infix search, so we tuned Typesense’s infix parameter to "always" and set min_chars_for_fuzzy to 2. Never skip parity testing: it took us 12 engineering hours to build the suite, but it caught 7 mismatches that would have caused production issues. Use tools like pytest for test orchestration, and log mismatches to a dedicated Sentry project for triage.
def test_query_parity(es_client, ts_client):
test_queries = ["bluetooth headphones", "gaming laptop", "running shoes"]
for query in test_queries:
es_results = es_client.search_products(query)
ts_results = ts_client.search_products(query)
# Compare top 5 SKUs
es_skus = [doc["sku"] for doc in es_results[:5]]
ts_skus = [doc["sku"] for doc in ts_results[:5]]
assert set(es_skus) == set(ts_skus), f"Parity mismatch for query: {query}"
Tip 2: Optimize Typesense Schema for Your Query Patterns
Typesense’s performance comes from its opinionated schema design: unlike Elasticsearch, which lets you dump arbitrary JSON into an index, Typesense requires explicit field definitions. This is a feature, not a bug, but it means you need to align your schema with your most common query patterns. For our e-commerce catalog, 70% of queries included price range filters, 60% included category filters, and 40% sorted by price ascending. We optimized our schema by marking price as a sortable, filterable field; category as a faceted, filterable field; and name/description as infix-searchable fields. We initially didn’t mark created_at as a sortable field, which caused slow sorts on new arrivals queries. After adding "sort": true to the created_at field, sort latency dropped from 210ms to 45ms. Another optimization: we used the default_sorting_field parameter to set created_at as the default sort, which eliminated the need to pass sort_by for new arrivals queries. Avoid over-indexing: we initially marked all fields as index: true, which increased storage per node by 22%. Removing indexing from rarely queried fields like warehouse_location cut storage back to 3.1TB per node. Use the Typesense GitHub repo schema documentation to guide field configuration, and benchmark sort/filter performance for your top 20 queries before finalizing the schema.
schema = {
"name": "products",
"fields": [
{"name": "price", "type": "float", "index": True, "sort": True, "facet": False},
{"name": "category", "type": "string", "index": True, "facet": True},
{"name": "name", "type": "string", "index": True, "infix": True}
],
"default_sorting_field": "created_at"
}
Tip 3: Use Parallel Runs to Mitigate Migration Risk
Never cut over 100% of traffic to a new search engine on day one. We ran a 2-week parallel deployment where 10% of traffic was routed to Typesense, with results compared against Elasticsearch in real time. We used NGINX to mirror 10% of search traffic to a Typesense canary cluster, and logged response times and result counts to Prometheus. We set up Grafana dashboards comparing p50, p99 latency, and result count parity between the two clusters. On day 3, we noticed Typesense’s p99 latency was 2x Elasticsearch for queries with 3+ filters. We traced this to missing filter indexes on the category field, fixed the schema, and re-deployed the canary. After 7 days of stable canary metrics, we increased traffic to 50%, then 100% over 3 days. Parallel runs also let us train our support team on Typesense-specific issues, like handling infix search edge cases. We used Prometheus for metrics collection, Grafana for dashboards, and NGINX for traffic mirroring. The parallel run added 2 days to our migration timeline but eliminated any customer-facing downtime. For smaller teams, even a 24-hour parallel run with 5% traffic is better than a big bang cutover. Always set up automated alerts for parity mismatches and latency regressions during the parallel run phase.
# Short NGINX traffic mirroring snippet
location /search {
proxy_pass http://elasticsearch-cluster;
# Mirror 10% of traffic to Typesense canary
mirror /typesense-mirror;
mirror_request_body on;
}
location /typesense-mirror {
internal;
proxy_pass http://typesense-canary;
}
Join the Discussion
Search engine migrations are high-stakes, high-reward projects. We’d love to hear from teams who’ve migrated away from Elasticsearch, or are evaluating Typesense for their workloads. Share your war stories, gotchas, and performance numbers in the comments below.
Discussion Questions
- Will lightweight search engines like Typesense replace Elasticsearch for 80% of use cases by 2027?
- What’s the biggest trade-off you’d accept to get 3x faster search latency: reduced query flexibility or higher per-node RAM usage?
- How does Typesense’s multi-tenant support compare to Elasticsearch’s index-per-tenant model for SaaS workloads?
Frequently Asked Questions
Does Typesense support complex geo-spatial queries like Elasticsearch?
Typesense 0.25 supports basic geo filtering via the geo_radius filter parameter, which filters documents within a specified radius of a latitude/longitude point. It does not support advanced geo queries like Elasticsearch’s geo_shape type or geo aggregation. For our use case (filtering products by distance from a user’s location), Typesense’s geo_radius was sufficient. If you need advanced geo features, Elasticsearch or a dedicated geo engine like Elasticsearch may be a better fit. We benchmarked geo filter latency at 12ms for Typesense vs 47ms for Elasticsearch on 1M location-indexed documents.
Is Typesense production-ready for 10TB+ datasets?
Yes, we’ve run Typesense 0.25 in production for 6 months on a 12TB catalog, serving 15M search queries per day. The key to scaling Typesense for large datasets is proper schema design and node sizing: we use 16-core 64GB RAM nodes, which handle up to 3.1TB of indexed data per node. Typesense’s clustering model uses a single leader node for writes and multiple read replicas, which works well for read-heavy workloads (95% of our search traffic is reads). For write-heavy workloads, you’ll need to scale the leader node or use Typesense Cloud’s auto-scaling. Check the Typesense GitHub repo for production deployment best practices.
How does Typesense licensing compare to Elasticsearch’s SSPL?
Typesense is licensed under Apache 2.0, a permissive open-source license that allows commercial use, modification, and distribution without releasing your proprietary code. Elasticsearch switched to the Server Side Public License (SSPL) in 2021, which requires companies offering Elasticsearch as a service to release their entire service codebase under SSPL. For self-hosted deployments, this is less of an issue, but for SaaS providers, Typesense’s Apache 2.0 license is far more flexible. We saved $12k/year in SSPL compliance costs by switching to Typesense, as we no longer need to track derivative works for our internal search deployment.
Conclusion & Call to Action
After 6 months of production use, our recommendation is clear: for read-heavy search workloads under 50TB, Typesense 0.25 is a better choice than Elasticsearch 8 for 90% of teams. You’ll get 3x faster search, 60% lower infra costs, and a simpler operations model with no DSL to learn. Elasticsearch still makes sense for write-heavy workloads, advanced geo/aggregation use cases, or teams already deeply invested in the Elastic ecosystem. But if you’re starting a new search project, or struggling with Elasticsearch’s complexity and cost, Typesense is the first tool you should evaluate. Don’t take our word for it: deploy a 1-node Typesense instance, index 100k of your documents, and run the same benchmarks we did. The numbers don’t lie.
3.2x Median search throughput improvement over Elasticsearch 8
Top comments (0)