
# OpenSearch Tutorial: How to Improve Index and Shard Performance in OpenSearch

OpenSearch, a powerful distributed search and analytics engine, offers high scalability and near real-time search capabilities. However, as data volume and query complexity grow, performance bottlenecks often emerge—particularly around indices and shards.

This guide offers practical ways to improve your cluster's performance and health by optimizing how it indexes, queries, and manages data shards.


## Understanding Index and Shard Basics

- Index: A logical namespace that maps to one or more physical shards.
- Shard: The basic unit of storage and search in OpenSearch; each shard is a self-contained Lucene index.

Performance is tightly tied to how indices and shards are structured, distributed, and queried.
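
To see how a given index is laid out across shards and nodes, the _cat API gives a quick, human-readable view (the index name below is just a placeholder):

```
GET _cat/indices/my-index?v
GET _cat/shards/my-index?v
```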


## 1. Optimize Shard Count and Size

Too many shards create per-shard overhead (memory, file handles, cluster-state size); too few limit indexing and search concurrency. Aim for shard sizes of roughly 10–50 GB: for example, an index expected to hold about 300 GB fits comfortably in 10 primary shards of ~30 GB each.

Best Practices:

- Don't carry over the legacy default of 5 shards: OpenSearch creates a single primary shard by default, so set the count explicitly to match your data volume.
- Use the _shrink API to reduce shard count after indexing (see the shrink sketch after the template below).
- For time-series data, consider rollover indices or lifecycle (ISM) policies.
```
PUT /_template/my-template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}
```
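A sketch of the shrink workflow, assuming a 2-shard source index named logs-000001 and a node named node-1 (both placeholders): the source must first be made read-only and fully relocated onto one node, then shrunk into a new index with fewer primaries.

```
PUT /logs-000001/_settings
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "node-1"
  }
}

POST /logs-000001/_shrink/logs-000001-shrunk
{
  "settings": {
    "index.number_of_shards": 1
  }
}
```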

## 2. Use Index State Management (ISM)

In OpenSearch, lifecycle automation is provided by the Index State Management (ISM) plugin, the counterpart to Elasticsearch's ILM. ISM automates data aging and shard optimization.

Benefits:

- Move older data to cold storage
- Reduce shard and replica count as data ages
- Delete stale indices automatically
```
PUT _plugins/_ism/policies/logs_policy
{
  "policy": {
    "description": "Roll over hot indices after 7 days, delete them after 30 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [{ "rollover": { "min_index_age": "7d" } }],
        "transitions": [{ "state_name": "delete", "conditions": { "min_index_age": "30d" } }]
      },
      { "name": "delete", "actions": [{ "delete": {} }], "transitions": [] }
    ]
  }
}
```
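One way to attach the policy to an existing index is the ISM add API (the index name below is a placeholder; new indices can also pick up the policy automatically via an ism_template block in the policy). Note that the rollover action additionally expects a rollover alias to be configured on the index.

```
POST _plugins/_ism/add/logs-000001
{
  "policy_id": "logs_policy"
}
```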

## 3. Use Mappings and Data Types Efficiently

Avoid dynamic mapping bloat and ensure correct field types.

Tips:

- Disable or restrict dynamic mapping where possible (see the mapping sketch below)
- Use keyword for exact-match filtering and aggregations, and text for full-text search
- Avoid indexing high-cardinality fields (e.g., user_id, IP address) that you never filter or aggregate on
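
A minimal explicit mapping along those lines; the index and field names are only examples:

```
PUT /logs-app
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "message":   { "type": "text" },
      "status":    { "type": "keyword" },
      "timestamp": { "type": "date" },
      "client_ip": { "type": "ip", "index": false }
    }
  }
}
```

With dynamic set to strict, documents containing unmapped fields are rejected instead of silently bloating the mapping, and index: false keeps client_ip stored and retrievable but not searchable.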

## 4. Manage Index Refresh Intervals

Frequent refreshes increase I/O and reduce indexing throughput.

- The default is 1s; consider increasing it for bulk loads:

```
PUT /my-index/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```
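For a one-off bulk load you can go further and disable refresh entirely (a value of -1 turns it off), then restore it once the load finishes; run the first request before the load and the second after:

```
PUT /my-index/_settings
{ "index": { "refresh_interval": "-1" } }

PUT /my-index/_settings
{ "index": { "refresh_interval": "1s" } }
```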

## 5. Tune Replica and Allocation Settings

- Reduce replicas during heavy indexing to speed up ingest, then restore them afterwards (see the replica sketch below)
- Use shard allocation filtering to place shards deliberately, for example pinning an index to nodes that carry a matching custom attribute (the temp attribute below is a placeholder and has to be defined as node.attr.temp on the target nodes):

```
PUT /my-index/_settings
{
  "index.routing.allocation.require.temp": "hot"
}
```
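A minimal sketch of that replica adjustment, assuming the placeholder index name my-index; run the first request before the bulk load and the second once it finishes:

```
PUT /my-index/_settings
{ "index": { "number_of_replicas": 0 } }

PUT /my-index/_settings
{ "index": { "number_of_replicas": 1 } }
```

Rebuilding replicas after the load is usually cheaper than replicating every document while it is being indexed.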

## 6. Monitor Shard Skew and Hotspots

Use _cat/shards, _cluster/stats, and OpenSearch Dashboards to:

- Detect uneven shard distribution
- Identify hot nodes under heavy search or indexing load

Balance shards evenly to avoid hot nodes and search latency spikes.
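
A few read-only requests that make skew visible (the sort parameters are optional):

```
GET _cat/allocation?v&s=disk.percent:desc
GET _cat/shards?v&s=store:desc
GET _cluster/stats
```

_cat/allocation shows shard counts and disk usage per node, _cat/shards sorted by store size surfaces oversized shards, and _cluster/stats gives a cluster-wide summary.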


## 7. Optimize Bulk Indexing

For high-volume ingest:

- Use the _bulk API (see the example below)
- Send 5–15 MB per request (not too small, not too large)
- Don't force a refresh per request (refresh=true or refresh=wait_for); leave refreshes to the index's refresh_interval
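
A minimal _bulk request in its newline-delimited format, pairing an action line with a document line (the index name and documents are only examples):

```
POST /_bulk
{ "index": { "_index": "logs-app" } }
{ "message": "user login", "status": "ok" }
{ "index": { "_index": "logs-app" } }
{ "message": "payment failed", "status": "error" }
```

The request body must end with a trailing newline, and the Content-Type should be application/x-ndjson.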

## 8. Merge and Force Merge Strategically

Segment merges reduce disk usage and improve search speed.

- Merging happens automatically, but you can trigger a force merge manually:

```
POST /my-index/_forcemerge?max_num_segments=1
```

Use with caution: a force merge is I/O intensive while it runs, so only force merge indices that are no longer being written to.
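
To verify the effect, compare segment counts before and after the merge (the index name is a placeholder):

```
GET _cat/segments/my-index?v
```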


## Summary Table

| Optimization | Benefit |
| --- | --- |
| Reduce shard count | Lower overhead |
| Apply ISM policies | Automate data aging |
| Optimize mappings | Save memory, avoid bloat |
| Adjust refresh intervals | Improve bulk indexing performance |
| Monitor cluster health | Prevent hotspots and node failures |
| Use bulk indexing | Efficient high-volume data ingest |
| Use force merge wisely | Optimize segments post-indexing |


## Final Thoughts

Efficient shard and index design is the backbone of a high-performing OpenSearch cluster. Over-sharding, unnecessary field mappings, and unchecked refreshes can silently degrade your system. With smart planning and continuous monitoring, you can maintain blazing-fast search speeds—even at scale.

