
Scaling Elasticsearch – The Good, The Bad and The Ugly

ElasticSearch burst onto the database scene in 2010 and has since become a staple of many IT teams' workflows. It has revolutionized data-intensive tasks like ecommerce product search and real-time log analysis. ElasticSearch is a distributed, full-text search engine built on Apache Lucene. Unlike traditional relational databases that store data in tables, ElasticSearch stores schema-flexible JSON documents, which makes it far more versatile. It can run queries far more complex than traditional databases can, and do so at petabyte scale. However, capable as it is, ElasticSearch requires a fair bit of administration to perform at its best. In this post, we look at the pros and cons of running an ElasticSearch cluster at scale.

The Pros

1. Architected for scale

ElasticSearch has a more nuanced, and robust architecture than relational databases. Here are some of the key parts of ElasticSearch, and what they mean:

  • Cluster: A collection of nodes; the term is sometimes used to refer to the ElasticSearch deployment as a whole
  • Index: A logical namespace that’s used to organize and identify the data in ElasticSearch
  • Type: A category used to organize data within an index
  • Document: A single JSON object stored in an index – the basic unit of data
  • Shard: A partition of data that is part of an index, and runs on a node
  • Node: A running ElasticSearch instance, typically one per physical or virtual server, that hosts shards and their data
  • Replica shard: An exact copy of a primary shard, placed on a different node than the primary for redundancy
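To make these pieces concrete, here is a minimal sketch of a create-index request body showing how shards and replicas are declared. The index name `logs` and the field names are illustrative assumptions, not anything prescribed by ElasticSearch.

```python
# Sketch: the JSON body you might send with PUT /logs to create an index.
# The index name and field names are illustrative assumptions.
index_body = {
    "settings": {
        "number_of_shards": 3,    # primary shards the index is split into
        "number_of_replicas": 1,  # one copy of each primary, on a different node
    },
    "mappings": {
        "properties": {
            "message": {"type": "text"},     # analyzed, full-text searchable
            "timestamp": {"type": "date"},
        }
    },
}

# 3 primaries, each with 1 replica, gives 6 shards spread across the cluster.
total_shards = index_body["settings"]["number_of_shards"] * (
    1 + index_body["settings"]["number_of_replicas"]
)
print(total_shards)  # 6
```

Note how the replica count multiplies the shard total: every extra replica adds a full copy of each primary shard for the cluster to place.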


These levels of abstraction make the system easy to manage. Whether it’s the physical nodes, the data objects, or the shards, they can all be controlled individually or in aggregate.

2. Distributed storage and processing

Though ElasticSearch can process data on a single node, it’s built to function across many nodes. It distributes primary and replica shards evenly across all available nodes, and achieves high throughput through parallel processing. When it receives a query, it knows which shards hold the data needed to process that query, and it retrieves data from all of those shards simultaneously. This way it can leverage the memory and processing power of many nodes at the same time.
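The way ElasticSearch knows which shard holds a document is a simple routing formula: it hashes the document’s routing value (the document ID by default) modulo the number of primary shards. The sketch below imitates that idea in plain Python; the real implementation uses a murmur3 hash, and `zlib.crc32` stands in here only to keep the example self-contained.

```python
# Sketch of ElasticSearch's document routing: shard = hash(routing) % primaries.
# Real ElasticSearch hashes with murmur3; crc32 is a stand-in assumption here.
import zlib

NUM_PRIMARY_SHARDS = 5  # fixed when the index is created


def route(doc_id: str) -> int:
    """Map a document ID to the primary shard that owns it."""
    return zlib.crc32(doc_id.encode("utf-8")) % NUM_PRIMARY_SHARDS


# Every node can compute this locally, so any node can forward a request
# straight to the shards that hold the relevant documents.
shard = route("user-42")
print(shard)  # a value in range(5); the same ID always maps to the same shard
```

This is also why the primary shard count can’t change after index creation: altering it would invalidate the routing of every document already stored.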

The best part is that this parallelization is built in. The user doesn’t have to lift a finger to configure how requests are routed among shards. The system’s strong defaults make it easy to get started: ElasticSearch abstracts away the low-level machinery and delivers a great user experience.

3. Automated failover handling

When working at scale, the most critical requirement is high availability. ElasticSearch achieves this through the way it manages its nodes and shards. Every cluster has a master node, which tracks cluster-level changes such as nodes joining or leaving. Each time a node is added or removed, the master rebalances the cluster, redistributing shards across the remaining nodes.

The master node doesn’t handle data processing, so it doesn’t become a single point of failure. In fact, no single node can bring the system down, not even the master. If the master fails, the remaining master-eligible nodes elect a replacement. This self-organizing approach to infrastructure is what ElasticSearch excels at, and it’s why the platform works so well at scale.
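One detail worth knowing about master election: to avoid a “split brain” (two halves of a partitioned cluster each electing their own master), a quorum of master-eligible nodes must agree. In older ElasticSearch versions the operator set `discovery.zen.minimum_master_nodes` to this quorum by hand; newer versions compute it automatically. A sketch of the arithmetic:

```python
# Sketch of the split-brain safeguard behind master election: a new master
# needs votes from a majority (quorum) of master-eligible nodes.
def quorum(master_eligible_nodes: int) -> int:
    """Smallest majority of master-eligible nodes."""
    return master_eligible_nodes // 2 + 1


print(quorum(3))  # 2 -- a 3-master cluster survives one master failure
print(quorum(5))  # 3 -- a 5-master cluster survives two
```

This is why production clusters typically run an odd number of master-eligible nodes: with an even count, the extra node raises the quorum without improving fault tolerance.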

4. Powerful ad hoc querying

ElasticSearch breaks the limits of relational databases with its full-text search capabilities. Relational databases store data in rigid rows and columns, which constrains how they store and process it. ElasticSearch, on the other hand, stores data as documents. These documents can represent nested, loosely structured objects that are hard to express in rows and columns.

For example, ElasticSearch sorts text-based search results by relevance to the query. It can execute this complex processing at large scale and still return results in near-real time, making ElasticSearch a great option for troubleshooting incidents from logs, or powering search suggestions. This speed is what separates ElasticSearch from traditional options.
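Relevance-sorted search is the default behavior of the query DSL: a `match` query scores each hit and returns the most relevant documents first, with no explicit sort clause needed. A minimal sketch of such a request body (the field name `message` and the query text are assumptions for illustration):

```python
# Sketch of a full-text request body (ElasticSearch query DSL), as would be
# sent with GET /<index>/_search. Field name and query text are assumptions.
search_body = {
    "query": {
        "match": {
            "message": "connection timeout"  # analyzed full-text match
        }
    },
    "size": 10,  # top ten hits, ordered by _score (relevance) by default
}

print(search_body["query"]["match"]["message"])
```

Because scoring is the default, adding an explicit `"sort"` clause (for example on a timestamp) actually disables relevance ordering, which is a common surprise for newcomers.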

 

The Cons

For all its strengths, ElasticSearch has a few weaknesses that show up when you start to scale it to hundreds of nodes and petabytes of data. Let’s discuss them.

1. Can be finicky with the underlying infrastructure

ElasticSearch is great at parallel processing, but once you scale up, capacity planning becomes essential to keep it performing at the same speed. It can handle a large number of nodes, but only on the right kind of hardware.

If you have too many small servers, the overhead of managing them all becomes a burden. If you have just a few powerful servers, failover becomes a risk. ElasticSearch works best on a group of servers with about 64GB of RAM each; with less, it can run into memory pressure. Similarly, queries run much faster against data on SSDs than on spinning disks. But SSDs are expensive, so when storing terabytes or petabytes of data, you typically need a mix of SSDs and spinning disks.
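The 64GB figure comes with a companion rule of thumb from the ElasticSearch documentation: give the JVM heap about half of the machine’s RAM, but keep it below roughly 32GB so the JVM can keep using compressed object pointers (the rest of the RAM goes to filesystem caches). The exact 31GB cap below is an assumption on the safe side of that threshold:

```python
# Sketch of the common ElasticSearch heap-sizing rule of thumb: half of RAM,
# capped below the ~32GB compressed-oops threshold. The 31GB cap is an
# assumption chosen to stay safely under that limit.
def heap_size_gb(ram_gb: int) -> int:
    return min(ram_gb // 2, 31)


print(heap_size_gb(64))  # 31 -- the oft-recommended 64GB box maxes out the heap
print(heap_size_gb(16))  # 8
```

This is also why scaling RAM far beyond 64GB per node gives diminishing returns: the heap can’t grow past the cap, so the extra memory only benefits the filesystem cache.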

These considerations require planning and fine-tuning as the system scales. Much of the effort goes into maintaining the infrastructure that powers ElasticSearch rather than managing the data inside it.

2. Needs to be fine-tuned to get the most out of it

Apart from the infrastructure, you also need to set the right number of replica shards so the cluster reports a ‘green’ health status rather than ‘yellow’ (yellow means some replica shards could not be assigned to a node). For querying to run smoothly, you need a well-organized hierarchy of indexes, types, and IDs – though this is not as difficult as schema design in relational databases.
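A common cause of yellow status is asking for more replicas than the cluster can place: a replica is never allocated to the same node as its primary, so each shard can have at most one copy per remaining node. The sketch below captures that constraint and the settings-update body you might send to fix it (the index name is an assumption):

```python
# Sketch: choosing a replica count the cluster can actually allocate.
# A replica never shares a node with its primary, so with N nodes an index
# can hold at most N-1 replicas of each shard; asking for more leaves
# replicas unassigned and the cluster health stuck at "yellow".
def max_replicas(node_count: int) -> int:
    return max(node_count - 1, 0)


# Body for PUT /my-index/_settings ("my-index" is an illustrative name).
update_body = {"index": {"number_of_replicas": max_replicas(3)}}
print(update_body)  # {'index': {'number_of_replicas': 2}}
```

On a single-node development cluster this is why the default of one replica always yields yellow health: there is simply no second node to hold the copy.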

You can fine-tune the cluster manually while data volumes are small, but today’s apps change so fast that manual tuning quickly turns into management overhead. What you need is a way of organizing the data, and the infrastructure beneath it, that keeps working at scale.

3. No geographic distribution

Though ElasticSearch can technically run this way, its developers recommend against distributing a single cluster across multiple geographic locations. The reason is that the cluster treats all nodes as if they were in the same data center, without accounting for network latency between them, so queries slow down when nodes aren’t colocated. Yet today’s apps are geographically distributed: with microservices architectures, services hosted around the world still need the same level of access to the data stored in ElasticSearch.

 

ElasticSearch is a powerful platform for full-text search and analysis, and it has become essential to DevOps teams today. However, it takes expertise to scale an ElasticSearch cluster successfully and ensure it functions seamlessly.