<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Valentin Kulichenko</title>
    <description>The latest articles on DEV Community by Valentin Kulichenko (@vkulichenko).</description>
    <link>https://dev.to/vkulichenko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F363394%2Ffb1465e4-7ef6-4d80-b972-bc04875596c3.jpg</url>
      <title>DEV Community: Valentin Kulichenko</title>
      <link>https://dev.to/vkulichenko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vkulichenko"/>
    <language>en</language>
    <item>
      <title>Ignite 3 Alpha: A Sneak Peek into the Future of Apache Ignite</title>
      <dc:creator>Valentin Kulichenko</dc:creator>
      <pubDate>Tue, 26 Jan 2021 19:23:55 +0000</pubDate>
      <link>https://dev.to/vkulichenko/ignite-3-alpha-a-sneak-peek-into-the-future-of-apache-ignite-39ol</link>
      <guid>https://dev.to/vkulichenko/ignite-3-alpha-a-sneak-peek-into-the-future-of-apache-ignite-39ol</guid>
      <description>&lt;h1&gt;
  
  
  What Is Ignite 3?
&lt;/h1&gt;

&lt;p&gt;Apache Ignite has existed for more than six years. During those years, Ignite evolved incredibly and, with thousands of deployments worldwide, became a top-5 project of the Apache Software Foundation. The SQL engine became more comprehensive, page-memory architecture and the persistence layer were introduced, and many features were added. These advancements make Ignite an extremely powerful tool that is suitable for a wide variety of use cases, from basic caching to complicated, multi-component data integration hubs that power services and applications that are used by millions of people every second of every day—and where a minute-long outage can cost millions of dollars.&lt;/p&gt;

&lt;p&gt;With Ignite 3, we leap forward with a revamped codebase that supports a modern, modular architecture. The intuitive configurations, APIs, and Ignite behaviors provide a speedy, flexible developer experience so that even developers new to Ignite can quickly take advantage of the power of the distributed database.&lt;/p&gt;

&lt;h1&gt;
  
  
  Major Enhancements Planned for Ignite 3
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Dynamic Configuration
&lt;/h2&gt;

&lt;p&gt;Ignite 3 has a new configuration engine that provides a clear separation between static and dynamic parameters. The engine introduces an API that can update most Ignite configuration parameters in runtime, without requiring you to follow special parameter-update procedures or do cluster-node restarts.&lt;/p&gt;

&lt;p&gt;For example, imagine that you create a table in Ignite with the WAL synchronization mode set to fsync (every update needs to be flushed to disk). Then, in production, you realize that, for the sake of performance, you want to relax the synchronization mode for the table and allow the operating system to decide when to flush the changes from memory to disk. With Ignite 3, you can change that configuration on the fly via a simple parameter update.&lt;/p&gt;

&lt;p&gt;The API is also exposed via specialized REST endpoints so that it can integrate with external tools.&lt;/p&gt;

&lt;p&gt;In addition, the legacy Spring XML format has been replaced with a much more compact and easy-to-use HOCON format, which is fully compatible with JSON and Java Properties formats. You can read about HOCON on &lt;a href="https://github.com/lightbend/config/blob/master/HOCON.md"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
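
&lt;p&gt;To give a flavor of the format, here is a small HOCON sketch of a node configuration. The key names below are illustrative only and may differ from the actual Ignite 3 configuration schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical Ignite 3 node configuration in HOCON.
# Comments and unquoted keys are allowed; the same data
# could equally be written as plain JSON.
node {
  name = "node-1"
  network {
    port = 3344
  }
  table.wal {
    # A dynamic parameter: can be changed at runtime,
    # e.g. relaxed from "fsync" to a lazier mode.
    mode = "fsync"
  }
}
&lt;/code&gt;&lt;/pre&gt;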

&lt;p&gt;&lt;em&gt;For more technical details, refer to the &lt;a href="https://cwiki.apache.org/confluence/display/IGNITE/IEP-55+Unified+Configuration"&gt;Ignite Enhancement Proposal #55&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema-Driven Architecture
&lt;/h2&gt;

&lt;p&gt;Ignite 3 enforces a strong one-to-one match between tables and their schemas. The role of a schema is to unequivocally describe the data that is stored in its table. Therefore, any entry that is inserted into a table must be compliant with the table's schema. Also, any update that is made to the schema is reflected in the data.&lt;/p&gt;

&lt;p&gt;Thus, schema-level consistency is maintained at all times, at both the object level and the SQL level. The SQL level is controlled by an application developer, while the object level is defined and updated automatically during the serialization of records. In both cases, the strong match enforcement results in clear and consistent behavior. In addition, although caches are essentially schema-less and can simultaneously store multiple data types, matching ensures consistency for data of the same type. This elegant, schema-driven architecture empowers developers to work easily with data across the breadth of an Ignite database.&lt;/p&gt;

&lt;p&gt;To make the behavior even more predictable, a unified API for schema updates is being introduced. DDL statements and external tools function on top of the unified API, so users can always get intuitive results.&lt;/p&gt;
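
&lt;p&gt;For example, a schema change initiated via plain DDL goes through the same unified update path as one made through the API or an external tool. The statements below are generic SQL for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Create a table with an initial schema.
CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR);

-- Evolve the schema; existing and future records
-- are kept consistent with the new definition.
ALTER TABLE person ADD COLUMN age INT;
&lt;/code&gt;&lt;/pre&gt;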

&lt;p&gt;&lt;em&gt;For technical details, refer to &lt;a href="https://cwiki.apache.org/confluence/display/IGNITE/IEP-54%3A+Schema-first+Approach"&gt;Ignite Enhancement Proposal 54&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Unified CLI Tool
&lt;/h2&gt;

&lt;p&gt;In regard to developer experience, the unified CLI tool is probably the most important addition planned for Ignite 3. The tool serves as a single entry point for all operations related to an Ignite cluster. Among many other capabilities, the tool enables you to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easily install and upgrade Ignite&lt;/li&gt;
&lt;li&gt;Manage Ignite optional modules and external dependencies without the need to move JAR files and folders manually&lt;/li&gt;
&lt;li&gt;Start and stop nodes&lt;/li&gt;
&lt;li&gt;Generate configuration stubs and Docker images for painless deployments on the cloud&lt;/li&gt;
&lt;li&gt;Connect to clusters in order to update configurations and schemas, run SQL queries, and so on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The new CLI tool transforms the developer experience. It dramatically improves the usability of Ignite and makes Ignite much friendlier to newcomers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For technical details, refer to &lt;a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158873958"&gt;Ignite Enhancement Proposal 52&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Get a Sneak Peek: Download Ignite 3 Alpha
&lt;/h1&gt;

&lt;p&gt;Ignite 3 development began only three months ago, but some tangible results have already been produced. To give users a sneak peek into how Ignite will look in the future, the community decided to publish an alpha release. Now, anyone can download and play with the current alpha version of the CLI tool. To download, go to the &lt;a href="https://ignite.apache.org/download.cgi"&gt;Ignite website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The current alpha build does not represent a functional product. It does not incorporate cluster discovery, data storage, compute capabilities, or any of the other core Ignite features that we are used to. However, it demonstrates the major mechanics of how you will interact with Ignite in the future.&lt;/p&gt;

&lt;p&gt;Here are some highlights of the first Ignite 3 Alpha release:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single-file download (Instead of downloading a huge ZIP file that has hundreds of files and a complicated structure, you get a single script, the CLI tool, which you can use for all further operations.)&lt;/li&gt;
&lt;li&gt;Ability to install core Ignite artifacts and external dependencies via Maven&lt;/li&gt;
&lt;li&gt;Ability to manage locally running nodes (start, stop, inspect)&lt;/li&gt;
&lt;li&gt;Ability to connect to a cluster, acquire the current configuration, and update some of the dynamic parameters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best way to get a good feel for the alpha build is to review the &lt;a href="https://ignite.apache.org/docs/3.0.0-alpha/quick-start/getting-started-guide"&gt;Getting Started Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For a quick demonstration of the alpha build, watch the &lt;a href="https://www.youtube.com/watch?v=9m_MoK0PdV8&amp;amp;list=PLMc7NR20hA-I7b9ppKAtkRWuHlrwbVhae&amp;amp;index=1"&gt;Apache Ignite 3.0 Alpha video series&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Your feedback is greatly appreciated! If you want to share your ideas, wishes, or requests or if you notice any issues, please do not hesitate to write a message to the Apache Ignite developers mailing list. &lt;a href="http://apache-ignite-users.70518.x6.nabble.com/Looking-for-feedback-on-the-Ignite-3-0-0-Alpha-td35089.html"&gt;This dedicated thread&lt;/a&gt; is the best place to share your thoughts.&lt;/p&gt;

&lt;h1&gt;
  
  
  Recommended Resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ignite.apache.org/docs/3.0.0-alpha/quick-start/getting-started-guide"&gt;Apache Ignite 3.0 Getting Started Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+3.0"&gt;Apache Ignite 3.0 Wiki Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cwiki.apache.org/confluence/display/IGNITE/Proposals+for+Ignite+3.0"&gt;Apache Ignite 3.0 Enhancement Proposals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://apache-ignite-users.70518.x6.nabble.com/Looking-for-feedback-on-the-Ignite-3-0-0-Alpha-td35089.html"&gt;Feedback Thread&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.meetup.com/Apache-Ignite-Virtual-Meetup/events/275722317/"&gt;Ignite 3.0 Alpha Build Community Gathering (Jan 26 2021)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>distributedsystems</category>
      <category>database</category>
      <category>apacheignite</category>
      <category>programming</category>
    </item>
    <item>
      <title>Apache Ignite — More Than Just a Data Grid</title>
      <dc:creator>Valentin Kulichenko</dc:creator>
      <pubDate>Mon, 27 Apr 2020 19:04:18 +0000</pubDate>
      <link>https://dev.to/vkulichenko/apache-ignite-more-than-just-a-data-grid-h06</link>
      <guid>https://dev.to/vkulichenko/apache-ignite-more-than-just-a-data-grid-h06</guid>
      <description>&lt;p&gt;Many of us are familiar with data grids, which are known to be a great way to improve the performance of various applications. Oracle Coherence, Gigaspaces, Hazelcast, Apache Geode — these are all excellent examples.&lt;/p&gt;

&lt;p&gt;The central concept behind almost any data grid is a &lt;em&gt;distributed in-memory hash table&lt;/em&gt;. The “in-memory” part provides ultra-low latency for data access, while the “distributed” part offers virtually unlimited scalability, and therefore virtually unlimited throughput.&lt;/p&gt;

&lt;p&gt;Typically, data grids also come with processing capabilities, as well as mechanisms to collocate computation with the data. As a result, you get a powerful combination of distributed data storage and distributed processing engine, which allows you to fully leverage the collective CPU and RAM power of multiple computers to accomplish your tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Apache Ignite a Data Grid?
&lt;/h2&gt;

&lt;p&gt;The short answer is: “YES!”&lt;br&gt;
A bit longer answer is: “Yes, but it is also so much more.”&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Apache Ignite is a horizontally scalable, fault-tolerant distributed in-memory computing platform for building real-time applications that can process terabytes of data with in-memory speed.” — &lt;a href="https://ignite.apache.org/"&gt;ignite.apache.org&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Apache Ignite was initially created and developed as a data grid. Today, however, it incorporates some interesting and unique features that significantly widen the applicability of the technology. It is far beyond our common understanding of a data grid.&lt;/p&gt;

&lt;p&gt;Here are some of the use cases that you can use Ignite for.&lt;/p&gt;

&lt;h2&gt;
  
  
  High-Performance Greenfield Applications
&lt;/h2&gt;

&lt;p&gt;Data grids usually run on top of existing legacy data sources (e.g., relational databases), and serve as acceleration mechanisms for existing applications.&lt;/p&gt;

&lt;p&gt;But in today’s age of digital transformation, there are tons of newly built applications that require scalability and performance from day one. Sure, you can go the “old way”, using one of the proven RDBMSs as the system of record with one of the data grids on top for better performance. This approach has multiple implications, though:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Such an architecture is still limited. Most importantly, data grids are efficient for read-intensive workloads, but whenever you need to write something, you have to go to the underlying database. Therefore, the database still bounds the scalability and performance of updates.&lt;/li&gt;
&lt;li&gt;In-memory storage is volatile, which means that every time you restart your data grid cluster, you make the data unavailable. You then need to reload the data from the underlying database, which might take quite a bit of time and can increase downtime to unacceptable levels.&lt;/li&gt;
&lt;li&gt;This approach is generally overcomplicated and even a little “hacky”. If you have to deploy two independent data storages to serve a single purpose, then something is wrong. There must be a better way!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And there is a better way. Apache Ignite implements a concept that we call “&lt;a href="https://apacheignite.readme.io/docs/durable-memory"&gt;Durable Memory&lt;/a&gt;”. It relies on page-memory architecture where any page can be stored not only in memory but on disk as well — you just need to enable the &lt;a href="https://apacheignite.readme.io/docs/distributed-persistent-store"&gt;Native Persistence&lt;/a&gt; option. Ignite’s persistence is scalable and supports transactions, SQL, and all other Ignite APIs. Its tight integration with the in-memory layer makes data access entirely transparent for applications (i.e., an application can read or update the data at any point in time even if it’s not yet loaded into RAM).&lt;/p&gt;
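
&lt;p&gt;Enabling Native Persistence is a one-line change in the data region configuration. Here is a minimal Spring XML sketch based on the Ignite 2.x configuration classes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;bean class="org.apache.ignite.configuration.IgniteConfiguration"&amp;gt;
  &amp;lt;property name="dataStorageConfiguration"&amp;gt;
    &amp;lt;bean class="org.apache.ignite.configuration.DataStorageConfiguration"&amp;gt;
      &amp;lt;property name="defaultDataRegionConfiguration"&amp;gt;
        &amp;lt;bean class="org.apache.ignite.configuration.DataRegionConfiguration"&amp;gt;
          &amp;lt;!-- Turns the in-memory data grid into a durable database. --&amp;gt;
          &amp;lt;property name="persistenceEnabled" value="true"/&amp;gt;
        &amp;lt;/bean&amp;gt;
      &amp;lt;/property&amp;gt;
    &amp;lt;/bean&amp;gt;
  &amp;lt;/property&amp;gt;
&amp;lt;/bean&amp;gt;
&lt;/code&gt;&lt;/pre&gt;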

&lt;p&gt;The addition of the out-of-the-box persistence layer transforms Ignite into a memory-centric database. Essentially, this creates a durable system of record with scalability and performance characteristics of an in-memory data grid — a perfect tool for performance-sensitive greenfield applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTAP and Real-Time Analytics
&lt;/h2&gt;

&lt;p&gt;We are all used to a proposition that transactional processing (OLTP) and analytical processing (OLAP) are supposed to be done by separate systems. I’m sure you’ve seen a diagram similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AWk2P9FS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hc0lqspajg87wdnuaqhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AWk2P9FS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hc0lqspajg87wdnuaqhe.png" alt="ETL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea is to execute operations in an operational database and then execute an ETL process that propagates changes to an analytical database. You can use the latter for analytics, machine learning, deep learning, etc.&lt;/p&gt;

&lt;p&gt;Many companies have used this architecture for a long time. Nowadays, many of those companies are starting to question the approach and to look for alternatives. The issue is that ETL happens periodically — every several hours in the best case, but usually overnight, or even over a weekend. Such a process creates a time lag between a change in the operational database and a corresponding reaction of the analytical system. This lag is not compatible with the growing demand for &lt;em&gt;real-time&lt;/em&gt; analytical processing.&lt;/p&gt;

&lt;p&gt;Imagine that your credit card is used by someone else without your consent. Your bank has all those complicated fraud detection algorithms in place that can notify you that something is wrong. Real-time processing ensures that you get the notification within minutes or even seconds, rather than after several hours.&lt;/p&gt;

&lt;p&gt;The only way to achieve such an immediate reaction is to unify the two databases into a single store, capable of running hybrid transactional/analytical processing (HTAP).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;HTAP = OLTP + OLAP&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iNbr50Nt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tp5bwh8khk974m39gijq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iNbr50Nt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tp5bwh8khk974m39gijq.png" alt="No ETL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a couple of things that make Apache Ignite suitable for HTAP.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ignite follows a “single storage, multiple workloads” approach, which means that you can store all your data once in a single cluster, and then work with this data via all available APIs, whether it’s &lt;a href="https://apacheignite.readme.io/docs/jcache"&gt;key-value&lt;/a&gt;, &lt;a href="https://apacheignite.readme.io/docs/transactions"&gt;transactions&lt;/a&gt;, &lt;a href="https://apacheignite.readme.io/docs/compute-grid"&gt;compute&lt;/a&gt;, &lt;a href="https://apacheignite.readme.io/docs/distributed-sql"&gt;SQL&lt;/a&gt; or &lt;a href="https://apacheignite.readme.io/docs/machine-learning"&gt;machine learning&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Ignite’s &lt;a href="https://apacheignite.readme.io/docs/distributed-persistent-store"&gt;Native Persistence&lt;/a&gt; allows storing a superset of data on disk without affecting the availability of the data. Let’s say your total dataset is several petabytes, but you only have 100GB of data that is updated and accessed frequently. It’s not reasonable to put everything in memory, and with Ignite, you don’t have to. The majority of the data can be stored on disk only, and it will still be fully available to the applications when needed.&lt;/li&gt;
&lt;/ol&gt;
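
&lt;p&gt;The “small hot set, large disk set” scenario from the second point maps directly to the data region settings: cap the in-memory size while keeping persistence on, and anything that does not fit in RAM is transparently served from disk. A Spring XML sketch (the region name and sizes are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;bean class="org.apache.ignite.configuration.DataRegionConfiguration"&amp;gt;
  &amp;lt;property name="name" value="hot-data"/&amp;gt;
  &amp;lt;!-- Keep roughly 100GB of frequently accessed data in RAM... --&amp;gt;
  &amp;lt;property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/&amp;gt;
  &amp;lt;!-- ...while the full dataset lives on disk. --&amp;gt;
  &amp;lt;property name="persistenceEnabled" value="true"/&amp;gt;
&amp;lt;/bean&amp;gt;
&lt;/code&gt;&lt;/pre&gt;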

&lt;p&gt;Demand for real-time analytics calls for a significant shift in how we architect backend systems. There is still a lot to figure out, but Apache Ignite already provides an incredible combination of features to support HTAP workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Digital Integration Hubs
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges that IT architects currently face is that the data often spreads across multiple systems. Within a single company, you will easily find relational databases, NoSQL databases, mainframes, data lakes, SaaS solutions...&lt;/p&gt;

&lt;p&gt;Different APIs, different data models, different everything.&lt;/p&gt;

&lt;p&gt;More and more often, such companies need to aggregate data from multiple applications to get deeper insights into what is happening in the business. IT infrastructures are often not ready for this.&lt;/p&gt;

&lt;p&gt;The solution is to create a &lt;em&gt;Digital Integration Hub&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Digital Integration Hub is an advanced application architecture that aggregates multiple back-end system of record data sources into a low-latency and scale-out, high-performance data store… The high-performance data store is synchronized with the back-end sources via some combination of event-based, request-based, and batch integration patterns.” — Gartner&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Essentially, you put the data from multiple sources into a single scalable in-memory platform and provide a unified API to various applications that you have.&lt;/p&gt;

&lt;p&gt;Apache Ignite appears to be an excellent foundation for such a solution. I’ve already mentioned the Native Persistence that allows Ignite to act as a system of record, as well as its ability to work with HTAP workloads — both become important when you build a data integration hub.&lt;/p&gt;

&lt;p&gt;Besides, you need a good set of integration components for databases, streaming platforms, and SaaS products. Here are some of the integrations that Ignite provides out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://apacheignite.readme.io/docs/3rd-party-store"&gt;CacheStore&lt;/a&gt; interface for pluggable synchronization with relational, NoSQL databases, or big data stores like Hadoop.&lt;/li&gt;
&lt;li&gt;Multiple &lt;a href="https://apacheignite-mix.readme.io/docs/overview"&gt;streaming connectors&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://apacheignite-fs.readme.io/docs/ignite-for-spark"&gt;Integration with Apache Spark&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
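
&lt;p&gt;As an example of the CacheStore integration, a cache can be configured to read from and write through to a relational database. A Spring XML sketch follows; the JDBC store factory shown is part of Ignite, but the exact setup (data source bean, mappings) depends on your schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;bean class="org.apache.ignite.configuration.CacheConfiguration"&amp;gt;
  &amp;lt;property name="name" value="personCache"/&amp;gt;
  &amp;lt;!-- Synchronize the cache with the underlying database. --&amp;gt;
  &amp;lt;property name="readThrough" value="true"/&amp;gt;
  &amp;lt;property name="writeThrough" value="true"/&amp;gt;
  &amp;lt;property name="cacheStoreFactory"&amp;gt;
    &amp;lt;bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory"&amp;gt;
      &amp;lt;property name="dataSourceBean" value="myDataSource"/&amp;gt;
    &amp;lt;/bean&amp;gt;
  &amp;lt;/property&amp;gt;
&amp;lt;/bean&amp;gt;
&lt;/code&gt;&lt;/pre&gt;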




&lt;p&gt;Back in the day, &lt;a href="https://en.wikipedia.org/wiki/Distributed_cache"&gt;distributed caches&lt;/a&gt; used to be the prominent way to improve performance. While new use cases and requirements kept coming up, many of those systems evolved into what we now refer to as “in-memory data grids”.&lt;/p&gt;

&lt;p&gt;Similarly, in-memory computing platforms like Apache Ignite are the next stepping stone in the evolution of distributed systems. Data grids are still vital and are going to remain so for many years to come, but we need more comprehensive frameworks to deliver on the latest modern requirements.&lt;/p&gt;

&lt;p&gt;The demand for real-time processing will only keep growing, and I’m excited to watch how this trend unfolds. I also welcome everyone to become a part of it by joining the Apache Ignite community: &lt;a href="https://ignite.apache.org/community/contribute.html"&gt;https://ignite.apache.org/community/contribute.html&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Apache Ignite on Kubernetes: Things to Know About</title>
      <dc:creator>Valentin Kulichenko</dc:creator>
      <pubDate>Fri, 17 Apr 2020 18:49:18 +0000</pubDate>
      <link>https://dev.to/vkulichenko/apache-ignite-on-kubernetes-things-to-know-about-fdm</link>
      <guid>https://dev.to/vkulichenko/apache-ignite-on-kubernetes-things-to-know-about-fdm</guid>
      <description>&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is becoming more and more popular these days. No wonder! It’s an impressive technology that helps a lot in deploying and orchestrating applications in various cloud environments, both private and public.&lt;/p&gt;

&lt;p&gt;I work a lot with &lt;a href="https://ignite.apache.org/" rel="noopener noreferrer"&gt;Apache Ignite&lt;/a&gt; and get questions about how to use it together with Kubernetes almost daily. Ignite is a distributed database, so utilizing Kubernetes to deploy it seems appealing. Besides, many companies are currently in the process of transitioning their whole infrastructures to Kubernetes, which sometimes includes multiple Ignite clusters, as well as dozens of Ignite-related applications.&lt;/p&gt;

&lt;p&gt;I consider Apache Ignite to be very Kubernetes-friendly. First of all, it’s generally agnostic to where it runs. It is essentially a set of Java applications, so if you can run JVMs in your environment, then you will most likely run Ignite without any issues. Second of all, Ignite comes along with Docker images and detailed documentation on how to use them in Kubernetes: &lt;a href="https://apacheignite.readme.io/docs/kubernetes-deployment" rel="noopener noreferrer"&gt;https://apacheignite.readme.io/docs/kubernetes-deployment&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the same time, there are several pitfalls specific to the Ignite+Kubernetes combination that I would like to share with you. Here are the questions that I will try to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Discovery — How to achieve it in Kubernetes?&lt;/li&gt;
&lt;li&gt;Stateless and Stateful Clusters — What is the difference, and how does it affect the Kubernetes configuration?&lt;/li&gt;
&lt;li&gt;Thick Clients vs. Thin Clients — Which ones should you use when running in Kubernetes?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cluster Discovery
&lt;/h2&gt;

&lt;p&gt;When you deploy Apache Ignite, you need to make sure that nodes connect to each other and form a cluster. This is done via the discovery protocol: &lt;a href="https://apacheignite.readme.io/docs/tcpip-discovery" rel="noopener noreferrer"&gt;https://apacheignite.readme.io/docs/tcpip-discovery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most common way of configuring discovery for Ignite is to simply list the socket addresses of the servers where nodes are expected to run. Like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.ignite.configuration.IgniteConfiguration"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"discoverySpi"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"ipFinder"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;”org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"addresses"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;list&amp;gt;&lt;/span&gt;
              &lt;span class="nt"&gt;&amp;lt;value&amp;gt;&lt;/span&gt;10.0.0.1:47500..47509&lt;span class="nt"&gt;&amp;lt;/value&amp;gt;&lt;/span&gt;
              &lt;span class="nt"&gt;&amp;lt;value&amp;gt;&lt;/span&gt;10.0.0.2:47500..47509&lt;span class="nt"&gt;&amp;lt;/value&amp;gt;&lt;/span&gt;
              &lt;span class="nt"&gt;&amp;lt;value&amp;gt;&lt;/span&gt;10.0.0.3:47500..47509&lt;span class="nt"&gt;&amp;lt;/value&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;/list&amp;gt;&lt;/span&gt;
          &lt;span class="nt"&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration, however, does not work in Kubernetes. In such a dynamic environment, where pods can move between underlying servers, and where addresses can change during restarts, there is no way to provide static configuration.&lt;/p&gt;

&lt;p&gt;In order to tackle this, Ignite provides the &lt;a href="https://apacheignite.readme.io/docs/kubernetes-ip-finder" rel="noopener noreferrer"&gt;Kubernetes IP Finder&lt;/a&gt;, which uses the Kubernetes service API to look up the list of currently running pods. With this feature, all you need to do is provide the name of the service to use for discovery. All nodes that run in the same namespace and point to the same service will discover each other. The configuration becomes very straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.ignite.configuration.IgniteConfiguration"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"discoverySpi"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"ipFinder"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;bean&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;”org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"namespace"&lt;/span&gt; &lt;span class="na"&gt;value=&lt;/span&gt;&lt;span class="s"&gt;"ignite"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="nt"&gt;&amp;lt;property&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;”serviceName"&lt;/span&gt; &lt;span class="na"&gt;value=&lt;/span&gt;&lt;span class="s"&gt;"ignite-service"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/bean&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Stateless and Stateful Clusters
&lt;/h2&gt;

&lt;p&gt;There are two different ways to use Ignite: as a caching layer on top of existing data sources (relational and NoSQL databases, Hadoop, etc.), or as an actual database. In the latter case, Ignite uses the &lt;a href="https://apacheignite.readme.io/docs/distributed-persistent-store" rel="noopener noreferrer"&gt;Native Persistence&lt;/a&gt; storage, which is provided out of the box.&lt;/p&gt;

&lt;p&gt;If Ignite acts as a cache, all data is persisted outside of it, therefore Ignite nodes are volatile. If a node restarts, it starts as a brand new one, without any data. There is no permanent state, in which case we say that the cluster is &lt;strong&gt;stateless&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, if Ignite acts as a database, it is fully responsible for data consistency and durability. Every node stores a portion of the data both in memory and on disk; in case of a restart, you lose only in-memory data. On-disk data, however, must be preserved. There is a permanent state that has to be maintained, so the cluster is &lt;strong&gt;stateful&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the Kubernetes world, this translates to different controllers that you should use for different types of clusters. Here is a simple rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For stateless clusters, you should use the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; controller.&lt;/li&gt;
&lt;li&gt;For stateful clusters, you should use the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; controller.&lt;/li&gt;
&lt;/ul&gt;
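&lt;p&gt;The rule above is easy to encode. Here is a minimal sketch (the helper and its name are mine, purely for illustration): the deciding factor is simply whether Native Persistence is enabled, i.e., whether the cluster keeps permanent state.&lt;/p&gt;

```java
// Hypothetical helper: map the cluster type to the Kubernetes controller kind.
public class ControllerChooser {
    // persistenceEnabled == true means Ignite Native Persistence is on,
    // i.e., the cluster is stateful.
    static String controllerKind(boolean persistenceEnabled) {
        return persistenceEnabled ? "StatefulSet" : "Deployment";
    }

    public static void main(String[] args) {
        System.out.println(controllerKind(false)); // Deployment
        System.out.println(controllerKind(true));  // StatefulSet
    }
}
```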

&lt;p&gt;Following this rule helps you avoid overcomplicating the configuration of stateless clusters while making sure that stateful clusters behave as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thick Clients vs. Thin Clients
&lt;/h2&gt;

&lt;p&gt;Ignite provides various client connectors that serve different purposes and differ significantly in how they are designed. You can read about them here: &lt;a href="https://dev.to/vkulichenko/apache-ignite-client-connectors-variety-4lop"&gt;https://dev.to/vkulichenko/apache-ignite-client-connectors-variety-4lop&lt;/a&gt;. The specifics of the Kubernetes environment add further considerations to the topic.&lt;/p&gt;

&lt;p&gt;Kubernetes does not expose individual pods directly to the outside world. Instead, there is usually a load balancer that is responsible for accepting all external requests and redirecting them to the pods. Both Ignite server nodes and client applications can run on either side of the load balancer, which affects the choice between thick and thin clients.&lt;/p&gt;

&lt;p&gt;Typically, you will see one of these three scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: Everything in Kubernetes
&lt;/h3&gt;

&lt;p&gt;By “everything”, I mean both the Ignite cluster and the application talking to this cluster. They run in the same namespace of the same Kubernetes environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F482lsa9rm14hhewwzjzt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F482lsa9rm14hhewwzjzt.jpg" alt="Scenario 1: Everything in Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, there are no limitations whatsoever — you are free to use any type of client connector. Unless there is a specific reason to do otherwise, I would recommend sticking with thick clients here, as they are the most efficient and robust option.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 2: Cluster Outside of Kubernetes
&lt;/h3&gt;

&lt;p&gt;This scenario is very typical for companies that are in the process of transitioning existing clusters and applications from a more traditional bare-metal deployment to a Kubernetes deployment.&lt;/p&gt;

&lt;p&gt;Such companies usually prefer to transition applications first, as they are much more dynamic — updated frequently by a large number of individual developers. Proper orchestration is much more critical for the applications than for Ignite clusters, which can run for months or even years without disruption, and are usually maintained by a small group of people.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu0e3qr6ut6068zmany93.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu0e3qr6ut6068zmany93.jpg" alt="Scenario 2: Cluster Outside of Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we have a significant limitation: thick clients are not compatible with this scenario. The reason is that a server node may try to open a TCP connection to a particular thick client. Such an attempt will most likely fail because of the load balancer in front of the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;If you find yourself in this situation, you currently have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use thin clients or JDBC/ODBC drivers instead of thick clients.&lt;/li&gt;
&lt;li&gt;Move the Ignite cluster into the Kubernetes environment (effectively transitioning to scenario #1, which doesn’t have these limitations).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Good news though: the Apache Ignite community recognizes the impact of this limitation and is currently working on improvements that would allow the use of thick clients in this kind of deployment. Stay tuned!&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 3: Applications Outside of Kubernetes
&lt;/h3&gt;

&lt;p&gt;This scenario is the opposite of the previous one — the Ignite cluster runs within Kubernetes, but the applications are outside of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy6nm8331vtccftgmmdx6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy6nm8331vtccftgmmdx6.jpg" alt="Scenario 3: Applications Outside of Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although I’ve seen it a couple of times, this scenario is rare compared to the other two. If this is your case, I would strongly recommend considering moving the applications to Kubernetes, since you already use it.&lt;/p&gt;

&lt;p&gt;From the Ignite perspective, this type of deployment allows only thin clients and JDBC/ODBC drivers. You can’t use thick clients in this scenario.&lt;/p&gt;




&lt;p&gt;What I’ve described above are the main complications that you might stumble upon when working with Apache Ignite in Kubernetes environments.&lt;/p&gt;

&lt;p&gt;As a next step, I would suggest taking a look at this GitHub repo where I uploaded full configuration files that can be used to deploy a stateful Ignite cluster in Amazon EKS: &lt;a href="https://github.com/vkulichenko/ignite-eks-config" rel="noopener noreferrer"&gt;https://github.com/vkulichenko/ignite-eks-config&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And of course, always feel free to refer to the Ignite documentation for more details: &lt;a href="https://apacheignite.readme.io/docs/kubernetes-deployment" rel="noopener noreferrer"&gt;https://apacheignite.readme.io/docs/kubernetes-deployment&lt;/a&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>apacheignite</category>
      <category>programming</category>
      <category>database</category>
    </item>
    <item>
      <title>Apache Ignite: Client Connectors Variety</title>
      <dc:creator>Valentin Kulichenko</dc:creator>
      <pubDate>Fri, 10 Apr 2020 18:01:46 +0000</pubDate>
      <link>https://dev.to/vkulichenko/apache-ignite-client-connectors-variety-4lop</link>
      <guid>https://dev.to/vkulichenko/apache-ignite-client-connectors-variety-4lop</guid>
      <description>&lt;p&gt;If you have worked with &lt;a href="https://ignite.apache.org/"&gt;Apache Ignite&lt;/a&gt;, you have probably noticed that there are quite a few different client connectors that you can use. Client node, thick client, thin client, JDBC driver, ODBC driver, REST API, … All this might get a little confusing.&lt;/p&gt;

&lt;p&gt;“Which type of client do I need to use in MY application?” — you would likely ask yourself.&lt;/p&gt;

&lt;p&gt;So let me break this down. Below are all the client connectors available for Apache Ignite out of the box, as well as some best practices around when you should (or should not) use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thick Client (a.k.a. Client Node)
&lt;/h2&gt;

&lt;p&gt;A thick client is basically a regular Ignite node that runs in &lt;a href="https://apacheignite.readme.io/docs/clients-vs-servers#section-configuring-clients-and-server"&gt;client mode&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The difference between client nodes and server nodes is logical rather than physical. Clients do not participate in caching, compute, service deployment, and other similar activities, but under the hood, they work in (almost) exactly the same way as servers. Mainly, this means that a client node utilizes the same &lt;a href="https://apacheignite.readme.io/docs/cluster-discovery"&gt;discovery&lt;/a&gt; and &lt;a href="https://apacheignite.readme.io/docs/network-config"&gt;communication&lt;/a&gt; mechanisms, which brings both advantages and disadvantages.&lt;/p&gt;
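&lt;p&gt;For illustration, here is a minimal sketch of starting a thick client. It assumes the ignite-core dependency on the classpath, a running cluster reachable with the default discovery settings, and an illustrative cache name — treat it as a sketch rather than a complete application:&lt;/p&gt;

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThickClientExample {
    public static void main(String[] args) {
        // The only difference from a server node is the client-mode flag.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // The full Ignite API is available on a thick client.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "value");
        }
    }
}
```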

&lt;h4&gt;
  
  
  Pros
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A thick client is part of the topology, so it’s aware of the partitioning configuration as well as the data collocation techniques you applied to your caches. Whenever a thick client sends a request to read or update a cache entry, it goes directly to the node where this entry is stored — this is very efficient from a scalability and performance standpoint.&lt;/li&gt;
&lt;li&gt;Thick clients provide a full set of APIs available in Apache Ignite. There are certain features (e.g., &lt;a href="https://apacheignite.readme.io/docs/near-caches"&gt;near caches&lt;/a&gt; or &lt;a href="https://apacheignite.readme.io/docs/continuous-queries"&gt;continuous queries&lt;/a&gt;) that are available ONLY in thick clients.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cons
&lt;/h4&gt;

&lt;p&gt;As always, high efficiency and rich functionality come with a price.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, a thick client is pretty heavy (hence the name) — in some cases it might consume quite a bit of CPU and RAM. For example, if you execute a SQL query, the client is actually used as a reducer that accumulates partial results from different nodes and performs the final calculations. Again — this is good for performance, but it also requires more CPU/RAM power. If your application is running on a device with limited resources, you would most likely want to use other options.&lt;/li&gt;
&lt;li&gt;Second, it is required that every thick client has full connectivity with every single server node. Moreover, since a thick client is just a regular node, it is possible that a server node will initiate a TCP connection with a client node. If there is a firewall, NAT or a load balancer between the cluster and the application, it will be really hard or even impossible to use thick clients. Basically, this means that thick clients are better used in the same physical environment in which server nodes run.&lt;/li&gt;
&lt;li&gt;Finally, keep in mind that thick clients are available only for JVM languages, .NET languages, and C++. For other platforms, you will have to use other options.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Thin Client
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apacheignite.readme.io/docs/thin-clients"&gt;Thin clients&lt;/a&gt; are actually much closer to what we usually think about when talking about clients. A thin client is very lightweight, initiates a single TCP socket connection with one of the nodes in the cluster, and communicates via a simple &lt;a href="https://apacheignite.readme.io/docs/binary-client-protocol"&gt;binary protocol&lt;/a&gt;. It also follows a very straightforward request-response pattern.&lt;/p&gt;
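&lt;p&gt;For illustration, a minimal Java thin client sketch might look like this. It assumes the ignite-core dependency on the classpath and a cluster node listening at 127.0.0.1 on the default thin-connection port 10800; the cache name is illustrative:&lt;/p&gt;

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientExample {
    public static void main(String[] args) throws Exception {
        // A single TCP connection to one node (10800 is the default port).
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "value");
            System.out.println(cache.get(1));
        }
    }
}
```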

&lt;h4&gt;
  
  
  Pros
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Thin clients consume minimal resources and are very undemanding from the network communication standpoint. This makes them ideal for remote applications that run outside of the cluster’s data center, behind NAT or a firewall. They also work perfectly fine on devices with limited resources. IoT is a good example of a use case for thin clients.&lt;/li&gt;
&lt;li&gt;The binary protocol used by thin clients is language and platform agnostic, which means that you can run a thin client virtually anywhere. Currently, Ignite provides implementations written in Java, .NET, C++, Python, NodeJS and PHP, which already exceeds what we have for thick clients. And it’s fairly easy to add new languages to this list.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cons
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A thin client is not collocation-aware. It’s typically connected to only one node in the cluster, which is used as a gateway — all requests from the client go through this server node. This creates an obvious bottleneck. Some implementations support load balancing between two or more nodes, but even in this case scalability is more limited than with thick clients.&lt;/li&gt;
&lt;li&gt;Thin clients provide a limited API. Currently, you can only create/destroy caches, execute key-value operations, and run SQL queries. &lt;a href="https://apacheignite.readme.io/docs/compute-grid"&gt;Compute Grid&lt;/a&gt; and &lt;a href="https://apacheignite.readme.io/docs/service-grid"&gt;Service Grid&lt;/a&gt; APIs are likely to be added in the future (contributions are &lt;a href="https://ignite.apache.org/community/contribute.html"&gt;welcome&lt;/a&gt;!). However, advanced features that require push notifications from servers to clients (near caches, continuous queries) or sophisticated threading models (data streaming) will most likely never be implemented for thin clients.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  JDBC and ODBC Drivers
&lt;/h2&gt;

&lt;p&gt;JDBC and ODBC are standard SQL APIs for JVM-based and C++-compatible languages, respectively. Ignite provides implementations for these APIs out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://apacheignite-sql.readme.io/docs/jdbc-driver"&gt;https://apacheignite-sql.readme.io/docs/jdbc-driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apacheignite-sql.readme.io/docs/odbc-driver"&gt;https://apacheignite-sql.readme.io/docs/odbc-driver&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Architecturally, they work exactly like thin clients — connecting to one of the nodes and using that node as a gateway. Because of that, SQL drivers share advantages and disadvantages with thin clients. There are a couple more key differences I would like to mention though.&lt;/p&gt;
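&lt;p&gt;For illustration, here is a minimal sketch using the JDBC thin driver. It assumes the ignite-core dependency on the classpath (which registers the driver automatically) and a cluster node at 127.0.0.1 listening on the default thin-connection port:&lt;/p&gt;

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcExample {
    public static void main(String[] args) throws Exception {
        // Like a thin client, the connection goes through a single gateway node.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}
```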

&lt;h4&gt;
  
  
  Pros
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;JDBC and ODBC are industry standards, which means that the provided drivers enable easier integration with existing &lt;a href="https://apacheignite-sql.readme.io/docs/sql-tooling"&gt;SQL-oriented tools&lt;/a&gt;. As an example, you can use them to transparently connect BI tools like Tableau or Pentaho to Apache Ignite.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cons
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The limitation here is obvious — you get &lt;em&gt;only&lt;/em&gt; the SQL API from this type of client connector. In my experience, the vast majority of use cases require more features if you want to get the most from Apache Ignite in terms of scalability and performance. Unless you have no choice but to use standard JDBC/ODBC, I would absolutely recommend looking at other options.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  REST API
&lt;/h2&gt;

&lt;p&gt;Apache Ignite also comes with the &lt;a href="https://apacheignite.readme.io/docs/rest-api"&gt;REST API&lt;/a&gt;, which allows performing basic operations like reading or updating cache entries, executing SQL queries, looking at some of the metrics, etc.&lt;/p&gt;

&lt;p&gt;In its current implementation, this API is not really suitable for any performance-sensitive purposes, so I usually avoid using it in production for applications of any type.&lt;/p&gt;

&lt;p&gt;However, the REST API might sometimes be useful during development or testing — to quickly check a particular entry value or a SQL query result, for example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing The Right Option
&lt;/h2&gt;

&lt;p&gt;Okay, there are several options for client connectors, and each of them has its own pros and cons. So, which one should you go with for a particular application? Here is a simple algorithm that will help you to make the choice.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If the application is deployed in the same environment where server nodes run, and as long as there is full network connectivity between the application and every server node (i.e. no firewall or NAT), use the &lt;strong&gt;thick client&lt;/strong&gt;. Generally speaking, thick clients should be your first choice as they are the most efficient and provide the most functionality.&lt;/li&gt;
&lt;li&gt;If the application is remote and/or runs on a device with limited resources, use the &lt;strong&gt;thin client&lt;/strong&gt;. This is your second choice.&lt;/li&gt;
&lt;li&gt;Finally, if the application must use standard JDBC or ODBC API (e.g. it’s a BI tool like Tableau), use the corresponding &lt;strong&gt;SQL driver&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
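&lt;p&gt;The three steps above can be sketched as a tiny decision function (the helper and its parameter names are mine, purely for illustration). The standard-SQL requirement is checked first, because when it applies it is a hard constraint:&lt;/p&gt;

```java
// Hypothetical helper encoding the three-step algorithm above.
public class ConnectorChooser {
    static String choose(boolean needsStandardSqlApi,
                         boolean sameEnvFullConnectivity) {
        if (needsStandardSqlApi)
            return "JDBC/ODBC driver";
        return sameEnvFullConnectivity ? "thick client" : "thin client";
    }

    public static void main(String[] args) {
        System.out.println(choose(false, true));  // thick client
        System.out.println(choose(false, false)); // thin client
        System.out.println(choose(true, false));  // JDBC/ODBC driver
    }
}
```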

&lt;p&gt;That’s it! Not that confusing anymore, right? 🙂&lt;/p&gt;

</description>
      <category>apacheignite</category>
      <category>distributedsystems</category>
      <category>programming</category>
      <category>data</category>
    </item>
  </channel>
</rss>
