<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pieter Humphrey</title>
    <description>The latest articles on DEV Community by Pieter Humphrey (@pieterhumphrey).</description>
    <link>https://dev.to/pieterhumphrey</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F520878%2Fc31cdfcd-d9d6-4fbc-bf24-1ff402e82681.jpg</url>
      <title>DEV Community: Pieter Humphrey</title>
      <link>https://dev.to/pieterhumphrey</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pieterhumphrey"/>
    <language>en</language>
    <item>
      <title>Blasting Off into Stargate using HTTPie</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Tue, 01 Nov 2022 21:57:41 +0000</pubDate>
      <link>https://dev.to/datastax/blasting-off-into-stargate-using-httpie-1fj7</link>
      <guid>https://dev.to/datastax/blasting-off-into-stargate-using-httpie-1fj7</guid>
      <description>&lt;p&gt;Author: Kirsten Hunter&lt;/p&gt;

&lt;p&gt;As a DataStax Developer Advocate, my job is to help our amazing teams provide you with the best possible experience with Cassandra and our products.&lt;/p&gt;

&lt;p&gt;DataStax &lt;a href="https://astra.dev/3NeUW8f" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; is built on Apache Cassandra. In addition to great &lt;a href="https://docs.astra.datastax.com/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, Astra offers a robust free tier that can run small production workloads, pet projects, or just let you play—all for free, no credit card required. Cassandra can be tricky for hardcore SQL developers because it uses a slightly different query language (CQL), but when you get Astra, Stargate is there to let you interact with your data through APIs. Our open source &lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt; product provides REST, GraphQL, and schemaless document APIs in addition to native language drivers. If you like these APIs but don’t want to use our products, that’s fine! Stargate is completely open source, and you can run it on your own system.&lt;/p&gt;

&lt;p&gt;One of the things I noticed when I arrived at DataStax is that our tutorials rely heavily on curl for performing commands against our APIs. I prefer HTTPie, a similar tool designed for REST API interaction, so I created an authentication plugin for HTTPie that stores your variables and lets you make requests like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http:/v2/schemas/keyspaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ve put together a &lt;a href="https://katacoda.com/datastax/scenarios/httpie-astra" rel="noopener noreferrer"&gt;Katacoda scenario&lt;/a&gt; for you to see how everything works. If you want to implement it locally, here are the instructions:&lt;/p&gt;

&lt;p&gt;The first secret to this is an rc file (~/.astrarc), which keeps track of your DB, region, username, and password, and auto-refreshes your token. You can have as many sections in this file as you need—it’s just an INI-style configuration file.&lt;/p&gt;

&lt;p&gt;The second secret is the HTTPie configuration file, at ~/.config/httpie/config.json.&lt;/p&gt;

&lt;p&gt;My configuration is what you see here; I have the “fruity” color scheme, my default auth-type is astra, and the section of the .astrarc file is “default”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "default_options": [
    "--style=fruity",
    "--auth-type=astra",
    "--auth=default:"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My goal here is to make it so you can bounce around between REST and GraphQL queries, get nicely formatted JSON results, and perhaps use jq to pare them down—but mostly I want the tool to get out of your way.&lt;/p&gt;

&lt;p&gt;Interested? At this point, httpie-astra requires Python 3.5, but if you want me to make it support 2.7, please let me know in the &lt;a href="https://community.datastax.com/index.html" rel="noopener noreferrer"&gt;DataStax Community&lt;/a&gt; or on Discord.&lt;/p&gt;

&lt;p&gt;You can get it from GitHub: git clone &lt;a href="https://github.com/synedra-datastax/httpie-astra" rel="noopener noreferrer"&gt;https://github.com/synedra-datastax/httpie-astra&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or with pip: pip3 install httpie-astra&lt;/p&gt;

&lt;p&gt;Once you’ve got it installed, get your environment set up by making a simple call like:&lt;/p&gt;

&lt;p&gt;http --auth-type astra -a default: :/v2/schemas/keyspaces&lt;/p&gt;

&lt;p&gt;This will give you instructions to get all your variables set up.&lt;/p&gt;

&lt;p&gt;Let’s take a look at an example from our Astra REST documentation.&lt;/p&gt;

&lt;p&gt;The curl command looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://${ASTRA_CLUSTER_ID}-${ASTRA_CLUSTER_REGION}.apps.astra.datastax.com/api/restv2/schemas/keyspaces/${ASTRA_DB_KEYSPACE}/tables/products
\
--header 'accept: application/json' \
--header 'x-cassandra-request-id: {unique-UUID}' \
--header "x-cassandra-token: ${ASTRA_AUTHORIZATION_TOKEN}"

Then it gives you back a single line JSON doc.


{"data":{"name":"products","keyspace":"tutorial","columnDefinitions":[{"name":"id","typeDefinition":"uuid","static":false},{"name":"created","typeDefinition":"timestamp","static":false},{"name":"description","typeDefinition":"varchar","static":false},{"name":"name","typeDefinition":"varchar","static":false},{"name":"price","typeDefinition":"decimal","static":false}],"primaryKey":{"partitionKey":["id"],"clusteringKey":[]},"tableOptions":{"defaultTimeToLive":0,"clusteringExpression":[]}}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The HTTPie command already knows a lot of this information, so the call is much simpler. I’m using the config.json I described above to set my auth-type and config section. I’ll show this output as a screenshot because it’s really easy to understand:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2f9cp2di1qsr5atho6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2f9cp2di1qsr5atho6z.png" alt="Image description" width="800" height="852"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’re on the job, getting the Stargate examples and Astra API examples to include HTTPie tabs, but in the meantime, if you’re having fun with this, please let me know in the community or on Discord, and let’s make this rock for you!&lt;/p&gt;

</description>
      <category>database</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Serverless Storage for Your Node.js Functions with Astra DB</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Tue, 27 Sep 2022 16:03:18 +0000</pubDate>
      <link>https://dev.to/datastax/serverless-storage-for-your-nodejs-functions-with-astra-db-3gk9</link>
      <guid>https://dev.to/datastax/serverless-storage-for-your-nodejs-functions-with-astra-db-3gk9</guid>
      <description>&lt;p&gt;Did you ever wish you had persistent storage for your serverless functions? Storage that was as easy as an idiomatic API call in your favorite language? What if you could even handle JSON data with no upfront schema definition? Functions as a service (FaaS) are excellent containers for business logic. With functions, you can: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run backend code without managing any infrastructure whatsoever.&lt;/li&gt;
&lt;li&gt;Run short-lived code that starts FaaST, runs and shuts down when complete or unused.&lt;/li&gt;
&lt;li&gt;Run your code in a specific framework or library of your choice.&lt;/li&gt;
&lt;li&gt;Trigger a function based on events that are defined by the FaaS provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s great, but what happens to the application state? Yes, you could run an in-process cache, use a session store, or use the modest filesystem allocated to the function. But these workarounds are as short-lived as the function itself, so they’re not what most people consider persistent storage. &lt;/p&gt;

&lt;p&gt;Using APIs and/or drivers, you can get data out from your functions and into a database, but not many databases employ &lt;strong&gt;&lt;a href="https://www.infoq.com/articles/serverless-data-api/" rel="noopener noreferrer"&gt;data API gateways&lt;/a&gt;&lt;/strong&gt; or offer easy ways to surface a fluent data access layer in APIs like REST, GraphQL or gRPC. Perhaps you have JSON data and you simply want a document-style NoSQL option that skips defining the schema upfront – just JSON and go.&lt;/p&gt;

&lt;p&gt;So let’s assume for a moment that getting to your database is easy with APIs, drivers, and schemaless JSON. Why, then, would a serverless, autoscaling database be the preferred choice for persistent storage for your serverless, autoscaling functions? First off, it’s important to understand that serverless and FaaS are &lt;strong&gt;&lt;a href="https://www.bmc.com/blogs/serverless-faas/" rel="noopener noreferrer"&gt;not quite the same thing&lt;/a&gt;&lt;/strong&gt;. The key thing that makes FaaS and serverless DBaaS so great together is autoscaling.&lt;/p&gt;

&lt;p&gt;If you've invested in DBaaS and FaaS, you're probably not interested in managing infrastructure. Running serverless functions that have the potential to autoscale up rapidly is dangerous if connected directly to a back-end database that cannot auto-scale. &lt;/p&gt;

&lt;p&gt;Scaling up the application dynamically could put an unpredictable and increasing load on a data service or database that doesn’t use a similar (auto) scaling mechanism. The industry spent years trying to solve this problem in the application server era with connection pools and database connection conservation techniques. Most of those techniques are antithetical to FaaST-startup-and-terminate style serverless functions because there’s no connection to preserve!&lt;/p&gt;

&lt;p&gt;Pairing an autoscaling application tier with a data tier that doesn’t autoscale drags down a system that could have otherwise been fully automated. Manually running a Terraform script, or worse, waiting for an operations ticket to be created and serviced for scaling the database instance up or down, would kill 50% of the automation value between application (function) and database. No bueno.&lt;/p&gt;

&lt;p&gt;So how can we wire together an autoscaling DBaaS like &lt;strong&gt;&lt;a href="https://astra.dev/3Sks28i" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;&lt;/strong&gt;, with serverless functions &lt;em&gt;without&lt;/em&gt; having to write a ton of REST services to expose the database functions you need? Let us show you how!&lt;/p&gt;

&lt;p&gt;In this livestream playback, &lt;strong&gt;&lt;a href="https://www.linkedin.com/in/alexellisuk/" rel="noopener noreferrer"&gt;Alex Ellis&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://www.linkedin.com/in/stefano-lottini/" rel="noopener noreferrer"&gt;Stefano Lottini&lt;/a&gt;&lt;/strong&gt; show you how to pair FaaS with a serverless, autoscaling DBaaS for end-to-end automated scaling. &lt;/p&gt;

&lt;p&gt;To get the most out of the event replay, be sure to check out the OpenFaaS &lt;strong&gt;&lt;a href="https://www.openfaas.com/blog/faas-storage-cassandra-astra/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;&lt;/strong&gt; first! &lt;/p&gt;

&lt;p&gt;Learn More:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate.io&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://www.openfaas.com/" rel="noopener noreferrer"&gt;OpenFaaS&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://astra.dev/3Sks28i" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://www.datastax.com/dev" rel="noopener noreferrer"&gt;DataStax Developers&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Follow the &lt;strong&gt;&lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax Tech Blog&lt;/a&gt;&lt;/strong&gt; for more developer stories. Check out our &lt;strong&gt;&lt;a href="https://www.youtube.com/channel/UCqA6zOSMpQ55vvguq4Y0jAg" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;&lt;/strong&gt; channel for tutorials, and follow DataStax Developers on &lt;strong&gt;&lt;a href="https://twitter.com/DataStaxDevs" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/strong&gt; for the latest news about our developer community.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>programming</category>
      <category>node</category>
    </item>
    <item>
      <title>Baeldung Series Part 2: Build a Dashboard With Cassandra, Astra and CQL – Mapping Event Data</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Tue, 13 Sep 2022 17:44:29 +0000</pubDate>
      <link>https://dev.to/datastax/baeldung-series-part-2-build-a-dashboard-with-cassandra-astra-and-cql-mapping-event-data-1lc</link>
      <guid>https://dev.to/datastax/baeldung-series-part-2-build-a-dashboard-with-cassandra-astra-and-cql-mapping-event-data-1lc</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;1. Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In our &lt;a href="https://www.baeldung.com/cassandra-astra-rest-dashboard-updates" rel="noopener noreferrer"&gt;previous article&lt;/a&gt;, we looked at augmenting our dashboard to store and display individual events from the Avengers using &lt;a href="https://astra.dev/3Dn2FOL" rel="noopener noreferrer"&gt;DataStax Astra&lt;/a&gt;, a serverless DBaaS powered by &lt;a href="https://cassandra.apache.org/" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt; using &lt;a href="https://stargate.io/?utm_medium=referral&amp;amp;utm_source=baeldung&amp;amp;utm_campaign=series-1-of-3&amp;amp;utm_content=avengers-dash-series-1" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt; to offer additional APIs for working with it.&lt;/p&gt;

&lt;p&gt;In this article, we will be making use of the exact same data in a different way. &lt;strong&gt;We are going to allow the user to select which of the Avengers to display, the time period of interest, and then display these events on an interactive map.&lt;/strong&gt; Unlike in the previous article, this will allow the user to see the data interacting with each other in both geography and time.&lt;/p&gt;

&lt;p&gt;In order to follow along with this article, it is assumed that you have already read the &lt;a href="https://www.baeldung.com/cassandra-astra-stargate-dashboard" rel="noopener noreferrer"&gt;first&lt;/a&gt; and &lt;a href="https://www.baeldung.com/cassandra-astra-rest-dashboard-updates" rel="noopener noreferrer"&gt;second&lt;/a&gt; articles in this series and that you have a working knowledge of Java 16, Spring, and at least an understanding of what Cassandra can offer for data storage and access. It may also be easier to have the code from &lt;a href="https://github.com/Baeldung/datastax-cassandra/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; open alongside the article to follow along.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;2. Service Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;We will be retrieving the data using the CQL API, using queries in the &lt;a href="https://docs.datastax.com/en/cql-oss/3.3/index.html" rel="noopener noreferrer"&gt;Cassandra Query Language&lt;/a&gt;.&lt;/strong&gt; This requires some additional setup for us to be able to talk to the server.&lt;/p&gt;

&lt;h3&gt;
  
  
&lt;strong&gt;2.1. Download the Secure Connect Bundle&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In order to connect to the Cassandra database hosted by DataStax Astra via CQL, we need to download the “Secure Connect Bundle”.&lt;/strong&gt; This is a zip file containing SSL certificates and connection details for this exact database, allowing the connection to be made securely.&lt;/p&gt;

&lt;p&gt;This is available from the Astra dashboard, found under the “Connect” tab for our exact database, and then the “Java” option under “Connect using a driver”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgw1rueo0bn7tw73rszq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgw1rueo0bn7tw73rszq.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For pragmatic reasons, we're going to put this file into &lt;em&gt;src/main/resources&lt;/em&gt; so that we can access it from the classpath. In a normal deployment situation, you would need to be able to provide different files to connect to different databases – for example, to have different databases for development and production environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.2. Creating Client Credentials&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;We also need to have some client credentials in order to connect to our database.&lt;/strong&gt; Unlike the APIs that we've used in previous articles, which use an access token, the CQL API requires a “username” and “password”. These are actually a Client ID and Client Secret that we generate from the “Manage Tokens” section under “Organizations”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexcw0wa7zmj2v0kdoa2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexcw0wa7zmj2v0kdoa2s.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once this is done, we need to add the generated Client ID and Client Secret to our &lt;em&gt;application.properties&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ASTRA_DB_CLIENT_ID=clientIdHere
ASTRA_DB_CLIENT_SECRET=clientSecretHere
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;2.3. Google Maps API Key&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In order to render our map, we are going to use Google Maps, which requires a Google API key.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After signing up for a Google account, we need to visit the &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud Platform Dashboard&lt;/a&gt;. Here we can create a new project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4gyrjmbwfwqxisw8vzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4gyrjmbwfwqxisw8vzn.png" alt="Image description" width="768" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then need to enable the Google Maps JavaScript API for this project. Search for it and enable it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpzticy9gyl1ewmg9zi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpzticy9gyl1ewmg9zi9.png" alt="Image description" width="768" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we need an API key to be able to use this. For this, we need to navigate to the “Credentials” pane on the sidebar, click on “Create Credentials” at the top and select API Key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcw368us36nv8dagh7m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcw368us36nv8dagh7m6.png" alt="Image description" width="768" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now need to add this key to our &lt;em&gt;application.properties&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GOOGLE_CLIENT_ID=someRandomClientId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;3. Building the Client Layer Using Astra and CQL&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In order to communicate with the database via CQL, we need to write our client layer.&lt;/strong&gt; This will be a class called &lt;em&gt;CqlClient&lt;/em&gt; that wraps the DataStax CQL APIs, abstracting away the connection details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Repository
public class CqlClient {
  @Value("${ASTRA_DB_CLIENT_ID}")
  private String clientId;

  @Value("${ASTRA_DB_CLIENT_SECRET}")
  private String clientSecret;

  public List&amp;lt;Row&amp;gt; query(String cql, Object... binds) {
    try (CqlSession session = connect()) {
      var statement = session.prepare(cql);
      var bound = statement.bind(binds);
      var rs = session.execute(bound);

      return rs.all();
    }
  }

  private CqlSession connect() {
    return CqlSession.builder()
      .withCloudSecureConnectBundle(CqlClient.class.getResourceAsStream("/secure-connect-baeldung-avengers.zip"))
      .withAuthCredentials(clientId, clientSecret)
      .build();
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives us a single public method that will connect to the database and execute an arbitrary CQL query, allowing for some bind values to be provided to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connecting to the database makes use of our Secure Connect Bundle and client credentials that we generated earlier.&lt;/strong&gt; The Secure Connect Bundle needs to have been placed in &lt;em&gt;src/main/resources/secure-connect-baeldung-avengers.zip&lt;/em&gt;, and the client ID and secret need to have been put into &lt;em&gt;application.properties&lt;/em&gt; with the appropriate property names.&lt;/p&gt;

&lt;p&gt;Note that this implementation loads every row from the query into memory and returns them as a single list before finishing. This is only for the purposes of this article and is not as efficient as it could be. We could, for example, fetch and process each row individually as it is returned, or even wrap the entire query in a &lt;em&gt;java.util.stream.Stream&lt;/em&gt; to be processed.&lt;/p&gt;
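The streaming alternative mentioned above can be sketched with the standard library alone. This is an illustrative sketch rather than code from the article: in the DataStax driver, <em>ResultSet</em> implements <em>Iterable&lt;Row&gt;</em>, so the same <em>StreamSupport</em> pattern applies to real query results (here a plain list stands in for the result set):

```java
import java.util.List;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class StreamingSketch {
    // Any Iterable (including the driver's ResultSet) can be exposed lazily as a Stream.
    static <T> Stream<T> toStream(Iterable<T> iterable) {
        return StreamSupport.stream(iterable.spliterator(), false);
    }

    public static void main(String[] args) {
        // Stand-in for rows returned by a CQL query.
        var rows = List.of("falcon", "wanda", "hawkeye");

        // Process rows one at a time instead of materializing them all with rs.all().
        long count = toStream(rows).filter(name -> name.length() > 5).count();
        System.out.println(count);
    }
}
```

One caveat if you apply this to the real driver: the <em>CqlSession</em> must remain open while the stream is being consumed, so the try-with-resources block inside <em>query()</em> would need to be restructured.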

&lt;h2&gt;
  
  
  &lt;strong&gt;4. Fetching the Required Data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Once we have our client to be able to interact with the CQL API, we need our service layer to actually fetch the data we are going to display.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Firstly, we need a Java Record to represent each row we are fetching from the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public record Location(String avenger, 
  Instant timestamp, 
  BigDecimal latitude, 
  BigDecimal longitude, 
  BigDecimal status) {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then we need our service layer to retrieve the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Service
public class MapService {
  @Autowired
  private CqlClient cqlClient;

  // To be implemented.
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Into this, we're going to write our functions to actually query the database – using the &lt;em&gt;CqlClient&lt;/em&gt; that we've just written – and return the appropriate details.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4.1. Generate a List of Avengers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Our first function is to get a list of all the Avengers that we are able to display the details of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public List&amp;lt;String&amp;gt; listAvengers() {
  var rows = cqlClient.query("select distinct avenger from avengers.events");

  return rows.stream()
    .map(row -&amp;gt; row.getString("avenger"))
    .sorted()
    .collect(Collectors.toList());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This just gets the list of distinct values in the &lt;em&gt;avenger&lt;/em&gt; column from our &lt;em&gt;events&lt;/em&gt; table.&lt;/strong&gt; Because this is our partition key, it is incredibly efficient. CQL will only allow us to order the results when we have a filter on the partition key, so we are instead doing the sorting in Java code. This is fine, though, because we know that a small number of rows is being returned, so the sorting will not be expensive.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4.2. Generate Location Details&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Our other function is to get a list of all the location details that we wish to display on the map. &lt;strong&gt;This takes a list of avengers, and a start and end time and returns all of the events for them grouped as appropriate:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public Map&amp;lt;String, List&amp;lt;Location&amp;gt;&amp;gt; getPaths(List&amp;lt;String&amp;gt; avengers, Instant start, Instant end) {
  var rows = cqlClient.query("select avenger, timestamp, latitude, longitude, status from avengers.events where avenger in ? and timestamp &amp;gt;= ? and timestamp &amp;lt;= ?", 
    avengers, start, end);

  var result = rows.stream()
    .map(row -&amp;gt; new Location(
      row.getString("avenger"), 
      row.getInstant("timestamp"), 
      row.getBigDecimal("latitude"), 
      row.getBigDecimal("longitude"),
      row.getBigDecimal("status")))
    .collect(Collectors.groupingBy(Location::avenger));

  for (var locations : result.values()) {
    Collections.sort(locations, Comparator.comparing(Location::timestamp));
  }

  return result;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CQL binds automatically expand out the IN clause to handle multiple avengers correctly, and the fact that we are filtering by the partition and clustering key again makes this efficient to execute. We then parse these into our &lt;em&gt;Location&lt;/em&gt; object, group them together by the &lt;em&gt;avenger&lt;/em&gt; field and ensure that each grouping is sorted by the timestamp.&lt;/p&gt;
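The group-then-sort step above can be seen in isolation with plain collections. This is a minimal sketch using hypothetical sample data, not the article's dataset; it relies on the fact that <em>Collectors.groupingBy</em> collects each group into a mutable list that can then be sorted in place:

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingSketch {
    // Trimmed-down stand-in for the article's Location record.
    record Location(String avenger, Instant timestamp) {}

    static Map<String, List<Location>> groupAndSort(List<Location> events) {
        // groupingBy collects into mutable ArrayLists, so each group can be sorted in place.
        Map<String, List<Location>> grouped = events.stream()
            .collect(Collectors.groupingBy(Location::avenger));
        for (var group : grouped.values()) {
            group.sort(Comparator.comparing(Location::timestamp));
        }
        return grouped;
    }

    public static void main(String[] args) {
        var events = List.of(
            new Location("falcon", Instant.parse("2022-01-01T10:00:00Z")),
            new Location("wanda",  Instant.parse("2022-01-01T09:00:00Z")),
            new Location("falcon", Instant.parse("2022-01-01T08:00:00Z")));

        var grouped = groupAndSort(events);
        // Each avenger's events are now in chronological order.
        System.out.println(grouped.get("falcon").get(0).timestamp());
    }
}
```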

&lt;h2&gt;
  
  
  &lt;strong&gt;5. Displaying the Map&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Now that we have the ability to fetch our data, we need to actually let the user see it.&lt;/strong&gt; This will first involve writing our controller for getting the data:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5.1. Map Controller&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Controller
public class MapController {
  @Autowired
  private MapService mapService;

  @Value("${GOOGLE_CLIENT_ID}")
  private String googleClientId;

  @ModelAttribute("googleClientId")
  String getGoogleClientId() {
    return googleClientId;
  }

  @GetMapping("/map")
  public ModelAndView showMap(@RequestParam(name = "avenger", required = false) List&amp;lt;String&amp;gt; avenger,
  @RequestParam(required = false) String start, @RequestParam(required = false) String end) throws Exception {
    var result = new ModelAndView("map");
    result.addObject("inputStart", start);
    result.addObject("inputEnd", end);
    result.addObject("inputAvengers", avenger);


    result.addObject("avengers", mapService.listAvengers());

    if (avenger != null &amp;amp;&amp;amp; !avenger.isEmpty() &amp;amp;&amp;amp; start != null &amp;amp;&amp;amp; end != null) {
      var paths = mapService.getPaths(avenger, 
        LocalDateTime.parse(start).toInstant(ZoneOffset.UTC), 
        LocalDateTime.parse(end).toInstant(ZoneOffset.UTC));

      result.addObject("paths", paths);
    }

    return result;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This uses our service layer to get the list of avengers, and if we have inputs provided then it also gets the list of locations for those inputs.&lt;/strong&gt; We also have a &lt;em&gt;ModelAttribute&lt;/em&gt; that will provide the Google Client ID to the view for it to use.&lt;/p&gt;
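The date handling in the controller deserves a short note: HTML <em>datetime-local</em> inputs submit zone-less ISO-8601 strings, which is why the controller parses them as <em>LocalDateTime</em> and pins them to UTC with <em>toInstant(ZoneOffset.UTC)</em>. A standalone sketch of that conversion (the sample value is illustrative):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class DatetimeLocalSketch {
    static Instant parseFormInput(String datetimeLocal) {
        // "2022-09-13T17:44" carries no zone, so we interpret it as UTC explicitly.
        return LocalDateTime.parse(datetimeLocal).toInstant(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        System.out.println(parseFormInput("2022-09-13T17:44")); // 2022-09-13T17:44:00Z
    }
}
```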

&lt;h3&gt;
  
  
&lt;strong&gt;5.2. Map Template&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once we've written our controller, we need a template to actually render the HTML. This will be written using Thymeleaf as in the previous articles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!doctype html&amp;gt;
&amp;lt;html lang="en"&amp;gt;

&amp;lt;head&amp;gt;
  &amp;lt;meta charset="utf-8" /&amp;gt;
  &amp;lt;meta name="viewport" content="width=device-width, initial-scale=1" /&amp;gt;

  &amp;lt;link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/css/bootstrap.min.css" rel="stylesheet"
    integrity="sha384-eOJMYsd53ii+scO/bJGFsiCZc+5NDVN2yr8+0RDqr0Ql0h+rP48ckxlpbzKgwra6" crossorigin="anonymous" /&amp;gt;

  &amp;lt;title&amp;gt;Avengers Status Map&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;

&amp;lt;body&amp;gt;
  &amp;lt;nav class="navbar navbar-expand-lg navbar-dark bg-dark"&amp;gt;
    &amp;lt;div class="container-fluid"&amp;gt;
      &amp;lt;a class="navbar-brand" href="#"&amp;gt;Avengers Status Map&amp;lt;/a&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/nav&amp;gt;

  &amp;lt;div class="container-fluid mt-4"&amp;gt;
    &amp;lt;div class="row"&amp;gt;
      &amp;lt;div class="col-3"&amp;gt;
        &amp;lt;form action="/map" method="get"&amp;gt;
          &amp;lt;div class="mb-3"&amp;gt;
            &amp;lt;label for="avenger" class="form-label"&amp;gt;Avengers&amp;lt;/label&amp;gt;
            &amp;lt;select class="form-select" multiple name="avenger" id="avenger" required&amp;gt;
              &amp;lt;option th:each="avenger: ${avengers}" th:text="${avenger}" th:value="${avenger}"
                th:selected="${inputAvengers != null &amp;amp;&amp;amp; inputAvengers.contains(avenger)}"&amp;gt;&amp;lt;/option&amp;gt;
            &amp;lt;/select&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;div class="mb-3"&amp;gt;
            &amp;lt;label for="start" class="form-label"&amp;gt;Start Time&amp;lt;/label&amp;gt;
            &amp;lt;input type="datetime-local" class="form-control" name="start" id="start" th:value="${inputStart}"
              required /&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;div class="mb-3"&amp;gt;
            &amp;lt;label for="end" class="form-label"&amp;gt;End Time&amp;lt;/label&amp;gt;
            &amp;lt;input type="datetime-local" class="form-control" name="end" id="end" th:value="${inputEnd}" required /&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;button type="submit" class="btn btn-primary"&amp;gt;Submit&amp;lt;/button&amp;gt;
        &amp;lt;/form&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div class="col-9"&amp;gt;
        &amp;lt;div id="map" style="width: 100%; height: 40em;"&amp;gt;&amp;lt;/div&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/div&amp;gt;

  &amp;lt;script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/js/bootstrap.bundle.min.js"
    integrity="sha384-JEW9xMcG8R+pH31jmWH6WWP0WintQrMb4s7ZOdauHnUtxwoG2vI5DkLtS3qm9Ekf" crossorigin="anonymous"&amp;gt;
    &amp;lt;/script&amp;gt;
  &amp;lt;script type="text/javascript" th:inline="javascript"&amp;gt;
    /*&amp;lt;![CDATA[*/
    let paths = /*[[${paths}]]*/ {};

    let map;
    let openInfoWindow;

    function initMap() {
      let averageLatitude = 0;
      let averageLongitude = 0;

      if (paths) {
        let numPaths = 0;

        for (const path of Object.values(paths)) {
          let last = path[path.length - 1];
          averageLatitude += last.latitude;
          averageLongitude += last.longitude;
          numPaths++;
        }

        averageLatitude /= numPaths;
        averageLongitude /= numPaths;
      } else {
        // We had no data, so let's fall back to an empty path set and a default center:
        paths = {};
        averageLatitude = 40.730610;
        averageLongitude = -73.935242;
      }

      map = new google.maps.Map(document.getElementById("map"), {
        center: { lat: averageLatitude, lng: averageLongitude },
        zoom: 16,
      });

      for (const avenger of Object.keys(paths)) {
        const path = paths[avenger];
        const color = getColor(avenger);

        new google.maps.Polyline({
          path: path.map(point =&amp;gt; ({ lat: point.latitude, lng: point.longitude })),
          geodesic: true,
          strokeColor: color,
          strokeOpacity: 1.0,
          strokeWeight: 2,
          map: map,
        });

        path.forEach((point, index) =&amp;gt; {
          const infowindow = new google.maps.InfoWindow({
            content: "&amp;lt;dl&amp;gt;&amp;lt;dt&amp;gt;Avenger&amp;lt;/dt&amp;gt;&amp;lt;dd&amp;gt;" + avenger + "&amp;lt;/dd&amp;gt;&amp;lt;dt&amp;gt;Timestamp&amp;lt;/dt&amp;gt;&amp;lt;dd&amp;gt;" + point.timestamp + "&amp;lt;/dd&amp;gt;&amp;lt;dt&amp;gt;Status&amp;lt;/dt&amp;gt;&amp;lt;dd&amp;gt;" + Math.round(point.status * 10000) / 100 + "%&amp;lt;/dd&amp;gt;&amp;lt;/dl&amp;gt;"
          });

          const marker = new google.maps.Marker({
            position: { lat: point.latitude, lng: point.longitude },
            icon: {
              path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW,
              strokeColor: color,
              scale: index == path.length - 1 ? 5 : 3
            },
            map: map,
          });

          marker.addListener("click", () =&amp;gt; {
            if (openInfoWindow) {
              openInfoWindow.close();
              openInfoWindow = undefined;
            }

            openInfoWindow = infowindow;
            infowindow.open({
              anchor: marker,
              map: map,
              shouldFocus: false,
            });
          });

        });
      }
    }

    function getColor(avenger) {
      return {
        wanda: '#ff2400',
        hulk: '#008000',
        hawkeye: '#9370db',
        falcon: '#000000'
      }[avenger];
    }

    /*]]&amp;gt;*/
  &amp;lt;/script&amp;gt;

  &amp;lt;script
    th:src="${'https://maps.googleapis.com/maps/api/js?key=' + googleClientId + '&amp;amp;callback=initMap&amp;amp;libraries=&amp;amp;v=weekly'}"
    async&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/body&amp;gt;

&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are injecting the data retrieved from Cassandra, as well as some other details. Thymeleaf automatically handles converting the objects within the &lt;em&gt;script&lt;/em&gt; block into valid JSON. Once this is done, our JavaScript then renders a map using the Google Maps API and adds some routes and markers onto it to show our selected data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At this point, we have a fully working application. In it, we can select some avengers to display and a date and time range of interest, and see what was happening with our data:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxetb1tyozux5khpb9gjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxetb1tyozux5khpb9gjm.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;6. Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here we have seen an alternative way to visualize data retrieved from our Cassandra database, and have shown the Astra CQL API in use to obtain this data.&lt;/p&gt;

&lt;p&gt;All of the code from this article can be found &lt;a href="https://github.com/Baeldung/datastax-cassandra/tree/main/avengers-dashboard" rel="noopener noreferrer"&gt;over on GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Scalable Streaming Applications with DataStax Astra Streaming</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Thu, 18 Aug 2022 18:16:58 +0000</pubDate>
      <link>https://dev.to/datastax/building-scalable-streaming-applications-with-datastax-astra-streaming-3g6h</link>
      <guid>https://dev.to/datastax/building-scalable-streaming-applications-with-datastax-astra-streaming-3g6h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldzoxdsb3whe5cutfc3i.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldzoxdsb3whe5cutfc3i.jpeg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; enables developers to build streaming applications on top of an elastically scalable, multi-cloud messaging and event streaming platform powered by &lt;a href="https://pulsar.apache.org/" rel="noopener noreferrer"&gt;Apache Pulsar&lt;/a&gt;. &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;DataStax Astra Streaming&lt;/a&gt; is currently in beta, and we’ll be releasing a full demo soon. In the meantime, this article will walk you through a short demo that will provide a great starting point for familiarizing yourself with this powerful new streaming service.&lt;/p&gt;

&lt;p&gt;Here’s what you will learn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How to create an Astra Streaming tenant, complete with namespaces, topics, and sinks.&lt;/li&gt;
&lt;li&gt;How to produce messages for a topic that make use of serialized Java &lt;a href="https://en.wikipedia.org/wiki/Plain_old_Java_object" rel="noopener noreferrer"&gt;POJOs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;How to store topic messages in a &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;DataStax Astra&lt;/a&gt; database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To illustrate, we’ll use &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; to replicate the streaming object tracking information normally provided by the Federal Aviation Administration (FAA) to each airport. This stream reports the location of every piece of equipment at the airport (planes, fuel trucks, aircraft tow tractors, baggage carts, etc.).&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s start building!
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt;, not only can you feed information into the streaming “pipe”, but you can also store those events into an &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; database for later analysis. This allows us to view our object tracking information in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Where is everything located right now?&lt;/li&gt;
&lt;li&gt;Where has a specific object been located historically? This is useful for tracking the paths of the object over time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To build our streaming pipeline for tracking objects in real-time and historically, we’ll need to build the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; tenant with a single topic, &lt;code&gt;object-location&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A Java producer that will publish events to the &lt;code&gt;object-location&lt;/code&gt; topic.&lt;/li&gt;
&lt;li&gt;An &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; database with two tables that get data from the &lt;code&gt;object-location&lt;/code&gt; topic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0rzeoku92uc5f5rmcr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0rzeoku92uc5f5rmcr8.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: All the moving parts we’ll need to build for this demo.&lt;/p&gt;

&lt;p&gt;In our object tracking example, a single topic will feed data into two different tables. The &lt;code&gt;object_location&lt;/code&gt; table records only the most recent known location for an object, while &lt;code&gt;object_location_history&lt;/code&gt; records every location an object has reported over time. The location history data is useful for different types of analyses, such as analyzing the flow of different objects through the airport terminal.&lt;/p&gt;

&lt;p&gt;This approach isn’t limited to object tracking; it works for any use case that needs both real-time streaming data and historical data. For example, when tracking stock prices, one table can hold the current price while another holds the price history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the database
&lt;/h2&gt;

&lt;p&gt;Now back to our object tracking example. Our first step will be to create the database. This is a very simple database with only two tables, which we’ll create in a keyspace called &lt;code&gt;airport&lt;/code&gt; to keep things simple. The two tables are &lt;code&gt;object_location&lt;/code&gt;, which tracks where every object is at the moment (well, really, its last known location), and &lt;code&gt;object_location_history&lt;/code&gt;, which tracks the location of each object over time, with the most recent update listed first.&lt;/p&gt;

&lt;p&gt;If you are following along with your own &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; instance, simply create a database with the keyspace &lt;code&gt;airport&lt;/code&gt; and then run the &lt;code&gt;database/create.cql&lt;/code&gt; file to create your tables.&lt;/p&gt;
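
&lt;p&gt;If you’d like to see the shape of the schema before running the script, here is a hypothetical sketch of the two tables. The column names and types are illustrative assumptions, not copied from the script; the authoritative definitions are in &lt;code&gt;database/create.cql&lt;/code&gt;.&lt;/p&gt;

```sql
-- Sketch only: column names/types are assumed, not taken from create.cql.
CREATE TABLE airport.object_location (
    object_id   text,        -- e.g. a flight ID
    object_type text,        -- plane, fuel truck, baggage cart, ...
    latitude    double,
    longitude   double,
    PRIMARY KEY (object_id)
);

CREATE TABLE airport.object_location_history (
    object_id   text,
    object_type text,
    latitude    double,
    longitude   double,
    ts          bigint,      -- epoch millis
    PRIMARY KEY ((object_id), ts)
) WITH CLUSTERING ORDER BY (ts DESC);   -- most recent update listed first
```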

&lt;p&gt;You can create your &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; on one cloud provider even if your &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; database is hosted by another. However, you’ll get better performance if they are both hosted by the same cloud provider and in the same region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a custom role
&lt;/h2&gt;

&lt;p&gt;While it is possible to create an access token that grants you access to all of your databases, I highly recommend creating database-specific tokens based on custom roles. On more than one occasion, I’ve accidentally leaked security tokens into GitHub (errors that I corrected within minutes). The only thing that’s saved my bacon is the fact that the token was restricted to a single database. If you’re not familiar with the process for creating a token, I’ll show you how to do that in this section.&lt;/p&gt;

&lt;p&gt;After you have created the database, click on the “down arrow” next to your organization name, as shown in the red square in the following image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryi7qdut91v0lkzs1iyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryi7qdut91v0lkzs1iyi.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Accessing your organization settings.&lt;/p&gt;

&lt;p&gt;This will bring up a menu. Click on the &lt;code&gt;Organization Settings&lt;/code&gt; menu item. Once the page loads, click on the &lt;code&gt;Role Management&lt;/code&gt; menu item on the left side of the page and press the &lt;code&gt;Add Custom Role&lt;/code&gt; button. Give your role a meaningful name. As you can see in the following image, I named my custom role &lt;code&gt;airport-demo&lt;/code&gt;. Then you can start selecting the permissions for your role. Since this role will be specific to a database built for demo purposes, I tend to be pretty liberal with my permissions. Set your permissions to suit your needs and scroll down to access the rest of the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuon52xzi0fqpcoyb8xka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuon52xzi0fqpcoyb8xka.png" alt="Image description" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Define the permissions for your custom role.&lt;/p&gt;

&lt;p&gt;Select the keyspace permission and table permissions as appropriate. I like to enable all of the APIs for my databases, so I usually select them all. The most important step occurs at the very bottom where you select the single database for which this role applies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogr6662vl2tup78oeaqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogr6662vl2tup78oeaqz.png" alt="Image description" width="800" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4: Finishing the custom role definition.&lt;/p&gt;

&lt;p&gt;When you are satisfied with your role configuration press the &lt;code&gt;Create Role&lt;/code&gt; button.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating your database token
&lt;/h2&gt;

&lt;p&gt;Now you can create a security token that is specific to your custom role and database. Select the &lt;code&gt;Token Management&lt;/code&gt; menu item. Then select the custom role you created earlier and press the &lt;code&gt;Generate Token&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj2a6v9jx310iofs1nr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj2a6v9jx310iofs1nr4.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5: Generating a token.&lt;/p&gt;

&lt;p&gt;You’ll need to exercise some caution here because the dialog box that pops up will never be displayed again. You’ll need this token information in your source code to connect to the database from Astra Streaming. So, you might want to press the &lt;code&gt;Download Token Details&lt;/code&gt; button to download a CSV file with your token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5zslg0s7o97d6txo1rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5zslg0s7o97d6txo1rl.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6: This information will never be shown to you again! Take this opportunity to download it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Astra Streaming components
&lt;/h2&gt;

&lt;p&gt;Now we’re going to shift gears and create the &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; components. Here we will create a tenant, a namespace, and a topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Astra Streaming tenant
&lt;/h2&gt;

&lt;p&gt;An Astra tenant is the top-level object for streaming. You can think of a tenant as akin to an application or a database. Create a new streaming tenant in the &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; web console and name it &lt;code&gt;airport-events&lt;/code&gt;. When the tenant is fully created and running you will see a small green dot to the left of its name and the dashboard for the tenant will show up in your browser, as shown in the following image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ywod1vzweo8kjkpahtl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ywod1vzweo8kjkpahtl.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7: The airport-events tenant dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Astra Streaming namespace
&lt;/h2&gt;

&lt;p&gt;This step is optional because there is a default namespace created for you when you create a tenant. However, I like to keep things organized and isolated so I strongly recommend that you create a namespace for the airport-demo. Click on the &lt;code&gt;Namespaces&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6vlyev3f6s2hhd1kp71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6vlyev3f6s2hhd1kp71.png" alt="Image description" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8: Creating a namespace in Astra Streaming.&lt;/p&gt;

&lt;p&gt;Set the namespace to &lt;code&gt;airport&lt;/code&gt; and press the &lt;code&gt;create&lt;/code&gt; button. It’s just that easy!&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Astra Streaming Topic
&lt;/h2&gt;

&lt;p&gt;Our next step is to create the topic for our object location events. Click on the &lt;code&gt;Topics&lt;/code&gt; tab in the dashboard. By default, you will see both the new &lt;code&gt;airport&lt;/code&gt; namespace and the default namespace listed in the dashboard. Click the &lt;code&gt;Add Topic&lt;/code&gt; button in the &lt;code&gt;airport&lt;/code&gt; namespace to create the new topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsku1liiha2bv0w3zw8b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsku1liiha2bv0w3zw8b7.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 9: Creating a topic in the airport namespace.&lt;/p&gt;

&lt;p&gt;You only need to specify the name of the topic, &lt;code&gt;object-location&lt;/code&gt;, as shown in the next image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felmca3kirrdseud7l74o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felmca3kirrdseud7l74o.png" alt="Image description" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 10: Creating the object-location topic in the airport namespace.&lt;/p&gt;

&lt;p&gt;Press the &lt;code&gt;Save&lt;/code&gt; button. At this point, we have a topic on which we can publish events. However, those events don’t go anywhere just yet. Next, we will create two “sinks” that will consume the events and store them in a database. A “sink” in streaming terms is either an &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; or an &lt;a href="https://www.elastic.co/elasticsearch/" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt; instance. For this article, we will use &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; to store our events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Astra DB sinks
&lt;/h2&gt;

&lt;p&gt;The mechanism that &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; uses to store events in a database is a “sink”. We will need to create two sinks, one for each of our tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the object-location sink
&lt;/h2&gt;

&lt;p&gt;Our first sink will store events in the &lt;code&gt;object_location&lt;/code&gt; table. This table differs from the &lt;code&gt;object_location_history&lt;/code&gt; table in that it does &lt;strong&gt;not&lt;/strong&gt; have the &lt;code&gt;ts&lt;/code&gt; (timestamp) field. Click on the &lt;code&gt;Sinks&lt;/code&gt; tab and then press the &lt;code&gt;Create Sink&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoh45jf748kql8wc56bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoh45jf748kql8wc56bg.png" alt="Image description" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 11: Creating the sink for the object_location table in our Astra DB.&lt;/p&gt;

&lt;p&gt;In Step 1 of the wizard, select the fields as shown in the following image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2r6dsh3895w1nxpd2x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2r6dsh3895w1nxpd2x9.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 12: Step 1-Create the object-location sink.&lt;/p&gt;

&lt;p&gt;Be sure to select the &lt;code&gt;object-location&lt;/code&gt; topic in Step 2 of the wizard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdszh7urmn68vw4uj1ifv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdszh7urmn68vw4uj1ifv.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 13: Step 2-Select the topic.&lt;/p&gt;

&lt;p&gt;Next, you need to provide the connectivity information for your database. All of the information is important, but the database token is probably the most critical piece here. After you have pasted your token in, press the TAB key to exit the token field. This will prompt the &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; website to inspect your database and table and generate the field mappings, as you will see next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0dzo9xf9qpmckj0kd35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0dzo9xf9qpmckj0kd35.png" alt="Image description" width="800" height="790"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 14: Step 3-Specify the database you want to use.&lt;/p&gt;

&lt;p&gt;The field mapping is done automatically for you. Notice that the automatic mapping only concerns itself with the fields in the table you have specified. There is no schema for the overall topic yet because we haven’t sent any messages over the topic (we will get to that in a little bit). I have yet to find a condition where the automatic mapping is incorrect, but it never hurts to check twice! Also, you can now expand the area for the &lt;code&gt;object_location&lt;/code&gt; schema and view the details there as shown in the following image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfw5fv07caogba1dd684.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfw5fv07caogba1dd684.png" alt="Image description" width="800" height="888"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 15: Notice the field mapping is automatically generated for you.&lt;/p&gt;

&lt;p&gt;Press the &lt;code&gt;Create&lt;/code&gt; button to create the sink.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the objloc-history sink
&lt;/h2&gt;

&lt;p&gt;Now to create our second sink, the one that will capture information into the &lt;code&gt;object_location_history&lt;/code&gt; table. You will perform essentially the same steps that you did for the first sink, with some key differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sink Name&lt;/strong&gt;: &lt;code&gt;objloc-history&lt;/code&gt; (names are limited to 18 characters)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic&lt;/strong&gt;: Pick the &lt;code&gt;object-location&lt;/code&gt; topic again. It will feed both of our tables!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Table Name&lt;/strong&gt;: &lt;code&gt;object_location_history&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This time when you enter the database token and TAB out of the field, the mapping will appear a little differently as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fuxxwbrnbtew69ebae8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fuxxwbrnbtew69ebae8.png" alt="Image description" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 16: Notice the field mapping is slightly different for this sink.&lt;/p&gt;

&lt;p&gt;Here you can see that the &lt;code&gt;ts&lt;/code&gt; or &lt;code&gt;timestamp&lt;/code&gt; field (a Java &lt;code&gt;long&lt;/code&gt;) is included in the mapping. Press the &lt;code&gt;Create&lt;/code&gt; button to create this sink.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Java producer
&lt;/h2&gt;

&lt;p&gt;Things that generate messages on a topic are called “&lt;a href="https://pulsar.apache.org/docs/en/security-encryption/#producer" rel="noopener noreferrer"&gt;producers&lt;/a&gt;” in &lt;a href="https://pulsar.apache.org/" rel="noopener noreferrer"&gt;Apache Pulsar&lt;/a&gt; (and by extension in &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt;). We need to create a producer that will send messages to the &lt;code&gt;object-location&lt;/code&gt; topic. Unlike many of Pulsar’s “Hello World”-level examples, we don’t want to send simple string messages; we want to send an object that can be stored in database tables.&lt;/p&gt;

&lt;p&gt;If you take a look at the Java code in the folder for this demo in &lt;a href="https://github.com/jdavies/airport-demo/tree/main/pulsar/skyview/src/main/java/com/datastax/pulsar" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, you will see several files. The main entry point is the &lt;a href="https://github.com/jdavies/airport-demo/blob/main/pulsar/skyview/src/main/java/com/datastax/pulsar/App.java" rel="noopener noreferrer"&gt;App.java&lt;/a&gt; file. It’s a pretty simple file that just instantiates a Flight object and causes the Flight’s run() method to be invoked every second. The interesting work is in the Flight class.&lt;/p&gt;
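
&lt;p&gt;That once-per-second loop can be sketched roughly as follows. The &lt;code&gt;publishLocation()&lt;/code&gt; stand-in and counter are illustrative only, not taken from the repo; the real code invokes the Flight’s run() method, which publishes to the topic.&lt;/p&gt;

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class App {
    // Counts how many updates the simulated producer has published.
    static final AtomicInteger updates = new AtomicInteger();

    // Hypothetical stand-in for Flight.run(); the real method sends an
    // ObjectLocation message to the object-location topic.
    static void publishLocation() {
        updates.incrementAndGet();
    }

    // Mirror App.java's behavior: invoke the producer once per second.
    static ScheduledExecutorService start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(App::publishLocation, 0, 1, TimeUnit.SECONDS);
        return scheduler;
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = start();
        Thread.sleep(3500);   // let a few updates go out
        scheduler.shutdown();
        System.out.println("published " + updates.get() + " location updates");
    }
}
```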

&lt;p&gt;The &lt;code&gt;Flight&lt;/code&gt; class is designed to be a producer: it produces a message on the &lt;code&gt;object-location&lt;/code&gt; topic each time its &lt;code&gt;run()&lt;/code&gt; method is invoked. The constructor of the &lt;code&gt;Flight&lt;/code&gt; class takes care of creating the &lt;code&gt;PulsarClient&lt;/code&gt; connection and then the Pulsar topic producer. The most important thing to note here is the use of a &lt;code&gt;JSONSchema&lt;/code&gt; based on the &lt;code&gt;ObjectLocation&lt;/code&gt; class. This tells Pulsar the exact schema of the object being sent; if a message does not match that schema exactly, you will receive an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public Flight(String flightID, String aircraftType) {
    try {
        // Initialize our location
        Date now = new Date();
        objLoc = new ObjectLocation(flightID, aircraftType, 0.0, 0.0, now.getTime());

        // Create client object
        client = PulsarClient.builder()
            .serviceUrl(BROKER_SERVICE_URL)
            .authentication(AuthenticationFactory.token(Credentials.token))
            .build();

        // Create producer on a topic
        producer = client.newProducer(JSONSchema.of(ObjectLocation.class))
            .topic("persistent://" + STREAM_NAME + "/" + NAMESPACE + "/" + TOPIC)
            .create();
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No messages are sent to the topic until the run() method is invoked. Here is the run() method implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void run() {
    // Send a message to the topic
    try {
        producer.send(objLoc);
        System.out.println(objLoc.toString());
        Date now = new Date();
        updatePosition(objLoc);
        objLoc.setTs(now.getTime());
    } catch (PulsarClientException pcex) {
        pcex.printStackTrace();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;producer.send(objLoc)&lt;/code&gt; call takes a native Java POJO that matches the expected schema and sends it to the topic. Note that you don’t have to serialize your object; the Pulsar libraries are smart enough to take care of that for you! Also, the very first time you run this code (which we will do next), &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; will record the schema for the message type. You can view that schema by navigating to your topic and clicking the &lt;strong&gt;&lt;em&gt;Schema&lt;/em&gt;&lt;/strong&gt; tab, as shown next.&lt;/p&gt;
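
&lt;p&gt;For reference, under the &lt;code&gt;JSONSchema&lt;/code&gt; the serialized message for an &lt;code&gt;ObjectLocation&lt;/code&gt; is plain JSON. A payload might look roughly like this; the values are made up, and the field names assume they match the POJO’s properties:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "flightID": "FL123",
  "aircraftType": "Boeing 737",
  "x": 12.5,
  "y": 7.25,
  "ts": 1667338661000
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;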

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5hnee06tuk8eciqhody.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5hnee06tuk8eciqhody.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 17: Viewing a topic schema.&lt;/p&gt;

&lt;h1&gt;
  
  
  Seeing it in action
&lt;/h1&gt;

&lt;p&gt;If you load the project up in an editor like &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt; you can run the App class to see the application in action. When you do, you will see output like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplmjwd149ggji8iczn6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplmjwd149ggji8iczn6k.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 18: Events generated by the producer.&lt;/p&gt;

&lt;p&gt;From the output above, we can see that the producer is generating events/messages on our topic. Now let’s check our database tables to see the data that was recorded. I’m going to use the CQLShell window on the &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra&lt;/a&gt; website to keep things simple. Let’s start by looking at the &lt;code&gt;object_location&lt;/code&gt; table.&lt;/p&gt;
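
&lt;p&gt;If you’d rather type than click, the equivalent check in the CQL shell is a simple &lt;code&gt;SELECT&lt;/code&gt; (substitute the keyspace you created for this demo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Replace your_keyspace with the keyspace holding the demo tables
SELECT * FROM your_keyspace.object_location;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;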

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcftzitu7jlmw3yon9vza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcftzitu7jlmw3yon9vza.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 19: There should be a single record in your object_location table.&lt;/p&gt;

&lt;p&gt;Remember, the purpose of this table is to record the last known location of an object, a Boeing 737 in this case. Your X and Y coordinates will vary depending on when you stopped the application from creating messages.&lt;/p&gt;

&lt;p&gt;Now let’s take a look at our &lt;code&gt;object_location_history&lt;/code&gt; table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob5hqtvqy7rhuv683an0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob5hqtvqy7rhuv683an0.png" alt="Image description" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 20: Our object_location_history table’s data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it yourself!
&lt;/h2&gt;

&lt;p&gt;As you can see, making real use of &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; is easy to do. Despite the many screenshots and the level of detail provided here, building this application requires just a few simple steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Database&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the &lt;code&gt;object_location&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Create the &lt;code&gt;object_location_history&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Create the custom role (optional)&lt;/li&gt;
&lt;li&gt;Generate a token for the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Create a Streaming Tenant&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the &lt;code&gt;airport&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;Create the &lt;code&gt;object-location&lt;/code&gt; topic&lt;/li&gt;
&lt;li&gt;Create the &lt;code&gt;object-location&lt;/code&gt; sink&lt;/li&gt;
&lt;li&gt;Create the &lt;code&gt;objLoc-history&lt;/code&gt; sink&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Create a Java Topic Producer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s all there is to it. Now you have a recipe for sending and receiving event objects via &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt; and storing them in &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt;. Try this code yourself by creating your free Astra account at &lt;a href="https://auth.cloud.datastax.com/auth/realms/CloudUsers/protocol/openid-connect/auth?client_id=auth-proxy&amp;amp;redirect_uri=https%3A%2F%2Fgatekeeper.auth.cloud.datastax.com%2Fcallback&amp;amp;response_type=code&amp;amp;scope=openid+profile+email&amp;amp;state=adiabyxa4aa6drngHZ0XW2rFfuU%3D" rel="noopener noreferrer"&gt;https://astra.datastax.com&lt;/a&gt; (no credit card required). Your Astra account will work for both &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; and &lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;Astra Streaming&lt;/a&gt;. When you sign up, you’ll get $25.00 worth of free credits each month in &lt;strong&gt;&lt;em&gt;perpetuity&lt;/em&gt;&lt;/strong&gt;! That’s around 80 GB of storage and 20M read/write operations. There’s never been a better time to start building streaming applications, and now with Astra Streaming, it’s never been easier.&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Follow the &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax Tech Blog&lt;/a&gt; for more developer stories, check out our &lt;a href="https://www.youtube.com/channel/UCqA6zOSMpQ55vvguq4Y0jAg" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; channel for tutorials, and follow DataStax Developers on &lt;a href="https://twitter.com/DataStaxDevs" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for the latest news about our developer community.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://astra.dev/3K5t1pR" rel="noopener noreferrer"&gt;DataStax Astra Streaming&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pulsar.apache.org/" rel="noopener noreferrer"&gt;Apache Pulsar&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jdavies/airport-demo/tree/main/pulsar/skyview/src/main/java/com/datastax/pulsar" rel="noopener noreferrer"&gt;GitHub repo for this project&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devops</category>
      <category>database</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Deploy a Netflix Clone with GraphQL and DataStax Astra DB</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Thu, 14 Jul 2022 21:19:01 +0000</pubDate>
      <link>https://dev.to/datastax/deploy-a-netflix-clone-with-graphql-and-datastax-astra-db-3a4c</link>
      <guid>https://dev.to/datastax/deploy-a-netflix-clone-with-graphql-and-datastax-astra-db-3a4c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad4n1sg8963g9pd6okv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad4n1sg8963g9pd6okv5.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is part three of the &lt;a href="https://www.youtube.com/playlist?list=PL2g2h-wyI4Sr09jyk19cSwv2Iwcl1V1Fl" rel="noopener noreferrer"&gt;DataStax app development workshop series&lt;/a&gt;, which guides you through fundamental technologies like Node.js, React, Netlify, and JavaScript to kickstart your app development portfolio. The full series is &lt;a href="https://www.youtube.com/playlist?list=PL2g2h-wyI4Sr09jyk19cSwv2Iwcl1V1Fl" rel="noopener noreferrer"&gt;available on YouTube&lt;/a&gt;. In this post, we’ll guide you through building a Netflix clone backed by &lt;a href="https://astra.dev/3OSFRci" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt; and accessed using the &lt;a href="https://www.datastax.com/dev/graphql?utm_source=datastaxmedium" rel="noopener noreferrer"&gt;Stargate GraphQL API&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’ve been following our full app development series, you should now have a good grasp of &lt;a href="https://datastax.medium.com/build-your-first-app-with-javascript-node-js-and-datastax-astra-db-573abc238583" rel="noopener noreferrer"&gt;how to build a to-do list web app&lt;/a&gt; and &lt;a href="https://medium.com/building-the-open-data-stack/deploy-a-tiktok-clone-with-node-js-netlify-and-datastax-astra-db-b8c013817407" rel="noopener noreferrer"&gt;how to run a TikTok clone&lt;/a&gt;. With these two apps under your belt, you’re ready to take your app development a step further with the final app of the series: a Netflix clone.&lt;/p&gt;

&lt;p&gt;In this workshop, we’ll walk you through a simple Netflix homepage running on &lt;a href="https://astra.dev/3OSFRci" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt;. In the process, you’ll learn how to interact with the database using &lt;a href="https://graphql.org/" rel="noopener noreferrer"&gt;GraphQL&lt;/a&gt;, as well as how to implement infinite scroll and paging. The technologies we’ll be using in this workshop are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitPod&lt;/strong&gt;: the cloud-based IDE we’ll use, which integrates directly with GitHub.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Netlify&lt;/strong&gt;: deploys our Netflix clone to production across a global content distribution network (CDN).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Astra DB&lt;/strong&gt;: our serverless database based on &lt;a href="https://cassandra.apache.org/" rel="noopener noreferrer"&gt;Apache Cassandra®&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stargate GraphQL API&lt;/strong&gt;: creates, reads, updates, and deletes app data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a quick breakdown of the Netflix app you’ll be working with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrt2afes02daz9jpgeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrt2afes02daz9jpgeg.png" alt="Image description" width="800" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Diagram of Netflix clone app.&lt;/p&gt;

&lt;p&gt;In short, the Netflix clone is a Node.js application divided into two parts: the user interface (built using React) and GraphQL, which interacts with the Astra DB database. You’ll learn how to create the database, build tables and then import data using GraphQL. You’ll also get a tour of the React code and learn to deploy the app using Netlify — all within two hours.&lt;/p&gt;

&lt;p&gt;To get you started, let’s dig into the latest addition to your tech stack: GraphQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding GraphQL
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://graphql.org/" rel="noopener noreferrer"&gt;GraphQL&lt;/a&gt; is an open-source language used to fetch data from an API into your application. Originally developed by Facebook in 2012, this flexible query language is designed to pull data more efficiently than REST APIs when building mobile and web apps.&lt;/p&gt;

&lt;p&gt;Typically, REST APIs need to load data from multiple URLs and make several round trips with heavy payloads. GraphQL, in contrast, fetches data from multiple sources all within a single request. This makes it ideal for mobile, IoT devices, and apps running on slow internet connections.&lt;/p&gt;

&lt;p&gt;It also allows you to fetch the &lt;em&gt;exact&lt;/em&gt; data you need (without over-fetching), which saves on bandwidth and improves the performance of your app. Another advantage is there’s no need to maintain versions with GraphQL. You can add new fields and deprecate older ones so you can evolve your API without breaking existing queries.&lt;/p&gt;
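
&lt;p&gt;For example, a client that only needs two fields can request exactly those fields and nothing else; the type and field names here are illustrative, not from a specific schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The response contains only the fields requested -- no over-fetching
query {
  movies {
    title
    genre
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;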

&lt;p&gt;Overall, GraphQL makes APIs more flexible and developer-friendly — but there’s a catch. For developers who are already familiar with REST APIs, switching to GraphQL can present quite the learning curve. It can also add some complexity to API management and server-side maintenance, since GraphQL shifts most of the data query work onto the server to make the client-side easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplifying GraphQL with Stargate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt; is an open-source data gateway that makes it simple to query any Cassandra database using GraphQL types, queries, and mutations. When you add the &lt;a href="https://www.datastax.com/dev/graphql?utm_source=datastaxmedium" rel="noopener noreferrer"&gt;Stargate GraphQL API&lt;/a&gt; to a Cassandra deployment, it scans the database and automatically creates HTTP endpoints with GraphQL queries and mutations for the objects it finds.&lt;/p&gt;

&lt;p&gt;You can also use the API to create new database tables directly, and there’s a built-in GraphQL Playground servlet for you to easily prototype and tinker around with your mutations and queries.&lt;/p&gt;
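
&lt;p&gt;To give you a feel for what you can try in the Playground: the Stargate GraphQL API generates mutations named after your tables, so inserting a row into a hypothetical &lt;code&gt;reviews&lt;/code&gt; table looks roughly like this (the table and field names are placeholders, not part of the workshop):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation {
  insertreviews(value: { title: "Inception", rating: 5 }) {
    value {
      title
      rating
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;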

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lnwk4b0zdubkm447img.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lnwk4b0zdubkm447img.png" alt="Image description" width="700" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: GraphQL Playground in a nutshell.&lt;/p&gt;

&lt;p&gt;With the Stargate GraphQL API, you can get the data you need into your apps — directly from Cassandra. And, since part of the mission with Stargate is to make Cassandra easier for every developer, we’ve made it available in DataStax Astra by default. So if you have a free Astra database, then you can use the Stargate GraphQL API, which is precisely what you’ll learn how to do in this workshop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up and deploy a Netflix clone in 50 minutes
&lt;/h2&gt;

&lt;p&gt;To wrap up our exciting app workshop series, we’ll hand you a sample Netflix clone app pre-built in React and guide you as you learn to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a free database with Astra DB&lt;/li&gt;
&lt;li&gt;Learn about GraphQL and create your data model&lt;/li&gt;
&lt;li&gt;Insert the dataset using GraphQL&lt;/li&gt;
&lt;li&gt;Deploy your application to production with Netlify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the architecture we’ll be using in this workshop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7jftt5vy9nnajmqzrng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7jftt5vy9nnajmqzrng.png" alt="Image description" width="700" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Netflix clone app architecture.&lt;/p&gt;

&lt;p&gt;To start, make sure you’ve &lt;a href="https://astra.dev/3OSFRci" rel="noopener noreferrer"&gt;signed up for your free Astra DB account&lt;/a&gt;, then jump into the &lt;a href="https://www.youtube.com/watch?v=yyc4ZtXmAvQ" rel="noopener noreferrer"&gt;workshop video on YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the source code, slides, exercises, and the step-by-step guide go to our &lt;a href="https://github.com/datastaxdevs/workshop-graphql-netflix" rel="noopener noreferrer"&gt;DataStax Developers repo on GitHub&lt;/a&gt;. Lastly, if you get stuck or simply want to chat with our community, &lt;a href="https://discord.com/invite/pPjPcZN" rel="noopener noreferrer"&gt;join our Discord&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And with that, we conclude our summer workshop series and the third app in your budding frontend development portfolio. To keep learning about open-source technologies and how to use them, simply register for any of our free &lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt; to get started on your next big app.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Explore more tutorials on our &lt;a href="https://www.youtube.com/playlist?list=PL2g2h-wyI4Sr09jyk19cSwv2Iwcl1V1Fl" rel="noopener noreferrer"&gt;DataStax Developers YouTube channel&lt;/a&gt; and &lt;a href="https://docs.google.com/forms/u/2/d/e/1FAIpQLSfEtzzVauuFpFJWUiepYndqchBpNsaOwm6raPJDsMt9nTvMbw/viewform" rel="noopener noreferrer"&gt;subscribe to our event alert&lt;/a&gt; to get notified about new developer workshops. For exclusive posts on all things data: Cassandra, streaming, Kubernetes, and more; follow &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=hSfnWL-EQzU" rel="noopener noreferrer"&gt;Build a Netflix clone with GraphQL, React and a NoSQL DB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=g8COh40v2jU" rel="noopener noreferrer"&gt;Tutorial: Code a Netflix Clone with GraphQL Pagination&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://workshop-graphql-netflix.netlify.app/" rel="noopener noreferrer"&gt;Live demo Netflix app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.com/invite/pPjPcZN" rel="noopener noreferrer"&gt;Join our Discord: Fellowship of the (Cassandra) Rings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://astra.dev/3OSFRci" rel="noopener noreferrer"&gt;Astra DB — Managed Apache Cassandra as a Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/blog/getting-started-graphql-and-apache-cassandra" rel="noopener noreferrer"&gt;Getting started with GraphQL and Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/products/datastax-astra/apis" rel="noopener noreferrer"&gt;Stargate APIs | GraphQL, REST, Document&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://academy.datastax.com/" rel="noopener noreferrer"&gt;DataStax Academy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://datastax.com/dev/certifications" rel="noopener noreferrer"&gt;DataStax Certifications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>graphql</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to Build and Deploy a Serverless Game with DataStax Astra DB, JAMStack, Stargate, and Netlify</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Tue, 12 Jul 2022 17:48:58 +0000</pubDate>
      <link>https://dev.to/datastax/how-to-build-and-deploy-a-serverless-game-with-datastax-astra-db-jamstack-stargate-and-netlify-2138</link>
      <guid>https://dev.to/datastax/how-to-build-and-deploy-a-serverless-game-with-datastax-astra-db-jamstack-stargate-and-netlify-2138</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs22ue6k10t0dyunkpgan.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs22ue6k10t0dyunkpgan.jpeg" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In one of our many free tutorials on &lt;a href="https://www.youtube.com/c/DataStaxDevs/videos" rel="noopener noreferrer"&gt;DataStax Developers YouTube channel&lt;/a&gt;, we walked you through &lt;a href="https://www.youtube.com/watch?v=mL5cWQnKJkc" rel="noopener noreferrer"&gt;how to build and deploy a serverless game&lt;/a&gt; — BattleStax — an online party game that you can enjoy with your friends.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;BattleStax is implemented as a JAMStack app that uses &lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt;, Netlify, &lt;a href="https://astra.dev/3yOixH7" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;, and &lt;a href="https://github.com/DataStax-Academy/battlestax" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; to demonstrate how to build and deploy an application using modern, scalable architectures. In this post, we’ll break down the video to help you quickly create your own BattleStax game using React and Redux — implemented with a CI/CD pipeline, global content delivery network (CDN), and &lt;a href="https://cassandra.apache.org/" rel="noopener noreferrer"&gt;Apache Cassandra®.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The technologies we will take you through are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt; for the frontend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redux&lt;/strong&gt; for state management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stargate&lt;/strong&gt; and &lt;strong&gt;Astra DB&lt;/strong&gt; for the backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Netlify&lt;/strong&gt; for deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if you’ve never heard of JAMStack or worked with JavaScript, we have everything &lt;a href="https://github.com/DataStax-Academy/battlestax" rel="noopener noreferrer"&gt;preloaded for you in the cloud&lt;/a&gt; so you can follow along with the exercises, information, and solutions provided. We hope that by the end of the tutorial, you will have new knowledge on how to work with JAMStack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt27qhys1wcod4mkp4oq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt27qhys1wcod4mkp4oq.png" alt="Image description" width="700" height="349"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1: A screenshot of the globally scalable hit party game you will get to deploy publicly.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is JAMStack?
&lt;/h1&gt;

&lt;p&gt;JAMStack is the new standard architecture for the web; JAM stands for &lt;strong&gt;J&lt;/strong&gt;avaScript, &lt;strong&gt;A&lt;/strong&gt;PIs, and &lt;strong&gt;M&lt;/strong&gt;arkup. It takes all the modern technologies we have and puts them together into a single platform for taking an application from start to production. Because it has full continuous integration built in, it wins on &lt;strong&gt;security, scalability, performance, maintainability, and portability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Using Git workflows and modern build tools, pre-rendered content is served to a CDN and made dynamic through APIs and serverless functions. Technologies in the stack include JavaScript frameworks, Static Site Generators, Headless CMSs, and CDNs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsskb31slfywjvqs31ez8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsskb31slfywjvqs31ez8.png" alt="Image description" width="444" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: What is JAMStack, &lt;a href="http://www.jamstack.org/" rel="noopener noreferrer"&gt;www.JamStack.org&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;JAMStack uses static pages to deliver content rather than rendering content at runtime. Traditional servers build a website and render each page in real time as users request it. With JAMStack, you pre-render all of the pages and serve static content. The site can still be dynamic, though, through serverless functions that call out to backend microservices. Microservice architecture enables the rapid, frequent, and reliable delivery of large, complex applications.&lt;/p&gt;
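
&lt;p&gt;Those serverless functions are refreshingly small. On Netlify, for example, a function is just a JavaScript module exporting a handler; a minimal sketch (the file path and response body here are illustrative) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// netlify/functions/hello.js -- Netlify deploys this as an HTTP endpoint
exports.handler = async (event) =&gt; {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "hello from a serverless function" }),
  };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;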

&lt;p&gt;Essentially, we’re pre-building and pre-rendering all of these pages at build time and pushing them out to the CDN. If you’re not familiar with CDNs, one way to understand them is that they take your content and automatically distribute it across edge servers around the world.&lt;/p&gt;

&lt;p&gt;So instead of having a single web server hosting your site, with every user connecting to that one server, your content is spread around the globe on this network, making delivery much faster.&lt;/p&gt;

&lt;p&gt;If you want to understand more, check out our &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_JAM.md" rel="noopener noreferrer"&gt;JAMStack guide on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why are we using Apache Cassandra?
&lt;/h1&gt;

&lt;p&gt;Cassandra is a distributed, NoSQL database that is used by the vast majority of Fortune 100 companies. Facebook, Instagram, and Netflix all use Apache Cassandra. If you go on your phone and open an app, chances are, you’re already using it.&lt;/p&gt;

&lt;p&gt;Apache Cassandra helps companies process large volumes of fast-moving data in a reliable, scalable way. Known for its performance at scale, Cassandra is regarded as the high-performance sports car of the database world, but it can be finicky if you don’t know how to run it properly. Cassandra is essentially indefinitely scalable: there is no master node, because Cassandra is a peer-to-peer system. You can have two-thirds of your cluster down and still be up and available to write and read data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.datastax.com/en/astra/docs/" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;, built on the best distribution of Cassandra, provides the ability to develop and deploy data-driven applications with a cloud-native service, without the hassles of database and infrastructure administration. By automating tuning and configuration, DataStax Astra radically simplifies database operations.&lt;/p&gt;

&lt;p&gt;Astra DB is basically Cassandra in the cloud. What is really helpful is that, from the development standpoint, even using just the free tier, you can load it up, hook up your Cassandra database with Astra and Netlify, and make your app work. And if you decide to go open source and run your own Cassandra, you can do that too with the same driver and code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Tutorial Overview
&lt;/h1&gt;

&lt;p&gt;You will find step-by-step instructions with screenshots all on &lt;a href="https://github.com/DataStax-Academy/battlestax" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. We’ve pre-baked the IDE, configurations, and dependencies so you don’t have to download anything, and you can complete the workshop entirely in the cloud. Just launch GitPod from the repo and jump in.&lt;/p&gt;

&lt;p&gt;Begin by following the instructions in our &lt;a href="https://github.com/DataStax-Academy/battlestax" rel="noopener noreferrer"&gt;BattleStax repo on GitHub&lt;/a&gt; to set up and deploy your first app, create your BattleStax repository, and set up your Netlify account.&lt;/p&gt;

&lt;p&gt;Here is an overview of the four labs we will walk you through on GitHub.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bootstrapping: &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step00.md" rel="noopener noreferrer"&gt;Setup and deploy your first app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Lab 1 — &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_Netlify.md" rel="noopener noreferrer"&gt;Introduction to Netlify
Expose your “hello world” API&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Lab 2 — &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_Astra_Stargate.md" rel="noopener noreferrer"&gt;What are DataStax Astra and Stargate?
Implementing a Serverless Data API&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Lab 3 — &lt;a href="https://github.com/DataStax-Academy/battlestax#:~:text=What%20are%20DataStax%20Astra%20and%20Stargate" rel="noopener noreferrer"&gt;What are Redux and React?
Create client state with Redux&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Lab 4 — &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md" rel="noopener noreferrer"&gt;Bind Redux to the User Interface&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ready? Let’s go!&lt;/p&gt;

&lt;h1&gt;
  
  
  Lab 1: Intro to Netlify + Expose your “hello world” API
&lt;/h1&gt;

&lt;p&gt;Let’s talk about &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_Netlify.md" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt; and what it can do for you (or, feel free to jump ahead to &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md" rel="noopener noreferrer"&gt;exposing your “hello world” API&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;You may already have used Netlify’s build, package, deploy, and host features to &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step00.md" rel="noopener noreferrer"&gt;set up and deploy your empty application&lt;/a&gt;. Netlify can also configure your builds and manage atomic deploys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1fek9v27h12g5d36p92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1fek9v27h12g5d36p92.png" alt="Image description" width="700" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Netlify can configure builds and deploy sites.&lt;/p&gt;

&lt;p&gt;Netlify lets you deploy serverless Lambda functions without an AWS account, and with function management handled directly within Netlify. This means you can create serverless functions in your application that you can access seamlessly in your local environment, or via a global CDN (once deployed). You can do this without an actual server to deploy code to.&lt;/p&gt;

&lt;p&gt;Remember, the pages themselves are static; any dynamic behavior has to come from somewhere else. That’s where Netlify functions come in: serverless functions give your static site a backend.&lt;/p&gt;
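&lt;p&gt;A Netlify function is just a JavaScript file that exports a handler. Here is a minimal sketch (the filename and response payload are illustrative, not the workshop’s actual code):&lt;/p&gt;

```javascript
// functions/hello.js (illustrative path): Netlify deploys each exported
// handler as its own serverless endpoint, e.g. /.netlify/functions/hello
const handler = async (event, context) => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ hello: "world" }),
});

exports.handler = handler; // the export name the Netlify runtime looks for
```

&lt;p&gt;Running &lt;code&gt;netlify dev&lt;/code&gt; locally serves the same function you will later deploy to the CDN.&lt;/p&gt;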

&lt;p&gt;If you want to dig in and learn more, click here for detailed documentation: &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_Netlify.md" rel="noopener noreferrer"&gt;what can Netlify do for you&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s get coding. You have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Option A: Cloud-based GitPod &lt;strong&gt;(recommended)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Option B: Local development environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We recommend &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md" rel="noopener noreferrer"&gt;Option A&lt;/a&gt;. Gitpod is a cloud-based IDE built on Eclipse Theia, very similar to VS Code. You authenticate with your GitHub account, and Gitpod initializes your workspace and builds the solution. Click here to &lt;a href="https://gitpod.io/from-referrer/" rel="noopener noreferrer"&gt;launch Gitpod&lt;/a&gt; and initialize your environment.&lt;/p&gt;

&lt;p&gt;Alternatively, you can use your laptop, which is &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md" rel="noopener noreferrer"&gt;explained in the documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Lab 1 will cover how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md#1-setup-your-environment" rel="noopener noreferrer"&gt;Set up your environment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md#2-make-a-serverless-endpoint-using-netlify-functions" rel="noopener noreferrer"&gt;Make a serverless endpoint using Netlify functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md#3-merge-back-to-master" rel="noopener noreferrer"&gt;Merge back to master&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step01.md#4-check-your-deployment-in-netlify" rel="noopener noreferrer"&gt;Check your deployment in Netlify&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Lab 2: DataStax Astra DB and Stargate + Implementing a Serverless Data API
&lt;/h1&gt;

&lt;p&gt;We’ve covered Astra DB briefly in the section above and now we’ll introduce &lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt;, an open-source data gateway to abstract Cassandra-specific concepts from app developers and support different API options. It is deployed between client applications and a database to provide an abstraction layer that can be used to shape your data access to fit your application’s needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya0pkr6b351bj3ljgoo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya0pkr6b351bj3ljgoo4.png" alt="Image description" width="700" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4: Stargate is a framework used to customize all aspects of data access.&lt;/p&gt;

&lt;p&gt;Let’s implement a CRUD API in Astra DB. In &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step02.md" rel="noopener noreferrer"&gt;Lab 2 of the BattleStax tutorial&lt;/a&gt;, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Astra DB database to store game documents&lt;/li&gt;
&lt;li&gt;Create test cases to check that our API call is working correctly&lt;/li&gt;
&lt;li&gt;Build the API call to Astra DB to create a game document, based on the requirements from our test&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefprt22dk315ugwwztct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefprt22dk315ugwwztct.png" alt="Image description" width="700" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5: With Document API, save and search schemaless JSON documents in Cassandra&lt;/p&gt;
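&lt;p&gt;As a sketch of the Lab 2 API call, here is an illustrative helper that builds a Stargate Document API request to store a game document (the base URL, namespace, and document shape are assumptions; substitute your own Astra DB values):&lt;/p&gt;

```javascript
// Illustrative helper: builds the pieces of a Stargate Document API call
// that stores a schemaless game document under a chosen id.
function buildCreateGameRequest(baseUrl, namespace, token, gameId, doc) {
  return {
    url: `${baseUrl}/v2/namespaces/${namespace}/collections/games/${gameId}`,
    options: {
      method: "PUT", // PUT creates or replaces the document at this id
      headers: {
        "X-Cassandra-Token": token, // Stargate auth header
        "Content-Type": "application/json",
      },
      body: JSON.stringify(doc),
    },
  };
}
```

&lt;p&gt;A Netlify function would then pass &lt;code&gt;url&lt;/code&gt; and &lt;code&gt;options&lt;/code&gt; to &lt;code&gt;fetch&lt;/code&gt; to perform the call.&lt;/p&gt;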

&lt;h1&gt;
  
  
  Lab 3: Intro to Redux and React + Creating a client state with Redux
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://javascript.plainenglish.io/the-only-introduction-to-redux-and-react-redux-youll-ever-need-8ce5da9e53c6" rel="noopener noreferrer"&gt;React&lt;/a&gt; is a fast, component-based, front-end JavaScript library. React typically runs in your browser and renders single-page web application user interfaces. &lt;a href="https://javascript.plainenglish.io/the-only-introduction-to-redux-and-react-redux-youll-ever-need-8ce5da9e53c6" rel="noopener noreferrer"&gt;Redux&lt;/a&gt; is a JavaScript library that is used mostly for application state management. Redux maintains the state of an entire application in a single immutable state tree (object), which can’t be changed directly. When something changes, a new object is created (using actions and reducers).&lt;/p&gt;

&lt;p&gt;React and Redux work together by letting you build components that react to changes of the application state. Components affected by a state change are re-rendered with the new data. Components also dispatch actions, for example when a button is clicked.&lt;/p&gt;
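&lt;p&gt;The action-and-reducer flow can be sketched without any libraries. This illustrative reducer (the action and field names are made up, not the tutorial’s actual gameSlice) never mutates state; every change produces a new object:&lt;/p&gt;

```javascript
// A tiny Redux-style reducer. State is treated as immutable: each case
// returns a NEW state object rather than modifying the old one.
const initialState = { gameCode: null, players: [] };

function gameReducer(state = initialState, action) {
  switch (action.type) {
    case "game/setGameCode":
      return { ...state, gameCode: action.payload };
    case "game/addPlayer":
      return { ...state, players: [...state.players, action.payload] };
    default:
      return state; // unknown actions leave state untouched
  }
}
```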

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffalp9f262d1gpzykflbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffalp9f262d1gpzykflbs.png" alt="Image description" width="700" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6: How React and Redux work together.&lt;/p&gt;

&lt;p&gt;Learn more by reading our &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_Redux_React.md" rel="noopener noreferrer"&gt;Redux and React documentation here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step03.md" rel="noopener noreferrer"&gt;Lab 3&lt;/a&gt;, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build out the game slice boilerplate code by implementing one reducer, exporting an action and a selector&lt;/li&gt;
&lt;li&gt;Run tests to try out the functionality of our game slice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will cover how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step03.md#1-building-a-gameslice" rel="noopener noreferrer"&gt;Build a gameSlice&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step03.md#2-generate-an-action-and-a-selector" rel="noopener noreferrer"&gt;Generate an action and a selector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step03.md#3-create-an-async-action" rel="noopener noreferrer"&gt;Create an Async Action&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step03.md#4-running-tdd-tests" rel="noopener noreferrer"&gt;Running TDD tests&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Lab 4: Bind Redux to the User Interface
&lt;/h1&gt;

&lt;p&gt;In this final lab, we connect the UI we’ve already built with React to our game state.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md" rel="noopener noreferrer"&gt;Lab 4&lt;/a&gt; we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build out the NewGame.js component by connecting it to Redux.&lt;/li&gt;
&lt;li&gt;Build a test to try out the functionality of NewGame.js&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will cover how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#1-import-our-redux-artifacts" rel="noopener noreferrer"&gt;Import our Redux artifacts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#2-use-hooks-to-connect-our-compoonent-to-our-redux-store" rel="noopener noreferrer"&gt;Use hooks to connect our component to our Redux Store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#3-updating-the-ui" rel="noopener noreferrer"&gt;Updating the UI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#4-running-tdd-tests" rel="noopener noreferrer"&gt;Running TDD tests&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#5-merge-back-to-master" rel="noopener noreferrer"&gt;Merge back to master&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax/blob/master/README_step04.md#6-verify-your-deployment-in-netlify" rel="noopener noreferrer"&gt;Verify your deployment in Netlify&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1q4k81fdupqipm87zzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1q4k81fdupqipm87zzv.png" alt="Image description" width="700" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7: Hooks in React to extract state information from Redux store.&lt;/p&gt;
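&lt;p&gt;Under the hood, hooks like &lt;code&gt;useSelector&lt;/code&gt; work by subscribing to the store and re-running when the state changes. A framework-free sketch of that subscription mechanism (illustrative only, in the spirit of Redux’s own createStore):&lt;/p&gt;

```javascript
// A miniature store: dispatch runs the reducer, then notifies subscribers
// (conceptually, the React components that need to re-render).
function createMiniStore(reducer) {
  let state = reducer(undefined, { type: "@@INIT" });
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (listener) => listeners.push(listener),
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((listener) => listener()); // the "re-render"
    },
  };
}

// A tiny reducer so the sketch is self-contained.
function counter(state = 0, action) {
  return action.type === "increment" ? state + 1 : state;
}
```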

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp95oovmia3kyeav4w36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp95oovmia3kyeav4w36.png" alt="Image description" width="500" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8: BattleStax final display.&lt;/p&gt;

&lt;p&gt;If you’ve followed all the steps in the GitHub tutorial, by this point you will have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed an application through a &lt;strong&gt;CI/CD&lt;/strong&gt; pipeline with &lt;strong&gt;GitHub Actions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Learned how to use &lt;strong&gt;serverless&lt;/strong&gt; functions that are globally available via &lt;strong&gt;Netlify&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Backed those functions with the top distributed NoSQL database, Apache Cassandra, on &lt;strong&gt;DataStax Astra DB&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it was possible to do all this without touching a server, deploying back-end code, or needing to talk to your IT friends. Enjoy creating your next viral JAMStack app!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Explore more tutorials on our &lt;a href="https://www.youtube.com/c/DataStaxDevs/videos" rel="noopener noreferrer"&gt;DataStax Developers YouTube channel&lt;/a&gt; and &lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;subscribe to our event alerts&lt;/a&gt; to get notified about new developer workshops. For exclusive posts on all things data (Cassandra, streaming, Kubernetes, and more), follow &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=mL5cWQnKJkc" rel="noopener noreferrer"&gt;YouTube: How to Build &amp;amp; Deploy a Serverless Game&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Academy/battlestax" rel="noopener noreferrer"&gt;GitHub Tutorial: JamStack + React Serverless Game Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataStax-Examples/battlestax/wiki/LAB8_Verify-and-Deploy-in-Netlify" rel="noopener noreferrer"&gt;Super Secret Full Game Option&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.com/invite/pPjPcZN" rel="noopener noreferrer"&gt;Join our Discord: Fellowship of the (Cassandra) Rings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://astra.dev/3yOixH7" rel="noopener noreferrer"&gt;Astra DB — Managed Apache Cassandra as a Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://community.datastax.com/index.html" rel="noopener noreferrer"&gt;DataStax Community Platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate APIs | GraphQL, REST, Document&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://auth.cloud.datastax.com/auth/realms/CloudUsers/protocol/saml/clients/absorb" rel="noopener noreferrer"&gt;DataStax Academy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/dev/certifications" rel="noopener noreferrer"&gt;DataStax Certifications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building Reactive Java Applications with Spring Framework</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Wed, 29 Jun 2022 02:47:02 +0000</pubDate>
      <link>https://dev.to/datastax/building-reactive-java-applications-with-spring-framework-5ca</link>
      <guid>https://dev.to/datastax/building-reactive-java-applications-with-spring-framework-5ca</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c4ohrcuzjmfheqtsgdo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c4ohrcuzjmfheqtsgdo.jpeg" alt="Image description" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In one of our many free tutorials on &lt;a href="https://www.youtube.com/c/DataStaxDevs/videos" rel="noopener noreferrer"&gt;DataStax Developers YouTube channel&lt;/a&gt;, we walked you through &lt;a href="https://www.youtube.com/watch?v=1aRbndIcXV4" rel="noopener noreferrer"&gt;how to build a reactive implementation of Spring PetClinic&lt;/a&gt; in Apache Cassandra® using Spring WebFlux. The full series is &lt;a href="https://github.com/datastaxdevs/workshop-spring-reactive" rel="noopener noreferrer"&gt;available on YouTube&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’re a Java developer who uses the Spring ecosystem, you’ve probably seen the &lt;a href="https://github.com/datastaxdevs/workshop-spring-reactive" rel="noopener noreferrer"&gt;Spring Pet Clinic&lt;/a&gt;. In this workshop, we will walk you through a new reactive implementation of the Pet Clinic backend that uses Spring WebFlux and Apache &lt;a href="https://www.datastax.com/what-is/cassandra" rel="noopener noreferrer"&gt;Cassandra&lt;/a&gt;® (via &lt;a href="https://astra.dev/38RaiQI" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The cloud-native database-as-a-service built on Cassandra fits the highly concurrent, non-blocking nature of our reactive application. We’ll do all of our work in the cloud with Gitpod’s open-source, zero-install and collaborative development environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk4tjo73g3kpdl6qri41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk4tjo73g3kpdl6qri41.png" alt="Image description" width="700" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: A REST API using the Spring framework.&lt;/p&gt;

&lt;p&gt;The steps we will take you through are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up Astra database&lt;/li&gt;
&lt;li&gt;Theory: synchronous vs. asynchronous vs. reactive programming&lt;/li&gt;
&lt;li&gt;Using the Gitpod platform&lt;/li&gt;
&lt;li&gt;Introducing the Spring Boot and WebFlux frameworks&lt;/li&gt;
&lt;li&gt;Reviewing backend code&lt;/li&gt;
&lt;li&gt;Starting frontend application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting up your free Astra database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://astra.dev/38RaiQI" rel="noopener noreferrer"&gt;DataStax Astra DB&lt;/a&gt;, built on the best distribution of Cassandra, provides the ability to develop and deploy data-driven applications with a cloud-native service, without the hassles of database and infrastructure administration. By automating tuning and configuration, DataStax Astra radically simplifies database operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp8figtcpo16fj5w414f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp8figtcpo16fj5w414f.png" alt="Image description" width="700" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Datastax Astra DB Benefits.&lt;/p&gt;

&lt;p&gt;Follow the &lt;a href="https://github.com/datastaxdevs/workshop-spring-reactive" rel="noopener noreferrer"&gt;step-by-step instructions&lt;/a&gt; to &lt;a href="https://auth.cloud.datastax.com/auth/realms/CloudUsers/protocol/openid-connect/registrations?client_id=auth-proxy&amp;amp;response_type=code&amp;amp;scope=openid+profile+email&amp;amp;redirect_uri=https://astra.datastax.com/welcome" rel="noopener noreferrer"&gt;create your Astra database&lt;/a&gt;, and once your database is ready, you can copy your credentials over to &lt;a href="https://github.com/datastaxdevs/workshop-spring-reactive" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Synchronous vs. asynchronous vs. reactive programming
&lt;/h1&gt;

&lt;p&gt;Most Java developers use &lt;strong&gt;synchronous programming&lt;/strong&gt;. When you initiate a session, you execute a query and wait for the response. You send the parameters to the API and the driver creates the query; binding simply maps the parameters into the query. You execute the query and get back an object called a ResultSet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde34pon9wpsz07fx3x79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde34pon9wpsz07fx3x79.png" alt="Image description" width="700" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Synchronous queries weaknesses.&lt;/p&gt;

&lt;p&gt;The issue with synchronous programming is that you need to wait. It can take a lot of time if you are querying for a lot of data, or a big cluster. Although synchronous programming is very simple, it can block communication. This means that nothing else in the application proceeds until the result from the query is returned. The application blocks for the entire round trip, from when the query is first sent to the database until the results are retrieved and returned to the application.&lt;/p&gt;

&lt;p&gt;The advantage of synchronous queries is that it is simple to tell when a query completes, so the execution logic of the application is easy to follow. The tradeoff is poor application throughput.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dmq92yyyxeqxhm04w6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dmq92yyyxeqxhm04w6d.png" alt="Image description" width="700" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4: Asynchronous queries weaknesses.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;asynchronous&lt;/strong&gt; execute call does not block for results. With asynchronous queries, you still prepare the query, bind the parameters to it, and execute it.&lt;/p&gt;

&lt;p&gt;But instead of waiting for the result, the driver immediately gives you an object called a CompletionStage: a future returned from the asynchronous execute call. A future is a placeholder object that stands in for the result until the result is returned from the database. Depending on the driver and the feature set of the language, this future can facilitate asynchronous processing of results, which typically allows high throughput. However, because you can only process the result when a callback fires, this style can make the flow of the code harder to follow.&lt;/p&gt;
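&lt;p&gt;The contrast is easiest to see in code. The sketch below uses JavaScript promises for brevity (the workshop’s drivers are Java, but the control-flow idea is the same; the query and timings are made up):&lt;/p&gt;

```javascript
// Stand-in for a driver call: resolves after a simulated round trip.
function queryDb(cql) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`rows for ${cql}`), 10)
  );
}

async function syncStyle() {
  // Synchronous style: execution waits here until the rows come back.
  const rows = await queryDb("SELECT * FROM pets");
  return rows;
}

function asyncStyle() {
  // Asynchronous style: we get a future (a Promise) back immediately and
  // attach a callback; other work can proceed in the meantime.
  return queryDb("SELECT * FROM pets").then((rows) => rows.toUpperCase());
}
```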

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8baxdt60hfhq0jyqf51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8baxdt60hfhq0jyqf51.png" alt="Image description" width="700" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5: The Reactive Manifesto.&lt;/p&gt;

&lt;p&gt;Now, coming to the &lt;a href="https://www.reactivemanifesto.org/" rel="noopener noreferrer"&gt;Reactive Manifesto&lt;/a&gt;: reactive programming offers what is missing from the synchronous-only stack by making the service responsive. The system responds in a timely manner and stays responsive even in the face of failure. It also scales out elastically.&lt;/p&gt;

&lt;p&gt;When it comes to huge volumes of data or many concurrent users, we often need asynchronous processing to make our systems fast and responsive. In Java, a traditionally object-oriented language, asynchronicity can become troublesome and make the code hard to understand and maintain. Reactive programming is especially beneficial in this ‘purely’ object-oriented environment, as it simplifies dealing with asynchronous flows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmynvmpb81zay3n5u85m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmynvmpb81zay3n5u85m2.png" alt="Image description" width="700" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6: Reactive Queries.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the GitPod platform
&lt;/h1&gt;

&lt;p&gt;Gitpod is an open-source Kubernetes application providing prebuilt, collaborative development environments in your browser. The workshop provides step-by-step screenshots for launching Gitpod and building the Spring PetClinic Reactive backend application, with a Gitpod setup created by our special workshop guest, Moritz Eysholdt from &lt;a href="https://www.typefox.io/" rel="noopener noreferrer"&gt;TypeFox&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqrvome05lvct7hxwcu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqrvome05lvct7hxwcu1.png" alt="Image description" width="700" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7: When Gitpod finishes building the app, a new tab will open in your browser showing the following.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introducing the Spring Boot and WebFlux frameworks
&lt;/h1&gt;

&lt;p&gt;With its simple abstractions, Reactor is a popular reactive library for Java from the Spring team. The Spring Framework itself is a Java platform that provides comprehensive infrastructure support for developing Java applications. Spring enables you to build applications from “plain old Java objects” (POJOs) and to apply enterprise services non-invasively to POJOs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa61t20qbv82owe99005p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa61t20qbv82owe99005p.png" alt="Image description" width="700" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8: What Spring can do.&lt;/p&gt;

&lt;p&gt;We will also introduce &lt;a href="https://medium.com/building-the-open-data-stack/building-microservices-with-spring-data-cassandra-and-stargate-io-613f0aff8188" rel="noopener noreferrer"&gt;Spring Boot&lt;/a&gt;, a tool that makes developing web applications and microservices with Spring Framework faster and easier. Spring Boot can create stand-alone Spring applications (without deploying WAR files), simplify your build configuration, and automatically configure Spring and 3rd party libraries. Spring Boot also has production-ready features such as metrics, health checks, and externalized configuration.&lt;/p&gt;

&lt;p&gt;At the heart of the Spring Framework are two fundamental features: inversion of control (IoC) and dependency injection (DI).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inversion of control essentially transfers the control of objects to the Spring Framework. Unlike traditional programming, where custom code makes calls to a library, IoC allows the framework to control the flow of a program. This keeps Java classes independent of each other for increased modularity and extensibility.&lt;/li&gt;
&lt;li&gt;Dependency injection is a pattern in Spring that’s used to implement IoC, where the Spring container is in charge of “injecting” objects into the right dependencies. This allows for the loose coupling of components and shifts the responsibility of managing components onto the container.&lt;/li&gt;
&lt;/ul&gt;
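&lt;p&gt;The same idea, sketched in a few lines of JavaScript rather than Spring’s Java annotations (class and method names here are purely illustrative): the service never constructs its own dependency; the “container” wires it in.&lt;/p&gt;

```javascript
// Constructor injection in miniature.
class InMemoryPetRepository {
  findAll() { return ["Leo", "Basil"]; }
}

class PetService {
  constructor(repository) {
    this.repository = repository; // dependency is injected, not constructed
  }
  listPets() { return this.repository.findAll(); }
}

// A trivial stand-in for what the Spring container does automatically:
// it decides which concrete repository the service receives.
const petService = new PetService(new InMemoryPetRepository());
```

&lt;p&gt;Swapping the repository for a Cassandra-backed one changes a single wiring line, not the service itself; that loose coupling is the point of IoC and DI.&lt;/p&gt;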

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xrvzocbox7cl0stitty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xrvzocbox7cl0stitty.png" alt="Image description" width="700" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 9: How Spring Boot works.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reviewing backend code
&lt;/h1&gt;

&lt;p&gt;Let’s have a look inside the main component &lt;code&gt;spring-petclinic-reactive&lt;/code&gt; to see which libraries and frameworks have been used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j0ly0cp8aojzp92hss0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j0ly0cp8aojzp92hss0.png" alt="Image description" width="700" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 10: Understanding the architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spring-boot&lt;/strong&gt;: Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can “just run”. It takes an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss; most Spring Boot applications need minimal Spring configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring-Security&lt;/strong&gt;: Spring Security is a powerful and highly customizable authentication and access-control framework, and the de facto standard for securing Spring-based applications. Like all Spring projects, its real power lies in how easily it can be extended to meet custom requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring-WebFlux&lt;/strong&gt;: Spring sub-framework for building reactive REST endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring-Actuator&lt;/strong&gt;: Exposes endpoints that surface operational data to third-party systems; examples include health, info, JMX, and Prometheus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring-Test&lt;/strong&gt;: Enables unit testing and mocking with Spring configuration and beans.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring-Cloud&lt;/strong&gt;: Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus, one-time tokens, global locks, leadership election, distributed sessions, cluster state). Coordination of distributed systems leads to boilerplate patterns, and using Spring Cloud developers can quickly stand up services and applications that implement those patterns. They will work well in any distributed environment, including the developer’s own laptop, bare metal data centers, and managed platforms such as Cloud Foundry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SpringFox &lt;em&gt;(Swagger)&lt;/em&gt;&lt;/strong&gt;: Annotation-based rest documentation generation and test client generation (&lt;code&gt;swagger-ui&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
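
&lt;p&gt;As a small, hedged example of the Actuator integration: which endpoints are exposed over HTTP is driven by externalized configuration, typically in &lt;code&gt;application.properties&lt;/code&gt; (the particular endpoint selection below is illustrative):&lt;/p&gt;

```properties
# Expose selected Actuator endpoints over HTTP (illustrative selection)
management.endpoints.web.exposure.include=health,info,prometheus
# Show component-level detail on the health endpoint
management.endpoint.health.show-details=always
```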

&lt;p&gt;To understand the underlying data model implemented in Apache Cassandra, check out our Gitpod guide.&lt;/p&gt;

&lt;h1&gt;
  
  
  Starting frontend application
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktm5kw5kusqrws8l7nvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktm5kw5kusqrws8l7nvb.png" alt="Image description" width="600" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 11: Frontend of Spring PetClinic.&lt;/p&gt;

&lt;p&gt;Once you configure and run the application, you can also test CRUD with Swagger as a hands-on exercise as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F671wlazqh6hr53ybm21g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F671wlazqh6hr53ybm21g.png" alt="Image description" width="700" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 12: Test CRUD with Swagger.&lt;/p&gt;

&lt;p&gt;And with that, we conclude our workshop on a reactive implementation of Spring PetClinic with Cassandra. To keep learning about open-source technologies and how to use them, simply register for any of our free &lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt; to get started on your next big app.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Explore more tutorials on our &lt;a href="https://www.youtube.com/c/DataStaxDevs/videos" rel="noopener noreferrer"&gt;DataStax Developers YouTube channel&lt;/a&gt; and &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSfEtzzVauuFpFJWUiepYndqchBpNsaOwm6raPJDsMt9nTvMbw/viewform" rel="noopener noreferrer"&gt;subscribe to our event alert&lt;/a&gt; to get notified about new developer workshops. For exclusive posts on all things data: Cassandra, streaming, Kubernetes, and more; follow &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=1aRbndIcXV4" rel="noopener noreferrer"&gt;Build a Reactive app in Apache Cassandra™ with Spring Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/datastaxdevs/workshop-spring-reactive" rel="noopener noreferrer"&gt;Github Workshop Spring Reactive&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.com/invite/pPjPcZN" rel="noopener noreferrer"&gt;Join our Discord: Fellowship of the (Cassandra) Rings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://astra.dev/38RaiQI" rel="noopener noreferrer"&gt;Astra DB — Managed Apache Cassandra as a Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/building-the-open-data-stack/building-microservices-with-spring-data-cassandra-and-stargate-io-613f0aff8188" rel="noopener noreferrer"&gt;Building Microservices with Spring Data, Cassandra, and Stargate.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/authenticate?client_id=absorb&amp;amp;tab_id=7jzmpQBmc-w" rel="noopener noreferrer"&gt;DataStax Academy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/dev/certifications" rel="noopener noreferrer"&gt;DataStax Certifications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>java</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Create a Fullstack App Using NuxtJS, NestJS, and Datastax Astra DB (with a Little Help from Github Copilot)</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Wed, 29 Jun 2022 02:46:30 +0000</pubDate>
      <link>https://dev.to/datastax/how-to-create-a-fullstack-app-using-nuxtjs-nestjs-and-datastax-astra-db-with-a-little-help-from-github-copilot-1ehb</link>
      <guid>https://dev.to/datastax/how-to-create-a-fullstack-app-using-nuxtjs-nestjs-and-datastax-astra-db-with-a-little-help-from-github-copilot-1ehb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hzwxjiokx2nqqjkheam.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hzwxjiokx2nqqjkheam.jpeg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to create a full-stack application, complete with dynamic data retrieved from a cloud database by an API, then watch this &lt;a href="https://www.youtube.com/watch?v=TbUpYeLn6SI" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; created by &lt;a href="https://www.youtube.com/eddiejaoudetv" rel="noopener noreferrer"&gt;Eddie Jaoude&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Building a full-stack application can be daunting because you have to think not only about how the frontend will display the data, but also about where the data will come from and where it’s stored. But it’s not as hard as you might think to get the basics of a full-stack application up and running.&lt;/p&gt;

&lt;p&gt;In his tutorial, Eddie shows you how to do it &lt;em&gt;in less than an hour&lt;/em&gt; using &lt;a href="https://nuxtjs.org/" rel="noopener noreferrer"&gt;NuxtJS&lt;/a&gt; with &lt;a href="https://vuetifyjs.com/en/" rel="noopener noreferrer"&gt;VuetifyJS&lt;/a&gt; for the frontend, &lt;a href="https://nestjs.com/" rel="noopener noreferrer"&gt;NestJS&lt;/a&gt; to create RESTful APIs, and &lt;a href="https://astra.dev/3N3seGz" rel="noopener noreferrer"&gt;DataStax’s Astra DB&lt;/a&gt; for the cloud database service. Also, you’ll use &lt;a href="https://copilot.github.com/" rel="noopener noreferrer"&gt;Github Copilot&lt;/a&gt; as your AI-powered pair programmer.&lt;/p&gt;

&lt;p&gt;In this tutorial, you’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a basic frontend using VuetifyJS.&lt;/li&gt;
&lt;li&gt;Use an API to retrieve and save data.&lt;/li&gt;
&lt;li&gt;Retrieve data from a cloud database and display it in the application.&lt;/li&gt;
&lt;li&gt;Use Github Copilot to help you code faster with autocompletion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s briefly recap the key technologies you’ll be using.&lt;/p&gt;

&lt;h1&gt;
  
  
  NuxtJS
&lt;/h1&gt;

&lt;p&gt;NuxtJS is a framework for creating &lt;a href="https://vuejs.org/" rel="noopener noreferrer"&gt;VueJS&lt;/a&gt; applications. First released in 2016, it builds on top of VueJS and handles the server-side and client-side distribution so you can focus just on the application development.&lt;/p&gt;

&lt;p&gt;Some features of NuxtJS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server-side rendering&lt;/li&gt;
&lt;li&gt;Static site generation&lt;/li&gt;
&lt;li&gt;Meta Tags&lt;/li&gt;
&lt;li&gt;Automatic routing and code splitting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is faster applications, better SEO thanks to server-side rendering, and a helpful start-up wizard that lets you select different UI frameworks, linting tools, and testing frameworks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;→ nuxtjs-nestjs-copilot 11
→ nuxtjs-nestjs-copilot npm init nuxt-app ui

Create-nuxt-app v3.7.1 
? Project name: ui
? Programming language: JavaScript
? Package manager: Npm
? UI Framework: Vuetify.js
? Nuxt.js modules: Axios - Promise based HTTP client
? Linting tools: (Press &amp;lt;space&amp;gt; to select, &amp;lt;a&amp;gt; to toggle all, &amp;lt;i&amp;gt; to invert selection)
? Testing Framework: None
? Rendering mode: Single Page App
? Deployment target: Server (Node.js hosting)
? Development tools: 
&amp;gt; ● jsconfig.json (recommended for VS Code if you’re not using typescript)
  ○ Semantic Pull Requests
  ○ Dependabot (For auto-updating dependencies, GitHub only)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Figure 1: NuxtJS configuration wizard.&lt;/p&gt;

&lt;p&gt;All of this means you can get working on your universal or single-page application much faster because of the speed at which you can set it up.&lt;/p&gt;

&lt;h1&gt;
  
  
  NestJS
&lt;/h1&gt;

&lt;p&gt;NestJS is a framework for rapidly building server-side applications. It is built on &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;NodeJS&lt;/a&gt; and &lt;a href="https://expressjs.com/" rel="noopener noreferrer"&gt;ExpressJS&lt;/a&gt; and uses progressive JavaScript. It fully supports TypeScript and combines principles of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Object-oriented_programming" rel="noopener noreferrer"&gt;Object-oriented programming&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Functional_programming" rel="noopener noreferrer"&gt;Functional programming&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Functional_reactive_programming" rel="noopener noreferrer"&gt;Functional reactive programming&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NestJS APIs are exposed so you can take advantage of a selection of third-party modules, extending your applications with more features without having to code them yourself.&lt;/p&gt;

&lt;p&gt;Eddie will show you how to take advantage of ExpressJS with NestJS to create a RESTful API to retrieve and save data from a cloud database.&lt;/p&gt;

&lt;h1&gt;
  
  
  DataStax Astra DB
&lt;/h1&gt;

&lt;p&gt;If you are looking for a database that scales fast, offers a dynamic schema for unstructured data, and supports flexible data models, then choose a NoSQL database. In this tutorial, Eddie is using &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Apache Cassandra®&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Rather than having to set up and manage the database manually, he has chosen a fully managed version through &lt;a href="https://astra.dev/3N3seGz" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt;. This is an autoscaling DBaaS, built on Cassandra. It handles all of the configuration and management of your cloud databases, so you can spend more time building your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1l9y5m1h32p78e8tet6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1l9y5m1h32p78e8tet6.png" alt="Image description" width="700" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: DataStax Astra DB dashboard.&lt;/p&gt;

&lt;p&gt;Astra DB uses &lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate APIs&lt;/a&gt; so you can interact with your data using GraphQL, CQL, REST, or the Document API.&lt;/p&gt;

&lt;p&gt;To follow along in the tutorial, you can sign up for a &lt;a href="https://astra.dev/3N3seGz" rel="noopener noreferrer"&gt;free Astra DB account&lt;/a&gt; to get up to 80 GB of free storage and 20 million read/write operations per month.&lt;/p&gt;
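
&lt;p&gt;To get a feel for how an application talks to these APIs over HTTP, here is a sketch using Java’s built-in &lt;code&gt;java.net.http&lt;/code&gt; client that only builds (does not send) a request against the Document API. The host, namespace, collection, and token values are placeholders; the &lt;code&gt;X-Cassandra-Token&lt;/code&gt; header is how Stargate expects the auth token:&lt;/p&gt;

```java
import java.net.URI;
import java.net.http.HttpRequest;

class StargateRequestSketch {
    // Builds a GET against the Document API path
    // /api/rest/v2/namespaces/{namespace}/collections/{collection}.
    static HttpRequest listDocuments(String dbHost, String token) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + dbHost
                        + "/api/rest/v2/namespaces/app/collections/todos"))
                .header("X-Cassandra-Token", token) // Stargate auth header
                .GET()
                .build();
    }
}
```

&lt;p&gt;Sending this request with &lt;code&gt;HttpClient.newHttpClient().send(...)&lt;/code&gt; would return the collection’s documents as JSON.&lt;/p&gt;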

&lt;h1&gt;
  
  
  Github Copilot
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://copilot.github.com/" rel="noopener noreferrer"&gt;Github Copilot&lt;/a&gt; offers a helping hand while you code. Powered by &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;, Copilot has been trained on billions of lines of public code to provide you with smart suggestions.&lt;/p&gt;

&lt;p&gt;It can convert comments into code so you only need to provide the logic and Copilot will assemble it.&lt;/p&gt;

&lt;p&gt;Copilot helps you avoid having to type the same thing over and over with its ability to identify and auto-fill repetitive code patterns from only a few examples. In addition to saving you time, with less manual coding, you’ll have fewer bugs due to typos in your code.&lt;/p&gt;

&lt;p&gt;To help you with testing, you can import a unit test package and Copilot will suggest tests from your implementation code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgml0a87art6631rsf9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgml0a87art6631rsf9f.png" alt="Image description" width="700" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Github Copilot service.&lt;/p&gt;

&lt;p&gt;If you’ve ever wanted to get into full-stack development, there’s no easier way to get started than by watching Eddie’s &lt;a href="https://www.youtube.com/watch?v=TbUpYeLn6SI" rel="noopener noreferrer"&gt;tutorial on YouTube&lt;/a&gt;. With his easy-to-follow example and the technologies described here, you’ll be able to create your very own full-stack application in under an hour.&lt;/p&gt;

&lt;p&gt;You can find the source code for the tutorial on &lt;a href="https://github.com/eddiejaoude/fullstack-nuxtjs-nestjs-datastax-video" rel="noopener noreferrer"&gt;Eddie’s Github&lt;/a&gt;. If you want to learn more about DataStax and Astra DB, sign up for a free &lt;a href="https://astra.dev/3N3seGz" rel="noopener noreferrer"&gt;Astra DB account&lt;/a&gt; and then head over to the &lt;a href="https://www.youtube.com/c/DataStaxDevs/featured" rel="noopener noreferrer"&gt;DataStax’s Developer Youtube Channel&lt;/a&gt; to see all the things you can do with these technologies. To learn about the other technologies mentioned here, just check out the resources we’ve provided below.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt; for exclusive posts on Cassandra, Kubernetes, streaming, and much more.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=TbUpYeLn6SI" rel="noopener noreferrer"&gt;Build a fullstack app using NuxtJS, NestJS, Astra DB w/Github Copilot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/eddiejaoude/fullstack-nuxtjs-nestjs-datastax-video" rel="noopener noreferrer"&gt;Tutorial source code on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nuxtjs.org/docs/get-started/installation" rel="noopener noreferrer"&gt;NuxtJS documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nestjs.com/" rel="noopener noreferrer"&gt;NestJS documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://astra.dev/3N3seGz" rel="noopener noreferrer"&gt;Astra DB — DBaas built on Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/channel/UC5mnBodB73bR88fLXHSfzYA" rel="noopener noreferrer"&gt;Eddie Jaoude Youtube Channel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/c/DataStaxDevs/featured" rel="noopener noreferrer"&gt;DataStax Developers Youtube Channel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.com/invite/pPjPcZN" rel="noopener noreferrer"&gt;Join our Discord: Fellowship of the (Cassandra) Rings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://auth.cloud.datastax.com/auth/realms/CloudUsers/protocol/saml/clients/absorb" rel="noopener noreferrer"&gt;DataStax Academy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datastax.com/workshops" rel="noopener noreferrer"&gt;DataStax Workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>react</category>
      <category>java</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Stargate gRPC: The Better Way to CQL</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Tue, 14 Jun 2022 15:56:53 +0000</pubDate>
      <link>https://dev.to/datastax/stargate-grpc-the-better-way-to-cql-34e9</link>
      <guid>https://dev.to/datastax/stargate-grpc-the-better-way-to-cql-34e9</guid>
      <description>&lt;p&gt;&lt;em&gt;Stargate’s new gRPC API is so much more than just a feature release — it’s your official welcome to the “no drivers” future of Apache Cassandra.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recently, we &lt;a href="https://medium.com/building-the-open-data-stack/say-goodbye-to-native-drivers-with-stargate-grpc-api-in-java-26f560d07e3e" rel="noopener noreferrer"&gt;released gRPC as the newest API&lt;/a&gt; supported by &lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt;, our API data gateway. On the surface, it would seem like the API doesn’t do very much; it receives CQL queries via the gRPC protocol, then passes those to &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt;® and returns the results. Sounds like a pretty modest feature release, right?&lt;/p&gt;

&lt;p&gt;In reality, what the Stargate team has delivered is groundbreaking. Not quite a native driver and not quite a simple HTTP-based API, Stargate’s gRPC implementation represents a fundamentally new approach for applications interacting with Cassandra — an approach that’s more cloud-native than any driver, and more performant than any simple HTTP-based API.&lt;/p&gt;

&lt;p&gt;So let me tell you why this approach is so important, and why this is such a revolutionary solution for a common developer problem.&lt;/p&gt;

&lt;h1&gt;
  
  
  Native drivers are not cloud-friendly
&lt;/h1&gt;

&lt;p&gt;Functionality inside a native driver can be divided into two parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The query engine&lt;/strong&gt;. This issues requests in a particular query language for a particular database, and receives responses to those requests that can then be used in application code. In our case, the query language is CQL, and the database is Cassandra.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational management&lt;/strong&gt;. This includes tasks like connection pooling, TLS, authentication, load balancing, retry policies, write coalescing, compression, health checks, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll notice that most of those operational tasks are abstracted away from applications in cloud environments, and simply handled automatically on behalf of the application. For example, load balancing, health checks, and TLS termination are intrinsic to most cloud environments; even retries can be configured within the environment.&lt;/p&gt;

&lt;p&gt;Put another way: in a well-designed microservices environment, network management tasks should live inside a service boundary and execute within an SLA defined in a service contract. There should be no need for an application to reach across that boundary and directly manipulate those operational tasks; doing so would violate microservices principles.&lt;/p&gt;

&lt;p&gt;And yet, this is exactly what native drivers do.&lt;/p&gt;

&lt;p&gt;This is not a mere architectural nicety. Building native drivers into an otherwise cloud-native, microservice-oriented application has real and negative consequences. Let’s dig a little deeper into why.&lt;/p&gt;

&lt;p&gt;Native protocol drivers are expensive to maintain and require reimplementing the same complex functionality for different platforms (like Java, Go, Node, Rust). All that operational management forces developers to extend their skillset from application development in their preferred language to areas of systems operation, thus steepening the learning curve for native drivers.&lt;/p&gt;

&lt;p&gt;More significantly, this co-mingling of concerns opens up a new vector that could trigger the need for a driver update. A configuration change in the network environment, for example, could require an update to the way every driver handles load balancing or connection pooling. Now your organization has to stop every application instance using that driver, apply the change within the driver, and restart all those application instances. Depending on the nature of the driver change, some changes in the rest of the application may also be required.&lt;/p&gt;

&lt;p&gt;Native drivers also inject surprising brittleness into applications: the network management overhead they carry makes it more likely that the drivers, and therefore the applications that use them, must be updated.&lt;/p&gt;

&lt;p&gt;In sum, native drivers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex and present a steep learning curve&lt;/li&gt;
&lt;li&gt;Hard to update and maintain&lt;/li&gt;
&lt;li&gt;Speed bumps for developer velocity&lt;/li&gt;
&lt;li&gt;A threat to application resilience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So yes, native drivers are fast, and that speed makes it easy to overemphasize raw performance. But the overall picture of performance and resilience is much more complicated.&lt;/p&gt;

&lt;h1&gt;
  
  
  HTTP-based APIs are a performance trade-off
&lt;/h1&gt;

&lt;p&gt;The modern approach to application development is, in part, a rebellion against the burden of native drivers. Today’s application developers, particularly front-end developers, expect to interact with data through an HTTP-based API and rely on JSON as the primary method of structuring data.&lt;/p&gt;

&lt;p&gt;We fully support this API-based approach on Stargate. This has several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language agnosticism.&lt;/strong&gt; Applications can be written in any language that can talk to an HTTP endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separation of concerns&lt;/strong&gt; between application environment and infrastructure environment. Precisely as should happen in a cloud-native context, all of the network management and operational overhead lives behind the API. Changes and updates there stay contained within that service boundary, removing this as an area of concern for application logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resilience.&lt;/strong&gt; The statelessness of HTTP constrains application design to avoid reliance on durable network connections, meaning applications designed in this manner are more resilient against the vagaries of network behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unsurprisingly, HTTP-based APIs have become the backbone of microservice applications for a cloud-native environment. But these benefits are not free. HTTP-based APIs are a slower way to query a database, for two reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Networking&lt;/strong&gt;. Native drivers talk “closer to the wire,” which significantly improves performance. The Java driver for CQL, for example, operates at Layer 5, whereas HTTP operates at Layer 7.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data transformation.&lt;/strong&gt; Databases don’t store JSON natively (even MongoDB relies on the WiredTiger storage engine when you drill down far enough). So some transformation has to happen to turn a JSON-oriented query into a native database query (CQL, in the case of Cassandra). The compute overhead of performing this transformation further slows performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And now, we have a dilemma. On one hand, HTTP-based APIs offer simplicity and language agnosticism that accelerates developer velocity, while also offering separation of concerns between application and infrastructure that improves application resilience. To put it simply, HTTP-based APIs are good cloud citizens, presenting and abiding by clear service boundaries.&lt;/p&gt;

&lt;p&gt;On the other hand, while native drivers are a burden to developers, and their co-mingling of development and operations concerns negatively impacts resilience, they are just flat-out more performant than HTTP-based APIs.&lt;/p&gt;

&lt;p&gt;So, what to do?&lt;/p&gt;

&lt;h1&gt;
  
  
  Decomposing the driver
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate&lt;/a&gt; supports native driver calls, offering a CQL API through which to talk to Cassandra. This is essentially just a transparent proxy, and so CQL calls via Stargate remain highly performant. Let’s look at a simple architecture diagram of this part of Stargate. (See Figure 1.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0ezp4qjkww0jalfaytm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0ezp4qjkww0jalfaytm.png" alt="Image description" width="700" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Simple Architecture of Native Driver and Stargate.&lt;/p&gt;

&lt;p&gt;The fundamental problem is the co-mingling of concerns. Some of what lives inside the native driver should, in a cloud-native context, live behind an API and thus inside the API’s service boundary. So what if we looked at it this way instead? (See Figure 2.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcl5er9d47v62yu8os0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcl5er9d47v62yu8os0p.png" alt="Image description" width="700" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Decomposing the driver.&lt;/p&gt;

&lt;p&gt;The real challenge is how to move that box that says “Network Management Tasks” across the service boundary into Stargate and behind an API. We’ll also have to do it in a way that honors the language agnosticism of APIs. Without that agnosticism, we have to maintain a different “box” of network management tasks for each language, even though those tasks are essentially the same across languages. We’d lighten the driver but make Stargate harder to maintain, and a good bit less cloud-friendly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Enter gRPC
&lt;/h1&gt;

&lt;p&gt;In 2008, Google developed, open-sourced, and released &lt;a href="https://developers.google.com/protocol-buffers" rel="noopener noreferrer"&gt;Protocol Buffers&lt;/a&gt; — a language-neutral mechanism for serializing structured data. In 2015, Google released &lt;a href="https://grpc.io/" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt; (also open source), which builds on Protocol Buffers to modernize Remote Procedure Call (RPC).&lt;/p&gt;

&lt;p&gt;gRPC has a couple of important performance characteristics. One is the improved data serialization, making data transit over the network much more efficient. The other is the use of HTTP/2, which enables bidirectional communication. As a result, there are four call types supported in gRPC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unary calls&lt;/li&gt;
&lt;li&gt;Client-side streaming calls&lt;/li&gt;
&lt;li&gt;Server-side streaming calls&lt;/li&gt;
&lt;li&gt;Bidirectional calls, which are a composite of client-side and server-side streaming&lt;/li&gt;
&lt;/ul&gt;
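
&lt;p&gt;The four call types differ only in whether each side sends one message or a stream. As a rough illustration in plain Java — with &lt;code&gt;String&lt;/code&gt; standing in for protobuf messages; real method signatures are generated from your &lt;code&gt;.proto&lt;/code&gt; file — the request/response cardinalities look like this:&lt;/p&gt;

```java
import java.util.Iterator;
import java.util.List;

// The request/response cardinality of the four gRPC call types.
interface CallTypes {
    String unary(String request);                              // one in, one out
    String clientStreaming(Iterator<String> requests);         // many in, one out
    Iterator<String> serverStreaming(String request);          // one in, many out
    Iterator<String> bidirectional(Iterator<String> requests); // many in, many out
}

// A toy in-memory implementation, just to make the shapes concrete.
class EchoCalls implements CallTypes {
    public String unary(String request) { return "echo:" + request; }
    public String clientStreaming(Iterator<String> requests) {
        StringBuilder sb = new StringBuilder();
        while (requests.hasNext()) sb.append(requests.next());
        return sb.toString();
    }
    public Iterator<String> serverStreaming(String request) {
        return List.of(request, request).iterator();
    }
    public Iterator<String> bidirectional(Iterator<String> requests) {
        return requests; // echo the stream back
    }
}
```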

&lt;p&gt;Put all this together and you have a mechanism that is fast — &lt;em&gt;very&lt;/em&gt; fast when compared to other HTTP-based APIs. gRPC message transmission can be &lt;a href="https://blog.dreamfactory.com/grpc-vs-rest-how-does-grpc-compare-with-traditional-rest-apis/" rel="noopener noreferrer"&gt;7x to 10x faster&lt;/a&gt; than traditional REST APIs. In other words, a solution based on gRPC could offer performance comparable to native drivers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Stargate gRPC
&lt;/h1&gt;

&lt;p&gt;When you pull all of the network management tasks out of a driver, what you’re left with is a thin client library containing little more than the query engine. In our case, these CQL queries transit to a Stargate API endpoint via gRPC. (See Figure 3.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoyvwkckx9gpxmbygzme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoyvwkckx9gpxmbygzme.png" alt="Image description" width="700" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Stargate’s gRPC implementation.&lt;/p&gt;

&lt;p&gt;Behind that endpoint is what amounts to a CQL driver written in gRPC. In other words, it receives CQL calls on the API endpoint via gRPC, and then makes direct CQL calls to Cassandra. No data transformation is required because we’re using CQL end to end.&lt;/p&gt;

&lt;p&gt;These client libraries are dramatically easier to write and maintain. Our original intent was to launch with client libraries for Java and for Go, since these are our two most requested languages. As it happened, adding new languages was so easy that we also included client libraries for Node.js and Rust.&lt;/p&gt;

&lt;p&gt;These four — and perhaps more languages in the future — represent a fully DataStax-supported way to make CQL calls from your application. We’ll continue to support our existing native drivers, and in those languages the gRPC client libraries are an additional, supported alternative. For languages like Go, where DataStax does not have a supported native driver, the gRPC client library is now a great way to go.&lt;/p&gt;

&lt;h1&gt;
  
  
  Do more with Stargate gRPC
&lt;/h1&gt;

&lt;p&gt;If your favorite language is not on our list, extending to a new language is not hard. From a &lt;code&gt;protobuf&lt;/code&gt; file, you get a skeleton of the CQL calls you need to make in your chosen language, with none of the operational overhead. You get that operational functionality out of the box with gRPC, and it lives inside of Stargate where it belongs in a proper cloud-native context.&lt;/p&gt;

&lt;p&gt;Thanks to HTTP/2 and efficient data serialization, you’ll now get performance on par with native drivers combined with the simplicity of a thin client library, all within a context that plays nicely with the rest of your microservices.&lt;/p&gt;

&lt;p&gt;To learn more, head over to &lt;a href="https://github.com/stargate" rel="noopener noreferrer"&gt;Stargate’s GitHub&lt;/a&gt;. You can also find source code and examples on &lt;a href="https://stargate.io/docs/stargate/1.0/developers-guide/gRPC-using.html" rel="noopener noreferrer"&gt;using Stargate gRPC API clients&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/stargate/stargate-grpc-java-client" rel="noopener noreferrer"&gt;Java client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stargate/stargate-grpc-go-client" rel="noopener noreferrer"&gt;Go client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stargate/stargate-grpc-node-client" rel="noopener noreferrer"&gt;Node.js client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stargate/stargate-grpc-rust-client" rel="noopener noreferrer"&gt;Rust client&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And, lastly, welcome to the “No Drivers” future of Apache Cassandra.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow &lt;a href="https://datastax.medium.com/" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt; for exclusive posts on all things Cassandra, streaming, Kubernetes, and more. To join the best developers around the world and stay in the data loop, follow &lt;a href="https://twitter.com/DataStaxDevs" rel="noopener noreferrer"&gt;DataStaxDevs on Twitter&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;p&gt;Curious to learn more about (or play with) Cassandra itself? We recommend trying it on the &lt;a href="https://astra.dev/3aiH7WO" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; free plan for the fastest setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://stargate.io/" rel="noopener noreferrer"&gt;Stargate — The official data API for DataStax Astra DB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.datastax.com/en/astra/docs/" rel="noopener noreferrer"&gt;What is Astra DB? — DataStax Astra Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stargate" rel="noopener noreferrer"&gt;Stargate — GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stargate.io/docs/stargate/1.0/developers-guide/gRPC-using.html" rel="noopener noreferrer"&gt;Using Stargate gRPC API clients&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Say Goodbye to Native Drivers with Stargate gRPC API in Java&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>database</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Aggregate Functions in Stargate’s GraphQL API</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Thu, 19 May 2022 19:19:13 +0000</pubDate>
      <link>https://dev.to/datastax/aggregate-functions-in-stargates-graphql-api-2hjg</link>
      <guid>https://dev.to/datastax/aggregate-functions-in-stargates-graphql-api-2hjg</guid>
      <description>&lt;p&gt;Thursday, June 3rd, 2021,a new release of Stargate was applied to &lt;a href="https://astra.dev/3loMKVM" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt;. It includes an exciting new feature: aggregate functions! If you’re not familiar with aggregate functions, they are functions that look at the data as a whole and perform a function like min(), max(), sum(), count() and avg().&lt;/p&gt;

&lt;p&gt;Until now, aggregate functions were only available using &lt;strong&gt;&lt;em&gt;cqlsh&lt;/em&gt;&lt;/strong&gt; (the CQL Shell). However, with the Stargate &lt;a href="https://github.com/stargate/stargate/releases/tag/v1.0.25" rel="noopener noreferrer"&gt;1.0.25 release&lt;/a&gt; they are now also available using the GraphQL API. In this blog entry, I’ll walk you through the process to get early access to this exciting new functionality in Stargate, and how to set up everything you need to test your own aggregate queries.&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;&lt;em&gt;cqlsh&lt;/em&gt;&lt;/strong&gt; to perform an aggregate query is pretty straightforward. Assuming you have an employee table with the following sales data:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;sale&lt;/th&gt;
&lt;th&gt;rtime&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;John&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;2019-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Mustafa&lt;/td&gt;
&lt;td&gt;2000&lt;/td&gt;
&lt;td&gt;2019-02-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Krishna&lt;/td&gt;
&lt;td&gt;2500&lt;/td&gt;
&lt;td&gt;2019-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;John&lt;/td&gt;
&lt;td&gt;2200&lt;/td&gt;
&lt;td&gt;2020-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;John&lt;/td&gt;
&lt;td&gt;2350&lt;/td&gt;
&lt;td&gt;2021-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Mustafa&lt;/td&gt;
&lt;td&gt;3000&lt;/td&gt;
&lt;td&gt;2020-02-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Mustafa&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;2021-02-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Krishna&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;2020-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Krishna&lt;/td&gt;
&lt;td&gt;3600&lt;/td&gt;
&lt;td&gt;2021-01-12T09:48:31.020Z&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now we want to find the highest sale number for employee 1, John. Our cqlsh query would look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select id, name, max(sale) as highest_sale from employee where id = 1 and name = “John”;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query would return a single record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;id name highest_sale

1 John 2350
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
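&lt;p&gt;If aggregates are new to you, the computation itself is simple. Here is the same max() aggregation done in plain Python over the sample rows from the table above, just to make the semantics concrete:&lt;/p&gt;

```python
# The sample sales rows from the table above, as (id, name, sale) tuples.
rows = [
    (1, "John", 1000), (2, "Mustafa", 2000), (3, "Krishna", 2500),
    (1, "John", 2200), (1, "John", 2350), (2, "Mustafa", 3000),
    (2, "Mustafa", 300), (3, "Krishna", 1500), (3, "Krishna", 3600),
]

# Equivalent of: select max(sale) from employee where id = 1 and name = 'John'
highest_sale = max(
    sale for emp_id, name, sale in rows if emp_id == 1 and name == "John"
)
print(highest_sale)  # 2350
```

&lt;p&gt;The difference, of course, is that Cassandra computes this server-side over the partition rather than shipping every row to the client.&lt;/p&gt;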



&lt;h1&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;This blog tutorial assumes that you already have &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and &lt;a href="https://curl.se/" rel="noopener noreferrer"&gt;curl&lt;/a&gt; installed and configured on your machine. Alternatively, if you have an Astra account (they’re free) you can do your testing there.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Getting Stargate&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;If you are using Astra you can skip this part and go to the next section.&lt;/p&gt;

&lt;p&gt;The main repository for the Stargate source code is on GitHub at &lt;a href="https://github.com/stargate/stargate" rel="noopener noreferrer"&gt;https://github.com/stargate/stargate&lt;/a&gt;. However, I recommend just using the Docker container that is already configured for testing. Assuming you have Docker installed already, just run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d — name stargate \
-p 8080:8080 \
-p 8081:8081 \
-p 8082:8082 \
-p 9042:9042 \
stargateio/stargate-dse-68:v1.0.25 \
--developer-mode — cluster-name test \
--cluster-version 6.8 --dse --enable-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your Docker instance of Stargate will load and start executing.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Preparing Your Test Environment&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Next we need to create our &lt;em&gt;keyspace&lt;/em&gt; and &lt;em&gt;table&lt;/em&gt;. Then we will load our test table with test data. While it is possible to do almost all of this using GraphQL, I did most of it using the REST API since that is the API with which I’m most familiar.&lt;/p&gt;

&lt;p&gt;Note: All of these URLs are designed for the Docker container running locally on your machine. If you are using Astra, adjust the URLs accordingly.&lt;/p&gt;

&lt;p&gt;Once the Docker image is fully up and running, you will need to get authentication credentials for the Cassandra instance it contains. Use this curl command to get the authentication token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -X POST 'http://localhost:8081/v1/auth' \
-H 'Content-Type: application/json' \
--data-raw '{ "username": "cassandra", "password": "cassandra" }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the auth token as an environment variable for easy reuse:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export authToken=”The token returned in the previous step”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
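&lt;p&gt;If you prefer to script this step rather than copy and paste, the token can be extracted from the JSON response programmatically. A minimal Python sketch, assuming the auth endpoint returns a body of the form &lt;code&gt;{"authToken": "..."}&lt;/code&gt; (the token value below is made up for illustration):&lt;/p&gt;

```python
import json

# Example response body from POST /v1/auth; in practice this string would
# come from the HTTP call above. The token value here is a placeholder.
response_body = '{"authToken": "a3f1c2d4-0000-0000-0000-000000000000"}'

auth_token = json.loads(response_body)["authToken"]
print(auth_token)
```

&lt;p&gt;In a shell pipeline the same extraction is commonly done with jq, e.g. piping the curl output through &lt;code&gt;jq -r .authToken&lt;/code&gt;.&lt;/p&gt;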



&lt;p&gt;Now run the following command to get a list of existing keyspaces. This is a good test to ensure you’ve set your authToken environment variable correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -X GET 'localhost:8082/v1/keyspaces' \
--header 'accept: application/json' \
--header ‘content-type: application/json’ \
--header “X-Cassandra-Token: $authToken”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output from the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[“data_endpoint_auth”,”system”,”system_auth”,”system_backups”,”system_distributed”,”system_schema”,”system_traces”]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we need to create our keyspace for our database. The following command will create the test keyspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -X POST 'localhost:8082/v2/schemas/keyspaces' \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--header "X-Cassandra-Token: $authToken" \
-d '{ "name": "test", "replicas": 1}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to create our employee table in the &lt;strong&gt;&lt;em&gt;test&lt;/em&gt;&lt;/strong&gt; keyspace. This command is rather lengthy for a blog post, so I recommend getting the create_table.sh file from the GitHub repository at &lt;a href="https://github.com/jdavies/blogs/blob/master/20210602_aggregate_stargate/create_table.sh" rel="noopener noreferrer"&gt;https://github.com/jdavies/blogs/blob/master/20210602_aggregate_stargate/create_table.sh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it’s time to load some data into our table. The easiest way to do this is to download the &lt;a href="https://github.com/jdavies/blogs/blob/master/20210602_aggregate_stargate/load_data.sh" rel="noopener noreferrer"&gt;load_data.sh&lt;/a&gt; file from my GitHub repository (another blog-unfriendly script) and execute it via the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./load_data.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It contains the curl commands to load the records into your Docker database.&lt;/p&gt;

&lt;p&gt;Once the data is loaded, let’s run a quick query to ensure that everything is as we expect. Execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -X GET ‘localhost:8082/v1/keyspaces/test/tables/employee/rows’ \
--header ‘accept: application/json’ \
--header ‘content-type: application/json’ \
--header “X-Cassandra-Token: $authToken”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get all 9 rows of data back. They can be a little hard to read from the terminal. If you want to see a prettier version I suggest copying the resulting text and pasting it into an online JSON browser like &lt;strong&gt;&lt;a href="https://jsonbeautifier.org/" rel="noopener noreferrer"&gt;jsonbeautifier.org&lt;/a&gt;&lt;/strong&gt;. You should see the following 9 rows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2im019seagd67s27rhmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2im019seagd67s27rhmg.png" alt="image" width="700" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to get down to business!&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Using Aggregate Queries&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;I’m new to GraphQL, so some of its conventions seemed strange to me at first. However, once you get used to its way of doing things (like omitting commas in a JSON-esque data format) it’s pretty straightforward. Here is the curl command that will retrieve the highest sale for employee 1, named John:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'http://localhost:8080/graphql/test' \
-H 'Accept-Encoding: gzip, deflate, br' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Connection: keep-alive' \
-H 'DNT: 1' \
-H 'Origin: http://localhost:8080' \
-H "x-cassandra-token: $authToken" \
--data-binary '{"query":"query maxJohnSales {\n  employee(value: { \n    id: 1, \n    name: \"John\" }) {\n    values {\n      id\n      name\n      rtime\n      highest_sale: _int_function(name: \"max\", args: [\"sale\"])\n }\n  }\n}"}' --compressed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The payload is a little hard to read on the command line, so here it is in GraphQL format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query maxJohnSales {
   employee(value: {
      id: 1,
      name: “John” })
   {
      values {
         id
         name
         rtime
         highest_sale: _int_function(name: “max”, args: [“sale”])
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
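&lt;p&gt;The hard-to-read &lt;code&gt;--data-binary&lt;/code&gt; payload in the curl command is just this query wrapped in a JSON object under a &lt;code&gt;query&lt;/code&gt; key. A short Python sketch showing how to build that payload from the readable form, so you don’t have to hand-escape the quotes and newlines yourself:&lt;/p&gt;

```python
import json

# The readable GraphQL query, as shown above.
query = """
query maxJohnSales {
  employee(value: { id: 1, name: "John" }) {
    values {
      id
      name
      rtime
      highest_sale: _int_function(name: "max", args: ["sale"])
    }
  }
}
"""

# GraphQL-over-HTTP payloads wrap the query string in a JSON object;
# json.dumps handles escaping the inner quotes and newlines.
payload = json.dumps({"query": query})
print(payload)
```

&lt;p&gt;The printed string is exactly the kind of body you would hand to curl’s &lt;code&gt;--data-binary&lt;/code&gt; flag.&lt;/p&gt;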



&lt;p&gt;If you examine the command, you will see how we included the max() aggregate function (aliased as “highest_sale”). Just like the cqlsh version of the call, the max() function is applied to the &lt;em&gt;return&lt;/em&gt; values, not the &lt;em&gt;select&lt;/em&gt; criteria. Your output should match the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3sszkyuh9otoae0qm6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3sszkyuh9otoae0qm6l.png" alt="image" width="700" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How about searching for the highest sale of all time? Here’s how you do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'http://localhost:8080/graphql/test' \
-H 'Accept-Encoding: gzip, deflate, br' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Connection: keep-alive' \
-H 'DNT: 1' \
-H 'Origin: http://localhost:8080' \
-H "x-cassandra-token: $authToken" \
--data-binary '{"query":"query maxJohnSales {\n  employee {\n    values {\n      id\n      name\n      rtime\n      highest_sale: _int_function(name: \"max\", args: [\"sale\"])\n }\n  }\n}"}' --compressed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By removing the “where” clause from the GraphQL statement (traditionally in the parentheses after the table name), you can search the entire table, across all partitions. In &lt;strong&gt;&lt;em&gt;cqlsh&lt;/em&gt;&lt;/strong&gt; this is the equivalent of adding ALLOW FILTERING, which is generally regarded as a “bad thing” because it forces a full table scan across all partitions and can be very slow. However, since aggregate functions are often used for reporting, it might be acceptable to do this for a few special queries.&lt;/p&gt;

&lt;p&gt;ALLOW FILTERING isn’t necessarily a “bad thing”, but you have to understand what it does and use it sparingly if you want to keep your database performing at max speed! ALLOW FILTERING can come in very handy when invoking a SELECT operation on a single partition (i.e., providing at the very minimum the full partition key, which is “id” in the case of this “test.employee” table).&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;That’s all there is to using aggregate functions in GraphQL using Stargate. Bear in mind this is GraphQL API-specific. It won’t work with the REST or Document APIs.&lt;/p&gt;

</description>
      <category>graphqlapi</category>
      <category>aggregatefunctions</category>
      <category>docker</category>
      <category>curl</category>
    </item>
    <item>
      <title>How DataStax Tracked Down a Linux Kernel Bug with Fallout</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Thu, 17 Mar 2022 22:04:22 +0000</pubDate>
      <link>https://dev.to/datastax/how-datastax-tracked-down-a-linux-kernel-bug-with-fallout-49mi</link>
      <guid>https://dev.to/datastax/how-datastax-tracked-down-a-linux-kernel-bug-with-fallout-49mi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frddg7nfayci319xxiwp2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frddg7nfayci319xxiwp2.jpeg" alt="Image description" width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sometimes as a developer, you run into a bug buried deep within the layers of your software stack. Chasing down the root cause requires not only curiosity, patience, and a healthy dose of tenacity but a willingness to try different tools and approaches. This post describes our challenges and ultimate success in tracking down a Linux kernel bug using Fallout.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Bugs come in all shapes and sizes and it’s not always clear at the beginning of a debugging session which one you’re currently chasing. Some bugs can be fixed up in a matter of minutes while others take weeks to nail down. And the really tricky ones require you to dig through multiple layers of your software stack, stressing the limits of your patience.&lt;/p&gt;

&lt;p&gt;This article covers an example of the latter kind of bug. When one of the daily performance tests started timing out after 16 hours, it turned out that I had unknowingly stumbled onto an issue in the Linux kernel &lt;code&gt;hrtimer&lt;/code&gt; code. Since I didn’t know it was a kernel bug when I started looking, I had to methodically dig deeper into the software stack as I homed in on the problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dtsx.io/38S1beX" rel="noopener noreferrer"&gt;Fallout&lt;/a&gt; is an open-source distributed systems testing service that we use heavily at &lt;a href="https://dtsx.io/3trqUUV" rel="noopener noreferrer"&gt;DataStax&lt;/a&gt; to run functional and performance tests for &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt;, &lt;a href="https://pulsar.apache.org/" rel="noopener noreferrer"&gt;Apache Pulsar&lt;/a&gt;, and other projects. Fallout automatically provisions and configures distributed systems and clients, runs a variety of workloads and benchmarks, and gathers test results for later analysis.&lt;/p&gt;

&lt;p&gt;As you’ll see below, Fallout was instrumental in being able to quickly iterate and gather new data to validate and invalidate my guesses about the underlying bug.&lt;/p&gt;

&lt;h1&gt;
  
  
  The original bug report
&lt;/h1&gt;

&lt;p&gt;At &lt;a href="https://dtsx.io/3trqUUV" rel="noopener noreferrer"&gt;DataStax&lt;/a&gt; we run a variety of nightly functional and performance tests against &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Cassandra&lt;/a&gt;, and one morning we noticed that one of our write-throughput benchmarks failed to complete after 16 hours. Normally, this test would complete within four, so something was clearly wrong. Like all good engineers, we checked to see what had changed since the test last passed, and sure enough we found a commit in our git repository that could potentially explain the failure.&lt;/p&gt;

&lt;p&gt;Since the most important thing after discovering a test failure is to get the tests passing again, we reverted the suspected commit and re-ran the tests. They passed. So with the tests now passing we figured that even if we hadn’t analysed the commit to understand &lt;em&gt;how&lt;/em&gt; the test could be failing, we’d at least narrowed down the cause of the issue to a single commit. High-fives were exchanged and we all went back to work leaving the author of the reverted commit to understand the root cause of the failure.&lt;/p&gt;

&lt;p&gt;Only the test failed again the next day, even with the reverted commit. Clearly we’d been too hasty in our diagnosis and hadn’t actually fixed anything.&lt;/p&gt;

&lt;p&gt;But this raised the question: if the commit we reverted didn’t originally cause the test to fail, why had it suddenly started failing? It dawned on us that we were dealing with an intermittent failure, and our task now was to figure out how to reproduce it more reliably.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reproducing the failure
&lt;/h1&gt;

&lt;p&gt;Intermittent failures are a pain to debug because they take so much longer to analyze when you can’t trigger the conditions for the bug on every test run. Sometimes there’s no telling how many runs you’ll need to hit it. Worse, you have to gather much more data to be confident that you’ve actually solved the bug once you apply your fix.&lt;/p&gt;

&lt;p&gt;Almost all of the performance tests at &lt;a href="https://dtsx.io/3trqUUV" rel="noopener noreferrer"&gt;DataStax&lt;/a&gt; run using &lt;a href="https://dtsx.io/38S1beX" rel="noopener noreferrer"&gt;Fallout&lt;/a&gt;, our distributed systems testing service. Tests are described in a single YAML file and Fallout takes care of scheduling tests on available hardware and collecting the results for later analysis. Our team figured out we could normally hit this bug at least once every five runs. We initially looked at trying to increase the reproducibility of this bug but simply couldn’t make the failure happen more regularly. Fortunately, there was one way we could reduce the time to trigger it — by running multiple tests in parallel. Since Fallout can automatically queue test runs and execute them once hardware becomes available, I fired up five runs of the same test and waited for the results to come in. Instead of having to wait up to 20 hours to see if I had hit the bug, I now only needed to wait around four hours. Still not ideal, but it meant that I could make progress every day.&lt;/p&gt;

&lt;p&gt;But we were still using the “does the test run for longer than four hours” symptom to detect when we’d run into a problem. Finding a less time-consuming way to know when we’d triggered the bug required understanding the side effects of the bug at the Cassandra level. In other words, we had to know what Cassandra was doing to cause the test to timeout in the first place.&lt;/p&gt;

&lt;h1&gt;
  
  
  The side effects of the bug
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Cassandra&lt;/a&gt; has a number of tools to understand what’s happening internally as it serves data to clients. A lot of this is tracked as metrics in monitoring tools such as &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; or &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; which integrate with Fallout. Checking those metrics showed that request throughput (requests per second) dropped off to near zero whenever we triggered the bug. To understand what was happening on the server side, I waited for the bug to occur and then took a look at the output of &lt;a href="https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/nodetool.html" rel="noopener noreferrer"&gt;nodetool&lt;/a&gt; tpstats.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;nodetool tpstats&lt;/code&gt; for diagnosing bugs is much easier if you have a mental model of how requests are handled by &lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Cassandra&lt;/a&gt;. Native-Transport-Requests threads are responsible for handling requests from clients, and they are run by the SEPWorker mechanism, the thread pool framework inside of Cassandra. Cassandra uses a pool of threads and wakes up threads when all the running threads are busy and there’s still work to be done. When there’s no work to do, threads are put to sleep again until more work comes in.&lt;/p&gt;

&lt;p&gt;From the output of tpstats I could see that there was plenty of work to do because the number of pending requests was high but the number of &lt;em&gt;active&lt;/em&gt; requests at any one time hovered around two or three. To put this into perspective, there were hundreds of requests and around 172 Native-Transport-Requests threads on the machine but most of those threads were sleeping instead of handling the incoming requests from clients. Something was preventing Cassandra from waking up more Native-Transport-Requests threads.&lt;/p&gt;

&lt;p&gt;The SEPWorker infrastructure has a mechanism for only waking up the minimum number of threads necessary when work comes in. If the currently running threads finish processing all of their work they enter the SPINNING state which is part of the internal Cassandra state machine. In this state, threads will check for available work and then sleep for a short period of time if there isn’t any. When they wake up they do one more check for work and if there &lt;em&gt;still&lt;/em&gt; isn’t any work they go to sleep until explicitly awakened by another thread. If there &lt;em&gt;is&lt;/em&gt; work, then before processing it they check to see whether they took the last work item and wake up another thread to process the next one if there’s still outstanding work.&lt;/p&gt;
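&lt;p&gt;The spin-then-sleep behavior described above can be sketched in Python (a much-simplified model for illustration only; Cassandra’s actual SEPWorker logic is considerably more involved):&lt;/p&gt;

```python
import queue
import time

def spinning_worker(q, spin_wait_s=0.00001):
    """Simplified sketch of the SPINNING state: check for work, nap briefly,
    check once more, then (in the real SEPWorker) park until explicitly woken."""
    try:
        return q.get_nowait()          # first check for available work
    except queue.Empty:
        pass
    time.sleep(spin_wait_s)            # the parkNanos()-style short sleep
    try:
        return q.get_nowait()          # one last check before sleeping for good
    except queue.Empty:
        return None                    # would now park until another thread wakes us

work = queue.Queue()
work.put("request-1")
print(spinning_worker(work))  # request-1
```

&lt;p&gt;The failure mode described next corresponds to that short nap in the middle never returning.&lt;/p&gt;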

&lt;p&gt;Knowing all of this and combining that knowledge with the tpstats output I decided to see what all of the Native-Transport-Requests threads were doing by looking at their callstacks with jstack. Most of the threads were sleeping just like I expected based on the tpstats output, and a few were busily executing work. But one had this interesting callstack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Native-Transport-Requests-2" #173 daemon prio=5 os_prio=0 cpu=462214.94ms elapsed=19374.32s tid=0x00007efee606eb00 nid=0x385d waiting on condition  [0x00007efec18b9000]
4   java.lang.Thread.State: TIMED_WAITING (parking)
5 at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
6 at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
7 at org.apache.cassandra.concurrent.SEPWorker.doWaitSpin(SEPWorker.java:268)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This showed that &lt;code&gt;Native-Transport-Requests-2&lt;/code&gt; was currently in the SPINNING state, napping before making one last check for work prior to sleeping permanently. The only problem was, no matter how many times I ran jstack, this thread never &lt;em&gt;exited&lt;/em&gt; &lt;code&gt;parkNanos()&lt;/code&gt;, which meant no other threads would ever be woken up to help process the backlog of work!&lt;/p&gt;

&lt;p&gt;This finally explained why throughput dropped whenever we hit the bug. The next step was understanding how &lt;code&gt;parkNanos()&lt;/code&gt; was behaving.&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding the parkNanos() implementation
&lt;/h1&gt;

&lt;p&gt;First of all, I wanted to make sure that Cassandra wasn’t somehow passing a really big value to &lt;code&gt;parkNanos()&lt;/code&gt; which could make it appear that it was blocked indefinitely when in actual fact it was just sleeping for a very long time. Now that I had more of an idea of what to look for, it was easy to verify the values passed to &lt;code&gt;parkNanos()&lt;/code&gt; by printing a log message every time this code path was executed. All of the values looked within the expected range so I concluded that we definitely weren’t calling &lt;code&gt;parkNanos()&lt;/code&gt; incorrectly.&lt;/p&gt;

&lt;p&gt;Next, I turned my attention to the implementation of &lt;code&gt;parkNanos()&lt;/code&gt;. &lt;code&gt;parkNanos()&lt;/code&gt; uses the timeout feature of &lt;code&gt;pthread&lt;/code&gt;’s condition variables to sleep for a specified time. Internally, &lt;code&gt;pthread&lt;/code&gt; uses the kernel to provide the timeout facility, specifically via the &lt;code&gt;futex()&lt;/code&gt; system call. As a quick recap, the &lt;code&gt;futex()&lt;/code&gt; (fast userspace &lt;code&gt;mutex&lt;/code&gt;) system call allows application threads to sleep while waiting for the value at a memory address to change. The thread is woken up when either the value changes or the timeout expires.&lt;/p&gt;
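
&lt;p&gt;As a rough model of those semantics (a Python sketch with an invented &lt;code&gt;ToyFutex&lt;/code&gt; class, not the real system call), a futex-style wait returns either when the value changes or when the timeout fires:&lt;/p&gt;

```python
import threading

class ToyFutex:
    """Toy model of futex() semantics: sleep while value == expected,
    wake on a value change or on timeout. Illustration only."""
    def __init__(self, value=0):
        self.value = value
        self._cond = threading.Condition()

    def wait(self, expected, timeout):
        # Returns True if woken by a value change, False on timeout.
        with self._cond:
            if self.value != expected:
                return True          # value already changed; don't sleep
            return self._cond.wait(timeout)

    def wake(self, new_value):
        with self._cond:
            self.value = new_value
            self._cond.notify_all()

f = ToyFutex(0)
print(f.wait(expected=0, timeout=0.05))   # no wake arrives: prints False
```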

&lt;p&gt;The &lt;code&gt;parkNanos()&lt;/code&gt; call was putting the Native-Transport-Requests thread to sleep expecting the timer to expire within 10 microseconds or so, but it was starting to look like that never happened. Two questions came to mind: did the thread somehow miss the wakeup signal from the expiring timer, or did the timer never actually expire? I couldn’t see anything obviously wrong with the code, and I didn’t want to deal with building a custom version of the Java SE Development Kit (JDK) to pursue that line further. Instead, I decided to switch gears and &lt;a href="https://dtsx.io/3BY5KAJ" rel="noopener noreferrer"&gt;write a BPF script&lt;/a&gt; to explore the problem from the kernel side first.&lt;/p&gt;

&lt;h1&gt;
  
  
  Tracing kernel timers with BPF
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://lwn.net/Articles/740157/" rel="noopener noreferrer"&gt;BPF&lt;/a&gt; is an in-kernel virtual machine that’s capable of dynamically running kernel code injected from userspace. The beauty of BPF is that you can run these scripts without needing to recompile your kernel or even reboot your machine. Among other things, BPF allows you to insert probes or hooks at the entry and exit of kernel functions, effectively allowing you to run your own kernel code. Tracing through the kernel’s &lt;code&gt;futex&lt;/code&gt; code I could see that if I hooked my script into the return path of the internal functions, I could display the timer details when the thread returned from the &lt;code&gt;futex&lt;/code&gt; call. The next trick was getting the stuck thread to return.&lt;/p&gt;

&lt;p&gt;Signals are a handy way of interrupting sleeping application threads that are inside the kernel, and you can often send a &lt;code&gt;SIGSTOP&lt;/code&gt;, &lt;code&gt;SIGCONT&lt;/code&gt; sequence without triggering any error paths in the kernel or your application. So by sending these two signals I could force the sleeping Native-Transport-Requests thread to return from the blocking &lt;code&gt;futex()&lt;/code&gt; call, triggering my BPF script to print the timer details in my terminal window.&lt;/p&gt;
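
&lt;p&gt;The signal trick itself is easy to reproduce (a Python sketch against a throwaway &lt;code&gt;sleep&lt;/code&gt; child process on Linux, not the actual Cassandra thread):&lt;/p&gt;

```python
import os
import signal
import subprocess
import time

# Spawn a process that blocks inside the kernel, then kick it out of the
# syscall with SIGSTOP/SIGCONT -- the kernel sleep path returns and any
# attached probe fires; the interrupted sleep is transparently restarted.
child = subprocess.Popen(["sleep", "60"])
time.sleep(0.1)                       # let it block in nanosleep()
os.kill(child.pid, signal.SIGSTOP)    # forces a return from the syscall
os.kill(child.pid, signal.SIGCONT)    # ...and lets the process resume
child.terminate()
child.wait()
```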

&lt;p&gt;The main timer detail I was interested in was the expires value, which tells you when the timer should have expired and woken the sleeping thread. If the value was far in the future, it seemed likely that the timer had been programmed incorrectly; if it was in the past, then we’d either missed the wakeup or it had never been sent. I bundled my BPF script into a bash script that also took care of first sending the &lt;code&gt;SIGSTOP&lt;/code&gt; and &lt;code&gt;SIGCONT&lt;/code&gt; signals to the Native-Transport-Requests thread.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ./futex 14469
Tracing pid 14469…
Sending SIGSTOP
5880730777795 expires
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The expires value for high-resolution timers in the kernel is in nanoseconds since boot. Doing a quick bit of math (5880730777795 / 1000000000 / 60 = 98.012, or roughly 98 minutes of uptime), I could tell that the expires value looked correct because it fell within the four hours that the test usually took. So the real issue was that a wakeup had never been sent to the sleeping &lt;code&gt;Native-Transport-Requests&lt;/code&gt; thread. At this point, I was convinced that our test was hitting a kernel bug.&lt;/p&gt;
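
&lt;p&gt;The quick math above is just a unit conversion; as a Python one-liner (the &lt;code&gt;expires_to_minutes&lt;/code&gt; helper name is my own):&lt;/p&gt;

```python
# Convert an hrtimer 'expires' value (nanoseconds since boot) into
# minutes of uptime, mirroring the quick math in the text.
NSEC_PER_SEC = 1_000_000_000

def expires_to_minutes(expires_ns):
    return expires_ns / NSEC_PER_SEC / 60

print(round(expires_to_minutes(5880730777795), 3))   # prints 98.012
```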

&lt;h1&gt;
  
  
  How are timers implemented in the Linux kernel?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://lwn.net/Articles/167897/" rel="noopener noreferrer"&gt;High-resolution timers&lt;/a&gt;, or &lt;code&gt;&lt;a href="https://www.kernel.org/doc/Documentation/timers/highres.txt" rel="noopener noreferrer"&gt;hrtimers&lt;/a&gt;&lt;/code&gt;, is the in-kernel infrastructure responsible for programming the timer hardware in your CPU. At the application level, &lt;code&gt;hrtimers&lt;/code&gt; are programming using system calls such as &lt;code&gt;clock_nanosleep()&lt;/code&gt; and &lt;code&gt;nanosleep()&lt;/code&gt;. Each CPU maintains its own tree of &lt;code&gt;hrtimers&lt;/code&gt; grouped by &lt;code&gt;clock ID&lt;/code&gt; (&lt;code&gt;CLOCK_MONOTONIC&lt;/code&gt;, &lt;code&gt;CLOCK_REALTIME&lt;/code&gt;, etc). Each tree is implemented using a red-black tree data structure. Timers are inserted into the red-black tree and sorted based on their expiration time. The kernel’s red-black tree implementation also maintains a pointer to the smallest entry in the tree which enables the kernel to lookup the next expiring timer in O(1).&lt;/p&gt;

&lt;p&gt;I modified my BPF script to display the location in the red-black tree of the &lt;code&gt;hrtimer&lt;/code&gt; our &lt;code&gt;Native-Transport-Requests&lt;/code&gt; thread was waiting on, and discovered that it never reached the leftmost spot. This explained why the thread was never woken! It is also a violation of how red-black trees are supposed to work: the entries in the tree are expected to stay in sorted order, smallest to largest, at all times.&lt;/p&gt;

&lt;h1&gt;
  
  
  Tracing the behavior of the kernel’s red-black trees
&lt;/h1&gt;

&lt;p&gt;Tracing the workings of these red-black trees was beyond anything I could achieve with a BPF script. I resorted to writing a kernel patch to print a warning message and dump the contents of the &lt;code&gt;hrtimer&lt;/code&gt; red-black trees whenever the auxiliary pointer to the leftmost entry no longer pointed to the timer that expired next.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10: was adding node=00000000649f0970
10: found node in wrong place last=0000000091a72b7e, next=00000000649f0970 (expires 208222423748, 208222332591)
10: last timer=0000000091a72b7e function=hrtimer_wakeup+0x0/0x30, next=00000000649f0970 timer function=hrtick+0x0/0x70
Printing rbtree
=========
node=0000000091a72b7e, expires=208222423748, function=hrtimer_wakeup+0x0/0x30
node=00000000649f0970, expires=208222332591, function=hrtick+0x0/0x70
node=00000000063113c0, expires=208225793771, function=tick_sched_timer+0x0/0x80
node=000000003705886f, expires=209277793771, function=watchdog_timer_fn+0x0/0x280
node=00000000e3f371a2, expires=233222750000, function=timerfd_tmrproc+0x0/0x20
node=0000000068442340, expires=265218903415, function=hrtimer_wakeup+0x0/0x30
node=00000000785c2d62, expires=291972750000, function=timerfd_tmrproc+0x0/0x20
node=0000000085e65b06, expires=86608220562558, function=hrtimer_wakeup+0x0/0x30
node=00000000049a0b4d, expires=100000159000051377, function=hrtimer_wakeup+0x0/0x30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before I started this part, I wasn’t even sure that Fallout would allow me to boot into a custom kernel. As it turned out, it was trivial.&lt;/p&gt;

&lt;p&gt;After rebooting into my custom kernel I saw the warnings triggering immediately on startup so I naturally assumed that something was broken with my patch. But after a closer look, I realized that it was possible to trigger my bug within two minutes of booting the VM! I no longer had to wait up to four hours to see if I had reproduced the issue. What was even better was that I was able to hit this warning every single time I booted the VM. With the reproduction time cut by around 99%, I quickly made progress understanding the kernel bug that caused the red-black tree to become inconsistent.&lt;/p&gt;

&lt;h1&gt;
  
  
  The real bug and the fix
&lt;/h1&gt;

&lt;p&gt;The kernel bug underlying this whole ordeal was already &lt;a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=156ec6f42b8d300dbbf382738ff35c8bad8f4c3a" rel="noopener noreferrer"&gt;fixed upstream&lt;/a&gt; in February this year in Linux 5.12 but not yet pulled into Ubuntu’s kernel. What was happening was that some of the kernel’s scheduler code was modifying the expiration time of a &lt;code&gt;hrtimer&lt;/code&gt; it had previously inserted into the red-black tree without first removing it from the tree. This violated the red-black tree property that all entries in the tree need to be sorted by their expiration time and resulted in &lt;code&gt;hrtimers&lt;/code&gt; not firing. This meant that threads waiting on those timers were never woken up.&lt;/p&gt;
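
&lt;p&gt;The essence of the bug is easy to demonstrate outside the kernel (a Python toy with a plain sorted list standing in for the rbtree, not kernel code): mutate a timer’s expiry in place after insertion, and the ordering invariant silently breaks.&lt;/p&gt;

```python
import bisect

# Build a sorted "tree" of (expires, name) timers via ordered insertion.
timers = []
for entry in ([100, "a"], [200, "b"], [300, "c"]):
    bisect.insort(timers, entry)

# The bug: change a key in place WITHOUT removing and re-inserting it.
timers[0][0] = 250

expiries = [t[0] for t in timers]
print(expiries, expiries == sorted(expiries))   # prints [250, 200, 300] False
```

Because lookups assume the structure is still sorted, the mutated timer is no longer found where the ordering says it should be, which is exactly why the affected `hrtimer` never became the leftmost node and never fired.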

&lt;p&gt;But why had no one else hit this kernel bug? The answer lies in a little-known and sparsely documented scheduler tunable that we were using: the kernel’s &lt;code&gt;HRTICK&lt;/code&gt; feature, which we had enabled to improve scheduler latency. Unfortunately, it’s not widely used, and a number of &lt;code&gt;HRTICK&lt;/code&gt; bugs have had to be fixed over the years. What’s worse, &lt;code&gt;HRTICK&lt;/code&gt; fixes are not backported to stable kernels, so Linux distributions need to take them in an ad-hoc fashion. Rather than waiting for that to happen or rolling our own kernels, we decided to disable the feature altogether to avoid any future multi-week kernel debugging.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;When debugging these multi-layered issues, it helps to have a bag of tools and techniques for understanding the behavior of your app at various levels of the stack. Even if you don’t know how to instrument the code at a particular point, or don’t want to (remember how I didn’t want to build a custom version of the JDK?), you can still infer the location of a bug by using tools to observe the behavior above and below that layer. And when you don’t know exactly how a piece of code works, having a mental model or a rough idea of what goes on inside can help you make forward progress by skipping the layers that are unlikely to contain the bug.&lt;/p&gt;

&lt;p&gt;Secondly, there is no way I could have taken on this problem without a service to automatically deploy and provision virtual machines for running the test. &lt;a href="https://dtsx.io/38S1beX" rel="noopener noreferrer"&gt;Fallout&lt;/a&gt; was the key to unlocking this bug: even though the bug only reproduced intermittently, I managed to parallelize the test runs and cut the time to trigger the issue. Even so, the whole investigation took several weeks, despite my being able to analyze test results and make a little progress every day.&lt;/p&gt;

&lt;p&gt;If any of this sounds like the kind of problem you’d enjoy working on, check out the open roles on the &lt;a href="https://dtsx.io/2XaZd71" rel="noopener noreferrer"&gt;DataStax careers&lt;/a&gt; page. We’re always on the lookout for tenacious developers and software engineers!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow the &lt;a href="https://dtsx.io/3lddgAU" rel="noopener noreferrer"&gt;DataStax Tech Blog&lt;/a&gt; for more developer stories. Check out our &lt;a href="https://dtsx.io/38SlokL" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; channel for tutorials, and follow DataStax Developers on &lt;a href="https://dtsx.io/3BVgxM2" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for the latest news about our developer community.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/mfleming/7c6cfd225c305c0e44cc7357cdc481f3" rel="noopener noreferrer"&gt;https://gist.github.com/mfleming/7c6cfd225c305c0e44cc7357cdc481f3&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;p&gt;Curious to learn more about (or play with) Cassandra itself? We recommend trying it on the &lt;a href="https://astra.dev/3ImBulI" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; free plan for the fastest setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dtsx.io/3trqUUV" rel="noopener noreferrer"&gt;DataStax&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pulsar.apache.org/" rel="noopener noreferrer"&gt;Apache Pulsar&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dtsx.io/38S1beX" rel="noopener noreferrer"&gt;Fallout&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/nodetool.html" rel="noopener noreferrer"&gt;Nodetool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://tpstats/" rel="noopener noreferrer"&gt;Nodetool tpstats&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gist.github.com/mfleming/7c6cfd225c305c0e44cc7357cdc481f3" rel="noopener noreferrer"&gt;BPF script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lwn.net/Articles/740157/" rel="noopener noreferrer"&gt;A Thorough Introduction to eBPF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lwn.net/Articles/167897/" rel="noopener noreferrer"&gt;High-resolution Timers and Dynamic Ticks Design Notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Linux Kernel: &lt;a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=156ec6f42b8d300dbbf382738ff35c8bad8f4c3a" rel="noopener noreferrer"&gt;kernel/git/torvalds/linux.git&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>performance</category>
      <category>testing</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Save hours on your setup of K8ssandra with the new config builder</title>
      <dc:creator>Pieter Humphrey</dc:creator>
      <pubDate>Thu, 24 Feb 2022 19:36:00 +0000</pubDate>
      <link>https://dev.to/datastax/save-hours-on-your-setup-of-k8ssandra-with-the-new-config-builder-4fdc</link>
      <guid>https://dev.to/datastax/save-hours-on-your-setup-of-k8ssandra-with-the-new-config-builder-4fdc</guid>
      <description>&lt;p&gt;&lt;em&gt;Setting up K8ssandra in your workflow just got a whole lot easier. With the new Config Builder you can be running Apache Cassandra® on Kubernetes in a matter of minutes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The purpose of K8ssandra is to make it easy to run Apache Cassandra® on Kubernetes. We recently took another big step in that direction by releasing the &lt;a href="https://dtsx.io/3pIiPcR" rel="noopener noreferrer"&gt;K8ssandra config builder&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Even if you’ve created thousands of nodes or already integrated K8ssandra into your stack, you’ll probably want to give the config builder a try. Walk through the interactive wizard to define the shape of your cluster, set resource requirements, and toggle features to fit your needs. It’s a great way to smooth out the on-ramp to a production-ready K8ssandra environment.&lt;/p&gt;

&lt;p&gt;Especially if you’re new(er) to K8ssandra, this builder will help you save several hours getting started in your environment. Choose one of the pre-defined templates as a starting point with cloud-specific options preconfigured.&lt;/p&gt;

&lt;p&gt;As of this writing you can take your pick between a custom configuration and templates for &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt; (GKE), &lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noopener noreferrer"&gt;Azure AKS&lt;/a&gt;, &lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;Digital Ocean&lt;/a&gt;, &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Amazon EKS&lt;/a&gt; or a local setup of K8ssandra.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Write hundreds of lines of code with a couple of clicks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As you may know, &lt;a href="https://dtsx.io/3ICC6VU" rel="noopener noreferrer"&gt;K8ssandra&lt;/a&gt; is a platform for production deployments of Cassandra on Kubernetes. This includes everything you want to run alongside Cassandra, like your monitoring system, repair service (&lt;a href="https://dtsx.io/3EJcCDH" rel="noopener noreferrer"&gt;Reaper&lt;/a&gt;), and backups.&lt;/p&gt;

&lt;p&gt;However, the default configuration file is hundreds of lines. So we realized that we could make it even easier. With the K8ssandra Config Builder all you have to do is set your preferred parameters like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limits for deployments&lt;/li&gt;
&lt;li&gt;Storage requirements&lt;/li&gt;
&lt;li&gt;Size and number of nodes&lt;/li&gt;
&lt;li&gt;Racks and logical datacenters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When writing your own YAML file you need to know the right values and sane limits, but with the config builder you don’t need to worry about that. Once you have the file, you run &lt;code&gt;helm install&lt;/code&gt; and get a cluster running.&lt;/p&gt;

&lt;p&gt;The config builder adjusts your settings to keep you within limits that make sense, so you can’t request 32 GB of heap for Cassandra if your quotas only give you 16 GB of RAM.&lt;/p&gt;
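
&lt;p&gt;To give a feel for what the generated file covers, here is an illustrative sketch; the field names follow the K8ssandra 1.x Helm chart, but treat the file the Config Builder actually emits as authoritative:&lt;/p&gt;

```yaml
# Illustrative values.yaml sketch -- not Config Builder output.
cassandra:
  version: "4.0.1"
  heap:
    size: 8G              # must fit inside the container memory limit
  resources:
    requests:
      memory: 16Gi
    limits:
      memory: 16Gi
  datacenters:
    - name: dc1
      size: 3             # number of Cassandra nodes
      racks:
        - name: rack1
```

&lt;p&gt;You would then install with something like &lt;code&gt;helm install k8ssandra k8ssandra/k8ssandra -f values.yaml&lt;/code&gt;, assuming the K8ssandra Helm repository has been added.&lt;/p&gt;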

&lt;p&gt;When you are finished (or even just at a stopping point), you can click copy or share to get going yourself, or send the configuration to a colleague to collaborate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv5lzsjqz8pi5h3se3iw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv5lzsjqz8pi5h3se3iw.jpeg" alt="Image description" width="546" height="271"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1. A single click in the K8ssandra Config Builder will give you the 52 line YAML file for the setup of three nodes in Amazon EKS.&lt;/p&gt;

&lt;p&gt;Sure, there might still be instances where you want to write the YAML file yourself. But if you get stuck on something, the config builder is the fastest way for you to find the solution.&lt;/p&gt;

&lt;p&gt;We’ll keep the builder up to date so you can always come back and get exactly what you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What’s next for the K8ssandra Config Builder&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Our goal is to give you a rich experience building anything in K8ssandra. So we’re working on new features to be released for the K8ssandra config builder in the near future. Near the top of our to-do list are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linking from the &lt;a href="https://dtsx.io/3pMLJsh" rel="noopener noreferrer"&gt;K8ssandra install documentation&lt;/a&gt; to the Config Builder with the matching YAML template preselected&lt;/li&gt;
&lt;li&gt;Syntax highlighting and line numbers for the generated code&lt;/li&gt;
&lt;li&gt;User experience improvements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, our priorities depend on the feedback we get from users of the config builder.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So don’t hesitate to tell us what you think. You can do so by joining the &lt;a href="https://dtsx.io/3DGLG6f" rel="noopener noreferrer"&gt;K8ssandra Discord&lt;/a&gt; or the &lt;a href="https://dtsx.io/31Rp85v" rel="noopener noreferrer"&gt;K8ssandra Forum&lt;/a&gt; today. For exclusive posts on all things data, follow &lt;a href="https://dtsx.io/3IE3mTW" rel="noopener noreferrer"&gt;DataStax on Medium&lt;/a&gt;.&lt;/em&gt; To try Cassandra itself, we recommend the &lt;a href="https://astra.dev/3pgNLBB" rel="noopener noreferrer"&gt;Astra DB&lt;/a&gt; free plan for the fastest setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Try out the &lt;a href="https://dtsx.io/3pIiPcR" rel="noopener noreferrer"&gt;K8ssandra Config Builder&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join the &lt;a href="https://dtsx.io/31Rp85v" rel="noopener noreferrer"&gt;K8ssandra Community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dtsx.io/3lVGaXo" rel="noopener noreferrer"&gt;Kubernetes and Apache Cassandra: What Works (and What Doesn’t)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;DataStax Discord: &lt;a href="https://dtsx.io/3oJUMey" rel="noopener noreferrer"&gt;Fellowship of the (Cassandra) Rings&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dtsx.io/3pMLJsh" rel="noopener noreferrer"&gt;K8ssandra documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt; (GKE)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noopener noreferrer"&gt;Azure Kubernetes Service&lt;/a&gt; (AKS)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;Digital Ocean&lt;/a&gt; cloud infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt; (EKS)&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>nosql</category>
      <category>database</category>
    </item>
  </channel>
</rss>
