This tutorial assumes you have already provisioned the VMs required for the deployment.
Setting up the cluster
On each of the nodes, follow the steps below.
- Update the packages
sudo apt-get update && sudo apt-get upgrade
- Make sure you have the Java 8 JDK installed on your VM
java -version
If you don't have it installed, run
sudo apt install openjdk-8-jdk
- If you have an older version of the JDK installed, you can set Java 8 as your default by running
sudo update-alternatives --config java
and choosing the correct alternative
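After switching, you can confirm that version 8 is now the default (the exact build string varies by distribution):
java -version
# expect something like: openjdk version "1.8.0_..."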
- Add the Cassandra repository from the Apache Foundation so that the packages are available to your system
echo "deb https://downloads.apache.org/cassandra/debian 311x main" |
sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
- Add the public keys from Apache so that the packages can be verified
curl https://downloads.apache.org/cassandra/KEYS |
sudo apt-key add -
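If you want to confirm the keys were imported, you can list the keys apt trusts; the entries you just added should appear in the output:
sudo apt-key list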
- Update the packages
sudo apt-get update
- Install Cassandra
sudo apt-get install cassandra
- Check that Cassandra is running on each of your nodes
sudo service cassandra status
- If everything worked as expected, you should see output similar to this
● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/init.d/cassandra; generated)
   Active: active (running) since Tue 2020-07-14 06:18:07 UTC; 1min 4s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 64 (limit: 19141)
   CGroup: /system.slice/cassandra.service
- Stop Cassandra so that the individual nodes can be set up to form a cluster
sudo service cassandra stop
- Delete the default data that was created
sudo rm -rf /var/lib/cassandra/data/system/*
At this point you have Cassandra set up on each of the nodes. The following steps will allow you to join these nodes into a cluster.
Let's assume you have 4 nodes with IPs: 10.10.10.1, 10.10.10.2, 10.10.10.3, 10.10.10.4.
On each node, the config file is at /etc/cassandra/cassandra.yaml. In cassandra.yaml, make sure the following changes are made:
# This is an example for 10.10.10.1
cluster_name: 'Name For Your Cluster'
authenticator: PasswordAuthenticator  # restricts access to clients with credentials
authorizer: CassandraAuthorizer
# seeds (found under seed_provider > parameters in the file) are the contact
# points for nodes joining the cluster. A general guideline is to have more
# than one seed per data center.
seeds: "10.10.10.1,10.10.10.4"
listen_address: 10.10.10.1  # change to the IP of your node
rpc_address: 10.10.10.1  # change to the IP of your node
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false
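A quick sanity check on each node is to grep the file for the settings you just changed (purely a convenience, not a required step):
grep -E "cluster_name|authenticator|authorizer|seeds|listen_address|rpc_address|endpoint_snitch|auto_bootstrap" /etc/cassandra/cassandra.yaml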
Once you have set the config file on each of the nodes, start up the cluster by starting the service on each node, seed nodes first.
Run sudo service cassandra start
Once you have started the Cassandra service on all of the nodes, you can check the status of the cluster by running nodetool status on any of the nodes.
If the cluster is up, you should see a response like the following:
Datacenter: dc1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.10.10.1 142.02 KB 256 ? 2053956d-7461-41e6-8dd2-0af59436f736 rack1
UN 10.10.10.2 142.02 KB 256 ? 4553956d-7461-41e6-8dd2-0af59436f736 rack1
UN 10.10.10.3 142.02 KB 256 ? 2653546d-7461-41e6-8dd2-0af59436f736 rack1
UN 10.10.10.4 142.02 KB 256 ? 2465234d-7461-41e6-8dd2-0af59436f736 rack1
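You can also confirm that the nodes agree on the schema; in the output of the command below, all four IPs should be listed under a single schema version:
nodetool describecluster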
Setting up the database
- To start, you can connect to the database as the default user 'cassandra'
cqlsh 10.10.10.1 -u cassandra
The default password is cassandra.
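cqlsh also accepts a one-off statement via -e, which is handy for quick checks. For example, to list the existing roles (at this point only the built-in cassandra role should exist):
cqlsh 10.10.10.1 -u cassandra -p cassandra -e "LIST ROLES;"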
- Update the system_auth keyspace
By default, the system_auth keyspace has a replication factor of 1, so if the node holding the single replica goes down, you will not be able to authenticate to the db.
ALTER KEYSPACE "system_auth"
WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
Here, you are setting a replication factor of 3 on a 4-node cluster, which should be good enough in most situations.
You can read more about it here: https://docs.datastax.com/en/ddacsecurity/doc/ddacsecurity/secureConfigNativeAuth.html
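You can verify the change took effect by describing the keyspace; the output should now show NetworkTopologyStrategy with 'dc1': '3':
cqlsh 10.10.10.1 -u cassandra -p cassandra -e "DESCRIBE KEYSPACE system_auth;"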
- Propagate the change to all nodes.
Run nodetool repair system_auth on each node (see the sketch below).
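If you have SSH access to every node from your workstation, a small loop saves some typing. This is only a sketch: it assumes a hypothetical ubuntu login user and reuses the example IPs from above.
# sketch only: assumes SSH access as a hypothetical 'ubuntu' user on each node
for ip in 10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4; do
  ssh ubuntu@"$ip" "nodetool repair system_auth"
done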
- Restart the database, bringing the nodes back up starting with the seed nodes
# Stop all nodes by running this on each node
sudo service cassandra stop
# Then bring each node up, starting with the seed nodes
sudo service cassandra start
- Log back in to the db with the default cassandra user
cqlsh 10.10.10.1 -u cassandra
- Create a new superuser
CREATE ROLE testuser WITH PASSWORD = '<some_secure_password>'
AND SUPERUSER = true
AND LOGIN = true;
- Log in as the newly created user
cqlsh 10.10.10.1 -u testuser
- Neutralise or remove the default account
Since the cassandra user is the default superuser, it poses a security threat: anyone who knows the default credentials can access the db with superuser privileges. Either:
1. ALTER ROLE cassandra WITH PASSWORD='ReallyStrongPassword'
AND SUPERUSER=false;
OR
2. Drop the cassandra role
DROP ROLE cassandra;
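Whichever option you choose, confirm that the default credentials no longer grant access; the login attempt below should now fail with an authentication error:
cqlsh 10.10.10.1 -u cassandra -p cassandra -e "LIST ROLES;"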
- Initial setup: create an application user and keyspace, and grant the needed permissions
CREATE ROLE test_user_one WITH PASSWORD = 'abcgdraklasdf'
AND LOGIN = true;
CREATE KEYSPACE testone WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 2 };
GRANT CREATE ON KEYSPACE testone TO test_user_one;
GRANT ALTER ON KEYSPACE testone TO test_user_one;
GRANT DROP ON KEYSPACE testone TO test_user_one;
GRANT MODIFY ON KEYSPACE testone TO test_user_one;
GRANT SELECT ON KEYSPACE testone TO test_user_one;
GRANT SELECT ON system.size_estimates TO test_user_one;
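As a final check, you can log in as the new user and exercise the grants. This is just a smoke test; the table name smoke_test is an example, and the password is the one chosen above:
cqlsh 10.10.10.1 -u test_user_one -p abcgdraklasdf -e "
  CREATE TABLE testone.smoke_test (id int PRIMARY KEY, note text);
  INSERT INTO testone.smoke_test (id, note) VALUES (1, 'it works');
  SELECT * FROM testone.smoke_test;"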