<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abdul Hadi</title>
    <description>The latest articles on DEV Community by Abdul Hadi (@abdul-hadi).</description>
    <link>https://dev.to/abdul-hadi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2934365%2Fae1aa752-f1b1-4ef9-bae7-fcd651a3a2df.png</url>
      <title>DEV Community: Abdul Hadi</title>
      <link>https://dev.to/abdul-hadi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abdul-hadi"/>
    <language>en</language>
    <item>
      <title>THALES CipherTrust Manager Installation on Private Cloud Guide 2025</title>
      <dc:creator>Abdul Hadi</dc:creator>
      <pubDate>Fri, 11 Apr 2025 19:42:59 +0000</pubDate>
      <link>https://dev.to/abdul-hadi/thales-ciphertrust-manager-installation-guide-2025-540c</link>
      <guid>https://dev.to/abdul-hadi/thales-ciphertrust-manager-installation-guide-2025-540c</guid>
      <description>&lt;p&gt;We assume that you have already downloaded the CipherTrust Manager OVA file. IF NOT then please follow this link &lt;a href="https://cpl.thalesgroup.com/encryption/ciphertrust-platform-community-edition#getstarted" rel="noopener noreferrer"&gt;OVA file download&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Passing the cloud-init file for a Private Cloud Image&lt;br&gt;
When using the disk image, how the cloud-init data is passed to CipherTrust Manager depends on the virtualization platform; refer to the documentation or notes for your specific cloud environment.&lt;br&gt;
The following are two examples of passing cloud-init data when using a disk image, one using 'libvirt' and one using VMware/vSphere.&lt;/p&gt;

&lt;p&gt;You can view official documentation here &lt;a href="https://cpl.thalesgroup.com/encryption/ciphertrust-platform-community-edition#getstarted" rel="noopener noreferrer"&gt;THALES CipherTrust Manager&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Using VMware/vSphere
&lt;/h2&gt;

&lt;p&gt;This example describes how to deploy CipherTrust Manager on VMware with a static IP configuration. In general, if you have virtual machines you intend to use frequently or for extended periods of time, it is convenient to assign a static IP address, or configure the DHCP server to always assign the same IP address, to each of these virtual machines.&lt;br&gt;
For virtual machines that you do not expect to keep for extended periods of time, use DHCP and let it allocate IP addresses for these machines.&lt;/p&gt;

&lt;p&gt;Use the following procedure to deploy CipherTrust Manager on VMware with a static IP configuration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: This procedure includes preparation of a cloud-init configuration file used to set up a static IP address during launch of the CipherTrust Manager.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Get the CipherTrust Manager installation file for VMware from the Gemalto Support Portal.&lt;/p&gt;

&lt;p&gt;Deploy the OVA on the ESXi server.&lt;br&gt;
Select "Deploy OVF Template".&lt;br&gt;
On the Select an OVF template page, choose the OVA file and select NEXT.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploy Template
&lt;/h3&gt;

&lt;p&gt;On the Select a name and folder page, select the name of the virtual machine and its location.&lt;/p&gt;

&lt;p&gt;Validate the CipherTrust Manager Virtual Machine Configuration. On the Select a compute resource page, select the destination compute resource (if applicable).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note&lt;br&gt;
If you see an error, perform the following steps:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use &lt;code&gt;ovftool.exe&lt;/code&gt; to convert the OVA file into uncompressed OVF file(s).&lt;/p&gt;

&lt;p&gt;Execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ovftool.exe --lax &amp;lt;source_OVA_file&amp;gt; &amp;lt;destination_OVF_file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note&lt;br&gt;
OVF with a compressed disk is not supported on newer versions of the vSphere Client. It may work on older versions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Repeat Step 2 with the new installer file.&lt;/p&gt;

&lt;p&gt;On the Review details page, verify the template details of the CipherTrust Manager image and if correct, select NEXT.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review Details
&lt;/h3&gt;

&lt;p&gt;On the Select storage page, select the storage location to install the CipherTrust Manager and then select NEXT.&lt;br&gt;
On the Select network page, select the network and then select FINISH.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Warning&lt;br&gt;
Do not launch/start the machine at this time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prepare the cloud-init configuration.&lt;/p&gt;

&lt;p&gt;Add a CD drive to the VM.&lt;/p&gt;

&lt;p&gt;Before booting up the VM, prepare the cloud-init configuration. The following cloud-init example configures the VM's eth0 port with a static IP address. Copy this example and edit it for your desired network settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
keysecure:
    netcfg:
        iface:
            name: eth0
            type: static
            address: 192.168.1.150
            netmask: 255.255.255.0
            gateway: 192.168.1.1
            dns1: 192.168.1.100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note&lt;br&gt;
Cloud-init configuration files use YAML syntax; indentation is significant and tabs cannot be used.&lt;/p&gt;
&lt;/blockquote&gt;
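As a precaution against the YAML pitfall noted above, you can scan the configuration file for literal tabs before encoding it. A minimal sketch, assuming a hypothetical file name user-data.yml (the sketch writes a small example file so it runs standalone):

```shell
# Create a small example cloud-init file (hypothetical name) so the check runs standalone.
printf '#cloud-config\nkeysecure:\n    netcfg:\n' > user-data.yml

# YAML forbids literal tab characters; report any found.
if grep -q "$(printf '\t')" user-data.yml; then
    echo "ERROR: tabs found; replace them with spaces" >&2
    TAB_CHECK=fail
else
    echo "OK: no tabs"
    TAB_CHECK=pass
fi
```

Run the same grep against your real configuration file before converting it to base64.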

&lt;p&gt;Convert the configuration to base64 using the openssl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl base64 -in &amp;lt;infile&amp;gt; -out &amp;lt;outfile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this base64 string for use in the next steps.&lt;/p&gt;
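Before pasting the string into vSphere, it is worth confirming that the base64 output decodes back to an identical file. A sketch, assuming hypothetical file names user-data.yml and user-data.b64:

```shell
# Example input file (hypothetical name) so the sketch is self-contained.
printf '#cloud-config\nkeysecure:\n    netcfg:\n' > user-data.yml

# Encode, then decode and compare against the original.
openssl base64 -in user-data.yml -out user-data.b64
openssl base64 -d -in user-data.b64 -out user-data.check
cmp user-data.yml user-data.check && echo "round-trip OK"
```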

&lt;p&gt;Add the base64 configuration to the VM. This step demonstrates the vSphere Web Client (Flash) version; other clients offer similar options.&lt;/p&gt;

&lt;p&gt;Select: &lt;code&gt;virtual machine &amp;gt; Configure &amp;gt; Settings &amp;gt; vApp Options&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Press the Edit button at the top right of the page.&lt;/p&gt;

&lt;p&gt;Under OVF Settings, select the ISO Image check box, which is next to OVF environment transport.&lt;/p&gt;

&lt;p&gt;On the same page, expand "Properties" to add configuration.&lt;/p&gt;

&lt;p&gt;Press the New button to add a property for the configuration. On the following screen, two fields need to be changed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Label: user-data
Default value: &amp;lt;base64 string of configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: The Key ID changes automatically when you change the Label.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Press OK to save the Property Settings.&lt;/p&gt;

&lt;p&gt;Then press OK again to save the vApp Options page.&lt;/p&gt;

&lt;p&gt;Launch the instance. The VM should boot up configured with a static IP.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important&lt;br&gt;
If your VM did not pick up the configuration passed in base64 format, you can use my alternate steps below. Follow them only if you were unsuccessful in configuring your YAML using base64 encoding.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Injecting the cloud-init configuration using an ISO file
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;CTM.yaml:&lt;/strong&gt;&lt;br&gt;
Create your network configuration file as shown below; to add other configurations, see the official documentation linked above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
keysecure:
    netcfg:
        iface:
            name: eth0
            type: static
            address: 192.168.1.150
            netmask: 255.255.255.0
            gateway: 192.168.1.1
            dns1: 192.168.1.100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Meta-data:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a meta-data file and provide instance parameters, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;instance-id: &amp;lt;some instance id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating ISO file:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use any available Linux command line.&lt;/li&gt;
&lt;li&gt;Make sure the genisoimage utility is installed.&lt;/li&gt;
&lt;li&gt;Create the ISO file using the following command.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;genisoimage -o config.iso -volid cidata -joliet -rock &amp;lt;your-network-config-file&amp;gt; &amp;lt;your-meta-data-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This command produces a file named config.iso.&lt;/li&gt;
&lt;li&gt;Upload the resulting config.iso file to your VMware datastore, then attach the ISO file to the CipherTrust Manager VM by editing its settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Important&lt;br&gt;
Make sure the CD/DVD drive's Connect On Power On check box is selected.&lt;br&gt;
Now start the VM and check if it picked up the configurations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Example using 'libvirt'
&lt;/h2&gt;

&lt;p&gt;When launching a virtual machine with the Qcow2 image using 'libvirt', the cloud-init data has to be passed in as an ISO file. The ISO can be generated as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prepare the user-data file as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rename the file config.data to user-data.&lt;/p&gt;

&lt;p&gt;Because the user's SSH key is used for wrapping a layer of encryption keys, it must be added to the cloud-init config. So, the 'user-data' file should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
hostname: &amp;lt;host name for the instance&amp;gt;
diskenc:
  encrypt: true
ssh_authorized_keys:
  - &amp;lt;replace with user ssh public key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: 'ssh_authorized_keys' can be configured with multiple ssh public keys.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Create a meta-data file and provide instance parameters, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;instance-id: &amp;lt;some instance id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create an ISO image file:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure genisoimage utility is installed.&lt;/li&gt;
&lt;li&gt;Create the ISO file.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;genisoimage -o config.iso -volid cidata -joliet -rock user-data meta-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch the instance using virt-install. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virt-install --virt-type kvm --name &amp;lt;virtual image name&amp;gt; --ram 2048 --disk path=&amp;lt;path to keysecure qcow2 image&amp;gt;,size=16,format=qcow2 --disk path=&amp;lt;path to config.iso&amp;gt; --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=ubuntu16.04 --import
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying the VM configuration
&lt;/h2&gt;

&lt;p&gt;After logging in as the ksadmin user and setting a new password, you can follow these steps to verify your configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Viewing Cloud-init Logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my case I edited the VM's IP address, gateway, and DNS server. Logs of every configuration applied to this VM are stored in&lt;br&gt;
&lt;code&gt;/var/log/cloud-init.log&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/log/

less cloud-init.log

# I want to search if my IP was set to static

/static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# You can search for anything else relevant to you;
# the search will show the matching log output.
# Other ways to verify your network configuration:
# Find your network device
nmcli device show | head -n 10
# Mine was ens32; consult your network configuration file
nmcli device show ens32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>tutorial</category>
      <category>ibm</category>
      <category>data</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>IBM Storage Ceph Installation Guide</title>
      <dc:creator>Abdul Hadi</dc:creator>
      <pubDate>Thu, 10 Apr 2025 18:49:58 +0000</pubDate>
      <link>https://dev.to/abdul-hadi/ibm-storage-ceph-installation-guide-3g71</link>
      <guid>https://dev.to/abdul-hadi/ibm-storage-ceph-installation-guide-3g71</guid>
      <description>&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;p&gt;Before registering the IBM Storage Ceph nodes, be sure that you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At least one running virtual machine (VM) or bare-metal server with an active internet connection.&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream. Check with: &lt;code&gt;sudo dnf info ansible-core&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If it is not available, install it using: &lt;code&gt;sudo dnf install ansible-core&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A valid IBM subscription with the appropriate entitlements.&lt;/li&gt;
&lt;li&gt;Root-level access to all nodes.&lt;/li&gt;
&lt;li&gt;For the latest supported Red Hat Enterprise Linux versions, see the compatibility matrix.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Registering Storage Ceph Nodes
&lt;/h2&gt;

&lt;p&gt;Register the system, and when prompted, enter your Red Hat customer portal credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo subscription-manager register
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@admin ~]# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: USERNAME
Password: PASSWORD
The system has been registered with ID: ID
The registered system name is: SYSTEM_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pull the latest subscription data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subscription-manager refresh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@admin ~]# subscription-manager refresh
All local data refreshed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Disable the software repositories.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subscription-manager repos --disable=*

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Enable the Red Hat Enterprise Linux BaseOS and AppStream repositories.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Update the system.
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Enable the ceph-tools repository.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo | sudo tee /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the license package for IBM Storage Ceph and accept the license.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf install ibm-storage-ceph-license

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accept the license provisions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch /usr/share/ibm-storage-ceph-license/accept
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Repeat the above steps on all the nodes of the storage cluster.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
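The repetition can be scripted for the non-interactive parts (update, license install, license acceptance); registration with subscription-manager prompts for credentials, so it is still done per node. A dry-run sketch that only prints the commands, with a hypothetical NODES list:

```shell
# Hypothetical node list; adjust to your cluster.
NODES="host02 host03 host04"

# Print (do not run) the per-node commands for the non-interactive steps.
for node in $NODES; do
    echo "ssh root@${node} 'dnf -y update && dnf -y install ibm-storage-ceph-license && touch /usr/share/ibm-storage-ceph-license/accept'"
done > repeat-steps.txt
cat repeat-steps.txt
```

Review repeat-steps.txt, then execute each line once the node is registered and its repositories are enabled.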

&lt;p&gt;Install &lt;code&gt;cephadm-ansible&lt;/code&gt; only on &lt;strong&gt;admin&lt;/strong&gt; node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf install cephadm-ansible -y
dnf install cephadm -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring Ansible inventory location
&lt;/h2&gt;

&lt;p&gt;Navigate to the &lt;code&gt;/usr/share/cephadm-ansible/&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /usr/share/cephadm-ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: Create subdirectories for staging and production.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p inventory/staging inventory/production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: Edit the ansible.cfg file and add the following line to assign a default inventory location.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
inventory = ./inventory/staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: Create an inventory hosts file for each environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch inventory/staging/hosts
touch inventory/production/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open and edit each hosts file and add the nodes and [admin] group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NODE_NAME_1
NODE_NAME_2

[admin]
ADMIN_NODE_NAME_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;NODE_NAME_1&lt;/code&gt; and &lt;code&gt;NODE_NAME_2&lt;/code&gt; with the Ceph nodes such as monitors, OSDs, MDSs, and gateway nodes.&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;ADMIN_NODE_NAME_1&lt;/code&gt; with the name of the node where the admin keyring is stored.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling SSH login as the root user on Red Hat Enterprise Linux 9
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Root-level access to all nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Procedure
&lt;/h3&gt;

&lt;p&gt;Set &lt;code&gt;PermitRootLogin&lt;/code&gt; to yes in a drop-in file under &lt;code&gt;/etc/ssh/sshd_config.d/&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo 'PermitRootLogin yes' &amp;gt;&amp;gt; /etc/ssh/sshd_config.d/01-permitrootlogin.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the SSH service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart sshd.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to the node as the root user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@HOST_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace HOST_NAME with the host name of the Ceph node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@host01

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter the root password when prompted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Ansible user with sudo access
&lt;/h2&gt;

&lt;p&gt;Complete these steps on each node in the storage cluster.&lt;/p&gt;

&lt;p&gt;Log in to the node as the root user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@HOST_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;HOST_NAME&lt;/code&gt; with the host name of the Ceph node.&lt;/p&gt;

&lt;p&gt;Create a new Ansible user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;adduser ceph-admin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This guide uses &lt;code&gt;ceph-admin&lt;/code&gt; as the Ansible user name; replace it with your own user name if you prefer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Set a new password for this user.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;passwd ceph-admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure &lt;code&gt;sudo&lt;/code&gt; access for the newly created user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | sudo tee /etc/sudoers.d/ceph-admin
ceph-admin ALL=(ALL) NOPASSWD:ALL
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assign the correct file permissions to the new file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 0440 /etc/sudoers.d/ceph-admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enabling password-less SSH for Ansible
&lt;/h2&gt;

&lt;p&gt;Generate the SSH key pair, accept the default file name and leave the passphrase empty.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su - ceph-admin
ssh-keygen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the public key to all nodes in the storage cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-copy-id ceph-admin@host01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there is an issue while copying the SSH key, manually append the public key to the &lt;code&gt;authorized_keys&lt;/code&gt; file on the other nodes.&lt;/p&gt;
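A sketch of that manual fallback, assuming the default key file name produced by ssh-keygen; run the append in the target user's home directory on each node that ssh-copy-id could not reach:

```shell
# Default public key path from ssh-keygen (assumption; adjust if you chose another name).
KEYFILE="$HOME/.ssh/id_rsa.pub"

# Generate an example key pair if none exists, so the sketch runs standalone.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$KEYFILE" ] || ssh-keygen -q -t rsa -N '' -f "$HOME/.ssh/id_rsa"

# Append the public key and lock down the permissions sshd expects.
cat "$KEYFILE" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```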

&lt;p&gt;Create the user's SSH config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch ~/.ssh/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the config file for editing.&lt;/p&gt;

&lt;p&gt;Set values for the Hostname and User options for each node in the storage cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host ceph-node-admin
    Hostname &amp;lt;ip-address&amp;gt;
    User ceph-admin
Host ceph-node-1
    Hostname 65.2.172.224
    User ceph-admin
Host ceph-node-2
    Hostname 65.2.182.40
    User ceph-admin
Host ceph-node-3
    Hostname 13.233.8.7
    User ceph-admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the correct file permissions for the &lt;code&gt;~/.ssh/config&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 600 ~/.ssh/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Running the preflight playbook&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: In the following procedure, host01 is the bootstrap node.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Navigate to the &lt;code&gt;/usr/share/cephadm-ansible&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Open and edit the hosts file and add your nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ceph-node-1
ceph-node-2
ceph-node-3

[admin]
ceph-node-admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Run the preflight playbook.
&lt;/h2&gt;

&lt;p&gt;Run the preflight playbook either by running the playbook for all hosts in the cluster or for a selected set of hosts in the cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: When running the preflight playbook, &lt;code&gt;cephadm-ansible&lt;/code&gt; automatically installs the &lt;code&gt;chrony&lt;/code&gt; and &lt;code&gt;ceph-common&lt;/code&gt; packages on the client nodes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After installation is complete, cephadm resides in the /usr/sbin/ directory.&lt;/p&gt;

&lt;p&gt;Run the playbook for all hosts in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bootstrapping a new storage cluster
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before you begin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you begin, make sure that you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.&lt;/li&gt;
&lt;li&gt;Login access to cp.icr.io/cp. For information about obtaining credentials for cp.icr.io/cp, see Obtaining an entitlement key.&lt;/li&gt;
&lt;li&gt;A minimum of 10 GB of free space for /var/lib/containers/.&lt;/li&gt;
&lt;li&gt;Root-level access to all nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bootstrap the storage cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cephadm bootstrap --cluster-network 172.31.0.0/16 --mon-ip 172.31.34.236 --registry-url cp.icr.io/cp --registry-username cp --registry-password &amp;lt;ENTITLEMENT KEY&amp;gt; --yes-i-know
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Open Port Ranges&lt;/strong&gt;&lt;br&gt;
Apart from the common ports (22, 80, 443), the following ports must be open for IBM Storage Ceph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3300&lt;/li&gt;
&lt;li&gt;6789&lt;/li&gt;
&lt;li&gt;8443&lt;/li&gt;
&lt;li&gt;8765&lt;/li&gt;
&lt;li&gt;9093&lt;/li&gt;
&lt;li&gt;9283&lt;/li&gt;
&lt;/ul&gt;
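If the nodes run firewalld (an assumption; adapt for other firewalls), the list above can be turned into firewall-cmd calls. This dry-run sketch only prints the commands for review:

```shell
# Ports required by IBM Storage Ceph, per the list above.
CEPH_PORTS="3300 6789 8443 8765 9093 9283"

# Print (do not run) the firewalld commands; run them as root on each node.
for port in $CEPH_PORTS; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
done > open-ceph-ports.txt
echo "firewall-cmd --reload" >> open-ceph-ports.txt
cat open-ceph-ports.txt
```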
&lt;h2&gt;
  
  
  Distributing SSH keys
&lt;/h2&gt;

&lt;p&gt;You can use the &lt;code&gt;cephadm-distribute-ssh-key.yml&lt;/code&gt; playbook to distribute the SSH keys instead of creating and distributing the keys manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About this task&lt;/strong&gt;&lt;br&gt;
The playbook distributes an SSH public key over all hosts in the inventory. You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Procedure&lt;/strong&gt;&lt;br&gt;
Navigate to the &lt;strong&gt;/usr/share/cephadm-ansible&lt;/strong&gt; directory on the Ansible administration node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ansible@admin ~]$ cd /usr/share/cephadm-ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the Ansible administration node, distribute the SSH keys. The optional &lt;code&gt;cephadm_pubkey_path&lt;/code&gt; parameter is the full path name of the SSH public key file on the ansible controller host.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=USER_NAME -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=ADMIN_NODE_NAME_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding multiple hosts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before you begin&lt;/strong&gt;&lt;br&gt;
Before you begin, make sure that you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A storage cluster that has been installed and bootstrapped.&lt;/li&gt;
&lt;li&gt;Root-level access to all nodes in the storage cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;About this task&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt. If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Procedure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy over the public ssh key to each of the hosts that you want to add.&lt;/li&gt;
&lt;li&gt;Use a text editor to create a hosts.yaml file.&lt;/li&gt;
&lt;li&gt;Add the host descriptions to the hosts.yaml file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service_type: host
addr:
hostname: host02
labels:
- mon
- osd
- mgr
---
service_type: host
addr:
hostname: host03
labels:
- mon
- osd
- mgr
---
service_type: host
addr:
hostname: host04
labels:
- mon
- osd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mount the &lt;code&gt;hosts.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;If you created the hosts.yaml file directly on the local host, use the &lt;br&gt;
cephadm shell to mount the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@host01 ~]# cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you created the hosts.yaml file within the host container, run the ceph orch apply command. For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ceph: root@host01 /]# ceph orch apply -i hosts.yaml
Added host 'host02' with addr '10.10.128.69'
Added host 'host03' with addr '10.10.128.70'
Added host 'host04' with addr '10.10.128.71'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;View the list of hosts and their labels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ceph: root@host01 /]# ceph orch host ls
HOST      ADDR      LABELS          STATUS
host02   host02    mon,osd,mgr
host03   host03    mon,osd,mgr
host04   host04    mon,osd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>cloud</category>
      <category>redhat</category>
    </item>
  </channel>
</rss>
