<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: TalhaKhalid101</title>
    <description>The latest articles on DEV Community by TalhaKhalid101 (@talhakhalid101).</description>
    <link>https://dev.to/talhakhalid101</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F749719%2F658c7774-ef28-46dc-a33e-7267c1ee79ab.jpeg</url>
      <title>DEV Community: TalhaKhalid101</title>
      <link>https://dev.to/talhakhalid101</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/talhakhalid101"/>
    <language>en</language>
    <item>
      <title>YAML Based VM Configuration With Cloud-init</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Tue, 19 Mar 2024 18:37:05 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/yaml-based-vm-configuration-with-cloud-init-dd3</link>
      <guid>https://dev.to/talhakhalid101/yaml-based-vm-configuration-with-cloud-init-dd3</guid>
      <description>&lt;p&gt;Cloud-init is a Canonical project and describes itself as follows: " Cloud -init is the industry-standard multi-distribution method for the cross-platform initialization of cloud instances."&lt;/p&gt;

&lt;p&gt;To put it more simply: cloud-init enables the complete configuration of virtual machines (VMs) using a simple text file. Cloud instances are, of course, nothing other than VMs that run at cloud providers such as AWS, Azure or GCP, with one special feature: they are often ephemeral, i.e. they only run for a short time and are then deleted again.&lt;/p&gt;

&lt;p&gt;Initialization, in turn, simply means the configuration, i.e. everything apart from the hardware equipment: network connections, settings, installed software, user profiles and so on. And "multi-distribution" means that cloud-init works the same way across a whole range of Linux distributions.&lt;/p&gt;

&lt;p&gt;Cross-platform means that pretty much all major cloud providers are supported; in addition to those already mentioned, Oracle, UpCloud, VMware and OpenNebula, among others; around 25 in total. And there are hardly any restrictions regarding the supported operating systems for the VMs; 26 distributions are listed, in addition to obvious ones such as Debian, Ubuntu, RHEL, Arch Linux and Fedora, also openEuler, Rocky, Virtuozzo and the common BSD systems.&lt;/p&gt;

&lt;p&gt;Cloud-init explained in a very rudimentary way: "Dear IT department, we need several VMs with SSH access through Server X, Apache web server, mounted network drives, software packages according to the attached list, the user foobar and a RHEL registration." Cloud-init is nothing more than such a request - just formalized and sent to a supported system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud-init in Practice
&lt;/h3&gt;

&lt;p&gt;The easiest way to play with cloud-init is probably Multipass, also a Canonical project and already presented here some time ago. With Multipass, "cloud instances", i.e. VMs, can be created locally and optionally set up using cloud-init. This works on Windows as well as Linux.&lt;/p&gt;

&lt;p&gt;By default, Multipass creates VMs with Ubuntu 22.04; hardware resources and the network connection are simply specified via options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass launch --mem=2G --cpus=2 --name testvm1 --network name=LAN-Connection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So here an Ubuntu VM is created with 2 GB of RAM, 2 CPUs and the (Windows) standard network connection "LAN-Connection".&lt;br&gt;
If the VM is to be configured via cloud-config, the config file is also passed to "multipass launch":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass launch --mem=2G --cpus=2 --name testvm1 --network name=LAN-Connection -- cloud-init testconf.ini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All cloud-init work takes place in this "testconf.ini" - a YAML file. YAML is a human-readable data serialization language for the machine-readable representation of data structures.&lt;/p&gt;

&lt;p&gt;This representation can sometimes be a bit fiddly: be sure to use correct indentation. It usually doesn't matter whether items are indented with two, three or four spaces, but it must be consistent across the entire document - indentation is part of the YAML syntax! Too many or too few spaces immediately lead to errors.&lt;/p&gt;
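
&lt;p&gt;A minimal sketch of what this means in practice (two spaces per level here, purely as an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# valid: all list items are indented identically
packages:
  - git
  - apache2

# invalid: the second item is indented one space deeper
packages:
  - git
   - apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;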

&lt;p&gt;Cloud-config offers around 60 modules that manage general things like users, but also more specific aspects like the text-based window manager Byobu. A simple version could, for example, simply execute a command in the newly created Multipass VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
runcmd:
   - echo "Hello World!"
   - echo "End"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first line, "#cloud-config", is required - it looks like a comment, but it marks the file as a cloud-config document. The "runcmd" (run command) module is then called, which simply executes the listed commands.&lt;br&gt;
Here is a practical example that performs the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Output "Hello, World!"&lt;/li&gt;
&lt;li&gt;Install the git and apache2 packages&lt;/li&gt;
&lt;li&gt;Set up the standard user ("ubuntu") and the user "peter"&lt;/li&gt;
&lt;li&gt;Enable passwordless SSH access from the server "myserver" for the user "talha"&lt;/li&gt;
&lt;li&gt;Mount a network drive (Samba share) "myserver/myshare"&lt;/li&gt;
&lt;li&gt;Write a message to the installation log&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Formulated via YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
runcmd:
   - echo 'Hello, World!'
packages:
   - git
   - apache2
users:
   - default
   - peter
ssh_authorized_keys:
   - ssh-rsa AAAAB3... talha@myserver
mounts:
   - [ //192.168.178.1/myshare, /media/myshare, cifs, "defaults,uid=1000,username=talha,password=34565234687gyft@" ]
final_message: |
   cloud-init done
   Version: $version
   Timestamp: $timestamp
   Datasource: $datasource
   Uptime: $uptime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "packages" module is only used here to install packages, but can also update using the "package_update" and "package_upgrade" calls.&lt;/p&gt;

&lt;p&gt;At this point the platform independence is also nice to see: in earlier versions the directive was called "apt_update", but now the appropriate package manager is addressed automatically. Separate modules are available for configuring the individual package managers, e.g. apt, yum/dnf or zypper.&lt;/p&gt;

&lt;p&gt;Of course, the "users" module also has a lot more to offer. The full range of account settings can be used here, such as group memberships, user information, passwords, home directories, specific user shell, sudo membership, etc.&lt;/p&gt;

&lt;p&gt;But the "ssh_authorized_keys" module is really interesting. Here the contents of public SSH keys (admittedly abbreviated above) can be stored in order to immediately grant password-free access for certain users. This is practical, for example, for automatic configuration via Ansible, which by default connects to the hosts to be managed via SSH.&lt;/p&gt;

&lt;p&gt;The "mounts" module also does exactly what is expected: it writes entries into the fstab - only the separation via comma differs from manual entries.&lt;/p&gt;

&lt;p&gt;These examples clearly show what cloud-init essentially does: execute standard commands. Only, for example, it says abstractly "packages: apache2" instead of system-specifically "apt install apache2". You could say that cloud-init is ultimately "just" an abstraction layer for what admins otherwise do manually in the first half hour after setting up a VM… or even on the first day.&lt;/p&gt;

&lt;h3&gt;
  
  
  More Functions
&lt;/h3&gt;

&lt;p&gt;If you’re interested, it’s best to look through the individual modules yourself; the &lt;a href="https://cloudinit.readthedocs.io/en/latest/reference/modules.html" rel="noopener noreferrer"&gt;Cloud-init module reference&lt;/a&gt; is very clear and, fortunately, mostly enriched with practical examples. To illustrate the diversity, here are a few highlights.&lt;/p&gt;

&lt;p&gt;Using the Ansible module, playbooks can be executed via "ansible-pull", even without SSH access. There is a "Byobu" module that configures the window manager mentioned above; very specialized. The "Phone Home" module is powerful and can be used to send data to a URL.&lt;/p&gt;

&lt;p&gt;Similarly powerful is "Scripts per Boot", which, as expected, executes any script when an instance starts; every time, mind you! Most modules run once per instance, i.e. once during setup. Optionally, they can also be active ("Always") at every start. "Phone Home", for example, could then be used to send data to an API after every start.&lt;/p&gt;

&lt;p&gt;Last but not least: &lt;a href="https://cloudinit.readthedocs.io/en/latest/reference/cli.html" rel="noopener noreferrer"&gt;cloud-init also has a command line interface&lt;/a&gt;, for example to run individual modules, check cloud-config files, analyze logs and so on.&lt;/p&gt;
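
&lt;p&gt;A few illustrative invocations; the subcommands are taken from the CLI reference linked above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# validate a cloud-config file before using it
cloud-init schema --config-file testconf.ini

# show whether cloud-init has finished and with what result
cloud-init status --long

# evaluate boot and setup timings from the logs
cloud-init analyze show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;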

&lt;p&gt;Cloud-init is a wonderful tool for understanding and implementing “Infrastructure as Code”. And not just on a large scale: the combination of cloud-init plus Multipass is unbeatable even for local test environments on the desktop.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was first published on my &lt;a href="https://medium.com/@talhakhalid101/yaml-based-vm-configuration-with-cloud-init-a090271a2377" rel="noopener noreferrer"&gt;medium blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>10 Tuning Tips to Maximize Your PostgreSQL Performance</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Thu, 25 Jan 2024 18:59:39 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/10-tuning-tips-to-maximize-your-postgresql-performance-1o8l</link>
      <guid>https://dev.to/talhakhalid101/10-tuning-tips-to-maximize-your-postgresql-performance-1o8l</guid>
      <description>&lt;p&gt;You’ve decided to use PostgreSQL, one of the most powerful and flexible database management systems available. However, to make the most of this robust feature, it is essential to adjust the settings to meet your project’s specific needs. In this post, we’ll explore some tuning, care, and improvement tips that can help maximize the performance, security, and reliability of your PostgreSQL database and ensure it works efficiently for you.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose the right configuration during installation:&lt;/strong&gt; &lt;br&gt;
When installing PostgreSQL, you can choose the configuration that best suits your database’s purpose: production, staging, web applications, etc. Make sure the configuration matches your project’s needs. There are websites that can help you with this, for example, PGConfig or pgconfigurator.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Allocate appropriate resources:&lt;/strong&gt;&lt;br&gt;
PostgreSQL is known to be resource-hungry. Therefore, it is crucial to allocate adequate hardware resources, such as CPU, RAM and disk space, and to run your database on a server where no other applications compete for resources. Adjust the &lt;code&gt;shared_buffers&lt;/code&gt; parameter to allocate a significant amount of RAM to the cache, and track resource usage with tools like &lt;code&gt;pg_stat_statements&lt;/code&gt; to make adjustments as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure autovacuum:&lt;/strong&gt;&lt;br&gt;
PostgreSQL uses the autovacuum process to remove dead rows and maintain performance. Make sure autovacuum is configured correctly to avoid table bloat and performance degradation. Adjust parameters such as &lt;code&gt;autovacuum_vacuum_scale_factor&lt;/code&gt; and &lt;code&gt;autovacuum_analyze_scale_factor&lt;/code&gt; when required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Smart Indexes:&lt;/strong&gt;&lt;br&gt;
Indexes are essential for efficient queries, but excessive use of indexes can degrade performance. Be sure to create indexes only on columns that are frequently used in queries, and remove unnecessary indexes to avoid overhead. The &lt;code&gt;pg_stat_all_indexes&lt;/code&gt; view can help you analyze which indexes are used heavily and which are barely used at all.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Constant monitoring:&lt;/strong&gt; &lt;br&gt;
Use monitoring tools, such as PgBadger or &lt;code&gt;pg_stat_monitor&lt;/code&gt;, to monitor the performance of your PostgreSQL. This will help identify potential bottlenecks and issues before they significantly impact your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Back up and keep your system up to date:&lt;/strong&gt;&lt;br&gt;
Remember to make regular backups of your database and keep PostgreSQL up to date with the latest versions and security fixes. It is always recommended to use some tools to perform and manage your backups, for example, pgBackRest, Barman, pg_probackup, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Table Partitioning:&lt;/strong&gt; &lt;br&gt;
When you deal with large volumes of data, consider using PostgreSQL’s partitioning functionality. This involves splitting large tables into smaller parts, which can significantly speed up queries and reduce maintenance overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure Error Logging:&lt;/strong&gt;&lt;br&gt;
PostgreSQL has a highly configurable error logging system. Be sure to adjust your log settings to record useful information but not overload your system with excessive logging. These logs can help you identify problems and bottlenecks in database queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tuning Operating System Parameters:&lt;/strong&gt;&lt;br&gt;
In addition to tuning PostgreSQL parameters, be sure to optimize your operating system settings for PostgreSQL. This may involve configuring resource limits, such as file handles, according to the needs of the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal Scalability with Replication:&lt;/strong&gt; &lt;br&gt;
When growth is a concern, replication is an important option. Configure PostgreSQL replication to distribute load between servers and ensure high availability. To do this, you can use tools such as pgPool, HAProxy or even do this within your own application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
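
&lt;p&gt;To make some of these tips concrete, here is an illustrative &lt;code&gt;postgresql.conf&lt;/code&gt; fragment; the values are rough starting points for a dedicated 16 GB server and must be adapted to your own hardware and workload:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# tip 2: memory for the shared cache (around 25% of RAM is a common starting point)
shared_buffers = 4GB
# load pg_stat_statements to collect query statistics
shared_preload_libraries = 'pg_stat_statements'

# tip 3: vacuum/analyze large tables more aggressively than the defaults
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

# tip 8: log only queries slower than 500 ms instead of everything
log_min_duration_statement = 500
log_checkpoints = on
log_lock_waits = on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;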

&lt;p&gt;PostgreSQL is certainly an excellent and versatile choice for your database projects, but success largely depends on how you tune and optimize its settings. With these tuning tips, you can ensure your PostgreSQL runs at maximum efficiency and performance, regardless of the size or complexity of your project. &lt;/p&gt;

&lt;p&gt;Always remember that PostgreSQL tuning is an ongoing process. As the volume of data or number of users increases, you may need to re-evaluate and adjust settings to maintain optimal performance. Stay up to date with best practices and continue to improve your PostgreSQL database to meet the needs of your application or system. With the tips above, you are on your way to achieving highly optimized PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was first published on my &lt;a href="https://medium.com/@talhakhalid101/tuning-tips-to-maximize-your-postgresql-performance-2a78996ee666" rel="noopener noreferrer"&gt;medium blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>database</category>
      <category>development</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Call a SOAP service from Postman or Fiddler Using Azure Web Apps</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Tue, 25 Jul 2023 15:53:09 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/call-a-soap-service-from-postman-or-fiddler-using-azure-web-apps-450m</link>
      <guid>https://dev.to/talhakhalid101/call-a-soap-service-from-postman-or-fiddler-using-azure-web-apps-450m</guid>
      <description>&lt;p&gt;While it’s true that JSON or XML has been the rage for a long time in web services, there are still thousands and thousands of SOAP services in most companies I walk across.&lt;/p&gt;

&lt;p&gt;Today I had to run some tests with a super simple WCF example (in fact, the one that comes with the Visual Studio WCF template), and the truth is that it took me a while to access the service correctly from Postman or Fiddler. The funny thing is that I had tried the WCF Test Client, which comes as part of Visual Studio, and it worked without problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku1nqidkcg293pwrndvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku1nqidkcg293pwrndvc.png" alt="WCF Test Client — ​​SOAP service" width="720" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But when I got to Postman, all I got was a 400 — Bad Request error without much explanation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6i2hw38o4sv51vcvmt8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6i2hw38o4sv51vcvmt8t.png" alt="Postman SOAP Request — 404 Bad Request" width="720" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in Fiddler exactly the same!&lt;/p&gt;

&lt;p&gt;After wandering around for a while, I uploaded my service to a &lt;a href="https://azure.microsoft.com/en-us/services/app-service/web/" rel="noopener noreferrer"&gt;Web App in Microsoft Azure&lt;/a&gt; and &lt;a href="https://wsdlbrowser.com/" rel="noopener noreferrer"&gt;tried one of the many pages that act as a SOAP client&lt;/a&gt;, reading from the service definition (WSDL). That’s where I could see what was missing from my request to make it work just as it did in the &lt;strong&gt;WCF Test Client: the SOAPAction header&lt;/strong&gt;, plus a correct Content-Length in the case of Postman. The configuration needed to make the call is the following:&lt;/p&gt;
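
&lt;p&gt;As a text sketch, the headers for the default WCF template service look roughly like this; the host is a placeholder, and "http://tempuri.org/" is simply the template’s default namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /Service1.svc HTTP/1.1
Host: myservice.azurewebsites.net
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://tempuri.org/IService1/GetData"
Content-Length: (byte length of the body)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;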

&lt;p&gt;These would be the headers needed to make the request:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguc8k5wkv9e9du9qjuf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguc8k5wkv9e9du9qjuf1.png" alt="SOAP request with parameters — headers" width="720" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and this is the body of the message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d6d9l9u44trxhoxx2s8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d6d9l9u44trxhoxx2s8.png" alt="SOAP request with parameters — body" width="720" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For Fiddler the values would be the same, just entered in Fiddler’s own format:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6rwf5c9tfzram3qpivo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6rwf5c9tfzram3qpivo.png" alt="Fiddler — SOAP request" width="720" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An additional point to review is that &lt;strong&gt;Postman does not calculate the Content-Length automatically&lt;/strong&gt; as Fiddler does, and this can also cause problems when making the request. You can use a &lt;a href="http://string-functions.com/length.aspx" rel="noopener noreferrer"&gt;page like this&lt;/a&gt; to calculate the size of the content (or let Fiddler, which calculates and updates the length by itself, do it for you) and save yourself a few more minutes.&lt;/p&gt;
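
&lt;p&gt;One detail worth keeping in mind when calculating it yourself: Content-Length counts the bytes of the request body, not its characters, which differs as soon as the envelope contains non-ASCII data. A tiny Python sketch (the body string is a made-up example):&lt;/p&gt;

```python
# Content-Length is the number of BYTES in the request body, not characters.
# A non-ASCII character such as "é" occupies two bytes in UTF-8.
body = 'value with accent: é'
char_count = len(body)
content_length = len(body.encode("utf-8"))
assert content_length == char_count + 1  # the "é" costs one extra byte
print(content_length)
```

&lt;p&gt;For a pure-ASCII SOAP envelope the two counts are identical, which is why the mismatch only bites occasionally.&lt;/p&gt;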

</description>
      <category>webdev</category>
      <category>azure</category>
    </item>
    <item>
      <title>Kubernetes 1.24 Released: What’s New?</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Wed, 11 May 2022 21:01:30 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/kubernetes-124-released-whats-new-3ddh</link>
      <guid>https://dev.to/talhakhalid101/kubernetes-124-released-whats-new-3ddh</guid>
      <description>&lt;p&gt;The latest Kubernetes version 1.24 “Stargazer” brings a lot of changes to the open-source system for automating the deployment, scaling, and management of containerized applications. 14 features are now permanently available after the test phase, 15 innovations are in the beta phase and 13 improvements are being tested in the alpha phase. In addition, two features were deprecated and two others were permanently deleted. The main topics of this release include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#dockershim-removed-from-kubelet" rel="noopener noreferrer"&gt;Removal of Dockershim from the kubelet&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#beta-apis-off-by-default" rel="noopener noreferrer"&gt;Beta APIs are no longer active by default&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#signing-release-artifacts" rel="noopener noreferrer"&gt;Sharing release artifacts&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#openapi-v3" rel="noopener noreferrer"&gt;OpenAPI v3&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#storage-capacity-and-volume-expansion-are-generally-available" rel="noopener noreferrer"&gt;Storage capacity tracking and volume expansion&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#nonpreemptingpriority-to-stable" rel="noopener noreferrer"&gt;NonPreempting Priority no longer in testing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#storage-plugin-migration" rel="noopener noreferrer"&gt;Migration of storage plug-ins&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#grpc-probes-graduate-to-beta" rel="noopener noreferrer"&gt;gRPC probes feature&lt;/a&gt; and &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#kubelet-credential-provider-graduates-to-beta" rel="noopener noreferrer"&gt;Kubelet Credential Provider&lt;/a&gt; are in beta.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services" rel="noopener noreferrer"&gt;Avoiding duplication when associating IP addresses with services&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#dynamic-kubelet-configuration-is-removed-from-the-kubelet" rel="noopener noreferrer"&gt;Removal of the Dynamic Kubelet Configuration from the kubelet&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Stargazer
&lt;/h3&gt;

&lt;p&gt;The Kubernetes development team came up with a special name for version 1.24: &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo" rel="noopener noreferrer"&gt;Stargazer&lt;/a&gt;. According to the team, the name refers to the fact that gazing at the stars can spark scientific inquiry and creativity and aid navigation through unknown territory. The logo is also imaginatively designed: it shows a stylized telescope in front of a starry sky. The constellation shown is the Pleiades, also known as the Seven Sisters in Greek mythology. It pays homage to Kubernetes’ original internal project name: Project 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxt4oduzy3etfnitrccfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxt4oduzy3etfnitrccfv.png" alt="Kubernetes 1.24 Stargazer Logo " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockershim removed from kubelet
&lt;/h3&gt;

&lt;p&gt;Dockershim was the CRI-compliant layer between Kubelet and the Docker daemon. After being deprecated in version 1.20, the &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#dockershim-removed-from-kubelet" rel="noopener noreferrer"&gt;Dockershim component has now been permanently deleted from the kubelet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As of version 1.24, either one of the other supported runtimes (e.g. containerd or CRI-O) or, in case you still want to rely on Docker Engine, &lt;a href="https://github.com/Mirantis/cri-dockerd" rel="noopener noreferrer"&gt;cri-dockerd&lt;/a&gt; must be used. Further information on precautions that may be necessary due to the removal of Dockershim is provided by Kubernetes in a &lt;a href="https://kubernetes.io/blog/2022/03/31/ready-for-dockershim-removal/" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;
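
&lt;p&gt;With containerd, for example, the kubelet is pointed at the new runtime’s CRI socket; a minimal sketch (the socket path is containerd’s common default, and the file shown is the one kubeadm-managed nodes use):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /var/lib/kubelet/kubeadm-flags.env on a kubeadm-managed node
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;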

&lt;h3&gt;
  
  
  New beta APIs are no longer active by default
&lt;/h3&gt;

&lt;p&gt;New beta APIs are &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#beta-apis-off-by-default" rel="noopener noreferrer"&gt;no longer enabled by default&lt;/a&gt; in Kubernetes. However, existing beta APIs and the new versions of existing beta APIs will still be on board by default. More on this topic can be found on &lt;a href="https://github.com/kubernetes/enhancements/issues/3136" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Contextual logging in the alpha phase
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#contextual-logging-in-alpha" rel="noopener noreferrer"&gt;The Contextual Logging&lt;/a&gt; feature was introduced with Kubernetes version 1.24. It allows a function’s caller to control all aspects of logging, such as output formatting, verbosity, additional values, ​​and names.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Information
&lt;/h3&gt;

&lt;p&gt;Information on &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#other-updates" rel="noopener noreferrer"&gt;further updates&lt;/a&gt; can be found on the Kubernetes website. The full &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md" rel="noopener noreferrer"&gt;release notes&lt;/a&gt; can be viewed on GitHub. The team behind the release of Kubernetes 1.24 will host a release webinar on May 24, 2022, from 9:45 am to 11:00 am Pacific Time. Registration options and the program can be found on the &lt;a href="https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-124-release-webinar/" rel="noopener noreferrer"&gt;event website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kubernetes 1.24 is available for &lt;a href="https://github.com/kubernetes/kubernetes/releases/tag/v1.24.0" rel="noopener noreferrer"&gt;download&lt;/a&gt; on GitHub. There is also an &lt;a href="https://kubernetes.io/docs/tutorials/" rel="noopener noreferrer"&gt;interactive tutorial&lt;/a&gt; for getting started with Kubernetes. All release information is available in the Kubernetes &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/" rel="noopener noreferrer"&gt;Release Announcement.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl8ig4w6jpd7iclj55sv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl8ig4w6jpd7iclj55sv.gif" width="245" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this blog post, consider following me for more interesting guides and tutorials.&lt;/em&gt;🙂 &lt;strong&gt;&lt;em&gt;Also, don’t forget to buy me a coffee&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.buymeacoffee.com/talhakhalid101" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;right here.&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>news</category>
    </item>
    <item>
<title>5 Cool React Libraries You Should Know [not the usual ones]</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Fri, 19 Nov 2021 20:21:03 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/5-cool-react-libraries-you-should-know-not-the-usual-ones-3kdc</link>
      <guid>https://dev.to/talhakhalid101/5-cool-react-libraries-you-should-know-not-the-usual-ones-3kdc</guid>
      <description>&lt;p&gt;After several weeks of writings about &lt;a href="https://medium.com/@talhakhalid101" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, I want to write a short post about React, about libraries, because man can not live only on Kubernetes. &lt;/p&gt;

&lt;p&gt;These are some of the libraries that I consider most useful and cool in React. For obvious reasons, React-router, Redux, and other well-known ones are excluded. As well as some React Frameworks such as Gatsby, Nextjs, Frontity, and others.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ant Design&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ant Design is beautiful; there is not much that text alone can convey. It has tons of components that are visually pleasing and very stylish: buttons, sliders, progress bars, layouts, you know, the basics. Make sure to visit their &lt;a href="https://ant.design/" rel="noopener noreferrer"&gt;site&lt;/a&gt; and see for yourself all that Ant Design has to offer, once you are done reading this post, of course.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ijt83n3c2q4swv2nzz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ijt83n3c2q4swv2nzz.gif" width="715" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Formik&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Formik is a great library that makes working with forms simple and scalable. It lets you control fields, create validations, reset the form, set a status, and handle errors, all with a few lines of code: you define an object containing each field's properties and validations, and Formik takes care of almost everything else.&lt;/p&gt;

&lt;p&gt;Note the validation schema: an object called &lt;em&gt;validationSchema&lt;/em&gt; whose keys are the field names and whose values are validation functions chained together to carry out the validation. There are functions like &lt;em&gt;min()&lt;/em&gt;, &lt;em&gt;max()&lt;/em&gt;, &lt;em&gt;oneOf()&lt;/em&gt;, and many others for almost any type of validation you require. I leave you the link to the sandbox from where I took this example.&lt;/p&gt;
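&lt;p&gt;To make the chaining idea concrete, here is a minimal plain-JavaScript sketch of a chainable validator. This is a hypothetical illustration of the pattern, not the real Formik/Yup API; the names &lt;em&gt;validator&lt;/em&gt;, &lt;em&gt;required()&lt;/em&gt;, and &lt;em&gt;min()&lt;/em&gt; are invented for the example.&lt;/p&gt;

```javascript
// Hypothetical sketch of chained validation functions (not the real
// Formik/Yup API): each rule pushes a check and returns the same
// object, so calls can be concatenated.
function validator() {
  const checks = [];
  const v = {
    required(msg) {
      checks.push(function (value) {
        if (value === undefined || value === '') return msg;
        return null;
      });
      return v;
    },
    min(n, msg) {
      checks.push(function (value) {
        if (String(value).length >= n) return null;
        return msg;
      });
      return v;
    },
    // Run every check; return the first error message, or null if valid.
    validate(value) {
      for (const check of checks) {
        const err = check(value);
        if (err) return err;
      }
      return null;
    },
  };
  return v;
}

// A schema maps field names to chained validators, as described above.
const validationSchema = {
  name: validator().required('Required').min(3, 'Too short'),
};
```

&lt;p&gt;For example, &lt;em&gt;validationSchema.name.validate('Jo')&lt;/em&gt; yields the 'Too short' message, while a valid value yields null.&lt;/p&gt;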

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;React query&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every time an API request is made, there is code that is repeated: make the request, show an element indicating that content is loading, receive the error or success status, and save it to state. Does that sound familiar?&lt;br&gt;
React Query takes care of reducing all that repetitive code around handling web requests, providing a special hook from which we can destructure variables that make handling the response easier.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezgz7l927mca28tgv9u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezgz7l927mca28tgv9u1.png" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;
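&lt;p&gt;The lifecycle React Query manages can be sketched in plain JavaScript. The helper below is a hypothetical illustration of the loading/success/error states, not the react-query API itself (whose hook is &lt;em&gt;useQuery&lt;/em&gt;).&lt;/p&gt;

```javascript
// Hypothetical sketch of the request lifecycle a data-fetching hook
// manages for you: loading, then success with data or error.
async function runQuery(fetcher) {
  const state = { status: 'loading', data: null, error: null };
  try {
    state.data = await fetcher();
    state.status = 'success';
  } catch (err) {
    state.error = err;
    state.status = 'error';
  }
  return state;
}
```

&lt;p&gt;Destructuring the result, as in &lt;em&gt;const { status, data, error } = await runQuery(fetchTodos)&lt;/em&gt;, mirrors how you destructure the variables the hook provides.&lt;/p&gt;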

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;React-icons-kit&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes it is quite annoying to take care of the graphical part of a web page. There are icons everywhere, but you have to hunt for them, and sometimes one icon pack does not have all the icons we need, so we end up combining several. An excellent solution to these problems is &lt;a href="https://react-icons-kit.now.sh/" rel="noopener noreferrer"&gt;React-icons-kit&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr86rbydpgiht2hw2109.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr86rbydpgiht2hw2109.gif" width="735" height="420"&gt;&lt;/a&gt;&lt;br&gt;
Before using it, remember to check the license of the icons you decide to use, because not all licenses are equally permissive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The minimalist React: Preact&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Preact is React: the same functions, well, not all of them, but the most common ones, all in just 3 KB. Preact promises to be much faster and lighter than its counterpart, as it uses the browser's native addEventListener instead of React's synthetic event system. It also has exclusive features that you can't find in React. This library is ideal for applications where performance is a critical factor.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75534hutf3q9nit1uzsk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75534hutf3q9nit1uzsk.jpg" width="800" height="679"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read more differences between React and Preact on their &lt;a href="https://preactjs.com/guide/v10/differences-to-react/" rel="noopener noreferrer"&gt;official page&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;Here's a Bonus!&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;React Virtualized&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;React Virtualized takes care of solving a problem that seems pretty simple at first: rendering lists and information that can be tabulated. Only that? Well, yes, but rendering a list with a few items wouldn't be a problem, would it? The strength of React Virtualized is not rendering small lists but large ones, with more than 1,000 elements, with most of the usual problems already solved and tested.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72e34e67vjx3s91cjmbx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72e34e67vjx3s91cjmbx.gif" width="640" height="480"&gt;&lt;/a&gt;&lt;br&gt;
Visit the &lt;a href="https://bvaughn.github.io/react-virtualized/#/components/List" rel="noopener noreferrer"&gt;React Virtualized&lt;/a&gt; page to read the full documentation.&lt;/p&gt;
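&lt;p&gt;The core idea behind virtualization is simple arithmetic: from the scroll position, compute which rows are visible and render only those. A minimal sketch, assuming fixed-height rows (the helper name is invented, not the react-virtualized API):&lt;/p&gt;

```javascript
// Hypothetical windowing helper: compute the slice of rows visible in
// the viewport, so only those get rendered (fixed row height assumed).
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount) {
  const first = Math.floor(scrollTop / rowHeight);
  // One extra row covers partially visible rows at the bottom edge.
  const visible = Math.ceil(viewportHeight / rowHeight) + 1;
  const last = Math.min(rowCount - 1, first + visible - 1);
  return { first, last };
}
```

&lt;p&gt;With a 300px viewport and 30px rows, scrolling to 450px renders only rows 15 through 25 out of, say, 10,000.&lt;/p&gt;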

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you liked this blog post, consider following me for more such stuff. Also, feel free to add your thoughts!🙂Or you can buy me a coffee &lt;a href="https://www.buymeacoffee.com/talhakhalid101" rel="noopener noreferrer"&gt;&lt;strong&gt;right here&lt;/strong&gt;&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftug7y5k4fpqjys6hknd9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftug7y5k4fpqjys6hknd9.gif" width="480" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Understand unit tests once and for all [In less than 3 min]</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Sat, 13 Nov 2021 22:37:20 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/understand-unit-tests-once-and-for-all-in-less-than-3-min-5bpm</link>
      <guid>https://dev.to/talhakhalid101/understand-unit-tests-once-and-for-all-in-less-than-3-min-5bpm</guid>
      <description>&lt;p&gt;One day, browsing through software development communities I came across the following image:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcvvps8oxay10igz2ho2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcvvps8oxay10igz2ho2.png" alt="TestPyramid" width="500" height="275"&gt;&lt;/a&gt;&lt;br&gt;
I confess that the first time I didn't understand what the image was about, but I was sure that a cheap rabbit was better than an expensive turtle.&lt;/p&gt;

&lt;p&gt;Time passed, and I finally learned what unit tests are and what this cheap rabbit represents, and I would like to help you understand too.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But what are unit tests?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A unit test is basically testing the smallest testable part of a program.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Okay, but what does that mean?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you program in a language that supports the functional paradigm, for example, the smallest testable part of your code would be a function, so a unit test is the test of a single function. In object-oriented code, it would be the test of one of your object's methods.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Well… as everything is easier to understand in practice, let's go to the code!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the example below, we have a function that adds two numbers and returns the sum value.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function sum(a, b){&lt;br&gt;
  return a + b;&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;To test this code, all we need to do is run the function and check if its output value is what we expect.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;var result = sum(1, 2);&lt;br&gt;
expect(result).to.equal(3);&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Ready! We already have our first unit test. Pretty easy, isn't it?&lt;br&gt;
Well, now that you understand what unit tests are and how they are written, you might be asking yourself:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are unit tests for?&lt;/strong&gt;&lt;br&gt;
Asking what unit tests, or any other automated tests, are for is a great question; after all, there seem to be faster ways to check that a function is doing what it should. I could just run the code and see that it works.&lt;/p&gt;

&lt;p&gt;So why am I going to write other code to test my code? What makes sure the second code works? Who tests the test?&lt;/p&gt;

&lt;p&gt;Unit tests, as well as any automated tests, are not primarily for verifying that a specific function is working, but rather to ensure that your application continues to work after any changes to your code base.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why write unit tests?&lt;/strong&gt;&lt;br&gt;
It may seem tempting at first not to write tests for a function you have just developed; after all, it is common to write more code to test a function than the function itself contains. But remember that you will spend most of your development time on a system maintaining it.&lt;/p&gt;

&lt;p&gt;Your application will soon have a few hundred functions running, many of them calling each other; your code base gets huge, and it soon becomes humanly impossible to test manually after every change. Unit tests usually take just a few seconds to test your entire application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where to start?&lt;/strong&gt;&lt;br&gt;
There are several unit testing tools for each programming language. You can get started by reading the documentation for these tools from their examples.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Okay, but I still don't understand the cheap Rabbit and Expensive Turtle thing&lt;/strong&gt;&lt;br&gt;
I won't close this post without first explaining what the test pyramid means.&lt;/p&gt;

&lt;p&gt;It's a concept developed by Mike Cohn that says you should have many more unit tests than GUI tests, which are user-level tests, such as clicking a link.&lt;/p&gt;

&lt;p&gt;The Pyramid Concept explains how costly GUI tests are compared to unit tests, as they take much longer to run and are also difficult to maintain, while unit tests are much simpler, faster and cheaper.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this blog post, consider following me for more such stuff. Also, feel free to add your thoughts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Diagnosing Why Your Kubernetes Pod Won’t Start</title>
      <dc:creator>TalhaKhalid101</dc:creator>
      <pubDate>Wed, 10 Nov 2021 15:10:55 +0000</pubDate>
      <link>https://dev.to/talhakhalid101/header-diagnosing-why-your-kubernetes-pod-wont-start-opn</link>
      <guid>https://dev.to/talhakhalid101/header-diagnosing-why-your-kubernetes-pod-wont-start-opn</guid>
      <description>&lt;p&gt;A pod is the smallest, most basic unit that Kubernetes handles. A pod consists of one or more containers that are connected to each other and share storage yet run independently. An application-specific pod model contains one or more containers that run on the same machine. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz0uuw6d6mkkfcguj4is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz0uuw6d6mkkfcguj4is.png" alt="Structure of a Pod" width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes manages things like automatically restarting a pod when it fails. However, it sometimes happens that a pod does not start when you first create it or when you update it with a new configuration. This can happen for a variety of reasons, so today I'll briefly look at why!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasons Your Kubernetes Pod Won’t Start
&lt;/h2&gt;

&lt;p&gt;A pod may not start or fail due to various reasons, some of which are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod Status is Pending&lt;/strong&gt;&lt;br&gt;
The pending status implies that the pod’s YAML file has been submitted to Kubernetes, and an API object has been created and saved. However, Kubernetes could not create some of the containers in this pod. A scheduling conflict resulted in the situation not working out. Using &lt;em&gt;kubect&lt;/em&gt; to describe the pod, you can see the current pod event and determine why it does not have a schedule. &lt;/p&gt;

&lt;p&gt;Here are a few possibilities: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The cluster nodes cannot meet the CPU, memory, GPU, and other resource requirements requested by the pod. &lt;/li&gt;
&lt;li&gt;The pod exposes a service port to the outside using HostPort, but that port is already in use on the node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Pod is in Waiting or ContainerCreating State&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This can occur due to one of the following issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The image address is set incorrectly, the remote image registry cannot be reached, the private image key is wrong, or the image is so large that the pull times out. &lt;/li&gt;
&lt;li&gt;Errors of this kind may result from a problem with the CNI network plug-in settings, such as an IP address that can’t be assigned or a pod network that cannot be configured.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the container doesn't start, make sure that the image is packaged correctly and that the container parameter settings are correct.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Kubelet&lt;/em&gt; log should show the cause of the error. Usually, it comes from a failed disk (input/output error) that prevents pod sandboxes from being created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ImagePullBackOff Error&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml7gfsi20t82zjxgazm2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml7gfsi20t82zjxgazm2.jpeg" alt="ImagePullBackOff log" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generally, the &lt;em&gt;&lt;a href="https://www.tutorialworks.com/kubernetes-imagepullbackoff/" rel="noopener noreferrer"&gt;ImagePullBackOff&lt;/a&gt;&lt;/em&gt; error occurs if the private image key configuration is incorrect. In this case, you can use &lt;em&gt;docker pull&lt;/em&gt; to verify that the image can be pulled normally.&lt;/p&gt;

&lt;p&gt;If the private image key is set incorrectly or not set, then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the docker-registry type Secret
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;# View the docker-registry type Secret&lt;br&gt;
$ kubectl get secrets my-secret -o yaml | grep 'dockerconfigjson:' | awk '{print $NF}' | base64 -d&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Secret of type docker-registry
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;# First create a docker-registry type Secret&lt;br&gt;
$ kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
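
&lt;p&gt;Once the Secret exists, the pod must reference it. An illustrative pod spec fragment (the image and container names are placeholders; &lt;em&gt;my-secret&lt;/em&gt; is the Secret created above):&lt;/p&gt;

```yaml
# Illustrative fragment: reference the pull secret from the pod spec.
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder private image
  imagePullSecrets:
    - name: my-secret                       # the Secret created above
```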

&lt;p&gt;&lt;strong&gt;CrashLoopBackOff Error&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;CrashLoopBackOff&lt;/em&gt; error occurs when a pod is running, but one of its containers restarts due to termination. This basically means that the pod was not able to run and crashed, so Kubernetes restarted it. Unfortunately, thereafter the pod crashed again and was restarted again, forming an endless loop. &lt;/p&gt;

&lt;p&gt;This can be caused by a deployment error, a liveness probe misconfiguration, or an init configuration error. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By configuring and deploying Kubernetes correctly, you can solve this error quickly. &lt;/li&gt;
&lt;li&gt;You can temporarily override the container’s command with a blocking one (for example, &lt;em&gt;sleep&lt;/em&gt;) in a separate deployment to keep the pod alive while you debug.&lt;/li&gt;
&lt;li&gt;Any method that involves &lt;a href="https://komodor.com/learn/how-to-fix-crashloopbackoff-kubernetes-error/" rel="noopener noreferrer"&gt;back-off restarting failed container&lt;/a&gt; should help resolve this error as well.&lt;/li&gt;
&lt;/ul&gt;
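
&lt;p&gt;Since a misconfigured liveness probe is a common cause, it is worth double-checking its thresholds. An illustrative probe on a container spec, where the path, port, and timings are placeholders you must adapt:&lt;/p&gt;

```yaml
# Illustrative liveness probe; all values are example placeholders.
livenessProbe:
  httpGet:
    path: /healthz         # an endpoint your app actually serves
    port: 8080
  initialDelaySeconds: 10  # give the app time to start before probing
  periodSeconds: 5
  failureThreshold: 3      # restart only after 3 consecutive failures
```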

&lt;p&gt;&lt;strong&gt;Pod Is in Error State&lt;/strong&gt;&lt;br&gt;
In most cases, this error status indicates that the pod is having an error during the startup process. Common causes of the error state include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A dependent &lt;em&gt;ConfigMap&lt;/em&gt;, Secret, or PV does not exist, or the requested resource exceeds the limit established by the administrator, such as exceeding a &lt;em&gt;LimitRange&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is a cluster security policy violation, such as a &lt;em&gt;PodSecurityPolicy&lt;/em&gt; violation, or the container does not have the right to operate on resources in the cluster, for example under &lt;em&gt;&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;RBAC&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Pod Is in a Terminated or Unknown State&lt;/strong&gt;&lt;br&gt;
When a node fails, Kubernetes does not remove the pods on its own but marks them as terminated or unknown. The following three methods can be used to remove pods from these states:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take the node off the cluster: if a VM is removed, the kube-controller-manager will remove the corresponding node object automatically. &lt;/li&gt;
&lt;li&gt;In a cluster installed on physical machines, a physical node must be removed manually (using &lt;em&gt;kubectl delete node&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Node activity returns to normal: the &lt;em&gt;Kubelet&lt;/em&gt; will contact the &lt;em&gt;kube-apiserver&lt;/em&gt; from time to time to confirm the expected status of these pods and determine whether they should be removed or allowed to continue running. If a pod stays stuck, its &lt;em&gt;podSpec&lt;/em&gt; YAML content may be wrong; you can try validating the manifest and recreating the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
A Kubernetes pod can fail due to improper configuration or due to a problem in the code. Quite often, it’s the former, and if you know how to detect the error, resolving it becomes much easier.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
