<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oluwatosin Obatoyinbo</title>
    <description>The latest articles on DEV Community by Oluwatosin Obatoyinbo (@dekingsa).</description>
    <link>https://dev.to/dekingsa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1320216%2F010ef63a-4ac5-4b8f-8ed3-f4e247ba7f61.png</url>
      <title>DEV Community: Oluwatosin Obatoyinbo</title>
      <link>https://dev.to/dekingsa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dekingsa"/>
    <language>en</language>
    <item>
      <title>From HDDs to Hypercars: How Technology is Breaking Through Bottlenecks</title>
      <dc:creator>Oluwatosin Obatoyinbo</dc:creator>
      <pubDate>Thu, 18 Apr 2024 11:49:26 +0000</pubDate>
      <link>https://dev.to/dekingsa/from-hdds-to-hypercars-how-technology-is-breaking-through-bottlenecks-1h2f</link>
      <guid>https://dev.to/dekingsa/from-hdds-to-hypercars-how-technology-is-breaking-through-bottlenecks-1h2f</guid>
      <description>&lt;p&gt;Just a few years ago, 10,000 IOPS (Input/Output Operations Per Second) seemed like the pinnacle of storage system performance. Back then, the whirring of hard disk drives (HDD) was a familiar sound. Solid-state drives (SSDs) offered a leap forward in read/write speeds and lower power consumption, but they were held back by a bottleneck: the disk access protocol. The SATA protocol, designed for the sequential access patterns of HDDs, couldn't unleash the full potential of SSDs.&lt;/p&gt;

&lt;p&gt;The introduction of the Non-Volatile Memory Express (NVMe) protocol revolutionised storage performance. NVMe leverages the parallel processing capabilities of SSDs to deliver significantly lower latency and much higher bandwidth. By supporting tens of thousands of parallel commands to the storage device, NVMe unlocks unprecedented speed and throughput. Additionally, it takes advantage of multi-core processors, faster memory designs, and improved data access commands. This development is instrumental in advancements within software engineering fields like Artificial Intelligence (AI) and Machine Learning (ML), where high-performance storage is critical.&lt;/p&gt;
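&lt;p&gt;The effect of those parallel command queues can be sketched with a toy throughput model (hypothetical figures; the queue limits are the ones in the AHCI/SATA and NVMe specifications):&lt;/p&gt;

```python
# Toy model with hypothetical figures: a device that completes each
# command in `latency_s` seconds can, at best, finish one batch of
# in-flight commands per latency window.

def max_iops(latency_s, queue_depth):
    """Upper bound on commands completed per second."""
    return queue_depth / latency_s

# AHCI/SATA allows a single queue of 32 commands.
sata_bound = max_iops(0.0001, 32)

# NVMe allows up to 65,535 queues of up to 65,536 commands each;
# even a modest 10,000 commands in flight dwarfs the SATA ceiling.
nvme_bound = max_iops(0.0001, 10_000)

print(round(sata_bound), round(nvme_bound))
```

&lt;p&gt;Real devices saturate on internal bandwidth long before these bounds, but the model shows why lifting the queue limit, rather than shaving per-command latency, was the unlocking step.&lt;/p&gt;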

&lt;p&gt;Just like storage systems, cars are breaking free from the limitations imposed by legacy design methods. Recent electric car launches by EV companies showcase this trend. These vehicles boast performance metrics that shatter previous limits, pushing the boundaries of speed and efficiency, and outclassing comparable gasoline-powered models in the process.&lt;/p&gt;

&lt;p&gt;Imagine a world where cars safely move at the speed of light! New technologies may one day push us closer to that dream.&lt;/p&gt;

</description>
      <category>technology</category>
      <category>storage</category>
      <category>ai</category>
      <category>performance</category>
    </item>
    <item>
      <title>Core Count vs. Clock Speed: The Silent Cost Factor in Software Licensing</title>
      <dc:creator>Oluwatosin Obatoyinbo</dc:creator>
      <pubDate>Sun, 31 Mar 2024 22:51:24 +0000</pubDate>
      <link>https://dev.to/dekingsa/core-count-vs-clock-speed-the-silent-cost-factor-in-software-licensing-3gb9</link>
      <guid>https://dev.to/dekingsa/core-count-vs-clock-speed-the-silent-cost-factor-in-software-licensing-3gb9</guid>
      <description>&lt;p&gt;Software licensing models are undergoing a dramatic shift, and one metric is quietly impacting your bottom line: core count. Traditional licensing based on sockets (the physical package housing the processor) is giving way to per-core pricing, potentially increasing software costs for businesses with outdated infrastructure. Understanding the difference between core count and clock speed is crucial in navigating this new landscape.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the CPU: A Factory Analogy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a CPU (Central Processing Unit) as a factory. The core count represents the number of assembly lines within the factory, while clock speed signifies the speed at which each line can complete tasks. More cores allow you to handle multiple tasks simultaneously, while a higher clock speed translates to faster individual task completion.&lt;/p&gt;
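&lt;p&gt;The analogy can be put in numbers (a minimal sketch with made-up figures):&lt;/p&gt;

```python
# Factory analogy in numbers (made-up figures): aggregate throughput
# scales with cores, while a single task's completion time scales only
# with per-core speed (clock).

def total_rate(cores, tasks_per_core_per_sec):
    """Tasks the whole 'factory' can finish per second."""
    return cores * tasks_per_core_per_sec

def single_task_seconds(task_units, units_per_sec):
    """Time for one unparallelisable task on one core."""
    return task_units / units_per_sec

# Many slower cores match fewer faster cores on aggregate throughput...
assert total_rate(16, 2.0) == total_rate(8, 4.0)

# ...but the faster core finishes an individual task in half the time.
print(single_task_seconds(8, 2.0), single_task_seconds(8, 4.0))
```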

&lt;p&gt;&lt;strong&gt;Why Core Count Matters Now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many software vendors are transitioning to per-core licensing models. This seemingly minor change can lead to significant cost increases if businesses are not careful. Companies relying on older infrastructure with a high core count but lower clock speed could face substantial jumps in software licensing fees.&lt;/p&gt;

&lt;p&gt;For example, a database management system might be licensed based on core count. Running such software on an infrastructure with numerous cores but a slower clock speed might result in 20% higher licensing costs compared to a system with fewer cores but a faster clock speed (depending on the specific processor type).&lt;/p&gt;
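&lt;p&gt;A minimal sketch of that comparison, with hypothetical prices and core counts (the 20% figure depends entirely on the numbers you plug in):&lt;/p&gt;

```python
# Hypothetical sketch: annual licence bill under per-core pricing for
# two estates that deliver comparable work.

def annual_license_cost(servers, cores_per_server, price_per_core):
    return servers * cores_per_server * price_per_core

PRICE_PER_CORE = 1_000  # hypothetical annual price per core, in dollars

# Older estate: more, slower cores.
many_slow_cores = annual_license_cost(4, 30, PRICE_PER_CORE)
# Newer estate: fewer, faster cores.
few_fast_cores = annual_license_cost(4, 25, PRICE_PER_CORE)

increase = (many_slow_cores - few_fast_cores) / few_fast_cores
print(f"{increase:.0%}")  # 20%
```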

&lt;p&gt;&lt;strong&gt;Optimizing Costs in the New Licensing Landscape&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Understanding the interplay between core count and clock speed empowers businesses to navigate the changing software licensing landscape. Here are a few strategies to consider:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Optimization&lt;/strong&gt;: Analyze current workloads and consolidate virtual machines to minimize core usage, reducing the number of cores your software licenses need to cover.&lt;br&gt;
&lt;strong&gt;Licensing Model Evaluation&lt;/strong&gt;: Explore alternative licensing models, such as subscriptions, or negotiate with vendors to find cost-effective options.&lt;br&gt;
&lt;strong&gt;Staying Informed&lt;/strong&gt;: Proactively track changes in software providers' licensing models to avoid unexpected cost increases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By grasping the relationship between core count and clock speed, and how they impact software licensing costs, businesses can make informed decisions about their IT infrastructure. This knowledge empowers them to optimize costs, improve performance, and maintain a competitive edge as software licensing models continue to evolve.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>aws</category>
      <category>database</category>
    </item>
    <item>
      <title>Don't Run Blind, Check Under The Hood</title>
      <dc:creator>Oluwatosin Obatoyinbo</dc:creator>
      <pubDate>Mon, 18 Mar 2024 19:41:27 +0000</pubDate>
      <link>https://dev.to/dekingsa/dont-run-blind-check-under-the-hood-3nak</link>
      <guid>https://dev.to/dekingsa/dont-run-blind-check-under-the-hood-3nak</guid>
      <description>&lt;p&gt;The importance of observability in every modern software development effort cannot be overemphasized. This new rule of engagement is as important as the software development endeavor itself. Checking under the hood is key in every performance optimization drive. Being able to tune software services with the right insights into the performance metrics and the triggers of those numbers is as critical as the software engineering efforts themselves.&lt;/p&gt;

&lt;p&gt;This belief was further strengthened when we recently embarked on an audacious attempt to modernize a couple of services that, until last week, powered the funds transfer capability of a financial institution. The legacy setup consisted of a proprietary solution on IBM infrastructure, which handled about 50% of the organisation's 5M+ transactions per day, and some monolithic services built in Java and distributed across six fairly well-resourced Linux servers (VMs) for performance and resilience.&lt;/p&gt;

&lt;p&gt;Our goal was to collapse all of this into microservices and add new features that had become critical to the business, driven by a new digital technology product recently introduced into the market. The product was gaining wide acceptance in the industry, and the business needed to move fast to close out its existing limitations.&lt;/p&gt;

&lt;p&gt;In total, 11 microservices were designed and developed to be the new heart of the funds transfer feature for the financial institution's digital channels (many thanks to the brilliance of Emmanuela Ike, Uchechi Obeme, Michael Akinrinmade, Ayodele Ayetigbo MBA). The first deployment targeted the USSD channel alone, ahead of the introduction of traffic from the mobile app channel, which carries far greater transaction volume and value. The first deployment seemed largely successful, as we didn’t receive any major complaints from customers; then again, it was USSD, where users hardly ever see red error messages. A seemingly successful month-long pilot without major customer complaints encouraged us to push for a full-scale deployment. Alas, we were mistaken, and very mistaken at that.&lt;/p&gt;

&lt;p&gt;The introduction of mobile traffic failed within a few hours as several performance issues were revealed. Although we had a few guesses as to what went wrong, one of the most brilliant DevOps engineers you can find, Azeta Spiff, had earlier highlighted the need to implement Application Performance Monitoring (APM) from the Elastic Stack to support post-implementation management of the services. His brilliant idea, and this beautiful tool, provided the observability metrics that were key to unearthing bottlenecks mostly outside the realm of coding: contention at the database layer, failures of external dependencies and, of course, some services that required rework for optimisation.&lt;/p&gt;

&lt;p&gt;Armed with the insights from APM, we were able to deal with the reported issues methodically. In one finding, we leveraged database partitioning to shave over 15 seconds off the response time of a database query; although the query had a low cost, it still performed badly in our production environment. In another instance, we had to redesign the autoscaling metrics on our Kubernetes cluster to give the services the resources required to reach cruising altitude.&lt;/p&gt;

&lt;p&gt;Our target was, of course, to achieve an average latency below 500 ms for most of the services, especially where dependencies outside the organisation's network were not involved. The past two weeks of putting the performance icing on what has been a very audacious adventure in software engineering were rewarding.&lt;/p&gt;

&lt;p&gt;Over the weekend of March 15, 2024, we successfully rolled out the mobile app traffic again, and this time we couldn’t have been prouder of the success recorded. From the APM insights, we could see success rates above 99% on average across the microservices, with average latency for transaction completion standing at less than 3 seconds (inclusive of calls to dependencies outside the organisation's network, which accounted for about 2 seconds). This is a huge performance optimisation, not to mention the cost savings the organisation stands to derive from the new implementation, as resource utilisation by the new services has been a fraction of that of the legacy capabilities.&lt;/p&gt;
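&lt;p&gt;The kind of per-service summary an APM surfaces can be sketched from raw transaction samples (made-up data here; real agents collect and aggregate this automatically):&lt;/p&gt;

```python
# Made-up data: (latency in ms, succeeded?) samples for one service,
# summarised the way an APM dashboard would present them.
from statistics import mean

samples = [(420, True), (380, True), (510, True), (450, False),
           (395, True), (430, True), (488, True), (402, True)]

latencies = sorted(ms for ms, _ in samples)
success_rate = sum(1 for _, ok in samples if ok) / len(samples)
p95 = latencies[int(0.95 * (len(latencies) - 1))]

print(f"avg={mean(latencies):.0f}ms p95={p95}ms success={success_rate:.1%}")
```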

&lt;p&gt;Elastic APM (open-source software) is a fantastic tool for observability and will save you the time and money otherwise spent groping in the dark when faced with performance-related issues. Being open source makes it even cooler, as there is a large community behind its support, and we can all add insights from our environments to make it more robust. Every software engineer and technology shop wishing to deploy a performant system, particularly with microservices, must place the right value on observability, and the Elastic Stack is making that journey seamless and affordable. So why run blind when you can easily check under the hood with APM?&lt;/p&gt;

</description>
      <category>observability</category>
      <category>opensource</category>
      <category>elasticsearch</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Advantage Kubernetes </title>
      <dc:creator>Oluwatosin Obatoyinbo</dc:creator>
      <pubDate>Tue, 05 Mar 2024 08:19:12 +0000</pubDate>
      <link>https://dev.to/dekingsa/advantage-kubernetes-kba</link>
      <guid>https://dev.to/dekingsa/advantage-kubernetes-kba</guid>
      <description>&lt;p&gt;The recent acquisition of VMware and the implementation of new policies and operating models, particularly regarding pricing and licensing, are causing headaches for many CTOs and CIOs.&lt;/p&gt;

&lt;p&gt;Historically, virtualization using VMware has been a cost-effective way to manage compute infrastructure. It ensures efficient utilization of compute resources, maximizes idle compute on servers, and provides workload isolation even on the same bare-metal infrastructure. However, the introduction of a revised pricing model may overshadow these benefits.  &lt;/p&gt;

&lt;p&gt;According to VMware, the simplification of their portfolio will enable customers to extract more value from their investment and facilitate the delivery of new innovations. They claim the new licensing model aligns with industry trends and offers several benefits. However, industry experts have raised concerns about long-term customer costs, and many organizations that don't require the full suite of offerings are struggling to sell the new pricing model to their business owners.  &lt;/p&gt;

&lt;p&gt;Mark Thaver, the CEO and Founder of Licensing Data Solutions, identifies four major perspectives on VMware's recent changes:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The transformation is beneficial for customers fully committed to utilizing VMware but disadvantages those who don't employ the complete suite of products.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The implementation of product bundling is expected to lead to a price increase of two to five times for all customers. Acquiring these bundles may have financial implications, and some of the bundled products may be unwanted.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The bundling strategy could result in underutilisation of software, wasting resources on unused programs.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VMware aims to encourage customers to adopt more products from their suite. This may lead to changes in bundle offerings and the introduction of premium versions to generate additional revenue.  &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As a result of these changes, many organisations are reconsidering their use of VMware and exploring alternatives. While full cloud adoption doesn't seem to be favored by large organizations, recent reports suggest that some workloads perform better and are more affordable on-premises. Netflix, for example, found that on-premises setups were more suitable for certain workloads. This discovery resonates with other organizations, particularly in the Fintech sector.  &lt;/p&gt;

&lt;p&gt;While there are alternative options available, forward-looking organizations may find value in reviewing their modernization strategies. Microservices powered by native Kubernetes on bare metal offer one such option, allowing organizations, especially startups, to benefit from the best of both worlds.&lt;br&gt;
 &lt;br&gt;
&lt;strong&gt;Why Kubernetes?&lt;/strong&gt;&lt;br&gt;
 &lt;br&gt;
Kubernetes, also known as K8s, is open-source software for automating the deployment, scaling, and management of containerized applications. It facilitates the management of containers within a microservices architecture. It builds on more than 15 years of Google's experience running production workloads at scale, and the community is constantly working on improving it.  &lt;/p&gt;

&lt;p&gt;While some organizations are in the early stages of their modernization journey, others have already embraced microservices and containerization. However, some of these companies may have a flawed and expensive architecture: they host their Kubernetes clusters on hosts virtualized with VMware software.  &lt;/p&gt;

&lt;p&gt;Kubernetes on bare-metal infrastructure is an architecture that provides these organizations, as well as new entrants, with the option to remove an additional layer of expensive software licenses. It also eliminates the overhead in terms of compute introduced by the extra virtualization layer.  &lt;/p&gt;

&lt;p&gt;By implementing Kubernetes with this architecture, companies, especially start-ups, can save thousands of dollars in software license costs while achieving superior performance. The savings come from eliminating the virtualization software and the guest OS licenses on each VM. It also allows for efficient utilization of infrastructure compute by removing hypervisor overhead, and reduces maintenance costs, including labor.  &lt;/p&gt;

&lt;p&gt;Ericsson suggests that the total cost of ownership of Kubernetes is greatly reduced when the Kubernetes cluster is installed on bare-metal servers. The company estimates that, depending on the workload type and cluster configuration, 30 percent or more can be saved in the total cost of ownership by eliminating the extra hypervisor layer introduced by virtual machines.  &lt;/p&gt;
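&lt;p&gt;A back-of-the-envelope sketch of where such a saving can come from (all figures hypothetical, not Ericsson's):&lt;/p&gt;

```python
# Hypothetical figures: annual cost of a cluster with and without the
# hypervisor layer. Overhead means proportionally more servers are
# needed for the same usable compute.

def cluster_cost(servers, hw_cost_per_server, hv_license_per_server,
                 hypervisor_overhead):
    effective_servers = servers * (1 + hypervisor_overhead)
    return effective_servers * (hw_cost_per_server + hv_license_per_server)

virtualized = cluster_cost(10, 8_000, 2_500, hypervisor_overhead=0.10)
bare_metal = cluster_cost(10, 8_000, 0, hypervisor_overhead=0.0)

saving = (virtualized - bare_metal) / virtualized
print(f"{saving:.0%}")
```

&lt;p&gt;With these made-up inputs the licence line and the overhead line together land in the region of the 30 percent figure quoted above; the actual number depends entirely on workload and cluster configuration.&lt;/p&gt;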

&lt;p&gt;Installing Kubernetes directly on bare-metal removes various overheads and provides an additional advantage in terms of performance. CenturyLink found that network latency is three times lower for bare-metal Kubernetes. Furthermore, according to a Stratoscale study (which benchmarked the performance of standalone Docker containers, not containers running in a Kubernetes cluster), containers running on bare metal perform 25-30 percent better.  &lt;/p&gt;

&lt;p&gt;The management of Kubernetes clusters installed on bare-metal is greatly simplified, and the removal of the additional layer of abstraction in the hypervisor means there is one less point of failure or management bottleneck.  &lt;/p&gt;

&lt;p&gt;These well-researched benefits, combined with recent developments at VMware, make a strong case for adopting this new approach in virtualization and microservices. Many organizations can extract significant benefits and save on IT costs with just a minor tweak to their architecture. This new move clearly presents an advantage to Kubernetes.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>virtualization</category>
      <category>microservices</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
