<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rakesh Alex</title>
    <description>The latest articles on DEV Community by Rakesh Alex (@rakesh_alex_ca78836df15a7).</description>
    <link>https://dev.to/rakesh_alex_ca78836df15a7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3797163%2F1e81db91-4cc1-410a-9535-91f0a63320b6.jpg</url>
      <title>DEV Community: Rakesh Alex</title>
      <link>https://dev.to/rakesh_alex_ca78836df15a7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rakesh_alex_ca78836df15a7"/>
    <language>en</language>
    <item>
      <title>My Journey of Building a macOS-Style Portfolio Website</title>
      <dc:creator>Rakesh Alex</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:07:35 +0000</pubDate>
      <link>https://dev.to/rakesh_alex_ca78836df15a7/my-journey-of-building-a-macos-style-portfolio-website-15na</link>
      <guid>https://dev.to/rakesh_alex_ca78836df15a7/my-journey-of-building-a-macos-style-portfolio-website-15na</guid>
      <description>&lt;p&gt;As a developer, I always believed that a portfolio should reflect not just skills, but personality. Anyone can build a simple static website with a few sections and some animations. But I wanted something different — something interactive, creative, and technically challenging. That’s when I decided to build a macOS-style portfolio website.&lt;/p&gt;

&lt;p&gt;The idea came from my fascination with operating system interfaces. The macOS desktop experience feels clean, minimal, and interactive. Windows open and close smoothly, icons respond naturally, and everything feels structured yet dynamic. I wondered, “What if my portfolio behaved like an operating system instead of a regular website?” That thought became the starting point of this journey.&lt;/p&gt;

&lt;p&gt;At first, the concept felt exciting but overwhelming. Recreating a desktop-like environment in the browser meant thinking beyond traditional web layouts. Instead of pages, I needed draggable windows. Instead of simple navigation links, I needed interactive icons. Instead of static sections, I had to simulate an entire system experience using React.&lt;/p&gt;

&lt;p&gt;The foundation of the project was built using React, which allowed me to break the interface into reusable components. Each window — such as Photos, Finder, Contact, or Blog — became its own component wrapped inside a higher-order window manager. I had to manage states like open, close, focus, and z-index layering to replicate how real operating system windows behave. Handling window stacking order was especially challenging because it required careful state management to ensure the most recently clicked window appeared on top.&lt;/p&gt;

&lt;p&gt;One of the most interesting parts of the project was building a centralized window state management system. Instead of handling state locally in each component, I created a store to manage which windows were open and which one was currently focused. This helped maintain a clean architecture and avoid unnecessary re-renders. It also gave me deeper insight into how real UI systems manage multiple components dynamically.&lt;/p&gt;
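&lt;p&gt;The core of that store logic can be sketched language-agnostically. The outline below is illustrative Python, not my actual React/JavaScript store: a single stack tracks which windows are open, and the most recently focused window sits on top, which gives each window its z-index for free:&lt;/p&gt;

```python
# Minimal sketch of centralized window-state management: one ordered
# stack of open window ids, bottom to top. Focus moves a window to the
# top; its position in the stack doubles as its z-index.

class WindowManager:
    def __init__(self):
        self._stack = []  # open window ids, bottom to top

    def open(self, win_id):
        if win_id not in self._stack:
            self._stack.append(win_id)
        self.focus(win_id)

    def close(self, win_id):
        if win_id in self._stack:
            self._stack.remove(win_id)

    def focus(self, win_id):
        # Move the clicked window to the top of the stack.
        self._stack.remove(win_id)
        self._stack.append(win_id)

    def z_index(self, win_id):
        # Render layer: position in the stack.
        return self._stack.index(win_id)

    @property
    def focused(self):
        # The top of the stack is the focused window (None if empty).
        return self._stack[-1] if self._stack else None
```

&lt;p&gt;Deriving z-index from stack order, rather than storing it per window, is what keeps the "most recently clicked window on top" behavior consistent with no extra bookkeeping.&lt;/p&gt;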

&lt;p&gt;Animations played a major role in making the experience realistic. Smooth transitions when opening or closing windows were essential to achieving the macOS feel. I experimented with animation libraries and timing functions to ensure that interactions felt natural rather than robotic. Small details — like subtle scaling effects and opacity transitions — made a huge difference in the overall experience.&lt;/p&gt;

&lt;p&gt;Of course, the journey was not smooth. I encountered white screen errors due to small typos. I struggled with JSX syntax mistakes. There were moments when imports failed because of incorrect alias configurations. Debugging became part of the learning process. Instead of getting frustrated, I began to appreciate these problems because they forced me to understand how bundlers, modules, and React rendering actually work behind the scenes.&lt;/p&gt;

&lt;p&gt;Another challenge was maintaining performance. Since multiple windows could be open at the same time, I had to be mindful of unnecessary re-renders. This pushed me to learn more about component optimization and state isolation. The project gradually transformed from a simple portfolio idea into a deeper learning experience about frontend architecture and performance engineering.&lt;/p&gt;

&lt;p&gt;What made this journey meaningful was not just the final result, but the growth that came with it. I learned how to think in systems rather than pages. I understood how important component abstraction and state management are in large applications. I became more comfortable debugging complex UI behavior. Most importantly, I realized that building creative projects accelerates learning far more than following tutorials.&lt;/p&gt;

&lt;p&gt;The macOS-style portfolio is more than just a website for me. It represents experimentation, persistence, and curiosity. It shows my interest in blending design and engineering. It demonstrates that I enjoy challenging myself beyond standard layouts and templates.&lt;/p&gt;

&lt;p&gt;Looking back, what started as a creative idea turned into one of the most educational projects in my development journey. It strengthened my React fundamentals, improved my debugging skills, and gave me confidence to build complex interactive interfaces.&lt;/p&gt;

&lt;p&gt;In the end, this project taught me an important lesson: the best way to grow as a developer is to build something that excites you — something slightly beyond your current comfort zone. When passion meets persistence, even a portfolio can become a powerful learning experience.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>ui</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Fake News Detection Using Logistic Regression: A Machine Learning Approach to Fighting Misinformation</title>
      <dc:creator>Rakesh Alex</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:05:19 +0000</pubDate>
      <link>https://dev.to/rakesh_alex_ca78836df15a7/fake-news-detection-using-logistic-regression-a-machine-learning-approach-to-fighting-470j</link>
      <guid>https://dev.to/rakesh_alex_ca78836df15a7/fake-news-detection-using-logistic-regression-a-machine-learning-approach-to-fighting-470j</guid>
      <description>&lt;p&gt;In the digital age, information spreads at an unprecedented speed. Social media platforms, online news portals, and messaging applications allow news to travel across the globe within seconds. While this instant connectivity has transformed communication positively, it has also created a serious global challenge — the rapid spread of fake news. False or misleading information can influence public opinion, create unnecessary panic, damage reputations, and even impact political and economic stability. Because millions of articles, posts, and headlines are generated daily, manually verifying each piece of information is nearly impossible. This is where machine learning provides a powerful and scalable solution.&lt;/p&gt;

&lt;p&gt;Fake news detection can be viewed as a binary classification problem in which a machine learning model must determine whether a news article is real or fake. Among the many algorithms available for classification tasks, Logistic Regression remains one of the most reliable and interpretable methods. Although it is often considered a simple algorithm compared to modern deep learning models, Logistic Regression is highly effective when combined with proper text preprocessing and feature engineering techniques. Its strength lies in estimating probabilities and making decisions based on those probabilities, rather than simply memorizing patterns.&lt;/p&gt;

&lt;p&gt;The foundation of fake news detection begins with data. A labeled dataset containing both real and fake news articles is required to train the model. However, raw textual data cannot be directly used by machine learning algorithms. Text must first be transformed into numerical representations. This involves several preprocessing steps such as converting text to lowercase, removing punctuation, eliminating common stopwords, and tokenizing sentences into meaningful words. One of the most important steps in this process is feature extraction using TF-IDF (Term Frequency–Inverse Document Frequency). TF-IDF assigns weights to words based on their importance within a document relative to the entire dataset. Words that appear frequently in a single article but not across all articles are given higher significance, enabling the model to capture meaningful patterns in the text.&lt;/p&gt;

&lt;p&gt;Once the text is converted into numerical vectors, the dataset is divided into training and testing sets. The Logistic Regression model is trained on the training data to learn patterns that distinguish fake news from real news. It uses a sigmoid function to estimate the probability that a given article belongs to a particular class. If the predicted probability crosses a chosen threshold, typically 0.5, the article is classified accordingly. What makes Logistic Regression particularly appealing is its interpretability. By analyzing the model’s coefficients, it is possible to understand which words contribute positively or negatively toward predicting fake news. This transparency is valuable, especially in applications where accountability and explanation are important.&lt;/p&gt;

&lt;p&gt;After training, the model is evaluated using performance metrics such as accuracy, precision, recall, and F1 score. While accuracy gives an overall measure of correctness, precision and recall provide deeper insight into the model’s effectiveness. Precision measures how many articles predicted as fake are actually fake, while recall measures how many actual fake articles were successfully detected. In fake news detection, balancing these metrics is crucial because misclassifying real news as fake can damage credibility, while failing to detect fake news allows misinformation to spread.&lt;/p&gt;

&lt;p&gt;Although Logistic Regression performs well as a baseline model, the project also reveals certain challenges. Language is complex, and fake news often includes subtle misinformation rather than obvious false statements. Sarcasm, context, and emotionally charged language can sometimes mislead the model. Additionally, class imbalance in datasets can influence performance if not handled carefully. Despite these challenges, the results demonstrate that even a relatively simple machine learning algorithm can achieve strong predictive performance when supported by proper preprocessing and feature engineering.&lt;/p&gt;

&lt;p&gt;Ultimately, fake news detection using Logistic Regression highlights how classical machine learning techniques remain powerful tools in solving real-world problems. While advanced neural networks and transformer-based models dominate modern research, simpler models still provide efficiency, interpretability, and reliability. In a world increasingly affected by misinformation, developing automated systems to detect false content is not just a technical exercise but a social responsibility. This project reinforces the idea that meaningful impact does not always require complex solutions; sometimes, a well-implemented foundational algorithm can make a significant difference.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>algorithms</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Utility-Based Cache Partitioning: Making Shared Caches Smarter in Multi-Core Systems</title>
      <dc:creator>Rakesh Alex</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:01:16 +0000</pubDate>
      <link>https://dev.to/rakesh_alex_ca78836df15a7/utility-based-cache-partitioning-making-shared-caches-smarter-in-multi-core-systems-f3f</link>
      <guid>https://dev.to/rakesh_alex_ca78836df15a7/utility-based-cache-partitioning-making-shared-caches-smarter-in-multi-core-systems-f3f</guid>
      <description>&lt;p&gt;In modern computing systems, performance is no longer determined by a single powerful core. Instead, today’s processors rely on multiple cores working together to execute programs efficiently. While this design significantly improves overall throughput, it also introduces new challenges — especially when hardware resources are shared among cores.&lt;/p&gt;

&lt;p&gt;One such shared resource is the Last-Level Cache (LLC). The LLC plays a critical role in reducing memory access latency and improving processor performance. However, when multiple cores share the same cache, competition arises. This competition can lead to cache contention, unfair resource distribution, and performance degradation.&lt;/p&gt;

&lt;p&gt;This is where Utility-Based Cache Partitioning (UCP) becomes important.&lt;/p&gt;

&lt;h2&gt;The Problem with Shared Caches&lt;/h2&gt;

&lt;p&gt;In multi-core processors designed by companies like Intel and AMD, the last-level cache is shared across all cores. While sharing improves average efficiency, it does not guarantee fairness.&lt;/p&gt;

&lt;p&gt;Imagine two applications running simultaneously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One application is memory-intensive and constantly accesses the cache.&lt;/li&gt;
&lt;li&gt;The other application has moderate memory usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without intelligent management, the memory-intensive application may occupy most of the cache space. As a result, the second application experiences more cache misses, even if it would have performed better with just a small guaranteed portion of the cache.&lt;/p&gt;

&lt;p&gt;Traditional cache replacement policies, such as Least Recently Used (LRU), do not distinguish between cores. They treat all memory accesses equally. This can result in performance interference and unpredictable slowdowns.&lt;/p&gt;

&lt;h2&gt;Understanding Utility-Based Cache Partitioning&lt;/h2&gt;

&lt;p&gt;Utility-Based Cache Partitioning was proposed as a solution to this problem. The core idea behind UCP is simple but powerful:&lt;/p&gt;

&lt;p&gt;Instead of dividing the cache equally among cores, allocate cache space based on how much performance benefit each core gains from it.&lt;/p&gt;

&lt;p&gt;In other words, give more cache space to the core that actually benefits from it the most.&lt;/p&gt;

&lt;p&gt;UCP measures something called marginal utility — the reduction in cache misses when a core is given one additional cache way (one slot per set in a set-associative cache). If assigning extra cache space significantly reduces cache misses for a core, that core has high utility. If it does not make much difference, the utility is low.&lt;/p&gt;

&lt;p&gt;By continuously measuring this at runtime, UCP dynamically partitions the cache in a way that maximizes overall system performance.&lt;/p&gt;

&lt;h2&gt;Why This Approach Matters&lt;/h2&gt;

&lt;p&gt;This method is smarter than static partitioning. Instead of assuming all applications need equal resources, UCP adapts to workload behavior.&lt;/p&gt;

&lt;p&gt;The benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved overall throughput&lt;/li&gt;
&lt;li&gt;Reduced cache thrashing&lt;/li&gt;
&lt;li&gt;Better fairness between applications&lt;/li&gt;
&lt;li&gt;More predictable performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many experimental studies, utility-based partitioning demonstrated noticeable performance improvements in multi-program workloads.&lt;/p&gt;

&lt;p&gt;Modern processor resource management technologies, such as those introduced by Intel and ARM, reflect similar ideas of controlled resource allocation and quality-of-service mechanisms.&lt;/p&gt;

&lt;h2&gt;Connection to Other Microarchitectural Components&lt;/h2&gt;

&lt;p&gt;Cache performance does not exist in isolation. It directly impacts other microarchitectural components such as branch predictors, instruction fetch units, and pipeline efficiency.&lt;/p&gt;

&lt;p&gt;In systems implementing predictors like bimodal, gshare, or perceptron-based models, cache interference can affect instruction locality and predictor training stability. By reducing shared cache contention, utility-based partitioning indirectly enhances overall execution efficiency.&lt;/p&gt;

&lt;p&gt;This makes UCP particularly relevant in advanced processor design and performance engineering research.&lt;/p&gt;

&lt;h2&gt;Challenges and Considerations&lt;/h2&gt;

&lt;p&gt;Although Utility-Based Cache Partitioning improves performance, it is not free of trade-offs. It requires additional hardware structures to monitor cache behavior and calculate utility. This increases design complexity and hardware overhead.&lt;/p&gt;

&lt;p&gt;Engineers must balance the benefits of dynamic partitioning with implementation cost. Nevertheless, the long-term gains in fairness and performance often justify the approach.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Utility-Based Cache Partitioning represents a thoughtful shift in how shared hardware resources are managed in multi-core systems. Rather than distributing cache space evenly or relying purely on traditional replacement policies, UCP introduces intelligence into allocation decisions.&lt;/p&gt;

&lt;p&gt;By measuring actual performance benefit and adapting dynamically, it ensures that cache space is used where it matters most.&lt;/p&gt;

&lt;p&gt;As processors continue to scale and workloads become more diverse, intelligent resource management techniques like UCP will play an increasingly important role in achieving high performance and system stability.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>computerscience</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
