<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: snehaup1997</title>
    <description>The latest articles on DEV Community by snehaup1997 (@snehaup1997).</description>
    <link>https://dev.to/snehaup1997</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/snehaup1997"/>
    <language>en</language>
    <item>
      <title>Mutlimodal AI</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Mon, 13 Oct 2025 14:21:00 +0000</pubDate>
      <link>https://dev.to/snehaup1997/mutlimodal-ai-2ag8</link>
      <guid>https://dev.to/snehaup1997/mutlimodal-ai-2ag8</guid>
      <description></description>
    </item>
    <item>
      <title>Prompt Engineering vs Prompt Tuning</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Thu, 18 Sep 2025 07:48:58 +0000</pubDate>
      <link>https://dev.to/snehaup1997/prompt-engineering-vs-prompt-tuning-where-does-the-real-power-lie-2d97</link>
      <guid>https://dev.to/snehaup1997/prompt-engineering-vs-prompt-tuning-where-does-the-real-power-lie-2d97</guid>
      <description>&lt;p&gt;We’re living in the era of &lt;strong&gt;large language models (LLMs)&lt;/strong&gt; — and the way we interact with AI has completely changed. Instead of writing algorithms line by line, we now &lt;em&gt;talk&lt;/em&gt; to our machines. We guide them, not through code, but through &lt;strong&gt;prompts&lt;/strong&gt; — simple lines of text that can unlock incredibly complex behaviors.&lt;/p&gt;

&lt;p&gt;It’s a bit like having a conversation with intelligence itself. Whether you’re a machine learning engineer building custom tools or an experimenting data scientist, you’re already shaping how AI thinks and responds through your choice of words. This process, known as prompt design, is quickly becoming one of the most important skills in modern AI development.&lt;/p&gt;

&lt;p&gt;As this space evolves, two main approaches to customizing LLMs have taken the spotlight: &lt;strong&gt;Prompt Engineering&lt;/strong&gt; and &lt;strong&gt;Prompt Tuning&lt;/strong&gt;. Both aim to get the best out of AI models — to make them faster, smarter, or more reliable — but they work in very different ways. Prompt Engineering is about crafting better instructions, while Prompt Tuning goes under the hood, adjusting how the model itself interprets those instructions.&lt;/p&gt;

&lt;p&gt;So that brings us to the big question: &lt;strong&gt;where does the real power lie?&lt;/strong&gt; Is it in the creativity of the prompt, or in the precision of the tuning?&lt;/p&gt;

&lt;p&gt;Let’s dig a little deeper and break down the basics before we decide.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Prompt Engineering — the art of conversation&lt;/strong&gt;; is all about how you talk to the model. Think of it as crafting the perfect question or instruction to get the answer you want. It’s less about code and more about communication. It's the art (and science) of writing effective instructions, examples, and contextual clues in natural language. &lt;/p&gt;

&lt;p&gt;A good prompt can completely change the outcome. A vague prompt might leave the model confused, while a well-structured one can lead to precise, insightful, or even creative results. You can think of it as teaching by example — you show the model what kind of response you expect through tone, structure, and context. For instance,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ Prompt: Summarize this article in 2 sentences. Be concise but cover key facts. &lt;br&gt;
❌ Prompt: Make this shorter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s quick, flexible, and doesn’t require retraining the model. That’s why prompt engineering has become the go-to approach for most users — it’s accessible to anyone who can think clearly and ask good questions. You can experiment freely, iterate quickly, and often get surprisingly strong results without touching the underlying model. But, like any tool, it has its limitations. Prompt engineering can be &lt;strong&gt;brittle and inconsistent&lt;/strong&gt; — even small changes in wording can lead to drastically different outputs. It can be challenging to &lt;strong&gt;scale or automate&lt;/strong&gt; for large workloads, and when it comes to &lt;strong&gt;highly specialized tasks or adapting to specific domains&lt;/strong&gt;, it sometimes struggles to produce reliable results.&lt;/p&gt;

&lt;p&gt;In short, prompt engineering is a bit like “command-line AI.” It’s fast, lightweight, and perfect for prototyping, experimentation, or casual use — but it’s not always robust enough for complex, high-stakes, or large-scale applications. And that’s where &lt;strong&gt;Prompt Tuning&lt;/strong&gt; enters the picture. When crafting the perfect prompt isn’t enough, prompt tuning lets you go a step deeper — shaping the model’s behavior from the inside out. Instead of just telling the AI what to do, you’re influencing how it thinks and responds, making it more consistent, reliable, and tailored to your specific needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt Tuning - fine tuning under the hood;&lt;/strong&gt; is a more technical, machine-learning–centric approach. Rather than writing natural language instructions, it works by creating task-specific “soft prompts” — essentially trainable embeddings that are prepended to the input tokens of an LLM. These embeddings guide the model toward the desired behavior without changing all of its parameters. In fact, unlike full fine-tuning (which updates every parameter in the model), prompt tuning often adjusts less than 1% of the model’s weights, making it lighter, faster, and easier to manage.&lt;/p&gt;

&lt;p&gt;Think of it as giving the AI a kind of memory or personality. Even if you phrase your prompts differently, the model retains its trained behavior. &lt;br&gt;
The trade-off is that prompt tuning requires a bit more technical know-how than prompt engineering. You need to understand how to train the soft prompts, select good example data, and integrate them effectively. But in return, you get an AI that is reliable, repeatable, and highly specialized, capable of handling complex tasks without depending solely on carefully worded prompts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Suppose a company wants an AI that drafts customer support emails in a friendly yet professional tone. With prompt engineering, you’d have to carefully phrase every prompt to maintain the tone. With prompt tuning, you train the model on a few examples of the desired tone, and then it consistently produces emails in that style, even when the input varies.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;So, when should we use prompt engineering — and when prompt tuning?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Prompt Engineering excels at experimentation and exploration. Prompt Tuning shines in precision and production. Both have their strengths — but their real value depends on the context. The table below summarizes how they stack up in practice:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Preferred Approach&lt;/th&gt;
&lt;th&gt;Reason&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rapid prototyping, exploration&lt;/td&gt;
&lt;td&gt;Prompt Engineering&lt;/td&gt;
&lt;td&gt;Faster iteration, no training needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain-specific style enforcement, scaling&lt;/td&gt;
&lt;td&gt;Prompt Tuning&lt;/td&gt;
&lt;td&gt;Stable behavior, predictable output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource-constrained deployment&lt;/td&gt;
&lt;td&gt;Prompt Tuning&lt;/td&gt;
&lt;td&gt;Minimal parameter updates, lower memory cost&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High-stakes tasks (legal, medical)&lt;/td&gt;
&lt;td&gt;Prompt Tuning (or full fine-tuning)&lt;/td&gt;
&lt;td&gt;Harder to “break” via prompt variation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Striking the Balance&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Most powerful results rarely come from using just one approach. The magic often happens when both are combined as the hybrid approach allows creativity upfront and consistency downstream. By blending the two, you can move from a trial-and-error experimentation to a polished system that scales beautifully -_ prompt engineering is the conversation, the examples, the guidance you provide in real time and prompt tuning is the memory — the part that remembers and applies those lessons consistently. Together, they create an AI that is both *responsive and dependable.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Red Teaming for Responsible AI</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Mon, 30 Sep 2024 15:19:37 +0000</pubDate>
      <link>https://dev.to/snehaup1997/red-teaming-for-responsible-ai-fd4</link>
      <guid>https://dev.to/snehaup1997/red-teaming-for-responsible-ai-fd4</guid>
      <description>&lt;p&gt;As artificial intelligence (AI) technologies continue to evolve at an unprecedented pace, ensuring their responsible development and deployment becomes crucial. Along with AI's potential to bring about significant change comes the responsibility to confront different vulnerabilities and ethical considerations that may arise.&lt;/p&gt;

&lt;p&gt;"Red Teaming", an effective strategy for ensuring AI systems are robust and ethically sound involves simulating potential threats and challenges to reveal weaknesses, providing a deeper understanding of how AI systems perform under adverse conditions. In this article, we will explore the concept of red teaming in detail, highlighting its significance within the broader framework of responsible AI. &lt;/p&gt;




&lt;h2&gt;
  
  
  What is Red Teaming?
&lt;/h2&gt;

&lt;p&gt;Red teaming originated in military strategy as a method to test defenses by simulating an adversary's tactics. This concept has evolved and been adapted across various fields, most notably in cybersecurity, where red teams conduct simulated attacks to identify vulnerabilities in systems. In the context of artificial intelligence, red teaming involves assessing AI models &amp;amp; systems to uncover potential flaws, biases, thereby preventing unintended consequences. By simulating various scenarios that could challenge the integrity and functionality of AI, red teaming provides a rigorous framework for evaluating the reliability and ethical considerations of these technologies, ultimately contributing to their responsible development and deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Red Teaming vs Penetration vs Vulnerability Assessment
&lt;/h2&gt;

&lt;p&gt;Red teaming, penetration testing, and vulnerability assessment are distinct approaches to evaluating security, each serving specific purposes. &lt;em&gt;Red teaming&lt;/em&gt; simulates real-world attacks to identify and exploit vulnerabilities, providing a comprehensive view of an organization's security posture and testing its defenses under realistic conditions. &lt;em&gt;Penetration testing&lt;/em&gt; focuses on actively probing systems for weaknesses, often with defined scope and limitations, to assess the effectiveness of security measures. &lt;em&gt;Vulnerability assessment&lt;/em&gt;, on the other hand, involves identifying and classifying security weaknesses within a system or network, usually through automated tools, without actively exploiting them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt8hvlkm3t256zfa9dbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt8hvlkm3t256zfa9dbm.png" alt="Image description" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, red teaming provides a holistic and adversarial view of an organization's security, penetration testing focuses on targeted exploitation within defined boundaries and vulnerability assessment offers a broad overview of potential weaknesses without active testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Red Teaming Important for AI?
&lt;/h2&gt;

&lt;p&gt;There are several key reasons why red teaming is essential for AI; namely -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identifying Vulnerabilities&lt;/strong&gt;: AI systems can harbor hidden biases or vulnerabilities that may lead to unintended harm. Red teaming helps uncover these issues before the technology is deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: While assessing an AI recruitment tool, red team finds that the model favours candidates from certain universities due to biases in historical data, risking unfair hiring practices. By identifying this issue, they recommend adjustments to the training data and algorithm, promoting fairness and inclusivity in hiring.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enhancing Security&lt;/strong&gt;: By simulating potential attacks, red teams can help organizations strengthen the security of their AI systems against malicious actors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: A financial institution uses an AI algorithm to detect fraud. The red team simulates an attack with crafted transaction data and discovers the AI misses certain fraudulent patterns. With these insights, the organization updates the model to include diverse scenarios, enhancing its defenses and improving fraud detection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Promoting Ethical Use&lt;/strong&gt;: Red teaming can reveal ethical dilemmas or harmful implications of AI systems, ensuring that their deployment aligns with societal values and standards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: A healthcare provider develops an AI tool to prioritize patient treatment. The red team finds that the algorithm favors younger patients, raising ethical concerns. By addressing this, the organization adjusts the model to prioritize treatment based on medical need, ensuring fairness and adherence to ethical standards in patient care.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improving Trust&lt;/strong&gt;: Demonstrating that an AI system has undergone thorough scrutiny can enhance public trust in AI technologies, leading to broader acceptance and use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: A city implements an AI system for traffic management. After addressing the issues discovered by red team, the city publicly shares the results of the testing and the measures taken. This transparency demonstrates the system's reliability and fairness, leading to increased public confidence in the technology, encouraging its acceptance for use in urban planning and management.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Red Teaming Work?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq08icn9jdo1a7ekbjo7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq08icn9jdo1a7ekbjo7k.png" alt="Image description" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Red teaming consists of several key steps, starting with defining objectives that set clear goals, such as testing for biases or security vulnerabilities. Following this, red teams simulate scenarios that mimic potential attacks, challenging the AI with atypical data inputs. After conducting these tests, the outcomes are analysed and findings are compiled into a report, which include recommendations for mitigating identified risks. The assessment concludes with the red teaming collaborating with the AI development team to implement necessary changes based on their findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Attacks in AI Red Teaming
&lt;/h2&gt;

&lt;p&gt;AI red teams utilize various tactics to assess the robustness of AI systems. Common attack vectors include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prompt Attacks:&lt;/strong&gt; Designing malicious prompts to manipulate AI models into producing harmful or inappropriate content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Poisoning:&lt;/strong&gt; Inserting adversarial data during the training phase to disrupt the model's behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Model Extraction:&lt;/strong&gt; Attempting to steal or replicate the AI model, which can lead to unauthorized use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Backdoor Attacks:&lt;/strong&gt; Modifying the model to respond in a specific manner when triggered by certain inputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Adversarial Examples:&lt;/strong&gt; Crafting input data specifically to mislead the AI model.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Red Teaming Assessment Tools
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp7t5tnhtd74i2hfg1hk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp7t5tnhtd74i2hfg1hk.jpg" alt="Image description" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The market for red teaming assessment tools is diverse, offering various solutions to enhance the security and ethical use of AI systems. Tools like Burp Suite and OWASP ZAP focus on web application security, while Google Cloud AutoML and IBM Watson OpenScale address biases and performance monitoring in AI models. Platforms like HackerOne enable organizations to crowdsource red teaming efforts, bringing in external expertise. By leveraging these tools, organizations can proactively identify vulnerabilities and biases, ensuring their AI applications are secure, reliable, and aligned with ethical standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Considerations
&lt;/h2&gt;

&lt;p&gt;Effective red teaming requires skilled personnel and resources, which can be challenging for smaller organizations. Organizations must balance transparency with security to safeguard against vulnerabilities.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
    </item>
    <item>
      <title>Responsible AI 101</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Mon, 30 Sep 2024 15:13:19 +0000</pubDate>
      <link>https://dev.to/snehaup1997/responsible-ai-101-55c0</link>
      <guid>https://dev.to/snehaup1997/responsible-ai-101-55c0</guid>
      <description>&lt;p&gt;The conversation about Responsible AI has gained considerable momentum across different sectors, yet a universally accepted definition is still hard to pinpoint. Many people view RAI mainly as a tool for risk mitigation, but its reach goes much further. It involves not only managing risks and complexities but also the capacity to transform lives and improve experiences.&lt;/p&gt;

&lt;p&gt;This article explores key principles that ensure AI technologies are developed and deployed ethically. &lt;/p&gt;

&lt;h2&gt;
  
  
  Core Principles of Responsible AI
&lt;/h2&gt;

&lt;p&gt;Responsible AI practices focus on fairness, accountability, transparency, and privacy, ensuring that AI systems operate without bias, honor user rights, and are held accountable for their outcomes.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnylxn7lrw0gpcd62t5h1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnylxn7lrw0gpcd62t5h1.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us understand these principles in the scenario of Hiring Algorithms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Fairness:&lt;/strong&gt;&lt;br&gt;
AI systems must be designed to treat all individuals and groups fairly by identifying and addressing biases in training data to prevent discrimination based on any protected characteristics. Incase of a Hiring Algorithm the AI needs to be trained on a variety of datasets to prevent it from favoring any particular demographic, such as selecting candidates solely based on specific genders or backgrounds due to historical successes or other trends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. Transparency:&lt;/strong&gt;&lt;br&gt;
A hiring algorithm assesses candidates using specific criteria, but applicants are not informed about how these criteria are set. To improve transparency, the company could publish an internal report on the algorithm's operations and criteria, preparing the organization to address any applicant challenges regarding the decision-making process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c. Accountability:&lt;/strong&gt;&lt;br&gt;
Organizations should be ready to address the effects of their AI decisions and have processes in place for recourse. If a candidate is unfairly rejected due to biased algorithmic decisions, there should be a clear grievance process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;d. Privacy:&lt;/strong&gt;&lt;br&gt;
Respecting user privacy is paramount. During the hiring process information like LinkedIn profiles maybe required. To safeguard privacy, the company should restrict the algorithm's data collection to what is essential and ensure that information is securely stored and used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;e. Inclusivity:&lt;/strong&gt;&lt;br&gt;
A hiring algorithm that focuses more on experience than potential might miss promising candidates. Designing algorithms that consider diverse candidate backgrounds and experiences, help create a more representative hiring process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;f. Robustness:&lt;/strong&gt;&lt;br&gt;
If an algorithm is meant to identify the best candidates but struggles with unconventional profiles, it may lead to poor results. To improve its robustness, the company could perform tests to see the algorithm's adaptability.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Git-it!</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Mon, 09 Oct 2023 11:33:55 +0000</pubDate>
      <link>https://dev.to/snehaup1997/memory-management-in-java-2-4pm6</link>
      <guid>https://dev.to/snehaup1997/memory-management-in-java-2-4pm6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Git is a version control system that developers use all over the world. It helps you track different versions of your code and collaborate with other developers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you can read only one chapter to get going with Git, you've come to the right place. This blog includes all the fundamental commands that cover most of the tasks you will be doing with Git. Once you finish reading this, you will be capable of setting up and initializing a repository, tracking files, and making commits. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git clone [repo_url]:&lt;/em&gt;&lt;br&gt;
Creates a copy of a remote Git repository on your local machine. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git init:&lt;/em&gt;&lt;br&gt;
Creates an empty Git repository or reinitializes an existing one &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git add [name_of_modified_file] or git add . :&lt;/em&gt;&lt;br&gt;
The specified file or all files in case of a dot are stages for commit&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git commit -m "your commit message":&lt;/em&gt;&lt;br&gt;
Commits the staged changes with the mentioned message. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git checkout [your_branch_name_goes_here]:&lt;/em&gt;&lt;br&gt;
Switches to the specified different branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git status:&lt;/em&gt;&lt;br&gt;
Shows the status of your working directory. This including changes and untracked files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git pull:&lt;/em&gt;&lt;br&gt;
Merged latest changes from a remote repo into the current branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git push:&lt;/em&gt;&lt;br&gt;
Pushes your local commits to a remote repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git log:&lt;/em&gt;&lt;br&gt;
Displays a log of all commits in the current branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git branch:&lt;/em&gt;&lt;br&gt;
Lists all branches in your repository and highlights the current one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git merge [branch_name]:&lt;/em&gt;&lt;br&gt;
Merges changes from one branch into another.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git stash:&lt;/em&gt;&lt;br&gt;
It temporarily saves the changes that are not ready to be committed yet&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git diff [file]:&lt;/em&gt;&lt;br&gt;
Enlists differences between working directory &amp;amp; last commit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;git remote add [name] [repository_url]:&lt;/em&gt;&lt;br&gt;
Adds a remote repository with a specified name. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Memory management in JAVA</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Fri, 06 Oct 2023 11:47:43 +0000</pubDate>
      <link>https://dev.to/snehaup1997/memory-management-in-java-10e9</link>
      <guid>https://dev.to/snehaup1997/memory-management-in-java-10e9</guid>
      <description>&lt;p&gt;Picture yourself with a bottle of water that you consume from throughout the day. It is inevitable that unless you replenish the bottle, there will come a point in the day when it will be completely devoid of water. This concept also applies to memory and anything else that has a limited lifespan.&lt;/p&gt;

&lt;p&gt;JVM designates memory whenever we create new variables, objects, call a method. If we mindlessly just keep on using memory without freeing it, we are bound to encounter a java.lang.OutOfMemoryError exception. Usually, this error is raised when there is not enough space in the Java heap to allocate an object. It can also occur when there is a lack of native memory to support class loading. The solution - Memory management&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory management&lt;/strong&gt; refers to the process of assigning resources for new objects &amp;amp; removing unused objects to free up space for new allocations. In Java, memory management is handled automatically, eliminating the need for us to implement intricate logic in our application code.&lt;/p&gt;

&lt;p&gt;Understanding the key features of Java like platform independence, object life cycle, concurrency, security, libraries, memory management etc. allow us to maximize its offerings and also write clean and efficient code. Additionally, as JVM is the foundation for other Java-based programming languages, acquiring knowledge about Java internals assists in working with those programming languages.&lt;/p&gt;

&lt;p&gt;Having set the context, let's understand various blocks of memory in Java. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OGTZ8JRz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9t5duzj6tcl0p98p07k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OGTZ8JRz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9t5duzj6tcl0p98p07k.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack Memory&lt;/strong&gt; is responsible for static memory allocation &amp;amp; executing threads. Whenever a new method is called, a new block is added on top of the stack, containing specific values for that method. After the method finishes executing, its corresponding stack frame is cleared &amp;amp; the program returns to the calling method.&lt;/p&gt;

&lt;p&gt;Advantages :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each thread has its own stack area in stack memory, ensuring thread safety.&lt;/li&gt;
&lt;li&gt;Memory allocation and deallocation processes are faster.&lt;/li&gt;
&lt;li&gt;Accessing stack memory is quicker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stack memory is fixed and cannot be resized once created.&lt;/li&gt;
&lt;li&gt;It follows a Last-In-First-Out (LIFO) approach, making random access impossible.&lt;/li&gt;
&lt;li&gt;Lacks scalability and flexibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Heap&lt;/strong&gt; in Java is a shared chunk of memory that is created when the JVM starts up. It can be fixed or variable &amp;amp; does not need to be contiguous. It is used for dynamically allocating memory for Java objects and JRE classes during the execution of a Java program &amp;amp; is divided in three generations: young, old &amp;amp; permanent generation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d19mjL6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3d1ho2zfbp633m5i3ge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d19mjL6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3d1ho2zfbp633m5i3ge.png" alt="Image description" width="450" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Young generation&lt;/em&gt; is where newly created objects are allocated. It consists of three sub-parts: &lt;em&gt;Eden, Survivor1, and Survivor2&lt;/em&gt;. Objects are initially allocated in Eden. When Eden becomes full, a minor garbage collection occurs and the live objects are moved to Survivor1, and then to Survivor2. Therefore, we can say that Survivor1 and Survivor2 hold objects that survived the minor garbage collection.&lt;/p&gt;

&lt;p&gt;Objects allocated in the young generation are assigned an age, and when that age is reached, they are moved to the &lt;em&gt;old generation&lt;/em&gt;. Typically, long-surviving objects are stored in the old generation. A major garbage collection is performed on the old generation to collect dead objects.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;permanent generation&lt;/em&gt; is used by the JVM to store metadata about classes and methods, including the Java standard libraries. This space is cleaned during a full garbage collection.&lt;/p&gt;

&lt;p&gt;Advantages :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is not fixed in size and can grow and shrink as needed. &lt;/li&gt;
&lt;li&gt;Allows for random access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is shared across threads therefore not thread-safe.&lt;/li&gt;
&lt;li&gt;Accessing heap memory is slower compared to other.&lt;/li&gt;
&lt;li&gt;Process of allocating and deallocating memory in the heap is more complex compared to stack memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having understood the java memory model let's now delve into the concept of &lt;strong&gt;garbage collection (gc)&lt;/strong&gt;, the feature that enables java to manage memory automatically and efficiently. The aim of gc is to find unused objects and then eliminate or delete them in order to free up memory. An object is considered eligible for Garbage Collection under the following conditions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It is not being used by any program or thread.&lt;/li&gt;
&lt;li&gt;It has no static references or the references are null.&lt;/li&gt;
&lt;li&gt;The object is created within a block and once the control exits that block, the reference goes out of scope.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GC is controlled by a thread known as the Garbage Collector. Java provides two methods, System.gc() and Runtime.gc(), for sending requests to the JVM for garbage collection. It is important to remember that this is merely a request and it is not guaranteed that garbage collection will occur. When the garbage collector removes an object from memory, it first calls the finalize() method of that object before removing it.&lt;/p&gt;

&lt;p&gt;Garbage collection in Java employs a mark-and-sweep algorithm.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mc3pIImG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9q5vpo9p9x6fea80tbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mc3pIImG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9q5vpo9p9x6fea80tbw.png" alt="Image description" width="441" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a Java object is created in the heap, it initially has a mark bit set to 0 (false). During the mark phase, the garbage collector traverses object trees starting from their roots. If an object is reachable from the root, its mark bit is set to 1 (true). Objects that are unreachable will have their mark bits remain unchanged. During the sweep phase, the garbage collector scans the heap, reclaiming memory from all items with a mark bit of 0 (false).&lt;/p&gt;

</description>
      <category>java</category>
      <category>beginners</category>
      <category>interview</category>
      <category>corejava</category>
    </item>
    <item>
      <title>Software Architecture as I know [always a WIP ;)]</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Thu, 27 Oct 2022 15:52:19 +0000</pubDate>
      <link>https://dev.to/snehaup1997/software-architecture-as-i-know-it-always-a-wip--1coa</link>
      <guid>https://dev.to/snehaup1997/software-architecture-as-i-know-it-always-a-wip--1coa</guid>
      <description>&lt;p&gt;Software architecture serves as a blueprint for a system. It is the organization of a system which includes all components - their interaction, operation with each other, environment, design principles used and the decisions made.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is it important?&lt;/strong&gt;&lt;br&gt;
Software architecture is the foundation of a software system which has a profound effect on the quality of what is built on top of it. A proper foundation yields number of benefits whereas sub-optimal decisions may later cause development, security, performance, scalability &amp;amp; maintenance concerns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Different types of architecture patterns&lt;/strong&gt;&lt;br&gt;
Some common examples of architecture styles include Monolithic application, Layer based, Client-server pattern, Event-driven, Microservices etc.&lt;/p&gt;

&lt;p&gt;Each pattern pertains to specific characteristics and behavior - some contribute to scalability whereas others are more agile. Knowing these strengths &amp;amp; weaknesses of each architecture pattern is necessary to choose the one that meets your use case. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Layered Architecture&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aymp0ajg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g5bvnd0f58u5vundvzg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aymp0ajg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g5bvnd0f58u5vundvzg.jpg" alt="Image description" width="791" height="461"&gt;&lt;/a&gt;&lt;br&gt;
Layered architecture (also known as the N-tier architecture) is the de-facto standard for designing majority of software. Here the codebase is separated into layers - each with a specific role &amp;amp; independent of the other.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Challenges/Limitations:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defending boundaries.&lt;/li&gt;
&lt;li&gt;Tight coupling leading to complex interdependencies.&lt;/li&gt;
&lt;li&gt;Without proper coordination between team members, source code can turn into a mess.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Microkernel Architecture&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P2XfOkut--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0mn8tivn1gzmc0vrnhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P2XfOkut--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0mn8tivn1gzmc0vrnhq.png" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
Microkernel pattern has two major components - a core system and plug-in modules. The core handles fundamental and minimal operations whereas the plug-in modules handle the extended functionalities. IDEs are the best example for this pattern. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Challenges/Limitations:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;not highly scalable.&lt;/li&gt;
&lt;li&gt;changes to core system will result in changes to plug-ins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Event-driven Architecture&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s94-M0CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sn7q5hf4y0lm7e3i3x8l.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s94-M0CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sn7q5hf4y0lm7e3i3x8l.jpeg" alt="Image description" width="641" height="241"&gt;&lt;/a&gt;&lt;br&gt;
EDA enables an application to detect “events” like mouse hovers, button clicks and act on them. This asynchronous communication replaces the request/response architecture where services would have to wait for a reply before moving onto the next. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Challenges/Limitations:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complexity of implementation&lt;/li&gt;
&lt;li&gt;Anticipation of the unknown&lt;/li&gt;
&lt;li&gt;Error-Handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Microservices Architecture&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P1xQI_MO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjkl75idu9djx94yy041.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P1xQI_MO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjkl75idu9djx94yy041.png" alt="Image description" width="720" height="572"&gt;&lt;/a&gt;&lt;br&gt;
In microservices architecture the application is developed as a collection of services which are loosely-coupled, fine-grained &amp;amp; communicate via lightweight protocols. This makes additions of features and modification of existing ones independent of others.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Challenges/Limitations:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increased development time&lt;/li&gt;
&lt;li&gt;Limited reuse of code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pattern Analysis
&lt;/h2&gt;

&lt;p&gt;A good software architecture should facilitate scalability, agility, addition of new features, integration with external APIs &amp;amp; easily maintainable.&lt;/p&gt;

&lt;p&gt;Attached below is a table enlisting key attributes of each of the architecture pattern discussed that can help you determine which pattern to select based on your use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0su9MPzF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hncyal9s6sst9gjgpu66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0su9MPzF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hncyal9s6sst9gjgpu66.png" alt="Image description" width="720" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>whatis</category>
      <category>development</category>
      <category>design</category>
    </item>
    <item>
      <title>Common Gateway Interface (CGI)</title>
      <dc:creator>snehaup1997</dc:creator>
      <pubDate>Thu, 27 Oct 2022 14:29:27 +0000</pubDate>
      <link>https://dev.to/snehaup1997/the-web-gateway-55h5</link>
      <guid>https://dev.to/snehaup1997/the-web-gateway-55h5</guid>
      <description>&lt;p&gt;Providing the middleware between servers, external databases and information sources, a gateway interface is the mode in which the web application interacts with the external world. &lt;/p&gt;

&lt;p&gt;On typing &lt;em&gt;&lt;a href="https://dev.to/"&gt;https://dev.to/&lt;/a&gt;&lt;/em&gt; your browser would redirect you to the homepage of DEV. But is that all you really use the web for?&lt;/p&gt;

&lt;p&gt;As you traverse further in the world of World Wide Web, you'll come across documents that make you wonder, "How did they do this?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JINI1aXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvmix6ol8a0krtvc6tbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JINI1aXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvmix6ol8a0krtvc6tbm.png" alt="Image description" width="800" height="62"&gt;&lt;/a&gt;&lt;br&gt;
In this example, there are two pieces of dynamic information: the alphanumeric address (IP name) of the remote user and the load average on the serving machine. This is a very simple example but how was it done?&lt;/p&gt;

&lt;p&gt;A more common use case would be forms. when a user fills out a form on a Web page and sends it in, it usually needs to be processed by an application program. The Web server typically passes the form information to a small application program that processes the data and may send back a confirmation message. This method or convention for passing data back and forth between the server and the application is called the common gateway interface (CGI). It is part of the Web's Hypertext Transfer Protocol (HTTP).&lt;/p&gt;

&lt;p&gt;CGI turns the Web from a simple collection of static hypermedia documents into a whole new interactive medium, in which users can ask questions and run applications&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
