<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lini Abraham</title>
    <description>The latest articles on DEV Community by Lini Abraham (@lea_abraham_7a0232a6cd616).</description>
    <link>https://dev.to/lea_abraham_7a0232a6cd616</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3018207%2Fe542f34b-ee5e-4157-9a56-dfaa3908abfa.png</url>
      <title>DEV Community: Lini Abraham</title>
      <link>https://dev.to/lea_abraham_7a0232a6cd616</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lea_abraham_7a0232a6cd616"/>
    <language>en</language>
    <item>
      <title>Destructuring in TypeScript: Arrays &amp; Objects</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sun, 01 Jun 2025 06:56:48 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/destructuring-in-typescript-arrays-objects-3pp9</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/destructuring-in-typescript-arrays-objects-3pp9</guid>
      <description>&lt;p&gt;Destructuring is a powerful feature in JavaScript (and TypeScript) that lets you unpack values from arrays or extract properties from objects into distinct variables.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Array (List) Destructuring
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Basic Example&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const nums = [10, 20, 30];

const [first, second] = nums;

console.log(first);  // 10
console.log(second); // 20

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Skipping values&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const [ , , third] = nums;
console.log(third); // 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Default values&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const [a = 1, b = 2, c = 3] = [undefined, undefined];
console.log(a, b, c); // 1 2 3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Object Destructuring
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Basic Example&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const user = { name: "Alice", age: 25 };

const { name, age } = user;

console.log(name); // "Alice"
console.log(age);  // 25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Renaming variables&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { name: fullName } = user;
console.log(fullName); // "Alice"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Default values&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { city = "Unknown" } = user;
console.log(city); // "Unknown"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Nested Destructuring
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const data = {
  user: {
    profile: {
      username: "bob123",
      email: "bob@example.com"
    }
  }
};

const {
  user: {
    profile: { username }
  }
} = data;

console.log(username); // "bob123"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Destructuring in Function Parameters
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;With objects&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function greet({ name, age }: { name: string; age: number }) {
  console.log(`Hello ${name}, age ${age}`);
}

greet({ name: "Charlie", age: 30 });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;With arrays&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function sum([a, b]: [number, number]) {
  return a + b;
}

console.log(sum([5, 7])); // 12

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Combining with Rest (...)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Arrays&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const [head, ...tail] = [1, 2, 3, 4];
console.log(head); // 1
console.log(tail); // [2, 3, 4]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Objects&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { a, ...rest } = { a: 1, b: 2, c: 3 };
console.log(a);    // 1
console.log(rest); // { b: 2, c: 3 }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Destructuring with TypeScript Types&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type User = {
  id: number;
  name: string;
  email?: string;
};

const user: User = { id: 1, name: "Dana" };

const { id, name, email = "N/A" } = user;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>TypeScript: Record vs Map — What’s the Difference and When to Use Each?</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sun, 01 Jun 2025 00:35:00 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/typescript-record-vs-map-whats-the-difference-and-when-to-use-each-50oj</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/typescript-record-vs-map-whats-the-difference-and-when-to-use-each-50oj</guid>
      <description>&lt;p&gt;In TypeScript, both Record and Map allow you to store key-value pairs. But they’re not the same, and choosing the right one can improve readability, performance, and type safety in your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Record in TypeScript?
&lt;/h2&gt;

&lt;p&gt;A &lt;code&gt;Record&amp;lt;K, V&amp;gt;&lt;/code&gt; is a TypeScript utility type that defines an object type whose keys have type K and whose values have type V.&lt;/p&gt;

&lt;p&gt;It compiles to a plain JavaScript object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const links: Record&amp;lt;string, string&amp;gt; = {
  home: "/",
  about: "/about",
  contact: "/contact"
};

console.log(links["about"]); // "/about"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• Keys must be &lt;code&gt;string&lt;/code&gt;, &lt;code&gt;number&lt;/code&gt;, or &lt;code&gt;symbol&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to iterate over a record
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Option 1: Using Object.entries&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const links = {
  home: "/",
  about: "/about",
  contact: "/contact"
};

type LinkKeys = keyof typeof links;

(Object.entries(links) as [LinkKeys, string][]).forEach(([key, value]) =&amp;gt; {
  console.log(`${key.toUpperCase()} =&amp;gt; ${value}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Option 2: Using for...in loop&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const links = {
  home: "/",
  about: "/about",
  contact: "/contact"
};

// Step 1: Infer the type of the object
type Links = typeof links;

// Step 2: Extract keys as a union type: "home" | "about" | "contact"
type LinkKey = keyof Links;

for (const key in links) {
  // Step 3: Guard against prototype pollution
  if (Object.prototype.hasOwnProperty.call(links, key)) {
    // Step 4: Narrow `key` to LinkKey so it's not just `string`
    const typedKey = key as LinkKey;

    // Now you have full type safety
    const path = links[typedKey];
    console.log(`${typedKey} → ${path}`);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What is a Map?
&lt;/h2&gt;

&lt;p&gt;A Map is an ES6 built-in object that allows you to store key-value pairs with:&lt;br&gt;
    • Any key type (not just strings)&lt;br&gt;
    • Guaranteed insertion order&lt;br&gt;
    • Built-in iteration and utility methods&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const linkMap = new Map&amp;lt;string, string&amp;gt;();
linkMap.set("home", "/");
linkMap.set("about", "/about");
linkMap.set("contact", "/contact");

console.log(linkMap.get("about")); // "/about"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to iterate over a Map
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Define links as a Map&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const links = new Map([
  ["home", "/"],
  ["about", "/about"],
  ["contact", "/contact"],
]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use typeof + keyof to restrict the keys (home, about, contact)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const linkObj = {
  home: "/",
  about: "/about",
  contact: "/contact"
};

type LinkKeys = keyof typeof linkObj;         // "home" | "about" | "contact"
type LinkMap = Map&amp;lt;LinkKeys, string&amp;gt;;

const links = new Map&amp;lt;LinkKeys, string&amp;gt;(Object.entries(linkObj) as [LinkKeys, string][]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Iteration Method 1: for...of&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (const [key, value] of links) {
  // key is strongly typed as "home" | "about" | "contact"
  console.log(`${key.toUpperCase()} =&amp;gt; ${value}`);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Iteration Method 2: .forEach()&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;links.forEach((value, key) =&amp;gt; {
  console.log(`${key} =&amp;gt; ${value}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Iteration Method 3: Array.from() with map()&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Array.from(links.entries()).map(([key, value]) =&amp;gt; {
  console.log(`${key}: ${value}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: How Array.from works&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const links = new Map([
  ["home", "/"],
  ["about", "/about"],
]);

const linkArray = Array.from(links); 
// =&amp;gt; [ ["home", "/"], ["about", "/about"] ]

linkArray.forEach(([key, value]) =&amp;gt; {
  console.log(`${key} =&amp;gt; ${value}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Record vs Map — Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;code&gt;Record&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;Map&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Key Types&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Only &lt;code&gt;string&lt;/code&gt; or &lt;code&gt;number&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Any type (&lt;code&gt;string&lt;/code&gt;, &lt;code&gt;object&lt;/code&gt;, etc.)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Insertion Order&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mostly preserved (integer-like keys are ordered first)&lt;/td&gt;
&lt;td&gt;Preserved&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Iteration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Use &lt;code&gt;Object.entries()&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Directly iterable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Easy with &lt;code&gt;JSON.stringify()&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Requires &lt;code&gt;Object.fromEntries()&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Utility methods&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;.get()&lt;/code&gt;, &lt;code&gt;.set()&lt;/code&gt;, &lt;code&gt;.has()&lt;/code&gt;, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Static config/data&lt;/td&gt;
&lt;td&gt;Dynamic keys, complex structures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{ home: "/", about: "/about" }&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;new Map([["home", "/"], ["about", "/about"]])&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
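&lt;p&gt;As the table notes, a Map needs one extra conversion step before JSON serialization. A minimal sketch (the link data is illustrative):&lt;/p&gt;

```typescript
// A Record serializes directly; a Map must be converted to a plain
// object first, e.g. with Object.fromEntries().
const linkRecord = { home: "/", about: "/about" };
const linkMap = new Map(Object.entries(linkRecord));

const recordJson = JSON.stringify(linkRecord);
const mapJson = JSON.stringify(Object.fromEntries(linkMap));

console.log(recordJson); // {"home":"/","about":"/about"}
console.log(recordJson === mapJson); // true
```

&lt;p&gt;Parsing goes the other way: &lt;code&gt;new Map(Object.entries(JSON.parse(json)))&lt;/code&gt; rebuilds the Map.&lt;/p&gt;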

&lt;h2&gt;
  
  
  When Should You Use Which?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;💡 Use Case&lt;/th&gt;
&lt;th&gt;Choose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Static data, config, known string keys&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Record&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dynamic key-value data, insertion order needed&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Map&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;You need methods like &lt;code&gt;.has()&lt;/code&gt; or &lt;code&gt;.delete()&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Map&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Easy serialization (e.g., API responses)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Record&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Prompt Response Tuning with Temperature, Top-p, and Top-k</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sat, 31 May 2025 02:40:03 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/prompt-response-tuning-with-temperature-top-p-and-top-k-18ph</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/prompt-response-tuning-with-temperature-top-p-and-top-k-18ph</guid>
      <description>&lt;p&gt;Language models like GPT don’t “think” in full sentences — they predict one token at a time, where a token is a chunk of text (often a word or part of a word) created through a process called &lt;strong&gt;tokenization&lt;/strong&gt;. At each step, the model chooses the next token based on &lt;strong&gt;probabilities&lt;/strong&gt; — and &lt;strong&gt;decoding parameters&lt;/strong&gt; like &lt;strong&gt;temperature, top-k, and top-p&lt;/strong&gt; control how predictable, random, or creative those token choices are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Temperature
&lt;/h2&gt;

&lt;p&gt;Temperature controls how random or focused the model’s word choices are.&lt;/p&gt;

&lt;p&gt;A low temperature (e.g., 0.2) makes the output more predictable — the model sticks to the most likely words.&lt;br&gt;
A high temperature (e.g., 1.0 or more) makes the output more creative, possibly even risky or unusual.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Temperature&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;Deterministic, safest&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.3 – 0.7&lt;/td&gt;
&lt;td&gt;Predictable, less risky&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;Balanced randomness (default for GPT)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt;1.0&lt;/td&gt;
&lt;td&gt;Creative, more surprising, possibly noisy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt;1.5&lt;/td&gt;
&lt;td&gt;Often too chaotic or nonsensical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Temperature is usually in the range of 0.0 to 2.0.&lt;/p&gt;
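&lt;p&gt;The effect of temperature can be sketched as a softmax over raw logits that are divided by the temperature before normalizing (the logit values below are invented):&lt;/p&gt;

```typescript
// Lower temperature sharpens the distribution toward the top token;
// higher temperature flattens it toward uniform.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const maxLogit = Math.max(...scaled); // subtracted for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxLogit));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
}

const logits = [2.0, 1.0, 0.1];
console.log(softmaxWithTemperature(logits, 0.2)); // top token dominates
console.log(softmaxWithTemperature(logits, 1.5)); // probabilities even out
```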

&lt;h2&gt;
  
  
  Top-k Sampling
&lt;/h2&gt;

&lt;p&gt;Top-k limits the model to the top k most likely tokens, then picks one randomly from that group.&lt;/p&gt;

&lt;p&gt;Top-k = 1 → Always picks the most probable word (like greedy decoding).&lt;br&gt;
Top-k = 40 → Picks from the 40 best guesses, adding variety without going off-topic.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Top-k Value&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Safe but repetitive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10–50&lt;/td&gt;
&lt;td&gt;Good diversity, still smart&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100+&lt;/td&gt;
&lt;td&gt;More variety, more risk&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Top-k value ranges from a minimum of 1 up to a maximum of the total vocabulary size (which can be ~50,000 tokens for GPT models)&lt;/p&gt;
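&lt;p&gt;A sketch of the top-k step itself, assuming per-token probabilities are already available (the tokens and numbers are invented):&lt;/p&gt;

```typescript
type TokenProb = [string, number];

// Keep the k most probable tokens and renormalize; sampling then
// happens only within this reduced set.
function topKFilter(probs: TokenProb[], k: number): TokenProb[] {
  const kept = [...probs].sort((a, b) => b[1] - a[1]).slice(0, k);
  const total = kept.reduce((sum, [, p]) => sum + p, 0);
  return kept.map(([tok, p]): TokenProb => [tok, p / total]);
}

const probs: TokenProb[] = [
  ["the", 0.5], ["a", 0.3], ["cat", 0.15], ["zebra", 0.05],
];
console.log(topKFilter(probs, 2)); // keeps "the" (0.625) and "a" (0.375)
```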

&lt;h2&gt;
  
  
  Top-p Sampling (Nucleus Sampling)
&lt;/h2&gt;

&lt;p&gt;Top-p chooses from the smallest set of tokens whose total probability adds up to at least p.&lt;br&gt;
If Top-p = 0.9, the model picks from the most likely words that together make up 90% of the probability mass.&lt;br&gt;
Unlike top-k, this list can grow or shrink dynamically depending on the situation.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Top-p Value&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;Very conservative&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.9&lt;/td&gt;
&lt;td&gt;Balanced, avoids outliers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;Like no filter — all options allowed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Top-p value ranges from 0 to 1&lt;/p&gt;

&lt;p&gt;Top-p sampling is also known as nucleus sampling. It works in the following way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sorts the tokens from most to least likely&lt;/li&gt;
&lt;li&gt;Selects the smallest group of tokens whose cumulative probabilities add up to at least p (like 0.9)&lt;/li&gt;
&lt;li&gt;Randomly picks one token from that set
This selected group is what we call the nucleus.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The nucleus is the tight cluster of highest-probability tokens — the model’s most confident guesses.&lt;/p&gt;

&lt;p&gt;Instead of sampling from all possible tokens (many of which are low-probability and often nonsensical), we focus on the most meaningful subset — the nucleus of the probability mass.&lt;/p&gt;
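&lt;p&gt;The steps above can be sketched as follows (token probabilities are invented for illustration):&lt;/p&gt;

```typescript
type Token = [string, number];

// Sort by probability, then take the smallest prefix whose cumulative
// probability reaches p; that prefix is the nucleus.
function nucleus(probs: Token[], p: number): Token[] {
  const sorted = [...probs].sort((a, b) => b[1] - a[1]);
  const picked: Token[] = [];
  let cumulative = 0;
  for (const entry of sorted) {
    picked.push(entry);
    cumulative += entry[1];
    if (cumulative >= p) break;
  }
  return picked;
}

const candidates: Token[] = [
  ["the", 0.5], ["a", 0.3], ["cat", 0.15], ["zebra", 0.05],
];
// With p = 0.9, the nucleus is the smallest set covering 90% of the mass:
console.log(nucleus(candidates, 0.9).map(([tok]) => tok)); // ["the", "a", "cat"]
```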

</description>
    </item>
    <item>
      <title>What is RAG (Retrieval-Augmented Generation)</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Tue, 27 May 2025 11:46:30 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/what-is-rag-retrieval-augumented-generation-12ck</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/what-is-rag-retrieval-augumented-generation-12ck</guid>
      <description>&lt;p&gt;RAG allows the AI model to look things up before answering a question.&lt;/p&gt;

&lt;p&gt;RAG is an AI technique that combines two powerful components:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.  A retriever that searches for relevant information from external sources
2.  A generator (like GPT or Claude or any other AI model) that uses that information to craft accurate, grounded responses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;RAG is used in scenarios such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Recent data or information is required to answer a question&lt;/li&gt;
&lt;li&gt;Information needs to be retrieved from private documents&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How does RAG work?
&lt;/h2&gt;

&lt;p&gt;Let’s say you ask an AI assistant:&lt;/p&gt;

&lt;p&gt;“What’s our company’s refund policy?”&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The question is converted into a vector (a list of numbers that captures the meaning)&lt;/li&gt;
&lt;li&gt;It searches a vector database of your documents (like PDFs, FAQs, or manuals)&lt;/li&gt;
&lt;li&gt;It retrieves the most relevant chunks of text&lt;/li&gt;
&lt;li&gt;It inserts those chunks into the prompt sent to the language model&lt;/li&gt;
&lt;li&gt;The model then generates an answer based on both your question and the retrieved info&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  RAG processing step by step
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Split Documents into Chunks (Document Chunking)&lt;/strong&gt;&lt;br&gt;
    • Your company's HR policy PDF is split into small, readable chunks (e.g., 200–500 words each).&lt;/p&gt;

&lt;p&gt;Example Chunk:&lt;/p&gt;

&lt;p&gt;“Employees are eligible for health benefits after 90 days of full-time employment…”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Generate Embeddings (Using an Embedding Model)&lt;/strong&gt;&lt;br&gt;
    • Each chunk is passed through an embedding model (e.g., Amazon Titan Embeddings).&lt;br&gt;
    • Output: a vector (a list of numbers) representing the meaning of the text.&lt;br&gt;
    • A high-dimensional vector can capture complex meaning, relationships, and semantic context in:&lt;br&gt;
    • Words and sentences (via embeddings)&lt;br&gt;
    • Images&lt;br&gt;
    • User behavior&lt;/p&gt;

&lt;p&gt;The more dimensions, the more nuance the vector can represent — like tone, topic, or context.&lt;/p&gt;

&lt;p&gt;Example&lt;/p&gt;

&lt;p&gt;Chunk vector → [0.21, -0.64, 0.48, …, 0.02] (128–1536 dimensions)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Store in a Vector Database&lt;/strong&gt;&lt;br&gt;
    • All vectors are stored in a vector DB (e.g., Amazon OpenSearch, Kendra, Pinecone).&lt;br&gt;
    • Each vector is linked to its original text chunk.&lt;/p&gt;

&lt;p&gt;Now your database can search by meaning, not just keywords.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runtime (When the User Asks a Question)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;4. User Asks a Question&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“When do I qualify for health benefits?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Convert the Question to a Vector (Query Embedding)&lt;/strong&gt;&lt;br&gt;
    • The question is passed through the same embedding model.&lt;br&gt;
    • Result: a query vector that captures the semantic meaning of the question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Semantic Search in the Vector DB&lt;/strong&gt;&lt;br&gt;
    • The query vector is compared to all stored vectors using cosine similarity (or a similar metric).&lt;br&gt;
    • The most relevant document chunks are retrieved — even if the wording doesn’t match exactly.&lt;/p&gt;
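&lt;p&gt;The similarity comparison can be sketched with cosine similarity on toy 3-dimensional vectors (real embeddings have hundreds of dimensions; the numbers are invented):&lt;/p&gt;

```typescript
// Cosine similarity: dot(a, b) divided by the product of the magnitudes.
// Values near 1 mean the vectors point the same way, i.e. similar meaning.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Toy 3-dimensional embeddings; real ones have hundreds of dimensions.
const queryVec = [0.9, 0.1, 0.0];
const benefitsChunk = [0.8, 0.2, 0.1];
const vacationChunk = [0.1, 0.9, 0.3];

console.log(cosineSimilarity(queryVec, benefitsChunk)); // about 0.98
console.log(cosineSimilarity(queryVec, vacationChunk)); // about 0.21
```

&lt;p&gt;The benefits chunk scores far higher, so it is the one retrieved and inserted into the prompt.&lt;/p&gt;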

&lt;p&gt;Retrieved chunk:&lt;/p&gt;

&lt;p&gt;“Employees are eligible for health benefits after 90 days…”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Augment the Prompt&lt;/strong&gt;&lt;br&gt;
    • The retrieved chunks are inserted into the prompt along with the user’s question:&lt;/p&gt;

&lt;p&gt;Prompt to the foundation model:&lt;/p&gt;

&lt;p&gt;Context:&lt;br&gt;
"Employees are eligible for health benefits after 90 days of full-time employment."&lt;/p&gt;

&lt;p&gt;Question:&lt;br&gt;
"When do I qualify for health benefits?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Foundation Model Generates an Answer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“You qualify for health benefits after 90 days of full-time employment, according to company policy.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using RAG
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;No fine-tuning needed: You don’t have to retrain the model&lt;/li&gt;
&lt;li&gt;Up-to-date answers: Pull from the latest documents&lt;/li&gt;
&lt;li&gt;Custom knowledge: Use your own files, policies, or FAQs&lt;/li&gt;
&lt;li&gt;Fewer hallucinations: Grounded responses using real data&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Precision vs Recall in Machine Learning</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sat, 24 May 2025 06:03:03 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/precision-vs-recall-in-machine-learning-41af</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/precision-vs-recall-in-machine-learning-41af</guid>
      <description>&lt;p&gt;Precision and recall are two core metrics used in evaluating the performance of a binary or multi-class classification model. Both focus on how well the model handles positive cases&lt;/p&gt;

&lt;h2&gt;
  
  
  Precision
&lt;/h2&gt;

&lt;p&gt;Precision indicates how many of the items your model &lt;strong&gt;predicted&lt;/strong&gt; as positive were actually correct. &lt;/p&gt;

&lt;p&gt;•      Measures the correctness of positive predictions&lt;br&gt;
•      Focuses on being accurate when saying “positive”&lt;br&gt;
•      When your model says “Yes” or “Positive,” how often is it right?&lt;br&gt;
•      Penalizes False Positives (FP). Precision score is higher when there are fewer false positives.&lt;br&gt;
•      Higher when the model is conservative (fewer false alarms)&lt;/p&gt;

&lt;p&gt;Precision = True Positives (TP) / (True Positives (TP) + False Positives (FP))&lt;/p&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spam detection: you want to avoid flagging legitimate emails as spam.&lt;/li&gt;
&lt;li&gt;Medical diagnostics: sometimes you want to avoid false alarms that could cause stress or unnecessary tests.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Recall
&lt;/h2&gt;

&lt;p&gt;Recall indicates how many of the &lt;strong&gt;actual&lt;/strong&gt; positives your model was able to find. Of all the real positive cases, how many did the model catch?&lt;/p&gt;

&lt;p&gt;•      Measures the coverage of actual positives&lt;br&gt;
•      Focuses on finding all real positives&lt;br&gt;
•      Of all the cases that are truly positive, how many did the model catch?&lt;br&gt;
•      Penalizes False Negatives (FN). Recall score is higher when there are fewer false negatives.&lt;br&gt;
•      Higher when the model is aggressive about flagging positives (fewer misses)&lt;/p&gt;

&lt;p&gt;Recall = True Positives (TP) / (True Positives (TP) + False Negatives (FN))&lt;/p&gt;

&lt;p&gt;Use Cases:&lt;br&gt;
    • Disease detection: You don’t want to miss any sick patients, even if it means a few healthy ones get flagged.&lt;br&gt;
    • Fraud detection: Better to investigate more cases than miss a real fraud.&lt;/p&gt;
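&lt;p&gt;Both formulas are simple to compute from raw counts (the spam-filter counts below are invented):&lt;/p&gt;

```typescript
// Precision penalizes false positives; recall penalizes false negatives.
function precision(truePositives: number, falsePositives: number): number {
  return truePositives / (truePositives + falsePositives);
}

function recall(truePositives: number, falseNegatives: number): number {
  return truePositives / (truePositives + falseNegatives);
}

// Invented spam-filter counts: 8 true spam caught, 2 legitimate emails
// flagged by mistake, 4 spam emails missed.
console.log(precision(8, 2)); // 0.8
console.log(recall(8, 4));    // 0.6666666666666666
```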

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02watukuv788l5cfvi9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02watukuv788l5cfvi9b.png" alt="Image description" width="478" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>NLP Evaluation Metrics</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sat, 24 May 2025 03:28:37 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/nlp-evaluation-matrices-23d9</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/nlp-evaluation-matrices-23d9</guid>
      <description>&lt;h2&gt;
  
  
  ROUGE - Recall-Oriented Understudy for Gisting Evaluation
&lt;/h2&gt;

&lt;p&gt;Compares the overlap of words or phrases between the generated and reference texts.&lt;br&gt;
    • Focuses on recall — did the model capture the key ideas?&lt;br&gt;
    • Best for summarization and information(content) coverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference Summary:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“The quick brown fox jumps over the lazy dog.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generated Summary:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“The brown fox leaps over a lazy dog.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROUGE-1 = unigram overlap&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compares single-word overlaps.
Matching Words:
    • the, brown, fox, over, lazy, dog → 6 matches

Not Matching:
    • quick, jumps (in reference)
    • leaps, a (in generated)

ROUGE-1 Score =
Overlapping unigrams/Total unigrams in reference = 6/9 = 0.667
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
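&lt;p&gt;The ROUGE-1 computation above can be sketched as a small function that counts each distinct word once, as in the worked example:&lt;/p&gt;

```typescript
// ROUGE-1 recall: overlapping distinct unigrams divided by the total
// number of unigrams in the reference, matching the worked example.
function rouge1Recall(reference: string, generated: string): number {
  const tokenize = (s: string) =>
    s.toLowerCase().replace(/[^a-z\s]/g, "").split(/\s+/).filter((t) => t.length > 0);
  const refTokens = tokenize(reference);
  const genSet = new Set(tokenize(generated));
  let matches = 0;
  new Set(refTokens).forEach((w) => {
    if (genSet.has(w)) matches += 1;
  });
  return matches / refTokens.length;
}

const score = rouge1Recall(
  "The quick brown fox jumps over the lazy dog.",
  "The brown fox leaps over a lazy dog."
);
console.log(score.toFixed(3)); // "0.667"
```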



&lt;p&gt;&lt;strong&gt;ROUGE-2 = bigram overlap&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compares 2-word sequences that appear in order.

Matching Bigrams:
    • “brown fox”
    • “over the”
    • “lazy dog” → 3 matches

Not Matching:
    • “quick brown”, “fox jumps”, etc. (not in generated)

ROUGE-2 Score =
3/8 = 0.375

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ROUGE-S = Skip-Bigram Overlap&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compares skip-bigrams — word pairs that occur in the same order but not necessarily adjacent.

Examples of Matching Skip-Bigrams:
    • (“the”, “fox”)
    • (“brown”, “dog”)
    • (“fox”, “over”)
    • (“the”, “dog”)
These words appear in order in both sentences, though not next to each other.

Skip-bigrams in reference: 36 possible
Matched skip-bigrams: ~12 (depending on allowed skips)

ROUGE-S Score (approximate) =
12/36 = approx 0.33
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ROUGE-L = longest common subsequence&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Finds the longest sequence of words that appear in order (not necessarily adjacent) in both texts.

LCS:
    • “the brown fox over the lazy dog” (but “quick” and “jumps” are missing)

Length of LCS = 7 words

ROUGE-L Score =
7/9 = approx 0.778

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ROUGE Variants - Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2sbpsta6x3o8nflci9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2sbpsta6x3o8nflci9v.png" alt="Image description" width="482" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  BLEU - Bilingual Evaluation Understudy
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Looks at n-gram precision — how much of the generated text matches the reference exactly.
• Originally designed for machine translation.
• Focuses on precision — are the predicted words correct?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Includes a &lt;strong&gt;brevity penalty&lt;/strong&gt; to discourage overly short translations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example:

Reference sentence:

“The quick brown fox jumps over the lazy dog”

Model output:

“The fox”

It matches 2 words, but the answer is way too short.
Brevity Penalty reduces the BLEU score.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
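&lt;p&gt;The brevity penalty can be shown with a simplified, unigram-only BLEU sketch (real BLEU averages clipped n-gram precisions up to 4-grams; this version keeps only the unigram precision and the penalty term):&lt;/p&gt;

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    cand = candidate.lower().split()
    ref = reference.lower().split()
    ref_counts = Counter(ref)
    # clipped precision: a candidate word only counts as often as it appears in the reference
    matches = sum(min(n, ref_counts[w]) for w, n in Counter(cand).items())
    precision = matches / len(cand)
    # brevity penalty: exp(1 - r/c) when the candidate is shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = unigram_bleu("The fox", "The quick brown fox jumps over the lazy dog")
# precision is a perfect 2/2, yet the brevity penalty shrinks the score to about 0.03
```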



&lt;h2&gt;
  
  
  BERT Score
&lt;/h2&gt;

&lt;p&gt;BERTScore measures how similar two pieces of text are in meaning using BERT, a powerful language model.&lt;/p&gt;

&lt;p&gt;Unlike ROUGE and BLEU, which compare words exactly,&lt;br&gt;
&lt;strong&gt;BERTScore compares the meanings of words&lt;/strong&gt;, even if the exact words are different.&lt;/p&gt;

&lt;p&gt;BERTScore checks whether the words in the generated sentence mean the same thing as the words in the reference sentence.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;word embeddings&lt;/strong&gt; (like word meanings in number form) to do this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example:

Reference:

“The dog barked loudly.”

Generated:

“The canine made noise.”

    • BLEU/ROUGE = low (few exact matches)
    • BERTScore = high (words mean similar things: dog = canine, barked = made noise)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How Does It Work?&lt;br&gt;
    • Turns each word in both sentences into vectors using BERT (context-aware).&lt;br&gt;
    • For each word in one sentence, it finds the most similar word in the other.&lt;br&gt;
    • Calculates precision, recall, and F1 score based on semantic similarity.&lt;/p&gt;
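&lt;p&gt;The greedy-matching step can be illustrated with toy two-dimensional vectors. The embedding values below are invented purely for illustration; real BERTScore uses contextual vectors produced by BERT:&lt;/p&gt;

```python
import math

# hand-made 2-D "embeddings", invented for illustration only
emb = {
    "dog":    [0.9, 0.1],
    "canine": [0.85, 0.2],
    "barked": [0.1, 0.9],
    "noise":  [0.2, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def greedy_precision(cand, ref):
    # each candidate word is matched to its most similar reference word
    return sum(max(cosine(emb[c], emb[r]) for r in ref) for c in cand) / len(cand)

p = greedy_precision(["canine", "noise"], ["dog", "barked"])  # high despite zero exact overlap
```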

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc59c9wrhsvik0qquoaj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc59c9wrhsvik0qquoaj1.png" alt="Image description" width="423" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Playwright Useful Resources</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Wed, 07 May 2025 00:48:10 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/playwright-useful-resources-2a9p</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/playwright-useful-resources-2a9p</guid>
      <description>&lt;p&gt;&lt;a href="https://playwrightsolutions.com/playwright-resources/" rel="noopener noreferrer"&gt;https://playwrightsolutions.com/playwright-resources/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://playwright.tech/" rel="noopener noreferrer"&gt;https://playwright.tech/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/t/playwright"&gt;https://dev.to/t/playwright&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@lucgagan/mastering-playwright-best-practices-for-web-automation-with-the-page-object-model-3541412b03d1" rel="noopener noreferrer"&gt;https://medium.com/@lucgagan/mastering-playwright-best-practices-for-web-automation-with-the-page-object-model-3541412b03d1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://playwrightsolutions.com/the-definitive-guide-to-api-test-automation-with-playwright-part-8-adding-eslint-prettier-and-husky/" rel="noopener noreferrer"&gt;https://playwrightsolutions.com/the-definitive-guide-to-api-test-automation-with-playwright-part-8-adding-eslint-prettier-and-husky/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://playwrightsolutions.com/how-to-run-a-specific-spec-file-playwright-tests-sequentially/" rel="noopener noreferrer"&gt;https://playwrightsolutions.com/how-to-run-a-specific-spec-file-playwright-tests-sequentially/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://debbie.codes/blog/tags/playwright" rel="noopener noreferrer"&gt;https://debbie.codes/blog/tags/playwright&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/jsfez/why-playwright-visual-testing-doesnt-scale-1ole"&gt;https://dev.to/jsfez/why-playwright-visual-testing-doesnt-scale-1ole&lt;/a&gt;&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>webdev</category>
      <category>testing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AI Terms</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sun, 04 May 2025 02:05:01 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/ai-terms-31eb</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/ai-terms-31eb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Weights&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A number that says how important an input is.&lt;/p&gt;

&lt;p&gt;High weight = input is very important.&lt;br&gt;
Low weight = input barely matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Biases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A number that shifts the output up or down, no matter what the inputs are.&lt;/p&gt;

&lt;p&gt;A default setting.&lt;br&gt;
Even if the input is zero, the neuron can still produce something.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Activation functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A function that decides if a neuron should “fire” or how strong its signal should be.&lt;/p&gt;

&lt;p&gt;It`s like a gatekeeper.&lt;br&gt;
Only lets important signals through.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feed-forward propagation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The process where the input goes through the network and creates an output.&lt;/p&gt;

&lt;p&gt;Information moves forward only.&lt;/p&gt;
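&lt;p&gt;Weights, biases, activation functions, and feed-forward propagation all come together in a single artificial neuron. A minimal sketch, where the input, weight, and bias values are arbitrary illustration numbers:&lt;/p&gt;

```python
import math

def sigmoid(x):
    # activation function: squashes any input into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # feed-forward step: weighted sum of inputs, shifted by the bias,
    # then passed through the activation function
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# even with one input at zero, the bias still shifts the output
out = neuron([1.0, 0.0], weights=[2.0, -1.0], bias=0.5)
```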

&lt;p&gt;&lt;strong&gt;Back propagation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The process where the network learns from mistakes by adjusting weights and biases.&lt;/p&gt;

&lt;p&gt;It’s similar to a correction loop where you make a mistake, figure out what went wrong, and adjust your thinking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L1 and L2 Regularisation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tricks to stop the model from memorizing too much (overfitting).&lt;/p&gt;

&lt;p&gt;It’s a discipline rule to keep the model simple and focused.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L1:&lt;/strong&gt; Can make the model ignore useless inputs completely.&lt;br&gt;
&lt;strong&gt;L2:&lt;/strong&gt; Smooths out the model’s focus to avoid extreme values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gradients&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A slope that tells the network how to change the weights and biases to get better.&lt;/p&gt;

&lt;p&gt;The gradient shows the direction and size of the correction needed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Gradient Descent&lt;/th&gt;
&lt;th&gt;Gradient Ascent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Moves in the direction that reduces the output (minimizes loss).&lt;/td&gt;
&lt;td&gt;Moves in the direction that increases the output (maximizes the objective).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Goal: Find the lowest point (minimum)&lt;/td&gt;
&lt;td&gt;Goal: Find the highest point (maximum)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Used for: Minimizing error/loss&lt;/td&gt;
&lt;td&gt;Used for: Maximizing likelihood, rewards (like in reinforcement learning to maximize rewards)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Cost/Loss Function&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Cost Function is a formula that tells you how wrong your model’s predictions are.&lt;br&gt;
It compares the actual value with the expected value.&lt;/p&gt;

&lt;p&gt;The goal of training an AI model is to reduce the cost so that predictions get closer to the true answers.&lt;/p&gt;
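&lt;p&gt;Gradient descent on a toy cost function makes the correction loop concrete. The loss (w − 3)² below is an invented example whose minimum sits at w = 3:&lt;/p&gt;

```python
def loss_gradient(w):
    # derivative of the toy loss (w - 3) ** 2; the minimum is at w = 3
    return 2 * (w - 3)

w = 0.0              # initial weight
learning_rate = 0.1  # a hyperparameter chosen before training
for _ in range(100):
    w -= learning_rate * loss_gradient(w)  # step *against* the gradient
```

Each step moves the weight in the direction that reduces the loss; gradient ascent would flip the sign and climb instead.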

&lt;p&gt;&lt;strong&gt;Hyperparameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The settings you choose before training the model.&lt;/p&gt;

&lt;p&gt;For example, how fast to learn (learning rate), how many neurons to use, etc.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>k6</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sun, 04 May 2025 02:03:49 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/k6-2h9j</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/k6-2h9j</guid>
      <description></description>
      <category>testing</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Jmeter</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Sun, 04 May 2025 02:03:37 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/jmeter-4e4</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/jmeter-4e4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Jmeter, InfluxDB and, Grafana resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ibm.com/community/user/blogs/gaurav-dangaich/2024/02/07/jmeter-integration-with-influxdbv2-and-grafana" rel="noopener noreferrer"&gt;https://community.ibm.com/community/user/blogs/gaurav-dangaich/2024/02/07/jmeter-integration-with-influxdbv2-and-grafana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.blazemeter.com/blog/jmeter-grafana" rel="noopener noreferrer"&gt;https://www.blazemeter.com/blog/jmeter-grafana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://shrihariharidas73.medium.com/ultimate-guide-to-load-testing-and-performance-monitoring-with-jmeter-influxdb-and-grafana-1c208a9c8434" rel="noopener noreferrer"&gt;https://shrihariharidas73.medium.com/ultimate-guide-to-load-testing-and-performance-monitoring-with-jmeter-influxdb-and-grafana-1c208a9c8434&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other useful resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.blazemeter.com/author/yuri-bushnev" rel="noopener noreferrer"&gt;https://www.blazemeter.com/author/yuri-bushnev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Courses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://university.blazemeter.com/learn/courses/485/apache-jmeter-intro/lessons/1543:164/apache-jmeter-intro" rel="noopener noreferrer"&gt;https://university.blazemeter.com/learn/courses/485/apache-jmeter-intro/lessons/1543:164/apache-jmeter-intro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://university.blazemeter.com/learn/courses/491/apache-jmeter-pro/lessons/1582:172/apache-jmeter-pro" rel="noopener noreferrer"&gt;https://university.blazemeter.com/learn/courses/491/apache-jmeter-pro/lessons/1582:172/apache-jmeter-pro&lt;/a&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>performance</category>
      <category>testing</category>
    </item>
    <item>
      <title>AI Classification and Regression Model Metrics</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Wed, 30 Apr 2025 12:15:01 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/ai-classification-models-performance-evaluation-using-confusion-matrix--g80</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/ai-classification-models-performance-evaluation-using-confusion-matrix--g80</guid>
      <description>&lt;p&gt;&lt;strong&gt;Classification Metrics (used when predicting categories):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;confusion matrix&lt;/strong&gt; is a table used to evaluate the performance of a &lt;strong&gt;classification&lt;/strong&gt; model by showing the &lt;strong&gt;true&lt;/strong&gt; and &lt;strong&gt;predicted&lt;/strong&gt; classifications for a set of test data. It helps in visualizing and analyzing how well the model is performing, especially for multi-class classification problems. The confusion matrix provides a detailed breakdown of the correct and incorrect predictions, making it easier to understand the types of errors the model is making.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xnoilg632jcwxwyz3ap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xnoilg632jcwxwyz3ap.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;: The proportion of correct predictions (both true positives and true negatives) out of all predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accuracy&lt;/strong&gt; = (TP + TN) / (TP + TN + FP + FN)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt;: The proportion of true positive predictions out of all positive predictions made by the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision (Positive Predictive Value)&lt;/strong&gt; = TP / (TP + FP)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Recall (Sensitivity, True Positive Rate)&lt;/strong&gt;: The proportion of actual positives correctly identified by the model, i.e., out of all the positive cases, how many we predicted correctly.&lt;br&gt;
Recall should be as high as possible. The term “recall” reflects the model’s ability to “recall” or recognize as many true positive instances as possible from the actual positive cases in the dataset. It focuses on minimizing the number of false negatives, ensuring that the model identifies the majority of relevant instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall&lt;/strong&gt; = (TP) / (TP + FN)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;F1 Score&lt;/strong&gt;: The harmonic mean of precision and recall, providing a single metric that balances both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;F1 Score&lt;/strong&gt; = 2 * (Precision * Recall) / (Precision + Recall)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Specificity (True Negative Rate)&lt;/strong&gt;: The proportion of actual negatives correctly identified by the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specificity&lt;/strong&gt; = TN / (TN + FP)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROC-AUC (Receiver Operating Characteristic - Area Under the Curve)&lt;/strong&gt;&lt;br&gt;
The ROC curve plots the true positive rate (recall) against the false positive rate at various threshold settings. The AUC (Area Under the Curve) represents the likelihood that the model will rank a randomly chosen positive instance higher than a randomly chosen negative one.&lt;br&gt;
&lt;a href="https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc" rel="noopener noreferrer"&gt;https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PR-AUC (Precision-Recall Area Under the Curve)&lt;/strong&gt;: The Precision-Recall curve plots precision against recall at various threshold settings. The AUC represents the balance between precision and recall across different thresholds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Logarithmic Loss (Log Loss)&lt;/strong&gt;: Log Loss measures the performance of a classification model where the output is a probability value between 0 and 1. It penalizes confident incorrect predictions more heavily than less confident ones.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
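&lt;p&gt;The formulas above can be collected into one small helper. The confusion-matrix counts in the example call are invented for illustration:&lt;/p&gt;

```python
def classification_metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
    }

m = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```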

&lt;p&gt;&lt;strong&gt;Regression Metrics (used when predicting numbers):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mean Absolute Error (MAE)&lt;/strong&gt;: MAE measures the average absolute difference between the predicted values and the actual values. It provides a straightforward interpretation of the error magnitude.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mean Squared Error (MSE)&lt;/strong&gt;: MSE measures the average squared difference between the predicted values and the actual values. It penalizes larger errors more heavily than MAE, due to the squaring of the differences.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
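&lt;p&gt;Both regression metrics are a few lines of Python; the sample values below are invented and chosen so that one large error dominates the MSE:&lt;/p&gt;

```python
def mae(y_true, y_pred):
    # mean absolute error: average magnitude of the errors
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # mean squared error: squaring penalizes large errors more heavily
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
# errors are 0.5, 0.0, 2.0 -- the 2.0 error contributes far more to MSE than to MAE
```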

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff5nf38bcpry3tos2ea1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff5nf38bcpry3tos2ea1.png" alt="Image description" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51q87pgbkkduo7lf2gjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51q87pgbkkduo7lf2gjl.png" alt="Image description" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw2vmee0522cci214exz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw2vmee0522cci214exz.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS SageMaker</title>
      <dc:creator>Lini Abraham</dc:creator>
      <pubDate>Wed, 30 Apr 2025 12:08:04 +0000</pubDate>
      <link>https://dev.to/lea_abraham_7a0232a6cd616/aws-sagemaker-l91</link>
      <guid>https://dev.to/lea_abraham_7a0232a6cd616/aws-sagemaker-l91</guid>
      <description>&lt;p&gt;AWS SageMaker is a fully managed service that allows data scientists and developers to build, train, and deploy machine learning models at scale. It simplifies the process of developing and deploying machine learning models by offering tools and capabilities to perform each step of the machine learning workflow, from preparing data to monitoring deployed models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of AWS SageMaker:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Data Preparation and Labeling:&lt;/strong&gt;&lt;br&gt;
• SageMaker Data Wrangler: Helps prepare and clean data without writing much code.&lt;br&gt;
• SageMaker Ground Truth: Enables automated data labeling for training datasets, using human labelers and machine learning models to improve labeling efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Model Building:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• SageMaker Studio: An integrated development environment (IDE) that offers tools to build, train, and deploy machine learning models. It’s a web-based interface that supports Jupyter notebooks and various integrations.&lt;br&gt;
• Built-in Algorithms: Offers many pre-built algorithms optimized for performance and scalability, such as XGBoost, linear regression, k-means, and more.&lt;br&gt;
• Custom Algorithms: Users can also bring their own custom algorithms written in popular frameworks like TensorFlow, PyTorch, Scikit-learn, etc.&lt;br&gt;
• Amazon SageMaker Autopilot: Automatically trains and tunes the best machine learning models without requiring the user to know much about machine learning (AutoML).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Model Training:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Distributed Training: SageMaker allows training on large datasets by automatically distributing the training job across multiple instances.&lt;br&gt;
• Spot Training: Reduces cost by using Amazon EC2 Spot Instances for model training.&lt;br&gt;
• Hyperparameter Optimization (HPO): Automates the process of tuning a model’s hyperparameters to improve accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Model Deployment and Hosting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Real-time Inference: Deploy trained models as endpoints for real-time predictions.&lt;br&gt;
• Batch Transform: For batch inference, where predictions are made in bulk for datasets that don’t require real-time responses.&lt;br&gt;
• SageMaker Model Monitor: Provides continuous monitoring of deployed models to ensure their quality over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Model Explainability and Debugging:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• SageMaker Clarify: Provides insights into model fairness and explainability by analyzing bias in data and providing explanations for model predictions.&lt;br&gt;
• SageMaker Debugger: Automatically captures and analyzes model training metrics in real time to help debug and improve models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. MLOps and Pipelines:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• SageMaker Pipelines: Supports MLOps by helping build, manage, and automate end-to-end machine learning workflows, making it easier to retrain and deploy models in production.&lt;br&gt;
• SageMaker Projects: Helps set up CI/CD pipelines for model deployment, making it easier to integrate machine learning models into existing DevOps workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Use SageMaker:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Preparing Data:&lt;/strong&gt; Use SageMaker Data Wrangler or Ground Truth to clean, prepare, and label the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Building Models:&lt;/strong&gt; Use SageMaker Studio to write code, explore data, and build models using pre-built or custom algorithms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Training Models:&lt;/strong&gt; Train models at scale, optimize hyperparameters, and utilize distributed computing or EC2 Spot Instances for cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Deploying Models:&lt;/strong&gt; Deploy models for real-time or batch inference, and monitor performance using SageMaker Model Monitor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Managing Workflows:&lt;/strong&gt; Use SageMaker Pipelines for continuous integration and deployment (CI/CD) of models into production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS SageMaker Algorithms:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Supervised Learning Algorithms:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Linear Learner: For binary and multiclass classification or regression problems. It uses stochastic gradient descent (SGD) for fast model training.&lt;/p&gt;

&lt;p&gt;• XGBoost: An optimized distributed gradient boosting library designed for speed and performance. Great for classification and regression tasks.&lt;/p&gt;

&lt;p&gt;• K-Nearest Neighbors (k-NN): A simple, non-parametric algorithm used for classification and regression. It predicts the label of an unseen data point by calculating the distance between data points.&lt;/p&gt;

&lt;p&gt;• Factorization Machines: Suitable for recommendation systems and tasks involving sparse datasets like click prediction.&lt;br&gt;
• Image Classification: A pre-built algorithm based on ResNet that is used for image classification tasks.&lt;br&gt;
• Object Detection: Helps detect objects in images, based on the Single Shot Multibox Detector (SSD) algorithm.&lt;br&gt;
• Semantic Segmentation: A deep learning algorithm that helps segment parts of an image (e.g., identifying specific objects in an image).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Unsupervised Learning Algorithms:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• K-Means: A clustering algorithm that partitions data into a set number of clusters, widely used in exploratory data analysis.&lt;/p&gt;

&lt;p&gt;• Principal Component Analysis (PCA): A dimensionality reduction algorithm, used to reduce the number of features while retaining variance in the data.&lt;/p&gt;

&lt;p&gt;• Anomaly Detection with Random Cut Forest (RCF): Used to detect anomalous data points in a dataset, commonly used for fraud detection or anomaly detection in time-series data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Time Series Forecasting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• DeepAR: A forecasting algorithm that uses recurrent neural networks (RNNs) to predict future values in a time series, ideal for forecasting demand, financial markets, or energy consumption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Natural Language Processing (NLP) Algorithms:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• BlazingText: A highly optimized implementation of Word2Vec for text classification or word embeddings.&lt;/p&gt;

&lt;p&gt;• Seq2Seq: Sequence-to-sequence models used for machine translation, text summarization, and other NLP tasks.&lt;/p&gt;

&lt;p&gt;• Latent Dirichlet Allocation (LDA): A topic modeling algorithm that helps identify themes or topics in large collections of text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Reinforcement Learning (RL):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Reinforcement Learning with Ray: A toolkit that helps set up reinforcement learning tasks, integrating easily with Ray RLlib for distributed reinforcement learning.&lt;br&gt;
• Coach: SageMaker RL allows users to work with Coach, a toolkit for distributed RL training, used for applications like robotic control, game AI, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Generative AI and Variational Autoencoders (VAE):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• VAE: Used to generate new data similar to training data, for applications like anomaly detection, data synthesis, or generative modeling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Other Algorithms:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• IP Insights: Used for identifying suspicious or anomalous IP addresses based on previous behavior, commonly used in cybersecurity.&lt;br&gt;
• Neural Topic Model (NTM): Another approach for topic modeling, using deep learning to model topics in textual data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is SageMaker Automatic Model Tuning?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automatic Model Tuning in SageMaker, also called Hyperparameter Optimization (HPO), is a managed service that automates the process of finding the optimal hyperparameters for your machine learning models. Hyperparameters are settings like learning rate, batch size, or number of layers in a neural network, which need to be adjusted for a model to perform well.&lt;/p&gt;

&lt;p&gt;Instead of manually tuning these hyperparameters (which can be very time-consuming), SageMaker automates the search by training multiple models with different hyperparameter combinations and selecting the one that performs best based on a chosen metric.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How SageMaker Automatic Model Tuning Works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define the Hyperparameters: You define a set of hyperparameters and their ranges or values to explore during the tuning process. For example, you might define that the learning rate can be between 0.001 and 0.1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Objective Metric: You specify an objective metric, such as validation accuracy, log loss, F1 score, or RMSE. You can use built-in metrics (like Validation:Accuracy) or define your own custom metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search/Tuning Strategy:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• SageMaker uses techniques like Bayesian Optimization to intelligently search the hyperparameter space rather than trying every possible combination (which would be inefficient).&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;Training Jobs: SageMaker runs multiple training jobs, each with different hyperparameter combinations, in parallel. These jobs evaluate the model’s performance on the chosen objective metric.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training Ranges:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• For continuous hyperparameters (like learning rate), you define a range (e.g., from 0.001 to 0.1).&lt;/p&gt;

&lt;p&gt;• For categorical hyperparameters (like optimizer type), you define specific values to explore (e.g., [“Adam”, “SGD”, “RMSprop”]).&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;Optimal Hyperparameters: After all the jobs are complete, SageMaker identifies the hyperparameter combination that produced the best-performing model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stopping Conditions:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• You can set a limit for how many training jobs to run and how long each job can last. This prevents overly long or expensive training sessions.&lt;/p&gt;
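&lt;p&gt;The tuning loop can be illustrated with plain random search, a simpler strategy than the Bayesian optimization SageMaker actually uses. The &lt;code&gt;train&lt;/code&gt; function and its score surface below are invented stand-ins for a real training job:&lt;/p&gt;

```python
import random

def train(learning_rate, optimizer):
    # stand-in for a training job: returns a made-up validation score
    # that peaks near learning_rate = 0.01 with the "Adam" optimizer
    return 1.0 - abs(learning_rate - 0.01) - (0.0 if optimizer == "Adam" else 0.05)

random.seed(0)
trials = []
for _ in range(20):                                   # stopping condition: 20 jobs
    lr = random.uniform(0.001, 0.1)                   # continuous range
    opt = random.choice(["Adam", "SGD", "RMSprop"])   # categorical values
    trials.append((train(lr, opt), lr, opt))

best_score, best_lr, best_opt = max(trials)  # best-performing combination
```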

</description>
    </item>
  </channel>
</rss>
