<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmed Khaled</title>
    <description>The latest articles on DEV Community by Ahmed Khaled (@khold).</description>
    <link>https://dev.to/khold</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F736562%2F74f422b5-2d71-4de2-8700-4772a2fe59b5.png</url>
      <title>DEV Community: Ahmed Khaled</title>
      <link>https://dev.to/khold</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/khold"/>
    <language>en</language>
    <item>
      <title>Inside the Database: A Deep Dive into Disk-Oriented DBMS.</title>
      <dc:creator>Ahmed Khaled</dc:creator>
      <pubDate>Sat, 21 Sep 2024 16:02:46 +0000</pubDate>
      <link>https://dev.to/khold/inside-the-database-a-deep-dive-into-disk-oriented-dbms-o34</link>
      <guid>https://dev.to/khold/inside-the-database-a-deep-dive-into-disk-oriented-dbms-o34</guid>
      <description>&lt;p&gt;When it comes to databases, everyone talks about queries, indexes, and optimizations, but the actual way data is stored—how it's physically arranged on disk—is often overlooked. Yet, understanding this is crucial for anyone building scalable applications. Data storage influences everything from read/write performance to how efficiently the system handles large datasets. In this blog, we’ll dive into two key methods of organizing data in a DBMS: heap files and log-structured storage, and then explore how different data types are represented within those structures.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Heap Files: The "Junk Drawer" of Data Storage&lt;/strong&gt;&lt;br&gt;
Let’s start with heap files. Imagine you have a drawer where you randomly throw in things. You don’t really care about order—you just need to store your stuff. That’s basically how heap files work in databases.&lt;/p&gt;

&lt;p&gt;In a heap file, records (or rows) are inserted wherever there’s space, without any specific order. The system keeps track of where the data lives using pages (fixed-size blocks of storage) and slots within those pages. When you need to retrieve a record, the DBMS scans the heap file to find it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Page Layout: How Data is Physically Stored&lt;/strong&gt;&lt;br&gt;
At the simplest level, a page is a fixed-size block, usually between 4KB and 16KB. Each page contains multiple tuples, and the DBMS uses slots to keep track of where each tuple is stored within the page. Think of slots as pointers or indexes that point to the location of each tuple.&lt;/p&gt;

&lt;p&gt;Here’s a breakdown of how a page might be structured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Page Header: The page begins with metadata, like how many slots are in use, how much free space is left, and other housekeeping information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Slot Array: Following the header is the slot array. Each slot points to the actual location of a tuple in the page. Slots allow for flexible storage of variable-length records because they let the DBMS keep track of where tuples are, even if they don't follow each other consecutively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tuples: The actual data records are stored in the remaining part of the page. For fixed-length tuples, the size is known, so storing and retrieving them is straightforward. For variable-length tuples, the DBMS will use additional space in the slot array to indicate where the tuple starts and how long it is.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Different Approaches to Storing Tuples in Pages&lt;/strong&gt;&lt;br&gt;
There are a couple of key strategies the DBMS might use to organize tuples within pages. These approaches determine how the data grows and how space is managed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Packed Representation: In this model, tuples are packed tightly, filling up the page as much as possible. When a new tuple is inserted, it’s placed in the next available spot. If a tuple is deleted, it leaves a gap. To manage this, some DBMSs may use compaction techniques to periodically shift tuples around to eliminate gaps, but this can be expensive in terms of performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Slotted Pages: As mentioned earlier, the page uses a slot array to reference the actual data. The slot array grows from the beginning of the page, while the tuples are added from the end of the page, moving toward the middle. This gives the DBMS flexibility when inserting or deleting data because it only has to update the slot array, not the actual location of the tuples. Deletions don’t require immediate compaction of the data, just an update to the slot array, making operations more efficient.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
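&lt;p&gt;To make the slotted-page layout concrete, here is a minimal in-memory sketch in JavaScript (all names and sizes are hypothetical, not taken from any particular DBMS): the slot array grows from the front of the page, tuple bytes are written from the back, and a delete only clears the slot without moving any data.&lt;/p&gt;

```javascript
// Minimal slotted-page sketch: slots grow from the front,
// tuple data grows from the back of a fixed-size page.
class SlottedPage {
  constructor(size = 4096) {
    this.size = size;        // total page size in bytes
    this.slots = [];         // slot array: {offset, length} or null
    this.freeEnd = size;     // tuples are written backwards from here
    this.data = {};          // stands in for the raw page bytes
  }

  // Insert a tuple (a string here, standing in for raw bytes).
  // Returns the slot id, or -1 if the page is full.
  insert(tuple) {
    const needed = tuple.length;
    // Hypothetical header (16 bytes) plus 4 bytes per slot entry.
    const headerAndSlots = 16 + (this.slots.length + 1) * 4;
    if (this.freeEnd - needed - headerAndSlots >= 0) {
      this.freeEnd -= needed;
      this.data[this.freeEnd] = tuple;
      this.slots.push({ offset: this.freeEnd, length: needed });
      return this.slots.length - 1;
    }
    return -1;
  }

  // Delete only clears the slot; no data is moved (no compaction).
  remove(slotId) { this.slots[slotId] = null; }

  // Read a tuple back through its slot.
  get(slotId) {
    const slot = this.slots[slotId];
    return slot ? this.data[slot.offset] : undefined;
  }
}
```

&lt;p&gt;Because only the slot is cleared on delete, the freed bytes stay fragmented until a later vacuuming or compaction pass reclaims them.&lt;/p&gt;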

&lt;p&gt;&lt;strong&gt;Advantages of Heap Files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast inserts: Since you’re just dropping data wherever there’s space, inserts are quick.&lt;/li&gt;
&lt;li&gt;Simplicity: The structure is straightforward, making it easy to implement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Drawbacks of Heap Files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Slow reads: Without any order, the database has to scan the entire file to find specific data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fragmentation: As data is deleted or updated, the file can get fragmented, slowing down performance over time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Dealing with the Cons of Heap Files
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Problem: Slow Reads (Full Scans)&lt;/strong&gt;&lt;br&gt;
The biggest downside of heap files is the lack of any inherent order in how records are stored. If you're searching for a particular record, you may have to scan through the entire heap file (or multiple pages) to find it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Indexing&lt;/strong&gt;&lt;br&gt;
To mitigate this issue, most DBMSs use indexes to speed up lookups. An index is like a roadmap, pointing to where specific records are stored in the heap file. Instead of scanning every page, the DBMS can use the index to jump directly to the right page and slot. Common indexing methods include B-trees and hash indexes, which help locate records quickly.&lt;/p&gt;
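&lt;p&gt;As a toy illustration of the "roadmap" idea (the data and names below are invented for the example), an index can be as simple as a map from key to a (page, slot) location in the heap file:&lt;/p&gt;

```javascript
// Hypothetical sketch: a hash index mapping each key to its
// physical location (page number + slot number) in a heap file,
// so a point lookup avoids scanning every page.
const heap = [
  ["alice:30", "bob:25"],      // page 0, slots 0 and 1
  ["carol:41"],                // page 1, slot 0
];

// Build the index in one pass over the heap file.
const index = new Map();
heap.forEach((page, pageNo) => {
  page.forEach((tuple, slotNo) => {
    const key = tuple.split(":")[0];
    index.set(key, { pageNo, slotNo });
  });
});

// Point lookup: jump straight to the right page and slot.
function lookup(key) {
  const loc = index.get(key);
  return loc ? heap[loc.pageNo][loc.slotNo] : undefined;
}
```

&lt;p&gt;A real B-tree or hash index is far more involved, but the payoff is the same: the lookup touches one page instead of all of them.&lt;/p&gt;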

&lt;p&gt;&lt;strong&gt;Problem: Fragmentation and Wasted Space&lt;/strong&gt;&lt;br&gt;
Over time, as tuples are inserted, deleted, and updated, heap files can become fragmented. This leads to wasted space within pages and slower performance during reads and writes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Vacuuming and Compaction&lt;/strong&gt;&lt;br&gt;
Some DBMSs periodically perform a process called vacuuming or compaction, where they reorganize pages to eliminate gaps left by deleted tuples. This ensures that space is used efficiently and helps improve performance. The downside is that vacuuming can be resource-intensive, so it's often scheduled during low-traffic periods.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Log-Structured Storage (LSM): High-Speed Data Writes and Efficient Compaction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Log-Structured Merge (LSM) trees were designed to tackle one major bottleneck in databases: slow writes. If you’re running an application that’s writing data frequently—such as logging systems or time-series databases—LSM trees offer a compelling solution by optimizing how data is written and later reorganized.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;How LSM Trees Work: Sequential Writes to the Rescue&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;LSM trees follow a simple principle: &lt;strong&gt;write everything in a log&lt;/strong&gt;. Rather than inserting new data into random spots on disk like heap files, LSM trees append all writes to a &lt;strong&gt;sequential log file&lt;/strong&gt;. This approach ensures that writes are fast because appending data sequentially is far more efficient than performing random disk I/O.&lt;/p&gt;

&lt;p&gt;Here’s the basic workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write to Memory First&lt;/strong&gt;: When new data arrives, it’s first written to an in-memory data structure called a &lt;strong&gt;memtable&lt;/strong&gt; (usually implemented as a sorted tree structure like a Red-Black tree).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flush to Disk as a Log&lt;/strong&gt;: Once the memtable fills up, it is flushed to disk as an immutable file, typically called an &lt;strong&gt;SSTable&lt;/strong&gt; (Sorted String Table). This file is a sorted, compressed representation of the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compaction&lt;/strong&gt;: Over time, multiple SSTables accumulate on disk. Since these files are immutable, the DBMS periodically merges them through a process called &lt;strong&gt;compaction&lt;/strong&gt;. Compaction combines and reorganizes the data to reduce fragmentation, clean up obsolete entries, and optimize the overall storage layout.&lt;/li&gt;
&lt;/ol&gt;
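&lt;p&gt;The three steps above can be sketched in a few lines of JavaScript (a toy model; the flush threshold and the plain &lt;code&gt;Map&lt;/code&gt; stand in for a real memtable's size limit and sorted tree):&lt;/p&gt;

```javascript
// Rough LSM write-path sketch: writes go to an in-memory memtable,
// which is flushed to an immutable, sorted "SSTable" once it fills up.
const MEMTABLE_LIMIT = 3;   // hypothetical flush threshold
let memtable = new Map();   // stands in for a sorted tree structure
const sstables = [];        // flushed, immutable, sorted runs

function put(key, value) {
  memtable.set(key, value);
  if (memtable.size >= MEMTABLE_LIMIT) flush();
}

function flush() {
  // Sort by key on the way out: an SSTable here is a
  // sorted array of [key, value] pairs.
  const sorted = [...memtable.entries()].sort((a, b) =>
    a[0] > b[0] ? 1 : -1
  );
  sstables.push(sorted);
  memtable = new Map();    // start a fresh memtable
}
```

&lt;p&gt;Every disk write in this model is a sequential append of a whole sorted run, which is exactly why the write path stays fast.&lt;/p&gt;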




&lt;h3&gt;
  
  
  &lt;strong&gt;Entry Log Structure: How Data is Organized in LSM Trees&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In LSM trees, data is written to disk in &lt;strong&gt;SSTables&lt;/strong&gt;, which store key-value pairs in a sorted order. Each SSTable typically contains:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Block&lt;/strong&gt;: The actual key-value pairs are stored here, ordered by keys. Each block is compressed to save space.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index Block&lt;/strong&gt;: A separate index for quick lookups is maintained. The index maps keys to their position in the data block, allowing the DBMS to jump directly to the relevant data during reads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bloom Filters&lt;/strong&gt;: To avoid unnecessary disk reads, LSM trees often use &lt;strong&gt;Bloom filters&lt;/strong&gt;—a probabilistic data structure that helps quickly determine whether a key is &lt;em&gt;not&lt;/em&gt; present in an SSTable. This reduces the need to scan multiple tables unnecessarily.&lt;/li&gt;
&lt;/ol&gt;
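&lt;p&gt;A toy Bloom filter illustrating the idea (the hash functions here are simplistic placeholders; real implementations size the bit array and the number of hashes to a target false-positive rate):&lt;/p&gt;

```javascript
// Toy Bloom filter sketch: each key sets a few bits.
// A zero bit proves a key is absent; all-ones only means "maybe present".
class BloomFilter {
  constructor(bits = 64) {
    this.bits = new Array(bits).fill(0);
  }

  // Two simple (made-up) hash functions over the key string.
  hashes(key) {
    let h1 = 17, h2 = 31;
    for (const ch of key) {
      h1 = (h1 * 31 + ch.charCodeAt(0)) % this.bits.length;
      h2 = (h2 * 37 + ch.charCodeAt(0)) % this.bits.length;
    }
    return [h1, h2];
  }

  add(key) {
    this.hashes(key).forEach((i) => { this.bits[i] = 1; });
  }

  // false => definitely absent; true => possibly present
  mightContain(key) {
    return this.hashes(key).every((i) => this.bits[i] === 1);
  }
}
```

&lt;p&gt;The key property: a &lt;code&gt;false&lt;/code&gt; answer is definitive, so the DBMS can skip that SSTable entirely, while a &lt;code&gt;true&lt;/code&gt; answer only means the file is worth checking.&lt;/p&gt;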

&lt;h4&gt;
  
  
  &lt;strong&gt;Compaction: Managing Multiple Log Files&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;As more SSTables accumulate on disk, the database needs to balance between fast writes and efficient reads. Compaction solves this problem by merging multiple smaller SSTables into a single larger one, eliminating duplicate or outdated entries in the process. This makes subsequent reads faster, as fewer SSTables need to be checked.&lt;/p&gt;

&lt;p&gt;However, compaction requires additional resources and can impact performance if not managed well. Most DBMSs perform compaction in the background to minimize its impact on live queries.&lt;/p&gt;
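&lt;p&gt;At its core, compaction is a sorted merge. A simplified sketch (using JavaScript arrays of &lt;code&gt;[key, value]&lt;/code&gt; pairs as stand-ins for SSTables, with &lt;code&gt;null&lt;/code&gt; as a hypothetical delete marker):&lt;/p&gt;

```javascript
// Compaction sketch: merge two sorted SSTables, keeping only the
// newest version of each key. Entries in `newer` win over `older`.
function compact(older, newer) {
  const merged = new Map();
  // Insert older entries first, then let newer entries overwrite them.
  for (const [k, v] of older) merged.set(k, v);
  for (const [k, v] of newer) merged.set(k, v);
  // Drop tombstones (deletes, marked here as null) and re-sort by key.
  return [...merged.entries()]
    .filter(([, v]) => v !== null)
    .sort((a, b) => (a[0] > b[0] ? 1 : -1));
}
```

&lt;p&gt;Because both inputs are already sorted, production systems do this as a streaming merge rather than materializing a map, but the outcome is the same: one larger run with duplicates and tombstones gone.&lt;/p&gt;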




&lt;h3&gt;
  
  
  &lt;strong&gt;Indexes in LSM Trees: Optimizing Read Performance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While LSM trees are great for writes, they need help when it comes to reads. Without any optimizations, searching for a specific record would require scanning multiple SSTables. Here’s how DBMSs improve read performance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bloom Filters&lt;/strong&gt;: As mentioned earlier, Bloom filters quickly determine whether a record is &lt;em&gt;not&lt;/em&gt; in a particular SSTable, reducing unnecessary scans.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Summary Table&lt;/strong&gt;: A &lt;strong&gt;summary table&lt;/strong&gt; can also be used to keep track of the range of keys within each SSTable. This way, the DBMS can check if a key falls within a specific range before looking inside the file, reducing the number of SSTables it has to scan.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Indexes&lt;/strong&gt;: The DBMS builds indexes over keys, stored alongside the SSTables. These indexes allow for efficient point lookups and range queries, helping locate records quickly even when they are spread across multiple SSTables.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
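&lt;p&gt;Combining these ideas, a simplified read path might look like the following sketch (the summary data and tables are invented for illustration; a real read path would also consult the memtable and Bloom filters first):&lt;/p&gt;

```javascript
// Read-path sketch: consult a per-SSTable summary of its key range
// before opening the file, and search the newest tables first.
const tables = [
  { min: "a", max: "f", rows: new Map([["apple", 1], ["fig", 2]]) },
  { min: "g", max: "p", rows: new Map([["grape", 3], ["pear", 4]]) },
];

function get(key) {
  // Newest first: later SSTables hold fresher versions of a key.
  for (let i = tables.length - 1; i >= 0; i--) {
    const t = tables[i];
    // Skip tables whose key range cannot contain the key.
    if (key >= t.min) {
      if (t.max >= key) {
        if (t.rows.has(key)) return t.rows.get(key);
      }
    }
  }
  return undefined;
}
```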




&lt;h3&gt;
  
  
  &lt;strong&gt;Cons of Log-Structured Storage: Trade-Offs to Consider&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While LSM trees offer impressive write performance, they come with some challenges, particularly during reads and compaction:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Slower Reads&lt;/strong&gt;:
&lt;/h4&gt;

&lt;p&gt;Since data is spread across multiple SSTables, reading a specific record might require scanning several files. Without proper indexing or Bloom filters, reads can become slow compared to traditional storage methods like B-trees.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Solution&lt;/strong&gt;: This is mitigated by using Bloom filters, summary tables, and indexes to narrow down the search space.
&lt;/h5&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Compaction Overhead&lt;/strong&gt;:
&lt;/h4&gt;

&lt;p&gt;Compaction can be resource-intensive. As SSTables accumulate, compaction must merge and rewrite data, which consumes disk I/O and CPU resources. If not managed properly, compaction can slow down the system, especially during high-traffic periods.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Solution&lt;/strong&gt;: DBMSs often schedule compaction during low-activity times or stagger compaction processes to avoid overwhelming the system.
&lt;/h5&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Space Amplification&lt;/strong&gt;:
&lt;/h4&gt;

&lt;p&gt;Before compaction happens, multiple versions of the same data might exist across different SSTables, leading to &lt;strong&gt;space amplification&lt;/strong&gt;. This means the database might use more storage than necessary to temporarily hold redundant or outdated data.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Solution&lt;/strong&gt;: By running compaction frequently enough and balancing it with system performance, DBMSs can keep space amplification in check.
&lt;/h5&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion: The Developer’s Edge&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Understanding how a DBMS stores data gives developers an edge when designing databases, writing queries, and tuning performance. Different storage approaches, such as heap files and log-structured storage, offer unique advantages and challenges, and knowing how they work internally helps developers make informed decisions, optimize applications, and avoid potential bottlenecks.&lt;/p&gt;

&lt;p&gt;This knowledge empowers developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select appropriate indexes.&lt;/li&gt;
&lt;li&gt;Choose data types wisely.&lt;/li&gt;
&lt;li&gt;Tackle performance issues before they become problems.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>database</category>
      <category>datastorage</category>
      <category>indexing</category>
      <category>concepts</category>
    </item>
    <item>
      <title>How to Reschedule Scheduled Scripts in NetSuite Using SuiteScript 2.x</title>
      <dc:creator>Ahmed Khaled</dc:creator>
      <pubDate>Wed, 27 Mar 2024 20:53:24 +0000</pubDate>
      <link>https://dev.to/khold/how-to-reschedule-scheduled-scripts-in-netsuite-using-suitescript-2x-49gn</link>
      <guid>https://dev.to/khold/how-to-reschedule-scheduled-scripts-in-netsuite-using-suitescript-2x-49gn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Bulk data manipulation in NetSuite can get tricky when you're dealing with massive datasets or complex logic . Scheduled scripts are like your trusty tools , but governance limits can sometimes throw a wrench in the works  (we've all been there!). This guide explores a time-saving technique using SuiteScript 2.x to overcome these limitations and keep your data updates running smoothly ✅.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges of Governance Limits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large datasets and complex logic can quickly max out a script's usage units, leaving the job unfinished.&lt;/li&gt;
&lt;li&gt;Re-running the script from scratch after each limit hit is inefficient and can be a real drag.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Current Inefficient Options: A Tale of Two Inefficiencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-running for Disappearing Saved Search Results 🪄 (Not Really Magic)&lt;/strong&gt;&lt;br&gt;
If updated records vanish from saved search results (like a disappearing act!), you'd need to repeatedly execute the script until completion. This can be extremely time-consuming and frustrating ⏳.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-running with Index Tracking for Persistent Results 🕵️‍♀️&lt;/strong&gt;&lt;br&gt;
For results that remain after updates (like a persistent detective!), you'd track the last processed index and pass it as a script parameter to pick up where you left off. Although better, this still requires multiple script runs, which can be tedious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proposed Solution: Dynamic Rescheduling for Efficiency ⏱️ (The Hero We Need!)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This approach is your time-saving hero, streamlining the process and minimizing manual intervention. Within your update loop, you'll check the script's remaining usage units. If they fall below a specific threshold (think of it as a fuel gauge for your script ⛽), you'll automatically reschedule the script using the &lt;code&gt;N/task&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;invoices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getCurrentScript&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;getRemainingUsage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;audit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;executeID&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;|RESUME&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;details&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Resuming from &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt; of &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;invoices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;reschTask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;taskType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TaskType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SCHEDULED_SCRIPT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;scriptId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getCurrentScript&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;deploymentId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getCurrentScript&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;deploymentId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;custscript_start_instms_from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;reschTask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// Your update logic here&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation: Under the Hood of Dynamic Rescheduling ⚙️&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Usage Threshold Check&lt;/strong&gt;: The script constantly monitors its remaining units, like a fuel gauge keeping you informed ⛽.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rescheduling Trigger&lt;/strong&gt;: If the units dip below the threshold (your fuel gauge nearing empty!), rescheduling kicks in to avoid running out of steam ➡️⏱️.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reschedule Task Creation&lt;/strong&gt;: A new task is created using the &lt;code&gt;N/task&lt;/code&gt; module, essentially sending a "refuel" message to the system ⛽.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reschedule Details&lt;/strong&gt;: The task specifies the script ID, deployment ID, and any necessary parameters (e.g., &lt;code&gt;custscript_start_instms_from&lt;/code&gt; to indicate the starting index for persistent results). This ensures the script picks up exactly where it left off, just like resuming a movie .&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Submission&lt;/strong&gt;: The rescheduling task is submitted, guaranteeing that the script can continue its work seamlessly without interruption .&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Dynamic Rescheduling: A Win-Win Situation!&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced Manual Intervention: No more manually restarting the script after each limit reach, freeing you from repetitive tasks ➡️✅.&lt;/li&gt;
&lt;li&gt;Improved Efficiency: Streamlines mass data updates by avoiding unnecessary re-runs, saving you valuable time ⏱️.&lt;/li&gt;
&lt;li&gt;Optimized Workflows: Scripts can handle larger datasets and complex logic more effectively, allowing you to tackle bigger challenges with confidence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Remember: Setting the Right Threshold&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set an appropriate usage threshold based on your script's typical unit consumption. This is like finding the sweet spot for your fuel gauge to avoid unnecessary rescheduling or running out of units ⛽️.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This technique is particularly useful for scenarios where script execution might vary depending on the data being processed. Think of it as adapting your driving style based on road conditions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>netsuite</category>
      <category>suitescript</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Polyfill in Reactjs</title>
      <dc:creator>Ahmed Khaled</dc:creator>
      <pubDate>Fri, 21 Jan 2022 05:34:29 +0000</pubDate>
      <link>https://dev.to/khold/polyfill-in-reactjs-1ino</link>
      <guid>https://dev.to/khold/polyfill-in-reactjs-1ino</guid>
      <description>&lt;h2&gt;
  
  
  What is Polyfill?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A polyfill is a piece of code (usually JavaScript on the Web) used to provide modern functionality on older browsers that do not natively support it.&lt;/strong&gt; -- definition from MDN&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  There are two approaches if you want to support older browsers like &lt;u&gt;IE11&lt;/u&gt;:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Manual imports from react-app-polyfill and core-js&lt;/strong&gt;&lt;br&gt;
Install react-app-polyfill and core-js (3.0+):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install react-app-polyfill core-js&lt;/code&gt; or &lt;code&gt;yarn add react-app-polyfill core-js&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create a file called (something like) polyfills.js and import it into your root index.js file. Then import the basic react-app polyfills, plus any specific required features, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* polyfill.js */
import 'react-app-polyfill/ie11';
import 'core-js/features/array/find';
import 'core-js/features/array/includes';
import 'core-js/features/number/is-nan';

/* index.js */

import './polyfills'
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Polyfill service&lt;/strong&gt;&lt;br&gt;
Use the polyfill.io CDN to retrieve custom, browser-specific polyfills by adding this line to index.html:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=default,Array.prototype.includes"&amp;gt;&amp;lt;/script&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: I had to explicitly request the &lt;code&gt;Array.prototype.includes&lt;/code&gt; feature, as it is not included in the default feature set.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finally, a good question might come to mind: why aren't polyfills used exclusively?
&lt;/h2&gt;

&lt;p&gt;Polyfills are not used exclusively because native implementations offer better functionality and better performance: native implementations of APIs can do more and are faster than polyfills. For example, the &lt;code&gt;Object.create&lt;/code&gt; polyfill only contains the functionality that is possible in a non-native implementation of &lt;code&gt;Object.create&lt;/code&gt;.&lt;/p&gt;
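&lt;p&gt;To see what a polyfill actually looks like, here is a simplified &lt;code&gt;Array.prototype.includes&lt;/code&gt; polyfill (for illustration only; the real core-js version also handles &lt;code&gt;NaN&lt;/code&gt;, the &lt;code&gt;fromIndex&lt;/code&gt; argument, and other edge cases):&lt;/p&gt;

```javascript
// Simplified Array.prototype.includes polyfill (illustration only;
// the real core-js version also handles NaN, fromIndex, etc.).
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, "includes", {
    value: function (searchElement) {
      // Walk the array and compare with strict equality.
      for (let i = this.length - 1; i >= 0; i--) {
        if (this[i] === searchElement) return true;
      }
      return false;
    },
  });
}
```

&lt;p&gt;The guard on the first line is the essence of a polyfill: it only installs the fallback when the browser lacks the native method, so modern browsers keep their faster built-in.&lt;/p&gt;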

</description>
      <category>react</category>
      <category>javascript</category>
      <category>polyfill</category>
    </item>
    <item>
      <title>How to create all of your React Projects with one node_modules folder</title>
      <dc:creator>Ahmed Khaled</dc:creator>
      <pubDate>Sun, 31 Oct 2021 00:10:16 +0000</pubDate>
      <link>https://dev.to/khold/how-to-create-all-of-your-react-projects-with-one-nodemodules-folder-2in2</link>
      <guid>https://dev.to/khold/how-to-create-all-of-your-react-projects-with-one-nodemodules-folder-2in2</guid>
      <description>&lt;h1&gt;
  
  
  What Are Symbolic Links?
&lt;/h1&gt;

&lt;p&gt;Symbolic links are basically advanced shortcuts. Create a symbolic link to an individual file or folder, and that link will appear to be the same as the file or folder to Windows—even though it’s just a link pointing at the file or folder.&lt;/p&gt;

&lt;h2&gt;
  
  
  There are two types of symbolic links:
&lt;/h2&gt;

&lt;p&gt;Hard and soft. Soft symbolic links work similarly to a standard shortcut. When you open a soft link to a folder, you will be redirected to the folder where the files are stored.  However, a hard link makes it appear as though the file or folder actually exists at the location of the symbolic link, and your applications won’t know any better. That makes hard symbolic links more useful in most situations.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to create symbolic links in Windows:
&lt;/h1&gt;

&lt;p&gt;First, run CMD as administrator.&lt;br&gt;
The command below creates a symbolic, or “soft”, link at Link pointing to the file Target:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mklink Link Target&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Use /D when you want to create a soft link pointing to a directory, like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mklink /D Link Target&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Use /H when you want to create a hard link pointing to a file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mklink /H Link Target&lt;/code&gt;&lt;br&gt;
Use /J to create a hard link pointing to a directory, also known as a directory junction:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mklink /J Link Target&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create symbolic links in Linux:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ln -s &amp;lt;path to the file/folder to be linked&amp;gt; &amp;lt;the path of the link to be created&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Note that &lt;code&gt;ln&lt;/code&gt; by default makes hard links; pass &lt;code&gt;-s&lt;/code&gt; to create a soft (symbolic) link
&lt;/h6&gt;

&lt;p&gt;Use the command below to remove a symbolic link in Linux:&lt;br&gt;
&lt;code&gt;unlink &amp;lt;path-to-assumed-symlink&amp;gt;&lt;/code&gt; (or &lt;code&gt;rm &amp;lt;path-to-assumed-symlink&amp;gt;&lt;/code&gt;)&lt;/p&gt;

</description>
      <category>react</category>
      <category>symlink</category>
    </item>
  </channel>
</rss>
