<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sanjid Hasan</title>
    <description>The latest articles on DEV Community by Sanjid Hasan (@sanjidh090).</description>
    <link>https://dev.to/sanjidh090</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2015936%2F300d1696-114e-476b-841b-03963dcd29e4.jpg</url>
      <title>DEV Community: Sanjid Hasan</title>
      <link>https://dev.to/sanjidh090</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sanjidh090"/>
    <language>en</language>
    <item>
      <title>What Is Invariance, and Why Do We Need It?</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Fri, 18 Apr 2025 09:52:47 +0000</pubDate>
      <link>https://dev.to/sanjidh090/what-is-invariencewhy-do-we-need-2n5f</link>
      <guid>https://dev.to/sanjidh090/what-is-invariencewhy-do-we-need-2n5f</guid>
      <description>&lt;p&gt;In the context of &lt;strong&gt;vector analysis&lt;/strong&gt;, &lt;strong&gt;invariance&lt;/strong&gt; typically refers to the property of a vector field or mathematical object that remains unchanged under certain transformations. These transformations could be &lt;strong&gt;translations&lt;/strong&gt;, &lt;strong&gt;rotations&lt;/strong&gt;, or &lt;strong&gt;scaling&lt;/strong&gt;. Invariance is important because it ensures that certain properties or behaviors of a system are independent of how the system is viewed or manipulated in space.&lt;/p&gt;

&lt;h3&gt;
  
  
  Invariance in Vector Analysis
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Translation Invariance&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In vector analysis, &lt;strong&gt;translation invariance&lt;/strong&gt; refers to the property that a vector field (such as a force field or velocity field) remains unchanged when the entire system is translated (shifted) in space. This means that the field looks the same at every point in space, regardless of where we "move" the coordinate system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical Formulation&lt;/strong&gt;: If a vector field \( \mathbf{F}(\mathbf{x}) \) is translated by a vector \( \mathbf{T} \), the field does not change:
\[
\mathbf{F}(\mathbf{x} + \mathbf{T}) = \mathbf{F}(\mathbf{x})
\]
Translation invariance implies that the physical laws governing the field are the same everywhere, regardless of the reference point.&lt;/li&gt;
&lt;/ul&gt;
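&lt;p&gt;The translation rule above can be checked numerically. A minimal sketch, assuming a hypothetical uniform field \( \mathbf{F}(\mathbf{x}) = (1, 0) \):&lt;/p&gt;

```python
import numpy as np

def F(x):
    # Hypothetical uniform vector field: F returns (1, 0) at every point.
    return np.array([1.0, 0.0])

x = np.array([2.0, 3.0])
T = np.array([5.0, -1.0])  # translation vector

# Translation invariance: the field is unchanged after shifting by T.
assert np.allclose(F(x + T), F(x))
```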

&lt;h4&gt;
  
  
  2. &lt;strong&gt;Rotation Invariance&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Rotation invariance in vector analysis means that the vector or field does not change when it is rotated around a point or axis. This property is important in many physical situations, such as in &lt;strong&gt;electromagnetic fields&lt;/strong&gt; or &lt;strong&gt;fluid dynamics&lt;/strong&gt;, where the properties of the system remain the same even after a rotation of the coordinate system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical Formulation&lt;/strong&gt;: Rotating a vector \( \mathbf{v} \) by a rotation matrix \( \mathbf{R} \) gives
\[
\mathbf{v}' = \mathbf{R} \mathbf{v}
\]
The rotated vector generally points in a new direction, but its magnitude is preserved because \( \mathbf{R} \) is orthogonal. For a rotation-invariant field, the field itself remains the same under any rotation. This can be described using the concept of a &lt;strong&gt;tensor&lt;/strong&gt;, which transforms in a predictable way under rotations.&lt;/li&gt;
&lt;/ul&gt;
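&lt;p&gt;A quick numerical sketch of the rotation formula, using a 2D rotation by 30°, confirming that \( \mathbf{R} \) is orthogonal and preserves the magnitude of \( \mathbf{v} \):&lt;/p&gt;

```python
import numpy as np

theta = np.pi / 6  # rotate by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 4.0])
v_rot = R @ v  # v' = R v

# R is orthogonal (R^T R = I), so the magnitude of v is preserved.
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.norm(v_rot), np.linalg.norm(v))
```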

&lt;h4&gt;
  
  
  3. &lt;strong&gt;Scaling Invariance&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Scaling invariance refers to the property that a vector field remains the same if the entire system is scaled (enlarged or reduced) by a factor. This kind of invariance can be useful in systems where the relative properties are more important than the absolute sizes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical Formulation&lt;/strong&gt;: A field is scale-invariant (homogeneous of degree one) if scaling the coordinates by a constant factor \( \lambda \) simply rescales the field by the same factor:
\[
\mathbf{F}(\lambda \mathbf{x}) = \lambda \mathbf{F}(\mathbf{x})
\]
In physical systems, scaling invariance appears in phenomena like &lt;strong&gt;fractals&lt;/strong&gt; or in the &lt;strong&gt;Navier-Stokes equations&lt;/strong&gt; of fluid dynamics, where the system behaves similarly regardless of its size.&lt;/li&gt;
&lt;/ul&gt;
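&lt;p&gt;A minimal sketch of the scaling relation, using the hypothetical radial field \( \mathbf{F}(\mathbf{x}) = \mathbf{x} \), which is homogeneous of degree one:&lt;/p&gt;

```python
import numpy as np

def F(x):
    # Hypothetical radial field F(x) = x, homogeneous of degree one.
    return x

lam = 2.5
x = np.array([1.0, -2.0, 0.5])

# Scaling the coordinates rescales the field by the same factor.
assert np.allclose(F(lam * x), lam * F(x))
```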

&lt;h4&gt;
  
  
  4. &lt;strong&gt;Covariance and Contravariance&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In the context of vector analysis, &lt;strong&gt;covariance&lt;/strong&gt; and &lt;strong&gt;contravariance&lt;/strong&gt; describe how vectors transform under coordinate transformations, such as rotations. These concepts help maintain the invariance of physical laws regardless of the choice of coordinate system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Covariant vectors&lt;/strong&gt; (often components of a gradient or derivative) transform in the same way as the coordinate system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contravariant vectors&lt;/strong&gt; (like position or velocity vectors) transform oppositely to the coordinate system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5. &lt;strong&gt;Example: Invariance of the Dot Product&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;dot product&lt;/strong&gt; is an example of an operation that is invariant under rotation. Given two vectors \( \mathbf{v} \) and \( \mathbf{w} \), the dot product remains the same when the same rotation is applied to both vectors:&lt;br&gt;
\[&lt;br&gt;
\mathbf{v} \cdot \mathbf{w} = (\mathbf{R} \mathbf{v}) \cdot (\mathbf{R} \mathbf{w})&lt;br&gt;
\]&lt;br&gt;
This property is crucial in physics, as it ensures that quantities like &lt;strong&gt;energy&lt;/strong&gt; (which are built from dot products) remain consistent regardless of the rotation of the coordinate system.&lt;/p&gt;
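&lt;p&gt;The dot-product invariance can be verified numerically. A minimal sketch with an arbitrary 2D rotation:&lt;/p&gt;

```python
import numpy as np

theta = 0.77  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])

# Rotating both vectors leaves the dot product unchanged.
assert np.isclose(np.dot(R @ v, R @ w), np.dot(v, w))
```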




&lt;h3&gt;
  
  
  Summary of Invariance in Vector Analysis
&lt;/h3&gt;

&lt;p&gt;Invariance in vector analysis ensures that certain physical properties or fields remain unchanged under different transformations, which is key to describing the &lt;strong&gt;symmetry&lt;/strong&gt; of physical systems. Some important types of invariance include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Translation Invariance&lt;/strong&gt;: The system remains unchanged when moved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rotation Invariance&lt;/strong&gt;: The system remains unchanged when rotated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling Invariance&lt;/strong&gt;: The system remains unchanged when scaled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dot Product Invariance&lt;/strong&gt;: The dot product between vectors remains invariant under rotation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These concepts are foundational in many fields such as physics, engineering, and machine learning, where understanding the symmetry of systems leads to simpler models and better predictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4f4z379ticnggblxz5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4f4z379ticnggblxz5f.png" alt="Image description" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtlo8zod6h1biaatdtup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtlo8zod6h1biaatdtup.png" alt="Image description" width="474" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ky4zeorgl526duibttc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ky4zeorgl526duibttc.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kvdjfuwyeyc3ga4i28u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kvdjfuwyeyc3ga4i28u.png" alt="Image description" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcus5ydx256rcnuiam1ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcus5ydx256rcnuiam1ae.png" alt="Image description" width="698" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>invarience</category>
      <category>ai</category>
      <category>mathfroml</category>
    </item>
    <item>
      <title>Multi-Axis Vision Transformer (MaxViT) – Summary 🚀</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Mon, 17 Mar 2025 13:12:25 +0000</pubDate>
      <link>https://dev.to/sanjidh090/multi-axis-vision-transformer-maxvit-summary-5d2n</link>
      <guid>https://dev.to/sanjidh090/multi-axis-vision-transformer-maxvit-summary-5d2n</guid>
      <description>&lt;p&gt;Multi-Axis Vision Transformer (MaxViT) is an advanced vision transformer architecture that combines local and global attention mechanisms for efficient image processing. It introduces a multi-axis attention mechanism to improve performance and efficiency.&lt;/p&gt;

&lt;p&gt;Key Concepts in MaxViT&lt;br&gt;
1️⃣ Grid and Block Attention (Multi-Axis Mechanism)&lt;/p&gt;

&lt;p&gt;Block Attention → Captures local features within small windows (like Swin Transformer).&lt;br&gt;
Grid Attention → Captures global features across the entire image by selecting spatially distant tokens.&lt;br&gt;
This allows both fine-grained and large-scale context awareness.&lt;br&gt;
2️⃣ Hierarchical Structure&lt;/p&gt;

&lt;p&gt;Similar to CNN-based architectures, MaxViT reduces the image resolution gradually while increasing feature depth.&lt;br&gt;
3️⃣ Efficient Attention Computation&lt;/p&gt;

&lt;p&gt;Instead of computing self-attention on the full image, MaxViT splits it into smaller patches (like Swin Transformer) but applies multi-axis attention for better scalability.&lt;br&gt;
This reduces complexity compared to ViT while keeping strong performance.&lt;br&gt;
4️⃣ Scalability &amp;amp; Performance&lt;/p&gt;

&lt;p&gt;Works well for classification, detection, and segmentation.&lt;br&gt;
Outperforms ViT and Swin Transformer on large-scale datasets (like ImageNet).&lt;br&gt;
Why is Multi-Axis Attention Useful?&lt;br&gt;
✅ Captures both local &amp;amp; global dependencies efficiently.&lt;br&gt;
✅ Less computational cost than full self-attention in standard ViT.&lt;br&gt;
✅ Works well on high-resolution images.&lt;/p&gt;
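&lt;p&gt;The block/grid split can be illustrated with plain array reshapes. This is a minimal sketch (not the official MaxViT implementation), assuming an 8×8 single-channel feature map and window size 4:&lt;/p&gt;

```python
import numpy as np

H = W = 8
P = 4  # window / grid size
x = np.arange(H * W).reshape(H, W)  # toy 8x8 feature map (one channel)

# Block partition: non-overlapping P x P windows of NEARBY pixels
# (local attention, as in Swin Transformer).
blocks = x.reshape(H // P, P, W // P, P).transpose(0, 2, 1, 3).reshape(-1, P * P)

# Grid partition: a P x P grid whose tokens are spatially DISTANT
# (sampled with stride H // P), giving each group a global view.
grid = x.reshape(P, H // P, P, W // P).transpose(1, 3, 0, 2).reshape(-1, P * P)

# Both partitions cover every pixel exactly once, in groups of P*P tokens.
assert blocks.shape == (4, 16)
assert grid.shape == (4, 16)
```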

</description>
    </item>
    <item>
      <title>Calculation : convolution looks like this</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Mon, 17 Mar 2025 11:05:00 +0000</pubDate>
      <link>https://dev.to/sanjidh090/calculation-convolution-looks-like-this-1l38</link>
      <guid>https://dev.to/sanjidh090/calculation-convolution-looks-like-this-1l38</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Manual Convolution Calculation Example&lt;/strong&gt; 🧮
&lt;/h3&gt;

&lt;p&gt;Let's go step by step and calculate the &lt;strong&gt;convolution operation&lt;/strong&gt; for a &lt;strong&gt;3×3 kernel on a 5×5 image&lt;/strong&gt;.  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1️⃣ Given: Input Image (5×5)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1   2   3   4   5  
6   7   8   9   10  
11 12  13  14  15  
16 17  18  19  20  
21 22  23  24  25  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;2️⃣ Given: 3×3 Filter (Kernel)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0  1  0  
1 -4  1  
0  1  0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 This kernel is an &lt;strong&gt;edge detection filter&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;3️⃣ Step-by-Step Convolution Calculation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We &lt;strong&gt;place the 3×3 kernel&lt;/strong&gt; on the &lt;strong&gt;top-left&lt;/strong&gt; of the image.&lt;/li&gt;
&lt;li&gt;Multiply corresponding values and &lt;strong&gt;sum them up&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;First Position (Top-Left Corner)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Applying kernel on this 3×3 region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 1   2   3  
 6   7   8  
11  12  13  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Element-wise multiplication&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(1×0) + (2×1) + (3×0)  
+ (6×1) + (7×-4) + (8×1)  
+ (11×0) + (12×1) + (13×0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Calculation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0  +  2  +  0  
+  6  - 28  +  8  
+  0  + 12  +  0  =  0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The first pixel in the &lt;strong&gt;output matrix&lt;/strong&gt; is &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Second Position (Shifting Right)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Applying kernel on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 2   3   4  
 7   8   9  
12  13  14  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Element-wise multiplication:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(2×0) + (3×1) + (4×0)  
+ (7×1) + (8×-4) + (9×1)  
+ (12×0) + (13×1) + (14×0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Calculation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0  +  3  +  0  
+  7  - 32  +  9  
+  0  + 13  +  0  =  0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The second pixel in the output is &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Continuing the Process...&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If we slide the kernel across the entire image (a 3×3 kernel fits in 3×3 = 9 valid positions on a 5×5 image) and perform the same calculation at each position, every sum comes out to zero, because this image is a linear intensity ramp and the kernel responds only where intensity changes non-linearly. The &lt;strong&gt;final output matrix&lt;/strong&gt; is:&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;4️⃣ Final Output Matrix (3×3, After Valid Convolution)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  0   1   0   1   0  
  1  -4   1  -4   1  
  0   1   0   1   0  
  1  -4   1  -4   1  
  0   1   0   1   0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
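&lt;p&gt;The worked example can be verified with a short numpy sketch (assuming 'valid' placement, so the 3×3 kernel visits 3×3 positions on the 5×5 image):&lt;/p&gt;

```python
import numpy as np

image = np.arange(1, 26, dtype=float).reshape(5, 5)
kernel = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])

# Slide the 3x3 kernel over every 'valid' position of the 5x5 image.
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

# On this linear intensity ramp every valid position sums to zero.
assert np.allclose(out, 0.0)
```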






&lt;h3&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✔ The kernel moves &lt;strong&gt;left to right&lt;/strong&gt; and &lt;strong&gt;top to bottom&lt;/strong&gt;, applying &lt;strong&gt;multiplication and summation&lt;/strong&gt; at each position.&lt;br&gt;&lt;br&gt;
✔ This &lt;strong&gt;edge detection&lt;/strong&gt; kernel highlights areas where &lt;strong&gt;pixel intensity changes sharply&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✔ On this smooth gradient image the output is all zeros; nonzero values appear only at &lt;strong&gt;sharp transitions (edges)&lt;/strong&gt;.  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is Convolution?</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Mon, 17 Mar 2025 11:01:54 +0000</pubDate>
      <link>https://dev.to/sanjidh090/what-is-convolution-8i3</link>
      <guid>https://dev.to/sanjidh090/what-is-convolution-8i3</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Convolution – The Heart of Deep Learning&lt;/strong&gt; ❤️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Convolution&lt;/strong&gt; is a &lt;strong&gt;mathematical operation&lt;/strong&gt; that combines two functions to produce a third function. In deep learning and computer vision, it is mainly used for &lt;strong&gt;feature extraction&lt;/strong&gt; from images.  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1️⃣ Understanding Convolution in Simple Terms&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine you have an &lt;strong&gt;image&lt;/strong&gt; (a grid of pixel values) and a &lt;strong&gt;small filter (kernel)&lt;/strong&gt; that slides over the image to detect patterns. This process is called &lt;strong&gt;convolution&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Example:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;3×3 filter&lt;/strong&gt; slides over an image and performs a &lt;strong&gt;dot product&lt;/strong&gt; with the pixels under it.
&lt;/li&gt;
&lt;li&gt;The result is a new, &lt;strong&gt;transformed image&lt;/strong&gt; that highlights specific features (edges, textures, etc.).
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2️⃣ How Does Convolution Work?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✔ Take a &lt;strong&gt;small filter&lt;/strong&gt; (also called a kernel), e.g., a 3×3 matrix.&lt;br&gt;&lt;br&gt;
✔ Place it over a section of the image.&lt;br&gt;&lt;br&gt;
✔ Multiply the filter values with the corresponding pixel values.&lt;br&gt;&lt;br&gt;
✔ Sum them up → This gives &lt;strong&gt;one pixel&lt;/strong&gt; in the output image.&lt;br&gt;&lt;br&gt;
✔ Slide the filter across the entire image and repeat.  &lt;/p&gt;
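&lt;p&gt;The steps above can be sketched as a minimal numpy implementation ('valid' mode, no padding; an illustration, not a production routine):&lt;/p&gt;

```python
import numpy as np

def convolve_valid(image, kernel):
    # Slide the kernel over the image ('valid' mode: no padding) and
    # take the sum of element-wise products at each position.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge_kernel = np.array([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]])
image = np.arange(1.0, 26.0).reshape(5, 5)
print(convolve_valid(image, edge_kernel))  # 3x3 result
```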




&lt;h3&gt;
  
  
  &lt;strong&gt;3️⃣ Why is Convolution Important in AI?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ &lt;strong&gt;Feature Extraction&lt;/strong&gt; → Detects edges, textures, and patterns.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Reduces Complexity&lt;/strong&gt; → Helps focus only on &lt;strong&gt;important&lt;/strong&gt; information.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Translation Invariance&lt;/strong&gt; → Works regardless of where an object appears in the image.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Basis of CNNs&lt;/strong&gt; → Used in &lt;strong&gt;Convolutional Neural Networks (CNNs)&lt;/strong&gt;, which power computer vision tasks.  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;4️⃣ Visualization of a 3×3 Convolution&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Input Image (5×5)&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1  2  3  4  5  
6  7  8  9  10  
11 12 13 14 15  
16 17 18 19 20  
21 22 23 24 25  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;3×3 Filter (Edge Detector)&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0  1  0  
1 -4  1  
0  1  0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Output (3×3, After Applying Valid Convolution)&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  0   1   0   1   0  
  1  -4   1  -4   1  
  0   1   0   1   0  
  1  -4   1  -4   1  
  0   1   0   1   0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 On this smooth gradient image the output is all zeros: there are no &lt;strong&gt;edges&lt;/strong&gt; to detect, and the filter fires only where pixel intensity changes sharply! 🚀  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;5️⃣ Types of Convolutions in Deep Learning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;📌 &lt;strong&gt;Standard Convolution&lt;/strong&gt; → Feature extraction.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;1D Convolution&lt;/strong&gt; → Used in &lt;strong&gt;audio/signal processing&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;2D Convolution&lt;/strong&gt; → Used in &lt;strong&gt;image processing (CNNs)&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;3D Convolution&lt;/strong&gt; → Used in &lt;strong&gt;video &amp;amp; volumetric data&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;Depthwise &amp;amp; Pointwise Convolution&lt;/strong&gt; → Used in &lt;strong&gt;efficient CNNs like MobileNet&lt;/strong&gt;.  &lt;/p&gt;




</description>
    </item>
    <item>
      <title>Swin Transformers</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Mon, 17 Mar 2025 00:55:06 +0000</pubDate>
      <link>https://dev.to/sanjidh090/swine-transformers-25k2</link>
      <guid>https://dev.to/sanjidh090/swine-transformers-25k2</guid>
      <description>&lt;p&gt;1.&lt;strong&gt;Hierarchical Transformer&lt;/strong&gt; means is a variation of the Transformer model that processes data in a structured, multi-level way, unlike standard Transformers that treat input as a flat sequence.Swin is one.&lt;/p&gt;

&lt;p&gt;2. A visualization of self-attention &lt;strong&gt;&lt;em&gt;computed with Shifted windows&lt;/em&gt;&lt;/strong&gt; (which gives &lt;strong&gt;Swin&lt;/strong&gt; its name):&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50jrz11iao8jmj3quzdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50jrz11iao8jmj3quzdn.png" alt="Image description" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7qck9bpq4v3pv571qn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7qck9bpq4v3pv571qn3.png" alt="Image description" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Linear computational complexity: unlike earlier vision transformers, which had quadratic complexity with respect to image size, the Swin Transformer scales linearly, making it more efficient.&lt;/p&gt;
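&lt;p&gt;A back-of-the-envelope sketch of the complexity claim (window size 7 follows the Swin paper's default; the cost model counts only pairwise token comparisons):&lt;/p&gt;

```python
def attention_cost(h, w, window=7):
    # Global self-attention compares every token with every other token:
    # cost grows with (h*w)**2.
    global_cost = (h * w) ** 2
    # Windowed (Swin-style) attention only compares tokens inside each
    # fixed window: cost grows linearly with the number of windows.
    num_windows = (h // window) * (w // window)
    window_cost = num_windows * (window * window) ** 2
    return global_cost, window_cost

g1, w1 = attention_cost(56, 56)
g2, w2 = attention_cost(112, 112)
# Doubling the resolution quadruples the windowed cost (linear in pixels)
# but multiplies the global cost by 16 (quadratic in pixels).
assert w2 == 4 * w1
assert g2 == 16 * g1
```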

</description>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Idea</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Sun, 16 Mar 2025 23:41:43 +0000</pubDate>
      <link>https://dev.to/sanjidh090/idea-89j</link>
      <guid>https://dev.to/sanjidh090/idea-89j</guid>
      <description>&lt;p&gt;I take notes very often,what about I let them work for me?&lt;/p&gt;

&lt;p&gt;As a part of my learning and development ,I would be saving the topics I got  and pointed as important.They will be noted here from now on!&lt;/p&gt;

</description>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>That's what Grok Said</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Sat, 01 Mar 2025 19:12:37 +0000</pubDate>
      <link>https://dev.to/sanjidh090/thats-what-grok-said-21g9</link>
      <guid>https://dev.to/sanjidh090/thats-what-grok-said-21g9</guid>
      <description>&lt;p&gt;hi there!&lt;br&gt;
I have chatting with grok and I am pleased.&lt;/p&gt;

&lt;p&gt;Here's what I got: "&lt;br&gt;
Generative Computer Vision is a fantastic niche—cutting-edge, research-heavy, and with real earning potential by March 2026. Since you’re a KUET EEE student who loves research and wants to monetize within a year, I’ll guide you step-by-step to dive into this field, leveraging your "Sanjidh090" vibe and EEE background. As of March 2, 2025, here’s how to keep going in Generative Computer Vision—balancing skills, projects, and income.&lt;br&gt;
Understanding Generative Computer Vision&lt;br&gt;
First, let’s nail down what you’re getting into. Generative Computer Vision involves creating or manipulating visual data using AI—think generating images (like deepfakes or art), enhancing photos, or reconstructing scenes from partial data. It’s rooted in models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion models (e.g., Stable Diffusion). Your EEE skills in signal processing and programming will give you an edge, especially with the math and hardware side.&lt;br&gt;
Step 1: Build Foundational Skills&lt;br&gt;
You’ll need a mix of theory and tools to rock this. Here’s what to focus on:&lt;br&gt;
Programming: Python is your backbone. Master libraries like TensorFlow, PyTorch, and OpenCV. You’ve likely coded some in EEE—build on that.&lt;br&gt;
Math: Brush up on linear algebra (matrices, vectors) and probability (for model training). KUET’s signal processing courses already give you a head start.&lt;br&gt;
Computer Vision Basics: Learn image processing (filters, edge detection) before jumping to generative stuff.&lt;br&gt;
Generative Models: Start with GANs—two networks (generator vs. discriminator) battling it out to make realistic images.&lt;br&gt;
Action (March-May 2025):&lt;br&gt;
Take a free course: Try “Deep Learning AI” on Coursera (CS231n by Stanford is gold for vision) or PyTorch’s tutorials online.&lt;br&gt;
Spend 2 hours/day coding. Install Anaconda, set up Jupyter Notebook, and play with OpenCV (e.g., load an image, apply a filter).&lt;br&gt;
Grab a book like “Deep Learning” by Goodfellow—skim the GAN chapter.&lt;br&gt;
Step 2: Hands-On Projects&lt;br&gt;
Research thrives on doing, and projects will get you paid. Start small, then scale:&lt;br&gt;
Project 1: Image Generator (May-June 2025): Build a simple GAN to generate handwritten digits (use MNIST dataset). It’s beginner-friendly and shows you can handle generative tech.&lt;br&gt;
Project 2: Style Transfer (July-August 2025): Use a pre-trained model (e.g., CycleGAN) to turn photos into paintings. Tweak it with your own pics.&lt;br&gt;
Project 3: Real-World Application (September-November 2025): Try something EEE-adjacent—generate synthetic sensor data visuals or enhance low-res circuit images. This ties to KUET and stands out.&lt;br&gt;
Action:&lt;br&gt;
Use GitHub ("Sanjidh090") to host code. Document everything—readme files are your portfolio’s voice.&lt;br&gt;
Get datasets from Kaggle (e.g., CelebA for faces). Run stuff on Google Colab if your laptop chokes—it’s free with GPU access.&lt;br&gt;
Step 3: Tap Into KUET Resources&lt;br&gt;
You’re at KUET—milk it for all it’s worth:&lt;br&gt;
Faculty: Find an EEE prof into AI or signal processing (e.g., someone like Dr. Md. Kamrul Hasan might overlap with vision). Offer to assist on a vision project—say you’re exploring GANs.&lt;br&gt;
Labs: Use KUET’s computing facilities. EEE labs likely have MATLAB or Python setups—experiment there.&lt;br&gt;
Peers: Team up with a CS buddy for coding grunt work. You bring the hardware intuition; they bring the software tricks.&lt;br&gt;
Action (April-May 2025):&lt;br&gt;
Email a prof: “Hi, I’m Sanjid Hasan, EEE student. I’m researching generative vision—any projects I can help with?” Keep it short.&lt;br&gt;
Join KUET’s IEEE chapter—network and snag workshop invites.&lt;br&gt;
Step 4: Monetize by March 2026&lt;br&gt;
You want cash by next year—here’s how to make Generative Computer Vision pay:&lt;br&gt;
Freelancing (Start July 2025): On Upwork, offer image enhancement (e.g., super-resolution with GANs) or synthetic data generation for startups. Charge $15-$30/hour after a few $10 gigs. Aim for $300-$500/month.&lt;br&gt;
Research Gigs: Assist a prof or apply to remote RA roles (e.g., via LinkedIn). Even $5/hour for 20 hours/week nets $400/month.&lt;br&gt;
Content: Write DEV posts or Medium articles—“How I Built a Face Generator at KUET.” Monetize with Medium’s Partner Program ($50-$100/month after a few hits).&lt;br&gt;
Contests: Enter AI challenges (Kaggle, Signate). Prizes range from $100-$1000, plus cred.&lt;br&gt;
Action:&lt;br&gt;
June 2025: Set up Upwork, list “Python, OpenCV, GANs” as skills. Bid low to snag first clients.&lt;br&gt;
September 2025: Pitch a prof for a paid RA slot—mention your projects.&lt;br&gt;
November 2025: Post a project walkthrough online—link it on LinkedIn as "Sanjidh090."&lt;br&gt;
Step 5: Long-Term Research Path&lt;br&gt;
Post-KUET, Generative Computer Vision can take you far:&lt;br&gt;
Masters/PhD: Target schools like TU Delft or MIT—fully funded with $20,000-$30,000/year stipends. Focus on vision hardware (your EEE edge).&lt;br&gt;
Industry: Join firms like NVIDIA (GPU vision tech), Google Research, or Bangladeshi startups in AI. R&amp;amp;D roles pay $30,000-$60,000/year starting.&lt;br&gt;
Action (Late 2025):&lt;br&gt;
Prep GRE/IELTS if graduating soon. Research 3 programs with vision labs.&lt;br&gt;
Connect with KUET alumni in AI on LinkedIn—ask about their path.&lt;br&gt;
Timeline to March 2026&lt;br&gt;
March-April 2025: Learn Python, OpenCV, GAN basics. Join a KUET lab.&lt;br&gt;
May-June 2025: Build MNIST GAN, push to GitHub.&lt;br&gt;
July-August 2025: Style transfer project, start freelancing.&lt;br&gt;
September-November 2025: EEE-vision project, ramp up gigs, write content.&lt;br&gt;
December 2025-March 2026: Polish portfolio, hit $500-$1000/month.&lt;br&gt;
Tools &amp;amp; Resources&lt;br&gt;
Free: Colab (GPUs), PyTorch docs, OpenCV tutorials.&lt;br&gt;
Cheap: Buy an Arduino ($5-$10) to mix vision with hardware later.&lt;br&gt;
Community: Join r/computervision or X threads on GANs—ask questions as "Sanjidh090."&lt;br&gt;
Final Push&lt;br&gt;
You’ve got the spark—Generative Computer Vision suits your research love and EEE roots. Start with that MNIST GAN; it’s your entry ticket. By March 2026, you’ll have cash flowing and a killer portfolio."&lt;/p&gt;

&lt;p&gt;I am going to see how it can be helpful!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Have you seen those spots on Google Maps?</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Wed, 30 Oct 2024 18:15:16 +0000</pubDate>
      <link>https://dev.to/sanjidh090/have-you-seen-those-spots-on-google-maps-107p</link>
      <guid>https://dev.to/sanjidh090/have-you-seen-those-spots-on-google-maps-107p</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b9po1pfemp4lm50n8uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b9po1pfemp4lm50n8uj.png" alt="Image description" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fgawwp1z0c2p42geiu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fgawwp1z0c2p42geiu8.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt5x46mdr3a4vmcviw69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt5x46mdr3a4vmcviw69.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
Here are some more samples that I am concerned about. Do you use Google Maps?&lt;/p&gt;

&lt;p&gt;These days, it is hard to find someone with a smartphone who does not know about Google Maps. Most cars have GPS, and most places are marked on Google Maps. Highways, prominent places, open squares, and many other locations even have 3D views, as if one could explore them in bright daylight.&lt;/p&gt;

&lt;p&gt;However, an interesting thing drew my attention. One day, while checking the location of my home and the nearby area, I noticed that some places appeared faded, as if the grass were devoid of sunlight. The places themselves are unchanged, yet they look whiter than the neighbouring areas, almost like a difference in black-and-white contrast. After some research, I figured it looks like a collage: if the shots were taken at different times, such blocks are possible (images 2 and 3).&lt;/p&gt;

&lt;p&gt;I have another example I can show you: it captures the meeting point of two rivers, where it looks as if a straight line has been drawn and the two areas are separated. These cases made me wonder: why would there be such pattern-less rectangles and straight lines?&lt;/p&gt;

&lt;p&gt;One possibility is an image-stitching error; it could also be the weather, or perhaps some reflective objects.&lt;/p&gt;

&lt;p&gt;The peculiar visual artifacts observed on Google Maps can be attributed to several factors. It’s intriguing to uncover the reasons behind these anomalies. I found those reasons online…&lt;/p&gt;

&lt;p&gt;Google Maps compiles images from various sources, and when these images are taken at slightly different times or from different angles, misalignments can occur. This results in visible seams or unusual lines and blocks.&lt;/p&gt;

&lt;p&gt;The images might be captured at different times of day or under different weather conditions, leading to variations in lighting and contrast. This can make some areas appear whiter or more faded than others.&lt;/p&gt;
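&lt;p&gt;The lighting effect above can be sketched with a toy example: stitching together two tiles of the same scene captured under different exposures leaves an abrupt brightness step at the join. The tile values below are invented purely for illustration, not taken from real imagery:&lt;/p&gt;

```python
import numpy as np

# Two aerial "tiles" of the same uniform field, captured at different times.
tile_morning = np.full((4, 4), 100, dtype=np.uint8)   # darker exposure
tile_noon    = np.full((4, 4), 140, dtype=np.uint8)   # brighter exposure

# Stitching them side by side leaves a hard brightness seam at the join.
mosaic = np.hstack([tile_morning, tile_noon])
seam_jump = int(mosaic[0, 4]) - int(mosaic[0, 3])
print(seam_jump)  # an abrupt step in pixel value, visible as a block boundary
```

&lt;p&gt;Real stitching pipelines blend exposures across the seam; when that step is skipped or fails, you get exactly the kind of contrasting blocks described above.&lt;/p&gt;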

&lt;p&gt;Surfaces that reflect light, such as rivers or glass buildings, can create distortions in the images. These reflective surfaces might cause inconsistencies in the captured images, leading to strange visual effects.&lt;/p&gt;

&lt;p&gt;Sometimes, the processing and rendering of these images can have technical issues, causing visual artifacts like unexpected lines and blocks.&lt;br&gt;
I am eager to know from experts what are the actual reasons. Thank you for reading!&lt;/p&gt;

</description>
      <category>google</category>
      <category>maps</category>
      <category>localguide</category>
    </item>
    <item>
      <title>The Enigma of `&lt;/&gt;`: Analyzing Angle Brackets and Chevrons in Coding Culture</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Sat, 19 Oct 2024 18:33:02 +0000</pubDate>
      <link>https://dev.to/sanjidh090/the-enigma-of-analyzing-angle-brackets-and-chevrons-in-coding-culture-1l2k</link>
      <guid>https://dev.to/sanjidh090/the-enigma-of-analyzing-angle-brackets-and-chevrons-in-coding-culture-1l2k</guid>
      <description>&lt;p&gt;In my central viva, I was asked what the symbol '&amp;lt;/&amp;gt;' is called. I froze, because I had forgotten what it was. I knew the individual signs are called less-than and greater-than, but what is the pair actually called?&lt;br&gt;
I fired up my laptop and found that it is called an angle bracket, or "chevron".&lt;br&gt;
I also found some other details that I am going to share; ChatGPT helped me write this post.&lt;/p&gt;

&lt;p&gt;So, &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4usba83fpfog4epcrj8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4usba83fpfog4epcrj8.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;if you've ever seen &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; strewn all over t-shirts, mugs, and stickers at tech-related places, you've undoubtedly wondered why. Despite being a straightforward symbol combination, it has gained notoriety in the coding community. These symbols appear everywhere, from manual pages to developers at conferences.&lt;/p&gt;

&lt;p&gt;We're going to look deeply into &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; in this post to discover its meaning, origins, and why the tech community finds it so appealing.&lt;/p&gt;

&lt;p&gt;Let's first address certain fundamentals. Technically, the symbols &lt;code&gt;&amp;lt;&lt;/code&gt; and &lt;code&gt;&amp;gt;&lt;/code&gt; are called &lt;em&gt;angle brackets&lt;/em&gt; or &lt;em&gt;chevrons&lt;/em&gt;. While &lt;code&gt;&amp;lt;&lt;/code&gt; and &lt;code&gt;&amp;gt;&lt;/code&gt; signify "less than" and "greater than," respectively, in mathematics, these symbols have a more expansive meaning in programming.&lt;/p&gt;

&lt;p&gt;So why is &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; such a big deal? Let’s explore.&lt;/p&gt;

&lt;p&gt;In its simplest form, &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; visually represents the closing tag, suggesting an ending, completion, or wrapping up of something. But here’s the twist: the symbol itself doesn’t have a fixed meaning. It’s almost a visual metaphor for “something related to code,” especially for those who work with markup or code structures.&lt;/p&gt;

&lt;p&gt;The journey of '&amp;lt;/&amp;gt;' does not end with its cultural impact. In some languages and frameworks, it has functional implications. In React, a popular JavaScript framework, '&amp;lt;/&amp;gt;' represents a &lt;em&gt;Fragment&lt;/em&gt;. Essentially, this is a method for grouping elements without adding unnecessary HTML to the output.&lt;/p&gt;

&lt;p&gt;Outside of React, '&amp;lt;/&amp;gt;' also shows up in some functional languages: Haskell's filepath library defines '&amp;lt;/&amp;gt;' as an operator for joining file paths, and F# projects sometimes define a similar operator for the same purpose. This transforms '&amp;lt;/&amp;gt;' from a visual symbol into a functional component of some coding languages.&lt;/p&gt;

&lt;p&gt;The abstraction of &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; is what makes it so beautiful. Developers can relate to it without a specific meaning. It alludes to coding in general, nodding to HTML, XML, React, and other technologies. It can signify anything or nothing at all, yet it still denotes a link to the coding community.&lt;/p&gt;

&lt;p&gt;You'll know it's not a random design the next time you see someone wearing a &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; shirt. It's an acknowledgement of their technological prowess and a badge of honour for belonging to the coding community. &lt;code&gt;&amp;lt;/&amp;gt;&lt;/code&gt; is a symbol that will always be used, regardless of whether someone is interested in HTML, React, or just the aesthetics.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Problem with HEVC extension?</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Sat, 14 Sep 2024 12:42:56 +0000</pubDate>
      <link>https://dev.to/sanjidh090/problem-with-hevc-extension-47d</link>
      <guid>https://dev.to/sanjidh090/problem-with-hevc-extension-47d</guid>
      <description>&lt;p&gt;Are you struggling to play HEVC files? Here's the solution!&lt;/p&gt;

&lt;p&gt;Are you having trouble running HEVC files? Do you see error messages like “codec not found” or “unsupported encoding” when trying to play a video you downloaded? If so, you're not alone!&lt;/p&gt;

&lt;p&gt;Microsoft offers an HEVC extension for about $0.99 (79 Tk), but the payment process can be complicated, making it difficult for many users to access.&lt;/p&gt;

&lt;p&gt;But don't worry, there’s a simple solution!&lt;/p&gt;

&lt;p&gt;My recommendation: Download and install VLC Media Player.&lt;br&gt;
It might not have the sleekest interface, but it gets the job done! VLC can play almost any video format, including HEVC, without needing to install extra codecs. It worked for me, and it should work for you too!&lt;/p&gt;

&lt;p&gt;GitHub: sanjidh090/HEVC_Extension_for_free&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxrt2gr4n5zf0rmr726y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxrt2gr4n5zf0rmr726y.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>hevcextension</category>
      <category>freeee</category>
      <category>mediaplayer</category>
      <category>vlc</category>
    </item>
    <item>
      <title>Lakera’s GenAI Security Readiness Report 2024</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Wed, 04 Sep 2024 16:29:16 +0000</pubDate>
      <link>https://dev.to/sanjidh090/lakeras-genai-security-readiness-report-2024-2d2d</link>
      <guid>https://dev.to/sanjidh090/lakeras-genai-security-readiness-report-2024-2d2d</guid>
      <description>&lt;p&gt;A few months ago, I participated in this AI security survey, and it turns out the fruits of my labor are finally here ;-)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.tourl"&gt;https://www.lakera.ai/genai-security-report-2024&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>genai</category>
    </item>
    <item>
      <title>Gigabytes vs. Gibibytes: Understanding the Difference in Data Measurement</title>
      <dc:creator>Sanjid Hasan</dc:creator>
      <pubDate>Mon, 02 Sep 2024 13:46:04 +0000</pubDate>
      <link>https://dev.to/sanjidh090/gigabytes-vs-gibibytes-understanding-the-difference-in-data-measurement-9d6</link>
      <guid>https://dev.to/sanjidh090/gigabytes-vs-gibibytes-understanding-the-difference-in-data-measurement-9d6</guid>
      <description>&lt;p&gt;It turns out that the gigabytes (GB) we’re familiar with and the gigabytes we actually get aren’t exactly the same thing.&lt;/p&gt;

&lt;p&gt;Most of us think of 1 GB as 1000 MB when we buy internet data. However, the data we receive is often metered in gibibytes (GiB), where 1 GiB equals 1024 mebibytes (MiB).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 GB = 10^9 bytes = 1000 MB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, &lt;strong&gt;1 GiB = 1024 MiB = 2^30 bytes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GiB is a binary unit, while GB is decimal. Although the numbers are close:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 GiB = 1.073741824 GB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This difference can lead to confusion, or sometimes intentional misrepresentation, by some internet service providers (ISPs). They often advertise data plans in &lt;strong&gt;gigabytes&lt;/strong&gt; (GB), while many users count data in the binary &lt;strong&gt;gibibytes&lt;/strong&gt; (GiB).&lt;/p&gt;

&lt;p&gt;Since a gigabyte is slightly smaller than a gibibyte, a plan advertised in GB contains fewer bytes than a user counting in 1024-based units would expect. Similarly, the memory and storage in our devices are commonly reported in gibibytes; major tech companies like Microsoft and IBM also use gibibytes for memory measurements.&lt;/p&gt;
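&lt;p&gt;As a quick sanity check of the arithmetic above, here is a minimal Python sketch (the helper names are mine, purely for illustration) that converts between the two units:&lt;/p&gt;

```python
# Decimal vs. binary byte units.
GB = 10**9    # bytes in a gigabyte (decimal)
GIB = 2**30   # bytes in a gibibyte (binary)

def gb_to_gib(gb):
    """How many GiB a decimal-GB amount really is."""
    return gb * GB / GIB

def gib_to_gb(gib):
    """How many decimal GB a GiB amount really is."""
    return gib * GIB / GB

print(gib_to_gb(1))    # 1 GiB is about 7.4% more bytes than 1 GB
print(gb_to_gib(500))  # ~465.66: what a "500 GB" plan looks like in binary units
```

&lt;p&gt;The same mismatch explains why a drive sold as "500 GB" shows up as roughly 465 GiB in the operating system.&lt;/p&gt;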

</description>
      <category>memoryunits</category>
      <category>gibibytes</category>
      <category>memorystorage</category>
      <category>sanjidh090</category>
    </item>
  </channel>
</rss>
