<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: bitanath</title>
    <description>The latest articles on DEV Community by bitanath (@bitanath).</description>
    <link>https://dev.to/bitanath</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3890087%2Fa373f324-c2b8-464f-940b-e75c2c3e8d42.png</url>
      <title>DEV Community: bitanath</title>
      <link>https://dev.to/bitanath</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bitanath"/>
    <language>en</language>
    <item>
      <title>Principal Components in TypeScript (Part 2)</title>
      <dc:creator>bitanath</dc:creator>
      <pubDate>Tue, 21 Apr 2026 05:14:41 +0000</pubDate>
      <link>https://dev.to/bitanath/principal-components-in-typescript-part-2-1j54</link>
      <guid>https://dev.to/bitanath/principal-components-in-typescript-part-2-1j54</guid>
      <description>&lt;p&gt;This is part two of a series Principal Components in TypeScript&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If you need a TL;DR, just read the code here:&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/pca-js?activeTab=code" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/pca-js?activeTab=code&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Not a Code Blog
&lt;/h2&gt;

&lt;p&gt;This is not a code blog. There’s no easy copy-paste solution here.&lt;br&gt;
If that’s what you want, go straight to the source code above.&lt;/p&gt;


&lt;h2&gt;
  
  
  Step 1: Normalize the Data
&lt;/h2&gt;

&lt;p&gt;Our first step is to normalize the data by subtracting the column means.&lt;/p&gt;

&lt;p&gt;An elegant way to do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a square matrix of ones (one row and column per data row)&lt;/li&gt;
&lt;li&gt;Multiply it with the data&lt;/li&gt;
&lt;li&gt;Scale by &lt;code&gt;1 / number_of_rows&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives you the &lt;strong&gt;mean matrix&lt;/strong&gt;, which you subtract from the original data.&lt;/p&gt;

&lt;p&gt;The result of that subtraction is formally known as the &lt;strong&gt;Deviation Matrix&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Example Code
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;unit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;unitSquareMatrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;deviationMatrix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;subtract&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;multiplyAndScale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;unit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;D&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;deviationMatrix&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
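&lt;p&gt;Two of the helpers above never get shown. Here's a minimal sketch of what &lt;code&gt;unitSquareMatrix&lt;/code&gt; and &lt;code&gt;subtract&lt;/code&gt; could look like (names taken from the call sites; my sketch, not necessarily the library's exact implementation):&lt;/p&gt;

```typescript
type Matrix = number[][];

// n×n matrix of ones; multiplied with the data and scaled by 1/n,
// it turns every row into the vector of column means
function unitSquareMatrix(n: number): Matrix {
  return Array.from({ length: n }, () => Array(n).fill(1));
}

// element-wise difference of two same-shaped matrices
function subtract(a: Matrix, b: Matrix): Matrix {
  return a.map((row, i) => row.map((v, j) => v - b[i][j]));
}
```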


&lt;p&gt;Where &lt;code&gt;multiplyAndScale&lt;/code&gt; fuses matrix multiplication and scaling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="cm"&gt;/**
 * Fix for #11, OOM on moderately large datasets, fuses scale and multiply into a single operation to save memory
 */&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;multiplyAndScale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Matrix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Matrix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;factor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Matrix&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;assertValidMatrices&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;a&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;b&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;aRows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;aCols&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;flat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Float64Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;aRows&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;aRows&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;aCols&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;aVal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;factor&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;iOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;iOffset&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;aVal&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Matrix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;aRows&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subarray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;bCols&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  But… Is This Optimal?
&lt;/h3&gt;

&lt;p&gt;Not really.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triple loop → &lt;strong&gt;O(n³)&lt;/strong&gt; worst case&lt;/li&gt;
&lt;li&gt;Can still cause memory issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simpler approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;row&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The library prefers &lt;strong&gt;elegance over optimization&lt;/strong&gt;, assuming eventual GPU acceleration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Deviation Scores
&lt;/h2&gt;

&lt;p&gt;Next step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dᵀ @ D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Multiply transpose of deviation matrix with itself&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@&lt;/code&gt; = matrix multiplication (Python notation)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then divide by the number of rows → the &lt;strong&gt;Variance-Covariance Matrix&lt;/strong&gt; (use n − 1 instead if you want the sample covariance)&lt;/p&gt;
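&lt;p&gt;Spelled out in TypeScript (a sketch over plain nested arrays, with &lt;code&gt;D&lt;/code&gt; the deviation matrix from Step 1):&lt;/p&gt;

```typescript
type Matrix = number[][];

// covariance = (Dᵀ @ D) / n, where n is the number of rows
// cov[i][j] = mean over rows of D[row][i] * D[row][j]
function covarianceMatrix(D: Matrix): Matrix {
  const n = D.length;
  const cols = D[0].length;
  return Array.from({ length: cols }, (_, i) =>
    Array.from({ length: cols }, (_, j) =>
      D.reduce((s, row) => s + row[i] * row[j], 0) / n
    )
  );
}
```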




&lt;h2&gt;
  
  
  Why Are We Doing This?
&lt;/h2&gt;

&lt;p&gt;Because raw data, left as-is, is nearly useless for analysis.&lt;/p&gt;

&lt;p&gt;We reshape it into a form that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has structure&lt;/li&gt;
&lt;li&gt;Encodes relationships&lt;/li&gt;
&lt;li&gt;Can be mathematically decomposed&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Variance-Covariance Matrix
&lt;/h2&gt;

&lt;p&gt;For 3 features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;var(f&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;var(f&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cov(f&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;var(f&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Diagonal → variance&lt;/li&gt;
&lt;li&gt;Off-diagonal → covariance&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What is SVD?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Singular Value Decomposition&lt;/strong&gt; factors a matrix A as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SVD = U * Σ * Vᵀ
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;U&lt;/strong&gt; → row characteristics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;V&lt;/strong&gt; → column characteristics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Σ (Sigma)&lt;/strong&gt; → importance (weights)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For PCA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We care about &lt;strong&gt;columns&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;So:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;V → eigenvectors&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Σ → eigenvalues&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
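&lt;p&gt;pca-js gets V and Σ from a full SVD routine, but for intuition about where eigenvectors come from, here's a sketch of &lt;em&gt;power iteration&lt;/em&gt; (not the library's algorithm): it recovers the dominant eigenpair of a symmetric matrix such as our covariance matrix.&lt;/p&gt;

```typescript
// power iteration: repeatedly apply the matrix to a vector until it
// settles into the dominant eigenvector; since the vector is kept at
// unit length, the stretch factor |Mv| is the dominant eigenvalue
function dominantEigenpair(m: number[][], iters = 200) {
  const apply = (v: number[]) =>
    m.map(row => row.reduce((s, x, j) => s + x * v[j], 0));
  let v = m.map(() => 1 + Math.random()); // strictly positive start vector
  Array.from({ length: iters }).forEach(() => {
    const w = apply(v);
    const norm = Math.hypot(...w);
    v = w.map(x => x / norm);
  });
  return { value: Math.hypot(...apply(v)), vector: v };
}
```

Repeating this on the matrix with the found component removed (deflation) yields the remaining eigenpairs one by one; a real SVD routine is just a much better-behaved way to get them all at once.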




&lt;h2&gt;
  
  
  What Are Eigenvectors?
&lt;/h2&gt;

&lt;p&gt;“Eigen” = German for “own” (yes really)&lt;/p&gt;

&lt;p&gt;Think of eigenvectors as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Characteristic directions in your data&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eigenvalues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tell you &lt;strong&gt;how important&lt;/strong&gt; each direction is&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Choosing Components
&lt;/h2&gt;

&lt;p&gt;Sort eigenvalues in descending order.&lt;/p&gt;

&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;percentage_explained = Σ(selected eigenvalues) / Σ(all eigenvalues)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Pick top 1 → decent approximation&lt;/li&gt;
&lt;li&gt;Pick top 2 → better&lt;/li&gt;
&lt;li&gt;Pick all → original data&lt;/li&gt;
&lt;/ul&gt;
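&lt;p&gt;That formula, spelled out (assuming the eigenvalues are already sorted in descending order):&lt;/p&gt;

```typescript
// fraction of total variance captured by the top k eigenvalues
function percentageExplained(eigenvalues: number[], k: number): number {
  const total = eigenvalues.reduce((s, v) => s + v, 0);
  const selected = eigenvalues.slice(0, k).reduce((s, v) => s + v, 0);
  return selected / total;
}
```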

&lt;p&gt;Usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first component explains the bulk of the variance, around 80% (the Pareto principle at work)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Compressed Data
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;compressed = selected_eigenvectors × centered_dataᵀ
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Original: 4 columns × 30 rows&lt;/li&gt;
&lt;li&gt;Reduced: 2 columns × 30 rows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now scale that to millions of rows 👀&lt;/p&gt;
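&lt;p&gt;In code, that projection is a single matrix multiply. A sketch (shapes assumed: &lt;code&gt;eigenvectors&lt;/code&gt; holds the selected components as rows, &lt;code&gt;centered&lt;/code&gt; is the mean-subtracted data):&lt;/p&gt;

```typescript
type Matrix = number[][];

// compressed[c][r] = dot(eigenvector c, centered row r)
// shape goes from (rows × features) down to (components × rows)
function compress(eigenvectors: Matrix, centered: Matrix): Matrix {
  return eigenvectors.map(ev =>
    centered.map(row => row.reduce((s, v, j) => s + v * ev[j], 0))
  );
}
```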




&lt;h2&gt;
  
  
  Reconstructing Data
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;original ≈ selected_eigenvectorsᵀ × compressed + mean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This is &lt;strong&gt;lossy compression&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Some information is gone forever&lt;/li&gt;
&lt;/ul&gt;
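&lt;p&gt;The reverse trip, sketched under the same assumed shapes (&lt;code&gt;eigenvectors&lt;/code&gt; as rows, &lt;code&gt;compressed&lt;/code&gt; as components × rows; my sketch, not the pca-js API):&lt;/p&gt;

```typescript
type Matrix = number[][];

// original ≈ selected_eigenvectorsᵀ × compressed + mean, one output row
// per compressed column; any dropped component is gone for good
function reconstruct(eigenvectors: Matrix, compressed: Matrix, means: number[]): Matrix {
  return compressed[0].map((_, r) =>
    means.map((mean, f) =>
      eigenvectors.reduce((s, ev, c) => s + ev[f] * compressed[c][r], mean)
    )
  );
}
```

Notice how any feature with zero weight in the kept components just snaps back to its mean: that is the lossiness, made visible.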




&lt;h2&gt;
  
  
  What Did We Actually Achieve?
&lt;/h2&gt;

&lt;p&gt;Compression… but let’s make it real.&lt;/p&gt;




&lt;h2&gt;
  
  
  Example: Student Scores
&lt;/h2&gt;

&lt;p&gt;You’re a teacher. Three exams, varying difficulty.&lt;/p&gt;

&lt;h3&gt;
  
  
  Raw Data
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Student&lt;/th&gt;
&lt;th&gt;Exam 1&lt;/th&gt;
&lt;th&gt;Exam 2&lt;/th&gt;
&lt;th&gt;Exam 3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;70&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;70&lt;/td&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Means
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exam 1&lt;/th&gt;
&lt;th&gt;Exam 2&lt;/th&gt;
&lt;th&gt;Exam 3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;55&lt;/td&gt;
&lt;td&gt;62.5&lt;/td&gt;
&lt;td&gt;72.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Exam 1 = hardest&lt;/li&gt;
&lt;li&gt;Student 3 dominates&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Simple Averages
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Student&lt;/th&gt;
&lt;th&gt;Avg Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;50.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;60.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;80.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;63.33&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Student 2 vs 4 is unclear&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  PCA Results
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;PC&lt;/th&gt;
&lt;th&gt;Eigenvalue&lt;/th&gt;
&lt;th&gt;Eigenvector&lt;/th&gt;
&lt;th&gt;% Variance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PC1&lt;/td&gt;
&lt;td&gt;520.1&lt;/td&gt;
&lt;td&gt;[0.74, 0.28, 0.60]&lt;/td&gt;
&lt;td&gt;84.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PC2&lt;/td&gt;
&lt;td&gt;78.1&lt;/td&gt;
&lt;td&gt;[0.23, 0.74, -0.63]&lt;/td&gt;
&lt;td&gt;12.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PC3&lt;/td&gt;
&lt;td&gt;18.5&lt;/td&gt;
&lt;td&gt;[0.63, -0.61, -0.48]&lt;/td&gt;
&lt;td&gt;3.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We take &lt;strong&gt;PC1&lt;/strong&gt; (Pareto win).&lt;/p&gt;




&lt;h2&gt;
  
  
  Construct New Score
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Score = (Exam1 × 0.74) + (Exam2 × 0.28) + (Exam3 × 0.60)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interpretation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A normalized “true ability” score&lt;/li&gt;
&lt;li&gt;It captures ~84.3% of the variance&lt;/li&gt;
&lt;/ul&gt;
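&lt;p&gt;A quick sanity check of that weighted score on the centered data (PC1 weights from the table above; the published final scores are on a different scale, so only the ordering matters here):&lt;/p&gt;

```typescript
const exams = [
  [40, 50, 60],
  [50, 70, 60],
  [80, 70, 90],
  [50, 60, 80],
];
const means = [55, 62.5, 72.5];
const pc1 = [0.74, 0.28, 0.6];

// project each centered row onto PC1
const scores = exams.map(row =>
  row.reduce((s, v, j) => s + (v - means[j]) * pc1[j], 0)
);
// Student 3 comes out on top, and Student 4 edges out Student 2
```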




&lt;h2&gt;
  
  
  Final Scores
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Student&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;31&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;84&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;53&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Student 4 &amp;gt; Student 2 (consistency across difficulty)&lt;/li&gt;
&lt;li&gt;Student 3 = clearly top&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Reality Check
&lt;/h2&gt;

&lt;p&gt;Yes… this is a bit hand-wavy.&lt;/p&gt;

&lt;p&gt;That’s why PCA is mainly used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Dimensionality reduction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Not deep semantic interpretation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example issue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Physics vs Chemistry vs Math&lt;/li&gt;
&lt;li&gt;Hard to interpret combined score meaningfully&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Where This Goes Next
&lt;/h2&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collapsing variables blindly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We move toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Interpretable components&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Naming eigenvectors based on weights&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;Before that… we go to Part 3.&lt;/p&gt;

&lt;p&gt;We’ll answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is a neural network really doing?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If GPT = brain&lt;br&gt;
Then ConvNet = eyes 👀&lt;/p&gt;




&lt;h2&gt;
  
  
  Outro
&lt;/h2&gt;

&lt;p&gt;See you in Part 3…&lt;/p&gt;

&lt;p&gt;Or don’t. idc 😄&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Principal Components in TypeScript (Part 1)</title>
      <dc:creator>bitanath</dc:creator>
      <pubDate>Tue, 21 Apr 2026 05:11:29 +0000</pubDate>
      <link>https://dev.to/bitanath/principal-components-in-typescript-part-1-3bpp</link>
      <guid>https://dev.to/bitanath/principal-components-in-typescript-part-1-3bpp</guid>
      <description>&lt;p&gt;This is part 1 of a four part post on Principal Components in TypeScript that accompanies my package on npm&lt;/p&gt;

&lt;h2&gt;
  
  
  Hello World
&lt;/h2&gt;

&lt;p&gt;Hello World, and welcome to this series on how to determine principal components in &lt;strong&gt;TypeScript&lt;/strong&gt; (well, pretty much any language, really).&lt;/p&gt;

&lt;p&gt;This is a long-form blog series written by me to provide insights on my package:&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/pca-js" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/pca-js&lt;/a&gt; (with &amp;gt;2k downloads per week)&lt;/p&gt;

&lt;p&gt;I’ll split this up into multiple sections, and we will explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Determining Principal Components&lt;/li&gt;
&lt;li&gt;More importantly… how to use these to derive actual insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A college textbook&lt;/li&gt;
&lt;li&gt;An AI-generated post&lt;/li&gt;
&lt;li&gt;Some random SEO grab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So expect a &lt;strong&gt;LOT of personality&lt;/strong&gt;, and a strong focus on the &lt;em&gt;whys and wherefores&lt;/em&gt; rather than just the &lt;em&gt;hows&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Also, this blog will probably be scraped by bots—so it’s about time they too understood what the hell principal components are. &lt;code&gt;/s&lt;/code&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Should You Care?
&lt;/h2&gt;

&lt;p&gt;First off… why &lt;em&gt;should&lt;/em&gt; you care?&lt;/p&gt;

&lt;p&gt;There isn’t any strong reason really—but if you like &lt;strong&gt;elegant solutions to tough problems&lt;/strong&gt;, you’re in the right place.&lt;/p&gt;

&lt;p&gt;This isn’t for absolute beginners. If you’re here, chances are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’ve tried implementing PCA at least once&lt;/li&gt;
&lt;li&gt;You’ve looked into dimensionality reduction&lt;/li&gt;
&lt;li&gt;A professor forced you to&lt;/li&gt;
&lt;li&gt;You read a paper and thought &lt;em&gt;“huh?”&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  When Should You Use PCA?
&lt;/h2&gt;

&lt;p&gt;Below are some very valid reasons to run &lt;strong&gt;Principal Component Analysis&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Too Many Columns
&lt;/h3&gt;

&lt;p&gt;You have more columns than you can realistically interpret.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Columns = dimensions&lt;/li&gt;
&lt;li&gt;Rows = records&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve got too many dimensions → you’ve got a problem.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Finding Hidden Relationships
&lt;/h3&gt;

&lt;p&gt;You want to uncover &lt;strong&gt;latent relationships&lt;/strong&gt; in your data—quickly.&lt;/p&gt;

&lt;p&gt;Without:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guessing clusters&lt;/li&gt;
&lt;li&gt;Random centroid hunting&lt;/li&gt;
&lt;li&gt;Wondering what’s even happening&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Elegant Code
&lt;/h3&gt;

&lt;p&gt;You want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write fewer lines of code&lt;/li&gt;
&lt;li&gt;Still extract meaningful insights&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Series Structure
&lt;/h2&gt;

&lt;p&gt;Here’s how this series will be structured:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Part 1&lt;/strong&gt; – Why you should care and what you want to achieve&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 2&lt;/strong&gt; – The heart of the problem: &lt;em&gt;Singular Value Decomposition&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 3&lt;/strong&gt; – Generating insights from neural network features&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 4&lt;/strong&gt; – Hidden Factor Analysis&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Where Can You Use PCA?
&lt;/h2&gt;

&lt;p&gt;Now that we’ve (poorly) covered the &lt;em&gt;why&lt;/em&gt;, let’s look at the &lt;em&gt;where&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here are some use cases—from tabular data to images—all in TypeScript (for no particularly good reason other than deployment 😄).&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Pure Dimensionality Reduction (Data Compression)
&lt;/h3&gt;

&lt;p&gt;You want to compress your data and &lt;strong&gt;don’t care about interpretability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 billion variables → reduce to 1 million&lt;/li&gt;
&lt;li&gt;Meaning is lost&lt;/li&gt;
&lt;li&gt;But data becomes easier to transmit&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Dimensionality Reduction with Insights
&lt;/h3&gt;

&lt;p&gt;Same as above—but now the reduced variables actually &lt;strong&gt;mean something&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sugar, Fat, Oil → combined into &lt;strong&gt;“Unhealthy”&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Instead of analyzing 3 columns → analyze 1&lt;/li&gt;
&lt;li&gt;Easier correlation (e.g., cholesterol levels)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Different Data Types (e.g., Images)
&lt;/h3&gt;

&lt;p&gt;So far we’ve discussed tabular data—but PCA also works on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multichannel images&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Neural network feature maps&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feature maps from a convolutional network&lt;/li&gt;
&lt;li&gt;Often with far more channels than spatial pixels (width × height)&lt;/li&gt;
&lt;li&gt;Reduce them into a single image showing &lt;strong&gt;attention concentration&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
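&lt;p&gt;The trick: treat every pixel as a row and every channel as a column, then project each pixel onto a single direction (the first component). A toy sketch with hypothetical shapes (a real feature map would come out of the network):&lt;/p&gt;

```typescript
// featureMap[pixel][channel]: H*W rows, C columns
// projecting each pixel's channel vector onto one direction
// collapses C channels into a single "heat" value per pixel
function collapseChannels(featureMap: number[][], direction: number[]): number[] {
  return featureMap.map(px =>
    px.reduce((s, v, c) => s + v * direction[c], 0)
  );
}
```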




&lt;h2&gt;
  
  
  Feeling Lost?
&lt;/h2&gt;

&lt;p&gt;Don’t worry if this didn’t fully click—I don’t fully get it either 😄&lt;br&gt;
This is just a warm-up for future posts.&lt;/p&gt;

&lt;p&gt;If you completely checked out halfway through:&lt;br&gt;
Just read the code here:&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/pca-js?activeTab=code" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/pca-js?activeTab=code&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Onto Part 2!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Or… close this tab and move on with your day.&lt;br&gt;
Whatever. I don’t care 😄&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
