<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sairaj Chowdhary</title>
    <description>The latest articles on DEV Community by Sairaj Chowdhary (@sairaj_chowdhary_9bdc886a).</description>
    <link>https://dev.to/sairaj_chowdhary_9bdc886a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1692673%2Fcb96b8d5-7fff-467f-b105-0f821deb9b52.png</url>
      <title>DEV Community: Sairaj Chowdhary</title>
      <link>https://dev.to/sairaj_chowdhary_9bdc886a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sairaj_chowdhary_9bdc886a"/>
    <language>en</language>
    <item>
      <title>Unlocking Quantitative Analysis using Python</title>
      <dc:creator>Sairaj Chowdhary</dc:creator>
      <pubDate>Sat, 02 Aug 2025 18:07:39 +0000</pubDate>
      <link>https://dev.to/sairaj_chowdhary_9bdc886a/unlocking-quantitative-analysis-using-python-nib</link>
      <guid>https://dev.to/sairaj_chowdhary_9bdc886a/unlocking-quantitative-analysis-using-python-nib</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m40qy4657x60sgmec6n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m40qy4657x60sgmec6n.jpg" alt=" " width="660" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Unlocking Quantitative Analysis
&lt;/h2&gt;

&lt;p&gt;We recognise that programming is crucial for automating calculations, managing large datasets, and developing models. Python is a great starting language due to its simplicity and powerful libraries, such as NumPy for numerical computations, Pandas for data manipulation, and Matplotlib for visualisation.&lt;br&gt;
Quantitative analysis involves using mathematical and statistical methods to evaluate data, often in fields like finance, economics, or data science.&lt;br&gt;
&lt;strong&gt;So why don’t we check out how they work together?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As problem solvers, we can start with basic statistical analysis, such as calculating the mean and standard deviation of a series of returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
returns = [0.01, -0.02, 0.03, 0.015, -0.005]

mean_return = np.mean(returns)
print(f"Mean Return: {mean_return:.4f}")
std_dev = np.std(returns)
print(f"Standard Deviation: {std_dev:.4f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or consider working with real data from a CSV file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd

data = pd.read_csv('stock_data.csv', parse_dates=['Date'])
data.set_index('Date', inplace=True)

data['Avg'] = data['Price'].rolling(window=5).mean()
print(data.head())

import matplotlib.pyplot as plt
data['Price'].plot(label='Price')
data['Avg'].plot(label='5-Day MA')
plt.legend()
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A rolling average like this helps identify trends in quantitative trading strategies.&lt;/p&gt;

&lt;p&gt;Now let's look at some algorithms that are genuinely useful for every programmer to know, whether or not they work in quant analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monte Carlo Simulation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import matplotlib.pyplot as plt

initial_price = 100
days = 252  # Trading days in a year
mean_return = 0.001  # Daily return
volatility = 0.01  # Daily volatility
simulations = 1000

prices = np.zeros((days, simulations))
prices[0] = initial_price

for t in range(1, days):
    shocks = np.random.normal(mean_return, volatility, simulations)
    prices[t] = prices[t-1] * (1 + shocks)

plt.plot(prices[:, :5])
plt.title('Monte Carlo Simulation of Stock Prices')
plt.show()

# Summarise the distribution of final prices across all simulations.
final_prices = prices[-1]
print(f"Mean Final Price: {np.mean(final_prices):.2f}")
print(f"Std Dev of Final Prices: {np.std(final_prices):.2f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can simulate future stock prices using random walks. This models uncertainty, a technique common in options pricing.&lt;/p&gt;

&lt;p&gt;Recently, while browsing a coding platform, I came across a problem that is well worth solving. Check this one out.&lt;/p&gt;

&lt;p&gt;Question: You are given an array of integers representing daily stock prices. Find the maximum profit you can achieve by buying on one day and selling on a later day. If no profit is possible, return 0.&lt;/p&gt;

&lt;p&gt;Let's look at two examples:&lt;br&gt;
Ex-1&lt;br&gt;
Input: prices = [7,1,5,3,6,4]&lt;br&gt;
Output: 5&lt;br&gt;
Explanation: Buy on day 2 (price = 1) and sell on day 5 (price = 6), profit = 6-1 = 5.&lt;br&gt;
Note that buying on day 2 and selling on day 1 is not allowed because you must buy before you sell.&lt;br&gt;
Ex-2&lt;br&gt;
Input: prices = [7,6,4,3,1]&lt;br&gt;
Output: 0&lt;br&gt;
Explanation: In this case, no transactions are done and the max profit = 0.&lt;/p&gt;

&lt;p&gt;And the constraints:&lt;br&gt;
1 &amp;lt;= prices.length &amp;lt;= 10^5&lt;br&gt;
0 &amp;lt;= prices[i] &amp;lt;= 10^4&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My solution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def maxProfit(self, prices: List[int]) -&amp;gt; int:
        min_price = math.inf
        max_profit = 0
        for price in prices:
            min_price = min(min_price, price)
            potential_profit = price - min_price
            max_profit = max(max_profit, potential_profit)
        return max_profit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, as we iterate through the prices list, min_price = min(min_price, price) keeps a running record of the lowest price encountered so far,&lt;br&gt;
and max_profit = max(max_profit, potential_profit) checks whether selling at today's price beats the best profit achievable on any earlier day.&lt;br&gt;
It's a single pass over the list, essentially a greedy approach, so the time complexity is O(N) with O(1) extra space.&lt;/p&gt;
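
&lt;p&gt;To sanity-check the solution against the two examples above, here is the same logic as a standalone function (a minimal sketch with the self parameter dropped):&lt;/p&gt;

```python
import math

def maxProfit(prices):
    # Track the cheapest buy price seen so far and the best profit so far.
    min_price = math.inf
    max_profit = 0
    for price in prices:
        min_price = min(min_price, price)
        max_profit = max(max_profit, price - min_price)
    return max_profit

print(maxProfit([7, 1, 5, 3, 6, 4]))  # 5: buy at 1, sell at 6
print(maxProfit([7, 6, 4, 3, 1]))     # 0: prices only fall, so never trade
```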

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>python</category>
    </item>
    <item>
      <title>How I would start Quantitative Analysis from scratch.</title>
      <dc:creator>Sairaj Chowdhary</dc:creator>
      <pubDate>Wed, 16 Jul 2025 06:27:19 +0000</pubDate>
      <link>https://dev.to/sairaj_chowdhary_9bdc886a/how-i-would-start-quantitative-analysis-from-scratch-429i</link>
      <guid>https://dev.to/sairaj_chowdhary_9bdc886a/how-i-would-start-quantitative-analysis-from-scratch-429i</guid>
      <description>&lt;p&gt;Beyond the Random Walk:&lt;br&gt;
If you've ever tried to learn quantitative analysis, you've likely been met with a wall of Greek letters and concepts like Brownian motion, Ito's Lemma, and autoregressive models. The traditional approach is rooted in a single, core idea: financial markets are, for the most part, a "random walk." The goal, then, is to find the tiny, fleeting signals hidden within that noise.&lt;/p&gt;

&lt;p&gt;This is a valid approach, but it's not the only one. And for a beginner, it can be mathematically daunting and intuitively unsatisfying.&lt;/p&gt;

&lt;p&gt;What if, instead of starting with randomness, you started with information? What if, instead of looking for linear patterns, you looked for the hidden shape of the market? This article proposes a different starting point for your quant journey, one that leverages unique, powerful concepts that are more intuitive and, arguably, more fundamental.&lt;/p&gt;

&lt;p&gt;Ditch the Brownian Motion, Embrace the Surprise: Starting with Information Theory&lt;br&gt;
Traditional finance models the path. Information theory models the surprise at each step of that path. The core idea was developed by Claude Shannon, the father of the information age, and it's called Entropy.&lt;/p&gt;

&lt;p&gt;In simple terms, Shannon Entropy is a measure of uncertainty or surprise.&lt;/p&gt;

&lt;p&gt;A predictable event (a biased coin that always lands on heads) has zero entropy. There is no surprise.&lt;/p&gt;

&lt;p&gt;A completely unpredictable event (a fair coin flip) has maximum entropy.&lt;/p&gt;

&lt;p&gt;Instead of modelling price, what if we modelled the information content of market data?&lt;/p&gt;
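
&lt;p&gt;To make the idea concrete, here is a minimal sketch of Shannon entropy in Python; the coin distributions are purely illustrative:&lt;/p&gt;

```python
import numpy as np

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution; zero terms are skipped."""
    p = np.asarray(probs, dtype=float)
    p = p[np.nonzero(p)]  # drop zero-probability outcomes (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)) + 0.0)  # + 0.0 canonicalises -0.0

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit, maximum surprise
print(shannon_entropy([1.0, 0.0]))  # coin that always lands heads: 0.0 bits
print(shannon_entropy([0.9, 0.1]))  # biased coin: about 0.47 bits
```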

&lt;p&gt;A Unique Algorithm: Order Flow Entropy&lt;br&gt;
Price is a lagging indicator; it's the result of thousands of buy and sell orders. The real information lies in the order flow. Imagine you could see the stream of buy and sell orders for a stock.&lt;/p&gt;

&lt;p&gt;High Entropy State: Buyers and sellers are evenly matched. The flow is chaotic and unpredictable. The next move is uncertain.&lt;/p&gt;

&lt;p&gt;Low Entropy State: A huge wave of buy orders suddenly floods the market. The system becomes highly predictable for a short time. The uncertainty collapses. There is less "surprise."&lt;/p&gt;

&lt;p&gt;The Hypothesis: A sudden, sharp drop in the entropy of order flow could be a powerful signal that a large, informed entity is making a move, creating a predictable pattern just before a significant price change. You are no longer predicting price based on its past; you are predicting price based on a measurable change in the market's underlying information structure.&lt;/p&gt;

&lt;p&gt;This concept is more direct. It's about measuring the conviction of market participants, not just the noisy outcome of their actions.&lt;/p&gt;
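
&lt;p&gt;Real order-flow data isn't freely available, so here is an illustrative sketch: we simulate a stream of buy/sell signs and watch the rolling entropy collapse when a one-sided wave arrives. All the numbers here are invented for illustration:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative order flow: 1 = buy, 0 = sell. Balanced at first, then a
# one-sided wave of buys floods in (the hypothetical "informed entity").
balanced = rng.integers(0, 2, size=500)
one_sided = rng.choice([0, 1], size=100, p=[0.05, 0.95])
flow = np.concatenate([balanced, one_sided])

def binary_entropy(p):
    """Entropy in bits of a buy/sell split with buy probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

window = 50
rolling = [binary_entropy(float(np.mean(flow[i : i + window])))
           for i in range(len(flow) - window + 1)]

print(f"entropy in the balanced regime: {rolling[0]:.3f}")   # near 1 bit
print(f"entropy after the buy wave:     {rolling[-1]:.3f}")  # collapses
```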

&lt;p&gt;Finding the "Shape" of Market Fear: An Introduction to Topological Data Analysis (TDA)&lt;br&gt;
The second unconventional tool is Topological Data Analysis (TDA). While standard statistics is great at finding lines, clusters, and correlations, it struggles with more complex structures. TDA is a field of mathematics that studies the "shape" of data, looking for things like loops, voids, and other intricate structures.&lt;/p&gt;

&lt;p&gt;The core algorithm here is Persistent Homology.&lt;/p&gt;

&lt;p&gt;Imagine your data is a cloud of points in space. Persistent homology works by "inflating" a ball around each data point. As the balls grow, they start to connect, forming shapes. Some of these shapes are fleeting, but others persist for a long time. TDA tracks how long these topological features—like connected components, loops, and voids—last.&lt;/p&gt;
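
&lt;p&gt;Production TDA work would use a dedicated library such as Ripser or giotto-tda; as a minimal illustration of the "inflating balls" idea, here is a numpy-only sketch of the 0-dimensional case, where the tracked features are connected components:&lt;/p&gt;

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """0-dimensional persistent homology of a point cloud.

    As the balls around the points inflate, components merge; each merge
    "kills" one component. Returns the list of death radii in merge order.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Every pairwise distance is a candidate merge radius; process smallest first.
    edges = sorted(
        (float(np.linalg.norm(np.subtract(points[a], points[b]))), a, b)
        for a, b in combinations(range(n), 2)
    )
    deaths = []
    for dist, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb      # two components merge at this radius
            deaths.append(dist)  # one component dies here
    return deaths

# Two well-separated clusters: three merges die early (within clusters),
# and one feature persists until the clusters finally touch.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(h0_persistence(cloud)[-1])  # ~7.0, the long-lived inter-cluster merge
```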

&lt;p&gt;A Unique Application: The Shape of Systemic Risk&lt;br&gt;
How can we use this? Instead of a 2D plot of price vs. time, imagine a 3D point cloud where each point represents a single trading day, defined by:&lt;/p&gt;

&lt;p&gt;X-axis: S&amp;amp;P 500 daily return&lt;/p&gt;

&lt;p&gt;Y-axis: VIX Index level (a measure of volatility or "fear")&lt;/p&gt;

&lt;p&gt;Z-axis: 10-Year US Treasury yield change&lt;/p&gt;

&lt;p&gt;The Hypothesis:&lt;br&gt;
In a "normal" market, the shape of this data cloud might be a dense, formless blob.&lt;br&gt;
In the lead-up to a financial crisis, the data points might start to form a distinct loop or void. This shape could represent a dangerous feedback loop where high volatility (high VIX) is persistently associated with negative returns and a flight to safety (falling bond yields). The market gets "stuck" in this topological feature.&lt;br&gt;
This "shape of fear" is a multi-dimensional pattern that simple correlation analysis would completely miss. Detecting the formation and persistence of such a shape could be a powerful, non-obvious indicator of systemic risk.&lt;br&gt;
Your First "Unconventional" Quant Project&lt;br&gt;
You don't need a PhD to start exploring these ideas. Here’s a simple roadmap:&lt;/p&gt;

&lt;p&gt;The Mindset Shift: Your goal isn't to predict the price of Apple tomorrow. Your goal is to measure a property of the market itself. Is the market's information content changing? Is its "shape" deforming?&lt;br&gt;
Gather Data: Get free daily data from Yahoo Finance for the SPY (S&amp;amp;P 500 ETF) and the ^VIX (Volatility Index).&lt;/p&gt;

&lt;p&gt;Calculate "Surprise": Write a simple Python script to calculate the daily percentage change of the VIX. Then, create a histogram of these changes. Is it a perfect bell curve, or is it skewed? A skewed distribution implies that "surprise" is not symmetrical—large spikes in fear are more common than large drops. This is your first, simple information-theoretic insight.&lt;/p&gt;

&lt;p&gt;Visualise the "Shape": Create a 2D scatter plot of (SPY daily return, VIX daily change). Now, create this plot for different years. Compare the shape of the data cloud for a calm year (like 2017) versus a volatile year (like 2008 or 2020). You will physically see the shape of the market change. This is the core intuition behind TDA.&lt;/p&gt;

&lt;p&gt;Therefore, starting your quantitative analysis journey with these concepts has a distinct advantage. It forces you to think about the market not as a series of numbers to be fit by an equation, but as a complex system with its own structure, information flow, and geometry.&lt;/p&gt;

&lt;p&gt;It's a more challenging, but ultimately more rewarding, path. Please stop trying to predict the random walk and start trying to understand the shape of the system itself.&lt;/p&gt;

</description>
      <category>quantitative</category>
      <category>analyst</category>
      <category>quant</category>
      <category>statistics</category>
    </item>
  </channel>
</rss>
