<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adeoye Malumi</title>
    <description>The latest articles on DEV Community by Adeoye Malumi (@oyebobs).</description>
    <link>https://dev.to/oyebobs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1436441%2Faa047a2b-d84d-4a76-9c89-5082c6029c97.jpeg</url>
      <title>DEV Community: Adeoye Malumi</title>
      <link>https://dev.to/oyebobs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oyebobs"/>
    <language>en</language>
    <item>
      <title>Predicting Tomorrow's Tremors: A Machine Learning Approach to Earthquake Nowcasting in California</title>
      <dc:creator>Adeoye Malumi</dc:creator>
      <pubDate>Thu, 03 Jul 2025 12:56:11 +0000</pubDate>
      <link>https://dev.to/oyebobs/predicting-tomorrows-tremors-a-machine-learning-approach-to-earthquake-nowcasting-in-california-l0i</link>
      <guid>https://dev.to/oyebobs/predicting-tomorrows-tremors-a-machine-learning-approach-to-earthquake-nowcasting-in-california-l0i</guid>
      <description>&lt;p&gt;Earthquakes are a constant, terrifying reality, especially in tectonically active zones like California. While pinpointing the exact time and location of a future quake remains one of science's grand challenges, the concept of earthquake nowcasting offers a pragmatic alternative: assessing the current probability of a significant event happening within a near-term window.&lt;/p&gt;

&lt;p&gt;This article walks through the entire journey of building and deploying a machine learning model designed to nowcast the likelihood of Magnitude 6.0+ earthquakes in California within a 30-day horizon. I'll cover everything from robust data acquisition to feature engineering, model training, and the practicalities of deployment.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;1. The Bedrock: Data Acquisition&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Every data-driven project starts with data. For me, this meant building a comprehensive historical catalog of seismic events in California.&lt;/p&gt;

&lt;p&gt;I leveraged the ObsPy library to interact with the USGS FDSN client. My goal was to gather all earthquakes of Magnitude 2.0 or greater (M2+) within a specific region (32.0°N to 42.0°N latitude, 125.0°W to 114.0°W longitude) from 1990 to the present day.&lt;/p&gt;

&lt;p&gt;One of the initial hurdles was dealing with potential API request limits when trying to fetch decades of data at once. To overcome this, I implemented a robust chunking mechanism. Instead of one massive request, I'd iteratively fetch data in smaller time windows (starting with years, then recursively breaking down into months, weeks, or even days if a chunk proved too large). This ensured I could reliably acquire the entire historical catalog without hitting service caps. The collected data was then saved locally as a CSV for efficient reuse.&lt;/p&gt;
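&lt;p&gt;The chunking idea can be sketched generically, independent of ObsPy. In this minimal sketch, fetch is any callable (for us, a wrapper around the client's get_events) that raises when the service rejects an oversized request; the halving-and-recursing logic is the piece the pipeline relies on:&lt;/p&gt;

```python
# A sketch of the recursive chunked download: try the whole window,
# and on failure split it in half and recurse down to min_span.
def fetch_in_chunks(fetch, start, end, min_span=1.0):
    """Fetch events for [start, end); on failure, halve the window and
    recurse, stopping the split at windows of min_span."""
    try:
        return fetch(start, end)
    except Exception:
        span = end - start
        if min_span >= span:
            raise  # window is already tiny: a real outage, not a size cap
        mid = start + span / 2.0
        return (fetch_in_chunks(fetch, start, mid, min_span) +
                fetch_in_chunks(fetch, mid, end, min_span))
```

&lt;p&gt;In the real pipeline, fetch wraps client.get_events with the study-region bounds and the M2+ floor; ObsPy Catalog objects concatenate with + just like the lists used here.&lt;/p&gt;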

&lt;h2&gt;&lt;strong&gt;2. Sculpting Signals: Feature Engineering&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Raw earthquake event lists aren't directly useful for machine learning. The magic happens in feature engineering: transforming this raw data into meaningful numerical representations that the model can learn from.&lt;/p&gt;

&lt;p&gt;For each sliding time window, I calculated a rich set of features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regional Features (Across the Entire California Study Area)&lt;/strong&gt;: These capture the overall seismic state of the broader region:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seismicity Rate&lt;/strong&gt;: The total number of events within the window.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;b-value&lt;/strong&gt;: A critical seismological parameter indicating the ratio of small to large earthquakes. A decrease in b-value can sometimes precede larger events, suggesting increased stress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Magnitude Statistics&lt;/strong&gt;: Mean, standard deviation, and maximum magnitude.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inter-Event Time Statistics&lt;/strong&gt;: Mean and coefficient of variation of the time between successive earthquakes. Irregularity (high CV) might be a signal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Depth Statistics&lt;/strong&gt;: Mean and standard deviation of earthquake depths.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
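&lt;p&gt;For concreteness, one standard way to estimate the b-value is Aki's maximum-likelihood formula for the Gutenberg-Richter relation. A minimal sketch, assuming an M2.0 completeness cutoff to match our catalog (treat the estimator choice here as illustrative):&lt;/p&gt;

```python
# Aki's maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc),
# using only events at or above the completeness magnitude Mc.
import math

def b_value(magnitudes, completeness_mag=2.0):
    mags = [m for m in magnitudes if m >= completeness_mag]
    if len(mags) == 0:
        return None  # no usable events in this window
    mean_mag = sum(mags) / len(mags)
    if mean_mag == completeness_mag:
        return None  # degenerate: every event sits exactly at the cutoff
    return math.log10(math.e) / (mean_mag - completeness_mag)
```

&lt;p&gt;A lower mean magnitude above the cutoff yields a higher b-value (relatively more small quakes), which is why a falling b-value can hint at a shifting stress regime.&lt;/p&gt;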

&lt;h3&gt;Spatial Features (Per Grid Cell)&lt;/h3&gt;

&lt;p&gt;To capture localized patterns, I divided the entire California region into a grid of 0.5-degree by 0.5-degree cells. For each cell that contained at least 3 events within the window (our MIN_EVENTS_PER_CELL threshold, which keeps the per-cell statistics meaningful), I calculated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Local Seismicity Rate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Local b-value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Local Mean Magnitude&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
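&lt;p&gt;A hypothetical sketch of the per-cell aggregation (the binning helper and the (lat, lon, magnitude) tuple layout are illustrative, not the project's exact code):&lt;/p&gt;

```python
# Bin events into 0.5-degree cells and keep features only for cells
# that meet the 3-event minimum.
from collections import defaultdict

CELL_SIZE = 0.5
MIN_EVENTS_PER_CELL = 3

def cell_features(events):
    """events: iterable of (lat, lon, magnitude) tuples."""
    cells = defaultdict(list)
    for lat, lon, mag in events:
        # Floor-divide to a (row, col) cell index; works for negative lons.
        key = (int(lat // CELL_SIZE), int(lon // CELL_SIZE))
        cells[key].append(mag)
    features = {}
    for key, mags in cells.items():
        if len(mags) >= MIN_EVENTS_PER_CELL:
            features[key] = {
                "rate": len(mags),                  # local seismicity rate
                "mean_mag": sum(mags) / len(mags),  # local mean magnitude
            }
    return features
```

&lt;p&gt;The local b-value is computed the same way, from each qualifying cell's magnitude list.&lt;/p&gt;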

&lt;p&gt;I used a 90-day sliding window to compute these features, advancing the window by 7 days for each new sample. The target label for the model was binary: 1 if a Magnitude 6.0+ earthquake occurred within the subsequent 30-day prediction horizon after the feature window, and 0 otherwise.&lt;/p&gt;
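&lt;p&gt;The windowing and labelling loop can be sketched as follows; compute_features stands in for the feature calculations above, and event times are in days on a common clock:&lt;/p&gt;

```python
# 90-day feature window, stepped by 7 days; label 1 when an M6.0+
# event falls in the 30 days after the window.
FEATURE_WINDOW_DAYS = 90
STEP_DAYS = 7
HORIZON_DAYS = 30
TARGET_MAGNITUDE = 6.0

def build_samples(events, t_start, t_end, compute_features):
    """events: (time_days, magnitude) pairs, sorted by time."""
    samples, labels = [], []
    t = t_start
    while t_end >= t + FEATURE_WINDOW_DAYS + HORIZON_DAYS:
        win_end = t + FEATURE_WINDOW_DAYS
        window = [e for e in events if e[0] >= t and win_end > e[0]]
        horizon = [e for e in events
                   if e[0] >= win_end and win_end + HORIZON_DAYS > e[0]]
        samples.append(compute_features(window))
        # Binary target: did a large quake occur in the horizon?
        labels.append(int(any(m >= TARGET_MAGNITUDE for _, m in horizon)))
        t += STEP_DAYS
    return samples, labels
```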

&lt;h2&gt;3. The Brain: Model Training &amp;amp; Evaluation&lt;/h2&gt;

&lt;p&gt;With our features ready, it was time to train the predictive brain of our system.&lt;/p&gt;

&lt;p&gt;I chose an XGBoost classifier as the core model. XGBoost is a powerful gradient-boosting framework known for its performance on complex, tabular datasets.&lt;/p&gt;

&lt;h3&gt;Tackling Class Imbalance&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Earthquake nowcasting suffers from extreme class imbalance&lt;/strong&gt;: periods without a large earthquake significantly outnumber periods preceding one. To address this, we employed two strategies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stratified Splitting&lt;/strong&gt;: When splitting data into training and test sets, we used stratify=y to ensure both sets maintained the original proportion of large earthquake events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SMOTE (Synthetic Minority Over-sampling Technique)&lt;/strong&gt;: Applied to the training data only, SMOTE generated synthetic samples of the minority class (pre-large-earthquake windows), balancing the dataset the model learns from. We dynamically adjusted SMOTE's k_neighbors parameter to ensure it always had enough real minority samples to work with.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Hyperparameter Tuning &amp;amp; Evaluation&lt;/h3&gt;

&lt;p&gt;We performed hyperparameter tuning using GridSearchCV, focusing on optimizing the model's F1-score. The F1-score is particularly valuable for imbalanced datasets as it provides a balance between precision (minimizing false positives) and recall (minimizing false negatives).&lt;/p&gt;

&lt;p&gt;After training, the model's performance was rigorously evaluated on the untouched test set. I examined:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Classification Report&lt;/strong&gt;: Providing precision, recall, and F1-score for both classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROC-AUC Score&lt;/strong&gt;: A measure of the model's ability to distinguish between classes across all possible thresholds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Confusion Matrix&lt;/strong&gt;: A visual breakdown of true positives, true negatives, false positives, and false negatives.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's an example of a Confusion Matrix from a training run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwqqhk98e14awjhdoz9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwqqhk98e14awjhdoz9e.png" alt="Confusion Matrix from a training run" width="600" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also analyzed the Precision-Recall Curve, which is often more informative than ROC for imbalanced datasets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2q1ey20ig71wyeei95c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2q1ey20ig71wyeei95c.png" alt="Precision-Recall Curve" width="700" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we looked at &lt;strong&gt;Feature Importance&lt;/strong&gt; to understand which seismic indicators the XGBoost model deemed most influential in its predictions. Features related to regional seismicity rate, standard deviation of depth, coefficient of variation of inter-event time, and localized spatial b-values often topped the list.&lt;/p&gt;
&lt;h3&gt;The Optimal Threshold&lt;/h3&gt;

&lt;p&gt;Crucially, we didn't just rely on the model's default 0.5 probability threshold. We analyzed the Precision, Recall, and F1-score across a range of thresholds and identified the optimal F1-score threshold as 0.3593. This value provides the best balance between catching large earthquakes and avoiding excessive false alarms for our specific model.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eai4i16hpg712jz6soh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eai4i16hpg712jz6soh.png" alt="Precision, Recall, and F1-score" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;
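&lt;p&gt;Mechanically, a best-F1 threshold like this can be recovered by sweeping the candidate thresholds from the precision-recall curve (a sketch, assuming the test labels and predicted probabilities from the evaluation step are at hand):&lt;/p&gt;

```python
# Compute F1 at every threshold on the precision-recall curve and
# return the threshold with the highest F1.
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, proba):
    precision, recall, thresholds = precision_recall_curve(y_true, proba)
    # precision/recall carry one extra trailing entry; trim to align
    # them with the thresholds array.
    f1 = (2 * precision[:-1] * recall[:-1] /
          np.maximum(precision[:-1] + recall[:-1], 1e-12))
    return thresholds[np.argmax(f1)]
```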
&lt;h2&gt;4. Bringing it Live: Model Deployment&lt;/h2&gt;

&lt;p&gt;Training a great model is one thing; making it useful in a real-world setting is another. This required careful deployment steps.&lt;/p&gt;
&lt;h3&gt;Model Serialization&lt;/h3&gt;

&lt;p&gt;After training, the best_model (our optimized XGBoost classifier) was saved to disk using joblib. But there's a critical detail: XGBoost models are sensitive to the order of input features. So, alongside the model, we also saved the exact ordered list of feature column names that the model was trained on. This ensures that when the model is loaded later for prediction, the incoming data is always presented in the correct sequence.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import joblib
# ... after model training ...
joblib.dump(best_model, "earthquake_prediction_model.joblib")
joblib.dump(X.columns.tolist(), "model_feature_columns.joblib")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;The Prediction Script (predict_earthquake.py)&lt;/h3&gt;

&lt;p&gt;A separate, lightweight Python script (predict_earthquake.py) was created specifically for making live predictions. This script is designed to run independently, without needing to retrain the model. Its core functions are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Assets&lt;/strong&gt;: It loads the saved earthquake_prediction_model.joblib and the model_feature_columns.joblib list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fetch Latest Data&lt;/strong&gt;: It connects to the USGS FDSN client to fetch only the most recent earthquake data required for the current 90-day feature window (ending at the current time).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistent Feature Engineering&lt;/strong&gt;: It applies the exact same feature engineering logic as the training pipeline to this latest data. This consistency is paramount. It also handles cases where certain grid cells might be inactive in the current window by filling their features with zeros, matching how the training data was prepared.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predict&lt;/strong&gt;: The engineered features are passed to the loaded model, which outputs a probability score.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Threshold &amp;amp; Alert&lt;/strong&gt;: The pre-determined optimal threshold of 0.3593 is applied. If the predicted probability exceeds this, an alert is triggered.&lt;/p&gt;

&lt;h2&gt;5. Staying Vigilant: Automation &amp;amp; Alerting&lt;/h2&gt;

&lt;p&gt;A prediction system is only useful if it runs consistently and communicates its findings effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Scheduling&lt;/strong&gt;&lt;br&gt;
To ensure continuous nowcasting, predict_earthquake.py was automated to run at regular intervals (e.g., daily). This was set up using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;cron jobs&lt;/strong&gt; for Linux/macOS environments, which allow scheduling commands to run at specific times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task Scheduler&lt;/strong&gt; for Windows, providing a graphical interface for similar functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
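&lt;p&gt;For example, a daily 06:00 run via cron might look like this (the interpreter and script paths are illustrative, not the deployed ones):&lt;/p&gt;

```shell
# crontab entry: run the nowcast every day at 06:00, appending
# any stray output to a catch-all log (example paths).
0 6 * * * /usr/bin/python3 /home/user/predict_earthquake.py >> /home/user/cron.log
```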

&lt;p&gt;&lt;strong&gt;Robust Alerting &amp;amp; Logging&lt;/strong&gt;&lt;br&gt;
Beyond simple console output, the system was enhanced for practical deployment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dedicated Log File&lt;/strong&gt;: All prediction cycle information – data fetches, warnings (like NaN values being filled), and final predictions – are written to a dedicated log file (earthquake_prediction.log). This is invaluable for monitoring the system's health and troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Email Notifications&lt;/strong&gt;: Crucially, if a "Large Quake" prediction is made (probability &amp;gt;= 0.3593), the script is configured to send an immediate email alert. This ensures that relevant stakeholders are notified without needing to constantly monitor logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Conceptual snippet for email alert in predict_earthquake.py
import smtplib
from email.mime.text import MIMEText
import logging # Already configured earlier

# ... email config variables ...

def send_email_alert(subject, body):
    try:
        msg = MIMEText(body)
        msg["Subject"] = subject
        msg["From"] = EMAIL_SENDER
        msg["To"] = EMAIL_RECEIVER
        with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
            server.starttls()
            server.login(EMAIL_SENDER, EMAIL_PASSWORD)
            server.send_message(msg)
        logging.info(f"Email alert sent successfully to {EMAIL_RECEIVER}")
    except Exception as e:
        logging.error(f"Failed to send email alert: {e}")

# ... later in make_prediction() ...
if prediction_label == 1:
    alert_message = (f"ALERT! A large earthquake (M{TARGET_MAGNITUDE}+) is predicted "
                     f"in California within the next {PREDICTION_HORIZON_DAYS} days.\n"
                     f"Probability: {prediction_proba:.4f}")
    logging.warning(f"Prediction: {alert_message}")
    send_email_alert("Earthquake Prediction ALERT!", alert_message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This project successfully establishes a complete, automated pipeline for earthquake nowcasting in California using machine learning. From meticulously gathering and engineering seismic data to training a robust XGBoost model and deploying it with automated scheduling and alerting, the system represents a significant step towards leveraging data science for natural hazard preparedness.&lt;/p&gt;

&lt;p&gt;While the inherent complexities of earthquake prediction mean no model is perfect, this system provides a valuable, data-driven assessment of current seismic risk. The journey highlights the importance of not just model accuracy, but also the practical considerations of data handling, feature consistency, and operational deployment in building real-world ML solutions.&lt;/p&gt;

&lt;p&gt;The next steps involve continuous monitoring of the system's performance, periodic retraining with updated data to ensure relevance, and potentially exploring more advanced validation techniques like time-series cross-validation for even greater robustness.&lt;/p&gt;

&lt;h3&gt;Resources&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Full Project Code (GitHub)&lt;/strong&gt;: &lt;a href="https://github.com/oye-bobs/siesmicanalyzer" rel="noopener noreferrer"&gt;https://github.com/oye-bobs/siesmicanalyzer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ObsPy Library&lt;/strong&gt;: &lt;a href="https://docs.obspy.org/" rel="noopener noreferrer"&gt;https://docs.obspy.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;USGS FDSN Web Service&lt;/strong&gt;: &lt;a href="https://earthquake.usgs.gov/ws/" rel="noopener noreferrer"&gt;https://earthquake.usgs.gov/ws/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scikit-learn Documentation&lt;/strong&gt;: &lt;a href="https://scikit-learn.org/" rel="noopener noreferrer"&gt;https://scikit-learn.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XGBoost Documentation&lt;/strong&gt;: &lt;a href="https://xgboost.readthedocs.io/" rel="noopener noreferrer"&gt;https://xgboost.readthedocs.io/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>datascience</category>
      <category>coding</category>
    </item>
    <item>
      <title>Embracing Open Source: A Catalyst for Scientific Progress</title>
      <dc:creator>Adeoye Malumi</dc:creator>
      <pubDate>Fri, 19 Apr 2024 13:41:52 +0000</pubDate>
      <link>https://dev.to/oyebobs/embracing-open-source-a-catalyst-for-scientific-progress-1n0</link>
      <guid>https://dev.to/oyebobs/embracing-open-source-a-catalyst-for-scientific-progress-1n0</guid>
      <description>&lt;p&gt;Hey Dev Community!&lt;/p&gt;

&lt;p&gt;Let's embark on a thrilling journey through the realm of open source, where innovation knows no bounds and collaboration reigns supreme! &lt;/p&gt;

&lt;p&gt;Today, we're diving into why open source is the ultimate game-changer for scientific progress and how it's shaping a brighter future for us all. Get ready for a wild ride!&lt;/p&gt;

&lt;h2&gt;Why Open Source Matters in Science&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transparency&lt;/strong&gt;: Open source is like a spotlight shining on the secrets of science. It unveils research methodologies, algorithms, and data for all to see, fostering a culture of transparency and trust within the scientific community.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;: Imagine a global science party where everyone's invited! That's the power of open source. By tearing down barriers to access, it brings together brilliant minds from around the world to collaborate on groundbreaking research.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster Iteration&lt;/strong&gt;: With open source, the innovation treadmill goes into overdrive! Scientists can build upon existing solutions, tweak them to perfection, and share their creations with the world—all at warp speed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reproducibility&lt;/strong&gt;: Science is all about trust, and open source ensures that trust never falters. By providing open access to source code and data, it makes research findings easily reproducible, empowering others to verify and validate discoveries.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Illustrating the Impact&lt;/h2&gt;

&lt;p&gt;Let's take a joyride through the world of climate science, where open source is leading the charge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Sharing&lt;/strong&gt;: Climate data from every corner of the globe flows freely, thanks to open source platforms like Open Climate GIS. This treasure trove fuels cutting-edge research and deepens our understanding of Earth's ever-changing climate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaborative Research&lt;/strong&gt;: Scientists from diverse fields come together on platforms like Open Climate Collaborative to tackle climate change head-on. From atmospheric physicists to data wizards, everyone brings their A-game to the fight against the climate crisis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reproducibility&lt;/strong&gt;: The winds of change blow strong in the world of climate research, where transparency reigns supreme. Researchers openly share their methodologies, code, and data, ensuring that every discovery is as solid as a rock.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Innovation&lt;/strong&gt;: Open-source innovation is the wind beneath our wings as we soar towards a greener future. From AI-powered climate models to blockchain-based carbon markets, the possibilities are endless!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Conclusion&lt;/em&gt;&lt;/strong&gt;:&lt;br&gt;
So there you have it, fellow adventurers—open source isn't just a buzzword; it's the key to unlocking a world of endless possibilities. In the grand quest for scientific progress, let's hoist the open source flag high and march boldly into the future!&lt;/p&gt;

&lt;p&gt;What are your thoughts on the role of open source in scientific progress? Share your insights below and let's keep the conversation going! &lt;/p&gt;

</description>
      <category>opensource</category>
      <category>datascience</category>
      <category>database</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>What was your win this week?</title>
      <dc:creator>Adeoye Malumi</dc:creator>
      <pubDate>Fri, 19 Apr 2024 13:28:17 +0000</pubDate>
      <link>https://dev.to/oyebobs/what-was-your-win-this-week-51f</link>
      <guid>https://dev.to/oyebobs/what-was-your-win-this-week-51f</guid>
      <description>&lt;p&gt;I started coding again and I've been able to learn a whole lot on github. from learning how to use code spaces to developing my own portfolio page, It's been a really interesting journey.&lt;/p&gt;

</description>
      <category>weeklyretro</category>
    </item>
  </channel>
</rss>
