<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Curtis</title>
    <description>The latest articles on DEV Community by Alex Curtis (@alexcurtis1969).</description>
    <link>https://dev.to/alexcurtis1969</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2968528%2F6d22e34d-d01b-41e4-8e8b-eb771e241baf.jpeg</url>
      <title>DEV Community: Alex Curtis</title>
      <link>https://dev.to/alexcurtis1969</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexcurtis1969"/>
    <language>en</language>
    <item>
      <title>From Data to Dashboards: Building an EC2 Cost Analysis Tool with Flask and AWS S3</title>
      <dc:creator>Alex Curtis</dc:creator>
      <pubDate>Tue, 01 Apr 2025 19:16:21 +0000</pubDate>
      <link>https://dev.to/alexcurtis1969/from-data-to-dashboards-building-an-ec2-cost-analysis-tool-with-flask-and-aws-s3-1nn8</link>
      <guid>https://dev.to/alexcurtis1969/from-data-to-dashboards-building-an-ec2-cost-analysis-tool-with-flask-and-aws-s3-1nn8</guid>
      <description>&lt;p&gt;Disclaimer: This article was previously published in Medium.com, link to article: &lt;a href="https://medium.com/@alex.curtis_luit/from-data-to-dashboards-building-an-ec2-cost-analysis-tool-with-flask-and-aws-s3-4c0d312ea38f%C2%A0" rel="noopener noreferrer"&gt;https://medium.com/@alex.curtis_luit/from-data-to-dashboards-building-an-ec2-cost-analysis-tool-with-flask-and-aws-s3-4c0d312ea38f &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don’t have your own magic hat and need to analyze 100,000 EC2s and provide recommendations….follow along!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vvej8gq2zxsjsfalzw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vvej8gq2zxsjsfalzw.jpg" alt="Magic Hat" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to decode, decipher and recommend changes that will save money on your business’s 100K EC2 operation. No magic hat?! No problem….enter Python!&lt;/p&gt;

&lt;p&gt;This article explores how to build a Flask-based web application that analyzes EC2 cost data, generates insightful visualizations, and leverages AWS S3 for storage and retrieval.&lt;/p&gt;

&lt;p&gt;The Application’s Core Functionality:&lt;/p&gt;

&lt;p&gt;Our application aims to provide a user-friendly interface for analyzing EC2 cost data. It performs the following key tasks:&lt;/p&gt;

&lt;p&gt;1️⃣Data Ingestion: Reads EC2 cost data from a CSV file using Pandas.&lt;br&gt;
2️⃣Data Analysis: Performs various cost-related analyses, such as calculating total costs, average costs per instance type, and potential savings.&lt;br&gt;
3️⃣Visualization: Generates visualizations using Matplotlib and Seaborn to represent the analysis results.&lt;br&gt;
4️⃣Storage: Uploads the analysis results and visualizations to AWS S3.&lt;br&gt;
5️⃣Retrieval: Allows users to download the analysis results and view visualizations through a web interface.&lt;/p&gt;

&lt;p&gt;Key Components and Code Snippets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Analysis with Pandas:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We use Pandas to read and process the EC2 cost data. The &lt;code&gt;analyze_ec2_costs()&lt;/code&gt; function performs the core analysis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def analyze_ec2_costs(df):
    analysis = {}
    if df.empty:
        logging.error("DataFrame is empty for analysis. Analysis aborted.")
        return analysis
    df.columns = [str(col).strip().lower().replace(" ", "").replace("$", "").replace("(", "").replace(")", "").replace("%", "") for col in df.columns]
    # ... (rest of the analysis logic)
    return analysis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
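&lt;p&gt;The analysis logic itself is elided above. As a rough sketch of what it might contain (the column names &lt;code&gt;instancetype&lt;/code&gt;, &lt;code&gt;costpermonth&lt;/code&gt;, and &lt;code&gt;recommendation&lt;/code&gt; are assumptions for illustration, not the article's actual schema):&lt;/p&gt;

```python
import logging
import pandas as pd

def analyze_ec2_costs(df):
    # Sketch of the elided analysis; column names are assumed, not the article's schema.
    analysis = {}
    if df.empty:
        logging.error("DataFrame is empty for analysis. Analysis aborted.")
        return analysis
    # Normalize headers like "Cost Per Month" to "costpermonth"
    df.columns = [str(col).strip().lower().replace(" ", "") for col in df.columns]
    analysis["total_cost"] = df["costpermonth"].sum()
    analysis["avg_cost_by_type"] = df.groupby("instancetype")["costpermonth"].mean().to_dict()
    # Treat every instance flagged for downsizing as a candidate for roughly 50% savings
    flagged = df[df["recommendation"] == "downsize"]
    analysis["potential_savings"] = flagged["costpermonth"].sum() * 0.5
    return analysis
```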



&lt;ol start="2"&gt;
&lt;li&gt;Visualization with Matplotlib and Seaborn:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Visualizations are generated using Matplotlib and Seaborn. The &lt;code&gt;generate_visualizations()&lt;/code&gt; function creates plots for instance type distribution, cost per region, CPU utilization, and recommendation breakdown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib.pyplot as plt
import seaborn as sns
import os
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def generate_visualizations(df):
    # ... (data preprocessing)
    plt.figure(figsize=(10, 6))
    sns.countplot(y="instancetype", data=df, order=df["instancetype"].value_counts().index[:10], palette="pastel", hue="instancetype", legend=False)
    plt.savefig("instance_type_distribution.png")
    # ... (other visualizations)
    plt.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;AWS S3 Integration: &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We use the &lt;code&gt;boto3&lt;/code&gt; library to interact with AWS S3. The &lt;code&gt;upload_to_s3()&lt;/code&gt; function uploads files to our S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import logging
import os

S3_BUCKET_NAME = "alexas-ec2-cost-analysis-bucket"
S3_REGION = "us-east-1"

def upload_to_s3(file_name, object_name=None):
    # Default the S3 object key to the local file name
    if object_name is None:
        object_name = os.path.basename(file_name)
    s3_client = boto3.client('s3', region_name=S3_REGION)
    try:
        logging.info(f"Attempting to upload '{file_name}' to S3: {S3_BUCKET_NAME}/{object_name}")
        s3_client.upload_file(file_name, S3_BUCKET_NAME, object_name)
        logging.info(f"File '{file_name}' uploaded to S3: {S3_BUCKET_NAME}/{object_name}")
    except Exception as e:
        logging.error(f"Error uploading file '{file_name}' to S3: {e}")
        print(f"Upload error: {e}")
        print(f"File path that was attempted to upload: {file_name}")
        print(f"S3 path that was attempted to use: {S3_BUCKET_NAME}/{object_name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Flask Web Interface:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We use Flask to create a web interface for our application. The &lt;code&gt;/run_analysis&lt;/code&gt; route triggers the analysis and visualization generation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, render_template, send_from_directory
import os

app = Flask(__name__)
@app.route("/run_analysis")
def run_analysis():
    # ... (read data, analyze, generate visualizations, upload to S3)
    return "Analysis completed and results uploaded to S3."
@app.route("/visualizations/&amp;lt;filename&amp;gt;")
def get_visualization(filename):
    # ... (download visualization from S3 and return)
    return send_from_directory(".", local_filename, as_attachment=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Matplotlib Backend Configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To ensure compatibility with our Flask application, we explicitly set Matplotlib to use the “Agg” backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib
matplotlib.use('Agg')  # Force Matplotlib to use the Agg backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Results:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A password-protected S3 static website that serves as the repository for all of the information we need:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l3jqagod8maydpdmknh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l3jqagod8maydpdmknh.png" alt="Image 1" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ability to query and download the information, including the source .csv and visualization charts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xr5keb7w9ifh61gke8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xr5keb7w9ifh61gke8.png" alt="Image 2" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn23gltxgvytiexp0l4j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn23gltxgvytiexp0l4j7.png" alt="Image 3n" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk2hetdpprgtguqk86m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk2hetdpprgtguqk86m4.png" alt="Image 4" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo8nidlcbdnhbotlod8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo8nidlcbdnhbotlod8e.png" alt="Image 5" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvs6apq4tbbfaf2r0asf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvs6apq4tbbfaf2r0asf.png" alt="Image 6" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdq3vsmlhh3hhjrqngih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdq3vsmlhh3hhjrqngih.png" alt="Image 7" width="450" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key Considerations:&lt;/p&gt;

&lt;p&gt;1️⃣Security: Password protection and secure storage of sensitive data are crucial.&lt;br&gt;
2️⃣Scalability: Consider using asynchronous tasks or a message queue for long-running analysis.&lt;br&gt;
3️⃣Error Handling: Implement robust error handling to gracefully handle exceptions and provide informative messages.&lt;br&gt;
4️⃣User Experience: Design a user-friendly interface that provides clear and concise information.&lt;/p&gt;

&lt;p&gt;This application provides a foundation for building a comprehensive EC2 cost analysis tool. By leveraging the power of Pandas, Matplotlib, and AWS S3, we can create a powerful and insightful application, and we get to use our magic hat!&lt;/p&gt;

&lt;p&gt;GitHub link: Python_Applications/EC2_Costs.py at main · alexcurtis1969/Python_Applications&lt;/p&gt;

&lt;h1&gt;AWSBuilder&lt;/h1&gt;

&lt;h1&gt;AWSCommunityBuilder&lt;/h1&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/jasondunn"&gt;@jasondunn&lt;/a&gt; &lt;/p&gt;

</description>
      <category>flask</category>
      <category>pandas</category>
      <category>seaborn</category>
      <category>python</category>
    </item>
    <item>
      <title>Building Financial Data Analysis with AWS: A Step-by-Step Guide</title>
      <dc:creator>Alex Curtis</dc:creator>
      <pubDate>Sun, 23 Mar 2025 16:04:40 +0000</pubDate>
      <link>https://dev.to/alexcurtis1969/building-financial-data-analysis-with-aws-a-step-by-step-guide-2je4</link>
      <guid>https://dev.to/alexcurtis1969/building-financial-data-analysis-with-aws-a-step-by-step-guide-2je4</guid>
<description>&lt;p&gt;This article was first published on Medium. Check it out here: &lt;a href="https://medium.com/@alex.curtis_luit/grab-the-snake-by-the-tail-and-wag-it-until-you-have-something-useful-a110c108fc53" rel="noopener noreferrer"&gt;https://medium.com/@alex.curtis_luit/grab-the-snake-by-the-tail-and-wag-it-until-you-have-something-useful-a110c108fc53&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Grab the snake by the tail and wag it until you have something useful!&lt;/p&gt;

&lt;p&gt;Building a Financial Data Analysis and Reporting Tool with Python&lt;/p&gt;

&lt;p&gt;The answer is in the question….shake the Python until you get something useful…are you ready?&lt;/p&gt;

&lt;p&gt;In today’s data-driven world, having the right tools to analyze financial data is crucial for decision-making. As part of my ongoing journey in the cloud and DevOps space, I recently created a Python script that fetches market data for stocks and cryptocurrencies, calculates key technical indicators, and generates a comprehensive financial report in PDF format.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through the code I built, its features, and how you can use it to analyze financial market data with Python.&lt;/p&gt;

&lt;p&gt;Overview of the Project&lt;br&gt;
This project pulls market data from Yahoo Finance for various tickers (stocks and cryptocurrencies), computes technical indicators like moving averages and Bollinger Bands, and finally outputs the results in a PDF report. The aim is to provide a one-stop tool for both individual investors and professionals to track the performance of various assets and gain insights into market trends.&lt;/p&gt;

&lt;p&gt;Key Features:&lt;/p&gt;

&lt;p&gt;Data Fetching: The script retrieves historical data for stocks and cryptocurrencies (the cool thing is you can modify it as you see fit!).&lt;br&gt;
Technical Analysis: It calculates standard technical indicators such as the 20-day moving average and Bollinger Bands.&lt;br&gt;
PDF Report Generation: The data and analysis are compiled into a PDF file for easy sharing and viewing.&lt;/p&gt;

&lt;p&gt;Code Breakdown&lt;br&gt;
Let’s walk through the key sections of the code and understand how they work.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetching Financial Data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We use the yfinance library to fetch market data for a list of tickers. Yahoo Finance provides historical price data for various financial assets, including stocks, indices, and cryptocurrencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import yfinance as yf

def fetch_and_process_data(ticker):
    print(f"Fetching data for {ticker}...")
    df = yf.download(ticker, period="1y", interval="1d")
    return df
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the fetch_and_process_data function takes a ticker symbol (e.g., "AAPL" for Apple or "BTC-USD" for Bitcoin) and downloads one year of daily historical data.&lt;/p&gt;
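&lt;p&gt;One caveat worth knowing: newer versions of yfinance can return a MultiIndex column header like ("Close", "AAPL") from download, which would break the plain df['Close'] lookups used later. A small flattening step (my own addition, shown on synthetic data rather than a live download) keeps the rest of the script working:&lt;/p&gt;

```python
import pandas as pd

def flatten_columns(df):
    # yfinance can return MultiIndex columns like ("Close", "AAPL");
    # keep just the field name so df["Close"] works downstream.
    if isinstance(df.columns, pd.MultiIndex):
        df = df.copy()
        df.columns = [col[0] for col in df.columns]
    return df
```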

&lt;ol start="2"&gt;
&lt;li&gt;Calculating Technical Indicators
Once we have the data, we perform some technical analysis. For example, we calculate the Rolling Standard Deviation to analyze price volatility and Bollinger Bands to evaluate price movements relative to a moving average.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What does all of this mean?&lt;/p&gt;

&lt;p&gt;Rolling Standard Deviation = Measures how much the price is jumping around.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd

def calculate_technical_indicators(df):
    # Calculate rolling standard deviation (volatility)
    df['rolling_std'] = df['Close'].rolling(window=20).std()

    # Calculate Bollinger Bands
    df['UpperBand'] = df['Close'].rolling(window=20).mean() + 2 * df['rolling_std']
    df['LowerBand'] = df['Close'].rolling(window=20).mean() - 2 * df['rolling_std']

    # Fill any missing values with 'N/A'
    df.fillna("N/A", inplace=True)

    return df
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this section, we compute a 20-day rolling standard deviation and use it to calculate the upper and lower Bollinger Bands. Bollinger Bands are used to identify overbought or oversold conditions in an asset.&lt;/p&gt;
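&lt;p&gt;To make "overbought or oversold" concrete, the bands can be turned into a simple label per day: a close above the upper band suggests overbought, below the lower band oversold. This is an extension of the article's indicators using the textbook convention, not trading advice:&lt;/p&gt;

```python
import pandas as pd

def band_signals(df):
    # Label each day: "overbought" above the upper band, "oversold" below
    # the lower band, otherwise "neutral". Assumes numeric band columns.
    signals = pd.Series("neutral", index=df.index)
    signals[df["Close"].gt(df["UpperBand"])] = "overbought"
    signals[df["Close"].lt(df["LowerBand"])] = "oversold"
    return signals
```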

&lt;ol start="3"&gt;
&lt;li&gt;Generating the PDF Report
Once the data is processed and the indicators are calculated, we generate a PDF report using the fpdf library. This report includes the asset's closing price, the calculated technical indicators, and charts for visual representation.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fpdf import FPDF

def generate_pdf(tickers):
    pdf = FPDF()
    pdf.set_auto_page_break(auto=True, margin=15)
    pdf.add_page()

    for ticker in tickers:
        df = fetch_and_process_data(ticker)
        df = calculate_technical_indicators(df)

        # Write the ticker symbol and the technical indicators to the PDF
        pdf.set_font("Arial", size=12)
        pdf.cell(200, 10, txt=f"Technical Report for {ticker}", ln=True)

        # Add data to the PDF (this is just an example for the first 5 rows)
        for index, row in df.head().iterrows():
            pdf.cell(200, 10, txt=f"{index}: {row['Close']} - UpperBand: {row['UpperBand']} - LowerBand: {row['LowerBand']}", ln=True)

    pdf.output("fintech_market_data.pdf")
    print("PDF generated successfully: fintech_market_data.pdf")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generate_pdf function adds the processed financial data (including stock prices and technical indicators) to the PDF file. You can extend this part to add more visualizations, such as charts or graphs, to make the report even more insightful.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Handling Data Quality
We also handle missing data by filling it with “N/A”. This ensures the script doesn’t break when data is missing for any particular day.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df.fillna("N/A", inplace=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach prevents errors from occurring when we try to display or process the data further.&lt;/p&gt;

&lt;p&gt;Running the Script&lt;br&gt;
Once the script is set up and ready, you simply need to run the main function, which pulls data for each ticker, performs the analysis, and generates the PDF:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def main():
    tickers = ["AAPL", "MSFT", "GOOGL", "BTC-USD", "ETH-USD"]
    generate_pdf(tickers)

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
When you run the script, it will download market data for the tickers you specify (Apple, Microsoft, Google, Bitcoin, and Ethereum in this case). It will then generate a PDF file named fintech_market_data.pdf, which contains the financial analysis for each asset, including the calculated technical indicators.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv8hnie4i3in1syj3ukn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv8hnie4i3in1syj3ukn.png" alt="page1 data" width="800" height="1129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv94gk2n4vky9t6u6uyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv94gk2n4vky9t6u6uyn.png" alt="page 2 data" width="800" height="1129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
This project is a great starting point for building more advanced financial tools. You can enhance the script further by adding more technical indicators, integrating machine learning models for predictions, or even pushing the data to a cloud platform for real-time analysis.&lt;/p&gt;

&lt;p&gt;With Python and a few powerful libraries like yfinance, pandas, and fpdf, you can create powerful financial tools that provide valuable insights. I hope this article inspires you to experiment with financial data analysis and see how Python can simplify the process.&lt;/p&gt;

&lt;h1&gt;AWS&lt;/h1&gt;

&lt;h1&gt;AWSCommunity&lt;/h1&gt;

&lt;h1&gt;AWSCommunityBuilder&lt;/h1&gt;

&lt;p&gt;Thanks to &lt;a class="mentioned-user" href="https://dev.to/jasondunn"&gt;@jasondunn&lt;/a&gt; &lt;a class="mentioned-user" href="https://dev.to/chriscampbell96"&gt;@chriscampbell96&lt;/a&gt; &lt;/p&gt;

</description>
      <category>python</category>
      <category>finops</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
