Stock Fundamental Analysis Using Python and Streamlit
In today's dynamic stock market, a thorough fundamental analysis can give investors a competitive edge. This blog post delves into how you can leverage Python, Streamlit, and a custom YFinance3 module to build an automated tool for analyzing stock fundamentals. We'll break down the code and highlight how each part contributes to efficient stock data collection and processing.
Key Features of the Project
- Load and process multiple stock symbols from CSV files.
- Fetch fundamental data for each stock using the custom YFinance3 module.
- Save and load stock data as JSON files for reusability.
- Perform key financial metric computations.
- Display clean and structured data insights.
Project Structure
.
├── Data/
│   ├── Datasets/
│   └── sources/
│       └── Index/
│           └── StockList.csv
└── main.py
Step-by-Step Code Breakdown
1. Directory and File Initialization
DATA_PATH = './Data/Datasets'
CSV_BASE_PATH = './Data/sources/Index/'
These paths define the locations where CSV files with stock symbols are stored and where processed data will be saved.
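If the Datasets folder does not exist yet, it helps to create it before any downloads run. A minimal sketch, assuming os.makedirs is an acceptable way to set it up (this call is not shown in the original snippet):

import os

DATA_PATH = './Data/Datasets'
CSV_BASE_PATH = './Data/sources/Index/'

# Create the output folder up front so later json.dump calls have somewhere to write
os.makedirs(DATA_PATH, exist_ok=True)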
2. Handling CSV Input
if not os.path.exists(CSV_BASE_PATH):
    st.error(f"Directory not found: {CSV_BASE_PATH}")
else:
    csv_files = [file for file in os.listdir(CSV_BASE_PATH) if file.endswith('.csv')]
    selected_csv = st.selectbox("Choose a CSV file:", csv_files)
This snippet checks for the existence of the directory and populates a dropdown with available CSV files. Users can select a file containing stock symbols.
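Once a file is chosen, the tickers still need to be read into the SYMBOLS list used in the next step. A small sketch building on the variables above, assuming the CSV exposes its tickers in a column named Symbol (the column name is an assumption):

import os
import pandas as pd

# Read the selected CSV and collect unique, non-empty ticker symbols
csv_path = os.path.join(CSV_BASE_PATH, selected_csv)
SYMBOLS = pd.read_csv(csv_path)['Symbol'].dropna().unique().tolist()
st.write(f"Loaded {len(SYMBOLS)} symbols from {selected_csv}")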
3. Data Download and Storage
for symbol in SYMBOLS:
    file_name = os.path.join(folder_path, f'{symbol}.json')
    if os.path.exists(file_name):
        existing_files += 1
        continue
    try:
        data = YFinance3(symbol)
        with open(file_name, 'w') as file:
            json.dump(data.info, file)
        new_downloads += 1
    except Exception as e:
        st.error(f"Error processing {symbol}: {e}")
The code iterates through stock symbols, downloads the data using the YFinance3 module, and saves it in JSON format. It tracks existing files, new downloads, and errors for status reporting.
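The YFinance3 module itself is project-specific and not reproduced in this post. As a rough idea of what such a wrapper could look like on top of the public yfinance package, here is a sketch (the class layout and its reliance on Ticker.info are assumptions):

import yfinance as yf

class YFinance3:
    """Thin wrapper exposing a ticker's fundamentals as a plain dict."""

    def __init__(self, symbol):
        self.symbol = symbol
        # yfinance's Ticker.info returns a dictionary of fundamental fields
        self.info = yf.Ticker(symbol).info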
4. Data Loading and Processing
def load_data(json_data):
    data['Symbol'].append(json_data.get('symbol', np.nan))
    data['Name'].append(json_data.get('longName', np.nan))
    data['Industry'].append(json_data.get('industry', np.nan))
    ...
The load_data function extracts key financial metrics from the JSON files, such as:
- EPS (forward)
- Price-to-Book Ratio (PB)
- Free Cash Flow Yield (FCFY)
- 52-week Range
The extracted data is appended to a structured dictionary.
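To illustrate how one of these metrics could be derived inside load_data, here is a sketch of a Free Cash Flow Yield calculation. The field names freeCashflow and marketCap come from yfinance's info dictionary, but the exact formula used in the project is an assumption:

import numpy as np

# Free Cash Flow Yield = free cash flow / market capitalization, as a percentage
fcf = json_data.get('freeCashflow') or np.nan
market_cap = json_data.get('marketCap') or np.nan
data['FCFY'].append((fcf / market_cap) * 100)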
5. DataFrame Creation and NaN Handling
df = pd.DataFrame(data)
df_exceptions = df[df.isna().any(axis=1)]
df = df.dropna().reset_index(drop=True)
The code creates a pandas DataFrame from the extracted data, dropping rows with missing values while logging exceptions for user review.
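The dropped rows in df_exceptions can also be surfaced in the app so users know which symbols were skipped. A minimal sketch (the presentation is an assumption, not part of the original code):

# Show any symbols that were excluded because of missing fields
if not df_exceptions.empty:
    st.warning(f"{len(df_exceptions)} symbols had missing values and were excluded:")
    st.dataframe(df_exceptions)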
6. Computing the 52-Week Range
df['52w Range'] = ((df['Price'] - df['52w Low']) / (df['52w High'] - df['52w Low'])) * 100
This computation helps assess a stock's position within its 52-week trading range.
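A quick worked example with hypothetical numbers: a stock priced at 170 with a 52-week low of 120 and a high of 200 sits 62.5% of the way up its range.

# Hypothetical figures for illustration only
price, low_52w, high_52w = 170, 120, 200
range_position = (price - low_52w) / (high_52w - low_52w) * 100  # 62.5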
7. Displaying and Saving Processed Data
st.write("### Processed Data", df)
output_file = os.path.join(folder_path, 'processed_data.csv')
df.to_csv(output_file, index=False)
st.success(f"Processed data saved to {output_file}")
The cleaned and processed data is displayed in Streamlit and saved as a CSV file for future reference.
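Streamlit can also offer the result as a direct download instead of (or in addition to) writing it to disk. A small optional addition using st.download_button (not part of the original code):

# Offer the processed table as a downloadable CSV straight from the app
st.download_button(
    label="Download processed data",
    data=df.to_csv(index=False),
    file_name="processed_data.csv",
    mime="text/csv",
)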
Sample Output
| Symbol | Name | Industry | Price | EPS (fwd) | PB | FCFY | ROE |
|---|---|---|---|---|---|---|---|
| AAPL | Apple | Tech | 170.3 | 6.55 | 9.8 | 2.3% | 42% |
| MSFT | Microsoft | Tech | 310.0 | 8.20 | 7.2 | 3.1% | 40% |
Potential Enhancements
- Stock Screening: Implement filters based on key metrics like ROE and PEG ratios (see the sketch after this list).
- Data Visualization: Plot historical trends for key metrics.
- Backtesting Strategies: Use fundamental data as input for backtesting stock strategies.
- Real-Time Updates: Automate daily updates for new financial data.
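As a starting point for the stock-screening idea above, a simple filter over the processed DataFrame might look like this; the thresholds are placeholders and assume the metrics are stored as plain numbers rather than formatted strings:

# Hypothetical screen: profitable companies at a reasonable valuation
screened = df[(df['ROE'] > 15) & (df['PB'] < 10) & (df['FCFY'] > 2)]
st.write("### Screened Stocks", screened)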
Conclusion
This project provides a powerful yet straightforward approach to fundamental stock analysis using Python and Streamlit. It automates data collection, processing, and presentation, empowering users to make better-informed investment decisions.
Let me know if you'd like to enhance any part of the blog or add new features to the code!