Arnaud PELAMA PELAMA TIOGO

Analyzing NYC Schools' SAT Performance: Reflections on this project

As part of my journey to becoming a Data Scientist, I recently completed a project titled NYC Schools Analysis, in which I analyzed SAT performance data for New York City schools to identify top-performing schools and boroughs. In this blog post, I'd like to reflect on what I learned, the challenges I faced, and how the project contributed to my growth, and to ask the community for feedback.


What I Learned

Data Cleaning and Preprocessing

One of the critical aspects of this project was data cleaning. I learned how to handle missing values, ensure correct data types, and remove duplicate entries. This process was essential to prepare the data for accurate analysis.
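To make that concrete, here is a minimal sketch of the kind of cleaning steps involved, written with pandas. The column names (average_math, average_reading, average_writing) are assumptions for illustration rather than the exact ones from my project:

```python
import pandas as pd

def clean_data(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning: fix types, drop duplicates, handle missing scores."""
    score_cols = ["average_math", "average_reading", "average_writing"]  # assumed column names

    # Make sure the score columns are numeric; invalid entries become NaN
    df[score_cols] = df[score_cols].apply(pd.to_numeric, errors="coerce")

    # Remove exact duplicate rows
    df = df.drop_duplicates()

    # Drop schools where every score is missing; partially missing rows are handled later
    df = df.dropna(subset=score_cols, how="all")

    return df
```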

Data Analysis with Pandas and NumPy

Using pandas and NumPy, I performed statistical analyses to:

  • Identify schools with the best math results (at least 80% of the maximum possible score of 800).
  • Determine the top 10 performing schools based on combined SAT scores.
  • Find the borough with the largest standard deviation in combined SAT scores.

This enhanced my understanding of data manipulation and statistical calculations in Python.
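As a rough sketch, the three analyses look something like this in pandas (again assuming average_math, average_reading, average_writing, and borough columns):

```python
import pandas as pd

def analyze(schools: pd.DataFrame):
    """Sketch of the three analyses; column names are assumptions."""
    # Schools with the best math results: at least 80% of the 800-point maximum
    best_math = schools[schools["average_math"] >= 0.8 * 800]

    # Top 10 schools by combined SAT score
    schools["total_SAT"] = (
        schools["average_math"] + schools["average_reading"] + schools["average_writing"]
    )
    top_10 = schools.sort_values("total_SAT", ascending=False).head(10)

    # Borough with the largest standard deviation in combined SAT scores
    borough_stats = schools.groupby("borough")["total_SAT"].agg(["count", "mean", "std"]).round(2)
    largest_std_borough = borough_stats["std"].idxmax()

    return best_math, top_10, largest_std_borough
```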

Data Visualization with Matplotlib

Creating visual representations of data was a significant learning point. I used Matplotlib to generate:

  • Bar charts showing the top 10 schools.
  • Histograms of combined SAT scores.
  • Box plots to visualize SAT score distributions by borough.

These visualizations helped convey the insights more effectively.
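Here is a hedged sketch of how those three plots can be produced with Matplotlib, reusing the total_SAT column computed in the analysis step above (the styling choices are illustrative, not my exact ones):

```python
import matplotlib.pyplot as plt

def visualize_data(schools, top_10):
    fig, axes = plt.subplots(1, 3, figsize=(18, 5))

    # Bar chart of the top 10 schools by combined SAT score
    axes[0].barh(top_10["school_name"], top_10["total_SAT"])
    axes[0].invert_yaxis()
    axes[0].set_title("Top 10 schools by combined SAT")

    # Histogram of combined SAT scores across all schools
    axes[1].hist(schools["total_SAT"], bins=20, edgecolor="black")
    axes[1].set_title("Distribution of combined SAT scores")

    # Box plots of combined SAT scores by borough
    schools.boxplot(column="total_SAT", by="borough", ax=axes[2], rot=45)
    axes[2].set_title("Combined SAT by borough")

    fig.suptitle("")  # clear the automatic "Boxplot grouped by borough" suptitle
    plt.tight_layout()
    plt.show()
```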


Challenges Faced

Handling Limited Data

Initially, I worked with a small dataset represented as a Python dictionary. This limited the depth of analysis. To overcome this, I expanded the dataset by adding more schools and varying the scores to simulate a more realistic scenario.
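The starting point looked roughly like the sketch below (the school names and scores are made up for illustration); keeping a load_data() helper meant the rest of the pipeline did not care whether the data came from a dictionary or, later, a larger source:

```python
import pandas as pd

# Hypothetical hand-written dataset; the real one grew to many more schools
raw_data = {
    "school_name": ["School A", "School B", "School C"],
    "borough": ["Manhattan", "Brooklyn", "Queens"],
    "average_math": [657, 613, 525],
    "average_reading": [601, 453, 480],
    "average_writing": [601, 463, 483],
}

def load_data(data: dict) -> pd.DataFrame:
    """Turn the raw dictionary into a DataFrame for the rest of the pipeline."""
    return pd.DataFrame(data)

schools = load_data(raw_data)
```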

Data Cleaning Complexities

Ensuring data integrity was challenging. Dealing with missing values and potential data entry errors required meticulous attention. I had to decide whether to impute missing values or exclude certain data points, balancing data accuracy against completeness.
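To illustrate the trade-off, here are the two options side by side as a small sketch (median imputation versus dropping rows); which one is right depends on how much data is missing and why:

```python
import pandas as pd

score_cols = ["average_math", "average_reading", "average_writing"]  # assumed column names

def impute_missing(df: pd.DataFrame) -> pd.DataFrame:
    # Option 1: fill missing scores with each column's median, keeping every school
    filled = df.copy()
    for col in score_cols:
        filled[col] = filled[col].fillna(filled[col].median())
    return filled

def drop_missing(df: pd.DataFrame) -> pd.DataFrame:
    # Option 2: drop any school with a missing score, keeping only fully reported rows
    return df.dropna(subset=score_cols)
```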

Visualization Nuances

Creating meaningful visualizations was more complex than anticipated. Choosing the right type of chart and customizing it for clarity took several iterations. Tuning the visual style so the plots were both informative and aesthetically pleasing was a valuable exercise.


Demonstrating Growth

This project was a significant milestone in my learning journey. Here's how it contributed to my growth:

Enhanced Technical Skills

  • Pandas and NumPy: Deepened my ability to manipulate and analyze data using these libraries.
  • Matplotlib: Improved my skills in data visualization, which is crucial for data storytelling.

Improved Code Organization

By modularizing the code into functions such as load_data(), clean_data(), and visualize_data(), I learned the importance of code reusability and readability.
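Tying it together, the entry point ends up being a short, readable pipeline along these lines (analyze() is an illustrative name for the analysis step; load_data(), clean_data(), and visualize_data() are the functions mentioned above):

```python
def main():
    schools = load_data(raw_data)                       # build the DataFrame
    schools = clean_data(schools)                       # fix types, duplicates, missing values
    best_math, top_10, largest_std = analyze(schools)   # run the summary analyses
    visualize_data(schools, top_10)                     # render the charts

if __name__ == "__main__":
    main()
```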

Real-World Application Awareness

Working on this project bridged the gap between theoretical knowledge and real-world application. It gave me a sense of how data science can inform educational decisions and policy-making.


Highlighting Versatility

This project covered a range of topics and challenges:

  • Data Cleaning: Handling missing values, data types, and duplicates.
  • Statistical Analysis: Calculating means, standard deviations, and interpreting them.
  • Data Visualization: Creating various charts to represent data insights.
  • Python Programming: Writing efficient, modular code with proper documentation.

By integrating these elements, I developed a more holistic understanding of the data science workflow.


Seeking Feedback

I'm eager to improve and learn from the community. Here are a few areas where I'd appreciate your insights:

  1. Data Visualization Best Practices: How can I enhance my charts for better clarity and impact? Are there other libraries or tools you recommend?

  2. Statistical Analysis Depth: What additional statistical methods could provide more insights into the data?

  3. Real Dataset Integration: Do you have suggestions for sourcing real NYC school data and handling the complexities that come with larger datasets?

  4. Code Optimization: Any advice on making the code more efficient or readable would be highly valued.


Conclusion

Completing the NYC Schools Analysis project was both challenging and rewarding. It allowed me to apply and expand my skills in data cleaning, analysis, and visualization. I'm excited to continue this journey and tackle more complex projects.

Feel free to check out the project on my GitHub repository:


Thank you for taking the time to read about my project. I look forward to your feedback and suggestions!


Connect with Me:
