Today I built out schemas for both PostgreSQL and MongoDB. Between the two, I decided to go with PostgreSQL for my first implementation because the data seems more relational, and it's also a great opportunity to learn a new technology. We were then given CSV files containing over one million entries each; just trying to open one froze my laptop! A big issue I had was figuring out how to use the Extract, Transform, Load (ETL) method to get the entries into my database. The data was already extracted, so all I needed to do was transform and load it. What I ended up doing was restructuring my schema so that each CSV file has a table matching its layout, which means the data can be copied in with almost no transformation. Then I used PostgreSQL's COPY command to quickly import each entire file into my database. Easy peasy!
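As a rough sketch of what that looks like, here's the "transform" half in Python: reshape a raw CSV so its columns match the target table before handing it to COPY. The `entries` table, the column names, and the `transform` helper are all hypothetical stand-ins, not the real schema.

```python
import csv
import io

def transform(raw_csv: str, keep: list[str]) -> str:
    """Reshape a raw CSV so its columns match the target table:
    keep only the columns the table defines, in schema order.
    (Column names here are illustrative, not the real schema.)"""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = io.StringIO()
    # extrasaction="ignore" silently drops columns the table doesn't have
    writer = csv.DictWriter(out, fieldnames=keep, extrasaction="ignore")
    writer.writeheader()
    for row in reader:
        writer.writerow(row)
    return out.getvalue()

# Hypothetical raw export with an extra column the table doesn't need.
raw = "id,name,debug_flag\n1,ada,x\n2,grace,y\n"
cleaned = transform(raw, keep=["id", "name"])
print(cleaned)
```

Once the cleaned file is written to disk, the load step is a single command; for example, from `psql` (hypothetical table/file names again): `\copy entries FROM 'entries_clean.csv' WITH (FORMAT csv, HEADER true)`. Using client-side `\copy` rather than server-side `COPY ... FROM` avoids needing file access on the database server.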