#SQL30 Day 8: Deleting Duplicates

Welcome to the SQL showdown series!

What is this and how does it work?

I'm committing to publishing a SQL challenge every day for 30 days. In each post I'll describe my solution to the previous day's challenge, followed by a full description of the next day's challenge.

Write your own solution in the comments! Let's see who can come up with the most creative solutions.

I'll add connection details for a PostgreSQL database containing test data for the challenge. Solutions are by no means limited to PostgreSQL, but it's there if you want an easy way to test your query!

Challenge #8: Deleting Duplicates

Today I'd like to pose a challenge focused on detecting duplicates within a table. The ultimate goal is to delete all duplicates from the table, while preserving one row from each group, such that we're left only with the unique values.

Deletes aren't allowed on the sandbox server, so I'll stop short of asking for the actual delete query, and instead ask only for the select query that could be passed to the delete.

Here's the challenge itself:

Can you write a select query that could be used in a delete statement to make the rows of a table unique according to a natural key?

We'll revisit the employee table from day 5 for this challenge. Here's a sample from the table.

(Image: sample rows from the day8.employee table)

You can see that Hans Volkman is repeated 3 times. We'd like to end up with only one Hans Volkman after running the delete statement.
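
Before tackling the delete itself, it can help to see which groups actually contain duplicates. Here's a rough sketch of that check; I'm assuming first_name and last_name form the natural key, so adjust to the real columns once you've connected:

-- Count how many rows share each (assumed) natural key.
SELECT   first_name, 
         last_name, 
         Count(*) AS copies 
FROM     day8.employee 
GROUP BY first_name, 
         last_name 
HAVING   Count(*) > 1;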

Sandbox Connection Details

I have a PostgreSQL database ready for you to play with.

(Image: sandbox connection details)

The password is the same as the username! To query the employee table:

SELECT * FROM day8.employee;

Solution to Challenge #7: Grouping Sets

This is the question we were trying to answer with yesterday's SQL challenge:

Given supermarket sales history, can you display sales not only by location and category, but also at the monthly and yearly levels?

Remember, in this challenge we wanted the output to appear something like this:

(Image: sample output showing monthly sales by city and product line, with yearly subtotal rows)

That is, we wanted sales summed up to the month, but with a yearly subtotal for each location and category.

Not long ago, I probably would have reached for Python to do this kind of multi-level aggregation, but it turns out there's a perfectly good way to do this in SQL.

GROUP BY, apparently, has a hidden superpower called grouping sets.

Grouping sets allow you to group at several different levels. Let's take a look at the query that produced the output above.

SELECT   s.city, 
         s.product_line, 
         Date_part('month', sale_date) AS monthly, 
         Date_part('year', sale_date)  AS yearly, 
         Sum(total) 
FROM     day7.supermarket s 
GROUP BY GROUPING SETS (((city, product_line), yearly, monthly), ((city, product_line), yearly)) 
ORDER BY city, 
         product_line, 
         monthly, 
         yearly;

As you can see in the output, any row produced by a grouping set that omits a column (in this case, the month) will have a NULL value for that column.
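
Those NULLs can be ambiguous if the underlying data itself contains NULLs. PostgreSQL's GROUPING() function helps there: it returns 1 when a column was left out of the grouping set that produced a row, and 0 otherwise. Here's a sketch of how it could be bolted onto the same query (spelling out the date_part expressions in the grouping sets so they match the GROUPING() argument):

SELECT   s.city, 
         s.product_line, 
         Date_part('month', sale_date) AS monthly, 
         Date_part('year', sale_date)  AS yearly, 
         Sum(total), 
         -- 1 on yearly subtotal rows (month rolled away), 0 on monthly rows
         Grouping(Date_part('month', sale_date)) AS is_year_subtotal 
FROM     day7.supermarket s 
GROUP BY GROUPING SETS (((city, product_line), Date_part('year', sale_date), Date_part('month', sale_date)), 
                        ((city, product_line), Date_part('year', sale_date)));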

SQL also has a few shorthand notations, such as ROLLUP and CUBE, that produce commonly needed grouping sets.

I could change the above query to use ROLLUP like so:

SELECT   s.city, 
         s.product_line, 
         Date_part('month', sale_date) AS monthly, 
         Date_part('year', sale_date)  AS yearly, 
         Sum(total) 
FROM     day7.supermarket s 
GROUP BY ROLLUP ((city, product_line), yearly, monthly)
ORDER BY city, 
         product_line, 
         monthly, 
         yearly;

This would actually be equivalent to the following sets:

  • ((city, product_line), yearly, monthly): Sales broken down to the month.
  • ((city, product_line), yearly): Sales broken down to the year.
  • ((city, product_line)): All sales for each city/product line.
  • (): The grand total of all sales, across every city, product line, and date.
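
CUBE, by contrast, produces every combination of the listed elements rather than the nested hierarchy that ROLLUP builds. As a rough sketch against the same table, CUBE ((city, product_line), yearly) would expand to ((city, product_line), yearly), ((city, product_line)), (yearly), and ():

SELECT   s.city, 
         s.product_line, 
         Date_part('year', sale_date) AS yearly, 
         Sum(total) 
FROM     day7.supermarket s 
GROUP BY CUBE ((city, product_line), yearly)
ORDER BY city, 
         product_line, 
         yearly;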

Good luck!

Have fun, and I can't wait to see what you come up with! I'll be back tomorrow with a solution to this problem and a new challenge.
