I use MongoDB for logs and time-series data, a workload it is usually perfect for. With large datasets (I'm storing 100 million+ time-series log entries), SQL databases can become sluggish, even with well-chosen indexes and optimized queries.
However, for 90% of my use cases, I still stick with MySQL and other SQL databases.
Why?
SQL is mature and easy to work with, especially if your data is deeply relational. MongoDB is great for flat, document-shaped data structures.
Here's an example of a painful MongoDB query:
pipeline = [
    {
        '$match': {
            'merchant_id': 2138
        }
    },
    {
        '$group': {
            '_id': {
                'date': '$date'
            },
            'menu_special_price': {'$min': '$menu_special_price'},
            'menu_price': {'$max': '$menu_price'}
        }
    }
]
A simple aggregation pipeline. The task is trivial, yet the query looks convoluted.
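To make concrete what the pipeline actually computes, here is a minimal pure-Python sketch of the same match-then-group logic over a handful of made-up sample documents (no MongoDB server required; the field values are invented for illustration):

```python
from collections import defaultdict

# Made-up sample documents standing in for the `menus` collection.
docs = [
    {"merchant_id": 2138, "date": "2023-01-01", "menu_special_price": 5, "menu_price": 12},
    {"merchant_id": 2138, "date": "2023-01-01", "menu_special_price": 7, "menu_price": 15},
    {"merchant_id": 2138, "date": "2023-01-02", "menu_special_price": 6, "menu_price": 10},
    {"merchant_id": 999,  "date": "2023-01-01", "menu_special_price": 1, "menu_price": 2},
]

# $match stage: keep only one merchant's documents.
matched = [d for d in docs if d["merchant_id"] == 2138]

# $group stage: per date, take the min special price and max regular price.
groups = defaultdict(lambda: {"min_special": float("inf"), "max_price": float("-inf")})
for d in matched:
    g = groups[d["date"]]
    g["min_special"] = min(g["min_special"], d["menu_special_price"])
    g["max_price"] = max(g["max_price"], d["menu_price"])

print(dict(groups))
```

In a real application you would hand the pipeline to `collection.aggregate(pipeline)` via pymongo, but the grouping semantics are exactly what this loop shows.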
In SQL, you could do:
SELECT
    date,
    MIN(menu_special_price) AS min_price,
    MAX(menu_price) AS max_price
FROM menus
WHERE merchant_id = 2138
GROUP BY date;
Which is easier to maintain and read?
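The SQL version can be checked end to end with Python's built-in sqlite3 module. The table name and columns mirror the query above; the rows are made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE menus "
    "(merchant_id INTEGER, date TEXT, menu_special_price REAL, menu_price REAL)"
)
conn.executemany(
    "INSERT INTO menus VALUES (?, ?, ?, ?)",
    [
        (2138, "2023-01-01", 5, 12),
        (2138, "2023-01-01", 7, 15),
        (2138, "2023-01-02", 6, 10),
        (999,  "2023-01-01", 1, 2),   # another merchant, filtered out by WHERE
    ],
)

rows = conn.execute("""
    SELECT date,
           MIN(menu_special_price) AS min_price,
           MAX(menu_price) AS max_price
    FROM menus
    WHERE merchant_id = 2138
    GROUP BY date
    ORDER BY date
""").fetchall()

print(rows)
```

One query, one mental model: filter, group, aggregate, all on a single line each.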
Do you like this sort of content? Please subscribe to my newsletter here to get weekly thought-provoking articles on web dev and machine learning.