DEV Community

Discussion on: Why does writing code for DynamoDb get my spidey senses tingling?

Conor Woods Author

Hi Andrew, I appreciate the response, but in truth I was hoping to see some code showing how you would personally solve the task using DynamoDB. So I'll tell you how I would approach it with DynamoDB, and how I'll probably actually end up approaching it.

Just one of the API calls I need to satisfy is as follows:

The data/schema for a product currently looks like the following:
"entity": "Product",
"sk": "ORG-1#PRODUCT",
"val": {
"name": "Product 1",
"orgId": "1",
"id": "EwfoHf7zAdRvNsiHw2SbxTeSnPb2",
"categories": [
"name": "Business",
"primary": true
"name": "Executive"
"name": "Career"
"status": "ACTIVE",
"createDate": "2020-07-23T14:56:29.994Z"
"pk": "PRODUCT-EwfoHf7zAdRvNsiHw2SbxTeSnPb2",
"updatedDateTime": "2020-07-23T14:56:29.994Z",
"entityId": "EwfoHf7zAdRvNsiHw2SbxTeSnPb2"

If I were to solve this with DynamoDB, I would either use a GSI storing a composite key of the categories, or denormalize and duplicate the data. However, even if I could do a full-text search on the composite key (which I don't believe you can), it won't help with a multi-category search (unless I used a filter). And if I went down the denormalization route, I would still have the full-text issue, and I would also need to write the code and employ extra infrastructure to manage the duplication (a stream + SNS, probably).
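The GSI route could be sketched roughly like this. Everything here is illustrative, not from the actual table design: the index name (`byCategory`) and key attribute (`gsi1pk`) are hypothetical, and the function only builds the Query request parameters.

```python
# Sketch of the composite-key GSI idea: a hypothetical "byCategory" index
# whose partition key combines the org and a single category name.
def category_query_params(org_id, category):
    """Build a DynamoDB Query request for all products in one category."""
    return {
        "TableName": "products",
        "IndexName": "byCategory",
        "KeyConditionExpression": "gsi1pk = :pk",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"ORG-{org_id}#CATEGORY-{category}"}
        },
    }

# A multi-category search then needs one such Query per category (or a
# Scan with a filter) -- exactly the hoop-jumping described above.
params = category_query_params("1", "Business")
```

With boto3 you would pass this straight through as `client.query(**params)`; the point is that each Query can only ever hit one composite key.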

How I'll actually end up solving it is by introducing another piece of infrastructure, like Elasticsearch, and populating it via DynamoDB Streams.
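A minimal sketch of that pipeline, assuming a Lambda subscribed to the table's stream that pushes documents into a search index. Only the record-to-document mapping is shown; the Lambda handler and the Elasticsearch client call are omitted. The item shape mirrors the product schema above.

```python
# Map one DynamoDB Stream record to a flat document suitable for
# full-text indexing (e.g. in Elasticsearch). Stream images arrive in
# DynamoDB's attribute-value format ({"S": ...}, {"M": ...}, {"L": ...}).
def to_search_doc(record):
    """Return an indexable document, or None for non-upsert events."""
    if record["eventName"] not in ("INSERT", "MODIFY"):
        return None  # a REMOVE event would become a delete request instead
    image = record["dynamodb"]["NewImage"]
    val = image["val"]["M"]
    return {
        "id": image["pk"]["S"],
        "name": val["name"]["S"],
        "categories": [c["M"]["name"]["S"] for c in val["categories"]["L"]],
    }
```

This is the extra code and infrastructure being complained about: even the "easy" path means unpacking the attribute-value format and operating a second datastore.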

My point is this: I'm jumping through hoops, adding extra infrastructure and writing way more code to solve previously simple tasks.

I am aware of the trade-offs, but I really wish there were an abstraction over DynamoDB to take the development pain away. I want to have my cake and eat it: I want the blazing-fast speed, and I want to write/support/maintain as little code as possible.

Maybe I'm just lazy, but something doesn't sit right with me.

Andrew Berth

I did not write any code because, as I suspected, no amount of code is going to do what you want.

DynamoDB’s entire ‘thing’ is getting certain chunks of data (partitions) really fast, no matter how much data you’re dealing with. It does this by keeping all related data together: no multiple tables, no joins. The guidance is to duplicate your data in the shape you’re going to access it in, all to save you from having to assemble it when you’re looking for it.

Getting all products from one or more categories should be no problem. With a GSI, as you said, you could partition your data by category. Then a single Query against the index gets you one category, and one Query per category gets you several. (GetItem and BatchGetItem won’t work here: they require the table’s own primary key, and a GSI can only be read with Query or Scan.)
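Because each Query hits one category partition, the multi-category case ends with a small client-side merge. A sketch, with hypothetical result pages:

```python
# Merge the Items from several per-category Queries, deduplicating
# products that appear in more than one category (keyed by "pk").
def merge_category_results(*result_pages):
    seen, merged = set(), []
    for page in result_pages:
        for item in page:
            if item["pk"] not in seen:
                seen.add(item["pk"])
                merged.append(item)
    return merged
```

Trivial code, but it is application code that a relational `WHERE category IN (...)` would have made unnecessary, which is the crux of the complaint.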

Free text search, on the other hand, is an entirely different beast. Free text search is all about getting some arbitrary values out of a certain group. DynamoDB was not made to do this kind of thing. The same goes for relational databases, really. The LIKE operator lets you do some searching, but in a really limited way. (And it’s slow, which Dynamo does not allow you to be.) That’s why they made an Elasticsearch integration.

Now, of course it would be nice to have one service that would do exactly what you need, quickly, and with an API so simple even a toddler could use it. I know I would use the features you’re describing in a heartbeat. But for now, it simply does not exist.

Conor Woods Author

Don't disagree with anything there Andrew.
By chance I came across this from the Burning Monk (Yan Cui).
Now we're getting somewhere.
Coupled with Jeremy Daly's DynamoDB Toolbox, it could scratch an itch.