DEV Community

Jonathan Gamble
DGraph Advanced Data Modeling: Part 1 - Timestamps

What is DGraph?

In my opinion, DGraph is the strongest competitor to Firestore there is (and much, much more). Imagine Firestore with subscriptions, plus a graph database for complex searching, plus any kind of relational data you can think of, wrapped in a GraphQL interface or a custom backend language called DQL.

Timestamps huh?

While DGraph is new and the team works on implementing every feature we can think of, timestamps seem to be one of the most requested. There is an active pull request on GitHub for this feature (dpeek, seriously thank you for your hard work -- and anytime the Dgraph team works on GraphQL features, thank you guys too!), but it is not quite available to the public. Once approved, it will be available to non-cloud users immediately (or now, if they want to use a custom backend), and to cloud users hopefully anywhere in the next 0 days to 6 months. At that point I will update this post, as it will still contain valuable information for some users. It is also good to know that the aforementioned feature will not protect users from updating their own timestamps (which you don't want) until the update-after @auth feature is added. The method in this post does protect them.

Okay, so what can I do about it now!?

That, my friends, is why I wrote this post. You can get your precious timestamps now if you just model your data a certain way and add a little backend code.

Granted, you could just create a custom mutation, which is DGraph's official workaround, but then you have to lock all adds and updates on your schema, which is not a good thing. That said, custom mutations can solve pretty much any problem if you are willing to do all mutations through them.

DGraph currently does not support pre-mutation triggers (hooks), but it does support post-mutation hooks. I thought about this for a while, as the model is not intuitive until you work through it.
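To make the post-hook model concrete, here is a rough TypeScript sketch of the event payload a lambda webhook receives. The shape is inferred from the fields the lambda in this post actually reads (`operation`, `__typename`, `rootUIDs`) -- treat it as an assumption, not the official Dgraph type definition.

```typescript
// Assumed shape of a lambda webhook event, inferred from the fields
// used by the updateTimestamps lambda -- not an official Dgraph type.
interface WebhookEvent {
    __typename: string;                 // e.g. 'Post' or 'User'
    operation: 'add' | 'update' | 'delete';
    add?: { rootUIDs: string[] };       // populated on add events
    update?: { rootUIDs: string[] };    // populated on update events
    delete?: { rootUIDs: string[] };    // populated on delete events
}

// Example: pulling the mutated uid out of an update event
const event: WebhookEvent = {
    __typename: 'Post',
    operation: 'update',
    update: { rootUIDs: ['0x123'] }
};
const uid = event[event.operation]?.rootUIDs[0];
console.log(uid); // '0x123'
```

The key point is that the hook fires *after* the mutation, so all it gets is the operation name, the type, and the uids that were touched -- everything else has to be looked up or upserted with DQL.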

Get on with it...

Okay, so here is the setup:

schema.graphql

type User @lambdaOnMutate(add: true, update: true, delete: true) {
  id: ID!
  posts: [Post] @hasInverse(field: author)
  ...
  timestamp: Timestamp
}
type Post @lambdaOnMutate(add: true, update: true, delete: true) {
  id: ID!
  author: User!
  ...
  timestamp: Timestamp
}
type Timestamp @auth(
  add: { rule: "{ $NEVER: { eq: \"ALWAYS\" } }"},
  update: { rule: "{ $NEVER: { eq: \"ALWAYS\" } }"},
  delete: { rule: "{ $NEVER: { eq: \"ALWAYS\" } }"}
) {
  createdAt: DateTime
  updatedAt: DateTime
  post: Post
  user: User
}
  • Notice there are no required ! symbols on any item related to the timestamps. This is because everything is done in the lambda.
  • Notice there is no @hasInverse on anything related to timestamps. This is because everything is handled internally. On a GraphQL delete, one of the two connections will be deleted. The lambda needs the other connection to find the timestamp node to delete it. If you don't understand this, don't worry, just don't use hasInverse on any node related to timestamps.
  • We have to create a separate Timestamp node type in order to secure it from bad programming or pirate users. Once either the update-after @auth feature is implemented next year, or another feature adds field-level auth, this will not be needed. I will talk more about circumventing this problem in future posts.

lambdas.ts

async function updateTimestamps({ event, dql }) {

    // delete events remove the timestamp edges; add/update set them
    const op = event.operation === 'delete' 
        ? 'delete'
        : 'set';

    // adds stamp createdAt; updates (and deletes) touch updatedAt
    const field = event.operation === 'add'
        ? 'createdAt'
        : 'updatedAt';

    // uid of the node that was just mutated
    const uid = event[event.operation].rootUIDs[0];

    // 'Post' -> inverse predicate name 'post' (see the naming note below)
    const type: string = event.__typename;
    const invType = type.toLowerCase();

    const date = new Date().toISOString();
    const child = 'Timestamp';
    const invChild = child.toLowerCase();

    // upsert: find the Timestamp node already linked to this uid;
    // if it exists (len(t) == 1) mutate it in place, otherwise
    // (len(t) == 0) create it as the blank node _:new
    const args = `
        upsert {
            query {
                t as var(func: type(${child})) 
                @filter(uid_in(${child}.${invType}, ${uid}))
            }
            mutation @if(eq(len(t), 1)) {
                ${op} {
                    <${uid}> <${type}.${invChild}> uid(t) .
                    uid(t) <${child}.${invType}> <${uid}> .
                    uid(t) <${child}.${field}> "${date}" .
                    uid(t) <dgraph.type> "${child}" .
                }  
            }
            mutation @if(eq(len(t), 0)) {
                ${op} {
                    <${uid}> <${type}.${invChild}> _:new .
                    _:new <${child}.${invType}> <${uid}> .
                    _:new <${child}.${field}> "${date}" .
                    _:new <dgraph.type> "${child}" .
                }
            }
        }`;
    const r = await dql.mutate(args);
    console.log(r);
}

(self as any).addWebHookResolvers({
    "User.add": updateTimestamps,
    "User.update": updateTimestamps,
    "User.delete": updateTimestamps
});

(self as any).addWebHookResolvers({
    "Post.add": updateTimestamps,
    "Post.update": updateTimestamps,
    "Post.delete": updateTimestamps
});
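To see what the lambda actually sends to DQL, the string construction can be factored into a pure function and inspected without a running Dgraph instance. `buildTimestampUpsert` is a hypothetical refactor for illustration; it mirrors the logic above but takes the event fields as plain arguments.

```typescript
// Hypothetical helper mirroring the string-building logic in
// updateTimestamps, so the generated DQL upsert can be unit-tested.
function buildTimestampUpsert(
    operation: 'add' | 'update' | 'delete',
    typename: string,
    uid: string,
    date: string
): string {
    const op = operation === 'delete' ? 'delete' : 'set';
    const field = operation === 'add' ? 'createdAt' : 'updatedAt';
    const invType = typename.toLowerCase();
    const child = 'Timestamp';
    const invChild = child.toLowerCase();
    return `
        upsert {
            query {
                t as var(func: type(${child}))
                @filter(uid_in(${child}.${invType}, ${uid}))
            }
            mutation @if(eq(len(t), 1)) {
                ${op} {
                    <${uid}> <${typename}.${invChild}> uid(t) .
                    uid(t) <${child}.${invType}> <${uid}> .
                    uid(t) <${child}.${field}> "${date}" .
                    uid(t) <dgraph.type> "${child}" .
                }
            }
            mutation @if(eq(len(t), 0)) {
                ${op} {
                    <${uid}> <${typename}.${invChild}> _:new .
                    _:new <${child}.${invType}> <${uid}> .
                    _:new <${child}.${field}> "${date}" .
                    _:new <dgraph.type> "${child}" .
                }
            }
        }`;
}

const args = buildTimestampUpsert('add', 'Post', '0x42', '2022-01-01T00:00:00.000Z');
console.log(args.includes('<0x42> <Post.timestamp> _:new')); // true -- blank-node branch
console.log(args.includes('<Timestamp.createdAt>'));         // true -- adds stamp createdAt
```

Pulling the template out this way also makes it easy to eyeball the two `@if` branches: one links the existing Timestamp node, the other creates it fresh.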

The beauty of this code is that it can be reused on any node, and you can even use the function within another function if you need to run other post-hook tasks.
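Reusing the function inside another handler can be sketched as a simple composition. This is illustrative only: the stub `updateTimestamps` stands in for the real function above, and `logAudit` is a hypothetical extra post-hook task, so the pattern runs on its own.

```typescript
// Sketch: composing updateTimestamps with other post-hook work.
type HookArgs = { event: any; dql: any };
type Handler = (args: HookArgs) => Promise<void>;

const calls: string[] = [];
const updateTimestamps: Handler = async () => { calls.push('timestamps'); }; // stub for the real lambda
const logAudit: Handler = async () => { calls.push('audit'); };              // hypothetical extra task

// Run several post-hook tasks in order from a single webhook resolver
function compose(...handlers: Handler[]): Handler {
    return async (args) => {
        for (const h of handlers) {
            await h(args);
        }
    };
}

const onPostMutation = compose(updateTimestamps, logAudit);

// In the real lambda you would register the composed handler once
// per operation, exactly like the plain handler above:
// (self as any).addWebHookResolvers({
//     "Post.add": onPostMutation,
//     "Post.update": onPostMutation,
//     "Post.delete": onPostMutation
// });
```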

I write everything in typescript as I believe pure javascript is evil. 😈😡

Simply run tsc lambdas.ts in your typescript-enabled framework (which you should be using) and copy the contents of the lambdas.js file it creates into your lambda text box on DGraph Cloud, or wherever. You can also automate this with the Dgraph Cloud API, but that is for another post.

Notes

  • You can only use this on a new database, as it will not automatically create timestamps on old data (this should be obvious, but just adding it in case you can't quite wrap your brain around it yet)
  • For the moment, lambda webhooks do not run on nested updates, so if you want this to work for those cases, you will also have to add a webhook on the parent object's add, update, and delete methods, if and only if there is new data coming in on this node.
  • This method DOES automatically add, update, and delete timestamps for all other cases. Let me know if you see any bugs.
  • IMPORTANT! - You must keep the inverse nodes (ex: Timestamp.post) as the same name as the main node, but lowercase. The same goes for the Timestamp field. This is how it automatically knows how to find the node. (Post.timestamp <=> Timestamp.post)
  • Add all your regular @auth stuff to the parent nodes, as the NEVER auth will prevent them from touching the timestamps.
  • A user with the custom claim { NEVER: 'ALWAYS' } could technically pierce this @auth, so don't add that!
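The naming convention in the IMPORTANT bullet can be checked mechanically. This small sketch (a hypothetical helper, not part of the lambda) shows how the predicate names are derived from the type name, which is exactly how the lambda finds the Timestamp node:

```typescript
// Derive the two predicate names that must pair up for a given type:
// the parent's field is the child type lowercased ('timestamp'), and
// the inverse field on Timestamp is the parent type lowercased.
function timestampPredicates(typename: string) {
    const child = 'Timestamp';
    return {
        parentField: `${typename}.${child.toLowerCase()}`,  // e.g. 'Post.timestamp'
        inverseField: `${child}.${typename.toLowerCase()}`  // e.g. 'Timestamp.post'
    };
}

console.log(timestampPredicates('Post'));
// { parentField: 'Post.timestamp', inverseField: 'Timestamp.post' }
```

If either name deviates from this scheme, the lambda's `uid_in(Timestamp.<type>, uid)` filter will never match and the upsert will silently create orphan nodes, so it is worth double-checking your schema against this rule.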

Next up -- following the same pattern -- @cascade Delete -- Coming Soon...

J

Top comments (5)

Ben Woodward

Super useful! Didn't realise there was a solution to this. Fingers crossed this will be shipped in the next Dgraph version soon.

Jonathan Gamble

It could be, but unless they also add the update-after-auth method to secure it, we can't do much with it!

Juri Jurka

"but then you have to lock all adds and updates on your schema, which is not a good thing" -- what exactly do you mean by that? That the data added through DQL mutations is not accessible via GraphQL (because of that whole transpilation thing)? Or do you mean something different?

Jonathan Gamble

If you create a custom mutation -- which is different from using a lambda webhook, what this article is talking about -- then a person could still add data through add / update unless you prevent them from using the regular GraphQL endpoint by adding the NEVER / ALWAYS rule above. Basically, your regular GraphQL endpoint can still add / update data without the timestamp, so you must "lock it" to stop people from adding data that way.

Juri Jurka

Thank you very much, this article will help me a lot!!