Serhii Kucherenko

React with GraphQL: Optimistic Response - What & Why

The Problem

You start your new project with React & GraphQL. It's super cool, and you're trying to make it super fast. But your backend is reaaaally slow, and all these loaders are unbelievably annoying. You are a client-oriented developer, so you're doing your best to solve this problem. Luckily for you, since you're using GraphQL, you can use the Optimistic UI approach.

Official definition: Optimistic UI is a pattern that you can use to simulate the results of a mutation and update the UI even before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.

I couldn't have said it better :)
So we simply simulate that the backend sends us the response right away, so from the user's perspective every response arrives instantly.

How to use it?

When we call a mutation, we can also pass an additional optimisticResponse property and describe the result we expect to eventually receive from the backend. Here's an example:

updateComment({
  variables: { commentId, commentContent },
  // The "fake" result that is written to the cache immediately,
  // before the real response arrives from the server
  optimisticResponse: {
    __typename: 'Mutation',
    updateComment: {
      __typename: 'Comment',
      id: commentId,
      content: commentContent,
    },
  },
})

Basically, when you call this mutation, the GraphQL client will instantly update the cache for this comment with the new data.
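If you're wondering where this call lives, here's a minimal sketch of the same mutation wired into a component with Apollo Client's useMutation hook. The UPDATE_COMMENT document, its field names, and the CommentEditor component are assumptions for illustration, not part of the original post:

import { gql, useMutation } from '@apollo/client';

// Hypothetical mutation document; your schema's fields may be named differently
const UPDATE_COMMENT = gql`
  mutation UpdateComment($commentId: ID!, $commentContent: String!) {
    updateComment(commentId: $commentId, content: $commentContent) {
      id
      content
    }
  }
`;

function CommentEditor({ commentId, initialContent }) {
  const [updateComment] = useMutation(UPDATE_COMMENT);

  const handleSave = (commentContent) => {
    updateComment({
      variables: { commentId, commentContent },
      // Applied to the cache right away; replaced once the server responds
      optimisticResponse: {
        __typename: 'Mutation',
        updateComment: {
          __typename: 'Comment',
          id: commentId,
          content: commentContent,
        },
      },
    });
  };

  // ...render an input with initialContent and call handleSave on submit
}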


Also, you can update the cache manually when the data arrives from the backend, but that's a totally different story. For now, check this feature out and write in the comments what you think about it.
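As a small teaser for that story: the manual approach usually goes through the mutation's update callback, which runs both for the optimistic result and for the real response. Here's a rough sketch, assuming Apollo Client 3's cache.modify API and the same hypothetical mutation as above:

updateComment({
  variables: { commentId, commentContent },
  optimisticResponse: {
    __typename: 'Mutation',
    updateComment: {
      __typename: 'Comment',
      id: commentId,
      content: commentContent,
    },
  },
  // Runs once with the optimistic result and again with the real server response
  update: (cache, { data }) => {
    cache.modify({
      id: cache.identify({ __typename: 'Comment', id: commentId }),
      fields: {
        // Overwrite the cached content with whatever the mutation returned
        content: () => data.updateComment.content,
      },
    });
  },
});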

For more details, go to the official Apollo GraphQL documentation.
