<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Will Green</title>
    <description>The latest articles on DEV Community by Will Green (@hotgazpacho).</description>
    <link>https://dev.to/hotgazpacho</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F170727%2Ff12f1579-ab07-4448-900f-574ef6fe24ae.jpeg</url>
      <title>DEV Community: Will Green</title>
      <link>https://dev.to/hotgazpacho</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hotgazpacho"/>
    <language>en</language>
    <item>
      <title>My Experience with AI, One Month In</title>
      <dc:creator>Will Green</dc:creator>
      <pubDate>Thu, 05 Feb 2026 23:32:32 +0000</pubDate>
      <link>https://dev.to/hotgazpacho/my-experience-with-ai-one-month-in-2604</link>
      <guid>https://dev.to/hotgazpacho/my-experience-with-ai-one-month-in-2604</guid>
      <description>&lt;p&gt;I’ve been pretty skeptical about AI for software development. I recently started a new job at &lt;a href="https://netwrix.com/" rel="noopener noreferrer"&gt;Netwrix&lt;/a&gt;, and I’ve been encouraged to really give it a go. So, I’ve taken this first month to try it out in earnest. We’re using &lt;a href="https://claude.com/product/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;. Here’s what I’ve found:&lt;/p&gt;

&lt;p&gt;It’s really helpful to get a high-level overview of existing code bases. I can clone repositories and ask Claude to summarize the project. I get a 10,000-foot overview of the architecture, capabilities, and dependencies. I can ask it to focus in on areas of interest and ask deeper questions. As a new employee in a high-level engineering IC role, this capability is invaluable in getting the lay of the land.&lt;/p&gt;

&lt;p&gt;I’ve also found it useful, when combined with the &lt;a href="https://github.com/obra/superpowers" rel="noopener noreferrer"&gt;Superpowers&lt;/a&gt; plugin and skills, for designing, planning, and executing features. I start with the “brainstorming” skill. I describe at a high level what I want to do, like “add backend for frontend handling for OIDC” or “set up CI with artifact publishing on merges to main, and optional publication of preview artifacts on pull requests”. It will then get to work, searching the web as needed, and ask me some clarifying questions. I’ll go back and forth with it, and when it thinks it has a design, it presents me with a design markdown document. It goes through it with me section by section, and we review it, perhaps asking questions on things that are unclear to me or just don’t seem right. Claude will work some more, often presenting me with options. From there, I’ll ask about the trade-offs and sometimes ask about an option I know about that it didn’t consider. This back and forth will continue until I’m satisfied with the design. I can then take this design to the team, product security, and stakeholders as needed to make sure we agree. If necessary, we can fire up Claude again and tell it we want to revise the design. &lt;strong&gt;The important thing here is that there is still a human &lt;em&gt;(or multiple humans)&lt;/em&gt; in the loop throughout the design process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we have a design that we like, I’ll use the “write plan” skill, again from Superpowers. I’ll do this in a new session with a fresh context window. Claude will load up the design document, examine the code base, and start breaking it down into workable tasks, organized in phases if there are a lot of them. Claude may again ask some clarifying questions and present me with an implementation plan, again in the form of a markdown document, for review. I use the tool to make any revisions necessary before moving on to implementation. &lt;strong&gt;Again, there’s a human in the loop, supervising and making decisions.&lt;/strong&gt; I find here that if there are more than a dozen or so tasks, it’s a sign that what I’m looking to do is probably too big to be reasonably implemented in a single PR. I haven’t yet settled on the best strategy to deal with this. One option I’m considering is doing one section of work per worktree and opening a PR for each. I know that what I don’t want to do is dump a 10k-line PR and ask my team or Claude to review it.&lt;/p&gt;

&lt;p&gt;When I’m ready to start implementation, I’ll again clear the context and use the “implement plan” skill to start a new worktree. This skill uses the implementation plan from the previous session to start working on the tasks in chunks. It has been given the directive to make commits after every task. As with the previous steps, I remain in the loop, reviewing the code Claude generates before allowing it to commit. If I’m not happy with what I see, I’ll check the implementation plan to make sure that my concern isn’t going to be addressed in a future task. Otherwise, I’ll tell Claude not to commit and to address the issues. Once I’m happy with that, I’ll allow Claude to commit. One of the nice things that this skill does when it crafts commits is that it includes the Co-Authored-By: trailer. This is a clear indicator to others that Claude assisted with this, and it shows as much in GitHub. Another nice thing about the Superpowers plugin is that it includes a subagent that is tasked with reviewing the last commit, checking it against the plan as well as best practices. If it finds issues, it will send the commit back to the implementation agent to correct. Pretty neat!&lt;/p&gt;

&lt;p&gt;Once the tasks are complete, Claude prompts me to open a pull request. When I agree, it crafts a detailed description of the change so that others can better understand the intent of the change. In some cases, it will include a checklist of actions that remain to be taken for verification. I’ve found pretty good success having Claude, after clearing the context, read the PR description and assist me with performing those tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  My overall takeaways:
&lt;/h2&gt;

&lt;p&gt;I’m impressed with Claude’s capabilities, especially when coupled with the Superpowers plugin. &lt;strong&gt;I find it imperative to keep myself in the loop at all phases, to help keep things on track.&lt;/strong&gt; I also find that the Superpowers workflows model the ways in which I would (or should) work in the absence of an AI assistant. In fact, I feel like it is helping me to be more disciplined in that regard, and it creates more work for me when I try to take on too much at once.&lt;/p&gt;

&lt;p&gt;Now that Anthropic has &lt;a href="https://www.anthropic.com/news/claude-opus-4-6" rel="noopener noreferrer"&gt;released Opus 4.6&lt;/a&gt;, it will be interesting to see how the frequency with which I have to manage context changes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Apollo Server(less), REST APIs, DynamoDB, and Caching</title>
      <dc:creator>Will Green</dc:creator>
      <pubDate>Wed, 24 Jul 2019 04:10:01 +0000</pubDate>
      <link>https://dev.to/hotgazpacho/apollo-server-less-rest-apis-and-caching-5jf</link>
      <guid>https://dev.to/hotgazpacho/apollo-server-less-rest-apis-and-caching-5jf</guid>
      <description>&lt;p&gt;My team ran into an interesting issue today with our configuration of Apollo Server, data sources, and our caching layer. It stems from our architecture decisions, and how seemingly sensible defaults can lead to failures. Fortunately, it has a happy ending with a fix in less than a dozen lines of code.&lt;/p&gt;

&lt;p&gt;Let's start from the outside and work our way in...&lt;/p&gt;

&lt;h2&gt;
  
  
  Outermost Layer: Apollo Server
&lt;/h2&gt;

&lt;p&gt;My team, for various reasons that I'm happy to elaborate on in another post, went with an entirely serverless architecture on AWS for our application. That means all our code is deployed in Lambda functions. We love it, but it comes with tradeoffs. At the time of this writing, the biggest of these has to do with interacting with certain other AWS services that can only be deployed within a VPC &lt;em&gt;(Virtual Private Cloud)&lt;/em&gt;. Services such as RDS &lt;em&gt;(Relational Database Service)&lt;/em&gt;, and importantly for this story, ElastiCache &lt;em&gt;(managed Redis &amp;amp; Memcache)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By default, when you deploy a Lambda function, it is not deployed into a VPC. The API calls to invoke the function are open to the Internet, but protected by IAM Authorization. You do have the option to specify a VPC to attach the function to. However, here's where the tradeoffs come in. First of all, you need to configure the VPC to have multiple IP subnets large enough to accommodate the maximum concurrency &lt;em&gt;(how many instances of your function can execute at the same time)&lt;/em&gt; across all the functions that you want to attach to this VPC. Second, you need to configure Security Groups that allow your functions in the Lambda subnets to talk to the resources, such as Redis, in the other subnets. Third, you need to set up VPC Endpoints for any services that you use that aren't natively resident inside a VPC, such as S3 and DynamoDB. Finally, if you access any APIs out on the Internet, you'll also need to set up a NAT Gateway. That's a lot of networking setup, and a lot of things to misconfigure!&lt;/p&gt;

&lt;p&gt;Even if you set up your VPC correctly, you'll soon discover that the cold-start &lt;em&gt;(first time an instance of your function is run)&lt;/em&gt; time is atrocious, on the order of 10-20 seconds before your code can start executing. That's because attaching a Lambda function to a VPC requires setting up an ENI &lt;em&gt;(Elastic Network Interface)&lt;/em&gt; for each instance of your function. ENIs were not designed for quick setup and teardown. This reason alone makes attaching a function to a VPC a non-starter for an API that services a web app.&lt;/p&gt;

&lt;p&gt;For that reason, we opted to forgo anything that required a VPC and &lt;a href="https://www.apollographql.com/docs/apollo-server/deployment/lambda/" rel="noopener noreferrer"&gt;deploy our Apollo Server Lambda&lt;/a&gt; unattached. That means that Redis and Memcache are out of the question for caching.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caching
&lt;/h2&gt;

&lt;p&gt;When I talk about caching here, I'm speaking about &lt;a href="https://www.apollographql.com/docs/apollo-server/features/data-sources/" rel="noopener noreferrer"&gt;DataSource caching&lt;/a&gt;, and specifically how it interacts with &lt;a href="https://www.apollographql.com/docs/apollo-server/features/data-sources/#rest-data-source" rel="noopener noreferrer"&gt;RESTDataSource&lt;/a&gt;. &lt;code&gt;RESTDataSource&lt;/code&gt; allows you to create an abstraction for calling REST APIs in your resolver functions. Its interface allows the Apollo GraphQL engine to insert a cache into the process, such that repeated requests will only hit the network once, and duplicate requests will use the cache. By default, this is an in-memory cache, but if you're running with a concurrency greater than 1, which you almost certainly will with Lambda, you're going to want to use an external cache service, such as Redis. However, Redis needs to be deployed inside a VPC, and as previously discussed, this is a non-starter for our architecture.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://www.npmjs.com/package/apollo-server-cache-dynamodb" rel="noopener noreferrer"&gt;apollo-server-cache-dynamodb&lt;/a&gt;, a Node module that I wrote to use DynamoDB as a key-value cache, with automatic key expiration. You configure this and plug it into the Apollo Server configuration, and it takes care of injecting it into your Data Sources. All your Data Source &lt;code&gt;GET&lt;/code&gt; requests will be cached in DynamoDB, using the request url as the key.&lt;/p&gt;
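
&lt;p&gt;A rough sketch of wiring this in follows. The constructor options shown (the &lt;code&gt;DocumentClient&lt;/code&gt; argument and &lt;code&gt;tableName&lt;/code&gt;) are illustrative assumptions, not a copy of the package's documented API, so check the README before using this verbatim:&lt;/p&gt;

```javascript
// Sketch only: option names here are assumptions; consult the
// apollo-server-cache-dynamodb README for the actual constructor signature.
// typeDefs, resolvers, and MyRestApi (a RESTDataSource subclass) are
// assumed to be defined elsewhere in your project.
const { ApolloServer } = require('apollo-server-lambda');
const { DynamoDBCache } = require('apollo-server-cache-dynamodb');
const DynamoDB = require('aws-sdk/clients/dynamodb');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // The cache is injected into every data source, so RESTDataSource GET
  // responses get stored in DynamoDB, keyed (by default) on the request url.
  cache: new DynamoDBCache(new DynamoDB.DocumentClient(), {
    tableName: 'graphql-cache',
  }),
  dataSources: () => ({ api: new MyRestApi() }),
});

exports.handler = server.createHandler();
```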

&lt;h2&gt;
  
  
  Cache Key
&lt;/h2&gt;

&lt;p&gt;If your Spidey-sense tingled at the thought of using the request url as the cache key, you were right to worry. This, in fact, turned out to be the source of our problems. The way we are querying an API results in a &lt;strong&gt;very long&lt;/strong&gt; query string. So long, in fact, that it breached the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-partition-sort-keys" rel="noopener noreferrer"&gt;DynamoDB limit of 2048 bytes for a partition key&lt;/a&gt;. When the &lt;code&gt;RESTDataSource&lt;/code&gt; class went to cache the response for one of these &lt;strong&gt;very long urls&lt;/strong&gt; in DynamoDB, it would raise an error and cause the entire GraphQL request to fail.&lt;/p&gt;

&lt;p&gt;Fortunately for us, &lt;code&gt;RESTDataSource&lt;/code&gt; provides a number of ways to hook into interesting request and response events. For example, there's &lt;a href="https://github.com/apollographql/apollo-server/blob/df33704cee56603e580bce025a40bacd64967853/packages/apollo-datasource-rest/src/RESTDataSource.ts#L72" rel="noopener noreferrer"&gt;&lt;code&gt;willSendRequest&lt;/code&gt;&lt;/a&gt;, which allows you to &lt;a href="https://www.apollographql.com/docs/apollo-server/features/data-sources/#intercepting-fetches" rel="noopener noreferrer"&gt;set an Authorization header for every request&lt;/a&gt;. There's also &lt;a href="https://github.com/apollographql/apollo-server/blob/df33704cee56603e580bce025a40bacd64967853/packages/apollo-datasource-rest/src/RESTDataSource.ts#L63-L70" rel="noopener noreferrer"&gt;&lt;code&gt;cacheKeyFor&lt;/code&gt;&lt;/a&gt;, which allows you to calculate your own cache key for the request. This is the hook we needed in order to generate a cache key suitable for use as a DynamoDB partition key.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cache Key Calculation
&lt;/h2&gt;

&lt;p&gt;We decided that we'd use a hashing function to calculate a unique identifier for the request url. We quickly realized that we would make our cache ineffective if we didn't take care to sort the query string parameters using a stable sort algorithm. As it turns out, the WHATWG &lt;code&gt;URLSearchParams&lt;/code&gt; interface provides a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/sort" rel="noopener noreferrer"&gt;&lt;code&gt;sort&lt;/code&gt; method&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;URLSearchParams.sort()&lt;/code&gt; method sorts all key/value pairs contained in this object in place and returns &lt;code&gt;undefined&lt;/code&gt;. The sort order is according to unicode code points of the keys. This method uses a stable sorting algorithm (i.e. the relative order between key/value pairs with equal keys will be preserved).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Bazinga! Once we had a way to stably sort the query params, generating an identifier for the request was very straightforward. Initially, we went with a straight &lt;code&gt;sha1&lt;/code&gt; hex digest, but ultimately we opted to go with a &lt;a href="https://en.wikipedia.org/wiki/Universally_unique_identifier#Versions_3_and_5_(namespace_name-based)" rel="noopener noreferrer"&gt;UUID v5&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's what our &lt;code&gt;cacheKeyFor&lt;/code&gt; implementation looks like, using the &lt;a href="https://www.npmjs.com/package/uuid" rel="noopener noreferrer"&gt;&lt;code&gt;uuid&lt;/code&gt; package&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;cacheKeyFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;requestUrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queryParams&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;uuidv5&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requestUrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nx"&gt;uuidv5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>serverless</category>
      <category>graphql</category>
      <category>caching</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>Apollo Client fetchPolicies, React, and Pre-Rendering</title>
      <dc:creator>Will Green</dc:creator>
      <pubDate>Thu, 18 Jul 2019 16:09:57 +0000</pubDate>
      <link>https://dev.to/hotgazpacho/apollo-client-fetchpolicies-react-and-pre-rendering-1i77</link>
      <guid>https://dev.to/hotgazpacho/apollo-client-fetchpolicies-react-and-pre-rendering-1i77</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;My team at &lt;a href="https://www.fireeye.com/" rel="noopener noreferrer"&gt;FireEye&lt;/a&gt; built the &lt;a href="https://fireeye.market/" rel="noopener noreferrer"&gt;FireEye Market&lt;/a&gt; as a React app with a GraphQL &lt;em&gt;(Apollo Server Lambda)&lt;/em&gt; backend. It's a marketplace to &lt;em&gt;"Discover apps, extensions, and add-ons that integrate with and extend your FireEye experience."&lt;/em&gt; One of the things we discovered early on was that we needed to work to improve the &lt;a href="https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint" rel="noopener noreferrer"&gt;Time to First Meaningful Paint&lt;/a&gt; &lt;em&gt;(TTFMP)&lt;/em&gt;. We couldn't really reduce our bundle size further, and we already do code splitting. So we looked instead to generate static HTML, with the Apollo Client cache data serialized into the markup. This allows the client to quickly download a fully rendered HTML page, to begin interacting with immediately, while the React app scripts are downloaded and evaluated by the browser. When the React app &lt;a href="https://reactjs.org/docs/react-dom.html#hydrate" rel="noopener noreferrer"&gt;hydrates&lt;/a&gt;, it has been configured to read the serialized data into the Apollo Client cache, which then makes the data instantly available to the React app to update the component tree. However, there is a catch...&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter &lt;code&gt;fetchPolicy&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Apollo Client, and the corresponding React components &lt;em&gt;(&lt;code&gt;Query&lt;/code&gt;, &lt;code&gt;Mutation&lt;/code&gt;, &lt;code&gt;Subscription&lt;/code&gt;, and &lt;code&gt;graphql&lt;/code&gt; HOC that encapsulates them)&lt;/em&gt; that consume the client, have an option called &lt;code&gt;fetchPolicy&lt;/code&gt;. What this does is control how the components interact with the Apollo Client cache. This is very powerful, but the documentation for it is spread out in a couple places in the Apollo docs. My aim here is to consolidate that information, and hopefully, clarify it a bit.&lt;/p&gt;

&lt;p&gt;The valid values for &lt;code&gt;fetchPolicy&lt;/code&gt; are:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;cache-first&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This is the default if you don't explicitly specify an option. What this means is that the client will look in its cache, and if it finds &lt;strong&gt;all&lt;/strong&gt; of the data it needs to fulfill the query, it will use that and &lt;strong&gt;not make a network request for the data&lt;/strong&gt;. Each of the queries you make, along with the arguments, is stored in the cache. If the query is cached, then the client will use the data from that query. &lt;em&gt;I believe&lt;/em&gt; that the selection set of the query is also considered, so if that differs, a network request &lt;em&gt;will&lt;/em&gt; be made.&lt;/p&gt;

&lt;p&gt;I'm admittedly unsure on this last point. The FireEye Market app has a known set of queries that the client executes, which differ only in the parameters passed at runtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;cache-and-network&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This policy will look in the cache first, and use that data if available. It will &lt;strong&gt;always make a network request&lt;/strong&gt;, updating the cache and returning the fresh data when available. This may result in an additional update to your components when the fresh data comes in. This policy optimizes for getting cached data to the client quickly, while also ensuring that fresh data is always fetched. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the policy that we have found to work best for most cases when dealing with pre-rendering.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;network-only&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This policy skips reading from the cache altogether and goes straight to the network for data. Queries using this option will &lt;strong&gt;never read from the cache&lt;/strong&gt;. It will, however, write the results to the cache. This is for the situation where you always want to go to the backend for data, and are willing to pay for it in response time.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;cache-only&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This policy &lt;em&gt;exclusively&lt;/em&gt; reads from the cache, and will &lt;strong&gt;never go to network&lt;/strong&gt;. If the data doesn't exist in the cache, then an error is thrown. This is useful for scenarios where you want the client to operate in offline mode only, where the entirety of the data exists on the client.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I've never used this policy myself, so take that assertion with a giant grain of salt.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;no-cache&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This policy will never read data from, nor write data to, the cache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;Armed with this knowledge of &lt;code&gt;fetchPolicy&lt;/code&gt;, how do you configure it? There are two places: in the client config, and in the request config.&lt;/p&gt;

&lt;h3&gt;
  
  
  Client Configuration
&lt;/h3&gt;

&lt;p&gt;When you &lt;a href="https://www.apollographql.com/docs/react/api/apollo-client/" rel="noopener noreferrer"&gt;configure the Apollo Client instance&lt;/a&gt;, you can provide it with a &lt;code&gt;defaultOptions&lt;/code&gt; key, which specifies the policy each type of query should use unless specifically provided by the request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;defaultOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;watchQuery&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;fetchPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cache-and-network&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;errorPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ignore&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;fetchPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;network-only&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;errorPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;all&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;mutate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;errorPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;all&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;From the docs:&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The React Apollo &lt;code&gt;&amp;lt;Query /&amp;gt;&lt;/code&gt; component uses Apollo Client's &lt;code&gt;watchQuery&lt;/code&gt; functionality, so if you would like to set &lt;code&gt;defaultOptions&lt;/code&gt; when using &lt;code&gt;&amp;lt;Query /&amp;gt;&lt;/code&gt;, be sure to set them under the defaultOptions.watchQuery property.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also note that the &lt;code&gt;graphql&lt;/code&gt; HOC, when given a document that is a &lt;code&gt;query&lt;/code&gt;, ends up wrapping an instance of the &lt;code&gt;&amp;lt;Query /&amp;gt;&lt;/code&gt; component.&lt;/p&gt;

&lt;h3&gt;
  
  
  Request Configuration
&lt;/h3&gt;

&lt;p&gt;You can also specify the &lt;code&gt;fetchPolicy&lt;/code&gt; per request. One of the props that you can provide to the &lt;code&gt;&amp;lt;Query /&amp;gt;&lt;/code&gt; component is &lt;code&gt;fetchPolicy&lt;/code&gt;. This will override whatever is configured in the client for this query only.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Query&lt;/span&gt; &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;QUERY_DOCUMENT&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;fetchPolicy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"network-only"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* render prop! */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly for the &lt;code&gt;graphql&lt;/code&gt; HOC, you can specify a &lt;code&gt;fetchPolicy&lt;/code&gt; in the config object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listAppsForNotificatonSettings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graphql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;APPS_FOR_NOTIFICATION_SETTINGS_QUERY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;fetchPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cache-first&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; 
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As I mentioned, we found that the &lt;code&gt;cache-and-network&lt;/code&gt; policy provided the best experience for our customers when serving up pre-rendered pages for various entry points into the application. In a few cases, we found that using &lt;code&gt;cache-first&lt;/code&gt; was a better option, but those cases are few. As always, this is what worked for my team. Your mileage may vary.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>apollo</category>
      <category>react</category>
    </item>
  </channel>
</rss>
