<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Craig Franklin</title>
    <description>The latest articles on DEV Community by Craig Franklin (@englishcraig).</description>
    <link>https://dev.to/englishcraig</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F9307%2FYerQB1Py.jpeg</url>
      <title>DEV Community: Craig Franklin</title>
      <link>https://dev.to/englishcraig</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/englishcraig"/>
    <language>en</language>
    <item>
      <title>A Review of "Professor Frisby's Mostly Adequate Guide to Functional Programming"</title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Tue, 05 Oct 2021 03:47:49 +0000</pubDate>
      <link>https://dev.to/englishcraig/a-review-of-professor-fisby-s-mostly-adequate-guide-to-functional-programming-2p2</link>
      <guid>https://dev.to/englishcraig/a-review-of-professor-fisby-s-mostly-adequate-guide-to-functional-programming-2p2</guid>
      <description>&lt;p&gt;After years of mostly working with object-oriented languages such as Ruby and Python and, as a result, focusing on learning the best practices of object-oriented code design, I've recently changed jobs to a company whose applications are mostly written in TypeScript. What's more, their in-house style avoids classes entirely, preferring a more functional approach to organising the codebase. Although, the principals of good code design are applicable across languages, I felt a little unsure of myself trying to write code in this unfamiliar environment. Therefore, I decided to read up a bit on Functional Programming to learn the specific techniques and patterns of FP that I could use to achieve the nebulous goal of "clean code". Unfortunately, just as many of the popular OOP books use languages that I can't be bothered to learn, like Java and C++, many of the top FP books use functional languages, like Haskell and Scala, that I don't anticipate working with anytime soon. In both of these cases, I have nothing against these languages; it's just that I'm a practical guy, and if I'm going to put the time and effort into learning programming concepts or techniques, I want to be able to use them. Otherwise, I'll just forget them, and if I'm going to read something for personal enrichment, I'd rather read a good novel than pour over pages upon pages of code in a language I can only half-understand. Thankfully, there are FP books whose authors have chosen to meet the majority of programmers where they're at and use JavaScript for their code examples. &lt;em&gt;&lt;a href="https://mostly-adequate.gitbook.io/mostly-adequate-guide/"&gt;Professor Fisby's Mostly Adequate Guide to Functional Programming&lt;/a&gt;&lt;/em&gt; by Brian Lonsdorf is one such book. 
Given that it was one of the top results in my searches, and the comments and reviews that I found were generally positive, I decided to read it in the hopes of getting a better handle on how to write good functional code, so that I might contribute to my new job's functional TypeScript codebase with more confidence.&lt;/p&gt;

&lt;p&gt;At 146 pages (according to &lt;a href="https://www.goodreads.com/en/book/show/25847352-professor-frisby-s-mostly-adequate-guide-to-functional-programming"&gt;GoodReads&lt;/a&gt;), &lt;em&gt;Professor Frisby's Mostly Adequate Guide to Functional Programming&lt;/em&gt; (&lt;em&gt;MAG&lt;/em&gt; from now on for brevity's sake) is a fair bit shorter than a lot of programming books that I have read. I see this as a strength, because I often find such books to be a bit bloated with extended code examples and in-depth explanations of said code. Sometimes it's necessary, but often it drags on way too long and probably could have used a hard-headed editor forcing the author(s) to get to the point already. For people looking for a deeper exploration of FP, with more examples to really clarify some of the more-complex mathematical concepts, I can see how this might be viewed as a weakness. I, however, was looking for a quick introduction that would get me writing better functional TS code in short order, so erring on the side of brevity, both in the examples and the explanations of underlying theory, worked well for me. Another overall strength of the book is Lonsdorf's jokey writing style. Admittedly, the jokes are as likely to elicit a roll of the eyes as a chuckle, but I respect him for trying to keep what can be a &lt;em&gt;very&lt;/em&gt; dry topic light and amusing. Yet another reason that programming books often drag at some point (at least for me) is that the authors are so concerned with conveying information that they neglect to make their writing engaging, perhaps believing that the content is engaging enough on its own. Now, I'm not expecting &lt;em&gt;Lord of the Rings&lt;/em&gt; when learning about how to refactor for-loops, but having a writer with a sense of their own voice, as opposed to an aggressively-neutral presentation of information, makes a big difference in how likely I am to stick with a technical book till the end. 
One last thing to have in mind about &lt;em&gt;MAG&lt;/em&gt; is that, per its own "plans for the future", it is unfinished. The book is broken into three sections, with the first being a practical introduction to FP syntax and basic concepts, the second going deeper into the theory and using more-abstract structures in the code, and a planned third section that will "dance the fine line between practical programming and academic absurdity", but which was never added. Given my practical goals for learning from this book, and my reaction to the moderately theoretical second section (more on that below), I don't view this as a serious omission. &lt;em&gt;MAG&lt;/em&gt; does a good job of introducing the techniques and theory of FP, and I imagine if someone wants to really get into the weeds, they'd probably be better off picking up a book that uses one of the pure FP languages anyway.&lt;/p&gt;

&lt;p&gt;The first section of &lt;em&gt;MAG&lt;/em&gt;, covering seven chapters, serves as an introduction to why FP is useful in codebases and the sort of low-level syntax and structures required to make it possible. Though I was familiar with the concept of pure functions, Lonsdorf's statement that "The philosophy of functional programming postulates that side effects are a primary cause of incorrect behaviour" struck me as an excellent distillation of the benefits of pursuing FP as the organising paradigm of a codebase. Flaky tests, conflicting component states in React, old invalid records just sitting in the database: all of these are common examples of problems caused by statefulness in software, which we manage via side effects. As I'm sure many of you know, a bug that you can't reproduce consistently is one of the most difficult to fix, and it's usually a specific and highly unlikely combination of states that makes it so difficult to reproduce. For example, I remember trying to figure out a bug while working at an ecommerce company, where all the products in a user's cart were available and ready to be purchased when they began the checkout process, but when they tried to pay, the products were no longer available, and we raised an error. After days of poring over logs looking for clues and trying to recreate the bug any way I could think of, I finally figured out that the user had opened a second browser tab during checkout and made some changes to their cart before proceeding with payment in the original tab. The cart's state had changed in one part of our system, but that change hadn't been propagated to &lt;em&gt;all&lt;/em&gt; parts of the system. Now, &lt;em&gt;some&lt;/em&gt; statefulness in an application is probably unavoidable, or at least avoiding it would be horribly impractical, but minimising dependence on that statefulness greatly simplifies code, because it reduces how much you have to keep track of when writing it. 
A pure function limits your attention to two things: input and output. Side effects, on the other hand, are theoretically infinite: there's no limit to the number of database, API, or logging calls you can make in a given function. That's why, regardless of what language I'm working in, I like to keep in mind that you can use pure functions or methods anywhere, even in largely-OOP codebases. Python and Ruby (and JavaScript for that matter) often offer two variations of a function or method: one that takes an object and changes it, and another that returns a new object (&lt;code&gt;list.sort()&lt;/code&gt; vs &lt;code&gt;sorted(list)&lt;/code&gt; in Python, for example). I think that this is one of the most useful lessons from learning about different programming languages or paradigms: you can take the useful pieces from each, mixing and matching them in the code that you write in order to derive some of the benefits of each while mitigating some of the costs.&lt;/p&gt;
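&lt;p&gt;JavaScript arrays illustrate the same split. Here's a minimal sketch (the variable names are my own): &lt;code&gt;Array.prototype.sort&lt;/code&gt; mutates in place, while copying before sorting keeps the function's effect limited to its return value.&lt;/p&gt;

```javascript
// Mutating variant: sort() reorders the array in place.
const scores = [3, 1, 2];
scores.sort();
// scores is now [1, 2, 3]; the original ordering is gone.

// Pure variant: copy first, so the input is left untouched.
const original = [3, 1, 2];
const sorted = [...original].sort();
// original is still [3, 1, 2]; sorted is [1, 2, 3].
```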

&lt;p&gt;Now, if one of the great costs of OOP is the pervasiveness of state, what then is the cost of applying FP, which largely avoids state? In my opinion, that would be how abstract and mathematical FP gets once you start wandering down the rabbit hole. I found Lonsdorf's introductions to currying, function composition, and pointfree style to be useful. These are techniques and syntactical styles that I can use in my own code, I thought. Starting around chapter 7, however, Lonsdorf begins to focus a bit more on some of the theoretical underpinnings of FP in order to introduce higher-level structures that enable the kind of mathematical correctness that adherents of FP promise. At this point, I found myself doing a lot more skimming than I had previously, nodding at the explanations for how functors work and why they are useful, content in getting the gist of it, and not even bothering with the exercises at the ends of the later chapters. The reason for my disengagement is that I didn't really see myself ever applying these more-advanced techniques or using these more-complex structures in my code. Writing pure functions and composing them with maps, filters, or pipe operators is something you can do in almost any codebase, and the code will likely be easier to read, understand, and change because of it. But functors, applicative or otherwise, well, that's pretty much an all-or-nothing proposition. Maybe I suffer from a limited imagination, but I don't see how one could write code in that style in a piecemeal fashion. So, for me, the second half of &lt;em&gt;MAG&lt;/em&gt; was pure theory, even when it was explaining the practical application of concepts from set theory. When it comes to code, I'm not particularly interested in theory. I can understand, however, why some coders get inspired by FP and can get so adamant about its benefits. 
Codebases are messy affairs, containing a few languages, each written in at least a half dozen styles, all based on the momentary preferences of the dozens (hundreds? thousands?) of coders who have contributed over the years, and around every corner is a bug, just lying in wait for the right combination of parameters to raise that unanticipated error. So, the idea that your code could have the elegance and provability of a mathematical theorem is a powerful one. It just seems too impractical to me, as we can't very well expect each new developer who joins our team to spend their first few months reading textbooks on set theory and Functional Programming before they can make their first commit.&lt;/p&gt;

&lt;p&gt;One thing that sometimes bothers me about proponents of Agile, OOP, TDD, etc. is that their rhetoric can wander into &lt;a href="https://en.wikipedia.org/wiki/No_true_Scotsman"&gt;No True Scotsman&lt;/a&gt; territory: it's not that these techniques or processes or principles are flawed or fail to deliver their promised benefits; people are just doing them wrong. I believe that OOP, done exceptionally well, can provide the kind of readability and maintainability promised by its experts, but it's really hard to write OOP code at that level. How many coders can claim to be masters with a straight face (or with those around them maintaining a similarly-straight face)? On the other hand, even poorly-written OOP code has some basic organising principles that aid future devs in trying to understand and modify it. You have classes that represent business concepts (sometimes more abstract or technical concepts), and those objects have behaviours represented by their methods. This makes the learning curve much more manageable, because early practitioners have some basic, concrete ideas and techniques that they can apply while they're still learning the methods for writing truly clean code. My impression of FP is that it's like the classic bit about learning to draw an owl: make your functions pure, compose them, and now here's a bunch of set theory to explain why implementing an entire system of functor containers for any data that might pass through your system is totally worth it. The leap from a few basic design principles to abstract structures, without any real-world analogue, is large, and I imagine that I'm not the only one who finds that juice to not be worth the squeeze. It feels like it would be easier to write bug-free (or at least lightly-bugged) code if one followed FP to the letter, but sometimes mediocre code is enough to get the job done and move on with your life, and it seems quite difficult to just write mediocre FP code.&lt;/p&gt;

&lt;p&gt;I've already begun using pointfree style and some light function composition in my code, and being introduced to the JS package &lt;a href="https://ramdajs.com/"&gt;&lt;code&gt;ramda&lt;/code&gt;&lt;/a&gt; really went a long way toward easing me into a more functional style of coding. I also found that the explanations of functors gave me a better appreciation for what languages like Rust do to avoid unhandled errors and nulls. However, at least for now, I think that's the extent of the impact of &lt;em&gt;Professor Frisby's Mostly Adequate Guide to Functional Programming&lt;/em&gt; on how I read and write code. Even though I remain unconverted to the full FP path, I feel like I learned some useful concepts and techniques and would definitely recommend this book to anyone who is FP-curious but unwilling to commit to a 400-page tome with code examples written in Haskell.&lt;/p&gt;
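&lt;p&gt;As a taste of that style, here's a minimal, hand-rolled sketch of pointfree composition in plain JavaScript; the &lt;code&gt;compose&lt;/code&gt; helper mirrors the spirit of ramda's &lt;code&gt;R.compose&lt;/code&gt;, and the small helper functions are purely illustrative.&lt;/p&gt;

```javascript
// compose applies functions right to left, like ramda's R.compose.
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

// Small pure building blocks.
const toLower = (s) => s.toLowerCase();
const words = (s) => s.split(" ");
const joinWith = (sep) => (xs) => xs.join(sep);

// Pointfree: slugify never mentions its argument.
const slugify = compose(joinWith("-"), words, toLower);

slugify("Mostly Adequate Guide"); // "mostly-adequate-guide"
```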

</description>
      <category>functional</category>
      <category>books</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to Set up FaunaDB for local development</title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Sun, 06 Dec 2020 05:01:27 +0000</pubDate>
      <link>https://dev.to/englishcraig/how-to-set-up-faunadb-for-local-development-5ha7</link>
      <guid>https://dev.to/englishcraig/how-to-set-up-faunadb-for-local-development-5ha7</guid>
      <description>&lt;p&gt;If you're feeling impatient and want to skip to the end, all the code is available at this &lt;a href="https://github.com/cfranklin11/faunadb-local"&gt;repo&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;What is FaunaDB, and why should I try it?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.fauna.com/fauna/current/introduction"&gt;FaunaDB&lt;/a&gt; is a serverless database that is an ideal choice for serverless applications, because it offers the same benefits: auto-scaling, pay-for-what-you-use billing, and no server configuration or maintenance. FaunaDB accomplishes this by executing database operations via calls to its HTTP API rather than maintaining a connection to a database server. There are other serverless database services available. DynamoDB from AWS is one of the most commonly referenced when looking up information about serverless databases, but I find a basic key-value data store to be too limiting when trying to model an application's business domain, as it makes modelling relationships among entities very difficult. Relational databases are ideal for this, and AWS has another serverless option, Aurora Serverless, which combines the advantages of serverless architecture with the expressiveness of relational databases and SQL queries. Although I haven't used it myself, Aurora Serverless seems to be a good option. It comes with a large caveat, however, in that it requires all applications and services that access its databases to be in the same Amazon Virtual Private Cloud. This means committing yourself even further to vendor lock-in, as any move away from using AWS, even partially, would mean having to completely change your database infrastructure. Also, my knowledge of ops stuff is pretty basic, and setting up an Amazon VPC is just the sort of extra complication that I wanted to avoid by going serverless in the first place. FaunaDB isn't strictly a relational database, but it still offers some of the same advantages by allowing for the inclusion of all the usual entity relationships (e.g. one-to-one, one-to-many). 
Also, FaunaDB databases can be called from any application hosted on any cloud service, giving you more flexibility in how and where you deploy applications, thus reducing vendor lock-in.&lt;/p&gt;

&lt;p&gt;FaunaDB supports two different ways of querying its databases: FaunaDB Query Language (FQL) and GraphQL (GQL). For now, the GQL interface is somewhat limited, making FQL a better option if you need the sort of functionality that you can get from SQL. If you're not sure, or if you anticipate only needing basic queries, I recommend starting with the GQL interface, because it's much simpler to work with (if you already know GQL, then there's not much more to learn), and it offers the benefit of having a schema file as the source of truth about the structure of your database. If you start with GQL and find out later that you need more-complex queries, you can mix in FQL functions by defining custom resolvers, so you shouldn't ever need to completely migrate to FQL. Since I prefer the GQL interface (and it's all I've worked with so far), that's what I'll use for this tutorial.&lt;/p&gt;

&lt;p&gt;If you want more information on how FaunaDB works in general, feel free to check out their &lt;a href="https://docs.fauna.com/fauna/current/start/"&gt;Getting Started&lt;/a&gt; guide, but it's not necessary for following the rest of this tutorial.&lt;/p&gt;

&lt;h2&gt;Requirements&lt;/h2&gt;

&lt;p&gt;To start, make sure you have the following installed on your machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/get-docker/"&gt;Docker&lt;/a&gt;, so we can run an instance of FaunaDB in a container.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.fauna.com/fauna/current/integrations/shell/"&gt;&lt;code&gt;fauna-shell&lt;/code&gt;&lt;/a&gt; to be able to interact with local FaunaDB databases.&lt;/li&gt;
&lt;li&gt;An API testing tool (e.g. &lt;a href="https://www.postman.com/downloads/"&gt;Postman&lt;/a&gt;, &lt;a href="https://insomnia.rest/download/"&gt;Insomnia&lt;/a&gt;) to easily send GraphQL queries (or you can use &lt;code&gt;curl&lt;/code&gt; if you &lt;em&gt;really&lt;/em&gt; want to). For the examples below, I will use Insomnia, because its GraphQL support is a bit better than Postman's (for example, it can fetch a GraphQL schema from an API endpoint).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/downloads/"&gt;Python&lt;/a&gt; if you want to set up integration testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Setting up local FaunaDB&lt;/h2&gt;

&lt;h3&gt;Running FaunaDB in a Docker container&lt;/h3&gt;

&lt;p&gt;FaunaDB is kind enough to provide us with an &lt;a href="https://hub.docker.com/r/fauna/faunadb"&gt;official Docker image&lt;/a&gt;, which greatly simplifies running it on our local environment. Run the following commands in your terminal to get an instance of FaunaDB running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull fauna/faunadb
docker run &lt;span class="nt"&gt;--name&lt;/span&gt; faunadb &lt;span class="nt"&gt;-p&lt;/span&gt; 8443:8443 &lt;span class="nt"&gt;-p&lt;/span&gt; 8084:8084 fauna/faunadb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We expose port &lt;code&gt;8443&lt;/code&gt; for standard database interactions and &lt;code&gt;8084&lt;/code&gt; for calls to the GraphQL API. As with other database images, you can use other Docker container options for data persistence and such. See the &lt;a href="https://docs.fauna.com/fauna/current/integrations/dev#run"&gt;FaunaDB documentation&lt;/a&gt; for alternatives.&lt;/p&gt;

&lt;h3&gt;Create a database in the FaunaDB instance&lt;/h3&gt;

&lt;p&gt;Now that we have FaunaDB up and running, the easiest way to interact with it is to use &lt;code&gt;fauna-shell&lt;/code&gt;. In a new tab or window, run the following to create a database and an API key for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;fauna add-endpoint http://localhost:8443/ &lt;span class="nt"&gt;--alias&lt;/span&gt; localhost &lt;span class="nt"&gt;--key&lt;/span&gt; secret
fauna create-database development_db &lt;span class="nt"&gt;--endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost
fauna create-key development_db &lt;span class="nt"&gt;--endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can change the alias for the endpoint (currently &lt;code&gt;localhost&lt;/code&gt;) and the name of the database (currently &lt;code&gt;development_db&lt;/code&gt;) to anything you want. If one doesn't already exist, create a &lt;code&gt;.env&lt;/code&gt; file in the root of the project, copy the API key printed by &lt;code&gt;fauna create-key&lt;/code&gt;, and assign its value to &lt;code&gt;FAUNADB_KEY&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt; like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;FAUNADB_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;copied API key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Using GraphQL with FaunaDB&lt;/h2&gt;

&lt;h3&gt;Create a GraphQL schema&lt;/h3&gt;

&lt;p&gt;A full introduction to GraphQL is outside the scope of this tutorial, so if you want more information, you can check out the &lt;a href="https://graphql.org/learn/"&gt;official introduction&lt;/a&gt;. Also, if you want to get a better view of the GraphQL queries that FaunaDB creates by default for entities, check out their &lt;a href="https://docs.fauna.com/fauna/current/start/graphql"&gt;getting started&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;To keep things simple, we'll create a basic schema file for a blogging platform, where we have users who write posts. Such a schema for FaunaDB looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;unique&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;!]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;relation&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Post&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;allUsers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;!]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;allPosts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;!]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This schema creates collections (FaunaDB's version of data tables) of users and posts, with a couple general-purpose query fields. It doesn't matter where you save this file, but it's best to name it following GraphQL conventions to take advantage of tools such as linters (I use &lt;code&gt;schema.gql&lt;/code&gt;, but there are a few other acceptable alternatives). Most of this is standard GraphQL schema syntax, but notice the FaunaDB directives that start with &lt;code&gt;@&lt;/code&gt;. These give extra information to FaunaDB about the structure of collections and documents.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@unique&lt;/code&gt; indicates that the value for this attribute must be unique within the collection. FaunaDB will return an error response if we try to create a &lt;code&gt;user&lt;/code&gt; with a duplicate &lt;code&gt;username&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@relation&lt;/code&gt; indicates a bi-directional relationship between two collections. In this case, a &lt;code&gt;user&lt;/code&gt; has many &lt;code&gt;posts&lt;/code&gt;, and a &lt;code&gt;post&lt;/code&gt; has one &lt;code&gt;user&lt;/code&gt;, which we call its &lt;code&gt;author&lt;/code&gt;. Notice that we only need to use the &lt;code&gt;relation&lt;/code&gt; directive on the "many" side of the relationship. FaunaDB has more information on how to model &lt;a href="https://docs.fauna.com/fauna/current/api/graphql/relations"&gt;different relationships&lt;/a&gt; among collections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the directives that I use most, but there are others for more-advanced use cases. You can see all the available directives &lt;a href="https://docs.fauna.com/fauna/current/api/graphql/directives/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, by default, FaunaDB creates a few basic queries and mutations for each collection that you create (e.g. &lt;code&gt;createUser&lt;/code&gt;, &lt;code&gt;updateUser&lt;/code&gt;, &lt;code&gt;findUserById&lt;/code&gt;), but doesn't add any queries for multiple records, so we add &lt;code&gt;allUsers&lt;/code&gt; and &lt;code&gt;allPosts&lt;/code&gt; queries for convenience.&lt;/p&gt;
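&lt;p&gt;For the schema above, the generated operations look roughly like this (the exact names, input types, and pagination wrappers come from FaunaDB, so treat this as an approximation rather than the literal generated schema):&lt;/p&gt;

```graphql
type Query {
  findUserByID(id: ID!): User
  findPostByID(id: ID!): Post
  allUsers: [User!]
  allPosts: [Post!]
}

type Mutation {
  createUser(data: UserInput!): User!
  updateUser(id: ID!, data: UserInput!): User
  deleteUser(id: ID!): User
  createPost(data: PostInput!): Post!
}
```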

&lt;h3&gt;Import the GraphQL schema to the FaunaDB database&lt;/h3&gt;

&lt;p&gt;From now on, we're going to interact with the FaunaDB database via HTTP calls to its GraphQL API, which should be running on &lt;code&gt;http://localhost:8084&lt;/code&gt;. The simplest way to do this is with an API testing tool like Insomnia, but for an application, you'll obviously want to automate these calls with an HTTP library like &lt;code&gt;requests&lt;/code&gt; for Python or &lt;code&gt;axios&lt;/code&gt; for Node.&lt;/p&gt;

&lt;p&gt;Our first call will be to import our GraphQL schema. Open Insomnia, and create a request that will &lt;code&gt;POST&lt;/code&gt; to &lt;code&gt;http://localhost:8084/import&lt;/code&gt;. Remember that secret key that we saved? Use it in the &lt;code&gt;Authorization&lt;/code&gt; header with the format &lt;code&gt;Bearer &amp;lt;FaunaDB secret key&amp;gt;&lt;/code&gt;. Finally, let's add our schema file to the request (in the case of Insomnia, select "Binary File" from the body options, then "Choose File" to upload the schema).&lt;/p&gt;
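&lt;p&gt;If you'd rather script the import than click through Insomnia, the same call can be made from Node. This is a sketch under my own naming (the helper function and env-var handling aren't part of FaunaDB's tooling); only the URL, method, and &lt;code&gt;Authorization&lt;/code&gt; header matter.&lt;/p&gt;

```javascript
// Build the request that Insomnia would otherwise send for us.
const buildImportRequest = (secretKey, schemaText) => ({
  url: "http://localhost:8084/import",
  options: {
    method: "POST",
    headers: { Authorization: `Bearer ${secretKey}` },
    body: schemaText,
  },
});

// Usage (assumes FAUNADB_KEY is set and schema.gql is in the current directory):
// const fs = require("fs");
// const schema = fs.readFileSync("schema.gql", "utf8");
// const { url, options } = buildImportRequest(process.env.FAUNADB_KEY, schema);
// fetch(url, options).then((res) => res.text()).then(console.log);
```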

&lt;h3&gt;Create and query records in the database&lt;/h3&gt;

&lt;p&gt;Since the FaunaDB instance has a standard GraphQL API layer accessed at &lt;code&gt;http://localhost:8084/graphql&lt;/code&gt;, you can use GraphQL tooling to export the schema and documentation, and load it in an interactive tool like GraphiQL or GraphQL Playground. The easiest solution, however, is to use Insomnia's GraphQL documentation feature by selecting "GraphQL Query" for the request body, then clicking "schema" and selecting "Refresh Schema". This will give you access to auto-generated documentation for your GraphQL API without needing any extra setup.&lt;/p&gt;

&lt;p&gt;To start, we will want to create some data that we will then be able to query. The following mutations will create two users with two posts each. We return the IDs in the response for future reference, or you can use the &lt;code&gt;allUsers&lt;/code&gt; query to get them again later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="k"&gt;mutation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;createBob&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"burgerbob"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"password1234"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Burgers Are Great!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Burgers have meat, and bread, and..."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Top 10 Burgers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1. Cheeseburger, 2. Hamburger..."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;_id&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="k"&gt;mutation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;createLinda&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"momsense"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"password1234"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Why Baked Ziti Is Overrated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Baked ziti is really not that great..."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Top 10 Wines to Pair with Burgers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1. Pinot Noir, 2. Merlot..."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;_id&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have some records in the database, we can query them with the built-in &lt;code&gt;find&amp;lt;Collection&amp;gt;ByID&lt;/code&gt; query or fetch all of a collection's records with one of the &lt;code&gt;all&amp;lt;Collection&amp;gt;&lt;/code&gt; queries that we included in the schema. If you're using Insomnia, explore the schema documentation and try out different queries.&lt;/p&gt;
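&lt;p&gt;As a sketch, a small helper for calling one of those auto-generated queries from application code might build a request body like this. The field names follow FaunaDB's generated schema for our &lt;code&gt;User&lt;/code&gt; type; the ID you pass in would be one returned by the mutations above.&lt;/p&gt;

```python
def find_user_by_id_query(user_id):
    # Request body for the auto-generated findUserByID query;
    # the selection set mirrors the schema used in this tutorial
    return {
        "query": """
            query FindUser($id: ID!) {
              findUserByID(id: $id) {
                _id
                username
                posts { data { title } }
              }
            }
        """,
        "variables": {"id": user_id},
    }
```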

&lt;h2&gt;
  Bonus: Using FaunaDB while testing a Python application (with Pytest)
&lt;/h2&gt;

&lt;p&gt;A common challenge when testing application code is handling database transactions in integration tests. Thankfully, we have frameworks like Rails and Django that, with a little configuration, can handle database setup and teardown for us. Unfortunately, such frameworks' database connectors generally have limited support for NoSQL databases like FaunaDB, and even then, only for the most popular options like MongoDB. So, how can we use our local FaunaDB instance for integration tests without corrupting our development database?&lt;/p&gt;

&lt;p&gt;When setting up integration tests, I wanted to start by making sure that I didn't accidentally change data in my development database. Since FaunaDB identifies the specific database that you're calling by the API token that you use, the easiest way to avoid accidental calls is to make sure that the token for your development database isn't available in the code. Assuming that you're using environment variables for your tokens and secrets, the simplest approach is to blank the dev API token in &lt;code&gt;os.environ&lt;/code&gt; before any tests run. Pytest allows us to automatically run code before tests with &lt;code&gt;conftest.py&lt;/code&gt; files. What's more, these files can be nested to run different pre-test code in different test modules. So, we can define a &lt;code&gt;conftest.py&lt;/code&gt; file at the root of our &lt;code&gt;tests&lt;/code&gt; directory with the following code to prevent unwanted FaunaDB calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# src/tests/conftest.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;

&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"FAUNADB_KEY"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tried doing the same thing with a Pytest fixture and with &lt;code&gt;unittest&lt;/code&gt;'s &lt;code&gt;patch&lt;/code&gt;; both can work as well, but they require you to apply them before every test, or at least before every test suite, and such details are easy to forget when writing tests for the next feature. The code above, however, is guaranteed to run before any tests, making it a foolproof way of keeping our development data safe.&lt;/p&gt;
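&lt;p&gt;For comparison, the fixture-based alternative looks roughly like this sketch. It works, but only for tests that remember to request the fixture, which is the detail that's easy to forget.&lt;/p&gt;

```python
import os
from unittest.mock import patch

def blank_key_patch():
    # patch.dict restores the original environment when the context exits
    return patch.dict(os.environ, {"FAUNADB_KEY": ""})

# As a Pytest fixture, this would be:
#
# @pytest.fixture
# def blank_faunadb_key():
#     with blank_key_patch():
#         yield
#
# ...and every FaunaDB-touching test would need to request it by name.
```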

&lt;p&gt;Now that we've prevented unwanted calls to our dev database, how do we set up and call our test database for integration tests? Why, with another &lt;code&gt;conftest.py&lt;/code&gt; of course! I have &lt;code&gt;tests&lt;/code&gt; separated into &lt;code&gt;unit&lt;/code&gt; and &lt;code&gt;integration&lt;/code&gt; submodules. Inside &lt;code&gt;integration&lt;/code&gt; we can include the following &lt;code&gt;conftest.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# src/tests/integration/conftest.py
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;unittest.mock&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;patch&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pytest&lt;/span&gt;

&lt;span class="c1"&gt;# Names of module &amp;amp; FaunaDB client class depend
# on your application code
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;src.app.faunadb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FaunadbClient&lt;/span&gt;


&lt;span class="c1"&gt;# Use "session" scope and autouse to run once before all tests.
# This is to make sure that the "localhost" endpoint exists.
&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;pytest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"session"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;autouse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_setup_faunadb&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;"npx fauna add-endpoint http://faunadb:8443/ --alias localhost --key secret"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="c1"&gt;# Scope "function" means that this only applies to the test function
# that uses it
&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;pytest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"function"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;faunadb_client&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# We create and delete the database for each test,
&lt;/span&gt;    &lt;span class="c1"&gt;# because it's reasonably quick, and simpler than manually deleting
&lt;/span&gt;    &lt;span class="c1"&gt;# all data.
&lt;/span&gt;    &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"npx fauna create-database test --endpoint localhost"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Creating an API key produces output in the terminal
&lt;/span&gt;    &lt;span class="c1"&gt;# that includes the following line: secret: &amp;lt;API token&amp;gt;
&lt;/span&gt;    &lt;span class="n"&gt;create_key_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;popen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"npx fauna create-key test --endpoint=localhost"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;faunadb_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"secret: (.+)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_key_output&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FaunadbClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;faunadb_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;faunadb_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;import_schema&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# For any test that uses this fixture, we patch the environment
&lt;/span&gt;    &lt;span class="c1"&gt;# variable for the FaunaDB API key and return the client. This way,
&lt;/span&gt;    &lt;span class="c1"&gt;# all FaunaDB calls will use the test DB, and the test function
&lt;/span&gt;    &lt;span class="c1"&gt;# will have a valid client to make DB calls for test setup
&lt;/span&gt;    &lt;span class="c1"&gt;# or assertions.
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s"&gt;"os.environ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'FAUNADB_KEY'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;faunadb_key&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;

    &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"npx fauna delete-database test --endpoint localhost"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might need to modify the code above depending on the specifics of your app and tests (for example, I patch my app's &lt;code&gt;settings&lt;/code&gt; module rather than &lt;code&gt;os.environ&lt;/code&gt; directly), but I've been using a similar file for a few weeks, and it's been working well. Now, if you want to test user creation with actual database transactions, you can run something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# src/tests/integration/test_user.py
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_user_creation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;faunadb_client&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"burgerbob"&lt;/span&gt;
  &lt;span class="n"&gt;created_user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;faunadb_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_user&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"password1234"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="n"&gt;created_user&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'username'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;

  &lt;span class="n"&gt;all_users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;faunadb_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;all_users&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;all_users&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
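&lt;p&gt;As mentioned, in my own app I patch a &lt;code&gt;settings&lt;/code&gt; module rather than &lt;code&gt;os.environ&lt;/code&gt; directly. A minimal sketch of that variation, with a hypothetical stand-in for the settings module:&lt;/p&gt;

```python
from types import SimpleNamespace
from unittest.mock import patch

# Hypothetical stand-in for an app's settings module
settings = SimpleNamespace(FAUNADB_KEY="dev-secret")

def use_test_key(key):
    # Swap the key for the duration of a test, then restore it on exit
    return patch.object(settings, "FAUNADB_KEY", key)
```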



&lt;p&gt;If you want to see the full test and FaunaDB client code, I have an example in the &lt;a href="https://github.com/cfranklin11/faunadb-local"&gt;repo&lt;/a&gt; for this tutorial.&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There you have it: a full FaunaDB setup for local development and testing (as long as you're using Python). Thanks to its GraphQL interface, service-agnostic architecture, and ability to model complex data relationships, I think FaunaDB is a solid choice for data persistence for any serverless application. The documentation for setting up a local instance of FaunaDB, however, is a bit sparse, so I had to piece it together from disparate posts and code samples. As you can see, though, it's not too complicated once you know the necessary commands and configuration. Give FaunaDB a try for your next serverless project.&lt;/p&gt;

</description>
      <category>python</category>
      <category>serverless</category>
      <category>database</category>
      <category>graphql</category>
    </item>
    <item>
      <title>Footy Tipping with Machine Learning: 2019 Season Review</title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Wed, 06 Nov 2019 09:37:20 +0000</pubDate>
      <link>https://dev.to/englishcraig/footy-tipping-with-machine-learning-2019-season-review-55mc</link>
      <guid>https://dev.to/englishcraig/footy-tipping-with-machine-learning-2019-season-review-55mc</guid>
      <description>&lt;p&gt;I've recently finished up my second season of ML footy tipping (if you're interested, you can check out an &lt;a href="https://medium.com/@craigjfranklin/toward-a-better-footy-tipping-model-mistakes-were-made-ee5a6738741f"&gt;earlier post&lt;/a&gt; for more context on what the hell this means), and, though I did not repeat my path to office-tipping-comp-glory (no prize money, no trash talking, at least not for me), the season was not without personal highlights. Unfortunately, they were more of the lessons-learned, character-building variety rather than the crushing-of-foes, hearing-the-lamentations-of-their-women variety. So, what have I learned over the past year to have made all the heartache of defeat worthwhile? Well, I definitely improved my machine-learning workflow and coding practices, both of which were major pain points the year before, but I also made plenty of new mistakes that will inform my goals for next year.&lt;/p&gt;

&lt;h2&gt;
  Got to admit: it's getting better
&lt;/h2&gt;

&lt;p&gt;Last year, I wrote a post (same one linked to above) laying out the mistakes I had made when creating my first footy-tipping model and how I hoped to improve on my processes and implementations while building a new model from scratch. I ended up with the following goals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Do more exploratory data analysis&lt;/li&gt;
&lt;li&gt;Don't jump straight to fancy, complicated models&lt;/li&gt;
&lt;li&gt;Put some effort into code design to make extending it easier&lt;/li&gt;
&lt;li&gt;Come up with a better name for the model&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  1. Do some EDA
&lt;/h3&gt;

&lt;p&gt;I had skimped on exploratory data analysis (i.e. I hadn't done any) while developing my model for 2018, relying on a brute-force method of shoveling a bunch of data on top of an algorithm till it &lt;a href="https://xkcd.com/1838/"&gt;looked right&lt;/a&gt;. I avoided this in the run-up to the 2019 season, looking into &lt;a href="https://towardsdatascience.com/toward-a-better-footy-tipping-model-an-analysis-of-basic-heuristics-80de4235e768"&gt;basic heuristics&lt;/a&gt; for predicting match results (e.g. always pick the home team, always pick the favourite), as well as &lt;a href="https://medium.com/@craigjfranklin/toward-a-better-footy-tipping-model-the-folly-of-memory-9351670abe19"&gt;analysing trends&lt;/a&gt; and the data (or lack thereof) behind momentum. One can always do more and better analysis, but I gained additional insight into Aussie Rules Football, how it's played, and what stats matter, which informed my feature building and model tuning later in the process.&lt;/p&gt;
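&lt;p&gt;The basic heuristics above amount to one-line picking rules, which makes them easy to score against historical results. A sketch, with an illustrative match-data shape (the field names here are assumptions, not my actual dataset):&lt;/p&gt;

```python
def heuristic_accuracy(matches, pick):
    # Score a picking rule (e.g. "always pick the home team")
    # against known match results
    correct = sum(1 for match in matches if pick(match) == match["winner"])
    return correct / len(matches)

def pick_home_team(match):
    return match["home_team"]

def pick_favourite(match):
    return match["favourite"]
```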

&lt;h3&gt;
  2. Compare different models
&lt;/h3&gt;

&lt;p&gt;Being a machine-learning novice, I had bought into the hype and started with deep learning and ensemble models without even looking at simpler linear models. I did not repeat this mistake and, though I ended up with a bagging ensemble of XGBoost models, I learned an important lesson by comparing multi-layer neural nets to linear models and basic ensembles (e.g. random forests, gradient boosters) and finding that, at least without more data, their performance was underwhelming, especially given how much longer they take to train.&lt;/p&gt;

&lt;h3&gt;
  3. Write better code
&lt;/h3&gt;

&lt;p&gt;As the 2018 AFL season approached, my first tipping model had become a tangled mess of scripts, half-baked classes, and hard-coded values, making any extension of functionality, even something as simple as adding new columns to the data set, a herculean task on the order of cleaning out the Augean stables. I had learned much about coding best practices since then and vowed to write more-flexible code for the next model and its surrounding application. Much like EDA, one can always do better, but I'm going to count this goal as achieved as well. I did some mixing-and-matching of functional and object-oriented approaches, building pipelines with a reduced list of transformation functions, but organising the wider application with various classes. I occasionally rewrote large sections of the application to adapt to new requirements or my own changing opinions on the best way to organise the code, but the foundation was strong enough to permit this with less pain and fewer bugs than before. In particular, composing pipelines of functions that each perform one transformation was a major improvement over having a few classes whose &lt;code&gt;transform&lt;/code&gt; methods amounted to &lt;code&gt;do_all_the_feature_building_at_once&lt;/code&gt;.&lt;/p&gt;
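&lt;p&gt;The pipeline approach described above can be sketched as composing single-purpose transformation functions. The feature functions here are illustrative, not my actual feature builders:&lt;/p&gt;

```python
from functools import reduce

def pipeline(*transforms):
    # Compose transformation functions left to right, each performing
    # exactly one feature-building step
    return lambda data: reduce(lambda acc, fn: fn(acc), transforms, data)

def add_margin(row):
    return {**row, "margin": row["home_score"] - row["away_score"]}

def add_result(row):
    return {**row, "home_win": row["margin"] > 0}

build_features = pipeline(add_margin, add_result)
```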

&lt;h3&gt;
  4. Come up with a better name
&lt;/h3&gt;

&lt;p&gt;Okay, at least this one is unequivocal: I nailed it. If you can't appreciate the many layers on which 'Tipresias' works, I don't know what to tell you.&lt;/p&gt;

&lt;h2&gt;
  Tipresias 1.0: 2019 performance
&lt;/h2&gt;

&lt;p&gt;Despite the various improvements in my processes and code, the results were not exactly worthy of deification. In my office competition, which is what really matters, I finished 4th with 132 tips (i.e. correct predictions) at the end of the regular season, well behind the winner, who got 137. By the end of the season, after the confetti had been swept from Punt Road, I had 137 tips (66.18% accuracy), well behind the betting odds, which had favoured the eventual winner 140 times (67.63% accuracy). My mean absolute error (MAE) (i.e. how far off I was in predicting the winning margin) was also a bit higher at 26.77 vs 26.25 for the betting odds. This was a harder-than-average season for predicting winners, as odds-on favourites tend to win roughly 72% of the time (over the last 10 years), and the top model on &lt;a href="//squiggle.com.au"&gt;Squiggle AFL&lt;/a&gt; got 139, whereas last year's winner got 147. It's possible that Tipresias 1.0 is particularly weak in an upset-heavy season, but it's more likely that I would have gotten subpar performance regardless of the contours of the schedule and teams' consistency. In the analysis below I'll use betting odds as a benchmark, because it's a simple, publicly-available heuristic for predicting winners that performs as well as the top, publicly-available statistical models. If I can't beat the betting odds, then I'm at a disadvantage to every Joe and Flo Schmo who has the humility to just pick the favourite from beginning to end.&lt;/p&gt;
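&lt;p&gt;For reference, the accuracy and MAE figures above are just the following calculations applied to predicted and actual winners and margins (the numbers in the usage below are illustrative, not season data):&lt;/p&gt;

```python
def accuracy(predicted_winners, actual_winners):
    # Proportion of correct tips
    correct = sum(p == a for p, a in zip(predicted_winners, actual_winners))
    return correct / len(actual_winners)

def mean_absolute_error(predicted_margins, actual_margins):
    # Average distance between predicted and actual winning margins
    errors = (abs(p - a) for p, a in zip(predicted_margins, actual_margins))
    return sum(errors) / len(actual_margins)
```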

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2nwJEYra--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/il5eu0l28bajwdpfgjka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2nwJEYra--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/il5eu0l28bajwdpfgjka.png" alt="Line chart for rolling mean accuracy of models with 3-round window"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This comparison of the rolling accuracy of the betting odds and Tipresias shows that both performed poorly in the early rounds, which is typical, as teams change rosters and sometimes coaches in the offseason, and it can be difficult to predict which changed for the better and which for the worse. It is also clear that Tipresias was consistently below the betting odds throughout the season, save for a single round near mid-season and a few late in the year. The late bump is entirely due to a particularly atrocious round 22, which featured a lot of coin-flip matches: the oddsmakers picked only three of nine correctly, while Tipresias picked seven.&lt;/p&gt;
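&lt;p&gt;The rolling accuracy in the chart is just a windowed mean over per-round results. A sketch, using the same 3-round window as the chart:&lt;/p&gt;

```python
def rolling_mean(values, window=3):
    # Mean over a sliding window; undefined until a full window exists,
    # so the output is shorter than the input by (window - 1)
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]
```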

&lt;p&gt;Even with the anomaly of round 22, and the sudden dip in the late-teen rounds, the general trend is for accuracy to increase through the early rounds, as oddsmakers and statistical models adjust to those off-season changes, flatten around the middle of the season, then fall a little for the final third or so. Finals are a bit more difficult to predict than mid-season matches, because there is greater parity between teams (even the worst finals team is in the top half of the competition, after all), but that doesn't explain the poorer accuracy in the late rounds of the regular season. A piece of conventional footy wisdom is that, as the remaining rounds fall away like shards of marble before the sculptor's chisel, the shape of the season is revealed: teams get a sense of where they'll finish, and ones safely at the top might rest some players, increasing their chances of losing a throw-away match, while teams just out of finals contention might not do everything in their power to beat lower-ranked teams, because a few losses might result in better draft picks for next year. One of my goals for the offseason is to take a closer look at these changing dynamics across rounds and how they might affect model predictions.&lt;/p&gt;

&lt;p&gt;Tipresias's overall performance relative to the betting odds is demonstrated even more clearly in their cumulative accuracies shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IaqEzt0l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bfbgl4ebmuexzw5jv2gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IaqEzt0l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bfbgl4ebmuexzw5jv2gw.png" alt="Line chart for cumulative accuracy of models"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although the rolling accuracy, with its short window, shows Tipresias having a good few rounds and briefly passing the betting odds, its overall accuracy was not higher at any point during the season. The model's performance needs to improve in general, but one area I will focus on this offseason is accuracy in the early rounds, as there is much more room for improvement in that part of the season than later on, when most heuristics and models are consistently tipping 70% - 80%. Squeezing an extra 5% - 10% accuracy out of a model that's already getting most matches right becomes increasingly difficult, and I think there are still some easier gains to be made elsewhere.&lt;/p&gt;
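&lt;p&gt;Cumulative accuracy, by contrast, keeps the full history of tips, which is why a few good rounds late in the season barely move it:&lt;/p&gt;

```python
from itertools import accumulate

def cumulative_accuracy(correct_per_round):
    # Running proportion of correct tips up to and including each round
    running_totals = accumulate(correct_per_round)
    return [total / (i + 1) for i, total in enumerate(running_totals)]
```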

&lt;h2&gt;
  Features, features everywhere
&lt;/h2&gt;

&lt;p&gt;So, which features steered Tipresias off course into the empty waste of the wine-dark sea? Since the underlying estimator is XGBoost, we can get the weights of all the features to see which had the largest impact on the model's decisions. It's possible to get these values from the model itself, but I used the model-explanation package &lt;a href="https://github.com/TeamHG-Memex/eli5"&gt;&lt;code&gt;eli5&lt;/code&gt;&lt;/a&gt;, because it offers a few functions to make extracting and visualising the underlying attributes a little easier. Even with the help of this package, I found working with an ensemble of XGBoost estimators nested in a bagging estimator nested in my own custom wrapper class a bit of a pain, which I hope to simplify for the next model to aid in analysing performance. I managed to loop over the instances of XGBoost and average their feature weights to get a sense of which features contributed most to the final model's predictions. I used the default importance metric &lt;code&gt;'gain'&lt;/code&gt;, in part because it's the default. Looking into it a bit more, however, I found that the other popular metric, &lt;code&gt;'weight'&lt;/code&gt;, tends to diminish the importance of categorical variables, and I wanted to avoid that, as I believed some categories would prove influential in the model's predictions. So, below are the 20 most important features for Tipresias by &lt;code&gt;gain&lt;/code&gt;.&lt;/p&gt;
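&lt;p&gt;The averaging step itself was simple once I could reach the individual estimators; roughly the following, where each dict stands in for one XGBoost instance's feature weights as extracted by &lt;code&gt;eli5&lt;/code&gt; (the extraction itself is omitted here):&lt;/p&gt;

```python
def average_feature_weights(weight_dicts):
    # Average per-feature gain across the estimators in the ensemble;
    # a feature missing from an estimator counts as zero for it
    totals = {}
    for weights in weight_dicts:
        for feature, weight in weights.items():
            totals[feature] = totals.get(feature, 0.0) + weight
    n_estimators = len(weight_dicts)
    return {feature: total / n_estimators for feature, total in totals.items()}
```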

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SQvQk7K1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/om4jc2f56xp37px2aqke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SQvQk7K1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/om4jc2f56xp37px2aqke.png" alt="Table of top features by gain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What sticks out is that &lt;code&gt;elo_pred_win&lt;/code&gt; is by far the most important feature in the model, with a gain of 0.39, compared to &lt;code&gt;at_home&lt;/code&gt;, which is the second most important with just 0.049. Even beyond &lt;code&gt;elo_pred_win&lt;/code&gt;, you'll notice many other features that start with &lt;code&gt;elo&lt;/code&gt;. This is because I incorporated an &lt;a href="https://en.wikipedia.org/wiki/Elo_rating_system"&gt;Elo-based model&lt;/a&gt; as a group of features that I fed into the ensemble model. The problem is that since a decent statistical model is going to be better at predicting match results than almost any other single data point, it is bound to have an outsized influence on the model that it's a part of. Unfortunately, Elo underperformed even the betting odds and Tipresias, achieving only 60.87% accuracy (it was closer to 70% during training). So, being overly influenced by a possibly-overfitted Elo model (or one that was overly susceptible to failing in an upset-heavy season, if we're being charitable) probably has something to do with Tipresias performing below expectations in 2019.&lt;/p&gt;
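&lt;p&gt;For reference, the core of an Elo system fits in a few lines. This is the textbook formulation with chess-style defaults (a 400-point scale and a k-factor of 32), not necessarily the parameters Tipresias uses:&lt;/p&gt;

```python
def expected_win_prob(rating_a, rating_b):
    """Probability that team A beats team B under the logistic Elo curve."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_ratings(rating_a, rating_b, a_won, k=32):
    """Shift both ratings toward the observed result by at most k points."""
    expected_a = expected_win_prob(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Evenly matched teams: the winner takes k/2 = 16 points from the loser
print(update_ratings(1500, 1500, a_won=True))
```

&lt;p&gt;A feature like &lt;code&gt;elo_pred_win&lt;/code&gt; presumably derives from the expectation step, with the update step run after each round's results come in.&lt;/p&gt;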

&lt;p&gt;Since creating Tipresias, I've read Google's &lt;a href="https://developers.google.com/machine-learning/guides/rules-of-ml"&gt;Rules of ML&lt;/a&gt;, which is a good, practical guide to the machine-learning development process. One rule that stuck in my mind was #40: "Keep ensembles simple". I definitely violated this rule by incorporating a statistical model into the base data set, treating its outputs as though they were of the same type as raw data about matches and players. Therefore, one of my first big changes to Tipresias will be restructuring the model into a voting ensemble rather than a bagging ensemble, so I can separate my raw data from the base models and have a clear hierarchy of inputs and outputs, finishing with the meta-estimator that will make predictions based on the predictions of the sub-models. I'll have to see what the performance is like after this change, but I'm willing to accept a short-term hit in the interest of making the model easier to interpret. I'm hoping that this will offer insight into new possibilities for improving long-term performance.&lt;/p&gt;
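&lt;p&gt;Stripped to its essentials, the target structure looks like this: base models that see only raw data, and a meta-step that sees only their predictions. The base models and feature names below are hypothetical stand-ins, not Tipresias internals (scikit-learn's &lt;code&gt;VotingRegressor&lt;/code&gt; provides a production-grade version of the same idea):&lt;/p&gt;

```python
def voting_predict(base_models, combine, raw_features):
    """Voting-style ensemble: base models see only the raw data,
    and the meta-step sees only the base models' predictions."""
    predictions = [model(raw_features) for model in base_models]
    return combine(predictions)

# Hypothetical base models, each predicting a match margin
def elo_model(features):
    return features["elo_rating_diff"] * 0.04

def stats_model(features):
    return features["avg_scoring_shot_diff"] * 1.5

def mean(predictions):
    return sum(predictions) / len(predictions)

match = {"elo_rating_diff": 50, "avg_scoring_shot_diff": 2}
print(voting_predict([elo_model, stats_model], mean, match))
```

&lt;p&gt;The point of the hierarchy is that the raw data never leaks past the base models, so each layer can be inspected on its own.&lt;/p&gt;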

&lt;p&gt;I was surprised by the presence of &lt;code&gt;'Regular'&lt;/code&gt; and &lt;code&gt;'Finals'&lt;/code&gt; (i.e. whether a match was in the regular season or finals) toward the top of the feature-importance list. As I mentioned above, I want to look more closely at how predictions change during different phases of the season and if there are any tendencies that hold from year to year. The importance of these round-type features further suggests that this could be a fruitful area of investigation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p60NCsRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ez68muscsrqovo06ybl7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p60NCsRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ez68muscsrqovo06ybl7.png" alt="Line chart of top features with average gain per round"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aggregating prediction explanations by feature and round didn't prove to be particularly illuminating. We see that there isn't much movement for most of the top features save for a slight downward trend for some of the Elo-based features and a sizeable jump in the importance of round-type features with the start of finals. Again, this requires further digging, but I suspect that the model may be under-fitting the changes in context during different phases of the AFL season, optimising for the high-accuracy middle rounds and suffering poor performance early and late in the season as a result.&lt;/p&gt;
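&lt;p&gt;For what it's worth, the aggregation itself is just a group-by-and-average over per-match explanations, sketched here with illustrative values rather than real Tipresias output:&lt;/p&gt;

```python
from collections import defaultdict

def mean_gain_by_feature_and_round(explanations):
    """Aggregate per-match prediction explanations into the average
    'gain' contribution of each feature in each round."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row in explanations:
        key = (row["feature"], row["round"])
        sums[key] += row["gain"]
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Illustrative per-match explanation rows, not real model output
explanations = [
    {"round": 1, "feature": "elo_pred_win", "gain": 0.42},
    {"round": 1, "feature": "elo_pred_win", "gain": 0.36},
    {"round": 2, "feature": "at_home", "gain": 0.05},
]
print(mean_gain_by_feature_and_round(explanations))
```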

&lt;h2&gt;
  
  
  Tipresias 2020
&lt;/h2&gt;

&lt;p&gt;Analysing the performance and pitfalls of the current version of the model offers some guidance on how I can improve performance for next season.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make the model easier to interpret&lt;br&gt;
This includes separating model-output features from raw data, and simplifying the class structure (i.e. not so many layers of wrapper classes) so that model-interpretability tools are a little easier to use. There are more packages and forms of analysis than I covered here, but I was unable to use them, because they're picky about which models they accept (&lt;code&gt;BaggingEstimator&lt;/code&gt; certainly didn't play nice with most of them) and which classes are exposed to the relevant analysis functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Investigate ways to optimise the model for different parts of the season&lt;br&gt;
I will pay particular attention to the early rounds, because that's where there's the most room for a model to gain an advantage over competitors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make better use of player data&lt;br&gt;
I incorporated player stats into my model for the first time this season, but had a difficult time figuring out how best to use these features, so I ended up just aggregating rolling averages into per-team stats and calling it a day. As a result, roster changes didn't move Tipresias's predictions much (rarely more than a few points). Since a big part of teams' changes in the off-season is due to gaining or losing players, increasing the importance of these features could be part of improving early-season accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add confidence percentage as part of the predictions&lt;br&gt;
This isn't related to model performance, but statistically-oriented tipping competitions include a metric called &lt;a href="http://probabilistic-footy.monash.edu/~footy/about.shtml#info"&gt;'bits'&lt;/a&gt; that measures a model's error according to its confidence in a given prediction. A confidence figure comes naturally with ML classifiers, but not regressors. However, I require the predicted margin that comes with a regressor, because that is a part of all tipping competitions. One option is to add a classifier as part of the model to get an extra percentage output. A possible alternative is to learn more about &lt;a href="http://www.jmlr.org/papers/volume9/shafer08a/shafer08a.pdf"&gt;conformal prediction&lt;/a&gt;, which is a way of adding confidence intervals to regressors. I'm still unsure of the feasibility of this approach, but it might allow me to engineer some sort of prediction confidence percentage from a regressor's output.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
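&lt;p&gt;For the curious, my reading of the linked Monash rules is that the bits score rewards confident correct tips and punishes confident wrong ones, with a maximum of 1 bit per match. Treat this sketch as my interpretation rather than the official implementation:&lt;/p&gt;

```python
import math

def bits(predicted_win_prob, outcome):
    """Monash-style 'bits' score for one tip. outcome is 'win' if the
    tipped team won, 'loss' if it lost, or 'draw' for a drawn match."""
    p = predicted_win_prob
    if outcome == "win":
        return 1 + math.log2(p)
    if outcome == "loss":
        return 1 + math.log2(1 - p)
    return 1 + 0.5 * math.log2(p * (1 - p))

# A 50/50 tip scores zero either way; confidence is rewarded or punished
print(bits(0.5, "win"), bits(0.9, "win"), bits(0.9, "loss"))
```

&lt;p&gt;The asymmetry is the interesting part: a confident wrong tip loses far more bits than a confident right tip gains, which is why a bare win probability isn't enough and calibration matters.&lt;/p&gt;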

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>sportsball</category>
    </item>
    <item>
      <title>Highlights from PyCon AU 2019</title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Sun, 11 Aug 2019 06:21:07 +0000</pubDate>
      <link>https://dev.to/englishcraig/highlights-from-pycon-au-2019-3joc</link>
      <guid>https://dev.to/englishcraig/highlights-from-pycon-au-2019-3joc</guid>
      <description>&lt;p&gt;I recently attended PyCon Australia, which was my first Python conference anywhere and only my second proper tech conference, but I can state, despite the small sample size, with confidence that PyCon AU 2019 was a great conference. Below are some personal highlights from last weekend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 1: Specialist Track Day
&lt;/h2&gt;

&lt;p&gt;My coworkers and I bought our tickets after the T-shirt order deadline, so we didn't get shirts, and, on the walk to the convention centre alongside the lovely Darling Harbour, a seagull dive-bombed me and got a big ole chunk of my cinnamon roll for its trouble (I quickly stuffed the rest in my mouth, and have sworn biblical levels of vengeance on that seagull if ever again we should meet: I'm basically like John Wick for seagulls now). My PyCon weekend had not gotten off to an auspicious start.&lt;/p&gt;

&lt;h3&gt;
  
  
  Camelot and Excalibur for PDF data
&lt;/h3&gt;

&lt;p&gt;Although I don't currently have any use cases for them, I was really impressed with &lt;a href="https://2019.pycon-au.org/talks/extracting-tabular-data-from-pdfs-with-camelot-excalibur"&gt;Vinayak Mehta's talk&lt;/a&gt; on the &lt;a href="https://github.com/camelot-dev/camelot"&gt;Camelot&lt;/a&gt; and &lt;a href="https://github.com/camelot-dev/excalibur"&gt;Excalibur&lt;/a&gt; packages for extracting data tables from PDF files. The way it can use spacing to detect tables that aren't visually separated into cells, and even offers a GUI for manually separating rows and columns in tricky documents, is really cool. I've never worked with government reports in PDF form, but I have written scripts for converting horribly-formatted government-issued spreadsheets into something resembling a pivotable table, and I have nothing but respect for the tooling (there's a GUI &lt;em&gt;and&lt;/em&gt; a CLI!) and flexibility (you can define character filters, column counts, and more!) that he and the other contributors have built into these packages. It almost makes me want to get data from PDFs. Almost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unity 3D environments for reinforcement learning
&lt;/h3&gt;

&lt;p&gt;In another case of information-I-won't-use-in-the-foreseeable-future-but-was-very-cool-nonetheless, &lt;a href="https://2019.pycon-au.org/talks/building-designing-teaching-and-training-simulation-environments-for-machine-learning"&gt;Paris Buttfield-Addison talked&lt;/a&gt; about building virtual environments in Unity 3D for training reinforcement learning models that can then be used to direct robots or self-driving cars. I had heard of Unity 3D, knew it powered some of the games I play, knew that my wife worked with it once, but that was about it. So, it was interesting to see Paris quickly simulate objects and environments with realistic physical interactions by dragging and dropping a few elements in the Unity GUI tool. He then demonstrated how to combine Unity with &lt;a href="https://github.com/Unity-Technologies/ml-agents"&gt;Unity Machine Learning Agents Toolkit&lt;/a&gt; (for plugging TensorFlow models into Unity) by walking through the creation of a virtual racetrack, the architecture for applying reinforcement learning, the code for a reinforcement learning model, then playing the video of his bulldozer careening through his virtual, python-themed track without so much as rubbing the barrier, despite the well-known fact that such activity &lt;em&gt;is&lt;/em&gt; racing.&lt;/p&gt;

&lt;p&gt;I had seen the use of virtual environments to train AI for robots before at the local ML/AI meetup, but it seemed like some sort of mathy black magic at the time; the combination of Unity's user-friendly UI and Paris's humble speaking style made it all seem so accessible, as if I too could one day create a self-driving car.&lt;/p&gt;

&lt;h3&gt;
  
  
  Threat modelling the Death Star
&lt;/h3&gt;

&lt;p&gt;I know nothing about security, and I don't find the subject particularly interesting, but my Star-Wars-fanatic wife insisted that I go to this talk. So, partly for the sake of my marriage, partly intrigued by the talk's conceit, I went to listen to &lt;a href="https://2019.pycon-au.org/talks/threat-modeling-the-death-star"&gt;Mario Areias talk&lt;/a&gt; about threat modelling. Due to my current role as a basic backend dev, I'm unlikely to put into practice the techniques that I learned about, but for sheer entertainment value, Mario's talk is worth watching. Also, if you're interested in speaking at a conference, note how the description of the talk is written to make one want to attend even if threat modelling might not interest them personally, and how the talk itself, from slides to speech, is both informative and fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 2: Main Conference Day 1 (don't think too hard about it)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lessons learned building Python microservices
&lt;/h3&gt;

&lt;p&gt;I've really been getting into DevOps and microservices over the past year, and &lt;a href="https://2019.pycon-au.org/talks/lessons-learned-building-python-microservices"&gt;Richard Jones's talk&lt;/a&gt; was one of the two at PyCon from which I got really great information about concepts, techniques, and tools that I can implement in projects in the near future. Hearing about the challenges that he and his team have faced in stitching together microservices that use different frameworks and languages, and the design decisions that they've made, reinforced some things that I've heard elsewhere and made me aware of some new possibilities. In particular, their separation of the backend into multiple layers of services (e.g. a layer that connects to the frontend and another that connects with the database) was interesting. Also, I'm really eager to try out some of the tooling that Richard mentioned. I've used &lt;a href="https://github.com/cookiecutter/cookiecutter"&gt;&lt;code&gt;cookiecutter&lt;/code&gt;&lt;/a&gt; a little before, but never created my own, and using it to give a consistent, framework-like directory structure to all microservices sounds like a good idea. &lt;a href="https://docs.pact.io/"&gt;&lt;code&gt;pact.io&lt;/code&gt;&lt;/a&gt; sounds like an absolute godsend for some of the problems that I've been having lately with keeping track of valid request and response bodies. One solution would be to become less of a careless flake, but having a tool that enforces contracts between services sounds more reasonable than completely changing my personality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sufficiently advanced testing (deep-dive talk)
&lt;/h3&gt;

&lt;p&gt;And this is the other talk that left me thinking, wow, I've got to try this stuff out, like yesterday. &lt;a href="https://2019.pycon-au.org/talks/sufficiently-advanced-testing"&gt;Zac Hatfield-Dodds's talk&lt;/a&gt;, being a deep-dive, gave him time to cover some advanced testing concepts, then get into the details of the package &lt;a href="https://github.com/HypothesisWorks/hypothesis"&gt;&lt;code&gt;hypothesis&lt;/code&gt;&lt;/a&gt;, which implements many of them. I was already using &lt;a href="https://github.com/joke2k/faker"&gt;&lt;code&gt;faker&lt;/code&gt;&lt;/a&gt; to generate random values for most of my test fixtures, but in this talk, Zac took it further by introducing me to the concept of fuzz testing, specifically, using algorithmically-generated randomness to probe the limits of what your code can handle without breaking. In theory, this will create combinations of inputs and situations that you never thought of, hopefully uncovering edge-case bugs before they pop up in production. In my day-to-day, I work on a giant Rails monolith with many, many interconnected parts, and the number of times our imaginative users create an absolutely inconceivable menagerie of conditions that break some far-flung corner of the codebase that hasn't been changed since that whitespace git commit from four years prior makes me really appreciate the potential value of such a technique.&lt;/p&gt;

&lt;p&gt;Zac then presented &lt;code&gt;hypothesis&lt;/code&gt;, of which he is a maintainer, and the ways in which we mere mortals can implement fuzz testing in our own projects without having to understand all those fancy algorithms. I have yet to really dig into the documentation, but the package seems really intuitive, with options to limit the randomly-generated inputs within given constraints, which not only serves to get your tests passing, but also makes explicit your assumptions about what can and cannot happen in your app. And, as we all know, explicit is better than implicit. There also seems to be a mechanism for saving input values from failed test runs, which goes a long way toward reducing the pain of flaky tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  When software and law are the same thing
&lt;/h3&gt;

&lt;p&gt;Another example of a talk and speaker that are the whole package: &lt;a href="https://2019.pycon-au.org/talks/when-software-and-law-are-the-same-thing"&gt;Brenda Wallace's talk&lt;/a&gt; was interesting and entertaining, and Brenda is a compelling speaker, combining humour and serious issues with ease. If you thought converting business requirements into code was difficult, have you ever tried to encode the logic of decades-old laws that have been changed piecemeal by this party, and that minister, over the years? Turns out different people have different ideas about what defines a 'child' and &lt;em&gt;exactly&lt;/em&gt; how old four-and-a-half years is. The idea behind this project is something that had never occurred to me, and its goals (democratising understanding of laws and simplifying people's interactions with government bureaucracy) are laudable. Having recently gained permanent residency in Australia myself (not to mention having interacted with immigration departments in four different countries), I can guarantee that even when you try to fill out all the forms correctly and go to all the right desks at all the right offices, you're just guessing half the time, because government instructions are never so clear nor so explicit as a good old &lt;code&gt;if&lt;/code&gt;/&lt;code&gt;else&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 3: Main Conference Day 2
&lt;/h2&gt;

&lt;p&gt;By Sunday, I was pretty exhausted. Whether it was the two full days of talks and learning and socialising, or the beers I had drunk the night before, or the whiskey I had drunk the night before, who can say? Regardless, I was pounding caffeine throughout the day, and I missed a few more talks than on previous days just to give myself a break from paying attention to stuff.&lt;/p&gt;

&lt;h3&gt;
  
  
  The new COBOL
&lt;/h3&gt;

&lt;p&gt;For me, the principal charm of &lt;a href="https://2019.pycon-au.org/talks/the-new-cobol"&gt;Benno Rice's talk&lt;/a&gt; on COBOL and how we talk about different technologies was his survey of the history of programming languages, their roots, principal uses, and coders' (often biased) perceptions of them. I have far more formal training in iambic pentameter than the Law of Demeter, so much of this was new information to me. Like any good talk, this one certainly inspired me to want to read more on the subject when I have some time. A simple history of the likes of COBOL, Lisp, FP, and OOP, however, would have been a little dry, so the fact that Benno was able to tie his history lesson into a broader point about how we talk about various languages and, more importantly, the people who work with them added immediate relevance to make it really engaging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Lightning Talks
&lt;/h2&gt;

&lt;p&gt;I won't go into specific lightning talks (though there were many good ones, some informative, some funny, some just plain weird, some all of the above), but I just wanted to say that the whole exercise was definitely a highlight of the conference for me. They were perfectly timed toward the end of &lt;a href="https://2019.pycon-au.org/talks/saturday-lightning-talks"&gt;Saturday&lt;/a&gt; and &lt;a href="https://2019.pycon-au.org/talks/sunday-lightning-talks"&gt;Sunday&lt;/a&gt;, right when the caffeine from afternoon tea was fading and the hunger for dinner was rising, to energise everyone in attendance, so that we could finish the day full of vim and vigour. When there's only five minutes to cover one's material, not all of the talks will go according to plan, and some speakers had to be cut off, but the consistent, quick tempo and the intellectual whiplash of hearing about so many different topics, techniques, and technologies in such a short period of time meant that even the chaos, the unexpected, contributed to my enjoyment. If you have the chance to attend a future PyCon AU, you should definitely attend the lightning talks sessions, and I highly recommend watching the videos from this year, because there is some really good stuff in there. &lt;/p&gt;

</description>
      <category>python</category>
      <category>devops</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Docker, Django, React: Building Assets and Deploying to Heroku</title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Sun, 02 Jun 2019 02:10:59 +0000</pubDate>
      <link>https://dev.to/englishcraig/docker-django-react-building-assets-and-deploying-to-heroku-24jh</link>
      <guid>https://dev.to/englishcraig/docker-django-react-building-assets-and-deploying-to-heroku-24jh</guid>
      <description>&lt;p&gt;Part 2 in a series on combining Docker, Django, and React. This builds on the development setup from &lt;a href="https://dev.to/englishcraig/creating-an-app-with-docker-compose-django-and-create-react-app-31lf"&gt;Part 1&lt;/a&gt;, so you might want to take a look at that first. If you want to skip to the end or need a reference, you can see the final version of the code on the &lt;a href="https://github.com/cfranklin11/docker-django-react/tree/production-heroku"&gt;&lt;code&gt;production-heroku&lt;/code&gt;&lt;/a&gt; branch of the repo.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update: &lt;a href="https://dev.to/ohduran"&gt;ohduran&lt;/a&gt; has created a &lt;a href="https://github.com/ohduran/cookiecutter-react-django"&gt;cookiecutter template&lt;/a&gt; based on this tutorial if you want a quick-and-easy way to get the code.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Now that we have our app humming like a '69 Mustang Shelby GT500 in our local environment, doing hot-reloading donuts all over the parking lot, it's time to deploy that bad boy, so the whole world can find out how many characters there are in all their favourite phrases. In order to deploy this app to production, we'll need to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up Django to use WhiteNoise to serve static assets in production.&lt;/li&gt;
&lt;li&gt;Create a production &lt;code&gt;Dockerfile&lt;/code&gt; that combines our frontend and backend into a single app.&lt;/li&gt;
&lt;li&gt;Create a new Heroku app to deploy to.&lt;/li&gt;
&lt;li&gt;Configure our app to deploy a Docker image to Heroku.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use WhiteNoise to serve our frontend assets
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Update settings for different environments
&lt;/h3&gt;

&lt;p&gt;Since we only want to use WhiteNoise in production, we'll have to change how our Django app's settings work to differentiate between the dev and prod environments. There are different ways to do this, but the one that seems to offer the most flexibility, and has worked well enough for me, is to create a settings file for each environment, all of which inherit from some base settings, then determine which settings file to use with an environment variable. In &lt;code&gt;backend/hello_world&lt;/code&gt;, which is our project directory, create a &lt;code&gt;settings&lt;/code&gt; folder (as usual, with a &lt;code&gt;__init__.py&lt;/code&gt; inside to make it a module), move the existing &lt;code&gt;settings.py&lt;/code&gt; into it, and rename it &lt;code&gt;base.py&lt;/code&gt;. This will be the collection of base app settings that all environments will inherit. To make sure we don't accidentally deploy with unsafe settings, cut the following code from &lt;code&gt;base.py&lt;/code&gt;, and paste it into a newly-created &lt;code&gt;development.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
&lt;/span&gt;
&lt;span class="c1"&gt;# SECURITY WARNING: keep the secret key used in production secret!
&lt;/span&gt;&lt;span class="n"&gt;SECRET_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"&amp;lt;some long series of letters, numbers, and symbols that Django generates&amp;gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# SECURITY WARNING: don't run with debug turned on in production!
&lt;/span&gt;&lt;span class="n"&gt;DEBUG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="n"&gt;ALLOWED_HOSTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"backend"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Double check now: have those lines of code disappeared from &lt;code&gt;base.py&lt;/code&gt;? Good. We are slightly less hackable. At the top of &lt;code&gt;development.py&lt;/code&gt;, add the line &lt;code&gt;from hello_world.settings.base import *&lt;/code&gt;. What the &lt;code&gt;*&lt;/code&gt; import from &lt;code&gt;base&lt;/code&gt; does is make all of those settings that are already defined in our base available in &lt;code&gt;development&lt;/code&gt; as well, where we're free to overwrite or extend them as necessary.&lt;/p&gt;

&lt;p&gt;Since we're embedding our settings files a little deeper in the project by moving them into a &lt;code&gt;settings&lt;/code&gt; subdirectory, we'll also need to update &lt;code&gt;BASE_DIR&lt;/code&gt; in &lt;code&gt;base.py&lt;/code&gt; to point to the correct directory, which is now one level higher (relatively speaking). You can wrap the value in one more &lt;code&gt;os.path.dirname&lt;/code&gt; call, but I find the following a little easier to read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;BASE_DIR&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;abspath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dirname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__file__&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s"&gt;"../../"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Django determines which module to use when running the app with the environment variable &lt;code&gt;DJANGO_SETTINGS_MODULE&lt;/code&gt;, which should be the module path to the settings that we want to use. To avoid errors, we update the default in &lt;code&gt;backend/hello_world/wsgi.py&lt;/code&gt; to &lt;code&gt;'hello_world.settings.base'&lt;/code&gt;, and add the following to our &lt;code&gt;backend&lt;/code&gt; service in &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;DJANGO_SETTINGS_MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;hello_world&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;development&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add production settings with WhiteNoise
&lt;/h3&gt;

&lt;p&gt;The reason we want to use &lt;a href="http://whitenoise.evans.io/en/stable/"&gt;WhiteNoise&lt;/a&gt; in production instead of whatever Django does out-of-the-box is because, by default, Django is very slow to serve frontend assets, whereas WhiteNoise is reasonably fast. Not as fast as professional-grade CDN-AWS-S3-bucket-thingy fast, but fast enough for our purposes.&lt;/p&gt;

&lt;p&gt;To start, we need to install WhiteNoise by adding &lt;code&gt;whitenoise&lt;/code&gt; to &lt;code&gt;requirements.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, since we have dev-specific settings, let's create &lt;code&gt;production.py&lt;/code&gt; with settings of its very own. We'll begin with production variations of the development settings that we have, which should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;hello_world.settings.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;

&lt;span class="n"&gt;SECRET_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"SECRET_KEY"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;DEBUG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;ALLOWED_HOSTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"PRODUCTION_HOST"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll add the allowed host once we set up an app on Heroku. Note that you can hard-code the allowed host in the settings file, but using an environment variable makes it a little easier to change if you deploy to a different environment. The &lt;code&gt;SECRET_KEY&lt;/code&gt; can be any string you want, but for security reasons it should be some long string of random characters (I just use a password generator for mine), and you should save it as an environment/config variable hidden away from the cruel, thieving world. Do not check it into source control!&lt;/p&gt;
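&lt;p&gt;If you don't have a password generator handy, Python's standard library can produce a suitable value; any sufficiently long random string will do:&lt;/p&gt;

```python
import secrets

# 50 random bytes, URL-safe base64-encoded: plenty of entropy for a
# Django SECRET_KEY. Store the output in an environment variable.
print(secrets.token_urlsafe(50))
```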

&lt;p&gt;To enable WhiteNoise to serve our frontend assets, we add the following to &lt;code&gt;production.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;INSTALLED_APPS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"whitenoise.runserver_nostatic"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Must insert after SecurityMiddleware, which is first in settings/common.py
&lt;/span&gt;&lt;span class="n"&gt;MIDDLEWARE&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"whitenoise.middleware.WhiteNoiseMiddleware"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;TEMPLATES&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;"DIRS"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BASE_DIR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"../"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"frontend"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;

&lt;span class="n"&gt;STATICFILES_DIRS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BASE_DIR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"../"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"frontend"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"static"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="n"&gt;STATICFILES_STORAGE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"whitenoise.storage.CompressedManifestStaticFilesStorage"&lt;/span&gt;
&lt;span class="n"&gt;STATIC_ROOT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BASE_DIR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"staticfiles"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;STATIC_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/static/"&lt;/span&gt;
&lt;span class="n"&gt;WHITENOISE_ROOT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BASE_DIR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"../"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"frontend"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"root"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the above comes from the &lt;a href="http://whitenoise.evans.io/en/stable/django.html"&gt;WhiteNoise documentation&lt;/a&gt; for implementation in Django, along with a little trial and error to figure out which file paths to use for finding the assets built by React (more on that below). The confusing bit is all the variables that refer to slightly different frontend-asset-related directories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;TEMPLATES&lt;/code&gt;: directories with templates (e.g. Jinja) or html files&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;STATICFILES_DIRS&lt;/code&gt;: directory where Django can find html, js, css, and other static assets&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;STATIC_ROOT&lt;/code&gt;: directory to which Django will move those static assets and from which it will serve them when the app is running&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WHITENOISE_ROOT&lt;/code&gt;: directory where WhiteNoise can find all &lt;strong&gt;non-html&lt;/strong&gt; static assets&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Add home URL for production
&lt;/h3&gt;

&lt;p&gt;In addition to changing the settings, we have to make Django aware of the path &lt;code&gt;/&lt;/code&gt;, because right now it only knows about &lt;code&gt;/admin&lt;/code&gt; and &lt;code&gt;/char_count&lt;/code&gt;. So, we'll have to update &lt;code&gt;/backend/hello_world/urls.py&lt;/code&gt; to look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.contrib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;admin&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.urls&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;re_path&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.views.generic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TemplateView&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;char_count.views&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;char_count&lt;/span&gt;

&lt;span class="n"&gt;urlpatterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"admin/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;admin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;site&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"char_count"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;char_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"char_count"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;re_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;".*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TemplateView&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;as_view&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;template_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we've added a regex path (&lt;code&gt;.*&lt;/code&gt;) that says to Django, "Any request that you don't have explicit instructions for, just respond by sending &lt;code&gt;index.html&lt;/code&gt;". In practice, this means that in a dev environment, React's Webpack server will still handle calls to &lt;code&gt;/&lt;/code&gt; (and any path other than the two defined above), but in production, when there's no Webpack server, Django will just shrug its shoulders and serve &lt;code&gt;index.html&lt;/code&gt; from the static files directory (as defined in the settings above), which is exactly what we want. The reason we use &lt;code&gt;.*&lt;/code&gt; instead of a specific path is that it gives us the freedom to define as many frontend routes as we want (with React Router, for example) without having to update Django's URL list.&lt;/p&gt;
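&lt;p&gt;First-match-wins resolution is easy to sketch without Django at all. The resolver below is a stand-in for illustration (not Django's real implementation), but the pattern order mirrors our &lt;code&gt;urlpatterns&lt;/code&gt;: put the catch-all anywhere but last, and it would swallow every request.&lt;/p&gt;

```python
import re

# Patterns are tried top to bottom, so the ".*" catch-all must come last.
URL_PATTERNS = [
    (r"admin/", "admin view"),
    (r"char_count", "char_count view"),
    (r".*", "index.html"),
]

def resolve(path):
    """Return the first view whose pattern matches the path, like Django's resolver."""
    for pattern, view in URL_PATTERNS:
        if re.match(pattern, path):
            return view

# Unknown paths fall through to index.html, which React Router can then handle
view = resolve("some/react/route")
```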

&lt;p&gt;None of these changes should affect our app's functionality locally, so try running &lt;code&gt;docker-compose up&lt;/code&gt; to make sure nothing breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a production Dockerfile
&lt;/h2&gt;

&lt;p&gt;In order for WhiteNoise to be able to serve our frontend assets, we'll need to include them in the same image as our Django app. There are a few ways we could accomplish this, but I think the simplest is to copy the Dockerfile that builds our backend image and add to it the installation of our frontend dependencies, along with the building of our assets. Since this image will contain a single app that encompasses both frontend and backend, this Dockerfile goes in the project root.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.6&lt;/span&gt;

&lt;span class="c"&gt;# Install curl, node, &amp;amp; yarn&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;curl &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://deb.nodesource.com/setup_8.x | bash &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install &lt;/span&gt;nodejs &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-o-&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; https://yarnpkg.com/install.sh | bash

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/backend&lt;/span&gt;

&lt;span class="c"&gt;# Install Python dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./backend/requirements.txt /app/backend/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; pip &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Install JS dependencies&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/frontend&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./frontend/package.json ./frontend/yarn.lock /app/frontend/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;/.yarn/bin/yarn &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Add the rest of the code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /app/&lt;/span&gt;

&lt;span class="c"&gt;# Build static files&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;/.yarn/bin/yarn build

&lt;span class="c"&gt;# Have to move all static files other than index.html to root/&lt;/span&gt;
&lt;span class="c"&gt;# for whitenoise middleware&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/frontend/build&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;root &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;.ico &lt;span class="k"&gt;*&lt;/span&gt;.js &lt;span class="k"&gt;*&lt;/span&gt;.json root

&lt;span class="c"&gt;# Collect static files&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /app/backend/staticfiles

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# SECRET_KEY is only included here to avoid raising an error when generating static files.&lt;/span&gt;
&lt;span class="c"&gt;# Be sure to add a real SECRET_KEY config variable in Heroku.&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;DJANGO_SETTINGS_MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hello_world.settings.production &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nv"&gt;SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;somethingsupersecret &lt;span class="se"&gt;\
&lt;/span&gt;  python3 backend/manage.py collectstatic &lt;span class="nt"&gt;--noinput&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; $PORT&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; python3 backend/manage.py runserver 0.0.0.0:$PORT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dockerfile above installs everything we need to run both the Django and React apps, builds the frontend assets, then collects those assets for WhiteNoise to serve. Since the &lt;code&gt;collectstatic&lt;/code&gt; command makes changes to the files, we want to run it during the build step rather than as a separate command during deployment. The latter might work on some platforms, but it failed for me on Heroku, because dyno filesystems are ephemeral, so file changes made after the build are discarded.&lt;/p&gt;

&lt;p&gt;Also, note the command that moves static files from &lt;code&gt;/app/frontend/build&lt;/code&gt; to &lt;code&gt;/app/frontend/build/root&lt;/code&gt;, leaving &lt;code&gt;index.html&lt;/code&gt; in place. WhiteNoise needs everything that isn't an HTML file in a separate subdirectory. Otherwise, it gets confused about which files are HTML and which aren't, and nothing ends up getting loaded. Many Bothans died to bring us this information.&lt;/p&gt;
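&lt;p&gt;If you ever need that shuffle outside the Dockerfile, it's easy to script. A sketch in Python (the file names below just mimic CRA's build output; only top-level non-HTML files are moved, matching the &lt;code&gt;mv&lt;/code&gt; command above):&lt;/p&gt;

```python
import pathlib
import shutil
import tempfile

def separate_assets(build_dir):
    """Move top-level non-HTML files into build_dir/root, leaving index.html in place."""
    root = build_dir / "root"
    root.mkdir(exist_ok=True)
    for entry in build_dir.iterdir():
        if entry.is_file() and entry.suffix != ".html":
            shutil.move(str(entry), str(root / entry.name))

# Demonstrate on a throwaway directory that mimics CRA's build output
build = pathlib.Path(tempfile.mkdtemp())
for name in ("index.html", "main.js", "favicon.ico", "manifest.json"):
    (build / name).touch()
separate_assets(build)
```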

&lt;h2&gt;
  
  
  Create an app on Heroku
&lt;/h2&gt;

&lt;p&gt;If you're new to Heroku, their &lt;a href="https://devcenter.heroku.com/articles/getting-started-with-python"&gt;getting started guide&lt;/a&gt; will walk you through the basics of creating a generic, non-dockerized Python app. If you don't have it yet, install the &lt;a href="https://devcenter.heroku.com/articles/getting-started-with-python#set-up"&gt;Heroku CLI&lt;/a&gt;. We can create a Heroku app by running &lt;code&gt;heroku create&lt;/code&gt; within our project. Once you've created your new Heroku app, copy the URL displayed by the command, and add it to &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; in &lt;code&gt;settings.production&lt;/code&gt;. Just like adding &lt;code&gt;backend&lt;/code&gt; to our allowed hosts on dev, we need this to make sure Django's willing to respond to our HTTP requests. (I can't even begin to count the number of blank screens I've repeatedly refreshed with a mix of confusion and despair due to forgetting to add the hostname to &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; when deploying to a new environment). If you want to keep it secret, or want greater flexibility, you can add &lt;code&gt;os.environ.get("PRODUCTION_HOST")&lt;/code&gt; to the allowed hosts instead, then add your Heroku app's URL to its config variables. I'm not sure how strict Heroku is about which URL elements to include or omit, but &lt;code&gt;&amp;lt;your app name&amp;gt;.herokuapp.com&lt;/code&gt; definitely works.&lt;/p&gt;

&lt;p&gt;For environment variables in production, we can use the Heroku CLI to set secure config variables that will be hidden from the public. Heroku has a way of adding these variables with &lt;code&gt;heroku.yml&lt;/code&gt;, but I always have trouble getting it to work, so I opt for the manual way in this case. This has the added benefit of not having to worry about which variables are okay to commit to source control and which we need to keep secret. To set the config variables, run the following in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;heroku config:set &lt;span class="nv"&gt;PRODUCTION_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your app name&amp;gt;.herokuapp.com &lt;span class="nv"&gt;SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your secret key&amp;gt; &lt;span class="nv"&gt;DJANGO_SETTINGS_MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hello_world.settings.production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As stated earlier, &lt;code&gt;PRODUCTION_HOST&lt;/code&gt; is optional (depending on whether you added the app URL to &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; directly). &lt;code&gt;DJANGO_SETTINGS_MODULE&lt;/code&gt; will make sure that the app uses our production settings when running on Heroku.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy to Heroku
&lt;/h2&gt;

&lt;p&gt;There are a couple of different ways we can deploy Dockerized apps to Heroku, but I like &lt;code&gt;heroku.yml&lt;/code&gt;, because, like &lt;code&gt;docker-compose.yml&lt;/code&gt;, it has all the app configs and commands in one place. Heroku has a &lt;a href="https://devcenter.heroku.com/articles/build-docker-images-heroku-yml"&gt;good introduction&lt;/a&gt; to how it all works, but for our purposes, we only need the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python3 backend/manage.py runserver 0.0.0.0:$PORT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to run &lt;code&gt;heroku stack:set container&lt;/code&gt; in the terminal to tell our Heroku app to use Docker rather than one of Heroku's language-specific build packs. Now, deploying is as easy as running &lt;code&gt;git push heroku master&lt;/code&gt; (if you're on the &lt;code&gt;master&lt;/code&gt; branch; otherwise, run &lt;code&gt;git push heroku &amp;lt;your branch&amp;gt;:master&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Once Heroku is finished building our image and deploying, we can open a browser to &lt;code&gt;&amp;lt;your app name&amp;gt;.herokuapp.com&lt;/code&gt; and count characters on the CLOOOOOUUUUUD!!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Conceptually, putting the frontend and backend together into a single app that we can deploy to Heroku is very simple, but there are so many little gotchas in the configurations and file structure (not to mention the lack of meaningful error messages whenever one makes a mistake) that I found it devilishly difficult to get it all working. Even going through this process a second time while writing this tutorial, I forgot something here, added the wrong thing there, and spent hours trying to remember how I got it working the first time around, and what terrible sin I might have committed to cause the coding gods to punish me now.&lt;/p&gt;

&lt;p&gt;But here we are, having just accomplished the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configured environment-specific settings for Django.&lt;/li&gt;
&lt;li&gt;Set up WhiteNoise to serve static assets in production.&lt;/li&gt;
&lt;li&gt;Created a production Dockerfile that includes frontend and backend code and dependencies.&lt;/li&gt;
&lt;li&gt;Created a Heroku app and deployed our code to it using &lt;code&gt;heroku.yml&lt;/code&gt; and the container stack.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>django</category>
      <category>react</category>
      <category>heroku</category>
    </item>
    <item>
      <title>Creating an app with Docker Compose, Django, and Create React App </title>
      <dc:creator>Craig Franklin</dc:creator>
      <pubDate>Thu, 16 May 2019 23:03:10 +0000</pubDate>
      <link>https://dev.to/englishcraig/creating-an-app-with-docker-compose-django-and-create-react-app-31lf</link>
      <guid>https://dev.to/englishcraig/creating-an-app-with-docker-compose-django-and-create-react-app-31lf</guid>
      <description>&lt;p&gt;Final code for this tutorial if you want to skip the text, or get lost with some of the references, can be found on &lt;a href="https://github.com/cfranklin11/docker-django-react"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update: &lt;a href="https://dev.to/ohduran"&gt;ohduran&lt;/a&gt; has created a &lt;a href="https://github.com/ohduran/cookiecutter-react-django"&gt;cookiecutter template&lt;/a&gt; based on this tutorial if you want a quick-and-easy way to get the code.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Inspired by sports data sites like &lt;a href="https://squiggle.com.au/"&gt;Squiggle&lt;/a&gt; and &lt;a href="http://www.matterofstats.com/"&gt;Matter of Stats&lt;/a&gt;, in building the app that houses &lt;a href="https://github.com/cfranklin11/tipresias"&gt;Tipresias&lt;/a&gt; (my footy-tipping machine-learning model), I wanted to include a proper front-end with metrics, charts, and round-by-round tips. I already knew that I would have to dockerize the thing, because I was working with multiple packages across Python and R, and such complex dependencies are incredibly difficult to manage in a remote-server context (and impossible to run on an out-of-the-box service like Heroku) without using Docker. I could have avoided exacerbating my complexity issue by using basic Django views (i.e. static HTML templates) to build my pages, but having worked with a mishmash of ancient Rails views that had React components grafted on to add a little interactivity (then a lot of interactivity), I preferred starting out with clear separation between my frontend and backend. What's more, I wanted to focus on the machine learning, data engineering, and server-side logic (not to mention the fact that I couldn't design my way out of a wet paper bag), so my intelligent, lovely wife agreed to help me out with the frontend, and there was no way she was going to settle for coding in the context of a 10-year-old paradigm. It was going to be a modern web-app architecture, or I was going to have to pad my own divs.&lt;/p&gt;

&lt;p&gt;The problem with combining Docker, Django, and React was that I had never set up anything like this before, and, though I ultimately figured it out, I had to piece together my solution from multiple guides/tutorials that did some aspect of what I wanted without covering the whole. In particular, the tutorials that I found tended to build static Javascript assets that Django could use in its views. This is fine for production, but working without hot-reloading (i.e. having file changes automatically restart the server, so that they are reflected in the relevant pages that are loaded in the browser) is the hair shirt of development: at first you think you can endure the mild discomfort, but the constant itching wears you down, becoming the all-consuming focus of your every waking thought, driving you to distraction, to questioning all of your choices in life. Imagine having to run a build command that takes maybe a minute every time you change so much as a single line of code. Side projects don't exactly require optimal productivity, but, unlike jobs, if they become a pain to work on, it's pretty easy to just quit.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we're gonna do
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a Django app that runs inside a Docker container.&lt;/li&gt;
&lt;li&gt;Create a React app with the all-too-literally-named Create React App that runs inside a Docker container.&lt;/li&gt;
&lt;li&gt;Implement these dockerized apps as services in Docker Compose.&lt;/li&gt;
&lt;li&gt;Connect the frontend service to a basic backend API from which it can fetch data. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This tutorial assumes working knowledge of Docker, Django, and React in order to focus on the specifics of getting these three things working together in a dev environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create a dockerized Django app
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a project directory named whatever you want, then a &lt;code&gt;backend&lt;/code&gt; subdirectory with a &lt;code&gt;requirements.txt&lt;/code&gt; that just adds the &lt;code&gt;django&lt;/code&gt; package for now. This will allow us to install and run Django in a Docker image built with the following &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use an official Python runtime as a parent image
FROM python:3.6

# Adding backend directory to make absolute filepaths consistent across services
WORKDIR /app/backend

# Install Python dependencies
COPY requirements.txt /app/backend
RUN pip3 install --upgrade pip -r requirements.txt

# Add the rest of the code
COPY . /app/backend

# Make port 8000 available for the app
EXPOSE 8000

# Be sure to use 0.0.0.0 for the host within the Docker container,
# otherwise the browser won't be able to find it
CMD python3 manage.py runserver 0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the terminal, run the following commands to build the image, create a Django project named hello_world, and run the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; backend:latest backend
docker run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$PWD&lt;/span&gt;/backend:/app/backend backend:latest django-admin startproject hello_world &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$PWD&lt;/span&gt;/backend:/app/backend &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:8000 backend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we create a volume for the &lt;code&gt;backend&lt;/code&gt; directory, so the code created by &lt;code&gt;startproject&lt;/code&gt; will appear on our machine. The &lt;code&gt;.&lt;/code&gt; at the end of the create command places all of the Django folders and files directly inside our backend directory instead of creating a new project directory, which would complicate managing the working directory within the Docker container.&lt;/p&gt;

&lt;p&gt;Open your browser to &lt;code&gt;localhost:8000&lt;/code&gt; to verify that the app is up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Create a dockerized Create React App (CRA) app
&lt;/h2&gt;

&lt;p&gt;Although I got my start coding frontend Javascript, I found my calling working on back-end systems. So, through a combination of my own dereliction and the rapid pace of change of frontend tools and technologies, I am ill-equipped to set up a modern frontend application from scratch. I am, however, fully capable of installing a package and running a command.&lt;/p&gt;

&lt;p&gt;Unlike with the Django app, we can't create a Docker image with a CRA app all at once: we first need a &lt;code&gt;Dockerfile&lt;/code&gt; with node so we can initialise the CRA app, and only then can we add the usual &lt;code&gt;Dockerfile&lt;/code&gt; commands to install dependencies. So, create a &lt;code&gt;frontend&lt;/code&gt; directory with a &lt;code&gt;Dockerfile&lt;/code&gt; that looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use an official node runtime as a parent image
FROM node:8

WORKDIR /app/

# Install dependencies
# COPY package.json yarn.lock /app/

# RUN yarn install

# Add rest of the client code
COPY . /app/

EXPOSE 3000

# CMD npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some of the commands are currently commented out, because a few of the files they reference don't exist yet, but we will need these commands later. Run the following commands in the terminal to build the image, create the app, and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; frontend:latest frontend
docker run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$PWD&lt;/span&gt;/frontend:/app frontend:latest npx create-react-app hello-world
&lt;span class="nb"&gt;mv &lt;/span&gt;frontend/hello-world/&lt;span class="k"&gt;*&lt;/span&gt; frontend/hello-world/.gitignore frontend/ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rmdir &lt;/span&gt;frontend/hello-world
docker run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$PWD&lt;/span&gt;/frontend:/app &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 frontend:latest npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we move the newly-created app directory's contents up to the frontend directory and remove it. Django gives us the option to do this by default, but I couldn't find anything to suggest that CRA will do anything other than create its own directory. Working around this nested structure is kind of a pain, so I find it easier to just move everything up to the docker-service level and work from there. Navigate your browser to &lt;code&gt;localhost:3000&lt;/code&gt; to make sure the app is running. Also, you can uncomment the rest of the commands in the &lt;code&gt;Dockerfile&lt;/code&gt;, so that any new dependencies will be installed the next time you rebuild the image.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Docker-composify into services
&lt;/h2&gt;

&lt;p&gt;Now that we have our two Docker images and are able to run the apps in their respective Docker containers, let's simplify the process of running them with Docker Compose. In &lt;code&gt;docker-compose.yml&lt;/code&gt;, we can define our two services, &lt;code&gt;frontend&lt;/code&gt; and &lt;code&gt;backend&lt;/code&gt;, and how to run them, which allows us to consolidate the multiple &lt;code&gt;docker&lt;/code&gt; commands, and their many arguments, into far fewer &lt;code&gt;docker-compose&lt;/code&gt; commands. The config file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./backend:/app/backend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000:8000"&lt;/span&gt;
    &lt;span class="na"&gt;stdin_open&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python3 manage.py runserver 0.0.0.0:8000&lt;/span&gt;
  &lt;span class="na"&gt;frontend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./frontend:/app&lt;/span&gt;
      &lt;span class="c1"&gt;# One-way volume to use node_modules from inside image&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/app/node_modules&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NODE_ENV=development&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm start&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've converted the various arguments for the docker commands into key-value pairs in the config file, and now we can run both our frontend and backend apps by just executing &lt;code&gt;docker-compose up&lt;/code&gt;. With that, you should be able to see them both running in parallel at &lt;code&gt;localhost:8000&lt;/code&gt; and &lt;code&gt;localhost:3000&lt;/code&gt;.&lt;/p&gt;
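&lt;p&gt;To make the mapping concrete, here's roughly what Compose is doing for the &lt;code&gt;backend&lt;/code&gt; service under the hood. This is a sketch, not an exact replay: the image tag &lt;code&gt;myproject-backend&lt;/code&gt; is illustrative (Compose generates its own names), and running it requires a local Docker daemon.&lt;/p&gt;

```shell
# Roughly what `docker-compose up backend` replaces: build the image, then
# run it with the bind mount (volumes:), the port mapping (ports:), an
# interactive TTY (stdin_open/tty:), and the service's command.
# The tag "myproject-backend" is illustrative, not a name Compose generates.
docker build -t myproject-backend ./backend
docker run --rm -it \
  -v "$(pwd)/backend:/app/backend" \
  -p 8000:8000 \
  myproject-backend \
  python3 manage.py runserver 0.0.0.0:8000
```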

&lt;h2&gt;
  
  
  4. Connecting both ends into a single app
&lt;/h2&gt;

&lt;p&gt;Of course, the purpose of this post is not to learn how to overcomplicate running independent React and Django apps just for the fun of it. We are here to build a single, integrated app with a dynamic, modern frontend fed with data from a robust backend API. Toward that goal, while still keeping the app as simple as possible, let's have the frontend send text to the backend, have the backend return the number of characters in that text, and have the frontend display the result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the Django API
&lt;/h3&gt;

&lt;p&gt;Let's start by creating an API route for the frontend to call. You can create a new Django app (which is kind of a sub-app/module within the Django project architecture) by running the following in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose run --rm backend python3 manage.py startapp char_count&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This gives you a new directory inside &lt;code&gt;backend&lt;/code&gt; called &lt;code&gt;char_count&lt;/code&gt;, where we can define routes and their associated logic.&lt;/p&gt;
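&lt;p&gt;The &lt;code&gt;startapp&lt;/code&gt; command scaffolds the usual Django app files, roughly:&lt;/p&gt;

```
char_count/
├── __init__.py
├── admin.py
├── apps.py        # defines CharCountConfig, referenced later in INSTALLED_APPS
├── migrations/
│   └── __init__.py
├── models.py
├── tests.py
└── views.py       # where our char_count view will go
```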

&lt;p&gt;We can create the API response in &lt;code&gt;backend/char_count/views.py&lt;/code&gt; with the following, which, as promised, will return the character count of the submitted text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.http&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;JsonResponse&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;char_count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;JsonResponse&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="s"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
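&lt;p&gt;Just to see the logic in isolation, here's a plain-Python sketch of what the view computes, outside the Django request cycle. The real view gets its already-decoded query params from &lt;code&gt;request.GET&lt;/code&gt;; the helper name &lt;code&gt;char_count_payload&lt;/code&gt; is mine, not part of the project.&lt;/p&gt;

```python
import json
from urllib.parse import parse_qs, urlparse

def char_count_payload(url):
    """Mimic the char_count view: read ?text=... and return the JSON body."""
    params = parse_qs(urlparse(url).query)
    # request.GET.get("text", "") falls back to "" when the param is missing
    text = params.get("text", [""])[0]
    return json.dumps({"count": len(text)})

print(char_count_payload("/char_count?text=hello world"))  # {"count": 11}
```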



&lt;p&gt;Now, to make the Django project aware of our new app, we need to update &lt;code&gt;INSTALLED_APPS&lt;/code&gt; in &lt;code&gt;backend/hello_world/settings.py&lt;/code&gt; by adding &lt;code&gt;"char_count.apps.CharCountConfig"&lt;/code&gt; to the list. To add our count response to the available URLs, we update &lt;code&gt;backend/hello_world/urls.py&lt;/code&gt; with our char_count view as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.contrib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;admin&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;django.urls&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;char_count.views&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;char_count&lt;/span&gt;

&lt;span class="n"&gt;urlpatterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'admin/'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;admin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;site&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'char_count'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;char_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'char_count'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we're changing project settings, we'll need to stop our Docker Compose processes (either Ctrl+C or &lt;code&gt;docker-compose stop&lt;/code&gt; in a separate tab) and start them again with &lt;code&gt;docker-compose up&lt;/code&gt;. We can now go to &lt;code&gt;localhost:8000/char_count?text=hello world&lt;/code&gt; and see that it has 11 characters.&lt;/p&gt;
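&lt;p&gt;Note that the count is 11 because the space counts too: the browser URL-encodes it on the way out and Django decodes it on the way in. A quick stdlib sketch of that round trip (&lt;code&gt;quote_plus&lt;/code&gt; uses &lt;code&gt;+&lt;/code&gt; for spaces; &lt;code&gt;%20&lt;/code&gt; works just as well):&lt;/p&gt;

```python
from urllib.parse import quote_plus, unquote_plus

text = "hello world"
encoded = quote_plus(text)       # spaces become "+" in the query string
decoded = unquote_plus(encoded)  # Django hands the view the decoded value

print(encoded)       # hello+world
print(len(decoded))  # 11 -- the space is counted too
```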

&lt;h3&gt;
  
  
  Connecting React to the API
&lt;/h3&gt;

&lt;p&gt;First, let's add a little more of that sweet config to make sure we don't get silent errors related to networking issues that we'd really rather not deal with. Our Django app currently won't accept requests addressed to any host other than &lt;code&gt;localhost&lt;/code&gt;, but our React app can only reach it via the Docker Compose service name &lt;code&gt;backend&lt;/code&gt; (Compose resolves service names to container addresses on its internal network). So, we need to add &lt;code&gt;"backend"&lt;/code&gt; to &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; in &lt;code&gt;backend/hello_world/settings.py&lt;/code&gt;, and we add &lt;code&gt;"proxy": "http://backend:8000"&lt;/code&gt; to &lt;code&gt;package.json&lt;/code&gt;. This will allow both services to talk to each other. Also, we'll need the npm package &lt;code&gt;axios&lt;/code&gt; to make the API call, so add it to &lt;code&gt;package.json&lt;/code&gt; and rebuild the images with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; frontend npm add axios
docker-compose down
docker-compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
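&lt;p&gt;For reference, the relevant parts of &lt;code&gt;frontend/package.json&lt;/code&gt; end up looking something like this (the axios version shown is purely illustrative, and Create React App only honours &lt;code&gt;proxy&lt;/code&gt; on the development server, not in production builds):&lt;/p&gt;

```json
{
  "proxy": "http://backend:8000",
  "dependencies": {
    "axios": "^0.21.0"
  }
}
```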



&lt;p&gt;My frontend dev skills are, admittedly, subpar, but please keep in mind that the little component below is not a reflection of my knowledge of React (or even HTML for that matter). In the interest of simplicity, I just removed the CRA boilerplate and replaced it with an input, a button, a click handler, and a headline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./App.css&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;handleSubmit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#char-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;

  &lt;span class="nx"&gt;axios&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/char_count?text=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#char-count&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;textContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; characters!`&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;App&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt; &lt;span class="nx"&gt;htmlFor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;char-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;How&lt;/span&gt; &lt;span class="nx"&gt;many&lt;/span&gt; &lt;span class="nx"&gt;characters&lt;/span&gt; &lt;span class="nx"&gt;does&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/label&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;char-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;button&lt;/span&gt; &lt;span class="nx"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleSubmit&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;have&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/button&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h3&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;char-count&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h3&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when we enter text into the input and click the button, the character count of the text is displayed below. And best of all: we got hot reloading all up and down the field! You can add new components to the frontend, new classes to the backend, and all your changes (short of config or dependencies) will be reflected in the functioning of the app as you work, without having to manually restart the servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In the end, setting all this up isn't too complicated, but there are lots of little gotchas, many of which don't give you a nice error message to look up on Stack Overflow. Also, at least in my case, I really struggled at first to conceptualise how the pieces were going to work together. Would the React app go inside the Django app, like it does with &lt;code&gt;webpacker&lt;/code&gt; in Rails? If the two apps are separate Docker Compose services, how do you connect them? To recap, we learned how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up Django in a Docker container.&lt;/li&gt;
&lt;li&gt;Set up Create React App in a Docker container.&lt;/li&gt;
&lt;li&gt;Configure those containers with Docker Compose.&lt;/li&gt;
&lt;li&gt;Use Docker Compose's service names (e.g. &lt;code&gt;backend&lt;/code&gt;) and &lt;code&gt;package.json&lt;/code&gt;'s &lt;code&gt;"proxy"&lt;/code&gt; attribute to direct React's HTTP call to Django's API and display the response.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>django</category>
      <category>docker</category>
      <category>react</category>
    </item>
  </channel>
</rss>
