<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anthony Dodd</title>
    <description>The latest articles on DEV Community by Anthony Dodd (@doddzilla).</description>
    <link>https://dev.to/doddzilla</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F157368%2F434fa388-45ca-4190-b6c0-93a1e3a069f9.jpeg</url>
      <title>DEV Community: Anthony Dodd</title>
      <link>https://dev.to/doddzilla</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/doddzilla"/>
    <language>en</language>
    <item>
      <title>Distributed State Management - Without the "E"</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Wed, 07 Apr 2021 06:37:42 +0000</pubDate>
      <link>https://dev.to/doddzilla/distributed-state-management-without-the-e-3ia1</link>
      <guid>https://dev.to/doddzilla/distributed-state-management-without-the-e-3ia1</guid>
      <description>&lt;p&gt;Come, let us journey into the depths of software, into the heart of application data and state management. We will explore an event-less pattern, relying only on our trusty old friends Postgres and gRPC.&lt;/p&gt;

&lt;p&gt;On this journey, we will explore how a software team can manage their data in a distributed microservices architecture, where state "ownership" is divided between various microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting the Stage
&lt;/h2&gt;

&lt;p&gt;To set the stage, let's say that the software team we are journeying with is in the business of hosting a key/value store for customers — let's call it KVX — and runs everything in the cloud on a Kubernetes platform.&lt;/p&gt;

&lt;p&gt;This team of ours loves Postgres (who doesn't?), is pretty good with gRPC, but is not at all into the world of events: event processing, EDA, none of that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is presented as our challenge on this journey.&lt;/strong&gt; Anything to do with events seems to be a serious blocker. We need to find a way to build a robust platform, but if our solution has anything to do with eventing patterns, then we've failed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Break it Down
&lt;/h2&gt;

&lt;p&gt;If we can't dig into anything related to the "E" word, then we are left with a somewhat narrow scope. Kafka, NATS, RabbitMQ, and any of the myriad other tools in the ecosystem which might help on this front are all off the table.&lt;/p&gt;

&lt;p&gt;So let's get down to the basics. First principles. What are we even trying to do? What is the problem which needs to be solved? Here's what we know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to propagate state changes from one transactional service to others, maybe even call a few external cloud provider APIs and the like.&lt;/li&gt;
&lt;li&gt;We're pretty good at handling requests using gRPC.&lt;/li&gt;
&lt;li&gt;We can persist data in Postgres using transactions pretty well.&lt;/li&gt;
&lt;li&gt;Responding to customer requests, we're pretty good at that too.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what's the problem then? Well, this whole microservices thing. Oftentimes, when a microservice finishes updating its database tables, we need to do some asynchronous work. OK, we've recorded what the user has requested, but now we need to actually make some things happen. We need to provision some KVX instances, we need to make some calls to our cloud provider to provision some infrastructure, we need to make some real things happen.&lt;/p&gt;

&lt;p&gt;To state the problem succinctly: we need to drive the state of our system to match the state which the user has requested. The desired state needs to become the current state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take a Shot
&lt;/h2&gt;

&lt;p&gt;Given our constraints about the "E" word and all, our options are somewhat limited. The only tools we really have to work with are our trusty old Postgres database, gRPC ... and well, that's about it. Let's see what we can do.&lt;/p&gt;

&lt;p&gt;When a user tells us that they want to provision a new KVX instance on our cloud platform, we need to be sure that we actually record their request in Postgres. This algorithm is simple enough:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A request comes in on one of our gRPC services, so we handle the request,&lt;/li&gt;
&lt;li&gt;Next we open a Postgres transaction (we're good at this) and write some data,&lt;/li&gt;
&lt;li&gt;Once we've committed our transaction, we respond to the user telling them that we've received their request and we'll get started on provisioning their new KVX instance right away.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We've responded to the user, we know what they want. Now what? How do we do something with that data in Postgres? It's just sitting there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Work Templates
&lt;/h3&gt;

&lt;p&gt;Because we are avoiding anything to do with the "E" word, we need to keep our solution simple. Along with our standard data models, we will create records in our database called "work templates", stored in a table called &lt;code&gt;work&lt;/code&gt;. To make things even simpler, let's say we store the specification of the work which needs to be done as JSON in a JSONB column in our &lt;code&gt;work&lt;/code&gt; table. Let's say the table looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;                                     &lt;span class="k"&gt;Table&lt;/span&gt; &lt;span class="nv"&gt;"work"&lt;/span&gt;
 &lt;span class="k"&gt;Column&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="k"&gt;Type&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Nullable&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="k"&gt;Default&lt;/span&gt;
&lt;span class="c1"&gt;-------------+--------------------------+----------+----------------------------------&lt;/span&gt;
 &lt;span class="n"&gt;id&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;nextval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'work_id_seq'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="nb"&gt;time&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="n"&gt;spec&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;                    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="n"&gt;Indexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nv"&gt;"work_pkey"&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;btree&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this approach, we will need to have some tasks in our code which poll Postgres periodically to check for work to be done. If we're feeling advanced, we could use Postgres LISTEN/NOTIFY to make things a bit more real-time. Our worker tasks will need to lock a row in the &lt;code&gt;work&lt;/code&gt; table in order to process it, maybe just using a simple &lt;code&gt;DELETE FROM work WHERE id=ANY(SELECT id FROM work ORDER BY id LIMIT 1) RETURNING *;&lt;/code&gt; query within a transaction.&lt;/p&gt;
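&lt;p&gt;As a sketch, here is roughly what that dequeue could look like. Note that modern Postgres (9.5+) also offers &lt;code&gt;FOR UPDATE SKIP LOCKED&lt;/code&gt;, which lets concurrent workers claim different rows instead of queueing up behind the same lock; treat the exact query shape here as illustrative, not prescriptive:&lt;/p&gt;

```sql
-- One unit of work per transaction. SKIP LOCKED lets a second worker
-- pass over rows already claimed by another transaction rather than block.
BEGIN;

DELETE FROM work
WHERE id = (
    SELECT id
    FROM work
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, time, spec;

-- ... perform the work described by spec ...

COMMIT;
```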

&lt;p&gt;While that record is held in limbo by our transaction, we can now safely begin our work without worrying about another worker attempting to process the same record. If some error takes place and we need to roll back, that work will be available for later processing. If we break down our units of work small enough, we can apply the following pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grab a record from our &lt;code&gt;work&lt;/code&gt; table, and hold onto that record in our transaction.&lt;/li&gt;
&lt;li&gt;Do the work described therein. Maybe we call a few other microservices, call Kubernetes, update a few other database rows.&lt;/li&gt;
&lt;li&gt;If there is additional work which needs to be performed, but we want it to be its own isolated unit of work, we can write a new record to our &lt;code&gt;work&lt;/code&gt; table describing the work to be done, which will be picked up later by another worker.&lt;/li&gt;
&lt;li&gt;Finally we will commit our transaction, and that unit of work will now be done.&lt;/li&gt;
&lt;/ul&gt;
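&lt;p&gt;The four steps above can be sketched with a purely in-memory stand-in: a &lt;code&gt;deque&lt;/code&gt; plays the role of the &lt;code&gt;work&lt;/code&gt; table, and each loop iteration plays the role of one worker transaction. The record shapes and field names here are hypothetical:&lt;/p&gt;

```python
from collections import deque

def process(unit, queue, done):
    """In-memory stand-in for one worker transaction: do the work,
    optionally enqueue follow-up units of work, then 'commit'."""
    done.append(unit["kind"])
    for follow_up in unit.get("then", []):
        queue.append(follow_up)  # a new `work` row, picked up later

# One queued unit of work whose spec asks for a follow-up unit.
queue = deque([{"kind": "provision-kvx", "then": [{"kind": "notify-user"}]}])
done = []
while queue:
    process(queue.popleft(), queue, done)

print(done)  # order in which units of work completed
```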

&lt;p&gt;Nice! A successful eve... I mean, work processing implementation. Let's look at the edge cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Errors
&lt;/h3&gt;

&lt;p&gt;We've got a few different types of errors that we need to worry about here. At the highest level, we can categorize our errors as either transient or non-transient.&lt;/p&gt;

&lt;p&gt;Transient errors are things like network blips, VM restarts, pods being suddenly rescheduled in Kubernetes and work being interrupted. The idea is that these errors are not permanent. The next time your code is able to execute, things will quite likely work as they should.&lt;/p&gt;

&lt;p&gt;Non-transient errors are errors which will not resolve themselves. Someone misspelled the name of our &lt;code&gt;werk&lt;/code&gt; table, someone forgot to commit a transaction, maybe some boolean logic was wrong, so on and so forth. An engineer will typically need to fix the bug and deploy the updated code. Once the code has been deployed, the expectation is that the system will begin to work as needed once again.&lt;/p&gt;

&lt;p&gt;Great, so error handling is covered. Worst case scenario, we need to deploy some bug fixes, but then things will be back to normal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Idempotence
&lt;/h3&gt;

&lt;p&gt;This is a slightly more of a difficult issue. In order to make our workers &lt;a href="https://en.wikipedia.org/wiki/Idempotence"&gt;idempotent&lt;/a&gt;, we will need to ensure that our algorithms expect to be retried. We will need to account for the fact that the various other microservices or external APIs (cloud providers, Kubernetes &amp;amp;c) may have already been successfully called as part of this unit of work, but that a failure may have taken place after the fact.&lt;/p&gt;

&lt;p&gt;It is important to note that retries are critical for state consistency. If we are actually attempting to drive our system's state to match the user's requested state, then we can't just give up at the first sign of trouble. Think about how terrible an experience that would be for the user. What's worse? Think about all of the inconsistent half-state we would have sitting around. Consider this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If we successfully provision a new EC2 instance for a user in AWS, but then abandon our work because of some transient error after the fact, we end up with lots of orphaned EC2 instances in AWS ... and we also end up with a LARGE bill at the end of the month.&lt;/li&gt;
&lt;li&gt;Similarly, if we are making gRPC requests to another microservice from our worker, and that peer microservice has already committed a transaction as part of our gRPC request, but then we roll back our own work because of some transient error after the fact, we now have a partially propagated state change. We can't just leave it there. What if that service were responsible for billing our customers? They would get charged even though their KVX service was never provisioned. Not a good idea.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The point is hopefully now clear. Retries are an integral part of ensuring that we reliably propagate state changes throughout the system. In order to make this work properly, we need to ensure that our algorithms are crafted to expect such conditions.&lt;/p&gt;
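&lt;p&gt;One common way to craft such an algorithm is to key each externally visible step and durably record which keys have completed (in our setup, that record would be committed to Postgres alongside the rest of the unit of work). This in-memory sketch only shows the shape of the idea; the step names are invented:&lt;/p&gt;

```python
def run_unit(steps, completed):
    """Retry-safe execution of one unit of work: each externally visible
    step carries a key, and steps already recorded as complete are
    skipped when the unit is retried."""
    for key, action in steps:
        if key not in completed:
            action()
            completed.add(key)  # durably recorded in a real system

calls = []
completed = set()
steps = [
    ("ec2:provision", lambda: calls.append("provision-ec2")),
    ("billing:charge", lambda: calls.append("charge-customer")),
]
run_unit(steps, completed)  # first attempt
run_unit(steps, completed)  # retry after a transient failure
print(calls)                # each side effect happened exactly once
```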

&lt;p&gt;As a final note on this subject, if one of our units of work is not safe to retry — e.g., sending emails — it is often the case that email providers offer their own idempotency mechanism to guard against duplicate requests. If no such mechanism is available, then we are by definition dealing with an "at most once" constraint, and the algorithm must be crafted to ensure that the critical section will never be retried. The solution can be simple: commit your transaction first, then try to send the email. If it fails, well ... the user will have to request that a new email be sent. Really, just use an email provider which offers an idempotent interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Synchronization
&lt;/h3&gt;

&lt;p&gt;Awesome, we've covered some serious ground, we even have an idempotent eve... I mean, worker system. What next? Well, we need to be sure that we avoid race conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Race_condition"&gt;Race conditions&lt;/a&gt; are endemic to distributed systems. Engineers need to stay vigilant to ensure that their algorithms are race condition free. This is especially a problem with distributed systems because the scope is no longer isolated to an individual process. We are now dealing with lots of different microservices, all of them communicating with each other at various points in time, updating state and making various sorts of changes to the system.&lt;/p&gt;

&lt;p&gt;Many different circumstances can trigger race conditions, and nearly any user-facing feature is subject to them. They are especially dangerous when we are dealing with money, infrastructure, and other things which are more than just "data".&lt;/p&gt;

&lt;p&gt;We are using Postgres for everything, so this is actually an easy problem for us to solve. We add a boolean &lt;code&gt;has_active_work&lt;/code&gt; column to the table which stores our KVX instances. Any work initiated in our system related to a customer's KVX instance simply sets &lt;code&gt;has_active_work&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; on the database row for that specific KVX instance, as part of the transaction initiated by the user's request.&lt;/p&gt;

&lt;p&gt;If a user attempts to make another request to change some aspect of their KVX instance, we can kindly inform them that they still have some active work taking place for their KVX instance, and that they should try again when it is finished. Presenting a nice UI on top of this data is quite simple.&lt;/p&gt;

&lt;p&gt;Once a worker task has finished all of its work, as part of its transaction it will set that &lt;code&gt;has_active_work&lt;/code&gt; column to &lt;code&gt;false&lt;/code&gt; for the target KVX instance.&lt;/p&gt;
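&lt;p&gt;Assuming a hypothetical &lt;code&gt;kvx_instances&lt;/code&gt; table carrying the &lt;code&gt;has_active_work&lt;/code&gt; column, the guard could look roughly like this:&lt;/p&gt;

```sql
-- Claim the instance for new work. The UPDATE only succeeds when no
-- other work is in flight; zero rows returned means we kindly tell
-- the user that work is still in progress.
UPDATE kvx_instances
SET has_active_work = true
WHERE id = $1 AND has_active_work = false
RETURNING id;

-- Later, as part of the worker's final transaction:
UPDATE kvx_instances
SET has_active_work = false
WHERE id = $1;
```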

&lt;p&gt;Problem solved. We lock out race conditions before they have a chance to get into the system. However, we need to remain vigilant! Bugs can always creep in. A little documentation and peer review can go a long way to keep this pattern robust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stigma
&lt;/h3&gt;

&lt;p&gt;Wow, that's got to be everything right? We've got a production grade eve... I mean, worker system now. But what will everyone else say about this?&lt;/p&gt;

&lt;p&gt;There is some stigma associated with this approach. Typically, the criticism is that it encourages tight coupling between services. If a worker in &lt;code&gt;service-abc&lt;/code&gt; (a microservice) needs to call &lt;code&gt;service-xyz&lt;/code&gt; (another microservice), then we have created a dependency chain. In order for &lt;code&gt;service-abc&lt;/code&gt; to function, &lt;code&gt;service-xyz&lt;/code&gt; and any other services it depends upon must be available. The same is true for any external 3rd party APIs such as AWS, Kubernetes, email providers, or any other such services which our workers in &lt;code&gt;service-abc&lt;/code&gt; depend upon.&lt;/p&gt;

&lt;p&gt;Practically speaking, this is inescapable for 3rd party resources which a team might depend upon. If a team needs to orchestrate the AWS API, it doesn't matter how deep you bury that code under layers of abstraction: sooner or later, the AWS API will need to be called. Is anyone really trying to remove that constraint? What would that even practically mean? A new dependency, that's what it would mean.&lt;/p&gt;

&lt;p&gt;Maybe this isn't such an issue after all. We can generate alerts and observability data when we are having trouble accessing our dependencies. Maybe having explicit dependencies can be a good thing. Let's roll with that perspective for now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scale
&lt;/h3&gt;

&lt;p&gt;Perhaps a criticism might be that the "work template" pattern just won't scale: all of those gRPC calls, all of that dependency management. However, none of these are concrete arguments. Data flowing over a network is going to be common to any distributed system (practically speaking), and dependencies will always exist no matter where you put them or how deep you bury them.&lt;/p&gt;

&lt;p&gt;"Well ... Postgres won't be able to handle all of those worker queries at scale." I would counter by saying that Postgres can be quite surprising. Benchmarks would be needed; however, there is truly no doubt that even a single Postgres database instance can handle a surprising scale of throughput. Maybe if this were to become an issue, we could look into CockroachDB, TimescaleDB, or one of the other alternatives.&lt;/p&gt;

&lt;h2&gt;
  
  
  PID Controllers
&lt;/h2&gt;

&lt;p&gt;A final note that I would like to make on this subject overall is that this worker system we've defined reminds me a bit of a simple proportional controller — a type of &lt;a href="https://en.wikipedia.org/wiki/PID_controller"&gt;PID controller&lt;/a&gt; — which is far more common in the embedded systems space.&lt;/p&gt;

&lt;p&gt;The templates of work can be seen as an &lt;a href="https://en.wikipedia.org/wiki/Classical_control_theory"&gt;error signal&lt;/a&gt;: the system is currently in state "x", but the user wants it to be in state "y", so we do some work to bring the system into state "y". With this model, an entire system can be seen as a series of controllers, each reconciling its observed state with the desired state and taking action to drive the system toward it.&lt;/p&gt;
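&lt;p&gt;A toy reconciliation pass makes the analogy concrete: compute the "error" between desired and observed state, then emit the actions that would close the gap. The state and action names here are invented for illustration:&lt;/p&gt;

```python
def reconcile(observed, desired):
    """One pass of a proportional-style controller: the difference
    between desired and observed state is the error signal, and the
    returned actions would drive that error toward zero."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have != want:
            actions.append((name, "scale", want - have))
    return actions

observed = {"kvx-instances": 2}
desired = {"kvx-instances": 3, "backups": 1}
print(reconcile(observed, desired))
```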

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We did it. All that's left to do is to give some feedback. What do you think? Would you build a system like this? What are we missing?&lt;/p&gt;

&lt;p&gt;Any and all feedback is welcome! Cheers 🍻&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>async</category>
      <category>postgres</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Trunk 0.7.0 | Stable Pipeline API | Future Goals</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Thu, 08 Oct 2020 03:30:22 +0000</pubDate>
      <link>https://dev.to/doddzilla/trunk-0-7-0-stable-pipeline-api-future-goals-cef</link>
      <guid>https://dev.to/doddzilla/trunk-0-7-0-stable-pipeline-api-future-goals-cef</guid>
      <description>&lt;p&gt;Build, bundle &amp;amp; ship your Rust WASM application to the web.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk"&gt;Github repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/releases/tag/v0.7.0"&gt;Release notes 0.7.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/releases/tag/v0.6.0"&gt;Release notes 0.6.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;We now have a &lt;code&gt;brew&lt;/code&gt; package available. Just &lt;code&gt;brew install trunk&lt;/code&gt;. Works on macOS, Linux &amp;amp; Windows (WSL). Woot woot!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post covers the &lt;code&gt;0.6.0&lt;/code&gt; &amp;amp; &lt;code&gt;0.7.0&lt;/code&gt; releases of Trunk. We'll get to the nitty-gritty of these releases shortly, but I wanted to take a moment to talk about where Trunk is now and where I and others in the community would like to take it in the future.&lt;/p&gt;

&lt;p&gt;First of all, and most importantly, I believe we have finally found a perfect pattern for declaring asset pipelines in the source HTML (typically a project's &lt;code&gt;index.html&lt;/code&gt;). The pattern which is now the standard as of the &lt;code&gt;0.7.0&lt;/code&gt; release is as follows: &lt;code&gt;&amp;lt;link data-trunk rel="{pipelineType}" data-other data-attrs data-here /&amp;gt;&lt;/code&gt;. Let's break this down.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every asset pipeline, everything Trunk does in relation to assets, is now controlled by normal HTML &lt;code&gt;link&lt;/code&gt; tags with the special &lt;code&gt;data-trunk&lt;/code&gt; attribute. This simple mechanism makes clear which &lt;code&gt;link&lt;/code&gt;s are intended for Trunk to process and which are not.&lt;/li&gt;
&lt;li&gt;Trunk &lt;code&gt;link&lt;/code&gt;s must have the &lt;code&gt;rel&lt;/code&gt; attribute, a living standard HTML attribute which normally declares the relationship of the referenced resource to the HTML document. In our case, this represents the Trunk pipeline type. See the &lt;a href="https://github.com/thedodd/trunk#assets"&gt;Trunk README #assets&lt;/a&gt; section for more details on supported assets and their attributes.&lt;/li&gt;
&lt;li&gt;Speaking of attributes, all pipeline types take some number of attributes. Most require the standard &lt;code&gt;href&lt;/code&gt; attribute to reference some target file. Others take non-standard attributes, which are always prefixed with &lt;code&gt;data-&lt;/code&gt;. E.g., the &lt;code&gt;rel="rust"&lt;/code&gt; pipeline type is optional; if omitted, Trunk will use the &lt;code&gt;Cargo.toml&lt;/code&gt; in the source HTML's directory. However, if your cargo project exposes multiple binaries, you will need to specify which binary Trunk should use, and this pipeline type supports the &lt;code&gt;data-bin="bin-name"&lt;/code&gt; attribute for exactly that reason. Check out the aforementioned assets section in the README for more details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Awesome! I'm very excited about how much this pipeline API has evolved. It is VERY extensible (what can't you specify via &lt;code&gt;data-*&lt;/code&gt; attributes?), and my hope is that this pipeline API will ultimately become the 1.0 API for Trunk. However, Trunk is a young project, and still has a long way to go. Let's talk about the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trunk's Future
&lt;/h2&gt;

&lt;p&gt;There are lots of great features the Trunk community has been discussing, a few notable ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;support for automatic browser reloading via WebSockets or SSE. This is definitely par for the course as far as web bundlers are concerned.&lt;/li&gt;
&lt;li&gt;WASM HMR (hot module reloading). This is just an extension of the above, however there is a lot of awesome potential here. Building Rust WASM web applications is quite a lot of fun these days.&lt;/li&gt;
&lt;li&gt;inline CSS, JS and other applicable asset types. This will be an easy extension to the new pipelines API discussed above. For most of these asset types, it will be a simple &lt;code&gt;data-inline&lt;/code&gt; attribute, and Trunk should be able to generate the necessary code to have the associated asset inlined.&lt;/li&gt;
&lt;li&gt;CSS components pattern. This is something which I personally think would be pretty cool. For those of you that remember EmberJS from back in the day, they had a nice feature where one could just place their CSS right next to their components, and they would be concatenated and served for you. Easy lift for Trunk, and folks might find it quite useful.&lt;/li&gt;
&lt;li&gt;A BIG LIFT: a Trunk library which will allow folks to declare assets directly in their Rust code right next to their WASM components. Already lots of discussion on this feature, still some planning and design work to do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've said a lot, so I'll say one last thing here: Trunk is an excellent project to get involved with! The new pipeline API has come along with an awesome refactor of the internal layout of the code. Adding new asset pipelines and pipeline extensions is easy and enjoyable! &lt;strong&gt;This community would be even better with you involved!&lt;/strong&gt; Cheers mate! Let's do this!&lt;/p&gt;

&lt;h2&gt;
  
  
  the nitty-gritty
&lt;/h2&gt;

&lt;h2&gt;
  
  
  0.7.0
&lt;/h2&gt;

&lt;h3&gt;
  
  
  changed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;All assets which are to be processed by trunk must now be declared as HTML &lt;code&gt;link&lt;/code&gt; elements as such: &lt;code&gt;&amp;lt;link data-trunk rel="rust|sass|css|icon|copy-file|..." data-attr0 data-attr1/&amp;gt;&lt;/code&gt;. The links may appear anywhere in the HTML and Trunk will process them and replace them or delete them based on the associated pipeline's output. If the link element does not have the &lt;code&gt;data-trunk&lt;/code&gt; attribute, it will not be processed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  fixed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fixed &lt;a href="https://github.com/thedodd/trunk/issues/50"&gt;#50&lt;/a&gt;: the ability to copy a file or an entire dir into the dist dir is now supported with two different pipeline types: &lt;code&gt;&amp;lt;link data-trunk rel="copy-file" href="target/file"/&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;link data-trunk rel="copy-dir" href="target/dir"/&amp;gt;&lt;/code&gt; respectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  removed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;manifest-path&lt;/code&gt; option has been removed from all Trunk subcommands. The path to the &lt;code&gt;Cargo.toml&lt;/code&gt; is now specified in the source HTML as &lt;code&gt;&amp;lt;link rel="rust" href="path/to/Cargo.toml"/&amp;gt;&lt;/code&gt;. The &lt;code&gt;href="..."&lt;/code&gt; attribute may be omitted, in which case Trunk will look for a &lt;code&gt;Cargo.toml&lt;/code&gt; within the same directory as the source HTML. If the &lt;code&gt;href&lt;/code&gt; attribute points to a directory, Trunk will look for the &lt;code&gt;Cargo.toml&lt;/code&gt; file in that directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  0.6.0
&lt;/h2&gt;

&lt;h3&gt;
  
  
  added
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Closed &lt;a href="https://github.com/thedodd/trunk/issues/59"&gt;#59&lt;/a&gt;: Support for writing the public URL (&lt;code&gt;--public-url&lt;/code&gt;) to the HTML output.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  fixed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Closed &lt;a href="https://github.com/thedodd/trunk/issues/62"&gt;#62&lt;/a&gt;: Improved handling of file paths declared in the source &lt;code&gt;index.html&lt;/code&gt; to avoid issues on Windows.&lt;/li&gt;
&lt;li&gt;Closed &lt;a href="https://github.com/thedodd/trunk/issues/58"&gt;#58&lt;/a&gt;: The output WASM file generated from the cargo build is now determined purely based on a JSON build plan provided from cargo itself. This will help to provide a more stable pattern for finding build artifacts. If you were running into issues where Trunk was not able to find the WASM file built from cargo due to hyphens or underscores in the name, that problem should now be a thing of the past.&lt;/li&gt;
&lt;li&gt;The default location of the &lt;code&gt;dist&lt;/code&gt; dir has been slightly modified. The &lt;code&gt;dist&lt;/code&gt; dir will now default to being generated in the parent dir of cargo's &lt;code&gt;target&lt;/code&gt; dir. This helps to make behavior a bit more consistent when executing trunk for locations other than the CWD.&lt;/li&gt;
&lt;li&gt;Fixed an issue where paths declared in a &lt;code&gt;Trunk.toml&lt;/code&gt; file were not being treated as relative to the file itself.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Trunk 0.5.0 | Proxy System</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Wed, 23 Sep 2020 13:21:01 +0000</pubDate>
      <link>https://dev.to/doddzilla/trunk-0-5-0-proxy-system-3426</link>
      <guid>https://dev.to/doddzilla/trunk-0-5-0-proxy-system-3426</guid>
      <description>&lt;p&gt;Build, bundle &amp;amp; ship your Rust WASM application to the web.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk"&gt;Github repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/releases/tag/v0.5.0"&gt;Release notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Trunk now ships with a built-in proxy which can be enabled when running &lt;code&gt;trunk serve&lt;/code&gt;. There are two ways to configure the proxy, each discussed below. All Trunk proxies will transparently pass along the request body, headers, and query parameters to the proxy backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  proxy cli flags
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;trunk serve&lt;/code&gt; command accepts two proxy related flags.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--proxy-backend&lt;/code&gt; specifies the URL of the backend server to which requests should be proxied. The URI segment of the given URL will be used as the path on the Trunk server to handle proxy requests. E.g., &lt;code&gt;trunk serve --proxy-backend=http://localhost:9000/api/&lt;/code&gt; will proxy any requests received on the path &lt;code&gt;/api/&lt;/code&gt; to the server listening at &lt;code&gt;http://localhost:9000/api/&lt;/code&gt;. Further path segments or query parameters will be seamlessly passed along.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--proxy-rewrite&lt;/code&gt; specifies an alternative URI on which the Trunk server is to listen for proxy requests. Any requests received on the given URI will be rewritten to match the URI of the proxy backend, effectively stripping the rewrite prefix. E.g., &lt;code&gt;trunk serve --proxy-backend=http://localhost:9000/ --proxy-rewrite=/api/&lt;/code&gt; will proxy any requests received on &lt;code&gt;/api/&lt;/code&gt; over to &lt;code&gt;http://localhost:9000/&lt;/code&gt; with the &lt;code&gt;/api/&lt;/code&gt; prefix stripped from the request, while everything following the &lt;code&gt;/api/&lt;/code&gt; prefix will be left unchanged.&lt;/p&gt;

&lt;h3&gt;
  
  
  config file
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;Trunk.toml&lt;/code&gt; config file accepts multiple &lt;code&gt;[[proxy]]&lt;/code&gt; sections, which allows for multiple proxies to be configured. Each section requires at least the &lt;code&gt;backend&lt;/code&gt; field, and optionally accepts the &lt;code&gt;rewrite&lt;/code&gt; field, both corresponding to the &lt;code&gt;--proxy-*&lt;/code&gt; CLI flags discussed above.&lt;/p&gt;

&lt;p&gt;As with other Trunk config, a proxy declared via the CLI takes final precedence and causes any config file proxies to be ignored, even if there are multiple proxies declared in the config file.&lt;/p&gt;

&lt;p&gt;The following is a snippet from the &lt;code&gt;Trunk.toml&lt;/code&gt; file in this repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[proxy]]&lt;/span&gt;
&lt;span class="py"&gt;rewrite&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/api/v1/"&lt;/span&gt;
&lt;span class="py"&gt;backend&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"http://localhost:9000/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Announcing YBC | Yew Bulma Components</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Fri, 18 Sep 2020 20:20:17 +0000</pubDate>
      <link>https://dev.to/doddzilla/announcing-ybc-yew-bulma-components-4fe7</link>
      <guid>https://dev.to/doddzilla/announcing-ybc-yew-bulma-components-4fe7</guid>
      <description>&lt;p&gt;YBC is a Yew component library based on the Bulma CSS framework.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/ybc"&gt;Github repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.rs/ybc/0.1.2/ybc/"&gt;API Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bulma.io/"&gt;Bulma CSS framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://yew.rs/"&gt;The Yew project&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;YBC encapsulates all of the structure, style and functionality of the Bulma CSS framework as a set of Yew components. &lt;strong&gt;YBC also ships with support for the Yew Router,&lt;/strong&gt; adding Bulma-styled components which wrap the Yew Router components for clean integration.&lt;/p&gt;

&lt;p&gt;As a guiding principle, YBC does not attempt to encapsulate every single Bulma style as a Rust type, let alone the many valid style combinations. That would be far too complex, and probably limiting to the user in many ways. Instead, YBC handles structure, required classes, functionality, and sane defaults, and every component can be customized with additional classes for an exact look and feel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To get started with YBC,&lt;/strong&gt; have a look at the &lt;a href="https://github.com/thedodd/ybc#getting-started"&gt;Getting Started&lt;/a&gt; guide in the README. A few pertinent highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YBC works out of the box with Bulma CSS. Add &lt;code&gt;&amp;lt;link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.9.0/css/bulma.min.css"/&amp;gt;&lt;/code&gt; to your HTML, and then you're ready to start using YBC.&lt;/li&gt;
&lt;li&gt;YBC also supports full customization using Bulma's recommended customization pattern. &lt;a href="https://github.com/thedodd/ybc#add-bulma-sass-allows-customization--themes"&gt;Details here&lt;/a&gt;. TL;DR, use &lt;a href="https://github.com/thedodd/trunk"&gt;Trunk&lt;/a&gt; for building &amp;amp; bundling your app. It will handle compiling your scss/sass, which is what you will use for customizing Bulma.&lt;/li&gt;
&lt;/ul&gt;
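
&lt;p&gt;As a rough sketch of what usage might look like (the component names below come from the API docs linked above, but props should be double-checked against the current release), here is a minimal Yew app rendering a few YBC components:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use yew::prelude::*;

struct App;

impl Component for App {
    type Message = ();
    type Properties = ();

    fn create(_: Self::Properties, _: ComponentLink&amp;lt;Self&amp;gt;) -&amp;gt; Self {
        Self
    }

    fn update(&amp;amp;mut self, _: Self::Message) -&amp;gt; ShouldRender {
        false
    }

    fn change(&amp;amp;mut self, _: Self::Properties) -&amp;gt; ShouldRender {
        false
    }

    fn view(&amp;amp;self) -&amp;gt; Html {
        // The Bulma structure &amp;amp; required classes come from the YBC
        // components themselves; additional classes can be layered on
        // per component for an exact look and feel.
        html! {
            &amp;lt;ybc::Section&amp;gt;
                &amp;lt;ybc::Container&amp;gt;
                    &amp;lt;ybc::Title&amp;gt;{"Hello from YBC"}&amp;lt;/ybc::Title&amp;gt;
                &amp;lt;/ybc::Container&amp;gt;
            &amp;lt;/ybc::Section&amp;gt;
        }
    }
}

fn main() {
    yew::start_app::&amp;lt;App&amp;gt;();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;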

&lt;p&gt;Let me know what you think. The hope is that this crate will make building web apps with Rust WASM that much easier. Cheers!&lt;/p&gt;

</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>webdev</category>
      <category>bulma</category>
    </item>
    <item>
      <title>Trunk 0.4.0 | Layered config (Trunk.toml), JS Snippets &amp; release binaries</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Fri, 18 Sep 2020 20:17:18 +0000</pubDate>
      <link>https://dev.to/doddzilla/trunk-0-4-0-layered-config-trunk-toml-js-snippets-release-binaries-3n57</link>
      <guid>https://dev.to/doddzilla/trunk-0-4-0-layered-config-trunk-toml-js-snippets-release-binaries-3n57</guid>
      <description>&lt;p&gt;Build, bundle &amp;amp; ship your Rust WASM application to the web.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk"&gt;Github repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/releases/tag/v0.4.0"&gt;Release notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/blob/master/Trunk.toml"&gt;Example Trunk.toml&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pertinent Highlights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In addition to CLI arguments and options, Trunk now supports layered configuration via &lt;code&gt;Trunk.toml&lt;/code&gt; &amp;amp; environment variables.&lt;/li&gt;
&lt;li&gt;An example &lt;code&gt;Trunk.toml&lt;/code&gt; at the root of the repository shows all possible config values along with their defaults. Link above.&lt;/li&gt;
&lt;li&gt;JS snippets generated by wasm-bindgen are now fully supported by Trunk. They should have been supported since &lt;code&gt;0.1.0&lt;/code&gt;, but I overlooked that feature &lt;code&gt;:)&lt;/code&gt;. Docs for &lt;a href="https://rustwasm.github.io/docs/wasm-bindgen/reference/js-snippets.html"&gt;wasm-bindgen JS snippets&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Release binaries will now be uploaded to the Github release page for every future release. This is great for CI/CD. Brew formulae for Mac/Linux, and Choco package for Windows are in the works.&lt;/li&gt;
&lt;/ul&gt;
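
&lt;p&gt;As a quick illustrative sketch of the layered config (the example &lt;code&gt;Trunk.toml&lt;/code&gt; linked above is the authoritative reference for section and field names), a minimal config might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;[build]
target = "index.html"
release = false
dist = "dist"

[serve]
port = 8080
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;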

</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Announcing Trunk — Build, bundle &amp; ship your Rust WASM application to the web.</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Wed, 26 Aug 2020 03:53:45 +0000</pubDate>
      <link>https://dev.to/doddzilla/announcing-trunk-build-bundle-ship-your-rust-wasm-application-to-the-web-knf</link>
      <guid>https://dev.to/doddzilla/announcing-trunk-build-bundle-ship-your-rust-wasm-application-to-the-web-knf</guid>
      <description>&lt;p&gt;I am happy to announce the very first release of Trunk. Trunk is a CLI tool, written in Rust, which provides a simple, zero-config pattern for building Rust WebAssembly applications, bundling application assets (sass, css, images &amp;amp;c) and shipping it all to the web.&lt;/p&gt;

&lt;p&gt;Trunk is designed for creating progressive, single-page web applications, written in Rust and compiled to WebAssembly, without any JS (though today JS is still needed for loading WASM modules). Trunk follows a simple paradigm: declare an &lt;code&gt;index.html&lt;/code&gt; file describing the single page of your application, and Trunk will bundle the assets declared in your HTML in parallel, build your WASM app, and hash resources for cache control ... all without any extra config files.&lt;/p&gt;
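
&lt;p&gt;As a minimal sketch (the exact set of supported asset declarations is documented in the repo README), an &lt;code&gt;index.html&lt;/code&gt; for Trunk might be as simple as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;!-- Trunk compiles this sass/scss and hashes the output for cache control. --&amp;gt;
    &amp;lt;link rel="stylesheet" href="index.scss"/&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;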

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk/releases/tag/v0.1.0"&gt;Release notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thedodd/trunk"&gt;Github repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are interested in getting involved (which I hope you are), I would love for you to help out. There are lots of great features planned, and many still in the design phase. I hope you will stop by, give the issues a read, share any thoughts if you feel so inclined, and if you want to write some code, please do!&lt;/p&gt;

&lt;p&gt;“Pack your things, we’re going on an adventure!” ~ Ferris&lt;/p&gt;

</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>webdev</category>
      <category>web</category>
    </item>
    <item>
      <title>Announcing async-raft v0.5.0: Rebased onto Tokio &amp; Renamed to `async-raft`</title>
      <dc:creator>Anthony Dodd</dc:creator>
      <pubDate>Tue, 18 Aug 2020 03:07:53 +0000</pubDate>
      <link>https://dev.to/doddzilla/announcing-async-raft-v0-5-0-rebased-onto-tokio-renamed-to-async-raft-2f31</link>
      <guid>https://dev.to/doddzilla/announcing-async-raft-v0-5-0-rebased-onto-tokio-renamed-to-async-raft-2f31</guid>
      <description>&lt;p&gt;I am pleased to announce the release of &lt;code&gt;async-raft&lt;/code&gt; &lt;code&gt;v0.5.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The TLDR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everything Is Now Based on Tokio&lt;/li&gt;
&lt;li&gt;Singular &amp;amp; Well-Defined API (thanks to async/await, async_trait &amp;amp; tokio)&lt;/li&gt;
&lt;li&gt;Automatic Log Compaction&lt;/li&gt;
&lt;li&gt;Joint Consensus Overhaul&lt;/li&gt;
&lt;li&gt;Project Renamed to &lt;code&gt;async-raft&lt;/code&gt; (was previously actix-raft)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more info, here are the release notes &lt;a href="https://github.com/async-raft/async-raft/releases/tag/v0.5.0"&gt;https://github.com/async-raft/async-raft/releases/tag/v0.5.0&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>raft</category>
      <category>distributedsystems</category>
    </item>
  </channel>
</rss>
