<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cédric Fabianski</title>
    <description>The latest articles on DEV Community by Cédric Fabianski (@cfabianski).</description>
    <link>https://dev.to/cfabianski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F21303%2Fc98be1e0-569d-492b-9eea-5c848ae3c33f.jpeg</url>
      <title>DEV Community: Cédric Fabianski</title>
      <link>https://dev.to/cfabianski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cfabianski"/>
    <language>en</language>
    <item>
      <title>How Rust Lets Us Monitor 30k API calls/min</title>
      <dc:creator>Cédric Fabianski</dc:creator>
      <pubDate>Wed, 08 Jul 2020 08:40:51 +0000</pubDate>
      <link>https://dev.to/bearer/how-rust-lets-us-monitor-30k-api-calls-min-2n30</link>
      <guid>https://dev.to/bearer/how-rust-lets-us-monitor-30k-api-calls-min-2n30</guid>
      <description>&lt;p&gt;At Bearer, we are a polyglot engineering team. Both in spoken languages and programming languages. Our stack is made up of services written in Node.js, Ruby, Elixir, and a handful of others in addition to all the languages our agent library supports. Like most teams, we balance using the right tool for the job with using the right tool for the time. Recently, we reached a limitation in one of our services that led us to transition that service from Node.js to Rust. This post goes into some of the details that caused the need to change languages, as well as some of the decisions we made along the way.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;A bit of context&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;We are building a solution to help developers monitor their APIs. Every time a customer’s application calls an API, a log gets sent to us where we monitor and analyze it.&lt;/p&gt;

&lt;p&gt;At the time of the issue, we were processing an average of 30k API calls per minute. That's a lot of API calls made across all our customers. We split the process into two key parts: Log ingestion and log processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion-service---node.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion-service---node.jpg" alt="Original architecture with Node.js"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We originally built the ingestion service in Node.js. It would receive the logs, communicate with an elixir service to check customer access rights, check rate limits using Redis, and then send the log to CloudWatch. There, it would trigger an event to tell our processing worker to take over.&lt;/p&gt;

&lt;p&gt;We capture information about the API call, including the payloads (both the request and response) of every call sent from a user's application. These are currently limited to 1MB, but that is still a large amount of data to process. We send and process everything asynchronously and the goal is to make the information available to the end-user as fast as possible.&lt;/p&gt;
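&lt;p&gt;As a rough illustration of that 1MB cap (the constant and function names below are hypothetical, not our actual code), truncating an oversized payload before ingestion can be sketched like this:&lt;/p&gt;

```rust
// Illustrative cap on captured payload size. The 1MB limit comes from the
// article; the names here are hypothetical, not the real Bearer schema.
const MAX_PAYLOAD_BYTES: usize = 1024 * 1024;

fn truncate_payload(payload: &[u8]) -> &[u8] {
    // Keep at most MAX_PAYLOAD_BYTES so oversized request/response bodies
    // don't blow up downstream processing.
    &payload[..payload.len().min(MAX_PAYLOAD_BYTES)]
}

fn main() {
    let big = vec![0u8; 2 * 1024 * 1024];
    assert_eq!(truncate_payload(&big).len(), MAX_PAYLOAD_BYTES);

    let small: &[u8] = b"hello";
    assert_eq!(truncate_payload(small), small);
}
```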

&lt;p&gt;We hosted everything on AWS Fargate, a serverless management solution for Elastic Container Service (ECS), and set it to autoscale after 4000 req/min. Everything was great! Then, the invoice came 😱.&lt;/p&gt;

&lt;p&gt;AWS invoices based on CloudWatch storage. The more you store, the more you pay.&lt;/p&gt;

&lt;p&gt;Fortunately, we had a backup plan.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Kinesis to the rescue?&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Instead of sending the logs to CloudWatch, we would use &lt;a href="https://aws.amazon.com/kinesis/data-firehose/" rel="noopener noreferrer"&gt;Kinesis Firehose&lt;/a&gt;. Kinesis Firehose is roughly AWS's managed equivalent of Kafka: it delivers a data stream reliably to several destinations. With very few updates to our log processing worker, we were able to ingest logs from both CloudWatch and Kinesis Firehose. With this change, daily costs would drop to about 0.6% of what they were before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion---node_kinesis.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion---node_kinesis.jpg" alt="Architecture after adding Kenesis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The updated service now passed the log data through Kinesis into S3, which triggered the worker to take over the processing task. We rolled the change out and everything was back to normal... or so we thought. Soon after, we started to notice anomalies on our monitoring dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We were Garbage Collecting&lt;/strong&gt;, a lot. Garbage collection (GC) is a way for some languages to automatically free up memory that is no longer in use. When that happens, the program pauses. This is known as a &lt;em&gt;GC pause&lt;/em&gt;. The more writes you make to memory, the more garbage collection needs to happen and, as a result, the pause time increases. For our service, these pauses grew long enough to cause the servers to restart and put stress on the CPU. When this happens, it can look like the server is down—because it temporarily is—and our customers started to see 5xx errors for roughly 6% of the logs our agent was trying to ingest.&lt;/p&gt;

&lt;p&gt;Below we can see the pause time and pause frequency of the garbage collection:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fgc-pause.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fgc-pause.jpg" alt="GC pause and frequency charts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In some instances, the pause time breached &lt;strong&gt;4 seconds&lt;/strong&gt; (as shown on the left), with up to &lt;strong&gt;400 pauses per minute&lt;/strong&gt; (as shown on the right) across our instances.&lt;/p&gt;

&lt;p&gt;After some more research, it appeared we were another victim of a &lt;a href="https://github.com/aws/aws-sdk-js/issues/329" rel="noopener noreferrer"&gt;memory leak in the AWS JavaScript SDK&lt;/a&gt;. We tried increasing the resource allocations to extreme amounts, like autoscaling after 1000 req/min, but nothing worked.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Possible solutions&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;With our backup plan no longer an option, we moved on to new solutions. First, we looked at those with the easiest transition path.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Elixir&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, we are checking the customer access rights using an Elixir service. This service is private and only accessible from within our Virtual Private Cloud (VPC). We have never experienced any scalability issues with this service and most of the logic was already there. We could simply send the logs to Kinesis from within this service and skip over the Node.js service layer. We decided it was worth a try.&lt;/p&gt;

&lt;p&gt;We developed the missing parts and tested them. It was better, but still not great. Our benchmarks showed that there were still high levels of garbage collection, and we were still returning 5xx errors to our users when consuming the logs. At that point, the heavy load triggered a &lt;a href="https://github.com/benoitc/hackney/issues/594" rel="noopener noreferrer"&gt;(now resolved) issue&lt;/a&gt; with one of our Elixir dependencies.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Go&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;We considered Go as well. It would have been a good candidate, but in the end, it is another garbage-collected language. While likely more efficient than our previous implementation, as we scale there is a high chance we'd run into similar problems. With these limitations in mind, we needed a better option.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Re-architecting with Rust at the core&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;In both our original implementation and our backup, the core issue remained the same: garbage collection. The solution was to move to a language with better memory management and no garbage collection. Enter Rust.&lt;/p&gt;

&lt;p&gt;Rust isn't a garbage-collected language. Instead, it relies on a concept called &lt;em&gt;ownership&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ownership is Rust’s most unique feature, and it enables Rust to make memory safety guarantees without needing a garbage collector.&lt;br&gt;&lt;br&gt;
— &lt;a href="https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html" rel="noopener noreferrer"&gt;The Rust Book&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ownership is the concept that often makes Rust difficult to learn and write, but also what makes it so well suited for situations like ours. Each value in Rust has a single owner variable and, as a result, a single point of allocation in memory. Once that variable goes out of scope, the memory is immediately returned.&lt;/p&gt;
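&lt;p&gt;A minimal sketch of what that means in practice (illustrative only): the &lt;code&gt;String&lt;/code&gt; below is moved into the function, which becomes its sole owner, and its heap buffer is freed deterministically when it goes out of scope, with no garbage collector involved.&lt;/p&gt;

```rust
// Minimal ownership sketch: one owner per value, deterministic cleanup.

fn take_ownership(s: String) -> usize {
    // `s` is moved into this function; it is now the sole owner.
    s.len()
    // `s` goes out of scope here and its heap buffer is freed immediately,
    // with no garbage collector involved.
}

fn main() {
    let log = String::from("api call payload");
    let n = take_ownership(log);
    // `log` was moved; using it here would be a compile-time error:
    // println!("{}", log); // error[E0382]: borrow of moved value
    assert_eq!(n, 16);
}
```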

&lt;p&gt;Since the code required to ingest the logs is quite small, we decided to give it a try. To test this we addressed the very thing that we had issues with—sending large amounts of data to Kinesis.&lt;/p&gt;

&lt;p&gt;Our first benchmarks proved to be very successful.&lt;/p&gt;

&lt;p&gt;From that point, we were pretty confident that Rust could be the answer and we decided to flesh out the prototype into a production-ready application.&lt;/p&gt;

&lt;p&gt;Over the course of these experiments, rather than directly replacing the original Node.js service with Rust, we restructured much of the architecture surrounding log ingestion. The core of the new service is an &lt;a href="https://www.envoyproxy.io/" rel="noopener noreferrer"&gt;Envoy&lt;/a&gt; proxy with the Rust application as a sidecar.&lt;/p&gt;

&lt;p&gt;Now, when the Bearer Agent in a user's application sends log data to Bearer, it goes into the Envoy proxy. Envoy looks at the request and communicates with Redis to check things like rate limits, authorization details, and usage quotas. Next, the Rust application running alongside Envoy prepares the log data and passes it through Kinesis into an S3 bucket for storage. S3 then triggers our worker to fetch and process the data so Elasticsearch can index it. At this point, our users can access the data in our dashboard.&lt;/p&gt;
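&lt;p&gt;The rate-limit check in that flow can be illustrated with a simplified, in-memory fixed-window sketch. Our production check lives in Redis behind Envoy; the types and windowing logic below are only an illustration, not our actual implementation.&lt;/p&gt;

```rust
use std::collections::HashMap;

/// Illustrative fixed-window rate limiter. The production check uses
/// Redis; this in-memory version only shows the windowing logic.
struct RateLimiter {
    limit: u32,
    window_secs: u64,
    // (client id, window index) -> request count in that window
    counts: HashMap<(String, u64), u32>,
}

impl RateLimiter {
    fn new(limit: u32, window_secs: u64) -> Self {
        Self { limit, window_secs, counts: HashMap::new() }
    }

    /// Returns true if the request at `now` (unix seconds) is allowed.
    fn allow(&mut self, client: &str, now: u64) -> bool {
        let window = now / self.window_secs;
        let count = self.counts.entry((client.to_string(), window)).or_insert(0);
        *count += 1;
        *count <= self.limit
    }
}

fn main() {
    let mut rl = RateLimiter::new(2, 60);
    assert!(rl.allow("client-a", 0));
    assert!(rl.allow("client-a", 10));
    assert!(!rl.allow("client-a", 20)); // third call in the same window is rejected
    assert!(rl.allow("client-a", 61)); // new window, allowed again
}
```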

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion---rust.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2FLog-ingestion---rust.jpg" alt="Diagram of new rust service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we found was that with fewer—and smaller—servers, we are able to process even more data without any of the earlier issues.&lt;/p&gt;

&lt;p&gt;If we look at the latency numbers for the Node.js service, we can see peaks with an average response time nearing 1700ms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fbefore-latency.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fbefore-latency.png" alt="Latency with original Node.js service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the Rust service implementation, the latency dropped to below 90ms, even at its highest peak, keeping the average response time below 40ms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fafter-latency.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.bearer.sh%2Fcontent%2Fimages%2F2020%2F06%2Fafter-latency.png" alt="Latency after re-architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The original Node.js application used about 1.5GB of memory at any given time, with CPU load around 150%. The new Rust service uses about 100MB of memory and only about 2.5% CPU.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;As with most startups, we move fast. Sometimes the best solution at the time isn't the best solution forever. This was the case with Node.js. It allowed us to move forward, but as we grew we also outgrew it. As we started to handle more and more requests, we needed to make our infrastructure evolve to address the new requirements. While this process started with a fix that merely replaced Node.js with Rust, it led to a rethinking of our log ingestion service as a whole.&lt;/p&gt;

&lt;p&gt;We still use a variety of languages throughout our stack, including Node.js, but will now consider Rust for new services where it makes sense.  &lt;/p&gt;

</description>
      <category>rust</category>
      <category>api</category>
      <category>monitoring</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to make remote a success</title>
      <dc:creator>Cédric Fabianski</dc:creator>
      <pubDate>Tue, 14 Jan 2020 15:44:10 +0000</pubDate>
      <link>https://dev.to/bearer/how-to-make-remote-a-success-16cl</link>
      <guid>https://dev.to/bearer/how-to-make-remote-a-success-16cl</guid>
      <description>&lt;p&gt;A couple of weeks ago, I attended the P9 Founder Summit in Malta organized by &lt;a href="http://www.pointninecap.com/"&gt;Point Nine Capital&lt;/a&gt;. One of the workshops was about remote work.&lt;/p&gt;

&lt;p&gt;Soon after, I led a webinar for the #p9family and thought I would share our experience at Bearer thus far.&lt;/p&gt;


&lt;h1&gt;It's all about sharing and communicating&lt;/h1&gt;

&lt;p&gt;When you are small and in the same office, it's pretty easy to be aware of everything. The whole team spends a lot of time hanging out together—having lunch, grabbing a coffee, having discussions, and making decisions very quickly. &lt;strong&gt;As you scale up, more discussions take place, more decisions get made, and it becomes harder to keep up.&lt;/strong&gt; You might even become successful enough that your office expands to a second floor or even a new branch in a different country.&lt;/p&gt;

&lt;p&gt;At this point, processes are put in place to improve that internal communication. They are best practices that everybody ends up doing, but often after the fact. &lt;strong&gt;When you are remote-first, those processes need to be put in place from day one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I've been lucky enough to be a remote worker for most of my career. Even my first internship was as a remote worker! My last experience at &lt;a href="https://www.freeagent.com/"&gt;FreeAgent&lt;/a&gt; taught me a lot, and I'm applying many of those insights here at Bearer.&lt;/p&gt;

&lt;h1&gt;Remote as a first-class citizen&lt;/h1&gt;

&lt;p&gt;As Co-Founder and CTO, I am myself remote most of the time. I visit the Paris offices for customer meetings, board meetings, conferences, and so on, about 2 days every 2 weeks. I think it's very important for people to see that remote is something we really believe in, and to view us as an example of how it can be done right. I attended &lt;a href="https://www.tech.rocks/"&gt;TechRocks&lt;/a&gt;, and during one of the talks, &lt;a href="https://twitter.com/juliendollon"&gt;Julien Dollon&lt;/a&gt;, Director of Engineering at Oracle, gave this tip:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you want remote to work, start by cutting off the head&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For him, this meant being the first to work remotely, while the rest of the team stayed in the office.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When the chief is remote, all of a sudden everybody is remote and everybody starts writing down everything. What used to be ephemeral and on a whiteboard became written down and stored.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's what I saw at FreeAgent too and I think that's an important part of the process that shouldn't be neglected.&lt;/p&gt;

&lt;h1&gt;Write down everything!&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Select a shared language.&lt;/strong&gt; For us, that's English. All public communication needs to be in English. Even as French founders, this is something we have done from day one and that means that anybody can look back at why we've made a particular decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Requests for Comments (RFCs).&lt;/strong&gt; The Open Source community has a lot of experience in being remote-first. The need to share information while working asynchronously from different timezones is no secret. Every decision made within the project needs to be documented and approved. When someone looks back on it later, they can understand what led to a particular technical decision. Here are a few examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/reactjs/rfcs"&gt;reactjs/rfcs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rust-lang/rfcs"&gt;rust-lang/rfcs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/yarnpkg/rfcs"&gt;yarnpkg/rfcs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Build a Knowledge Base.&lt;/strong&gt; Having a meeting? Learning something new? Retrospective? How-Tos? These need to be documented and shared! My not-so-secret goal is that ultimately, everything that is in the knowledge base will be turned into a Blog Post like what we did with &lt;a href="https://www.bearer.sh/blog/understanding-auth-part-1-what-is-oauth?utm_source=dev.to&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=how-to-make-remote-success"&gt;Understanding Auth Part 1: What is OAuth 2.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Every discussion longer than 10 messages in Slack needs to trigger a call in Zoom and a summary in Slack or the Knowledge Base.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define your Culture, Vision &amp;amp; Mission.&lt;/strong&gt; Every employee should be able to easily find the Culture, the Vision, and the Mission of the company. It keeps everyone working toward the same, shared goal and reminds us all what matters most as a company.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9a5zozkY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9s5tks9ttpj74ifj07fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9a5zozkY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9s5tks9ttpj74ifj07fm.png" alt="Bearer team board's on Notion.so"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Organize Weekly Demos.&lt;/strong&gt; Every Thursday at 5 p.m. CET we gather together and answer these questions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- What are the numbers this week?
- What has been done company-wise?
- What has been done product-wise?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's recorded and pushed to YouTube. If someone is off, in a timezone that makes it harder to join, or would like to come back to it later, they can watch it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Send Out Weekly Notes.&lt;/strong&gt; At the beginning of the week, send out a note to the team that recaps the week before and the goals for the coming week. Use the weekly demo as a foundation, and cover the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- News of the week
- Goals of the week
- Roadmap this week
- Done last week
- Team event (who is off, who is where)
- Link to the recording from the weekly Demos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;Make everyone feel connected&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Connect in person.&lt;/strong&gt; As a remote-first company, we spend most of our time reading our colleagues rather than hearing them. Some team members aren't native English speakers, and sometimes if you don't know the person well it can be easy to misinterpret the intent from their writing alone.&lt;/p&gt;

&lt;p&gt;That's why we believe in getting together in-person at off-sites. The last one took place in Portugal.&lt;/p&gt;


&lt;p&gt;This also means that you shouldn't expect to spend less money by setting up a remote company if you want to do it right.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Share a coffee break.&lt;/strong&gt; When you are all in an office, you have those coffee machine moments. You discuss things like what you did last weekend, what you are up to outside of work, and so on. These moments are an important part of getting to know one another. At Bearer, we put one on the calendar each week, but I also encourage the team to open up a meeting in our #coffee channel whenever they like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conduct smarter meetings.&lt;/strong&gt; Meetings are expensive. Not just in the time they take, but also in the disruption of everyone's workflow before and after. Make sure to use this time wisely.&lt;/p&gt;

&lt;p&gt;Before a meeting, send out an agenda to give everyone a chance to contribute. Whether that is an RFC, a Google Doc, or a Notion note, there needs to be something sent beforehand. After the meeting, summarize the outcome and list the actions that should come from it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perform daily check-ins/check-outs.&lt;/strong&gt; It's important to share your progress with the team. Usually, that happens during a daily standup, and the two aren't mutually exclusive: some squads at Bearer also catch up every morning. What started as a squad initiative is now part of the Product &amp;amp; Engineering routine. We use Geekbot so each team member can answer the following questions each day:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- How did you feel today?
- What have you done today?
- What's your plan for tomorrow?
- Have you been blocked by anything?
- What have you learned today?
- How efficient did you feel today? (1-5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These questions help us measure not only productivity but also the spirit and morale within the Team. We also encourage people to publish their learnings of the day by answering the question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What have you learned today?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;Tools we use&lt;/h1&gt;

&lt;p&gt;Like other remote companies, we rely on many tools to help with all of this. Here are a few that we've had success with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zoom.us/"&gt;Zoom&lt;/a&gt; and &lt;a href="https://slack.com/intl/fr-fr/?eu_nc=1"&gt;Slack&lt;/a&gt; are a great foundation to start with. You can trigger a call within Slack using this command &lt;code&gt;/zoom&lt;/code&gt; and jump directly into a call for a pairing session, right from where you are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://geekbot.com/"&gt;Geekbot&lt;/a&gt; to trigger the checkin/checkout and to get a feeling about how the team is doing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.notion.so/"&gt;Notion&lt;/a&gt; is basically used as an entry point at Bearer. Everything you need to know is already there. &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt; is used for our code and as such is a great tool to keep track of the technical decisions (RFCs).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.geckoboard.com/"&gt;Geckoboard&lt;/a&gt; and &lt;a href="https://redash.io/"&gt;Redash&lt;/a&gt; are great to share the metrics to the Team. It's important that everybody has access to the same level of information that drives business decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://miro.com/"&gt;Miro&lt;/a&gt; is a great replacement for whiteboard and has the advantage that you can get back to it at any point without the fear of erasing somebody's else diagram by mistake.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apps.apple.com/us/app/clocker/id1056643111?mt=12"&gt;Clocker&lt;/a&gt; is a menu bar on macOS and that shows you what time it is in everyone's locations.&lt;/p&gt;

&lt;p&gt;We're also considering a few more, like &lt;a href="https://officevibe.com/"&gt;OfficeVibe&lt;/a&gt; and &lt;a href="https://www.15five.com/"&gt;15Five&lt;/a&gt;, to measure Team happiness, which is really important to us.&lt;/p&gt;

&lt;h1&gt;There's always more that can be done&lt;/h1&gt;

&lt;p&gt;We are only at the beginning of our journey. We will surely make mistakes along the way, but we will continue to learn from them.&lt;/p&gt;

&lt;p&gt;Something that is working today may not work tomorrow as your company grows. You will need to adapt, and I'll continue to share our processes and commitments related to remote work here at Bearer.&lt;/p&gt;

&lt;p&gt;For instance, right now, we are mainly in the European Timezone and we ask people to have at least 6 hours of overlap with it. That's a decision we've made intentionally to avoid loneliness and isolation. We also feel like that's the best way to be effective, at least at our stage. This might change later as more team members join in other timezones.&lt;/p&gt;




&lt;p&gt;Share your remote tips in the comment below 👇.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PS: This post was originally published on &lt;a href="https://www.bearer.sh/blog/how-to-make-remote-a-success?utm_source=dev.to&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=how-to-make-remote-success"&gt;Bearer blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>remote</category>
      <category>learning</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>How we got 900 applications on our Developer Position</title>
      <dc:creator>Cédric Fabianski</dc:creator>
      <pubDate>Tue, 29 Oct 2019 16:41:42 +0000</pubDate>
      <link>https://dev.to/bearer/how-we-got-900-applications-on-our-developer-position-3oif</link>
      <guid>https://dev.to/bearer/how-we-got-900-applications-on-our-developer-position-3oif</guid>
      <description>&lt;p&gt;Bearer is a remote-first company; we have always found this to be a strength, for many reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hiring is one of them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Thus far, we have &lt;strong&gt;hired, mainly opportunistically, by networking&lt;/strong&gt;. This method worked well up to the point when we needed to expand our circle and hire from the outside.&lt;/p&gt;

&lt;h2&gt;From 1 to 10&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://twitter.com/g_montard" rel="noopener noreferrer"&gt;Guillaume&lt;/a&gt; and I have been in the industry for a while now; we've met many developers. That's providing us an &lt;strong&gt;unfair advantage&lt;/strong&gt;. When we open a position, we naturally discuss it with those within our network. That's how we got to be a team of 10!&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.bearer.sh/about%23jobs" rel="noopener noreferrer"&gt;jobs&lt;/a&gt; page we created in the beginning has also helped us source outside talent. Additionally, we also tweet about opportunities on the &lt;a href="https://twitter.com/BearerSH" rel="noopener noreferrer"&gt;Bearer&lt;/a&gt; Twitter account. From time to time, we even receive emails from people, prospecting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5gq7fe8lknsrdkklf4j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5gq7fe8lknsrdkklf4j4.png" alt="Example of tweets to hire on opportunity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Up to this point, we've only been onboarding Senior Engineers because we felt like we didn't have the structure to support a Junior Engineer. In the past few months, we've made a lot of progress and it was time to take a step back and re-evaluate this position. We've just &lt;strong&gt;rebuilt our dashboard from the ground up&lt;/strong&gt; (built with TypeScript, ReactJS, GraphQL) and &lt;strong&gt;the foundations are rock solid&lt;/strong&gt; (we will blog about this 😉). The website has also proven to be future-proof and easy to maintain. We were ready to onboard a new Engineer and this time it would be a Junior!&lt;/p&gt;

&lt;p&gt;We published a job offer for a Junior/Mid-Level, Frontend Engineer and were inundated by applications.&lt;/p&gt;

&lt;h2&gt;10 and beyond&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fzh9lqg86t6lbxkywvw5e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fzh9lqg86t6lbxkywvw5e.jpg" alt="Hiring from 10... and beyond"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We got in touch with the amazing people at &lt;a href="https://remotive.io" rel="noopener noreferrer"&gt;remotive.io&lt;/a&gt; and were convinced that their platform would be perfect for us as a remote-first company. To increase our chances of finding the perfect gem, we decided to post the opportunity on &lt;a href="https://jobs.github.com" rel="noopener noreferrer"&gt;GitHub Jobs&lt;/a&gt; and &lt;a href="https://jsremotely.com" rel="noopener noreferrer"&gt;JSRemotely&lt;/a&gt; too.&lt;/p&gt;

&lt;p&gt;Things were simple: I created an alias to receive the job applications. The alias forwarded enquiries to &lt;a href="https://twitter.com/tanguyantoine" rel="noopener noreferrer"&gt;Antoine&lt;/a&gt;, our Senior Frontend Engineer, and me; he was in charge of the first steps: selection and the first interview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We understood pretty quickly that we were not prepared enough.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After 3 days, we had received 250+ applications; after 1 week, 500+; and after 2 weeks, we were at ~900. And that's without counting the LinkedIn and Twitter messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe7dsoc20lw0f420r069l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe7dsoc20lw0f420r069l.png" alt="Gmail - Frontend Applications"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Time to investigate some real processes
&lt;/h2&gt;

&lt;p&gt;After a couple of days, Antoine and I spoke; we were both excited and overwhelmed. We knew that &lt;strong&gt;we couldn't review every application. We needed help.&lt;/strong&gt;&lt;br&gt;
A week later, &lt;strong&gt;I'd put together a survey using Typeform&lt;/strong&gt;. It asked some of the questions we usually ask during an interview:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do you know Gatsby, ReactJS, Styled Components, GraphQL, TypeScript, Tailwind CSS, ...?&lt;/li&gt;
&lt;li&gt;Are you willing to work in the European Timezone?&lt;/li&gt;
&lt;li&gt;What's your GitHub handle?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiybpemkktjbwo2ojm1yr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiybpemkktjbwo2ojm1yr.png" alt="Example of questions in the Typeform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then found a way to send this Typeform to all the existing applicants as well as the new ones. Within a week, responses started coming in, which helped us a lot.&lt;/p&gt;

&lt;p&gt;My mailbox filled up with messages like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've filled the Survey&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thanks for getting in touch&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But at least we could keep track of them all and filter applicants against our needs.&lt;/p&gt;

&lt;p&gt;In total, &lt;strong&gt;we got 430 responses&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The standard of applicants was impressive&lt;/strong&gt;, and we were lucky enough to make the final choice from among these amazing candidates, &lt;strong&gt;from all over the world!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reply to everyone&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One thing I've noticed is that there is nothing more frustrating than not being given feedback after submitting an application. We tried to reply as quickly as possible: with the survey after 1 week, and with rejection emails after 2.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get ready for applications&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set up a Typeform; it will be your first line of defense. Direct candidates straight to it to answer your initial questions. Have the replies sent directly to something like Google Sheets or Airtable (Typeform has an integration for both).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Share and be transparent&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have a blog; our &lt;a href="https://blog.bearer.sh/building-a-remote-multi-cultural-first-team/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; sharing how we work and our company culture has been viewed quite a lot in the past few days. Sharing and being transparent matters to candidates: they want to know as much as possible about you before they apply.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Remote is great to get the best candidates&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are still relatively unknown, and yet we received quite a lot of applications. We've been really impressed by the level of engagement of some candidates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Working on a developer tool definitely helps&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most candidates were really interested in joining because we were building a developer tool that they found interesting. Some got back to us with examples of applications they had built with Bearer, using the &lt;a href="https://www.bearer.sh/integrations/32/google-sheets-api" rel="noopener noreferrer"&gt;Google Sheets API&lt;/a&gt; for instance. This is definitely an unfair advantage for attracting talented developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Out of this, we hired &lt;a href="https://twitter.com/Rien_Coertjens" rel="noopener noreferrer"&gt;Rien Coertjens&lt;/a&gt;, initially for a short, specific mission. We have been very happy with his work and decided to hire him full-time.&lt;/p&gt;

&lt;p&gt;Of course, our process is far from perfect; we may have rejected some great candidates who would have been fantastic fits for us. Some applicants may not have understood why they didn't make it to the next round, and I'm really sorry about this. We had to make some tough decisions, but it simply wasn't manageable otherwise.&lt;/p&gt;

&lt;p&gt;But...&lt;/p&gt;

&lt;p&gt;We will be opening more developer positions in the future.&lt;/p&gt;

&lt;p&gt;Discuss this article on Twitter and ping us &lt;a href="https://twitter.com/BearerSH" rel="noopener noreferrer"&gt;@BearerSH&lt;/a&gt;&lt;/p&gt;

</description>
      <category>remote</category>
      <category>developer</category>
      <category>hiring</category>
    </item>
  </channel>
</rss>
