<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Superbet Engineering</title>
    <description>The latest articles on DEV Community by Superbet Engineering (@superbet).</description>
    <link>https://dev.to/superbet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5046%2F70b71540-3b2b-4a1b-8be7-e9ac4d53ae55.png</url>
      <title>DEV Community: Superbet Engineering</title>
      <link>https://dev.to/superbet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/superbet"/>
    <language>en</language>
    <item>
      <title>Reflecting on my post-Google experience so far</title>
      <dc:creator>ivanklaric</dc:creator>
      <pubDate>Thu, 13 Jan 2022 15:01:00 +0000</pubDate>
      <link>https://dev.to/superbet/reflecting-on-my-post-google-experience-so-far-o31</link>
      <guid>https://dev.to/superbet/reflecting-on-my-post-google-experience-so-far-o31</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; I left Google two years ago to join &lt;a href="https://superbet.engineering"&gt;Superbet&lt;/a&gt;, a rapidly growing European sports betting business. I first discuss why I made the call, share a bit of what I learned about betting, and finally reflect on my experience so far.&lt;/p&gt;

&lt;h2&gt;Let’s first address the elephant in the room&lt;/h2&gt;

&lt;p&gt;Betting is still taboo in most of the world, so I had to do some introspection about whether I wanted to be part of that world. On one hand, most human activity has side effects (aka &lt;a href="https://en.wikipedia.org/wiki/Externality"&gt;externalities&lt;/a&gt;), so it’s easy to dismiss gambling as yet another business with negative externalities. On the other hand, I didn’t want to dismiss those externalities without exploring them more deeply.&lt;/p&gt;

&lt;p&gt;As with most values-based decisions, this is primarily a personal question, so I don’t think my judgement can be universally applied, but my thought process might resonate with some, so I thought it worth sharing. I realized that I wouldn’t have qualms working in other industries with significant negative externalities such as oil &amp;amp; gas, the auto industry, alcohol producers, the pharma industry, or the tech giants. Most societies today don’t do a great job of compensating for the negative externalities of these industries, but the situation is improving — our cars are greener, alcoholism treatment and awareness is increasingly available, and even big tech is getting closer scrutiny these days.&lt;/p&gt;

&lt;p&gt;Betting and gambling are heavily regulated and taxed, with increasingly strict responsible gambling rules and limits. So how big a problem is problematic gambling? &lt;a href="https://en.wikipedia.org/wiki/Problem_gambling#Europe"&gt;This Wikipedia article&lt;/a&gt; suggests roughly 0.5–1% of the population exhibits problematic gambling behaviour. &lt;a href="https://www.imedpub.com/articles/incidence-of-problem-gambling-inromania-brief-report.php?aid=18557"&gt;This survey&lt;/a&gt; puts the number for Romania (Superbet’s home turf) at around 0.6%. Alcoholics account for around 5% of the population according to &lt;a href="https://en.wikipedia.org/wiki/Alcoholism#Epidemiology"&gt;this other Wikipedia article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Given that preventing access to gambling for problematic players is not only doable, but mandated by law and increasingly strictly regulated, I decided I’m fine with making the call.&lt;/p&gt;

&lt;h2&gt;The betting software ecosystem is surprisingly complex&lt;/h2&gt;

&lt;p&gt;The first thing that overwhelmed me when I joined was the sheer complexity of sports betting, so I thought it might be useful to shed some light on that. The basic idea of a betting business is one of mediation — in an ideal scenario the betting operator is just an intermediary between two punters who want to bet on the same event. The odds reflect demand/supply for each of the outcomes, minus the operator’s margin.&lt;/p&gt;
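&lt;p&gt;As a hypothetical sketch of that last point (the function name and the 5% margin are illustrative, not Superbet’s actual pricing code), turning fair outcome probabilities into quoted decimal odds might look like this:&lt;/p&gt;

```python
def price_market(probabilities, margin=0.05):
    """Scale fair probabilities up by the margin (the "overround"),
    then quote decimal odds as the inverse of each adjusted probability."""
    total = sum(probabilities.values())
    return {
        outcome: round(1.0 / (p / total * (1.0 + margin)), 2)
        for outcome, p in probabilities.items()
    }

# A 1X2 football market with fair probabilities summing to 1.
fair = {"home": 0.50, "draw": 0.27, "away": 0.23}
odds = price_market(fair)  # home quoted at 1.90 rather than the fair 2.00
```

&lt;p&gt;The implied probabilities of the quoted odds sum to roughly 1.05 rather than 1.00; that extra 5% is the operator’s margin.&lt;/p&gt;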

&lt;p&gt;In practice, you don’t always live in an ideal scenario (e.g. the event is not prominent so supply or demand are not liquid enough), so providing a smooth betting experience regardless is harder than I expected. Also, the distribution of customer interest for betting events is a power law, making for very spiky customer traffic patterns.&lt;br&gt;
The software ecosystem dealing with all this complexity can be organized in three major product areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trading/Offer side&lt;/strong&gt;: the part of the ecosystem that ingests sports and betting market data and builds the betting offer through offer management and risk tooling. An ops team of 100+ people (called traders, because determining the odds is essentially a supply/demand problem) uses these tools to make sure Superbet has the broadest, deepest, and best-priced betting offer on the market.&lt;br&gt;
A Google Search equivalent of this would roughly be &lt;a href="https://developers.google.com/search/docs/beginner/how-search-works#indexing"&gt;Indexing&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Betting side&lt;/strong&gt;: the transactional, customer-facing part of the system, which includes the betting client apps and the systems enabling the betting ticket lifecycle.&lt;br&gt;
This roughly corresponds to &lt;a href="https://developers.google.com/search/docs/beginner/how-search-works#serving"&gt;Serving&lt;/a&gt; in Search.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Operations side&lt;/strong&gt;: running a multi-channel, multi-country business of this scale requires a lot of operations support tooling that enables us to operate the retail betting shop network, handle customer support issues, run marketing and CRM ops, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;My experience so far&lt;/h2&gt;

&lt;p&gt;While our &lt;a href="https://en.wikipedia.org/wiki/Active_users"&gt;DAU&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/Queries_per_second"&gt;QPS&lt;/a&gt; metrics are nowhere near Google scale, the complexity of problems being solved is comparable, especially when you take into account that you have to do a lot of things from scratch, without relying on the massive Google infrastructure.&lt;/p&gt;

&lt;p&gt;The main thing I loved about my new team was how easy it was to try out new things — ideas, tools, processes, or ways of working. While Google is probably reasonably fast for its size, this is just in a different ballpark. We talk about something, decide to try it, and start doing it next week.&lt;/p&gt;

&lt;p&gt;While the nimbleness of a smaller team is inspiring, some things just work much better at scale. Most Googlers don’t have to know how to correctly configure or manage a Borg cell, a distributed in-memory cache, or a large database system. Once you solve the quota issues, you can count on the system working without knowing a lot of internal SRE magic (modulo the occasional cell deprecation or maintenance ;-) ).&lt;/p&gt;

&lt;p&gt;We mostly had to figure these things out on our own and learn far more details than we planned to. This doesn’t only apply to engineering practices. We also had to figure out what an engineering career ladder should look like, introduce design docs and postmortems, and work out oncall and escalation procedures, quota planning, etc.&lt;/p&gt;

&lt;p&gt;My initial thinking was to just do it the Google way, but we quickly learned that while most of Google’s ideas are generally applicable, the devil really is in the details, and there’s no one-size-fits-all approach that works across organizations of different sizes, cultures, and industries.&lt;/p&gt;

&lt;h2&gt;So, what’s next?&lt;/h2&gt;

&lt;p&gt;It is a cliché, but things are just getting interesting. On the one hand, Superbet is expanding its operations geographically, providing a steady stream of localization, integration, and migration challenges. On the other, we have a ton of exciting product ideas we want to try out.&lt;/p&gt;

&lt;p&gt;We see ourselves primarily as a tech company working in the betting industry, rather than a betting company. Thanks to the heavy investment in our software ecosystem, we can try out new product ideas and pivot as we learn more. This is an industry dominated by major players using off-the-shelf B2B solutions, so we think fully owning the whole customer experience is our long-term competitive advantage.&lt;/p&gt;

&lt;p&gt;Our next big organizational change is the pivot towards self-sufficient and empowered &lt;a href="https://svpg.com/product-vs-feature-teams/"&gt;product teams&lt;/a&gt;. This will open a lot of interesting engineering challenges. For example, it’ll be crucial to figure out how to build platforms that are easy to contribute to, yet rock solid in terms of stability and performance. Product teams will also require a mindset shift; encouraging engineers to get actively involved in feature discovery, hypothesis testing, and delivery is radically different from what they’re used to.&lt;br&gt;
We’re on a long learning curve. Feel free to reach out if it sounds interesting ;-)&lt;/p&gt;

</description>
      <category>career</category>
      <category>google</category>
      <category>superbet</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Serverless at Superbet</title>
      <dc:creator>Justin</dc:creator>
      <pubDate>Mon, 20 Dec 2021 09:18:23 +0000</pubDate>
      <link>https://dev.to/superbet/serverless-at-superbet-7em</link>
      <guid>https://dev.to/superbet/serverless-at-superbet-7em</guid>
      <description>&lt;p&gt;&lt;em&gt;TL;DR&lt;/em&gt; - We've used &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt; for sports modelling at &lt;a href="https://superbet.ro/"&gt;Superbet&lt;/a&gt; for over three years now. This is the story of how we came to use Lambda, and why we're likely to extend our usage of it in the near future. AWS managed services offer incredible cost and productivity benefits, and dev teams should be aggressively exploring how they could be used in different areas.&lt;/p&gt;




&lt;p&gt;I thought I would talk about the use of serverless computing at Superbet.&lt;/p&gt;

&lt;p&gt;In general, Superbet is an &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; + &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; + &lt;a href="https://kafka.apache.org/"&gt;Kafka&lt;/a&gt; shop, as befits an organisation which serves hundreds of thousands of customers. I've heard Kubernetes described as an organisational pattern that allows large companies to ship products at scale, and I think that's a useful definition. But development at Superbet isn't a monoculture - within the overarching AWS framework, some groups use different technologies and stacks, as their particular use case warrants. That includes the modelling group, where we've used AWS Lambda successfully since 2018.&lt;/p&gt;

&lt;p&gt;So how did we end up using serverless technologies, and what has our experience been like?&lt;/p&gt;




&lt;p&gt;I should talk a little about what serverless computing means, and also what it is not. &lt;/p&gt;

&lt;p&gt;The first thing is that it doesn't mean there are no servers - that would be ridiculous! Lambda uses servers just like &lt;a href="https://aws.amazon.com/ec2/"&gt;EC2&lt;/a&gt; and other AWS compute frameworks do, in fact they probably sit alongside one another in an AWS data centre somewhere. The difference lies in the fact that whilst EC2 makes you responsible for managing your servers, AWS will &lt;em&gt;manage your Lambda instances for you.&lt;/em&gt; Specifically, if you send a request to Lambda, AWS will &lt;em&gt;spin up an instance for you, process your request, and then shut that instance down&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Think about that for a moment.&lt;/p&gt;

&lt;p&gt;It's quite different from a standard server, which will need to exist before you send it a request, and which will continue to exist after that request has been served (as long as you don't crash it!). That small difference has quite profound consequences for application development, however. We'll get into the pros and cons of this approach shortly, but for now, simply note that a Lambda is &lt;em&gt;off by default&lt;/em&gt; - nothing exists until you actually send the service a request. &lt;/p&gt;
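&lt;p&gt;For readers who haven't used Lambda, the programming model is just a function that receives an event. A minimal Python handler looks like the sketch below; the pricing payload is a made-up placeholder, not our production code.&lt;/p&gt;

```python
import json

def handler(event, context):
    # Lambda calls this function once per invocation; no process of
    # yours exists until an event actually arrives.
    match_id = event.get("match_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"match_id": match_id, "priced": True}),
    }
```

&lt;p&gt;Everything outside the handler (imports, model loading) runs once per container start, which is exactly where the cold-start cost discussed later in the post comes from.&lt;/p&gt;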




&lt;p&gt;In 2016 I co-founded a startup called ioSport, focused on providing "algorithms as a service" for the sports betting industry.&lt;/p&gt;

&lt;p&gt;The idea was that a third party, particularly one with experience of financial derivatives and options trading, might be able to provide bookmakers with better product pricing and design than they had access to internally. We were lucky enough to be introduced to &lt;a href="https://www.linkedin.com/in/sacha-dragic-a372089/?originalSubdomain=ro"&gt;Sacha Dragic&lt;/a&gt;, who felt we would be a good fit for his then-fledgling online business, and in 2019 Superbet acquired the ioSport team.&lt;/p&gt;

&lt;p&gt;When we joined, we found the tech landscape inside Superbet to be very different from what we had formerly used as an independent B2B supplier. ioSport had invested heavily in &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; for core modelling, and &lt;a href="https://www.erlang.org/"&gt;Erlang&lt;/a&gt; for the engineering around those models. Superbet were starting from scratch with a Kafka, &lt;a href="https://docs.docker.com/engine/swarm/"&gt;Docker Swarm&lt;/a&gt; (latterly Kubernetes) and &lt;a href="https://go.dev/"&gt;Golang&lt;/a&gt; stack. There were a number of commonalities, but also a number of big differences, notably the scale at which we were now expected to run. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;How were we going to marry the two stacks together?&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I should talk a bit about what a "model" means in this context. &lt;/p&gt;

&lt;p&gt;A model, in the sports betting space, is an algorithm to which you give relatively small amounts of input information (for example in football - recent results, recent market prices, relative strengths of teams) and in return have it spit out a large number of "correct"(*) probabilities for different markets and selections - for each team winning/drawing/losing, for different score outcomes, for the total number of goals scored, and many more. Bookmakers take these probabilities, add margins and make them available to customers.&lt;/p&gt;

&lt;p&gt;But having a model alone isn't sufficient to run a bookmaking operation. You need a lot of engineering around a model to make it work in production - to run models in parallel when lots of matches are on at the same time, to pass the live state of matches into models, to visualise how models are performing, to handle model errors - this "boring" code can be more than 80% of the total codebase for a full end-to-end solution.&lt;/p&gt;

&lt;p&gt;(*) - "correct" is a tricky term here. Does it mean "probabilities that other bookmakers would agree with for this match?" Or "the real-world probabilities for this match, if it were played thousands and thousands of times?" One for a separate blog post!&lt;/p&gt;
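&lt;p&gt;To make the input/output shape concrete, here is a toy independent-Poisson football model of the kind often used as a textbook starting point - two expected-goals inputs in, probabilities for a handful of markets out. Real production models are far richer; the function names and the numbers below are illustrative only.&lt;/p&gt;

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """Probability of exactly k goals under a Poisson(lam) process."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probabilities(home_xg, away_xg, max_goals=10):
    """Derive 1X2 and total-goals probabilities from expected goals,
    assuming the two teams score independently."""
    p_home = p_draw = p_away = p_over = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(home_xg, h) * poisson_pmf(away_xg, a)
            if h > a:
                p_home += p
            elif h == a:
                p_draw += p
            else:
                p_away += p
            if h + a > 2:
                p_over += p
    return {"home": p_home, "draw": p_draw, "away": p_away,
            "over_2.5": p_over}

probs = match_probabilities(1.6, 1.1)  # illustrative expected goals
```

&lt;p&gt;A bookmaker would then apply margins to each of these probabilities before quoting odds.&lt;/p&gt;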




&lt;p&gt;Why did ioSport choose Python and Erlang ?&lt;/p&gt;

&lt;p&gt;Python was a relatively easy choice for the core models, as in 2016 it was already the lingua franca of the data science community. It's not the fastest language, but it has perhaps the biggest sweet spot of all languages in terms of the different problem domains it can successfully tackle. Its performance is often criticised, but matrix and stats libraries such as &lt;a href="https://numpy.org/"&gt;numpy&lt;/a&gt; and &lt;a href="https://scipy.org/"&gt;scipy&lt;/a&gt; (whose cores are written in C) have taken a lot of this pain away; and if that is not sufficient, a common strategy is to prototype an algo in Python and then rewrite it in C or Go for better performance.&lt;/p&gt;

&lt;p&gt;Erlang was a more controversial choice. We would have liked to write everything in a single language for the sake of productivity, but life is rarely that simple. Python's concurrency story - its ability to do stuff in parallel - was fairly abysmal in 2016 (it may have improved since then). Erlang, by contrast, excels at concurrency - it was designed by &lt;a href="https://www.ericsson.com/en"&gt;Ericsson&lt;/a&gt; to run telephone exchanges, handling tens of thousands of phone calls in parallel. So although it has a quirky syntax and a small developer community, Erlang's potential to run large numbers of models in parallel made it a very attractive option.&lt;/p&gt;

&lt;p&gt;All this kit was hosted on large individual EC2 machines - which worked, but primarily because we only had to offer prices in the top leagues to our pre-Superbet clients. At Superbet there would be no such artificial limits; we would be expected to price anything and everything.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How were we going to outgrow the limits imposed by our single-machine architecture?&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;The first thing we figured out was that we had a different kind of scalability problem to other areas within Superbet.&lt;/p&gt;

&lt;p&gt;If you're working on the core betting engine, your scalability problem is customer-led - how to handle hundreds, thousands, millions of customers. The thing about this kind of traffic is that it increases relatively predictably - tens of thousands of customers don't appear overnight, and even when they do (for example at a big event like the Champions League final), a container-based system works pretty well because you just configure more nodes (and partitions) to handle the extra volume.&lt;/p&gt;

&lt;p&gt;The model scaling problem is slightly different. For one thing, there are a finite number of teams in the world playing a finite number of games, which puts a theoretical ceiling on the amount of resources you need. For another, there are well-defined "epochs" within the lifecycle of a match.&lt;/p&gt;

&lt;p&gt;Pre-event - before the game has started - match prices don't tend to change that much. The probability of team X winning might drift up and down a couple of percentage points in response to team news, but that's about it. That translates into relatively few pre-event model calls.&lt;/p&gt;

&lt;p&gt;Once a match goes in-play (when the referee blows the kickoff whistle), everything changes - prices whip around in response to goals being scored, cards being awarded, really in response to any kind of match event information coming from the state feed. Even if there are no events coming in, prices must be recalculated constantly as they "decay" (like option prices) with the tick of the match clock. So the number of in-play model calls might be tens or even hundreds of times bigger than the equivalent pre-match number.&lt;/p&gt;
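&lt;p&gt;A minimal sketch of that "decay", assuming goals arrive as a Poisson process at a constant rate (the 2.7 goals-per-match figure below is illustrative): the probability of any further goal shrinks with the remaining clock, so every price derived from it has to move with each tick, even with no match events at all.&lt;/p&gt;

```python
from math import exp

def p_another_goal(goals_per_match, minutes_left, match_minutes=90):
    """P(at least one more goal) under a constant-rate Poisson
    goal process over the remaining match time."""
    remaining_rate = goals_per_match * minutes_left / match_minutes
    return 1.0 - exp(-remaining_rate)

p_kickoff = p_another_goal(2.7, 90)  # about 0.93 at kickoff
p_late = p_another_goal(2.7, 10)     # about 0.26 with ten minutes left
```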

&lt;p&gt;You might call this kind of scalability "burst" scalability. And yes, you can achieve it by aggressively auto-scaling EC2 machines, but in 2016 this was something of a black art. Even in 2021, with scaling responsibilities delegated to container management layers such as Kubernetes, it remains "not simple". But Lambda, with its "off by default" nature, offered the possibility of simple out-of-the-box auto-scaling as far back as its inception in 2014.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It was too tempting not to try.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Now at this point people tend to mention "cold starts". A Lambda process can't be spun up instantly (they say); there is always a delay whilst your request is processed, your machine provisioned, configured etc. Doesn't this impact your response time, and your customer experience in turn?&lt;/p&gt;

&lt;p&gt;It's important to recognise that cold starts are indeed an issue - it's impossible for Lambda to compete directly with a traditional server which is ready and waiting to receive requests. But one should also acknowledge the significant steps AWS has made to reduce the scale of the cold-starts problem over the last couple of years.&lt;/p&gt;

&lt;p&gt;The first was the introduction of &lt;a href="https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/"&gt;Provisioned Concurrency&lt;/a&gt;. You can now configure your Lambdas (at a cost!) so that a portion of them are "warmed up" and ready to receive requests. Lambda isn't literally one-process-per-request; processes "hang around" rather than being immediately killed, in case an initial request signals the start of a flood. Which is great, but when you think about it, provisioned concurrency is really only a band-aid over the cold-starts problem - if one goes down the route of having processes constantly warmed up, how is the product any different from a regular server?&lt;/p&gt;

&lt;p&gt;The real goal should be to get the time to spin up a new Lambda as close to zero as possible. The Erlang virtual machine is an interesting benchmark here. Erlang uses the concept of &lt;a href="https://learnyousomeerlang.com/errors-and-processes"&gt;microprocesses&lt;/a&gt;, which exist independently of the main spawning process, and which can be spun up at a rate of thousands per second. Lambda is slightly different because you are talking about spinning up entire new OS processes across a fleet of machines, and OS processes are slower to start than microprocesses.&lt;/p&gt;

&lt;p&gt;AWS took an interesting step in Erlang's direction in 2019 with the introduction of Firecracker (upon which Lambda, and similar services such as &lt;a href="https://aws.amazon.com/fargate/"&gt;Fargate&lt;/a&gt;, are now based). From the &lt;a href="https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/"&gt;launch blurb&lt;/a&gt; -&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;...Firecracker (is) a new virtualization technology that makes use of KVM. You can launch lightweight micro-virtual machines (microVMs) in non-virtualized environments in a fraction of a second, taking advantage of the security and workload isolation provided by traditional VMs and the resource efficiency that comes along with containers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now I am in no way an expert in KVM (the Kernel-based Virtual Machine). But I have run simple &lt;a href="https://aws.amazon.com/cloudformation/"&gt;Cloudformation&lt;/a&gt; demos in which hundreds of Lambdas are spawned in parallel to perform map-reduce style calculations. And whilst not quite at Erlang levels, the speed and scale of Firecracker in spinning up Lambda processes is impressive - I've clocked hundreds in a couple of seconds. And the nice thing is that because Lambda is an AWS managed service (and because cold starts remain a high-profile issue for customers), it's quite likely that Firecracker performance will continue to improve as AWS tweak the product - all of which you get for free as a Lambda user.&lt;/p&gt;
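&lt;p&gt;The map-reduce pattern itself is simple. Here is a local sketch - in production each worker call would be a boto3 &lt;code&gt;invoke&lt;/code&gt; against the Lambda API, but a plain function standing in for it (as below) lets the fan-out/reduce shape run anywhere:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def price_match(match_id):
    # Stand-in for invoking a pricing Lambda for one match.
    return {"match_id": match_id, "priced": True}

def fan_out(match_ids, max_workers=32):
    """Map step: price every match in parallel; the returned list is
    the input to whatever reduce step follows."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(price_match, match_ids))

results = fan_out(range(200))
```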




&lt;p&gt;The Python parts of our stack could now be migrated from the older "managed directly by Erlang" pattern to Lambda. We had to migrate some of our Python dependencies to Lambda &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html"&gt;layers&lt;/a&gt; to satisfy the quirks of Lambda's package management system, but otherwise the process was fairly smooth. The Erlang engineering was then able to invoke Lambda models via &lt;a href="https://github.com/erlcloud/erlcloud"&gt;erlcloud&lt;/a&gt;, an AWS client library for Erlang. So that just left the core Erlang engineering.&lt;/p&gt;

&lt;p&gt;This is where most of the interesting migration work lay. In any large Erlang application you tend to have a lot of supervision trees (very similar to Kubernetes - here's an interesting &lt;a href="http://blog.plataformatec.com.br/2019/10/kubernetes-and-the-erlang-vm-orchestration-on-the-large-and-the-small/"&gt;article&lt;/a&gt; by José Valim, the creator of Elixir, on the similarities between the two) spawning a lot of independent Erlang processes, all communicating using native Erlang messaging. This model didn't translate directly into our cloud environment, because there we had Docker Swarm/Kubernetes doing the process management, and Kafka doing the messaging. And Kafka messages were being partitioned as part of the scaling process, which wasn't something we had considered before.&lt;/p&gt;

&lt;p&gt;So effectively we had to rip out the node management and messaging part of our applications, to be left solely with the core business logic. And in the process of doing so, there was no longer any "glue" left to bind this logic together as a single application - instead we were left with multiple independent applications, which it now made sense to deploy as multiple instances (one per partition), and store in independent repos. But still in Erlang - one of the nice things about the modern cloud environment is that you can pretty much choose whatever runtime you wish.&lt;/p&gt;




&lt;p&gt;And this setup has served us very well for the past two years. We have had a lot of commercial success with the launch of the &lt;a href="https://sportshandle-com.cdn.ampproject.org/c/s/sportshandle.com/single-game-parlays-taken-sportsbooks-storm/amp/"&gt;Superbets&lt;/a&gt; product in Romania and Poland this year, and one of the things I am most proud of is that we have had little to no downtime in the service (famous last words) - Erlang in particular has gone a substantial way towards confirming its &lt;a href="https://stackoverflow.com/questions/8426897/erlangs-99-9999999-nine-nines-reliability"&gt;nine nines&lt;/a&gt; uptime reputation.&lt;/p&gt;

&lt;p&gt;But these two years have also given us the chance to reflect on the pros and cons of our stack; on what we might change if we could do things differently. Because no piece of software lasts forever, technical debt always accumulates, and the best medicine here is usually constant low levels of refactoring to keep things in shape.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So which bits of the stack do we intend to keep, and what are we looking at changing?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first thing to say is that developers, particularly the quants (the modelling team), really like Lambda's ease of use. If you know a little Cloudformation and bash you can have a Lambda function up and running in a couple of minutes, complete with logging and performance metrics. It's not all perfect - testing locally remains difficult because you tend to need to mock AWS production primitives with a library like &lt;a href="https://github.com/spulec/moto"&gt;moto&lt;/a&gt;. But startups such as &lt;a href="https://serverless-stack.com/"&gt;SST&lt;/a&gt; and &lt;a href="https://dashbird.io/"&gt;Dashbird&lt;/a&gt; are working on the pain points of the development experience, and the outlook looks bright.&lt;/p&gt;

&lt;p&gt;The second thing is that the "off-by-default" nature of Lambda really shines in our monthly bills. Despite millions, possibly tens of millions, of Lambda calls in a single month, the Lambda proportion of our overall AWS bill is barely into double digits, percentage-wise.&lt;/p&gt;

&lt;p&gt;The flip side of that last statement, however, is to shine a light on other areas of our stack, where we have thus far failed to implement auto-scaling as aggressively as we might like.&lt;/p&gt;

&lt;p&gt;It is, for example, a constant frustration to see a large, expensive "floor" to our EC2 bill in days or weeks or months where there is little sporting activity in the calendar. As I mentioned earlier, auto-scaling isn't that easy, even with the introduction of Kubernetes. In our initial experiments we've found that many of our core apps don't like being aggressively auto-scaled - they fail to start properly or can't find the data they need on startup. Like unfit bodies being asked to do yoga, they groan and complain - it's clear that a number of them weren't designed with auto-scaling in mind. So that's something we will be looking to rectify in 2022.&lt;/p&gt;

&lt;p&gt;Another thing we're starting to consider is whether Erlang is really the right language for our engineering middleware. As mentioned earlier, although Erlang is nominally a language, its unique virtual machine gives it more in common with container management systems such as Kubernetes than with other languages such as Python. In many ways you can consider the Erlang VM a "cloud in a box", capable of running cloud-scale systems before the big public clouds became viable deployment options.&lt;/p&gt;

&lt;p&gt;But equally, strip away those cloud-like features (process spawning, messaging - because those tasks are now delegated to Kubernetes and Kafka) and what are you left with? A somewhat quirky scripting language, in which you're running single-threaded code - you're not even taking advantage of Erlang's famed concurrency! And so whilst I like Erlang a lot, it may be time to consider alternative languages for our middleware logic - ones which might give us a productivity pickup, and for which there may exist deeper pools of development talent.&lt;/p&gt;

&lt;p&gt;Allied to this thought, one of the most interesting trends (IMO) of the past two years has been the growth of the serverless ecosystem. People are used to thinking about Lambda as functions-as-a-service, operating in a stateless manner. But AWS has been quietly building and deploying direct Lambda bindings for many of its core products - &lt;a href="https://aws.amazon.com/s3/"&gt;S3&lt;/a&gt; and &lt;a href="https://aws.amazon.com/dynamodb/"&gt;DynamoDB&lt;/a&gt; in the storage space, &lt;a href="https://aws.amazon.com/sqs/"&gt;SQS&lt;/a&gt; for queues, &lt;a href="https://aws.amazon.com/sns/"&gt;SNS&lt;/a&gt; and &lt;a href="https://aws.amazon.com/eventbridge/"&gt;EventBridge&lt;/a&gt; for messaging. This opens up the possibility of doing more and more engineering directly in the serverless space, even of building full serverless applications comprising messaging and state management. And in many cases the scale that these managed serverless services can handle far outstrips what you can do with self-hosted services. If DynamoDB can sustain tens of thousands of transactions a second, why are you bothering to self-host &lt;a href="https://redis.io/"&gt;Redis&lt;/a&gt; on EC2?&lt;/p&gt;
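&lt;p&gt;As a sketch of what those bindings look like from the handler's side: an SQS-triggered Lambda receives a batch of messages in the standard &lt;code&gt;Records&lt;/code&gt;/&lt;code&gt;body&lt;/code&gt; event layout. The processing below is a made-up placeholder, not our production code.&lt;/p&gt;

```python
import json

def handler(event, context):
    """Consume a batch of SQS messages delivered by the
    Lambda-SQS event source mapping."""
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        processed.append(message.get("match_id"))
    return {"processed": processed}
```

&lt;p&gt;The same function shape works for S3, SNS and EventBridge triggers; only the event layout changes.&lt;/p&gt;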




&lt;p&gt;All in all I consider serverless to have been a big boon for the Superbet modelling team. It has allowed us to go to market and iterate our products quickly, and effectively outsource to AWS a lot of the &lt;a href="https://en.wikipedia.org/wiki/DevOps"&gt;DevOps&lt;/a&gt; work associated with scaling models. We kept our quant team happy and were able to manage with a smallish engineering team as a result. Along the way it has shone a light on the cost benefits of an aggressively auto-scaled solution, and has given us a lot of ideas about how we might bring these benefits to other parts of the engineering stack. It's been interesting to see how the serverless ecosystem has evolved over the past three years, with AWS spawning new features and products at an impressive rate. Having started using Lambda as simple functions-as-a-service three years ago, we're poised to start looking at fully event-driven Lambda systems in 2022.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're interested in working on sports models in Python, or in distributed engineering at scale with AWS/Kafka/Kubernetes/Erlang then ping me!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>serverless</category>
    </item>
    <item>
      <title>7 Tips to Crush Your Next System Design Interview</title>
      <dc:creator>Daniel Emod Kovacs</dc:creator>
      <pubDate>Fri, 17 Dec 2021 11:46:41 +0000</pubDate>
      <link>https://dev.to/superbet/7-tips-to-crush-your-next-system-design-interview-4cfe</link>
      <guid>https://dev.to/superbet/7-tips-to-crush-your-next-system-design-interview-4cfe</guid>
      <description>&lt;p&gt;It is no secret that the tech industry is blooming right now. Many big tech companies are expanding at a mind-boggling pace, which of course means a rapid influx of applicants. This leads to a need for optimising and streamlining the interview experience.&lt;/p&gt;

&lt;p&gt;We at &lt;a href="https://superbet.engineering" rel="noopener noreferrer"&gt;Superbet&lt;/a&gt; invest a lot into making our interviews concise, comprehensive and reproducible, to ensure all applicants get a fair and equal chance.&lt;/p&gt;

&lt;p&gt;One of the essential parts of our interview process is the &lt;strong&gt;System Design Interview&lt;/strong&gt;. It is not something we invented, but I think we're pretty good at giving candidates fair, yet challenging problems to think about for this ~1 hour section of our hiring flow. In my first 6 months of working at Superbet as a tech lead, I've conducted more than 20 interviews, the majority of which focused on system design. With that experience in my bag, I'll spoil you with the 7 best pieces of advice I could come up with to help you ace your next system design interview.&lt;/p&gt;

&lt;p&gt;During a system design interview you're presented with a technical problem, e.g.: "how would you design an application like DEV.to?", and your task is to talk the interviewers through your solution while covering a couple of different topics, e.g.: data modelling, services, scaling, frontend, logging and error handling, just to name a few.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Understand the Problem
&lt;/h2&gt;

&lt;p&gt;This may sound cliché, but understanding the problem you're presented is going to give you a great advantage. So many candidates rush into trying to pump out a solution as quickly as they humanly can, without taking the time to really try to understand the question or problem in the first place.&lt;/p&gt;

&lt;p&gt;Let's examine why system design interviews exist and what we're trying to measure. Contrary to popular belief, after the interview concludes, we don't actually take the plan you've given us and turn it into the next unicorn startup. We're actually okay with you not giving us a full solution. What we're looking for is the way you approach a technical problem and how well you can reason about your decision making.&lt;/p&gt;

&lt;p&gt;The skills you showcase during the system design interview tell us how well you'll manage problems in all sorts of areas of engineering, not just in the specific environment presented during the interview.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Collect all of the Requirements
&lt;/h2&gt;

&lt;p&gt;A big mistake a lot of good engineers make when tackling a system design interview is failing to ask questions about the problem. We can't really score you on the assumptions you've made during the interview, although they do give us good insight into how your mind works. It's always safer to just take your time and ask all of the questions that come to mind.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I like to say this phrase when introducing the system design interview to candidates who haven't had a chance to take part in one yet: "your questions are just as valuable as your answers".&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A good rule of thumb: if you're about to answer a question starting with "it depends", instead explain that you need further clarification of specific parameters of the problem that will inform your decision. For example, don't just assume you can use AWS for everything. Certain countries have regulations that prohibit businesses in some sectors from relying on cloud infrastructure, and in that case you'd need to consider on-premise solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Focus on Note Taking
&lt;/h2&gt;

&lt;p&gt;I know I have a horrible short-term memory. To help with remembering details, I take notes throughout the day and review them regularly. You don't have to have a bad memory to take advantage of note taking. Not only will it benefit you in an extremely stressful situation, by taking reliance on your memory out of the equation, but it also helps interviewers by showcasing what information you're focusing on.&lt;/p&gt;

&lt;p&gt;Sometimes a system design interview will present you with a complex problem, especially as you climb higher on the engineering ladder. It's okay to lose track during the interview, however, the way you handle the situation or prevent it from happening in the first place is what's really important to us.&lt;/p&gt;

&lt;p&gt;If you're sharing your screen during the interview or are doing the interview in person, you will have a way of showing your notes to your interviewers. I recommend making sure they're aware that you're taking notes and ask if they're interested in your notes. Chances are they will be.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use a Visualisation Tool
&lt;/h2&gt;

&lt;p&gt;The result of the system design interview is a snapshot of a working system, with an explanation of the choices you've made while coming up with the design. One of the biggest challenges for both the interviewers and the candidates is keeping track of the final solution. We always recommend that candidates use a visualisation tool: it not only helps them maintain an overview of what they have so far, but also lets us see the solution from a bird's-eye view.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The tool I recommend using is &lt;a href="https://app.diagrams.net/" rel="noopener noreferrer"&gt;Draw.io&lt;/a&gt;. The disadvantage of it is that there's no live sharing feature, so the only way for us to see your solution is for you to share your screen. &lt;a href="https://docs.google.com" rel="noopener noreferrer"&gt;Google Docs&lt;/a&gt; is also a viable alternative, albeit with limited options when it comes to drawing shapes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Do Not Get Lost in the Details
&lt;/h2&gt;

&lt;p&gt;Your solution should be comprehensive and should cover a wide array of topics of software design. It's also important to discuss edge cases and non-critical system components, e.g.: logging. The number one reason candidates fail to deliver a full solution to the system design interview on time is lack of focus on higher level issues, a.k.a.: getting lost in details.&lt;/p&gt;

&lt;p&gt;To make sure you don't lose yourself in the little details of your solution, I recommend the following approach. Instead of picking the area you're most familiar with and expanding on it, start at the highest possible level. Explain what you expect your application to be able to do, and then work your way down into the specifics of each component of the system.&lt;/p&gt;

&lt;p&gt;As an interviewer, I want to intervene as little as possible during the interview. I want to keep the candidate talking, without having to guide them. When you get lost in the details, your interviewer has to choose between intervening and interrupting your flow, or letting you talk and risking you wasting precious time on details that are not required in your solution. The best way to make sure you provide the right level of detail is to ask before you give your solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Talk Through Your Process
&lt;/h2&gt;

&lt;p&gt;From my experience, the most successful candidates all have one quality in common: they are vocal. You might be someone who finds it easy to talk through your thinking process, but that isn't true for everybody, and I appreciate that. Not everyone is naturally good at it; however, if you practice thinking out loud while preparing for an interview, you'll find it easier to do during your system design interview as well.&lt;/p&gt;

&lt;p&gt;Why is this important at all? To be an effective engineer, your communication skills are extremely important. For more junior engineers we want to make sure that you'll be able to ask your more senior teammates for advice when working on your tasks, while as a senior, we're looking for you to be able to jump in and mentor your teammates, as well as take part in discussions about tech.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Do Not Be Afraid to Ask for Help
&lt;/h2&gt;

&lt;p&gt;When you're doing your system design interview at Superbet, we'll try to create the atmosphere of a nice open-ended discussion between tech enthusiasts. As such, we'll be open to answering any and all of the questions you may come up with during the interview.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Have you ever wanted to say during an interview: "I'd just Google x and y"?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a system design interview it's okay not to know the exact technology or library that you'd use for a solution. In the real world you'll be able to get help with all of that, whether it's reading documentation or looking the problem up on Stack Overflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"I'd use Redis, because I don't have to worry about persistence or memory use, but I need very quick reads and a way to publish changes as they happen"&lt;/em&gt; scores just as well as &lt;em&gt;"I'd use an in-memory database, because I don't have to worry about memory and they have quicker access speeds. If it's got built in pub/sub, it's even better. Can you suggest such a database?"&lt;/em&gt;. Remember, we're interested in your reasoning above all and if the actual answer is just a Google search away, we'll be happy to help you out with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Whether you've never been in a system design interview or are looking to improve the outcome of your next one, hopefully these tips will get you there. Remember, your mileage may vary. Not everybody does interviews the same way, so if you want to be safe, you can use all of the above in your interview at Superbet. You can find a link to our open positions page on &lt;a href="https://superbet.engineering" rel="noopener noreferrer"&gt;our engineering website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>systems</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>career</category>
    </item>
    <item>
      <title>A Tale of Two Vues - Tips for Developing VueJS Libraries</title>
      <dc:creator>Luka Skukan</dc:creator>
      <pubDate>Tue, 14 Dec 2021 14:32:07 +0000</pubDate>
      <link>https://dev.to/superbet/a-tale-of-two-vues-tips-for-developing-vuejs-libraries-4m5o</link>
      <guid>https://dev.to/superbet/a-tale-of-two-vues-tips-for-developing-vuejs-libraries-4m5o</guid>
      <description>&lt;p&gt;A few weeks ago, I ran into an interesting problem. At &lt;a href="https://superbet.engineering/"&gt;Superbet&lt;/a&gt;, we were attempting to extract some &lt;a href="https://vuejs.org/"&gt;VueJS&lt;/a&gt; reactive code into a separate utility library, using &lt;a href="https://www.typescriptlang.org/"&gt;TypeScript&lt;/a&gt;. I thought I knew what was waiting for us, and expected it to be a quick and simple thing. I was gravely mistaken. Vue &lt;a href="https://vuejs.org/v2/guide/reactivity.html"&gt;reactivity&lt;/a&gt; broke, and investigating what happened was no easy task. However, it also involved a process of discovery that was interesting enough to write about!&lt;/p&gt;

&lt;p&gt;In this article, I'd like to introduce a development process for external libraries that rely on Vue as a &lt;em&gt;&lt;a href="https://docs.npmjs.com/cli/v8/configuring-npm/package-json#peerdependencies"&gt;peer dependency&lt;/a&gt;&lt;/em&gt;, warn you about the potential pitfalls, and share how it applies to other JavaScript ecosystems as well (such as &lt;a href="https://reactjs.org/"&gt;ReactJS&lt;/a&gt;). I'll take you through the experiences we've had step by step, share the difficulties we've encountered, and help you avoid them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Thought Would Work
&lt;/h2&gt;

&lt;p&gt;The task itself sounded simple enough - extract a number of utilities that make use of a &lt;a href="https://vuejs.org/v2/api/#Vue-observable"&gt;Vue observable&lt;/a&gt; into a separate library, to be used across multiple Vue projects. We knew that we did not want to include the &lt;code&gt;vue&lt;/code&gt; dependency into the library bundle itself, nor did we want it to be installed when you add the library. Doing this would increase the bundle size for no good reason, and could even lead to dependency version conflicts!&lt;/p&gt;

&lt;p&gt;We attempted to resolve this by marking &lt;code&gt;vue&lt;/code&gt; as a &lt;code&gt;peerDependency&lt;/code&gt;. Peer dependencies, specified in &lt;code&gt;package.json&lt;/code&gt; under &lt;code&gt;peerDependencies&lt;/code&gt;, are a special kind of dependency that, at the same time, &lt;em&gt;is&lt;/em&gt; and &lt;em&gt;is not&lt;/em&gt; a dependency of the project. You can think of them simply as dependencies that are &lt;strong&gt;expected&lt;/strong&gt; to be there, in the project that uses the library, when you're using the library. The syntax is the same as for &lt;code&gt;dependencies&lt;/code&gt; and &lt;code&gt;devDependencies&lt;/code&gt; but, unlike those two, the section needs to be added by manually modifying the &lt;code&gt;package.json&lt;/code&gt; file. The specified &lt;a href="https://github.com/npm/node-semver#versions"&gt;version range&lt;/a&gt; signals which versions of that dependency are &lt;em&gt;compatible&lt;/em&gt; with your library.&lt;/p&gt;

&lt;p&gt;This pattern is essential for library development, especially when the code contained in the library is meant to be a plugin or an extension based on some behaviour provided by a core library. It avoids having the same dependency installed multiple times, or even with multiple versions, while still using version ranges to ensure compatibility. For example, a library that defines a Vue plugin that depends on Vuex being present might specify its peer dependencies like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"peerDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"vue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^2.6.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"vuex"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;gt;=3.5.1 &amp;lt;3.6.2"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, to develop and unit test your changes locally, you might still need to be able to import those dependencies, since there is no codebase to provide them for you. You can do this in one of three ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you are using &lt;code&gt;npm&lt;/code&gt; versions 1, 2, or 7+, this will be done for you automatically! 🎉&lt;/li&gt;
&lt;li&gt;Otherwise, you can use a library such as &lt;a href="https://www.npmjs.com/package/npm-install-peers"&gt;&lt;code&gt;npm-install-peers&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Or, even better, just add it as a &lt;code&gt;devDependency&lt;/code&gt;!&lt;/li&gt;
&lt;/ol&gt;
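&lt;p&gt;With the third option, the same package simply appears in both sections; the &lt;code&gt;devDependencies&lt;/code&gt; entry provides the copy used for local development and tests, while &lt;code&gt;peerDependencies&lt;/code&gt; still governs what consumers must provide. A minimal sketch (version ranges are illustrative):&lt;/p&gt;

```json
{
  "peerDependencies": {
    "vue": "^2.6.0"
  },
  "devDependencies": {
    "vue": "^2.6.14"
  }
}
```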

&lt;p&gt;Were this a simple JavaScript project without a build step, this would have been enough! If the code using this library as a dependency had these same dependencies in the correct versions, the library would make use of them instead of installing a separate copy. If, instead, it did not have them, or had the wrong versions, a warning (or, on npm 7+, an error) would be emitted during &lt;code&gt;npm install&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing the Build Process
&lt;/h2&gt;

&lt;p&gt;As you might have guessed, specifying it as a peer dependency was not sufficient! I hinted at this before: the build process was not considering the fact that it was specified as a peer dependency, only that it was being imported into our codebase. This led to a &lt;em&gt;separate&lt;/em&gt; instance of Vue being bundled with the library, and it was the root cause of my problems: &lt;strong&gt;two Vue instances&lt;/strong&gt; and their observables are not mutually reactive. Not only did we bundle Vue twice and increase the package size; Vue (much like React) also relies on there being a single instance of the library to work properly!&lt;/p&gt;

&lt;p&gt;Luckily, the fix for that is straightforward enough - we just needed to tell the build tool to exclude those dependencies from the bundle. With &lt;a href="https://webpack.js.org/"&gt;Webpack&lt;/a&gt;, you can specify the &lt;a href="https://webpack.js.org/configuration/externals/"&gt;&lt;code&gt;externals&lt;/code&gt;&lt;/a&gt; field like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;externals&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;vue&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vue&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://rollupjs.org/guide/en/"&gt;Rollup&lt;/a&gt; has a similar mechanism for specifying external dependencies, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vue&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, if you want Rollup to take care of those pesky peer dependencies for you, you can install a plugin for that. One such example is &lt;a href="https://www.npmjs.com/package/rollup-plugin-peer-deps-external"&gt;&lt;code&gt;rollup-plugin-peer-deps-external&lt;/code&gt;&lt;/a&gt;. Add it to your project using your favourite package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-D&lt;/span&gt; rollup-plugin-peer-deps-external
&lt;span class="c"&gt;# OR&lt;/span&gt;
yarn add &lt;span class="nt"&gt;-D&lt;/span&gt; rollup-plugin-peer-deps-external
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that's done, modify your rollup configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;external&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rollup-plugin-peer-deps-external&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;external&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// preferably goes first&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After building and publishing the library, everything will work as expected! You can even go into the built files and check that the dependency (Vue, in our case) is not bundled! However, we would not consider publishing a new version of a library without testing it locally first, and this is where things got complicated once more...&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Troubles
&lt;/h2&gt;

&lt;p&gt;For most use cases, there is a simple and reliable flow for testing libraries without publishing them: we can use &lt;a href="https://docs.npmjs.com/cli/v8/commands/npm-link"&gt;&lt;code&gt;npm-link&lt;/code&gt;&lt;/a&gt; to connect a local version of a library, without having to update it on the &lt;a href="https://www.npmjs.com/"&gt;npm registry&lt;/a&gt;. The flow would be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# In your library folder&lt;/span&gt;
npm run build &lt;span class="c"&gt;# or equivalent&lt;/span&gt;
npm &lt;span class="nb"&gt;link&lt;/span&gt; &lt;span class="c"&gt;# for my-awesome-library&lt;/span&gt;

&lt;span class="c"&gt;# In the folder of the app that uses the library&lt;/span&gt;
npm &lt;span class="nb"&gt;link &lt;/span&gt;my-awesome-library

&lt;span class="c"&gt;## --------------------------------------------&lt;/span&gt;
&lt;span class="c"&gt;## Alternatively, a single command to run from the app folder&lt;/span&gt;
npm &lt;span class="nb"&gt;link&lt;/span&gt; ../path-to/my-awesome-library
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's it! When you build or run your project, it will be making use of the updated local artefacts, through the magic of &lt;a href="https://en.wikipedia.org/wiki/Symbolic_link"&gt;symlinks&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;That is to say, that would be it, unless you happen to be using peer dependencies and happen to be relying on a single instance of some object to exist in code, as happens to be the case with both VueJS and React. In this case, though the code would work fine if it were built and published, it would not resolve correctly with &lt;code&gt;npm-link&lt;/code&gt;. There are a number of ways around it, some based on &lt;code&gt;yarn&lt;/code&gt;, others specific to Webpack, or resolved by using &lt;a href="https://lerna.js.org/"&gt;Lerna&lt;/a&gt;. However, there are two fairly generic ways of handling it, as well.&lt;/p&gt;

&lt;p&gt;The first is simpler, but more fragile. If the shared dependency is a single library, and the dependency graph is relatively simple, you can use &lt;code&gt;npm-link&lt;/code&gt; to ensure that the same instance of the dependency is resolved as the peer dependency, by running the following in your library folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# from my-awesome-library&lt;/span&gt;
npm &lt;span class="nb"&gt;link&lt;/span&gt; ../path-to/my-app/node_modules/vue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works well enough for such a simple use case, but can be a pain to manage, and gets more complicated as the dependency graph gets messier. There is another, more robust way. Once you've set up your &lt;code&gt;peerDependencies&lt;/code&gt; and your build system, and ensured that the built assets do not actually bundle the dependency, you can create a package locally, as a &lt;a href="https://whatis.techtarget.com/definition/tarball-tar-archive"&gt;tarball&lt;/a&gt;, and install it directly. This is essentially the same process as building and publishing the library, only using your computer as the repository. What you will need to do is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# in the library folder&lt;/span&gt;
npm run build &lt;span class="c"&gt;# or equivalent&lt;/span&gt;
npm pack

&lt;span class="c"&gt;# in the app directory&lt;/span&gt;
npm i &lt;span class="nt"&gt;--save&lt;/span&gt; ../path-to/my-awesome-lib/my-awesome-lib-1.2.3.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that is all there is to it! The dependency will be installed from the tarball, and you can now build or run your application and make sure everything works correctly.&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;NOTE:&lt;/strong&gt; This updates your &lt;code&gt;package.json&lt;/code&gt; file in the application folder. Make sure you don't accidentally keep that change after you're done testing! The same goes for the tarball created in the library folder.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting it All Together
&lt;/h2&gt;

&lt;p&gt;Now you know all the essentials to start developing your own extensions and libraries based on Vue! To briefly recap, we covered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What peer dependencies are and how they differ from regular dependencies&lt;/li&gt;
&lt;li&gt;What updates need to be done to your build system (if applicable) to avoid bundling the library twice&lt;/li&gt;
&lt;li&gt;How to avoid the common &lt;code&gt;npm-link&lt;/code&gt; pitfall&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And that's all there is to it!&lt;/p&gt;

&lt;p&gt;As an additional note, this rabbit hole goes much deeper than just Vue. As mentioned before, React also shares this issue. If you've been developing your own &lt;a href="https://reactjs.org/docs/hooks-custom.html"&gt;React hooks library&lt;/a&gt;, for example, you might have run into the now-legendary &lt;strong&gt;&lt;a href="https://reactjs.org/warnings/invalid-hook-call-warning.html"&gt;Hooks can only be called inside the body of a function component&lt;/a&gt;&lt;/strong&gt;, which is caused by the same core problem. You are definitely encouraged to share your own stories of similar issues in the comments, and propose other solutions to this problem that were not addressed by the article!&lt;/p&gt;

</description>
      <category>vue</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
