Cédric Foellmi

A thousand deployments later...

[Originally published on my blog].

A bit more than four years after its inception, two years after its real start, with a thousand deployments, about 10,000 commits, more than 1,200 unit and integration tests, and more than 100 REST API endpoints, Arcsecond.io is approaching v1.0. What a journey, again!

The more I code arcsecond.io, the more I love it and feel confident about the years to come. Since the beginning, I knew this endeavour would last for multiple years, and not just be a hobby project. After all, I've been developing a macOS app called iObserve for more than 8 years. Thus, I built Arcsecond.io with these long-term objectives: maximum reliability, permanent deployment capability, and permanently clean and tested code. And of course: utmost care for the feedback of users and astronomers worldwide. For this, I knew I would have to learn and master whatever it takes, whatever its complexity or technological layer, and develop and run the day-to-day operations of a pretty nice full-stack software system. This article explains the important choices in the technical stack used in arcsecond.io.

Let me start by showcasing one thing I am pretty proud of today. Although I am very happy to have ported the core of my macOS app iObserve to the web (check it out), one of the coolest features of arcsecond.io today is Worldwide Observing Activities, that is, the possibility to know in real time what is being observed at the largest (but not only) observatories in the world, and by some scientific satellites!

Live observing activities occurring worldwide and in space!

Even better: you can subscribe to it, and be notified right away if something matches your subscription. Say you want to be notified when the HST starts an observation, or when a Programme is being performed for a given observer, or when a telescope instrument is being used, or when a region of the sky is being observed, etc. Boom. You've got mail! How cool is that?

Achieving this result took a long effort, and a rather lengthy list of decisions, optimizations, trials and errors. But the result is what I wanted: even if the user experience is not perfect (some links aren't always there, or don't always point to complete info, etc.), its potential is clearly enormous. Now, I have to "fill" it with more connectors.

Ok, let's dive in. We'll start with the backend, then move on to the infrastructure, the frontend and finally the code organisation.

The backend

Ok, not the most beautiful webpage on the Internet. But its beauty is what's inside.

The backend server is written with Django 2.2 and Python 3.7. Originally, arcsecond.io started with Django 1.5 or something, and Python 2. The migration to Python 3 was made more than a year ago, and it was clearly a good choice. Almost no third-party libraries broke during the migration. At the beginning, I evaluated the possibility of using Flask instead of Django. But Django comes with a lot of things built in, in particular its automatic admin portal, which proved to be invaluable over time.

The design choice for the backend was to make a RESTful, purely data-centric server. Hence, the rather obvious choice was to use the fantastic Django REST Framework. It is an essential part of arcsecond.io. It is not the purpose of this article to explain REST principles, but simply let me say it is a mindset, and DRF lets you adopt this way of doing things with a lot of ease.
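
To give an idea of what a purely data-centric endpoint looks like with DRF, here is a minimal sketch; the Observatory model and its fields are hypothetical examples for illustration, not Arcsecond's actual code.

```python
# Minimal DRF sketch of a read-only, data-centric endpoint.
# The Observatory model and its fields are hypothetical, not Arcsecond's code.
from django.db import models
from rest_framework import routers, serializers, viewsets


class Observatory(models.Model):
    # Hypothetical model, for illustration only.
    name = models.CharField(max_length=100)
    latitude = models.FloatField()
    longitude = models.FloatField()
    altitude = models.FloatField()


class ObservatorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Observatory
        fields = ("name", "latitude", "longitude", "altitude")


class ObservatoryViewSet(viewsets.ReadOnlyModelViewSet):
    # Read-only: the API exposes list and detail endpoints, nothing else.
    queryset = Observatory.objects.all()
    serializer_class = ObservatorySerializer


router = routers.DefaultRouter()
router.register("observatories", ObservatoryViewSet)
urlpatterns = router.urls  # plugged into the project's urls.py
```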

One of the key reasons for using Python in the backend was obviously its seamless integration with scientific and astronomical libraries. In particular, it wouldn't be possible without AstroPy (and its affiliated packages) and NumPy. Once arcsecond.io makes some profit (hopefully some day!), I will redirect part of it to help fund these projects.
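
As an illustration of why Python fits so well here, a couple of lines of AstroPy are enough to build a sky coordinate and convert it between frames; this is a toy example, unrelated to Arcsecond's internals.

```python
# Toy example of the kind of work AstroPy makes trivial on the backend.
from astropy import units as u
from astropy.coordinates import SkyCoord

# Equatorial coordinates of a made-up target...
target = SkyCoord(ra=10.684 * u.degree, dec=41.269 * u.degree, frame="icrs")

# ...converted to galactic coordinates in one line.
print(target.galactic)
```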

The project declares 52 direct dependencies (among which django-cors-headers for managing HTTP headers easily, django-cryptography to encrypt some model fields, django-otp to protect the production admin route with 2FA, django-allauth and django-rest-auth to handle the complex process of auth, django-storages to handle multiple remote storages, psycopg2 to talk to the PostgreSQL DB, requests and pyvo to ease the writing of the multiple connectors...).

More importantly, the backend relies on celery and channels (with the ASGI daphne server), two libraries of crucial importance.

Celery is a distributed task queue. When I was a newbie at the very beginning of this full-stack journey, I asked myself: why would I need such a thing? Until I realised that a lot, if not most, of the activity of a backend server could be different from serving HTML pages or serialized data. In arcsecond.io the activity is not stable enough to say whether the background Celery worker is more important than the Daphne server. However, with the permanent parsing of data archives, such as ESO's, Gemini's and HST's, as well as satellite schedules such as Swift's, and the creation of a permanent flux of observing activities, a great part of the load of the arcsecond.io backend occurs in the background tasks handled by Celery.
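
To give a flavour of what such a background task can look like, here is a minimal sketch of a periodic archive-polling task; the task name, the placeholder archive URL and the processing step are assumptions of mine, not Arcsecond's actual connectors.

```python
# Hedged sketch of a periodic archive-polling background task.
# The task name and the archive URL are illustrative assumptions.
import requests
from celery import shared_task


@shared_task(bind=True, max_retries=3)
def poll_archive(self, archive_url="https://example.org/archive.json"):
    try:
        response = requests.get(archive_url, timeout=30)
        response.raise_for_status()
    except requests.RequestException as exc:
        # Retry a bit later if the archive is temporarily unreachable.
        raise self.retry(exc=exc, countdown=60)
    for row in response.json():
        # Each new row would become an "observing activity" in the database
        # (models and persistence omitted from this sketch).
        print(row)
```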

The second library is the official Django-supported project to handle more than just HTTP requests: WebSockets too. And even if we have only one read-only WebSocket route open in arcsecond.io so far (the live activities here: api.arcsecond.io/activities/live), the migration from a simple WSGI pure-HTTP server (it was gunicorn for a long time) to channels and daphne was not a simple one. We had two main difficulties. One was that the channels_redis library (providing the backing store for the channels layer of the server) had a bug that was causing the creation of numerous connections in our infrastructure, far beyond what was expected. For a long time we had various errors pointing in various directions, without knowing the true cause of the problem. But once fixed (recently), the result was quite impressive!

Evolution over time of the number of Redis connections used by our channels / daphne server. Note how dramatic an effect the bugfix had.

The second difficulty was understanding the underlying mechanism of WebSockets, and how one could automatically provide identically serialized JSON data. In other words, how to use our carefully crafted REST serializers to format the data going through the WebSocket tube the same way it is formatted for HTTP requests. We use the open-source (very early-stage but clean and well written) djangochannelsrestframework library for that.
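
For illustration, a read-only consumer broadcasting serialized activities could look roughly like the plain-channels sketch below; the class, group and event names are assumptions of mine, and the real code relies on djangochannelsrestframework for the serialization part.

```python
# Hedged sketch of a read-only WebSocket consumer pushing DRF-serialized data.
# Names (ActivityConsumer, "activities", activity_created) are illustrative.
import json

from channels.generic.websocket import AsyncWebsocketConsumer


class ActivityConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Every client joins the same broadcast group.
        await self.channel_layer.group_add("activities", self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard("activities", self.channel_name)

    async def activity_created(self, event):
        # "event" carries data already serialized with the same DRF
        # serializer used by the HTTP endpoints.
        await self.send(text_data=json.dumps(event["payload"]))
```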

As for the tests, we use the pytest library, with the standard Python mocks when necessary, and these additional and very useful libraries (a small test sketch follows the list):

  • freezegun : to easily write tests anywhere you want in time,
  • factory_boy : to easily create dozens of instances during your test setup,
  • django-naomi : to easily preview in a browser, during development, the emails your backend sends,
  • vcrpy : to very easily record and replay real network requests as "cassettes".
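
To show how these pieces fit together, here is a minimal test sketch combining pytest (with pytest-django), freezegun and factory_boy; the Activity model and its created field are hypothetical.

```python
# Hedged test sketch combining pytest, freezegun and factory_boy.
# The Activity model and its "created" field are hypothetical examples.
import factory
import pytest
from freezegun import freeze_time

from activities.models import Activity  # hypothetical app and model


class ActivityFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Activity

    title = factory.Sequence(lambda n: f"Activity {n}")


@pytest.mark.django_db
@freeze_time("2019-06-01 22:00:00")
def test_activity_is_created_at_frozen_time():
    activity = ActivityFactory()
    assert str(activity.created.date()) == "2019-06-01"
```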

The infrastructure

Arcsecond.io has two servers: one for the backend, and one for the frontend. The two are deployed on the well-known PaaS Heroku. Why Heroku? Well, because a colleague of mine, back when I was working in Switzerland, mentioned it was the easiest way of deploying a server. And it is still mostly true today; I am very happy with this decision (even if today I would consider using a serverless stack, amazed as I am by the products of zeit.co, for instance).

Some of the many advantages of using Heroku:

  • Large marketplace of one-click-installation add-ons for bazillions of auxiliary services (storage, logging, monitoring...)
  • Deploy with a simple git push.
  • A very easy-to-use CLI allowing you to interact and connect to your servers.
  • A very easy way to get certificates for your hostnames (using letsencrypt)

So here is the infrastructure of arcsecond.io (a sketch of the corresponding Procfile follows the list):

  • A Daphne server (one Heroku dyno - 7$/month),
  • A Celery instance for a background worker (one Heroku dyno - 7$/month),
  • A celery-beat scheduler (one Heroku dyno - 7$/month),
  • A production PostgreSQL DB instance, automatically backed up every hour - 9$/month,
  • A basic Papertrail instance for logging,
  • A basic Sendgrid instance for sending emails,
  • A basic RedisCloud instance for the backing store of the Daphne server,
  • A basic Heroku Redis instance for the HTTP cache,
  • An Express.js server for the frontend (one Heroku dyno - 7$/month),
  • Multiple Amazon S3 buckets for the storage of the data (price follows data volume),
  • The domains are bought from the French cloud provider ovh.com (~40$ / year for a .io domain),
  • I used to pay the Pro level of GitHub to store private repositories, but then it became free,
  • There are also 2 other free Heroku dynos used for the staging environments (one for the back, one for the front).
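
For the curious, the three backend dynos above roughly map to a Heroku Procfile along these lines; this is a sketch, and the "arcsecond" module path is an assumption of mine, not the actual code.

```
# Hypothetical Procfile sketch for the three backend dynos
# (the "arcsecond" module path is an assumption).
web: daphne arcsecond.asgi:application --port $PORT --bind 0.0.0.0
worker: celery -A arcsecond worker --loglevel=info
beat: celery -A arcsecond beat --loglevel=info
```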

Total price: about 40$ + S3 storage (+ a bit of Google Maps...) every month! One could say it's not much and it's a lot at the same time. 40 bucks for all these services is not much indeed. But these are mostly first or second-level tiers of the services, and as soon as some money flows in, I will move some of them up a level. Thanks to Heroku it will be as simple as moving a slider! It's "a lot" in the sense that, for now, arcsecond.io is totally free for everyone.

There is one key part of the infrastructure that hasn't been mentioned yet: CircleCI. The code of the backend is never pushed to production directly. It always goes through automatic testing and deployment by CircleCI on a staging environment, then on production.

As for the frontend, the code does not go through an external CI service, for one reason. The JavaScript code is split into various private NodeJS modules (see below), and I couldn't wrap my head around the idea of paying for an additional Amazon EC2 instance to host a Verdaccio server, or using the premium tier of npmjs.com "just for 5 small JS libs". So I use a Verdaccio server that runs on my iMac at home.

So far, I count about 750 deploys for the backend, and 260 for the frontend.

One couldn't finish this part without mentioning the use of the fantastic crash reporter Sentry. The richness and ease of use of this tool are really great, and its help in finding and fixing bugs is invaluable. Even if Django comes with automatic emailing when an error occurs (all 500 status codes end up in my inbox), Sentry helps you read the stack trace and identify more clearly what happened. It also provides very easy management of problems, grouping identical ones, alerting when closed ones reappear, etc.
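
For reference, wiring Sentry into a Django backend takes only a few lines with the sentry-sdk Django integration; this is a generic sketch, not necessarily Arcsecond's exact setup, and the DSN is a placeholder.

```python
# Generic sentry-sdk setup for a Django project (settings.py);
# the DSN value is a placeholder.
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://<key>@sentry.io/<project>",
    integrations=[DjangoIntegration()],
    send_default_pii=False,  # do not send users' personal data by default
)
```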

This "software system" is in production since more than a year now, and has handled already quite a few visit spikes (hello ProductHunt...). It is ready to support a much larger load without having to change its underlying organisation.

The frontend

I never imagined I could build up such a complex view in the browser.

The frontend is a Vuejs Single-Page Application (SPA). The discovery of Vue.js was simply a life-saver. Originally, the frontend of arcsecond.io was written with AngularJS, the first version of the now-called Angular frontend framework (which is one of the Trilogy, with Vuejs and Reactjs). But the codebase was enormous for the result, and hard to maintain. I tried at least 5 times to migrate to Angular 2, but always failed. And the JSX language of Reactjs appeared to me like a frightening thing and a prohibitive cost to pay (I still find it is a terrible idea for readability).

Then came Vuejs. Simple. Easy to learn. Natural. No cruft, no crazy syntax (simple v- prefixes like v-if, compared to *ngIf for Angular – really, guys?), a textbook example of how documentation should be written and presented (yeah, Python my friend, your documentation is damn difficult to work with). Then came v3 of the Vuejs Command-Line Interface. Oh man. Even the migration of the project from its original shape to the Vuejs-CLI v3 organisation was a breeze. And thank God, it abstracts away all of Webpack's insane complexity (for non-frontend developers: webpack is crazy difficult to learn). To me, Vuejs is what web development should have always been.

The tests are written with Jest and vue-test-utils. I have made several attempts with Cypress to write end-to-end tests, but I had trouble installing a stable version on my iMac. Moreover, one can go a long way with Jest, so I am concentrating on unit tests for now.

One last piece that could be categorized as "frontend", in the sense of being a consumer of the APIs and their data, is the Arcsecond Command-Line Interface (CLI), open-source and freely available on GitHub. For a long time I thought it was enough to provide REST APIs of interesting data, and that I would not commit myself to yet another Python script / module like every scientist on this planet... But reality is stronger, and I am very happy to have built such a tool. It helps integrate api.arcsecond.io more easily into custom workflows.
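
As a trivial illustration of consuming the API from a custom workflow (here with plain requests rather than the CLI), the idea is along these lines; the /objects/ endpoint path shown is an assumption on my part, so check the API documentation for the real routes.

```python
# Minimal sketch of consuming the REST API from a script.
# The /objects/<name>/ path is an assumption used for illustration only.
import requests

response = requests.get("https://api.arcsecond.io/objects/M31/", timeout=10)
response.raise_for_status()
print(response.json())
```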

The organisation of the code

To this day, the backend repository contains 4200 commits, and the frontend repository 4900. Yes, I tend to make a lot of micro-commits. The project started for real only two years ago, when I was able to really craft a serious, beyond-hello-world frontend app for the long term. The first commit of the Vuejs frontend was made on Tuesday, February 28, 2017. But the first commit of the Python backend was on Tuesday, April 15, 2014! (Always good to start a project on Tuesdays.)

In the code, I have 295 Vuejs components (.vue files) that build a webapp weighing 3.1 MB in the end. It is not using Server-Side Rendering (SSR) so far, and it's pretty difficult to retrofit right now (I have special handlers for dynamic routes for Organisation Portals...). But I intend to increase at least the number of pages using prerendering. On the backend, even if there is only one "arcsecond" Django monolith app, the code is split into 24 different Django subapps (which all have their own models, serializers, views, urls, sometimes tasks and/or connectors...).
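
To make the subapps idea concrete, the root URL configuration simply aggregates each subapp's routes; here is a hedged sketch with made-up subapp names (the real project has 24 of them).

```python
# Hedged sketch of a root urls.py aggregating per-subapp routes.
# The subapp names (activities, observatories, telescopes) are illustrative.
from django.urls import include, path

urlpatterns = [
    path("activities/", include("activities.urls")),
    path("observatories/", include("observatories.urls")),
    path("telescopes/", include("telescopes.urls")),
    # ...one include per subapp, each owning its models, serializers and views.
]
```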

Yes, the backend app is a monolith, even if the industry is buzzing about microservices. Actually, given the number of relationships between all the models, the real work of development and operations appeared to me a lot easier within a monolith, rather than across distinct microservices. I am alone (Eric, my friend and partner, mostly helps with advice, tests and comments), and arcsecond.io is the kind of product that requires a large amount of development before starting to be useful. I just can't spend time on anything other than development.

For the IDEs, I have used the Pro version of PyCharm since the beginning for the backend (~80$/year). For the frontend I tried Sublime, Atom, VSCode and WebStorm over the years, and finally settled on PyCharm too, since it has an excellent Vuejs plugin. Finally, I couldn't commit so fast and so easily, nor find the history of changes so quickly, without the excellent Git client Tower (~65$/year – that's quite a lot just to make commits... but I see Tower as both a debugging and a management tool at the same time).

The GitHub repositories (repos, in developer jargon...) are organised as follows:

  • A single arcsecond-back repo for the backend, sometimes with the help of forks of open-source libraries,
  • An arcsecond-front repo for the frontend, but using 6 private repos for custom Vuejs libraries,
  • The AA.js open-source repo for the Astronomical Algorithms,
  • The Arcsecond CLI open-source repo,
  • The public-issues repo to collect and work on public issues, found by me or by users.
  • The standard-stars-catalogues repo for holding an open-source and reliable source of standard stars catalogues.

That's it. Thanks for reading down to these lines! I hope you have learned a bit about what it takes to start a "small but growing" project that grabs quite a large fraction of your daily waking time.

–– Clear skies to all!

P.S. The beautiful image of the starry night is from G. Hüdepohl (atacamaphoto.com)/ESO.
