I recently had some free time, so I built a small project about specialty coffee shops in Cyprus: a website and a Telegram chatbot, done to "big" enterprise standards. I adore good coffee.
Now I'd like to share my development process, along with tips on how to get things done quickly.
The project's goals are simple:
- Display a map of coffee shops on the website
- View a coffee shop's details
- Use Google Maps for directions, reviews, and so on
- List coffee shops in the Telegram chatbot
- Search for coffee shops in the bot
- Find the nearest coffee shop in the bot
- Show a random coffee shop in the bot
- Keep everything in a minimal, clear style
Also: avoid perfectionism and never-ending development.
That is actually quite difficult, because you want to outdo everyone, wear the badges of various linters, and show off everything you can do in the code.
But it is enough for the service to simply work, with metrics tracked and errors recorded. It's the 20% of effort that gets 80% of the result.
So break everything down into small tasks and discard anything that would take more than a few hours or that cannot be estimated.
To be honest, I failed at this a few times: first I got hooked on the idea of running Caddy without a config file, straight from the command line, but in that mode it can only act as a reverse proxy and/or file server; I spent two days on that in total.
Another day was lost to a bad choice of library for the Telegram chatbot.
Overall, everything was successful and the project code is open. Website: https://specialtycoffee.cy/
To achieve these goals I decided to build a REST API microservice, a frontend website (more on that in the second part) and a backend for the bot (more on that in the third part), and to deploy it all to something modern and managed with a free tier, rather than a VPS or shared hosting. The bot in particular fits the Serverless/FaaS ideology well.
My first step was to register a domain name, which cost me 20 euros out of my own pocket - a good incentive not to squander them.
Speaking of registration: it is hard to predict a project's future - it may turn out to be profitable to sell, or you may want to get rid of it quickly. So it is better to keep all external services in separate accounts: mail, domain, hosting, analytics, monitoring, and so on. You can also use free tiers and trials.
REST API microservice
I have previous experience with Laravel and Symfony, so I chose a familiar, simple-to-use technology for quick implementation. Later I will almost certainly rewrite it in Go. Using a current PHP version (8.1) let me write slightly less code with slightly better performance. Constructor property promotion, readonly properties, and strict typing make development much easier.
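A tiny illustration of what those 8.1 features buy (an illustrative class, not code from the project):

<?php
// Illustrative only: constructor property promotion plus readonly properties
// keep small value objects very short.
final class Coordinates
{
    public function __construct(
        public readonly float $lat,
        public readonly float $lng,
    ) {}
}

$point = new Coordinates(lat: 34.707, lng: 33.022);
// $point->lat = 0.0; // would throw: a readonly property cannot be modified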
To make things easier, I removed some unused packages and services from Laravel: it's almost like Lumen.
Packages can be marked as "replaced" in composer.json, so Composer treats them as installed without actually installing them. This is very useful for dropping redundant polyfills:
"replace": {
"symfony/polyfill-ctype": "*",
"symfony/polyfill-iconv": "*",
"symfony/polyfill-intl-grapheme": "*",
"symfony/polyfill-intl-idn": "*",
"symfony/polyfill-mbstring": "*",
"symfony/polyfill-php72": "*",
"symfony/polyfill-php73": "*",
"symfony/polyfill-php80": "*",
"symfony/polyfill-php81": "*",
"dragonmantank/cron-expression": "*",
"egulias/email-validator": "*",
"league/commonmark": "*",
"league/flysystem": "*",
"symfony/mime": "*",
"symfony/var-dumper": "*",
"tijsverkoyen/css-to-inline-styles": "*"
}
You can also disable platform-check so that the PHP version is not checked on every request, only when installing packages. It's also useful to enable classmap-authoritative so that classes are loaded only from the classmap generated by Composer, not resolved on every use; this would get in the way during development, though, so it's enough to enable it at deployment.
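A minimal sketch of the first option: platform-check goes into the config section of composer.json, while classmap-authoritative is passed as a flag to composer dump-autoload during deployment (see the Dockerfile layers below).

{
    "config": {
        "platform-check": false
    }
}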
The final composer.json and config/app.php are in the repository. This kind of optimization took less than an hour, so it was worth it. Deeper optimization would take much more time, so not now.
Architecture
The service is made up of single-action controllers that retrieve data from models. There are no repositories because I consider them unnecessary for simple queries with no additional logic.
Separate Requests validate input data, and GeoJson Resources wrap output data. One class, one responsibility.
When I was developing the frontend, there was only one endpoint /cafes that returned a list of all the coffee shops: this allowed me to quickly get the API up and running without affecting other parts of the project. I added a few more endpoints during bot development.
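As an illustration, a single-action controller for that endpoint could look roughly like this (a sketch with assumed class names, not the actual project code):

<?php
// app/Http/Controllers/CafeIndexController.php -- illustrative names, not the real file

namespace App\Http\Controllers;

use App\Http\Resources\CafeResource;
use App\Models\Cafe;
use Illuminate\Http\Resources\Json\AnonymousResourceCollection;

final class CafeIndexController extends Controller
{
    // Single-action controller: one class, one endpoint, one responsibility.
    public function __invoke(): AnonymousResourceCollection
    {
        return CafeResource::collection(Cafe::all());
    }
}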
Database
To begin with, SQLite is used as the database, which saved me the time of a traditional MySQL/PostgreSQL deployment. Besides, I'm confident that SQLite is an excellent choice for a microservice with a load of about 100 hits per day and a few dozen or hundred table rows.
During the deployment process, data is seeded from a regular array in database/seeders/CafeSeeder.php. I intend to write 1-2 console commands to edit data in the future because they are much faster than any visual admin panel.
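A minimal sketch of such a seeder, with data kept right in a plain array (the field names are assumptions, not the project's actual schema):

<?php
// database/seeders/CafeSeeder.php -- minimal sketch; field names are assumptions

namespace Database\Seeders;

use App\Models\Cafe;
use Illuminate\Database\Seeder;

class CafeSeeder extends Seeder
{
    public function run(): void
    {
        $cafes = [
            ['name' => 'Example Roasters', 'city' => 'Limassol', 'lat' => 34.707, 'lng' => 33.022],
            // ...the rest of the entries live right here in the seeder
        ];

        foreach ($cafes as $cafe) {
            // updateOrCreate keeps re-seeding idempotent on every deployment
            Cafe::updateOrCreate(['name' => $cafe['name']], $cafe);
        }
    }
}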
Search
Scout with a "collection" driver makes the API full-text searchable: it allows the search of each model's fields with a simple "LIKE%smth%" query and does not require any full-text indexes in the database. It only took 15 minutes to implement.
Static
There are a few static files that must be present in the service:
- robots.txt, which disallows indexing
- favicon.ico, which is loved by many services
- humans.txt
- etc.
Tests
First, swagger-php generates openapi.yaml from attributes in the code, and then Spectator checks that API responses match the OpenAPI specification. The popular L5-Swagger is redundant here, as it is built on the same swagger-php with Swagger UI added on top.
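A sketch of what such a contract test looks like with Spectator (the route and test name are assumptions):

<?php
// tests/Feature/CafesTest.php -- sketch of a contract test against the generated spec

namespace Tests\Feature;

use Spectator\Spectator;
use Tests\TestCase;

class CafesTest extends TestCase
{
    public function test_cafes_list_matches_the_spec(): void
    {
        // Point Spectator at the spec generated by swagger-php
        Spectator::using('openapi.yaml');

        $this->getJson('/cafes')
            ->assertValidRequest()
            ->assertValidResponse(200);
    }
}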
Configuration
Unlike Symfony, Laravel does not read an .env.local file to override or extend the .env configuration, and it does not recommend storing .env in the repository. This is a good approach, but it is inconvenient when there are many configuration parameters.
You can do it a little differently: put the local parameters in .env (but don't commit it) and all the production and secret parameter names in .env.production (and do commit it). The hosting and/or deployment tooling must set APP_ENV=production as well as the secrets themselves.
In this scheme the local .env replaces (not complements!) the configuration from .env.production, and listing all used parameter names (even without values) in .env.production helps with understanding the project. With this setup, .env.example can be removed.
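A sketch of what such an .env.production might contain; the values here are placeholders, the parameter names are the usual Laravel ones plus the drivers mentioned above:

# .env.production -- committed to the repository; real secrets are set by the
# hosting/deployment tooling. APP_ENV=production is set by the platform so that
# Laravel picks this file instead of .env.
APP_DEBUG=false
APP_KEY=
APP_URL=
DB_CONNECTION=sqlite
SCOUT_DRIVER=collection
SENTRY_LARAVEL_DSN=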
Monitoring
When the first stage of development is complete, add Sentry to the project: in .env.production just list SENTRY_LARAVEL_DSN with an empty value (for clarity) and store the actual value as a secret.
Deployment
The server is hosted on the Fly.io platform, which runs managed Firecracker micro-VMs. It never sleeps, has a good free tier, and, unlike the popular Heroku, lets you host both static files and any application server. There are also different deployment and rollback strategies, health checks, and hosting regions to choose from.
The flyctl launch command, run from the application directory, detects the required components and generates the fly.toml and Dockerfile configurations automatically. You can also write your own configuration and Dockerfile.
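A rough sketch of the kind of fly.toml this produces; the app name and port are placeholders, and the exact sections depend on the flyctl version:

# fly.toml -- rough sketch; app name and internal port are placeholders
app = "specialtycoffee-api"

[build]
  dockerfile = "Dockerfile"

[http_service]
  internal_port = 8080
  force_https = true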
I already had a Dockerfile for similar projects, so I used that. As an added bonus, I could run all services as an unprivileged user.
Since layer caching works well in recent Docker versions, there is no point in cramming all instructions into a single RUN command. On the contrary, it is better to arrange "thin" RUN and COPY layers in order of how frequently the data they touch changes.
Because the OS distribution and its packages rarely change, the RUN apk add ... command can appear at the very beginning of the Dockerfile.
Because Composer packages are updated more often than the project source code, the COPY composer.* . and RUN composer install --no-autoloader --no-dev --no-interaction --no-scripts layers can sit in the middle of the Dockerfile and be served from cache.
COPY --chown=www-data:www-data . ., RUN composer dump-autoload --classmap-authoritative --no-interaction, and any other commands that touch the project source code go at the end, so they are re-executed only when the project code itself changes, rather than the OS packages or Composer dependencies.
To speed up the project's launch, I used PHP's built-in web server instead of the usual PHP-FPM + Nginx/Caddy + Supervisor combination.
The Fly.io platform itself terminates HTTPS and manages certificates, so the application in the container only needs to handle plain HTTP traffic.
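Putting the layer ordering, the unprivileged user, and the built-in server together, the Dockerfile looks roughly like this. This is a sketch, not the exact file from the repository: the base image tag, the apk package list, and the port are assumptions.

# Dockerfile -- sketch of the layer ordering described above
FROM php:8.1-cli-alpine

WORKDIR /app

# OS-level packages rarely change, so this layer stays cached almost forever
# (the concrete package list is an assumption)
RUN apk add --no-cache git unzip

# Composer binary, copied from the official image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

# Dependencies change more often than the OS but less often than the code:
# copy only composer.json/composer.lock so this layer rebuilds only when they change
COPY composer.* ./
RUN composer install --no-autoloader --no-dev --no-interaction --no-scripts

# The project source changes most often, so it comes last
COPY --chown=www-data:www-data . .
RUN composer dump-autoload --classmap-authoritative --no-interaction

# Run as an unprivileged user; serve plain HTTP with PHP's built-in server
# (via artisan serve), HTTPS is terminated by the platform
USER www-data
EXPOSE 8080
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]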
CI/CD
It's simple here: a single GitHub Actions workflow that runs the same flyctl.
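A minimal sketch of such a workflow, using the official flyctl setup action; the file name and trigger are assumptions, and FLY_API_TOKEN has to be added to the repository secrets:

# .github/workflows/deploy.yml -- minimal sketch
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}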
At this point, the microservice API is live, hosted in production, and accessible to all users. MVP is complete :-)
The API repository is open, and the website is live at https://specialtycoffee.cy
I'll cover building the frontend in the second part and the Telegram chatbot in the third part.