You can now watch this article as a video (the original video link is outdated).
Edit 27.11.2021: The Phoenix 1.6.0 release removed Webpack, npm, etc. and moved to...
Thanks for this - first post I was able to get through start to finish without having to change or debug something. I omitted all the node & asset stuff, because I'm making an API for use with an Elm front end, but it was easy and it worked. Looking forward to more about deploying.
(Still not so sure how Docker is supposed to make my life that much better, though.)
Glad this helped you out!
The main point of running your application locally in Docker is the container environment it offers: you run the application in the same container environment locally as you do, say, on a cloud service provider's infrastructure.
It also makes the database setup easy: you don't have to worry about starting up a local database on the host system, configuring it, updating it, etc. A third point: let's say you're integrating your application with some other app and they have a development Docker image available; you can just spin up that image locally alongside your application. That makes debugging even easier when you run into problems, which is especially nice if there is some sort of microservice development going on.
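To make the database point concrete, here is a minimal sketch of what such a local setup could look like in a docker-compose.yml (service names, image tags and paths are illustrative, not taken from the article):

```yaml
# docker-compose.yml -- hypothetical minimal local setup
version: "3.8"
services:
  web:
    build: .
    ports:
      - "4000:4000"
    environment:
      # Wildcard database name; the app substitutes dev/test for "?"
      DATABASE_URL: postgres://postgres:postgres@db:5432/myapp_?
    depends_on:
      - db
    volumes:
      - .:/app              # mount source for hot code reloading
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data between restarts
volumes:
  pgdata:
```

With this in place, `docker-compose up` starts both the app and a ready-configured Postgres, and no database needs to be installed on the host.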
I have a question about your workflow (or the canonical docker-based workflow) - are you supposed to run tests in a different docker container? Or just locally w/ no container? Or is there a way to change the env to TEST for this container? I like to do red-green-refactor TDD, so quick & easy tests are a big thing for me. I got my tests running locally, but I assume that's not optimal because my local postgres and erlang versions (and db credentials) are different than what is in the container.
Good question! I'm running my tests inside the local development container with the command `docker-compose run -e MIX_ENV=test web mix test`. That command overrides the `MIX_ENV` environment variable with the value "test", so when `mix test` is executed the Mix environment is set to test, not dev.
About the database URL: the smartest way to use an environment-defined database URL is to put a wildcard character in the database name, for example `postgres://postgres:postgres@db:5432/myapp_?`. This way the config files can read the URL from the container environment and replace the `?` with the actual environment name (dev, test, even prod). By doing this you will always have separate local databases for development and testing, and the development data does not affect the test cases.
Is that wildcard substitution something I do manually w/ String.replace or interpolation, or is there something built-in to the config function that I am unaware of?
So far the only thing I got to work was:
# .env
DATABASE_URL=postgres://postgres:postgres@db:5432/my_app_
# dev.exs
database_url = "#{System.get_env("DATABASE_URL")}#{Mix.env()}"
That `Mix.env()` is one way to achieve it. Personally I would still use `String.replace/3`, since Elixir 1.9 introduced a new way to configure your application without a Mix dependency in your config files. I would do it this way:
test.exs
.env
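The contents of those two files were lost here, but a minimal sketch of the `String.replace/3` approach described above (app and repo names are placeholders) could look like:

```elixir
# .env (read into the container environment by docker-compose)
# DATABASE_URL=postgres://postgres:postgres@db:5432/myapp_?

# config/test.exs
import Config

# Take the wildcard URL from the environment and substitute the
# current environment name for the "?" placeholder. No Mix.env()
# needed, so this also works in release-style runtime config.
database_url =
  System.get_env("DATABASE_URL", "postgres://postgres:postgres@db:5432/myapp_?")
  |> String.replace("?", "test")

config :myapp, Myapp.Repo, url: database_url
```

The same three lines in `dev.exs` with `"dev"` instead of `"test"` give you the separate development database.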
You should either use two containers, one for the frontend and one for the backend, or make use of multi-stage builds.
The problem is that nobody wants to deploy the frontend code with its whole node_modules folder. You have to build the frontend assets for the prod environment and make sure the containers are as tiny as possible.
Have a look at multi-stage builds in the Docker docs. You can also use one Dockerfile and reference the stages in Docker Compose, e.g. `target: development` locally, while for prod you use the production stage.
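For illustration, a multi-stage Dockerfile along those lines might look like this (stage names, version tags and paths are my assumptions, not from the article):

```dockerfile
# Dockerfile -- illustrative multi-stage build

# --- build stage: full toolchain, deps, asset compilation ---
FROM elixir:1.12-alpine AS build
RUN apk add --no-cache build-base npm git
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
ENV MIX_ENV=prod
COPY . .
RUN mix deps.get --only prod && mix compile
RUN npm ci --prefix assets && npm run deploy --prefix assets && mix phx.digest
RUN mix release

# --- development stage: select with `target: development` in compose;
#     source code comes in via a bind mount, not COPY ---
FROM elixir:1.12-alpine AS development
RUN apk add --no-cache build-base npm git inotify-tools
WORKDIR /app
CMD ["mix", "phx.server"]

# --- production stage: only the compiled release, no node_modules ---
FROM alpine:3.14 AS production
RUN apk add --no-cache libstdc++ ncurses-libs openssl
WORKDIR /app
COPY --from=build /app/_build/prod/rel/myapp ./
CMD ["bin/myapp", "start"]
```

The production image never sees node_modules or the Elixir toolchain, which is what keeps it small.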
What you said is valid, but for deployments to cloud infrastructure (dev, staging, production). In the upcoming article I'll be doing a multi-stage build with Alpine images to keep the end result pretty tiny, including only the tooling required by the BEAM and the application binary itself.
I just want to point out that this article is meant to help you set up a local development environment, not to deploy a small-footprint image to the cloud.
First of all, it follows from Docker best practices: an image should not contain both frontend and backend dependencies. It does not matter whether it is for a local development environment or for non-local environments.
Also, Docker is built to solve one big problem, the one called "it works on my machine". By defining different containers for local and non-local environments you misuse the power of Docker.
The solution is to use two containers, one for the frontend dependencies and one for the backend, and to make use of multi-stage builds.
It doesn't matter if it's local or non-local.
If you want, I could make a pull request on your GitHub repo to show you how it works.
What do you think?
Greetings
Well, if I use an Alpine-based image both in local development and in cloud deployments, how does that not solve the "it works on my machine" issue? I'm just curious.
I have one question: how am I going to do rapid local development if I build the images locally with a multi-stage Dockerfile? What happens to hot code reloading? Also, Elixir with the Phoenix framework is server-side rendered, so there is no separate frontend and backend as there is in, e.g., Node and React. Nevertheless, you can separate the static assets and other browser-rendered stuff from the backend into their own containers in cloud deployments, but in the local environment I don't see the real benefit of it.
I opened the repo, so you should be able to make a merge/pull request to it. You can find the link below; don't mind the naming of the repo. I'm waiting for the MR!
gitlab.com/hlappa/url-shortener/
I ended up finding this here: evolvingdev.io/phoenix-local-devel...
The trick is adding this to your webpack.config.js.
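The snippet itself did not survive here; the usual fix for webpack not noticing file changes across a Docker bind mount is to enable polling in `watchOptions` (this is the common approach, not necessarily the exact snippet from that post):

```javascript
// webpack.config.js (fragment) -- merge into the exported config.
// File-change (inotify) events often don't propagate across a Docker
// bind mount, so fall back to polling the filesystem instead.
module.exports = (env, options) => ({
  // ...the rest of the generated Phoenix webpack config...
  watchOptions: {
    poll: 1000,             // check for changes every second
    ignored: /node_modules/ // skip dependency churn
  }
});
```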
Great article, but in my case I need a specific setup, Ubuntu Bionic with Erlang/OTP 22 and Elixir 1.10.2, to be on par with the production server. From which Docker image should I start building mine? I've found hexpm/elixir:1.10.2-erlang-22.2-ubuntu-bionic-20200219, but it seems to lack many of the dev utils which are good to have in a development container.
Is it a plain Elixir application, or is Phoenix also involved?
If it's a plain Elixir application, I would just go with a basic Ubuntu image and build the development container from it; in your case that would be the Bionic release. If Phoenix is involved, I would use Bitwalker's image, which is used in this article, and for production deployments you can use a different image and package the application into an Ubuntu-based one. This breaks, a little bit, the principle of having the same runtime in every environment (local, development, staging, production).
You can also run a Phoenix application in an Ubuntu-based image, but then you need to install all the Phoenix-related dependencies yourself.
Even in the second case, if you decide to use Bitwalker's image locally and Ubuntu for deployments, you would still have the same runtime in development, staging and production, so bugs related to environment issues could be spotted early when testing in development or staging.
As for the dev utils, if I were you I would just install the needed tools into the Docker image. It will take a little bit longer to build the local development image and run it in a container.
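As a rough sketch (base tag and package list are just examples), installing dev utils into an Ubuntu-based development image could look like:

```dockerfile
# Dockerfile -- hypothetical Ubuntu Bionic dev image
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --no-install-recommends \
      curl git vim inotify-tools build-essential ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# Install Erlang/OTP 22 and Elixir 1.10.2 here to match production,
# e.g. via the Erlang Solutions packages or asdf; the exact pinning
# mechanism is up to you.
WORKDIR /app
```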
Hey Aleksi, thanks for the reply! It's a Phoenix project which runs on a VPS (Ubuntu Bionic, Erlang/OTP 22, Elixir 1.10.2), and I want a development environment as close to production as I can get, at least in OS, Erlang version and Elixir version.
It seems that I'll need to build my Docker container from an Ubuntu image...
In that case, yes. It is a bit heavy compared to Alpine-based images, but totally doable!
Is this working? I have been trying for days and cannot access localhost by any means. The weird things are:
Change the `127, 0, 0, 1` to `0, 0, 0, 0` in the `config/dev.exs` file.
You sir are a real hero. Thanks.
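For reference, that setting lives in the endpoint configuration; a sketch with placeholder app and module names:

```elixir
# config/dev.exs (app/module names are placeholders)
config :myapp, MyappWeb.Endpoint,
  # Bind to all interfaces so the port published by Docker is reachable
  # from the host; {127, 0, 0, 1} only accepts traffic from inside the
  # container itself.
  http: [ip: {0, 0, 0, 0}, port: 4000]
```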
Seems that you already solved the problem. I updated the article accordingly. :)