Jack Miras

Laravel with PHP 8.2 in an Alpine Container

When deploying a Laravel application, the goal is to make sure that the deployment process is as fast and secure as possible. A big part of achieving this goal is choosing the right base Linux image to compose the container image where the application will be running and later deployed.

Alpine Linux has become a go-to base distro for containers in just about any language. Since Docker's first release, the popularity of the Alpine distro has grown and keeps growing because it is a tiny, container-focused, and security-focused distro, which keeps images small and builds and pulls fast.

PHP and Composer alone aren't enough to run the application, though; NGINX and Supervisor are also required, and that is where a little complexity comes in. But don't worry: the Dockerfile will be dissected piece by piece, and you will get to understand why things are the way they are.


Dockerfile

Down below is the entire Dockerfile used locally and in production to serve a Laravel application. Notice that it's not optimized for a minimal number of layers, and that is on purpose, since we will grab small pieces of the file and walk through what each part does.

FROM alpine:latest

WORKDIR /var/www/html/

# Essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor

# Installing bash
RUN apk add bash
RUN sed -i 's/bin\/ash/bin\/bash/g' /etc/passwd

# Installing PHP
RUN apk add --no-cache php82 \
    php82-common \
    php82-fpm \
    php82-pdo \
    php82-opcache \
    php82-zip \
    php82-phar \
    php82-iconv \
    php82-cli \
    php82-curl \
    php82-openssl \
    php82-mbstring \
    php82-tokenizer \
    php82-fileinfo \
    php82-json \
    php82-xml \
    php82-xmlwriter \
    php82-simplexml \
    php82-dom \
    php82-pdo_mysql \
    php82-pdo_sqlite \
    php82-pecl-redis

RUN ln -s /usr/bin/php82 /usr/bin/php

# Installing composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php

# Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini

# Configure PHP
RUN mkdir -p /run/php/
RUN touch /run/php/php8.2-fpm.pid

COPY .docker/php-fpm.conf /etc/php82/php-fpm.conf
COPY .docker/php.ini-production /etc/php82/php.ini

# Configure nginx
COPY .docker/nginx.conf /etc/nginx/
COPY .docker/nginx-laravel.conf /etc/nginx/modules/

RUN mkdir -p /run/nginx/
RUN touch /run/nginx/nginx.pid

RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

# Building process
COPY . .
RUN composer install --no-dev
RUN chown -R nobody:nobody /var/www/html/storage

EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]

Defining image bases

The first step towards the construction of a Dockerfile is to create the file itself and define a Linux distribution and its version. Once that is done, you can start composing your Dockerfile with the instructions needed to build your container image.

FROM alpine:latest

WORKDIR /var/www/html/

The FROM instruction sets the base image for subsequent instructions. Notice that alpine:latest gets defined, which sets the base Linux image. After the distro name, a : is used to specify a tag or version, so when the instruction FROM alpine:latest gets interpreted, it sets Alpine at its latest version as the base image.

The WORKDIR instruction, in turn, sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. Once WORKDIR /var/www/html/ is interpreted, every command execution in the Dockerfile takes place in /var/www/html/.
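Because the image tracks alpine:latest, the exact Alpine release can change between builds. A quick way to see which release you are actually building on is sketched below; these commands are illustrative and not part of the Dockerfile, and alpine:3.19 is only an example of a pinned tag.

# Check which Alpine release alpine:latest currently resolves to
docker pull alpine:latest
docker run --rm alpine:latest cat /etc/alpine-release

# For reproducible builds, pin the tag in the FROM line instead,
# e.g. FROM alpine:3.19

If the exact release matters to your deployment, pinning the tag is the simplest way to avoid surprises when a new Alpine version ships.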

Software installation

Now that the container image base has been defined, it's time to start looking into the software that needs to be installed to run the application. As mentioned, PHP, Composer, NGINX, and Supervisor have to be installed, but that's not all: these pieces of software have dependencies of their own, which also have to be installed. Here is the installation process broken down into understandable pieces:

Install essentials

RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor

The first RUN instruction will execute any commands in a new layer on top of the current image and commit the results. Hence, when RUN echo "UTC" > /etc/timezone is interpreted, the echo command writes the string UTC into the /etc/timezone file. As a result of the command's execution, UTC becomes the container's default timezone.

In the second RUN instruction, an apk command appears; apk is the Alpine package manager, comparable to apt on Debian and Ubuntu. When RUN apk add --no-cache zip unzip curl sqlite nginx supervisor is processed, those packages get installed into the base image, and the --no-cache flag keeps the package index from being stored in the layer.
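If you want to double-check what ended up in the image, you can inspect the installed packages. A small sketch, assuming the image was tagged laravel-alpine:latest as in the build command at the end of the article:

# List every package installed in the image
docker run --rm laravel-alpine:latest apk info

# Print the package name only if it is installed
docker run --rm laravel-alpine:latest apk info -e nginx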

Install bash

RUN apk add bash
RUN sed -i 's/bin\/ash/bin\/bash/g' /etc/passwd

The first RUN instruction installs bash. The second one sets it as the default shell by replacing the string /bin/ash with /bin/bash in the /etc/passwd file. This change is made because Alpine's default shell, ash, behaves differently from bash, and those differences can get in your way when you or your team need to execute a shell script in the container.
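A quick way to confirm the swap worked once the image is built, again assuming the laravel-alpine:latest tag from the build step:

# The login shell for root should now be /bin/bash
docker run --rm laravel-alpine:latest grep '^root:' /etc/passwd

# Drop into an interactive bash session inside the image
docker run --rm -it laravel-alpine:latest bash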

Install PHP

RUN apk add --no-cache php82 \
    php82-common \
    php82-fpm \
    php82-pdo \
    php82-opcache \
    php82-zip \
    php82-phar \
    php82-iconv \
    php82-cli \
    php82-curl \
    php82-openssl \
    php82-mbstring \
    php82-tokenizer \
    php82-fileinfo \
    php82-json \
    php82-xml \
    php82-xmlwriter \
    php82-simplexml \
    php82-dom \
    php82-pdo_mysql \
    php82-pdo_sqlite \
    php82-pecl-redis

RUN ln -s /usr/bin/php82 /usr/bin/php

The first RUN instruction says that PHP and all the listed extensions have to be installed. As mentioned before, this Dockerfile gets used to serve Laravel applications, so this particular extension list targets Laravel and may change depending on the framework or application you are trying to run.

The second RUN instruction creates a symbolic link named php that points to the php82 binary in the /usr/bin directory, so the plain php command works as expected.

Lastly, you can find out what each PHP extension does by searching the PHP extensions documentation and the PECL (PHP Extension Community Library) pages.
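To confirm the interpreter and the extensions made it into the image, a small sketch (assuming the laravel-alpine:latest tag):

# PHP version, resolved through the /usr/bin/php symlink
docker run --rm laravel-alpine:latest php -v

# List loaded extensions and check that redis is among them
docker run --rm laravel-alpine:latest php -m | grep -i redis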

Install Composer

RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php

In the first RUN instruction, the Composer installer script, composer-setup.php, gets downloaded from Composer's official page. Then, in the second instruction, the script gets used to install Composer into the /usr/local/bin directory under the name composer. Lastly, the script gets removed after the installation since it has no further use in the system.
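The installer can optionally be verified before it runs. A minimal sketch, based on the verification approach documented on getcomposer.org; the comparison logic below is illustrative rather than a drop-in replacement for the RUN instructions above.

# Download the installer and the expected SHA-384 hash
curl -sS https://getcomposer.org/installer -o composer-setup.php
EXPECTED=$(curl -sS https://composer.github.io/installer.sig)
ACTUAL=$(php -r "echo hash_file('sha384', 'composer-setup.php');")

# Abort if the installer was tampered with or corrupted
if [ "$EXPECTED" != "$ACTUAL" ]; then
    echo 'ERROR: Invalid composer installer checksum' >&2
    rm composer-setup.php
    exit 1
fi

php composer-setup.php --install-dir=/usr/local/bin --filename=composer
rm composer-setup.php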

Software configuration

Now that all the needed software is installed, it has to be configured and tied together so that serving a Laravel application works as expected.

Configure supervisor

RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini

In the RUN instruction, the Dockerfile specifies that the directory supervisor.d has to be created inside the /etc/ directory. This directory holds the initializer files that tell Supervisor which programs to run and how to run them when the container starts.

In the COPY instruction, the supervisord.ini file gets copied from a local .docker folder into the /etc/supervisor.d/ container folder. As mentioned above, this file contains the instructions that Supervisor will run on, and these instructions are:

[supervisord]
nodaemon=true

[program:nginx]
command=nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:php-fpm]
command=php-fpm82
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

Explaining supervisord.ini

  • nodaemon=true

Start Supervisor in the foreground instead of daemonizing.

  • command=nginx

The command that will run when Supervisor starts.

  • stdout_logfile=/dev/stdout

Redirects the program's output to the container's standard output, allowing us to see Supervisor's logs about the NGINX execution when running docker logs MY_CONTAINER or docker-compose up to start the container stack.

  • stdout_logfile_maxbytes=0

The maximum size stdout_logfile may reach before being rotated. Since we are logging to /dev/stdout rather than an actual file, rotation has to be disabled by setting maxbytes to 0.

  • stderr_logfile=/dev/stderr

Redirects the program's errors to the container's standard error, allowing us to see Supervisor's logs about the NGINX execution when running docker logs MY_CONTAINER or docker-compose up to start the container stack.

  • stderr_logfile_maxbytes=0

The maximum size stderr_logfile may reach before being rotated. Since we are logging to /dev/stderr rather than an actual file, rotation has to be disabled by setting maxbytes to 0.
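Once the container is up, a quick way to confirm that Supervisor started both programs and that their logs reach Docker; MY_CONTAINER is a placeholder for your container name or ID.

# Both nginx and php-fpm82 should appear in the process list
docker exec MY_CONTAINER ps | grep -E 'nginx|php-fpm'

# Supervisor, NGINX, and PHP-FPM output all end up in the container logs
docker logs -f MY_CONTAINER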

Configure PHP

RUN mkdir -p /run/php/
RUN touch /run/php/php8.2-fpm.pid

COPY .docker/php-fpm.conf /etc/php82/php-fpm.conf
COPY .docker/php.ini-production /etc/php82/php.ini

In the first RUN statement, the Dockerfile specifies that the directory php has to be created inside the /run/ directory. This directory will hold .pid files that contain the process ID specific to the software.

The second statement creates the file php8.2-fpm.pid inside the /run/php/ directory. Now the Alpine distro has a file to store the process ID that will be created when PHP-FPM starts.

The third statement copies the php-fpm.conf file from a local .docker folder into the /etc/php82/ container folder. This file contains all the configurations that PHP-FPM will run upon.

The fourth statement copies the php.ini-production file from a local .docker folder into the /etc/php82/ container folder as php.ini. This file contains all the configurations that PHP will run on; its content was copied from PHP's official repository on GitHub. Down below is the php-fpm.conf:

;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;

; All relative paths in this configuration file are relative to PHP's install
; prefix (/usr). This prefix can be dynamically changed by using the
; '-p' argument from the command line.

;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;

[global]
; Pid file
; Note: the default prefix is /var
; Default Value: none
pid = /run/php/php8.2-fpm.pid

; Error log file
; If it's set to "syslog", log is sent to syslogd instead of being written
; in a local file.
; Note: the default prefix is /var
; Default Value: log/php-fpm.log
error_log = /proc/self/fd/2

; syslog_facility is used to specify what type of program is logging the
; message. This lets syslogd specify that messages from different facilities
; will be handled differently.
; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)
; Default Value: daemon
;syslog.facility = daemon

; syslog_ident is prepended to every message. If you have multiple FPM
; instances running on the same server, you can change the default value
; which must suit common needs.
; Default Value: php-fpm
;syslog.ident = php-fpm

; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
;log_level = notice

; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
;emergency_restart_threshold = 0

; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated.  This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;emergency_restart_interval = 0

; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0

; The maximum number of processes FPM will fork. This has been design to control
; the global number of processes when using dynamic PM within a lot of pools.
; Use it with caution.
; Note: A value of 0 indicates no limit
; Default Value: 0
; process.max = 128

; Specify the nice(2) priority to apply to the master process (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
;       - The pool process will inherit the master process priority
;         unless it specified otherwise
; Default Value: no set
; process.priority = -19

; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
daemonize = no

; Set open file descriptor rlimit for the master process.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit for the master process.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Specify the event mechanism FPM will use. The following is available:
; - select     (any POSIX os)
; - poll       (any POSIX os)
; - epoll      (linux >= 2.5.44)
; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)
; - /dev/poll  (Solaris >= 7)
; - port       (Solaris >= 10)
; Default Value: not set (auto detection)
;events.mechanism = epoll

; When FPM is build with systemd integration, specify the interval,
; in second, between health report notification to systemd.
; Set to 0 to disable.
; Available Units: s(econds), m(inutes), h(ours)
; Default Unit: seconds
; Default value: 10
;systemd_interval = 10

;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;

; Multiple pools of child processes may be started with different listening
; ports and different management options.  The name of the pool will be
; used in logs and stats. There is no limitation on the number of pools which
; FPM can handle. Your system will tell you anyway :)

; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
; Relative path can also be used. They will be prefixed by:
;  - the global prefix if it's been set (-p argument)
;  - /usr otherwise
include=/etc/php82/php-fpm.d/*.conf

Notice that php-fpm.conf doesn't have any custom configuration or optimization; feel free to configure this file according to your needs.
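PHP-FPM can validate this file without serving any traffic, which is handy after edits. A small sketch, assuming the laravel-alpine:latest tag from the build step:

# Parse /etc/php82/php-fpm.conf and the pool files it includes, then exit
docker run --rm laravel-alpine:latest php-fpm82 -t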

Configure NGINX

COPY .docker/nginx.conf /etc/nginx/
COPY .docker/nginx-laravel.conf /etc/nginx/modules/

RUN mkdir -p /run/nginx/
RUN touch /run/nginx/nginx.pid

RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

In the first statement, nginx.conf gets copied from a local .docker folder into the /etc/nginx/ container folder. This file contains the main configuration that NGINX will run upon, and down below you can check the file content:

# /etc/nginx/nginx.conf

user nobody;

# NGINX will run in the foreground
daemon off;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

# Configures default error logger.
error_log /var/log/nginx/error.log warn;

# Uncomment to include files with config snippets into the root context.
# NOTE: This will be enabled by default in Alpine 3.15.
# include /etc/nginx/conf.d/*.conf;

events {
    # The maximum number of simultaneous connections that can be opened by
    # a worker process.
    worker_connections 1024;
}

http {
    # Includes mapping of file name extensions to MIME types of responses
    # and defines the default type.
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Includes files with directives to load dynamic modules.
    include /etc/nginx/modules/*.conf;

    # Name servers used to resolve names of upstream servers into addresses.
    # It's also needed when using tcpsocket and udpsocket in Lua modules.
    #resolver 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001;

    # Don't tell nginx version to the clients. Default is 'on'.
    server_tokens off;

    # Specifies the maximum accepted body size of a client request, as
    # indicated by the request header Content-Length. If the stated content
    # length is greater than this size, then the client receives the HTTP
    # error code 413. Set to 0 to disable. Default is '1m'.
    client_max_body_size 1m;

    # Sendfile copies data between one FD and other from within the kernel,
    # which is more efficient than read() + write(). Default is off.
    sendfile on;

    # Causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. Default is 'off'.
    tcp_nopush on;


    # Enables the specified protocols. Default is TLSv1 TLSv1.1 TLSv1.2.
    # TIP: If you're not obligated to support ancient clients, remove TLSv1.1.
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;

    # Path of the file with Diffie-Hellman parameters for EDH ciphers.
    # TIP: Generate with: `openssl dhparam -out /etc/ssl/nginx/dh2048.pem 2048`
    #ssl_dhparam /etc/ssl/nginx/dh2048.pem;

    # Specifies that our cipher suits should be preferred over client ciphers.
    # Default is 'off'.
    ssl_prefer_server_ciphers on;

    # Enables a shared SSL cache with size that can hold around 8000 sessions.
    # Default is 'none'.
    ssl_session_cache shared:SSL:2m;

    # Specifies a time during which a client may reuse the session parameters.
    # Default is '5m'.
    ssl_session_timeout 1h;

    # Disable TLS session tickets (they are insecure). Default is 'on'.
    ssl_session_tickets off;


    # Enable gzipping of responses.
    #gzip on;

    # Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
    gzip_vary on;


    # Helper variable for proxying websockets.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }


    # Specifies the main log format.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            '$status $body_bytes_sent "$http_referer" '
            '"$http_user_agent" "$http_x_forwarded_for"';

    # Sets the path, format, and configuration for a buffered log write.
    access_log /var/log/nginx/access.log main;


    # Includes virtual hosts configs.
    include /etc/nginx/http.d/*.conf;
}

# TIP: Uncomment if you use stream module.
#include /etc/nginx/stream.conf;

The second statement copies nginx-laravel.conf from a local .docker folder into the /etc/nginx/modules/ container folder. This file contains the server block that NGINX will use to serve Laravel correctly, and down below you can check the file content:

server {
    listen 80;
    server_name localhost;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

The third statement specifies that the directory nginx has to be created inside the /run/ directory. As mentioned in the PHP-FPM configuration section, the /run directory holds .pid files where the process ID of a given piece of software gets written.

The fourth statement creates the file nginx.pid inside the /run/nginx/ directory. Now the Alpine distro has a file to store the process ID that will be created when NGINX starts.

The fifth statement creates a symbolic link from /var/log/nginx/access.log to the container's standard output. This configuration, as mentioned in the Supervisor section, is what allows us to see NGINX logs from the container.

Lastly, the sixth statement creates a symbolic link from /var/log/nginx/error.log to the container's standard error, which, as mentioned in the Supervisor section, is what allows us to see NGINX errors from the container.
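NGINX can also check its configuration without serving anything. A quick sketch against the built image, assuming the laravel-alpine:latest tag:

# Validate nginx.conf plus every file it includes, then exit
docker run --rm laravel-alpine:latest nginx -t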

Build process

The build process is where the application gets copied into the container and its dependencies get installed, leaving the Laravel application ready to be served by NGINX, PHP-FPM, and Supervisor.

COPY . .
RUN composer install --no-dev

In the COPY statement, all Laravel files and folders from the directory containing the Dockerfile get copied into the working directory specified by the WORKDIR instruction.

In the RUN statement, the Laravel application's production dependencies get installed; the --no-dev flag skips development-only packages, leaving the application ready to be served by Supervisor, NGINX, and PHP-FPM. In the full Dockerfile, a chown statement also hands ownership of the storage directory to the nobody user, so the worker processes can write logs and cache files there.
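Depending on your project, you may want to tighten this step a little. A sketch of flags and cache commands commonly added to production builds, which you could append as RUN lines; adjust or drop them to match your needs, and note that config:cache only makes sense if the environment is baked into the image.

# Skip dev packages, build an optimized autoloader, never prompt
composer install --no-dev --optimize-autoloader --no-interaction --prefer-dist

# Optionally pre-compile Laravel's config and route caches
php artisan config:cache
php artisan route:cache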

Container execution

Now that everything is installed and properly configured, we need to know how this container image will start serving the application once the container starts and what TCP port to use.

EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]

The EXPOSE instruction documents that the container listens on the specified network port at runtime, while the CMD instruction provides the default command for the executing container: here it starts Supervisor, which in turn starts NGINX and PHP-FPM.


Now your Dockerfile is finally done, and you can build a container image from it by executing docker build -t laravel-alpine:latest . --no-cache in your terminal.
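To run the image you just built, a minimal sketch; publishing port 80 on the host and the container name laravel-app are only examples.

# Build the image from the directory that contains the Dockerfile
docker build -t laravel-alpine:latest . --no-cache

# Run it, publishing the container's port 80 on the host
docker run -d -p 80:80 --name laravel-app laravel-alpine:latest

# The application should now answer at http://localhost
curl -I http://localhost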

Happy coding!

Latest comments (41)

Alexandre Plennevaux

Amazing post, thank you very much for the writeup!

Suggestion: to set up Composer you can use the same base-image logic and simply do:

COPY --from=composer /usr/bin/composer /usr/bin/composer
Dominik Szalai • Edited

Hi! Thank you for the wonderful tutorial! It appears to be the closest one that meets my requirements. However, would it be possible to upload the source code to any repository? I'm encountering some issues with PHP 8.3 (as we need to stay up to date). Additionally, in the FPM section, you have the line:

pid = /run/php/php8.0-fpm.pid

This seems to refer to an older version, whereas you use PHP 8.2 elsewhere in the tutorial.

Leslie

If you are a macOS user, ServBay.dev is worth trying. You don't need to spend hours or a couple of days setting anything up; just download it and you can use it immediately. You can run multiple PHP versions simultaneously and switch between them effortlessly.
Honestly, this tool has greatly simplified my PHP development and is definitely worth trying!

Pranta Saha


I have followed the instructions, but I'm facing this NGINX issue. Can anyone help?

Jack Miras • Edited

@pranta-saha would you try again? I noticed that with the most recent Alpine release a few paths and binaries became php82 rather than just php8, especially in the php-fpm.conf and supervisord.ini files.

Pranta Saha

@jackmiras Thanks for your quick response. Really appreciate it.
I have edited the php-fpm.conf and supervisord.ini files accordingly but I'm getting the same error. I have installed Sail in an existing Laravel web app which uses NGINX by default; maybe something is conflicting with that.

Jack Miras

Hello @pranta-saha,

Maybe the way you are running the container image is influencing the result?

I've tried running locally using docker run -p 80:80 laravel-scaffold:latest and then in the browser accessed http://localhost:80 and the application was loaded successfully.

How are you running the container you've built?

Pranta Saha

I have fixed the problem. I had to comment out "include /etc/nginx/http.d/*.conf" in the nginx.conf file. Now it's working.

Phạm Tiến Hải

Love this article. But it seems like php-fpm8 runs without creating the php-fpm8.pid file; I see no PID or any content in the file. I tried removing the touch command for the php-fpm8.pid file, and I still did not see a php-fpm8.pid file in /run/php. A little difference between Alpine and Ubuntu, right?

Albert R. C. Guedes

Does this version have the NGINX rewrite module?

tokotigawarna

I have successfully built this Dockerfile and my local web app runs successfully, but when I push the same code to the development stage, the AWS pipeline shows an error when installing packages from the Dockerfile, like this:

ERROR: unable to select packages:
required by: world[php8-pdo_pgsql]
php8-pdo_sqlite (no such package):
required by: world[php8-pdo_sqlite]
php8-pecl-redis (no such package):
required by: world[php8-pecl-redis]
php8-phar (no such package):
required by: world[php8-phar]
php8-simplexml (no such package):
required by: world[php8-simplexml]
php8-tokenizer (no such package):
required by: world[php8-tokenizer]
php8-xml (no such package):
required by: world[php8-xml]
php8-xmlreader (no such package):
required by: world[php8-xmlreader]
php8-xmlwriter (no such package):
required by: world[php8-xmlwriter]
php8-zip (no such package):
required by: world[php8-zip]
The command '/bin/sh -c apk add --no-cache php8 php8-common php8-fpm php8-pdo php8-opcache php8-zip php8-phar php8-iconv php8-cli php8-curl php8-openssl php8-mbstring php8-tokenizer php8-fileinfo php8-json php8-xml php8-xmlwriter php8-simplexml php8-dom php8-pdo_mysql php8-pdo_sqlite php8-tokenizer php8-pecl-redis php8-gd php8-pdo_pgsql php8-xmlreader' returned a non-zero code: 25

please help

Jack Miras

@akbarsyidiqi try to check whether this isn't happening because locally you have an alpine:latest version that's different from the alpine:latest that the AWS pipeline is getting.

As you can check in here, a new version of Alpine was released last month, and it is not unusual for packages to have their names changed or adjusted after a version release.

The way I keep Dockerfiles is a double-edged sword: tagging the latest version of a base image like this will expose you to these errors sooner, and as a result you will always keep your Dockerfile updated.

If you would rather not keep living this experience, or your project has constraints where this approach would cause too many problems, I would recommend pinning your Alpine version to something static like alpine:3.14.

tokotigawarna • Edited

You're right. I changed my Dockerfile image to FROM alpine:3.16 because I am using PHP 8.0, and then it built successfully.
But during the deploy process in the AWS pipeline, it shows a different error, 404 Not Found: "Task failed container health checks".
The expected health check is "api/health" with a 200 response status code (the API route exists in my code),
but it shows 404 Not Found during the deploy process.

I don't understand: the build succeeds, but the health check fails on deploy.

Do you know anything about this?

I already had a Dockerfile FROM php:8.0-fpm (it works, the health check works, api/health), but after AWS scanned the image it showed many vulnerabilities, so I decided to change my Dockerfile to an Alpine image (FROM alpine:3.16) to decrease the vulnerabilities.

Jack Miras

@akbarsyidiqi, could you share more information about this health check? Once your AWS pipeline finishes building the Docker image, where is it trying to deploy this image? EC2, ECS, or EKS?

Was this api/health endpoint defined by you in your Laravel app? Have you made sure that you can get the expected result from this health check locally before deploying?

tokotigawarna • Edited

It's trying to deploy to ECS; the endpoint api/health is already defined in my Laravel code (AWS expected response status code 200, but received Not Found).

Log in AWS:

service v2-stg-myrepo (port 80) is unhealthy in target-group city-v2-staging-myrepo due to (reason Health checks failed with these codes: [404]).

[28/Dec/2022:08:38:19 +0000] "GET /api/health HTTP/1.1" 404 146 "-" "ELB-HealthChecker/2.0" "-" ecs/fe-myrepo/b6e0e08ea84e43e7b50454fd2c2db

The api/health response comes from a function like this:
public function health()
{
    return response(["status" => 200], 200)
        ->header('Content-Type', 'application/json');
}

In my local env everything works: api/health works, the web app runs smoothly.

TimHuey

Notes from a Beginner:
1) Make sure you are in the directory containing the "app" directory (not your project directory) and that your Dockerfile is located in that directory before executing the docker build command. Then run the container with the next step.

2) Probably second nature to all Docker pros, but I needed a little reminder...
docker run -d -p 80:80 laravel-alpine:latest

Other than that, this image built and ran with no issues for me as published. It took me 3 days to make it work, but that's on me for not being in the proper directory when the COPY . . command was executed.

I would like to change this to a docker-compose.yml setup; that will be my next step so I can build the SQL, Redis, Mailhog, etc. containers to interact with it.

Thank you so much for this. I learned a lot from your example as a beginner. It was just what I was looking for. There is a lot I don't understand regarding the Supervisor and PID aspects. I will delve into that on my own and try to understand it and why you used it.

Jack Miras

Hi @timhuey,

I'm super glad that this post has helped you! I'm reaching out to let you know that I also have a post about how to extend the Dockerfile of this post into a docker-compose.yml; you can read it here: dev.to/jackmiras/docker-compose-fo....

About the Supervisor and PID aspects, it would be my pleasure to help you understand their role in the Dockerfile. You can find my email on my profile; feel free to mail me with any doubts you have.

Sergio Soares

Thanks for sharing, it is useful for everyone who uses Laravel and any PHP framework.

In our case it is more useful for NGINX and PHP to be together like that, because we are planning to use Amazon ECS with Fargate for a smaller project.

And a sidecar container for each PHP task would make the project too expensive, making it unfeasible.

Bilal Haidar

Nice article! Thanks

Where would I run the queue? Shall I add it inside the supervisord file?

Jack Miras

@bhaidar I didn't come across this use case after I started using containers. But I've given it some thought, and to me it makes sense to keep the queue start in the supervisord file, mainly because it's a way to centralize everything that you want to start with the container.

Jon Link

Appreciate this post, but as written it doesn't work. There's the socket issue mentioned earlier... though it looks to me like the solution in the comments is incorrect: if you're using a socket you need to set php-fpm to listen on that socket.

Also for some reason tinker connects to my database, but the app does not. I'm assuming this is some sort of permissions issue (will report back when/if I figure it out)

Jon Link

I ended up rewriting a lot; best I can figure, creating the socket and the php-fpm PID file is the culprit. Go with the default for the socket. Also, change ownership of anything created to nobody and use that user to start things up.

Jack Miras

Hey @jonnylink ,

I've just updated the article because I noticed that it was missing some configs related to NGINX, and after updating I saw your comments; they seem related to this update of the article.

In case you have the time, I would appreciate your review of the article to double-check that everything works the way it is supposed to; if you run into any problems I would be happy to help.

Thomas M.

Thank you for sharing this great and detailed post, it really helped me.

I have just one question, as I am a bloody beginner here: you are combining PHP and NGINX into one container. Until now, I always tried to avoid this by following the one-function-per-container principle.
Especially if you want to scale the application, you will scale NGINX as well, which might not be necessary. Is there a special reason why you go this way and do not use a separate container for NGINX?

Jack Miras

Hi @myfriendlyusername,

I use both ways, but here is what I usually try to consider before doing one or the other. I like to analyze the context of the teams and projects. Decoupling NGINX can make your container smaller or make it use a little less hardware, but it's not necessarily the simplest, easiest, or safest way to go, especially if you have only a few projects.

In my opinion, decoupling NGINX from PHP makes sense when your team has a good understanding of containers and at least a minimum knowledge of server architecture, or when you have a significant number of microservices and you explicitly want to use NGINX as a reverse proxy to upstream requests to the containers.

But if your reverse proxy runs in a single container, you will be creating a single point of failure: if that container goes offline even for a few seconds, all of your containers will be offline, and this may cause problems, especially if you have a high volume of requests/transactions. With NGINX embedded in the application container, you don't have a proxy of your own in the middle of the request path, removing this single point of failure.

Also, the technology used to manage the containers will influence your decision as well; let's take AWS ECS and AWS EKS as examples:

  1. If you choose to run your containers on ECS, it doesn't matter that much whether NGINX is or isn't embedded in the container image, because ECS is a simpler cluster abstraction that accepts both approaches, and either way you could run into the problems I've described above.

  2. But if you are running your containers on EKS, it may be preferable not to embed NGINX, because Kubernetes has the ingress controller component tied to the cluster. The ingress controller is commonly an implementation of NGINX, so you could just configure the upstream and your proxy would be running. In this case, we don't have a single point of failure of our own making: if the ingress controller stops working, the entire cluster goes offline, so it's a problem with a piece of your infrastructure rather than with how you architected it.

Finally, I would like to reinforce that I don't discourage anyone from removing NGINX from the application container completely; this is just me sharing the way I do things. Not only that, but I don't know other people's context, so feel free to adapt anything you saw in the article to your context, and if there is anything I can help with, let me know.

James McCleese

Don't sound like much of a beginner to me! I agree 100% on separating your web server from your PHP app container, especially if you're using NGINX as a reverse proxy to php-fpm. It's super easy to set up using docker-compose and the base nginx container. You can still use Alpine as the base for your app container.

I also don't see the need for some of the additions. You don't need/want Composer in your app image: you can use the composer Docker image as part of the build process to install app dependencies. Of course, you can use a Docker Compose override to add it to your local setup for dev purposes as well. The same goes for node/npm, Supervisor, and bash.

Josh

Hi James ... sorry for a question years later ... but just starting my "docker containers" journey. I began with separate nginx and php-fpm containers. I pass php requests via the internal network - whatever:9000

This works fine, all good. However I mount the same (host) web dir into both containers (using volumes - and I would prefer not to have the same codebase copying to 2 containers). I'm building an API using Laravel/Breeze ... there are no static files other than project docs that I can handle in the nginx config.

That said, this is not 'reverse proxying' ... which I cannot understand, since php-fpm is of course not handling HTTP requests. So, if I start another webserver, either as a separate container or integrated in one with php-fpm, have I not just added complexity and pretty much recreated the author's solution?

What am I missing in this "super easy to set up" reverse proxy in container land?

Any advice or directions-to-docs etc would be appreciated.

Atreya

Thanks for this! Any idea how to resolve this error? I may be missing some steps.

2021/03/17 09:49:01 [crit] 12#12: *6 connect() to unix:/run/php/php7.4-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 172.29.0.1, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:"

Leo Vidal

Hi! I am getting the same error. Did you find any solution to this?

Atreya

Hello. Sorry for the late reply. Do you still need help with this?

Miłosz Dziurzyński

@atreya Hi, I have the same problem. Could you share the solution if you managed to solve this problem?

Atreya • Edited

This is happening because the NGINX process running as the nginx user cannot access the Laravel files, since they were copied into the container as the root user. One way to solve this is to add the following to the www.conf file used by php-fpm.

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group will be used.
user = nginx

The above section will already be present if I am not mistaken. You just have to set user = nginx

Atreya • Edited

And then, when you are copying the Laravel files in the Dockerfile, do it like this:

# Copy laravel files
COPY --chown=nginx:nginx ./src .

Basically, you are changing the owner to nginx when copying the files so that the NGINX process can access the Laravel files via the php-fpm process, which is also running as the nginx user because of the setting above in the www.conf file.

Let me know if this solved your problem

Mikolaj Marciniak

@atreya can you show how to edit the conf, or edit your original post? I'm getting the same error.

Atreya • Edited

Sorry for the late reply. This is the complete conf file.

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[www]

; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or NONE) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = nginx
; group = nginx

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000

; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions. The owner
; and group can be specified either by name or by their numeric IDs.
; Default Values: user and group are set as the running user
;                 mode is set to 0660
listen = /var/run/php/php7.4-fpm.sock
listen.owner = nginx
listen.group = nginx
; listen.mode = 0660
; When POSIX Access Control Lists are supported you can set them using
; these options, value is a comma separated list of user/group names.
; When set, listen.owner and listen.group are ignored
;listen.acl_users =
;listen.acl_groups =

; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
;listen.allowed_clients = 127.0.0.1

; Specify the nice(2) priority to apply to the pool processes (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
;       - The pool processes will inherit the master process priority
;         unless it specified otherwise
; Default Value: no set
; process.priority = -19

; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user
; or group is differrent than the master process user. It allows to create process
; core dump and ptrace the process for the pool user.
; Default Value: no
; process.dumpable = yes

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives. With this process management, there will be
;             always at least 1 children.
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
;  ondemand - no children are created at startup. Children will be forked when
;             new requests will connect. The following parameter are used:
;             pm.max_children           - the maximum number of children that
;                                         can be alive at the same time.
;             pm.process_idle_timeout   - The number of seconds after which
;                                         an idle process will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 5

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: (min_spare_servers + max_spare_servers) / 2
pm.start_servers = 2

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 1

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 3

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. It shows the following informations:
;   pool                 - the name of the pool;
;   process manager      - static, dynamic or ondemand;
;   start time           - the date and time FPM has started;
;   start since          - number of seconds since FPM has started;
;   accepted conn        - the number of request accepted by the pool;
;   listen queue         - the number of request in the queue of pending
;                          connections (see backlog in listen(2));
;   max listen queue     - the maximum number of requests in the queue
;                          of pending connections since FPM has started;
;   listen queue len     - the size of the socket queue of pending connections;
;   idle processes       - the number of idle processes;
;   active processes     - the number of active processes;
;   total processes      - the number of idle + active processes;
;   max active processes - the maximum number of active processes since FPM
;                          has started;
;   max children reached - number of times, the process limit has been reached,
;                          when pm tries to start more children (works only for
;                          pm 'dynamic' and 'ondemand');
; Value are updated in real time.
; Example output:
;   pool:                 www
;   process manager:      static
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          62636
;   accepted conn:        190460
;   listen queue:         0
;   max listen queue:     1
;   listen queue len:     42
;   idle processes:       4
;   active processes:     11
;   total processes:      15
;   max active processes: 12
;   max children reached: 0
;
; By default the status page output is formatted as text/plain. Passing either
; 'html', 'xml' or 'json' in the query string will return the corresponding
; output syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
;   http://www.foo.bar/status?xml
;
; By default the status page only outputs short status. Passing 'full' in the
; query string will also return status for each pool process.
; Example:
;   http://www.foo.bar/status?full
;   http://www.foo.bar/status?json&full
;   http://www.foo.bar/status?html&full
;   http://www.foo.bar/status?xml&full
; The Full status returns for each process:
;   pid                  - the PID of the process;
;   state                - the state of the process (Idle, Running, ...);
;   start time           - the date and time the process has started;
;   start since          - the number of seconds since the process has started;
;   requests             - the number of requests the process has served;
;   request duration     - the duration in µs of the requests;
;   request method       - the request method (GET, POST, ...);
;   request URI          - the request URI with the query string;
;   content length       - the content length of the request (only with POST);
;   user                 - the user (PHP_AUTH_USER) (or '-' if not set);
;   script               - the main script called (or '-' if not set);
;   last request cpu     - the %cpu the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because CPU calculation is done when the request
;                          processing has terminated;
;   last request memory  - the max amount of memory the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because memory calculation is done when the request
;                          processing has terminated;
; If the process is in Idle state, then informations are related to the
; last request the process has served. Otherwise informations are related to
; the current request being served.
; Example output:
;   ************************
;   pid:                  31330
;   state:                Running
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          63087
;   requests:             12808
;   request duration:     1250261
;   request method:       GET
;   request URI:          /test_mem.php?N=10000
;   content length:       0
;   user:                 -
;   script:               /home/fat/web/docs/php/test_mem.php
;   last request cpu:     0.00
;   last request memory:  0
;
; Note: There is a real-time FPM status monitoring sample web page available
;       It's available in: /usr/local/share/php/fpm/status.html
;
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
;pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
;ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong

; The access log file
; Default: not set
;access.log = log/$pool.access.log

; The access log format.
; The following syntax is allowed
;  %%: the '%' character
;  %C: %CPU used by the request
;      it can accept the following format:
;      - %{user}C for user CPU only
;      - %{system}C for system CPU only
;      - %{total}C  for user + system CPU (default)
;  %d: time taken to serve the request
;      it can accept the following format:
;      - %{seconds}d (default)
;      - %{miliseconds}d
;      - %{mili}d
;      - %{microseconds}d
;      - %{micro}d
;  %e: an environment variable (same as $_ENV or $_SERVER)
;      it must be associated with embraces to specify the name of the env
;      variable. Some exemples:
;      - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e
;      - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e
;  %f: script filename
;  %l: content-length of the request (for POST request only)
;  %m: request method
;  %M: peak of memory allocated by PHP
;      it can accept the following format:
;      - %{bytes}M (default)
;      - %{kilobytes}M
;      - %{kilo}M
;      - %{megabytes}M
;      - %{mega}M
;  %n: pool name
;  %o: output header
;      it must be associated with embraces to specify the name of the header:
;      - %{Content-Type}o
;      - %{X-Powered-By}o
;      - %{Transfert-Encoding}o
;      - ....
;  %p: PID of the child that serviced the request
;  %P: PID of the parent of the child that serviced the request
;  %q: the query string
;  %Q: the '?' character if query string exists
;  %r: the request URI (without the query string, see %q and %Q)
;  %R: remote IP address
;  %s: status (response code)
;  %t: server time the request was received
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsuled in a %{<strftime_format>}t tag
;      e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %T: time the log has been written (the request has finished)
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsuled in a %{<strftime_format>}t tag
;      e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %u: remote user
;
; Default: "%R - %u %t \"%m %r\" %s"
;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
;slowlog = log/$pool.log.slow

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0

; Depth of slow log stack trace.
; Default Value: 20
;request_slowlog_trace_depth = 20

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; The timeout set by 'request_terminate_timeout' ini option is not engaged after
; application calls 'fastcgi_finish_request' or when application has finished and
; shutdown functions are being called (registered via register_shutdown_function).
; This option will enable timeout limit to be applied unconditionally
; even in such cases.
; Default Value: no
;request_terminate_timeout_track_finished = no

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
; of its subdirectories. If the pool prefix is not set, the global prefix
; will be used instead.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: in a highly loaded environment, this can cause some delay in the page
; processing time (several ms).
; Default Value: no
catch_workers_output = yes

; Decorate worker output with prefix and suffix containing information about
; the child that writes to the log and if stdout or stderr is used as well as
; log level and time. This option is used only if catch_workers_output is yes.
; Setting to "no" will output data as written to stdout or stderr.
; Default value: yes
;decorate_workers_output = no

; Clear environment in FPM workers
; Prevents arbitrary environment variables from reaching FPM worker processes
; by clearing the environment in workers before env vars specified in this
; pool configuration are added.
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
clear_env = no

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other extensions to
; execute PHP code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5 .php7

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from a PHP call to 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                     a PHP call to 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.

; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.

; Note: path INI options can be relative and will be expanded with the prefix
; (pool, global or /usr/local)

; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
;php_flag[display_errors] = off
;php_admin_value[error_log] = /var/log/fpm-php.www.log
;php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 32M
Enter fullscreen mode Exit fullscreen mode
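A quick way to sanity-check this file is to let php-fpm parse it. Assuming the Alpine php82-fpm package (whose binary is named php-fpm82) and a container built from this image, something along these lines should report whether the configuration is valid; the container name is only a placeholder:

# Parse-test the FPM configuration inside a running container (name is illustrative)
docker exec -it laravel-app php-fpm82 -t -y /etc/php82/php-fpm.conf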
Gustavo Lima

If you are getting the error "ModuleNotFoundError: No module named 'pkg_resources'", you need to install python3:

RUN apk add --no-cache zip unzip curl sqlite nginx python3-dev python3 supervisor \
&& curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py

Atreya

Thanks for that!

Fabio Politi

Very useful and detailed, thanks!
One question: if both nginx and php-fpm are in the same container, wouldn't it be faster to use a socket instead of TCP?

Have you ever tried Swoole, in order to drop nginx entirely?

Thanks and keep up

Jack Miras • Edited

Hey Fabio,

Even though sockets may be faster, it seems simpler to use TCP over a socket in this scenario because the socket file is not created automatically.

About Swoole, I've never tried it, and to be completely honest I had never heard about it until now. But I took a quick look at the documentation, and under the HTTP Server section they mention the use of NGINX.

I guess you could go without NGINX; even with Laravel you can avoid NGINX by just running php artisan serve as the CMD of the container, but you will lose the ability to do some of the fine-tuning of request handling that NGINX provides.

I don't discourage anyone from trying to remove NGINX completely; this is just me sharing the way I do things in production. Not only that, but I don't know other people's contexts, so I'm not going to say much more than that there is fine-tuning you can do in NGINX that may improve your app's performance.
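That said, if someone wants to experiment with a socket anyway, the wiring would look roughly like the sketch below. The socket path, ownership, and mode are illustrative, not what this post's config files use, and you would also need to make sure the directory exists and that nginx can read and write the socket:

; php-fpm pool settings (illustrative)
listen = /run/php/php-fpm.sock
listen.owner = nobody
listen.group = nobody
listen.mode = 0660

# nginx location block (illustrative)
fastcgi_pass unix:/run/php/php-fpm.sock;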

Fabio Politi

Thanks for taking the time to answer me :)

Yes, I pointed both topics out because I guess that using a "micro" distro such as Alpine is almost mandatory if you have to deploy containers in a serverless/managed/whatever context, where the size of the artifacts (builds, images, registries, etc.) is just as important as "internal" optimizations.

IMHO, at least in my experience, the setup and tuning of these containers is quite different between local development, production with all the features that Laravel brings so well, and production for services or "microservices", especially if you have to deploy them, for example, on Google Cloud Run or similar;

Swoole itself contains a full HTTP(S)/UDP/socket server (cf. swoole.co.uk/docs/modules/swoole-h...) with async support (and many other features);
as you can see (and I say this as a true PHP/Nginx/Laravel lover), configuring a proper "env" for PHP and all the dependencies required by Laravel is not so "simple and clean" compared to other solutions such as Node, Python, and Golang (especially for services that do not require a "full" HTTP server);

I think Nginx is just another "dependency" to install, maintain, and configure "properly", but I guess it is mandatory if you have to serve static files or other things that call for a full, powerful HTTP server;

Swoole has nothing to do with "php artisan serve" (which is very slow and should never be used in production), so the "best fit" is for "services", and that is also where "Alpine" and "micro" distros in general fit best;
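just to give an idea of what I mean, a minimal Swoole HTTP server looks roughly like this (the port and the handler are only placeholders, and the swoole extension has to be available in the image):

<?php

// Minimal Swoole HTTP server (illustrative only)
$server = new Swoole\Http\Server('0.0.0.0', 9501);

$server->on('request', function ($request, $response) {
    $response->header('Content-Type', 'text/plain');
    $response->end('Hello from Swoole');
});

$server->start();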

quoting the man page:

"Compare with PHP-FPM, the default Golang HTTP server, the default Node.js HTTP server, Swoole HTTP server performs much better. It has the similar performance compare with the Nginx static files server."

that - at least for me - is very exciting, and with the upcoming release of PHP 8 and its JIT compiler I think it is actually possible to write great applications and/or services with Docker/PHP/Laravel/Lumen, even if "PHP haters" are not so convinced :D

Thanks
