In this article, we'll walk through implementing a CI pipeline for a Laravel application that automates code quality checks, security scanning, and parallel testing. This pipeline will help you catch issues early and maintain code standards.
What You’ll Need
Before implementing the CI pipeline, ensure you have:
- Your Laravel app code hosted on GitHub
- Test suite set up with PHPUnit
Pipeline Overview
Our CI pipeline consists of four main components:
- Code Quality Checks - Ensuring consistent code style and identifying potential issues
- Security Scanning - Detecting vulnerabilities in dependencies
- Parallel Testing - Running tests efficiently across multiple jobs
- Notifications - Keeping team members informed of pipeline status
Let's dive into each component and see how to implement them effectively.
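At a high level, the workflow file ties these four components together as separate jobs. Here's a sketch of that structure (the job names and triggers are illustrative assumptions, and the steps are omitted for brevity; the real pipeline is linked at the end of the article):

```yaml
# .github/workflows/ci.yml (skeleton; steps omitted for brevity)
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  code-quality: # Laravel Pint + PHPStan
    runs-on: ubuntu-latest
  security: # composer audit
    runs-on: ubuntu-latest
  tests: # parallel PHPUnit via ParaTest
    runs-on: ubuntu-latest
  notifications: # Slack
    runs-on: ubuntu-latest
    needs: [code-quality, security, tests]
```

The `needs` clause on the notification job makes it wait for the other three, so it can report the overall outcome.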
Step 1: Code Quality Checks
Maintaining consistent code style and identifying potential issues early is crucial for code maintainability. Our pipeline implements two key quality checks:
Laravel Pint for Code Styling
Laravel Pint is an opinionated PHP code style fixer for minimalists. It automatically formats your code according to Laravel's coding standards.
PHPStan for Static Analysis
PHPStan focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code.
Implementation in workflow:
- name: Run Laravel Pint (Code Style)
  run: vendor/bin/pint --test
- name: Run PHPStan (Static Analysis)
  run: vendor/bin/phpstan analyse --memory-limit=1G
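PHPStan reads its configuration from a `phpstan.neon` file in the project root. A minimal starting point for a Laravel app might look like this (the paths and level shown are assumptions to adapt to your project; Laravel projects commonly also install the larastan extension so PHPStan understands framework magic):

```neon
# phpstan.neon (minimal sketch)
parameters:
    paths:
        - app
    level: 5
```

Levels run from 0 (loosest) to the strictest; starting low and ratcheting up over time keeps the initial error list manageable.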
Step 2: Security Scanning
Our pipeline includes security scanning using Composer's built-in audit tool, which checks your dependencies against a database of known security advisories. It can be configured to fail the pipeline based on severity levels.
Implementation in workflow:
- name: Run Composer Audit
  run: composer audit --ignore-severity=medium --locked
This command audits the locked dependency versions and ignores medium-severity advisories, but will fail the pipeline for high or critical vulnerabilities. Note that the --ignore-severity option requires Composer 2.7 or later, and that low-severity advisories still fail the audit unless you also pass --ignore-severity=low.
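If a specific advisory cannot be fixed immediately (for example, no patched release exists yet), Composer also lets you ignore it by ID in composer.json (the CVE identifier below is a placeholder, not a real advisory):

```json
{
    "config": {
        "audit": {
            "ignore": ["CVE-2024-12345"]
        }
    }
}
```

Ignoring by ID is preferable to ignoring a whole severity level, since new advisories of that severity will still fail the build.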
Step 3: Parallel Testing Implementation
Testing is a critical part of any application, but as your test suite grows, it can become time-consuming. Our pipeline implements parallel testing to significantly reduce execution time.
Test Splitting Strategy
The pipeline splits tests into 10 parallel jobs, each running a portion of the test suite:
strategy:
  matrix:
    split: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # 10 parallel jobs
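By default, GitHub Actions cancels the remaining matrix jobs as soon as one fails. If you would rather let every chunk finish so you can see all failures in a single run, you can disable fail-fast (an optional preference, not something the pipeline requires):

```yaml
strategy:
  fail-fast: false # let all 10 chunks run to completion even if one fails
  matrix:
    split: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```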
Test Distribution
Tests are distributed evenly across jobs using the split command:
# Find all test files and split them into 10 chunks
mkdir -p .test-splits
find tests/Unit -name "*Test.php" | sort | split -n l/10 -d - .test-splits/test_
# GITHUB_ENV values must be single-line, so join this job's chunk into a space-separated list
echo "TEST_FILES=$(tr '\n' ' ' < .test-splits/test_$(printf '%02d' $(( ${{ matrix.split }} - 1 ))))" >> $GITHUB_ENV
ParaTest for Parallel Execution
ParaTest is used to run PHPUnit tests in parallel within each job. Since PHPUnit's --filter option matches test class or method names rather than file paths, we convert the chunk's file names into a regex of class names before running ParaTest:
- name: Run ParaTest (Job ${{ matrix.split }})
  run: |
    echo "Running tests for chunk ${{ matrix.split }}"
    # Build a regex alternation of the test class names in this chunk
    FILTER=$(for f in $TEST_FILES; do basename "$f" .php; done | paste -sd'|' -)
    vendor/bin/paratest \
      --runner=WrapperRunner \
      --processes=4 \
      --colors \
      --filter="($FILTER)"
This approach can dramatically reduce test execution time compared to running tests sequentially; for a well-balanced suite spread over 10 jobs of 4 processes each, the speedup can approach an order of magnitude.
Step 4: Send Notifications
Keeping your team informed about pipeline status helps maintain awareness of the application's health. The pipeline includes optional Slack notifications.
Implementation in workflow:
notifications:
  runs-on: ubuntu-latest
  needs: [tests]
  if: always() # notify on failure as well as success
  steps:
    - name: Notify Slack
      uses: 8398a7/action-slack@v3
      with:
        status: ${{ job.status }}
        fields: repo,commit,author
      env:
        SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Best Practices and Optimization Tips
Separated jobs: Separating jobs improves both performance and maintainability of the pipeline:
- Running jobs in parallel reduces overall execution time, which becomes increasingly valuable as the codebase grows.
- Failures are easier to pinpoint, since code quality, security vulnerabilities, and functional issues each surface in their own job.
- Resource usage is optimized: each job can be configured for its specific demands, and a failed job can be retried without re-running the entire pipeline.
- Each job can operate in a tailored environment with only the necessary dependencies, minimizing overhead and reducing potential conflicts.
Cache Dependencies: Use GitHub Actions caching to speed up dependency installation:
- name: Cache dependencies
  uses: actions/cache@v4
  with:
    path: |
      ~/.composer/cache
      vendor
    key: dependencies-${{ hashFiles('**/composer.lock') }}
Matrix Testing for PHP Versions: Test your application against multiple PHP versions:
strategy:
  matrix:
    php: [8.1, 8.2, 8.3]
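The matrix value is then consumed by the PHP setup step. With the widely used shivammathur/setup-php action this looks like the following (the action and its php-version input are real; the surrounding step layout is a sketch):

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Setup PHP ${{ matrix.php }}
    uses: shivammathur/setup-php@v2
    with:
      php-version: ${{ matrix.php }}
```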
Here's a full example of the CI pipeline:
https://github.com/ali-shahin/modular-doctor-booking/blob/main/.github/workflows/ci.yml
You now have a professional CI pipeline designed for both performance and reliability. It reduces wall-clock time by running jobs in parallel, leveraging a matrix strategy, and caching dependencies. The workflow is structured with clear separation between jobs and steps, making troubleshooting more efficient. Security scanning is integrated into the process in line with DevSecOps best practices, and real-time Slack notifications keep your team informed about the pipeline's status.
In the upcoming article, we’ll take the workflow a step further by introducing a continuous deployment (CD) stage.