DEV Community

Gabor Szabo

Originally published at code-maven.com

Day 16: Moving from Travis-CI to GitHub Actions for Marpa::R2

Marpa::R2 can parse any language whose grammar can be written in BNF. It used to use Travis-CI, but since Travis-CI discontinued its free offering for open-source projects, the project was left without CI.

I asked the author if he would be interested in a GitHub Actions configuration.

A warning during build

I tried to build the module and run its tests locally, but I encountered some issues:

First I noticed some warnings during the build, emitted by one of the dependencies. As it turned out, upgrading the dependency solved this issue, but the latest release of that dependency had a minor issue of its own: the version numbers in its various files were slightly confusing. So I reported that too.

Apparently that was already fixed, it was just not released yet.

An error - missing files

The next thing I encountered was that some files were missing, which made the tests fail. The author first thought that the issue was somehow caused by me mixing versions, but he soon found that the files had never been added to git.

This is one of the reasons to have CI.

This can happen to any one of us.

Maybe not to you, but to someone else on your team, or to a contributor to your project.
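One lightweight guard against this class of problem is the core ExtUtils::Manifest module, whose manicheck() function reports files that are listed in the MANIFEST but missing from disk. The snippet below is a self-contained illustration with made-up file names, not part of Marpa's actual build:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use ExtUtils::Manifest qw(manicheck);

# Hypothetical setup: a MANIFEST that lists a file which was never
# actually added -- the same class of problem CI caught here.
my $dir = tempdir(CLEANUP => 1);
chdir $dir or die "chdir: $!";

open my $fh, '>', 'present.txt' or die $!;
close $fh;
open my $m, '>', 'MANIFEST' or die $!;
print {$m} "MANIFEST\npresent.txt\nabsent.txt\n";
close $m;

# manicheck() returns the files listed in MANIFEST but missing from disk.
local $ExtUtils::Manifest::Quiet = 1;    # suppress per-file warnings
my @missing = manicheck();
print "missing from disk: @missing\n";
```

Running the equivalent check against the files git actually tracks would have flagged the missing Marpa files before any tests ran.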

Should the tip of the main branch be usable?

The author made an interesting comment on the issue about the missing files.

FYI many programmers make a point of the tip of their main branch being something usable. I am NOT one of them, so pulling from my repo, instead of a distribution, is risky.

Here is what I think about it:

I would say it is good practice to ensure that all the tests always pass on the tip of the main branch. Otherwise one risks starting to think that "some tests always fail", and then a test that should not fail can go unnoticed for a long period of time.

Also, it upsets the CI, and we don't want to do that.

Actually, one of the reasons for having a CI is to have something look over our shoulder and make sure that all the tests always pass in a clean environment.
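A scheduled trigger is one concrete way to get that regular clean-environment check even when nobody is pushing; the first workflow below uses exactly this:

```yaml
on:
  schedule:
    - cron: '42 5 * * *'   # run every day at 05:42 UTC, commits or not
```

A nightly run like this also catches breakage caused by newly released dependencies, not just by our own changes.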

And then comes the question: if all the tests pass, doesn't that mean that the version is usable?

I am looking forward to the continuation of this discussion.

Pull-Request

Once those issues were fixed, adding the GitHub Actions configuration file and converting the commands from the Travis-CI config file was not difficult. At one point the author commented that he wants the code to run on Windows and macOS as well, but that this is difficult as he does not have access to those operating systems. I figured that configuring GitHub Actions to run on those operating systems would help him with that task. I was also expecting the tests, or maybe even the compilation, to fail on at least one of them. So I was a bit surprised that everything worked.

Well, I still managed to make a few mistakes and thus had to try it several times, but after a while I got it right and sent the pull-request.

To my surprise the author closed it as he is planning to move the repo to a GitHub organization.

It is unclear to me why that stops him from accepting the PR, so I asked him on the PR.

GitHub Actions

In any case, I have included the configuration files here. This time I went a bit further than in my recent GitHub Actions pull-requests, and also beyond what was in the Travis-CI configuration file.

This time I created two configuration files.

In the first one we use multiple versions of Perl in Docker containers.

name: CI

on:
  push:
  pull_request:
  workflow_dispatch:
  schedule:
    - cron: '42 5 * * *'

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        perl: [ '5.10', '5.20', '5.30', '5.36', '5.36-threaded' ]
        # See options here: https://hub.docker.com/_/perl/

    runs-on: ubuntu-latest
    name: Perl ${{matrix.perl}}
    container: perl:${{matrix.perl}}

    steps:
    - uses: actions/checkout@v3

    - name: Show Perl Version
      run: |
        perl -v

    - name: Install Modules
      run: |
        cpanm Config::AutoConf # optional dependency
        cd cpan
        cpanm --installdeps --quiet --notest .

    - name: Show Errors on Ubuntu
      if: ${{ failure() }}  # this job always runs on ubuntu-latest
      run: |
         cat /home/runner/.cpanm/work/*/build.log

    - name: build Marpa and execute tests
      env:
        AUTHOR_TESTING: 1
        RELEASE_TESTING: 1
      run: |
        set -x
        (cd cpan/xs/ && make)  # generate necessary files

        cd cpan/
        perl Build.PL
        ./Build
        ./Build test
        ./Build distmeta
        ./Build disttest
        MARPA_USE_PERL_AUTOCONF=1 ./Build disttest
        ./Build dist

In the second one we use three different operating systems: Linux, macOS, and Windows.

name: Platforms

on:
  push:
  pull_request:
  workflow_dispatch:
#  schedule:
#    - cron: '42 5 * * *'

jobs:
  platform-test:
    strategy:
      fail-fast: false
      matrix:
        runner: [ubuntu-latest, macos-latest, windows-latest]
        perl: [ '5.30' ]

    runs-on: ${{matrix.runner}}
    name: OS ${{matrix.runner}} Perl ${{matrix.perl}}

    steps:
    - uses: actions/checkout@v3

    - name: Set up perl
      uses: shogo82148/actions-setup-perl@v1
      with:
          perl-version: ${{ matrix.perl }}
          distribution: ${{ ( matrix.runner == 'windows-latest' && 'strawberry' ) || 'default' }}

    - name: Show Perl Version
      run: |
        perl -v

    - name: Install Modules
      run: |
        cpanm Config::AutoConf # optional dependency
        cd cpan
        cpanm --installdeps --quiet --notest .

    - name: Show Errors on Windows
      if:  ${{ failure() && matrix.runner == 'windows-latest' }}
      run: |
         ls -l C:/Users/
         ls -l C:/Users/RUNNER~1/
         cat C:/Users/runneradmin/.cpanm/work/*/build.log

    - name: Show Errors on Ubuntu
      if:  ${{ failure() && matrix.runner == 'ubuntu-latest' }}
      run: |
         cat /home/runner/.cpanm/work/*/build.log

    - name: Show Errors on OSX
      if:  ${{ failure() && matrix.runner == 'macos-latest' }}
      run: |
         cat  /Users/runner/.cpanm/work/*/build.log

    - name: Make - generate necessary files
      run: |
        cd cpan
        cd xs
        make

    - name: Run tests
      env:
        AUTHOR_TESTING: 1
        RELEASE_TESTING: 1
      run: |
        cd cpan
        perl Build.PL
        perl Build
        perl Build test
        perl Build distmeta
        perl Build disttest

    - name: Run tests with autoconf
      env:
        MARPA_USE_PERL_AUTOCONF: 1
      run: |
        cd cpan
        perl Build disttest

    #- name: Build dist
    #  run: |
    #    perl Build dist
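One detail worth calling out in this workflow: GitHub Actions expressions have no ternary operator, so the `distribution` line uses the common `&&`/`||` idiom to pick Strawberry Perl on Windows and the default distribution everywhere else:

```yaml
# "cond && value-if-true || value-if-false" stands in for a ternary.
# Caveat: if value-if-true is itself falsy ('' or false), the fallback
# after || is selected even when the condition is true.
distribution: ${{ ( matrix.runner == 'windows-latest' && 'strawberry' ) || 'default' }}
```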

Conclusion

You don't have to have a CI if you always remember to run your tests. You don't even need access to Windows and macOS to make your code work there, if you are as good a programmer as Jeffrey Kegler, the author of Marpa. And even then you might forget to add some of the files to git.

However, most of us are not as focused on the details, and most of us would not be able to build a project like Marpa. I know for sure that I wouldn't.

So we need the hand-holding and the discipline a CI can provide.

Top comments (3)

Jeffrey Kegler

Thanks for the kind words re Marpa, and the PR which I expect to re-open and use.

There was a lot of commentary re my working methods, but for the moment I'll just make some "10,000 foot" remarks on tightly coupled CI. By tightly coupled, I mean CI which requires every check-in to pass the test suite.

I believe in and practice test-driven programming, but I personally find that tightly coupling check-in and testing actually inhibits test-driven practices. Marpa is a large project which depends on a single programmer. The best way to deal with situations like that is not to get into them, but in Marpa's case the only alternative was not creating Marpa.

I develop on the main branch, and often my test suite fails on the tip of the branch -- sometimes deliberately. There are workarounds. I could use development branches and only merge when testing succeeds. And sometimes I do, when I have several changes which it is troublesome to mix together and which I cannot avoid working on at once. But then my most common programming errors come from losing track of which branch I work on, and this actually slows my work.

There are also facilities in testing harnesses for bypassing tests or ignoring their failures. But these are just more complicated ways of subverting tightly coupled CI.

I also test my diagnostics, including diagnostics of internals, which means that a change, even if it is 100% successful from the point of view of the application interface, can nonetheless fail the test suite. My discussions with other programmers suggest that testing diagnostics is rare, but I wonder why. Testing diagnostics costs me a lot of trouble, but gives me much more assurance that they are accurate when I need to use them. It also allows me to catch problems which at first only appear in the internals.

Very commonly I will make a fix which is not intended to change the interface -- it's a simplification of the code, or a speed-up. Here I often just make the change, even if it is complex, then fix the tests. This is often a long process, during which I want to check-in many times, knowing that the test suite will fail. This is a form of test-driven development which is impossible if you insist that the tip of your development branch always pass the test suite.

Insisting that the tip of the development branch always pass testing is something that sounds good to management. A series of perfect check-ins must result in a perfect system, right? Maybe, but finding bugs is not the same as ensuring a system is bug-free. The overhead of moment-by-moment perfection may consume resources better devoted to design and refactoring. Larry Wall teaches us ugly languages are the best way to create beautiful systems, and I think it may be the same with ugly development cycles.

Digressing a bit from the points Gabor was making, an additional, and not small IMHO, reason that tightly coupled CI appeals to management is that it helps them to reduce the unit-cost of programmer labor. Aggressively limiting programmer unit-costs is easier if the development cycle is tolerant in the face of low-skilled and poorly motivated labor. Tight coupling of tests helps limit the damage low-unit-cost programmers can do to the codebase. Given the unit-cost focus of modern management, this is a very important consideration.

As someone who aspires to be a high-unit-cost programmer, I also feel the need to be realistic about the need of corporations and society to get high returns from the money paid for software development. But I would, at the risk of being seen to be self-serving, suggest a ruthless focus on unit costs is counter-productive.

As another example of this counter-productivity, very little work is done, compared with the need, on formal verification of software. This neglect IMO has set the software field back decades. I believe that the major reason for this neglect is that a development cycle that effectively uses formal verification tools requires additional skills from the developers, and this raises unit-costs.

Gabor Szabo • Edited

In my experience as a developer, the "tightly coupled CI", as you call it, is super valuable, especially if you'd like to cooperate with another person. I have not seen management insisting on it at all. If anything, I saw them fearing it as something that slows down development.

However, if you feel that "tightly coupled CI" just limits your development process then having a CI is probably just annoying as it will keep reporting failure.

Anyway, I think it would be interesting if you re-posted this as a stand-alone article so more people can see and comment on it.

Yuki Kimoto - SPVM Author

I'm also creating GitHub Actions to test my Perl modules.