Part three of an N part series: Successfully building an Android app inside of a docker container, controlled by Jenkins, running inside of a docker container, on a NAS - but at a cost... Covers the advanced setup of a Jenkins server on a QNAP NAS & getting a build to run
Step One - Accessing docker in Jenkins
To use docker agents in Jenkins we're going to need to expose docker on our host to the docker runtime in our container, so it can create & control other containers as siblings. In its simplest form, this is quite easy. First we need to SSH into our NAS & write a new Dockerfile:
FROM jenkins/jenkins:lts-jdk17
USER root
# Ensure we're using latest available packages, install docker & then remove the cache to ensure a lean image
RUN apt-get -yqq update && \
    apt-get -yqq install docker.io && \
    apt-get clean
Then we follow it up with a docker build and a docker run to test it:
docker build -t tmp --progress=plain .
docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm --entrypoint /bin/bash tmp
We can see everything is working as expected!
root@c66bce970068:/# docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
tmp          latest    723acf6a32d6   10 seconds ago   872MB
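It's worth pausing on what this means: the docker CLI inside the container is talking to the host's daemon through the bind-mounted socket, so anything we spin up from Jenkins will be a sibling of the Jenkins container, not a child. A quick way to convince yourself (relying on the fact that a container's hostname defaults to its own ID):
docker ps | grep "$(hostname)"   # the Jenkins container finds itself in the host's container list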
Great, we're done, right? Unfortunately, no.
Step Two - No root please
While it's great that Jenkins can access our docker socket & control docker on the host - the eagle-eyed amongst you may notice we are running as the root user; we forgot to switch back to the default jenkins user in our Dockerfile:
FROM jenkins/jenkins:lts-jdk17
USER root
# Ensure we're using latest available packages, install docker & then remove the cache to ensure a lean image
RUN apt-get -yqq update && \
    apt-get -yqq install docker.io && \
    apt-get clean
# Switch back to default jenkins user so we're not running as root
USER jenkins
But now if we try to access docker within our container we get an error:
Got permission denied while trying to connect to the Docker daemon socket
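If you want to see the mismatch for yourself, a couple of quick checks from inside the running container (a sketch, assuming the same bind mount as the docker run above) tell the story:
id                            # note the groups the jenkins user actually belongs to
ls -la /var/run/docker.sock   # the socket is owned by a host UID/GID that isn't in that list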
This happens because when you use a bind mount in docker, it preserves the host file system's ownership & permissions when mapping files into the container - as far as the Jenkins container is concerned, the jenkins user isn't permitted to access the socket, so we're blocked from using docker! So how can we fix this? Well, first, let's take a look at our current permissions to check out the groups:
[~] # ls -la /var/run/docker.sock
srw-rw---- 1 admin administrators 0 2024-01-26 00:24 /var/run/docker.sock=
Ah. Well, we don't want to give our Jenkins container full rights as part of the administrators group - after all, this is still my home NAS & I don't want to expose a backdoor! So let's come at this differently: we'll create a new group, give it rights to use the docker socket & then assign that group to our admin user on the host & the jenkins user in the container! First, creating our new group & assigning it to the admin user is simple enough:
addgroup admin jenkins
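(If your NAS's addgroup complains that the group doesn't exist yet, create it first with a bare addgroup jenkins - the exact behaviour varies between firmware versions.) Either way, it's worth confirming the assignment took:
id admin   # jenkins should now appear in the admin user's group list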
And then we can modify our permissions:
getent group jenkins # Note the group id
chgrp 1000 /var/run/docker.sock # Use the group id as needed
docker ps # check everything is still working
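One caveat worth flagging: /var/run/docker.sock is recreated whenever the docker daemon restarts, so this chgrp will likely be lost after a NAS reboot. A minimal sketch of a script to reapply it on startup - how you hook it into boot varies by QNAP firmware, so treat that part as an exercise for the reader:
#!/bin/sh
# Reapply the jenkins group to the docker socket after the daemon comes back up
chgrp "$(getent group jenkins | cut -d: -f3)" /var/run/docker.sock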
Great, new group created, admin user configured to use it - last step, Jenkins:
FROM jenkins/jenkins:lts-jdk17
USER root
# Ensure we're using latest available packages, install docker & then remove the cache to ensure a lean image
RUN apt-get -yqq update && \
    apt-get -yqq install docker.io && \
    apt-get clean
# Force our internal docker group to have the same GID as our external jenkins group
ARG DOCKER_GROUP_ID
RUN groupmod -g $DOCKER_GROUP_ID docker && gpasswd -a jenkins docker
# Switch back to default jenkins user so we're not running as root
USER jenkins
We now need to build this with a modified command, to pass in the required group info:
docker build --build-arg DOCKER_GROUP_ID=$(getent group jenkins | cut -d: -f3) -t tmp --progress=plain .
Start up a container again, test docker ps &... yes! We're in business!
Now let's try out the real thing in Jenkins... We'll start the container using a command instead of the UI, so we can configure everything we want in one go:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v jenkins_home:/var/jenkins_home -p 35035:8080 -it --rm tmp
And update our Jenkinsfile to test out the connection inside of the pipeline:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
                sh 'docker run alpine:latest echo Goodbye world'
            }
        }
    }
}
And isn't it just magic?
Step Three - Compiling the application
To support docker agents, we're going to first need to install some plugins on our server - most importantly Docker Pipeline (plugin id: docker-workflow), which provides the agent { docker { ... } } syntax we're about to use.
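If you'd rather not click through Manage Jenkins → Plugins, the jenkins/jenkins image ships with the jenkins-plugin-cli tool, so the install can live in our Dockerfile alongside everything else - a minimal sketch:
# Bake the Docker Pipeline plugin into the image so docker agents work out of the box
RUN jenkins-plugin-cli --plugins docker-workflow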
And we can quickly rewrite our Jenkinsfile to use the new syntax:
pipeline {
    agent {
        docker { image 'alpine:latest' }
    }
    stages {
        stage('Example') {
            steps {
                sh 'echo Hello world'
            }
        }
    }
}
But obviously, we're going to need something that can compile Android, not a basic Alpine Linux container! For this, I decided to use MobileDevOps android-sdk-image as it had everything I would need nicely bundled up already for me. No need to reinvent the wheel after all! Before we get started though, our Jenkins server is going to need some more config first - we can only compile our application if we have a keystore to sign it.
For this, I simply uploaded the debug keystore from my laptop. For Windows users this can usually be found at: C:\Users\[username]\.android\debug.keystore and for Mac/Linux users: ~/.android/debug.keystore.
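(If you'd rather not copy a keystore off a personal machine at all, an equivalent throwaway one can be generated anywhere with keytool, using the standard Android debug defaults:)
keytool -genkeypair -keystore debug.keystore -alias androiddebugkey \
  -storepass android -keypass android -keyalg RSA -keysize 2048 -validity 10000 \
  -dname "CN=Android Debug,O=Android,C=US"
Whichever route you take, upload the file to Jenkins as a credential of kind Secret file with the ID android_debug_keystore - that's the ID our Jenkinsfile will look up later.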
With the keystore in place, we can update our build.gradle.kts (don't worry about the passwords, they're default debug ones!):
signingConfigs {
    create("debug_jenkins") {
        storeFile = file("${project.rootDir}/keystore.jks")
        keyAlias = "androiddebugkey"
        keyPassword = "android"
        storePassword = "android"
    }
}
buildTypes {
    debug {
        if (System.getenv("IS_JENKINS") != null) {
            signingConfig = signingConfigs.getByName("debug_jenkins")
        }
    }
}
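Before involving Jenkins at all, the conditional can be sanity-checked locally by faking the environment variable (assuming a Unix shell & a keystore.jks dropped at the project root, as the config expects):
IS_JENKINS=YES ./gradlew signingReport   # the debug variant should now report the debug_jenkins config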
And finally... Our Jenkinsfile:
pipeline {
    agent {
        docker { image 'mobiledevops/android-sdk-image:33.0.2' }
    }
    stages {
        stage('Example') {
            environment {
                KEYSTORE_FILE = credentials('android_debug_keystore')
            }
            steps {
                withEnv(["IS_JENKINS=YES"]) {
                    // Single quotes so the shell, not Groovy, expands the credential path
                    sh 'cp -f "$KEYSTORE_FILE" "keystore.jks"'
                    // Useful for debugging keys
                    // sh './gradlew signingReport'
                    // Do the build!
                    sh './gradlew clean assembleDebug assembleDebugUnitTest assembleDebugAndroidTest'
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'kotlin_app/app/build/outputs/**/*.apk', fingerprint: true
            sh 'rm -f "kotlin_app/keystore.jks"'
        }
    }
}
And that's it... Click build in Jenkins, sit back and watch the magic happen.
Conclusion
So where's the sadness/cost I mentioned at the start of this article? We achieved everything we set out to do when we started this project, and we've even made it fully flexible. We have:
- Jenkins running in docker
- Jenkins spinning up a new container to build our application
- And the application compiled
All in a documented & committable format! Well, the catch is... do you remember this line from earlier?
sh 'docker run alpine:latest echo Goodbye world'
Unfortunately, our docker setup is not configured to run rootless, which means if I were an evil-me, I could run something like this:
sh 'docker run -v /:/host alpine:latest rm -rf /host/*'
And uh. That would be bad. Goodbye media collection. Our Jenkins would happily tell the host to mount its root directory, docker would happily run the container as root & we would destroy everything.
But at the end of the day - that's okay - at least for me for now. Why? Because:
- I am only going to spin up this server when I need it
- I am only ever going to point it at private repositories with private build files stored within them
Maybe one day I'll write a part four, discovering how to make my QNAP run docker rootless to close this hole. Maybe one day I'll just accept that I should use a cloud based solution instead. Or maybe one day I'll set up a new server that I don't care about if it gets exploited.
But until that day... I'll call it goal achieved