Artisanal hand-crafted build machines — a recipe for disaster

Qazi Murtaza Ahmed
5 min read · Jul 11, 2019


Today I will discuss how we used to do CI/CD, and how the way we build applications in our CI/CD environment evolved. Previously, if an application required Maven, we would simply install Maven on the build machine and execute it. What could be wrong with that? Quite a lot, as it turns out. It was a nightmare: first, we had to manage all the different versions of SDKs and build tools, leaving each machine, as someone put it, "a random assortment of dependencies and tools." We were asked repeatedly to install or update some tool or SDK, and just imagine doing that across a whole CI/CD cluster. Instances had to be configured by hand; even if you used snapshots you would still need to configure one or two, and afterwards the CI/CD nodes required cleanup as well.

We tried using an SDK manager in some cases, but it still required us to SSH into the machine. We hoped we could find some way to stop doing this repetitive task again and again…

Then the Internet came to our rescue: I found a different practice going around, building inside Docker, where containers in the pipeline perform all the actions. We had found the holy grail for our problem. It would not only make life easier for developers but, being a proper solution, also eliminate the need for us to intervene. You can now control all the dependencies and tools your code requires without having to worry whether the build machine has that tool or version.

All we had to do was convert the existing jobs to the new pattern and show everyone how to change theirs accordingly. Easier said than done: even if your organization believes change is the only constant, it is still sometimes hard for individuals. The change is sometimes perceived as more work, even though it makes life easier, and new concepts can be tricky to understand and come with a steep learning curve, so the apprehension is understandable.

So we waited for an opportunity to present itself. It came when we were asked to install Newman on the CI/CD server; our QA team needed it to run some tests internally. A prerequisite was that the CI/CD server had Docker installed, which was true in our case.
Previously, to run Newman we just had to pull the git repo with the Postman collection and execute:

newman run "application.json.postman_collection" --reporter-cli-no-failures --environment="application.json.postman_environment" --reporters="json,cli" --reporter-json-export="newman-results.json" --disable-unicode

The change was very subtle: we just had to substitute the newman command at the front with a docker command that brings in the Newman CLI.

docker run --rm -v $(pwd):/etc/newman postman/newman:alpine run "application.json.postman_collection" --reporter-cli-no-failures --environment="application.json.postman_environment" --reporters="json,cli" --reporter-json-export="newman-results.json" --disable-unicode

The difference is

docker run --rm -v $(pwd):/etc/newman postman/newman:alpine

instead of

newman run

The official Newman image is published on Docker Hub as postman/newman.
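
To make the switch invisible to existing jobs, you could even drop a small wrapper script in place of the binary. A minimal sketch, assuming a standard shell environment (the script path and name are purely illustrative):

# hypothetical wrapper so existing jobs can keep calling "newman"
cat > /usr/local/bin/newman <<'EOF'
#!/bin/sh
exec docker run --rm -v "$(pwd)":/etc/newman postman/newman:alpine "$@"
EOF
chmod +x /usr/local/bin/newman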

Some more examples:

docker run -it --rm -v "$PWD":/usr/src/code -v "$PWD/target:/usr/src/code/target" -w /usr/src/code maven mvn clean package

docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project gradle gradle <gradle-task>

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:8 node your-daemon-or-script.js
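
One refinement worth noting: each of these containers starts clean, so the Maven example above downloads every dependency on every run. Mounting the host's local repository into the container caches them between builds. A sketch, assuming the default ~/.m2 location on the host:

# reuse the host's local Maven repository so dependencies are cached between runs
docker run -it --rm -v "$PWD":/usr/src/code -v "$HOME/.m2":/root/.m2 -w /usr/src/code maven mvn clean package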

Let us go a little further and explore a feature Docker released called multi-stage builds. It was inspired by the Builder Pattern from object-oriented design: a container is executed to create a complex object, which in this case is a microservice container image. Docker refined the pattern a bit, and it is much easier to use now. Previously, we either containerized the whole build process or, if we wanted smaller images, copied the artifacts out by hand; neither way was ideal. Now we have one Dockerfile divided into two parts, a build stage and a runtime stage. In the build stage we compile our application, and once it is finished we copy only the artifact into the runtime stage.

Still with me, right? In the build stage we can bring in the heavy guns: the JDK, all the SDKs, and the build tools we require, and execute the build process there. When it succeeds, we just move to the next FROM line and this time pull only the essentials. In our case that was openjdk:8-jre-slim: the runtime environment alone, and a slimmed-down version at that. You have to keep sizes small these days, since you are not running just a few services, you are planning to run hundreds. After that FROM, the next line can COPY the artifacts from the build stage. By the successful end of it, you are left with the smallest possible runtime container image, and the CI/CD server environment stays clean and nifty.

Below is a sample multi-stage Dockerfile:

#build stage: full Maven + JDK image, used only to compile the application
FROM maven:3-jdk-8-alpine as target
ENV APP_HOME=/root/dev/application/
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
#copy the pom first so the dependency layer is cached across builds
COPY pom.xml $APP_HOME
RUN mvn dependency:go-offline
RUN mkdir -p $APP_HOME/src
COPY src/ $APP_HOME/src
RUN mvn package
#runtime stage: slim JRE image plus the built artifact only
FROM openjdk:8-jre-slim
ENV JAVA_OPTS=""
WORKDIR /root/
COPY --from=target /root/dev/application/target/application.jar app.jar
EXPOSE 8084
#shell form so $JAVA_OPTS is expanded at startup
ENTRYPOINT java $JAVA_OPTS -jar app.jar
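
With that in place, the CI job shrinks to two commands; everything else happens inside Docker. A minimal sketch of building and running the image (the tag name is just an example):

# build the image; only the slim runtime stage ends up in the final tag
docker build -t application:latest .
# run it, passing JVM flags in through JAVA_OPTS
docker run --rm -p 8084:8084 -e JAVA_OPTS="-Xmx256m" application:latest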

Conclusion

Although the benefits of using Docker in the normal build process, and of multi-stage builds, are evident, let's jot them down for posterity's sake anyway. This approach provides compatibility and maintainability, and eliminates "it works on my machine" and tool and SDK version-mismatch issues once and for all.

There is no need to install Node.js libraries or Newman at the system level, or to keep reinstalling them again and again. Beyond that:

- Your builds and tests are isolated from other environment variables and from other processes currently executing.
- The application is standardized end to end: if it works on your system, it will work on the server.
- The CI/CD server stays clean, allowing faster configuration, with no more waiting for another team to set up your environment.
- Smaller container images deploy rapidly, increasing the cadence of continuous deployment and testing.
- And, if I have mentioned this before, it deserves another mention: isolation and security.

This blog was originally posted on FAIRCG.
