From Deployment Details to Shipping Features: 3 Steps to Team Focus
Issue #003
Welcome to Issue #003 of the /dev/stdout newsletter. In this issue, I list three steps that shift the development team's focus to software features instead of all the details around delivering those very features to users.
Do you practice CI/CD and deploy your software using containers? Then this post is for you.
Chances are that you have been working on a project where the product development team is also responsible for all the deployment-related details.
Details like building the CI/CD pipeline from scratch or creating the project Dockerfile(s) from the ground up often fall to the development team. In the early stages of a company's software development, this is completely normal and a solid way to start the journey. At that stage, the main objective is to get the product out the door.
You build it, you run it!
You have probably heard the “You build it, you run it” phrase before. Interpreted literally, it might sound like you, as a product development team, must build and own everything involved in building and running your software.
Here is the thing: following a DevOps way of doing software does not forbid teams from using centralized and abstracted tools and processes as part of running their product. On the contrary, they benefit from doing so!
As the product matures, the tech stack grows, and the number of product development teams increases, it makes sense to redistribute the responsibilities a little.
In this post, I’m describing practical steps that enable teams to focus more on the application and its features instead of all the nitty-gritty details of the deployment and software delivery.

1. Do Not Reinvent Your Dockerfiles
Instead of having every team maintain its own Dockerfiles, you could create a repository that contains a set of Dockerfiles shared among many projects or services. Say, one file for apps that use TypeScript, one for Java, and a third for Python-based apps. Additionally, you might want to offer different flavors for production and non-production environments for each runtime, such as python.Dockerfile and python-dev.Dockerfile.
Dockerfiles are easy to parameterize using build arguments. By doing so, you still offer the product teams the flexibility to customize certain parts of the commands in the file.
The shared files can be distributed, for example, as a Git submodule. Submodules do not come without issues, but they require no changes to tooling or infrastructure, which makes them a viable option.
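As a rough sketch of how consuming such a shared repository could look (the repository URL and paths here are made up for illustration):
# Add the shared Dockerfiles repository as a Git submodule
git submodule add https://git.example.org/platform/shared-dockerfiles.git docker/shared
# Build a service using the shared Python Dockerfile from the submodule
docker build -f docker/shared/python.Dockerfile -t my-service:local .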
Example
See the simplified example below of how to use build args in practice. The number of build args is not limited to one, which offers the flexibility to support the needs of different apps. That said, keeping the number of arguments small keeps the complexity down.
FROM python:3.9-slim
# ARG must be declared after FROM to be available in this build stage
ARG CMD
# Persist the build-time value so the shell can expand it when the container starts
ENV CMD=${CMD}
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
CMD ["sh", "-c", "${CMD}"]
The following is how you’d build the image and run it. Quite straightforward, right?
docker build --build-arg CMD="python your_script.py" -t your_image_name .
docker run -it your_image_name
Sharing centralized Dockerfiles not only takes the maintenance burden away from the development teams, but also improves the security posture, enables a uniform architecture, and makes it possible to enforce organization-wide dependencies or build steps effectively.
Remember to version your Dockerfiles; if you're using submodules, you can use Git tags for this.
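For example, tagging a release of the shared repository and pinning a consuming project's submodule to it could look roughly like this (the tag name and paths are illustrative):
# In the shared Dockerfiles repository: tag and publish a release
git tag -a v1.2.0 -m "Shared Dockerfiles release v1.2.0"
git push origin v1.2.0
# In a consuming project: pin the submodule to that release
git -C docker/shared fetch --tags
git -C docker/shared checkout v1.2.0
git add docker/shared && git commit -m "Pin shared Dockerfiles to v1.2.0"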

2. Mind Your Base Images
Another thing that often drops on the product development team’s desk is choosing the base image.
In my view, this is a distraction from the team's focus on shipping software features, regardless of how technically capable the team is of making that choice.
Using internal base images offers a way to distribute organization-wide dependencies baked into a prebuilt image, without bloating the Dockerfiles used by the product teams. It also makes it possible, later on, to offer internal versions of certain distributions with their own release cycle, instead of teams picking whatever is available in public repos. The internal releases might, for example, be driven by the company's security and compliance objectives.
The base images could be shared the same way I described above for the Dockerfiles. In the long run, once these images are adopted throughout the organization, you can block access to public registries, at least outside of development environments.
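As an illustration only (the registry address, paths, and image names are assumptions, not a real setup), publishing an internal base image could look something like this:
# Build and push an internal Python base image to the organization's registry
docker build -f base-images/python.Dockerfile -t registry.example.org/base/python:3.9-slim-1 .
docker push registry.example.org/base/python:3.9-slim-1
# Product teams then reference it in the shared Dockerfiles:
# FROM registry.example.org/base/python:3.9-slim-1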
Tip:
Are you using the Docker CLI often, like I do in the examples here? You should check out contaiNERD CTL (nerdctl). It's a Docker-compatible CLI for working with containerd, and it supports rootless containers!
Alternatively, if you do not have an internal container registry at hand, you can agree on a single public base image and use it in all your shared Dockerfiles, without having to publish any image internally. In my opinion, this is also a solid way to get started.
In this post, I will not go into the details of choosing the right base images, as it is a topic of its own.
3. Componentize Your CI/CD Pipelines
The third thing on my list has to do with pipelines. I've often seen product teams solve their shipping needs pretty well on their own.
Still, as with the two previous points, implementing and maintaining CI/CD pipelines solely by and for a single product team is inefficient in the long run, and it distracts the team from focusing on implementing features in their software.
Again, as the organization's software development grows, it makes sense to componentize the tasks in CI/CD pipelines. This reduces the teams' maintenance burden, makes pipelines easier to understand, and leaves less reinventing of the wheel happening within the organization.
Componentization can be done in a manner that is agnostic to the CI/CD pipeline infrastructure itself, be it GitHub Actions, Jenkins, or Azure DevOps Pipelines. Doing it in a tool-agnostic way also helps mitigate the vendor lock-in that easily creeps in.
My favorite tool for componentizing pipelines (and even local environment tasks) is Taskfile. For me, Taskfile combines the best parts of Makefiles and pipeline providers' YAML structures while adding some programming-language-like features.
Example
The vast majority of CI/CD pipelines I have seen over the years have been cluttered with various Docker CLI commands. Let's look at what I mean in the next example.
docker build -t repository-example.org/fancy-service:0.0.1 -f fancy-service.Dockerfile .
docker push repository-example.org/fancy-service:0.0.1
What if you could call it in the pipeline like this instead?
task container:release NAME=fancy-service VERSION=0.0.1
While not the most significant piece of pipeline logic on its own, wrapping the classic docker command combination in a task is a practical step towards pipeline reusability. A big benefit of composing pipeline tasks like this is that it also helps, for example, in a possible future docker → nerdctl migration, as the task implementation can be changed without any changes to the pipelines themselves.
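To make the idea concrete, here is a minimal sketch of what such a container:release task could look like in a Taskfile; this is my own illustration, not the actual implementation from the repo linked below:
# Taskfile.yml (sketch)
version: '3'
tasks:
  container:release:
    desc: Build and push a container image for a service
    requires:
      vars: [NAME, VERSION]
    vars:
      IMAGE: repository-example.org/{{.NAME}}:{{.VERSION}}
    cmds:
      - docker build -t {{.IMAGE}} -f {{.NAME}}.Dockerfile .
      - docker push {{.IMAGE}}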
You can check the actual implementation of the example in the GitHub repo below.
While this post only touches on a fragment of the possibilities at each point, I hope it gave you some inspiration for enabling teams to focus on the product while still practicing DevOps.
I’m big on reusable components and right-sized abstractions when helping my customers with software delivery and developer experience.
Would you like to learn more about how to apply these practices in your organization?
Thanks for reading and have a great week ahead!
Best,
Pyry
P.S. I got a significant part of my writing energy by listening to the song below. Go ahead and listen to find out why!
*You might wonder what the song recommendation has to do with anything. Well, I often listen to music while I write, be it this newsletter, documentation, or code, so I simply wanted to share some of that with you too. I hope you like it!