This is a Dockerfile that builds a container with the tools I typically use when troubleshooting issues in a Kubernetes environment. It includes DNS utilities, ping, Python, yq, jq, unzip, kubectl, helm, vault, and some other miscellaneous items to make life easier. I build this with GitLab runners and push it to an image repository (a sample CI job follows the Dockerfile).
FROM alpine:3.18
# Install basic app requirements
RUN apk --no-cache upgrade && \
    apk --no-cache add \
    bash \
    bind-tools \
    curl \
    gettext \
    git \
    gpg \
    jq \
    ncurses \
    openssl \
    py3-pip \
    py3-wheel \
    python3 \
    python3-dev \
    skopeo \
    unzip \
    yq
RUN pip3 install --no-cache-dir --upgrade pip && \
    pip3 install --no-cache-dir awscli s3cmd requests hvac boto3
# Install kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && \
    chmod +x ./kubectl && \
    mv ./kubectl /usr/local/bin/kubectl
# Install helm
ENV HELM_VERSION="v3.11.3"
RUN wget -q https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm && \
    chmod +x /usr/local/bin/helm
# Install Vault
ENV VAULT_VERSION="1.15.5"
RUN wget -q https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip && \
    unzip ./vault_${VAULT_VERSION}_linux_amd64.zip && \
    chmod +x ./vault && \
    mv ./vault /usr/local/bin/ && \
    rm -f ./vault_${VAULT_VERSION}_linux_amd64.zip
# Double checking everything is set to execute
RUN chmod +x /usr/local/bin/*
# Set the working directory (WORKDIR creates it if it does not exist)
WORKDIR /workdir
# Change Shell
#SHELL ["/bin/bash", "-c"]
# Prepare Profile Data
COPY src/profile-load .
# Final run command
CMD [ "tail", "-f", "/dev/null" ]
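For reference, a minimal GitLab CI job to build and push an image like this could look like the following sketch; the job name and the Docker-in-Docker setup are illustrative, and it relies on GitLab's predefined CI_REGISTRY_* variables rather than my actual pipeline:

build-toolbox:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/toolbox:latest" .
    - docker push "$CI_REGISTRY_IMAGE/toolbox:latest"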
This is the manifest I use to deploy a standalone pod to a cluster. I typically assign obvious, non-root IDs to the containers and disallow privilege escalation; in some environments, escalation is explicitly disallowed, as is running as the root ID (0). The combination of the command and args fields is what gets executed when the pod starts. In this case, it just tails /dev/null, which keeps the pod running until it is deleted manually.
apiVersion: v1
kind: Pod
metadata:
  name: toolbox
spec:
  securityContext:
    runAsUser: 9999
    runAsGroup: 9999
    fsGroup: 9999
  containers:
  - name: toolbox
    image: <URLToRepo>/<pathToProject>/<imgName>:<tag>
    imagePullPolicy: Always
    securityContext:
      allowPrivilegeEscalation: false
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "tail -f /dev/null" ]
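To deploy it, save the manifest (assumed here to be named toolbox.yaml) and apply it, then check that the pod comes up:

kubectl apply -f toolbox.yaml
kubectl get pod toolbox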
I like to create a profile script that contains most of the commands I like to use, and I copy it into the container during the build process. That way, when I exec into the container, I can source the profile script and have my normal aliases in place. Below is the command you'll use to exec into the aforementioned pod. No namespace is specified; I've assumed you are already in the namespace you want to deploy this container to. If not, make sure to either specify it when applying the manifest or add a namespace: <yourNameSpace> entry to the metadata: stanza.
kubectl exec -it toolbox -- /bin/sh
That will present you with a command prompt. That’s when you source the profile and fire away.
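For illustration, a minimal src/profile-load could look something like this; the aliases and the Vault address below are placeholders, not the actual contents of my script:

# Hypothetical profile-load contents; adjust to taste
alias k='kubectl'
alias kgp='kubectl get pods'
alias kga='kubectl get all'
export VAULT_ADDR='https://vault.example.com'   # placeholder Vault address

Since the Dockerfile copies it into /workdir, sourcing it from inside the pod is just . /workdir/profile-load.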