
Kubernetes bits

» Switching Contexts

Credits to Sarasa Gunawardhana

List all contexts:

kubectl config view -o jsonpath='{.contexts[*].name}' | tr " " "\n"
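
Alternatively, kubectl has a built-in command that lists all contexts and marks the current one with an asterisk:

kubectl config get-contexts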

Get current context:

kubectl config current-context

Switch context:

kubectl config use-context <context_name>

» Service FQDN

Inside the cluster, a service's FQDN is <SERVICE>.<NAMESPACE>.svc.cluster.local.

See the Kubernetes DNS documentation for details.
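
For a quick check, you can resolve such an FQDN from a throwaway pod; my-service and my-namespace below are placeholders for an existing service and its namespace:

kubectl run -it --rm --restart=Never dnscheck --image=busybox -- nslookup my-service.my-namespace.svc.cluster.local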

» Get logs from all pods of a specific namespace

Using stern:

stern -n <NAMESPACE> ".*" --tail 1

» Start a fresh container

Handy for some quick testing.

Run a new container and drop into its shell. As soon as you exit, the pod is removed from the cluster:

kubectl run -it --rm --restart=Never debian --image=debian -- bash

Run a pod without attaching to its shell. It will stay around until it crashes or you delete it:

kubectl run --restart=Never debian --image=debian -- sleep infinity

Create a deployment whose container will stay around until you delete the deployment:

kubectl create deployment debian --image=debian -- sleep infinity
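
To clean up afterwards, delete the pod or the deployment created above:

kubectl delete pod debian
kubectl delete deployment debian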

» Bootstrap manifest

When you want to bootstrap a manifest file, add --dry-run=client --output=yaml to the corresponding kubectl create command.

For example:

kubectl create deployment sample --image alpine --dry-run=client --output=yaml

or

kubectl create secret generic sample --dry-run=client --output=yaml

Unfortunately, the output contains more fields than required, such as an empty status object for the deployment.
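
To actually bootstrap a manifest file, redirect the output and then strip the fields you don't need; deployment.yaml is just a placeholder file name:

kubectl create deployment sample --image alpine --dry-run=client --output=yaml > deployment.yaml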

» Ephemeral debug containers

Creating a debug container (see the documentation):

kubectl debug -n <NAMESPACE> -it <POD> --image=alpine --target=<CONTAINER>

This might raise the following warning, and the debug container won't open a shell:

Warning: container debugger-24876: container has runAsNonRoot and image will run as root (pod: "pod-name(1a5578a4-ac01-4bd6-afd1-0110bc8824d1)", container: debugger-24876)

Adjust the securityContext of the deployment to allow running as root. Warning: this recreates the pod (the state you are investigating might get lost) and should generally only be done for debugging on non-production environments:

securityContext:
  runAsNonRoot: false
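
For context, a minimal, hypothetical deployment excerpt showing where the pod-level securityContext lives; all names and the image are placeholders. If runAsNonRoot is set on a specific container in your spec instead, adjust that container's securityContext:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-name
spec:
  selector:
    matchLabels:
      app: pod-name
  template:
    metadata:
      labels:
        app: pod-name
    spec:
      securityContext:
        runAsNonRoot: false   # allow the ephemeral debug container to run as root
      containers:
        - name: app
          image: example/app:latest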

» Run Wireshark on the pods network

This requires tcpdump in the container and Wireshark on the host running the kubectl command. When tcpdump cannot be installed in the container, you can try using an ephemeral debug container (not yet tested by me).

kubectl exec -n <NAMESPACE> <POD> [-c <CONTAINER>] -- tcpdump [-lUni <INTERFACE>] -w - | wireshark -k -i -

For example:

kubectl exec -n streamer manager-669d766c68-2ln5s -- tcpdump -w - | wireshark -k -i -

or:

kubectl exec -n streamer manager-669d766c68-2ln5s -c "observer" -- tcpdump -lUni eth0 -w - | wireshark -k -i -

» Enter container namespace from cluster node

You can enter the namespaces of a running container from the cluster node that runs it, so you first have to log in / SSH into that node. One use case is capturing traffic (e.g. with tcpdump) of the given container. Depending on the container runtime of your cluster, you need either the Docker or the containerd approach. While the runtime-specific sections below show an alternative way of listing the container ID, you can also get this value with good old kubectl:

kubectl describe pod/<pod name> | grep "Container ID"

» Using Docker

Get the container ID (alternative):

docker ps | grep <container name>

Get the pid:

docker inspect <container id> | grep Pid

Enter the namespace:

Adjust nsenter with the namespaces you need. For example, when you want to capture the network traffic, use --net:

sudo nsenter --net --target <pid>

» Using crictl

Get the container ID (alternative):

sudo crictl ps | grep <image name>

Get the pid:

sudo crictl inspect <container id> | grep "pid"

This will give you output similar to the one below, where the first entry (4921) is the pid we are looking for.

"pid": 4921,
        "pid": 1
        "type": "pid"

Enter the namespace (again, adjust the namespaces as required):

sudo nsenter --net --target <pid>
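
Once inside the network namespace, a capture on the node sees the container's traffic; for example (the interface eth0 and the output path are assumptions, adjust as needed):

tcpdump -i eth0 -w /tmp/capture.pcap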

» Run Security Checks

Check whether there are any security flaws in your cluster.

» kube-bench

Start the pod which will run the checks:

kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

Wait a moment and then check the logs:

kubectl logs -f pod/kube-bench-<xyz>
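
Instead of looking up the generated pod name, you can also reference the Job directly (assuming the Job from the manifest above is named kube-bench):

kubectl logs -f job/kube-bench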

» kube-hunter

» Outside the cluster

This will give you a limited view from outside the cluster.

docker run --rm aquasec/kube-hunter --remote <IP of the node>

For a more detailed analysis, run the container inside the cluster.

» Inside the cluster

Run the pod:

kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-hunter/main/job.yaml

Wait a moment and then check the logs:

kubectl logs -f kube-hunter-<xyz>

» kubescape

Kubescape is another tool that scans the cluster for security risks. The checks are based on the Kubernetes hardening guidance from the NSA and CISA.

First, install the client on your local machine as described here. Then run the scan, for example:

kubescape scan framework nsa --exclude-namespaces kube-system,kube-public


