Setting up remote debugging for Java microservices running inside Kubernetes pods

Evgeni Aizikovich
6 min read · Mar 24, 2022

As probably any experienced developer knows, in some cases it is extremely hard to troubleshoot issues without actually debugging the source code. There is no easier or faster way to identify an issue than to "attach" a development environment to live running code, pause execution at a breakpoint in the desired place, and see what exactly is going on. The concept of "attaching" a development environment to software running at a remote location (i.e. somewhere other than the developer's machine) is called "remote debugging". In this article I will explain in detail how to set up such remote debugging between the IntelliJ IDE and a Java application running inside a Kubernetes pod, without using any additional third-party software.

Step 1: Configuring your Java application for remote debugging

In order to establish a remote debugging connection to a Java application, the JVM needs to be configured to "listen" for such incoming connections. This is done by starting the JVM with the JDWP agent enabled via the -agentlib:jdwp option (the legacy -Xdebug flag is obsolete on modern JVMs and can be omitted).

There are several ways to provide JVM parameters. For example, one way is to bake them into the image itself, in an ENV or ENTRYPOINT instruction of the Dockerfile. However, my preferred way (and shortly I will explain why) is to provide them from the Kubernetes Deployment's YAML as a JAVA_TOOL_OPTIONS environment variable:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-deployment
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        ...
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:5005,server=y,suspend=n"

In the example above, we instruct the JVM to listen for incoming remote debugging connections on port 5005 (suspend=n means the application starts without waiting for a debugger to attach).
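For comparison, baking the same options into the image itself might look roughly like this (a sketch; the base image, jar name, and entrypoint are assumptions). The drawback is that every container built from this image will listen for debugger connections:

```dockerfile
FROM eclipse-temurin:17-jre
COPY target/my-service.jar /app/my-service.jar
# Debug options hard-coded into the image, applied in every environment
ENV JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:5005,server=y,suspend=n"
ENTRYPOINT ["java", "-jar", "/app/my-service.jar"]
```

This inflexibility is exactly why providing the options at the Deployment level is preferable.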

However, most likely you won't want to run in this mode all the time: there is a performance penalty for running the JVM with remote debugging enabled and, even more importantly, it is a potential security hole. Therefore, you might choose, for example, to enable remote debugging only in non-production environments. One way to control which JVM parameters are actually passed under which conditions is to use Helm, which can render the YAML according to provided values. For example (Helm v2):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-deployment
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        ...
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "{{.Values.jvmParams}}"

In the example above, the whole value of the JAVA_TOOL_OPTIONS environment variable will be rendered from the provided Helm 'jvmParams' value.

For example, you can inject different JVM parameters as part of your CI/CD process according to the target environment or any other desired conditions. This is why I always prefer to pass arguments from the deployment YAML: together with Helm you get incredible flexibility for dynamically rendering your YAMLs.
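As a sketch, you might keep per-environment values files and pick one at deploy time (the file and chart names here are assumptions):

```yaml
# values-dev.yaml -- remote debugging enabled
jvmParams: "-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:5005,server=y,suspend=n"
```

```yaml
# values-prod.yaml -- no debug agent, only regular tuning flags
jvmParams: "-Xmx512m"
```

A Helm v2 deployment of the development flavor would then look something like `helm install --name my-service -f values-dev.yaml ./my-service-chart`.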

Step 2: Exposing remote debugging Kubernetes endpoint

Now that we have our JVM listening for incoming remote debugging connections on the desired port (5005 in our example), we need to expose that port to the outside world. In Kubernetes, we do that by creating a Service resource. The simplest way, suitable for internal networks, is to expose a Service of the NodePort type:

kind: Service
apiVersion: v1
metadata:
  name: my-service-remote-debugging
  namespace: my-namespace
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    name: tcp
    port: 5005
    targetPort: 5005
    nodePort: 32081
  type: NodePort

Here we exposed our remote debugging endpoint on port 32081. The Service locates a target pod according to the label 'app: my-service' and passes all incoming TCP traffic to the selected pod's port 5005.

Exposing such an endpoint over the Internet is done a bit differently and may require a few additional steps, depending on your network topology.

If you are using just Kubernetes (i.e. without an additional API gateway component), then all you have to do is create a Service of the LoadBalancer type. When deployed, it makes the cloud environment create a dedicated load balancer resource that receives TCP traffic and passes it to the target Kubernetes Service (and then to a target pod according to the selector).
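A minimal LoadBalancer variant of the Step 2 Service might look like this (a sketch reusing the same names and ports; deploy it instead of, not alongside, the NodePort one):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service-remote-debugging
  namespace: my-namespace
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
  - protocol: TCP
    name: tcp
    port: 5005
    targetPort: 5005
```

Once the cloud provider has provisioned the load balancer, its address appears in the EXTERNAL-IP column of `kubectl get service`.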

If you do use some kind of API gateway, you will need a regular (e.g. ClusterIP) Kubernetes Service pointing to the pod's remote debugging port, and then you configure the API gateway to route remote debugging traffic (for example, based on a dedicated domain name or port) to that Service.
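Such a ClusterIP Service is simply the NodePort example from Step 2 with the type and nodePort fields dropped (ClusterIP is the default Service type):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service-remote-debugging
  namespace: my-namespace
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    name: tcp
    port: 5005
    targetPort: 5005
```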

Step 3: Attaching IntelliJ to remotely running Java application

If you followed everything described in steps 1 & 2, then at this stage you have a Kubernetes pod running a Java application that listens on port 5005 for incoming remote debugging connections. You have also exposed a dedicated endpoint passing TCP traffic to that pod. In our example (the simple way, suitable for internal networks), you have an endpoint accepting traffic on <your-node-ip>:32081.

As a next step, you have to create a dedicated IntelliJ configuration in order to be able to “attach” to the exposed remote debugging endpoint:

  1. Open your Java project (the one you intend to debug) in IntelliJ. Make sure your source code is of the same version as the target Java application; otherwise your breakpoints will not be hit.
  2. Choose the "Edit Configurations…" menu item in the configurations drop-down.
  3. Press the "+" button and choose the "Remote JVM Debug" item.
  4. Give it a name in the "Name" input (e.g. "Remote Debugging").
  5. In the "Host" input, provide the IP or DNS name of your node / load balancer.
  6. In the "Port" input, provide the endpoint port. For a NodePort Service that would be the node port (32081 in the example above); for a load balancer, the port you defined that eventually points to your Service.
  7. In real life, you will likely want to select the "Store as project file" checkbox: it stores the configuration in source control, so it will be available to everyone else.
  8. Press "OK". That's it, your configuration is created and ready to use!

From this moment, whenever you want to remotely debug your pod, just select the remote debugging configuration you created from the configurations drop-down and press the "Debug" button (the one with the green bug). If everything is OK and the connection is successfully established, you will see the following text in the Console window:

Connected to the target VM, address: ‘<your-node-ip>:32081’, transport: ‘socket’

Congratulations, now you can remotely debug your Java application in exactly the same way as if you were running your code locally! Happy debugging!

Remote Debugging at scale

In the previous part of the article, I described the general idea of remote debugging into Kubernetes pods. However, in real life your cloud environment might contain dozens of microservices and hundreds of running pods, and the cluster itself most likely consists of multiple nodes. How are you going to configure everything needed to remotely debug one particular running pod? Obviously, you can't define a dedicated Service per microservice type: that approach does not scale, and it would not let you attach to a specific instance of a microservice. Don't forget that in real environments multiple replicas of every microservice might be running, for scalability and high availability. Instead, you can dynamically configure pods so that a single Service points to one particular pod. You do that programmatically, using the Kubernetes API.

In this approach, you have a dedicated Service for remote debugging, which always points to the pod carrying a special label, for example remote-debugging: "true" (note that label values must be strings):

kind: Service
apiVersion: v1
metadata:
  name: my-service-remote-debugging
  namespace: my-namespace
spec:
  selector:
    remote-debugging: "true"
  ports:
  - protocol: TCP
    name: tcp
    port: 5005
    targetPort: 5005

Then you let the developer choose which pod to remotely debug, and you put that special label on the target pod using the Kubernetes API.

For example, you can get a list of all running pods using the Kubernetes API (in the examples below I am using the official Java client) and then show them to the developer in some kind of system administration UI:

CoreV1Api api = ...;
V1PodList podsList = api.listNamespacedPod(namespace,
        "true", // pretty-print the API response
        null,   // allowWatchBookmarks
        null,   // continue
        null,   // fieldSelector
        null,   // labelSelector
        null,   // limit
        null,   // resourceVersion
        null,   // resourceVersionMatch
        null,   // timeoutSeconds
        null);  // watch
// show developer all pods from podsList.getItems()

After the developer has selected the desired pod for remote debugging, you need to remove (or change to something else) any existing remote debugging labels and then add the remote debugging label to the desired pod:

public void setPodLabel(String namespace, String podName, String key, String value) throws ApiException {
    // JSON Patch "add" creates the label or replaces its existing value
    // ("replace" would fail on a pod that does not carry the label yet)
    String patchValue = "[{\"op\":\"add\",\"path\":\"/metadata/labels/" + key + "\",\"value\":\"" + value + "\"}]";
    V1Patch patch = new V1Patch(patchValue);
    this.api.patchNamespacedPod(podName,
            namespace,
            patch,
            null,  // pretty
            null,  // dryRun
            null,  // fieldManager
            null); // force
}
// for every running pod, reset the remote debugging label to "false":
for (V1Pod pod : podsList.getItems()) {
    this.setPodLabel(namespace, pod.getMetadata().getName(), "remote-debugging", "false");
}

// now set the remote debugging label on the target pod:
V1Pod targetPod = ...;
this.setPodLabel(namespace, targetPod.getMetadata().getName(), "remote-debugging", "true");
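One caveat about the JSON Patch path used in setPodLabel: RFC 6901 requires '/' and '~' inside a path segment to be escaped, so a prefixed label key such as app.kubernetes.io/part-of would break the patch as written. A small hypothetical helper (not part of the Kubernetes client) can take care of that:

```java
public class JsonPointerEscape {

    // RFC 6901: inside a single JSON Pointer token, '~' must become "~0"
    // and '/' must become "~1"; '~' has to be escaped first
    public static String escapeToken(String key) {
        return key.replace("~", "~0").replace("/", "~1");
    }

    public static void main(String[] args) {
        System.out.println(escapeToken("remote-debugging"));
        System.out.println(escapeToken("app.kubernetes.io/part-of"));
    }
}
```

The escaped token can then be concatenated into the patch path instead of the raw key.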

This way, by dynamically moving the "remote-debugging" label across your pods, you can point your remote debugging Service precisely at the desired pod.
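The same label move can also be sketched with plain kubectl, which is handy for a quick manual test (the pod name below is hypothetical):

```shell
# clear the label on whichever pod currently carries it
kubectl label pods -n my-namespace -l remote-debugging=true remote-debugging=false --overwrite

# point the remote debugging Service at the chosen pod
kubectl label pods -n my-namespace my-service-7d9f8b6c4-abcde remote-debugging=true --overwrite
```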
