I'm working with a few services in a Kubernetes cluster. I'm trying to use Telepresence to allow local debugging of code running in the cluster, or of proposed changes in pull requests. The services in the cluster are Spring Boot REST services.
I have a simple test case that uses curl to reach a REST endpoint running in the cluster. Without Telepresence, it succeeds.
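For reference, the test is just a curl against the cluster endpoint, something like the following (the hostname and path here are placeholders, not the real values from my cluster):

```shell
# Hypothetical hostname and path -- substitute the real ingress/service URL.
# -s suppresses the progress bar; -o /dev/null discards the body;
# -w prints just the HTTP status code so the result is easy to read.
curl -s -o /dev/null -w "%{http_code}\n" http://cartms.example.com/cartms/health
```

Before running Telepresence this prints 200; while the swap is in place it prints 502.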
I'm on a Windows 7 laptop, running an Ubuntu VM with NAT networking. I can run the telepresence command line, whose "--run" section runs "mvn spring-boot:run". This defaults to the proxy method "vpn-tcp". The service appears to start up fine, and I can hit the service endpoint at "localhost:8080" successfully.
However, if I rerun the test case to reach the service in the cluster, it fails with a 502 (Bad Gateway).
When I run telepresence, I can watch it replace the two pods running the Spring Boot image with a single pod running the Telepresence proxy image. I've compared the detailed properties of the service and pods before and after running telepresence, and I don't see any obvious issues in the minor differences.
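For the before/after comparison I captured the rendered objects and diffed them, roughly like this (the service name and label selector are sketched; "cartms-blue" is the real deployment name, the rest is assumed):

```shell
# Capture the service and pod specs before the swap...
kubectl get service cartms -o yaml > service-before.yaml   # service name assumed
kubectl get pods -l app=cartms -o yaml > pods-before.yaml  # label selector assumed

# ...run telepresence, capture again, then diff:
kubectl get service cartms -o yaml > service-after.yaml
diff service-before.yaml service-after.yaml
```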
If I then kill the telepresence process, it eventually restores the original pods and my test case works again.
Note that I'm currently doing this testing while connected to our corporate network over VPN. The Telepresence docs say not to mix "vpn-tcp" with another VPN, but I'm not sure that's relevant here: the first time I tried this test I was in the office, not on VPN, and I saw the same results.
I also tried changing the proxy method to "inject-tcp". That resulted in the Spring Boot service failing to start, with an error about a Jaeger client that couldn't connect to its server.
If it matters, here is the telepresence command I'm executing (with the inject-tcp change reverted) and some of its initial output:
+ telepresence --verbose --swap-deployment cartms-blue --expose 8080 --run mvn spring-boot:run '-Dspring-boot.run.jvmArguments=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005' -Dspring-boot.run.folders=opt/ajsc/etc/config
T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
T: Starting network proxy to cluster by swapping out Deployment cartms-blue with a proxy
T: Forwarding remote port 8080 to local port 8080.
T: Forwarding remote port 8443 to local port 8443.
T: Setup complete. Launching your command.
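One thing I'm considering, based on the "--also-proxy" hint in the output above: explicitly proxying the extra hosts the app talks to, such as the Jaeger endpoint. A sketch of what that would look like (the "jaeger-agent" host name is a guess at whatever the Jaeger client is configured to reach, not something from my actual setup):

```shell
# Sketch: tell Telepresence to also proxy specific hosts/headless services.
# "jaeger-agent" is a placeholder for the host the Jaeger client connects to.
telepresence --verbose --swap-deployment cartms-blue --expose 8080 \
  --also-proxy jaeger-agent \
  --run mvn spring-boot:run
```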
I'm looking for ideas on how to move forward from this.