Kubernetes on Ubuntu: microservice issue interacting with other hosts through Consul

11/27/2018

I have been going around in circles for a couple of weeks now and am unable to make progress on the following issue:

It is summarised on this video: https://www.youtube.com/watch?v=48gb1HBHuC8&t=358s

The code and scripts have been updated since the video was made; there are various shell scripts involved.

The microservice applications are written in Micronaut, and everything appears to work fine when executed in the documented way without going through Kubernetes (so we know the applications themselves work).

Attempting to make it work through Kubernetes, I have ended up with the following:

kubectl get svc
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                   AGE
billing                      ClusterIP   10.104.228.223   <none>        8085/TCP                                                                  3h
front                        ClusterIP   10.107.198.62    <none>        8080/TCP                                                                  8m
kafka-service                ClusterIP   None             <none>        9093/TCP                                                                  3h
kind-cheetah-consul-dns      ClusterIP   10.101.52.36     <none>        53/TCP,53/UDP                                                             3h
kind-cheetah-consul-server   ClusterIP   None             <none>        8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   3h
kind-cheetah-consul-ui       ClusterIP   10.97.158.51     <none>        80/TCP                                                                    3h
kubernetes                   ClusterIP   10.96.0.1        <none>        443/TCP                                                                   3h
mongodb                      ClusterIP   10.104.205.91    <none>        27017/TCP                                                                 3h
react                        ClusterIP   10.106.74.166    <none>        3000/TCP                                                                  3h
stock                        ClusterIP   10.109.203.36    <none>        8083/TCP                                                                  9m
waiter                       ClusterIP   10.107.166.108   <none>        8084/TCP                                                                  3h
zipkin-deployment            NodePort    10.108.102.81    <none>        9411:31919/TCP                                                            3h
zk-cs                        ClusterIP   10.100.139.233   <none>        2181/TCP                                                                  3h
zk-hs                        ClusterIP   None             <none>        2888/TCP,3888/TCP                                                         3h

Notice the service names front and stock: these are the two we will focus on.

They used to be exposed as services called front-deployment and stock-deployment; they have since been renamed, as you can see from the names Consul reports:

stock-675d778b7d-bg98c:8083
stock:8083

These are the resolvable names; stock-deployment used to resolve to the same IP, in this case 10.109.203.36, which now belongs to the service called stock.
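
With the 8500 port-forward listed further down in place, the registrations can also be inspected directly through Consul's catalog HTTP API. For example (gateway is the id the front application registers under, per its startup log below; the stock application's id would be whatever it registers as):

curl localhost:8500/v1/catalog/service/gateway

An empty [] response would mean the service never registered under that name.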

We have the following pods:

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
billing-59b66cb85d-24mnz             1/1     Running   13         3h
curl-775f9567b5-vzclh                1/1     Running   2          27m
front-7c6d588fd4-ftk7n               1/1     Running   2          18m
kafka-0                              1/1     Running   13         3h
kind-cheetah-consul-server-0         1/1     Running   4          3h
kind-cheetah-consul-wgwfk            1/1     Running   4          3h
mongodb-744f8f5d4-9mgh2              1/1     Running   4          3h
react-6b7f565d96-h5khb               1/1     Running   4          3h
stock-675d778b7d-bg98c               1/1     Running   2          18m
waiter-584b466754-bzs7s              1/1     Running   13         3h
zipkin-deployment-5bf954f879-tbhdf   1/1     Running   4          3h
zk-0    

If I run:

kubectl attach curl-775f9567b5-vzclh -c curl -i -t
If you don't see a command prompt, try pressing enter.
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup stock
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup front
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

If I run:

kubectl exec front-7c6d588fd4-ftk7n -- nslookup stock
nslookup: can't resolve '(null)': Name does not resolve

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local


$ kubectl exec stock-675d778b7d-bg98c -- nslookup front
nslookup: can't resolve '(null)': Name does not resolve

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

By any of these methods, DNS appears to be working fine (the can't resolve '(null)' line above appears to be the usual busybox nslookup quirk rather than a real failure, since the lookups themselves succeed).

If I run:

minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl 10.109.203.36:8083/stock/lookup/Budweiser
{"name":"Budweiser","bottles":1000,"barrels":2.0,"availablePints":654.636}$ 

The issue is this:

 curl 10.107.198.62:8080/lookup/Budweiser
{"message":"Internal Server Error: The source Publisher is empty"}$ 

The above curl calls the beer-front application's GatewayController lookup method, which calls stockControllerClient.find(), which in turn calls the StockController in the beer-stock application:

@Get("/lookup/{name}")
@ContinueSpan
public Maybe<BeerStock> lookup(@SpanTag("gateway.beerLookup") @NotBlank String name) {
    System.out.println("Looking up beer for "+name+" "+new Date());
    return stockControllerClient.find(name)
            .onErrorReturnItem(new BeerStock());
}
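
To pin down the error text: "The source Publisher is empty" is the message RxJava 2 produces when a stream that must emit exactly one item completes without emitting anything, for instance when an empty Publisher is adapted via Single.fromPublisher. The standalone sketch below (plain RxJava 2, nothing project-specific) reproduces it:

import io.reactivex.Flowable;
import io.reactivex.Single;

public class EmptySourceDemo {
    public static void main(String[] args) {
        // Single.fromPublisher expects exactly one element; when the source
        // completes without emitting, RxJava 2 signals
        // java.util.NoSuchElementException: The source Publisher is empty
        Single.fromPublisher(Flowable.<String>empty())
                .subscribe(
                        item -> System.out.println("got: " + item),
                        err -> System.out.println(err)); // same message as in the gateway log
    }
}

The practical implication is that onErrorReturnItem(new BeerStock()) above only reacts to onError signals; if stockControllerClient.find completes empty, nothing maps that to a default value, so something like defaultIfEmpty(new BeerStock()) would be needed to avoid the 500.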

I know it attempts to call the client:

 kubectl logs front-7c6d588fd4-ftk7n
11:54:27.629 [main] INFO  i.m.context.env.DefaultEnvironment - Established active environments: [cloud, k8s]
11:54:31.662 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 4023ms. Server Running: http://front-7c6d588fd4-ftk7n:8080
11:54:32.168 [nioEventLoopGroup-1-3] INFO  i.m.d.registration.AutoRegistration - Registered service [gateway] with Consul
Looking up beer for Budweiser Tue Nov 27 12:13:38 GMT 2018
12:13:38.851 [nioEventLoopGroup-1-14] ERROR i.m.h.s.netty.RoutingInBoundHandler - Unexpected error occurred: The source Publisher is empty
java.util.NoSuchElementException: The source Publisher is empty

But none of the actual client methods appear to be able to get through to the remote services.

The main issue is that I am not really sure which part is going wrong, i.e. why the HTTP clients cannot connect to the remote services. While Consul was misconfigured, the applications were failing to register with it and failing to start up at all, so registration itself evidently works now.
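
For context, stockControllerClient is a Micronaut declarative HTTP client. Stripped down, such a client looks roughly like the sketch below; the id and the path here are my assumptions based on the routes above, and the real interface in the project may differ:

import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.Client;
import io.reactivex.Maybe;

// Hypothetical reconstruction; "stock" must match the name the
// beer-stock application registers under for discovery to work.
// BeerStock is the project's own domain class.
@Client(id = "stock")
public interface StockControllerClient {

    @Get("/stock/lookup/{name}")
    Maybe<BeerStock> find(String name);
}

The id is resolved through the compositeDiscoveryClient(consul,kubernetes) visible in the traces further down, so a mismatch between it and the name the service actually registers under could produce exactly this kind of silent empty result.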

Versions:

 kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}


 $ helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}


$ minikube version
minikube version: v0.30.0

The following port forwards have been set up to localhost:

ps auwx|grep kubectl
xxx       6916  0.0  0.1  50584  9952 pts/4    Sl   11:51   0:00 kubectl port-forward kind-cheetah-consul-server-0 8500:8500
xxx       7332  0.0  0.1  49524  9936 pts/4    Sl   11:52   0:00 kubectl port-forward react-6b7f565d96-h5khb 3000:3000
xxx       8704  0.0  0.1  49524  9644 pts/4    Sl   11:55   0:00 kubectl port-forward front-7c6d588fd4-ftk7n 8080:8080

As a point of interest, I enabled HTTP client tracing and hit the current IP of the front application at :8080/stock; these are the logs produced:

 09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.health.HeartbeatTask@363a3d15 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.discovery.consul.ConsulServiceInstanceList@5d01ea21 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [io.micronaut.context.event.ApplicationEventListener] from candidates [Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList, Definition: io.micronaut.discovery.consul.registration.ConsulAutoRegistration, Definition: io.micronaut.http.client.scope.ClientScope, Definition: io.micronaut.health.HeartbeatTask, Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope] for qualifier: <HeartbeatEvent> 
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.http.client.scope.ClientScope
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope
09:34:27.930 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Found 1 beans for type [<HeartbeatEvent> io.micronaut.context.event.ApplicationEventListener]: [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] 
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher pass(String checkId,String note)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.938 [nioEventLoopGroup-1-4] DEBUG i.m.d.registration.AutoRegistration - Successfully reported passing state to Consul
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.server.netty.NettyHttpServer - Server waiter-7dd7998f77-bfkbt:8084 Received Request: GET /waiter/beer/a
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matching route GET - /waiter/beer/a
09:34:30.604 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matched route GET - /waiter/beer/a to controller class micronaut.demo.beer.$WaiterControllerDefinition$Intercepted
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Single serveBeerToCustomer(String customerName)] invocation on target: micronaut.demo.beer.$WaiterControllerDefinition$Intercepted@a624fe7
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.validation.ValidatingInterceptor@6642e95d] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.607 [nioEventLoopGroup-1-12] TRACE o.h.v.i.e.c.SimpleConstraintTree - Validating value a against constraint defined by ConstraintDescriptorImpl{annotation=j.v.c.NotBlank, payloads=[], hasComposingConstraints=true, isReportAsSingleInvalidConstraint=false, elementType=PARAMETER, definedOn=DEFINED_IN_HIERARCHY, groups=[interface javax.validation.groups.Default], attributes={groups=[Ljava.lang.Class;@71cccd2d, message={javax.validation.constraints.NotBlank.message}, payload=[Ljava.lang.Class;@5044372c}, constraintType=GENERIC, valueUnwrapping=DEFAULT}.
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$Lambda$449/1045761764@6d4672c0] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)] invocation on target: micronaut.demo.beer.client.TicketControllerClient$Intercepted@eaba75d
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.609 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Flowable getInstances(String serviceId)] invocation on target: compositeDiscoveryClient(consul,kubernetes)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.cache.interceptor.CacheInterceptor@2b772100] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$Lambda$449/1045761764@19a66abd] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.691 [nioEventLoopGroup-1-12] ERROR i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] executed with error: Empty body
io.micronaut.http.client.exceptions.HttpClientResponseException: Empty body
    at io.micronaut.http.client.HttpClient.lambda$null$0(HttpClient.java:161)
    at java.util.Optional.orElseThrow(Optional.java:290)
    at io.micronaut.http.client.HttpClient.lambda$retrieve$1(HttpClient.java:161)
    at io.micronaut.core.async.publisher.Publishers$1.doOnNext(Publishers.java:143)
    at io.micronaut.core.async.subscriber.CompletionAwareSubscriber.onNext(CompletionAwareSubscriber.java:53)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapSubscriber.drain(FlowableSwitchMap.java:307)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapInnerSubscriber.onNext(FlowableSwitchMap.java:391)
    at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onNext(FlowableSubscribeOn.java:97)
    at io.reactivex.internal.operators.flowable.FlowableOnErrorNext$OnErrorNextSubscriber.onNext(FlowableOnErrorNext.java:79)
    at io.reactivex.internal.operators.flowable.FlowableTimeoutTimed$TimeoutSubscriber.onNext(FlowableTimeoutTimed.java:99)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.lambda$onNext$1(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:53)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:52)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableCreate$NoOverflowBaseAsyncEmitter.onNext(FlowableCreate.java:403)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1773)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1705)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:186)
    at io.micronaut.http.netty.stream.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:181)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    at io.micronaut.tracing.instrument.util.TracingRunnable.run(TracingRunnable.java:54)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
09:34:30.692 [nioEventLoopGroup-1-12] DEBUG i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] resolved fallback: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.693 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor 
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Found 2 existing beans for type [<RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor]: [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc, io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] 
09:34:30.694 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Created bean [micronaut.demo.beer.client.NoCostTicket$Intercepted@77053015] from definition [Definition: micronaut.demo.beer.client.NoCostTicket$Intercepted] with qualifier [@Fallback]
 Blank beer from fall back being served
09:34:30.695 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [micronaut.demo.beer.Beer@5caca659] using codec: io.micronaut.jackson.codec.JsonMediaTypeCodec@2ba33e2c
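
For what it is worth, the tail of that trace is Micronaut's RecoveryInterceptor at work: the real TicketControllerClient call fails with Empty body, so the @Fallback bean (NoCostTicket) is created and invoked instead, which is where the "Blank beer from fall back being served" line comes from. Reconstructed from the class and method names in the log (the body is a guess), such a fallback is roughly:

import io.micronaut.http.HttpResponse;
import io.micronaut.retry.annotation.Fallback;

// Invoked by RecoveryInterceptor whenever the real client call fails;
// TicketControllerClient and BeerItem are the project's own types
@Fallback
public class NoCostTicket implements TicketControllerClient {

    @Override
    public HttpResponse addBeerToCustomerBill(BeerItem beer, String customerName) {
        System.out.println(" Blank beer from fall back being served");
        return HttpResponse.ok();
    }
}

So every request is being served by fallbacks, which is consistent with the gateway's empty Maybe: the interesting failure is the Empty body behaviour of the underlying HTTP call, not the controller logic.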

Any help would be much appreciated. The project link is in the video link above; there are various shell scripts, and it is rather complex to get it all set up and running, so watching a few moments of the video might be more practical.

Update: I have largely been away from this but am still unable to proceed. I have now upgraded to the most recent consul-helm (v0.5.0) and Micronaut 1.0.4 but am facing the identical issue. I am not quite sure whether this is normal either:

09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id

I ended up making a very basic two-app version of it on this branch.

There is an updated, fuller log, found here, from a fresh install after running ./install-minikube.sh (the script would need Docker username changes if it were to be run by someone else): logs produced

-- V H
consul
kubernetes
micronaut
minikube

1 Answer

11/27/2018

It looks like your beer-front cannot connect to Consul, which is defined as a headless service; you will notice that kind-cheetah-consul-server has no ClusterIP. Can you try connecting directly to "kind-cheetah-consul-server-0.[ headless service fqdn ]" or just "kind-cheetah-consul-server-0"? Since your Consul runs as a StatefulSet, you have a stable pod name and DNS entry.
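
For example, assuming everything is installed in the default namespace (which the nslookup output in the question suggests) and that the headless service governing the StatefulSet is kind-cheetah-consul-server, the stable FQDN would be kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local. You can verify it resolves from one of the application pods:

kubectl exec front-7c6d588fd4-ftk7n -- nslookup kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local

and then point the Micronaut apps at it; with the standard Micronaut Consul settings that would be something along these lines in application.yml (the exact key layout depends on how your apps are configured):

consul:
  client:
    defaultZone: "kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local:8500"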

-- Bal Chua
Source: StackOverflow