I followed the Spring Cloud Data Flow installation guide to install the application on an Azure Kubernetes Service cluster with kubectl. I use Kafka as the message broker, and I created a simple stream, time | log.
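For reference, the equivalent SCDF shell commands for that stream (the test-stream name is taken from the pod names below; I may have done the same steps through the dashboard) would be:

dataflow:> stream create --name test-stream --definition "time | log"
dataflow:> stream deploy --name test-stream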
The stream cannot be deployed. I am attaching the logs, which I can't fully understand.
PS kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-7d7d77d54-m59dx                1/1     Running   0          5h36m
kafka-broker-64bfd5d6b5-9c7ld          1/1     Running   0          25m
kafka-zk-768b548468-mhrrn              1/1     Running   0          145m
mysql-9dbdc88c6-xz4hh                  1/1     Running   0          21h
prometheus-64b45b746-zs7z4             1/1     Running   0          5h37m
prometheus-proxy-6764bf4968-4xjz5      1/1     Running   0          28m
scdf-server-7f864c96b7-s8cmm           1/1     Running   0          62m
skipper-7fbd7f47cd-b92v4               1/1     Running   0          6h13m
test-stream-log-v9-ffcd9d55f-8p96j     0/1     Running   13         68m
test-stream-time-v9-6c47699d94-pfzkr   0/1     Running   13         68m
Time app log: https://pastebin.com/JyS8azVk
Log app log: https://pastebin.com/pCe1NqSn
Kafka log: https://pastebin.com/Dj5KfVsQ
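Each log can be reproduced with kubectl logs against the corresponding pod from the listing above, e.g.:

kubectl logs test-stream-time-v9-6c47699d94-pfzkr
kubectl logs test-stream-log-v9-ffcd9d55f-8p96j
kubectl logs kafka-broker-64bfd5d6b5-9c7ld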
From the attached logs, the relevant failure is in the time-source log; specifically, this:
2019-12-19 21:15:23.963 ERROR 1 --- [ main] o.s.cloud.stream.binding.BindingService : Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: Provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:290) ~[spring-cloud-stream-binder-kafka-core-2.1.4.RELEASE
This indicates that the Spring Cloud Stream Kafka binder's provisioner is unable to create the desired topic for the producer (i.e., the output of time-source).
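One quick way to check whether the apps can even reach the broker is to confirm that the Kafka and ZooKeeper services still exist and have endpoints (the service names below assume the stock kubectl-based install; yours may differ):

kubectl get svc kafka kafka-zk
kubectl get endpoints kafka kafka-zk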
Based on your kubectl get pods output, however, it appears that Kafka and ZooKeeper were (re)deployed fairly recently (25m and 145m ago), whereas Skipper has been up for more than 6 hours.
It is likely that you either deployed the components in the wrong order or reprovisioned Kafka, and the resulting IP/host/port changes are not yet reflected in Skipper's deployment. Skipper keeps the Kafka connection settings in its ConfigMap, so all the stream applications it deploys (via SCDF) automatically receive those settings at deployment time.
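To compare what the apps actually received against the current Kafka service, you could inspect Skipper's ConfigMap and the environment of one of the failing pods (the ConfigMap name below assumes the standard kubectl-based install):

kubectl get configmap skipper -o yaml
kubectl describe pod test-stream-time-v9-6c47699d94-pfzkr

Then compare the Kafka broker/ZooKeeper addresses in the pod's Environment section with the current kafka / kafka-zk services.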
My guess is that the connection details the applications received changed when you reprovisioned Kafka/ZooKeeper; you could compare them to confirm. I'd suggest either bouncing the Skipper deployment so it picks up the latest values from its ConfigMap, or starting from a clean slate and following the deployment order described in the docs from the beginning.
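A sketch of the "bounce" option (assuming the Deployment is named skipper, as the pod name suggests, and kubectl 1.15+ for rollout restart), followed by redeploying the stream so the apps are re-created with the refreshed settings:

kubectl rollout restart deployment/skipper

dataflow:> stream undeploy --name test-stream
dataflow:> stream deploy --name test-stream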