While adapting Java's KafkaIOIT to work with a large dataset, I encountered a problem. I want to push 100M records through a Kafka topic, verify data correctness and, at the same time, check the performance of KafkaIO.Write and KafkaIO.Read.
To perform the tests I'm using a Kafka cluster on Kubernetes from the Beam repo (here).
The expected flow is that the records are first generated in a deterministic way and then written to Kafka - this concludes the write pipeline. As for reading and correctness checking: the data is read from the topic, decoded into String representations, and a hashcode of the whole PCollection is calculated (for details, check KafkaIOIT.java).
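For reference, here is a stripped-down sketch of what the two pipelines do. It is a simplification of KafkaIOIT: the broker address and topic name are placeholders, GenerateSequence stands in for the synthetic record generator, and the hash is computed with the HashingFn combiner from Beam's IO test utilities.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.GenerateSequence;
import org.apache.beam.sdk.io.common.HashingFn;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaHashSketch {

  private static final String BOOTSTRAP = "kafka:9092"; // placeholder broker address
  private static final String TOPIC = "beam-100m";      // placeholder topic name
  private static final long RECORD_COUNT = 100_000_000L;

  public static void main(String[] args) {
    // Write pipeline: generate RECORD_COUNT deterministic records and push them to Kafka.
    Pipeline writePipeline = Pipeline.create();
    writePipeline
        .apply("Generate records", GenerateSequence.from(0).to(RECORD_COUNT))
        .apply("To KV", MapElements
            .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.strings()))
            .via((Long i) -> KV.of(Long.toString(i), Long.toString(i))))
        .apply("Write to Kafka", KafkaIO.<String, String>write()
            .withBootstrapServers(BOOTSTRAP)
            .withTopic(TOPIC)
            .withKeySerializer(StringSerializer.class)
            .withValueSerializer(StringSerializer.class));
    writePipeline.run().waitUntilFinish();

    // Read pipeline: read the same number of records back, decode them to Strings
    // and collapse the whole PCollection into a single hash for verification.
    Pipeline readPipeline = Pipeline.create();
    PCollection<String> values = readPipeline
        .apply("Read from Kafka", KafkaIO.<String, String>read()
            .withBootstrapServers(BOOTSTRAP)
            .withTopic(TOPIC)
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withMaxNumRecords(RECORD_COUNT) // bound the otherwise unbounded source
            .withoutMetadata())
        .apply("Values", MapElements
            .into(TypeDescriptors.strings())
            .via((KV<String, String> kv) -> kv.getValue()));
    // In the real test this hash is compared against the expected value.
    values.apply("Calculate hashcode", Combine.globally(new HashingFn()));
    readPipeline.run().waitUntilFinish();
  }
}
```

In the actual test both pipelines run on Dataflow, and it's the final hash on the read side that differs between runs.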
During the testing I ran into two problems:
When the predetermined number of records is read from the Kafka topic, the hash is different each time.
Sometimes not all the records are read and the Dataflow task waits for the input indefinitely, occasionally throwing exceptions.
I believe there are two possible causes of this behavior:
either there is something wrong with the Kafka cluster configuration
or KafkaIO behaves erratically on high data volumes, duplicating and/or dropping records.
I found a Stack Overflow answer that I believe might explain the first behavior: link - if messages are delivered more than once, it's obvious that the hash of the whole collection would change.
In this case, I don't really know how to configure KafkaIO.Write in Beam to produce each record exactly once.
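The closest I could find in the KafkaIO javadoc is withEOS(...) on the write side and withReadCommitted() on the read side, roughly as in the sketch below (broker/topic names and the sink group id are placeholders) - but I'm not sure whether this is the right combination or whether my runner supports it:

```java
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

/** Sketch of an exactly-once KafkaIO configuration; all names are placeholders. */
class ExactlyOnceKafkaTransforms {

  static KafkaIO.Write<String, String> exactlyOnceWrite() {
    return KafkaIO.<String, String>write()
        .withBootstrapServers("kafka:9092")      // placeholder broker address
        .withTopic("beam-100m")                  // placeholder topic name
        .withKeySerializer(StringSerializer.class)
        .withValueSerializer(StringSerializer.class)
        // Exactly-once sink: needs Kafka 0.11+ brokers and a runner that supports it.
        .withEOS(1, "kafkaioit-eos-sink");       // numShards, sinkGroupId
  }

  static KafkaIO.Read<String, String> readCommittedOnly() {
    return KafkaIO.<String, String>read()
        .withBootstrapServers("kafka:9092")
        .withTopic("beam-100m")
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        // Only return records from committed transactions
        // (sets isolation.level=read_committed on the consumer).
        .withReadCommitted();
  }
}
```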
This leaves the issue of messages being dropped unsolved. Can you help?