Configure Kafka brokers to survive broker failure
I have a Kafka cluster running in Kubernetes (Confluent Platform, deployed with Helm). I want a setup that can survive the failure of one broker. I have tried multiple setups, but in general this is a fairly new topic for me.
Basically, I have one producer application and multiple event listeners. I want to make sure that the failure of one broker won't bring the whole cluster down.
Here is what I have tried so far:
- 3 brokers, replication factor 3, min in-sync replicas 1 – bringing one broker down brings the whole cluster down
- 3 brokers, replication factor 3, min in-sync replicas 2 – same as above
- 4 brokers, replication factor 2, min in-sync replicas 3 – still fails
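For context, these replication settings also have broker-side defaults that apply to auto-created topics and to Kafka's internal topics. A minimal sketch of what a 3-broker setup tolerating one failure could look like, in server.properties style (the exact keys for overriding these through the Helm chart depend on the chart version, so treat these as assumptions to map onto your chart's values):

```
# Broker-side defaults for a 3-broker cluster intended to
# tolerate one broker failure (sketch, server.properties style).
default.replication.factor=3
min.insync.replicas=2
# The internal topics need the same treatment, or losing the broker
# that hosts them takes consumer group coordination down with it:
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
```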
What would be the optimal solution? The message volume is not huge, and I want to keep the number of brokers reasonably low for a production setup. Any ideas how to set it up properly?
By a working cluster I mean a cluster that can receive messages and deliver them to consumers.
3 brokers, replication factor 3, min in-sync replicas 2 should be sufficient to handle a one-broker-down scenario.
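Note that for this to actually protect writes, the producer must also send with acks=all; with acks=1 the leader acknowledges alone and min.insync.replicas is not enforced for that write. A sketch of creating such a topic with the Kafka CLI (the topic name and bootstrap address are illustrative):

```shell
# Create a topic that tolerates one broker failure
# (topic name "events" and host are illustrative).
kafka-topics --bootstrap-server localhost:9092 \
  --create --topic events \
  --partitions 3 \
  --replication-factor 3 \
  --config min.insync.replicas=2
```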
Please check your topic definition again to confirm it has the wanted configuration; if not, alter the definition.
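If the topic already exists with a lower min.insync.replicas, that setting can be altered in place (changing the replication factor itself requires a partition reassignment instead). A sketch, assuming an illustrative topic name:

```shell
# Raise min.insync.replicas on an existing topic
# (topic name "events" and host are illustrative).
kafka-configs --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name events \
  --add-config min.insync.replicas=2
```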
Also check that the internal offsets topic (__consumer_offsets) is defined with the same configuration.
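The internal offsets topic is created automatically on first start using offsets.topic.replication.factor; if the cluster was first brought up with fewer brokers, it may have ended up with replication factor 1, in which case losing the broker that hosts it stops consumer group coordination even when the data topics are fine. One way to check (the bootstrap address is illustrative):

```shell
# Inspect the replication of the internal consumer offsets topic.
kafka-topics --bootstrap-server localhost:9092 \
  --describe --topic __consumer_offsets
```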
Share the logs of the 3 brokers while taking one broker down.
Share the topic definition (--describe output) so we can make sure it is defined correctly.
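For reference, a sketch of that describe call and what to look for in its output (topic name and host are illustrative):

```shell
# Describe the topic; in the output, check that ReplicationFactor is 3,
# that min.insync.replicas=2 appears under Configs, and that each
# partition's Isr list still contains at least 2 brokers.
kafka-topics --bootstrap-server localhost:9092 \
  --describe --topic events
```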