Kafka Disaster Recovery Strategy
Kafka is a distributed event streaming platform capable of handling trillions of events a day. It is highly scalable, durable, and delivers high throughput.
The key components of Kafka are:
Broker nodes are responsible for the bulk of I/O operations and durable persistence within the cluster. They host the topic partitions managed by the cluster. Partitions can be replicated across multiple brokers for horizontal scalability and increased durability; these copies are called replicas. A broker node may be the leader for certain replicas and a follower for others. A single broker node is elected as the cluster controller, responsible for the internal management of partition states.
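As an illustration, a replicated topic can be created and inspected with the CLI tools shipped with Kafka. This is a sketch only: the broker address localhost:9092 and the topic name orders are placeholders, and a running cluster is assumed. The describe output lists, per partition, the leader broker, the full replica set, and the in-sync replicas (ISR).

```shell
# Create a topic whose 6 partitions are each replicated to 3 brokers
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic orders --partitions 6 --replication-factor 3

# Show, per partition, the leader broker, the replica set, and the ISR
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic orders
```

Note that a replication factor of 3 requires at least three brokers in the cluster.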
ZooKeeper nodes manage the overall controller status of the cluster. Maintaining cluster metadata, controller election, user information, quotas, and ACLs are largely implemented in ZooKeeper.
Producers are responsible for publishing messages on Kafka topics. Any number of producers may publish to the same topic.
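The console producer bundled with Kafka illustrates this. A sketch, assuming a broker at localhost:9092 and a topic named orders (both placeholders); when records are keyed, the key determines the target partition:

```shell
# Start an interactive producer that reads key:value records from stdin
kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic orders \
  --property parse.key=true --property key.separator=:
# Then type records such as:
#   order-1:created
#   order-1:shipped
```

Records sharing a key land on the same partition, so their relative order is preserved.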
Consumers read messages from topics. Any number of consumers may read from the same topic; however, depending on the configuration and grouping of consumers, there are rules governing the distribution of records among them.
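For example, consumers started with the same group id split a topic's partitions among themselves, while a consumer in a different group independently receives every record. A sketch using the bundled console tools (broker address, topic, and group names are placeholders):

```shell
# Consumers in the same group share the partitions of the topic;
# run this command in two terminals to see records divided between them
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic orders --group billing

# Inspect which partitions each group member currently owns, plus its lag
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group billing
```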