Apache Kafka Interview Questions


Apache Kafka Interview Questions and Answers

1Q) What is Apache Kafka?

Ans: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log: a high-throughput, distributed messaging system. Kafka is a general-purpose publish-subscribe messaging system that offers strong durability, scalability, and fault tolerance. It is not specifically designed for Hadoop; the Hadoop ecosystem is just one of its possible consumers.
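To make the publish-subscribe model concrete, here is a minimal sketch of publishing a message with Kafka's Java client. The broker address, the topic name ("events"), and the key/value are illustrative assumptions, not part of the original answer:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed local broker; adjust to your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the hypothetical "events" topic.
            producer.send(new ProducerRecord<>("events", "key-1", "hello kafka"));
        }
    }
}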

Kafka vs. Flume
 
Compared to Flume, Kafka wins on its superb scalability and message durability.

Kafka is very scalable. One of its key benefits is that a large number of consumers can be added without affecting performance and without downtime. That is because Kafka does not track which messages on a topic have been consumed; it simply retains all messages for a configurable period, and it is each consumer's responsibility to track its own position via an offset. In contrast, adding more consumers to Flume means changing the topology of the Flume pipeline and replicating the channel to deliver the messages to a new sink. That is not a scalable solution when you have a huge number of consumers, and because the Flume topology has to change, it also requires downtime.

Kafka's scalability is also demonstrated by its ability to handle spikes in event traffic. This is where Kafka truly shines, because it acts as a "shock absorber" between producers and consumers: it can handle events arriving from producers at rates of 100k+ per second. And because consumers pull data from the topic, different consumers can consume messages at different paces. Kafka also supports different consumption models: you can have one consumer processing the messages in real time and another processing the same messages in batch mode.
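To make the pull model and offset tracking concrete, here is a minimal consumer sketch using Kafka's Java client; the broker address, group id, and topic name are assumptions for illustration:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker
        props.put("group.id", "demo-group");               // each group tracks its own offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                // The consumer pulls at its own pace; the broker does no per-consumer tracking.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
                }
            }
        }
    }
}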

2Q) Which components are used for streamflow of data?

Ans: Bolt:- Bolts represent the processing logic units in Storm. You can use bolts to do any kind of processing, such as filtering, aggregating, joining, interacting with data stores, and talking to external systems. Bolts can also emit tuples (data messages) for subsequent bolts to process, and they are responsible for acknowledging tuples once they are done processing them.
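As an illustrative sketch (assuming the Storm 2.x API under org.apache.storm), a simple filtering bolt might look like this; the field name "value" and the threshold are hypothetical:

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ThresholdFilterBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        long value = tuple.getLongByField("value");
        if (value > 100) {
            collector.emit(new Values(value));  // pass the tuple on to downstream bolts
        }
        // BaseBasicBolt acknowledges the input tuple automatically after execute() returns.
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("value"));
    }
}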

 Spout:- Spouts represent the source of data in Storm. You can write spouts to read data from data sources such as databases, distributed file systems, messaging frameworks, etc. Spouts can broadly be classified into the following –

-Reliable – These spouts can replay a tuple (a unit of data in the data stream). This helps applications achieve 'at least once' message processing semantics: in case of failure, tuples can be replayed and processed again. Spouts that fetch data from messaging frameworks are generally reliable, as these frameworks provide a mechanism to replay messages.

-Unreliable – These spouts cannot replay tuples. Once a tuple is emitted, it cannot be replayed, irrespective of whether it was processed successfully or not. Such spouts follow 'at most once' message processing semantics. A minimal reliable spout is sketched below.
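Here is that sketch, again assuming the Storm 2.x API; the field name "number" and the counter source are hypothetical stand-ins for a real data source:

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class NumberSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private long nextId = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // Passing a message id as the second argument is what makes a spout reliable:
        // Storm will call ack(msgId) or fail(msgId), and a real reliable spout
        // would re-emit failed ids from its fail() callback.
        collector.emit(new Values(nextId), nextId);
        nextId++;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("number"));
    }
}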

Tuple:- The tuple is the main data structure in Storm. A tuple is a named list of values, where each value can be of any type. Tuples are dynamically typed – the types of the fields do not need to be declared. Tuples have helper methods like getInteger and getString to get field values without having to cast the result. Storm needs to know how to serialize all the values in a tuple; by default it knows how to serialize the primitive types, strings, and byte arrays. If you want to use another type, you will need to implement and register a serializer for that type.
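For example, inside a bolt's execute(Tuple tuple) method, field values can be read without casting (assuming the upstream component declared fields "id" and "name"):

 int id = tuple.getInteger(0);                   // positional access, no cast needed
 String name = tuple.getStringByField("name");   // access by declared field name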

 


 

3Q) Does Apache act as a Proxy server?

Ans: Yes, Apache can also act as a proxy server by using the mod_proxy module. This module implements a proxy, gateway, or cache for Apache. It implements proxying capability for AJP13 (Apache JServ Protocol version 1.3), FTP, CONNECT (for SSL), HTTP/0.9, HTTP/1.0, and (since Apache 1.3.23) HTTP/1.1. The module can be configured to connect to other proxy modules for these and other protocols.
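As a hedged example, a minimal reverse-proxy configuration in httpd.conf might look like the following; the path and backend URL are hypothetical, and mod_proxy plus mod_proxy_http must be loaded:

 ProxyRequests Off
 ProxyPass        /app http://backend.example.com:8080/app
 ProxyPassReverse /app http://backend.example.com:8080/app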

4Q) While installing, why does Apache have three config files - srm.conf, access.conf, and httpd.conf?

Ans: The first two are remnants from the NCSA days. Generally, you can safely delete srm.conf and access.conf and keep everything in httpd.conf.

5Q) What is ZeroMQ?

Ans: ZeroMQ is "a library which extends the standard socket interfaces with features traditionally provided by specialized messaging middleware products". Older versions of Storm relied on ZeroMQ primarily for task-to-task communication in running Storm topologies (newer versions use Netty by default).

6Q) How many distinct layers are there in Storm's codebase?

Ans: There are three distinct layers to Storm’s codebase.

-First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.

-Second, all of Storm’s interfaces are specified as Java interfaces. So even though there’s a lot of Clojure in Storm’s implementation, all usage must go through the Java API. This means that every feature of Storm is always available via Java.

-Third, Storm’s implementation is largely in Clojure. Line-wise, Storm is about half Java code, half Clojure code. But Clojure is much more expressive, so in reality, the great majority of the implementation logic is in Clojure.

7Q) When do you call the cleanup method?

Ans: The cleanup method is called when a bolt is being shut down, and it should release any resources that were opened. There is no guarantee that this method will be called on the cluster: for instance, if the machine the task is running on blows up, there is no way to invoke it. The cleanup method is intended for when you run topologies in local mode (where a Storm cluster is simulated in the process) and you want to be able to run and kill many topologies without suffering any resource leaks.
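A sketch of overriding cleanup() in a bolt to release a resource; "dbConnection" is a hypothetical field opened earlier in prepare():

 @Override
 public void cleanup() {
     try {
         if (dbConnection != null) {
             dbConnection.close();  // best-effort: not guaranteed to run on a real cluster
         }
     } catch (Exception e) {
         // ignore: the worker is shutting down anyway
     }
 }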

8Q) How can we kill a topology?

Ans: To kill a topology, simply run:

storm kill {stormname}

Give storm kill the same name you used when submitting the topology. Storm won't kill the topology immediately. Instead, it deactivates all the spouts so that they don't emit any more tuples, and then waits for Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS seconds before destroying all the workers. This gives the topology enough time to finish any tuples it was processing when it was killed.
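For example, assuming a topology submitted under the hypothetical name "mytopology", the wait time can also be overridden with the -w flag:

storm kill mytopology -w 10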

 


9Q) What is a CombinerAggregator?

Ans: A CombinerAggregator is used to combine a set of tuples into a single field. It has the following signature:

public interface CombinerAggregator<T> extends Serializable {
 T init(TridentTuple tuple);
 T combine(T val1, T val2);
 T zero();
 }

Storm calls the init() method with each input tuple, and then repeatedly calls the combine() method until the partition is processed. The values passed into combine() are partial aggregations, the result of combining the values returned by calls to init(). The zero() method supplies the identity value to return when a partition contains no tuples.
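For instance, a counting aggregator (similar to Trident's built-in Count) can be written as follows; it assumes TridentTuple from the Trident API is imported:

 public class Count implements CombinerAggregator<Long> {
     public Long init(TridentTuple tuple) { return 1L; }                // each tuple contributes 1
     public Long combine(Long val1, Long val2) { return val1 + val2; }
     public Long zero() { return 0L; }                                  // identity for empty partitions
 }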

10Q) Is it necessary to kill a running topology in order to update it?

Ans: Yes. To update a running topology, the only option currently is to kill the current topology and resubmit a new one. A planned feature is a "storm swap" command that would swap a running topology with a new one, ensuring minimal downtime and no chance of both topologies processing tuples at the same time.

11Q) In which folder are Java Applications stored in Apache?

Ans: Java applications are not stored in Apache itself. Apache can only be connected to a separate Java webapp hosting server (such as Tomcat) using the mod_jk connector.
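A minimal, hypothetical mod_jk setup: a worker is defined in workers.properties and URLs are mapped to it in httpd.conf. The worker name, host, port, and path below are assumptions for illustration:

 # workers.properties
 worker.list=tomcat1
 worker.tomcat1.type=ajp13
 worker.tomcat1.host=localhost
 worker.tomcat1.port=8009

 # httpd.conf
 JkMount /myapp/* tomcat1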

12Q) What is mod_vhost_alias?

Ans: This module creates dynamically configured virtual hosts, by allowing the IP address and/or the Host: header of the HTTP request to be used as part of the pathname to determine what files to serve. This allows for easy use of a huge number of virtual hosts with similar configurations.
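For example, a hedged httpd.conf sketch that serves each virtual host from a directory named after the Host header (%0 expands to the whole hostname; the paths are hypothetical):

 UseCanonicalName Off
 # www.example.com is served from /var/www/vhosts/www.example.com/htdocs
 VirtualDocumentRoot /var/www/vhosts/%0/htdocs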

13Q) What is Struts, and what is its purpose?

Ans: Struts is an open-source framework for creating Java web applications.

 

14Q) Is running Apache as root a security risk?

Ans: No. The root process only binds port 80; requests are served by child processes running as an unprivileged user, so no user actually interacts with the site with root rights. If you kill the root process, you will see the child processes disappear as well.

 

 

 

 
