
Apache Storm Tutorials

Welcome to Apache Storm Tutorials. The objective of these tutorials is to provide an in-depth understanding of Apache Storm.

In addition to free Apache Storm tutorials, we will cover common interview questions, issues, and how-tos of Apache Storm.

Introduction to Apache Storm

Apache Storm recently became a top-level Apache project, marking a huge milestone. Four years earlier, Storm was nothing more than an idea in its creator's head; it is now a thriving project with a large community, used by many companies. The sections below look at what Storm is and how it compares to related systems.

“Storm is a distributed real-time computation system.” Apache Storm is a task-parallel, continuous computation engine. It defines its workflows as Directed Acyclic Graphs (DAGs) called “topologies”. These topologies run until shut down by the user or until they encounter an unrecoverable failure.
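To make the topology idea concrete, here is a minimal sketch of the classic word-count dataflow. Note this is a plain-Python simulation, not the actual Storm API: the spout, bolt, and function names are illustrative stand-ins for how a source node feeds tuples through the processing nodes of a DAG.

```python
# Hypothetical simulation of Storm's spout -> bolt dataflow (NOT the Storm API).
# A spout emits tuples; bolts transform them; the wiring forms a DAG.

def sentence_spout():
    """Spout: emits tuples; a real spout emits continuously, here a finite list."""
    for sentence in ["the quick brown fox", "jumped over the lazy dog"]:
        yield sentence

def split_bolt(stream):
    """Bolt: splits each sentence tuple into individual word tuples."""
    for sentence in stream:
        for word in sentence.split():
            yield word

def count_bolt(stream):
    """Bolt: keeps a running count per word (terminal node of the DAG)."""
    counts = {}
    for word in stream:
        counts[word] = counts.get(word, 0) + 1
    return counts

# Wire the topology: spout -> split bolt -> count bolt.
word_counts = count_bolt(split_bolt(sentence_spout()))
print(word_counts["the"])  # -> 2, since "the" appears in both sentences
```

In real Storm, each node runs as many parallel tasks across the cluster, and the framework, not function composition, moves tuples between them.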

Storm does not natively run on top of typical Hadoop clusters; it uses Apache ZooKeeper and its own master and worker processes to coordinate topologies, master and worker state, and message-guarantee semantics. That said, both Yahoo! and Hortonworks are working on providing libraries for running Storm topologies on top of Hadoop 2.x YARN clusters. Furthermore, Storm can run on top of the Mesos scheduler as well, both natively and with help from the Marathon framework.

Apache Spark (Streaming)

“Apache Spark is a fast and general-purpose engine for large-scale data processing.” Apache Spark is a data-parallel, general-purpose batch-processing engine. Workflows are defined in a style reminiscent of MapReduce; however, Spark is much more capable than traditional Hadoop MapReduce. Apache Spark has its Streaming API project, which allows for continuous processing via short-interval batches. Similar to Storm topologies, Spark Streaming jobs run until shut down by the user or until they encounter an unrecoverable failure.
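The micro-batch model mentioned above can be sketched in a few lines. This is a hypothetical simulation, not the Spark Streaming API: `micro_batches` and `batch_size` are illustrative stand-ins for how a continuous stream is chopped into short-interval batches, each of which is then processed as an ordinary batch job.

```python
# Hypothetical sketch of the micro-batch model (NOT the Spark Streaming API):
# a continuous stream is grouped into small batches, and each batch is
# processed with a batch-style operation.

def micro_batches(stream, batch_size):
    """Group a stream into fixed-size batches; batch_size stands in for the
    time interval Spark Streaming actually uses."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

# Each batch is handled like a small batch job, e.g. a per-batch sum.
events = [1, 2, 3, 4, 5, 6, 7]
batch_sums = [sum(b) for b in micro_batches(events, 3)]
print(batch_sums)  # -> [6, 15, 7]
```

This is the key contrast with Storm: Storm processes one tuple at a time as it arrives, while Spark Streaming trades a little latency for batch semantics over each interval.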

Apache Spark does not itself require Hadoop to operate. However, its data-parallel paradigm requires a shared file system for optimal use of stable data. The stable source can range from S3 and NFS to, more typically, HDFS.

Executing Spark applications does not require Hadoop YARN. Spark has its own standalone master and worker processes. However, it is common to run Spark applications using YARN containers. Furthermore, Spark can also run on Mesos clusters.


At the time this comparison was written, Apache Spark was a full, top-level Apache project, whereas Apache Storm was still undergoing incubation (it has since graduated to a top-level project). Moreover, the latest stable version of Apache Storm was 0.9.2, and the latest stable version of Apache Spark was 1.0.2 (with 1.1.0 to be released in the coming weeks). Of course, as the Apache incubation disclaimer states, this does not strictly reflect the stability or completeness of either project. It is, however, a reflection of the state of the communities: Apache Spark's operations and processes were endorsed by the Apache Software Foundation, while Apache Storm was still stabilizing its community and development process.

Spark’s 1.x release does state that the API has stabilized and will not undergo major changes that break backward compatibility. Storm, by contrast, makes no such stability guarantee for its API; it is, however, running in production at many different companies.


Java Installation

Use the following command to check whether you have Java already installed on your system.

$ java -version

If Java is already installed, you will see its version number. Otherwise, download the latest version of the JDK.

Step 1: Download JDK

Download the latest version of the JDK from the following link: www.oracle.com

The latest version is JDK 8u60 and the file is “jdk-8u60-linux-x64.tar.gz”. Download the file onto your machine.

Step 2: Extract files

Generally, files are downloaded to the Downloads folder. Extract the tar archive using the following commands.

$ cd /go/to/download/path
$ tar -zxf jdk-8u60-linux-x64.tar.gz

Step 3: Move to opt directory

To make Java available to all users, move the extracted Java content to the “/opt/jdk” folder.

$ su
password: (type password of root user)
# mkdir /opt/jdk
# mv jdk1.8.0_60 /opt/jdk/

Step 4: Set path

To set the PATH and JAVA_HOME variables, add the following commands to the ~/.bashrc file.

export JAVA_HOME=/opt/jdk/jdk1.8.0_60
export PATH=$PATH:$JAVA_HOME/bin

Now apply all the changes to the currently running session.

$ source ~/.bashrc

Step 5: Java Alternatives

Use the following command (as root) to register the new JDK with the Java alternatives system.

update-alternatives --install /usr/bin/java java /opt/jdk/jdk1.8.0_60/bin/java 100

Step 6: Verify the installation

Now verify the Java installation using the verification command (java -version) shown at the beginning of this section.

Apache Storm Benefits

Here is a list of the benefits that Apache Storm offers −

- Storm is open source, robust, and user-friendly. It can be used by small companies as well as large corporations.

- Storm is fault-tolerant, flexible, reliable, and supports any programming language.

- Storm allows real-time stream processing.

- Storm is very fast: benchmarks have clocked it at over a million tuples processed per second per node.

- Storm is highly scalable: it keeps up its performance even under increasing load when resources are added linearly.

- Storm performs data refresh and end-to-end delivery in seconds or minutes, depending on the problem. It has very low latency.

- Storm provides operational intelligence.

- Storm provides guaranteed data processing even if any of the connected nodes in the cluster die or messages are lost.
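The last benefit, guaranteed processing, comes from Storm's ack/replay mechanism: a tuple that is not acknowledged is re-emitted by the spout. The sketch below is a hypothetical plain-Python illustration of that at-least-once idea, not Storm's actual implementation; `process_with_replay` and `flaky_process` are invented names for the sake of the example.

```python
# Hypothetical illustration of at-least-once delivery (NOT Storm internals):
# a tuple that fails processing is replayed until it is acknowledged.

def process_with_replay(tuples, process):
    """Retry each tuple until `process` succeeds (returns True)."""
    delivered = []
    for t in tuples:
        while not process(t):
            pass  # in Storm, the spout would re-emit the failed tuple here
        delivered.append(t)
    return delivered

attempts = {}

def flaky_process(t):
    """Simulated bolt: fails on the first attempt for each tuple,
    succeeds on the second."""
    attempts[t] = attempts.get(t, 0) + 1
    return attempts[t] >= 2

result = process_with_replay(["a", "b", "c"], flaky_process)
print(result)  # -> ['a', 'b', 'c']: every tuple is eventually delivered
```

Note the trade-off this illustrates: a tuple may be processed more than once, which is why Storm's basic guarantee is at-least-once rather than exactly-once.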
