Real-Time Streaming Data Lake and Analytics with Apache NiFi

Overview of Apache NiFi

Apache NiFi for data ingestion delivers an easy-to-use, powerful, and reliable system to process and distribute data across several resources. Apache NiFi works in both standalone mode and cluster mode.

Apache NiFi is used for routing and processing data from any source to any destination, and it can also perform data transformation along the way.

It is a UI-based platform where we define the source from which we want to collect data, the processors that convert the data, and the destination where we want to store it.

Data Collection and Data Ingestion

Data collection and data ingestion are the processes of fetching data from any data source, which we can perform in two ways -

In today's world, enterprises generate data from many different sources and build real-time data lakes; to do this, we need to integrate these various data sources into one stream.

In this blog, we share how to ingest, store, and process Twitter data using Apache NiFi. In coming blogs, we will cover data collection and ingestion from the sources below.

Objectives for Enterprise Data Lake


Apache NiFi Architecture

As outlined above, Apache NiFi provides an easy-to-use, powerful, and reliable system to process and distribute data across several resources.


Each processor in NiFi has relationships such as success, retry, failure, invalid data, etc., which we use when connecting one processor to another. These connections let us route data on to another processor or to storage even when a processor fails.

Benefits of Apache NiFi

Features of Apache NiFi


Apache NiFi Clustering

When we need to move a large amount of data, a single instance of Apache NiFi is not enough to handle it. To cope with this, we can cluster the NiFi servers, which helps us scale.

We only need to create the data flow on one node; the cluster replicates that data flow to every node.

Apache NiFi 1.0.0 introduced the Zero-Master Clustering paradigm. Previous versions of Apache NiFi relied on a single "Master Node" (more formally known as the NiFi Cluster Manager).

If that master node was lost, data continued to flow, but the application could no longer show the topology of the flow or any statistics. With Zero-Master clustering, we can make changes from any node of the cluster.

And if the elected node disconnects, another active node is automatically elected in its place.

Each node has the same data flow, so all nodes perform the same tasks, but each operates on a different set of data.

In an Apache NiFi cluster, one node is elected as the Master (Cluster Coordinator), and every other node sends heartbeat/status information to it. The coordinator is responsible for disconnecting nodes that stop sending heartbeat/status information.

This election is handled by Apache ZooKeeper. If the coordinator node gets disconnected, Apache ZooKeeper elects another active node as the coordinator.


Data Collection and Ingestion from Twitter Using NiFi to Build a Data Lake

Fetching Tweets with NiFi’s Processor

NiFi's GetTwitter processor is used to fetch tweets via the Twitter Streaming API. In this processor, we define which endpoint to use, and we can also apply filters by location, hashtags, or particular IDs.

Now the GetTwitter processor is ready to emit the data (tweets). From here, we can route the data stream anywhere: Amazon S3, Apache Kafka, Elasticsearch, Amazon Redshift, HDFS, Hive, Cassandra, etc. NiFi can move data to multiple destinations in parallel.


Data Integration Using Apache NiFi and Apache Kafka

For this, we use the NiFi processor PublishKafka_0_10.

In the Scheduling tab, we can configure how many concurrent tasks to execute and how the processor is scheduled.

In the Properties tab, we set up the Kafka broker URLs, topic name, request size, etc. The processor writes data to the given topic. For best results, we can create the Kafka topic manually with a defined number of partitions.

Apache Kafka can then be used to process the data with Apache Beam, Apache Flink, or Apache Spark.
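
As a rough sketch of what this looks like outside NiFi, the Python snippet below (using the kafka-python client) creates the topic manually with a defined number of partitions and then reads back a few of the JSON tweets that NiFi publishes. The broker address, the topic name tweets, and the partition count are assumptions, not values from the original flow.

```python
import json

from kafka import KafkaConsumer
from kafka.admin import KafkaAdminClient, NewTopic

BROKERS = "localhost:9092"   # assumed broker address
TOPIC = "tweets"             # assumed topic name configured in PublishKafka_0_10

# Create the topic manually with a defined number of partitions
# (raises TopicAlreadyExistsError if it already exists).
admin = KafkaAdminClient(bootstrap_servers=BROKERS)
admin.create_topics([NewTopic(name=TOPIC, num_partitions=3, replication_factor=1)])

# Consume the JSON tweets that NiFi publishes to the topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    tweet = message.value
    print(tweet.get("created_at"), tweet.get("text"))
```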


Integration Using Apache NiFi to Amazon Redshift Using Amazon Kinesis

Now we integrate Apache NiFi with Amazon Redshift. NiFi uses an Amazon Kinesis Firehose delivery stream to store data in Amazon Redshift.

A delivery stream can be used to move data to Amazon Redshift, Amazon S3, or Amazon Elasticsearch Service; we specify the destination while creating the Kinesis Firehose delivery stream.

Since we want to move data to Amazon Redshift, we first configure the Kinesis Firehose delivery stream accordingly. When delivering to Amazon Redshift, Firehose first writes the data to an Amazon S3 bucket and then issues an Amazon Redshift COPY command to load it into the Redshift cluster.

We can also enable data transformation while creating the Kinesis Firehose delivery stream, and we can back up the data to a separate Amazon S3 bucket in addition to the intermediate bucket.

On the NiFi side, we use the PutKinesisFirehose processor, which pushes data into that Kinesis Firehose stream for delivery to Amazon Redshift. Here we configure our AWS credentials and the Kinesis Firehose delivery stream.
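
What PutKinesisFirehose does corresponds roughly to a Firehose PutRecord call with the FlowFile content. Below is a minimal boto3 sketch of that call; the region, the stream name twitter-to-redshift, and the sample record are assumptions, and credentials come from the standard AWS configuration.

```python
import json

import boto3

# Assumed region and delivery stream name; the stream itself is created
# in the AWS console with Amazon Redshift as its destination.
firehose = boto3.client("firehose", region_name="us-east-1")

tweet = {"id": 1, "created_at": "Mon Jan 01 00:00:00 +0000 2018", "text": "hello"}

# Firehose buffers the record in the intermediate S3 bucket and then
# runs a Redshift COPY command to load it into the cluster.
firehose.put_record(
    DeliveryStreamName="twitter-to-redshift",
    Record={"Data": (json.dumps(tweet) + "\n").encode("utf-8")},
)
```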


Data Integration Using Apache NiFi to Amazon S3

PutKinesisFirehose sends data to Amazon Redshift using Amazon S3 as an intermediary. If we only want to use Amazon S3 as the storage, NiFi can also send data directly to Amazon S3.


For this, we use the NiFi processor PutS3Object, in which we configure our AWS credentials, the bucket name, the object key (path), etc.
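
For reference, what PutS3Object performs is roughly an S3 PutObject request. Here is a minimal boto3 sketch of the equivalent upload; the bucket name twitter-data-lake, the object key, and the local file name are assumed for illustration.

```python
import boto3

s3 = boto3.client("s3")  # credentials taken from the standard AWS config

# Upload one FlowFile's worth of tweet data; bucket and key are assumed names.
with open("tweets.json", "rb") as body:
    s3.put_object(
        Bucket="twitter-data-lake",
        Key="raw/tweets.json",
        Body=body,
    )
```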


Partitioning in Amazon S3 Bucket

The most important aspect of storing data in S3 is partitioning. We can partition our data using the NiFi Expression Language in the Object Key field. Here we have used day-wise partitioning.

Tweets are therefore stored in per-day folders. This partitioning approach is beneficial when doing Twitter analysis; suppose we want to analyze tweets for a particular day or week.

With partitioning, we don't need to scan all the tweets stored in S3; we simply define our filters using the partitions.

Expression Used: ${now():format('yyyy/MMM/dd')}/${filename}

It creates a path in our S3 bucket like this: Year/Month/Date/filename.
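
To illustrate why this layout helps, a query for a single day only has to list one prefix instead of scanning the whole bucket. The boto3 sketch below builds the same Year/Month/Date prefix that the NiFi expression produces and lists only that partition; the bucket name twitter-data-lake is an assumption.

```python
from datetime import datetime

import boto3

s3 = boto3.client("s3")

# Build the same Year/Month/Date prefix the NiFi expression produces,
# e.g. "2018/Jan/15/".
prefix = datetime.utcnow().strftime("%Y/%b/%d/")

# Only the objects under today's partition are listed; the rest of the
# bucket is never touched.
response = s3.list_objects_v2(Bucket="twitter-data-lake", Prefix=prefix)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```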


Data Integration Using Apache NiFi to Hive

For transferring data to Hive, NiFi has two processors: PutHiveStreaming, which expects the incoming FlowFile to be in Avro format, and PutHiveQL, which expects the incoming FlowFile to contain the HiveQL command to execute.

Here we use PutHiveStreaming to send data to Hive. The Twitter output data is JSON, so we first convert it to Avro format and then pass it to PutHiveStreaming.

In PutHiveStreaming, we configure the Hive Metastore URI, database name, and table name. The table we are writing to must already exist in Hive.
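
The Hive Streaming API that PutHiveStreaming relies on expects a bucketed, transactional ORC table. The sketch below creates such a table with PyHive; the Hive server host, database name, simplified column list, and bucket count are assumptions for illustration.

```python
from pyhive import hive

# Assumed HiveServer2 host/port and database name.
conn = hive.Connection(host="hive-server", port=10000, database="twitter")
cursor = conn.cursor()

# Hive Streaming requires a bucketed, transactional table stored as ORC.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS tweets (
        id BIGINT,
        created_at STRING,
        text STRING
    )
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional' = 'true')
""")
```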


Data Integration Using Apache NiFi to ElasticSearch

To visualize the incoming data in Kibana, we route the data to ElasticSearch.

Defining ElasticSearch http-basic

For routing data to ElasticSearch, we use the NiFi processor PutElasticsearchHttp. It moves the data to the defined ElasticSearch index. Here we set the ElasticSearch URL, index, type, etc.

This processor writes the Twitter data to the ElasticSearch index. But first we need to create the index in ElasticSearch and manually define the mapping for some fields, such as created_at, because we need its type to be date.
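
A sketch of creating that index and mapping up front with the Python Elasticsearch client is shown below. The index name twitter, the type name tweet, the cluster URL, and the date format string (which matches Twitter's created_at layout) are assumptions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # assumed ElasticSearch URL

# Map created_at as a date so Kibana can use it for time-based filtering.
# The format matches Twitter's created_at layout,
# e.g. "Mon Jan 01 00:00:00 +0000 2018".
mapping = {
    "mappings": {
        "tweet": {
            "properties": {
                "created_at": {
                    "type": "date",
                    "format": "EEE MMM dd HH:mm:ss Z yyyy"
                }
            }
        }
    }
}

es.indices.create(index="twitter", body=mapping)
```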


Data Visualization in Kibana

Setting Up Dashboard in Kibana

First, we need to add the created index to Kibana as an index pattern.


Integrating Apache Spark and NiFi for Data Lakes

Apache Spark is widely used for large-scale data processing. Spark can process data in both batch mode and streaming mode.

Data transmission from Apache NiFi to Apache Spark uses site-to-site communication, and an output port is used to publish data from the source.

In the above data flow, we used the TailFile processor, configured to tail the nifi-app.log file. It sends all of that information to the output port 'spark'. We can then use this output port when writing the Spark job.

In the same way, we can send our Twitter records to any output port, and this output port can then be used for Spark Streaming.


Integrating Apache Flink With Apache NiFi for Data Lake

Apache Flink is an open source stream processing framework developed by the Apache Software Foundation. We can use this for stream processing, network/sensor monitoring, error detection, etc.

Data transmission from Apache NiFi to Apache Flink also uses site-to-site communication, with an output port used to publish data from the source.


Performance & Scaling Results For Apache NiFi

We tested the data flows on a four-node Apache NiFi cluster, using the NiFi processor GenerateFlowFile for load testing; this processor creates FlowFiles with random data or custom content. We tested data transmission to Amazon S3 and Apache Kafka.

The results shown in the table are the volumes of data processed by NiFi per five minutes.

Note: These tests were performed using Amazon EC2 instances (m4.large). For Kafka, we used a three-node Apache Kafka cluster.


How Can Don Help You?

Don Big Data and Analytics Solutions for Enterprises and Startups