How To Ingest Data Using Hortonworks Hadoop With Sqoop and Flume

In this tutorial, you will learn the basics of data ingestion using Hortonworks Hadoop with Sqoop and Flume.

Apache Sqoop was designed to transfer data between Hadoop and relational database servers such as MySQL and Oracle; for example, it can import tables from a MySQL database into Hadoop HDFS.
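As a quick illustration, a typical Sqoop import from MySQL into HDFS looks like the sketch below. The host name, database, credentials, table, and HDFS paths are placeholders you would replace with your own:

    # Import the "customers" table from a MySQL database into HDFS
    # (host, database, user, table, and paths below are example values)
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/retail_db \
      --username retail_user \
      --password-file /user/retail_user/.db_password \
      --table customers \
      --target-dir /user/retail_user/customers \
      --num-mappers 4

Under the hood, Sqoop runs this as a MapReduce job, and --num-mappers controls how many parallel tasks read from the source table.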

Apache Flume is a distributed service for collecting, aggregating, and moving large amounts of streaming data into HDFS.
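Flume ingestion is driven by an agent configuration that wires a source to a channel and a sink. The minimal sketch below (the agent name, log path, and HDFS directory are assumed placeholders) tails a local log file and writes its events into HDFS:

    # example.conf: one agent (a1) with an exec source, memory channel, and HDFS sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # stream new lines from a log file (path is a placeholder)
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/app/events.log
    a1.sources.r1.channels = c1

    # buffer events in memory between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # write events into date-partitioned HDFS directories
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /user/flume/events/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true

The agent is then started with a command along the lines of "flume-ng agent --conf /etc/flume/conf --conf-file example.conf --name a1", after which events begin landing in the configured HDFS directory.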

This knowledge is incredibly useful when preparing for the Hortonworks Hadoop Developer Certification.

The certification exam is entirely hands-on, with no multiple-choice questions. It tests your ability to solve problems on a Hortonworks cluster in a limited amount of time, drawing on the knowledge you have gained through hands-on development.

You’ll want to make sure no surprises arise during the actual exam, so the best advice here is to adopt the mantra of “Practice! Practice!! Practice!!!”



Hrishikesh Mukund
Analytics Associate
About Hrishikesh: Hrishikesh provides solutions to big-data problems involving Hadoop ecosystem components such as MapReduce, YARN, HDFS, HBase, Oozie, Sqoop, Hive, Pig, and Flume, and provides architectural guidance for configuring the Hadoop ecosystem from scratch. Currently, he is focused on integrating BI tools such as Spotfire into the Hadoop ecosystem. He received his M.S. in Computer Science from the University of North Carolina at Charlotte, where he was a Teaching Assistant for Algorithms and Data Structures and Mobile Application Development. His future goals and interests include integrating big data with mobile devices, visual analytics, and learning different Hadoop architectures.

Connect with Hrishikesh on LinkedIn: https://www.linkedin.com/in/hrishimukund