Details about bigdata

Migrating Netezza to Impala SQL Best Practices

Nowadays everybody wants to migrate to the Hadoop environment for their analytics, including real-time or near real-time workloads. In this post I will explain some best practices for migrating Netezza to Impala SQL. Impala uses standard SQL, but you might still need to modify the source SQL when bringing a specific application to Hadoop Impala due to variations in data types, built-in functions and, of course, Hadoop-specific syntax. Even if the SQL works correctly in Impala, you might consider rewriting it to improve performance. Read: Netezza Hadoop Connector…
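As a rough sketch of the kind of rewrite involved (the host name, table and columns below are my own made-up example, not from the post): a Netezza DDL such as CREATE TABLE sales (id BYTEINT, amount NUMERIC(12,2)) DISTRIBUTE ON (id); cannot be copied straight into Impala, because Impala has no DISTRIBUTE ON clause and uses TINYINT in place of BYTEINT. One possible Impala version, run through impala-shell:

    # Assumption: "impala-host" and the sales table are illustrative placeholders.
    impala-shell -i impala-host -q "
      CREATE TABLE sales (
        id     TINYINT,        -- Netezza BYTEINT maps to Impala TINYINT
        amount DECIMAL(12,2)   -- Netezza NUMERIC maps to Impala DECIMAL
      )
      STORED AS PARQUET;       -- no DISTRIBUTE ON clause in Impala
    "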


Hadoop Single Node Cluster Setup on Ubuntu

In this tutorial, I will explain how to set up a Hadoop single node cluster on Ubuntu 14.04. The single node cluster will sit on top of the Hadoop Distributed File System (HDFS). Hadoop is a Java framework for running applications on large clusters made up of commodity hardware. The Hadoop framework allows us to run MapReduce programs on data stored in the highly fault-tolerant Hadoop Distributed File System. Related Readings: How to Learn Apache Hadoop. Also: 7 Best Books to Learn Bigdata Hadoop. The main…
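A rough outline of the steps the tutorial walks through (the Hadoop version, download URL and install paths here are my own assumptions; adjust them to your environment):

    # Install Java, unpack Hadoop and bring up a single-node (pseudo-distributed) HDFS.
    sudo apt-get update && sudo apt-get install -y openjdk-7-jdk
    wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
    tar -xzf hadoop-2.6.0.tar.gz && sudo mv hadoop-2.6.0 /usr/local/hadoop
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
    export HADOOP_HOME=/usr/local/hadoop
    # After editing core-site.xml and hdfs-site.xml for the single-node setup:
    $HADOOP_HOME/bin/hdfs namenode -format     # initialise the HDFS metadata
    $HADOOP_HOME/sbin/start-dfs.sh             # start the NameNode and DataNode daemons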


7 Best Hadoop Books to Learn Bigdata Hadoop

The Hadoop ecosystem is vast, and it may take a long time to learn bigdata and start implementing applications; therefore, people new to big data Hadoop technology must choose the right book to start with. Here are some of the best Hadoop books you may want to consider. Hadoop and Bigdata are in huge demand in domains like finance, insurance, banking, social networking and many other platforms that deal with very large data sets. Hadoop experts are in great demand in industries which need to handle big, complicated data sets. A working knowledge of…


How to Learn Apache Hadoop

Most of you want to know what Apache Hadoop is, and how and where to start learning it. Here I'm going to share some of the steps I followed to learn Hadoop. Don't worry! You don't have to be a Java programmer to learn Hadoop. You should know a little bit of basic Linux commands, as in the sketch below. You will pick up the remaining tools once you log in to a cluster :-) Let's first see what Hadoop is. Apache Hadoop is an open source framework to process very large data sets (BigData). Hadoop allows the distributed storage and…
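To give a feel for that prerequisite, here are a few basic Linux and HDFS commands of the kind you would run after logging in to a cluster (the file and directory names are only placeholders):

    ls -l && pwd                                      # plain Linux: list files, show current directory
    hdfs dfs -mkdir -p /user/hadoop/input             # create a directory in HDFS
    hdfs dfs -put localfile.txt /user/hadoop/input/   # copy a local file into HDFS
    hdfs dfs -ls /user/hadoop/input                   # list the files stored in HDFS
    hdfs dfs -cat /user/hadoop/input/localfile.txt    # print a file stored in HDFS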


Import data using Apache Sqoop

Sqoop allows easy import of data from structured data stores such as relational databases, enterprise data warehouses, and NoSQL systems. Using Sqoop, you can provision data from an external system onto HDFS, and populate tables in Hive and HBase. Sqoop integrates with Oozie, allowing you to schedule and automate import tasks. Read: Sqoop import Relational Database Table into HBase Table, Export data using Sqoop, Sqoop Command with Secure Password, Import Netezza Tables using Apache Sqoop. Sqoop uses a connector-based architecture which supports plugins that provide connectivity to new…
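A minimal sketch of such an import (the JDBC URL, credentials, table and paths are placeholders, not values from the post):

    # Import one RDBMS table into HDFS as delimited files.
    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username sqoop_user \
      --password-file /user/hadoop/.db-password \
      --table customers \
      --target-dir /user/hadoop/customers \
      -m 4                      # run 4 parallel map tasks
    # Adding --hive-import (and optionally --hive-table) loads the same data into Hive instead.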


Export data using Apache Sqoop

In some cases, data processed by Hadoop pipelines may be needed in production systems to help run additional critical business functions. Sqoop can export a set of files from HDFS to an RDBMS. The target table must already exist in the database. The input files are read and parsed into a set of records according to user-specified delimiters. The delimiter is provided in the Sqoop options file or as a CLI option.
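A minimal sketch of an export under those rules (the connection details, table and directory are placeholders; the daily_summary table would have to exist in the database already):

    # Push delimited files from HDFS back into an existing RDBMS table.
    sqoop export \
      --connect jdbc:mysql://dbhost/salesdb \
      --username sqoop_user \
      --password-file /user/hadoop/.db-password \
      --table daily_summary \
      --export-dir /user/hadoop/output/daily_summary \
      --input-fields-terminated-by '\t'   # delimiter used when parsing the input files
    # The same arguments can also be kept in a text file and supplied with --options-file.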


Sqoop Architecture – Mappers with No Reducers

Sqoop is a tool designed to transfer data between Hadoop and various relational databases. You can use Sqoop to import data from a relational database management system (RDBMS) such as Netezza, MySQL, Oracle or SQL Server into the Hadoop Distributed File System (HDFS), transform the data and perform complex calculations in Hadoop MapReduce, and then export the data back into an RDBMS. Sqoop is based on a connector architecture which supports plugins to provide connectivity to external systems (RDBMS).
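To make the "mappers with no reducers" idea concrete, here is a hedged example (the connection string, table and column names are invented for illustration): a Sqoop import launches a map-only MapReduce job, so the knobs you control are how many map tasks run and which column is used to split the work between them.

    # --split-by names the column whose value range is divided among the mappers;
    # --num-mappers (or -m) sets how many map tasks run; there is no reduce phase.
    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username sqoop_user \
      --password-file /user/hadoop/.db-password \
      --table orders \
      --split-by order_id \
      --num-mappers 8 \
      --target-dir /user/hadoop/orders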
