Redshift Primary Key Constraint and Syntax

The Redshift primary key constraint is informational only; it is not enforced by Amazon Redshift. Amazon Redshift supports referential integrity constraints such as primary key, foreign key, and unique key as part of the SQL-92 standard. You can create a primary key constraint while creating tables in a Redshift database, but it will not be enforced while loading Redshift tables. The Redshift query planner uses these constraints to create a better query execution plan. If a primary key is set at the column level, it must be on a single column. If PRIMARY…
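A minimal sketch of the two syntax forms (table and column names are illustrative):

```sql
-- Primary key at the column level: must be a single column.
-- Redshift records the constraint but does not enforce it.
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER,
    order_date  DATE
);

-- Primary key at the table level: may span multiple columns.
CREATE TABLE order_items (
    order_id INTEGER,
    item_id  INTEGER,
    quantity INTEGER,
    PRIMARY KEY (order_id, item_id)
);
```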

Amazon Redshift Data Types and Best Practices

A data type is an attribute that specifies the type of data that will be stored in a particular column. Each column, variable, and expression has an associated data type in SQL. However, different databases offer different data types for columns. Redshift data types are similar to those supported by traditional RDBMSs; Amazon Redshift data types are also similar to IBM Netezza data types. When you issue the Redshift CREATE TABLE command, each column in the table must have a name and a data type associated with it.…
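A short sketch showing several common Redshift column types in one table definition (the table and columns are hypothetical):

```sql
-- Each column gets a name and a data type.
CREATE TABLE customer (
    customer_id BIGINT,          -- 64-bit integer
    name        VARCHAR(100),    -- variable-length string
    is_active   BOOLEAN,         -- true/false flag
    balance     DECIMAL(12,2),   -- exact numeric with scale
    signup_date DATE,            -- calendar date
    last_login  TIMESTAMP        -- date and time
);
```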

Teradata WITH Clause Syntax, Usage and Examples

The Teradata WITH clause is an optional clause that always precedes the SELECT clause in a query statement. Each subquery in the WITH clause specifies a table name, an optional list of column names, and a query expression that evaluates to a table (usually a SELECT statement). In SQL, WITH clauses are commonly referred to as common table expressions (CTEs). The WITH clause is used for many purposes: if you want to find a hierarchy in the data, the recursive WITH clause is used. If your requirement is to reuse…
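A minimal non-recursive example, assuming hypothetical `employee` and `dept_total` names, showing the table name, column list, and query expression described above:

```sql
-- dept_total(dept_no, total_salary) names the derived table
-- and its columns; the SELECT that follows can reuse it.
WITH dept_total (dept_no, total_salary) AS (
    SELECT dept_no, SUM(salary)
    FROM employee
    GROUP BY dept_no
)
SELECT e.emp_name, d.total_salary
FROM employee e
JOIN dept_total d
  ON e.dept_no = d.dept_no;
```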

Apache Hive Table Design Best Practices and Considerations

As you plan your database or data warehouse migration to the Hadoop ecosystem, there are key table design decisions that will heavily influence overall Hive query performance. In this article, we will check Apache Hive table design best practices. Table design plays a very important role in Hive query performance. These design choices also have a significant effect on storage requirements, which in turn affects query performance by reducing the number of I/O operations and minimizing the memory required to process Hive queries. Read: Apache Hive…
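As one illustration of such design choices, a sketch combining partitioning, bucketing, and a columnar storage format (the table, columns, and bucket count are hypothetical):

```sql
-- Partitioning prunes data at scan time; bucketing helps joins
-- and sampling; ORC storage reduces I/O.
CREATE TABLE sales (
    order_id BIGINT,
    amount   DECIMAL(10,2)
)
PARTITIONED BY (sale_date STRING)
CLUSTERED BY (order_id) INTO 32 BUCKETS
STORED AS ORC;
```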

Amazon Redshift WITH Clause Syntax, Usage and Examples

The Redshift WITH clause is an optional clause that always precedes the SELECT clause in a query statement. The WITH clause contains a subquery that is defined as a temporary table, similar to a view definition. Each subquery in the WITH clause specifies a table name, an optional list of column names, and a query expression that evaluates to a table (usually a SELECT statement). In SQL, WITH clauses are commonly referred to as common table expressions (CTEs). A CTE or WITH clause is syntactic sugar for a subquery. Where you can use…
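A small sketch of the "syntactic sugar" point, using hypothetical `sales` data: the CTE form reads like a named temporary table where a nested subquery would otherwise appear:

```sql
-- The CTE names the aggregation once; the outer query treats it
-- like a temporary table.
WITH top_customers AS (
    SELECT customer_id, SUM(amount) AS total
    FROM sales
    GROUP BY customer_id
)
SELECT customer_id, total
FROM top_customers
WHERE total > 1000;
```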

Apache Hive EXPLAIN Command and Example

The latest version of Hive uses a cost-based optimizer (CBO) to improve Hive query performance. Hive uses the cost-based optimizer to determine the best method for scan and join operations, join order, and aggregate operations. You can use the Apache Hive EXPLAIN command to display the execution plan that the Hive query engine generates and uses while executing any query in the Hadoop ecosystem. Read: Hive ANALYZE TABLE Command, Hive Performance Tuning Best Practices. The latest version of Apache Hive uses the cost-based optimizer to…
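A minimal sketch of the command, against a hypothetical `employee` table:

```sql
-- EXPLAIN prints the plan stages without executing the query.
EXPLAIN
SELECT dept_no, COUNT(*)
FROM employee
GROUP BY dept_no;

-- EXPLAIN EXTENDED adds lower-level detail such as file paths.
EXPLAIN EXTENDED
SELECT * FROM employee WHERE dept_no = 10;
```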

HiveServer2 Beeline Command Line Shell Options and Examples

HiveServer2 supports Beeline, a command-line shell that works with HiveServer2. It is a JDBC client based on the SQLLine CLI. The Beeline shell works in both embedded and remote modes. In embedded mode, it runs an embedded Hive (similar to the Hive command line), whereas remote mode connects to a separate HiveServer2 process over Thrift. In this article, we will check commonly used HiveServer2 Beeline command-line shell options with examples. You can run all Hive command-line and interactive options from Beeline…
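A sketch of common invocations; the host, port, and credentials are placeholders:

```shell
# Remote mode: connect to a HiveServer2 process over Thrift via JDBC.
beeline -u "jdbc:hive2://hs2-host:10000/default" -n hive_user -p hive_pass

# Run a single query non-interactively with -e.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "SHOW TABLES;"

# Run a script file with -f.
beeline -u "jdbc:hive2://hs2-host:10000/default" -f /tmp/query.hql
```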

Easy Methods to Integrate Netezza and Amazon S3 – Steps

Amazon AWS is gaining popularity as a cloud-based web service. It takes just a few clicks to get your system or storage up and running. Amazon Web Services (AWS) provides on-demand cloud computing platforms and storage services (S3). Amazon S3 is fast, reliable cloud storage, which is the reason most organizations use it to store their data. In this article, we will check easy methods to integrate Netezza and Amazon S3 storage for data transfer between them. You may have to connect to Amazon S3 to pull data…

Different Methods to Load Data from Amazon S3 into Netezza Table

Amazon, as a cloud-based service, is gaining popularity. Amazon Web Services (AWS) provides on-demand cloud computing platforms and storage services. Amazon S3 is fast, reliable cloud storage, which is the reason most organizations use it to store their data. In this article, we will check how to load data from Amazon S3 into Netezza tables. You may also be interested in loading Netezza data to an S3 bucket: Export Netezza Data into Amazon S3 Bucket. We will be using the Amazon AWS CLI to load data from Amazon S3 into Netezza…
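One possible approach, sketched with placeholder bucket, database, and table names: stream the S3 object to standard output with the AWS CLI and pipe it into the Netezza `nzload` utility. This assumes the AWS CLI is configured and the Netezza client tools are installed; adjust the delimiter to match your file.

```shell
# "aws s3 cp ... -" streams the object to stdout; nzload reads it
# from /dev/stdin so no intermediate file is needed.
aws s3 cp s3://my-bucket/data/customer.csv - \
  | nzload -db testdb -t customer -delim ',' -df /dev/stdin
```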

Different Methods to Export Netezza Data into Amazon S3 Bucket

Nowadays, Amazon AWS is gaining popularity as a cloud-based web service. It takes just a few clicks to get your system or storage up and running. In this article, we will check how to integrate Netezza and Amazon S3. We will also check how to export Netezza data into an Amazon S3 bucket using the Amazon Web Services command-line interface (AWS CLI), with an example. You may also be interested in loading data from Amazon S3 to a Netezza table: Different Methods to Load Data from Amazon S3 into Netezza Table…
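A minimal sketch of one export path, with placeholder database, table, file, and bucket names: write the query result to a local file with `nzsql`, then upload it with the AWS CLI. This assumes the Netezza client tools and a configured AWS CLI are available.

```shell
# Export the query result to a local file, then copy it to S3.
nzsql -db testdb -c "SELECT * FROM customer" -o /tmp/customer.out
aws s3 cp /tmp/customer.out s3://my-bucket/exports/customer.out
```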
