What is Apache Hive? : Understanding Hive

In this video, you will get a quick overview of Apache Hive, one of the most popular data warehouse components in the big data landscape. It is mainly used to provide a SQL-like interface on top of the Hadoop file system (HDFS).
Hive was originally developed at Facebook and is now maintained as Apache Hive by the Apache Software Foundation. It is used and developed by large companies such as Netflix and Amazon as well.

Why Was Hive Developed?
=======================
The Hadoop ecosystem is not just scalable but also cost-effective when it comes to processing large volumes of data. It is also a fairly new framework that packs a lot of punch. However, organizations with traditional data warehouses are built on SQL, with users and developers who rely on SQL queries to extract data.

This makes getting used to the Hadoop ecosystem an uphill task, and that is exactly why Hive was developed.

Hive provides a SQL-like interface: users write queries in HQL (Hive Query Language) to extract data from Hadoop. Hive converts these SQL-like queries into MapReduce jobs, and that is how it talks to the Hadoop ecosystem and the HDFS file system.
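To give a flavour of this, here is a small illustrative HQL query (the table name and columns are hypothetical, not from the video); it reads almost exactly like standard SQL:

```sql
-- Hypothetical example: count page views per country.
-- Hive compiles this query into one or more MapReduce jobs behind the scenes.
SELECT country, COUNT(*) AS view_count
FROM page_views                      -- assumed table, for illustration only
WHERE view_date = '2021-06-01'
GROUP BY country
ORDER BY view_count DESC;
```

A SQL user can write this without knowing anything about MapReduce, which is precisely the gap Hive was built to close.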

How and When Can Hive Be Used?
==============================
- Hive can be used for OLAP (online analytical processing)
- It is scalable, fast, and flexible
- It is a great platform for SQL users to write SQL-like queries that interact with large datasets residing on the HDFS file system
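For instance, pointing Hive at a dataset that already resides on HDFS is typically done with an external table; the schema and path below are illustrative assumptions, not part of the video:

```sql
-- Hypothetical schema and HDFS path, for illustration only.
-- An EXTERNAL table leaves the underlying files on HDFS untouched;
-- dropping the table drops only the metadata, not the data.
CREATE EXTERNAL TABLE page_views (
  user_id   BIGINT,
  url       STRING,
  view_date STRING,
  country   STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/data/page_views';
```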
Here is what Hive cannot be used for:
=====================================
- It is not a relational database
- It cannot be used for OLTP (online transaction processing)
- It cannot be used for real-time updates or queries
- It cannot be used in scenarios where low-latency data retrieval is expected, because converting Hive queries into MapReduce jobs introduces latency
Some of the Finest Features of Hive
===================================
- It supports different file formats such as SequenceFile, text file, Avro, ORC, and RCFile
- Metadata gets stored in an RDBMS, such as the Derby database
- Hive supports several compression codecs, such as Snappy and gzip, and can run queries directly on compressed data
- Users can write SQL-like queries that Hive converts into MapReduce, Tez, or Spark jobs to run against Hadoop datasets
- Users can plug custom MapReduce scripts and UDFs (user-defined functions) into Hive queries
- Specialized joins, such as map-side joins, are available to help improve query performance
If you don’t understand any of the above terms, that is fine. We will look into the above features in detail in our upcoming videos.
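As a quick taste of a few of the features above, here is a hedged sketch of how file formats, compression, and execution engines surface in HQL (the table name is hypothetical, and exact property support can vary by Hive version):

```sql
-- Store a table in the ORC columnar format with Snappy compression.
CREATE TABLE page_views_orc (
  user_id BIGINT,
  url     STRING
)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- Choose the engine Hive compiles queries into:
-- 'mr' (MapReduce), 'tez', or 'spark'.
SET hive.execution.engine=tez;
```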


