- Hadoop Introduction
- Why we need Hadoop
- What is Big Data?
- Big Data Hadoop Introduction
Mode: Online
Fees: ₹649 (original price ₹3,199)
Quick facts
| Particulars | Details |
|---|---|
| Medium of instructions | English |
| Mode of learning | Self study |
| Mode of delivery | Video and text based |
Course overview
Big Data Testing is the process of verifying and validating the functionality of big data applications. Big data refers to volumes of information too large for traditional storage solutions to manage. The Learn Big Data Testing with Hadoop and Hive with Pig Script certification course, developed by a Big Data Engineer and offered on Udemy, is designed for learners who want a thorough, ground-up understanding of the methods involved in big data testing.
The Learn Big Data Testing with Hadoop and Hive with Pig Script online training contains 11 hours of video-based learning resources focused on the methodologies of big data testing, helping learners build careers in the domain. The course covers Unix, Linux, SED commands, AWK commands, and shell scripting, and introduces the Hive query language for Hadoop data integration.
The highlights
- Certificate of completion
- Self-paced course
- 11 hours of pre-recorded video content
- Learning resources
Program offerings
- Online course
- Learning resources
- 30-day money-back guarantee
- Unlimited access
- Accessible on mobile devices and TV
Course and certificate fees
- Fees information: ₹649 (original price ₹3,199)
- Certificate availability: Yes
- Certificate providing authority: Udemy
What you will learn
After completing the Learn Big Data Testing with Hadoop and Hive with Pig Script online certification, learners will gain an insight into the fundamentals of big data and will acquire the knowledge of the core functionalities of Hive, Hadoop, and Pig for operations like big data testing. In this big data testing certification, learners will explore the functionalities of Unix, Linux, and shell scripting for big data testing. In this big data testing course, learners will also study strategies involved with SED, AWK, and HDFS.
The syllabus
Big Data Testing - Hadoop
Cloudera Environment Setup Process
- What is Cloudera and How to Setup Cloudera Environment?
Different Hadoop Commands
- Hadoop Practice Questions for Students
- Hadoop Installation Process and Service Start/Stop
- Hadoop Start and Stop on Ubuntu Machine
- Hadoop Services and JPS Command
- All HDFS Commands - Part 1
- All HDFS Commands - Part 2
- All HDFS Commands - Part 3
- All HDFS Commands - Part 4
- Different HDFS Commands - Part 5 (Ubuntu)
- Different HDFS Commands - Part 6 (Ubuntu)
- How to Work with Map-Reduce for Word Count Example in Eclipse?
- Execute Hadoop Jar for Word Count Program from Hadoop Terminal
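The word-count program listed above is the canonical Map-Reduce exercise. Its logic can be sketched locally with a plain shell pipeline (no Hadoop required), which is handy for checking expected output before executing the Hadoop jar; the input file here is a made-up example:

```shell
#!/bin/sh
# Local word-count sketch mirroring what the Map-Reduce word-count job computes.
# Hypothetical input; any text file works.
printf 'big data\nbig testing\ndata data\n' > input.txt

# "Map": one word per line; "shuffle/sort": sort; "reduce": uniq -c aggregates.
tr -s ' \t' '\n' < input.txt | sort | uniq -c | awk '{print $2, $1}'
# → big 2
# → data 3
# → testing 1
```

The `sort` stage plays the role of the Map-Reduce shuffle: it brings identical keys together so the "reducer" (`uniq -c`) can count each word in one pass.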
Hadoop Background
- Unix and Shell Scripting Practice Questions for Students
- HDFS Architecture with NameNode and DataNode
- Different Hadoop Running Modes
- Hadoop Main Components
- 3 Vs in Big Data
- Hadoop Different Data Supports
- Differences Between Hadoop 1.x and Hadoop 2.x
- HDFS and Hadoop Daemons
- Hadoop Block Size and Input Split
- Automation Related Practice Questions for Students
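The block size topic above reduces to simple arithmetic: HDFS splits a file into fixed-size blocks (128 MB by default in Hadoop 2.x), so the block count is the file size divided by the block size, rounded up. A minimal sketch with an assumed 300 MB file:

```shell
#!/bin/sh
# Ceiling division: blocks = (file_size + block_size - 1) / block_size
FILE_MB=300     # assumed example file size
BLOCK_MB=128    # Hadoop 2.x default block size

BLOCKS=$(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))
echo "A ${FILE_MB} MB file occupies ${BLOCKS} HDFS blocks"
# → A 300 MB file occupies 3 HDFS blocks (128 + 128 + 44 MB)
```

Note the related distinction covered in the course: the block is a physical storage unit, while the input split is a logical unit handed to each mapper, and the two need not coincide.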
Big Data Testing - Hive
- Hive Overview
- Hive Practice Questions for Students
- Hive Installation
- Hive Queries - Part1
- Hive Characteristics and Features
- Different Hive Table Types and Their Differences
- How Hive Differs from an RDBMS
- Hive Components
- Hive Queries (Managed Table) - Part2
- Hive Queries (External Table) - Part3
- Few Important Hive Basic Queries
- Different Hive Joins i.e. Inner Join, Left Outer Join and Right Outer Join
- Hive Partitions - Static Partition Implementation
- Hive Partitions - Dynamic Partition Implementation
- Hive - Indexes Implementation
- Hive - Views Implementation
- Hive - Bucketing
- Hive UDFs
- INSERT INTO and INSERT OVERWRITE
- Use Case (MovieLens) implementation Part-1
- Use Case implementation Part-2
- How to Update and Delete Table Records in Hive
- How to Add Columns in Existing Table
- Min Max Avg and Count Functions Implementation
- Add Table Using Existing Table
- How to Rename Table
- Replace Columns Schema
- How to Compare Two HIVE Tables to Get the Distinct Records?
- How to execute Hive Tables/Files using Shell Script File?
- Collections
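The two-table comparison lecture above can be approximated outside Hive: if each table is exported to a sorted text file (one record per line), `comm` isolates the distinct records. In Hive itself this is typically done with a FULL OUTER JOIN; the export files below are hypothetical:

```shell
#!/bin/sh
# Hypothetical sorted exports of two Hive tables, one record per line.
printf 'alice\nbob\ncarol\n' > table_a.txt
printf 'bob\ncarol\ndave\n'  > table_b.txt

comm -23 table_a.txt table_b.txt   # records only in table_a
# → alice
comm -13 table_a.txt table_b.txt   # records only in table_b
# → dave
```

`comm` requires both inputs to be sorted; for real Hive exports, pipe each file through `sort` first.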
Unix/Linux and Shell Scripting
- How to create File and Directories?
- How to Open file in Editor and Save the file?
- Copy and Move commands
- How to Find Files?
- Head and Tail Commands
- More and Less Commands
- How to Find a String/Word in a File with a Word Count Condition
- How to Work with SED commands in Unix
- How to Work with AWK command in Unix
- Redirect Output Generated from a Condition to Another File
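The string-search, SED, AWK, and redirection topics above combine naturally. A small sketch on a hypothetical log file, showing a match count, a sed substitution, an awk column extraction, and conditional output redirected to another file:

```shell
#!/bin/sh
# Hypothetical sample file.
printf 'ERROR disk full\nINFO ok\nERROR net down\n' > app.log

grep -c '^ERROR' app.log                 # count matching lines → 2
sed 's/ERROR/ALERT/' app.log             # substitute on every line
awk '{print $1}' app.log                 # first column of each line
awk '$1 == "ERROR"' app.log > errors.txt # redirect conditional output
```

The last line is the redirection pattern from the lecture title: `awk` keeps only lines whose first field matches the condition, and `>` writes that filtered output to a new file.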
Pig Concepts
- Pig Scripting Practice Questions for Students
- Pig Installation
- Pig Shell Types
- How to Load Data in Pig from the Local Environment and Query the Data
- How to test Schema and Metadata?
- Grouping in Pig Script
- How to Filter Data in Pig Scripting (Local Mode)
- How to Limit Pig Script Output?
- How to Load Data in Pig from HDFS Environment and Query?
- How to Filter Data Based on String (HDFS Mode)