Cloudera Developer Training for Spark & Hadoop

SS Course: 43853

Course Overview

This four-day hands-on training course delivers the key concepts and expertise developers need to use Apache Spark to develop high-performance parallel applications. Participants will learn how to use Spark SQL to query structured data and Spark Streaming to perform real-time processing on streaming data from a variety of sources. Developers will also practice writing applications that use core Spark to perform ETL processing and iterative algorithms. The course covers how to work with big data stored in a distributed file system and how to execute Spark applications on a Hadoop cluster. After taking this course, participants will be prepared to face real-world challenges and build applications that deliver faster, better decisions and interactive analysis across a wide variety of use cases, architectures, and industries.
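
To give a flavor of the ETL workflow described above, here is a minimal PySpark sketch (Python is one of the two course languages). It is not taken from the course materials; the input path, column names, and output path are hypothetical.

    # A minimal core-Spark ETL sketch: extract raw records, transform them,
    # and load the result back into the distributed file system.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw JSON events from HDFS (hypothetical path)
    raw = spark.read.json("hdfs:///data/raw/events")

    # Transform: drop malformed rows and normalize a column (hypothetical schema)
    cleaned = (raw
               .where(col("event_time").isNotNull())
               .withColumn("event_type", col("event_type").cast("string")))

    # Load: write the curated data back out as Parquet
    cleaned.write.mode("overwrite").parquet("hdfs:///data/curated/events")

    spark.stop()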

Scheduled Classes


What You'll Learn

  • How the Apache Hadoop ecosystem fits in with the data processing lifecycle
  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to write, configure, and deploy Apache Spark applications on a Hadoop cluster
  • How to use the Spark shell and Spark applications to explore, process, and analyze distributed data
  • How to query data using Spark SQL, DataFrames, and Datasets
  • How to use Spark Streaming to process a live data stream
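
As a rough illustration of the querying objectives above, the sketch below writes the same aggregation once with Spark SQL and once with the DataFrame API, again in PySpark. The table and column names (flights, origin, dep_delay) are hypothetical examples, not course data.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import avg

    spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

    # Load structured data and register it as a temporary view
    flights = spark.read.parquet("hdfs:///data/flights")
    flights.createOrReplaceTempView("flights")

    # Query with Spark SQL
    by_origin_sql = spark.sql(
        "SELECT origin, AVG(dep_delay) AS avg_delay FROM flights GROUP BY origin")

    # The equivalent query with the DataFrame API
    by_origin_df = flights.groupBy("origin").agg(avg("dep_delay").alias("avg_delay"))

    by_origin_sql.show()
    by_origin_df.show()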

Outline

  • Querying Tables in Spark Using SQL
  • Querying Files and Views
  • The Catalog API
  • Comparing Spark SQL, Apache Impala, and Apache Hive-on-Spark
  • Datasets and DataFrames
  • Creating Datasets
  • Loading and Saving Datasets
  • Dataset Operations
  • Writing a Spark Application
  • Building and Running an Application
  • Application Deployment Mode
  • The Spark Application Web UI
  • Configuring Application Properties
  • Review: Apache Spark on a Cluster
  • RDD Partitions
  • Example: Partitioning in Queries
  • Stages and Tasks
  • Job Execution Planning
  • Example: Catalyst Execution Plan
  • Example: RDD Execution Plan
  • DataFrame and Dataset Persistence
  • Persistence Storage Levels
  • Viewing Persisted RDDs
  • Common Apache Spark Use Cases
  • Iterative Algorithms in Apache Spark
  • Machine Learning
  • Example: k-means
  • Apache Spark Streaming Overview
  • Creating Streaming DataFrames
  • Transforming DataFrames
  • Executing Streaming Queries
  • Overview
  • Receiving Kafka Messages
  • Sending Kafka Messages
  • Streaming Aggregation (see the sketch after this outline)
  • Joining Streaming DataFrames
  • What Is Apache Kafka?
  • Apache Kafka Overview
  • Scaling Apache Kafka
  • Apache Kafka Cluster Architecture
  • Apache Kafka Command Line Tools
  • Apache Hadoop Overview
  • Data Processing
  • Introduction to the Hands-On Exercises
  • Apache Hadoop Cluster Components
  • HDFS Architecture
  • Using HDFS
  • YARN Architecture
  • Working With YARN
  • What is Apache Spark?
  • Starting the Spark Shell
  • Using the Spark Shell
  • Getting Started with Datasets and DataFrames
  • DataFrame Operations
  • Creating DataFrames from Data Sources
  • Saving DataFrames to Data Sources
  • DataFrame Schemas
  • Eager and Lazy Execution
  • Querying DataFrames Using Column Expressions
  • Grouping and Aggregation Queries
  • Joining DataFrames
  • RDD Overview
  • RDD Data Sources
  • Creating and Saving RDDs
  • RDD Operations
  • Writing and Passing Transformation Functions
  • Transformation Execution
  • Converting Between RDDs and DataFrames
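
To illustrate the streaming and Kafka topics listed above, the sketch below reads messages from a Kafka topic as a streaming DataFrame and runs a windowed streaming aggregation. The broker address and topic name are made up, and running it also requires the Spark-Kafka integration package on the classpath; this is a sketch of the technique, not course material.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

    # Create a streaming DataFrame from a Kafka topic (hypothetical broker/topic)
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Kafka delivers keys and values as binary; cast the value to a string
    messages = events.select(col("value").cast("string").alias("message"),
                             col("timestamp"))

    # Streaming aggregation: count messages per one-minute window
    counts = messages.groupBy(window(col("timestamp"), "1 minute")).count()

    # Execute the streaming query, printing each micro-batch to the console
    query = (counts.writeStream
             .outputMode("complete")
             .format("console")
             .start())

    query.awaitTermination()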

Prerequisites

This course is designed for developers and engineers who have programming experience, but prior knowledge of Spark and Hadoop is not required. Apache Spark examples and hands-on exercises are presented in Scala and Python. The ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful.

Who Should Attend

This course is designed for developers and engineers who have programming experience, but prior knowledge of Hadoop and/or Spark is not required.

Next Step Courses
