Cloud Computing Applications, Part 2: Big Data and Applications in the Cloud

By: Coursera

  • Course Orientation
    • You will become familiar with the course, your classmates, and our learning environment. The orientation will also help you obtain the technical skills required for the course.
  • Module 1: Spark, Hortonworks, HDFS, CAP
    • In Module 1, we introduce you to the world of Big Data applications. We start by introducing you to Apache Spark, a common framework used for many different tasks throughout the course. We then introduce Big Data distribution packages, the HDFS file system, and finally the idea of batch-based Big Data processing using the MapReduce programming paradigm.
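The MapReduce paradigm mentioned above can be sketched in plain Python. This is an illustrative toy, not Hadoop or the course's actual code: the three phases below (map, shuffle, reduce) mimic what the framework does across a cluster, here run in a single process on a classic word-count example.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted pairs by key, as the framework
    # would do between the map and reduce stages.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(grouped):
    # Reduce: combine the values for each key (here, sum the counts).
    return {key: sum(values) for key, values in grouped}

docs = ["big data big ideas", "big clusters"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"])  # 3
```

In a real MapReduce or Spark job, each phase runs in parallel across many machines and HDFS blocks; the programming model, however, is exactly this pair of user-supplied map and reduce functions.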
  • Module 2: Large Scale Data Storage
    • In this module, you will learn about large-scale data storage technologies and frameworks. We start by exploring the challenges of storing large data sets in distributed systems. We then discuss in-memory key/value storage systems, NoSQL distributed databases, and distributed publish/subscribe queues.
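One recurring challenge in distributed key/value storage is deciding which node owns which key so that adding or removing a node remaps as few keys as possible. Consistent hashing, used in several of the systems this module covers, addresses this; the following is a minimal sketch (class and method names are illustrative, not any particular system's API).

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Toy consistent-hash ring: a key is owned by the first node
    found clockwise from the key's position on the ring."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` virtual positions on the
        # ring to spread load more evenly.
        self.ring = []
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Binary-search for the first ring position at or after the
        # key's hash, wrapping around at the end of the ring.
        idx = bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")
```

The payoff is that when a node joins or leaves, only the keys in its arc of the ring move, rather than nearly all keys as with naive `hash(key) % num_nodes` placement.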
  • Module 3: Streaming Systems
    • This module introduces you to real-time streaming systems, also known as Fast Data. We discuss Apache Storm at length, along with Apache Spark Streaming and the Lambda and Kappa architectures. Finally, we contrast all of these technologies as parts of a streaming ecosystem.
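The micro-batch idea behind Spark Streaming can be sketched in plain Python: an unbounded event stream is chopped into small fixed-size batches, and state (here a running count per event type) is carried across batches. Names below are illustrative, not the Spark API.

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    # Discretize an (in principle unbounded) event stream into
    # fixed-size micro-batches, the way Spark Streaming turns a
    # stream into a sequence of small datasets.
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def running_counts(stream, batch_size):
    # Stateful streaming: maintain a running count per event type,
    # emitting a snapshot after every micro-batch.
    totals = Counter()
    for batch in micro_batches(stream, batch_size):
        totals.update(batch)
        yield dict(totals)

events = ["click", "view", "click", "buy", "click", "view"]
snapshots = list(running_counts(events, batch_size=2))
```

By contrast, a Storm topology processes events one tuple at a time rather than in micro-batches; that latency/throughput trade-off is one axis on which the module compares the two systems.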
  • Module 4: Graph Processing and Machine Learning
    • In this module, we discuss the applications of Big Data. In particular, we focus on two topics: graph processing, where massive graphs (such as the web graph) are processed for information, and machine learning, where massive amounts of data are used to train models such as clustering algorithms and frequent pattern mining. We also introduce you to deep learning, where large data sets are used to train neural networks to great effect.
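A canonical example of processing a massive graph such as the web graph is PageRank. The following is a minimal single-machine sketch of the power-iteration form of the algorithm (in production it would run on a distributed graph framework, and the parameter values here are just common defaults).

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over an adjacency dict
    mapping each node to its list of out-links."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # uniform start
    for _ in range(iterations):
        # Every node keeps a base share of rank (the random-jump term).
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, out_links in graph.items():
            if out_links:
                # A node divides its current rank among its out-links.
                share = damping * rank[node] / len(out_links)
                for target in out_links:
                    new_rank[target] += share
            else:
                # Dangling node: spread its rank over all nodes.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Tiny "web": b and c both link to a, so a should rank highest.
web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(web)
```

The same repeated "compute a value from your neighbors, then iterate" pattern underlies most of the graph-processing systems this module covers.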