{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Spark Streaming in Details\n", "\n", "## Feng Li\n", "\n", "### Central University of Finance and Economics\n", "\n", "### [feng.li@cufe.edu.cn](feng.li@cufe.edu.cn)\n", "### Course home page: [https://feng.li/distcomp](https://feng.li/distcomp)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Transformations on DStreams\n", "\n", "Similar to that of RDDs, transformations allow the data from the input DStream to be modified. DStreams support many of the transformations available on normal Spark RDD’s. Some of the common ones are as follows.\n", "\n", "|transformation | Meaning |\n", "|---------------|--------- |\n", "|map(func) | Return a new DStream by passing each element of the source DStream through a function func.|\n", "|flatMap(func) | Similar to map, but each input item can be mapped to 0 or more output items.|\n", "|filter(func) | Return a new DStream by selecting only the records of the source DStream on which func returns true.|\n", "|repartition(numPartitions) | Changes the level of parallelism in this DStream by creating more or fewer partitions.|\n", "|union(otherStream) | Return a new DStream that contains the union of the elements in the source DStream and otherDStream.|\n", "|count() | Return a new DStream of single-element RDDs by counting the number of elements in each RDD of the source DStream.|\n", "|reduce(func) | Return a new DStream of single-element RDDs by aggregating the elements in each RDD of the source DStream using a function func (which takes two arguments and returns one). The function should be associative and commutative so that it can be computed in parallel.|\n", "|countByValue() | When called on a DStream of elements of type K, return a new DStream of (K, Long) pairs where the value of each key is its frequency in each RDD of the source DStream.|\n", "|reduceByKey(func, [numTasks]) | When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function. Note: By default, this uses Spark's default number of parallel tasks (2 for local mode, and in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.|\n", "|join(otherStream, [numTasks]) | When called on two DStreams of (K, V) and (K, W) pairs, return a new DStream of (K, (V, W)) pairs with all pairs of elements for each key.|\n", "|cogroup(otherStream, [numTasks]) | When called on a DStream of (K, V) and (K, W) pairs, return a new DStream of (K, Seq[V], Seq[W]) tuples.|\n", "|transform(func) | Return a new DStream by applying a RDD-to-RDD function to every RDD of the source DStream. This can be used to do arbitrary RDD operations on the DStream.|\n", "|updateStateByKey(func) | Return a new \"state\" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values for the key. 
 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Window Operations\n", "\n", "Spark Streaming also provides windowed computations, which allow you to apply transformations over a sliding window of data. A windowed word-count sketch follows the output-operation table below.\n", "\n", "![streaming-dstream-window](./figures/streaming-dstream-window.png)" ] },
 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "- Any window operation needs to specify two parameters.\n", "\n", " - window length - the duration of the window (3 batch intervals in the figure).\n", " - sliding interval - the interval at which the window operation is performed (2 batch intervals in the figure).\n", "\n", "- These two parameters must be multiples of the batch interval of the source DStream (1 in the figure)." ] },
 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Output Operations on DStreams\n", "\n", "- Output operations allow a DStream’s data to be pushed out to external systems such as a database or a file system.\n", "\n", "- Since output operations let the transformed data be consumed by external systems, they trigger the actual execution of all the DStream transformations (similar to actions for RDDs). Currently, the following output operations are defined:" ] },
 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "|Output Operation|Definition|\n",
 "|-------------|---------------|\n",
 "|`print()`\t|\tPrints the first ten elements of every batch of data in a DStream on the driver node running the streaming application. This is useful for development and debugging. This is called `pprint()` in the Python API.|\n",
 "|`saveAsTextFiles(prefix, [suffix])`\t|\tSave this DStream's contents as text files.|\n",
 "|`saveAsObjectFiles(prefix, [suffix])`\t|\tSave this DStream's contents as SequenceFiles of serialized Java objects. **This is not available in the Python API**.|\n",
 "|`saveAsHadoopFiles(prefix, [suffix])`\t|\tSave this DStream's contents as Hadoop files. **This is not available in the Python API**.|\n",
 "|`foreachRDD(func)`\t|\tThe most generic output operator; it applies a function, func, to each RDD generated from the stream. This function should push the data in each RDD to an external system, such as saving the RDD to files, or writing it over the network to a database. Note that the function func is executed in the driver process running the streaming application, and it will usually contain RDD actions that force the computation of the streaming RDDs.|" ] },
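 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### A windowed word count with output operations\n", "\n",
 "The sketch below combines a window operation with the output operations above: `reduceByKeyAndWindow` recomputes word counts over the last 30 seconds of data every 10 seconds, `pprint()` shows the first ten results of each batch, `saveAsTextFiles()` writes each batch to files, and `foreachRDD` runs arbitrary per-RDD logic. The socket source on `localhost:9999`, the 10-second batch interval, the 30-second window, and the `word-counts` file prefix are all illustrative assumptions; the window length and slide must be multiples of the batch interval." ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "slide" } }, "outputs": [], "source": [
 "# Windowed word count with several output operations (a sketch).\n",
 "# Assumed: a text source on localhost:9999; a 10-second batch interval;\n",
 "# a 30-second window sliding every 10 seconds (both multiples of the batch interval).\n",
 "from pyspark import SparkContext\n",
 "from pyspark.streaming import StreamingContext\n",
 "\n",
 "sc = SparkContext('local[2]', 'WindowedWordCount')\n",
 "ssc = StreamingContext(sc, 10)\n",
 "\n",
 "pairs = (ssc.socketTextStream('localhost', 9999)\n",
 "            .flatMap(lambda line: line.split(' '))\n",
 "            .map(lambda word: (word, 1)))\n",
 "\n",
 "# Recompute counts over the last 30 seconds of data, every 10 seconds.\n",
 "# Passing None as the inverse-reduce function keeps the sketch checkpoint-free.\n",
 "windowed_counts = pairs.reduceByKeyAndWindow(lambda a, b: a + b, None, 30, 10)\n",
 "\n",
 "windowed_counts.pprint()                        # print the first ten elements of each batch\n",
 "windowed_counts.saveAsTextFiles('word-counts')  # illustrative file prefix\n",
 "\n",
 "def show_top(time, rdd):\n",
 "    # foreachRDD: arbitrary per-RDD logic; takeOrdered() is the action that triggers work\n",
 "    print('Top words at', time, ':', rdd.takeOrdered(5, key=lambda kv: -kv[1]))\n",
 "\n",
 "windowed_counts.foreachRDD(show_top)\n",
 "\n",
 "ssc.start()\n",
 "ssc.awaitTermination()" ] },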
 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## External Data Sources\n", "\n",
 "- As of Spark 2.4.5, Kafka, Kinesis and Flume are available in the Python API.\n", "\n",
 "- This category of sources requires interfacing with external non-Spark libraries, some of them with complex dependencies (e.g., Kafka and Flume). Hence, to minimize issues related to version conflicts of dependencies, the functionality to create DStreams from these sources has been moved to separate libraries that can be linked to explicitly when necessary.\n", "\n",
 "- Note that these advanced sources are not available in the Spark shell, so applications based on them cannot be tested in the shell. If you really want to use them in the Spark shell, you will have to download the corresponding Maven artifact’s JAR along with its dependencies and add it to the classpath.\n", "\n",
 "- Some of these advanced sources are as follows; a minimal Kafka sketch follows this list.\n", "\n",
 " - Kafka: Spark Streaming 2.4.5 is compatible with Kafka broker versions 0.8.2.1 or higher. See the Kafka Integration Guide for more details.\n", "\n",
 " - Flume: Spark Streaming 2.4.5 is compatible with Flume 1.6.0. See the Flume Integration Guide for more details.\n", "\n",
 " - Kinesis: Spark Streaming 2.4.5 is compatible with Kinesis Client Library 1.2.1. See the Kinesis Integration Guide for more details." ] },
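 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### A minimal Kafka example\n", "\n",
 "As an illustration of linking an advanced source, the sketch below reads from Kafka with the 0.8 direct API that still ships with Spark 2.4.5. It assumes the job is submitted with `--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.4.5`, a broker at `localhost:9092`, and a topic named `mytopic`; the broker address, topic name, and batch interval are assumptions to adapt to your own setup." ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "slide" } }, "outputs": [], "source": [
 "# Word count over a Kafka topic (a sketch using the Kafka 0.8 direct API).\n",
 "# Assumed: submitted with --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.4.5,\n",
 "# a broker at localhost:9092 and a topic named 'mytopic'.\n",
 "from pyspark import SparkContext\n",
 "from pyspark.streaming import StreamingContext\n",
 "from pyspark.streaming.kafka import KafkaUtils\n",
 "\n",
 "sc = SparkContext('local[2]', 'KafkaWordCount')\n",
 "ssc = StreamingContext(sc, 5)  # 5-second batches\n",
 "\n",
 "# Each Kafka record arrives as a (key, value) pair; only the value is used here.\n",
 "kafka_stream = KafkaUtils.createDirectStream(\n",
 "    ssc, ['mytopic'], {'metadata.broker.list': 'localhost:9092'})\n",
 "\n",
 "counts = (kafka_stream.map(lambda kv: kv[1])\n",
 "                      .flatMap(lambda line: line.split(' '))\n",
 "                      .map(lambda word: (word, 1))\n",
 "                      .reduceByKey(lambda a, b: a + b))\n",
 "counts.pprint()\n",
 "\n",
 "ssc.start()\n",
 "ssc.awaitTermination()" ] }
 ], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 4 }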