org.apache.spark.streaming

SnappyStreamingContext

Related Docs: object SnappyStreamingContext | package streaming

class SnappyStreamingContext extends StreamingContext with Serializable

Main entry point for SnappyData extensions to Spark Streaming. A SnappyStreamingContext extends Spark's org.apache.spark.streaming.StreamingContext to provide the ability to run SQL-like queries on an org.apache.spark.streaming.dstream.DStream. You can apply a schema to data streams and register continuous SQL queries (CQs) over them. A single shared SnappyStreamingContext makes it possible to reuse executors across client connections or applications.
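
For example, a minimal sketch of the typical flow (the socket source, schema, and column names below are illustrative, not part of this API):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, SnappyStreamingContext}

    case class Tweet(id: Long, text: String)

    val conf = new SparkConf().setAppName("CQExample").setMaster("local[*]")
    val snsc = new SnappyStreamingContext(conf, Seconds(1))

    // Turn a raw text stream into a typed DStream and give it a schema.
    val tweets = snsc.socketTextStream("localhost", 9999)
      .map(_.split(","))
      .map(a => Tweet(a(0).toLong, a(1)))
    val schemaStream = snsc.createSchemaDStream(tweets)

    // Each batch is exposed as a DataFrame that can be queried or persisted.
    schemaStream.foreachDataFrame(df => df.groupBy("id").count().show())

    snsc.start()
    snsc.awaitTermination()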

Self Type
SnappyStreamingContext
Linear Supertypes
Serializable, Serializable, StreamingContext, internal.Logging, AnyRef, Any

Instance Constructors

  1. new SnappyStreamingContext(path: String, sparkContext: SparkContext)

    Permalink

    Recreate a SnappyStreamingContext from a checkpoint file using an existing SparkContext.

    path

    Path to the directory that was specified as the checkpoint directory

    sparkContext

    Existing SparkContext
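
    For example, a sketch of recovery on restart (the checkpoint path is illustrative):

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.SnappyStreamingContext

      val sc = new SparkContext(new SparkConf().setAppName("Recovery").setMaster("local[*]"))
      // Rebuilds the DStream graph and pending batches recorded under this checkpoint directory.
      val snsc = new SnappyStreamingContext("/path/to/checkpointDir", sc)
      snsc.start()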

  2. new SnappyStreamingContext(path: String)

    Permalink

    Recreate a SnappyStreamingContext from a checkpoint file.

    path

    Path to the directory that was specified as the checkpoint directory

  3. new SnappyStreamingContext(path: String, hadoopConf: Configuration)

    Permalink

    Recreate a SnappyStreamingContext from a checkpoint file.

    path

    Path to the directory that was specified as the checkpoint directory

    hadoopConf

    Optional, configuration object if necessary for reading from HDFS compatible filesystems

  4. new SnappyStreamingContext(conf: SparkConf, batchDuration: Duration)

    Permalink

    Create a SnappyStreamingContext by providing the configuration necessary for a new SparkContext.

    conf

    a org.apache.spark.SparkConf object specifying Spark parameters

    batchDuration

    the time interval at which streaming data will be divided into batches

  5. new SnappyStreamingContext(snappySession: SnappySession, batchDuration: Duration)

    Permalink
  6. new SnappyStreamingContext(sparkContext: SparkContext, batchDuration: Duration)

    Permalink

    Create a SnappyStreamingContext using an existing SparkContext.

    sparkContext

    existing SparkContext

    batchDuration

    the time interval at which streaming data will be divided into batches
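
    For instance, a minimal sketch assuming sc is an already-running SparkContext:

      import org.apache.spark.streaming.{Seconds, SnappyStreamingContext}

      // Reuse the existing SparkContext; streaming batches are cut every 10 seconds.
      val snsc = new SnappyStreamingContext(sc, Seconds(10))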

  7. new SnappyStreamingContext(sc_: SparkContext, cp_: Checkpoint, batchDur_: Duration, reuseSnappySession: Option[SnappySession] = None, currentSnappySession: Option[SnappySession] = None)

    Permalink
    Attributes
    protected[org.apache.spark]

Value Members

  1. final def !=(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  4. def addStreamingListener(streamingListener: StreamingListener): Unit

    Permalink

    Add an org.apache.spark.streaming.scheduler.StreamingListener object for receiving system events related to streaming.

    Definition Classes
    StreamingContext
  5. final def asInstanceOf[T0]: T0

    Permalink
    Definition Classes
    Any
  6. def awaitTermination(): Unit

    Permalink

    Wait for the execution to stop. Any exceptions that occur during the execution will be thrown in this thread.

    Definition Classes
    StreamingContext
  7. def awaitTerminationOrTimeout(timeout: Long): Boolean

    Permalink

    Wait for the execution to stop. Any exceptions that occur during the execution will be thrown in this thread.

    timeout

    time to wait in milliseconds

    returns

    true if it's stopped; or throw the reported error during the execution; or false if the waiting time elapsed before returning from the method.

    Definition Classes
    StreamingContext
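
    A sketch of a bounded wait, assuming snsc is a started SnappyStreamingContext (the timeout is illustrative):

      // Returns true once the context has stopped; false if 30 seconds elapse first.
      val stopped = snsc.awaitTerminationOrTimeout(30000L)
      if (!stopped) snsc.stop(stopSparkContext = false, stopGracefully = true)
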
  8. def binaryRecordsStream(directory: String, recordLength: Int): DStream[Array[Byte]]

    Permalink

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files, assuming a fixed length per record, generating one byte array per record. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored.

    directory

    HDFS directory to monitor for new files

    recordLength

    length of each record in bytes

    Definition Classes
    StreamingContext
    Note

    We ensure that the byte array for each record in the resulting RDDs of the DStream has the provided record length.

  9. def checkpoint(directory: String): Unit

    Permalink

    Set the context to periodically checkpoint the DStream operations for driver fault-tolerance.

    directory

    HDFS-compatible directory where the checkpoint data will be reliably stored. Note that this must be a fault-tolerant file system like HDFS.

    Definition Classes
    StreamingContext
  10. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. def createSchemaDStream(rowStream: DStream[Row], schema: StructType): SchemaDStream

    Permalink
  12. def createSchemaDStream[A <: Product](stream: DStream[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): SchemaDStream

    Permalink

    Creates a SchemaDStream from a DStream of Product (e.g. case classes).
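
    A sketch, assuming readings is an existing DStream of a user-defined case class on this context:

      case class Reading(sensor: String, value: Double)

      // readings: DStream[Reading], built from any input source on this context.
      val schemaStream = snsc.createSchemaDStream(readings)
      schemaStream.foreachDataFrame(df => df.show())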

  13. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  15. def fileStream[K, V, F <: InputFormat[K, V]](directory: String, filter: (Path) ⇒ Boolean, newFilesOnly: Boolean, conf: Configuration)(implicit arg0: ClassTag[K], arg1: ClassTag[V], arg2: ClassTag[F]): InputDStream[(K, V)]

    Permalink

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored.

    K

    Key type for reading HDFS file

    V

    Value type for reading HDFS file

    F

    Input format for reading HDFS file

    directory

    HDFS directory to monitor for new files

    filter

    Function to filter paths to process

    newFilesOnly

    Should process only new files and ignore existing files in the directory

    conf

    Hadoop configuration

    Definition Classes
    StreamingContext
  16. def fileStream[K, V, F <: InputFormat[K, V]](directory: String, filter: (Path) ⇒ Boolean, newFilesOnly: Boolean)(implicit arg0: ClassTag[K], arg1: ClassTag[V], arg2: ClassTag[F]): InputDStream[(K, V)]

    Permalink

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format. Files must be written to the monitored directory by "moving" them from another location within the same file system.

    K

    Key type for reading HDFS file

    V

    Value type for reading HDFS file

    F

    Input format for reading HDFS file

    directory

    HDFS directory to monitor for new files

    filter

    Function to filter paths to process

    newFilesOnly

    Should process only new files and ignore existing files in the directory

    Definition Classes
    StreamingContext
  17. def fileStream[K, V, F <: InputFormat[K, V]](directory: String)(implicit arg0: ClassTag[K], arg1: ClassTag[V], arg2: ClassTag[F]): InputDStream[(K, V)]

    Permalink

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored.

    K

    Key type for reading HDFS file

    V

    Value type for reading HDFS file

    F

    Input format for reading HDFS file

    directory

    HDFS directory to monitor for new files

    Definition Classes
    StreamingContext
  18. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  19. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  20. def getSchemaDStream(tableName: String): SchemaDStream

    Permalink
  21. def getState(): StreamingContextState

    Permalink

    :: DeveloperApi ::

    Return the current state of the context. The context can be in three possible states -

    • StreamingContextState.INITIALIZED - The context has been created, but not started yet. Input DStreams, transformations and output operations can be created on the context.
    • StreamingContextState.ACTIVE - The context has been started, and not stopped. Input DStreams, transformations and output operations cannot be created on the context.
    • StreamingContextState.STOPPED - The context has been stopped and cannot be used any more.
    Definition Classes
    StreamingContext
    Annotations
    @DeveloperApi()
  22. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  23. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean = false): Boolean

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  24. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  25. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  26. def isTraceEnabled(): Boolean

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  27. def log: Logger

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  34. def logName: String

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  39. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  40. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  41. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  42. def queueStream[T](queue: Queue[RDD[T]], oneAtATime: Boolean, defaultRDD: RDD[T])(implicit arg0: ClassTag[T]): InputDStream[T]

    Permalink

    Create an input stream from a queue of RDDs. In each batch, it will process either one or all of the RDDs returned by the queue.

    T

    Type of objects in the RDD

    queue

    Queue of RDDs. Modifications to this data structure must be synchronized.

    oneAtATime

    Whether only one RDD should be consumed from the queue in every interval

    defaultRDD

    Default RDD to return from the DStream when the queue is empty. Set to null if no RDD should be returned when empty

    Definition Classes
    StreamingContext
    Note

    Arbitrary RDDs can be added to queueStream; there is no way to recover the data of those RDDs, so queueStream doesn't support checkpointing.

  43. def queueStream[T](queue: Queue[RDD[T]], oneAtATime: Boolean = true)(implicit arg0: ClassTag[T]): InputDStream[T]

    Permalink

    Create an input stream from a queue of RDDs. In each batch, it will process either one or all of the RDDs returned by the queue.

    T

    Type of objects in the RDD

    queue

    Queue of RDDs. Modifications to this data structure must be synchronized.

    oneAtATime

    Whether only one RDD should be consumed from the queue in every interval

    Definition Classes
    StreamingContext
    Note

    Arbitrary RDDs can be added to queueStream; there is no way to recover the data of those RDDs, so queueStream doesn't support checkpointing.
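
    A sketch of driving a stream from locally created RDDs, which is convenient in tests (names are illustrative):

      import scala.collection.mutable
      import org.apache.spark.rdd.RDD

      val queue = mutable.Queue[RDD[Int]]()
      // With oneAtATime left at its default (true), one queued RDD is consumed per batch.
      val ints = snsc.queueStream(queue)
      ints.count().print()

      queue += snsc.sparkContext.parallelize(1 to 100)
      snsc.start()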

  44. def rawSocketStream[T](hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2)(implicit arg0: ClassTag[T]): ReceiverInputDStream[T]

    Permalink

    Create an input stream from network source hostname:port, where data is received as serialized blocks (serialized using Spark's serializer) that can be directly pushed into the block manager without deserializing them. This is the most efficient way to receive data.

    T

    Type of the objects in the received blocks

    hostname

    Hostname to connect to for receiving data

    port

    Port to connect to for receiving data

    storageLevel

    Storage level to use for storing the received objects (default: StorageLevel.MEMORY_AND_DISK_SER_2)

    Definition Classes
    StreamingContext
  45. def receiverStream[T](receiver: Receiver[T])(implicit arg0: ClassTag[T]): ReceiverInputDStream[T]

    Permalink

    Create an input stream with any arbitrary user-implemented receiver. Find more details at http://spark.apache.org/docs/latest/streaming-custom-receivers.html

    receiver

    Custom implementation of Receiver

    Definition Classes
    StreamingContext
  46. def registerCQ(queryStr: String): SchemaDStream

    Permalink

    Registers and executes the given SQL query and returns a SchemaDStream to consume the results.

    queryStr

    the query to register
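
    A sketch, assuming a stream table named tweetStream has already been registered on this context (table and column names are illustrative):

      // Continuous query over a registered stream table; each batch yields a result DataFrame.
      val results = snsc.registerCQ("SELECT text FROM tweetStream WHERE text LIKE '%spark%'")
      results.foreachDataFrame(df => df.show())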

  47. def registerStreamTables(): Unit

    Permalink
  48. def remember(duration: Duration): Unit

    Permalink

    Set each DStream in this context to remember RDDs it generated in the last given duration. DStreams remember RDDs only for a limited duration of time and release them for garbage collection. This method allows the developer to specify how long to remember the RDDs (if the developer wishes to query old data outside the DStream computation).

    duration

    Minimum duration that each DStream should remember its RDDs

    Definition Classes
    StreamingContext
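
    For example, to keep each DStream's RDDs around for at least five minutes (duration illustrative, snsc assumed to be this context):

      import org.apache.spark.streaming.Minutes

      // Retain generated RDDs for 5 minutes instead of releasing them right after use.
      snsc.remember(Minutes(5))
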
  49. val snappyContext: SnappyContext

    Permalink
  50. val snappySession: SnappySession

    Permalink
  51. def socketStream[T](hostname: String, port: Int, converter: (InputStream) ⇒ Iterator[T], storageLevel: StorageLevel)(implicit arg0: ClassTag[T]): ReceiverInputDStream[T]

    Permalink

    Creates an input stream from TCP source hostname:port. Data is received using a TCP socket and the received bytes are interpreted as objects using the given converter.

    T

    Type of the objects received (after converting bytes to objects)

    hostname

    Hostname to connect to for receiving data

    port

    Port to connect to for receiving data

    converter

    Function to convert the byte stream to objects

    storageLevel

    Storage level to use for storing the received objects

    Definition Classes
    StreamingContext
  52. def socketTextStream(hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): ReceiverInputDStream[String]

    Permalink

    Creates an input stream from TCP source hostname:port. Data is received using a TCP socket and the received bytes are interpreted as UTF8-encoded, \n-delimited lines.

    hostname

    Hostname to connect to for receiving data

    port

    Port to connect to for receiving data

    storageLevel

    Storage level to use for storing the received objects (default: StorageLevel.MEMORY_AND_DISK_SER_2)

    Definition Classes
    StreamingContext
    See also

    socketStream
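
    A sketch (host, port, and storage level are illustrative; snsc is this context):

      import org.apache.spark.storage.StorageLevel

      val lines = snsc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_ONLY)
      // Classic word count over each batch of received lines.
      lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()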

  53. def sparkContext: SparkContext

    Permalink

    Return the associated Spark context

    Definition Classes
    StreamingContext
  54. def sql(sqlText: String): DataFrame

    Permalink
  55. def start(): Unit

    Permalink

    Start the execution of the streams. Also registers population of AQP tables from stream tables if present.

    Definition Classes
    SnappyStreamingContext → StreamingContext
    Exceptions thrown

    IllegalStateException if the StreamingContext is already stopped

  56. def stop(stopSparkContext: Boolean, stopGracefully: Boolean): Unit

    Permalink

    Stop the execution of the streams, with option of ensuring all received data has been processed.

    stopSparkContext

    if true, stops the associated SparkContext. The underlying SparkContext will be stopped regardless of whether this StreamingContext has been started.

    stopGracefully

    if true, stops gracefully by waiting for the processing of all received data to be completed

    Definition Classes
    SnappyStreamingContext → StreamingContext
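
    A sketch of a graceful shutdown that leaves a shared SparkContext running (snsc is this context):

      // Drain and process all received data, then stop streaming only.
      snsc.stop(stopSparkContext = false, stopGracefully = true)
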
  57. def stop(stopSparkContext: Boolean = ...): Unit

    Permalink

    Stop the execution of the streams immediately (does not wait for all received data to be processed). By default, if stopSparkContext is not specified, the underlying SparkContext will also be stopped. This implicit behavior can be configured using the SparkConf configuration spark.streaming.stopSparkContextByDefault.

    stopSparkContext

    If true, stops the associated SparkContext. The underlying SparkContext will be stopped regardless of whether this StreamingContext has been started.

    Definition Classes
    StreamingContext
  58. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  59. def textFileStream(directory: String): DStream[String]

    Permalink

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat). Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored.

    directory

    HDFS directory to monitor for new files

    Definition Classes
    StreamingContext
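
    A sketch (the directory is illustrative; files must be moved into it atomically):

      val logs = snsc.textFileStream("hdfs:///data/incoming")
      logs.filter(_.contains("ERROR")).print()
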
  60. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  61. def transform[T](dstreams: Seq[DStream[_]], transformFunc: (Seq[RDD[_]], Time) ⇒ RDD[T])(implicit arg0: ClassTag[T]): DStream[T]

    Permalink

    Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams.

    Definition Classes
    StreamingContext
  62. def union[T](streams: Seq[DStream[T]])(implicit arg0: ClassTag[T]): DStream[T]

    Permalink

    Create a unified DStream from multiple DStreams of the same type and same slide duration.

    Definition Classes
    StreamingContext
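
    A sketch that merges several socket streams of the same type (hosts and ports illustrative; snsc is this context):

      val parts = (9990 to 9992).map(p => snsc.socketTextStream("localhost", p))
      val merged = snsc.union(parts)
      merged.count().print()
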
  63. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  64. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  65. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from StreamingContext

Inherited from internal.Logging

Inherited from AnyRef

Inherited from Any
