An abstract class for compactible metadata logs.
A class for collecting event time stats with an accumulator.
Accumulator that collects stats on event time in a batch.
Used to mark a column as containing the event time for a given record.
Used to mark a column as containing the event time for a given record. In addition to adding appropriate metadata to this column, this operator also tracks the maximum observed event time. Based on the maximum observed time and a user-specified delay, we can calculate the watermark, after which we assume we will no longer see late records for a particular time period. Note that event time is measured in milliseconds.
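To picture the calculation described above, here is a minimal, self-contained Scala sketch; the object and value names are illustrative and not the actual EventTimeWatermarkExec internals.

```scala
// Minimal sketch of the watermark bookkeeping described above; names are illustrative.
object WatermarkSketch {
  def main(args: Array[String]): Unit = {
    val delayMs = 10 * 60 * 1000L                                 // user-specified delay: 10 minutes
    val observedEventTimesMs = Seq(1000000L, 1060000L, 1120000L)  // event times seen in a batch
    val maxObservedMs = observedEventTimesMs.max

    // Records with event time below this watermark are assumed to be too late to arrive.
    val watermarkMs = maxObservedMs - delayMs
    println(s"watermark = $watermarkMs ms")
  }
}
```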
User-specified options for file streams.
A sink that writes out results to parquet files.
A sink that writes out results to parquet files. Each batch is written out to a unique directory. After all of the files in a batch have been successfully written, the list of file paths is appended to the log atomically. In the case of partial failures, some duplicate data may be present in the target directory, but only one copy of each file will be present in the log.
A special log for FileStreamSink.
A special log for FileStreamSink. It writes one log file for each batch. The first line of the log file is the version number, followed by multiple JSON lines, each of which is the JSON representation of a SinkFileStatus.
As reading from many small files is usually pretty slow, FileStreamSinkLog will compact log files every "spark.sql.sink.file.log.compactLen" batches into a big file. When doing a compaction, it will read all old log files and merge them with the new batch. During the compaction, it will also drop the files that have been deleted (as marked by SinkFileStatus.action). When the reader uses allFiles to list all files, only the visible files are returned (deleted files are dropped).
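A hedged sketch of the "visible files" idea described above follows; FileEntry and the helper are illustrative stand-ins, not the real SinkFileStatus or FileStreamSinkLog API.

```scala
// Illustrative sketch only: an entry added and later marked "delete" is no longer visible.
case class FileEntry(path: String, action: String)

object VisibleFilesSketch {
  // Return only the visible files: paths that were added and never marked as deleted.
  def allFiles(entries: Seq[FileEntry]): Seq[String] = {
    val deleted = entries.filter(_.action == "delete").map(_.path).toSet
    entries.collect { case FileEntry(path, "add") if !deleted.contains(path) => path }
  }

  def main(args: Array[String]): Unit = {
    val log = Seq(
      FileEntry("/out/part-0000", "add"),
      FileEntry("/out/part-0001", "add"),
      FileEntry("/out/part-0001", "delete"))
    println(allFiles(log)) // List(/out/part-0000)
  }
}
```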
A very simple source that reads files from the given directory as they appear.
Offset for the FileStreamSource.
Position in the FileStreamSourceLog
A Sink that forwards all data into ForeachWriter according to the contract defined by ForeachWriter.
The expected type of the sink.
A MetadataLog implementation based on HDFS.
A MetadataLog implementation based on HDFS. HDFSMetadataLog uses the specified path as the metadata storage.
When writing a new batch, HDFSMetadataLog first writes to a temp file and then renames it to the final batch file. If the rename step fails, there must be multiple concurrent writers; only one of them will succeed and the others will fail.
Note: HDFSMetadataLog doesn't support S3-like file systems, as they don't guarantee that listing files in a directory always shows the latest files.
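The write-then-rename protocol can be sketched as follows. This is an illustration using java.nio on a local file system, not the actual HDFSMetadataLog code, which goes through the Hadoop FileSystem API.

```scala
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths, StandardCopyOption}

object AtomicBatchWriteSketch {
  // Write the batch metadata to a temp file, then rename it into place. A failed rename
  // is treated as losing the race to another writer that already committed this batch.
  def writeBatch(dir: String, batchId: Long, metadata: String): Boolean = {
    val tempPath  = Paths.get(dir, s".$batchId.tmp")
    val batchPath = Paths.get(dir, batchId.toString)
    Files.write(tempPath, metadata.getBytes(StandardCharsets.UTF_8))
    try {
      Files.move(tempPath, batchPath, StandardCopyOption.ATOMIC_MOVE)
      true
    } catch {
      case _: java.io.IOException =>
        Files.deleteIfExists(tempPath)
        false // lost the race: another writer committed this batch first
    }
  }
}
```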
A variant of QueryExecution that allows the given LogicalPlan to be executed incrementally.
A variant of QueryExecution that allows the given LogicalPlan to be executed incrementally, possibly preserving state in between each execution.
A simple offset for sources that produce a single linear stream of data.
A FileCommitProtocol that tracks the list of valid files in a manifest file, used in structured streaming.
Used to query the data that has been written into a MemorySink.
A sink that stores the results in memory.
A sink that stores the results in memory. This Sink is primarily intended for use in unit tests and does not provide durability.
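A usage sketch for the in-memory sink is shown below; it uses the public DataStreamWriter API with format("memory") and a rate source as input, which may not be available in every Spark version this description applies to.

```scala
import org.apache.spark.sql.SparkSession

object MemorySinkExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("memory-sink").getOrCreate()

    // A built-in test source that emits (timestamp, value) rows at a fixed rate.
    val events = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

    val query = events.writeStream
      .format("memory")         // results are kept by the in-memory sink
      .queryName("events_tbl")  // registered as an in-memory table with this name
      .outputMode("append")
      .start()

    query.processAllAvailable() // wait until the currently available data has been processed
    spark.sql("SELECT count(*) FROM events_tbl").show()

    query.stop()
    spark.stop()
  }
}
```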
A Source that produces the values stored in memory as they are added by the user.
A general MetadataLog that supports storing metadata for each batch, retrieving the metadata for a given batch id (including the latest batch), and purging obsolete metadata.
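A hedged approximation of what such a metadata log interface could look like is sketched below; the method names and signatures are assumptions for illustration, not necessarily the exact Spark trait.

```scala
// Assumed shape of a batch-keyed metadata log; illustrative only.
trait MetadataLogSketch[T] {
  /** Store the metadata for the given batch id; return false if the batch already exists. */
  def add(batchId: Long, metadata: T): Boolean

  /** Return the metadata for the given batch id, if it has been persisted. */
  def get(batchId: Long): Option[T]

  /** Return the latest batch id together with its metadata, if any batch exists. */
  def getLatest(): Option[(Long, T)]

  /** Remove all metadata for batches older than the given threshold. */
  def purge(thresholdBatchId: Long): Unit
}
```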
A FileIndex that generates the list of files to process by reading them from the metadata log files generated by the FileStreamSink.
Serves metrics from an org.apache.spark.sql.streaming.StreamingQuery to Codahale/DropWizard metrics.
An offset is a monotonically increasing metric used to track progress in the computation of a stream.
An ordered collection of offsets, used to track the progress of processing data from one or more Sources that are present in a streaming query.
An ordered collection of offsets, used to track the progress of processing data from one or more Sources that are present in a streaming query. This is similar to a simplified, single-instance vector clock that must progress linearly forward.
This class is used to log offsets to persistent files in HDFS.
This class is used to log offsets to persistent files in HDFS. Each file corresponds to a specific batch of offsets. The file format contains a version string in the first line, followed by the JSON string representations of the offsets, separated by newlines. If a source offset is missing, then that line will contain a string value defined in the SERIALIZED_VOID_OFFSET variable of the OffsetSeqLog companion object. For instance, when dealing with LongOffset types:
  v1        // version 1
  metadata
  {0}       // LongOffset 0
  {3}       // LongOffset 3
Contains metadata associated with an OffsetSeq.
Used to identify the state store for a given operator.
A trigger executor that runs a batch every intervalMs milliseconds.
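The behavior can be pictured with the simplified loop below; this is not the actual trigger executor, and scheduling details (such as aligning to interval boundaries) are omitted.

```scala
object IntervalExecutorSketch {
  // Run batchRunner repeatedly, starting a new batch roughly every intervalMs milliseconds;
  // stop when batchRunner returns false.
  def execute(intervalMs: Long, batchRunner: () => Boolean): Unit = {
    var continue = true
    while (continue) {
      val batchStart = System.currentTimeMillis()
      continue = batchRunner()
      val sleepMs = intervalMs - (System.currentTimeMillis() - batchStart)
      // If the batch took longer than the interval, the next one starts immediately.
      if (continue && sleepMs > 0) Thread.sleep(sleepMs)
    }
  }
}
```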
Responsible for continually reporting statistics about the amount of data processed as well as latency for a streaming query.
Responsible for continually reporting statistics about the amount of data processed as well as latency for a streaming query. This trait is designed to be mixed into the StreamExecution, which is responsible for calling startTrigger and finishTrigger at the appropriate times. Additionally, the status can be updated with updateStatusMessage to allow reporting on the stream's current state (i.e. "Fetching more data").
Used when loading a JSON serialized offset from external storage.
Used when loading a JSON serialized offset from external storage. We are currently not responsible for converting JSON serialized data into an internal (i.e., object) representation. Sources should define a factory method in their source Offset companion objects that accepts a SerializedOffset for doing the conversion.
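The factory-method pattern described above can be sketched as follows; SerializedOffsetLike and MyLongOffset are illustrative stand-ins rather than the actual Spark classes.

```scala
// Stand-in for the SerializedOffset wrapper around raw JSON loaded from storage.
case class SerializedOffsetLike(json: String)

// A source-specific offset that knows how to render itself as JSON.
case class MyLongOffset(offset: Long) {
  def json: String = offset.toString
}

object MyLongOffset {
  // Factory that converts the raw JSON loaded from storage back into the typed offset.
  def apply(serialized: SerializedOffsetLike): MyLongOffset =
    MyLongOffset(serialized.json.toLong)
}
```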
An interface for systems that can collect the results of a streaming query.
An interface for systems that can collect the results of a streaming query. In order to preserve exactly-once semantics, a sink must be idempotent in the face of multiple attempts to add the same batch.
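The idempotency requirement can be illustrated with the minimal sketch below; it is not Spark's Sink trait, just an example of making addBatch a no-op for already-committed batch ids.

```scala
class IdempotentSinkSketch {
  private var committedBatchId: Long = -1L

  // Re-adding a batch id that was already committed must have no effect, so a retry
  // after a partial failure cannot duplicate data.
  def addBatch(batchId: Long, rows: Seq[String]): Unit = synchronized {
    if (batchId > committedBatchId) {
      rows.foreach(println)        // "write" the data
      committedBatchId = batchId   // record the commit
    }
  }
}
```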
The status of a file outputted by FileStreamSink.
The status of a file outputted by FileStreamSink. A file is visible only if it appears in the sink log and its action is not "delete".
the file path.
the file size.
whether this file is a directory.
the file last modification time.
the block replication.
the block size.
the file action. Must be either "add" or "delete".
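Taken together, the fields above suggest a record shaped roughly like the sketch below; the field names are assumptions inferred from the descriptions, not necessarily the exact SinkFileStatus definition.

```scala
// Assumed shape of a sink file status record; illustrative only.
case class SinkFileStatusSketch(
    path: String,             // the file path
    size: Long,               // the file size
    isDir: Boolean,           // whether this file is a directory
    modificationTime: Long,   // the file's last modification time
    blockReplication: Int,    // the block replication
    blockSize: Long,          // the block size
    action: String)           // the file action: "add" or "delete"
```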
A source of continually arriving data for a streaming query.
For each input tuple, the key is calculated and the value from the StateStore is added to the stream (in addition to the input tuple) if present.
For each input tuple, the key is calculated and the tuple is put into the StateStore.
An operator that saves or restores state from the StateStore.
An operator that saves or restores state from the StateStore. The OperatorStateId should be filled in by prepareForExecution in IncrementalExecution.
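A hedged sketch of the restore/save semantics described above, using a plain mutable Map in place of a real StateStore; keyOf and the row type are illustrative.

```scala
object StateStoreOpsSketch {
  type Row = (String, Int)
  private val store = scala.collection.mutable.Map.empty[String, Row]

  def keyOf(row: Row): String = row._1

  // Restore: for each input row, emit the stored row for its key (if present) plus the row itself.
  def restore(input: Seq[Row]): Seq[Row] =
    input.flatMap(row => store.get(keyOf(row)).toSeq :+ row)

  // Save: for each input row, put it into the store under its key.
  def save(input: Seq[Row]): Unit =
    input.foreach(row => store(keyOf(row)) = row)
}
```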
Manages the execution of a streaming Spark SQL query that is occurring in a separate thread.
Manages the execution of a streaming Spark SQL query that is occurring in a separate thread. Unlike a standard query, a streaming query executes repeatedly each time new data arrives at any Source present in the query plan. Whenever new data arrives, a QueryExecution is created and the results are committed transactionally to the given Sink.
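Very roughly, the repeated-execution idea can be sketched as the loop below; this illustrates the control flow only and is not StreamExecution itself.

```scala
object MicroBatchLoopSketch {
  def run(
      hasNewData: () => Boolean,
      runBatch: Long => Seq[String],       // plays the role of creating and running a QueryExecution
      commit: (Long, Seq[String]) => Unit, // plays the role of Sink.addBatch
      shouldStop: () => Boolean): Unit = {
    var batchId = 0L
    while (!shouldStop()) {
      if (hasNewData()) {
        val result = runBatch(batchId)
        commit(batchId, result)            // committed per batch; an idempotent sink dedupes retries
        batchId += 1
      } else {
        Thread.sleep(10)                   // back off briefly when no new data is available
      }
    }
  }
}
```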
A special thread to run the stream query.
A special thread to run the stream query. Some code is required to run in the StreamExecutionThread and will use classOf[StreamExecutionThread] to check that it does.
Contains metadata associated with a StreamingQuery.
Contains metadata associated with a StreamingQuery. This information is written in the checkpoint location the first time a query is started and recovered every time the query is restarted.
unique id of the StreamingQuery that needs to be persisted across restarts
A helper class that looks like a Map[Source, Offset].
Used to link a streaming Source of data into a org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.
A bus to forward events to StreamingQueryListeners.
A bus to forward events to StreamingQueryListeners. This one will send received StreamingQueryListener.Events to the Spark listener bus. It also registers itself with Spark listener bus, so that it can receive StreamingQueryListener.Events and dispatch them to StreamingQueryListeners.
Note that each bus and its registered listeners are associated with a single SparkSession and StreamingQueryManager. So this bus will dispatch events to registered listeners for only those queries that were started in the associated SparkSession.
Wraps the non-serializable StreamExecution to make the query serializable, as it's easy for it to get captured with normal usage.
Wraps the non-serializable StreamExecution to make the query serializable, as it's easy for it to get captured with normal usage. It's safe to capture the query but not to use it in executors. However, if the user tries to call its methods, it will throw an IllegalStateException.
Used to link a streaming DataSource into a org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.
Used to link a streaming DataSource into a org.apache.spark.sql.catalyst.plans.logical.LogicalPlan. This is only used for creating a streaming org.apache.spark.sql.DataFrame from org.apache.spark.sql.DataFrameReader. It should be used to create a Source and converted to StreamingExecutionRelation when passed to StreamExecution to run a query.
A dummy physical plan for StreamingRelation to support org.apache.spark.sql.Dataset.explain.
A source that reads text lines through a TCP socket, designed only for tutorials and debugging.
A source that reads text lines through a TCP socket, designed only for tutorials and debugging. This source will *not* work in production applications due to multiple reasons, including no support for fault recovery and keeping all of the text read in memory forever.
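A usage sketch follows; it assumes something like `nc -lk 9999` is serving text lines on localhost:9999 and writes each micro-batch to the console, matching the debugging-only intent described above.

```scala
import org.apache.spark.sql.SparkSession

object SocketSourceExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("socket-demo").getOrCreate()

    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    val query = lines.writeStream
      .format("console")      // print each micro-batch to stdout
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```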
An abstract class for compactible metadata logs. It will write one log file for each batch. The first line of the log file is the version number, and there are multiple serialized metadata lines following.
As reading from many small files is usually pretty slow, and too many small files in one folder can also overwhelm the file system, CompactibleFileStreamLog will compact log files (every 10 batches by default) into a big file. When doing a compaction, it will read all old log files and merge them with the new batch.
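One simple way to picture the schedule is the sketch below; the helper name is an assumption, and the interval of 10 is the default mentioned above.

```scala
object CompactionScheduleSketch {
  // With an interval of 10, batches 9, 19, 29, ... are written as compacted files that
  // merge all earlier log files with the new batch.
  def isCompactionBatch(batchId: Long, compactInterval: Int = 10): Boolean =
    (batchId + 1) % compactInterval == 0
}
```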