org.apache.spark.sql.execution.streaming
Adds a batch of data to this sink. The data for a given batchId is deterministic, and if this method is called more than once with the same batchId (which will happen in the case of failures), the data should only be added once.
Note 1: Do not apply any operators to the data other than consuming it (e.g., collect/foreach); otherwise, you may get incorrect results.
Note 2: This method is expected to execute synchronously, i.e. it should return only after the data has been successfully consumed by the sink.
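The idempotency contract above can be sketched without any Spark dependencies. This is a minimal, hypothetical illustration (the class name `SimpleSink`, its fields, and the `Seq[String]` payload are assumptions, not Spark's actual API): the sink remembers the highest committed batchId, so a replayed batch after a failure is skipped, and the data is consumed synchronously inside `addBatch`.

```scala
import scala.collection.mutable

// Hypothetical sketch of the idempotent-addBatch contract (not Spark's Sink).
class SimpleSink {
  private var latestBatchId: Long = -1L                        // highest batch committed so far
  private val committed = mutable.ArrayBuffer.empty[Seq[String]]

  // Data for a replayed batchId (after a failure) must be added only once.
  def addBatch(batchId: Long, data: Seq[String]): Unit = {
    if (batchId <= latestBatchId) return                       // already committed: skip the replay
    committed += data                                          // consume synchronously before returning
    latestBatchId = batchId
  }

  def numBatches: Int = committed.size
}

object Demo extends App {
  val sink = new SimpleSink
  sink.addBatch(0, Seq("a", "b"))
  sink.addBatch(0, Seq("a", "b"))   // replay after failure: ignored
  sink.addBatch(1, Seq("c"))
  println(sink.numBatches)          // 2
}
```

Only returning after the data is buffered (or durably written, in a real sink) is what makes the call synchronous in the sense of Note 2.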
A sink that writes out results to parquet files. Each batch is written out to a unique directory. After all of the files in a batch have been successfully written, the list of file paths is appended to the log atomically. In the case of partial failures, some duplicate data may be present in the target directory, but only one copy of each file will be present in the log.
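The write-then-commit pattern described above can be sketched with plain `java.nio.file` calls. This is an illustrative simplification, not `FileStreamSink` itself: the names `FileSinkSketch` and `commitBatch` are assumptions, and a real implementation would use Spark's transactional metadata log rather than a text file. The key property survives, though: a replayed batch may leave a duplicate data file in the target directory, but the log records each batch's files at most once, and readers trust only the log.

```scala
import java.nio.file.{Files, Path, StandardOpenOption}
import scala.jdk.CollectionConverters._

// Hypothetical sketch of FileStreamSink's write-then-commit pattern.
object FileSinkSketch {
  def commitBatch(root: Path, logPath: Path, batchId: Long, rows: Seq[String]): Unit = {
    // 1. Each batch is written out to a unique directory.
    val batchDir = Files.createDirectories(root.resolve(s"batch-$batchId"))
    val file = batchDir.resolve(s"part-${System.nanoTime()}.txt")
    Files.write(file, rows.asJava)            // on replay, this duplicate file is left behind

    // 2. Only after the files are written is the path list appended to the log,
    //    and only if this batch was not already committed.
    val logged =
      if (Files.exists(logPath)) Files.readAllLines(logPath).asScala.toSet
      else Set.empty[String]
    val alreadyCommitted = logged.exists(_.contains(s"batch-$batchId"))
    if (!alreadyCommitted) {
      Files.write(logPath, Seq(file.toString).asJava,
        StandardOpenOption.CREATE, StandardOpenOption.APPEND)
    }
  }
}
```

Replaying `commitBatch` with the same batchId after a partial failure leaves a second data file in the directory, but the log still lists exactly one file for that batch, which is the behavior the class documentation describes.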