org.apache.spark.sql.execution.columnar.encoding
Reads a value (whose type is determined by the implementation) from the given source and writes it to the destination delta encoder.
The "forMerge" flag will be true if two sorted delta values are being merged (in which case the writer may decide to re-encode everything) or if a new delta is just being re-ordered (in which case the writer may decide to change minimally). For example in dictionary encoding it will create new dictionary with "forMerge" else it will just re-order the dictionary indexes. When that flag is set then reads will be sequential and srcPosition will be invalid.
A negative value for "srcPosition" indicates that reads are sequential; otherwise a random-access read from that position is required. When the "doWrite" flag is false, writing to the target encoder is skipped; it should be false only for sequential reads, since for random-access reads there is nothing to be skipped in any case.
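The read/write contract above can be sketched as follows. This is an illustrative sketch only, not the actual API: the class and method names (`DeltaWriterSketch`, `readAndWrite`), the `long[]` source, and the `StringBuilder` target are all assumptions chosen to make the sequential-cursor and skip-write semantics concrete.

```java
// Hypothetical sketch of the writer contract; names and types are
// illustrative, not the real SnappyData/Spark classes.
abstract class DeltaWriterSketch {
    /**
     * Reads one value from source and appends it to target.
     * A negative srcPosition means the read is sequential (the
     * implementation keeps its own cursor); doWrite=false skips
     * the write but still advances the sequential cursor.
     */
    abstract void readAndWrite(long[] source, int srcPosition,
                               StringBuilder target, boolean doWrite);
}

class LongDeltaWriterSketch extends DeltaWriterSketch {
    private int cursor; // used only for sequential reads (srcPosition < 0)

    @Override
    void readAndWrite(long[] source, int srcPosition,
                      StringBuilder target, boolean doWrite) {
        long value = (srcPosition < 0) ? source[cursor++] : source[srcPosition];
        if (doWrite) {
            target.append(value).append(',');
        }
    }
}
```

Note how `doWrite=false` only makes sense for the sequential case: it advances the internal cursor past a value without emitting it, whereas a random-access read has no cursor state to advance.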
Trait to read column values from a delta-encoded column and write them to a target delta column. Reads may be random-access rather than sequential, while writes are sequential, sorted by position in the full column value. Implementations should not do any null-value handling.
This uses a separate base class rather than a closure to avoid the boxing/unboxing overhead of multi-argument closures (more than two arguments).
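The boxing concern can be illustrated on the JVM. In the sketch below (illustrative only; `ReadOp` and its signature are assumptions, not the real API), a generic functional interface forces primitive arguments through their wrapper types on every call, while a dedicated base class with a primitive signature avoids that entirely:

```java
import java.util.function.BiFunction;

class BoxingSketch {
    // A generic closure: every call boxes the long arguments to Long
    // and unboxes the result, since generics cannot hold primitives.
    static final BiFunction<Long, Long, Long> boxedAdd = (a, b) -> a + b;

    // A dedicated base class with primitive parameters: no boxing at all.
    // Mirrors the pattern described above for multi-argument operations.
    abstract static class ReadOp {
        abstract long apply(long value, int position, boolean doWrite);
    }

    static final ReadOp op = new ReadOp() {
        @Override
        long apply(long value, int position, boolean doWrite) {
            return doWrite ? value + position : value;
        }
    };
}
```

For one- and two-argument cases the JDK provides primitive specializations (e.g. `LongBinaryOperator`), but beyond two arguments there is no built-in primitive functional interface, which is why a hand-written base class pays off here.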