Append a given RDD or rows into the relation.
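The concrete append signature is not shown in this excerpt; as a rough sketch, a relation that accepts appended rows in Spark SQL commonly does so through the standard InsertableRelation contract, which is assumed below (the class name and write logic are placeholders):

```scala
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.sources.InsertableRelation

// Hypothetical sketch: append incoming rows via Spark SQL's standard
// InsertableRelation trait; this relation's real append method may differ.
class AppendableRelationSketch extends InsertableRelation {
  override def insert(data: DataFrame, overwrite: Boolean): Unit = {
    require(!overwrite, "this sketch only appends")
    // Explicitly typed function avoids foreachPartition overload ambiguity.
    val writePartition: Iterator[Row] => Unit = { rows =>
      rows.foreach { row =>
        // write the row into the underlying store (placeholder)
      }
    }
    data.foreachPartition(writePartition)
  }
}
```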
The underlying column table used to store data.
Base table of this relation.
Return the actual relation to be used for insertion into the relation, or None if sourceSchema cannot be inserted.
Whether the underlying sample table is partitioned.
True if the underlying sample table uses a row table as the reservoir store.
The QCS columns for the sample.
Options set for this sampling relation.
Whether this relation needs to convert the objects in Row to their internal representation, for example: java.lang.String to UTF8String, java.lang.Decimal to Decimal.
If needConversion is false, buildScan() should return an RDD of InternalRow.
Since: 1.4.0
The internal representation is not stable across releases and thus data sources outside of Spark SQL should leave this as true.
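A minimal sketch of an external source that leaves needConversion at its default of true and returns ordinary Row objects from buildScan(), letting Spark SQL perform the conversion to the internal representation (the relation name and schema are made up for illustration):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types._

// Hypothetical external source: rows carry java.lang.String values,
// so needConversion stays true and Spark SQL converts to UTF8String.
class SimpleRelation(override val sqlContext: SQLContext)
    extends BaseRelation with TableScan {

  override def schema: StructType = StructType(Seq(
    StructField("name", StringType),
    StructField("age", IntegerType)))

  // Default is already true; sources outside Spark SQL should leave it
  // so, because InternalRow is not stable across releases.
  override def needConversion: Boolean = true

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(Seq(Row("alice", 30), Row("bob", 25)))
}
```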
Returns an estimated size of this relation in bytes. This information is used by the planner to decide when it is safe to broadcast a relation and can be overridden by sources that know the size ahead of time. By default, the system will assume that tables are too large to broadcast. This method will be called multiple times during query planning and thus should not perform expensive operations for each invocation.
Since: 1.3.0
It is always better to overestimate size than underestimate, because underestimation could lead to execution plans that are suboptimal (i.e. broadcasting a very large table).
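A minimal sketch of a source overriding sizeInBytes, assuming it can precompute its footprint; when the size is unknown it deliberately overestimates so the planner never chooses to broadcast a possibly huge table (the relation name and knownBytes parameter are assumptions):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.StructType

// Hypothetical relation that reports a known size to the planner.
class SizedRelation(override val sqlContext: SQLContext,
                    override val schema: StructType,
                    knownBytes: Option[Long])
    extends BaseRelation with TableScan {

  // Called repeatedly during planning, so it must stay cheap: return a
  // precomputed value, and overestimate when the size is unknown.
  override def sizeInBytes: Long = knownBytes.getOrElse(Long.MaxValue)

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.emptyRDD[Row]
}
```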
Returns the list of Filters that this datasource may not be able to handle. These returned Filters will be evaluated by Spark SQL after data is output by a scan. By default, this function will return all filters, as it is always safe to double evaluate a Filter. However, specific implementations can override this function to avoid double filtering when they are capable of processing a filter internally.
Since: 1.6.0
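A minimal sketch of a source that handles equality filters itself and reports everything else as unhandled, so Spark SQL re-evaluates only the remainder after the scan (the relation name is an assumption; the filter push-down in buildScan is left as a placeholder):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types.StructType

// Hypothetical source that evaluates EqualTo filters internally.
class FilteringRelation(override val sqlContext: SQLContext,
                        override val schema: StructType)
    extends BaseRelation with PrunedFilteredScan {

  // Everything except EqualTo is returned, so Spark SQL re-evaluates
  // those filters; returning all filters (the default) is always safe,
  // since double evaluation of a Filter cannot change the result.
  override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.filterNot(_.isInstanceOf[EqualTo])

  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] =
    sqlContext.sparkContext.emptyRDD[Row] // apply the EqualTo filters here
}
```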