A filter that evaluates to true iff both left and right evaluate to true.
1.3.0
Represents a collection of tuples with a known schema. Classes that extend BaseRelation must be able to produce the schema of their data in the form of a StructType. Concrete implementations should inherit from one of the descendant Scan classes, which define various abstract methods for execution.
BaseRelations must also define an equality function that only returns true when the two instances will return the same data. This equality function is used when determining when it is safe to substitute cached results for a given relation.
1.3.0
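As an illustration, a minimal relation might combine BaseRelation with TableScan. The class name RangeRelation, its columns and its fixed data below are hypothetical; this is a sketch, not an API the package provides:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// A relation over a fixed integer range, combining BaseRelation (schema)
// with TableScan (execution).
class RangeRelation(val start: Int, val end: Int, val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  // The schema this relation produces.
  override def schema: StructType = StructType(Seq(
    StructField("id", IntegerType, nullable = false),
    StructField("label", StringType, nullable = true)))

  // Produce all tuples as Row objects.
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.range(start, end).map(i => Row(i.toInt, s"row-$i"))

  // Equality that only holds when the two instances return the same data,
  // so cached results can be substituted safely.
  override def equals(other: Any): Boolean = other match {
    case r: RangeRelation => r.start == start && r.end == end
    case _ => false
  }
  override def hashCode(): Int = 31 * start + end
}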
Optimized cast for a column in a row to double.
Cast a given column in a schema to epoch time in long milliseconds.
::Experimental:: An interface for experimenting with a more direct connection to the query planner. Compared to PrunedFilteredScan, this operator receives the raw expressions from the org.apache.spark.sql.catalyst.plans.logical.LogicalPlan. Unlike the other APIs this interface is NOT designed to be binary compatible across releases and thus should only be used for experimentation.
1.3.0
Data sources should implement this trait so that they can register an alias to their data source. This allows users to give the data source alias as the format type over the fully qualified class name.
A new instance of this class will be instantiated each time a DDL call is made.
1.5.0
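For example, a provider (the class name MyDataSourceProvider and the alias "myformat" are hypothetical) would mix in DataSourceRegister and return its alias from shortName; Spark discovers such aliases through a META-INF/services/org.apache.spark.sql.sources.DataSourceRegister entry:

import org.apache.spark.sql.sources.DataSourceRegister

class MyDataSourceProvider extends DataSourceRegister {
  // Users can now write spark.read.format("myformat") instead of the fully
  // qualified class name. In practice this class would also extend
  // RelationProvider or SchemaRelationProvider to actually create relations.
  override def shortName(): String = "myformat"
}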
Plan for delete from a column or row table.
Performs equality comparison, similar to EqualTo.
A filter that evaluates to true iff the attribute evaluates to a value equal to value.
1.3.0
::DeveloperApi:: Marker interface for data sources that allow for extended schema specification in CREATE TABLE (like constraints in RDBMS databases). The schema string is passed as SnappyExternalCatalog.SCHEMADDL_PROPERTY in the relation provider parameters.
A filter predicate for data sources.
1.3.0
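A data source typically pattern-matches on these case classes to push work down to its store. The helper below is a minimal sketch (FilterToSql and its naive string quoting are illustrative, not part of this package); it returns None for filters it does not understand, which is safe because Spark re-evaluates every filter anyway:

import org.apache.spark.sql.sources._

object FilterToSql {
  // Translate a pushed-down Filter tree into a SQL WHERE fragment.
  // Values are naively quoted as strings for brevity.
  def toWhereClause(f: Filter): Option[String] = f match {
    case EqualTo(a, v)            => Some(s"$a = '$v'")
    case GreaterThan(a, v)        => Some(s"$a > '$v'")
    case GreaterThanOrEqual(a, v) => Some(s"$a >= '$v'")
    case LessThan(a, v)           => Some(s"$a < '$v'")
    case LessThanOrEqual(a, v)    => Some(s"$a <= '$v'")
    case In(a, vs)                => Some(vs.map(v => s"'$v'").mkString(s"$a IN (", ", ", ")"))
    case IsNull(a)                => Some(s"$a IS NULL")
    case IsNotNull(a)             => Some(s"$a IS NOT NULL")
    case StringStartsWith(a, v)   => Some(s"$a LIKE '$v%'")
    case And(l, r)                => for (ls <- toWhereClause(l); rs <- toWhereClause(r)) yield s"($ls AND $rs)"
    case Or(l, r)                 => for (ls <- toWhereClause(l); rs <- toWhereClause(r)) yield s"($ls OR $rs)"
    case Not(c)                   => toWhereClause(c).map(s => s"NOT ($s)")
    case _                        => None // unsupported filter: leave it for Spark to evaluate
  }
}

For instance, And(GreaterThan("age", 21), IsNotNull("name")) becomes "(age > '21' AND name IS NOT NULL)".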
A filter that evaluates to true iff the attribute evaluates to a value greater than value.
1.3.0
A filter that evaluates to true iff the attribute evaluates to a value greater than or equal to value.
1.3.0
A filter that evaluates to true iff the attribute evaluates to one of the values in the array.
1.3.0
Unlike Spark's InsertIntoTable, this plan provides the count of rows inserted as its output.
A BaseRelation that can be used to insert data into it through the insert method. If overwrite in insert method is true, the old data in the relation should be overwritten with the new data. If overwrite in insert method is false, the new data should be appended.
InsertableRelation has the following three assumptions:
1. It assumes that the data (Rows in the DataFrame) provided to the insert method exactly matches the ordinal of fields in the schema of the BaseRelation.
2. It assumes that the schema of this relation will not be changed. Even if the insert method updates the schema (e.g. a relation of JSON or Parquet data may have a schema update after an insert operation), the new schema will not be used.
3. It assumes that fields of the data provided in the insert method are nullable. If a data source needs to check the actual nullability of a field, it needs to do it in the insert method.
1.3.0
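A minimal sketch of the contract (the class BufferedRelation and its in-memory buffer are hypothetical; a real source would write to its external store):

import scala.collection.mutable.ArrayBuffer

import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, InsertableRelation}
import org.apache.spark.sql.types.StructType

class BufferedRelation(val sqlContext: SQLContext, val schema: StructType)
  extends BaseRelation with InsertableRelation {

  private val buffer = ArrayBuffer.empty[Row]

  override def insert(data: DataFrame, overwrite: Boolean): Unit = {
    // If overwrite is true the old data is replaced, otherwise the new data is appended.
    if (overwrite) buffer.clear()
    // Per the assumptions above: columns match this relation's schema by ordinal
    // position, not by name, and every field must be treated as nullable.
    buffer ++= data.collect()
  }
}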
A filter that evaluates to true iff the attribute evaluates to a non-null value.
1.3.0
A filter that evaluates to true iff the attribute evaluates to null.
1.3.0
Some extensions to JdbcDialect used by the Snappy implementation.
Trait to apply different join order policies like Replicates with filters first, then largest colocated group, and finally non-colocated with filters, if any.
The ordering policies can be changed via query hints, and can later be provided externally by an admin against a regex-based query pattern.
e.g. select * from /*+ joinOrder(replicates+filters, non-colocated+filters) */ table1, table2 where ....
Note: this should probably be at the query level instead of per-SELECT scope, i.e. something like /*+ joinOrder(replicates+filters, non-colocated+filters) */ select * from tab1, (select xx from tab2, tab3 where ... ), tab4 where ...
A filter that evaluates to true iff the attribute evaluates to a value less than value.
1.3.0
A filter that evaluates to true iff the attribute evaluates to a value less than or equal to value.
1.3.0
::DeveloperApi:: API for updates and deletes to a relation.
A filter that evaluates to true iff child is evaluated to false.
1.3.0
A filter that evaluates to true iff at least one of left or right evaluates to true.
1.3.0
A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as Row objects.
The actual filter should be the conjunction of all filters, i.e. they should be combined with AND.
The pushed down filters are currently purely an optimization as they will all be evaluated again. This means it is safe to use them with methods that produce false positives such as filtering partitions based on a bloom filter.
1.3.0
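A minimal sketch over in-memory data (the class PeopleRelation, its columns and rows are hypothetical): it prunes to requiredColumns and applies only the filters it understands, silently ignoring the rest, which is safe because Spark evaluates all filters again:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

class PeopleRelation(val sqlContext: SQLContext)
  extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType = StructType(Seq(
    StructField("name", StringType), StructField("age", IntegerType)))

  private val data = Seq(("alice", 31), ("bob", 25), ("carol", 42))

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    // Keep only rows passing the filters this source knows how to evaluate.
    val kept = data.filter { case (name, age) =>
      filters.forall {
        case GreaterThan("age", v: Int) => age > v
        case EqualTo("name", v: String) => name == v
        case _ => true // not handled here; Spark re-applies it after the scan
      }
    }
    // Project only the requested columns, in the requested order.
    val rows = kept.map { case (name, age) =>
      Row.fromSeq(requiredColumns.map {
        case "name" => name
        case "age" => age
      })
    }
    sqlContext.sparkContext.parallelize(rows)
  }
}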
A BaseRelation that can eliminate unneeded columns before producing an RDD containing all of its tuples as Row objects.
1.3.0
::DeveloperApi:: A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as UnsafeRow objects.
The actual filter should be the conjunction of all filters, i.e. they should be combined with AND.
The pushed down filters are currently purely an optimization as they will all be evaluated again. This means it is safe to use them with methods that produce false positives such as filtering partitions based on a bloom filter.
1.3.0
Implemented by objects that produce relations for a specific kind of data source. When Spark SQL is given a DDL operation with a USING clause specified (to specify the implemented RelationProvider), this interface is used to pass in the parameters specified by a user.
Users may specify the fully qualified class name of a given data source. When that class is not found, Spark SQL will append the class name DefaultSource to the path, allowing for less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the data source 'org.apache.spark.sql.json.DefaultSource'.
A new instance of this class will be instantiated each time a DDL call is made.
1.3.0
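As an illustration, a provider might look like this (the class name DefaultSource in the com.example package and the option name "rows" are hypothetical; the parameters map carries whatever the user put in the OPTIONS clause):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

class DefaultSource extends RelationProvider {
  override def createRelation(
      ctx: SQLContext,
      parameters: Map[String, String]): BaseRelation = {
    val n = parameters.getOrElse("rows", "10").toLong
    // A tiny anonymous relation producing n rows with a single id column.
    new BaseRelation with TableScan {
      override def sqlContext: SQLContext = ctx
      override def schema: StructType =
        StructType(StructField("id", LongType, nullable = false) :: Nil)
      override def buildScan(): RDD[Row] = ctx.sparkContext.range(0L, n).map(Row(_))
    }
  }
}

With the class above on the classpath, a DDL statement along the lines of CREATE TABLE t USING com.example.DefaultSource OPTIONS (rows '100') would hand the rows option to createRelation.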
Table-to-table or table-to-index replacement.
A set of possible replacements of tables with indexes.
Note: if the chain consists of multiple partitioned tables, they must satisfy colocation criteria.
Multiple replacements.
User-provided join + filter conditions.
Replace table with index if colocation criteria is satisfied.
Replace table with index hint
::DeveloperApi:: An extension to InsertableRelation that allows for data to be inserted (possibly having a different schema) into the target relation after comparing against the result of insertSchema.
Implemented by objects that produce relations for a specific kind of data source with a given schema. When Spark SQL is given a DDL operation with a USING clause specified (to specify the implemented SchemaRelationProvider) and a user-defined schema, this interface is used to pass in the parameters specified by a user.
Users may specify the fully qualified class name of a given data source. When that class is not found, Spark SQL will append the class name DefaultSource to the path, allowing for less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the data source 'org.apache.spark.sql.json.DefaultSource'.
A new instance of this class will be instantiated each time a DDL call is made.
The difference between a RelationProvider and a SchemaRelationProvider is that users need to provide a schema when using a SchemaRelationProvider. A relation provider can inherit both RelationProvider and SchemaRelationProvider if it can support both schema inference and user-specified schemas.
1.3.0
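A sketch of the schema-aware variant (class name and behaviour hypothetical): the user-declared schema arrives as an argument, so the relation can adopt it instead of inferring one:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, SchemaRelationProvider, TableScan}
import org.apache.spark.sql.types.StructType

class DefaultSource extends SchemaRelationProvider {
  override def createRelation(
      ctx: SQLContext,
      parameters: Map[String, String],
      userSchema: StructType): BaseRelation =
    new BaseRelation with TableScan {
      override def sqlContext: SQLContext = ctx
      // Adopt the schema declared by the user in the DDL.
      override def schema: StructType = userSchema
      // An empty scan; a real source would read data matching userSchema.
      override def buildScan(): RDD[Row] = ctx.sparkContext.emptyRDD[Row]
    }
}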
A class for tracking the statistics of a set of numbers (count, mean and variance) in a numerically robust way. Includes support for merging two StatVarianceCounters.
Taken from Spark's StatCounter implementation removing max and min.
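The underlying technique is the standard running count/mean/M2 update plus a merge step that combines two partial counters without losing precision. The sketch below illustrates that math only; the class name RunningStats and its members are hypothetical and not the actual StatVarianceCounter API:

// Welford-style running statistics with a Chan et al. merge of two counters.
final class RunningStats(var count: Long = 0L, var mean: Double = 0.0, var m2: Double = 0.0) {

  // Fold in a single value.
  def merge(value: Double): this.type = {
    count += 1
    val delta = value - mean
    mean += delta / count
    m2 += delta * (value - mean)
    this
  }

  // Merge another counter; numerically robust even when both counts are large.
  def merge(other: RunningStats): this.type = {
    if (other.count > 0) {
      val delta = other.mean - mean
      val total = count + other.count
      mean += delta * other.count / total
      m2 += other.m2 + delta * delta * count * other.count / total
      count = total
    }
    this
  }

  def variance: Double = if (count == 0) Double.NaN else m2 / count
}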
::Experimental:: Implemented by objects that can produce a streaming Sink for a specific format or system.
2.0.0
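A minimal sketch (the class name ConsoleLikeSinkProvider is hypothetical; note that Sink itself lives in the internal org.apache.spark.sql.execution.streaming package): every micro-batch is just printed, whereas a real sink would write it to the target system idempotently, keyed by batchId:

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.execution.streaming.Sink
import org.apache.spark.sql.sources.StreamSinkProvider
import org.apache.spark.sql.streaming.OutputMode

class ConsoleLikeSinkProvider extends StreamSinkProvider {
  override def createSink(
      sqlContext: SQLContext,
      parameters: Map[String, String],
      partitionColumns: Seq[String],
      outputMode: OutputMode): Sink = new Sink {
    override def addBatch(batchId: Long, data: DataFrame): Unit = {
      println(s"batch $batchId (outputMode=$outputMode)")
      data.show(truncate = false)
    }
  }
}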
::Experimental:: Implemented by objects that can produce a streaming Source for a specific format or system.
2.0.0
A filter that evaluates to true iff the attribute evaluates to a string that contains the string value.
1.3.1
A filter that evaluates to true iff the attribute evaluates to a string that ends with value.
1.3.1
A filter that evaluates to true iff the attribute evaluates to a string that starts with value.
1.3.1
A BaseRelation that can produce all of its tuples as an RDD of Row objects.
1.3.0
Plan for update of a column or row table. The "table" passed should already be resolved (by the parser and other callers), else there is ambiguity in column resolution of updateColumns/expressions between table and child.
Simply assemble the rest of the tables as per the user-defined join order.
Pick the current colocated group and put tables with filters into the currently built plan. This doesn't require any alteration to joinOrder as such.
We have to copy this from Spark's patterns.scala because we want to handle a single table with filters as well. This will have another advantage later if we decide to move our rule to the end instead of injecting it just after ReorderJoin, where additional nodes like Project require handling.
This hint too doesn't require any implementation as such.
Put the rest of the colocated table joins after applying ColocatedWithFilters.
Tables considered non-colocated according to currentColocatedGroup, with filters, are put into the join condition.
Put replicated tables with filters first. If we find only one replicated table with a filter, we try that with the largest colocated group.
Support for DML and other operations on external tables.
A set of APIs for adding data sources to Spark SQL.
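For instance, once a provider is on the classpath, a user can load it by its fully qualified class name or by its registered alias. The format name "myformat" and the rows option below are hypothetical; the snippet is spark-shell style:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("datasource-example").master("local[*]").getOrCreate()

// "myformat" stands in for whatever alias the provider's DataSourceRegister advertises;
// the fully qualified provider class name works as well.
val df = spark.read
  .format("myformat")
  .option("rows", "100")
  .load()

df.printSchema()
df.show()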