Collects placeholders marked as planLater by a strategy, along with the logical plans they wrap.
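The collection step above can be modeled with a small tree walk. This is an illustrative sketch in Python, not Spark's actual Scala code; the class and function names (`PlanLater`, `collect_placeholders`) are simplified stand-ins for the real planner types.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LogicalPlan:
    name: str

@dataclass
class PhysicalNode:
    name: str
    children: List["PhysicalNode"] = field(default_factory=list)

@dataclass
class PlanLater(PhysicalNode):
    # Placeholder wrapping a logical subtree that a later planning pass
    # will turn into a physical plan.
    logical: Optional[LogicalPlan] = None

def collect_placeholders(plan: PhysicalNode) -> List[Tuple[PhysicalNode, LogicalPlan]]:
    """Walk a partially planned physical tree and return every
    (placeholder, wrapped logical plan) pair, in traversal order."""
    found = []
    if isinstance(plan, PlanLater):
        found.append((plan, plan.logical))
    for child in plan.children:
        found.extend(collect_placeholders(child))
    return found

# Usage: a physical Project whose child is still an unplanned logical Scan.
tree = PhysicalNode("Project", [PlanLater("placeholder", [], LogicalPlan("Scan"))])
pairs = collect_placeholders(tree)
```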
Prunes bad plans to prevent combinatorial explosion.
A list of execution strategies that can be used by the planner.
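How the planner consumes that list can be sketched as follows: each strategy is applied in order, each may emit zero or more candidate physical plans, and the combined candidates are then passed through the pruning hook. This is a hedged toy model in Python, not Spark's QueryPlanner implementation; `prune_plans` here is a pass-through stand-in for the real pruning step.

```python
def prune_plans(plans):
    # Stand-in for the pruning hook that cuts bad candidate plans to
    # prevent combinatorial explosion; this sketch keeps everything.
    return plans

def plan(strategies, logical_plan):
    """Apply each strategy in order, concatenating the candidate
    physical plans each one emits, then prune the combined list."""
    candidates = [physical
                  for strategy in strategies
                  for physical in strategy(logical_plan)]
    return prune_plans(candidates)

# Usage: two toy strategies over a string standing in for a logical plan.
broadcast = lambda lp: [f"Broadcast({lp})"] if lp == "small" else []
sort_merge = lambda lp: [f"SortMerge({lp})"]
result = plan([broadcast, sort_merge], "small")
```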
Used to plan the aggregate operator for expressions based on the AggregateFunction2 interface.
Selects the proper physical plan for a join based on the joining keys and the size of the logical plan.
First, it uses the ExtractEquiJoinKeys pattern to find joins where at least some of the predicates can be evaluated by matching join keys. If found, join implementations are chosen with the following precedence:
- Broadcast: if one side of the join has an estimated physical size smaller than the user-configurable SQLConf.AUTO_BROADCASTJOIN_THRESHOLD, or if that side has an explicit broadcast hint (e.g. the user applied the org.apache.spark.sql.functions.broadcast() function to a DataFrame), then that side of the join is broadcast and the other side is streamed, with no shuffling performed. If both sides of the join are eligible to be broadcast, the smaller side is chosen.
- Shuffle hash join: if the average size of a single partition is small enough to build a hash table.
- Sort merge: if the matching join keys are sortable.
If there are no joining keys, join implementations are chosen with the following precedence:
- BroadcastNestedLoopJoin: if one side of the join could be broadcast
- CartesianProduct: for inner join
- BroadcastNestedLoopJoin: otherwise
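The precedence rules above can be condensed into a single decision function. This is a simplified illustrative model in Python, not Spark's JoinSelection code: the 10 MB threshold is an assumption (it matches the documented default of spark.sql.autoBroadcastJoinThreshold), and the parameter names are hypothetical.

```python
AUTO_BROADCASTJOIN_THRESHOLD = 10 * 1024 * 1024  # assumed 10 MB default

def choose_join(left_size, right_size, has_equi_keys,
                keys_sortable=True, partition_fits_in_memory=False,
                broadcast_hint=False):
    """Toy model of the join-selection precedence described above."""
    can_broadcast = (min(left_size, right_size) < AUTO_BROADCASTJOIN_THRESHOLD
                     or broadcast_hint)
    if has_equi_keys:
        if can_broadcast:
            return "BroadcastHashJoin"
        if partition_fits_in_memory:
            return "ShuffledHashJoin"
        if keys_sortable:
            return "SortMergeJoin"
    # No usable equi-join keys: fall back to nested-loop strategies.
    if can_broadcast:
        return "BroadcastNestedLoopJoin"
    return "CartesianProduct"
```

For example, a 1 KB table joined with a 1 GB table on equi-keys picks a broadcast join, while two large tables with sortable keys fall through to sort-merge.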
Plans special cases of limit operators.
Used to plan aggregation queries that are computed incrementally as part of a StreamingQuery. Currently this rule is injected into the planner on demand, only when planning in an org.apache.spark.sql.execution.streaming.StreamExecution.
This strategy is just for explaining a Dataset/DataFrame created by spark.readStream. It won't affect the execution, because StreamingRelation will be replaced with StreamingExecutionRelation in StreamingQueryManager, and StreamingExecutionRelation will be replaced with the real relation using the Source in StreamExecution.