Package org.apache.spark.sql.execution

package execution

The physical execution component of Spark SQL. Note that this is a private package: all classes in it are considered an internal API to Spark SQL and are subject to change between minor releases.

Linear Supertypes
AnyRef, Any

Type Members

  1. case class AlterTableAddColumnCommand(tableIdent: TableIdentifier, addColumn: StructField, extensions: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  2. case class AlterTableDropColumnCommand(tableIdent: TableIdentifier, column: String, extensions: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  3. case class AlterTableMiscCommand(tableIdent: TableIdentifier, sql: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  4. case class AlterTableToggleRowLevelSecurityCommand(tableIdent: TableIdentifier, enableRls: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  5. case class AppendColumnsExec(func: (Any) ⇒ Any, deserializer: Expression, serializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Applies the given function to each input row, appending the encoded result at the end of the row.

  6. case class AppendColumnsWithObjectExec(func: (Any) ⇒ Any, inputSerializer: Seq[NamedExpression], newColumnsSerializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with ObjectConsumerExec with Product with Serializable

    Permalink

    An optimized version of AppendColumnsExec, that can be executed on deserialized object directly.

  7. trait ApplyLimitOnExternalRelation extends ExternalRelation

    Permalink
  8. class Approximate extends Comparable[Approximate] with Ordered[Approximate] with Serializable

    Permalink

    The true count is greater than the lower bound and less than the max, with the given probability.

    Annotations
    @SQLUserDefinedType()
  9. class ApproximateType extends UserDefinedType[Approximate]

    Permalink
  10. trait BaseLimitExec extends SparkPlan with UnaryExecNode with CodegenSupport

    Permalink

    Helper trait which defines methods that are shared by both LocalLimitExec and GlobalLimitExec.

  11. trait BatchConsumer extends SparkPlan with CodegenSupport

    Permalink
  12. trait BinaryExecNode extends SparkPlan

    Permalink
  13. trait BucketsBasedIterator extends AnyRef

    Permalink
  14. abstract class BufferedRowIterator extends AnyRef

    Permalink
  15. class CMSParams extends Serializable

    Permalink
  16. class CacheManager extends internal.Logging

    Permalink

    Provides support in a SQLContext for caching query results and automatically using these cached results when subsequent queries are executed. Data is cached using byte buffers stored in an InMemoryRelation. This relation is automatically substituted into query plans that return the sameResult as the originally cached query.

    Internal to Spark SQL.
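
    A minimal usage sketch (assuming a local SparkSession and a temporary view named "people") of the public API that exercises the CacheManager: once a table is cached, later queries over the same plan are served from the cached InMemoryRelation.

      import org.apache.spark.sql.SparkSession

      object CacheManagerSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("cache-sketch").getOrCreate()
          import spark.implicits._

          val people = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")
          people.createOrReplaceTempView("people")

          // Registers the plan with the CacheManager; data is materialized lazily
          // into an InMemoryRelation on first use.
          spark.catalog.cacheTable("people")

          // This query's plan matches the cached plan, so the CacheManager
          // substitutes the InMemoryRelation for the underlying scan.
          spark.sql("SELECT name FROM people WHERE age > 26").show()

          spark.catalog.uncacheTable("people")
          spark.stop()
        }
      }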

  17. case class CachedData(plan: LogicalPlan, cachedRepresentation: InMemoryRelation) extends Product with Serializable

    Permalink

    Holds a cached logical plan and its data

  18. class CatalogStaleException extends Exception

    Permalink
  19. case class CoGroupExec(func: (Any, Iterator[Any], Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, leftDeserializer: Expression, rightDeserializer: Expression, leftGroup: Seq[Attribute], rightGroup: Seq[Attribute], leftAttr: Seq[Attribute], rightAttr: Seq[Attribute], outputObjAttr: Attribute, left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryExecNode with ObjectProducerExec with Product with Serializable

    Permalink

    Co-groups the data from left and right children, and calls the function with each group and 2 iterators containing all elements in the group from left and right side. The result of this function is flattened before being output.

  20. class CoGroupedIterator extends Iterator[(InternalRow, Iterator[InternalRow], Iterator[InternalRow])]

    Permalink

    Iterates over GroupedIterators and returns the cogrouped data, i.e. each record is a grouping key with its associated values from all GroupedIterators. Note: we assume the output of each GroupedIterator is ordered by the grouping key.

  21. case class CoalesceExec(numPartitions: Int, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Physical plan for returning a new RDD that has exactly numPartitions partitions. Similar to coalesce defined on an RDD, this operation results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, it will stay at the current number of partitions.

    However, if you are doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you would like (e.g. one node in the case of numPartitions = 1). To avoid this, see ShuffleExchange: it adds a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).
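
    A brief usage sketch, assuming a local SparkSession, of the public Dataset calls that plan to this operator; coalesce narrows partitions without a shuffle, while repartition adds one to preserve upstream parallelism.

      import org.apache.spark.sql.SparkSession

      object CoalesceSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[4]").appName("coalesce-sketch").getOrCreate()

          val df = spark.range(0, 1000, 1, numPartitions = 100)

          // Narrow dependency: each of the 10 result partitions claims ~10 parents,
          // no shuffle is performed (planned as CoalesceExec).
          val narrowed = df.coalesce(10)
          println(narrowed.rdd.getNumPartitions)   // 10

          // A drastic coalesce to 1 partition runs as a single task; if upstream
          // parallelism matters more than the extra shuffle, use repartition,
          // which inserts an exchange (shuffle) step instead.
          val reshuffled = df.repartition(1)
          println(reshuffled.rdd.getNumPartitions) // 1

          spark.stop()
        }
      }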

  22. class CoalescedPartitioner extends Partitioner

    Permalink

    A Partitioner that might group together one or more partitions from the parent.

  23. case class CodegenSparkFallback(child: SparkPlan, session: SnappySession) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Catch exceptions in code generation of SnappyData plans and fallback to Spark plans as last resort (including non-code generated paths).

  24. trait CodegenSupport extends SparkPlan

    Permalink

    An interface for those physical operators that support codegen.

  25. trait CodegenSupportOnExecutor extends SparkPlan with CodegenSupport

    Permalink

    Allow invoking produce/consume calls on executor without requiring a SparkContext.

  26. case class CollapseCodegenStages(conf: SQLConf) extends Rule[SparkPlan] with Product with Serializable

    Permalink

    Find the chained plans that support codegen, collapse them together as WholeStageCodegen.

  27. case class CollectLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Take the first limit elements and collect them to a single partition.

    This operator will be used when a logical Limit operation is the final operator in a logical plan, which happens when the user is collecting results back to the driver.

  28. case class CreateIndexCommand(indexName: TableIdentifier, baseTable: TableIdentifier, indexColumns: Seq[(String, Option[SortDirection])], options: Map[String, String]) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  29. case class CreatePolicyCommand(policyIdent: TableIdentifier, tableIdent: TableIdentifier, policyFor: String, applyTo: Seq[String], expandedPolicyApplyTo: Seq[String], currentUser: String, filterStr: String, filter: BypassRowLevelSecurity) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  30. case class CreateSchemaCommand(ifNotExists: Boolean, schemaName: String, authId: Option[(String, Boolean)]) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  31. case class CreateTableUsingCommand(tableIdent: TableIdentifier, baseTable: Option[String], userSpecifiedSchema: Option[StructType], schemaDDL: Option[String], provider: String, mode: SaveMode, options: Map[String, String], partitionColumns: Array[String], bucketSpec: Option[BucketSpec], query: Option[LogicalPlan], isBuiltIn: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  32. trait DataSourceScanExec extends SparkPlan with LeafExecNode with CodegenSupport

    Permalink
  33. case class DeployCommand(coordinates: String, alias: String, repos: Option[String], jarCache: Option[String], restart: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  34. case class DeployJarCommand(alias: String, paths: String, restart: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  35. class DescribeSnappyTableCommand extends DescribeTableCommand

    Permalink

    This extends Spark's describe to add support for CHAR and VARCHAR types.

  36. case class DeserializeToObjectExec(deserializer: Expression, outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with CodegenSupport with Product with Serializable

    Permalink

    Takes the input row from child and turns it into object using the given deserializer expression. The output of this operator is a single-field safe row containing the deserialized object.

  37. case class DictionaryCode(dictionary: ExprCode, bufferVar: String, dictionaryIndex: ExprCode) extends Product with Serializable

    Permalink

    Extended information for ExprCode variable to also hold the variable having dictionary reference and its index when dictionary encoding is being used.

  38. case class DropIndexCommand(ifExists: Boolean, indexName: TableIdentifier) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  39. case class DropPolicyCommand(ifExists: Boolean, policyIdentifer: TableIdentifier) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  40. case class DropSchemaCommand(schemaName: String, ignoreIfNotExists: Boolean, cascade: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  41. case class DropTableOrViewCommand(tableIdent: TableIdentifier, ifExists: Boolean, isView: Boolean, purge: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink

    Like Spark's DropTableCommand but checks for non-existent table case upfront to avoid unnecessary warning logs from Spark's DropTableCommand.

  42. class EncoderPlan[T] extends LogicalRDD

    Permalink
  43. case class EncoderScanExec(rdd: RDD[Any], encoder: ExpressionEncoder[Any], isFlat: Boolean, output: Seq[Attribute]) extends SparkPlan with LeafExecNode with CodegenSupport with Product with Serializable

    Permalink

    Efficient SparkPlan with code generation support to consume an RDD that has an ExpressionEncoder.

  44. abstract class ExecSubqueryExpression extends PlanExpression[SubqueryExec]

    Permalink

    The base class for subqueries that are used in SparkPlan.

  45. case class ExecutePlan(child: SparkPlan, preAction: () ⇒ Unit = () => ()) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    A wrapper plan to immediately execute the child plan without having to do an explicit collect. Only use for plans returning small results.

  46. case class ExpandExec(projections: Seq[Seq[Expression]], output: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Apply all of the GroupExpressions to every input row, hence we will get multiple output rows for an input row.

    projections

    The group of expressions, all of the group expressions should output the same schema specified by the parameter output

    output

    The output Schema

    child

    Child operator

  47. case class ExternalRDD[T](outputObjAttr: Attribute, rdd: RDD[T])(session: SparkSession) extends LeafNode with ObjectProducer with MultiInstanceRelation with Product with Serializable

    Permalink

    Logical plan node for scanning data from an RDD.

  48. case class ExternalRDDScanExec[T](outputObjAttr: Attribute, rdd: RDD[T]) extends SparkPlan with LeafExecNode with ObjectProducerExec with Product with Serializable

    Permalink

    Physical plan node for scanning data from an RDD.

  49. trait ExternalRelation extends AnyRef

    Permalink
  50. trait FileRelation extends AnyRef

    Permalink

    An interface for relations that are backed by files. When a class implements this interface, the list of paths that it returns will be returned to a user who calls inputPaths on any DataFrame that queries this relation.

  51. case class FileSourceScanExec(relation: HadoopFsRelation, output: Seq[Attribute], outputSchema: StructType, partitionFilters: Seq[Expression], dataFilters: Seq[Filter], metastoreTableIdentifier: Option[TableIdentifier]) extends SparkPlan with DataSourceScanExec with Product with Serializable

    Permalink

    Physical plan node for scanning data from HadoopFsRelations.

    relation

    The file-based relation to scan.

    output

    Output attributes of the scan.

    outputSchema

    Output schema of the scan.

    partitionFilters

    Predicates to use for partition pruning.

    dataFilters

    Data source filters to use for filtering data within partitions.

    metastoreTableIdentifier

    Identifier for the table in the metastore.

  52. case class FilterExec(condition: Expression, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with PredicateHelper with Product with Serializable

    Permalink

    Physical plan for Filter.

  53. case class FlatMapGroupsInRExec(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, outputSchema: StructType, keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with Product with Serializable

    Permalink

    Groups the input rows together and calls the R function with each group and an iterator containing all elements in the group. The result of this function is flattened before being output.

  54. case class GenerateExec(generator: Generator, join: Boolean, outer: Boolean, generatorOutput: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to a flatMap in functional programming with one important additional feature, which allows the input rows to be joined with their output (see the usage sketch after the parameter notes below).

    generator

    the generator expression

    join

    when true, each output row is implicitly joined with the input tuple that produced it.

    outer

    when true, each input row will be output at least once, even if the output of the given generator is empty. outer has no effect when join is false.

    generatorOutput

    the qualified output attributes of the generator of this node, which are constructed during the analysis phase and cannot be changed, as the parent node is already bound to them.
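
    A short usage sketch, assuming a local SparkSession, of the public API that is planned as GenerateExec; explode illustrates the join semantics described above.

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.explode

      object GenerateSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("generate-sketch").getOrCreate()
          import spark.implicits._

          val df = Seq(("a", Seq(1, 2)), ("b", Seq.empty[Int])).toDF("key", "values")

          // Each generated row is joined with the input row that produced it
          // (the join flag described above); "b" disappears because its array
          // is empty and outer is false for a plain explode.
          df.select($"key", explode($"values").as("value")).show()

          spark.stop()
        }
      }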

  55. case class GlobalLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with BaseLimitExec with Product with Serializable

    Permalink

    Take the first limit elements of the child's single output partition.

  56. case class GrantRevokeIntpCommand(isGrant: Boolean, users: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  57. case class GrantRevokeOnExternalTable(isGrant: Boolean, table: TableIdentifier, users: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  58. class GroupedIterator extends Iterator[(InternalRow, Iterator[InternalRow])]

    Permalink

    Iterates over a presorted set of rows, chunking it up by the grouping expression. Each call to next will return a pair containing the current group and an iterator that will return all the elements of that group. Iterators for each group are lazily constructed by extracting rows from the input iterator. As such, full groups are never materialized by this class.

    Example input:

    Input: [a, 1], [b, 2], [b, 3]
    Grouping: x#1
    InputSchema: x#1, y#2

    Result:

    First call to next():  ([a], Iterator([a, 1])
    Second call to next(): ([b], Iterator([b, 2], [b, 3])

    Note, the class does not handle the case of an empty input for simplicity of implementation. Use the factory to construct a new instance.

  59. class Hokusai[T] extends AnyRef

    Permalink

    Implements the algorithms and data structures from "Hokusai -- Sketching Streams in Real Time", by Sergiy Matusevych, Alexander Smola, Amr Ahmed. http://www.auai.org/uai2012/papers/231.pdf

    Aggregates state, so this is a mutable class.

    Since we are all still learning Scala, I thought I'd explain the use of implicits in this file. TimeAggregation takes an implicit constructor parameter: TimeAggregation[T]()(implicit val cmsMonoid: CMSMonoid[T]). The purpose for that is:

    + In Algebird, a CMSMonoid[T] is a factory for creating instances of CMS[T].
    + TimeAggregation needs to occasionally make new CMS instances, so it will use the factory.
    + By making it an implicit (and in the curried param), the outer context of the TimeAggregation can create/ensure that the factory is there.
    + Hokusai[T] will be the "outer context" so it can handle that for TimeAggregation.

    TODO 1. Decide if the underlying CMS should be mutable (save memory) or functional (algebird) I'm afraid that with the functional approach, and having so many, every time we merge two CMS, we create a third and that is wasteful of memory or may take too much memory. If we go with a mutable CMS, we have to either make stream-lib's serializable, or make our own.

    2. Clean up intrusion of algebird shenanigans in the code (implicit factories etc)

    3. Decide on API for managing data and time. Do we increment time in a separate operation or add a time parameter to addData()?

    4. Decide if we want to be mutable or functional in this datastruct. Current version is mutable.

  60. case class InSubquery(child: Expression, plan: SubqueryExec, exprId: ExprId, result: Array[Any] = null, updated: Boolean = false) extends ExecSubqueryExpression with Product with Serializable

    Permalink

    A subquery that checks whether the value of child is in the result of a query.

  61. case class InputAdapter(child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    InputAdapter is used to hide a SparkPlan from a subtree that supports codegen.

    This is the leaf node of a tree with WholeStageCodegen that is used to generate code that consumes an RDD iterator of InternalRow.

  62. case class InterpretCodeCommand(code: String, snappySession: SnappySession, options: Map[String, String] = Map.empty) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink

    Allow execution of ad hoc Scala code on the Lead node. Creates a new Scala interpreter for a Snappy session, cached for the life of the session: subsequent invocations of the 'interpret' command reuse the cached interpreter, allowing any variables (e.g. a DataFrame) to be preserved across invocations. State will not be preserved during Lead node failover.

    The application is injected with (1) the SnappySession in a variable called 'session' and (2) the options in a variable called 'intp_options'.

    To return values set a variable called 'intp_return' - a Seq[Row].

  63. class IntervalTracker extends AnyRef

    Permalink
  64. final class KeyFrequencyWithTimestamp[T] extends AnyRef

    Permalink
  65. trait LeafExecNode extends SparkPlan

    Permalink
  66. case class ListPackageJarsCommand(isJar: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  67. case class LocalLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with BaseLimitExec with Product with Serializable

    Permalink

    Take the first limit elements of each child partition, but do not collect or shuffle them.

  68. case class LocalTableScanExec(output: Seq[Attribute], rows: Seq[InternalRow]) extends SparkPlan with LeafExecNode with Product with Serializable

    Permalink

    Physical plan node for scanning data from a local collection.

  69. case class LogicalRDD(output: Seq[Attribute], rdd: RDD[InternalRow], outputPartitioning: Partitioning = UnknownPartitioning(0), outputOrdering: Seq[SortOrder] = Nil)(session: SparkSession) extends LeafNode with MultiInstanceRelation with Product with Serializable

    Permalink

    Logical plan node for scanning data from an RDD of InternalRow.

  70. case class MapElementsExec(func: AnyRef, outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with ObjectConsumerExec with ObjectProducerExec with CodegenSupport with Product with Serializable

    Permalink

    Applies the given function to each input object. The output of its child must be a single-field row containing the input object.

    This operator is a kind of safe version of ProjectExec: as its output is a custom object, we need to use a safe row to contain it.

  71. case class MapGroupsExec(func: (Any, Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with Product with Serializable

    Permalink

    Groups the input rows together and calls the function with each group and an iterator containing all elements in the group. The result of this function is flattened before being output.

  72. case class MapPartitionsExec(func: (Iterator[Any]) ⇒ Iterator[Any], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with ObjectConsumerExec with ObjectProducerExec with Product with Serializable

    Permalink

    Applies the given function to the input object iterator. The output of its child must be a single-field row containing the input object.

  73. abstract class NonRecursivePlans extends SparkPlan

    Permalink

    Base class for SparkPlan implementations that have only a code-generated version and use the same code path for the non-code-generated case. This prevents recursive calls into code generation if it fails for some reason.

  74. trait ObjectConsumerExec extends SparkPlan with UnaryExecNode

    Permalink

    Physical version of ObjectConsumer.

  75. case class ObjectHashMapAccessor(session: SnappySession, ctx: CodegenContext, keyExprs: Seq[Expression], valueExprs: Seq[Expression], classPrefix: String, hashMapTerm: String, dataTerm: String, maskTerm: String, multiMap: Boolean, consumer: CodegenSupport, cParent: CodegenSupport, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Provides helper methods for generated code to use ObjectHashSet with a generated class (having key and value columns as corresponding java type fields). This implementation saves the entire overhead of UnsafeRow conversion for both key type (like in BytesToBytesMap) and value type (like in BytesToBytesMap and VectorizedHashMapGenerator).

    It has been carefully optimized to minimize memory reads/writes, with minimalistic code to fit better in CPU instruction cache. Unlike the other two maps used by HashAggregateExec, this has no limitations on the key or value column types.

    The basic idea is that all of the key and value columns will be individual fields in a generated Java class having corresponding Java types. Storage of a column value in the map is a simple matter of assigning the incoming variable to the corresponding field of the class object, and access is likewise a read from that field of the class. Nullability information is crammed into long bit-mask fields, generating only as many as required (instead of the unnecessary overhead of something like a BitSet).

    Hashcode and equals methods are generated for the key column fields. Having both key and value fields in the same class object helps in cutting down generated code as well as improving cache locality, and saves at least one memory access for each row. In testing, this alone has shown to improve performance by ~25% in simple group-by queries. Furthermore, this class also provides inline hashcode and equals methods so that incoming register variables in generated code can be used directly (instead of stuffing them into a lookup key that will again read those fields). The class hashcode method is supposed to be used only internally by rehashing, and it too is just a field cached in the class object that is filled in during the initial insert (from the inline hashcode).

    For memory management this uses a simple approach of starting with an estimated size, then improving that estimate for future in a rehash where the rehash will also collect the actual size of current entries. If the rehash tells that no memory is available, then it will fallback to dumping the current map into MemoryManager and creating a new one with merge being done by an external sorter in a manner similar to how UnsafeFixedWidthAggregationMap handles the situation. Caller can instead decide to dump the entire map in that scenario like when using for a HashJoin.

    Overall this map is 5-10X faster than UnsafeFixedWidthAggregationMap and 2-4X faster than VectorizedHashMapGenerator. It is generic enough to be used for both group by aggregation as well as for HashJoins.
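
    The real class is generated at runtime; the hand-written Scala below is only an illustrative sketch of the layout described above (typed key and value fields in one object, a long null bit mask, and a cached hash code), not the generated code itself.

      // Illustration only: mirrors the described layout of a generated map entry
      // for a (String key, Long sum) aggregation. Not the actual generated code.
      final class GroupEntry(val key: String, var sum: Long) {
        // bit 0 => key is null, bit 1 => sum is null (crammed into one long mask)
        var nullMask: Long = 0L
        // hash of the key columns, computed once on insert and cached for rehashing
        var cachedHash: Int = 0

        def keyEquals(otherKey: String, otherKeyNull: Boolean): Boolean = {
          val thisKeyNull = (nullMask & 1L) != 0L
          if (thisKeyNull || otherKeyNull) thisKeyNull == otherKeyNull
          else key == otherKey
        }
      }

      object GroupEntry {
        // Inline-style hash over the key columns, usable both for insert and lookup
        // so incoming values need not be packed into a separate lookup key object.
        def hashKey(key: String, keyNull: Boolean): Int =
          if (keyNull) 0 else scala.util.hashing.MurmurHash3.stringHash(key)
      }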

  76. trait ObjectProducerExec extends SparkPlan

    Permalink

    Physical version of ObjectProducer.

  77. case class OptimizeMetadataOnlyQuery(catalog: SessionCatalog, conf: SQLConf) extends Rule[LogicalPlan] with Product with Serializable

    Permalink

    This rule optimizes the execution of queries that can be answered by looking only at partition-level metadata. This applies when all the columns scanned are partition columns, and the query has an aggregate operator that satisfies the following conditions:

    1. the aggregate expression is partition columns, e.g. SELECT col FROM tbl GROUP BY col;
    2. aggregate function on partition columns with DISTINCT, e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1;
    3. aggregate function on partition columns which has the same result with or without DISTINCT, e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.

  78. case class OutputFakerExec(output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    Permalink

    A plan node that does nothing but lie about the output of its child. Used to splice a (hopefully structurally equivalent) tree from a different optimization sequence into an already resolved tree.

  79. trait PartitionedDataSourceScan extends PrunedUnsafeFilteredScan

    Permalink
  80. case class PlanLater(plan: LogicalPlan) extends SparkPlan with LeafExecNode with Product with Serializable

    Permalink
  81. case class PlanSubqueries(sparkSession: SparkSession) extends Rule[SparkPlan] with Product with Serializable

    Permalink

    Plans scalar subqueries that are present in the given SparkPlan.

  82. case class ProjectExec(projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Physical plan for Project.

  83. case class PutIntoValuesColumnTable(table: CatalogTable, colNames: Option[Seq[String]], values: Seq[Seq[Expression]]) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  84. class QueryExecution extends AnyRef

    Permalink

    The primary workflow for executing relational queries using Spark. Designed to allow easy access to the intermediate phases of query execution for developers.

    While this is not a public class, we should avoid changing the function names for the sake of changing them, because a lot of developers use the feature for debugging.
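
    A small sketch, assuming a local SparkSession, of how the intermediate phases are commonly inspected while debugging.

      import org.apache.spark.sql.SparkSession

      object QueryExecutionSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("qe-sketch").getOrCreate()
          import spark.implicits._

          val df = Seq((1, "a"), (2, "b")).toDF("id", "name").filter($"id" > 1)

          val qe = df.queryExecution
          println(qe.logical)        // logical plan as attached to the Dataset
          println(qe.analyzed)       // after analysis (attributes resolved)
          println(qe.optimizedPlan)  // after the optimizer rules
          println(qe.executedPlan)   // physical SparkPlan that will run

          spark.stop()
        }
      }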

  85. class QueryExecutionException extends Exception

    Permalink
  86. abstract class RDDKryo[T] extends RDD[T] with KryoSerializable

    Permalink

    Base RDD KryoSerializable class that serializes minimal RDD fields.

  87. case class RDDScanExec(output: Seq[Attribute], rdd: RDD[InternalRow], nodeName: String, outputPartitioning: Partitioning = UnknownPartitioning(0), outputOrdering: Seq[SortOrder] = Nil) extends SparkPlan with LeafExecNode with Product with Serializable

    Permalink

    Physical plan node for scanning data from an RDD of InternalRow.

  88. case class RangeExec(range: Range) extends SparkPlan with LeafExecNode with CodegenSupport with Product with Serializable

    Permalink

    Physical plan for range (generating a range of 64 bit numbers).

  89. final class RecordBinaryComparator extends RecordComparator

    Permalink
  90. class ReservoirRegionSegmentMap[V] extends ReentrantReadWriteLock with SegmentMap[Row, V] with Serializable

    Permalink

    Created by vivekb on 21/10/16.

  91. case class ReuseSubquery(conf: SQLConf) extends Rule[SparkPlan] with Product with Serializable

    Permalink

    Finds duplicated subqueries in the Spark plan, then reuses the same subquery result for all the references.

  92. case class RowDataSourceScanExec(output: Seq[Attribute], rdd: RDD[InternalRow], relation: BaseRelation, outputPartitioning: Partitioning, metadata: Map[String, String], metastoreTableIdentifier: Option[TableIdentifier]) extends SparkPlan with DataSourceScanExec with Product with Serializable

    Permalink

    Physical plan node for scanning data from a relation.

  93. abstract class RowIterator extends AnyRef

    Permalink

    An internal iterator interface which presents a more restrictive API than scala.collection.Iterator.

    One major departure from the Scala iterator API is the fusing of the hasNext() and next() calls: Scala's iterator allows users to call hasNext() without immediately advancing the iterator to consume the next row, whereas RowIterator combines these calls into a single advanceNext() method.
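
    A hedged, self-contained sketch of the fused advanceNext()/getRow() contract described above; the trait below is defined locally for illustration and is not the internal class.

      // Local illustration of the described contract: advanceNext() both tests for
      // and moves to the next row; getRow returns the row fetched by the last call.
      trait SimpleRowIterator[T] {
        def advanceNext(): Boolean
        def getRow: T
      }

      final class ArrayRowIterator[T](rows: Array[T]) extends SimpleRowIterator[T] {
        private var idx = -1
        override def advanceNext(): Boolean = { idx += 1; idx < rows.length }
        override def getRow: T = rows(idx)
      }

      object RowIteratorSketch {
        def main(args: Array[String]): Unit = {
          val it = new ArrayRowIterator(Array("r1", "r2", "r3"))
          // Consumption loop: no separate hasNext/next pair to keep in sync.
          while (it.advanceNext()) println(it.getRow)
        }
      }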

  94. case class SHAMapAccessor(session: SnappySession, ctx: CodegenContext, keyExprs: Seq[Expression], valueExprs: Seq[Expression], classPrefix: String, hashMapTerm: String, overflowHashMapsTerm: String, keyValSize: Int, valueOffsetTerm: String, numKeyBytesTerm: String, numValueBytes: Int, currentOffSetForMapLookupUpdt: String, valueDataTerm: String, vdBaseObjectTerm: String, vdBaseOffsetTerm: String, nullKeysBitsetTerm: String, numBytesForNullKeyBits: Int, allocatorTerm: String, numBytesForNullAggBits: Int, nullAggsBitsetTerm: String, sizeAndNumNotNullFuncForStringArr: String, keyBytesHolderVarTerm: String, baseKeyObject: String, baseKeyHolderOffset: String, keyExistedTerm: String, skipLenForAttribIndex: Int, codeForLenOfSkippedTerm: String, valueDataCapacityTerm: String, storedAggNullBitsTerm: Option[String], storedKeyNullBitsTerm: Option[String], aggregateBufferVars: Seq[String], keyHolderCapacityTerm: String, shaMapClassName: String, useCustomHashMap: Boolean, previousSingleKey_Position_LenTerm: Option[(String, String, String)], codeSplitFuncParamsSize: Int, splitAggCode: Boolean, splitGroupByKeyCode: Boolean) extends SparkPlan with CodegenSupport with Product with Serializable

    Permalink
  95. case class SampleExec(lowerBound: Double, upperBound: Double, withReplacement: Boolean, seed: Long, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Physical plan for sampling the dataset (a usage sketch follows the parameter notes below).

    lowerBound

    Lower-bound of the sampling probability (usually 0.0)

    upperBound

    Upper-bound of the sampling probability. The expected fraction sampled will be ub - lb.

    withReplacement

    Whether to sample with replacement.

    seed

    the random seed

    child

    the SparkPlan
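
    A short usage sketch, assuming a local SparkSession, of the public Dataset.sample call that is planned as SampleExec; the lower bound is effectively 0.0 and the upper bound is the fraction.

      import org.apache.spark.sql.SparkSession

      object SampleSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("sample-sketch").getOrCreate()

          val df = spark.range(0, 100000)

          // Roughly 1% of the rows, sampled without replacement with a fixed seed.
          val sampled = df.sample(withReplacement = false, fraction = 0.01, seed = 42L)
          println(sampled.count())

          spark.stop()
        }
      }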

  96. final case class SampleOptions(qcs: Array[Int], name: String, fraction: Double, stratumSize: Int, errorLimitColumn: Int, errorLimitPercent: Double, memBatchSize: Int, timeSeriesColumn: Int, timeInterval: Long, concurrency: Int, schema: StructType, bypassSampling: Boolean, qcsPlan: Option[(CodeAndComment, ArrayBuffer[Any], Int, Array[DataType])]) extends Serializable with Product

    Permalink
  97. final class SamplePartition extends Partition with Serializable with Logging

    Permalink
  98. case class ScalarSubquery(plan: SubqueryExec, exprId: ExprId) extends ExecSubqueryExpression with Product with Serializable

    Permalink

    A subquery that will return only one row and one column.

    This is the physical copy of ScalarSubquery to be used inside SparkPlan.

  99. case class SerializeFromObjectExec(serializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with ObjectConsumerExec with CodegenSupport with Product with Serializable

    Permalink

    Takes the input object from the child and turns it into an unsafe row using the given serializer expression. The output of its child must be a single-field row containing the input object.

  100. case class SetSchemaCommand(schemaName: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  101. class SetSnappyCommand extends SetCommand

    Permalink
  102. class ShowSnappyTablesCommand extends ShowTablesCommand

    Permalink

    Changes the name of the "database" column to "schemaName" over Spark's ShowTablesCommand. Also, when Hive compatibility is turned on, this does not include the schema name or "isTemporary", in order to return a Hive-compatible result.

  103. case class ShowViewsCommand(session: SnappySession, schemaOpt: Option[String], viewPattern: Option[String]) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  104. class ShuffledRowRDD extends RDD[InternalRow]

    Permalink

    This is a specialized version of org.apache.spark.rdd.ShuffledRDD that is optimized for shuffling rows instead of Java key-value pairs. Note that something like this should eventually be implemented in Spark core, but that is blocked by some more general refactorings to shuffle interfaces / internals.

    This RDD takes a ShuffleDependency (dependency), and an optional array of partition start indices as input arguments (specifiedPartitionStartIndices).

    The dependency has the parent RDD of this RDD, which represents the dataset before shuffle (i.e. map output). Elements of this RDD are (partitionId, Row) pairs. Partition ids should be in the range [0, numPartitions - 1]. dependency.partitioner is the original partitioner used to partition map output, and dependency.partitioner.numPartitions is the number of pre-shuffle partitions (i.e. the number of partitions of the map output).

    When specifiedPartitionStartIndices is defined, specifiedPartitionStartIndices.length will be the number of post-shuffle partitions. For this case, the ith post-shuffle partition includes specifiedPartitionStartIndices[i] to specifiedPartitionStartIndices[i+1] - 1 (inclusive).

    When specifiedPartitionStartIndices is not defined, there will be dependency.partitioner.numPartitions post-shuffle partitions. For this case, a post-shuffle partition is created for every pre-shuffle partition.
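
    A standalone sketch, in plain Scala, of the post-shuffle partition arithmetic described above; it is an illustration of the indexing rule, not the internal implementation.

      object PartitionRangesSketch {
        // Returns, for each post-shuffle partition, the inclusive range of
        // pre-shuffle partition ids it reads.
        def postShuffleRanges(startIndices: Array[Int], numPreShufflePartitions: Int): Seq[(Int, Int)] =
          startIndices.indices.map { i =>
            val start = startIndices(i)
            val end =
              if (i + 1 < startIndices.length) startIndices(i + 1) - 1
              else numPreShufflePartitions - 1
            (start, end)
          }

        def main(args: Array[String]): Unit = {
          // 5 pre-shuffle partitions coalesced into 2 post-shuffle partitions.
          println(postShuffleRanges(Array(0, 3), numPreShufflePartitions = 5))
          // => Vector((0,2), (3,4))
        }
      }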

  105. case class SnappyCacheTableCommand(tableIdent: TableIdentifier, queryString: String, plan: Option[LogicalPlan], isLazy: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink

    Alternative to Spark's CacheTableCommand that shows the plan being cached in the GUI rather than count() plan for InMemoryRelation.

  106. class SnappyContextAQPFunctions extends SnappyContextFunctions with Logging

    Permalink
  107. case class SnappySortExec(sortPlan: SortExec, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Custom Sort plan. Currently this enables lazy sorting, i.e. sort only when the iterator is consumed the first time. Useful for SMJ when the left side is empty. Useful only for non-code-generated plans, since the latter are already lazy (SortExec checks for "needToSort", so sorting happens only on the first processNext).

  108. case class SnappyStreamingActionsCommand(action: Int, batchInterval: Option[Duration]) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  109. class SnapshotConnectionListener extends TaskCompletionListener with TaskFailureListener

    Permalink

    This is a TaskContext listener (for both success and failure of the task) that handles startup, commit and rollback of snapshot transactions for the task. It also provides a common connection that can be shared by all plans executing in the task. In conjunction with the apply methods of the companion object, it ensures that only one instance of this listener is attached in a TaskContext which is automatically removed at the end of the task execution.

    This is the preferred way for all plans that need connections and/or snapshot transactions so that handling transaction start/commit for any level of plan nesting etc can be dealt with cleanly for the entire duration of the task. Additionally cases where an EXCHANGE gets inserted between two plans are also handled as expected where separate transactions and connections will be used for the two plans. Both generated code and non-generated code (including RDD.compute) should use the apply methods of the companion object to obtain an instance of the listener, then use its connection() method to obtain the connection.

    One of the overloads of the apply method also allows one to send a custom connection creator instead of using the default one, but it is also assumed to return SnappyData connection only (either embedded or thin) for snapshot transactions to work. Typical usage of custom creator is for smart connector RDDs to use direct URLs without load-balance to the preferred hosts for the buckets being targeted instead of the default creator that will always use the locator.

  110. case class SortExec(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan, testSpillFrequency: Int = 0) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    Performs (external) sorting (a usage sketch follows the parameter notes below).

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

    testSpillFrequency

    Method for configuring periodic spilling in unit tests. If set, will spill every frequency records.
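
    A brief usage sketch, assuming a local SparkSession, of the public calls that map onto this operator with global = true and global = false respectively.

      import org.apache.spark.sql.SparkSession

      object SortSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("sort-sketch").getOrCreate()
          import spark.implicits._

          val df = Seq((3, "c"), (1, "a"), (2, "b")).toDF("id", "name")

          // Global sort: planned as SortExec with global = true (shuffles first if needed).
          df.orderBy($"id".desc).show()

          // Per-partition sort: SortExec with global = false, no shuffle.
          df.sortWithinPartitions($"id").show()

          spark.stop()
        }
      }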

  111. class SparkOptimizer extends Optimizer

    Permalink
  112. abstract class SparkPlan extends QueryPlan[SparkPlan] with internal.Logging with Serializable

    Permalink

    The base class for physical operators.

    The naming convention is that physical operators end with "Exec" suffix, e.g. ProjectExec.

  113. class SparkPlanInfo extends AnyRef

    Permalink

    :: DeveloperApi :: Stores information about a SQL SparkPlan.

    Annotations
    @DeveloperApi()
  114. class SparkPlanner extends SparkStrategies

    Permalink
  115. class SparkSqlAstBuilder extends AstBuilder

    Permalink

    Builder that converts an ANTLR ParseTree into a LogicalPlan/Expression/TableIdentifier.

  116. class SparkSqlParser extends AbstractSqlParser

    Permalink

    Concrete parser for Spark SQL statements.

  117. abstract class SparkStrategies extends QueryPlanner[SparkPlan]

    Permalink
  118. abstract class SparkStrategy extends GenericStrategy[SparkPlan]

    Permalink

    Converts a logical plan into zero or more SparkPlans. This API is exposed for experimenting with the query planner and is not designed to be stable across Spark releases. Developers writing libraries should instead consider using the stable APIs provided in org.apache.spark.sql.sources.

  119. case class StratifiedSample(options: Map[String, Any], child: LogicalPlan, baseTable: Option[TableIdentifier] = None)(qcs: (Array[Int], Array[String]) = ..., weightedColumn: AttributeReference = ...) extends UnaryNode with Product with Serializable

    Permalink
  120. case class StratifiedSampleExecute(child: SparkPlan, output: Seq[Attribute], options: Map[String, Any], qcs: Array[Int]) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Perform stratified sampling given a Query-Column-Set (QCS). This variant can also use a fixed fraction to be sampled instead of fixed number of total samples since it is also designed to be used with streaming data.

  121. final class StratifiedSampledRDD extends RDD[InternalRow] with Serializable

    Permalink
  122. abstract class StratifiedSampler extends Serializable with Cloneable with Logging

    Permalink
  123. class StratifiedSamplerCached extends StratifiedSampler with CastLongTime

    Permalink

    A stratified sampling implementation that uses a fraction and initial cache size. The latter is used as the initial reservoir size per stratum for reservoir sampling. It primarily tries to satisfy the fraction of the total data, repeatedly filling up the cache as required (and expanding the cache size for a bigger reservoir if required in later rounds). The fraction is attempted to be satisfied while ensuring that the selected rows are equally divided among the current strata (for those that received any rows, that is).

  124. class StratifiedSamplerCachedInRegion extends StratifiedSamplerCached

    Permalink

    Created by vivekb on 14/10/16.

    Attributes
    protected
  125. final class StratifiedSamplerErrorLimit extends StratifiedSampler with CastLongTime

    Permalink

    A stratified sampling implementation that uses an error limit with confidence on a numerical column, sampling as much as required to maintain the expected error within the limit. An optional initial cache size can be specified that is used as the initial reservoir size per stratum for reservoir sampling. The error limit is attempted to be honoured for each stratum independently, with the sampling rate increased or decreased accordingly. It uses standard closed-form estimation of the sampling error, increasing or decreasing the sampling as required (and expanding the cache size for a bigger reservoir if required in later rounds).

  126. final class StratifiedSamplerReservoir extends StratifiedSampler

    Permalink

    A simple reservoir based stratified sampler that will use the provided reservoir size for every stratum present in the incoming rows.
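
    A hedged, self-contained sketch of per-stratum reservoir sampling in plain Scala; it illustrates the idea only and is not the internal implementation.

      import scala.collection.mutable
      import scala.util.Random

      object ReservoirPerStratumSketch {
        // Classic reservoir sampling: keep reservoirSize rows per stratum key,
        // giving every row of a stratum an equal chance of being retained.
        def sample[K, R](rows: Iterator[(K, R)], reservoirSize: Int, rng: Random): Map[K, Seq[R]] = {
          val reservoirs = mutable.Map.empty[K, mutable.ArrayBuffer[R]]
          val seen = mutable.Map.empty[K, Long].withDefaultValue(0L)
          rows.foreach { case (key, row) =>
            val res = reservoirs.getOrElseUpdate(key, mutable.ArrayBuffer.empty[R])
            val n = seen(key) + 1
            seen(key) = n
            if (res.length < reservoirSize) res += row
            else {
              val j = (rng.nextDouble() * n).toLong
              if (j < reservoirSize) res(j.toInt) = row
            }
          }
          reservoirs.map { case (k, v) => k -> v.toSeq }.toMap
        }

        def main(args: Array[String]): Unit = {
          val rows = (1 to 1000).iterator.map(i => (i % 3, i)) // three strata
          println(sample(rows, reservoirSize = 5, new Random(42)))
        }
      }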

  127. final class StratumCache extends StratumReservoir

    Permalink

    An extension to StratumReservoir to also track total samples seen since last time slot and short fall from previous rounds.

  128. class StratumInternalRow extends InternalRow

    Permalink
  129. class StratumReservoir extends DataSerializable with Sizeable with Serializable

    Permalink

    For each stratum (i.e. a unique set of values for QCS), keep a set of meta-data including number of samples collected, total number of rows in the stratum seen so far, the QCS key, reservoir of samples etc.

  130. case class SubqueryExec(name: String, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Physical plan for a subquery.

  131. trait TableExec extends SparkPlan with UnaryExecNode with CodegenSupportOnExecutor

    Permalink

    Base class for bulk insert/mutation operations for column and row tables.

  132. case class TakeOrderedAndProjectExec(limit: Int, sortOrder: Seq[SortOrder], projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Permalink

    Take the first limit elements as defined by the sortOrder, and do projection if needed. This is logically equivalent to having a Limit operator after a SortExec operator, or having a ProjectExec operator between them. This could have been named TopK, but Spark's top operator does the opposite in ordering so we name it TakeOrdered to avoid confusion.
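
    A short usage sketch, assuming a local SparkSession, of the sort-plus-limit pattern that is typically planned as this operator.

      import org.apache.spark.sql.SparkSession

      object TopKSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("topk-sketch").getOrCreate()
          import spark.implicits._

          val df = spark.range(0, 10000).toDF("id")

          // Sort + limit (+ projection) avoids a full global sort followed by a
          // separate limit; inspect the physical plan to see how it was planned.
          val top10 = df.orderBy($"id".desc).limit(10).select($"id")
          top10.explain()
          top10.show()

          spark.stop()
        }
      }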

  133. final class TokenizedScalarSubquery extends ScalarSubquery

    Permalink

    Extends Spark's ScalarSubquery to avoid emitting a constant in generated code, instead passing it as a reference object using TokenLiteral to enable generated code re-use.

  134. trait TopK extends Serializable

    Permalink
  135. final class TopKHokusai[T] extends Hokusai[T] with TopK

    Permalink
  136. class TopKStub extends TopK with Serializable

    Permalink
  137. final class TopKWrapper extends ReadWriteLock with CastLongTime with Serializable

    Permalink
    Attributes
    protected[org.apache.spark.sql]
  138. case class TruncateManagedTableCommand(ifExists: Boolean, table: TableIdentifier) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  139. case class UnDeployCommand(alias: String) extends LeafNode with RunnableCommand with Product with Serializable

    Permalink
  140. trait UnaryExecNode extends SparkPlan

    Permalink
  141. case class UnionExec(children: Seq[SparkPlan]) extends SparkPlan with Product with Serializable

    Permalink

    Physical plan for unioning two plans, without a distinct. This is UNION ALL in SQL.

  142. final class UnsafeExternalRowSorter extends AnyRef

    Permalink
  143. final class UnsafeFixedWidthAggregationMap extends AnyRef

    Permalink
  144. final class UnsafeKVExternalSorter extends AnyRef

    Permalink
  145. abstract class UnsafeKeyValueSorter extends AnyRef

    Permalink
  146. class UnsafeRowSerializer extends Serializer with Serializable

    Permalink

    Serializer for serializing UnsafeRows during shuffle. Since UnsafeRows are already stored as bytes, this serializer simply copies those bytes to the underlying output stream. When deserializing a stream of rows, instances of this serializer mutate and return a single UnsafeRow instance that is backed by an on-heap byte array.

    Note that this serializer implements only the Serializer methods that are used during shuffle, so certain SerializerInstance methods will throw UnsupportedOperationException.

  147. case class WholeStageCodegenExec(child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Permalink

    WholeStageCodegen compiles a subtree of plans that support codegen together into a single Java function.

    Here is the call graph for generating the Java source (plan A supports codegen, but plan B does not):

      WholeStageCodegen       Plan A               FakeInput        Plan B
    =========================================================================

    -> execute()
        |
     doExecute() --------->   inputRDDs() -------> inputRDDs() ------> execute()
        |
        +----------------->   produce()
                                |
                             doProduce()  -------> produce()
                                                      |
                                                   doProduce()
                                                      |
                             doConsume() <--------- consume()
                                |
     doConsume()  <--------  consume()

    SparkPlan A should override doProduce() and doConsume().

    doCodeGen() will create a CodeGenContext, which will hold a list of variables for input, used to generate code for BoundReference.
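
    A small sketch, assuming a local SparkSession, showing how collapsed stages appear in practice: operators fused into a WholeStageCodegenExec stage are prefixed with '*' in explain() output.

      import org.apache.spark.sql.SparkSession

      object WholeStageCodegenSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("wsc-sketch").getOrCreate()
          import spark.implicits._

          // Range, Filter and Project all support codegen, so they are collapsed
          // into a single whole-stage-generated function.
          val df = spark.range(0, 1000).filter($"id" % 2 === 0).select($"id" * 2)
          df.explain()

          spark.stop()
        }
      }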

  148. case class WholeStageCodegenRDD(sc: SparkContext, source: CodeAndComment, references: Array[Any], durationMs: SQLMetric, inputRDDs: Seq[RDD[InternalRow]]) extends ZippedPartitionsBaseRDD[InternalRow] with Serializable with KryoSerializable with Product

    Permalink

Value Members

  1. object Approximate extends Serializable

    Permalink
  2. object ApproximateType extends ApproximateType

    Permalink
  3. object BucketsBasedIterator

    Permalink
  4. object CMSParams extends Serializable

    Permalink
  5. object CommonUtils

    Permalink
  6. object ConnectionPool

    Permalink

    A global way to obtain a pooled DataSource with a given set of pool and connection properties.

    Supports Tomcat-JDBC pool and HikariCP.

  7. object DictionaryOptimizedMapAccessor

    Permalink

    Makes use of dictionary indexes for strings if any. Depends only on the presence of dictionary per batch of rows (where the batch must be substantially greater than its dictionary for optimization to help).

    For single column hash maps (groups or joins), it can be turned into a flat indexed array instead of a map. Create an array of class objects as stored in ObjectHashSet having the length same as dictionary so that dictionary index can be used to directly lookup the array. Then for the first lookup into the array for a dictionary index, lookup the actual ObjectHashSet for the key to find the map entry object and insert into the array. An alternative would be to pre-populate the array by making one pass through the dictionary, but it may not be efficient if many of the entries in the dictionary get filtered out by query predicates and never need to consult the created array.

    For multiple column hash maps having one or more dictionary indexed columns, there is slightly more work. Instead of an array as in single column case, create a new hash map where the key columns values are substituted by dictionary index value. However, the map entry will remain identical to the original map so to save space add the additional index column to the full map itself. As new values are inserted into this hash map, lookup the full hash map to locate its map entry, then point to the same map entry in this new hash map too. Thus for subsequent look-ups the new hash map can be used completely based on integer dictionary indexes instead of strings.

    An alternative approach can be to just store the hash code arrays separately for each of the dictionary columns indexed identical to dictionary. Use this to lookup the main map which will also have additional columns for dictionary indexes (that will be cleared at the start of a new batch). On first lookup for key columns where dictionary indexes are missing in the map, insert the dictionary index in those additional columns. Then use those indexes for equality comparisons instead of string.

    The multiple column dictionary optimization will be useful for only string dictionary types where cost of looking up a string in hash map is substantially higher than integer lookup. The single column optimization can improve performance for other dictionary types though its efficacy for integer/long types will be reduced to avoiding hash code calculation. Given this, the additional overhead of array maintenance may not be worth the effort (and could possibly even reduce overall performance in some cases), hence this optimization is currently only for string type.
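
    A standalone sketch in plain Scala, illustration only, of the single-column case described above: a per-batch array indexed by dictionary code that lazily caches the entry found in the real hash map.

      import scala.collection.mutable

      object DictionaryIndexSketch {
        final case class GroupState(key: String, var count: Long)

        def main(args: Array[String]): Unit = {
          // Per-batch string dictionary and the column encoded as dictionary indexes.
          val dictionary = Array("apple", "banana", "cherry")
          val encodedColumn = Array(0, 1, 0, 2, 1, 0)

          // The "real" hash map keyed by the decoded string.
          val hashMap = mutable.HashMap.empty[String, GroupState]
          // Flat array indexed by dictionary code, lazily pointing at map entries,
          // so repeated codes skip the string hash/equality lookup entirely.
          val byDictIndex = new Array[GroupState](dictionary.length)

          encodedColumn.foreach { code =>
            var state = byDictIndex(code)
            if (state == null) {
              val key = dictionary(code)
              state = hashMap.getOrElseUpdate(key, GroupState(key, 0L))
              byDictIndex(code) = state
            }
            state.count += 1
          }

          println(hashMap.values.toSeq.sortBy(_.key)) // counts per group
        }
      }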

  8. object ExternalRDD extends Serializable

    Permalink
  9. object GrantRevokeOnExternalTable extends Serializable

    Permalink
  10. object GroupedIterator

    Permalink
  11. object Hokusai

    Permalink
  12. object IntervalTracker

    Permalink
  13. object ObjectHashMapAccessor extends Serializable

    Permalink
  14. object ObjectOperator

    Permalink

    Helper functions for physical operators that work with user defined objects.

  15. object RDDConversions

    Permalink
  16. object RefreshMetadata extends Enumeration with Function with GetFunctionMembers

    Permalink
    Annotations
    @SerialVersionUID()
  17. object RowIterator

    Permalink
  18. object SHAMapAccessor extends Serializable

    Permalink
  19. object SQLExecution

    Permalink
  20. object SecurityUtils extends Logging

    Permalink

    Common security related calls.

  21. object SnappyContextAQPFunctions

    Permalink
  22. object SnappyQueryExecution

    Permalink
  23. object SnapshotConnectionListener extends Logging

    Permalink

    This companion object is primarily to ensure that only a single listener is attached in a TaskContext (e.g. delta buffer + column table scan, or putInto may try to attach twice).

  24. object SortPrefixUtils

    Permalink
  25. object SparkPlan extends Serializable

    Permalink
  26. object StratifiedSampler extends Serializable

    Permalink
  27. object SubqueryExec extends Serializable

    Permalink
  28. object TopKHokusai extends Serializable

    Permalink
  29. object TopKWrapper extends Serializable

    Permalink
  30. object UnaryExecNode extends Serializable

    Permalink
  31. object WholeStageCodegenExec extends Serializable

    Permalink
  32. package aggregate

    Permalink
  33. package aqp

    Permalink
  34. package bootstrap

    Permalink
  35. package closedform

    Permalink
  36. package cms

    Permalink
  37. package columnar

    Permalink
  38. package command

    Permalink
  39. package common

    Permalink
  40. package datasources

    Permalink
  41. package exchange

    Permalink
  42. package joins

    Permalink

    Physical execution operators for join operations.

  43. package metric

    Permalink
  44. package oplog

    Permalink
  45. package python

    Permalink
  46. package r

    Permalink
  47. package row

    Permalink
  48. package serializer

    Permalink
  49. package sources

    Permalink
  50. package stat

    Permalink
  51. package streaming

    Permalink
  52. package streamsummary

    Permalink
  53. package ui

    Permalink
  54. package vectorized

    Permalink
  55. package window

    Permalink
