Check if this dialect instance can handle a certain JDBC URL.
The JDBC URL.
True if the dialect can be applied to the given JDBC URL.
NullPointerException if the URL is null.
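A minimal sketch of this check, assuming a hypothetical dialect object and an illustrative URL prefix (neither is the actual SnappyData implementation):

    import java.util.Objects
    import org.apache.spark.sql.jdbc.JdbcDialect

    // Hypothetical dialect; the URL prefix is an assumption for illustration.
    object ExampleSnappyDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = {
        // Per the contract above, a null URL yields a NullPointerException.
        Objects.requireNonNull(url, "url")
        url.startsWith("jdbc:snappydata:")
      }
    }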
Override connection-specific properties to run before a select is made. This is in place to allow dialects that need special treatment to optimize behavior.
The connection object.
The connection properties. This is passed through from the relation.
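As an illustration of why a dialect might override this hook, the sketch below disables autocommit when a positive fetch size is set so the driver can stream results instead of materializing them (the same technique Spark's PostgresDialect uses); whether SnappyData needs this is an assumption:

    import java.sql.Connection
    import org.apache.spark.sql.jdbc.JdbcDialect

    object ExampleStreamingDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      override def beforeFetch(connection: Connection,
          properties: Map[String, String]): Unit = {
        // A positive fetch size only enables streaming on some drivers when
        // autocommit is off; the "fetchsize" key is passed from the relation.
        if (properties.getOrElse("fetchsize", "0").toInt > 0) {
          connection.setAutoCommit(false)
        }
      }
    }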
Create a new schema.
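A plausible implementation simply issues a CREATE SCHEMA statement; the helper below is a sketch with an assumed signature, not the actual SnappyData API:

    import java.sql.Connection

    object ExampleSchemaSupport {
      // Hypothetical helper; the signature is an assumption.
      def createSchema(conn: Connection, schemaName: String): Unit = {
        val stmt = conn.createStatement()
        try stmt.executeUpdate(s"CREATE SCHEMA $schemaName")
        finally stmt.close()
      }
    }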
Get the custom datatype mapping for the given JDBC meta information.
The SQL type (see java.sql.Types).
The SQL type name (e.g. "BIGINT UNSIGNED").
The size of the type.
Result metadata associated with this type.
The actual DataType (subclasses of org.apache.spark.sql.types.DataType) or null if the default type mapping should be used.
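A sketch of a custom mapping, borrowing the example type above: an unsigned 64-bit integer does not fit in a LongType, so it is widened to a decimal (the approach Spark's MySQL dialect takes); the specific rule is illustrative, not SnappyData's actual mapping:

    import java.sql.Types
    import org.apache.spark.sql.jdbc.JdbcDialect
    import org.apache.spark.sql.types._

    object ExampleTypeMappingDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      override def getCatalystType(sqlType: Int, typeName: String, size: Int,
          md: MetadataBuilder): Option[DataType] = {
        if (sqlType == Types.BIGINT && typeName == "BIGINT UNSIGNED") {
          Some(DecimalType(20, 0))  // too wide for the signed LongType
        } else {
          None  // fall back to the default type mapping
        }
      }
    }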
Get JDBC metadata for a Catalyst column representation.
See the SPARK-10101 issue for a similar problem; once the PR raised there was merged, VARCHAR handling here could be updated accordingly. [UPDATE] The related PRs have been merged and SPARK-10101 closed, but the change only passes full types through in CREATE TABLE and not when reading, unlike what is required for SnappyData.
The DataType of the column.
Any additional Metadata for the column.
If true, the returned type name string will be suitable for a column definition in CREATE TABLE and similar statements; the differences are that it will include size definitions for CHAR/VARCHAR, and complex types will be mapped to the underlying storage format (i.e. BLOB) rather than to a display format.
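The sketch below illustrates the distinction the flag draws; the helper name, signature, and mappings are assumptions for illustration only:

    import org.apache.spark.sql.types._

    object ExampleTypeNames {
      // Hypothetical helper showing the distinction drawn above: CREATE TABLE
      // needs sized VARCHAR(n), and complex types map to the storage format.
      def jdbcTypeName(dt: DataType, md: Metadata, forTableDefn: Boolean): String =
        dt match {
          case StringType if forTableDefn && md.contains("size") =>
            s"VARCHAR(${md.getLong("size")})"
          case StringType => "VARCHAR"
          case _: ArrayType | _: MapType | _: StructType if forTableDefn => "BLOB"
          case _ => dt.simpleString.toUpperCase
        }
    }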
See the SPARK-10101 issue for a similar problem. If the PR raised there is ever merged, we can remove this method.
The metadata.
The new JdbcType if there is an override for this DataType.
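A sketch of such an override, with an assumed signature: it consults a "size" entry in the column metadata to emit a sized VARCHAR where the metadata-unaware default could not:

    import java.sql.Types
    import org.apache.spark.sql.jdbc.JdbcType
    import org.apache.spark.sql.types.{DataType, Metadata, StringType}

    object ExampleMetadataTypes {
      // Hypothetical metadata-aware variant; the "size" key is an assumption.
      def getJDBCType(dt: DataType, md: Metadata): Option[JdbcType] = dt match {
        case StringType if md.contains("size") =>
          Some(JdbcType(s"VARCHAR(${md.getLong("size")})", Types.VARCHAR))
        case _ => None  // no override; use the plain getJDBCType(dt)
      }
    }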
Retrieve the JDBC/SQL type for a given datatype.
The datatype (e.g. org.apache.spark.sql.types.StringType).
The new JdbcType if there is an override for this DataType.
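A sketch of the standard Spark override; the CLOB and BOOLEAN choices are illustrative, not SnappyData's actual mappings:

    import java.sql.Types
    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcType}
    import org.apache.spark.sql.types._

    object ExampleJdbcTypeDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
        case StringType  => Some(JdbcType("CLOB", Types.CLOB))
        case BooleanType => Some(JdbcType("BOOLEAN", Types.BOOLEAN))
        case _           => None  // use Spark's default mapping
      }
    }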
The SQL query that should be used to discover the schema of a table. It only needs to ensure that the result set has the same schema as the table, such as by calling "SELECT * ...". Dialects can override this method to return a query that works best in a particular database.
The name of the table.
The SQL query to use for discovering the schema.
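Spark's default is effectively "SELECT * FROM <table> WHERE 1=0"; the LIMIT 0 variant below is an illustrative alternative, not SnappyData's actual choice:

    import org.apache.spark.sql.jdbc.JdbcDialect

    object ExampleSchemaQueryDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      // Returns an empty result set that still carries the table's schema.
      override def getSchemaQuery(table: String): String =
        s"SELECT * FROM $table LIMIT 0"
    }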
Get the SQL query that should be used to find if the given table exists. Dialects can override this method to return a query that works best in a particular database.
The name of the table.
The SQL query to use for checking the table.
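A sketch of the corresponding override; "SELECT 1 ... LIMIT 1" is the variant several built-in Spark dialects use, and is assumed here for illustration:

    import org.apache.spark.sql.jdbc.JdbcDialect

    object ExampleTableExistsDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      // Fails with an exception if the table is missing, without scanning rows.
      override def getTableExistsQuery(table: String): String =
        s"SELECT 1 FROM $table LIMIT 1"
    }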
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
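A sketch, assuming a dialect whose TRUNCATE TABLE never cascades; reporting Some(false) lets callers use TRUNCATE safely for overwrites. Whether this holds for SnappyData is an assumption:

    import org.apache.spark.sql.jdbc.JdbcDialect

    object ExampleTruncateDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      // TRUNCATE TABLE is assumed not to cascade to dependent tables here.
      override def isCascadingTruncateTable(): Option[Boolean] = Some(false)
    }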
Quotes the identifier. This is used to put quotes around the identifier in case the column name is a reserved keyword, or in case it contains characters that require quotes (e.g. space).
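A sketch using double quotes (the common SQL form) and doubling any embedded quote character so the identifier remains valid; SnappyData's exact quoting rules are not assumed here:

    import org.apache.spark.sql.jdbc.JdbcDialect

    object ExampleQuotingDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:example:")

      override def quoteIdentifier(colName: String): String =
        "\"" + colName.replace("\"", "\"\"") + "\""
    }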
Query string to check for the existence of a table.
Get the DDL to truncate a table, or null/empty if truncate is not supported.
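A minimal sketch matching the contract above; the helper name and signature are assumptions:

    object ExampleTruncateSupport {
      // Hypothetical helper; a dialect that cannot truncate would return
      // null or an empty string instead of the DDL.
      def truncateTable(tableName: String): String =
        s"TRUNCATE TABLE $tableName"
    }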
Base implementation shared by the various dialect implementations for SnappyData.