Overview
This section introduces several common operations, such as starting a cluster, working with tables (load, query, update), working with streams, and running approximate queries.
Running the Examples: Topics in this section refer to source code examples that are shipped with the product. Instructions to run these examples can be found in the source code.
Source code for these examples is located in the `quickstart/src/main/scala/org/apache/spark/examples/snappydata` and `quickstart/python` directories of the SnappyData product distribution.
You can run the examples in either of the following ways:

- **In local mode**: Use the `bin/run-example` script to run Scala examples, or the `bin/spark-submit` script to run Python examples. In this mode, the examples run colocated with Spark and the SnappyData store in the same JVM.
- **As a job**: Many of the Scala examples are also implemented as SnappyData jobs. In this case, the examples can be submitted as jobs to a running SnappyData cluster. Refer to the jobs section for details on how to run a job.
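As a rough sketch, the two invocation styles look like the commands below. The example class names, the Python script name, the jar path, and the lead node address are illustrative assumptions; check the example sources and scripts shipped with your distribution for the exact names.

```shell
# Local mode: run a Scala example colocated with the SnappyData store.
# run-example resolves the class relative to org.apache.spark.examples,
# so only the trailing package and class name are given.
# (CreateReplicatedRowTable is an assumed example name.)
./bin/run-example snappydata.CreateReplicatedRowTable

# Local mode: run a Python example with spark-submit.
# (CreateTable.py is an assumed script name under quickstart/python.)
./bin/spark-submit quickstart/python/CreateTable.py

# As a job: submit a Scala example to a running SnappyData cluster.
# Assumes the lead node's job server is reachable at localhost:8090
# and that the examples jar path below matches your installation.
./bin/snappy-job.sh submit \
    --lead localhost:8090 \
    --app-name CreateColumnTableApp \
    --class org.apache.spark.examples.snappydata.CreateColumnTable \
    --app-jar examples/jars/quickstart.jar
```

The local-mode scripts are convenient for experimentation, while job submission exercises the same code against a real cluster; see the jobs section for the full set of submission options.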
Note
SnappyData also provides a Java API. Refer to the documentation for more details on the Java API.
The following topics are covered in this section:
- How to Access SnappyData Store from an existing Spark Installation using Smart Connector
- How to Load Data from External Data Stores (e.g. HDFS, Cassandra, Hive, etc)
- How to import data from a Hive Table into a SnappyData Table
- How to Configure Apache Zeppelin to Securely and Concurrently access the SnappyData Cluster