package cassandra
Type Members
- case class AnalyzedPredicates(handledByCassandra: Set[Filter], handledBySpark: Set[Filter]) extends Product with Serializable
- case class Auto(ratio: Double) extends DseSearchOptimizationSetting with Product with Serializable
- class BasicCassandraPredicatePushDown[Predicate] extends AnyRef
Determines which filter predicates can be pushed down to Cassandra.
1. Only push down non-partition key column predicates with =, >, <, >=, <= predicates.
2. Only push down primary key column predicates with = or IN predicates.
3. If there are regular columns among the pushdown predicates, they must include at least one EQ expression on an indexed column and no IN predicates.
4. All partition column predicates must be included in the predicates to be pushed down; only the last part of the partition key can be an IN predicate. For each partition column, only one predicate is allowed.
5. For clustering column predicates, only the last predicate can be a non-EQ predicate (including IN), and all preceding column predicates must be EQ predicates. If there is only one clustering column predicate, it can be any non-IN predicate.
6. No predicates are pushed down if there is any OR condition or NOT IN condition.
7. Multiple predicates on the same column cannot be pushed down if any of them is an equality or IN predicate.
The list of predicates to be pushed down is available in the predicatesToPushDown property; the list of predicates that cannot be pushed down is available in the predicatesToPreserve property. A sketch of the resulting split follows this entry.
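To illustrate the rules above, the hedged sketch below builds the AnalyzedPredicates split by hand for a hypothetical table with partition key pk, clustering column cc, and unindexed regular column reg; the schema and the predicate assignments are assumptions chosen to match the rules, not output captured from the connector.

  import org.apache.spark.sql.sources._
  import org.apache.spark.sql.cassandra.AnalyzedPredicates

  // Hypothetical schema: partition key `pk`, clustering column `cc`,
  // regular (unindexed) column `reg`.
  val filters: Set[Filter] = Set(
    EqualTo("pk", 1),      // rule 2: EQ on the partition key column -- pushable
    GreaterThan("cc", 10), // rule 5: sole clustering predicate, non-IN -- pushable
    GreaterThan("reg", 0)  // rule 3: regular column without an indexed EQ -- kept in Spark
  )

  // The split the pushdown analysis would be expected to produce:
  val analyzed = AnalyzedPredicates(
    handledByCassandra = Set(EqualTo("pk", 1), GreaterThan("cc", 10)),
    handledBySpark     = Set(GreaterThan("reg", 0))
  )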
- trait CassandraMetadataFunction extends UnaryExpression with Unevaluable
- trait CassandraPredicateRules extends AnyRef
- implicit final class CassandraSQLContextFunctions extends AnyVal
- final class CassandraSQLRow extends GettableData with Row with Serializable
- case class CassandraSourceOptions(pushdown: Boolean = true, confirmTruncate: Boolean = false, cassandraConfs: Map[String, String] = Map.empty) extends Product with Serializable
Stores data source options.
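These options travel as plain string key/value pairs through the standard DataFrame reader and writer. A minimal sketch, assuming a keyspace test with a table words already exists:

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().getOrCreate()

  // Read with predicate pushdown explicitly disabled.
  val df = spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(Map("keyspace" -> "test", "table" -> "words", "pushdown" -> "false"))
    .load()

  // Overwriting a Cassandra table requires opting in to truncation.
  df.write
    .format("org.apache.spark.sql.cassandra")
    .options(Map("keyspace" -> "test", "table" -> "words", "confirmTruncate" -> "true"))
    .mode("overwrite")
    .save()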
- implicit final class CassandraSparkSessionFunctions extends AnyVal
- case class CassandraTTL(child: Expression) extends UnaryExpression with CassandraMetadataFunction with Product with Serializable
- trait CassandraTableDefProvider extends AnyRef
- case class CassandraWriteTime(child: Expression) extends UnaryExpression with CassandraMetadataFunction with Product with Serializable
- implicit final class DataFrameReaderWrapper extends AnyVal
- implicit final class DataFrameWriterWrapper[T] extends AnyVal
- implicit final class DataStreamWriterWrapper[T] extends AnyVal
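These implicit wrappers become available after importing the package object and add Cassandra-specific shortcuts to the standard reader and writer builders. A brief sketch of the cassandraFormat helper, reusing the assumed test.words table:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.cassandra._ // brings the implicit wrappers into scope

  val spark = SparkSession.builder().getOrCreate()

  // Shorthand for .format("org.apache.spark.sql.cassandra") plus keyspace/table options.
  val words = spark.read.cassandraFormat("words", "test").load()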
- class DefaultSource extends TableProvider with DataSourceRegister
A pointer to the DataSourceV2 implementation of the Cassandra source.
CREATE TEMPORARY TABLE tmpTable
USING org.apache.spark.sql.cassandra
OPTIONS (
  table "table",
  keyspace "keyspace",
  cluster "test_cluster",
  pushdown "true",
  spark.cassandra.input.fetch.sizeInRows "10",
  spark.cassandra.output.consistency.level "ONE",
  spark.cassandra.connection.timeoutMS "1000"
)
- sealed trait DirectJoinSetting extends AnyRef
- sealed trait DseSearchOptimizationSetting extends AnyRef
- class NullableUnresolvedAttribute extends UnresolvedAttribute
- trait PredicateOps[Predicate] extends AnyRef
A unified API for predicates, used by BasicCassandraPredicatePushDown. Keeps all the Spark-specific code out of BasicCassandraPredicatePushDown and makes it easy to plug in custom predicate implementations for unit testing.
- class SolrPredicateRules extends CassandraPredicateRules with Logging
Value Members
- val CassandraFormat: String
The DataFrame format name used to access Cassandra through the Connector.
- def cassandraOptions(table: String, keyspace: String, cluster: String = ..., pushdownEnable: Boolean = true): Map[String, String]
Returns a map of options that configures the path to the Cassandra table and whether pushdown is enabled.
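A minimal sketch combining CassandraFormat and cassandraOptions (the cluster argument is left at its elided default):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.cassandra._

  val spark = SparkSession.builder().getOrCreate()

  // Build the option map and hand it to the standard reader.
  val words = spark.read
    .format(CassandraFormat)
    .options(cassandraOptions(table = "words", keyspace = "test"))
    .load()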
- def ttl(column: String): Column
- def ttl(column: Column): Column
- def writeTime(column: String): Column
- def writeTime(column: Column): Column
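The ttl and writeTime helpers surface Cassandra's per-cell metadata as ordinary columns. A hedged sketch, assuming the words table has a regular column text:

  import org.apache.spark.sql.cassandra._

  // `words` as read in the earlier sketches; `text` is an assumed regular column.
  words
    .select(ttl("text"), writeTime("text"))
    .show()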
- object AlwaysOff extends DirectJoinSetting with Product with Serializable
- object AlwaysOn extends DirectJoinSetting with Product with Serializable
- object Automatic extends DirectJoinSetting with Product with Serializable
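AlwaysOn, AlwaysOff, and Automatic control whether a join against a Cassandra table is rewritten as a direct join (point lookups by partition key) rather than a full scan. A hedged sketch using the directJoin dataset extension; keysDf and the word column are assumptions for illustration:

  import org.apache.spark.sql.cassandra._

  // `keysDf` is an assumed DataFrame of keys; `words` reads from Cassandra.
  val joined = keysDf.join(
    words.directJoin(AlwaysOn), // force the direct-join strategy
    keysDf("word") === words("word")
  )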
- object CassandraMetaDataRule extends Rule[LogicalPlan]
- object CassandraMetadataFunction
- object CassandraSQLContextParams
- object CassandraSQLRow extends Serializable
- object CassandraSourceRelation extends Logging
- object DataTypeConverter extends Logging
Converts Cassandra data types to Catalyst data types.
- object DefaultSource
- object DsePredicateRules extends CassandraPredicateRules with Logging
A series of pushdown rules that only apply when connecting to DataStax Enterprise.
- object InClausePredicateRules extends CassandraPredicateRules with Logging
- object Off extends DseSearchOptimizationSetting with Product with Serializable
- object On extends DseSearchOptimizationSetting with Product with Serializable
- object PredicateOps
Provides PredicateOps adapters for Expression and Filter classes.
- object SolrConstants
- object TimeUUIDPredicateRules extends CassandraPredicateRules with Logging
All non-equality predicates on a TimeUUID column will fail, and fail silently. When Cassandra compares TimeUUID values it compares the time portion of the UUID, but when Spark executes such a filter itself (as an unhandled predicate) it compares the values lexically, so rows are incorrectly filtered out of the result set. As long as the range predicate is handled entirely by the connector, the correct result is obtained.
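A self-contained sketch of why the two orderings disagree, using two hand-crafted version-1 UUIDs (the specific values are assumptions for illustration): the embedded timestamp order, which Cassandra uses, is the reverse of the lexical order an unhandled Spark filter would apply.

  import java.util.UUID

  object TimeUuidOrderingDemo extends App {
    // Both are version-1 (time-based) UUIDs. The LOW bits of the timestamp
    // appear first in the string form, so lexical order can disagree with time
    // order: a's timestamp (0xffffffff) is smaller than b's (0x100000000),
    // yet a sorts after b as a string.
    val a = UUID.fromString("ffffffff-0000-1000-8000-000000000000")
    val b = UUID.fromString("00000000-0001-1000-8000-000000000000")

    println(a.timestamp() < b.timestamp()) // true  -- time order (Cassandra)
    println(a.toString < b.toString)       // false -- lexical order (Spark fallback)
  }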