Asynchronously executes tasks but blocks if the limit of unfinished tasks is reached.
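For example, a minimal sketch of this bounded-asynchrony idea (not the connector's actual AsyncExecutor API; the class and method names below are hypothetical) could use a Semaphore to cap the number of unfinished tasks:
{{{
import java.util.concurrent.Semaphore
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

// Hypothetical sketch: submit() blocks once `maxUnfinished` tasks are in flight;
// a permit is released when a task completes.
class BoundedAsyncExecutor(maxUnfinished: Int)(implicit ec: ExecutionContext) {
  private val permits = new Semaphore(maxUnfinished)

  def submit[T](task: () => T): Future[T] = {
    permits.acquire()                          // blocks when the limit of unfinished tasks is reached
    val future = Future(task())
    future.onComplete(_ => permits.release())  // free a slot when the task finishes
    future
  }
}

object BoundedAsyncExecutorDemo extends App {
  import ExecutionContext.Implicits.global
  val executor = new BoundedAsyncExecutor(maxUnfinished = 8)
  val results = (1 to 32).map(i => executor.submit(() => i * i))
  results.foreach(Await.ready(_, 10.seconds))
}
}}}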
A RowWriter that can write CassandraRow objects.
A RowWriter suitable for saving objects mappable by a ColumnMapper. Can save case class objects, Java beans and tuples.
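For instance, a hedged usage sketch (the keyspace, table and schema below are hypothetical): a case class whose fields are mapped to columns by the default ColumnMapper can be saved directly, with the implicit RowWriterFactory supplying the RowWriter:
{{{
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

case class WordCount(word: String, count: Long)

object SaveCaseClassExample extends App {
  val sc = new SparkContext(
    new SparkConf()
      .setAppName("save-case-class")
      .set("spark.cassandra.connection.host", "127.0.0.1"))

  val rows = sc.parallelize(Seq(WordCount("spark", 10L), WordCount("cassandra", 7L)))

  // Assumes a table such as: CREATE TABLE test.words (word text PRIMARY KEY, count bigint)
  rows.saveToCassandra("test", "words")
}
}}}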
Provides a low-priority implicit RowWriterFactory able to write objects of any class for which a ColumnMapper is defined.
A leaking bucket rate limiter. It can be used to limit the rate of anything, but it is typically used to limit the rate of data transfer. It starts with an empty bucket. When packets arrive, they are added to the bucket. The bucket has a constant size and leaks at a constant rate. If the bucket overflows, the calling thread is delayed by an amount of time proportional to the amount of the overflow. This class is thread-safe and lockless.
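A minimal sketch of the leaking-bucket algorithm described above (simplified: this version takes a lock for clarity, whereas the connector's class is lockless; all names are illustrative):
{{{
class LeakyBucketLimiter(ratePerSecond: Long, bucketSize: Long) {
  private var level = 0L                                 // current bucket fill, in arbitrary units
  private var lastLeakTime = System.currentTimeMillis()

  /** Adds `packetSize` units to the bucket and sleeps if the bucket overflows. */
  def maybeSleep(packetSize: Long): Unit = {
    val overflow = synchronized {
      val now = System.currentTimeMillis()
      // leak what has drained since the last call, then add the new packet
      level = math.max(0L, level - (now - lastLeakTime) * ratePerSecond / 1000) + packetSize
      lastLeakTime = now
      level - bucketSize
    }
    // delay the calling thread proportionally to the amount of the overflow
    if (overflow > 0) Thread.sleep(overflow * 1000 / ratePerSecond)
  }
}
}}}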
A utility class for determining the replica set (IP addresses) of a particular Cassandra row. Used by the com.datastax.spark.connector.RDDFunctions.keyByCassandraReplica method. Uses the Java Driver to obtain replica information.
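A hedged usage sketch of the keyByCassandraReplica method mentioned above (the keyspace, table and case class are hypothetical); each RDD element is keyed by the addresses of the replicas owning its partition key:
{{{
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

case class CustomerId(id: Int)

object KeyByReplicaExample extends App {
  val sc = new SparkContext(
    new SparkConf()
      .setAppName("key-by-replica")
      .set("spark.cassandra.connection.host", "127.0.0.1"))

  val ids = sc.parallelize(1 to 100).map(i => CustomerId(i))

  // Assumes a table test.customers whose partition key column is `id`.
  val byReplica = ids.keyByCassandraReplica("test", "customers")
  byReplica.take(5).foreach(println)
}
}}}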
RowWriter knows how to extract column names and values from custom row objects and how to convert them to values that can be written to Cassandra. RowWriter is required to apply any user-defined data type conversion.
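A hedged sketch of a hand-written RowWriter (the exact abstract members of the trait differ between connector versions; a columnNames / readColumnValues shape is assumed here, and the Point class and column names are hypothetical):
{{{
import com.datastax.spark.connector.writer.RowWriter

case class Point(x: Double, y: Double)

class PointRowWriter extends RowWriter[Point] {
  // column names, in the order their values are placed into the buffer
  override val columnNames: Seq[String] = Seq("x", "y")

  // copy values from the row object into the reusable buffer
  override def readColumnValues(point: Point, buffer: Array[Any]): Unit = {
    buffer(0) = point.x
    buffer(1) = point.y
  }
}
}}}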
Creates instances of RowWriter objects for the given row type T. RowWriterFactory is the trait you need to implement if you want to support row representations which cannot be simply mapped by a ColumnMapper.
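A hedged, self-contained sketch for a hypothetical Point class (the rowWriter method's parameter list varies between connector versions, so treat this signature as an assumption to adapt to the version in use); keeping the factory in implicit scope lets saveToCassandra pick it up instead of the ColumnMapper-based default:
{{{
import com.datastax.spark.connector.cql.TableDef
import com.datastax.spark.connector.writer.{RowWriter, RowWriterFactory}

case class Point(x: Double, y: Double)

object PointWriterSupport {
  implicit object PointRowWriterFactory extends RowWriterFactory[Point] {
    override def rowWriter(table: TableDef, selectedColumns: Seq[String]): RowWriter[Point] =
      new RowWriter[Point] {
        override val columnNames: Seq[String] = selectedColumns
        override def readColumnValues(point: Point, buffer: Array[Any]): Unit = {
          buffer(0) = point.x
          buffer(1) = point.y
        }
      }
  }
}
}}}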
A RowWriter that can write SparkSQL Row objects.
Writes RDD data into a given Cassandra table. Individual column values are extracted from RDD objects using the given RowWriter. Then the data is inserted into Cassandra in batches of CQL INSERT statements. Each RDD partition is processed by a single thread.
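In practice this writer is typically driven through saveToCassandra; a hedged usage sketch (keyspace, table and schema are hypothetical):
{{{
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

object SaveTuplesExample extends App {
  val sc = new SparkContext(
    new SparkConf()
      .setAppName("save-tuples")
      .set("spark.cassandra.connection.host", "127.0.0.1"))

  val kv = sc.parallelize(Seq(("key1", 1), ("key2", 2), ("key3", 3)))

  // Assumes a table such as: CREATE TABLE test.kv (key text PRIMARY KEY, value int)
  // Each Spark partition is written by a single thread using batched CQL INSERTs.
  kv.saveToCassandra("test", "kv", SomeColumns("key", "value"))
}
}}}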
Write settings for saving an RDD to Cassandra; the individual settings are listed below, and a usage sketch follows the list.
the approximate number of bytes, or the exact number of rows, to be written in a single batch
the number of distinct batches that can be buffered before they are written to Cassandra
determines which rows can be grouped into a single batch
consistency level for writes, default LOCAL_QUORUM
whether a row should be inserted only if it does not already exist
the number of batches to be written in parallel
the default TTL value (in seconds), used when it is defined
the default timestamp value (in microseconds), used when it is defined
whether or not to enable task metrics updates (requires Spark 1.2+)
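As referenced above, a hedged sketch of overriding a few of these settings (parameter names follow the WriteConf case class and may differ slightly between connector versions; keyspace and table are hypothetical):
{{{
import com.datastax.driver.core.ConsistencyLevel
import com.datastax.spark.connector._
import com.datastax.spark.connector.writer.WriteConf
import org.apache.spark.{SparkConf, SparkContext}

object WriteConfExample extends App {
  val sc = new SparkContext(
    new SparkConf()
      .setAppName("write-conf")
      .set("spark.cassandra.connection.host", "127.0.0.1"))

  val writeConf = WriteConf(
    consistencyLevel = ConsistencyLevel.LOCAL_ONE,  // override the LOCAL_QUORUM default
    parallelismLevel = 8                            // number of batches written in parallel
  )

  val kv = sc.parallelize(Seq(("key1", 1), ("key2", 2)))
  kv.saveToCassandra("test", "kv", SomeColumns("key", "value"), writeConf)
}
}}}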
Estimates the amount of memory required to serialize Java/Scala objects.
Helper methods for mapping a set of data items to their relative locations in a Cassandra cluster.
Provides an implicit RowWriterFactory for saving CassandraRow objects.
Contains components for writing RDDs to Cassandra.