Confluent.Kafka
Implements an Apache Kafka admin client.
Initialize a new AdminClient instance.
An underlying librdkafka client handle that the AdminClient will use to
make broker requests. It is valid to provide either a Consumer, Producer
or AdminClient handle.
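A minimal sketch of building an admin client on top of an existing client's handle, assuming the DependentAdminClientBuilder helper and a placeholder broker address:

    // Reuse the producer's underlying librdkafka handle for admin requests.
    var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" }; // placeholder
    using var producer = new ProducerBuilder<Null, string>(producerConfig).Build();
    using var adminClient = new DependentAdminClientBuilder(producer.Handle).Build();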
An opaque reference to the underlying librdkafka
client instance.
Releases all resources used by this AdminClient. In the current
implementation, this method may block for up to 100ms. This
will be replaced with a non-blocking version in the future.
Releases the unmanaged resources used by the AdminClient
and optionally disposes the managed resources.
true to release both managed and unmanaged resources;
false to release only unmanaged resources.
A builder for AdminClient instances.
The config dictionary.
The configured error handler.
The configured log handler.
The configured statistics handler.
The configured OAuthBearer Token Refresh handler.
Initialize a new instance.
A collection of librdkafka configuration parameters
(refer to https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)
and parameters specific to this client (refer to ConfigPropertyNames).
At a minimum, 'bootstrap.servers' must be specified.
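For example, a minimal sketch of constructing an admin client ("localhost:9092" is a placeholder):

    using Confluent.Kafka;
    using Confluent.Kafka.Admin;

    var adminConfig = new AdminClientConfig { BootstrapServers = "localhost:9092" };
    using var adminClient = new AdminClientBuilder(adminConfig).Build();
    // adminClient (IAdminClient) can now issue CreateTopics/AlterConfigs/etc. requests.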
Set the handler to call on statistics events. Statistics are provided
as a JSON formatted string as defined here:
https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md
You can enable statistics by setting the statistics interval
using the statistics.interval.ms configuration parameter
(disabled by default).
Executes on the poll thread (a background thread managed by
the admin client).
Set the handler to call on error events e.g. connection failures or all
brokers down. Note that the client will try to automatically recover from
errors that are not marked as fatal. Non-fatal errors should be interpreted
as informational rather than catastrophic.
Executes on the poll thread (a background thread managed by the admin
client).
Set the handler to call when there is information available
to be logged. If not specified, a default callback that writes
to stderr will be used.
By default not many log messages are generated.
For more verbose logging, specify one or more debug contexts
using the 'debug' configuration property.
Warning: Log handlers are called spontaneously from internal
librdkafka threads and the application must not call any
Confluent.Kafka APIs from within a log handler or perform any
prolonged operations.
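A minimal sketch of wiring up these handlers (config values are placeholders; statistics are only emitted when a statistics interval is configured):

    var builder = new AdminClientBuilder(new AdminClientConfig
        {
            BootstrapServers = "localhost:9092",   // placeholder
            StatisticsIntervalMs = 60000           // emit statistics once a minute
        })
        .SetStatisticsHandler((client, json) => Console.WriteLine($"stats: {json}"))
        .SetErrorHandler((client, error) => Console.WriteLine($"error: {error.Reason}"))
        .SetLogHandler((client, msg) => Console.WriteLine($"{msg.Level}: {msg.Message}"));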
Set SASL/OAUTHBEARER token refresh callback in provided
conf object. The SASL/OAUTHBEARER token refresh callback
is triggered via the admin client's admin methods
(or any of its overloads) whenever OAUTHBEARER is the SASL
mechanism and a token needs to be retrieved, typically
based on the configuration defined in
sasl.oauthbearer.config. The callback should invoke
OAuthBearerSetToken or OAuthBearerSetTokenFailure
to indicate success or failure, respectively.
An unsecured JWT refresh handler is provided by librdkafka
for development and testing purposes, it is enabled by
setting the enable.sasl.oauthbearer.unsecure.jwt property
to true and is mutually exclusive to using a refresh callback.
the callback to set; callback function arguments:
IProducer - the admin client's inner producer instance, which should be
used to set the token or token failure; string - the value of the
configuration property sasl.oauthbearer.config
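A minimal sketch of a token refresh handler, assuming `builder` is the AdminClientBuilder from the previous sketch and that the token itself comes from an external identity provider (the token value and principal below are placeholders):

    builder.SetOAuthBearerTokenRefreshHandler((client, oauthConfig) =>
    {
        try
        {
            string token = "<jwt acquired from your identity provider>";   // placeholder
            long expiresAtMs = DateTimeOffset.UtcNow.AddMinutes(10).ToUnixTimeMilliseconds();
            client.OAuthBearerSetToken(token, expiresAtMs, "kafka-principal");   // success
        }
        catch (Exception e)
        {
            client.OAuthBearerSetTokenFailure(e.Message);   // failure
        }
    });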
Build the AdminClient instance.
Represents an error that occurred during an alter configs request.
Initializes a new instance of AlterConfigsException.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the AlterConfigs method.
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
If true, the request should be validated only without altering
the configs.
Default: false
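A minimal sketch of altering a topic config, assuming an IAdminClient instance named adminClient and the Confluent.Kafka.Admin namespace ("my-topic" is a placeholder):

    var resource = new ConfigResource { Type = ResourceType.Topic, Name = "my-topic" };
    var updates = new Dictionary<ConfigResource, List<ConfigEntry>>
    {
        { resource, new List<ConfigEntry> { new ConfigEntry { Name = "cleanup.policy", Value = "compact" } } }
    };
    // ValidateOnly = true checks the request on the broker without applying it.
    await adminClient.AlterConfigsAsync(updates, new AlterConfigsOptions { ValidateOnly = true });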
The result of an alter config request for a specific resource.
The resource the result corresponds to.
The error (or success) of the alter config request.
Encapsulates a config property name / value pair.
The config name.
The config value.
A config property entry, as reported by the Kafka admin api.
Whether or not the config value is the default or was
explicitly set.
Whether or not the config is read-only (cannot be updated).
Whether or not the config value is sensitive. The value
for sensitive configuration values is always returned
as null.
The config name.
The config value.
The config source. Refer to the ConfigSource enum for more information.
All config values that may be used as the value of this
config along with their source, in the order of precedence.
A class representing resources that have configs.
The resource type (required)
The resource name (required)
Tests whether this ConfigResource instance is equal to the specified object.
The object to test.
true if obj is a ConfigResource and all properties are equal. false otherwise.
Returns a hash code for this ConfigResource.
An integer that specifies a hash value for this ConfigResource.
Tests whether ConfigResource instance a is equal to ConfigResource instance b.
The first ConfigResource instance to compare.
The second ConfigResource instance to compare.
true if ConfigResource instances a and b are equal. false otherwise.
Tests whether ConfigResource instance a is not equal to ConfigResource instance b.
The first ConfigResource instance to compare.
The second ConfigResource instance to compare.
true if ConfigResource instances a and b are not equal. false otherwise.
Returns a string representation of the ConfigResource object.
A string representation of the ConfigResource object.
Enumerates the different config sources.
Unknown
Dynamic Topic
Dynamic Broker
Dynamic Default Broker
Static
Default
Describes a synonym of a config entry.
The config name.
The config value.
The config source. Refer to the ConfigSource enum for more information.
Represents an error that occurred during a create partitions request.
Initialize a new instance of CreatePartitionsException.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the CreatePartitions method.
If true, the request should be validated only without creating the partitions.
Default: false
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
The broker's operation timeout - the maximum time to wait for
CreatePartitions before returning a result to the application.
If set to null, will return immediately upon triggering partition
creation.
Default: null
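A minimal sketch of increasing a topic's partition count (topic name and counts are placeholders; adminClient is an assumed IAdminClient instance):

    await adminClient.CreatePartitionsAsync(
        new List<PartitionsSpecification>
        {
            new PartitionsSpecification { Topic = "my-topic", IncreaseTo = 12 }
        },
        new CreatePartitionsOptions { OperationTimeout = TimeSpan.FromSeconds(30) });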
The result of a create partitions request for a specific topic.
The topic.
The error (or success) of the create partitions request.
The result of a request to create a specific topic.
The topic name.
The error (or success) of the create topic request.
Represents an error that occurred during a create topics request.
Initialize a new instance of CreateTopicsException.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the CreateTopics method.
If true, the request should be validated on the broker only
without creating the topic.
Default: false
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
The broker's operation timeout - the maximum time to wait for
CreateTopics before returning a result to the application.
If set to null, will return immediately upon triggering topic
creation.
Default: null
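A minimal sketch of creating a topic (name, partition count and replication factor are placeholders; adminClient is an assumed IAdminClient instance):

    await adminClient.CreateTopicsAsync(new[]
    {
        new TopicSpecification { Name = "my-new-topic", NumPartitions = 3, ReplicationFactor = 1 }
    });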
Represents an error that occurred during a delete records request.
Initializes a new DeleteRecordsException.
The result corresponding to all topic partitions in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all topic partitions in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the DeleteRecordsAsync method.
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
The broker's operation timeout - the maximum time to wait for
DeleteRecordsAsync before returning a result to the application.
If set to null, will return immediately upon triggering record
deletion.
Default: null
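A minimal sketch of deleting records up to a given offset (topic, partition and offset are placeholders; adminClient is an assumed IAdminClient instance):

    var deleteResults = await adminClient.DeleteRecordsAsync(new[]
    {
        // Delete all records before offset 42 in partition 0 of "my-topic".
        new TopicPartitionOffset("my-topic", 0, 42)
    });
    // Each result contains the post-deletion low-watermark offset for its partition.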
The per-partition result of delete records request
(including error status).
The topic name.
The partition.
Post-deletion low-watermark (smallest available offset of all
live replicas).
Per-partition error status.
The per-partition result of delete records request.
The topic name.
The partition.
Post-deletion low-watermark offset (smallest available offset of all
live replicas).
The result of a request to delete a specific topic.
The topic.
The error (or success) of the delete topic request.
Represents an error that occurred during a delete topics request.
Initializes a new DeleteTopicsException.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all topics in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the DeleteTopics method.
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
The broker's operation timeout - the maximum time to wait for
DeleteTopics before returning a result to the application.
If set to null, will return immediately upon triggering topic
deletion.
Default: null
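A minimal sketch of deleting topics (the topic name is a placeholder; adminClient is an assumed IAdminClient instance):

    await adminClient.DeleteTopicsAsync(
        new[] { "my-old-topic" },
        new DeleteTopicsOptions { OperationTimeout = TimeSpan.FromSeconds(30) });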
Represents an error that occurred during a describe configs request.
Initializes a new instance of DescribeConfigsException.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
Options for the DescribeConfigs method.
The overall request timeout, including broker lookup, request
transmission, operation time on broker, and response. If set
to null, the default request timeout for the AdminClient will
be used.
Default: null
The result of a request to describe the configs of a specific resource.
The resource associated with the describe configs request.
Configuration entries for the specified resource.
The error (or success) of the describe config request.
The result of a request to describe the configs of a specific resource.
The resource associated with the describe configs request.
Configuration entries for the specified resource.
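A minimal sketch of describing a topic's configuration (the topic name is a placeholder; adminClient is an assumed IAdminClient instance):

    var describeResults = await adminClient.DescribeConfigsAsync(new[]
    {
        new ConfigResource { Type = ResourceType.Topic, Name = "my-topic" }
    });
    foreach (var entry in describeResults[0].Entries)
    {
        // Sensitive values are returned as null.
        Console.WriteLine($"{entry.Key} = {entry.Value.Value} (default: {entry.Value.IsDefault})");
    }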
Specification for new partitions to be added to a topic.
The topic that the new partitions specification corresponds to.
The replica assignments for the new partitions, or null if the assignment
will be done by the controller. The outer list is indexed by the new
partition's relative index, and the inner list contains the broker ids.
The partition count for the specified topic is increased to this value.
Enumerates the set of configuration resource types.
Unknown resource
Any resource
Topic resource
Group resource
Broker resource
Specification of a new topic to be created via the CreateTopics
method. This class is used for the same purpose as NewTopic in
the Java API.
The configuration to use to create the new topic.
The name of the topic to be created (required).
The number of partitions for the new topic or -1 (the default) if a
replica assignment is specified.
A map from partition id to replica ids (i.e., static broker ids) or null
if the number of partitions and replication factor are specified
instead.
The replication factor for the new topic or -1 (the default) if a
replica assignment is specified instead.
Metadata pertaining to a single Kafka broker.
Initializes a new BrokerMetadata class instance.
The Kafka broker id.
The Kafka broker hostname.
The Kafka broker port.
Gets the Kafka broker id.
Gets the Kafka broker hostname.
Gets the Kafka broker port.
Returns a JSON representation of the BrokerMetadata object.
A JSON representation of the BrokerMetadata object.
IClient extension methods
Set SASL/OAUTHBEARER token and metadata.
The SASL/OAUTHBEARER token refresh callback or
event handler should invoke this method upon
success. The extension keys must not include
the reserved key "`auth`", and all extension
keys and values must conform to the required
format as per https://tools.ietf.org/html/rfc7628#section-3.1:
the IClient instance.
the mandatory token value to set, often (but
not necessarily) a JWS compact serialization
as per https://tools.ietf.org/html/rfc7515#section-3.1.
when the token expires, in terms of the number
of milliseconds since the epoch.
the mandatory Kafka principal name associated
with the token.
optional SASL extensions dictionary, to be
communicated to the broker as additional key-value
pairs during the initial client response as per
https://tools.ietf.org/html/rfc7628#section-3.1.
SASL/OAUTHBEARER token refresh failure indicator.
The SASL/OAUTHBEARER token refresh callback or
event handler should invoke this method upon failure.
the IClient instance.
mandatory human readable error reason for failing
to acquire a token.
Encapsulates information provided to a Consumer's OnOffsetsCommitted
event - per-partition offsets and success/error together with overall
success/error of the commit operation.
Possible error conditions:
- Entire request failed: Error is set, but not per-partition errors.
- All partitions failed: Error is set to the value of the last failed partition, but each partition may have different errors.
- Some partitions failed: global error is success.
Initializes a new instance of CommittedOffsets.
per-partition offsets and success/error.
overall operation success/error.
Gets the overall operation success/error.
Gets the per-partition offsets and success/error.
Base functionality common to all configuration classes.
Initialize a new empty instance.
Initialize a new instance based on
an existing instance.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Initialize a new instance wrapping
an existing key/value dictionary.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Set a configuration property using a string key / value pair.
Two scenarios where this is useful: 1. For setting librdkafka
plugin config properties. 2. You are using a different version of
librdkafka from the one provided as a dependency of the Confluent.Kafka
package and the configuration properties have evolved.
The configuration property name.
The property value.
Gets a configuration property value given a key. Returns null if
the property has not been set.
The configuration property to get.
The configuration property value.
Gets a configuration property int? value given a key.
The configuration property to get.
The configuration property value.
Gets a configuration property bool? value given a key.
The configuration property to get.
The configuration property value.
Gets a configuration property double? value given a key.
The configuration property to get.
The configuration property value.
Gets a configuration property enum value given a key.
The configuration property to get.
The enum type of the configuration property.
The configuration property value.
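A minimal sketch of setting and reading arbitrary string-keyed properties (the plugin property below is a placeholder):

    var clientConfig = new ClientConfig();
    clientConfig.Set("bootstrap.servers", "localhost:9092");            // equivalent to setting BootstrapServers
    clientConfig.Set("plugin.library.paths", "monitoring-interceptor"); // placeholder plugin property
    string servers = clientConfig.Get("bootstrap.servers");             // "localhost:9092"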
Set a configuration property using a key / value pair (null checked).
The configuration properties.
Returns an enumerator that iterates through the property collection.
An enumerator that iterates through the property collection.
Returns an enumerator that iterates through the property collection.
An enumerator that iterates through the property collection.
The maximum length of time (in milliseconds) before a cancellation request
is acted on. Low values may result in measurably higher CPU usage.
default: 100
range: 1 <= dotnet.cancellation.delay.max.ms <= 10000
importance: low
Names of all configuration properties specific to the
.NET Client.
Producer specific configuration properties.
Specifies whether or not the producer should start a background poll
thread to receive delivery reports and event notifications. Generally,
this should be set to true. If set to false, you will need to call
the Poll function manually.
default: true
Specifies whether to enable notification of delivery reports. Typically
you should set this parameter to true. Set it to false for "fire and
forget" semantics and a small boost in performance.
default: true
A comma separated list of fields that may be optionally set in delivery
reports. Disabling delivery report fields that you do not require will
improve maximum throughput and reduce memory usage. Allowed values:
key, value, timestamp, headers, status, all, none.
default: all
Consumer specific configuration properties.
A comma separated list of fields that may be optionally set
in ConsumeResult objects returned by the Consume
method. Disabling fields that you do not require will improve
throughput and reduce memory consumption. Allowed values:
headers, timestamp, topic, all, none
default: all
The maximum length of time (in milliseconds) before a cancellation request
is acted on. Low values may result in measurably higher CPU usage.
default: 100
range: 1 <= dotnet.cancellation.delay.max.ms <= 10000
Partitioner enum values
Random
Consistent
ConsistentRandom
Murmur2
Murmur2Random
AutoOffsetReset enum values
Latest
Earliest
Error
BrokerAddressFamily enum values
Any
V4
V6
SecurityProtocol enum values
Plaintext
Ssl
SaslPlaintext
SaslSsl
SslEndpointIdentificationAlgorithm enum values
None
Https
PartitionAssignmentStrategy enum values
Range
RoundRobin
CooperativeSticky
IsolationLevel enum values
ReadUncommitted
ReadCommitted
CompressionType enum values
None
Gzip
Snappy
Lz4
Zstd
SaslMechanism enum values
GSSAPI
PLAIN
SCRAM-SHA-256
SCRAM-SHA-512
OAUTHBEARER
Acks enum values
None
Leader
All
Configuration common to all clients
Initialize a new empty instance.
Initialize a new instance wrapping
an existing instance.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Initialize a new instance wrapping
an existing key/value pair collection.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
SASL mechanism to use for authentication. Supported: GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512. **NOTE**: Despite the name, you may not configure more than one mechanism.
This field indicates the number of acknowledgements the leader broker must receive from ISR brokers
before responding to the request: Zero=Broker does not send any response/ack to client, One=The
leader will write the record to its local log but will respond without awaiting full acknowledgement
from all followers. All=Broker will block until message is committed by all in sync replicas (ISRs).
If there are fewer than min.insync.replicas (broker configuration) in the ISR set, the produce request
will fail.
Client identifier.
default: rdkafka
importance: low
Initial list of brokers as a CSV list of broker host or host:port. The application may also use `rd_kafka_brokers_add()` to add brokers during runtime.
default: ''
importance: high
Maximum Kafka protocol request message size. Due to differing framing overhead between protocol versions the producer is unable to reliably enforce a strict max message limit at produce time and may exceed the maximum size by one message in protocol ProduceRequests; the broker will enforce the topic's `max.message.bytes` limit (see Apache Kafka documentation).
default: 1000000
importance: medium
Maximum size for message to be copied to buffer. Messages larger than this will be passed by reference (zero-copy) at the expense of larger iovecs.
default: 65535
importance: low
Maximum Kafka protocol response message size. This serves as a safety precaution to avoid memory exhaustion in case of protocol hiccups. This value must be at least `fetch.max.bytes` + 512 to allow for protocol overhead; the value is adjusted automatically unless the configuration property is explicitly set.
default: 100000000
importance: medium
Maximum number of in-flight requests per broker connection. This is a generic property applied to all broker communication; however, it is primarily relevant to produce requests. In particular, note that other mechanisms limit the number of outstanding consumer fetch requests per broker to one.
default: 1000000
importance: low
Period of time in milliseconds at which topic and broker metadata is refreshed in order to proactively discover any new brokers, topics, partitions or partition leader changes. Use -1 to disable the intervalled refresh (not recommended). If there are no locally referenced topics (no topic objects created, no messages produced, no subscription or no assignment) then only the broker list will be refreshed every interval but no more often than every 10s.
default: 300000
importance: low
Metadata cache max age. Defaults to topic.metadata.refresh.interval.ms * 3
default: 900000
importance: low
When a topic loses its leader a new metadata request will be enqueued with this initial interval, exponentially increasing until the topic metadata has been refreshed. This is used to recover quickly from transitioning leader brokers.
default: 250
importance: low
Sparse metadata requests (consumes less network bandwidth)
default: true
importance: low
Apache Kafka topic creation is asynchronous and it takes some time for a new topic to propagate throughout the cluster to all brokers. If a client requests topic metadata after manual topic creation but before the topic has been fully propagated to the broker the client is requesting metadata from, the topic will seem to be non-existent and the client will mark the topic as such, failing queued produced messages with `ERR__UNKNOWN_TOPIC`. This setting delays marking a topic as non-existent until the configured propagation max time has passed. The maximum propagation time is calculated from the time the topic is first referenced in the client, e.g., on produce().
default: 30000
importance: low
Topic blacklist, a comma-separated list of regular expressions for matching topic names that should be ignored in broker metadata information as if the topics did not exist.
default: ''
importance: low
A comma-separated list of debug contexts to enable. Detailed Producer debugging: broker,topic,msg. Consumer: consumer,cgrp,topic,fetch
default: ''
importance: medium
Default timeout for network requests. Producer: ProduceRequests will use the lesser value of `socket.timeout.ms` and remaining `message.timeout.ms` for the first message in the batch. Consumer: FetchRequests will use `fetch.wait.max.ms` + `socket.timeout.ms`. Admin: Admin requests will use `socket.timeout.ms` or explicitly set `rd_kafka_AdminOptions_set_operation_timeout()` value.
default: 60000
importance: low
Broker socket send buffer size. System default is used if 0.
default: 0
importance: low
Broker socket receive buffer size. System default is used if 0.
default: 0
importance: low
Enable TCP keep-alives (SO_KEEPALIVE) on broker sockets
default: false
importance: low
Disable the Nagle algorithm (TCP_NODELAY) on broker sockets.
default: false
importance: low
Disconnect from broker when this number of send failures (e.g., timed out requests) is reached. Disable with 0. WARNING: It is highly recommended to leave this setting at its default value of 1 to avoid the client and broker becoming desynchronized in case of request timeouts. NOTE: The connection is automatically re-established.
default: 1
importance: low
How long to cache the broker address resolving results (milliseconds).
default: 1000
importance: low
Allowed broker IP address families: any, v4, v6
default: any
importance: low
Close broker connections after the specified time of inactivity. Disable with 0. If this property is left at its default value some heuristics are performed to determine a suitable default value, this is currently limited to identifying brokers on Azure (see librdkafka issue #3109 for more info).
default: 0
importance: medium
The initial time to wait before reconnecting to a broker after the connection has been closed. The time is increased exponentially until `reconnect.backoff.max.ms` is reached. -25% to +50% jitter is applied to each reconnect backoff. A value of 0 disables the backoff and reconnects immediately.
default: 100
importance: medium
The maximum time to wait before reconnecting to a broker after the connection has been closed.
default: 10000
importance: medium
librdkafka statistics emit interval. The application also needs to register a stats callback using `rd_kafka_conf_set_stats_cb()`. The granularity is 1000ms. A value of 0 disables statistics.
default: 0
importance: high
Disable spontaneous log_cb from internal librdkafka threads, instead enqueue log messages on queue set with `rd_kafka_set_log_queue()` and serve log callbacks or events through the standard poll APIs. **NOTE**: Log messages will linger in a temporary queue until the log queue has been set.
default: false
importance: low
Print internal thread name in log messages (useful for debugging librdkafka internals)
default: true
importance: low
If enabled librdkafka will initialize the PRNG with srand(current_time.milliseconds) on the first invocation of rd_kafka_new() (required only if rand_r() is not available on your platform). If disabled the application must call srand() prior to calling rd_kafka_new().
default: true
importance: low
Log broker disconnects. It might be useful to turn this off when interacting with 0.9 brokers with an aggressive `connection.max.idle.ms` value.
default: true
importance: low
Signal that librdkafka will use to quickly terminate on rd_kafka_destroy(). If this signal is not set then there will be a delay before rd_kafka_wait_destroyed() returns true as internal threads are timing out their system calls. If this signal is set however the delay will be minimal. The application should mask this signal as an internal signal handler is installed.
default: 0
importance: low
Request broker's supported API versions to adjust functionality to available protocol features. If set to false, or the ApiVersionRequest fails, the fallback version `broker.version.fallback` will be used. **NOTE**: Depends on broker version >=0.10.0. If the request is not supported by (an older) broker the `broker.version.fallback` fallback is used.
default: true
importance: high
Timeout for broker API version requests.
default: 10000
importance: low
Dictates how long the `broker.version.fallback` fallback is used in the case the ApiVersionRequest fails. **NOTE**: The ApiVersionRequest is only issued when a new connection to the broker is made (such as after an upgrade).
default: 0
importance: medium
Older broker versions (before 0.10.0) provide no way for a client to query for supported protocol features (ApiVersionRequest, see `api.version.request`) making it impossible for the client to know what features it may use. As a workaround a user may set this property to the expected broker version and the client will automatically adjust its feature set accordingly if the ApiVersionRequest fails (or is disabled). The fallback broker version will be used for `api.version.fallback.ms`. Valid values are: 0.9.0, 0.8.2, 0.8.1, 0.8.0. Any other value >= 0.10, such as 0.10.2.1, enables ApiVersionRequests.
default: 0.10.0
importance: medium
Protocol used to communicate with brokers.
default: plaintext
importance: high
A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. See manual page for `ciphers(1)` and `SSL_CTX_set_cipher_list(3)`.
default: ''
importance: low
The supported-curves extension in the TLS ClientHello message specifies the curves (standard/named, or 'explicit' GF(2^k) or GF(p)) the client is willing to have the server use. See manual page for `SSL_CTX_set1_curves_list(3)`. OpenSSL >= 1.0.2 required.
default: ''
importance: low
The client uses the TLS ClientHello signature_algorithms extension to indicate to the server which signature/hash algorithm pairs may be used in digital signatures. See manual page for `SSL_CTX_set1_sigalgs_list(3)`. OpenSSL >= 1.0.2 required.
default: ''
importance: low
Path to client's private key (PEM) used for authentication.
default: ''
importance: low
Private key passphrase (for use with `ssl.key.location` and `set_ssl_cert()`)
default: ''
importance: low
Client's private key string (PEM format) used for authentication.
default: ''
importance: low
Path to client's public key (PEM) used for authentication.
default: ''
importance: low
Client's public key string (PEM format) used for authentication.
default: ''
importance: low
File or directory path to CA certificate(s) for verifying the broker's key. Defaults: On Windows the system's CA certificates are automatically looked up in the Windows Root certificate store. On Mac OSX this configuration defaults to `probe`. It is recommended to install openssl using Homebrew, to provide CA certificates. On Linux install the distribution's ca-certificates package. If OpenSSL is statically linked or `ssl.ca.location` is set to `probe` a list of standard paths will be probed and the first one found will be used as the default CA certificate location path. If OpenSSL is dynamically linked the OpenSSL library's default path will be used (see `OPENSSLDIR` in `openssl version -a`).
default: ''
importance: low
CA certificate string (PEM format) for verifying the broker's key.
default: ''
importance: low
Comma-separated list of Windows Certificate stores to load CA certificates from. Certificates will be loaded in the same order as stores are specified. If no certificates can be loaded from any of the specified stores an error is logged and the OpenSSL library's default CA location is used instead. Store names are typically one or more of: MY, Root, Trust, CA.
default: Root
importance: low
Path to CRL for verifying broker's certificate validity.
default: ''
importance: low
Path to client's keystore (PKCS#12) used for authentication.
default: ''
importance: low
Client's keystore (PKCS#12) password.
default: ''
importance: low
Path to OpenSSL engine library. OpenSSL >= 1.1.0 required.
default: ''
importance: low
OpenSSL engine id is the name used for loading the engine.
default: dynamic
importance: low
Enable OpenSSL's builtin broker (server) certificate verification. This verification can be extended by the application by implementing a certificate_verify_cb.
default: true
importance: low
Endpoint identification algorithm to validate broker hostname using broker certificate. https - Server (broker) hostname verification as specified in RFC2818. none - No endpoint verification. OpenSSL >= 1.0.2 required.
default: none
importance: low
Kerberos principal name that Kafka runs as, not including /hostname@REALM
default: kafka
importance: low
This client's Kerberos principal name. (Not supported on Windows, will use the logon user's principal).
default: kafkaclient
importance: low
Shell command to refresh or acquire the client's Kerberos ticket. This command is executed on client creation and every sasl.kerberos.min.time.before.relogin (0=disable). %{config.prop.name} is replaced by corresponding config object value.
default: kinit -R -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal} || kinit -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal}
importance: low
Path to Kerberos keytab file. This configuration property is only used as a variable in `sasl.kerberos.kinit.cmd` as ` ... -t "%{sasl.kerberos.keytab}"`.
default: ''
importance: low
Minimum time in milliseconds between key refresh attempts. Disable automatic key refresh by setting this property to 0.
default: 60000
importance: low
SASL username for use with the PLAIN and SASL-SCRAM-.. mechanisms
default: ''
importance: high
SASL password for use with the PLAIN and SASL-SCRAM-.. mechanisms
default: ''
importance: high
SASL/OAUTHBEARER configuration. The format is implementation-dependent and must be parsed accordingly. The default unsecured token implementation (see https://tools.ietf.org/html/rfc7515#appendix-A.5) recognizes space-separated name=value pairs with valid names including principalClaimName, principal, scopeClaimName, scope, and lifeSeconds. The default value for principalClaimName is "sub", the default value for scopeClaimName is "scope", and the default value for lifeSeconds is 3600. The scope value is CSV format with the default value being no/empty scope. For example: `principalClaimName=azp principal=admin scopeClaimName=roles scope=role1,role2 lifeSeconds=600`. In addition, SASL extensions can be communicated to the broker via `extension_NAME=value`. For example: `principal=admin extension_traceId=123`
default: ''
importance: low
Enable the builtin unsecure JWT OAUTHBEARER token handler if no oauthbearer_refresh_cb has been set. This builtin handler should only be used for development or testing, and not in production.
default: false
importance: low
List of plugin libraries to load (; separated). The library search path is platform dependent (see dlopen(3) for Unix and LoadLibrary() for Windows). If no filename extension is specified the platform-specific extension (such as .dll or .so) will be appended automatically.
default: ''
importance: low
A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config `broker.rack`.
default: ''
importance: low
AdminClient configuration properties
Initialize a new empty instance.
Initialize a new instance wrapping
an existing instance.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Initialize a new instance wrapping
an existing key/value pair collection.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Producer configuration properties
Initialize a new empty instance.
Initialize a new instance wrapping
an existing instance.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Initialize a new instance wrapping
an existing key/value pair collection.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Specifies whether or not the producer should start a background poll
thread to receive delivery reports and event notifications. Generally,
this should be set to true. If set to false, you will need to call
the Poll function manually.
default: true
importance: low
Specifies whether to enable notification of delivery reports. Typically
you should set this parameter to true. Set it to false for "fire and
forget" semantics and a small boost in performance.
default: true
importance: low
A comma separated list of fields that may be optionally set in delivery
reports. Disabling delivery report fields that you do not require will
improve maximum throughput and reduce memory usage. Allowed values:
key, value, timestamp, headers, status, all, none.
default: all
importance: low
The ack timeout of the producer request in milliseconds. This value is only enforced by the broker and relies on `request.required.acks` being != 0.
default: 30000
importance: medium
Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time librdkafka may use to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. The message timeout is automatically adjusted to `transaction.timeout.ms` if `transactional.id` is configured.
default: 300000
importance: high
Partitioner: `random` - random distribution, `consistent` - CRC32 hash of key (Empty and NULL keys are mapped to single partition), `consistent_random` - CRC32 hash of key (Empty and NULL keys are randomly partitioned), `murmur2` - Java Producer compatible Murmur2 hash of key (NULL keys are mapped to single partition), `murmur2_random` - Java Producer compatible Murmur2 hash of key (NULL keys are randomly partitioned. This is functionally equivalent to the default partitioner in the Java Producer.), `fnv1a` - FNV-1a hash of key (NULL keys are mapped to single partition), `fnv1a_random` - FNV-1a hash of key (NULL keys are randomly partitioned).
default: consistent_random
importance: high
Compression level parameter for algorithm selected by configuration property `compression.codec`. Higher values will result in better compression at the cost of more CPU usage. Usable range is algorithm-dependent: [0-9] for gzip; [0-12] for lz4; only 0 for snappy; -1 = codec-dependent default compression level.
default: -1
importance: medium
Enables the transactional producer. The transactional.id is used to identify the same transactional producer instance across process restarts. It allows the producer to guarantee that transactions corresponding to earlier instances of the same producer have been finalized prior to starting any new transactions, and that any zombie instances are fenced off. If no transactional.id is provided, then the producer is limited to idempotent delivery (if enable.idempotence is set). Requires broker version >= 0.11.0.
default: ''
importance: high
The maximum amount of time in milliseconds that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the `transaction.max.timeout.ms` setting in the broker, the init_transactions() call will fail with ERR_INVALID_TRANSACTION_TIMEOUT. The transaction timeout automatically adjusts `message.timeout.ms` and `socket.timeout.ms`, unless explicitly configured in which case they must not exceed the transaction timeout (`socket.timeout.ms` must be at least 100ms lower than `transaction.timeout.ms`). This is also the default timeout value if no timeout (-1) is supplied to the transactional API methods.
default: 60000
importance: medium
When set to `true`, the producer will ensure that messages are successfully produced exactly once and in the original produce order. The following configuration properties are adjusted automatically (if not modified by the user) when idempotence is enabled: `max.in.flight.requests.per.connection=5` (must be less than or equal to 5), `retries=INT32_MAX` (must be greater than 0), `acks=all`, `queuing.strategy=fifo`. Producer instantiation will fail if user-supplied configuration is incompatible.
default: false
importance: high
**EXPERIMENTAL**: subject to change or removal. When set to `true`, any error that could result in a gap in the produced message series when a batch of messages fails, will raise a fatal error (ERR__GAPLESS_GUARANTEE) and stop the producer. Messages failing due to `message.timeout.ms` are not covered by this guarantee. Requires `enable.idempotence=true`.
default: false
importance: low
Maximum number of messages allowed on the producer queue. This queue is shared by all topics and partitions.
default: 100000
importance: high
Maximum total message size sum allowed on the producer queue. This queue is shared by all topics and partitions. This property has higher priority than queue.buffering.max.messages.
default: 1048576
importance: high
Delay in milliseconds to wait for messages in the producer queue to accumulate before constructing message batches (MessageSets) to transmit to brokers. A higher value allows larger and more effective (less overhead, improved compression) batches of messages to accumulate at the expense of increased message delivery latency.
default: 5
importance: high
How many times to retry sending a failing Message. **Note:** retrying may cause reordering unless `enable.idempotence` is set to true.
default: 2147483647
importance: high
The backoff time in milliseconds before retrying a protocol request.
default: 100
importance: medium
The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
default: 1
importance: low
compression codec to use for compressing message sets. This is the default value for all topics, may be overridden by the topic configuration property `compression.codec`.
default: none
importance: medium
Maximum number of messages batched in one MessageSet. The total MessageSet size is also limited by batch.size and message.max.bytes.
default: 10000
importance: medium
Maximum size (in bytes) of all messages batched in one MessageSet, including protocol framing overhead. This limit is applied after the first message has been added to the batch, regardless of the first message's size; this is to ensure that messages that exceed batch.size are produced. The total MessageSet size is also limited by batch.num.messages and message.max.bytes.
default: 1000000
importance: medium
Delay in milliseconds to wait to assign new sticky partitions for each topic. By default, set to double the time of linger.ms. To disable sticky behavior, set to 0. This behavior affects messages with the key NULL in all cases, and messages with key lengths of zero when the consistent_random partitioner is in use. These messages would otherwise be assigned randomly. A higher value allows for more effective batching of these messages.
default: 10
importance: low
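A minimal sketch of a producer configuration using a few of the properties above (the broker address is a placeholder):

    var producerConfig = new ProducerConfig
    {
        BootstrapServers = "localhost:9092",   // placeholder
        EnableIdempotence = true,              // acks/retries/max.in.flight adjusted automatically
        LingerMs = 5,                          // batch for up to 5 ms
        CompressionType = CompressionType.Lz4
    };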
Consumer configuration properties
Initialize a new empty instance.
Initialize a new instance wrapping
an existing instance.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
Initialize a new instance wrapping
an existing key/value pair collection.
This will change the values "in-place" i.e. operations on this class WILL modify the provided collection
A comma separated list of fields that may be optionally set
in ConsumeResult objects returned by the Consume
method. Disabling fields that you do not require will improve
throughput and reduce memory consumption. Allowed values:
headers, timestamp, topic, all, none
default: all
importance: low
Action to take when there is no initial offset in offset store or the desired offset is out of range: 'smallest','earliest' - automatically reset the offset to the smallest offset, 'largest','latest' - automatically reset the offset to the largest offset, 'error' - trigger an error (ERR__AUTO_OFFSET_RESET) which is retrieved by consuming messages and checking 'message->err'.
default: largest
importance: high
Client group id string. All clients sharing the same group.id belong to the same group.
default: ''
importance: high
Enable static group membership. Static group members are able to leave and rejoin a group within the configured `session.timeout.ms` without prompting a group rebalance. This should be used in combination with a larger `session.timeout.ms` to avoid group rebalances caused by transient unavailability (e.g. process restarts). Requires broker version >= 2.3.0.
default: ''
importance: medium
The name of one or more partition assignment strategies. The elected group leader will use a strategy supported by all members of the group to assign partitions to group members. If there is more than one eligible strategy, preference is determined by the order of this list (strategies earlier in the list have higher priority). Cooperative and non-cooperative (eager) strategies must not be mixed. Available strategies: range, roundrobin, cooperative-sticky.
default: range,roundrobin
importance: medium
Client group session and failure detection timeout. The consumer sends periodic heartbeats (heartbeat.interval.ms) to indicate its liveness to the broker. If no heartbeats are received by the broker for a group member within the session timeout, the broker will remove the consumer from the group and trigger a rebalance. The allowed range is configured with the **broker** configuration properties `group.min.session.timeout.ms` and `group.max.session.timeout.ms`. Also see `max.poll.interval.ms`.
default: 45000
importance: high
Group session keepalive heartbeat interval.
default: 3000
importance: low
Group protocol type. NOTE: Currently, the only supported group protocol type is `consumer`.
default: consumer
importance: low
How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment.
default: 600000
importance: low
Maximum allowed time between calls to consume messages (e.g., rd_kafka_consumer_poll()) for high-level consumers. If this interval is exceeded the consumer is considered failed and the group will rebalance in order to reassign the partitions to another consumer group member. Warning: Offset commits may not be possible at this point. Note: It is recommended to set `enable.auto.offset.store=false` for long-time processing applications and then explicitly store offsets (using offsets_store()) *after* message processing, to make sure offsets are not auto-committed before processing has finished. The interval is checked two times per second. See KIP-62 for more information.
default: 300000
importance: high
Automatically and periodically commit offsets in the background. Note: setting this to false does not prevent the consumer from fetching previously committed start offsets. To circumvent this behaviour set specific start offsets per partition in the call to assign().
default: true
importance: high
The frequency in milliseconds that the consumer offsets are committed (written) to offset storage. (0 = disable). This setting is used by the high-level consumer.
default: 5000
importance: medium
Automatically store offset of last message provided to application. The offset store is an in-memory store of the next offset to (auto-)commit for each partition.
default: true
importance: high
Minimum number of messages per topic+partition librdkafka tries to maintain in the local consumer queue.
default: 100000
importance: medium
Maximum number of kilobytes of queued pre-fetched messages in the local consumer queue. If using the high-level consumer this setting applies to the single consumer queue, regardless of the number of partitions. When using the legacy simple consumer or when separate partition queues are used this setting applies per partition. This value may be overshot by fetch.message.max.bytes. This property has higher priority than queued.min.messages.
default: 65536
importance: medium
Maximum time the broker may wait to fill the Fetch response with fetch.min.bytes of messages.
default: 500
importance: low
Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched.
default: 1048576
importance: medium
Maximum amount of data the broker shall return for a Fetch request. Messages are fetched in batches by the consumer and if the first message batch in the first non-empty partition of the Fetch request is larger than this value, then the message batch will still be returned to ensure the consumer can make progress. The maximum message batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (broker topic config). `fetch.max.bytes` is automatically adjusted upwards to be at least `message.max.bytes` (consumer config).
default: 52428800
importance: medium
Minimum number of bytes the broker responds with. If fetch.wait.max.ms expires the accumulated data will be sent to the client regardless of this setting.
default: 1
importance: low
How long to postpone the next fetch request for a topic+partition in case of a fetch error.
default: 500
importance: medium
Controls how to read messages written transactionally: `read_committed` - only return transactional messages which have been committed. `read_uncommitted` - return all messages, even transactional messages which have been aborted.
default: read_committed
importance: high
Emit RD_KAFKA_RESP_ERR__PARTITION_EOF event whenever the consumer reaches the end of a partition.
default: false
importance: low
Verify CRC32 of consumed messages, ensuring no on-the-wire or on-disk corruption to the messages occurred. This check comes at slightly increased CPU usage.
default: false
importance: medium
Allow automatic topic creation on the broker when subscribing to or assigning non-existent topics. The broker must also be configured with `auto.create.topics.enable=true` for this configuration to take effect. Note: The default value (false) is different from the Java consumer (true). Requires broker version >= 0.11.0.0, for older broker versions only the broker configuration applies.
default: false
importance: low
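A minimal sketch of a consumer configuration using a few of the properties above (broker address and group id are placeholders):

    var consumerConfig = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",          // placeholder
        GroupId = "example-group",                    // placeholder
        AutoOffsetReset = AutoOffsetReset.Earliest,   // start from the beginning if no committed offset
        EnableAutoCommit = false                      // commit offsets explicitly
    };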
Represents an error that occurred during message consumption.
Initialize a new instance of ConsumeException
An object that provides information known about the consumer
record for which the error occurred.
The error that occurred.
The exception instance that caused this exception.
Initialize a new instance of ConsumeException
An object that provides information known about the consumer
record for which the error occurred.
The error that occurred.
An object that provides information known about the consumer
record for which the error occurred.
Implements a high-level Apache Kafka consumer with
deserialization capability.
Keeps track of whether or not assign has been called during
invocation of a rebalance callback event.
Releases all resources used by this Consumer without
committing offsets and without alerting the group coordinator
that the consumer is exiting the group. If you do not call Close
or Unsubscribe prior to Dispose, the group will rebalance after a timeout
specified by the group's `session.timeout.ms`.
You should commit offsets / unsubscribe from the group before
calling this method (typically by calling Close()).
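A minimal sketch of the recommended shutdown sequence (config and topic are placeholders):

    using (var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build())
    {
        consumer.Subscribe("my-topic");
        // ... consume loop ...
        consumer.Close();   // leave the group cleanly before Dispose runs
    }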
Releases the unmanaged resources used by the Consumer
and optionally disposes the managed resources.
true to release both managed and unmanaged resources;
false to release only unmanaged resources.
A builder class for Consumer instances.
The config dictionary.
The configured error handler.
The configured log handler.
The configured statistics handler.
The configured OAuthBearer Token Refresh handler.
The configured key deserializer.
The configured value deserializer.
The configured partitions assigned handler.
The configured partitions revoked handler.
Whether or not the user configured either PartitionsRevokedHandler or PartitionsLostHandler
as a Func (as opposed to an Action).
The configured partitions lost handler.
The configured offsets committed handler.
Initialize a new ConsumerBuilder instance.
A collection of librdkafka configuration parameters
(refer to https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)
and parameters specific to this client (refer to ConfigPropertyNames).
At a minimum, 'bootstrap.servers' and 'group.id' must be
specified.
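A minimal sketch of building a consumer (config values are placeholders):

    var consumer = new ConsumerBuilder<string, string>(new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",   // placeholder
            GroupId = "example-group"              // placeholder
        })
        .SetValueDeserializer(Deserializers.Utf8)
        .Build();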
Set the handler to call on statistics events. Statistics
are provided as a JSON formatted string as defined here:
https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md
You can enable statistics and set the statistics interval
using the StatisticsIntervalMs configuration property
(disabled by default).
Executes as a side-effect of the Consume method (on the same
thread).
Exceptions: Any exception thrown by your statistics handler
will be wrapped in a ConsumeException with ErrorCode
ErrorCode.Local_Application and thrown by the initiating call
to Consume.
Set the handler to call on error events e.g. connection failures or all
brokers down. Note that the client will try to automatically recover from
errors that are not marked as fatal. Non-fatal errors should be interpreted
as informational rather than catastrophic.
Executes as a side-effect of the Consume method (on the same thread).
Exceptions: Any exception thrown by your error handler will be silently
ignored.
Set the handler to call when there is information available
to be logged. If not specified, a default callback that writes
to stderr will be used.
By default not many log messages are generated.
For more verbose logging, specify one or more debug contexts
using the 'Debug' configuration property.
Warning: Log handlers are called spontaneously from internal
librdkafka threads and the application must not call any
Confluent.Kafka APIs from within a log handler or perform any
prolonged operations.
Exceptions: Any exception thrown by your log handler will be
silently ignored.
Set SASL/OAUTHBEARER token refresh callback in provided
conf object. The SASL/OAUTHBEARER token refresh callback
is triggered via the consumer's Consume method
(or any of its overloads) whenever OAUTHBEARER is the SASL
mechanism and a token needs to be retrieved, typically
based on the configuration defined in
sasl.oauthbearer.config. The callback should invoke
OAuthBearerSetToken or OAuthBearerSetTokenFailure
to indicate success or failure, respectively.
An unsecured JWT refresh handler is provided by librdkafka
for development and testing purposes, it is enabled by
setting the enable.sasl.oauthbearer.unsecure.jwt property
to true and is mutually exclusive to using a refresh callback.
the callback to set; callback function arguments:
IConsumer - the consumer instance, which should be used to
set the token or token failure; string - the value of the
configuration property sasl.oauthbearer.config
Set the deserializer to use to deserialize keys.
If your key deserializer throws an exception, this will be
wrapped in a ConsumeException with ErrorCode
Local_KeyDeserialization and thrown by the initiating call to
Consume.
Set the deserializer to use to deserialize values.
If your value deserializer throws an exception, this will be
wrapped in a ConsumeException with ErrorCode
Local_ValueDeserialization and thrown by the initiating call to
Consume.
Specify a handler that will be called when a new consumer group partition assignment has
been received by this consumer.
The actual partitions to consume from and start offsets are specified by the return value
of the handler. Partition offsets may be a specific offset, or special value (Beginning, End
or Unset). If Unset, consumption will resume from the last committed offset for each
partition, or if there is no committed offset, in accordance with the `auto.offset.reset`
configuration property.
Kafka supports two rebalance protocols: EAGER (range and roundrobin assignors) and
COOPERATIVE (incremental) (cooperative-sticky assignor). Use the PartitionAssignmentStrategy
configuration property to specify which assignor to use.
## EAGER Rebalancing (range, roundrobin)
The set of partitions returned from your handler may differ from that provided by the
group (though they should typically be the same). These partitions are the
entire set of partitions to consume from. There will be exactly one call to the
partitions revoked or partitions lost handler (if they have been set using
SetPartitionsRevokedHandler / SetPartitionsLostHandler) corresponding to every call to
this handler.
## COOPERATIVE (Incremental) Rebalancing
The set of partitions returned from your handler must match that provided by the
group. These partitions are an incremental assignment - they are in addition to those
already being consumed from.
Executes as a side-effect of the Consumer.Consume call (on the same thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions assigned handler will be wrapped
in a ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the
initiating call to Consume.
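As a sketch of this Func variant (EAGER protocol assumed, with the usual Confluent.Kafka and System.Linq using directives), the handler below overrides the start offsets so consumption always begins at the start of each assigned partition:
```csharp
// Sketch: start from the beginning of every assigned partition rather than
// from the committed offsets.
var consumer = new ConsumerBuilder<string, string>(config)
    .SetPartitionsAssignedHandler((c, partitions) =>
        partitions.Select(tp => new TopicPartitionOffset(tp, Offset.Beginning)))
    .Build();
```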
Specify a handler that will be called when a new consumer group partition assignment has
been received by this consumer.
Following execution of the handler, consumption will resume from the last committed offset
for each partition, or if there is no committed offset, in accordance with the
`auto.offset.reset` configuration property.
Kafka supports two rebalance protocols: EAGER (range and roundrobin assignors) and
COOPERATIVE (incremental) (cooperative-sticky assignor). Use the PartitionAssignmentStrategy
configuration property to specify which assignor to use.
## EAGER Rebalancing (range, roundrobin)
Partitions passed to the handler represent the entire set of partitions to consume from.
There will be exactly one call to the partitions revoked or partitions lost handler (if
they have been set using SetPartitionsRevokedHandler / SetPartitionsLostHandler)
corresponding to every call to this handler.
## COOPERATIVE (Incremental) Rebalancing
Partitions passed to the handler are an incremental assignment - they are in
addition to those already being consumed from.
Executes as a side-effect of the Consumer.Consume call (on the same thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions assigned handler will be wrapped
in a ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the
initiating call to Consume.
Specify a handler that will be called immediately prior to the consumer's current assignment
being revoked, allowing the application to take action (e.g. commit offsets to a custom
store) before the consumer gives up ownership of the partitions. The Func partitions revoked
handler variant is not supported in the incremental rebalancing (COOPERATIVE) case.
The value returned from your handler specifies the partitions/offsets the consumer should
be assigned to read from following completion of this method (most typically empty). This
partitions revoked handler variant may not be specified when incremental rebalancing is in use
- in that case, the set of partitions the consumer is reading from may never deviate from
the set that it has been assigned by the group.
The second parameter provided to the handler provides the set of partitions the consumer is
currently assigned to, and the current position of the consumer on each of these partitions.
Executes as a side-effect of the Consumer.Consume/Close/Dispose call (on the same thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions revoked handler will be wrapped in a
ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the initiating call
to Consume/Close.
Specify a handler that will be called immediately prior to partitions being revoked
from the consumer's current assignment, allowing the application to take action
(e.g. commit offsets to a custom store) before the consumer gives up ownership of
the partitions.
Kafka supports two rebalance protocols: EAGER (range and roundrobin assignors) and
COOPERATIVE (incremental) (cooperative-sticky assignor). Use the PartitionAssignmentStrategy
configuration property to specify which assignor to use.
## EAGER Rebalancing (range, roundrobin)
The second parameter provides the entire set of partitions the consumer is currently
assigned to, and the current position of the consumer on each of these partitions.
The consumer will stop consuming from all partitions following execution of this
handler.
## COOPERATIVE (Incremental) Rebalancing
The second parameter provides the subset of the partitions assigned to the consumer
which are being revoked, and the current position of the consumer on each of these
partitions. The consumer will stop consuming from this set of partitions following
execution of this handler, and continue reading from any remaining partitions.
May execute as a side-effect of the Consumer.Consume/Close/Dispose call (on the same
thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions revoked handler will be wrapped
in a ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the
initiating call to Consume.
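A sketch of the Action variant: log the consumer's position on each partition being revoked (a real application might instead persist these offsets to a custom store, as noted above; config is an already-populated ConsumerConfig):
```csharp
var consumer = new ConsumerBuilder<string, string>(config)
    .SetPartitionsRevokedHandler((c, revoked) =>
    {
        foreach (var tpo in revoked)
        {
            // tpo.Offset is the consumer's current position on this partition.
            Console.WriteLine($"revoking {tpo.TopicPartition} at {tpo.Offset}");
        }
    })
    .Build();
```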
Specify a handler that will be called when the consumer detects that it has lost ownership
of its partition assignment (fallen out of the group). The application should not commit
offsets in this case, since the partitions will likely be owned by other consumers in the
group (offset commits to Kafka will likely fail).
The value returned from your handler specifies the partitions/offsets the consumer should
be assigned to read from following completion of this method (most typically empty). This
partitions lost handler variant may not be specified when incremental rebalancing is in use
- in that case, the set of partitions the consumer is reading from may never deviate from
the set that it has been assigned by the group.
The second parameter provided to the handler provides the set of all partitions the consumer
is currently assigned to, and the current position of the consumer on each of these partitions.
Following completion of this handler, the consumer will stop consuming from all partitions.
If this handler is not specified, the partitions revoked handler (if specified) will be called
instead if partitions are lost.
May execute as a side-effect of the Consumer.Consume/Close/Dispose call (on the same
thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions lost handler will be wrapped
in a ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the
initiating call to Consume.
Specify a handler that will be called when the consumer detects that it has lost ownership
of its partition assignment (fallen out of the group). The application should not commit
offsets in this case, since the partitions will likely be owned by other consumers in the
group (offset commits to Kafka will likely fail).
The second parameter provided to the handler provides the set of all partitions the consumer
is currently assigned to, and the current position of the consumer on each of these partitions.
If this handler is not specified, the partitions revoked handler (if specified) will be called
instead if partitions are lost.
May execute as a side-effect of the Consumer.Consume/Close/Dispose call (on the same
thread).
(Incremental)Assign/Unassign must not be called in the handler.
Exceptions: Any exception thrown by your partitions lost handler will be wrapped
in a ConsumeException with ErrorCode ErrorCode.Local_Application and thrown by the
initiating call to Consume.
A handler that is called to report the result of (automatic) offset
commits. It is not called as a result of the use of the Commit method.
Executes as a side-effect of the Consumer.Consume call (on the same thread).
Exceptions: Any exception thrown by your offsets committed handler
will be wrapped in a ConsumeException with ErrorCode
ErrorCode.Local_Application and thrown by the initiating call to Consume/Close.
Build a new IConsumer implementation instance.
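Putting the builder together, a minimal sketch of constructing and polling a consumer (broker address, group id and topic name are example values):
```csharp
var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",      // example broker address
    GroupId = "example-group",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<Ignore, string>(config)
    .SetErrorHandler((c, e) => Console.WriteLine($"error: {e.Reason}"))
    .SetLogHandler((c, m) => Console.WriteLine($"{m.Level} {m.Facility}: {m.Message}"))
    .Build();

consumer.Subscribe("example-topic");
try
{
    while (true)
    {
        var cr = consumer.Consume(TimeSpan.FromSeconds(1));
        if (cr == null) continue;             // poll timed out; no message available
        Console.WriteLine($"{cr.TopicPartitionOffset}: {cr.Message.Value}");
    }
}
finally
{
    consumer.Close();                         // leave the group cleanly
}
```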
Represents a message consumed from a Kafka cluster.
The topic associated with the message.
The partition associated with the message.
The partition offset associated with the message.
The TopicPartition associated with the message.
The TopicPartitionOffset associated with the message.
The Kafka message, or null if this ConsumeResult
instance represents an end of partition event.
The Kafka message Key.
The Kafka message Value.
The Kafka message timestamp.
The Kafka message headers.
True if this instance represents an end of partition
event, false if it represents a message in Kafka.
The consumer group metadata associated with a consumer.
The result of a produce request.
An error (or NoError) associated with the message.
The TopicPartitionOffsetError associated with the message.
Encapsulates the result of a successful produce request.
The topic associated with the message.
The partition associated with the message.
The partition offset associated with the message.
The TopicPartition associated with the message.
The TopicPartitionOffset associated with the message.
The persistence status of the message.
The Kafka message.
The Kafka message Key.
The Kafka message Value.
The Kafka message timestamp.
The Kafka message headers.
A builder class for IAdminClient instance
implementations that leverage an existing client handle.
The configured client handle.
An underlying librdkafka client handle that the AdminClient will use to make broker requests.
Build a new IAdminClient implementation instance.
A builder class for IProducer instance
implementations that leverage an existing client handle.
[API-SUBJECT-TO-CHANGE] - This class may be removed in the future
in favor of an improved API for this functionality.
The configured client handle.
The configured key serializer.
The configured value serializer.
The configured async key serializer.
The configured async value serializer.
An underlying librdkafka client handle that the Producer will use to
make broker requests. The handle must be from another Producer
instance (not Consumer or AdminClient).
The serializer to use to serialize keys.
The serializer to use to serialize values.
The async serializer to use to serialize keys.
The async serializer to use to serialize values.
Build a new IProducer implementation instance.
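A sketch of the dependent builders: both re-use the librdkafka handle of an existing producer so that only one underlying broker connection is maintained (configuration values are examples):
```csharp
var mainProducer = new ProducerBuilder<string, string>(
    new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

// Producer with different key/value types, sharing the same underlying client.
var bytesProducer = new DependentProducerBuilder<Null, byte[]>(mainProducer.Handle).Build();

// Admin client sharing the same underlying client.
var admin = new DependentAdminClientBuilder(mainProducer.Handle).Build();
```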
Deserializers for use with the Consumer.
String (UTF8 encoded) deserializer.
Null value deserializer.
Deserializer that deserializes any value to null.
System.Int64 (big endian encoded, network byte ordered) deserializer.
System.Int32 (big endian encoded, network byte ordered) deserializer.
System.Single (big endian encoded, network byte ordered) deserializer.
System.Double (big endian encoded, network byte ordered) deserializer.
System.Byte[] (nullable) deserializer.
Byte ordering is original order.
Represents an error that occurred when interacting with a
Kafka broker or the librdkafka library.
Initialize a new Error instance that is a copy of another.
The error object to initialize from.
Initialize a new Error instance from a native pointer to
a rd_kafka_error_t object, then destroy the native object.
Initialize a new Error instance from a particular
ErrorCode value.
The ErrorCode value associated with this Error.
The reason string associated with this Error will
be the static string associated with the ErrorCode.
Initialize a new Error instance.
The error code.
The error reason. If null, this will be a static value
associated with the error.
Whether or not the error is fatal.
Initialize a new Error instance from a particular
ErrorCode value and a custom reason
string.
The ErrorCode value associated with this Error.
A custom reason string associated with the error
(overriding the static string associated with
the ErrorCode).
Gets the ErrorCode associated with this Error.
Whether or not the error is fatal.
Whether or not the operation that caused the error is retriable.
Whether or not the current transaction is abortable
following the error.
This is only relevant for the transactional producer
API.
Gets a human readable reason string associated with this error.
true if Code != ErrorCode.NoError.
true if this error originated locally (within librdkafka), false otherwise.
true if this error originated on a broker, false otherwise.
Converts the specified Error value to the value of its Code property.
The Error value to convert.
Converts the specified ErrorCode value to its corresponding rich Error value.
The ErrorCode value to convert.
Tests whether this Error instance is equal to the specified object.
The object to test.
true if obj is an Error and the Code property values are equal. false otherwise.
Returns a hash code for this Error value.
An integer that specifies a hash value for this Error value.
Tests whether Error value a is equal to Error value b.
The first Error value to compare.
The second Error value to compare.
true if Error values a and b are equal. false otherwise.
Tests whether Error value a is not equal to Error value b.
The first Error value to compare.
The second Error value to compare.
true if Error values a and b are not equal. false otherwise.
Returns the string representation of the error.
Depending on error source this might be a rich
contextual error message, or a simple static
string representation of the error Code.
A string representation of the Error object.
Enumeration of local and broker generated error codes.
Error codes that relate to locally produced errors in
librdkafka are prefixed with Local_
Received message is incorrect
Bad/unknown compression
Broker is going away
Generic failure
Broker transport failure
Critical system resource
Failed to resolve broker
Produced message timed out
Reached the end of the topic+partition queue on the broker. Not really an error.
Permanent: Partition does not exist in cluster.
File or filesystem error
Permanent: Topic does not exist in cluster.
All broker connections are down.
Invalid argument, or invalid configuration
Operation timed out
Queue is full
ISR count < required.acks
Broker node update
SSL error
Waiting for coordinator to become available.
Unknown client group
Operation in progress
Previous operation in progress, wait for it to finish.
This operation would interfere with an existing subscription
Assigned partitions (rebalance_cb)
Revoked partitions (rebalance_cb)
Conflicting use
Wrong state
Unknown protocol
Not implemented
Authentication failure
No stored offset
Outdated
Timed out in queue
Feature not supported by broker
Awaiting cache update
Operation interrupted
Key serialization error
Value serialization error
Key deserialization error
Value deserialization error
Partial response
Modification attempted on read-only object
No such entry / item not found
Read underflow
Invalid type
Retry operation.
Purged in queue
Purged in flight
Fatal error: see rd_kafka_fatal_error()
Inconsistent state
Gap-less ordering would not be guaranteed if proceeding
Maximum poll interval exceeded
Unknown broker
Functionality not configured
Instance has been fenced
Application generated exception.
Unknown broker error
Success
Offset out of range
Invalid message
Unknown topic or partition
Invalid message size
Leader not available
Not leader for partition
Request timed out
Broker not available
Replica not available
Message size too large
StaleControllerEpochCode
Offset metadata string too large
Broker disconnected before response received
Group coordinator load in progress
Group coordinator not available
Not coordinator for group
Invalid topic
Message batch larger than configured server segment size
Not enough in-sync replicas
Message(s) written to insufficient number of in-sync replicas
Invalid required acks value
Specified group generation id is not valid
Inconsistent group protocol
Invalid group.id
Unknown member
Invalid session timeout
Group rebalance in progress
Commit offset data size is not valid
Topic authorization failed
Group authorization failed
Cluster authorization failed
Invalid timestamp
Unsupported SASL mechanism
Illegal SASL state
Unsupported version
Topic already exists
Invalid number of partitions
Invalid replication factor
Invalid replica assignment
Invalid config
Not controller for cluster
Invalid request
Message format on broker does not support request
Isolation policy violation
Broker received an out of order sequence number
Broker received a duplicate sequence number
Producer attempted an operation with an old epoch
Producer attempted a transactional operation in an invalid state
Producer attempted to use a producer id which is not currently assigned to its transactional id
Transaction timeout is larger than the maximum value allowed by the broker's max.transaction.timeout.ms
Producer attempted to update a transaction while another concurrent operation on the same transaction was ongoing
Indicates that the transaction coordinator sending a WriteTxnMarker is no longer the current coordinator for a given producer
Transactional Id authorization failed
Security features are disabled
Operation not attempted
Disk error when trying to access log file on the disk.
The user-specified log directory is not found in the broker config.
SASL Authentication failed.
Unknown Producer Id.
Partition reassignment is in progress.
Delegation Token feature is not enabled.
Delegation Token is not found on server.
Specified Principal is not valid Owner/Renewer.
Delegation Token requests are not allowed on this connection.
Delegation Token authorization failed.
Delegation Token is expired.
Supplied principalType is not supported.
The group is not empty.
The group id does not exist.
The fetch session ID was not found.
The fetch session epoch is invalid.
No matching listener.
Topic deletion is disabled.
Unsupported compression type.
Provides extension methods on the ErrorCode enumeration.
Returns the static error string associated with
the particular ErrorCode value.
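For example, the extension method can be used to turn an ErrorCode into its static description (the exact wording of the returned string is determined by librdkafka):
```csharp
ErrorCode code = ErrorCode.Local_MsgTimedOut;
Console.WriteLine(code.GetReason());   // prints the static reason string for the code
```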
Thrown when there is an attempt to dereference a null Message reference.
Initializes a new instance of MessageNullException.
Represents an error that occurred during a Consumer.Position request.
Initializes a new instance of OffsetsRequestException.
The result corresponding to all topic partitions of the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
Represents an error that occurred during a Consumer.Position request.
Initializes a new instance of OffsetsRequestException.
The result corresponding to all topic partitions of the request
(whether or not they were in error). At least one of these
results will be in error.
The result corresponding to all ConfigResources in the request
(whether or not they were in error). At least one of these
results will be in error.
Encapsulates information describing a particular
Kafka group.
Initializes a new instance of the GroupInfo class.
Originating broker info.
The group name.
A rich value associated with the information encapsulated by this class.
The group state.
The group protocol type.
The group protocol.
The group members.
Gets the originating-broker info.
Gets the group name
Gets a rich value associated with the information encapsulated by this class.
Gets the group state
Gets the group protocol type
Gets the group protocol
Gets the group members
Encapsulates information describing a particular
member of a Kafka group.
Initializes a new GroupMemberInfo class instance.
The member id (generated by the broker).
The client's client.id.
The client's hostname.
Gets the member metadata (binary). The format of this data depends on the protocol type.
Gets the member assignment (binary). The format of this data depends on the protocol type.
Gets the member id (generated by broker).
Gets the client's client.id.
Gets the client's hostname.
Gets the member metadata (binary). The format of this data depends on the protocol type.
Gets the member assignment (binary). The format of this data depends on the protocol type.
A handle for a librdkafka client instance. Also encapsulates
a reference to the IClient instance that owns this handle.
Gets a value indicating whether the encapsulated librdkafka handle is invalid.
true if the encapsulated librdkafka handle is invalid; otherwise, false.
Represents a kafka message header.
Message headers are supported by v0.11 brokers and above.
The header key.
Get the serialized header value data.
Create a new Header instance.
The header key.
The header value (may be null).
A collection of Kafka message headers.
Message headers are supported by v0.11 brokers and above.
Append a new header to the collection.
The header key.
The header value (possibly null). Note: A null
header value is distinct from an empty header
value (array of length 0).
Append a new header to the collection.
The header to add to the collection.
Get the value of the latest header with the specified key.
The key to get the associated value of.
The value of the latest element in the collection with the specified key.
The key was not present in the collection.
Try to get the value of the latest header with the specified key.
The key to get the associated value of.
The value of the latest element in the collection with the
specified key, if a header with that key was present in the
collection.
true if a value with the specified key was present in
the collection, false otherwise.
Removes all headers for the given key.
The key to remove all headers for
Returns an enumerator that iterates through the headers collection.
An enumerator object that can be used to iterate through the headers collection.
Returns an enumerator that iterates through the headers collection.
An enumerator object that can be used to iterate through the headers collection.
Gets the header at the specified index
The zero-based index of the element to get.
The number of headers in the collection.
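A short sketch of working with the headers collection (header values are raw bytes; duplicate keys are permitted and the "latest" accessors return the most recently appended value; requires using System.Text for Encoding):
```csharp
var headers = new Headers
{
    { "trace-id", Encoding.UTF8.GetBytes("abc-123") }
};
headers.Add(new Header("attempt", null));        // null value is valid (distinct from empty)

if (headers.TryGetLastBytes("trace-id", out var value))
    Console.WriteLine(Encoding.UTF8.GetString(value));

headers.Remove("attempt");                       // removes all headers with this key
Console.WriteLine(headers.Count);                // -> 1
```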
Defines an Apache Kafka admin client.
Get information pertaining to all groups in
the Kafka cluster (blocking)
[API-SUBJECT-TO-CHANGE] - The API associated
with this functionality is subject to change.
The maximum period of time the call may block.
Get information pertaining to a particular
group in the Kafka cluster (blocking).
[API-SUBJECT-TO-CHANGE] - The API associated
with this functionality is subject to change.
The group of interest.
The maximum period of time the call
may block.
Returns information pertaining to the
specified group or null if this group does
not exist.
Query the cluster for metadata for a
specific topic.
[API-SUBJECT-TO-CHANGE] - The API associated
with this functionality is subject to change.
Query the cluster for metadata.
[API-SUBJECT-TO-CHANGE] - The API associated
with this functionality is subject to change.
Increase the number of partitions for one
or more topics as per the supplied
PartitionsSpecifications.
A collection of PartitionsSpecifications.
The options to use when creating
the partitions.
The results of the
PartitionsSpecification requests.
Delete a set of topics. This operation is not
transactional so it may succeed for some
topics while fail for others. It may take
several seconds after the DeleteTopicsResult
returns success for all the brokers to become
aware that the topics are gone. During this
time, topics may continue to be visible via
admin operations. If delete.topic.enable is
false on the brokers, DeleteTopicsAsync will
mark the topics for deletion, but not
actually delete them. The Task will return
successfully in this case.
The topic names to delete.
The options to use when deleting topics.
The results of the delete topic requests.
Create a set of new topics.
A collection of specifications for
the new topics to create.
The options to use when creating
the topics.
The results of the create topic requests.
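A minimal sketch of creating a topic (topic name, partition count and replication factor are example values; requires using Confluent.Kafka.Admin):
```csharp
using var admin = new AdminClientBuilder(
    new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build();

await admin.CreateTopicsAsync(new[]
{
    new TopicSpecification { Name = "example-topic", NumPartitions = 3, ReplicationFactor = 1 }
});
```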
Update the configuration for the specified
resources. Updates are not transactional so
they may succeed for some resources while fail
for others. The configs for a particular
resource are updated atomically. This operation
is supported by brokers with version 0.11.0
or higher. IMPORTANT NOTE: Unspecified
configuration properties will be reverted to
their default values. Furthermore, if you use
DescribeConfigsAsync to obtain the current set
of configuration values, modify them, then use
AlterConfigsAsync to set them, you will lose
any non-default values that are marked as
sensitive because they are not provided by
DescribeConfigsAsync.
The resources with their configs
(topic is the only resource type with configs
that can be updated currently).
The options to use when altering configs.
The results of the alter configs requests.
Get the configuration for the specified
resources. The returned configuration includes
default values and the IsDefault property can be
used to distinguish them from user supplied values.
The value of config entries where IsSensitive is
true is always null so that sensitive information
is not disclosed. Config entries where IsReadOnly
is true cannot be updated. This operation is
supported by brokers with version 0.11.0.0 or higher.
The resources (topic and broker resource
types are currently supported)
The options to use when describing configs.
Configs for the specified resources.
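A sketch illustrating the read-modify-write caution above, re-using the admin client built in the previous sketch (resource name and config values are examples; requires using Confluent.Kafka.Admin). Note that any config entry omitted from the update is reverted to its default, and sensitive entries read back via DescribeConfigsAsync have null values:
```csharp
var resource = new ConfigResource { Type = ResourceType.Topic, Name = "example-topic" };

var described = await admin.DescribeConfigsAsync(new[] { resource });
Console.WriteLine($"current entries: {described[0].Entries.Count}");

await admin.AlterConfigsAsync(new Dictionary<ConfigResource, List<ConfigEntry>>
{
    [resource] = new List<ConfigEntry>
    {
        // All other topic configs revert to their defaults, per the note above.
        new ConfigEntry { Name = "retention.ms", Value = "86400000" }
    }
});
```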
Delete records (messages) in topic partitions
older than the offsets provided.
The offsets to delete up to.
The options to use when deleting records.
The result of the delete records request.
A deserializer for use with the Consumer.
Deserialize a message key or value.
The raw byte data to deserialize.
True if this is a null value.
Context relevant to the deserialize operation.
A Task that completes
with the deserialized value.
Defines a serializer for use with the Producer.
Serialize the key or value of a
instance.
The value to serialize.
Context relevant to the serialize operation.
A Task that
completes with the serialized data.
Defines methods common to all client types.
An opaque reference to the underlying
librdkafka client instance. This can be used
to construct an AdminClient that utilizes the
same underlying librdkafka client as this
instance.
Gets the name of this client instance.
Contains (but is not equal to) the client.id
configuration parameter.
This name will be unique across all client
instances in a given application which allows
log messages to be associated with the
corresponding instance.
Adds one or more brokers to the Client's list
of initial bootstrap brokers.
Note: Additional brokers are discovered
automatically as soon as the Client connects
to any broker by querying the broker metadata.
Calling this method is only required in some
scenarios where the address of all brokers in
the cluster changes.
Comma-separated list of brokers in
the same format as the bootstrap.servers
configuration parameter.
There is currently no API to remove existing
configured, added or learnt brokers.
The number of brokers added. This value
includes brokers that may have been specified
a second time.
Defines a high-level Apache Kafka consumer
(with key and value deserialization).
Poll for new messages / events. Blocks
until a consume result is available or the
timeout period has elapsed.
The maximum period of time (in milliseconds)
the call may block.
The consume result.
The partitions assigned/revoked and offsets
committed handlers may be invoked as a
side-effect of calling this method (on the
same thread).
Thrown
when a call to this method is unsuccessful
for any reason. Inspect the Error property
of the exception for detailed information.
Poll for new messages / events. Blocks
until a consume result is available or the
operation has been cancelled.
A cancellation token
that can be used to cancel this operation.
The consume result.
The partitions assigned/revoked and
offsets committed handlers may be invoked
as a side-effect of calling this method
(on the same thread).
Thrown
when a call to this method is unsuccessful
for any reason (except cancellation by
user). Inspect the Error property of the
exception for detailed information.
Thrown on cancellation.
Poll for new messages / events. Blocks
until a consume result is available or the
timeout period has elapsed.
The maximum period of time
the call may block.
The consume result.
The partitions assigned/revoked and offsets
committed handlers may be invoked as a
side-effect of calling this method (on the
same thread).
Thrown
when a call to this method is unsuccessful
for any reason. Inspect the Error property
of the exception for detailed information.
Gets the (dynamic) group member id of
this consumer (as set by the broker).
Gets the current partition assignment as set by
Assign or implicitly.
Gets the current topic subscription as set by
Subscribe.
Update the topic subscription.
Any previous subscription will be
unassigned and unsubscribed first.
The topics to subscribe to.
A regex can be specified to subscribe to
the set of all matching topics (which is
updated as topics are added / removed from
the cluster). A regex must be front
anchored to be recognized as a regex.
e.g. ^myregex
The topic subscription set denotes the
desired set of topics to consume from.
This set is provided to the consumer
group leader (one of the group
members) which uses the configured
partition.assignment.strategy to
allocate partitions of topics in the
subscription set to the consumers in
the group.
Sets the subscription set to a single
topic.
Any previous subscription will be
unassigned and unsubscribed first.
The topic to subscribe to.
A regex can be specified to subscribe to
the set of all matching topics (which is
updated as topics are added / removed from
the cluster). A regex must be front
anchored to be recognized as a regex.
e.g. ^myregex
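For example (each call to Subscribe replaces any previous subscription; the topic names and regex are illustrative):
```csharp
consumer.Subscribe("orders");                          // single topic
consumer.Subscribe(new[] { "orders", "payments" });    // multiple topics
consumer.Subscribe("^metrics\\..*");                   // all topics matching a front-anchored regex
```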
Unsubscribe from the current subscription
set.
Sets the current set of assigned partitions
(the set of partitions the consumer will consume
from) to a single TopicPartition.
Note: The newly specified set is the complete
set of partitions to consume from. If the
consumer is already assigned to a set of
partitions, the previous set will be replaced.
The partition to consume from.
Consumption will resume from the last committed
offset, or according to the 'auto.offset.reset'
configuration parameter if no offsets have been
committed yet.
Sets the current set of assigned partitions
(the set of partitions the consumer will consume
from) to a single TopicPartitionOffset.
Note: The newly specified set is the complete
set of partitions to consume from. If the
consumer is already assigned to a set of
partitions, the previous set will be replaced.
The partition to consume from.
If an offset value of Offset.Unset (-1001) is
specified, consumption will resume from the last
committed offset, or according to the
'auto.offset.reset' configuration parameter if
no offsets have been committed yet.
Sets the current set of assigned partitions
(the set of partitions the consumer will consume
from) to the specified collection of TopicPartitionOffsets.
Note: The newly specified set is the complete
set of partitions to consume from. If the
consumer is already assigned to a set of
partitions, the previous set will be replaced.
The set of partitions to consume from.
If an offset value of Offset.Unset (-1001) is
specified for a partition, consumption will
resume from the last committed offset on that
partition, or according to the
'auto.offset.reset' configuration parameter if
no offsets have been committed yet.
Sets the current set of assigned partitions
(the set of partitions the consumer will consume
from) to the specified collection of TopicPartitions.
Note: The newly specified set is the complete
set of partitions to consume from. If the
consumer is already assigned to a set of
partitions, the previous set will be replaced.
The set of partitions to consume from.
Consumption will resume from the last committed
offset on each partition, or according to the
'auto.offset.reset' configuration parameter if
no offsets have been committed yet.
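A sketch of manual assignment (topic name, partitions and offsets are example values):
```csharp
// Start partition 0 at an absolute offset; start partition 1 at the committed
// offset (or per auto.offset.reset if nothing has been committed).
consumer.Assign(new[]
{
    new TopicPartitionOffset("example-topic", 0, 42),
    new TopicPartitionOffset("example-topic", 1, Offset.Unset)
});
```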
Incrementally add the specified partitions
to the current assignment, starting consumption
from the specified offsets.
The set of additional partitions to consume from.
If an offset value of Offset.Unset (-1001) is
specified for a partition, consumption will
resume from the last committed offset on that
partition, or according to the
'auto.offset.reset' configuration parameter if
no offsets have been committed yet.
Incrementally add the specified partitions
to the current assignment.
The set of additional partitions to consume from.
Consumption will resume from the last committed
offset on each partition, or according to the
'auto.offset.reset' configuration parameter if
no offsets have been committed yet.
Incrementally remove the specified partitions
from the current assignment.
The set of partitions to remove from the current
assignment.
Remove the current set of assigned partitions
and stop consumption.
Store offsets for a single partition based on
the topic/partition/offset of a consume result.
The offset will be committed according to
`auto.commit.interval.ms` (and
`enable.auto.commit`) or manual offset-less
commit().
`enable.auto.offset.store` must be set to
"false" when using this API.
A consume result used to determine
the offset to store and topic/partition.
Current stored offset or a partition
specific error.
Thrown if the request failed.
Thrown if result is in error.
Store offsets for a single partition.
The offset will be committed (written) to the
offset store according to `auto.commit.interval.ms`
or manual offset-less commit(). Calling
this method in itself does not commit offsets,
only store them for future commit.
`enable.auto.offset.store` must be set to
"false" when using this API.
The offset to be committed.
Thrown if the request failed.
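A sketch of manual offset storage, assuming EnableAutoOffsetStore = false (and EnableAutoCommit = true) in the consumer configuration; Process is a hypothetical application function:
```csharp
var cr = consumer.Consume(TimeSpan.FromSeconds(1));
if (cr != null)
{
    Process(cr);               // hypothetical processing step
    consumer.StoreOffset(cr);  // only now is the offset eligible for auto-commit
}
```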
Commit all offsets for the current assignment.
Thrown if the request failed.
Thrown if any of the constituent results is in
error. The entire result (which may contain
constituent results that are not in error) is
available via the
property of the exception.
Commit an explicit list of offsets.
The topic/partition offsets to commit.
Thrown if the request failed.
Thrown if any of the constituent results is in
error. The entire result (which may contain
constituent results that are not in error) is
available via the
property of the exception.
Commits an offset based on the
topic/partition/offset of a ConsumeResult.
The ConsumeResult instance used
to determine the committed offset.
Thrown if the request failed.
Thrown if the result is in error.
A consumer at position N has consumed
messages with offsets up to N-1 and will
next receive the message with offset N.
Hence, this method commits an offset of
.Offset + 1.
Seek to the specified offset, which may be either
an absolute or a logical offset, on the
specified topic partition. This must
only be done for partitions that are
currently being consumed (i.e., have been
Assign()ed). To set the start offset for
not-yet-consumed partitions you should use the
Assign method instead.
The topic partition to seek
on and the offset to seek to.
Thrown if the request failed.
Pause consumption for the provided list
of partitions.
The partitions to pause consumption of.
Thrown if the request failed.
Per partition success or error.
Resume consumption for the provided list of partitions.
The partitions to resume consumption of.
Thrown if the request failed.
Per partition success or error.
Retrieve current committed offsets for the
current assignment.
The offset field of each requested partition
will be set to the offset of the last consumed
message, or Offset.Unset in case there was no
previous message, or, alternately a partition
specific error may also be returned.
The maximum period of time the call
may block.
Thrown if the request failed.
Thrown if any of the constituent results is in
error. The entire result (which may contain
constituent results that are not in error) is
available via the
property of the exception.
Retrieve current committed offsets for the
specified topic partitions.
The offset field of each requested partition
will be set to the offset of the last consumed
message, or Offset.Unset in case there was no
previous message, or, alternately a partition
specific error may also be returned.
the partitions to get the committed
offsets for.
The maximum period of time the call
may block.
Thrown if the request failed.
Thrown if any of the constituent results is in
error. The entire result (which may contain
constituent results that are not in error) is
available via the
property of the exception.
Gets the current position (offset) for the
specified topic / partition.
The offset field of each requested partition
will be set to the offset of the last consumed
message + 1, or Offset.Unset in case there was
no previous message consumed by this consumer.
Thrown if the request failed.
Look up the offsets for the given partitions
by timestamp. The returned offset for each
partition is the earliest offset for which
the timestamp is greater than or equal to
the given timestamp. If the provided
timestamp exceeds that of the last message
in the partition, a value of Offset.End (-1)
will be returned.
The consumer does not need to be assigned to
the requested partitions.
The mapping from partition
to the timestamp to look up.
The maximum period of time the
call may block.
A mapping from partition to the
timestamp and offset of the first message with
timestamp greater than or equal to the target
timestamp.
Thrown
if the operation fails.
Thrown if any of the constituent results is
in error. The entire result (which may contain
constituent results that are not in error) is
available via the
property of the exception.
Get the last cached low (oldest available /
beginning) and high (newest/end) offsets for
a topic/partition. Does not block.
The low offset is updated periodically (if
statistics.interval.ms is set) while the
high offset is updated on each fetched
message set from the broker. If there is no
cached offset (either low or high, or both)
then Offset.Unset will be returned for the
respective offset.
The topic partition of interest.
The requested WatermarkOffsets
(see that class for additional documentation).
Query the Kafka cluster for low (oldest
available/beginning) and high (newest/end)
offsets for the specified topic/partition.
This is a blocking call - always contacts
the cluster for the required information.
The topic/partition of interest.
The maximum period of time
the call may block.
The requested WatermarkOffsets (see
that class for additional documentation).
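For example, a rough per-partition lag estimate can be derived from the watermark offsets and the consumer's current position (partition and timeout values are illustrative):
```csharp
var tp = new TopicPartition("example-topic", 0);
WatermarkOffsets wm = consumer.QueryWatermarkOffsets(tp, TimeSpan.FromSeconds(5));
Offset position = consumer.Position(tp);

long lag = position == Offset.Unset
    ? wm.High.Value - wm.Low.Value             // nothing consumed yet
    : wm.High.Value - position.Value;
Console.WriteLine($"low={wm.Low} high={wm.High} lag~{lag}");
```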
Commits offsets (if auto commit is enabled),
alerts the group coordinator
that the consumer is exiting the group then
releases all resources used by this consumer.
You should call Close
instead of Dispose
(or just before it) to ensure a timely consumer
group rebalance. If you do not call Close
or Unsubscribe,
the group will rebalance after a timeout
specified by the group's `session.timeout.ms`.
Note: the partition assignment and partitions
revoked handlers may be called as a side-effect
of calling this method.
Thrown if the operation fails.
The current consumer group metadata associated with this consumer,
or null if a GroupId has not been specified for the consumer.
This metadata object should be passed to the transactional producer's
SendOffsetsToTransaction method.
Defines a deserializer for use with the Consumer.
Deserialize a message key or value.
The data to deserialize.
Whether or not the value is null.
Context relevant to the deserialize operation.
The deserialized value.
A type for use in conjunction with IgnoreDeserializer that enables
message keys or values to be read as null, regardless of their value.
Defines a Kafka message header.
The header key.
The serialized header value data.
Attempt to load librdkafka.
true if librdkafka was loaded as a result of this call, false if the
library has already been loaded.
throws DllNotFoundException if librdkafka could not be loaded.
throws FileLoadException if the loaded librdkafka version is too low.
throws InvalidOperationException on other error.
Var-arg tag types, used in producev
This class should be an exact replica of other NativeMethods classes, except
for the DllName const.
This copy/pasting is required because DllName must be const.
TODO: generate the NativeMethods classes at runtime (compile C# code) rather
than copy/paste.
Alternatively, we could have used dlopen to load the native library, but to
do that we need to know the absolute path of the native libraries because the
dlopen call does not know .NET runtime library storage conventions. Unfortunately
these are relatively complex, so we prefer to go with the copy/paste solution
which is relatively simple.
This class should be an exact replica of other NativeMethods classes, except
for the DllName const.
This copy/pasting is required because DllName must be const.
TODO: generate the NativeMethods classes at runtime (compile C# code) rather
than copy/paste.
Alternatively, we could have used dlopen to load the native library, but to
do that we need to know the absolute path of the native libraries because the
dlopen call does not know .NET runtime library storage conventions. Unfortunately
these are relatively complex, so we prefer to go with the copy/paste solution
which is relatively simple.
This class should be an exact replica of other NativeMethods classes, except
for the DllName const.
This copy/pasting is required because DllName must be const.
TODO: generate the NativeMethods classes at runtime (compile C# code) rather
than copy/paste.
Alternatively, we could have used dlopen to load the native library, but to
do that we need to know the absolute path of the native libraries because the
dlopen call does not know .NET runtime library storage conventions. Unfortunately
these are relatively complex, so we prefer to go with the copy/paste solution
which is relatively simple.
This class should be an exact replica of other NativeMethods classes, except
for the DllName const.
This copy/pasting is required because DllName must be const.
TODO: generate the NativeMethods classes at runtime (compile C# code) rather
than copy/paste.
Alternatively, we could have used dlopen to load the native library, but to
do that we need to know the absolute path of the native libraries because the
dlopen call does not know .NET runtime library storage conventions. Unfortunately
these are relatively complex, so we prefer to go with the copy/paste solution
which is relatively simple.
Unknown configuration name.
Invalid configuration value.
Configuration okay
This object is tightly coupled to the referencing Producer /
Consumer via callback objects passed into the librdkafka
config. These are not tracked by the CLR, so we need to
maintain an explicit reference to the containing object here
so the delegates - which may get called by librdkafka during
destroy - are guaranteed to exist during finalization.
Note: objects referenced by this handle (recursively) will
not be GC'd at the time of finalization as the freachable
list is a GC root. Also, the delegates are ok to use since they
don't have finalizers.
this is a useful reference:
https://stackoverflow.com/questions/6964270/which-objects-can-i-use-in-a-finalizer-method
Prevent AccessViolationException when handle has already been closed.
Should be called at start of every function using the handle,
except in ReleaseHandle.
Setting the config parameter to IntPtr.Zero returns the handle of an
existing topic, or an invalid handle if a topic with the specified name
does not exist. Note: Only the first applied configuration for a specific
topic will be used.
- allTopics=true - request all topics from cluster
- allTopics=false, topic=null - request only locally known topics (topic_new():ed topics or otherwise locally referenced once, such as consumed topics)
- allTopics=false, topic=valid - request specific topic
Dummy commit callback that does nothing but prohibits
triggering the global offset_commit_cb.
Used by manual commits.
Creates and returns a C rd_kafka_topic_partition_list_t * populated by offsets.
If offsets is null a null IntPtr will be returned, else a IntPtr
which must destroyed with LibRdKafka.topic_partition_list_destroy()
Extension methods for the class.
Extension methods for the class.
Extension methods for the class.
Converts the TimeSpan value to an integer number of milliseconds.
An exception is thrown if the number of milliseconds is greater than Int32.MaxValue.
The TimeSpan value to convert to milliseconds.
The TimeSpan value in milliseconds.
Convenience class for generating and pinning the UTF8
representation of a string.
Interpret a zero terminated c string as UTF-8.
Defines a high-level Apache Kafka producer client
that provides key and value serialization.
Asynchronously send a single message to a
Kafka topic. The partition the message is
sent to is determined by the partitioner
defined using the 'partitioner' configuration
property.
The topic to produce the message to.
The message to produce.
A cancellation token to observe whilst waiting
the returned task to complete.
A Task which will complete with a delivery
report corresponding to the produce request,
or an exception if an error occurred.
Thrown in response to any produce request
that was unsuccessful for any reason
(excluding user application logic errors).
The Error property of the exception provides
more detailed information.
Thrown in response to invalid argument values.
Asynchronously send a single message to a
Kafka topic/partition.
The topic partition to produce the
message to.
The message to produce.
A cancellation token to observe whilst waiting
the returned task to complete.
A Task which will complete with a delivery
report corresponding to the produce request,
or an exception if an error occurred.
Thrown in response to any produce request
that was unsuccessful for any reason
(excluding user application logic errors).
The Error property of the exception provides
more detailed information.
Thrown in response to invalid argument values.
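A minimal sketch of awaiting a delivery result (broker address and topic name are example values):
```csharp
using var producer = new ProducerBuilder<string, string>(
    new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

try
{
    var delivery = await producer.ProduceAsync("example-topic",
        new Message<string, string> { Key = "key-1", Value = "hello" });
    Console.WriteLine($"delivered to {delivery.TopicPartitionOffset} ({delivery.Status})");
}
catch (ProduceException<string, string> ex)
{
    Console.WriteLine($"delivery failed: {ex.Error.Reason}");
}
```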
Asynchronously send a single message to a
Kafka topic. The partition the message is sent
to is determined by the partitioner defined
using the 'partitioner' configuration property.
The topic to produce the message to.
The message to produce.
A delegate that will be called
with a delivery report corresponding to the
produce request (if enabled).
Thrown in response to any error that is known
immediately (excluding user application logic
errors), for example ErrorCode.Local_QueueFull.
Asynchronous notification of unsuccessful produce
requests is made available via the delivery handler
parameter (if specified). The Error property of
the exception / delivery report provides more
detailed information.
Thrown in response to invalid argument values.
Thrown in response to error conditions that
reflect an error in the application logic of
the calling application.
Asynchronously send a single message to a
Kafka topic partition.
The topic partition to produce
the message to.
The message to produce.
A delegate that will be called
with a delivery report corresponding to the
produce request (if enabled).
Thrown in response to any error that is known
immediately (excluding user application logic errors),
for example ErrorCode.Local_QueueFull. Asynchronous
notification of unsuccessful produce requests is made
available via the delivery handler
parameter (if specified). The Error property of the
exception / delivery report provides more detailed
information.
Thrown in response to invalid argument values.
Thrown in response to error conditions that reflect
an error in the application logic of the calling
application.
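A sketch of the callback-based variant, assuming a previously built IProducer&lt;Null, string&gt; named producer; the delivery handler executes later, on the producer's poll thread:
```csharp
producer.Produce("example-topic",
    new Message<Null, string> { Value = "hello" },
    report =>
    {
        if (report.Error.IsError)
            Console.WriteLine($"failed: {report.Error.Reason}");
        else
            Console.WriteLine($"delivered to {report.TopicPartitionOffset}");
    });

// Give outstanding delivery reports a chance to be served before shutdown.
producer.Flush(TimeSpan.FromSeconds(10));
```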
Poll for callback events.
The maximum period of time to block if
no callback events are waiting. You should
typically use a relatively short timeout period
because this operation cannot be cancelled.
Returns the number of events served since
the last call to this method or if this
method has not yet been called, over the
lifetime of the producer.
Wait until all outstanding produce requests and
delivery report callbacks are completed.
[API-SUBJECT-TO-CHANGE] - the semantics and/or
type of the return value is subject to change.
The maximum length of time to block.
You should typically use a relatively short
timeout period and loop until the return value
becomes zero because this operation cannot be
cancelled.
The current librdkafka out queue length. This
should be interpreted as a rough indication of
the number of messages waiting to be sent to or
acknowledged by the broker. If zero, there are
no outstanding messages or callbacks.
Specifically, the value is equal to the sum of
the number of produced messages for which a
delivery report has not yet been handled and a
number which is less than or equal to the
number of pending delivery report callback
events (as determined by the number of
outstanding protocol requests).
This method should typically be called prior to
destroying a producer instance to make sure all
queued and in-flight produce requests are
completed before terminating. The wait time is
bounded by the timeout parameter.
A related configuration parameter is
message.timeout.ms which determines the
maximum length of time librdkafka attempts
to deliver a message before giving up and
so also affects the maximum time a call to
Flush may block.
Where this Producer instance shares a Handle
with one or more other producer instances, the
Flush method will wait on messages produced by
the other producer instances as well.
Wait until all outstanding produce requests and
delivery report callbacks are completed.
A cancellation token to observe whilst waiting
the returned task to complete.
This method should typically be called prior to
destroying a producer instance to make sure all
queued and in-flight produce requests are
completed before terminating.
A related configuration parameter is
message.timeout.ms which determines the
maximum length of time librdkafka attempts
to deliver a message before giving up and
so also affects the maximum time a call to
Flush may block.
Where this Producer instance shares a Handle
with one or more other producer instances, the
Flush method will wait on messages produced by
the other producer instances as well.
Thrown if the operation is cancelled.
Initialize transactions for the producer instance.
This function ensures any transactions initiated by previous instances
of the producer with the same TransactionalId are completed.
If the previous instance failed with a transaction in progress the
previous transaction will be aborted.
This function needs to be called before any other transactional or
produce functions are called when the TransactionalId is configured.
If the last transaction had begun completion (following transaction commit)
but not yet finished, this function will await the previous transaction's
completion.
When any previous transactions have been fenced this function
will acquire the internal producer id and epoch, used in all future
transactional messages issued by this producer instance.
Upon successful return from this function the application has to perform at
least one of the following operations within TransactionalTimeoutMs to
avoid timing out the transaction on the broker:
* ProduceAsync (et al.)
* SendOffsetsToTransaction
* CommitTransaction
* AbortTransaction
The maximum length of time this method may block.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
Begin a new transaction.
InitTransactions must have been called successfully (once)
before this function is called.
Any messages produced, offsets sent (SendOffsetsToTransaction),
etc., after the successful return of this function will be part of
the transaction and committed or aborted atomically.
Finish the transaction by calling CommitTransaction or
abort the transaction by calling AbortTransaction.
With the transactional producer, ProduceAsync and
Produce calls are only allowed during an on-going
transaction, as started with this function.
Any produce call outside an on-going transaction,
or for a failed transaction, will fail.
Thrown on all errors.
Commit the current transaction (as started with
BeginTransaction).
Any outstanding messages will be flushed (delivered) before actually
committing the transaction.
If any of the outstanding messages fail permanently the current
transaction will enter the abortable error state, in this case
the application must call AbortTransaction before attempting a new
transaction with BeginTransaction.
IMPORTANT NOTE: It is currently strongly recommended that the application
call CommitTransaction without specifying a timeout (which will block up
to the remaining transaction timeout - ProducerConfig.TransactionTimeoutMs)
because the Transactional Producer's API timeout handling is inconsistent with
the underlying protocol requests (known issue).
This function will block until all outstanding messages are
delivered and the transaction commit request has been successfully
handled by the transaction coordinator, or until the given timeout
expires, whichever comes first. On timeout the application may
call the function again.
Will automatically call Flush to ensure all queued
messages are delivered before attempting to commit the
transaction.
The maximum length of time this method may block.
Thrown if the application must call AbortTransaction and
start a new transaction with BeginTransaction if it
wishes to proceed with transactions.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
Commit the current transaction (as started with
BeginTransaction).
Any outstanding messages will be flushed (delivered) before actually
committing the transaction.
If any of the outstanding messages fail permanently the current
transaction will enter the abortable error state, in this case
the application must call AbortTransaction before attempting a new
transaction with BeginTransaction.
This function will block until all outstanding messages are
delivered and the transaction commit request has been successfully
handled by the transaction coordinator, or until the transaction
times out (ProducerConfig.TransactionTimeoutMs), whichever comes
first. On timeout the application may call the function again.
Will automatically call Flush to ensure all queued
messages are delivered before attempting to commit the
transaction.
Thrown if the application must call AbortTransaction and
start a new transaction with BeginTransaction if it
wishes to proceed with transactions.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
Aborts the ongoing transaction.
This function should also be used to recover from non-fatal abortable
transaction errors.
Any outstanding messages will be purged and fail.
IMPORTANT NOTE: It is currently strongly recommended that the application
call AbortTransaction without specifying a timeout (which will block up
to the remaining transaction timeout - ProducerConfig.TransactionTimeoutMs)
because the Transactional Producer's API timeout handling is inconsistent with
the underlying protocol requests (known issue).
This function will block until all outstanding messages are purged
and the transaction abort request has been successfully
handled by the transaction coordinator, or until the given timeout
expires, whichever comes first. On timeout the application may
call the function again.
The maximum length of time this method may block.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
Aborts the ongoing transaction.
This function should also be used to recover from non-fatal abortable
transaction errors.
Any outstanding messages will be purged and fail.
This function will block until all outstanding messages are purged
and the transaction abort request has been successfully
handled by the transaction coordinator, or until the transaction
times out (ProducerConfig.TransactionTimeoutMs), whichever comes
first. On timeout the application may call the function again.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
Sends a list of topic partition offsets to the consumer group
coordinator for the given consumer group metadata, and marks
the offsets as part of the current transaction.
These offsets will be considered committed only if the transaction is
committed successfully.
The offsets should be the next message your application will consume,
i.e., the last processed message's offset + 1 for each partition.
Either track the offsets manually during processing or use
Position property (on the consumer) to get the current offsets for
the partitions assigned to the consumer.
Use this method at the end of a consume-transform-produce loop prior
to committing the transaction with CommitTransaction.
The consumer must disable auto commits
(set EnableAutoCommit to false on the consumer).
Logical and invalid offsets (such as Offset.Unset) in
the supplied offsets will be ignored; if there are no valid offsets
the function will not throw an error
and no action will be taken.
List of offsets to commit to the consumer group upon
successful commit of the transaction. Offsets should be
the next message to consume, e.g., last processed message + 1.
The consumer group metadata acquired via the consumer's ConsumerGroupMetadata property.
The maximum length of time this method may block.
Thrown if group metadata is invalid.
Thrown if the application must call AbortTransaction and
start a new transaction with BeginTransaction if it
wishes to proceed with transactions.
Thrown if an error occurred, and the operation may be retried.
Thrown on all other errors.
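A compressed sketch of a consume-transform-produce loop using the transactional APIs described above. It assumes a producer configured with a TransactionalId, a consumer in the same application with EnableAutoCommit = false, the usual System.Linq / System.Collections.Generic using directives, and a hypothetical Transform function; error handling (AbortTransaction on KafkaTxnRequiresAbortException) is omitted for brevity:
```csharp
producer.InitTransactions(TimeSpan.FromSeconds(30));

while (true)
{
    producer.BeginTransaction();
    var nextOffsets = new Dictionary<TopicPartition, Offset>();

    for (int i = 0; i < 100; ++i)
    {
        var cr = consumer.Consume(TimeSpan.FromMilliseconds(100));
        if (cr == null) break;

        producer.Produce("output-topic",
            new Message<Null, string> { Value = Transform(cr.Message.Value) });   // hypothetical Transform

        // Offset to commit = last processed offset + 1.
        nextOffsets[cr.TopicPartition] = new Offset(cr.Offset.Value + 1);
    }

    producer.SendOffsetsToTransaction(
        nextOffsets.Select(kv => new TopicPartitionOffset(kv.Key, kv.Value)).ToList(),
        consumer.ConsumerGroupMetadata,
        TimeSpan.FromSeconds(30));

    producer.CommitTransaction();
}
```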
Defines a serializer for use with the Producer.
Serialize the key or value of a
instance.
The value to serialize.
Context relevant to the serialize operation.
The serialized value.
Represents an error that occurred during an interaction with Kafka.
Initialize a new instance of KafkaException based on
an existing Error instance.
The Kafka Error.
Initialize a new instance of KafkaException based on
an existing Error instance and inner exception.
The Kafka Error.
The exception instance that caused this exception.
Initialize a new instance of KafkaException based on
an existing ErrorCode value.
The Kafka ErrorCode.
Gets the Error associated with this KafkaException.
Represents an error where the operation that caused it
may be retried.
Initialize a new instance of KafkaRetriableException
based on an existing Error instance.
The Error instance.
Represents an error that caused the current transaction
to fail and enter the abortable state.
Initialize a new instance of KafkaTxnRequiresAbortException
based on an existing Error instance.
The Error instance.
Methods that relate to the native librdkafka library itself
(do not require a Producer or Consumer broker connection).
Gets the librdkafka version as an integer.
Interpreted as hex MM.mm.rr.xx:
- MM = Major
- mm = minor
- rr = revision
- xx = pre-release id (0xff is the final release)
E.g.: 0x000901ff = 0.9.1
Gets the librdkafka version as string.
Gets a list of the supported debug contexts.
true if librdkafka has been successfully loaded, false if not.
Loads the native librdkafka library. Does nothing if the library is
already loaded.
true if librdkafka was loaded as a result of this call, false if the
library has already been loaded.
You will not typically need to call this method - librdkafka is loaded
automatically on first use of a Producer or Consumer instance.
Loads the native librdkafka library from the specified path (note: the
specified path needs to include the filename). Does nothing if the
library is already loaded.
true if librdkafka was loaded as a result of this call, false if the
library has already been loaded.
You will not typically need to call this method - librdkafka is loaded
automatically on first use of a Producer or Consumer instance.
The total number of librdkafka client instances that have been
created and not yet disposed.
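For illustration, the version integer can be decoded as described above:

    using System;
    using Confluent.Kafka;

    int v = Library.Version;
    int major    = (v >> 24) & 0xff;
    int minor    = (v >> 16) & 0xff;
    int revision = (v >> 8) & 0xff;

    Console.WriteLine($"librdkafka {Library.VersionString} ({major}.{minor}.{revision})");
    Console.WriteLine($"loaded: {Library.IsLoaded}, instances: {Library.HandleCount}");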
OnLog callback event handler implementations.
Warning: Log handlers are called spontaneously from internal librdkafka
threads and the application must not call any Confluent.Kafka APIs from
within a log handler or perform any prolonged operations.
The method used to log messages by default.
Enumerates the supported log level enumeration types.
Confluent.Kafka.SysLogLevel (severity
levels correspond to syslog)
Microsoft.Extensions.Logging.LogLevel
System.Diagnostics.TraceLevel
Encapsulates information provided to the
Producer/Consumer OnLog event.
Instantiates a new LogMessage class instance.
The librdkafka client instance name.
The log level (levels correspond to syslog(3)), lower is worse.
The facility (section of librdkafka code) that produced the message.
The log message.
Gets the librdkafka client instance name.
Gets the log level (levels correspond to syslog(3)), lower is worse.
Gets the facility (section of librdkafka code) that produced the message.
Gets the log message.
Converts the syslog message severity level to the
equivalent value in a different log level
enumeration type.
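A sketch of performing that conversion inside a log handler. The enumeration member name used here (MicrosoftExtensionsLogging) is assumed from the list above, and the broker address is a placeholder.

    using System;
    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };  // assumed

    using var producer = new ProducerBuilder<Null, string>(config)
        .SetLogHandler((_, logMessage) =>
        {
            // Convert the syslog severity to the equivalent
            // Microsoft.Extensions.Logging.LogLevel value.
            int level = logMessage.LevelAs(LogLevelType.MicrosoftExtensionsLogging);
            Console.WriteLine($"[{level}] {logMessage.Facility}: {logMessage.Message}");
        })
        .Build();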
Represents a (deserialized) Kafka message.
Gets the message key value (possibly null).
Gets the message value (possibly null).
Enumerates different parts of a Kafka message
The message key.
The message value.
All components of the message except Key and Value.
The message timestamp. The timestamp type must be set to CreateTime.
Specify Timestamp.Default to set the message timestamp to the time
of this function call.
The collection of message headers (or null). Specifying null is
equivalent to specifying an empty list. The order of headers is
maintained, and duplicate header keys are allowed.
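For example, a message carrying an explicit header and the default timestamp might be constructed as follows (the topic name, header name and configuration values are assumptions):

    using System.Text;
    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };  // assumed
    using var producer = new ProducerBuilder<string, string>(config).Build();

    var message = new Message<string, string>
    {
        Key = "order-42",
        Value = "{\"total\": 19.99}",
        Timestamp = Timestamp.Default,   // CreateTime set at the time of the produce call
        Headers = new Headers { { "trace-id", Encoding.UTF8.GetBytes("abc123") } }
    };

    await producer.ProduceAsync("orders", message);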
Kafka cluster metadata.
Instantiates a new Metadata class instance.
Information about each constituent broker of the cluster.
Information about requested topics in the cluster.
The id of the broker that provided this metadata.
The name of the broker that provided this metadata.
Gets information about each constituent broker of the cluster.
Gets information about requested topics in the cluster.
Gets the id of the broker that provided this metadata.
Gets the name of the broker that provided this metadata.
Returns a JSON representation of the Metadata object.
A JSON representation of the Metadata object.
A type for use in conjunction with NullSerializer and NullDeserializer
that enables null keys or values to be enforced when producing or
consuming messages.
Represents a Kafka partition offset value.
This structure is the same size as a long -
its purpose is to add some syntactical sugar
related to special values.
A special value that refers to the beginning of a partition.
A special value that refers to the end of a partition.
A special value that refers to the stored offset for a partition.
A special value that refers to an invalid, unassigned or default partition offset.
Initializes a new instance of the Offset structure.
The offset value
Gets the long value corresponding to this offset.
Gets whether or not this is one of the special
offset values.
Tests whether this Offset value is equal to the specified object.
The object to test.
true if obj is an Offset and has the same value. false otherwise.
Tests whether this Offset value is equal to the specified Offset.
The offset to test.
true if other has the same value. false otherwise.
Tests whether Offset value a is equal to Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a and b are equal. false otherwise.
Tests whether Offset value a is not equal to Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a and b are not equal. false otherwise.
Tests whether Offset value a is greater than Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a is greater than Offset value b. false otherwise.
Tests whether Offset value a is less than Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a is less than Offset value b. false otherwise.
Tests whether Offset value a is greater than or equal to Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a is greater than or equal to Offset value b. false otherwise.
Tests whether Offset value a is less than or equal to Offset value b.
The first Offset value to compare.
The second Offset value to compare.
true if Offset value a is less than or equal to Offset value b. false otherwise.
Add an integer value to an Offset value.
The Offset value to add the integer value to.
The integer value to add to the Offset value.
The Offset value incremented by the integer value b.
Add a long value to an Offset value.
The Offset value to add the long value to.
The long value to add to the Offset value.
The Offset value incremented by the long value b.
Returns a hash code for this Offset.
An integer that specifies a hash value for this Offset.
Converts the specified long value to an Offset value.
The long value to convert.
Converts the specified Offset value to a long value.
The Offset value to convert.
Returns a string representation of the Offset object.
A string that represents the Offset object.
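Illustrative use of the special values and conversions described above (the topic name in the comment is an assumption):

    using System;
    using Confluent.Kafka;

    Offset concrete = 42;                            // implicit conversion from long
    long raw = concrete;                             // implicit conversion back to long
    Console.WriteLine(Offset.Beginning.IsSpecial);   // True
    Console.WriteLine(concrete + 1);                 // 43

    // e.g. seek an assigned consumer back to the start of a partition:
    //   consumer.Seek(new TopicPartitionOffset("my-topic", 0, Offset.Beginning));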
Represents a Kafka partition.
This structure is the same size as an int -
its purpose is to add some syntactical sugar
related to special values.
A special value that refers to an unspecified / unknown partition.
Initializes a new instance of the Partition structure.
The partition value
Gets the int value corresponding to this partition.
Gets whether or not this is one of the special
partition values.
Tests whether this Partition value is equal to the specified object.
The object to test.
true if obj is a Partition instance and has the same value. false otherwise.
Tests whether this Partition value is equal to the specified Partition.
The partition to test.
true if other has the same value. false otherwise.
Tests whether Partition value a is equal to Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a and b are equal. false otherwise.
Tests whether Partition value a is not equal to Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a and b are not equal. false otherwise.
Tests whether Partition value a is greater than Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a is greater than Partition value b. false otherwise.
Tests whether Partition value a is less than Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a is less than Partition value b. false otherwise.
Tests whether Partition value a is greater than or equal to Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a is greater than or equal to Partition value b. false otherwise.
Tests whether Partition value a is less than or equal to Partition value b.
The first Partition value to compare.
The second Partition value to compare.
true if Partition value a is less than or equal to Partition value b. false otherwise.
Returns a hash code for this Partition.
An integer that specifies a hash value for this Partition.
Converts the specified int value to a Partition value.
The int value to convert.
Converts the specified Partition value to an int value.
The Partition value to convert.
Returns a string representation of the Partition object.
A string that represents the Partition object.
Metadata pertaining to a single Kafka topic partition.
Initializes a new PartitionMetadata instance.
The id of the partition this metadata relates to.
The id of the broker that is the leader for the partition.
The ids of all brokers that contain replicas of the partition.
The ids of all brokers that contain in-sync replicas of the partition.
Note: this value is cached by the broker and is consequently not guaranteed to be up-to-date.
A rich object associated with the request for this partition metadata.
Gets the id of the partition this metadata relates to.
Gets the id of the broker that is the leader for the partition.
Gets the ids of all brokers that contain replicas of the partition.
Gets the ids of all brokers that contain in-sync replicas of the partition.
Gets a rich object associated with the request for this partition metadata.
Note: this value is cached by the broker and is consequently not guaranteed to be up-to-date.
Returns a JSON representation of the PartitionMetadata object.
A JSON representation of the PartitionMetadata object.
Enumeration of possible message persistence states.
Message was never transmitted to the broker, or failed with
an error indicating it was not written to the log.
Application retry risks ordering, but not duplication.
Message was transmitted to broker, but no acknowledgement was
received. Application retry risks ordering and duplication.
Message was written to the log and acknowledged by the broker.
Note: acks='all' should be used for this to be fully trusted
in case of a broker failover.
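A sketch of inspecting the persistence status of a delivery result to decide how safe a retry is (broker address and topic name are assumptions):

    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };  // assumed
    using var producer = new ProducerBuilder<string, string>(config).Build();

    var result = await producer.ProduceAsync("my-topic",
        new Message<string, string> { Key = "k", Value = "v" });

    switch (result.Status)
    {
        case PersistenceStatus.Persisted:
            // Written to the log and acknowledged by the broker.
            break;
        case PersistenceStatus.PossiblyPersisted:
            // Transmitted but not acknowledged: retrying risks duplication.
            break;
        case PersistenceStatus.NotPersisted:
            // Never written to the log: retrying risks reordering, not duplication.
            break;
    }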
Represents an error that occurred whilst producing a message.
Initialize a new instance of ProduceException based on
an existing Error value.
The error associated with the delivery result.
The delivery result associated with the produce request.
The exception instance that caused this exception.
Initialize a new instance of ProduceException based on
an existing Error value.
The error associated with the delivery report.
The delivery result associated with the produce request.
The delivery result associated with the produce request.
A high level producer with serialization capability.
Releases the unmanaged resources used by the
and optionally disposes the managed resources.
true to release both managed and unmanaged resources;
false to release only unmanaged resources.
Calculate a partition number given a partition count
and serialized key data. The topic name
is also provided, but is typically not used.
A partitioner instance may be called from any thread at any time and
may be called multiple times for the same message/key.
A partitioner:
- MUST NOT call any method on the producer instance.
- MUST NOT block or execute for prolonged periods of time.
- MUST return a value between 0 and partitionCnt-1.
- MUST NOT throw any exception.
The topic.
The number of partitions in the topic.
The serialized key data.
Whether or not the key is null (distinguishes the null and empty case).
The calculated Partition, possibly
Partition.Any.
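A minimal sketch of a default partitioner that respects the constraints listed above; the hashing scheme is purely illustrative, and the broker address is a placeholder.

    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };  // assumed
    using var producer = new ProducerBuilder<string, string>(config)
        .SetDefaultPartitioner((topic, partitionCount, keyData, keyIsNull) =>
        {
            if (keyIsNull || partitionCount <= 0)
            {
                return new Partition(0);
            }
            // Simple, stable hash over the serialized key bytes; never blocks or throws.
            int hash = 17;
            foreach (byte b in keyData) { hash = hash * 31 + b; }
            return new Partition((hash & 0x7fffffff) % partitionCount);
        })
        .Build();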
A builder class for .
The config dictionary.
The configured error handler.
The configured log handler.
The configured statistics handler.
The configured OAuthBearer Token Refresh handler.
The per-topic custom partitioners.
The default custom partitioner.
The configured key serializer.
The configured value serializer.
The configured async key serializer.
The configured async value serializer.
A collection of librdkafka configuration parameters
(refer to https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)
and parameters specific to this client (refer to:
).
At a minimum, 'bootstrap.servers' must be specified.
Set the handler to call on statistics events. Statistics are provided as
a JSON formatted string as defined here:
https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md
You can enable statistics and set the statistics interval
using the StatisticsIntervalMs configuration property
(disabled by default).
Executes on the poll thread (by default, a background thread managed by
the producer).
Exceptions: Any exception thrown by your statistics handler
will be delivered to your error handler, if set, or otherwise
silently ignored.
Set a custom partitioner to use when producing messages to
a specific topic.
Set a custom partitioner that will be used for all topics
except those for which a partitioner has been explicitly configured.
Set the handler to call on error events e.g. connection failures or all
brokers down. Note that the client will try to automatically recover from
errors that are not marked as fatal. Non-fatal errors should be interpreted
as informational rather than catastrophic.
Executes on the poll thread (by default, a background thread managed by
the producer).
Exceptions: Any exception thrown by your error handler will be silently
ignored.
Set the handler to call when there is information available
to be logged. If not specified, a default callback that writes
to stderr will be used.
By default not many log messages are generated.
For more verbose logging, specify one or more debug contexts
using the Debug configuration property.
Warning: Log handlers are called spontaneously from internal
librdkafka threads and the application must not call any
Confluent.Kafka APIs from within a log handler or perform any
prolonged operations.
Exceptions: Any exception thrown by your log handler will be
silently ignored.
Set SASL/OAUTHBEARER token refresh callback in provided
conf object. The SASL/OAUTHBEARER token refresh callback
is triggered by the producer
whenever OAUTHBEARER is the SASL mechanism and a token
needs to be retrieved, typically based on the configuration
defined in sasl.oauthbearer.config. The callback should
invoke OAuthBearerSetToken
or OAuthBearerSetTokenFailure
to indicate success or failure, respectively.
An unsecured JWT refresh handler is provided by librdkafka
for development and testing purposes, it is enabled by
setting the enable.sasl.oauthbearer.unsecure.jwt property
to true and is mutually exclusive to using a refresh callback.
the callback to set; callback function arguments:
IProducer - instance of the producer which should be used to
set token or token failure; string - value of the configuration
property sasl.oauthbearer.config.
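As a hedged sketch: the token values below are placeholders, and in a real handler the token would be obtained from an identity provider based on sasl.oauthbearer.config.

    using System;
    using Confluent.Kafka;

    var config = new ProducerConfig
    {
        BootstrapServers = "localhost:9092",          // assumed
        SecurityProtocol = SecurityProtocol.SaslPlaintext,
        SaslMechanism = SaslMechanism.OAuthBearer
    };

    using var producer = new ProducerBuilder<string, string>(config)
        .SetOAuthBearerTokenRefreshHandler((client, oauthConfig) =>
        {
            try
            {
                // Placeholder token acquisition; replace with a real lookup.
                string token = "<jwt>";
                long lifetimeMs = DateTimeOffset.UtcNow.AddMinutes(5).ToUnixTimeMilliseconds();
                client.OAuthBearerSetToken(token, lifetimeMs, "my-principal");
            }
            catch (Exception e)
            {
                client.OAuthBearerSetTokenFailure(e.Message);
            }
        })
        .Build();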
The serializer to use to serialize keys.
If your key serializer throws an exception, this will be
wrapped in a ProduceException with ErrorCode
Local_KeySerialization and thrown by the initiating call to
Produce or ProduceAsync.
The serializer to use to serialize values.
If your value serializer throws an exception, this will be
wrapped in a ProduceException with ErrorCode
Local_ValueSerialization and thrown by the initiating call to
Produce or ProduceAsync.
The serializer to use to serialize keys.
If your key serializer throws an exception, this will be
wrapped in a ProduceException with ErrorCode
Local_KeySerialization and thrown by the initiating call to
Produce or ProduceAsync.
The serializer to use to serialize values.
If your value serializer throws an exception, this will be
wrapped in a ProduceException with ErrorCode
Local_ValueSerialization and thrown by the initiating call to
Produce or ProduceAsync.
Build a new IProducer implementation instance.
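Putting the builder pieces described above together, an illustrative configuration might look like this (all values are assumptions):

    using System;
    using Confluent.Kafka;

    var config = new ProducerConfig
    {
        BootstrapServers = "localhost:9092",   // assumed
        StatisticsIntervalMs = 60000,          // enable statistics events
        Debug = "broker,topic"                 // more verbose logging
    };

    using var producer = new ProducerBuilder<string, string>(config)
        .SetKeySerializer(Serializers.Utf8)
        .SetValueSerializer(Serializers.Utf8)
        .SetErrorHandler((_, error) => Console.Error.WriteLine($"Error: {error.Reason}"))
        .SetLogHandler((_, log) => Console.WriteLine($"{log.Level} {log.Facility}: {log.Message}"))
        .SetStatisticsHandler((_, json) => Console.WriteLine($"Statistics ({json.Length} bytes)"))
        .Build();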
Context relevant to a serialization or deserialization operation.
The default SerializationContext value (representing no context defined).
Create a new SerializationContext object instance.
The component of the message the serialization operation relates to.
The topic the data is being written to or read from.
The collection of message headers (or null). Specifying null is
equivalent to specifying an empty list. The order of headers is
maintained, and duplicate header keys are allowed.
The topic the data is being written to or read from.
The component of the message the serialization operation relates to.
The collection of message headers (or null). Specifying null is
equivalent to specifying an empty list. The order of headers is
maintained, and duplicate header keys are allowed.
Serializers for use with .
String (UTF8) serializer.
Null serializer.
System.Int64 (big endian, network byte order) serializer.
System.Int32 (big endian, network byte order) serializer.
System.Single (big endian, network byte order) serializer.
System.Double (big endian, network byte order) serializer.
System.Byte[] (nullable) serializer.
Byte order is original order.
An adapter that allows an async deserializer
to be used where a sync deserializer is required.
In using this adapter, there are two potential
issues you should be aware of:
1. If you are working in a single threaded
SynchronizationContext (for example, a
WindowsForms application), you must ensure
that all methods awaited by your deserializer
(at all levels) are configured to NOT
continue on the captured context, otherwise
your application will deadlock. You do this
by calling .ConfigureAwait(false) on every
method awaited in your deserializer
implementation. If your deserializer makes use
of a library that does not do this, you
can get around this by calling await
Task.Run(() => ...) to force the library
method to execute in a SynchronizationContext
that is not single threaded. Note: all
Confluent async deserializers comply with the
above.
2. In any application, there is potential
for a deadlock due to thread pool exhaustion.
This can happen because in order for an async
method to complete, a thread pool thread is
typically required. However, if all available
thread pool threads are in use waiting for the
async methods to complete, there will be
no threads available to complete the tasks
(deadlock). Due to (a) the large default
number of thread pool threads in the modern
runtime and (b) the infrequent need for a
typical async deserializer to wait on an async
result (i.e. most deserializers will only
infrequently need to execute asynchronously),
this scenario should not commonly occur in
practice.
Initializes a new SyncOverAsyncDeserializer.
Deserialize a message key or value.
The data to deserialize.
Whether or not the value is null.
Context relevant to the deserialize
operation.
The deserialized value.
Extension methods related to SyncOverAsyncDeserializer.
Create a sync deserializer by wrapping an async
one. For more information on the potential
pitfalls in doing this, refer to .
An adapter that allows an async serializer
to be used where a sync serializer is required.
In using this adapter, there are two potential
issues you should be aware of:
1. If you are working in a single threaded
SynchronizationContext (for example, a
WindowsForms application), you must ensure
that all methods awaited by your serializer
(at all levels) are configured to NOT
continue on the captured context, otherwise
your application will deadlock. You do this
by calling .ConfigureAwait(false) on every
method awaited in your serializer
implementation. If your serializer makes use
of a library that does not do this, you
can get around this by calling await
Task.Run(() => ...) to force the library
method to execute in a SynchronizationContext
that is not single threaded. Note: all
Confluent async serializers are safe to use
with this adapter.
2. In any application, there is potential
for a deadlock due to thread pool exhaustion.
This can happen because in order for an async
method to complete, a thread pool thread is
typically required. However, if all available
thread pool threads are in use waiting for the
async methods to complete, there will be
no threads available to complete the tasks
(deadlock). Due to (a) the large default
number of thread pool threads in the modern
runtime and (b) the infrequent need for a
typical async serializer to wait on an async
result (i.e. most serializers will only
infrequently need to execute asynchronously),
this scenario should not commonly occur in
practice.
Initializes a new SyncOverAsyncSerializer
instance.
Serialize the key or value of a
instance.
The value to serialize.
Context relevant to the serialize operation.
the serialized data.
Extension methods related to SyncOverAsyncSerializer.
Create a sync serializer by wrapping an async
one. For more information on the potential
pitfalls in doing this, refer to .
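Illustrative use of the wrapping extension method described above. MyAsyncSerializer stands in for any IAsyncSerializer<T>, such as a schema-registry serializer.

    using System.Text;
    using System.Threading.Tasks;
    using Confluent.Kafka;
    using Confluent.Kafka.SyncOverAsync;

    public class MyAsyncSerializer : IAsyncSerializer<string>
    {
        public Task<byte[]> SerializeAsync(string data, SerializationContext context)
            => Task.FromResult(Encoding.UTF8.GetBytes(data));
    }

    public static class Example
    {
        public static IProducer<Null, string> BuildProducer(ProducerConfig config)
            => new ProducerBuilder<Null, string>(config)
                // Wrap the async serializer so it can be used on the synchronous Produce path.
                .SetValueSerializer(new MyAsyncSerializer().AsSyncOverAsync())
                .Build();
    }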
Represents log levels corresponding to syslog(3) severity levels.
System is unusable.
Action must be taken immediately.
Critical condition.
Error condition.
Warning condition.
Normal, but significant condition.
Informational message.
Debug-level message.
Encapsulates a Kafka timestamp and its type.
A read-only field representing an unspecified timestamp.
Unix epoch as a UTC DateTime. Unix time is defined as
the number of seconds past this UTC time, excluding
leap seconds.
Initializes a new instance of the Timestamp structure.
The unix millisecond timestamp.
The type of the timestamp.
Initializes a new instance of the Timestamp structure.
Note: the DateTime value is first converted to UTC
if it is not already.
The DateTime value corresponding to the timestamp.
The type of the timestamp.
Initializes a new instance of the Timestamp structure.
Note: the DateTime value is first converted
to UTC if it is not already, and TimestampType is set
to CreateTime.
The DateTime value corresponding to the timestamp.
Initializes a new instance of the Timestamp structure.
Note: TimestampType is set to CreateTime.
The DateTimeOffset value corresponding to the timestamp.
Gets the timestamp type.
Get the Unix millisecond timestamp.
Gets the UTC DateTime corresponding to the timestamp.
Determines whether two Timestamps have the same value.
Determines whether this instance and a specified object,
which must also be a Timestamp object, have the same value.
true if obj is a Timestamp and its value is the same as
this instance; otherwise, false. If obj is null, the method
returns false.
Determines whether two Timestamps have the same value.
The timestamp to test.
true if other has the same value. false otherwise.
Returns the hashcode for this Timestamp.
A 32-bit signed integer hash code.
Determines whether two specified Timestamps have the same value.
The first Timestamp to compare.
The second Timestamp to compare.
true if the value of a is the same as the value of b; otherwise, false.
Determines whether two specified Timestamps have different values.
The first Timestamp to compare.
The second Timestamp to compare.
true if the value of a is different from the value of b; otherwise, false.
Convert a DateTime instance to a milliseconds unix timestamp.
Note: the DateTime value is first converted to UTC
if it is not already.
The DateTime value to convert.
The milliseconds unix timestamp corresponding to the DateTime value,
rounded down to the previous millisecond.
Convert a milliseconds unix timestamp to a DateTime value.
The milliseconds unix timestamp to convert.
The DateTime value associated with the unix timestamp, with Utc Kind.
Enumerates the different meanings of a message timestamp value.
Timestamp type is unknown.
Timestamp relates to message creation time as set by a Kafka client.
Timestamp relates to the time a message was appended to a Kafka log.
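Illustrative round trips between DateTime values and unix millisecond timestamps using the conversions described above:

    using System;
    using Confluent.Kafka;

    DateTime now = DateTime.UtcNow;
    long unixMs = Timestamp.DateTimeToUnixTimestampMs(now);
    DateTime roundTripped = Timestamp.UnixTimestampMsToDateTime(unixMs);   // Kind == Utc

    var ts = new Timestamp(now);             // TimestampType.CreateTime
    Console.WriteLine(ts.UnixTimestampMs);
    Console.WriteLine(ts.UtcDateTime);
    Console.WriteLine(ts.Type);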
Metadata pertaining to a single Kafka topic.
Initializes a new TopicMetadata class instance.
The topic name.
Metadata for each of the topic's partitions.
A rich object associated with the request for this topic metadata.
Gets the topic name.
Gets metadata for each of the topic's partitions.
A rich object associated with the request for this topic metadata.
Returns a JSON representation of the TopicMetadata object.
A JSON representation of the TopicMetadata object.
Represents a Kafka (topic, partition) tuple.
Initializes a new TopicPartition instance.
A Kafka topic name.
A Kafka partition.
Gets the Kafka topic name.
Gets the Kafka partition.
Tests whether this TopicPartition instance is equal to the specified object.
The object to test.
true if obj is a TopicPartition and all properties are equal. false otherwise.
Returns a hash code for this TopicPartition.
An integer that specifies a hash value for this TopicPartition.
Tests whether TopicPartition instance a is equal to TopicPartition instance b.
The first TopicPartition instance to compare.
The second TopicPartition instance to compare.
true if TopicPartition instances a and b are equal. false otherwise.
Tests whether TopicPartition instance a is not equal to TopicPartition instance b.
The first TopicPartition instance to compare.
The second TopicPartition instance to compare.
true if TopicPartition instances a and b are not equal. false otherwise.
Returns a string representation of the TopicPartition object.
A string that represents the TopicPartition object.
Compares the current instance with another object of the same type and returns
an integer that indicates whether the current instance precedes, follows, or
occurs in the same position in the sort order as the other object.
Less than zero: This instance precedes obj in the sort order.
Zero: This instance occurs in the same position in the sort order as obj.
Greater than zero: This instance follows obj in the sort order.
Represents a Kafka (topic, partition, error) tuple.
Initializes a new TopicPartitionError instance.
Kafka topic name and partition values.
A Kafka error.
Initializes a new TopicPartitionError instance.
A Kafka topic name.
A Kafka partition value.
A Kafka error.
Gets the Kafka topic name.
Gets the Kafka partition.
Gets the Kafka error.
Gets the TopicPartition component of this TopicPartitionError instance.
Tests whether this TopicPartitionError instance is equal to the specified object.
The object to test.
true if obj is a TopicPartitionError and all properties are equal. false otherwise.
Returns a hash code for this TopicPartitionError.
An integer that specifies a hash value for this TopicPartitionError.
Tests whether TopicPartitionError instance a is equal to TopicPartitionError instance b.
The first TopicPartitionError instance to compare.
The second TopicPartitionError instance to compare.
true if TopicPartitionError instances a and b are equal. false otherwise.
Tests whether TopicPartitionError instance a is not equal to TopicPartitionError instance b.
The first TopicPartitionError instance to compare.
The second TopicPartitionError instance to compare.
true if TopicPartitionError instances a and b are not equal. false otherwise.
Returns a string representation of the TopicPartitionError object.
A string representation of the TopicPartitionError object.
Represents a Kafka (topic, partition, offset) tuple.
Initializes a new TopicPartitionOffset instance.
Kafka topic name and partition.
A Kafka offset value.
Initializes a new TopicPartitionOffset instance.
A Kafka topic name.
A Kafka partition.
A Kafka offset value.
Gets the Kafka topic name.
Gets the Kafka partition.
Gets the Kafka partition offset value.
Gets the TopicPartition component of this TopicPartitionOffset instance.
Tests whether this TopicPartitionOffset instance is equal to the specified object.
The object to test.
true if obj is a TopicPartitionOffset and all properties are equal. false otherwise.
Returns a hash code for this TopicPartitionOffset.
An integer that specifies a hash value for this TopicPartitionOffset.
Tests whether TopicPartitionOffset instance a is equal to TopicPartitionOffset instance b.
The first TopicPartitionOffset instance to compare.
The second TopicPartitionOffset instance to compare.
true if TopicPartitionOffset instances a and b are equal. false otherwise.
Tests whether TopicPartitionOffset instance a is not equal to TopicPartitionOffset instance b.
The first TopicPartitionOffset instance to compare.
The second TopicPartitionOffset instance to compare.
true if TopicPartitionOffset instances a and b are not equal. false otherwise.
Returns a string representation of the TopicPartitionOffset object.
A string that represents the TopicPartitionOffset object.
Represents a Kafka (topic, partition, offset, error) tuple.
Initializes a new TopicPartitionOffsetError instance.
Kafka topic name and partition values.
A Kafka offset value.
A Kafka error.
Initializes a new TopicPartitionOffsetError instance.
Kafka topic name, partition and offset values.
A Kafka error.
Initializes a new TopicPartitionOffsetError instance.
A Kafka topic name.
A Kafka partition value.
A Kafka offset value.
A Kafka error.
Gets the Kafka topic name.
Gets the Kafka partition.
Gets the Kafka partition offset value.
Gets the Kafka error.
Gets the TopicPartition component of this TopicPartitionOffsetError instance.
Gets the TopicPartitionOffset component of this TopicPartitionOffsetError instance.
Tests whether this TopicPartitionOffsetError instance is equal to the specified object.
The object to test.
true if obj is a TopicPartitionOffsetError and all properties are equal. false otherwise.
Returns a hash code for this TopicPartitionOffsetError.
An integer that specifies a hash value for this TopicPartitionOffsetError.
Tests whether TopicPartitionOffsetError instance a is equal to TopicPartitionOffsetError instance b.
The first TopicPartitionOffsetError instance to compare.
The second TopicPartitionOffsetError instance to compare.
true if TopicPartitionOffsetError instances a and b are equal. false otherwise.
Tests whether TopicPartitionOffsetError instance a is not equal to TopicPartitionOffsetError instance b.
The first TopicPartitionOffsetError instance to compare.
The second TopicPartitionOffsetError instance to compare.
true if TopicPartitionOffsetError instances a and b are not equal. false otherwise.
Converts TopicPartitionOffsetError instance to TopicPartitionOffset instance.
NOTE: Throws KafkaException if Error.Code != ErrorCode.NoError
The TopicPartitionOffsetError instance to convert.
The TopicPartitionOffset instance converted from the TopicPartitionOffsetError instance.
Returns a string representation of the TopicPartitionOffsetError object.
A string representation of the TopicPartitionOffsetError object.
Represents a Kafka (topic, partition, timestamp) tuple.
Initializes a new TopicPartitionTimestamp instance.
Kafka topic name and partition.
A Kafka timestamp value.
Initializes a new TopicPartitionTimestamp instance.
A Kafka topic name.
A Kafka partition.
A Kafka timestamp value.
Gets the Kafka topic name.
Gets the Kafka partition.
Gets the Kafka timestamp.
Gets the TopicPartition component of this TopicPartitionTimestamp instance.
Tests whether this TopicPartitionTimestamp instance is equal to the specified object.
The object to test.
true if obj is a TopicPartitionTimestamp and all properties are equal. false otherwise.
Returns a hash code for this TopicPartitionTimestamp.
An integer that specifies a hash value for this TopicPartitionTimestamp.
Tests whether TopicPartitionTimestamp instance a is equal to TopicPartitionTimestamp instance b.
The first TopicPartitionTimestamp instance to compare.
The second TopicPartitionTimestamp instance to compare.
true if TopicPartitionTimestamp instances a and b are equal. false otherwise.
Tests whether TopicPartitionTimestamp instance a is not equal to TopicPartitionTimestamp instance b.
The first TopicPartitionTimestamp instance to compare.
The second TopicPartitionTimestamp instance to compare.
true if TopicPartitionTimestamp instances a and b are not equal. false otherwise.
Returns a string representation of the TopicPartitionTimestamp object.
A string that represents the TopicPartitionTimestamp object.
Represents the low and high watermark offsets of a Kafka
topic/partition.
You can identify a partition that has not yet been written
to by checking if the high watermark equals 0.
Initializes a new instance of the WatermarkOffsets class
with the specified offsets.
The offset of the earliest message in the topic/partition. If
no messages have been written to the topic, the low watermark
offset is set to 0. The low watermark will also be 0 if
one message has been written to the partition (with offset 0).
The high watermark offset, which is the offset of the latest
message in the topic/partition available for consumption + 1.
Gets the offset of the earliest message in the topic/partition. If
no messages have been written to the topic, the low watermark
offset is set to 0. The low watermark will also be 0 if
one message has been written to the partition (with offset 0).
Gets the high watermark offset, which is the offset of the latest
message in the topic/partition available for consumption + 1.
Returns a string representation of the WatermarkOffsets object.
A string representation of the WatermarkOffsets object.
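A sketch of obtaining watermark offsets for a partition via a consumer (topic, partition and configuration values are assumptions):

    using System;
    using Confluent.Kafka;

    var config = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",   // assumed
        GroupId = "my-group"                   // assumed
    };
    using var consumer = new ConsumerBuilder<Ignore, Ignore>(config).Build();

    WatermarkOffsets watermarks = consumer.QueryWatermarkOffsets(
        new TopicPartition("my-topic", 0), TimeSpan.FromSeconds(5));

    if (watermarks.High.Value == 0)
    {
        Console.WriteLine("The partition has not yet been written to.");
    }
    else
    {
        Console.WriteLine($"Low: {watermarks.Low}, High: {watermarks.High}");
    }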