Destinations: More optimal way to transform Conduit records into Kafka records #58

@hariso

Description

Feature description

When transforming Conduit records into Kafka records (in destinations), we do the following:

  1. For raw records: bytes -> JSON object -> Kafka record (using Kafka Connect's JsonConverter)
  2. For structured records: struct -> bytes -> JSON object -> Kafka record (using Kafka Connect's JsonConverter)
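The two paths can be sketched as follows. This is a language-agnostic illustration in Python (the wrapper itself is Java and delegates the final step to Kafka Connect's `JsonConverter`); `to_kafka_record` here is a hypothetical stand-in for that converter:

```python
import json

def to_kafka_record(json_obj: dict) -> dict:
    # Hypothetical stand-in for JsonConverter: wraps a JSON object
    # into a Kafka record.
    return {"value": json_obj}

def raw_to_kafka(raw: bytes) -> dict:
    # Path 1: bytes -> JSON object -> Kafka record.
    json_obj = json.loads(raw)  # allocates an intermediate JSON object
    return to_kafka_record(json_obj)

def structured_to_kafka(struct: dict) -> dict:
    # Path 2: struct -> bytes -> JSON object -> Kafka record.
    raw = json.dumps(struct).encode()  # the struct is serialized...
    return raw_to_kafka(raw)           # ...only to be parsed right back
```

Note that for structured records the payload is serialized to bytes and then immediately parsed back, so every record pays for two extra allocations before it ever reaches the converter.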

We use Kafka Connect's JsonConverter since it already does all the work and handles all the cases. However, its public API only has a method that transforms JSON objects into Kafka records (there is a private method though).

The additional steps needed to build the intermediate JSON object consume extra memory. It would be good to optimize this so the wrapper can more easily handle larger sets of data.
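Continuing the sketch above, one possible shape of the optimization is to hand the structured payload over directly, skipping the serialize/parse round trip. In the Java wrapper this would mean reaching the converter's JSON-object-level conversion (currently private) rather than going through bytes; `struct_to_kafka_direct` below is a hypothetical illustration of the idea, not the wrapper's actual API:

```python
def struct_to_kafka_direct(struct: dict) -> dict:
    # The structured payload already is the JSON object the converter
    # needs, so no serialize -> parse round trip is required.
    return {"value": struct}
```

Raw records would still need one parse (bytes are all we have), but structured records would avoid both extra steps.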


Labels: feature (New feature or request)