Feature description
When transforming Conduit records into Kafka records (in destinations), we do the following:
- For raw records: bytes -> JSON object -> Kafka record (using Kafka Connect's JsonConverter)
- For structured records: struct -> bytes -> JSON object -> Kafka record (using Kafka Connect's JsonConverter)
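The two paths above can be sketched generically. The sketch below is illustrative only (plain Python dicts stand in for Conduit structs and Kafka records; the real wrapper uses Kafka Connect's `JsonConverter`); the point is that the structured path serializes the struct to bytes only to parse it right back:

```python
import json


def raw_to_kafka(raw_bytes: bytes) -> dict:
    # Raw record: parse the bytes into a JSON object, which the
    # converter then turns into a Kafka record.
    json_obj = json.loads(raw_bytes)
    return {"value": json_obj}  # stand-in for the Kafka record


def structured_to_kafka(struct: dict) -> dict:
    # Structured record: the struct is first serialized to bytes...
    raw_bytes = json.dumps(struct).encode()
    # ...and those bytes are then parsed back into a JSON object.
    # This round-trip is the extra memory cost described below.
    return raw_to_kafka(raw_bytes)
```

For example, `structured_to_kafka({"id": 1})` produces the same result as `raw_to_kafka(b'{"id": 1}')`, after allocating an intermediate byte buffer and a second parsed object along the way.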
We use KC's JsonConverter since it already does all the work and handles all the cases. However, its public API only has a method that transforms JSON objects into Kafka records (there is a private method, though).
The additional steps needed to produce the JSON object consume extra memory. It would be good if we could optimize this so the wrapper can more easily handle larger sets of data.
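One possible shape of the optimization, sketched in illustrative Python (plain dicts stand in for Conduit structs and Kafka records; function names are hypothetical): convert the structured record directly, without the intermediate serialize-and-reparse step.

```python
import json


def structured_to_kafka_today(struct: dict) -> dict:
    # Current path: struct -> bytes -> JSON object -> Kafka record.
    # json.dumps/json.loads model the redundant round-trip.
    intermediate = json.loads(json.dumps(struct))
    return {"value": intermediate}  # stand-in for the Kafka record


def structured_to_kafka_direct(struct: dict) -> dict:
    # Desired path: hand the struct straight to the converter,
    # skipping the intermediate byte buffer and re-parsed object.
    return {"value": struct}  # stand-in for the Kafka record
```

Both functions yield the same Kafka record, but the direct path allocates no intermediate copies, which is what matters when processing larger sets of data.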