Scalyr Kafka Connector

Building on our introduction to Kafka blog (https://www.scalyr.com/blog/apache-kafka-tutorial/), Scalyr is announcing the release of the Scalyr Kafka Connector. Customers using Kafka, or considering it, in their log processing pipeline can now send logs to Scalyr easily while keeping the benefits Kafka provides, such as stream processing, high performance, and multi-home publishing.

How Scalyr Supports Kafka

Given the benefits outlined in the Kafka blog post, Scalyr has built a Kafka Connector to send logs to Scalyr from your existing Kafka infrastructure. You can find details of how to configure Kafka to work with Scalyr here: https://app.scalyr.com/solutions/kafka-connect. The plug-in is completely open source. We'll go over a few examples of using the Scalyr Kafka Connector below.
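As a quick sketch, a Kafka Connect sink connector is configured with a properties file and loaded by a Connect worker. The connector class and property names below are illustrative assumptions; refer to the Scalyr documentation linked above for the exact keys:

```
# scalyr-sink.properties -- illustrative sketch; exact property names may
# differ, see https://app.scalyr.com/solutions/kafka-connect
name=scalyr-sink
connector.class=com.scalyr.integrations.kafka.ScalyrSinkConnector
tasks.max=1
topics=logs
# Scalyr "Write Logs" API key (property name is an assumption)
api_key=YOUR_SCALYR_API_KEY
```

A standalone Connect worker would then load it with something like `bin/connect-standalone.sh config/connect-standalone.properties scalyr-sink.properties`.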

Using the Scalyr Connector with Filebeat and Kafka

Users may already have an existing log ingestion pipeline that uses Filebeat and Kafka in an ELK stack setup. The Scalyr Kafka Connector supports Filebeat out of the box with minimal configuration.

The Scalyr Kafka Connector automatically sends the message, logfile, and hostname fields from each Filebeat message.
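For context, pointing Filebeat at Kafka takes only a small change to filebeat.yml; the broker address, topic name, and log path below are placeholders:

```
# filebeat.yml -- ship log files to a Kafka topic
# (broker host, topic, and paths are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log

output.kafka:
  hosts: ["kafka-broker:9092"]
  topic: "logs"
```

From there, the Scalyr Kafka Connector consumes the topic and forwards the events to Scalyr.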

Using Scalyr Connector for Custom Applications

There are times when an application directly writes logs to a Kafka topic. For example, a Java application can invoke Kafka Producer APIs to write logs directly into Kafka. Scalyr provides support for these types of logs. The data mapping feature in the Connector provides extensibility to support logs of any type.
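As a minimal sketch of that pattern, the snippet below formats a log entry as JSON and shows, in comments, how it would be handed to a Kafka producer. The topic name and field layout are assumptions; the connector's data mapping would be configured to match whatever fields your application emits:

```java
import java.time.Instant;

public class LogProducerSketch {
    // Build a simple JSON log event. The field names here are illustrative;
    // the connector's data mapping config would map them to Scalyr fields.
    static String formatLogEvent(String host, String level, String message) {
        return String.format(
            "{\"timestamp\":\"%s\",\"host\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}",
            Instant.now(), host, level, message);
    }

    public static void main(String[] args) {
        String event = formatLogEvent("app-01", "INFO", "user login succeeded");
        System.out.println(event);

        // With kafka-clients on the classpath and a broker running, the event
        // would be published roughly like this:
        //
        //   Properties props = new Properties();
        //   props.put("bootstrap.servers", "kafka-broker:9092");
        //   props.put("key.serializer",
        //       "org.apache.kafka.common.serialization.StringSerializer");
        //   props.put("value.serializer",
        //       "org.apache.kafka.common.serialization.StringSerializer");
        //   try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        //       producer.send(new ProducerRecord<>("logs", event));
        //   }
    }
}
```

Once the events land on the topic, the connector's data mapping takes over; no application-side changes are needed beyond writing to Kafka.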

Using Scalyr Kafka Connector with Fluentd

For users already using Fluentd/Fluent Bit to send their logs to Kafka, Scalyr supports those use cases as well. If your organization already ships logs to Kafka with Fluentd, all it takes is a few configuration changes and installing the Scalyr Kafka Connector to start viewing and querying logs on Scalyr's platform.
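As an example, the Fluentd side of such a pipeline needs only an output match that targets Kafka (via the fluent-plugin-kafka gem); the broker address, topic, and tag pattern below are placeholders:

```
# fluent.conf -- route matching events to a Kafka topic
# (requires fluent-plugin-kafka; broker/topic/tag are placeholders)
<match app.**>
  @type kafka2
  brokers kafka-broker:9092
  default_topic logs
  <format>
    @type json
  </format>
</match>
```

The Scalyr Kafka Connector then consumes that topic and forwards the events to Scalyr, so nothing else in the Fluentd pipeline needs to change.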

Summary

Kafka has become a popular event store with many applications. As outlined in our previous blog on Kafka, there are many reasons an organization would choose to use Kafka in its technology stack. Scalyr's Kafka Connector makes it easier for organizations to send data to Scalyr with minimal changes to their existing stacks. Users can send logs to Scalyr to take advantage of Scalyr's lightning-fast query speeds, gain valuable insights from their data, and triage production incidents.

For full details on how to configure and run Scalyr's Kafka Sink Connector in your environment, please visit the Scalyr documentation on Kafka.

Scalyr is always looking for community feedback on how we can improve and help users solve their problems. Have some ideas around Kafka? Drop us a note!