Collecting Data with a Custom SIEM System Built on Apache Kafka and Kafka Connect ft. Vitalii Rudenskyi
The best-informed business insights that support better decision-making begin with data collection, ahead of data processing and analytics. Enterprises today are engulfed by floods of data, with sources ranging from cloud services and applications to thousands of internal servers. The sheer volume of data that organizations must process creates data ingestion challenges for many large companies. In this episode, data security engineer Vitalii Rudenskyi discusses the decision to replace a vendor security information and event management (SIEM) system by developing a custom solution with Apache Kafka® and Kafka Connect for a better data collection strategy.
Having a data collection infrastructure layer is mission critical for Vitalii and the team in helping enterprises protect data and detect security events. Built on Kafka, their custom SIEM infrastructure is configurable and designed to ingest and analyze huge amounts of data, including personally identifiable information (PII) and healthcare data.
When it comes to collecting data, there are two fundamental choices: push or pull. But what about both? Vitalii explains that Kafka Connect API extensions are integral to data ingestion in Kafka. Three key components allow their SIEM system to collect and record data daily by both pushing and pulling (see the sketches after this list):
- NettySource Connector: A connector developed to receive data from network devices and write it to Apache Kafka. It accepts data over both the TCP and UDP transport protocols and can be adapted to handle anything from Syslog to SNMP and NetFlow.
- PollableAPI Connector: A connector built to pull data from remote systems by polling different remote APIs and services.
- Transformations Library: Useful extensions to the existing out-of-the-box transformations. They follow a “tag and apply” approach that puts collected data in the right place, in the right format, after it has been collected.
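The episode notes don't include code, but a push-style source connector such as the NettySource Connector plugs into the standard Kafka Connect source API: a listener thread receives network data and a SourceTask hands it to the framework as SourceRecords. The sketch below is a minimal illustration under that assumption; the class name, the topic config key, and the listener details are hypothetical, not the actual connector internals.

```java
// Minimal sketch of a push-style Kafka Connect source task (assumption: this mirrors
// the general shape of the NettySource Connector; it is not its actual implementation).
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PushStyleSourceTask extends SourceTask {

    // A TCP/UDP listener thread (e.g. Netty-based) would enqueue raw messages here.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        topic = props.get("topic"); // hypothetical config key
        // Start the network listener here; every received Syslog/SNMP/NetFlow
        // message would be pushed onto the queue.
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // Kafka Connect calls poll() in a loop; drain whatever the listener has pushed.
        List<SourceRecord> records = new ArrayList<>();
        String first = queue.poll(500, TimeUnit.MILLISECONDS);
        if (first == null) {
            return records; // nothing arrived during this interval
        }
        List<String> batch = new ArrayList<>();
        batch.add(first);
        queue.drainTo(batch);
        for (String msg : batch) {
            // A fire-and-forget network stream has no natural partition/offset to track.
            records.add(new SourceRecord(Collections.emptyMap(), Collections.emptyMap(),
                    topic, Schema.STRING_SCHEMA, msg));
        }
        return records;
    }

    @Override
    public void stop() {
        // Shut down the network listener and release the port.
    }

    @Override
    public String version() {
        return "0.1-sketch";
    }
}
```

A pull-style task like the PollableAPI Connector would look similar, except poll() itself would call the remote API and record a source offset so the next poll can resume where the last one stopped.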
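One way to picture the “tag and apply” idea is a custom Single Message Transform that stamps each record with a tag that later transforms or the sink can act on. The sketch below only illustrates that pattern; the class name, the siem.tag header key, and the tag config option are hypothetical and not taken from Vitalii's library.

```java
// Sketch of a "tag"-style Single Message Transform: attaches a configurable tag header
// to each record so downstream steps can route and format it. Names here are hypothetical.
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.header.Headers;
import org.apache.kafka.connect.transforms.Transformation;

import java.util.Map;

public class TagTransform<R extends ConnectRecord<R>> implements Transformation<R> {

    private String tag;

    @Override
    public void configure(Map<String, ?> configs) {
        tag = String.valueOf(configs.get("tag"));
    }

    @Override
    public R apply(R record) {
        // Copy the headers and add the tag; an "apply" transform or the sink connector
        // can read it to decide where the event belongs and how to format it.
        Headers headers = record.headers().duplicate();
        headers.addString("siem.tag", tag);
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(),
                record.valueSchema(), record.value(),
                record.timestamp(), headers);
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("tag", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Tag to attach to every record");
    }

    @Override
    public void close() {}
}
```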
Listen to learn more as Vitalii shares the importance of data collection and the building of a custom solution to address multi-source data management requirements.
EPISODE LINKS
- Feed Your SIEM Smart with Kafka Connect
- To Pull or to Push Your Data with Kafka Connect? That Is the Question.
- Free Kafka Connect 101 Course
- Syslog Source Connector for Confluent Platform
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Kafka streaming in 10 minutes on Confluent Cloud
- Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)