Data Quality Rules, part of the first fully managed governance solution for Apache Kafka, helps teams enforce data integrity and quickly resolve data quality issues.
Confluent, Inc. has announced new Confluent Cloud capabilities that give customers confidence that their data is trustworthy and can be easily processed and securely shared. With Data Quality Rules, an expansion of the Stream Governance suite, organizations can easily resolve data quality issues so data can be relied on for making business-critical decisions. In addition, Confluent’s new Custom Connectors, Stream Sharing, the Kora Engine, and an early access program for managed Apache Flink make it easier for companies to gain insights from their data on one platform, reducing operational burdens and ensuring industry-leading performance.
“Real-time data is the lifeblood of every organization, but it’s extremely challenging to manage data coming from different sources in real time and guarantee that it’s trustworthy. As a result, many organizations build a patchwork of solutions plagued with silos and business inefficiencies. Confluent Cloud’s new capabilities fix these issues by providing an easy path to ensuring trusted data can be shared with the right people in the right formats,” said Shaun Clowes, Chief Product Officer at Confluent.
Having high-quality data that can be quickly shared between teams, customers, and partners helps businesses make decisions faster. However, this is a challenge many companies face when dealing with highly distributed open source infrastructure like Apache Kafka. According to Confluent’s new 2023 Data Streaming Report, 72% of IT leaders cite the inconsistent use of integration methods and standards as a challenge or major hurdle to their data streaming infrastructure.
To address the need for more comprehensive data contracts, Confluent’s Data Quality Rules, a new feature in Stream Governance, enable organizations to deliver trusted, high-quality data streams across the organization using customizable rules that ensure data integrity and compatibility.
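The idea behind such rules can be illustrated with a minimal sketch: a predicate is evaluated against each record, and records that fail are quarantined rather than passed downstream. The rule below (an SSN-format check), the field names, and the dead-letter handling are illustrative assumptions, not Confluent's actual rule syntax, which is configured through the Stream Governance tooling.

```python
import re

# Hypothetical rule: the "ssn" field must match the NNN-NN-NNNN format.
RULE = lambda rec: re.fullmatch(r"\d{3}-\d{2}-\d{4}", rec.get("ssn", "")) is not None

def apply_rule(records, rule):
    """Split records into those that satisfy the rule and those routed to a
    dead-letter queue for follow-up, mirroring how a quality rule can
    quarantine bad messages instead of breaking downstream consumers."""
    valid, dead_letter = [], []
    for rec in records:
        (valid if rule(rec) else dead_letter).append(rec)
    return valid, dead_letter

records = [
    {"user": "a", "ssn": "123-45-6789"},
    {"user": "b", "ssn": "not-an-ssn"},
]
valid, dlq = apply_rule(records, RULE)
print(len(valid), len(dlq))  # 1 1
```

In a managed setting the same split happens inside the platform, so producers and consumers never have to agree on ad hoc validation code.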
“High levels of data quality and trust improve business outcomes, and this is especially important for data streaming, where analytics, decisions, and actions are triggered in real time,” said Stewart Bond, VP of Data Intelligence and Integration Software at IDC.
Many organizations have unique data architectures and need to build their own connectors to integrate their homegrown data systems and custom applications with Apache Kafka. However, these custom-built connectors must then be self-managed, requiring manual provisioning, upgrading, and monitoring, which takes valuable time and resources away from other business-critical activities.
Confluent’s new Custom Connectors are available on AWS in select regions. Support for additional regions and other cloud providers will be available in the future.