Databricks Summit: New integration announced for enterprise users

To further strengthen our commitment to providing industry-leading coverage of data technology, VentureBeat is excited to welcome Andrew Brust and Tony Baer as regular contributors. Watch for their articles in the Data Pipeline.

Databricks joined the party a few weeks after Snowflake and MongoDB set off their own product fireworks. The San Francisco-based data lakehouse company has drawn plenty of attention at its ongoing Data and AI Summit with a string of announcements, from Project Lightspeed, which aims to improve streaming data processing, to a more open Delta Lake and improvements to MLflow.

However, the summit is not just about Databricks improving its own platform. Players in the modern data stack have also announced new and improved integrations to help customers maximize their investment in the lakehouse.

Below is an overview of the major new integrations.

Monte Carlo

Data observability provider Monte Carlo was the first to announce a rapid, code-free integration that gives enterprise users end-to-end data observability for their Databricks data pipelines. The company said enterprises can connect Monte Carlo to their Databricks metastore, Unity Catalog or Delta Lake and use it to quickly identify anomalies in data freshness, volume, distribution, schema and lineage. This way, teams can rapidly detect structured and unstructured data incidents, from ingestion into Databricks down to the business intelligence (BI) layer, and resolve them before they impact downstream users.
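To illustrate the kind of checks an observability layer like this automates, here is a minimal Python sketch of freshness and volume monitoring. The table metadata, field names and thresholds are invented for the example; Monte Carlo's actual integration is code-free and reads this information from the Databricks metastore.

```python
from datetime import datetime, timedelta

# Hypothetical table metadata, as an observability tool might read it
# from a metastore (fields and values invented for illustration).
table_meta = {
    "name": "sales.orders",
    "last_updated": datetime(2022, 6, 29, 8, 0),
    "row_count": 9_200,
    "expected_rows": 10_000,          # rolling baseline from history
    "max_staleness": timedelta(hours=6),
}

def check_freshness(meta, now):
    """Return True if the table was updated within its staleness window."""
    return now - meta["last_updated"] <= meta["max_staleness"]

def check_volume(meta, tolerance=0.2):
    """Return True if row count is within `tolerance` of the baseline."""
    deviation = abs(meta["row_count"] - meta["expected_rows"]) / meta["expected_rows"]
    return deviation <= tolerance

now = datetime(2022, 6, 29, 18, 0)
print(check_freshness(table_meta, now))  # False: 10 hours since last update
print(check_volume(table_meta))          # True: 8% deviation, within 20%
```

In practice, these signals are computed continuously across every table and joined with lineage metadata so an incident can be traced to its upstream cause.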

Acceldata

Acceldata, a competitor of Monte Carlo in the data observability space, has also announced an integration providing end-to-end data pipeline visibility. The solution tracks pipeline quality inside and outside Databricks, flags incidents, and includes performance optimization features such as automated stability tracking and cost intelligence.

Rohit Choudhary, founder and CEO of Acceldata, said: “With this integration, the Acceldata Data Observability Cloud also provides customers with an additional layer of cost intelligence to detect and reduce inefficiencies, optimize performance and maximize their investment in Databricks.”

Decodable

Data engineering company Decodable has announced a new Delta Lake connector that lets users bring streaming data into Databricks easily and cost-effectively.

Today, capturing streaming data typically involves batch processing, which is costly and complex. The new connector, now generally available, lets application developers and data engineers quickly connect streaming data from any source in the cloud to the bronze and silver stages of Databricks’ medallion data architecture. Available with a free Decodable developer account, it unlocks the host of Databricks’ powerful AI and analytics capabilities.
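To make the medallion pattern concrete, here is a simplified pure-Python sketch of the bronze-to-silver step: raw ingested events are deduplicated and typed before analytics use. Real implementations run on Spark against Delta tables; the records and cleaning rules below are invented for illustration.

```python
# Bronze layer: raw events exactly as ingested, including duplicates
# and malformed records (sample data invented for illustration).
bronze = [
    {"id": 1, "amount": "19.99", "ts": "2022-06-28T10:00:00"},
    {"id": 1, "amount": "19.99", "ts": "2022-06-28T10:00:00"},  # duplicate
    {"id": 2, "amount": "bad",   "ts": "2022-06-28T10:05:00"},  # malformed
    {"id": 3, "amount": "5.00",  "ts": "2022-06-28T10:07:00"},
]

def to_silver(records):
    """Silver layer: deduplicate by id and cast amounts to numbers,
    dropping records that fail validation."""
    silver, seen = [], set()
    for rec in records:
        if rec["id"] in seen:
            continue
        try:
            cleaned = {**rec, "amount": float(rec["amount"])}
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        seen.add(rec["id"])
        silver.append(cleaned)
    return silver

print(to_silver(bronze))  # two clean, typed records (ids 1 and 3)
```

A streaming connector like Decodable’s performs this kind of transformation continuously as events arrive, rather than in periodic batch jobs.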

Sigma Computing

San Francisco-based Sigma Computing has announced an integration that brings its no-code, spreadsheet-like analytics interface to Databricks. This enables business users at companies leveraging Databricks to analyze live, cloud-scale data at a granular level. The integration requires only a one-time deployment, after which users can build advanced pivot tables, create dashboards, and iterate, aggregate or freely drill into dynamic live data in the data lakehouse.
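For a sense of the pivot-style aggregation such a spreadsheet interface performs behind the scenes, here is a hedged pure-Python sketch. The sales rows, column names and sum aggregation are invented for the example and are not Sigma’s implementation.

```python
from collections import defaultdict

# Invented sample rows, standing in for live lakehouse data.
rows = [
    {"region": "West", "product": "A", "revenue": 100},
    {"region": "West", "product": "B", "revenue": 150},
    {"region": "East", "product": "A", "revenue": 200},
    {"region": "West", "product": "A", "revenue": 50},
]

def pivot_sum(rows, index, column, value):
    """Build a pivot table: rows grouped by `index`, columns by `column`,
    each cell holding the sum of `value`."""
    table = defaultdict(lambda: defaultdict(float))
    for r in rows:
        table[r[index]][r[column]] += r[value]
    return {k: dict(v) for k, v in table.items()}

print(pivot_sum(rows, "region", "product", "revenue"))
# {'West': {'A': 150.0, 'B': 150.0}, 'East': {'A': 200.0}}
```

The point of a no-code interface is that this grouping and aggregation is expressed by dragging fields rather than writing queries, while the computation runs directly against the lakehouse.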

The Databricks Data + AI Summit ends today, June 30th.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. See our membership details.