Dremio Speeds AI and BI Workloads with Spring Lakehouse Release


Dremio used today’s Iceberg Summit as the venue to roll out across-the-board improvements to its data lakehouse platform, including data management enhancements that will resonate with organizations pursuing everything from analytics to generative AI.

Dremio has morphed over the years from a developer of an open source SQL query engine to a full-stack provider of a data lakehouse. Its Intelligent Data Lakehouse Platform combines its query engine with a data storage environment based on Apache Iceberg, the innovative open table format developed at Netflix and Apple to solve data consistency issues in Hadoop clusters.

The enhancements that Dremio unveiled today with its Spring 2025 release address several issues and use cases that are impacting its customers, ranging from data management and SQL analytics to ensuring performance of AI applications.

At the top of the list of updates is a new capability dubbed Autonomous Reflections, which functions as a materialized cache that is always kept up to date with the latest data. Dremio says the cache will deliver sub-second response times for AI and SQL queries and eliminate the need for manual performance tuning, all while cutting compute costs.

Another cost-saving measure can be found in Iceberg Clustering, which Dremio says will automatically deliver optimized data layouts for Iceberg lakehouses. Iceberg Clustering will improve query speeds while eliminating the need to partition tables, the company says.
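To put that claim in context, the sketch below (not part of Dremio’s announcement) shows the kind of manual layout housekeeping that open source Iceberg users typically script today: choosing a partition spec and a sort order, then periodically rewriting data files. This is the work Iceberg Clustering is meant to automate. The catalog, table, and column names are hypothetical, and the snippet uses plain Apache Spark with Iceberg’s SQL extensions rather than Dremio itself.

```python
# Illustrative only: manual Iceberg table-layout upkeep that Dremio says its
# Iceberg Clustering feature automates. All names here are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-layout-maintenance")
    # Iceberg runtime for Spark 3.5 / Scala 2.12; the version is illustrative
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # A local Hadoop catalog keeps the example self-contained
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql("CREATE NAMESPACE IF NOT EXISTS demo.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales.orders (
        order_id BIGINT, customer_id BIGINT, ts TIMESTAMP, amount DOUBLE)
    USING iceberg
""")

# Hand-picked layout decisions: a partition spec and a write sort order
spark.sql("ALTER TABLE demo.sales.orders ADD PARTITION FIELD days(ts)")
spark.sql("ALTER TABLE demo.sales.orders WRITE ORDERED BY customer_id, ts")

# Periodic compaction pass that rewrites small files into the chosen sort order,
# typically run on a schedule by a data engineering team
spark.sql("""
    CALL demo.system.rewrite_data_files(
        table => 'sales.orders',
        strategy => 'sort',
        sort_order => 'customer_id ASC NULLS LAST, ts ASC NULLS LAST')
""")
```

Dremio’s pitch is that this kind of scripted upkeep, along with the partition design decisions behind it, goes away when the lakehouse manages clustering on its own.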

Kevin Petrie, vice president of research at BARC US, says Dremio’s moves will help customers’ AI projects.

“Many popular AI use cases, including chatbots, recommendation engines and anomaly detection, require simple real-time access to structured or semi-structured data,” Petrie said. “Dremio gets positive reviews for meeting these requirements for ease of use and performance, and this announcement further strengthens those capabilities.”

Dremio’s Spring 2025 release also introduces support for Apache Polaris, the open source metadata catalog introduced by Snowflake last year, in Dremio’s enterprise metadata catalog offering.

Polaris, which is currently incubating at the Apache Software Foundation, provides fine-grained data access controls and data lineage tracking for Iceberg tables, and serves as the interface for query engines that want to access Iceberg data in lakehouses. Dremio says it is the first vendor to provide a Polaris-powered metadata catalog that can run in any cloud or on-premises environment.
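In practice, engines reach a Polaris-style catalog through the standard Iceberg REST catalog protocol. The PySpark sketch below is an illustration rather than anything from Dremio’s release; the endpoint URL, credentials, catalog name, and table are hypothetical placeholders.

```python
# Minimal sketch of a query engine talking to an Iceberg REST catalog such as Polaris.
# The endpoint, credentials, catalog name, and table are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-rest-catalog-demo")
    # Iceberg runtime for Spark 3.5 / Scala 2.12; the version is illustrative
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a Spark catalog named "polaris" backed by the Iceberg REST protocol
    .config("spark.sql.catalog.polaris", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.polaris.type", "rest")
    .config("spark.sql.catalog.polaris.uri", "https://polaris.example.com/api/catalog")
    # OAuth2 client credentials issued by the catalog service
    .config("spark.sql.catalog.polaris.credential", "<client-id>:<client-secret>")
    .config("spark.sql.catalog.polaris.warehouse", "analytics")
    .getOrCreate()
)

# The engine reads Iceberg tables through the catalog; access controls are
# enforced by the catalog service rather than by the individual engine
spark.sql("SELECT * FROM polaris.sales.orders LIMIT 10").show()
```

Because policies and lineage live in the catalog service rather than in any single engine, the same tables can be shared across Spark, Dremio, and other Iceberg-aware tools without duplicating access rules.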

“Supporting Polaris makes perfect sense because it works well with the Apache Iceberg open table format,” Petrie told BigDATAwire. “As a longtime backer of Iceberg, Dremio understands the need to give customers this catalog option for Iceberg environments.”

Dremio’s metadata catalog also supports Project Nessie, the open source catalog project that Dremio spearheaded before Snowflake developed Polaris. Dremio committed to supporting Polaris last October.

Russell Spitzer, a Principal Software Engineer at Snowflake who sits on the Apache committees for Iceberg and Polaris, says the work on open table formats is proof of what can happen when people work together.

“The interoperability of the Iceberg format has allowed it to be successful in an industry full of options for working with big data,” Spitzer stated. “Apache Polaris (Incubating) is that next step towards data interoperability and perfectly fits the missing niche of a project that is contributed to by a variety of engineers aiming for vendor-neutrality and interoperability first.”

Dremio also launched a new AI-enabled semantic search capability that it says will reduce data discovery time from days to just seconds. The new search function lets both humans and AI agents find existing data without advanced SQL knowledge or other technical skills, the company says.

With the Spring 2025 release, Dremio is transforming how enterprises deliver data for AI initiatives, said Dremio founder Tomer Shiran.

“The paradox today is clear: AI demands massive amounts of high-quality data, yet teams are being asked to do more with less,” Shiran stated. “Our Spring 2025 release resolves this tension by eliminating the bottlenecks slowing teams down.”

Dremio made the announcements at Iceberg Summit 2025, a two-day event taking place today (in San Francisco) and tomorrow (online). Dremio, Snowflake, AWS, and Microsoft are sponsoring the event, which includes nearly 50 sessions on Iceberg and how to take advantage of the format in real-world big data environments.

Related Items:

Dremio Goes Hybrid with Nessie-Based Metadata Catalog

Dremio Unveils New Features to Enhance Apache Iceberg Data Lakehouse Performance

Dremio Report: 86% of Organizations Focusing on Data Unification for AI Readiness

 
