
Data governance for generative AI in analytics


The democratization of data governance for generative AI is reshaping the landscape of analytics and artificial intelligence.

As organizations strive to leverage AI for actionable insights, the emphasis on maintaining high-quality, timely and diverse data becomes paramount, according to Sharad Kumar (pictured), field chief technology officer for data at QlikTech International AB.

QlikTech’s Sharad Kumar, field CTO for data, talks to theCUBE about data governance for generative AI at the Data + AI Summit 2024.

“What we realized, and I talked to a lot of CIOs and senior [executives], they realize in order to do AI and gen AI, you need good data,” Kumar said. “It needs to be good quality. What I was presenting yesterday was how you need six principles to ensure what I call goodness in your data and its fitness for AI.”

Kumar spoke with theCUBE Research’s John Furrier and Savannah Peterson at the Data + AI Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the convergence of data management and generative AI, focusing on the integration of open data formats, the democratization of data governance and the creation of a trusted data foundation for effective AI applications. (* Disclosure below.)

Building trusted data foundations for generative AI

The discussion highlighted the critical role of unified data formats and governance in advancing AI applications. Such integration aims to create a single, open lakehouse format that enhances compatibility and flexibility across a range of applications, according to Kumar.

“A couple of things which we saw coming with the acquisition of Tabular, it’s really the convergence of the Delta Lake and the Iceberg format,” Kumar said. “If you look at the last couple of years, it[’s] kind of been divergent formats, and even though Databricks has been kind of uniform … eventually you bring the two formats into a singular format. So now you … can plug in different apps and different things onto the same lakehouse.”

The discussion also touched on the issue of data fragmentation and legacy systems, which often hinder seamless data integration. Consolidating data into a unified storage platform, such as a lakehouse, can help overcome these challenges, Kumar added.

“If I look at customers, [they] have multiple databases … now they’re going to bring the data into, let’s say, a lakehouse. That’s the first step, where you create a unified storage place. You bring all your data together in one place,” Kumar said. “Databricks has a great, diverse, thriving and growing ecosystem of partners. But that means if I’m a customer, if I’m building an end-to-end platform around Databricks, often you have to pick a different tool to ingest the data, a different tool to transform the data, something else to secure the data.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the Data + AI Summit:

(* Disclosure: TheCUBE is a paid media partner for the Data + AI Summit. Neither Databricks Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
