Record Source and the Business Vault

Watch the Video

In our ongoing Data Vault Friday series, our CEO Michael Olschimke addresses a question raised by a member of our audience.

“We load a Same-As-Link (SAL) from multiple sources to deduplicate records across these source systems. How do you define the record source? The data is not clearly coming from only one source.”

In this insightful video, Michael delves into the intricacies of handling Same-As-Links (SAL) when loading data from multiple sources for the purpose of deduplication. The specific challenge raised about defining the record source when data is sourced from multiple systems is a common issue in many projects.

Michael shares best practices, providing clarity on the intended use of the record source and offering guidance on how to avoid common pitfalls associated with its misuse. The discussion sheds light on the importance of correctly defining the record source to ensure accurate data lineage and enhance the effectiveness of deduplication efforts.
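As a rough illustration of the idea (a sketch only; the entity layout and the rule name BV.DEDUP_CUSTOMER_V1 are hypothetical), one convention often discussed in the Data Vault community is that a Same-As-Link computed by a deduplication rule records the name of that rule as its record source, since no single operational system delivered the relationship:

```python
import hashlib

def hash_key(*business_keys: str) -> str:
    # MD5 stands in for whatever hash function the project standardizes on.
    return hashlib.md5("||".join(business_keys).encode("utf-8")).hexdigest()

hk_crm  = hash_key("C-1001")  # the customer as keyed by a CRM system
hk_shop = hash_key("W-77")    # the same person as keyed by a web shop

# The SAL row is produced by a business rule, not delivered by either system,
# so the record source names the rule that generated it.
same_as_link = {
    "hk_same_as": hash_key("C-1001", "W-77"),
    "hk_customer_master": hk_crm,
    "hk_customer_duplicate": hk_shop,
    "record_source": "BV.DEDUP_CUSTOMER_V1",  # hypothetical rule name
}
```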

For those navigating the complexities of deduplication in a multi-source environment, this video offers valuable insights and practical recommendations.

Meet the Speaker

Michael Olschimke

Michael has more than 15 years of experience in Information Technology. During the last eight years he has specialized in Business Intelligence topics such as OLAP, Dimensional Modelling, and Data Mining. Challenge him with your questions!

Data Quality in the Enterprise Data Warehouse (EDW) in Data Vault

Watch the Video

In our continuous Data Vault Friday series, our CEO Michael Olschimke engages with a set of pertinent questions posed by our audience.

“How to deal with dirty data managed in BV? What are the best practices for correct data management? How can business rules, versions, and fixes be managed on correction properly?”

In this enlightening video, Michael addresses the challenges associated with handling dirty data within the Business Vault (BV) and explores best practices for effective data management. He dives into the complexities of managing business rules, versions, and corrections, offering insights into the proper approaches for ensuring data accuracy and consistency.

The video emphasizes a strict Extract, Load, Transform (ELT) approach, advocating for the application of data cleansing rules after loading the Raw Data Vault. Michael explains the rationale behind this methodology and highlights the advantages of maintaining a robust data flow.
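To make that ordering concrete, here is a minimal Python sketch of the idea, with illustrative names throughout: the Raw Data Vault row keeps the value exactly as delivered, and a cleansing rule produces the corrected value in a downstream structure:

```python
# The Raw Data Vault row keeps the value exactly as delivered by the source.
raw_sat_row = {
    "hk_customer": "a1",
    "record_source": "CRM",
    "phone": " +49 511 / 879 89 341 ",  # dirty, but stored unmodified
}

def cleanse_phone(raw: str) -> str:
    """Example cleansing rule: keep only digits and a leading plus sign."""
    return "".join(ch for ch in raw if ch.isdigit() or ch == "+")

# The corrected value lives in a downstream (Business Vault) structure, so
# the rule can be versioned, fixed, and re-run without reloading the source.
computed_sat_row = {
    "hk_customer": raw_sat_row["hk_customer"],
    "record_source": "BV.CLEANSE_PHONE_V1",  # hypothetical rule name
    "phone": cleanse_phone(raw_sat_row["phone"]),
}
```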

For teams grappling with data quality issues and seeking optimal data management strategies, this video provides valuable guidance and practical considerations.

Meet the Speaker

Michael Olschimke

Michael has more than 15 years of experience in Information Technology. During the last eight years he has specialized in Business Intelligence topics such as OLAP, Dimensional Modelling, and Data Mining. Challenge him with your questions!

Data Vault 2.0: Best of Breed from Data Warehousing and Data Lakes

There are two competing approaches to data analytics in the industry, and most professionals favor one or the other as their tool of choice: the data warehouse versus the data lake. This article sheds light on the differences between the two approaches and on how Data Vault 2.0 provides a best-of-breed solution that integrates the advantages of both into a unified concept.

About Data Warehousing

Data warehousing is the traditional enterprise solution for providing reliable information to decision makers at every level of the organization. Data warehouse solutions (and data lakes as well) are based on a data model, which is traditionally defined either bottom-up from the information requirements or top-down from an integrated enterprise information model.

In any case, the traditional data warehouse is based on a concept called “schema-on-write”, where the data model is established when the data is loaded into the data warehouse. This often leads to non-agile data processing, as the data model frequently requires modifications to cope with changes in the business.

About Data Lakes

Data lakes, on the other hand, are based on the “schema-on-read” concept. Instead of modelling the enterprise or fitting the incoming dataset into a target information model, the data is first and foremost stored in the data lake as delivered, without any modelling applied.

While traditional data warehousing often leads to overmodelling and non-agile data analytics, the data lake approach often leads to the direct opposite: unmanaged data and inconsistent information results.
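A small, self-contained Python sketch (using SQLite and JSON purely for illustration) contrasts the two concepts: the first table enforces its model at load time, while the second stores the payload as delivered and only imposes a structure when it is read:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")

# Schema-on-write (warehouse style): the model is enforced when the data is
# loaded; anything that does not fit the typed target table must be adapted.
con.execute("CREATE TABLE customer (customer_no TEXT, name TEXT)")
con.execute("INSERT INTO customer VALUES (?, ?)", ("C-1001", "Jane Doe"))

# Schema-on-read (lake style): the payload is stored exactly as delivered,
# and a structure is only imposed when somebody reads it.
con.execute("CREATE TABLE landing (payload TEXT)")
raw = {"customer_no": "C-1001", "name": "Jane Doe", "extra_field": 42}
con.execute("INSERT INTO landing VALUES (?)", (json.dumps(raw),))

for (payload,) in con.execute("SELECT payload FROM landing"):
    record = json.loads(payload)  # the schema is applied here, at read time
    print(record["customer_no"], record["name"])
```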

The Best of Breed

The two approaches sit at opposite extremes of the data analytics space and have been used throughout the years with mixed results. With the emergence of the Data Vault 2.0 concept, a third option is available to industry professionals for building data analytics platforms.

Data Vault 2.0 is a best of breed between traditional data warehousing and data lakes: for example, there is a data model to manage the data and business logic, as in traditional data warehousing, but it follows a schema-on-read approach, as in data lakes.

The Data Vault 2.0 architecture comprises multiple layers:

[Diagram: the layers of the Data Vault 2.0 architecture, with a data lake used for staging]

The first layer is the staging area: it is used to extract the data from the source systems. The next layer is the Raw Data Vault. This layer is still functionally oriented, like the staging layer, but it integrates and versions the data. To achieve this, the incoming source data model is broken down into smaller components: business keys (stored in hubs), relationships between business keys (stored in links), and descriptive data (captured by satellites).
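As a minimal sketch of this decomposition (table and column names are illustrative, and MD5 stands in for whatever hash function a project uses), a single source record can be split as follows:

```python
import hashlib
from datetime import datetime, timezone

def hash_key(*business_keys: str) -> str:
    # Hash the (concatenated) business key, a common Data Vault 2.0 practice.
    return hashlib.md5("||".join(business_keys).encode("utf-8")).hexdigest()

# An incoming record from a hypothetical CRM source system.
source_record = {"customer_no": "C-1001", "store_no": "S-42", "name": "Jane Doe"}
load_dts = datetime.now(timezone.utc)

# Business keys are stored in hubs.
hub_customer = {"hk_customer": hash_key(source_record["customer_no"]),
                "customer_no": source_record["customer_no"],
                "load_dts": load_dts, "record_source": "CRM"}

# Relationships between business keys are stored in links.
link_customer_store = {
    "hk_customer_store": hash_key(source_record["customer_no"],
                                  source_record["store_no"]),
    "hk_customer": hub_customer["hk_customer"],
    "hk_store": hash_key(source_record["store_no"]),
    "load_dts": load_dts, "record_source": "CRM"}

# Descriptive attributes are captured by a satellite on the hub.
sat_customer = {"hk_customer": hub_customer["hk_customer"],
                "load_dts": load_dts, "record_source": "CRM",
                "name": source_record["name"]}
```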

The Business Vault is the next layer, but it is only sparsely modelled: a Business Vault entity is put in place only where business logic is required to deliver useful information. The Business Vault bridges the gap between the target information model (found in the next layer) and the actual raw data. Often, the raw data doesn’t meet the expectations of the business regarding data quality, completeness, or content and thus needs to be adjusted. Business logic is used to fill this gap.

The final layer is the information mart layer where the information model is produced to deliver the final information in the desired format, e.g., a dimensional star schema. This model is used by the business user either directly in ad-hoc queries or using business intelligence tools such as dashboarding or reporting software.

The first layers, up to and including the Raw Data Vault, are still functionally oriented because the model is still derived either directly from the source system (as in the staging area) or by breaking down the incoming data model into smaller, normalized components (as in the Raw Data Vault). The target schema is only applied in the last layer, the information mart layer. This is where the desired information model is applied. Because the information mart is often virtualized using SQL views, the target schema is actually applied at query time: queries against the view layer are merged with the SQL statements inside the views and run against the materialized tables in the Raw Data Vault, the actual data. Therefore, the schema-on-read concept is used in Data Vault 2.0.
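The following Python sketch (SQLite, with illustrative table names) demonstrates this principle: the "dimension" is nothing but a view, and the target schema is applied when the view is queried against the materialized Raw Data Vault tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hub_customer (hk_customer TEXT, customer_no TEXT);
CREATE TABLE sat_customer (hk_customer TEXT, name TEXT);
INSERT INTO hub_customer VALUES ('a1', 'C-1001');
INSERT INTO sat_customer VALUES ('a1', 'Jane Doe');

-- The dimension is only a view: no data is copied into the mart.
CREATE VIEW dim_customer AS
SELECT h.customer_no, s.name
FROM hub_customer h
JOIN sat_customer s ON s.hk_customer = h.hk_customer;
""")

# The target schema is applied now, at query time, by merging this query
# with the view definition and running it against the materialized tables.
print(con.execute("SELECT * FROM dim_customer").fetchall())
```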

Data Vault 2.0 also preserves agility: the concept has demonstrated in many projects that it is easy to extend over time when the source system structures change, the business rules change, or the information models need to be adjusted. In addition, it is easy to add new data sources, additional business logic, and additional information artifacts to the data warehouse.

On top of that, the Data Vault 2.0 model is typically integrated with a data lake: the above diagram shows the use of a data lake for staging purposes, which is the recommended “hybrid architecture” for new projects at Scalefree. But the data lake can also be used to capture semi-structured or unstructured data for the enterprise data warehouse or to deliver unstructured information marts.

With all that in mind, the Data Vault 2.0 concept has established itself as a best-of-breed approach between the traditional data warehouse and the data lake. Organizations of all sizes use it to build data analytics platforms that deliver useful information to their decision makers.

- by Michael Olschimke (Scalefree)

Managed Self-service Monitoring in Data Vault

Watch the Video

In our ongoing Data Vault Friday series, our CEO Michael Olschimke engages with a pertinent question posed by our audience.

“Why is monitoring important in Managed Self-Service BI?”

In this insightful video, Michael delves into one of his favorite topics – managed self-service BI. He expounds on the significance and role of monitoring within the context of managed self-service BI scenarios. Drawing from his expertise, Michael provides a comprehensive exploration of the uses, values, and critical importance of monitoring tools in ensuring the effectiveness and efficiency of managed self-service BI implementations.

For those keen on optimizing their self-service BI initiatives and understanding the practical applications of monitoring, this video offers valuable insights and considerations.

Meet the Speaker

Michael Olschimke

Michael has more than 15 years of experience in Information Technology. During the last eight years he has specialized in Business Intelligence topics such as OLAP, Dimensional Modelling, and Data Mining. Challenge him with your questions!

Ghost Records in Data Vault

Watch the Video

In our ongoing Data Vault Friday series, our CEO Michael Olschimke addresses an interesting question raised by a member of our audience.

“We wonder if a ghost record in a satellite should have NULL values in the descriptive fields or not. What is the advantage of non-NULL values?”

In this insightful video, Michael explores the nuances of handling ghost records in satellites, specifically focusing on whether descriptive fields should contain NULL values or default values. He provides a comprehensive analysis of the pros and cons associated with each solution, shedding light on the advantages and considerations that come with choosing either non-NULL values or default values for descriptive data in ghost records.
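For illustration, a ghost record with non-NULL defaults might look like the following sketch (the sentinel values shown are a common project convention, not a fixed standard):

```python
from datetime import datetime, timezone

# A ghost record for a customer satellite: one artificial row per satellite
# so that outer joins from facts or PIT tables always find a match.
GHOST_RECORD = {
    "hk_customer": "0" * 32,  # zero hash key reserved for the ghost record
    "load_dts": datetime(1900, 1, 1, tzinfo=timezone.utc),
    "record_source": "SYSTEM",
    "name": "unknown",        # non-NULL default instead of NULL
    "country": "??",          # non-NULL default instead of NULL
}

# One advantage of non-NULL defaults: SQL predicates and GROUP BY behave
# consistently, because NULL = NULL evaluates to NULL (unknown), never TRUE.
```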

For those grappling with the challenges of managing ghost records within the Data Vault framework, this video offers valuable insights and considerations to inform decision-making.

Meet the Speaker

Michael Olschimke

Michael has more than 15 years of experience in Information Technology. During the last eight years he has specialized in Business Intelligence topics such as OLAP, Dimensional Modelling, and Data Mining. Challenge him with your questions!

LINK Record Source in Data Vault

Watch the Video

In our continuous Data Vault Friday series, our CEO Michael Olschimke takes on a thoughtful question posed by a member of our audience.

“LINK table has only one ‘Record Source’ element to capture the source system value. In case the LINK generates the key relationship between two hubs which are sourced from different systems (I have not encountered any such but I can foresee this situation), What would be the RECORD SOURCE Value?”

In this enlightening video, Michael delves into the fundamental modeling principles of link entities within the Data Vault framework. Specifically, he addresses the scenario where a LINK table establishes key relationships between two hubs originating from distinct systems. Michael provides insights into determining the appropriate “Record Source” value in such situations, offering guidance on maintaining clarity and accuracy in data lineage.
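One common reading, shown in the illustrative sketch below (all names are invented for the example), is that the link's record source names the delivery in which the relationship itself was observed, regardless of which systems master the individual business keys:

```python
import hashlib

def hash_key(*business_keys: str) -> str:
    # MD5 stands in for whatever hash function the project standardizes on.
    return hashlib.md5("||".join(business_keys).encode("utf-8")).hexdigest()

# The customer key is mastered in a CRM and the contract key in a billing
# system, but the *pairing* of the two arrives in the billing feed, so that
# feed becomes the link's record source.
link_customer_contract = {
    "hk_customer_contract": hash_key("C-1001", "K-9"),
    "hk_customer": hash_key("C-1001"),
    "hk_contract": hash_key("K-9"),
    "record_source": "BILLING.CONTRACTS",  # delivery containing the relationship
}
```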

For those navigating the nuances of link table modeling and considering potential cross-system relationships, this video provides valuable insights and best practices.

Meet the Speaker

Michael Olschimke

Michael has more than 15 years of experience in Information Technology. During the last eight years he has specialized in Business Intelligence topics such as OLAP, Dimensional Modelling, and Data Mining. Challenge him with your questions!

How to Successfully Manage Data Vault 2.0 Using dbt

Watch the Webinar

More and more companies are taking the big step towards a modern data architecture. As they evaluate the components, they inevitably stumble upon dbt and Data Vault. Among their many features, automation is one that they have in common. So why not combine these two worlds?

Hear firsthand from dbt Labs and Scalefree how to successfully manage Data Vault 2.0 using dbt!

Highlights

– Learn how dbt will move your data platform to the next level and how the dbt Cloud product will simplify your code development and project management.

– Is dbt x Data Vault the right fit for your company? We will talk about the considerations you should make when starting your project.

Watch Webinar Recording

Webinar Agenda

1. dbt Snapshots
2. dbt Incremental Models
3. Incremental Satellites in Data Vault
4. Conclusion

Meet the Speaker

Marvin Geerken

Marvin has been working as a BI consultant for Scalefree since completing his Master’s degree in BI & Analytics at the beginning of 2021. He is an expert in the design and implementation of BI solutions, with a focus on the Data Vault 2.0 methodology. In his client projects, he has also been focusing heavily on the application of dbt and Snowflake.

Implementing GDPR in Data Warehousing

Implementing GDPR

In the realm of data warehousing, whether it be Data Vault 2.0 or traditional approaches like Kimball and Inmon, data is stored and processed across multiple layers. The intricacies of privacy, particularly the application of security measures and the concept of the “right to be forgotten,” permeate every layer housing personal data.

For the privacy implementation, the primary objective is the removal of Personally Identifiable Information (PII) from each layer. This meticulous process aims to extract the PII while leaving non-PII data intact. In the ideal scenario, the stored consumer data is reduced by exactly the removed PII and nothing more.
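As a simplified illustration of the idea (the table names are invented, and a real implementation spans every layer and table containing PII), splitting PII into a dedicated satellite makes this removal straightforward:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sat_customer_pii    (hk_customer TEXT, name TEXT, email TEXT);
CREATE TABLE sat_customer_nonpii (hk_customer TEXT, segment TEXT);
INSERT INTO sat_customer_pii    VALUES ('a1', 'Jane Doe', 'jane@example.com');
INSERT INTO sat_customer_nonpii VALUES ('a1', 'premium');
""")

def forget(subject_hk: str) -> None:
    """Remove the PII rows of one data subject; non-PII rows stay intact."""
    # In a real warehouse this runs against every layer and table storing PII,
    # so non-PII facts remain usable for aggregates after the deletion.
    con.execute("DELETE FROM sat_customer_pii WHERE hk_customer = ?",
                (subject_hk,))
    con.commit()

forget("a1")
```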

The General Data Protection Regulation (GDPR) casts a significant influence on data warehouse projects, introducing stringent requirements for data processing and storage. This impact spans across security considerations, determining who has access to what data, and privacy mandates, addressing the right to be forgotten.
