
Scalefree Newsletter

Multi-Temporality in Data Vault 2.0 – Part 1

By Scalefree Newsletter No Comments

Part 1 – Theoretical understanding of multiple timelines in a data warehouse.

What is “Multi-Temporality” in a Data Warehouse?

Before we start talking about multi-temporality, let’s define the term bi-temporality first, since a common misconception is that Data Vault 2.0 is only bi-temporal (which is wrong):

“Bitemporal Modeling is a specific case of Temporal database information modeling technique designed to handle historical data along two different timelines. This makes it possible to rewind the information to “as it actually was” in combination with “as it was recorded” at some point in time.” (According to: https://en.wikipedia.org/wiki/Bitemporal_Modeling)

Bi-temporality addresses just two timelines, which are commonly referred to as “System Time” (the technical timeline) and “Valid Time” (the business timeline). Data Vault Satellites, Point-in-Time (PIT) tables and Bridge tables, however, are able to address multiple active timelines in the same record. Let’s categorise just some of them: Read More
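To make the two timelines concrete, here is a minimal sketch of a satellite carrying both a technical and a business timeline (table and column names are illustrative assumptions, not taken from the article):

    -- Minimal bi-temporal satellite sketch (illustrative names).
    -- load_date captures the system time ("as it was recorded"),
    -- valid_from/valid_to capture the valid time ("as it actually was").
    CREATE TABLE sat_customer_address (
        hk_customer    CHAR(32)     NOT NULL,  -- hash key of the parent hub
        load_date      TIMESTAMP    NOT NULL,  -- system time: when the DWH recorded the row
        valid_from     DATE         NOT NULL,  -- business time: when the address became effective
        valid_to       DATE,                   -- business time: end of effectivity, open-ended if NULL
        address        VARCHAR(200),
        record_source  VARCHAR(50)  NOT NULL,
        PRIMARY KEY (hk_customer, load_date)
    );

Further business timelines can be added as additional column pairs, which is how a satellite can carry more than two active timelines in the same record.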

Open Source Production Grade Data Integration – Part 2

By Scalefree Newsletter No Comments

Meltano in action

In our last overview, we talked about Meltano and its architecture. Now, we would like to illustrate how easily you can use Meltano to create a data integration pipeline.
Before we start, please ensure that you have already installed Meltano on your machine. If you haven’t yet, you can follow Meltano’s official installation guide.

First, we will initialize a Meltano project.
Initialize a new project in a directory of your choice by running “meltano init”, as shown below.
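As a minimal sketch (the project name is a placeholder):

    meltano init my-meltano-project    # creates the project directory and its configuration
    cd my-meltano-project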

Read More

Open Source Production Grade Data Integration – Part 1

By Scalefree Newsletter No Comments
In our previous blog post, we introduced an open source framework for ELT processes called Singer. This framework can be wrapped with another open source tool that adds more interesting features to Singer, including installation, environment setup, monitoring, scheduling and orchestration. At Scalefree, we moved all of our ELT pipelines into this framework on AWS and are pleased with the results.

 

Please note that there are a large number of platforms for managing data integration, but there is a lack of robust, easy-to-use, free open source solutions. The Meltano project aims to provide a solution to that situation. Meltano is a full-package data integration platform that challenges the most established players in the data space. It is built on top of the best open source tools for data integration and infuses them with DataOps best practices.

Meltano is the easiest way to build, run and orchestrate ELT pipelines made up of Singer taps, targets and dbt models. It is open source, self-hosted, version controlled and containerized.
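As a brief, hypothetical sketch of such a pipeline (the plugin names are examples, and exact commands may vary between Meltano versions):

    meltano add extractor tap-salesforce          # install a Singer tap
    meltano add loader target-postgres            # install a Singer target
    meltano elt tap-salesforce target-postgres    # run the ELT pipeline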

Meltano’s open source model lets you easily adapt it to your own needs and reduces cost. Read More

Running modern ETL-Processes with Framework-Based Tools – Part 2

By Scalefree Newsletter 2 Comments

In the last blog post, we introduced Singer, the open-source framework, as a powerful tool for ETL processes. This time, we’d like to discuss how you can implement the framework in your own projects.

How to start working with Singer

Starting a test run is rather simple. First, you need to create a Python environment; step-by-step instructions for this are available online.

As soon as you’ve done that, it’s time to create your first virtual environment inside Python.
Please note before beginning that it’s a best practice to create and use an individual virtual environment for every tap and target, as this avoids conflicts between the module requirements of the different modules.

The next step is to install the tap and target you’ve chosen into their corresponding virtual environments. This installation can be performed very easily using a pip install command. This example installs tap-salesforce to load data from your Salesforce account:
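A minimal sketch of these steps on a Unix-like shell (the environment name is a placeholder):

    python -m venv venv-tap-salesforce         # one virtual environment per tap/target
    source venv-tap-salesforce/bin/activate    # activate the environment
    pip install tap-salesforce                 # install the tap into it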
Read More

Running modern ETL-Processes with Framework-Based Tools – Part 1

By Scalefree Newsletter No Comments

ETL and ELT processes are a big part of every enterprise data warehouse.
In both abbreviations, the letters stand for the same words; only the order in which each step is performed changes.
To brush up on those processes: “E” stands for extraction, “T” for transformation and “L” for loading.

That said, rather than diving into the benefits of each, we would like to present a powerful open-source framework to execute these processes instead.

Why use a framework?

Rather than developing individual solutions per source system, using standardized frameworks provides a wide variety of benefits. The main one we have already mentioned: standardization.
Another benefit is that using the same concept for extracting data from different source systems makes your system more auditable and reliable.
And when taking the varied strengths of individual frameworks into consideration, other potential upsides become available as well. Read More

Implementing Data Vault 2.0 Zero Keys

By Scalefree Newsletter 4 Comments
In a previous blog post, we discussed how to implement ghost records within a Data Vault 2.0 solution. This time around, we’d like to talk about “the other” concept, namely zero keys, which are oftentimes used interchangeably with ghost records.

 

As discussed in the previous part of this series, a ghost record is a dummy record in satellite entities containing default values. Simply put, a zero key is the entry in each hub and link entity that serves as the counterpart to the satellite’s ghost record, carrying its hash key. In this manner, the term “zero key” is oftentimes used to describe the ghost record’s hash key, which might show up in other Data Vault entities such as Point-in-Time (PIT) tables or links. Accompanying the zero hash key is, similar to a ghost record, a default value for the business key, or, in the case of a composite business key, multiple default values, one for each of its components.
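As an illustrative sketch (the all-zero hash key and the default values are common conventions assumed here, not prescribed by the article), a zero key in a hub and its ghost-record counterpart in a satellite could look like this:

    -- Zero key in the hub: an all-zero hash key with a default business key.
    INSERT INTO hub_customer (hk_customer, customer_no, load_date, record_source)
    VALUES ('00000000000000000000000000000000', '(unknown)', '1900-01-01', 'SYSTEM');

    -- Its counterpart in the satellite: the ghost record with default values.
    INSERT INTO sat_customer (hk_customer, load_date, customer_name, record_source)
    VALUES ('00000000000000000000000000000000', '1900-01-01', '(unknown)', 'SYSTEM');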

Read More

Using Multi-Active Satellites the Correct Way (2/2)

By Scalefree Newsletter 2 Comments
In our first post about multi-active satellites, we briefly explained different implementations that can be used to solve multi-activity. Now, we’re going to go into more detail regarding the advantages and disadvantages of these approaches with delta checks turned on or off.

 

Short summary of Multi-Active Satellites

Multi-active satellites allow you to store multiple active records per business key in Data Vault 2.0. To illustrate the need for this solution, let’s look at the common occurrence of a source system that doesn’t provide the needed metadata, such as when working with XML files.
One solution is to create a multi-active satellite by adding a subsequence number per business key, which accounts for any instance in which no multi-active attribute is delivered by the source itself. With phone numbers, for example, such an attribute could be a tag marking a business, home or mobile number. Another possibility is to create an extra hub for the multi-active attribute. Though, since that attribute doesn’t represent a real business object, the first solution can be more effective.

Delta Check OFF

There are two ways to insert new records into a multi-active satellite – with delta checks turned on or off. With delta checks turned off, all records of a business key from your source delivery are inserted into the satellite.
The advantage is that loads are faster and all records of a parent hash key share a consistent load date timestamp, independent of the multi-active attribute.
Later on, this simplifies queries on the multi-active data (see figure 1 and the query sketch below). A critical drawback is that the amount of ingested data can grow strongly when full data loads are received.
In this case, you should partition your data by the load date timestamp.
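To illustrate the simplified access, here is a minimal query sketch (table and column names are illustrative): because every delivery is loaded in full and all records of a parent hash key share one load date, the latest load date per key returns the complete active set of records.

    SELECT s.*
    FROM sat_contact_phone s
    JOIN (
        SELECT hk_contact, MAX(load_date) AS max_load_date
        FROM sat_contact_phone
        GROUP BY hk_contact
    ) latest
      ON  s.hk_contact = latest.hk_contact
      AND s.load_date  = latest.max_load_date;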

Read More

Using Multi-Active Satellites the Correct Way (1/2)

By Scalefree Newsletter No Comments
With multi-active satellites, you’re able to store multiple active records for one business key. Depending on how the data arrives from your source, there are different ways to implement multi-activity in Data Vault 2.0. In this post, we’ll explain your options for modeling. 

 

What is a Multi-Active Satellite?

A multi-active satellite is similar to a standard satellite in its structure. As said before, it stores multiple active records per key at a point in time. The exact structure depends on the use case, though.
See the example Data Vault model in figure 1 and the structural sketch below.
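As a structural sketch of one common modeling option, a subsequence number per business key (all names are illustrative):

    -- Multi-active satellite sketch: the subsequence number extends the primary key
    -- so that several active records per parent hash key can coexist.
    CREATE TABLE sat_contact_phone (
        hk_contact     CHAR(32)     NOT NULL,  -- hash key of the parent hub
        load_date      TIMESTAMP    NOT NULL,
        sub_seq        INT          NOT NULL,  -- subsequence number per business key
        phone_type     VARCHAR(20),            -- e.g. business, home or mobile
        phone_number   VARCHAR(30),
        record_source  VARCHAR(50)  NOT NULL,
        PRIMARY KEY (hk_contact, load_date, sub_seq)
    );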

Read More

Effort estimation in Data Vault 2.0 projects

By Scalefree Newsletter No Comments

There are many options available when choosing a method to estimate the necessary effort within agile IT projects.
In Data Vault 2.0 projects, we recommend estimating the effort by applying a Function Point Analysis (FPA). In this article, you will learn why FPA is a good choice and why you should consider using this method in your own Data Vault 2.0 projects.

GOOD OLD PLANNING POKER

Probably the best known method for estimating work in agile projects is Planning Poker. Within the process, so-called story points, based upon a modified Fibonacci sequence (0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40 and 100), are used to estimate the effort of a given task.

To begin the process, the entire development team sits together, and each member simultaneously assigns the number of story points to each user story that they feel is appropriate. If the story points match, the final estimate is made. Alternatively, if a consensus cannot be reached, the effort is discussed until a decision is made. Read More

Implementing Data Vault 2.0 ghost records

By Scalefree Newsletter 4 Comments

During the development of Data Vault, from the first iteration to the latest Data Vault 2.0, we’ve mentioned the two terms “ghost records” and “zero keys” in our literature as well as in our Data Vault 2.0 Boot Camps. And since then, we’ve noticed these concepts oftentimes being used interchangeably.

In this blog entry, we’ll discuss the implementation of ghost records in Data Vault 2.0. Please note that this article is part one of a multi-part blog series clarifying ghost records vs. zero keys. Read More

About Non-Functional Requirements

By Scalefree Newsletter 2 Comments
In our trainings and consulting practice, we often pitch the idea of “focusing on the business value” to the audience. Business value in enterprise data warehousing is defined as “something of value to the business” (believe it or not, we think this needs to be said sometimes).

Typically, the reason the business sets up a budget for the enterprise data warehouse is that it wants reports or dashboards with actionable information.

On the other hand, the most important counter-argument is that the enterprise data warehouse is more than just reports and dashboards. There are actually a lot of more technical components (non-functional requirements) to be built, including but not limited to: Read More

Organization of Information Requirements

By Scalefree Newsletter 5 Comments

Just a Recommendation: How we Organize our Information Requirements

 

Information is required by business users throughout the industry. However, as part of our consulting engagements, we often encounter a lack of proper descriptions of what the business user actually needs.

So, we want to use this article to present the way we structure our information requirements internally at Scalefree as well as the way we do so for many of our customers.

What about User Stories?

We all know user stories from Scrum and many business intelligence projects.
Their structure typically looks like this:

As a <type of user>, I want <some goal> so that <some reason>.

The following example represents a typical user story we would receive in a project:

As a <marketing user>, I want <to have an overview report with the number of leads from a marketing channel> so that <I can adjust the marketing budget accordingly>.

Now, what should we do with this user story?
Many details are missing, and yes, we all know about product backlog refinement. The problem is that the user story alone is just not sufficient for business intelligence efforts, and some structure might be of help.

Information Requirements

Developers in enterprise data warehousing and business intelligence need much more detail than just the user story. On the other hand, the user story is a good starting point for the information requirement. So, it can be treated as a typical introduction. The overall structure looks like this:
Read More

About Information Marts in Data Vault 2.0

By Scalefree Newsletter No Comments
In the Data Vault 2.0 architecture, information marts are used to deliver information to the end-users.

Conceptually, an information mart follows the same definition as a data mart in legacy data warehousing: it is used to deliver useful information, not raw data. This is why the data mart has been renamed in Data Vault 2.0, to better reflect its use case.

But the definition of information marts has more facets. In the book “Building a Scalable Data Warehouse with Data Vault 2.0” we present three types of marts: Read More

Granularities of Business Vault Entities

By Scalefree Newsletter No Comments
The Business Vault is the layer in the Data Vault 2.0 architecture where business logic is implemented to transform, cleanse and modify the data.

The book “Building a Scalable Data Warehouse with Data Vault 2.0” by Scalefree’s founders Dan Linstedt and Michael Olschimke, as well as the Data Vault 2.0 Boot Camp, show how to implement such business logic using various Business Vault entities, such as computed satellites.

However, it is worth noting that this is only half the story. The book shows computed satellites (and other entities) with a load date in the primary key of the computed satellite. Such satellites are great for capturing the results of business logic applied to the incoming deltas. However, there are two different types of granularities for business logic in the Business Vault: Read More
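As a rough sketch of the delta-grain type described above (all names are illustrative assumptions), a computed satellite with the load date in its primary key captures the result of the business logic per incoming delta:

    -- Computed satellite sketch in the Business Vault: the load date in the
    -- primary key stores one business-rule result per incoming delta.
    CREATE TABLE csat_customer_revenue (
        hk_customer    CHAR(32)       NOT NULL,
        load_date      TIMESTAMP      NOT NULL,
        total_revenue  DECIMAL(18,2),           -- result of the applied business rule
        record_source  VARCHAR(50)    NOT NULL, -- e.g. the name of the business rule
        PRIMARY KEY (hk_customer, load_date)
    );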

Data Warehousing and why we need it

By Scalefree Newsletter No Comments

“A data warehouse is a subject-oriented, nonvolatile, integrated, time-variant collection of data to support management’s decisions.”

  • Inmon, W. H. (2005). Building the Data Warehouse. Indianapolis, Ind.: Wiley.
It provides the technical infrastructure needed to run Business Intelligence effectively. Its purpose is to integrate data from different data sources and to provide a historized database. Through a DWH, consistent and reliable reporting can be ensured. A standardized view of the data can prevent interpretation errors, improve data quality and lead to better decision-making. Furthermore, the historization of data offers additional analysis possibilities and leads to (complete) auditability. Read More

Handling Validation of Relationships in Data Vault 2.0

By Scalefree Newsletter 4 Comments
In Data Vault 2.0, we differentiate data by keys, relationships and description.
That said, an often underestimated point is the handling of relationships in Data Vault 2.0.
In the following, we explain what to consider and how to deal with it.

There are different ways to handle the validation of relationships from source systems, depending on how the data is delivered (full extract or CDC) and on the way a delete is delivered by the source system, such as a soft delete or hard delete.

First, let us explain the different kinds of deletes in source systems:

  1. Hard delete – A record is hard deleted in the source system and no longer appears in the system.
  2. Soft delete – The deleted record still exists in the source system’s database and is flagged as deleted.


Secondly, let’s explore how we find the data in the staging area:

  1. Full extract – This can be the current status of the source system or a delta/incremental extract.
  2. CDC (Change Data Capture) – Only new, updated or deleted records are delivered, loading the data in an incremental/delta way.

 

To keep the following explanation as simple as possible, we assume that we want to mark relationships as deleted as soon as we receive the delete information, even if there is no audit trail from the source system (data aging is another topic).
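As a simplified sketch (illustrative names, assuming a full extract and a status-tracking satellite on the link), relationships that no longer appear in the staging area can be marked as deleted like this:

    -- Mark link relationships as deleted when they are missing from the latest
    -- full extract (simplified: a production load would also check the
    -- currently active status to avoid duplicate delete records).
    INSERT INTO sat_link_customer_order_status (hk_customer_order, load_date, is_deleted)
    SELECT l.hk_customer_order, CURRENT_TIMESTAMP, TRUE
    FROM link_customer_order l
    WHERE NOT EXISTS (
        SELECT 1
        FROM stage_customer_order s
        WHERE s.hk_customer_order = l.hk_customer_order
    );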

Read More

Salesforce meets Data Vault

By Scalefree Newsletter No Comments

It’s a Match!

Data integration with Salesforce can be tricky and is in dire need of a business intelligence system to handle that complexity.

Data Vault is capable of decoupling all the necessary business-driven changes, extensions and customizations to the platform while maintaining the ability to become the cornerstone of an integrated architecture. The decoupling is a part of our Data Vault Boot Camp and is summarized in Figure 1.

Scalefree can provide knowledge and implementation assistance in both Data Vault and Salesforce, therefore making us the optimal partner for your Salesforce integration project.

Read More

Data Quality in the Data Vault Architecture

By Scalefree Newsletter No Comments

The Basis for solid decision-making

In making business decisions, whether daily or long-term, the quality of data is a critical facet to factor into the decision-making process.

Thus, immediate access to the data and certainty about its quality can enhance business performance immensely. But the sad truth is that we see bad data in operational systems due to human-caused errors, such as typos, ignored standards and duplicates, as well as a lack of input validators in operational systems, such as mandatory fields not being declared and references to other entities (primary-foreign-key constraints) not being defined.

Read More