
The chief objective of ‘leanness’ in manufacturing is to eliminate waste, i.e. anything that does not create added value. One of the chief challenges for lean production is to identify these wasted resources, and here Big Data holds the key. The “Lean Digital Factory” program will connect more than 30 plants to one Manufacturing Data Platform via MindSphere and, at the same time, implement a purpose-built Industrial Edge layer. This data architecture also enables a rich portfolio of applications such as operational optimization, business analytics, and machine learning.

Since October 2017, the “Lean Digital Factory (LDF)” program, run by a group of experts from different business units and technology areas, has been defining a holistic digital transformation roadmap for all factories of our operating company “Digital Industries (DI)”.

Value creation is the number one priority during implementation of the LDF program. This entails getting the maximum out of the organization’s data landscape. Traditional IT architecture reaches its limits when it comes to scalability and usability, so new architectural patterns are needed to harness the full power of data. These are provided by Industrial Edge in conjunction with the MindSphere Data Lake solution.

Manufacturing data must be supplied to current and future applications with little or no effort. This can only be achieved if data interfaces are reduced or even avoided and all data and interfaces are available for reuse, which implies standardization in certain aspects. In other words, to fully capture the value of big data in manufacturing, the DI plants need a flexible data architecture that enables different users, internal as well as external (e.g. suppliers), to extract maximum value from the whole data ecosystem.

At the same time, real-time performance at low data-traffic cost is required to bring value-adding use cases down to the shop floor. This is where the Industrial Edge layer comes into the picture: it processes data close to the sensors and the data source (figure 1).

Figure 1: The edge layer between control and cloud level (manufacturing data platform)
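The edge pattern described above, processing close to the source and forwarding only what the cloud needs, can be sketched in a few lines. This is a minimal illustration, not MindSphere code; the function names, window size, and payload fields are all assumptions.

```python
# Illustrative sketch: an edge node aggregates high-frequency raw sensor
# readings locally and uploads only compact summaries to the cloud platform.
# All names and parameters here are illustrative assumptions.

def window_average(readings, window=10):
    """Average fixed-size windows of raw readings (edge pre-processing)."""
    return [sum(readings[i:i + window]) / window
            for i in range(0, len(readings), window)]

def to_cloud_payload(sensor_id, readings):
    """Reduce raw samples to a small payload, cutting data-traffic cost."""
    return {
        "sensor": sensor_id,
        "samples": len(readings),
        "windows": window_average(readings),
    }

raw = [20.0, 20.2, 19.8, 20.1, 20.0, 19.9, 20.3, 20.1, 20.0, 19.6,
       25.0, 25.2, 24.8, 25.1, 25.0, 24.9, 25.3, 25.1, 25.0, 24.6]
payload = to_cloud_payload("line1/temp", raw)
```

Twenty raw samples shrink to two window averages before leaving the plant, which is the traffic-reduction idea in miniature.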

Together, Industrial Edge and the data lake concept enable faster and more powerful solution development than any other data storage and utilization concept, regardless of whether you come from the shop floor with a clear operational problem, from a central department creating business reports, from a data scientist’s point of view looking for new patterns in analytic sandboxes, or from a data analyst’s view introducing new business models such as data-as-a-service or even analytics-as-a-service.

Let’s have a look at the conceptual construction of the LDF Manufacturing Data Platform by MindSphere (MDP) and in this context also the Industrial Edge. To start with, here are some general statements:

  • The MDP will be a colossal storage area for all manufacturing data and will therefore be tremendously powerful for all user levels
  • The MDP is a centralized, indexed aggregation of distributed, organized datasets
  • Big data will be stored in the MDP independently of its later use, i.e. as raw data
  • In combination with Industrial Edge, the MDP is the prerequisite for effective and scalable cloud computing and machine learning
  • The Industrial Edge serves multiple purposes in this architecture, such as data ingestion, pre-processing, acting as a security gate, and real-time decisions
  • The functionalities of the ecosystem are highly integrated, yet module- and service-based, e.g. importing/exporting, persisting, and analytics (to avoid redundancies, data can be referenced instead of persisted)
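A minimal sketch of the “store raw, index centrally” idea from the list above: data is persisted unchanged, while a lightweight catalog records where each dataset lives so it can be found later. The dictionaries, key format, and function names are hypothetical stand-ins for object storage and a metadata catalog.

```python
# Illustrative sketch: raw data is persisted as-is (schema-on-read),
# and a central catalog indexes the distributed datasets.
# The in-memory dicts stand in for object storage and a metadata service.

catalog = {}  # central index over distributed raw datasets
store = {}    # stand-in for raw object storage

def ingest(plant, source, raw_bytes):
    """Persist raw data unchanged and register it in the catalog."""
    key = f"{plant}/{source}/{len(store)}"
    store[key] = raw_bytes                        # stored raw, untouched
    catalog.setdefault((plant, source), []).append(key)
    return key

def find(plant, source):
    """Locate all raw datasets for a plant/source via the central index."""
    return catalog.get((plant, source), [])

ingest("plant01", "press01", b'{"temp": 71.3}')
ingest("plant01", "press01", b'{"temp": 72.0}')
```

Because nothing is transformed at ingestion time, any future application can interpret the raw bytes however it needs.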

Classic Data Warehouse Architecture

To see where the manufacturing industry is coming from and grasp the novelty of the new architectural approach to managing data, let me say a few words about the classic data warehouse architecture.

The classic “Data Warehouse” architecture pattern generally follows the philosophy of understand, transform, load, and analyze: data is extracted, transformed, and loaded (ETL) from the source systems into the data storage.

While this is taking place, some data cleansing and structure creation is performed. The data models are predefined, which enables department-specific reports thanks to the dimensions of the Online Analytical Processing (OLAP) cube. The OLAP cube allows you to slice, dice, pivot, drill down, up, and through, and supports self-service business intelligence (BI).
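The predefined-model idea can be sketched as follows: a fixed target schema is decided up front, and incoming records are cleansed and projected onto it before loading. Anything outside the schema never reaches the warehouse. The schema and field names are illustrative assumptions.

```python
# Sketch of classic ETL: the warehouse schema is fixed in advance,
# so only pre-selected fields survive the load step.
# Schema and field names are illustrative.
SCHEMA = ("order_id", "plant", "quantity")   # decided before any analysis

def transform(record):
    """Cleanse and project a source record onto the fixed schema."""
    return {k: record[k] for k in SCHEMA if k in record}

source_rows = [
    {"order_id": 1, "plant": "A", "quantity": 10, "vibration": [0.1, 0.3]},
    {"order_id": 2, "plant": "B", "quantity": 5,  "vibration": [0.2]},
]
warehouse = [transform(r) for r in source_rows]
# The "vibration" signal is dropped at load time and is therefore
# unavailable for any analysis that nobody anticipated up front.
```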

This results in two major prerequisites, which are also the reason this pattern cannot scale to the possibilities required for big data analytics. The first prerequisite in this philosophy is that we need to understand the data first, with all its consequences: are there any anomalies, what is the source system, what is the cardinality, and so on. This is extensive and complex work.

The second, an implication of the first, is that compromises and choices must be made about which data is stored and which is discarded. This clearly limits future possibilities for chronological reviews or trend analytics, besides the fact that the effort of deciding which data to bring in and which to leave out is tremendous.
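The data lake avoids this compromise by applying structure only at read time (schema-on-read): raw records are stored verbatim, and a question nobody anticipated at ingestion time can still be answered later. A minimal sketch, with hypothetical record contents and function names:

```python
import json

# Schema-on-read sketch: raw records are stored verbatim; structure is
# applied only when a question is asked, so nothing is discarded up front.
# Record contents and names are illustrative.
lake = [
    json.dumps({"order_id": 1, "quantity": 10, "vibration": [0.1, 0.3]}),
    json.dumps({"order_id": 2, "quantity": 5,  "vibration": [0.2]}),
]

def query(lake, fields):
    """Interpret raw records at read time, projecting whatever is needed."""
    return [{f: r.get(f) for f in fields} for r in map(json.loads, lake)]

# A field that no predefined warehouse schema would have kept
# remains available for later analysis:
vibes = query(lake, ["order_id", "vibration"])
```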

Finally, more time is spent on data administration than on data analytics to uncover valuable patterns.

Find out how the Lean Digital Factory solves this issue in my next blog: The North Star of Lean Digital Factory and its Data.