When the fridge takes control: A legal perspective on Industry 4.0 by Professor Thomas Klindt
Computer scientists are working on new algorithms, business economists have dollar signs in their eyes – the digital transformation is creating a spirit of optimism among pioneers across a wide range of industries. But what do the lawyers say – those who are supposed to set up the legal framework within which people and companies operate? Does Law 3.0 – the economic law of the 20th century – still fit Industry 4.0? I spoke with Professor Dr. Thomas Klindt, a member of the legal working group of the platform Industry 4.0.
The saying that "data is the oil of the 21st century" is attributed to Malte Spitz. One may question the aptness of this analogy – but it is undisputed that data has a high value in the digital economy. To what extent does this fit into the German or European legal framework?
Yes, data is indeed the essential asset in the new data-driven economy. From a technical or economic viewpoint, data can be classified in many ways. Legally, there are exactly two categories of data: personal data, subject to data privacy laws, and “other” data, which could be termed machine-related data.
Personal data may not be processed, recorded, or even deleted without the consent of the person concerned. In Germany, that has been the law since 1977, and business models simply have to take that into account. It would be totally prohibited, for example, to employ recognition and tracking technologies to create productivity profiles of employees without their awareness – no matter how high the market demand for such a system might be.
It’s different with machine-related data. There is no legal protection for this type of data at all in Germany – anything goes. But is the data therefore worthless? Far from it! For example, consider the data generated by opening a roller door in a factory building. By itself, this data is irrelevant, but if the roller door manufacturer can collect that data across the entire installed base, then that’s not just big data but actual smart data – because the data is meaningful. Through evaluations, the manufacturer may notice, for example, that the doors are operated much less often than expected – meaning that the drives are completely oversized; by dimensioning the product properly, perhaps 30% of the cost can be saved.
This data can thus have a tangible value contributing to the business success of a company. The situation becomes problematic when this data is the only asset of a business case: if the asset disappears – so does the business case. For instance, if an employee takes this data and starts their own company, there is no law preventing that. Machine-related data enjoys no protection whatsoever.
At Siemens, it’s contractually regulated that data of different customers in the cloud is always kept strictly separate and remains the property of the respective customers – subject to the highest possible security standards.
Precisely because the legislation does not provide any protection, you must secure this protection yourself, especially if the data is mission-critical to the success of the business. There are the two ways you described: you must provide technical protection, such as encryption, access control, firewalls, and backups; and you must protect yourself contractually, for example through data pool contracts.
In my opinion, the industry is not yet paying sufficient attention to the concept of non-personal data and its utter vulnerability!
Data is one thing; to use it, though, I also need something we call “digital connectivity”. The discussions about the emerging “Internet of Things”, which is also becoming increasingly important for industry, are shaped primarily by technology, sometimes by business economics, but almost never by the law. Is this a problem?
Yes, absolutely! Connectivity carries risks that we lawyers discuss because they are genuinely pressing and important subjects. A distinction is made between active and passive risks. Active risks arise from the fact that connectivity does not happen in a legal vacuum. If this is not taken into account, there is a risk that a product innovation will run counter to legal regulations and thus be unlawful. A famous example is the WLAN-enabled doll “Cayla”, which the German Federal Network Agency classified as a disguised listening device – possession of which is thus a criminal offense. A minor modification to the product, making the internet connection obvious, could have saved the business.
The passive risks of connectivity – keyword: cyber security – also concern us immensely, especially in industry. Imagine you are a manufacturer of industrial hot-water boilers that are WLAN-enabled and can be controlled via an app. What if a hacker, from the outside, succeeds in causing a boiler to explode, resulting in property damage and injuries? Sure, if we catch the hacker, the law will mete out punishment. The problem is, we don’t catch them all that often.
Then the debate arises as to why the product could be hacked in the first place. Is it a product liability claim against the manufacturer that the product could be hacked at all? Cyber security thus becomes cyber resilience, an intrinsic property of the product. This seemingly small legal question, though, decides over billions of euros in potential revenues or costs. Because if I were legally responsible for cyber resilience, I would have to provide a patch free of charge whenever I discover a new vulnerability in my already delivered products. The patch would be the mandatory “digital recall.”
Personally, I think that is completely wrong! Look, if I were to set fire to your jacket, no one would consider holding your tailor accountable – you’re going to grab Mr. Klindt, end of story. Cyber resilience is one of the last outstanding issues in European product liability law. And this debate must not be left up to lawyers to decide.
Let's go one step further. The data is now being processed and used to make predictions, decisions…
…which, for me, represents the outstanding quality of digitalization! Automation has been going on for many decades, and just because robots now mow lawns and vacuum floors, they are not an expression of digitalization. Automation concerns action; digitalization concerns decision-making – for me as a lawyer, a completely different quality.
How does this digital decision-making impact business transactions? For example, what does a lawyer say about a refrigerator or a Kanban system that autonomously reorders goods?
Lawyers speak of machine-to-machine contracts here, which, in addition to the aforementioned reordering, also play an important role in predictive maintenance concepts. But let’s stick to the refrigerator, which has sensors, RFID, and cameras, and therefore always knows which products have expired or are running out. The refrigerator then automatically orders fresh butter, sausage, and vegetables from an online retailer and has them delivered to your home. Viewed in slow motion, the following happens: the fridge says, “I need 2 liters of milk”; the retailer’s chatbot says, “OK, you’ll get 2 liters of milk”; the milk is delivered and then charged via a payment service. Everyone is happy.
For us lawyers, this is a process that we can’t explain. That’s why we examine the facts in every last detail so as to understand what’s actually happening. Through this magnifying glass, the given situation plays out as follows: the refrigerator now orders, for whatever reason, 20,000 liters of milk, and the chatbot is perfectly fine with that. It organizes 20,000 liters of milk and again charges everything via the payment service. You now have 20,000 liters of milk in your kitchen and ask your lawyer how to get out of that mess.
The problem is that for centuries, we have had a constant when it comes to law: people enter into contracts. That is the law. A contract consists of two corresponding declarations of intent made by different persons. Of course, these persons can also be a limited liability company or a corporation. But nowhere is there anything about refrigerators, robots, or Kanban systems – or about machine-generated declarations of intent, because we currently assume that a refrigerator has no will of its own and therefore cannot declare intent, no matter which data records are pushed back and forth.
This can have strange outcomes: For example, in high-frequency trading on the Frankfurt Stock Exchange, all transactions are printed out at the end of the day and signed manually – to at least maintain the fiction that the will of a person is behind it all. The whole thing has nothing to do with stable legal certainty.
But contracts are not the only consequence of digital decision-making – a misbehaving machine could have very serious impacts on property or even life and limb. How can we hold algorithms accountable?
We lawyers call this “code is law”. Take a traffic accident, for example: an obstacle appears suddenly and unexpectedly in front of my car. As a human driver, I might overcorrect the steering to avoid it and race into a group of people. That may be battery or even homicide, but as a driver I probably did not act culpably – it’s certainly a tragedy, but probably no wrongdoing. Now take the same situation in an autonomously driving car. Nobody is overcorrecting the steering; the decision is made by an algorithm programmed years earlier. But was the programmer right to write that crucial line of code the way they did? May a person program, years before an accident, who gets killed? This topic, too, transfers well to industry – for example, when an algorithm monitors the safety of a potentially explosive facility.
This is how terms such as machine ethics or “code is law” come into being. Whatever is in the code is already the law. In the discussion, it is said again and again that either ethics must enter the programs, or the programmers must consider ethical issues. Do we need a TÜV or some other authority to inspect algorithms? Many experts say that would not be feasible and would require an enormous agency. But my simple counterquestion is: do we want to allow all algorithms to run unchecked?
What options do I have as an entrepreneur to obtain sufficient legal certainty for an innovative business model or innovative digital products?
Get a good lawyer! (laughs)
As an industrial lawyer with Noerr LLP, Prof. Thomas Klindt has extensive experience in dealing with product liability crisis scenarios and has acted in many cross-border B2B and B2C product recalls, including mandatory notifications to market surveillance authorities. He lectures in European Product and Technology Law at the University of Bayreuth.