Interoperability

Nov 6, 2024

Picture this: in one corner of your office, a team is chatting away on Slack about a project deadline. Across the room, another group is knee-deep in Jira tickets, tracking bugs and features. Down the hall, someone is organizing ideas in Notion. Each team is speaking its own language, using its own tools, and while everyone is working hard, it feels like we're all on different planets.

Now, picture trying to get all these tools to talk to each other, to share information seamlessly, and to actually understand what the others are saying.

I remember when we first started exploring this issue. It was clear that technology had advanced to the point where we could give "life" to almost anything with AI, from an inventory system to a jet engine. The real hurdle was getting these intelligent systems to work together smoothly. The potential was enormous: if we could solve interoperability, we could align all the digital twins inside a company, reduce friction, and reach our goals faster and more efficiently.

But here's the kicker: most of the tools, systems, and digital twins we use today are pretty much blind to each other. They might offer APIs or basic integrations, but they don't truly "understand" one another. It's like having people in a room who speak different languages; they might exchange a few words, but meaningful conversation is limited.

To understand the depth of this challenge, let's break down the seven levels of interoperability, each representing a step toward true synergy among systems:

  1. Level 0 – No Interoperability

    • Description: Systems operate in complete isolation.

    • Example: A standalone simulation tool that doesn't share data with any other system.

  2. Level 1 – Technical Interoperability

    • Description: Systems can exchange raw data, but without any structure or meaning.

    • Example: A sensor sends raw data to a control system, but the system doesn't know what to do with it.

  3. Level 2 – Syntactic Interoperability

    • Description: Systems share data in a common format, like JSON or XML.

    • Example: Two applications exchange data in XML, but they might interpret the data differently.

  4. Level 3 – Semantic Interoperability

    • Description: Systems understand the meaning of the data they exchange.

    • Example: Multiple healthcare systems sharing patient data where "blood pressure" means the same thing across all systems.

  5. Level 4 – Pragmatic Interoperability

    • Description: Systems understand the context and can act upon the data appropriately.

    • Example: A maintenance system that schedules repairs based on data about machine wear and operational context.

  6. Level 5 – Dynamic Interoperability

    • Description: Systems adapt in real-time to changes and can modify their operations accordingly.

    • Example: Smart home devices that adjust energy usage based on real-time electricity rates and personal habits.

  7. Level 6 – Conceptual Interoperability

    • Description: Systems fully understand each other's models, goals, and reasoning processes.

    • Example: An integrated city infrastructure where transportation, energy, and emergency services collaborate seamlessly during a crisis.

As you can see, climbing up these levels isn't just about better data exchange; it's about systems gaining deeper understanding and working together intelligently.
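
To make the first few levels concrete, here is a minimal Python sketch of a single reading moving from Level 1 to Level 3. The field names and the shared vocabulary below are invented for illustration:

```python
import json

# Level 1 - technical: raw bytes arrive, but the receiver knows nothing about them.
raw = b'{"bp_sys": 120, "bp_dia": 80, "unit": "mmHg"}'

# Level 2 - syntactic: both sides agree on JSON, so the payload parses cleanly...
payload = json.loads(raw)
# ...but "bp_sys" is still just an opaque string to the receiver.

# Level 3 - semantic: a shared vocabulary (hypothetical) maps each system's local
# field names to concepts everyone agrees on, so "bp_sys" and
# "systolicBloodPressure" are understood to mean the same thing.
SHARED_VOCABULARY = {
    "bp_sys": "blood_pressure.systolic",
    "bp_dia": "blood_pressure.diastolic",
    "systolicBloodPressure": "blood_pressure.systolic",
    "diastolicBloodPressure": "blood_pressure.diastolic",
}

def to_shared_terms(record: dict) -> dict:
    """Translate a system-specific record into the shared vocabulary."""
    return {
        SHARED_VOCABULARY[key]: value
        for key, value in record.items()
        if key in SHARED_VOCABULARY
    }

print(to_shared_terms(payload))
# {'blood_pressure.systolic': 120, 'blood_pressure.diastolic': 80}
```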

Let’s look inside any modern organization. Companies rely on dozens, if not hundreds, of digital tools, each created independently and with its own perspective on the world. Take project management, for instance. You might use Jira for tracking progress, Slack for communication, and Notion for organizing knowledge. These tools don’t share a common understanding of terms, processes, or objectives. What Slack means by a “channel” has little to do with what Jira means by a “project,” and each tool processes tasks and data in ways that serve its own functions. None of these tools truly understands what the others do. As a result, organizations face tremendous complexity just trying to get these systems to communicate effectively.

Today’s tools only achieve the most basic form of interoperability: APIs that allow them to exchange bits of data. This “Level 1” interoperability is the equivalent of giving two people different dictionaries but no shared language or context: they can exchange words, but without understanding what those words mean, which leads to blind spots in communication.
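
Here is a hypothetical sketch of what that Level 1 glue code tends to look like in practice. The payloads are loosely modeled on project tools but are not the real Slack, Jira, or Notion API shapes:

```python
# Hypothetical payloads, loosely modeled on what project tools return.
# These are NOT the real Slack/Jira/Notion API shapes.
slack_channel = {"id": "C042", "name": "apollo-launch", "topic": "Q4 release"}
jira_project = {"key": "APL", "name": "Apollo", "lead": "dana"}
notion_page = {"page_id": "7f3a", "title": "Apollo knowledge base"}

# Level 1 glue: the only way to relate these records is a hand-maintained
# lookup table. Nothing in the data itself says they describe the same work.
INITIATIVE_INDEX = {
    "apollo": {
        "slack_channel": slack_channel["id"],
        "jira_project": jira_project["key"],
        "notion_page": notion_page["page_id"],
    }
}

# Every new tool, rename, or reorganization means editing this table by hand,
# which is exactly the overhead higher levels of interoperability remove.
print(INITIATIVE_INDEX["apollo"])
```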

In industries like manufacturing, where digital twins are frequently used, the same problem persists. A digital twin of a jet engine might operate independently of the digital twin for inventory or supply chain management. This lack of connectivity creates exhausting overhead: transferring information across tools and processes is time-consuming and resource-intensive. At higher levels of interoperability, systems could autonomously interpret each other’s data and adapt accordingly, but current approaches lack the depth and intelligence to achieve this.

Past solutions have attempted to enforce standardized frameworks that developers must follow, but in reality, developers prioritize the freedom to innovate in ways that suit their projects. Forced frameworks stifle creativity and fail to gain traction, ultimately leading to fragmentation rather than unity. And despite the progress in interoperability tooling, each of today’s standards has limitations that restrict its scope:

  1. DTDL (Digital Twin Definition Language)

    • Purpose: Microsoft’s DTDL is a JSON-based schema used to define data structures for digital twins.

    • Limitations: Primarily operates within the Microsoft ecosystem, focusing on syntactic interoperability but lacking semantic depth. DTDL defines the “what” of data but doesn’t capture the contextual “why” that systems need to adapt and respond in real time.

  2. OPC UA (Open Platform Communications Unified Architecture)

    • Purpose: Enables standardized, secure data communication, especially in industrial settings.

    • Limitations: Designed for consistent data format exchange but lacks adaptability to varying contexts, meaning it supports technical interoperability but doesn’t handle pragmatic or dynamic interoperability needs.

  3. FMI (Functional Mockup Interface)

    • Purpose: Allows simulation models to interoperate within a co-simulation framework.

    • Limitations: Although FMI supports co-simulation, it doesn’t address real-time adaptability or conceptual interoperability, which limits its use in more dynamic, responsive digital twin environments.

  4. IoT Communication Protocols (e.g., MQTT, CoAP)

    • Purpose: Facilitate low-latency data exchange among IoT devices, often used in basic data communication.

    • Limitations: Optimized for raw data transfer, these protocols lack mechanisms for semantic or contextual interpretation, limiting them to Level 1 interoperability.
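
To make the gap concrete, here is a simplified, DTDL-flavored model written as a Python dict (illustrative only, not guaranteed to be valid DTDL), followed by the kind of context-dependent behavior such a schema has no way to express:

```python
# A simplified, DTDL-flavored description of a digital twin's data surface.
# It says WHAT the twin reports, but nothing about why it matters or how to react.
jet_engine_model = {
    "@id": "dtmi:example:JetEngine;1",   # illustrative identifier
    "@type": "Interface",
    "contents": [
        {"@type": "Telemetry", "name": "exhaustTemperature", "schema": "double"},
        {"@type": "Telemetry", "name": "vibrationLevel", "schema": "double"},
        {"@type": "Property", "name": "serialNumber", "schema": "string"},
    ],
}

# The part the schema cannot express: context and consequences.
# (Thresholds and actions here are invented for illustration.)
def on_telemetry(reading: dict) -> list[str]:
    actions = []
    if reading.get("exhaustTemperature", 0) > 900:    # why this value matters...
        actions.append("open maintenance ticket")     # ...and what to do about it
    if reading.get("vibrationLevel", 0) > 7.5:
        actions.append("notify supply chain to reserve spare bearings")
    return actions

print(on_telemetry({"exhaustTemperature": 950, "vibrationLevel": 3.0}))
# ['open maintenance ticket']
```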

Now, let's delve into some mistakes we've observed in how companies are trying to achieve interoperability, especially with digital twins—digital replicas of physical assets or processes.

  1. Forcing a One-Size-Fits-All Framework

    Many big players in the industry, like AWS IoT and Azure IoT, offer frameworks and languages that they expect companies to adopt wholesale. The idea is that everyone builds their digital twins using these predefined tools and languages. But here's the rub: companies are protective of their intellectual property (IP) and have unique ways of doing things. They don't want to shoehorn their processes into someone else's framework.

    Our Take: Instead of forcing companies to adapt to a new framework, why not create a system that can understand and model any tool or digital twin as it is? By translating and connecting their existing systems, companies can maintain their proprietary methods while still achieving interoperability.

  2. Limited Functionality of Languages Like DTDL

    The Digital Twin Definition Language (DTDL) focuses on defining data inputs and outputs—the "what" of data—but falls short on the "how." It doesn't allow for defining workflows or executing processes within the digital twin environment.

    Our Take: In practice, we need more than just data models; we need to define and execute workflows, process data dynamically, and perform actions across systems. That's why in MFlow, we've developed a language that handles both the nodes of knowledge and the workflows, allowing for a more comprehensive and actionable digital twin (see the sketch after this list).

  3. Neglecting the Power of AI and LLMs

    Many current systems were designed before the rise of large language models (LLMs) like GPT-4. They rely on manual entry to build ontologies, which is time-consuming and doesn't scale well with the complexity of modern data.

    Our Take: By integrating AI and LLMs, we can automate the creation of ontologies and models. LLMs can understand natural language descriptions and generate detailed, accurate models of systems and processes. This not only speeds up the development but also makes the system adaptable and future-proof.

  4. Lack of a User-Friendly Interface

    Let's face it: digital twins can be complex and intimidating. Expecting people to interact directly with them isn't realistic. Users want tools that solve their immediate problems without requiring them to understand the underlying complexities.

    Our Take: We've focused on creating an interface that feels familiar—a project manager's dashboard, for example. This way, users interact with a system that meets their needs directly, while the digital twin works behind the scenes to provide insights and coordinate actions. By embedding the digital twin into everyday tools, we increase adoption and make the benefits accessible to everyone.
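
As promised in point 2 above, here is a rough Python sketch of the idea of keeping knowledge and workflows in one definition. This is not MFlow's actual language or syntax; the Node and Workflow classes below are invented purely to illustrate why describing the "how" alongside the "what" matters:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    """A node of knowledge: a named piece of state the twin knows about."""
    name: str
    value: float | None = None

@dataclass
class Workflow:
    """A workflow: ordered steps that read and update nodes."""
    name: str
    steps: list[Callable[[dict[str, Node]], None]] = field(default_factory=list)

    def run(self, nodes: dict[str, Node]) -> None:
        for step in self.steps:
            step(nodes)

# Knowledge ("what"): the twin's nodes.
nodes = {
    "machine_wear": Node("machine_wear", value=0.82),
    "maintenance_due": Node("maintenance_due"),
}

# Behavior ("how"): a workflow that acts on that knowledge.
def assess_wear(ns: dict[str, Node]) -> None:
    ns["maintenance_due"].value = 1.0 if ns["machine_wear"].value > 0.8 else 0.0

def schedule_if_due(ns: dict[str, Node]) -> None:
    if ns["maintenance_due"].value:
        print("Scheduling maintenance window")

Workflow("preventive_maintenance", steps=[assess_wear, schedule_if_due]).run(nodes)
```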

Let’s get back to point one and the idea of translation: what if, instead of forcing tools to change, we created a system that could understand and model any tool in its own terms? Kind of like a universal translator for software. This is where the concept of ontologies came into play. An ontology is a structured way to organize and define the knowledge within a domain, outlining the key terms, relationships, and attributes relevant to that domain. By using ontologies, digital twins can understand and interpret data across systems in a consistent way. An ontology for digital twins serves as a “map” that captures and standardizes the language, concepts, and purposes of various systems, allowing them to communicate more effectively.

Ontologies provide the semantic foundation for interoperability by ensuring that digital twins don’t just exchange data—they exchange meaning. Through shared definitions and contexts, ontologies enable each system to interpret terms and concepts in a unified manner, allowing digital twins to adapt as organizations evolve or as new tools are introduced.

In a manufacturing organization, an ontology might define concepts like "Machine Health," "Energy Consumption," and "Maintenance Status." When a new data point, such as "Temperature," is added from a different system, other digital twins know how to interpret and respond to this data because they share a common understanding of its context and implications.
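
Here is a toy version of that idea in Python. The concepts are the ones from the example above; the dictionary structure and the relationship names ("affects", "informs") are invented for illustration, since real ontologies are usually expressed in richer formats such as OWL/RDF:

```python
# A toy ontology: concepts and the relationships between them.
ONTOLOGY = {
    "Temperature": {
        "is_a": "SensorReading",
        "unit": "celsius",
        "affects": ["MachineHealth", "EnergyConsumption"],
    },
    "MachineHealth": {"is_a": "DerivedIndicator", "informs": ["MaintenanceStatus"]},
    "EnergyConsumption": {"is_a": "DerivedIndicator", "unit": "kWh"},
    "MaintenanceStatus": {"is_a": "OperationalState"},
}

def concepts_to_update(new_concept: str) -> set[str]:
    """Walk the ontology to find every concept a new data point touches."""
    seen, frontier = set(), [new_concept]
    while frontier:
        entry = ONTOLOGY.get(frontier.pop(), {})
        for related in entry.get("affects", []) + entry.get("informs", []):
            if related not in seen:
                seen.add(related)
                frontier.append(related)
    return seen

# A twin seeing "Temperature" for the first time can discover, from the shared
# ontology alone, that it ultimately bears on machine health and maintenance.
print(concepts_to_update("Temperature"))
# {'MachineHealth', 'EnergyConsumption', 'MaintenanceStatus'}
```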

By achieving deep interoperability, we're not just making systems talk to each other; we're enabling them to understand and collaborate in meaningful ways.

For example, imagine a manufacturing plant where the machines, maintenance systems, supply chain, and even customer orders are all part of this interconnected network. If a machine starts to show signs of wear, the maintenance system knows immediately and can schedule repairs without human intervention. The supply chain adjusts accordingly, and customers are kept in the loop about any potential delays. It's a seamless operation that saves time, money, and a lot of headaches.
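
A stripped-down sketch of that chain reaction in Python: a tiny in-process event bus stands in for the interoperability layer, and every name, topic, and threshold below is invented for illustration:

```python
from collections import defaultdict

# A tiny in-process event bus standing in for the interoperability layer.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

# Each "twin" reacts to shared concepts rather than to another tool's API.
def maintenance_twin(event):
    print(f"Maintenance: scheduling repair for {event['machine']}")
    publish("maintenance.scheduled", {"machine": event["machine"], "downtime_days": 2})

def supply_chain_twin(event):
    print(f"Supply chain: reserving parts around {event['downtime_days']} days of downtime")
    publish("orders.delayed", {"delay_days": event["downtime_days"]})

def customer_twin(event):
    print(f"Customer service: notifying customers of a {event['delay_days']}-day delay")

subscribe("machine.wear_detected", maintenance_twin)
subscribe("maintenance.scheduled", supply_chain_twin)
subscribe("orders.delayed", customer_twin)

# One wear signal ripples through the whole chain without human hand-offs.
publish("machine.wear_detected", {"machine": "press-07", "wear": 0.83})
```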

Of course, getting to this point isn't easy. There are technical challenges to overcome, like modeling complex systems accurately and ensuring the AI can handle the nuances of different tools. But we are close. Let’s build the future together.
