What is Co-Simulation?

Co-simulation is the joint simulation of loosely coupled stand-alone sub-simulators.

  • A co-simulation algorithm takes care of time synchronization and interactions across the sub-simulators.
  • The interactions between these sub-simulators are only synchronized at discrete communication points.
  • Sub-simulators are assumed to be completely independent of each other between the communication points.
  • The sub-simulators behave conceptually like black boxes. They accept inputs (from other sub-simulators), advance in time with a built-in solver routine up to the next communication point, and finally output some results. The results may in turn be used again as inputs to other sub-simulators.
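To make the black-box picture concrete, here is a minimal sketch of a fixed-step co-simulation loop (all names are illustrative; real sub-simulators would be external tools, not Python classes). Two sub-simulators each integrate a simple first-order model with a built-in solver, and the master only exchanges their outputs at the communication points:

```python
class SubSimulator:
    """A black-box sub-simulator: dx/dt = -x + u, integrated internally."""

    def __init__(self, x0, internal_dt=0.001):
        self.x = x0
        self.internal_dt = internal_dt
        self.u = 0.0  # input, set by the master

    def set_input(self, u):
        self.u = u

    def do_step(self, dt):
        # The built-in solver (explicit Euler here) advances to the next
        # communication point, independent of all other sub-simulators.
        steps = max(1, round(dt / self.internal_dt))
        h = dt / steps
        for _ in range(steps):
            self.x += h * (-self.x + self.u)

    def get_output(self):
        return self.x


a, b = SubSimulator(x0=1.0), SubSimulator(x0=0.0)

t, t_end, dt = 0.0, 1.0, 0.01  # dt = communication interval
while t < t_end - 1e-12:
    # Exchange values only at the communication point ...
    a.set_input(b.get_output())
    b.set_input(a.get_output())
    # ... then let both advance independently (they could run in parallel).
    a.do_step(dt)
    b.do_step(dt)
    t += dt

print(round(a.get_output(), 4), round(b.get_output(), 4))
```

Between communication points the two `do_step` calls are completely independent, which is what makes parallel execution of sub-simulators possible.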

Why Should You Care?

There are generally three approaches for combining models for full-system simulation:

  1. Building the entire full-system model in one tool for the purpose of simulation.
  2. Exchanging models between tools so that the simulation runs in one of them (model exchange).
  3. Loosely coupling two or more simulators (co-simulation).

Co-simulation offers several advantages over the other two options:

  • Its modular nature and high flexibility: a full-system simulation is assembled from several stand-alone sub-simulators.
  • Models can be developed efficiently in parallel and they can easily be re-used.
  • The ability to include hardware-in-the-loop, software-in-the-loop, and human-in-the-loop components with relative ease and consistency.
  • The ability to leverage specialized tool chains and domain-specific knowledge already in use by participants and partners.
  • The open nature of co-simulation together with a black box approach that protects IPR.
  • Sub-simulators can run in parallel with the potential to speed up full-system simulations.

When to Use Co-Simulation

Due to its characteristics and advantages, co-simulation is best suited for complex and heterogeneous cyber-physical systems. Typical telltale signs that co-simulation is worth considering include:

  • A high degree of flexibility is required, and several submodels are involved from various fields and disciplines.
  • The submodels are developed by different partners, especially when substantial investments have already been made in knowledge, tools, and people, or when IPR protection is a concern.
  • The system spans a wider range of physical and engineering domains, and several different time scales are involved.

What is FMI?

The Functional Mock-Up Interface (FMI) is a tool-independent standard for making submodels binary compatible with each other. In so doing it removes the need for recompilation and facilitates model sharing and co-simulation. FMI was first published in 2010 as a result of the ITEA2 project MODELISAR. Since 2011, maintenance and development of the standard have been performed by the Modelica Association, and a second version of the standard was released in 2014. It is completely open and free to use and is supported by a large and growing number of tools, for example by Dymola, SIMPACK, SimulationX, and Simulink.

A model which implements FMI is called a Functional Mock-Up Unit (FMU). In effect, an FMU is an archive file (ZIP format) consisting of model code for one or more platforms (C or binary), a description of the interface data (XML format), and optional documentation and metadata. The FMI standard specifies the APIs that must be implemented by the model code. Note that an FMU can also represent interfaces to hardware such as sensors, actuators, or devices for human input.
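Because an FMU is just a ZIP archive with a standardized layout, its interface description can be inspected with ordinary tools. The sketch below builds a toy archive containing a made-up `modelDescription.xml` (the file name and the `fmiModelDescription` root element are prescribed by the standard; the model itself is invented for illustration) and reads the interface data back out:

```python
import io
import xml.etree.ElementTree as ET
import zipfile

# A toy stand-in for the mandatory interface description. The file name
# modelDescription.xml and the fmiModelDescription root element come from
# the FMI standard, but this particular model is made up.
MODEL_DESCRIPTION = """<?xml version="1.0" encoding="UTF-8"?>
<fmiModelDescription fmiVersion="2.0" modelName="Pendulum" guid="{1234}">
  <CoSimulation modelIdentifier="Pendulum"/>
</fmiModelDescription>
"""

# Build a minimal FMU-like archive in memory. A real FMU would also
# contain binaries/<platform>/ with the compiled model code, plus any
# optional documentation and metadata.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("modelDescription.xml", MODEL_DESCRIPTION)

# Any FMU can be opened the same way to read its interface data.
with zipfile.ZipFile(buf) as zf:
    root = ET.fromstring(zf.read("modelDescription.xml"))

print(root.get("modelName"), root.get("fmiVersion"))
```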

FMI for co-simulation is based on a master/slave model of communication and control, where sub-simulators are slaves that are controlled by a master algorithm (the co-simulation algorithm). The sub-simulators do not have any information about each other, nor about the simulation environment, except for the values they receive for their input variables. Thus, they have no knowledge about or control over which other sub-simulators they are coupled to; the data are routed by the master algorithm.
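The routing described above can be sketched as follows; the `Slave` class and the connection table are illustrative stand-ins, not the FMI C API. Each slave only ever sees its own named inputs and outputs, while the wiring lives exclusively in the master:

```python
# Sketch of master-side data routing (illustrative names, not the FMI C API).
# Slaves expose named inputs and outputs; only the master knows the wiring.

class Slave:
    """A stand-in slave: scales its input by a gain, knows nothing of its peers."""

    def __init__(self, gain):
        self.gain = gain
        self.inputs = {"u": 0.0}
        self.outputs = {"y": 0.0}

    def do_step(self, dt):
        # Trivial "dynamics" so the routing is easy to follow.
        self.outputs["y"] = self.gain * self.inputs["u"]


slaves = {"plant": Slave(gain=2.0), "controller": Slave(gain=0.5)}

# The connection table lives in the master alone:
# (source slave, output) -> (target slave, input)
connections = {
    ("plant", "y"): ("controller", "u"),
    ("controller", "y"): ("plant", "u"),
}


def route(slaves, connections):
    """Copy output values to connected inputs; the slaves stay decoupled."""
    for (src, out), (dst, inp) in connections.items():
        slaves[dst].inputs[inp] = slaves[src].outputs[out]


slaves["plant"].inputs["u"] = 1.0  # external stimulus
for _ in range(3):
    for s in slaves.values():
        s.do_step(dt=0.1)
    route(slaves, connections)
```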

Note that FMI only specifies how the (co-)simulation software interacts with the models; it is not in itself a simulation software, nor does it specify or restrict any other parts of the architecture of such a software. FMI does not specify how the sub-simulators are time synchronized, nor in what format data are transported between them.

Limitations of FMI

While FMI shows enormous potential, is continuously and actively refined, and has become the de facto standard for co-simulation, it also has various deficiencies. Some aspects of co-simulation are altogether poorly addressed by the standard, and because its limitations are often rather subtle, it is well worth pointing them out here.

  • It is fully possible to design FMUs and master algorithms that are standard compliant but exhibit nondeterministic and unexpected behavior.
  • It is not immediately clear how discontinuous models can be encoded as FMUs.
  • FMI does not address error control.
  • While FMI does facilitate the use of physical units, it does not enforce consistency checks to ensure that the pieces actually fit together. For example, it does not assist the user in making sure that coordinate systems are consistent between submodels.

What Co-Simulation Will Not Do

There is no guarantee that all the different parts of a co-simulation will play together nicely. There is also no general and easy-to-use co-simulation solution (software or algorithm) available to date. This is mainly due to technical challenges and because it is very difficult to define a general-purpose method that works for all sorts of situations and requirements. Specifically:

  • Co-simulation will in most cases require a fair amount of understanding on a system level, and potentially require some submodel domain knowledge too.
  • It is far from straightforward to know where and how to split a system into submodels, resulting in a trade-off between modularity, accuracy, and performance. This has potentially far-reaching implications for submodel development, collaboration, and ease of use.
  • Defining the connections between the submodels is not always easy either, and how computational causality is defined between submodels needs to be considered.
  • There is a large range of co-simulation algorithms to choose from, from very simple ones (with potential drawbacks in terms of speed and accuracy) to rather complicated ones (with drawbacks in terms of parameter tuning and additional model requirements). The simplest co-simulation algorithms only support constant communication intervals, leaving it up to the user to choose the right interval for the simulation at hand.
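The last point can be illustrated with a small numerical experiment (a hypothetical setup, not taken from any particular tool): two cross-coupled first-order systems are co-simulated with constant communication intervals of different sizes and compared against the analytic solution of the fully coupled system. The coarser interval leaves the inputs stale for longer and yields a larger coupling error:

```python
import math

# Sketch: why the communication interval matters with the simplest
# (constant-interval, non-iterative) co-simulation algorithms. Two
# sub-systems dx/dt = -x + u are cross-coupled; their inputs are only
# refreshed at communication points, so between points each side sees
# a stale value of the other. All names here are illustrative.

def cosimulate(dt, t_end=1.0):
    xa, xb = 1.0, 0.0
    t = 0.0
    while t < t_end - 1e-12:
        ua, ub = xb, xa          # exchange only at the communication point
        n = 100                  # each side's internal solver (explicit Euler)
        h = dt / n
        for _ in range(n):
            xa += h * (-xa + ua)  # inputs held constant in between
            xb += h * (-xb + ub)
        t += dt
    return xa

# Analytic reference for the fully coupled system: xa(t) = (1 + exp(-2t)) / 2
exact = 0.5 * (1.0 + math.exp(-2.0))

err_coarse = abs(cosimulate(dt=0.2) - exact)
err_fine = abs(cosimulate(dt=0.01) - exact)
print(err_coarse, err_fine)
```

Shrinking the communication interval reduces the coupling error, but at the cost of more synchronization overhead; that trade-off is exactly what the user of a constant-interval algorithm must manage by hand.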

For the interested reader, an overview of the most relevant technical issues surrounding the use of co-simulation can be found in, for example, [1].

Further reading

[1] S. Sadjina, L. T. Kyllingstad, M. Rindarøy, S. Skjong, V. Æsøy and E. Pedersen, “Distributed Co-Simulation of Maritime Systems and Operations,” Journal of Offshore Mechanics and Arctic Engineering, vol. 141, no. 1, p. 011302, 2019.
[2] “Prostep ivip,” [Online]. Available: [Accessed 12 2018].
[3] R. Kübler and W. Schiehlen, “Two methods of simulator coupling,” Mathematical and Computer Modelling of Dynamical Systems, vol. 6, no. 2, pp. 93–113, 2000.
[4] Modelica Association, “Official web site of the Functional Mock-up Interface,” 2012. [Online]. Available: [Accessed 31 1 2019].
[5] S. Sadjina, L. T. Kyllingstad, S. Skjong and E. Pedersen, “Energy conservation and power bonds in co-simulations: non-iterative adaptive step size control and error estimation,” Engineering with Computers, pp. 607-620, 2017.

