Case Study

Simulation and Emulation – A Primer

Ari Siesser


Many of the folks we run into seem a little confused by the terminology. Practitioners often use “simulation” and “emulation” interchangeably, when in fact the two are very different. This post will serve as a quick tutorial on both.

10,000 Foot Overview

First things first – simulation and emulation are both techniques for answering questions about something in the real world by building a “model” of it.

Simulation is used to answer questions about the performance of a system. Let’s say for example that your company is considering either building a new manufacturing facility or adding capabilities to an existing one. Before you make a decision, you’ll likely want to know how key metrics like throughput will change in each scenario so you can determine which investment has the highest expected return.

Emulation, on the other hand, is used to answer questions about the control logic of a system, i.e. is the automated equipment programmed correctly?

For those who don’t know how automated equipment works: an automated system functions as a loop. The equipment is controlled by a computer called a PLC, or programmable logic controller. Sensors in the facility send signals to the PLC ➔ the PLC processes those signals and sends instructions out to the automated equipment, e.g. a robot, conveyor, or diverter ➔ the equipment executes its instructions, causing the state of the system to change ➔ the sensors report the new state of the system to the PLC ➔ the loop continues.

In an emulation, we create a high-fidelity software model of the facility, and connect it to the PLC to create the same loop as above. We want to test as many scenarios as possible to find ways in which the PLC programming might cause a problem in real life so that it can be corrected.

That might have sounded a bit confusing, so here is an example. Consider a conveyor that is feeding packages to a robot. A laser sensor (known in the industry as a photo eye) indicates whether a package is present and tells the conveyor to stop so that the robot can pick the package up. In a simulation, you might just be interested in how many boxes the robot can move. In an emulation, you will actually test the code that controls the conveyor and the robot.

When we start sending boxes down the conveyor in the model, the emulated photo eye generates the same signal as the real photo eye, i.e. 1 or 0 depending on whether a box is present. This signal is sent to the PLC, which is unaware that it is listening to software rather than the actual physical equipment. The logic controller processes the input and sends instructions back to the emulation, e.g. telling the robot to pick up the box. The software reacts to the PLC’s instructions exactly as the physical system would, and if there is an error, e.g. the robot goes to the wrong location or applies so much force that it crushes the package, we will catch it. This process of testing the control logic offline using an emulation is called virtual commissioning.
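The sense ➔ decide ➔ actuate loop above can be sketched in a few lines of code. This is a minimal illustration, not a real PLC interface; the names (PhotoEye, Conveyor, plc_scan) are hypothetical.

```python
class PhotoEye:
    """Sensor: reports 1 when a box is present at the eye, else 0."""
    def __init__(self):
        self.box_present = False

    def read(self):
        return 1 if self.box_present else 0


class Conveyor:
    """Actuator: runs until the controller commands it to stop."""
    def __init__(self):
        self.running = True

    def apply(self, command):
        self.running = (command == "RUN")


def plc_scan(sensor_value):
    """Control logic: stop the conveyor when a box reaches the photo eye."""
    return "STOP" if sensor_value == 1 else "RUN"


# One pass through the loop: sensor -> PLC -> equipment -> new state.
eye, conveyor = PhotoEye(), Conveyor()
eye.box_present = True                 # a box arrives at the sensor
conveyor.apply(plc_scan(eye.read()))   # the conveyor stops
```

In a real emulation, `plc_scan` would not be a Python function at all: the signals would travel to an actual PLC running the production control program, which is exactly what makes the test meaningful.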

You might be wondering how accurately a computer model can represent reality. The answer is: very accurately. Recent advances in computation allow us to account for the physics of the system. Consider the previous example with the conveyor and boxes: what happens if the friction between a box and the conveyor belt is not enough to keep the box from flying off at a given speed, or if the robot applies too much force to the box as it picks it up? These are the types of questions an emulation can answer, so that we can tune the programming of the automated equipment and avoid such problems in real life.

When and why should we use a model?

Ok, so now we understand the tools, but when do we really need them? The emulation case seems more straightforward, but what about simulation? Why not just plug some numbers into a spreadsheet and call it a day?

Why simulate?

Model Complexity

Consider the following system with inputs arriving at rate λ and a server processing them at rate μ:

Assume for a moment that this system represents a new machine that you are considering adding to your facility. You know how quickly WIP is arriving (i.e. the arrival rate λ), and the manufacturer of this new machine has told you the service rate μ, i.e. how many units it can process in a given amount of time. What if I asked you how many units you’d have in the queue on average, or how long each unit would wait on average before exiting the system? How would you approach the problem?

The answer comes with good and bad news:

The good news is that queueing theory is a very well-studied branch of operations research, and there are equations to calculate things like average queue length and time spent waiting.
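For the single-server system above, if we assume Poisson arrivals and exponential service times (the classic M/M/1 queue), those equations are short enough to sketch directly. The example rates below are illustrative, not from any real facility.

```python
def mm1_metrics(lam, mu):
    """Average queue length (Lq) and average wait in queue (Wq) for an
    M/M/1 queue. Assumes lam < mu, i.e. the system is stable."""
    if lam >= mu:
        raise ValueError("arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    Lq = rho ** 2 / (1 - rho)      # average number of units waiting
    Wq = Lq / lam                  # average wait, via Little's law: Lq = lam * Wq
    return Lq, Wq

# e.g. 4 arrivals/hour and a machine that serves 5 units/hour:
Lq, Wq = mm1_metrics(lam=4.0, mu=5.0)
print(Lq, Wq)   # 3.2 units waiting on average, 0.8 hours average wait
```

Note how nonlinear this is: at 80% utilization the queue already averages more than three units, and it blows up as λ approaches μ.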

The bad news is that real systems like the one below are much more complex, and there are no closed-form equations to describe their behavior. The best way to estimate how such a system will perform is to simulate it.
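To show what “simulate it” means in practice, here is a minimal discrete-event simulation of even the simple single-server queue. For this simple case the simulated average wait converges to the queueing-theory answer; for a complex real system, this kind of simulation is the only practical estimate. The rates are the same illustrative numbers as before.

```python
import random

def simulate_queue(lam, mu, n_customers, seed=0):
    """Average time each unit waits before service begins, estimated by
    stepping through arrivals one at a time (Lindley's recursion)."""
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)               # next arrival time
        start = max(arrival, server_free_at)          # wait if server is busy
        total_wait += start - arrival
        server_free_at = start + rng.expovariate(mu)  # service completes
    return total_wait / n_customers

# With many customers this approaches the analytic M/M/1 answer of 0.8 hours.
print(simulate_queue(lam=4.0, mu=5.0, n_customers=100_000))
```

A real simulation model layers many such processes together (machines, buffers, operators, failures), which is exactly why no single equation can describe it.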

Image courtesy of Demo3D


In the real world, we can’t predict everything. Things like machine failure, service times for broken machines, customer arrivals and orders, etc. are all unpredictable. The more complex your system is, the more susceptible your results are to randomness, and the greater the benefit of simulating the system.

When simulating phenomena like the ones mentioned above, it’s really important to have a solid understanding of the probabilistic assumptions you are making. I have seen too many instances of bad assumptions ruining a model. A few examples:

  • A customer justifying an investment in new machinery did not account for machine failures or downtime in their model. By assuming the machine would be up 100% of the time, they grossly overestimated throughput, and the ROI realized on the project was substantially lower than what was projected.
  • In another failure-related example, a customer chose a poor probability distribution to model when machines would fail and how long repairs would take. Despite including randomness in the model, their predictions were way off compared with actual performance.

Why Emulate?

I spent a lot of time in the introduction explaining the types of problems emulation can solve. Below, I’ll quickly address the main benefits of deploying an emulation model in your organization.

Save Time and Money!

Virtual commissioning, i.e. testing the control logic offline using an emulation, allows us to complete automation projects much faster. Check out the diagram below.

Photo courtesy of Virtual Components

Faster project completion means your system is generating revenue much sooner, increasing your ROI. Additionally, future updates to your system are completed much faster since you will already have a library of tests you can run to pinpoint issues.

Use Virtual Reality to Train Machine Operators

This one may sound less obvious since we haven’t addressed VR yet in this post. However, all of the models we build at Automation Intelligence can be experienced in VR. The same screens that a machine operator would use in real life to control the machines (called HMIs, or human-machine interfaces) can be incorporated into the virtual facility. Because the virtual representation is actually connected to the logic controllers, the machines react exactly the same way in the virtual world as they would in real life. This opens the door to a safe, interactive way to train machine operators, one that is much more effective than handbooks and manuals alone.

Remote collaboration

Additionally, emulation combined with virtual reality allows remote teams to look at the same model together in real time from anywhere in the world. This greatly improves your team’s ability to diagnose and fix problems with your automated system quickly.

Automation Intelligence’s team includes both seasoned engineers and academics with operations research expertise. For more information, please reach out to

Located in Georgia Tech's CODA Advanced Computing Building in