Verification teams often employ such software-driven tests, but these are usually assumed to run on a processor embedded within the SoC hardware. The ability of software running on the virtual CPU to adapt its stimulus to the response from the design under test is a powerful way to augment coverage, which might otherwise take a thousand times more verification cycles to reach by constrained random methods alone.
If you create a fix for some unexpected behaviour, you can test that fix under exactly the same combination of conditions that exposed the original bug. The hybrid solution drives the hardware with stimulus from the virtual side; for example, the virtual model can extract communication messages and data for a given protocol from data files on its host workstation, as sketched below. A library of such stimulus might even be built up and reused across projects.
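As a rough illustration of such file-driven, response-adaptive stimulus, the sketch below loads hex-encoded frames (one per line) from a hypothetical frames.dat on the host and pushes them through stub functions; in a real flow, dut_send and dut_status would go through the platform's transactor layer, whose API is vendor-specific. All names here are invented for the example.

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

using Frame = std::vector<uint8_t>;

// Stubs standing in for the transactor calls that would move data to and
// from the emulated RTL; the real API is vendor-specific.
bool dut_send(const Frame& frame) {
  std::cout << "sending " << frame.size() << "-byte frame\n";
  return true;
}
uint8_t dut_status() { return 0x06; }  // 0x06 = ACK, 0x15 = NAK (illustrative)

// Parse one hex-encoded frame per line from a host-side data file; the
// resulting stimulus library can be stored and reused across projects.
std::vector<Frame> load_frames(const std::string& path) {
  std::vector<Frame> frames;
  std::ifstream in(path);
  std::string line;
  while (std::getline(in, line)) {
    Frame f;
    for (size_t i = 0; i + 1 < line.size(); i += 2)
      f.push_back(static_cast<uint8_t>(std::stoul(line.substr(i, 2), nullptr, 16)));
    if (!f.empty()) frames.push_back(std::move(f));
  }
  return frames;
}

int main() {
  for (const Frame& f : load_frames("frames.dat")) {
    dut_send(f);
    if (dut_status() == 0x15) dut_send(f);  // adapt to the response: retry on NAK
  }
  return 0;
}
```

Because the stimulus lives in plain data files rather than in the testbench itself, the same frames can be replayed unchanged to reproduce a failure, or checked into a library and reused on the next project.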
In the same way, virtual models can access real-world data from a USB port on the FPGA-based hardware, for example in order to exercise middleware and drivers being created by software engineers on the virtual side. This would be an advantage at so-called Plugfest events, at which early developers of new communication standards such as USB 3.0 physically connect their prototypes to those of cooperating developers from other companies. They could use the CPU and software running in the virtual part of our hybrid emulation platform to gain excellent debug visibility into the new driver, or even into PHY performance.
Emulator users and advanced users of FPGA-based prototyping often provide only remote access to their environments, rather than wheeling them into a lab. Many end users work only on the software aspects of the SoC and do not need access to the real world, yet they do need to interface with the cycle-accurate behaviour of the RTL running in hardware. In those situations the hardware can be kept in closed server rooms and accessed over standard networks, perhaps even via Internet protocols from remote locations. This not only makes it easier to share time on the hybrid emulation platform across multiple users, but also isolates the platform from accidental misuse and shields users from any fragility in the hardware.
Partitioning an SoC across virtual and hardware platforms
In every use case for hybrid emulation, some blocks of the SoC design run in FPGA hardware and the rest run as virtual models. In some cases the choice for a given block is governed by the availability of a model or of RTL. If both are available, however, the decision rests on which side of the virtual-hardware boundary serves that block best. This leads to the critical question of where to place the boundary between the virtual and physical domains.
Virtual platforms
In the SoC era, virtual platforms are commonly used for system architecture exploration and, following this, as a golden reference model for SoC verification. It turns out that virtual platforms are also very useful when it comes to firmware and software development, as virtual reference models are usually available much earlier than the SoC prototype. Co-emulation with virtual platforms offers numerous advantages to design, development and verification teams.
Those advantages include:
• The emulator stands in for virtual models that are not available
• Virtual platforms stand in for modules for which RTL code is not yet available
• Emulation provides real hardware accuracy
• The emulator runs faster than a complex virtual model
• Virtual platforms are usually built on a SystemC simulation kernel and use TLM interfaces to interconnect the simulated models; the same interface can be used to connect to the emulated design, as the sketch after this list shows
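To make that last point concrete, here is a minimal SystemC/TLM-2.0 sketch of the virtual side. The VirtualCpu initiator, the EmulatorBridge target and the address 0x40000000 are hypothetical placeholders: in a real hybrid flow the bridge would be the vendor-supplied transactor that carries the transaction into the emulator, but the initiator-side code looks the same either way.

```cpp
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace tlm;

// Placeholder for the vendor transactor that would forward TLM
// transactions into the emulated RTL.
struct EmulatorBridge : sc_module {
  tlm_utils::simple_target_socket<EmulatorBridge> tsock;
  SC_CTOR(EmulatorBridge) : tsock("tsock") {
    tsock.register_b_transport(this, &EmulatorBridge::b_transport);
  }
  void b_transport(tlm_generic_payload& trans, sc_time& delay) {
    // A real bridge would cross into the emulator here; this stub
    // simply acknowledges the access.
    trans.set_response_status(TLM_OK_RESPONSE);
  }
};

// Stand-in for the virtual CPU model issuing a single bus write.
struct VirtualCpu : sc_module {
  tlm_utils::simple_initiator_socket<VirtualCpu> isock;
  SC_CTOR(VirtualCpu) : isock("isock") { SC_THREAD(run); }
  void run() {
    tlm_generic_payload trans;
    sc_time delay = SC_ZERO_TIME;
    uint32_t data = 0xDEADBEEF;
    trans.set_command(TLM_WRITE_COMMAND);
    trans.set_address(0x40000000);  // hypothetical peripheral address
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_response_status(TLM_INCOMPLETE_RESPONSE);
    // Same call whether the target is a simulated model or the bridge
    // into the emulator.
    isock->b_transport(trans, delay);
  }
};

int sc_main(int, char*[]) {
  VirtualCpu cpu("cpu");
  EmulatorBridge bridge("bridge");
  cpu.isock.bind(bridge.tsock);
  sc_start();
  return 0;
}
```

The key design point is that the initiator neither knows nor cares whether its b_transport call terminates in a simulated model or in emulated RTL; swapping one for the other requires no change to the software-facing code.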
Challenges
One challenge with the hybrid emulation approach is that it can generate enormous amounts of data very quickly. As a result, only a window of a million or so cycles is typically recorded and available for root-cause investigation when a bug is encountered. But running software in emulation to boot a system or bring up a Wi-Fi connection may take billions of instructions, and the point at which an error corrupts the design state may lie too far from the point at which the bug is first observed for a million-cycle window to be useful.
Instead, ZeBu post-run debug works as follows: every few seconds, a checkpoint of the device under test is taken, and all inputs to the system are captured. Because a rerun from any of those checkpoints is completely deterministic, the window of interest can simply be replayed on demand. This makes billions of cycles available for debug without actually having to record them; the sketch below illustrates the principle.
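The following self-contained C++ program is a purely illustrative sketch of the checkpoint-and-replay principle, not the ZeBu API: the toy Dut, the checkpoint interval and the failing cycle are all invented for the example. It snapshots a tiny design every few cycles while logging every input, then deterministically replays the failure window from the nearest checkpoint.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct DutState { uint32_t reg; uint64_t cycle; };  // a snapshot of the design

// Toy stand-in for the emulated RTL: one register, fully deterministic.
struct Dut {
  uint32_t reg = 0;
  uint64_t cycle = 0;
  void step(uint32_t input) { reg ^= input + 1; ++cycle; }
  DutState checkpoint() const { return {reg, cycle}; }
  void restore(const DutState& s) { reg = s.reg; cycle = s.cycle; }
};

int main() {
  constexpr uint64_t kInterval = 4;   // checkpoint every 4 cycles (tiny, for demo)
  const uint64_t fail_cycle = 10;     // cycle at which the bug is observed
  Dut dut;
  std::vector<DutState> checkpoints;
  std::vector<uint32_t> inputs;       // every input is captured, nothing else
  uint32_t observed = 0;

  // Forward run: record periodic checkpoints and the full input log.
  for (uint64_t c = 0; c < 12; ++c) {
    if (c % kInterval == 0) checkpoints.push_back(dut.checkpoint());
    uint32_t in = static_cast<uint32_t>(c * 7);  // arbitrary stimulus
    inputs.push_back(in);
    dut.step(in);
    if (c == fail_cycle) observed = dut.reg;     // the "failing" state
  }

  // Debug run: restore the checkpoint preceding the failure and replay.
  dut.restore(checkpoints[fail_cycle / kInterval]);
  for (uint64_t c = dut.cycle; c <= fail_cycle; ++c)
    dut.step(inputs[c]);

  // Determinism means the replay reproduces the failing state exactly.
  assert(dut.reg == observed);
  return 0;
}
```

Only the checkpoints and the input log are stored during the forward run; full visibility into any window is regenerated on demand by replaying from the preceding checkpoint.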