Why Worst-Case Execution Time Matters in Multicore Real-Time Systems

Other reasons WCET can’t be overlooked in real-time, mission-critical systems include:

  • Hidden concurrency effects: Even tasks that share no code or data can interfere via shared caches, memory buses, or peripherals, introducing unpredictable latency spikes.
  • Rare event vulnerability: Timing violations often occur under unusual but critical circumstances, such as bursts of sensor data, AI/ML workload spikes, or simultaneous interrupts — exactly when reliability matters most.
  • Safety and certification implications: Standards like ISO 26262, DO-178C, and IEC 61508 require evidence that timing guarantees are met. Ignoring WCET can jeopardize certification or introduce liability.
  • Dynamic system behavior: Memory contention, cache eviction, and task overlap vary over time. Relying on average execution times may mask worst-case scenarios.
  • Mission-critical impact: For hard real-time systems such as aircraft control, automotive safety, or medical devices, missing a deadline even once could be catastrophic.

In short, WCET is the single most important factor in determining whether a real-time system can operate safely and reliably under all conditions. Ignoring it risks failures that average-case testing alone can’t reveal.

Why is WCET Analysis So Difficult?

Multicore timing coupling

Even tasks that share no data, no functions, and run on separate cores can interfere with each other due to shared infrastructure that includes L2/L3 caches, memory buses and interconnects, shared peripherals, and RAM access patterns. These hidden interference channels create timing spikes that are impossible to predict from code inspection alone.

Concurrent execution is non-deterministic

Execution overlap between tasks changes from run to run. A task may overlap with another for milliseconds in one test and barely at all in another. This variability directly affects WCET and makes analysis an iterative, measurement-driven process.

Data size changes cache behavior

When a task’s working set exceeds L1 cache capacity, it must fall back on shared L2 cache or RAM. Multiple tasks crossing this threshold at once can spike cache contention and unpredictably inflate timing, sometimes by 40% or more in real systems.
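The working-set effect can be loosely illustrated even from a high-level language. The sketch below (Python, with a hypothetical helper name `measure_traversal`) times strided traversals of progressively larger buffers; interpreter overhead dominates, so this is only a rough illustration of the cache-threshold behavior described above, not a substitute for on-target measurement.

```python
import time

def measure_traversal(n_bytes, stride=64, repeats=5):
    """Time strided traversals of a buffer of n_bytes.

    Returns the best (minimum) observed time per accessed element in
    nanoseconds. Working sets that no longer fit in cache typically show
    a higher per-element cost, though Python overhead blurs the effect.
    """
    buf = bytearray(n_bytes)
    indices = range(0, n_bytes, stride)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        acc = 0
        for i in indices:
            acc ^= buf[i]          # touch roughly one byte per cache line
        best = min(best, time.perf_counter() - t0)
    return best / max(1, len(indices)) * 1e9

for size in (32 * 1024, 4 * 1024 * 1024, 64 * 1024 * 1024):
    ns = measure_traversal(size)
    print(f"{size // 1024:>6} KiB: {ns:6.1f} ns/element")
```

On the actual target, the equivalent measurement would use hardware cycle counters and the production cache configuration rather than wall-clock timing.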

Synthetic interference isn’t realistic

Generated interference patterns can’t replicate real hardware contention. WCET must be assessed on the target, under realistic load, using the actual cache and memory configuration.

Analyzing Multicore WCET in Three Complementary Stages

As recognized by industry guidance documents and standards, developers can employ various techniques to measure execution timing in complex, multicore processor-based systems. Three proven methods can provide valuable timing data throughout the development cycle:

1. Early-Stage Development: Halstead’s Metrics and Static Analysis

Halstead’s complexity metrics can act as an early warning system for developers, providing insight into the complexity and resource demands of specific sections of code. Through static analysis, developers can combine Halstead data with real-time measurements from the target system, giving them a more efficient path to lowering WCET.

Such metrics and others shed light on timing-related aspects of code, including module size, control-flow structures, and data flow. By identifying code sections of larger size, higher complexity, and more intricate data-flow patterns, developers can prioritize their efforts and fine-tune the code that places the highest demands on processing time. Optimizing these resource-intensive areas early in the lifecycle is an effective way to reduce the risk of timing violations and simplify later analysis.
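The core Halstead computation is straightforward once operators and operands have been extracted from the source. The sketch below hard-codes the token lists for a toy statement (`y = a * x + b`); a real tool would derive them with a language-aware parser.

```python
import math

def halstead(operators, operands):
    """Compute basic Halstead metrics from flat token lists.

    n1/n2 are distinct operator/operand counts; N1/N2 are totals.
    """
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary) if vocabulary > 1 else 0.0
    difficulty = (n1 / 2) * (N2 / n2) if n2 else 0.0
    effort = difficulty * volume
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# Tokens for the toy statement: y = a * x + b
m = halstead(operators=["=", "*", "+"], operands=["y", "a", "x", "b"])
print(m)
```

Modules with outsized volume or effort scores are natural candidates for the early optimization effort described above.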

2. Mid-Stage Development: Empirical Dynamic Analysis of Execution Times

When modules fail to meet timing requirements, developers can measure, analyze, and track individual task execution times to help identify and mitigate timing issues (empirical analysis). It’s critical to eliminate the influence of configuration differences between development and production, such as compiler options, linker options, and hardware features. Therefore, analysis must occur in the actual environment where the application will run. As a result, empirical analysis can’t be fully employed until a test system is in place.

To account for environmental and application variability between runs, tests must be executed repeatedly and in sufficient numbers to ensure accurate, reliable, and consistent results (empirical dynamic analysis). To execute that many tests within a reasonable timeframe, automation is essential. Automation also reduces developer workload and eliminates the human error that can creep into manual processes.
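The essence of such an automated harness is to run a task many times and track the high-water mark, since the maximum observed execution time, not the mean, is what bounds WCET. A minimal sketch, with hypothetical names `measure_task` and `demo_task`:

```python
import statistics
import time

def measure_task(task, runs=1000):
    """Run `task` repeatedly and record per-run execution times.

    Returns the high-water mark (maximum observed execution time) plus
    summary statistics. Averages can hide rare contention spikes, so the
    maximum and high percentiles are the figures of merit.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        samples.append(time.perf_counter() - t0)
    return {
        "runs": runs,
        "max": max(samples),                       # observed high-water mark
        "mean": statistics.mean(samples),
        "p99": sorted(samples)[int(runs * 0.99) - 1],
    }

def demo_task():
    sum(i * i for i in range(2000))                # stand-in workload

stats = measure_task(demo_task, runs=500)
print(f"mean={stats['mean']*1e6:.1f} us  "
      f"p99={stats['p99']*1e6:.1f} us  max={stats['max']*1e6:.1f} us")
```

A production harness would additionally run on the target hardware under representative interference load, and note that a measured maximum is still only a lower bound on the true WCET.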

3. Late-Stage Development: Application Control and Data-Coupling Analysis

Evaluating contention involves identifying task dependencies within applications, from both a control and a data perspective. Using control- and data-coupling analysis, developers can explore how execution and data dependencies between tasks affect each other. Safety standards require such analyses not only to ensure that all couplings have been exercised, but also because of their capacity to reveal potential problems.
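At its simplest, data-coupling analysis asks: which tasks write variables that other tasks read? The sketch below illustrates the idea on a hand-written model of three tasks; the task names, variables, and the `data_couplings` helper are all invented for illustration, whereas a real tool derives the read/write sets from static analysis and instrumented runs.

```python
def data_couplings(tasks):
    """Identify data-coupling pairs: one task writes what another reads.

    `tasks` maps task name -> {"reads": set, "writes": set}.
    Returns sorted (writer, reader, shared_variables) triples.
    """
    pairs = []
    for writer, w in tasks.items():
        for reader, r in tasks.items():
            if writer == reader:
                continue
            shared = w["writes"] & r["reads"]
            if shared:
                pairs.append((writer, reader, sorted(shared)))
    return sorted(pairs)

tasks = {
    "sensor":  {"reads": {"adc_raw"},           "writes": {"speed"}},
    "control": {"reads": {"speed", "setpoint"}, "writes": {"throttle"}},
    "actuate": {"reads": {"throttle"},          "writes": set()},
}
for writer, reader, shared in data_couplings(tasks):
    print(f"{writer} -> {reader} via {shared}")
```

Each discovered pair is a coupling that timing tests must exercise, since a writer stalling on contended memory can delay every downstream reader.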

How an End-to-End Solution Improves the WCET Analysis

Because WCET depends on both software and hardware behavior, developers need tools that instrument code running on the target; capture timing and trace data in real time; reveal data, control, and timing coupling; and support reproducible, standards-aligned verification.

Accurate WCET analysis depends on understanding how code behaves once it runs on the target hardware. Together, the following tools give developers a complete view of execution behavior, enabling more accurate and defensible WCET analysis:

  • Compilers influence WCET by shaping the determinism of machine code. Predictable optimizations and stable code generation help ensure execution timing remains consistent.
  • Debuggers give developers real-time insight into system execution under load. Trace-enabled tools reveal instruction flow, task overlap, memory access patterns, and timing spikes caused by multicore interference.
  • Software quality and verification tools instrument code to collect detailed runtime measurements. They help developers identify data, functional, and timing coupling; validate execution paths; and correlate results with system requirements. This provides the traceable evidence needed to support safety and certification objectives.

For WCET, hardware analysis is equally essential. It involves identifying all potential interference channels in the target’s specific hardware configuration and then designing tests or configuring the system to trigger the worst-case conditions under which WCET should be measured.
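One way to organize that work is as a test matrix that pairs each critical task with a stressor for every identified interference channel. The sketch below builds such a matrix; the channel names, stressor names, and the `interference_matrix` helper are placeholders, since a real campaign derives the channel list from the target’s hardware documentation.

```python
import itertools

def interference_matrix(critical_tasks, channels):
    """Enumerate test cases pairing each critical task with a stressor
    workload for every shared hardware channel.

    `channels` maps an interference channel to a stressor name.
    """
    return [
        {"task": task, "channel": chan, "stressor": stressor}
        for task, (chan, stressor) in itertools.product(
            critical_tasks, channels.items())
    ]

cases = interference_matrix(
    critical_tasks=["flight_control", "brake_monitor"],
    channels={
        "L2_cache": "cache_thrasher",
        "dram_bus": "memory_streamer",
        "dma_peripheral": "dma_flooder",
    },
)
for c in cases:
    print(c["task"], "vs", c["stressor"], "on", c["channel"])
```

Running each critical task against each stressor, on the target, approximates the worst-case conditions under which WCET should be measured.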

Mitigating Timing Coupling to Keep WCET Under Control

Timing coupling occurs when tasks affect each other’s execution time, even if they don’t share code or data. On multicore systems, hidden dependencies arise through shared caches, memory buses, peripherals, and other hardware resources. These interactions can create unexpected delays, inflating WCET and causing mission-critical tasks to miss deadlines (Fig. 2).
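One widely used mitigation is core pinning: fixing a hard real-time task to a single core so the OS cannot migrate it, which keeps its cache footprint and interference profile stable. On Linux this is exposed through the standard `os.sched_setaffinity` call, shown below as a minimal sketch (the `pin_to_core` wrapper is an invented name; the function returns `None` on platforms without the API).

```python
import os

def pin_to_core(core_id):
    """Pin the current process to one CPU core (Linux only).

    Returns the resulting affinity set, or None where the scheduler
    affinity API is unavailable (e.g., macOS or Windows).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {core_id})   # 0 = the current process
    return os.sched_getaffinity(0)

affinity = pin_to_core(0)
print("running on cores:", affinity)
```

Pinning addresses core migration only; contention through shared caches and the memory bus remains, which is why it is typically combined with other partitioning measures at the OS or hypervisor level.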
