First, you have to install a backend. The list of available backends is extensive. For simplicity, I chose https://ollama.com/. Installers are available for macOS, Windows, and Linux.
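On Linux, for example, the install is a documented one-liner, and a first model can be pulled and started right away (the model name below is just one example from the model library):

```bash
# Install Ollama on Linux (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run a model interactively; llama3.2 is one example
ollama run llama3.2
```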
Even though Rust enforces strict memory safety at compile time, reference cycles are still possible when using reference-counted types like Rc<T> or Arc<T>—similar to shared_ptr in C++. This is one of the few scenarios in which Rust can leak memory, albeit safely (i.e., without undefined behavior).
| Concept | C++ | Rust |
| --- | --- | --- |
| Reference cycles | Can occur with `std::shared_ptr` | Can occur with `Rc<T>` or `Arc<T>` |
| Prevention | Use `std::weak_ptr` to break cycles | Use `Weak<T>` to break cycles |
| Memory leak risk | Yes, if cycles are not broken | Yes, in some cases — logic errors, not unsafe |
| Runtime check? | No automatic detection | No automatic detection either |
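As a minimal sketch of the prevention row above (following the well-known parent/child pattern from the Rust book): the back-edge to the parent is held as `Weak<T>`, so no strong-reference cycle forms and both nodes are freed on drop.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Child points to its parent via Weak; parent owns children via Rc:
// no strong-reference cycle, so everything is freed on drop.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let leaf = Rc::new(Node {
        value: 3,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });

    let branch = Rc::new(Node {
        value: 5,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![Rc::clone(&leaf)]),
    });

    // Back-edge as Weak: does not keep `branch` alive.
    *leaf.parent.borrow_mut() = Rc::downgrade(&branch);

    // Upgrading the weak reference yields the parent while it lives.
    if let Some(parent) = leaf.parent.borrow().upgrade() {
        println!("leaf {} has parent {}", leaf.value, parent.value);
    }
    println!(
        "branch: strong = {}, weak = {}, children = {}",
        Rc::strong_count(&branch),
        Rc::weak_count(&branch),
        branch.children.borrow().len()
    );
}
```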
In the context of safety-critical applications on SoC architectures, safety monitoring is having a revival and gaining increasing importance. ARM Reference Design 1 (RD1) implements a three-level approach to runtime monitoring of functional software, aiming at cost-efficient separation between application and monitoring. This article analyzes the technical foundations of this approach, evaluates its strengths and limitations, and outlines its implications for software and system development – particularly with regard to redundancy, diagnostic coverage, and common-mode risks.
Introduction
Integrating safety-relevant functions into modern SoCs presents significant challenges for developers. This is especially true in mixed-criticality systems, where safety-critical and non-safety-critical software run side by side. Structured monitoring becomes essential in such environments. ARM Reference Design 1 (RD1) addresses this need with a standardized concept that combines three different monitoring layers: application, software monitor, and hardware monitoring. While the concept promises scalability and modularity, it also reaches systemic limits – particularly at higher ASIL requirements.
Architectural Overview
RD1 assigns functional software, monitoring instances, and hardware diagnostics to clearly separated IP blocks. The application itself runs in the so-called High-Performance Island (HP), typically on a Cortex-A-based core under a high-performance operating system such as Linux. The corresponding monitoring instance – referred to as the Function Monitor – is implemented in the Safety Island (SI), usually on a Cortex-R or Cortex-M core running an RTOS such as Zephyr. Hardware-side fault detection is provided by RAS mechanisms (Reliability, Availability, Serviceability), which capture random faults and relay them via interrupts to the software monitor, among other consumers.
Figure: red elements are not conceptually described; black dotted elements are implicitly given at SoC (not board) level.
Software Architecture and Runtime Monitoring
The application in the HP Island is generally developed as Quality Managed (QM) software. It is not subject to the stringent requirements of functional safety as per ISO 26262, as it is not classified with an ASIL rating. While some projects apply systematic error-avoidance measures, a safety-oriented architecture is not part of this software path.
Monitoring is handled by a separate software instance in the Safety Island. In RD1’s baseline concept, this is limited to alive monitoring – a heartbeat mechanism checks whether the application sends signals at regular intervals. Semantic or content-based validation – such as input value checks or actuator state monitoring – is not performed, as the application is not generically interpretable from the monitor’s perspective. This significantly reduces the functional scope of the monitoring.
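To make the mechanism concrete, here is a minimal, illustrative heartbeat sketch in Rust – not RD1 code, just the alive-monitoring idea of a counter that must advance within a deadline:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    // Shared heartbeat counter the application increments periodically.
    let heartbeat = Arc::new(AtomicU64::new(0));

    // Application side: signal liveness every 100 ms.
    let app_hb = Arc::clone(&heartbeat);
    thread::spawn(move || loop {
        app_hb.fetch_add(1, Ordering::Relaxed);
        thread::sleep(Duration::from_millis(100));
    });

    // Monitor side: the counter must advance within each 500 ms window,
    // otherwise the application is considered dead.
    let mut last = heartbeat.load(Ordering::Relaxed);
    for _ in 0..5 {
        thread::sleep(Duration::from_millis(500));
        let now = heartbeat.load(Ordering::Relaxed);
        if now == last {
            println!("Alive monitoring: heartbeat missed -> enter safe state");
            return;
        }
        println!("Alive monitoring: heartbeat OK ({now})");
        last = now;
    }
}
```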
Hardware Diagnostics and Fault Classification
RAS mechanisms complement software monitoring by providing hardware-side detection of random faults, such as ECC errors in RAM or caches. Detection occurs in real time and is typically forwarded to the monitoring instance via interrupt. RAS is especially effective in detecting transient faults that occur briefly and resolve themselves – typically caused by electromagnetic interference. Latent faults can also be detected under certain conditions, provided that sufficient diagnostic coverage is implemented – which is a hell of a lot of work.
Systemic Risks and Architectural Limitations
Despite the separation between HP and Safety Islands, architectural couplings remain – such as shared memory regions, buses, or interrupt sources. These shared resources represent potential common cause faults and possible single points of failure – particularly in cases of voltage drops, clock anomalies, or interconnect failures. While separation at the operating system level is typically ensured, physical isolation is often incomplete, which must be critically evaluated in safety-oriented applications. Looking forward to ARM MPAM with Armv8.4-A.
Development Implications and Redundancy Requirements
From a development perspective, the RD1 concept introduces increased complexity. Two software paths must be independently developed, tested, and validated: the application itself and the function monitor. Since the latter cannot perform complete functional monitoring, additional system-level redundancy is required – for instance, through dual-redundant sensors (2oo2) or diverse actuator paths. Without such measures, the system lacks the ability to detect and compensate for functional failures.
Furthermore, RD1 does not, but technically could, offer true functional redundancy in the sense of lockstep-based dual-core systems or 2oo2 architectures. Monitoring remains limited to structural and temporal aspects, clearly restricting its applicability to ASIL-B and selected ASIL-C functions.
Remember: To determine an Automotive Safety Integrity Level (ASIL), developers and engineers evaluate three key factors – a sketch of how they combine follows the list:
Controllability: This assesses the likelihood that a typical driver or operator could recognize the hazard in time and take appropriate action to avoid injury.
Severity: This refers to the potential seriousness of harm or injury that could result from a hazardous event.
Exposure: This considers how often the operational or environmental conditions occur that could lead to such a hazardous event. In other words, it reflects how frequently a situation arises where the system is at risk—making the frequency of use or occurrence a central aspect of this factor.
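As a sketch, the three classes (S1–S3, E1–E4, C1–C3) combine into an ASIL as in the following Rust snippet; it mirrors the additive structure of the mapping table in ISO 26262-3, which itself remains normative:

```rust
// Minimal sketch: map Severity (S1..S3), Exposure (E1..E4), and
// Controllability (C1..C3) to an ASIL. This mirrors the additive
// structure of the ISO 26262-3 table; the standard is normative.
fn asil(severity: u8, exposure: u8, controllability: u8) -> &'static str {
    assert!((1..=3).contains(&severity));
    assert!((1..=4).contains(&exposure));
    assert!((1..=3).contains(&controllability));
    match severity + exposure + controllability {
        10 => "ASIL D", // only S3/E4/C3
        9 => "ASIL C",
        8 => "ASIL B",
        7 => "ASIL A",
        _ => "QM", // quality managed, no ASIL
    }
}

fn main() {
    // Worst case: highest severity, exposure, and controllability class.
    println!("{}", asil(3, 4, 3)); // ASIL D
    // Lowering any single factor by one class lowers the ASIL by one level.
    println!("{}", asil(3, 4, 2)); // ASIL C
}
```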
Conclusion
ARM RD1 provides a well-considered, modular safety monitoring concept for heterogeneous SoCs. Its three-level approach combines software and hardware mechanisms for runtime monitoring but is primarily suited for applications with medium safety requirements. The lack of functional redundancy and the risk of architecture-related common-mode failures significantly limit its suitability for higher ASIL levels. For developers, this means that additional safety measures outside the RD1 core are necessary – especially concerning sensor/actuator redundancy, communication paths, and fault response strategies. KUDOS to the ARM team. Well done.
In modern large language models (LLMs), tokenizers break down a given input (i.e., a prompt) into smaller units called tokens. For example, the sentence “The OEM's vehicle features are defined by software” would be split into tokens – e.g., every word becomes a token. The LLM processes these tokens and generates a response based on the same tokenized vocabulary. However, this approach differs from how humans analyze information. Humans apply multiple levels of abstraction, such as visual representations like schematic diagrams.
This concept can be likened to creating a presentation slide deck, where visuals are often used instead of dense blocks of text. These layers of abstraction can be referred to as concepts. According to the paper Large Concept Models: Language Modeling in a Sentence Representation Space, concepts represent ideas or actions that are not tied to a specific language or modality. The paper provides an illustrative example of this idea.
One key advantage of this approach is improved handling of long-context information. Since large amounts of text are compressed into a smaller set of concepts, it enhances efficiency. Additionally, the use of concepts enables hierarchical reasoning, which is particularly useful in image-processing tasks where relationships between elements must be understood at different levels.
Concepts can be viewed as a form of compression, where words (or word vectors) are mapped into a concept space through dimensionality reduction. This transformation can also be achieved using neural networks, leading to what is known as neural composition (though this topic is beyond the scope of this discussion).
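A toy sketch of this compression idea: real systems – such as the learned SONAR sentence encoder used in the paper – are trained models, whereas this merely mean-pools word vectors into a single "concept" vector to illustrate the mapping into a concept space.

```rust
// Toy sketch: compress a sequence of word vectors into one "concept"
// vector by mean pooling. Real concept encoders are learned models;
// this only illustrates the idea of many vectors -> one representation.
fn concept_embedding(word_vectors: &[Vec<f32>]) -> Vec<f32> {
    let dim = word_vectors[0].len();
    let mut concept = vec![0.0; dim];
    for v in word_vectors {
        for (c, x) in concept.iter_mut().zip(v) {
            *c += *x;
        }
    }
    let n = word_vectors.len() as f32;
    concept.iter_mut().for_each(|c| *c /= n);
    concept
}

fn main() {
    // Two 2-dimensional word vectors pooled into one concept vector.
    let words = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    println!("{:?}", concept_embedding(&words)); // [0.5, 0.5]
}
```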
Now, let’s consider inference. Similar to traditional LLMs, where a sequence of tokens predicts the next token, in this approach the sequence predicts the next concept instead of a token. The paper illustrates this with diagrams, further expanding on techniques such as diffusion (de-noising) and multi-tower (separation-of-concerns) architectures.
The landscape of software development has undergone significant transformations over the years. While earlier programming languages were primarily designed around a single paradigm, modern software increasingly demands flexibility and adaptability. Multi-paradigm programming languages have emerged as a crucial solution, offering developers the ability to blend different styles—object-oriented, functional, procedural, and more—to build efficient, scalable, and maintainable applications.
Benefits of Multi-Paradigm Programming
Versatility in Problem-Solving
Different programming paradigms cater to different problem domains.
Object-oriented programming (OOP) is well-suited for modular applications.
Functional programming is ideal for handling immutable data and concurrency.
Procedural programming allows structured execution of logic.
Languages such as:
Rust integrates low-level system programming with functional and object-oriented constructs (in weighted order: functional, OOP, procedural) – see the sketch after this list.
JavaScript enables a mix of procedural, OOP, and functional programming for web applications (in weighted order: !event-driven!, functional, OOP).
Python combines procedural simplicity with OOP structure and functional capabilities (in weighted order: procedural, OOP, functional).
This flexibility enables developers to apply the most appropriate paradigm for the given context.
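A minimal Rust sketch of this blending – procedural, functional, and OOP style in one small program (the names are illustrative):

```rust
// One small program, three styles: procedural statements, a functional
// iterator chain, and an OOP-style method attached to a struct.
struct Basket {
    prices: Vec<f64>,
}

impl Basket {
    // OOP: behavior attached to data.
    fn total(&self) -> f64 {
        // Functional: iterator chain without mutable state.
        self.prices.iter().filter(|p| **p > 0.0).sum()
    }
}

fn main() {
    // Procedural: a straightforward sequence of statements.
    let basket = Basket { prices: vec![9.99, 0.0, 15.5] };
    let mut message = String::from("Total: ");
    message.push_str(&basket.total().to_string());
    println!("{message}");
}
```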
Improved Efficiency and Performance
By leveraging multiple paradigms, developers can optimize efficiency. Functional programming, for instance, enhances parallel processing and reduces side effects, while object-oriented (OOP) principles structure large codebases for clarity and reuse. This combination reduces redundancy and boosts maintainability.
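A small illustration of that point: a pure function over immutable slices parallelizes safely with scoped threads (std-only sketch; the split into two halves is just for demonstration):

```rust
use std::thread;

// Pure function: no shared mutable state, safe to run on any thread.
fn sum_of_squares(chunk: &[i64]) -> i64 {
    chunk.iter().map(|x| x * x).sum()
}

fn main() {
    let data: Vec<i64> = (1..=1_000).collect();
    let (left, right) = data.split_at(data.len() / 2);

    // Because the closures only capture immutable slices, the borrow
    // checker allows both halves to be processed in parallel.
    let total = thread::scope(|s| {
        let a = s.spawn(|| sum_of_squares(left));
        let b = s.spawn(|| sum_of_squares(right));
        a.join().unwrap() + b.join().unwrap()
    });

    println!("sum of squares = {total}");
}
```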
Scalability and Maintainability
Multi-paradigm programming allows for modular and adaptable software architectures, which leads to better testability, easier debugging, and improved scalability.
Enhanced Collaboration Across Teams
Teams with diverse expertise can benefit from multi-paradigm languages, as different developers can work within their preferred styles while maintaining a cohesive codebase. This fosters better collaboration and reduces the learning curve for newcomers.
Alignment with Modern Hardware Trends
Modern hardware architectures emphasize multi-core processing. Functional programming aids parallel computation by eliminating shared states, while procedural and object-oriented approaches organize system logic efficiently. Combining these paradigms leads to better utilization of computing resources.
Multi-Paradigm Programming and Design Patterns
Design patterns are essential for software architecture. Multi-paradigm programming enables developers to integrate different design patterns effectively, such as:
Middleware Pattern: Frequently used in frameworks like Express.js, FastAPI, and Actix Web to structure request-handling workflows.
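A framework-agnostic sketch of the pattern – each middleware wraps the next handler in the chain; none of the names below are actual Express.js, FastAPI, or Actix Web APIs:

```rust
// Middleware pattern sketch: each layer wraps the next handler.
type Handler = Box<dyn Fn(&str) -> String>;

fn logging(next: Handler) -> Handler {
    Box::new(move |req| {
        println!("-> request: {req}");
        let res = next(req);
        println!("<- response: {res}");
        res
    })
}

fn auth(next: Handler) -> Handler {
    Box::new(move |req| {
        if req.starts_with("token:") {
            next(req)
        } else {
            "401 Unauthorized".to_string()
        }
    })
}

fn main() {
    // Innermost handler, wrapped by auth, wrapped by logging.
    let endpoint: Handler = Box::new(|_req| "200 OK".to_string());
    let chain = logging(auth(endpoint));
    chain("token:abc /orders");
    chain("/orders"); // rejected by the auth layer
}
```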
The Impact of AI (LLMs)
Artificial intelligence, particularly large language models (LLMs), is reshaping how developers interact with code. AI-assisted development tools introduce new opportunities and challenges in multi-paradigm programming:
AI-generated code may not always align with project-specific requirements or architectural decisions.
Developers risk losing a deep understanding of paradigms if they depend too heavily on AI-generated suggestions.
AI-assisted code might introduce subtle errors that require manual intervention to fix.
The evolution of programming languages toward multi-paradigm support reflects the growing need for flexibility, efficiency, and maintainability in modern software development. By combining different paradigms, developers can tailor solutions to fit diverse scenarios, resulting in more robust and scalable applications.
Rust as a multi-paradigm language supports many of the design goals outlined in the SOLID principles. It emphasizes safety, performance, and concurrency, making it an interesting candidate for evaluating how well it adheres to these principles.
Single Responsibility Principle (SRP)
Components should be small and focused, with each handling only one responsibility.
Rust encourages modularity through its strict ownership and borrowing rules, which inherently promote smaller, focused units of functionality.
Instead of large “God objects,” Rust favors composition through structs, enums, and modules, allowing each component to handle a single responsibility.
The type system and traits (Rust’s version of interfaces) also promote well-defined, single-purpose abstractions.
```rust
struct Order {
    id: u32,
    total: f64,
}

struct OrderValidator;

impl OrderValidator {
    fn validate(&self, order: &Order) -> bool {
        // Validate the order (e.g., non-negative total, valid ID, etc.)
        order.total > 0.0
    }
}

struct PaymentProcessor;

impl PaymentProcessor {
    fn process_payment(&self, order: &Order) {
        // Process payment for the order
        println!(
            "Processing payment for order ID: {} with total: ${}",
            order.id, order.total
        );
    }
}

struct OrderNotifier;

impl OrderNotifier {
    fn send_confirmation(&self, order: &Order) {
        // Notify the customer about the order
        println!("Sending confirmation for order ID: {}", order.id);
    }
}

// Putting it together
fn main() {
    let order = Order { id: 1, total: 100.0 };
    let validator = OrderValidator;
    let payment_processor = PaymentProcessor;
    let notifier = OrderNotifier;

    if validator.validate(&order) {
        payment_processor.process_payment(&order);
        notifier.send_confirmation(&order);
    } else {
        println!("Invalid order. Cannot process.");
    }
}
```
Open/Closed Principle (OCP)
Systems should allow for extending functionality without modifying existing code.
Rust’s traits allow for extensibility without modifying existing code. You can define new traits or implement existing ones for new types to extend functionality.
Enums with pattern matching also enable adding variants while minimizing changes to the code handling them.
However, since Rust lacks inheritance, extending behavior often requires composition rather than subclassing, which can sometimes make adhering to OCP slightly more verbose.
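A minimal OCP sketch with hypothetical shape types – new variants are added by implementing the trait, without touching existing code:

```rust
// Open for extension: new shapes implement the trait.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Rectangle { width: f64, height: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

impl Shape for Rectangle {
    fn area(&self) -> f64 { self.width * self.height }
}

// Closed for modification: works for any current or future Shape.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { radius: 1.0 }),
        Box::new(Rectangle { width: 2.0, height: 3.0 }),
    ];
    println!("total area = {}", total_area(&shapes));
}
```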
Interface Segregation Principle (ISP)
Interfaces should be minimal and specific to avoid forcing components to implement unused functionality.
Rust’s trait system inherently aligns with ISP. Traits can be designed to provide minimal, focused functionality rather than large, monolithic interfaces.
A struct or type can implement multiple small traits, avoiding the pitfalls of being forced to implement unnecessary methods.
```rust
trait Flyable {
    fn fly(&self);
}

trait Swimable {
    fn swim(&self);
}

struct Duck;

impl Flyable for Duck {
    fn fly(&self) {
        println!("Duck is flying");
    }
}

impl Swimable for Duck {
    fn swim(&self) {
        println!("Duck is swimming");
    }
}

struct Penguin;

impl Swimable for Penguin {
    fn swim(&self) {
        println!("Penguin is swimming");
    }
}
```
Dependency Inversion Principle (DIP)
High-level modules should depend on abstractions rather than low-level implementations to ensure flexibility and maintainability.
Rust promotes dependency inversion through its ownership model and the use of trait objects or generics.
High-level modules depend on abstractions (traits), and low-level modules implement these abstractions.
The use of Box<dyn Trait> or generics (impl Trait) allows developers to decouple components effectively.
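A minimal DIP sketch with a hypothetical Logger abstraction, shown with both static dispatch (generics) and dynamic dispatch (Box<dyn Trait>):

```rust
// High-level code depends on the abstraction (trait), not on a
// concrete logger; the types here are hypothetical.
trait Logger {
    fn log(&self, message: &str);
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, message: &str) {
        println!("[console] {message}");
    }
}

// Static dispatch: the service is generic over any Logger.
struct OrderService<L: Logger> {
    logger: L,
}

impl<L: Logger> OrderService<L> {
    fn place_order(&self, id: u32) {
        self.logger.log(&format!("order {id} placed"));
    }
}

fn main() {
    // Dynamic dispatch via a trait object also works.
    let boxed: Box<dyn Logger> = Box::new(ConsoleLogger);
    boxed.log("dynamic dispatch");

    let service = OrderService { logger: ConsoleLogger };
    service.place_order(42);
}
```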
While reflecting on how automotive software system design differs from other industries, I ended up at this overview of common attributes.
Scalability mechanism
Data availability is ensured through replication – creating copies of active and mirrored data – or, beyond replication, through sharding of multiple active data portions, e.g., for caching purposes.
Compute is managed with horizontal and vertical partitioning to enable separation and isolation, along with load balancing to control access to the partitions. Access follows a synchronous, asynchronous, or publish-subscribe communication style and is oriented toward data (aka resources), actions (aka procedure calls), or questions (e.g., GraphQL).
The new ARM features can be categorized into two main groups, each addressing distinct aspects of ISA (Instruction Set Architecture) functionality and design.
The program flow related functionality
The first group encompasses program-flow-related features, which operate primarily at the CPU core level, affecting the memory management unit (MMU) and cache hierarchy. These features directly affect programmers, compiler settings, and ultimately the resulting program flow of the feature itself.
For instance, ARM’s Memory Tagging Extension (MTE) is a significant innovation within this group. MTE operates through the MMU and cache hierarchy, enhancing debugging and runtime memory safety. By detecting potential memory safety violations, it can trigger predefined actions such as logging incidents or terminating processes.
```c
#include <stdlib.h>

// Illustrative sketch: mte_set_tag() and mte_set_pointer_tag() are
// placeholder helpers, not a real API. Actual MTE tagging uses dedicated
// instructions (e.g., IRG/STG) or compiler and allocator support.
void mte_example(void) {
    // Allocate memory and tag the region
    void *buffer = malloc(64);
    // Assign tag 0x3 to the memory region (MTE tags 16-byte granules)
    mte_set_tag(buffer, 0x3);

    char *ptr = buffer;
    // Assign the same tag to the pointer (carried in the address top byte)
    mte_set_pointer_tag(ptr, 0x3);

    // Tags match: access allowed
    ptr[0] = 'A';
    // Out-of-bounds access beyond the tagged region: tag mismatch, fault raised
    ptr[80] = 'B';

    // Free memory
    free(buffer);
}
```
While MTE serves to prevent memory safety issues, its implementation introduces notable changes to control flow, creating additional software variants, increasing demands on quality management, and ultimately contributing to technical debt.
Other examples of ISA-related features include:
Branch Target Identification (BTI): During execution, the processor verifies whether an indirect branch or jump targets a valid BTI-marked location. If it does not, the processor halts execution or traps the error.
Pointer Authentication (PAC): PAC employs cryptographic keys stored securely within the processor (e.g., in hardware registers). It appends a Pointer Authentication Code to pointers and validates their integrity during execution, protecting against unauthorized modifications.
The supervision related functionality
The second group consists of features that rely primarily on CSRs (Control and Status Registers). These registers enable configuration, control, and monitoring of hardware mechanisms, particularly for managing multi-tenant performance. The program flow of a specific feature is barely or not at all impacted. Here, a supervisor – e.g., a hypervisor for VMs or an operating system for processes – injects the necessary control.
Note: To be clear, the CPU cores also have to do processing based on the register values, but the technical realization primarily involves many other components.
A key example is MPAM (Memory Partitioning And Monitoring), which provides resource partitioning and monitoring capabilities for shared resources like caches, interconnects, and memory bandwidth in ARM-based systems-on-chip (SoCs).
MPAM, introduced in the ARMv8.4-A architecture, is implemented in components such as L3 caches, DRAM controllers and interconnects to enforce quality-of-service (QoS) policies and monitor resource usage.
MPAM is designed to enable fine-grained control over system resources in multi-core systems. Its functionalities include:
Partitioning: Allocating memory and cache resources to specific processes, cores, and/or virtual machines (VMs). Most configurations can be made via EL1 registers.
Monitoring: Tracking resource usage, such as memory bandwidth consumption, to facilitate system profiling and optimization.
Enforcement: Preventing resource starvation by ensuring fair allocation and predictable performance across workloads.
Key components involved for MPAM:
L3 Cache (DSU): Segments shared cache (e.g., L3) into partitions for different cores or tasks. Usually per core cluster.
Interconnect (Coherent Mesh Network, short CMN, as part of ARM’s CoreLink product family): Enforces QoS policies, controlling memory bandwidth and traffic prioritization.
Memory Controllers: Allocate memory regions and enforce bandwidth quotas based on MPAM policies.
Accelerators (GPUs): Tag traffic to manage its impact on shared resources.
Peripherals (I/O): Regulate peripheral access to shared system memory and bandwidth.
Note: The memory controller is not the MMU.
The memory controller is responsible for managing physical access to the computer’s memory (RAM). It acts as the interface between the CPU and the physical memory.
The MMU is responsible for virtual memory management and translates virtual addresses (used by software) into physical addresses (used by hardware).