Your own code co-pilot made simple

First, you have to install a backend. The list of available backends is exhaustive. For simplicity, I chose https://ollama.com/. Installers are available for macOS, Windows, and Linux.

curl -fsSL https://ollama.com/install.sh | sh

After the install, start the server:

ollama serve &

Check whether the server is running properly by opening

http://127.0.0.1:11434/

in a browser, or with curl http://127.0.0.1:11434/ – it should report that Ollama is running.

Some useful commands (on Linux with systemd):

sudo systemctl status ollama
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo systemctl restart ollama

Now install the preferred model(s). The easiest way is to browse Hugging Face, pick a GGUF model, and add it to Ollama, e.g.:

ollama run hf.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF:Q2_K

Next, VS Code needs to utilize the model. To do this, install the ‘Continue’ extension and connect it to Ollama.

Afterwards, we can chat and collaboratively edit a flexible context across various files.

Have fun 🙂

Posted in Uncategorized | Leave a comment

OS types and how they differ


C++ Smart Pointers versus Rust Ownership and Borrowing

This week I had a nice discussion on C++ smart pointers versus Rust ownership. My takeaways are the following.

Feature | C++ Smart Pointers | Rust Ownership & Borrowing (Rust also has smart pointers)
Goal | Manage dynamic memory with RAII (“Resource Acquisition Is Initialization”) | Ensure memory safety at compile time
Core Concept | Pointer with automatic resource release | Ownership rules enforced by the compiler
Memory Safety | Runtime safety via smart pointers | Compile-time safety via borrow checker

And while programming

Concept | C++ | Rust
Unique ownership | std::unique_ptr<T> | Box<T> (heap allocation) + ownership transfer
Shared ownership | std::shared_ptr<T> | Rc<T> (single-threaded), Arc<T> (thread-safe)
Non-owning reference | Raw pointer or std::weak_ptr<T> | &T (immutable borrow), &mut T (mutable borrow)
Null safety | nullptr possible | No null references (Option<T> is used instead)
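The null-safety row can be illustrated with a short sketch: where C++ would hand back a possibly-null pointer, Rust encodes absence in the type system via Option<T>. The helper find_even below is a made-up example, not an API from the discussion:

```rust
// Absence is modeled with Option<T>; there is no null to dereference.
fn find_even(values: &[i32]) -> Option<&i32> {
    values.iter().find(|v| **v % 2 == 0)
}

fn main() {
    match find_even(&[1, 3, 4]) {
        Some(v) => println!("found {}", v), // prints "found 4"
        None => println!("no even value"),
    }
}
```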

Code example:

C++:

#include <memory>

std::unique_ptr<int> makeInt() {
    return std::make_unique<int>(42);
}

int main() {
    std::unique_ptr<int> ptr = makeInt();
    // *ptr = 10; // OK
}

Rust:

fn make_int() -> Box<i32> {
    Box::new(42)
}

fn main() {
    let mut ptr = make_int();
    // *ptr = 10; // OK (the binding must be declared mut)
}

Both offer much the same ease of use.

Safety considerations

Category | C++ | Rust
Dangling pointers | Possible with raw pointers or misused shared_ptr | Prevented by ownership rules
Use-after-free | Possible, unless smart pointers are carefully used | Statically eliminated by borrow checker
Data races | Possible in multithreaded contexts | Compiler prevents them via aliasing rules
Reference counting | Runtime cost with shared_ptr/weak_ptr | Runtime cost with Rc/Arc, compile-time borrow checks otherwise
Zero-cost abstraction | Not guaranteed | Strong focus on zero-cost abstractions
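A minimal sketch of what the borrow checker rules out at compile time. The commented-out line is the use-after-move that C++ would happily compile; take_ownership is an illustrative helper:

```rust
// Ownership transfer: after the call, the caller can no longer use `s`.
fn take_ownership(s: String) -> usize {
    s.len() // `s` is dropped when this function returns
}

fn main() {
    let s = String::from("hello");
    let n = take_ownership(s);
    // println!("{}", s); // would not compile: value used after move
    println!("length = {}", n); // prints "length = 5"
}
```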

Reference Cycles

But also consider this nice chapter in the Rust book: “Reference Cycles Can Leak Memory” – The Rust Programming Language.

Even though Rust enforces strict memory safety at compile time, reference cycles are still possible when using reference-counted types like Rc<T> or Arc<T>—similar to shared_ptr in C++. This is one of the few scenarios in which Rust can leak memory, albeit safely (i.e., without undefined behavior).

Concept | C++ | Rust
Reference cycles | Can occur with std::shared_ptr | Can occur with Rc<T> or Arc<T>
Prevention | Use std::weak_ptr to break cycles | Use Weak<T> to break cycles
Memory leak risk | Yes, if cycles are not broken | Yes, in some cases (logic errors, not unsafe)
Runtime check? | No automatic detection | No automatic detection either
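A sketch of the prevention pattern, loosely following the tree example from that chapter: children are owned via Rc<T>, while the parent back-reference uses Weak<T>, so no strong cycle forms. build_tree is an illustrative helper:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children are owned (Rc); the parent link is non-owning (Weak),
// so parent <-> child does not create a strong reference cycle.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn build_tree() -> (Rc<Node>, Rc<Node>) {
    let leaf = Rc::new(Node {
        value: 3,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let branch = Rc::new(Node {
        value: 5,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![Rc::clone(&leaf)]),
    });
    // Weak back-reference: does not increase the strong count.
    *leaf.parent.borrow_mut() = Rc::downgrade(&branch);
    (leaf, branch)
}

fn main() {
    let (leaf, branch) = build_tree();
    println!("leaf {}: strong = {}", leaf.value, Rc::strong_count(&leaf));
    println!("branch {}: strong = {}, weak = {}", branch.value,
             Rc::strong_count(&branch), Rc::weak_count(&branch));
}
```

When branch goes out of scope its strong count drops to zero and it is freed, despite the child still holding a Weak pointer to it.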

ARM RD1 – a first thought

Abstract

In the context of safety-critical applications on SoC architectures, safety monitoring is having a revival and gaining increasing importance. ARM Reference Design 1 (RD1) implements a three-level approach to runtime monitoring of functional software, aiming at cost-efficient separation between application and monitoring. This article analyzes the technical foundations of this approach, evaluates its strengths and limitations, and outlines its implications for software and system development – particularly with regard to redundancy, diagnostic coverage, and common-mode risks.

Introduction

Integrating safety-relevant functions into modern SoCs presents significant challenges for developers. This is especially true in mixed-criticality systems, where safety-critical and non-safety-critical software run side by side. Structured monitoring becomes essential in such environments. ARM Reference Design 1 (RD1) addresses this need with a standardized concept that combines three different monitoring layers: application, software monitor, and hardware monitoring. While the concept promises scalability and modularity, it also reaches systemic limits – particularly at higher ASIL requirements.

Architectural Overview

RD1 assigns functional software, monitoring instances, and hardware diagnostics to clearly separated IP blocks. The application itself runs in the so-called High-Performance Island (HP), typically on a Cortex-A-based core under a high-performance operating system such as Linux. The corresponding monitoring instance – referred to as the Function Monitor – is implemented in the Safety Island (SI), usually on a Cortex-R or Cortex-M core running an RTOS such as Zephyr. Hardware-side fault detection is provided by RAS mechanisms (Reliability, Availability, Serviceability), which capture random faults and relay them via interrupts to the software monitor (among others).

Figure: red elements are not conceptually described; black dotted elements are implicitly given at SoC (not board) level.

Software Architecture and Runtime Monitoring

The application in the HP Island is generally developed as Quality Managed (QM) software. It is not subject to the stringent requirements of functional safety as per ISO 26262, as it is not classified with an ASIL rating. While some projects apply systematic error-avoidance measures, a safety-oriented architecture is not part of this software path.

Monitoring is handled by a separate software instance in the Safety Island. In RD1’s baseline concept, this is limited to alive monitoring – a heartbeat mechanism checks whether the application sends signals at regular intervals. Semantic or content-based validation – such as input value checks or actuator state monitoring – is not performed, as the application is not generically interpretable from the monitor’s perspective. This significantly reduces the functional scope of the monitoring.

Hardware Diagnostics and Fault Classification

RAS mechanisms complement software monitoring by providing hardware-side detection of random faults, such as ECC errors in RAM or caches. Detection occurs in real time and is typically forwarded to the monitoring instance via interrupt. RAS is especially effective in detecting transient faults that occur briefly and resolve themselves – typically caused by electromagnetic interference. Latent faults can also be detected under certain conditions, provided that sufficient diagnostic coverage is implemented – a considerable amount of work.

Systemic Risks and Architectural Limitations

Despite the separation between HP and Safety Islands, architectural couplings remain – such as shared memory regions, buses, or interrupt sources. These shared resources represent potential common-cause faults and possible single points of failure – particularly in cases of voltage drops, clock anomalies, or interconnect failures. While separation at the operating-system level is typically ensured, physical isolation is often incomplete, which must be critically evaluated in safety-oriented applications. Looking forward to ARM MPAM, introduced with Armv8.4.

Development Implications and Redundancy Requirements

From a development perspective, the RD1 concept introduces increased complexity. Two software paths must be independently developed, tested, and validated: the application itself and the function monitor. Since the latter cannot perform complete functional monitoring, additional system-level redundancy is required – for instance, through dual-redundant sensors (2oo2) or diverse actuator paths. Without such measures, the system lacks the ability to detect and compensate for functional failures.

Furthermore, RD1 does not, but technically could, offer true functional redundancy in the sense of lockstep-based dual-core systems or 2oo2 architectures. Monitoring remains limited to structural and temporal aspects, clearly restricting its applicability to ASIL-B and selected ASIL-C functions.

Remember: To determine an Automotive Safety Integrity Level (ASIL), developers and engineers evaluate three key factors:

  • Controllability: This assesses the likelihood that a typical driver or operator could recognize the hazard in time and take appropriate action to avoid injury.
  • Severity: This refers to the potential seriousness of harm or injury that could result from a hazardous event.
  • Exposure: This considers how often the operational or environmental conditions occur that could lead to such a hazardous event. In other words, it reflects how frequently a situation arises in which the system is at risk, making the frequency of use or occurrence a central aspect of this factor.

Conclusion

ARM RD1 provides a well-considered, modular safety monitoring concept for heterogeneous SoCs. Its three-level approach combines software and hardware mechanisms for runtime monitoring but is primarily suited for applications with medium safety requirements. The lack of functional redundancy and the risk of architecture-related common-mode failures significantly limit its suitability for higher ASIL levels. For developers, this means that additional safety measures outside the RD1 core are necessary – especially concerning sensor/actuator redundancy, communication paths, and fault response strategies. KUDOS to the ARM team. Well done.


Large Concept Models

In modern large language models (LLMs), tokenizers break a given input (i.e., a prompt) down into smaller units called tokens. For example, the sentence “The OEM’s vehicle features are defined by software” would be split into tokens – e.g., roughly one token per word. The LLM processes these tokens and generates a response based on the same tokenized vocabulary. However, this approach differs from how humans analyze information. Humans apply multiple levels of abstraction, such as visual representations like schematic diagrams.
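A toy sketch of the whole-word split described above (an oversimplification: real LLM tokenizers use subword vocabularies such as BPE, so actual token counts differ):

```rust
// Naive whole-word tokenization; production tokenizers operate on subwords.
fn tokenize(prompt: &str) -> Vec<&str> {
    prompt.split_whitespace().collect()
}

fn main() {
    let tokens = tokenize("The OEM's vehicle features are defined by software");
    println!("{} tokens: {:?}", tokens.len(), tokens); // 8 tokens
}
```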

This concept can be likened to creating a presentation slide deck, where visuals are often used instead of dense blocks of text. These layers of abstraction can be referred to as concepts. According to the paper Large Concept Models: Language Modeling in a Sentence Representation Space, concepts represent ideas or actions that are not tied to a specific language or modality. The paper provides an illustrative example of this idea.

Source: 2412.08821v2

One key advantage of this approach is improved handling of long-context information. Since large amounts of text are compressed into a smaller set of concepts, it enhances efficiency. Additionally, the use of concepts enables hierarchical reasoning, which is particularly useful in image-processing tasks where relationships between elements must be understood at different levels.

Concepts can be viewed as a form of compression, where words (or word vectors) are mapped into a concept space through dimensionality reduction. This transformation can also be achieved using neural networks, leading to what is known as neural composition (though this topic is beyond the scope of this discussion).

Now, let’s consider inference. Similar to traditional LLMs, where a sequence of tokens predicts the next token, in this approach the sequence predicts the next concept instead of a token. The paper illustrates this with diagrams, further expanding on techniques such as diffusion (de-noising) and multi-tower (separation-of-concerns) architectures.

Source: 2412.08821v2


The Role of Multi-Paradigm Programming Languages in Modern Software Development

Introduction

The landscape of software development has undergone significant transformations over the years. While earlier programming languages were primarily designed around a single paradigm, modern software increasingly demands flexibility and adaptability. Multi-paradigm programming languages have emerged as a crucial solution, offering developers the ability to blend different styles—object-oriented, functional, procedural, and more—to build efficient, scalable, and maintainable applications.

Benefits of Multi-Paradigm Programming

Versatility in Problem-Solving

Different programming paradigms cater to different problem domains.

  • Object-oriented programming (OOP) is well-suited for modular applications.
  • Functional programming is ideal for handling immutable data and concurrency.
  • Procedural programming allows structured execution of logic.

Languages such as:

  • Rust integrate low-level system programming with functional and object-oriented constructs (in weighted order: functional, OOP, procedural).
  • JavaScript enables a mix of procedural, OOP, and functional programming for web applications (in weighted order: event-driven, functional, OOP).
  • Python combines procedural simplicity with OOP structure and functional capabilities (in weighted order: procedural, OOP, functional).

This flexibility enables developers to apply the most appropriate paradigm for the given context.
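As a small illustration in Rust (itself a multi-paradigm language), here is the same task solved procedurally and functionally; both helper functions are made up for this example:

```rust
// Same task, two paradigms: sum of squares of the even numbers.

// Procedural style: explicit loop and a mutable accumulator.
fn sum_even_squares_proc(data: &[i32]) -> i32 {
    let mut sum = 0;
    for x in data {
        if x % 2 == 0 {
            sum += x * x;
        }
    }
    sum
}

// Functional style: an iterator pipeline with no mutation.
fn sum_even_squares_func(data: &[i32]) -> i32 {
    data.iter().filter(|x| **x % 2 == 0).map(|x| x * x).sum()
}

fn main() {
    let data = [1, 2, 3, 4, 5];
    // Both yield 4 + 16 = 20.
    println!("{} == {}", sum_even_squares_proc(&data), sum_even_squares_func(&data));
}
```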

Improved Efficiency and Performance

By leveraging multiple paradigms, developers can optimize efficiency.

  • Functional programming, for instance, enhances parallel processing and reduces side effects,
  • while object-oriented (OOP) principles structure large codebases for clarity and reuse.

This combination reduces redundancy and boosts maintainability.

Scalability and Maintainability

Multi-paradigm programming allows for modular and adaptable software architectures.

  • Functional programming reduces unwanted dependencies,
  • OOP encapsulates complex logic and
  • procedural paradigms enforce clear execution paths.

These factors lead to better testability, easier debugging, and improved scalability.

Enhanced Collaboration Across Teams

Teams with diverse expertise can benefit from multi-paradigm languages, as different developers can work within their preferred styles while maintaining a cohesive codebase. This fosters better collaboration and reduces the learning curve for newcomers.

Alignment with Modern Hardware Trends

Modern hardware architectures emphasize multi-core processing. Functional programming aids parallel computation by eliminating shared states, while procedural and object-oriented approaches organize system logic efficiently. Combining these paradigms leads to better utilization of computing resources.
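A minimal Rust sketch of this idea: each thread owns its own chunk of the data, so the parallel computation needs no locks or shared mutable state (parallel_sum is a hypothetical helper):

```rust
use std::thread;

// Two worker threads each own their chunk; no shared mutable state, no locks.
fn parallel_sum(data: Vec<i64>) -> i64 {
    let mid = data.len() / 2;
    let (left, right) = (data[..mid].to_vec(), data[mid..].to_vec());
    let h1 = thread::spawn(move || left.iter().sum::<i64>());
    let h2 = thread::spawn(move || right.iter().sum::<i64>());
    h1.join().unwrap() + h2.join().unwrap()
}

fn main() {
    let total = parallel_sum((1..=1000).collect());
    println!("{}", total); // 500500
}
```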

Multi-Paradigm Programming and Design Patterns

Design patterns are essential for software architecture. Multi-paradigm programming enables developers to integrate different design patterns effectively, such as:

  • OOP Patterns: Singleton, Factory, Observer, Strategy
  • Functional Patterns: Functors, Monads, Pipelines
  • Middleware Pattern: Frequently used in frameworks like Express.js, FastAPI, and Actix Web to structure request-handling workflows.
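As a small illustration of blending these pattern families, the OOP Strategy pattern can be expressed functionally by passing the algorithm as a closure; apply_discount and the two strategies are hypothetical examples:

```rust
// Strategy pattern, functional flavor: the interchangeable algorithm is a closure.
fn apply_discount(price: f64, strategy: &dyn Fn(f64) -> f64) -> f64 {
    strategy(price)
}

fn main() {
    let ten_percent_off = |p: f64| p * 0.9;
    let flat_five_off = |p: f64| p - 5.0;
    println!("{}", apply_discount(100.0, &ten_percent_off)); // 90
    println!("{}", apply_discount(100.0, &flat_five_off));   // 95
}
```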

The Impact of AI (LLMs)

Artificial intelligence, particularly large language models (LLMs), is reshaping how developers interact with code. AI-assisted development tools introduce new opportunities and challenges in multi-paradigm programming:

  • AI-generated code may not always align with project-specific requirements or architectural decisions.
  • Developers risk losing a deep understanding of paradigms if they depend too heavily on AI-generated suggestions.
  • AI-assisted code might introduce subtle errors that require manual intervention to fix.

But single-file problems can already be solved well: https://github.com/BDUG/llmgames

Conclusion

The evolution of programming languages toward multi-paradigm support reflects the growing need for flexibility, efficiency, and maintainability in modern software development. By combining different paradigms, developers can tailor solutions to fit diverse scenarios, resulting in more robust and scalable applications.


CNCF landscape overview

The https://landscape.cncf.io can be morphed according to the following schema:

Demand Activity | Purpose/Goal | Tools
Requirement Gathering | Collect and define project needs and objectives. | Jira, Confluence, Miro, Trello, Google Docs
Architecture Design | Create high-level designs and system architecture. | Lucidchart, Draw.io, AWS Well-Architected Tool, Azure Architecture Center, Visio
Source Code Management | Track, review, and manage code changes. | Git, GitHub, GitLab, Bitbucket
Infrastructure as Code (IaC) | Automate infrastructure provisioning and management. | Terraform, Pulumi, AWS CloudFormation, Ansible, Chef, SaltStack
Containerization | Package applications with dependencies for portability. | Docker, Buildpacks, Podman
Continuous Integration (CI) | Automate code builds and initial testing. | Jenkins, GitHub Actions, GitLab CI, CircleCI, Bamboo
Continuous Deployment (CD) | Automate application deployment to environments. | ArgoCD, Spinnaker, FluxCD, Tekton, Harness
Testing (Automated) | Ensure application quality through automated tests. | Selenium, JUnit, Cypress, Postman, SonarQube, OWASP ZAP
Monitoring and Observability | Track system performance and detect issues. | Prometheus, Grafana, Datadog, New Relic, Splunk, ELK Stack
Logging and Tracing | Collect logs and trace application requests for debugging. | Elasticsearch, Logstash, Kibana (ELK), Fluentd, OpenTelemetry
Vulnerability Scanning | Identify security vulnerabilities in code and dependencies. | Snyk, Trivy, Clair, Aqua Security, Twistlock
Secrets Management | Securely manage sensitive information (e.g., passwords). | HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, CyberArk
Deployment Orchestration | Manage deployments across clusters. | Kubernetes, Helm, kubectl, Docker Swarm
Scaling and Resource Management | Dynamically scale resources based on demand. | Kubernetes Autoscaler, AWS Auto Scaling, GCP Cloud Functions, Azure Monitor Autoscale
Alerting and Incident Response | Notify teams of issues and resolve incidents. | PagerDuty, Opsgenie, VictorOps, Prometheus Alertmanager, Splunk On-Call
Cost Management and Optimization | Analyze and reduce cloud costs. | AWS Cost Explorer, GCP Cost Management, Azure Cost Management, CloudHealth by VMware
Compliance Management | Ensure adherence to industry standards and regulations. | Prisma Cloud, Open Policy Agent (OPA), AWS Audit Manager, Azure Security Center
Runtime Security | Protect running applications from threats. | Falco, Sysdig, Aqua Security, Twistlock, StackRox

Rust and the SOLID principles

Rust, as a multi-paradigm language, supports many of the design goals outlined in the SOLID principles. It emphasizes safety, performance, and concurrency, making it an interesting candidate for evaluating how well it adheres to these principles.

Single Responsibility Principle (SRP)

Components should be small and focused, with each handling only one responsibility.

  • Rust encourages modularity through its strict ownership and borrowing rules, which inherently promote smaller, focused units of functionality.
    • Instead of large “God objects,” Rust favors composition through structs, enums, and modules, allowing each component to handle a single responsibility.
    • The type system and traits (Rust’s version of interfaces) also promote well-defined, single-purpose abstractions.
struct Order {
    id: u32,
    total: f64,
}

struct OrderValidator;

impl OrderValidator {
    fn validate(&self, order: &Order) -> bool {
        // Validate the order (e.g., non-negative total, valid ID, etc.)
        order.total > 0.0
    }
}

struct PaymentProcessor;

impl PaymentProcessor {
    fn process_payment(&self, order: &Order) {
        // Process payment for the order
        println!("Processing payment for order ID: {} with total: ${}", order.id, order.total);
    }
}

struct OrderNotifier;

impl OrderNotifier {
    fn send_confirmation(&self, order: &Order) {
        // Notify the customer about the order
        println!("Sending confirmation for order ID: {}", order.id);
    }
}

// Putting it together
fn main() {
    let order = Order { id: 1, total: 100.0 };

    let validator = OrderValidator;
    let payment_processor = PaymentProcessor;
    let notifier = OrderNotifier;

    if validator.validate(&order) {
        payment_processor.process_payment(&order);
        notifier.send_confirmation(&order);
    } else {
        println!("Invalid order. Cannot process.");
    }
}

Open/Closed Principle (OCP)

Systems should allow for extending functionality without modifying existing code.

  • Rust’s traits allow for extensibility without modifying existing code. You can define new traits or implement existing ones for new types to extend functionality.
    • Enums with pattern matching also enable adding variants while minimizing changes to the code handling them.
    • However, since Rust lacks inheritance, extending behavior often requires composition rather than subclassing, which can sometimes make adhering to OCP slightly more verbose.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

// Extension without modification: a new type simply implements the
// existing trait; no existing code has to change.
struct Rectangle {
    width: f64,
    height: f64,
}

impl Shape for Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types without altering the correctness of the program.

  • Rust avoids classical inheritance, so LSP is achieved through traits and polymorphism.
    • By implementing traits, different types can be substituted for each other as long as they conform to the expected interface.
    • Rust’s strict type system and compile-time checks ensure that substitution errors are caught early.
trait Logger {
    fn log(&self, message: &str);
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, message: &str) {
        println!("Console log: {}", message);
    }
}

struct FileLogger;

impl Logger for FileLogger {
    fn log(&self, message: &str) {
        println!("Writing to file: {}", message); // Simulate file logging
    }
}

fn use_logger(logger: &dyn Logger, message: &str) {
    logger.log(message);
}

Interface Segregation Principle (ISP)

Interfaces should be minimal and specific to avoid forcing components to implement unused functionality.

  • Rust’s trait system inherently aligns with ISP. Traits can be designed to provide minimal, focused functionality rather than large, monolithic interfaces.
  • A struct or type can implement multiple small traits, avoiding the pitfalls of being forced to implement unnecessary methods.
trait Flyable {
    fn fly(&self);
}

trait Swimable {
    fn swim(&self);
}

struct Duck;

impl Flyable for Duck {
    fn fly(&self) {
        println!("Duck is flying");
    }
}

impl Swimable for Duck {
    fn swim(&self) {
        println!("Duck is swimming");
    }
}

struct Penguin;

impl Swimable for Penguin {
    fn swim(&self) {
        println!("Penguin is swimming");
    }
}

Dependency Inversion Principle (DIP)

High-level modules should depend on abstractions rather than low-level implementations to ensure flexibility and maintainability.

  • Rust promotes dependency inversion through its ownership model and the use of trait objects or generics.
    • High-level modules depend on abstractions (traits), and low-level modules implement these abstractions.
    • The use of Box<dyn Trait> or generics (impl Trait) allows developers to decouple components effectively.
trait PaymentProcessor {
    fn process_payment(&self, amount: f64);
}

struct PayPal;

impl PaymentProcessor for PayPal {
    fn process_payment(&self, amount: f64) {
        println!("Processing ${} payment via PayPal", amount);
    }
}

struct Stripe;

impl PaymentProcessor for Stripe {
    fn process_payment(&self, amount: f64) {
        println!("Processing ${} payment via Stripe", amount);
    }
}

struct PaymentService<'a> {
    processor: &'a dyn PaymentProcessor,
}

impl<'a> PaymentService<'a> {
    fn new(processor: &'a dyn PaymentProcessor) -> Self {
        Self { processor }
    }

    fn pay(&self, amount: f64) {
        self.processor.process_payment(amount);
    }
}

In summary, Rust aligns well with the SOLID principles, albeit with a different approach than traditional OOP languages. 


System design Attributes

While reflecting on how automotive software system design differs from other industries, I ended up at this overview of common attributes.

Scalability mechanism

  • Data, with the aspect of availability, is ensured through replication (creating copies of active and mirrored data) or, besides replication, through sharding of multiple active data portions, e.g., for caching purposes.
  • Compute is managed with horizontal and vertical partitioning to enable separation and isolation, along with load balancing to control access to the partitions. Access follows either a synchronous, asynchronous, or publish-subscribe communication style, oriented toward data (aka resources), actions (aka procedure calls), or questions, e.g., GraphQL.
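The sharding mentioned above can be sketched as a deterministic key-to-partition mapping. This is a toy model, not a production partitioner (real systems typically use consistent hashing to survive resharding):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Map a key deterministically to one of `num_shards` data partitions.
fn shard_for<K: Hash>(key: &K, num_shards: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    hasher.finish() % num_shards
}

fn main() {
    for key in ["vehicle-123", "vehicle-456", "vehicle-789"] {
        println!("{} -> shard {}", key, shard_for(&key, 4));
    }
}
```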


ARM features classified

The new ARM features can be categorized into two main groups, each addressing distinct aspects of ISA (Instruction Set Architecture) functionality and design.

The program flow related functionality

The first group encompasses program-flow-related features, which operate primarily at the CPU core level, affecting the memory management unit (MMU) and cache hierarchy. These features directly affect feature programmers, compiler settings, and ultimately the resulting program flow of the feature itself.

For instance, ARM’s Memory Tagging Extension (MTE) is a significant innovation within this group. MTE operates through the MMU and cache hierarchy, enhancing debugging and runtime memory safety. By detecting potential memory safety violations, it can trigger predefined actions such as logging incidents or terminating processes. 

// Illustrative pseudo-code: mte_set_tag() and mte_set_pointer_tag() are not a
// real API; actual MTE tagging relies on compiler/OS support and dedicated
// instructions (e.g., IRG/STG).

// Allocate memory and set a tag
void *buffer = malloc(64);

// Assign tag 0x3 to the 64-byte memory region
mte_set_tag(buffer, 0x3);
char *ptr = buffer;

// Assign the same tag to the pointer
mte_set_pointer_tag(ptr, 0x3);

// Tags match: access allowed
ptr[0] = 'A';

// Unsafe access (out of bounds of the tagged region): fault raised
ptr[80] = 'B';

// Free memory
free(buffer);

While MTE serves to prevent memory safety issues, its implementation introduces notable changes to control flow, creating additional software variants, increasing demands on quality management, and ultimately contributing to technical debt.

Other examples of ISA-related features include:

  • Branch Target Identification (BTI): During execution, the processor verifies whether an indirect branch or jump targets a valid BTI-marked location. If it does not, the processor halts execution or traps the error.
  • Pointer Authentication (PAC): PAC employs cryptographic keys stored securely within the processor (e.g., in hardware registers). It appends a Pointer Authentication Code to pointers and validates their integrity during execution, protecting against unauthorized modifications.

The supervision related functionality 

The second group consists of features that rely primarily on CSRs (Control and Status Registers). These registers enable configuration, control, and monitoring of hardware mechanisms, particularly for managing multi-tenant performance. The program flow of a specific feature is barely, if at all, impacted. Here, a supervisor (e.g., a hypervisor for VMs or an operating system for processes) injects the necessary control.

Note: To be clear, the CPU cores also have to do processing based on the register values, but the technical realization primarily involves many other components.

A key example is MPAM (Memory Partitioning And Monitoring), which provides resource partitioning and monitoring capabilities for shared resources like caches, interconnects, and memory bandwidth in ARM-based systems-on-chip (SoCs). 

MPAM, introduced in the ARMv8.4-A architecture, is implemented in components such as L3 caches, DRAM controllers and interconnects to enforce quality-of-service (QoS) policies and monitor resource usage.

MPAM is designed to enable fine-grained control over system resources in multi-core systems. Its functionalities include:

  • Partitioning: Allocating memory and cache resources to specific processes, cores, and/or virtual machines (VMs). Most configurations can be made via EL1 registers.
  • Monitoring: Tracking resource usage, such as memory bandwidth consumption, to facilitate system profiling and optimization.
  • Enforcement: Preventing resource starvation by ensuring fair allocation and predictable performance across workloads.
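As a purely conceptual toy model of the enforcement idea (an assumption for illustration only: real MPAM is configured via system registers such as those at EL1 and enforced in hardware, not via application code like this):

```rust
use std::collections::HashMap;

// Toy model: each partition ID gets a bandwidth quota (in percent),
// and requests beyond the quota are rejected.
struct BandwidthController {
    quotas: HashMap<u8, u8>, // partition ID -> maximum share in percent
    usage: HashMap<u8, u8>,  // partition ID -> share currently in use
}

impl BandwidthController {
    fn new() -> Self {
        Self { quotas: HashMap::new(), usage: HashMap::new() }
    }

    fn set_quota(&mut self, partid: u8, percent: u8) {
        self.quotas.insert(partid, percent);
    }

    fn request(&mut self, partid: u8, percent: u8) -> bool {
        let quota = *self.quotas.get(&partid).unwrap_or(&0);
        let used = self.usage.entry(partid).or_insert(0);
        if *used + percent <= quota {
            *used += percent;
            true
        } else {
            false // enforcement: the request would exceed the partition's quota
        }
    }
}

fn main() {
    let mut ctrl = BandwidthController::new();
    ctrl.set_quota(1, 60); // e.g., safety island partition
    ctrl.set_quota(2, 40); // e.g., high-performance island partition
    println!("partition 1, 50%: {}", ctrl.request(1, 50)); // granted
    println!("partition 2, 50%: {}", ctrl.request(2, 50)); // denied: exceeds quota
}
```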

Key components involved for MPAM:

  • L3 Cache (DSU): Segments shared cache (e.g., L3) into partitions for different cores or tasks. Usually per core cluster.
  • Interconnect (Coherent Mesh Network, short CMN, as part of ARM’s CoreLink product family): Enforces QoS policies, controlling memory bandwidth and traffic prioritization.
  • Memory Controllers: Allocate memory regions and enforce bandwidth quotas based on MPAM policies.
  • Accelerators (GPUs): Tag traffic to manage their impact on shared resources.
  • Peripherals (I/O): Regulate peripheral access to shared system memory and bandwidth.

Note: The memory controller is not the MMU.

  • The Memory controller is responsible for managing physical access to the computer’s memory (RAM). It acts as the interface between the CPU and the physical memory.
  • The MMU is responsible for virtual memory management and translates virtual addresses (used by software) into physical addresses (used by hardware).
