DDS follows the publish-subscribe communication model (in contrast to point-to-point communication models such as plain TCP). A good classification of different communication protocols can be found here (external link to RTI).
The primary pillars of the DDS communication model are:
- Global Data Space (GDS) as cache
- Participants as namespaces in the GDS
- Topics as the communication subject, attached to a GDS participant
- Readers and Writers for topics
- QoS policies as a contract between Reader (requested QoS) and Writer (offered QoS)
- Listeners for events on changes of GDS-related elements, e.g. topics, readers, writers (notified asynchronously via listener, or synchronously via wait-set)
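The pillars above can be sketched as a toy, pure-Python model. All class names (`GlobalDataSpace`, `Participant`, `Writer`, `Reader`) are illustrative assumptions and do not correspond to any real DDS vendor API; the sketch only shows how the pieces relate.

```python
# Toy model of the DDS pillars: GDS cache, participants, writers,
# readers, and listeners. Illustrative only, not a real DDS API.
from collections import defaultdict
from typing import Any, Callable, Optional

class GlobalDataSpace:
    """Toy global data space: a shared cache keyed by topic name."""
    def __init__(self):
        self.cache = defaultdict(list)      # topic name -> list of samples
        self.listeners = defaultdict(list)  # topic name -> event callbacks

class Participant:
    """A namespace inside the GDS that creates readers and writers."""
    def __init__(self, gds: GlobalDataSpace, name: str):
        self.gds, self.name = gds, name

    def create_writer(self, topic: str) -> "Writer":
        return Writer(self.gds, topic)

    def create_reader(self, topic: str,
                      listener: Optional[Callable[[Any], None]] = None) -> "Reader":
        if listener is not None:            # asynchronous notification path
            self.gds.listeners[topic].append(listener)
        return Reader(self.gds, topic)

class Writer:
    def __init__(self, gds: GlobalDataSpace, topic: str):
        self.gds, self.topic = gds, topic

    def write(self, sample: Any) -> None:
        self.gds.cache[self.topic].append(sample)
        for callback in self.gds.listeners[self.topic]:
            callback(sample)                # fire listeners on new data

class Reader:
    def __init__(self, gds: GlobalDataSpace, topic: str):
        self.gds, self.topic = gds, topic

    def take(self):                         # synchronous (polling) path
        samples = self.gds.cache[self.topic]
        self.gds.cache[self.topic] = []
        return samples
```

A reader created with a listener is notified as soon as a writer publishes; a reader without one polls the cache via `take()` — mirroring the asynchronous/synchronous split mentioned above.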
Data objects are addressed per topic via a so-called GDS address. Each GDS address consists of:
- 1 × Domain ID (as grouping for readers and writers), optionally combined with partitions within a domain – both act as logical (sub-)grouping. For the ROS 2 mapping compare here (external link).
- 1 × Topic name
- multiple × key names and values (a kind of subject filter that selects instances; it does not filter the message payload)
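A minimal sketch of such an address, assuming a hypothetical `GdsAddress` dataclass (the field names are illustrative, not from any DDS specification):

```python
# Illustrative GDS address: one domain ID, one topic name, and
# zero or more key name/value pairs that select instances.
from dataclasses import dataclass

@dataclass(frozen=True)
class GdsAddress:
    domain_id: int        # exactly one domain
    topic: str            # exactly one topic name
    keys: tuple = ()      # zero or more (name, value) pairs

# Keys act as a subject filter: they select *instances* of a topic,
# they do not inspect the message payload.
addr_a = GdsAddress(domain_id=0, topic="vehicle_state", keys=(("vin", "WVW123"),))
addr_b = GdsAddress(domain_id=0, topic="vehicle_state", keys=(("vin", "WVW999"),))

# Same topic, different keys -> different instances in the cache.
cache = {addr_a: {"speed": 50}, addr_b: {"speed": 80}}
```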
The standardized QoS policies are:
1. Per GDS cache state, primarily:
- History (Keep last N, Keep all) per topic
- Reliability (Reliable, Best Effort) per message
- Durability (Volatile, Transient Local, Transient, Persistent) per message and “late joining” readers. Controls whether messages written before a reader joins are still delivered, ranging from Volatile (no delivery to late joiners) to Persistent (delivery even after a writer restart).
- etc.
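The History policy in particular is easy to illustrate: a Keep-last-N cache drops the oldest samples once its depth is reached, while Keep-all retains everything. The `HistoryCache` class below is an illustrative toy, not a DDS API; only the KEEP_LAST/KEEP_ALL semantics mirror the specification.

```python
# Toy model of the History QoS: per-topic cache depth.
from collections import deque

class HistoryCache:
    def __init__(self, depth=None):
        # depth=None models KEEP_ALL; depth=n models KEEP_LAST with n samples
        self.samples = deque(maxlen=depth)

    def write(self, sample):
        self.samples.append(sample)   # deque drops the oldest when full

keep_last_2 = HistoryCache(depth=2)
keep_all = HistoryCache()
for value in (1, 2, 3):
    keep_last_2.write(value)
    keep_all.write(value)
# keep_last_2 now holds [2, 3]; keep_all holds [1, 2, 3]
```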
2. Per reader view, such as SQL-92-like content filters, partitioning, ordering, latency budget, etc., with high impact on data distribution properties:
- Data Throughput: Time-based filter + latency budget
- Data Latency: Transport priority + deadline + latency budget
- Data Availability: Lifespan + history + durability + ownership
- Data Delivery: Reliability + order
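To make the content-filter idea concrete, here is a toy reader-side filter in the spirit of the SQL-92-like filter expressions mentioned above. Real DDS filter grammars are far richer; this sketch supports only a single `field = 'value'` comparison and is not a real DDS API.

```python
# Toy reader-side content filter: keeps only samples whose field
# matches the value in a minimal "field = 'value'" expression.
def content_filter(expression, samples):
    field_name, _, value = expression.partition("=")
    field_name = field_name.strip()
    value = value.strip().strip("'")
    return [s for s in samples if str(s.get(field_name)) == value]

samples = [
    {"id": 1, "severity": "WARN"},
    {"id": 2, "severity": "ERROR"},
    {"id": 3, "severity": "ERROR"},
]
errors = content_filter("severity = 'ERROR'", samples)
# errors contains the samples with id 2 and 3
```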
Common patterns according to here (external link):
| Pattern | Reliability | History | Durability | When |
| --- | --- | --- | --- | --- |
| Streaming Data | Best effort | N/A | Volatile | Last value is best, e.g. for digital-twin cases |
| Event/Alarm Data | Reliable | Keep all | Depends on the use case | Asynchronous messages with confirmed delivery |
| State Data | Reliable | Keep last N | Transient local | Synchronized message passing |
Note: The GDS cache behavior depends heavily on the available system resources. It can be controlled via the “Resource limits” QoS (max messages, max reader/writer instances). If no space remains, the data writer blocks and the data reader rejects incoming data. This can cause race conditions.
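The resource-limit behavior can be simulated with a bounded cache. In the sketch below (an illustrative toy, not a DDS API), a real writer would block until space frees up or a deadline expires; here the write simply times out quickly and reports failure.

```python
# Toy model of the "Resource limits" QoS: a bounded per-topic cache.
from queue import Queue, Full

class BoundedTopicCache:
    def __init__(self, max_samples):
        self.q = Queue(maxsize=max_samples)

    def write(self, sample, timeout=0.01):
        # Blocks up to `timeout` waiting for space, then gives up,
        # mimicking a writer hitting the resource limit.
        try:
            self.q.put(sample, timeout=timeout)
            return True
        except Full:
            return False

    def take(self):
        return None if self.q.empty() else self.q.get_nowait()

cache = BoundedTopicCache(max_samples=2)
results = [cache.write(i) for i in range(3)]
# results is [True, True, False]: the third write is rejected
```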