A Digital Twin is a virtual representation of a physical subject. This virtual representation keeps the last known state. In some flavors, aspects (aka functions) extend the digital twin with mutable behavior. Nevertheless, this alone does not make it a software agent. The root question is how the behavior is executed: as soon as it becomes proactive, we can talk about a software agent. According to Wikipedia, it is “a computer program that acts for a user or other program in a relationship of agency” – in other words, a kind of autonomy.
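To make the distinction concrete, here is a minimal Python sketch, assuming a hypothetical subject id and temperature rule: the twin itself only mirrors reported state, while the agent adds its own proactive loop on top.

```python
import threading
import time


class DigitalTwin:
    """Virtual representation that keeps the last known state of a physical subject."""

    def __init__(self, subject_id: str):
        self.subject_id = subject_id
        self.state: dict = {}

    def update_state(self, reported: dict) -> None:
        # Reactive: the twin only records what the physical subject reports.
        self.state.update(reported)


class TwinAgent(DigitalTwin):
    """The twin becomes a software agent once it runs its own proactive loop."""

    def run(self) -> None:
        while True:
            # Proactive: the agent decides and acts without being asked.
            if self.state.get("temperature", 0) > 80:  # hypothetical rule
                print(f"{self.subject_id}: issuing command 'reduce_load'")
            time.sleep(1)


agent = TwinAgent("pump-42")                             # hypothetical subject id
threading.Thread(target=agent.run, daemon=True).start()  # proactive behavior
agent.update_state({"temperature": 85})                  # reactive state update
time.sleep(2)                                            # let the loop act once
```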
Agents follow different objectives. One popular objective is “learning”, e.g. “collaborative” or “transfer” learning, where
- one agent judges the “work” other agents do, roughly comparable to the “reinforcement learning” approach, or
- multiple agents do their “work” on the same or split data sets, and the varying results get combined at the end (aka knowledge distilling, privacy-preserving computing/learning or federated computing/learning).
Common to all these agent (learning) approaches is their decentralized nature. In essence, for the learning objective, the training happens while holding the data samples or judgements locally (possibly at the edge). The information passed between agents consists of the locally, individually gained training results.
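Here is a library-free sketch of that idea, assuming a toy update rule instead of real model training: each agent trains on its own data split, and only the resulting parameters are combined, weighted by sample count in the style of federated averaging (FedAvg).

```python
import numpy as np


def local_training(weights: np.ndarray, local_data: np.ndarray):
    """Stand-in for an agent's local training step; only the result leaves the agent."""
    # Hypothetical update: nudge the weights towards the local data mean.
    updated = weights + 0.1 * (local_data.mean(axis=0) - weights)
    return updated, len(local_data)


def federated_average(results):
    """Combine locally gained training results, weighted by local sample count."""
    total = sum(n for _, n in results)
    return sum(w * (n / total) for w, n in results)


# Each agent works on its own split; the raw data never leaves the agent.
global_weights = np.zeros(3)
splits = [np.random.rand(100, 3), np.random.rand(50, 3), np.random.rand(200, 3)]

for _ in range(5):
    results = [local_training(global_weights, split) for split in splits]
    global_weights = federated_average(results)

print(global_weights)
```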
Two nice software frameworks are
- TensorFlow’s approach to Federated Learning is called TensorFlow Federated (TFF): https://www.tensorflow.org/federated
- Flower, developed by the German startup Adap: https://flower.dev
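As a rough sketch of how the client side of such a framework looks, here is a minimal Flower-style client, reusing the toy update from above. The class and method names follow Flower’s documented NumPyClient interface (around the 1.x releases); treat the exact signatures, the server address and the toy model as assumptions, not a definitive implementation.

```python
import flwr as fl
import numpy as np


class ToyClient(fl.client.NumPyClient):
    """One federated agent: trains locally and returns only parameters, never raw data."""

    def __init__(self, local_data: np.ndarray):
        self.local_data = local_data
        self.weights = np.zeros(3)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Local "training": same toy update as above; a real client trains a model here.
        self.weights = parameters[0] + 0.1 * (self.local_data.mean(axis=0) - parameters[0])
        return [self.weights], len(self.local_data), {}

    def evaluate(self, parameters, config):
        loss = float(np.linalg.norm(parameters[0] - self.local_data.mean(axis=0)))
        return loss, len(self.local_data), {}


# Connect this agent to a running Flower server (address is an assumption).
fl.client.start_numpy_client(
    server_address="127.0.0.1:8080",
    client=ToyClient(np.random.rand(100, 3)),
)
```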
Let’s spend a minute on the combination and/or judgement of one or multiple agents’ results for continuous learning. A good starting point is the classification of reinforcement learning approaches:
- quality (value) optimization (reward per state → action pair, as in Q-learning)
- policy optimization (reward per policy parameters)
- etc.
In all cases a function has to score the result (how well or how badly the objective is met), or to produce candidates which then get selected, as sketched below.
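Here is a sketch of the second variant, scoring and selecting candidates, assuming a toy objective (distance to a target state) and a hypothetical perturbation step as the candidate generator.

```python
import numpy as np


def score(candidate: np.ndarray, target: np.ndarray) -> float:
    """Judge how well the objective is met; here: negative distance to a target state."""
    return -float(np.linalg.norm(candidate - target))


def propose_candidates(current: np.ndarray, n: int = 10):
    """Produce candidates around the current solution (hypothetical perturbation step)."""
    return [current + np.random.normal(scale=0.1, size=current.shape) for _ in range(n)]


# Iterate: propose candidates, score them, keep the best one, repeat.
target = np.array([1.0, -0.5, 2.0])
current = np.zeros(3)
for _ in range(100):
    candidates = propose_candidates(current)
    current = max(candidates, key=lambda c: score(c, target))

print(current, score(current, target))
```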