Today's software architectures are usually microservice architectures, in which multiple independent, distributed software components are loosely coupled. Introducing decoupled and distributed architectures also introduces a new kind of problem: first of all, we have to “mesh” the distributed services and data (the data- and service-mesh thing).
Software proxies (aka brokers), like Linkerd, NGINX, HAProxy, Envoy, or Traefik, play a key role because they
- support the discovery of services and data,
- route traffic between services (aka software components),
- load balance between multiple instances of (usually stateless) services and data shards,
- generate telemetry (aka insights or “observability”), e.g., about service and data usage,
- perform security checks and traffic control, e.g., adding encryption to unencrypted traffic.
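Two of the roles above, load balancing and telemetry, can be sketched in a few lines. This is a toy illustration, not the API of any real proxy; all class and service names are made up.

```python
import itertools
from collections import Counter

class RoundRobinProxy:
    """Toy data-plane proxy: round-robin load balancing over service
    instances, plus a per-instance request counter as minimal telemetry.
    Names are illustrative assumptions, not a real proxy's interface."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)  # endless rotation over instances
        self.telemetry = Counter()                # requests routed per instance

    def route(self, request):
        instance = next(self._cycle)   # pick the next instance in turn
        self.telemetry[instance] += 1  # record the routing decision
        return instance

proxy = RoundRobinProxy(["svc-a-1", "svc-a-2"])
targets = [proxy.route(f"req-{i}") for i in range(4)]
# targets alternates between the two instances; telemetry counts 2 each
```

Real proxies add health checks, retries, and weighted strategies on top of this basic rotation, but the core idea of the data plane is exactly this hot path: pick a target, forward, record.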
In this sense, software proxies are what we call a “data plane” (compare, e.g., https://opendataplane.org). The control plane, like Istio, Nelson, or SmartStack, sits on top and performs the programmatic, dynamic configuration, e.g., deciding where to route the traffic between two applications A and B.
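The split between the two planes can be sketched as follows: the data-plane proxies hold a routing table, and the control plane programmatically pushes updates into it. Again a hedged toy model with invented names, not how Istio or any real control plane is implemented.

```python
class DataPlaneProxy:
    """Toy data-plane proxy: holds a routing table and forwards per-request.
    Illustrative sketch, not a real proxy implementation."""

    def __init__(self):
        self.routes = {}  # source service -> target service

    def route(self, source):
        # The hot path only consults local state; no control-plane call here.
        return self.routes.get(source)

class ControlPlane:
    """Toy control plane: dynamically reconfigures all registered proxies,
    e.g., 'traffic from application A goes to application B'."""

    def __init__(self, proxies):
        self.proxies = proxies

    def set_route(self, source, target):
        for proxy in self.proxies:       # push config to every data-plane proxy
            proxy.routes[source] = target

proxy = DataPlaneProxy()
control_plane = ControlPlane([proxy])
control_plane.set_route("app-A", "app-B")
proxy.route("app-A")  # now resolves to "app-B"
```

The design point is that the control plane is on the slow, configuration path, while the proxies stay on the fast, per-request path with only local state.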
Data mesh is also seen as the new, cool kid on the block. It is the next evolution of “data lakes”, which are the successors of our old data warehouses, extended by “data catalogues”. The mesh moves the concrete question of, e.g., selecting a “data catalogue” to the more abstract question of “being compatible with an API”.
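“Being compatible with an API” can be made concrete with a structural interface check: any data product exposing the agreed methods is mesh-compatible, regardless of which catalogue or storage sits behind it. The interface below is a hypothetical minimal API invented for illustration, not a standard.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class DataProduct(Protocol):
    """Hypothetical minimal API a mesh-compatible data product must expose.
    The method names are assumptions made for this sketch."""

    def schema(self) -> dict: ...
    def read(self, query: str) -> list: ...

class OrdersDataset:
    """Any class with structurally matching methods counts as compatible;
    it never has to inherit from or even know about DataProduct."""

    def schema(self) -> dict:
        return {"order_id": "int", "total": "float"}

    def read(self, query: str) -> list:
        return []  # stub: a real data product would execute the query

isinstance(OrdersDataset(), DataProduct)  # structural compatibility check
```

Note that `runtime_checkable` protocols only verify that the methods exist, not their signatures or semantics; in practice a mesh would also pin down schemas and contracts, but the shift in question is the same: from picking one catalogue product to agreeing on an interface.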