Agents and collaborative learning

Collaborative, distributed, or federated learning focuses on the joint optimization of a model, with maximum accuracy (or minimal error) in mind. Compare this earlier post.

The answer depends on the statistics and the distribution of the value instances. If, for example, a given value is 3 and most value instances cluster around 3 (the mean value), then the information content is high. The mean value is also known as the expected value. More value instances (more learning agents) lead to variance reduction; compare the law of large numbers.
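
A minimal sketch of this variance-reduction effect, assuming each agent reports a noisy estimate of the same underlying value; the variance of the averaged estimate shrinks roughly as 1/n with the number of agents, in line with the law of large numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 3.0   # underlying value the agents try to estimate (assumed)
noise_std = 1.0    # per-agent observation noise (assumed)

for n_agents in (1, 10, 100, 1000):
    # each agent contributes one noisy value instance; repeat the experiment 10,000 times
    estimates = true_value + noise_std * rng.standard_normal((10_000, n_agents))
    combined = estimates.mean(axis=1)   # average over the agents
    print(f"{n_agents:>5} agents -> empirical variance of the mean: "
          f"{combined.var():.4f} (theory: {noise_std**2 / n_agents:.4f})")
```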

The commonly applied optimization approaches relate to ensembling / bagging: algorithms such as Random Forest train multiple weak learning models in parallel and combine their results.
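
A minimal bagging sketch, assuming scikit-learn is available: several weak decision-tree regressors are trained on bootstrap samples (conceptually in parallel), and their predictions are averaged, which is essentially what Random Forest does:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(200)

# train weak learners on bootstrap samples of the data
weak_learners = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
    tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])
    weak_learners.append(tree)

# combine the results: average the individual predictions
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
bagged_prediction = np.mean([t.predict(X_test) for t in weak_learners], axis=0)
print(bagged_prediction)
```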

In practice this means, for example:

  1. aggregate multiple regression models into one combined mean model
  2. aggregate multiple neural-network weight sets (of all linear components) into one mean weight set, as in the sketch below
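
A minimal sketch of points 1 and 2, assuming each agent has fitted a linear regression on its own local data; the coefficient vectors (the "weight sets") are simply averaged into one mean model, in the spirit of federated averaging:

```python
import numpy as np

rng = np.random.default_rng(2)
true_weights = np.array([2.0, -1.0, 0.5])   # shared underlying relationship (assumed)

def fit_local_model(n_samples=100):
    """One agent fits an ordinary least-squares regression on its local data."""
    X = rng.standard_normal((n_samples, 3))
    y = X @ true_weights + 0.1 * rng.standard_normal(n_samples)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# each agent contributes one weight set
local_weight_sets = [fit_local_model() for _ in range(5)]

# aggregate the weight sets into one combined mean model
combined_weights = np.mean(local_weight_sets, axis=0)
print("combined weights:", combined_weights)
```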

For numeric data types this works very well. For categorical features, feature-translation approaches have to be applied first, e.g., one-hot encoding.

One-hot encoding:

  man  woman
   0     1    = woman
   1     0    = man
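
A minimal one-hot encoding sketch in plain numpy, assuming a two-category feature as in the table above:

```python
import numpy as np

categories = ["man", "woman"]               # column order of the encoding
values = ["woman", "man", "man", "woman"]   # example feature values (assumed)

# each value becomes a 0/1 vector with a single 1 in its category's column
one_hot = np.array([[1 if v == c else 0 for c in categories] for v in values])
print(one_hot)
# [[0 1]   -> woman
#  [1 0]   -> man
#  [1 0]   -> man
#  [0 1]]  -> woman
```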

This is a specific (algebraic) representation, comparable to language-processing techniques such as bag of words.
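
For comparison, a minimal bag-of-words sketch over an assumed toy corpus; each document becomes a count vector over the shared vocabulary, the same kind of algebraic representation:

```python
from collections import Counter

documents = ["the agent learns", "the agent and the model"]   # toy corpus (assumed)
vocabulary = sorted({word for doc in documents for word in doc.split()})

# each document becomes a vector of word counts over the shared vocabulary
for doc in documents:
    counts = Counter(doc.split())
    vector = [counts.get(word, 0) for word in vocabulary]
    print(vector, "->", doc)
```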
