Rana, Omer ORCID: https://orcid.org/0000-0003-3597-2646, Spyridopoulos, Theodoros ORCID: https://orcid.org/0000-0001-7575-9909, Hudson, Nathaniel, Baughman, Matt, Chard, Kyle, Foster, Ian and Khan, Aftab 2024. Hierarchical and decentralised federated learning. Presented at: 2022 Cloud Continuum Conference, 05 February 2022. 2022 Cloud Continuum Proceedings. IEEE, 10.1109/CloudContinuum57429.2022.00008
Abstract
Federated Learning (FL) is a recent approach to distributed Machine Learning (ML) in which data are never communicated to a central node. Instead, an ML model (for example, a deep neural network) is initialized by a designated central (aggregation) node and shared with training nodes that have direct access to the data of interest. Each training node performs small batches of training on its local data and periodically submits ML model parameter/weight updates to the central node. The central node aggregates these parameters/weights to create a new global ML model, which it then re-shares with the training nodes. This process can continue indefinitely or be repeated until the ML model converges with respect to some evaluation metric (for example, mean absolute error or accuracy).
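The train-aggregate-reshare loop described in the abstract can be sketched as a minimal FedAvg-style round in Python. This is an illustrative sketch only, not the paper's implementation: the linear model, synthetic client data, learning rate, and function names (`local_update`, `aggregate`) are all assumptions made for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=5):
    """Training node: a few steps of gradient descent on private local data.
    The raw data (X, y) never leave this function -- only weights do."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

def aggregate(updates, sizes):
    """Central node: average client weight updates, weighted by local data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])      # ground-truth model the clients' data follow
global_w = np.zeros(2)              # model initialized by the central node

# Two training nodes, each holding private local data.
clients = []
for _ in range(2):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

# Repeat the round until (here: for a fixed number of) rounds; in practice
# the loop would stop when an evaluation metric converges.
for _ in range(50):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = aggregate(updates, [len(y) for _, y in clients])

print(global_w)  # approaches true_w as rounds proceed
```

Only parameter vectors cross the client-server boundary in each round, which is the property that distinguishes FL from centralized training.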
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Published Online |
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Publisher: | IEEE |
| ISBN: | 978-1-6654-7609-6 |
| Date of First Compliant Deposit: | 19 February 2024 |
| Date of Acceptance: | 15 December 2023 |
| Last Modified: | 21 Mar 2024 02:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/166384 |