Communication-efficient model pruning for federated learning in mobile edge computing
Blog Article
In the mobile edge computing scenario, the distributed architecture of federated learning allows an edge server and mobile terminals to cooperatively train a deep model without sharing the terminals' local data. However, training typically requires many communication rounds between the server and clients, which incurs high communication costs and training overhead. To address this issue, a communication-efficient model pruning for federated learning (CEMP-FL) framework was proposed. It employs the single-shot layer balance network pruning (SBNP) algorithm, combined with unstructured sparse weight compression, to significantly reduce the size of the global model and to effectively alleviate the biased pruning caused by training-sample discrepancies between clients. Meanwhile, a layer balance policy (LBP) is adopted to keep the retained parameters balanced across layers, which circumvents the layer-collapse problem at high sparsity.
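The article does not give the exact SBNP or LBP formulation, but the general idea of single-shot magnitude pruning with a per-layer balance floor can be sketched as follows. In this hypothetical illustration (function names, the `min_keep_frac` parameter, and the rescue rule are assumptions, not the paper's method), a global magnitude threshold sets the target sparsity, while a minimum per-layer keep fraction prevents any layer from being pruned away entirely; a simple (index, value) encoding stands in for unstructured sparse weight compression:

```python
import numpy as np

def layer_balanced_prune(layers, sparsity, min_keep_frac=0.02):
    """Single-shot magnitude pruning with a per-layer keep floor.

    A purely global threshold can zero out an entire layer at high
    sparsity ("layer collapse"); the floor guarantees each layer
    retains at least min_keep_frac of its weights.
    """
    # Global magnitude threshold for the target sparsity.
    all_mags = np.concatenate([np.abs(w).ravel() for w in layers])
    thresh = np.quantile(all_mags, sparsity)
    masks = []
    for w in layers:
        mask = np.abs(w) > thresh
        floor = max(1, int(min_keep_frac * w.size))
        if mask.sum() < floor:
            # Rescue a collapsing layer: keep its top-`floor`
            # weights by magnitude instead.
            top = np.argsort(np.abs(w).ravel())[-floor:]
            mask = np.zeros(w.size, dtype=bool)
            mask[top] = True
            mask = mask.reshape(w.shape)
        masks.append(mask)
    return masks

def sparse_encode(w, mask):
    """Pack a pruned layer as (indices, values) for transmission."""
    idx = np.flatnonzero(mask.ravel())
    return idx.astype(np.int32), w.ravel()[idx].astype(np.float32)
```

Only the surviving indices and values are uploaded each round, so the per-round payload shrinks roughly in proportion to the sparsity level, while the floor keeps small-magnitude layers represented in the compressed update.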
Finally, the performance of CEMP-FL in wireless scenarios was evaluated on two benchmark datasets. The experimental results show that the proposed CEMP-FL method achieves the highest compression ratio of communication costs while maintaining model performance, providing efficient communication in the distributed architecture of federated learning.