
Publication: Distributed and Collaborative High-Speed Inference Deep Learning for Mobile Edge with Topological Dependencies

20th March 2020

Shagufta Henna, a Postdoctoral Researcher and Connect Research Fellow working on machine learning for future-generation networks, recently had a paper titled “Distributed and Collaborative High-Speed Inference Deep Learning for Mobile Edge with Topological Dependencies” published in IEEE Transactions on Cloud Computing. In contrast to optimization heuristics, the work proposes a novel collaborative distributed deep learning approach that brings more intelligence to the edge under topological dependencies.

Ubiquitous computing has the potential to harness the flexibility of distributed computing systems, including cloud, edge, and Internet of Things devices. Mobile edge computing (MEC) benefits time-critical applications by providing low-latency connections. However, most resource-constrained edge devices lack the computational capacity to host deep learning (DL) solutions. Further, edge devices in dense deployments may exhibit topological dependencies that, if not taken into account, can adversely affect MEC performance. In contrast to optimization heuristics, this work proposes a novel collaborative distributed DL approach that brings more intelligence to the edge under topological dependencies. The proposed approach exploits the topological dependencies of the edge using a resource-optimized graph neural network (GNN) variant with accelerated inference.
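The paper's resource-optimized GNN is not reproduced here, but the core idea of exploiting topological dependencies can be illustrated with a generic graph-convolution layer: each edge node's features are aggregated over its neighbours via a normalized adjacency matrix. All names, sizes, and features below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Illustrative sketch only: a generic graph-convolution layer that mixes
# each node's features with those of its topological neighbours.

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gnn_layer(A_norm, H, W):
    """One graph-convolution layer: aggregate neighbour features, then ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy example: 4 edge nodes in a line topology, 3 features per node
# (hypothetical features such as load or link latency).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # per-node input features
W = rng.normal(size=(3, 2))   # learnable weights (hypothetical sizes)
out = gnn_layer(normalize_adjacency(A), H, W)
print(out.shape)  # (4, 2): one embedding per edge node
```

Because the aggregation is a matrix product with the adjacency structure, a node's embedding depends on its neighbours, which is how topological dependencies enter the model.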

By exploiting collaborative learning at the edge using stochastic gradient descent (SGD), the proposed approach, called CGNN-edge, ensures fast convergence and high accuracy. Collaborative learning across the deployed CGNN-edge devices, however, incurs extra communication overhead and latency. To address this, the work proposes a compressed collaborative learning scheme based on momentum correction, called cCGNN-edge, which scales better while preserving accuracy. Performance evaluation under a high-density IEEE 802.11ax wireless local area network deployment demonstrates that both schemes outperform cloud-based GNN inference in response time, satisfaction of latency requirements, and communication overhead.
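The compressed collaborative learning step can be sketched in the spirit of momentum-corrected gradient compression; the paper's exact cCGNN-edge scheme is not public, so the function below is a hedged illustration under common assumptions: each device accumulates gradients into a momentum buffer, transmits only the largest-magnitude entries, and keeps the unsent residual locally for later rounds.

```python
import numpy as np

# Hedged sketch of momentum-corrected gradient compression (not the paper's
# exact cCGNN-edge algorithm). Only top-k residual entries are communicated;
# the rest stay local, cutting per-round communication overhead.

def compress_step(grad, momentum, residual, beta=0.9, ratio=0.25):
    momentum = beta * momentum + grad           # momentum correction
    residual = residual + momentum              # accumulate unsent updates
    k = max(1, int(ratio * residual.size))
    idx = np.argsort(np.abs(residual))[-k:]     # top-k entries to transmit
    sparse = np.zeros_like(residual)
    sparse[idx] = residual[idx]
    residual[idx] = 0.0                         # transmitted mass leaves the residual
    momentum[idx] = 0.0                         # and its momentum is cleared
    return sparse, momentum, residual

# One round on a toy 8-element gradient: only 25% of entries are sent.
rng = np.random.default_rng(1)
grad = rng.normal(size=8)
m, r = np.zeros(8), np.zeros(8)
sparse, m, r = compress_step(grad, m, r)
print(np.count_nonzero(sparse))  # 2 of 8 entries are communicated
```

The residual buffer is what preserves accuracy: updates that are too small to send in one round are not discarded but carried forward until they grow large enough to transmit.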


A pre-print of the paper is available at the following link:

https://ieeexplore.ieee.org/document/9036889