Accelerating Deep Learning Inference via Model Parallelism and Partial Computation Offloading

Huan Zhou, Mingze Li, Ning Wang, Geyong Min, Jie Wu

Research output: Contribution to journal › Article › peer-review


Abstract

With the rapid development of the Internet of Things (IoT) and the explosive advance of deep learning, there is an urgent need to enable deep learning inference on IoT devices in Mobile Edge Computing (MEC). To address the limited computation capability of IoT devices in processing complex Deep Neural Networks (DNNs), computation offloading has been proposed as a promising approach. Recently, partial computation offloading has been developed to dynamically adjust the task assignment strategy under different channel conditions for better performance. In this paper, we take advantage of intrinsic DNN computation characteristics and propose a novel Fused-Layer-based (FL-based) DNN model parallelism method to accelerate inference. The key idea is that a DNN layer can be converted into several smaller layers to increase the flexibility of partial computation offloading and thus produce a better offloading solution. However, there is a trade-off between computation offloading flexibility and model parallelism overhead. We therefore investigate the optimal DNN model parallelism and the corresponding scheduling and offloading strategies in partial computation offloading. In particular, we propose a Particle Swarm Optimization with Minimizing Waiting (PSOMW) method, which explores and updates the FL strategy, the path scheduling strategy, and the path offloading strategy to reduce time complexity and avoid invalid solutions. Finally, we validate the effectiveness of the proposed method on commonly used DNNs. The results show that the proposed method reduces DNN inference time by an average of 12.75 times compared to the legacy No FL (NFL) algorithm, and comes within 0.04% of the optimal solution achieved by the Brute Force (BF) algorithm.
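To make the fused-layer idea concrete, the following is a minimal sketch, not the paper's implementation: it uses an illustrative 1-D convolution and an assumed tile size to show that a convolution layer's output can be split into smaller, independently computable tiles, each needing an overlapping "halo" slice of the input. Smaller tiles give more offloading flexibility, while the halos are the redundant work behind the model parallelism overhead the abstract mentions.

```python
import numpy as np

def conv1d_valid(x, w):
    """Plain 'valid' 1-D convolution (cross-correlation), for illustration only."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(32)   # input feature row
w = rng.standard_normal(5)    # 5-tap kernel
full = conv1d_valid(x, w)     # reference: the whole layer computed at once

# Split the output into tiles; each tile needs k - 1 extra input samples
# (its halo) beyond its own range, so adjacent partitions read overlapping input.
k, tile = len(w), 7           # tile size is an arbitrary assumption here
parts = []
for start in range(0, len(full), tile):
    stop = min(start + tile, len(full))
    parts.append(conv1d_valid(x[start:stop + k - 1], w))  # tile plus its halo

assert np.allclose(np.concatenate(parts), full)  # tiles reproduce the layer
halo = (len(parts) - 1) * (k - 1)                # redundant input samples read
print(f"{len(parts)} tiles, {halo} overlapping halo samples of overhead")
```

In a real 2-D DNN layer the same reasoning applies per spatial tile, and fusing several consecutive layers enlarges the halos further, which is why choosing how finely to partition (the FL strategy) becomes an optimization problem in its own right.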

Original language: English (US)
Pages (from-to): 475-488
Number of pages: 14
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 34
Issue number: 2
DOIs
State: Published - Feb 1 2023
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Hardware and Architecture
  • Computational Theory and Mathematics
