Cardiff University | Prifysgol Caerdydd ORCA

Multi-model running latency optimization in an edge computing paradigm

Li, Peisong, Wang, Xinheng, Huang, Kaizhu, Huang, Yi, Li, Shancang and Iqbal, Muddesar 2022. Multi-model running latency optimization in an edge computing paradigm. Sensors 22 (16), e6097. 10.3390/s22166097



Recent advances in both lightweight deep learning algorithms and edge computing increasingly enable multiple model inference tasks to run concurrently on resource-constrained edge devices, allowing several models to achieve one goal collaboratively rather than each pursuing high quality as a standalone task. However, the high overall running latency of multi-model inference degrades real-time applications. To combat this latency, deployment should be optimized to minimize the latency of multi-model inference without compromising safety-critical requirements. This work focuses on a real-time task-scheduling strategy for multi-model deployment and investigates model inference using the Open Neural Network Exchange (ONNX) runtime engine. An application deployment strategy based on container technology is then proposed, and inference tasks are scheduled to different containers according to the scheduling strategies. Experimental results show that the proposed solution significantly reduces overall running latency in real-time applications.
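The abstract describes scheduling inference tasks across containers to minimize overall running latency. As a hedged illustration only (this is not the authors' algorithm; the task names, per-model latencies, and greedy least-loaded policy are all invented for the sketch), one simple way to assign tasks so that the slowest container finishes as early as possible is:

```python
# Illustrative sketch: greedily assign model-inference tasks to containers,
# always placing the next task on the least-loaded container, to reduce the
# makespan (overall running latency). Hypothetical data, not from the paper.
from heapq import heappush, heappop


def schedule(tasks, n_containers):
    """Assign each (name, est_latency) task to the least-loaded container.

    Returns (assignment dict: task name -> container id,
             overall running latency = max per-container load).
    """
    heap = [(0.0, c) for c in range(n_containers)]  # (current load, container)
    assignment = {}
    # Longest-processing-time-first ordering improves the greedy bound.
    for name, latency in sorted(tasks, key=lambda t: -t[1]):
        load, c = heappop(heap)
        assignment[name] = c
        heappush(heap, (load + latency, c))
    overall = max(load for load, _ in heap)
    return assignment, overall


if __name__ == "__main__":
    # Hypothetical per-model inference latencies (ms) on an edge device.
    tasks = [("detector", 40.0), ("segmenter", 35.0),
             ("classifier", 15.0), ("tracker", 10.0)]
    plan, makespan = schedule(tasks, n_containers=2)
    print(plan, makespan)  # two containers finish in 50.0 ms total
```

In this toy instance the two containers each carry 50 ms of work, versus 100 ms if all four models ran sequentially in one container; the paper's actual strategies and ONNX-runtime measurements are described in the full text.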

Item Type: Article
Date Type: Published Online
Status: Published
Schools: Computer Science & Informatics
Additional Information: License information from Publisher.
Publisher: MDPI
Date of First Compliant Deposit: 17 August 2022
Date of Acceptance: 11 August 2022
Last Modified: 17 Aug 2022 19:00
