Predictive Modeling of Loss Ratio for Congestion Control in IoT Networks Using Deep Learning
Abstract
Congestion in Internet of Things (IoT) networks arises when multiple flows share the same network, which can significantly impede network performance. This problem is exacerbated by the limitations of low-power and lossy networks (LLNs), resulting in increased latency, high packet losses, reduced goodput, and other capacity-related issues. To ensure high quality of service (QoS) and network reliability, it is crucial to implement effective congestion control mechanisms in IoT networks. Congestion in a network leads to an increase in packet losses. The loss ratio, the proportion of lost packets to the total number of transmitted packets, is therefore a critical metric for assessing the network's traffic load and congestion level. This paper emphasizes the significance of studying IoT application-generated traffic to predict the loss ratio accurately. For instance, reliable data transfer is essential for IoT applications such as health monitoring, which are highly susceptible to performance degradation due to congested traffic and packet loss. This study proposes a novel approach that uses time series data and Deep Learning (DL) models to predict the loss ratio in IoT networks. Our approach involves the implementation of a sliding window technique, as well as the validation and comparison of various DL models using data generated by the Cooja/Contiki framework.
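To make the two central notions of the abstract concrete, the sketch below illustrates, under assumptions of our own (the variable names, window length, and sample values are illustrative and not taken from the authors' implementation), how the loss ratio is computed from packet counts and how a sliding window turns a loss-ratio time series into input/target pairs of the kind a DL model could be trained on.

```python
import numpy as np

def loss_ratio(lost_packets: int, transmitted_packets: int) -> float:
    """Loss ratio = lost packets / total transmitted packets."""
    if transmitted_packets == 0:
        return 0.0
    return lost_packets / transmitted_packets

def sliding_windows(series: np.ndarray, window: int):
    """Split a 1-D time series into (past-window, next-value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # the last `window` loss-ratio samples
        y.append(series[i + window])     # the value to predict
    return np.array(X), np.array(y)

if __name__ == "__main__":
    # Synthetic (lost, transmitted) counts per measurement interval, for illustration only
    counts = [(2, 100), (5, 120), (9, 110), (4, 95), (12, 130), (7, 105)]
    series = np.array([loss_ratio(l, t) for l, t in counts])
    X, y = sliding_windows(series, window=3)
    print(X.shape, y.shape)  # (3, 3) (3,)
```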