Edge computing in IoT systems
Dnr: NAISS 2024/22-188
Type: NAISS Small Compute
Principal Investigator: Ramin Firouzi
Affiliation: Stockholms universitet
Start Date: 2024-02-12
End Date: 2025-03-01
Primary Classification: 10201: Computer Sciences


Abstract

In the evolving landscape of the Internet of Things (IoT), a significant shift is under way from traditional centralized cloud computing towards a more decentralized edge computing model. As the IoT ecosystem expands, with estimates suggesting the deployment of hundreds of billions of devices, the data generated by these devices has reached unprecedented volumes. Transmitting this vast amount of data to centralized cloud facilities is increasingly a bottleneck, introducing challenges related to latency, energy consumption, security, and privacy. To address these challenges, there is growing recognition of the potential to use edge devices not just as data collectors but as active participants in decision-making. This distributed intelligence paradigm envisions a more balanced distribution of computational tasks, in which edge devices undertake a portion of the processing workload. Such an approach requires seamless integration of distributed edge computing resources with the cloud to ensure dynamic and efficient IoT service delivery. A critical component of realizing this vision is federated learning, a distributed machine learning approach in which edge devices collaboratively learn a shared model while keeping the training data local, thereby enhancing privacy and efficiency. However, federated learning is computationally intensive and requires substantial processing power, particularly for simulating complex models and algorithms. To effectively simulate and evaluate federated learning applications within this distributed IoT architecture, access to a server equipped with multiple GPUs is essential: GPUs offer the parallel processing capabilities needed for the high-volume, concurrent computations characteristic of federated learning algorithms. This infrastructure will not only facilitate the development and testing of federated learning models but also support their deployment in a real-world, distributed IoT environment. By integrating such a server into our SDN-based IoT architecture, we can conduct in-depth evaluations of federated learning applications from various perspectives, ensuring the scalability, efficiency, and effectiveness of distributed intelligence in the IoT ecosystem.
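
To make the federated learning workflow referred to above concrete, the sketch below shows a minimal federated averaging (FedAvg) loop in Python/NumPy. It is illustrative only, written under stated assumptions: the linear model, the function names (local_update, fed_avg), and the synthetic per-device data are hypothetical and do not represent the project's actual models, data, or SDN-based architecture. In a real deployment the per-device training steps would run in parallel across devices (or across GPUs when simulated), which is the workload that motivates the requested multi-GPU server.

# Minimal federated averaging (FedAvg) sketch with NumPy: illustrative only,
# not the project's actual implementation. Each "edge device" trains a linear
# model on its own local data; only model weights (never raw data) are sent
# to the aggregator, which averages them weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient steps on its private data.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=3):
    # Server loop: broadcast global weights, collect and average local updates.
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:             # in practice, devices train in parallel
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # Weighted average of client models, proportional to local data volume
        global_w = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return global_w

# Synthetic stand-in for per-device sensor data (hypothetical, for illustration)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for n in (30, 50, 80):                       # three devices with unequal data sizes
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.05 * rng.normal(size=n)
    clients.append((X, y))

print("learned weights:", fed_avg(clients))  # should approach [2.0, -1.0, 0.5]

Weighting the average by local dataset size is the standard FedAvg choice; only model parameters cross the network, which is what keeps the raw training data on the devices and gives federated learning its privacy benefit.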