SUPR
Semantic Communication in 6G
Dnr:

NAISS 2023/22-1261

Type:

NAISS Small Compute

Principal Investigator:

Jingwen Fu

Affiliation:

Kungliga Tekniska högskolan

Start Date:

2023-11-28

End Date:

2024-06-01

Primary Classification:

20299: Other Electrical Engineering, Electronic Engineering, Information Engineering

Webpage:

Allocation

Abstract

This project studies the application of semantic communication in 6G networks. As network technology evolves into the 6G era, new application scenarios such as holographic communication and the Internet of Things (IoT) emerge, and these impose requirements that call for corresponding advances in communication technology.

Shannon and Weaver's theory of information transmission comprises three levels: bit-level transmission, semantic-level transmission, and application-level transmission. Our emphasis in this project is on semantic-level transmission, extracting the semantic information embedded in speech, text, images, audio files, etc. Recent advances in deep learning and large language models (LLMs) make this feasible, bringing advantages such as higher information density, better use of communication channel capacity, and faster transmission.

We model semantic communication with machine learning and deep learning methods, using autoencoder (encoder/decoder) architectures. Our primary exploratory directions are:

1. Applying semantic transmission within the federated learning framework to increase transmission rates and conserve communication channel capacity.

2. Applying dynamic neural network methods to semantic transmission, compressing the neural network size to improve the efficiency of semantic transmission.

Two main categories of datasets form the foundation of this project: linguistic corpora and image datasets. For the language models, we will use the European Parliament minutes, containing approximately 2 million sentences and 53 million words. For the image datasets, we mainly use CIFAR-10, CIFAR-100, ImageNet, etc. The anticipated base model is DeepSC, supplemented with federated learning models and recent dynamic network models.
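To illustrate the autoencoder-based modeling described above, the following is a minimal sketch of a semantic transceiver pipeline: a semantic encoder and channel encoder compress input features into channel symbols, an additive white Gaussian noise (AWGN) channel corrupts them, and the receiver-side decoders reconstruct the features. This is not the DeepSC architecture itself; the linear layers, dimensions, and SNR value are illustrative placeholders standing in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(x, snr_db):
    """Additive white Gaussian noise channel at a given SNR (dB)."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), x.shape)

class LinearCoder:
    """A single random linear layer standing in for a learned (de)coder."""
    def __init__(self, d_in, d_out):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(d_in), (d_in, d_out))
    def __call__(self, x):
        return x @ self.W

# Illustrative dimensions: input features, semantic code, channel symbols.
d_feat, d_sem, d_chan = 128, 32, 16

sem_enc  = LinearCoder(d_feat, d_sem)   # semantic encoder (transmitter)
chan_enc = LinearCoder(d_sem, d_chan)   # channel encoder (transmitter)
chan_dec = LinearCoder(d_chan, d_sem)   # channel decoder (receiver)
sem_dec  = LinearCoder(d_sem, d_feat)   # semantic decoder (receiver)

x = rng.normal(size=(4, d_feat))        # a batch of input features
z = chan_enc(sem_enc(x))                # compress to channel symbols
z_noisy = awgn(z, snr_db=10.0)          # transmit over the noisy channel
x_hat = sem_dec(chan_dec(z_noisy))      # reconstruct at the receiver

print(x_hat.shape)                      # reconstruction matches input shape
```

In a trained system the four coders would be neural networks optimized end-to-end so that the reconstruction preserves semantic content despite the channel noise; the compression from 128 features to 16 channel symbols is where the channel-capacity savings come from.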