SoftBank Enhances 5G AI-RAN Through Innovative Transformer AI, Boosting Throughput by 30%


SoftBank Corp. recently announced a significant advancement in its “AI for RAN” research, successfully developing a new AI architecture that utilizes a high-performance Transformer AI model for wireless signal processing. This innovation is a crucial aspect of the AI-RAN initiative, which aims to promote the evolution of Radio Access Networks (RAN) through AI technology. As a result, SoftBank has achieved an approximate 30% increase in 5G throughput.

SoftBank validated the technology's real-time operation in a live wireless environment compliant with 3GPP 5G standards, significantly improving communication quality. This achievement marks AI-RAN's transition from the conceptual phase to practical application.

Real-World Demonstration: Approximately 30% Increase in Uplink Throughput

SoftBank has been progressing its RAN enhancement research in phases. In earlier studies, the company used Convolutional Neural Networks (CNNs) for uplink channel interpolation, which resulted in a 20% increase in uplink throughput compared to traditional signal processing methods. In the latest demonstration, the new Transformer-based architecture ran on GPUs and was tested in a real over-the-air (OTA) environment. The results showed further throughput improvements while achieving ultra-low latency.
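
In this context, channel interpolation means estimating the uplink radio channel across the whole time-frequency resource grid from the sparse reference (pilot) positions where it is actually measured. The snippet below is a minimal, illustrative sketch of a conventional non-AI baseline (linear interpolation across subcarriers), which is the kind of step the CNN and Transformer models replace; the grid size, pilot spacing, and channel model are assumptions, not SoftBank's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subcarriers = 48    # size of the frequency grid (illustrative)
pilot_spacing = 4     # reference signals on every 4th subcarrier (assumed)
pilot_idx = np.arange(0, n_subcarriers, pilot_spacing)

# Toy frequency-selective channel: a smooth complex response across subcarriers.
taps = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
freq = np.arange(n_subcarriers)
h_true = sum(t * np.exp(-2j * np.pi * k * freq / n_subcarriers)
             for k, t in enumerate(taps))

# Noisy channel estimates are only available at the pilot positions.
noise = 0.05 * (rng.standard_normal(pilot_idx.size)
                + 1j * rng.standard_normal(pilot_idx.size))
h_pilot = h_true[pilot_idx] + noise

# Conventional baseline: linearly interpolate real and imaginary parts between
# pilots. AI-based interpolation (CNN, and now Transformer) replaces this step.
h_interp = (np.interp(freq, pilot_idx, h_pilot.real)
            + 1j * np.interp(freq, pilot_idx, h_pilot.imag))

mse = np.mean(np.abs(h_interp - h_true) ** 2)
print(f"interpolation MSE over the full grid: {mse:.4f}")
```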

Significant Throughput Enhancement

After employing the new architecture for uplink channel interpolation, uplink throughput increased by a further 8% compared with the earlier CNN-based model. Overall, throughput increased by about 30% relative to baseline methods that did not use AI, demonstrating that continued evolution of the AI model can effectively enhance communication quality in real-world scenarios.
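
As a rough consistency check (assuming the reported gains compound multiplicatively over the non-AI baseline), the earlier ~20% CNN gain plus a further ~8% from the Transformer model works out to roughly the ~30% overall figure:

```python
cnn_gain = 0.20           # uplink gain of the earlier CNN model vs. the non-AI baseline
transformer_extra = 0.08  # additional gain of the Transformer model vs. the CNN model

overall = (1 + cnn_gain) * (1 + transformer_extra) - 1
print(f"combined gain over the non-AI baseline: {overall:.1%}")  # ~29.6%, i.e. about 30%
```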

Achieving Higher AI Performance with Ultra-Low Latency

Real-time 5G communications require processing delays of less than 1 millisecond. In this demonstration, the Transformer-based processing averaged around 338 microseconds, approximately 26% faster than the CNN architecture, achieving ultra-low latency. More capable AI models typically run slower, so this result overcomes the technical challenge of balancing high performance with low latency.
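
Taking the reported figures at face value (and assuming "26% faster" means the Transformer latency is 26% below the CNN's), the implied CNN latency is around 457 microseconds, so both pipelines fit inside the 1-millisecond budget; this is a back-of-the-envelope reading of the published numbers, not measured data:

```python
transformer_latency_us = 338  # reported average processing latency
speedup = 0.26                # "approximately 26% faster than the CNN architecture"
budget_us = 1000              # real-time constraint: under 1 millisecond

implied_cnn_latency_us = transformer_latency_us / (1 - speedup)
print(f"implied CNN latency: {implied_cnn_latency_us:.0f} us")            # ~457 us
print(f"Transformer headroom vs. budget: {budget_us - transformer_latency_us} us")
```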

Simulation Environment Demonstration: Downlink Throughput Gain More Than Doubled

SoftBank also applied the new architecture, in a simulation environment, to "Sounding Reference Signal (SRS) prediction," which is required for optimal wireless beamforming. In previous research, a simpler Multi-Layer Perceptron (MLP) model achieved at most a 13% increase in downlink throughput at a terminal speed of 80 km/h. With the Transformer architecture, downlink throughput increased by approximately 29% at 80 km/h and around 31% at 40 km/h. These results indicate that a more capable AI model can more than double the downlink throughput gain (from 13% to roughly 29% at 80 km/h), potentially translating into significantly higher communication speeds and a better user experience.
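
SRS prediction matters because the base station beamforms using channel estimates that age quickly when the terminal moves: by the time of the downlink transmission, the actual channel has drifted away from the last measurement. The toy sketch below illustrates how a stale estimate reduces beamforming gain, which is what an accurate predictor recovers; the antenna count, aging model, and correlation values are assumptions for illustration, not SoftBank's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant = 8  # base-station antennas (assumed)

def beam_gain(h_beam, h_actual):
    """Received power when the beam is matched to h_beam but the true channel is h_actual."""
    w = h_beam / np.linalg.norm(h_beam)
    return np.abs(np.vdot(w, h_actual)) ** 2

def correlated(h, rho):
    """Draw a channel with correlation rho to h (simple Gauss-Markov aging model)."""
    innovation = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)
    return rho * h + np.sqrt(1 - rho**2) * innovation

# True channel at the moment of the downlink transmission.
h_now = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)

h_stale = correlated(h_now, rho=0.7)       # last SRS measurement, outdated by mobility (assumed)
h_predicted = correlated(h_now, rho=0.95)  # what an accurate predictor might recover (assumed)

print(f"gain with perfect CSI   : {beam_gain(h_now, h_now):.2f}")
print(f"gain with stale SRS     : {beam_gain(h_stale, h_now):.2f}")
print(f"gain with predicted CSI : {beam_gain(h_predicted, h_now):.2f}")
```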

Technical Challenges and Features of the New Architecture

The primary technical challenge in putting "AI for RAN" into practical use is to further enhance communication quality with high-performance AI models while meeting the real-time processing constraint of less than 1 millisecond. To address this, SoftBank developed a lightweight, efficient Transformer architecture that focuses on the essential processing steps, achieving low latency and high AI performance at the same time. Its main features include the following (an illustrative sketch follows the list):

  • Comprehensive Capture of Wireless Signal Correlations: The architecture employs self-attention, the core mechanism of the Transformer, to capture wide-ranging correlations in wireless signals across the frequency and time domains, delivering high AI performance while keeping the model lightweight.
  • Retention of Physical Information: Input data for AI models is typically normalized to stabilize training. This architecture instead uses a proprietary design that feeds in the raw, unnormalized amplitude of the wireless signal, preserving physical information that directly reflects communication quality.
  • Multi-Task Generality: The architecture features a unified and highly versatile design, requiring only minor adjustments to the output layer to adapt to various tasks, including channel interpolation/estimation, SRS prediction, and signal demodulation. This significantly reduces the time and costs associated with developing separate AI models for different tasks.
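
A minimal sketch of how these ideas could fit together is shown below, written in PyTorch. It treats the resource grid as a sequence of frequency-time positions, embeds raw (unnormalized) real/imaginary channel values, applies a small Transformer encoder, and swaps only the output head per task. All dimensions, layer counts, and the task framing are assumptions for illustration; this is not SoftBank's published architecture.

```python
import torch
import torch.nn as nn

class RadioTransformer(nn.Module):
    """Toy Transformer encoder over a frequency-time resource grid.

    Inputs are raw (unnormalized) real/imag channel samples; self-attention lets
    every grid position attend to every other across frequency and time. Different
    tasks reuse the same trunk and only swap the output head. Positional encodings
    are omitted for brevity.
    """

    def __init__(self, d_model=64, n_heads=4, n_layers=2, task="interpolation"):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # (real, imag) -> model dimension, no input normalization
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        heads = {
            "interpolation": nn.Linear(d_model, 2),   # refined (real, imag) per grid position
            "srs_prediction": nn.Linear(d_model, 2),  # predicted future (real, imag)
            "demodulation": nn.Linear(d_model, 4),    # e.g. bit logits for a small QAM order
        }
        self.head = heads[task]

    def forward(self, x):
        # x: (batch, grid_positions, 2), where grid_positions = subcarriers * symbols
        return self.head(self.encoder(self.embed(x)))

# Usage sketch: a batch of 4 grids, each 48 subcarriers x 14 OFDM symbols.
x = torch.randn(4, 48 * 14, 2)
model = RadioTransformer(task="interpolation")
print(model(x).shape)  # torch.Size([4, 672, 2])
```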

The demonstration results indicate that high-performance AI models such as Transformers, and the GPUs on which they run, are essential for achieving the communication performance required in the 5G-Advanced and 6G eras. Furthermore, because the AI-RAN architecture runs RAN processing on GPUs, performance can keep improving through software updates as more advanced AI models emerge, helping telecom operators maximize the value of their investment.

Looking ahead, SoftBank plans to expedite the commercialization of the technologies validated in this demonstration. By further enhancing communication quality and driving network evolution with AI-RAN, SoftBank aims to contribute to the innovation of future communication infrastructure.
