3 Ways Nokia is Using Machine Learning in 5G Networks
Artificial intelligence could quickly schedule beams and configure channels in future wireless networks
Wireless carriers around the world are pushing to bring 5G service to their customers as quickly as possible, but the new radio access networks—which will rely on emerging technologies including millimeter waves and huge antenna arrays known as massive MIMO—will be a lot more complicated than what came before.
Nokia is applying machine learning to some of the problems that result from this complexity, hoping that artificial intelligence can boost network performance and cut costs, Rajeev Agrawal said recently during a 5G summit at the Computex trade show in Taipei, Taiwan.
Agrawal, who is in charge of Nokia’s radio access network offerings, presented three possibilities for machine learning and 5G that Nokia has studied internally but not yet published in academic research papers.
Scheduling Beamforming in Massive MIMO Networks
In a MIMO (multiple-input multiple-output) network, cellular base stations send and receive radio frequency signals in parallel through many more antennas than are normally used on a base station. This means the base station can transmit and receive more data, but these signals also interfere with one another.
Beamforming is a signal processing technology that lets base stations send targeted beams of data to users, reducing interference and making more efficient use of the radiofrequency spectrum.
One of the challenges in building these systems is figuring out how to schedule the beams. Nokia, for example, has a system with 128 antennas working together to form 32 beams, and it wants to schedule up to four of those beams in a specified amount of time. The company also wants to schedule the beams in a sequence that provides the highest spectral efficiency, a measure of how many bits per second a base station can deliver to a set of users over a given slice of spectrum.
The number of ways to choose four of 32 beams is the binomial coefficient C(32, 4) = 35,960. There’s simply not enough processing power on a base station to quickly sift through that many combinations and find the best schedule.
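To make the combinatorics concrete, here is a minimal Python sketch of the brute-force search a scheduler would otherwise face each interval. The per-beam rates and pairwise interference penalties are invented placeholders, not Nokia’s channel models.

```python
from itertools import combinations
from math import comb
import random

NUM_BEAMS = 32       # beams formed by the 128-antenna array
BEAMS_PER_SLOT = 4   # beams to schedule in one interval

# C(32, 4) candidate schedules for a single interval.
print(comb(NUM_BEAMS, BEAMS_PER_SLOT))  # 35960

# Toy stand-ins for per-beam data rates and inter-beam interference;
# a real scheduler would derive these from channel measurements.
random.seed(0)
rate = [random.uniform(1.0, 5.0) for _ in range(NUM_BEAMS)]
interference = [[random.uniform(0.0, 0.5) for _ in range(NUM_BEAMS)]
                for _ in range(NUM_BEAMS)]

def spectral_efficiency(beams):
    """Score a candidate subset: summed rates minus a penalty for
    each interfering beam pair (a deliberate simplification)."""
    gain = sum(rate[b] for b in beams)
    penalty = sum(interference[a][b] for a in beams for b in beams if a < b)
    return gain - penalty

# Exhaustive search over all 35,960 subsets: fine offline, too slow
# to repeat on a base station at every scheduling interval.
best = max(combinations(range(NUM_BEAMS), BEAMS_PER_SLOT),
           key=spectral_efficiency)
print(best, round(spectral_efficiency(best), 2))
```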
Nokia says it was able to train neural networks offline to find the best schedules, and then use those networks to quickly predict good schedules on demand, although the company did not provide data to back up the networks’ performance or allow comparisons with other possible heuristics.
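The pattern Nokia describes, expensive search offline and a fast learned predictor online, might look roughly like the following sketch. The features, labels, and model are all assumptions for illustration (here a scikit-learn multilayer perceptron), not the company’s published design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Offline: channel-state features for past intervals, paired with the
# best schedule found by exhaustive search. The target is a 32-way
# multi-label vector: 1 if a beam belongs to the optimal 4-beam subset.
X_train = rng.normal(size=(5000, 64))    # e.g., per-beam channel statistics
y_train = np.zeros((5000, 32), dtype=int)
for row in y_train:                      # placeholder labels for the sketch
    row[rng.choice(32, size=4, replace=False)] = 1

model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=50)
model.fit(X_train, y_train)              # the slow, offline step

# Online: one cheap forward pass per interval, then take the four
# highest-scoring beams as the schedule.
x_now = rng.normal(size=(1, 64))
scores = model.predict_proba(x_now)[0]
schedule = np.argsort(scores)[-4:]
print("beams to schedule:", sorted(int(b) for b in schedule))
```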
Indoor Positioning
Another way to make more efficient use of spectrum in 5G networks is to install miniature base stations, or small cells, that deliver wireless service closer to where customers are physically located. Small cells can also help carriers solve another problem: finding the location of indoor objects, such as sensors or smart speakers in a home. GPS can typically locate an indoor object only to within about 50 meters.
Agrawal said a small cell network’s radiofrequency data can be used to train a machine learning algorithm to infer the positions of network users’ equipment. A slide from his presentation claimed mean positioning errors of 10 centimeters (cm), 13 cm, and 9 cm using LTE eNB radiofrequency data from cells on different floors of a mall in China.
Nokia’s approach is to first measure, at many points in a room, the received signal strength from each cell. These measurements form a radio map, and the company uses such maps to train neural networks to predict a device’s location based on the strength of the signals it receives from nearby cells.
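A minimal sketch of this fingerprinting idea appears below. The small-cell layout, the path-loss model used to fake the survey data, and the network architecture are all invented for illustration, not taken from Nokia’s system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
cells = np.array([[0, 0], [30, 0], [0, 30], [30, 30]])  # small-cell positions (meters)

def rss(point):
    """Received signal strength (dB) from each cell at a point, using a
    toy log-distance path-loss model with noise, standing in for real
    survey measurements."""
    d = np.linalg.norm(cells - point, axis=1) + 0.1
    return -30.0 - 30.0 * np.log10(d) + rng.normal(0.0, 1.0, len(cells))

# Offline survey: signal-strength fingerprints at known points.
points = rng.uniform(0, 30, size=(2000, 2))
fingerprints = np.array([rss(p) for p in points])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(fingerprints, points)

# Online: infer an unseen device's position from its own measurements.
true_pos = np.array([12.0, 7.0])
estimate = model.predict(rss(true_pos).reshape(1, -1))[0]
print("positioning error (m):", round(float(np.linalg.norm(estimate - true_pos)), 2))
```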
Configuring Uplink and Downlink Channels
In order for a smartphone to work properly on a cellular network, engineers need to properly configure the size of the device’s uplink control channel, which transmits feedback on network quality. The more spectrum the uplink control channel uses, the better the quality of data transmission from a customer’s smartphone could be, but it also leaves less spectrum available for transmitting the data itself. It’s a tradeoff.
There are already techniques to automatically make this tradeoff decision for 3G and 4G, but Agrawal said it is a “very important problem as we go to 5G” in part because the uplink control channel data will be more “rich.” For example, it could carry important information on the beams in a massive MIMO network.
Agrawal said a machine learning system would first predict user equipment characteristics, such as mobility. The system would then predict the uplink and downlink throughputs under each candidate control-channel setting and pick the setting that performs best.
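That two-step pipeline, predict the user’s characteristics and then score candidate settings, might look like the sketch below. Every function, constant, and threshold here is a placeholder standing in for the trained predictors Agrawal described.

```python
CANDIDATE_SIZES = [2, 4, 8, 16]  # hypothetical control-channel sizes (resource blocks)
TOTAL_BLOCKS = 100               # hypothetical total uplink bandwidth

def predict_mobility(ue_history):
    """Step 1 (placeholder): classify the user as static or mobile from
    past signal measurements; a trained model would go here."""
    return "mobile" if max(ue_history) - min(ue_history) > 5 else "static"

def predicted_throughput(control_size, mobility):
    """Step 2 (placeholder): more control overhead improves link
    adaptation (especially for mobile users) but leaves fewer
    resource blocks for the data itself -- the tradeoff above."""
    data_blocks = TOTAL_BLOCKS - control_size
    feedback_gain = 1.0 + (0.04 if mobility == "mobile" else 0.01) * control_size
    return data_blocks * feedback_gain

mobility = predict_mobility(ue_history=[-80, -73, -86, -79])  # RSRP samples, dBm
best = max(CANDIDATE_SIZES, key=lambda s: predicted_throughput(s, mobility))
print("chosen control-channel size:", best)
```

Running the sketch, a mobile user gets the largest control channel (the richer feedback pays for itself), while a static user gets the smallest, illustrating why a one-size-fits-all setting wastes spectrum.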
Agrawal said he’s “not trying to say all of these [applications I presented] are right,” but he believes machine learning will be a key part of 5G networks.