How Cloud-Based Radio Access Networks Can Solve Operators’ Problems


Virtualised base stations (vBS) will enable improved network performance, coverage and capacity, and could be deployed in numerous network configurations including C-RAN, traditional macro-cell sites, in-building/outdoor distributed antenna system deployments and even small-cell implementations.

The advent of FPGA-based solutions enables vendors to support early customer engagements faster while offering maximum flexibility to address their needs. Emerging vBS designs require high performance and programmability while delivering low cost and low power consumption. The software stack runs on baseband unit servers together with an FPGA front-end processing board that implements the baseband signal-processing flow.

The FPGA board integrates PCI Express, 10Gbps Ethernet, common public radio interface (CPRI), optical connections and so on, which enables it to bridge the telecom and IT datacentre domains. As an accelerator, the FPGA implements many key algorithmic units required for baseband signal processing, which greatly increases the system’s computing power.

RapidIO protocol

Each radio node in the architecture serves a set of users in a small- or macro-cell configuration. In both traditional and distributed architectures, RapidIO protocol connects multiple processing units (for example, DSPs, SoCs, ASICs and FPGAs) on channel or baseband cards. The protocol ensures guaranteed delivery with latencies as low as around 100ns between any two processing nodes.
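To get a feel for what per-hop latencies of this order mean end-to-end, here is a rough back-of-the-envelope sketch. All figures (per-hop switch latency, serialisation delay, hop count) are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope latency budget for a switched baseband fabric.
# All figures below are illustrative assumptions, not vendor specs.

PER_HOP_SWITCH_NS = 100   # assumed per-hop switch latency (~100 ns)
SERIALISATION_NS = 50     # assumed per-link serialisation delay
HOPS = 3                  # e.g. DSP -> switch -> switch -> FPGA

total_ns = HOPS * (PER_HOP_SWITCH_NS + SERIALISATION_NS)
print(f"End-to-end fabric latency over {HOPS} hops: {total_ns} ns")
```

Under these assumptions, traversing a few hops stays well below a microsecond, leaving almost the entire real-time budget for the baseband processing itself.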

Fig. 2: RapidIO protocol switch interface

In a typical RAN architecture, once the signal processing related to a particular radio interface is completed, data is transported between the baseband and the radio using CPRI protocol. In a distributed architecture, however, CPRI alone may not lead to a cost-effective implementation of load management and interference control: by definition, CPRI does not provide a standardised low-latency, packet-based switching capability that can distribute traffic from radios across multiple baseband cards. RapidIO is expected to provide best-in-class performance in this role.
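One reason raw CPRI streams are hard to switch cost-effectively is their sheer bandwidth. The sketch below estimates the line rate for a single 20MHz LTE carrier using commonly quoted figures (sample rate, 15-bit I/Q, CPRI control-word overhead and 8b/10b line coding); treat it as an illustrative calculation, not a normative CPRI derivation:

```python
# Illustrative CPRI line-rate calculation for one 20 MHz LTE carrier.
# Constants are the commonly quoted textbook figures, not a spec excerpt.

sample_rate = 30.72e6   # I/Q samples per second for a 20 MHz carrier
iq_bits     = 2 * 15    # 15-bit I + 15-bit Q per sample
antennas    = 2         # antenna ports carried on the link
cw_overhead = 16 / 15   # CPRI control-word overhead
line_coding = 10 / 8    # 8b/10b line coding expansion

rate_bps = sample_rate * iq_bits * antennas * cw_overhead * line_coding
print(f"CPRI line rate: {rate_bps / 1e9:.4f} Gbps")
```

With these assumptions a single two-antenna 20MHz carrier already needs roughly 2.5Gbps of constant-rate fronthaul, regardless of how lightly the cell is loaded.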

Two major flows in the access network are transmission from the mobile to the base station (uplink) and reception at the mobile from the base station (downlink). Retransmission, in the form of hybrid automatic repeat request (HARQ), might be required in case of transmission errors as part of the LTE/LTE-A protocol.

The HARQ timing budget of around 4ms is much tighter than typical LTE/LTE-A end-to-end round-trip latency. If the interconnect fabric between processing nodes supports superior flow control and fault tolerance, it is further possible to minimise the number of HARQ retransmissions, resulting in lower latency. This provides a differentiation point for OEMs and eventually leads to better quality of experience for the end user.
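The arithmetic behind that budget can be sketched as follows. Assuming the commonly cited LTE FDD rule that an ACK/NACK is due four 1ms subframes after a transmission, every millisecond spent on fronthaul or fabric transport comes straight out of the time available for baseband processing (the delay figures are assumptions for illustration):

```python
# Illustrative HARQ timing budget for LTE FDD (assumed nominal figures).
# ACK/NACK is due 4 subframes after a transmission, so transport delays
# eat directly into the time left for baseband processing.

SUBFRAME_MS  = 1.0
ACK_DEADLINE = 4 * SUBFRAME_MS    # ACK/NACK due in subframe n+4
propagation  = 0.1                # assumed one-way air + fronthaul delay, ms
fabric_delay = 0.001              # assumed one-way interconnect latency, ms

processing_budget = ACK_DEADLINE - 2 * propagation - 2 * fabric_delay
print(f"Time left for baseband processing: {processing_budget:.3f} ms")
```

With a sub-microsecond fabric, the interconnect is a negligible fraction of the budget; a fabric with millisecond-scale latency would consume a large share of it.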

To support handover, load balancing and interference management, the major flows above also include the exchange of handover, channel-quality and load-indication information between the various processing units at the base station.

Exchange of information between baseband units should be supported with the lowest deterministic latency and guaranteed delivery. This allows demand on the network and interference between various users to be identified without error and in a timely manner. It also enables reliable, low-latency handover as users cross cell boundaries.
To meet the requirements in C-RAN and small cells, in particular reliable load management, handover and interference management, OEMs are evolving base station designs by taking advantage of RapidIO’s interconnect features and advancements in systems-on-chip (SoCs), memory and radio sub-system components.

For the interconnect, there are two important functions to consider: the RapidIO end-point (EP) and the RapidIO switching fabric. With the EP integrated within SoCs, it is possible to offer the lowest latency between applications. With a low-latency, high-throughput, packet-based switching protocol, applications can be partitioned and executed across a large number of baseband computing units.

To support the lowest latency, it is further possible to co-locate the baseband processing units for a large number of small cells in one location. In this case, the X2 interface is local to the baseband cluster. Exchange of information with deterministic delivery and the lowest latency allows the baseband cluster to control data exchange between the right set of radio units and baseband units at the right time, while minimising or avoiding interference even for users located at cell boundaries.

With a cluster of baseband units, it is possible to virtualise and share the processing units. This allows the processing capability of the computation units to be sized for the total capacity of a group of cells at any given time, instead of for the peak of each individual cell all the time.
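The pooling argument is essentially statistical multiplexing: individual cells rarely peak at the same moment, so a shared pool can be dimensioned for the aggregate load rather than the sum of the per-cell peaks. A toy sketch with synthetic random loads (purely illustrative numbers):

```python
# Sketch of the statistical-multiplexing argument for baseband pooling.
# Per-cell loads are synthetic random draws, purely for illustration.

import random
random.seed(42)   # make the illustration repeatable

CELLS, PEAK = 50, 100   # number of cells, peak load units per cell

# Dedicated provisioning must cover every cell's own peak.
dedicated = CELLS * PEAK

# A shared pool only has to cover the simultaneous aggregate load.
loads = [random.uniform(10, PEAK) for _ in range(CELLS)]
pooled = sum(loads)

print(f"Dedicated capacity: {dedicated}")
print(f"Pooled capacity needed now: {pooled:.0f}")
print(f"Saving: {100 * (1 - pooled / dedicated):.0f}%")
```

In this snapshot the pool needs far less capacity than per-cell provisioning; a real dimensioning exercise would of course look at the distribution of the aggregate peak over time, not one sample.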

The technique will be used in big cities, especially in stadiums and subways, which carriers will not want to pack with base stations just to handle traffic peaks. C-RANs will also help carriers share the cost of network infrastructure; in some cases even a single antenna will be shared by carriers, carrying IPsec sessions with data from different carriers in them. Real rollouts are aimed more at 5G networks, and these will define the architectural requirements in hardware and software.

One of the biggest challenges is getting servers to handle the real-time requirements of physical-layer baseband processing. Ideally, some baseband traffic should traverse round-trip paths in milliseconds or even microseconds. However, the exact timing constraints are still fuzzy, because they are driven more by user experience than by a hard specification.

At first, hybrid designs will let servers handle session- to application-layer jobs, offloading lower-level work to accelerator cards. A classic server works for the control plane, but today it would not handle transport functions and layer 2 and below very well. In the long term, the industry will presumably look for ways to integrate server and baseband functions.

Software poses several challenges, because carriers will want C-RANs to reuse their existing code. In addition, developers need to figure out how to assign the various networking jobs to virtual machines and keep those machines secure. They will also have to find ways to run both general-purpose processor code and accelerator code in virtual machines.

V.P. Sampath is an active member of IEEE and the Institution of Engineers (India). He is a regular contributor to national newspapers and the IEEE MAS section, and has published international papers on VLSI and networks.


