About the MultiSphere Project

This project targets pragmatic future wireless systems able to deliver the capacity scaling predicted in theory. The proposed research focuses on two ongoing paradigm shifts that have strong potential to transform the way we design wireless communication systems.

First paradigm shift

The shift from orthogonal to non-orthogonal signal transmission: instead of trying to prevent transmitted signals from interfering with each other, we now intentionally allow mutually interfering information streams.
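
For concreteness, the standard linear model for such a system is y = Hx + n: each received sample is a weighted mixture of all transmitted symbols. Below is a minimal NumPy sketch of this setting; it is illustrative only, and the variable names, BPSK alphabet, and dimensions are assumptions rather than project code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_streams = 4                                 # mutually interfering streams
x = rng.choice([-1.0, 1.0], size=n_streams)   # one BPSK symbol per stream
H = rng.normal(size=(n_streams, n_streams))   # channel mixing matrix (assumed model)
noise = 0.1 * rng.normal(size=n_streams)      # receiver noise

y = H @ x + noise                             # each entry of y mixes all streams
```

The receiver's task, addressed below, is to recover x from y and H despite the deliberate interference.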

Second paradigm shift

The shift from sequential to parallel (receiver) processing: instead of using a single processing element to perform the calculations of a given functionality, we now split the corresponding processing load across several processing elements.

While digital processing systems with tens or even hundreds of processing elements have been predicted, it is still not obvious how to exploit this processing power efficiently to develop high-throughput, power-efficient wireless communication systems, and specifically how to cope with the exponentially complex task of optimally recovering a large number of (intentionally) interfering information streams.

Research targets

This research targets a theoretical and practical framework for efficiently parallelizing sphere decoders used to optimally reconstruct a large number of mutually interfering information streams.

Sphere decoding is a well-known technique that dramatically reduces the complexity of optimal (maximum-likelihood) detection. However, even though sphere decoding is simpler than other approaches that deliver optimal performance, its complexity still increases exponentially with the number of interfering streams, preventing practical throughput gains from scaling with the number of mutually interfering streams as theory predicts.
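
As a concrete illustration, here is a minimal depth-first sphere decoder for real BPSK symbols. This is a textbook sketch, not the MultiSphere implementation: after a QR decomposition of the channel, it walks the search tree and prunes any branch whose partial distance already exceeds the best distance (sphere radius) found so far.

```python
import numpy as np

def sphere_decode(H, y, symbols=(-1.0, 1.0)):
    """ML estimate argmin_x ||y - H x||^2 over x in symbols^N,
    found by depth-first tree search with radius pruning."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)             # ||y - H x||^2 = ||Q.T y - R x||^2
    z = Q.T @ y
    best = {"dist": np.inf, "x": None}

    def search(level, x_part, dist):
        if dist >= best["dist"]:       # prune: branch left the sphere
            return
        if level < 0:                  # leaf: full candidate reached
            best["dist"], best["x"] = dist, x_part.copy()
            return
        for s in symbols:              # expand the children of this node
            x_part[level] = s
            resid = z[level] - R[level, level:] @ x_part[level:]
            search(level - 1, x_part, dist + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]
```

The pruning is what makes sphere decoding far cheaper than brute force on average, but in the worst case the tree still has |symbols|^N leaves, which is the exponential growth noted above.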

This research targets practical sphere decoders able to support a large number of interfering streams with processing latency and power consumption that are orders of magnitude smaller than those of single-processor systems.
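
One simple way to parallelize the search, sketched below under the same toy BPSK assumptions, is to fix the root-level symbol and hand each resulting subtree to a separate processing element; the global minimum over all subtrees is still the ML solution. This only illustrates the idea and is not the MultiSphere algorithm.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def _subtree_search(args):
    """Depth-first search of the subtree rooted at a fixed top-level symbol."""
    H, y, symbols, root = args
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = [np.inf, None]

    def search(level, x_part, dist):
        if dist >= best[0]:
            return
        if level < 0:
            best[0], best[1] = dist, x_part.copy()
            return
        for s in symbols:
            x_part[level] = s
            resid = z[level] - R[level, level:] @ x_part[level:]
            search(level - 1, x_part, dist + resid ** 2)

    x0 = np.zeros(n)
    x0[n - 1] = root                               # root symbol fixed per worker
    resid = z[n - 1] - R[n - 1, n - 1] * root
    search(n - 2, x0, resid ** 2)
    return best[0], best[1]

def parallel_sphere_decode(H, y, symbols=(-1.0, 1.0)):
    tasks = [(H, y, symbols, s) for s in symbols]  # one subtree per symbol
    with ProcessPoolExecutor() as pool:            # stand-in for hardware PEs
        results = list(pool.map(_subtree_search, tasks))
    return min(results, key=lambda r: r[0])[1]     # global best = ML estimate
```

Note the catch this naive split exposes: the workers no longer share a shrinking radius, so each subtree does more work than in the sequential search. Balancing the tree partitioning against this loss of pruning information is precisely the kind of trade-off a principled parallelization framework must resolve. (If run as a script, the call should be guarded with if __name__ == "__main__", since ProcessPoolExecutor spawns worker processes.)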

Challenges

Complexity and throughput

Linear detection approaches offer low latency and complexity, but can result in highly sub-optimal throughput.
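
For example, a zero-forcing detector, one common linear approach, simply inverts the channel and quantizes each stream independently. The sketch below (illustrative, not project code) shows why it is cheap; the noise enhancement caused by the pseudo-inverse is what degrades its throughput.

```python
import numpy as np

def zf_detect(H, y, symbols=(-1.0, 1.0)):
    """Zero-forcing: undo the channel mixing, then slice per stream."""
    x_soft = np.linalg.pinv(H) @ y                  # one matrix-vector product
    grid = np.asarray(symbols)
    nearest = np.argmin(np.abs(x_soft[:, None] - grid), axis=1)
    return grid[nearest]                            # per-stream quantization
```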

Maximum-likelihood (ML) detection, on the other hand, allows the throughput to scale with the number of antennas, at the cost of exponentially increasing complexity that lies beyond the capabilities of current processor architectures.
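
Brute-force ML detection makes the contrast explicit: it enumerates every candidate vector, which is optimal but exponential in the number of streams N (again a toy sketch, not project code).

```python
import itertools

import numpy as np

def ml_detect(H, y, symbols=(-1.0, 1.0)):
    """Optimal detection by exhaustive search over all |symbols|^N candidates."""
    best_x, best_dist = None, np.inf
    for cand in itertools.product(symbols, repeat=H.shape[1]):
        x = np.asarray(cand)
        dist = float(np.sum((y - H @ x) ** 2))
        if dist < best_dist:
            best_x, best_dist = x, dist
    return best_x
```

With 16 streams and 16-QAM this amounts to 16^16 ≈ 1.8 × 10^19 candidates per received vector, which is why exhaustive ML detection exceeds what current processor architectures can sustain.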


Processor speed and lithography

Moreover, according to the Semiconductor Industry Association roadmap (ITRS Report), transistors may stop shrinking, and thus stop getting faster, as soon as 2021.

Thus, a paradigm shift from sequential to parallel processing is required. By exploiting multiple processing elements (PEs) on a single die, research can lead to high-throughput MIMO systems.
