Abstract
Cell-free multiple-input multiple-output (MIMO) is poised to enable scalable next-generation cellular networks. To this end, it is crucial to optimize the cell-free MIMO link configuration, including user associations, data stream allocation, and beamforming (BF). However, the scalability of link configuration optimization is severely challenged because signaling and computational costs grow with the number of base stations (BSs) and user equipments (UEs). To address this scalability issue, this paper proposes a distributed multi-agent deep reinforcement learning (MADRL)-based cell-free MIMO link configuration framework that leverages interference approximation to minimize the signaling overhead required for channel state information (CSI) exchange. By decomposing the original sum-rate maximization problem into BS-specific tasks, the proposed framework reduces the solution search space to one suitable for distributed MADRL. Simulation results show that the proposed method achieves scalability, as the sum rate increases with the number of BSs and UEs.