• FedMed-GAN is, to the best of our knowledge, the first work to establish a benchmark for federated cross-modality brain image synthesis, which greatly facilitates the development of medical GANs with differential privacy guarantees.
• We provide comprehensive explanations for mitigating mode collapse and the performance drop relative to centralized training.
• The proposed work simulates a wide range of proportions of unpaired and paired data for each client, under various data distributions across clients. The performance of FedMed-GAN remains stable under long-tail data distributions.

Utilizing multi-modal neuroimaging data has proven effective for investigating human cognitive activities and certain pathologies. However, it is impractical to obtain the full set of paired neuroimaging data centrally, since data collection faces several constraints, e.g., high examination cost, long acquisition time, and image corruption. In addition, these data are dispersed across different medical institutions and thus cannot be aggregated for centralized training due to privacy concerns. There is a clear need to deploy federated learning and facilitate the integration of dispersed data from different institutions. In this paper, we propose a new benchmark for federated domain translation on unsupervised brain image synthesis (FedMed-GAN) to bridge the gap between federated learning and medical GANs. FedMed-GAN mitigates mode collapse without sacrificing generator performance, and applies broadly to different proportions of unpaired and paired data with variation-adaptation properties. We treat the gradient penalties using the federated averaging algorithm and then leverage differentially private gradient descent to regularize the training dynamics. A comprehensive evaluation comparing FedMed-GAN with centralized methods demonstrates that the proposed algorithm outperforms the state-of-the-art.
Our code is available at: https://github.com/M-3LAB/FedMed-GAN.
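The two mechanisms named in the abstract — federated averaging of client updates and differentially private gradient descent (per-sample clipping plus Gaussian noise) — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, clipping norm, and noise multiplier are illustrative assumptions.

```python
import numpy as np


def dp_sgd_step(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """DP-SGD style sanitization: clip the gradient's L2 norm to
    clip_norm, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)  # scale down only if too large
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise


def fed_avg(client_updates, client_sizes):
    """FedAvg: average client updates weighted by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))


# Toy round: two clients sanitize their local gradients, server aggregates.
g1 = dp_sgd_step(np.array([3.0, 4.0]), clip_norm=1.0, noise_mult=0.0)
g2 = dp_sgd_step(np.array([0.3, 0.4]), clip_norm=1.0, noise_mult=0.0)
server_update = fed_avg([g1, g2], client_sizes=[100, 300])
```

With the noise multiplier set to zero (for readability of the toy numbers), a gradient of norm 5 is rescaled to norm 1 while a gradient already inside the clipping ball is left untouched; in actual DP training the noise term is what yields the privacy guarantee.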