Unsupervised domain adaptation (UDA) plays a crucial role in transferring knowledge learned from a labeled source domain to an unlabeled and differently distributed target domain. While UDA commonly involves training on data from both domains, access to labeled source data is frequently restricted due to concerns over patient privacy or intellectual property. Source-free UDA (SFUDA) is a promising way to sidestep this difficulty. However, without source-domain supervision, SFUDA methods can easily fall into the "winner takes all" dilemma, in which the majority category dominates the deep segmentor and minority categories are largely ignored. In addition, over-confident pseudo-label noise in self-training-based UDA is a long-standing problem. To address these difficulties, we propose a novel class-balanced complementary self-training (CBCOST) framework for SFUDA segmentation. Specifically, we jointly optimize pseudo-label-based self-training with two mutually reinforcing components. First, class-wise balanced pseudo-label training (CBT) explicitly exploits fine-grained class-wise confidence to select class-balanced pseudo-labeled pixels with adaptive within-class thresholds. Second, to alleviate pseudo-label noise, we propose complementary self-training (COST), which excludes the classes that a pixel does not belong to via a heuristic complementary label selection scheme. We evaluated our CBCOST framework on both 2D and 3D cross-modality cardiac anatomical segmentation and brain tumor segmentation tasks. Our experimental results show that CBCOST outperforms existing SFUDA methods and yields performance comparable to UDA methods that use the source data.

Copyright © 2023 Elsevier B.V. All rights reserved.
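The two ideas sketched in the abstract — adaptive within-class thresholds for class-balanced pseudo-label selection, and complementary labels that only rule classes out — can be illustrated with a minimal NumPy sketch. All function names, the `keep_frac` quantile rule, and the `eps` cutoff below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def select_pseudo_labels(probs, keep_frac=0.5):
    """Class-wise balanced pseudo-label selection (illustrative sketch).

    probs: (N, C) softmax outputs for N pixels.
    keep_frac: fraction of each class's predicted pixels to keep, so every
    class gets its own adaptive confidence threshold instead of one global
    cutoff that would favor the majority class.
    Returns (labels, mask): hard pseudo-labels and a boolean keep-mask.
    """
    labels = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    mask = np.zeros(len(probs), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Adaptive within-class threshold: keep the top keep_frac most
        # confident pixels currently predicted as class c.
        thr = np.quantile(conf[idx], 1.0 - keep_frac)
        mask[idx] = conf[idx] >= thr
    return labels, mask

def complementary_labels(probs, eps=0.15):
    """Heuristic complementary-label selection (illustrative sketch).

    Marks, per pixel, the classes it almost surely does NOT belong to
    (probability below eps); training can then push those probabilities
    toward zero with far less risk from pseudo-label noise than a hard
    positive label carries.
    Returns a boolean (N, C) matrix where True means "not this class".
    """
    return probs < eps
```

In this sketch, even a rare minority class keeps its top `keep_frac` of pixels, whereas a single global confidence threshold would typically discard minority-class pixels wholesale.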