Federated Learning (FL) performs distributed training on clients using their local data and combines the resulting models on a central server, mitigating privacy exposure by avoiding data sharing. However, communication overhead remains the primary limitation of FL. This is especially true when training models on non-independent and identically distributed (non-IID) data, such as financial default risk data, where FL's computational costs increase significantly. This study addresses financial credit risk data by establishing a dynamic receptive-field adjustment mechanism for feature extraction, efficiently extracting default features with varying distributions and attributes. Additionally, it constructs a distributed feature-fusion architecture that aggregates features from both local and global models to attain higher accuracy at lower transmission cost. Experimental results demonstrate that the proposed FL framework uses dynamic receptive fields to allocate convolutional kernel weights, thereby improving feature extraction. In the feature-fusion stage, the proposed Multi-Fusion strategy efficiently customizes the aggregation of features from local and global models. The final solution reduces the number of communication rounds in federated learning by approximately 80%.
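The abstract does not specify the architecture of the dynamic receptive-field mechanism; the following is a minimal sketch assuming a selective-kernel-style design, in which parallel convolutions with different kernel sizes produce candidate features and a learned softmax gate allocates per-channel weights across the branches. All names here (`DynamicReceptiveFieldConv`, the two-branch layout, the reduction ratio) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class DynamicReceptiveFieldConv(nn.Module):
    """Blends 3x3 and 5x5 convolution branches via learned channel-wise weights."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two branches with different receptive fields.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        hidden = max(channels // reduction, 8)
        # Gate mapping pooled features to per-branch, per-channel weights.
        self.gate = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        # Summarize the combined response with global average pooling.
        s = (u3 + u5).mean(dim=(2, 3))                # (B, C)
        w = self.gate(s).view(x.size(0), 2, -1)       # (B, 2, C)
        w = torch.softmax(w, dim=1)                   # branch weights sum to 1 per channel
        a3 = w[:, 0].unsqueeze(-1).unsqueeze(-1)      # (B, C, 1, 1)
        a5 = w[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5


if __name__ == "__main__":
    layer = DynamicReceptiveFieldConv(channels=16)
    out = layer(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```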
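Likewise, the Multi-Fusion strategy is only described at a high level; the sketch below assumes a learned convex combination in which each client blends the output of its local feature extractor with that of the server-distributed global model. The class name `LocalGlobalFusion` and the sigmoid-gated blend are hypothetical illustrations of the local/global aggregation idea, not the paper's exact method.

```python
import torch
import torch.nn as nn


class LocalGlobalFusion(nn.Module):
    """Learns a per-sample weight for blending local and global feature vectors."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Gate conditioned on both feature vectors; outputs a blend weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        alpha = self.gate(torch.cat([local_feat, global_feat], dim=-1))
        # alpha near 1 favors the client-specific representation;
        # alpha near 0 favors the shared global representation.
        return alpha * local_feat + (1 - alpha) * global_feat


if __name__ == "__main__":
    fusion = LocalGlobalFusion(feat_dim=64)
    out = fusion(torch.randn(8, 64), torch.randn(8, 64))
    print(out.shape)  # torch.Size([8, 64])
```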