Multi-modal neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide complementary structural and functional information about the brain, thereby facilitating automated brain disease identification. However, incomplete data are unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard subjects with missing data, significantly reducing the number of training samples. Although several deep learning methods have been proposed, they typically rely on pre-defined regions-of-interest in neuroimages, which requires disease-specific expert knowledge. To address these issues, we propose a spatially-constrained Fisher representation framework for brain disease diagnosis with incomplete multi-modal neuroimages. We first impute missing PET images from their corresponding MRI scans using a hybrid generative adversarial network. With the complete (after imputation) MRI and PET data, we then develop a spatially-constrained Fisher representation network to extract statistical descriptors of neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experimental results on three databases suggest that our method can synthesize reasonable neuroimages and achieves promising results in brain disease identification compared with several state-of-the-art methods.
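For context, a minimal sketch of the standard Fisher-vector encoding under a Gaussian mixture model, the general technique that such statistical descriptors build on (the notation here is illustrative, not the paper's own, and the spatial constraint described above is an additional assumption not shown): given local features $\{x_1, \dots, x_N\}$ and a $K$-component GMM with mixture weights $\pi_k$, means $\mu_k$, and diagonal variances $\sigma_k^2$, the soft assignments and the gradient with respect to the means are
\[
\gamma_k(x_n) = \frac{\pi_k \, \mathcal{N}(x_n \mid \mu_k, \sigma_k^2)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x_n \mid \mu_j, \sigma_j^2)},
\qquad
\mathcal{G}_{\mu_k} = \frac{1}{N \sqrt{\pi_k}} \sum_{n=1}^{N} \gamma_k(x_n) \, \frac{x_n - \mu_k}{\sigma_k},
\]
and the descriptor concatenates $\mathcal{G}_{\mu_k}$ (together with analogous variance-gradient terms) over all $k$ components.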