Gradient nonlinearities not only induce spatial distortion in magnetic resonance imaging (MRI), but also introduce discrepancies between the intended and acquired diffusion sensitization in diffusion weighted (DW) MRI. Advances in scanner performance have increased the importance of correcting gradient nonlinearities. The most common approaches to gradient nonlinear field estimation rely on phantom calibration field maps, which are not always feasible, especially for retrospective data. Here, we derive a quadratic minimization problem for the complete gradient nonlinear field (L(r)). This approach starts with the corrupted diffusion signal and estimates L(r) in two scenarios: (1) the true diffusion tensor is known and (2) the true diffusion tensor is unknown (i.e., the diffusion tensor is estimated). We show the validity of this mathematical approach both theoretically and through tensor simulation. The estimated field is assessed through diffusion tensor metrics: mean diffusivity (MD), fractional anisotropy (FA), and principal eigenvector (V1). In simulations with 300 diffusion tensors, we show that the mathematical model is not ill-posed and remains stable. We find that when the true diffusion tensor is known, (1) the difference between the determinants of the estimated L(r) field and the true field is near zero and (2) the median difference between L(r)-corrected diffusion metrics and the true values is near zero. We also find that the accuracy of the L(r) estimate depends on the level of L(r) corruption. This work provides an approach to estimate the gradient nonlinear field without the need for additional calibration scans. To the best of our knowledge, the mathematical derivation presented here is novel.
Keywords: Magnetic resonance distortion; gradient nonlinearity; tensor simulation.
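To make the simulated setting concrete, the following is a minimal Python/NumPy sketch of the pipeline the abstract describes: a gradient-nonlinearity matrix L perturbs the applied gradient directions (g_eff = L g) in the diffusion tensor signal model, and L is then recovered from the corrupted signal when the true tensors are known (scenario 1). The joint nonlinear least-squares fit over a handful of tensors sharing one L, the 5% corruption level, the 30-direction b = 1000 s/mm^2 scheme, and all function names are illustrative assumptions; this is a sketch of the general idea, not the paper's quadratic minimization derivation.

```python
import numpy as np
from scipy.optimize import least_squares

def dt_signal(D, bvals, bvecs, L=np.eye(3), S0=1.0):
    """DW signal S = S0 exp(-b (Lg)^T D (Lg)); L = I gives the undistorted signal."""
    g = bvecs @ L.T
    return S0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', g, D, g))

def estimate_L_known_D(signals, tensors, bvals, bvecs, S0=1.0):
    """Scenario (1): true tensors known. Fit the 9 entries of a shared L by
    nonlinear least squares on log-signal residuals over several tensors
    (an illustrative estimator, not the paper's quadratic formulation)."""
    ys = [np.log(s / S0) for s in signals]
    def residual(l):
        L = l.reshape(3, 3)
        g = bvecs @ L.T
        return np.concatenate(
            [-bvals * np.einsum('ij,jk,ik->i', g, D, g) - y
             for D, y in zip(tensors, ys)])
    return least_squares(residual, np.eye(3).ravel()).x.reshape(3, 3)

def fit_tensor(signal, bvals, bvecs, L=np.eye(3), S0=1.0):
    """Linear least-squares tensor fit using L-corrected gradient directions."""
    g = bvecs @ L.T
    # Design matrix for the 6 unique tensor elements: Dxx, Dyy, Dzz, Dxy, Dxz, Dyz.
    X = -bvals[:, None] * np.column_stack(
        [g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
         2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    d = np.linalg.lstsq(X, np.log(signal / S0), rcond=None)[0]
    return np.array([[d[0], d[3], d[4]], [d[3], d[1], d[5]], [d[4], d[5], d[2]]])

def metrics(D):
    """Mean diffusivity (MD), fractional anisotropy (FA), principal eigenvector (V1)."""
    w, v = np.linalg.eigh(D)
    md = w.mean()
    fa = np.sqrt(1.5 * np.sum((w - md)**2) / np.sum(w**2))
    return md, fa, v[:, np.argmax(w)]

# Toy experiment: 30 directions at b = 1000 s/mm^2 and a hypothetical
# 5%-level nonlinearity L_true shared by a handful of random tensors.
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(30, 1000.0)
L_true = np.eye(3) + 0.05 * rng.normal(size=(3, 3))

tensors = []
for _ in range(5):
    A = rng.normal(size=(3, 3))
    D = A @ A.T
    tensors.append(2.1e-3 * D / np.trace(D))      # MD = 0.7e-3 mm^2/s
signals = [dt_signal(D, bvals, bvecs, L=L_true) for D in tensors]

L_est = estimate_L_known_D(signals, tensors, bvals, bvecs)
print("det(L_true) - det(L_est):", np.linalg.det(L_true) - np.linalg.det(L_est))

# Correct one voxel with the estimated field and compare MD/FA to the truth.
D_corr = fit_tensor(signals[0], bvals, bvecs, L=L_est)
md_t, fa_t, _ = metrics(tensors[0])
md_c, fa_c, _ = metrics(D_corr)
print("MD error:", md_c - md_t, " FA error:", fa_c - fa_t)
```

In this sketch, the corrected diffusion metrics are obtained by refitting the tensor with L-corrected gradient directions and comparing MD, FA, and V1 against the ground truth, mirroring the assessment described in the abstract.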