Nonlinear inverse problems in electromagnetics are typically solved by dividing the Earth into cells of constant conductivity, linearizing the equations about a current model, computing the sensitivities, and then solving an optimization problem to obtain an updated estimate of the conductivity. In principle, this procedure can be implemented for a problem of any size, but in practice the computations involved may be too large for the available computing hardware. In electromagnetics this is currently the situation irrespective of whether the interpreter has access to a workstation or a supercomputer. In addition to the demands imposed by the need to compute the predicted responses from a specified model (i.e., invoking a forward mapping), there are two computational roadblocks encountered when solving an inverse problem: (1) calculation of the sensitivity matrix and (2) solution of the resultant large system of equations. If either of these operations cannot be carried out in a reasonable time, then an alternative strategy is required. Such strategies include generalized subspace methods, conjugate gradient methods, and approximate inverse mapping (AIM) procedures. The theoretical foundations and computational details of these strategies are explored in this paper, with the ultimate goal that the inversionist, after assessing his/her computing power and knowing the time required to perform forward modeling, can generate a methodology by which to solve the problem. The methodologies are compared quantitatively by considering an archetypal inversion problem in electromagnetics, the inversion of dc potential data to recover the electrical conductivity.
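The linearize-and-update cycle described above can be sketched in a few lines. The following is a minimal illustration, not the paper's method: the forward mapping, its sensitivities, and the regularization parameter are all illustrative assumptions, and the toy quadratic forward map stands in for an electromagnetic (or dc potential) simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_cells = 20, 10
G = rng.normal(size=(n_data, n_cells))  # fixed kernel of the toy problem


def forward(m):
    # Toy nonlinear forward mapping: d_i = sum_j G_ij * m_j**2.
    # A real EM code would solve Maxwell's (or the dc potential) equations.
    return G @ (m ** 2)


def sensitivity(m):
    # Sensitivity matrix J_ij = dF_i/dm_j = 2 * G_ij * m_j,
    # i.e., the linearization of the forward map about the current model.
    return G * (2.0 * m)


# Synthetic "observed" data from an assumed true model.
m_true = 1.0 + 0.1 * rng.normal(size=n_cells)
d_obs = forward(m_true)

beta = 1e-4            # illustrative regularization (trade-off) parameter
m = np.ones(n_cells)   # starting model of constant conductivity

for _ in range(10):
    r = d_obs - forward(m)    # data residual
    J = sensitivity(m)        # linearize about the current model
    # Regularized least-squares system for the model update:
    # (J^T J + beta I) dm = J^T r
    dm = np.linalg.solve(J.T @ J + beta * np.eye(n_cells), J.T @ r)
    m = m + dm                # updated conductivity estimate

misfit = np.linalg.norm(d_obs - forward(m))
```

The two roadblocks named in the text appear directly here: forming `J` (one forward-type computation per datum or per cell in realistic codes) and solving the `n_cells`-by-`n_cells` normal-equations system, which the subspace, conjugate gradient, and AIM strategies are designed to avoid or approximate.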