Guest Editorial for the Special Issue on Optimization


  • This article was published online on [5 July 2013]. This notice is included in the online and print versions to indicate that both have been modified [24 September 2013].

Although optimization itself has been integral to engineering ever since its advent, only recently have automated optimization methods become a central design strategy. In centuries past, an engineer would design some object, test its performance, make small adjustments, and test again. The invention of the calculus obviated the need for some of this arduous repetition: If some measure of performance can be written as a simple continuous algebraic function of design parameters, optimal designs can be found through pure mathematics.

With the advent of automatic computation, this strategy has been expanded further. Even without a well-defined mathematical figure of merit, objective function landscapes may be explored by various techniques that use local measurements of function behavior. Line search and trust region algorithms can rapidly find local function optima, either by analytically computing or approximating derivatives, and can be combined with powerful simulation software to ‘tweak’ design concepts.
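To make the derivative-driven local search described above concrete, here is a minimal sketch (a toy illustration, not drawn from any particular software) of a backtracking line search descending a smooth one-dimensional objective; the step size and shrink factor are arbitrary choices:

```python
def minimize_1d(f, df, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=200):
    """Backtracking line-search descent on a smooth 1-D objective f with
    derivative df.  Each iteration moves against the gradient, halving the
    trial step until the objective actually decreases."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:          # (near-)stationary point found
            break
        t = step
        # Backtrack: shrink the step until f decreases.
        while f(x - t * g) >= f(x) and t > tol:
            t *= shrink
        x = x - t * g
    return x

# Example: minimize f(x) = (x - 2)^2, whose optimum is at x = 2.
x_opt = minimize_1d(lambda x: (x - 2.0) ** 2,
                    lambda x: 2.0 * (x - 2.0), x0=10.0)
```

Trust region methods proceed in the same local spirit, but bound the step inside a region where a quadratic model of the objective is trusted rather than searching along a fixed direction.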

In the past 20 years, these mathematical approaches have been augmented (some would say superseded) by nature-based optimization schemes that operate not on conventional mathematical principles but by imitating some natural system. The first of these schemes to see broad application was simulated annealing (SA), which imitates the heating and cooling of metals to remove crystal defects. SA was first proposed in the mid-1980s, based on an algorithm created in the 1950s, and was soon followed by a host of competitors including genetic algorithms (GAs), particle swarm optimization (PSO), and ant colony optimization, each based on a different natural phenomenon and each geared to different types of optimization problems. Unlike their more mathematical predecessors, nature-based optimizers are robust in the face of multiple optima, noise, and even discontinuous fitness landscapes. This enhanced generality is typically purchased with some sacrifice of speed in locating solutions and the complete abdication of any attempt to prove the optimality of the results. Nonetheless, the resiliency of these newer algorithms allows them to be used as complete design methods for many problems: create a function to optimize, describe the design parameters and the limits on their values, and set the computer to work.
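The annealing analogy can be stated in a few lines of code. The sketch below is a generic, minimal SA loop on a multimodal toy function (the temperature schedule, perturbation size, and objective are illustrative choices, not taken from any paper in this issue): a worsening move is accepted with probability exp(-Δ/T), and the slowly falling temperature T gradually forbids uphill moves.

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimal simulated-annealing sketch: improving perturbations are always
    accepted; worsening ones are accepted with probability exp(-delta / T),
    and T is lowered each step so the search settles into a deep optimum."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = temp
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 1.0)          # random perturbation
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc                     # accept the move
            if fx < best_f:
                best_x, best_f = x, fx           # track the best point seen
        T *= cooling                             # cool the system
    return best_x, best_f

# A multimodal test function whose global minimum is at x = 0.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
x_star, f_star = simulated_annealing(f, x0=8.0)
```

Note that nothing in the loop requires derivatives or even continuity of f, which is precisely the robustness to rough fitness landscapes described above.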

Numerical modeling and simulation play a central role in the solution of many scientific and engineering problems nowadays. Modeling and simulation require a deep understanding of the mathematical equations governing the physical phenomena being studied, as well as algorithms capable of solving these equations efficiently and accurately. The governing equations can be posed in differential form, in integral form, or in a combination thereof. Differential equations (DEs) have historically been solved using finite difference and finite element methods (FDM/FEM). Integral equations (IEs) are often solved using boundary element methods (BEM), often called the method of moments by electrical engineers. Although classically formulated DE and IE solvers are useful in the solution of many real-world problems, several nontraditional discretization schemes (NDSs) for solving DEs and IEs pertinent to the analysis of device, network, and field phenomena have recently gained significant traction. Examples of NDSs include Nyström methods; meshless techniques, including partition-of-unity and generalized FEM and BEM schemes; and stochastic methods, including random walk and walk-on-sphere techniques. Generally, NDSs aim to overcome certain drawbacks of traditional discretization methods. For example, Nyström-based integral equation methods are often easier to implement than their Galerkin counterparts, especially when used in conjunction with point-based fast multipole algorithms. Meshless differential and integral equation solvers significantly reduce the cost of developing computer-aided design models and offer substantial flexibility when analyzing problems involving deformable and moving objects and boundaries. Finally, random walk and walk-on-sphere methods provide particularly simple, stable, and easy-to-parallelize avenues for solving certain differential equations.
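The simplicity claimed for walk-on-sphere methods is easy to demonstrate. The sketch below (an illustrative textbook example, not from any paper in this issue) estimates the solution of Laplace's equation on the unit disk with Dirichlet data g: each walk repeatedly jumps to a uniformly random point on the largest circle inside the domain centered at the current point, stops within a tolerance eps of the boundary, and scores the boundary data there. Independent walks average to the solution and parallelize trivially.

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, rng=None):
    """One walk-on-spheres sample for Laplace's equation on the unit disk:
    jump to a random point on the largest inscribed circle until within eps
    of the boundary, then score the Dirichlet data g there."""
    rng = rng or random.Random()
    while True:
        d = 1.0 - math.hypot(x, y)     # distance to the unit-circle boundary
        if d < eps:
            r = math.hypot(x, y)       # project onto the boundary
            return g(x / r, y / r)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)
        y += d * math.sin(theta)

def solve(x, y, g, n=20000, seed=1):
    """Average n independent walks to estimate the solution at (x, y)."""
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, rng=rng) for _ in range(n)) / n

# Boundary data g(x, y) = x has harmonic extension u(x, y) = x,
# so the estimate at (0.5, 0) should be close to 0.5.
u = solve(0.5, 0.0, lambda bx, by: bx)
```

No mesh is generated at any point, which is the essential contrast with the FDM/FEM and BEM approaches mentioned above.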

This special issue presents eight papers detailing advances in the application of optimization methods to the analysis and design of electronic networks, devices, and fields. These papers cover the applications of several different algorithms to the optimization of devices ranging over the full span of the journal's coverage.

The issue begins with two papers on the use of differential evolution (DE) for the optimization of antenna arrays. As an algorithm, DE is closely related to the GA approach, but the wildly varying changes that can be generated by the standard crossover operator of the GA are tempered with a differential approach. The first of the two DE papers, by Chatterjee, Mahanti, and Choudhury, optimizes the footprint of space-based antenna arrays through amplitude and phase adjustments of the elements. The fast Fourier transform is applied to accelerate the computation and achieve the desired designs. Zhang, Chen, and Jiao discuss the use of DE for the Pareto optimization of concentric ring arrays. The Pareto design approach is an important topic in engineering, as it allows the designer to choose among a set of designs, each representing an optimal trade-off between competing design goals.
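The "differential approach" that tempers the GA-style crossover can be seen in the classic DE/rand/1/bin scheme, sketched below on a toy objective (the population size, scale factor F, and crossover rate CR are generic textbook choices, not parameters from either paper): each trial vector adds the scaled difference of two population members to a third, so step sizes shrink naturally as the population converges.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: differential mutation builds each trial
    vector from pop[a] + F * (pop[b] - pop[c]), binomial crossover mixes it
    with the parent, and the trial replaces the parent only if it improves."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [
                pop[a][k] + F * (pop[b][k] - pop[c][k])
                if (rng.random() < CR or k == jrand) else pop[i][k]
                for k in range(dim)
            ]
            ft = f(trial)
            if ft < fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Example: minimize the 2-D sphere function, whose optimum is at the origin.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        bounds=[(-5.0, 5.0)] * 2)
```

Because the mutation step is built from differences within the current population, the search is aggressive early on and automatically fine-grained near convergence.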

More antenna design applications are to be found in the paper by Sánchez-Montero, López-Espí, Manjarres, Landa-Torres, Salcedo-Sanz, and Del Ser, and that by Weatherspoon, Connor, and Foo. The first of these papers uses Pareto techniques to find the best trade-offs between gain and bandwidth for an antenna configuration created from a hybrid between a planar inverted-F antenna and a circular patch antenna. The optimization approach taken in this work is evolutionary programming, and results are presented for three different antennas representing efficient trade-offs for mobile telephony applications. The second of these papers applies a new algorithm, the cross-entropy (CE) method, to general linear pattern synthesis problems. Unlike most modern stochastic optimizers, CE is not based on a simplified simulation of natural processes; instead, it works by creating a population of potential solutions that is updated in a manner that makes it probable that the best solutions in the current pool will find themselves in future generations. The paper demonstrates that CE is a viable alternative to PSO and GAs for this problem.

Finally, this issue contains four papers on topics far beyond antenna design. Tseng and Wu discuss the technology computer-aided design of heat dissipators for InGaP/GaAs heterojunction bipolar transistors. Their work results in efficiency improvements of 45%. The paper by Kilic, El-Araby, Nguyen, and Dang designs both antennas and antireflective surfaces using PSO. This work is especially interesting because it is carried out on a graphics processing unit (GPU), which vastly accelerates the optimization. Channel-dropping microwave filter design using the Taguchi method is described by Harun, Shaari, Menon, Razak, and Bidin. Finally, an open source simulation system for cylindrical geometries, especially suited for MRI modeling, is presented by Leibig, Rennings, Held, and Erni.

Any special issue requires the help of many people, both associated with the journal management and outside its structure. The guest editor would like to thank the authors and reviewers for their time and effort in creating and improving the papers that appear here. Equally importantly, this issue would never have seen the light of day if it were not for the tireless efforts of Eric Michielssen, the Editor-in-Chief, and Alice Wood at Wiley who organized, cajoled, and clarified until this special issue became a reality.



    Daniel S. Weile obtained his BSEE and his BS (in Mathematics) at the University of Maryland at College Park in 1994, and his MS and PhD in Electrical Engineering at the University of Illinois at Urbana-Champaign in 1995 and 1999, respectively. Currently, he is an associate professor of Electrical Engineering at the University of Delaware. In 1994, he worked at the Institute for Plasma Research developing interactive software for the design of depressed collectors for gyrotron beams. As a research assistant and visiting assistant professor at the University of Illinois, Dr. Weile worked on the efficient design of electromagnetic devices using stochastic optimization techniques and fast time-domain integral equation methods for the solution of scattering problems. His current research interests include computational electromagnetics (especially time-domain integral equations), periodic structures, and the use of evolutionary optimization in electromagnetic design. Dr. Weile is the recipient of an NSF CAREER Award and an ONR Young Investigator Award. He is a member of Eta Kappa Nu, Tau Beta Pi, Phi Beta Kappa, and URSI Commission B.