The most important part of control design for nonlinear dynamic systems is to guarantee stability. The control is then quantitatively designed to meet multiple, often conflicting, performance objectives. The performance of the closed-loop system is a function of various system and control parameters, and quantitative design over multiple parameters to meet multiple conflicting performance objectives is a challenging task. In this talk, we present recent results on the quantitative design of controls for nonlinear dynamic systems using advanced multi-objective optimization algorithms. The controls can be of linear PID type or nonlinear feedback such as sliding mode. The optimization algorithms consist of parallel cell mapping methods with subdivision techniques. Examples of linear and nonlinear controls will be presented with both numerical simulations and experimental validations.

Here we consider the multiobjective optimization problem $f_1(x)\to\min,\dots,f_m(x)\to\min$, $x\in\mathbb{R}^d$. We will later focus on the bicriteria case $m=2$, though some of the methods discussed are more general.

In previous work ([1], [2], and [3]) the hypervolume gradient has been suggested as a direction for steering a set of points toward sets with larger or maximal hypervolume indicator values. The set of points is interpreted as a vector of dimension $\mu\cdot d$, where $\mu$ is the number of points in the set and $d$ the dimension of each point. A search point representing a set is thus a vector $p\in(\mathbb{R}^d)^{\mu}$. We refer to its components as $p^{(1)},\dots,p^{(\mu)}$, i.e. the concatenation of $p^{(1)},\dots,p^{(\mu)}$ equals $p$. Given a sufficiently large reference point $r\in\mathbb{R}^m$, the hypervolume indicator of $p$ is defined by:

$$HI(p)=\lambda\left(\bigcup_{i=1}^{\mu}\left[f(p^{(i)}),\,r\right]\right)$$

The hypervolume gradient $\nabla HI(p)$ is defined for non-degenerate cases (see [1]). In practice, the degenerate cases do not occur, as they form a measure-zero subset of $\mathbb{R}^{d\cdot\mu}$. However, two other problems occur when using the gradient as a search direction. In order to discuss these problems we introduce the subgradients

$$\nabla^{(i)}HI(p):=\left(\frac{\partial HI}{\partial p_1^{(i)}}(p),\dots,\frac{\partial HI}{\partial p_d^{(i)}}(p)\right)^T,\quad i=1,\dots,\mu$$

A component $p^{(i)}$, $i\in\{1,\dots,\mu\}$, is called dominated w.r.t. $p$ if and only if there exists $p^{(j)}$, $j\in\{1,\dots,\mu\}$, such that $p^{(j)}$ (Pareto) dominates $p^{(i)}$. In this case the $i$-th subgradient is zero, i.e. $\nabla^{(i)}HI(p)=(0,\dots,0)^T\in\mathbb{R}^d$.
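The zero-subgradient effect is easy to reproduce numerically. The sketch below is our own construction, not taken from the text: the bi-objective test problem, the reference point, and the finite-difference step size are all assumptions. It computes the 2-D hypervolume indicator with a simple sweep and estimates the subgradients by central differences; the subgradient of a dominated component comes out as (numerically) zero.

```python
# 2-D hypervolume (minimization) via a sweep over points sorted by f1;
# the running minimum of f2 makes the sweep robust to dominated points.
# All points are assumed to dominate the reference point r.
def hv2d(front, r):
    pts = sorted(front)
    area, best_f2 = 0.0, r[1]
    for i, (x1, f2) in enumerate(pts):
        next_x1 = pts[i + 1][0] if i + 1 < len(pts) else r[0]
        best_f2 = min(best_f2, f2)
        area += (next_x1 - x1) * (r[1] - best_f2)
    return area

# Assumed bi-criteria test problem: the Pareto set is the segment
# between (0, 0) and (1, 1).
def F(x):
    return (x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + (x[1] - 1) ** 2)

def HI(P, r):
    return hv2d([F(p) for p in P], r)

# i-th subgradient via central differences on the decision variables of P[i].
def subgrad(P, i, r, h=1e-6):
    g = []
    for j in range(len(P[i])):
        Pp = [list(p) for p in P]; Pp[i][j] += h
        Pm = [list(p) for p in P]; Pm[i][j] -= h
        g.append((HI(Pp, r) - HI(Pm, r)) / (2 * h))
    return g

# Two non-dominated components and one component dominated by the first:
P = [[0.2, 0.2], [0.8, 0.8], [0.3, -0.3]]
r = (3.0, 3.0)
g_dominated = subgrad(P, 2, r)      # vanishes
g_nondominated = subgrad(P, 0, r)   # non-zero
```

A small perturbation of the dominated component leaves the union of boxes unchanged, so its finite-difference subgradient is zero, matching the statement above.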

In the hypervolume gradient ascent method a zero subgradient leads to the unwanted effect that part of the points in the set – the dominated points – stay in the same position [3]. *The goal of this work is to propose and test methods that replace zero subgradients by vectors that lead in the direction of the Pareto front approximation.* Once the dominated components become non-dominated (relative to the other components), they can again contribute to the hypervolume indicator and their subgradients will be non-zero.

Besides this, a known problem in hypervolume gradient ascent is that the lengths of the subgradients can differ widely, which leads to slow convergence in some parts of the objective space. This phenomenon was analysed in [3] and termed '*creepiness*'. To counteract creepiness, it was proposed to normalize all subgradients before applying them in the gradient-based search [5]; this strategy is also adopted in this work. Five methods are proposed to replace zero subgradients. These are:

M1 Use the gradient of $f_1(p^{(i)})+f_2(p^{(i)})$. In regular cases this method will lead the dominated points to the Pareto front, although it tends to steer all points towards the same point on the Pareto front (the point where the tangent has a 45° slope).

M2 To avoid the diversity loss of M1, use instead the gradient of a randomly weighted sum $w_1f_1(p^{(i)})+(1-w_1)f_2(p^{(i)})$, with $w_1\sim U(0,1)$ a uniformly distributed random number.

M3 For this method a direction that points into the dominance cone of a single point was proposed in [4] as

$$\nabla f_1(x)/||\nabla f_1(x)||+\nabla f_2(x)/||\nabla f_2(x)||.$$

M4 This method seeks to find a point that fills the nearest gap of the current Pareto front approximation with respect to the dominated point. For this it computes the secant of the two neighboring points of the closest gap and the angle of this secant. The weights of the linear aggregation $w_1f_1+w_2f_2$ are chosen in such a way that a tangential point on the Pareto front with this angle is approximated. In other words, if the secant has slope $m$, the weights are chosen such that $w_1/w_2=m$.

M5 This method seeks to get closer to the midpoint of the secant described in M4. We term this midpoint $c$, and the subgradient is replaced by the gradient of the squared Euclidean distance to this point. That is, for a dominated component with index $i$ we replace the subgradient by:

$$\nabla((f_1(p^{(i)})-c_1)^2+(f_2(p^{(i)})-c_2)^2)$$
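As an illustration, the replacement directions of M1–M3 (the three methods that only need the objective gradients at the dominated point) can be sketched as follows. This is our own construction: the bi-objective test problem and its analytic gradients are assumptions, each direction is negated so that it acts as a descent direction on the minimized objectives, and each is normalized following the creepiness countermeasure described above.

```python
import math
import random

# Assumed test problem: f1(x) = ||x||^2, f2(x) = ||x - (1,1)||^2,
# with analytic gradients.
def grads(x):
    g1 = [2 * x[0], 2 * x[1]]
    g2 = [2 * (x[0] - 1), 2 * (x[1] - 1)]
    return g1, g2

def _norm(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / n for c in v]

# M1: descent direction of f1 + f2 (negated gradient, normalized).
def m1(x):
    g1, g2 = grads(x)
    return _norm([-(a + b) for a, b in zip(g1, g2)])

# M2: randomly weighted sum w*f1 + (1-w)*f2, w ~ U(0,1).
def m2(x, rng=random):
    w = rng.random()
    g1, g2 = grads(x)
    return _norm([-(w * a + (1 - w) * b) for a, b in zip(g1, g2)])

# M3: direction into the dominance cone, the sum of the normalized
# gradients (negated for descent).
def m3(x):
    g1, g2 = grads(x)
    n1, n2 = _norm(g1), _norm(g2)
    return _norm([-(a + b) for a, b in zip(n1, n2)])

# Example: a point below the Pareto segment is steered back toward it.
d1 = m1([0.3, -0.3])
d2 = m2([0.3, -0.3], random.Random(0))
d3 = m3([0.3, -0.3])
```

M4 and M5 additionally require the neighboring points of the nearest gap in the current front approximation, so they are not sketched here.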

The five methods are tested on problems with both convex and concave Pareto fronts and their convergence behavior is compared. The study reveals interesting patterns in the convergence behavior and provides explanations for them.

[1] Emmerich, M., & Deutz, A. (2014). Time complexity and zeros of the hypervolume indicator gradient field. In EVOLVE – A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation III (pp. 169-193). Springer International Publishing.

[2] Emmerich, M., Deutz, A., & Beume, N. (2007). Gradient-based/evolutionary relay hybrid for computing Pareto front approximations maximizing the S-metric (pp. 140-156). Springer Berlin Heidelberg.

[3] Hernández, V. A. S., Schütze, O., & Emmerich, M. (2014). Hypervolume maximization via set based Newton's method. In EVOLVE – A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V (pp. 15-28). Springer International Publishing.

[4] López, A. L., Coello, C. A. C., & Schütze, O. (2012). Using Gradient Based Information to Build Hybrid Multi-objective Evolutionary Algorithms (PhD thesis, Computer Science Department, CINVESTAV-IPN).

[5] Verhoef, W., Deutz, A. H., & Emmerich, M. T. M. (2015). On gradient-based and swarm-based algorithms for set-oriented bicriteria optimization. In EVOLVE 2015 – A Bridge between Set Oriented Numerics, Probability Theory, and Evolutionary Computation, Iasi, Romania, June 2015 (in print).

Nowadays, solving multi-objective optimization problems (MOPs) is crucial in application areas such as chemistry, design, manufacturing, and medicine, among others. However, the task of solving this kind of problem is not straightforward, since several objectives have to be optimized at the same time. The solution set of a MOP is called the Pareto set, which typically forms a $(k-1)$-dimensional surface, where $k$ is the number of objectives to be optimized. The most common way to obtain a finite approximation of the solution set is to use deterministic techniques. Nevertheless, other techniques such as evolutionary algorithms have reported very promising results, even being more robust than the mathematical techniques. In order to assess the finite approximation obtained by an algorithm, researchers have proposed performance indicators, which help to select the best approximation among a set of them according to given needs. The hypervolume is one of the most widely used indicators, since it has several desirable properties. Recently, the Hypervolume Newton method was proposed as a standalone technique able to converge the whole population toward the best hypervolume distribution. One of the highlights of this method is its quadratic convergence, which makes it a natural candidate for use within a global approach such as an evolutionary algorithm. The success of hybridizing indicator-based evolutionary algorithms with local search techniques, for improving performance or for refining the final solution, is well known. Here, we present the first integration of the Hypervolume Newton method into an evolutionary algorithm. To do this we first present the formulation of the hypervolume Hessian matrix, since in previous works an approximation was used. Then, we formulate the Hypervolume Newton method algorithm and show applications on some test problems.
In order to extend the applicability of the Hypervolume Newton method, we also introduce a constraint handling technique for inequality constraints. Finally, we present numerical results of the memetic strategy against the evolutionary algorithms without our local search.
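A finite-difference sketch of one set-based Newton step on the hypervolume indicator may make the idea concrete. This is our own reconstruction for the bicriteria case: the work derives the exact Hessian, whereas here the gradient and Hessian are approximated numerically on an assumed test problem, and a pseudo-inverse step is used for robustness near degenerate configurations.

```python
import numpy as np

# 2-D hypervolume (minimization) by a sweep; points assumed to dominate r.
def hv2d(front, r):
    pts = sorted(front)
    area, best = 0.0, r[1]
    for i, (x1, f2) in enumerate(pts):
        nxt = pts[i + 1][0] if i + 1 < len(pts) else r[0]
        best = min(best, f2)
        area += (nxt - x1) * (r[1] - best)
    return area

# Assumed test problem: Pareto set is the segment between (0,0) and (1,1).
def F(x):
    return (x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + (x[1] - 1) ** 2)

# The set of mu points in R^d is flattened into one vector v in R^(mu*d).
def HI(v, r, d=2):
    P = [v[i:i + d] for i in range(0, len(v), d)]
    return hv2d([F(p) for p in P], r)

# Central-difference gradient and Hessian of HI at v.
def grad_hess(v, r, h=1e-3):
    n = len(v)
    g, H = np.zeros(n), np.zeros((n, n))
    e = np.eye(n) * h
    for i in range(n):
        g[i] = (HI(v + e[i], r) - HI(v - e[i], r)) / (2 * h)
        for j in range(i, n):
            H[i, j] = H[j, i] = (
                HI(v + e[i] + e[j], r) - HI(v + e[i] - e[j], r)
                - HI(v - e[i] + e[j], r) + HI(v - e[i] - e[j], r)
            ) / (4 * h * h)
    return g, H

# One Newton step toward a stationary point of HI (least-squares solve
# doubles as a pseudo-inverse if H happens to be singular).
def newton_step(v, r):
    g, H = grad_hess(v, r)
    return v - np.linalg.lstsq(H, g, rcond=None)[0]

v0 = np.array([0.3, 0.3, 0.7, 0.7])   # two points on the Pareto set
r = (3.0, 3.0)
v1 = newton_step(v0, r)
```

In the memetic setting described above, such a step would refine individuals produced by the evolutionary algorithm, with the analytic Hessian replacing the finite differences used here.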

In a variety of applications in industry and finance one is faced with the problem that several objectives have to be optimized concurrently, leading to a *multi-objective optimization problem* (MOP). A common goal for such problems is to identify the set of optimal solutions (the so-called *Pareto set*, denoted by $P_Q$) and its image in objective space, the *Pareto front*. Hereby, optimality is typically defined via the concept of *dominance*. Moreover, in certain situations it may be beneficial for the decision maker (DM) to consider, in addition to the optimal solutions, also nearly optimal ones, which are alternatively called approximate solutions or $\epsilon$-efficient solutions. The reason is that this may increase the number of valid options for the current setting (note that if two points $x$ and $y$ are near in objective space, i.e., if $F(x)\approx F(y)$, this does not have to hold in parameter space; in fact, $x$ and $y$ can differ significantly, depending on properties of $F$ such as (quasi-)periodicities or symmetries).

All of the works dealing with approximate solutions in multi-objective optimization are based on the concept of $\epsilon$-dominance, which was introduced in [3, 9]. $\epsilon$-dominance or $\epsilon$-efficiency has been studied and used by many researchers, e.g. to allow (or tolerate) nearly optimal solutions [3], [9], to approximate the set of optimal solutions $P_Q$ [4], or in order to discretize this set [2, 7, 8]. $\epsilon$-efficient solutions have also been used to tackle a variety of real-world problems including portfolio selection problems [10], a location problem [1], and a minimal cost flow problem [4].

To tap the full potential of additionally considering the approximate solutions $(P_{Q,\epsilon})$ over the 'sole' consideration of the optimal solutions, it is required to maintain a representation of the *entire* set $P_{Q,\epsilon}$. The challenge is that, for a problem with $n$ parameters and $k$ objectives, the set $P_{Q,\epsilon}$ forms an $n$-dimensional object, while $P_Q$ is, under some mild regularity assumptions on $F$, 'only' $(k-1)$-dimensional ($n\gg k$ can be assumed). Hence, for the effective extension from $P_Q$ to $P_{Q,\epsilon}$ as the basis of the decision making process, a suitable discretization of the latter set is essential.

In this work, we design and investigate archiving strategies aiming at the approximation of this set. We show that the first archiving strategy converges, under certain assumptions on the generator, in a probabilistic sense toward $P_{Q,\epsilon}$. We further present and investigate possible discretization strategies to maintain a finite-size representation of $P_{Q,\epsilon}$ in the limit, in both parameter and objective space. We also give bounds on the approximation quality and on the cardinality of the limit solution set. Finally, we perform a comparative study on some test problems in order to visualize the effect of all strategies. Preliminary studies of this work can be found in [5, 6].
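The flavor of such an archiving strategy can be sketched as follows. This is a simplified variant of our own, not the strategies analyzed in the work: the precise update rules and their convergence properties are the subject of the study and of [5, 6]. In the sketch, a point with image $f_a$ is said to $\epsilon$-dominate a point with image $f_b$ if $f_{a,i}+\epsilon_i\le f_{b,i}$ for all $i$, i.e. it is better by at least the margin $\epsilon$ in every component.

```python
def eps_dominates(fa, fb, eps):
    """True if fa dominates fb by the margin eps in every component."""
    return all(a + e <= b for a, e, b in zip(fa, eps, fb))

def update_archive(archive, x, fx, eps):
    """Insert candidate (x, fx) unless an archived point eps-dominates it;
    prune archived points that the candidate eps-dominates.  This is one of
    many possible update rules; nearly optimal points survive because only
    a dominance margin of eps removes them."""
    if any(eps_dominates(fa, fx, eps) for _, fa in archive):
        return archive
    archive = [(a, fa) for a, fa in archive if not eps_dominates(fx, fa, eps)]
    archive.append((x, fx))
    return archive

# Toy usage with eps = (0.1, 0.1):
eps = (0.1, 0.1)
A = []
A = update_archive(A, "x1", (1.0, 1.0), eps)
A = update_archive(A, "x2", (1.05, 1.05), eps)  # nearly optimal: kept
A = update_archive(A, "x3", (2.0, 2.0), eps)    # eps-dominated: rejected
```

Bounding the archive size in the limit requires the additional discretization strategies discussed above; the rule sketched here only prevents points that are clearly $\epsilon$-dominated from entering.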

[1] R. Blanquero and E. Carrizosa. A d.c. biobjective location model. Journal of Global Optimization, 23(2):569–580, 2002.

[2] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary Computation, 10(3):263–282, 2002.

[3] P. Loridan. $\epsilon$-solutions in vector minimization problems. Journal of Optimization Theory and Applications, 42:265–276, 1984.

[4] G. Ruhe and B. Fruhwirt. $\epsilon$-optimality for bicriteria programs and its application to minimum cost flows. Computing, 44:21–34, 1990.

[5] O. Schütze, C. A. Coello Coello, and E.-G. Talbi. Approximating the $\epsilon$-efficient set of an MOP with stochastic search algorithms. In A. Gelbukh and A. F. Kuri Morales, editors, Mexican International Conference on Artificial Intelligence (MICAI-2007), pages 128–138. Springer-Verlag Berlin Heidelberg, 2007.

[6] O. Schütze, C. A. Coello Coello, E. Tantar, and E.-G. Talbi. Computing finite size representations of the set of approximate solutions of an MOP with stochastic search algorithms. In GECCO '08: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pages 713–720, New York, NY, USA, 2008. ACM.

[7] O. Schütze, M. Laumanns, C. A. Coello Coello, M. Dellnitz, and E.-G. Talbi. Convergence of stochastic search algorithms to finite size Pareto set approximations. Journal of Global Optimization, 41(4):559–577, 2008.

[8] O. Schütze, M. Laumanns, E. Tantar, C. A. Coello Coello, and E.-G. Talbi. Computing gap free Pareto front approximations with stochastic search algorithms. Evolutionary Computation, 2007 (to appear).

[9] D. J. White. Epsilon efficiency. Journal of Optimization Theory and Applications, 49(2):319–337, 1986.

[10] D. J. White. Epsilon-dominating solutions in mean-variance portfolio analysis. European Journal of Operational Research, 105(3):457–466, 1998.

Nowadays, evolutionary algorithms are widely used to approximate the solutions of optimization problems. The so-called memetic algorithms hybridize evolutionary algorithms with mathematical programming techniques in order to improve the convergence rate. However, a drawback of these methods is that in some cases the cost (in terms of function calls) of the mathematical programming techniques is higher than the cost of the evolutionary mechanism. Furthermore, in some optimization problems the explicit gradient information that some of these techniques require is missing. In order to reduce the cost of the mathematical programming techniques, we propose a method that exploits the data already calculated by the evolutionary algorithm. Given a candidate solution, the Gradient Subspace Approximation (GSA) aims to compute the most greedy direction by exploiting the neighborhood information of the solution. Since GSA uses information that is already calculated, its cost in terms of function evaluations can be competitive in comparison with the mechanism of the evolutionary algorithm. In this study we analyze the GSA method and its behavior as the mathematical programming technique within a memetic algorithm. A first memetic algorithm obtained by coupling Differential Evolution (DE) and GSA has shown promising results.
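The core idea as we read it can be sketched with a least-squares fit (our construction; the function names and the sampling scheme are illustrative): given already-evaluated neighbors $x_i$ of $x$, the gradient estimate $g$ minimizes $\sum_i\big(f(x_i)-f(x)-g^T(x_i-x)\big)^2$, so no extra function evaluations beyond those the evolutionary algorithm already paid for are needed. The negated estimate then serves as a greedy (descent) direction.

```python
import numpy as np

def gsa_gradient(x, fx, neighbors, f_neighbors):
    """Least-squares gradient estimate from previously evaluated neighbors."""
    A = np.array([xi - x for xi in neighbors])     # rows: x_i - x
    b = np.array([fi - fx for fi in f_neighbors])  # f(x_i) - f(x)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# Toy check on f(x) = x0^2 + x1^2 at x = (1, 2); the true gradient is (2, 4).
f = lambda x: float(x[0] ** 2 + x[1] ** 2)
x = np.array([1.0, 2.0])
h = 1e-3
nbrs = [x + np.array(d) for d in [(h, 0), (0, h), (h, h), (-h, h)]]
g_est = gsa_gradient(x, f(x), nbrs, [f(p) for p in nbrs])
descent = -g_est
```

In the memetic setting, the neighbors would be nearby population members or archived evaluations rather than fresh samples.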

In finance, GARCH models are widely used, for example to identify trends, risk, and variation in the markets. Nevertheless, the choice of the optimal parameters is a hard task; because of this, in most cases methods fail and the solutions are not good enough. The optimization algorithm presented in this work is a memetic algorithm that couples two techniques: differential evolution and mathematical programming. The first is used to find an initial value, and mathematical programming is used to guarantee convergence. We present numerical results on a benchmark related to time series and GARCH models.
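To make the setting concrete, here is a self-contained sketch of the global stage (entirely our construction, not the authors' benchmark): a GARCH(1,1) Gaussian log-likelihood and a minimal differential evolution loop searching for $(\omega,\alpha,\beta)$. In the memetic scheme described above, the best individual found here would be handed to a mathematical programming refinement step.

```python
import math
import random

def neg_loglik(theta, returns):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return float("inf")  # outside the stationarity region
    s2 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2 * math.pi * s2) + r * r / s2)
        s2 = omega + alpha * r * r + beta * s2
    return nll

def de_minimize(f, bounds, pop=20, gens=60, Fw=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin with bound clipping."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [f(x) for x in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = P[a][j] + Fw * (P[b][j] - P[c][j])
                else:
                    v = P[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            ft = f(trial)
            if ft <= fit[i]:
                P[i], fit[i] = trial, ft
    best = min(range(pop), key=fit.__getitem__)
    return P[best], fit[best]

# Simulate a short series from known parameters and fit it back.
rng = random.Random(0)
true = (0.10, 0.10, 0.80)
s2, rets = true[0] / (1 - true[1] - true[2]), []
for _ in range(400):
    r = math.sqrt(s2) * rng.gauss(0.0, 1.0)
    rets.append(r)
    s2 = true[0] + true[1] * r * r + true[2] * s2
theta_hat, nll_hat = de_minimize(lambda th: neg_loglik(th, rets),
                                 [(1e-6, 1.0), (0.0, 0.5), (0.0, 0.99)])
```

The stationarity constraint $\alpha+\beta<1$ is enforced by an infinite penalty, one of the simplest ways to keep the population in the valid region.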

Generally, the multiobjective optimization problem involves finding the set of best solutions among mathematically incomparable compromises, i.e., vectors whose image cannot be improved by another vector in all components, though it could be improved in at least one of them. However, there are real-world problems in which a decision maker has knowledge about the problem, or wants to obtain optimal solutions with certain characteristics instead of all solutions. Reference point methods are useful for this scenario: the idea is to get the solution closest to a given vector, usually infeasible, which represents a guess of the decision maker. Nevertheless, the scenario in which the reference point changes over time has been poorly treated. Recently, continuation methods have been used to solve multiobjective optimization problems; these methods have the advantage that they move along the Pareto front, as in Hillermeier's method and the Pareto Tracer. The method proposed here, called Pareto Explorer, is a continuation method that takes the preferences of the decision maker into account when computing the predictor, which makes it also an interactive method. In addition, if we replace the corrector by an achievement function, the method can be used to solve dynamic reference point problems. We present results on some benchmark models, as well as on a 14-objective problem that arises in the design of a laundry system.
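The reference point mechanism can be illustrated with the classical achievement scalarizing function of Wierzbicki (our sketch; the toy front parameterization, weights, and reference point are assumptions, not from the abstract): minimizing $\max_i w_i\,(f_i(x)-z_i)$, here with a small augmentation term, yields the Pareto-optimal point closest to the reference $z$ in a weighted Chebyshev sense, and re-minimizing after each change of $z$ is the naive counterpart of tracking a dynamic reference point.

```python
def asf(fx, z, w, rho=1e-4):
    """Augmented achievement scalarizing function (Wierzbicki)."""
    terms = [wi * (fi - zi) for fi, zi, wi in zip(fx, z, w)]
    return max(terms) + rho * sum(terms)

# Toy bi-objective problem parameterized along its Pareto front (assumption):
def F(t):
    return (t * t, (t - 1.0) * (t - 1.0))

def closest_to_reference(z, w, n=2001):
    """Grid-search minimizer of the ASF over the front parameter t in [0, 1]."""
    ts = [i / (n - 1) for i in range(n)]
    return min(ts, key=lambda t: asf(F(t), z, w))

# A symmetric infeasible reference point picks the middle of the front.
t_star = closest_to_reference(z=(0.1, 0.1), w=(1.0, 1.0))
```

In Pareto Explorer the ASF would instead replace the corrector of the continuation method, so the tracked point follows the moving reference along the front rather than being recomputed from scratch.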