Parameter-Free Optimization Algorithm for GARCH Models

Published in: Wednesday 23

Track: Set Oriented Numerics

In finance, GARCH models are widely used, for example to identify trends, risk, and variation in the markets. Nevertheless, choosing the optimal parameters is a hard task, and in many cases standard methods fail or deliver solutions that are not good enough. The optimization algorithm presented in this work is a memetic algorithm that couples two techniques: differential evolution and mathematical programming. The former is used to find an initial value, while the mathematical programming step is used to guarantee convergence. We present numerical results on a benchmark of time series and GARCH models.
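As a rough illustration of this two-stage idea, the sketch below fits a hypothetical GARCH(1,1) model by Gaussian maximum likelihood: differential evolution supplies a starting point, and a gradient-based solver then refines it. The model order, bounds, synthetic data, and SciPy solver choices are assumptions made for the example, not the authors' implementation.

```python
# A minimal sketch of the two-stage idea described above, assuming a GARCH(1,1)
# model fitted by Gaussian maximum likelihood; synthetic data, bounds, and the
# SciPy solvers are illustrative choices, not the authors' implementation.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(500)  # placeholder return series

def garch11_neg_loglik(theta, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) volatility model."""
    omega, alpha, beta = theta
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

bounds = [(1e-8, 1e-2), (0.0, 0.5), (0.0, 0.99)]  # omega, alpha, beta

# Stage 1: differential evolution provides a reasonable starting point.
de_result = differential_evolution(garch11_neg_loglik, bounds,
                                   args=(returns,), maxiter=50, seed=1)

# Stage 2: a gradient-based mathematical programming step refines it.
refined = minimize(garch11_neg_loglik, de_result.x, args=(returns,),
                   method="L-BFGS-B", bounds=bounds)

print("DE estimate:     ", de_result.x)
print("refined estimate:", refined.x)
```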


Pareto Explorer for the Local Exploration of Many-Objective Optimization Problems

Published in: Wednesday 23

Track: Set Oriented Numerics

Generally, a multiobjective optimization problem involves finding a set of the best solutions among mathematically incomparable compromises, i.e., vectors whose image cannot be improved in all components by any other vector, although it may be improved in some of them. However, there are real-world problems in which the decision maker has knowledge about the problem or wants to obtain optimal solutions with certain characteristics instead of the entire solution set. Reference point methods are useful in this scenario: the idea is to find the solution closest to a given vector, usually infeasible, which represents a guess of the decision maker. Nevertheless, the scenario in which the reference point changes over time has received little attention. Recently, continuation methods such as Hillermeier's method and the Pareto Tracer have been used to solve multiobjective optimization problems; these methods have the advantage that they move along the Pareto front. The method proposed here, called Pareto Explorer, is a continuation method that takes the preferences of the decision maker into account when computing the predictor, which also makes it an interactive method. In addition, if the corrector is replaced by an achievement function, the method can be used to solve dynamic reference point problems. We present results on some benchmark models, as well as on a 14-objective problem that arises in the design of a laundry system.
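The sketch below illustrates the general predictor-corrector flavor of such preference-driven continuation on a toy bi-objective problem: the reference point is shifted in a direction expressing the decision maker's preference (predictor), and an achievement scalarizing function is minimized to return to the Pareto front (corrector). The problem, weights, and step size are assumptions for illustration; this is not the Pareto Explorer implementation.

```python
# A minimal predictor-corrector sketch in the spirit described above (not the
# Pareto Explorer implementation): the reference point is moved in a direction
# expressing the decision maker's preference, and an achievement scalarizing
# function serves as corrector. Problem, weights, and step size are assumptions.
import numpy as np
from scipy.optimize import minimize

a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
F = lambda x: np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])  # toy bi-objective problem

def asf(x, z, w=np.ones(2), rho=1e-4):
    """Achievement scalarizing function with reference point z and weights w."""
    d = w * (F(x) - z)
    return np.max(d) + rho * np.sum(d)

x = np.array([0.2, 0.2])        # starting point near the Pareto set
d_pref = np.array([1.0, -1.0])  # DM preference: accept worse f1 in exchange for better f2
step = 0.3

path = [x.copy()]
for _ in range(5):
    z = F(x) + step * d_pref                                 # predictor: shift the reference point
    x = minimize(asf, x, args=(z,), method="Nelder-Mead").x  # corrector: project back to the front
    path.append(x.copy())

print(np.round(np.asarray(path), 3))
```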


The Gradient Subspace Approximation for Scalar Optimization

Published in: Wednesday 23

Track: Set Oriented Numerics

Nowadays, evolutionary algorithms are widely used to approximate the solutions of optimization problems. So-called memetic algorithms hybridize evolutionary algorithms with mathematical programming techniques in order to improve the convergence rate. However, a drawback of these methods is that, in some cases, the cost (in terms of function calls) of the mathematical programming techniques is higher than the cost of the evolutionary mechanism. Furthermore, in some optimization problems the explicit gradient information that some of these techniques require is missing. In order to reduce the cost of the mathematical programming techniques, we propose a method that exploits the data already computed by the evolutionary algorithm. Given a candidate solution, the Gradient Subspace Approximation (GSA) aims to compute the most promising search direction by exploiting the neighborhood information of that solution. Since GSA uses information that has already been calculated, its cost in terms of function evaluations can be competitive with that of the evolutionary mechanism. In this study we analyze the GSA method and its behavior as the mathematical programming technique within a memetic algorithm. A first memetic algorithm obtained by coupling Differential Evolution (DE) and GSA has shown promising results.
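One way to realize this idea, sketched below under illustrative assumptions (test function, neighborhood samples), is to estimate the gradient by a least-squares fit over function values that the evolutionary algorithm has already computed in the neighborhood of a candidate solution, so that no additional function evaluations are needed.

```python
# A rough sketch of the core idea behind GSA as described above: estimate a
# promising descent direction from function values the evolutionary algorithm
# has already evaluated near a candidate solution. The least-squares
# formulation and the test function are illustrative assumptions.
import numpy as np

def approximate_gradient(x, f_x, neighbors, f_neighbors):
    """Least-squares gradient estimate from already-evaluated neighbors.

    Solves (x_i - x) @ g ~= f(x_i) - f(x) for g, so no extra function
    evaluations are required beyond those stored in the population/archive.
    """
    D = np.asarray(neighbors) - x        # directions to the neighbors
    df = np.asarray(f_neighbors) - f_x   # observed function differences
    g, *_ = np.linalg.lstsq(D, df, rcond=None)
    return g

# Toy usage on a quadratic; in a memetic algorithm, the neighbors would be
# population members close to x.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
rng = np.random.default_rng(0)
neighbors = x + 0.1 * rng.standard_normal((10, 5))
g = approximate_gradient(x, f(x), neighbors, [f(n) for n in neighbors])

print("true gradient:", 2 * (x - 1.0))
print("GSA estimate :", np.round(g, 2))
```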


Archivers for the Set of Approximate Solutions in Multi-objective Optimization

Published in: Wednesday 23

Track: Set Oriented Numerics

In a variety of applications in industry and finance one is faced with the problem that several objectives have to be optimized concurrently, leading to a multi-objective optimization problem (MOP). A common goal for such problems is to identify the set of optimal solutions (the so-called Pareto set, denoted by \(P_Q\)) and its image in objective space, the Pareto front. Here, optimality is typically defined via the concept of dominance. Moreover, in certain situations it may be beneficial for the decision maker (DM) to consider, in addition to the optimal solutions, also nearly optimal ones, which are alternatively called approximate solutions or \(\epsilon\)-efficient solutions. The reason is that this may increase the number of valid options for the current setting (note that if two points \(x\) and \(y\) are near each other in objective space, i.e., if \(F(x)\approx F(y)\), this does not have to hold in parameter space; in fact, \(x\) and \(y\) can differ significantly, depending on properties of \(F\) such as (quasi-) periodicities or symmetries).

All of the works dealing with approximate solutions in multi-objective optimization are based on the concept of \(\epsilon\)-dominance, which was introduced in [3, 9]. \(\epsilon\)-dominance, or \(\epsilon\)-efficiency, has been studied and used by many researchers, e.g., to allow (or tolerate) nearly optimal solutions [3, 9], to approximate the set of optimal solutions \(P_Q\) [4], or to discretize this set [2, 7, 8]. \(\epsilon\)-efficient solutions have also been used to tackle a variety of real-world problems, including portfolio selection [10], a location problem [1], and a minimal cost flow problem [4].
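For orientation, one common formalization of these notions, following [5, 6], reads as follows (the precise variant used in this work may differ):

```latex
% One common formalization of \epsilon-dominance and of the set of approximate
% solutions (following [5, 6]; the exact variant used in this work may differ).
% For \epsilon = (\epsilon_1, \ldots, \epsilon_k) with \epsilon_i > 0:
\[
  x \prec_{\epsilon} y \;:\Longleftrightarrow\;
  f_i(x) - \epsilon_i \le f_i(y) \;\; \forall i = 1,\ldots,k
  \quad \text{and} \quad
  f_j(x) - \epsilon_j < f_j(y) \;\; \text{for at least one } j,
\]
\[
  P_{Q,\epsilon} := \{\, x \in Q : \not\exists\, y \in Q \text{ such that } y \prec_{-\epsilon} x \,\}.
\]
```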

To tap the full potential of additionally considering the approximate solutions \(P_{Q,\epsilon}\) over the 'sole' consideration of the optimal solutions, it is necessary to maintain a representation of the entire set \(P_{Q,\epsilon}\). The challenge is that, for a problem with \(n\) parameters and \(k\) objectives, the set \(P_{Q,\epsilon}\) forms an \(n\)-dimensional object, while \(P_Q\) is, under some mild regularity assumptions on \(F\), 'only' \((k-1)\)-dimensional (\(n \gg k\) can be assumed). Hence, for the effective extension from \(P_Q\) to \(P_{Q,\epsilon}\) as the basis for the decision-making process, a suitable discretization of the latter set is essential.

In this work, we design and investigate archiving strategies aiming at the approximation of this set. We show that the first archiving strategy converges, under certain assumptions on the generator, in the probabilistic sense toward \(P_{Q,\epsilon}\). We further present and investigate possible discretization strategies to maintain a finite-size representation of \(P_{Q,\epsilon}\) in the limit, in both parameter and objective space. We also give bounds on the approximation quality and on the cardinality of the limit solution set. Finally, we perform a comparative study on some test problems in order to visualize the effect of all strategies. Preliminary studies of this work can be found in [5, 6].
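As a point of reference, the sketch below shows a simplified archiver that uses \(\epsilon\)-dominance to decide which candidates to keep; the archiving strategies designed and analyzed in this work, which target the whole set \(P_{Q,\epsilon}\), are more involved. The bi-objective test problem, the choice of \(\epsilon\), and the random candidate generator are assumptions for illustration.

```python
# Simplified epsilon-dominance archiver in the spirit of the approaches cited
# above (cf. [2, 5, 6]); the update rules analyzed in this work differ.
# Objective function, epsilon, and the candidate stream are assumptions.
import numpy as np

def eps_dominates(fa, fb, eps):
    """True if fa epsilon-dominates fb (minimization):
    fa - eps <= fb in all components and < in at least one."""
    shifted = fa - eps
    return np.all(shifted <= fb) and np.any(shifted < fb)

def update_archive(archive, candidate, F, eps):
    """Insert `candidate` unless an archive member epsilon-dominates it;
    prune members that the candidate epsilon-dominates."""
    fc = F(candidate)
    if any(eps_dominates(F(a), fc, eps) for a in archive):
        return archive
    archive = [a for a in archive if not eps_dominates(fc, F(a), eps)]
    archive.append(candidate)
    return archive

# Toy usage on a bi-objective problem with randomly generated candidates.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2])
eps = np.array([0.05, 0.05])
rng = np.random.default_rng(0)

archive = []
for _ in range(1000):
    archive = update_archive(archive, rng.uniform(-1, 2, size=2), F, eps)

print("archive size:", len(archive))
```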

References

[1] R. Blanquero and E. Carrizosa. A d.c. biobjective location model. Journal of Global Optimization, 23(2):569–580, 2002.

[2] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary Computation, 10(3):263–282, 2002.

[3] P. Loridan. \(\epsilon\)-solutions in vector minimization problems. Journal of Optimization Theory and Applications, 42:265–276, 1984.

[4] G. Ruhe and B. Fruhwirt. \(\epsilon\)-optimality for bicriteria programs and its application to minimum cost flows. Computing, 44:21–34, 1990.

[5] O. Schütze, C. A. Coello Coello, and E.-G. Talbi. Approximating the \(\epsilon\)-efficient set of an MOP with stochastic search algorithms. In A. Gelbukh and A. F. Kuri Morales, editors, Mexican International Conference on Artificial Intelligence (MICAI-2007), pages 128–138. Springer-Verlag Berlin Heidelberg, 2007.

[6] O. Schütze, C. A. Coello Coello, E. Tantar, and E.-G. Talbi. Computing finite size representations of the set of approximate solutions of an MOP with stochastic search algorithms. In GECCO '08: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pages 713–720, New York, NY, USA, 2008. ACM.

[7] O. Schütze, M. Laumanns, C. A. Coello Coello, M. Dellnitz, and E.-G. Talbi. Convergence of stochastic search algorithms to finite size Pareto set approximations. Journal of Global Optimization, 41(4):559–577, 2008.

[8] O. Schütze, M. Laumanns, E. Tantar, C. A. Coello Coello, and E.-G. Talbi. Computing gap free Pareto front approximations with stochastic search algorithms. To appear in Evolutionary Computation, 2007.

[9] D. J. White. Epsilon efficiency. Journal of Optimization Theory and Applications, 49(2):319–337, 1986.

[10] D. J. White. Epsilon-dominating solutions in mean-variance portfolio analysis. European Journal of Operational Research, 105(3):457–466, 1998.


Hypervolume Newton Method as a Local Searcher for Indicator-Based Evolutionary Algorithms

Published in: Wednesday 23

Track: Set Oriented Numerics

Nowadays, solving multi-objective optimization problems (MOPs) is crucial in many application areas such as chemistry, design, manufacturing, and medicine, among others. However, solving this kind of problem is not straightforward, since several objectives have to be optimized at the same time. The solution set of a MOP is called the Pareto set, which typically forms a \((k-1)\)-dimensional surface, where \(k\) is the number of objectives to be optimized. The most common way to obtain a finite approximation of the solution set is to use deterministic techniques. Nevertheless, other techniques such as evolutionary algorithms have reported very promising results, and can even be more robust than mathematical techniques. In order to assess the finite approximation obtained by any algorithm, researchers have proposed performance indicators, which help to select the best approximation among a set of candidates according to our needs. The hypervolume is one of the most widely used indicators, since it has several desirable properties. Recently, the Hypervolume Newton method was proposed as a standalone technique able to converge the whole population toward the best hypervolume distribution. One of the highlights of this method is its quadratic convergence, which makes it a natural candidate for use within a global approach such as an evolutionary algorithm. Hybridizing indicator-based evolutionary algorithms with local search techniques is known to improve performance or at least refine the final solution. Here, we present the first integration of the Hypervolume Newton Method into an evolutionary algorithm. To do this, we first present the formulation of the hypervolume Hessian matrix, since in previous works an approximation was used. Then, we formulate the Hypervolume Newton Method algorithm and show its application to some test problems. In order to extend the applicability of the Hypervolume Newton method, we also introduce a handling technique for inequality constraints. Finally, we present numerical results of the memetic strategy against evolutionary algorithms without our local search.
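To give a flavor of such a hypervolume-driven Newton step, the sketch below maximizes the hypervolume of a tiny population on a toy bi-objective problem, approximating the gradient and Hessian by finite differences rather than by the analytic expressions derived in this work; the problem, reference point, and fallback step are illustrative assumptions.

```python
# A hedged sketch of a hypervolume Newton iteration for a tiny population: the
# hypervolume gradient and Hessian are approximated by finite differences, not
# by the analytic formulas of the work above. The bi-objective toy problem,
# reference point, and fallback step are assumptions for illustration.
import numpy as np

REF = np.array([5.0, 5.0])  # hypervolume reference point

def F(x):
    """Bi-objective toy problem with a one-dimensional decision variable."""
    return np.array([x ** 2, (x - 2.0) ** 2])

def hypervolume(pop):
    """2-D hypervolume (minimization) dominated by the population's image."""
    pts = np.array([F(x) for x in pop])
    pts = pts[np.argsort(pts[:, 0])]
    hv, worst_f2 = 0.0, REF[1]
    for f1, f2 in pts:
        if f2 < worst_f2:  # skip dominated points
            hv += (REF[0] - f1) * (worst_f2 - f2)
            worst_f2 = f2
    return hv

def num_grad_hess(fun, x, h=1e-4):
    """Central finite-difference gradient and Hessian of a scalar function."""
    n = len(x)
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n)
        ei[i] = h
        g[i] = (fun(x + ei) - fun(x - ei)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n)
            ej[j] = h
            H[i, j] = (fun(x + ei + ej) - fun(x + ei - ej)
                       - fun(x - ei + ej) + fun(x - ei - ej)) / (4 * h * h)
    return g, H

pop = np.array([0.3, 0.9, 1.7])  # concatenated decision variables of 3 individuals
for k in range(8):
    g, H = num_grad_hess(hypervolume, pop)
    try:
        step = np.linalg.solve(H, -g)  # Newton step toward a stationary point of HV
    except np.linalg.LinAlgError:
        step = 0.1 * g                 # fallback: plain hypervolume gradient ascent
    pop = pop + step
    print(k, round(hypervolume(pop), 6), np.round(pop, 4))
```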
