Specialist predictors of expected performance for Genetic Programming classifiers

10:45 am - 11:10 am, Thursday 24

Track

Genetic Programming

Determining problem difficulty has been an important issue in evolutionary computation for several years [2]. From an algorithmic perspective, problem difficulty can be related to the total runtime (or memory) required to find an optimal solution: difficult problems require exponential runtime, while easy problems can be solved in linear time. Restricting our overview of previous work to GP research, this paper focuses on two groups of methods used to measure problem difficulty in GP: Evolvability Indicators (EIs) and Predictors of Expected Performance (PEPs). Broadly speaking, EIs attempt to provide a measure related to the underlying difficulty of the search process, whereas a PEP provides a prediction of the expected performance of a GP search, which may or may not coincide with the difficulty of the search. Currently, the development of PEPs represents a minority of the research devoted to problem difficulty in GP, with only a few recent proposals. In particular, Graff and Poli [1] have studied the development of such predictive models for symbolic regression, Boolean and time-series problems.

This work is an extension of our previous work [3], where PEPs were first proposed for a GP algorithm applied to supervised classification. The general process relies on posing a supervised learning task, where each problem instance is described by a domain-specific feature vector and the performance of the GP system on a set of synthetic problems is used as the ground truth to learn a PEP. The learned model is then used to predict the performance on unseen problem instances. Of particular interest is the fact that learning can be done on an arbitrarily generated set of synthetic problems, and testing can then be performed on real-world datasets. Our goal is to improve performance prediction on real-world problems by using an ensemble of several PEP models, each one referred to as a Specialist PEP (SPEP).
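The following Python sketch illustrates this meta-learning setup. The feature values, dataset sizes, and choice of regressor are all placeholders, not the configuration used in our experiments.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical synthetic training set: each row is a domain-specific
# feature vector describing one classification problem; the target is the
# classification error the GP system achieved on that problem.
X_synthetic = rng.random((200, 5))       # 200 synthetic problems, 5 features
ce_ground_truth = rng.random(200)        # measured GP performance (placeholder)

# Learn the PEP: a regression model from problem features to performance.
pep = RandomForestRegressor(n_estimators=100, random_state=0)
pep.fit(X_synthetic, ce_ground_truth)

# Predict the expected classification error on unseen (e.g., real-world)
# problems described by the same feature extraction.
X_unseen = rng.random((10, 5))
pce = pep.predict(X_unseen)
print(pce[:3])
```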

The proposal, depicted in Figure 1, is an extension of the basic PEP approach.

Given a classification problem, we proceed as follows. First, a preprocessing step is applied to simplify the feature extraction process and deal with multidimensional representations. Second, feature extraction is performed to obtain an abstraction of the problem. Third, each problem is classified into a specific group using its corresponding feature vector \(\beta\). Fourth, an ensemble of SPEP models takes the extracted features as input and produces as output the predicted classification error (PCE) for the given problem; each group is associated with a particular SPEP in the ensemble, so if a problem is classified into the \(i\)-th group, then the \(i\)-th SPEP in the ensemble is used to compute the performance prediction, as sketched below.
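A minimal sketch of the ensemble SPEP routing step, under the same hypothetical setup as above; the group classifier, per-group regressors, and data are placeholders rather than the exact models used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(1)

n_groups = 2
X = rng.random((200, 5))                  # feature vectors beta (placeholder)
groups = rng.integers(0, n_groups, 200)   # group label of each problem
ce = rng.random(200)                      # ground-truth CE per problem

# Third step: a classifier that assigns a problem to one of the groups.
group_clf = RandomForestClassifier(random_state=0).fit(X, groups)

# Fourth step: one specialist predictor (SPEP) per group, trained only on
# the problems belonging to that group.
speps = [
    RandomForestRegressor(random_state=0).fit(X[groups == g], ce[groups == g])
    for g in range(n_groups)
]

def predict_pce(beta):
    """Route a feature vector to the SPEP of its predicted group."""
    g = group_clf.predict(beta.reshape(1, -1))[0]
    return speps[g].predict(beta.reshape(1, -1))[0]

print(predict_pce(rng.random(5)))         # predicted classification error
```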

Focusing on the real-world problems, Figure 2 summarizes the performance of the Ensemble-2 predictors using 5 features. In particular, the left column shows plots of the ground-truth \(CE_{\mu}\) of each problem (triangles) and the Ensemble-2 prediction (PCE). These plots show three types of PCE: (1) correctly classified problems for which the appropriate SPEP was selected (CC-PCE); (2) misclassified problems for which an incorrect SPEP was selected (MC-PCE); and (3) for the misclassified problems, the oracle SPEP prediction (O-PCE), which is the PCE produced by the correct SPEP. Moreover, the second column of Figure 2 presents a scatter plot of the true \(CE_{\mu}\) against the PCE, using the same notation for the different predictions. These plots provide a graphical confirmation of the quality of the performance prediction achieved. It is important to highlight the impact of a misclassified problem (shown as a black circle) compared to the prediction on the same problem if the correct SPEP had been chosen (O-PCE). For all problems for which the correct SPEP was chosen, the PCE is highly correlated with the ground truth, with only marginal differences in most cases. Finally, we built the SPEP models with a synthetic benchmark and tested them on real-world classification problems, obtaining good results and suggesting a promising way to solve the performance prediction problem.
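As an illustration, the correlation summarized by these scatter plots can be computed as follows; the arrays below are placeholders, not our experimental values.

```python
import numpy as np

ce_true = np.array([0.12, 0.30, 0.25, 0.41, 0.18])   # ground-truth CE (placeholder)
pce = np.array([0.14, 0.28, 0.27, 0.38, 0.20])       # SPEP predictions (placeholder)

# Pearson correlation between predicted and true classification error.
r = np.corrcoef(ce_true, pce)[0, 1]
print(f"Pearson correlation: {r:.3f}")
```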

References

[1] Mario Graff and Riccardo Poli. Practical model of genetic programming's performance on rational symbolic regression problems. In EuroGP, pages 122–133, 2008.

[2] Kent McClymont, David Walker, and Max Dupenois. The lay of the land: a brief survey of problem understanding. In Proceedings of the fourteenth international conference on Genetic and evolutionary computation conference companion, GECCO Companion '12, pages 425–432, New York, NY, USA, 2012. ACM.

[3] Leonardo Trujillo, Yuliana Martínez, Edgar Galván-López, and Pierrick Legrand. Predicting problem difficulty for genetic programming applied to data classification. In Proceedings of the 13th annual conference on Genetic and evolutionary computation, GECCO '11, pages 1355–1362, New York, NY, USA, 2011. ACM.

Speaker:
Yuliana Martínez