Measures of competence and Dynamic Selection techniques

Classifier competence expresses how much we trust an expert for a given classification task (Cruz et al., 2018; Britto et al., 2014). The level of competence of each base classifier is measured with respect to each new test instance, and only the classifiers that reach a certain level of competence for that instance are selected to compose the ensemble (DES) or to act as the single classifier (DCS). This requires defining a criterion with which to measure the level of competence of each classifier. For dynamic selection techniques, this criterion is applied locally, around each test pattern (Cruz et al., 2018).

There are two types of dynamic selection criteria: individual-based measures of competence (Ranking, Local Accuracy, Oracle, Probabilistic, and Behavior) and group-based measures (Diversity, Ambiguity, and Data Handling) (Britto et al., 2014; Cruz et al., 2015b; Cruz, 2016; Cruz et al., 2018).

For the definitions that follow, θq = {x1, …, xK} represents the set of patterns belonging to the region of competence of the unseen sample xq, K is the size of the RoC, ci is a base classifier from the pool C, wk is the class label of xk, and δi,q is the level of competence of the classifier ci for classifying the instance xq.
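
To make the notation concrete, the region of competence is typically obtained through a k-NN search over DSEL. The following minimal sketch (in Python, assuming scikit-learn is available; the names dsel_X, dsel_y, and region_of_competence are illustrative) shows one way this could be done:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def region_of_competence(x_query, dsel_X, dsel_y, k=7):
        """Return the indices and labels of the K nearest DSEL samples (theta_q)."""
        nn = NearestNeighbors(n_neighbors=k).fit(dsel_X)
        _, idx = nn.kneighbors(np.atleast_2d(x_query))  # distances are unused here
        idx = idx[0]                                    # flatten the single-query result
        return idx, dsel_y[idx]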

Individual-based measures of competence

Ranking
Several Dynamic Selection techniques have been developed according to this taxonomy. Among the individual-based measures, Classifier Rank (Sabourin et al., 1993; Cruz et al., 2018; Britto et al., 2014) is considered one of the first approaches proposed to estimate the base classifiers’ competence level in DS. The rank of a classifier ci is obtained by counting the number of consecutive correctly classified samples in the region of competence (Cruz et al., 2018). The classifier that correctly classifies the most consecutive samples from the region of competence is considered to have the highest competence level (Cruz, 2016).
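
As an illustration, the rank of a classifier could be computed as the length of its streak of correct predictions over the RoC, ordered from the nearest to the farthest neighbor. This is only a sketch under that ordering assumption; classifier_rank and the data names are hypothetical:

    def classifier_rank(clf, roc_idx, dsel_X, dsel_y):
        """Count consecutive correctly classified RoC samples, nearest first."""
        rank = 0
        for i in roc_idx:
            if clf.predict(dsel_X[i:i+1])[0] == dsel_y[i]:
                rank += 1
            else:
                break  # the streak of consecutive correct predictions ends here
        return rank

The classifier with the highest rank is then selected, e.g. max(pool, key=lambda c: classifier_rank(c, roc_idx, dsel_X, dsel_y)).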

Accuracy
Overall local accuracy (OLA) estimates each individual classifier’s accuracy in local regions of the feature space surrounding a test sample, and then uses the decision of the most locally accurate classifier (Woods et al., 1997; Britto et al., 2014; Cruz et al., 2018). The level of competence δi,q of a base classifier ci is computed as the percentage of samples in the region of competence that are correctly classified.
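
In code, this competence estimate amounts to a local accuracy over the RoC samples. A minimal sketch, with the same illustrative names as above:

    import numpy as np

    def ola_competence(clf, roc_idx, dsel_X, dsel_y):
        """Fraction of RoC samples correctly classified by clf (delta_i,q)."""
        preds = clf.predict(dsel_X[roc_idx])
        return np.mean(preds == dsel_y[roc_idx])

DCS with OLA then applies the decision of the single most locally accurate classifier for xq.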

Oracle
Ko et al. (Ko et al., 2008) moved from Dynamic Classifier Selection (DCS) to Dynamic Ensemble Selection (DES) by developing the concept of the K-Nearest Oracles (KNORA). KNORA is close to OLA, LCA, and the A Priori and A Posteriori methods in its consideration of the neighborhood of test patterns, but it is distinguished from them by its direct use of the oracle property: the training samples in the region of competence are used to find the most suitable ensemble for a given query (Ko et al., 2008). For any test data point, KNORA simply finds its K nearest neighbors in DSEL, assesses which classifiers correctly classify those neighbors, and uses them as the ensemble for classifying the given pattern (Ko et al., 2008). KNORA has several variants, described below:

KNORA-ELIMINATE (KNORA-E) The KNORA-Eliminate approach exploits the concept of the Oracle (Ko et al., 2008), which represents the performance upper bound of a classifier ensemble. Given the region of competence θq of a query xq in DSEL, only the classifiers that correctly classify all the neighborhood samples (achieving 100% accuracy, hence operating as "local oracles") are selected to build the ensemble (Ko et al., 2008). The decisions of the selected base classifiers are combined using majority voting. If no classifier perfectly classifies all the neighborhood samples, the method reduces the size of θq by eliminating the samples that are most distant from xq until at least one classifier is selected.
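
A minimal sketch of this selection loop, assuming roc_idx is ordered from the nearest to the farthest neighbor of xq (all names are illustrative):

    def knora_eliminate(pool, roc_idx, dsel_X, dsel_y):
        """Select the base classifiers that act as local oracles on the RoC."""
        idx = list(roc_idx)
        while idx:
            selected = [clf for clf in pool
                        if all(clf.predict(dsel_X[i:i+1])[0] == dsel_y[i]
                               for i in idx)]
            if selected:       # at least one local oracle found
                return selected
            idx.pop()          # drop the farthest sample and retry
        return list(pool)      # fallback choice in this sketch: keep the whole pool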

KNORA-E has a weighted alternative called "KNORA-E-W", in which the votes of the selected classifiers are weighted according to the Euclidean distance between the samples in DSEL and the test query (Ko et al., 2008).

KNORA-UNION (KNORA-U) In this scheme, all the base classifiers that correctly classify at least one sample from the neighborhood θq are selected.

The method grants one vote to a classifier ci for each sample of the neighborhood θq it correctly classifies. This means that a base classifier ci can receive more than one vote if it correctly classifies more than one sample. The votes gathered by all the classifiers are then aggregated using a majority voting rule to obtain the ensemble decision (Ko et al., 2008).
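
A minimal sketch of this voting scheme (illustrative names; x_query is assumed to be a 1-D numpy array):

    from collections import Counter

    def knora_union_predict(pool, x_query, roc_idx, dsel_X, dsel_y):
        """Aggregate the votes of classifiers correct on at least one RoC sample."""
        votes = Counter()
        for clf in pool:
            # one vote per RoC sample the classifier labels correctly
            n_votes = sum(clf.predict(dsel_X[i:i+1])[0] == dsel_y[i]
                          for i in roc_idx)
            if n_votes > 0:
                pred = clf.predict(x_query.reshape(1, -1))[0]
                votes[pred] += n_votes
        return votes.most_common(1)[0][0] if votes else None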

KNORA-U has a weighted alternative called "KNORA-U-W", a weighted version of the original KNORA-U in which votes are weighted according to the Euclidean distance between the samples in DSEL and the test query (Ko et al., 2008).

Note
Recently, Oliveira et al. (Oliveira et al., 2018) proposed two new variants of the KNORA-E DES technique. KNORA-B, where B stands for borderline, is a DES technique adapted from KNORA-E. Unlike the original KNORA-E, it reduces the region of competence while keeping at least one sample from each class present in the original region of competence. KNORA-BI is a variant of KNORA-B, where I stands for imbalanced datasets; it reduces the region of competence by removing only samples belonging to the majority class, leaving the minority class untouched.
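
The two region-reduction rules can be sketched as follows. This is only an illustration of the rules described above, not the full algorithms of Oliveira et al. (2018); all names are hypothetical and idx is assumed to be a Python list ordered from nearest to farthest:

    from collections import Counter

    def reduce_roc_knora_b(idx, dsel_y):
        """KNORA-B: drop the farthest RoC sample whose class stays represented."""
        counts = Counter(dsel_y[i] for i in idx)
        for pos in range(len(idx) - 1, -1, -1):   # scan from the farthest sample
            if counts[dsel_y[idx[pos]]] > 1:      # its class keeps >= 1 sample
                return idx[:pos] + idx[pos + 1:]
        return idx                                # nothing can be removed

    def reduce_roc_knora_bi(idx, dsel_y, majority_class):
        """KNORA-BI: drop the farthest RoC sample of the majority class."""
        for pos in range(len(idx) - 1, -1, -1):
            if dsel_y[idx[pos]] == majority_class:
                return idx[:pos] + idx[pos + 1:]
        return idx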

Table of Contents

INTRODUCTION
CHAPTER 1 RELATED WORK
1.1 The Oracle
1.1.1 Dynamic Selection
1.1.2 Region of Competence definition
1.1.3 Measures of competence and Dynamic Selection techniques
1.1.3.1 Individual-based measures of competence
1.1.3.2 Group-based measures of competence
1.1.4 Dynamic Selection Versus K-NN
1.1.5 Dynamic Selection in the Indecision Regions
1.2 Ensemble Generation methods
1.2.1 The wisdom of crowds
1.2.2 Bagging
1.2.3 Random Subspaces Method
1.2.4 Boosting
1.2.5 Oracle-based generation method
1.3 Summary, discussion and a brief introduction to the proposed system
CHAPTER 2 TOWARDS LOCAL POOL GENERATION FOR DYNAMIC SELECTION
2.1 Basic concepts
2.1.1 Region of Competence in the context of this study
2.1.2 Indecision Region
2.1.3 frienemies samples
2.1.4 Instance Hardness
2.1.4.1 k Disagreeing Neighbors (kDN), an instance hardness measure
2.2 The proposed Local Pool Generation for Dynamic Selection System
2.2.1 How does the proposed local pool generation work?
2.2.2 A pairwise separation between frienemies
2.2.3 Strategies for local selection of classifiers
2.2.3.1 Strategy 1: DCS as a guide for local selection, no errors allowed
2.2.3.2 Strategy 2: DCS as a guide for local selection
2.2.3.3 Strategy 3: KNORA-E as a guide for local selection
2.2.3.4 Strategy 4: frienemies distinction as a guide for local selection, one classifier allowed
2.2.3.5 Strategy 5: frienemies distinction as a guide for local selection, multiple classifiers allowed
2.3 The generalization phase
2.4 Case study: The P2 problem
2.4.1 Local Pool Generation for P2
2.4.2 Case study summary
2.5 Discussion
CHAPTER 3 EXPERIMENTS AND RESULTS
3.1 Experimental protocol
3.2 Results analyses and discussions
3.2.1 Comparison between the proposed local pool generation strategies and the state-of-the-art generation methods
3.3 General discussion
CONCLUSION
