Abstract
Active learning is a generic approach for accelerating the training of classifiers so that high accuracy is achieved with a small number of training examples. In the past, simple active learning algorithms such as random learning and query learning have been proposed for designing support vector machine (SVM) classifiers. In random learning, examples are chosen at random, while in query learning the examples closest to the current separating hyperplane are chosen at each learning step. However, it has been observed that a better scheme is to use random learning in the initial stages (more exploration) and query learning in the final stages (more exploitation) of learning. Here we present two novel active SV learning algorithms that use adaptive mixtures of random and query learning. One of the proposed algorithms is inspired by online decision problems and makes a hard choice among the pure strategies at each step. The other extends this to soft choices, using a mixture of the instances recommended by the individual pure strategies. Both strategies handle the exploration-exploitation trade-off efficiently. The efficacy of the algorithms is demonstrated through experiments on benchmark datasets.
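The abstract's hard-choice scheme can be illustrated with a minimal pool-based sketch. This is not the authors' algorithm: it uses a toy two-blob dataset, a perceptron-style hinge-loss trainer as a stand-in for a full SVM solver, and a simple decaying exploration probability in place of the paper's online-decision weighting; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable pool: two Gaussian blobs (illustrative data only).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

def train_linear(Xl, yl, epochs=200, lr=0.1):
    """Stand-in for SVM training: a linear classifier fit with
    perceptron-style hinge-loss updates (not the paper's solver)."""
    w = np.zeros(Xl.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(Xl, yl):
            if yi * (xi @ w + b) < 1:  # margin violation
                w += lr * yi * xi
                b += lr * yi
    return w, b

labeled = [int(i) for i in rng.choice(len(X), 4, replace=False)]  # seed set
pool = [i for i in range(len(X)) if i not in labeled]

for t in range(20):
    w, b = train_linear(X[labeled], y[labeled])
    # Hard choice between the two pure strategies at each step:
    # explore (random learning) with a probability that decays over
    # time, exploit (query learning) otherwise -- so random picks
    # dominate early and margin-based picks dominate later.
    eps = 1.0 / (1 + t)
    if rng.random() < eps:
        pick = pool[rng.integers(len(pool))]      # random learning
    else:
        margins = np.abs(X[pool] @ w + b)         # distance proxy
        pick = pool[int(np.argmin(margins))]      # query learning
    labeled.append(pick)
    pool.remove(pick)

w, b = train_linear(X[labeled], y[labeled])
acc = np.mean(np.sign(X @ w + b) == y)
```

The soft-choice variant described in the abstract would instead sample from a weighted mixture of the candidates proposed by both strategies rather than committing to one of them per step.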
Original language | English |
---|---|
Pages | 38-42 |
Number of pages | 5 |
DOIs | |
State | Published - 2005 |
Externally published | Yes |
Event | 20th Annual ACM Symposium on Applied Computing - Santa Fe, NM, United States |
Duration | 13 Mar 2005 → 17 Mar 2005 |
Conference
Conference | 20th Annual ACM Symposium on Applied Computing |
---|---|
Country/Territory | United States |
City | Santa Fe, NM |
Period | 13/03/05 → 17/03/05 |
Keywords
- Multi-Armed Bandit Problem
- Pool Based Active Learning
- SVM
- Stochastic scheduling