A Novel Multiple Instance Learning Method Based on Extreme Learning Machine

Wang, Jie; Cai, Liangjian; Peng, Jinzhu; Jia, Yuheng
February 2015
Computational Intelligence & Neuroscience;2/3/2015, Vol. 2015, p1
Academic Journal
Since real-world data sets usually contain a large number of instances, it is important to develop efficient and effective multiple instance learning (MIL) algorithms. As a learning paradigm, MIL differs from traditional supervised learning in that it handles the classification of bags comprising unlabeled instances. In this paper, a novel efficient method based on the extreme learning machine (ELM) is proposed to address the MIL problem. First, the most qualified instance in each bag is selected through a single hidden layer feedforward network (SLFN) whose input and output weights are both initialized randomly, and this single selected instance is used to represent its bag. Second, the modified ELM model is trained on the selected instances to update the output weights. Experiments on several benchmark data sets and multiple instance regression data sets show that ELM-MIL achieves good performance; moreover, it runs several times, or even hundreds of times, faster than other similar MIL algorithms.


Related Articles

  • Manifold regularized extreme learning machine. Liu, Bing; Xia, Shi-Xiong; Meng, Fan-Rong; Zhou, Yong // Neural Computing & Applications;Feb2016, Vol. 27 Issue 2, p255 

    Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. But ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM...

  • Relationship between phase and amplitude generalization errors in complex- and real-valued feedforward neural networks. Hirose, Akira; Yoshida, Shotaro // Neural Computing & Applications;Jun2013, Vol. 22 Issue 7/8, p1357 

    We compare the generalization characteristics of complex-valued and real-valued feedforward neural networks. We assume a task of function approximation with phase shift and/or amplitude change in signals having various coherence. Experiments demonstrate that complex-valued neural networks show...

  • A Semi-Supervised Learning Algorithm Based on Modified Self-training SVM. Yun Jin; Chengwei Huang; Li Zhao // Journal of Computers;Jul2011, Vol. 6 Issue 7, p1438 

    In this paper, we first introduce some facts about semi-supervised learning and its often used methods such as generative mixture models, self-training, co-training and Transductive SVM and so on. Then we present a self-training semi-supervised SVM algorithm based on which we give out a modified...

  • Learning to Combine Bottom-Up and Top-Down Segmentation. Levin, Anat; Weiss, Yair // International Journal of Computer Vision;Jan2009, Vol. 81 Issue 1, p105 

    Bottom-up segmentation based only on low-level cues is a notoriously difficult problem. This difficulty has lead to recent top-down segmentation algorithms that are based on class-specific image information. Despite the success of top-down algorithms, they often give coarse segmentations that...

  • A general graph-based semi-supervised learning with novel class discovery. Feiping Nie; Shiming Xiang; Yun Liu; Changshui Zhang // Neural Computing & Applications;May2010, Vol. 19 Issue 4, p549 

    In this paper, we propose a general graph-based semi-supervised learning algorithm. The core idea of our algorithm is to not only achieve the goal of semi-supervised learning, but also to discover the latent novel class in the data, which may be unlabeled by the user. Based on the normalized...

  • MULTI-LABEL CLASSIFICATION USING ERROR CORRECTING OUTPUT CODES. KAJDANOWICZ, TOMASZ; KAZIENKO, PRZEMYSŁAW // International Journal of Applied Mathematics & Computer Science;Dec2012, Vol. 22 Issue 4, p829 

    A framework for multi-label classification extended by Error Correcting Output Codes (ECOCs) is introduced and empirically examined in the article. The solution assumes the base multi-label classifiers to be a noisy channel and applies ECOCs in order to recover the classification errors made by...

  • Smooth Harmonic Transductive Learning. Ying Xie; Bin Luo; Rongbin Xu; Sibao Chen // Journal of Computers;Dec2013, Vol. 8 Issue 12, p3079 

    In this paper, we present a novel semi-supervised smooth harmonic transductive learning algorithm that can get closed-form solution. Our method introduces the unlabeled class information to the learning process and tries to exploit the similar configurations shared by the label distribution of...

  • Analysis of Various Clustering and Classification Algorithms in Datamining. Valsala, Sandhia; Thomas, Bindhya; George, Jissy Ann // International Journal of Computer Science & Network Security;Nov2012, Vol. 12 Issue 11, p54 

    Clustering and classification of data is a difficult problem that is related to various fields and applications. The challenge is greater as input-space dimensions become larger and feature scales differ from each other. The term "classification" is frequently used as an algorithm for all...

  • A novel hybrid feature selection method based on rough set and improved harmony search. Inbarani, H.; Bagyamathi, M.; Azar, Ahmad // Neural Computing & Applications;Nov2015, Vol. 26 Issue 8, p1859 

    Feature selection is a process of selecting optimal features that produce the most prognostic outcome. It is one of the essential steps in knowledge discovery. The problem is that not all features are important: most of the features may be redundant, and the rest may be irrelevant and noisy. This...

