TITLE

Learning Tract Variables with Distal Supervised Learning Model

AUTHOR(S)
Ying Chen; Shaobai Zhang
PUB. DATE
February 2013
SOURCE
Journal of Networks;Feb2013, Vol. 8 Issue 2, p397
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
This paper compares the performance of tract variable (TV) estimation with that of pellet trajectory estimation using trajectory mixture density networks (TMDN); the results indicate that TVs can be estimated more accurately than pellet trajectories. We used eight tract variables as articulatory information to model speech dynamics and parameterized the speech signal as mel-frequency cepstral coefficients (MFCCs) and acoustic parameters (APs); we then analyzed TV estimation using the distal supervised learning (DSL) model. For the DSL model, we first analyzed its theoretical foundation and then proposed a global optimization approach for its inversion model. The experimental results show that distal supervised learning estimates tract variables accurately, so it can play an important role in speech inversion and in gesture-based ASR architectures.
ACCESSION #
86708157
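The distal supervised learning scheme summarized in the abstract can be sketched in two stages: a forward model is first fit from articulatory tract variables to acoustics, and an inverse (acoustics-to-TV) model is then trained on the *distal* error, i.e. its output is pushed through the frozen forward model and the acoustic reconstruction error is propagated back to the inverse weights. The following is a minimal linear sketch on synthetic data; the dimensions (8 TVs, 13-dimensional acoustic frames), the linear models, and all variable names are illustrative assumptions, not the paper's actual networks or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 8 tract variables (TVs) per frame and 13-dimensional
# acoustic vectors (e.g. MFCCs).  The "true" articulatory-to-acoustic map
# W_true is hidden from the learner and used only to generate data.
n, tv_dim, ac_dim = 500, 8, 13
W_true = rng.normal(size=(tv_dim, ac_dim)) / np.sqrt(tv_dim)
tv = rng.normal(size=(n, tv_dim))                        # articulatory states
ac = tv @ W_true + 0.01 * rng.normal(size=(n, ac_dim))   # observed acoustics

# Stage 1: fit the forward model (TVs -> acoustics) by least squares.
W_fwd, *_ = np.linalg.lstsq(tv, ac, rcond=None)

# Stage 2: train the inverse model (acoustics -> TVs) on the distal error.
# The inverse model never sees TV targets directly; its proposed TVs are
# mapped through the frozen forward model, and the gradient of the acoustic
# reconstruction error is propagated back to the inverse weights.
W_inv = np.zeros((ac_dim, tv_dim))
lr = 1e-3
for _ in range(2000):
    tv_hat = ac @ W_inv              # proposed tract variables
    ac_hat = tv_hat @ W_fwd          # acoustics predicted via forward model
    grad = ac.T @ (ac_hat - ac) @ W_fwd.T / n   # distal-error gradient
    W_inv -= lr * grad

recon_err = np.mean((ac @ W_inv @ W_fwd - ac) ** 2)
print("acoustic reconstruction MSE:", recon_err)
```

The key design point, following the distal-supervised-learning idea, is that the inverse model is scored in acoustic space through the forward model rather than against articulatory targets, which is what makes the scheme usable when direct articulatory supervision is unavailable.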


Related Articles

  • Confidence Estimation for Graph-based Semi-supervised Learning. Tao Guo; Guiyang Li // Journal of Software (1796217X);Jun2012, Vol. 7 Issue 6, p1307 

    To select unlabeled example effectively and reduce classification error, confidence estimation for graph-based semi-supervised learning (CEGSL) is proposed. This algorithm combines graph-based semi-supervised learning with collaboration-training. It makes use of structure information of sample...

  • Systematic Approach for Detecting Text in Images Using Supervised Learning. Minh Hieu Nguyen; GueeSang Lee // International Journal of Contents;Jun2013, Vol. 9 Issue 2, p8 

    Locating text data in images automatically has been a challenging task. In this approach, we build a three stage system for text detection purpose. This system utilizes tensor voting and Completed Local Binary Pattern (CLBP) to classify text and non-text regions. While tensor voting generates...

  • Parse reranking for domain-adaptative relation extraction. Xu, Feiyu; Li, Hong; Zhang, Yi; Uszkoreit, Hans; Krause, Sebastian // Journal of Logic & Computation;Apr2014, Vol. 24 Issue 2, p413 

    The article demonstrates how generic parsers in a minimally supervised information extraction framework can be adapted to a given task and domain for relation extraction (RE). For the experiments, two parsers that deliver n-best readings are included: (1) a generic deep-linguistic parser (PET)...

  • Information-Theoretic Inference of Large Transcriptional Regulatory Networks. Meyer, Patrick E.; Kontos, Kevin; Lafitte, Frederic; Bontempi, Gianluca // EURASIP Journal on Bioinformatics & Systems Biology;2007, p1 

    The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR), an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists in...

  • Semi-supervised learning with density-ratio estimation. Kawakita, Masanori; Kanamori, Takafumi // Machine Learning;May2013, Vol. 91 Issue 2, p189 

    In this paper we study statistical properties of semi-supervised learning, which is considered to be an important problem in the field of machine learning. In standard supervised learning only labeled data is observed, and classification and regression problems are formalized as supervised...

  • Why Does Unsupervised Pre-training Help Deep Learning? Erhan, Dumitru; Bengio, Yoshua; Courville, Aaron; Manzagol, Pierre-Antoine; Vincent, Pascal; Bengio, Samy // Journal of Machine Learning Research;2/1/2010, Vol. 11 Issue 2, p625 

    Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks...

  • Progressive Transductive Minimum Mahalanobis Enclosing Ellipsoid Machine. Zhenxia Xue; Guolin Yu // Journal of Convergence Information Technology;Jan2013, Vol. 8 Issue 1, p502 

    In many machine learning problems, a large amount of data is available but only a few of them can be labelled easily. This provides a research branch to effectively combine unlabeled and labelled data to infer the labels of unlabeled ones, that is, to develop transductive learning in...

  • A Novel Multiple Instance Learning Method Based on Extreme Learning Machine. Wang, Jie; Cai, Liangjian; Peng, Jinzhu; Jia, Yuheng // Computational Intelligence & Neuroscience;2/3/2015, Vol. 2015, p1 

    Since real-world data sets usually contain large instances, it is meaningful to develop efficient and effective multiple instance learning (MIL) algorithm. As a learning paradigm, MIL is different from traditional supervised learning that handles the classification of bags comprising unlabeled...

  • Bayesian approach, theory of empirical risk minimization. Comparative analysis. Sergienko, I. V.; Gupal, A. M.; Vagis, A. A. // Cybernetics & Systems Analysis;Nov2008, Vol. 44 Issue 6, p822 

    Error estimates of empirical-risk minimization methods for an infinite number of decision rules are analyzed. Optimal deterministic estimates of the error of the Bayesian classification procedure for independent features are obtained based on averaging over a great number of training samples as...
