Please use this identifier to cite or link to this item:
http://repo.lib.jfn.ac.lk/ujrr/handle/123456789/1902
Title: | A Study on Pairwise LDA for X-vector based Speaker Recognition |
Authors: | Ahilan, K.; Sridharan, S.; Ganapathy, S.; Fookes, C. |
Issue Date: | 2019 |
Citation: | Kanagasundaram, A., Sridharan, S., Ganapathy, S., & Fookes, C. (2019). A study on pairwise LDA for x-vector based speaker recognition. Electronics Letters, 55(14), 813-816. |
Abstract: | In typical x-vector based speaker recognition systems, standard linear discriminant analysis (LDA) is used to transform the x-vector space with the aim of maximizing the between-speaker discriminant information while minimizing the within-speaker variability. For LDA, it is customary to use all the available speakers in the speaker recognition development dataset. In this study, we investigate whether it would be more beneficial to estimate the between-speaker discriminant information and the within-speaker variability using the most confusing samples and the most distant samples (from the target speaker mean), respectively, in the LDA-based channel compensation. The between-speaker variance is estimated using a pairwise approach, where the most confusing non-target speaker samples are found based on the Euclidean distance between the speaker mean and adjacent speakers’ samples. The within-speaker variance is estimated using the mean of each speaker and the furthermost samples in the speaker sessions. Experimental results demonstrate that the proposed LDA approach for an x-vector based speaker recognition system achieves over 17% relative improvement in EER over standard LDA based x-vector speaker recognition systems on the NIST 2010 coreext-coreext condition. |
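The abstract outlines how the pairwise scatter matrices are built: the between-speaker scatter pairs each speaker mean with the most confusing (nearest, by Euclidean distance) non-target samples, and the within-speaker scatter pairs each speaker mean with that speaker's furthermost sessions. Below is a minimal sketch of this idea, assuming x-vectors are given as a dict mapping speaker IDs to (n_sessions, dim) arrays; the parameter names (`k_confusing`, `k_far`, `n_dims`), the regularization term, and the generalized eigen-solver are illustrative choices, not the authors' exact recipe.

```python
import numpy as np
from scipy.linalg import eigh

def pairwise_lda(xvectors, k_confusing=10, k_far=5, n_dims=150):
    """Estimate an LDA projection from pairwise scatter matrices.

    xvectors: dict {speaker_id: (n_sessions, dim) ndarray of x-vectors}
    Returns a (dim, n_dims) projection matrix.
    """
    speakers = list(xvectors)
    means = {s: xvectors[s].mean(axis=0) for s in speakers}
    dim = next(iter(xvectors.values())).shape[1]
    Sb = np.zeros((dim, dim))
    Sw = np.zeros((dim, dim))

    for s in speakers:
        mu = means[s]

        # Between-speaker scatter: pair the speaker mean with the most
        # confusing non-target samples, i.e. those with the smallest
        # Euclidean distance to this speaker's mean.
        others = np.vstack([xvectors[t] for t in speakers if t != s])
        dist = np.linalg.norm(others - mu, axis=1)
        confusing = others[np.argsort(dist)[:k_confusing]]
        diff = confusing - mu
        Sb += diff.T @ diff

        # Within-speaker scatter: pair the speaker mean with its own
        # furthermost sessions (largest distance from the mean).
        own = xvectors[s]
        dist_own = np.linalg.norm(own - mu, axis=1)
        far = own[np.argsort(dist_own)[-k_far:]]
        diff = far - mu
        Sw += diff.T @ diff

    # LDA directions: leading generalized eigenvectors of (Sb, Sw).
    Sw += 1e-6 * np.eye(dim)            # small ridge so Sw is invertible
    vals, vecs = eigh(Sb, Sw)           # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_dims]    # keep the top n_dims directions
```

Standard LDA would instead accumulate Sb over all speaker means and Sw over all sessions; restricting both to the hardest pairs is what distinguishes the approach described in the abstract.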
URI: | http://repo.lib.jfn.ac.lk/ujrr/handle/123456789/1902 |
Appears in Collections: | Electrical & Electronic Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
A Study on Pairwise LDA for X-vector based.pdf | | 223.44 kB | Adobe PDF