Feature Analysis of Marginalized Stacked Denoising Autoencoder for Unsupervised Domain Adaptation.
IEEE Trans Neural Netw Learn Syst. 2018 Sep 27.
Authors: Wei P, Ke Y, Goh CK
Marginalized stacked denoising autoencoder (mSDA) has recently emerged as an effective technique for domain adaptation. In this paper, we investigate why mSDA benefits domain adaptation tasks from the perspective of adaptive regularization. Our investigation focuses on two types of feature-corruption noise: Gaussian noise (mSDAg) and Bernoulli dropout noise (mSDAbd). Both theoretical and empirical results demonstrate that mSDAbd successfully boosts adaptation performance, whereas mSDAg fails to do so. We then propose a new mSDA with data-dependent multinomial dropout noise (mSDAmd) that overcomes the limitations of mSDAbd and further improves adaptation performance. mSDAmd is based on a more realistic assumption: different features are correlated and should therefore be corrupted with different probabilities. Experimental results demonstrate the superiority of mSDAmd over mSDAbd in both adaptation performance and convergence speed. Finally, we propose a deep transferable feature coding (DTFC) framework for unsupervised domain adaptation, motivated by the observation that mSDA fails to account for the distribution discrepancy across domains during feature learning. We introduce a new element to mSDA: domain-divergence minimization by maximum mean discrepancy (MMD). This element is essential for domain adaptation, as it ensures that the extracted deep features have a small distribution discrepancy across domains. The effectiveness of DTFC is verified by extensive experiments on three benchmark data sets for both Bernoulli dropout noise and multinomial dropout noise.
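The two building blocks the abstract relies on can be sketched briefly. Below is a minimal NumPy illustration of (a) a single mSDA layer with Bernoulli dropout noise, using the standard closed-form marginalized solution (Chen et al., 2012), and (b) a squared maximum mean discrepancy with a linear kernel, the simplest instance of the domain-divergence term DTFC adds. This is an illustrative sketch of the standard formulations, not the authors' implementation; the function names, the regularization constant, and the choice of a linear kernel are assumptions.

```python
import numpy as np

def msda_layer(X, p=0.5):
    """One marginalized denoising autoencoder layer with Bernoulli
    dropout noise (closed form, following Chen et al., 2012).

    X : (d, n) feature matrix, one column per example.
    p : per-feature dropout (corruption) probability.
    Returns (W, H): the learned mapping and tanh hidden features.
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # append a bias row
    S = Xb @ Xb.T                             # scatter matrix
    q = np.full(d + 1, 1.0 - p)               # survival probabilities
    q[-1] = 1.0                               # bias is never corrupted
    Q = S * np.outer(q, q)                    # E[corrupted scatter], off-diag
    np.fill_diagonal(Q, q * np.diag(S))       # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q                          # cross term; output excludes bias
    # Closed-form mapping W = P Q^{-1}; small ridge term for stability.
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T
    H = np.tanh(W @ Xb)
    return W, H

def mmd2_linear(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.

    Xs, Xt : (n, d) samples from source and target domains.
    """
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)
```

Stacking `msda_layer` and penalizing `mmd2_linear` between source and target hidden features gives the flavor of DTFC's objective: reconstruct under marginalized corruption while keeping the learned representations' domain distributions close.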
PMID: 30281483 [PubMed - as supplied by publisher]