We can train a CD-DNN-HMM with the embedded Viterbi algorithm; the main steps are summarized in the table below.

A CD-DNN-HMM consists of three components: a deep neural network (dnn), a hidden Markov model (hmm), and a state prior probability distribution (prior). Since the CD-DNN-HMM system shares its phone-tying structure with the GMM-HMM system, the first step in training a CD-DNN-HMM is to train a GMM-HMM system on the training data. Because the DNN training labels are produced by running the Viterbi algorithm with the GMM-HMM system, and the quality of these labels directly affects the performance of the DNN system, it is important to train a good GMM-HMM system as the initial model.
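To see why the state prior is one of the three components: at decoding time the HMM needs likelihoods p(x|s), while the DNN outputs posteriors p(s|x). A minimal sketch (not from the original text, names are illustrative) of the standard conversion, dividing the posterior by the senone prior and dropping the per-frame constant p(x):

```python
import numpy as np

def scaled_log_likelihood(dnn_log_posteriors, log_priors):
    # log p(x|s) ~ log p(s|x) - log p(s), up to the constant log p(x)
    return dnn_log_posteriors - log_priors

# toy example: 4 senones, uniform priors
log_post = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
log_prior = np.log(np.full(4, 0.25))
ll = scaled_log_likelihood(log_post, log_prior)
```

With uniform priors the ranking of senones is unchanged; non-uniform priors (estimated from the alignment, as described below) rebalance frequent and rare senones.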
Once the GMM-HMM model hmm0 is trained, we can create a mapping from state names to senone IDs. Building this state-to-senoneID mapping (stateToSenoneIDMap) is not trivial, because each logical triphone HMM is represented by one of a set of clustered physical triphone HMMs. In other words, several logical triphones may map to the same physical triphone, and each physical triphone has several (e.g., 3) tied states (senones).
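The many-to-one structure can be sketched with toy data (the triphone names, physical-model names, and IDs below are hypothetical, purely for illustration):

```python
# Several logical triphones cluster to the same physical triphone;
# each physical triphone owns a list of tied states (senones).
logical_to_physical = {
    "a-b+c": "phys_1",
    "x-b+y": "phys_1",  # clustered to the same physical triphone as a-b+c
    "a-d+c": "phys_2",
}
physical_senones = {
    "phys_1": [0, 1, 2],  # three tied states (senone IDs)
    "phys_2": [3, 4, 5],
}

# stateToSenoneIDMap: (logical triphone, state index) -> senone ID
state_to_senone = {}
for logical, physical in logical_to_physical.items():
    for state_idx, senone_id in enumerate(physical_senones[physical]):
        state_to_senone[(logical, state_idx)] = senone_id
```

Note that the two logical triphones sharing phys_1 end up with identical senone IDs for each state, which is exactly the tying the decision tree produced.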
Using the trained GMM-HMM model hmm0, we can run the Viterbi algorithm on the training data to generate a state-level forced alignment. With stateToSenoneIDMap, we convert the state names in the alignment into senone IDs. We can then generate feature-to-senoneID pairs (featureSenoneIDPairs) to train the DNN. The same featureSenoneIDPairs are also used to estimate the senone prior probabilities.
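Estimating the senone priors from the alignment amounts to counting how often each senone ID appears among the featureSenoneIDPairs. A small sketch (function name and flooring value are illustrative assumptions):

```python
import math
from collections import Counter

def estimate_log_priors(senone_ids, num_senones, floor=1e-8):
    # Relative frequency of each senone in the forced alignment,
    # floored so unseen senones do not get log(0).
    counts = Counter(senone_ids)
    total = len(senone_ids)
    return [math.log(max(counts.get(s, 0) / total, floor))
            for s in range(num_senones)]

# toy alignment over 3 senones: senone 2 covers half the frames
align = [0, 0, 1, 2, 2, 2]
log_priors = estimate_log_priors(align, 3)
```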
From the GMM-HMM model hmm0, we can also generate a new hidden Markov model hmm that keeps the same state transition probabilities as hmm0, for use in the DNN-HMM system. A simple approach is to replace each GMM in hmm0 (i.e., each senone's model) with a (fake) one-dimensional single Gaussian. The variance (or precision) of this Gaussian is irrelevant; it can be set to any positive number (e.g., always 1), while the mean is set to the corresponding senone ID. With this trick, evaluating each senone's posterior probability becomes equivalent to a table lookup in the DNN output vector: find the output entry (log probability) whose index is the senone ID.
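The lookup trick above can be sketched as follows (a toy illustration, not the actual decoder code; names are assumptions):

```python
import numpy as np

# Fake one-dimensional Gaussians: the mean stores the senone ID and the
# variance is irrelevant (set to 1). "Evaluating" senone s on a frame
# then reduces to indexing the DNN's log-output vector at position s.
fake_means = np.array([0, 1, 2, 3])  # senone IDs stored as Gaussian means

def senone_log_prob(dnn_log_output, senone):
    senone_id = int(fake_means[senone])
    return dnn_log_output[senone_id]  # table lookup, no Gaussian evaluation

log_out = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
```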
In this process, we assumed that a CD-GMM-HMM exists and is used to generate the senone alignment. In that case, the decision tree used to cluster the triphone states is also built during GMM-HMM training. But this is not strictly necessary. If we want to remove the GMM-HMM steps in the figure entirely, we can build a single-Gaussian model by segmenting each utterance uniformly (known as flat start) and use that segmentation as the training labels. This yields a monophone DNN-HMM, which we can use to realign the utterances. A single Gaussian can then be estimated for each monophone, and the decision tree built in the conventional way. In fact, such GMM-free CD-DNN-HMMs can be trained successfully; see [GMM-free DNN training](http://bacchiani.net/resume/papers/ICASSP2014_3.pdf) for details.
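The flat-start labeling step can be sketched as follows (a toy version assuming one state per phone symbol; the function name and state names are illustrative):

```python
def flat_start_labels(num_frames, phone_states):
    # Uniformly segment an utterance across its phone-state sequence:
    # each state gets an equal share of frames, the last state absorbs
    # the remainder.
    seg = num_frames // len(phone_states)
    labels = []
    for i, state in enumerate(phone_states):
        end = num_frames if i == len(phone_states) - 1 else (i + 1) * seg
        labels.extend([state] * (end - i * seg))
    return labels

# 10 frames spread uniformly over three states
labels = flat_start_labels(10, ["a1", "a2", "a3"])
```

These uniform labels are of course crude, which is why the resulting monophone DNN-HMM is used to realign the data before building the decision tree.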

Below is a CD-DNN-HMM training script that still includes the GMM stages:
```shell
#!/usr/bin/env bash

# Copyright 2017 Beijing Shell Shell Tech. Co. Ltd. (Authors: Hui Bu)
#           2017 Jiayu Du
#           2017 Xingyu Na
#           2017 Bengu Wu
#           2017 Hao Zheng
# Apache 2.0

# This is a shell script, but it's recommended that you run the commands one by
# one by copying and pasting into the shell.
# Caution: some of the graph creation steps use quite a bit of memory, so you
# should run this on a machine that has sufficient memory.

#data=/export/a05/xna/data
data=/home/data
data_url=www.openslr.org/resources/33

. ./cmd.sh

local/download_and_untar.sh $data $data_url data_aishell || exit 1;
local/download_and_untar.sh $data $data_url resource_aishell || exit 1;

# Lexicon Preparation,
local/aishell_prepare_dict.sh $data/resource_aishell || exit 1;

# Data Preparation,
local/aishell_data_prep.sh $data/data_aishell/wav $data/data_aishell/transcript || exit 1;

# Phone Sets, questions, L compilation
utils/prepare_lang.sh --position-dependent-phones false data/local/dict \
  "" data/local/lang data/lang || exit 1;

# LM training
local/aishell_train_lms.sh || exit 1;

# G compilation, check LG composition
utils/format_lm.sh data/lang data/local/lm/3gram-mincount/lm_unpruned.gz \
  data/local/dict/lexicon.txt data/lang_test || exit 1;

# Now make MFCC plus pitch features.
# mfccdir should be some place with a largish disk where you
# want to store MFCC features.
mfccdir=mfcc
for x in train dev test; do
  steps/make_mfcc_pitch.sh --cmd "$train_cmd" --nj 10 data/$x exp/make_mfcc/$x $mfccdir || exit 1;
  steps/compute_cmvn_stats.sh data/$x exp/make_mfcc/$x $mfccdir || exit 1;
  utils/fix_data_dir.sh data/$x || exit 1;
done

steps/train_mono.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/mono || exit 1;

# Monophone decoding
utils/mkgraph.sh data/lang_test exp/mono exp/mono/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/mono/graph data/dev exp/mono/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/mono/graph data/test exp/mono/decode_test

# Get alignments from monophone system.
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/mono exp/mono_ali || exit 1;

# train tri1 [first triphone pass]
steps/train_deltas.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/mono_ali exp/tri1 || exit 1;

# decode tri1
utils/mkgraph.sh data/lang_test exp/tri1 exp/tri1/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri1/graph data/dev exp/tri1/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri1/graph data/test exp/tri1/decode_test

# align tri1
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri1 exp/tri1_ali || exit 1;

# train tri2 [delta+delta-deltas]
steps/train_deltas.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri1_ali exp/tri2 || exit 1;

# decode tri2
utils/mkgraph.sh data/lang_test exp/tri2 exp/tri2/graph
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri2/graph data/dev exp/tri2/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri2/graph data/test exp/tri2/decode_test

# train and decode tri2b [LDA+MLLT]
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri2 exp/tri2_ali || exit 1;

# Train tri3a, which is LDA+MLLT,
steps/train_lda_mllt.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri2_ali exp/tri3a || exit 1;
utils/mkgraph.sh data/lang_test exp/tri3a exp/tri3a/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri3a/graph data/dev exp/tri3a/decode_dev
steps/decode.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri3a/graph data/test exp/tri3a/decode_test

# From now, we start building a more serious system (with SAT), and we'll
# do the alignment with fMLLR.
steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri3a exp/tri3a_ali || exit 1;
steps/train_sat.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri3a_ali exp/tri4a || exit 1;
utils/mkgraph.sh data/lang_test exp/tri4a exp/tri4a/graph
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri4a/graph data/dev exp/tri4a/decode_dev
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri4a/graph data/test exp/tri4a/decode_test
steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri4a exp/tri4a_ali

# Building a larger SAT system.
steps/train_sat.sh --cmd "$train_cmd" \
  3500 100000 data/train data/lang exp/tri4a_ali exp/tri5a || exit 1;
utils/mkgraph.sh data/lang_test exp/tri5a exp/tri5a/graph || exit 1;
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri5a/graph data/dev exp/tri5a/decode_dev || exit 1;
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri5a/graph data/test exp/tri5a/decode_test || exit 1;
steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri5a exp/tri5a_ali || exit 1;

# nnet3
local/nnet3/run_tdnn.sh

# chain
local/chain/run_tdnn.sh

# getting results (see RESULTS file)
for x in exp/*/decode_test; do [ -d $x ] && grep WER $x/cer_* | utils/best_wer.sh; done 2>/dev/null
for x in exp/*/*/decode_test; do [ -d $x ] && grep WER $x/cer_* | utils/best_wer.sh; done 2>/dev/null

exit 0;
```
Copyright notice: this article was contributed by a web user; the copyright belongs to the original author, and this site neither owns the copyright nor assumes the corresponding legal liability. If you find content on this site that is suspected of plagiarism or misrepresentation, please contact us at jiasou666@gmail.com; once verified, this site will remove the infringing content within 24 hours.