JP2744622B2 - Plosive consonant identification method - Google Patents
Plosive consonant identification method

Info
- Publication number
- JP2744622B2 (application number JP63230891A)
- Authority
- JP
- Japan
- Prior art keywords
- rupture
- closing
- likelihood
- plosive
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Links
Description
DETAILED DESCRIPTION OF THE INVENTION

[Summary]

The present invention relates to a plosive consonant identification method for identifying plosive consonants in speech during speech recognition processing, with the object of achieving high identification performance. A plosive consonant identification method for identifying a plosive consonant in input speech comprises: closure point detection means for detecting the closure point of the input speech, at which the vowel preceding the plosive consonant ends; burst point detection means for detecting the burst point of the input speech, at which the plosive consonant begins; closure analysis means for analyzing the closure portion, in the vicinity of the closure point, with reference to that point to obtain closure identification parameters; burst analysis means for analyzing the burst portion, in the vicinity of the burst point, with reference to that point to obtain burst identification parameters; closure likelihood calculation means for obtaining, from the closure identification parameters, the likelihood of the closure portion against each plosive category; burst likelihood calculation means for obtaining, from the burst identification parameters, the likelihood of the burst portion against each plosive category; and judgment means for identifying the plosive consonant from the closure likelihood and the burst likelihood.
The present invention relates to a plosive consonant identification method, and more particularly to a plosive consonant identification method for identifying plosive consonants in speech during speech recognition processing.
Among speech recognition tasks, the recognition of plosive consonants is particularly difficult, and plosive consonants in input speech must be analyzed in greater detail.
FIG. 4 is a block diagram showing the configuration of an example of a conventional plosive consonant identification system.
In the figure, burst point detection means 11 detects the burst point of a plosive consonant from the digital time-series signal of the input speech. Analysis means 12 performs an analysis in the vicinity of the detected burst point to obtain identification parameters.
Next, likelihood calculation means 13 computes the likelihood between the identification parameters and each plosive category, using standard patterns prepared separately for each preceding and succeeding vowel of the plosive consonant.
Judgment means 14 then selects the category with the highest likelihood, thereby identifying the plosive consonant.
Because the conventional system performs identification using only parameters obtained by analyzing the vicinity of the burst point, sufficient identification performance cannot be obtained for some combinations of preceding and succeeding vowels.
For example, when distinguishing "abi" from "agi", the formant transition from the preceding vowel "a" to the succeeding vowel "i" obscures the features that distinguish "b" from "g" in the burst portion, so sufficient identification performance is not obtained.
The present invention has been made in view of the above problem, and its object is to provide a plosive consonant identification method with high identification performance.
FIG. 1 is a block diagram of the principle of the method of the present invention.
In the figure, the digital time-series signal of the input speech is supplied to both closure point detection means 1 and burst point detection means 2.
Closure point detection means 1 detects the closure point of the input speech, at which the vowel preceding the plosive consonant ends, while burst point detection means 2 detects the burst point of the input speech, at which the plosive consonant begins.
Closure analysis means 3 analyzes the closure portion, in the vicinity of the closure point of the input speech, with reference to that point to obtain closure identification parameters.
Burst analysis means 4 analyzes the burst portion, in the vicinity of the burst point of the input speech, with reference to that point to obtain burst identification parameters.
Closure likelihood calculation means 5 obtains, from the closure identification parameters, the likelihood of the closure portion against each plosive category.
Burst likelihood calculation means 6 obtains, from the burst identification parameters, the likelihood of the burst portion against each plosive category.
Judgment means 7 identifies the plosive consonant from the closure likelihood and the burst likelihood.
In the present invention, features are extracted not only from the burst portion, where the plosive consonant begins, but also from the closure portion, where the preceding vowel ends; the likelihood against each category is obtained for each portion, and the two are combined to select the category with the highest overall likelihood.
As a result, utterances such as "abi" and "agi", which are difficult to distinguish from the burst portion alone, can each be identified, improving identification performance.
FIG. 2 is a block diagram showing the configuration of one embodiment of the method of the present invention. In this embodiment, word speech containing the voiced plosive consonants "b", "d", and "g" is input, and the system discriminates among these voiced plosives.
In the figure, 20 is a speech data memory, to which the digital time-series signal of the input speech shown in FIG. 3(A), containing a voiced plosive consonant together with its preceding and succeeding vowels, is supplied and recorded. The input speech in FIG. 3(A) is the utterance "abi".
The digital time-series signal read from the speech data memory is supplied to both closure point detection unit 21 and burst point detection unit 22.
Closure point detection unit 21 detects, as closure point A, the time at which the power of the high-frequency-emphasized input speech, shown in FIG. 3(B), falls below a fixed threshold TH1. Closure point A is the time at which the articulators (lips, tongue, palate, etc.) close and the preceding vowel interval ends.
Burst point detection unit 22 detects, as burst point B, the time at which the power of the input speech shown in FIG. 3(B) rises above a fixed threshold TH2. Burst point B is the time at which the articulatory closure is released and the consonant interval begins.
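The two detections above amount to threshold crossings on a power contour. The following is a minimal sketch, assuming a per-frame power sequence has already been computed; the frame rate, the high-frequency-emphasis filter, and the threshold values TH1 and TH2 are illustrative, as the text does not specify them:

```python
def detect_closure_and_burst(power, th1, th2):
    """Return (closure_point, burst_point) as frame indices.

    closure: first frame where power falls below th1 (point A);
    burst:   first later frame where power rises above th2 (point B).
    """
    closure = burst = None
    for i, p in enumerate(power):
        if closure is None:
            if p < th1:
                closure = i          # articulators close, preceding vowel ends
        elif p > th2:
            burst = i                # closure released, consonant begins
            break
    return closure, burst

# Toy power contour: vowel -> closed interval (low power) -> burst
power = [8.0, 7.5, 6.0, 0.4, 0.2, 0.3, 5.5, 7.0]
print(detect_closure_and_burst(power, th1=1.0, th2=4.0))  # -> (3, 6)
```

Requiring the burst search to start only after a closure has been found mirrors the ordering in the figure: point B must follow point A.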
Next, when three frames are to be analyzed, closure analysis position setting unit 23 sets, with analysis frame period T, the analysis frame T0 referenced (centered) on closure point A and the two frames T-1 and T-2 preceding it.
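Positioning the three analysis frames is simple arithmetic on the detected reference point. A sketch in sample indices, with a hypothetical closure point and frame period:

```python
def analysis_frames(center, period):
    """Reference (center) positions of frame T0 at `center` and the two frames preceding it."""
    return [center - period * k for k in (0, 1, 2)]  # T0, T-1, T-2

print(analysis_frames(center=4800, period=160))  # [4800, 4640, 4480]
```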
Closure analysis unit 24 then performs frequency analysis on analysis frame T0, for example dividing the 0–8 kHz range into 16 bands and obtaining the power spectrum of each band; the same frequency analysis is performed on frames T-1 and T-2, yielding 48 power-spectrum values in total, which form the 48-dimensional closure identification parameter X (elements xi, i = 1–48).
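The 48-dimensional parameter is simply three frames' worth of 16-band power spectra concatenated. A sketch using a plain DFT follows; the window length and dummy frame contents are illustrative, since the text fixes only the 0–8 kHz range and the 16-band split:

```python
import cmath

def band_powers(frame, n_bands=16):
    """Power spectrum of one frame, pooled into n_bands equal-width bands."""
    n = len(frame)
    half = n // 2                       # bins 0 .. Nyquist (e.g. 8 kHz at 16 kHz sampling)
    spec = [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(half)]
    per_band = half // n_bands
    return [sum(spec[b * per_band:(b + 1) * per_band]) for b in range(n_bands)]

def identification_parameter(frames):
    """Concatenate band powers of the 3 analysis frames -> 48-dimensional parameter."""
    x = []
    for f in frames:
        x.extend(band_powers(f))
    return x

frames = [[(i % 7) - 3.0 for i in range(64)] for _ in range(3)]  # 3 dummy frames
X = identification_parameter(frames)
print(len(X))  # 48
```

The same routine yields the burst identification parameter Y when applied to frames around burst point B.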
The closure standard pattern dictionary 26 stores, obtained in advance from a large amount of data: the principal component coefficient vector M of the closure portion; the mean vectors Eb, Ed, and Eg of the principal-component-expanded eight-dimensional data for the "b", "d", and "g" groups; and the inverse matrix V of the average of the variance-covariance matrices of the "b", "d", and "g" groups.
Here, principal component expansion means compressing the 48-dimensional parameter into an eight-dimensional parameter. The principal component coefficient vector M holds the coefficients required for this expansion and is a 48 × 8 matrix (elements mij, i = 1–48, j = 1–8). The mean vector E (Eb, Ed, Eg) of each group is an eight-dimensional vector (elements ej, j = 1–8); the variance-covariance matrix expresses the spread of the many data samples in each group, and V is an 8 × 8 matrix (elements vij, i = 1–8, j = 1–8).
Closure principal component expansion unit 25 computes, from the closure identification parameter X obtained by closure analysis unit 24 and the principal component coefficient vector M from the closure standard pattern dictionary 26, the principal component R (elements rj, j = 1–8) of the parameter X by the following calculation:

R = Mt·X, i.e. rj = Σi mij·xi (i = 1–48) ……(1)

Next, closure distance calculation unit 27 computes, from the principal component R together with the mean vectors Eb, Ed, Eg and the inverse matrix V from the closure standard pattern dictionary 26, the distance P (a scalar) to each of the categories "b", "d", and "g":

Pq=(R−Eq)t·V·(R−Eq) ……(2)

(where q = b, d, g, and t denotes transposition)

Each of the distances Pb, Pd, Pg becomes smaller as the likelihood of the corresponding category becomes higher.
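Equation (2) is a Mahalanobis-style distance computed after the linear projection of equation (1). A sketch with plain lists, using toy dimensions (4 → 2 instead of 48 → 8) and made-up dictionary values in place of the trained M, V, and group means:

```python
def project(M, x):
    """Eq. (1): R = M^t . X, i.e. r_j = sum_i M[i][j] * x[i]."""
    rows, cols = len(M), len(M[0])
    return [sum(M[i][j] * x[i] for i in range(rows)) for j in range(cols)]

def distance(r, e, V):
    """Eq. (2): P_q = (R - E_q)^t . V . (R - E_q)."""
    d = [ri - ei for ri, ei in zip(r, e)]
    return sum(d[i] * V[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))

M = [[1, 0], [0, 1], [1, 0], [0, 1]]           # stand-in coefficient matrix (4 x 2)
V = [[1, 0], [0, 1]]                            # identity stand-in for the inverse covariance
X = [1.0, 2.0, 3.0, 4.0]                        # stand-in identification parameter
R = project(M, X)                               # [4.0, 6.0]
means = {"b": [4.0, 6.0], "d": [0.0, 0.0], "g": [4.0, 0.0]}  # stand-in group means
P = {q: distance(R, e, V) for q, e in means.items()}
print(P)  # smallest distance => highest likelihood, here category "b"
```

The burst-side computation of equations (3) and (4) is identical in form, with N, W, and the F means in place of M, V, and the E means.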
Next, when three frames are to be analyzed, burst analysis position setting unit 28 sets, with analysis frame period T, the analysis frame T1 referenced (centered) on burst point B and the analysis frames T2 and T3 preceding it.
Burst analysis unit 29 then performs frequency analysis on analysis frame T1, for example dividing the 0–8 kHz range into 16 bands and obtaining the power spectrum of each band; the same frequency analysis is performed on frames T2 and T3, yielding 48 power-spectrum values in total, which form the 48-dimensional burst identification parameter Y (elements yi, i = 1–48).
The burst standard pattern dictionary 31 stores, obtained in advance from a large amount of data: the principal component coefficient vector N of the burst portion; the mean vectors Fb, Fd, and Fg of the principal-component-expanded eight-dimensional data for the "b", "d", and "g" groups; and the inverse matrix W of the average of the variance-covariance matrices of the "b", "d", and "g" groups.
Here, the principal component coefficient vector N holds the coefficients required for principal component expansion and is a 48 × 8 matrix (elements nij, i = 1–48, j = 1–8). The mean vector F (Fb, Fd, Fg) of each group is an eight-dimensional vector (elements fj, j = 1–8), and W is an 8 × 8 matrix (elements wij, i = 1–8, j = 1–8).
Burst principal component expansion unit 30 computes, from the burst identification parameter Y and the principal component coefficient vector N from the burst standard pattern dictionary 31, the principal component S (elements sj, j = 1–8) of the parameter Y by the following calculation:

S = Nt·Y, i.e. sj = Σi nij·yi (i = 1–48) ……(3)

Next, burst distance calculation unit 32 computes, from the principal component S together with the mean vectors Fb, Fd, Fg and the inverse matrix W from the burst standard pattern dictionary 31, the distance Q (a scalar) to each of the categories "b", "d", and "g":
Qq=(S−Fq)t·W·(S−Fq) ……(4)

(where q = b, d, g)

Each of the distances Qb, Qd, Qg becomes smaller as the likelihood of the corresponding category becomes higher.
Judgment unit 33 adds, for each category, the closure distance P supplied from closure distance calculation unit 27 and the burst distance Q supplied from burst distance calculation unit 32, selects the category whose sum is smallest (i.e., whose likelihood is highest), and outputs it as the identification result.
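The final decision is a per-category sum of the two distances followed by a minimum. A sketch with illustrative distance values (not taken from the patent) shows how closure evidence can override a burst-only decision:

```python
def identify(closure_dist, burst_dist):
    """Pick the category minimizing P_q + Q_q (smallest distance = highest likelihood)."""
    total = {q: closure_dist[q] + burst_dist[q] for q in closure_dist}
    return min(total, key=total.get)

# Illustrative distances: the burst portion alone would pick "g" (smallest Q),
# but adding the closure evidence flips the decision to "b".
P = {"b": 1.0, "d": 9.0, "g": 8.0}   # closure distances
Q = {"b": 4.0, "d": 7.0, "g": 3.0}   # burst distances
print(identify(P, Q))  # b  (totals: b=5.0, d=16.0, g=11.0)
```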
In this way, the features of the closure portion are captured in addition to those of the burst portion, and the final decision is made from the likelihoods of both; high identification performance is therefore obtained even for combinations of preceding and succeeding vowels for which the conventional method could not achieve sufficient performance.
As described above, according to the plosive consonant identification method of the present invention, even plosive consonants that were conventionally difficult to identify for certain combinations of preceding and succeeding vowels can be identified with high performance, which is extremely useful in practice.
FIG. 1 is a block diagram of the principle of the method of the present invention; FIG. 2 is a block diagram showing the configuration of one embodiment of the method; FIG. 3 shows the input speech waveform and its logarithmic power; and FIG. 4 is a block diagram showing the configuration of an example of a conventional system.

In the figures: 1 is closure point detection means; 2 is burst point detection means; 3 is closure analysis means; 4 is burst analysis means; 5 is closure likelihood calculation means; 6 is burst likelihood calculation means; and 7 is judgment means.
Claims (1)

1. A plosive consonant identification method for identifying a plosive consonant in input speech, comprising: closure point detection means (1) for detecting the closure point of the input speech, at which the vowel preceding the plosive consonant ends; burst point detection means (2) for detecting the burst point of the input speech, at which the plosive consonant begins; closure analysis means (3) for analyzing the closure portion, in the vicinity of the closure point, with reference to that point to obtain closure identification parameters; burst analysis means (4) for analyzing the burst portion, in the vicinity of the burst point, with reference to that point to obtain burst identification parameters; closure likelihood calculation means (5) for obtaining, from the closure identification parameters, the likelihood of the closure portion against each plosive category; burst likelihood calculation means (6) for obtaining, from the burst identification parameters, the likelihood of the burst portion against each plosive category; and judgment means (7) for identifying the plosive consonant from the closure likelihood and the burst likelihood.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP63230891A JP2744622B2 (en) | 1988-09-14 | 1988-09-14 | Plosive consonant identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH0279098A JPH0279098A (en) | 1990-03-19 |
JP2744622B2 true JP2744622B2 (en) | 1998-04-28 |
Family
ID=16914921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP63230891A Expired - Lifetime JP2744622B2 (en) | 1988-09-14 | 1988-09-14 | Plosive consonant identification method |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP2744622B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114566169A (en) * | 2022-02-28 | 2022-05-31 | 腾讯音乐娱乐科技(深圳)有限公司 | Wheat spraying detection method, audio recording method and computer equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5730894A (en) * | 1980-07-31 | 1982-02-19 | Matsushita Electric Ind Co Ltd | Sound element recognition system |
- 1988-09-14: application JP63230891A filed in Japan; granted as JP2744622B2 (status: Expired - Lifetime)