

Vol. 24, No. 5, 2011

DOI: 10.3901/CJME.2011.05.***, available online at


Ignition Pattern Analysis for Automotive Engine Trouble Diagnosis using Wavelet Packet Transform and Support Vector Machines

VONG Chiman 1,*, WONG Pakkin 2, TAM Lapmou 2, ZHANG Zaiyong 2

1 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau, China
2 Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China

Received June 4, 2010; revised April 14, 2011; accepted May 5, 2011; published electronically May 12, 2011

Abstract: Engine spark ignition is an important source for the diagnosis of engine faults. Based on the waveform of the ignition pattern, a mechanic can, with experience and handbooks, infer which parts of an engine are potentially malfunctioning. However, this manual diagnostic method is imprecise because many spark ignition patterns are very similar. Therefore, a diagnosis needs many trials to identify the malfunctioning parts, and in each trial the mechanic needs to disassemble and reassemble the engine parts for verification. To tackle this problem, an intelligent diagnosis system was established based on ignition patterns. First, the captured patterns were normalized and compressed. Then Wavelet Packet Transform (WPT) was employed to extract the representative features of the ignition patterns. Finally, a classification system was constructed by using multiclass Support Vector Machines (SVM) and the extracted features. The classification system can intelligently classify the most likely engine fault so as to reduce the number of diagnosis trials. Experimental results show that SVM produces higher diagnosis accuracy than the traditional multilayer feedforward neural network. This is the first trial on the combination of WPT and SVM to analyze ignition patterns and diagnose automotive engines.

Key words: automotive engine, ignition pattern diagnosis, pattern classification, wavelet packet transform, support vector machines

1 Introduction


Automotive engine ignition systems vary in construction, but are similar in basic operation. All ignition systems have a primary circuit that causes a spark in the secondary circuit. This spark must then be delivered to the correct spark plug at the proper time. An oscilloscope or scope meter is used to analyze ignition-system operation. The scope patterns show ignition-system troubles and help pinpoint their causes. Conditions in the ignition system and in the cylinder affect the ignition pattern (i.e., the scope pattern). The way an ignition pattern deviates from the normal pattern indicates where the ignition problem lies. The scope meter can detect wide or narrow spark-plug gaps, open spark-plug cables, a shorted ignition coil, etc. The scope meter also detects engine conditions that change the firing voltage, i.e., the voltage needed to fire the plug. These conditions alter the duration or slope of the spark line. The scope meter can usually show several patterns and possible causes for ignition-system and engine problems. Knowing the basic ignition patterns and what each section

* Corresponding author. Email: [email protected]
This project is supported by University of Macau Research Grant, China (Grant No. RG057/0809S/VCM/FST, Grant No. UL011/09Y1/EME/WPK01/FST).

of the pattern represents makes the scope meter a valuable tool for detecting engine and ignition problems [1]. Fig. 1 shows some examples of engine ignition patterns per engine cycle of an electronic ignition system and their corresponding engine faults.

When using a scope meter for engine trouble diagnosis, a pattern pickup clamp of the scope meter is connected to the ignition system to capture its spark ignition signal. The captured ignition signal requires the intervention of the mechanic to indicate the starting and end points of the signal. Then, the mechanic can compare the captured signal with the well-known signals in the handbook [1-2]. According to the diagnosis, the corresponding parts of the engine will be disassembled for intensive investigation (this is called a trial). However, this kind of manual diagnosis may be unreliable because the signal patterns are very similar. In addition, different (or even the same) engine models suffering from the same kind of fault produce more or less different shapes of ignition patterns. So an ignition pattern has no standard amplitude or duration but just refers to a shape. Comparing these signals is obviously based merely on the experience of the mechanic, which makes it a difficult task. To find a fault based on ignition patterns, several trials (disassembling and assembling engine parts) are usually necessary, which costs the mechanics a large amount of time and effort.



To address this problem, a computer-based pattern classification system is necessary for the mechanic.

Fig. 1. Examples of ignition patterns of an electronic ignition system and their corresponding engine faults

Currently, there is almost no research in the literature on computer-based ignition pattern analysis for automotive engine trouble-diagnosis. The main reasons are as follows.

(1) The ignition patterns are time-varying and non-stationary. Different engine models produce various amplitudes and durations for the same ignition pattern. Even the same engine may produce different sizes of ignition patterns under different engine operating conditions. This further increases the difficulty of human diagnosis.

(2) The ignition patterns of different engine faults are very similar. From the human point of view, it is hard to distinguish the patterns correctly, especially when the pattern captured from the scope meter is distorted by white noise.

In order to tackle these issues, wavelet packet transform (WPT) [3-4] is proposed as a feature extraction method to retrieve the significant frequency features of an ignition pattern. With these extracted features, modeling and classification of ignition patterns become possible.

A cycle duration of about 120 ms is typical for many engines [1-2], so the automotive scope meter is usually set to a high sampling rate, say 100 kHz. For capturing the entire ignition pattern of each engine cycle under a high sampling rate and the idle-speed testing condition, which is one of the most common conditions, the number of data points collected is enormous. For example, at 800 r/min idle speed, a 100 kHz sampling rate and a four-stroke engine cycle, 15 000 data points are produced for each sample pattern. It is unwise to use all the data points for feature extraction and classification; only the most important ones should be selected. So a procedure of feature extraction and compression is needed, which can be done by wavelet packet compression.

In terms of modeling and classification techniques, multilayer feedforward neural networks (MFN) and Gaussian process classification are traditionally employed to handle signal pattern recognition [5-6]. However, these two methods suffer from several drawbacks. For MFN, firstly, the architecture, including the number of hidden neurons, has to be determined a priori or modified during training by heuristics, which results in a not necessarily optimal network structure. Secondly, the training process (i.e., the minimization of the residual squared-error cost function) in neural networks can easily become stuck in local minima. Various ways of preventing local minima, such as early stopping and weight decay, can be employed, but these methods greatly affect the generalization of the estimated function, i.e., its capacity to handle new input cases.

For Gaussian process classification, the data are assumed to be normally distributed, which is not the case for the signals in this research. Although the distribution of the signals can be transformed into a Gaussian using the Laplace approximation, this transformation is often inaccurate. Moreover, the training of a Gaussian process takes so long that a high-dimensional data set such as the current application (a signal contains several thousand points) cannot be handled.

Recent research [7-21] showed that support vector machines (SVM) are superior to MFN in terms of accuracy. Compared with Gaussian process classification, SVM is more suitable for high-dimensional data sets. So SVM was employed to build a computer-based pattern classification system in this study. The application of the above-mentioned techniques to ignition pattern analysis for automotive engine trouble-diagnosis is a new attempt. In this paper, both SVM and MFN are used to construct classification systems for comparison.

This paper is organized as follows. Sections 2 and 3 firstly review the above techniques. Section 4 illustrates the construction of the computer-based classification system for ignition patterns. Sections 5 and 6 show the experimental setup and results, respectively. Finally, a conclusion is made in section 7.

2 Wavelet Packet Transform

2.1 Discrete wavelet transform (DWT)
Discrete wavelet transform (DWT) [3] has been proven very efficient in the signal analysis of many engineering applications [8-12]. The main advantage of DWT is that it constructs a varying support (window size), dilated (wide) for low frequencies and sharp (narrow) for high frequencies. This implies that the high-frequency content of a signal requires more detailed analysis in the frequency domain, while the low-frequency content requires analysis in the time domain. Hence, this policy leads to the construction of an optimal time-frequency resolution over all the frequency ranges. The general concept of DWT is depicted in Fig. 2 [11].

Fig. 2. Four decomposed levels of DWT

Generally, under DWT, a signal x[N] is decomposed iteratively by two digital filters and then downsampled by 2. The first filter g[·] is a high-pass filter based on the discrete mother wavelet, aiming to extract the high frequencies of a signal. The second filter h[·] is a low-pass filter, which is a mirror of g[·], aiming for the extraction of low frequencies. The downsampled outputs of a signal filtered with g[·] and h[·] are defined as the detail D_j and the approximation A_j respectively, where j is the decomposition level. The same filtering process then continues on the approximation A_j to produce D_{j+1} and A_{j+1} until a termination condition is met. All DWTs can be specified in terms of a low-pass filter h, which satisfies the standard quadrature mirror filter condition:

H(z)H(z^{-1}) + H(-z)H(-z^{-1}) = 1,    (1)

where H(z) denotes the z-transform of the filter h. Its complementary high-pass filter can be defined as

G(z) = z H(-z^{-1}).    (2)

A sequence of filters with increasing length (indexed by j) can be obtained, j = 0 to N-1:

H_{j+1}(z) = H(z^{2^j}) H_j(z),    (3)
G_{j+1}(z) = G(z^{2^j}) H_j(z),    (4)

with the initial condition H_0(z) = 1. The filters are expressed as a two-scale relation in the time domain:

h_{j+1}(k) = [h]_{↑2^j} h_j(k),    (5)
g_{j+1}(k) = [g]_{↑2^j} h_j(k),    (6)

where the subscript [·]_{↑m} indicates upsampling by a factor of m and k is the equally sampled discrete time. The normalized wavelet and scale basis functions φ_{i,l}(k) and ψ_{i,l}(k) can be defined as follows:

φ_{i,l}(k) = 2^{i/2} h_i(k - 2^i l),    (7)
ψ_{i,l}(k) = 2^{i/2} g_i(k - 2^i l),    (8)

where the factor 2^{i/2} is for inner product normalization, and i and l are the scale parameter and the translation parameter, respectively. The DWT decomposition of a signal x(k) can then be described as follows:

A^{(i)}(l) = Σ_k x(k) φ_{i,l}(k),    (9)
D^{(i)}(l) = Σ_k x(k) ψ_{i,l}(k),    (10)

where A^{(i)}(l) and D^{(i)}(l) are the approximation coefficients and the detail coefficients at resolution i, respectively [3].

2.2 Wavelet packet transform (WPT)
Wavelet packet transform (WPT) is a generalization of wavelet decomposition that offers a richer signal analysis. In the decomposition of a signal by DWT, only the lower frequency band is decomposed, giving a right-recursive binary tree structure, where the right lobe represents the lower frequency band and the left lobe represents the higher frequency band. In the corresponding decomposition by WPT, the lower as well as the higher frequency bands are decomposed, giving a balanced binary tree structure [4]. Such a tree is given in Fig. 3 [13].



Fig. 3. Three decomposed levels of WPT
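The filtering-and-downsampling scheme behind the trees in Figs. 2 and 3 can be sketched in a few lines of Python with the Haar filter pair (the same mother wavelet later chosen for compression in this paper). This is an illustrative reimplementation, not the authors' MATLAB code:

```python
import numpy as np

def haar_dwt_level(x):
    """One DWT level: low-pass/high-pass filtering followed by downsampling by 2.

    Uses the orthonormal Haar pair h = [1, 1]/sqrt(2) (low-pass) and
    g = [1, -1]/sqrt(2) (high-pass); x must have even length.
    """
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)  # for Haar, filtering + downsampling reduces to non-overlapping pairs
    A = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # approximation coefficients A_j
    D = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # detail coefficients D_j
    return A, D

def wpt_level(x):
    """One WPT level: unlike DWT, BOTH output bands are decomposed further,
    so L levels yield 2**L subbands (the balanced tree of Fig. 3)."""
    A, D = haar_dwt_level(x)
    return [A, D]

x = np.array([4.0, 4.0, 2.0, 2.0, 1.0, 3.0, 0.0, 0.0])
A, D = haar_dwt_level(x)
# orthonormality preserves energy: ||x||^2 == ||A||^2 + ||D||^2
assert np.isclose(np.sum(x**2), np.sum(A**2) + np.sum(D**2))

# a full 3-level WPT of this toy signal gives 2**3 = 8 subbands:
bands = [x]
for _ in range(3):
    bands = [sub for b in bands for sub in wpt_level(b)]
assert len(bands) == 2 ** 3
```

The energy check mirrors why wavelet energy per subband (used later for compression tolerance) is well defined under an orthonormal filter pair.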

Hence WPT is an extension of the DWT, and the advantage of WPT is that a signal can be decomposed into a set of frequency subbands, on which entropy can be measured. Entropy describes information-related properties for an accurate representation of a given signal, and appears to be an ideal tool for quantifying the ordering of non-stationary signals. Based on the measured entropy of each subband, the most suitable decomposition of a given signal can be selected. The entropy type used in WPT is usually one of the following:

Norm: E(S) = Σ_i |S_i|^p, p ≥ 1;
Log energy: E(S) = Σ_i log(S_i^2);
SURE (Stein's unbiased risk estimate): E(S) = n - #{i : |S_i| ≤ ε} + Σ_i min(S_i^2, ε^2),

where S is the signal measured (in this case, a subband), S_i is the ith wavelet coefficient of the signal, n is the signal length, and ε is a threshold calculated by the formula ε = [2 ln(n log2(n))]^{1/2}. With this entropy-based criterion, a best tree of wavelet decomposition can be computed automatically. A function bestlevt has been implemented in the MATLAB Wavelet Toolbox for this calculation.

2.3 Compression
As mentioned above, the number of sampling points of a captured pattern is always enormous, so it usually takes a long time to train and test with such massive ignition patterns. Ignition patterns can therefore be compressed as a preprocessing step before feature extraction, and the above WPT procedure can also be used for compression in addition to feature extraction.

Consider a signal S = f(x). It can be decomposed on level j (j > 0) by the wavelet packet bases

{2^{(j-k)/2} w_{2^k+m}(2^{j-k} x - l) | l ∈ Z}

and reconstructed by using a weighted combination of them. If the signal f(x) on level j is decomposed, then the reconstructed (compressed) signal F(x) can be written as Eq. (11), where ⟨·,·⟩ is a dot product:

F(x) = Σ_{l=-∞}^{∞} Σ_{k=1}^{j} Σ_{m=0}^{2^k-1} ⟨f(x), 2^{(j-k)/2} w_{2^k+m}(2^{j-k} x - l)⟩ 2^{(j-k)/2} w_{2^k+m}(2^{j-k} x - l).    (11)

Since the frequency distribution of a given signal is not always even, the weights of these wavelet packet bases may vary greatly. So it is possible to abandon the low-weight bases and select the high-weight ones to decompose a given signal and approximately reconstruct it. This process is called best wavelet packet basis selection. From the information processing point of view, this selection is actually data compression.

2.4 Feature extraction
After wavelet packet compression, WPT can be used for feature extraction. Feature extraction is the determination of a feature or a feature vector from a pattern with minimal loss of important information. Usually a feature vector is a reduced-dimensional representation of a pattern, intended to reduce the modeling complexity and computational cost. Through WPT, a set of 2^L subbands of a signal can be obtained, where L is the level of WPT decomposition. Here the optimal level L* can optionally be computed with the function bestlevt.

3 Multiclass Support Vector Machines

3.1 Support vector machines for two-class problems
SVM [14] is currently a well-known machine learning technique. It has been applied to deal with a wide range of engineering problems [7, 15], such as gear fault diagnosis [16], due to its high generalization. The main idea of an SVM classifier is to construct a linear hyperplane in a high (or even infinite) dimensional space onto which all training examples are mapped through a kernel trick, so that nonlinear examples can be classified more accurately. Given a set of training patterns {x_i, y_i}, where i = 1, 2, ..., l, y_i ∈ {+1, -1} and x_i ∈ R^d. In this application, x_i is an ignition pattern captured by the automotive scope meter, whereas y_i is the label associated with each pattern indicating one of the classes. Thus, a two-class problem basically consists of finding the optimal hyperplane that separates the samples labeled -1 from those labeled +1, with a given margin between one set and the other. Such a hyperplane is found when the margin is maximal. Instead of solving this optimization problem directly, its dual problem is easier to solve [17]:

min L_D = (1/2) Σ_{i,j} α_i α_j y_i y_j K(x_i, x_j) - Σ_i α_i,    (12)

s.t. 0 ≤ α_i ≤ C,  Σ_i α_i y_i = 0.    (13)

In Eq. (12), α_i and α_j are the Lagrange multipliers of the primal optimization problem of obtaining the maximum margin, C is a regularization parameter, and K is a kernel function that computes the inner product in a higher-dimensional space. This kernel function should satisfy Mercer's theorem to ensure that the optimization problem is convex [17]. In this application, the radial basis function (RBF) kernel is used:

K(x_i, x_j) = exp(-γ ||x_i - x_j||^2),    (14)

where γ ∈ R is a hyperparameter for user adjustment. The output of a binary classifier is then calculated as

f(z) = sign(Σ_{i=1}^{N_s} α_i y_i K(x_i, z) + b) ∈ {+1, -1},    (15)

where N_s is the number of support vectors found as a result of the optimization problem, x_i are the support vectors, and b is a threshold parameter updated in the training phase. Based on the sign function, the class of z can be computed directly from f(z).

3.2 Multiclass strategies for SVM
Multiclass SVM considers a multiclass problem as a collection of binary classifications. There are two popular strategies, namely one-against-all and one-against-one. In the one-against-all strategy, k different classifiers are constructed for k classes, where the kth classifier constructs a hyperplane between class k and the rest of the classes. In the one-against-one strategy, a total of k(k-1)/2 hyperplanes are defined, where each hyperplane separates one pair of classes. Although the second strategy sometimes improves the results obtained with the one-against-all strategy, it should be noted that the number of SVMs grows quadratically with the number of classes. However, in the current study, the one-against-one strategy is employed because accuracy is the main concern.
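As a minimal illustration of Eqs. (14) and (15), the following Python sketch evaluates the RBF kernel and the sign decision for hand-set support vectors. The α_i, y_i and b values here are toy assumptions, not solutions of the dual problem of Eqs. (12)-(13):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Eq. (14): K(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_decide(z, sv, alpha, y, b=0.0, gamma=1.0):
    """Eq. (15): sign of the kernel expansion over the N_s support vectors.
    alpha, y and b are toy values here, not trained ones."""
    s = sum(a_i * y_i * rbf_kernel(x_i, z, gamma)
            for a_i, y_i, x_i in zip(alpha, y, sv))
    return 1 if s + b >= 0 else -1

# two hypothetical support vectors, one per class, with equal weights:
sv = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
alpha = [1.0, 1.0]
y = [-1, +1]

# a query point near the first support vector is pulled toward its label:
print(svm_decide(np.array([0.5, 0.5]), sv, alpha, y))  # -1

# one-against-one over the paper's 9 symptom classes needs k(k-1)/2 machines:
k = 9
assert k * (k - 1) // 2 == 36
```

Because the RBF kernel decays with distance, each support vector only influences decisions near it, which is why the toy query inherits the label of its nearest support vector.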

4 Construction of Ignition Pattern Classifier

This part presents the general construction idea of an intelligent ignition pattern classifier. Fig. 4 shows the four phases of building the classification system. The input of the first phase is the training data, which are the data points of the ignition patterns and their corresponding engine faults. The detailed operation in each phase is presented in the following subsections, and the training data acquisition for a case study is described in section 5.

Fig. 4. Four phases in building an intelligent ignition pattern classification system

4.1 Normalization
Ignition patterns can be captured by the automotive scope meter with a high sampling frequency (100 kHz). However, the number of sampling points of every captured pattern is not exactly the same due to engine speed fluctuation and various testing conditions, so all patterns are usually normalized. In the current study, the number of sampling points for every signal is less than 17 000. To be conservative, a standard length for all patterns is set to 18 000. Therefore all signals with fewer than 18 000 sampling points are appended with a series of steady-state values until reaching the maximum number (Fig. 5). As the maximum firing voltage is not usually over 15 kV, the amplitude of each pattern is also normalized within [0, 15].

Fig. 5. Normalization of sampling points for a pattern

4.2 Compression
The number of sampling points of a signal is equal to the number of inputs for SVM. The number of 18 000 is too excessive for SVM to train, or even to run, because the common implementation tool for PCs (such as MATLAB) can usually manipulate a matrix with a maximum size of 3 000 by 3 000. Therefore, the 18 000 inputs must be reduced to fewer than 3 000. The compression rate is a key factor: a high compression rate will result in information loss, while a low compression rate will not satisfy the requirement of the tools for PCs. The relative amount of energy in the low-frequency subband is a good indicator of compression tolerance [20]. The wavelet energy proportion is defined as

P_j = (Σ_{i=1}^{N/2^J} d_{j,i}^2) / (Σ_{j=1}^{2^J} Σ_{i=1}^{N/2^J} d_{j,i}^2),    (16)

where d_{j,i} is the value of the ith point of the jth subband in level J, N is the number of points of the original signal, and P_j is the wavelet energy proportion of the jth subband. In this research, the proportion of energy of the low-frequency subband is 1 in level-3 compression and about 0.974 in level-4 compression. So level-3 compression was carried out and the signals are compressed into 18 000/2^3 = 2 250 points, which is fewer than 3 000 and can be handled by the common implementation tools for PCs.

Usually the following steps are necessary:
(1) Obtain the WPT decomposition of the signal. The decomposition level and the choice of mother wavelet are left for user adjustment. Usually, a deeper decomposition


level can be chosen to produce an overcomplete wavelet packet tree.
(2) Use the function bestlevt mentioned above to compute the optimal wavelet packet decomposition.
(3) Soft thresholding [13] is applied to each of the detail coefficients D(i), except the approximation coefficients A(i).
(4) Reconstruct the signal based on the approximation coefficients A(i) and the soft-thresholded detail coefficients D(i) from step (3).

4.3 Feature extraction
As mentioned in section 1, ignition patterns are time-varying and non-stationary, i.e., every engine (of the same or different models) can produce ignition patterns of different lengths and amplitudes. To build a classification system with these time-varying patterns, it may be inappropriate to compare the ignition patterns directly. Instead, the important features of the ignition patterns can be considered for comparison. WPT provides a tool to decompose and extract the high- and low-frequency subbands of ignition patterns. These extracted frequency subbands (features) are used as training data.

4.4 Modeling
Finally, a modeling technique is employed to construct a classifier with all the normalized and preprocessed training data. Whenever new data arrive for classification in the future, they must go through the previous steps of normalization, compression, and feature extraction before classification.

5 Experimental Setup

5.1 Data collection
To verify the effectiveness of the proposed methodology, an experiment was set up for sample data acquisition and evaluation tests. To prepare the training and test data sets, different models of engines were used to imitate the nine well-known malfunctioning symptoms reflected by the ignition patterns, as shown in Table 1. The nine symptoms are selected as demonstration examples; potentially more symptoms can be captured and trained. In this case study, three well-known inline 4-cylinder 4-stroke engines, HONDA B18C, HONDA B16A, and MITSUBISHI 4G15, were employed to imitate the nine different malfunctioning symptoms. The reason why different engines are used for training is that the generalization of the classifier can be enhanced. The experimental setup for capturing the ignition patterns in the case study is shown in Fig. 6.

Fig. 6. Collection of ignition patterns from an engine using a computer-linked automotive scope meter

Table 1. Sample possible causes of engine trouble reflected by ignition patterns

Case No.  Symptom or possible cause
1  Normal
2  High resistance in spark plug cable
3  Broken spark plug cable
4  Defective spark plug
5  Narrow spark plug gap
6  Misfire due to extremely lean mixture
7  Carbon fouled in spark plug
8  Engine knock due to carbon deposits in combustion chambers
9  Rich mixture

To capture ignition patterns, firstly the sampling frequency of the scope meter is set to a high rate, say 100 kHz, i.e., 100 000 sampling points per second can be obtained, or equivalently, 100 sampling points per millisecond (ms).

For every symptom in each test engine, eight ignition patterns (two patterns for each cylinder) were captured over three different engine testing conditions according to the standard procedure in Reference [2] (1 200 r/min, 2 000 r/min, and sudden acceleration). The reason for capturing two patterns per cylinder is that a constant engine speed is difficult to hold during testing, and each cylinder has its own manufacturing error and inlet and exhaust flow characteristics as well. So the ignition pattern captured may not be repeatable even for the same testing condition and cylinder. Out of these eight ignition patterns, six were used for training, while the remaining two were used for testing. So, the whole data set Ds was divided into a training data set TRAIN (3/4 of Ds) and a test data set TEST (1/4 of Ds) for training and verifying the SVM classifier, respectively. In this case study, the total number of ignition patterns in TRAIN is equal to 9 symptoms × (2 patterns × 4 cylinders × 3/4) × 3 conditions × 3 engines = 486, whereas TEST contains 162 patterns (i.e., 9 symptoms × (2 patterns × 4 cylinders × 1/4) × 3 conditions × 3 engines).

5.2 Data normalization and preprocessing
First of all, a standard number of 18 000 sampling points was set, which can cover all the patterns. Moreover, some steady-state values were appended to the rear part of the patterns to standardize the length of all the ignition patterns, if necessary. Normally, the steady-state value for the ignition pattern is equal to zero (0 V). Hence, this procedure is equivalent to appending zeros until the number of 18 000 sampling points is reached.

In order to reduce the training and test time, all patterns in TRAIN and TEST were compressed to 1/8 of their length (2 250 points) by WPT. Usually, the mother wavelet is selected by trial and error. In the current study, the mother wavelet was selected as Haar. Haar is the simplest possible wavelet and is not continuous, so the compression ratio can be exactly set as 2^L, where L is the level number [13]. A level of 3 was firstly chosen and then bestlevt was used to calculate the best wavelet decomposition tree. The SURE entropy was selected for compression. The threshold for the current sampled patterns is calculated as [2 ln(n log2(n))]^{1/2} = 4.50 for n = 2 250. After compression, the compressed wavelet tree was then reconstructed.

WPT was employed again to perform feature extraction on the compressed patterns. In wavelet transform, the family of Daubechies wavelets (dbN, where N = 1 to 10) is the most popular. We tried levels L = 2, 3, and 4 for the dbN wavelets to determine which combination of N and L is the best. According to the test, the best combination was chosen as db6 and level 3. So, 2^3 = 8 subbands of wavelet coefficients are generated for the patterns. The sizes of all patterns are still the same, i.e., 2 250, except that the compressed patterns are transformed into feature vectors of wavelet coefficients. Finally, the processed data sets TRAIN and TEST could be passed to SVM for training.

5.3 SVM training
To construct the SVM classifier, the MATLAB toolbox LIBSVM [18] was employed. The kernel function K used in the SVM classifier was the RBF kernel. Moreover, Eq. (12) and Eq. (14) indicate that users have to adjust two hyperparameters (γ, C) to ensure the models perform well. In this situation, 10-fold cross-validation is usually applied to select the best values for the hyperparameters. In our case, the value of γ was taken from the range of -10 to 10 (except 0) with increment 1, and the value of C was taken from the range of 1 to 10 000 in decades (i.e., 1, 10, 100, 1 000, 10 000). There are in total 100 (20 × 5) combinations of these two hyperparameters. By testing all combinations, the best combination of (γ, C) (the one with the highest accuracy) was chosen as (1, 100).

5.4 MFN training
The MFN classifier was implemented in the MATLAB Neural Network Toolbox R2008a [19]. In this case study, an MFN with 2 250 input neurons, 4 output neurons (i.e., 4 bits for nine classes), and 25 hidden neurons was used. Normally, 25 hidden neurons can provide enough capability to approximate a highly nonlinear function. The learning rate was set to 0.1. The activation function used inside the hidden neurons was the tansig transfer function, followed by a purelin


filter for the output neurons. The back-propagation algorithm was used to train the model with the same training data TRAIN.
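The exhaustive (γ, C) search described in section 5.3 can be sketched as follows. Here cv_accuracy is a hypothetical stand-in for the 10-fold cross-validation score that LIBSVM-based training would return; it is contrived so that the toy search lands on the paper's reported optimum (1, 100):

```python
import itertools

# the paper's grid: gamma in -10..10 excluding 0 (20 values) and
# C over the decades 1..10 000 (5 values) -> 20 * 5 = 100 combinations
gammas = [g for g in range(-10, 11) if g != 0]
Cs = [10 ** e for e in range(5)]  # 1, 10, 100, 1000, 10000
assert len(gammas) * len(Cs) == 100

def cv_accuracy(gamma, C):
    """Placeholder score: in the real pipeline this would run 10-fold
    cross-validation with LIBSVM on TRAIN. The toy score below is an
    assumption, peaked at (1, 100) to match the paper's result."""
    return -abs(gamma - 1) - abs(C - 100) / 10000

# pick the combination with the highest (placeholder) accuracy:
best = max(itertools.product(gammas, Cs), key=lambda gc: cv_accuracy(*gc))
print(best)  # (1, 100)
```

Exhaustive search is affordable here because the grid is tiny (100 fits); for larger grids a coarse-to-fine search over (γ, C) is the usual refinement.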

6 Results

The classifiers for both the SVM and MFN approaches were run on a PC with a Core 2 Duo E6700 CPU and 2 GB of RAM. The classifiers are evaluated in terms of classification accuracy.

6.1 Evaluation
The performance of the SVM classifier can be determined by classification accuracy. The evaluation is simple because it just compares the calculated class of an input vector x with its given target class. Given a test set TEST of N cases, every test case t_i ∈ TEST, i = 1 to N, is passed to the SVM classifier. If the calculated class of t_i is not equal to its given target class, its corresponding error E_i is set to 1; otherwise E_i is set to 0. Finally, the sum of the errors E_i is divided by the total number of test cases N, and its complement gives the accuracy function as follows:

A = (1 - (1/N) Σ_{i=1}^{N} E_i) × 100%.    (17)
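Eq. (17) can be checked numerically. A short Python sketch (not part of the original experiments) reproduces the accuracy figures reported later in sections 6.2 and 6.3 from the stated error counts:

```python
def accuracy(errors, n):
    """Eq. (17): A = (1 - (1/N) * sum(E_i)) * 100%.
    `errors` is the list of per-case indicators E_i (1 = misclassified)."""
    return (1 - sum(errors) / n) * 100

# SVM: 7 of the 162 TEST cases misclassified
e_svm = [1] * 7 + [0] * (162 - 7)
print(round(accuracy(e_svm, 162), 2))  # 95.68

# MFN: 17 of the 162 TEST cases misclassified
e_mfn = [1] * 17 + [0] * (162 - 17)
print(round(accuracy(e_mfn, 162), 2))  # 89.51
```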


6.2 SVM results
The best combination of the hyperparameters (γ, C) was found to be (1, 100). The SVM training takes only 0.064 8 s. Applied to the N = 162 test cases of TEST and using Eq. (17), its accuracy over TEST is 95.68%. The confusion matrix for each class is shown in Table 2, where only 7 out of 162 signals are classified into wrong classes and most of the troubles are identified. The result clearly indicates the effectiveness and reliability of the WPT approach for extracting important features from ignition patterns. In addition, SVM also produces very good classification accuracy over TEST.

Table 2. Confusion matrix for SVM classification

(9 × 9 matrix of actual classes C1-C9 against predicted classes C1-C9; 155 of the 162 test cases lie on the diagonal)

Ci represent different classes and every class has 18 test cases



6.3 MFN results

The MFN model was tested using the same test data. Table 3 shows the confusion matrix of the MFN model, where 17 out of 162 signals are classified into wrong classes. Table 4 shows the comparison between SVM and MFN: the accuracy of the MFN model is 89.51%, which is lower than the 95.68% of the SVM model.

The issues of hyperparameters and training time were also compared. In SVM, two hyperparameters (g, C) are required to be estimated by the user (10-fold cross-validation was employed in this case study). In MFN, the learning rate and the number of hidden neurons are required to be supplied by the user. All of these can be selected using 10-fold cross-validation, but the generalization of SVM is better than that of MFN. Table 4 also shows that SVM runs much faster than MFN over both TRAIN and TEST.

Table 3. Confusion matrix for MFN classification

(Total 9×18 = 162 cases; rows give the actual classes C1-C9, columns the predicted classes C1-C9; 17 of the 162 signals lie off the diagonal.)

Ci represents different classes and every class has 18 test cases

Table 4. Performance of two different classifiers (SVM, MFN)

Method   Training time (over TRAIN)   Running time (over TEST)   Accuracy (over TEST)
SVM      0.064 8 s                    0.087 0 s                  95.68%
MFN      0.285 452 s                  0.290 578 s                89.51%

7 Conclusions

WPT and SVM have been successfully applied to automotive engine trouble diagnosis based on intelligent ignition pattern recognition. This is the first attempt at this automotive application.
(1) WPT has been demonstrated to be an effective tool for data compression and feature extraction from engine ignition patterns. The entropy-based criterion in WPT also shows its usefulness in automatically determining the best level of the wavelet coefficient tree.
(2) SVM has been applied to produce a reliable classification system, which can save automotive mechanics a lot of time and effort in reading ignition patterns and making a diagnosis.
(3) The experimental results show that SVM produces a high classification accuracy of 95.68% over the unseen test cases TEST. The training time for SVM is also very short, taking only 0.064 8 s. Both the accuracy and the running time of SVM are superior to those of MFN.
(4) For further development, more symptoms from different models of engines can be captured to provide a wider range of training data, so that the classifier can be more general.
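The entropy-based criterion used to select the best wavelet packet basis can be illustrated in miniature. The sketch below shows only the criterion itself, the (non-normalized) Shannon entropy of a coefficient vector as used in wavelet packet best-basis selection, where a lower entropy marks a sparser, more informative representation; it is not the paper's WPT code, and the two example coefficient vectors are fabricated for illustration.

```python
# Minimal sketch of the Shannon-entropy criterion for wavelet packet
# best-basis selection: the decomposition whose coefficients concentrate
# the signal energy (lower entropy) is preferred.
import numpy as np

def shannon_entropy(coeffs):
    """Non-normalized Shannon entropy: -sum(c^2 * log(c^2)), zeros skipped."""
    c2 = np.square(np.asarray(coeffs, dtype=float))
    c2 = c2[c2 > 0]
    return float(-np.sum(c2 * np.log(c2)))

# Two unit-energy coefficient vectors (illustrative only)
sparse = np.array([1.0, 0.0, 0.0, 0.0])   # energy in a single coefficient
spread = np.array([0.5, 0.5, 0.5, 0.5])   # same energy spread over four

print(shannon_entropy(sparse) < shannon_entropy(spread))  # → True
```

Applied node by node over the wavelet packet tree, this comparison is what automatically determines the best decomposition level referred to in conclusion (1).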


Biographical notes

VONG Chiman is an assistant professor at the Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau, China. He received his PhD degree in software engineering from the University of Macau, Macau, China, in 2005. His research interests include support vector machines, case-based reasoning, and engineering applications of artificial intelligence.
Tel: +85383974357; Email: [email protected]

WONG Pakkin is currently the head of the Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China. He received his PhD degree in mechanical engineering from the Hong Kong Polytechnic University, Hong Kong, China, in 1997. He is also a vice-president (board of directors) of the Association for Promotion of Science and Technology of Macau, China. His research interests include automotive engineering, fluid transmission and control, and engineering applications of artificial intelligence.
Tel: +85383974956; Email: [email protected]

TAM Lapmou is a full professor at the Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China. He received his PhD degree in mechanical engineering from Oklahoma State University, USA, in 1995. He is also the president of the Institute for the Development and Quality, Macau, China. His research interests include heat transfer, chaos, and energy saving.
Tel: +85383974457; Email: [email protected]

ZHANG Zaiyong is a master candidate at the Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Macau, China. His research interest is automotive engine diagnosis.
Tel: +85362112738; Email: [email protected]

