Title Neural Network Signal Classification for CTBTO Hydroacoustic Applications
Authors Mark Prior, Paul Dysart, Mark Lockwood, David Salzberg
Conference EGU General Assembly 2010
Media type Article
Language English
Digital document PDF
Published In: GRA - Volume 12 (2010)
Record number 250034909
 
Abstract
A process by which neural networks may be developed to classify signals is described. The signals in question are those recorded on hydrophone sensors in the hydroacoustic network of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO). Signals are required to be classified as H-phase (arising from an in-water explosion), T-phase (arising from an earthquake) or N-phase (a noise signal). This classification must be made purely on the basis of the signal waveform properties, as summarised by a series of parameters referred to as “hydro features”. The hydro features quantify temporal, energy and cepstral properties of the signal in a series of preset frequency bands and are present only for those bands in which a detection is made and associated with the signal. The neural network classification proceeds in a two-stage manner, first splitting signals into T/Not-T classifications and then splitting the Not-T class into H and N classes. The parameters used in these two stages are independent and are determined by the physical differences between the sources of the different signals. Hydro features to be used in the neural networks are selected on the basis of user-supplied thresholds for Mahalanobis distance and inter-variable covariance. The number of hidden-layer neurons is selected on the basis of a simple principal component analysis of the selected parameters. Since there is no a priori method for selecting the Mahalanobis distance and inter-variable covariance thresholds, a grid search is performed to identify optimum values. Optimisation in this context is based on maximising a measure of performance (MOP) equal to the difference between the probability of correct classification and the probability of false classification. Candidate networks are built using a training dataset of signals, but their MOP is calculated using an independent testing dataset. When optimum threshold settings have been identified in this way, the resulting network is further analysed so that the reasons behind its predictions can be better understood. This effort is made in an attempt to mitigate one of the supposed disadvantages of neural networks, i.e. that they provide “black box” solutions in which classifications arrive with no supporting information as to the factors that influenced the classification process. Network analysis is carried out by investigating how the network output varies when random, uncorrelated variables are input. Correlation between the inputs, the network output and the hidden-layer neuron outputs reveals the connectivity of the network and highlights which variables strongly control signal classification and which provide only marginal benefit. It is shown that, by investigating the relative importance of the variables input to the “optimum” network, it is possible to produce reduced networks that replicate its performance with a smaller number of inputs and a greater degree of transparency. This transparency is expressed in terms of a series of simple rules which describe the essence of the network performance. The views expressed are those of the authors and do not necessarily reflect the view of the CTBTO Preparatory Commission.
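
The abstract gives no implementation, but one stage of the procedure it outlines (threshold-based hydro-feature selection, hidden-layer sizing from a simple PCA, and a grid search over the Mahalanobis-distance and covariance thresholds that maximises the MOP on an independent test set) can be sketched roughly as follows. This is a minimal illustration assuming Python with scikit-learn; the synthetic data, the function names and the 90% PCA-variance criterion are assumptions, not the authors' code, and the full system applies the same procedure twice, first for T/Not-T and then for H/N.

```python
# Minimal sketch of one classification stage of the threshold grid search
# described above. Synthetic data and all helper names are illustrative
# assumptions, not the authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for "hydro features" of two classes (e.g. T vs Not-T).
n_feat = 12
X_train = rng.normal(size=(400, n_feat))
y_train = rng.integers(0, 2, 400)
X_test = rng.normal(size=(200, n_feat))
y_test = rng.integers(0, 2, 200)
X_train[y_train == 1, :4] += 1.5   # give a few features genuine class separation
X_test[y_test == 1, :4] += 1.5

def select_features(X, y, d_thresh, c_thresh):
    """Keep features whose per-feature Mahalanobis separation of the class
    means exceeds d_thresh, skipping any feature correlated above c_thresh
    with one already kept (an assumed reading of the selection rule)."""
    separations = []
    for j in range(X.shape[1]):
        d = abs(X[y == 0, j].mean() - X[y == 1, j].mean()) / X[:, j].std()
        separations.append((d, j))
    kept = []
    for d, j in sorted(separations, reverse=True):
        if d < d_thresh:
            break
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < c_thresh for k in kept):
            kept.append(j)
    return kept

def measure_of_performance(net, X, y, kept):
    """MOP = P(correct classification) - P(false classification)."""
    p_correct = np.mean(net.predict(X[:, kept]) == y)
    return p_correct - (1.0 - p_correct)

# Grid search over the two user-supplied thresholds; candidate networks are
# trained on the training set and scored on the independent test set.
best_mop, best_setting = -np.inf, None
for d_thresh in (0.2, 0.5, 1.0):
    for c_thresh in (0.5, 0.8, 0.95):
        kept = select_features(X_train, y_train, d_thresh, c_thresh)
        if not kept:
            continue
        # Hidden-layer size from a simple PCA of the selected features
        # (here: number of components explaining 90% of the variance).
        n_hidden = PCA(n_components=0.90).fit(X_train[:, kept]).n_components_
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                            random_state=0).fit(X_train[:, kept], y_train)
        mop = measure_of_performance(net, X_test, y_test, kept)
        if mop > best_mop:
            best_mop, best_setting = mop, (d_thresh, c_thresh, kept)

print("best MOP %.2f at thresholds %s with features %s"
      % (best_mop, best_setting[:2], best_setting[2]))
```

The subsequent network analysis described in the abstract (feeding random, uncorrelated inputs and correlating them with the hidden-layer and output responses to find the dominant variables) would operate on the winning network from a search of this kind.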