Article

Semantically Controlled Adaptive Equalisation in Reduced Dimensionality Parameter Space †

Digital Media Technology Lab, Birmingham City University, Birmingham B42 2SU, UK
* Authors to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 18th International Conference on Digital Audio Effects, Trondheim, Norway, 30 November–3 December 2015.
Appl. Sci. 2016, 6(4), 116; https://doi.org/10.3390/app6040116
Submission received: 24 February 2016 / Revised: 4 April 2016 / Accepted: 5 April 2016 / Published: 20 April 2016
(This article belongs to the Special Issue Audio Signal Processing)

Abstract

Equalisation is one of the most commonly-used tools in sound production, allowing users to control the gains of different frequency components in an audio signal. In this paper we present a model for mapping a set of equalisation parameters to a reduced dimensionality space. The purpose of this approach is to allow a user to interact with the system in an intuitive way through both the reduction of the number of parameters and the elimination of technical knowledge required to creatively equalise the input audio. The proposed model represents 13 equaliser parameters on a two-dimensional plane, which is trained with data extracted from a semantic equalisation plug-in, using the timbral adjectives warm and bright. We also include a parameter weighting stage in order to scale the input parameters to spectral features of the audio signal, making the system adaptive. To maximise the efficacy of the model, we evaluate a variety of dimensionality reduction and regression techniques, assessing the performance of both parameter reconstruction and structural preservation in low-dimensional space. After selecting an appropriate model based on the evaluation criteria, we conclude by subjectively evaluating the system using listening tests.

1. Introduction

Equalisation, as described in [1], is an integral part of the music production workflow, with applications in live sound engineering, recording, music production, and mastering, in which multiple frequency-dependent gains are imposed upon an audio signal. The process of equalisation can generally be categorised under one of two headings, as described in [2]: corrective equalisation, in which problematic frequencies are attenuated in order to prevent issues such as acoustic feedback, and creative equalisation, in which the audio spectrum is modified to achieve a desired timbral aesthetic. Whilst the former is primarily based on adapting the effect parameters to changes in the audio signal, the latter often involves a process of translation between a perceived timbral adjective such as bright, flat, or sibilant and an audio effect input space, by which a music producer must reappropriate a perceptual representation of a timbral transformation as a configuration of multiple parameters in an audio processing module. As music production is an inherently technical process, this mapping procedure is not necessarily trivial, and it is made more complex by the source-dependent nature of the task.

2. Background

2.1. Semantically-Controlled Audio Effects

Engineers and producers generally use a wide variety of timbral adjectives to describe sound, each with varying levels of agreement. By modelling these adjectives, we are able to provide perceptually meaningful abstractions, which lead to a deeper understanding of musical timbre and systems that facilitate the process of audio manipulation. The extent to which timbral adjectives can be accurately modelled is defined by the level of exhibited agreement, a concept investigated in [3], in which terms such as bright, resonant, and harsh all exhibit strong agreement scores, and terms such as open, hard, and heavy all show low subjective agreement scores. It is common for timbral descriptors to be represented in low-dimensional space; brightness, for example, is shown to exhibit a strong correlation with spectral centroid [4,5] and has further dependency on the fundamental frequency of the signal [6]. Similarly, studies such as [7,8] demonstrate the ability to reduce complex data to lower-dimensional spaces using dimensionality reduction.
Recent studies have also focused on modification of the audio signal using specific timbral adjectives, where techniques such as spectral morphing [9] and additive synthesis [10] have been applied. For the purposes of equalisation, timbral modification has also been implemented via psychoacoustic measurements such as loudness [11], spectral masking [12], and semantically-meaningful controls and intuitive parameter spaces. SocialEQ [13], for example, collects timbral adjective data via a web interface and approximates the configuration of a graphic equaliser curve using multiple linear regression. Similarly, subjEQt [14] provides a two-dimensional interface, created using a Self-Organising Map, in which users can navigate between presets such as boomy, warm, and edgy using natural neighbour interpolation. This is a similar model to 2DEQ [15], in which timbral descriptors are projected onto a two-dimensional space using Principal Component Analysis (PCA). The Semantic Audio Feature Extraction (SAFE) project provides a similar non-parametric interface for semantically controlling a suite of audio plug-ins, in which semantics data is collected within a given Digital Audio Workstation (DAW). Adaptive presets can then be selectively derived based on audio features, parameter data, and music production metadata.

2.2. Aims

In this study, we propose a system that projects the controls of a parametric equaliser comprising five biquad filters, as detailed in [16], arranged in series onto an editable two-dimensional space, allowing the user to manipulate the timbre of an audio signal using an intuitive interface. Whilst the axes of the two-dimensional space are somewhat arbitrary, underlying timbral characteristics are projected onto the space via a training stage using two-term musical semantics data. In addition to this, we propose a signal processing method of adapting the parameter modulation process to the incoming audio data based on feature extraction applied to the long-term average spectrum (LTAS), as detailed in [17,18,19], capable of running in near-real-time. The model is implemented using the SAFE architecture (detailed in [20]), and is provided as an extension of the current Semantic Audio Parametric Equaliser (available for download at [21]), as shown in Figure 1a.

3. Methodology

In order to model the desired relationship between the two parameter spaces, a number of problems must be addressed. Firstly, the data reduction process should account for maximal variance in high-dimensional space without bias towards a smaller subset of the equaliser parameters. Similarly, we should be able to map to the high-dimensional space with minimal reconstruction error, given a new set of (x, y) coordinates. This process of mapping between spaces is nontrivial, due to loss of information in the reconstruction process. Furthermore, the low-dimensional parameter space should be configured in a way that preserves an underlying timbral characteristic in the data, thus allowing a user to transform the incoming audio signal in a musically meaningful way. Finally, the process of parameter space modification should not be agnostic of the incoming audio signal, meaning that any mapping between the two-dimensional plane and the equaliser parameters should be expressed as a function of the (x, y) coordinates and some representation of the signal spectral energy. In addition to this, the system should be capable of running in near-real time, enabling its use in a DAW environment.
To address these problems, we develop a model that consists of two phases. The first is a training phase, in which a map is derived from a corpus of semantically-labelled parameter data, and the second is an implementation phase, in which a user can present (x, y) coordinates and an audio spectrum, resulting in a 13-dimensional vector of parameter state variables. To optimise the mapping process, we experiment with a combination of 6 dimensionality reduction techniques and 5 reconstruction methods, followed by a stacked-autoencoder (sAE) model that encapsulates both the dimensionality reduction and reconstruction processes. The techniques were chosen to represent a variable range of complexity and nonlinearity, and were intended to provide a selection of possible solutions to the problem, of which the highest-performing would be used for implementation. With the intention of scaling the parameters to the incoming audio signal, we derive a series of weights based on a selection of features extracted from the signal's LTAS coefficients. To evaluate the model performance under a range of conditions, we train it with binary musical semantics data and measure both objective and subjective performance based on the reconstruction of the input space and the structural preservation in reduced dimensionality space.

3.1. Dataset

For the training of the model, we compile a dataset of 800 semantically-annotated equaliser parameter settings, comprising 40 participants equalising 10 musical instrument samples using two descriptive terms: warm and bright. To do this, participants were presented with the musical instrument samples in a DAW and asked to use a parametric equaliser to achieve the two timbral settings. After each setting was achieved, the data were recorded and the equaliser was reset to unity gain. During the test, samples were presented to the participants in a random order across separate DAW channels. Furthermore, the musical instrument samples were all performed unaccompanied, were Root Mean Square (RMS) normalised, and ranged from 20 to 30 s in length. All of the participants had normal hearing, were aged 18–40, and had at least 3 years' music production experience.
The descriptive terms (warm and bright) were selected for a number of reasons. Firstly, the agreement levels exhibited by participants tend to be high (as suggested by [3]), meaning there should be less intra-class variance when subjectively assigning parameter settings. When measured using an agreement metric, defined by [13] as the log number of terms over the trace of the covariance matrix, warm and bright were the two highest-ranked terms in a dataset of 210 unique adjectives. Secondly, the two terms are deemed sufficiently different to form an audible timbral variation in low-dimensional space. While the two terms do not necessarily exhibit orthogonality (for example, brightness can be modified with constant warmth [9]), they have relatively dissimilar timbral profiles, with brightness widely accepted to be highly correlated with the signal's spectral centroid, and warmth often attributed to the ratio of the first three harmonics to the remaining harmonic partials in the magnitude spectrum [22].
The parameter settings were collected using a modified build of the SAFE data collection architecture, in which descriptive terms, audio feature data, parameter data, and metadata can be collected remotely within the DAW environment and uploaded to a server. As illustrated in Figure 1b, the SAFE architecture allows for the capture of audio feature data before and after processing has been applied. Similarly, the interface parameters P are captured and stored in a linked database. For the purpose of this experiment, the architecture was modified by adding the functionality to capture LTAS coefficients, with a window size of 1024 samples and a hop size of 256.
While the SAFE project comprises a number of DAW plug-ins, we focus solely on the parametric equaliser, which utilises five biquad filters arranged in series: a low-shelving filter (LS), three peaking filters (Pf0, Pf1, Pf2), and a high-shelving filter (HS), where the LS and HS filters each have two parameters and the peaking filters each have three, as described in Table 1.

3.2. Evaluation Criteria

To evaluate the model under various conditions and to select an appropriate mapping topology, we apply objective metrics to the data during the dimensionality reduction and reconstruction processes. These allow us to evaluate the extent to which (1) the dimensionality reduction technique retains the structure of the high-dimensional data (trustworthiness, continuity, K-Nearest Neighbours (K-NN)), (2) the classes are separable in low-dimensional space (Jeffries–Matusita Distance), and (3) the system accurately reconstructs the high-dimensional parameter space (reconstruction error).

3.2.1. Trustworthiness and Continuity

To evaluate the structural preservation of each technique, the metrics trustworthiness and continuity [23] are applied to the dataset. Here, the distance of point i in high-dimensional space is measured against its k closest neighbours using rank order, and the extent to which each rank changes in low-dimensional space is measured. For n samples, let r(i, j) be the rank in distance of sample i to sample j in the high-dimensional space, and let U_i(k) be the set of samples that lie among the k nearest neighbours of sample i in the low-dimensional space, but not in the high-dimensional space. Similarly, let r̂(i, j) be the rank of the distance between sample i and sample j in the low-dimensional space, with V_i(k) defined analogously for the high-dimensional neighbourhood. Using the k nearest neighbours, the map is considered trustworthy if these k neighbours are also placed close to point i in the low-dimensional space, as shown in Equation (1).
$$T(k) = 1 - \frac{2}{nk(2n - 3k - 1)} \sum_{i=1}^{n} \sum_{j \in U_i(k)} \big( r(i,j) - k \big) \qquad (1)$$
Similarly, continuity (shown in Equation (2)) measures the extent to which original clusters of datapoints are preserved, and can be considered the inverse of trustworthiness, finding sample points that are close to point i in the high-dimensional space, but not in the low-dimensional plane.
$$C(k) = 1 - \frac{2}{nk(2n - 3k - 1)} \sum_{i=1}^{n} \sum_{j \in V_i(k)} \big( \hat{r}(i,j) - k \big) \qquad (2)$$
In both of these equations, a normalising factor is used to bound the trustworthiness and continuity scores between 0 and 1. These measures evaluate the extent to which the local structure of the original dataset is preserved in a low-dimensional map; this is described in [24], where it is shown that the local structure of the dataset needs to be retained for a successful map of the datapoints.
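To make the computation concrete, the following is a minimal numpy sketch of Equations (1) and (2), assuming X_high is an (n × 13) matrix of scaled parameter settings and X_low its (n × 2) embedding; all function and variable names are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rank_matrix(X):
    """Rank of sample j in the distance ordering of sample i (0 = itself)."""
    D = squareform(pdist(X))
    # argsort twice turns each row of distances into a row of ranks
    return np.argsort(np.argsort(D, axis=1), axis=1)

def trustworthiness_continuity(X_high, X_low, k):
    n = len(X_high)
    r_high, r_low = rank_matrix(X_high), rank_matrix(X_low)
    norm = 2.0 / (n * k * (2 * n - 3 * k - 1))
    t_penalty = c_penalty = 0.0
    for i in range(n):
        # k nearest neighbours of i in each space (excluding i itself)
        nn_high = set(np.where((r_high[i] > 0) & (r_high[i] <= k))[0])
        nn_low = set(np.where((r_low[i] > 0) & (r_low[i] <= k))[0])
        # U_i(k): neighbours in the low-dimensional map only
        for j in nn_low - nn_high:
            t_penalty += r_high[i, j] - k
        # V_i(k): neighbours in the high-dimensional space only
        for j in nn_high - nn_low:
            c_penalty += r_low[i, j] - k
    return 1 - norm * t_penalty, 1 - norm * c_penalty
```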

3.2.2. K-NN

In order to measure the similarities in inter-class structures within the high and low dimensional space, we apply a K-NN classifier with k = 1 , as described in [25], and then measure the differences in classification accuracies. The nearest neighbours are found using Euclidean distances with 13 and 2 dimensions, respectively. The accuracies are derived using K-fold cross validation with K = 20, where 20% of the data is partitioned for testing. This allows us to measure the extent to which the between-class structures have been preserved in the reduction process, and effectively acts as a supervised structural preservation metric.
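As an illustration, this evaluation could be reproduced with scikit-learn along the following lines; the use of ShuffleSplit (20 repetitions with a 20% test partition) is an assumption made to match the partition size stated in the text, and `labels` is a hypothetical array of warm/bright annotations.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

def knn_accuracy(X, labels, n_splits=20, test_size=0.2):
    # Euclidean 1-NN, evaluated over repeated random 80/20 partitions
    cv = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=0)
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, labels, cv=cv)
    return scores.mean()

# acc_high = knn_accuracy(X_high, labels)  # 13-dimensional space
# acc_low = knn_accuracy(X_low, labels)    # 2-dimensional embedding
```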

3.2.3. Jeffries–Matusita Distance

In order to evaluate the extent to which timbral descriptors lie at opposing ends of the mapped parameter space, we can measure the extent to which the timbre classes are separable using a distance metric. Typically, this can be done by finding the divergence between class distributions using a technique such as Kullback–Leibler Divergence (KLD), as we proposed in [26]; however, as explained in [27], KLD is asymmetric and therefore does not satisfy the triangle inequality. While two-sided KLD addresses this, as explained in [28], [29] proposes the Jeffries–Matusita Distance (JMD) as a more appropriate alternative. JMD (shown in Equation (4)) is a metric derived from the Bhattacharyya (BH) distance, given in Equation (3), and bounds the output of the measure between 0 (no separability) and 2 (perfect separability).
$$BH_{i,j} = \frac{1}{8}(m_i - m_j)^T \left( \frac{S_i + S_j}{2} \right)^{-1} (m_i - m_j) + \frac{1}{2} \ln \left( \frac{\left| \frac{1}{2}(S_i + S_j) \right|}{\sqrt{|S_i||S_j|}} \right) \qquad (3)$$
$$JMD_{i,j} = 2 \left( 1 - e^{-BH_{i,j}} \right) \qquad (4)$$
Here, m and S represent the mean vectors and covariance matrices of classes i and j, respectively.
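A minimal sketch of Equations (3) and (4) follows, assuming Xw and Xb are hypothetical arrays holding the embedded warm and bright settings; determinants of the 2 × 2 covariance matrices are used directly.

```python
import numpy as np

def jmd(Xw, Xb):
    mi, mj = Xw.mean(axis=0), Xb.mean(axis=0)
    Si, Sj = np.cov(Xw.T), np.cov(Xb.T)
    S = 0.5 * (Si + Sj)
    d = mi - mj
    # Bhattacharyya distance between two Gaussian class models
    bh = (d @ np.linalg.inv(S) @ d) / 8.0 \
         + 0.5 * np.log(np.linalg.det(S)
                        / np.sqrt(np.linalg.det(Si) * np.linalg.det(Sj)))
    return 2.0 * (1.0 - np.exp(-bh))  # 0 = no separability, 2 = perfect
```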

3.2.4. Reconstruction Error

To measure the reconstruction accuracy (low-to-high-dimensionality mapping) of the model, we measure the input/output error for each pair-wise combination of dimensionality reduction and reconstruction techniques by computing the mean absolute error between predicted and actual parameter values. This is done using K-fold cross validation with k = 20 iterations, and a test partition size of 20% (160 training examples). As some of the dimensionality reduction techniques are unable to embed new information into the reduced-dimensionality space, the first part of the test process (i.e., the prediction of new low-dimensional values as implemented in [26]) was omitted, and only regression and interpolation techniques were evaluated.
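The protocol can be summarised with the sketch below, in which `fit_and_reconstruct` is a hypothetical stand-in for one pairing of dimensionality reduction and reconstruction techniques, fitted on the training partition and applied to the test partition.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

def reconstruction_error(P, fit_and_reconstruct, n_iter=20, test_size=0.2):
    """Mean absolute error between actual and reconstructed parameters."""
    errors = []
    cv = ShuffleSplit(n_splits=n_iter, test_size=test_size, random_state=0)
    for train_idx, test_idx in cv.split(P):
        P_hat = fit_and_reconstruct(P[train_idx], P[test_idx])
        errors.append(np.mean(np.abs(P[test_idx] - P_hat)))
    return np.mean(errors)
```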

3.3. Subjective Evaluation

Using the metrics defined in Section 3.2, we are able to select an appropriate model which is capable of accurately reducing the dataset while preserving the data structure and accurately reconstructing the input parameters with minimal error. To validate this, we implement subjective user tests in which participants are asked to equalise a series of audio samples using the reduced-dimensionality interface. To do this, 10 participants were asked to apply the process to 10 input sounds using only the two-dimensional interface. Each participant was asked to achieve a warm or bright output sound for each stimulus. During the test, samples were presented to participants in a random order across separate DAW channels, and the equaliser parameters remained hidden. No indication was given as to the underlying distribution of datapoints. The stimuli comprised unaccompanied musical instrument samples, primarily electric guitars across a variety of genres, taken from the Mixing Secrets Multitrack Audio Dataset [30], and ranged from 20 to 30 s in length. All of the participants had normal hearing, were aged 18–35, and had varied music production experience, ranging from 0 to 5 years.

4. Model

The proposed system maps between the equaliser parameter space, consisting of 13 filter parameters, and a two-dimensional plane, while preserving the context-dependent nature of the audio effect. After an initial training phase, the user can submit (x, y) coordinates to the system using a track-pad interface, resulting in a timbral modification via the corresponding filter parameters. To demonstrate this, we train the model with two-class (bright, warm) musical semantics data taken from the SAFE equaliser database, thus resulting in an underlying transition between opposing timbral descriptors in two-dimensional space. By training the model in this manner, we intend to retain the high-dimensional structure of the dataset in the two-dimensional space while minimising the reconstruction error inherent to dimensionality reduction methods.
The model (illustrated in Figure 2) has two key operations. The first involves weighting the parameters by computing a weight vector α(A) from the long-term spectral energy (A) of the input signal. We can then modify the parameter vector (P) to obtain a weighted vector (P′). The second component scales the dimensionality of P′, resulting in a compact, audio-dependent representation. During the model implementation phase, we apply an unweighting procedure based on the (x, y) coordinates and the modified spectrum of the signal. This is done by multiplying the estimated parameters with the inverse weight vector, resulting in an approximation of the original parameters. In addition to the weighting and dimensionality reduction stages, a scale-normalisation procedure is applied, aiming to convert the range of each parameter (given in Table 1) to 0 < p_n < 1. This converts the data into a suitable format for dimensionality reduction.

4.1. Parameter Scaling

As the configuration of the filter parameters assigned to each descriptor by the user during equalisation is likely to vary based on the audio signal being processed, the first requirement of the model is to apply weights to the parameters based on knowledge of the audio data at the time of processing. To do this, we selectively extract features from the signal's LTAS before and after the filter is applied. This is possible due to the configuration of the data collection architecture, highlighted in Figure 1b. The weights (α_m) can then be expressed as a function of the LTAS, where the function's definition varies based on the parameter representation (i.e., gain, centre frequency, or bandwidth of the corresponding filter). We use the LTAS to prevent the parameters from adapting each time a new frame is read. In practice, we are able to do this by presenting users with means to store the audio data, rather than continually extracting it from the audio stream. Each weighting is defined as the ratio between a spectral feature taken from the filtered audio signal (A′_k) and the same feature taken from the signal filtered by an enclosing rectangular window (R_k). Here, the rectangular window is bounded by the minimum and maximum frequency values attainable by the observed filter f_k(A).
We can define the equaliser as an array of biquad functions arranged in series, as depicted in Equation (5).
$$f_k = f_{k-1}(A, P_{k-1}), \quad k = 1, \ldots, K - 1 \qquad (5)$$
Here, K = 5 represents the number of filters used by the equaliser and f_k represents the k-th biquad function, which we can define by its transfer function, given in Equation (6).
$$H_k(z) = c \cdot \frac{1 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}} \qquad (6)$$
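As an aside, the magnitude response of Equation (6) can be evaluated on the LTAS bin frequencies with scipy, anticipating the spectral weighting of Equation (7); this is a sketch under the assumption that the LTAS bins are uniformly spaced from 0 Hz to the Nyquist frequency.

```python
import numpy as np
from scipy.signal import freqz

def apply_biquad_to_ltas(ltas, b1, b2, a1, a2, c=1.0):
    """Weight the LTAS coefficients by |H_k(e^jw)| of one series biquad."""
    b = c * np.array([1.0, b1, b2])
    a = np.array([1.0, a1, a2])
    # One frequency point per LTAS bin, from 0 to the Nyquist frequency
    _, H = freqz(b, a, worN=len(ltas))
    return np.abs(H) * ltas
```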
The LTAS is then modified by the filter as in Equation (7) and the weighted parameter vector can be derived using the function expressed in Equation (8).
$$A'_k = \left| H_k(e^{j\omega}) \right| A_k \qquad (7)$$
$$p'_n = \alpha_m(k) \cdot p_n \qquad (8)$$
where p_n is the n-th parameter in the vector P. The weighting function is then defined by the parameter type (m), where m = 0 represents gain, m = 1 represents centre frequency, and m = 2 represents bandwidth. For gain parameters, the weights are expressed as a ratio of the spectral energy in the filtered spectrum (A′_k) to the spectral energy in the enclosing rectangular window (R_k), derived in Equation (9) and illustrated in Figure 3.
$$\alpha_0(k) = \frac{\sum_i (A'_k)_i}{\sum_i (R_k)_i} \qquad (9)$$
For frequency parameters (m = 1), the weights are expressed as a ratio of the respective spectral centroids of A′_k and R_k, as demonstrated in Equation (10), where bn_i denotes the corresponding frequency-bin centres.
$$\alpha_1(k) = \frac{\sum_i (A'_k)_i \, bn_i}{\sum_i (A'_k)_i} \bigg/ \frac{\sum_i (R_k)_i \, bn_i}{\sum_i (R_k)_i} \qquad (10)$$
Finally, the weights for bandwidth parameters (m = 2) are defined as the ratio of the spectral spread exhibited by A′_k and R_k. This is demonstrated in Equation (11), where (x)_{sc} represents the spectral centroid of x.
$$\alpha_2(k) = \frac{\sum_i \left( bn_i - (A'_k)_{sc} \right)^2 (A'_k)_i}{\sum_i (A'_k)_i} \bigg/ \frac{\sum_i \left( bn_i - (R_k)_{sc} \right)^2 (R_k)_i}{\sum_i (R_k)_i} \qquad (11)$$
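A compact numpy sketch of the three weighting functions in Equations (9)–(11) is given below, assuming `A` holds the biquad-filtered LTAS (A′_k), `R` the LTAS filtered by the enclosing rectangular window (R_k), and `bins` the bin centre frequencies; names are illustrative.

```python
import numpy as np

def centroid(spec, bins):
    """Spectral centroid of a magnitude spectrum."""
    return np.sum(spec * bins) / np.sum(spec)

def spread(spec, bins):
    """Spectral spread about the centroid, as in Equation (11)."""
    return np.sum((bins - centroid(spec, bins)) ** 2 * spec) / np.sum(spec)

def alpha_gain(A, R):             # m = 0: ratio of spectral energies
    return np.sum(A) / np.sum(R)

def alpha_freq(A, R, bins):       # m = 1: ratio of spectral centroids
    return centroid(A, bins) / centroid(R, bins)

def alpha_bandwidth(A, R, bins):  # m = 2: ratio of spectral spreads
    return spread(A, bins) / spread(R, bins)
```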
During the implementation phase, retrieval of the unweighted parameters, given a weighted vector, can be achieved by simply multiplying the weighted parameters with the inverse weights vector, as in Equation (12).
$$\hat{p}_n = \alpha_m^{-1}(k) \cdot p'_n \qquad (12)$$
where p̂_n is a reconstructed version of p_n, after dimensionality reduction has been applied.
To ensure the parameters are in a consistent format for each of the dimensionality scaling algorithms, a scale normalisation procedure is applied using Equation (13). During training, p_min and p_max represent the minimum and maximum values of each parameter (given in Table 1), and q_min and q_max represent 0 and 1. During implementation, these values are exchanged, such that q_min and q_max represent the minimum and maximum values of each parameter, and p_min and p_max represent 0 and 1.
$$\rho_n = \frac{(p_n - p_{min})(q_{max} - q_{min})}{p_{max} - p_{min}} + q_{min} \qquad (13)$$
Additionally, a sorting algorithm was used to place the three mid-band filters in ascending order of centre frequency. This prevents normalisation errors arising from their differing frequency ranges, while still allowing filters to be rearranged by the user.
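A minimal sketch of Equation (13) and the mid-band sort follows, assuming each 13-element parameter row is laid out as in Table 1, with the peaking-filter (gain, frequency, Q) triples at indices 2–4, 5–7, and 8–10.

```python
import numpy as np

def rescale(p, p_min, p_max, q_min=0.0, q_max=1.0):
    # Training direction maps each parameter range onto [0, 1]; the
    # implementation phase swaps the (p, q) roles to invert the map.
    return (p - p_min) * (q_max - q_min) / (p_max - p_min) + q_min

def sort_mid_bands(row):
    # Reorder the three peaking filters by ascending centre frequency
    triples = [row[i:i + 3].copy() for i in (2, 5, 8)]
    triples.sort(key=lambda t: t[1])  # t[1] is the centre frequency
    row[2:11] = np.concatenate(triples)
    return row
```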

4.2. Parameter Space Mapping

Once the filters have been weighted by the audio signal, the mapping from 13 equaliser variables to a two-dimensional subspace can be accomplished using a range of dimensionality reduction techniques. In this study, we expand on [26] and evaluate the performance of six dimensionality reduction techniques, using the algorithms available as part of the dimensionality reduction toolbox in [31]. In addition to this, parameter space mapping is evaluated by measuring the quality of reduction with rank-based measures and nearest neighbour classification algorithms. In dimensionality reduction, the reconstruction process is often neglected due to the nature of typical tasks (e.g., feature optimisation, data reduction). We evaluate the efficacy of two regression-based techniques and three interpolation techniques at mapping two-dimensional interface variables to a vector of equaliser parameters. This is done by approximating functions using the weighted parameter data and measuring the reconstruction error. Finally, we evaluate an sAE model of data reduction, in which the parameter space is both reduced and reconstructed in the same algorithm; we are then able to isolate the reconstruction (decoder) stage for the implementation process.
Dimensionality reduction is implemented using the following techniques: PCA, a widely-used method of embedding data into a linear subspace of reduced dimensionality by finding the eigenvectors of the covariance matrix, originally proposed by [32]; Kernel PCA (kPCA), a non-linear manifold mapping technique in which the eigenvectors are computed from a kernel matrix as opposed to the covariance matrix, as defined by [33]; probabilistic PCA (pPCA), a method that considers standard PCA as a latent variable model and makes use of an Expectation Maximisation (EM) algorithm, a method for finding the maximum-likelihood estimate of the parameters of an underlying distribution from a given dataset with unobserved latent variables [34], as described in [35]; Factor Analysis (FA), a statistical analysis technique that identifies the relationship between different variables of a dataset and groups those variables by the correlation of the underlying factors [36]; Diffusion Maps (DM), a technique inspired by the field of dynamical systems, reducing the dimensionality of data by embedding the original dataset in a low-dimensional space through the eigenvectors of Markov random walks [37]; and Linear Discriminant Analysis (LDA), a supervised projection technique that maps to a linear subspace while maximising the separability between data points that belong to different classes (see [38]). As LDA projects the data points onto the dimensions that maximise inter-class variance for C classes, the dimensionality of the subspace is set to C − 1. This means that in a binary classification problem such as ours, we need to reconstruct the second dimension arbitrarily. For each of the other algorithms, we select the first two variables for mapping, and for the kPCA algorithm, the feature distances are computed using a Gaussian kernel. A sketch of several of these projections is given below.
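For illustration, some of these projections can be reproduced with scikit-learn rather than the Matlab toolbox of [31]; pPCA and DM have no direct scikit-learn equivalents and are omitted here, and `P_weighted` and `labels` are hypothetical arrays of scaled parameters and class annotations.

```python
from sklearn.decomposition import PCA, KernelPCA, FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

reducers = {
    'PCA': PCA(n_components=2),
    'kPCA': KernelPCA(n_components=2, kernel='rbf'),  # Gaussian kernel
    'FA': FactorAnalysis(n_components=2),
}

# Unsupervised projections of the weighted parameter matrix
maps_2d = {name: model.fit_transform(P_weighted)
           for name, model in reducers.items()}

# LDA is supervised and, with two classes, yields a single dimension
lda = LinearDiscriminantAnalysis(n_components=1)
maps_2d['LDA'] = lda.fit_transform(P_weighted, labels)
```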
The parameter reconstruction process was implemented using the following techniques: Linear Regression (LR), a process by which a linear function is used to estimate latent variables; Natural Neighbour Interpolation (NaNI), a method for interpolating between scattered data points using Voronoi tessellation, as used by [14] for a similar application; Nearest Neighbour Interpolation (NeNI), an interpolation method where the query point takes the value of the nearest neighbour [39]; Linear Interpolation (LI), an interpolation technique that assumes a linear relationship between the existing points in a dataset; Support Vector Regression (SVR), a non-linear kernel-based regression technique (see [40]), for which we choose a Gaussian kernel function.
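A sketch of the low-to-high mapping follows, assuming `Z` holds the training (x, y) coordinates, `P` the matching 13-dimensional parameter vectors, and `query_xy` the user-supplied coordinates; natural neighbour interpolation is omitted, as scipy offers no direct implementation.

```python
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Interpolation: all 13 parameters are interpolated at once
p_linear = LinearNDInterpolator(Z, P)(query_xy)    # LI
p_nearest = NearestNDInterpolator(Z, P)(query_xy)  # NeNI

# Regression: one Gaussian-kernel SVR per output parameter
svr = MultiOutputRegressor(SVR(kernel='rbf')).fit(Z, P)
p_svr = svr.predict(query_xy)
```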
An autoencoder is an Artificial Neural Network (ANN) with a topology capable of learning a compact representation of a dataset by optimising a matrix of weights, such that a loss function representing the difference between the output and input vectors is minimised. Autoencoders can then be stacked together using the output of the prior layer as the input for the next in order to construct a deep network architecture. Each autoencoder is then trained individually, learning to minimise the reconstruction error between its input and the predicted output. This approach has been used for data compression [41], and by extension, dimensionality reduction. This type of ANN is often used in order to improve the classification accuracy of logistic regression [42]; however, since our problem involves data reconstruction as opposed to classification, a logistic layer is not implemented.

Network Topology

The autoencoder was built using the Theano Python library [43], where we observed an error of 0.086 using a single hidden layer with N (in this case N = 2) units. To reduce the error, a mirrored [13-9-2] architecture was selected empirically, resulting in an error measurement of 0.08. To improve reconstruction accuracy further, noise was introduced at each stage in the network, as demonstrated by [44]. Here, the first autoencoder was corrupted with noise of magnitude 0.6, and the second with 0.5. This approach further reduces the reconstruction error to 0.0784. Additionally, we replace the previously-used stochastic gradient descent algorithm with RMSprop [45], using a batch size of 10 for both pre-training and fine-tuning, with learning rates of 0.01 and 0.001, respectively. This allows for faster optimisation, as shown in [46]. For the weighted parameters, we found that a three-layer denoising autoencoder with a [13-9-6-2] architecture and noise of magnitude (0.5, 0.4, 0.3) is able to outperform our two-layer denoising autoencoder model.
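The following is a compressed numpy re-sketch of the denoising network rather than the authors' Theano implementation: the mirrored [13-9-2] architecture is trained end-to-end with masking noise and RMSprop updates, folding the layer-wise pre-training described above into a single loop for brevity. Hyperparameters beyond those stated in the text are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [13, 9, 2, 9, 13]                    # mirrored encoder/decoder
W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]
cache_W = [np.zeros_like(w) for w in W]      # RMSprop running averages
cache_b = [np.zeros_like(x) for x in b]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(X, lr=0.001, noise=0.5, decay=0.9, eps=1e-8):
    """One RMSprop update on a mini-batch X of normalised parameters."""
    X_noisy = X * (rng.random(X.shape) > noise)  # masking corruption
    # Forward pass through sigmoid layers
    acts = [X_noisy]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))
    # Backward pass: squared reconstruction error against the clean input
    delta = (acts[-1] - X) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        gW = acts[i].T @ delta / len(X)
        gb = delta.mean(axis=0)
        cache_W[i] = decay * cache_W[i] + (1 - decay) * gW ** 2
        cache_b[i] = decay * cache_b[i] + (1 - decay) * gb ** 2
        if i > 0:  # propagate error before updating this layer's weights
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        W[i] -= lr * gW / (np.sqrt(cache_W[i]) + eps)
        b[i] -= lr * gb / (np.sqrt(cache_b[i]) + eps)
```

Training would iterate `train_step` over mini-batches of 10 rows of the normalised parameter matrix; the decoder half (the final two layers) is then retained for the implementation phase.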

5. Results

5.1. Parameter Space Evaluation

To evaluate the extent to which structures in the parameter space are preserved in the reduced dimensionality map, we report the trustworthiness, continuity, and class-wise similarity (k-NN). This is applied to data shown in Figure 4a–g, in which a two-dimensional projection of the 13 equaliser parameters is given for both warm and bright samples in the dataset.

5.1.1. Low-Dimensional Mapping Accuracy

Table 2 shows that, for trustworthiness, pPCA achieves the highest score (0.8426), with the sAE performing similarly (0.842). The remaining techniques are also able to achieve high scores, ranging from 0.81 for kPCA to 0.839 for standard PCA. The only technique that does not perform to the same standard is LDA, as the algorithm maximises the separability of classes in the data instead of preserving the structure of the original dataset unrelated to its classes. For continuity, the majority of the techniques perform similarly, with scores ranging from 0.943 for the sAE to 0.958 for kPCA. However, as was the case with trustworthiness, LDA does not perform as well (0.868), due to the map reduction process.
Trustworthiness and continuity metrics were used with a varying number of neighbours, ranging from 1 to 250. Here, the sAE exhibits higher scores for a lower number of neighbours (<120), as shown in Figure 5a—a result that suggests the system is better at retaining the local structure of the data, which is a necessary goal for a successful mapping technique. Furthermore, while the continuity score of the autoencoder is lower than the remaining dimensionality reduction techniques (Table 2), its error from the best performing technique in terms of continuity (kPCA) is only 0.015, which is deemed negligible.

5.1.2. Class Preservation

The classification of 1-NN in the original dataset achieves an average of 91.21% for 100 iterations of the algorithm. None of the dimensionality reduction techniques are able to replicate this response, with pPCA achieving the highest score (87.92%), as seen in Table 2. On the other hand, the sAE achieved an accuracy of 84.01%, the lowest among the techniques being tested, 7.2% worse than the classification accuracy of the algorithm in the high-dimensional dataset. This result reveals that sAE is not as capable as other reduction techniques in preserving the classes on the low-dimensional space; however, as sAE is able to achieve better results than the other techniques for trustworthiness for a lower number of neighbours, and its performance in 1-NN is not drastically worse (3.91%) than the best technique in pPCA, it can be considered a minor problem.

5.1.3. Class Separation

By applying JMD (Equation (4)) to the dimensionality reduction techniques, we find that kPCA outperforms the rest of the techniques used, achieving 0.607, whereas the optimised autoencoder model performs slightly less favourably with a score of 0.558, as shown in Table 3. The only technique excluded from this process was LDA, for two reasons: (1) it is a supervised technique that specifically maximises the separability between the different classes in the low-dimensional space, and (2) in the context of our study, LDA reduces the dataset to a single dimension, while all the other techniques reduce it to two. While class-separability is not necessarily correlated with accurate preservation of structure, high separability allows users to effectively modulate between contrasting timbral descriptors.

5.2. Parameter Reconstruction Error

In [26], the sAE was able to achieve the lowest reconstruction error, 0.086, while the technique that came closest to its accuracy was kPCA with support vector regression, achieving an error of 0.09. The sAE still outperforms all the other combinations of techniques, as can be seen in Table 4, achieving an overall error of 0.074. It should also be noted that the sAE reconstructs more of the equaliser's parameters (six) with greater accuracy than any other combination of techniques.

5.3. Parameter Weighting

In order to evaluate the effectiveness of the signal-specific weights, we measure the reconstruction accuracy of each system after the weights have been applied (see Table 5). Overall, the systems exhibit a general improvement in the reconstruction accuracy of the gain and Q parameters. All the systems have improved accuracy measurements, with the highest-performing pair being PCA with SVR, achieving an error of 0.059. Similarly, the sAE with the same architecture (hidden layer sizes [9, 2]) is able to achieve a reconstruction accuracy of 0.06, a further improvement on the 0.0748 error observed with unweighted parameters. For the weighted parameters, we found that a three-layer denoising autoencoder was able to outperform our two-layer autoencoder, improving the reconstruction accuracy by 0.02.
Finally, the parameter weighting stage improves the trustworthiness of the low-dimensional mapping when using PCA, pPCA, kPCA, DM, and sAE, whilst FA and LDA exhibited significantly lower scores, as presented in Table 6. On the other hand, the continuity of the systems had very little change, with pPCA, kPCA, DM, FA, and sAE showing very minor reductions, LDA showing significant reduction, and PCA showing an improvement. In this case, sAE with parameter weighting still outperforms the other techniques in terms of trustworthiness for a lower number of neighbours, as in Figure 5c, and the performance in terms of continuity sees the sAE performing better than FA (Figure 5d).

5.4. User Evaluation

We evaluate the performance of the selected model (sAE) using subjective tests, in which we present users with various samples and ask them to equalise each using the low-dimensional space (shown in Figure 6). We then measure the class separability using the JMD metric presented in Section 3.2. In Table 7, we present the degree of separation between user inputs using high-dimensional and low-dimensional responses from the subjective data. From this we can deduce that the overlap between warm and bright descriptors has decreased, with a separation value of 0.8527, higher than that of the high-dimensional dataset instances (0.5581). Furthermore, we see an increase in separation between the high-dimensional classes and the opposing low-dimensional classes. For instance, the high-dimensional warm examples and the low-dimensional bright examples achieve a separation of 0.7719, again higher than the original separation between the high-dimensional classes. Similarly, a strong positive correlation between high-dimensional and low-dimensional equalisation is exhibited by examples in the same class, a desired effect that demonstrates the ability of the users to choose the corresponding regions for the two descriptors.
This is reinforced by the low Euclidean distances between class centroids (shown in Figure 6) and strong positive coherence (spectral correlation) between the equaliser curves achieved using the 13- and 2-dimensional interfaces (shown in Figure 7a,b). These results are provided through the Pearson correlation measures in Table 8, revealing a positive correlation between the high-dimensional and low-dimensional datasets for the same descriptor: 0.9346 for warm and 0.9247 for bright, and a negative correlation between opposite high-dimensional and low dimensional descriptors: −0.7594 and −0.9121, respectively.

6. Discussion

For reconstruction accuracy, we find that the sAE is able to outperform all pairwise combinations of dimensionality reduction and reconstruction techniques, whether the system includes parameter weighting or not (Table 4 and Table 5). Furthermore, the sAE is able to achieve the second-highest trustworthiness score (see Table 2, Figure 5a,c) in low-dimensional space, and performs to a high standard in the preservation of high-dimensional clusters (continuity), as in Table 2 and Figure 5b,d. When using an sAE, however, the class-separability in low-dimensional space is reduced when parameter weighting is applied. Furthermore, the system is able to reconstruct the most parameters of the equaliser accurately (six for the unweighted parameters and five for the weighted parameters); while FA with SVR is the only other combination able to accurately reconstruct five parameters in the weighted case, it achieves lower results for overall reconstruction accuracy (0.065), trustworthiness (0.7761), and classification (59.52%), and marginally lower for continuity (0.9359).
Whilst the parameter reconstruction of the autoencoder is sufficiently accurate for our application, it is bound by the intrinsic dimensionality of the data, defined as the minimum number of variables required to accurately represent the variance of the dataset in lower-dimensional space. For the bright/warm parameter-space data used in this experiment, the intrinsic dimensionality is three when computed using Maximum Likelihood Estimation [47]. As our application requires a two-dimensional interface, the reconstruction accuracy is inherently limited.
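For reference, the Levina–Bickel estimator [47] admits a short numpy implementation; this sketch assumes X is the (n × 13) matrix of scaled parameter settings and uses a single fixed neighbourhood size k, leaving out refinements such as averaging over a range of k.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def intrinsic_dimension(X, k=10):
    """Levina-Bickel maximum-likelihood intrinsic dimension estimate."""
    D = squareform(pdist(X))
    D.sort(axis=1)              # row i: distances from point i, ascending
    Tk = D[:, 1:k + 1]          # k nearest-neighbour distances (skip self)
    # m_k(x_i) = [ (1/(k-1)) * sum_j log(T_k / T_j) ]^(-1), averaged over i
    m = (k - 1) / np.sum(np.log(Tk[:, -1:] / Tk[:, :-1]), axis=1)
    return m.mean()
```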
Additionally, the user tests revealed that the two-dimensional slider using an sAE is able to accurately reconstruct the equaliser curve, retaining the characteristics associated with warm (boost on the low-mids and cut on the high end) and bright (cut on the low end and boost on the high end), as displayed in Figure 7a,b. Participants also commented that the underlying two-dimensional map is quick to learn and provides an intuitive tool for controlling an audio equaliser. Given that the final audio effect should be incorporated alongside the equaliser, with the high-dimensional parameters also available to users and with indications as to where the semantic regions are placed, the resulting effect can be expected to offer a quick way of achieving the different descriptors (using the two-dimensional slider) and a further fine-tuning stage (via the high-dimensional equaliser parameters) where necessary.
Providing the model training is applied offline, mapping techniques such as PCA, LDA, DM, pPCA, kPCA, and FA are all capable of running in real time given their low computational complexity, as are reconstruction methods such as the interpolation techniques (LI, NaNI, NeNI) and the sAE. Similarly, while the sAE requires iterative training, whose duration varies with the number of iterations, the learning rate, and the number of neurons and hidden layers, it still offers a fast implementation, as the user-input process is relatively lightweight.

7. Conclusions

We have presented a model for the modulation of equalisation parameters using a two-dimensional control interface. The model utilises an sAE to modify the dimensionality of the input data and a weighting process that adapts the parameters to the LTAS of the input audio signal. We train the model with semantics data in order to obtain the appropriate decoder weights and bias units, which can then be applied to any new input data, given by a user as the position of a cursor in an (x, y) Cartesian space. From this input, the model computes high-dimensional parameter values, which are rescaled, unweighted, and passed to the equaliser. We show that the sAE model achieves better reconstruction accuracy than other regression and interpolation techniques, achieving an error as low as 0.058. Similarly, the trustworthiness and continuity of the system perform similarly to (and in some cases outperform) the remaining dimensionality reduction techniques. Through subjective testing, we show that the 2D equaliser provides users with an intuitive tool to recreate the high-dimensional equaliser settings extracted from the original dataset. This is demonstrated by comparing the centroids taken from the high- and low-dimensional maps and by comparing the equalisation curves when applied to warm and bright samples.

Acknowledgments

The work of the first author is supported by The Alexander S. Onassis Public Benefit Foundation.

Author Contributions

The work was done in close collaboration. Spyridon Stasis conducted the experiments, derived results and contributed to the manuscript. Ryan Stables defined the mathematical models and drafted sections of the manuscript. Jason Hockman co-developed the models and contributed to the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Valimaki, V.; Reiss, J. All About Audio Equalization: Solutions and Frontiers. Appl. Sci. unpublished work. 2016. [Google Scholar]
  2. Bazil, E. Sound Equalization Tips and Tricks; PC Publishing: Norfolk, UK, 2009. [Google Scholar]
  3. Sarkar, M.; Vercoe, B.; Yang, Y. Words that describe timbre: A study of auditory perception through language. In Proceedings of the 2007 Language and Music as Cognitive Systems Conference, Cambridge, UK, 11–13 May 2007; pp. 11–13.
  4. Beauchamp, J.W. Synthesis by spectral amplitude and “Brightness” matching of analyzed musical instrument tones. J. Audio Eng. Soc. 1982, 30, 396–406. [Google Scholar]
  5. Schubert, E.; Wolfe, J. Does timbral brightness scale with frequency and spectral centroid? Acta Acust. United Acust. 2006, 92, 820–825. [Google Scholar]
  6. Marozeau, J.; de Cheveigné, A. The effect of fundamental frequency on the brightness dimension of timbre. J. Acoust. Soc. Am. 2007, 121, 383–387. [Google Scholar] [CrossRef] [PubMed]
  7. Grey, J.M. Multidimensional perceptual scaling of musical timbres. J. Acoust. Soc. Am. 1977, 61, 1270–1277. [Google Scholar] [CrossRef] [PubMed]
  8. Zacharakis, A.; Pastiadis, K.; Reiss, J.D.; Papadelis, G. Analysis of musical timbre semantics through metric and non-metric data reduction techniques. In Proceedings of the 12th International Conference on Music Perception and Cognition, Thessaloniki, Greece, 23–28 July 2012; pp. 1177–1182.
  9. Brookes, T.; Williams, D. Perceptually-motivated audio morphing: Brightness. In Proceedings of the 122nd Convention of the Audio Engineering Society, Vienna, Austria, 5–8 May 2007.
  10. Zacharakis, A.; Reiss, J. An additive synthesis technique for independent modification of the auditory perceptions of brightness and warmth. In Proceedings of the 130th Convention of the Audio Engineering Society, London, UK, 13–16 May 2011.
  11. Hafezi, S.; Reiss, J.D. Autonomous multitrack equalization based on masking reduction. J. Audio Eng. Soc. 2015, 63, 312–323. [Google Scholar] [CrossRef]
  12. Perez-Gonzalez, E.; Reiss, J. Automatic equalization of multichannel audio using cross-adaptive methods. In Proceedings of the 127th Convention of the Audio Engineering Society, New York, USA, 9–12 October 2009.
  13. Cartwright, M.; Pardo, B. Social-EQ: Crowdsourcing an equalization descriptor map. In Proceedings of the 14th ISMIR Conference, Curitiba, Brazil, 4–8 November 2013; pp. 395–400.
  14. Mecklenburg, S.; Loviscach, J. SubjEQt: Controlling an equalizer through subjective terms. In Proceedings of the CHI-06, Montreal, QC, Canada, 22–27 April 2006; pp. 1109–1114.
  15. Sabin, A.T.; Pardo, B. 2DEQ: An intuitive audio equalizer. In Proceedings of the 7th ACM Conference on Creativity and Cognition, Berkeley, CA, USA, 27–30 October 2009; pp. 435–436.
  16. Bristow-Johnson, R. Cookbook formulae for audio EQ biquad filter coefficients. Available online: http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt (accessed on 25 February 2016).
  17. Verfaille, V.; Arfib, D. A-DAFx: Adaptive digital audio effects. In Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, 6–8 December 2001.
  18. Verfaille, V.; Zölzer, U.; Arfib, D. Adaptive digital audio effects (A-DAFx): A new class of sound transformations. IEEE Trans. Audio Speech Lang. Process. 2006, 14, 1817–1831. [Google Scholar] [CrossRef]
  19. Zölzer, U.; Amatriain, X.; Arfib, D. DAFX: Digital Audio Effects; Wiley Online Library: New York, NY, USA, 2011. [Google Scholar]
  20. Stables, R.; Enderby, S.; de Man, B.; Fazekas, G.; Reiss, J.D. SAFE: A system for the extraction and retrieval of semantic audio descriptors. In Proceedings of the 15th ISMIR Conference, Taipei, Taiwan, 27–31 October 2014.
  21. Semantic Audio: The SAFE Project. Available online: http://www.semanticaudio.co.uk/ (accessed on 20 January 2016).
  22. Brookes, T.; Williams, D. Perceptually-motivated audio morphing: Warmth. In Proceedings of the 128th Convention of the Audio Engineering Society, London, UK, 22–25 May 2010.
  23. Venna, J.; Kaski, S. Local multidimensional scaling with controlled tradeoff between trustworthiness and continuity. In Proceedings of 5th Workshop on Self-Organizing Maps, Paris, France, 5–8 September 2005; pp. 695–702.
  24. Van der Maaten, L.J.P.; Postma, E.O.; van den Herik, H.J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10, 66–71. [Google Scholar]
  25. Sanguinetti, G. Dimensionality reduction of clustered data sets. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 535–540. [Google Scholar] [CrossRef] [PubMed]
  26. Stasis, S.; Stables, R.; Hockman, J. A model for adaptive reduced-dimensionality equalisation. In Proceedings of the 18th International Conference on Digital Audio Effects, Trondheim, Norway, 30 November–3 December 2015.
  27. Chaudhuri, K.; McGregor, A. Finding metric structure in information theoretic clustering. COLT Citeseer 2008, 8. Available online: https://people.cs.umass.edu/~mcgregor/papers/08-colt.pdf (accessed on 20 April 2016). [Google Scholar]
  28. Johnson, D.; Sinanovic, S. Symmetrizing the kullback-leibler distance. Computer and Information Technology Institute, Department of Electrical and Computer Engineering, Rice University: Houston, Texas, USA, 2001. Available online: http://www.ece.rice.edu/~dhj/resistor.pdf (accessed on 10 March 2016).
  29. Bruzzone, L.; Roli, F.; Serpico, S.B. An extension of the Jeffreys-Matusita distance to multiclass cases for feature selection. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1318–1321. [Google Scholar] [CrossRef]
  30. Senior, M. Mixing secrets for the small studio additional resources. Available online: http://www.cambridge-mt.com/ms-mtk.htm (accessed on 20 January 2016).
  31. Van der Maaten, L.J.P. An introduction to dimensionality reduction using Matlab. Report 2007, 1201. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.1327&rep=rep1&type=pdf (accessed on 20 April 2016). [Google Scholar]
  32. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441. [Google Scholar] [CrossRef]
  33. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  34. Bilmes, J.A. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Int. Comput. Sci. Inst. 1998, 4. Available online: http://lasa.epfl.ch/teaching/lectures/ML_Phd/Notes/GP-GMM.pdf (accessed on 20 April 2016). [Google Scholar]
  35. Roweis, S. EM algorithms for PCA and SPCA. Adv. Neural Inf. Process. Syst. 1998, 626–632. [Google Scholar]
  36. Khosla, N. Dimensionality Reduction Using Factor Analysis. Ph.D. Thesis, Griffith University, Brisbane, Queensland, Australia, December 2004. [Google Scholar]
  37. Nadler, B.; Lafon, S.; Coifman, R.R.; Kevrekidis, I.G. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Appl. Comput. Harmonic Anal. 2006, 21, 113–127. [Google Scholar] [CrossRef]
  38. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188. [Google Scholar] [CrossRef]
  39. Bobach, T.; Umlauf, G. Natural neighbor interpolation and order of continuity. University of Kaiserslautern, Computer Science Department/IRTG: Kaiserslautern, Germany, 2006. Available online: http://www-umlauf.informatik.uni-kl.de/~bobach/work/publications/dagstuhl06.pdf (accessed on 20 January 2016).
  40. Drucker, H.; Burges, C.J.C.; Kaufman, L.; Smola, A.; Vapnik, V. Support vector regression machines. Adv. Neural Inf. Process. Syst. 1997, 9, 155–161. [Google Scholar]
  41. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  42. Bengio, Y. Learning deep architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127. [Google Scholar] [CrossRef]
  43. Bergstra, J.; Breuleux, O.; Bastien, F.; Lamblin, P.; Pascanu, R.; Desjardins, G.; Turian, J.; Warde-Farley, D.; Bengio, Y. Theano: A CPU and GPU math compiler in Python. In Proceedings of the 9th Python in Science Conference (SciPy), Austin, Texas, USA, 28 June–3 July 2010.
  44. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  45. Tieleman, T.; Hinton, G. Lecture 6e - rmsprop: Divide the gradient by a running average of its recent magnitude. 2012. Available online: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf (accessed on 20 January 2016).
  46. Dauphin, Y.; de Vries, H.; Bengio, Y. Equilibrated adaptive learning rates for non-convex optimization. In Proceedings of Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 1504–1512.
  47. Levina, E.; Bickel, P.J. Maximum likelihood estimation of intrinsic dimension. In Proceeding of Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 13–18 December 2004.
Figure 1. An overview of the Semantic Audio Feature Extraction (SAFE) equaliser and its feature extraction architecture. (a) The extended Semantic Audio Equalisation plug-in with the two-dimensional interface. To modify the brightness/warmth of an audio signal, a point is positioned in two-dimensional space; (b) The SAFE feature extraction process, where A represents the audio features captured before the effect is applied, A′ represents the features captured after the effect is applied, and P represents the parameter vector.
Figure 2. An overview of the proposed model. The grey horizontal paths represent training and implementation (user input) phases.
Figure 3. An example spectrum taken from an input example, weighted by the biquad coefficients, where the red line represents a peaking filter, the black line represents the biquad-filtered spectrum, and the blue line represents the spectral energy in the rectangular window ( R k ).
Figure 4. Two-dimensional parameter-space representations using seven data reduction techniques, where the red data points are taken from parameter spaces described as bright and the blue points are described as warm. PCA: Principal Components Analysis; pPCA: Probabilistic PCA; kPCA: Kernel PCA; FA: Factor Analysis; DM: Diffusion Maps; LDA: Linear Discriminant Analysis; sAE: stacked-autoencoder.
Figure 5. Trustworthiness and continuity plots across the different dimensionality reduction techniques for numbers of neighbours (1:250). (a) Trustworthiness; (b) Continuity; (c) Trustworthiness (Weighted Parameters); (d) Continuity (Weighted Parameters).
Figure 6. Equalisation settings shown in reduced dimensionality space, where (a) shows the results of users recording warm and bright samples using 13 parameters; (b) the results of users producing the same descriptors using a sAE-based two-dimensional equaliser. Here, diamonds represent the class centroids.
Figure 7. The reconstructed equaliser curves for the centroid of the warm and bright descriptors for both the high-dimensional (red) and low-dimensional (blue) datasets. (a) Reconstructed warm equaliser curve; (b) Reconstructed bright equaliser curve.
Table 1. A list of the parameter space variables and their ranges of possible values, taken from the Semantic Audio Feature Extraction (SAFE) parametric equaliser interface.
n | Assignment | Range
0 | LS Gain | −12 to +12 dB
1 | LS Freq | 22–1000 Hz
2 | Pf0 Gain | −12 to +12 dB
3 | Pf0 Freq | 82–3900 Hz
4 | Pf0 Q | 0.1–10
5 | Pf1 Gain | −12 to +12 dB
6 | Pf1 Freq | 180–4700 Hz
7 | Pf1 Q | 0.1–10
8 | Pf2 Gain | −12 to +12 dB
9 | Pf2 Freq | 220–10,000 Hz
10 | Pf2 Q | 0.1–10
11 | HS Gain | −12 to +12 dB
12 | HS Freq | 580–20,000 Hz

(LS: low shelf; Pf: peaking filter; HS: high shelf. Q factors are dimensionless.)
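Before the parameter vectors in Table 1 can be compared or fed to a dimensionality reduction model, their heterogeneous ranges (dB, Hz, and dimensionless Q) must be brought onto a common scale. The paper does not print its normalisation code, so the following is a minimal sketch assuming simple min–max scaling over the Table 1 ranges:

```python
import numpy as np

# (name, min, max) per Table 1: gains in dB, frequencies in Hz, Q dimensionless
PARAM_RANGES = [
    ("LS Gain", -12.0, 12.0), ("LS Freq", 22.0, 1000.0),
    ("Pf0 Gain", -12.0, 12.0), ("Pf0 Freq", 82.0, 3900.0), ("Pf0 Q", 0.1, 10.0),
    ("Pf1 Gain", -12.0, 12.0), ("Pf1 Freq", 180.0, 4700.0), ("Pf1 Q", 0.1, 10.0),
    ("Pf2 Gain", -12.0, 12.0), ("Pf2 Freq", 220.0, 10000.0), ("Pf2 Q", 0.1, 10.0),
    ("HS Gain", -12.0, 12.0), ("HS Freq", 580.0, 20000.0),
]

def normalise(params):
    """Min-max scale a 13-element parameter vector to [0, 1] using Table 1."""
    lo = np.array([r[1] for r in PARAM_RANGES])
    hi = np.array([r[2] for r in PARAM_RANGES])
    return (np.asarray(params, dtype=float) - lo) / (hi - lo)
```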
Table 2. Trustworthiness and continuity scores for the different dimensionality reduction techniques (higher values are better), and 1-NN classification accuracy.
Technique | Trustworthiness | Continuity | 1-NN Classification
Original | – | – | 91.21%
PCA | 0.8398 | 0.9541 | 87.61%
pPCA | 0.8426 | 0.9567 | 87.92%
kPCA | 0.8102 | 0.9583 | 86.14%
FA | 0.8337 | 0.9490 | 86.19%
DM | 0.8395 | 0.9533 | 87.89%
LDA | 0.7292 | 0.8684 | 85.40%
sAE | 0.8420 | 0.9439 | 84.01%
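The trustworthiness and continuity scores in Table 2 measure how well local neighbourhoods survive the projection to two dimensions. A hedged sketch of how such figures could be computed with scikit-learn is shown below; the data, neighbourhood size, and cross-validation setup are illustrative, and continuity is obtained here by exploiting the fact that it equals trustworthiness with the roles of the high- and low-dimensional spaces swapped:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins for the normalised 13-D EQ settings and their labels
X = np.random.rand(500, 13)                # parameter vectors
y = np.random.randint(0, 2, 500)           # 0 = warm, 1 = bright

Z = PCA(n_components=2).fit_transform(X)   # 2-D embedding

T = trustworthiness(X, Z, n_neighbors=12)
# Continuity equals trustworthiness with the two spaces' roles swapped
C = trustworthiness(Z, X, n_neighbors=12)

# 1-NN classification accuracy in the embedded space
acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), Z, y, cv=10).mean()
```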
Table 3. Jeffries–Matusita Distance (JMD) scores showing separation across different dimensionality reduction techniques.
Separability Measure | PCA | pPCA | kPCA | FA | DM | sAE
JMD | 0.5142 | 0.5152 | 0.6076 | 0.4862 | 0.5125 | 0.5581
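Table 3's Jeffries–Matusita distance summarises how separable the warm and bright clusters are in each embedding. There are several common parameterisations of the JMD; the sketch below uses one standard Gaussian-based form via the Bhattacharyya distance, and the exact scaling used in the paper may differ:

```python
import numpy as np

def jeffries_matusita(Xa, Xb):
    """JMD between two classes modelled as multivariate Gaussians.

    Uses the common form 2 * (1 - exp(-B)), where B is the Bhattacharyya
    distance; the paper's exact parameterisation is not reproduced here.
    """
    ma, mb = Xa.mean(axis=0), Xb.mean(axis=0)
    Ca, Cb = np.cov(Xa.T), np.cov(Xb.T)
    C = 0.5 * (Ca + Cb)
    d = ma - mb
    B = 0.125 * d @ np.linalg.solve(C, d) + 0.5 * np.log(
        np.linalg.det(C) / np.sqrt(np.linalg.det(Ca) * np.linalg.det(Cb)))
    return 2.0 * (1.0 - np.exp(-B))

# e.g., jeffries_matusita(Z[y == 0], Z[y == 1]) on a 2-D embedding Z
```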
Table 4. Mean reconstruction error per parameter using combinations of dimensionality reduction and reconstruction techniques. The final column shows the mean (μ) error across all parameters; the stacked autoencoder (sAE) achieves the lowest mean reconstruction error. LR: Linear Regression; SVR: Support Vector Regression; NaNI: Natural Neighbour Interpolation; NeNI: Nearest Neighbour Interpolation; LI: Linear Interpolation.
Technique | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | μ
PCA-LR | 0.099 | 0.070 | 0.142 | 0.047 | 0.041 | 0.139 | 0.079 | 0.028 | 0.124 | 0.090 | 0.029 | 0.102 | 0.109 | 0.084
LDA-LR | 0.194 | 0.070 | 0.150 | 0.047 | 0.041 | 0.171 | 0.082 | 0.028 | 0.116 | 0.090 | 0.030 | 0.123 | 0.106 | 0.096
kPCA-LR | 0.081 | 0.070 | 0.136 | 0.047 | 0.040 | 0.150 | 0.082 | 0.027 | 0.130 | 0.084 | 0.029 | 0.120 | 0.107 | 0.085
pPCA-LR | 0.099 | 0.069 | 0.138 | 0.046 | 0.039 | 0.142 | 0.078 | 0.027 | 0.126 | 0.092 | 0.030 | 0.104 | 0.108 | 0.084
DM-LR | 0.104 | 0.070 | 0.138 | 0.047 | 0.040 | 0.139 | 0.081 | 0.027 | 0.126 | 0.091 | 0.031 | 0.102 | 0.106 | 0.085
FA-LR | 0.151 | 0.068 | 0.156 | 0.042 | 0.040 | 0.143 | 0.068 | 0.029 | 0.144 | 0.084 | 0.030 | 0.103 | 0.094 | 0.089
PCA-SVR | 0.086 | 0.064 | 0.123 | 0.046 | 0.040 | 0.137 | 0.079 | 0.028 | 0.125 | 0.089 | 0.031 | 0.097 | 0.095 | 0.080
LDA-SVR | 0.196 | 0.068 | 0.152 | 0.048 | 0.040 | 0.171 | 0.081 | 0.028 | 0.116 | 0.087 | 0.031 | 0.123 | 0.105 | 0.096
kPCA-SVR | 0.077 | 0.069 | 0.136 | 0.045 | 0.039 | 0.144 | 0.079 | 0.026 | 0.130 | 0.088 | 0.032 | 0.111 | 0.099 | 0.083
pPCA-SVR | 0.089 | 0.066 | 0.128 | 0.047 | 0.040 | 0.136 | 0.077 | 0.027 | 0.128 | 0.088 | 0.031 | 0.096 | 0.097 | 0.081
DM-SVR | 0.088 | 0.067 | 0.121 | 0.047 | 0.040 | 0.133 | 0.078 | 0.026 | 0.124 | 0.089 | 0.031 | 0.096 | 0.095 | 0.080
FA-SVR | 0.144 | 0.062 | 0.137 | 0.041 | 0.039 | 0.144 | 0.066 | 0.026 | 0.144 | 0.085 | 0.030 | 0.098 | 0.082 | 0.084
PCA-NaNI | 0.091 | 0.080 | 0.137 | 0.054 | 0.045 | 0.149 | 0.092 | 0.029 | 0.144 | 0.107 | 0.032 | 0.104 | 0.107 | 0.090
LDA-NaNI | 0.263 | 0.098 | 0.209 | 0.071 | 0.046 | 0.216 | 0.117 | 0.031 | 0.149 | 0.124 | 0.033 | 0.158 | 0.128 | 0.126
kPCA-NaNI | 0.083 | 0.082 | 0.159 | 0.056 | 0.042 | 0.154 | 0.095 | 0.029 | 0.160 | 0.116 | 0.033 | 0.125 | 0.108 | 0.096
pPCA-NaNI | 0.092 | 0.078 | 0.139 | 0.050 | 0.041 | 0.148 | 0.090 | 0.028 | 0.139 | 0.106 | 0.034 | 0.105 | 0.106 | 0.089
DM-NaNI | 0.094 | 0.080 | 0.139 | 0.052 | 0.043 | 0.146 | 0.091 | 0.026 | 0.143 | 0.107 | 0.030 | 0.107 | 0.103 | 0.089
FA-NaNI | 0.152 | 0.070 | 0.157 | 0.046 | 0.041 | 0.164 | 0.075 | 0.028 | 0.159 | 0.098 | 0.033 | 0.102 | 0.087 | 0.093
PCA-NeNI | 0.099 | 0.093 | 0.163 | 0.060 | 0.047 | 0.177 | 0.106 | 0.030 | 0.162 | 0.123 | 0.035 | 0.121 | 0.121 | 0.103
LDA-NeNI | 0.252 | 0.100 | 0.194 | 0.060 | 0.042 | 0.217 | 0.109 | 0.031 | 0.151 | 0.120 | 0.037 | 0.158 | 0.115 | 0.122
kPCA-NeNI | 0.092 | 0.096 | 0.187 | 0.060 | 0.042 | 0.175 | 0.110 | 0.025 | 0.180 | 0.128 | 0.029 | 0.135 | 0.124 | 0.106
pPCA-NeNI | 0.103 | 0.088 | 0.162 | 0.059 | 0.042 | 0.170 | 0.107 | 0.027 | 0.160 | 0.123 | 0.034 | 0.120 | 0.117 | 0.101
DM-NeNI | 0.110 | 0.090 | 0.161 | 0.059 | 0.046 | 0.175 | 0.101 | 0.025 | 0.159 | 0.124 | 0.034 | 0.122 | 0.116 | 0.102
FA-NeNI | 0.176 | 0.082 | 0.171 | 0.054 | 0.041 | 0.193 | 0.087 | 0.028 | 0.205 | 0.114 | 0.034 | 0.138 | 0.096 | 0.109
PCA-LI | 0.092 | 0.078 | 0.141 | 0.055 | 0.042 | 0.149 | 0.095 | 0.026 | 0.143 | 0.114 | 0.033 | 0.108 | 0.108 | 0.091
LDA-LI | 0.254 | 0.097 | 0.195 | 0.062 | 0.043 | 0.209 | 0.107 | 0.032 | 0.153 | 0.115 | 0.037 | 0.155 | 0.113 | 0.121
kPCA-LI | 0.083 | 0.082 | 0.159 | 0.058 | 0.039 | 0.159 | 0.102 | 0.028 | 0.160 | 0.114 | 0.030 | 0.127 | 0.115 | 0.096
pPCA-LI | 0.091 | 0.080 | 0.138 | 0.053 | 0.047 | 0.148 | 0.095 | 0.029 | 0.146 | 0.112 | 0.034 | 0.108 | 0.107 | 0.091
DM-LI | 0.098 | 0.076 | 0.142 | 0.051 | 0.045 | 0.149 | 0.089 | 0.030 | 0.146 | 0.112 | 0.033 | 0.108 | 0.105 | 0.091
FA-LI | 0.160 | 0.070 | 0.153 | 0.046 | 0.041 | 0.172 | 0.078 | 0.028 | 0.176 | 0.102 | 0.032 | 0.119 | 0.087 | 0.097
sAE (2-Layer) | 0.073 | 0.046 | 0.126 | 0.039 | 0.027 | 0.149 | 0.067 | 0.014 | 0.123 | 0.091 | 0.017 | 0.099 | 0.096 | 0.074
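To make the evaluation in Table 4 concrete: each combination projects the 13 normalised parameters to two dimensions, learns an inverse mapping back to the full parameter space, and reports the mean absolute error per parameter. A minimal sketch of one such combination (PCA with support vector regression) is given below; the synthetic data and the in-sample evaluation are simplifying assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

X = np.random.rand(500, 13)                          # normalised parameters

Z = PCA(n_components=2).fit_transform(X)             # 13-D -> 2-D
inverse_map = MultiOutputRegressor(SVR()).fit(Z, X)  # 2-D -> 13-D (PCA-SVR)
X_hat = inverse_map.predict(Z)

per_param_error = np.abs(X - X_hat).mean(axis=0)     # one score per parameter
mean_error = per_param_error.mean()                  # the final (mu) column
```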
Table 5. Mean reconstruction error per parameter using combinations of dimensionality reduction and reconstruction techniques for the weighted parameters. The final column shows the mean (μ) error across all parameters; the stacked autoencoder (sAE) again achieves the lowest mean reconstruction error.
Technique | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | μ
PCA-LR | 0.052 | 0.059 | 0.062 | 0.040 | 0.023 | 0.114 | 0.075 | 0.018 | 0.107 | 0.088 | 0.020 | 0.034 | 0.106 | 0.061
LDA-LR | 0.149 | 0.068 | 0.116 | 0.047 | 0.022 | 0.118 | 0.083 | 0.017 | 0.101 | 0.088 | 0.020 | 0.028 | 0.105 | 0.074
kPCA-LR | 0.039 | 0.066 | 0.056 | 0.043 | 0.021 | 0.113 | 0.084 | 0.016 | 0.112 | 0.089 | 0.021 | 0.035 | 0.105 | 0.062
pPCA-LR | 0.054 | 0.066 | 0.062 | 0.042 | 0.022 | 0.111 | 0.074 | 0.017 | 0.108 | 0.090 | 0.022 | 0.036 | 0.110 | 0.063
DM-LR | 0.058 | 0.068 | 0.066 | 0.041 | 0.023 | 0.111 | 0.074 | 0.016 | 0.110 | 0.091 | 0.020 | 0.036 | 0.107 | 0.063
FA-LR | 0.149 | 0.062 | 0.141 | 0.035 | 0.021 | 0.111 | 0.063 | 0.015 | 0.066 | 0.075 | 0.022 | 0.024 | 0.091 | 0.067
PCA-SVR | 0.046 | 0.059 | 0.059 | 0.041 | 0.021 | 0.111 | 0.071 | 0.015 | 0.103 | 0.087 | 0.021 | 0.035 | 0.099 | 0.059
LDA-SVR | 0.155 | 0.070 | 0.120 | 0.047 | 0.023 | 0.121 | 0.081 | 0.016 | 0.109 | 0.094 | 0.020 | 0.027 | 0.104 | 0.076
kPCA-SVR | 0.036 | 0.068 | 0.052 | 0.044 | 0.023 | 0.111 | 0.080 | 0.016 | 0.106 | 0.090 | 0.022 | 0.035 | 0.108 | 0.061
pPCA-SVR | 0.047 | 0.061 | 0.058 | 0.041 | 0.023 | 0.113 | 0.074 | 0.016 | 0.106 | 0.094 | 0.021 | 0.035 | 0.101 | 0.061
DM-SVR | 0.050 | 0.063 | 0.060 | 0.042 | 0.024 | 0.110 | 0.074 | 0.016 | 0.103 | 0.089 | 0.020 | 0.035 | 0.100 | 0.060
FA-SVR | 0.141 | 0.050 | 0.136 | 0.036 | 0.023 | 0.108 | 0.058 | 0.017 | 0.064 | 0.075 | 0.019 | 0.024 | 0.092 | 0.065
PCA-NaNI | 0.048 | 0.066 | 0.064 | 0.047 | 0.026 | 0.127 | 0.081 | 0.019 | 0.116 | 0.096 | 0.024 | 0.038 | 0.111 | 0.066
LDA-NaNI | 0.195 | 0.092 | 0.152 | 0.062 | 0.025 | 0.160 | 0.106 | 0.020 | 0.135 | 0.123 | 0.026 | 0.033 | 0.123 | 0.096
kPCA-NaNI | 0.038 | 0.075 | 0.061 | 0.051 | 0.026 | 0.137 | 0.098 | 0.020 | 0.120 | 0.102 | 0.024 | 0.039 | 0.110 | 0.069
pPCA-NaNI | 0.046 | 0.065 | 0.064 | 0.045 | 0.027 | 0.128 | 0.080 | 0.022 | 0.117 | 0.094 | 0.021 | 0.036 | 0.110 | 0.066
DM-NaNI | 0.054 | 0.070 | 0.069 | 0.046 | 0.028 | 0.128 | 0.084 | 0.019 | 0.118 | 0.100 | 0.024 | 0.038 | 0.109 | 0.068
FA-NaNI | 0.164 | 0.055 | 0.163 | 0.040 | 0.023 | 0.124 | 0.069 | 0.019 | 0.077 | 0.090 | 0.025 | 0.029 | 0.104 | 0.076
PCA-NeNI | 0.057 | 0.077 | 0.080 | 0.057 | 0.029 | 0.157 | 0.100 | 0.022 | 0.140 | 0.119 | 0.022 | 0.043 | 0.126 | 0.079
LDA-NeNI | 0.195 | 0.096 | 0.157 | 0.063 | 0.027 | 0.157 | 0.105 | 0.023 | 0.132 | 0.122 | 0.027 | 0.032 | 0.123 | 0.097
kPCA-NeNI | 0.042 | 0.081 | 0.072 | 0.058 | 0.030 | 0.154 | 0.108 | 0.024 | 0.145 | 0.112 | 0.025 | 0.045 | 0.125 | 0.079
pPCA-NeNI | 0.054 | 0.072 | 0.076 | 0.055 | 0.027 | 0.155 | 0.097 | 0.022 | 0.137 | 0.110 | 0.022 | 0.042 | 0.130 | 0.077
DM-NeNI | 0.059 | 0.075 | 0.084 | 0.053 | 0.030 | 0.158 | 0.095 | 0.022 | 0.143 | 0.114 | 0.025 | 0.045 | 0.129 | 0.079
FA-NeNI | 0.185 | 0.064 | 0.190 | 0.047 | 0.029 | 0.144 | 0.085 | 0.020 | 0.091 | 0.109 | 0.025 | 0.033 | 0.117 | 0.088
PCA-LI | 0.052 | 0.070 | 0.069 | 0.050 | 0.027 | 0.136 | 0.087 | 0.021 | 0.127 | 0.102 | 0.026 | 0.038 | 0.119 | 0.071
LDA-LI | 0.192 | 0.103 | 0.154 | 0.062 | 0.027 | 0.161 | 0.110 | 0.018 | 0.140 | 0.135 | 0.025 | 0.035 | 0.124 | 0.099
kPCA-LI | 0.037 | 0.069 | 0.064 | 0.049 | 0.027 | 0.138 | 0.094 | 0.020 | 0.122 | 0.106 | 0.024 | 0.040 | 0.113 | 0.069
pPCA-LI | 0.052 | 0.071 | 0.069 | 0.049 | 0.026 | 0.137 | 0.084 | 0.020 | 0.125 | 0.102 | 0.024 | 0.039 | 0.116 | 0.070
DM-LI | 0.054 | 0.070 | 0.070 | 0.046 | 0.029 | 0.132 | 0.085 | 0.020 | 0.121 | 0.099 | 0.024 | 0.037 | 0.113 | 0.069
FA-LI | 0.170 | 0.056 | 0.162 | 0.040 | 0.026 | 0.124 | 0.070 | 0.021 | 0.077 | 0.093 | 0.025 | 0.030 | 0.103 | 0.077
sAE (3-Layer) | 0.065 | 0.053 | 0.081 | 0.040 | 0.021 | 0.106 | 0.075 | 0.015 | 0.077 | 0.081 | 0.017 | 0.028 | 0.096 | 0.058
Table 6. Trustworthiness and continuity scores (including weighting) for the different dimensionality reduction techniques (higher values are better), and 1-NN classification accuracy.
Technique | Trustworthiness | Continuity | 1-NN Classification
Original | – | – | 84.9%
PCA | 0.8463 | 0.9562 | 67.85%
pPCA | 0.8454 | 0.9552 | 67.39%
kPCA | 0.8263 | 0.9566 | 69.40%
FA | 0.7761 | 0.9359 | 59.52%
DM | 0.8477 | 0.9561 | 66.03%
LDA | 0.6702 | 0.8340 | 73.92%
sAE (3-Layer) | 0.8440 | 0.9431 | 73.51%
Table 7. Jeffries–Matusita Distance (JMD) scores showing separation for data gathered from the 13-dimensional parameter space and the two-dimensional interface, using warm (W) and bright (B) examples. Higher scores are desirable for the first four columns, while lower scores are better for the last two.
Separability | W(13-d)/B(13-d) | W(2-d)/B(2-d) | W(13-d)/B(2-d) | B(13-d)/W(2-d) | W(13-d)/W(2-d) | B(13-d)/B(2-d)
JMD | 0.5581 | 0.8527 | 0.7719 | 0.6988 | 0.0846 | 0.1439
Table 8. Pearson correlation between the reconstructed equaliser curves.
Metric | B(13-d)/B(2-d) | W(13-d)/W(2-d) | W(13-d)/B(13-d) | W(2-d)/B(2-d)
Pearson correlation | 0.9346 | 0.9247 | −0.7594 | −0.9121
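The correlations in Table 8 compare reconstructed magnitude response curves sampled on a common frequency grid. A curve pair derived from centroid settings can be compared in a few lines, as sketched below with placeholder curves standing in for the reconstructed responses:

```python
import numpy as np
from scipy.stats import pearsonr

# Reconstructed magnitude responses (dB) sampled on a shared log-frequency
# grid; placeholder curves stand in for the centroid-derived ones
freqs = np.logspace(np.log10(20), np.log10(20000), 512)
curve_13d = np.random.randn(512)
curve_2d = np.random.randn(512)

r, p = pearsonr(curve_13d, curve_2d)   # r corresponds to a Table 8 entry
```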
