energy_fault_detector.autoencoders.conditional_autoencoder

Conditional autoencoder implementation (deterministic).

class ConditionalAE(conditional_features=None, layers=None, code_size=10, learning_rate=0.001, batch_size=128, epochs=10, loss_name='mean_squared_error', metrics=None, kernel_initializer='he_normal', act='prelu', last_act='linear', early_stopping=False, decay_rate=None, decay_steps=None, patience=3, min_delta=0.0001, noise=0.0)

Bases: Autoencoder

Conditional symmetric autoencoder. Same as the MultilayerAutoencoder, except that certain input features are treated as conditions. These are concatenated to the input of both the encoder and the decoder.

NOTE: If the input of the fit, tune or predict method is a numpy array or a tensorflow tensor, the first columns (one per conditional feature) are assumed to be the conditions (see the usage sketch after the parameter list).

Parameters:
  • conditional_features (List[str]) – names of the input features to use as conditions. Default None (no conditions).

  • layers (List[int]) – list of integers giving the size (number of units) of each encoder layer; the decoder mirrors these sizes in reversed order. Default [200]

  • code_size (int) – number of units of the encoded layer (bottleneck layer). (number of features to compress the input features to). Default 10.

  • learning_rate (float) – learning rate of the adam optimizer. Default 0.001

  • batch_size (int) – number of samples per batch. Default 128

  • epochs (int) – number of epochs to run. Default 10

  • loss_name (str) – name of loss metric to use. Default mean_squared_error

  • metrics (List[str]) – list of additional metrics to track. Default [mean_absolute_error].

  • act (str) – activation function to use, prelu, relu, … Defaults to prelu.

  • last_act (str) – activation function for last layer, prelu, relu, sigmoid, linear… Defaults to linear.

  • kernel_initializer (str) – initializer to use in each layer. Default he_normal.

  • early_stopping (bool) – whether to use EarlyStopping(monitor='val_loss', min_delta=min_delta, patience=patience, restore_best_weights=True). Cannot be used if there is no validation data; in that case, add a callback directly via the fit method.

  • decay_rate (float) – learning rate decay. Optional. If not defined, a fixed learning rate is used.

  • decay_steps (int) – number of steps to decay learning rate over. Optional.

  • patience (int) – parameter for early stopping. If early stopping is used, training ends when more than patience consecutive epochs show no improvement in the loss. Default 3.

  • min_delta (float) – parameter of the early stopping callback. If the losses of two consecutive epochs differ by less than min_delta, they are considered equal (i.e. no improvement).

  • noise (float) – determines the influence of the noise term added to the training input. Higher values mean noisier input; 0 means no noise at all. Default 0. If noise > 0 is used, the validation metrics are not affected by it, so training loss and validation loss can differ depending on the magnitude of the noise.
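
A minimal usage sketch (not from the package documentation) illustrating the column convention from the note above: when the training data is a numpy array, the conditional columns come first. The data, the keyword arguments and the exact fit signature are assumptions and may differ from the actual API.

    import numpy as np
    from energy_fault_detector.autoencoders.conditional_autoencoder import ConditionalAE

    # Hypothetical data: 2 conditional columns followed by 20 sensor features.
    rng = np.random.default_rng(0)
    conditions = rng.random((1000, 2))
    features = rng.random((1000, 20))
    x_train = np.hstack([conditions, features])  # conditions occupy the first columns

    ae = ConditionalAE(
        conditional_features=['condition1', 'condition2'],
        layers=[200],
        code_size=10,
        batch_size=128,
        epochs=10,
    )
    ae.fit(x_train)  # assumed call; validation data or callbacks may also be accepted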

model

keras Model object - the autoencoder network.

encoder

keras Model object - encoder network of the autoencoder.

history

dictionary with the losses and metrics for each epoch.
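
Continuing the sketch above, these attributes can be inspected after training; ae.predict and the history keys shown here are assumptions (predict is mentioned in the note above, and the key names follow the usual Keras history convention).

    reconstruction = ae.predict(x_train)  # reconstruction of the non-conditional features
    print(ae.history['loss'])             # assumed Keras-style key: training loss per epoch
    # ae.encoder is a Keras Model; whether it expects conditions and features as
    # separate inputs or as one concatenated array depends on how create_model wires
    # them, so inspect ae.encoder.inputs before calling ae.encoder.predict directly.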

Configuration example:

train:
  autoencoder:
    name: ConditionalAutoencoder
    params:
      layers: [200]
      code_size: 40
      learning_rate: 0.001
      batch_size: 128
      epochs: 15
      loss_name: mse
      conditional_features:
       - condition1
       - condition2
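
The package presumably resolves this configuration through its own loader/registry; purely as an illustration, the params block maps onto ConditionalAE keyword arguments roughly as follows (yaml.safe_load, the file name and the manual mapping are assumptions, not the package's actual loading mechanism).

    import yaml
    from energy_fault_detector.autoencoders.conditional_autoencoder import ConditionalAE

    with open('config.yaml') as f:  # hypothetical config file
        config = yaml.safe_load(f)
    params = config['train']['autoencoder']['params']
    ae = ConditionalAE(**params)    # layers=[200], code_size=40, conditional_features=[...]
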
create_model(input_dimension, condition_dimension, **kwargs)

Build and compile a symmetric dense conditional autoencoder.

Parameters:
  • input_dimension (int) – number of features in input data.

  • condition_dimension (int) – number of features in conditional data.

Return type:

Model

Returns:

A Keras model.
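
create_model is presumably called internally once the input shapes are known, but it can also be invoked directly to inspect the architecture; the dimensions below are illustrative only.

    ae = ConditionalAE(conditional_features=['condition1', 'condition2'], layers=[200], code_size=10)
    model = ae.create_model(input_dimension=20, condition_dimension=2)
    model.summary()  # standard Keras summary of the compiled autoencoder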