Lecture 07 - Autoencoders

MachineLearningCourse.Lecture07 - Module
Lecture07

Autoencoders

Available Functions

  • demo(): Train an autoencoder on MNIST handwritten digits
  • display_latent_space(encoder, X_flat, Y_label): Display 2D latent space visualization
  • display_reconstruction(model, image): Display original vs reconstructed image

Usage

# Run the full demo (trains the model and shows the latent-space plot)
using MachineLearningCourse
Lecture07.demo()

# Compare an original test image with its reconstruction
using MachineLearningCourse
X_flat, Y_label, autoencoder, encoder = Lecture07.demo(display=false)
i = 1  # set image index
Lecture07.display_reconstruction(autoencoder, X_flat[:, i])

# Visualize the 2D latent space colored by digit class
using MachineLearningCourse
X_flat, Y_label, autoencoder, encoder = Lecture07.demo(display=false)
Lecture07.display_latent_space(encoder, X_flat, Y_label)
MachineLearningCourse.Lecture07.create_autoencoder - Method
create_autoencoder(input_dim, hidden_dim, latent_dim)

Create symmetric autoencoder with encoder-decoder architecture.

Arguments

  • input_dim::Int: Input dimension (e.g., 784 for MNIST)
  • hidden_dim::Int: Hidden layer dimension
  • latent_dim::Int: Latent space dimension (bottleneck)

Returns

  • Flux.Chain: Autoencoder model with architecture:
    • Encoder: input → hidden (ReLU) → latent (linear)
    • Decoder: latent → hidden (ReLU) → output (sigmoid)

The model learns to compress input to latent space and reconstruct it.
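
For reference, the documented architecture corresponds to a small Flux Chain along these lines (a sketch of the layer layout only, not necessarily the package's exact code):

using Flux

# Sketch of the documented layout: a flat four-layer symmetric Chain,
# e.g. create_autoencoder(784, 128, 2) for MNIST.
model = Chain(
    Dense(784 => 128, relu),    # encoder: input → hidden (ReLU)
    Dense(128 => 2),            # encoder: hidden → latent (linear)
    Dense(2 => 128, relu),      # decoder: latent → hidden (ReLU)
    Dense(128 => 784, sigmoid), # decoder: hidden → output in [0,1]
)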

MachineLearningCourse.Lecture07.demo - Method
demo(; train_size=5000, epochs=50, hidden_dim=128, latent_dim=2, display=true)

Complete MNIST autoencoder demo with 2D visualization.

Arguments

  • train_size::Int: Number of training samples (default: 5000)
  • epochs::Int: Number of training epochs (default: 50)
  • hidden_dim::Int: Hidden layer dimension (default: 128)
  • latent_dim::Int: Latent space dimension (default: 2)
  • display::Bool: Whether to display latent space plot (default: true)

Returns

  • Tuple: (X_test, Y_test, autoencoder, encoder)
    • X_test::Array{Float32, 2}: Flattened test images (784 × 1000)
    • Y_test::Vector{Int}: Test labels (0-9)
    • autoencoder: Trained autoencoder model
    • encoder: Encoder part of the autoencoder
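
As a quick sanity check on the returned model, the mean reconstruction error over the returned test set can be computed directly (a usage sketch; Flux.mse is used here purely for illustration):

using MachineLearningCourse, Flux
X_flat, Y_label, autoencoder, encoder = Lecture07.demo(display=false)
X̂ = autoencoder(X_flat)                     # reconstruct all test images
println("test MSE: ", Flux.mse(X̂, X_flat))  # mean squared reconstruction error
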
MachineLearningCourse.Lecture07.display_latent_space - Method
display_latent_space(encoder, X, Y)

Display 2D latent space visualization colored by digit class.

Arguments

  • encoder: Trained encoder model
  • X::Array{Float32, 2}: Flattened images (784 × n)
  • Y::Vector{Int}: Digit labels (0-9)

Returns

  • Plot object
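
Conceptually, the function encodes each image to its two latent coordinates and scatters them colored by label. A minimal sketch of that idea with Plots.jl (using the encoder, X, and Y from the Arguments above; the actual styling may differ):

using Plots

Z = encoder(X)              # 2 × n matrix of latent coordinates
scatter(Z[1, :], Z[2, :];
        group=Y,            # one color per digit class (0-9)
        markersize=2, markerstrokewidth=0,
        xlabel="latent dim 1", ylabel="latent dim 2")
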
MachineLearningCourse.Lecture07.extract_encoder - Method
extract_encoder(model)

Extract encoder part from trained autoencoder.

Arguments

  • model: Trained autoencoder model

Returns

  • Flux.Chain: Encoder portion (input → latent space mapping)

Useful for visualizing latent representations or transfer learning.
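
For a symmetric Chain like the one returned by create_autoencoder, extraction amounts to keeping the first half of the layers, as in this sketch (assuming the four-layer layout documented above; slicing a Flux Chain returns a Chain):

using Flux

# Assumed layout: layers 1-2 form the encoder of a flat 4-layer autoencoder.
extract_encoder_sketch(model::Chain) = model[1:2]  # input → hidden → latent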

MachineLearningCourse.Lecture07.train! - Method
train!(model, X_train; epochs=50, η=0.001f0, batch_size=256)

Train autoencoder using reconstruction loss (MSE).

Arguments

  • model: Autoencoder model to train
  • X_train: Training data (features × samples), values in [0,1]
  • epochs::Int: Number of training epochs (default: 50)
  • η::Float32: Adam optimizer learning rate (default: 0.001f0)
  • batch_size::Int: Mini-batch size (default: 256)

Returns

  • Vector{Float32}: Training losses per epoch

Trains the autoencoder to minimize MSE between input and reconstruction.
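
A representative Flux training loop for this setup might look as follows (a sketch assuming the standard DataLoader/Adam workflow, not necessarily this package's exact implementation):

using Flux

function train_sketch!(model, X_train; epochs=50, η=0.001f0, batch_size=256)
    loader = Flux.DataLoader(X_train; batchsize=batch_size, shuffle=true)
    opt_state = Flux.setup(Adam(η), model)
    losses = Float32[]
    for epoch in 1:epochs
        epoch_loss = 0.0f0
        for x in loader
            # Reconstruction objective: the target equals the input.
            loss, grads = Flux.withgradient(m -> Flux.mse(m(x), x), model)
            Flux.update!(opt_state, model, grads[1])
            epoch_loss += loss * size(x, 2)
        end
        push!(losses, epoch_loss / size(X_train, 2))  # mean loss this epoch
    end
    return losses
end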
