Lecture 07 - Autoencoders
MachineLearningCourse.Lecture07 — Module
Lecture07Autoencoders
Available Functions
- demo(): Train an autoencoder on MNIST handwritten digits
- display_latent_space(encoder, X_flat, Y_label): Display a 2D latent space visualization
- display_reconstruction(model, image): Display an original vs reconstructed image
Usage
using MachineLearningCourse
Lecture07.demo()

using MachineLearningCourse
X_flat, Y_label, autoencoder, encoder = Lecture07.demo(display=false)
i = 1 # set image index
Lecture07.display_reconstruction(autoencoder, X_flat[:, i])

using MachineLearningCourse
X_flat, Y_label, autoencoder, encoder = Lecture07.demo(display=false)
Lecture07.display_latent_space(encoder, X_flat, Y_label)

MachineLearningCourse.Lecture07.create_autoencoder — Method
create_autoencoder(input_dim, hidden_dim, latent_dim)

Create a symmetric autoencoder with an encoder-decoder architecture.
Arguments
- input_dim::Int: Input dimension (e.g., 784 for MNIST)
- hidden_dim::Int: Hidden layer dimension
- latent_dim::Int: Latent space dimension (bottleneck)
Returns
Flux.Chain: Autoencoder model with architecture:
- Encoder: input → hidden (ReLU) → latent (linear)
- Decoder: latent → hidden (ReLU) → output (sigmoid)
The model learns to compress input to latent space and reconstruct it.
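The architecture described above can be sketched directly in Flux. This is a minimal sketch assuming Flux.jl; the function and variable names here are illustrative, not the module's actual implementation:

```julia
using Flux

# Sketch of a symmetric autoencoder matching the documented architecture.
# Dimensions follow the MNIST defaults (784 → 128 → 2 → 128 → 784).
function sketch_autoencoder(input_dim, hidden_dim, latent_dim)
    encoder = Chain(
        Dense(input_dim => hidden_dim, relu),    # input → hidden (ReLU)
        Dense(hidden_dim => latent_dim),         # hidden → latent (linear)
    )
    decoder = Chain(
        Dense(latent_dim => hidden_dim, relu),   # latent → hidden (ReLU)
        Dense(hidden_dim => input_dim, sigmoid), # hidden → output (sigmoid)
    )
    Chain(encoder, decoder)
end

model = sketch_autoencoder(784, 128, 2)
```

The sigmoid output layer keeps reconstructions in [0, 1], matching the normalized pixel values the training data is assumed to use.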
MachineLearningCourse.Lecture07.demo — Method
demo(; train_size=5000, epochs=50, hidden_dim=128, latent_dim=2, display=true)

Complete MNIST autoencoder demo with 2D latent space visualization.
Arguments
- train_size::Int: Number of training samples (default: 5000)
- epochs::Int: Number of training epochs (default: 50)
- hidden_dim::Int: Hidden layer dimension (default: 128)
- latent_dim::Int: Latent space dimension (default: 2)
- display::Bool: Whether to display the latent space plot (default: true)
Returns
Tuple: (X_flat, Y_label, autoencoder, encoder)
- X_flat::Array{Float32, 2}: Flattened test images (784 × 1000)
- Y_label::Vector{Int}: Digit labels
- autoencoder: Trained autoencoder model
- encoder: Encoder part of the autoencoder
MachineLearningCourse.Lecture07.display_latent_space — Method
display_latent_space(encoder, X, Y)

Display a 2D latent space visualization colored by digit class.
Arguments
- encoder: Trained encoder model
- X::Array{Float32, 2}: Flattened images (784 × n)
- Y::Vector{Int}: Digit labels (0-9)
Returns
Plot object
MachineLearningCourse.Lecture07.display_reconstruction — Method
display_reconstruction(model, image)

Display an original vs reconstructed image comparison.
Arguments
- model: Trained autoencoder model
- image::Array{Float32, 1}: Flattened image vector (784 elements)
Returns
Tuple: (plot, error)
MachineLearningCourse.Lecture07.extract_encoder — Method
extract_encoder(model)

Extract the encoder part from a trained autoencoder.
Arguments
model: Trained autoencoder model
Returns
Flux.Chain: Encoder portion (input → latent space mapping)
Useful for visualizing latent representations or transfer learning.
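If the autoencoder is built as a flat Chain of layers, the encoder can be recovered by indexing the layers up to the bottleneck. This is a sketch of one plausible implementation, assuming Flux.jl and the four-layer structure documented above; it is not necessarily how the module implements it:

```julia
using Flux

# Assume a flat Chain matching the documented architecture:
# Dense(784→128, relu), Dense(128→2), Dense(2→128, relu), Dense(128→784, sigmoid)
model = Chain(Dense(784 => 128, relu), Dense(128 => 2),
              Dense(2 => 128, relu), Dense(128 => 784, sigmoid))

# Indexing a Chain with a range returns a sub-Chain; the first two
# layers map input → latent space.
encoder = model[1:2]

z = encoder(rand(Float32, 784))  # 2-element latent vector
```

Because the sub-Chain shares parameters with the full model, the encoder reflects the trained weights without copying them.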
MachineLearningCourse.Lecture07.train! — Method
train!(model, X_train; epochs=50, η=0.001f0, batch_size=256)

Train the autoencoder using reconstruction loss (MSE).
Arguments
- model: Autoencoder model to train
- X_train: Training data (features × samples), values in [0,1]
- epochs::Int: Number of training epochs (default: 50)
- η::Float32: Adam optimizer learning rate (default: 0.001f0)
- batch_size::Int: Mini-batch size (default: 256)
Returns
Vector{Float32}: Training losses per epoch
Trains the autoencoder to minimize MSE between input and reconstruction.
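A training loop consistent with this description might look like the following. This is a minimal sketch using current Flux.jl training APIs; the module's actual train! may be structured differently:

```julia
using Flux

# Sketch of MSE-based autoencoder training with Adam and mini-batches.
function sketch_train!(model, X_train; epochs=50, η=0.001f0, batch_size=256)
    opt_state = Flux.setup(Adam(η), model)
    loader = Flux.DataLoader(X_train; batchsize=batch_size, shuffle=true)
    losses = Float32[]
    for epoch in 1:epochs
        epoch_loss = 0f0
        for x in loader
            # Reconstruction loss: MSE between the input and the model's output.
            loss, grads = Flux.withgradient(m -> Flux.mse(m(x), x), model)
            Flux.update!(opt_state, grads[1], model)
            epoch_loss += loss
        end
        push!(losses, epoch_loss / length(loader))  # mean loss for this epoch
    end
    return losses
end
```

Returning the per-epoch mean losses matches the documented Vector{Float32} return value and makes it easy to plot the training curve afterwards.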