# Testing Guide

TinyRL includes a comprehensive test suite, built on Google Test, for validating the autograd core.
## Quick Start

### Running Tests

**Via CMake (recommended):**

```bash
cd build
ctest --output-on-failure   # Run all tests
ctest -R test_operations    # Run a specific test by name
ctest -V                    # Verbose output
```
### Standalone Scripts

Run specific test categories directly:

```bash
cd build
./tests/test_operations      # Basic tensor ops
./tests/test_mlps            # MLP networks
./tests/test_cnns            # CNN networks
./tests/test_error_handling  # Error validation
```
## Test Categories

| Test File | Coverage | Tests |
|---|---|---|
| `test_operations.cpp` | Tensor operations, autograd, broadcasting | 18 |
| `test_mlps.cpp` | Linear layers, Sequential, optimizers | 6 |
| `test_cnns.cpp` | Conv2D, pooling, 4D tensors | 4 |
| `test_error_handling.cpp` | Shape validation, error messages | 10 |
## What's Tested

**Operations:**

- Element-wise: `+`, `-`, `*`, `/`, `pow`, `exp`, `log`, `sqrt`
- Matrix: `matmul`, `transpose`, `reshape`, `flatten`
- Reductions: `sum`, `mean`
- Activations: `relu`, `leaky_relu`, `tanh`, `softmax`, `softplus`
- Broadcasting across all operations

**Layers:**

- `Linear`: forward, backward, parameter shapes
- `Conv2D`: convolution, padding, stride
- `LayerNorm`: normalization, epsilon handling
- `Sequential`: composition, parameter collection

**Optimizers:**

- `SGD`: parameter updates, learning rate
- `RMSProp`: adaptive learning rate, epsilon stability
## Writing Tests

### Basic Test Structure

```cpp
#include <gtest/gtest.h>
#include "autograd.h"

TEST(CategoryName, TestName) {
    ag::manual_seed(42);  // Reproducibility

    ag::Tensor x(ag::Matrix::Random(2, 3), true);
    ag::Tensor y(ag::Matrix::Random(3, 2), true);

    auto z = x.matmul(y);
    auto loss = ag::sum(z);
    loss.backward();

    // Verify gradients exist and have the correct shape
    EXPECT_EQ(x.grad().rows(), 2);
    EXPECT_EQ(x.grad().cols(), 3);
}
```
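A single test can also be run in isolation with Google Test's standard `--gtest_filter` flag (binary paths here assume the build layout shown above):

```bash
cd build
./tests/test_operations --gtest_filter=CategoryName.TestName
```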
### Gradient Checking

```cpp
TEST(Autograd, NumericalGradientCheck) {
    ag::manual_seed(42);
    ag::Tensor x(ag::Matrix::Random(2, 2), true);

    auto loss = ag::sum(ag::pow(x, 2.0));
    loss.backward();

    // Analytical gradient: d/dx[sum(x^2)] = 2x
    double eps = 1e-5;
    for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < 2; ++j) {
            double analytical = x.grad()(i, j);
            double expected = 2.0 * x.value()(i, j);
            EXPECT_NEAR(analytical, expected, eps);
        }
    }
}
```
## Best Practices

| Practice | Description |
|---|---|
| Seed the RNG | Use `ag::manual_seed(42)` for reproducibility |
| Small matrices | Use small sizes (2x2, 3x3) so results can be checked exactly |
| Tolerance | Use `EXPECT_NEAR(a, b, 1e-5)` for floating-point comparisons |
| Test shapes | Verify output shapes, not just values |
| Test gradients | Check both forward and backward passes |
| Edge cases | Test broadcasting, single elements, and unusual shapes |
## Debugging

### GDB Debugging
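To step through a failing test, a typical session sketch (assuming the binaries were built with debug symbols, e.g. `-DCMAKE_BUILD_TYPE=Debug`):

```bash
cd build
# Run one test under the debugger; --gtest_filter is a standard
# Google Test flag for selecting a single test by Category.Name.
gdb --args ./tests/test_operations --gtest_filter=CategoryName.TestName
# Inside gdb: "run" to start, "bt" for a backtrace after a crash.
```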
### Verbose CTest Output
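For more detail than the Quick Start commands show, CTest's verbosity flags can be combined with a test selector:

```bash
cd build
ctest -R test_operations -V       # Verbose output for one test binary
ctest --output-on-failure -VV     # Extra-verbose; full logs on failure
```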
### Print Tensor Values

```cpp
std::cout << "x value:\n" << x.value() << std::endl;
std::cout << "x grad:\n" << x.grad() << std::endl;
```
## Continuous Integration

Tests run automatically on push. To run locally before committing:
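A pre-commit run mirrors the Quick Start invocation:

```bash
cd build
ctest --output-on-failure
```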
All 38 tests should pass before submitting changes.
## See Also

- Contributing Guide — How to contribute
- API Reference — API documentation
- Examples — Usage examples