# Python Bindings

TinyRL provides Python bindings via pybind11, exposing the Autograd core API for rapid prototyping and experimentation.
## Quick Start

```python
import autograd
import numpy as np

# Create tensor from NumPy array
x = autograd.Tensor(np.random.rand(2, 3), requires_grad=True)

# Operations
y = autograd.relu(x)
loss = autograd.sum(y)

# Backward pass
loss.backward()
print(f"Gradient shape: {x.grad().shape}")
```
## Installation

### Quick Install

### Manual Build

### With Stream-X Bindings
## API Overview

| Component | Python API | Description |
|---|---|---|
| Tensor | `autograd.Tensor(data, requires_grad=True, name="")` | Core tensor class |
| Linear | `autograd.Linear(in, out)` | Fully connected layer |
| Conv2D | `autograd.Conv2D(in_ch, out_ch, k)` | 2D convolution |
| Sequential | `autograd.Sequential()` | Model container |
| SGD | `autograd.SGD(lr)` | Optimizer |
| RMSProp | `autograd.RMSProp(lr, alpha, epsilon)` | Adaptive optimizer |
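The table lists both optimizers. As a reference for what `RMSProp(lr, alpha, epsilon)` conventionally computes, here is the textbook update rule in plain NumPy. This is a sketch of the standard algorithm, not necessarily the binding's exact implementation, and the helper name is ours:

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=0.01, alpha=0.99, epsilon=1e-8):
    """One RMSProp update: keep a running average of squared
    gradients and scale the step by its square root."""
    v = alpha * v + (1 - alpha) * grad**2
    w = w - lr * grad / (np.sqrt(v) + epsilon)
    return w, v

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
grad = np.array([0.5, -0.5])
w, v = rmsprop_step(w, grad, v)
print(w)  # parameters move against the gradient direction
```

Because the step is divided by the root of the running average, parameters with consistently large gradients take proportionally smaller steps.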
### Math Functions

```python
# Element-wise
autograd.relu(x)
autograd.sigmoid(x)
autograd.tanh(x)
autograd.softmax(x)   # applied over all elements
autograd.softplus(x)
autograd.exp(x)
autograd.log(x)
autograd.pow(x, 2.0)
autograd.sqrt(x)

# Reductions
autograd.sum(x)
autograd.mean(x)

# Matrix operations
x.matmul(y)
autograd.transpose(x)
```
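The comment above notes that `softmax` is applied over all elements rather than per-row. A plain-NumPy sketch of that semantic (the helper name `softmax_all` is ours, not part of the bindings):

```python
import numpy as np

def softmax_all(x):
    """Softmax over every element of x, ignoring axes:
    the result sums to 1 across the whole array."""
    z = np.exp(x - x.max())  # subtract the max for numerical stability
    return z / z.sum()

x = np.array([[1.0, 2.0], [3.0, 4.0]])
s = softmax_all(x)
print(s.sum())  # sums to 1: a single distribution over all four entries
```

This differs from frameworks that default to a per-row softmax, so batched classification heads may need explicit reshaping.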
## Examples

### Tensor Operations

```python
import autograd
import numpy as np

# Create tensors (float data, matching the compiled floating-point dtype)
a = autograd.Tensor(np.array([[1.0, 2.0], [3.0, 4.0]]), requires_grad=True)
b = autograd.Tensor(np.array([[5.0, 6.0], [7.0, 8.0]]), requires_grad=True)

# Operations
c = a.matmul(b)
d = autograd.relu(c)
loss = autograd.sum(d)

# Backward pass
loss.backward()
print(f"a gradient:\n{a.grad()}")
print(f"b gradient:\n{b.grad()}")
```
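For this particular graph the gradients can be checked by hand: every entry of `c = a @ b` is positive, so the ReLU mask is all ones, and the chain rule gives `dL/da = dL/dc @ b.T` and `dL/db = a.T @ dL/dc`. A plain-NumPy cross-check of the values `a.grad()` and `b.grad()` should contain:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

c = a @ b                       # [[19, 22], [43, 50]] (all positive)
mask = (c > 0).astype(c.dtype)  # ReLU derivative: all ones here
dc = np.ones_like(c) * mask     # dL/dc for L = sum(relu(c))

grad_a = dc @ b.T               # [[11, 15], [11, 15]]
grad_b = a.T @ dc               # [[4, 4], [6, 6]]
print(grad_a)
print(grad_b)
```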
### Training a Model

```python
import autograd
import numpy as np

# Create model
model = autograd.Sequential()
model.add(autograd.Linear(10, 32))
model.add(autograd.ReLU())
model.add(autograd.Linear(32, 1))

# Create optimizer
opt = autograd.SGD(0.01)
opt.add_parameters(model.layers())

# Training loop
for epoch in range(100):
    x = autograd.Tensor(np.random.rand(16, 10), requires_grad=False)
    y = autograd.Tensor(np.random.rand(16, 1), requires_grad=False)

    pred = model.forward(x)
    loss = autograd.sum(autograd.pow(pred - y, 2.0)) / 16

    opt.zero_grad()
    loss.backward()
    opt.step()

    if epoch % 20 == 0:
        print(f"Epoch {epoch}, Loss: {loss.item():.4f}")
```
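The loss above is a mean squared error. For intuition, one iteration boils down to the following in plain NumPy, shown here for a single linear layer without bias. This assumes `SGD` performs the vanilla update `w -= lr * grad`; the variable names are illustrative, not the binding's API:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 1))   # weights of a single Linear(10, 1)
lr = 0.01

x = rng.random((16, 10))
y = rng.random((16, 1))

pred = x @ w
loss = np.sum((pred - y) ** 2) / 16     # same MSE as above
grad_w = x.T @ (2 * (pred - y)) / 16    # dL/dw by the chain rule
w -= lr * grad_w                        # vanilla SGD step

new_loss = np.sum((x @ w - y) ** 2) / 16
print(loss, "->", new_loss)  # loss decreases for this small lr
```

The autograd version does the same thing, except `loss.backward()` derives `grad_w` for every layer automatically.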
## Environment Setup

If installed via `install.sh`, the module is automatically available. For in-place builds, the compiled module (`autograd.so` or `autograd.pyd`) is placed in `examples/python/`, and that directory must be on Python's module search path.
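For an in-place build, one way to make the module importable is to put that directory on the path before the import. This is a sketch; adjust the path to wherever your build actually placed the module:

```python
import sys
from pathlib import Path

# Point Python at the directory containing autograd.so / autograd.pyd;
# after this, `import autograd` resolves against the in-place build.
sys.path.insert(0, str(Path("examples/python").resolve()))
```

Setting `PYTHONPATH` to the same directory in the shell achieves the same effect without modifying scripts.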
## Limitations

| Limitation | Details |
|---|---|
| Dtype | Determined at compile time (`float32` by default) |
| Stream-X | Requires the explicit `--with-stream-x-bindings` flag |
| No PyPI package | Must be built from source |
**Note:** The module name `autograd` is unrelated to other Python packages with similar names.
## See Also

- Build Guide — Installation options
- Examples — Python example scripts
- API Reference — Complete API documentation