require(bridgestan)
model = StanModel$new(
  "../blocks/ariane.stan",
  "../blocks/ariane.json",
  1
)
model$log_density_gradient(c(0.1, 0.2), propto = TRUE, jacobian = TRUE)
$val
[1] -5.299514

$gradient
[1] 11.389504  1.451964
As we just saw, HMC requires computing the gradient (the vector of partial derivatives) of the log density at each iteration. But notice that we did not have to derive the gradient manually when writing the Stan model.
Why? Thanks to automatic differentiation (autodiff)!
Reverse-mode autodiff: see Wikipedia for details. In summary, reverse-mode autodiff computes \(\nabla \log \gamma\) at a cost proportional to a single evaluation of \(\log \gamma\), no matter the dimension \(d\) of the input: a forward pass records the intermediate values, then a backward pass propagates derivatives through the chain rule in reverse.
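As a toy illustration of the idea (this is a hand-rolled sketch, not Stan's actual autodiff library), here is a forward pass followed by a backward adjoint pass for \(f(x, y) = \log(xy) + \sin(x)\); all names below are made up for this example:

```r
# Reverse-mode autodiff by hand for f(x, y) = log(x * y) + sin(x).
# Forward pass: compute and store intermediates.
# Backward pass: propagate adjoints (d out / d node) via the chain rule.
f_grad <- function(x, y) {
  # forward pass
  a <- x * y        # a = x * y
  b <- log(a)       # b = log(a)
  s <- sin(x)       # s = sin(x)
  out <- b + s      # out = b + s

  # backward pass: adjoint of the output is 1
  d_out <- 1
  d_b <- d_out * 1          # out = b + s
  d_s <- d_out * 1
  d_a <- d_b * (1 / a)      # b = log(a)
  d_x <- d_s * cos(x)       # s = sin(x)
  d_x <- d_x + d_a * y      # a = x * y
  d_y <- d_a * x

  list(val = out, gradient = c(d_x, d_y))
}

f_grad(0.5, 2.0)
```

Note that the backward pass costs about the same as the forward pass, independent of how many inputs \(f\) has; that is the key property HMC exploits.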
If you need just the gradient of the log density of your Stan model and not HMC (e.g., to develop new inference algorithms), use BridgeStan (which bridges Stan with R, Julia, Rust, etc.).
To install it:
install.packages("remotes")
require(remotes)
remotes::install_github("https://github.com/roualdes/bridgestan", subdir="R")
To use it:
require(bridgestan)
model = StanModel$new(
  "../blocks/ariane.stan",
  "../blocks/ariane.json",
  1
)
model$log_density_gradient(c(0.1, 0.2), propto = TRUE, jacobian = TRUE)
$val
[1] -5.299514

$gradient
[1] 11.389504  1.451964
In contrast, both forward-mode autodiff and numerical differentiation would be \(d\) times slower at computing \(\nabla \log \gamma\) compared to \(\log \gamma\).
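To see where the factor of \(d\) comes from in the numerical case: a central finite-difference gradient needs two evaluations of \(\log \gamma\) per coordinate, so about \(2d\) evaluations in total. A minimal sketch (the density below is a stand-in standard normal, not the ariane model):

```r
# Central finite differences: 2 evaluations of f per coordinate,
# so the full gradient costs roughly 2 * d evaluations of f.
numerical_gradient <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)  # h in coordinate i, 0 elsewhere
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

# Stand-in unnormalized log density: standard normal, exact gradient is -x
log_gamma <- function(x) -sum(x^2) / 2
numerical_gradient(log_gamma, c(0.1, 0.2))
```

Reverse-mode autodiff avoids this blow-up, which is why BridgeStan's `log_density_gradient` stays cheap even for high-dimensional models.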