Currently, `laplace_marginal_rv_logp` assumes that the dimension of the latent field $x$ is `values[0].shape[-1]`; that is, it assumes:
- There is only one observable $y$ whose value is passed into the logp (this should be true given the structure of INLA), so `y = values[0]`.
- $x$ has the same dimension as the observable $y$.
- That dimension is the final entry of `y.shape` (for example, if the observable tensor has shape 1000×3, there are 1000 datapoints and the dimension is 3).
Ideally, there should be a more robust way of obtaining the dimension d. There is no risk of silently invalid results if d is incorrect, since the code will simply crash, but crashing is not desirable behavior either.
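The assumption above can be sketched with placeholder NumPy arrays. Note that `values` here is an illustrative stand-in for the value tensors passed into the logp, and the shapes are made up for the example:

```python
import numpy as np

# Stand-in for the value tensors passed to the logp: a single observable
# y with 1000 datapoints, each of dimension 3.
values = [np.zeros((1000, 3))]

# Assumption 1: there is exactly one observable.
y = values[0]

# Assumptions 2 and 3: the latent-field dimension d equals the last
# entry of y.shape.
d = y.shape[-1]

assert d == 3
```

If any of these assumptions fails (multiple observables, or a latent field whose dimension does not match the trailing axis of $y$), `d` comes out wrong and downstream shape checks crash rather than producing invalid results.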