Releases: TuringLang/Turing.jl
v0.40.3
Turing v0.40.3
This patch makes the `resume_from` keyword argument work correctly when sampling multiple chains. In the process this also fixes a method ambiguity caused by a bugfix in DynamicPPL 0.37.2.
This patch means that if you are using `RepeatSampler()` to sample from a model, and you want to obtain `MCMCChains.Chains` from it, you need to specify `sample(...; chain_type=MCMCChains.Chains)`.
This only applies if the sampler itself is a `RepeatSampler`; it doesn't apply if you are using `RepeatSampler` within another sampler like `Gibbs`.
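As a minimal sketch of the new requirement (the `demo` model below is purely illustrative):

```julia
using Turing, MCMCChains

# Hypothetical toy model, only for illustration.
@model function demo()
    x ~ Normal()        # prior
    1.5 ~ Normal(x, 1)  # literal observation
end

# RepeatSampler(MH(), 3) takes three MH steps per saved sample. When it is the
# top-level sampler, chain_type must now be given explicitly to get a Chains object.
chain = sample(demo(), RepeatSampler(MH(), 3), 500; chain_type=MCMCChains.Chains)
```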
Merged pull requests:
- Fix typos in comments and variable names (#2665) (@Copilot)
- Fix multiple-chain method ambiguity (#2670) (@penelopeysm)
v0.40.2
Turing v0.40.2
`sample(model, NUTS(), N; verbose=false)` now suppresses the 'initial step size' info message.
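For example, assuming a toy model `demo` (hypothetical, for illustration only):

```julia
using Turing

# Hypothetical toy model.
@model function demo()
    x ~ Normal()
    2.0 ~ Normal(x, 1)
end

# verbose=false suppresses the initial-step-size info message printed by NUTS.
chain = sample(demo(), NUTS(), 1_000; verbose=false)
```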
Merged pull requests:
- Improve error message for initialization failures with troubleshootin… (#2637) (@AoifeHughes)
- Suppress info message with verbose=false (#2657) (@penelopeysm)
Closed issues:
- Option to suppress "Warning" and "Info" statements (#1398)
v0.40.1
Turing v0.40.1
Extra release to trigger a Documenter.jl build (GitHub was having an outage when 0.40.0 was released). There are no code changes.
v0.40.0
Turing v0.40.0
Breaking changes
DynamicPPL 0.37
Turing.jl v0.40 updates DynamicPPL compatibility to 0.37.
The summary of the changes provided here is intended for end-users of Turing.
If you are a package developer, or would otherwise like to understand these changes in-depth, please see the DynamicPPL changelog.
- `@submodel` is now completely removed; please use `to_submodel` instead.
- Prior and likelihood calculations are now completely separated in Turing. Previously, the log-density was accumulated in a single field, so there was no clear way to separate the prior and likelihood components.
  - `@addlogprob! f`, where `f` is a float, now adds to the likelihood by default.
  - You can instead use `@addlogprob! (; logprior=x, loglikelihood=y)` to control which log-density component to add to (see the sketch after this list).
  - This means that `PriorContext` and `LikelihoodContext` are no longer needed, and they have now been removed.
- The special `__context__` variable has been removed. If you still need to access the evaluation context, it is now available as `__model__.context`.
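A minimal sketch of the new `@addlogprob!` behaviour (the model and the added values are purely illustrative):

```julia
using Turing

@model function demo(y)
    x ~ Normal()
    # A bare float or expression now adds to the likelihood by default:
    @addlogprob! logpdf(Normal(x, 1), y)
    # A NamedTuple targets each component explicitly (placeholder values):
    @addlogprob! (; logprior=0.0, loglikelihood=0.0)
end

chain = sample(demo(1.0), MH(), 100)
```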
Log-density in chains
When sampling from a Turing model, the resulting `MCMCChains.Chains` object now contains not only the log-joint (accessible via `chain[:lp]`) but also the log-prior and log-likelihood (`chain[:logprior]` and `chain[:loglikelihood]` respectively).
These values correspond to the log-density of the sampled variables exactly as per the model definition / user parameterisation, and thus ignore any linking (transformation to unconstrained space).
For example, if the model is `@model f() = x ~ LogNormal()`, `chain[:lp]` will always contain the value of `logpdf(LogNormal(), x)` for each sampled value of `x`.
Previously these values could be incorrect if linking had occurred: some samplers would return `logpdf(Normal(), log(x))`, i.e. the log-density with respect to the transformed distribution.
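For instance, with a toy model (hypothetical, for illustration) the three quantities can be inspected directly:

```julia
using Turing

# Hypothetical toy model with a constrained (positive) variable.
@model function demo()
    x ~ LogNormal()
    2.0 ~ Normal(x, 1)
end

chain = sample(demo(), NUTS(), 1_000; progress=false)

# chain[:lp] ≈ chain[:logprior] .+ chain[:loglikelihood], and chain[:logprior]
# matches logpdf(LogNormal(), x) even though NUTS works in log-transformed space.
chain[[:lp, :logprior, :loglikelihood]]
```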
Gibbs sampler
When using Turing's Gibbs sampler, e.g. `Gibbs(:x => MH(), :y => HMC(0.1, 20))`, the conditioned variables (for example `y` during the MH step, or `x` during the HMC step) are treated as true observations.
Thus the log-density associated with them is added to the likelihood.
Previously these would effectively be added to the prior (in the sense that if `LikelihoodContext` was used, they would be ignored).
This is unlikely to affect users, but we mention it here to be explicit.
This change only affects the log probabilities as the Gibbs component samplers see them; the resulting chain will still include the usual log-prior, log-likelihood, and log-joint, as described above.
Particle Gibbs
Previously, only 'true' observations (i.e., `x ~ dist` where `x` is a model argument or conditioned upon) would trigger resampling of particles.
Specifically, there were two cases where resampling would not be triggered:

- calls to `@addlogprob!`;
- Gibbs-conditioned variables, e.g. `y` in `Gibbs(:x => PG(20), :y => MH())`.
Turing 0.40 changes this such that both of the above cause resampling.
(The second case follows from the changes to the Gibbs sampler, see above.)
This release also fixes a bug where, if the model ended with one of these statements, their contribution to the particle weight would be ignored, leading to incorrect results.
The changes above also mean that certain models that previously worked with PG-within-Gibbs may now error.
Specifically this is likely to happen when the dimension of the model is variable.
For example:
```julia
@model function f()
    x ~ Bernoulli()
    if x
        y1 ~ Normal()
    else
        y1 ~ Normal()
        y2 ~ Normal()
    end
    # (some likelihood term...)
end
sample(f(), Gibbs(:x => PG(20), (:y1, :y2) => MH()), 100)
```
This sampler now cannot be used for this model because depending on which branch is taken, the number of observations will be different.
To use PG-within-Gibbs, the number of observations that the PG component sampler sees must be constant.
Thus, for example, this will still work if `x`, `y1`, and `y2` are grouped together under the PG component sampler.
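A minimal sketch of that workaround, reusing the model `f` above (with all variables under one component this is effectively plain `PG(20)`):

```julia
# Grouping x, y1, and y2 under the same PG component means every particle
# sees the same number of observations, regardless of which branch is taken.
sample(f(), Gibbs((:x, :y1, :y2) => PG(20)), 100)
```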
If you absolutely require the old behaviour, we recommend using Turing.jl v0.39, but also thinking very carefully about what the expected behaviour of the model is, and checking that Turing is sampling from it correctly (note that the behaviour on v0.39 may in general be incorrect because Gibbs-conditioned variables did not trigger resampling).
We would also welcome any GitHub issues highlighting such problems.
Our support for dynamic models is incomplete and is liable to undergo further changes.
Other changes
- Sampling using `Prior()` should now be about twice as fast, because we now avoid evaluating the model twice on every iteration.
- `Turing.Inference.Transition` now has different fields. If `t isa Turing.Inference.Transition`, then `t.stat` is always a NamedTuple, not `nothing` (if it genuinely has no information, it is an empty NamedTuple). Furthermore, `t.lp` has been split up into `t.logprior` and `t.loglikelihood` (see also the 'Log-density in chains' section above and the sketch after this list).
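A minimal sketch of the new `Transition` fields (this is internal API and may change; the `demo` model is purely illustrative):

```julia
using Turing

# Hypothetical toy model.
@model function demo()
    x ~ Normal()
    1.0 ~ Normal(x, 1)
end

# chain_type=Any returns the raw vector of transitions instead of a Chains object.
ts = sample(demo(), MH(), 10; chain_type=Any)
t = first(ts)

t.stat           # always a NamedTuple (possibly empty), never nothing
t.logprior       # together with t.loglikelihood, this replaces the old t.lp
t.loglikelihood
```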
v0.39.10
Turing v0.39.10
Added a compatibility entry for DataStructures v0.19.
Merged pull requests:
- Use accumulators to fix all logp calculations when sampling (#2630) (@penelopeysm)
- CompatHelper: bump compat for DataStructures to 0.19, (keep existing compat) (#2643) (@github-actions[bot])
v0.39.9
Turing v0.39.9
Reverted a buggy change introduced in 0.39.5 to the external sampler interface.
For Turing 0.39, external samplers should define `Turing.Inference.getparams(::DynamicPPL.Model, ::MySamplerTransition)` rather than `AbstractMCMC.getparams(::DynamicPPL.Model, ::MySamplerState)` to obtain a vector of parameters from the model.
Note that this may change in future breaking releases.
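As a sketch, an external sampler package would opt in roughly as follows (`MySamplerTransition` and its `params` field are hypothetical names):

```julia
using Turing, DynamicPPL

# Hypothetical transition type defined by an external sampler package.
struct MySamplerTransition
    params::Vector{Float64}
end

# Turing 0.39 looks up this method (not AbstractMCMC.getparams on the state)
# to extract a parameter vector for the given model.
Turing.Inference.getparams(::DynamicPPL.Model, t::MySamplerTransition) = t.params
```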
Merged pull requests:
- DPPL 0.37 compat for particle MCMC (#2625) (@mhauru)
- "Fixes" for PG-in-Gibbs (#2629) (@penelopeysm)
- Fix externalsampler interface (#2640) (@penelopeysm)
v0.39.8
Turing v0.39.8
MCMCChains.jl doesn't understand vector- or matrix-valued variables, so in Turing we split such values up into their individual components.
This patch carries out some internal refactoring to avoid splitting up `VarName`s until absolutely necessary.
There are no user-facing changes in this patch.
Merged pull requests:
- Fix typos in README.md (#2624) (@xukai92)
- Gibbs fixes for DPPL 0.37 (plus tiny bugfixes for ESS + HMC) (#2628) (@penelopeysm)
v0.39.7
Turing v0.39.7
Update compatibility to AdvancedPS 0.7 and Libtask 0.9.
These new libraries provide significant speedups for particle MCMC methods.
Merged pull requests:
- AdvancedPS v0.7 (and thus Libtask v0.9) support (#2585) (@mhauru)
- Add changelog for AdvancedPS/Libtask #2585 (#2622) (@penelopeysm)
- Bump patch in Project.toml (#2623) (@penelopeysm)
Closed issues:
- Dirac not working properly when assigned in a vector (#2621)
v0.39.6
Turing v0.39.6
Bumped compatibility of AbstractPPL to include 0.13.
Merged pull requests:
- [email protected] compat (#2620) (@penelopeysm)
v0.39.5
Turing v0.39.5
Fixed a bug where sampling with an `externalsampler` would not set the log probability density inside the resulting chain.
Note that there may still be bugs where the log-Jacobian term is not correctly included; a fix is being worked on.
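A minimal sketch of the affected usage (assumes AdvancedMH is installed; the `demo` model is purely illustrative):

```julia
using Turing
using AdvancedMH
using LinearAlgebra: I

# Hypothetical toy model with a single parameter.
@model function demo()
    x ~ Normal()
    1.0 ~ Normal(x, 1)
end

# Wrap an AdvancedMH random-walk sampler for use with Turing.
rwmh = AdvancedMH.RWMH(MvNormal(zeros(1), I))
chain = sample(demo(), externalsampler(rwmh), 1_000)

chain[:lp]  # now populated; previously it was not set for external samplers
```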
Merged pull requests:
- Update logp in varinfo when external samplers are used (#2616) (@penelopeysm)
Closed issues:
- Gibbs sampler does not carry through log-prob from an external sampler (#2583)