
Add specialize logdensityof directly where applicable? #59

Open
oschulz opened this issue Jun 5, 2022 · 2 comments

Comments

oschulz commented Jun 5, 2022

In some cases, we'd gain performance by specializing logdensityof directly:

julia> direct_logdensityof(::StdNormal, x) = - muladd(x, x, log2π) / 2;

julia> m = StdNormal();

julia> X = rand(m^1000); Y = similar(X);

julia> @benchmark broadcast!(logdensityof, $Y, $(Ref(m)), $X)
BenchmarkTools.Trial: 10000 samples with 864 evaluations.
 Range (min … max):  137.241 ns … 364.372 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     166.777 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   169.626 ns ±   7.933 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

julia> @benchmark broadcast!(direct_logdensityof, $Y, $(Ref(m)), $X)
BenchmarkTools.Trial: 10000 samples with 967 evaluations.
 Range (min … max):   82.804 ns … 453.609 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     106.885 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   108.045 ns ±  12.766 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%
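To make the comparison concrete, here is a minimal, self-contained sketch of the two paths (the types and function names are stand-ins, not the actual MeasureBase API): the generic path goes through a support check plus a base-measure density, while the direct specialization computes the log-density in one step.

```julia
# Self-contained sketch with stand-in definitions (not the real MeasureBase API).
const log2π = log(2π)

struct StdNormal end

# Generic-style path: support check, then density w.r.t. the base measure
insupport(::StdNormal, x) = true
logdensity_def(::StdNormal, x) = -muladd(x, x, log2π) / 2
logdensityof(m::StdNormal, x) =
    insupport(m, x) ? logdensity_def(m, x) : oftype(float(x), -Inf)

# Direct specialization, as in the benchmark above
direct_logdensityof(::StdNormal, x) = -muladd(x, x, log2π) / 2
```

Both paths agree on the result; the gain in the benchmark comes from skipping the intermediate dispatch and support-check machinery, not from a different formula.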
@cscherrer
Great! I'm all in favor of optimizations like this. We should try to make it apply as often as possible.

In methods like HMC, we associate to each measure $\mu \in \mathcal{M}(X)$ a transform $f : \mathbb{R}^n \to X$. The transform maps to the support of $\mu$ by construction, so we can ignore the call to insupport and call unsafe_logdensityof directly. So my first thought is to add a method for that.
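As a sketch of that idea (standalone stand-ins, not the actual MeasureBase signatures): when a transform maps all of ℝ onto the support of a measure, every transformed point is in the support by construction, so the insupport check inside logdensityof is redundant and unsafe_logdensityof can be called directly.

```julia
# Hypothetical, self-contained sketch: f maps ℝ onto (0, ∞), the support of
# a standard Exponential, so points produced by f never fail the support check.
struct StdExponential end

insupport(::StdExponential, x) = x >= zero(x)
unsafe_logdensityof(::StdExponential, x) = -x   # valid only inside the support
logdensityof(m::StdExponential, x) =
    insupport(m, x) ? unsafe_logdensityof(m, x) : oftype(float(x), -Inf)

# HMC-style transform f : ℝ → support(μ); exp lands in (0, ∞) by construction
f(z) = exp(z)

x = f(-2.0)
unsafe_logdensityof(StdExponential(), x) == logdensityof(StdExponential(), x)  # true
```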

We could also add a method

logdensity_def(::StdNormal, ::Lebesgue, x) = -muladd(x, x, log2π) / 2

though ideally we could get performance for both cases while adding only one method.

oschulz commented Jun 6, 2022

Ok, I'll do some more benchmarking ... :-)
