From imaging algorithms to quantum methods Seminar

Europe/Warsaw
https://cern.zoom.us/j/66151941204?pwd=n7upvvZYibexBhbtyn5kvTpy36L0Wo.1 (Zoom)

Konrad Klimaszewski (NCBJ), Wojciech Krzemien (NCBJ)

#Title: Bayesian fitting in positronium lifetime imaging
#Presenter: Roman Y. Shopa
#Date: 16.06.2025

#Participants: 

Wojciech Krzemien (WK), Lech Raczyński (LR), Roman Shopa (RS), Wojciech Wiślicki (WW), Michał Mazurek (MM), Mateusz Bała (MB), Konrad Klimaszewski (KK), Krzysztof Nawrocki (KN)


#Questions/Remarks

WW: What does the pathway mean?
RS: It is just the type of decay: para-positronium, ortho-positronium, etc.

WK: Is the sampling from the prior distribution?
RS: Yes

WK: Which posterior sampling method do you use underneath?
RS: A Hamiltonian Monte Carlo (HMC) implementation.

WK: Can you give some intuition for how this convergence works? E.g., in which direction is the next parameter value chosen?
RS: I would need to read a bit more.

KK: Does your result already include the MCMC or the No-U-Turn sampler (NUTS)?
RS: Yes

MM: What are those vertical lines on p. 13? E.g., the red line at -10 ns?
RS: The line on the left probably delimits the spectrum region. As for the red one, I can hardly see it; the slide is from J. Qi's presentation, so I cannot say for certain.

LR: It is not obvious why the Bayesian method performs worse on the log scale and better on the linear scale.

WW: A dependence on the priors is visible, which is worrying. If it is really a Markov chain, then it cannot depend on the priors; this follows from the Markov theorem.
So either there is some numerical problem, or it is not truly a Markov chain.

LR: A Markov process, by definition, depends only on the previous step. The method should therefore not be sensitive to the starting point; this is guaranteed mathematically.
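
[Added for reference; the notation is illustrative and not from the discussion. The property being invoked, in LaTeX:]

% Markov property: the next state depends only on the current one
P(X_{n+1} \in A \mid X_n, X_{n-1}, \dots, X_0) = P(X_{n+1} \in A \mid X_n)
% For an ergodic chain, the distribution of X_n converges to the
% stationary distribution \pi regardless of the starting point x_0:
\lim_{n \to \infty} P(X_n \in A \mid X_0 = x_0) = \pi(A)

In MCMC terms: asymptotically the samples follow the posterior regardless of the starting point, but within a finite number of iterations a poor initialisation can still leave the chain visibly unconverged.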

RS: Hamiltonian Monte Carlo introduces a momentum variable for each parameter, treating the sampling process as the movement of a particle. It simulates the particle's trajectory guided by the gradient of the log-posterior, which is proportional to the likelihood × prior.
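
[A rough illustration of that mechanism; this is an added sketch in R, not the presenter's code, and the target density, step size, and trajectory length are arbitrary assumptions:]

log_post <- function(theta) -0.5 * (theta - 2)^2  # toy Gaussian log-posterior
grad_lp  <- function(theta) -(theta - 2)          # its gradient

hmc_step <- function(theta, eps = 0.1, L = 20) {
  p  <- rnorm(1)                       # sample an auxiliary momentum
  th <- theta
  pn <- p + 0.5 * eps * grad_lp(th)    # half step for the momentum
  for (i in seq_len(L - 1)) {          # leapfrog trajectory along the gradient
    th <- th + eps * pn
    pn <- pn + eps * grad_lp(th)
  }
  th <- th + eps * pn
  pn <- pn + 0.5 * eps * grad_lp(th)
  # Metropolis accept/reject corrects the numerical integration error
  h_old <- -log_post(theta) + 0.5 * p^2
  h_new <- -log_post(th) + 0.5 * pn^2
  if (runif(1) < exp(h_old - h_new)) th else theta
}

theta <- 0
samples <- numeric(1000)
for (i in seq_along(samples)) samples[i] <- theta <- hmc_step(theta)

The momentum turns a random walk into directed trajectories, which is why the gradient of the log-posterior (and hence the prior) steers where the next sample lands.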

MM: Would Bayesian neural networks be helpful in this case?
RS: Not sure; worth checking, but impractical for PLI, where there are many spectra for many voxels... Bayesian fitting itself is a good alternative to least squares, which struggles with overfitting and is sensitive to the initial guess.
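
[To make the least-squares comparison concrete; this is an added toy example, with a single-exponential model and invented numbers, showing the initial-guess sensitivity in a basic R nls() fit:]

# Toy one-component lifetime spectrum: counts = A * exp(-t/tau) + background
set.seed(1)
t <- seq(0, 20, length.out = 200)                 # time [ns]
y <- 100 * exp(-t / 2.5) + 1 + rnorm(length(t), sd = 2)

# A reasonable starting point converges to the true parameters,
fit_good <- nls(y ~ A * exp(-t / tau) + b,
                start = list(A = 80, tau = 2, b = 0.5))

# while a poor one may stop in a wrong minimum or fail outright.
fit_bad <- try(nls(y ~ A * exp(-t / tau) + b,
                   start = list(A = 1, tau = 50, b = 50)))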

KK: Do you observe a problem with initialisation for Bayesian fitting?
RS: I observe a dependence on the priors. If I start with a prior far from the assumed truth, then it cannot converge correctly.

KK: How fast is the convergence?
RS: Very slow, even in multi-threaded mode. It requires many iterations/Markov chains, each processed more slowly than a least-squares minimisation. Also, the R package 'brms' requires C++ compilation each time I adjust the prior. It is impractical for multi-voxel PLI.
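
[For reference, a hedged sketch of what such a brms fit might look like; the nonlinear formula, the priors, and the data frame df are assumptions for illustration, not taken from the presentation:]

library(brms)

# Hypothetical single-exponential model for one voxel's spectrum;
# df is assumed to hold columns t (time) and y (counts).
fit <- brm(
  bf(y ~ A * exp(-t / tau) + b, A + tau + b ~ 1, nl = TRUE),
  data = df, family = gaussian(),
  prior = c(prior(normal(100, 20), nlpar = "A", lb = 0),
            prior(normal(2.5, 0.5), nlpar = "tau", lb = 0),
            prior(normal(0, 5), nlpar = "b")),
  chains = 4, cores = 4   # sampled with Stan's NUTS (an HMC variant)
)

Because brms generates and compiles a Stan C++ model from this specification, editing any prior() line triggers a recompilation, which is the overhead mentioned above.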
