**Title:** *Neutron Activation Techniques in Detecting Hazardous Materials and Cancer Treatment*
**Presenters:** Piotr Prusinski (PP), Tomasz Fruboes (TF)
**Date:** 30.03.2026
---
### Participants
- Piotr Prusinski (PP)
- Tomasz Fruboes (TF)
- Wojciech Krzemień (WK)
- Konrad Klimaszewski (KK)
- Krzysztof Nawrocki (KN)
- Michał Mrozowski (MM)
- Roman Shopa (RS)
- Michał Obara (MO)
- Lech Raczyński (LR)
- Aleksander Ogonowski (AO)
- Karol Frydrych (KF)
- Roch Kwiatkowski (RK)
- Aurélien Coussat (AC)
- Mateusz Bala (MB)
---
### Discussion
**WK:** On slide 11, over which variable is the average taken?
**PP:** It is averaged over time.
**WK:** I also did not understand whether these calculations are performed in parallel for each cell, or how this is implemented.
**PP:** It is an iterative process that computes multiple variables within each cell based on a set of constitutive equations (slide 4), while accounting for neighbouring conditions from the previous time step.
**WK:** When you refer to averaging over time, what is the time-averaging range? Is it deterministic?
**PP:** It is deterministic, and the range depends on the application. The issue of time, however, deserves some additional clarification. The simulation evolves in both space and time, so it has both spatial and temporal resolution, and discretisation applies to both. There is the concept of a grid cell (the smallest spatial unit) as well as a time step (the smallest temporal unit), and the two are even linked by the CFL condition. As for the core of the question, the time-averaging process itself, there is no single fixed time interval. Typically, an (unsteady) simulation is run until the so-called convergence point is reached for a given physical parameter (usually the one that constitutes the essence of the simulation). You may achieve convergence for two velocity components while still observing oscillations in the third, and that is still acceptable.
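The two discretisation ideas mentioned above can be sketched in a few lines. This is a hedged, illustrative example (the cell size, velocity, and CFL number are made-up values, not from the presentation): the largest stable explicit time step follows dt ≤ CFL · Δx / |u|, and the time average of a monitored quantity can be accumulated incrementally.

```python
# Illustrative sketch only: values and function names are hypothetical,
# not taken from the presented simulations.

def cfl_time_step(dx: float, u_max: float, cfl: float = 0.8) -> float:
    """Largest stable time step for an explicit scheme: dt <= CFL * dx / |u|."""
    return cfl * dx / abs(u_max)

def running_mean(samples):
    """Incremental time average of a monitored quantity, e.g. one velocity component."""
    mean = 0.0
    for n, x in enumerate(samples, start=1):
        mean += (x - mean) / n
    return mean

dt = cfl_time_step(dx=1e-3, u_max=2.0)   # 1 mm cell, 2 m/s peak velocity
print(dt)                                 # -> 4e-4 (seconds)
print(running_mean([1.0, 2.0, 3.0]))      # -> 2.0
```

In practice such a running average is monitored per parameter; convergence is declared when it stops drifting, which is the "convergence point" described above.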
**WK:** Do you start with a random seed, or is the process deterministic?
**PP:** Running the same simulation (the same mesh, the same fluid, the same conditions: essentially the same numerical setup on the same hardware and solver) multiple times leads only to minor numerical fluctuations; overall, the results follow the same trend. That is because the process is governed by the strict logic of the multi-parametric, non-linear Navier–Stokes equations. Nonetheless, we usually introduce a tiny amount of "noise" at the start to kick-start the process, which then evolves according to the rules encoded in the equations.
**WK:** What about the initial conditions? Can you assess the sensitivity of your solution to them?
**PP:** A sensitivity study is an essential part of any CFD analysis. CFD is a multi-parametric study of combined variables, so the correlations are not necessarily obvious up front. A CFD simulation is sensitive to numerous aspects, including the initial conditions. To check the sensitivity to any potentially impacting factor, we usually run several simulations spanning a range of values of the parameter in question, with one value (or formula) per case.
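The one-value-per-case approach described above amounts to a simple parameter sweep. The sketch below is hypothetical: `run_case` stands in for a full CFD run (which in reality takes hours or days), and the parameter name and response are made up for illustration.

```python
# Hypothetical sketch of a one-parameter-at-a-time sensitivity study.
# run_case() is a stand-in for launching a real CFD solver.

def run_case(inlet_velocity: float) -> float:
    """Placeholder for a full CFD run; returns one scalar quantity of interest."""
    return 0.5 * inlet_velocity ** 2   # made-up, dynamic-pressure-like response

# One value of the potentially sensitive parameter per case.
sweep = [1.0, 1.5, 2.0, 2.5]
results = {v: run_case(v) for v in sweep}

for v, q in sorted(results.items()):
    print(f"inlet_velocity={v}: quantity_of_interest={q}")
```

Comparing how the quantity of interest changes across the sweep reveals whether the solution is sensitive to that parameter.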
**WK:** How do you select your model? What is the criterion for choosing it?
**PP:** The choice is guided by the existing literature. Matching can be based, for example, on Reynolds numbers when the case is well documented. For new cases, one can only estimate a range of deviations between different models. When no literature reference is available, the engineer's past experience (self-developed good practices) has a huge impact on the choice.
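Matching a case to the literature via the Reynolds number uses the standard definition Re = ρUL/μ. A minimal sketch, with illustrative water-like values (not from the presentation):

```python
# Standard Reynolds number, Re = rho * U * L / mu.
# The numbers below are illustrative placeholders.

def reynolds(rho: float, u: float, length: float, mu: float) -> float:
    """Dimensionless Reynolds number from density, velocity, length scale, viscosity."""
    return rho * u * length / mu

# Water at ~20 C flowing at 1 m/s through a 5 cm pipe:
re = reynolds(rho=998.0, u=1.0, length=0.05, mu=1.0e-3)
print(f"Re = {re:.0f}")   # -> Re = 49900, well into the turbulent regime for pipe flow
```

Two flows with matching Reynolds numbers (and geometry) are dynamically similar, which is why Re is the usual handle for finding a documented reference case.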
**AC:** Are there alternatives to Ansys Fluent, since you complain about the interface? What does the software landscape look like?
**PP:** The landscape is wide and rich. There are quite a few established commercial codes, such as ANSYS Fluent and Altair CFD, open-source codes such as OpenFOAM and Code_Saturne, and in-house (proprietary) codes such as Trio_U (CEA, France), to mention just a few I have used in my career. All the codes are based on the same equations and lead to the same answers in the end. Although Fluent is somewhat inconvenient for sensitivity studies, there are tools that make this easier and unlock other powerful capabilities of the code. One advantage of ANSYS Fluent is the local technical support, which is a great value to any CFD project. ANSYS Fluent is still the most popular code in Poland.
**KK:** You referred to DNS as “exact simulations,” but this is only true up to a point, as other effects may still be present. Would it be sufficient to accurately describe a large separation bubble and ensure that smaller-scale structures are also properly resolved?
**PP:** Well, the question already contains much of the answer. The essence of the DNS definition is to resolve all the eddy scales, not only the large ones: from the domain boundaries down to the smallest "flow creatures". The smallest eddy should be covered by at least 2 mesh cells in each direction, which for a hexahedral mesh means 8 cells per 3D eddy, and the smallest eddies are several orders of magnitude smaller than the average one. There is also a requirement for a near-uniform mesh in each direction, with an element aspect ratio (shortest to longest edge) equal to unity. This implies meshes with an enormous number of elements and computational-power requirements exceeding the capabilities of the Świerk Computing Centre. DNS is therefore reserved mainly for fundamental physics: answering fundamental questions, understanding the underlying physics, and serving as a general reference for calibrating turbulence models. DNS cases are mostly simplified/idealised geometries focused on specific physics. There is a robust methodology to check whether the mesh and the results are resolved rather than modelled.
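The cost argument above can be quantified with the classical turbulence-theory estimate (not specific to this talk): the Kolmogorov length scales as η ≈ L·Re^(−3/4), so resolving it requires roughly Re^(3/4) cells per direction, i.e. on the order of Re^(9/4) cells in 3D.

```python
# Classical back-of-envelope DNS cost estimate based on the Kolmogorov scale:
# eta ~ L * Re^(-3/4)  ->  ~Re^(3/4) cells per direction  ->  ~Re^(9/4) cells in 3D.
# Order-of-magnitude only; prefactors are deliberately omitted.

def dns_cell_estimate(re: float) -> float:
    """Order-of-magnitude total 3D cell count for a DNS at Reynolds number re."""
    return re ** (9.0 / 4.0)

for re in (1e3, 1e4, 1e5):
    print(f"Re = {re:.0e}: ~{dns_cell_estimate(re):.1e} cells")
```

Already at Re = 10^4 the estimate reaches about 10^9 cells, which illustrates why DNS quickly exceeds the capability of most computing centres and stays confined to idealised reference cases.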
**KK:** Question to TF: Does the API you are developing run in parallel?
**TF:** Yes, it handles parallel processing. The post-processing is performed in two steps.