2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics

Timezone: Europe/Warsaw
Description

2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics

04 to 07 June 2024, Warsaw, Poland

We cordially invite you to the 2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics, which will take place in Warsaw (Poland) from 04 to 07 June 2024. The workshop is organized by the National Centre for Nuclear Research in cooperation with scientists from the University of Vienna, Jagiellonian University and the Polish National Competence Centre in HPC Phase 2 (EuroCC2).

In recent years we have seen an increasing number of interdisciplinary exchanges, especially ones involving machine learning methods. Similar techniques and tools are often used to solve a wide variety of problems, and the transfer of methods between disciplines is much needed, having repeatedly proved to be a very fruitful approach. This workshop aims to facilitate such transfers of knowledge by bringing together experts from various institutions and research groups in the fields of medical imaging, radiotherapy, physics, and computer science. The workshop serves as a platform to enable research at the interface of various disciplines.

The scope of the workshop covers, among other topics, quantum simulations, quantum algorithms, and (classical or quantum) machine-learning algorithms, with a focus on applications in physics and medicine.

List of topics:

  • machine learning methods in medical applications,
  • machine learning methods in high-energy physics and astrophysics,
  • quantum machine learning,
  • Generative Adversarial Networks for fast simulations, both in medicine and particle physics,
  • quantum simulations,
  • quantum and quantum-inspired computing algorithms,
  • novel methods in medical imaging,
  • High-Performance Computing (HPC) in physics and medicine, in particular on heterogeneous platforms such as FPGA and GPU.

Feel free to share this invitation with your peers and with anyone you think might be interested. Please do not hesitate to contact us in case of any questions: wmlq2024@ncbj.gov.pl

Link to the previous edition WMLQ2022: https://events.ncbj.gov.pl/event/141/page/65-home

Registration
WMLQ2024 registration
Participants
  • Artur Kalinowski
  • Aurélien Coussat
  • Bartłomiej Rachwał
  • Beatrix Hiesmayr
  • Didier BENOIT
  • Elena Perez del Rio
  • Emilia Majerz
  • Fernando Gonzalo Moncada Gutierrez
  • Francesco Pistis
  • Grzegorz Korcyl
  • Jakub Mrożek
  • Jami Rönkkö
  • Jeams Chanel NGUELONG NAME
  • Julien Bert
  • Justyna Mędrala-Sowa
  • Konrad Klimaszewski
  • Lech Raczyński
  • Luis Eduardo Suelves
  • Mateusz Bała
  • Moncef ATI
  • Piotr Gawron
  • Piotr Kalaczyński
  • Pooja Tanty
  • Rafał Możdżonek
  • Roman Shopa
  • Sarkis Kassounian
  • Szymon Gamrat
  • Tomasz Malkiewicz
  • Tomasz Rybotycki
  • Wojciech Krzemien

Timetable
    • 09:00–10:00
      Pre-Coffee / Registration 1h
    • 10:00–12:30
      LUMI tutorial: Part 1
    • 12:30–13:30
      Lunch break 1h
    • 13:30–15:30
      LUMI tutorial: Part 2
    • 08:30–09:30
      Pre-Coffee / Registration 1h
    • 09:30–12:30
      Opening session
      • 09:30
        Welcome address 20m
      • 09:50
        Welcome from the organizers 20m
      • 10:10
        Machine learning vs. network science: A comparison of two paradigms for the interpretation of high-throughput data in biology and medicine 40m

        Over the last decades biology and medicine have become data sciences. High-throughput ('omics') data on the level of gene expression, metabolic activity, epigenetic regulation and others now serve as a prominent source of systemic information. This makes these fields accessible to data-driven computational methods, in particular network science and machine learning.

        Network science employs the formal view of graph theory to understand the design principles of complex systems. Abstracting cellular processes (gene regulation, metabolism, protein interactions) into networks has revolutionized the way we think about biological systems.

        Machine learning is most prominent in biological and medical research via the successes of image analysis and of protein structure prediction via AlphaFold. Attempts to train machine learning models to interpret 'omics' data have been less successful so far.

        Focusing on gene expression data as the most common example (beyond the genome) of 'omics' data, we discuss possible reasons for the limited success of machine learning in biology and medicine. We start with a (deceptively) simple biological situation, bacterial gene regulation, and then move to the analysis of medical data.

        Speaker: Marc Hütt
      • 10:50
        Coffee break 30m
      • 11:20
        AI in Space: Ready for Prime Time? 40m

        Exciting advancements in remote sensing, AI, and edge computing are transforming scientific and industrial sectors via in-orbit data processing. This technology enables real-time applications such as environmental monitoring, precision agriculture, disaster detection, and in-orbit anomaly detection from telemetry data. Integrating AI into space-based systems and edge devices swiftly converts raw data, like multi- or hyperspectral images, into actionable insights on board satellites. Challenges remain, including hardware limitations, model validation, and sparse ground-truth datasets. In this talk, we will explore concrete opportunities, challenges and solutions related to deploying AI in space, focusing on Earth observation and anomaly detection from satellite telemetry data. Real satellite missions, including OPS-SAT by the European Space Agency and Intuition-1 by KP Labs, will serve as real-world examples. Finally, we will discuss the most exciting research and development avenues in on-board and on-ground (quantum) AI for space applications. Fasten your seatbelts, we are ready to take off.

        Speaker: Dr Jakub Nalepa (Silesian University of Technology/KP Labs)
      • 12:00
        Quantum computing in the service of satellite data processing 30m

        Earth observation data are continuously produced by a constantly growing number of satellites. Processing these data efficiently constitutes a major challenge, and not all of the produced data are processed and analyzed. At the same time, Earth observation provides important information about our ecosystems in the age of rapidly changing climate. For this reason, research on the application of quantum computing to Earth observation data analysis has been initiated by several research institutions. For me personally, participation in this field makes it possible to study how a variety of quantum algorithmic techniques can be applied to image processing, and to join efforts aiming at reducing the impact of climate change. I will present a short review of ideas and activities that aim at finding, possibly impactful, new methods of satellite data processing using quantum computing techniques.

        Speaker: Piotr Gawron (AstroCeNT / Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences)
    • 12:30–13:30
      Lunch break 1h
    • 13:30–17:00
      Machine learning in high energy physics
      • 13:30
        Machine learning in HEP 40m

        I will recall the basic idea behind Machine Learning, and then present a selection of examples of ML applications in High Energy Physics.

        Speaker: Artur Kalinowski (University of Warsaw)
      • 14:10
        Deep Generative Models for Particle Simulations at ALICE, CERN 40m

        At the European Organisation for Nuclear Research (CERN), scientists study the fundamental properties of matter by replicating the extreme conditions of the early universe within the Large Hadron Collider (LHC). Understanding particle collisions requires running simulations that mirror the detectors' expected responses within the LHC. With over 50% of CERN's GRID computing power dedicated to High Energy Physics simulations, the need for more efficient simulation methods is critical.

        We propose employing generative machine learning to directly simulate detector responses, leveraging advancements in generative adversarial networks (GANs), autoencoders, and diffusion models to tackle simulation challenges. Our contributions include introducing a modified GAN training objective that accommodates varying simulation variance across different conditional inputs, supplemented with additional regularization to increase the simulation fidelity. For autoencoders, we introduce a conditional control mechanism enhancing simulation control by independently manipulating output parameters of the generated samples. With diffusion models, we explore the efficiency of latent diffusion models and the trade-off between inference time and simulation quality.

        Our proposed methodologies have the potential to advance particle collision simulations by offering more streamlined, controllable, and faster methods, maintaining the fidelity demanded by modern high-energy physics experiments.

        Speakers: Patryk Będkowski (Warsaw University of Technology), Karol Rogoziński, Mikołaj Kita (Warsaw University of Technology)
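
        A minimal, generic conditional-GAN training step is sketched below to make the approach described in this abstract more concrete. It is not the authors' modified objective or regularization: the layer sizes, the dimensions NOISE_DIM, COND_DIM and OUT_DIM, and the plain binary cross-entropy losses are illustrative assumptions only.

        ```python
        # Sketch of a conditional GAN for detector-response simulation (PyTorch).
        # All dimensions and layer sizes are hypothetical placeholders.
        import torch
        import torch.nn as nn

        NOISE_DIM, COND_DIM, OUT_DIM = 16, 4, 44  # assumed sizes, not the real ones

        class Generator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(NOISE_DIM + COND_DIM, 128), nn.ReLU(),
                    nn.Linear(128, OUT_DIM),
                )
            def forward(self, z, cond):
                return self.net(torch.cat([z, cond], dim=1))

        class Discriminator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(OUT_DIM + COND_DIM, 128), nn.LeakyReLU(0.2),
                    nn.Linear(128, 1),
                )
            def forward(self, x, cond):
                return self.net(torch.cat([x, cond], dim=1))

        G, D = Generator(), Discriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
        bce = nn.BCEWithLogitsLoss()

        def train_step(real, cond):
            """One adversarial update on a batch of (response, condition) pairs."""
            z = torch.randn(real.size(0), NOISE_DIM)
            fake = G(z, cond)

            # Discriminator: distinguish real from generated responses for the same conditions.
            d_loss = bce(D(real, cond), torch.ones(real.size(0), 1)) + \
                     bce(D(fake.detach(), cond), torch.zeros(real.size(0), 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator: produce responses the discriminator accepts as real.
            g_loss = bce(D(fake, cond), torch.ones(real.size(0), 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            return d_loss.item(), g_loss.item()

        # Example: one update on a synthetic batch of 64 "detector responses".
        real = torch.randn(64, OUT_DIM)
        cond = torch.randn(64, COND_DIM)
        print(train_step(real, cond))
        ```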
      • 14:50
        Fast simulation of the Zero Degree Calorimeter responses with generative neural networks 25m

        Applying machine learning methods to high-energy physics simulations has recently emerged as a rapidly developing area. A prominent example is the Zero Degree Calorimeter (ZDC) simulation in the ALICE experiment at CERN, where substituting the traditional computationally extensive Monte Carlo methods with generative models radically reduces computation time. Although numerous studies have addressed the fast ZDC simulation, there remains significant potential for innovations. Recent developments in generative neural networks have enabled the creation of models capable of producing high-quality samples indistinguishable from real data. In this paper, we apply the latest advances to the simulation of the ZDC neutron detector and highlight the potential benefits and challenges. Our focus is on exploring novel architectures and state-of-the-art generative frameworks. We compare their performance against established methods, demonstrating competitive outcomes in speed and efficiency.

        Speakers: Maksymilian Wojnar (AGH University of Krakow), Ms Emilia Majerz (AGH University of Krakow)
      • 15:15
        Coffee break 30m
      • 15:45
        LLM-based physics analysis agent at BESIII and exploration of future AI scientist 25m

        Data processing and analysis are among the main challenges at HEP experiments; typically, a single physics result can take more than three years to produce. To accelerate physics analysis and drive new physics discoveries, the rapidly developing Large Language Models (LLMs) are one of the most promising approaches: they have demonstrated astonishing capabilities in the recognition and generation of text, and most parts of a physics analysis can benefit from them. In this talk we will discuss the construction of a dedicated intelligent agent, an AI assistant at BESIII based on an LLM, its potential to boost hadron spectroscopy studies, and the future plan towards an AI scientist.

        Speakers: Ke Li (Institute of High Energy Physics, China), Dr Zhengde Zhang (Institute of High Energy Physics, China), Dr Yiyu Zhang (Institute of High Energy Physics, China)
      • 16:10
        A Deep Generative Model for Hadronization 25m

        Hadronization is a critical step in the simulation of high-energy physics. As there is no first principles understanding of this process, phenomenological models with a large number of parameters are used. We propose an alternative approach that uses deep generative models, which are a natural replacement for classical techniques since they are more flexible and may be able to improve the overall precision. We first demonstrate using neural networks to emulate specific hadronization when trained using the inputs and outputs of classical methods. Then, a protocol is developed to fit a deep generative hadronization model to the realistic situation of experimental data. Finally, we build a deep generative hadronization model that includes both kinematic (continuous) and flavor (discrete) degrees of freedom. Our approach is based on Generative Adversarial Networks and we show the performance within the context of the cluster model within the Herwig Monte Carlo generator.

        Speaker: Prof. Andrzej Siodmok (Jagiellonian University)
      • 16:35
        HEP 5 25m

        LHCb

    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:30
      Medical imaging: Monte Carlo simulations in medical imaging
      • 09:00
        Invited Talk: Nils Krah 40m
      • 09:40
        GGEMS - GPU Geant4-based Monte Carlo Simulations 25m

        In medical physics, GPU-based Monte Carlo simulations (MCS) have been proposed for computational gains. However, they remain limited to specific applications and are not easily generalized.
        GGEMS (GPU Geant4-based Monte Carlo Simulations) is an advanced MCS software package based on OpenCL. Entirely written in C++, its software architecture provides the flexibility and generality needed for numerous applications such as imaging and radiation therapy. It has been validated on architectures including NVIDIA and Intel graphics cards, as well as multi-core Intel processors, on both Windows and Linux. A Python interface is provided for scripting, and an OpenGL graphical interface has been developed to assist users.
        GGEMS was evaluated on various medical applications, demonstrating fast simulation times. For example, for a CT projection simulating 10^9 particles, the computation times were: 112 s on a GeForce 1050Ti, 385 s on a Quadro P400, 421 s on a Xeon with 16 threads, and 91 s on the 1050Ti+P400 combination.

        Speaker: Didier BENOIT
      • 10:05
        Up-scaling for measuring the spatial distribution of radiation dose for applications in the preparation of individual patient treatment plans 25m

        Super-resolution (SR) techniques are often used in the up-scaling process to add details that are not present in the original low-resolution image. In radiation therapy, SR can be applied to enhance the quality of medical images used in treatment planning. For the Dose3D detector measuring the spatial dose distribution [1], a dedicated set of ML algorithms for SR has been proposed to perform the final dose-distribution up-scaling. As the SR technique, the SRCNN [2] architecture has been adjusted. The training and validation data are produced with MC simulations at two different scoring resolutions, and extra features related to the beam shape have been defined. The input data resolution is the one coming from the measurement (1 cc), while the target data resolution is defined at the level of the CT image. The latest results and advancements of our research will be presented at the conference.
        References:
        [1] https://dose3d.fis.agh.edu.pl,
        [2] https://doi.org/10.1007/978-3-319-10593-2_13

        Speaker: Bartłomiej Rachwał (AGH University of Krakow)
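
        As a companion to the abstract above, the sketch below shows a minimal SRCNN-style network in PyTorch, adapted to 3-D dose maps. The filter counts and kernel sizes follow the original SRCNN paper [2]; the class name, the trilinear pre-upsampling and the scale factor of 4 are illustrative assumptions, not the actual Dose3D adaptation.

        ```python
        # Minimal SRCNN-style super-resolution network for 3-D dose maps (PyTorch).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SRCNN3D(nn.Module):
            def __init__(self):
                super().__init__()
                self.patch_extraction = nn.Conv3d(1, 64, kernel_size=9, padding=4)
                self.nonlinear_mapping = nn.Conv3d(64, 32, kernel_size=1)
                self.reconstruction = nn.Conv3d(32, 1, kernel_size=5, padding=2)

            def forward(self, low_res_dose, scale=4):
                # SRCNN operates on an interpolated low-resolution input, here
                # up-sampled from the coarse (~1 cc) scoring grid towards the CT grid.
                x = F.interpolate(low_res_dose, scale_factor=scale,
                                  mode="trilinear", align_corners=False)
                x = torch.relu(self.patch_extraction(x))
                x = torch.relu(self.nonlinear_mapping(x))
                return self.reconstruction(x)

        # Example: one 16x16x16 low-resolution dose map -> 64x64x64 output.
        dose_lr = torch.rand(1, 1, 16, 16, 16)
        print(SRCNN3D()(dose_lr).shape)  # torch.Size([1, 1, 64, 64, 64])
        ```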
    • 10:30–11:00
      Coffee break 30m
    • 11:00–12:35
      High Performance Computing
      • 11:00
        Invited Talk: Tomasz Małkiewicz 40m
      • 11:40
        Developing Artificial Intelligence in the Cloud: the AI_INFN platform 25m

        The INFN CSN5-funded project AI_INFN ("Artificial Intelligence at INFN") aims at fostering the adoption of ML and AI within INFN by providing support on multiple aspects, including the provision of state-of-the-art hardware for AI and ML. It leverages cloud-native solutions in the context of INFN Cloud to share hardware accelerators as effectively as possible without compromising the diversity of the Institute's research activities. AI_INFN evolves the Virtual-Machine-based model towards a more flexible platform built on top of Kubernetes. The platform is meant to be a composable toolkit and currently features JWT-based authentication, a JupyterHub multi-tenant interface, a distributed filesystem, customizable conda environments, and a specialized monitoring and accounting system. Last but not least, the platform enables an offloading mechanism based on Virtual Kubelet and the interLink API, in synergy with interTwin. Preliminary results and applications will be presented.

        Speaker: Rosa Petrini (INFN)
      • 12:05
        EUROCC 30m
    • 12:35–13:30
      Lunch break 55m
    • 13:30–14:20
      High Performance Computing
      • 13:30
        HPC 2 25m

        UJ - FPGA

      • 13:55
        Preliminary study on artificial intelligence methods for cybersecurity threat detection in computer networks based on raw data packets 25m

        Most intrusion detection systems for detecting cybersecurity threats in computer networks are based on traffic-flow characteristics. However, this approach may not fully exploit the potential of deep learning algorithms to extract features directly from raw packets. Moreover, it impedes real-time monitoring, since the processing pipeline must complete first, and it introduces dependencies on additional software components.

        In this paper, we investigate deep learning methodologies capable of detecting attacks in real-time directly from raw packet data within network traffic. Our investigation utilizes the CICIDS2017 dataset, which includes both benign traffic and prevalent real-world attacks, providing a comprehensive foundation for our research.

        Speaker: Mr Michał Żebrowski (NCBJ)
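
        To illustrate the kind of model the abstract refers to, here is a minimal 1-D CNN over raw packet bytes in PyTorch. The packet length, layer sizes and the binary benign/attack output are illustrative assumptions, not the authors' architecture or the CICIDS2017 preprocessing.

        ```python
        # Sketch of a 1-D CNN classifying raw packet bytes (PyTorch).
        import torch
        import torch.nn as nn

        PACKET_LEN = 1500  # bytes per packet, truncated or zero-padded (assumption)

        class RawPacketCNN(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.embed = nn.Embedding(256, 32)       # one embedding per byte value
                self.features = nn.Sequential(
                    nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveMaxPool1d(1),
                )
                self.classifier = nn.Linear(128, n_classes)

            def forward(self, packets):                  # packets: (batch, PACKET_LEN) byte values
                x = self.embed(packets).transpose(1, 2)  # -> (batch, 32, PACKET_LEN)
                x = self.features(x).squeeze(-1)         # -> (batch, 128)
                return self.classifier(x)

        # Example forward pass on a batch of 8 synthetic packets.
        batch = torch.randint(0, 256, (8, PACKET_LEN))
        print(RawPacketCNN()(batch).shape)  # torch.Size([8, 2])
        ```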
    • 14:20–14:50
      Coffee break 30m
    • 14:50–16:30
      Medical imaging: Novel reconstruction algorithms and techniques
      • 14:50
        Advanced TOF MLEM reconstruction of a human patient scanned by the modular J-PET 25m

        We present one of the first images for an in vivo scan of a human patient, measured by the modular Jagiellonian positron emission tomographic (J-PET) prototype, which records 511-keV annihilation photons via Compton scattering in plastic scintillators [1]. The original reconstruction algorithm is adapted from the maximum likelihood expectation maximisation (MLEM), with the realistic J-PET system matrix modelling, time-of-flight (TOF) information and attenuation and scatter correction applied [2].

        The attenuation map of the patient was measured by a CT scan. The scatter factors were estimated by the single scatter simulation (SSS), implemented in the STIR software [3]. The subsampled SSS-sinogram was acquired using the known attenuation factors and a prior MLEM reconstruction, made without additive corrections and later upscaled by interpolation. As a result, a significant improvement was achieved in noise suppression and resolution recovery for the reconstructed PET image.

        [1] Moskal P et al. MedRxiv 2024.02.01.23299028 (2024)
        [2] Shopa RY et al. IEEE TRPMS 7 509 (2023)
        [3] Thielemans K et al. PMB 57 867 (2012)

        Speaker: Roman Shopa (National Centre for Nuclear Research)
      • 15:15
        Improvement of time-of-flight resolution of PET scanner using additional prompt photon 25m

        Positronium Imaging (PI) has become one of the most promising branches of Positron Emission Tomography. During a PI measurement, two classes of events are required: double-coincidence events originating from a pair of back-to-back annihilation photons, and triple-coincidence events comprising three photons, i.e., two annihilation photons and one additional prompt photon. The standard reconstruction of the emission position along the line of response of a triple-coincidence event is the same as in the case of a double-coincidence event and is based on the times and positions of the two annihilation photons only; the information carried by the additional prompt photon is ignored. In this presentation, we propose to extend the position reconstruction of triple-coincidence events by taking into account the time and position of the prompt photon. Moreover, we incorporate the knowledge about the positronium lifetime distribution and derive an algorithm for the position reconstruction. We discuss the limitations of the method based on simulated data.

        Speaker: Lech Raczyński (National Centre for Nuclear Research)
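
        For context, the baseline double-coincidence TOF estimate mentioned in the abstract places the annihilation point on the line of response using only the two annihilation-photon hits, as in the small NumPy example below. The geometry and times are made-up numbers; the proposed prompt-photon extension and the lifetime prior are not reproduced here.

        ```python
        # Baseline TOF estimate of the annihilation point along the LOR (NumPy).
        import numpy as np

        C = 299.792458  # speed of light in mm/ns

        def tof_position(r1, t1, r2, t2):
            """Annihilation point on the LOR given hit r1 at time t1 and hit r2 at time t2."""
            r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
            lor = r2 - r1
            length = np.linalg.norm(lor)
            d1 = 0.5 * (length + C * (t1 - t2))  # distance from hit 1 to the vertex
            return r1 + (d1 / length) * lor

        # Two hits on opposite sides of a 780 mm wide scanner; the true vertex sits
        # 50 mm from the centre, towards the detector at x = -390 mm.
        r1, r2 = (-390.0, 0.0, 0.0), (390.0, 0.0, 0.0)
        t1 = 340.0 / C  # photon 1 travels 390 - 50 mm
        t2 = 440.0 / C  # photon 2 travels 390 + 50 mm
        print(tof_position(r1, t1, r2, t2))  # ~ [-50.  0.  0.]
        ```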
      • 15:40
        MED 3 25m

        J-PET - corrections

      • 16:05
        Assessment of Internal Radiation Dose: Understanding the Influence of Respiratory Motion 25m

        Assessing radiation doses to internal organs is crucial in evaluating the risks and benefits of diagnostic and therapeutic nuclear medicine procedures, such as PET, for patients. Respiratory motion causes significant displacement of internal organs, affecting the absorbed dose in cases of external radiation exposure; however, there has been no previous report on the impact of respiratory motion on internal radiation dosimetry. In this study, our focus was on determining the role of respiratory motion in assessing the absorbed dose (S values) for Lu-177, Dy-165, I-131, and Tc-99m.

        Speaker: Dr Moncef ATI (University Of Oran1)
    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:30
      Machine learning in medicine
      • 09:00
        Invited Talk: Georg Schramm 40m
      • 09:40
        Using 3D CNNs for distortion corrections in PET imaging 25m

        In Positron Emission Tomography, the problem of image distortion due to scattered photons or accidental coincidences becomes more pronounced for large field-of-view scanners capable of imaging the whole patient in one scan. We propose a novel method of encoding coincidence-event information that enhances the efficiency of the noise-filtration classification. The proposed encoding enables the use of Convolutional Neural Networks as feature extractors in the classification task. We take advantage of the voxel nature of the underlying data and evaluate the performance of a 3-D CNN in classifying true, scattered and accidental coincidences for imaging quality improvement with large field-of-view PET scanners.

        Speaker: Konrad Klimaszewski (NCBJ)
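
        A minimal 3-D CNN of the kind described above is sketched here in PyTorch: each coincidence is assumed to be encoded as a small voxel volume and classified as true, scattered or accidental. The 16x16x16 input grid and the layer sizes are illustrative assumptions, not the encoding used by the authors.

        ```python
        # Sketch of a 3-D CNN classifier for encoded PET coincidences (PyTorch).
        import torch
        import torch.nn as nn

        class Coincidence3DCNN(nn.Module):
            def __init__(self, n_classes=3):   # true / scattered / accidental
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),           # 16^3 -> 8^3
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),           # 8^3 -> 4^3
                )
                self.classifier = nn.Linear(32 * 4 * 4 * 4, n_classes)

            def forward(self, volumes):        # volumes: (batch, 1, 16, 16, 16)
                x = self.features(volumes)
                return self.classifier(x.flatten(1))

        # Example: classify a batch of 8 encoded coincidence events.
        events = torch.rand(8, 1, 16, 16, 16)
        print(Coincidence3DCNN()(events).shape)  # torch.Size([8, 3])
        ```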
      • 10:05
        Machine Learning-based Scatter Correction for a Dual-Panel Positron Emission Mammography Scanner 25m

        Positron Emission Mammography (PEM) is a Nuclear Medicine technique for breast imaging based on a dedicated scanner assembled with parallel dual-panel detector arrays. Patient positioning in close contact with the scanner enhances spatial resolution and sensitivity in comparison with ring-based scanners, but this geometry hinders the adaptation of conventional attenuation and scatter correction methods, which affects the quantitative assessment of studies. In this work we trained several machine learning algorithms for scatter correction with list-mode data from a Monte Carlo simulation of a PEM prototype being built in our lab. The features for this binary classification problem were energy and position of detection, where energy had the higher feature importance in agreement with traditional methods. The best results were found with a Random Forest of 38 estimators and a maximum depth of 7, which reduced the scatter fraction of a study of 1 million events from 11% to 4% in 2 seconds.

        Speaker: Mr Fernando Moncada-Gutiérrez (Instituto de Física, UNAM)
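
        The classification step quoted in the abstract can be sketched with scikit-learn as below, reusing the stated hyper-parameters (38 trees, maximum depth 7). The synthetic features and random labels are only a stand-in for the list-mode Monte Carlo events.

        ```python
        # Sketch of scatter/true classification with a Random Forest (scikit-learn).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 10_000
        X = np.column_stack([
            rng.normal(511, 30, (n, 2)),      # deposited energies E1, E2 (keV), synthetic
            rng.uniform(-100, 100, (n, 4)),   # detection positions x1, y1, x2, y2 (mm), synthetic
        ])
        y = rng.integers(0, 2, n)             # 0 = true, 1 = scattered (dummy labels)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=38, max_depth=7, random_state=0)
        clf.fit(X_train, y_train)

        print("test accuracy:", clf.score(X_test, y_test))
        print("feature importances:", clf.feature_importances_)
        ```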
    • 10:30–11:00
      Coffee break 30m
    • 11:00–12:30
      Quantum computing and systems
      • 11:00
        Invited Talk: Jakub Mielczarek 40m
      • 11:40
        Utilizing Superconducting Quantum Computer for Education and Research 25m

        With a growing interest in quantum technology globally, there is an increasing need for access to relevant physical systems for education and research. This talk introduces a commercially available on-site quantum computer based on superconducting technology. We show how this system can be used in education to teach quantum concepts and deepen understanding of quantum theory and quantum computing. It offers learning opportunities for future talent and contributes to fundamental research and technological progress. We highlight the advantages of having complete hands-on access to the hardware. As educational and research use cases, we demonstrate the violation of the CHSH inequality, a GHZ-state experiment offering an intuitive account of decoherence, and a simulation of neutrino flavour oscillations.

        Speaker: Mr Jami Rönkkö (IQM Quantum Computers)
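
        The CHSH demonstration mentioned above can be checked on paper or with a few lines of NumPy, independently of any quantum hardware or SDK. The sketch below evaluates the CHSH value of the Bell state |Phi+> with the standard optimal measurement settings; it is a self-contained illustration, not IQM course material.

        ```python
        # CHSH value of the Bell state |Phi+> = (|00> + |11>)/sqrt(2) (NumPy).
        import numpy as np

        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

        # Alice measures Z or X; Bob measures (Z + X)/sqrt(2) or (Z - X)/sqrt(2).
        A0, A1 = Z, X
        B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

        def corr(A, B, state):
            """Expectation value of A (x) B in the given two-qubit state."""
            return np.real(state.conj() @ np.kron(A, B) @ state)

        S = (corr(A0, B0, phi_plus) + corr(A0, B1, phi_plus)
             + corr(A1, B0, phi_plus) - corr(A1, B1, phi_plus))
        print(S)  # ~ 2.828, above the classical CHSH bound of 2
        ```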
      • 12:05
        Parallel Two-Qubit Gates on IQM Resonance: Garnet 20-Qubit Quantum Computer 25m

        The promise of quantum computing speeding up the execution of certain computational tasks cannot be fulfilled without quality hardware. Superconducting qubits are one of the most mature technologies for implementing a quantum processing unit (QPU), and such devices are already available to researchers in the cloud. Among them is IQM's Garnet QPU, available on the Resonance platform. This 20-qubit quantum computer represents IQM's core technology choices, such as a floating tunable transmon coupler [1], allowing a median CZ gate fidelity of 99.5% and the entanglement of all qubits on the chip by preparing a GHZ state with 62% fidelity. I will present benchmarking results ranging from gate fidelities to application benchmarks quantifying performance in specific tasks. Subsequently, I will describe the tunable coupler architecture and the methods for fast, reliable and automatic calibration of high-fidelity parallel two-qubit gates.
        [1] Fabian Marxer et al. PRX Quantum 4 010314 (2023)

        Speaker: Dr Jakub Mrożek (IQM Quantum Computers)
    • 12:30–13:30
      Lunch break 1h
    • 13:30–15:00
      Quantum computing and machine learning
      • 13:30
        Invited Talk: Łukasz Rudnicki 40m
      • 14:10
        Quantum Neural Networks: current status and next steps 25m

        Classical machine learning has proven valuable since its implementation on classical computers became feasible. On the other hand, quantum computation claims to present an exponential advantage over any classical algorithm for specialized tasks. Thus, adapting the machine learning paradigm to the quantum realm is a promising way forward.
        We start the talk with a general introduction to the mathematical framework required for this adaptation. These basic notions are crucial to understanding how we can manipulate quantum systems and what the limitations are. Afterward, we discuss different approaches to "quantizing" the neural network architecture, i.e., adapting classical neural networks to quantum systems, before focusing on Dissipative Quantum Neural Networks. We show that this ansatz has the potential to be a "quantum universal approximator" as it can be used to learn any quantum operation. Lastly, preliminary numerical results and possible next steps are discussed.

        Speaker: Mr Tobias Christoph Sutter (University of Vienna)
      • 14:35
        Entanglement classification via Neural Networks 25m

        In this talk, we investigate the application of machine learning to an NP-hard problem in quantum information theory: the separability problem of classifying a quantum state as entangled or separable. This problem arises for entangled quantum systems of dimension three or higher, where no exact solution is currently known. We demonstrate that neural networks can accurately classify mixtures of Bell states. This classification can be achieved by considering the properties of the mixtures themselves and by entropy-related quantities. We further highlight convolutional neural networks in this process. Our findings indicate that these networks can reflect entanglement structures crucial for accurate classification. The study underscores the synergistic potential of machine learning and quantum information science and suggests a promising direction for their combined application in solving complex quantum problems.

        Speaker: Mr Christopher Popp (University Vienna)
    • 15:00–15:30
      Coffee break 30m
    • 15:30–17:30
      Quantum computing tutorial
      • 15:30
        Quantum Computing Hardware with QGates 2h
        Speaker: Prof. Beatrix Hiesmayr (University of Vienna)
    • 19:00–23:00
      Conference dinner
    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:05
      Quantum computing and machine learning
      • 09:00
        Machine Learning based on quantum or classical systems: a comparison 40m

        It is well known that the laws of quantum physics are fundamentally different from classical ones and are currently being used to boost the performance of computers, including machine learning algorithms. We elaborate on the differences and challenges from different perspectives. Furthermore, we point out that with the recent trend in research to publish the computer code along with the research results, a causal link between the (formal) mathematical model and the set of results can no longer be taken for granted. This is especially true for the recent advances in AI-driven applications.

        Speaker: Prof. Beatrix Hiesmayr (University of Vienna)
      • 09:40
        Auto Quantum Machine Learning with AQMLator 25m

        Since the 2010s, when deep learning became feasible, machine learning (ML) has been attracting ever-growing attention. The ability to train large ML models gave rise to various neural network architectures, such as convolutional neural networks or generative adversarial networks. Around the same time, technological advancements allowed us to also direct our attention to quantum computing (QC), a computation paradigm that uses quantum mechanical phenomena. Naturally, quantum and hybrid ML models began to appear, and with them a daunting task -- how to design the architecture of such models?

        We present AQMLator, an Auto Quantum Machine Learning platform. It aims to automatically propose and train the quantum layers of an ML model with minimal input from the user. This way, AI scientists can overcome the entry barrier for QC and use quantum machine learning (QML) or hybrid models. AQMLator uses standard ML Python libraries, making it easy to introduce into existing ML pipelines.

        Speaker: Dr Tomasz Rybotycki (SRI PAS, NCAC PAS, CEAI AGH)
    • 10:05–10:35
      Coffee break 30m
    • 10:35–12:30
      Machine learning in astrophysics
      • 10:35
        Invited Talk: Agnieszka Pollo 40m
      • 11:15
        Characteristic sky background features around galaxy mergers 25m

        Galaxy merger identification in large-scale surveys is one of the main areas of Astronomy that are benefitting from the development of Machine Learning (ML), especially for galaxy classification. In this talk, I will focus on the combination of ML, clustering, and dimensionality reduction techniques, with astronomical images and measurements. The goal of this methodology is to discern galaxy mergers from the rest of galaxies in the sky. An initial Neural Network was applied to the flux measurements from the images, and the iteration on multiple combinations of these parameters led us to find how one parameter traced galaxy mergers with a test-set accuracy of up to 91 %. This parameter is the error in the sky background measurement, which we interpret to trace low signal-to-noise features around observed galaxies. With this work, I want to stress the benefits of interpreting the results of ML models and how it led us to unveil a completely new path for galaxy morphology classification.

        Speaker: Luis Eduardo Suelves (NCBJ)
      • 11:40
        Using Machine Learning to Identify Outliers in the Fundamental Metallicity Relation 25m

        The chemical evolution of galaxies is intricately linked to the interplay between Active Galaxy Nuclei (AGNs) and galactic interactions. This is exemplified in the fundamental metallicity relation (FMR) which characterizes the chemical evolution of galaxies where stars are formed. Although AGN feedback is reflected in the FMR, galaxies that host AGNs follow the same relation as those that are star-forming. However, interacting galaxies, such as pairs or mergers, seem to constitute a distinctive population that deviates from the FMR. Our objective is to identify outliers through machine-learning algorithms that scour for correlations with incorrectly classified galaxy types or interaction statuses.

        Speaker: Francesco Pistis (National Centre for Nuclear Research)
      • 12:05
        Reconstruction of muon bundles in KM3NeT detectors using machine learning methods 25m

        A network of two next-generation underwater Cherenkov neutrino telescopes, ARCA and ORCA, is being progressively deployed in the Mediterranean Sea by the KM3NeT Collaboration. The focus of ARCA is neutrino astronomy, while ORCA is mainly dedicated to neutrino oscillation studies. Both detectors are already operational in their intermediate configurations and are collecting valuable data, including measurements of the atmospheric muons produced by cosmic-ray interactions. This work explores the potential of the intermediate as well as the complete detector configurations of ARCA and ORCA to observe events composed of multiple muons originating from a common primary cosmic ray, called muon bundles. An approach to infer the total number of observed muons in a bundle, as well as their total energy and even the energy of the primary, will be presented.

        Speaker: Dr Piotr Kalaczyński (CAMK PAN)
    • 12:30–13:30
      Lunch break 1h
    • 13:30–17:30
      LLM tutorial
      • 13:30
        EuroCC Tutorial on Using Large Language Models (LLM) for Private Data - part 1 1h 45m
      • 15:15
        Coffee break 30m
      • 15:45
        EuroCC Tutorial on Using Large Language Models (LLM) for Private Data - part 2 1h 45m