2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics

Europe/Warsaw
Description

2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics

04 to 07 June 2024, Warsaw, Poland

We cordially invite you to the 2nd International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics, which will take place in Warsaw (Poland) from 04 to 07 June 2024. The workshop is organized by the National Centre for Nuclear Research in cooperation with scientists from the University of Vienna, Jagiellonian University and the Polish National Competence Centre in HPC Phase 2 (EuroCC2).

In recent years, we have seen a growing number of interdisciplinary exchanges, especially ones involving machine learning methods. Similar techniques and tools are often used to solve a wide variety of problems. The transfer of methods between disciplines is much needed and has repeatedly proved to be a very fruitful approach. This workshop aims to facilitate such transfers of knowledge by bringing together experts from various institutions and research groups in the fields of medical imaging, radiotherapy, physics, and computer science. The workshop serves as a platform to enable research at the intersection of various disciplines.

The scope of the workshop covers, among other topics, quantum simulations, quantum algorithms, and (classical or quantum) machine-learning algorithms, with a focus on applications in physics and medicine.

List of topics:

  • machine learning methods in medical applications,
  • machine learning methods in high-energy physics and astrophysics,
  • quantum machine learning,
  • Generative Adversarial Networks for fast simulations in both medicine and particle physics,
  • quantum simulations,
  • quantum and quantum-inspired computing algorithms,
  • novel methods in medical imaging,
  • High-Performance Computing (HPC) in physics and medicine, in particular on heterogeneous platforms such as FPGA and GPU.

Feel free to share this invitation with your peers and anyone else you think might be interested. Please do not hesitate to contact us with any questions: wmlq2024@ncbj.gov.pl

Link to the previous edition WMLQ2022: https://events.ncbj.gov.pl/event/141/page/65-home

Participants
  • Adam Padee
  • Agnieszka Pollo
  • Aleksander Ogonowski
  • Arkadiusz Ćwiek
  • Artur Kalinowski
  • Aurélien Coussat
  • Bartłomiej Rachwał
  • Beatrix Hiesmayr
  • Christopher Popp
  • Didier BENOIT
  • Dmytro Melnychuk
  • Elena Perez del Rio
  • Emilia Majerz
  • Fernando Gonzalo Moncada Gutierrez
  • Francesco Pistis
  • Georg Schramm
  • Grzegorz Korcyl
  • Hans Georg Stöhr
  • Jakub Mielczarek
  • Jakub Mrożek
  • Jakub Nalepa
  • Jami Rönkkö
  • Jan Dubiński
  • Jeams Chanel NGUELONG NAME
  • Julien Bert
  • Justyna Mędrala-Sowa
  • Justyna Sykuła
  • Karol Rogoziński
  • Konrad Klimaszewski
  • Krzysztof Nawrocki
  • Lech Raczyński
  • Luis Eduardo Suelves
  • Marc Hütt
  • Marek Magrys
  • Mateusz Bała
  • Michał Mazurek
  • Michał Obara
  • Michał Żebrowski
  • Mikołaj Kita
  • Moncef ATI
  • Nils Krah
  • Patryk Będkowski
  • Piotr Gawron
  • Piotr Kalaczyński
  • Pooja Tanty
  • Rafał Możdżonek
  • Roman Shopa
  • Rosa Petrini
  • Sarkis Kassounian
  • Szymon Gamrat
  • Tobias Christoph Sutter
  • Tomasz Fruboes
  • Tomasz Malkiewicz
  • Tomasz Rybotycki
  • Wojciech Krzemien
  • Wojciech Wiślicki
    • 09:00–10:00
      Pre-Coffee / Registration 1h
    • 10:00–13:00
      LUMI tutorial: Part 1

      https://docs.google.com/presentation/d/106otzniJ8tILjenMFI58unf0LrO7HAYBOBmQAuIfJEk/edit?usp=sharing

      • 10:00
        Using machine learning frameworks on LUMI 3h

        LUMI is one of the largest European supercomputers and a flagship EuroHPC system. It is a unified, heterogeneous computing infrastructure aimed at large, accelerated workflows. One of the key elements of its computing architecture is the accelerator type and its programming environment. This half-day workshop presents the typical transition steps from common clusters and cloud resources, together with use cases covering both traditional HPC workloads and the fundamental elements of running large models with machine learning frameworks.

        This part shows how to set up an environment for AI workflows on a real supercomputing system. It combines common HPC tools with popular machine learning frameworks, relying on GPU offloading and multiprocessing across multiple computing nodes. A minimal sketch of such a setup is given below.
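
        As an illustration of the multi-node setup described above, here is a minimal sketch (our own, not part of the tutorial material) of initializing PyTorch distributed training under a SLURM allocation; the environment variable names and the NCCL/RCCL backend choice are assumptions based on common practice on such systems.

        ```python
        # Hedged sketch: bootstrap torch.distributed from SLURM environment variables.
        import os
        import torch
        import torch.distributed as dist

        rank = int(os.environ["SLURM_PROCID"])         # global rank of this process
        world_size = int(os.environ["SLURM_NTASKS"])   # total number of processes
        local_rank = int(os.environ["SLURM_LOCALID"])  # GPU index on this node
        os.environ.setdefault("MASTER_PORT", "29500")  # MASTER_ADDR set in batch script

        # On AMD GPUs (as on LUMI), PyTorch's "nccl" backend maps to RCCL under ROCm.
        dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(local_rank)

        print(f"rank {rank}/{world_size} ready on GPU {local_rank}")
        dist.destroy_process_group()
        ```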

        https://events.plgrid.pl/event/55/

        Speaker: Maciej Szpindler (ACK Cyfronet)
    • 13:00–14:00
      Lunch break 1h
    • 14:00–16:00
      LUMI tutorial: Part 2

      https://docs.google.com/presentation/d/106otzniJ8tILjenMFI58unf0LrO7HAYBOBmQAuIfJEk/edit?usp=sharing

      • 14:00
        Working with large language models using Megatron-DeepSpeed on LUMI 2h

        This part builds on the first session to set up a dedicated environment for LLM processing. It uses the Megatron-DeepSpeed framework to experiment with pretrained models for prompt-engineering tasks. A minimal initialization sketch is given below.
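
        As a flavour of what working with Megatron-DeepSpeed involves, here is a hedged sketch of a minimal DeepSpeed initialization; the stand-in model and the configuration values are illustrative assumptions, not the tutorial's actual setup.

        ```python
        # Minimal DeepSpeed initialization from a config dict (illustrative values).
        import torch
        import deepspeed

        model = torch.nn.Linear(1024, 1024)  # stand-in for a Megatron transformer stack

        ds_config = {
            "train_micro_batch_size_per_gpu": 4,
            "zero_optimization": {"stage": 2},  # shard optimizer states across ranks
            "fp16": {"enabled": True},
        }

        # Wires up distributed training and returns a wrapped training engine.
        engine, optimizer, _, _ = deepspeed.initialize(
            model=model,
            model_parameters=model.parameters(),
            config=ds_config,
        )
        ```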

        https://events.plgrid.pl/event/55/

        Speaker: Maciej Szpindler (ACK Cyfronet)
    • 08:30–10:00
      Pre-Coffee / Registration 1h 30m
    • 10:00–12:30
      Opening session
      • 10:00
        Welcome address 15m

        A welcome address from Professor Agnieszka Pollo

        Speaker: Prof. Agnieszka Pollo (National Centre for Nuclear Research AND Jagiellonian University)
      • 10:15
        Welcome from the organizers 10m

        Short communication from the workshop organizers

        Speaker: Wojciech Krzemien (NCBJ)
      • 10:25
        Machine learning vs. network science: A comparison of two paradigms for the interpretation of high-throughput data in biology and medicine 40m

        Over the last decades biology and medicine have become data sciences. High-throughput ('omics') data on the level of gene expression, metabolic activity, epigenetic regulation and others now serve as a prominent source of systemic information. This makes these fields accessible to data-driven computational methods, in particular network science and machine learning.

        Network science employs the formal view of graph theory to understand the design principles of complex systems. Abstracting cellular processes (gene regulation, metabolism, protein interactions) into networks has revolutionized the way we think about biological systems.

        Machine learning is most prominent in biological and medical research via the successes of image analysis and of protein structure prediction via AlphaFold. Attempts to train machine learning models to interpret 'omics' data have been less successful so far.

        Focusing on gene expression data as the most common example (beyond the genome) of 'omics' data, we discuss possible reasons for the limited success of machine learning in biology and medicine. We start with a (deceptively) simple biological situation, bacterial gene regulation, and then move to the analysis of medical data.

        Speaker: Marc Hütt (Constructor University)
      • 11:05
        Coffee break 20m
      • 11:25
        AI in Space: Ready for Prime Time? 40m

        Exciting advancements in remote sensing, AI, and edge computing are transforming scientific and industrial sectors via in-orbit data processing. This technology enables real-time applications such as environmental monitoring, precision agriculture, disaster detection, and in-orbit anomaly detection from telemetry data. Integrating AI into space-based systems and edge devices swiftly converts raw data, like multi- or hyperspectral images, into actionable insights on board satellites. Challenges remain, including hardware limitations, model validation, and sparse ground-truth datasets. In this talk, we will explore concrete opportunities, challenges and solutions related to deploying AI in space, focusing on Earth observation and anomaly detection from satellite telemetry data. Real satellite missions, including OPS-SAT by the European Space Agency and Intuition-1 by KP Labs, will serve as real-world examples. Finally, we will discuss the most exciting research and development avenues in on-board and on-ground (quantum) AI for space applications. Fasten your seatbelts, we are ready to take off.

        Speaker: Dr Jakub Nalepa (Silesian University of Technology/KP Labs)
      • 12:05
        Quantum computing in the service of satellite data processing 25m

        Earth observation data are constantly being produced by an ever-growing number of satellites. Processing these data efficiently constitutes a major challenge, and not all of the produced data are processed and analyzed. At the same time, Earth observation provides important information about our ecosystems in an age of rapidly changing climate. For this reason, research on the application of quantum computing to Earth observation data analysis has been initiated by several research institutions. For me personally, participation in this field allows me to study how a variety of quantum algorithmic techniques can be applied to image processing, and to join efforts aimed at reducing the impact of climate change. I will present a short review of ideas and activities that aim at finding new, possibly impactful methods of satellite data processing using quantum computing techniques.

        Speaker: Piotr Gawron (AstroCeNT / Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences)
    • 12:30–13:00
      Conference Photo 30m
    • 13:00–14:00
      Lunch break 1h
    • 14:00–17:30
      Machine learning in high energy physics
      • 14:00
        Machine learning in HEP 40m

        I will recall the basic ideas behind machine learning and then present a selection of examples of ML applications in High Energy Physics.

        Speaker: Artur Kalinowski (University of Warsaw)
      • 14:40
        Deep Generative Models for Particle Simulations at ALICE, CERN 40m

        At the European Organisation for Nuclear Research (CERN), scientists study the fundamental properties of matter by replicating the extreme conditions of the early universe within the Large Hadron Collider (LHC). Understanding particle collisions requires running simulations that mirror the detectors' expected responses within the LHC. With over 50% of CERN's GRID computing power dedicated to High Energy Physics simulations, the need for more efficient simulation methods is critical.

        We propose employing generative machine learning to directly simulate detector responses, leveraging advancements in generative adversarial networks (GANs), autoencoders, and diffusion models to tackle simulation challenges. Our contributions include introducing a modified GAN training objective that accommodates varying simulation variance across different conditional inputs, supplemented with additional regularization to increase the simulation fidelity. For autoencoders, we introduce a conditional control mechanism enhancing simulation control by independently manipulating output parameters of the generated samples. With diffusion models, we explore the efficiency of latent diffusion models and the trade-off between inference time and simulation quality.

        Our proposed methodologies have the potential to advance particle collision simulations by offering more streamlined, controllable, and faster methods, maintaining the fidelity demanded by modern high-energy physics experiments.
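
        To make the conditional-generation idea concrete, the sketch below (our illustration, not the authors' model) shows a generator that consumes latent noise together with a physics condition vector; the layer sizes, output shape, and condition dimension are arbitrary assumptions.

        ```python
        # Illustrative conditional generator of flattened detector responses.
        import torch
        import torch.nn as nn

        class ConditionalGenerator(nn.Module):
            def __init__(self, noise_dim=64, cond_dim=4, out_dim=44 * 44):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
                    nn.Linear(256, 512), nn.ReLU(),
                    nn.Linear(512, out_dim),  # flattened calorimeter response
                )

            def forward(self, z, cond):
                # Concatenate latent noise with the conditional (kinematic) inputs.
                return self.net(torch.cat([z, cond], dim=1))

        gen = ConditionalGenerator()
        z = torch.randn(8, 64)     # latent noise batch
        cond = torch.rand(8, 4)    # hypothetical kinematic conditions
        fake = gen(z, cond)        # batch of simulated detector responses
        ```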

        Speakers: Patryk Będkowski (Warsaw University of Technology), Karol Rogoziński (Warsaw University of Technology), Mikołaj Kita (Warsaw University of Technology)
      • 15:20
        Fast simulation of the Zero Degree Calorimeter responses with generative neural networks 25m

        Applying machine learning methods to high-energy physics simulations has recently emerged as a rapidly developing area. A prominent example is the Zero Degree Calorimeter (ZDC) simulation in the ALICE experiment at CERN, where substituting the traditional computationally extensive Monte Carlo methods with generative models radically reduces computation time. Although numerous studies have addressed the fast ZDC simulation, there remains significant potential for innovations. Recent developments in generative neural networks have enabled the creation of models capable of producing high-quality samples indistinguishable from real data. In this paper, we apply the latest advances to the simulation of the ZDC neutron detector and highlight the potential benefits and challenges. Our focus is on exploring novel architectures and state-of-the-art generative frameworks. We compare their performance against established methods, demonstrating competitive outcomes in speed and efficiency.

        Speakers: Maksymilian Wojnar (AGH University of Krakow), Ms Emilia Majerz (AGH University of Krakow)
      • 15:45
        Coffee break 30m
      • 16:15
        LLM-based physics analysis agent at BESIII and exploration of future AI scientist 25m

        Data processing and analysis are among the main challenges at HEP experiments; a single physics result can normally take more than three years to produce. To accelerate physics analysis and drive new physics discoveries, rapidly developing Large Language Models (LLMs) are a most promising approach: they have demonstrated astonishing capabilities in the recognition and generation of text, from which most parts of a physics analysis can benefit. In this talk we will discuss the construction of a dedicated intelligent agent, an LLM-based AI assistant at BESIII, its potential use to boost hadron spectroscopy studies, and the future plan towards an AI scientist.

        https://cern.zoom.us/j/67924643443?pwd=oHCoX0bnlWFwWq9f1AmnKa1ckQMGGB.1

        Speakers: Ke Li (Institute of High Energy Physics, China), Dr Zhengde Zhang (Institute of High Energy Physics, China), Dr Yiyu Zhang (Institute of High Energy Physics, China)
      • 16:40
        Generic ML for fast simulations 30m

        In the field of high energy physics, Monte Carlo simulations play a key role in interpreting physics results, as well as in the design of new detectors. Leveraging machine learning for fast simulation is essential for generating the required amount of simulated samples. Nevertheless, transitioning from initial models to full-scale production is usually a very challenging task.

        In this talk, we will show how to use Gaussino, an experiment-agnostic core simulation framework, to streamline the incorporation of machine learning models for fast simulations: starting from an early, generic prototype to a fully deployed model used in production at scale. We will also present one of the first implementations of ML-based fast simulation models based on the CaloChallenge initiative, trained and validated on the LHCb electromagnetic calorimeter, and finally integrated within the LHCb simulation framework.

        Speaker: Mr Michał Mazurek (National Centre for Nuclear Research)
    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:30
      Medical imaging: Monte Carlo simulations in medical imaging
      • 09:00
        AI meets physics - an overview of AI applications in the context of Monte Carlo particle transport simulations 40m
        Speaker: Dr Nils Krah (INSA Lyon)
      • 09:40
        GGEMS - GPU Geant4-based Monte Carlo Simulations 25m

        In medical physics, GPU-based Monte Carlo simulations (MCS) have been proposed for computational gains. However, they remain limited to specific applications and are not easily generalized.
        GGEMS (GPU Geant4-based Monte Carlo Simulations) is an advanced MCS software package that uses OpenCL. Written entirely in C++, its software architecture allows flexibility and generality across numerous applications such as imaging and radiation therapy. It has been validated on architectures including NVIDIA and Intel graphics cards, as well as multi-core Intel processors, on both Windows and Linux. A Python interface is provided for scripting, and an OpenGL graphical interface has been developed to assist users.
        GGEMS was evaluated on various medical applications, demonstrating fast simulations. For example, for a CT projection simulating 10^9 particles, the computation times were: 112 s on a GeForce 1050Ti, 385 s on a Quadro P400, 421 s on a Xeon with 16 threads, and 91 s on the 1050Ti+P400 combined.

        Speaker: Didier BENOIT
      • 10:05
        Up-scaling for measuring the spatial distribution of radiation dose for applications in the preparation of individual patient treatment plans 25m

        Super-resolution (SR) techniques are often used in the up-scaling process to add in details that are not present in the original low-resolution image. In radiation therapy, SR can be applied to enhance the quality of medical images used in treatment planning. For the Dose3D detector measuring spatial dose distribution [1], a dedicated set of ML algorithms for SR has been proposed to perform the final dose-distribution up-scaling. The SRCNN [2] architecture has been adapted as the SR technique. The training and validation data are produced with MC simulations at two different scoring resolutions. Extra features related to the beam shape have been defined. The input data resolution is the one coming from the measurement (1cc), and the target data resolution is defined at the level of the CT image. The latest results and advancements of our research will be presented at the conference; a sketch of the SRCNN idea is given after the references.
        References:
        [1] https://dose3d.fis.agh.edu.pl,
        [2] https://doi.org/10.1007/978-3-319-10593-2_13
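
        As an illustration of the SRCNN approach [2] adapted to volumetric dose maps, here is a hedged sketch; the 3-D adaptation, channel counts and input sizes are our assumptions, following the original 9-1-5 layer design.

        ```python
        # SRCNN-style 3-D net: patch extraction, non-linear mapping, reconstruction.
        import torch
        import torch.nn as nn

        class SRCNN3D(nn.Module):
            def __init__(self, channels=1):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv3d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
                    nn.Conv3d(64, 32, kernel_size=1), nn.ReLU(),
                    nn.Conv3d(32, channels, kernel_size=5, padding=2),
                )

            def forward(self, x):
                # x: low-resolution dose volume, pre-interpolated to the target grid.
                return self.body(x)

        model = SRCNN3D()
        lowres = torch.randn(1, 1, 32, 32, 32)  # dummy dose volume
        highres = model(lowres)                 # refined dose distribution
        ```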

        Speaker: Bartłomiej Rachwał (AGH University of Kraków)
    • 10:30–11:00
      Coffee break 30m
    • 11:00–12:35
      High Performance Computing
      • 11:00
        LUMI: Europe's most powerful supercomputer 40m

        The EuroHPC Joint Undertaking has installed three leadership-class supercomputers. We will discuss one of these systems, LUMI, located in Kajaani, Finland. LUMI is currently the fastest supercomputer in Europe and in general one of the most powerful and advanced computing systems in the world. In this talk, I will present the technical architecture of the LUMI infrastructure and its status, together with plans and ambitions for the near future. Then, an overview of the scientific showcases and achievements from the first months of LUMI will be presented. These include, for example, contributions to the Destination Earth initiative, work on large language models and breakthroughs from extreme-scale computing capabilities in many fields of computational science.

        Speaker: Tomasz Malkiewicz (CSC / NeIC)
      • 11:40
        Developing Artificial Intelligence in the Cloud: the AI_INFN platform 25m

        The INFN CSN5-funded project AI_INFN ("Artificial Intelligence at INFN") aims at fostering the adoption of ML and AI within INFN by providing support on multiple aspects, including the provision of state-of-the-art hardware for AI and ML. It leverages cloud-native solutions in the context of INFN Cloud to share hardware accelerators as effectively as possible without compromising the diversity of the Institute's research activities. AI_INFN evolves the Virtual-Machine-based model towards a more flexible platform built on top of Kubernetes. It is meant to be a composable toolkit and currently features JWT-based authentication, a JupyterHub multi-tenant interface, a distributed filesystem, customizable conda environments, and a specialized monitoring and accounting system. Last but not least, the platform enables an offloading mechanism based on Virtual Kubelet and the interLink API, in synergy with InterTwin. Preliminary results and applications will be presented.

        Speaker: Rosa Petrini (INFN)
      • 12:05
        Polish National Competence Centre in HPC: Enabling AI in EuroHPC 30m

        This presentation delves into the importance of the EuroHPC Joint Undertaking and the supercomputing resources it provides for European users from industry, science and public administration. In particular, it highlights the LUMI supercomputer, ranked 5th on the recent Top500 list of the fastest supercomputers, making it the fastest supercomputer in Europe. The EuroCC2 project is also introduced as a critical initiative in fostering HPC competencies in Europe, with the Polish National Competence Centre activities in Poland described in more detail. A focal point of the presentation is the SpeakLeash project, a compelling example of AI utilization facilitated by NCC Poland, demonstrating how cutting-edge HPC resources can be harnessed to drive innovative AI solutions.

        Speaker: Marek Magrys (Cyfronet)
    • 12:35–13:30
      Lunch break 55m
    • 13:30–14:20
      High Performance Computing
      • 13:30
        FPGAs in HPC - applications and methods 25m

        In recent years, Field Programmable Gate Array technology has gained momentum in the HPC sector, as the abundance of configurable resources and high-level development tools allows complex algorithms to be implemented. Unique features of this technology, such as adaptable computing, pipelined processing and integrated high-speed transceivers, provide the means to compete with classic CPUs or GPUs in certain applications. In this talk, I will present the technology fundamentals and development methodologies, and give an overview of the computing areas that can benefit from employing FPGAs.

        Speaker: Grzegorz Korcyl (Jagiellonian University)
      • 13:55
        Preliminary study on artificial intelligence methods for cybersecurity threat detection in computer networks based on raw data packets 25m

        Most intrusion detection methods for cybersecurity threat detection in computer networks are based on traffic-flow characteristics. However, this approach may not fully exploit the potential of deep learning algorithms to extract features directly from raw packets. Moreover, it impedes real-time monitoring, since the processing pipeline must complete first, and introduces dependencies on additional software components.

        In this paper, we investigate deep learning methodologies capable of detecting attacks in real time directly from raw packet data within network traffic. Our investigation utilizes the CICIDS2017 dataset, which includes both benign traffic and prevalent real-world attacks, providing a comprehensive foundation for our research.
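
        As an illustration of classifying raw packet bytes directly (our sketch, not the authors' architecture; the byte budget, embedding size and layer shapes are assumptions), a small 1-D CNN might look like this:

        ```python
        # Hedged sketch: a 1-D CNN over raw packet bytes, benign vs. attack.
        import torch
        import torch.nn as nn

        class PacketCNN(nn.Module):
            def __init__(self, n_bytes=1500, n_classes=2):
                super().__init__()
                self.embed = nn.Embedding(256, 16)   # one embedding per byte value
                self.conv = nn.Sequential(
                    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.fc = nn.Linear(64, n_classes)

            def forward(self, x):                    # x: (batch, n_bytes) byte values
                h = self.embed(x).transpose(1, 2)    # -> (batch, 16, n_bytes)
                return self.fc(self.conv(h).squeeze(-1))

        model = PacketCNN()
        packets = torch.randint(0, 256, (4, 1500))   # dummy raw packets
        logits = model(packets)                      # per-class scores
        ```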

        Speakers: Aleksander Ogonowski (National Centre for Nuclear Research), Mr Michał Żebrowski (National Centre for Nuclear Research)
    • 14:20–14:50
      Coffee break 30m
    • 14:50–16:30
      Medical imaging: Novel reconstruction algorithms and techniques
      • 14:50
        Advanced TOF MLEM reconstruction of a human patient scanned by the modular J-PET 25m

        We present one of the first images from an in vivo scan of a human patient, measured by the modular Jagiellonian positron emission tomograph (J-PET) prototype, which records 511-keV annihilation photons via Compton scattering in plastic scintillators [1]. The original reconstruction algorithm is adapted from maximum likelihood expectation maximisation (MLEM), with realistic J-PET system-matrix modelling, time-of-flight (TOF) information, and attenuation and scatter corrections applied [2].

        The attenuation map of the patient was measured by a CT scan. The scatter factors were estimated by the single scatter simulation (SSS), implemented in the STIR software [3]. The subsampled SSS-sinogram was acquired using the known attenuation factors and a prior MLEM reconstruction, made without additive corrections and later upscaled by interpolation. As a result, a significant improvement was achieved in noise suppression and resolution recovery for the reconstructed PET image.

        [1] Moskal P et al. MedRxiv 2024.02.01.23299028 (2024)
        [2] Shopa RY et al. IEEE TRPMS 7 509 (2023)
        [3] Thielemans K et al. PMB 57 867 (2012)
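
        For readers unfamiliar with MLEM, the sketch below shows the textbook multiplicative update in NumPy, without the TOF weighting or the J-PET system-matrix modelling described above (a simplified illustration, not the authors' implementation).

        ```python
        # Textbook MLEM: x <- x / sens * A^T (y / (A x)), applied iteratively.
        import numpy as np

        def mlem(A, y, n_iter=50):
            """A: system matrix (LORs x voxels); y: measured counts per LOR."""
            x = np.ones(A.shape[1])                  # uniform initial image
            sens = A.sum(axis=0)                     # sensitivity image
            for _ in range(n_iter):
                proj = A @ x                         # forward projection
                ratio = y / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        rng = np.random.default_rng(0)
        A = rng.random((200, 64))                    # toy system matrix
        y = rng.poisson(A @ rng.random(64))          # synthetic measurement
        x_hat = mlem(A, y)                           # reconstructed toy image
        ```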

        Speaker: Roman Shopa (National Centre for Nuclear Research)
      • 15:15
        Improvement of time-of-flight resolution of PET scanner using additional prompt photon 25m

        Positronium Imaging (PI) has become one of the most promising branches of Positron Emission Tomography. A PI measurement requires two classes of events: double-coincidence events originating from a pair of back-to-back annihilation photons, and triple-coincidence events comprising three photons, i.e., two annihilation photons and one additional prompt photon. The standard reconstruction of the emission position along the line of response of a triple-coincidence event is the same as for a double-coincidence event and is based on the times and positions of the two annihilation photons only; the information carried by the additional prompt photon is ignored. In this presentation, we propose to extend the position reconstruction of triple-coincidence events by taking into account the time and position of the prompt photon. Moreover, we incorporate knowledge of the positronium lifetime distribution and derive an algorithm for the position reconstruction. We discuss the limitations of the method based on simulation data.

        Speaker: Lech Raczyński (National Centre for Nuclear Research)
      • 15:40
        Towards total-body J-PET: overview of data correction techniques for image reconstruction 25m

        Positron Emission Tomography (PET) is a widely employed medical imaging technique that uses radiotracers to visualise various metabolic processes. PET functions by detecting gamma rays resulting from the decay of the radiotracer in the patient's body. The acquired data are then utilised to reconstruct an image representing the initial radiotracer distribution. However, numerous effects, including accidental coincidences, photon scattering and positron range, affect the data in ways that cause artefacts in the reconstructed image. A number of data correction techniques exist to compensate for these undesired effects and produce images of satisfactory quality. This talk will review the existing techniques, with a focus on the total-body Jagiellonian PET, a prototype long axial field-of-view (AFOV) PET system based on plastic scintillators, currently under development at the Jagiellonian University in Poland.

        Speaker: Aurélien Coussat
      • 16:05
        Assessment of Internal Radiation Dose: Understanding the Influence of Respiratory Motion 25m

        Assessing radiation doses to internal organs is crucial in evaluating the risks and benefits of diagnostic and therapeutic nuclear medicine procedures, such as PET, for patients. Respiratory motion causes significant displacement of internal organs, affecting the absorbed dose in cases of external radiation exposure; yet there has been no previous report on the impact of respiratory motion on internal radiation dosimetry. In this study, our focus was on determining the role of respiratory motion in assessing the absorbed dose S values for Lu-177, Dy-165, I-131, and Tc-99m.

        Speaker: Dr Moncef ATI (University Of Oran1)
    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:30
      Machine learning in medicine
      • 09:00
        From Signal Acquisition to Image Reconstruction: Potential Applications of Machine Learning in Positron Emission Tomography 40m

        Positron Emission Tomography (PET) is a functional medical imaging technique that allows for the visualization and measurement of metabolic processes in the body by detecting pairs of 511-keV gamma rays originating from a tracer molecule labeled with a positron emitter.

        Despite its advanced capabilities, PET imaging faces significant challenges, including high noise levels and limited spatial resolution of the acquired data, which severely hamper the diagnostic quality of the reconstructed images.

        In addition to classical algorithms traditionally used for signal processing, image reconstruction, and image post-processing, machine learning (ML) based algorithms are now being explored to enhance the quality of PET raw data and of reconstructed PET images.

        This talk provides an overview of the current applications of ML in PET imaging, encompassing various stages from signal acquisition to image reconstruction and post-processing.

        Additionally, the presentation addresses the current challenges in the field and explores future needs and directions for a sustainable and successful integration of ML in PET imaging.

        Speaker: Georg Schramm
      • 09:40
        Using 3D CNNs for distortion corrections in PET imaging 25m

        In Positron Emission Tomography, the problem of image distortion due to scattered photons or accidental coincidences becomes more pronounced for large field-of-view scanners capable of measuring the whole patient in one scan. We propose a novel method of encoding coincidence-event information to enhance the efficiency of noise-filtration classification. The proposed encoding enables the use of Convolutional Neural Networks as feature extractors in the classification task. We take advantage of the voxel nature of the underlying data and evaluate the performance of a 3-D CNN in classifying true, scattered and accidental coincidences, to improve imaging quality with large field-of-view PET scanners.
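
        A hedged sketch of the idea follows: coincidence events encoded as voxel grids and classified as true, scattered, or accidental with a small 3-D CNN (our illustration; the encoding and architecture details are assumptions, not the authors' model).

        ```python
        # Three-way classification of voxelized coincidence events with a 3-D CNN.
        import torch
        import torch.nn as nn

        class Coincidence3DCNN(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):   # x: (batch, 1, D, H, W) voxelized events
                h = self.features(x).flatten(1)
                return self.classifier(h)  # true / scattered / accidental scores

        model = Coincidence3DCNN()
        events = torch.randn(2, 1, 32, 32, 32)  # dummy voxel encodings
        print(model(events).shape)              # (2, 3)
        ```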

        Speaker: Konrad Klimaszewski (National Centre for Nuclear Research)
      • 10:05
        Machine Learning-based Scatter Correction for a Dual-Panel Positron Emission Mammography Scanner 25m

        Positron Emission Mammography (PEM) is a Nuclear Medicine technique for breast imaging based on a dedicated scanner assembled with parallel dual-panel detector arrays. Patient positioning in close contact with the scanner enhances spatial resolution and sensitivity in comparison with ring-based scanners, but this geometry hinders the adaptation of conventional attenuation and scatter correction methods, which affects the quantitative assessment of studies. In this work we trained several machine learning algorithms for scatter correction with list-mode data from a Monte Carlo simulation of a PEM prototype being built in our lab. The features for this binary classification problem were energy and position of detection, where energy had the higher feature importance in agreement with traditional methods. The best results were found with a Random Forest of 38 estimators and a maximum depth of 7, which reduced the scatter fraction of a study of 1 million events from 11% to 4% in 2 seconds.
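
        The reported configuration can be mirrored in a few lines of scikit-learn; the sketch below uses synthetic stand-in features (detected energy and position), with labels constructed so that low energy indicates scatter, purely for illustration.

        ```python
        # Hedged sketch: scatter vs. true classification with the reported RF settings.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n = 10_000
        X = np.column_stack([
            rng.normal(480, 40, n),    # detected energy [keV]
            rng.uniform(-10, 10, n),   # detection position x [cm]
            rng.uniform(-10, 10, n),   # detection position y [cm]
        ])
        y = (X[:, 0] < 460).astype(int)  # toy rule: low energy -> scattered

        clf = RandomForestClassifier(n_estimators=38, max_depth=7, random_state=0)
        clf.fit(X, y)
        print(clf.feature_importances_)  # energy dominates, as in the study
        ```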

        Speaker: Mr Fernando Moncada-Gutiérrez (Instituto de Física, UNAM)
    • 10:30–11:00
      Coffee break 30m
    • 11:00–12:30
      Quantum computing and systems
      • 11:00
        Quantum computing of gauge fields 40m

        This talk aims to explore the relation between gauge fields, which are at the basis of our understanding of fundamental interactions (including gravity), and quantum information. Our primary focus is on SU(2) gauge fields, where a spin-network representation of gauge-invariant states is possible. The spin-network framework offers a unique perspective on the entanglement structure inherent in gauge theories. Additionally, representing these states through quantum circuits paves the way for simulating non-abelian field theories using quantum computers. We will present results from quantum simulations of simple SU(2) gauge-field configurations on IBM's 5-qubit (Yorktown) and 15-qubit (Melbourne) superconducting quantum computers.
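
        As a toy illustration of the elementary building block (not the spin-network circuits of the talk), any SU(2) element can be realized as a single-qubit rotation via an Euler decomposition, sketched here in Qiskit as an assumption about tooling:

        ```python
        # Rz-Ry-Rz Euler decomposition of an arbitrary SU(2) element on one qubit.
        from qiskit import QuantumCircuit
        from qiskit.quantum_info import Operator

        theta, phi, lam = 0.4, 1.1, -0.7   # arbitrary Euler angles
        qc = QuantumCircuit(1)
        qc.rz(lam, 0)
        qc.ry(theta, 0)
        qc.rz(phi, 0)

        U = Operator(qc)
        print(U.data)                      # a 2x2 special unitary matrix
        ```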

        Speaker: Jakub Mielczarek (Jagiellonian University)
      • 11:40
        Utilizing Superconducting Quantum Computer for Education and Research 25m

        With the growing interest in quantum technology globally, there is an increasing need for access to relevant physical systems for education and research. This talk introduces a commercially available on-site quantum computer utilizing superconducting technology. We show how this system can be used in education to teach quantum concepts and deepen the understanding of quantum theory and quantum computing. It offers learning opportunities for future talent and contributes to fundamental research and technological progress. We highlight the advantages of having complete hands-on access to the hardware. As educational and research use cases, we demonstrate the violation of the CHSH inequality, a GHZ-state experiment offering an intuitive account of decoherence, and a simulation of neutrino flavor oscillations (a sketch of the CHSH check is given below).
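
        As an illustration of the first use case, the CHSH value of a Bell state can be computed analytically with qiskit.quantum_info; this is our sketch of the standard textbook check, not IQM's course material.

        ```python
        # CHSH on a Bell state: the ideal value 2*sqrt(2) exceeds the classical bound 2.
        import numpy as np
        from qiskit import QuantumCircuit
        from qiskit.quantum_info import Statevector, SparsePauliOp

        bell = QuantumCircuit(2)
        bell.h(0)
        bell.cx(0, 1)
        psi = Statevector.from_instruction(bell)

        def E(label):
            """Two-qubit Pauli correlation <label> in the Bell state."""
            return np.real(psi.expectation_value(SparsePauliOp(label)))

        s = 1 / np.sqrt(2)
        # Settings: A0=Z, A1=X on one qubit; B0=(Z+X)/sqrt2, B1=(Z-X)/sqrt2 on the other.
        S = (s * (E("ZZ") + E("XZ"))    # <A0 B0>
             + s * (E("ZZ") - E("XZ"))  # <A0 B1>
             + s * (E("ZX") + E("XX"))  # <A1 B0>
             - s * (E("ZX") - E("XX"))) # <A1 B1>
        print(S)  # ~2.828 > 2: the inequality is violated
        ```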

        Speaker: Mr Jami Rönkkö (IQM Quantum Computers)
      • 12:05
        Parallel Two-Qubit Gates on IQM Resonance: Garnet 20-Qubit Quantum Computer 25m

        The promise of quantum computing speeding up the execution of certain computational tasks cannot be fulfilled without quality hardware. Superconducting qubits are one of the most mature technologies for the implementation of a quantum processing unit (QPU), and such devices are already available for researchers to use in the cloud. Among them is IQM's Garnet QPU, available on the Resonance platform. This 20-qubit quantum computer embodies IQM's core technology choices, such as a floating tunable transmon coupler [1], allowing a median QPU CZ-gate fidelity of 99.5% and the entanglement of all the qubits on the chip, demonstrated by preparing a GHZ state with 62% fidelity. I will present benchmarking results ranging from gate fidelities to application benchmarks quantifying performance in specific tasks. Subsequently, I will describe the tunable coupler architecture and the methods for fast, reliable and automatic calibration of high-fidelity parallel two-qubit gates.
        [1] Fabian Marxer et al. PRX Quantum 4 010314 (2023)
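
        For illustration, the GHZ benchmark mentioned above corresponds to a circuit like the following (our sketch, only constructed here rather than run on Garnet; the linear CNOT chain is one of several possible layouts):

        ```python
        # A 20-qubit GHZ preparation circuit: H on one qubit, then a CNOT chain.
        from qiskit import QuantumCircuit

        n = 20
        ghz = QuantumCircuit(n)
        ghz.h(0)
        for i in range(n - 1):
            ghz.cx(i, i + 1)   # entangle neighbouring qubits along a chain
        print(ghz.depth())     # linear depth; hardware layouts can parallelize this
        ```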

        Speaker: Dr Jakub Mrożek (IQM Quantum Computers)
    • 12:30–13:30
      Lunch break 1h
    • 13:30–14:20
      Quantum computing and machine learning
      • 13:30
        Quantum Neural Networks: current status and next steps 25m

        Classical machine learning has proven valuable since its implementation on classical computers became feasible. On the other hand, quantum computation promises an exponential advantage over any classical algorithm for specialized tasks. Thus, adapting the machine learning paradigm to the quantum realm is a promising way forward.
        We start the talk with a general introduction to the mathematical framework required for this adaptation. These basic notions are crucial to understanding how we can manipulate quantum systems and what the limitations are. Afterward, we discuss different approaches to "quantizing" the neural network architecture, i.e., adapting classical neural networks to quantum systems, before focusing on Dissipative Quantum Neural Networks. We show that this ansatz has the potential to be a "quantum universal approximator" as it can be used to learn any quantum operation. Lastly, preliminary numerical results and possible next steps are discussed.

        Speaker: Mr Tobias Christoph Sutter (University of Vienna)
      • 13:55
        Entanglement classification via Neuronal Networks 25m

        In this talk, we investigate the application of machine learning to an NP-hard problem in quantum information theory: the separability problem of classifying a quantum state as entangled or separable. This problem arises for entangled quantum systems of dimension three or higher, where no exact solution is currently known. We demonstrate that neural networks can accurately classify mixtures of Bell states. This classification can be achieved by considering the properties of the mixtures themselves and entropy-related quantities. We further highlight convolutional neural networks in this process. Our findings indicate that these networks can reflect entanglement structures crucial for accurate classification. The study underscores the synergistic potential of machine learning and quantum information science and suggests a promising direction for their combined application in solving complex quantum problems.

        Speaker: Mr Christopher Popp (University Vienna)
    • 14:20–14:50
      Coffee break 30m
    • 14:50–16:50
      Quantum computing tutorial
      • 14:50
        Quantum Computing Hardware with QGates 2h

        In this workshop, modular hardware will be presented that enables an introduction to the basic algorithms of quantum computing, such as Shor's algorithm (factorization of 15 and 21), Grover's algorithm, and the quantum k-means algorithm. The low-cost hardware is based on microcontrollers and enables exact quantum simulations of quantum circuits with up to 8 qubits using so-called "QGates". The modular design, consisting of several identical boards, allows the complexity to be cascaded, providing a didactic introduction to the complexity and challenges of the upcoming quantum computers.

        Speaker: Prof. Beatrix Hiesmayr (University of Vienna)
    • 19:00–23:00
      Conference dinner
    • 08:30–09:00
      Pre-Coffee 30m
    • 09:00–10:05
      Quantum computing and machine learning
      • 09:00
        Machine Learning based on quantum or classical systems: a comparison 40m

        It is well known that quantum laws are fundamentally different from classical ones and are currently being used to boost the performance of computers, including machine learning algorithms. We elaborate on the differences and challenges from several perspectives. Furthermore, we point out that, with the recent trend in research of publishing computer code along with the research results, a causal link between the (formal) mathematical model and the set of results can no longer be taken for granted. This is especially true for the recent advances in AI-driven applications.

        Speaker: Prof. Beatrix Hiesmayr (University of Vienna)
      • 09:40
        Auto Quantum Machine Learning with AQMLator 25m

        Since the 2010s, when deep learning became feasible, machine learning (ML) has been attracting ever-growing attention. The ability to train large ML models gave rise to various neural network architectures, such as convolutional neural networks and generative adversarial networks. Around the same time, technological advancements allowed us to also direct our attention to quantum computing (QC), a computation paradigm that uses quantum mechanical phenomena. Naturally, quantum and hybrid ML models began to appear, and with them a daunting task: how does one design the architecture for such models?

        We present AQMLator, an Auto Quantum Machine Learning platform. It aims to automatically propose and train the quantum layers of an ML model with minimal input from the user. This way, AI scientists can overcome the entry barrier for QC and use quantum machine learning (QML) or hybrid models. AQMLator uses standard ML Python libraries, making it easy to introduce into existing ML pipelines.
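
        To give a flavour of the kind of hybrid quantum-classical model such a platform proposes, here is a generic hand-written sketch using PennyLane with PyTorch; it illustrates a quantum layer inside a classical pipeline and is not AQMLator's own API.

        ```python
        # A hybrid model: a variational quantum layer followed by a classical head.
        import torch
        import pennylane as qml

        n_qubits = 4
        dev = qml.device("default.qubit", wires=n_qubits)

        @qml.qnode(dev, interface="torch")
        def circuit(inputs, weights):
            qml.AngleEmbedding(inputs, wires=range(n_qubits))        # encode features
            qml.BasicEntanglerLayers(weights, wires=range(n_qubits)) # trainable layer
            return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

        weight_shapes = {"weights": (2, n_qubits)}   # 2 entangling layers
        qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)
        model = torch.nn.Sequential(qlayer, torch.nn.Linear(n_qubits, 2))

        x = torch.rand(8, n_qubits)   # dummy feature batch
        print(model(x).shape)         # (8, 2) class scores
        ```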

        Speaker: Dr Tomasz Rybotycki (SRI PAS, NCAC PAS, CEAI AGH)
    • 10:05–10:35
      Coffee break 30m
    • 10:35–12:30
      Machine learning in astrophysics
      • 10:35
        Astrophysics of the nearest future: big data and machine learning challenges 40m

        It is often said that we are now living in an era of astronomical "big data", with the amount of observational data having increased by orders of magnitude during the last decades and expected to grow even faster in the coming years, with the advent of huge wide-field observatories such as the Vera Rubin Observatory or the Square Kilometre Array Observatory. With hundreds of petabytes of new data appearing every year, machine learning becomes a necessity. At the same time, the application of machine learning methods to astrophysical problems poses many specific challenges: small and not fully representative training samples, physical interpretability, and effective searches for anomalies. I will discuss the aims, difficulties, and approaches that are being developed, drawing, among other things, on examples from the research carried out in our extragalactic astrophysics group at NCBJ and UJ.

        Speaker: Agnieszka Pollo (National Centre for Nuclear Research AND Jagiellonian University)
      • 11:15
        Characteristic sky background features around galaxy mergers 25m

        Galaxy merger identification in large-scale surveys is one of the main areas of astronomy benefiting from the development of Machine Learning (ML), especially for galaxy classification. In this talk, I will focus on the combination of ML, clustering, and dimensionality-reduction techniques with astronomical images and measurements. The goal of this methodology is to discern galaxy mergers from the rest of the galaxies in the sky. An initial neural network was applied to the flux measurements from the images, and iteration over multiple combinations of these parameters led us to find that one parameter traced galaxy mergers with a test-set accuracy of up to 91%. This parameter is the error in the sky background measurement, which we interpret as tracing low signal-to-noise features around observed galaxies. With this work, I want to stress the benefits of interpreting the results of ML models and how doing so led us to unveil a completely new path for galaxy morphology classification.

        Speaker: Luis Eduardo Suelves (National Centre for Nuclear Research)
      • 11:40
        Using Machine Learning to Identify Outliers in the Fundamental Metallicity Relation 25m

        The chemical evolution of galaxies is intricately linked to the interplay between Active Galactic Nuclei (AGNs) and galactic interactions. This is exemplified in the fundamental metallicity relation (FMR), which characterizes the chemical evolution of galaxies where stars are formed. Although AGN feedback is reflected in the FMR, galaxies that host AGNs follow the same relation as those that are star-forming. However, interacting galaxies, such as pairs or mergers, seem to constitute a distinctive population that deviates from the FMR. Our objective is to identify outliers through machine-learning algorithms that search for correlations with incorrectly classified galaxy types or interaction statuses.

        Speaker: Francesco Pistis (National Centre for Nuclear Research)
      • 12:05
        Reconstruction of muon bundles in KM3NeT detectors using machine learning methods 25m

        A network of two next-generation underwater Cherenkov neutrino telescopes, ARCA and ORCA, is being progressively deployed in the Mediterranean Sea by the KM3NeT Collaboration. The focus of ARCA is neutrino astronomy, while ORCA is mainly dedicated to neutrino oscillation studies. Both detectors are already operational in their intermediate configurations and are collecting valuable data, including measurements of the atmospheric muons produced by cosmic-ray interactions. This work explores the potential of the intermediate as well as the complete detector configurations of ARCA and ORCA to observe events composed of multiple muons originating from a common primary cosmic ray, called muon bundles. An approach to inferring the total number of observed muons in a bundle, their total energy, and even the energy of the primary will be presented.

        Speaker: Dr Piotr Kalaczyński (CAMK PAN)
    • 12:30–13:30
      Lunch break 1h
    • 13:30–17:30
      LLM tutorial
      • 13:30
        EuroCC Tutorial on Using Large Language Models (LLM) for Private Data - part 1 1h 45m

        The tutorial will explore the possibilities of utilizing LLMs for interacting with private data. We will introduce tools that enable harnessing the power of generative AI in scenarios where no data can leave your execution environment at any point. We explore the architecture and data requirements for creating your private ChatGPT, leveraging semantic understanding while maintaining control over your data.

        Hands-on requirements: laptop with working Wi-Fi, up-to-date web browser, SSH client.

        Speaker: Krzysztof Nawrocki (National Centre for Nuclear Research)
      • 15:15
        Coffee break 30m
      • 15:45
        EuroCC Tutorial on Using Large Language Models (LLM) for Private Data - part 2 1h 45m

        The tutorial will explore the possibilities of utilizing LLMs for interacting with private data. We will introduce tools that enable harnessing the power of generative AI in scenarios where no data can leave your execution environment at any point. We explore the architecture and data requirements for creating your private ChatGPT, leveraging semantic understanding while maintaining control over your data.

        Hands-on requirements: laptop with working Wi-Fi, up-to-date web browser, SSH client.

        Speaker: Krzysztof Nawrocki (National Centre for Nuclear Research)