Thesis etd-10062025-112932
Thesis type
Master's thesis
Author
DISPOTO, LAPO
URN
etd-10062025-112932
Title
Graph Neural Networks for particle tracking in the MEG II experiment
Department
PHYSICS
Degree program
PHYSICS
Supervisors
Supervisor: Galli, Luca
Keywords
- drift chamber
- graph neural networks
- machine learning
- MEG II
- particle tracking
Defense date
20/10/2025
Availability
Full
Abstract
At the Paul Scherrer Institut in Switzerland, the MEG II experiment searches for the $\mu^+\longrightarrow e^+\gamma$ process, whose observation would probe physics beyond the Standard Model.
The goal is to reach a sensitivity of about $6 \times 10^{-14}$, allowing tests of beyond-Standard-Model theories at energy scales that are neither currently accessible nor foreseen to be reached by the most advanced experimental facilities.
The MEG II experiment is conducted at the $\pi e5$ beamline, where $5 \times 10^7$ muons per second are stopped in a thin target placed at the center of the experimental apparatus, where they decay. The energy, time of flight, and direction of the detectable decay products, a photon and a positron, are then measured. Photon-related quantities are measured using a homogeneous liquid xenon calorimeter, whereas the energy and quasi-helical trajectory of the positrons are reconstructed with a cylindrical drift chamber permeated by a gradient magnetic field. Additionally, the positron time of flight is measured with high precision by a pixelated timing counter.
Recently, MEG II established the most stringent upper bound on the branching ratio, $\mathcal{B}(\mu^+ \longrightarrow e^+ \gamma)<1.5\times10^{-13}$, and plans to reach its final goal by accumulating statistics until 2026. The measured branching ratio is obtained as the ratio of the number of signal candidate events to a normalization factor. The latter is linearly proportional to the number of decays observed by the detector and to the efficiencies of the experimental apparatus, among them the positron tracking efficiency.
We observed a degradation of the tracking efficiency as the beam rate increased. Furthermore, with the current algorithm, the tracking task scales combinatorially with the number of hits in the cylindrical drift chamber, so substantial computing time is required at higher rates. In fact, up to 75\% of the analysis time is spent on the tracking task, and with up to six months required to process a full year of data, exploratory analyses become practically unfeasible.
Given the two problems of decreasing tracking efficiency and increasing computing time at higher pile-up conditions, this thesis aims to develop a new tracking algorithm that addresses both challenges.
A class of Machine Learning algorithms known as Graph Neural Networks (GNNs) has been employed. These networks were originally designed for applications involving sparse datasets and non-Euclidean feature geometries, such as those encountered in tracking applications in MEG II and other HEP experiments.
The proposed algorithm is applied before the current one, acting on the input hits from the cylindrical drift chamber by performing a multi-class classification task: separating noise hits from those belonging to different helical turns. In this way, the combinatorial complexity in the number of hits is significantly reduced.
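As a purely illustrative sketch (not the thesis implementation), the idea of classifying drift-chamber hits with a GNN can be mimicked by one round of message passing over a toy hit graph, followed by a per-hit softmax over hypothetical classes (noise, first helical turn, second helical turn). All features, edges, and weights below are made-up toy values.

```python
import numpy as np

# Toy sketch of GNN-style hit classification; not the thesis model.
# Each hit gets a feature vector (e.g. wire position, drift time) and
# exchanges information with geometrically nearby hits before being
# assigned a class: noise, turn 1, or turn 2 (hypothetical labels).

rng = np.random.default_rng(0)

num_hits, feat_dim, num_classes = 6, 4, 3
x = rng.normal(size=(num_hits, feat_dim))   # per-hit input features (toy)

# Undirected edges linking nearby hits (toy adjacency), plus self-loops.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
adj = np.eye(num_hits)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
deg_inv = 1.0 / adj.sum(axis=1, keepdims=True)

# One message-passing layer: average neighbour features,
# then apply a linear map and a ReLU non-linearity.
w_msg = rng.normal(size=(feat_dim, feat_dim))
h = np.maximum(deg_inv * (adj @ x) @ w_msg, 0.0)

# Classification head: per-hit logits -> softmax probabilities.
w_cls = rng.normal(size=(feat_dim, num_classes))
logits = h @ w_cls
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

labels = probs.argmax(axis=1)  # predicted class per hit
print(probs.shape, labels.shape)
```

In a trained version of such a model, hits labelled as noise would be dropped before the downstream track fit, which is what reduces the combinatorial load.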
The primary objective is to reduce the inference time needed for the tracking phase of the experiment while maintaining, or even improving, the current tracking efficiency. Achieving this would allow greater flexibility in the analysis of the experiment.
In the highest pile-up condition of the Monte Carlo simulation, the following results have been achieved on a test set: 94\% of the signal hits are kept and 73\% of the noise hits are discarded. The computing time of the pattern recognition has been reduced to 36\% of its original value, implying a reduction of the single-event analysis time to 57\%; however, the results were not satisfactory on the tracking-efficiency side, which was reduced by 8\%. Preliminary studies on this matter led to the conclusion that removing noise hits actually deteriorates the tracking efficiency; the underlying reason is not yet clarified. To achieve these results, an initial integration of the algorithm into the single-event routine analysis of the experiment was successfully carried out, and all sanity checks for the implementation were passed.
Additional work is planned in the collaboration to further reduce the computing time and to improve the tracking efficiency beyond the current algorithms, with the ultimate goal of including our model in MEG II's official analysis.
File
| File name | Size |
|---|---|
| Tesi_Dispoto.pdf | 18.26 MB |