Thesis etd-10092023-175446
Thesis type
Master's thesis
Author
COLOMBINI, JACOPO JOY
URN
etd-10092023-175446
Title
Generative artificial intelligence for graph semantic mobility data
Department
FISICA
Degree course
FISICA
Supervisors
Supervisor: Prof. Giannotti, Fosca
Co-supervisor: Prof. Pellungrini, Roberto
Co-supervisor: Prof. Rizzi, Andrea
Keywords
- generative artificial intelligence
- mobility
- purpose of motion
Defence date
23/10/2023
Availability
Not available for consultation
Release date
23/10/2093
Abstract
Mobility studies have long sought to predict and understand the mobility of people, first with deterministic models and, in recent years and thanks to the mobile revolution, with data-driven machine learning models.
One problem with these models is that little or no effort is devoted to understanding the semantics behind movements, i.e. the purpose of motion, probably due to a lack of semantic mobility data, which are typically costly and slow to obtain.
In this work we tackle this issue by building a labelled-graph generative artificial intelligence. The generated graphs are special mobility graphs, called Individual Mobility Networks (IMNs), which summarize the mobility habits of an individual together with the activity behind every visited location.
To generate these graphs we use a regularized autoencoder and generate over its latent space, separately, with a normalizing flow and with a Wasserstein GAN.
We compare the results of these two architectures against a simple regularized autoencoder with Gaussian generation over the latent space
and against a Deep Convolutional GAN, finding that our architectures work better.
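The simplest of the setups compared above, Gaussian generation over an autoencoder's latent space, can be sketched roughly as follows. All names and the synthetic latent codes here are hypothetical stand-ins (the thesis itself is not consultable); in the real pipeline the codes would come from the trained regularized autoencoder and the sampled vectors would be passed through its decoder to obtain graphs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes: one row per training IMN, latent dimension d.
# In the actual pipeline these would be produced by the trained encoder.
d = 8
train_latents = rng.normal(loc=1.5, scale=0.7, size=(500, d))

# Gaussian-generation baseline: fit a multivariate Gaussian to the latent
# codes and sample fresh latent vectors from it.
mu = train_latents.mean(axis=0)
cov = np.cov(train_latents, rowvar=False)
new_latents = rng.multivariate_normal(mu, cov, size=100)

print(new_latents.shape)  # (100, 8)
```

The normalizing-flow and Wasserstein-GAN variants would replace the fitted Gaussian with a learned sampler over the same latent space.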
The performance evaluation is carried out in two different ways: on the one hand, we use "topological" functions to induce distances between graphs, and then evaluate a generated sample by comparing the distributions of distances between generated graphs, between test graphs, and between test and generated graphs; on the other hand, we use a Graph Isomorphism Network to produce rich vector embeddings, and then evaluate the generative process by comparing how the vectors representing the generated set and the test set are distributed.
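The first evaluation strategy, comparing the three distance distributions (within the test set, within the generated set, and across the two), can be illustrated with a toy sketch. The feature vectors and the Euclidean distance below are illustrative placeholders for the per-graph "topological" functions and induced distances described in the abstract.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Stand-ins for per-graph descriptors induced by "topological" functions
# (e.g. node count, edge count, label histogram); purely illustrative.
test_feats = rng.normal(size=(50, 4))
gen_feats = rng.normal(size=(50, 4))

def pairwise_dists(a, b=None):
    """Distances within one set (b is None) or between two sets."""
    if b is None:
        return np.array([np.linalg.norm(a[i] - a[j])
                         for i, j in combinations(range(len(a)), 2)])
    return np.array([np.linalg.norm(x - y) for x in a for y in b])

d_test = pairwise_dists(test_feats)               # test vs test
d_gen = pairwise_dists(gen_feats)                 # generated vs generated
d_cross = pairwise_dists(test_feats, gen_feats)   # test vs generated

# A good generator yields three similar distance distributions; comparing
# their means is the crudest such check.
print(d_test.mean(), d_gen.mean(), d_cross.mean())
```

In practice one would compare the full distributions (e.g. via histograms or a two-sample statistic) rather than just their means.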
This whole process has been repeated over latent spaces of different dimensionality, finding hints that the Wasserstein GAN works best over a latent space of higher dimensionality, while the normalizing flow works best on smaller latent spaces.
File
File name | Size |
---|---|
Thesis not available for consultation. |