ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-09012007-221554


Thesis type
Master's degree thesis (laurea specialistica)
Author
Pellinacci, Marco
Email address
marco.pellinacci@alice.it
URN
etd-09012007-221554
Title
A Decentralized and Distributed IDS for Securing Robotic Multi-Agent Systems
Department
ENGINEERING
Degree programme
COMPUTER ENGINEERING
Supervisors
Supervisor Dini, Gianluca
Supervisor Prof. Bicchi, Antonio
Supervisor Ing. Fagiolini, Adriano
Keywords
  • reputation
  • intrusion detection system
  • trust
  • multi-agent system
  • MAS
  • decentralized
  • robotic
  • IDS
  • distributed
  • robot
  • consensus
Defense session start date
02/10/2007
Availability
Not available
Release date
02/10/2047
Abstract
This thesis addresses a security problem in cooperative systems consisting of teams of robotic agents. In our scenario, agents may be performing different independent tasks, but have to cooperate in order to guarantee the entire system's safety, according to a common set of rules. Our focus is to detect non-cooperative agents whose behavior may arbitrarily deviate from the rules, due to spontaneous failure or even tampering.

We consider systems where the cooperation rules are decentralized, i.e. they dictate actions that depend only on the configurations of neighboring agents. In this setting, we propose a distributed Intrusion Detection System (IDS) consisting of monitors embedded on board every agent. Such monitors are decentralized processes that use only locally available information, and through them every agent is able to measure the cooperativeness of its neighbors.
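
The abstract does not give the monitor's internals; purely as an illustration, a local monitor for a toy decentralized cooperation rule might look like the following Python sketch (the rule, names, and threshold are our own assumptions, not taken from the thesis):

    # Illustrative sketch only, not code from the thesis. A local monitor
    # checks a neighbor against a toy decentralized cooperation rule
    # ("brake when the gap to the agent ahead falls below a safety
    # distance") using only locally sensed quantities.

    SAFETY_GAP = 5.0  # hypothetical threshold, for illustration only

    def prescribed_braking(gap_ahead):
        # Decentralized rule: the prescribed action depends only on the
        # configuration of neighboring agents (here, the gap ahead).
        return gap_ahead < SAFETY_GAP

    def local_cooperativeness(observed_braking, gap_ahead):
        # 1.0 = behavior consistent with the rule; 0.0 = deviation,
        # possibly due to spontaneous failure or tampering.
        return 1.0 if observed_braking == prescribed_braking(gap_ahead) else 0.0

    # A neighbor that is 3 m behind the agent ahead and is not braking
    # is flagged as non-cooperative by this monitor.
    print(local_cooperativeness(observed_braking=False, gap_ahead=3.0))  # 0.0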

In this thesis, we introduce an agreement mechanism by which local monitors may share information, thereby overcoming the sensing limitations of any single monitor, and reach a unique decision on the cooperativeness of a given target agent. This mechanism is consensus-based and is formulated in terms of a generic update function by means of which the non-scalar estimates of different monitors can be combined. We provide conditions for consensus convergence and an upper bound on its transient duration.
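
The update function is left generic in the abstract; a minimal consensus sketch, assuming a connected communication graph and an element-wise OR as the combination rule (both assumptions are ours, chosen only to make the idea concrete), could look like this:

    # Illustrative sketch only, not the thesis algorithm. Each monitor holds
    # a non-scalar estimate (here, a boolean vector of pieces of evidence
    # about the target agent) and repeatedly combines it with its
    # neighbors' estimates until all monitors hold the same value.

    import numpy as np

    def combine(estimates):
        # Hypothetical update function: element-wise OR, so a monitor keeps
        # any evidence observed by at least one of its neighbors.
        return np.logical_or.reduce(estimates)

    def run_consensus(initial, neighbors, max_rounds=20):
        est = {m: v.copy() for m, v in initial.items()}
        for rounds in range(1, max_rounds + 1):
            est = {m: combine([est[m]] + [est[n] for n in neighbors[m]])
                   for m in est}
            values = list(est.values())
            if all(np.array_equal(values[0], v) for v in values[1:]):
                return est, rounds  # unique decision reached
        return est, max_rounds

    # Three monitors on a line graph 0-1-2, each seeing a different part of
    # the target's behavior; they agree within diam(graph) = 2 rounds.
    initial = {0: np.array([True, False, False]),
               1: np.array([False, True, False]),
               2: np.array([False, False, True])}
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    print(run_consensus(initial, neighbors))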

Furthermore, we address the still-open problem posed by monitors that may lie for personal gain and thereby lead the other monitors involved in the consensus protocol to misclassify a given target agent. As a first step toward trust establishment, we propose a trust mechanism that does not claim to be the final solution to the problem of establishing trust relationships among robotic agents with partial knowledge of the system's state; rather, our primary objective in this thesis is to gain a deep understanding of this difficult problem.
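
The trust mechanism itself is not described in the abstract; to make the problem concrete, here is a toy reputation-weighting scheme (the class, parameters, and update rule below are hypothetical and are not the mechanism proposed in the thesis):

    # Illustrative sketch only. Each monitor keeps a reputation value in
    # [0, 1] for every other monitor and discounts their reports
    # accordingly; reputation drops when a report contradicts the
    # monitor's own direct observation of the target.

    class TrustTable:
        def __init__(self, monitors, initial_trust=0.5, learning_rate=0.2):
            self.trust = {m: initial_trust for m in monitors}
            self.lr = learning_rate

        def update(self, reporter, reported_value, own_observation):
            # Agreement with direct evidence raises trust, disagreement lowers it.
            agreement = 1.0 if reported_value == own_observation else 0.0
            self.trust[reporter] += self.lr * (agreement - self.trust[reporter])

        def weighted_opinion(self, reports):
            # reports: dict reporter -> claimed cooperativeness in {0.0, 1.0}
            total = sum(self.trust[r] for r in reports) or 1.0
            return sum(self.trust[r] * v for r, v in reports.items()) / total

    # A monitor that directly observed a non-cooperative target (0.0)
    # gradually distrusts a monitor that keeps reporting 1.0 for it.
    table = TrustTable(monitors=["m1", "m2"])
    for _ in range(5):
        table.update("m1", reported_value=1.0, own_observation=0.0)
        table.update("m2", reported_value=0.0, own_observation=0.0)
    print(table.trust, table.weighted_opinion({"m1": 1.0, "m2": 0.0}))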

Lastly, the effectiveness of the proposed IDS is shown through the simulation of a case study.
File