Thesis etd-10022025-104757
Thesis type
Master's thesis
Author
BONFIGLIO, MARIA VITTORIA
URN
etd-10022025-104757
Title
An EEG Frequency-Tagging Approach to Measure Automatic Phonetic Categorizations
Department
BIOLOGY
Degree programme
NEUROSCIENCE
Supervisors
supervisor Prof. Bottari, Davide
supervisor Prof. Binda, Paola
Keywords
- articulation
- auditory
- automatic discrimination
- automatic phonetic categorization
- consonant discrimination
- consonant-vowel syllables
- consonants
- eeg
- electroencephalography
- Fourier transform
- frequency
- frequency spectrum
- frequency-tagging
- language
- manner of articulation
- oddball
- phonemes
- place of articulation
- sounds
- speaker
- speech perception
- steady state auditory evoked potentials
- syllables
- topographies
- vowels
- z-scores
Date of thesis defence
20/10/2025
Availability
Not available for consultation
Release date
20/10/2065
Abstract
Automatic discrimination is the brain's capacity to categorize multiple sensory inputs without the involvement of conscious intention. The canonical method for evaluating automatic discrimination is the mismatch negativity (MMN) paradigm, but we believe that other approaches can also reveal these ongoing preattentive mechanisms. For this purpose, we use the frequency-tagging technique, traditionally applied to visual discrimination and sound-source discrimination, and apply it, in an unprecedented way, to auditory discrimination in the context of language processing. Frequency tagging refers to the elicitation of steady-state evoked potentials, in this case of the auditory kind (SSAEPs). Steady-state evoked potentials arise from the periodic modulation of a single stimulus attribute. If the stimulus is perceived correctly, the brain exhibits frequency-specific entrainment to the periodic stimulation. Since these responses are confined to specific frequencies, they are more appropriately analysed in the frequency domain than in the time domain: the response spectrum shows narrowband peaks at frequencies directly related to the stimulus design. Ultimately, if automatic discrimination has taken place, we will observe significant amplitudes both at the base stimulation frequency (3.333 Hz) and at the deviant stimulation frequency (1.111 Hz), together with their harmonics.

In our study, brain activity will be recorded while adult participants (18 to 30 years old) listen to speech sounds (e.g., syllables with varying consonants) presented periodically; the task is passive. To measure the discriminative response, the speech sounds vary: every n presentations of a given "standard" syllable (e.g., ba), an "oddball" syllable with a different phoneme is presented (e.g., a consonant change, pa). Under this periodic stimulation, one specific frequency is associated with the standard stimuli and one with the oddball. If the brain can discriminate the two sounds, it will respond at both stimulation frequencies; if not, only at the standard one. Different consonant contrasts will be tested (p vs b, p vs t, p vs f). The decision to evaluate consonant categorization specifically is well motivated, as consonants are considered the most salient elements of the speech stream, in line with the theory of the "consonantal bias".

Having completed this phase, in which we aim to detect automatic discrimination across consonantal contrasts, our next step is to test the robustness of the method by introducing variability into the stimuli. Specifically, we will examine whether significant peaks at the deviant stimulation frequency (and its harmonics) can still be observed in the spectrum when the associated vowel is modulated and when multiple speakers utter the syllables. Based on the overall results, we also aim to propose a model offering insight into which modalities (motor/articulatory vs spectral/acoustic) the brain relies on most for consonant discrimination.
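To illustrate the kind of frequency-domain analysis described in the abstract, the following is a minimal Python sketch. All concrete parameters (sampling rate, recording duration, signal-to-noise levels) are assumptions for the simulation, not values from the study; it simulates entrainment at the 3.333 Hz base rate and the 1.111 Hz oddball rate, computes the amplitude spectrum with an FFT, and z-scores the amplitude at each tagged frequency against neighbouring bins, a common way to assess peak significance in frequency-tagging studies.

```python
import numpy as np

# Assumed parameters for the simulation (not from the study itself)
fs = 500.0            # sampling rate in Hz
base_f = 10.0 / 3     # 3.333... Hz: base syllable presentation rate
odd_f = base_f / 3    # 1.111... Hz: oddball rate (every 3rd syllable deviant)
dur = 90.0            # recording duration in seconds (integer number of cycles)
t = np.arange(0, dur, 1.0 / fs)

rng = np.random.default_rng(0)
# Toy EEG: entrainment at both tagged frequencies buried in noise
eeg = (1.0 * np.sin(2 * np.pi * base_f * t)
       + 0.4 * np.sin(2 * np.pi * odd_f * t)
       + rng.normal(0.0, 1.0, t.size))

# Amplitude spectrum via FFT (normalized so a unit sine has amplitude ~1)
amp = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def zscore_at(f_target, n_neigh=20, skip=2):
    """z-score of the amplitude at f_target relative to neighbouring bins,
    excluding the `skip` bins immediately adjacent to the target bin."""
    i = int(np.argmin(np.abs(freqs - f_target)))
    neigh = np.concatenate([amp[i - skip - n_neigh:i - skip],
                            amp[i + skip + 1:i + skip + 1 + n_neigh]])
    return (amp[i] - neigh.mean()) / neigh.std()

# Large z-scores at both tagged frequencies indicate "discrimination"
print("z at base rate:", zscore_at(base_f))
print("z at oddball rate:", zscore_at(odd_f))
```

Because the duration is an integer number of cycles of both tagged frequencies, each peak falls exactly on an FFT bin; in real recordings, epoch length is typically chosen with the same constraint in mind.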
File
| File name | Size |
|---|---|
| The thesis is not available for consultation. | |