Lecture titled: Brain’s music perception and our music-to-image dreaming project: deepsing

  • This event has passed.

20/12/2019 | 19:00 - 22:00

Join us as we explore the exciting interplay between music, neuroscience, cognition and deep learning at the last meetup of this year, on Friday 20/12 at 19:00 at OK!Thess.

Agenda:

19:15: Predicting music preferences through mind-reading: connecting the dots between neuroscience, machine learning and innovation, Dr. Dimitrios Adamos

20:15: Seeing music using deepsing: Creating machine-generated visual stories of songs, Nikolaos Passalis and Stavros Doropoulos

21:00: Networking and socializing

‘Predicting music preferences through mind-reading: connecting the dots between neuroscience, machine learning and innovation’ – Dr. Dimitrios Adamos

Abstract: What exactly happens in our brain when we enjoy a song? Can we use mobile neuroimaging to predict our favourite music? Is it feasible to build computational models of our musical taste at the population level?

This talk will give an overview of research efforts in decoding the listener’s brain dynamics to identify signatures of aesthetic evaluation, mined from wearable EEG recordings. A technology demonstrator built for the media campaign of Norway’s largest mobile network operator, featuring famous Norwegian artists, will be presented. These efforts build upon recent empirical evidence that music-induced pleasure is associated with increased functional connectivity and richer network organization in the human brain. Accordingly, graph-based representations of the brain as a complex networked system will be shown to enable robust “mind-reading” of the listener. The challenges of using modern machine learning tools to train deep learning models on such EEG signals will also be discussed. Finally, I will present my current work, in collaboration with London’s Science Museum, leading the first large-scale recruitment of volunteers to collect human brainwaves during music listening at the population level.
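As a rough illustration of the kind of graph-based EEG representation mentioned in the abstract (a minimal sketch, not the speaker’s actual pipeline), the snippet below correlates EEG channels into a functional connectivity matrix, thresholds it into an adjacency matrix, and extracts simple network features that could feed a downstream classifier. The channel count, synthetic data, and threshold are placeholder assumptions.

```python
# Minimal sketch: multi-channel EEG -> functional connectivity graph -> network features.
# Synthetic data and an arbitrary threshold; for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 14, 1280                 # e.g. a 14-channel wearable EEG, 10 s at 128 Hz
eeg = rng.standard_normal((n_channels, n_samples))

# Functional connectivity: pairwise Pearson correlation between channels.
connectivity = np.corrcoef(eeg)                  # shape (14, 14)

# Binarise into an adjacency matrix with an (arbitrary) threshold.
adjacency = (np.abs(connectivity) > 0.2).astype(int)
np.fill_diagonal(adjacency, 0)

# Simple graph features that could be fed to a classifier of listener state.
degrees = adjacency.sum(axis=1)                  # per-channel degree
density = adjacency.sum() / (n_channels * (n_channels - 1))
print(degrees, round(float(density), 3))
```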

‘Seeing music using deepsing: Creating machine-generated visual stories of songs’ – Nikolaos Passalis (Postdoctoral Researcher, AUTh) and Stavros Doropoulos (CIO, DataScouting)

Abstract: Can machines feel? Is music perception a solely human ability? Are machines creative? Can they express their “feelings”? These are some of the questions that naturally arise from the artificial intelligence revolution we are currently going through. In this talk, we will discuss these issues and present a novel method, deepsing, which brings us closer to machines that can feel and express themselves. deepsing was born to materialize our idea of translating audio to images, inspired by Futurama’s Holophonor. In this way, deepsing is able to autonomously generate visual stories that convey the emotions expressed in songs. We will briefly present the technology and the methodological advances needed to realize deepsing, and generate visual stories for several well-known songs using only neural networks!
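As a loose, hypothetical sketch of the audio-to-image idea described in the abstract (not the actual deepsing method), the snippet below splits a song into segments, assigns each a stand-in valence/arousal estimate, and maps those values to scene labels that an image generator could then render. The sentiment model, the label mapping, and the fake audio are all stubs invented for illustration.

```python
# Toy sketch: audio -> per-segment sentiment -> sequence of visual concepts.
# The sentiment model and concept mapping are placeholders, not deepsing itself.
import numpy as np

def predict_sentiment(segment: np.ndarray) -> tuple[float, float]:
    # Stand-in for a trained audio sentiment model returning (valence, arousal) in [-1, 1].
    return float(np.tanh(segment.mean())), float(np.tanh(segment.std() - 1.0))

def sentiment_to_concept(valence: float, arousal: float) -> str:
    # Very coarse, hand-made mapping from the valence/arousal plane to a scene label.
    if valence >= 0:
        return "sunny festival crowd" if arousal >= 0 else "calm beach at sunset"
    return "storm over the city" if arousal >= 0 else "empty rainy street"

rng = np.random.default_rng(0)
song = rng.standard_normal(22050 * 30)      # 30 s of placeholder audio at 22.05 kHz
segments = np.array_split(song, 6)          # one "scene" per ~5-second segment

story = [sentiment_to_concept(*predict_sentiment(s)) for s in segments]
print(story)                                # the sequence of scenes for the visual story
```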

Details

Date: 20/12/2019
Time: 19:00 - 22:00

Venue

OK!Thess
Komotinis 2
Thessaloniki, 54655
