Machine Learning for music and art generation: Music Tech Community India meetup, Bengaluru

    We are at a special age of technology where the ease of creating, distributing and collaborating has led to tremendous creative minds emerging from their living rooms. Are you a musician, developer, artist, producer, designer, researcher, or none of these but deeply motivated by the intersection of music and technology? Then this is for you! Please come and say hi.

    In this edition of the music tech meetup, the following speakers will deliver talks about their personal experience building Machine Learning based art and music generation systems in industrial, academic and artistic spaces. The speakers will also demonstrate some examples of recent developments in the field of AI-based art/music generation. It is an exciting time for new media artists, with tools like machine learning and programming seamlessly available to the creative minds of today. Just as the intelligent brushes of Photoshop augmented the abilities of the traditional painter, music now has a whole world of new possibilities.

    Programme 

    * 16.00h - Networking

    * 16.10h - Introduction

    * 16.15h - Talk by Harshit on "Making art using AI: The evolution of cyborg artist"

    * 16.45h - Talk by Srikanth on "Music generation using ML at Jukedeck" (Live)

    * 17.20h - Combined talk by Albin and Manaswi on "Intelligent music production"

    * 17.40h - Networking

    * 18.00h onwards - Possibly continue our discussions at a bar or cafe for those who are interested


    About Speakers


    Harshit Agarwal - http://harshitagrawal.com

    Harshit is a new media artist and human-computer interaction (HCI) researcher. Through his artwork, he creates experiences for people to explore and express themselves with seemingly distant technologies like artificial intelligence/machine learning, drones, digital fabrication, sensors and augmented reality, and in the process invites people to reflect upon and re-evaluate their relationship with technology. Often, these artworks are tools to study how technology can blend with and enhance human creative expression. Much of his work focuses on the interplay between human and machine imaginations and intentions, spanning virtual and physical embodiments.

    Harshit is a graduate of the Fluid Interfaces group at MIT Media Lab and the Indian Institute of Technology (IIT Guwahati). He has carried out art residencies at various places to develop his practice in diverse cultural contexts, including at the Art Center Nabi (Seoul), Museum of Tomorrow (Rio de Janeiro) and Kakehi-Lab (Tokyo/Yokohama). His works have been exhibited at premier art festivals and museums around the world, such as the Ars Electronica Festival, Tate Modern, Asia Culture Center (at the Otherly Spaces/Knowledge exhibition curated by Kazunao Abe-san), QUT Art Museum (Why the Future Still Needs Us exhibition), Museum of Tomorrow, Alt-AI (at the School For Poetic Computation, NYC), Art Center Nabi, Laval Virtual, BeFantastic Festival (Bangalore, India) and ISEA. His works have also been extensively covered in international media. Along with this, he has published several research papers on creation tools at human-computer interaction conferences, including SIGGRAPH, UIST, UbiComp, TEI, IUI and IDC.


    Srikanth Cherla - https://cherla.org

    Srikanth is a Machine Learning Researcher at Jukedeck where he contributes to the design and development of an ingenious AI music composer which employs a range of computational techniques to automatically generate music of different moods and styles. He was awarded a doctorate degree (PhD) in Computer Science in July 2016 by City, University of London under the supervision of Artur Garcez and Tillman Weyde. His research involved the development of novel Neural Network based Machine Learning models, as well as the use of existing ones to learn temporal patterns in musical scores and also to classify non-musical data. He received a master's degree (MSc) from the Music Technology Group at Universitat Pompeu Fabra and holds a bachelor's degree (B.Tech.) in Computer Science and Engineering from the International Institute of Information Technology - Hyderabad.

    Srikanth has previously worked at Siemens Corporate Technology - India as a Research Engineer (2007-10), on human action recognition in video and event detection in environmental audio, among other video and audio analysis topics. He was a Research Assistant (2011-2012) at the Technologies for Acoustics and Audio Processing (TAAP) lab at Simon Fraser University, where he worked on digital waveguide synthesis techniques for the tenor saxophone. He also did a brief internship at PMC Technologies (2011), during which he assisted with work on regression methods for failure prediction in manufacturing units in the semiconductor industry.

    He enjoys playing the guitar and has been playing mostly rock and heavy metal music for several years now as a hobby. He also holds a Grade 6 certification in Electric Guitar awarded by Rock School.


    Manaswi Mishra - https://manaswimishra.com

    Manaswi Mishra is a music technology researcher currently exploring Music Information Retrieval techniques for augmented learning of musical instruments (IITB). As a graduate student at the Music Technology Group, Barcelona, he researches data-driven methods for generating new timbres/textures of sounds. He has spent a year at the Center for Computer Research in Music and Acoustics, Stanford, and has also worked as a researcher at Shazam (CA) and AdoriLabs (Bangalore). With an undergraduate degree in Engineering Physics from IITM, his interests range from physical modelling of sounds and numerical synthesis to human-computer interaction, signal processing and computational creativity. Manaswi is also an active musician with various audio-visual projects blending deep learning, creative coding and the arts.


    Albin Correya - [website]

    Based in Barcelona, Spain, Albin is an interdisciplinary researcher who works at the intersection of music and technology. His personal research interests centre on applying knowledge from Audio Signal Processing, Music Information Retrieval, Machine Learning, Natural Language Processing and Human-Computer Interaction to audio and music production environments.

    Albin is currently working as a research engineer at the Music Technology Group, Barcelona, where he investigates and develops algorithms for the automatic identification of cover song versions in collaboration with the German music start-up Flits. He has previously worked at the French music streaming giant Deezer at their Paris HQ. He holds an M.Sc. degree in Sound & Music Computing from Universitat Pompeu Fabra, Barcelona, and a Bachelor's degree in Computer Science from Mahatma Gandhi University, Kerala. His work has also been featured at various international music tech conferences and hackathons, such as Sonar+D 2017 (Barcelona), Ableton Loop 2017 (Berlin), HAMR@ISMIR 2018 (Paris), BnF Hackday (Paris) and Music Hackathon Bulgaria (Sofia). He is also an active music producer and multi-instrumentalist. His compositions have been featured in award-winning documentaries and movies (IMDB). He is a great fan of open science and actively contributes to various open community initiatives around the world.


    Tickets

    Free Entry!

    Please RSVP so we can arrange the required facilities.


    For any queries,

    +91 8197238177
    Sid


    Cheers,

    Music Tech Community India

    https://musictechcommunityindia.wordpress.com/

    Location

    91 Springboard, Koramangala
    Bengaluru

    Dates

    From 29th December 2018 - 04:00 PM
    to 29th December 2018 - 06:00 PM