I am an interdisciplinary engineer with a major in music production, interested in projects that combine art and science. One of the topics I am most curious about is the use of EEG devices for artistic purposes.
I picture a future in which developments in Cognitive Science, Art and Artificial Intelligence drive new interfaces that extend human perception, learning and creativity. I hope to be part of the democratisation of creativity through technology.
Contemporary Guitar Ensemble (CGE) collaborated with me on my final degree project, which consisted of producing a multichannel concert for 10 guitarists and recording it with a binaural technique (Neumann KU 100 dummy head).
You may wonder, looking at the picture, why we arranged the chairs in concentric circles. Let me explain: the guitar ensemble follows the tradition and technique of Guitar Craft, the school of Robert Fripp. By surrounding the audience they explore the potential of space as an additional dimension of musical expressivity, generating movement through the performance by playing with phase differences and timbral variations.
Finally, it is worth mentioning that I also planned to use a 360º camera to create an immersive VR musical experience, but unfortunately I did not get the funding to do so.
This was a game I designed and programmed, inspired by the original Twister game from Milton Bradley. The game mechanics are basically the same; the difference is that every combination of pads triggers different loops (samples) of music while you play. The audio is controlled by the DAW Ableton Live, while the user interface and game engine were programmed in Processing (Java). The communication between Ableton and Processing is based on the OSC protocol.
The game pads were built from aluminium foil and EVA foam; connected to the Makey Makey device, they work as keyboard keys when pressed.
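To give an idea of how a pad press becomes music, here is a minimal Python sketch using the python-osc library. This is not the original Processing code: the key bindings, the OSC address and the port are assumptions, and Ableton Live needs an OSC bridge (e.g. a Max for Live device) to receive the messages.

```python
# Minimal sketch of the pad-to-loop mapping (hypothetical names throughout).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed OSC bridge into Ableton

# Thanks to the Makey Makey, each pad registers as an ordinary key press.
PAD_KEYS = {"w": 0, "a": 1, "s": 2, "d": 3}  # hypothetical key -> pad index

def on_key_pressed(key):
    """Send an OSC message that fires the loop assigned to this pad."""
    pad = PAD_KEYS.get(key)
    if pad is not None:
        client.send_message("/pad/trigger", pad)  # address is an assumption

on_key_pressed("a")  # simulate stepping on pad 1
```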
This was the final project of the Audio Programming class. I thought it would be interesting to implement gestures for controlling the parameters of a granular synth using the Kinect. The synth was programmed from scratch in the Max visual programming language, and the communication between Max and the Kinect was achieved with Synapse (an app that sends OSC messages to Max).
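For illustration, here is a minimal Python sketch of this kind of gesture mapping, assuming Synapse's convention of streaming joint positions as OSC messages (e.g. /righthand with x, y, z values) on port 12345. The value ranges and parameter mappings are assumptions; the real mapping lives in the Max patch.

```python
# Hypothetical sketch: receive Kinect joint positions and map them to
# granular-synth parameters.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def right_hand(address, x, y, z):
    # Map hand height to grain size and horizontal position to playback rate.
    grain_ms = 20 + min(max((y + 500) / 1000, 0.0), 1.0) * 180
    rate = min(max((x + 500) / 1000, 0.0), 1.0) * 2
    print(f"grain = {grain_ms:.0f} ms, rate = {rate:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/righthand", right_hand)  # address per Synapse's convention
BlockingOSCUDPServer(("127.0.0.1", 12345), dispatcher).serve_forever()
```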
I love collaborating with artists from different fields. This time I had the opportunity to help Pablo Martínez Zárate, a professional filmmaker and friend.
This project is a visual-poetic exploration that seeks to cross the poetic experience with a graphic interface. It reflects on the landscape tradition in Mexican painting and brings it into the present through an immersive-poetry proposal. From the technical point of view, the project is based on the Myo device, which serves as a controller for navigating a landscape that contains poems at various key points. With the cursor, the audience can invoke the poems by hovering over the markers. The music was composed by Pablo using elements from the Mexican soundscape.
After a course on Cinder for artists with Thomas Sánchez Lengeling at CNART, I began experimenting with fractals and mathematical formulas for generating visual patterns. The following pictures show some of the results of my experiments with different programming languages. One of my passions is using concepts from Complex Systems theory and Artificial Life to study the creative process and generate art (Generative Art).
Depth First Search Labyrinth (Processing)
Shape Grammars (Structure Synth)
Modular Math Art (Processing)
Inspired by Conway's Game of Life, I developed a music sequencer that plays notes whenever a cell is born inside a trigger (squares marked by colors). The notes are selected from a pentatonic scale to avoid dissonance.
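The logic can be sketched in a few lines of Python (a toy version, not the original code; the grid size, trigger positions and C major pentatonic notes are assumptions):

```python
# Game of Life sequencer sketch: a note fires when a cell is born in a trigger.
import random

PENTATONIC = [60, 62, 64, 67, 69]  # C D E G A (MIDI note numbers)
SIZE = 16
TRIGGERS = {i: (i * 3, 8) for i in range(5)}  # note index -> trigger cell

def step(grid):
    """One Game of Life generation; returns the new grid and the born cells."""
    new, born = set(), set()
    for x in range(SIZE):
        for y in range(SIZE):
            n = sum((nx, ny) in grid
                    for nx in (x - 1, x, x + 1)
                    for ny in (y - 1, y, y + 1)
                    if (nx, ny) != (x, y))
            if (x, y) in grid and n in (2, 3):
                new.add((x, y))
            elif (x, y) not in grid and n == 3:
                new.add((x, y))
                born.add((x, y))
    return new, born

grid = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(60)}
for _ in range(8):
    grid, born = step(grid)
    notes = [PENTATONIC[i] for i, cell in TRIGGERS.items() if cell in born]
    print("play:", notes)
```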
I am currently developing this work with Pablo Martínez Zárate. One of our objectives is to reflect on the psychological effects of noise in Mexico City. So far we have built a Max application that distorts a video using its own audio signal; in this way we can represent the effect of noise visually. We look forward to developing a mobile app that lets people share their own processed videos live, as well as a website for sharing the results.
This project came out of an internship with Arturia, a company specialized in the development of music software and hardware. I am currently in charge of developing open-source software for using the Raspberry Pi (and other development boards, such as the BeagleBone + Bela) as an electronic musical instrument together with the Beatstep Pro (a MIDI sequencer) and other Arturia products. We expect to release the code in 2017.
Programmed with Pure Data.
A set of samples is loaded onto an SD card together with the Pure Data patch containing the sequencer. The Raspberry Pi (running Debian and connected to a DAC via I2S) is configured to launch the patch at startup in headless mode. Because the system must work in real time, I had to kill all unnecessary services consuming extra memory. Once the Beatstep Pro is connected to the Raspberry Pi via USB, MIDI communication is established and you can start playing. New samples can be added in real time by uploading them to a USB drive (or directly to the SD card). Effect presets can be saved and shared. The system has a playback latency of 5 ms. I am currently porting the code to C++ for the Bela platform, where latency is under 2 ms.
Programmed by Santiago Rentería and Miguel Moreno.
Test screen for checking if the patch is receiving input MIDI data from the Beatstep Pro.
Programming blocks for each of the 16 samples of the sequencer/drum machine.
ctlin objects route the MIDI control messages coming from the Beatstep Pro knobs. The playback speed and the start and end times of each sample can be configured.
You can apply effects (Filters, Bitcrusher, Delay, etc.) to each sample individually.
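As a rough illustration of this routing (the real implementation is the Pure Data patch), here is a hypothetical Python sketch using the mido library; the pad note numbers and knob CC assignments are assumptions:

```python
# Hypothetical sketch of the MIDI routing: pads trigger samples,
# knobs set playback parameters.
import mido

SAMPLES = {36 + i: f"sample_{i:02d}.wav" for i in range(16)}  # pad note -> file
params = {"speed": 1.0, "start": 0.0, "end": 1.0}

with mido.open_input() as port:          # Beatstep Pro over USB MIDI
    for msg in port:
        if msg.type == "note_on" and msg.note in SAMPLES:
            print("trigger", SAMPLES[msg.note], params)
        elif msg.type == "control_change":
            if msg.control == 10:        # knob CC numbers are assumptions
                params["speed"] = 0.25 + (msg.value / 127) * 3.75
            elif msg.control == 11:
                params["start"] = msg.value / 127
```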
As part of an elective course on Music Therapy, my team and I developed a research project on the effects of binaural audio. We had the opportunity to present our results at CIM14 (a conference on Interdisciplinary Musicology) in Berlin. During this project I had my first contact with the experimental and research methods of science. It is also important to mention that the technology we used for the EEG measurements was quite limited (NeuroSky MindWave), but the research constituted an important step towards preparing a serious scientific paper.
NOTE: The image shows a Max patch I developed for retrieving the signals from the MindWave EEG device for later analysis.
A short explanation and comparison of the software I developed (in Max/MSP) for recording EEG signals from the NeuroSky MindWave device.
As part of NOVUS 2015, a call for experimentation in educational innovation, I received funding to teach a three-module course on Machine Learning applied to the digital arts. We explored the philosophical implications of AI in art, as well as the technical aspects of applying Machine Learning to Music and Interactive Art. The funding was used to buy various interfaces (Myo, Leap Motion, Kinect and Arduino) and Emotiv EEG sensors.
During the NOVUS 2015 project, I had the opportunity to work with an interdisciplinary team on the development of a Brain-Computer Interface. It consisted of controlling a 3D-printed arm, via servo motors, to play the piano. The team was made up of students from Systems Engineering, Mechatronics and Music Production Engineering.
NOTE: The robotic arm cannot yet move horizontally across the keyboard. Finger movement is achieved via facial gestures. We look forward to implementing more control gestures to extend expressivity.
This section contains various Digital Signal Processing algorithms implemented in the functional audio language Faust.
An experimental effects processor for playing with two pitch-shifted signals that pan periodically.
Code: Click here
Implementation of formant synthesis for generating human-like vocal sounds.
Code: Click here
A guitar effect with distortion and a slicer based on the interference of two pulse trains.
Code: Click here
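To illustrate the slicer idea behind the last effect, here is a sketch in Python rather than Faust: multiplying two pulse trains of slightly different rates produces an interference pattern that gates the input signal. The rates and duty cycle are arbitrary.

```python
# Pulse-train interference slicer sketch (illustrative, not the Faust code).
import numpy as np

SR = 44100
t = np.arange(SR * 2) / SR                      # two seconds of time axis

def pulse_train(freq, duty=0.5):
    """A square pulse train: 1.0 during the duty fraction of each cycle."""
    return ((t * freq) % 1.0 < duty).astype(float)

gate = pulse_train(4.0) * pulse_train(5.0)      # interfering pulse trains
guitar = np.sin(2 * np.pi * 196.0 * t)          # stand-in for the guitar input
sliced = guitar * gate                          # rhythmically gated output
```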
I was invited by professor Eric Pérez Segura to work on the development of a real-time music transcription system for contemporary music. The work is still ongoing, but here I show the latest advances.
Generate hybrid graphic/conventional scores in real time
Develop specific algorithms for detecting musical patterns (e.g. trills, clusters, etc.)
Create a new system for composition/improvisation
We took Eric's notation and scores as a guide and defined a set of musical patterns to recognize algorithmically. Currently we are able to recognize trills and chords; in the future we aim to render these results in real time as graphical notation.
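As a toy illustration of this kind of pattern matching (the actual detection algorithms are still under development), a hypothetical Python sketch that flags a trill in a stream of detected pitches could look like this:

```python
# Toy trill detector over a stream of detected pitches (MIDI numbers),
# one per analysis frame. Thresholds are arbitrary assumptions.
def is_trill(pitches, min_alternations=4, max_interval=2):
    """A trill: rapid alternation between two pitches <= a whole tone apart."""
    if len(set(pitches)) != 2:
        return False
    a, b = sorted(set(pitches))
    if b - a > max_interval:
        return False
    alternations = sum(p != q for p, q in zip(pitches, pitches[1:]))
    return alternations >= min_alternations

print(is_trill([72, 74, 72, 74, 72, 74]))  # True: C5-D5 trill
print(is_trill([60, 64, 60, 64, 60, 64]))  # False: interval too wide
```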
Here I showcase my work on sound art, contemporary music and algorithmic music. All works are original (unless noted otherwise) and were recorded during the period 2014-2017.
Computer-generated improvisation based on water-drop samples and the Karplus-Strong physical string model. Programmed with Max. The algorithm allows controlling the speed and probability of events (water drops and string plucks) over time.
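For reference, here is a minimal Python sketch of the Karplus-Strong algorithm itself (the piece was programmed in Max, and the parameters below are only illustrative): a noise burst circulates through a delay line whose averaged feedback produces the decaying plucked-string tone.

```python
# Minimal Karplus-Strong pluck (sketch, not the original Max patch).
import numpy as np

def pluck(freq, duration=1.0, sr=44100, damping=0.996):
    n = int(sr / freq)                    # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, n)     # noise burst excitation
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Two-point average with damping: the string's lowpassed feedback.
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

note = pluck(196.0)  # a plucked G3
```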
During a workshop taught by Jeff Treviño at Tecnologico de Monterrey, I explored the expressive possibilities of AudioGuide, a program for concatenative synthesis that analyzes databases of sound segments and arranges them to follow a target sound according to audio descriptors.
Target sound: Polyrhythmic pattern
Target sound: Harder Better Faster (Daft Punk)
Corpus: Technologic (Daft Punk)
In a MOOC taught by Ajay Kapur I learned to program generative music using ChucK. Here are the results of my first project: a random song using polyrhythms based on the Egyptian scale C D Eb F# G Ab B C.
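The piece itself is in ChucK, but the underlying idea can be sketched in Python. The 3-against-4 rhythm, tempo and note choices below are assumptions for illustration:

```python
# Polyrhythm sketch: two voices at 3 and 4 events per bar, random notes
# drawn from the Egyptian scale above.
import random

SCALE = [60, 62, 63, 66, 67, 68, 71, 72]  # C D Eb F# G Ab B C (MIDI)
BAR = 2.0                                  # seconds per bar (assumed tempo)

def voice(events_per_bar, bars=2):
    step = BAR / events_per_bar
    return [(round(b * BAR + k * step, 3), random.choice(SCALE))
            for b in range(bars) for k in range(events_per_bar)]

events = sorted(voice(3) + voice(4))       # 3-against-4 polyrhythm
for time, note in events:
    print(f"{time:5.3f}s  note {note}")
```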
Visuals generated by transforming a Mandelbrot set with a recursive zoom in Adobe After Effects and Max 7 Vizzie. Sounds synthesized with Arturia's Modular V.
Here I showcase my work on popular music. All works are original compositions or arrangements (unless noted otherwise) and were recorded during the period 2011-2017.
Midterm popular music project. Recorded with Logic virtual instruments.
Midterm orchestration project. Recorded with Kontakt.
Chiptune musical experiment with Reason 6.
Dance/Chiptune musical experiment with Pro Tools virtual instruments (AIR Boom & Xpand!2).
Dubstep/Chiptune musical experiment with Reason 6.
One of my first piano compositions. Recorded in Logic.
All guitar and bass parts were recorded and composed by me. Drums were programmed using virtual instruments in Logic and Ableton.
Just a love song.
Mellow guitar ballad.
A song I composed during a trip to Cuernavaca.
One of my first guitar compositions.
Free improvisation (during a guitar class) for practicing harmonic minor and pentatonic scales. Original music by Santana.
During a music technology hackathon (a six-hour programming contest) I developed, together with two friends, a Pure Data program for sonifying emotions through facial expressions. For this purpose we used Affectiva, a facial recognition API driven by Machine Learning. For more information please watch the video, and don't forget to turn on the English subtitles.
We got 1st place and won an Arturia MiniBrute analog synthesizer.
Thanks to the class of Music Production and Recording Techniques I had the opportunity to collaborate with various musicians and learn from Juan Switalski, a friend and talented recording engineer.
Recording and mixing of Escuela Superior de Música Orchestra at Centro Cultural Coyoacanense.
Decca Tree and AB Recording Techniques
AKG 414 Mic
Juan Switalski and some colleagues
"The artist can decide either to dominate the machine or give it the ability to talk in his language".
How has the relationship between the artist and the machine changed artistic expression? How can machines become a new kind of medium? Can machines enhance aesthetic awareness?
During a workshop at Centro Nacional de las Artes, José Manuel Ruiz invited us to reflect on these questions by exploring the concept of media, both theoretically and experimentally, by developing an artwork with the machines at the media lab.
After playing with the photocopier I realized I could use it as a medium for expressing the "transduction of the primitive": a human process of symbolic exploration that began with the first handprint in a cave and continued through time with the use of technology in the automatic reproduction, replication and transformation of symbols. Below are the results.
How do we inhabit and talk through the media?
What marks do media leave on us?
"Media are means of extending and enlarging our organic sense lives into our environment". (McLuhan)
During the Music and Psychoacoustics class, my team and I studied the impact of melodic components on memory. For this purpose we conducted two experiments: the first studied the effect of a melody's presence on short-term memory by means of sequences of sung and spoken letters; the second used two groups of sung melodies (structured and chaotic) to study how melodic and rhythmic features affect memorization. Both experiments were carried out with students between 12 and 16 years old. The following video explains the methodology and results in detail.
Inspired by the concept of metacreation (the idea of endowing machines with creative behavior), I began exploring complex systems by taking a MOOC on Agent-Based Modeling (ABM). This field studies how systems composed of multiple individual elements (agents) interact with each other and give rise to aggregate (emergent) properties that are generally not predictable from the elements themselves. In other words, in complex systems order can emerge without any design or designer (self-organization).
Even though these kinds of systems had previously been studied using equation-based models and fractal theory, Agent-Based Modeling makes the underlying mechanisms of complex systems explicit, in a way that can be understood even by young children. In 1980 Papert supported this idea by describing the turtle agent (from the Logo language) as a "body-syntonic" object: in order to figure out what commands to give the turtle, users could project themselves into it and imagine what they would do with their own bodies to achieve the desired effect. In this way ABMs are an intuitive representation of complex phenomena that are otherwise difficult to apprehend.
The model I developed as a final project for the MOOC addresses the phenomenon of ideology diffusion: How do people decide which political viewpoint to adopt? Are people surrounded by those who share their political viewpoint less likely to change theirs? What is the minimum number of influencers required for a person to change their political viewpoint (peer pressure)? And in a system with multiple competing ideologies, does the system reach a static balance (ideological segregation), or is a dynamic equilibrium established, with agents that continually change their ideology? Can we relate the dynamics of this model to cultural segregation?
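As a flavor of the mechanism, here is a minimal Python sketch of the peer-pressure rule, assuming agents on a toroidal grid that adopt the majority ideology among their eight neighbours once a threshold of influencers is reached. The actual NetLogo model has its own rules and parameters.

```python
# Toy ideology-diffusion model: agents switch under peer pressure.
import random
from collections import Counter

SIZE, IDEOLOGIES, THRESHOLD = 20, 3, 5   # grid side, viewpoints, influencers
grid = [[random.randrange(IDEOLOGIES) for _ in range(SIZE)]
        for _ in range(SIZE)]

def step():
    snapshot = [row[:] for row in grid]
    for x in range(SIZE):
        for y in range(SIZE):
            votes = Counter(snapshot[(x + dx) % SIZE][(y + dy) % SIZE]
                            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                            if (dx, dy) != (0, 0))
            ideology, count = votes.most_common(1)[0]
            if count >= THRESHOLD:       # enough influencers: switch
                grid[x][y] = ideology

for _ in range(50):
    step()
print(Counter(c for row in grid for c in row))  # final distribution
```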
I invite you to try the model and draw your own conclusions. You can download it here. Remember to install NetLogo first.
Spacetime, a multidisciplinary architecture and design agency, invited me to collaborate at the Bahidorá Festival with two art installations.
The first one, named Colormancy, involved an interplay between code, divination and chance. By mapping people's names to particular colors, custom ambiences were created out of light and smoke; finally, a quote assembled from fragments of song lyrics was shown on a screen as a personal fortune. A Raspberry Pi and a touchscreen controlled the whole interaction: receiving users' input and automating multicolor LEDs and smoke machines via DMX and electronic relays.
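The festival code is not public, but the name-to-color mapping could look something like this hypothetical Python sketch, which hashes a visitor's name into a stable hue for the lights:

```python
# Hypothetical name-to-color mapping: hash the name, derive a hue.
import colorsys
import hashlib

def name_to_rgb(name):
    digest = hashlib.sha256(name.strip().lower().encode()).digest()
    hue = digest[0] / 255                        # stable hue per name
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return tuple(int(c * 255) for c in (r, g, b))

print(name_to_rgb("Santiago"))  # RGB values to send to the DMX LEDs
```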
The second installation used no electronics but played with basic shapes (lines and points) translated in space and time.
Migrante is a stage performance that describes the journey of a character (who could be you) in six stages. Migration, from a metaphorical point of view, is a search in which all human beings are immersed, whether out of need or will. From this phenomenon emerges a sense of migration born of the awareness of change, one that allows us to question our mission in life as we assume we are free to chart our own course and write a personal (his)story.
The project was directed by Bernardo Rubinstein with the goal of generating an interdiscipline of dance, emotions and sound. While its nature remained physical, it also involved theoretical reflection on the concept of migration and its historical meaning.
I consider Migrante one of the most challenging experiences of my life, because it involved embracing my body as a canvas for emotional expression, a task not very usual for a musician.
If you want to read the essay I wrote for the performance, please click here.
El cuerpo es un archivo is a 360º video montage with multichannel sound inspired by the Mexican Movement of 1968. It is part of the permanent exhibition at the Centro Cultural Universitario Tlatelolco in Mexico City. During the performance, dancers interact with historical photographs; in response, a dance unfolds that leaves the viewer immersed and invites the audience to engage with the archive (of the Tlatelolco Massacre).
The project was directed by Pablo Martínez Zárate and involved an interdisciplinary team. My job was to set up the multichannel audio and map a 360º video onto a cylindrical screen using six short-throw projectors. Below you can see the process of wrapping the image onto the curved surface with the MadMapper software.
To most of us birdsong is background, part of nature's soundtrack, but if we are to reveal one of the greatest secrets of evolution we should look closer to see what kinds of tricks are up these birds' sleeves. Birdsong is not unusual; we can find birds almost everywhere in the world, but the ability to sing is actually quite rare in evolutionary terms. Passeriformes, also known as perching birds, display immense variability in their songs, ranging from simple repeated trills to intricate non-random sequences. While studying birdsong has proven challenging, given technical aspects of signal analysis (e.g. phrase segmentation) and environmental stochasticity, it has made important contributions to areas such as neurobiology, ecology, ethology and evolutionary biology.
This is an ongoing project and part of my master's thesis in Computational Science at Tecnológico de Monterrey. I am collaborating with a team led by Charles Taylor, Martin Cody and Edgar Vallejo (my advisor). The general objective is to develop an unsupervised Long Short-Term Memory (LSTM) model to analyze birdsong recordings. For more information please watch the following video:
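To give a sense of the kind of model involved, here is a hypothetical PyTorch sketch of an LSTM autoencoder over mel-spectrogram frames, trained without labels by reconstruction. The thesis architecture may well differ; the dimensions here are placeholders.

```python
# Unsupervised LSTM autoencoder sketch for birdsong spectrograms.
import torch
import torch.nn as nn

class SongAutoencoder(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, x):                  # x: (batch, frames, n_mels)
        _, (h, _) = self.encoder(x)        # h summarizes the whole song
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)
        y, _ = self.decoder(z)
        return self.out(y)                 # reconstructed spectrogram

model = SongAutoencoder()
frames = torch.randn(8, 100, 64)           # a batch of mel-spectrograms
loss = nn.functional.mse_loss(model(frames), frames)  # reconstruction loss
```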