12-Week Time Frame
Maestro is a natural Augmented Reality interface that removes the barrier between people and music creation.
Music began as a simple form of human expression: people made music simply for the act of making it. Music was live and had to be memorized or improvised.
Now music can be written down and played: different people can play the same song without having to be taught it directly. The lifespan of songs has increased.
The ability to record songs changed the entire music market: suddenly everyone could listen to music, yet very few people make it. The complexity of music creation begins to rise.
Everything is digital, and people have a level of control over sound that had never been seen before. Yet technology has created a barrier for those who only want to make music: moving squares on a screen is not the most intuitive way of doing it.
Technology stands between people and music creation, and is an obstacle to musical innovation.
...So, what's next?
Immersive visualization and interactions can drastically enhance and naturalize the music creation process.
The process of making a song
Drawn from inspiration and experience, the idea is the base on which to build a song.
When the idea takes shape, beats and new instruments are added to complete the song.
An audio engineer makes everything fit together with no audio interference.
To make the song fuller, the engineer enhances the general feel of the song.
Kinds of People
with an idea
Starting a Vibe
People might not like it
There we go!
Looking for sounds
Tons of plugins
Send it Over to the Engineer
Mindful of Artist's Vision
Crazy sound levels
Send it over
Step by Step
Software and hardware knowledge becomes a barrier.
Being the spark of inspiration can be challenging.
Knowing how to make a recording sound good requires technical expertise.
Adjusting for frequency overlap is tedious.
Lack of Easy Sound Customization
Several plugins have to be used to achieve custom sounds.
The unintuitive use of effects on instruments makes their placement time-consuming.
The users are people who know what they want at the most basic level, using the tools to bring their idea to life.
Controlling Artificial Intelligence players with the most natural interactions.
Using A.I. to fill in where information is lacking, be it automatic playing, instrument translation, or mixing and mastering.
Immersive interfaces enable the use of volumetric movements, making interactions more natural than they would otherwise be.
Soft keys enable a greater level of musical freedom.
Genre presets will suggest Environments, Instruments and Players according to the genre selected.
Choosing between a Cathedral and a Theatre will affect how grandiose the song will sound.
The preset library can be expanded by the users.
Physically placing the instruments intuitively adjusts volume and compression.
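This placement-to-mix mapping could be sketched as follows. The function name, distance units, and gain/compression curves are all illustrative assumptions, not part of the design spec:

```python
def mix_params(distance_m, max_distance_m=10.0):
    """Map an instrument's distance from the listener (hypothetical
    units: metres) to a volume gain and a compression ratio.

    Closer instruments are louder and more compressed (more "present");
    distant ones are quieter and left more dynamic. Both curves below
    are illustrative assumptions.
    """
    d = min(max(distance_m, 0.0), max_distance_m)
    proximity = 1.0 - d / max_distance_m       # 1.0 = right next to you
    volume = proximity ** 2                    # falls off with distance
    compression_ratio = 1.0 + 3.0 * proximity  # 1:1 far away, 4:1 up close
    return volume, compression_ratio
```

Moving an instrument toward or away from yourself would then adjust both parameters at once, with no faders or plugin windows involved.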
The Instrument Library is controlled by the users: new instruments can be added and changed depending on the desired feel.
Translates piano interactions into any other instrument, using both Artificial Intelligence and the keyboard's expression interface.
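In MIDI terms, the simplest part of such a translation is retargeting note events into another instrument's playable range; the range defaults below are assumptions for illustration, and the A.I. expression-modelling layer is not sketched:

```python
def translate_note(note, velocity, target_low=40, target_high=76):
    """Retarget a piano MIDI note number (0-127) into a hypothetical
    target instrument's playable range by shifting whole octaves,
    keeping the pitch class (the musical note name) intact."""
    while note < target_low:
        note += 12
    while note > target_high:
        note -= 12
    return note, velocity
```

For example, a low piano A (MIDI note 21) would be raised by octaves until it lands inside the target range, so the melody survives the instrument swap.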
Artificial Intelligence players taken from an online library play according to the Key, Tempo, Volume and Positioning set by the conductor.
No A.I. (You)
When we’re not limited by physics, new ways of interacting with sound are born.
The sound is created based on the given physical attributes of the instrument, together with an interaction.
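One way to read this: the instrument's physical attributes set the base sound, and the interaction modulates it. Every attribute-to-parameter mapping below is an illustrative assumption:

```python
def synth_params(size, material_hardness, strike_force):
    """Derive basic synthesis parameters from an imagined instrument's
    physical attributes (size in arbitrary units, hardness 0-1) and an
    interaction (strike force 0-1). All mappings are illustrative.
    """
    pitch_hz = 880.0 / max(size, 0.1)    # bigger body -> lower pitch
    brightness = material_hardness       # harder material -> brighter timbre
    amplitude = min(strike_force, 1.0)   # harder strike -> louder
    decay_s = size * (1.0 - 0.5 * material_hardness)  # soft, big bodies ring longer
    return {"pitch_hz": pitch_hz, "brightness": brightness,
            "amplitude": amplitude, "decay_s": decay_s}
```

Because none of this is bound by real physics, users could stretch an instrument to drop its pitch or change its material to sharpen its timbre, directly in the AR space.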
People can create and share their new instruments.
Mixing and mastering will be taken care of by a neural network specialized in song production.
How it works
An animated storyboard showing Maestro interactions.
Sound On recommended!