The next level for VST instruments

Have you ever wondered why VST instruments never sound like a real orchestra? I know you will tell me about all the subtle interpretation and phrasing of each musician that we are not able to reproduce, all the processing in the musicians' minds that changes legato to spiccato, to glissando and so on, all the unique bowing tension each virtuoso player brings, and you are absolutely right. But even if we reach the next level of programming all of this into a library instrument, even if it can 'foresee' the melody, or 'learn' how to play it, or render it while you wait through millions of calculations until we get it right, there is another issue that is not addressed at all, at least until now: the acoustic resonance of each instrument as it amplifies the music played by the rest of the orchestra, by the players next to it.

You see, so much ink has been spent on room acoustics, but not enough on the resonance between instruments. When a cello sits next to the violin section, its large hollow body cannot help but resonate and amplify some of the sound of the violins too. And vice versa. This is why, if we record all the string section's instruments separately and then put them together, they do not really sound like 'a string section' but like a 'selection of individual studio musicians'.

But how can you reproduce this resonance from the rest of the orchestra while the instrument is either idle or playing along? It is a tough engineering and programming problem. It can be done, of course, by having each instrument produce extra layers of this sound after each line of the rest of the orchestra is recorded. Obviously not in real time, but that is perfectly all right for recordings. It is more or less the same concept as adding shadows and reflections onto surfaces in animation software.
You apply the light, and then the software has to calculate all the nearby surfaces and apply that information so that the image becomes realistic. In a way, a good convolution reverb tries to do just that, but now I think we have to take it to the next level: calculating the resonance of the instruments in the orchestra, one by one. What I am trying to say is that we need to go deeper into sampling and capture more than the sounds of instruments. We need to sample, or better yet code in, the behavior of each instrument when a sound source is next to it, in order to take VST sampling to the next level. I know it will become possible within the next few years. I just can't wait to hear the result.
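To make the idea concrete, here is a minimal sketch in Python of what such a per-instrument resonance layer could look like, assuming we had measured a 'body impulse response' for each instrument (how its hollow body rings when excited by nearby sound). The function name, the gain value, and the toy signals are all hypothetical illustrations, not any existing library's API; a real implementation would need measured impulse responses and far more sophisticated modeling.

```python
import numpy as np

def add_body_resonance(neighbor_mix, body_ir, wet_gain=0.1):
    """Generate a sympathetic-resonance layer for one (possibly idle)
    instrument by convolving the summed signal of its neighbors with
    that instrument's body impulse response.  (Hypothetical sketch.)"""
    return wet_gain * np.convolve(neighbor_mix, body_ir)

# Toy example: a stand-in for the violin section next to a cello.
sr = 44100
t = np.arange(sr) / sr
violins = np.sin(2 * np.pi * 440 * t)  # placeholder violin-section signal

# Hypothetical cello body IR: a short decaying noise burst standing in
# for a measured impulse response of the cello's resonant body.
cello_body_ir = np.exp(-t[:2048] * 200) * np.random.randn(2048) * 0.01

# The resonance layer is rendered offline and mixed under the section,
# just as the article suggests (not in real time).
resonance = add_body_resonance(violins, cello_body_ir)
section = np.concatenate(
    [violins, np.zeros(len(resonance) - len(violins))]
) + resonance
```

This is exactly the convolution-reverb mechanism, only applied per instrument with an instrument-body response instead of a room response, and repeated for every player in the section.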