Five years ago, I wrote a post on the excommunication of the tritone by the Catholic Church. The tritone is an interval of six semitones (or three whole tones). The church excommunicated the tritone a long time ago. Presumably, they were hearing it as it sounded then, with a tuning different from the one we use today. Today, we tune instruments so that the 12 notes of the chromatic scale split the octave evenly (i.e., the chromatic scale is equally tempered). In the past, tunings tried to make better use of the harmonics (for example, Pythagorean tuning).
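As a quick arithmetic sketch of the difference between the two tunings, here are the frequency ratios of the tritone in equal temperament and in Pythagorean tuning (729/512 is the standard textbook value for the Pythagorean augmented fourth):

```python
# Frequency ratio of the tritone (six semitones) in two tunings.

# Equal temperament: each of the 12 semitones has the ratio 2^(1/12),
# so six semitones give 2^(6/12) = sqrt(2).
equal_tempered_tritone = 2 ** (6 / 12)

# Pythagorean tuning: intervals are stacked from pure fifths (3/2),
# and the augmented fourth comes out as 729/512.
pythagorean_tritone = 729 / 512

print(round(equal_tempered_tritone, 4))  # 1.4142
print(round(pythagorean_tritone, 4))     # 1.4238
```

The Pythagorean tritone is noticeably sharper than the equal-tempered one, which is part of why the interval sounded harsher under the older tuning.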
We are mixing an album. Everything is progressing well, but there are problems with some of the initial recordings. Most of these are related to the vocals. Sometimes they are actual problems: the vocal sounds boxed in, say, and no amount of equalization can correct for that. Sometimes it is not clear that there is an actual problem, but the vocalist is unhappy anyway. Perhaps the melody was not right. Perhaps the melody is correct, but a syllable does not hit the right pitch, there is a glaring "aaa", the timing of a word is out of place, and so on.
A long, long time ago, we used CoolEdit Pro to record and mix our music. This would have been around 2001, on an ancient desktop with a 300 MHz processor, in our makeshift home "studio." At the time, CoolEdit was impressive. I could rave about how great it was for a long time, but here is the important stuff.
Ten years ago, I played with Samplitude, a powerful recording and mixing application. I liked it because it was intuitive. Learning how to use it took little time.
The simplest way to change the pitch of a signal is to stretch or compress it in time. This type of pitch shifting has a disadvantage, though: it also changes the timing of the signal, speeding it up or slowing it down. It can still be useful if the signal is short or if its tempo is unimportant. It can be used, for example, to change the tone of the drum samples used by drum machines or the notes of the wave samples used by MIDI wave mappers.
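A minimal sketch of this idea, assuming a hypothetical helper that resamples a list of samples by linear interpolation (not from any specific library): reading the signal at a different rate raises or lowers the pitch, but also changes the duration.

```python
# Pitch shifting by resampling: reading the signal faster raises the
# pitch and shortens it; reading it slower lowers the pitch and
# lengthens it. Linear interpolation fills in between samples.

def pitch_shift_resample(signal, ratio):
    """Resample by linear interpolation. ratio > 1 raises the pitch
    (and shortens the signal); ratio < 1 lowers it (and lengthens it)."""
    out = []
    pos = 0.0
    while pos < len(signal) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between the two neighboring samples
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
        pos += ratio
    return out

# A ratio of 2 reads the signal twice as fast: one octave up, half as long
sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
shifted = pitch_shift_resample(sig, 2.0)
print(len(shifted))  # 4 — half the original length
```

This is exactly the trick used for drum machine and wave mapper samples: one recorded sample is replayed at different rates to produce different notes, and the change in duration is simply accepted.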
The following is an example frequency filter that combines a standard low pass FIR filter with a standard second order low pass Butterworth filter. We are doing this to show that we do not have to stick to the standard filter designs, but can experiment. Relying on the standard filters is a start, but there is much more we can do.
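The combination can be sketched as the two standard designs run in series. This is only an illustration, not the filter from the post: the cutoff, sampling rate, and FIR length below are arbitrary choices, the FIR is a windowed-sinc design with a Hamming window, and the Butterworth biquad coefficients come from the usual bilinear transform formulas with Q = 1/sqrt(2).

```python
import math

def fir_lowpass(cutoff, fs, taps=31):
    """Windowed-sinc FIR low pass coefficients (Hamming window)."""
    m = taps - 1
    h = []
    for n in range(taps):
        x = n - m / 2
        # Ideal low pass impulse response (sinc), then the window
        if x == 0:
            v = 2 * cutoff / fs
        else:
            v = math.sin(2 * math.pi * cutoff / fs * x) / (math.pi * x)
        v *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming
        h.append(v)
    s = sum(h)
    return [v / s for v in h]  # normalize to unity gain at DC

def butterworth_lowpass(cutoff, fs):
    """Second order Butterworth low pass biquad (bilinear transform,
    Q = 1/sqrt(2)). Returns (b, a) with a[0] normalized to 1."""
    w0 = 2 * math.pi * cutoff / fs
    alpha = math.sin(w0) / (2 * (1 / math.sqrt(2)))
    b0 = (1 - math.cos(w0)) / 2
    b1 = 1 - math.cos(w0)
    a0 = 1 + alpha
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha
    return [b0 / a0, b1 / a0, b0 / a0], [1.0, a1 / a0, a2 / a0]

def filter_signal(signal, b, a):
    """Direct form I difference equation."""
    out = []
    for n in range(len(signal)):
        y = 0.0
        for i, bi in enumerate(b):
            if n - i >= 0:
                y += bi * signal[n - i]
        for i, ai in enumerate(a[1:], start=1):
            if n - i >= 0:
                y -= ai * out[n - i]
        out.append(y)
    return out

# Run the FIR first, then the Butterworth biquad, on a step input
fir = fir_lowpass(1000, 44100)
b, a = butterworth_lowpass(1000, 44100)
signal = [1.0] * 200
y = filter_signal(filter_signal(signal, fir, [1.0]), b, a)
print(round(y[-1], 3))  # settles near 1.0 (unity gain at DC)
```

Running two low pass stages in series multiplies their frequency responses, so the combined filter rolls off more steeply than either design alone.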
A few years back, I wrote a few posts about MIDI. Those posts were technical: they explained the messages that MIDI devices use to communicate with each other. Later, we moved most of that material to the Wiki, but not all of it. Some notes, I thought, should come back.
We began uploading a series of updates to Orinj, starting with version 2.5.0 of March 11, 2016. As we are working to develop Orinj version 3, we are fixing bugs and applying cosmetic changes that can also be useful in version 2. There is no sense in waiting for version 3 to be ready. This can take some time. In the meantime, version 2 will be updated as often as needed.
A typical reverb is implemented in two parts. First, a tapped delay line simulates the initial (early) reflections, which may be few in number and distinct. Second, a Schroeder reverb simulates the late reverb, which contains a large number of indistinct reflections. The Schroeder reverb itself consists of several sequential all pass filters, the output of which is fed through several parallel feedback comb filters (simple recirculating delays).
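The structure above can be sketched as follows. This is a minimal illustration, not a production reverb: the delay lengths and gains are arbitrary choices (mutually prime comb delays, as is common practice), and the filters use the standard Schroeder difference equations.

```python
# Two-part reverb sketch: a tapped delay line for the early
# reflections, then series all pass filters feeding parallel feedback
# comb filters for the dense late reverb.

def tapped_delay(signal, taps):
    """Early reflections: taps is a list of (delay_in_samples, gain)."""
    out = list(signal)
    for delay, gain in taps:
        for n in range(delay, len(signal)):
            out[n] += gain * signal[n - delay]
    return out

def feedback_comb(signal, delay, gain):
    """y[n] = x[n] + gain * y[n - delay]"""
    out = []
    for n in range(len(signal)):
        y = signal[n]
        if n - delay >= 0:
            y += gain * out[n - delay]
        out.append(y)
    return out

def allpass(signal, delay, gain):
    """Schroeder all pass: y[n] = -g x[n] + x[n-d] + g y[n-d]"""
    out = []
    for n in range(len(signal)):
        y = -gain * signal[n]
        if n - delay >= 0:
            y += signal[n - delay] + gain * out[n - delay]
        out.append(y)
    return out

def reverb(signal):
    # A few sparse, distinct early reflections
    early = tapped_delay(signal, [(149, 0.6), (211, 0.4), (263, 0.3)])
    # Series all pass filters smear the signal into a denser texture
    late = early
    for d, g in ((223, 0.7), (89, 0.7)):
        late = allpass(late, d, g)
    # Parallel feedback combs build the decaying tail; sum their outputs
    combs = [feedback_comb(late, d, 0.75) for d in (1051, 1123, 1289, 1307)]
    return [sum(c[n] for c in combs) / len(combs) for n in range(len(late))]

impulse = [1.0] + [0.0] * 4999
tail = reverb(impulse)
```

Feeding an impulse through the chain shows the expected behavior: a few distinct early echoes, followed by a long, dense tail from the recirculating combs.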
Digital Signal Processing for Audio Applications, second edition, was published on November 18, 2014. What's new in the second edition? A few things.