

Historically, most engineers have mixed on speakers and only used headphones to listen for details they may have missed on the monitors. The thinking behind this has always been that mixes done over speakers will translate better to headphones than vice versa, due to stereo image perception on 'phones and the headphones' tendency to highlight certain details of a mix.

In these modern times, given that many music lovers use headphones as their primary listening environment, mixers have begun to rethink this conventional wisdom. Headphone mixing also suits situations where the monitoring environment may be less than ideal, such as in many project studios. Often it's simply inconvenient or impractical to mix over speakers: we might be traveling with only a laptop and a pair of phones, or we may not want to bother others around us. If a mixer decides to use headphones as their monitor system, one of the first things to do would be to calibrate the headphones for a flat/ideal response, so that the frequency response of the headphones can be trusted to translate well to the outside world.

While calibration is an important step, there will still be fundamental differences between monitoring on speakers vs. headphones. The way sound from stereo speakers reaches our ears is inherently different from monitoring with headphones. With speakers, the two actual sound sources, the left and right speakers, produce sound into the same physical space. In theory, the sound from the left speaker is intended for the left ear and the sound from the right speaker for the right ear, but what actually happens is that both ears hear the sound from both speakers to a greater or lesser extent. The left ear hears the left speaker plus some of the signal from the right speaker; the right ear hears the right speaker plus some of the left speaker's signal.

[Figure: Interaural crosstalk from stereo speakers.]

The left ear receives the sound wave from the left speaker, but it also receives some sound from the right speaker. To the left ear, the right speaker's signal comes from a very slightly greater distance, so its level at the left ear is slightly lower than that of the left speaker's signal. Also due to that slightly greater distance, the right speaker's wave arrives a little later at the left ear than the left speaker's wave (by up to 600 µsec). The same thing occurs with the right ear and the left speaker: the left speaker's wave arrives at the right ear a little later and at a slightly lower level than the right speaker's wave. And of course, the additional delayed signal at each ear combines with the intended signal for that ear, creating interference effects like comb filtering.

Of course, stereo perception in speaker monitoring is more complex than just simple interaural crosstalk. The head itself affects the sound waves reaching the ears, with more of a masking effect on a sound at the more distant ear: the sound wave may have to travel around the head and is subject to the damping effect of that object in its path. Even the shape of our ears themselves plays a role, with the pinna, the outer ear, focusing sound waves into the ear canal. These aspects affect not only our perception of left-right stereo width but also of height, our ability to get a sense of how high the sound source is relative to our ears.
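The interaural timing figures discussed here lend themselves to a quick back-of-the-envelope check. The sketch below assumes an effective around-the-head path difference of about 0.2 m (an illustrative assumption, not a figure from the text); it reproduces a delay in the neighborhood of the 600 µsec mentioned above and estimates where the deepest comb-filter notch from that delayed crosstalk would fall:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
PATH_DIFFERENCE = 0.20   # m, assumed extra travel around the head to the far ear

# Extra time the far speaker's wave needs to reach the shadowed ear
itd = PATH_DIFFERENCE / SPEED_OF_SOUND   # ~583 µsec, near the ~600 µsec figure

# A delayed copy summed with the direct signal cancels most deeply where the
# delay equals half a period, i.e. at f = 1 / (2 * delay)
first_notch_hz = 1.0 / (2.0 * itd)       # ~860 Hz for this assumed geometry
```

In practice the level difference and head shadowing vary with frequency and source angle, so this is only a rough orientation, not a model of real binaural hearing.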

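Crossfeed processors for headphone monitoring simulate exactly this speaker-style leakage: each channel receives a delayed, attenuated copy of the opposite channel, restoring some of the interaural crosstalk that headphones remove. A minimal sketch in plain Python follows; the 300 µsec delay and 0.5 (about -6 dB) gain are illustrative assumptions, not values from the text, and real crossfeed designs typically also low-pass the crossfed signal to mimic head shadowing:

```python
def crossfeed(left, right, sample_rate=48000, delay_s=300e-6, gain=0.5):
    """Blend a delayed, attenuated copy of each channel into the opposite
    channel, roughly mimicking interaural crosstalk from stereo speakers.
    delay_s and gain are illustrative, not a tuned model of a real head."""
    d = max(1, round(delay_s * sample_rate))  # crosstalk delay in whole samples
    n = len(left)
    out_left = [left[i] + gain * (right[i - d] if i >= d else 0.0)
                for i in range(n)]
    out_right = [right[i] + gain * (left[i - d] if i >= d else 0.0)
                 for i in range(n)]
    return out_left, out_right
```

For example, a click on the left channel only will appear in the right output 14 samples later (at 48 kHz) at half amplitude, while the left output passes through unchanged.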