YouTube requires the audio codec to be set to AAC, even when streaming video without sound, because of the platform’s standardization and compatibility requirements. Here are the key reasons:
- Standardized Codec: YouTube supports and recommends AAC (Advanced Audio Coding) because it is widely adopted and highly efficient. Even if the stream carries no audio, YouTube expects an audio track to be present for compatibility with its encoding systems and streaming protocols.
- Container Format: Many video workflows built around containers such as MP4 assume an audio stream alongside the video stream. Even when no sound is included, YouTube expects an AAC track so that processing and playback work seamlessly across devices and platforms.
- Streaming Protocols: When streaming, some protocols (like HLS or DASH) may require an audio stream in the metadata, and the AAC codec is a default for YouTube’s systems. Setting the encoder to AAC ensures proper streaming and synchronization with YouTube's infrastructure.
- Playback Consistency: Using AAC ensures that the stream remains consistent for users across a variety of devices, including mobile phones, smart TVs, and browsers, which may have native support for AAC audio playback.
In summary, even for video-only streams, YouTube expects an AAC audio track to stay compatible with its encoding, container, and streaming requirements, ensuring a consistent user experience. The common workaround is to mux a silent AAC track into the stream, as sketched below.
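Here is a minimal sketch of one way to do that, using Python to drive ffmpeg's `anullsrc` silence generator. It assumes ffmpeg is installed and on your PATH; the file names are placeholders.

```python
import subprocess

def add_silent_aac(video_in: str, video_out: str) -> None:
    """Mux a silent stereo AAC track into a video that has no audio."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_in,                      # video-only input
            "-f", "lavfi",                       # second input: generated silence
            "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
            "-c:v", "copy",                      # keep the video stream untouched
            "-c:a", "aac",                       # encode the silence as AAC
            "-shortest",                         # stop when the video ends
            video_out,
        ],
        check=True,
    )

add_silent_aac("video_only.mp4", "with_silent_audio.mp4")
```

The `-shortest` flag trims the (otherwise endless) generated silence to the length of the video, so the output duration matches the input.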
What Is a Sound Channel?
The difference between stereo, dual channel and mono
If the left and right channels of a dual-channel recording carry exactly the same waveform, there is no stereo effect; it sounds the same as mono. (In other words, dual channel is not necessarily stereo.) When the two channels start out identical, you can artificially introduce a phase difference between them to widen the sound field and create a sense of space. Because that artificial phase difference is fixed, this is called pseudo stereo. True stereo means the two channels carry completely different waveforms, with a phase difference that changes from moment to moment.
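To make the pseudo-stereo idea concrete, here is a small numpy sketch that duplicates a mono signal into two channels and gives one of them a fixed delay, i.e. a fixed phase difference. The 15 ms value is an arbitrary choice for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100

def pseudo_stereo(mono: np.ndarray, delay_ms: float = 15.0) -> np.ndarray:
    """Fake a wide sound field by delaying one channel by a fixed amount.

    Both channels carry the same waveform; the constant delay is the
    artificial, fixed phase difference described above.
    """
    delay = int(SAMPLE_RATE * delay_ms / 1000)
    right = np.zeros_like(mono)
    right[delay:] = mono[: len(mono) - delay]
    return np.stack([mono, right], axis=1)  # shape: (n_samples, 2)

# Example: a 1-second 440 Hz test tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
wide = pseudo_stereo(tone)
```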
Once you understand these principles, many common beginner questions become easy to answer.
Should I record in stereo or mono?
A single microphone can only produce two channels with exactly the same waveform, so recording it as mono or as dual channel sounds identical. The only difference is that a dual-channel recording takes up twice the disk space.
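You can verify this on an actual file: if the two channels are sample-for-sample identical, the recording is effectively mono stored at twice the size. A quick sketch using scipy; the file name is a placeholder:

```python
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("recording.wav")  # placeholder file name
if data.ndim == 1:
    print("file is mono")
elif np.array_equal(data[:, 0], data[:, 1]):
    print("two identical channels: effectively mono, twice the size")
else:
    print("channels differ: real stereo information present")
```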
How do I record in stereo?
Use a stereo microphone, or place two microphones in different positions to pick up the sound.
Should I add a mono effects plugin or a dual-channel one?
A dual-channel effect is simply two copies of the same effect working in parallel, one per channel, so inserting a mono effect on a stereo track does no harm. Plugins that manipulate phase are generally not offered in mono versions at all; for example, in the mono section of WAVES you will not find the delay plugin SuperTap 2-Tap Mod. So in practice it matters little which one you pick; the toy sketch below illustrates the parallel-processing idea.
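As a toy illustration of "two effects working in parallel", this numpy sketch applies the same simple feedback delay independently to each channel. It is a deliberate simplification, not how SuperTap itself works:

```python
import numpy as np

def mono_delay(x: np.ndarray, delay: int, feedback: float = 0.4) -> np.ndarray:
    """A toy mono feedback delay: each sample echoes `delay` samples later."""
    y = x.astype(float)
    for i in range(delay, len(y)):
        y[i] += feedback * y[i - delay]
    return y

def stereo_delay(stereo: np.ndarray, delay: int) -> np.ndarray:
    """A 'dual-channel effect': the same mono effect run once per channel."""
    return np.stack(
        [mono_delay(stereo[:, ch], delay) for ch in (0, 1)], axis=1
    )
```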
What software can remove the vocals from a song?
As the theory above explains, vocal removal is achieved by phase cancellation, so no software can truly eliminate the vocals, and a degraded-sounding accompaniment is unavoidable. It is not only the vocals that get cancelled: the bass and kick drum also have a very small phase difference between the channels, so they are cancelled too, which is why a de-vocalled accompaniment must have its low frequencies compensated.
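Here is a bare-bones numpy sketch of the phase-cancellation trick: subtracting one channel from the other removes everything panned dead center, and a crude one-pole low-pass of the original mono sum is mixed back in to stand in for the low-frequency compensation. The filter coefficient and mix amount are made-up values to tune by ear:

```python
import numpy as np

def remove_center(stereo: np.ndarray, bass_mix: float = 0.7) -> np.ndarray:
    """Cancel center-panned material, then add back some low end."""
    left = stereo[:, 0].astype(float)
    right = stereo[:, 1].astype(float)

    # Phase cancellation: anything identical in both channels
    # (center vocals, but also bass and kick) disappears here.
    side = left - right

    # Crude low-frequency compensation: one-pole low-pass of the mono sum.
    mono = 0.5 * (left + right)
    low = np.empty_like(mono)
    acc = 0.0
    alpha = 0.01  # made-up coefficient; smaller keeps only deeper bass
    for i, sample in enumerate(mono):
        acc += alpha * (sample - acc)
        low[i] = acc
    return side + bass_mix * low
```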
Why do the vocals I recorded sound so obviously muffled?
In a room without sound absorption, the walls reflect sound back so that the reflected sound partially cancels the direct sound. This is most severe in the direction the microphone points, that is, behind the singer. Avoiding this cancellation is something you must pay attention to when recording.
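What you are hearing is comb filtering: the reflection arrives a few milliseconds after the direct sound and cancels it at regularly spaced frequencies. This numpy sketch simulates a single reflection from a wall behind the singer; the distance and absorption values are invented for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100
SPEED_OF_SOUND = 343.0  # metres per second

def with_wall_reflection(direct: np.ndarray, wall_distance_m: float = 1.0,
                         absorption: float = 0.3) -> np.ndarray:
    """Simulate one reflection off a wall behind the singer.

    The reflection travels to the wall and back, arriving
    2 * distance / speed_of_sound later and slightly quieter. Summed
    with the direct sound, it cancels at every frequency where the
    delay equals an odd number of half periods (comb filtering).
    """
    direct = direct.astype(float)
    delay = int(SAMPLE_RATE * 2 * wall_distance_m / SPEED_OF_SOUND)
    reflected = np.zeros_like(direct)
    reflected[delay:] = (1.0 - absorption) * direct[: len(direct) - delay]
    return direct + reflected
```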