FAQs: Codec Audio Settings and Troubleshooting

FAQs: Audio of Codec

Audio Channel

How many audio channels can I configure on my HDMI codec?

The number of audio channels you can configure on your HDMI codec depends on the specific model. For example, the SS50 MS7 Codec Series supports a maximum of 8 audio channels in total—4 channels for encoding and 4 channels for decoding. This allows for versatile audio configuration, making it suitable for complex audio setups that require multiple audio streams.

Please note that the maximum number of supported audio channels may vary across different models, so it's important to check the specifications of your specific codec for precise details.

Audio Input

What are the audio input options available for HDMI codecs?

HDMI codecs typically offer three audio input options:
  1. HDMI Audio Input – This allows the codec to extract audio directly from the HDMI input signal, making it ideal for devices that send both video and audio over HDMI.
  2. Line-in Audio Input – This provides an alternative way to connect an external audio source, such as a microphone or audio mixer, via the line-in port.
  3. Audio Mix – HDMI codecs offer audio mixing capabilities, allowing you to combine multiple audio sources for more complex setups.
These options provide flexibility for integrating the audio setup based on your requirements, whether you're working with video, external audio devices, or both.
NOTE: Audio Mix is only available for CH4 (MIX).

Which connectors are supported for Line-In Audio on my HDMI codec?

Generally, 3-pole TRS connectors are suitable for the TV30 ITE Codec Series. These connectors are typically used for stereo audio signals, carrying the left and right channels along with ground. In contrast, 4-pole (TRRS) connectors are designed for the SS50 MS7 Codec Series and SS52 MS7 Codec Series. They support stereo audio signals and include an additional pole for microphone input or other audio sources, alongside the left and right channels.

Why is there so much noise with Aux audio?

The presence of noise when using an auxiliary audio (Aux) connection can often be attributed to a few different factors:
  1. Interference: Noise can occur due to interference from other electronic devices or poor shielding of the audio cables. Using a shielded cable and keeping it away from other electronic devices can help reduce interference.
  2. Poor connection: If the auxiliary cable is not properly connected or if there are loose connections, it can lead to noise. Ensuring a secure and proper connection can help minimize noise.
  3. Ground loops: Ground loops can cause buzzing or humming noise in audio systems. Using a ground loop isolator can help mitigate this issue.
It is strongly recommended to use a standard 3.5mm audio connector with two black rings (a 3-pole TRS connector) and to keep the volume below 80. This ensures compatibility and optimal audio quality, especially for the TV30 ITE Codec Series. Using the correct connector ensures that the audio is properly transmitted, and keeping the volume below 80 helps prevent distortion and potential noise issues.

In summary, to minimize noise with an Aux audio connection, it's important to use a quality cable, ensure a secure connection, and address any issues related to interference or ground loops. Additionally, following the recommended connector type and volume level can contribute to better audio quality and reduced noise.

Why does streaming to YouTube stop working when I switch the audio input from HDMI to Line-in?

When you switch the audio input from HDMI to Line-in, the encoding configuration used for streaming is not adjusted automatically, so the audio stream stalls. Rebooting the device after the switch resets the audio encoder and ensures that the audio is properly streamed to YouTube. In short, the audio encoding settings do not switch cleanly when the input source is changed, so a reboot is required to apply the change.

What's Audio Mix?

Audio mix refers to the process of combining multiple audio sources or tracks into a single audio output. When combining audio from HDMI and 3.5mm Line-In sources, it typically involves blending or mixing the audio signals from these sources into a single composite audio stream.

For example, in a multimedia or audiovisual setup, such as a presentation or video conference, you may want to combine the audio from a video source connected via HDMI with another audio input from a separate device using a 3.5mm Line-In connection. This could be achieved by using an EXVIST HDMI video encoder with the latest firmware released around December 2023. For more information, please refer to Available Models of Audio Mix.
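To make the idea concrete (independent of any particular encoder model), here is a minimal sketch of a 50/50 audio mix in Python using NumPy; the 48 kHz test tones, buffer names, and equal mix ratio are assumptions for illustration only.

```python
import numpy as np

def mix_audio(hdmi_pcm: np.ndarray, line_in_pcm: np.ndarray) -> np.ndarray:
    """Blend two 16-bit PCM buffers of the same length and sample rate
    into one composite stream (a simple equal-weight audio mix)."""
    a = hdmi_pcm.astype(np.float32)
    b = line_in_pcm.astype(np.float32)
    mixed = 0.5 * a + 0.5 * b                 # equal-weight blend
    mixed = np.clip(mixed, -32768, 32767)     # avoid 16-bit overflow
    return mixed.astype(np.int16)

# Example: mix one second of two 48 kHz test tones
sr = 48000
t = np.arange(sr) / sr
hdmi_tone = (0.4 * 32767 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
line_tone = (0.4 * 32767 * np.sin(2 * np.pi * 660 * t)).astype(np.int16)
composite = mix_audio(hdmi_tone, line_tone)
```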

What's Active Audio?

Active audio typically refers to audio components or devices that require external power to operate. In the context of audio equipment,  "active" generally refers to devices that contain built-in amplification or signal processing capabilities, which necessitate an external power source to function. This includes powered speakers, amplifiers, mixers with built-in preamps, and active audio interfaces.

What's Passive Audio?

Passive audio typically refers to a type of audio signal or system that does not require external power or an active amplifier. In the context of speakers, a passive speaker does not have a built-in amplifier and relies on an external power source, such as a separate amplifier, to drive the audio signal and produce sound.

Audio Sampling

What should I do if the audio in my MP4 file has a sample rate of 44.1 kHz while using the SS50/SS52 Codec Series?

If the audio in your MP4 file has a sample rate of 44.1 kHz, it's important to configure the audio settings on your SS50/SS52 Codec Series to reflect this. Set the audio sample rate to 44100 Hz to match the file's audio characteristics.

This adjustment is crucial because mismatched sample rates can cause interruptions or audio playback issues. By setting the sample rate correctly, you ensure smoother audio processing and minimize potential disruptions during playback or streaming.

Make sure to double-check the codec's audio settings in the configuration menu to ensure they match the sample rate of your source file.
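If you are unsure of the source file's sample rate, a quick check before changing the codec settings can save time. The sketch below (assuming FFmpeg/ffprobe is installed; the file name is a placeholder) reads the sample rate of the first audio stream:

```python
import subprocess

def audio_sample_rate(path: str) -> int:
    """Return the sample rate of the first audio stream, via ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "a:0",
         "-show_entries", "stream=sample_rate",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

# If this prints 44100, set the SS50/SS52 audio sample rate to 44100 Hz.
print(audio_sample_rate("recording.mp4"))
```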

Audio Codec

How should I select the audio codec type?

AAC (Advanced Audio Coding) is a widely supported audio format that is ideal for live broadcasting due to its high-quality sound and efficient compression. It is commonly used for streaming audio and video content over the internet.

G.711A is a codec standard used for compressing audio, specifically in voice over IP (VoIP) communications. It is one of the two variants of the G.711 audio codec, the other being G.711U (often referred to as u-law). 

G.711U is a standard audio codec commonly used in video surveillance systems. It provides high-quality audio with low latency, making it suitable for real-time monitoring and recording in surveillance applications.

When setting audio for live broadcasting, you should select AAC as the audio format to ensure high-quality sound and efficient streaming. For video surveillance, G.711U is the preferred choice to ensure clear and reliable audio for monitoring and recording purposes.

What is AAC and how is it used in audio streaming?

AAC (Advanced Audio Coding) is a digital audio codec designed to compress audio data while maintaining high sound quality. It is part of the MPEG-4 standard and is widely used for audio streaming, broadcasting, and media playback. AAC was developed as an improvement over MP3 and is known for providing better sound quality at lower bitrates.

Key Features of AAC:
  1. High Compression Efficiency: AAC provides better audio quality than MP3 at the same bitrate. This makes it more efficient for audio streaming, offering high-quality sound while minimizing file sizes.
  2. Wide Range of Bitrates: AAC supports a broad range of bitrates, from very low to high, making it suitable for various applications, from mobile streaming to high-quality audio broadcasting.
  3. Multichannel Support: AAC can handle multiple audio channels, including stereo and surround sound (5.1 and 7.1), making it ideal for use in video streaming and home theater systems.
  4. Lower Bitrate, Better Quality: At lower bitrates, AAC outperforms MP3 in terms of audio quality. This makes it an excellent choice for applications where bandwidth or storage is limited.
  5. Compatibility Across Devices and Platforms: AAC is supported by a wide range of devices, including smartphones, tablets, computers, smart TVs, and streaming platforms such as YouTube, Spotify, and Apple Music.
  6. Adaptive to Network Conditions: AAC adapts well to varying network conditions, making it particularly suitable for streaming over the internet where bandwidth can fluctuate. It ensures a consistent, high-quality audio experience even with low or unstable internet connections.
  7. Enhanced Audio Performance: AAC offers enhanced audio quality with features like frequency band prediction, temporal noise shaping, and joint stereo, which contribute to a more detailed and lifelike listening experience.
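As a rough illustration of that compression efficiency (a sketch only; the file names and 128 kbps bitrate are example values, not requirements), the following re-encodes a file's audio track to AAC with FFmpeg called from Python while leaving the video untouched:

```python
import subprocess

# Re-encode only the audio track to AAC at 128 kbps; the video stream is copied as-is.
subprocess.run(
    ["ffmpeg", "-y",
     "-i", "input.mp4",
     "-c:v", "copy",      # keep the original video stream
     "-c:a", "aac",       # encode audio with AAC
     "-b:a", "128k",      # target audio bitrate
     "output_aac.mp4"],
    check=True,
)
```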

What is G.711A and how is it used in audio streaming?

G.711A is a codec standard used for compressing audio, specifically in voice over IP (VoIP) communications. It is one of the two variants of the G.711 audio codec, the other being G.711U (often referred to as u-law). Here's an overview:

Key Features of G.711A:
  1. Audio Compression: G.711A is a low-complexity waveform codec that applies only light compression, delivering toll-quality (telephone-grade) audio at a bit rate of 64 kbps (kilobits per second).
  2. Encoding: G.711A uses A-law companding (a form of audio compression), which is primarily used in Europe and other parts of the world for telephony applications. It's designed to balance dynamic range compression and signal quality, which makes it suitable for low-latency audio transmission.
  3. Latency: G.711A has low latency, which is important in real-time communication applications such as voice calls, video conferencing, and live streaming.
  4. Applications: G.711A is widely used in telecommunications, VoIP systems, and real-time audio applications where high quality and low latency are critical, such as in video conferencing, remote broadcasting, and communication systems.
G.711A is preferred in environments that require clear, uninterrupted audio but can tolerate larger file sizes and lower compression ratios. It's a widely supported codec in IP-based audio communication systems, including in video encoders and decoders.
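For intuition only, here is a minimal sketch of the A-law companding curve applied to normalized samples (this is not a complete G.711A implementation; a real encoder additionally quantizes the companded values to 8 bits per sample at 8 kHz):

```python
import numpy as np

A = 87.6  # standard A-law compression parameter

def alaw_compress(x: np.ndarray) -> np.ndarray:
    """Apply the A-law companding curve to samples normalized to [-1, 1]."""
    ax = np.abs(x)
    safe = np.maximum(A * ax, 1e-12)   # avoid log(0) in the unused branch
    y = np.where(
        ax < 1 / A,
        A * ax / (1 + np.log(A)),
        (1 + np.log(safe)) / (1 + np.log(A)),
    )
    return np.sign(x) * y

# Quiet samples keep proportionally more resolution than loud ones.
samples = np.array([-1.0, -0.25, -0.01, 0.0, 0.01, 0.25, 1.0])
print(alaw_compress(samples))
```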

What is G.711U, and what are its key features?

G.711U is an audio codec standard that is widely used in telecommunications and VoIP (Voice over IP) systems. It is a type of Pulse Code Modulation (PCM) codec, which is used to encode and compress audio signals for transmission over digital communication systems.

Key Features of G.711U:
  1. Minimal Compression: G.711U uses 64 kbps of bandwidth per channel and applies only simple logarithmic companding rather than heavy compression, so it largely preserves the original quality of the voice signal.
  2. Sampling Rate: It operates at a sampling rate of 8 kHz, which provides a frequency range from 300 Hz to 3400 Hz, sufficient for high-quality voice transmission.
  3. Encoding Method: G.711U uses logarithmic (µ-law) PCM encoding, converting the analog audio signal into 8-bit samples with a non-uniform quantization curve that gives quiet signals more resolution.
  4. Low Latency: G.711U offers low latency because it does not require complex encoding or decoding processes. This is particularly beneficial for real-time communications like voice calls.
  5. Compatibility: It is one of the most widely supported codecs in telecommunication and VoIP systems, making it highly compatible with a wide range of devices and services.
  6. High Audio Quality: With only light companding applied, G.711U offers very high voice quality with minimal loss compared to more heavily compressed codecs.
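By way of comparison with A-law above, here is a similar sketch of the µ-law transfer function that G.711U applies before 8-bit quantization (again, only the companding curve, not a full codec):

```python
import numpy as np

MU = 255  # standard mu-law compression parameter

def ulaw_compress(x: np.ndarray) -> np.ndarray:
    """Apply the mu-law companding curve to samples normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

# The logarithmic curve expands quiet signals, which is why 8 bits per
# sample are enough for clear narrowband speech.
samples = np.array([-1.0, -0.25, -0.01, 0.0, 0.01, 0.25, 1.0])
print(ulaw_compress(samples))
```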

Why does YouTube require the audio codec to be set to AAC, even when streaming video without sound using my codec?

YouTube requires the audio codec to be set to AAC, even when streaming video without sound, because of the platform’s standardization and compatibility requirements. Here are the key reasons:
  1. Standardized Codec: YouTube supports and recommends using AAC (Advanced Audio Coding) for audio because it is a widely adopted and highly efficient codec. Even if there is no audio in the stream, YouTube may expect an audio track to be present for compatibility with its encoding systems and streaming protocols.
  2. Container Format: Many video containers (such as MP4) require an audio stream to be present alongside the video stream. Even if no audio is included, YouTube expects an AAC track to ensure seamless processing and playback across different devices and platforms.
  3. Streaming Protocols: When streaming, some protocols (like HLS or DASH) may require an audio stream in the metadata, and the AAC codec is a default for YouTube’s systems. Setting the codec to AAC ensures proper streaming and synchronization with YouTube's infrastructure.
  4. Playback Consistency: Using AAC ensures that the stream remains consistent for users across a variety of devices, including mobile phones, smart TVs, and browsers, which may have native support for AAC audio playback.
In summary, even for video-only streams, YouTube requires an AAC audio codec to maintain compatibility with its encoding, container, and streaming requirements, ensuring a consistent user experience.
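If you are preparing a video-only file for upload rather than pushing a live stream from the codec, one common workaround is to attach a silent AAC track so the container meets this expectation. The sketch below calls FFmpeg's anullsrc generator from Python; the file names are placeholders:

```python
import subprocess

# Add a silent stereo AAC track to a video-only MP4 (file names are examples).
subprocess.run(
    ["ffmpeg", "-y",
     "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
     "-i", "video_only.mp4",
     "-map", "1:v:0",      # video from the real input file
     "-map", "0:a:0",      # silent audio from anullsrc
     "-c:v", "copy",       # keep the original video stream
     "-c:a", "aac",        # encode the silent track as AAC
     "-shortest",          # stop when the video stream ends
     "with_silent_audio.mp4"],
    check=True,
)
```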

Stereo/Mono Channel

What's a Soundtrack?

The difference between stereo, dual channel and mono

If the waveforms of the left and right channels of a dual-channel signal are identical, there is no stereoscopic effect, and it sounds the same as mono. (In other words, two channels are not necessarily stereo.) When the left and right waveforms are identical, artificially introducing a phase difference between them can create a wider sound field and a sense of space; because that phase difference is fixed and artificial, this is called pseudo stereo. True stereo consists of two channels with genuinely different waveforms, whose phase relationship changes from moment to moment.
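As a small illustration of the fixed-phase-difference idea (a sketch, not a production mixing technique; the 15 ms delay is an arbitrary example value), the snippet below builds a pseudo-stereo pair from a mono signal by delaying one channel:

```python
import numpy as np

def pseudo_stereo(mono: np.ndarray, sample_rate: int, delay_ms: float = 15.0) -> np.ndarray:
    """Create a 2-channel signal from mono by giving the right channel a
    fixed delay, which artificially widens the sound field (pseudo stereo)."""
    delay = max(1, int(sample_rate * delay_ms / 1000))
    left = mono
    right = np.concatenate([np.zeros(delay, dtype=mono.dtype), mono[:-delay]])
    return np.stack([left, right], axis=1)   # shape: (samples, 2)
```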

Once you understand these principles, many common beginner questions become easy to answer.

Is the recording in stereo or mono?

A single microphone captures only one waveform, so recording it to two channels simply duplicates the same waveform; there is no audible difference between recording in mono and in dual channel. The only difference is that a dual-channel recording takes up more disk space.

How to record in stereo?

Use a stereo microphone, or use two microphones to pick up the sound in different positions.

Should I use a mono effect or a dual-channel effect?

A dual-channel (stereo) effect is essentially two effect units working in parallel. Inserting a mono effect into a stereo signal path does no harm. In general, effects that manipulate phase are not offered as mono plug-ins; for example, in the mono plug-in list of WAVES you won't find the delay plug-in SuperTap 2-Tap Mod. For everything else, whether the effect is mono makes little difference.

What software can be used to eliminate the vocals in the song?

As explained in the theory above, vocal removal works by phase cancellation, so no software can truly eliminate only the vocals, and a degraded accompaniment is unavoidable. It is not just the vocals that get cancelled: because the bass and kick drum have very small phase differences between the two channels, they are partly cancelled as well, so a vocal-removed accompaniment must have its low frequencies compensated.
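A minimal sketch of that phase-cancellation approach (assuming the song is available as a floating-point stereo NumPy array; real tools follow this with low-frequency compensation):

```python
import numpy as np

def remove_center(stereo: np.ndarray) -> np.ndarray:
    """Cancel material that is identical in both channels (typically the lead
    vocal) by subtracting the right channel from the left.
    `stereo` is a float array of shape (samples, 2) with values in [-1, 1]."""
    side = stereo[:, 0] - stereo[:, 1]   # L - R: centered content cancels out
    return 0.5 * side                    # mono result, scaled to avoid clipping

# Bass and kick drum usually sit near the center too, so they are attenuated
# as well -- which is why the resulting accompaniment needs a low-frequency boost.
```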

Why do the vocals I recorded sound noticeably muffled?

Room walls without sound absorption cause the reflected sound and the direct sound to partially cancel each other, especially in the direction the microphone is pointing, i.e., the wall behind the singer. Avoiding this cancellation is something that must be taken care of when recording.

Learn more about Audio of Codec

Related Articles
  • Audio of Codec
  • Quick Start of Codec
  • FAQs: RTMP of Codec
  • FAQs: Overview of Codec
  • FAQs: Preview of Codec