data | An array of floats containing the audio data to process. |
channels | An int that stores the number of channels of audio data passed to this delegate. |
If OnAudioFilterRead is implemented, Unity inserts a custom filter into the audio DSP chain.
The filter is inserted in the same order as the MonoBehaviour script is shown in the Inspector.
OnAudioFilterRead is called every time a chunk of audio is sent to the filter (this happens frequently, roughly every 20 ms depending on the sample rate and platform).
The audio data is an array of floats in the range [-1.0f; 1.0f] and contains the audio from the previous filter in the chain or from the AudioClip on the AudioSource. If this is the first filter in the chain and no clip is attached to the audio source, the filter itself is played as the audio source. In this way you can use the filter in place of an audio clip, procedurally generating the audio.
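A minimal sketch of using the filter as a procedural audio source, assuming no AudioClip is assigned to the AudioSource; the SineGenerator class name and the frequency and gain values are illustrative choices, not part of the Unity API.

using UnityEngine;

// Illustrative sketch: with no AudioClip assigned, the filter acts as the
// sound source by writing a sine tone directly into the buffer.
[RequireComponent(typeof(AudioSource))]
public class SineGenerator : MonoBehaviour
{
    public float frequency = 440.0f; // tone frequency in Hz (illustrative)
    public float gain = 0.1f;        // output level (illustrative)

    private double phase;            // current phase of the sine oscillator
    private double sampleRate;       // cached on the main thread

    void Start()
    {
        // Read the output sample rate on the main thread and cache it.
        sampleRate = AudioSettings.outputSampleRate;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        double increment = frequency * 2.0 * Mathf.PI / sampleRate;
        int samplesPerChannel = data.Length / channels;
        for (int n = 0; n < samplesPerChannel; n++)
        {
            phase += increment;
            if (phase > 2.0 * Mathf.PI)
                phase -= 2.0 * Mathf.PI;
            float sample = gain * (float)System.Math.Sin(phase);
            // Write the same sample to every channel of this frame.
            for (int c = 0; c < channels; c++)
                data[n * channels + c] = sample;
        }
    }
}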
If there are multiple channels, the channel data is interleaved: each consecutive sample in the array comes from a different channel until the last channel is reached, after which it wraps back to the first channel. data.Length reports the total size of the data, so the number of samples per channel is data.Length divided by channels.
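A minimal sketch of walking the interleaved buffer; the InterleavedWalk class name and the 0.5f attenuation are purely illustrative. Frame n, channel c lives at data[n * channels + c], and data.Length / channels gives the number of samples per channel.

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class InterleavedWalk : MonoBehaviour
{
    void OnAudioFilterRead(float[] data, int channels)
    {
        int samplesPerChannel = data.Length / channels;
        for (int n = 0; n < samplesPerChannel; n++)
        {
            for (int c = 0; c < channels; c++)
            {
                // Process each channel of the current frame independently.
                data[n * channels + c] *= 0.5f;
            }
        }
    }
}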
If OnAudioFilterRead is implemented, a VU meter appears in the Inspector showing the levels of the outgoing samples. The processing time of the filter is also measured, and the number of milliseconds spent is displayed next to the VU meter. If the filter takes up too much time, the number turns red, indicating that the mixer has to wait for the audio data.
Note also that OnAudioFilterRead is called on a thread other than the main thread (the audio thread), so calling many Unity functions from this function is not allowed (a warning is shown at runtime if you try).
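One common way to work with this restriction is to let the audio thread write only plain fields and do all Unity API calls from Update on the main thread. A minimal sketch, with the LevelMeter class name and peak field chosen purely for illustration:

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class LevelMeter : MonoBehaviour
{
    // Written on the audio thread, read on the main thread.
    private volatile float peak;

    void OnAudioFilterRead(float[] data, int channels)
    {
        float max = 0.0f;
        for (int i = 0; i < data.Length; i++)
            max = Mathf.Max(max, Mathf.Abs(data[i]));
        peak = max; // no Unity API calls on the audio thread
    }

    void Update()
    {
        // Safe on the main thread; in a real project you would log less often.
        Debug.Log("Peak level: " + peak);
    }
}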
See also: Audio Filters.
using UnityEngine;

// The code example shows how to implement a metronome that procedurally
// generates the click sounds via the OnAudioFilterRead callback.
// While the game is paused or suspended, the DSP time (AudioSettings.dspTime)
// is not advanced and playing sounds are paused, so music scheduling routines
// do not have to do any rescheduling after the app is unpaused.
[RequireComponent(typeof(AudioSource))]
public class AudioTest : MonoBehaviour
{
    public double bpm = 140.0F;
    public float gain = 0.5F;
    public int signatureHi = 4;
    public int signatureLo = 4;

    private double nextTick = 0.0F;   // sample index of the next metronome tick
    private float amp = 0.0F;         // current click amplitude (decays each sample)
    private float phase = 0.0F;       // oscillator phase of the click sound
    private double sampleRate = 0.0F; // cached on the main thread in Start
    private int accent;               // beat counter within the bar
    private bool running = false;

    void Start()
    {
        accent = signatureHi;
        double startTick = AudioSettings.dspTime;
        sampleRate = AudioSettings.outputSampleRate;
        nextTick = startTick * sampleRate;
        running = true;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (!running)
            return;

        double samplesPerTick = sampleRate * 60.0F / bpm * 4.0F / signatureLo;
        double sample = AudioSettings.dspTime * sampleRate;
        int dataLen = data.Length / channels; // samples per channel
        int n = 0;
        while (n < dataLen)
        {
            float x = gain * amp * Mathf.Sin(phase);
            int i = 0;
            while (i < channels)
            {
                // Mix the click into every channel of this frame.
                data[n * channels + i] += x;
                i++;
            }
            while (sample + n >= nextTick)
            {
                // Schedule the next tick and retrigger the click envelope.
                nextTick += samplesPerTick;
                amp = 1.0F;
                if (++accent > signatureHi)
                {
                    accent = 1;
                    amp *= 2.0F; // accent the first beat of the bar
                }
                Debug.Log("Tick: " + accent + "/" + signatureHi);
            }
            phase += amp * 0.3F;
            amp *= 0.993F; // exponential decay of the click
            n++;
        }
    }
}