
AudioRecord audio-capture parameters in Android, and playback with AudioTrack

January 3, 2013

The API for capturing audio in Android is android.media.AudioRecord.

Playing audio in Android will likewise be analyzed starting from its API class, android.media.AudioTrack.

The parameters of the AudioRecord constructor are exactly the standard audio-capture parameters.

Their meanings are explained below.

1. public AudioRecord (int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)

Since: API Level 3

Class constructor.

Parameters

audioSource

the recording source. See MediaRecorder.AudioSource for recording source definitions.

Audio source: where the audio is captured from. Here we are of course capturing from the microphone, so this parameter is set to MIC (MediaRecorder.AudioSource.MIC).

sampleRateInHz

the sample rate expressed in Hertz. Examples of rates are (but not limited to) 44100, 22050 and 11025.

Sample rate: how many samples of the audio are taken per second; the higher the sample rate, the better the sound quality. The documented examples are 44100, 22050 and 11025, but the value is not limited to these. For low-quality audio you can use a lower rate such as 4000 or 8000.

channelConfig

describes the configuration of the audio channels. See CHANNEL_IN_MONO and CHANNEL_IN_STEREO

Channel configuration: Android supports two-channel stereo and mono. CHANNEL_IN_MONO is mono, CHANNEL_IN_STEREO is stereo.

audioFormat

the format in which the audio data is represented. See ENCODING_PCM_16BIT and ENCODING_PCM_8BIT

Encoding and sample size: the captured data is of course PCM encoded (Pulse Code Modulation, which converts a continuously varying analog signal into digital code through sampling, quantization and encoding). Android supports sample sizes of 16 bit or 8 bit. The larger the sample size, the more information per sample and the better the quality; 16 bit is the mainstream today, while 8 bit is enough for low-quality voice transmission.
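To put the numbers in perspective: 16-bit stereo PCM at 44100 Hz produces 44100 × 2 bytes × 2 channels = 176,400 bytes of raw data per second, whereas 8-bit mono at 8000 Hz is only 8,000 bytes per second.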

bufferSizeInBytes

the total size (in bytes) of the buffer where audio data is written to during the recording. New audio data can be read from this buffer in smaller chunks than this size. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioRecord instance. Using values smaller than getMinBufferSize() will result in an initialization failure.

Buffer size: the size, in bytes, of the buffer needed for the captured data. If you don't know the minimum required size, query it with getMinBufferSize().

The captured data is stored in a byte buffer; it can be read out through a stream, or saved to a file.
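Putting these parameters together, a minimal capture sketch might look like the following. This is only a fragment of my own: it assumes the usual android.media imports, the RECORD_AUDIO permission, an illustrative output path /sdcard/test.pcm and a hypothetical isRecording flag, and it omits exception handling.

    int sampleRate = 8000; // a low rate is enough for voice; use 44100 for music-quality capture
    int minBufSize = AudioRecord.getMinBufferSize(
                         sampleRate,
                         AudioFormat.CHANNEL_IN_MONO,
                         AudioFormat.ENCODING_PCM_16BIT); // smallest buffer the device will accept
    AudioRecord record = new AudioRecord(
                             MediaRecorder.AudioSource.MIC,  // capture from the microphone
                             sampleRate,
                             AudioFormat.CHANNEL_IN_MONO,    // single channel
                             AudioFormat.ENCODING_PCM_16BIT, // 16-bit samples
                             minBufSize);                    // must not be smaller than getMinBufferSize()
    FileOutputStream out = new FileOutputStream("/sdcard/test.pcm"); // example path only
    byte[] buffer = new byte[minBufSize];
    record.startRecording();
    while (isRecording) // isRecording is a flag you clear elsewhere to stop capture
    {
        int count = record.read(buffer, 0, buffer.length); // pull captured PCM data out of the internal buffer
        if (count > 0) out.write(buffer, 0, count);         // save it to a file (or push it into a stream)
    }
    record.stop();
    record.release();
    out.close();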

 

android.media.AudioTrack.AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode) throws IllegalArgumentException


2. public AudioTrack (int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)

Parameters

streamType

the type of the audio stream. See STREAM_VOICE_CALL, STREAM_SYSTEM, STREAM_RING, STREAM_MUSIC and STREAM_ALARM

sampleRateInHz

the sample rate expressed in Hertz. Examples of rates are (but not limited to) 44100, 22050 and 11025.

channelConfig

describes the configuration of the audio channels. See CHANNEL_OUT_MONO and CHANNEL_OUT_STEREO

audioFormat

the format in which the audio data is represented. See ENCODING_PCM_16BIT and ENCODING_PCM_8BIT

bufferSizeInBytes

the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.

mode

streaming or static buffer. See MODE_STATIC and MODE_STREAM

    Using this class you can very easily play audio data on Android. Below is sample code I wrote; it assumes the decoded PCM data can be read from an InputStream called in (opening and decoding the mp3 file are omitted):
    AudioTrack audio = new AudioTrack(
                           AudioManager.STREAM_MUSIC,          // the type of the audio stream
                           32000,                              // sample rate of the audio data: 32 kHz; for 44.1 kHz use 44100
                           AudioFormat.CHANNEL_OUT_STEREO,     // two-channel stereo output; CHANNEL_OUT_MONO is mono
                           AudioFormat.ENCODING_PCM_16BIT,     // 8-bit or 16-bit samples; 16-bit here, which is what almost all audio uses nowadays
                           AudioTrack.getMinBufferSize(        // buffer size in bytes; must not be smaller than getMinBufferSize()
                               32000,
                               AudioFormat.CHANNEL_OUT_STEREO,
                               AudioFormat.ENCODING_PCM_16BIT),
                           AudioTrack.MODE_STREAM              // streaming mode; the alternative, MODE_STATIC, plays a clip written entirely into the buffer before play()
                       );
    audio.play(); // start the audio device; from here on the written data is actually played
    // opening the mp3 file, reading and decoding it are omitted ...
    byte[] buffer = new byte[4096];
    int count;
    while ((count = in.read(buffer)) > 0) // "in" is the InputStream of decoded PCM data
    {
        // the key step: write the decoded data from the buffer into the AudioTrack object
        audio.write(buffer, 0, count);
    }
    // finally, don't forget to stop and release the resources
    audio.stop();
    audio.release();
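One thing to watch out for: the sample rate, channel configuration and sample format passed to the AudioTrack constructor must match the PCM data you write to it, otherwise playback comes out at the wrong speed or as noise.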
