Recording with AudioCapturer and playing the audio with AudioRenderer

I implemented recording based on the AudioCapturer sample code, but the resulting .wav file cannot be played on Windows.


HarmonyOS
2024-05-20 21:02:53
ychfang

First, implement audio recording with AudioCapturer by following the official documentation:

import audio from '@ohos.multimedia.audio'; 
import fs from '@ohos.file.fs'; 
  
const TAG = 'AudioCapturerDemo'; 
let context = getContext(this); 
  
let audioCapturer: audio.AudioCapturer | undefined = undefined; 
let audioStreamInfo: audio.AudioStreamInfo = { 
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, 
  channels: audio.AudioChannel.CHANNEL_1, 
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, 
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW 
} 
let audioCapturerInfo: audio.AudioCapturerInfo = { 
  source: audio.SourceType.SOURCE_TYPE_MIC, // Audio source type 
  capturerFlags: 0 // Audio capturer flags 
} 
let audioCapturerOptions: audio.AudioCapturerOptions = { 
  streamInfo: audioStreamInfo, 
  capturerInfo: audioCapturerInfo 
} 
  
// Initialize: create the instance and set up event listeners 
async function init() { 
  audio.createAudioCapturer(audioCapturerOptions, (err, capturer) => { // Create the AudioCapturer instance 
    if (err) { 
      console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`); 
      return; 
    } 
    console.info(`${TAG}: create AudioCapturer success`); 
    audioCapturer = capturer; 
    if (audioCapturer !== undefined) { 
      (audioCapturer as audio.AudioCapturer).on('markReach', 1000, (position: number) => { // Subscribe to the markReach event; the callback fires when 1000 frames have been captured 
        if (position === 1000) { 
          console.info('ON Triggered successfully'); 
        } 
      }); 
      (audioCapturer as audio.AudioCapturer).on('periodReach', 2000, (position: number) => { // Subscribe to the periodReach event; the callback fires every time 2000 frames have been captured 
        if (position === 2000) { 
          console.info('ON Triggered successfully'); 
        } 
      }); 
    } 
  }); 
} 
  
// Start one capture session 
async function start() { 
  if (audioCapturer !== undefined) { 
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED]; 
    if (stateGroup.indexOf((audioCapturer as audio.AudioCapturer).state.valueOf()) === -1) { // Capture can only start from the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state 
      console.error(`${TAG}: start failed`); 
      return; 
    } 
    await (audioCapturer as audio.AudioCapturer).start(); // Start capturing 
    const filePath = context.filesDir + '/test.wav'; // Destination path for the captured audio 
    let file: fs.File = fs.openSync(filePath, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE); // Create the file if it does not exist 
    let fd = file.fd; 
    let numBuffersToCapture = 150; // Write in a loop, 150 times 
    let count = 0; 
    class Options { 
      offset: number = 0; 
      length: number = 0 
    } 
    while (numBuffersToCapture) { 
      let bufferSize = await (audioCapturer as audio.AudioCapturer).getBufferSize(); 
      let buffer = await (audioCapturer as audio.AudioCapturer).read(bufferSize, true); 
      let options: Options = { 
        offset: count * bufferSize, 
        length: bufferSize 
      }; 
      if (buffer === undefined) { 
        console.error(`${TAG}: read buffer failed`); 
      } else { 
        let number = fs.writeSync(fd, buffer, options); 
        console.info(`${TAG}: write data: ${number}`); 
      } 
      numBuffersToCapture--; 
      count++; 
    } 
  } 
} 
  
// Stop capturing 
async function stop() { 
  if (audioCapturer !== undefined) { 
    // Capture can only be stopped while the state is STATE_RUNNING or STATE_PAUSED 
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() !== audio.AudioState.STATE_RUNNING && (audioCapturer as audio.AudioCapturer).state.valueOf() !== audio.AudioState.STATE_PAUSED) { 
      console.info('Capturer is not running or paused'); 
      return; 
    } 
    await (audioCapturer as audio.AudioCapturer).stop(); // Stop capturing 
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_STOPPED) { 
       console.info('Capturer stopped'); 
    } else { 
       console.error('Capturer stop failed'); 
    } 
  } 
} 
  
// Destroy the instance and release resources 
async function release() { 
  if (audioCapturer !== undefined) { 
    // release() is only allowed when the state is neither STATE_RELEASED nor STATE_NEW 
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_RELEASED || (audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_NEW) { 
      console.info('Capturer already released'); 
      return; 
    } 
    await (audioCapturer as audio.AudioCapturer).release(); // Release resources 
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_RELEASED) { 
      console.info('Capturer released'); 
    } else { 
      console.error('Capturer release failed'); 
    } 
  } 
}
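For reference, the capture settings above (44100 Hz, mono, 16-bit) produce 88200 bytes of PCM per second, so the duration recorded by the fixed 150-buffer loop depends on the device's buffer size. A small sketch of the arithmetic (the function name is my own, not part of the API):

```typescript
// Seconds of audio represented by `totalBytes` of raw PCM
// at the given sample rate, channel count, and bit depth.
function pcmDurationSeconds(totalBytes: number, sampleRate: number,
                            channels: number, bitsPerSample: number): number {
  const byteRate = sampleRate * channels * (bitsPerSample / 8);
  return totalBytes / byteRate;
}
```

For example, if getBufferSize() returns 3528 bytes, 150 buffers hold 529200 bytes, i.e. 6 seconds of mono 16-bit audio at 44100 Hz.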

The documentation explains that AudioCapturer is an audio capturer that records raw PCM (Pulse Code Modulation) data, which most players cannot open directly. After exporting test.wav to a PC, the audio can be played in Audacity via File -> Import -> Raw Data, specifying the encoding, sample rate, and channel count on import.
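Because the captured data is headerless PCM, another option (besides importing it as raw data in Audacity) is to prepend a standard 44-byte WAV header so that desktop players can open the file directly. A minimal sketch, assuming the capture settings above (44100 Hz, mono, 16-bit little-endian); the helper name is illustrative:

```typescript
// Build a 44-byte RIFF/WAVE header for uncompressed PCM data.
// dataLength is the size of the raw PCM payload in bytes.
function buildWavHeader(dataLength: number, sampleRate: number,
                        channels: number, bitsPerSample: number): Uint8Array {
  const header = new Uint8Array(44);
  const view = new DataView(header.buffer);
  const writeTag = (offset: number, tag: string): void => {
    for (let i = 0; i < tag.length; i++) header[offset + i] = tag.charCodeAt(i);
  };
  const byteRate = sampleRate * channels * (bitsPerSample / 8);
  const blockAlign = channels * (bitsPerSample / 8);

  writeTag(0, 'RIFF');
  view.setUint32(4, 36 + dataLength, true); // RIFF chunk size: rest of header + data
  writeTag(8, 'WAVE');
  writeTag(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt sub-chunk size (16 for PCM)
  view.setUint16(20, 1, true);              // audio format: 1 = PCM
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, blockAlign, true);
  view.setUint16(34, bitsPerSample, true);
  writeTag(36, 'data');
  view.setUint32(40, dataLength, true);     // PCM payload size
  return header;
}
```

Writing this header first and then appending the PCM buffers (with write offsets shifted by 44 bytes) produces a .wav file that standard players recognize.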

Playing PCM audio with AudioRenderer

AudioRenderer is the ArkTS/JS API for audio output. It supports only the PCM format and requires the application to keep writing audio data for it to work. The application can preprocess the data before writing it, for example according to the audio file's sample rate and bit depth, which requires basic knowledge of audio processing; this makes it suitable for more professional and diverse media-playback applications.
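The preprocessing mentioned above can be as simple as scaling the samples. A hedged sketch (my own example, not from the official docs) that applies a linear gain to a buffer of S16LE PCM samples before it is handed to AudioRenderer.write(); it assumes an even buffer length and a little-endian host, where Int16Array matches the S16LE layout:

```typescript
// Apply a linear gain to 16-bit little-endian PCM samples in place.
// Results are clamped to the signed 16-bit range to avoid wrap-around.
function applyGain(buf: ArrayBuffer, gain: number): ArrayBuffer {
  const samples = new Int16Array(buf);
  for (let i = 0; i < samples.length; i++) {
    const scaled = Math.round(samples[i] * gain);
    samples[i] = Math.max(-32768, Math.min(32767, scaled));
  }
  return buf;
}
```

Calling applyGain(buf, 0.5) before each write, for instance, halves the playback volume of that chunk.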

Sample code

import audio from '@ohos.multimedia.audio'; 
import fs from '@ohos.file.fs'; 
  
const TAG = 'AudioRendererDemo'; 
  
let context = getContext(this); 
let renderModel: audio.AudioRenderer | undefined = undefined; 
let audioStreamInfo: audio.AudioStreamInfo = { 
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // Sampling rate; must match the capture settings above 
  channels: audio.AudioChannel.CHANNEL_1, // Channel count; must match the capture settings above 
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sample format 
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding type 
} 
let audioRendererInfo: audio.AudioRendererInfo = { 
  content: audio.ContentType.CONTENT_TYPE_MUSIC, // Media content type 
  usage: audio.StreamUsage.STREAM_USAGE_MEDIA, // Audio stream usage 
  rendererFlags: 0 // Audio renderer flags 
} 
let audioRendererOptions: audio.AudioRendererOptions = { 
  streamInfo: audioStreamInfo, 
  rendererInfo: audioRendererInfo 
} 
  
// Initialize: create the instance and set up event listeners 
async function init() { 
  audio.createAudioRenderer(audioRendererOptions, (err, renderer) => { // Create the AudioRenderer instance 
    if (!err) { 
      console.info(`${TAG}: creating AudioRenderer success`); 
      renderModel = renderer; 
      if (renderModel !== undefined) { 
        (renderModel as audio.AudioRenderer).on('stateChange', (state: audio.AudioState) => { // Set up a listener that fires on state transitions 
          if (state === audio.AudioState.STATE_RUNNING) { 
            console.info('audio renderer state is: STATE_RUNNING'); 
          } 
        }); 
        (renderModel as audio.AudioRenderer).on('markReach', 1000, (position: number) => { // Subscribe to the markReach event; the callback fires when 1000 frames have been rendered 
          if (position == 1000) { 
            console.info('ON Triggered successfully'); 
          } 
        }); 
      } 
    } else { 
      console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`); 
    } 
  }); 
} 
  
// Start one render session 
async function start() { 
  if (renderModel !== undefined) { 
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED]; 
    if (stateGroup.indexOf((renderModel as audio.AudioRenderer).state.valueOf()) === -1) { // Rendering can only start from the prepared, paused, or stopped state 
      console.error(`${TAG}: start failed`); 
      return; 
    } 
    await (renderModel as audio.AudioRenderer).start(); // Start rendering 
     
    const bufferSize = await (renderModel as audio.AudioRenderer).getBufferSize(); 
     
    let path = context.filesDir; 
    const filePath = path + '/test.wav'; // Open the file via its sandbox path; the actual path is /data/storage/el2/base/haps/entry/files/test.wav 
     
    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY); 
    let stat = await fs.stat(filePath); 
    let buf = new ArrayBuffer(bufferSize); 
    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1); 
    class Options { 
      offset: number = 0; 
      length: number = 0 
    } 
    for (let i = 0; i < len; i++) { 
      let options: Options = { 
        offset: i * bufferSize, 
        length: bufferSize 
      }; 
      let readsize = await fs.read(file.fd, buf, options); 
       
      // buf holds the audio data to be written to the buffer. It can be preprocessed before calling AudioRenderer.write() to implement custom playback behavior; AudioRenderer then reads the written data and renders it 
       
      let writeSize: number = await (renderModel as audio.AudioRenderer).write(buf); 
      if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_RELEASED) { // If the renderer has been released, close the file 
        fs.close(file); 
      } 
      if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_RUNNING) { 
        if (i === len - 1) { // If the whole file has been read, stop rendering 
          fs.close(file); 
          await (renderModel as audio.AudioRenderer).stop(); 
        } 
      } 
    } 
  } 
} 
  
// Pause rendering 
async function pause() { 
  if (renderModel !== undefined) { 
    // The renderer can only be paused while it is running 
    if ((renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_RUNNING) { 
      console.info('Renderer is not running'); 
      return; 
    } 
    await (renderModel as audio.AudioRenderer).pause(); // Pause rendering 
    if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_PAUSED) { 
      console.info('Renderer is paused.'); 
    } else { 
      console.error('Pausing renderer failed.'); 
    } 
  } 
} 
  
// Stop rendering 
async function stop() { 
  if (renderModel !== undefined) { 
    // The renderer can only be stopped while it is running or paused 
    if ((renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_RUNNING && (renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_PAUSED) { 
      console.info('Renderer is not running or paused.'); 
      return; 
    } 
    await (renderModel as audio.AudioRenderer).stop(); // Stop rendering 
    if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_STOPPED) { 
      console.info('Renderer stopped.'); 
    } else { 
      console.error('Stopping renderer failed.'); 
    } 
  } 
} 
  
// Destroy the instance and release resources 
async function release() { 
  if (renderModel !== undefined) { 
    // release() is only allowed when the state is not already released 
    if (renderModel.state.valueOf() === audio.AudioState.STATE_RELEASED) { 
      console.info('Renderer already released'); 
      return; 
    } 
    await renderModel.release(); // Release resources 
    if (renderModel.state.valueOf() === audio.AudioState.STATE_RELEASED) { 
      console.info('Renderer released'); 
    } else { 
      console.error('Renderer release failed.'); 
    } 
  } 
}
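The read loop above computes `len` with a branch that is equivalent to Math.ceil(size / bufferSize). Note that it always passes `length: bufferSize`, so on the final iteration the tail of `buf` may still contain bytes from the previous read. A sketch of helpers (names are my own) that make the chunk count explicit and trim the last chunk:

```typescript
// Number of bufferSize-sized reads needed to consume fileSize bytes;
// equivalent to the branching computation of `len` above.
function chunkCount(fileSize: number, bufferSize: number): number {
  return Math.ceil(fileSize / bufferSize);
}

// Describe the i-th read: where it starts and how many bytes it covers.
// The final chunk may be shorter than bufferSize.
function chunkOptions(fileSize: number, bufferSize: number, i: number): { offset: number; length: number } {
  const offset = i * bufferSize;
  return { offset: offset, length: Math.min(bufferSize, fileSize - offset) };
}
```

Passing the trimmed `length` to fs.read(), and writing only that many bytes to the renderer, avoids rendering stale data at the end of the file.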

Summary

AudioCapturer and AudioRenderer record and play raw PCM audio. They suit developers with audio-development experience who need more flexible recording and playback.

The attachment is a demo implementing start recording, stop recording, play the recording, pause playback, and stop playback.


Answered 2024-05-21 16:48:14