Using Camera Kit to capture camera stream data, pass it to the native side, and compress/encode it

On the ArkTS side, start the camera and use it to capture the video stream; the camera video stream data is then passed to the native side, where it is encoded in buffer mode into an MP4 file saved to the application sandbox path.

e_lion

The implementation can be divided into the following steps:

Step 1: Request permissions and start the camera.

Step 2: Start recording, obtain the video stream data, and convert one frame to JPG and save it to the sandbox path.

Step 3: Pass the video stream data to the native side for compression encoding, and save the generated file.

Step 1: Request permissions and start the camera. The camera, microphone, media location, write-media, and read-media permissions are required; declare them in the module configuration:

"requestPermissions": [ 
  { 
    "name": "ohos.permission.CAMERA", 
    "reason": "$string:app_name", 
    "usedScene": { 
      "abilities": [ 
        "FormAbility" 
      ], 
      "when": "always" 
    } 
  }, 
  { 
    "name": "ohos.permission.MICROPHONE", 
    "reason": "$string:app_name", 
    "usedScene": { 
      "abilities": [ 
        "FormAbility" 
      ], 
      "when": "always" 
    } 
  }, 
  { 
    "name": "ohos.permission.MEDIA_LOCATION", 
    "reason": "$string:app_name", 
    "usedScene": { 
      "abilities": [ 
        "FormAbility" 
      ], 
      "when": "always" 
    } 
  }, 
  { 
    "name": "ohos.permission.WRITE_MEDIA", 
    "reason": "$string:app_name", 
    "usedScene": { 
      "abilities": [ 
        "FormAbility" 
      ], 
      "when": "always" 
    } 
  }, 
  { 
    "name": "ohos.permission.READ_MEDIA", 
    "reason": "$string:app_name", 
    "usedScene": { 
      "abilities": [ 
        "FormAbility" 
      ], 
      "when": "always" 
    } 
  } 
]
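
Declaring the permissions is not enough on its own: user-granted permissions such as CAMERA and MICROPHONE must also be requested at runtime. The grantPermission() helper referenced later in onPageShow() is not shown in the original answer; a minimal sketch, assuming it requests all five permissions and resolves to true only when every one is granted:

import abilityAccessCtrl, { Permissions } from '@ohos.abilityAccessCtrl';
import common from '@ohos.app.ability.common';

const PERMISSIONS: Array<Permissions> = [
  'ohos.permission.CAMERA',
  'ohos.permission.MICROPHONE',
  'ohos.permission.MEDIA_LOCATION',
  'ohos.permission.WRITE_MEDIA',
  'ohos.permission.READ_MEDIA'
];

export async function grantPermission(): Promise<boolean> {
  let atManager = abilityAccessCtrl.createAtManager();
  let context = getContext() as common.UIAbilityContext;
  // Pops the system permission dialog; authResults holds one entry per permission
  let result = await atManager.requestPermissionsFromUser(context, PERMISSIONS);
  // 0 means the permission was granted
  return result.authResults.every((status: number) => status === 0);
}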

2: Start the camera and implement the preview. Import the camera interfaces, create a dual-channel preview stream, and use an XComponent and an ImageReceiver to create the Surfaces used to display and to capture the preview images.

async function createDualChannelPreview(cameraManager: camera.CameraManager, XComponentSurfaceId: string, receiver: image.ImageReceiver): Promise<void> { 
  // Get the supported camera devices 
  let camerasDevices: Array<camera.CameraDevice> = cameraManager.getSupportedCameras(); 
  // Get the supported scene modes 
  let sceneModes: Array<camera.SceneMode> = cameraManager.getSupportedSceneModes(camerasDevices[0]); 
  let isSupportPhotoMode: boolean = sceneModes.indexOf(camera.SceneMode.NORMAL_PHOTO) >= 0; 
  if (!isSupportPhotoMode) { 
    console.error('photo mode not support'); 
    return; 
  } 
  // Get the profiles supported by this camera device 
  let profiles: camera.CameraOutputCapability = cameraManager.getSupportedOutputCapability(camerasDevices[0], camera.SceneMode.NORMAL_PHOTO); 
  let previewProfiles: Array<camera.Profile> = profiles.previewProfiles; 
  // Preview stream 1 
  let previewProfilesObj: camera.Profile = previewProfiles[0]; 
  // Preview stream 2 
  let previewProfilesObj2: camera.Profile = previewProfiles[0]; 
  // Create the output object for preview stream 1 (displayed via the XComponent surface) 
  let previewOutput: camera.PreviewOutput = cameraManager.createPreviewOutput(previewProfilesObj, XComponentSurfaceId); 
  // Create the output object for preview stream 2 (consumed by the ImageReceiver) 
  let imageReceiverSurfaceId: string = await receiver.getReceivingSurfaceId(); 
  let previewOutput2: camera.PreviewOutput = cameraManager.createPreviewOutput(previewProfilesObj2, imageReceiverSurfaceId); 
  // Create the cameraInput object 
  let cameraInput: camera.CameraInput = cameraManager.createCameraInput(camerasDevices[0]); 
  // Open the camera 
  await cameraInput.open(); 
  // Session flow 
  let photoSession: camera.CaptureSession = cameraManager.createCaptureSession(); 
  // Begin session configuration 
  photoSession.beginConfig(); 
  // Add the CameraInput to the session 
  photoSession.addInput(cameraInput); 
  // Add preview stream 1 to the session 
  photoSession.addOutput(previewOutput); 
  // Add preview stream 2 to the session 
  photoSession.addOutput(previewOutput2); 
  // Commit the configuration 
  await photoSession.commitConfig(); 
  // Start the session 
  await photoSession.start(); 
}
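
The XComponentSurfaceId argument comes from an XComponent declared in the page's build() method. A minimal sketch (the controller and field names are assumptions) of obtaining the surface ID once the component loads:

XComponent({
  id: 'previewXComponent',
  type: 'surface',
  controller: this.xComponentController // an XComponentController held by the page
})
  .onLoad(() => {
    // The surface ID becomes available once the XComponent has loaded
    this.surfaceId = this.xComponentController.getXComponentSurfaceId();
  })
  .width('100%')
  .height('100%')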

Step 2: Start recording and obtain the camera video stream data.

1: Generating the camera video stream data: the video stream is produced by starting local recording in onPageShow. When the page is shown, startRecord() is called to begin recording; when the page is hidden, stopRecorder() is called to stop recording and the camera resources are released.

async onPageShow() { 
  this.startRecord(); 
  await grantPermission().then(res => { 
    console.info(TAG, `permission request succeeded ${JSON.stringify(res)}`); 
    if (res) { 
      createDualChannelPreview(this.cameraManager, this.surfaceId, this.receiver); 
    } 
  }) 
} 
private startRecord() { 
  videoCompressor.startRecorder(getContext(), cameraWidth, cameraHeight) 
    .then((data) => { 
      if (data.code == CompressorResponseCode.SUCCESS) { 
        Logger.debug("videoCompressor-- record success"); 
      } else { 
        Logger.debug("videoCompressor code:" + data.code + "--error message:" + data.message); 
      } 
    }).catch((err: Error) => { 
      Logger.debug("videoCompressor error message" + err.message); 
    }); 
} 
onPageHide() { 
  videoCompressor.stopRecorder(); // stop recording 
  Logger.debug("onPageHide-- stopRecorder"); 
  releaseCamera(); 
}
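
releaseCamera() is not shown in the original answer; a minimal sketch, assuming the session, input, and outputs created in createDualChannelPreview() are kept as module-level variables:

async function releaseCamera(): Promise<void> {
  // Stop the session before releasing its inputs and outputs
  await photoSession?.stop();
  await previewOutput?.release();
  await previewOutput2?.release();
  await cameraInput?.close();
  await photoSession?.release();
}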

2: Obtaining the camera video stream data: the frames are read through the ImageReceiver, and each frame's buffer is handed to videoCompressor.pushOneFrameData(buffer).

function createImageReceiver(): image.ImageReceiver { 
  // Arguments: width, height, image format, and receiver queue capacity 
  let receiver: image.ImageReceiver = image.createImageReceiver(cameraWidth, cameraHeight, 4, 8); 
  receiver.on('imageArrival', () => { 
    receiver.readNextImage((err: BusinessError, nextImage: image.Image) => { 
      if (err || nextImage === undefined) { 
        Logger.error("receiveImage -error:" + err + " nextImage:" + nextImage); 
        return; 
      } 
      nextImage.getComponent(image.ComponentType.JPEG, (err: BusinessError, imgComponent: image.Component) => { 
        if (err || imgComponent === undefined) { 
          Logger.error("receiveImage--getComponent -error:" + err + " imgComponent:" + imgComponent); 
          return; 
        } 
        if (imgComponent.byteBuffer) { 
          let buffer = imgComponent.byteBuffer; 
          Logger.debug("receiveImage--byteBuffer -success:" + " buffer:" + buffer); 
          recordedFrameCount++; 
          // Hand the frame buffer over to the native encoder 
          videoCompressor.pushOneFrameData(buffer); 
          Logger.debug("receiveImage-- record >>pushOneFrameData with no." + recordedFrameCount + " frame"); 
          nextImage.release(); 
        } else { 
          Logger.debug("receiveImage--byteBuffer -error:" + " imgComponent.byteBuffer:" + imgComponent.byteBuffer); 
          return; 
        } 
      }); 
    }); 
  }); 
  return receiver; 
}
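
For context, a brief usage sketch (the field names this.cameraManager, this.surfaceId, and this.receiver are assumptions) showing how the receiver is wired into the dual-channel preview:

// Create the receiver first so its surface can back preview stream 2
this.receiver = createImageReceiver();
await createDualChannelPreview(this.cameraManager, this.surfaceId, this.receiver);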

3: On this basis, grab one frame, convert it to JPG, and save it to the sandbox path.

Take the frame's image data and use the imagePackerApi interface to pack it into JPG format.

nextImage.getComponent(image.ComponentType.JPEG, async (err, imgComponent: image.Component) => { 
  if (err || imgComponent === undefined) { 
    return; 
  } 
  if (imgComponent.byteBuffer) { 
    let sourceOptions: image.SourceOptions = { 
      sourceDensity: 120, 
      sourcePixelFormat: 8, // NV21 
      sourceSize: { 
        height: this.previewProfilesObj2!.size.height, 
        width: this.previewProfilesObj2!.size.width 
      }, 
    } 
    let imageResource = image.createImageSource(imgComponent.byteBuffer, sourceOptions); 
    let imagePackerApi = image.createImagePacker(); 
    let packOpts: image.PackingOption = { format: "image/jpeg", quality: 98 }; 
    const filePath: string = getContext().cacheDir + "/image.jpg"; 
    let file = fs.openSync(filePath, fs.OpenMode.CREATE | fs.OpenMode.READ_WRITE); 
    imagePackerApi.packToFile(imageResource, file.fd, packOpts).then(() => { 
      console.info('pack success: ' + filePath); 
    }).catch((error: BusinessError) => { 
      console.error('Failed to pack the image. And the error is: ' + error); 
    }) 
    imageResource.createPixelMap({}).then((res) => { 
      this.imgUrl = res; 
    }); 
  } else { 
    return; 
  } 
  nextImage.release(); 
})

Step 3: Pass the video stream data to the native side for compression encoding.

1: Implementing the native/JS interaction: create a VideoCompressor instance and bind it to the JS object.

napi_value VideoCompressor::JsConstructor(napi_env env, napi_callback_info info) { 
  napi_value targetObj = nullptr; 
  void *data = nullptr; 
  size_t argsNum = 0; 
  napi_value args[2] = {nullptr}; 
  napi_get_cb_info(env, info, &argsNum, args, &targetObj, &data); 
  auto classBind = std::make_unique<VideoCompressor>(); 
  // Wrap the native instance in the JS `this` object; the finalizer frees it 
  napi_wrap( 
    env, targetObj, classBind.get(), 
    [](napi_env env, void *data, void *hint) { 
      VideoCompressor *bind = (VideoCompressor *)data; 
      delete bind; 
      bind = nullptr; 
    }, 
    nullptr, nullptr); 
  // Ownership has been transferred to the JS object 
  classBind.release(); 
  return targetObj; 
}
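
The original answer does not show how the class is exposed to ArkTS. A minimal registration sketch for the module's Init function (the method list matches the JS-side declaration shown next; the native method pointers are assumptions):

static napi_value Init(napi_env env, napi_value exports) {
    // Bind the three native methods declared on the JS-side VideoCompressor class
    napi_property_descriptor desc[] = {
        {"startRecordNative", nullptr, VideoCompressor::startRecordNative, nullptr, nullptr, nullptr, napi_default, nullptr},
        {"pushOneFrameDataNative", nullptr, VideoCompressor::pushOneFrameDataNative, nullptr, nullptr, nullptr, napi_default, nullptr},
        {"stopRecordNative", nullptr, VideoCompressor::stopRecordNative, nullptr, nullptr, nullptr, napi_default, nullptr},
    };
    napi_value cons = nullptr;
    napi_define_class(env, "VideoCompressor", NAPI_AUTO_LENGTH, VideoCompressor::JsConstructor,
                      nullptr, sizeof(desc) / sizeof(desc[0]), desc, &cons);
    napi_set_named_property(env, exports, "VideoCompressor", cons);
    return exports;
}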

VideoCompressor is a custom wrapper object on the JS side, containing the method that starts local recording.

declare class VideoCompressor { 
  startRecordNative: ( 
    outputFileFd: number, 
    width:number, 
    height:number, 
    outPutFilePath: string, 
  ) => Promise<CompressorResponse>; 
 
  pushOneFrameDataNative: ( 
    byteBuffer: ArrayBuffer 
  )=> Promise<CompressorResponse>; 
 
  stopRecordNative: ( 
  )=> Promise<CompressorResponse>; 
}

On the native side, a custom video-recording manager class wraps the encoding flow, including the method that starts local recording.

class VideoRecordManager { 
  bool videoRecorderIsReady = false; 
  int32_t CreateVideoEncode(); 
  int32_t CreateMutex(); 
  void VideoCompressorWaitEos(); 
  void NativeRecordStart();  // start local recording 
  void SetCallBackResult(int32_t code, std::string str); 
  void Release(); 

public: 
  std::unique_ptr<VideoRecordBean> videoRecordBean_; 
  static VideoRecordManager &getInstance() { 
    static VideoRecordManager instance; 
    return instance; 
  } 
  void startRecord();                 // start recording 
  void pushOneFrameData(void *data);  // push one frame of video data 
  void stopRecord();                  // stop recording 
};
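
The manager's method bodies are not all shown; a hedged sketch of how startRecord() might tie the declared members together (the flow is an assumption):

// A sketch only: the original answer does not show startRecord()'s body.
void VideoRecordManager::startRecord() {
    // Create and configure the video encoder (and, presumably, the muxer)
    if (CreateVideoEncode() != 0) {
        SetCallBackResult(-1, "create video encoder failed");
        return;
    }
    NativeRecordStart();         // start the encoder loop
    videoRecorderIsReady = true; // pushOneFrameData() may now accept frames
}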

2: Receive the data from the JS side.

napi_value VideoCompressor::pushOneFrameDataNative(napi_env env, napi_callback_info info) { 
  // Get the argument passed from JS 
  napi_value args[1] = {nullptr}; 
  size_t argc = 1; 
  napi_get_cb_info(env, info, &argc, args, nullptr, nullptr); 
  void *arrayBufferPtr = nullptr; 
  size_t arrayBufferSize = 0; 
  // Extract the input frame data from the ArrayBuffer 
  napi_get_arraybuffer_info(env, args[0], &arrayBufferPtr, &arrayBufferSize); 
  auto &videoRecorder = VideoRecordManager::getInstance(); 
  videoRecorder.pushOneFrameData(arrayBufferPtr); 
  return nullptr; 
}

After a frame is successfully pushed from the JS side, the pushOneFrameData function forwards it to the encoder for encoding.

void VideoRecordManager::pushOneFrameData(void *data) { 
  // Check whether the encoder is ready to accept data 
  if (!videoRecorderIsReady) { 
    OH_LOG_ERROR(LOG_APP, "videoRecorderIsNotReady"); 
    return; 
  } 
  videoRecordBean_->vEncSample->pushFrameData(data); 
}

3: Encoding uses buffer mode. The encoding flow is: create the encoder instance --> set the encoder callbacks --> start the encoder and begin encoding --> write the input stream --> push the data into the encoder's input queue for encoding --> notify the encoder of end-of-stream when input is finished --> output the encoded frames --> destroy the encoder instance and release resources.

3.1: Create the encoder instance and define its callback functions.

static void VencError(OH_AVCodec *codec, int32_t errorCode, void *userData) { 
  OH_LOG_ERROR(LOG_APP, "VideoEnc - VencError:%d", errorCode); 
} 
static void VencFormatChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData) { 
  OH_LOG_ERROR(LOG_APP, "VideoEnc - VencFormatChanged"); 
}
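
The encoder-creation step named in the heading is omitted from the answer; a minimal sketch, assuming an H.264 (AVC) encoder fed NV21 frames at 24 fps (the method name and configuration values are assumptions):

int32_t VideoEnc::CreateVideoEncoder() {
    // Create an H.264 (AVC) encoder instance
    venc_ = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    if (venc_ == nullptr) {
        OH_LOG_ERROR(LOG_APP, "Failed to create video encoder");
        return AV_ERR_UNKNOWN;
    }
    // Describe the input: resolution, NV21 pixel format, frame rate
    OH_AVFormat *format = OH_AVFormat_Create();
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, AV_PIXEL_FORMAT_NV21);
    OH_AVFormat_SetDoubleValue(format, OH_MD_KEY_FRAME_RATE, 24.0);
    int32_t ret = OH_VideoEncoder_Configure(venc_, format);
    OH_AVFormat_Destroy(format);
    if (ret != AV_ERR_OK) {
        OH_LOG_ERROR(LOG_APP, "Failed to configure video encoder");
        return ret;
    }
    // Prepare is called after the callbacks are set and before Start
    return OH_VideoEncoder_Prepare(venc_);
}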

3.2: Register the callbacks with SetVideoEncoderCallback; by handling the information these callbacks report, you can keep the encoder running properly.

int32_t VideoEnc::SetVideoEncoderCallback() { 
  signal_ = make_unique<VEncSignal>(); 
  if (signal_ == nullptr) { 
    OH_LOG_ERROR(LOG_APP, "Failed to new VencSignal"); 
    return AV_ERR_UNKNOWN; 
  } 
  signal_->arrayBufferSize = width * height * 3 / 2; 
  signal_->stopInput.store(false); 
  cb_.onError = VencError; 
  cb_.onStreamChanged = VencFormatChanged; 
  cb_.onNeedOutputData = VencOutputDataReady; 
  cb_.onNeedInputData = VencNeedInputData; 
  return OH_VideoEncoder_SetCallback(venc_, cb_, static_cast<void *>(signal_.get())); 
}

3.3: The encoder is ready; start encoding.

int32_t VideoEnc::StartVideoEncoder() { 
  outputIsRunning_.store(true); 
  // 启动编码器,开始编码 
  int ret = OH_VideoEncoder_Start(venc_); 
  if (ret != AV_ERR_OK) { 
    OH_LOG_ERROR(LOG_APP, "Failed to start video codec"); 
    outputIsRunning_.store(false); 
    signal_->outCond_.notify_all(); 
    Release(); 
    return ret; 
  } 
  outputLoop_ = make_unique<thread>(&VideoEnc::OutputFunc, this); 
  if (outputLoop_ == nullptr) { 
    OH_LOG_ERROR(LOG_APP, "Failed to cteate output video outputLoop"); 
    outputIsRunning_.store(false); 
    Release(); 
    return AV_ERR_UNKNOWN; 
  } 
  return AV_ERR_OK; 
}

3.4: Write the input stream.

void VideoEnc::pushFrameData(void *arrayBufferPtr) { 
  unique_lock<mutex> lock(signal_->inputMutex_); 
  if (signal_->stopInput) 
    return; 
  size_t dataSize = signal_->arrayBufferSize; // size of one NV21 frame 
  void *copyBuffer = std::malloc(dataSize); 
  if (copyBuffer == nullptr) { 
    OH_LOG_ERROR(LOG_APP, "pushFrameData: failed with malloc error"); 
    return; 
  } 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -pushFrameData --start"); 
  std::memcpy(copyBuffer, arrayBufferPtr, dataSize); 
  // Add copyBuffer to the input queue 
  signal_->inputBufferQueue_.push(copyBuffer); 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -pushFrameData:%{public}zu", signal_->arrayBufferSize); 
  signal_->inputCond_.notify_one(); 
}

3.5: Push the data into the encoder's input queue for encoding.

static void VencNeedInputData(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data, void *userData) { 
  VEncSignal *signal = static_cast<VEncSignal *>(userData); 
  unique_lock<mutex> lock(signal->inputMutex_); 
  // Wait until a frame is available or input has been stopped 
  signal->inputCond_.wait(lock, [&signal] { return !signal->inputBufferQueue_.empty() || signal->stopInput; }); 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -VencNeedInputData inputBufferQueue_ has data :%{public}zu", 
    signal->arrayBufferSize); 
  // Fill in the buffer attributes; pts is stamped at a fixed 24 fps in microseconds 
  OH_AVCodecBufferAttr attr; 
  attr.size = 0; 
  attr.offset = 0; 
  attr.pts = 1000000 / 24 * num; // num is a global frame counter 
  num++; 
  // Alternatively, pts could be stamped with the wall-clock time in microseconds: 
  // auto now = std::chrono::system_clock::now(); 
  // auto timestamp = std::chrono::duration_cast<std::chrono::nanoseconds>(now.time_since_epoch()).count() / 1000; 
  // attr.pts = timestamp; 
  if (signal->stopInput) { 
    attr.flags = AVCODEC_BUFFER_FLAGS_EOS; 
    // Write the end-of-stream marker into the encoder 
    int32_t ret = OH_VideoEncoder_PushInputData(codec, index, attr); 
    if (ret != AV_ERR_OK) { 
      OH_LOG_ERROR(LOG_APP, "Failed to OH_VideoEncoder_PushInputData"); 
    } 
    OH_LOG_ERROR(LOG_APP, "StopInput --VencNeedInputData >>PushInputData-EOS"); 
    return; 
  } 
  if (signal->inputBufferQueue_.empty()) { 
    return; 
  } 
  attr.size = signal->arrayBufferSize; 
  attr.flags = AVCODEC_BUFFER_FLAGS_NONE; // a regular frame, not codec config data 
  void *arrayBuffer = signal->inputBufferQueue_.front(); 
  // Copy the queued frame into the encoder's input memory for this index 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -VencNeedInputData --before memcpy"); 
  uint8_t *dataAddr = OH_AVMemory_GetAddr(data); 
  int32_t dataSize = OH_AVMemory_GetSize(data); 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -VencNeedInputData data size :%{public}d", dataSize); 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -arrayBuffer data size :%{public}zu", signal->arrayBufferSize); 
  std::memcpy(dataAddr, arrayBuffer, signal->arrayBufferSize); 
  OH_LOG_ERROR(LOG_APP, "VideoEnc -VencNeedInputData --after memcpy"); 
  // Push the filled buffer into the encoder's input queue; index identifies the slot 
  int32_t ret = OH_VideoEncoder_PushInputData(codec, index, attr); 
  OH_LOG_ERROR(LOG_APP, "VencNeedInputData OH_VideoEncoder_PushInputData"); 
  if (ret != AV_ERR_OK) { 
    OH_LOG_ERROR(LOG_APP, "Failed to OH_VideoEncoder_PushInputData"); 
  } 
  signal->inputBufferQueue_.pop(); 
  std::free(arrayBuffer); // free the copied frame 
}

3.6: When encoding input is finished, notify the encoder that the stream has ended.

if (signal->stopInput) { 
  attr.flags = AVCODEC_BUFFER_FLAGS_EOS; 
  int32_t ret = OH_VideoEncoder_PushInputData(codec, index, attr); 
  if (ret != AV_ERR_OK) { 
    OH_LOG_ERROR(LOG_APP, "Failed to OH_VideoEncoder_PushInputData"); 
  } 
  OH_LOG_ERROR(LOG_APP, "StopInput --VencNeedInputData >>PushInputData-EOS"); 
  return; 
}
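
The stopInput flag is presumably raised by stopRecord(); a minimal sketch of how stopping could wake the input callback so it can deliver the EOS buffer (the method name is an assumption, the members are those of VEncSignal above):

void VideoEnc::StopInput() {
    // Raise the stop flag and wake VencNeedInputData so it pushes the EOS buffer
    unique_lock<mutex> lock(signal_->inputMutex_);
    signal_->stopInput.store(true);
    signal_->inputCond_.notify_all();
}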

3.7: Output the encoded frames to obtain the encoded data.

void VideoEnc::OutputFunc() { 
  uint32_t errCount = 0; 
  int64_t enCount = 0; 
  while (true) { 
    if (!outputIsRunning_.load()) { 
      break; 
    } 
    unique_lock<mutex> lock(signal_->outMutex_); 
    // Wait until the output callback has queued an encoded frame 
    signal_->outCond_.wait(lock, 
                           [this]() { return (signal_->outIdxQueue_.size() > 0 || !outputIsRunning_.load()); }); 
    if (!outputIsRunning_.load()) { 
      break; 
    } 
    uint32_t index = signal_->outIdxQueue_.front(); 
    OH_AVCodecBufferAttr attr = signal_->outputAttrQueue.front(); 
    if (attr.flags == AVCODEC_BUFFER_FLAGS_EOS) { 
      outputIsRunning_.store(false); 
      signal_->outCond_.notify_all(); 
      OH_LOG_ERROR(LOG_APP, "StopInput --OutputFunc ENCODE EOS %{public}lld", enCount); 
      break; 
    } 
    OH_AVMemory *buffer = signal_->outBufferQueue_.front(); 
    // Write the encoded sample into the MP4 file through the muxer 
    if (OH_AVMuxer_WriteSample(muxer->muxer, muxer->vTrackId, buffer, attr) != AV_ERR_OK) { 
      OH_LOG_ERROR(LOG_APP, "input video track data failed"); 
    } 
    // Return the output buffer to the encoder 
    if (OH_VideoEncoder_FreeOutputData(venc_, index) != AV_ERR_OK) { 
      OH_LOG_ERROR(LOG_APP, "videoEncode FreeOutputDat error"); 
      errCount = errCount + 1; 
    } 
    if (errCount > 0) { 
      OH_LOG_ERROR(LOG_APP, "videoEncode errCount > 0"); 
      outputIsRunning_.store(false); 
      signal_->outCond_.notify_all(); 
      Release(); 
      break; 
    } 
    signal_->outIdxQueue_.pop(); 
    signal_->outputAttrQueue.pop(); 
    signal_->outBufferQueue_.pop(); 
    enCount++; 
  } 
}
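
OutputFunc consumes queues filled by the VencOutputDataReady callback registered in step 3.2, whose body the answer does not show. A minimal sketch, using the VEncSignal members referenced above:

static void VencOutputDataReady(OH_AVCodec *codec, uint32_t index, OH_AVMemory *data,
                                OH_AVCodecBufferAttr *attr, void *userData) {
    VEncSignal *signal = static_cast<VEncSignal *>(userData);
    unique_lock<mutex> lock(signal->outMutex_);
    // Queue the encoded frame's index, attributes, and memory for OutputFunc
    signal->outIdxQueue_.push(index);
    signal->outputAttrQueue.push(*attr);
    signal->outBufferQueue_.push(data);
    signal->outCond_.notify_one();
}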

3.8: Write the data to the output file and save it.

startRecorder(context: Context, width: number, height: number): Promise<CompressorResponse> { 
  try { 
    let date = new Date(); 
    // Create the output file in the sandbox files directory 
    this.outPutFilePath = context.filesDir + "/" + date.getTime() + ".mp4"; 
    let outputFile = fs.openSync(this.outPutFilePath, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE); 
    if (!outputFile) { 
      console.error("videoCompressor outputFile create error"); 
      return new Promise((resolve, reject) => { 
        fs.unlink(this.outPutFilePath); 
        reject(new Error("videoCompressor outputFile create error")); 
      }); 
    } 
    return this.object.startRecordNative(outputFile.fd, width, height, this.outPutFilePath); 
  } catch (err) { 
    return new Promise((resolve, reject) => { 
      fs.unlink(this.outPutFilePath); 
      reject(err); 
    }); 
  } 
}
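
On the native side, the fd handed to startRecordNative is presumably what backs the OH_AVMuxer written to in OutputFunc above. A minimal sketch of creating the muxer and its video track (the method name is an assumption; the width/height values would match the encoder configuration):

int32_t VideoRecordManager::CreateMuxer(int32_t fd) {
    // Create an MP4 muxer over the file descriptor received from ArkTS
    muxer->muxer = OH_AVMuxer_Create(fd, AV_OUTPUT_FORMAT_MPEG_4);
    // Describe the video track that OH_AVMuxer_WriteSample will write to
    OH_AVFormat *trackFormat = OH_AVFormat_Create();
    OH_AVFormat_SetStringValue(trackFormat, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    OH_AVFormat_SetIntValue(trackFormat, OH_MD_KEY_WIDTH, width);
    OH_AVFormat_SetIntValue(trackFormat, OH_MD_KEY_HEIGHT, height);
    int32_t ret = OH_AVMuxer_AddTrack(muxer->muxer, &muxer->vTrackId, trackFormat);
    OH_AVFormat_Destroy(trackFormat);
    if (ret != AV_ERR_OK) {
        OH_LOG_ERROR(LOG_APP, "Failed to add video track");
        return ret;
    }
    // The muxer must be started before samples are written
    return OH_AVMuxer_Start(muxer->muxer);
}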