
As a HarmonyOS application developer, I recently ran into a requirement: how do you implement efficient data exchange in a multi-device collaboration scenario? After some technical research, the "unified drag-and-drop" framework became my way in. This article shares my hands-on experience, with complete code samples and implementation details.
In some applications, watermarking is essential to keep sensitive data from being misused while it is being dragged. With the drawing APIs HarmonyOS provides, a dynamic time watermark takes only three steps.
1. Make the image draggable
// ImageWatermark.ets
Image($rawfile('river.png'))
  .draggable(true)
  .onDragStart((event: DragEvent) => {
    // Read the raw image pixel data
    const resourceMgr = this.context.resourceManager;
    let rawFileDesc = resourceMgr.getRawFdSync('river.png');
    let pixelMap = image.createImageSource(rawFileDesc).createPixelMapSync();
    // Add the watermark and set the drag data (see step 2)
  })
2. The watermark drawing function
private addWaterMark(watermark: string, pixelMap: image.PixelMap): image.PixelMap {
  const canvas = new drawing.Canvas(pixelMap);
  const font = new drawing.Font();
  font.setSize(48); // setSize() returns void, so it cannot be chained
  // Draw semi-transparent white text near the bottom-right corner
  const brush = new drawing.Brush();
  brush.setColor({ alpha: 102, red: 255, green: 255, blue: 255 });
  canvas.attachBrush(brush);
  const size = pixelMap.getImageInfoSync().size;
  const textBlob = drawing.TextBlob.makeFromString(watermark, font, drawing.TextEncoding.TEXT_ENCODING_UTF8);
  canvas.drawTextBlob(textBlob, size.width - 300, size.height - 100); // 300px in from the right, 100px up from the bottom
  canvas.detachBrush();
  return pixelMap;
}
3. Save the watermarked image
// Encode and save the processed image into the app sandbox
let markedPixelMap = this.addWaterMark(new Date().toLocaleString(), pixelMap);
let filePath = this.context.tempDir + '/watermark.png';
let tempFile = fs.openSync(filePath, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE);
await image.createImagePacker().packToFile(markedPixelMap, tempFile.fd, { format: 'image/png', quality: 100 });
fs.closeSync(tempFile.fd);
// Set the drag data source
let imgData = new unifiedDataChannel.Image();
imgData.imageUri = `file://${filePath}`;
(event as DragEvent).setData(new unifiedDataChannel.UnifiedData(imgData));
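The snippet above stamps `new Date().toLocaleString()` into the image, but that string varies by device locale. A small helper with a fixed, sortable format is more predictable; this is a hypothetical sketch (the name `formatTimeWatermark` is my own, not part of the framework):

```typescript
// Hypothetical helper: format a millisecond epoch timestamp as a stable
// "YYYY-MM-DD HH:mm:ss" watermark string, independent of device locale.
export function formatTimeWatermark(epochMs: number): string {
  const d = new Date(epochMs);
  const pad = (n: number): string => n.toString().padStart(2, '0');
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
    `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}
```

It can replace the `toLocaleString()` call directly: `this.addWaterMark(formatTimeWatermark(Date.now()), pixelMap)`.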
For reference, here is a fuller example of drawing the watermark onto a Canvas over the image:
// Get the drag time
this.time = this.getTimeWatermark(systemDateTime.getTime(false));
let markPixelMap = this.addWaterMark(this.time, pixelMap);
// Draw the watermark
addWaterMark(watermark: string, pixelMap: image.PixelMap) {
  if (canIUse('SystemCapability.Graphics.Drawing')) {
    watermark = this.context.resourceManager.getStringSync($r('app.string.drag_time')) + watermark;
    let imageWidth = pixelMap.getImageInfoSync().size.width;
    let imageHeight = pixelMap.getImageInfoSync().size.height;
    // Scale the watermark with the image so it keeps the same on-screen size
    let imageScale = imageWidth / display.getDefaultDisplaySync().width;
    const canvas = new drawing.Canvas(pixelMap);
    const pen = new drawing.Pen();
    const brush = new drawing.Brush();
    pen.setColor({
      alpha: 102,
      red: 255,
      green: 255,
      blue: 255
    });
    brush.setColor({
      alpha: 102,
      red: 255,
      green: 255,
      blue: 255
    });
    const font = new drawing.Font();
    font.setSize(48 * imageScale);
    let textWidth = font.measureText(watermark, drawing.TextEncoding.TEXT_ENCODING_UTF8);
    const textBlob = drawing.TextBlob.makeFromString(watermark, font, drawing.TextEncoding.TEXT_ENCODING_UTF8);
    canvas.attachBrush(brush);
    canvas.attachPen(pen);
    // Anchor the text to the bottom-right corner, with margins scaled too
    canvas.drawTextBlob(textBlob, imageWidth - 24 * imageScale - textWidth, imageHeight - 32 * imageScale);
    canvas.detachBrush();
    canvas.detachPen();
  } else {
    hilog.info(0x0000, TAG, 'watermark is not supported');
  }
  return pixelMap;
}
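The layout math in addWaterMark is easier to verify when pulled out as a pure function. This sketch (names are mine, not framework APIs) reproduces the rule used above: the font is scaled by image width over display width so the watermark keeps constant on-screen proportions, and `textWidth` is assumed to be measured with the already-scaled font, as `font.measureText` does in the real code:

```typescript
interface WatermarkLayout {
  fontSize: number;
  x: number;
  y: number;
}

// Pure sketch of the watermark layout rule: scale font and margins by
// (image width / display width), then anchor to the bottom-right corner.
export function watermarkLayout(imageWidth: number, imageHeight: number,
                                displayWidth: number, textWidth: number): WatermarkLayout {
  const scale = imageWidth / displayWidth;
  return {
    fontSize: 48 * scale,
    x: imageWidth - 24 * scale - textWidth, // 24-unit margin from the right edge
    y: imageHeight - 32 * scale             // 32-unit margin above the bottom edge
  };
}
```

For a 2000px-wide image on a 1000px display with a 300px-wide measured string, this yields a 96px font drawn at (1652, 936).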
The default drag preview looks plain, so we used a custom Builder for branded styling:
// CustomDragPreview.ets
@Builder
previewBuilder() {
  Column() {
    Image($r('app.media.logo')) // custom brand icon
      .width(60)
      .margin(8)
    Text('Dragging an important file')
      .fontColor('#333')
  }
  .padding(12)
  .backgroundBlur(20) // Gaussian-blurred background
}

// Using it on a component
Image($r('app.media.document'))
  .onDragStart(() => ({
    pixelMap: this.getSnapshot(),
    builder: () => this.previewBuilder()
  }))
To make our image asset library more efficient to use, I built a drag-to-recognize (OCR) feature:
// AIDropZone.ets
Text(this.recognizedText)
  .allowDrop([UniformDataType.IMAGE])
  .onDrop(async (event: DragEvent) => {
    // Parse the dropped image
    const imgData = event.getData().getRecords()[0] as unifiedDataChannel.Image;
    const pixelMap = await this.loadImage(imgData.imageUri);
    // Run the OCR engine
    const recognition = await textRecognition.recognizeText({ pixelMap });
    this.recognizedText = recognition.value;
  })

private async loadImage(uri: string): Promise<image.PixelMap> {
  // Handle image URIs from different sources
  if (uri.startsWith('file://')) {
    return image.createImageSource(uri).createPixelMap();
  } else if (uri.startsWith('resource://')) {
    return this.context.resourceManager.getResource(uri).getPixelMap();
  }
  // Fail loudly instead of silently returning undefined
  throw new Error(`unsupported image uri: ${uri}`);
}
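The branching inside loadImage is worth isolating so the scheme handling can be unit-tested without any image APIs. A minimal sketch (the function name is hypothetical):

```typescript
type ImageSourceKind = 'file' | 'resource' | 'unsupported';

// Sketch of the URI-scheme dispatch used in loadImage, extracted as a
// pure function: decide how a dropped image URI should be decoded.
export function classifyImageUri(uri: string): ImageSourceKind {
  if (uri.startsWith('file://')) {
    return 'file';
  }
  if (uri.startsWith('resource://')) {
    return 'resource';
  }
  return 'unsupported';
}
```

loadImage can then switch on the returned kind, keeping the error path explicit for anything it does not recognize.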
During a drag you can customize the drag preview (the backplate image) to display extra information. The following uses a custom component as an example.
// src/main/ets/pages/background/Background.ets
@Builder
pixelMapBuilder() {
  Column() {
    Text($r('app.string.background_content'))
      .fontSize('16fp')
      .fontColor(Color.Black)
      .margin({
        left: '16vp',
        right: '16vp',
        top: '8vp',
        bottom: '8vp'
      })
  }
  .backgroundColor(Color.White)
  .borderRadius(18)
}
Convert the custom component into a PixelMap to use as the image shown during the drag:
private getComponentSnapshot(): void {
  this.getUIContext().getComponentSnapshot().createFromBuilder(() => {
    this.pixelMapBuilder()
  },
  (error: Error, pixmap: image.PixelMap) => {
    if (error) {
      hilog.error(0x0000, TAG, JSON.stringify(error));
      return;
    }
    this.pixelMap = pixmap;
  })
}
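createFromBuilder reports its result through an error-first callback. If you prefer to await the snapshot before starting a drag, a generic wrapper turns any such callback API into a Promise; this is a plain TypeScript sketch (`promisify1` is my own name, not a framework API):

```typescript
// Generic sketch: wrap an error-first callback API (like the
// componentSnapshot.createFromBuilder call above) into a Promise,
// so the caller can `await` the result instead of nesting callbacks.
export function promisify1<T>(
  call: (cb: (error: Error | null, result: T) => void) => void
): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    call((error, result) => {
      if (error) {
        reject(error);
      } else {
        resolve(result);
      }
    });
  });
}
```

Usage would look roughly like `this.pixelMap = await promisify1(cb => snapshot.createFromBuilder(() => this.pixelMapBuilder(), cb));`, assuming the snapshot call accepts the callback as its last argument as shown above.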
In the drag source's onDragStart() callback, return the captured PixelMap as the drag backplate image:
Image($r('app.media.mount'))
  // ...
  .onDragStart(() => {
    let dragItemInfo: DragItemInfo = {
      pixelMap: this.pixelMap,
      builder: () => {
        this.pixelMapBuilder()
      },
      extraInfo: "this is extraInfo"
    };
    return dragItemInfo;
  })
Core code for cross-device drag:
// Detect connected remote devices
import distributedDeviceManager from '@ohos.distributedDeviceManager';
let dmInstance = distributedDeviceManager.createDeviceManager('com.example.dragdemo'); // pass your own bundle name
const deviceList = dmInstance.getAvailableDeviceListSync();
if (deviceList.length > 0) {
  // Bind the drop target
  dragController.setCrossDeviceTarget(deviceList[0].networkId);
}
I ran into two key problems during implementation: the emulator does not support some hardware capabilities, which I worked around by debugging on a real device; and dragging large files occasionally stuttered, which I optimized with asynchronous chunked transfer. Key things to watch for during development:
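The chunked-transfer fix mentioned above can be sketched in plain TypeScript: split the payload into fixed-size chunks and await an async sender between chunks, so no single synchronous copy blocks the UI thread during a drag. All names here are hypothetical illustrations, not framework APIs:

```typescript
// Split a large payload into fixed-size views (no copying: subarray
// shares the underlying buffer).
export function splitIntoChunks(data: Uint8Array, chunkSize: number): Uint8Array[] {
  if (chunkSize <= 0) {
    throw new Error('chunkSize must be positive');
  }
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    chunks.push(data.subarray(offset, Math.min(offset + chunkSize, data.length)));
  }
  return chunks;
}

// Hand chunks to an async sender one at a time; each await yields to the
// event loop, keeping drag animations responsive. Returns bytes sent.
export async function sendInChunks(
  data: Uint8Array,
  send: (chunk: Uint8Array) => Promise<void>,
  chunkSize: number = 64 * 1024
): Promise<number> {
  let sent = 0;
  for (const chunk of splitIntoChunks(data, chunkSize)) {
    await send(chunk);
    sent += chunk.length;
  }
  return sent;
}
```

The chunk size is a tuning knob: smaller chunks yield more often but add per-chunk overhead; 64 KB is a common starting point, not a measured optimum.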