[Repost] Python error: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
ERROR:app.services.audio_processor:Failed to save audio (or a related operation): can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
ERROR:app.services.audio_processor:Error while processing audio: Failed to save audio (or a related operation): can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
ERROR:app.api.audio_router:Error while processing audio:
Traceback (most recent call last):
File "D:\python\fastapi-speaker-extractor\app\services\audio_processor.py", line 677, in process_audio
target_embedding = get_or_create_target_embedding(target_audio_path, CACHE_DIR, verification)
File "D:\python\fastapi-speaker-extractor\app\utils\audio_utils.py", line 110, in get_or_create_target_embedding
np.save(cache_file, embedding.numpy()) # save as numpy
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
During handling of the above exception, another exception occurred:
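The root cause: the embedding returned by extract_audio_embedding lives on cuda:0, and NumPy can only read host memory, so the .numpy() call at audio_utils.py line 110 raises the TypeError. A minimal standalone repro (hypothetical example, assuming a CUDA-capable machine):

    import torch

    t = torch.randn(192, device="cuda")  # stand-in for a GPU speaker embedding
    try:
        t.numpy()  # same TypeError: can't convert cuda:0 device type tensor to numpy
    except TypeError as e:
        print(e)

    print(t.detach().cpu().numpy().shape)  # copying to host memory first works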
import hashlib
import os

import numpy as np
import torch

# =========================================
# 4. Function: get the target embedding (with caching)
def get_or_create_target_embedding(audio_path, cache_dir, verification):
    os.makedirs(cache_dir, exist_ok=True)
    with open(audio_path, 'rb') as f:
        file_hash = hashlib.md5(f.read()).hexdigest()
    cache_file = os.path.join(cache_dir, f"{file_hash}.npy")
    if os.path.exists(cache_file):
        print(f"✅ [cache hit] Loading target speaker embedding from cache: {cache_file}")
        # .npy files load as numpy arrays; convert back to a Tensor
        embedding_np = np.load(cache_file)
        return torch.from_numpy(embedding_np)  # ✅ convert to a PyTorch Tensor
    else:
        print(f"🔁 [cache miss] Extracting embedding from the target speaker audio: {audio_path}")
        # extract_audio_embedding(...) is defined elsewhere in the project and returns a Tensor
        embedding = extract_audio_embedding(audio_path, verification)  # Tensor
        # Save to the cache as numpy
        np.save(cache_file, embedding.numpy())  # save as numpy
        print(f"💾 [cache saved] Saved target speaker embedding to cache: {cache_file}")
        return embedding  # Tensor
Kimi solved it instantly...
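For anyone hitting the same error: the fix is a one-liner, replace embedding.numpy() with embedding.detach().cpu().numpy() inside get_or_create_target_embedding, so the tensor is copied to host memory before NumPy touches it. A sketch of the save/load round trip (the helper names and the 192-dim dummy embedding are made up for illustration, not part of the original project):

    import numpy as np
    import torch

    def save_embedding(embedding: torch.Tensor, cache_file: str) -> None:
        # Detach from the autograd graph and copy to host memory before
        # converting; this is the change that fixes the TypeError above.
        np.save(cache_file, embedding.detach().cpu().numpy())

    def load_embedding(cache_file: str, device: str = "cpu") -> torch.Tensor:
        # np.load returns a numpy array; hand it back as a tensor on the
        # device the verification model expects.
        return torch.from_numpy(np.load(cache_file)).to(device)

    # dummy usage: a random 192-dim vector stands in for extract_audio_embedding(...)
    emb = torch.randn(192, device="cuda" if torch.cuda.is_available() else "cpu")
    save_embedding(emb, "target_speaker.npy")
    print(load_embedding("target_speaker.npy").shape)  # torch.Size([192])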