Installing llama.cpp from Source and Debugging It
- Artificial Intelligence
- 2025-08-26 02:03:02

This post covers installing llama.cpp from source and setting up a debug configuration.

## Build and compile

Note: this build targets CUDA, and debug mode must be enabled so that the binaries carry debug symbols.

```shell
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug
cmake --build build --config Debug
```

## Configure launch.json for debugging
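Before moving on to the debugger configuration, it can help to confirm that the Debug build actually produced the example binary we are about to debug. A minimal sketch, assuming the default `build/bin` layout created by the cmake commands above (the helper name is my own):

```python
import os

def missing_binaries(build_dir="build", expected=("llama-simple",)):
    """Return the expected example binaries that were not found in build/bin."""
    bin_dir = os.path.join(build_dir, "bin")
    return [name for name in expected
            if not os.path.isfile(os.path.join(bin_dir, name))]

# Usage: an empty list means the build produced everything we need.
# missing_binaries()
```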
Adjust the paths below to match your own environment.
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "(gdb) Launch",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/build/bin/llama-simple",
            "args": [ "-m", "output.gguf", "-n", "32", "-ngl", "99", "Hello my name is" ],
            "stopAtEntry": false,
            "cwd": "${workspaceFolder}",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb",
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                },
                {
                    "description": "Set disassembly flavor to Intel",
                    "text": "-gdb-set disassembly-flavor intel",
                    "ignoreFailures": true
                }
            ],
            "miDebuggerPath": "/usr/bin/gdb"
        }
    ]
}
```

## Convert the model to GGUF format

```shell
python convert_hf_to_gguf.py --outtype f16 --outfile "output.gguf" "/raid/home/huafeng/models/Meta-Llama-3-8B-Instruct"
```

## Run the first program

With the model converted, launch the debug configuration and step through the example program (`llama.cpp/examples/simple/simple.cpp`).
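Before launching the debugger, a quick way to sanity-check the conversion step is to read the first four bytes of the output file: every valid GGUF file starts with the ASCII magic `GGUF`. A minimal sketch (the helper name is my own; `output.gguf` is the file produced by `convert_hf_to_gguf.py` above):

```python
def is_gguf(path):
    """Return True if the file begins with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Usage:
# is_gguf("output.gguf")
```

This only checks the magic, not the full header, but it catches the common mistakes (wrong path, truncated or failed conversion) before you spend time inside gdb.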