Llama 3.2 was recently introduced at Meta’s Developer Conference, showcasing impressive multimodal capabilities and a version optimized for mobile ...
Awesome! Worked out of the box for me (Vivo v30 Lite, Android 14, and the latest Termux APK, specifically: github.com/termux/termux-app/relea...)
Note that this will almost certainly NOT work if you download Termux from the Google Play Store - while that build is fine for casual use, it is NOT the same as the open-source distribution you download from GitHub.
Glad to hear it worked out of the box for you on your Vivo V30 Lite with Android 14! 🎉 Thanks for sharing your setup details.
And yes, you're absolutely right—downloading Termux from the Google Play Store can cause issues since it's outdated and lacks important updates. The GitHub version is the way to go for the latest features and compatibility.
Great article. Thank you!
The 3B one is running smoothly.
Thank you very much.
Glad to hear the 3B model is running smoothly! 😊 If you have any other questions or need more help, don't hesitate to ask. I'm here to help. Happy experimenting with Ollama and AI models! 🚀
And I got it working on a low-end Android phone, the Xiaomi Poco C65.

It's not high-end like the Samsung S23, where these models run very fast. Thanks!
It sounds like you got it running on a Xiaomi Poco C65 - that's great! Even though it's a low-end device, it shows that software optimization and efficiency can make a real difference. 🚀 Did you notice any performance problems, or does it run well overall?
I also installed it on a Galaxy S24 and it runs its analyses in roughly the same times. If I use the 1B model it responds faster, but it tends to make things up and go around in circles with its answers.
I got this error 🤕 on my Samsung Galaxy Tab S9 Ultra:
Successfully ran deepseek-coder locally for the first time, even if it's quite slow! (I'll switch to a model that runs faster on my Samsung A51 later 😅)
Hi, I got the same error on an Honor Magic6 Pro (Snapdragon 8 Gen 3).
Update:
I have found a workaround here:
github.com/ollama/ollama/issues/7292
cheers.
I had the same issue, but found a workaround here.
Basically, you modify llama.go#L37-L38 to remove `-D__ARM_FEATURE_MATMUL_INT8`.
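For anyone who wants to script that edit, something like this should work before rebuilding (the llama/llama.go path is an assumption - check where those lines live in your checkout):

```bash
# Strip the flag from the cgo build directives, then rebuild
sed -i 's/-D__ARM_FEATURE_MATMUL_INT8//g' llama/llama.go
go build .
```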
Qualcomm's spec sheets for the Snapdragon 8 Gen 3 suggest it can use the GPU and a DSP to speed up LLM inference.
Do you, or any other readers, know whether Ollama is taking advantage of that hardware?
If not, are there any open-source projects that are utilizing the full capabilities of the Gen 3?
Thanks for the very useful article.
Based on the search results, here’s a detailed response to your questions regarding the utilization of Snapdragon 8 Gen 3 hardware (GPU and DSP) for LLM inference, particularly with Ollama and other open-source projects:
1. Is Ollama Taking Advantage of Snapdragon 8 Gen 3 Hardware?
As of the latest information, Ollama does not currently fully utilize the GPU and DSP capabilities of the Snapdragon 8 Gen 3 for LLM inference. While Ollama supports running models like Llama 3.2 on Android devices using Termux, its primary focus has been on CPU-based inference. There are discussions and efforts to integrate GPU and NPU support, but these are still in progress and not yet fully realized.
For example:
2. Open-Source Projects Utilizing Snapdragon 8 Gen 3 Hardware
Several open-source projects and frameworks are actively leveraging the full capabilities of the Snapdragon 8 Gen 3, including its GPU, DSP, and NPU for AI and LLM tasks:
a. Qualcomm AI Hub Models
b. MiniCPM-Llama3-V 2.5
c. Llama.cpp
d. Qualcomm AI Engine Direct
3. Future Prospects
Conclusion
Currently, Ollama does not fully utilize the GPU and DSP capabilities of the Snapdragon 8 Gen 3, but there are promising open-source projects like Qualcomm AI Hub Models, MiniCPM-Llama3-V 2.5, and Llama.cpp that are making significant strides in this area. As development continues, we can expect more tools to take full advantage of Snapdragon hardware for efficient on-device AI and LLM inference.
For further details, you can explore the referenced projects and discussions in the search results.
Your URL is incomplete; it should look like this:
https://github.com/ollama/ollama.git
Thank you 🫢
I got this and I don't know what to do. Can someone please help me?
It looks like your build error is due to missing ARM NEON FP16 support. The identifiers `vld1q_f16` and `vld1_f16` are ARM NEON intrinsics for float16 operations, which might not be enabled by default in your compiler.

Possible fixes:
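One option (a guess based on the error text, not something verified on your device) is to enable the FP16 extension through cgo when building:

```bash
# armv8.2-a+fp16 enables the float16 intrinsics (vld1q_f16 / vld1_f16);
# this only helps if the CPU actually supports FP16 arithmetic.
export CGO_CFLAGS="-march=armv8.2-a+fp16"
export CGO_CXXFLAGS="-march=armv8.2-a+fp16"
go build .
```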
As for the warnings about format specifiers (`%lu` vs. `uint64_t`), you can fix them by using `%llu` or explicitly casting the argument to `(unsigned long long)`.

Let me know if you need more help!
How do I add the DeepSeek model to Ollama and select it? Could you add the instructions as another option?
1. Download the DeepSeek model from Hugging Face.
2. Convert the model to GGUF format (if necessary).
3. Create a Modelfile and specify the path to the model (see the example sketch at the end of this reply).
4. Build the model in Ollama:
```bash
ollama create deepseek -f Modelfile
```
5. Run the model:
```bash
ollama run deepseek
```
Troubleshooting
If the model doesn't load, make sure the model file is in the correct format (GGUF/GGML).
Check the Ollama logs for errors:
```bash
ollama server --verbose
```
If you run into memory issues, try a smaller model or run Ollama on a device with more RAM.
If you follow these steps, you should be able to add and use the DeepSeek model in Ollama. Let me know if you need more help!
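As a rough illustration of step 3, a minimal Modelfile can be created straight from the shell. The GGUF filename below is made up; point FROM at wherever you saved the converted DeepSeek weights:

```bash
# Hypothetical filename - replace with your actual converted GGUF file
cat > Modelfile <<'EOF'
FROM ./deepseek-coder-6.7b-instruct.Q4_K_M.gguf
EOF
ollama create deepseek -f Modelfile
ollama run deepseek
```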
That's what I get after the go build . command.

./ollama run llama3.2:3b --verbose
Error: could not connect to ollama app
What can I do to solve this problem?
The error `Error: could not connect to ollama app` typically occurs when the Ollama server is not running or is not accessible. Since you're running this on an Android device using Termux, here are some steps to troubleshoot and resolve the issue:

1. Ensure the Ollama Server is Running
The server must be running in the background for the `ollama run` command to work. Start the Ollama server by running:
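```bash
# Start the server in the background. Use ./ollama if you built the binary
# from source in the current directory, or plain ollama if it is on your PATH.
./ollama serve &
```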
The `&` at the end runs the server in the background. You should see a message confirming the server is running.

Check if the server is running:
Run the following command to see if the Ollama process is active:
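```bash
# ps is provided by the procps package in Termux (pkg install procps)
ps aux | grep "[o]llama"
```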
If you don't see the `ollama serve` process, restart it.

2. Verify the Ollama Server Port
By default, the Ollama server listens on port `11434`. Ensure this port is not blocked or used by another process.

Check if the port is open:
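```bash
# netstat is provided by the net-tools package (pkg install net-tools)
netstat -tln | grep 11434
# Or query the API directly - a running server answers "Ollama is running"
curl http://localhost:11434
```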
If the port is not listed, the server might not be running correctly.
3. Check for Errors in the Server Logs
If the server fails to start, check the logs for errors:
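```bash
# Running the server in the foreground prints its log output to the terminal
./ollama serve
```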
Look for any error messages that might indicate why the server isn't starting. Common issues include missing dependencies or insufficient memory.
4. Ensure Termux Has Proper Permissions
Grant storage access:
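```bash
# Gives Termux access to shared storage (Android shows a permission prompt)
termux-setup-storage
```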
Ensure Termux has internet access by testing with:
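```bash
# Any reachable host works for this quick connectivity check
ping -c 3 ollama.com
```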
5. Reinstall Ollama
If the server still doesn't start, try reinstalling Ollama:
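Assuming you built Ollama from source as described in the article, that would look something like this:

```bash
cd ~
rm -rf ollama                                   # remove the old checkout
git clone https://github.com/ollama/ollama.git  # clone a fresh copy
cd ollama
go build .                                      # rebuild the binary
```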
Then, start the server again:
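```bash
./ollama serve &
```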
6. Check Device Resources
Running LLMs is memory-intensive; if your device is low on RAM, close other apps or switch to a smaller model such as `llama3.2:1b`.

7. Test with a Smaller Model
If the issue persists, test with a smaller model to rule out resource constraints:
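```bash
# The 1B variant needs far less RAM than the 3B one
./ollama run llama3.2:1b
```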
If the smaller model works, the issue might be related to the device's ability to handle the 3B model.
8. Use Verbose Mode for Debugging
Run the Ollama server in verbose mode to get more detailed logs:
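One way to do that (OLLAMA_DEBUG is the environment variable recent Ollama builds use for debug logging; adjust if your build differs):

```bash
OLLAMA_DEBUG=1 ./ollama serve
```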
Look for any specific errors or warnings that might indicate the root cause.
9. Check for Termux Updates
Update Termux packages:
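```bash
pkg update && pkg upgrade
```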
10. Restart Termux
Sometimes, simply restarting Termux can resolve connectivity issues:
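```bash
# End the current session, then reopen the Termux app
exit
```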
11. Verify the Model Name
Ensure the model name `llama3.2:3b` is correct. If the model doesn't exist, Ollama will fail to connect.

List available models:
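```bash
./ollama list
```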
If the model isn't listed, pull it first:
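```bash
./ollama pull llama3.2:3b
```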
12. Check Network Connectivity
Test connectivity by pulling a model:
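```bash
# Any small model works for this test
./ollama pull llama3.2:1b
```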
If the pull fails, there might be a network issue.
13. Use a Different Device
Summary of Commands
Here’s a quick summary of the key commands to troubleshoot and resolve the issue:
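```bash
# Start the server in the background
./ollama serve &
# Check the process and the default port
ps aux | grep "[o]llama"
curl http://localhost:11434
# Keep Termux packages current
pkg update && pkg upgrade
# List, pull, and run models
./ollama list
./ollama pull llama3.2:3b
./ollama run llama3.2:3b --verbose
```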
If you’ve tried all the steps above and the issue persists, feel free to provide more details about the error logs or behavior, and I’ll help you further!