Actually, I’ve been thinking for a while now that AI going open-source was just a matter of time. What’s surprising, though, is that as soon as DeepSeek R1 was open-sourced, its capabilities nearly matched or surpassed those of all other models — and, most importantly, it is far cheaper than the alternatives. This world is really moving forward at a speed we can’t even imagine.
Unfortunately, I haven’t been able to purchase API credits for some reason, so I can only use the official web interface. Because of that, I ended up diving into how to deploy DeepSeek R1’s LLM locally. Beyond requiring some basic knowledge of the code itself, setting up the environment is a real headache. There are compatibility issues among the NVIDIA driver, CUDA, and related packages, not to mention dependency conflicts between Python’s Torch and the Transformers library. It took me an entire day to sort out. It seems no AI platform has really nailed the integration problem yet. I think these skills will soon become essential for engineers, and programmers who lack them may find they have little real value in the field.
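One concrete source of those headaches is that each CUDA toolkit release requires a minimum NVIDIA driver version, and a mismatch fails in confusing ways. The check itself is simple to automate; here is a minimal sketch. The version thresholds in the table are illustrative assumptions — always confirm the real minimums against NVIDIA’s CUDA release notes for your platform.

```python
# Sketch: verify the installed NVIDIA driver is new enough for a CUDA toolkit.
# NOTE: the thresholds below are illustrative examples, not authoritative —
# check NVIDIA's official CUDA release notes for the exact requirements.
MIN_DRIVER_FOR_CUDA = {
    "11.8": (520, 61, 5),
    "12.1": (530, 30, 2),
    "12.4": (550, 54, 14),
}

def parse_driver(version: str) -> tuple:
    """Turn a driver string like '535.104.05' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_supports_cuda(driver_version: str, cuda_version: str) -> bool:
    """Return True if the driver meets the (assumed) minimum for this CUDA version."""
    required = MIN_DRIVER_FOR_CUDA.get(cuda_version)
    if required is None:
        raise ValueError(f"No entry for CUDA version: {cuda_version}")
    return parse_driver(driver_version) >= required

# Example: compare the driver reported by `nvidia-smi` against CUDA 12.1.
print(driver_supports_cuda("535.104.05", "12.1"))  # True
print(driver_supports_cuda("515.65.01", "12.1"))   # False
```

Running a check like this before `pip install torch` saves a lot of the trial-and-error I went through, since Torch wheels are themselves built against specific CUDA versions.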
It’s just like the software world before AI emerged: no one really considered source code itself to be valuable — the real value lay in combining it with business models and specialized domains. Similarly, now that AI technologies are being open-sourced, they’ll likely advance at an even faster pace. That’s why learning about AI can’t be postponed. The key is to adapt to these new technologies and apply them alongside our existing goals.