I’d like to spark a discussion about the future of LLMs.
As we know, AI models rely on vast datasets scraped from the internet. When we encounter a bug, the AI generates a fix based on historical discussions and previously solved errors found in that data.
However, as the image shows, traffic and engagement on sites like Stack Overflow are significantly down. This raises a critical question: what happens when we hit new problems that aren't represented in existing training data? If developers rely solely on AI and stop documenting solutions on public forums, AI models will eventually run out of fresh training data for solving novel issues.
