Inevitably, at some point in the near future, somebody high up in OpenAI's leadership will realise they can make mountains of money by capitalising on ChatGPT, if that hasn't happened already. I suspect one of the primary and easiest ways to make money on it is to sell companies access to localised, protected, trained models.
For instance, I would love to have access to a "persistent training model" that I could teach to use and understand Hyperlambda. This would allow me to integrate it as an API, with my own private API key, into Aista's services. The idea is to provide myself and other users with a Hyperlambda assistant, capable of generating and understanding Hyperlambda code. Logically, this would make me 100 times more productive, and others would be able to understand Hyperlambda far more easily and become productive using it to create their own software systems.
I have already tried to train it, without success. It learns Hyperlambda quite well, but the problem is that when my session ends, it seems to suffer from amnesia. Five minutes later, it doesn't remember anything I taught it in previous sessions. At the same time, opening it up so that anyone can teach it anything globally will inevitably result in it turning into a racist neo-nazi, hijacked by trolls, as we saw with Microsoft's and Google's previous initiatives.
The solution is to create "protected training models" that companies and individuals can "license". The licensee can train the model, but the training is only reinforced if the private API key is supplied during the conversation - while others, without the private training key, only gain access to using the model, not to training it.
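To make the idea concrete, here is a minimal sketch of how such a key-gated model could work. Everything in it is hypothetical - the key, the `Model` class, and the `handle_request` function are my own illustrations, not any existing OpenAI API - but it shows the core rule: anyone can query the model, while only conversations carrying the private training key reinforce it.

```python
# Hypothetical sketch of a "protected training model".
# None of these names correspond to a real OpenAI API.

PRIVATE_TRAINING_KEY = "secret-key-held-by-the-licensing-company"


class Model:
    """Stand-in for a hosted language model with a per-tenant training set."""

    def __init__(self):
        self.training_examples = []

    def reinforce(self, prompt, completion):
        # Only invoked for key holders: the exchange becomes part of the
        # tenant's persistent training data.
        self.training_examples.append((prompt, completion))

    def complete(self, prompt):
        # Placeholder inference; a real service would run the fine-tuned model.
        return f"(answer to {prompt!r}, using {len(self.training_examples)} trained examples)"


def handle_request(model, api_key, prompt):
    """Everyone may query the model; only the private key trains it."""
    completion = model.complete(prompt)
    if api_key == PRIVATE_TRAINING_KEY:
        model.reinforce(prompt, completion)
    return completion


if __name__ == "__main__":
    model = Model()
    # The licensing company teaches the model ...
    handle_request(model, PRIVATE_TRAINING_KEY, "What is Hyperlambda?")
    # ... while anonymous users can only consume it.
    print(handle_request(model, "some-other-key", "What is Hyperlambda?"))
```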
This would allow any company to train ChatGPT and use it as their primary support mechanism, answering whatever questions are relevant to that particular company. I assume most companies in the world with an internet presence would line up to buy access to such a thing, simply because of the cost savings it would give them in the long run. I would also assume OpenAI would be interested in giving away free training models like these for things such as open source projects, implying Magic could possibly use this for free to teach Hyperlambda to developers. It would simply be in their interest, as marketing for their commercial offerings.
In fact, I am so certain that ChatGPT will have features such as these in the near future that I will start a series of blogs here at Dev. My intention is to gather small articles where I teach Hyperlambda, in such a way that I can simply copy and paste them into ChatGPT later down the road, to teach it the language from scratch to a complete understanding.
This is my declaration of intent, and the introduction to a "book" named "Teaching the machine Hyperlambda". If you want to read these articles, you're of course welcome - however, I really don't care if you ignore them, or how many page views I end up getting. I never wrote these articles for you; I wrote them for the machine. I'm not particularly good with human beings anyway, having started coding when I was 8 years old, and I prefer interacting with machines if I can.
Edit: Looks like OpenAI beat me to the punch ...