To capitalise on the rapid pace of change in AI, a pragmatic, future-proof approach is required. That means building something that can be extended, pivoted, and upgraded.
This is the approach I've been taking with my latest product, which gives you a team of AI agents that gamify productivity and expand via plugins.
More detail on my current progress can be found in the video below.
Anyway, I've been working on AI projects for over 20 years and I wanted to share a few thoughts on my attempts to integrate various AI models into a single app:
Redundancy: What happens when a service like GPT-3.5 goes down, or the app itself goes offline? Redundancy is the answer. My approach is to build it in layers (see the sketch after this list):
- Primary model
- Secondary model (backup)
- Offline lightweight model
- Cached responses
- Conditional logic
- Failover logic
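Here's a minimal sketch of how those layers can fall through to each other. The call_* functions and the cache are placeholders standing in for whatever providers or local models you actually wire up, not my production code:

```python
import logging

logger = logging.getLogger(__name__)

# Placeholder cache for the "cached responses" layer (e.g. keyed by prompt).
response_cache: dict[str, str] = {}

def call_primary(prompt: str) -> str:
    raise NotImplementedError  # hosted API call to your main model

def call_secondary(prompt: str) -> str:
    raise NotImplementedError  # backup provider or older model version

def call_offline(prompt: str) -> str:
    raise NotImplementedError  # small local model bundled with the app

def generate(prompt: str) -> str:
    """Walk the redundancy layers in order, falling through on failure."""
    for layer in (call_primary, call_secondary, call_offline):
        try:
            answer = layer(prompt)
            response_cache[prompt] = answer  # feed the cached-response layer
            return answer
        except Exception as exc:  # any layer may be unreachable
            logger.warning("layer %s failed: %s", layer.__name__, exc)

    # Last resorts: a cached response, then a conditional fallback message.
    if prompt in response_cache:
        return response_cache[prompt]
    return "Sorry, I'm offline right now - try again shortly."
```

The point is that the calling code only ever sees generate(); which layer actually answered is an implementation detail.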
Rapid Recalibration: Big companies often claim they've perfected their systems, yet I've watched them struggle when new customer features roll out - for example, when customers are given the ability to add aliases to a database. The approaches I've found helpful include (see the sketch after this list):
- Future-proofing ML data design (to minimise impact on retraining times)
- Incremental training
- Test first and fail fast - let your developers experiment with adding new fields and see how the model responds.
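To make the incremental-training point concrete, here's a rough sketch using scikit-learn's partial_fit, which updates an existing model on new batches (say, records that now carry an alias field) instead of retraining from scratch. The feature extraction is a made-up placeholder:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # all labels must be declared up front

def to_features(record: dict) -> list[float]:
    # Reserve fixed-width slots for fields you expect to add later
    # (the "future-proof data design" point), so a new field like an
    # alias count doesn't change the input shape and force a full retrain.
    return [record.get("name_length", 0.0), record.get("alias_count", 0.0)]

def train_on_batch(records: list[dict], labels: list[int]) -> None:
    """Fold a new batch into the existing model without starting over."""
    X = np.array([to_features(r) for r in records])
    y = np.array(labels)
    model.partial_fit(X, y, classes=classes)
```

Pairing this with the "test first and fail fast" idea means a developer can add a field, run a batch through, and see how the model responds without committing to a lengthy retrain.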
Plugins and Expansion: One of the best IDEs out there is VS Code. What makes it so powerful is that, unlike older IDEs that are monolithic and static from release, VS Code completely pivots based on the plugins you use. This, along with other plugin-driven tools such as Jenkins, inspired me to apply the same idea to AI agents. Rather than throwing away all the integration code every time something changes, I write integration functions that can re-plumb themselves as things change down the line.
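As a rough illustration (the names here are mine, not the product's actual code), one lightweight way to get that pluggability is a registry that agents look capabilities up in by name at runtime, so swapping an integration means registering a new function rather than rewriting the agent code that calls it:

```python
from typing import Callable

# Toy plugin registry: capabilities are resolved by a stable name at runtime.
PLUGINS: dict[str, Callable[[str], str]] = {}

def plugin(name: str):
    """Decorator that registers a capability under a stable name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        PLUGINS[name] = fn
        return fn
    return register

@plugin("summarise")
def summarise_v1(text: str) -> str:
    return text[:100] + "..."  # stand-in for a real model call

def run_capability(name: str, payload: str) -> str:
    """Agents call capabilities by name; the plumbing behind them can change."""
    if name not in PLUGINS:
        raise KeyError(f"No plugin registered for '{name}'")
    return PLUGINS[name](payload)

# Later, a better implementation can simply re-register under the same name:
# @plugin("summarise")
# def summarise_v2(text: str) -> str: ...
```

The agent only depends on the capability name, so the integration behind it can be re-plumbed without touching the rest of the app.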
Anyway, these have just been a few brain farts - if you're interested in knowing more, let me know!