Hey fellow devs! 👋
I've just published a comprehensive series on Medium detailing my journey of porting Phi-3-Vision, a powerful vision-language model, from Hugging Face to Apple's MLX framework. As a Python hobbyist, I wanted to share my experience and hopefully inspire others to dive into AI model optimization.
📚 Series Overview:
- Basic Implementation: Getting Phi-3-Vision up and running in MLX.
- Su-scaled Rotary Position Embeddings (SuRoPE): Implementing 128K context support.
- Batching: Optimizing for multiple inputs.
- Caching: Speeding up text generation.
- Choice Selection: Implementing constrained output.
- Constrained Decoding: Guiding the model's output structure.
- LoRA Training: Fine-tuning the model efficiently.
- Agent and Toolchain System: Building flexible AI workflows.
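To give a flavor of one of the topics above, here is a minimal sketch of how "choice selection" (constrained output) can work conceptually: instead of letting the model generate freely, you score each candidate answer by its total log-probability under the model and pick the highest-scoring one. This is a toy illustration only — the `toy_logits` function below is a deterministic stand-in for a real forward pass, not the actual Phi-3-Vision or MLX API.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over a logit vector.
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def toy_logits(context_ids):
    # Hypothetical stand-in for a language model forward pass:
    # returns logits over a 5-token vocabulary, seeded by the
    # last context token so the example is deterministic.
    rng = np.random.default_rng(context_ids[-1])
    return rng.normal(size=5)

def score_choice(prompt_ids, choice_ids):
    # Sum log P(token | prefix) over the tokens of one candidate.
    total, context = 0.0, list(prompt_ids)
    for tok in choice_ids:
        total += log_softmax(toy_logits(context))[tok]
        context.append(tok)
    return total

prompt = [1, 2]
choices = {"A": [3], "B": [4], "C": [0, 2]}
best = max(choices, key=lambda k: score_choice(prompt, choices[k]))
print(best)
```

In practice you would also length-normalize the scores (longer choices accumulate more negative log-probability terms), and with a real model the scoring loop would reuse the KV cache across candidates — which is exactly where the caching and constrained-decoding parts of the series come in.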
🤔 Why This Matters:
- Run advanced AI models efficiently on Apple Silicon
- Learn about model optimization techniques
- Understand the internals of vision-language models
- Explore the capabilities of MLX for AI development
📖 Read the Full Series:
💻 GitHub Repository:
I've open-sourced all the code and markdown files used in this series. You can find them in my GitHub repository:
https://github.com/JosefAlbers/Phi-3-Vision-MLX
Feel free to explore, experiment, and contribute!
💬 Let's Discuss:
- Have you worked with MLX or other AI frameworks on Apple Silicon?
- What challenges have you faced in porting or optimizing AI models?
- Any specific parts of the series you'd like to dive deeper into?
I'm excited to hear your thoughts and experiences! Let's learn from each other and push the boundaries of what's possible with AI on consumer hardware.