In the not-so-distant past, dabbling in generative AI technology meant leaning heavily on proprietary models. The routine was straightforward: snag...
I have used llava to OCR some image documents, but without success. Sometimes it works; other times it says it is not possible, complains about low resolution, or gives other answers. Does anyone know another model that can do image-to-text OCR on the text that is in the image, like a letter? Thanks!
There is a way (hack) by which you can download any Hugging Face open-source image-to-text model locally.
youtu.be/fnvZJU5Fj3Q?si=YiiHdwRw90...
Additionally, you can upscale the image with a different AI model first and then feed it to llava for better results.
I’m sure the llava model will get better with time.
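Since llava runs behind Ollama's local REST API, that OCR step can be scripted. A minimal sketch in Python (standard library only), assuming a local Ollama server on its default port with the llava model already pulled; the helper names are mine, and any upscaling (e.g. with an ESRGAN-style model) would happen before the file is read:

```python
import base64
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_ocr_request(image_bytes: bytes,
                      prompt: str = "Transcribe all text in this image.") -> bytes:
    """Build the JSON payload /api/generate expects: a prompt plus
    base64-encoded images, with streaming disabled for a single response."""
    payload = {
        "model": "llava",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")

def ocr_with_llava(image_path: str) -> str:
    """Send one image to the local llava model and return its transcription."""
    with open(image_path, "rb") as f:
        body = build_ocr_request(f.read())
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Something like `ocr_with_llava("letter.png")` would return llava's transcription attempt; a nice property for sensitive documents is that everything stays on the local machine.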
Thanks Sarthak, I will try it!
@isslerman Please do not rely on traditional LLMs for OCR purposes; they are not good at it. You should use a dedicated OCR provider, e.g. an OCR API.
Thanks Ranjan, I will take a look at it. The project itself is more than an OCR; we are using LLMs for other purposes, and the OCR is just one step. But yes, an OCR API could be a good solution too. I will check whether there is an open-source option I can use, or look at the providers, because the documents in this case are too sensitive to be shared.
Bests, Marcos Issler
Sure, it's understandable. There are always pros and cons when it comes to open-source options; please do keep the accuracy issues of open source in mind. However, since you mentioned sensitive-document OCR, I would highly recommend considering public cloud options such as AWS or Azure. They provide a complete solution, and they are also GDPR and SOC 2 compliant. Generally, we need to keep several things in mind: cost, accuracy, security, reliability, scalability, etc. If you can do it with an in-house open-source tool and think that works best, please proceed with that. Either way, it's good to experiment and see which option fits best.
I ran a survey of which local LLMs are most used in the Taiwanese AI community (2024-02-20 to 2024-02-29). Here are the top 5 results.
Ollama on the rise
Local LLMs integrated into the OS could be such a blessing for people like me. I've already noticed I use ChatGPT much more compared to Google search when I need a concept explained quickly and succinctly. In fact, "define ______" and "what is ______?" were two of my most frequent search queries up until last year.
Now I just ask ChatGPT. When things don't make sense, I simply ask it to explain like I'm five or ten, and it works wonderfully 80% of the time. Having a local LLM capable of doing this at your fingertips will make this even smoother!
Exactly. Recently, the Arc Search app (actually a browser) released a feature that lets you pinch a page and have it summarized for you. Although these are simple use cases of generative AI, they give you a glimpse of where we are heading.
This was really inspiring thanks for sharing!
The rise of local Large Language Models (LLMs) is an absolute game-changer! Your exploration of Ollama and its potential applications is truly fascinating. The ability to run powerful AI models locally, offline, with no associated costs, opens up a realm of possibilities.
I love the practical examples you provided, from integrating Ollama with Obsidian for auto-completion to using it as a replacement for GitHub's Copilot in VSCode. The prospect of adding an Ollama provider to projects for free AI features is revolutionary.
The diversity of models available, from Gemma for lightweight open models to LLaVA for computer vision capabilities, showcases the versatility of this approach. It's exciting to think about the future integration of these models into operating systems, potentially even on mobile devices like the Samsung Galaxy S24 Ultra.
I've been using Ollama + ROCm with my fairly underpowered RX 580 (8 GB) and have had a lot of success with different models. I'm surprised at how well everything works, and I can see myself building a home server dedicated to AI workloads in the relatively near future.
Got goosebumps just thinking about the infinite possibilities LLMs are going to open up in the not-so-distant future.
This 5-minute read really pushed all my brain cells to imagine what we are heading towards.
Would definitely love to read more such tech updates.
Amazing Article!!!
One of the great posts I have come across lately! Great work! 🙏
I used this AI tool, and it's fantastic. Thank you for the guidance and the wonderful post.
Appreciate your kind words. 😊
Great! Does the popularity of models on their website align with their performance, from your point of view? Is it the same for coding purposes?
Of course it does. With the right hardware, it can amaze you. Try this great experiment by Hugging Face: huggingface.co/chat/
Thanks for sharing this, now I don't need to worry about internet connection all the time when I do travel coding.
Good explorations into local LLMs. LangChain is a marvellous invention.
Thanks for your content.
I have some questions regarding OCR.
Which service do you think is best?
A past client of mine is considering AWS or Google Vision OCR but is worried about getting the desired results.
What configuration do you suggest for running local LLMs?
I have a Windows 10 machine with 16 GB RAM and run Ollama inside WSL2.
I tried several 7B models, including codellama, and the response is VERY slow.
Some 3B models are only slightly better.
This Windows PC does not have a GPU.
OTOH, my work MacBook Pro M2 with 16 GB RAM has respectable response times.
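Those slow 7B responses on a CPU-only 16 GB box line up with a back-of-envelope memory estimate: a model needs roughly parameter count × bytes per weight, plus some headroom for the KV cache and runtime. A rough sketch (the 20% overhead factor is my assumption, not a measured figure):

```python
def approx_model_memory_gb(params_billion: float, bits_per_weight: int,
                           overhead: float = 1.2) -> float:
    """Rough RAM needed to hold the weights, padded ~20% for KV cache etc."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model at 4-bit quantization (typical for Ollama's default tags)
# needs ~4.2 GB, so it fits in 16 GB RAM -- but on CPU the bottleneck
# becomes memory bandwidth, hence the slow token generation.
print(approx_model_memory_gb(7, 4))   # 4.2
# The same 7B model in fp16 needs ~16.8 GB and would not fit at all.
print(approx_model_memory_gb(7, 16))  # 16.8
```

This is also why the M2 MacBook feels fine with the same 16 GB: its unified memory has much higher bandwidth than a typical desktop's DDR4.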
Thanks for the great content. A few months ago I was looking for a way to integrate a local LLM into Obsidian, and now you've given me the answer. Thanks for sharing!
Cheers to that mate 😄
I have already installed it, but thanks for sharing!