Although Transformers.js was originally designed to be used in the browser, it is also able to run inference on the server. This article covers my tests of running inference on the server (Node). I prefer to implement the logic on the server side and expose an API to the frontend, especially because some of the models can be huge (hundreds of gigabytes).