Roshan Sanjeewa Wijesena
How to run an LLM model locally with Hugging Face 🤗

Welcome back - in this post I would like to talk about how to download any LLM model and run it on your local machine/environment.

We again use Hugging Face here 🤗. You will need a Hugging Face API key first.
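If you have not set your key up yet, one way is to export it before running the notebook. A minimal sketch, assuming the huggingface_hub package is installed and using a placeholder token value:

import os
from huggingface_hub import login

# Option 1: expose the token as an environment variable (picked up by LangChain / transformers)
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."  # replace with your own token

# Option 2: log in programmatically via huggingface_hub
login(token=os.environ["HUGGINGFACEHUB_API_TOKEN"])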

Run the code below to download google/flan-t5-large to your local machine. It will take a while, and you will see the download progress in your Jupyter notebook.

from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

# Download the model and its tokenizer from the Hugging Face Hub
model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Wrap the local model in a transformers pipeline and expose it to LangChain
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=128)
local_llm = HuggingFacePipeline(pipeline=pipe)

# Build a simple prompt template and run it through an LLMChain backed by the local model
prompt = PromptTemplate(
    input_variables=["name"],
    template="Can you tell me about footballer {name}",
)
chain = LLMChain(prompt=prompt, llm=local_llm)
chain.run("messi")

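Once the download finishes, you can sanity-check the model by calling the transformers pipeline directly, without going through LangChain. A minimal sketch reusing the pipe object from above:

# Call the transformers pipeline directly to verify the locally downloaded model works
print(pipe("Can you tell me about footballer messi"))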


Top comments (2)

Jayanath Liyanage

Great work 👍

Voyagergle

let me try
