DEV Community
#vram

Posts
I built a duty-cycle throttler for my RTX 4060 (because undervolting wasn't enough)
Yaroslav Pristupa · Apr 6 · #softwaredevelopment #gpu #vram #hardware · 4 min read
I Couldn't Build a Local LLM PC for $1,300 — Budget Tiers and the VRAM Cliffs Between Them
plasmon · Apr 4 · #llm #gpu #localllm #vram · 6 min read
Unleash Large AI Models: Extend GPU VRAM with System RAM (Nvidia Greenboost)
Umair Bilal · Mar 19 · #nvidia #gpu #vram #ai · 17 min read
Cloud LLMs vs Local Models: Can 32GB of VRAM Actually Compete with Claude Opus?
Alan West · Mar 25 · #localllm #claudeopus #ollama #vram · 4 min read
A Free Tool to Check VRAM Requirements for Any HuggingFace Model
karan singh · Jan 7 · #vram #gpu #llm #inferencing · 5 reactions · 2 min read
We're a place where coders share, stay up-to-date and grow their careers.