{"id":1229,"date":"2025-03-24T12:00:01","date_gmt":"2025-03-24T13:00:01","guid":{"rendered":"http:\/\/www.diveintoaccessibility.com\/?p=1229"},"modified":"2025-04-30T10:31:09","modified_gmt":"2025-04-30T10:31:09","slug":"running-large-language-models-llms-locally-with-lm-studio","status":"publish","type":"post","link":"http:\/\/www.diveintoaccessibility.com\/index.php\/2025\/03\/24\/running-large-language-models-llms-locally-with-lm-studio\/","title":{"rendered":"Running Large Language Models (LLMs) Locally with LM Studio"},"content":{"rendered":"
<p>Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently.<\/p>\n <p>In this article, we will walk you through optimizing your setup. We will be using LM Studio, whose user-friendly interface and simple installation make things easier. We\u2019ll cover model selection and some performance tweaks to help you get the most out of your LLM setup.<\/p>\n <p>I assume that you already have LM Studio installed; otherwise, please check out our article: How to Run LLM Locally on Your Computer with LM Studio.<\/p>\n <p>Once you have it installed and running on your computer, we can get started:<\/p>\n <h2>Selecting the Right Model<\/h2>\n <p>Selecting the right Large Language Model (LLM) is important for getting efficient and accurate results. Just like choosing the right tool for a job, different LLMs are better suited to different tasks.<\/p>\n <p>There are a few things to look for when selecting a model:<\/p>\n <h3>1. The Model Parameters<\/h3>\n <p>Think of parameters as the \u201cknobs\u201d and \u201cdials\u201d inside the LLM that are adjusted during training. They determine how the model understands and generates text.<\/p>\n <p>The number of parameters is often used to describe the \u201csize\u201d of a model. You\u2019ll commonly see models referred to as 2B (2 billion parameters), 7B (7 billion parameters), 14B, and so on.<\/p>\n <p>A model with more parameters generally has a greater capacity to learn complex patterns and relationships in language, but it typically also requires more RAM and processing power to run efficiently.<\/p>\n <p>Here are some practical approaches you can take when selecting a model based on your system\u2019s resources:<\/p>\n
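As a rough illustration of how parameter count translates into memory requirements, here is a back-of-the-envelope sketch. The function name, the 20% overhead factor, and the example bit-widths are my own assumptions, not figures from LM Studio; real memory usage also depends on the quantization format and context length.

```python
def estimate_model_ram_gb(params_billion: float,
                          bits_per_weight: int = 4,
                          overhead: float = 1.2) -> float:
    """Very rough RAM estimate for running an LLM locally.

    Weights take params * (bits / 8) bytes; an assumed ~20%
    overhead factor accounts for the KV cache and activations.
    """
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / (1024 ** 3)

# A 7B model quantized to 4 bits fits in roughly 4 GB of RAM,
# while the same model at 16-bit precision needs roughly 16 GB.
for bits in (4, 8, 16):
    print(f"7B @ {bits}-bit: ~{estimate_model_ram_gb(7, bits):.1f} GB")
```

This is why quantized versions of large models are so popular for local use: dropping from 16-bit to 4-bit weights cuts the memory footprint by roughly a factor of four, usually with only a modest loss in output quality.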