    How to run an LLM on your laptop

    By Techurz · July 18, 2025

    For Pistilli, opting for local models as opposed to online chatbots has implications beyond privacy. “Technology means power,” she says. “And so who[ever] owns the technology also owns the power.” States, organizations, and even individuals might be motivated to disrupt the concentration of AI power in the hands of just a few companies by running their own local models.

    Breaking away from the big AI companies also means having more control over your LLM experience. Online LLMs are constantly shifting under users’ feet: Back in April, ChatGPT suddenly started sucking up to users far more than it had previously, and just last week Grok started calling itself MechaHitler on X.

    Providers tweak their models with little warning, and while those tweaks might sometimes improve model performance, they can also cause undesirable behaviors. Local LLMs may have their quirks, but at least they are consistent. The only person who can change your local model is you.

    Of course, any model that can fit on a personal computer is going to be less powerful than the premier online offerings from the major AI companies. But there’s a benefit to working with weaker models—they can inoculate you against the more pernicious limitations of their larger peers. Small models may, for example, hallucinate more frequently and more obviously than Claude, GPT, and Gemini, and seeing those hallucinations can help you build up an awareness of how and when the larger models might also lie.

    “Running local models is actually a really good exercise for developing that broader intuition for what these things can do,” Willison says.

    How to get started

    Local LLMs aren’t just for proficient coders. If you’re comfortable using your computer’s command-line interface, which allows you to browse files and run apps using text prompts, Ollama is a great option. Once you’ve installed the software, you can download and run any of the hundreds of models it offers with a single command.
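The Ollama workflow described above can be sketched in a couple of commands. This is a minimal sketch: the install script is Ollama's documented installer for macOS and Linux (Windows users download an installer from the site instead), and the model name `qwen3:8b` is an assumption chosen to match the models mentioned later in this article; check Ollama's model library for current names.

```shell
# Install Ollama on macOS or Linux using the official install script.
curl -fsSL https://ollama.com/install.sh | sh

# Download (on first use) and start chatting with a model in one command.
# The model name here is an assumption; browse Ollama's library for options.
ollama run qwen3:8b
```

On the first `ollama run`, the model weights are downloaded and cached locally, so subsequent runs start immediately and work entirely offline.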

    If you don’t want to touch anything that even looks like code, you might opt for LM Studio, a user-friendly app that takes a lot of the guesswork out of running local LLMs. You can browse models from Hugging Face from right within the app, which provides plenty of information to help you make the right choice. Some popular and widely used models are tagged as “Staff Picks,” and every model is labeled according to whether it can be run entirely on your machine’s speedy GPU, needs to be shared between your GPU and slower CPU, or is too big to fit onto your device at all. Once you’ve chosen a model, you can download it, load it up, and start interacting with it using the app’s chat interface.

    As you experiment with different models, you’ll start to get a feel for what your machine can handle. According to Willison, every billion model parameters require about one GB of RAM to run, and I found that approximation to be accurate: My own 16 GB laptop managed to run Alibaba’s Qwen3 14B as long as I quit almost every other app. If you run into issues with speed or usability, you can always go smaller—I got reasonable responses from Qwen3 8B as well.
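Willison's rule of thumb above can be turned into a quick back-of-the-envelope check before you download anything. A minimal sketch, assuming roughly 1 GB of RAM per billion parameters; the 2 GB of headroom for the OS and other apps is my assumption, not a figure from the article:

```python
def estimated_ram_gb(params_billions: float) -> float:
    """Approximate RAM needed to run a local model:
    roughly 1 GB per billion parameters (Willison's rule of thumb)."""
    return params_billions * 1.0


def fits(params_billions: float, machine_ram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Check whether a model plausibly fits, leaving headroom for the OS.
    The 2 GB default headroom is an assumption for illustration."""
    return estimated_ram_gb(params_billions) + headroom_gb <= machine_ram_gb


# A 14B model on a 16 GB laptop is a tight fit, matching the article's
# experience of needing to quit almost every other app; an 8B model is easier.
print(fits(14, 16))  # just barely fits with 2 GB headroom
print(fits(8, 16))   # comfortable fit
```

In practice the real footprint also depends on quantization level and context length, so treat this as a starting estimate, not a guarantee.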
