    Researcher turns gpt-oss-20b into a non-reasoning base model

By Techurz | August 15, 2025


    OpenAI’s new, powerful open weights AI large language model (LLM) family gpt-oss was released less than two weeks ago under a permissive Apache 2.0 license — the company’s first open weights model launch since GPT-2 in 2019 — but developers outside the company are already reshaping it.

One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta. This week he unveiled gpt-oss-20b-base, his reworked version of OpenAI's smaller gpt-oss-20B model, which strips out the model's "reasoning" behavior and returns it to a pretrained "base" version that offers faster, freer, and far less censored and constrained responses.

    The model is available now on Hugging Face under a permissive MIT License, allowing it to be used for both additional research and commercial applications.
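
For readers who want to try it, loading the checkpoint follows the standard Transformers pattern. The sketch below is minimal and illustrative only; the repository id is assumed from Morris's announcement, and the precision and device settings are assumptions rather than instructions from the release.

```python
# Minimal loading sketch; repo id assumed from the announcement, settings illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jxm/gpt-oss-20b-base"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)
```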

How gpt-oss-20b-base differs from OpenAI's gpt-oss models

    To understand what Morris did, it helps to know the difference between OpenAI’s release and what AI researchers call a “base model.”


    Most LLMs offered by leading AI labs such as OpenAI, Anthropic, Google and even open source players like Meta, DeepSeek, and Alibaba’s Qwen team are “post-trained.”

This means they have gone through an additional phase where they are exposed to curated examples of desired behavior.

For instruction-tuned models, that means providing many examples of instructions paired with ideal responses, so the model learns to respond more helpfully, politely, or safely to natural language requests.

    The gpt-oss models OpenAI put out on August 5 were “reasoning-optimized”: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured “chain of thought” reasoning before producing a final answer.

The trend goes back to OpenAI's o1 model, released almost a year ago in September 2024, and has since been adopted by numerous leading AI labs: models are made to think longer over multiple steps and check their own work before producing a well-reasoned response for the user.

    That makes them better suited for tasks like coding, solving math problems, or answering factual questions with explanations — but also means their responses are filtered and steered away from unsafe or undesirable content.

    A base model is different. It’s the raw, pretrained version of a large language model before that reasoning-specific alignment is applied. Base models simply try to predict the next chunk of text given what’s come before, with no built-in guardrails, stylistic preferences, or refusal behaviors.

    They’re prized by some researchers because they can produce more varied and less constrained output, and because studying their unaligned behavior can reveal how models store knowledge and patterns from their training data.

    Morris’s goal was to “reverse” OpenAI’s alignment process and restore the smaller gpt-oss-20B to something much closer to its original pretrained state.

    “We basically reversed the alignment part of LLM training, so we have something that produces natural-looking text again,” he wrote in an X thread announcing the project. “It doesn’t engage in CoT anymore. It is back to a model that just predicts the next token on generic text.”

    OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only…

    or is it?

    turns out that underneath the surface, there is still a strong base model. so we extracted it.

introducing gpt-oss-20b-base

    — jack morris (@jxmnop) August 13, 2025

Rather than trying to jailbreak the model with clever prompts (which Morris said proved ineffective during his early experiments), he took a different tack after a conversation with OpenAI co-founder, former Anthropic researcher, and current Thinking Machines chief scientist John Schulman.

    The key was to think of alignment reversal as a small optimization problem: if most of the model’s pretrained knowledge is still present in its weights, then only a tiny, low-rank update might be needed to nudge it back toward base model behavior.

    Morris implemented that idea by applying a LoRA (low-rank adapter) update to just three layers of the model — the MLP layers at positions 7, 15, and 23 — with a rank of 16.

That meant training about 60 million parameters, or roughly 0.3% of the model's 21 billion total. He used around 20,000 documents from the FineWeb dataset, keeping the format as close as possible to the original pretraining data so the model wouldn't learn anything new, just re-enable broad free-text generation.
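
A rough sketch of that configuration using Hugging Face's PEFT library is below. The rank and layer indices come from Morris's description; the target module names and the alpha value are assumptions, and gpt-oss's mixture-of-experts MLP layers may expose different (or non-Linear) modules that need special handling in practice.

```python
# Sketch of a rank-16 LoRA restricted to three MLP layers, per the description above.
# Module names and lora_alpha are assumptions; gpt-oss's MoE layers may differ.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")

config = LoraConfig(
    r=16,                                          # rank 16, as reported
    lora_alpha=32,                                 # assumed scaling factor
    target_modules=["gate_up_proj", "down_proj"],  # illustrative MLP projection names
    layers_to_transform=[7, 15, 23],               # only the three layers Morris updated
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # expect on the order of 0.3% of ~21B parameters
```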

    Training took four days on eight NVIDIA H200 GPUs, Morris told VentureBeat via direct message on X, with a learning rate of 2e-6, a batch size of 16, and a maximum sequence length of 8,192 tokens.
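
Those reported hyperparameters map onto a fairly ordinary Hugging Face training configuration. The sketch below only illustrates the stated settings; the FineWeb slice, the data collation, and the way the batch size of 16 is split across the eight GPUs are assumptions.

```python
# Illustrative training configuration using the reported hyperparameters.
# Dataset slice, batch splitting across 8 GPUs, and precision are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

def tokenize(example):
    # Plain next-token prediction on raw documents, truncated to 8,192 tokens.
    return tokenizer(example["text"], truncation=True, max_length=8192)

args = TrainingArguments(
    output_dir="gpt-oss-20b-base-lora",
    learning_rate=2e-6,              # as reported
    per_device_train_batch_size=2,   # 2 x 8 GPUs = effective batch size 16 (assumed split)
    bf16=True,                       # assumed precision
    save_steps=100,                  # checkpoint frequently
    logging_steps=10,
)
```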

    Afterward, he merged the LoRA weights back into the model so users could run it as a standalone, fully finetuned artifact.
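
Merging a LoRA adapter back into the base weights is a one-call operation in PEFT; a sketch follows, with a hypothetical local adapter path.

```python
# Sketch: fold the trained adapter into the dense weights and save one artifact.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")
model = PeftModel.from_pretrained(base, "gpt-oss-20b-base-lora")  # hypothetical adapter path
merged = model.merge_and_unload()           # folds the low-rank update into the dense weights
merged.save_pretrained("gpt-oss-20b-base")  # standalone, fully finetuned checkpoint
```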

    Morris also had to contend with the limitations of current open tools for fine-tuning mixture-of-experts (MoE) architectures like gpt-oss.

Morris said he used Hugging Face's framework, which he found crashes frequently and supports only certain training modes, and wrote his own harness to checkpoint often and skip over data batches that risked overloading GPU memory.
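
Morris has not published that harness, so the following is only an illustration of the two behaviors he describes, frequent checkpointing and skipping batches that would exhaust GPU memory, written against plain PyTorch.

```python
# Illustration only: frequent checkpoints plus skip-on-OOM, as described above.
import torch

def run_step(model, optimizer, batch, step, save_every=100):
    try:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    except torch.cuda.OutOfMemoryError:
        # A batch that overloads GPU memory is dropped rather than crashing the run.
        optimizer.zero_grad(set_to_none=True)
        torch.cuda.empty_cache()
        return None
    if step % save_every == 0:
        model.save_pretrained(f"checkpoints/step-{step}")  # checkpoint often
    return loss.item()
```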

    Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified he is not claiming to have recovered the base model “weights” — the internal settings of the artificial neurons that make up the neural network of the model and govern its behavior.

    The world of AI is crazy right now cause you can just claim to have extracted the base model from GPT-OSS while effectively you’ve just trained a lora on Fineweb lol https://t.co/oAnAWpMQ26

    — Niels Rogge (@NielsRogge) August 15, 2025

    Rather, Morris says that his work has “recovered the base model’s *distribution* with some error,” that is, the probability patterns the model uses to generate outputs — even though the weights producing those patterns may differ.

    some people are getting confused about the experiment –

    we didn’t recover the base model’s *weights*. that might not even be possible.

    we recovered the base model’s *distribution*, with some error. an important question is how much.

    trying to figure that out right now… https://t.co/lfUG5QY4h0

    — jack morris (@jxmnop) August 15, 2025

    How the new gpt-oss-20b-base model’s behavior differs from gpt-oss-20b

The resulting gpt-oss-20b-base is noticeably freer in its outputs. It no longer defaults to explaining its reasoning step by step and will produce a wider range of responses, including instructions OpenAI's aligned model would refuse to give, such as how to build a weapon, lists of profanity, or plans for illegal activities.

    In short tests, Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried, showing that some memorized material is still accessible.

    Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format (“Human: … Assistant: …”), it will sometimes still act like a polite chatbot. And when run through the original gpt-oss chat template, it can still carry out reasoning tasks, albeit with some loss in quality.

    For best results in free-text mode, he advises prepending prompts with the model’s special beginning-of-sequence token <|startoftext|> and avoiding chat templates entirely.
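
In code, that advice amounts to prompting with raw text rather than a chat template. A minimal sketch is below; the prompt and generation settings are chosen for illustration, and the repository id is assumed from the announcement.

```python
# Free-text prompting sketch: raw text with the BOS token prepended, no chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jxm/gpt-oss-20b-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "<|startoftext|>The steam engine transformed European industry because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```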

    Building upon OpenAI’s big gpt-oss family release

    The gpt-oss family debuted to considerable attention. The two models — gpt-oss-120B and gpt-oss-20B — are text-only, multilingual, and built with a mixture-of-experts Transformer architecture. They were released under the permissive Apache 2.0 license, allowing unrestricted local use, fine-tuning, and commercial deployment.

    Performance benchmarks from OpenAI showed the larger 120B model matching or exceeding the proprietary o4-mini in reasoning and tool-use tasks, with the smaller 20B competitive with o3-mini.

This was OpenAI’s first open-weight release in six years, a move widely interpreted as a response to competitive pressure from other open-weights models, including China’s DeepSeek R1 and Qwen 3.

    The company positioned gpt-oss as both a way to re-engage developers who had moved to rival open-source models and as a platform for safety research into open-weight systems.

    Reaction to the initial gpt-oss was mixed

Developer reaction to OpenAI’s gpt-oss models has been decidedly mixed, ranging from enthusiastic to disappointed.

    Supporters praised the permissive license, efficiency, and strong showing on STEM benchmarks.

    Hugging Face CEO Clem Delangue described the release as a “meaningful addition to the open ecosystem” and urged the community to give it time to mature.

    Critics argued that the models appear heavily trained on synthetic data, making them excellent at math and coding but less capable at creative writing, general world knowledge, and multilingual reasoning.

    Some early testers also raised concerns about lingering safety filters and possible geopolitical bias.

    Against that backdrop, Morris’s gpt-oss-20b-base stands out as a concrete example of how open-weight models can be adapted and repurposed in the wild within days of release.

    Indeed, in contrast to the way OpenAI’s gpt-oss was received, most of the responses to Morris’s work I’ve seen are warm and elated. As one computer scientist wrote on X: “this is the coolest thing I’ve seen on Twitter [X] in the past few months.”

    man this is the coolest thing i’ve seen on twitter in the past few months i love base models

    — Ludan (@JMRLudan) August 15, 2025

    The approach strips away much of the behavior OpenAI built in and returns the model to something closer to a raw, pretrained system — a shift that’s valuable to researchers studying memorization, bias, or the impact of alignment, but that also comes with higher safety risks.

Furthermore, Morris says his work on restoring reasoning models to pretrained, non-reasoning base models will continue, including comparing his extraction approach on non-reasoning, instruction-tuned models such as those offered by Qwen.

