    The Time Sam Altman Asked for a Countersurveillance Audit of OpenAI

By Techurz · May 21, 2025

Dario Amodei’s AI safety contingent was growing disquieted with some of Sam Altman’s behaviors. Shortly after OpenAI’s Microsoft deal was inked in 2019, several of them were stunned to discover the extent of the promises that Altman had made to Microsoft about which technologies it would get access to in return for its investment. The terms of the deal didn’t align with what they had understood from Altman. If AI safety issues actually arose in OpenAI’s models, they worried, those commitments would make it far more difficult, if not impossible, to prevent the models’ deployment. Amodei’s contingent began to have serious doubts about Altman’s honesty.

    “We’re all pragmatic people,” a person in the group says. “We’re obviously raising money; we’re going to do commercial stuff. It might look very reasonable if you’re someone who makes loads of deals like Sam, to be like, ‘All right, let’s make a deal, let’s trade a thing, we’re going to trade the next thing.’ And then if you are someone like me, you’re like, ‘We’re trading a thing we don’t fully understand.’ It feels like it commits us to an uncomfortable place.”

    This was against the backdrop of a growing paranoia over different issues across the company. Within the AI safety contingent, it centered on what they saw as strengthening evidence that powerful misaligned systems could lead to disastrous outcomes. One bizarre experience in particular had left several of them somewhat nervous. In 2019, on a model trained after GPT‑2 with roughly twice the number of parameters, a group of researchers had begun advancing the AI safety work that Amodei had wanted: testing reinforcement learning from human feedback (RLHF) as a way to guide the model toward generating cheerful and positive content and away from anything offensive.

    But late one night, a researcher made an update that included a single typo in his code before leaving the RLHF process to run overnight. That typo was an important one: It was a minus sign flipped to a plus sign that made the RLHF process work in reverse, pushing GPT‑2 to generate more offensive content instead of less. By the next morning, the typo had wreaked its havoc, and GPT‑2 was completing every single prompt with extremely lewd and sexually explicit language. It was hilarious—and also concerning. After identifying the error, the researcher pushed a fix to OpenAI’s code base with a comment: Let’s not make a utility minimizer.
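To make the mechanics of that one-character bug concrete, here is a minimal, hypothetical sketch of a REINFORCE-style RLHF update in PyTorch. This is not OpenAI's code, and the function name and toy numbers are invented for illustration; it only shows how a policy-gradient loss is weighted by the reward model's score, so that flipping a single sign turns reward maximization into reward minimization.

```python
# Minimal sketch (not OpenAI's code) of how one flipped sign inverts an
# RLHF-style objective: the loss is weighted by the reward-model score, so
# negating it tells the optimizer to make penalized completions MORE likely.
import torch

def rlhf_step(logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Return a REINFORCE-style loss for one batch of sampled completions.

    logprobs: log-probabilities the policy assigned to each sampled completion.
    rewards:  per-completion scores from a human-preference reward model
              (higher = more "cheerful and positive").
    """
    # Correct objective: maximize expected reward, i.e. minimize -reward * logprob.
    loss = -(rewards * logprobs).mean()

    # The buggy one-character variant described above would instead read:
    #   loss = (rewards * logprobs).mean()
    # which minimizes expected reward -- a "utility minimizer" that steers the
    # model toward exactly the content the reward model penalizes.
    return loss

# Toy usage: two sampled completions, one scored well and one scored badly.
logprobs = torch.tensor([-1.2, -0.8], requires_grad=True)
rewards = torch.tensor([1.0, -2.0])
rlhf_step(logprobs, rewards).backward()
print(logprobs.grad)  # descent direction raises probability of the well-scored sample
```

Because the sign of the loss is the only thing telling the optimizer which direction "better" is, an overnight run with the inverted objective would steadily push the model toward whatever the reward model scores lowest, which is what the researchers found the next morning.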

    In part fueled by the realization that scaling alone could produce more AI advancements, many employees also worried about what would happen if different companies caught on to OpenAI’s secret. “The secret of how our stuff works can be written on a grain of rice,” they would say to each other, meaning the single word scale. For the same reason, they worried about powerful capabilities landing in the hands of bad actors. Leadership leaned into this fear, frequently raising the threat of China, Russia, and North Korea and emphasizing the need for AGI development to stay in the hands of a US organization. At times this rankled employees who were not American. During lunches, they would question, Why did it have to be a US organization? remembers a former employee. Why not one from Europe? Why not one from China?

    During these heady discussions philosophizing about the long‑term implications of AI research, many employees returned often to Altman’s early analogies between OpenAI and the Manhattan Project. Was OpenAI really building the equivalent of a nuclear weapon? It was a strange contrast to the plucky, idealistic culture it had built thus far as a largely academic organization. On Fridays, employees would kick back after a long week for music and wine nights, unwinding to the soothing sounds of a rotating cast of colleagues playing the office piano late into the night.
