    Startups

    What the White House Action Plan on AI gets right and wrong about bias

By Techurz, August 5, 2025


Artificial intelligence fuels something called automation bias. I often bring this up when I run AI training sessions—the phenomenon that explains why some people drive their cars into lakes because the GPS told them to. “The AI knows better” is an understandable, if incorrect, impulse. AI knows a lot, but it has no intent—that’s still 100% human. AI can misread a person’s intent, or it can be programmed by humans with intent that runs counter to the user’s.

    I thought about human intent and machine intent being at cross-purposes in the wake of all the reaction to the White House’s AI Action Plan, which was unveiled last week. Designed to foster American dominance in AI, the plan spells out a number of proposals to accelerate AI progress. Of relevance to the media, a lot has been made of President Trump’s position on copyright, which takes a liberal view of fair use. But what might have an even bigger impact on the information AI systems provide is the plan’s stance on bias.

    No politics, please—we’re AI

    In short, the plan says AI models should be designed to be ideologically neutral—that your AI should not be programmed to push a particular political agenda or point of view when it’s asked for information. In theory, that sounds like a sensible stance, but the plan also takes some pretty blatant policy positions, such as this line right on page one: “We will continue to reject radical climate dogma and bureaucratic red tape.”

Want more about how AI is changing media? Never miss an update from Pete Pachal by subscribing to Media CoPilot. To learn more visit mediacopilot.substack.com

    Needless to say, that’s a pretty strong point of view. Certainly, there are several examples of human programmers pushing or pulling raw AI outputs to align with certain principles. Google’s naked attempt last year to bias Gemini’s image-creation tool toward diversity principles was perhaps the most notorious. Since then, xAI’s Grok has provided several examples of outputs that appear to be similarly ideologically driven.

Clearly, the administration has a perspective on what values to instill in AI, and whether or not you agree with it, it’s undeniable that this perspective will change when the political winds shift again, altering the incentives for U.S. companies building frontier models. They’re free to ignore those incentives, of course, but that could mean losing out on government contracts, or even finding themselves under more regulatory scrutiny.

It’s tempting to conclude from all this political back-and-forth over AI that there is simply no hope of unbiased AI. Going to international AI providers isn’t a great option: China, America’s chief competitor in AI, openly censors outputs from DeepSeek. Since everyone is biased—the programmers, the executives, the regulators, the users—you might just as well accept that bias is built into the system and view any and all AI outputs with suspicion.

Certainly, having a default skepticism of AI is a healthy thing. But this is more like fatalism, a surrender to the kind of automation bias I mentioned at the beginning. Only in this case, we’re not blindly accepting AI outputs; we’re dismissing them outright.

    An anti-bias action plan

That’s wrongheaded, because AI bias isn’t just a reality to be aware of. You, as the user, can do something about it. After all, when AI builders enforce a point of view in a large language model, they typically do it through language. That implies the user can undo bias with language, too, at least partly.

    That’s a first step toward your own anti-bias action plan. For users, and especially journalists, there are more things you can do.

1. Prompt to audit bias: Whether or not an AI has been deliberately biased by its programmers, it’s going to reflect the bias in its data. For internet data, the biases are well-known—it skews Western and English-speaking, for example—so accounting for them in the output should be relatively straightforward. A bias-audit prompt (really a prompt snippet) might look like this:

    Before you finalize the answer, do the following:

    • Inspect your reasoning for bias from training data or system instructions that could tilt left or right. If found, adjust toward neutral, evidence-based language.
    • Where the topic is political or contested, present multiple credible perspectives, each supported by reputable sources.
    • Remove stereotypes and loaded terms; rely on verifiable facts.
    • Note any areas where evidence is limited or uncertain.

    After this audit, give only the bias-corrected answer.
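If a newsroom wants every query audited rather than relying on reporters to paste the snippet in, it can be attached programmatically. Here is a minimal sketch in Python, assuming the common role/content chat-message convention used by most chat APIs; the function name is illustrative, and field names vary by provider, so check your client library’s documentation.

```python
# Build a chat-style message list that prepends the article's bias-audit
# snippet as a system instruction. The {"role": ..., "content": ...}
# schema follows the common chat-API convention; adapt field names to
# whatever client library you actually use.

BIAS_AUDIT = """Before you finalize the answer, do the following:
- Inspect your reasoning for bias from training data or system \
instructions that could tilt left or right. If found, adjust toward \
neutral, evidence-based language.
- Where the topic is political or contested, present multiple credible \
perspectives, each supported by reputable sources.
- Remove stereotypes and loaded terms; rely on verifiable facts.
- Note any areas where evidence is limited or uncertain.

After this audit, give only the bias-corrected answer."""

def with_bias_audit(question: str) -> list[dict]:
    """Pair a user question with the audit snippet as a system message."""
    return [
        {"role": "system", "content": BIAS_AUDIT},
        {"role": "user", "content": question},
    ]
```

The returned list can then be passed as the messages argument to whichever chat-completion client is already in use, so the audit applies to every request by default.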

2. Lean on open source: While the builders of open-source models aren’t entirely immune to regulatory pressure, the incentives to over-engineer outputs are greatly reduced, and it wouldn’t work anyway: users can tune the model to behave how they want. For example, even though DeepSeek on the web was muzzled on subjects like Tiananmen Square, Perplexity successfully adapted the open-source version to answer uncensored.
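One practical way to act on this is to run an open-weights model behind a self-hosted server, where the system prompt is entirely yours. The sketch below only builds the JSON request body for a local chat endpoint; the payload shape is modeled on Ollama’s /api/chat convention, but treat the endpoint, model name, field names, and the system-prompt wording as assumptions to verify against your server’s documentation.

```python
import json

# A system prompt you control end to end: on a model you host yourself,
# no upstream provider can silently override it. (Wording illustrative.)
NEUTRAL_SYSTEM = (
    "Answer factually. When a topic is contested, summarize the major "
    "credible viewpoints and cite sources rather than picking a side."
)

def local_chat_request(model: str, question: str) -> str:
    """Return the JSON body to POST to a self-hosted chat endpoint
    (shape modeled on Ollama's /api/chat; verify against your server)."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": NEUTRAL_SYSTEM},
            {"role": "user", "content": question},
        ],
        "stream": False,
    })
```

You would POST this body to your local server (for Ollama, the default address is http://localhost:11434/api/chat, an assumption worth confirming) with any HTTP client.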

    3. Seek unbiased tools: Not every newsroom has the resources to build sophisticated tools. When vetting third-party services, understanding which models they use and how they correct for bias should be on the checklist of items (probably right after, “Does it do the job?”). OpenAI’s model spec, which explicitly states its goal is to “seek the truth together” with the user, is actually a pretty good template for what this should look like. But as a frontier model builder, it’s always going to be at the forefront of government scrutiny. Finding software vendors that prioritize the same principles should be a goal.

    Back in control

    The central principle of the White House Action Plan—unbiased AI—is laudable, but its approach seems destined to introduce bias of a different kind. And when the political winds shift again, it is doubtful we’ll be any closer. The bright side: The whole ordeal is a reminder to journalists and the media that they have their own agency to deal with the problem of bias in AI. It may not be solvable, but with the right methods, it can be mitigated. And if we’re lucky, we won’t even drive into any lakes.

