    Senator’s RISE Act would require AI developers to list training data, evaluation methods in exchange for ‘safe harbor’ from lawsuits

By Techurz | June 13, 2025

Amid an increasingly tense and destabilizing week in international news, it should not escape technical decision-makers’ notice that some lawmakers in the U.S. Congress are still moving forward with newly proposed AI regulations that could reshape the industry in powerful ways, and that seek to steady it moving forward.

    Case in point, yesterday, U.S. Republican Senator Cynthia Lummis of Wyoming introduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), the first stand-alone bill that pairs a conditional liability shield for AI developers with a transparency mandate on model training and specifications.

As with all newly proposed legislation, both the U.S. Senate and House would need to pass the bill by majority vote, and U.S. President Donald J. Trump would need to sign it before it becomes law, a process that would likely take months at the soonest.

“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” Lummis wrote on her X account when announcing the new bill. “We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”

The bill also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”

If enacted as written, the measure would take effect December 1, 2025, and apply only to conduct that occurs after that date.

    Why Lummis says new AI legislation is necessary

    The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.

    Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.

    In a statement on her website, Lummis calls the measure “predictable standards that encourage safer AI development while preserving professional autonomy.”

    With bipartisan concern mounting over opaque AI systems, RISE gives Congress a concrete template: transparency as the price of limited liability. Industry lobbyists may press for broader redaction rights, while public-interest groups could push for shorter disclosure windows or stricter opt-out limits. Professional associations, meanwhile, will scrutinize how the new documents can fit into existing standards of care.

    Whatever shape the final legislation takes, one principle is now firmly on the table: in high-stakes professions, AI cannot remain a black box. And if the Lummis bill becomes law, developers who want legal peace will have to open that box—at least far enough for the people using their tools to see what is inside.

How the new ‘safe harbor’ provision shielding AI developers from lawsuits works

    RISE offers immunity from civil suits only when a developer meets clear disclosure rules:

    • Model card – A public technical brief that lays out training data, evaluation methods, performance metrics, intended uses, and limitations.
    • Model specification – The full system prompt and other instructions that shape model behavior, with any trade-secret redactions justified in writing.

    The developer must also publish known failure modes, keep all documentation current, and push updates within 30 days of a version change or newly discovered flaw. Miss the deadline—or act recklessly—and the shield disappears.
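The bill describes what these documents must contain, not how they must be formatted. Purely as an illustration (the structure and field names below are invented for this sketch, not drawn from the legislation), a developer might represent the required disclosures as a structured record and check it for completeness before publishing:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical structure covering the RISE Act's disclosure items."""
    model_name: str
    version: str
    training_data_sources: list[str]       # provenance of training data
    evaluation_methods: list[str]          # how the model was tested
    performance_metrics: dict[str, float]  # e.g. accuracy by task
    intended_uses: list[str]
    limitations: list[str]
    known_failure_modes: list[str]
    redaction_justifications: list[str] = field(default_factory=list)

def missing_disclosures(card: ModelCard) -> list[str]:
    """Return the names of any required disclosure fields left empty."""
    required = ["training_data_sources", "evaluation_methods",
                "performance_metrics", "intended_uses",
                "limitations", "known_failure_modes"]
    return [name for name in required if not getattr(card, name)]
```

A check like missing_disclosures could run before any card is posted publicly, catching gaps while they are still cheap to fix.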

Professionals like doctors and lawyers remain ultimately liable for using AI in their practices

    The bill does not alter existing duties of care.

A physician who misreads an AI-generated treatment plan, or a lawyer who files an AI-written brief without vetting it, remains liable to clients.

    The safe harbor is unavailable for non-professional use, fraud, or knowing misrepresentation, and it expressly preserves any other immunities already on the books.

    Reaction from AI 2027 project co-author

    Daniel Kokotajlo, policy lead at the nonprofit AI Futures Project and a co-author of the widely circulated scenario planning document AI 2027, took to his X account to state that his team advised Lummis’s office during drafting and “tentatively endorse[s]” the result. He applauds the bill for nudging transparency yet flags three reservations:

    1. Opt-out loophole. A company can simply accept liability and keep its specifications secret, limiting transparency gains in the riskiest scenarios.
    2. Delay window. Thirty days between a release and required disclosure could be too long during a crisis.
    3. Redaction risk. Firms might over-redact under the guise of protecting intellectual property; Kokotajlo suggests forcing companies to explain why each blackout truly serves the public interest.

    The AI Futures Project views RISE as a step forward but not the final word on AI openness.

    What it means for devs and enterprise technical decision-makers

    The RISE Act’s transparency-for-liability trade-off will ripple outward from Congress straight into the daily routines of four overlapping job families that keep enterprise AI running. Start with the lead AI engineers—the people who own a model’s life cycle. Because the bill makes legal protection contingent on publicly posted model cards and full prompt specifications, these engineers gain a new, non-negotiable checklist item: confirm that every upstream vendor, or the in-house research squad down the hall, has published the required documentation before a system goes live. Any gap could leave the deployment team on the hook if a doctor, lawyer, or financial adviser later claims the model caused harm.

    Next come the senior engineers who orchestrate and automate model pipelines. They already juggle versioning, rollback plans, and integration tests; RISE adds a hard deadline. Once a model or its spec changes, updated disclosures must flow into production within thirty days. CI/CD pipelines will need a new gate that fails builds when a model card is missing, out of date, or overly redacted, forcing re-validation before code ships.
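The bill does not mandate any particular tooling, so what that gate looks like is a judgment call. A minimal sketch, assuming the model card ships alongside the model as a JSON file with a last_updated date (the file layout and field names here are hypothetical):

```python
import json
import sys
from datetime import date, timedelta
from pathlib import Path

# Hypothetical repo layout and field names; adjust to your own conventions.
CARD_PATH = Path("model_card.json")
# RISE ties its 30-day clock to a version change or newly discovered flaw;
# checking staleness against today is a conservative stand-in for that.
MAX_STALENESS = timedelta(days=30)

REQUIRED_FIELDS = (
    "training_data_sources", "evaluation_methods",
    "performance_metrics", "known_failure_modes",
)

def check_model_card() -> list[str]:
    """Return a list of problems; an empty list means the gate passes."""
    if not CARD_PATH.exists():
        return ["model card file is missing"]
    card = json.loads(CARD_PATH.read_text())
    errors = [f"model card field '{f}' is empty"
              for f in REQUIRED_FIELDS if not card.get(f)]
    last_updated = date.fromisoformat(card.get("last_updated", "1970-01-01"))
    if date.today() - last_updated > MAX_STALENESS:
        errors.append("model card is more than 30 days out of date")
    return errors

if __name__ == "__main__":
    problems = check_model_card()
    for p in problems:
        print(f"FAIL: {p}", file=sys.stderr)
    sys.exit(1 if problems else 0)  # any nonzero exit fails the CI build
```

Wiring a script like this into an existing pipeline stage means a stale or empty card blocks the release the same way a failing test would.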

    The data-engineering leads aren’t off the hook, either. They will inherit an expanded metadata burden: capture the provenance of training data, log evaluation metrics, and store any trade-secret redaction justifications in a way auditors can query. Stronger lineage tooling becomes more than a best practice; it turns into the evidence that a company met its duty of care when regulators—or malpractice lawyers—come knocking.
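As one illustration, with an invented schema rather than anything the bill specifies, the lineage record auditors can query might start as simply as an append-only SQLite table:

```python
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of an auditable lineage log. Schema is hypothetical.
conn = sqlite3.connect("lineage.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS lineage (
        recorded_at TEXT,
        model_version TEXT,
        dataset_source TEXT,     -- provenance of training data
        dataset_sha256 TEXT,     -- content hash for reproducibility
        eval_metric TEXT,
        eval_value REAL,
        redaction_note TEXT      -- written justification, if any
    )
""")

def record(model_version: str, dataset_source: str, dataset_sha256: str,
           eval_metric: str, eval_value: float,
           redaction_note: str = "") -> None:
    """Append one provenance/evaluation record with a UTC timestamp."""
    conn.execute(
        "INSERT INTO lineage VALUES (?, ?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), model_version,
         dataset_source, dataset_sha256, eval_metric, eval_value,
         redaction_note),
    )
    conn.commit()

# Example: log one evaluation run for a hypothetical model version.
record("v2.1.0", "s3://internal/training-corpus", "ab12f9...", "f1_score", 0.91)
```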

    Finally, the directors of IT security face a classic transparency paradox. Public disclosure of base prompts and known failure modes helps professionals use the system safely, but it also gives adversaries a richer target map. Security teams will have to harden endpoints against prompt-injection attacks, watch for exploits that piggyback on newly revealed failure modes, and pressure product teams to prove that redacted text hides genuine intellectual property without burying vulnerabilities.
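Real defense here is layered, and no single check suffices; as a deliberately simple sketch (the pattern list is invented for illustration), a team might at least screen inputs that echo the now-public base prompt's trigger phrases:

```python
import re

# Deliberately simple screen: production prompt-injection defense
# requires layered controls, not a static regex list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system )?prompt", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that mimic known instruction-override phrasing."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)
```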

    Taken together, these demands shift transparency from a virtue into a statutory requirement with teeth. For anyone who builds, deploys, secures, or orchestrates AI systems aimed at regulated professionals, the RISE Act would weave new checkpoints into vendor due-diligence forms, CI/CD gates, and incident-response playbooks as soon as December 2025.
