
    Inside Amsterdam’s high-stakes experiment to create fair welfare AI

    By Techurz | June 12, 2025

    Finding a better way

    Every time an Amsterdam resident applies for benefits, a caseworker reviews the application for irregularities. If an application looks suspicious, it can be sent to the city’s investigations department—which could lead to a rejection, a request to correct paperwork errors, or a recommendation that the candidate receive less money. Investigations can also happen later, once benefits have been disbursed; the outcome may force recipients to pay back funds, and even push some into debt.

    Officials have broad authority over both applicants and existing welfare recipients. They can request bank records, summon beneficiaries to city hall, and in some cases make unannounced visits to a person’s home. As investigations are carried out—or paperwork errors fixed—much-needed payments may be delayed. And often—in more than half of the investigations of applications, according to figures provided by Bodaar—the city finds no evidence of wrongdoing. In those cases, this can mean that the city has “wrongly harassed people,” Bodaar says. 

    The Smart Check system was designed to avoid these scenarios by eventually replacing the initial caseworker who flags which cases to send to the investigations department. The algorithm would screen the applications to identify those most likely to involve major errors, based on certain personal characteristics, and redirect those cases for further scrutiny by the enforcement team.

    If all went well, the city wrote in its internal documentation, the system would improve on the performance of its human caseworkers, flagging fewer welfare applicants for investigation while identifying a greater proportion of cases with errors. In one document, the city projected that the model would prevent up to 125 individual Amsterdammers from facing debt collection and save €2.4 million annually. 

    Smart Check was an exciting prospect for city officials like de Koning, who would manage the project once it was deployed. He was optimistic because, he says, the city was taking a scientific approach: it would “see if it was going to work” instead of taking the attitude that “this must work, and no matter what, we will continue this.”

    It was the kind of bold idea that attracted optimistic techies like Loek Berkers, a data scientist who worked on Smart Check in only his second job out of college. Speaking in a cafe tucked behind Amsterdam’s city hall, Berkers remembers being impressed at his first contact with the system: “Especially for a project within the municipality,” he says, it “was very much a sort of innovative project that was trying something new.”

    Smart Check used an algorithm called an “explainable boosting machine,” which allows people to more easily understand how an AI model produces its predictions. Most other machine-learning models are regarded as “black boxes”: they run abstract mathematical processes that are hard to understand both for the employees tasked with using them and for the people affected by the results.
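To illustrate why that distinction matters, here is a minimal sketch of what makes an explainable boosting machine readable. An EBM is an additive model, so an applicant’s final score is just a sum of per-feature contributions that can be audited one by one. The features and contribution tables below are invented for illustration; they are not the city’s actual model or data.

```python
# Sketch: an additive (EBM-style) score is a sum of per-feature terms.
# In a trained EBM these terms are learned shape functions; here they
# are hand-written stand-ins so the mechanics are visible.

def ebm_score(applicant, shape_functions):
    """Sum per-feature contributions; each term is individually auditable."""
    contributions = {
        name: fn(applicant[name]) for name, fn in shape_functions.items()
    }
    return sum(contributions.values()), contributions

# Invented contribution rules (hypothetical, for illustration only).
shape_functions = {
    "total_assets": lambda v: -0.3 if v > 10_000 else 0.2,
    "addresses_on_file": lambda v: 0.4 if v >= 3 else 0.0,
    "prior_applications": lambda v: 0.1 * v,
}

score, parts = ebm_score(
    {"total_assets": 2_000, "addresses_on_file": 3, "prior_applications": 2},
    shape_functions,
)
print(parts)   # every feature's contribution is visible on its own
print(score)   # the risk score is just their sum
```

Because the score decomposes this way, a caseworker (or an auditor) can point to exactly which characteristic pushed an applicant over a threshold—something a black-box model cannot offer directly.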

    The Smart Check model would consider 15 characteristics—including whether applicants had previously applied for or received benefits, the sum of their assets, and the number of addresses they had on file—to assign a risk score to each person. It purposefully avoided demographic factors, such as gender, nationality, or age, that were thought to lead to bias. It also tried to avoid “proxy” factors—like postal codes—that may not look sensitive on the surface but can become so if, for example, a postal code is statistically associated with a particular ethnic group.
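The “proxy” problem described above can be made concrete with a small, hypothetical screen: a feature like postal code becomes a proxy when group membership rates vary sharply across its values, so one rough check is to compare each value’s group rate against the overall base rate. The postcodes, group labels, and 0.2 threshold below are all invented for illustration; this is not the city’s actual procedure.

```python
# Hypothetical proxy-feature screen: flag feature values whose group
# membership rate deviates strongly from the overall base rate.
from collections import defaultdict

def flag_proxy_values(records, feature, group, threshold=0.2):
    counts = defaultdict(lambda: [0, 0])  # value -> [group members, total]
    total_group = total = 0
    for r in records:
        counts[r[feature]][0] += r[group]
        counts[r[feature]][1] += 1
        total_group += r[group]
        total += 1
    base = total_group / total
    # Keep values where the group rate differs from the base rate by
    # more than the threshold -- candidates for being proxy features.
    return {
        value: members / n
        for value, (members, n) in counts.items()
        if abs(members / n - base) > threshold
    }

# Synthetic records: postcode plus a sensitive-group indicator.
records = [
    {"postcode": "1011", "group_a": 1},
    {"postcode": "1011", "group_a": 1},
    {"postcode": "1011", "group_a": 0},
    {"postcode": "1097", "group_a": 0},
    {"postcode": "1097", "group_a": 0},
    {"postcode": "1097", "group_a": 0},
]
flagged = flag_proxy_values(records, "postcode", "group_a")
print(flagged)  # postcodes whose group rate diverges from the base rate
```

In this toy data both postcodes diverge from the base rate, so both would be flagged for review; in practice a designer would then decide whether to drop the feature or coarsen it.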

    In an unusual step, the city has disclosed this information and shared multiple versions of the Smart Check model with us, effectively inviting outside scrutiny of the system’s design and function. With this data, we were able to build a hypothetical welfare recipient to get insight into how an individual applicant would be evaluated by Smart Check.

    This model was trained on a data set encompassing 3,400 previous investigations of welfare recipients. The idea was that it would use the outcomes from these investigations, carried out by city employees, to figure out which factors in the initial applications were correlated with potential fraud. 
