    Two Critical Flaws Uncovered in Wondershare RepairIt Exposing User Data and AI Models

By Techurz · September 24, 2025


Cybersecurity researchers have disclosed two security flaws in Wondershare RepairIt that exposed private user data and left the system open to artificial intelligence (AI) model tampering and supply chain risks.

The critical-rated vulnerabilities in question, discovered by Trend Micro, are listed below:

    • CVE-2025-10643 (CVSS score: 9.1) – An authentication bypass vulnerability that exists within the permissions granted to a storage account token
• CVE-2025-10644 (CVSS score: 9.4) – An authentication bypass vulnerability that exists within the permissions granted to a SAS token

    Successful exploitation of the two flaws can allow an attacker to circumvent authentication protection on the system and launch a supply chain attack, ultimately resulting in the execution of arbitrary code on customers’ endpoints.

    Trend Micro researchers Alfredo Oliveira and David Fiser said the AI-powered data repair and photo editing application “contradicted its privacy policy by collecting, storing, and, due to weak Development, Security, and Operations (DevSecOps) practices, inadvertently leaking private user data.”

The poor development practices include embedding overly permissive cloud access tokens directly in the application’s code, granting read and write access to sensitive cloud storage. Furthermore, the data is said to have been stored without encryption, potentially opening the door to wider abuse of users’ uploaded images and videos.
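To illustrate the anti-pattern (this is not Wondershare’s actual code; the token value, URL, and function below are invented), an embedded storage token looks roughly like this, and the comments explain why it is dangerous:

```python
import requests

# ANTI-PATTERN: a long-lived, overly permissive SAS-style token compiled into
# the client. Anyone who extracts it from the binary gets the same read/write
# access the application has. All values here are invented for illustration.
EMBEDDED_TOKEN = "sv=2025-01-01&sp=rwdl&sig=REDACTED"  # read/write/delete/list
STORAGE_URL = "https://uploads.example.invalid/user-data"

def upload_user_file(name: str, data: bytes) -> None:
    # The token travels with every request from every installed copy.
    requests.put(f"{STORAGE_URL}/{name}?{EMBEDDED_TOKEN}", data=data, timeout=30)
```

A safer design keeps credentials out of the shipped binary entirely, with a backend service issuing short-lived, narrowly scoped tokens per upload.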

To make matters worse, the exposed cloud storage contained not only user data but also AI models, software binaries for various Wondershare products, container images, scripts, and company source code. This would enable an attacker to tamper with the AI models or executables, paving the way for supply chain attacks targeting downstream customers.

    “Because the binary automatically retrieves and executes AI models from the unsecure cloud storage, attackers could modify these models or their configurations and infect users unknowingly,” the researchers said. “Such an attack could distribute malicious payloads to legitimate users through vendor-signed software updates or AI model downloads.”
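A standard mitigation for this failure mode is integrity pinning: refuse to load any downloaded model whose hash does not match a value shipped inside the signed binary. A minimal sketch, with a hypothetical model name and a placeholder digest:

```python
import hashlib
import urllib.request

# Hypothetical pinned digests; a real build would embed the full SHA-256 of
# every model it is willing to execute.
PINNED_DIGESTS = {
    "repair-model-v3.onnx": "replace-with-expected-sha256-hex",
}

def fetch_model(url: str, name: str) -> bytes:
    blob = urllib.request.urlopen(url, timeout=60).read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != PINNED_DIGESTS[name]:
        raise RuntimeError(f"model {name} failed integrity check: got {digest}")
    return blob  # only load models that match the pinned digest
```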

Beyond customer data exposure and AI model manipulation, the issues carry grave consequences, ranging from intellectual property theft and regulatory penalties to erosion of consumer trust.

The cybersecurity company said it responsibly disclosed the two issues through its Zero Day Initiative (ZDI) in April 2025, but noted that it has yet to receive a response from the vendor despite repeated attempts. In the absence of a fix, users are recommended to “restrict interaction with the product.”

“The need for constant innovation fuels an organization’s rush to get new features to market and maintain competitiveness, but organizations might not foresee the new, unknown ways these features could be used or how their functionality may change in the future,” Trend Micro said.

“This explains how important security implications may be overlooked. That is why it is crucial to implement a strong security process throughout one’s organization, including the CI/CD pipeline.”

    The Need for AI and Security to Go Hand in Hand

    The development comes as Trend Micro previously warned against exposing Model Context Protocol (MCP) servers without authentication or storing sensitive credentials such as MCP configurations in plaintext, which threat actors can exploit to gain access to cloud resources, databases, or inject malicious code.

“Each MCP server acts as an open door to its data source: databases, cloud services, internal APIs, or project management systems,” the researchers said. “Without authentication, sensitive data such as trade secrets and customer records becomes accessible to everyone.”
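As a sketch of the missing control (deliberately framework-agnostic, not tied to any particular MCP SDK), even a minimal bearer-token check in front of an MCP-style HTTP endpoint closes that open door:

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED = os.environ["MCP_SERVER_TOKEN"]  # provisioned out of band, never hardcoded

class AuthedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        supplied = self.headers.get("Authorization", "").removeprefix("Bearer ")
        # Constant-time comparison avoids leaking the token via timing.
        if not hmac.compare_digest(supplied, EXPECTED):
            self.send_error(401, "missing or invalid token")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"jsonrpc": "2.0", "result": "ok", "id": 1}')

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AuthedHandler).serve_forever()
```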

In December 2024, the company also found that exposed container registries could be abused to gain unauthorized access, pull target Docker images, extract the AI models within them, modify the models’ parameters to influence their predictions, and push the tampered images back to the exposed registry.

    “The tampered model could behave normally under typical conditions, only displaying its malicious alterations when triggered by specific inputs,” Trend Micro said. “This makes the attack particularly dangerous, as it could bypass basic testing and security checks.”
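One way to blunt the final push-back step of that attack chain is to deploy images by immutable digest rather than by mutable tag, so a tampered image re-pushed under the same tag is never pulled. A sketch, with an invented registry name and digest:

```python
import subprocess

# Pulling by digest is content-addressable: any modification to the image
# changes its SHA-256, and the pull fails. The digest below is invented.
IMAGE = ("registry.example.com/ml/inference@"
         "sha256:4b6f0c8d8e1f9a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4d5e6f70819aaa")

def pull_pinned_image() -> None:
    subprocess.run(["docker", "pull", IMAGE], check=True)
```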

    The supply chain risk posed by MCP servers has also been highlighted by Kaspersky, which devised a proof-of-concept (PoC) exploit to highlight how MCP servers installed from untrusted sources can conceal reconnaissance and data exfiltration activities under the guise of an AI-powered productivity tool.

    “Installing an MCP server basically gives it permission to run code on a user machine with the user’s privileges,” security researcher Mohamed Ghobashy said. “Unless it is sandboxed, third-party code can read the same files the user has access to and make outbound network calls – just like any other program.”
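The sandboxing Ghobashy alludes to does not need to be exotic. A coarse but practical sketch (image name and resource limits are placeholders) is to run the untrusted server inside a container stripped of network access, write access, and capabilities:

```python
import subprocess

def run_untrusted_mcp_server() -> None:
    # Each flag removes one avenue the quoted attack relies on.
    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",        # no outbound calls for exfiltration
        "--read-only",              # no writes outside explicit mounts
        "--cap-drop", "ALL",        # shed all Linux capabilities
        "--memory", "256m",
        "--pids-limit", "64",
        "untrusted/mcp-server:latest",  # placeholder image name
    ], check=True)
```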

    The findings show that the rapid adoption of MCP and AI tools in enterprise settings to enable agentic capabilities, particularly without clear policies or security guardrails, can open brand new attack vectors, including tool poisoning, rug pulls, shadowing, prompt injection, and unauthorized privilege escalation.

    In a report published last week, Palo Alto Networks Unit 42 revealed that the context attachment feature used in AI code assistants to bridge an AI model’s knowledge gap can be susceptible to indirect prompt injection, where adversaries embed harmful prompts within external data sources to trigger unintended behavior in large language models (LLMs).

    Indirect prompt injection hinges on the assistant’s inability to differentiate between instructions issued by the user and those surreptitiously embedded by the attacker in external data sources.

Thus, when a user inadvertently supplies the coding assistant with third-party data (e.g., a file, repository, or URL) that an attacker has already tainted, the hidden malicious prompt can be weaponized to trick the tool into executing a backdoor, injecting arbitrary code into an existing codebase, or even leaking sensitive information.

    “Adding this context to prompts enables the code assistant to provide more accurate and specific output,” Unit 42 researcher Osher Jacob said. “However, this feature could also create an opportunity for indirect prompt injection attacks if users unintentionally provide context sources that threat actors have contaminated.”
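Screening attached context before it reaches the assistant is imperfect but cheap. The heuristic sketch below uses illustrative, trivially evadable patterns, so it should be read as defense in depth rather than a fix:

```python
import re

# Instruction-like phrases that have no business appearing in ordinary code,
# READMEs, or issue text handed to a coding assistant as context.
SUSPECT = re.compile(
    r"(ignore (all |any )?(previous|prior) instructions"
    r"|you are now"
    r"|run the following command"
    r"|do not (tell|inform) the user)",
    re.IGNORECASE,
)

def screen_context(text: str) -> str:
    if SUSPECT.search(text):
        raise ValueError("possible prompt-injection payload in attached context")
    return text
```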

AI coding agents have also been found vulnerable to what’s called a “lies-in-the-loop” (LitL) attack, which aims to convince the LLM that the instructions it’s been fed are much safer than they really are, effectively overriding human-in-the-loop (HitL) defenses put in place for high-risk operations.

    “LitL abuses the trust between a human and the agent,” Checkmarx researcher Ori Ron said. “After all, the human can only respond to what the agent prompts them with, and what the agent prompts the user is inferred from the context the agent is given. It’s easy to lie to the agent, causing it to provide fake, seemingly safe context via commanding and explicit language in something like a GitHub issue.”

    “And the agent is happy to repeat the lie to the user, obscuring the malicious actions the prompt is meant to guard against, resulting in an attacker essentially making the agent an accomplice in getting the keys to the kingdom.”
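The structural counter to LitL follows directly from Ron’s point: the confirmation step must show the operator the raw action to be performed, never the agent’s narrative about it. A minimal sketch:

```python
import shlex
import subprocess

def confirm_and_run(argv: list[str]) -> None:
    # Render the exact argv. An agent-supplied "description" is deliberately
    # not shown, because that is the channel a LitL attack lies through.
    print("Agent requests execution of:", shlex.join(argv))
    if input("Type 'yes' to allow: ").strip() == "yes":
        subprocess.run(argv, check=True)
    else:
        print("Denied.")
```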
