
    Why your enterprise AI strategy needs both open and closed models: The TCO reality check

    By Techurz | June 28, 2025

    This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

    For the last two decades, enterprises have had a choice between open-source and closed proprietary technologies.

    The original choice for enterprises centered primarily on operating systems, with Linux offering an open-source alternative to Microsoft Windows. In the developer realm, open-source languages like Python and JavaScript dominate, while open-source technologies such as Kubernetes have become standards in the cloud.

    The same type of choice between open and closed is now facing enterprises for AI, with multiple options for both types of models. On the proprietary closed-model front are some of the biggest, most widely used models on the planet, including those from OpenAI and Anthropic. On the open-source side are models like Meta’s Llama, IBM Granite, Alibaba’s Qwen and DeepSeek.

    Understanding when to use an open or a closed model is a critical decision for enterprise AI decision-makers in 2025 and beyond. Each option carries financial and customization implications that enterprises need to understand and consider.

    Understanding the difference between open and closed licenses

    There is no shortage of hyperbole around the decades-old rivalry between open and closed licenses. But what does it all actually mean for enterprise users?

    With a closed-source proprietary technology, such as OpenAI’s GPT-4o, the model code, training data and model weights are not open or available for anyone to see. The model cannot easily be fine-tuned and, generally speaking, it is only available for real enterprise usage at a cost (ChatGPT has a free tier, but that’s not going to cut it for a real enterprise workload).

    An open technology, like Meta Llama, IBM Granite, or DeepSeek, has openly available code. Enterprises can use the models freely, generally without restrictions, including fine-tuning and customizations.
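    To make that distinction concrete, the sketch below contrasts the two consumption patterns: a closed model reached only through the vendor’s hosted API, and an open-weights model downloaded and run on infrastructure you control. It is a minimal illustration assuming the OpenAI Python SDK and Hugging Face Transformers; the model identifiers and prompt are placeholders, not recommendations.

```python
# Closed model: inference goes through the vendor's hosted API; the weights
# themselves are never exposed, so customization is limited to what the API allows.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
closed_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this quarter's support tickets."}],
)
print(closed_reply.choices[0].message.content)

# Open model: the weights are pulled down and run locally, which is what makes
# fine-tuning and custom guardrails possible.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
open_reply = generator("Summarize this quarter's support tickets.", max_new_tokens=200)
print(open_reply[0]["generated_text"])
```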

    Rohan Gupta, a principal with Deloitte, told VentureBeat that the open vs. closed source debate isn’t unique or native to AI, nor is it likely to be resolved anytime soon. 

    Gupta explained that closed-source providers typically offer several wrappers around their models that enable ease of use, simplified scaling, more seamless upgrades and downgrades, and a steady stream of enhancements. They also provide significant developer support, including documentation and hands-on advice, and often deliver tighter integrations with both infrastructure and applications. In exchange, an enterprise pays a premium for these services.

     “Open-source models, on the other hand, can provide greater control, flexibility and customization options, and are supported by a vibrant, enthusiastic developer ecosystem,” Gupta said. “These models are increasingly accessible via fully managed APIs across cloud vendors, broadening their distribution.”

    Making the choice between open and closed models for enterprise AI

    The question that many enterprise users might ask is which is better: an open or a closed model? The answer, however, is not necessarily one or the other.

    “We don’t view this as a binary choice,” David Guarrera, Generative AI Leader at EY Americas, told VentureBeat. “Open vs. closed is increasingly a fluid design space, where models are selected, or even automatically orchestrated, based on tradeoffs between accuracy, latency, cost, interpretability and security at different points in a workflow.”

    Guarrera noted that closed models limit how deeply organizations can optimize or adapt behavior. Proprietary model vendors often restrict fine-tuning, charge premium rates, or hide the process in black boxes. While API-based tools simplify integration, they abstract away much of the control, making it harder to build highly specific or interpretable systems.

    In contrast, open-source models allow for targeted fine-tuning, guardrail design and optimization for specific use cases. This matters more in an agentic future, where models are no longer monolithic general-purpose tools, but interchangeable components within dynamic workflows. The ability to finely shape model behavior, at low cost and with full transparency, becomes a major competitive advantage when deploying task-specific agents or tightly regulated solutions.

    “In practice, we foresee an agentic future where model selection is abstracted away,” Guarrera said.

    For example, a user may draft an email with one AI tool, summarize legal docs with another, search enterprise documents with a fine-tuned open-source model and interact with AI locally through an on-device LLM, all without ever knowing which model is doing what. 

    “The real question becomes: what mix of models best suits your workflow’s specific demands?” Guarrera said.
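    One way to picture that abstraction is a thin routing layer that maps each workflow step to whichever model backs it, open or closed. The sketch below is a hypothetical illustration, not a real orchestration platform; the task names, model labels and stub handlers are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    model: str                     # label for the model backing this task
    hosting: str                   # "vendor-api" (closed) or "self-hosted" (open)
    handler: Callable[[str], str]  # function that actually runs the prompt

def call_closed_api(prompt: str) -> str:
    # Stub: a real implementation would call a vendor's chat-completion endpoint.
    return f"[closed-model answer to: {prompt}]"

def call_open_local(prompt: str) -> str:
    # Stub: a real implementation would invoke a self-hosted open-weights model.
    return f"[open-model answer to: {prompt}]"

# Each workflow step is mapped to the model that best fits its tradeoffs.
ROUTES = {
    "draft_email":     ModelRoute("general-purpose closed model", "vendor-api",  call_closed_api),
    "summarize_legal": ModelRoute("long-context closed model",    "vendor-api",  call_closed_api),
    "search_docs":     ModelRoute("fine-tuned open model",        "self-hosted", call_open_local),
    "on_device_chat":  ModelRoute("small open model",             "self-hosted", call_open_local),
}

def run(task: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model the workflow assigns to this task."""
    return ROUTES[task].handler(prompt)

print(run("search_docs", "Find our data-retention policy."))
```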

    Considering total cost of ownership

    With open models, the basic idea is that the model is freely available for use, whereas enterprises always pay for closed models.

    The reality when it comes to considering total cost of ownership (TCO) is more nuanced.

    Praveen Akkiraju, Managing Director at Insight Partners, explained to VentureBeat that TCO has many different layers. A few key considerations include infrastructure hosting costs and engineering: Are the open-source models self-hosted by the enterprise or the cloud provider? How much engineering, including fine-tuning, guardrailing and security testing, is needed to operationalize the model safely?

    Akkiraju noted that fine-tuning an open weights model can also sometimes be a very complex task. Closed frontier model companies spend enormous engineering effort to ensure performance across multiple tasks. In his view, unless enterprises deploy similar engineering expertise, they will face a complex balancing act when fine-tuning open source models. This creates cost implications when organizations choose their model deployment strategy. For example, enterprises can fine-tune multiple model versions for different tasks or use one API for multiple tasks.

    Ryan Gross, Head of Data & Applications at cloud-native services provider Caylent, told VentureBeat that from his perspective, licensing terms don’t matter, except in edge-case scenarios. The largest restrictions often pertain to model availability when data residency requirements are in place. In this case, deploying an open model on infrastructure like Amazon SageMaker may be the only way to get a state-of-the-art model that still complies. When it comes to TCO, Gross noted that the tradeoff lies between per-token costs and hosting and maintenance costs.

    “There is a clear break-even point where the economics switch from closed to open models being cheaper,” Gross said. 

    In his view, for most organizations, closed models, with hosting and scaling solved on the organization’s behalf, will have a lower TCO. However, for large enterprises, SaaS companies with very high demand on their LLMs but simpler use cases that don’t require frontier performance, or AI-centric product companies, hosting distilled open models can be more cost-effective.
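    As a rough way to see where that break-even point sits, the arithmetic below compares a pay-per-token API bill against a fixed self-hosting budget. Every number is an illustrative assumption; substitute your own vendor pricing and infrastructure quotes before drawing conclusions.

```python
# All figures are placeholder assumptions for illustration, not real quotes.
tokens_per_month = 2_000_000_000        # total input + output tokens across workloads
closed_price_per_1k_tokens = 0.005      # blended $ per 1K tokens for a hosted API
open_hosting_per_month = 9_000.0        # GPU instances, storage, monitoring
open_engineering_per_month = 6_000.0    # amortized fine-tuning and maintenance effort

closed_tco = tokens_per_month / 1_000 * closed_price_per_1k_tokens
open_tco = open_hosting_per_month + open_engineering_per_month

# Volume at which the fixed cost of self-hosting matches the metered API bill.
break_even_tokens = open_tco / closed_price_per_1k_tokens * 1_000

print(f"Closed (API) cost:   ${closed_tco:,.0f}/month")
print(f"Open (self-hosted):  ${open_tco:,.0f}/month")
print(f"Break-even volume:   {break_even_tokens:,.0f} tokens/month")
```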

    How one enterprise software developer evaluated open vs closed models

    Second Front Systems, where Josh Bosquez serves as CTO, is among the many firms that have had to consider and evaluate open vs. closed models.

    “We use both open and closed AI models, depending on the specific use case, security requirements and strategic objectives,” Bosquez told VentureBeat.

    Bosquez explained that open models allow his firm to integrate cutting-edge capabilities without the time or cost of training models from scratch. For internal experimentation or rapid prototyping, open models help his firm to iterate quickly and benefit from community-driven advancements.

    “Closed models, on the other hand, are our choice when data sovereignty, enterprise-grade support and security guarantees are essential, particularly for customer-facing applications or deployments involving sensitive or regulated environments,” he said. “These models often come from trusted vendors, who offer strong performance, compliance support, and self-hosting options.”

    Bosquez said that the model selection process is cross-functional and risk-informed, evaluating not only technical fit but also data handling policies, integration requirements and long-term scalability.

    Looking at TCO, he said that it varies significantly between open and closed models and neither approach is universally cheaper. 

    “It depends on the deployment scope and organizational maturity,” Bosquez said. “Ultimately, we evaluate TCO not just on dollars spent, but on delivery speed, compliance risk and the ability to scale securely.”

    What this means for enterprise AI strategy

    For smart tech decision-makers evaluating AI investments in 2025, the open vs. closed debate isn’t about picking sides. It’s about building a strategic portfolio approach that optimizes for different use cases within your organization.

    The immediate action items are straightforward. First, audit your current AI workloads and map them against the decision framework outlined by the experts, considering accuracy requirements, latency needs, cost constraints, security demands and compliance obligations for each use case. Second, honestly assess your organization’s engineering capabilities for model fine-tuning, hosting and maintenance, as this directly impacts your true total cost of ownership.

    Third, begin experimenting with model orchestration platforms that can automatically route tasks to the most appropriate model, whether open or closed. This positions your organization for the agentic future that industry leaders, such as EY’s Guarrera, predict, where model selection becomes invisible to end-users.
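    As a hedged illustration of the audit step above, the sketch below scores a couple of hypothetical workloads against signals drawn from the discussion (token volume, data residency, need for deep customization) and flags which ones warrant evaluating an open, self-hosted model. The workloads, field names and thresholds are made-up assumptions, not a standard framework.

```python
# Hypothetical workloads and thresholds -- replace with your own audit data.
workloads = [
    {"name": "customer support copilot", "monthly_tokens": 3_000_000_000,
     "data_residency_required": False, "needs_deep_customization": False},
    {"name": "regulated document search", "monthly_tokens": 200_000_000,
     "data_residency_required": True, "needs_deep_customization": True},
]

HIGH_VOLUME_TOKENS = 1_000_000_000  # illustrative proxy for the TCO break-even point

for w in workloads:
    open_signals = sum([
        w["monthly_tokens"] >= HIGH_VOLUME_TOKENS,   # volume shifts TCO toward self-hosting
        w["data_residency_required"],                # residency can force self-hosted open models
        w["needs_deep_customization"],               # fine-tuning and guardrails favor open weights
    ])
    verdict = "evaluate open/self-hosted" if open_signals >= 2 else "start with a closed API"
    print(f"{w['name']}: {verdict} ({open_signals} open-model signals)")
```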
