Broadcom – The AI Winner Most Investors Still Underestimate
I view Broadcom as one of the clearest potential long-term winners of this technological revolution. Let me explain why!
On Thursday, December 11, Broadcom released its fiscal Q4 earnings report, and it honestly could hardly have been better, with the company beating the top- and bottom-line consensus for a 16th straight quarter and issuing guidance that was well ahead of Wall Street’s and my own expectations.
Broadcom is simply firing on all cylinders and continues to far surpass estimates, as it is emerging as one of the absolute top beneficiaries of the AI boom. In fact, together with Google, I view Broadcom as one of the clearest potential long-term winners of this technological revolution.
Last quarter’s results combined with underlying demand dynamics and customer wins only further reinforced my long-term conviction in Broadcom, and with that, my regret of not owning any shares yet…
These results were brilliant, prompting me to significantly raise my medium-term financial projections, fueled in part not by the results themselves but by underlying industry shifts.
Let me just take you through it all – today, I want to go over Broadcom’s AI strategy and positioning, the company’s fiscal Q4 results, management’s commentary, and underlying developments in great detail to update my long-term thesis and medium-term financial projections.
No, this isn’t just an earnings review; it’s more of a deep dive into this next-gen AI winner!
But first, let me repeat my quick introduction of this company from my September analysis for those of you not so familiar with Broadcom, or those who are new here.
You see, Broadcom is one of the most strategically essential companies in global technology today. What began as a mid-sized semiconductor supplier has transformed into an almost $2 trillion infrastructure powerhouse that combines two worlds: high-performance (networking) semiconductors and enterprise software.
In very simple terms, on the semiconductor side, Broadcom dominates networking chips that power hyperscale data centers, cloud infrastructure, and AI clusters, with products like its Tomahawk and Jericho switch silicon serving as the backbone of modern internet traffic.
Broadcom’s semiconductors are essentially the highways inside AI data centers. Training and running AI models isn’t just about powerful GPUs; it’s about connecting tens of thousands of them so they can work together. That requires ultra-fast, reliable networking chips that move enormous amounts of data between processors with minimal delay.
This is where Broadcom comes in: its networking silicon ensures data can flow seamlessly across massive GPU clusters. Without this “plumbing,” GPUs would sit idle, bottlenecked by slow connections. In other words, Broadcom’s chips don’t do the AI math, but they enable it to be done at scale, which is why they are indispensable to every hyperscaler building out AI infrastructure today.
It’s no surprise then that Broadcom is experiencing an explosion in demand, as hyperscalers spend hundreds of billions to expand their AI capacity and rely heavily on Broadcom’s networking solutions to make it possible.
On top of this, Broadcom is increasingly embedded one layer deeper in the AI stack through its growing custom silicon business. Rather than simply supplying off-the-shelf networking chips, Broadcom partners with hyperscalers and large AI labs to co-design custom accelerators and system-level silicon tailored to their specific workloads. More on this all in a bit!
Also, beyond semiconductors, Broadcom has built a formidable enterprise software business that today generates roughly 41% of its revenue. Through acquisitions such as CA Technologies, Symantec Enterprise, and most recently VMware, Broadcom has assembled a portfolio of mission-critical software that enterprises depend on to run, secure, and virtualize their operations.
This dual model, with a rapidly growing semiconductor engine paired with a highly profitable, recurring software business, makes Broadcom a unique conglomerate and one of the most strategically essential companies in the entire tech ecosystem.
Alright, with that background now established, let’s delve in!
Welcome to InvestInsights — an independent equity research publication rooted in long-term, buy-and-hold investing, publishing actionable stock/equity research reports weekly!
📈 You’re reading my latest [FREE] stock analysis. If you like this analysis, make sure to like & subscribe to receive much more like this!
If you’d like full portfolio access (16% CAGR since 2022) & additional exclusive analyses, consider becoming a paid subscriber!
Broadcom’s AI strategy is best-in-class
Before we delve into the actual quarterly results, I want to explain in detail why Broadcom is one of the real winners in “AI” today and why I view it as one of the clearest potential long-term winners of this technological revolution.
You see, Broadcom has put itself in a unique, strategically advantageous position in the AI computing/Hyperscaler market, making it, in my view, a superior pick to AMD, Nvidia, or any of the hyperscalers themselves.
Instead of selling GPUs, Broadcom embeds itself at the core of AI infrastructure by acting as the preferred partner for hyperscalers and AI labs building custom AI silicon (like TPUs) and the systems that surround them. Its customers include Google, Oracle, and OpenAI.
In simple terms, Broadcom is effectively positioning itself as the gateway to custom silicon, thereby deeply embedding itself in its customers’ infrastructure.
Let me break it down into two parts.
First of all, what does Broadcom do in relation to AI?
For one, Broadcom supplies high-end off-the-shelf networking solutions that are foundational to AI infrastructure, as addressed in the introduction. Its products are widely deployed by hyperscalers to connect large GPU clusters used for AI training and inference.
As AI models scale, the performance of these networks increasingly determines overall system efficiency, making Broadcom’s silicon a critical piece in modern AI data centers. This part of Broadcom’s AI exposure is already material, highly visible, and directly tied to ongoing hyperscaler capex.
More strategically important, however, is Broadcom’s growing role in custom silicon. This part of the business is often misunderstood as being primarily about customized Ethernet switching. And while networking remains a core pillar, Broadcom today plays a much broader role across AI compute, interconnect, and system-level silicon, increasingly acting as a full-stack partner to hyperscalers and AI start-ups.
Nowadays, Broadcom’s custom silicon deals typically span one or more of the following layers: AI compute accelerators, networking silicon, and IP.
AI Compute Accelerators (TPU-like ASICs)
Broadcom is directly involved in the co-design and industrialization of custom AI accelerators used for training and inference. These are compute chips (not networking products) and are designed to reduce customer dependence on off-the-shelf GPUs from the likes of Nvidia and AMD.
In these engagements, Broadcom contributes:
Architecture co-design and physical implementation
Critical IP blocks (HBM memory controllers, high-speed SerDes, die-to-die interconnect)
End-to-end execution from design through tape-out, advanced packaging, and manufacturing coordination with TSMC
So, while customers design their chips in-house, Broadcom’s IP and its design and manufacturing expertise bring them to life. Well-known examples include Google’s TPUs (made with Broadcom) and more recent custom AI accelerator programs for large AI labs and hyperscalers, such as OpenAI, Microsoft, and Anthropic.
Customized Networking Silicon
Networking remains a major revenue driver, but even here, Broadcom’s role is increasingly custom rather than off-the-shelf. Hyperscalers deploy Broadcom-designed Ethernet switch ASICs and fabrics that are specifically tuned for AI workloads, including large-scale training clusters and high-bandwidth inference environments. These networking components are often deployed alongside custom accelerators as part of a single system design.
Interconnect, Memory, and System IP (“The Glue”)
Perhaps the least visible but most strategic layer is Broadcom’s interconnect and system IP. This includes chip-to-chip links, die-to-die interfaces, memory controllers, retimers, and PHYs. These components are embedded inside customers’ own silicon, even when the final chip is not branded as Broadcom. This layer significantly increases customer lock-in, as it ties Broadcom deeply into the customer’s silicon roadmap across multiple product generations.
Taken together, this explains why Broadcom’s custom silicon business cannot be understood as a collection of individual chip sales, the way it can for AMD or Nvidia.
The strategically crucial aspect of Broadcom’s custom silicon business is that it reflects system-level participation rather than discrete chip sales. Broadcom is not paid to deliver a single accelerator or switch; instead, it becomes embedded across the customer’s in-house AI architecture over multiple years as a core enabling partner to the customer’s own-designed silicon.
Revenue, therefore, accrues across an ongoing infrastructure program spanning compute, interconnect, and networking. That reduces churn risk, makes revenue more durable, and improves visibility, even in periods when the pace of AI investment fluctuates.
For example, a single “custom silicon customer” may involve Broadcom co-designing the AI accelerator itself (like Google TPUs), supplying the interconnect and memory IP embedded in that chip, delivering custom networking silicon to connect thousands of those accelerators, and supporting multiple respins and follow-on generations as workloads evolve. Each of these elements generates revenue at different times, but they are all part of one overarching program.
From an investor’s standpoint, this is highly valuable. System-level participation leads to longer revenue visibility, higher switching costs, reduced competition, and a different risk profile than product-based sales – a far superior profile in this environment.
Once a customer’s software, networking topology, and data center design are optimized around a specific custom silicon architecture, changing partners becomes costly and disruptive. As a result, Broadcom’s “custom silicon” revenue behaves less like cyclical chip sales and more like contracted infrastructure participation, which is why management emphasizes commitments and backlog rather than individual product wins.
Its positioning is just brilliant – Broadcom’s system-level role makes it one of the most consistent and visible beneficiaries of the AI buildout.
Second, why partner up and why Broadcom?
This naturally raises the question: why do hyperscalers increasingly choose to partner with Broadcom?
The answer lies in how AI computing itself is evolving. Initially, off-the-shelf accelerators from Nvidia and, to a lesser extent, AMD were the obvious choice. They offered a mature software ecosystem, rapid availability, and flexibility across many workloads. This made them ideal for research, experimentation, and early production deployment.
That phase is now giving way to a more industrialized phase of AI computing, where cost, power efficiency, and utilization dominate decision-making, especially as AI increasingly becomes a commodity.
Inference is central to this transition. While training consumes large bursts of compute, inference is continuous and scales directly with user demand. Every query, recommendation, and generated response runs through the inference infrastructure, often at massive and predictable volumes.
As inference becomes the dominant workload, inefficiencies that were tolerable during training become economically significant. Off-the-shelf GPUs are extremely powerful but general-purpose devices designed to handle a wide range of workloads. In large-scale inference environments, that generality increasingly manifests as excess power consumption, unused silicon, and a higher cost per query.
That is critical – this is where custom silicon comes in.
Custom silicon enables companies to design accelerators tailored to the exact characteristics of their inference workloads. Think of Google’s TPUs or Amazon’s Graviton CPUs: that is custom silicon. Architectural choices can be optimized for lower precision, specific model structures, memory access patterns, and latency constraints. When deployed at hyperscale, even modest efficiency gains compound into meaningful reductions in power draw, cooling requirements, and total cost of ownership.
This does not mean that off-the-shelf accelerators are being entirely displaced. For frontier model training, Nvidia and AMD remain the default choice due to their performance, interconnect technology, and a deeply optimized software stack. Training workloads are less predictable, evolve faster, and benefit from the flexibility of general-purpose accelerators.
The result is a hybrid architecture: training and experimentation often rely on commercial GPUs, while large-scale, steady-state inference increasingly shifts to custom accelerators.
And, as laid out, that is where Broadcom comes in. Yet, why Broadcom?
At a high level, hyperscalers look for partners because building leading-edge AI chips is no longer just a design problem; it is an execution, manufacturing, and systems-integration problem. Hyperscalers can design their own accelerators, but they cannot afford schedule slips, yield issues, or manufacturing missteps when each generation involves billions of dollars of capital investment.
Very few companies can reliably take a hyperscaler’s internal architecture, turn it into a manufacturable chip at advanced nodes, and deliver it at scale, on time, and across multiple generations. Broadcom is one of the very few firms that can do this consistently.
First, beyond decades of experience, Broadcom sits at the intersection of design, manufacturing, and packaging, where most custom silicon efforts fail. Within these deals, Broadcom does not simply license IP or hand over a finished RTL block; it manages the full design-to-silicon flow, works directly with TSMC on advanced nodes, and coordinates advanced packaging, memory integration, and yield optimization. For customers, this effectively outsources the hardest and riskiest parts of the process while still allowing them to retain architectural control and design differentiation.
Second, there is IP. Modern AI accelerators and networking chips are constrained less by raw compute than by memory bandwidth, interconnect, and power efficiency. Broadcom’s SerDes, die-to-die links, memory controllers, and optical and electrical interfaces are among the best-proven in production.
Customers partner with Broadcom because these components are already validated at scale and across generations, reducing both technical risk and time-to-deployment.
Another critical factor is the alignment of incentives and the business model. Broadcom is not trying to sell a proprietary software ecosystem or lock customers into a closed platform. It is content to be the silicon partner behind the scenes, enabling customers to own the software stack and the end-user relationship. For hyperscalers and AI labs that view silicon as a strategic differentiator, this neutrality is critical. It is also why partnering with Nvidia or AMD is not a real option for them.
In short, customers partner with Broadcom not because it offers the flashiest chips, but because it reliably turns ambitious silicon roadmaps into deployed reality at large scale.
The result is brilliant for Broadcom. It doesn’t need to specialize in designing actual GPUs; that is not what it does, and it wouldn’t be able to compete with Nvidia’s and AMD’s decades of research. Instead, Broadcom leaves the design to the hyperscaler itself, which spends tens of billions on R&D, and comes in as the critical partner that enables the realization of those designs across multiple fronts.
But, “Why Broadcom?” Aren’t there plenty of competitors or alternatives? Well, no.
While hyperscalers do have theoretical alternatives to Broadcom, each comes with meaningful trade-offs. Fully in-house development dramatically increases execution risk at advanced nodes, while peers such as Marvell operate at a smaller scale with less breadth across system layers and a shorter track record of multi-generation hyperscale deployments. Smaller design houses and startups may offer niche capabilities, but they lack the balance sheet, manufacturing leverage, and execution history required for mission-critical AI infrastructure at scale.
When viewed in this context, Broadcom’s competitive position becomes clearer. Hyperscalers are not choosing Broadcom because there are no alternatives in theory; they are choosing Broadcom because the practical alternatives either increase risk, fragment responsibility, or fail to scale economically. Broadcom combines deep custom silicon design capability, best-in-class networking and interconnect, proven execution at leading-edge nodes, and a willingness to operate as a long-term, behind-the-scenes partner rather than a platform owner.
This strategy makes Broadcom less replaceable than many AI beneficiaries that compete at a single layer of the stack, giving it greater revenue visibility and long-term leverage. Even as competition intensifies elsewhere in AI, the set of companies capable of playing Broadcom’s system-level role remains extremely small, which is why hyperscalers continue to return to it despite having nominal alternatives.
There is a reason Broadcom has already signed multi-year custom silicon deals with giants such as Google, OpenAI, Oracle, and Anthropic, with Microsoft and likely AWS potentially next. And it continues to add to the list every single quarter.
And don’t forget: these are multi-year, deeply embedded commitments with each customer!
Through one of the best business models, Broadcom is one of the clearest potential long-term winners of this technological revolution, of the AI boom.
With that framework in place, let’s turn to the quarterly results.
Broadcom delivers a clean quarter
Jumping straight into the numbers, Broadcom reported total fiscal Q4 revenue of $18 billion, beating consensus estimates by an impressive $560 million and delivering 28% year-over-year growth. Revenue also came in above management’s own guidance, driven by better-than-expected performance in both AI-related semiconductor revenue and infrastructure software.
Overall, these results were very encouraging. The AI-related beat was largely in line with expectations given the announcements throughout the quarter, but the software beat was a positive surprise, driven by VMware-related momentum as Broadcom continues to execute well on the post-acquisition integration. The combination resulted in 28% year-over-year growth, Broadcom’s best in over four years. Growth just continues to accelerate, driven by both software and semiconductors, but undeniably mainly thanks to the booming AI-related demand Broadcom sees amid rapid hyperscaler build-out investments.
At this scale, a revenue beat of more than 3% against already elevated expectations is notable. Broadcom continues to exceed consensus estimates quarter after quarter, reinforcing confidence in both its execution and the durability of its growth drivers.
Breaking down revenue by operation, let’s start with semiconductors. Q4 semiconductor revenue surged to $11.1 billion, up 35% YoY, and now accounts for 61% of revenue. Of course, the accelerating growth here was driven by AI-related revenue.
AI semiconductor revenue was up 74% YoY to $6.5 billion. This has exploded in recent quarters, now up 10x from 11 quarters ago. This is driven by healthy demand for Broadcom’s networking solutions amid rapid data center buildouts, and Broadcom is increasingly emerging as the go-to custom silicon partner for hyperscalers.
Starting with the latter, Broadcom reported accelerating revenue growth from custom silicon (or XPU), which more than doubled YoY in Q4. This is driven by a combination of factors.
For one, Broadcom continues to sign more and more multi-year custom silicon contracts with hyperscalers.
Last quarter, it received a new $10 billion XPU order, which has now been revealed to come from Anthropic, the AI start-up backed by Google, Nvidia, and Microsoft, becoming its fourth custom silicon client. This quarter, Anthropic placed another $11 billion order. The first revenue from this deal will be realized in the second half of 2026.
On top of this, Broadcom also signed a contract with a fifth XPU customer in Q4. The deal is only worth $1 billion, but the critical factor is that this is another hyperscaler or AI start-up choosing Broadcom for a multi-year custom silicon journey, integrating its IP deeply into its infrastructure. Similar to Anthropic, there is a good chance this customer will put in more orders with Broadcom over time, so the customer win is great news.
And quite recently, news also surfaced that Microsoft is reportedly discussing an XPU deal with Broadcom, moving away from its close peer Marvell. Nothing concrete has been announced, but this would be a big win for Broadcom, marking its sixth XPU customer.
These multi-year deals improve revenue visibility and long-term growth prospects.
Second, the push for custom silicon and TPUs for inference workloads in general is benefiting Broadcom.
During the earnings call, Broadcom indicated that its XPUs are not just used by its hyperscaler partners but are also being sold to third parties, expanding Broadcom’s revenue potential from these deals.
Google is probably the best example. Google’s TPUs are designed and manufactured in partnership with Broadcom, and these chips are now used not only by Google for Gemini but also shipped to companies such as Apple, Coherent, and SSI. And after recent success, both OpenAI and Meta have been named as potential Google TPU customers.
As TPU adoption extends beyond Google’s own needs, Broadcom benefits from higher volumes, longer program lifetimes, and a larger, more effectively addressable market tied to the same underlying architecture. These are very positive developments, with Broadcom the key beneficiary of a move away from Nvidia and AMD’s off-the-shelf solutions.
The strong XPU revenue growth reflects the combination of these factors, and momentum is expected to pick up further in the coming quarters, with Broadcom able to generate revenue from these deals very quickly and more deals coming in. But more on the outlook later on!
Besides XPU revenue, growth in AI networking has also been strong, helped by Broadcom’s recent release of its latest 102-terabit-per-second Tomahawk 6 switch, which has been ramping rapidly amid explosive demand.
As a result, Broadcom has a $10 billion order backlog in AI switches alone. Add to this the current XPU orders, and it has an AI-related backlog of over $73 billion, which is almost half of its total backlog of $162 billion (which includes software RPO).
Broadcom expects to deliver its entire AI backlog over the next 18 months, though the backlog itself is likely to keep growing even faster than it is delivered.
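As a rough sanity check – my own back-of-the-envelope math, not company guidance – delivering an AI backlog of over $73 billion within roughly 18 months implies a step-change in the quarterly AI run-rate versus the $6.5 billion just reported:

```python
# Back-of-the-envelope AI run-rate implied by the backlog (illustrative only;
# figures taken from the Q4 commentary discussed above).
ai_backlog_bn = 73.0        # AI-related backlog, $bn
delivery_quarters = 6       # ~18 months of deliveries
q4_ai_revenue_bn = 6.5      # actual Q4 AI semiconductor revenue, $bn

implied_avg_quarterly_bn = ai_backlog_bn / delivery_quarters
uplift_vs_q4 = implied_avg_quarterly_bn / q4_ai_revenue_bn

print(f"Implied average quarterly AI revenue: ${implied_avg_quarterly_bn:.1f}bn")
print(f"That is roughly {uplift_vs_q4:.1f}x the Q4 run-rate")
```

In other words, the backlog alone points to AI revenue averaging around $12 billion per quarter, consistent with the sharp acceleration management is guiding for.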
Turning to non-AI semiconductors, Q4 revenue came in at $4.6 billion, up 2% YoY, with consumer-facing products still facing lackluster demand amid customer caution. Broadband showed a solid recovery, but wireless remained flat, and all other end markets remained down YoY.
On to infrastructure software: Broadcom reported revenue of $6.9 billion, up a very strong 19% YoY and exceeding guidance by $200 million. Meanwhile, bookings also remained very strong at $10.4 billion in Q4, lifting the infrastructure software backlog to $73 billion, up almost 50% YoY, with orders consistently outpacing realized revenue.
While the street obviously focuses on AI, Broadcom’s software operations are performing very well and remain a critical, high-margin, and anti-cyclical part of the business.
Really, Broadcom is firing on all cylinders across both semiconductors and software. AI-related demand continues to accelerate, custom silicon programs are scaling in both number and depth, and networking remains capacity-constrained amid hyperscaler buildouts, while the software segment provides a growing, high-margin, and highly visible earnings base following the successful VMware integration.
The combination of rapid AI-driven growth and durable, recurring software revenue gives Broadcom a rare mix of upside and resilience, reinforcing its position as one of the most strategically advantaged beneficiaries of the ongoing AI infrastructure buildout.
Since this also marked the end of its fiscal FY25, it is worth noting that the company delivered 24% revenue growth to a record-high $64 billion. AI revenue grew 65% YoY, and software revenue was up 26% YoY.
With that, let’s move to the bottom-line results.
Broadcom reported a Q4 gross margin of 77.9%, better than guided but still down sequentially due to a negative semiconductor product mix (offset by higher-than-expected software revenue). You see, while custom silicon is driving exceptional growth for Broadcom, these revenues carry far lower gross margins, closer to 45-55%, so as this business scales, it becomes an increasing drag on Broadcom’s overall gross margin, as highlighted in Q4. Positively, this was somewhat offset by a very strong software gross margin of 93%, up 200 bps YoY.
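To make the mix effect concrete, here is a small illustrative sketch. The revenue weights are my own hypothetical assumptions (Broadcom does not disclose segment margins at this granularity); only the ~45-55% XPU and ~93% software margins come from the discussion above:

```python
# Illustrative only: how a growing share of ~50%-margin custom silicon dilutes
# the blended gross margin, even alongside 93%-margin software.
def blended_margin(segments):
    """segments: list of (revenue_share, gross_margin) tuples; shares sum to 1."""
    return sum(share * margin for share, margin in segments)

# Rough Q4-like mix (the shares and the "other semis" margin are assumptions)
today = [(0.25, 0.50),   # custom silicon (XPU), ~45-55% margin
         (0.37, 0.82),   # other semiconductors (assumed margin)
         (0.38, 0.93)]   # infrastructure software, ~93% margin

# Same segment margins, but the XPU share doubles at the expense of other semis
tomorrow = [(0.50, 0.50), (0.12, 0.82), (0.38, 0.93)]

print(f"Blended gross margin today:    {blended_margin(today):.1%}")
print(f"Blended gross margin tomorrow: {blended_margin(tomorrow):.1%}")
```

Even with every segment’s own margin unchanged, the blended figure falls by several hundred basis points purely from the shift in mix, which is exactly the dynamic management flagged.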
Moving further down the line, Broadcom reported total operating expenses of $2.1 billion, up only 5% YoY. As a result, Broadcom delivered an operating income of $11.9 billion, up 35% YoY and well outpacing revenue, driven by a 66.2% operating margin, which was up 70 bps sequentially despite a 50 bps decline in gross margin. Additionally, Broadcom reported an adjusted EBITDA of $12.2 billion, up 34% YoY, reflecting a 67.8% EBITDA margin, up 310 bps YoY, and coming in ahead of management’s 67% guidance.
Ultimately, this led to a non-GAAP EPS of $1.95, up 37% YoY and beating consensus estimates by $0.08.
Finally, Broadcom reported a very strong FCF of $7.5 billion in Q4, reflecting a 41% FCF margin. This brought the FY25 FCF to an impressive $26.9 billion, up 39% YoY. As shown below, Broadcom’s FCF margin took a hit in 2024 due to the VMware acquisition, a temporary headwind. However, this has been recovering gradually, with a strong rebound in 2025, and VMware is now a solid contributor to FCF, which hit a record high in Q4, even as the FCF margin is still recovering.
Broadcom is still an absolute FCF machine. And over time, I anticipate its FCF margin will continue to recover toward the 50% mark it sat at pre-VMware. Broadcom anticipates its operating margin expanding in the coming years despite a lower gross margin due to product mix, which should bode well for FCF.
When it comes to operating efficiency, there is nothing like Broadcom.
Driven by these strong cash flows and the expectation that this will continue to grow strongly in the years ahead, I am not concerned about Broadcom’s balance sheet, which does show leverage following the VMware acquisition.
As of the end of Q4, Broadcom held total cash of $16.2 billion and total debt of $67.1 billion, leaving it in a considerable net debt position of roughly $51 billion, which is far from ideal. Yet, considering this business will generate over $40 billion in FCF from 2026 onward, I don’t think this needs to be a concern. For reference, its cash pile is up $5.5 billion sequentially as of Q4, thanks to brilliant FCF generation.
Therefore, I think this debt pile is manageable for Broadcom, leaving it in acceptable financial health.
In fact, its brilliant cash generation still allows it to return cash to shareholders while strengthening its balance sheet. Broadcom raised its dividend by another 10% last week, bringing its annual dividend obligation to $12.2 billion, reflecting a conservative FCF payout ratio below 30%.
Shares now yield 0.66%, and while not impressive in itself, when combined with a conservative payout ratio, a 5-year growth rate of 13%, 14 consecutive years of growth, and its FCF outlook, it becomes much more compelling.
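The dividend math is easy to sanity-check. Note that the sub-30% payout ratio only works against forward FCF; against fiscal 2025 FCF it is closer to 45%, so the comparison against my FY26 projection is my own interpretation:

```python
# Quick checks on the dividend math (dollar figures from this article; the
# payout comparison against projected FY26 FCF is my own interpretation).
annual_dividend_bn = 12.2      # annual dividend obligation, $bn
dividend_yield = 0.0066        # 0.66% yield
fy25_fcf_bn = 26.9             # fiscal 2025 free cash flow, $bn
projected_fy26_fcf_bn = 40.5   # my FY26 FCF projection, $bn

# The yield implies a market cap consistent with the ~$2 trillion figure cited earlier
implied_market_cap_bn = annual_dividend_bn / dividend_yield

payout_vs_fy25 = annual_dividend_bn / fy25_fcf_bn
payout_vs_fy26 = annual_dividend_bn / projected_fy26_fcf_bn

print(f"Implied market cap: ~${implied_market_cap_bn / 1000:.2f} trillion")
print(f"Payout vs FY25 FCF: {payout_vs_fy25:.0%}")
print(f"Payout vs projected FY26 FCF: {payout_vs_fy26:.0%}")
```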
On that note, let’s move to its outlook!
Want more out of your subscription? Even more content like this weekly? Consider InvestInsights PRO — $7.50/month ($70/annually)
PRO gets you:
A guaranteed minimum of 6-8 stock analyses every month (of which at least 3-4 are paid-exclusive).
Full insight into my own portfolio, including allocation, transactions, watchlist, and performance (16% annualized return since January 2022).
Instant transaction alerts anytime I buy or sell any shares (Fully transparent).
An overview of all my target prices and ratings (available online, updated weekly).
Access to the InvestInsights Hub, containing all of the above in a single online sheet, updated constantly!
Outlook & Valuation
During the Q4 earnings call, Broadcom management was extremely bullish on its fiscal 2026 prospects. It sees a continued acceleration in AI revenue growth driven by strong customer spending momentum and a robust pipeline of deals. Meanwhile, Broadcom expects infrastructure revenue to remain a solid contributor, with software revenue growth likely to remain in the low double digits, offset slightly by stable non-AI semiconductor revenues.
This is a very positive backdrop, reflected in management’s Q1 guidance.
It now guides Q1 revenue to approximately $19.1 billion, reflecting 28% YoY growth, in line with Q4. This is well ahead of my expectations and the pre-earnings consensus of $18.31 billion.
This revenue number includes the expectation for semiconductor revenue of $12.3 billion, up 50% YoY and accelerating from 35% in Q4, driven by an expected 100% YoY growth in AI semiconductor revenue to $8.2 billion, accelerating from 74% growth in Q4. This is much better than I previously anticipated.
At the same time, software revenue will be a bit of a drag on growth, expected to only be around $6.8 billion, up 2% YoY. This is due to seasonal renewals in Q1, but growth should improve materially in the following quarters, likely allowing for group revenue growth to accelerate considerably.
On the bottom line, management anticipates the gross margin will fall another 100 bps sequentially to roughly 77%, primarily due to a shift in the AI mix toward lower-margin XPUs and a lower share of high-margin software revenue. This headwind will likely persist and mount throughout the year.
Positively, management expects growing operating leverage to offset this headwind further down the line, and still anticipates a resilient operating margin. Management expects this to result in an EBITDA margin of 67%, just below the Q4 number. Also, the tax rate will be 250 bps higher YoY due to the impact of the global minimum tax and a shift in the geographic mix of income, presenting a mild headwind to EPS.
Moving to my own projections, the better-than-expected Q4 results, upbeat Q1 guidance and commentary, as well as very promising underlying developments, allow me to meaningfully raise my medium-term financial estimates, with Broadcom clearly fulfilling a much more present role in the AI boom. I clearly underestimated Broadcom in my September coverage.
For fiscal 2026, I am raising my revenue projection by almost 50%(!) to $96.5 billion. This expectation is driven mainly by three factors: 1) management is likely guiding cautiously for Q1, especially on software; 2) AI revenue should continue to accelerate throughout the year, especially with many recently signed deals expected to start delivery in 2H26, which suggests AI revenue will likely grow by over 150% in 2026; and 3) Broadcom is likely to sign more deals that could start delivery later in 2026. So, growth will lean toward the end of the year and should accelerate rapidly from the 28% guided for Q1.
The 2026 backdrop for Broadcom is just nothing short of exceptional, with XPU (custom silicon) demand booming and Broadcom the top candidate to win these deals.
Of course, the downside of this rapid AI growth is gross margin weakening, which will be felt in 2026 as AI revenue accounts for a much higher percentage of total revenue. Therefore, I expect some pressure on margins, though I will follow management’s guidance, which points to a resilient operating margin, with improving operating leverage (mainly from scale) offsetting a much lower gross margin. So, I expect a roughly flat to slightly lower operating margin, but with limited buybacks and a higher tax rate, I anticipate some pressure on net income and EPS. Therefore, I expect EPS to grow more slowly than revenue, rising 47% YoY in 2026.
Finally, I anticipate a roughly flat YoY FCF margin, reflecting some margin weakness. Nevertheless, this translates into $40.5 billion of 2026 FCF.
Then, looking further ahead, I expect Broadcom to keep delivering exceptional growth through the end of the decade, driven by rapid growth in the underlying market. For reference, analysts expect the AI accelerator market to grow at a 40-50% CAGR in the coming years, and custom silicon will take a growing piece of the pie.
Even with just its existing customer base (Google, Anthropic, OpenAI, Oracle, and soon Microsoft), Broadcom should be able to fully benefit from this market growth, with new deals adding further upside.
Therefore, I anticipate a 31.4% revenue CAGR for Broadcom through fiscal 2029. Even this could prove conservative, as I build a growing margin of safety into my projections the further out we go, so 2029 revenue could well come in meaningfully higher.
Meanwhile, I expect margins to expand again from 2027 onward, as operating leverage will outpace the drag on gross margins from product mix. As a result, I expect EPS to grow at an even faster 35% CAGR and FCF to grow at a 36% CAGR, resulting in a fiscal 2029 FCF of a whopping $91 billion (48% FCF margin).
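As a quick sanity check, the compounding behind these projections can be reproduced in a few lines. The sketch below assumes the CAGRs are measured from a fiscal-2025 revenue base of roughly $63.5 billion (my assumption; the base year and figure are not stated in this excerpt) and applies the 31.4% revenue CAGR and 48% FCF margin from above:

```python
# Sanity-check the article's compounding math.
# ASSUMPTION: CAGRs run from a fiscal-2025 revenue base of ~$63.5B
# (not stated in the article); all figures in $ billions.

def compound(base: float, cagr: float, years: int) -> float:
    """Grow `base` at `cagr` (e.g. 0.314 = 31.4%) for `years` years."""
    return base * (1 + cagr) ** years

rev_2025 = 63.5          # assumed fiscal-2025 revenue base, $B
rev_cagr = 0.314         # projected revenue CAGR through FY2029
fcf_margin_2029 = 0.48   # projected FY2029 FCF margin

rev_2029 = compound(rev_2025, rev_cagr, 4)   # FY2025 -> FY2029 is 4 years
fcf_2029 = rev_2029 * fcf_margin_2029

print(f"Implied FY2029 revenue: ${rev_2029:.0f}B")
print(f"Implied FY2029 FCF:     ${fcf_2029:.0f}B")  # lands close to ~$91B
```

Under that assumed base, the math lands within a billion of the $91 billion FCF figure, which suggests the projections are internally consistent.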
All these assumptions are reflected in the numbers below!
That then brings us to valuation. Although Broadcom shares have outperformed, up 55% YTD and 99% over the last 12 months, I actually think they are far from overvalued, as my projections have risen much faster than the share price has!
For reference, at a current share price of $360 (following yesterday’s 11% sell-off), $AVGO shares now trade at:
– 36x this fiscal year’s earnings and 26x next year’s;
– a growth-adjusted PEG of 1.13;
– 42x this year’s FCF and 30x next year’s FCF.
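For transparency, the PEG figure is simple division: forward P/E over expected EPS growth. The growth input below is my assumption (roughly 32%, in line with the mid-thirties EPS CAGR projected above), since the article does not state which growth rate it uses:

```python
# PEG = forward P/E divided by expected EPS growth rate (in percent).
# ASSUMPTION: ~32% expected EPS growth; the exact input is not stated
# in the article.

def peg_ratio(forward_pe: float, eps_growth_pct: float) -> float:
    return forward_pe / eps_growth_pct

peg = peg_ratio(forward_pe=36.0, eps_growth_pct=32.0)
print(f"Growth-adjusted PEG: {peg:.3f}")  # 1.125, rounding to the quoted 1.13
```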
Expensive? Objectively, yes, but when viewed through the lens of Broadcom’s current growth profile, runway, and, more importantly, the quality and visibility of that growth and the business itself, I believe the opposite is true – shares look rather attractive.
Broadcom is no longer a classic cyclical semiconductor name. It is increasingly a hybrid of (1) an AI infrastructure enabler with real multi-year program visibility and (2) a high-margin, recurring enterprise software compounder. In other words, a larger share of its future cash flows is both more durable and more contract-like than what the market typically associates with semis.
In that context, a ~36x forward earnings multiple does not look extreme for a business I now expect to compound EPS at a mid-thirties rate through the end of the decade, while also producing massive free cash flow that can be used to de-lever, grow the dividend, and eventually resume meaningful buybacks. And that is me being cautious – there is still loads of upside.
The market is effectively paying up for three things that Broadcom has in an unusual combination: a sustained AI-driven top-line acceleration, an expanding long-term custom silicon opportunity set, and a software segment that anchors margins and smooths cyclicality.
The more relevant lens here is free cash flow. At ~30x next year’s FCF, Broadcom screens optically rich versus the broad market, but far less demanding versus other “AI winners” once you adjust for durability and downside protection. Many AI beneficiaries are valued on revenue-growth narratives, still face significant uncertainty around monetization, or operate in brutally competitive layers of the stack. Broadcom, in contrast, sits in the infrastructure layer where spending is more necessity-driven, with high switching costs and a growing portion of demand tied to multi-year deployments rather than quarter-to-quarter experimentation – this differentiated positioning means its backlog and outlook are much more realistic than those we see elsewhere in the AI supply chain.
That said, I do not want to hand-wave away the key valuation risk: the shift toward custom silicon can pressure gross margin, and the stock will likely remain sensitive to any signs of hyperscaler capex digestion or deployment schedule delays. In other words, the multiple can compress in the short run even if the long-term thesis remains intact.
However, I think current prices offer plenty of downside protection. For reference, if we assume a 30x (earnings) 2028 exit multiple, which I believe is a conservative multiple that leaves ample upside and downside protection, I calculate an end-of-fiscal-2028 target price of $548. Based on today’s $360 share price, this implies potential annualized returns exceeding 15%.
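The annualized-return arithmetic behind that target is easy to reproduce. A full three-year horizon to end of fiscal 2028 is my assumption; the 15.5% implied CAGR quoted below suggests a slightly shorter effective holding period:

```python
# Annualized return implied by a price target:
#   r = (target / current) ** (1 / years) - 1
# ASSUMPTION: a three-year horizon to end of fiscal 2028.

def implied_cagr(current: float, target: float, years: float) -> float:
    return (target / current) ** (1 / years) - 1

r = implied_cagr(current=360.0, target=548.0, years=3.0)
print(f"Implied annualized return: {r:.1%}")  # ~15.0%, i.e. "exceeding 15%"
```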
In my book, that is a sufficient, market-beating return profile. And considering this is based on loads of conservatism, I believe the risk-reward is excellent at its current price.
So, long story short: there are two stocks to own in the next generation of computing and amid the AI boom. One is Google (a conversation for another day), and the other is Broadcom.
Yes, I would prefer to own Broadcom over Nvidia or AMD at any price – very much so.
After yesterday’s sell-off, I am pulling the trigger on Broadcom.
Rating: Buy - Accumulate below $365
FY28 Target Price: $548
Implied CAGR from the current price: 15.5%