That would only work if Intel has a competitive foundry. Intel produces very high margin chips. Can it be competitive with TSMC in low margin chips where costs must be controlled?
The rumors I've heard (not sure about their credibility) are that Intel is simply not competitive in terms of costs and yields.
And that's even before considering that it doesn't really have a process competitive with TSMC's.
It's easy to say it should become a foundry, it's much harder to actually do that.
Intel flopped so hard on process nodes for 4 years up until Gelsinger took the reins... it was honestly unprecedented levels of R&D failure. What happened over the 8 years prior was that hedge funds and banks had loaded up on Intel stock, which was paying healthy dividends due to cost cutting and "coasting". This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of Intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.
Intel's 18A is roughly 6 months ahead of schedule, set to begin manufacturing in the latter half of 2024. Most accounts put this ahead of TSMC's equivalent N2 node...
Fab investments have a 3 year lag on delivering value. We're only starting to see the effect of putting serious capital and focus on this, as of this year. I also think we'll see more companies getting smart about having all of their fabrication eggs in one of two baskets (samsung or tsmc) both within a 500 mile radius circle in the south china sea.
Intel has had 4 years of technical debt on its fabrication side, negative stock pressure from the vacuum created by AMD and Nvidia, and is still managing to be profitable.
I think the market (and analysts like this) are all throwing in the towel on the one company that has quite a lot to gain at this point, after losing a disproportionate amount of share value and market share.
I just hope they keep Pat at the helm for another 2 years to fully deliver on his strategy or Intel will continue where it was headed 4 years ago.
My understanding is that 5nm has been and continues to be "problematic" in terms of yield. The move to 3nm seems to not be afflicted by as many issues. There is also a massive drive to get more volume (and a die shrink will do that), due to the demands of all things ML.
I suspect that TSMC's move here is a bit more nuanced than the (valid) point that the article is making on this step...
Here's one example, but there's a pile of them.
https://twitter.com/PGelsinger/status/1751653865009631584
I guess Intel does need the help of a higher power at this stage.
They got lazy and rested on their laurels when AMD was struggling, and they didn't view ARM as a threat. TSMC was probably a joke to them... until everyone brought out killer products and Intel had no response to them. They could have been way ahead of the pack by now, but they decided to harvest the market instead of innovating aggressively. Right now they're worth less than $200bn, which is less than half of Broadcom or TSMC, 30% less than AMD, and 10% of Nvidia. Is it intrinsically worth that little? Probably not; I think it's a buy at this price.
The fact that Intel stock went up during the Brian Krzanich tenure as CEO is simply a reflection of that being the free money era that lifted all boats/stocks. Without that we would be writing Intel’s epitaph now.
You cannot play offense in tech when there is a big market shift.
I am hoping Intel rights its wrongs so they can stay competitive. It takes a good amount of competition to keep businesses honest.
Having separate processors with separate memory and a separate software stack to do matrix operations works, but it would be much more convenient and productive to have a system with one RAM and CPUs that can do matrix operations efficiently, without requiring the programmer to delegate those operations to a separate stack. Even the name 'Graphics Processing Unit' suggests that the current approach for AI is rather hacky.
Because of this, in the long run there could be an opportunity for Intel and other CPU manufacturers to regain the lucrative market from NVIDIA.
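To make the "separate stack" point concrete, here is a minimal sketch (assuming PyTorch and an available CUDA device; purely illustrative): the CPU path works in one address space, while the accelerator path makes the programmer ship data over to a device with its own memory and copy the result back.

    import torch  # assumes PyTorch is installed

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    # CPU path: one RAM, no explicit data movement
    c_cpu = a @ b

    # Accelerator path: operands must be copied into the device's own memory,
    # the kernel runs there, and the result comes back over PCIe
    if torch.cuda.is_available():
        c_gpu = (a.to("cuda") @ b.to("cuda")).to("cpu")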
That makes a lot of sense to me: that's how and why PCBs are usually designed as they are. How true is it that there is an actual advantage vs TSMC?
They needed (and need) a lot more paranoia.
They didn't. Every press conference downplayed, deflected, and denied the performance issues; every patch to the Linux kernel was "disabled by default." They lied through their teeth, and real players like Amazon, Vultr, and other hosting providers in turn left for AMD.
No, but ARM should've rung many bells.
Intel poured many billions into mobile.
They didn't understand that the future, from smartphones to servers, was about power efficiency and scale.
Eventually their lack of power efficiency made them lose ground in all their core business. I hope they will get this back and not just by competing on manufacturing but architecture too.
The problem is that Intel has had a defensive strategy for a long time. Yes, they crushed many attempts to breach the x86 moat but failed completely and then gave up attempts to reach beyond that moat. Mobile, foundry, GPUs etc have all seen half-hearted or doomed attempts (plus some bizarre attempts to diversify - McAfee!).
I think that, as Ben essentially says, they put too much faith in never-ending process leadership and the ongoing supremacy of x86. And when that came to an end the moat was dry.
I think TSMC learned from Intel's downfall more than Intel did. I don't see any industry traction from IFS. They can research any new technology they want; without wafer orders it's a recipe for a quick cash burn.
Edit: relevant video, because whoever downvoted did not get what I was referring to: https://www.youtube.com/watch?v=xUT4d5IVY0A
Of the long pipeline of future CMOS manufacturing processes with which Intel hopes to close the performance gap with TSMC, there exists for now a single commercial product: Meteor Lake, which consists mostly of chips made by TSMC, with one single Intel 4 die, the CPU tile.
The Meteor Lake CPU seems to have finally reached the energy efficiency of the TSMC 5-nm process of almost 4 years ago, but it also has obvious difficulties in reaching high clock frequencies, exactly like Ice Lake in the past, so once more Intel has been forced to accompany Meteor Lake with Raptor Lake Refresh made in the old technology, to cover the high-performance segment.
Nevertheless, Meteor Lake demonstrates reaching the first step with Intel 4.
If they succeed in launching their Intel 3-based server products on time and with good performance later this year, that will be a much stronger demonstration of their real progress than this Meteor Lake preview, which has also retained their old microarchitecture for the big cores, so it shows nothing new there.
Only by the end of 2024 will it become known whether Intel has really become competitive again, after seeing the Arrow Lake microarchitecture and the Intel 20A manufacturing process.
I'd buy this if they'd actually built a fab, but right now this seems too-little, too-late for a producer's economy.
The rest frankly doesn't matter much. Intel processors are only notable in small sections of the market.
And frankly—as counter-intuitive as this may seem to such an investor-bullish forum—the death knell was the government chip subsidy. I simply can't imagine American government and private enterprise collaborating to produce anything useful in 2024, especially when the federal government has shown such a deep disinterest in holding the private economy accountable to any kind of commitment. Why would Intel bother?
It is not a good long term strategy: The winds of politics may change, politicians may set more terms (labour and environment), foreign market access may become politicized too (US politicians will have to sell chips like they sell airplanes on foreign trips).
So Intel will end up like the old US car makers or Boeing - no longer driven by technological innovation but instead by its relationship to Washington.
In terms of the end product - not really. The last 3-4 gens are indistinguishable to the end user. It's a combined effect of marketing failure and really underwhelming gains - when marketing screams "breakthrough gen" but what you get is +2% ST perf for another *Lake, you can't sell it.
They might have built a foundation, and that might be a deliberate tactic to get back into the race; we'll see. But I'm not convinced for now.
For the rebound, your theory is probably more true. It has been better for AMD obviously, but INTC has almost doubled in value since its $25 low, which is not slouching by any means.
I can agree on being bullish long term (I had short puts exercise back in the $20s). Like a lot of tech, INTC has more money than God and they'll get it right eventually.
- The wintel monopoly is losing its relevance now that ARM chips are creeping into the windows laptop market and now that Apple has proven that ARM is fantastic for low power & high performance solutions. Nobody cares about x86 that much any more. It's lost its shine as the "fastest" thing available.
- AI & GPU market is where the action is and Intel is a no-show for that so far. It's not about adding AI/GPU features to cheap laptop chips but about high end workstations and dedicated solutions for large scale compute. Intel's GPUs lack credibility for this so far. Apple's laptops seem popular with AI researchers lately and the go-to high performance solutions seem to be provided by NVidia.
- Apple has been leading the way with ARM based, high performance integrated chips powering phones, laptops, and recently AR/VR. Neither AMD nor Intel have a good answer to that so far. Though AMD at least has a foot in the door, with e.g. Xbox and the Steam Deck depending on their integrated chips and them still having credible solutions for gaming. Nvidia also has lots of credibility in this space.
- Cloud computing is increasingly shifting to cheap ARM powered hardware. Mostly the transition is pretty seamless. Cost and energy usage are the main drivers here.
Why the fuck are shareholders often so short-sighted?
Or do they just genuinely think the R&D investment won't pay off?
has told the story for more than a decade that Intel has been getting high on its own supply and that the media has been uncritical of the stories it tells.
In particular I think when it comes to the data center they’ve forgotten their roots. They took over the data center in the 1990s because they were producing desktop PCs in such numbers they could afford to get way ahead of the likes of Sun Microsystems, HP, and SGI. Itanium failed out of ignorance and hubris but if they were true evil geniuses they couldn’t have made a better master plan to wipe out most of the competition for the x86 architecture.
Today they take the desktop for granted and make the false claim that their data center business is more significant (not what the financial numbers show.). It’s highly self-destructive because when they pander to Amazon, Amazon takes the money they save and spends it on developing Graviton. There is some prestige in making big machines for the national labs but it is an intellectual black hole because the last thing they want to do is educate anyone else on how to simulate hydrogen bombs in VR.
So we get the puzzle that most of the performance boost customers could be getting comes from SIMD instructions and other "accelerators," but Intel doesn't make a real effort to get this technology working for anyone other than Facebook and the national labs, and, in particular, they drag their feet in making it available on enough chips that it is worth it for mainstream developers to use this technology.
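As a rough illustration of the adoption problem (a sketch, not anything Intel ships): because these features are fused off on so many parts, code that wants the SIMD speedup has to probe the running CPU and carry a fallback path, and on most consumer chips the fallback is what actually executes. Something like this Linux-only check:

    # Minimal runtime feature probe (Linux-only sketch); real code would then
    # dispatch to an AVX-512 kernel or a plain scalar/AVX2 fallback.
    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    use_avx512 = "avx512f" in cpu_flags()
    print("AVX-512 kernel" if use_avx512 else "scalar/AVX2 fallback")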
A while back, IBM had this thing where they might ship you a mainframe with 50 cores and license you to use 30, and if you had a load surge you could call them up and they could turn on another 10 cores at a high price.
I was fooled when I heard this the first time and thought it was smart business but after years of thinking about how to deliver value to customers I realized it’s nothing more than “vice signaling”. It makes them look rapacious and avaricious but really somebody is paying for those 20 cores and if it is not the customer it is the shareholders. It’s not impossible that IBM and/or the customer winds up ahead in the situation but the fact is they paid to make those 20 cores and if those cores are sitting there doing nothing they’re making no value for anyone. If everything was tuned up perfectly they might make a profit by locking them down, but it’s not a given at all that it is going to work out that way.
Similarly, Intel has been hell-bent on fusing away features on their chips, so often you get a desktop part that has a huge die area allocated to AVX features that you're not allowed to use. Either the customer or the shareholders are paying to fabricate a lot of transistors the customer doesn't get to use. It's madness, but except for Charlie Demerjian the whole computer press pretends it is normal.
Apple bailed out on Intel because Intel failed to stick to its roadmap to improve their chips (they’re number one why try harder?) and they are lucky to have customers that accept that a new version of MacOS can drop older chips which means MacOS benefits from features that were introduced more than ten years ago. Maybe Intel and Microsoft are locked in a deadly embrace but their saving grace is that every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017, which itself has to be an interesting story that I haven’t seen told.
> (samsung or tsmc) both within a 500 mile radius circle in the south china sea.
Within a 500 mile radius of a great power competitor, perhaps. The closest points on mainland Taiwan and Korea are 700 miles apart. The fabs are about 1000 miles apart, by my loose reckoning.
1) Intel is up 100% from ten years ago when it was at $23. All that despite revenue being flat/negative, inflation and costs rising, and margins collapsing.
2) Intel is up 60% in the last 12 months alone.
Doesn't look to me like they're throwing in the towel at all.
Intel never "crushed" ARM. Intel completely failed to develop a mobile processor and ARM has a massive marketshare there.
ARM has always beaten the crap out of Intel at performance per watt, which turned out to be extremely important both in mobile and data center scale.
You're right: AMD wasn't competitive for an incredibly long time and ARM wasn't really meaningful for a long time. That's the perfect situation for some MBAs to come into. You start thinking that you're wasting money on R&D. Why create something 30% better this year when 10% better will cost a lot less and your competitors are so far behind that it doesn't matter?
It's not that Intel should have seen AMD coming or should have seen ARM coming. It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle. Intel should have been smart enough to understand that backing off of R&D would mean giving up the moat they'd created. Even if it looked like no one was coming for their crown at the moment, you need to understand that disinvestment doesn't get rewarded over the long-run.
Intel should have understood that trying to be cheap about R&D and extract as much money from customers wasn't a long-term strategy. It wasn't the strategy that built them into the dominant Intel we knew. It wouldn't keep them as that dominant Intel.
Pretty certainly, Intel is improving, and of course should not be written off. But they did get themselves into a hole to dig out from, and not just because the 5nm process was really hard to get working.
The writing was on the wall 6 years ago; Intel was not doing well in mobile and it was only a matter of time until that tech improved. Same as when Intel unseated the incumbent datacenter chips before it. Ryzen I will give you was a surprise, but in a healthy competitive market, "the competition outengineered us this time" _should_ be a potential outcome.
IMO the interesting question is basically whether Intel could have done anything differently. Clayton Christensen's sustaining vs disruptive innovation model is well known in industry, and ARM slowly moving up the value chain is obvious in that framework. Stratechery says they should have opened up their fabs to competitors, but how does that work?
Trying to breathe as Intel was pushing their head under water.
We saw AMD come back after their lawsuit against Intel got through and Intel had to stop paying everyone to not use AMD.
Look at Japan generally and Toyota specifically. In Japan the best award you can get for having an outstanding company in terms of profit, topline, quality, free-cash, people, and all the good measures is the Deming Award. Deming was our guy (an American) but we Americans in management didn't take him seriously enough.
The Japanese, to their credit, did... they ran with it and made it into their own thing, in a good way. The Japanese took 30% of the US auto market in our own backyard. Customers knew Hondas and Toyotas cost more but were worth every dollar. They resold better too. (Yes, direct government investment in Japanese companies was a factor too, but not the chief factor in the long run.)
We Americans got it "explained to us." We thought we were handling it. Nah, it was BS. But we eventually got our act together. Our Deming award is the Malcolm Baldrige award.
Today, unfortunately, the Japanese economy isn't rocking like it was in the 80s and early 90s. And Toyota isn't the towering example of quality it once was. I think -- if my facts are correct -- they went too McDonald's and got caught up in cutting costs in their materials and supply chain, with bad net effects overall.
So things ebb and flow.
The key thing: is management, through action or inaction, allowing a stupid, inbred company culture to make crappy products? Do they know their customers, etc.? Hell, mistakes, even screw-ups, are not life ending for companies the size of Intel. But recurring stupidity is. A lot of the time the good guys allow themselves to rot from the inside out. So when is enough enough already?
Crushed by Intel's illegal anticompetitive antics?
Had Intel adopted the molten tin of EUV, the cycle of failure would have been curtailed.
Hats off to SMIC for the DUV 7nm which they produced so quickly. They likely saw quite a bit of failed effort.
And before we discount ARM, we should remember that Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors.
Intel should have bought Acorn, not Olivetti.
That's a lot of mistakes, not even counting Itanium.
The fastest single core and multi core x86 CPUs that money can buy will go to databases and similar vertically scaled systems.
That's where you can put up the most extreme margins. It's "winner takes all the margins". Being somewhat competitive, but mostly a bit worse, is the worst business position. Also...
I put money on AMD when they were taking over the crown.
Thank you for this wakeup call. I'll watch closely to see if Intel can deliver on this and take it back. I'll have to adjust accordingly.
Not clear about what the role of activist hedge funds is here but Intel's top shareholders are mutual funds like Vanguard which are part of many people's retirement investments. If an activist hedge fund got to run the show, it means that they could get these passive shareholders on their side or to abstain. It would have meant those funds along with pension funds, who should have been in a place to push back against short term thinking, didn't push back. These funds should really be run much more competently given their outsized influence, but the incentives are not there.
I have similar stories where adding a standard wafer-rinse step took a year of testing, documenting, pitching to customers, etc.
They should have been worried about their process leadership for a long time. IIRC even the vaunted 14nm that they ended up living on for so long was pretty late. That would have had me making backup plans for 10nm but it looked more like leadership just went back to the denial well for years instead. It seemed like they didn't start backport designs until after Zen1 launched to me.
And Sandy Bridge all but assured AMD wouldn't be relevant for the better part of a decade.
It's easy to forget just how fast Sandy Bridge was when it came out; over 12 years later and it can still hold its own as far as raw performance is concerned.
You should probably have some intel (haha) on ARM and AMD chips. They didn't care.
Why? It's monopoly business tactics, except they didn't realize they weren't Microsoft.
It's not like this was overnight. Intel should have watched AMD like a hawk after that slimeball Ruiz was deposed and a real CEO put in charge.
And the Mac chips have been out, what, two years now, and the Apple processors on the iPhones at least 10?
Come on. This is apocalyptic scale incompetence.
Of course we don't need to know everything, but in terms of news, maybe we could reference key technologies in use that have the largest effect on performance targets and downstream process modification.
Putting aside whether the statement is considered true or not, buying McAfee under the guise of the kind of security meant when talking about silicon is... weird, to say the least.
Famously they divested their ARM-based mobile processor division just before smartphones took off.
The new CEO, as the article mentions, seems to have a lot more of a clue. We just hope he hasn't arrived too late.
Had Intel figured out hyperthreading security and avoided all the various exploits that later showed up …
They've made this situation fairly clear, in my eyes.
Alchemist is the product line for their first attempt at true dedicated GPUs like those Nvidia and AMD produce. It's based on Intel Xe GPU architecture.
It's done decently well, and they've been very diligent about driver updates.
Battlemage is the next architecture that will replace it when it's ready, which I believe was targeted for this year. Similar to how the Nvidia 40 series replaced the 30 series before it. Celestial comes a couple years later, then Druid a couple years after that, etc. They don't exist simultaneously; they're just the names they use for generations of their GPUs.
It's remarkably common and heavily incentivized.
They have been trying half-heartedly with GPUs on and off since the late 1990's i740 series.
The root cause is probably the management mantra "focus on core competencies". They had an effective monopoly on fast CPUs from 2007 until 2018. This monopoly meant very little improvement in CPU speed.
https://money.cnn.com/magazines/fortune/fortune500_archive/f...
I wonder how many of our companies will survive at the top for even 20 more years?
https://companiesmarketcap.com/
Berkshire is the only one I feel sure about because they hold lots of different companies.
Apple and Microsoft have both managed to avoid this, and been the exception.
Datacenters were and still are full of monstrous Xeons, and for a good reason.
Additionally, both Intel and AMD manufacture CPUs for HEDT and servers which are far beyond anything Apple is fabricating at the moment. Apple has no response to Epyc, Threadripper, or higher end Xeon. Similarly, Apple has no equivalent to Intel, AMD, and Nvidia discrete GPUs.
Apple made a quality cell phone chip, and managed to exploit chiplets and chip-to-chip interconnects to double and quadruple it into a decent APU. But it's still an APU, just one segment addressed by current x86 designs.
Intel's roadmap looks great. However, I'm skeptical of whether they're actually meeting that roadmap. Meteor Lake was launched last month using Intel 4, but it looks like Intel 4 has lower transistor density than TSMC's 5nm. Intel 3 is listed on their roadmap as second-half 2023, but we've yet to see any Intel 3 parts.
Realistically, there won't be too much of an advantage for Intel. It's pretty clear that even when Intel ships things, they aren't shipping these new nodes in volume. Intel 4 is only being used for some laptop processors and they're even using TSMC's 5nm and 6nm processes for the graphics and IO on those chips. They canceled the desktop version of Meteor Lake so desktops are staying on Intel 7 for now. Intel's latest server processors launched last month are also Intel 7.
If Intel were able to get a year or two ahead of TSMC, then I could see a nice advantage. However, it looks like Intel's a year behind its roadmap and even being a year behind they're only shipping laptop parts (and not even able to fab the whole chip themselves).
But past success/failure doesn't predict the future. Maybe Intel will be shipping 20A with backside power in 2024 and it'll be 2025 before TSMC gives Apple a 2nm chip with backside power.
But given that we haven't seen them shipping Intel 3 and they haven't taken Intel 3 off their roadmap, I'm going to be a bit skeptical. Intel is doing way better than they had been doing. However, I've yet to see something convincing that they're doing better than TSMC. That's not to say they aren't going to do better than TSMC, but at this point Intel is saying "we're going to jump from slightly worse than 5nm (Intel 4) to 2nm in a year or less!" Maybe Intel is doing that well, but it's a tall ask. TSMC launched 5nm in 2020 and 3 years later got to 3nm. It doesn't take as long to catch up because technology becomes easier over time, but Intel is kinda claiming it can compress 5-6 years worth of work into a single year. Again, maybe Intel has been pouring its resources into 20A and 18A and maybe some of it is more on ASML and maybe Intel has been neglecting Intel 4 and Intel 3 because it knows it's worth putting its energy toward something actually better. But it also seems reasonable to have a certain amount of doubt about Intel's claims.
I'd love for Intel to crush its roadmap. Better processors and better competition benefit us all. But I'm still wondering how well that will go. TSMC seems to be having a bit of trouble with their 3nm process. 2024's flagship Android processors will remain on 4nm and it looks like the 2024 Zen 5 processors will be 4nm as well (with 3nm parts coming in 2025). So it looks like 3nm will basically just be Apple until 2025 which kinda indicates that TSMC isn't doing wonderfully with its 3nm process. Android processors moved to 5nm around 6 months after Apple did, but it looks like they'll move to 3nm around 18 months after Apple. But just because TSMC isn't doing great at 3nm doesn't mean Intel will face similar struggles. It just seems likely that if TSMC (a company that has been crushing it over the past decade) is facing higher struggles at 3nm, it's a bit of wishful thinking to believe Intel won't face similar struggles at 3nm and below.
People have been trying to break NVIDIA's moat by creating non-GPU accelerators. The core maths operations aren't that varied so it's an obvious thing to try. Nothing worked, not even NVIDIA's own attempts. AI researchers have a software stack oriented around devices with separate memory and it's abstracted, so unifying CPU and GPU ram doesn't seem to make a big difference for them.
Don't be fooled by the name GPU. It's purely historical. "GPUs" used for AI don't have any video output capability; I think you can't even render with them. They are just specialised computers that come with their own OS ("driver"), compilers, APIs and dev stack, which happen to need a CPU to bring them online and which communicate via PCIe instead of ethernet.
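For what it's worth, here is a minimal sketch of that abstraction (assuming PyTorch; illustrative only): the code picks a device string once and never touches memory placement again, which is why unified CPU/GPU RAM changes little for the researcher writing it.

    import torch

    # Pick whatever accelerator is present; everything below is device-agnostic.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(1024, 1024, device=device)
    y = torch.randn(1024, 1024, device=device)
    z = x @ y  # the framework decides where the memory lives and when to copy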
Which is why you only use it when you really need it - on a GPU/AI accelerator.
I have a pile of laptops including the x13s, and that part burns a good ~30W for a couple minutes, while running ~30% (or worse) slower, then thermally throttles and drops another ~30% in performance.
For light browsing it manages to stay under 5W and gets 10h from its 50Wh battery. This is also doable on just about any AMD or Mac laptop. The AMD machine I'm typing this on tells me I have more than 12 hours remaining and I'm sitting at 81% battery (67Wh). And this machine is a good 3x faster compiling code, or running simple benchmarks like Speedometer, when it's plugged in or allowed to burn 35W. Except it also has a fan that turns on when it's under heavy load to keep it from thermally throttling.
Yes, the x13s is cute; it is a thin/light laptop and is probably the first Windows-on-ARM machine that isn't completely unusable. But it's going to take another leap or two to catch up with current top of the line machines.
Everyone loves Geekbench, so here is an obviously unthrottled x13s vs a similarly rated 28W AMD 7840U.
https://browser.geekbench.com/v5/cpu/21596449 https://browser.geekbench.com/processors/amd-ryzen-7-7840u
That AMD has 2x the single threaded perf and 50% faster multithreaded, and anything graphics related is going to look even worse.
The far greater threats are that they missed the phone market, and are missing the GPU/AI market. Those are entirely new markets where growth would happen, instead of the mostly fixed size of the current CPU market where the same players are just trading small percentage points.
They jump onto the latest trend, e.g. ESG to get in good with the banks and funds without thinking about what long term damage it is doing to their brands and products.
N5 is interesting because it's the first process fully designed around EUV and because it was pretty much exclusive to Apple for almost two years. It launched in Apple products in late 2020, then crickets until about late 2022 (Zen 4, RTX 4000, Radeon 7000). Launches of the other vendors were still on N7 or older processes in 2020 - RTX 3000 for example used some 10nm Samsung process in late 2020. All of those were DUV (including Intel 7 / 10ESF). That's the step change we are looking at.
I think it's safe to say that 80BN+ in subsidies are already well in the process of being deployed. Intel, along with Samsung and TSMC, are heavily subsidized and have been so for a very long time. Any government with modest intelligence understands the gravity of having microchip manufacturing secured.
Not sure where that's coming from? The released parts are mobile chips, and the fastest is a 45W TDP unit that boosts at 5.1GHz. AMD's fastest part in that power range (8945HS) reaches 5.2GHz. Apple seems to do just fine at 4GHz with the M3.
I'm guessing you're looking at some numbers for socketed chips with liquid cooling?
That is not a partner for creating logical systems. Very clear their current decisions are political.
It will challenge your concerns
See, there are only 2 or 3 more obvious generations of die shrinks available. Beyond those we'll have to innovate some other way, so whoever grabs the fab market for these nodes now gets a longer period to enjoy the fruits of their innovation.
Meanwhile server CPU TDPs are hitting the 400W+ mark and DC owners are looking dubiously at big copper busbars, die shrinks tend to reduce the Watts per Calculation so they're appealing. In the current power market, more efficient computing translates into actual savings on your power bill. There's still demand for better processors, even if we are sweating those assets for 5-7 years now.
Apple has proven that Apple Silicon on TSMC's best process is great. There are no other ARM vendors competing well in that space yet. SOCs that need to compete with Intel and AMD on the same nodes are still stuck at the low margin end of the market.
And acqui-hired Jensen as CEO.
You can also see the lasting popularity of 7nm-class nodes in consumer products. For example, RDNA3 uses 5nm for the core parts (GCD), but the peripheral parts (the memory chiplets/MCDs) are built on 6nm, and the monolithic low-end parts (RX 7600) are even fully built on 6nm.
That was an intentional act to force customers into Itanium, which was 64 bit from the outset.
They are a dead company walking.
[...] when Intel’s manufacturing prowess hit a wall Intel’s designs were exposed. Gelsinger told me: 'So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” [...]'
These companies didn’t fail because of myopic financial engineers. The ones focused on quarter to quarter tend to bomb the company relatively quickly and do get flushed out.
These companies failed because of long term financial visionaries. These are the worst because they are thinking about how the company can capture markets at all costs, diversify into all adjacent markets, etc. They are a hard cancer to purge because on the surface they don’t sacrifice stuff for the current quarter. They sacrifice laser focus for broad but shallow approaches to all kinds of things.
“Sure, we’ll build a fab here. And we’ll also do enterprise NICs… And also storage… and also virtualization… and also AI… and also acquire some startups in these fields that might have something interesting…”
The company becomes a big, listless monster coasting on way too many bets spread out all over the place. There is no core engineering mission or clear boundary between what is the business and what isn't.
Intel is far from “myopic”. If it was something as obvious as a next quarter driven CEO, the board would have made a change immediately.
ARM would have been much more influential under Intel, rather than pursuing the i960 or the iAPX 432.
Just imagine Intel ARM Archimedes. It would have crushed the IBM PS/2.
Whoops.
Seriously, even DEC was smart enough.
>Intel Poaches Head Apple Silicon Architect Who Led Transition To Arm And M1 Chips. Intel has reacquired the services of Jeff Wilcox, who spearheaded the transition to Arm and M1 chips for Apple. Wilcox will oversee architecture for all Intel system-on-a-chip (SoC) designs
note re-acquired.
Raja Koduri also came back to Intel (from AMD Radeon) and only recently left to dabble in VFX, as opposed to working for a competitor to Intel.
Anton Kaplanyan (father of RTX at nvidia) is at Intel now.
I think people are not checking LinkedIn when they make the claim that Intel's talent has been drained and there is nobody left at home. Where there is remuneration and opportunity you will find talent. I think it's safe to say no industry experts have written off Intel.
edit: first foundry customer online in New Mexico fab: https://www.intel.com/content/www/us/en/newsroom/news/intel-...
Indeed, Apple has shown not just once but multiple times that they'll happily blow up their entire development ecosystem, whether it's software (Mac Finder vs. MacOS X) or hardware (68k, PPC, Intel, and now ARM). I think Intel didn't expect Apple to switch architectures so quickly and thoroughly and got caught flat-footed.
You must mean, performance relative to Intel, not absolute performance. Clearly Qualcomm has improved Snapdragon over time as have a number of other Android SOC vendors.
But I wonder if it's even true, have ARM vendors other than Apple failed to move the needle on performance (let's call performance single thread geekbench) relative to Intel? If someone is up for tracking down all the numbers I'd read that blog post. :)
However China is now a full fledged dictatorship. I'm not sure you can count on them being a rational actor on the world stage.
They can do a lot of damage, but would also get absolutely devastated in return. They are food, energy insecure and entirely dependent on exports after all.
China is facing a deflating real estate bubble, but they still managed to grow the last year (official sources are disputed but independent estimates are still positive).
Are there any dam-having countries for which this isn't the case?
Their third employee, who later went on to become their third CEO and guided Intel through the memory-to-processor transition, literally coined the term and wrote a book called "Only the Paranoid Survive" [1]. It's inexcusable that management degraded that much.
[1] https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...
So much of Intel's decline reeks of arrogant MBAs.
And it's not like they didn't notice either. Apple literally asked Intel to supply the chips for the first iPhone, but the Intel CEO at the time "didn't see it".
https://www.theverge.com/2013/5/16/4337954/intel-could-have-...
And a few years prior to that Intel made the most competitive ARM chips (StrongARM). Chances are that an Intel chip would have powered the iPhone had they not scrapped their ARM division due to “reasons”
Coincidentally, ARM1 and 80386 were both introduced in 1985. I'm a big fan of the ARM1 but I should point out that the 386 is at a different level, designed for multitasking operating systems and including a memory management unit for paging.
Intel had StrongARM though. IIRC they made the best ARM CPUs in the early 2000s and were designing their own cores. Then Intel decided to get rid of it because obviously they were just wasting money and could design a better x86 mobile chip…
AMD bet BIG on the software industry leaning in heavily on massive thread-counts over high throughput, single-threaded usage... But it never happened so the cores tanked.
It was never a secret WHY that generation of core sucked, and it was relatively clear what AMD needed to do to fix the problem, and they were VERY vocal about "doing the thing" once it became clear their bet wasn't paying off.
Knowing how to do the right thing is simple human decency.
Management Consulting sells ideas, many of them silly or not important that are marketed and packaged as era-defining. A manager who implements #FASHIONABLE_IDEA can look forward to upwards mobility, while boring, realistic, business-focused ideas from people in the trenches usually get ignored (unless you want a lateral transfer to a similar job). hashtag-collection-of-ideas is much easier to explain when the time comes for the next step up.
This explains why you get insane things like Metaverse fashion shows that barely manage to look better than a liveleak beheading video. These sorts of things might seem like minor distractions, but getting these sorts of boondoggles up and running creates stress and drowns out other concerns. Once the idea is deployed, the success or failure of the idea is of minimal importance, it must be /made/ successful so that $important_person can get their next job.
These projects starve companies of morale, focus and resources. I recall the struggle for a ~USD $20k budget on a project to automate internal corporate systems, while some consultants received (much) more than 10 times that amount for a report that basically wound up in the bin.
Oddly, this sort of corporate supplication to management consultants worked out for me (personally). I was a dev who wound up as a manager and was able to deliver projects internally, while other decent project managers could not get budget and wound up looking worse for something that wasn't their fault.
I don't think any of the projects brought by management consultants really moved the needle in any meaningful way while I worked for any BigCos.
Had Itanium been a success then Intel would have crushed the competition (however it did succeed in killing Alpha, SPARC, and workstation MIPS).
If you're hoping for an nVidia competitor, the units in that market may sell for more per unit, but there's already a 1-ton gorilla there and AMD can't seem to compete either. Rather, Arc makes sense as an in-house GPU unit to pair with existing silicon (CPUs) and low / mid range dGPUs to compete where nVidia's left that market and where AMD has a lot of lunch to undercut.
IF (big IF) the trend continues we might see Intel and AMD get pretty close, and a lot more competition on the market again (I hope).
On the server side I don't have the numbers, but that's probably a much harder turf for Intel to protect going forward, if they're even still ahead?
[0] https://www.statista.com/statistics/735904/worldwide-x86-int...
Of course, but the average Joe does not want to wear ear protection when running their laptop. Nor do they want the battery to last 40 minutes, or have it be a huge brick, or have to pour liquid nitrogen on it to keep it from thermal throttling.
Apple innovated by making chips that fit the form and function most people need in their personal devices. They don't need to be the absolute fastest, but innovation isn't solely tied to the computing power of a processor. It makes sense that Intel excels in the market segment where people do need to wear ear protection to go near their products. If they need to crank in an extra 30 watts to achieve their new better compute then so be it.
We don't know the specifics of the conversations between Apple and Intel. Hopefully for Intel it was just the fact that they didn't want to innovate for personal computing processors and not that they couldn't.
I don't think that was deliberate. Apple has a long history of not cooling their computers enough.
[1] https://browser.geekbench.com/macs/macbook-pro-15-inch-mid-2...
[2] https://browser.geekbench.com/processors/intel-core-i9-9980h...
> ESG investment funds in the United States saw capital inflows of $3.1 billion in 2022 while non-ESG investment funds saw capital outflows of $370 billion during the stock market decline that year
Improvements since have been modest, to the extent that N3 is only barely any better (c.f. the Apple M3 is... still a really great CPU, but not actually that much of an upgrade over the M2).
There's a hole for Intel to aim at now. We'll see.
[1] Also 32nm and 45nm, really. It's easy to forget now, but Intel strung together a just shocking number of dominant processes in the 00's.
Apple "sandbagging" was them desperately trying to get a poor Intel product to work in laptops they wanted to be thin and light.
Even today though their designs haven't really changed all that much--in fact the MacBook Air is actually just as thin and now even completely fan-less. It just has a chip in it that doesn't suck.
To ask the foolish question, why? My guess would be power efficiency. I've only ever seen them in workstations, and in that use-case it was the number of cores that was the main advantage.
It's absurd to point out that apple could have gotten higher perf from x86 at higher power as if it's some kind of win for intel. Yes, obviously they could have, that's true of every chip ever. They could take it even further and cool every macbook with liquid nitrogen and require lugging around a huge tank of the stuff just to read your email! They don't, because that's dumb. Apple goes even further than most laptop manufacturers and thinks that hissing fans and low battery life are dumb too. This leads them to low power as a design constraint and what matters is what performance a chip can deliver within that constraint. That's the perspective from which the M1 was designed and that's the perspective from which it delivered in spades.
As of Nov 2023, BRK has ~915.5M AAPL shares, with a market cap of $172B. BRK’s market cap is $840B.
Per the CNBC link, the market cap of BRK’s AAPL holding is 46% of the market cap of BRK’s publicly listed holdings, but BRK’s market cap being much higher than $172B/46% means there is $840B - $374B = $466B worth of market cap in BRK’s non publicly listed assets.
I would say $172/$840 = 20% is more representative of BRK’s AAPL proportion.
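A quick back-of-the-envelope check of those proportions (a sketch; figures in $B, rounded as in the comment above):

    aapl_stake = 172                    # market value of BRK's AAPL shares
    brk_mkt_cap = 840                   # BRK's own market cap
    listed = aapl_stake / 0.46          # AAPL is ~46% of listed holdings -> ~374
    non_listed = brk_mkt_cap - listed   # -> ~466 in non-listed assets
    share = aapl_stake / brk_mkt_cap    # -> ~0.20, i.e. ~20%
    print(round(listed), round(non_listed), round(share, 2))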
Intel's P/E was about 50 at that time, AMD was over 1000. That's "throwing in the towel" in as pointed of a way that the market can possibly do.
They certainly tried selling their chips below cost to move into markets ARM dominated, but "contra revenue" couldn't save them.
> Intel Corp.’s Contra-Revenue Strategy Was a Huge Waste of Money
https://www.fool.com/investing/general/2016/04/21/intel-corp...
>Arm now claims to hold a 10.1% share of the cloud computing market, although that's primarily due to Amazon and its increasing use of homegrown Arm chips. According to TrendForce, Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021.
https://www.fool.com/investing/2023/09/23/arm-holdings-data-...
There were and are exactly zero third parties licensing nvidia IP to build competing GPU products.
(dropping molten tin 1000 times a second and then shooting it with a laser just to get a lamp that can bless you with the hard light you need for your fancy fine few nanometers thin shadows? sure, why not, but don't forget to shoot the plasma ball with a weaker pulse to nudge it into the shape of a lens, cheerio.
and you know that all other parts are similarly scifi sounding.
and their middle management got greedy and they were bleeding talent for a decade.)
Prior to the ATI acquisition nvidia actually had been the motherboard chipset manufacturer of choice for AMD cpus for a number of years.
Long term financial visionaries? No they’re simply plundering the business to reward shareholders and executives through buybacks and dividends. They rationalize this as “maximizing shareholder value” but it is destroying long term value, growth and innovation.
Regarding other plans, QPI and UPI for cache coherent FPGAs were pretty infeasible to do at the sluggish pace that they need in the logic fabric. CXL doesn't need a close connection between the two chips (or the companies), and just uses the PCIe lanes.
FPGA programming has always been very hard, too, so the dream of them everywhere is just not happening.
Maybe Habana Labs?
I can't really tell if it's working out for Intel, but I do hear them mentioned now and then.
No liquid cooling needed for either of them, just standard 14" or 15" laptops without special cooling, or NUC-like small cases, because they do not need discrete GPUs.
Both CPUs have the same microarchitecture of the big cores.
If Intel had been able to match the clock frequencies of their previous generation, they would have done that, because it is embarrassing that Meteor Lake wins only the multi-threaded benchmarks, due to the improved energy efficiency, but loses in the single-threaded benchmarks, due to lower turbo clock frequency, when compared to the last year's products.
Moreover, Intel could easily have launched a Raptor Lake Refresh variant of i9-13900H, with a clock frequency increased to 5.6 GHz. They have not done this only to avoid an internal competition for Meteor Lake, so they have launched only HX models of Raptor Lake Refresh, which do not compete directly with Meteor Lake (because they need a discrete GPU).
During the last decade, the products made at TSMC with successive generations of their processes had a continuous increase of their clock frequencies.
On the other hand Intel had a drop in clock frequency at all switches in the manufacturing processes, at 14-nm with the first Broadwell models, then at 10-nm with Cannon Lake and Ice Lake (and even Tiger Lake could not reach clock frequencies high enough for desktops), and now with Meteor Lake in the new Intel 4 process.
With 14-nm and 10-nm (now rebranded as Intel 7), Intel succeeded in greatly increasing the maximum clock frequencies after many years of tuning and tweaking. Now, with Meteor Lake, this will not happen, because they will move on immediately to different, better manufacturing processes.
According to rumors, the desktop variant of Arrow Lake, i.e. Arrow Lake S, will be manufactured at TSMC in order to ensure high-enough clock frequencies, and not with the Intel 20A, which will be used only for the laptop products.
Intel 18A is supposed to be the process that Intel will be able to use for several years, like their previous processes. It remains to be seen how much time will pass until Intel is again able to reach 6.0 GHz, this time in the Intel 18A process.
Imagine what it would do if Intel became strongly associated with one side in the Israel-Palestine conflict. It could really hurt their business.
Usually business leaders are smart enough to stay out of politics.
You're not wrong; maybe all the scaremongering in the West about China overtaking us got them delusional enough, in a Japanese-nationalist type of way, to behave this irrationally, but I highly doubt it. That can also change pretty quickly if they feel like their back is against the wall, so you're not wrong in that regard.
China is in a world of hurt, but the government is trying desperately to hide how bad it actually is. If this continues for a few more months, it will be an existential situation for their economy.
[1] - https://www.bloomberg.com/news/articles/2024-01-31/china-hom...
[2] - https://www.bloomberg.com/news/articles/2024-01-31/china-sto...
[3] - https://www.piie.com/blogs/realtime-economics/foreign-direct...
Edit: one should also keep in mind that the Chinese real estate market is entirely different in its importance to its population's wealth. "Buying" real estate is pretty much the only sanctioned market to invest your earnings in. They still pretend to be communist after all.
[1]: https://www.ispsw.com/wp-content/uploads/2020/09/718_Lin.pdf
"In this case, the Three Gorges Dam may become a military target. But if this happens, it would be devastating to China as 400 million people live downstream, as well as the majority of the PLA's reserve forces that are located midstream and downstream of the Yangtze River."
> "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."
In that circumstance, I think most people would have made the same decision.
After all, Apple and ARM came from the idea of building better end user products around softer factors than sheer CPU power. Since Intel's products are neither highly integrated phones nor assembled computers, Intel had no direct stake.
It is complex.
DEC had started developing ARM chips after concluding it was a bad idea to try to scale down their Alpha chips to be more energy efficient.
Then, after the success of these ARM chips in the BlackBerry and most of the Palm PDAs, as well as MP3 players and HTC smartphones, Intel sold it off so it could focus on trying to make its big chips more energy efficient, making the mistake DEC avoided.
The iPhone was a defining moment, but at the time it was completely obvious that smartphones would be a thing; it's just that people thought the breakthrough product would come from Nokia or Sony Ericsson (who were using ARM SoCs from TI and Qualcomm respectively). Selling off the ARM division would not have been my priority.
So it's a string of unforced errors. Nevertheless, Intel remains an ARM licensee, they didn't give that up when selling StrongARM, so it seems some people still saw the future..
On servers, 32 bit x86 was doing wonders for small workloads. AMD64 quickly chipped away at the places where 1-4 processor SPARC would have previously been used.
It was process failures. Their fabs couldn't fab the designs. Tiger Lake was what, 4 years late?
Why did that generation of core (Bulldozer) suck?
What was it that AMD needed to do to fix the problem?
(Links to relevant stories would be sufficient for me!)
No one wanted Itanium. It was another political project designed to take the wind out of HP and Sun's sails, with the bonus that it would cut off access to AMD.
Meanwhile AMD released AMD64 (aka x86-64) and customers started buying it in droves. Eventually Intel was forced to admit defeat and adopt AMD64. That was possible because of the long-standing cross-licensing agreement between the two companies that gave AMD rights to x86 way back when nearly all CPUs had to have "second source" vendors. FWIW Intel felt butt-hurt at the time, thinking the chips AMD had to offer (like I/O controllers) weren't nearly as valuable as Intel's CPUs. But a few decades later the agreement ended up doing well for Intel. At one point Intel made some noise about suing AMD for something or another (AVX?) but someone in their legal department quickly got rid of whoever proposed nonsense like that because all current Intel 64-bit CPUs rely on the AMD license.
It's embarrassing when they go to market and there's no way to say it's faster than the other guy. Currently, they need to pump 400W through the chip to get the clock high enough.
But perf at 200w or even 100w isn't that far below perf at 400w. If you limit power to something like 50w, the compute efficiency is good.
Contrast that to Apple, they don't have to compete in the same way, and they don't let their chips run hot. There's no way to get the extra 1% of perf if you need it.
However, they underperform greatly compared to competitors' cards with similar die areas and memory bus widths. For example the Arc A770 is 406mm^2 on TSMC N6 and a 256-bit bus and performs similarly to the RX 6650XT which is 237mm^2 on TSMC N7 with a 128-bit bus. They're probably losing a lot of money on these cards.
I wouldn't call what Apple did innovation - they followed predictable development trajectories - more integration. They licensed ARM's instruction set, Imagination's PowerVR GPU, most of the major system busses (PCIe, Thunderbolt, USB 3, Displayport, etc), they bonded chiplets together with TSMC's packaging and chip-to-chip communication technologies, and they made extensions (like optional x86 memory ordering for all ARM instructions, which removes a lot of the work of emulation). Incidentally, Apple kicked off its chip design efforts by purchasing PA Semi. Those folks had all the power management chip design expertise already.
But again, it's been a good first showing for Apple. I think they were smart to ship on-package DRAM in a consumer device. Now is about the right time for the CPU to be absorbing DRAM, as can be seen by AMD's 3D VCache in another form. And it's cool for Apple folks to have their own cool thing. Yay y'all. But I've run Linux for 20 years, I've run Linux on every computer I can get my hands on in that time, and through that lens, Apple silicon performs like any x86 APU in a midrange laptop or desktop. And as regards noise, I never hear the fans on my 7800x3D / 3090Ti, and it is very very noticeably faster than my M1 Mac. Apple Silicon's great, it's just for laptops and midrange desktops right now.
I think Netburst mostly came from a misguided place where Intel thought that clock frequency was in fact the holy grail (and would scale far beyond what actually ended up happening), and that all the IPC issues such as costly mispredicts could be solved by e.g. improving branch prediction.
I applaud Intel for providing fully capable drivers at no additional cost. Combined with better availability for purchase they are competing in the VDI space.
https://www.intel.com/content/www/us/en/products/docs/discre...
The amount of funds with an environmental or social mandate is trivial; and nearly all of those mandates require essentially nothing.
It is sad how easy it is to dupe mediocre men into believing thinking they are victims, when in fact they are simply whiny little snowflake losers.
They don’t want to be selling faster x86 CPUs in the cloud, they want you to buy more vcpu units instead, and they want those units to be arm. And that’s how they’ve structured their offerings. It’s not the limit of what’s possible, just what is currently the most profitable approach for Amazon and google.
Space and power in datacenters are at a premium; packing so many pretty decent cores into one CPU allows running a ton of cloud VMs on a physically compact server.
AMD EPYC, by the way, follows the same datacenter-oriented pattern.
I'm not sure what's so upsetting about the assertion that chips composed of a similar number of transistors, on a similar process node, cooled similarly, might function similarly. Because when all variables are controlled for, that's what I see.
I don't know about NPUs. Do you mean Apple? Apple silicon is interesting because of the unified memory architecture, but beyond letting you save RAM by using it for both graphics and app code simultaneously I'm not sure it has much benefit. Certainly, datacenters will remain on NVIDIA for the time being unless AMD manage to catch up on their software stack. Intel is the dark horse here. I hear rumblings that their first offering is actually quite good.
This is such an uncharitable and adversarial interpretation of the parent. Sandbagging Intel != juicing the M1.
FWIW I don't buy his explanation anyway. Intel at the time had zero desire to be a fab. Their heart was not in it. They wanted to own all the IP for fat margins. They have yet to prove anything about that has changed despite the noise they repeatedly make about taking on fab customers.
https://www.macrotrends.net/stocks/charts/META/meta-platform...
What's fascinating albeit somewhat depressing is that it seems something similar happened when McDonnell merged with/took over Douglas as well: https://admiralcloudberg.medium.com/a-legal-and-moral-questi...
There is a lot that can be critiqued about that period.
Like the time they appointed Will.I.Am? Wasn't that Intel (perhaps also AMD)? Qualcomm Adreno GPUs are ATi Radeon IP, hence the anagram.
Even if Intel makes a thing that doesn't mean it's actually any good.
I had some of those Intel ULV parts back in the day. They sucked.
https://corp.mediatek.com/news-events/press-releases/mediate...
Like, it's of course dependent on what "build competing products" means, but assuming you mean semicustom (like AMD sells to Sony or Samsung), then Nvidia isn't as intractably opposed as you're implying.
Regulators can be dumb fanboys/lack vision too, and Nvidia very obviously was not spending $40B just to turn around and burn down the ecosystem. Being kingmaker on a valuable IP is far more valuable than selling some more Tegras for a couple of years. People get silly when Nvidia is involved and make silly assertions, and most of the stories have become overwrought and passed into mythology. Bumpgate is… something that happened to AMD on that generation of GPUs too, for instance. People baked their 7850s to reflow the solder back then too - did AMD write you a check for their defective GPU?
nVidia always had the trump card of saying "if you want SLI, you have to buy our chipset." But conversely, a lot of the options weren't great. VIA tended to alternate between decent and incompetent chipsets, SIS was mostly limited to the bottom of the market, and ATI's chipsets were very rare.
CEOs not focused on product vision who focus on "company growth" as a vague goal are what led to Intel, GE, etc. There is no greedy raiding of the coffers like your caricature implies in these scenarios.
[1] https://www.theregister.com/2023/02/14/google_prepares_its_o...
I'm guessing this has increased since 2021. I've moved the majority of our AWS workloads to ARM because of the price savings (it mostly 'just works'). If companies are starting to tighten their belts, this could accelerate ARM adoption even more.
I can get a top-of-the-line Xeon Gold basically next day, with incredibly high-quality out-of-band management, from a reputable server vendor (HP, Dell).
Ampere? Give it 6 months and €5,000 and maybe you can get one, from Gigabyte, which is not known for server quality.
(Yes, I'm salty; I have 4 of these CPUs and it took a really long time to get them while they cost just as much as AMD EPYC Milans.)
But I guess that's the problem - I had to provide an example.
Apple does really well on its rare acquisitions, but they aren't very public as they get successfully absorbed. PA Semi, Intrinsity, more I can't remember.
ATi and Xilinx have by all accounts worked out really well for AMD.
Most chipmakers saw gains moving from N5 to N5P at TSMC, which wasn't even a process jump, simply maturity and optimization on the existing node.
Intel is shipping competitive clock frequencies on Intel 4 vs. everyone in the industry except the most recent generation of their own RPL parts, which have the advantage of being up-bins of an evolved and mature process.
That sounds pretty normal to me? New processes launch with conservative binning and as yields improve you can start selling the outliers in volume. And... it seems like you agree, by pointing out that this happened with Intel 7 and 14nm too.
Basically: this sounds like you're trying to spin routine manufacturing practices as a technical problem. Intel bins differently than AMD (and especially Apple, who barely distinguish parts at all), and they always have.
Also, the ARM ISA is just an ISA.
You seem to focus on it too much when it isn't THAT relevant.
An ISA doesn't imply perf/energy characteristics.
Also what does "400 million people live downstream" even mean? There's ten million people living downstream of this dam https://en.wikipedia.org/wiki/Federal_Dam_(Troy), and ten million more living downstream of the various Mississippi dams and so on.
The CPUs were indeed horrible, and would've caused a lot of pain if the projects had actually continued. (source: I was working on the software side for the early Nokia intel prototypes)
The entire Three Gorges meme originated from Falun Gong/Epoch Times propaganda, including in the linked article (to an interview with Simone Gao) and all the dumb Google Maps photos of a deformed dam due to lens distortion. PRC planners there aren't concerned about a dam breach, but about general infra terrorism.
The one piece of infra PRC planners are concerned about is coastal nuclear plants under construction, which are a much better ordnance trade for TW anyway, and just as much of a war crime.
It’s not like they couldn’t afford it and taking chances is important
In that circumstance, I think most MBAs would have made the same decision.
Fixed that for you
ARM literally came from “we need a much better and faster processor” and “how hard can this be?” [2]
[1] https://en.wikipedia.org/wiki/History_of_Apple_Inc.#1971%E2%...
https://chipsandcheese.com/2023/01/22/bulldozer-amds-crash-m...
https://chipsandcheese.com/2023/01/24/bulldozer-amds-crash-m...
---
From a consumer perspective, Bulldozer and revisions as compared to Skylake and revisions were:
+ comparable on highly multi-threaded loads
+ cheaper
- significantly behind on less multi-threaded loads
- had 1 set of FPUs per 2 cores, so workloads with lots of floating point calculations were also weaker
- Most intensive consumer software was still focused on a single thread or a very small number of threads (this was also a problem for Intel in trying to get people to buy more expensive i7s/i9s over i5s in those days)
1. Bulldozer had a very long pipeline, akin to a Pentium 4. This allows for high clocks but comparatively little work done per cycle vs. the competition. Since clocks have a ceiling around 5GHz, they could never push the clocks high enough to compete with Intel.
2. They used an odd core design with 1 FPU for every 2 integer units instead of the normal 1:1 we have seen on every x86 since the i486. This led to very weak FPU performance, which many professional applications need. Conversely, it allowed for very competitive performance on highly threaded integer applications like rendering. This decision was probably made under the assumption that APUs would integrate their GPUs better and software would be written with that in mind, since a GPU easily outdoes a CPU's FPU but requires more programming. This didn't come to be.
3. They were stuck using GlobalFoundries due to the contracts from when they spun it off, which required AMD to use GloFo. This became an anchor as GloFo fell behind competitors like TSMC, leaving AMD stuck on 32nm for a long while, until GloFo got 14nm and AMD eventually got out of the contract between Zen 1 and 2.
Bonus: many IC designers have bemoaned how much of Bulldozer's design was automated with little hand tuning, which tends to lead to a less optimized design.
I wish Pat well and I think he might be the only one who could save the company, if it's not already too late.
Source: worked with many ex-Intel people.
POSTSCRIPT: I have seen from the inside (not at Intel) how a politically motivated acquisition failed utterly spectacularly due to that same internal power struggle. I think there are some deeply flawed incentives in corporate America.
Bulldozer seemed to be designed under the assumption that heavy floating point work would be done on the GPU (APU), which all early construction cores had built in. But no one is going to rewrite all of their software to take advantage of an iGPU that isn't present in existing CPUs and isn't present in the majority of CPUs (Intel), so it sort of smelt like Intel's Itanic moment, only worse.
I think they were desperate to see some near term return on the money they spent on buying ATI. ATI wasn't a bad idea for a purchase but they seemed to heavily overpay for it which probably really clouded management's judgement.
I thought this was interesting enough to track down more of the backstory here, and found this fascinating article:
https://archive.computerhistory.org/resources/access/text/20...
Page 60, hard breaks added for readability:
Baum: Apple owned a big chunk of it, but when Apple was really having hard times, they sold off their chunk at quite a profit, but they sold off the chunk. And then-- oh, while Newton was going on, some people from DEC came to visit, and they said, “Hey, we were looking at doing a low power Alpha and decided that just couldn’t be done, and then looked at the ARM. We think we can make an ARM which is really low power, really high performance, really tiny, and cheap, and we can do it in a year. Would you use that in your Newton?” Cause, you know, we were using ARMs in the Newton, and we all kind of went, “Phhht, yeah. You can’t do it, but, yeah, if you could we’d use it.”
That was the basis of StrongARM, which became a very successful business for DEC. And then DEC sued Intel. Well, I worked on the StrongARM 1500, which was a very interesting product. It was an ARM and a DSP kind of highly combined. It was supposed to be like video processing using set top boxes, and things like that. And then we finished that project and our group in Palo Alto, we were just gonna start an Alpha project.
And just then it was announced that DEC was-- no. No. Intel, at that time, Intel, DEC sued Intel for patent violations, didn’t go to them and say, “Hey, pay up or stop using it.” They just sued them. Intel was completely taken by surprise. There was a settlement. The settlement was you have to buy our Microelectronics Division and pay us a whole pile of money, and everything will go away.
So they sold the Microelectronics Division, which we were part of, except for the Alpha Design Group, 'cause they didn’t think that they could sell that to Intel and have the SEC approve, 'cause the two can conflict. So I went away on vacation not knowing whether I would be coming back and working for Intel, or coming back working for DEC. And it turned out they decided to keep the Alpha Design Group, so I was still working for DEC. Except the reason for the lawsuit was Compaq wanted to buy DEC, but didn’t want to buy ‘em with this Fab and Microelectronics Division. So by doing this, they got rid of the Microelectronics Division, and now they could sell themselves to Compaq.
The difference in performance for 95% of what I do is zero. I even run some (non-AAA) Windows games via Crossover, and that's driving a 1440p 165Hz display. All while it sits there consuming no more than about 35W (well, plus a bit for all my USB SSDs, etc.) and I've never seen the thermals much past 60C, even running natively accelerated LLMs or highly multithreaded chess engines and the like. Usually sits at about 40C at idle.
It's exactly what almost-40-year-old me wants out of a computer. It's quiet, cool, and reliable - but at the same time I'm very picky about input devices, so a bring-your-own-peripherals desktop machine with a ton of USB ports is non-negotiable.
Do you have a source for this other than Cinebench R23, which is hand-optimized for x86 AVX instructions through the Intel Embree engine?
From all sources, Apple Silicon has 2-3x more perf/watt than AMD's APUs in multithread and a bigger gap in single thread.
By this yardstick, nobody in semiconductors has ever innovated.
Even if Nvidia also wants TSMC's latest process, that could work to Apple's advantage. Right now it's looking like Apple might end up with TSMC's 3nm process to itself for 18 months. If Apple and Nvidia split the initial 2nm capacity, it could be 3+ years before AMD and Qualcomm can get to 2nm.
If Nvidia launches the RTX 50 series in late 2024 or early 2025 on TSMC's 3nm (which seems to be the rumor), what does that do for availability for AMD and Qualcomm? Maybe what we'll see going forward is Apple taking the capacity for the first year and Nvidia taking the capacity for the second year leaving AMD and Qualcomm waiting longer.
That would certainly benefit Apple. Apple isn't competing against Nvidia. If Nvidia uses up TSMC capacity leaving Apple's actual competitors at a bigger disadvantage, that's pretty great for Apple.
The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
Maybe they were worse than they needed to be, but the best they could have been with Intel would have still left so much to be desired.
The one place I agree with you is in the > ~16 core space (server style) with TBs of RAM, where total performance density is more important than power; Apple doesn't bother to really compete there. Where I differ slightly is that I don't think there is anything about the technical design that prevents this - just look at how Epyc trounced Intel in that space by using a bunch of 8-core chip modules instead of building a monolith; rather, they just don't have interest in serving that space. If Apple was able to turn a phone chip into something with the multicore performance of a 24-core 13900K, it doesn't exactly scream "architectural limitation" to me.
One reason TSMC did not have such problems is that they made more incremental changes from one process variant to another, avoiding any big risks. The other reason is that Intel has repeatedly acted as if it were unable to estimate from simulations the performance characteristics of its future processes; it has always been caught by surprise by experimental results inferior to predictions, so it has had to switch product lines from plan A to plan B throughout the last decade, unlike the previous decade, when everything appeared to go as planned.
A normal product replacement strategy is for the new product to match most of the characteristics of the old product that is replaced, but improve on a few of them.
Much too frequently in recent years, Intel's new products have improved some characteristics only at the price of making others worse. For example: raising the clock frequency at the price of increased power consumption, increasing the number of cores but removing AVX-512, or, as in Meteor Lake, raising the all-cores-active clock frequency at the price of lowering the few-cores-active clock frequency.
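For the clock-frequency point specifically, the usual first-order CMOS dynamic-power approximation (textbook scaling, not anything Intel has published for these parts) is:

    P_dyn ≈ α · C · V² · f

and since hitting a higher f generally needs a higher supply voltage V as well, power climbs much faster than the frequency gain it buys.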
While during the last decade Intel has frequently progressed in the best case by making two steps forward and one step backward, all competitors have marched steadily forwards.
I remember it very well, as back then I was working at a university porting Linux to an Intel XScale development platform we had recently gotten.
After I completed the effort, Android was released as a public beta and I dared to port it too to that development board as a side project. I thought back then that Intel was making a big mistake by missing that opportunity. But Intel were firm believers in the x86 architecture, especially in their Atom cores.
Those little Intel PXA chips were actually very capable. Back then I had my own Sharp Zaurus PDA running a full Linux system on an Intel ARM chip, and I loved it. Great performance and great battery life.
Amazon has its own ARM CPUs on AWS, and you can get them on demand, too.
Xeons and EPYCs are great for "big loads", however some supercomputer centers also started to install "experimental" ARM partitions.
The future is bright not because Intel is floundering, but because there'll be at least three big CPU producers (ARM, AMD and Intel).
Also, don't have prejudices about "brands". Most motherboard brands can design server-class hardware if they wish. They're just making different trade-offs because of the market they're in.
I used servers which randomly fried parts of their motherboard when they saw some "real" load. Coming in one morning and having no connectivity because the top-of-the-line 2-port gigabit onboard Ethernet fried itself on a top-of-the-line, flagship server is funny in its own way.
Here’s an article unpacking the financialization at Intel. The good stuff is about halfway down.
https://www.ineteconomics.org/perspectives/blog/how-intel-fi...
Haven't heard much about successful Google acquisitions lately though.
The Mac is annoying since I think some pieces of their silicon designs come from Israel (storage controller). Can someone correct me if I am wrong on that?
I just don't think it's very likely for just about any leader to put themselves into the position you are describing. This is a recurring narrative in western media, and I'm not here to defend dictators, but I feel like reality is less black and white than that.
Many of the "crazed leaders" we are told are acting irrationally often are not. It's just a very, very different perspective, often a bad one, but regardless.
Let me try to explain what I mean: during the Iraq war, Saddam Hussein was painted as this sort of crazed leader, irrationally deciding to invade Kuwait. But that's not the entire truth. Hussein may have been an evil man, but the way the borders of Iraq were re-drawn, Iraq was completely cut off from any sources of fresh water. As expected, their neighbors cut off their already wonky water supplies and famine followed. One can still think it's not justified to invade Kuwait over this, but there's a clear gain to be had from this "irrational" act. Again, not a statement of personal opinion, just that there IS something to be had. I'm not trying to say that I am certain that Hussein had the prosperity of his people at heart, but I do think that it isn't entirely irrational to acknowledge that every country in human history is 3 missed meals away from revolution. That's not good, even if you are their benevolent god and dictator for life(tm).
Russia "irrationally" invading the Ukraine may seem that way to us, but let's see. Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke. The reason why they still can compete in this market is their asset of Soviet infrastructure and industry. A good majority of USSR pipelines run through the Ukraine. I'm not saying it's okay for them to invade, but I can see what they seek to gain and why exactly they fear NATO expansion all that much.
I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.
But as I stated, this may very well change when they get more desperate. Hussein fully knew the consequences of screwing with the West's oil supply, but the desperation was too acute.
I just don't buy irrationality; there's always something to be had or something to lose. It may be entirely different from our view, but there's gotta be something.
But, as it seems, I vastly underestimated the effort needed to cause my theorized catastrophe. I'm entirely open to admitting I was wrong about that; it's always good to learn.
Also, correct me if I'm wrong, but afaik the viability of nuclear plants as strategic targets has been vastly overblown. I'll go read up on it, but I don't think it's that big of a risk.
It could not run x86 apps faster than x86 cpus, so it didn't compete in the MS Windows world. Itanium was a headache for compiler writers as it was very difficult to optimize for, so it was difficult to get good performance out of Itanium and difficult to emulate x86.
Itanium was introduced after the dot-com crash, so the market was flooded with cheap, slightly used SPARC systems, putting even more pressure on price.
This is unlike when Apple introduced Macs with PowerPC CPUs: they had much higher performance than the 68040 CPU they replaced. PowerPC was price competitive and easy to write optimizing compilers for.
[1] https://www.gottabemobile.com/meego-powered-nokia-n9-to-laun...
I'm pretty sure most people here panned the iPhone after it came out, so it's not as if anyone would have predicted it prior to even being told it existed.
Simply false brand-oriented thinking.
My point is that visionaries can drive everything into the ground and people like you will ride the ship down with support not realizing the negligence at play.
I’m here saying, “fires are not the only thing that destroy homes, floods wreck way more” and you’re digging in with evidence of fires.
That article is looking at things focused on financials, but Intel did not fuck up on underinvestment. They invested in cutting-edge stuff like EUV lithography that others are now reaping the benefits of.
I'll be blunt: you're interpreting a "problem" where none exists. I went back and checked: when Ivy Bridge parts launched the 22nm process (UNDENIABLY the best process in the world at that moment, and by quite a bit) the highest-clocked part from Intel was actually a 4.0 GHz Sandy Bridge SKU, and would be for a full 18 months until the 4960X matched it.
This is just the way Intel ships CPUs. They bin like crazy and ship dozens and dozens of variants. The parts at the highest end need to wait for yields to improve to the point where there's enough volume to sell. That's not a "problem", it's just a manufacturing decision.
Apple switched to their own silicon, and life for their customers has radically improved.
Maybe Intel has a great E Core now. Good for them if they do.
Kind of like Decca Records turning down The Beatles.
In the geography with which I am familiar, Iraq has two incredibly famous rivers, and the Persian Gulf is actually salt water.
When you're saying "Putin invaded Ukraine irrationally" you're implicitly projecting your own value system and worldview onto him.
Let's take goals. What do you think Putin's goals are? I don't think it's too fanciful to imagine that welfare of ordinary Russians is less important to him than going down in history as someone who reunited the lost Russian Empire, or even just keeping in power and adored. It's just a fact that the occupation of Crimea was extremely popular and raised his ratings, so why not try the same thing again?
What about the worldview? It is well established that Putin didn't think much of Ukraine's ability to defend itself, having been fed overly positive reports by his servile underlings. Hell, even the Pentagon thought Ukraine would fold, shipping weapons that would work well for guerrilla warfare (Javelins) and dragging their feet on stuff regular armies need (howitzers and shells). Russians did think it would be a walk in the park; they even had a truck of crowd control gear in that column attacking Kyiv, thinking they'd need police shields.
So when you put yourself into Putin's shoes, attacking Ukraine Just Makes Sense: a cheap and easy way to boost ratings and raise his profile in history books, what's not to like? It is completely rational, for his goals and his perceived reality.
Sadly, people often fall into the trap of overextending their own worldview/goals onto others, finding a mismatch, and trying to explain that mismatch away with semi-conspiratorial thinking (Nato expansion! Pipelines! Russian speakers!) instead of reevaluating the premise.
> Russia "irrationally" invading the Ukraine may seem that way to us, but let’s see.
Invading one of their largest neighbors and ruining their relationship with a nation they had significant cultural exchange and trade with (including many of their weapons factories) is irrational.
But Russia's leaders didn't want a positive neighborly relationship they wanted to conquer Ukraine and restore the empire. Putin has given speeches on this comparing himself to the old conquering czars.
> Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke.
True enough
> The reason why they still can compete in this market is their asset of soviet infrastructure and industry.
Much of the equipment is western and installed in the post Soviet period.
> A good majority of USSR pipelines run through the Ukraine.
Then they probably shouldn't have invaded in 2014? Almost seems like they made a bad, irrational choice. They had other pipelines that bypassed Ukraine, like NS1, and NS2 (which didn't enter service due to the war).
> I’m not saying it’s okay for them to invade, but i can see what they seek to gain
Please explain what they tried to gain. Ukraine wouldn't have objected to exports of gas through Ukraine if not for the Russian invasion and they already had pipelines that bypassed Ukraine.
> and why exactly they fear NATO expansion all that much.
They don't fear NATO expansion; they disliked it because it prevented them from conquering or bullying countries with threats of invasion. They've taken troops off the NATO border with Finland (and didn't even invade Finland when Finland joined NATO). Russia acknowledged the right of eastern European nations to join NATO and promised to respect Ukraine's sovereignty and borders.
> I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.
The fabs are a red herring, they're largely irrelevant. If China invades (which I hope doesn't happen) it will not be because of any economic gains. There are no possible economic gains that would justify the costs of a war. If they invade it will be for the same reason that Russia did, because of extreme nationalism/revanchism and trying to use that extreme nationalism to maintain popularity among the population.
I think you're talking about the US being willing to escalate to attacking the mainland, specifically strategic targets that support the war economy. Nuclear plants are sensationally overblown as targets, since they're basically just another piece of hard-power infra. Which, BTW, very few US strategic planners have actually indicated willingness to do, but they also inevitably must, since the PRC can prosecute a TW (and SKR/JP) war completely from the mainland.
To which, IMO, most also vastly underestimate the effort needed. The reality right now is that the amount of firepower the US can surge into the region (naval strikes, regional runway access for aviation, CONUS long-range bombers) is very limited relative to the number of PRC strategic targets, and in a contested theatre. To be blunt, the PRC mainland is significantly larger (more targets) and more capable (less ability to hit targets) than any previous US adversary. By 1-2 orders of magnitude. Most don't grasp this.
For reference, the US + coalition air campaign in the Gulf War, where the coalition surged 6 carriers and had extremely favourable regional basing to supplement land aviation, conducted ~100,000 sorties in 40 days against Iraq, a country 20x smaller (realistically 10x, since PRC targets are mostly in the eastern half of the country), with 80x fewer people (and even less aggregate productive/manufacturing ability). And that campaign was essentially UNCONTESTED, since IIRC the French, who designed the Iraqi anti-air network, sold out the entire system to the West. And it was efficient, since the regional base (CENTAF Saudi) was close enough that US fighters could sortie with minimal refueling.
None of that is true in a PRC campaign. The distances involved and the limited basing the US has access to (at least relative to PRC access to their entire military infra) mean the US is unlikely to forward deploy as much aviation, and sorties need midair tanking (possibly multiple times) to deliver weapons, assuming those fighters aren't shot down/destroyed on the ground in the first place. Same with the navy: the US can throw in everything, but the effects won't scale proportionally, since the US can't actually sustain/replenish the surge for more than a few weeks, assuming support assets don't get destroyed themselves when they restock in port. So to summarize: the PRC is 10x-20x bigger than Iraq, with 80x+ more targets, in a contested region where the PRC has home-team advantage and the US has visiting-team disadvantage (with regional partners factored in), in a manner that the US might not even be able to sustain forward posture for more than a few weeks (vs. 5 weeks of the initial Gulf War campaign). If you just naively scale the Iraq air campaign to the PRC, it would take the US 5+ years to degrade the PRC the same way it did Iraq.
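For what it's worth, here's that napkin math written out as a tiny C program. The sortie-rate penalty is my own illustrative assumption (contested airspace, distant basing, tanker dependence); the other numbers are the ones above:

    #include <stdio.h>

    int main(void) {
        const double gulf_sorties        = 100000.0; /* ~100k sorties in the Gulf War air campaign */
        const double gulf_days           = 40.0;     /* over ~40 days */
        const double target_multiplier   = 15.0;     /* "10x-20x bigger" -> take a midpoint */
        const double sortie_rate_penalty = 3.0;      /* assumed: US generates sorties ~3x slower */

        double gulf_rate = gulf_sorties / gulf_days; /* ~2,500 sorties/day */
        double prc_days  = gulf_days * target_multiplier * sortie_rate_penalty;

        printf("Gulf War: %.0f sorties/day for %.0f days\n", gulf_rate, gulf_days);
        printf("Naively scaled PRC campaign: %.0f days (~%.1f years)\n",
               prc_days, prc_days / 365.0);
        return 0;
    }

That works out to roughly 1,800 days, i.e. about five years, which is where the "5+ years" figure lands under these hand-wavy assumptions.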
That's the scale of the problem. Granted, it's very hand-wavy and napkin-mathy, but it illustrates how gargantuan the PRC actually is and how big the challenge has become relative to a US military capability that is calibrated to stomp small/medium-sized countries. IMO that's why planners over the last 10 years have focused on a SLOC/energy blockade, because a land war in Asia is stupid. But even blockade talk is going to quiet down (and IMO US support for TW militarily) in a few years, when the PRC rolls out CONUS conventional strike with ICBMs and brings mutual conventional homeland vulnerability. But that's another matter entirely; the TLDR is that US game theory on TW is going to be very different when they realize 200-300 oil refineries and LNG plants and a few F-35 assembly plants can significantly degrade CONUS and NATO. The other part of hitting hundreds of smaller targets vs. 1 large target that triggers nuclear retaliation is that there are more rungs/opportunities to de-escalate, which is probably the top priority in an actual US/PRC war.
So ironically announcing Itanium was genius, but then they should have just canceled it.
That's how technology proliferates. The point is if the M1 wasn't innovative, that rules out pretty much everything AMD, Intel and potentially even NVIDIA have done in the last three decades.
I think at the time that article got published we didn't even have the intel devboards distributed (that is a screen and macroboards, way before it starts looking like a phone). We did have some intel handsets from a 3rd party for meego work, but that was pretty much just proof of concept - nobody ever really bothered looking into trying to get the modem working, for example.
What became the N9 was always planned as an ARM-based device - the exact name and specs changed a few times, but it still pretty much was developed as a maemo device, just using the meego name for branding, plus having some of the APIs (mainly, qt mobility and QML) compatible with what was scheduled to become meego. The QML stuff was a late addition there - originally it was supposed to launch with MTF, and the device was a wild mix of both when it launched, with QML having noticeable issues in many areas.
Development on what was supposed to be proper meego (the cooperation with Intel) happened with only a very small team (which I was part of) at that time, and was starting to slowly ramp up - but a massive developer effort from Nokia to actually make a "true" meego phone would've started somewhere mid-2011.
Turkey, Iran and Syria can and have caused horrible water shortages in Iraq.
I'm not sure what you mean by "woke disease," but consumerism involves evaluation.
Oct 7 was horrible, but it didn't come out of nowhere. Sabra and Shatila, for example (Waltz with Bashir being a very good Israeli film on the topic), and the many thousands of people killed, mutilated, or displaced in their usual unhelpfully disproportionate response.
I didn't realize HN served multiple alternate realities. Over here where I'm looking from, by far the largest boycotts of the last 8 years have been from conservatives who were upset by events like trans people being featured in ads and the existence of gay people in movies/tv shows.
(But yeah, clearly his actual goal was to increase his personal prestige. Is that not common knowledge yet?)
I'd say a lot more innovation happens on the process side. Manufacturing.
All the architecture changes look like things mainframes did decades ago.
You leave no room for nuance here, friend. And somehow I doubt you're familiar with every single one of Intel's tens of thousands of SKUs over decades. Intel has made a lot of CPUs. They're in lots of laptops. You might consider your wording if you're not trying to be corrected.
None of those claims are anything but proven, historical facts, by the way.
Wanna lose your appetite? The leadership in charge of the described operations in Vietnam gleefully talked about their management genius. They implemented kill quotas.
This list is also anything but exhaustive.
Rough recent history for Intel.
I agree that Apple figured if they were going to switch, they should just go ahead and switch to themselves. But the choice was really to switch to either themselves or AMD. Sticking with Intel at the time was untenable. 14nm is certainly a big part of that story, and I'm glad you at least finally recognize there was a serious problem.
If Intel had been able to deliver on their node shrink roadmap, perhaps Apple never would have felt the need to switch--or may have at least delayed those plans. Who knows, that's alternate history speculation at this point.
The article in question is about Intel potentially getting back to some level of process parity, perhaps even leadership. I'm looking forward to that because I think a competitive market is important.
But pretending Intel's laptop processors weren't garbage for most of the last 8 or so years is kind of living in an alternate reality.
It remains to be seen if Apple is willing or able to scale their architecture to something workstation class (the last Intel Mac Pro supported 1.5TB of ram, it's easy to build a 4TB Epyc workstation these days).
Just the first example I grabbed. Intel made quite a few more chips in the 15W and higher envelopes, still under M1's ~20W TDP.
I didn't check their embedded SKUs yet. Nor enterprise.
TDP doesn't seem to be the whole story - the datasheet is one thing, but I've yet to see a reviewer get 30 hours from an Intel laptop.
Your very own source gives the M1 a 75 vs 59 score on power efficiency. They don't seem to provide a benchmark for it, but it should have been a clue regardless.
Here's the first result on M1 power efficiency I found https://www.anandtech.com/show/17024/apple-m1-max-performanc... :
"In single-threaded workloads, Apple’s showcases massive performance and power advantages against Intel’s best CPU. In CineBench, it’s one of the rare workloads where Apple’s cores lose out in performance for some reason, but this further widens the gap in terms of power usage, whereas the M1 Max only uses 8.7W, while a comparable figure on the 11980HK is 43.5W.
In other ST workloads, the M1 Max is more ahead in performance, or at least in a similar range. The performance/W difference here is around 2.5x to 3x in favour of Apple’s silicon.
In multi-threaded tests, the 11980HK is clearly allowed to go to much higher power levels than the M1 Max, reaching package power levels of 80W, for 105-110W active wall power, significantly more than what the MacBook Pro here is drawing. The performance levels of the M1 Max are significantly higher than the Intel chip here, due to the much better scalability of the cores. The perf/W differences here are 4-6x in favour of the M1 Max, all whilst posting significantly better performance, meaning the perf/W at ISO-perf would be even higher than this."
(as if we all didn't already know this)
Screen backlight usually has as much to say about battery life as the CPU does. And it's hard to deny Apple's advantage of being fully vertically integrated. Their OS only needs to support a finite number of hardware configurations, and they are free to implement fixes at any layer. Most of my PC laptops make choices which use more power in exchange for modularity and upgradeability as well, like using SODIMMs for ram, and m.2 or SATA removeable storage, all of which consume more power than Apple's soldered on components.
It doesn't support your contention that Intel was making comparable chips at the time to the M1, I'll say that at the least.
Anandtech has talked about some of Intel's TDP shenanigans before, even for Tiger Lake: https://www.anandtech.com/show/16084/intel-tiger-lake-review...