If anyone was confused like I was, it's (Xeon 7) (E-Core). It'll have 288 cores. The Diamond Rapids Xeon 7 P-core chip has 192 cores.
This is more of a successor to the Xeon 6E (Xeon 6700E / Sierra Forest-SP), which had 144 cores. There was supposed to be a 288-core variant (presumably Xeon 6900E / Sierra Forest-AP), but they never released it to the public. I was looking forward to it since Sierra Forest-AP was supposed to support a 2-socket configuration (from motherboard spec). That's 576 physical cores in a single server!
Sounds like a UltraSPARC-T1, all over again. A processor stuffed with efficiency cores for cloud loads.
Makes sense.
That did take me a beat. 7 cores? Must be pretty fast...
jealous of the person at Intel who gets to run `make -j 288`
Back in the day when Xeon Phi was around, I'd run `make -j 256` to run on the ~240 available hyperthreads. Those things were build machine beasts, assuming there weren't too many dependencies. For example, the Linux kernel would build ~240 files at a time, which greatly sped up the build process, but linking was extremely slow (single-threaded on one very slow Phi core).
Even more interestingly, the Knights Landing series had a PCIe coprocessor version, which ran a stripped-down Linux kernel, and you could SSH onto it. One of my friends got one for free at a conference, and I really wish I'd picked one up!
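The `-j` width above need not be hard-coded; a minimal sketch of deriving it from the visible thread count (the 1.1x overcommit factor is my assumption, not a rule — it just keeps cores busy while some jobs block on I/O, much like `-j 256` on ~240 threads):

```python
import os

# os.cpu_count() reports hardware threads, not physical cores.
# Slightly overcommit so the build stays saturated during I/O waits.
jobs = int(os.cpu_count() * 1.1)
print(f"make -j{jobs}")
```

On a Xeon Phi this would print something close to the `make -j 256` used above.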
Don't you get limited by memory bandwidth at that point? (assuming all disk contents are cached)
Is there _any_ confirmed information about Diamond Rapids, beyond one leaked (and seemingly slightly dubious) slide?
maybe update the description
Intel's "Clearwater Forest" 288-core Xeon 7 CPU Will Be a Beast
Intel made some really good low power gpus with good driver support. How are their gpus coming along for LLM support of local first?
I've also heard Intel GPUs make good cheap hardware transcoders for home media servers.
Incredibly good. A lot of major GPU board partners are making LLM-focused boards. There were a few previews on Gamers Nexus from... Computex I think? I wanna say lots of memory, dual GPU boards that looked very well built for local LLM usage.
With unified memory?
https://www.servethehome.com/maxsun-intel-arc-pro-b60-dual-g...
It's two B60 GPUs on a single x16 PCIe card. Nothing unified.
Very curious about this as well.
Though I would say the fact that they were not involved with OpenAI on gpt-oss is not a good sign.
(OAI partnered with Databricks, NVIDIA, Dell, bunch of MSPs. etc.)
This <insert next gen> will be a beast!
"how many N100 minipcs can you pack in 1U?"
turns out the answer is "lots"
IIUC these cores are a lot better than the N100's. They're a couple of generations newer and have some fairly significant improvements.
N150 isn't much of an improvement, I don't expect one more generation to be a crucial difference.
N150 is still using the old Gracemont core (same as N100, but with higher clock). From there, there is Crestmont (node-shrink Gracemont in Meteor Lake), and Skymont (in Lunar Lake/Arrow Lake), whose performance is already comparable to that of Golden Cove (12th Gen P-core). Chips and Cheese has written a very detailed analysis for Skymont[1] if you're interested.
[1]: https://chipsandcheese.com/p/skymont-intels-e-cores-reach-fo...
The rumors are that there will be no true successor for N150.
Instead of that, there will be a cut down variant of Panther Lake, named Wildcat Lake, with 2 P-cores and 4 E-cores.
That will clearly have much better performance than the N150, including a much better Xe3 GPU, but it seems very unlikely that Intel will sell it at the low prices that made the N-series so attractive.
I assume that Wildcat Lake will be priced similarly to N350, so it will be available in computers closer in price to $300 than to the $100 that can get you an N100 or N150 computer.
> rumors are that there will be no true successor for N150
N100/N150 reinvigorated the x86 ecosystem with price/perf of Arm plus compatibility with existing software.
> Wildcat Lake will be .. closer in price to $300
That price and P-core power budget would compete with compatible x86 AMD Ryzen instead of incompatible Arm/RISC-V boards.
Why would Intel surrender a long-sought advantage over Arm, after they finally succeeded with capable E-cores?
Intel needs to stop being run by accountants, otherwise accountants will be the ones turning off the lights one last time, and soon...
https://news.ycombinator.com/item?id=44463813
> It's genuinely crazy how much better value an N100 is and how much better it works out of the box than a Pi for anything.. that doesn't need to talk to electronics/GPIO.. Low cost x86_64 solutions beat the pants off ARM in the PPPITA (performance per pain in the arse) department. The Raspberry Pi software ecosystem advantage nopes the moment x86 shows up to the party.
https://news.ycombinator.com/item?id=44465319
> Intel N150 is the first consumer Atom CPU (in 15 years!) to include TXT/DRTM for measured system launch with owner-managed keys. At every system boot, this can confirm that immutable components (anything from BIOS+config to the kernel to immutable partitions) have the expected binary hash/tree.
I completely agree, got an N100 minipc server at home myself which replaced an rpi 4. Fantastic value.
My point is Intel's accountants have a history of seeing 'fantastic value' as 'margins too low', which reduces the margin of error in engineering to ~0.
N100/N150 (unintentionally?) fueled a Cambrian explosion in mini PCs. It needs a few more years of OEM experimentation with form factors and ISV software stacks to identify new vertical markets, before trying to harvest high margin segments. It's too early and too successful to abandon the category now.
If they must have a $300 price point, offer one with many E-cores and zero P-cores. I wonder how long Twin Lake N150 will be sold, if there is no successor, https://www.intel.com/content/www/us/en/products/sku/241636/...
it is a part that you really wouldn't want to make if you could avoid it. given the ~$120 price point for a whole system, the CPU is probably only ~$30. I'm glad it exists, but it's hard to imagine they're making money on it
Core generations, not products. The N150 is a clock-bumped N100.
That first paragraph made my head hurt. A lot of contortion to avoid saying 'Intel has slightly more than half the (edit) revenue in the server market'.
I thought I'd find a chart, best I can do is here:
https://www.techpowerup.com/322317/amd-hits-highest-ever-x86...
Intel's revenue (edit) is falling year-on-year, and AMD gaining. Does anyone have a better chart?
So, its just a clone of current gen Zen5C products, and will fall behind Zen6-based products.
The Darkmont cores of Clearwater Forest are almost identical to the Skymont cores of Lunar Lake, Arrow Lake S and Arrow Lake H. The only significant difference is that they are made with the Intel 18A CMOS process instead of a TSMC "3 nm" process.
These Darkmont cores have performance very similar to the Arm Neoverse V3/Cortex-X4 cores, and lower than that of AMD Zen 5 compact cores.
However, for applications that do not use vector operations, the Darkmont cores have better performance per die area than Zen 5 compact.
For many applications, a 288-core Clearwater Forest will have lower performance than a 192-core Turin dense, but for some applications it will be faster, and the low area per core will enable Intel to sell it for a lower price than Turin dense.
It remains to be seen when Zen 6 will launch in 2026, but it will likely arrive a few months after Clearwater Forest.
It is difficult to call Intel's E-core server chips a “clone” of Zen 5c. Sierra Forest was announced before Zen 4c, and its cores are actually designed-for-area cores, rather than the same core resynthesized with a denser library and less cache (which is what you do if you suddenly need a small core but don't have time to design a proper one), as Zen 5c is.
If it's a clone of anything, it's of Arm's “little” strategy.
288 "efficiency" cores.
That each have 26 execution ports and can retire up to 16 ops per cycle.
288 cores and 12 memory channels. If I did anything with this chip, I would probably just swamp its memory bandwidth and get no performance gains over a 48 core chip.
it also has 0.5 GB of cache, and that's 12 channels of DDR5-8000
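Rough numbers support the bandwidth worry; a back-of-the-envelope sketch, assuming 12 channels of DDR5-8000 with 64-bit channels and ignoring real-world efficiency losses:

```python
# Theoretical peak: channels x transfer rate x bytes per transfer.
channels = 12
transfers_per_sec = 8000e6   # DDR5-8000 = 8000 MT/s
bytes_per_transfer = 8       # 64-bit channel

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
per_core_gb_s = peak_gb_s / 288

print(f"peak: {peak_gb_s:.0f} GB/s, per core: {per_core_gb_s:.2f} GB/s")
```

That works out to roughly 768 GB/s aggregate, i.e. under 3 GB/s per core, which is why bandwidth-bound workloads could plateau well before all 288 cores are busy.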
`make -j288` baby
This is a claim you can easily confirm or deny, considering that 144-core, 8-channel implementations of the Xeon 6E exist in the marketplace today.
The hyper-scalers are going to love this, since this has just brought down the cost per vCPU massively.
How is the "e-core" scheduling going these days? I am not an Intel guy, but I remember reading about it causing a lot of issues for multicore workloads and people having to explicitly pin processes away from those cores.
It was fine within a month after the first E-core chips launched; certainly handled better than AMD's split-CCD V-Cache, which still needs to be pinned.
This isn't a hybrid design. It's a gigantic Atom.
There are interesting core scheduling questions with a CPU this large, considering its 4-way-shared L2 design and profusion of thermal domains. However, these issues are so complex that no system I've ever heard of — certainly not Linux or Windows — attempts to optimize for them.
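On Linux you can at least approximate L2-cluster-aware placement from userspace; a sketch, assuming 4 consecutive CPU ids share an L2 (the real topology should be read from /sys/devices/system/cpu/, and `cores_per_cluster` here is an assumption):

```python
import os

def pin_to_l2_cluster(cluster: int, cores_per_cluster: int = 4) -> set:
    """Pin the calling process to one (assumed) 4-way-shared L2 cluster."""
    wanted = set(range(cluster * cores_per_cluster,
                       (cluster + 1) * cores_per_cluster))
    # Only pin to CPUs this process is actually allowed to run on.
    allowed = wanted & os.sched_getaffinity(0)
    if allowed:
        os.sched_setaffinity(0, allowed)
    return allowed

print(pin_to_l2_cluster(0))
```

This keeps a thread's working set within one L2, which is the kind of placement a topology-aware scheduler would have to do automatically.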
> It's a gigantic Atom.
While Intel's E cores share some history with the Atom line, they bear almost no resemblance to the Atom line you might remember from the netbook days. In fact, Intel hasn't made a consumer Atom in a decade (there are still Atom processors for the embedded market).
The first E cores came out a few years ago with performance similar to Skylake, and performance has only increased from there. Intel's first Xeon E-core processor from last year even outperformed some of the big-core Xeons from the year before: https://www.phoronix.com/review/intel-xeon-6780e-6766e/10
The E-cores are just rebranded Atom cores.
The evolution of the Atom cores has been continuous in the series ... Silvermont, Airmont, Goldmont, Goldmont Plus, Tremont (also included as E-core inside Intel Lakefield), Gracemont (from Alder Lake/Raptor Lake/Alder Lake N/Amston Lake), Crestmont (from Meteor Lake/Sierra Forest), Skymont (Lunar Lake/Arrow Lake), Darkmont (Clearwater Forest).
All the cores of this series have been designed in succession by the same team and the improvements between them have been incremental. These cores have always been intended as competitors for various Arm Cortex-A cores, e.g. Goldmont Plus for Cortex-A75, Tremont for Cortex-A76, Gracemont for Cortex-A78, and now Darkmont/Skymont for Neoverse V3/Cortex-X4.
Before Tremont, these Atom cores were used only in homogeneous CPUs. In recent years, only those sold for industrial applications have retained the "Atom" brand. The most recent CPUs branded as "Atom" were launched last year.
For now, it does not matter whether one refers to any of these cores as "Atom" cores or as "E-cores", because either name refers unambiguously to the same cores. "Atom" seems preferable, because it is more clearly Intel-specific; one could reserve "efficiency cores" for cores in CPUs made by others.
The first E-cores were the Tremont cores of Intel Lakefield, launched in Q2 2020, which were paired with Ice Lake cores and whose performance was much lower than that of Skylake cores. Only the next generation of Atom cores, a.k.a. E-cores, Gracemont, reached performance comparable with Skylake. While Gracemont was first launched in Alder Lake, it later became widely available in Atom-branded CPUs.
"Consumer Atom" is now, and has been for years, contradictory. The Atom brand has long been reserved for Xeon-equivalent customers while the lesser implementations of the same idea now calls itself Pentium or Celeron or, my personal favorite, just "Intel Processor". Incredible marketing.
Anyway as you note these are excellent cores, and having 576 slightly wimpy cores in a box is an idea worth considering.
> … gigantic Atom
I did see mention of Ponte Vecchio.
It seems like Intel is getting closer to its Larrabee vision of massive MP low power chips (which were 486 cores in the shelved 2010 project).
P54 (pentium) cores, not 486. Separate U and V pipes, etc.
These are intended to run a lot of VMs, so you could pin VM cores to CPU cores strategically to keep the same VM in the same zone. These are mostly for cloud workloads.
I wonder how they'd work for AI compared to a GPU? That's all about memory bandwidth so I'm guessing the GPU would still win unless it was an AI type that is more compute-bound.
Somehow I doubt it.
The last time I was excited about anything Intel did was the Xeon Phi. And Intel failed spectacularly to follow up on a very good idea.
Intel simply can't innovate.
After Intel released a microcode update that turned my processor into one that retails for $100 less, I don't really trust claims or even benchmarks until they're > 1 year old.
Intel also, as far as I'm concerned, owes me $100.
Doesn't microcode need to be reloaded each boot? Can't you just not load the update?
There are two ways:
Update the BIOS/UEFI, which includes updated microcode.
Or load it via software at runtime after each reboot (Windows/*nix).
Correct, they are typically applied through the BIOS.
Not to be confused with Intel IME.
https://en.wikipedia.org/wiki/Intel_Management_Engine
Also through windows or Linux. Or is that not done anymore?
It's still done this way too.
https://wiki.archlinux.org/title/Microcode
I am not sure why you tried to draw a distinction between the BIOS and the IME in that regard. The IME also stores its firmware in the SPI flash alongside the rest of the platform firmware.
Other comments miss the point entirely. Sure, it's possible to circumvent this performance impact, but that's not without its risks, and you are still right to be skeptical of performance claims.
I mean if the guy wants he can de-lid the processor and start fiddling around with transistors if he can operate at 6nm scale.
- Hi, Intel customer support, how may I help you
- I've delidded the processor and now it doesn't work. Can I have it replaced under warranty?
- dead tone
Disable mitigations (at your own risk)
As far as I'm aware there's no disabling this one, it's that 13th/14th gen performance "fix" (that coincidentally turns your processor into one that sells for $100 less).
Do you have any info about this? I got mine RMA’d and it works fine.
Yeah, it "works" in that it's a processor and does things. But the "fix" they provided was to basically eliminate its top end performance (the thing that got benchmarked, and the reason it sells at a premium).
As far as I can tell that’s not what happened. What happened is that some motherboard manufacturers gave the chip too much power, more than Intel recommended, and fried it.
No, that wasn't the problem. Intel was providing too much voltage internally on boost, and it was damaging the CPUs. They decreased the voltage, but a bunch of already-damaged CPUs are now marginal, so they are raising other voltages to bring them to stability, and the consequence is a reduction in boost clock speeds. The patches will keep coming as these CPUs degrade further; they are all going to die young, it's just a question of when, and whether Intel will be on the hook for it or not.
That's what Intel wants everyone to believe but not quite accurate. Intel was vague at best about what the motherboard manufacturers should and were allowed to do but even following their guidance as best as possible resulted in CPU failures. Gamers Nexus did a few videos on this, here is one that covers most of it.
https://www.youtube.com/watch?v=b6vQlvefGxk
That's what they claimed at first. Then investigation found widespread oxidation issues.
How do you fix an oxidation issue with firmware?
By derating the processor, aka, turning it into one that retails for $100 less.
Unless the firmware update disabled overclocking, can't you still override that, too?
Intel was asleep at the wheel for a decade at best and realistically was just milking profits while sitting on innovation. Disgraceful.
I'm team AMD all the way.
I have no faith or expectation anything Intel does will matter.
Maybe 10 years of them being irrelevant will convince them of their ways?
I'm with you brother. Intel needs to fix their ways and I don't think a couple generations of playing second fiddle will be enough. Intel burns a lot of money on marketing hoping to trick people about how great their new shit will be (anyone remember when 14th gen was released?) and it ends up being hot air. I have no faith in this one either.
Once AMD takes the spot, the lack of competition will encourage AMD to act in all the same ways you associate with Intel.
Indeed, this has already happened several times, and it just happens that we're currently at the peak of an AMD cycle.
I think Xeon 7 is the first generation after they woke up. We'll see if it's enough.
I find this comment surprising. They have always had better single-core performance than AMD at the same price tag.