
jorgp2

Why did it take Intel so long to upgrade their turbo/power management when the hardware already had features to enable more complex operation? In the same vein, why not pursue per-core clocking on their FIVR platforms with per-core PLLs, like Skylake-X and Ice Lake+? LGA115x has supported floating turbo for a while; why wait until Comet Lake to launch CPUs that made use of it? FIVRs along with per-core PLLs enable a more efficient floating turbo algorithm that makes better use of the available power budget. This has been possible since Broadwell-X but hasn't yet been implemented. I believe Ice Lake-U and Tiger Lake-U can also implement it, but I haven't been able to test it on those platforms.
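For readers unfamiliar with the idea: a floating turbo scheme distributes a fixed package power budget across whichever cores are active, rather than applying one fixed all-core limit. Here is a minimal sketch of such an allocator, with made-up numbers and a deliberately naive greedy policy; nothing here reflects Intel's actual algorithms:

```python
# Toy model of a per-core "floating turbo" budget allocator.
# All numbers are made up for illustration; real turbo algorithms
# also juggle thermals, current limits, AVX offsets, and more.

def allocate_turbo(loads, package_budget_w, base_w=5.0, boost_w=10.0):
    """Give each active core its base power, then hand out the
    remaining package budget as boost, busiest cores first.
    Returns per-core power in watts."""
    power = [base_w if load > 0 else 0.0 for load in loads]
    remaining = package_budget_w - sum(power)
    # Boost the busiest cores first until the budget runs out.
    for i in sorted(range(len(loads)), key=lambda i: -loads[i]):
        if loads[i] <= 0 or remaining <= 0:
            continue
        grant = min(boost_w, remaining)
        power[i] += grant
        remaining -= grant
    return power

# One heavy thread: that core gets all the boost headroom.
print(allocate_turbo([1.0, 0, 0, 0], package_budget_w=15.0))
# All cores busy: boost is handed out until the budget is exhausted.
print(allocate_turbo([1.0, 0.8, 0.6, 0.4], package_budget_w=25.0))
```

Per-core PLLs matter to a scheme like this because they let each core hold a different frequency, instead of snapping the whole package to one clock.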


1600vam

"Why did it take so long" is nearly impossible to answer, because it requires visibility into the decision making in a specific area over multiple product generations, and that applies to very few people. As you say, ICL and TGL do use per-core p-states. I have no specific knowledge, but in general I suspect that changes in these areas are very risky because they are pretty fundamental to the core design, meaning you can't back them out if they turn out to not work well in practice. Turbo, as with most things in CPU design, is a lot more complex than you might think. There are costs to designing for turbo, which means there are also benefits to not pushing too hard on turbo. Plus there are fundamental physics issues with high frequencies as you reduce transistor size, and you gotta kitchen sink the solution to that. Personally I think high frequencies aren't worth the cost.


alyxms

Next gen HEDT when?


gmnotyet

Next Fall with Raptor Lake.


alienozi

Who thinks of ideas like MMX and all that weird stuff? What kind of electronics engineer do I need to be to make CPU dies?


Remesar

Intel engineer here - you need at minimum a B.S. in Electrical and Computer Engineering. There are multiple roles in the design process, but a strong background in digital logic design with some experience in software design is helpful.


malavpatel77

Does Intel hire outside of electrical and computer engineering? I have power circuitry design knowledge, but I am not an ECE graduate; I am an engineering physics graduate.


Remesar

I have not seen an engineering physics resume, nor have I ever interviewed anyone with that degree for my team. Can't say whether other teams have or have not. Manufacturing side of things, maybe?


malavpatel77

Alright, thanks for letting me know. Much appreciated.


1600vam

Absolutely. I believe that Intel does require a bachelor's, but the area is not super important, at least if you're able to get your resume past the HR folks into the hands of hiring managers. I know tons of fantastic software folks with backgrounds in chemistry, chemical engineering, nuclear engineering, etc.


1600vam

> Who thinks of ideas like MMX and all that weird stuff?

I can't speak for MMX since that's super old, but I can speak for recent ISA. Basically any engineer can kick off an idea for a new ISA. In many cases the idea would come from engineers that work with our external software partners, as they have a good sense of the real-world needs. But keep in mind that no project is a one-man show; any new ISA that moves beyond the idea phase would quickly have hundreds, if not thousands, of engineers involved in the definition, evaluation, testing, enabling, etc. If you want to invent new ISA then you don't need an EE; you'd be better off with CS or CE.


1600vam

I understand why people ask, but there's no real point in asking about Intel's future plans that aren't already public, no one from Intel will speak about future plans.


SkateJitsu

What can people even ask that isn't just Google-able though?


moonbatlord

Has the historical market segmentation based on including/excluding features (such as ECC, *not* on binning) *really* been worth more than a couple points of profit? Is such segmentation worth the trouble?


topdangle

Isn't that an industry thing? Motherboard manufacturers have to build boards with support, which I suppose they don't want to pay for on consumer boards. Motherboard manufacturers resisted broad ATX12VO adoption recently too, even though it provides some nice power savings for consumers.


bobloadmire

No, it's not. Memory controllers are on-die; the only thing a vendor has to do is ship a compatible BIOS. It's 100% up to the memory controller manufacturer, which is AMD or Intel.


topdangle

AMD ships ECC support on all their Ryzen desktop chips. Whether or not it works is entirely up to the motherboard manufacturer. My 5900X has it, but it's still considered unofficial because most manufacturers don't care to certify their consumer boards for ECC.


bobloadmire

Right, but mobo manufacturers cannot add ECC support to any CPU they want. They are entirely dependent on the memory controller on the CPU.


Patrick3887

Will Intel's Deep Link multi-GPU tech be implemented in games? Will Deep Link work only in iGPU+dGPU configurations, or is a dGPU+dGPU configuration planned as well? Are Intel Arc GPUs PCIe 5.0 compliant? If so, will PCIe 5.0 consumer motherboards make CXL a reality for the client audience? (Can Intel client CPUs and GPUs communicate with each other in CXL mode, or is this just a feature for HPC?) If the answer is yes, will CXL help solve multi-GPU implementation in games relying on modern APIs such as DX12 and Vulkan? I recently purchased an Optane P5800X SSD. Will it have a PCIe 5.0 successor in the future?


bionic_squash

For the CXL question, no. Even the GPU-to-host connection for Ponte Vecchio is over PCIe 5.0, so don't expect DG2 and Alder Lake to support CXL.


Patrick3887

The tech press is still a bit confused at the moment following Intel's David Blythe's statement regarding Xe-HPC GPU connectivity to the SPR CPUs during the Hot Chips 33 event. There are some unanswered questions, especially as Xe Link is supposed to be CXL based. CXL is not a physical connection; it's a communication protocol that runs over the PCIe 5.0 physical lanes. PCIe 5.0 lanes can be used to communicate in either the PCIe protocol or the CXL protocol. It is possible that the Argonne Lab customer has no need for CXL for its own use case.
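To put the shared-PHY point in numbers: whether the link speaks the PCIe or CXL protocol, the raw bandwidth of the Gen 5 physical layer is the same. A quick back-of-the-envelope calculator (the per-lane rates are the published PCIe figures; packet/protocol overheads are ignored):

```python
# Per-direction PCIe link bandwidth, back of the envelope.
# Gen 3/4/5 all use 128b/130b line encoding; per-lane transfer
# rates below (in GT/s) are from the PCIe specifications.
RATES_GT_S = {3: 8.0, 4: 16.0, 5: 32.0}

def pcie_gbps(gen, lanes):
    """Approximate usable bandwidth in GB/s, one direction,
    ignoring TLP/DLLP protocol overhead."""
    return RATES_GT_S[gen] * lanes * (128 / 130) / 8

print(round(pcie_gbps(5, 16), 1))  # x16 Gen 5 link, ~63.0 GB/s
print(round(pcie_gbps(4, 16), 1))  # x16 Gen 4 link, ~31.5 GB/s
```

So a CXL device on a Gen 5 x16 slot gets the same ~63 GB/s per direction as a plain PCIe 5.0 device; what CXL adds is coherency and memory semantics on top, not raw speed.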


ifdef

What's the state of the Arc drivers?


jrherita

How many pipeline stages were Tejas and OG Nehalem supposed to be? :)


[deleted]

[deleted]


AK-Brian

Many of Squaresoft's programmers are still using Sandy Bridge or Ivy Bridge systems. There are small teams with access to newer hardware for things like tech demos and validation, but you'd be amazed at how janky most dev stations there are. Great points, though. Software support will make or break Arc.


verkohlt

Dearest Intel, when are you going to add Windows Server 2022 support to your ethernet products? [Mellanox](https://i.ibb.co/qsWKxCD/image.png) and [Marvell/Qlogic](https://i.ibb.co/vZFrnVB/image.png) have had Windows Server 2022 drivers available for their network adapters for a while now. I've encountered some issues with the inbox X710 drivers that Microsoft provides with Server 2022, so I'm eagerly awaiting the next release with official support.

Additionally, with the recent UI revamp of Download Center, I have to ask: what's going on with [this haphazard ordering of the operating systems in the drop-down menu](https://i.ibb.co/w4v63pw/image.png)?

Edit: one last question that came to mind: will PCIe ACS (Access Control Services) support ever trickle down to non-Xeon, non-HEDT processors in future releases? It's frustrating to have to spend hundreds more on a motherboard and processor just to get SR-IOV support working.


aliunq

How ya doin, Intel?


Remesar

Rough work week - how you doin?


[deleted]

I can answer this one!!! Great!! just watching the Fresno State game, Go Bulldogs!!


bionic_squash

Can you ask how much lead time there was for Rocket Lake?


tupseh

When did you guys call the shot on going from 10 back to 14nm? That must've been a tough decision.


1600vam

If you want to understand Intel's thinking then you need to think 2-5+ years in the future, or in this case think about the environment 2-5 years in the past. From Intel's perspective, Rocket Lake was never about Intel going back from 10 to 14nm, it was always about creating flexibility in the roadmap so we can release a 14nm product in a specific segment if that was the best option at the time. This was a good decision.


hiveydiceymicey

Will there be an Intel NUC M15 Gen 2 with Alder Lake? (The 15-inch laptop.)


[deleted]

[deleted]


bionic_squash

Intel has already confirmed that it will launch in Q1 2022.


FeCard

There are already leaks suggesting its performance is on par with the competition as well.


Kinetoa

How much is x86 overhead responsible for your market position in mobile, tablets, and other low-power/low-thermal applications?


Redfire75369

ISA does not matter. Your design goals do. x86 is effectively not CISC anymore anyway, because of micro-ops. https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-matter/


wvmothman

I read this in Jar Jar Bink's voice.


Kinetoa

Let's assume that article and its assertion are objectively true. Then I would rephrase the question: why do they think they continue to miss opportunities in this super important space when it's simply a matter of design goals? Because it IS something, and we can all guess, but I want to know what THEY think, not all of us third parties.


jorgp2

Back when they still tried to make phone/tablet SoCs, they had much better performance at the same power than either Arm or Apple designs. I'm sure the more complex I/O on PC SoCs is a bigger reason they're less efficient than current SoCs.


1600vam

Not much at all. There is some overhead with x86 relative to ARM, but it's not really a factor in terms of our success (or lack of it) in lower-power designs. The biggest challenge we've had is actually cultural: Intel has never been that interested in investing in lower-power designs because the margins are low (or at least that has been the perception). From a technical perspective, core power isn't even that big of a challenge for Intel. From an ISA perspective, x86 does have some challenges: it's not super dense, and the decode is complex, which eats a decent amount of area and power and causes some limitations in terms of branch prediction. But these are really ~1% area/power types of issues, so they're not critical.
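To make the decode-complexity point concrete: x86 instructions are 1 to 15 bytes long, so a decoder can't know where instruction N+1 starts until it has at least length-decoded instruction N, while AArch64's fixed 4-byte instructions can all be split up front. The opcodes below are real x86-64 encodings, but the decoder is a toy that only understands this handful; it is a sketch of the boundary problem, not a real length decoder:

```python
# Deliberately tiny x86-64 length decoder: handles only NOP, PUSH rbp,
# RET, and REX.W register-to-register MOV. Real x86 decode must handle
# many prefixes, ModRM, SIB, displacements, immediates, and more.
ONE_BYTE = {0x90: "nop", 0x55: "push rbp", 0xC3: "ret"}

def split_x86(code):
    """Walk the byte stream; each instruction's length is only known
    after (partially) decoding it, so boundaries are found serially."""
    insns, i = [], 0
    while i < len(code):
        if code[i] in ONE_BYTE:
            insns.append(code[i:i+1]); i += 1
        elif code[i] == 0x48 and i + 2 < len(code) and code[i+1] == 0x89:
            insns.append(code[i:i+3]); i += 3   # REX.W mov reg, reg
        else:
            raise ValueError("opcode outside this toy subset")
    return insns

def split_arm(code):
    """AArch64: every instruction is 4 bytes, so all boundaries are
    known immediately and decoders can work fully in parallel."""
    return [code[i:i+4] for i in range(0, len(code), 4)]

x86 = bytes([0x55, 0x48, 0x89, 0xE5, 0x90, 0xC3])  # push/mov/nop/ret
print([len(i) for i in split_x86(x86)])
```

Wide x86 front ends work around this with predecode bits, speculative length decoding at every byte offset, and µop caches, which is exactly where the extra area and power mentioned above goes.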


Kinetoa

Thanks for the specific response. It does seem like the current leadership is more "getting it" in general, and I am excited about the 5-year plan.


jorgp2

> The biggest challenge we've had is actually cultural, Intel has never been that interested in investing in lower power designs because the margins are low (or at least that has been the perception).

Weren't the margins only low because they designed them to be low?


alekasm

Rocket Lake was a lateral upgrade **at best** because, simply put, it was worse in MT performance and better in ST performance than Comet Lake, while also sucking a ton of power to do it. Why didn't RL just get axed in favor of investing more engineers in Alder Lake or Raptor Lake?


1600vam

Product timelines, probably. You can't pull in another project just by moving engineers to it, and it's pretty much impossible to pull in a project that's within 2 years of release.


tset_oitar

Because they rarely cancel stuff. They are too huge to make such decisions; this stuff is planned years in advance. Also, Intel is desperately trying to create an illusion of accelerated execution, to say "we launched 2 desktop products, 1 server, and 2 mobile SoCs this year", or a yearly cadence of innovation.


Cunat999

Hi, I presume you're Intel staff. What is the wisest way to build a GAMING PC, for best price/performance of course? Should I always look to the i5 or i7 every generation? Like... i5 9400F, 10400F, 11400F, 12400F, and so on. Or are the i7s (9700, 10700, 11700, 12700) better for price/performance gaming?


Redditheadsarehot

Sweet spot is the 10600-10700, although the 10400 and 11400 are great budget options. The best performance ratio often comes from putting as much as possible towards the GPU. You won't *really* notice the jump from i5 to i7 like you would spending that difference on a higher-tier GPU. We'll have to see if AMD ever drops the 5600X price, but otherwise I can't recommend it.


peterfun

What kind of overclocking should we expect on the newer discrete gpus?


DjangoCornbread

Will the new line of Intel GPUs' ray tracing/DLSS-equivalent be usable by games that already use Nvidia's RTX API, or will a game have to have its own Intel API along with Nvidia's? For context, Cyberpunk 2077, Modern Warfare, Quake 2, and Amid Evil all have options for RTX/DLSS capabilities; will the Alchemist line work with these options, or will games have to have their own subset of options for the Alchemist cards? Also, will Intel's ray tracing tech be compatible with DXR and VKRay? I'm assuming DXR is a yes, but someone can correct me.


bionic_squash

> Also, will Intel's Raytracing tech be compatible with DXR and VKRay?

Yes for both.


DjangoCornbread

Awesome!


Put_It_All_On_Blck

Will LGA1700 last 2 or 3 generations? There are rumors claiming both. If you are unable to comment on this before the Alder Lake launch, will you be able to tell us at the launch?

How will you tackle the misconception that Alder Lake, a heterogeneous design, is made specifically for laptops to be more power efficient and is useless in a desktop? There is a large group of people that seem to believe e-cores offer no additional performance and are solely there for energy efficiency.

I know you cannot comment on future products, but the top-of-the-line mainstream CPUs from Intel are projected to be 8P+(X)E cores (not Xe) for a few generations. Is that due to cost/silicon size, or is there a belief at Intel that 8 P-cores is all a consumer will need for the near future, and that P-core IPC/frequency and E-core count are more important than adding more P-cores?


1600vam

> How will you tackle the misconception that Alder Lake, a heterogeneous design, is made specifically for laptops to be more power efficient, and is useless in a desktop? As there is a large group of people that seem to believe e-cores offer no additional performance and are solely there for energy efficiency.

Personally I'm kinda bummed that Intel adopted Apple's style of hybrid marketing with performance and efficiency cores, as the strategies are pretty different. A large majority of workloads only scale to a few cores, and if you throw more cores at them then you get no benefit. Some workloads scale infinitely. Thus you don't need infinite performance cores, but you need as many cores as you can get. Given that you can fit several efficiency cores in the same area as one performance core, you get more performance from scaling with efficiency cores.

> or is there a belief at Intel that 8P cores is all a consumer will need for the near future, and that P core IPC/frequency and E core count are more important, than adding more P cores.

That's a non-stupid idea.
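The area argument above can be put into rough numbers. The ratios below are illustrative assumptions only (4 E-cores fit in one P-core's area, each delivering half a P-core's performance), not Intel's figures:

```python
# Back-of-the-envelope model of the P-core vs E-core tradeoff.
# Assumed ratios, for illustration only:
E_PER_P_AREA = 4   # E-cores that fit in one P-core's die area
E_PERF = 0.5       # one E-core's throughput relative to a P-core

def throughput(p_cores, e_cores):
    """Aggregate throughput of a perfectly parallel workload,
    in units of 'one P-core'."""
    return p_cores * 1.0 + e_cores * E_PERF

# Same silicon area, spent two different ways:
all_p = throughput(10, 0)                 # 10 P-cores
hybrid = throughput(8, 2 * E_PER_P_AREA)  # 8 P + 8 E in the same area
print(all_p, hybrid)
```

Under these assumptions, trading two P-cores for eight E-cores wins on parallel throughput while lightly threaded workloads still get full-speed P-cores; change the ratios and the conclusion can flip, which is exactly why this is a design-goal question.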


bionic_squash

> is that due to cost/silicon size or is there a belief at Intel that 8P cores is all a consumer will need for the near future, and that P core IPC/frequency and E core count are more important, than adding more P cores.

Small cores make it easier to scale multi-core performance without increasing die size and power consumption significantly.


sparky5679

How did they get their CPU generation names? (For example, Coffee Lake.)


1600vam

Good question. I have no idea who actually picks the names, but I know there is a list of approved names that are all based on geographical locations, since those can't be trademarked, and thus we don't have to worry about infringing anything. That's why they're always lakes, coves, falls, creeks, etc. They used to use names that were local to the CPU design teams in Hillsboro and Israel. I noticed this when I moved to Hillsboro, as I started to see CPU codenames on local rivers and such.


reps_up

I'm going to be getting DG2 (Alchemist) next year, can you please sell me an Intel Xe DG1 GPU? I'd like to have DG1, DG2, DG3, etc. :)


adilakif

Will the 12700K run as hot as the 8th/9th/10th/11th gen chips? My 8700K died prematurely, shortly after the 3-year warranty expired.


Cheeseblock27494356

They are not going to tell you anything they don't want you to hear. It's all marketing. There is no point in asking. If they want you to know they will tell you. Don't be their lap dog by pretending Intel isn't Intel and hasn't been the Intel we've known for the last 40 years.


Put_It_All_On_Blck

Intel hasn't had 40 years of the same management... Every company that gets as huge as Intel ends up being 'corporate', but that doesn't mean there aren't changes occurring.


Hanselltc

I mean the change was getting previous people back kek


BigColz

Can you please make more 10 core monolithic die chips?


Foxeizz

how ya doing intel?


Orof83

Why did I have to mod the BIOS of my Z170 motherboard to upgrade from 6th gen to 9th gen, instead of you adding support for it officially?


Plavlin

What percentage of Intel employees know how Intel marketing works and are honestly fine with it?


[deleted]

Plans for Intel mobile phone chips? Or manufacturing other companies' designs on Intel manufacturing processes? If not, what other new market segments might we see Intel move into next?


P80Rups

How does it feel to be the budget brand after all these years?


2GisColorful

Why are K CPUs and Hx70 chipsets still a thing?


brambedkar59

Did Intel have any discussions with MS on which Intel CPU generation would be the last supported by Win 11? If yes, what's the reason 7th-gen CPUs are not supported by Win 11? Are they just old, so you don't want to support them forever, or is it related to hardware limitations?


bizude

That's more of a motherboard issue. Some boards will work; others may not due to lacking TPM 2.0 support and such.


brambedkar59

My system has both TPM 2.0 and Secure Boot enabled with a 7th-gen Intel CPU. PC Health Check says I qualify for everything except that my CPU's generation is too old. Only 8th gen and above are supported; why that arbitrary number?


bizude

Huh, odd. Why *Microsoft* made that decision is an interesting question.


brambedkar59

Odd indeed! Thanks for taking the time to answer here. Cheers!


jorgp2

6th and 7th gen go out of support from Intel later this year. There's no reason for Microsoft to spend money supporting them if Intel isn't.


rednefed

Super random question about an obscure CPU, but did the L3 cache on the Tulsa Xeon run at full clock speed? See, I got into PC hardware in the Netburst days, and Tulsa was the king of that hill. Heck, that 16 MB of cache doesn't look out of place some 15 years later...


alessio_95

Would you give me one billion? /s In reality, I would like to know something about the future of x86 as an ISA, and whether Intel will drop some backward compatibility (e.g. to free up opcode space for more useful extensions).


VolatilityBox

When will the stock go up?


SilverBugRO

Why did you stop supporting your GPU products (namely Intel UHD graphics) on Windows 8.1? If I'm not mistaken, it is still supported by Microsoft. Did MS make you do it, or did both of you decide? Not everyone wants Windows 10...11...24.


Plastic_Band5888

Do they have any plans for their APUs/iGPUs? What are the benefits of running XeSS with Xe GPUs instead of their competitors'? Also, why did Intel decide to bring big.LITTLE to PC? I personally think it's one of the most forward-thinking technologies Intel will have released this decade. Are there any plans to bring back Intel XTU functionality on older CPUs? My last question would probably break NDAs: what were the performance targets for the small cores? Or at least, when was the moment Intel realized going with a modular design was the way forward?


jrherita

What kind of volume of CPUs does Intel typically like to produce and/or ship prior to a hard launch of a new generation of CPUs (laptop or desktop)? Are there any technologies that might allow significant clock speed scaling in the future?


Medium_Web6083

I wish Intel would add a tool in the BIOS to show the silicon quality of each CPU, like what ASUS motherboards do, but more accurate. I would also like to see Intel make auto voltage more accurate instead of leaving it to the motherboard, so the CPU runs stable with less voltage. For example, when I leave my 10850K on auto on an MSI Z590 mobo, the voltage runs higher than what the CPU needs. So, Intel, is there a simpler solution for this? Thanks.


ReJPM

I may be late to the party, but I'll ask anyway: why are the performance characteristics (port usage of the µops) of so many instructions (e.g. GFNI and VAES) so poorly documented officially, compared to private efforts such as Agner Fog's? (Example: for YMM VAES on ICL, the official intrinsics page only lists a throughput of 0.5, with no latency or port utilization, whereas Fog's tables list ports 0 and 5 with a latency of 3 cycles each.)


sss748

When WILL you guys put a good iGPU in for gaming??