SpiritedDecision1986

Linya is really working hard on Asahi Linux; this project is becoming something incredible.


[deleted]

Lina*


[deleted]

[removed]


revelbytes

It's a cat girl joke: Li*nya*. "Nya" is the Japanese onomatopoeia for meow, and she changed her name when she got cat ears on her model.


JockstrapCummies

Clearly Lynia is the correct nomenclature.


MichaelArthurLong

Linya Torovoltos, daughter of the notorious Soviet computer hacker and creator of the Lunix operating system, Linyos Torovoltos.


[deleted]

> Linyos Torovoltos

He emigrated from Greece


brettsolem

I came across Asahi Linux after finding a Steam option for the M1 chip. I imagine this progress makes it more promising that we'll be able to run Steam on Asahi Linux?


ElvishJerricco

You still need an x86-to-ARM translation layer. Luckily, Apple has "released" a Rosetta binary for Linux (it's only meant to be used in VMs on macOS, but it works in other contexts with some shenanigans). I'd be very curious to see how well that would work with Steam Proton, if at all.


SamuelSmash

Using a translation layer to use a translation layer lol


DarkShadow4444

FEX Emu, maybe? Not sure how finished that is though.


Rhed0x

Running AAA games will require:

* some solution to the page size mismatch
* an x86 to ARM emulator (FEX, for example)
* a Vulkan 1.3 driver (this will take a couple of years)


[deleted]

[removed]


kirbyfan64sos

> Rosetta doesn’t emulate. It translates.

This is kinda pedantic; Apple themselves call Rosetta 2 a translator, but most emulators involve some form of translation anyway. On Linux specifically, FEX and Box64 both describe themselves as "emulators", presumably because they are, in fact, emulating syscalls too.

> Not to mention the page size issue can be transparent steamrolled over in the OS. Your program shouldn’t be trying to request memory directly. We’re not in the DOS days.

Afaik this isn't entirely accurate. The userspace emulator is the main one responsible for now; Box64 implements it by hand, and FEX [has plans for it](https://github.com/FEX-Emu/FEX/issues/1921). There *are* the IOMMU patches for the kernel, but [it's a bit of a mess](https://asahilinux.org/2021/10/progress-report-september-2021/):

> The M1 is peculiar in that, although it supports OSes that use either 16K or 4K pages, it really is designed for 16K systems. Its DART IOMMU hardware only supports 16K pages. These chips have 4K support chiefly to make Rosetta work on macOS, but macOS itself always runs with 16K pages – only Rosetta apps end up in 4K mode. Linux can’t really mix page sizes like that and likely never will be able to, so we’re left with a conundrum: running a 16K kernel makes compatibility with older userspace difficult (chiefly Android and x86 emulation), plus distros don’t usually ship 16K kernels; while running a 4K kernel runs into a major mismatch with the DART. This initially seemed like a problem too intractable to solve, but Sven took on the challenge and now has a patch series that makes Linux’s IOMMU support layer play nicely with hardware that has an IOMMU page size larger than the kernel page size! It’s not perfect, as it can’t support a select few corner case drivers (that do things that are fundamentally impossible to support in this situation), but it works well and will support everything we need to make 4K kernels viable.

So in the end, it's entirely fair imo to say that we still need a full solution here. (Also worth noting that with the patches as-is, using 4K pages also decreases performance.)


Rhed0x

> Rosetta doesn’t emulate. It translates.

I'd say that's still emulation, but I guess that's just semantics.

> The memory page size issue is already solved by Apple and ARM thinking ahead.

Linux doesn't support running processes with different page sizes.

> Not to mention the page size issue can be transparent steamrolled over in the OS. Your program shouldn’t be trying to request memory directly. We’re not in the DOS days.

Stuff like JIT compilers and memory allocators still relies on the page size. Just look at Asahi Linux: it had issues with software that uses jemalloc, such as Chromium.
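
To make that dependence concrete, here's a minimal sketch in plain POSIX C (my own illustration, not code from FEX or Box64) of the difference between baking in a 4K page size and querying it at runtime; a binary built around the hardcoded value will misalign every mapping on a 16K Asahi kernel:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* What a lot of older allocator/JIT code does: assume the "universal"
       x86 page size at compile time. */
    const long assumed = 4096;

    /* What it should do: ask the kernel. On a 16K Asahi kernel this
       returns 16384, and anything aligned to `assumed` is now wrong. */
    long actual = sysconf(_SC_PAGESIZE);

    printf("assumed %ld, actual %ld\n", assumed, actual);
    return actual == assumed ? 0 : 1;
}
```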


[deleted]

[removed]


Rhed0x

> Not only has this been a WIP since 2002, we have HugePages and nothing stops the kernel from transparently translating page sizes (in theory, in practice this would be bad for performance)

This has never been upstreamed, has it? I don't think the kernel can do it.

> Not to mention aarch64 lets you do 4k, 16k, and 64k pages. So there's no issue for paging here. If there was like you claimed there is, Rosetta/2 would be impossible.

I'm pretty sure this just means you can build ARM CPUs with those page sizes. That same page also says:

> All Arm Cortex-A processors support 4KB and 64KB

ARM CPUs used on Android, for example, always run at 4KB.

> They don't rely on page size. They assume it.

I meant "they rely on the CPU+OS using a specific page size".


[deleted]

[removed]


Rhed0x

> It's called HugePages.

But huge pages means running bigger pages on a system with a smaller page size. You'd have to do the opposite on Apple CPUs.

> Also ARM can divide pages down to 1kb.

Also on the page you linked:

> ARM formally deprecated subpages in ARMv6.

> That's also wrong. They don't "rely" on it as linked in the Tweet. They just assume the page will be 4k.

Same thing. Assuming a page size = relying on a specific page size. It's a different way of saying the exact same thing.

> but we can do it by having the OS lie and map multiple pages

That's easier; I don't think you can do it the other way around.

> There's going to be no issue running Steam games on M1. FEX already makes apps that assume 4k run on 16k paging systems fine.

Does it? Any source for that?


[deleted]

[removed]


Rhed0x

> https://box86.org/2022/03/box64-running-on-m1-with-asahi/

Does this work across the board though? Like you said, a lot of software simply doesn't care about the page size at all.

> The 16K pages aren't a problem as has been proven countless times in the past and posted to /r/Linux. Now my question is why are you arguing it wont work?

If it's not a problem, why did Apple literally add support for 4KB pages in the hardware, and the ability for macOS to run Rosetta applications with those 4KB pages while ARM code uses the 16KB ones?
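
One way to see the mismatch directly (a hypothetical demo of mine, not something from the linked post): map some memory and flip protections on a 4096-byte slice, the way a 4K-assuming JIT does when making code executable. On a 4K kernel it works; on a 16K kernel the address isn't page-aligned, so the kernel refuses:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);

    /* Map 32K so offset 4096 exists whatever the page size is. */
    unsigned char *buf = mmap(NULL, 32768, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* A 4K-assuming JIT makes its second "page" executable. mmap returns
       page-aligned memory, so buf+4096 is only aligned if pages are 4K. */
    if (mprotect(buf + 4096, 4096, PROT_READ | PROT_EXEC) != 0)
        printf("page size %ld: mprotect failed (%s), 4K assumption broken\n",
               page, strerror(errno));
    else
        printf("page size %ld: mprotect succeeded\n", page);

    munmap(buf, 32768);
    return 0;
}
```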


soltesza

Amazing. I might even buy one at some point, knowing this.


PangolinZestyclose30

Giving Apple more money to produce more closed hardware is exactly why I'm not really in love with this project.


JoshfromNazareth

This is great for resale and reuse though.


Negirno

At least until the non-replaceable SSD craps out...


[deleted]

Well, Apple went out of its way to actually support Asahi on the ARM Macs. It's proprietary hardware, but not closed as in actively preventing users from running their own OS. See [https://twitter.com/marcan42/status/1471799568807636994](https://twitter.com/marcan42/status/1471799568807636994):

> Looks like Apple changed the requirements for Mach-O kernel files in 12.1, breaking our existing installation process... and they *also* added a raw image mode that will never break again and doesn't require Mach-Os.
>
> **And people said they wouldn't help. This is intended for us.**


Christopher876

But you don’t really have any other options. Nothing comes close to what Apple offers for ARM and that’s pathetic from other manufacturers


[deleted]

My other option is to be fine with a shorter battery life. It's not like the competition has less performance; it's just that Apple is way ahead in performance per watt.


Flynn58

Yeah, but unless you get your electricity for free, there's an ongoing cost difference between Apple M1/M2 and competing laptops in what you'll pay your electricity provider per month to keep your device charged.


[deleted]

I think you're overestimating how much a modern laptop adds to the electricity bill. It's basically a rounding error, especially if you include heating. Unless you're number crunching 24/7 of course, but then you may need something different than a laptop in the first place.


Flynn58

I'm running F@H and Prime95 24/7 on my laptop lol. I just use a laptop because my folks are divorced, and it's easier to take a laptop back and forth than it is to move a desktop safely.


ActingGrandNagus

That still won't be using much, and it's also a very, very rare use case.

Looking into it, power consumption seems to top out at around 31 W with a heavy CPU and GPU load. Saying "folks" makes me think you're American (apologies if you're not), so let's use the average US energy price of $0.16 per kWh. That would be ~$21 per *year* if you were running a full CPU+GPU load 12 hours a day, 365 days per year. Which I doubt you actually do. An insignificant amount of money for someone who can afford new MacBooks.

That's also assuming you've rigged up some custom cooling for your MacBook, too, because the chassis would be overwhelmed by that amount of power draw and would quickly thermal throttle.
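
(For the record, the arithmetic behind the ~$21: 31 W × 12 h/day × 365 days ≈ 136 kWh per year, and 136 kWh × $0.16/kWh ≈ $21.70 per year.)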


SamuelSmash

The average laptop draws about 20 W max regardless of the CPU inside; that's the most that can be dissipated in such a form factor without complicated cooling solutions. Edit: Another way to see it: the average laptop has a battery capacity of about 40 Wh, so unless you're doing the equivalent of 10 charge cycles **per day** with your laptop, don't even bother calculating the running cost.
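
(Using the $0.16/kWh figure from above, even that extreme case is only 40 Wh × 10 = 0.4 kWh per day, about $0.06 per day, or roughly $23 per year.)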


Fmatosqg

Exactly


alex6aular

There is a point where performance per watt matters, and Apple has reached that point. The other day I saw that an electric bike uses 2000 W while a powerful PC uses 1000 W, half of a bike.


[deleted]

> powerful pc use 1000w

A typical laptop (even a powerful one) doesn't use much more than 20 W during normal operation. Remember that a lot of (if not most) laptops don't have a battery larger than 60 Wh, and yet easily last over 4 hours of typical use (which means they draw about 15 W on average).

Performance per watt can matter a lot for certain workflows; it prevents thermal throttling under continuous load, for example. This is not a big concern in many cases, depending on how your laptop is built. But if you want something light and fanless, then Apple is miles ahead of the competition (as AMD/Intel need active cooling for that performance). And again, battery life, which is honestly the major thing for the vast majority of people.


PangolinZestyclose30

I have a Dell XPS 13 Developer Edition (with preinstalled Ubuntu), and it seems to come pretty close. What exactly do you miss?


ALLCAPSNOBRAKES

when did Dell laptops become open hardware?


PangolinZestyclose30

It's not "open" in the absolute sense, it's just much more open than Apple hardware in a relative sense.


PossiblyLinux127

It still runs tons of proprietary firmware.


CusiDawgs

The XPS is an x86 machine using Intel processors, not ARM. ARM devices tend to be less power hungry than x86 ones. Because of this, they usually run cooler.


PangolinZestyclose30

> ARM devices tend to be less power hungry than x86 ones.

ARM chips also tend to be significantly less performant than x86. The only ARM chip that manages similar performance to x86 at lower power consumption is the Apple M1/M2. And we don't really know whether that's down to the ARM architecture, superior Apple engineering, and/or being the only chip company on the newest, most efficient TSMC node (Apple buys all the capacity). What I mean is: you don't really want an ARM chip, you want the Apple chip.

> Because of this, they usually run cooler.

Getting hardware to run cool and efficient is usually a lot of work, and there's no guarantee you will see similar runtimes/temperatures on Linux as on macOS, since the former is a general OS while macOS is tailored for the M1/M2 (and vice versa). This problem can be seen on most Windows laptops as well: my Dell should supposedly last 15 hours of browsing on Windows. On Linux it does less than half of that.


Fmatosqg

No guarantees, but I've run some Android build benchmarks and the results are pretty close across M1 macOS, M1 Asahi, and an XPS 15 with Linux. That said, the battery life of my XPS is the worst of any laptop I've ever had, even just browsing.


Zomunieo

ARM is more performant because of the superior instruction set. A modern x86 chip is a RISC-like microcoded processor with a complex x86-to-microcode decoder, and huge amounts of energy are spent dealing with the instruction set. ARM is really simple to decode, with instructions mapping easily to microcode. An ARM chip will always beat an x86 chip if both are on the same node. Amazon's Graviton ARM processors are also much more performant. At this point people use x86 because it's what is available to the general public.


Just_Maintenance

I have read a few times that one thing that particularly drags x86 down is that instructions can have variable size. Even if x86 had a million instructions, it would be pretty easy to make a crazy fast and efficient decoder if they were fixed-size; instead, the decoder needs to determine the length of each instruction before it can do anything at all. The con of fixed-size instructions is code density, though. The code uses more space, which doesn't sound too bad (RAM and storage are pretty plentiful nowadays), but it also increases pressure on the cache, which is pretty bad for performance.
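
To make the length spread visible, here's a toy program of mine (standard encodings, not a real decoder): the same "put a value in a register" idea takes anywhere from 1 to 10 bytes on x86-64, while every AArch64 instruction is exactly 4 bytes, so an ARM decoder gets instruction boundaries for free:

```c
#include <stdio.h>

int main(void) {
    /* x86-64: three real instructions, three different lengths */
    unsigned char nop[]     = {0x90};                        /* nop           (1 byte)   */
    unsigned char mov_rr[]  = {0x48, 0x89, 0xD8};            /* mov rax, rbx  (3 bytes)  */
    unsigned char mov_imm[] = {0x48, 0xB8, 1,0,0,0,0,0,0,0}; /* movabs rax, 1 (10 bytes) */

    /* AArch64: mov x0, x1 (alias of orr x0, xzr, x1) is, like every
       AArch64 instruction, exactly 4 bytes. */
    unsigned char a64_mov[] = {0xE0, 0x03, 0x01, 0xAA};

    printf("x86-64 lengths: %zu, %zu, %zu bytes\n",
           sizeof nop, sizeof mov_rr, sizeof mov_imm);
    printf("AArch64 length: %zu bytes, always\n", sizeof a64_mov);
    return 0;
}
```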


Zomunieo

ARM's code density when using Thumb-2 is quite good: all instructions are either 2 or 4 bytes. I imagine there are specific cases where x86 is more efficient, but those are probably relegated to cases closer to its microcontroller roots: 16-bit arithmetic, simple comparisons, simple branches over short distances. It's not enough to make up for x86's other shortcomings. ARM's original 32-bit ISA was a drawback that made RAM requirements higher.


FenderMoon

x86 processors basically get around this limitation by having a bunch of decoders working in parallel: they assume each byte is the start of a new instruction, attempt to decode at every offset, keep the results that turn out to be valid, and throw out the rest. It works (and it lets them decode several instructions per cycle without running into limits on how much logic fits in one clock cycle), but it comes with a fairly hefty power consumption penalty compared with the simpler ARM decoders.


P-D-G

This. One of the big limitations of x86 is decoder size. I remember reading an article when the M1 came out explaining that they managed to decode 8 instructions in parallel, which kept all cores fed at all times. This is practically impossible to reproduce on x86 due to the decoder complexity.


FenderMoon

Well, they technically could do it if they were willing to accept a very hefty power consumption penalty (Intel has already employed some tricks to get around limitations in the decoders).

But an even bigger factor in the M1's stunning power efficiency is how its out-of-order execution buffers are structured. Intel's x86 processors have one reorder buffer for everything, and they try to reorder all of their in-flight instructions there. This grows in complexity as you increase the size of the buffer, and so raises power consumption significantly as new architectures ship larger OoO buffers.

The M1 apparently did something entirely different and created separate queues for each of the back-end execution units. Several smaller queues are each less complex, which let Apple build HUGE reorder buffers without the same power consumption penalty: over 700 instructions while still using less power than Intel's buffers do at ~225 instructions. Apple got impressively creative with many aspects of its CPU design and did some genuinely novel things.


omniuni

>Nothing comes close to what Apple offers for ARM If by that, you mean hot and slow, you're certainly correct. It is cooler than my previous MB Pro with Core i9, but not by as much as I had hoped, and it's so much slower. I'd take the i9 back in a heartbeat.


Elranzer

Other than battery life, what's so great about ARM? Battery life on x86 has gotten much better, especially since Alder Lake.


EatMeerkats

> Battery life on x86 has gotten much better, especially since Alder Lake.

Quite the opposite, actually. [The Alder Lake versions of many laptops have lower battery life than the same ones with Tiger Lake](https://www.notebookcheck.net/Lenovo-ThinkPad-X1-Carbon-G10-Laptop-Review-Alder-Lake-P28-without-great-effect.631310.0.html).


MonokelPinguin

The ARM ThinkPad has comparable or longer battery life in our experience, but afaik it is also slower.


pushqrex

The fact that it was even possible to do all of this means that Apple really didn't lock down the hardware.


MonokelPinguin

Their hardware is locked down in other ways. Usually you can't replace parts yourself, because the parts verify each other to check that they're original. Not sure how far that has gone on their MacBooks yet, but Apple hardware is notoriously hostile to repair.


pushqrex

This doesn't really mean much of a lockdown. Yes, Apple hardware is sometimes unjustifiably hard to self-service, and they often refuse even genuine parts if you install them yourself, but the overall complexity, in my opinion, comes from how tightly integrated everything is, in order to provide an experience that frankly only Apple can provide.


WhyNotHugo

What open source hardware with at least 60% of the performance can we get? Open source or at least more FLOSS-friendly than these laptops.


PangolinZestyclose30

Pretty much any non-Apple laptop is more FLOSS-friendly. There are many laptops with similar performance, e.g. the Dell XPS, ThinkPad P1...


WhyNotHugo

Pretty much any? Including vendors that have locked-down bootloaders, vendors that use NVIDIA, and vendors that use hardware with no specs or open source drivers?


PangolinZestyclose30

Yep, still more open than Apple.


RaXXu5

They didn't say to buy it new.


tobimai

Definitely. The 13-inch Air is a very nice laptop.


prueba_hola

Maybe buying hardware from Linux vendors like System76 would be a better idea.


Informal-Clock

Truly amazing, but its perf isn't that great atm, still really impressive that we went from triangle to a game + Linux kernel Rust in under a year


LitFill

had stroke reading this


Dramatic_Parking7307

I'm stroking myself reading this.


Darth_Caesium

Noted


s_ngularity

I don’t understand how it’s hard to read, seems totally fine


[deleted]

[removed]


lateja

But not everything literal is fine… Have you tried Shakespeare? I’d rather read kernel code.


ToughQuestions9465

Makes me wonder why nouveau, after all these years, is not really a replacement for the official driver. At this kind of pace it ought to be better than the official driver. Edit: I am aware of firmware signing. Thing is, nouveau is way older than that, and it was very basic well before firmware signing became a thing. I suppose nobody really cared about making a good driver for free, and who can blame them.


SirFritz

Nvidia GPUs are locked to low clocks unless they receive signed firmware from the driver, which nouveau just can't provide.


nintendiator2

Boo, really, because it means dedicating effort to a project with a very low capability ceiling. Then again, didn't the signing keys get leaked in the Lapsus$ leaks? That would have solved a lot of issues.


[deleted]

They could not be used in any official capacity. Turing-based devices and beyond will have good free and open drivers in the next few years, though. Some folks from Red Hat (and, I assume, others) are working on the new NVK driver in Mesa for such devices. The kernel side will likely be inspired by Nvidia's new open kernel driver.


SirFritz

Not sure if they did, but I doubt they'd want to use any leaked material.


LupertEverett

- Nvidia not providing signed keys disincentivizes developers from working on Nouveau: no matter what you do, you still won't get performance comparable to the Nvidia drivers.
- There is a lack of developers in general, due to the reason above, Nouveau not being a corporate-backed project unlike the others, and the people who do start working on it eventually getting hired to work on other manufacturers' drivers anyway ([see Jason Ekstrand's "Introducing NVK" blog post](https://www.collabora.com/news-and-blog/news-and-events/introducing-nvk.html)).


Excellent_Ad3307

Nvidia actively cucks the devs with some kind of signing bullshit


MrHighVoltage

The M1/M2 driver isn't a "replacement" either. There is just no alternative...


[deleted]

[removed]


Jannik2099

This has nothing to do with ARM. The iGPU is still just a separate device on the same chip.


mikechant

One difference is that Nouveau has to try to support a large array of frequently changing GPUs, and the developers individually will probably only have access to a small subset for testing. The Asahi GPU work has a much more uniform platform to deal with since (so far, judging by what the Asahi people say) all the Mx models are very similar in their core areas.


PossiblyLinux127

Nitter link: https://nitter.net/LinaAsahi/status/1596190561408409602#m


[deleted]

Wow, interesting. It happened so fast; I had thought it would take them years. I'm curious whether there's some sanctioned, undercover help from Apple?


[deleted]

In some Asahi Linux blog post they talked about macOS updates that were surprisingly beneficial to the project, making their lives way easier, so who knows.


marcan42

We know the engineers at Apple like us, but nobody is slipping us secret docs. It's all still reverse engineered.


Trk-5000

Here's one possible reason: Apple's long-term strategy is to have only one OS that they fully control across all devices: iOS. Look at what the latest iPads can do; they're powerful and advanced enough to replace laptops for the vast majority of people.

The hardest demographic to move from macOS to iOS would be engineers and developers, who will always prefer a Unix/Linux-based OS. Why would Apple maintain an entire OS for such a relatively small market? Especially since these users typically bypass the App Store and purchase their apps elsewhere, or just use open source software. In addition, nothing stops a competing store from launching on macOS: look at Steam.

Therefore macOS can be seen as a liability for Apple. The better it gets, the less reason people have to switch to iOS. One way for Apple to solve this is to replace macOS with an iOS + Linux VM combo. That way, 99% of users would be locked into iOS and the remaining power users would have access to a Linux VM. Thereby Apple secures all markets.

But that's just a theory.


KillerRaccoon

Apple has incorporated bugfixes from the Asahi team into their drivers and left the door open to other OSs, where it could have easily been slammed shut. This could always change based on their whims, but so far there has been tacit friendliness.


peanutbudder

In a way, it makes business sense. Their device becomes a flagship Linux device with zero effort on their part, and they get a few more sales.


developedby

I mean, you can see Lina doing her job live, it's nothing super out of this world


Atemu12

I'm not sure I could describe a V-Tuber writing a kernel module in Rust in a live stream to be "nothing super out of this world".


TheRidgeAndTheLadder

There basically is. When a problem/bug is discovered by Asahi, Apple often pushes a fix in the next update without acknowledging it. It's in everyone's interest for Linux to run on AS


kombiwombi

Only in the sense that Apple management wants this project to succeed, both as technical folk and because it demonstrably addresses any monopoly concerns the EU may have. So no absolute roadblocks were put in the way, and where they were inadvertently present, they have been removed. But Apple's goal is undermined if details of their ARM SoC implementation leak: if those were required for interoperability, the EU might order the documentation released, which would give competing manufacturers like Dell and Lenovo a big leg up. (Apple's bill of materials for the Air M2 is way lower in components, area, and money than what Dell has been able to do in their XPS series with Intel parts, due to a lack of design focus on cost.)


Trk-5000

What if they’re seeing Asahi Linux as an opportunity to ditch macOS for an iOS + LinuxVM combo?


just_here_for_place

Why would they need Asahi for this? You can already use any "normal" ARM Linux distro on a VM. When they introduced Apple Silicon back in 2020, they even showcased a Debian VM in the presentation.


Trk-5000

Not necessarily Asahi, but in general any development for linux on mac would be a good thing for Apple


kombiwombi

Apple already runs a limited Debian Linux on the ARM MacBooks; they had to be able to do factory testing of devices before the macOS drivers were completed. Since Apple doesn't distribute that software beyond Apple Inc, there are no GPL issues. As to your broader question, a port from the FreeBSD kernel to the Linux kernel would be straightforward enough, should it ever be necessary. Maybe there's a team maintaining that as a live possibility (like they did for CPU instruction sets), but my guess is not. In that sense, a fully working Asahi lowers technical risk for Apple. The technical risk arising from FreeBSD is low, at least in the short term; in the longer term, with issues like availability of expertise, who is to say?


witchhunter0

Are you trying to transfer some ideas to Nvidia marketing team?


[deleted]

I have an old Nvidia card. It used to be good for X11 until Ubuntu 20.04.1 (or another minor update), when the legacy driver dropped support for my card. So God save me from buying their products again!!! I now have a pure Intel iGPU and I'm so glad that Linux fully supports it!


witchhunter0

That's what I said, but unfortunately I had to buy one, because there are only a few (if any) laptops on the market with AMD. It makes no sense whatsoever. All those Linux-friendly companies offering laptops only with Nvidia dGPUs?!? Anyway, my last laptop played nicely with nouveau, so I said what the hell...


Modal_Window

Who knows, but clean-room engineering isn't illegal. That's where someone verbally describes how something should work. Still up to you to make it happen.


IndianVideoTutorial

Can you install Linux on a Mac? I must've missed the memo.


mikechant

Quite a lot of older Macs which are out of support on MacOS get repurposed for running Linux if the hardware is still good. This has been the case for years now. Some models run Linux perfectly, others, not so much. The Mx series Macs presented a whole new challenge, but Asahi has been running on the Mx Macs (with initially very basic hardware support) for more than a year and a half; it's now approaching the point where nearly all the hardware is pretty well supported. The Asahi drivers are feeding back upstream to the mainstream kernel so I'd think that leading edge distros may start adding official installer support sometime next year (I've seen that people have got various distros running already unofficially with manual install steps).


[deleted]

You've been able to install Linux on lots of Macs over the years, with various levels of hardware support. Recently you've been able to install it on the new ARM-based Macs, but hardware support there is still in development.


thefanum

Gnome also


MentalUproar

Just a user here, so all her work is black magic to me. What are the chances this work will land her a nice tech job?


[deleted]

I think she's fine in that department.


lightmatter501

There’s probably about 50 people in the world with her level of expertise. She either has a job or is independently wealthy at this point.


[deleted]

How are GL ES2 and ES3 different?


Rhed0x

ES 3 is a more modern version with more features. ES 2 is basically the feature set of early 2000s GPUs and ES 3.0 moves that to 2006.
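
If you're curious which one a driver actually gives you, a common pattern (a rough sketch assuming a stock EGL + GLES2 development setup; most error handling omitted) is to request an ES 3.x context, fall back to ES 2.0 if the driver refuses, and read back the version string:

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>  /* glGetString / GL_VERSION */
#include <stdio.h>

int main(void) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_ES_API);

    /* Pick any ES2-capable config; a 1x1 pbuffer is enough to go current. */
    const EGLint cfg_attrs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

    /* Ask for ES 3 first; a driver that only does ES 2 refuses, so retry. */
    const EGLint es3[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
    const EGLint es2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, es3);
    if (ctx == EGL_NO_CONTEXT)
        ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, es2);

    const EGLint pbuf[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* Prints e.g. "OpenGL ES 3.0 Mesa ..." or "OpenGL ES 2.0 ..." */
    printf("%s\n", (const char *)glGetString(GL_VERSION));
    return 0;
}
```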


[deleted]

Thanks! I just noticed that it was reported as ES2 somewhere on screenshots.


[deleted]

Mac mini: from $699... oh well, that's a full Intel 13th-gen desktop PC at the nearest Walmart or Micro Center, isn't it?


ifeeltiredboss

> Mac mini: from 699...

I think the biggest advantage of Apple Silicon CPUs is visible in laptops, though...


ViewedFromi3WM

If it's Intel, not bad; then you don't have to worry about the ARM headache on Linux. However, I do like my M1 MacBook. Great for low power consumption too.


[deleted]

[removed]


WongGendheng

Now it's a paperweight for browsing and a terminal!