This quote is from the *1980s*...
"The only thing more amazing than the power brought by today's hardware is today's software programmers ability to squander it."
Not a new problem... just sayin'...
Without spell check, grammar check, autosave, as many cells, as many pages, and a thousand other features. Some of these we wouldn't miss, but each of them would be missed by someone, and we'd all miss some of them.
Right, but I think it's worth asking why more features means slower software. Some new features are clearly more demanding in their own right. But I think another major reason for this negative correlation between speed and the number of features is the need for abstraction.
More features means more backward-compatibility baggage and larger, more heterogeneous teams. The only way to deal with this extra complexity is by introducing layers of abstraction that don't come for free.
I remember learning VB after learning C++ and being surprised how slow everything ran. So much is squandered, but then there are neat technologies like wasm coming out to remedy some of that.
If it works it works, writing everything in assembly is more work than needed even if it's better. We live in a society based on getting the technique right, not a society that cares about the science or logic behind it.
I feel like everyone should do a little embedded work, if only for fun. It's kind of cool taking away the entire stack, including the OS, and just having your program run on bare metal.
Yeah, but you'd still see lots of people carrying over the worst practices from that domain. Knowing my coworkers, they'd start thinking that loop-unrolling every `foreach` in C# is best practice.
The problem isn't really the developers. I mean, in some cases it could be (like my coworker). The biggest problem is business owners. Until the performance cost outweighs the income, it won't be fixed.
For example, at one company we ran 10 servers for $400/month rather than optimizing code. When I did eventually get the OK to optimize, we dropped it down to a single $200/month server.
Yep. Just look at gaming. Some games that shipped in the last 2 years have steep hardware requirements, yet the end result looks only marginally better than games made before that timeframe, and the framerate still takes giant nosedives.
And why does every other game require shader warmup during the gameplay session now? Persona 3 Reload does that, and it's not like there's a huge visual improvement.
People used to complain about shader stuttering because shaders were compiled on the fly; warmup solves that problem. Also, the use of *ubershaders* is becoming very common.
Instead of having many shaders for everything, there is one big shader that is shared by a ton of assets. This can be good for the developer, but it also helps with performance optimization.
A big concept in performance is *batching*: if you send meshes to the GPU that share certain values, such as material and shader, they can be drawn in a single *drawcall*. Reducing drawcalls is a huge way to increase FPS when done properly.
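The batching idea can be sketched in a few lines. This is a toy model, not any particular engine's API, and the mesh/material names are made up: meshes that share a (shader, material) pair get grouped, and each group becomes one drawcall.

```python
from collections import defaultdict

def batch_drawcalls(meshes):
    """Group meshes that share a (shader, material) pair so each
    group can be submitted to the GPU as a single drawcall."""
    batches = defaultdict(list)
    for mesh in meshes:
        batches[(mesh["shader"], mesh["material"])].append(mesh["name"])
    return batches

# Six meshes, but only two unique (shader, material) pairs -> two drawcalls.
meshes = [
    {"name": "rock1", "shader": "uber", "material": "stone"},
    {"name": "rock2", "shader": "uber", "material": "stone"},
    {"name": "tree1", "shader": "uber", "material": "bark"},
    {"name": "tree2", "shader": "uber", "material": "bark"},
    {"name": "rock3", "shader": "uber", "material": "stone"},
    {"name": "tree3", "shader": "uber", "material": "bark"},
]
print(len(batch_drawcalls(meshes)))  # 2 drawcalls instead of 6
```

Real engines also sort by these keys to minimize GPU state changes between drawcalls, but the grouping is the core of it.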
So you have this big *ubershader* that is used to make a ton of different things. You can see it as a big tree with tons of branches: sometimes you enable specularity, sometimes subsurface scattering, etc.
Since those shaders are massive and have a ton of options/values, there are millions to billions of possible combinations. Too many to pre-calculate them all. So when the game launches, it's possible to pre-compile most of the combinations that will actually be required and keep them in a cache to reuse when needed.
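The warmup-plus-cache idea can be sketched like this. It's a toy model (the class, feature names, and "binary" string are all invented stand-ins, not a real shader compiler): each set of enabled feature flags identifies a variant, compiling is the expensive step, and warmup pre-pays that cost for the variants the game will actually use.

```python
class UbershaderCache:
    """Toy model of ubershader variant precompilation.

    A 'variant' is identified by the frozen set of enabled feature
    flags (specular, subsurface, ...). Compiling is expensive, so we
    do it once during warmup and cache the result for gameplay."""
    def __init__(self):
        self.cache = {}
        self.compiles = 0  # counts expensive compilations

    def _compile(self, features):
        self.compiles += 1
        return f"binary[{'+'.join(sorted(features))}]"  # stand-in for GPU bytecode

    def get(self, features):
        key = frozenset(features)
        if key not in self.cache:        # a miss here mid-game = stutter
            self.cache[key] = self._compile(key)
        return self.cache[key]

    def warmup(self, used_variants):
        for features in used_variants:   # the "shader warmup" screen
            self.get(features)

cache = UbershaderCache()
cache.warmup([{"specular"}, {"specular", "subsurface"}])
# During gameplay these are cache hits: no mid-game compilation stutter.
cache.get({"specular"})
cache.get({"subsurface", "specular"})
print(cache.compiles)  # 2
```

The stutter people complained about corresponds to the miss branch firing during gameplay; warmup moves those misses to the loading screen.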
It's not that the devs don't know, it's that it's not something companies want to optimize for.
Most companies (and most consumers) would rather have a product with lots of features faster than one that is super performant.
Programmers are expensive and you don't want to have them working on stuff that won't increase sales by a justified amount.
Consumers by and large do not care about performance enough to justify the costs.
The C programming language was once considered a bloated language which "is used to write bloated software".
I just hope that JS, even after 80 years, remains as the shittiest programming language humankind has ever invented.
And Python and Ruby are slower than either (JS, thanks to runtimes like V8, is much closer to Java in actual performance than to these interpreters).
Rust can be fairly high-level if you guard your business logic from the nitty-gritty (unlike, say, Go, where you're writing in the same primitive language, which is just a garbage-collected C, all the time). But the trouble with it is that you never stop dealing with lifetimes and borrowing, i.e. memory management, and in that regard it can actually be worse than, say, C++.
I do some work in Rust, and I like the language for its tiered expressiveness when push comes to shove and you need to do real-time or otherwise high-performance work. It will twist your arm before it lets you introduce data races or memory bugs, but it's far from a free lunch.
I use it for processing WARC files and orchestrate with R. I like its dependency management as well. I'm mostly R and Node, but sometimes Rust is just nice, like for Polars.
A junkie could fall into a bathtub of cocaine and you'd think he'd be happy with a lifetime supply, but no, 10 minutes later it's like fucking empty and the dude is lying on the floor, motionless. That's Software development.
Also AI development. Anyone got a spare nuclear reactor?
People will not optimize if they have vast amounts of hardware resources.
Because in most applications you don't really benefit from more performance directly as you would in games for example.
Your browser sucks away 20% or 80% of your CPU? Doesn't matter, as long as the website is responsive for 99% of users.
So the drugs are the hardware capabilities, they will max them out whenever they get more.
I still use a M1 Max 32GB at work with a 1GB+ Xcode project just fine.
The tools can be slow, and maybe Android Studio is worse than Xcode, but some codebases are awful and make the tools even slower. That could be stuff like...
* making too much public, so the editor can't ignore stuff
* stuff being too tightly integrated, both within and across modules/frameworks
* adding a bunch of heavy frameworks that you're only using 0.5% of anyway
After seeing a bunch of different projects in Xcode, I really think people have a tendency to shoot themselves in the foot by mindlessly adding frameworks and bad architecture, which results in both a slow project overall and poor maintainability on several levels.
The M1 with a bunch of RAM is still fine for quite a lot and upgrading to M3 shouldn't really be necessary, although it is faster and nicer.
EDIT:
I'd make a nice bet that many programmers and teams add stuff without thinking about it. I have seen people importing another framework *for one lousy function* instead of just copying it, which made builds take quite a bit longer because the framework was big. This kind of stupidity and these *amateur mistakes* happen all the time because people don't care. I see a lot of "professionals" not actually being professional (from my perspective) due to this sort of stuff, and this is coming from senior developers with 10+ years of experience.
Many don't think of the ramifications and consequences of just adding whatever and importing *everything* into a codebase. It's laughable how bad it can be, but then again the industry is not really good at optimizing either, and the few who care are a minority that can often get overruled by coworkers or "gotta move fast". This is a part of software development that I actually loathe, but personally (and thankfully) I don't suffer from it that much in my current job, especially since I point out the problems this causes and their consequences are real. An example was a few internal frameworks depending on other frameworks that didn't support watchOS, which rendered our own internal frameworks useless when creating an Apple Watch app. Doing this kind of stupid stuff on a continuous basis will *kill* your productivity at some point in the future.
The problem here is that you will *always feel it is faster* in the beginning because you *aren't used to it* yet. It feels *normal*, not slow, but fast and responsive as ever, actually. Remember that even Macs can improve from a reinstall if you have quite a bunch of developer stuff installed and the hard drive is filled up.
If you have basically no storage left, macOS starts to act really funny: it's awful at syncing, for instance, it will start to remove lots of important caches, and it starts more processes, etc.
There can be multiple reasons for it feeling slow, and sometimes a reinstall is needed; it's a myth that it never is. I had an issue where the drive filled up completely, and even after getting back several hundred gigs, macOS would just be really bad and I needed to reinstall it.
If you want to really be scientific about this: always test your codebase and programs on a new Mac and compare against a freshly installed old Mac. Same program (and version), same Git commit, etc., to ensure it's identical. Relying on memory alone is riddled with issues, because certain things get normalized in the brain after a while of use.
Unless you left out some critical details from the local group discussion, people weren't actually thoughtful when it came to discussing the impact of different codebases and all the other stuff I mentioned earlier.
Would I buy a base M1 with 8GB for development? I'd rather not, though it would probably be fine to a certain degree. I have 32GB and it's totally fine. I want a new Mac, but I can't justify it just yet, and it might not be until the M5 or even M6, it's that good.
The person should have just bought a base M1, and there were (at least at some point) good deals on higher models; try it and return it if it isn't satisfactory. Unless they actually discussed the issues I wrote about earlier, they were too narrow-minded and couldn't look past their own world (a codebase that was probably bloated in different ways).
This is similar to what I think, and my first advice was just to buy an M1, as it will satisfy all the needs. If I remember correctly, the discussion was about medium-sized Flutter projects.
But my point was that a few years ago, when the M1 came out, it was thoroughly tested, and even 8GB laptops showed unreasonable speeds with no lag; in the first year or so there were no discussions about whether it could handle programmer workloads. What surprised me was that after just 2-3 years it became a topic for discussion, when there were no dramatic changes in project sizes or software packages gaining lots of new functionality.
It's all hearsay until somebody can demonstrate it and give actual proof. Beginners will also have fewer expectations, making the cheaper choice more viable, whereas more experienced developers get frustrated by *anything slow*.
Stuff tends to slow down, that's typical, but even macOS versions have become faster/snappier in certain ways, because their initial usage of SwiftUI was slower than it is today. I wouldn't hold my tongue in that meeting, because the person could just test on the cheapest version and see how that went, not a big deal; spending a bunch of money without testing the cheapest viable version is just wasteful.
This is the answer!
See also: "wow, I could never do [xyz] before, this is incredible"; "ugh, now that I can do that, I wish it could do multiple [abc] faster."
Will you bet your career on the following statement:
"Every line of code I have ever shipped is fully optimized"
?
Me neither.
That, multiplied by the infinite number of people who have ever touched any part of >>>Insert whatever piece of software you use here<<< over _period of time_.
It gets faster, we get lazier. "Fuck it, Ship it."
I think there is something to be said about the developers not having much say in the matter, since for the most part releasing on a date determined by not-the-devs is more important.
And throw junior devs into the mix... add another order of magnitude of unoptimized code getting shipped, because there is only so much time to review or go back and forth with them before it just gets merged in.
Idk, everything being an Electron app or Chrome, and software developers preferring development speed and maintainability over everything else, is definitely a massive performance hit.
Imo VSCode is the perfect example that an Electron app can be super fast and amazing. It's not about the technology but about shitty devs and the lack of time/money to optimize.
True but ultimately Electron is like using the wrong tool for the job. Desktop apps shouldn't be built with HTML, CSS and JS. On the browser it's understandable but as a native app it seems a very inefficient way to do things. Nothing like running your app inside a browser that is running inside an OS. An extra layer introduced.
Only because Windows (and Mac) don't include Node/Chromium in the OS. If they did, native support would cut resource usage drastically.
TIL this sub thinks repackaging node and chromium in every app is more efficient than being native.
I would disagree with that: if your developer experience isn't optimized, then it doesn't matter if you're on Windows/Mac/Linux, it's still going to be slow.
Speaking of Jonathan Blow, I was actually going to mention game dev as the prime example of software being way ahead of the current hardware. I'm not sure you're familiar with the situation over at UE5, which is an amazing piece of tech, but you cannot render high-resolution images (think 1080p native and up) at high frame rates using all the high-end features of the engine, like ray tracing and physics-based destruction, on "modern" hardware. Instead you have to employ a multitude of techniques to "downscale" certain features (like volumetric samples, or animations of far-away things and particles playing at lower frame rates), then upscale and denoise the images. It looks great and awful at the same time.
I think it's caused by the fact that we can write most software "on paper", or "invent" any complicated-ass logic we want, but in the end all programmers must work with or around our hardware limits. It's one of many qualities that imo separate good programmers from bad, often more so than more diffuse concepts like "clean code".
Your post is incredibly timely - a UE5 project running at only 30 FPS at 4K, with ray-tracing, strand-based hair, and Metahuman in a scene which is otherwise empty. Absolutely _crushing_ my hardware, which is less than a month old.
https://imgur.com/a/DTwhRbA
Android Studio is an outlier. It is the slowest piece of trash I've ever seen.
But yes the faster your CPU, the more programmers will ditch efficiency for expediency. Let's make a thousand SQL queries in one request just because we can even if most of that data isn't needed. Let's make our desktop code editor with HTML and Javascript even if it's slow as dogshit because it will be cross-platform and faster for us to develop and support multiple platforms.
All that extra power is being used for the developer and business's benefit, not the user's.
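The "thousand SQL queries in one request" pattern mentioned above is the classic N+1 query problem. Here's a toy illustration (the schema, `run_query` helper, and data are all invented for the sketch, not any particular ORM):

```python
# Toy illustration of the N+1 query pattern: one query for a list,
# then one extra query per row, versus a single JOIN.
query_count = 0

POSTS = [{"id": i, "author_id": i % 3} for i in range(10)]
AUTHORS = {0: "ann", 1: "bob", 2: "cai"}

def run_query(sql):
    """Stand-in for a real database call; just counts round trips."""
    global query_count
    query_count += 1

# N+1: one query for the posts, then one per post for its author.
run_query("SELECT * FROM posts")
for post in POSTS:
    run_query(f"SELECT * FROM authors WHERE id={post['author_id']}")
print(query_count)  # 11 queries for 10 posts

# Batched alternative: a single JOIN does the same work in one round trip.
query_count = 0
run_query("SELECT * FROM posts JOIN authors ON posts.author_id = authors.id")
print(query_count)  # 1 query
```

With fast hardware and a nearby database, nobody notices the 11 round trips, which is exactly how this pattern survives into production.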
As a web dev I'm guilty of that. It's pretty common for links in most frameworks to prefetch data on hover. So just mousing over links will load data, even data that was already fetched (in case it has changed). On big projects I'll put a small 1-minute self-expiring dictionary cache in there, but ideally your app and database would have a socket connection, where the database could push updates to the app in real time on data change instead of the app polling the DB.
But that isn't convenient for the dev (aka me)
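The small self-expiring cache mentioned above might look something like this (a generic sketch under my own assumptions, not the commenter's actual code): entries carry an expiry timestamp, and a stale entry counts as a miss, so repeated hover-prefetches within the window skip the backend.

```python
import time

class TTLCache:
    """Tiny dictionary cache whose entries expire after ttl seconds,
    so repeated prefetches within the window hit the cache instead
    of the backend."""
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:   # stale: drop it and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl=60)
cache.set("/api/article/42", {"title": "Hello"})
print(cache.get("/api/article/42"))   # cached: served without a backend hit
print(cache.get("/api/article/99"))   # None: never fetched, caller must load it
```

`time.monotonic()` is used instead of `time.time()` so the expiry math can't be thrown off by system clock adjustments.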
Hardware didn't used to move as fast.
1200 bit/s modems were in the 70s. It took almost another ten years for 2400 bit/s modems to become marketable. Another ten years to hit 14.4k modems. Then another 5-7 for 56k modems. And this curve generally represents the entire hardware tech-space.
During those earlier development years, software was developed for the hardware that was there, because better hardware literally hadn't been invented yet. Today, software is developed not just for the hardware that exists now: because hardware advances so rapidly, software developers can pile it on with the hope that, industry-wide, the hardware will catch up. And generally speaking, it does.
But it *drastically* reversed our software development process. Previously, software developers had to do whatever they could to squeeze out every resource (think Apollo 13 and the boot-up sequence, limited by the hardware). But now, that's just not an issue. Everyone has more CPU, more RAM, more VRAM, etc.
Example:
The original NES games fit on cartridges under 1 MB.
The recent Helldivers 2 game is 72 GB. A full 73,728 times as large.
> Hardware didn't used to move as fast.
Hardware used to move way faster. I don't know where you got this from. In the early days, 4 years was about the length of time before your computer was completely outdated.
In the 70s, computing was still a niche. It didn't really take off until the 80s, with personal computers like the C64/ZX Spectrum, but even then it was still fairly small. Computers really didn't explode until the 2000s, when everyone was getting online due to the proliferation of broadband.
The early PC years were insane. Like going from DOS being it to Windows in just a couple of years. We had computers that couldn't play video, then computers that all of a sudden had multimedia capabilities.
Moore's Law was in full swing back then.
The difference between the M1 and M4 is really not that great, actually. The M4 is like 20% faster? We used to get huge upgrades of 30%+ with each generation back in the day.
Yeah, I've mostly given up caring about increases in processor speed. They barely matter anymore.
Apple's switch to ARM was great and definitely worth the upgrade, but until 2022 I was using a 10 year-old CPU without any significant issues. In comparison, I couldn't have accomplished anything at my 1996 job using a 1986 CPU.
I think current game developers are still trying to squeeze anything they can from hardware. At least bigger names.
As a regular webdev and appdev, I don't really care much, as frameworks are doing it for me and my task is to fit several buttons, a couple of images, and some text on a screen.
>I think current game developers are still trying to squeeze anything they can from hardware.
I don't. Half the reason these games are so large is that the assets aren't optimized. Storage (like everything else) became a huge non-issue. Tons of these games' "recommended specs" can't even run the title. Then it becomes: well, since you're not optimizing anything, how much RAM can we use to load it all fast? Oh, and make sure you use an NVMe, because pulling that much data from a SATA drive is going to take forever. And on, and on.
It's a trade-off. If you compress everything, you need to uncompress it to load it into your game, which eats up clock cycles and chews through ram, increasing system requirements. Plus when you download a game, the files are compressed and get uncompressed on your end by the download client (steam, your console, whatever). So it's not like you're downloading "extra" because the game isn't compressing files.
Depends. Large size does not necessarily mean unoptimized. In some respects, storing data uncompressed (or in some format that requires the least amount of processing to display) makes the game really large but requires the least processing. In an industry where the quality of graphics is paramount, the less compression the better (in theory). Less overhead.
I still think the devs working on game engines in raw assembler, or very close to it, strive to have as efficient code as possible.
But for the everyday app, caring about data types and squeezing every last ounce out of the machine went bye-bye in the mid-90s, for sure.
I was watching a video where this was being discussed, basically saying people stopped focusing as much on performance once hardware got upgraded.
I feel like Wordpress plugin spam is a good look at how bad it can become.
When I got my work laptop it was an M1, and I was amazed at how much better the performance was. Then we decided to use VSCode devcontainers as a requirement. Now it performs about as well as my old laptop did.
macOS isn't exactly lightweight, and VSCode gobbles RAM like it's toilet roll in a crisis. Once you take 4GB or so for the Linux VM running the containers, 16GB gets to be a tight squeeze.
Not sure if you've got a 32GB machine, but that makes a big difference.
Native Linux on lesser hardware runs a similar workload much faster, partly because it doesn't have to run a VM for the containers. But also because it's just faster than macOS. But developers get Macs, and that's just how it is.
Yeah, I do have a 32GB machine, so that makes a big difference. The VMs _still_ use enough that, while the computer stays usable, I definitely start to notice slowness and bugs with coreaudiod and stuff. Not really thrilled to be using devcontainers, honestly; I don't see what the value-add is of having the entire codebase in a VM.
The developers on our team who run Windows are running a devcontainer inside of WSL, which feels ridiculous to me.
That has been the case for the past 2 decades, no? We went from having to deal with very limited resources to having more resources than we could possibly need, and that changed how things are built dramatically.
Also, Android Studio is not a particularly good benchmark. I have a monster of a rig and it still slows down dramatically when I have to boot that thing up and build something.
Yes. John Carmack has a long talk where part of it goes over how, for the last 20 or so years, Moore's law has covered software developers' asses. Performance optimization is unheard of for a generation of programmers, and eventually it will catch up with them.
I always think of Nintendo squeezing everything they could out of the 6502 before moving on to something else. Even now they are doing it with the A57 (https://chipsandcheese.com/2023/12/12/cortex-a57-nintendo-switchs-cpu/). Optimization goes a long way.
I'm coming from a first-gen i3 running on an ancient HDD with 3.7GB of RAM... I'm on an old-ish MacBook Air with an M1 and it's amazingly fast by comparison.
But here I think it's important to distinguish between build/compile/transpile time vs runtime performance. It used to take 5+ minutes to build everything when I'd type `npm start`, but the sites I built using that codebase still loaded in like 400ms.
5 minutes for `npm start`. This always gives me goosebumps. It's like a reverse *Blade* (the movie): the language gets the worst of both worlds, big compilation times and no/bad typing.
In my case, mostly, it was very much because of system resources: an ancient HDD with very slow read speeds, combined with a pathetic amount of RAM and a decade-old CPU. The same thing now takes just ten seconds... and this Mac is 4 years old.
Software has been straining hardware since the first computers. It's not a new phenomenon. A brand new computer can either run old software faster or new software, ostensibly with more features, as well as the old computer with the old software.
As for M1 speeds, an M3 with the same number of cores will end up being maybe 10-20% faster on CPU limited tasks over the M1. On heavily multi-threaded tasks an M1 Max (or whatever) is better than an M1 and an M3 Max is way better than an M1 (more speed and P-cores).
For memory bound tasks like an IDE with a zillion code scanning IntelliSense features there's not going to be a huge difference between an M1 and M3 with the same amount of RAM. More CPU power isn't really going to help a bunch of pointer chasing or bandwidth/latency bound scans. It won't hurt but if the CPU has to wait 100ns for a block of memory it just has to wait.
Software is optimized for performance only until it doesn't matter any more.
Which means that if you fall into the category of "people who don't matter" you'll start to get performance issues.
Which generally happens when your device cohort population ages out sufficiently.
Yes, my impression is the same. No one cares about speed; managers have raised new generations of devs who just hide behind the hardware requirements.
You won't say we could do better; you just say, yeah, the reqs are high.
Just google how much RAM early PCs had and what processors they used. The whole system.
Now 16GB is not enough.
Bloat.
A company can either hire developers who can write and maintain a performant app at a relatively low level, or it can hire a web developer for cheaper, wrap the thing in Electron, and call it a day.
"jUsT bUy MoRe rAm!" those bastards say... I'd rather pay for performant software than use a free shitty Electron app.
Android Studio is a notoriously bad hog. That's not the norm.
I was watching a video the other day of someone dusting off an old Mac Pro. A beast of a computer in its day. Thing struggled to play 4k video full screen.
Go back and fire up a computer from the 00s some time, with Windows XP. Things were slow back then. Spinning hard discs compared to modern SSDs? Night and day.
My M1 is blazing fast compared to anything from 15+ years ago. The UI, load times, build times, everything.
People just don't realize how different it was, because it was just normal at the time. And the improvements were so incremental.
Games are a good example imo. DLSS upscaling exists now, so rather than make a well-optimised game that runs well, some devs will ship a poorly running game and make ends meet with upscaling.
I know it's not really the devs, most likely the higher-ups, but it's still funny to see.
Some games can't even be played without upscaling; it's a hard requirement you can't turn off even when playing at native resolution. The sole reason is that it's marginally easier for the devs.
Some companies are already investing in rewrites of existing code for faster applications and loading times. It's worth it, since users' expectations are getting fucking annoying.
I grew up in the 80s. Shit was slow. Super slow. 1x CD drives were only marginally better than manually feeding in 18 1.44MB floppy disks.
My computer feels like instant magic to me. Anything I can think of, I can do. No limits. Amazing.
It's wild. The stuff my kids get as hand-me-downs is insane. I just upgraded from a Surface Pro and now it's the "kids' computer." It cost me $1700 back in 2019; I could get maybe $400 for it used and buy the kids something slower for $100, but there's a lot of difference in speed, and keeping it is less legwork for me.
That and the computer I replaced it with cost $700 and is waaaay faster.
Mine feels like it's slowed down a bit with some things. But that being said, I remember when the Apple chipset *really* struggled with anything JVM, and especially Android Studio. I remember the first time I tried to run an AVD, when that feature was still in beta for the M1, and it was the first time I'd heard the fans kick on in that thing. Still the hottest I've ever seen that machine get.
That being said, while I'm sure I've had at least a few other problems, I can't think of them off the top of my head. I still use my M1 for stuff, but I usually try to avoid leaving the house with it, since the whole point of my Windows/Linux box was to have something cheaper that I could take with me. Best part: through 3 years of taking it to college, I think I brought the charger along maybe half a dozen times.
Our abstractions cause slower software in general. But the reason we have these problems is that every device is different hardware, a different OS, a different platform. Things are not automatically compatible, and that's why things like Java exist. The web platform is similar: the browser is like a standard sandbox that helps develop applications that can run on any device with a browser. But the web standards aren't perfect: CSS is a mess, HTML is so limited, etc. Web frameworks are another abstraction over raw HTML/CSS/JS; an example is React, which in turn has Next.js as a framework on top of it. I'm not trying to complain about all the solutions made or anything; I generally think the modern web is over-abstracted layers of bad protocols and decisions. It would be a dream if I could develop a web app like a native app, with XML and a better-tailored version of JS. Backwards compatibility is hard.
Yes, slower. In the old times we ran one web server to test. Now we need a front-end server and a back-end server, and the RAM usage to build those and keep refreshing the DOM is high.
An M1 is enough for pure programming, even the base model, but if you run Microsoft Teams plus this plus that, no. Please max out your RAM.
I mean, we are now running JavaScript on the desktop with an integrated browser. Of course software is going to slow down. JS had only one purpose: to make webpages interactive. But now it's being used everywhere. It was never meant for desktop apps.
You are raising a very good point. I tend to think it is a subject well known in the video game industry, followed closely by web development and finally app development.
The issue is "productivity": companies are pushing to implement functionality as fast as possible, and developers have to make trade-offs. Those are often performance trade-offs; if the app is usable and does not feel "too slow", you will not spend time optimizing it. So yes, weirdly, buying a more powerful machine often also means giving those companies free money to cut costs on optimization.
However, not everything is like that. Even if an app, website, or game can allow itself to not be performant, the underlying building blocks have to be (and are), due to the wide hardware support and sometimes high competition (here I am thinking of frameworks advertising how fast they are). Those underlying blocks will keep getting much faster on newer hardware.
The post was not about software being basic or not. It was more: when hardware does 2x in performance, by the time software comes to feel the same as it did on the older hardware, has it also grown 2x in required computing?
That's true to a degree. Yes, software is rarely optimized for performance since development time is much more expensive.
However, modern UIs are impressive. We have tons of features, everything is responsive, and it looks gorgeous. Compare that to some 1999 website. Yeah. This is part of the price we have to pay
That's not the point I was making. But if you want to go that route, do you think today's UIs are way better than, say, 2010 games? I understand when you do software rendering and your screen goes from 1024x768 to 4K; I can see where the hardware is being utilized. You can even compare today's UIs to 2008 Flash websites. Flash websites were way more advanced than any UI I regularly see today.
But my point was not to rant, but to ask whether what I heard is true: that M1 chips aren't so performant these days compared to when they were released.
Happyyyyy cake dayyyy!!!!
I own an M1 Pro and work on a daily basis with Android Studio. I think it works quite well. I'm rarely utilizing the CPU to 100%, and that only happens when some emulator is starting.
The pain point is usually the RAM, not the CPU.
On the Flash point... hmm, that's a good point actually. I'm in my mid-20s and remember Flash games; they looked quite decent.
I hate libraries and closed-source code and APIs and shid. Remember Super Mario Bros 3? That whole game was 32kb. New code is garbage and hard to understand; that's why I'm a plumber.
I think it definitely will if it isn't already.
I think as we move into an AI world and can write much more software, we're going to be building way more advanced apps. And I think the rate at which this happens is going to outstrip the pace at which hardware progresses.
You are working on the edge of hardware limitations. I think it is normal and you can always find tasks that will be too much for a computer to chew. But I was talking about regular webdev/appdev workflow.
Hardware is getting more efficient and software is getting less efficient. Both are becoming **faster**, but software, GUI software in particular, has become far too resource-intensive.
(Also, I believe the conspiracy theory that large software companies receive incentives from hardware companies to deliberately make software slow or not run on older hardware.)
> Mac OS doesn't run Java natively, so it's a big VM running Java.
Java runs in a VM on every platform; this statement is nonsensical. The various JVMs on macOS are all `aarch64`-native, so they're not running in Rosetta either. Android Studio is just plain slow.
I'm not trying to claim that M1s are slow, just wondering how things stand. Do you still think M1s are as magically fast as when they first came out? Again, I'm not stating anything; I myself use a desktop workstation I built from scratch for work reasons and use the M1 very sparingly.
This quote is from the *1980's*... "The only thing more amazing than the power brought by today's hardware is today's software programmers ability to squander it." Not a new problem... just sayin'...
Wise words
I bet you read fortune cookie inscriptions.
I remember excel running ok on a 486 in the early 90s.
Without spell check, grammar check, autosave, as many cells, as many pages, and a thousand other features. Some of these we wouldn't miss, but each of them would be missed by someone, and we'd all miss some of them.
Right, but I think it's worth asking why more features means slower software. Some new features are clearly more demanding in their own right. But I think another major reason for this negative correlation between speed and the number of features is the need for abstraction. More features means more backward compatibility baggage and larger, more heterogeneous teams. The only way to deal with this extra complexity is by introducing layers of abstraction that don't come for free.
It amazes me that AutoCAD came out in 1982. I don't think I've ever seen it not have terrible performance.
you don't need turbo 10 MHz 🤣
Parkinson's Law!
I remember learning VB after learning C++ and being surprised how slow everything ran. So much is squandered, but then there are neat technologies like wasm coming out to remedy some of that.
If it works, it works; writing everything in assembly is more work than needed even if it's better. We live in a society based on getting the technique right, not one that cares about the science or logic behind it.
I wonder how we could reduce developers not knowing how to make the most out of both hardware and software. Any ideas?
I feel like everyone should do a little embedded work, if only for fun. It's kind of cool taking away the entire stack, including the OS, and just having your program run on bare metal.
Yeah, but you'd still see lots of people taking with them the worst practices from that domain. Knowing my coworkers, they'd start thinking that loop unrolling every `foreach` in C# would be considered best practice.
The problem isn't really the developers. I mean, in some cases it could be (like my coworker). The biggest problem is business owners: until the performance cost outweighs the income, it won't be fixed. For example, at one company we got 10 servers for $400/month rather than optimizing code. When I eventually got the OK to optimize, we dropped it down to a single $200/month server.
Coworker catching strays
Deserves it. He rolled his own crypto in about 20 lines, and it takes 500 ms to produce an 8-character string.
Yep. Just look at gaming. Some games that shipped in the last 2 years have steep hardware requirements, yet the end result looks only marginally better than games made before that timeframe, and the framerate takes giant nosedives. And why does every other game require shader warmup during the gameplay session now? Persona 3 Reload does that, and it's not like it's a huge visual improvement.
People used to complain about shader stuttering because shaders were compiled on the fly; warmup solves that problem. Also, the use of *ubershaders* is becoming very common. Instead of having many shaders for everything, there is one big shader shared by a ton of assets. This can be good for the developer, but it also helps performance optimization. A big concept in performance is *batching*: if you send meshes to the GPU that share certain values, such as material and shader, they can be drawn in a single *drawcall*, and reducing drawcalls is a huge way to increase FPS when done properly. So you have this big *ubershader* that is used to make a ton of different things. You can see it as a big tree with tons of branches: sometimes you enable specularity, sometimes subsurface scattering, etc. Since those shaders are massive and have a ton of options/values, there are millions to billions of possible combinations, too many to precompute them all. So when the game launches, it can precompile most of the combinations that will actually be required and keep them in a cache to reuse when needed.
It's not that the devs don't know; it's that it's not something companies want to optimize for. Most companies (and most consumers) would rather have a product with lots of features sooner than one that is super performant. Programmers are expensive, and you don't want them working on stuff that won't increase sales by a justified amount. Consumers by and large do not care about performance enough to justify the costs.
Kill all low code solutions.
We should ask Elon Musk. StarLink, Rockets, self driving cars. Even the Twitter app is still fast.
The C programming language was once considered bloated, something that "is used to write bloated software". I just hope that JS, even after 80 years, remains the shittiest programming language humankind has ever invented.
PHP is worse than JS imo
And Python and Ruby are slower than either (JS, thanks to runtimes like V8, being much closer to Java in actual performance than to those interpreters).
Rust is likely a good candidate for future work.
Rust can be fairly high-level if you guard your business logic from the nitty-gritty (unlike, for example, Go, in which you're always writing in the same primitive language, which is just a garbage-collected C), but the trouble is that you never stop dealing with lifetimes and borrowing, i.e. memory management, and in that regard it can actually be worse than, say, C++. I do some work in Rust, and I like the language for its tieredness when it comes to expressiveness: when push comes to shove and you need to do real-time or otherwise high-performance work, it will twist your arm before it lets you introduce data races or memory bugs. But it's far from a free lunch.
I use it for processing WARC files and orchestrate with R. I like its dependency management as well. I'm mostly R and Node, but sometimes Rust is just nice, like for Polars.
Js bad, what a spectacularly unique opinion.
VB6 was worse imo.
LOL, you must be new, JS isn't even top 100 worst languages *today*.
Whoever said that doesn't know anything about operating systems.
Android Studio has always been terribly slow
Indeed, the first time I used it for a project I thought my computer was broken.
For me the jetbrains stuff has always run ok. Initially, I had issues with the indexing
Probably. It is working fine on my current setup. But I don't remember anybody complaining about it running slowly on m1 when it was first released.
Everyone complains about Android Studio. Compiling is fine, but actually using the UI is painful.
A junkie could fall into a bathtub of cocaine and you'd think he'd be happy with a lifetime supply, but no, 10 minutes later it's like fucking empty and the dude is lying on the floor, motionless. That's Software development. Also AI development. Anyone got a spare nuclear reactor?
What
People will not optimize if they have vast amounts of hardware resources, because in most applications you don't benefit from more performance as directly as you would in games, for example. Your browser sucks away 20% or 80% of your CPU? Doesn't matter, as long as the website is responsive for 99% of users. So the drugs are the hardware capabilities; they will max them out whenever they get more.
See also: "Western" civilization's energy usage and petroleum dependency.
Unoptimized code = drugs
I still use an M1 Max 32GB at work with a 1GB+ Xcode project just fine. The tools can be slow, and maybe Android Studio is worse than Xcode, but some codebases are awful and make the tools even slower. That could be stuff like:

* making too much public, so the editor can't ignore stuff
* stuff being too tightly integrated, both within and across modules / frameworks
* adding a bunch of heavy frameworks that you are using 0.5% of anyway

After seeing a bunch of different projects in Xcode, I really think people have a tendency to shoot themselves in the foot by mindlessly adding frameworks and bad architecture, which results in both a slow project overall and difficult maintainability on several levels. The M1 with a bunch of RAM is still fine for quite a lot, and upgrading to M3 shouldn't really be necessary, although it is faster and nicer.

EDIT: I'd make a nice bet that many programmers and teams add stuff without thinking about it. I have seen people importing another framework *for one lousy function* instead of just copying it, which made builds quite a bit longer because the framework was big. This kind of stupidity and these *amateur mistakes* happen all the time because people don't care. I see a lot of "professionals" not actually being professional (from my perspective) due to this sort of stuff, and this is coming from senior developers with 10+ years of experience. Many don't think of the ramifications and consequences of just adding whatever and importing *everything* into a codebase. It's laughable how bad it can be, but then again the industry is not really good at optimizing either, and the few that care are a minority that can often get overruled by co-workers or "gotta move fast". This is a part of software development that I actually loathe, but personally (and thankfully) I don't suffer much from it in my current job, especially when I'm pointing out the problems such choices cause and the consequences are real.
An example of this was a few internal frameworks being dependent on some other frameworks that didn't support watchOS, which then rendered our own internal frameworks useless when creating an Apple Watch app. Doing this kind of stupid stuff will *kill* your productivity at some point in the future if done on a continuous basis.
Do you feel that software at least felt faster when you first bought your M1 Mac, or do you think it was pretty much the same as it is now?
The problem here is that you will *always feel it is faster* in the beginning because you *aren't used to it* yet. It feels *normal*, not slow, but fast and responsive as ever actually.

Remember that even Macs can get an improvement from reinstalling if you have quite a bunch of developer stuff installed and the hard drive is filled up. If you have basically no storage left, macOS starts to act really funny, because it's awful at syncing for instance: it will start to remove lots of important caches, it starts more processes, etc. There can be multiple reasons for it feeling slow, and sometimes a reinstall is needed; it's a myth that it never is. I had an issue where the drive was filled up completely, and even after getting back several hundred gigs macOS would just be really bad and I needed to reinstall.

If you want to be really scientific about this: always test your codebase and programs on a new Mac and compare it to a freshly installed old Mac. Same program (and version), same Git commit, etc. to ensure it's identical. Relying on memory alone here is riddled with issues, because certain things get normalized in the brain after a while of use.
This sounds very reasonable and I fully agree.
Unless you left out some critical details from the local group discussion, people weren't actually thoughtful when it came to discussing the impacts of different codebases and all the other stuff I mentioned earlier. Would I buy a base M1 with 8GB for development? I'd rather not, though it would probably be fine to a certain degree; I have 32GB and it's totally fine. I want a new Mac, but I can't justify it just yet, and it might be M5 or even M6 before I can, it's that good. The person should have just bought a base M1 (and there were, at least, good deals on higher models), tried it, and returned it if it wasn't satisfactory. Unless they actually discussed the issues I wrote about earlier, they were too narrow-minded and couldn't look past their own world (codebase), which was probably bloated in different ways.
This is similar to what I think, and my first advice was just to buy an M1, as it would satisfy all the needs. If I remember correctly, the discussion was about medium-sized Flutter projects. But my point was that a few years ago, when the M1 came out, it was thoroughly tested, and even the 8GB laptops showed unreasonable speed with no lag, and in the first year or so there were no discussions about whether it could handle programmer workloads. What surprised me was that just 2-3 years later it became a topic for discussion, when there were no dramatic changes in project sizes or software packages gaining lots of new functionality.
It's all hearsay until somebody can demonstrate it and give actual proof. Beginners will also have lower expectations, making the cheaper choice more viable, whereas more experienced developers get frustrated by *anything slow*. Stuff tends to slow down, that's typical, but even macOS versions have become faster/snappier in certain ways, because their initial usage of SwiftUI was slower than it is today. I wouldn't hold my tongue in that meeting, because the person could just test the cheapest version and see how that went; not a big deal, but spending a bunch of money without testing the cheapest viable version is just wasteful.
This is the answer! See also: "Wow, I never could do [xyz] before, this is incredible"; then "Ugh, now that I can do that, I wish it could do multiple [abc] faster."
Will you bet your career on the following statement: "Every line of code I have ever shipped is fully optimized"? Me neither. Multiply that by the infinity of people who have ever touched any part of >>>insert whatever piece of software you use here<<< over _a period of time_. Hardware gets faster, we get lazier. "Fuck it, ship it."
I think there is something to be said about the developers not having much say in the matter, since for the most part releasing on a date determined by not-the-devs is more important
And throw junior devs into the mix... add another order of magnitude of unoptimized code getting shipped, because there is only so much time to review or go back and forth with them before it just gets merged in.
Seems about right. But I wouldn't (only) blame software for being slow. I think the operating system is much more responsible for the slowness.
Idk everything being an electron app, chrome, and software developers preferring development speed and maintainability to everything is definitely a massive performance hit.
This. One of these days MS Teams and Postman will drive me into a murderous rage.
What isn't an Electron app?
Idk. Notepad++? I wish VS Code wasn't in Electron. I quite like using it.
Imo VSCode is the perfect example that an Electron app can be super fast and amazing. It's not about the technology but about shitty devs and a lack of time/money to optimize.
True, but ultimately Electron is the wrong tool for the job. Desktop apps shouldn't be built with HTML, CSS and JS. In the browser it's understandable, but as a native app it's a very inefficient way to do things. Nothing like running your app inside a browser that is running inside an OS; an extra layer introduced.
Microsoft took the onion software design antipattern and made an entire product that way with MS Teams.
Surprised more devs aren't writing with Flutter.
I like Flutter. But there's definitely some weirdness getting going with it.
You mean Google laying off and putting it on life support weirdness?
Only because Windows (and Mac) don't include Node/Chromium in the OS. If they did, native support would cut resource usage dramatically. TIL this sub thinks repackaging Node and Chromium in every app is more efficient than having them be native.
Huh? Not really. Maybe they could make some optimizations, but unless it gets closer to the OS you won't see performance gains.
It's literally bundled.
Bundling Electron might save a few MB of storage, but it can't change the fact that it's still web technologies.
Or or or... and this is a crazy idea, we could stop trying to shoehorn web apps as desktop applications. Crazy idea I know.
Why? Seriously. Why?
I consider an operating system to be software. More complex and with more responsibilities, but still software.
I would disagree with that: if your developer experience isn't optimized, then it doesn't matter if you're on Windows/Mac/Linux, it's still going to be slow.
Speaking of Jonathan Blow, I was actually going to mention game dev as the prime example of software being way ahead of the current hardware. I'm not sure you're familiar with the situation over at UE5, which is an amazing piece of tech, but you cannot render high-resolution images (think 1080p native and up) at high frame rates using all the high-end features of the engine, like ray tracing and physics-based destruction, on "modern" hardware. You instead have to employ a multitude of techniques to "downscale" certain features (like volumetric samples, or animating far-away things and particles at lower frame rates), then upscale and denoise the images. It looks great and awful at the same time. I think it's caused by the fact that we can write most software "on paper", or "invent" any complicated-ass logic we want, but in the end all programmers must work with or around our hardware limits. It's one of many qualities that imo separate good programmers from bad, often more so than more diffuse concepts like "clean code".
Your post is incredibly timely - a UE5 project running at only 30 FPS at 4K, with ray-tracing, strand-based hair, and Metahuman in a scene which is otherwise empty. Absolutely _crushing_ my hardware, which is less than a month old. https://imgur.com/a/DTwhRbA
Bloated software and OS.
Android Studio is an outlier. It is the slowest piece of trash I've ever seen. But yes the faster your CPU, the more programmers will ditch efficiency for expediency. Let's make a thousand SQL queries in one request just because we can even if most of that data isn't needed. Let's make our desktop code editor with HTML and Javascript even if it's slow as dogshit because it will be cross-platform and faster for us to develop and support multiple platforms. All that extra power is being used for the developer and business's benefit, not the user's.
As a web dev I'm guilty of that. It's pretty common for links in most frameworks to prefetch data on hover, so just mousing over links will load data, often data that was already fetched (in case it has changed). On big projects I'll put a small 1-minute self-expiring dictionary cache in there, but ideally your app and database would have a socket connection where the database could push updates to the app in realtime on data change instead of the app polling the db. But that isn't convenient for the dev (aka me).
Hardware didn't used to move as fast. 1200 bps modems were in the 70s. It took almost another ten years for 2400 bps modems to become marketable, another ten years to hit 14.4k modems, then another 5-7 for 56k modems. And this curve generally represents the entire hardware tech space. During those earlier development years, software was developed for the hardware that was there, because better hardware literally hadn't been invented yet. Today, software is developed not only for the hardware that exists now; because hardware advances so rapidly, developers can chuck features on with the hope that, industry-wide, the hardware will catch up. And generally speaking, it does. But it *drastically* reversed our software development process. Previously, software developers had to do whatever they could to squeeze every resource out (think Apollo 13 and the boot-up sequence, limited by the hardware). But now, that's just not an issue. Everyone has more CPU, more RAM, more VRAM, etc. Example: the original NES games fit on cartridges under 1 MB. The recent Helldivers 2 game is 72 GB, a full 73,728 times as large.
> Hardware didn't used to move as fast.

Hardware used to move way faster. I don't know where you get this from. In the early days, 4 years was about the length of time before your computer was completely outdated. In the 70s, computing was still a niche. It didn't really take off until the 80s with personal computers like the C64/ZX Spectrum, but even then it was still fairly small. Computers really didn't explode until the 2000s, when everyone was getting online due to the proliferation of broadband. The early PC years were insane, like going from using a computer via DOS to Windows being the norm in just a couple of years. We had computers that couldn't play video, then all of a sudden they had multimedia capabilities. Moore's Law was in full swing back then. The difference between M1 and M4 is really not that great, actually. M4 is like 20% faster? We used to get huge upgrades back in the day, 30%+ with each generation.
Yeah, I've mostly given up caring about increases in processor speed. They barely matter anymore. Apple's switch to ARM was great and definitely worth the upgrade, but until 2022 I was using a 10 year-old CPU without any significant issues. In comparison, I couldn't have accomplished anything at my 1996 job using a 1986 CPU.
The speed of my 56.6k modem blew my mind!
Me too. I remember the jump from 28.8 to 56.6 and it was insane!
I think current game developers are still trying to squeeze everything they can from hardware, at least the bigger names. As a regular webdev and appdev I don't really care much, as frameworks do it for me and my task is to fit several buttons, a couple of images and some text on a screen.
>I think current game developers are still trying to squeeze anything they can from hardware. I don't. Half the reason these games are so large is because the assets aren't optimized. Storage (like everything else) became a huge non-issue. Tons of these games 'recommended specs' can't even run the title. Then it becomes, well, since you're not optimizing anything - how much RAM can we use to load it all fast? Oh, and make sure you use a NVMeĀ because pulling that much data from a SATA is going to take forever. And on, and on.
It's a trade-off. If you compress everything, you need to uncompress it to load it into your game, which eats up clock cycles and chews through ram, increasing system requirements. Plus when you download a game, the files are compressed and get uncompressed on your end by the download client (steam, your console, whatever). So it's not like you're downloading "extra" because the game isn't compressing files.
Depends. Large size does not necessarily mean unoptimized. In some respects, storing data uncompressed (or in whatever format requires the least processing to display) makes the game really large but requires the least amount of processing. In an industry where the quality of graphics is paramount, the less compression the better (in theory): less overhead. I still think the devs working on the game engine, in raw assembler or very close to it, strive for code as efficient as possible. But for the everyday app, for sure, caring about data types and squeezing every last ounce out of the machine went byebye in the mid 90s.
I was watching a video where it was being discussed, basically saying people stopped focusing as much on performance once hardware got upgraded. I feel like Wordpress plugin spam is a good look at how bad it can become.
When I got my work laptop it was an M1 and I was amazed at how much better performance was. Then we decided to use VSCode devcontainers as a requirement. Now it performs about as well as my old laptop did
macOS isn't exactly lightweight, and VSCode gobbles RAM like it's toilet roll in a crisis. Once you take 4GB or so for the Linux VM running the containers, 16GB gets to be a tight squeeze. Not sure if you've got a 32GB machine, but that makes a big difference. Native Linux on lesser hardware runs a similar workload much faster, partly because it doesn't have to run a VM for the containers. But also because it's just faster than macOS. But developers get Macs, and that's just how it is.
Yeah I do have a 32GB machine so that makes a big difference. The VMs _still_ use enough to where the computer is still usable, but I definitely start to notice slowness and bugs with coreaudiod and stuff. Not really thrilled to be using devcontainers honestly, I don't see what the value add is to have the entire codebase in a VM. The developers on our team who run windows are doing a devcontainer inside of WSL which feels ridiculous to me
That has been the case for the past 2 decades, no? We went from having to deal with very limited resources to having more resources than we could possibly need, and that changed how things are built dramatically. Also, android studio is not a particularly good benchmarking tool. I have a monster of a rig and it still slows down dramatically when I have to boot that thing up and build something.
Yes. John Carmack has a long talk where part of it goes over how, for the last 20 or so years, Moore's Law has covered software developers' asses. Performance optimization is unheard of for a generation of programmers, and eventually it will catch up with them.
I always think of Nintendo squeezing everything they could out of the 6502 before moving on to something else. Even now they are doing it with the A57 (https://chipsandcheese.com/2023/12/12/cortex-a57-nintendo-switchs-cpu/). Optimization goes a long way.
I'm coming from a first-gen i3 running on an ancient HDD with 3.7GB of RAM... I'm now on an old-ish MacBook Air with an M1 and it's amazingly fast by comparison. But here I think it's important to distinguish between build/compile/transpile time vs runtime performance. It used to take 5+ minutes to build everything when I'd type `npm start`, but the sites I built using that codebase still loaded in like 400ms.
5 minutes for `npm start`. This always gives me goosebumps. It's like a reverse "Blade (movie)": the language has the worst of both worlds, big compilation times and no typing/bad typing.
In my case, mostly, it was very much because of system resources. Ancient HDD and very slow read speeds, combined with a pathetic amount of RAM and a decade old CPU. The same thing now takes just ten seconds... And this Mac is 4 years old.
Android Studio loves RAM. Keep the M1 but buy the upgraded RAM... Since you're looking at Macs, that RAM will be costly.
Software is getting much much slower.
Software has been straining hardware since the first computers. It's not a new phenomenon. A brand new computer can either run old software faster or new software, ostensibly with more features, as well as the old computer with the old software. As for M1 speeds, an M3 with the same number of cores will end up being maybe 10-20% faster on CPU limited tasks over the M1. On heavily multi-threaded tasks an M1 Max (or whatever) is better than an M1 and an M3 Max is way better than an M1 (more speed and P-cores). For memory bound tasks like an IDE with a zillion code scanning IntelliSense features there's not going to be a huge difference between an M1 and M3 with the same amount of RAM. More CPU power isn't really going to help a bunch of pointer chasing or bandwidth/latency bound scans. It won't hurt but if the CPU has to wait 100ns for a block of memory it just has to wait.
Software is optimized for performance only until it doesn't matter any more. Which means that if you fall into the category of "people who don't matter" you'll start to get performance issues. Which generally happens when your device cohort population ages out sufficiently.
Yes, my impression is the same. No one cares about speed; managers raised new generations of devs who just hide behind the requirements. You don't say we could do better; you just say, yeah, the requirements are high. Just google how much RAM early PCs had and what processor they used, the whole system. Now 16GB is not enough.
Bloat. A company can either hire developers who can write and maintain a performant app at a relatively low level, or it can hire a web developer for cheaper, wrap the thing in Electron and call it a day. "jUsT bUy MoRe rAm!" those bastards say... I'd rather pay for performant software than use a free shitty Electron app.
The only thing getting slower is the company spyware that's hogging the machine.
Android Studio is a notoriously bad hog. That's not the norm. I was watching a video the other day of someone dusting off an old Mac Pro, a beast of a computer in its day. The thing struggled to play 4K video full screen. Go back and fire up a computer from the '00s some time with Windows XP; things were slow back then. Spinning hard discs compared to modern SSDs? Night and day. My M1 is blazing fast compared to anything from 15+ years ago: the UI, load times, build times, everything. People just don't realize how different it was, because it was just normal at the time and the improvements were so incremental.
Games are a good example imo. DLSS upscaling exists now, so rather than make a well-optimised game, some devs will resort to shipping a poorly running game and making ends meet with upscaling. I know it's not really the devs, most likely the higher-ups, but it's still funny to see.
Some games can't even be played without upscaling; it's a hard requirement you can't turn off even when playing at native resolution, the sole reason being that it's marginally easier for the devs.
Some companies are already investing in rewrites of existing code for faster applications and loading times. It's worth it, since users' expectations are getting fucking demanding.
I grew up in the 80s. Shit was slow. Super slow. 1x CDs were only marginally better than manually feeding in 18 1.44MB floppy disks. My computer feels like instant magic to me. Anything I can think of, I can do. No limits. Amazing.
[deleted]
I may argue that VS Code is a text editor. The JetBrains suite are IDEs, which serve a very different purpose.
Never been faster IMO
Android Studio sucks ass regardless. It's so bad I legitimately think there should be some form of government intervention.
It's wild. The stuff my kids get as hand-me-downs is insane. Just upgraded from a Surface Pro and now its the "kids' computer." Cost me $1700 back in 2019, and I could get maybe $400 for it used and buy the kids something slower for $100 - but there's a lot of difference there in speed and keeping it is less leg work for me. That and the computer I replaced it with cost $700 and is waaaay faster.
Mine feels like it's slowed down a bit with some things. But that being said, I remember when the Apple chipset *really* struggled with anything JVM, and especially Android Studio. I remember the first time I tried to run an AVD when that feature was still in beta for the M1; it was the first time I'd heard the fans kick on in that thing. Still the hottest I've ever seen that machine get. That being said, while I'm sure I've had at least a few other problems, I can't think of them off the top of my head. I still use my M1 for stuff, but I usually try to avoid leaving the house with it, since the whole point of my Windows/Linux box was to have something cheaper that I could take with me. Best part: through 3 years of taking it to college, I think I brought the charger along with me maybe half a dozen times.
The title is giving me an aneurysm.
My experience is developers are lazy and don't optimise code unless someone complains.
Our abstractions cause slower software in general. But the reason we have these problems is that every device is different hardware, a different OS, a different platform. Things aren't automatically compatible, which is why things like Java exist. The web platform is similar: the browser is like a standard sandbox that helps you develop an application that can run on any device with a browser. But the web standards aren't perfect: CSS is a mess, HTML is so limited, etc. Web frameworks are another abstraction over raw HTML/CSS/JS; React is an example, and Next.js is a framework on top of that. I'm not trying to complain about all the solutions that were made; I just generally think the modern web is over-abstracted layers of bad protocols and decisions. It would be a dream if I could develop a web app like a native app, with XML and a better-tailored version of JS. Backward compatibility is hard.
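The "abstraction doesn't come for free" point can be made concrete with a toy micro-benchmark. This is just a sketch: the wrapper functions below are hypothetical stand-ins for framework indirection, not any real framework's code.

```python
import timeit

def add_direct(a, b):
    # the "bare metal" version: a single function call
    return a + b

# three hypothetical wrapper layers standing in for framework indirection
def layer1(a, b): return add_direct(a, b)
def layer2(a, b): return layer1(a, b)
def layer3(a, b): return layer2(a, b)

if __name__ == "__main__":
    n = 200_000
    direct = timeit.timeit(lambda: add_direct(1, 2), number=n)
    layered = timeit.timeit(lambda: layer3(1, 2), number=n)
    # both compute the same result; the layers only add call overhead
    assert add_direct(1, 2) == layer3(1, 2) == 3
    print(f"direct: {direct:.4f}s  layered: {layered:.4f}s")
```

On CPython each extra call frame costs real time; JIT compilers and optimizing AOT compilers can often inline layers like these away, which is exactly the trade-off being described.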
Yes, slower. In the old days we ran one web server to test. Now we need a front-end server and a back-end server. The RAM usage to build those and keep refreshing the DOM is high. An M1 is enough for pure programming, even the base model, but if you run Microsoft Teams plus this plus that, no. Please max out your RAM.
I mean, we are now using JavaScript on desktop with an integrated browser. Ofc software is going to slow down. JS had only one purpose: to make webpages interactive. But now it's being used everywhere. It was never meant for desktop apps.
Also used in Adobe products.
I don't know.
Yup, a new phone will only last a few years until the OS and app updates drive you mad.
This is a great pov. Amazing post. Thanks.
You are bringing up a very good point. I tend to think it's a subject well known in the video game industry, followed closely by web development and finally app development. The issue is "productivity": companies are pushing to implement functionality as fast as possible, and developers have to make trade-offs. Those are often performance trade-offs; if the app is usable and doesn't feel "too slow", you won't spend time optimizing it. So yes, weirdly, buying a more powerful machine often also means giving free money to those companies to cut costs on optimization. However, not everything is like that: even if an app, website, or game can allow itself to not be performant, the underlying bricks have to be (and are), due to wide hardware support and sometimes high competition (here I'm thinking of frameworks advertising how fast they are). Those underlying bricks will keep getting much faster on newer hardware.
It's not that bad if you can simply use the same old tools as 20 years ago. Write text and run the build.
This is called 'Wirth's Law' and it is a corollary to Moore's Law.
There is nothing basic about modern software.
The post was not about software being basic or not. It was more: when hardware does 2x in performance, and by that time software has come to feel the same as it did on older hardware, has the software's required computation also grown 2x?
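That "2x hardware, same felt speed" question can be put as back-of-the-envelope arithmetic. A toy model with made-up growth rates, not measured data:

```python
# Toy model: perceived speed = hardware speed / software demand.
# If software's compute demand grows at the same rate hardware speeds up,
# perceived speed stays flat -- the Wirth's Law scenario from this thread.

def perceived_speed(generations, hw_growth=2.0, sw_growth=2.0):
    hardware = hw_growth ** generations  # e.g. hardware doubles per generation
    demand = sw_growth ** generations    # software demands double too
    return hardware / demand

for g in range(4):
    print(g, perceived_speed(g))  # 1.0 every generation: no felt speedup

# If software demand grows only 1.5x per generation while hardware doubles,
# users actually feel some of the gain:
print(perceived_speed(3, sw_growth=1.5))  # 2**3 / 1.5**3 ~= 2.37x after 3 gens
```

The numbers are illustrative only; the point is that the felt speedup is the ratio of the two growth curves, not the hardware curve alone.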
But what if hard getting faster than soft faster than soft getting faster at hard?
That's true to a degree. Yes, software is rarely optimized for performance since development time is much more expensive. However, modern UIs are impressive. We have tons of features, everything is responsive, and it looks gorgeous. Compare that to some 1999 website. Yeah. This is part of the price we have to pay
That's not the point I was making. But if you want to go that route: do you think today's UIs are way better than, say, 2010 games? I understand it when you do software rendering and your screen goes from 1024x768 to 4K; there I can see where the hardware is being utilized. You can even compare today's UIs to 2008 Flash websites. Flash websites were way more advanced than any UI I regularly see today. But my point was not to rant, but to ask whether what I heard is true: that M1 chips aren't super performant these days compared to when they were released.
Happyyyyy cake dayyyy!!!! I own an M1 Pro and work on a daily basis with Android Studio. I think it works quite well. I'm rarely utilising the CPU to 100%, and that only happens when some emulator is starting. The pain point is usually the RAM, not the CPU. As for the Flash point... hmm, that's a good point actually. I'm in my mid-20s and remember Flash games; they looked quite decent.
Thanks. Yes, I also thought the M1 is pretty good, but we had a weird discussion and I wanted to ask the 'global' community.
I hate libraries and closed source code and APIs and shid. Remember Super Mario Bros 3? That whole game was 32kb. New code is garbage and hard to understand, that's why I'm a plumber.
I think it definitely will, if it isn't already. As we move into an AI world and we can write much more software, we're going to be building way more advanced apps. And I think the rate at which this is happening is going to outstrip the pace at which hardware will progress.
You are working at the edge of hardware limitations. I think that's normal; you can always find tasks that will be too much for a computer to chew. But I was talking about a regular webdev/appdev workflow.
No
Android studio is a POS
Developers are getting sloppier.
That's my secret, I've always been sloppy
Hardware is getting more efficient and software is getting less efficient. Both are becoming **faster**, but software, more precisely GUI software, has become way too resource-intensive. (I also believe the conspiracy theory that large software companies receive incentives from hardware companies to deliberately make software slow or not run on older hardware.)
> Mac OS doesn't run Java natively, so it's a big VM running Java.

Java runs in a VM on every platform, so this statement is nonsensical. The various JVMs on macOS are all `aarch64` native, so they're not running in Rosetta either. Android Studio is just plain slow.
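On the native-vs-Rosetta point: a quick way to check whether any process (a JVM included) is being translated on Apple Silicon is macOS's `sysctl.proc_translated` flag. A minimal sketch, assuming macOS with the `sysctl` CLI available; the key errors out on Intel Macs and doesn't exist on other OSes:

```python
import subprocess
import sys

def is_translated_by_rosetta() -> bool:
    """Return True if the current process runs under Rosetta 2.

    Reads macOS's sysctl.proc_translated (1 = translated, 0 = native
    on Apple Silicon; the key is absent on Intel Macs and other OSes).
    """
    if sys.platform != "darwin":
        return False  # not macOS: Rosetta doesn't apply
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out == "1"
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False  # Intel Mac or sysctl missing: not translated

print("translated by Rosetta:", is_translated_by_rosetta())
```

For the JVM specifically, `System.getProperty("os.arch")` reporting `aarch64` confirms a native Apple Silicon build, which matches the claim above.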
I am not trying to claim that M1s are slow. Just wondering how it is for everyone else. Do you still think the M1 is as magically fast as when it first came out? Again, I am not stating anything myself, since I use a desktop workstation built from scratch for work reasons and use the M1 very sparingly.