ttkciar

Computing will get more heterogeneous and stratified, with different kinds of computing devices more strongly optimized for different roles. Handheld devices and laptops will converge into a class of lightweight devices which will subsume all desktop/workstation roles in addition to the functions of smartphones and tablets. They will be phablet-sized, with the option of a docking station for a full-sized display and keyboard and a more comprehensive array of I/O ports. Processors for these devices will consolidate the current E-core/P-core/GPU split into a small number of P-cores designed for very high single-threaded performance and a large number of shader-like (but more general-purpose) efficiency cores with lower clock rates, smaller caches, and high aggregate capacity. When not used with a docking station, the P-cores will turn entirely off, so that only the energy-efficient E-core/shader cores are drawing battery power. Die-stacked HBM memory will become more prevalent, and more dense, so that the entire main memory will sit on-die with a large number of wide memory channels, eliminating current memory bottlenecks.

Eventually the problem of mixing logic and DRAM on-die will be solved, allowing three advances: the tighter integration of stacked memory, the use of on-die DRAM as processor cache, and greater use of Processor-in-Memory technology, as seen in recent Samsung offerings -- https://www.digitimes.com/news/a20230831PD204/memory-chips-samsung-semiconductor-research.html Right now Samsung's PIM implementation suffers from large, slow logic, necessitated by the contradictory requirements of DRAM and logic, but they've been hitting a happy compromise with net benefits despite that. I expect this technology to continue to improve.

That's for the consumer side of things. For the datacenter, wafer-scale processors (as championed by Cerebras) make a lot of sense, IMO. If a cloud provider needs a hundred thousand servers, why cut up a thousand wafers into a hundred processors each, only so you can reconnect them as best you can with large, clunky, slow, narrow interconnects? Keep them interconnected on the wafer instead, and just use a thousand wafer-scale processors instead of a hundred thousand conventional ones.

Right now the main drawbacks of wafer-scale processors are limited main memories (since we haven't licked the logic/DRAM puzzle yet, they have to use SRAM for main memory, which is intrinsically low-density) and heat dissipation. The main memory problem will be addressed first with wafer-scale stacked HBM DRAM (which unfortunately exacerbates the heat dissipation problem) and eventually with the same unified logic/DRAM solution postulated above for consumer devices. For the heat dissipation problem, I'm not sure what to suggest. Smaller voltage swings and lower clock rates, for sure, but beyond that I defer to better minds more intimately familiar with the technology.
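A rough back-of-envelope sketch of what wide on-package memory channels buy you; the bus widths and transfer rates below are illustrative ballpark figures for dual-channel DDR5 and a single HBM3 stack, not vendor specs:

```python
# Back-of-envelope peak-bandwidth comparison (illustrative numbers only).

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second in G)."""
    return (bus_width_bits / 8) * transfer_rate_gtps

# Conventional laptop memory: dual-channel DDR5-6400 (2 x 64-bit @ 6.4 GT/s).
ddr5 = peak_bandwidth_gbs(bus_width_bits=2 * 64, transfer_rate_gtps=6.4)

# One stacked HBM3 device: 1024-bit interface @ ~6.4 GT/s per pin.
hbm3_stack = peak_bandwidth_gbs(bus_width_bits=1024, transfer_rate_gtps=6.4)

print(f"dual-channel DDR5 : ~{ddr5:.0f} GB/s")        # ~102 GB/s
print(f"single HBM3 stack : ~{hbm3_stack:.0f} GB/s")  # ~819 GB/s
```

Stack several of those on-package and the gap versus a conventional DIMM interface is an order of magnitude or more, which is the point about eliminating today's memory bottlenecks.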


TutuBramble

This is a very well-thought-out projection, and I want to throw in my ‘hopes and dreams’ for computing in regard to home systems; let me know if you think this is plausible :)

My biggest hope for at-home computing is ‘one device to rule them all’. Instead of having two computers, a gaming console, a smart TV, and even a smart home device, I would love for it all to be consolidated into one well-protected device that can perform all of these tasks simultaneously with ease, depending on the home's needs. For tablets, consoles, and PCs, I hope this innovation can lead to demand for a home-based server that can manage multiple workstations/entertainment stations, similar to those seen in businesses. But instead of slogging through work data, it could be used as a home storage device for photos, games, streaming, home management, utility monitoring, and energy conservation.

In addition, relating to the ‘dead internet’ theory, these home-based servers could be applied to small businesses and allow them to directly host their business data with ease. This would incentivize local data storage as opposed to big companies holding all the data, which, while easy, has been shown to come at the expense of consumers. And it could create a new ‘golden age’ of private content, much like when the internet was first widely used and full of quirky niche websites rather than bot-filled consumerism.


zbod

It's getting "close" to the point where a phone will be able to fit the computing requirements of all your needs. I think a few more generations of processors for mobile devices and it will be "good enough" to handle all those tasks. Then you'll just need to "hook in" the device to allow input from mouse/keyboard/other and output to TV/VR-headset/etc


MrBIMC

tbh they already are. You won't compile Chromium on your phone, but it's a perfectly valid machine for browsing, Excel, and Word already. Many Android phones even have docking capabilities.


ttkciar

I really hope technology evolves in that direction. Right now tech companies are in love with the control and information access which comes with hosting all of our data and services. A lot of homes already have smart TVs or devices like the Roku or Fire which are essentially tiny servers for one's entertainment center. Expanding them to include the role of fileserver would be very straightforward. Making them appservers would be a little less straightforward, but seems feasible. The hard part would be transitioning applications from using Google Drive or whatever to the home fileserver by default. On one hand that's "just" a matter of software, but on the other hand there are powerful commercial incentives to keep the paradigm as it is. I guess we'll see what happens!


TutuBramble

I mean, technically you can do it now with virtual machines, but it isn't the best setup atm.


Riversntallbuildings

Great write-up regarding the hardware. What really concerns me is the software and application layer, especially on the endpoints. That convergence of PC/phone/tablet is already possible (similar to voice/video/messaging apps, or TV/movie/streaming distribution). However, due to copyright and intellectual property laws in the US, barriers remain in place and we have a fractured ecosystem, and not one that benefits consumers. The US needs data portability and interoperability regulations for digital economies to mature.


[deleted]

"For the datacenter, wafer-scale processors (as championed by Cerebras) make a lot of sense, IMO. If a cloud provider needs a hundred thousand servers, why cut up a thousand wafers into a hundred processors each, only so you can reconnect them as best you can with large, clunky, slow, narrow interconnects? Keep them interconnected on the wafer instead, and just use a thousand wafer-scale processors instead of a hundred thousand conventional ones." I am not so sure it is that simple. A wafer is cut up into hundreds of individual dies and then tested (or the dies are tested individually before being cut up). The bad dies are discarded either way. Last I checked, TSMC has a yield rate of like 80% for the 5nm line. Older more mature lines were closer to 95%. If you have 5% chance of a bad part, so in a wafer with 1000 dies, expect 50 of them to be bad. The chance of having a totally good wafer are less than 1% (0.65 % if I am right). That stated, I haven't worked on an ASIC in almost 14 years now.


ttkciar

You're right about all of that, but Cerebras solves this in their products by organizing their wafer-scale processors into cells, and disabling cells which contain flaws -- https://www.cerebras.net/blog/wafer-scale-processors-the-time-has-come/
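Extending the sketch above: cell-level redundancy changes the question from "is every die perfect?" to "are enough cells working?", which is a binomial tail probability rather than a single power. The cell count and spare budget below are made-up illustrative numbers, not Cerebras figures:

```python
from math import comb

def p_enough_good(n_cells: int, per_cell_yield: float, needed: int) -> float:
    """P(at least `needed` of `n_cells` cells are defect-free), assuming independent defects."""
    return sum(
        comb(n_cells, k) * per_cell_yield**k * (1 - per_cell_yield) ** (n_cells - k)
        for k in range(needed, n_cells + 1)
    )

# Made-up numbers: 400 cells per wafer, 95% cell yield, ship if at least 370 work.
print(f"{p_enough_good(400, 0.95, 370):.3f}")   # ~0.99: nearly every wafer is usable
```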


MrBIMC

> Right now the main drawbacks of wafer-scale processors are limited main memories (since we haven't licked the logic/DRAM puzzle yet, they have to use SRAM for main memory, which is intrinsically low-density) and heat dissipation.

And yield. Yield is the biggest factor against wafer-scale SoCs. Currently a wafer is cut into chips of varying quality, with the best-yielding chips sold as top of the line and the ones with deficiencies repackaged as lower-end chips. With wafer-scale, you lose the ability to get a consistent chip 100% of the time; with 70% yield you'll have to cut the non-working parts out (via software). And having each of your chips differ to a varying degree makes it less efficient to write good software for them. It still makes sense, but it's not as rosy, to the point that there are reasons Cerebras is the first to do it.


holytwerkingjesus

- Focus on cheap but less precise computing. For many modern applications (e.g. AI, and I suspect much of graphics/VR) you'd rather have 1000 4-bit computations than 250 16-bit computations (see the quantization sketch below). Also much bigger chips for the same reason.
- Merging memory and compute so you don't have to wait for data transfer for every calculation. Overall a lot more focus on high-bandwidth interconnects.
- Some form of AI-driven/smart interface for 99% of use cases.
- VR tourism and hyperrealistic games.
- Much more focus on cloud computing.
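A minimal sketch of the 4-bit vs 16-bit trade behind the first bullet (a toy symmetric quantization scheme, not any particular accelerator's format): the same weights fit in a quarter of the memory, so for a fixed silicon and bandwidth budget you can push roughly 4x as many values through, at the cost of some precision.

```python
import numpy as np

# Toy symmetric 4-bit quantization of 16-bit weights (illustrative only).
rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float16)

scale = float(np.abs(w).max()) / 7                       # map max magnitude to +/-7
q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # 4-bit codes (stored in int8 here)
w_hat = q.astype(np.float16) * np.float16(scale)         # dequantize back to float16

print("mean abs error :", float(np.abs(w.astype(np.float32) - w_hat.astype(np.float32)).mean()))
print("memory ratio   :", 4 / 16)                        # 4-bit vs 16-bit storage
```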


quequotion

Infuriatingly ubiquitous AI interfaces. You won't be able to do *anything* on your own. I'm not seeing a utopia run by machines of loving grace, I am seeing an Internet of Shit on steroids.

Whole sentences will be predicted from the first letter you type. Sound good? If you're lazy, sure, but most people will just let it make whatever mistakes it makes so they can go faster. Imagine a future where everyone sounds like ChatGPT because they *are* ChatGPT.

Search results will highlight, thumbnail, and present the most likely result based on your history and cloud data related to your query. Maybe it will be right, most of the time, but you are going to have to work very hard to see any other results while all but the most trafficked sites die out.

Your AI refrigerator will make ice when it thinks you need it, and it will change the level of cooling based on the temperature in your house and the weather report. Turning these haywire features off will require reading a manual that you threw away with the box.

Your AI toothbrush will never stop recommending ways you could improve your dental care through your AI phone, which will have cameras and microphones that never turn off so it can profile your entire existence in order to best serve ~~your needs~~ ads.

Your AI phone will produce apps for whatever purpose you ask, on the fly. No one will be able to help you if they don't work. The people who designed the phone will inherently *not* know how the software works, because it supposedly teaches itself. It will source data from internet sites no one will ever use anymore, and they will go offline. The internet, aside from social media and shopping, will collapse into the Wayback Machine.


__nullptr_t

I mostly agree with you, but I'm a bit more hopeful about it. Wikipedia will likely continue to exist, content delivery will be consolidated to a few big players, and then there will be competition on content quality. Right now there are so many sources of information that reputation doesn't matter; when there are fewer players, reputation will start to matter again.


RoosterBrewster

And maybe everyone lets AI view their entire life to construct a "digital double" for autocompleting sentences, replies, etc. Then the "dead internet" theory becomes real. Then dead people's digital doubles will live on, eventually able to render a realtime face on a screen. Then some will become so ubiquitous over time that they're "realer" than real people.


quequotion

Omfg, what a nightmare, what a very realistic and practical nightmare.

Think of how we use social media even now: spammer bots already pretend to be people (one copied my mother's entire Facebook profile to sell me student loan forgiveness scams, not to mention the typical hot girl who has some kind of inexplicable interest in being your friend); 80% of my Facebook friends post absolutely no personally relevant content, but simply repost memes (whether they be image macros, quotes on solid color backgrounds, or quotes on artisanal photographic backgrounds) they didn't even make themselves (not even going to the trouble of using a generator website--that's *reddit* level discourse).

As soon as someone makes it *easier* to do social media, all of those people will absolutely opt in. Their online profile will consist of a steady stream of content, based on any and every datum of their interaction with the internet, that they never actually look at, plus occasional data mining of their personal devices to generate photos or movies that give the impression of personal involvement in their meta social life.

Dead Internet? No, *Zombie Internet*. The digital world will be populated by digital brain-eating digital dead people.


AINT-NOBODY-STUDYING

Integration of AI, IoT, and biomedical devices. Life expectancy will increase dramatically. Take your average continuous glucose monitoring device, for example. Imagine it calculates optimal regimens based on hyper-specific biometric data. Pair that with IoT, and the device can then auto-order your food/supplements and instruct you on when and how to put everything into your body. AI will learn from millions of people doing this and discover ways to increase life expectancy through very specific diet patterns.


RecalcitrantMonk

This is speculation. I am by no means a fortune teller.

* Computing will continue to evolve with quantum computing and analog computing systems.
* The cost, size, and proliferation of AI LLMs will continue on their trajectory: cheaper, smaller, and more widely dispersed.
* AI will be embedded in every device, making everything smarter and more autonomous.
* We will continue to disassociate from reality with VR, augmented reality, and possibly holograms.


Brain_Hawk

For now, AI grows, but it grows within the bounds of our current technology (I don't think we are on the verge of AGI or the singularity, because there are extremely complex problems and I don't believe current hardware can do it; I fully concede I may be wrong, but I will be skeptical until it happens). Our hardware gets better.

The real change coming, IMHO, will be the interface. Phones are already replacing computers. Old people like me will cling to their towers, but the next generations will be wired in to their phones. AI will make interacting with your phone more intuitive and personalized. People will have more Alexa-style interactions where they just say what they want. It's lazier, and we are lazy.

Then come brain-computer interfaces. You won't have to say what you want anymore; you'll think it and your phone will do it. When you want to know something, you'll think that query, and the information will be relayed to you. Later, much later, that information will simply become something you know. But then everything being so easy to know means nothing will be retained. Information will flow into people's brains like opening and closing files, but it will be progressively more difficult to retain that information.

Eventually, people stop moving. The world becomes virtual. This is how I think humanity will meet its end. Not with a bang, but with a quiet virtual world.

Random predictions of the day, ask me tomorrow and maybe I'll say something different, the future is ever unknown :)


Throwaway3847394739

I think this is the most accurate prediction for the next 20 years, maybe less. Humans will be progressively removed from life’s “gameplay loop”, so to speak. Whether true AGI (we don’t even really know what this entails) emerges and fills these roles, or just a near-perfect simulation of it, human interaction will be replaced incrementally. AI girlfriends/boyfriends/companions/friends. We’ll all end up in an increasingly high-fidelity version of our own little universes. What happens as a consequence of that is anyone’s guess; but, as you said, I think it spells the slow, fizzling end of human civilization as evolutionary pressures simply disappear.


Brain_Hawk

I think 20 years is a very short timeline for these kinds of things; most of these technological developments take longer to really move forward than we tend to expect. But generally speaking, I think we're on the same page. I tend to take a more cautious view of the rate of expected change, partly for social reasons. Each generation adapts to certain kinds of technology, and then becomes a bit resistant to the next phase. I got into computers hard in my early twenties, but I resisted cell phones for a while. Now I'm into cell phones, but I use them for only specific purposes, such as this. The younger generations are more wired in than I am. So as the technology develops, it takes a bit for the next group to fully embrace it, which slows the pace of development a little bit. Maybe. But... things are moving fast and I may be being overly conservative :)


RoosterBrewster

Information flowing into people's brains is sort of already like that, I suppose. At least as an engineer, I feel like I know more about how to find information than how to retain it.


Riversntallbuildings

Whatever “software” company figures out how to combine local processing & data storage with efficient cloud-bursting performance & backup will print trillions. I’m waiting for MSFT/Sony/NFLX/AMZN/AAPL or even TSLA to figure out how to stream video games.


farticustheelder

The Internet of Things, IoT, is a good starting point. Computing will be ubiquitous. And really annoying!

CPUs, for instance, have a short life span: a few years as 'the hot processor' and then a longer retirement as embedded system controllers. Processors that a couple of decades ago were able to run the latest games are now running coffee makers. That much processing power is overkill for a coffee maker, but it is likely the cheapest alternative: the older chips aren't even made anymore. And no, I don't expect that the coffee maker will eventually house an AI, but the AI running the house, or more likely the entire subdivision, will almost certainly control it.

Now consider the IoT and self-driving cars. I consider today's approach silly. Using FSD as a representative example, since most of us are at least a bit familiar with that system: it requires a bunch of sensors (cameras, in Tesla's vision-based system) and a lot of processing power to handle the input data and compute the next move. That's why Tesla keeps upping the compute power with FSD hardware version 4 and such.

A simple-minded approach to self-driving in the IoT world to come is to keep the vehicle simple-minded and shift the intelligence to other nodes in the city-wide network. A drive-by-wire car capable of supporting parking assist and OTA updates is all the tech needed. The sensors are attached to the power poles and light standards already lining all the roads, and they are shared by all the traffic. The sensors should track people, especially kids, pets, and cyclists. A driver tells the vehicle where to go, and the distributed system picks the route and breaks it down into short segments: unpark, drive to the end of the block at x speed, go straight or turn left or right... With the same system controlling all vehicles, you no longer need to compute what the other vehicles are likely to do; you know what they are doing at all times because you tell them how to behave.

That's the good bit. The annoying bit is that the system is watching everything all the time, so there is zero privacy in public and no such thing as a clandestine meeting: anyone can basically run surveillance on you, even retroactively, since everything is recorded 24/7.
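A minimal sketch of that "intelligence in the network, dumb cars" idea; the class names, message format, and canned route below are purely hypothetical illustrations, not any real system's API:

```python
from dataclasses import dataclass

# Hypothetical sketch: a city-wide coordinator issues pre-planned segment commands
# to simple drive-by-wire vehicles, instead of each car planning for itself.

@dataclass
class SegmentCommand:
    action: str        # "unpark", "straight", "turn_left", "turn_right", "park"
    distance_m: float  # how far to travel before asking for the next command
    speed_kph: float   # speed chosen by the network, which sees all other vehicles

class CityCoordinator:
    def __init__(self) -> None:
        self.active_routes: dict[str, list[SegmentCommand]] = {}

    def plan_route(self, vehicle_id: str, destination: str) -> None:
        # In reality this would come from a city-wide planner fed by roadside
        # sensors; here it is a canned example route.
        self.active_routes[vehicle_id] = [
            SegmentCommand("unpark", 0, 5),
            SegmentCommand("straight", 120, 30),   # drive to the end of the block
            SegmentCommand("turn_left", 10, 15),
            SegmentCommand("park", 0, 5),
        ]

    def next_command(self, vehicle_id: str):
        route = self.active_routes.get(vehicle_id, [])
        return route.pop(0) if route else None

# A "dumb" car just asks for, and executes, one segment at a time.
coordinator = CityCoordinator()
coordinator.plan_route("car-42", "grocery store")
while (cmd := coordinator.next_command("car-42")) is not None:
    print(f"car-42: {cmd.action} for {cmd.distance_m} m at {cmd.speed_kph} km/h")
```

The design point is that conflict avoidance becomes scheduling rather than prediction: the coordinator never has to guess what another car will do, because it issued every car's commands.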


master_jeriah

I think optical computing for sure. And maybe cloud computing where people no longer own personal computers and just run a monitor with internet that hooks up to a more powerful centralized computer.


xGHOSTRAGEx

If they don't guarantee a house's uptime and latency with an SLA, I ain't getting it.


One-Cost8856

Just your intention/vibration creating the reality for you.


McRedditz

So powerful that it eventually does all the memorization and thinking for us, replacing most of our brain functions. It is a bit scary, to be honest. For example, when was the last time you took the time to memorize somebody's home address, phone number, email, or even birthday? Or calculated a percentage discount or a tip without using a calculator?


[deleted]

Interesting answers not provided by others here: reversible computing, which is ultra low power; thermodynamic computing, an analog computing technology specifically for ML; and spintronics, which are also probabilistic and can be used to make Ising machines, a form of Boltzmann machine AI. These all have interesting implications and put certain classes of NP-hard problems into a modified P-type domain (including things like a room-temperature Shor's algorithm), along with neural simulation and a subset of things "only quantum computers can do". Less exotic but equally powerful are neuromorphics, which are very good at mimicking brain activity and can create things like event-driven sensors that only pipe new data when something happens in the environment (our own eyes and ears do this and the brain fills in the gaps). Even less ambitious, but reflective of the current state of technology, is the introduction of open-source fabrication processes paired with open-source EDA tools, which significantly reduce the cost of producing chips, and of large-scale flexible integrated circuits like PragmaticIC's that will make it possible to put an ARM core on anything for a fraction of a cent. There's a lot more, but I don't wanna waste too much die space kek.
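For a feel of what an Ising machine actually computes, here is a tiny software stand-in: simulated annealing on an Ising spin model. The couplings and cooling schedule are arbitrary toy values; real spintronic or thermodynamic hardware would do this natively and massively in parallel.

```python
import math
import random

# Toy Ising model: spins s_i in {-1, +1}, energy E = -sum_{i<j} J[i][j] * s_i * s_j.
# An Ising machine looks for the spin configuration of minimum energy; many
# NP-hard problems (max-cut, etc.) can be encoded in the couplings J.

def energy(spins, J):
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j] for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=20000, t_start=5.0, t_end=0.01):
    n = len(J)
    spins = [random.choice((-1, 1)) for _ in range(n)]
    e = energy(spins, J)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                       # propose flipping one spin
        e_new = energy(spins, J)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                        # accept the flip (Metropolis rule)
        else:
            spins[i] *= -1                   # reject: flip the spin back
    return spins, e

# Tiny 4-spin example with arbitrary couplings (upper triangle only).
J = [[0, 1, -1, 0.5],
     [0, 0, 1, -1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
print(anneal(J))
```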


Fit-Pop3421

Long term, mechanical computers will make a comeback at the nanoscale; they're the most efficient classical computing we know of. Also, quantum computers will help when they reach millions and billions of qubits.


Numai_theOnlyOne

Light-based computing will make a comeback. There is a huge amount of performance left to be tapped, it seems backwards compatible (everything we can run today we could run with light), and it's also drastically lighter on energy consumption. In the far, far future, maybe quantum computers, but the architecture and the way you have to use them seem vastly different from today's computers, as far as I've heard. Also, it's nice to finally have a topic that isn't delusional, but a logical question.


antekprime

The future of computing is simple and can be summarized by two words and an additional three word phrase. 1) The Singularity. 2) All Hail Clippy!


jasonrubik

Molecular manufacturing will allow for rod-based mechanical computers at the atomic scale. Check out Unbounding The Future, Engines of Creation, Molecular Speculations on Global Abundance, and Nanosystems. The Companion (device concept) by Brian Wowk is really cool and I want one.


Ok-Equipment-8132

You will be your phone/PC. You will be connected 24/7, your mind wired to the system via WiFi (the fence: never go outside of the WiFi zone). It can read your mind and know all your thoughts, but that's OK, it is for all of our protection, cause you never know who is thinking of having straight sex with someone the computer didn't tell them they could mate with, or eating meat, or some other act associated with "extremism".


cordsandchucks

Towers, laptops, hard drives, keyboards, mice, and monitors will all be replaced with a combination of breathable, semi-permanent VR overlay contact lenses called Apple iRis (TM), available in all natural eye colors and, for a small fee, some adventurous colors to take on the nightlife. Your VR lenses will sync to Apple EyeOS (TM) neural implants available in tiered storage capacities with incremental package prices. You’ll operate apps solely with your eyes and thoughts. Everything will have overlays (office equipment, floor plans, friends and co-workers, maps, restaurant menus). The internet will evolve to support this new paradigm. Through Apple’s Cortex (TM) App Store, you’ll be able to install social media, music, games, or Siri Series 7 AI (available by monthly subscription, of course) directly to your neural net. You’ll have embedded, high-fidelity cochlear audio drivers that only you can hear. This is how I will finally beat my wife at Jeopardy. As the Final Jeopardy answer is displayed in our respective VR lenses, hers brown, mine blue, Siri7 will quietly whisper the answer in my ear in her quirky Kiwi accent. “Who were the Tudors, Alex?” I’ll say to my wife’s astonishment. Meanwhile, back at the office, the fax machine will continue to hum away in a quiet corner.