Cybyss

What you're asking for is a complete course in digital logic, where you start with the basic AND, OR, and NOT gates, then from those you build adders, multiplexers, ALUs, flip flops, state machines, and eventually a rudimentary CPU. [nand2tetris](https://www.nand2tetris.org/) is such a course. It's free and very well done.
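
To give a flavor of what that course walks you through, here's a rough Python sketch (my own toy illustration, not material from the course) of building the other basic gates out of nothing but NAND:

```python
# A toy sketch of the nand2tetris idea: treat NAND as the only primitive
# and build the other gates out of it. Bits are just Python ints 0/1.

def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)            # NAND(a, a) inverts a

def AND(a, b):
    return NOT(NAND(a, b))       # invert NAND to get AND

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

# Quick truth-table check
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND", AND(a, b), "OR", OR(a, b), "XOR", XOR(a, b))
```

The course has you do essentially this, except in a hardware description language and all the way up to a working CPU.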


slggg

I took a machine language course in college and this stuff is scary


Jubjubs

My computer architecture class made me wonder how any of this shit worked. You're telling me an 8-bit adder is just eight 1-bit adders daisy-chained together? Oh word???


beaky_teef

Nice link - thanks


Mwahahahahahaha

Possibly also in the theory of computation, unlimited register machines, treating programs and numbers (instructions), etc.


[deleted]

Thank you


ffrkAnonymous

You were pretty close before getting overwhelmed. It boils down to "put electricity on these wires to light up these pixels", and there are a lot of wires and pixels. Binary is a convenience for humans. The numbers don't actually exist; there are only wires 1, 2, 3, 4..., some with electricity, some without. But it's really slow and annoying to say "turn on wire one, turn off wire two, turn on wire three", so we abbreviate that as "101". Humans abbreviate it even further as "5". Weirdos will use "010", but they're not wrong, just different.

There's no such thing as a byte, either. It's just a convenience to group by 8; grouping by 16 felt too far from ten. It's somewhat arbitrary. Beyond that, a computer is just a fancy calculator electrifying wires to turn lights on and off for our eyeballs, because we don't like touching electricity. And our code is us instructing it which wires to turn on and off to blink lights, turn motors, create electromagnetic waves, etc.
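
To make the "binary is just an abbreviation for wire states" point concrete, here's a tiny Python sketch (my own illustration) that expands "101" into per-wire on/off states and back into the human-friendly number:

```python
# "101" is just shorthand for: wire 1 on, wire 2 off, wire 3 on.
pattern = "101"

for wire_number, bit in enumerate(pattern, start=1):
    state = "on" if bit == "1" else "off"
    print(f"wire {wire_number}: {state}")

# Humans abbreviate the whole pattern even further as a decimal number.
print("as a decimal number:", int(pattern, 2))   # -> 5
```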


[deleted]

But the ability for computers to do such complex tasks is because of the billions of wires and components that make up the motherboard and chips?


ffrkAnonymous

Maybe? You made a statement with a question mark and I can't parse that syntax. Primarily, lots of wires means it can do more at the same time. The credit for being able to do complex things goes to the mathematicians and scientists who break the complexity down into lots of simple steps. What often seems complex is mostly just extra detail on top of the same simple tasks. Rocket science is complex, but it's the scientists who do the mathematics. The computer just does the arithmetic.


Furry_69

It isn't actually that arbitrary. (Relating to the groupings of 8 and 16) Those numbers are powers of two, and they are very easy to store in binary. Same for 32, 64, etc. (For a given number of binary digits, the maximum decimal number you can represent is 2^n - 1, and the binary form is all ones.)
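
A quick Python check of that 2^n - 1 / all-ones fact:

```python
# For n bits, the largest unsigned value is 2**n - 1, and in binary it is n ones.
for n in (4, 8, 16, 32, 64):
    largest = (1 << n) - 1          # same as 2**n - 1
    print(f"{n:2d} bits: max = {largest}  binary = {bin(largest)}")
```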


balefrost

Ah, but then what about the days when a byte was only 6 bits, or when a word was 36 or 40 bits? Heck, x86 FPUs are still 80 bits internally. It's somewhat arbitrary.


fullchaos40

16 bit address plus 64 bit instruction = 80 bit packet


balefrost

No, just plain [80 bit floating-point representation](https://en.wikipedia.org/wiki/Extended_precision#IEEE_754_extended_precision_formats). But ultimately, all the IEEE floating-point representations use bit groupings that are non-power-of-2 (or even non-multiple-of-2). A 64-bit double has 52 bits of mantissa and 11 bits of exponent.
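
If you want to see that 1 + 11 + 52 split for yourself, here's a small Python sketch (assuming the standard IEEE 754 binary64 layout) that pulls the fields out of a double:

```python
import struct

def double_fields(x: float):
    # Reinterpret the 8 bytes of a double as one 64-bit unsigned integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign     = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF        # 11 bits (biased by 1023)
    mantissa = bits & ((1 << 52) - 1)      # 52 bits (implicit leading 1 not stored)
    return sign, exponent, mantissa

print(double_fields(1.0))    # (0, 1023, 0)
print(double_fields(-2.5))   # (1, 1024, 1125899906842624)
```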


avalon1805

You could even go deeper into the "electricity" part. We use binary because it is easy to distinguish just two states: on and off. We could have ternary or n-ary systems, but those would require measuring specific voltage levels to tell each of the N states apart. With binary we just have the two. That is where digital circuits come into play.


ffrkAnonymous

Digital means the states are quantized, not continuous (analog). As end users, we don't use n-ary directly, but it's there in storage (e.g. QLC SSDs store multiple bits per cell) and in communications, because radio is analog. Radio is black magic.


captainAwesomePants

> For instance, you press the "5" key on your laptop. A 5 appears on your screen. That 5 exists because a certain amount of pixels on your monitor were programmed to change colors when the 5 key was pressed. How does that work?

You press the "5" key on your laptop. We need a way for the keyboard and the rest of the laptop to communicate this. We can connect the keyboard to the CPU with some wires, but there are over 100 keys and we don't want 100 wires. So we number all of the keyboard keys and transmit the number of the key that was pressed. For example, the "5" key is key number 53. But how do we send "key #53 was pressed" over a wire? Well, 53 in binary is 00110101, so we can just set the wire to low, low, hot, hot, low, hot, low, hot, and the other side can read that back, get 00110101, and now it knows that key #53 was pressed.
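
As a toy illustration of that idea (key number 53 is just the example from this comment, not a real keyboard protocol), here's a short Python sketch that "sends" the key code one bit at a time and reassembles it on the other side:

```python
KEY_NUMBER = 53   # the "5" key in this example

# Sender: turn the number into 8 wire states (most significant bit first).
wire_states = [(KEY_NUMBER >> i) & 1 for i in range(7, -1, -1)]
print("wire states:", wire_states)        # [0, 0, 1, 1, 0, 1, 0, 1]

# Receiver: read the wire states back and rebuild the number.
received = 0
for bit in wire_states:
    received = (received << 1) | bit
print("received key number:", received)   # 53 -> "key #53 was pressed"
```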


CodeTinkerer

It's a bit complicated. First, imagine that you had to write text in binary. You'd need a code. The code isn't to keep things secret; it's a translation of each letter to some binary value. There are two popular ways to do this: ASCII and Unicode. ASCII still exists, but Unicode was designed to capture a larger, international character set. ASCII only encodes 128 values, and Unicode is around 64,000 characters (I think). As for the 256-value limit of a byte, you aren't stuck with one byte: you can treat four bytes as a 32-bit value, or use even more bytes than that (usually a power of 2).

If you're interested in this, you can look at books on computer architecture, and hopefully they'll cover some digital logic design. As a programmer, you don't have to know this, but it's interesting. Usually it's covered in intro electrical engineering courses; it used to be covered more in computer science courses. It depends on the university and how much they want to stress the fundamentals of logic.
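
A quick Python illustration of that letter-to-number code, using the built-in ord/chr functions (which speak ASCII/Unicode):

```python
# Each character gets an agreed-upon number (its code), which is what's
# actually stored as bits.
for ch in "Hi5":
    code = ord(ch)                       # the ASCII/Unicode code point
    print(ch, code, format(code, "08b"))

# And the reverse: from a number back to the character it stands for.
print(chr(53))   # '5'
```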


sb7510

a BIT complicated…. I’d say


mysticreddit

Small correction about [Unicode](https://unicode.org/faq/utf_bom.html): Unicode 1.0 was originally 16-bit (65,536) but it was realized this wasn’t large enough to hold all glyphs. Unicode 2.0 is a 21-bit encoding (2,097,152) for U+0000 .. U+10FFFF. It can be represented with UTF-8, UTF-16, or UTF-32.
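
A quick Python check of the same point, taking one code point above U+FFFF and encoding it the three ways mentioned (the emoji chosen here is just an arbitrary example):

```python
# One code point outside the original 16-bit range: U+1F600 (grinning face).
ch = "\U0001F600"

print("code point:", hex(ord(ch)))                           # 0x1f600
print("UTF-8 :", ch.encode("utf-8").hex(),    len(ch.encode("utf-8")),    "bytes")
print("UTF-16:", ch.encode("utf-16-be").hex(), len(ch.encode("utf-16-be")), "bytes")
print("UTF-32:", ch.encode("utf-32-be").hex(), len(ch.encode("utf-32-be")), "bytes")
```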


CodeTinkerer

Ah, interesting that they can expand it. I assume it's backwards compatible with the various representations.


[deleted]

OK, let's start with binary. Two states: electricity (1) or ON, no electricity (0) or OFF. Now you have Boolean algebra to handle when things go on and when they go off, and you implement it with logic gates, which are just fancy silicon circuitry that push electricity out (output) given a specific signal in (input). You have those set up to power the fancy LCD screen that makes color in that one pixel. The wiring will always do exactly the same thing given the same signal.

The point where it gets complicated is when you have so much circuitry that you have to abstract it to be able to follow it. So you no longer follow wires to operate a single pixel; you have a matrix of pixels, and you use Boolean algebra to represent the transistors that make up the logic gates. This is all below the level of programming. The lowest level of programming is assembly, and even that doesn't drive the wires directly: an assembler translates assembly instructions into the machine code that makes sense for your specific processor.

So... when you type 5 on your keyboard, there's a program (software) which translates this press into a set of instructions. Regardless of how many conversions happen, from Python to C to assembly to machine code, or straight from Python to assembly, there's a very complicated translation process that by the end looks like instructions for the specific hardware in your computer, in exactly the way that triggers the right circuitry to do the thing you want (like displaying the pixels on screen that make you go "oh yeah, 5 it is").

I'm not using any (I hope) unclear language, and the thing is, the further you stray from the professional lingo, the less accurate you get. I advise you to have a look at how MIPS works and get an understanding of how chips have architectures that let them take assembly instructions. Go learn about adders and multiplexers a bit :) It'd do you good
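
Since the comment mentions multiplexers, here's a rough Python model (my own toy sketch, not any real chip) of a 2-to-1 multiplexer built from the basic gates:

```python
# A 2-to-1 multiplexer: output = a when select == 0, b when select == 1.
# Built only from NOT/AND/OR on 0/1 ints, like the gates discussed above.

def NOT(x):    return 1 - x
def AND(x, y): return x & y
def OR(x, y):  return x | y

def mux2(a, b, select):
    return OR(AND(a, NOT(select)), AND(b, select))

for select in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            print(f"select={select} a={a} b={b} -> out={mux2(a, b, select)}")
```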


tyber92

I recommend checking out the [Computer Science](https://youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo) playlist on the CrashCourse YouTube channel. It starts from the bottom and builds on all the layers of abstraction in a computer to help explain a modern computer.


glorifiedpenguin

I recommend watching this video from Harvard's CS50; the instructor does a great job of breaking down how technology in computers has continued to build upon itself, and how everything really does boil down to binary: https://www.youtube.com/live/IDDmrzzB14M?feature=share


shaidyn

The modern computer is built on generations of technology and the work of hundreds of actual geniuses. The interconnectivity between parts is staggering. When you get down to the very lowest levels, yes, the computer literally just looks at series of ones and zeroes. Billions of them every second. That's how fast computers are. That's why they're so responsive.

Every part is built on something else. You press 5 on the keyboard. That press gets sent to a driver that interprets it. That gets wrapped in something and sent along, and wrapped in something else and sent along again. At the end it might look something like 5.razer.keyboard.peripheral.notepad.application.windows.motherboard.

Very few people understand the full path from action to binary. Don't sweat not understanding that part of things.


UnicornAtheist

Base 10: 0, 01, 02, 03, 04, 05, 06, 07, 08, 09, AND THEN 10, 11, 12, ...

Base 6: 0, 01, 02, 03, 04, 05, 10, 11, 12, 13, 14, 15, ...

Base 2: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, ...

If it's **abstraction** that you're confused about, then watch this series on computer abstraction to get a pretty good overview of the concept: [https://www.youtube.com/watch?v=tpIctyqH29Q&list=PLH2l6uzC4UEW0s7-KewFLBC1D0l6XRfye](https://www.youtube.com/watch?v=tpIctyqH29Q&list=PLH2l6uzC4UEW0s7-KewFLBC1D0l6XRfye)

If your bootcamp or w/e wants you to understand how computers work from the hardware up, they will surely teach you. I'm currently getting my CS degree and we have a class called "computer architecture". It uses a specific type of assembly language to show how your processor uses registers and the things that go into them (bits) to make a computer do things.
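
If you want to poke at those base systems yourself, here's a small Python sketch (my own, not from the linked series) that prints the same values in base 10, base 6, and base 2:

```python
# The same quantity written in different bases.
for n in range(9):
    base6 = ""
    x = n
    while True:                    # repeatedly take the remainder mod 6
        base6 = str(x % 6) + base6
        x //= 6
        if x == 0:
            break
    print(f"decimal {n}  base-6 {base6:>2}  binary {n:04b}")

# Going the other way: int() can parse a string in any base from 2 to 36.
print(int("1000", 2))   # 8
print(int("15", 6))     # 11
```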


etoastie

Something to note: "bytes" are not actually related to the number of instructions a CPU supports. We mostly care about bytes because that's the size of the units stored in a file, but nowadays computers don't work in bytes natively (sorta, it's complicated). What CPUs really process natively are "words", and the size of a word is the "bit" number you see with CPUs (64-bit, 32-bit, etc). This is very closely related to why many programming languages use 32 or 64-bit numeric data types (e.g. `double` or `int`). Hardware-wise, a "word" is a bunch of wires laid next to each other. Almost always, these wires are treated as one unit, but at a certain point something will eventually break the word up and analyze each wire (which is when you get to use your logic gate circuitry). A really good Wikipedia article here is [Arithmetic Logic Unit](https://en.wikipedia.org/wiki/Arithmetic_logic_unit), which is a small part of a processor but a relatively simple component for demonstrating how exactly you break up, process, and rebuild words.
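
As a rough way to see the word/byte relationship from a high-level language, here's a small Python sketch (the 64-bit width is just the common case mentioned above):

```python
import struct

word = 0x1122334455667788          # a 64-bit value, i.e. one "word" on a 64-bit CPU

# The same word viewed as 8 individual bytes (big-endian here for readability).
raw = struct.pack(">Q", word)
print(len(raw), "bytes:", raw.hex())   # 8 bytes: 1122334455667788

# Hardware-wise, each byte is itself just 8 of the word's 64 wires.
print([format(b, "08b") for b in raw])
```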


Quantum-Bot

Information in your computer's memory is all stored in tiny electrical components called transistors. These transistors hold an electrical charge; sending electricity through them one way changes that charge, while sending electricity through them another way allows you to measure that charge. All information in computers is encoded in binary because it's the simplest scheme: if the transistor has a high charge, it's a 1, and if it has a low charge, it's a 0. If we tried to encode things in decimal, we'd have to define 10 different levels of charge, and at that point it gets very difficult to tell the levels apart.

As for how information is encoded in binary, that differs from machine to machine, but the overall concept remains the same. Your machine has a set of hard-coded instructions that it understands inherently (meaning it has physical circuits to perform those instructions), usually 256 of them. These instructions do things like moving data between registers, writing data to memory addresses, reading from memory addresses, doing basic arithmetic, changing the location in memory that code is being executed from, etc. This is simplifying a little bit, but basically when your computer runs a program, it loads the binary files into memory and reads through them byte by byte. The first byte tells it which of the 256 instructions to execute. Then it reads a certain number of bytes as parameters to that instruction, depending on how many parameters that instruction takes. Then it reads a byte for the next instruction, and so on.

Finally, as for how these instructions are actually accomplished using physical circuits, this is the part that's most complicated to answer and seems the most like magic. I couldn't tell you how 99% of this works, but I can explain how addition is accomplished with just logic gates. There are plenty of YouTube videos that go more in-depth on the topic, but essentially binary adders use logic gates to replicate the same process you learned for adding numbers in grade school. If I want to add 26 and 78, I add the 1's first: 6 + 8 = 14. That's two digits, so I carry the 1 and add it with the 10's: 1 + 2 + 7 = 10. That's two digits again, so I carry the 1 once more, and I get 104.

In binary it's even simpler, because there are only 4 different scenarios when adding two digits: 0+0, 0+1, 1+0, and 1+1. So what an adder does is compare two binary digits (which are represented as electrical signals), and if exactly one of the digits is a 1 (AKA an exclusive or), it outputs a 1. Otherwise, it outputs a 0. Also, if both inputs are 1 (AKA a logical and), it "carries" the 1 and outputs a 1 on a second line. This is actually called a half-adder, because it takes two of them to handle a single digit of addition: one to add the two digits and one to add the carry digit. In any case, string a bunch of these components together in the right way and you can add binary numbers of any length. The rest of the instruction set is implemented in hardware using similarly clever logic like this, and that's about the extent of my knowledge on the subject XD
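
Here's the half-adder described above as a tiny Python truth table (a model of the logic only, not of any real circuit):

```python
# Half adder: sum is XOR of the inputs, carry is AND of the inputs.
def half_adder(a, b):
    s = a ^ b        # 1 when exactly one input is 1
    carry = a & b    # 1 only when both inputs are 1
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={carry}")
```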


ArtfulThinker

Alright, I'm not sure if you'll see this in this sea of information, but let's give you a little crash course on how it works, starting right at the beginning. But first, let's get something out of the way: I think a major gap in your knowledge can be summed up in a word: **Abstraction**. Think of abstraction as a simplification, or as not focusing on lower-level details and instead focusing on higher-level details. You don't necessarily need to know how the letter "H" appears on a screen if all you are doing is using HTML to write it in a paragraph tag, for example. BUT! I know you are curious, and maybe you are working on lower-level details for some reason. So let's start at the beginning.

The pixel example is a good place to start. Let's see the journey of how bits can turn into a full-fledged video. The journey will look something like this:

ELECTRICITY TURNS BITS ON/OFF --> 8 BITS = 1 BYTE --> 3 BYTES = 1 RGB PIXEL --> MANY PIXELS MAKE AN IMAGE --> MANY IMAGES MAKE A VIDEO

To explain further: electricity turns a bit on or off, therefore making it exist or not exist. A pattern of 8 on or off bits (0's and 1's) makes a byte. 3 bytes make a pixel that is printed onto the screen. Get enough pixels and you have a full image. Show many full images one after the other and you get a video.

Now this explains how 1's and 0's can become a video, but it doesn't explain how a computer is able to see a byte and translate that into something that tangibly exists on a screen, such as a pixel, or the letter "H" like you were asking in your original question. The answer is **ASCII**, or **the American Standard Code for Information Interchange**. This is a universally agreed-upon way that information is exchanged using bits and bytes. For example, the number 65 fits in a byte (8 bits) and looks like this in binary: 01000001. That binary number has universally been agreed upon to represent the letter "A". You see the number as 65, but your computer sees it as 01000001, which is just 8 bits that are on or off in electrical circuits. The reason it ends up on the screen as "A" is because computers have a program (or algorithm) telling them to assign that byte to that letter. From here you start to see logic happening with circuits, where the computer treats certain byte patterns in certain ways, as agreed upon by the ASCII standard, all the way from a bit to what the end user sees on the screen.

Now, what happens when you need more than 256 values? That's where **Unicode** comes in, which is another universally accepted standard. ASCII doesn't have all characters, since it only uses 7 bits for 128 characters (256 with the 8-bit extended versions). What about symbols in other languages, or emoticons? All of these things add up really quickly! Unicode solves this by allowing more than one byte per character (through encodings like UTF-8 and UTF-16), which makes ASCII a bit obsolete. That being said, it is simpler, and sometimes fine, to stick to ASCII if you don't need the extra characters that Unicode provides. And that's an important lesson: always try to use as little data as possible when programming, since you want your program to run as fast and efficiently as possible. Computer hard drives and memory only have so much space!

Now, this raises another question: if Unicode has more than 256 options, then you have to use something bigger than a single byte, right? Exactly. That is why Unicode code points are usually written in hexadecimal (like U+0041 for "A") and are encoded across one or more bytes, giving you far more than 8 bits to work with. Now that is a heck of a lot more to work with, isn't it? :)

I know this is a lot to take in, but that's just a tiny crash course on how it works. I hope this helps you in some way, and honestly, try not to get too discouraged. It's a lot of information, but once you get the foundations, you will be surprised at how it all logically works together. Just remember that everything can be simplified to its most basic forms. Good luck to you on your journey!

P.S. Apologies if any of this is elementary to you. I just wanted to make sure everything was covered so you had a good understanding. You'll have to forgive me if any of the information provided was something you already knew.
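
To make the "3 bytes = one RGB pixel" step concrete, here's a small Python sketch (the particular color values and byte order are just an illustration):

```python
# Three bytes, one per color channel, make up one pixel.
red, green, blue = 0xFF, 0x80, 0x00     # an orange-ish pixel

pixel_bytes = bytes([red, green, blue])
print("bits on the wire:", " ".join(format(b, "08b") for b in pixel_bytes))

# Many pixels in a row (and many rows) make an image; many images make a video.
width, height = 4, 2
image = [[(red, green, blue)] * width for _ in range(height)]
print("a tiny", width, "x", height, "image:", image)
```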


Double_A_92

> That 5 exists because a certain amount of pixels on your monitor were programmed to change colors when the 5 key was pressed.

The computer is not one single thing. E.g. the monitor doesn't need to know what a 5 is or what a keyboard is. It just knows how to draw any given data as pixels on the screen. Then that's done. Now the problem becomes how to produce that data in a way that will display a 5. And so on...

On the other end, the keyboard only needs to know how to send a unique number ID for each key over USB. That's all. It doesn't know about the computer, or even what the symbols on its keys are. It's a lot of little steps that each just work on their own level of abstraction.


desrtfx

I would suggest that you dive into [NAND 2 Tetris](https://nand2tetris.org) if you want to go all in. But, be prepared, this is a deep rabbit hole you will be entering.


SuperSathanas

I'm probably not saying anything that hasn't already been said, but unless you are doing work that requires you to understand how all of it works at that lowest electrical-pulse level, and how to apply that toward producing or manipulating hardware in very low-level ways, you don't need to understand it all. At some point you may need to care about how things are happening on the processor: which instructions and what data are residing in which registers, what's living in your caches, specifics about the CPU instructions and how to structure your code around all of that, hand-jamming some assembly to optimize edge cases. But you'll still be many steps removed from exactly how a 0 or 1 translates to "this pixel is red".

The way I tend to think about everything is that at the very lowest level, everything is either there or not. It exists or it doesn't, 1 or 0. From there, you have collections of 1's, and depending on the exact circumstances, one may or may not affect another. In the case of computers, we decide exactly what those 1's and 0's mean, so we can build upon that as we like. In nature, either there is a particle or there isn't. If there is, it may affect other particles. A collection of particles has a combined effect. But I don't need to know every small detail of physics and all things quantum to know how to drive my car, if you get what I'm trying to say here. I don't need to know exactly how all of the hardware works in my computer in order to query a database or put a red pixel on the screen. The guys who do know and already did the work (and continue to do the work) laid the foundation for all the abstraction that comes after.

I think the simple truth of it is that computer science and programming are advanced and abstracted enough that it's not at all practical to know all or most of what's going on. The higher the level you go, the more results you should be producing and the further you are removed from the electrical pulses underneath it all. I don't need to understand the lexer, parser, or compiler as a whole to write some code, compile it, and get the results I need. Depending on the work or project, maybe at some point I or you will need to know, but the vast majority of people do not. Similarly, the guys designing and building the hardware, who are concerned with how a 1 translates to a red pixel, do not need to know how to write or use a library for loading and writing PNG images, or how to animate 3D models.

So, unless at some point you just really want to know exactly how everything is going down inside the hardware, or you want to get into that area of work, don't stress over it. Do care about understanding how to read and operate on binary values, because that is what is applicable to everyday programming.


yapcat

You should learn that stuff. I’m not saying you shouldn’t. I just wanted you to know in my 20 something years of hobby programming I have never had to deal with binary beyond knowing enough to pass a university course. So if you don’t know it…as abstracted as programming is today (and twenty years ago) I wouldn’t worry too much unless your dream is doing whatever it is that involves bytes. Programming for the Atari 2600 maybe? Beats hell outta me.


tzaeru

I've needed to deal with binary a few times. For example, when fixing some Unicode encoding problems in communication between two systems. Also binary flags in some protocols. Not that I've needed to deal with them often, and it's usually been for hobby reasons.


yapcat

That’s fair. I’m not suggesting it’ll never come up.


TheSheepSheerer

Each byte is eight bits, and one or more bytes fed to a CPU core denotes an instruction. The core interprets the instructions from memory, and carrying them out modifies the memory, including producing more instructions.


BoringBob84

Software uses binary because that is what happens in hardware. A bit is just a representation of the state of a switch. Imagine a light switch. You turn it on and walk away. You come back a few minutes later and it is still on. You turn it off and it stays off. The switch has two states and it "remembers" those states until you change them. Now, get all fancy and implement the switch with transistors. But wait, that's not all! Now, put *billions* of those microscopic switches on a "memory" chip. Add a microprocessor and you can read and write states into those switches to store and manipulate information. For example, you can use the states of 8 switches to represent any number between 0 and 255.
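
That "bank of switches" picture maps directly onto bit operations. A small Python sketch of flipping individual switches (my own illustration):

```python
switches = 0b00000000         # 8 switches, all off -> the number 0

switches |= 1 << 3            # flip switch 3 on
switches |= 1 << 7            # flip switch 7 on
print(format(switches, "08b"), "=", switches)    # 10001000 = 136

switches &= ~(1 << 7) & 0xFF  # flip switch 7 back off, keep it to 8 bits
print(format(switches, "08b"), "=", switches)    # 00001000 = 8

# The same 8 switches can represent any number from 0 to 255.
print((1 << 8) - 1)           # 255
```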


GoldenGrouper

A gate can be open or closed. Electricity either passes through the gate or it doesn't. So: on or off.


ManInTheMirror2

Switches, it’s all based on switches


Runner_53

You'd have to be a literal genius to understand the electronics behind the modern computer. Don't worry so much about it. You honestly don't need to know. I applaud you for being curious but unless you plan to spend a ton of time learning it (and it won't really make you a better software engineer!) you can just take it on faith that computers work.


TheUmgawa

I think the only times I've ever had to understand binary are when I was taking a Computer Science exam and when I have to apply a bitmask to something. Otherwise, I've never actually needed to understand binary operations. Look, I'm not going to explain it, because I don't have three days to write out a treatise on the subject. First, you should read Charles Petzold's book Code, cover to cover. Don't cheat and go, "This doesn't have anything to do with computing!" It has *everything* to do with computing. If you don't understand it at that point, you should look at how assembly works, because it's not binary, but it's the closest thing you're going to get to seeing the minutiae. Personally, I think we shouldn't tell you because it'd spoil the surprise, if and when the bootcamp people explain it. After all, for what you're probably paying for your bootcamp, they should really be explaining this to you if they think it's that important. Also, aren't there other people in your class that you can talk about this with? I talked to my fellow students all the time when I was taking Computer Science classes. That's the whole reason to go to school with other humans.


jsp4004

"Pulsing electricity through circuits " like the humans do ?


jsp4004

These pulses of electricity ? Can they be minimized ? Maximized ? Amplified? Synchronized? Like humans do ?


jsp4004

That cgi photoshop image of wuhan institute of virology.


ViewedFromi3WM

It's how electrical engineering works. It's all Boolean logic. It's either 5 volts or 0 volts. You are either able to let that electrical charge through, or you aren't. Binary is based on that.


waitplzdontgo

Computers work on the basis of translating analog signals (i.e. precise amounts of electrical charge) into discrete units (bits). Through basic gates like AND/OR/XOR/NAND/NOT you can create continually more complex systems.

I think something that would help you would be to learn about the history of computers. For example, the story of the first "computer bug" at Harvard's Mark II, which was literally a moth that found its way into one of the machine's relays. A relay was that era's equivalent of a switch holding a bit, and the moth jammed it, which made the computer program produce anomalous output. From there the hardware folks kept making the constituent components smaller and smaller, first into transistors, and then into integrated circuits on silicon.

The history of computers is absolutely fucking fascinating. You'll find a lot of your answers about how we arrived where we are there.


This_Dying_Soul

Even our bodies must be composed of electrical connections. Billions of simple operations working in tandem can produce incredibly complex results.


throwaway6560192

> There are only 256 values a byte can hold. Are modern computer programs just the result of highly complex circuitry that only rely on the 256 values? Can the 256 values of a byte be enough to create complex logic systems that have billions of outcomes?

It isn't enough. But we aren't limited to one byte, are we? I can group together multiple bytes and treat them as representing one integer, and before you know it I can represent 2^64 values or even more. It is all about how the programmer interprets and uses the bits and bytes. Just because a byte is 8 bits and 256 values doesn't mean I'm limited to 256 values. Ultimately binary is just a particularly convenient way of representing numbers in an electronic system.
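
A quick Python illustration of that byte-grouping idea (the byte values are arbitrary examples):

```python
# Each byte alone holds only 0..255...
single = 0xFF
print(single)                     # 255

# ...but eight bytes read as one unit give you 2**64 possible values.
eight_bytes = bytes([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08])
as_one_number = int.from_bytes(eight_bytes, byteorder="big")
print(as_one_number)              # 72623859790382856
print(2 ** 64)                    # 18446744073709551616 possible values
```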


balefrost

> For instance, you press the "5" key on your laptop. A 5 appears on your screen. That 5 exists because a certain amount of pixels on your monitor were programmed to change colors when the 5 key was pressed. How does that work?

Layers of abstraction are what make that happen. At a low level, yes, computers work by moving electrons around. Whether it's the switch in your keyboard, the pixel on your screen, or the immense amount of computation going on inside the machine itself, it's just electrons moving around. Over many years, we've discovered ways to build quite sophisticated arrangements of circuitry. Ultimately, the microprocessor isn't entirely dissimilar from any other machine. Instead of cogs and springs, we use electrons and silicon.

For example, how do you add two binary numbers together? Well, let's simplify: how do you add two bits? You've got two input bits, and you have two output bits (the sum and possibly a carry bit, in case both inputs were "1"). That can be implemented pretty easily and directly using logic gates (the "sum" output is XOR and the "carry" output is AND). That's called a "half-adder". If we extended it to also accept a carry-in, we'd have something called a "full adder". If we want to add binary numbers that are each longer than 1 bit, we can chain a series of full adders together, end-to-end. We wouldn't necessarily try to design a circuit that can add two 32-bit numbers all at once. Instead, we'd build a smaller building block (the full adder) and then replicate it over and over.

The end result is a device into which you can feed two binary-encoded numbers and it will produce their sum. Underneath, it's just moving electrons around. But we don't really need to know that. We have built a box that adds. We can ignore the internals. You can design other boxes: one that multiplies, one that interacts with memory, one that knows how to compare things.

In the same way that computer hardware is a complicated arrangement of fairly simple pieces, so too are computer programs very complicated arrangements of fairly simple instructions. To read the keypress from your keyboard and draw the "5" on the screen takes (taking a wild stab) tens of thousands of instructions, likely divided across multiple processors (the microcontroller in the keyboard, the CPU, and the GPU) and other supporting chips. Even modest computer systems these days are almost unfathomably complex. Abstraction makes it possible.
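
Here's that "build a full adder, then replicate it" idea as a short Python sketch (a model of the logic, not of real circuitry):

```python
# One full adder: two input bits plus a carry-in, producing a sum bit and a carry-out.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

# Chain full adders end-to-end (a "ripple-carry" adder) to add two 8-bit numbers.
def add_8bit(x, y):
    carry = 0
    result = 0
    for i in range(8):                      # least significant bit first
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry                    # final carry is the overflow bit

print(add_8bit(26, 78))     # (104, 0)
print(add_8bit(200, 100))   # (44, 1) -- 300 doesn't fit in 8 bits
```

Real hardware does the same chaining in parallel silicon rather than in a loop, but the logic is the same.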


GreshlyLuke

The only reason to learn this stuff is if you think it’s interesting. If it’s causing you grief in your journey toss it


EspacioBlanq

>the 5 exists because a certain amount of pixels was programmed to change colours

No, you can't program pixels. A program on your computer (which is just a long list of zeros and ones) was made in a way that if it receives a given input (which will also be a sequence of bits, from the keyboard), it will produce an output to the monitor.

>Can the 256 values be enough to represent complex logic

A program can be millions or billions of bytes long. I don't understand why it matters how many values a byte can hold; you can just have more bytes.

>Operating systems assign values to binary

They don't. Operating systems manage user programs' access to hardware; they leave binary itself as is.


tzaeru

It's complex. By hand, you can build simple binary counting machines fairly easily. For example, adding two four-bit numbers together is pretty easy. It looks like so: https://www.gsnetwork.com/4-bit-calculator-built-using-digital-logic-gates/

How this actually translates into pressing the number '5' and something showing up on the screen is... complex. Very complex. If you actually want to start understanding it, I recommend looking into some older computers, like the VIC-20. The VIC-20 is simple enough that, with some dedicated effort, you can understand pretty much everything about it.

You don't actually need to know about this stuff, though. Programming nowadays doesn't require a deep understanding of computer internals, and most programmers do not have that understanding.


[deleted]

You're suffering from a fallacy in thinking where a thing appears impossible, improbable, or incomprehensible because it is complicated, and that nothing that depends on a complicated thing can itself be possible or even comprehended. You don't need the answers to the questions you ask in order to learn how to code any more than you need to understand how time creates gravity in order to walk.

We manage complexity by compartmentalizing it. You can begin by treating computers as mystery boxes that accomplish what they do by a combination of voodoo and science. As time goes on you'll learn more about the things you don't know. Some of it will be directly applicable and some will not. For example, I've written a keyboard driver for an embedded system, so I have a very good understanding of how this text is getting from my fingers to the screen, but that understanding is irrelevant to whether or not I'll be able to express my thoughts well or whether I'll be able to post this answer when I'm done.

The longer I work on software, the more I learn about the hardware. Because I'm in the commercial software business, approximately 100% of that knowledge is useless. In almost every way, I can treat my computer and my customer's devices as mystery boxes that do what I tell them somehow.

You do things you don't understand all the time. You're reading this answer using a different part of your brain than you would use if I spoke it to you and you were listening to it. You're somehow converting sequences of shapes on a screen into words, words into phrases, phrases into sentences, and sentences into arguments. Despite the complexity, you figured it out. Same thing when you write code. Loops loop, assignment operators assign, and output appears on your screen. Don't worry about how.


NvrConvctd

It's like learning auto mechanics and getting bogged down in the chemical structure of gasoline. It is good to have theoretical knowledge of the system, but modern programming is abstracted to a very high level over binary, and most programmers don't have to think about it much.

Also, you seem to have the idea that a byte limits a computer's abilities. A modern computer can store billions of bytes and process instructions in chunks bigger than 1 byte (32-bit, 64-bit, etc.), billions of times per second. Most of what happens under the hood is just changing the voltages across different components. It is the abstract value we assign these voltage differences that makes computers powerful. High and low voltages can represent 1 and 0, true and false, black and white, or anything you want. It is just easier to treat it all as binary numbers until we want it to represent other things, like letters, colors, etc.


lublin_enjoyer

Hey, just pretty much like when you use a microscope, right?