FTR Community Poker Forum

Help me understand what goes on in a CPU.

  1. #1
    MadMojoMonkey's Avatar
    Join Date
    Apr 2012
    Posts
    10,322
    Location
    St Louis, MO

    Default Help me understand what goes on in a CPU.

    Long story short:

    I saw a youtube vid in which someone made a 4-bit computer in Minecraft.

    I was kind of immediately impressed with the idea of walking through a CPU the size of a building, where I could see how each gate fits in the whole system.

    It's a bit of a mystery to me how a power switch can turn on, which sets in motion a series of switches, which ultimately culminates in my monitor, keyboard and mouse working as a computer.


    So I've done a bit of research, but I'm really stuck on some stuff.

    Currently I'm thinking about exactly what an ALU does. I know it does arithmetic and logic, but which functions? In that Minecraft vid I mentioned, the adder and the subtractor were separate buildings. I would not have expected that, since they are similar processes. If it's a 4-bit ALU, how does it output 1-bit logic (like A>B)? Does it really have an extra bit of output?



    If anyone here understands the inner workings of a CPU, please post.

    I would even appreciate any links to good online sources about CU, ALU, CPU and/or FPU (math coprocessor / coprocessor) architecture. I'm probably mostly interested in more primitive stuff right now... since that's all I understand.


    My final goal will be to fully understand at least a basic calculator from the gates to the user interface.
  2. #2
    It's been a while since my CS degree, and I wasn't that great of a student to begin with. This is really more of a comp eng question, and I only took one 2nd year course that would help answer it, so I forget a lot of details. Also I'm a little drunk now, but here we are.

    Through the magic of logic gates implemented physically via wires and transistors and stuff, we have bits represented as voltage, where 0 is ~0V and 1 is ~5V, and adders:

    http://en.wikipedia.org/wiki/Adder_(electronics). They are used everywhere in computer hardware to perform basic arithmetic.

    Skipping some steps... CPUs have instruction sets which define how every series of x bits thrown down the pipes is to be handled... many logic gates to throw the voltages around various tiny wires and transistors and stuff. A "32bit" architecture has the first few bits reserved for the instruction code, and the next bits are the data for that operation (and then some other stuff probably), so a 64bit architecture has more room for bigger numbers (which is important for memory addresses and stuff, see: 32bit windows and 4+GB of ram).

    See: http://en.wikipedia.org/wiki/X86_instruction_listings or http://en.wikipedia.org/wiki/X86_assembly_language cuz x86 is a common example of a CPU instruction set, but there are many, and they're all hardware-specific because the instruction processing is physically hardwired into the CPU itself via transistors and stuff. Some instructions are simple arithmetic, some instructions are like... save this data in this RAM, some instructions are go fuck your mother.

    The CPU interacts with the video card via the motherboard using... "special instructions" that tell the CPU to tell the motherboard to tell the video card to do stuff like how to color the pixels right and stuff. It knows how to tell the video card stuff cuz drivers.

    TLDR: magic
  3. #3
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    I don't know how primitive you want to get, but you might enjoy taking a look at the actual electronics used to create the various logic gates with diodes/transistors. From there you can learn what a flip-flop is and probably go from there to understanding things like accumulators and programming more basic microprocessors with assembly: http://en.wikipedia.org/wiki/Flip-fl...electronics%29
  4. #4
    MadMojoMonkey
    Thanks, DrunkD0zer

    I messed around in my studies and built some stuff in Minecraft using a mod called RedLogic.
    This allows me to place logic gates and wire them up with digital signals without worrying about the voltages and transistors.
    Which is fine, because I studied that much in college. I have pored back through that textbook, but it kind of just jumps past gates to ICs and doesn't look back.

    In Minecraft, I made a 16-bit add/sub in IEEE half precision floating point. I'm reinventing the wheel with every step, so it's probably nowhere near an ideal design, but it does what it's meant to do.

    That's when I started to realize that I'm repeating the adders all over the damn place, and the bitshifter is used twice. I had heard of ALUs, so I started to try and find out what it does. A broad description is all I could find.

    So, I'm knowledgeable about adders and bit shifting... even to the point of knowing that my multiplier (4-bit int) was using a primitive Wallace tree, where a Dadda tree would be better.


    I'm rather confident I won't understand the machine code until I create a bit of ROM myself. I do understand that the ROM is in the CU and basically defines the machine code instructions that the CU, and thus the CPU, can process. I'm intrigued to hear of the go fuck your mother.

    I'm lost as to how the ALU takes part in the various functions. E.g. Is the multiplier an architecture within the ALU, or is it encoded into the ROM and the ALU just takes care of each part of the process sequentially with AND gates and adders?

    In the video I watched, the adder and subtractor were 2 separate structures. I would have assumed to make a single structure that does either function. Do you know what benefit it serves to decouple these nearly identical structures?

    I guess my point is that I've been reinventing the wheel for a while on this project, and I'm willing to just copy someone's work at this point. If I see how one of them works, then I can "get it" as I build it, without having to figure it out first. I am trying to catch up to the 50's, after all. Also, this isn't homework, or professional, so I'm allowed to copy.

    Is there anything that's small enough to be comprehended by a monkey that has a public wiring diagram? It'd be a great bonus if it used one of those machine codes you linked to, or if the machine codes were also available.
  5. #5
    MadMojoMonkey
    Thanks, Spoon

    I have gone that primitive in college. I get how an AC power signal is converted to DC and transformed down to a usable voltage (~5 V usually). I get how that can be interpreted as a digital ON vs. 0 V being a digital OFF.

    I have built NOR and NAND gates and worked with transistors in college. Additionally, I still have the textbook to look over that stuff.

    I do need to get into memory, and I understand that there are many kinds of flip-flops (FF): DFF, JKFF, RS-NOR latch, etc.

    I do not know about accumulators, so that's something I need to look into.

    What constitutes a microprocessor? Is it a CPU, RAM and I/O?
  6. #6
    spoonitnow
    On the topic of adding and subtracting being different entities, in the mathematical sense they are essentially the same, but afaik they have to be treated differently when you're using logic on that low of a level.
  7. #7
    MadMojoMonkey
    Nah, a subtractor IS an adder, it just prepares the value with trickery.

    In binary, -X = NOT(X)+1

    E.g.
    1001 = 9
    0110 = NOT(9)
    0111 = -9

    Now, that clearly = 7, not -9. The trick is that in binary, they're equivalent.

    E.g.
    0000 1001 = 9
    1111 0110 = NOT(9)
    1111 0111 = -9

    In any number of bits, this equivalence between NOT(X)+1 and the negative of X holds, because adding the two to X always wraps around to zero.

    Binary subtraction exploits this by changing a request for A - B into A + (-B).
    All it has to do to accomplish this is to take the NOT of each input and force a carry-bit into the ones digit adder.

    A binary subtractor IS a binary adder with one extra gate.
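    That trickery is easy to sanity-check in Python (a sketch; the 8-bit width is just an example):

```python
def negate(x, bits=8):
    """Two's-complement negation: invert every bit, then add 1."""
    mask = (1 << bits) - 1
    return (~x + 1) & mask  # NOT(x) + 1, truncated to the bit width

# 9 is 0000 1001; negate(9) is 1111 0111
assert negate(9) == 0b11110111
# x + (-x) wraps around to zero, which is why an adder can subtract
assert (9 + negate(9)) & 0xFF == 0
```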
  8. #8
    MadMojoMonkey
    Quote Originally Posted by spoonitnow View Post
    On the topic of adding and subtracting being different entities, in the mathematical sense they are essentially the same, but afaik they have to be treated differently when you're using logic on that low of a level.
    As far as I can tell, what we are taught in grade school as how to add/subtract/multiply/divide on paper, by hand, is the same thing that a computer does.

    This is true for addition and subtraction and division without any stretches.

    With multiplication, it's the same, but a computer can't immediately recognize that 0*x = 0.
    When you or I multiply 4 * 113 on paper, we break the problem down to 4*3 + 4*10 + 4*100.

    A computer would (if it was able to calculate in decimal), see the problem as this: 004 * 113.
    To break it down, the computer would go: 4*3 + 4*10 + 4*100 + 00*3 + 00*10 + 00*100 + 000*3 + 000*10 + 000*100.

    So a computer can't ignore the leading 0's like you or I can. Other than that, it's the same process as working it out long hand.
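    In base 2 the long-hand method above becomes shift-and-add: one partial product per bit of the multiplier, each either a shifted copy of the multiplicand or zero. A Python sketch:

```python
def multiply(a, b):
    """Long multiplication in binary: for every set bit of b,
    add a copy of a shifted left to that bit's position."""
    result = 0
    shift = 0
    while b:
        if b & 1:  # this digit contributes a non-zero partial product
            result += a << shift
        b >>= 1
        shift += 1
    return result

assert multiply(4, 113) == 452
```

A hardware multiplier either lays these additions out in space (a tree of adders) or reuses one adder over several clock ticks; the arithmetic is the same either way.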

    I know a couple of iterative algorithms for approximating the square root of a number. I'm not sure how a computer does logarithm, exponential, or trigonometric equations.

    I want to understand all of this, but for now, I want to focus on the "basics" of simple math.

    ***
    What I'm confused about is whether the computer, e.g. when multiplying, has a single architecture (like a Dadda Tree) that sits behind a multiplier, or whether the CU simply tells the computer to add each partial sum to a storage register by utilizing a single adder over and over for each partial sum.

    I'm really interested in the exact capabilities of the CPU in terms of:
    What are the machine code functions the CPU recognizes? (Thanks D0zer, I didn't realize at first that just seeing specific machine code commands will help me understand what they can do.)

    What are the actual steps the machine code processes for each request?

    What architecture is necessary to support this?

    I'm getting the feeling that there is a lot of variation in CPUs and that there is always a choice in chip design between fast and enormous or not as fast and smaller.
    If you build a dedicated architecture for each task, you can optimize each build for its specific purpose. If you build a single architecture to handle similar tasks, then you can't perform those tasks simultaneously, and the architecture may not be optimal in speed for any of the operations it can handle. However, being smaller makes it cheaper to manufacture. This cost-to-benefit analysis will ultimately drive how the chip is designed, and therefore how the computer will operate.

    It might make a lot of sense for me to stop being so general and to pick a specific CPU to study. Once I understand one, I can study another and compare them. I tried looking for whatever chip the NES used (the MOS 6502) and found out some interesting stuff, but not enough to build one myself.

    Is there a good online resource that fully describes something like this? I don't mind if it's a 4-bit CPU, instead of the 8-bit 6502.
  9. #9
    Try thinking about the simplest computer whose job is to simply add two numbers together, using binary obviously.

    You have 2 banks of 4 switches.
    You enter your two numbers by flipping the switches.
    eg 1010 and 1010
    You now ask your CPU to add them.
    And your 5 Light bulbs display the answer: 10100

    I'm sure you can work out the Logic Gates necessary to build this CPU.
    Hint: Just XORing these would give you 0000, so you need extra gates (the ANDs) to handle the carries.
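    Working the hint through in Python: XOR of the two inputs is the carry-less sum (1010 XOR 1010 = 0000), while AND picks out the carry bits, which are shifted left and folded back in:

```python
def add(a, b):
    """Adder from gates only: XOR = sum without carries,
    AND shifted left = the carries; repeat until no carry remains."""
    while b:
        carry = (a & b) << 1
        a = a ^ b
        b = carry
    return a

# two banks of four switches in, five light bulbs out
assert add(0b1010, 0b1010) == 0b10100
```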


    Moving on 40 years you get to the MOS 6502,
    which can handle more than the one ADD instruction and uses more advanced Input Output techniques than switches and light bulbs.
    The switches are replaced by 8 bit Registers which can handle numbers up to 255, Hex $00-$FF
    However it has 2 other tricks up its sleeve to handle bigger numbers.
    - It can use two 8 bit bytes as Low and High to process 16-bit numbers.
    - It can use its Registers to point to memory addresses where the numbers are stored.
    The Programs are now held in memory so there are instructions to jump or branch to different parts of the program.

    So the CPU would get instructions like:
    Code:
 LDA $0A   ; Load the Accumulator from memory location $0A
 CLC       ; Clear the carry flag before adding
 ADC $0A   ; Add the value at $0A to the accumulator (ADC adds with carry)
 STA $20   ; Store the result from the accumulator somewhere.
 JMP $FC00 ; Jump to memory location $FC00 for your next instruction, perhaps a routine to display the result on the screen.

    Spoiler:


    Code:
     ; Example 6502 code.  Simple 16-bit square root.
               ;
               ; Returns the 8-bit square root in $20 of the
               ; 16-bit number in $20 (low) and $21 (high). The
               ; remainder is in location $21.
               
               sqrt16  LDY #$01     ; lsby of first odd number = 1
                       STY $22
                       DEY
                       STY $23      ; msby of first odd number (sqrt = 0)
               again   SEC
                       LDA $20      ; save remainder in X register
                       TAX          ; subtract odd lo from integer lo
                       SBC $22
                       STA $20
                       LDA $21      ; subtract odd hi from integer hi
                       SBC $23
                        STA $21      ; did the subtract go negative?
                        BCC nomore   ; yes: all done. no: increment root
                       INY
                       LDA $22      ; calculate next odd number
                       ADC #$01
                       STA $22
                       BCC again
                       INC $23
                       JMP again
                nomore STY $20      ; all done, store square root
                       STX $21      ; and remainder
                       RTS
             
               
               This is based on the observation that the square root of an 
               integer is equal to the number of times an increasing odd 
               number can be subtracted from the original number and remain 
               positive.  For example,
               
                       25
                     -  1         1
                       --
                       24
                     -  3         2
                       --
                       21
                     -  5         3
                       --
                       16
                     -  7         4
                       --
                        9
                     -  9         5 = square root of 25
                       --
                        0
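    The odd-number observation translates almost line for line into a loop. A Python sketch of the same algorithm as the 6502 routine:

```python
def sqrt16(n):
    """Subtract successive odd numbers (1, 3, 5, ...); the count of
    subtractions that stay non-negative is the integer square root,
    and whatever is left over is the remainder."""
    root, odd = 0, 1
    while n >= odd:
        n -= odd
        odd += 2
        root += 1
    return root, n

assert sqrt16(25) == (5, 0)  # the worked example above
assert sqrt16(26) == (5, 1)  # same root, remainder of 1
```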

    Notice from the hidden example that the 6502 did have an SBC subtraction instruction.
    As well as INC and DEC to add or subtract one from a memory location, and INX, DEX, INY, DEY to do the same for the X and Y registers, Incrementing and Decrementing.
    Technically the ALU section of the CPU would perform those actions along with ORA, ASL and other arithmetic or logic instructions.
    Whilst the CU section of the CPU would perform JMP, BNE, TXA, STX, STA type instructions.
    But it is a single chip, processing one instruction at a time (each taking a few clock cycles), so it isn't really relevant to separate CU and ALU instructions apart from keeping your sock drawer organised.
    So after making your minecraft ADD building you can make a DEC building, SUB building, STA building and warehouse(memory).


    So how does the 6502 display the result?
    >> It doesn't
    Not its job.
    The CPU processes the programs.
    The Operating System is also just a program but it works with the BIOS which controls all the physical electronics.

    A 6502 based Commodore 64 for example would display the character codes the program stored in memory locations reserved for the screen, ($0400-$07E7).
    But you can't just STA $0400 the result of your addition, you have to work out the string of digit characters that represents it for humans to read.
    (Those light bulbs are looking like a much better interface now aren't they).
    Not forgetting the 'Video Interface Chip' which was actually scanning those memory locations for the creation of the analogue signals which were sent to the cathode ray tube.
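    Working out those digit characters is just repeated division by ten. A Python sketch (using ASCII digit codes, which happen to coincide with the C64's screen codes for digits; the addresses are the screen memory mentioned above):

```python
def to_digit_codes(value):
    """Peel off decimal digits with mod/div 10 (least significant
    first), then reverse them into display order."""
    if value == 0:
        return [ord('0')]
    codes = []
    while value:
        codes.append(ord('0') + value % 10)
        value //= 10
    return codes[::-1]

screen = {}  # stand-in for screen memory $0400-$07E7
for offset, code in enumerate(to_digit_codes(8)):
    screen[0x0400 + offset] = code

assert screen[0x0400] == 0x38  # the digit '8'
assert to_digit_codes(113) == [0x31, 0x31, 0x33]
```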


    Move on another 30 years.
    Modern computers don't use screen addressing, a dedicated GPU is told by the CPU that the programmer has requested some fancy shaded graphic, so get on and do it, you can find the instructions in this location.
    Whilst another core of the CPU is telling the maths co-processor where to find the instructions to decode the encrypted streaming video that the GPU will be asked to display.

    Amazingly though this is all still happening with Lo Volt or Hi Volt signals being sent through an array of standard logic gates, (millions of them, which is why it is helpful to keep them organised).
  10. #10
    MadMojoMonkey
    Thanks, Chemist. So much good stuff I have to work with in there.

    Quote Originally Posted by chemist View Post
    So after making your minecraft ADD building you can make a DEC building, SUB building, STA building and warehouse(memory).
    Hah. Cool.

    I wouldn't have bothered with Minecraft if not for the RedLogic mod. The add/sub design I built is not really a building; it's more like a slab.

    Spoiler:
    It's 2 tall, 24 deep (26 counting input and output rails) and 2*(n+1) wide, where n is the number of bits. This is a ripple-adder, not CLA. Technically, the adder is only 16 deep. The other 8 tiles in the slab perform a 2's complement with logic that converts negative results into a 5-bit signed format.

    That was built before I discovered the RedLogic "Bundled" gates. With the Bundled AND and Bundled XOR gates, you only need 2 of each to make a 16 bit full adder. However, you have to bit shift the carries, which becomes the dominating part of the architecture in size.


    I made an INC/DEC "slab" and a bit shifter already.

    So all that thought I put into coupling the adder and subtractor was for naught. Cool.

    I'm a teensy bit disappointed that it's a decoupling of all the things. It would be cooler if it was one of those cases where all of the things simplified down into a relatively tiny set of gates.


    Wooo! I knew you guys would be kicking this process along. Thank you.


    EDIT: Oh yeah. In Minecraft, the minimum possible time for a single gate operation is 50 ms. The notion of graphics processing is just not going to happen when 20 Hz is the clock speed of each gate. It's several orders of magnitude away from anything like that.
    Last edited by MadMojoMonkey; 09-30-2014 at 09:29 PM.
  11. #11
    MadMojoMonkey
    OK, it's all a lot less sexy than I had imagined.

    I was hoping that there would be some brilliant combination of gates that allows all of the functions we need to come through a single set of gates... however, it's really not like that. It's not a synthesis of many functions into one streamlined thing. It's a tetris-style smashing of isolated things that may perform nearly identical functions all crammed next to each other.

    In short, the whole thing is basically copy/paste over and over again.

    I guess that's the practical result of abstraction, but I never saw the big picture.

    This has seriously fizzled my enthusiasm.


    E.g. a half-adder is 1 AND and 1 XOR gate. A full adder is 2 of those plus an OR gate to merge the carries, which allows for the carry bit from the adder next to it. There is a full adder for each bit.

    So create a half-adder. Copy/Paste it into a full adder. Copy/paste that n times to make an n-bit adder. Then copy/paste that anywhere in the chipset where addition is required.

    Even a "simple" processor has a minimum of 5 n-bit adders (typically more than twice that), which may have different values of n, based on their specific function.


    E.g. A flip-flop or latch can be used to store a single bit-state. Copy/paste those so that you can store bytes. Then copy/paste those and you can store many bytes. That's a memory register.
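    The bit-storing feedback loop can be simulated directly: a cross-coupled NOR latch is two gates, each fed the other's output, iterated until they settle. A sketch (not tied to any particular part):

```python
def sr_latch(s, r, q=False):
    """RS-NOR latch: Q = NOR(R, Qbar) and Qbar = NOR(S, Q).
    Looping a few times lets the feedback settle."""
    for _ in range(4):
        q_bar = not (s or q)
        q = not (r or q_bar)
    return q

q = sr_latch(s=True, r=False)        # set the bit
assert q is True
q = sr_latch(s=False, r=False, q=q)  # hold: the latch remembers
assert q is True
q = sr_latch(s=False, r=True, q=q)   # reset
assert q is False
```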


    I'm a bit confused as to whether the memory registers available to the CU and ALU are within the chipset, or if they are the system RAM.

    Anyway, the processor has access to a load of registers... including registers which hold addresses of registers, so that relatively small amounts of data can be used to retrieve relatively large amounts of data.

    I still need to figure out how the memory registers and address pointers all work.


    Also, I still haven't gotten to the bottom of accumulators.
  12. #12
    MadMojoMonkey
    Accumulators are different from memory registers because location.

    Accumulators are built inside the chip, RAM is built outside the chip. The exact architecture of the accumulators and the RAM may or may not be identical, but they are functionally the same.

    So accumulators are registers. They are just the registers located within the chipset. They serve to hold intermediate results close at hand.

    I read that accumulators are more expensive than RAM chips, but I don't know why. I have some ideas, but I'm not sure.


    E.g. 3*(4+1) = 15 requires the addition to happen, and the intermediate result of that addition is then multiplied by 3.

    If the processor had to send the 15 off to RAM, only to retrieve it a moment later, this would take much more time than if it stores the 15 inside the chip and then retrieves it a moment later. The time for the electrical signals to physically traverse the wires is what's different.


    Usually, an accumulator is just a memory register. In this case, whenever data is sent to the register, that data overwrites whatever data was previously left in the register.

    Sometimes, depending on the chip, an accumulator might be hardwired to an adder. In this case, whenever a number is sent to the accumulator, the accumulator adds the value, creating a running total. This set up requires a means of clearing the register, setting the value to 0, as needed.
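    The two behaviours described, plain overwrite versus a register hardwired through an adder, can be modelled in a few lines (a toy sketch, no particular chip intended):

```python
class Register:
    """Plain register: a load replaces whatever was stored."""
    def __init__(self):
        self.value = 0

    def load(self, x):
        self.value = x

class AddingAccumulator(Register):
    """Register wired through an adder: each load adds to a running
    total, so it needs an explicit clear back to zero."""
    def load(self, x):
        self.value += x

    def clear(self):
        self.value = 0

r = Register(); r.load(5); r.load(3)
assert r.value == 3   # overwritten

a = AddingAccumulator(); a.load(5); a.load(3)
assert a.value == 8   # running total
a.clear()
assert a.value == 0
```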

    Modern chips have hundreds of memory registers in them.

    The CU directs the connections between the ALU and the accumulators, and the accumulators and the RAM.
  13. #13
    calculations are only performed on the contents of the accumulators
    the accumulators have to be loaded with the values, either directly, indirectly or relative to a register, from values held in memory or pulled from a temporary stack.
    btw your temporary storage place is usually the stack, which uses regular memory.
    Your CPU will need a Stack Pointer.
    And a Program Counter (which points to the next instruction's memory location).
  14. #14
    MadMojoMonkey
    Thanks, chemist.
    I am making much more progress on this since you (all) have been pitching in.

    Quote Originally Posted by chemist View Post
    calculations are only performed on the contents of the accumulators
    Ah. Yes, I see that. The calculations must happen on values, which must be stored in registers. As I said, the register's location makes a big difference in operation time. It makes sense that all operands would be within the chip at the time of operation.

    Quote Originally Posted by chemist View Post
    the accumulators have to be loaded with the values, either directly, indirectly or relatively to a register from values held in memory or pulled from a temporary stack.
    I think I understand what you're saying here.

    There are a few ways the accumulators can be set.
    As I understand it, they basically boil down to one of 2 things:
    1) They can be connected to another register, copying that data.
    2) They can be connected to an input device, allowing a user to enter data.

    Is that right?
    Quote Originally Posted by chemist View Post
    btw your temporary storage place is usually the stack which uses regular memory.
    Your CPU will need a Stack Pointer.
    And a Program Counter (which points to the next instructions memory location).
    I know what a stack is. (Strictly, a stack is LIFO; the FIFO version would be a queue.)

    I have gathered that using stack memory is common but not ubiquitous.
    I.e. some processors use stacks and some don't.
    Am I wrong there?


    I am aware of pointers. I just don't think I have enough understanding of registers and accumulators to try to explain pointers. I have even deleted a couple of posts on pointers because I didn't like the way they read. Meaning, to me, that I don't understand them.

    My basic understanding is to liken a register to a PO Box.
    Both have an address, which is the way the post office (CU) refers to the location of the box.
    Both have contents, neither the post office nor the CU are too interested in the contents.
    Both have a name. The name is who uses the PO Box, or a user-assigned variable name to identify the register.

    E.g.
    int x = 5;
    This command uses a register to store the number 5. Since I (the user) didn't tell it exactly which memory register to store it in, it picked an unused register for me. Now, I don't know the register's address, but when I tell the computer to operate on x, it treats 'x' like the register's address.

    Does it use another register to hold the name of the variable? Then another register to hold the pointer to the data register's address?
    I.e. If I say x = x + 3
    It loads 3 into Accumulator B.
    It selects addition.
    It searches for x in a register.
    It finds x, which is linked to a pointer to the location of 5.
    It loads 5 into Accumulator A.
    The result 8 is in Accumulator C.
    It searches for x in a register.
    It finds x, which is linked to a pointer to the location of 5.
    It replaces 5 with the contents of Accumulator C, 8.


    Can anyone critique this? Something seems wrong in the process of using a variable name to find a register's value.
  15. #15
    Quote Originally Posted by MadMojoMonkey View Post
    Can anyone critique this?
    If you say:
    int x = 5;
    x = x + 3

    Then you are talking in a high level programming language.
    The CPU would not understand anything, it would need an interpreter* or a compiler.
    *(this may sound like an analogy but it is actually an application for an interpreted programming language)

    The interpreter would evaluate the expression using its own code.
    A compiler would compile it to CPU usable machine code, (but the actual code would depend on so many things that it is not possible to provide a simple translation here without defining a bagful of assumptions)

    To understand the CPU better try looking at the state of the art micro computing from 1975 with the Altair 8800
    http://www.youtube.com/watch?v=EV1ki6LiEmg

    So after watching the video you should now realise you need to write the machine code and load it in to the machine for the cpu to process.
    Therefore as the video showed how to turn the computer on and jump to location 20 we could enter our code at location 20.

    First we need the 8080 processor Op code to load the accumulator.
    There are several variations so we will choose the one to do it immediately with one byte (code 3E) which uses the next memory location for the immediate value.
    So you would use your switches to enter 3E at 20
    then 5 at 21 for your value of x=5.

    Now the CPU expects another Operation Code in the next location 22.
    So we need to choose an Add instruction (eg C6 add immediately with the next byte)
    (we could instead have chosen code 86, ADD M, which adds the byte at the memory address held in the HL register pair, or 8E, ADC M, which does the same but also uses a Carry, or 80 which would add the contents of the B register to the accumulator if we had loaded something into the B register)
    Flick the switches to enter C6 to location 22, then enter 3 to location 23, which completes our x=x+3 calculation.

    The next opcode will be expected in location 24, so as our answer at the moment is just in the accumulator we could tell the cpu to store it in a memory location.
    This will be a 3 byte command, The opcode followed by Low Byte and High Byte of the memory location.

    When you have entered all that code, then you can put your programme counter back to zero and run the whole programme.
    (or you could put the programme counter to 20 and just run the commands we talked about here without processing that first Jump command).
    After running if you inspect the memory location you specified to store the accumulated value to, it should show the value 8 is now in that location.
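    The fetch-decode-execute loop for just those two opcodes fits in a few lines of Python (a sketch: no flags, no other instructions, and any unknown byte simply halts it):

```python
def run(memory, pc=0):
    """Tiny Intel 8080 subset: 3E = MVI A,n (load accumulator
    immediate), C6 = ADI n (add immediate to accumulator)."""
    acc = 0
    while pc < len(memory):
        op = memory[pc]
        if op == 0x3E:    # MVI A,n
            acc = memory[pc + 1]
            pc += 2
        elif op == 0xC6:  # ADI n
            acc = (acc + memory[pc + 1]) & 0xFF
            pc += 2
        else:             # anything else: stop
            break
    return acc

# the bytes toggled in on the front panel: 3E 05 C6 03
assert run([0x3E, 0x05, 0xC6, 0x03]) == 8  # x = 5; x = x + 3
```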


    Code:
    Notice that the CPU itself only works with numbers.
    Programmers prefer to use mnemonics. Later they would use an Assembler to generate the machine code.
    
    Intel 8080
    MVI A,5
    ADI 3
    
    Zilog z80
    LD  A,5
    ADD A,3
    
    But coincidentally (although not a coincidence, because Zilog, like AMD later, was aiming for some compatibility) in both cases the memory would be stuffed with the same values.
    3E,05
    C6,03
    
    The MOS6502 however used completely different mnemonics and codes.
    A9,05  LDA #05
    69,03  ADC #03


    Quote Originally Posted by MadMojoMonkey View Post
    I.e. some processors use stacks and some don't.
    Am I wrong there?
    I don't know of any processor that doesn't use a stack.
    They usually have commands to PUSH and POP, (the 6502 calls it PHA and PLA).
    The processor also uses it with the Jump to Subroutine command, pushing the current address to the stack and pulling it back off the stack when it gets to the RTS command.
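    That subroutine bookkeeping reduces to a push on call and a pop on return (a simplified sketch; a real 6502 pushes the address bytes into page-one memory via the stack pointer):

```python
stack = []  # the stack pointer is effectively len(stack)

def jsr(pc, target):
    """Jump to subroutine: save where to resume, then jump."""
    stack.append(pc + 1)  # return address
    return target

def rts():
    """Return from subroutine: pop the saved address back into PC."""
    return stack.pop()

pc = jsr(100, 500)  # call the routine at 500 from instruction 100
assert pc == 500
pc = rts()
assert pc == 101    # resumes just after the call
assert stack == []  # LIFO: last pushed, first popped
```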
    Last edited by chemist; 10-10-2014 at 09:33 PM.
  16. #16
    MadMojoMonkey
    Quote Originally Posted by chemist View Post
    If you say:
    int x = 5;
    x = x + 3

    Then you are talking in a high level programming language.
    The CPU would not understand anything, it would need an interpreter* or a compiler.
    *(this may sound like an analogy but it is actually an application for an interpreted programming language)

    I knew that... at some point... I keep forgetting how far away I am from a programming language that I actually understand.
    That's the point of the thread, though.

    Quote Originally Posted by chemist View Post
    To understand the CPU better try looking at the state of the art micro computing from 1975 with the Altair 8800
    http://www.youtube.com/watch?v=EV1ki6LiEmg
    This was immensely helpful, thanks for the link.
  17. #17
    How is the minecraft cpu going?

    just randomly came across this page
    http://www.homebrewcpu.com/
    and remembered your project
    This guy is making a custom CPU from 74 series TTL chips.
    In fact there is a whole ring of them at the bottom of the page. (LOL webrings almost as old and obsolete as 74LS chips).
    Some great projects though showing what really happens electronically inside the microprocessors we take so much for granted.

    Perhaps you can make a magic-1 in minecraft.
  18. #18
    MadMojoMonkey's Avatar
    Join Date
    Apr 2012
    Posts
    10,322
    Location
    St Louis, MO
    I stopped working on it when I got flustered with debugging my signed 16-bit floating point adder/subtractor.

    I spent days going through logic tables to wade through the consequences of using signed numbers. It took me ages to convince myself I'd covered all the cases. I don't remember why, but I decided to re-order the operands so that the one with the greater magnitude came first. I then had to determine the sign of the output from the signs of the inputs, knowing that A was the greater operand whether the operation was A + B or A - B, but that the result might still need to be negated. I think this was because I was trying to avoid a subtraction that actually produced a negative result, since the binary representation of a negative was not the same as a signed negative.
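    For anyone curious, the case analysis I was wrestling with looks something like this in Python (a sketch of sign-magnitude addition with my own names, not my actual Minecraft circuit):

    ```python
    def sm_add(sign_a, mag_a, sign_b, mag_b):
        """Add two sign-magnitude numbers (sign: 0 means +, 1 means -).

        Re-order so the larger magnitude comes first: the machine then only
        ever subtracts smaller from larger, and the result takes the sign
        of the larger operand, so no subtraction ever goes negative.
        """
        if mag_b > mag_a:  # re-order the operands: larger magnitude first
            sign_a, mag_a, sign_b, mag_b = sign_b, mag_b, sign_a, mag_a
        if sign_a == sign_b:
            return sign_a, mag_a + mag_b  # same signs: add magnitudes
        return sign_a, mag_a - mag_b      # different signs: subtract, keep larger's sign

    # (+5) + (-3) = +2
    assert sm_add(0, 5, 1, 3) == (0, 2)
    # (+3) + (-5) = -2
    assert sm_add(0, 3, 1, 5) == (1, 2)
    ```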

    I never even got near the CPU-level logic. I was looking at how other people had clocked their Minecraft CPUs, and it was on the order of 4 seconds per clock tick. The trouble is that Minecraft only updates the game world at 20 Hz. The graphics and player camera can update at much higher rates, but changes made in the world happen at 20 Hz. Furthermore, due to the cube-based nature of the game, any "compact" circuit has at most 4 I/O spots, so a 3-way AND gate is as big as you can go in a single game tick. That means even adapting a CLA (carry look-ahead adder) design to speed up the ripple adder was a nightmare.

    In the end, I decided that I learned what I intended to learn at the start, and I was getting really frustrated with the project.

    What I wanted to know:
    What happens in the first seconds of turning on a computer? How can a current flowing through wires and flipping switches somehow become a calculator or a modern computer?

    What I learned:
    Meaning is where the designer decides it should be. The designer decided to give meaning to voltages. The designer decided to let no voltage mean a binary zero and some voltage to mean a binary one. That meaning is clearly a conceit. It is fabricated by the designer.

    However, it is a clever conceit because it opens up the entire language of mathematics through binary numbers.

    Using that conceit, we find that A XOR B gives the ones digit of A + B, and A AND B gives the carry digit of A + B, so long as A and B are one digit binary numbers. These gate functions existed outside of this binary conceit, but they become useful tools in a new light and context by the designer's choice.
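    That XOR/AND observation can be checked directly. A minimal Python sketch of a half adder (the function name is my own):

    ```python
    def half_adder(a, b):
        """Add two 1-bit numbers: XOR gives the sum bit, AND gives the carry."""
        return a ^ b, a & b  # (sum, carry)

    # Check all four input combinations against ordinary addition.
    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            assert 2 * carry + s == a + b
    ```

    The gates don't "know" they're adding; the designer's conceit is exactly the claim checked in the loop: interpreted as binary digits, (carry, sum) equals a + b.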

    So with that, the designer can extrapolate to an adder, and then a subtractor, multiplier and divider...
    but wait. What just happened? adder of what?

    Well, it doesn't actually add or multiply anything. It's just a machine that, when given a certain input, produces a certain output. Whatever meaning the input and output have is a conceit supplied by humans. The designer has just been clever in interpreting inputs as numbers and outputting corresponding numbers, given a request for a certain mathematical operation - which is itself just another input, with no meaning in and of itself, and without even the binary numbers to lean on.

    It's not that adding 1 + 0 = 1, it's that a certain kind of avalanche happens for each input, which leads to a specific output.

    Whether it's

    no yes no [select avalanche type "add"] no yes no [avalanche] yes no no
    or
    010 + 010 -> 100
    or
    2 + 2 = 4

    is merely a choice of how to display the inputs and output. There is no more meaning in the yes/no line than in any other representation, except its readability to a human working with the machine.

    Once we establish a clever abstraction which works well, we can stop considering it as individual pieces and consider a single thing.
    So all those XOR and AND gates get lumped together into larger structures like half adders and full adders... which we build into a clever working abstraction, and when it works well, we step back again. We design integrated circuits whose specific internal gate structure matters less to us than the overall I/O logic of the chip. We make a bunch of those and put them together in new ways, and, again, when we get them working well, we step back and build... etc.
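    That lumping-together can be sketched in a few lines of Python: a full adder built from two half adders, then chained into a 4-bit ripple-carry adder (my own names, and a deliberately naive design - the ripple version is exactly the slow adder a CLA replaces):

    ```python
    def full_adder(a, b, carry_in):
        # A full adder is two half adders plus an OR to merge the carries.
        s1, c1 = a ^ b, a & b          # first half adder
        s, c2 = s1 ^ carry_in, s1 & carry_in  # second half adder
        return s, c1 | c2

    def ripple_add(a_bits, b_bits):
        """Add two equal-length bit lists, least significant bit first.
        Each stage must wait for the previous carry: that's the 'ripple'."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    # 5 + 3 = 8: [1,0,1,0] is 0b0101 LSB-first, [1,1,0,0] is 0b0011
    bits, carry = ripple_add([1, 0, 1, 0], [1, 1, 0, 0])
    assert bits == [0, 0, 0, 1] and carry == 0  # 0b1000 = 8
    ```

    At this level nobody thinks about individual XOR gates anymore; `full_adder` is the new atom, and the next step back treats `ripple_add` the same way.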

    This clevering up and stepping back took decades. The final product I use today bears the marks of all that prior cleverness embedded in the individual gates and structures from the smallest scales up to the whole computer.

    By the time we get to a CPU, there's a whole underlying structure there whose functionality we want to extend. The CPU doesn't have to rely on binary mathematics, but we already have this system of binary logic built up, and it's awesome, so we run with what's right here. Again, the desired functionality of the CPU drives us to create new conceits - imbuing meaning in 1s and 0s to fit our desired outcomes.
  19. #19
    WOW bit of a read that,
    so in conclusion you learnt what you wanted and are happy now just to use the modern machines as they are.
    I hope you didn't type all that and just dictated it to your android.

    The clevering up happened really fast.
    I remember laughing at Scotty shouting at the mouse 'Computer'.
    I remember the early days of Dragon Speech, reading set texts for half an hour to train it and then giving up as the results were still terrible.
    And now voice recognition is getting so good, I can actually imagine my grandchildren (if that happens, or some other little brats) in a few years will be asking if we really had to press a switch to turn the lights on.
    And then getting my own back by sending them off to press the switch to reboot the house when we get left in the dark.
  20. #20
    MadMojoMonkey's Avatar
    Join Date
    Apr 2012
    Posts
    10,322
    Location
    St Louis, MO
    Quote Originally Posted by chemist View Post
    WOW bit of a read that,
    so in conclusion you learnt what you wanted and are happy now just to use the modern machines as they are.
    I hope you didn't type all that and just dictated it to your android.
    Sorry so long, but I felt like you really helped me out on this project - and others, too. I didn't want to give too brief an answer that glossed over the stuff I learned. Plus, maybe someone else is as fascinated as I am to realize that each of these useful functions of a computer is really, fundamentally, an illusion. The computer doesn't draw letters as I type them... it merely turns some pixels on and others off in a fashion which tricks me into seeing the letter I typed. The notion that it's a letter, or that strings of letters may or may not be meaningful words, has nothing to do with my computer. That is really mind blowing to me.

    ***
    I typed it. I'm a fast typist, though.
    I'm guessing that wuf is similar in that we can directly think things and have them appear typed on the screen in front of us without consciously thinking about how our fingers are moving to make it happen.

    Quote Originally Posted by chemist View Post
    The clevering up happened really fast.
    I remember laughing at Scotty shouting at the mouse 'Computer'.
    Too funny. Loved it.

    That transparent aluminum he wanted is now a real thing, BTW. Well... sort of.

    Quote Originally Posted by chemist View Post
    I remember the early days of Dragon Speech, reading set texts for half an hour to train it and then giving up as the results were still terrible.
    I remember Dr. Sbaitso.

    I still remember how it tried to read it if you told it to say abcdefghijklmnopqrstuvwxyz... well, I remember the abcdefghij part, anyway.
    And when I told it to say "you are a pussy", it pronounced "pussy" like the puss from a blister.
    My 15(ish)-year-old sensibilities thought that was among the funniest things ever.
  21. #21
    nice share
