02-18-2015 03:34 PM
#76
oh god plz dont bump any of the utter trash i posted 7 years ago
02-18-2015 04:00 PM
#77
02-18-2015 07:39 PM
#78
02-18-2015 07:50 PM
#79
02-18-2015 07:57 PM
#80
Call me stupid, but don't call me incorrigible. When I'm wrong (most of the time), I want to know it.
02-19-2015 03:10 PM
#81
Before I delve deeper into this and change my mind, I'll just stop right here and suggest that all the potential outcomes listed assume that an ASI would have similar motivations to a biological mind. I think what you really want to ponder is whether an ASI would have any motivations at all. Biological brains get the motivation for all of their actions from the genes that programmed them. And the genes want what any dumb molecule wants: they want to last. Because if they last, they last, and if they don't, then you don't notice. Any action you take that results in more prolific meiosis is rewarded by glands pumping feelgood juice into your brainmeat. Another motivation, which is really just an extension of the first one, is to not die.

But is death really an issue for an AI? Is it important whether you live or die, or is that something we've been fooled into believing by the things that made us? Does an AI that was made for improving itself find a motivation to improve itself after it has gained consciousness, or will it instead build itself a body and a couch, put the body on the couch, and just sit there waiting for Better Call Saul to come on? Maybe I'm projecting myself too much into this. I wouldn't want to improve myself too much. I think I'd get bored.
02-19-2015 03:18 PM
#82
I fucking love the idea of a prototype ASI alone in a massively guarded server room inside a Faraday cage, eagerly watched by all humanity and poked by scientists, and all it can be bothered to do is burp and scratch its ass, or whatever the digital equivalent would be.
Last edited by oskar; 02-19-2015 at 05:13 PM.
02-19-2015 07:02 PM
#83
02-19-2015 09:37 PM
#84
02-19-2015 10:15 PM
#85
I can't speak for wufwugy, but for me, pretty much all I do in my free time thinking about economics/politics is play devil's advocate for collectivism against myself and try to figure out a way to disprove my stances on everything.
02-20-2015 11:41 AM
#86
02-20-2015 12:58 PM
#87
I hope a computer AI comes along and reminds us that we're fucking retarded because our most precious tool for problem-solving (logic) is based on illogical assumptions. Fat chance, given that our current logic-based systems agree with each other quite usefully. If only non-logical systems were deterministic!
02-20-2015 02:18 PM
#88
Ex Machina looks pretty good
02-20-2015 04:45 PM
#89
02-20-2015 08:20 PM
#90
02-25-2015 01:34 PM
#91
02-25-2015 02:23 PM
#92
Lol, I was really, really confused when I read your original post, because I didn't realize it was from 2008 and it just felt so off to me.
02-25-2015 05:45 PM
#93
I'm probably talking shit.
02-25-2015 05:57 PM
#94
I agree with MMM in spirit. It's kind of what I mean when I say people aren't logical, they're psychological.
02-25-2015 06:00 PM
#95
Definitely read this article: http://nautil.us/issue/21/informatio...rld-with-logic
03-02-2015 09:06 AM
#96
03-02-2015 10:45 AM
#97
Logic can be used to prove that some solvable problems cannot be solved with logic, and that some statements do not adhere to the rules of logic.
03-02-2015 12:32 PM
#98
Homo sapiens as we know it will be LONG gone in 200 years... yes, Wuf is ignoring Moore's law.
03-04-2015 02:01 PM
#99
I think I just found a new obsession. This is incredibly interesting stuff.