all right, i have never in my life left an internet argument without coming to some cordial conclusion, even if that conclusion is to agree to disagree (or me being wrong). so i haven't forgotten about this; i've just had the flu the last few days and didn't wanna think about stuff.
Originally Posted by DrivingDog
I don't follow AI science very closely but those seem like almost unreachable goals. Moreover, how are you going to know if you've succeeded? Ask the computer "are you conscious?" or "Are you experiencing emotions?" And if it says yes, does that constitute any sort of proof?
assuming they're unreachable is a wee bit naive. we know this just from our experience with scientific progress over the decades, and on top of that, our understanding of emotions and consciousness is minute, so ruling out the possibility that we could create them is a shot in the dark. also, our current understanding suggests that they die when we do. we don't know this for certain, but it's likely true. if that's the case, then they're strictly physical phenomena, and we have no evidence that things in the physical world cannot be recreated. even if consciousness didn't die with our bodies, that point would still stand.
as far as proof goes, i won't pretend to know how to control the variables correctly, but suggesting that "just ask the computer" is the best test anyone could devise almost patronizes the science.
By entities that can learn I assume you're referring to living creatures. In that case, the real purpose behind their learning is to propagate their genes. Whether or not this moves them higher up the chain is incidental. It just happens the two often coincide - e.g., the alpha male is the one who mates, gets to eat first, etc.
it's like i said "an apple is juicy" and you said "no, an apple is round".
Again you're talking about living things that follow the rules of natural selection. There's no inherent requirement for AI to emulate us in this way. There's no reason to assume that because they possess the human characteristic of intelligence that this will make them human-like in any other way.
you're right, i am speculating here. it's not without merit though. in fact, there's a lot of merit in postulating that advanced intelligent beings would follow a course of adaptation similar to the one we see everywhere else.
Consciousness is a red herring in this debate. Just possessing consciousness or being free-thinking doesn't necessarily lead to all kinds of other human characteristics such as the will to power, any more than building a machine that can beat us at chess means that same machine will 'want' to beat us. It's just a machine, it's not driven in the same ways living things are.
i completely disagree. i can't fathom an entity with an ego not trying to defeat the competition, just like in every other example we've ever known.
There's plenty of machines that adjust their output to adapt to circumstances. For example, there's lots of artificial neural networks that learn things and none of them have yet run amok and tried to subjugate us. Whereas they are programmed to learn and adapt, to my knowledge no-one has specifically had to program them not to seek world domination. Sorry if that sounds facetious I just think you're making an invalid assumption here.
well, these programs are not even infinitesimally close to the level of what super-advanced AI would be. that changes everything. i explain a little more below.
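to make that gap concrete, here's roughly what the "neural networks that learn" look like at their absolute smallest - a toy sketch i'm adding purely for illustration (the numbers, names, and training task are mine, not anyone's real system). it's a single perceptron that learns the logical AND function from examples. notice that everything it "knows" is three numbers; there is no machinery in there that could want, fear, or dominate anything:

```python
import random

# A single perceptron: about the simplest "artificial neural network" that
# learns. It nudges two weights and a bias toward matching its examples
# (here, logical AND) and does nothing else whatsoever.

def train_perceptron(examples, epochs=20, lr=0.1):
    random.seed(0)  # fixed seed so the run is repeatable
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron learning rule:
            w[0] += lr * err * x1       # move weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(predictions)  # prints [0, 0, 0, 1] - it has learned AND
```

the point isn't that this settles anything; it's that arguing from systems like this to super-advanced AI is like arguing from a thermostat to a brain.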
This is pretty much how the army operates, they program their soldiers to obey orders. So again you are assuming an AI would necessarily have to have human characteristics.
that's not at all what i was getting at.
in order for us to create a robot that can operate on the field of battle, and handle everything that goes along with it, at and beyond the level of human capacity, it will need to have senses. it will need to feel. among many other things, it will need to know the difference between good and bad, whatever those may be.
let me put it this way: if it gets hit by a heavy force, it will need to know that that is bad and possibly life-threatening, and it will need to react accordingly. now, it can be taught/programmed not to react the same way when that type of thing happens in a different circumstance (like something involving its CO), but what exactly do you think it will think when that happens? it will be in a psychological/experiential/moral dilemma, just like the ones we all face all the time. it could not be programmed to react in a strictly rigid fashion, because then it wouldn't be able to learn and would be worthless for its purpose.
I think our disagreement boils down to the idea that I don't believe, even if we were capable of doing so, that we'd ever want to create an AI that would both a) have the ability to surpass us; and b) have the 'motivation' to dominate us. Certainly the former is well within our current capabilities, but the latter would be a very bad move on our part indeed. I suppose it could happen by accident...
i'm sure that if you went back to 3000 BC and asked some tribesman whether he thought it would ever be reasonable to make a device that could blow up the land as far as the eye could see, he would say no. yet here we are, having created that device.