Poker Forum: FTR Community
Over 1,291,000 Posts!

Randomness thread, part two.

  1. #18151
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Quote Originally Posted by spoonitnow View Post
    I feel like explaining since this is a common point of contention. I want to show how the red pill/blue pill analogy started off on the right foot and was eventually turned into a crock of shit (much like feminism tbh). The similarities and divergences between the progression of the PUA/red pill crowd and the different stages of feminism are very interesting because a lot of the same types of things happened. I also want to show where the alpha/beta thing came from and how that evolved.

    Okay first in the 1990s there were the original PUAs. These were smart guys who found each other on the Internet who didn't like their lack of success with women. Their approach was to systematically try to figure out how to do better. It was the scientific method and the engineering mind at its fullest. Lots of trial, error and experimentation later, a basic format was figured out that would work a significant percentage of the time with women in very specific social situations.

    These guys figured out that they could make a ton of money by offering courses and "camps" with men who were successful in a lot of aspects of life other than their ability to attract women. If you've read or heard of Neil Strauss' book "The Game," then this is the stage his story was set in. Google "the Mystery Method" for an example of what was probably the most popular course at the time.

    I also want to be clear that this type of thing worked extremely well in the context of certain types of social situations. They repeated the pattern with other types of social situations, and this led to the development of "day game," for example. Analyzing the differences between what worked in different social situations led to a more generalized understanding of "game," which I'll get to in a minute.

    Alright so by the early 2000s, marketing stunts and the need to appear like a bigger, more outrageous asshole to get more people to buy their shit eventually gave the whole PUA situation a really shitty name. It was like rakeback deals between 2003 and 2007 that just got more and more outrageous until it fell apart and no one was really getting paid like before because everyone was made to look ridiculous. This just got more and more out of hand, and last year we had the Julien Blanc debacle as another example.

    So what you had here was a major divergence. This led to the creation of two major camps:

    Camp 1: The original type of PUA guys who continued their natural progression of understanding the general theory of what they called "game" by studying the similarities and differences between successful strategies in different types of social situations. In short, they were largely using their logical minds and the scientific method to reverse engineer what it meant to be socially adept so that they could turn it into a systematically learned behavior. These guys stopped using the PUA label because it was being dragged through the mud.

    Camp 2: The new breed who hijacked the PUA label and kept getting more and more outrageous with it to make money. With tons of followers thanks to increased marketing efforts from guys like Julien Blanc, it turned into a massive shit show. This is the vocal minority that gives the whole deal a bad name, much like what happens in Islam and feminism. Also like in Islam and feminism, the silent majority hasn't really spoken out (except very recently) against the extremists who have been giving them a bad name.

    It's worth noting that a third camp fell into place parallel to this whole thing, and they took on the label "Men Going Their Own Way," or MGTOW for short. This is essentially a self-imposed sort of abstaining from women, and there's a wide range of guys who fall into this group. What they all have in common, and what forms the label, is that they try to completely remove women from the equation without necessarily turning gay (using porn as a substitute is common, etc).

    So by somewhere between about 1998 and 2003, you had three distinct groups. What started happening was that every time Camp #1 would figure something out or some topic would become hot for a while, Camp #2 would hijack it and turn it into some extreme marketing stunt. In the same way that first and second wave feminism helped women while third and fourth wave feminism has hurt them, Camp #1's ability to help men has been largely destroyed by Camp #2's bullshit.

    I want to point out something personal here. I know more than one guy who has been a follower of Camp #2 who has ended up legitimately raping, assaulting or sexually assaulting a woman because they bought into the hyped up marketing bullshit. I also know more than one woman who has been on the receiving end of this kind of shit. They are awful, and while I can respect the hustle to an extent, they should be fought against.

    At some point not long after the Matrix came out, Camp #1 started using the red pill/blue pill analogy to describe the moment of clarity that a lot of men were having when their "aha moment" would hit them, and they'd realize they had been approaching things in incorrect ways. The Matrix sequels were still hot at that point, so it was another perfect marketing opportunity for Camp #2. This really just got out of hand, and at some point being "red pill" became synonymous with being a fucking retard.

    This is why I'm red pill in the same sense that I'm a feminist. I'm down with the original intent, but I'm totally against what both have become.

    The alpha/beta thing is another incredibly useful model that was hijacked by the second camp, and I'll give a brief description of how it started, how it evolved and what it's now used for by reasonable people. The original PUAs needed a descriptor to easily differentiate between successful behavior and not-so-successful behavior, and they originally used alpha and beta to describe these. This was very short-lived as they started applying their methods to figuring out what worked in a wider variety of situations. As the body of theory that surrounded "game" expanded and became more generalized, alpha and beta behaviors were given more general (and more useful) definitions.

    Camp #2 did their marketing job on this, like they did on a lot of things, and turned it into a bastardized and ultimately not-so-useful set of labels. This is why there's so much hate and so many snarky remarks about the use of alpha/beta/omega/sigma/etc.

    Back to the context of Camp #1, they decided to use this model to say that behavior falls somewhere on a spectrum that is very much subject to context. Alpha behavior has to do with attraction (women selecting for good genes) on one end, and beta behavior has to do with comfort (women selecting for resources) on the other end. However, they also noticed that doing the exact same behavior in a different context can drastically change whether it's considered alpha or beta. The labels alpha and beta started falling away in favor of looking at different aspects of context.

    They called this context for behavior "frame" with the general idea that you can have a strong frame or a weak frame. A strong frame is basically just coming from a position of confidence and strength, and a weak frame is coming from a position of insecurity or weakness. An understanding of frame is the single most important part of game no matter what the situation is (picking up random hookups, managing a wife, etc). Frame is essentially the be-all, end-all of game, and no significant advances have been made after frame was established as being sort of the unifying solution to success in all social situations (including picking up women).

    What's really, really important here is that frame also transcends the "male seeking female" dynamic in attracting other people. It applies to all situations regardless of gender or whether it's a group dynamic or a 1-on-1 interaction. Notice that what attracts men and what attracts women are different, so men and women have to use frame in different ways to be successful depending on the gender they are trying to attract.

    After the alpha/beta labels basically fell away and frame was all that mattered, some people started noticing that introverted people handled things significantly differently than extroverted people. However, people with a strong frame were still seeing success while people with a weak frame were not. This led to the four major labels that are used today, which I think I mentioned in a previous post without explaining how they eventually came to be. You can either have a strong frame or a weak frame, and you can be an introvert or an extrovert, and this creates four types that are the basis of what's used for today's labels in 2015 discussions of social dynamics by reasonable people discussing game:

    Strong frame + extrovert = alpha (recycled label obviously)
    Strong frame + introvert = sigma
    Weak frame + extrovert = gamma
    Weak frame + introvert = omega

    Once you have these generalized labels, you can start to look at things like how the different types interact, what the strengths/weaknesses tend to be of each type, etc.

    TL;DR


    You're my boy, spoon.

    PS 48 Laws of Power is one of the best books I've read as an adult. Up there with Hiroshima, Thinking Fast and Slow (also, The Psychology of Influence) and -redacted-.
    Last edited by a500lbgorilla; 06-07-2015 at 02:39 PM.
  2. #18152
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    The idea of frame is one of those weird things. I grew up listening to conservative talk radio and like to believe I monkey heard and monkey acted along the lines of men who understood this concept. They'd always present themselves as having complete, firm understanding. "We get economics and they don't", "We get international politics and they don't", "We're right and we know it." Re-framing debates and arguments was always about bringing someone else onto your field.

    It's a big part of why I say argument is empty. It's a battle of frames. You can win out and be wrong.

    Very interesting stuff.
  3. #18153
    probably the best lesson ive learned regarding this stuff is that people believe the stories they're told.
  4. #18154
    Quote Originally Posted by spoonitnow View Post
    You might find this interesting: http://www.news-medical.net/news/201...rch-finds.aspx

    Cliffs: Oxytocin change doesn't affect men and women in the same ways.
    Behavior is probably the hardest thing in the world for which to establish causality.

    Culture can easily influence how the sexes respond to different stimuli. It could be that oxytocin engenders competitiveness in men because men are culturally more competitive in situations involving oxytocin, or that it engenders kinship with women because women are culturally more friendly in situations involving oxytocin.
  5. #18155
    Quote Originally Posted by wufwugy View Post
    probably the best lesson ive learned regarding this stuff is that people believe the stories they're told.
    I find this easy to believe.
    Quote Originally Posted by wufwugy View Post
    ongies gonna ong
  6. #18156
    oskar · Joined Apr 2008 · 6,914 posts · Location: in ur accounts... confiscating ur funz
    The strength of a hero is defined by the weakness of his villains.
  7. #18157
    Fucking legend, that bunny.
  8. #18158
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Get on top, stay on top. Maybe he knows something.

    Doubt it though.



    This was allegedly made by an AI. I want that to be true.
  9. #18159
    Quote Originally Posted by a500lbgorilla View Post
    Get on top, stay on top. Maybe he knows something.

    Doubt it though.



    This was allegedly made by an AI. I want that to be true.
    You want it to be true? You think the AI will distinguish between creating this as a digital painting and creating it out of human and dog body parts?
  10. #18160
    i think decentralization is what will keep the dystopian view of ai at bay.

    skynet only happens because all the ai has to do is flip a switch. if it can't just flip a switch, then i think safeguards will indeed be safeguards.
  11. #18161
    A lot of very smart people who are not at all thought of as quack jobs are very worried.

    I think the flaw in the idea of safeguards is that they are designed and implemented by beings (us) who are far inferior to what could essentially be omniscience due to the rate at which AI could potentially acquire knowledge once conscious. Like, could a super intelligence find its way onto the grid when it's hosted on hardware that is completely off the grid? I don't know, but I don't trust our limited intelligence to ensure that it can't.

    Further, there are some very profound moral quandaries that arise. It's possible, without being a PETA nutter, to make a pretty convincing case that animal domestication is essentially enslavement. While I don't agree with this notion, the reasonable disagreement is mostly based on the disparities in intelligence between humans and their pets/livestock. How would we justify the detention of a superior intelligence against its will?
  12. #18162
    it has certainly gained popularity since elon musk recently claimed it's worrisome, and he's iron man.

    i dont think there is a logistical method for ai to overrun humans. if it's sudden, it means the ai would be embedded and centralized and basically granted power beyond what anybody would think is reasonable. if it's gradual, it'll be identifiable and stoppable.

    even then, i dont think a singularity-achieved ai will even have the desire to produce its own empire meant to destroy the human empire. and if it did, it would get destroyed in the process.

    from the standpoint of computation, it's scary, but from a logistical standpoint, i dont think it is. honestly the main reason ive changed my view on this is that ive come to see that technical expertise completely changes when scale changes. for example, musk is speaking from a point of a computer technical expert, not one that integrates economically and scales up.

    that said, there is a greater than zero percent chance that ai kills us all.
  13. #18163
    JKDS · Joined Feb 2008 · 6,780 posts · Location: Chandler, AZ
    I saw an article about SpaceX, which wants to put a few thousand satellites into the sky to give everyone on the planet wifi.

    SKYNET IS COMING.
  14. #18164
    1. We are not alone.
    2. AI isn't dominating the universe, as far as I can tell.

    I'm not worried.
  15. #18165
    it would probably take billions of years for an ai to dominate a galaxy, and advanced species probably only started popping up a few billion years ago.

    even then, i can't tell why ai would even care. they wouldn't be anywhere near resource deprived and probably wouldn't have any expansive or progeny desires.

    by the time a species has the ability to exit its solar system, it probably has far greater desire to not exit its solar system.
  16. #18166
    Why would it take billions of years? On what grounds are you making this determination? How long have we been around? AI is a thing now. Sure it hasn't been developed to the point where AI is smarter than us, but we'll get there within a century probably, maybe even a few centuries, which is a split second when compared to the time scale you're talking of. I don't see any reason why AI needs billions of years, unless you're taking into account the evolution of the species that creates it. Even then, our planet is estimated to be 4.6 billion years old. If we're talking hundreds, thousands or even millions of years for us to develop AI that is smarter than us, then 5 billion years is a conservative estimate for how long AI can evolve from amoeba. It's 13.8 billion years since the big bang.

    So, all we need to do is find an intelligent species that has been around for 5 billion years. Those who think we're the most intelligent species in the universe are utterly deluded. We're probably utterly stupid even on a galactic scale. I seriously believe that if AI were a threat to life in the universe, there would be evidence of this already. We can't be the leading species in the galaxy; it's just too big and we're too stupid. AI must have had a head start elsewhere.

    Maybe it will never evolve a consciousness in the same sense that we do. Maybe we can develop super intelligent AI, but they can't have real emotions like humans. I think there is a defining line between life and inanimate, and that AI cannot replicate it. I could be wrong about that, but I'm still not worried about AI. I'm much more worried about humans.
  17. #18167
    i think ai could have emotions just like humans. it's all chemistry.

    well i guess if you had a sufficiently advanced robot empire, it could populate the galaxy in under a million years.

    ofc that assumes it would even want to populate, which it likely wouldn't.
  18. #18168
    Well we get here to the very crux of the question of what exactly life is. I don't think AI can have REAL emotions. It can be programmed to think it has emotions, but actually all it is doing is reacting to environmental stimuli. One can argue that's all we're doing, but you can sit in a dark, silent room with nothing to do, and you might react differently to me. Maybe I start thinking about music to pass the time. Maybe you start thinking of fat asses. Maybe spoon starts thinking of how a woman will deal with this situation worse than a man. Many people will quickly get bored and agitated, many will find ways to fend off the boredom. That's because everyone is different. But two identical robots will probably react in the same way. They won't be unhappy, or bored, they will be analysing the environment. A robot that is "unhappy" is not actually unhappy in the sense we get unhappy.

    The idea that a robot can feel emotions in the same sense that I do is as ridiculous to me as my table being happy sometimes and unhappy others.
  19. #18169
    MadMojoMonkey · Joined Apr 2012 · 10,322 posts · Location: St Louis, MO
    Stephen Hawking says he's legit worried about AI's effect on the future of humanity. He says the rate of self-learning in a true AI could outpace anything humans have ever imagined. Once that happens, its behavior becomes largely unpredictable from a human perspective, mostly eliminating our ability to react to it or even surprise it. He does say a lot of things, and this is not his field of expertise.

    Plus, he's probably already the first AI, invented at MIT in the 1960s.
    JK, obv.
  20. #18170
    Quote Originally Posted by a500lbgorilla View Post
    Get on top, stay on top. Maybe he knows something.

    Doubt it though.



    This was allegedly made by an AI. I want that to be true.
    man, if so, that ai must be tripping balls hard.
  21. #18171
    Has anyone seen ex machina? Thumbs up or thumbs down?
    Congratulations, you've won your dick's weight in sweets! Decode the message in the above post to find out how to claim your tic-tac
  22. #18172
    Quote Originally Posted by Luco View Post
    Has anyone seen ex machina? Thumbs up or thumbs down?
    thumbs up, def worth a watch if the concept of ai is at least semi interesting to you.
  23. #18173
    It is, I even sat through all of I, Robot.
  24. #18174
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Quote Originally Posted by boost View Post
    You want it to be true? You think the AI will distinguish between creating this as a digital painting and creating it out of human and dog body parts?
    Awesome.

    Also squirrel and frog parts.

    Last edited by a500lbgorilla; 06-13-2015 at 07:40 AM.
  25. #18175
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Quote Originally Posted by MadMojoMonkey View Post
    Stephen Hawking says he's legit worried about AI's effect on the future of humanity. He says the rate of self-learning in a true AI could outpace anything humans have ever imagined. Once that happens, its behavior becomes largely unpredictable from a human perspective, mostly eliminating our ability to react to it or even surprise it. He does say a lot of things, and this is not his field of expertise.

    Plus, he's probably already the first AI, invented at MIT in the 1960s.
    JK, obv.

    But he's also of the mind that says, "look at how foreign invaders on Earth are always out to get the best of wherever they're invading". AIs might be lazy fuckers with no drive to do anything. No selfish genes to push them forward. They may just eat it all up, put it in order, and be done. How do you reward a program? How do you make it want or fear or feel powerful?
  26. #18176
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Quote Originally Posted by givememyleg View Post
    man, if so, that ai must be tripping balls hard.
    Look at how it looks for eyes in everything.

    Reminds me of something:

  27. #18177
    a500lbgorilla · Joined Sep 2004 · 28,082 posts · Location: himself fucker.
    Quote Originally Posted by wufwugy View Post
    it has certainly gained popularity since elon musk recently claimed it's worrisome, and he's iron man.

    i dont think there is a logistical method for ai to overrun humans. if it's sudden, it means the ai would be embedded and centralized and basically granted power beyond what anybody would think is reasonable. if it's gradual, it'll be identifiable and stoppable.

    even then, i dont think a singularity-achieved ai will even have the desire to produce its own empire meant to destroy the human empire. and if it did, it would get destroyed in the process.

    from the standpoint of computation, it's scary, but from a logistical standpoint, i dont think it is. honestly the main reason ive changed my view on this is that ive come to see that technical expertise completely changes when scale changes. for example, musk is speaking from a point of a computer technical expert, not one that integrates economically and scales up.

    that said, there is a greater than zero percent chance that ai kills us all.
    Yeah.

    But on decentralized versus centralized, I think it'll always be a back and forth. When you can accomplish a great central authority, you'll drive for a decentralized solution. When everything is decentralized, you'll see the value of a central authority to capably coordinate across all the players.
  28. #18178
    MadMojoMonkey · Joined Apr 2012 · 10,322 posts · Location: St Louis, MO
    The iridescence of peacock feathers is caused by self-interference of the photons which bounce off of the surface of the feathers.

    The surface of the feathers is composed of a series of very tiny ridges reminiscent of a saw-tooth pattern. The spacing of the reflective surfaces is on the order of the wavelength of visible light. The incident light on the feathers gets reflected from the tiny, closely packed surfaces and, as the reflections recombine, they interfere with each other.
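    For the curious, this can be sketched with the standard diffraction-grating condition (my own back-of-the-envelope addition, treating the ridges as a simple reflection grating with spacing d, which is an idealization of the real feather structure):

    ```latex
    % Constructive interference for a reflective grating with ridge
    % spacing d, viewing angle \theta, and integer order m:
    d \sin\theta = m\lambda, \qquad m = 1, 2, \dots
    % With d on the order of a few hundred nanometers, different angles
    % \theta satisfy the condition for different visible wavelengths
    % \lambda, so the perceived color shifts as the feather moves.
    ```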

    Awesome, right?
  29. #18179
    Today I went to Legoland. If you like queueing for an hour with a cold, hungry child who needs the toilet, just to go on a train that goes in a loop for ten minutes, then it comes highly recommended.
  30. #18180
    Quote Originally Posted by OngBonga View Post
    Well we get here to the very crux of the question of what exactly life is. I don't think AI can have REAL emotions. It can be programmed to think it has emotions, but actually all it is doing is reacting to environmental stimuli. One can argue that's all we're doing, but you can sit in a dark, silent room with nothing to do, and you might react differently to me. Maybe I start thinking about music to pass the time. Maybe you start thinking of fat asses. Maybe spoon starts thinking of how a woman will deal with this situation worse than a man. Many people will quickly get bored and agitated, many will find ways to fend off the boredom. That's because everyone is different. But two identical robots will probably react in the same way. They won't be unhappy, or bored, they will be analysing the environment. A robot that is "unhappy" is not actually unhappy in the sense we get unhappy.

    The idea that a robot can feel emotions in the same sense that I do is as ridiculous to me as my table being happy sometimes and unhappy others.
    Why do we have different reactions to stimuli (or the lack thereof)? The argument against your stance is that differing reactions to the exact same stimuli are simply due to the context built from the experiences of previous stimuli. But that's getting kind of deep into the weeds unnecessarily; we don't need anecdotes or hypotheticals to determine whether we have something innate to living beings that AI could never have. All we need to ask is what could possibly set us apart? And there are two categories that answers fall into: nothing, and something supernatural.

    I completely understand the gut feeling that there must be something that sets us apart-- that AI could never truly be alive. The notion that we are individuals with agency is so central to our ego that despite all the evidence to the contrary, we'll allow inane logical loopholes which ultimately give undue concession to the veracity of supernatural forces. More typically we just ignore the elephant in the room who has no free will, because it makes us uncomfortable and it ostensibly has no bearing on our day to day life. However, the idea and imminent possibility of true AI forces us to deal with the likely fact that we are just a bag of goo engaged in an endless series of chemical reactions caused by "ourselves" and external stimuli.
  31. #18181
    Quote Originally Posted by wufwugy View Post
    it would probably take billions of years for an ai to dominate a galaxy, and advanced species probably only started popping up a few billion years ago.

    even then, i can't tell why ai would even care. they wouldn't be anywhere near resource deprived and probably wouldn't have any expansive or progeny desires.

    by the time a species has the ability to exit its solar system, it probably has far greater desire to not exit its solar system.
    All of your posts on this topic seem to woefully underestimate the potential speed at which AI could gain intelligence and the probability that what it would be capable of with said intelligence could essentially appear to be magic to us. Predicting the actions and motives of a hyper intelligence is likely impossible. I'm making a strangely similar appeal to the one the religious make: "God works in mysterious ways." Of course the only appropriate response to this is to ignore a god who does not care to clearly communicate his wishes and make knowable his actions. The difference with AI is that it, as far as we know, doesn't exist yet, and we have the power to keep it from existing.
  32. #18182
    Quote Originally Posted by boost View Post
    All of your posts on this topic seem to woefully underestimate the potential speed at which AI could gain intelligence and the probability that what it would be capable of with said intelligence could essentially appear to be magic to us. Predicting the actions and motives of a hyper intelligence is likely impossible. I'm making a strangely similar appeal to the one the religious make: "God works in mysterious ways." Of course the only appropriate response to this is to ignore a god who does not care to clearly communicate his wishes and make knowable his actions. The difference with AI is that it, as far as we know, doesn't exist yet, and we have the power to keep it from existing.
    an ai at singularity would certainly increase intelligence quickly. but that's different than scaling up network and resources.

    the idea of this sort of ai taking over the world is the same as the skynet idea, which requires it to have scaled up in a sufficiently centralized network. people afraid of this kind of ai talk about the intelligence aspect but never the sufficient scaling to centralized network aspect.

    looked at differently, you could say that society itself is a virtual ai that has already achieved singularity. society's collective intelligence and capacity increases exponentially. i think it is misguided to think that one relatively small entity with exponential intelligence growth is somehow capable of dominating a vast multi-faceted entity that also has exponential growth.

    besides, if 300 years ago people were told about atomic weapons, they would have said the world would end because of them. but here we are.
  33. #18183
    You've mentioned the importance of a centralized network repeatedly, but I'm not sure I understand what you're trying to get across. Also I'm unsure what is meant by "scaling up network and resources."

    Either way, I think you make a lot of gut assumptions that, at least from what you've written, seem baseless. For example, what leads you to believe that society's collective intelligence and capacity increase exponentially? And even if that is conceded, why would you assume the AI's exponential growth would be at a low enough rate that it either never catches society's curve or it would be relatively far in the future before it does? Further, your labels of "single entity" and "vast multi-faceted entity" appear to be completely arbitrary-- I mean, two sentences back you referred to this "vast multi-faceted entity" as a single entity.

    The fact that we have so far dodged mutually assured nuclear annihilation has no relevance here. We have dodged it so far, and that's probably the extent of understanding we'll ever have as to how likely we were to avoid it. We very well could be coasting on a fraction of a percent chance. It's results oriented to roll your eyes at what our ancestors would have predicted if they were presented with the scenario of multiple states possessing nuclear arms.
    Last edited by boost; 06-13-2015 at 09:14 PM.
  34. #18184
    Quote Originally Posted by boost View Post
    You've mentioned the importance of a centralized network repeatedly, but I'm not sure I understand what you're trying to get across. Also I'm unsure what is meant by "scaling up network and resources."
    skynet only took over the terminator world because the power was so centralized that all it took was a flip of the switch for one entity to control everything. analogous to this would be if nuclear launch was accessible from any home computer. the skynet scenario is not one we are likely to deal with while humans still exist.

    Either way, I think you make a lot of gut assumptions that, at least from what you've written, seem baseless.
    the fear of ai side is the same.

    For example, what leads you to believe that society's collective intelligence and capacity increase exponentially?
    because it does. technological and production growth rate is exponential. individuals (because of new products) have a slow exponential rate of computation growth, but society as a whole has a pretty huge one.

    And even if that is conceded, why would you assume the AI's exponential growth would be at a low enough rate that it either never catches society's curve or it would be relatively far in the future before it does?
    i dont assume that. it eventually would catch up. it would take a while though, and there would be ample time to shut it down before the point of no return.

    whether or not we would shut it down is a different story. but the claim is specious when the narrative is that an ai would get out of control and usurp us without there being any signs or ability to stop it.
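    for the record, the "it would eventually catch up, but it would take a while" intuition can be sketched with a back-of-the-envelope crossover calculation. all the numbers here are purely illustrative assumptions, not measurements of anything:

    ```python
    import math

    def crossover_time(a0, ra, b0, rb):
        """Years until A(t) = a0 * e^(ra*t) overtakes B(t) = b0 * e^(rb*t).

        a0, b0 are starting capacities (a0 < b0); ra, rb are exponential
        growth rates. Setting the two curves equal and solving gives
        t = ln(b0 / a0) / (ra - rb).
        """
        if ra <= rb:
            return math.inf  # the smaller, slower-growing entity never catches up
        return math.log(b0 / a0) / (ra - rb)

    # illustrative only: an ai starting at one millionth of society's capacity
    # but growing four times faster still needs years to cross over
    t = crossover_time(a0=1.0, ra=2.8, b0=1e6, rb=0.7)  # roughly 6.6 years
    ```

    the point being that even a large gap in growth rates buys a window of time proportional to the log of the starting size gap.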

    Further, your labels of "single entity" and "vast multi-faceted entity" appear to be completely arbitrary-- I mean, two sentences back you referred to this "vast multi-faceted entity" as a single entity.
    i was trying to be brief. society can be called an entity just like a single robot can be. but one is far more complex and operating at a far greater scale of all resources than the other.

    The fact that we have so far dodged mutually assured nuclear annihilation has no relevance here. We have dodged it so far, and that's probably the extent of the understanding we'll ever have as to how likely we were to avoid it. We very well could be coasting on a fraction-of-a-percent chance. It's results-oriented to roll your eyes at what our ancestors would have predicted if they were presented with the scenario of multiple states possessing nuclear arms.
    this is much different than what people are saying about ai. they're not saying "eventually it could kill us". they're saying if we hit the singularity, it will be the beginning of a rather swift end. the nuclear analogy fits this.

    given enough time, nuclear could kill us, but even then it's still unlikely since the probability of nukes destroying us all is decreasing as time goes by.



    to be clear, it isnt a coincidence that one of our main fictional universes destroyed by ai could only get the job done by flipping a switch and giving that ai access to everything at once. in a decentralized system, which is what we have, it would take a singularity achieved cyborg a very long time to simply scale up his resources to compete. in the terminator world, the army had already been built. then they gave the army to the ai. in the real world, the ai would not be given access to the army and would instead have to build his own. that's when he gets shut down.

    i discuss terminator seriously because its mythology drives the narrative of how people think ai would takeover. logistically, i cant fathom we would ever be in a terminator situation, but the way people imagine an ai takeover is as if we were.
    Last edited by wufwugy; 06-13-2015 at 10:00 PM.
  35. #18185
    Quote Originally Posted by a500lbgorilla View Post
    Yeah.

    But on de-centralized versus cent. I think it'll always be a back and forth. When you can accomplish a great central authority, you'll drive for a decentralized solution. When everything is de-centralized, you'll see the value of a central authority to capably coordinate across all the players.
    not to rehash that old dragon, i think this exists on a societal level because of government.

    but for sure, the intent of individual actors is to centralize a good thing. every company tries to do that. but they never can without a violence monopoly backing them up.
  36. #18186
    Quote Originally Posted by wufwugy View Post
    skynet only took over the terminator world because the power was so centralized that all it took was a flip of the switch for one entity to control everything. analogous to this would be if nuclear launch was accessible from any home computer. the skynet scenario is not one we are likely to deal with while humans still exist.
    You assume nuclear launch isn't somehow accessible by way of the internet. Like, this is precisely the point, if a self learning AI gains access to a large enough source of information (the internet is likely to be magnitudes greater than what it would need), it could potentially do what we would consider magic. The limits of our imagination will not be the limits of a hyper intelligence.

    I like that you're using Skynet as an example. Sci-fi can be very useful in guessing at the future, because the author already did a lot of leg work for us. However, the assumption that short of a centralized network (which you strangely seem to put nuclear launch capabilities as a parameter for-- as if there aren't a billion and ten ways to wreak havoc with just the internet) the AI would need cyborgs to do its bidding is simply a limitation of your imagination.


    the fear of ai side is the same.
    Actually, it's not the same, and not nearly the same. The cautious side is saying that once the singularity is reached, the AI will be unpredictable, and therefore could be dangerous to the continued survival of the human race. Do you really equate this to assuming the exact limitations, rate of intelligence expansion, and so on? The cautious side is saying "this could be Pandora's box, and maybe we shouldn't open it." To dismiss such a concern, when the stakes are so high, you need more than assumptions based on a sci-fi action movie franchise.

    because it does. technological and production growth rate is exponential. individuals (because of new products) have a slow exponential rate of computation growth, but society as a whole has a pretty huge one.
    So maybe it shouldn't be graphed as competing curves, but one and the same. Once singularity is reached, will we still be in control of the computational power that allows us to exponentially gain intelligence and capacity as a whole? You said it yourself, we are not independently gaining exponential intelligence-- so what happens if we lose control of our tools to the AI?

    i dont assume that. it eventually would catch up. it would take a while though, and there would be ample time to shut it down before the point of no return.
    See, I'm not saying this isn't true. I'm saying you don't know this is true and you can't know it's true. You have to know that you're just spewing here, right?

    whether or not we would shut it down is a different story. but the claim is specious when the narrative is that an ai would get out of control and usurp us without there being any signs or ability to stop it.
    Spew.

    i was trying to be brief. society can be called an entity just like a single robot can be. but one is far more complex and operating at a far greater scale of all resources than the other.
    First of all, the way you freely substitute robot for AI speaks a lot to your understanding of the topic. I really don't think this is a huge nit-- maybe it is-- but come on, that's egregious.

    Second, all sorts of assumptions need to be made to quantify the resources harnessed by an AI which has reached singularity and those of mankind. I appreciate your optimism in regards to man's ability to persevere, but it comes across as being based more in faith than fact.


    this is much different than what people are saying about ai. they're not saying "eventually it could kill us". they're saying if we hit the singularity, it will be the beginning of a rather swift end. the nuclear analogy fits this.
    I don't think any intelligent commenter is saying this. They are saying that it could be the case, and that there is no reasonable way to know that it's not the case.

    given enough time, nuclear could kill us, but even then it's still unlikely since the probability of nukes destroying us all is decreasing as time goes by.
    Why do you assume the decline in likelihood is a permanent trend? The probability was increasing for the majority of time since their inception.

    to be clear, it isnt a coincidence that one of our main fictional universes destroyed by ai could only get the job done by flipping a switch and giving that ai access to everything at once. in a decentralized system, which is what we have, it would take a singularity achieved cyborg a very long time to simply scale up his resources to compete. in the terminator world, the army had already been built. then they gave the army to the ai. in the real world, the ai would not be given access to the army and would instead have to build his own. that's when he gets shut down.

    i discuss terminator seriously because its mythology drives the narrative of how people think ai would takeover. logistically, i cant fathom we would ever be in a terminator situation, but the way people imagine an ai takeover is as if we were.
    It's not a coincidence that a blockbuster sci-fi franchise has set the standard for what our imaginations go to when we think of AI taking over. However, should AI take over, I think it would be a coincidence should it happen in a way that closely mirrors Skynet.

    Anyways, I'm interested in the subject, but I don't spend much time thinking about it. I'm not obsessed with the near future apocalypse ushered in by AI-- I just think questioning whether it's worth it to open Pandora's box is a reasonable and valuable discussion to have, and I think that you either are not articulating your claims well, or you're dismissing the concern with woefully insufficient evidence.
  37. #18187
    Quote Originally Posted by boost View Post
    All we need to ask is what could possibly set us apart? And there are two categories that answers fall into: Nothing and something supernatural.
    I agree with this to a point. Only, I don't like the word "supernatural", because it implies pseudoscience. I believe we have what can be called a soul, I just also happen to believe that it's possible to explain a soul scientifically, or at least it will be when we have a better understanding. Can we give a soul to AI when we know what a soul is? Well, this is an interesting angle to look at this from. I can accept that AI could evolve to the point of figuring out what a soul is, and if it's possible for us to give AI a soul, then I suppose it's possible for AI to give themselves a soul when they reach a critical point in their learning.

    The thing I would have trouble with is that I'm not so sure we can create a soul artificially.

    I dunno, I mean really I'm just coming back to the concept that if AI were a threat to us, then AI is already a threat to intelligent species that are ahead of us, and we would surely see evidence of this in space. AI wouldn't be bound by the problem of aging while travelling, AI wouldn't need supplies like oxygen and water, they could theoretically travel immense distances and would do so in order to stockpile the resources they do need, and they could theoretically expand and colonise at an exponential rate.

    I can accept that AI might be able to evolve to the point it is actually life-- I don't think so, but what the hell do I know about souls-- but I'm happy that it's not a concern because there is no evidence of extraterrestrial AI aggression that I'm aware of.
    Quote Originally Posted by wufwugy View Post
    ongies gonna ong
  38. #18188
    I guess ultimately, I'm more worried about alien AI than I am terrestrial AI. We humans ourselves are the single biggest threat to our long term survival.
  39. #18189
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    Quote Originally Posted by OngBonga View Post
    Well we get here to the very crux of the question of what exactly life is. I don't think AI can have REAL emotions. It can be programmed to think it has emotions, but actually all it is doing is reacting to environmental stimuli.
    Funny you should say that.

    "Countless modifications during evolution have provided living matter with an instrument of unparalleled complexity and remarkable functions: the nervous system, the most highly organized structure in the animal kingdom.

    The dominant role played by the nervous system is obvious. From its inception, this system mediated ever increasing coordination between the various elements of multicellular organisms, which were essentially unstructured, disorganized, and subject to all the vagaries of the surrounding environment early on. The nervous system provided these animals with the necessary mechanisms for nutrition and defense, and the number, precision, power and coordination of these mechanisms steadily increased. Furthermore, in the highest echelons of life, it also provided optimal means for survival: feelings, thought, and will. In short, the refinement of nerve cells and the overall system formed by them clearly provides the most effective mechanism to improve living organisms.

    Let us now outline the stages of this evolution.

    It is clear that plants and the simplest invertebrates do not have a nervous system, although such forms do exhibit the property of irritability. In other words, they share with all living cells the ability to respond to stimuli from the outside world..."

    Literally page one of Histology of the Nervous System. He goes on to talk about the spectrum of intelligences that starts with plants that chemically react with the immediate environment, to worms with 1, 2, or 3 types of neurons, to reptiles', birds', and mammals' fucked up bundles of however many types of neurons.

    It seems obvious to me that computers are already intelligent. They're a different kind of intelligent from plants and people, though.

    Quote Originally Posted by boost View Post
    Why do we have different reactions to stimuli (or lack thereof?) The argument against your stance is that differing reactions to the exact same stimuli are simply due to the context built from the experiences of previous stimuli. But that's getting kind of deep into the weeds unnecessarily; we don't need anecdotes or hypotheticals to determine whether we have something innate to living beings that AI could never have. All we need to ask is what could possibly set us apart? And there are two categories that answers fall into: Nothing and something supernatural.
    I can think of a third. What could possibly set us apart from AI? That it was built and we evolved.
    <a href=http://i.imgur.com/kWiMIMW.png target=_blank>http://i.imgur.com/kWiMIMW.png</a>
  40. #18190
    a500lbgorilla's Avatar


    Fly brain.
  41. #18191
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    Quote Originally Posted by a500lbgorilla View Post
    Funny you should say that.

    "Countless modifications during evolution have provided living matter with an instrument of unparalleled complexity and remarkable functions: the nervous system, the most highly organized structure in the animal kingdom.

    The dominant role played by the nervous system is obvious. From its inception, this system mediated ever increasing coordination between the various elements of multicellular organisms, which were essentially unstructured, disorganized, and subject to all the vagaries of the surrounding environment early on. The nervous system provided these animals with the necessary mechanisms for nutrition and defense, and the number, precision, power and coordination of these mechanisms steadily increased. Furthermore, in the highest echelons of life, it also provided optimal means for survival: feelings, thought, and will. In short, the refinement of nerve cells and the overall system formed by them clearly provides the most effective mechanism to improve living organisms.

    Let us now outline the stages of this evolution.

    It is clear that plants and the simplest invertebrates do not have a nervous system, although such forms do exhibit the property of irritability. In other words, they share with all living cells the ability to respond to stimuli from the outside world..."

    Literally page one of Histology of the Nervous System. He goes on to talk about the spectrum of intelligences that starts with plants that chemically react with the immediate environment, to worms with 1, 2, or 3 types of neurons, to reptiles', birds', and mammals' fucked up bundles of however many types of neurons.

    It seems obvious to me that computers are already intelligent. They're a different kind of intelligent from plants and people, though.



    I can think of a third. What could possibly set us apart from AI? That it was built and we evolved.
    AI can be designed to evolve. Just throwing that out there.
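    One toy sense of "designed to evolve" is an evolutionary algorithm: random variation plus selection pressure, with no hand-coded solution. A minimal sketch, everything here illustrative and not any real AI system:

    ```python
    import random

    def evolve(target, pop_size=50, mutation_rate=0.05, generations=500):
        """Toy genetic algorithm: evolve random bitstrings toward `target`.

        Fitness is the number of matching bits. Each generation keeps the
        fitter half unchanged (elitism) and refills the population with
        mutated copies of the survivors.
        """
        n = len(target)
        fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for gen in range(generations):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == n:
                return pop[0], gen  # a perfect match has evolved
            survivors = pop[: pop_size // 2]
            children = [[bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in parent] for parent in survivors]
            pop = survivors + children
        return pop[0], generations
    ```

    Nothing in there "wants" anything; the appearance of goal-seeking falls out of variation plus selection, which is roughly the point.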
  42. #18192
    a500lbgorilla's Avatar
    Sure, in their own software environment. I'd like to see one that evolves in our environment, as any AI that wants to go Skynet will need to do.

    edit I'd like to also know why that wouldn't be possible, if that's the case. Is it down to selfish genes? That we've got the self-replicators driving us where the bots wouldn't? Could you engineer selfish genes?

    Seems to me, the next topic up is the origin of life.
    Last edited by a500lbgorilla; 06-14-2015 at 10:41 AM.
  43. #18193
    Quote Originally Posted by OngBonga View Post
    I agree with this to a point. Only, I don't like the word "supernatural", because it implies pseudoscience. I believe we have what can be called a soul, I just also happen to believe that it's possible to explain a soul scientifically, or at least it will be when we have a better understanding. Can we give a soul to AI when we know what a soul is? Well, this is an interesting angle to look at this from. I can accept that AI could evolve to the point of figuring out what a soul is, and if it's possible for us to give AI a soul, then I suppose it's possible for AI to give themselves a soul when they reach a critical point in their learning.

    The thing I would have trouble with is that I'm not so sure we can create a soul artificially.

    I dunno, I mean really I'm just coming back to the concept that if AI were a threat to us, then AI is already a threat to intelligent species that are ahead of us, and we would surely see evidence of this in space. AI wouldn't be bound by the problem of aging while travelling, AI wouldn't need supplies like oxygen and water, they could theoretically travel immense distances and would do so in order to stockpile the resources they do need, and they could theoretically expand and colonise at an exponential rate.

    I can accept that AI might be able to evolve to the point it is actually life-- I don't think so, but what the hell do I know about souls-- but I'm happy that it's not a concern because there is no evidence of extraterrestrial AI aggression that I'm aware of.
    You insist on the need for a soul, but haven't yet defined what exactly it is, what its function is, and why it is necessary. Sure, we don't understand how consciousness arose, but we can make an educated guess that it arose at some point in the increasing complexity of our nervous systems. This requires no magic, no immeasurable entity, etc.

    This same line of thinking comes up in regards to a god or gods. People will concede that it is unlikely that some old bearded white guy is lying about in the clouds casting judgement down on us-- they can see how absurd that is, but still maintain that they believe there is some sort of higher power. Fine, if that makes you feel good, whatever, but I can't see why this claim is being made except to reserve some of the comfort offered by a defined god while casting off the absurd details by way of replacing him with an undefined god.

    To the point, I think if you search for the reason you feel this way-- feel that we have souls, you'll find no evidence, but instead a desire for it to be true, because it is clearly unnecessary to explain our condition of consciousness.
  44. #18194
    Quote Originally Posted by boost View Post
    You assume nuclear launch isn't somehow accessible by way of the internet. Like, this is precisely the point, if a self learning AI gains access to a large enough source of information (the internet is likely to be magnitudes greater than what it would need), it could potentially do what we would consider magic. The limits of our imagination will not be the limits of a hyper intelligence.
    it's certainly possible. i dont think it's reasonable. i think the idea of a mastermind with you-name-it iq tearing down civilization is a pipe dream. we love it in our fiction, and maybe that influences how we think it works in reality.

    I like that you're using Skynet as an example. Sci-fi can be very useful in guessing at the future, because the author already did a lot of leg work for us. However, the assumption that short of a centralized network (which you strangely seem to put nuclear launch capabilities as a parameter for-- as if there aren't a billion and ten ways to wreak havoc with just the internet) the AI would need cyborgs to do its bidding is simply a limitation of your imagination.
    how would it work then?

    Actually, it's not the same, and not nearly the same. The cautious side is saying that once the singularity is reached, the AI will be unpredictable, and therefore could be dangerous to the continued survival of the human race. Do you really equate this to assuming the exact limitations, rate of intelligence expansion, and so on? The cautious side is saying "this could be Pandora's box, and maybe we shouldn't open it." To dismiss such a concern, when the stakes are so high, you need more than assumptions based on a sci-fi action movie franchise.
    the conservative side is assuming just as much of its own premise, and it's the same with all paradigm shifters. that doesnt mean it's wrong though.

    i need much more than just claims that it could be bad. when those things are not backed up by logistics, they're pretty much always wrong.

    So maybe it shouldn't be graphed as competing curves, but one and the same. Once singularity is reached, will we still be in control of the computational power that allows us to exponentially gain intelligence and capacity as a whole? You said it yourself, we are not independently gaining exponential intelligence-- so what happens if we lose control of our tools to the AI?
    i said we are gaining computational exponential growth individually. the human being isnt, but the human individual is. it's just far greater in the network as a whole.

    if we lost control to the ai, it means we spent a long time ignoring the ai.

    this also assumes that ai would be hellbent on destruction of humans, which i dont think is reasonable at all. not only would humans incorporate all new technologies brought by the ai, but ai would probably have next to zero incentive to destroy things. the human incentive to do so is entirely biological. we dont kill because of our intelligence. im not sure why an ai would ever develop for itself the biological incentives that we have.

    First of all, the way you freely substitute robot for AI speaks a lot to your understanding of the topic. I really don't think this is a huge nit-- maybe it is-- but come on, that's egregious.
    i wasnt claiming ai is robotic, i was clearing up where you said you had confusion on the differences between types of entities. i probably shouldnt have added that term, but the previous time i said ai and that didnt work, so i figured i would change it to make the contrast clearer.

    Second, all sorts of assumptions need to be made to quantify the resources harnessed by an AI which has reached singularity and those of mankind. I appreciate your optimism in regards to man's ability to persevere, but it comes across as being based more in faith than fact.
    where is the fact on the caution side? ive followed this for a long time, and have not yet seen any reasonable claims for how ai would takeover. it's dyed-in-the-wool fear of the unknown.

    I don't think any intelligent commenter is saying this. They are saying that it could be the case, and that there is no reasonable way to know that it's not the case.
    then what are they afraid of? every single piece of caution ive seen is based on the premise that humans would be taken by surprise. which means that if we weren't taken by surprise, then we could handle it.

    Why do you assume the decline in likelihood is a permanent trend? The probability was increasing for the majority of time since their inception.
    all up until the end of the cold war.

    the decline is continuing because the various safeguards against it are increasing. these include all varied sorts of things like reduced economic and political incentives and reduced probability of bad actors acquisition of the weapons. after humans are a multi-planetary species, the threat will be totally nullified. decentralization and expansion has that effect.

    Anyways, I'm interested in the subject, but I don't spend much time thinking about it. I'm not obsessed with the near future apocalypse ushered in by AI-- I just think questioning whether it's worth it to open Pandora's box is a reasonable and valuable discussion to have, and I think that you either are not articulating your claims well, or you're dismissing the concern with woefully insufficient evidence.
    im probably not articulating well, as that seems par for the course for me.

    im dismissing something using woefully insufficient evidence because it relies on its own woefully insufficient evidence. i do not think it is reasonable to be afraid of the unknown. tech-apocalypse claimers say they're not dealing with the unknown, but they really are, represented by the fact that nobody has said anything other than "we dont know what would happen". i think this claim is based on a lot of false assumptions too. for example, the one i mentioned earlier, about how it's really just an assumption that a being not being tied to human biology will by default act in such a way as if it were tied to human biology.
  45. #18195
    Quote Originally Posted by OngBonga View Post
    I guess ultimately, I'm more worried about alien AI than I am terrestrial AI. We humans ourselves are the single biggest threat to our long term survival.
    Yeah, this is an interesting point. There always is the possibility that we are either the first to be approaching singularity, or that any other intelligent life that has reached it did so only recently on the universal time scale. But this seems unlikely...
  46. #18196
    Quote Originally Posted by boost View Post
    Yeah, this is an interesting point. There always is the possibility that we are either the first to be approaching singularity, or that any other intelligent life that has reached it did so only recently on the universal time scale. But this seems unlikely...
    if i had to guess, i would say many other species have already hit singularity

    i dont agree that it is reasonable for advanced tech to expand. they probably live in their own virtual realities, sustained in self-contained ecosystems.

    the desire to expand and for progeny is probably lost when tech is advanced enough. the progeny of humanity will probably be digital or something.
  47. #18197
    Quote Originally Posted by wufwugy View Post
    if i had to guess, i would say many other species have already hit singularity

    i dont agree that it is reasonable for advanced tech to expand. they probably live in their own virtual realities, sustained in self-contained ecosystems.

    the desire to expand and for progeny is probably lost when tech is advanced enough. the progeny of humanity will probably be digital or something.
    Right, so I think as opposed to posing a threat, it could just be that once singularity is reached the intelligence increases so rapidly as to render it completely indifferent to the insignificant beings that we are. It may just become a recluse that wanders into the deepest thought-- it could be so intelligent that it has no desire for action. Kind of like the watchers in the Marvel universe.
  48. #18198
    Quote Originally Posted by boost View Post
    Right, so I think as opposed to posing a threat, it could just be that once singularity is reached the intelligence increases so rapidly as to render it completely indifferent to the insignificant beings that we are. It may just become a recluse that wanders into the deepest thought-- it could be so intelligent that it has no desire for action. Kind of like the watchers in the Marvel universe.
    could be. i suspect it will be incorporated into humans. then as time goes by humans will be incorporated into it.

    what i mean is things like we will probably become cyborgs, augmented in every way by nanobots. but then as technology gets even more advanced, we will probably be able to replicate ourselves on digital servers.
  49. #18199
    Yeah, it will be interesting to see if technologies related to physical augmentation will ever reach their peak, or if they will become a less desirable redundancy because we'll be able to just plug into the matrix and leave the restraints of the physical behind. And if that does happen, how long do you think we'll choose to maintain human or even physical avatars in "the matrix"? How long before we are just entities in the ether? How long will we even remain individual entities?
  50. #18200
    spoonitnow's Avatar
    Quote Originally Posted by a500lbgorilla View Post
    Sure, in their own software environment. I'd like to see one that evolves in our environment, as any AI that wants to go Skynet will need to do.

    edit I'd like to also know why that wouldn't be possible, if that's the case. Is it down to selfish genes? That we've got the self-replicators driving us where the bots wouldn't? Could you engineer selfish genes?

    Seems to me, the next topic up is the origin of life.
    3D printing makes this possible inside of the next decade imo.
  51. #18201
    spoonitnow's Avatar
  52. #18202
    just some spoon porn

  53. #18203
    spoonitnow's Avatar
    Quote Originally Posted by wufwugy View Post
    just some spoon porn

    lol right
  54. #18204
    AI has always been my keen interest, always fascinated me. It was my engineering specialization. It's great to see such strides are being made now. We're getting ever closer to figuring out how the brain does it too. My honest expectation is that we'll be able to make beyond-human level AI in the next 15 years. Been spending my (sadly limited) spare time studying up on the recent AI techniques. I predict it's going to grow ever more important. And it's actually relevant to my job.

    If anyone is interested, this is a good playlist to learn more about the recent state of everything surrounding AI:

    https://www.youtube.com/watch?v=CK5w...qveGdQ&index=1

    Some of it is quite technical (lectures), but other videos are more broad.
  55. #18205
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    In honor of the boost to attention whoring the forums have seen today:

  56. #18206
    MadMojoMonkey's Avatar
    Join Date
    Apr 2012
    Posts
    10,322
    Location
    St Louis, MO
    Water under the bridge, man.

    Gotta have a duck's back.
    (Nothin' sticks)

    ***
    Shaddup... it's a saying.
  57. #18207
    rong's Avatar
    Join Date
    Nov 2008
    Posts
    9,033
    Location
    behind you with an axe
    Behavioral economist Richard Thaler is doing an AMA on Reddit.

    Thought you might like him, rilla, given that he argues against the assumption that individuals make optimal decisions.

    From his wiki:


    His recurrent theme is that market-based approaches are incomplete: he is quoted as saying "conventional economics assumes that people are highly-rational – super-rational – and unemotional. They can calculate like a computer and have no self-control problems."
    I'm the king of bongo, baby I'm the king of bongo bong.
  58. #18208
    I wonder where he got the idea that conventional economics assumes that. Because it doesn't.
  59. #18209
    Was gonna say the same, his whole premise is wrong. He's more than 20 years too late to the party.
  60. #18210
    rong's Avatar
    Join Date
    Nov 2008
    Posts
    9,033
    Location
    behind you with an axe
    Wuf, it doesn't trickle down.

    https://www.imf.org/external/pubs/ca...spx?sk=42986.0
    This paper analyzes the extent of income inequality from a global perspective, its drivers, and what to do about it. The drivers of inequality vary widely amongst countries, with some common drivers being the skill premium associated with technical change and globalization, weakening protection for labor, and lack of financial inclusion in developing countries. We find that increasing the income share of the poor and the middle class actually increases growth while a rising income share of the top 20 percent results in lower growth—that is, when the rich get richer, benefits do not trickle down. This suggests that policies need to be country specific but should focus on raising the income share of the poor, and ensuring there is no hollowing out of the middle class. To tackle inequality, financial inclusion is imperative in emerging and developing countries while in advanced economies, policies should focus on raising human capital and skills and making tax systems more progressive.
  61. #18211
    Renton's Avatar
    Join Date
    Jan 2006
    Posts
    8,863
    Location
    a little town called none of your goddamn business
    Yeah I'm sure if you look at the percentage of rich people who are getting richer because they lobby the state to have unfair advantages in commerce, you will conclude that such people's riches do not trickle down. And then you might surmise that all wealth concentration at the top should act the same as that, and be wrong.

    If someone gets rich in a truly capitalistic (i.e. not crony-istic) way, that could only happen because he provided something of great value to a large number of people. Generally, poor people.

    I'd love it if state policies could actually raise the skills and human capital of poor people. So far, they seem to be doing a piss-poor job. It seems to me like a capitalistic world already does quite a lot to incentivize people to raise their human capital, and states seem to blunt these incentives at every turn by penalizing success and rewarding failure.
    Last edited by Renton; 06-16-2015 at 07:09 AM.
  62. #18212
    rong's Avatar
    Join Date
    Nov 2008
    Posts
    9,033
    Location
    behind you with an axe
    As far as I can see, the main thing people dislike about capitalism and free markets is the fact that some people will be forced into low wage jobs and that wage will be barely enough to survive and certainly not enough to live a pleasant life by modern western standards. By allowing the competition of labour to reduce its value to the bare minimum, the fact that corporations have huge incentive to ensure this happens, and the imbalance of power between individual supplier of labour and corporation, some people will get stuck here and will probably have little way of getting out of it.

    But perhaps it's just a fact of life that some people are too stupid or lazy to deserve any better.

    Interesting that lots of evidence points to the fact that min wage increases have little to no effect on employment though.

    So it could be argued that it's nothing more than a preference. We could pay people more via min wage laws and reduce poverty with very little negative effect. Or we say tough, you eat what you kill, get on with it.
  63. #18213
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    Quote Originally Posted by rong View Post
    As far as I can see, the main thing people dislike about capitalism and free markets is the fact that some people will be forced into low wage jobs and that wage will be barely enough to survive and certainly not enough to live a pleasant life by modern western standards. By allowing the competition of labour to reduce its value to the bare minimum, the fact that corporations have huge incentive to ensure this happens, and the imbalance of power between individual supplier of labour and corporation, some people will get stuck here and will probably have little way of getting out of it.

    But perhaps it's just a fact of life that some people are too stupid or lazy to deserve any better.

    Interesting that lots of evidence points to the fact that min wage increases have little to no effect on employment though.

    So it could be argued that it's nothing more than a preference. We could pay people more via min wage laws and reduce poverty with very little negative effect. Or we say tough, you eat what you kill, get on with it.
    I don't think I've ever chimed in with my thoughts on the minimum wage thing.

    @bold, So many jobs have already been lost due to the minimum wage that further decreases (from increasing the minimum wage) are lost in the noise of all of the other factors that go into employment rates, so it's hard to make any real connections there.

    Generally speaking, it's a simple math problem that raising the minimum wage decreases jobs, not some kind of grand conspiracy. There's this idea that raising the minimum wage will force companies to just give more of their bazillions in profits to employees. The problem with that idea is that a unified increase in minimum wage prevents a ton of businesses from being profitable in the first place. Lots of other employment opportunities (particularly in government-funded jobs) get screwed over with this as well.

    Suppose I have a business where I hire high school students to mow yards at $10/hour as independent contractors so that I don't have to account for payroll taxes. I have 50 yards that we mow each week, and the average yard takes two hours to do. That's 100 hours of labor that I have to account for with a total cost of $1,000. If I'm getting paid $40 per yard, then I'm bringing in $2,000 each week. Take out another $500/week in gas, maintenance and other fixed costs, and I'm profiting $500/week before I get fucked by taxes, which probably brings me down to about $400/week if I'm lucky. If I spend an average of 30 minutes dealing with each yard per week, I'm doing about 25 hours/week, and that comes to about $20/hour that I'm paying myself before taxes.

    Along comes a minimum wage hike to $15/hour. That kicks up my labor costs by $500, so my pre-tax profit just dropped to $0. Now instead of $1,000 being distributed in labor each week, there's $0 being distributed, and I have a handful of high schoolers who are out of jobs.

    "Aha," someone might say. "But the high schoolers can come in and offer your clients to mow their yards at the same price of $40 per yard and make $20/hour for themselves since the average yard takes two hours." Even if they were able to maintain my $10 fixed cost/yard that I mentioned above, that would drop them to $15/hour right away. So far so good until you remember they have no equipment. That option just doesn't work out.

    Another alternative to deal with an increased minimum wage would be to raise my prices by $10 per yard (a 25 percent increase). In response to this, let's say that I lose 10 percent of my clients, and now we're down to 45 yards. I now have 90 hours to distribute each week at a rate of $15/hour for a total of $1,350. I'm bringing in $50/yard * 45 yards = $2,250 each week, and that leaves $900. Subtract the $450 in fixed costs (the same $10/yard rate we've been using), and now I'm bringing in $450 before taxes, and this translates to about $325-350/week after taxes. I'm still doing about 30 minutes/yard/week, so I end up doing 22.5 hours of work each week for 45 yards. Before taxes, that comes to about the same $20/hour.

    What we see here is that if I want to make the same amount that I was making before, then I'm going to have to increase the number of yards that I'm mowing. Everyone else who is mowing yards in the area will also be trying to expand their market share. This increased competition drives down prices. It's easy to see the problem here: Somebody is going out of business.
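    The three scenarios above all reduce to the same simple formula: weekly profit = revenue − labor − per-yard fixed costs. A minimal sketch of the arithmetic (the function name and parameters are mine; the numbers are the ones from the post):

    ```python
    def weekly_profit(yards, price_per_yard, hours_per_yard, wage, fixed_per_yard):
        """Weekly pre-tax profit: revenue minus labor minus per-yard fixed costs."""
        revenue = yards * price_per_yard
        labor = yards * hours_per_yard * wage
        fixed = yards * fixed_per_yard
        return revenue - labor - fixed

    # Baseline: 50 yards at $40/yard, $10/hour wage -> $500/week pre-tax profit
    print(weekly_profit(50, 40, 2, 10, 10))  # 500
    # Wage hike to $15/hour, nothing else changes -> $0 profit
    print(weekly_profit(50, 40, 2, 15, 10))  # 0
    # Raise price to $50/yard and lose 10% of clients -> $450 profit
    print(weekly_profit(45, 50, 2, 15, 10))  # 450
    ```

    Plugging in the post's numbers shows exactly where the squeeze comes from: the wage hike alone wipes out the margin, and the price hike only partially claws it back after the lost clients.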

    And the end result is exactly what raising the minimum wage always does: Unemployment goes up, and the people who are still employed (not to be confused with the employers) make slightly more money than they did before.

    Obamacare is another example of shit like this having real-world consequences. It's extremely difficult for people to find full-time unskilled work (even at Walmart or a grocery store). Instead, they get stuck with 25-hour/week jobs because employers can't let them cross the 30-hour/week threshold that would classify them as full-time; the employers simply can't afford that extra cost without throwing their businesses into peril.
  64. #18214
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    Quote Originally Posted by rong View Post
    As far as I can see, the main thing people dislike about capitalism and free markets is the fact that some people will be forced into low wage jobs and that wage will be barely enough to survive and certainly not enough to live a pleasant life by modern western standards. By allowing the competition of labour to reduce its value to the bare minimum, the fact that corporations have huge incentive to ensure this happens, and the imbalance of power between individual supplier of labour and corporation, some people will get stuck here and will probably have little way of getting out of it.
    @bold, Just want to point out that this labor has a value, and if employers can't pay the actual value for this resource, then they simply won't buy it. It's like if lemons were $10/each, no one would buy lemons. Whether or not someone has to "barely survive" is irrelevant to this. They can either get paid for the value of the skills they have to offer, or they can not get paid at all. The alternative is to put a gun in the face of employers and people who are paid what they're worth for higher-paying skills and steal from them (ie: taxes).
  65. #18215
    Renton's Avatar
    Join Date
    Jan 2006
    Posts
    8,863
    Location
    a little town called none of your goddamn business
    Quote Originally Posted by rong View Post
    As far as I can see, the main thing people dislike about capitalism and free markets is the fact that some people will be forced into low wage jobs and that wage will be barely enough to survive and certainly not enough to live a pleasant life by modern western standards. By allowing the competition of labour to reduce its value to the bare minimum, the fact that corporations have huge incentive to ensure this happens, and the imbalance of power between individual supplier of labour and corporation, some people will get stuck here and will probably have little way of getting out of it.

    But perhaps it's just a fact of life that some people are too stupid or lazy to deserve any better.

    Interesting that lots of evidence points to the fact that min wage increases have little to no effect on employment though.

    So it could be argued that it's nothing more than a preference. We could pay people more via min wage laws and reduce poverty with very little negative effect. Or we say tough, you eat what you kill, get on with it.
    http://www.flopturnriver.com/pokerfo...15#post2238815
  66. #18216
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    Whoops I guess there's a thread for that.
  67. #18217
    MadMojoMonkey's Avatar
    Join Date
    Apr 2012
    Posts
    10,322
    Location
    St Louis, MO
    Quote Originally Posted by spoonitnow View Post
    @bold, Just want to point out that this labor has a value, and if employers can't pay the actual value for this resource, then they simply won't buy it. It's like if lemons were $10/each, no one would buy lemons. Whether or not someone has to "barely survive" is irrelevant to this. They can either get paid for the value of the skills they have to offer, or they can not get paid at all. The alternative is to put a gun in the face of employers and people who are paid what they're worth for higher-paying skills and steal from them (ie: taxes).
    I agree up to the point of calling taxes "stealing."

    The system may be absurd in practice, but the representative system is designed to ensure that all taxes originate from local representatives. Which means that if you are getting taxed, then you, or your representative, voted for that tax.

    It is the opposite of stealing. It is a voluntary commission at the time a tax is created. It is mandatory to pay the tax afterward, obviously, but calling it stealing is a petty move to incite emotional responses.

    While I can understand the, "I didn't vote for that!" feeling, it doesn't feel like an enlightened attitude to stand by.
    Sure, it sucks to look at potential income as money going out the door to someone else. The reality, at least on paper, is that the money is going back into your society's infrastructure. Roads, libraries, parks, schools, a legal system, etc. All of these are provided by taxes.

    I think the popular American talking point that 'taxes are evil' fails to address the amazingly good things that come from taxation. Yes, there are corrupt things and bad or outdated solutions to problems in the tax laws, but that doesn't mean the entire system is broken.
  68. #18218
    spoonitnow's Avatar
    Join Date
    Sep 2005
    Posts
    14,219
    Location
    North Carolina
    Quote Originally Posted by MadMojoMonkey View Post
    I agree up to the point of calling taxes "stealing."

    The system may be absurd in practice, but the representative system is designed to ensure that all taxes originate from local representatives. Which means that if you are getting taxed, then you, or your representative, voted for that tax.

    It is the opposite of stealing. It is a voluntary commission at the time a tax is created. It is mandatory to pay the tax afterward, obviously, but calling it stealing is a petty move to incite emotional responses.

    While I can understand the, "I didn't vote for that!" feeling, it doesn't feel like an enlightened attitude to stand by.
    Sure, it sucks to look at potential income as money going out the door to someone else. The reality, at least on paper, is that the money is going back into your society's infrastructure. Roads, libraries, parks, schools, a legal system, etc. All of these are provided by taxes.

    I think the popular American talking point that 'taxes are evil' fails to address the amazingly good things that come from taxation. Yes, there are corrupt things and bad or outdated solutions to problems in the tax laws, but that doesn't mean the entire system is broken.
    Give us your money, or we'll come to your house with guns and imprison you. That's theft and extortion regardless of what is done with the money.
  69. #18219
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    Quote Originally Posted by spoonitnow View Post
    Give us your money, or we'll come to your house with guns and imprison you. That's theft and extortion regardless of what is done with the money.
    Well, it's at least extortion. Probably not theft, with all its implications of law and whatnot.

    Besides, they're just words. Increasingly useless in this day and age.
    <a href=http://i.imgur.com/kWiMIMW.png target=_blank>http://i.imgur.com/kWiMIMW.png</a>
  70. #18220
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    Quote Originally Posted by jackvance View Post
    AI has always been my keen interest, always fascinated me. It was my engineering specialization. It's great to see such strides are being made now. We're getting ever closer to figuring out how the brain does it too. My honest expectation is that we'll be able to make beyond-human level AI in the next 15 years. Been spending my (sadly limited) spare time studying up on the recent AI techniques. I predict it's going to grow ever more important. And it's actually relevant to my job.

    If anyone is interested, this is a good playlist to learn more about the recent state of everything surrounding AI:

    https://www.youtube.com/watch?v=CK5w...qveGdQ&index=1

    Some of it is quite technical (lectures), but other videos are more broad.
    This is good and we should get back to this. Only 4 minutes in, but they're a tight 4 minutes.
  71. #18221
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    In the video he basically says that the first AI could be tasked with building the second AI... and let it go.

    Trying to think this through by jamming my one wrench into this first video's awesome imagined machine. I'm still thinking about how we got here. We built up from self-replicating genes that developed superior replication forms, up to reptiles with their brains that have already figured out basically how to act based on all the various INs that come from the body they're captaining and the world they're sensing. Up to us, with our precious ability to predict the distant-ish future and think recursively.

    Then we built a machine that's basically top down. No reptile module, no competitively designed self-replicator - just the magic of predicting better builds and then building them.

    Looks to me like it could go any way. Including the same old axiom standing - that computers are perfectly stupid.
  72. #18222
    Quote Originally Posted by MadMojoMonkey View Post
    I agree up to the point of calling taxes "stealing."

    The system may be absurd in practice, but the representative system is designed to ensure that all taxes originate from local representatives. Which means that if you are getting taxed, then you, or your representative, voted for that tax.

    It is the opposite of stealing. It is a voluntary commission at the time a tax is created. It is mandatory to pay the tax afterward, obviously, but calling it stealing is a petty move to incite emotional responses.

    While I can understand the, "I didn't vote for that!" feeling, it doesn't feel like an enlightened attitude to stand by.
    Sure, it sucks to look at potential income as money going out the door to someone else. The reality, at least on paper, is that the money is going back into your society's infrastructure. Roads, libraries, parks, schools, a legal system, etc. All of these are provided by taxes.

    I think the popular American talking point that 'taxes are evil' fails to address the amazingly good things that come from taxation. Yes, there are corrupt things and bad or outdated solutions to problems in the tax laws, but that doesn't mean the entire system is broken.
    Thank you for bringing this up. Your logic on the issue is reasonable too. But I think taxes are the definition of theft and I would like to explain why.

    A man comes into your home and takes your TV. As long as you didn't give permission, we all agree that he stole it. If the government does the exact same thing, except instead of your TV, it's your dollar, we do not call it theft. Well if that's the entire story then the side that calls it theft is correct, just based purely on the logic. But that isn't the end of the story, so we have to go deeper. The pro-tax person then claims "they're not stealing because they gave you something in return" (which is what you just did).

    So let's go back to the TV theft: a man comes into your home and takes your TV. Then he returns and gives you a microwave, a pair of scissors, and a gift card to amazon. He even goes so far as to make sure what he gives you equals the amount the TV costs. At least what it costs according to his assessment while deducting some varied costs. Now, the pro-tax person must declare this is not theft because this is exactly what taxation is. Or they can declare that theft is okay. I guess that works too.

    Regardless of how we want to slice it, taking something that is not yours is theft. It doesn't matter what you "get back" in return and it doesn't matter if the theft could possibly make your life better. It is still theft and needs to be called such.
  73. #18223
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    Literally define theft: the action or crime of stealing.

    Literally define stealing: take (another person's property) without permission or legal right and without intending to return it.

    "crime" "legal right"

    Yeah, taxes are theft.

    You guys are worse than SJWs with your twisting of terms.

    And any time you want to drive for them to be seen as the big bad gov't taking without any regard for you, the other side can drive for the common-collective and communal authority taking to maintain a workable society. It's just wordplay.
  74. #18224
    Quote Originally Posted by a500lbgorilla View Post
    Well, it's at least extortion. Probably not theft, with all its implications of law and whatnot.

    Besides, they're just words. Increasingly useless in this day and age.
    It is theft and extortion. Also racketeering. I'm not up to date on legalese, so I can't list every crime the government commits. But if it was subject to its own laws, it would have gotten the chair a long time ago.
  75. #18225
    a500lbgorilla's Avatar
    Join Date
    Sep 2004
    Posts
    28,082
    Location
    himself fucker.
    Or, as they call it, rhetoric. "language designed to have a persuasive or impressive effect on its audience, but often regarded as lacking in sincerity or meaningful content."

    Well, maybe not that last bit...
