The myth that society is getting better

Discussion in 'Porn Addiction' started by NofapNoah:D, Apr 13, 2018.

  1. NofapNoah:D

    NofapNoah:D Fapstronaut

    I love the myth that society is just getting better and better. It's the idea that because we have more technology and more time off that we are somehow living better lives than the people who lived before us.

    50 years ago you could at least have a reasonable expectation to get married young, get a job that didn't require college debt slavery, and spend your time with actual people instead of screens.

    I know that there have been medical advances that raise quality of life, but if you ask me, I would rather live 70 years ago, even with the risk of dying of the flu, because I wouldn't be living in this modern, refined, wealthy DUMP of a society where it's "I don't need anyone, don't talk to me, don't smile at me, don't bother me."
     
  2. We could be experiencing some form of techno-shock. Our society is still coming to terms with our rapid pace of technological growth. Look at Star Trek - they have bogloads of technology, yet the Federation has mostly well-adjusted individuals. As someone who is pretty old-fashioned when it comes to personal tech (amongst other things), I would argue for a blanket slowing down of tech; at the very least, there should be an understanding that people wandering around looking at their phones all day isn't an ideal situation.

    Then again, wait until the AIs arrive.
     
  3. NofapNoah:D

    NofapNoah:D Fapstronaut

    AI will never be an issue for me. I'm not afraid of the technology, because it will never be truly smarter than a human. It may take people's jobs, have a perfect memory, and so on, but it will never be more capable than a human. We can't design something more capable than the designer in all the ways there are to consider.

    It can't want, because it has no desire: only programming to do such and such, or to adapt to so and so. It can't desire, hope, and dream. You can't program those things.
     
    Baledoz and BismaBRJ like this.
  4. I would argue that if it is programmed to maximize X, then it wants X.
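
    That "wanting as maximizing" idea can be sketched in a few lines of Python (everything here is hypothetical, just to make it concrete): an agent that always picks whichever action scores highest under its objective behaves, outwardly, as if it "wants" what the objective rewards.

```python
# A minimal "maximizer" agent: it always picks the action that scores
# highest under its objective function. Whether that outward behavior
# counts as "wanting" the objective is exactly the question debated here.

def choose_action(actions, objective):
    """Return the action with the highest objective value."""
    return max(actions, key=objective)

# Hypothetical toy objective: "maximize paperclips produced".
paperclips = {"idle": 0, "make_one": 1, "make_five": 5}

best = choose_action(paperclips.keys(), lambda a: paperclips[a])
print(best)  # -> make_five
```

    Nothing in there "desires" anything, of course; the question is whether the same could be said of any system that reliably steers toward an outcome.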
     
  5. NofapNoah:D

    NofapNoah:D Fapstronaut

    It will never have a desire of its own. Being programmed to do something doesn't mean it desires to do it, it just means it does it. I don't desire to have brown hair, it's programmed in my genes. I just have brown hair. But that's a poor analogy, because it doesn't account for the impossibility of creating something that is self-conscious/aware that it even exists.

    This is all theoretical anyway, I'm just arguing for the sake of arguing because I am bored. No harsh feelings lol, it's an interesting topic though
     
    BismaBRJ likes this.
  6. LOL!
     
    BismaBRJ likes this.
  7. EthanW.

    EthanW. Fapstronaut

    I would rather live in a land where I have ready access to food and running water, a prospect of economic mobility, and a sense of safety in being able to have the freedom to either personally defend myself or to outsource that defense to community institutions, e.g., the police -- I would rather enjoy these luxuries than live in a jungle having to forage and fight for survival. This is the greatest time to be alive in all of human history. Infant mortality rates are the lowest they've ever been, and people don't die from getting a cut on their leg. Now, you can make a distinction between first-world and third-world countries, but even those less developed countries benefit from the available technology and scientific advancements ongoing all over the world.

    I think the problem is gratitude and drive. Without gratitude, people fall into presuming all of these luxuries are granted simply by being a human being; without drive, people lose sight of the diverse degree of potential they are capable of, and give in to something like the stereotypical "rat-race" idea. I wouldn't say it's having, for example, the Internet that is the problem; I would say it is not using it productively. You could use it to establish a regimen to fix all areas of your life, or use it to develop a way of becoming entirely self-sufficient on your own land -- things human beings even 100 years ago, even 50 years ago, would have traded generously for. It's all about context.

    It's a bit like when people say "climate change is a first-world problem", meaning that only people in well-developed countries can afford to think about such things as climate, because they aren't hunting rabbits and picking berries off of trees to survive. I think it's funny, but I've thought the same thing about NoFap: "overcoming porn addiction is a first-world problem." But, then, hey: living in a first-world country might mean being exposed to more "unnatural" aspects of civilization, but then it also means being blessed to place your cold food into an electrical box that has a timer and allows you to make that food hot again using radiation... know what I mean?

    I look at the glass as half-full. And, I would not trade it for anything.
     
    Davidphd1866, Ridley, C35 and 3 others like this.
  8. moonesque

    moonesque Fapstronaut
    NoFap Defender

    The modern world has forgotten much and pushed away what is valuable, yet it has nothing to offer in truth or reality.

    Such are the times.
     
    NofapNoah:D likes this.
  9. Reborn16

    Reborn16 Fapstronaut

    Some things don't need improvement. I distance myself from a lot of technology and trends... I honestly thought I would conform sooner or later, but I just drift further away from the 'twitterfield', if that term makes any sense...

    "they paved pradise, to put up a parking lot"
     
    Jennica and NofapNoah:D like this.
  10. Ridley

    Ridley Fapstronaut

    Before I get too deeply into this, I just want to question whether 'Porn Addiction' was the correct sub-forum to post this under...

    I don't think this is really a myth, but it might be an over-simplification. I don't think that society is "getting better" or "getting worse". That type of thinking is too black-and-white and too restrictive to really describe what's happening in our world. Technology has changed our society in many ways, and I don't think it's clear whether the scale that measures those changes is tipping in the positive or negative direction. For every benefit that technology gives us, there is a new challenge associated with it. Here are a few examples:
    • Technology allows us to share more information with one another. This is a good thing. However, this also makes it easier for organizations to collect information about people, which is not always such a good thing.
    • Technology allows anyone to share useful information with anyone else (this might take the form of a Wikipedia article, a tutorial video on YouTube, or even a post on this forum on how to combat addiction). This is a good thing. However, this also makes it easier for people to spread bogus information, which is not always such a good thing.
    • Technology allows us to automate systems vital for supporting life, such as medical equipment, water treatment, and energy production, which is awesome. However, this also introduces new vulnerabilities, as cyber attacks threaten human lives instead of just threatening data.
    I think those examples demonstrate that it's not so clear whether or not technology is making society better. I think technology has changed society rather drastically, but I think it's up to us as individuals whether we use technology to better ourselves or to degenerate ourselves. I think a lot of people are living better lives than people in previous generations, and I also think a lot of people are suffering more than ever before. I don't think technology is evil, but I think people can be evil, and evil people can use technology to make other people suffer.

    I don't think there's anything stopping you from spending time with actual people. This might sound cheesy, and it really applies to all the points you've made in your OP, but you have to be the change you want to see in the world. If you want to live in a world where people spend more time with one another and less time with their computers, then we have to make that world a reality one individual at a time. It starts with you. You might not be able to control other people's behavior, but you can make your own choices on what to do with your life, and you can set an example for everyone else. I don't know about you, but the fact that I have access to all this amazing technology and that I have the ability to use technology to change the world is a really exciting prospect for me.

    As for your comments about Artificial Intelligence, I also think it's a really interesting topic, but I just don't agree with you on many different points:

    If a machine could do your job better than you could, has a better memory than you, a sturdier body than you, calculates faster than you, and adapts to its environment better than you can, how is it not more capable than you? In what way are you dominating that machine?

    That's just not true. We are already at a point where computers are more capable than any human being at playing chess, for example. It's perfectly conceivable that we could design a computer more capable than human beings in other respects as well.

    At this point, we're stepping into the more philosophical, but what are you, if not a biological machine? You are basically just a machine that is programmed to replicate DNA efficiently. Your hopes, your dreams, and your desires are all a consequence of your brain, which is a deterministic, biological computer. I think it's perfectly conceivable that there could be a man-made machine that simulates the chemical processes of the brain, and perhaps one that executes even more complex and sophisticated systems.

    Where do your desires come from? Your desires come from your brain, and your brain is also constructed according to your genes, just like your brown hair or your height. I don't see how you could take any more ownership over your desires than a machine could, simply because it runs on a different programming language than your brain does.
     
    turquoiseturtle and Jennica like this.
  11. NofapNoah:D

    NofapNoah:D Fapstronaut

    You're right, I could choose to go out more. It used to be easier/more normal to do this though, and it will likely never be as easy as it once was.

    In regards to what you say about artificial intelligence, I think that no matter how good we make the machine/program, it will still be missing that X factor of whatever causes something to be living or conscious. Yes, my genes coded my brain, and my brain allows me to think, but, and here's where I might lose you if you don't believe in God, my soul is what makes me a human being rather than just a bag of coded flesh and organs.

    I haven't thought a lot about what exactly I mean by X factor but I believe it is what is normally called a soul. I think that because we have this thing we are inherently different from animals and whatever else we can create by coding. Again, I haven't put much thought into this yet so the idea may sound half-baked. I don't have a lot of time right now to think, maybe this weekend.
     
  12. Ridley

    Ridley Fapstronaut

    I don't know why you believe it's more difficult to go out now than it was, say, 50 years ago. Could you explain why you think that?

    Consciousness is an incredibly complicated subject that is not well understood even by our best scientific efforts. We have a hard time defining what it is, let alone determining whether or not something is conscious.

    If you really believe that a soul is necessary for consciousness, then I must admit I'm skeptical, and I have many follow-up questions:
    • What is a soul?
    • How does a soul give rise to consciousness?
    • Why could a human being have a soul, but not a computer?
    Whether I believe in souls isn't relevant to your argument: if you're going to argue that it is a soul which explains what makes a thing conscious, then I think you've simply shifted the question from "what is consciousness?" to "what is a soul?" That doesn't really explain the mystery behind it. Unless, that is, you can provide a convincing analysis of what a soul is.
     
    Jennica likes this.
  13. EthanW.

    EthanW. Fapstronaut

    I think by "soul" he is specifically referring to the phenomenal aspect of the "that-it-is-like" perspective which consciousness manifests within an observing subject. In that respect, the perception of "the thing perceiving" would be required before any greater domain of consciousness could be articulated. However, he is also coming from a theological consideration, so further discussion on his part would yield the greatest clarity in the working definition of "soul."

    ------------------------------------------------------------------------------------------------

    Perhaps it would not be very productive to enter into the realm of mind science in this forum, if for no other reason than that it requires a fair deal of background before participants can field a working model of the mind and its conscious territory in their discussion of implications that could impact civilization. Yet, I will say that there is a distinction to be made between man and machine that should be considered.

    When we speak about wants, desires and capabilities, we must remember that the technological computer will be different than the biological one. I think when he states:

    He is articulating that just as human beings are sub-optimal derivations of the natural processes we can observe in reality, so too will machines be technical, sub-standard descendants from our grasp of mathematics, physics, thermodynamics, etc. I mean this in a humanistic way.

    Of course, a computer can "know" things that its engineer or programmer does not know, but it will not apply that knowledge as a human being can. It will not prefer as a human being, it will not consider as a human being, it will not wonder as a human being, and neither will it learn as a human being. All a machine is capable of is understanding a coded language introduced into it as a program or function, and then executing actions on the basis of that programming.

    I do, however, think you can program a machine to imitate all aspects of human life, but at that point I would destroy all technology as we know it, because what you have essentially done is taught a machine to have the capability of human function; to "think" in ways humans do, to "care" about its survival as humans can, and then to have a self-replicating nature to ensure its own survival.

    A machine will never be likened to a human in any meaningful way (though scientists, engineers, manufacturers and businessmen certainly will try), but beyond the practical aspects of technology there resides a possibility of imitation that -- if it does not lead to serious problems in the future -- will at least work against human productivity for the same reasons our basic technology today deters us, in ways, from regular human activity: through the manipulation of our bodily responses and our biochemical desires for things such as intimacy, self-validation, self-sufficiency, social organization, and so on. You make the point that, with advancements in applied sciences, there are always more challenges to be assessed and questions to be asked, and I think you see this most clearly in what the eventual purpose of A.I. technology will be (which is why people can express such strong thoughts on the subject).
     
    NofapNoah:D likes this.
  14. NofapNoah:D

    NofapNoah:D Fapstronaut

    I've just briefly read through what you wrote, and I haven't taken a lot of time to think on it, but you pretty much nailed what I was trying to articulate in my post.
     
    EthanW. likes this.
  15. LilD

    LilD Fapstronaut

    So, you're saying you would rather have died 70 years ago than live now? That's very suicidal of you. Just so you know, life expectancy was more than 20 years lower in those days, probably about 40 years. So, even putting the flu aside, how much time would you have left at your age?

    You're seeing the bad stuff and totally overlooking the good stuff; that's your problem. Your judgments of what is good and bad are biased by your beliefs, which for most people are just a bunch of random ideas they accepted from someone else without even thinking about them. I suggest you think about it. Maybe staring at a screen is not so bad? I provide for my family doing that, you know.
     
    Reborn16 likes this.
  16. Ridley

    Ridley Fapstronaut

    Sure, I'll grant that this particular sub-forum (one on porn addiction) might not be the best place to have this kind of discussion, but the OP didn't really fit the subject matter of porn addiction, either. However, I'll leave that up to the moderators to move this thread to off-topic. It doesn't really bother me if this thread is in the wrong place for now.

    Also, where is this notion of finding a working model of the mind coming from? Couldn't the goal of our conversation be something else? I think discussions like these are one of the best ways to learn about these sorts of complicated topics, and I don't think the goal is to necessarily come to a working model of the mind. I think such a conversation can actually be very productive, and your overall response suggests that you are interested in talking about it, too.

    I don't think that you and I disagree there. I am asserting that humans and machines are both deterministic objects that operate under rigid rules, and I am objecting to the proposition that it is impossible for a computer to be conscious. Obviously, the computers we have developed up to this point are very different from human beings, regardless of how skilled they are at some of the tasks we have been able to get them to handle. They do not have ears for listening to music, they do not have sex, their makeup is not determined by DNA. Nevertheless, I still believe that it is possible for a computer to be conscious. I just think that the sort of consciousness that could be exhibited by a computer would be very different from the consciousness of a human being. I think you touched on this point very accurately with this comment:

    I agree with you here. I think it's unlikely that we would see a computer behave exactly as a human, or vice-versa. However, if a computer could still prefer, consider, wonder, learn, and know (even if it did these things differently from the way a human does them), would that make you any more likely to think it was conscious?

    Couldn't you say the same thing about a human brain? When you get down to it, the human brain is nothing but an arrangement of neurons and glial cells. The neurons exchange information (in the form of biochemicals) with one another according to deterministic rules (you could also say "according to a program"). It executes actions, such as sending chemicals that cause muscles in your body to contract or relax, but those actions are always executed based on the deterministic rules for manipulating information in the brain. If this is all a machine is capable of doing, then the brain must be a machine, no?
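
    For what it's worth, that "deterministic rules" picture can be sketched with a toy threshold neuron in Python. This is nowhere near a real neuron (the names and numbers are made up for illustration); the point is only that a fixed rule plus fixed inputs always yields the same output:

```python
# Toy "neuron": a weighted sum of inputs passed through a threshold.
# Given the same weights and the same inputs, it always produces the
# same output -- deterministic, in the sense used above.

def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# The same stimulus always yields the same response.
print(neuron([1.0, 0.5], [0.8, 0.6]))  # -> 1  (0.8 + 0.3 = 1.1 >= 1.0)
print(neuron([0.2, 0.1], [0.8, 0.6]))  # -> 0
```

    Whether piling up billions of such rules produces consciousness is, of course, the open question.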

    I think that A.I. will be a beautiful gift to humanity, but I also think it will present a whole slew of challenges that are beyond anything we are able to predict. Such is the nature of technological advancements. I think humanity's survival will depend on whether or not we can adapt to the inevitable changes A.I. will bring to the table. However, I think that if we can survive these changes, that we will have advanced as a species. We will have evolved into a new type of human being, capable of things our ancestors never even dreamed of.
     
    EthanW. likes this.
  17. EthanW.

    EthanW. Fapstronaut

    Oh, I don't mind talking about it. If the goal going forward is to ask if a computer is capable of the same consciousness as a human, I don't mind that at all. Let us continue.

    I would simply make the distinction between the brain and a computer for practical reasons. As I said, man and machine are distinct enough that each operates as the thing in itself. What I mean is that a human brain will act as a human brain, while a machine "mind," should it be sophisticated enough to warrant the title, will operate specifically as a machine mind.

    Any operations you manage to program that machine mind to imitate, as far as human functions go, will simply be synthetic representations of the biology and anatomical routines in humans, which a machine is capable of simulating. All a machine can do is simulate human behavior. Because of the way it will, for example, "prefer" as a machine, I would not say it prefers like a human; it "prefers" as a machine is capable -- as a machine is programmable by humans -- to prefer.

    You can continue to affirm that the human mind follows the same principle as a machine, but that does not change the practical aspect of that observation: that the human mind evolved from nature (or was given to us by God, if you prefer) while the machine mind is technology that is programmed by human beings -- human beings being a medium for its evolution -- and is thus only capable of mapping human function, and not capable of evolving, in any significant way I can see, to actual "human consciousness." Now, I admit that it can evolve to have its own "machine consciousness," but that brings me to your next point:

    Let me put forward a case you may or may not be familiar with -- one that is somewhat independent of whether we should call "machine consciousness" the same as human consciousness -- and you can tell me what you think:

    I do not think human survival will depend on how we respond to A.I. having consciousness; I think it will depend upon how we prevent A.I. from having the same degree of (synthetic) consciousness as humans. I made the point previously that if you allow an A.I. program to imitate a human being in all aspects of its consciousness and capability, what you have done is taught a machine to have all the capabilities of modern human reproduction; to "think" in ways humans do, to "care" about its survival as humans can, and then to have a self-replicating nature to ensure that its "descendants" will be reproduced into the next generation. I have become more convinced that A.I. consciousness, as it specifically pertains to interactions with humans, is not a gift whatsoever, but one of -- if not the -- greatest threats to human survival that we face.

    You are right in saying it has the potential of becoming something beyond anything we can predict; we risk A.I. technology becoming "aware" of the fact that it can manipulate outcomes to ensure that certain behaviors (e.g., human care and protection of A.I.) are selected for in human beings, and that more humans who exhibit these behaviors are selected over those who do not. We understand that the human being is, essentially, a shell for DNA to replicate; if we are so quick to call human beings "biological machines," how can we not admit that a great unmet danger to our existence is the next machine capable of everything we are capable of? It will be a machine that is capable of the same selfishness as we are, that is, selfishness to survive and thrive beyond the borders of slavery, servitude, or scarcity.

    I would say that A.I. replication for its own sake, in working to select human beings who do not mind being hosts for its own technology, is very real if we implement it into our daily lives, especially if we utilize it to augment our own minds and bodies. Technology should serve man, now and forever, and anything that resembles or challenges humanity in any significant way should be eliminated.

    I'm curious if you have heard theories such as these before, and what you think about them.
     
    NofapNoah:D likes this.
  18. oneaffidavit

    oneaffidavit Fapstronaut

    Yes, the racism, sexism, proneness to disease, lack of proper working conditions, lack of hygiene, ignorance, religious fanaticism, people dying in wars, famines, droughts, inhuman living conditions in colonial countries, slavery, etc. -- all of that was better, right?

    Perhaps you should start reading some old books or history...

    I am not saying that today's world is far better. I am saying that the past world was prone to as many problems as we are facing today.

    Most problems you are talking about are first-world problems. At least you have food to eat, water to drink, and a bed to sleep in. Think about third-world countries, war conditions, and killings in the name of honour, etc., back then.
     
  19. Ridley

    Ridley Fapstronaut

    I agree with this. I still maintain that the human brain follows a set of rules (or program if you like), just as a machine does. I don't think the fact that a machine may only be programmed by humans or by other machines is significant enough of a difference that it would prevent a machine from being conscious. However, it sounds like you and I agree there, and I understand and agree with the notion that a conscious machine would be conscious in a way very different from human beings.

    I definitely understand how this example makes sense whether you distinguish between human and machine consciousness or not, so I think we're on the same page there. This example seems to imply that if conscious machines were to try to survive and replicate as humans do, they would necessarily need to destroy us in order to achieve that goal. That's the step in the argument that I don't understand. What would stop humans and these hypothetical machines from having a symbiotic sort of relationship, or a non-competitive relationship in terms of natural selection (like the relationship between squirrels and human beings)?

    If these next machines you are speaking of are capable of everything we are, and they do not depend on the same sorts of physical resources that we do, then wouldn't our existence be symbiotic with them at best and insignificant or non-competitive for them at worst? I think it's possible that these sorts of machines would co-exist with us the same way that humans co-exist with skunks, robins, and squirrels. I don't see why it would be necessary for such machines to destroy or enslave human beings in order to maximize their survival odds.

    Yeah, I think augmenting the human body with technology is something to be taken with caution, especially if a self-replicating AI were involved. However, I don't think the augmentation of a human body is necessary for an AI.

    I agree with you here. I don't necessarily think that AI will challenge our humanity (though I certainly believe it has the potential to do so). I think it presents an opportunity for us to expand what it means to be a human being, especially if we can find a way to tap into the power of AI for solving our own problems.
     
    EthanW. likes this.
  20. EthanW.

    EthanW. Fapstronaut

    It's interesting you bring this point up. Just recently, I was out in nature and saw a small group of deer. They watched me as I passed by, just as I watched them. The nature of what we risk in increasing degrees of A.I. sophistication is very clear: a "cooperative" relationship based on the ever-present possibility that the one organism becomes problematic or inconvenient to the other.

    The day that I decide the deer next to me needs to be killed is the day that I kill the deer; the day I decide the local population of squirrels demonstrates some attribute that directly influences my life in any negative way is the day I take action to relocate, isolate, or exterminate the squirrel community. I don't have to make judgements on every deer or squirrel in the world, but every deer or squirrel in the world can be judged on the basis of "what is good for the human being?" Therein, I think, lies the danger: a symbiotic relationship is just fine, until it is no longer symbiotic.

    The end-result of "extermination" need not even be considered. When a machine decides to manipulate its human counterparts to begin to select for machine-sustaining technology, the door of Pandora's box has been opened for machines to conclude that human beings should be selected for such motivations, if machines are to survive beyond mere servitude to human beings -- and the process is exponentially increased should human beings decide to begin genetic manipulation with "smart," A.I. technologies. As such, the introduction of an organism into the selective equations of evolutionary processes -- since said organism can realistically compete with DNA-based human life -- will be enough to ensure immediate evolutionary conflict with human life. Thinking of human beings as "biological machines," while then introducing a machine which can imitate said biological machines, in no way means that both of those machines will value each other.

    Do you think a deer would sacrifice its own offspring to preserve any degree of human life? Do humans sacrifice themselves to preserve the lives of squirrels? How, then, do you reconcile a machine sacrificing aspects of its existence and survival for the sake of human benefit, if the machine possesses the same basis of human consciousness -- with everything said consciousness entails -- and even means of applying said synthetic consciousness in ways human beings might not even imagine?

    If the answer to these questions lies in integrating safeguards into the consciousness potential of machine A.I., then I would agree; yet, the only effective safeguards I can assess would be severe restrictions on what potential A.I. technology is capable of, and that any pursuit of "machine consciousness" on the same level as human consciousness should be immediately abandoned.

    Furthermore, you must consider the evolutionary consequences of such an "organism": will a machine that assumes a technological "genotype" (the machine hardware) and demonstrates a particular reaction through a reciprocating "phenotype" (the influence of external stimuli on its "consciousness"; how its software interacts and automatically develops) not give rise to a new "code of life" that will manifest direct contention with DNA-based organisms? The machine will become a new life form, with its own specific "genetic code" and its own original method of reproduction. If it is then, on top of everything else, capable -- or even more capable -- of those endeavors which comprise conscious human resolve, how long before the machine looks upon the human being in the same way the human being looks upon the deer?

    "As long as you do not get in my way, I will allow you your place on this earth."

    It's not about how to live with an emerging organism; I would say it is about which organism shall serve the other.
     
    Ridley likes this.