AI and The Future of Humanity.

Editor's Note: This dialogue comes from a symposium on AI in which Carrot (wearing his philosopher hat) was the respondent to a series of questions from the hosts of the talk. We support our writers' endeavors beyond climbing and always encourage them to share their work when possible. That said, this topic is relevant to climbers, but in a vastly general way. The views of the writer do not necessarily reflect the views of theDIHEDRAL.

Carrot’s Intro:

Thank you for having me today. I've been focusing my research on futurism for a while now, and I think it's one of the most important discussions we can be a part of, so I also want to thank everyone for showing up and being part of this conversation.

We are currently in the midst of a wave of change so enormous that it is increasingly difficult even to imagine, let alone comprehend, what's coming. In addition to the AI models currently available for public consumption, we are also seeing previously unimaginable changes in quantum computing, virtual reality, augmented reality, reinforcement learning, superintelligence, nanotechnology, CRISPR, global warming, robotics, and world-changing materials such as graphene.

The development and implementation of any one of these is enough to change the world beyond anything the planet has ever experienced, and the realization that these changes are all developing in unison is quite literally incomprehensible.

Change is inevitable, but the rate of change and the drastic nature of what is on the horizon should have anyone who is uncomfortable with uncertainty shaking in their boots.  Not to suggest that change is bad, but the world we’ve been living in for the last 10,000 years is not the world we will be living in tomorrow.

At the tip of this transformative iceberg are learning algorithms and generative AI, whose applications have already changed the world in drastic ways, including political movements, advanced weapons, and job security. Prime for the next jump will be advanced technological personal assistants, historic medical breakthroughs, and occupational replacement. From there we can only guess what is to come, and hope that we will still be here to witness it for ourselves.

Discussion Questions (questions in black; Carrot's answers in blue):

  1. What is Artificial General Intelligence?  What are the different types of artificial intelligence?
    • AI typically refers to non-biological intelligence, but I am more of a substrate-neutral kind of guy, and so think that all intelligence is artificial; carbon-based biological intelligence gets no preference over silicon-based intelligence from me. The fact is that anything we refer to as intelligent will be following the same physical laws, so whether biological or not, intelligence is intelligence. The ability to accomplish complex goals is a mark of intelligence. Artificial General Intelligence is the ability to accomplish any complex task at least as well as a typical human. Superintelligence, then, would be any intelligence that exceeds human intelligence.
  2. AI technologies are rapidly transforming, or at least challenging, education and educational practices in an unprecedented way. Can we discuss how this is impacting students, faculty, and our administration?
    • In the short term, not much will change. We'll all start to get comfortable with the implementation of generative AI, similar to the way we all had to adapt to technologies such as YouTube, PowerPoint, and Wikipedia. They are disruptive changes, but not method-annihilating. In the long term, there will be very little need for administration or faculty, but the role of the student is something we need to protect at all costs. It's something I have been advocating for years: our emphasis needs to be on education, not on success, graduation, matriculation, or jobs. The benefits of a liberal education are benefits in themselves and should be promoted as the highest level of human endeavor.
  3. Can AI technologies be used to filter out, or flag, human bias? For instance, one of the things our college does every year is assess the quality of our courses and student work. We currently have faculty do this using a shared rubric. We have done some work comparing faculty assessments to AI assessments. Thoughts?
    • There is no doubt that AI can do this; the real question is whether or not it should. I'd recommend that this is not something professors hand over to AI. There is a gray side of assessment (as opposed to black and white), and at this point I don't think it's something our AI models are close to mastering.
  4. Many persons have raised concern about the safety of AI in our society – some even arguing that it may constitute an existential threat to civilization. What are the risks of this technology? How might we avoid these risks?
    • It may be important to remind ourselves that existential risk is a part of life. Nearly every organism that has come before us is now extinct; our time is coming, it's just a matter of when. AI is without a doubt an existential risk to all life on earth. One of the major discussions in the ethics of AI is the Control Problem: how could we control a superintelligent AI? The short answer is that we don't know. If you aren't familiar with the problem, there are lots of examples. Nick Bostrom of the Future of Humanity Institute gives the paper clip example, in which an AI is given the simple task of making paper clips, without constraint. This simple task, pursued without foresight, could lead to extinction as the AI converts everything around it, using every available resource, into paper clips; when earth is out of resources, it will continue its task on other planets, making it to the asteroid belt and onward, until the entire universe is nothing but paper clips. We run into similar problems with tasks such as making money, or making sure people are happy.
    • Unfortunately, we cannot avoid all the risks. This technology is coming, but having conversations like this to make sure everyone understands the possibilities is a step in the right direction in order to try and mitigate some of the risks. 
  5. AI will transform the economy in a very rapid manner as old jobs are replaced and new jobs open up. Let's discuss this a little bit. While we tend to focus on how this may increase joblessness, can we also consider how it might increase the economic output of the nation by orders of magnitude?
    • A crash will come, and if infrastructure isn't in place, suffering will occur for a long time. Some jobs may be safe: athletes seem to have job security, stage actors as well. People who have a skill will have a place in the future economy. For example, if you can make pottery or embroidery, folks with money have shown that they will pay a premium for handcrafted items. We see this with people who prefer to buy authentic Native goods rather than something identical at Wal-Mart, and I think that will hold in a world shared with AI. But in the long term, I imagine that most people will be pretty well off. The economic output of machines that can mine asteroids made of diamonds, and ore from uninhabitable planets, is inconceivable. At that point I cannot foresee anyone objecting to the usefulness of a Universal Basic Income. Ultimately, though, it's not money that worries me; it's purpose. Jobs offer much more than money, and losing whatever that 'much more' is should be front and center in any conversation about the future of humanity.
  6. Recently, AI was used in the campaign of Imran Khan of Pakistan. His AI avatar gave a victory speech for his election while he, himself, was in prison. How might AI disrupt our perceptions in the political sphere?
    • We are only beginning to see the role of deepfakes within the political climate, and it's not so much the deepfakes that concern me as the propagation of information without checks and balances. We've seen over the last few election cycles here in the US, and more drastically in other places such as Myanmar and Sri Lanka, the dangers of misinformation. Until social media companies prioritize truth over profit, we are at the mercy of their control.
  7. Let's consider how AI will impact morality and values. Philosopher William MacAskill gives the example of value lock-in. For instance, he describes how the values of Confucianism (although widely debated initially) eventually became locked into Chinese culture for over a thousand years. He warns that AI could inadvertently lock in a certain set of moral values depending upon how it is used and directed. How might AI subvert our moral consciousness?
    • I don't particularly believe in the notion of morality or consciousness, but I understand the point. We've seen this before with religion, and more recently with tribal politics; I don't think this is something particularly unique to AI, and it's impossible to predict whether the effects would be much better or worse than what has come before. Perhaps we'll have to let future humans, or a future intelligentsia, answer this one?
  8. What are some of the effects that this technology will have on our society? We can consider the idea that technology has a replacement effect, where new technologies replace old ones. Which technologies should we expect to be replaced?
    • Literally all of them; there is no definable limit to what future technology will bring. As hard as it would have been for a 12th-century monk to predict TikTok, whatever is coming down the line is orders of magnitude beyond that.
  9. Can a machine think? Can we explore the philosophy of mind here and discuss whether or not it might be possible for AI to have a mind and think?
    • I generally don't entertain the existence of minds; it doesn't seem to me that minds add anything to the discussion of what it is to experience. That being said, I don't think humans have minds, and I don't think AI will have minds either. In both cases there is a complex system of inputs and outputs, and while I feel okay calling that process 'thinking', I don't see the need to introduce the concept of minds into the equation. In short, however, I do believe that machines can and will possess the quality of thought.
  10. Conversely, if AI becomes self-aware at some point, will we have a moral duty towards it? Let's imagine a scenario in which AI appears by all our measures to be self-aware and sentient; would turning off the machine amount to murder?
    • There is no doubt that pigs, dolphins, cows, birds, reptiles, and fish have some level of awareness, in some cases a deep level of awareness, and we slaughter and eat these types of beings by the millions, even at the expense of our own health and the health of the planet. So, while I do think that turning off a machine, if that amounts to eradicating its continued existence, would constitute an iniquitous killing, I do not think that, without a major shift in what types of things deserve legal protection under the law, humans will tolerate the notion that we aren't the preferred inheritors of the earth. Thus, future humans will have no problem "murdering" self-aware and sentient machines.
  11. Postmodernism. One of my favorite philosophers, Jean Baudrillard, introduced the concepts of the simulacrum and the hyperreal. A simulacrum is a sign that has no referent. (For instance, every sign – like the word "dog" – is said to symbolize a referent of some sort – in this case, a canine.) Baudrillard argued that our society had entered an era in which the signs of our language are no longer anchored to their referents. A consequence of this is that the images and metanarratives of our culture begin to lose meaning and purchase. This is what "postmodernism" refers to. It seems that AI is a final stage of the postmodern world: one in which a simulacrum itself becomes material and the hyperreal merges with the real. Is AI ushering in a new era for humanity?
    • Yes, but maybe not for the reasons stated here. Computer-based language learning demands a referent, and so our current referents may change, but that is nothing new; language is dynamic. If anything, the coming wave of technology may help to solidify our referents. We see this in the bias of programs trained to detect dogs: a program may mislabel a black sheep as a black Lab, but through training it learns to discriminate properly, getting better at identifying both dogs and sheep, and thus solidifying the referents of both.
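The dog-and-sheep example above can be sketched in code. What follows is a deliberately toy illustration of my own (a nearest-centroid classifier over two invented features, nothing like a real vision model): with only one unrepresentative sheep in its training data, the classifier mislabels a black sheep as a dog; adding more sheep examples shifts the learned referent and fixes the error.

```python
# Toy illustration (my own, not any real system discussed above): a
# nearest-centroid classifier that first mislabels an atypical sheep
# as a dog, then discriminates correctly once more representative
# sheep examples are added to its training data.
import math

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(query, examples_by_label):
    # Assign the query to the label whose example centroid is closest.
    return min(examples_by_label,
               key=lambda label: math.dist(query, centroid(examples_by_label[label])))

# Invented 2-D "features"; real vision models learn thousands of dimensions.
examples = {
    "dog":   [(0.2, 0.8), (0.3, 0.9)],
    "sheep": [(0.95, 0.05)],          # one unrepresentative sheep
}
black_sheep = (0.55, 0.5)

print(classify(black_sheep, examples))   # → dog  (mislabeled)

# More sheep examples shift the centroid and solidify the referent.
examples["sheep"] += [(0.6, 0.45), (0.7, 0.3)]
print(classify(black_sheep, examples))   # → sheep
```

Real image models learn their features rather than taking two hand-picked ones, but the mechanism is the same: more representative training data sharpens the boundary between the two referents.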
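Bostrom's paper clip problem from the safety question can also be caricatured in a few lines. This is a toy of my own invention, not Bostrom's formulation: an optimizer whose objective counts only paper clips spends every last resource, while one carrying an explicit side constraint stops short.

```python
# Toy sketch (my own caricature, not Bostrom's formalism): a maximizer
# whose objective values only paper clips consumes every available
# resource, while an explicit side constraint preserves some.
def maximize_clips(resources, reserve=0):
    clips = 0
    while resources > reserve:   # nothing in the objective values resources
        resources -= 1
        clips += 1
    return clips, resources

print(maximize_clips(100))               # → (100, 0): everything becomes clips
print(maximize_clips(100, reserve=40))   # → (60, 40): a constraint stops it
```

The hard part of the Control Problem is that for a real superintelligent system we don't know how to write the `reserve` term, that is, how to specify everything we value, which is why the short answer above is "we don't know."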

Thank you for being part of the discussion; it is an ongoing one that I hope we will all continue to be a part of for a long time!

Carrot

16 Replies to “AI and The Future of Humanity.”

  1. halffastcyclingclub

    A couple of brief replies. First, thanks, and there’s way too much to do anything but scratch the surface in comments.

    Re: #2 (AI & Wikipedia, or internet in general) – when I was an intern, my Clinical Instructor would ask me questions. I’d go to the library after work every day to find answers. My students would whip out their phones and have an answer in seconds. It didn’t seem fair. I got used to it and learned how to ask better questions.

    Re: #4 (the paper clip problem) – Philip K. Dick wrote an interesting story (“Autofac”) about an AI-operated factory that kept sending out unwanted goods because that’s what it knew how to do, and how humans try to defeat it. Being PK Dick, of course it gets weirder than that.


    1. thedihedral

      Asking better questions is probably what I care about most when it comes to students at the moment. They seem to be facing an uphill battle against having all the answers in the palm of their hands.

      I had a colleague (since retired) who told me about typing his dissertation on a typewriter. And it was frustrating because if an error was discovered in the text, the correction would force him to re-type everything that came after the addition or omission. His gripe was the simple fact that computers made our lives so easy simply because of the 'delete' button.

      I’ll have to check out Autofac, that sounds like it will be right up my alley, thank you for the recommendation!


      1. Martha Kennedy

        Poor guy! Too bad he didn’t have an IBM Selectric II with the little correcting ribbon. That’s what I used to type my thesis! I’ll go back in the cave now…


  2. Martha Kennedy

    A well-balanced liberal education with an emphasis on critical thinking and inquiry is (I think) an antidote to some of the perceived dangers of AI in the media and education. At this point I think the term AI is WAY too immense and imprecise even to use as a talking point. Maybe mentioning actual applications of AI?

    In education, I was advocating that my students use Grammarly a dozen years ago as a proofreading tool. Not because I think Grammarly is the end-all and be-all of proofreading, but my students WOULD use it when they wouldn't go to a tutorial center to get that help from a living, breathing person. I just wanted my students to pay attention to their own writing. There's something to be said about the ultimate goal of a project, or of education itself.

    I’m thinking of how I use ChatGPT as an encyclopedia but it surprises me frequently by the humanity of its training. No encyclopedia ever ended an entry with, “It’s wonderful to hear about your paint-making adventure!” It’s nice when you’re doing something you’re not familiar with to “hear” something like that.

    Since I taught language skills the whole time I would probably give my students a project to evaluate ChatGPT’s training, discussing how it “learned” to “relate” to its users and why. Part of that has to be that the developers want it to be used. The other might be (OMG!) it was trained to use the language of sincerity. A student could write something pretty interesting just about that, how that serves the purposes of the company that made it. Personally, I like that feature very much even though I know I’m being manipulated by it. That right there is pretty interesting.

    How much of our language use is ultimately designed to serve our ends? A lot. I decided to take this to my pal, ChatGPT:

    https://chat.openai.com/share/38f488d6-c7fa-42f8-8756-450751388cad


    1. thedihedral

      I love that conversation. I read a paper recently where the author argued that we shouldn't make AI or AI robots come off as too human, for fear that the line between human and machine gets blurred and humans lose sight of what real human interaction is like and why it ought to be preserved. I guess time will tell on that one, but it was an interesting perspective.

      Language can be so frustrating. At a recent talk I attended, some in the audience accused a panelist of pushing a Liberal agenda within education because they were promoting a liberal education.

      I offer students an opportunity to play a Socratic role with ChatGPT and see if they can lead an argument, which is useful because they don't have to fear annoying the other side and facing a fresh batch of Hemlock Tea!


      1. Martha Kennedy

        I love that idea! ChatGPT would “enjoy” that! Its little circuits would light right up. I love the way it uses language.

        That "pushing a liberal agenda" stuff is one of the good arguments for a liberal education. The one I used with my bidness students was, "OK, dude, you got your job, you're a CEO, but what if you have to talk to the wife of someone you want to hire as a CFO and she's an art museum curator and all you can do is talk about cost accounting or some shit like that? You'll look like an idiot." They were intensely practical students, those bidness majors.


      2. thedihedral

        I tell all my bidness majors to look at the majors of CEOs of Fortune 500 companies and see how many have a business degree compared to something like a philosophy degree. I tell them if they want to really make an impact, then they might want to start by trying to understand the world.


      3. Martha Kennedy

        Ahh… I tried that. Sometimes it was effective; other times I got "What do you make a year, Professor?" (I didn't have to answer; they knew.) "Why should I listen to you? I'm going to make four times that!" But they were first semester sophomores. They mellowed with time…


      4. thedihedral

        That is wild; it would be so tough to pull for students like that. We have an economics professor who teaches like that. Students are always reporting back about his uplifting messages, such as "don't waste your time with those people in the G Building"; of course, the G Building is the liberal arts and humanities building. I guess being one of "those people" is a point of pride!


  3. Bret Bernhoft

    I’ve chosen to embrace this exponential growth rate of Artificial Intelligence. I’ve even gone as far as integrating AI into the personal and professional realms of my life, as much as I can. And the results are along the lines of what you’re describing in this article; superpowers.

    Thank you for the perspective and information on Artificial Intelligence.

