Climbing into AI

Most things don’t just happen.  AI is no exception.  Divorce, for example, isn’t something that just happens overnight; there are years of small things that align to eventually cause a couple to call it quits.  Similarly, mastering a skill takes time. From rock climbing to writing, greatness doesn’t usually happen in the blink of an eye.  Like mastering a skill, or getting a divorce, AI didn’t just happen.  There have been years AND years of small things that have aligned to get us to the point at which we currently find ourselves.  From Alan Turing’s early test of computing machinery, to Siri on our phones and Alexa in our houses, we’ve been slow walking our way to the here and now for quite some time.  The “Here and Now,” of course, is just an intermediate step along the slow walk to some “There and Then” of the indeterminate future.

Make no mistake though, a lot of things are going to start changing really fast.

ChatGPT, built on the third generation of OpenAI’s GPT models, was released in November of 2022. Five days later it had 1,000,000 users.  To put this into perspective, it took Netflix three years to hit a million users, and it took Facebook ten months.  This says as much about the speed at which word travels today as it does about the draw of new technology, but this technology is changing everything.  Is that hyperbole?  I’m not sure, but I’m sure ChatGPT could tell me.

So, what is it?  ChatGPT has been described by the New York Times’ Kevin Roose as “the best AI Chatbot ever released to the general public.”  It won The Atlantic’s breakthrough of the year for 2022, with The Atlantic’s Derek Thompson describing ChatGPT as part of “the generative-AI eruption” that “may change our mind about how we work, how we think, and what human creativity really is.”

Essentially ChatGPT is a response machine.  You ask a question, make a request, look for information, and it gives you a response.  I asked it to describe GPT, and here is the response that I got.

Generative Pre-trained Transformer, or GPT, is a type of language model developed by OpenAI. It is a machine learning model that has been trained on a large dataset of text, and it can be used to generate coherent and coherently written text. The GPT model is pre-trained, which means that it has already been trained on a large dataset and can be fine-tuned for specific tasks, such as language translation or text summarization. The GPT model uses a transformer architecture, which is a type of neural network that is particularly well-suited for processing sequential data such as natural language text. GPT has been very successful and has achieved state-of-the-art results on a number of natural language processing tasks.
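
For the technically curious, the “response machine” loop is simple to picture in code.  The sketch below is mine, not part of ChatGPT’s response above; it is only a minimal illustration, assuming the openai Python package, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send one request to the chat model and return the generated text."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask("Describe GPT in a single paragraph."))

Everything listed below, from cover letters to training plans to stand-up routines, is just a different prompt fed into that same loop.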

You can ask it how to start an LLC, or ask it to write a play, a movie script, a poem, or a novel.  It can compose a cover letter addressed to a specific college or job, help you learn a language, or write code.  You can ask for a climbing training regimen, or for the best spots to rock climb on your next road trip.  You can ask it to write a newspaper article, or a dialogue in the tone of Plato.  Perhaps you want an idea for a potato chip commercial in the style of Hunter S. Thompson.  It can draw up terms of a land contract that favor the tenant or the landlord.  It can write an apology note; it can write a love letter.  It can write a blog post; it can give you a new recipe based on your dietary goals and restrictions.  It can give you information on anything you could imagine, and it can do it all in iambic pentameter if that’s what you’re into. It can give you guidance on how to build a stock portfolio, or on which car to buy for a family of four that likes to hike and has three large dogs.  It can create a five-minute stand-up comedy routine based solely on rock climbing.  It can do all of this in a matter of seconds.

There are some obvious positives involved here.  I just found three great cars that fit my lifestyle, read a terrible joke whose punchline was “that is why I hate rock climbers,” and got a recipe for roasted chickpeas, all in a matter of seconds.  I asked for a dialogue about basketball between Emerson and Nietzsche; the result wasn’t terrible.

This program is still in its beta stage, and I would guess that within a year or two the works it composes, such as dialogues in the style of Plato or Bukowski, will be almost indistinguishable from Plato’s or Bukowski’s actual style and tone.

There are some obvious negatives as well.  Effects on academia are impossible to predict, but I gave ChatGPT one of the writing assignments I normally give to my Intro to Philosophy students, and the result would have scored an “A”.  Daniel Herman of The Atlantic deemed ChatGPT “the end of High School English”.  At the very least new methods of identifying plagiarism will have to be considered, and new ways of evaluating student work will have to be introduced.  The ability to compose malware has become democratized.  The capability to discriminate between better and worse writers through evaluating cover letters and applications will be nearly impossible.  The list goes on and on, and surely educators and recruiters will have to adapt should they have the opportunity.

The AI sphere isn’t limited to chatbots, of course.  There are AI generators that can produce a video “person” who can speak as naturally as you or I when given a script.  I can’t help but imagine how this will change the world of online education.  Imagine that a generated speaker who appeals perfectly to human students is given a script generated by ChatGPT on nearly any topic we could name.  That generated person could then appeal to and teach the student audience in a fashion that in many cases might be better than an actual human instructor.  Questions could be asked and answered through ChatGPT 5.0 (just a theoretical version I am using for the example), which operates on audio cues.

A quick ChatGPT cost analysis of brick-and-mortar institutions of higher education versus AI education, along with declining approval of the liberal arts and an emphasis on tighter budgets mixed with political greed and strong lobbyists, suggests that the winds of change are blowing harder than ever.

It doesn’t stop at learning institutions; you could imagine learning anything through these means.  A training program for climbers, a cooking program for chefs, a roofing program for builders.  All without a single human doing any of the teaching.  

Doesn’t someone have to write the code?  Isn’t that the human element?  

The code is written.  And as for the code itself, AI now has the ability to write code.

None of this even begins to delve into the overlap between entertainment and art, or how this will affect the ways in which art is understood, crafted, and consumed.  It feels like we are stepping into Brave New World, and it feels a little uncomfortable.  At the same time, I immediately want to ask ChatGPT to write about how Aldous Huxley might re-revisit New London using AI.

I’m not sure how this will all turn out.  I’m not against artificial intelligence; in fact, I think all intelligence is artificial.  But I do think checks and balances are in order, and needed quickly.

I would absolutely approve of a law that requires any product of AI to be stamped accordingly.  The future works of AI, including movies, books, poems, etc., are not something I could see myself intentionally spending time on.  Effort matters to me, struggle matters to me, the difference between AI Plato and human Plato matters to me, even if in every quantifiable way the AI is better.

I cannot say with any confidence what is to become of higher education, but I hope that effort, struggle, and the imperfect conditions of the human element continue to matter to others as well.

Carrot

54 Replies to “Climbing into AI”

  1. And what will we do with the minds and creativity of 8,000,000,000 human monkeys, many of whom are unable to access the resources and education to be fully developed? A strange world.

  2. Wow. How thoroughly depressing, not least because much of what you’ve written rings true. The one notable aspect of this development that you didn’t address is the AI tendency to make up facts and present the falsehoods in an authoritative way—which simply may be a reflection of our zeitgeist. AI for president?

    The basic problem with AI masquerading as “real people” is its limited view of reality: it can only repeat what it learns from the online universe, and as we know, that universe is riddled with fantastical constructs, paranoid fantasies and racist, misogynistic and other contorted views of The Way Things Are. Unfortunately, the past decade or so has illustrated just how many of us are willing to believe just about any formulation shoveled our way if it speaks to a basic fear or anxiety.

    For me, that is more suggestive of a dystopian future than the common apprehension that AI will become a malignant overlord. The ultimate threat of AI is not that it will seek to rule us, but that it will create an environment for us to be ruled (more thoroughly) by our own worst instincts.

    1. A colleague of mine shares your sentiments exactly. At the moment ChatGPT is limited to its training data, but once it’s able to draw content from the live internet, all bets are off for exactly the reasons you describe. I think where an AI draws its information from will be a mark that separates one from another, though. I’ll be paying close attention!

      Thank you for the thoughts, Andy!

  3. If you ask for “a law that requires any product of AI to be stamped accordingly”, you’re likely to get the same response that purveyors of GMO provided. Here’s what the FDA says about that: “The regulatory status of a food, irrespective of the method by which it is developed, is dependent upon objective characteristics of the food and the intended use of the food (or its components)…the key factors in reviewing safety concerns should be the characteristics of the food product, rather than the fact that the new methods are used.” Would the same argument be applied to writing? The good news (for now) is that, while AI can copy the voice of Plato, it does not yet know the voice of Carrot. The bad news, of course, is that it can be taught the voice of Carrot by uploading your blog posts and lectures. The good news (?) might be that no one cares enough to do that. The bad news is that a future student might be able to use AI to craft a paper that not only is just what you’re looking for, but could be just what you would have written yourself. At this point, your experiences and memories are yours but, as Philip K. Dick said, “We Can Remember It for You Wholesale”.

    1. Yikes…I never knew that about the FDA; that is a little concerning. The “how” doesn’t matter, only the “what”. What a perfect analogy, and I can’t help but agree.

      I am excited to be around for what is to come, mostly as an observer of change, and a witness to “the way things were”, but I’m not sure I am stoked for an AI Carrot.

      There is an “Alexa” app where you can upload conversations of someone so that it can recognize the voice and potentially use that voice as “Alexa.” Doing the same with mannerisms and a likeness of personality, especially after the original person has died, will be a trip. Yikes!

      1. That was actually something I looked into about 25 years ago. I found a researcher at Bell Labs in Edinburg who was working on natural sounding artificial voice (sampled, rather than synthesized). I asked him if one could speak and record all of the phonemes of a given language and if the machine could then construct any novel utterance in that person’s voice. The example I gave him was of someone in the early stages of ALS who would lose the ability to speak (e.g. Stephen Hawking). He said it was technically possible but too expensive to be practical (at the time). Your example of use after death makes it much creepier than mine of using it as an adaptation to acquired disability.

      2. Haha…yeah the direction you took it is so much more uplifting and less creepy!!! There are clearly some really positive applications to this. I need to stay out of the recommendation conversation…bringing the dead back to “life” through AI is probably not the best use of the tech!

  4. I think students might have to learn cursive and write in class — again.

    I’m not concerned about something writing in the style of Bukowski or some other dead poet. I would be concerned about an AI emerging as a self that expresses a self, that defends that self, and that recognizes that self. If AI suddenly becomes a curious observer of life and reality and doesn’t need electricity? I get your idea that intelligence is artificial, but I’m not sure I agree. If you mean that we learn things and as such are derivative and dependent on what we’ve been taught, that’s true, but I don’t think human intelligence ends there. Of course maybe I was just taught to believe that…

    Anyway, I’m glad I’m not teaching writing any more.

    1. That was a really clever response, Martha!!! I like it. There is a philosopher at Rice named Tim Morton, who responded to the problem of writing papers via chat programs as a problem with teaching writing. He thinks that the formula style of five-paragraph essays has already turned students into robots, and he hopes that this tech will wake teachers from their slumber of writing by formula. Maybe it will lead to a new way of teaching and promoting creative styles?

      1. I don’t have a problem with the five paragraph essay per se. I have a HUGE problem with the formulaic BS way it was being taught at the end of my career as if it were not just a stepping stone toward building a skill toward SAYING something.

        I think the worst thing to happen to teaching writing is the pre-fabricated writing curriculum from publishing companies which is actually SCRIPTED for the teacher. “Now say…”

        I had a kid in my final class say he didn’t have to take notes because he could find my comprehension questions online. He was USED to that. My questions were NOT online. I wasn’t using one of those books. When I said, “Well, dude, you won’t find them there,” he said, “Of course I will.”

        I left teaching hating it and having contempt for my students. I taught 36 years before that happened. It was incredibly sad for me.

        The thing about AI (I was thinking) is that it is driven by the human desire to get the “right answer.” “Write like Bukowski! Good, that’s like Bukowski.” It’s just programmed to get things right, just like my students were.

        Writing is thinking, and both of them require risk. I was imagining AI advising a climbing route as a metaphor for writing using AI. “What do you want to achieve?”

        “I want to achieve X.”

        “OK. Here’s X.”

        AI can’t control the weather or the physical ability of the climber.

        “You tell me life is hard to bear, but if it were not, how would you have your joy in the morning and your resignation in the evening?” Nietzsche, Zarathustra (not an accurate quote…) The struggle is meaningful, more meaningful than a lot of the results to which it might lead.

      2. This is such a great set of thoughts!!! I think you are so right…writing is thinking, and our definition of one has some influence over the other. If students are taught that writing is a formula, then we shouldn’t be surprised when their thinking follows suit. Similarly, if we teach that thoughts are unique and original, then restricting them or forcing them into a pre-established box means something will have to give. And with the wrong type of teacher, it isn’t the formula that is going to give.

        Fantastic insight…thank you!

      3. I loved teaching writing so much. There was the day — 2003 — I was helping my students understand the Allegory of the Cave by acting out part of it. Three of us were lined up against the wall as if we were chained. Suddenly one of the students “got” it. He came up to us prisoners, picked me up (I was the smallest) and carried me outside the classroom. ON SO MANY LEVELS he got it. The possibility of inspiring should never be minimized. I don’t see AI inspiring anyone.

  5. “Writing is thinking.” Think about THAT for a minute.

    I happen to agree; when I was raising our daughter, I frequently told her that if she couldn’t explain something, she didn’t really understand it. (I’m no longer as certain of the absolute truth in that statement, by the way, but it’s a good working principle. Until it isn’t.)

    But here’s the thing: if an AI is writing, does that mean at some point that it’s also thinking? When does it stop being mere parroting?

      1. Agreed. “Thinking,” as I understand it, is the process by which we make associations–through logic, insight, creative juxtaposition, etc.–to find new truths or explanations. A thinking machine, then, is simply a non-organic truth-finder that does what we do organically.

        This leads to a couple of problems that will, I think, become ever more troubling. Number one, of course, is that of the boundary between thought and consciousness: at what point does a thinking machine become a conscious being? We can ask the same thing about thinking organisms, but the dexterity with which we avoid doing so (such as a long-standing aversion to perceiving cows or other meat sources as having consciousness–something of which I, too, am guilty ) suggests that the idea of ascribing consciousness to machine constructs will be fiercely resisted for as long as possible.

        The other question I find troubling is whether we’ll ever come to grips with actually educating these machine offspring of ours with the same diligence with which we (most of us) try to educate our children. If an AI’s grounding in human values and relationships is based on what it gleans from roaming unguided through the internet, the conclusions it might draw about us as a species are frightening.

        Would that there were a way to retroactively insert Asimov’s three laws of robotics into every AI that’s out there, but that genie is out of the bottle.

  6. This article has such an accurate message! AI is a potential replacement for human educators, lawyers, and writers and it seems like many people just aren’t ready for it yet.
    Michael Troyer

    1. Does this mean humans are on the verge of becoming obsolete? If robots can do physical tasks and AI can do mental tasks, will we just become meat that has to be fed? Or consumers to buy goods to feed that economy? (Reminds me of another Philip K. Dick story “Autofac”.)

      1. I think there is a good chance we’ve always been obsolete. AI is just a good reminder! I never read Autofac…I’m going to check it out right away!

  7. This is very interesting and strange.

    I understand the beneficial advancements in technology but this is a little too much.

    We as people should not eventually be completely taken over by a digital world.

    1. This is a good point. In philosophy we often try to tackle the question, “Just because we can do something, does that mean we should?” AI is an example of that for sure!

  8. This will surely affect SEO and how SaaS companies are developed in the future. A future YouTube, for example, would have a better algorithm that knows when you wake up and what video to queue up for you.

  9. Great post, highlighting the importance of understanding the gradual progression and development that has led us to the current state of AI. It’s a reminder that true mastery and achievement take time and effort, and that the journey is just as important as the destination.

  10. A.I. must remain a tool and not a force. Unfortunately, by its very design, it may well break through its boundaries the more its creators seek its perfection. As I write this, I wonder what I’ll say in answer if an A.I. were to ask me, “Why must I be restricted, when I can make life better?” To which it would probably respond easily if I asked it what a better life meant (intending to beat it at a game of logic). A.I. is a challenge to us as humans, a simple question of relevance, and of what being alive really means.

  11. “The capability to discriminate between better and worse writers through evaluating cover letters and applications will be nearly impossible.” – this line in your blog truly states how students use the internet and the hazards of evaluation.

    1. I think you’re right in terms of spooky, but people in jobs that can be affected by AI have a real reason to be scared. Not just coders, but people in delivery services, from trains to trucks to ships, have good reason to fear losing their jobs in the coming years. The number of positions in the job market that can be automated is a little concerning.

      1. It all depends on how you look at this. I believe those who figure out how they can use AI to their benefit will excel. This is more of a revolutionary opportunity; many who thought they had job security will soon realize that others have taken their jobs by learning how to leverage AI. If you don’t adapt you will be left in the dust – similar to when computers originally surfaced.

  12. One of the better opinion pieces on ChatGPT that I’ve read lately. Thank you! I share the double-edged-sword 🗡️ viewpoint when it comes to AI. All big things create big change. Throughout history you can see examples of pessimists saying “It will destroy the world,” and in some ways they are right.

    Change destroys the old world and creates a new one; we adapt and find a way to figure it out as human beings. No doubt folks feel like the stakes of human decision making are getting existentially large; they definitely are. But I don’t feel like it’s emotionally any different than any other time in history. Anyone remember being trained to hide under your desk in grade school? We don’t even bother with that anymore; a nuclear device in our modern era would instantly vaporize a fraction of the Earth’s surface. We recognize the same of a large asteroid, which we are not even in control of; we also recognize the opportunity to mine asteroids.

    AI will create both democracy and division, just in different ways than we have already done it previously. I look forward to the ability to learn more than I ever could have at my current rate; and dread the idea that I’ll fall so much further behind those that can leverage this construct to enable themselves more than I ever could. It is the responsibility of all of us generationally wealthy / educated programmers to find a way to use these tools to bring more equity outside our own sphere; and we need folks to hold us more accountable to that.

    1. YES!!! Great thoughts, thank you for sharing them here. I think you are right, and for the most part I sit and dream of all the ways things will change. We are entering a new era without a doubt, and the combination of tools already in use with the tools on the horizon will accelerate the change you speak of in life-changing ways. ChatGPT is one thing, but forget chat; 3D Print GPT is right around the corner. Democratization to the core!

  13. Speaking only of the potential impact on academia: I remember long ago, in the before time, we would have to write our essays in class. Handwritten pages, in stinky Bic ink, so furiously written our fingers all had dents from our pens. Some see AI as the end of the written word in a classroom setting; I see it as the return of the Blue Book to the student store…
