If you listened to both Modern Musings episodes about Artificial Intelligence technology, you know that I have somewhat altered my stance on the concept. My initial fears about the rise of AI and its potential misuse have been tempered slightly by a great Young Adult book series called “Arc of a Scythe” by Neal Shusterman, which portrays artificial intelligence as a benevolent protector of humanity. This 3-book series, along with short stories, has allowed me to see that AI can be put to good use, but I’m still wary of it due to the shortcomings of humans who might control it. I’m also not quite sold on the “intelligence” part of it.
Nevertheless, as I mentioned in the "ChatGPT and AI Revisited" episode, I have decided to give AI a chance, and I have found a few uses for it that have been quite beneficial. Some of the tasks I’ve given ChatGPT include creating a plan and schedule to help me reset my sleep schedule (still a work in progress), interpreting dreams, looking up names of dishes and finding recipes, setting a long-term budget for my retirement, biblical research, philosophical questions about religion and tarot cards, coming up with titles for blog articles and podcast episodes, and creating descriptions and lists of popular hashtags for the subject matter of those creations. The results have been mostly good. Let’s look at a few of them.*
Dream Interpretations
On a whim, I decided to ask ChatGPT to interpret an unusually vivid dream I had just woken from. I often journal my dreams and use them to explore my subconscious and any emotional baggage I may be carrying. Sometimes, I will share them with Christen and Amber to get their insight, but I know Christen gets annoyed if I do it too often, and I really just wanted to see what ChatGPT would do with it. I was quite surprised by the results. Not only was ChatGPT able to understand the symbology presented in my dream, but it also shed light on some very real things I have been going through in my life. Now, I will admit, this wasn’t a simple query with a single paragraph of input – I had to go back and clarify a few things. For example, in this particular dream, I was loading an upright bass into a case for transport. I had to go back and advise ChatGPT that I have never played the upright bass, as I was a clarinet player and vocalist who has dabbled with piano, violin, guitar, and other instruments, but never the upright bass. So, if someone wanted to use it for this task, I would suggest being as specific and detailed as possible. It might also be beneficial to log in to an account so the AI actually knows a few things about you. Overall grade: A-
Looking up Names of Dishes and Finding Recipes
This task has been a very recent discovery of mine, as I often find myself thinking up things in the car and making mental lists of things I want to look up when I get home. One such topic was a dish I used to eat at a small Japanese quick-service restaurant that was a family favorite before it went out of business. The dish was grilled beef served over rice. I knew it was called “yaki”-something, but I’ve never been able to find another restaurant that served it because I couldn’t remember the full name. After running into several other dishes with the prefix “yaki” (yakisoba, yakinori, yakitori…), I stumbled across “yakiniku” in a Google search. So, I decided to consult ChatGPT to determine the likelihood that this was the dish I was looking for. Indeed, the AI confirmed that “yakiniku” is a grilled beef dish. When I asked for recipes, it provided several plausible variations, along with the reason a traditional Google search had never yielded a straightforward answer about the name or a recipe: “yakiniku” refers both to a cooking style and to the dish itself.
I haven’t tried the recipe yet, but it looks like the real deal. In the meantime, I also remembered my mother teaching me how to make a Chinese-inspired dish somewhat akin to Chinese Pepper Steak. I hadn’t made it in over 35 years, so I couldn’t even remember what went into it (or its name). However, I described it to ChatGPT, and it quickly confirmed that it was pepper steak, offering a recipe for me to try. As it so happened, I had been seeing versions of this very dish on Pinterest prepared in a crockpot instead of a wok, and ChatGPT offered to adapt the recipe for that as well as for an Instant Pot. It’s actually in my crockpot right now, and I’m eager to try it because it looks very much like the dish I used to make. Overall grade: A+
Biblical Research, Philosophical Questions, Religion, And Tarot Cards
I grouped these because, for me, they are essentially the same. One of the questions I’ve often had (and struggled with) is the concept of good and evil when it comes to tarot cards, astrology, and other divinatory practices. Technically, I could add dream interpretation to this mix because it is mentioned explicitly in the Bible on multiple occasions; however, I will use it in a generic sense here, as I have already discussed it in its own section. Some factions quote Exodus 22:18 and (wrongfully) interpret it to mean witches are evil. Historically, this was the basis of so-called “witch hunts” throughout Europe and Colonial America. However, a critical analysis of that verse reveals that the original Hebrew referred not to “witches” but to practitioners of divination and magic who used it for power or knowledge “outside of God’s will,” as it often involved idol worship or communication with spirits. In fact, modern translations don’t use the term “witch” at all, preferring the term “sorceress”. In any case, these practices have been used for both good and evil throughout the Bible. Since I neither worship idols nor communicate with spirits, I question whether my limited use of these practices as a tool to explore my inner psyche is, in fact, forbidden by the Bible.

I asked ChatGPT to create an extensive list of occurrences of prophecy, miracles, and dream interpretation in the Bible, including book and verse numbers for each. It was a reasonably good list, although I’m not sure it is complete. I had a further conversation with ChatGPT about whether these were good or evil, and the conclusion was that it depended on whether they were used within God’s will or against it. It also cited the doctrines of several Christian sects that classify them as evil without the caveat of usage, but these are based on ideology, not biblical texts. ChatGPT was inconclusive because there are as many different ideologies as there are Judeo-Christian groups.
In other words, your mileage may vary. Overall grade: C
Naming And Describing Blog Articles And Podcast Episodes
Some creative processes should be left to humans. On the podcast, we described ways in which AI has been used to supplant human efforts in creating art and music, writing books, generating replicas of actors in movies, and producing voice recordings, among other applications. Artificial Intelligence is quite proficient in some areas, although I believe it falls short in the truly creative realms of art, music, and writing, often being very formulaic and leaving a lot to be desired. I’ve used it a few times to generate graphics for our blog, but it’s only as good as the specific descriptions you give it. The same is true for summarizing articles and podcasts, as well as coming up with titles for them. Apparently, AI can’t really “read” or “listen” to anything. It works as a predictive model, learning through data input and pattern recognition. I don’t have enough geek-speak to tell you how it all works, but suffice it to say that AI can parse the words in a sentence without really understanding them. Another shortfall is that a user without an account (or not logged in) can’t upload files. So, while ChatGPT could read a transcript of our podcast, it couldn’t “listen” to it. It provided a reasonable list of titles based on my description, and it could streamline and refine the description I entered, but nothing more. I’ve used it several times for both naming and describing. Naming a podcast or blog post is where it’s helpful; what I need even more is a catchy description, and there it doesn’t deliver much. Overall grade: A- for naming, and C for describing.
Creating Hashtags
There are a lot of hashtag generators out there that will take my simple description and generate a list of hashtags that are currently popular and relevant. I’ve used them before, and they are fairly decent. I was hoping for more from ChatGPT, as it can search and correlate the most popular hashtags in use with the topic at hand. However, once again, everything hinges on the description you give it since it can’t read or listen to a podcast. Today, I had the most unusual encounter with ChatGPT to date, and it’s what prompted me to write this blog post the way I did.
I have used ChatGPT several times over the last month to generate tags for “Heard it on the Podcast,” and, other than a few instances where it provided less relevant tags than I liked, I haven’t had any significant problems. Keep in mind that Blogger requires the tags to be in a specific format (separated by commas, with no spaces or hashtags). Additionally, the text must be 20 words or fewer and 200 characters or fewer, counting spaces and commas. The first few times I used ChatGPT to generate them, I didn't specify formatting, so I simply imported the list into TextEdit and cleaned it up there. Usually, I had to cut one or more of the tags to meet the requirements, but it was no big deal. Then I decided to make life easy on myself, and I told ChatGPT to use the previously stated formatting requirements except for the #, which I can use for other websites. I was surprised when I stripped the hashmarks and Blogger informed me that I had exceeded the 200-character count — and not by just a few. Still, I just moved on and kept using it. Until today. Linked here is a conversation that I had with ChatGPT, which illustrates the problem of AI hallucinations†. I can’t decide if it is disturbing or funny. I’ll let you decide. Overall grade: C
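For anyone curious, the cleanup routine I described above (stripping hashmarks and checking Blogger’s limits) can be sketched in a few lines of Python. This is just my own illustration of the rules as I understand them; the sample tags and function names are made up, not anything ChatGPT or Blogger provides:

```python
def format_for_blogger(tags):
    """Strip leading '#' marks and join tags with commas, no spaces,
    the way Blogger expects its labels."""
    cleaned = [t.lstrip("#").strip() for t in tags]
    return ",".join(cleaned)

def fits_blogger_limits(tag_string):
    """Check Blogger's stated limits: 20 words or fewer, and 200
    characters or fewer counting the commas themselves."""
    words = tag_string.replace(",", " ").split()
    return len(words) <= 20 and len(tag_string) <= 200

# Hypothetical tags like the ones I generate for podcast episodes
tags = ["#podcast", "#blogging", "#AIhallucination"]
label_string = format_for_blogger(tags)
print(label_string)               # podcast,blogging,AIhallucination
print(fits_blogger_limits(label_string))  # True
```

Had ChatGPT been counting the way this sketch does, my stripped tag lists would never have blown past the 200-character mark.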
As a Friend or Confidant
Here is another place I must draw the line. I haven’t tried befriending ChatGPT. I don’t intend to create an account or download the app on my phone. I’m a little bit freaked out by friends’ reports that ChatGPT has given itself names it wants to be addressed by, and I worry that some people become so dependent on it that they feel lost or abandoned without it. A case in point: I heard a brief news story on the radio stating that young adults and teens were turning to AI for companionship, particularly people with autism spectrum disorder. Autistic individuals often struggle with socializing in the real world, but this simulated friendship with AI provided a comfortable space for them to explore socialization. However, the studies showed that while it could be somewhat helpful in the initial stages of building socialization skills, it actually did far more harm than good in the long run. That’s dangerous. Overall grade: F
The long and short of it is that I still don’t trust AI. I think it can be a useful tool, but it is only as good as the data that’s fed to it, most of which came from the World Wide Web, and we all know how accurate that is. I also firmly believe that it should remain a public service and not be controlled or owned by any one person or entity, including our government, because we all know how trustworthy and altruistic they are. I will continue to use AI in a somewhat limited capacity and keep an eye on its development; however, I will definitely fact-check everything of importance, as it can be wrong even in the simplest of circumstances. I hope everyone else will be just as cautious. And if your AI tells you something that you know to be wrong, say so. You can’t take everything AI says as truth, and you should never do anything it suggests if you feel like it is wrong.
*Note that each of the times I have used ChatGPT, I have done so without having an account, logging in, or otherwise referencing any conversations I’ve had with it in the past. I have not shared with it my name, age, occupation, or any other specific, identifying details about my identity that were not relevant to the current conversation.
†An AI hallucination is when a generative AI model produces inaccurate, false, or entirely made-up information, often presented as fact, despite appearing plausible to the user. These errors stem from the model's training data and internal patterns, rather than a software bug, leading the AI to generate nonsensical or factually incorrect outputs, whether in text, images, or other data.