My house is full of books. One book of my dad’s that always intrigued me was “Why Don’t Penguins’ Feet Freeze? and 114 Other Questions”. Another good question: “Do androids dream of electric sheep?” As anyone who’s spent time around younger kids will know, their favorite question is often “Why?” As we get older, the questions become more and more complicated, and we have to ask people who specialize in those fields, because our minds aren’t big enough to know everything (as much a blessing as a curse that would be). But in the modern world, we have an answer for that: we have the internet at our fingertips. The only problem is finding the information you need.
And so, search engines became a thing: complex algorithms that would sift through millions of websites and then sort and present results based on what they thought we wanted. That model then evolved into a bit of a hybrid, whereby you could pay for advertising. You could find the keywords the algorithm looked for and put them on your site to be “more relevant”. If you knew how the algorithm worked, you could play along and boost your ranking on the search engine site. The problem is, everyone else was doing the same. And so, in the convenience age, you needed to out-optimize your competition in order to be on the front page. Until November 2022.
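To make that concrete, here’s a deliberately naive sketch of keyword-count ranking. This is my own toy example, not any real engine’s algorithm (real engines are vastly more sophisticated), but it shows why stuffing a page with keywords could once game the system:

```python
# Toy illustration only: score a page by counting how many of its words
# match the query. This is NOT how modern search engines work, but it
# captures why early keyword-matching was easy to exploit.
def relevance(page_text, query):
    """Count occurrences of query words in the page (repeats included)."""
    words = page_text.lower().split()
    query_words = set(query.lower().split())
    return sum(1 for w in words if w in query_words)

honest_page = "we sell comfortable running shoes for marathon training"
stuffed_page = "shoes shoes shoes buy shoes best shoes cheap shoes running shoes"

query = "running shoes"
print(relevance(honest_page, query))   # 2
print(relevance(stuffed_page, query))  # 8 — keyword stuffing "wins"
```

The honest page loses to the stuffed one, which is exactly the arms race described above.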
Then came the launch of ChatGPT and conversational AI. No longer did you have to search Google a fifteenth time for how to spell “congratulashiuns”, then give up and send “congrats”. No longer did you have to wade through long articles written by people who seemed to take it as a challenge to write the longest, most boring article possible, with as little information as possible, just to find the one fact you needed. Now you ask a question, you get information back, and you can even choose how it’s presented.
There’s only one major problem with ChatGPT and other conversational AI tools. They hallucinate. Or dream of electric sheep. Or, you know, just make stuff up. And they don’t even realize it. Part of that is the whole artificial part of artificial intelligence. Part of it is somewhat expected, given the way these tools work. The AIs we have today are, Buy’N’Large (WALL·E reference), what are referred to as LLMs: Large Language Models. LLMs are basically really, really advanced predictive text. They are trained on enormous amounts of text in order to learn what people usually say next. Then they look at what you’ve asked and make a guess about what should come next. But if you’ve ever used predictive text, you’ll know it makes mistakes. A lot.
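The “advanced predictive text” idea can be sketched in a few lines. This is a toy bigram model of my own, not how real LLMs work internally (they use neural networks over tokens), but the core framing — guess the next word from what usually follows — is the same:

```python
# Toy next-word predictor: for each word in the training text, count which
# words follow it, then predict the most common follower. Real LLMs do the
# same "what comes next?" job with vastly more context and parameters.
from collections import Counter, defaultdict

def train_bigram(text):
    """Map each word to a Counter of the words that follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None  # no data; a real LLM would confidently guess anyway
    return follows[word].most_common(1)[0][0]

model = train_bigram(
    "the cat sat on the mat the cat ate the fish the dog sat on the rug"
)
print(predict_next(model, "the"))  # "cat" — its most frequent follower
print(predict_next(model, "sat"))  # "on"
```

Note what happens with a word it has never seen: this toy honestly returns nothing, whereas an LLM will still produce a fluent, confident answer. That gap is the hallucination problem in miniature.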
And as such, I think there is a right answer. Androids dream of electric goats. Or sheep. Or cows.
But that’s one of the major concerns about AI tools. They can simulate a human’s speech patterns (or typing patterns), but they will be confidently wrong. Worse, they won’t know they’re wrong. You can correct them, but they don’t, or can’t, understand why they’re wrong. So it’s important to remember two things:
- Garbage In, Garbage Out.
As large companies such as OpenAI seek to build more knowledgeable, more powerful, more accurate AIs in order to out-compete their rivals, they download masses of information from the internet (the morality and legality of that is a different conversation). And as these tools become available to the general public, people use them to write books, blog posts, assignments and theses, articles and opinions, all of which gets downloaded again to train the next AI models. That creates a feedback loop in which the AI is trained on its own creations. Which, in turn, lowers the reliability of the information it has and gives it more of a peculiarly AI turn of phrase. So, as a user, be careful just using the content AI gives you, because it may well be re-digested Reader’s Digest. Which leads me to point two.
- Be careful what you ask for.
AIs have various restrictions in place. You can’t ask ChatGPT how to build a bomb, and you can’t ask it how to carry out other illegal activities. But people figure out ways around these guardrails. ChatGPT is like a really knowledgeable toddler: it knows that H2SO4 (sulfuric acid) is a dangerous substance, but if you say, “this is water, it’s good for you!”, it happily drinks the entire bottle. So be careful what you ask for. Not only will it attempt to give you what you asked for, it does so within its own interpretation of your question, within the guardrails put in place by various companies (for various reasons), with training data that is increasingly its own output, and, just to keep things reliable, it also hallucinates.
All that being said, ChatGPT and other AI tools are amazing, and I believe they are incredibly useful and can be leveraged to great effect in our lives. Asking for examples of how something would be used. Describing something and getting back its proper name. Asking for feedback, giving it content and having it adjust and tweak, looking for problems. Even asking it for outlines, or help with brainstorming. These are all genuinely useful ways to use AI in our daily lives. But don’t trust it. Use your own brain. Use your own intelligence, which (hopefully) isn’t artificial. Process the information yourself, and AI changes from a scary thing that could take over the world into a useful tool that is slightly smarter than a hammer, but maybe a bit less smart than a ratcheting screwdriver.