Everyone is turning to artificial intelligence these days. As a website owner, I receive over fifty emails per week extolling the virtues of AI. I’m told I can save time, money, and frustration by allowing AI to write blog posts for me. For a nominal price, my week’s articles — enhanced for search engines, grammar-checked, and guaranteed plagiarism free — can appear in seconds. No subject is too difficult, no science too obscure.
I’ve been ignoring these sales pitches for months, thinking “yeah, right” as I scrub them from my inbox. But then on a dreary, drizzly December day, I fell for it. I was suffering from overload, behind in every aspect of life, and frustrated with blogging. Duly harassed, I said, “Okay, show me.”
In an instant, I signed up for a free account with ChatGPT (Chat Generative Pre-trained Transformer) and started asking beekeeping questions. Before I could hit the question mark, the answers began typing themselves across my screen.
What is ChatGPT?
ChatGPT is a natural language processing model, a form of artificial intelligence. It is “trained” to answer questions in a human, conversational way. It “learns” by reading the internet and responds by using language rules. In short, you ask a question and the chatbot gives you a written answer.
One of the first things I noticed was the quality of the writing. Overall, the chatbots know English. The wording is grammatically superior to most stuff I read online, and I found no irritating phrasal verbs (like cook up, plant up, fry up, or level up) that drive me to distraction. I found only small amounts of passive voice and no agreement errors. Pretty good for instant results. Amazing for a robot.
And some answers weren’t terrible. Consider my very first attempt at eliciting beekeeping information from a machine. It asked no follow-up questions and didn’t ask me to clarify or define my terms. Just like a genuine second-year beekeeper, it was utterly confident and self-assured. I asked, “In honey bees, what is drone eviction and why is it important?”
The chatbot replied, “In honey bee colonies, drones are male bees that mate with queens from other colonies. Drones do not have stingers and do not participate in the day-to-day tasks of the hive, such as gathering nectar and pollen or caring for the young.
“During the fall, as the days grow shorter and the availability of flowers decreases, the worker bees in the hive begin to prepare for winter. One of the ways they do this is by evicting the drones from the hive. This process is known as drone eviction.
“Drones are evicted because they are not able to contribute to the survival of the hive during the winter months. They do not help with gathering food or keeping the hive warm, and they consume valuable resources that could be used to feed the young and the queen. Removing the drones from the hive allows the worker bees to focus on preparing for and surviving the winter.”
Not (totally) bad for a robot
Now, remember, this answer was instantaneous. It suffers from wordiness and a bit of passive voice, but the grammar, sentence structure, and logic are remarkable. Even more impressive, the answer is basically correct.
As an editor, I would say the bit about stingers is off-point and should be deleted. On the other hand, the answer should explain how workers evict drones and what happens to them afterward, tidbits that could add interest to the passage. Still, it’s not bad.
The things eggs eat
I continued to ask the chatbot dozens of beekeeping questions. Not all the answers were as helpful as the first one. According to the chatbot, CCD is still the major threat to honey bees, varroa mites continue to thrive by slurping hemolymph alone, and bees can transform refined sugar syrup into pure honey in just a few days.
When I asked what bee bread is for, the chatbot told me the bees feed the eggs a mixture of pollen and nectar, a diet that provides them with the nutrients they need to grow and develop into larvae. You see? The answers are close (and entertaining) but a tad iffy.
How to train a chatbot
After playing question and answer for a while, I decided to learn how AI writing works. The simplistic explanation goes like this: The bot “reads” information available on the net. Lots of it. ChatGPT claims to have already read a large portion of the internet and continues to read more. The “handlers” also trained it in the grammar and vocabulary of multiple languages.
According to the website, ChatGPT learns to write by calculating what words are most likely to follow other words. To me, that sounds like a questionable practice, but I’m digitally inept and know it. I can’t say how somebody else (or someboty else) learns.
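The next-word idea can be sketched with a toy bigram model. This is my own illustration, not how ChatGPT actually works under the hood (real models use neural networks trained on vast text collections), but it shows the basic trick of predicting each word from the one before it. The tiny “corpus” here is invented for the example:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "a large portion of the internet."
corpus = (
    "worker bees evict the drones before winter . "
    "worker bees feed the larvae . "
    "the drones do not gather nectar ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# "Write" by repeatedly choosing the most likely next word.
word, sentence = "worker", ["worker"]
for _ in range(4):
    word = most_likely_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Notice that the program produces a plausible sentence without “understanding” a word of it, which is exactly the catch described below. Scale the corpus up by a few billion pages and the predictions get eerily fluent, but the understanding never arrives.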
Writing without understanding
But here’s the real-life catch: The chatbots don’t “understand” anything they utter. And since they don’t know the meaning of the sentences they string together, you can get contradictions, strained logic, or complete nonsense.
For example, when I asked the chatbot about fat bodies in winter bees vs. summer bees, it wrote [emphasis added], “Fat bodies in honey bees play an important role in the bees’ metabolism and energy balance. In winter bees, fat bodies are larger and more developed than in summer bees, as the bees need to use the energy stored in their fat bodies to survive the winter months when there is little or no food available.”
Then, about four paragraphs down, it added, “Winter bees have smaller fat bodies than summer bees, as they do not have access to as much food and need to rely on stored energy to survive.”
I suppose that contradiction could be true if you were comparing the size of fat bodies of young winter bees with those that survived through the winter. But the chatbot never suggests that.
I can only conclude that if you are writing an article and you already know your subject, you could edit the paragraphs and end up with passable content. But if you don’t know your subject before you begin, and you let the chatbot write your article, you could look pretty foolish in short order.
The end of Google as we know it?
Based on news reports, Google is viewing chatbots as a threat. Here’s the problem: When you ask Google a question, you get the “long answer.” The long answer comprises a series of links that Google thinks may contain the information you’re looking for.
But if you type the same query into a chatbot, you get the “short answer” summarized right from the get-go. There’s no fooling around with links and endless resources. You immediately get what the chatbot thinks is the answer to your question (although it may be incorrect).
Supposedly, Google is working on the problem of how to provide more immediacy so it doesn’t lose market share to chatbots. However, it has already been working on it for quite a while.
The featured snippet
If I open an incognito window on my computer (to minimize the effect of my personal search preferences) and search for “winter bees vs summer bees,” I see the following on the Search Engine Results Page: a brief paragraph at the top, followed by a list of questions that “People Also Ask,” and finally the search results, which are the list of links that Google believes best answer the question.
In this example (winter bees vs summer bees) the first entry in the list section (the long answer) leads to an article I wrote for Countryside magazine followed by 31 million other results. But the paragraph at the very top of the page, also known as the Featured Snippet, leads to an article on my website. One partial sentence in the snippet is boldfaced, not by me but by Google. It reads, “winter bees are workers that emerge near the end of the foraging season.” Why Google highlighted that part is not clear because it doesn’t begin to define a winter bee. I find it embarrassing.
But here’s the point: Featured snippets for any Google search used to be rare, but now they appear in about 70% of searches. I’ve also read that a majority of searchers read the featured snippet and nothing more. They get the “short answer” (no matter how truncated) and feel satisfied.
Consider the source
I think that all this truncated learning, whether it comes from chatbots, featured snippets, or Facebook posts, is dangerous and sad. Apparently, in-depth learning is too much trouble, so we take a short phrase from context and call it truth.
The scariest thing about AI writing is the way chatbots learn. ChatGPT claims to write answers based on internet learning. But as we all know, misinformation is rampant on the ….