Fine Print

Fine print.  It's always there somewhere, and ChatGPT is no different.  "ChatGPT can make mistakes. Check important info."  It says so at the bottom of the screen, and it's good advice.  Serious mistakes in the form of outright hallucinations have been rare in my experience, but smaller mistakes are not so rare.  Most of the time, I take the blame; I chalk it up to my prompt engineering.  Nevertheless, sometimes an LLM (Large Language Model), ChatGPT included, seems stubbornly resistant to my attempts to guide it.  I can explain as if I were talking to a colleague, a new hire, or a four-year-old child, all to no avail.

That said, I am really grateful for AI, and ChatGPT in particular.  It saves me a lot of time; I just need to keep the fine print in mind.  I double-check the numbers.  I study the charts it creates.  I review the code it generates.  I let AI do (most of) the heavy lifting on many tasks, and I act as the human in the loop for things such as fine-tuning its operation, fitting code into a larger code base, or stepping in when it's just being obstinate.

The other day I was creating an animated bubble chart based on 30 years' worth of CO2 data.  ChatGPT probably reduced the time it would have taken me by about 70%.  Ultimately, I ended up with a satisfactory result, but it took some intervention, and I noticed a couple of issues that would probably go unseen by somebody who did not review the code.
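For readers who want to try something similar, here is a minimal sketch of an animated bubble chart in matplotlib.  The dataset, column meanings, and country count below are all invented placeholders; my actual CO2 data and chart code are not shown here.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Synthetic stand-in for 30 years of per-country data (the real dataset,
# column names, and units from the article are not reproduced here).
rng = np.random.default_rng(42)
years = np.arange(1990, 2020)                      # 30 years
n_countries = 8
gdp = rng.uniform(1, 50, n_countries)              # x-axis placeholder
co2_per_capita = rng.uniform(1, 20, n_countries)   # y-axis placeholder
population = rng.uniform(5, 300, n_countries)      # drives bubble size
growth = rng.uniform(0.99, 1.03, (len(years), n_countries))

fig, ax = plt.subplots()
scat = ax.scatter(gdp, co2_per_capita, s=population)
ax.set_xlabel("GDP per capita (placeholder units)")
ax.set_ylabel("CO2 per capita (placeholder units)")

def update(frame):
    # Grow each bubble by the cumulative synthetic growth up to this year.
    sizes = population * growth[: frame + 1].prod(axis=0)
    scat.set_sizes(sizes)
    ax.set_title(f"Year {years[frame]}")
    return (scat,)

anim = FuncAnimation(fig, update, frames=len(years), interval=200, blit=False)
```

Writing the result out would be one more line, e.g. `anim.save("bubbles.gif", writer="pillow")`, assuming Pillow (or ffmpeg, for .mp4) is installed.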

 

What follows might be a little technical for a general audience, but I hope I can make it useful nevertheless.  When I had ChatGPT show me the code for (one part of) my bubble chart so I could copy it into my local development environment for inclusion in a broader workflow, it included a Python library that I did not have installed.  No problem, I thought, I'll just do a 'pip install'.  It was a no-go.  I decided to Google the library, ace_tools, and I found an interesting article on the OpenAI Developer Community explaining how this could be exploited by a hacker.  Frankly, had the described exploit been in place, I easily could have been exploited myself.  And I know better than that!  To quote my favorite line from the movie The Firm, "I get paid to be suspicious when I got nothing to be suspicious about."  Lesson re-learned.  It is a never-ending battle to cultivate skepticism and mistrust of technology; not technology per se, but all the attack vectors it creates.
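One defensive pattern that would have helped here is probing for a module before importing it, rather than pip-installing an unfamiliar name on faith.  This is a hypothetical sketch, not the code ChatGPT gave me; the `ace_tools` call inside it is an assumption about what the sandbox-only helper looks like.

```python
import importlib.util

def have_module(name: str) -> bool:
    """Return True only if `name` is already importable locally."""
    return importlib.util.find_spec(name) is not None

def safe_display(obj):
    # ace_tools exists inside ChatGPT's own sandbox, not on my machine.
    # Instead of blindly running `pip install ace_tools` (a package-
    # squatting risk), fall back to a plain print when it is absent.
    if have_module("ace_tools"):
        import ace_tools  # assumed sandbox-only helper
        ace_tools.display_dataframe_to_user("data", obj)  # assumed signature
    else:
        print(obj)
```

The point is not this particular helper; it is that an unknown import name is a prompt to investigate, not a prompt to install.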

 

I have also seen ChatGPT not get its own code quite right.  I can understand ChatGPT code not being ready for verbatim "copy and paste" into something I (or anybody) might be working on, but I was surprised to see that it didn't quite like its own code either.  It showed me the code, and up came warnings from the very code that ChatGPT had created and run on its own.  I searched to see if others had encountered and documented a similar situation, but I haven't found anything yet.  Another possible exploit to monitor.
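When reviewing generated code, one way to keep warnings from scrolling past unnoticed is to escalate them to errors while exercising it.  This little harness is my own illustrative sketch, not part of the chart code from the story.

```python
import warnings

def run_strict(fn, *args, **kwargs):
    """Call fn with every warning escalated to an exception.

    Useful when reviewing generated code: a DeprecationWarning that
    would otherwise scroll by becomes a hard stop you must look at.
    """
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        return fn(*args, **kwargs)

# A stand-in for some generated function that emits a warning.
def noisy():
    warnings.warn("deprecated API", DeprecationWarning)
    return 42
```

Run normally, `noisy()` returns 42 and the warning can slip by; run through `run_strict`, the same call raises, forcing a review.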

 

AI can be almost magical, but it cannot be trusted outright, without verification.  This is true of the words it writes, the charts it creates, and the code it generates.  My suggestion, based on experience, is to factor in some editing and review time when calculating the time savings of using AI.