ChatGPT is software that is designed to chat with you like a really smart person. It can make up a story, convert it to a Shakespearean poem, and then solve a math problem, all automatically and all within seconds. 🤯
The Daily covered it nicely on the episode “Did Artificial Intelligence Just Get Too Smart?”
ChatGPT is pretty incredible, especially for students trying to fake a term paper. But the ChatGPT blog itself calls out some interesting and very human-like limitations. In particular, it’s sort of a bore and a blowhard.
- It has a tendency to respond with “plausible-sounding but incorrect or nonsensical answers”
- It is “often excessively verbose” and “overuses certain phrases”
- It often fails to “ask clarifying questions when the user provided an ambiguous query,” opting instead to “guess what the user intended”
- And my favorite, “it will sometimes respond to harmful instructions or exhibit biased behavior.”
So it is overconfident and unreliable, repetitive, a bit of a motormouth, makes assumptions, is biased, and sometimes lacks moral backbone. Does this sound like anyone you know?
Still, this software is an amazing accomplishment. Kudos to the team for being open about its limitations, and good luck making it better (and hopefully not evil 🤷🏻‍♂️).