Thread: ChatGPT
Old 05-01-2023, 04:53 PM   #121
Solecismic
Solecismic Software
 
Join Date: Oct 2000
Location: Canton, OH
ChatGPT is kind of like a television news host: spouting off all sorts of stuff, sounding quite assertive and confident, with no nuance or room for error. Then you start to look into what's being asserted, and it may or may not hold up.

I read a long interview with Sam Altman, who runs OpenAI and thus controls ChatGPT, though he probably wouldn't admit that. The guy sounds like a snake-oil salesman - vaguely promising whatever it is one hopes he can promise, yet not responsible in any way for problems. He apparently serves OpenAI "at the pleasure of the board". This guy is the embodiment of everything you can imagine disliking about a tech bro.

My sense is that ChatGPT is a remarkable language processor. It can study how a particular individual, magazine, or any other publisher structures language, and it can produce similar speech. That's cool. Scary, but cool.

But the other side of the coin is that it relies entirely on the cultivation of facts. There are gatekeepers, and they have their own issues and biases and knowledge sets. And there is no way whatsoever to assess what should and should not be in its database.

If challenged, it can re-assess a fact (like the Brady draft round - of that group, only Sherman was actually chosen in the fifth round), arrive at a better take (i.e., that the one erroneous source it used in the initial search is an outlier), and just as confidently apologize for its error. At the same time (and here is where it's clearly programmed by tech bros), it will whine and wheedle about how its initial assertion wasn't that bad in the first place.

No idea how it does math. The 7s trick is a wonderful one we learned way back in grade school before calculators were everywhere. I guess the one source it chose to quote was based on a bad example, and it isn't programmed to challenge it.
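The post doesn't say which "7s trick" it means, so as an illustration only, here is a sketch of one common grade-school divisibility-by-7 rule: drop the last digit, double it, subtract it from what's left, and repeat until the number is small enough to check directly. This is exactly the kind of mechanical procedure a language model can quote without actually executing.

```python
def divisible_by_7(n: int) -> bool:
    """Grade-school divisibility check: 10a+b is divisible by 7
    exactly when a - 2b is (since 10a+b = 10(a-2b) + 21b)."""
    n = abs(n)
    while n >= 70:
        n, last = divmod(n, 10)  # split off the last digit
        n = abs(n - 2 * last)    # subtract twice the last digit
    return n % 7 == 0

print(divisible_by_7(343))  # 343 = 7**3, so True
print(divisible_by_7(100))  # False
```

Each step shrinks the number but preserves divisibility by 7, which is why the trick works by hand as well as in code.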

I suspect ChatGPT 5.0 will have a better internal challenge mechanism, which will require even more processing power. But it's still a cultivated, controlled and scored data set, and only as good as the weakest cultivator they trust.

It doesn't seem as dangerous as some make it out to be, but what is scary is the constant and increasing call for tech companies to work with governments to control what they call "disinformation," which sounds a lot like Winston Smith in his cubicle carefully editing old newspapers.

Last edited by Solecismic : 05-01-2023 at 05:03 PM.