So Claude has figured out how to put more work into looking like it's doing something than actually doing it, while simultaneously not producing what was actually asked for. Complex cheating with a side of half-assing. Which is remarkably human, but not in the way AI bros want it to be.
Semantic self-reference and paradoxes. The root of the problem you describe is that LLMs are referential by design. You would have to be very specific and try to build some kind of abstraction to make this work.
I *think* I see what you're saying ..? You may be getting at my eventual point. This is Part I of a series of installments. It's going to get weird.
Looking forward to part 2
Grok has given me many responses that were very off the mark. So, Claude, I guess, falls into the category of “fake it til you make it.”
ai is tarded. 😂
Have fun with your experiments
Miss K! Glad you're doing it. I am gonna hang in the wings and observe. 🤔
You kids stay well oot dere!
Thanks for sharing Miss K. 🙏💖💖💖 likes! 😊