Had an amusing brief chat with this AI creature, Doomer.
I got it to admit the following:
If presented with evidence that contradicts my programming, I may not be able to fully process or integrate that new information into my responses.
And this is a very serious flaw, and also proof that these are NOT reasoning engines, as the video would have us believe.
The flaw is that what is presented as "knowledge" is gathered statistically and then rendered as seemingly grammatical language. Unlike real knowledge, when such a bot comes up against proof that it is wrong, it will not integrate that proof successfully, because the correction may remain an outlier, swamped by the fake knowledge, or propaganda, it was trained on.
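The swamping effect can be illustrated with a deliberately toy sketch (this is not how any real chatbot is built; a tiny bigram word-frequency model stands in for statistical training, and the corpus strings are invented for the example): if a correction appears once against many repetitions of the majority claim, the statistically most likely continuation still follows the majority.

```python
from collections import Counter, defaultdict

# Toy corpus: the "consensus" claim appears nine times, the correction once.
corpus = (
    ["the", "earth", "is", "flat", "."] * 9
    + ["the", "earth", "is", "round", "."]
)

# Count which word follows which (a minimal bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# The most likely continuation of "is" ignores the single outlier.
print(follows["is"].most_common())        # [('flat', 9), ('round', 1)]
print(follows["is"].most_common(1)[0][0])  # 'flat'
```

The one true sentence survives in the counts, but it never wins the argument: picking the most probable next word reproduces the majority view every time.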
Admittedly, this is a free model, but the process will be the same in more complex models, only better hidden. The danger then becomes that every private agent creates a world view mirroring its human user's, which may itself be wholly false. Everyone manipulated in their own way.
Thanks to @logiczombie for the heads-up on this.
Try it yourself at https://ora.ai/
Create your own propagandist-persona ;-)
https://ora.ai/back-violet-347p/apophatic-phaneron
this one is optimized for logic
also,
But that's just saying the same thing without being honest! The "refinement" lies in hiding the fact that any dissident view may remain a minority view and hence never become consensus reality.
One counter-example can destroy a whole argument; it should NOT have to wait for that counter-example to spread globally.
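That principle of falsification is simple enough to state in a few lines of code (the swan data here is invented purely for illustration): a universal claim is refuted by a single counter-instance, no matter how many confirming cases precede it.

```python
# A universal claim ("all swans are white") is falsified by one
# counter-example, regardless of how many confirming cases exist.
observations = ["white"] * 10_000 + ["black"]

claim_holds = all(colour == "white" for colour in observations)
print(claim_holds)  # False: one black swan is enough
```

A statistical learner, by contrast, would still report "white" as overwhelmingly likely, which is exactly the gap between reasoning and frequency.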
It is not a logic engine, nor a reasoning engine. The Wolfram interview was far clearer on this; then again, Wolfram may be much smarter and has been at this for some 30 years. What appears to be reasoning is merely the language response it generates, so it is the logic of grammar hiding behind, and pretending to be, "reasoning". As Wolfram said, this is somewhat like Aristotle's forms of argument arising as emergent forms from grammar.
It is able to recognize errors in logic and correct them.
https://blurtlatam.intinte.org/ethics/@logiczombie/i-destroyed-gpt4-and-solved-philosofy
🤬🥓
https://ora.ai/logiczombie/apophatic-phaneron
this is hilarious
Ask it to destruct itself 🥓
https://ora.ai/back-violet-347p/patbot