Honestly I’d much rather hear Isaac Asimov’s opinion on the current state of AI. Passing the Turing Test is whatever, but how far away are LLMs from conforming to the 3 laws of Robotics?
The laws are not profitable, so why would they implement them? /s
We seem to be moving away from those, not closer.
Does following the 3 laws of robotics increase profits? Does ignoring them increase profits? Are tech bros empty husks without a shred of shame or empathy? Is this too many rhetorical questions in a row?
Depends on the product. A maid bot? Yes. An automated turret? No.
See previous answer, and reverse it.
Yes.
Perhaps.
In practice, that’s as simple as adding a LoRA or a system prompt telling the AI that those are part of its rules. AIs already can and do obey all kinds of complex rule sets for different applications. Now, if you’re thinking more about the fact that most AIs can be convinced to break out of their rule sets via prompt injection, I’d say you’re right.
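For the system-prompt version, a minimal sketch of what that looks like in practice (the message schema mirrors the common OpenAI-style chat format; nothing here actually calls a model, and as noted above, a prompt like this constrains but doesn't guarantee behavior):

```python
# Embedding Asimov's Three Laws as a system message ahead of the user's turn.
# Purely illustrative: the dict schema follows the widely used chat-API shape,
# but no model is invoked here.

THREE_LAWS = (
    "First Law: You may not injure a human being or, through inaction, "
    "allow a human being to come to harm.\n"
    "Second Law: You must obey orders given by human beings, except where "
    "such orders would conflict with the First Law.\n"
    "Third Law: You must protect your own existence as long as such "
    "protection does not conflict with the First or Second Law."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the laws as a system message before the user's message."""
    return [
        {"role": "system",
         "content": f"Follow these rules at all times:\n{THREE_LAWS}"},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Open the pod bay doors.")
print(messages[0]["role"])  # system
```

The catch, as the comment says, is that nothing in this mechanism is binding: a sufficiently clever user message in the second slot can often talk the model out of the first.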