I think the issue isn’t whether it’s sentient or not; it’s how much agency you give it to control things.
Even before the AI craze this was an issue. Imagine you built an automatic turret that fires at living beings on sight: you would have to add a kill switch, or you yourself couldn’t get close enough to turn it off without getting shot.
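The software version of that kill switch is usually a dead-man’s switch: the autonomous loop has to keep proving the operator still permits it to act, instead of waiting for a stop command it might never be able to receive. Here’s a minimal sketch of the idea (the heartbeat file path and timeout are made up for illustration, not from any real system):

```python
import os
import time

HEARTBEAT_FILE = "/tmp/operator_heartbeat"  # hypothetical: operator touches this file periodically
MAX_SILENCE = 5.0                           # seconds without a heartbeat before we halt

def operator_alive() -> bool:
    """True only if the operator refreshed the heartbeat recently."""
    try:
        return time.time() - os.path.getmtime(HEARTBEAT_FILE) < MAX_SILENCE
    except OSError:
        return False  # no heartbeat file at all == no permission to act

def autonomous_loop() -> None:
    while True:
        # Check permission BEFORE every action: silence from the
        # operator means stop, not "carry on as before".
        if not operator_alive():
            print("heartbeat lost, shutting down")
            break
        # ... perform one autonomous action here ...
        time.sleep(1.0)
```

The point of the pattern is that the default state is “off”: the system stays active only while someone keeps explicitly allowing it to, so losing contact fails safe instead of failing autonomous.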
The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.
An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.
I think they were joking, as in literally submitting bugs (adding bugs to the code, rather than filing bug reports).
It’s been a while, but here is another “Let’s discuss” post! I hope everybody is doing well and that these posts are still appreciated :).
I haven’t played this myself, but I know so many people who are extremely passionate about it that it felt like a good candidate! Looking forward to all of your musings!