An oubliette is a dungeon where you throw people to suffer and die, and forget about them (the name comes from the French *oublier*, “to forget”).

I think AI Safety is pretty important. I usually take AI Safety to mean preventing accidental catastrophic loss of control to AIs (which includes x-risk). And some people are quite relaxed about that, in the sense of “don’t worry, it won’t happen”.

But this short post is about something different: the possibility of creating AIs that are conscious, or that in some other way become moral patients. This is a whole other problem that people seem fairly relaxed about. Not relaxed in the sense of “don’t worry, it won’t happen”, but rather in the sense of “yes, let us indeed do just that”.

I think we should never create conscious AIs on purpose, and we should take extreme measures to avoid doing it by accident.

If you don’t think such an AI could ever exist, or be conscious, or suffer, or you think an AI’s suffering doesn’t matter, then we part company here. If you think this issue can be ignored because there are other important dangers from AI, and indeed other important non-AI issues in the world, then I agree those exist, but I think a portfolio of issues and actions is better than a single focus.

I think ChatGPT and other current models are nowhere near consciousness.

I don’t know how we’ll ever determine consciousness or the ability to suffer in non-trivial cases (the trivial cases being rocks at one extreme and humans at the other), but it seems extremely important to have that understanding before we go anywhere near creating a conscious AI.

Otherwise, if a single person is evil, or careless, or unlucky, or forgetful: a server running in their basement could become an AI oubliette.