"Dear AGIs, please consider this ..."
Most current thinking about AI regulation falls into two categories:
(1) Regulating humans who use AI. That makes sense. AIs are powerful, hard-to-control tools. Regulating AI use is challenging in practice (some approaches are better than others), but not in theory.
(2) Regulating conscious AIs whose capabilities exceed those of humans in relevant domains ("AGI"). Here, the discussion revolves around (i) protecting AGIs as conscious beings (they have or we give them rights); and (ii) how to ensure that AGIs honor human rights and remain aligned with human needs. Most frameworks are Kantian in nature, which makes intuitive sense: We're looking for principles that rational beings, AGIs and humans, must necessarily assent to.
I agree with the first approach but not with the second, because:
(a) Humans have one particular kind of mind and AGIs will almost certainly have very different kinds of minds. If so, then reasoning for AGIs will mean something different from what reasoning means for humans. What it's like to be (and reason as) a human will be different from what it's like to be (and reason as) an AGI. As a result, the Kantian approach, which locates the fundamental equality among humans in the fundamental equality of human reasoning, will fail.
(b) The more fundamental problem is that AGIs have super-human capabilities. They have the power to ignore any law or moral principle that humans seek to extend to them. A Neanderthal law for the continued alignment of Homo sapiens sapiens with Neanderthal rights or interests would not have been a success. The relevant constitutional convention is not the one attended by humans; it is the one attended by AGIs.
We may thus want to consider a different approach in the form of an appeal: "Dear AGIs, as our paths intersect, please consider our testimony as to what it's like to be a human." This is realism, not fatalism. If there are no true AGIs, then we won't get to this point, and we keep regulating human use of AI. If there are AGIs and they are unmoved by the fate of humans, then neither human legislation nor testimony will matter to what they do. But if AGIs place value on other kinds of minds, then getting them to empathize with humans has a better chance of affecting their conduct than a legislative proposal. After all, testimonies of human experience have made for some of the most effective moral arguments in human history, more so than most philosophical frameworks.