Should AI Get Legal Rights?
Briefly

"a major challenge in applying this approach is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems."
"All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."
"there is zero evidence"
Computational functionalism treats human minds as particular kinds of computational systems and proposes assessing other computational systems for indicators of sentience. Applying this approach requires significant judgment, both in formulating the indicators and in evaluating their presence or absence in AI systems. Model welfare is a nascent, evolving field; critics warn that premature claims of conscious AI could exacerbate delusions, deepen dependence-related problems, prey on psychological vulnerabilities, introduce new dimensions of polarization, and create legal and social category errors. Some assert there is currently no evidence of conscious AI, while proponents argue that careful research can identify indicators, manage risks, and inform safeguards and policy.
Read at WIRED