Collection

AI Safety

Special Issue Description

The accelerating pace of recent advances in artificial intelligence highlights the importance of safety in developing and deploying new technologies. The nascent field of AI safety is naturally interdisciplinary, bringing together technical researchers, policymakers, and, increasingly, philosophers. This special issue will feature philosophical work that engages with a broad range of issues related to AI safety and existential risk. Possible topics of interest include but are not limited to:

Evaluations of current AI risk models.

Conceptual work on alignment problems.

Explorations of possible “failure modes” — ways in which AI systems might lead to bad outcomes, fail to achieve their intended goals, or exhibit unintended behaviors.

Ideas for improving the transparency and interpretability of artificial neural networks.

New or improved approaches to moral learning in artificial systems.

Investigations of potential conflicts between AI safety and control measures and the possible ethical standing of AI systems.

Analyses of the impact of competitive dynamics between businesses and/or governments on AI safety.

Assessments of the ways in which interactions between AI systems, or between AI systems and humans, might bear on safety outcomes.

Submission Guidelines

Submissions should not exceed 10,000 words. The submission deadline is 1 December 2023 (extended from 1 November 2023). Papers should be written in English and include an abstract of up to 300 words and four to six keywords. Submissions must be suitable for double-blind peer review.

Papers must be submitted online through Editorial Manager®. Important: when submitting the paper, be sure that the option “SI: AI Safety” is selected from the Article Type menu. Philosophical Studies’ guidelines for authors and submissions can be found here. Papers will be handled according to the journal’s Peer Review Policy, Process and Guidance and reviewers will be selected according to the Peer Reviewer Selection policies.

We expect the complete special issue to appear in print in mid-2024. However, papers accepted for publication will be posted to the Philosophical Studies webpage as “online first” as soon as possible after the refereeing process is complete.

Because of its focused scope, this special issue is likely to bring greater visibility, higher citation rates, and a more relevant readership to your paper than publication as a regular article. The journal is indexed in the Web of Science and currently has an Impact Factor of 1.300 and a CiteScore of 2.7.

For any questions, please contact the Guest Editors.

Editors

  • Cameron Domenico Kirk-Giannini

    Rutgers University–Newark, United States

    Cameron Domenico Kirk-Giannini is Assistant Professor of Philosophy at Rutgers University–Newark. He obtained his PhD from Rutgers University–New Brunswick, having previously studied at the University of Oxford and Harvard College. He specializes in philosophy of language, philosophy of artificial intelligence, and social philosophy. Email: cdk58@philosophy.rutgers.edu

  • Dan Hendrycks

    Center for AI Safety, United States

    Dan Hendrycks is Director of the Center for AI Safety. He obtained his PhD at UC Berkeley, having previously studied at the University of Chicago. He specializes in machine learning safety. His research has been supported by funding from sources including the National Science Foundation and Open Philanthropy. Email: hendrycks@berkeley.edu

Articles (4 in this collection)