Access policy

Summary

We add people based on a) whether the person works professionally in the AI governance field, and b) whether we have reason to believe them to be trustworthy with confidential information.

Introduction

This text is meant to roughly summarize how the TAIGA team decides whether to invite a person to TAIGA.

Since roughly the beginning of 2023, Max has been mainly responsible for processing access requests, and Alex has approved his decisions based on quick notes Max shares with him. Occasionally, Emma or Alex also decide independently to add somebody without consulting Max.

The two criteria we use for evaluation

These two criteria developed somewhat organically. They capture what we think about when evaluating an access request and by now have the character of “rules” that we try to follow.

As of July 2023, in the ~dozen cases where we did not grant access, it was most often the first criterion that was not fulfilled.

  1. Does the person work professionally on AI governance?

    1. “AI governance” here is somewhat fuzzy, but it is captured well by the definition from the EA Forum:

      1. “the study of norms, policies, and institutions that can help humanity navigate the transition to a world with [transformative] artificial intelligence. This includes a broad range of subjects, from global coordination around regulating AI development to providing incentives for corporations to be more cautious in their AI research.”

    2. By “working professionally” we mean that the person spends a significant fraction of their time working on the issues TAIGA is meant to cover, understands the relevant context of many discussions in the field, and will continue to work in the field.

  2. Is the person trustworthy?

    1. Will the person behave cooperatively in general? Will they be reasonable and civil when interacting with other members of TAIGA? Is there a risk that they will leak information from, e.g., docs they disagree with?

    2. We try to answer this by reaching out to a reference who knows them, sometimes by knowing the person ourselves, or sometimes simply because the person is employed at a trusted AI governance organization (some examples below).

Examples of people in the target audience

  1. People working in central AI governance teams get a relatively automatic pass.

    1. E.g. GovAI, RP’s AIGS team, AI Impacts, Epoch

  2. People working at AI x-risk-focused orgs whose role is in significant part focused on AI governance

    1. E.g. Open Phil, GPI, FHI, CSER, LPP

  3. People working at orgs that don’t primarily focus on AI x-risk but who are individually closely associated with the AI governance community and count x-risk reduction among their career priorities

    1. E.g. Conjecture

  4. People working in policy who are closely associated with the AI governance research community

  5. People who are in support roles for the AI governance community and are highly trusted.

    1. E.g. 80,000 Hours advisors working on AI governance, and individual grantmakers.

  6. Promising junior people who are focused on AI governance and are able, or nearly able, to make useful contributions themselves.

    1. One rough lower bar we have set here is that we decided to add the winter and summer fellows at GovAI, who are mostly relatively junior and in some cases haven’t engaged extensively, but who generally show significant promise and whom we expect to get up to speed fairly quickly.

Examples of people who might be interested but would not (yet) be invited

  1. Not working on AI governance: A person who doesn’t work on AI governance issues in any significant way but cares about risks from AI and wants to stay up to date on the research that is happening.

  2. Insufficient background in AI governance: A junior person who recently steered their career towards AI governance but who very likely hasn’t engaged much with the existing research and has not yet networked much within the professional community.

  3. Issues with trustworthiness: A person working on topics close to AI governance but who has shown themselves to be uncooperative to the degree that, e.g., our advisors would not feel comfortable sharing their unpolished research with them.

What we do in case of uncertainty

Sometimes it’s unclear to us whether the person actually works on and/or cares about existential risks from AI. Example reasons:

  • Some people provide very sparse information in their access application.

  • Some people are not known well, or at all, by Alex, Max, or Emma.

  • Some people are fairly new to working on issues associated with x-risk-focused AI governance, and it’s unclear what their personal priorities are and whether they are generally trustworthy.

In cases of uncertainty, we might

  1. reach out to the applicant to request more information about them,

  2. reach out to somebody from the broader AI governance community to inquire about the applicant,

  3. ignore the application (this has happened in roughly two cases).

    1. As far as we recall, this was because it seemed too unlikely we’d grant access and requesting further information seemed too cumbersome (e.g. one person didn’t fill in the form properly and seemed very unlikely to work on AI governance issues; another was unknown to us, and the way they introduced themselves indicated relatively little familiarity with the AI governance community).

What happens if we decide not to grant access

We let the applicant know why they won’t get access to TAIGA.

This has so far always been because the person did not work on AI governance issues professionally, or did so only with an insufficient fraction of their time or for a short period (e.g. a few months, and not full-time).
