About
The AI Governance Archive (TAIGA) is a centralized platform for AI governance researchers and collaborators to better coordinate. Members can use TAIGA to share non-public research within the community, search the archive of research to inform their own work, learn what others are working on, and connect with potential collaborators.
While there are public fora for sharing your writing, useful research and project ideas are often shared only unsystematically via Google Docs, e.g. because the thoughts are unpolished, require context, or are somewhat sensitive. TAIGA is to the shared mission of the AI governance field what internal knowledge databases are to corporations: a way to coordinate people and increase their effectiveness toward a common goal. TAIGA aims to increase the reach of such work while still giving contributors control over what information is shared with whom.
TAIGA is run by David Corfield, Max Räuker, and Alex Lintz, and was co-founded in April 2022 by Alex Lintz and Emma Bluemke. We are currently supported by grant funding from the Long Term Future Fund, and are in the process of finalizing our advisory board.
Our mission
We believe that AI has the potential to be a radically transformative technology within the next century. We support the broad mission to mitigate the risks from the development of transformative AI and ensure that it is used for the good of humanity.
TAIGA’s long-term goal is to be a key part of the centralized infrastructure supporting the mission of this field. We aim to do this by providing useful tools, building a network, and listening to and addressing needs as they arise. We recognize that AI governance is a rapidly changing field, and we plan to adapt as necessary to provide support and facilitate coordination as the field grows and priorities shift.
If an idea helps further the mission of the field and our database and network can be of use, please Contact Us.
Theory of impact
Through facilitating information exchange in AI governance, especially of non-public content that requires increased levels of context and trust, we hope to:
1) increase the overall quality of research by spreading useful information more widely, allowing other researchers to build on existing work;
2) improve networking, e.g., finding collaborators for future projects, finding researchers with relevant expertise, or inviting people to events that focus on specific topics;
3) increase the efficiency of the research field by better allocating researcher time, e.g. avoiding duplicated effort;
4) improve talent development by making it possible for new researchers to get up to speed on new topics and find relevant work that is not publicly accessible;
5) provide leaders with the data they need to make critical decisions, e.g., allocating resources to the most neglected areas; and
6) support field-building tasks, such as organizing workshops at conferences.
We always welcome feedback on our theory of impact and particularly welcome any ideas to help us measure and quantify relevant paths to impact.
Security
There are two relevant concepts of security for TAIGA: controlling who can access the information held in TAIGA (Access Security), and the technical security of the data we hold (Cyber Security). We know how important the security and privacy of your data are to our mission and to maintaining the trust of our community.
Access Security
We consider TAIGA reasonably secure so long as you follow our Terms of Use. We can say this with confidence because you control access to your individual documents through Google access controls, and they are protected by Google’s high level of cybersecurity. All information available in TAIGA (e.g., document name, summary, and personal profile information) is only accessible to the TAIGA community, admittance to which is governed by our Access Policy. That said, we recommend you don’t add highly sensitive information, as we can’t strongly vouch for everyone who has been granted access to TAIGA, and it is also possible for a TAIGA member to have their Google account compromised.
To reiterate our policies, we recommend that you:
Don’t include infohazards and other sensitive information in titles or summaries.
Be very careful about the security of your Google account. Our recommendations include using two-factor authentication and a unique and sufficiently long password.
If you prefer your name not to be mentioned anywhere on TAIGA, please write to us and we will find an anonymous option for you.
Sharing your login details, screenshots, or text excerpts from TAIGA with any non-member is a breach of our Terms of Use and will cause you to lose access to TAIGA. Please Contact Us if you have further questions or feedback.
Cyber Security
To ensure protection of the data on TAIGA, we have implemented the following security measures.
We use OAuth 2.0 via Google login for authentication, which ensures that only authorized individuals can access TAIGA and protects against unauthorized access attempts. We also follow other standard web security practices, such as TLS (SSL) encryption, to safeguard the transmission of your sensitive information.
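As a rough illustration of this approach (a minimal sketch, not TAIGA’s actual code), a Google OAuth sign-in with the Supabase JS client could look like the following; the project URL and key are placeholders:

```typescript
// Illustrative sketch only, not TAIGA's actual implementation.
// Shows a Google OAuth 2.0 sign-in using the Supabase JS client (supabase-js v2).
import { createClient } from '@supabase/supabase-js'

// Placeholder project URL and public (anon) key.
const supabase = createClient(
  'https://your-project.supabase.co',
  'public-anon-key'
)

// Redirects the user to Google's consent screen; Supabase then
// handles the token exchange and creates an authenticated session.
async function signInWithGoogle() {
  const { data, error } = await supabase.auth.signInWithOAuth({
    provider: 'google',
  })
  if (error) {
    console.error('Sign-in failed:', error.message)
    return null
  }
  return data
}
```

In this pattern, the public key is safe to expose in the browser; access rules are enforced on the server side rather than in the client.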
All of our data is stored in Supabase, a widely used and reliable database provider that is SOC 2 audited. You can read more about their security practices here.
We treat security as a top priority when developing new features, as we understand that users see security as a key feature of the site and expect it to meet a higher standard than a typical website.
That being said, we are only a small team without dedicated cybersecurity support. We therefore suggest you don’t add information to TAIGA itself that would be significantly harmful if it were leaked or accessed illegitimately. We think the most important security barrier is that you restrict access to the Google Docs you consider sensitive and only grant access selectively to people you deem sufficiently trustworthy.
Access philosophy
We sadly can’t approve every access request that we receive, but it is important for us and the community to have a transparent and defensible Access Policy.
TAIGA faces a few tricky trade-offs when deciding whom to grant access:
Being a space of trust and confidentiality: TAIGA tries to be a place in which people feel good about sharing relevant work that they wouldn’t share publicly, e.g. because it’s not polished, contains sensitive information, or is easy to misconstrue.
Helping those who share the broader mission of the field of AI governance and work on relevant topics: Those are the people that TAIGA is for.
Growing the field: We know the field needs to grow and that TAIGA can be a useful upskilling resource.
Being short on time and funding: We sometimes spend up to an hour processing an individual access request; such is our commitment to maintaining a trusted community.
We try to balance these considerations as well as we can so the broader community can trust our judgment. That is why we have shared our Access Policy publicly and actively request your Feedback on how we can improve it with those competing objectives in mind. We know that we can’t make everyone happy, and that’s a tough reality for us to live with, so we ask that you be kind and patient with us as we iterate on this policy together.
Team
Alex Lintz
Alex is an experienced AI governance researcher and has helped execute a number of projects in the space. He is an affiliate of Rethink Priorities and authored some influential work within the AI governance community on US tilting & advocacy around AGI risk. Aside from TAIGA, Alex’s current focuses include organizing a retreat for top strategic decision-makers in the AI governance field, helping to build an AI advocacy organization, and advising on some smaller projects.
Alex advises the team on strategy and operations.
David Corfield
David is a serial entrepreneur and organization builder. He created a non-profit that helped small businesses adapt to the COVID pandemic and generated over $1m in impact, and, before that, a fintech startup that transacted over $1m helping freelancers get paid faster and more securely. His research focuses on how transformative AI should be used for the good of society, and he publishes it on his blog, What Future World?.
David is responsible for project management, strategy, and product development.
Max Räuker
Max has a background in cognitive science, previously completed fellowships on Rethink Priorities’ General Longtermism team, and led an expert survey on intermediate goals in AI governance. He’s currently looking for policy analyst roles.
Max is responsible for general strategy, the Access Policy, and the newsletter.
Advisory board
We're in the process of finalizing our founding advisory board. We're excited to share this news with you in the next few weeks!