This jam is now over. It ran from 2023-03-24 17:00:00 to 2023-03-26 23:30:00.

A weekend for exploring AI & society! 

Together with collaborators from OpenAI, the Centre for the Governance of AI, the Existential Risk Observatory, and the AI Objectives Institute, we present cases that you will work on in small groups to find solutions for over the weekend!

🏆 $2,000 on the line 



AI Governance 

Artificial intelligence brings immense opportunities and risks. In this hackathon, we focus on how to mitigate and guard against the most severe risks using strategic frameworks for AI governance.

Some of these risks include labor displacement, inequality, an oligopolistic global market structure, reinforced totalitarianism, shifts and volatility in national power, strategic instability, and an AI race that sacrifices safety and other values [1].

The objective of this hackathon is to research strategies and technical solutions to the global strategic challenges of AI.

  • Can we imagine scenarios where AI leads to events that cause a major risk for humanity? [2, 3, 4]
  • Can we predict when AI will reach specific milestones and how future AI will act and behave from a technical perspective? [5, 6, 7]
  • Can we identify warning signs that we are about to hit a state of extreme risk? [8, 9, 10] And can we figure out when those warning signs actually work? [11]

These and more questions will be the target of your 48 hours! Your work could turn into a technical report or a blog post, and we look forward to seeing what you come up with.

Reading group

Join the Discord above to be part of the reading group, where we read up on research within AI governance! The preliminary reading list is:

Local groups

If you are part of a local machine learning or AI safety group, you are very welcome to set up a local in-person site to work together with others on this hackathon! We will have several across the world (list upcoming) and hope to increase the number of local sites. Sign up to run a jam site here.

You will work in groups of 1-6 people within our hackathon Discord and in the in-person event hubs.

Project submission & judging criteria

You write up your case proposals as PDFs using the template and upload them here on itch.io as project submissions for this hackathon. See how to submit your project.

We encourage all participants to rate each other's projects on the following categories:

Overall: How good are your arguments for how this result informs our strategic path with AI? How informative are the results for the field of AI governance in general?
AI governance: How informative is it for the field of AI governance? Have you come up with a new method or found completely new strategic frameworks?
Novelty: Are the results new, and are they surprising compared to what we would expect?
Generality: Do your research results generalize beyond your specific hypothesis? E.g. have you covered all relevant strategic cases around your scenario, and do we expect it to accurately represent the case?

The final winners will be selected by the judges, who also helped create the cases with us.

Inspiration

Follow along on YouTube and see the reading list above. Other resources from the one-pagers can be found here:

Submissions (47)

Cases:
  • Where will AI fit into the democratic system? (11)
  • Categorizing risks of AI (6)
  • Whose morals should AI follow? (3)
  • Custom case (13)
  • Policy for pausing AGI progress (9)
  • Bottlenecks investigation (1)
  • GPT-6 release considerations (4)
Jam sites:
  • Toronto (2)
  • Ho Chi Minh City (1)
  • Wisconsin-Madison (1)
  • Sunnyvale (1)
  • Online (22)
  • Copenhagen (1)
  • Aarhus (4)
  • Bangalore (6)
  • Cambridge (1)
  • Delft (1)
  • Sao Paulo (5)
  • Paris (2)


AI Ideathon: Helping Policymakers Classify AI Risk
Slowing down AGI progress by taxing usage of data for training large models
By making the AI "meme" more coherent, we could motivate people to act towards slowing down AI.
An essay on the limits and benefits of cooperation and competition in the age of AI
AI will be safe and beneficial, for whom?
This report was written for the “Slowing progress towards AGI” case of the AI governance hackathon.
An analysis of the opportunities and risks of artificial intelligence to democratic systems
A novel strategy and framework for measuring bias in text-to-image models
How OpenAI should manage ChatGPT-6 responsibly and ensure technological advancement
We propose legislation mandating evaluations of SOTA language models to test for dangerous capabilities.
Using the blockchain to enhance accountability, transparency and safety.
Bridging the gap between AI policy makers and AI developers/engineers.
A targeted approach to governance and technical implementation
What are the pressures and motivations for OpenAI to release GPT-6 quickly?
https://ai-risk.fahamuai.com
How to keep AI sane?
Framework for thinking about forecasting catastrophic AGI risk
Categorizing the risks of future AI systems can make them more accessible to policymakers
Challenges Towards the Implementation of International Regulations on Military Applications of AI
Analogy to explain AI Risks to policymakers
Exploring AI Moral Alignment: Bridging Diverse Values and Preferences for a Fair and Inclusive AI Future
Improving representation of the Global South