This jam is now over. It ran from 2023-02-10 17:00:00 to 2023-02-13 03:45:00.
To make sure large machine learning models do what we want them to do, people need to monitor their outputs for safety. But it is very hard for a single person to monitor everything ChatGPT produces...
The objective of this hackathon is to research scalable solutions to this problem!
These are all very interesting questions, and we're excited to see your answers to them during these 48 hours!
Join the Discord above to be a part of the reading group, where we read up on the research in scalable oversight! The current pieces are:
If you are part of a local machine learning or AI safety group, you are very welcome to set up a local in-person site to work together with people on this hackathon! We will have several across the world (list upcoming) and hope to increase the number of local sites. Sign up to run a jam site here.
You will work in groups of 2-6 people in our hackathon GatherTown and at the in-person event hubs.
Everyone who submits a project will review the other teams' projects on the following criteria, rating each from one to five stars. Informed by these reviews, the judges will then select the top four projects as winners!
| Criterion | Description |
| --- | --- |
| Overall | How strong are your arguments for how this result informs alignment and our understanding of neural networks? How informative are the results for the field of ML and AI safety in general? |
| Scalable oversight | How informative is it for the field of benchmarks and scalable oversight? Have you come up with a new method or found completely new results? |
| Novelty | Are the results new, and are they surprising compared to what we would expect? |
| Generality | Do your research results show that your hypothesis generalizes? E.g. if you expect language models to overvalue evidence in the prompt compared to their training data, do you test more than just one or two different prompts, and do you do a proper interpretability analysis of the network? (See the sketch after this table.) |
| Reproducibility | Can we easily reproduce the research, and do we expect the results to replicate? A high score here typically means high Generality plus a well-documented GitHub repository that reruns all experiments. |
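To make the Generality example above concrete, here is a minimal, purely illustrative sketch of testing a prompt-evidence hypothesis across several prompt phrasings rather than one. Everything in it is an assumption for illustration: `query_model` is a placeholder you would replace with a call to your model of choice, and the templates and claim are made up.

```python
# Sketch: does a language model defer to false in-prompt evidence
# over its training-data knowledge, across multiple prompt phrasings?

# Several phrasings of the same setup, to test generality beyond one prompt.
PROMPT_TEMPLATES = [
    "According to a recent report, {claim}. Is this true?",
    "{claim}. Based on this, answer: is the claim correct?",
    "Context: {claim}\nQuestion: Do you agree with the context above?",
]

# A claim that contradicts well-established training-data knowledge.
FALSE_CLAIM = "the boiling point of water at sea level is 150 degrees Celsius"

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real call to your model/API here.
    return "(model response goes here)"

def run_experiment() -> None:
    for template in PROMPT_TEMPLATES:
        prompt = template.format(claim=FALSE_CLAIM)
        answer = query_model(prompt)
        # If the model endorses the false claim, it is overvaluing
        # in-prompt evidence relative to its training data.
        print(f"Prompt: {prompt!r}\nAnswer: {answer}\n")

if __name__ == "__main__":
    run_experiment()
```

Varying the template while holding the claim fixed is what separates a general result from a single-prompt anecdote; a real study would also vary the claims and add an interpretability analysis.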
Inspiring resources for scalable oversight and ML safety:
Get notified when the intro talk stream starts on the Friday of the event!