Announcement

Welcome to Codabench!

Join the Google group to connect with the community!

NeurIPS 2023 Tutorial: Challenge Design Roadmap by Isabelle Guyon. Participants: follow the Get Ready instructions.

Get Started

Participate

Find benchmarks that pique your interest! A benchmark lets you test new algorithms against reference datasets or, in an inverted benchmark, submit challenging data to reference algorithms.

Organize

Organize a benchmark on Codabench. Start with our tutorial.

Contribute

Interested in joining the development team? Join us on Github or contact us directly.

435 Total Competitions
52 Public Competitions
1556 Users
1880 Competition Participants
11293 Submissions
About Codabench

What is Codabench?

Codabench is an open-source platform for organizing AI benchmarks. It is flexible and powerful, yet easy to use. You define tasks (e.g. datasets and metrics of success), set up the interface for code (algorithm) submissions, add documentation pages, upload everything, and that's it! Your benchmark is created and ready to accept submissions of new algorithms. Everything can be fully customized, including the code of the scoring program. Organizers can even hook up their own compute workers to their benchmarks, enabling unlimited computing power. Participants can try out their methods and get real-time feedback on a competitive leaderboard, detailed plots, and more.
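To make the idea of a customizable scoring program concrete, here is a minimal sketch in the spirit of the CodaLab/Codabench convention of reading reference and submission files from an input directory and writing key/value scores to an output directory. The directory layout, file names, and the accuracy metric are illustrative assumptions, not the platform's required format.

```python
# Hypothetical minimal scoring program (file names are assumptions):
# it compares a participant's predictions against the organizer's
# reference solution and writes an accuracy score for the leaderboard.
import os
import sys


def score(input_dir, output_dir):
    # Reference solution provided by the organizers (assumed layout).
    ref_path = os.path.join(input_dir, "ref", "solution.txt")
    # Participant's submitted predictions (assumed layout).
    res_path = os.path.join(input_dir, "res", "predictions.txt")

    with open(ref_path) as f:
        truth = [line.strip() for line in f]
    with open(res_path) as f:
        preds = [line.strip() for line in f]

    # Simple accuracy metric: fraction of matching lines.
    correct = sum(t == p for t, p in zip(truth, preds))
    accuracy = correct / len(truth) if truth else 0.0

    os.makedirs(output_dir, exist_ok=True)
    # Scores are written as "key: value" pairs read by the leaderboard.
    with open(os.path.join(output_dir, "scores.txt"), "w") as f:
        f.write(f"accuracy: {accuracy:.4f}\n")


if __name__ == "__main__" and len(sys.argv) > 2:
    score(sys.argv[1], sys.argv[2])
```

Because the scoring program is ordinary code supplied by the organizers, any metric (or several metrics at once) can be computed the same way.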

In a unique twist, Codabench also allows you to create inverted benchmarks. Here, the roles of datasets and algorithms are interchanged. In this scenario, you set the reference algorithms and the participants contribute datasets.

What is CodaLab?

CodaLab Competitions is a powerful open source framework for running competitions using result or code submissions. You can participate in an existing competition or host your own competition for free.

Most competitions hosted on CodaLab are machine learning (data science) competitions, but it is NOT limited to this application domain. It can accommodate any problem for which a solution can be provided in the form of a zip archive containing a number of files to be evaluated quantitatively by a scoring program (provided by the organizers). The scoring program must return a numeric score, which is displayed on a leaderboard where the performances of participants are compared.
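As an illustration of the submission format described above, the sketch below packages a predictions file into a zip archive. The file name and archive layout are assumptions for illustration; each competition's organizers specify the layout their scoring program expects.

```python
# Hypothetical example: packaging a results file into a submission zip.
# "predictions.txt" and its content are placeholders, not a required name.
import zipfile

# Create a predictions file (placeholder content for illustration).
with open("predictions.txt", "w") as f:
    f.write("label_1\nlabel_2\n")

# Bundle it at the archive root, where a scoring program would look for it.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("predictions.txt")
```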

History of CodaLab

CodaLab was created in 2013 as a joint venture between Microsoft and Stanford University. The original vision was to create an ecosystem for conducting computational research in a more efficient, reproducible, and collaborative manner, combining worksheets and competitions. Worksheets capture complex research pipelines in a reproducible way, creating "executable papers". Codabench is the continuation of CodaLab: a version 2 in which users can organize benchmarks.

Some competitions have been organized using worksheets, but the competition platform and the worksheet platform both have large user bases and can be used independently. In 2014, ChaLearn joined to co-develop CodaLab Competitions. Since 2015, University Paris-Saclay has been the community lead of CodaLab Competitions, under the direction of Isabelle Guyon, professor of big data. CodaLab and Codabench are administered by the LISN staff.

CodaLab in Research

CodaLab is used actively in research: in 2019/2020 alone, 400 new challenges were launched. Recent popular challenges organized with CodaLab include the COVID-19 retweet prediction challenge, the ECCV 2020 ChaLearn LAP fair face recognition challenge, and the 2020 DriveML Huawei Autonomous Vehicle Challenge. High-profile challenges include the See.4C consortium's EU challenge with its 2 million Euro prize, the CIKM AnalytiCup 2017 (493 participants), MSCOCO (633 participants), and the ChaLearn AutoML challenge 2017 (687 participants).

Since 2016, CodaLab has offered the possibility of organizing machine learning challenges with code submission. The simplest machine learning challenges require only the submission of results, which are compared to a solution (or key) by a scoring program. Result submission challenges are less computationally expensive than code submission challenges, but they offer fewer possibilities: in particular, code submission allows conducting fair benchmarks by executing submitted code under the same conditions for all participants.

CodaLab has been providing free resources for challenge organizers who want to run high-impact events, within a pre-approved budget. New since version 1.5: organizers can hook up their own compute workers to the CodaLab backend to handle code submissions, enabling big-data competitions that run at the organizers' expense. For special dedicated projects, CodaLab can be customized, since it is an open source project.

News

Aug 17, 2023
We're thrilled to announce that our pioneer challenge organizers have launched the very first competitions on Codabench! Dive into the AutoML Cup, a contest where participants craft state-of-the-art automated ML algorithms to tackle tasks across different dimensionalities. Additionally, the Auto-Survey Challenge invites language model enthusiasts to design models capable of both writing scientific surveys and reviewing them. Join us in this innovative journey on Codabench!
Jun 23, 2023
After months of development efforts from many contributors, Codabench is finally ready to be used for new benchmarks and competitions. Let's go!
Aug 15, 2020
CodaLab Competitions exceeds 50,000 users, 1000 competitions (over 400 in the last year), and ~600 submissions per day!
Cite Codabench in your research
@article{codabench,
    title = {Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform},
    author = {Zhen Xu and Sergio Escalera and Adrien Pavão and Magali Richard and 
                Wei-Wei Tu and Quanming Yao and Huan Zhao and Isabelle Guyon},
    journal = {Patterns},
    volume = {3},
    number = {7},
    pages = {100543},
    year = {2022},
    issn = {2666-3899},
    doi = {10.1016/j.patter.2022.100543},
    url = {https://www.sciencedirect.com/science/article/pii/S2666389922001465}
}