Codabench is an open-source platform for organizing AI benchmarks. It is flexible and powerful, yet easy to use. You define tasks (e.g., datasets and metrics of success), set up the interface for code submissions (algorithms), add some documentation pages,
upload the bundle, and that's it: your benchmark is created, ready to accept submissions of new algorithms. Everything can be fully customized, including the code of the scoring program. Organizers can even hook up their own compute workers to their benchmarks, enabling unlimited computing power.
Participants can try out their methods and get real-time feedback, results on a competitive leaderboard, detailed plots, and more.
In a unique twist, Codabench also allows you to create inverted benchmarks, in which the roles of datasets and algorithms are interchanged: you set the reference algorithms, and the participants contribute datasets.
CodaLab Competitions is a powerful open source framework for running competitions using result or code submissions.
You can participate in an existing competition or host your own competition for free.
Most competitions hosted on CodaLab are machine learning (data science) competitions, but the platform is not limited to this application domain.
It can accommodate any problem for which a solution can be provided in the form of a zip archive containing a
number of files to be evaluated quantitatively by a scoring program (provided by the organizers).
The scoring program must return a numeric score, which is displayed on a leaderboard where
the performances of participants are compared.
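A scoring program in this setting is simply code that reads the submitted files, compares them to the hidden solution, and emits a number for the leaderboard. The sketch below illustrates the idea with a toy accuracy metric; the metric, data, and output format are illustrative assumptions, since each organizer defines their own:

```python
import json

def score(predictions, reference):
    """Compare submitted predictions to the reference key and
    return a numeric score (here: simple accuracy)."""
    correct = sum(p == r for p, r in zip(predictions, reference))
    return correct / len(reference)

# Example: a submission of 4 labels scored against the key.
submission = [1, 0, 1, 1]
key = [1, 0, 0, 1]
print(json.dumps({"accuracy": score(submission, key)}))  # → {"accuracy": 0.75}
```

The essential contract is only that the program ends with a numeric score the platform can display; how the score is computed is entirely up to the organizer.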
CodaLab was created in 2013 as a joint venture between Microsoft and Stanford University.
Originally the vision was to create an ecosystem for conducting computational research
in a more efficient, reproducible, and collaborative manner, combining worksheets and
competitions. Worksheets capture complex research pipelines in a reproducible way and create "executable
papers". Codabench is the continuation of CodaLab: a version 2 in which users can organize benchmarks.
Some competitions have been organized using worksheets, but the competition platform
and the worksheet platform each have a large user base and can be used independently.
ChaLearn later joined to co-develop
CodaLab competitions. Since 2015, University Paris-Saclay has been the
community lead of CodaLab competitions, under the direction of Isabelle Guyon, professor
of big data. CodaLab and Codabench are administered by the LISN staff.
CodaLab is used actively in research: in 2019/2020, 400 new challenges were launched.
Recent popular challenges organized with CodaLab include the retweet prediction challenge,
the ECCV 2020 ChaLearn LAP Fair face recognition challenge, and the 2020 DriveML Huawei
Autonomous Vehicle Challenge. High-profile challenges include the 2 million Euro prize
of the EU, organized by the See.4C consortium, the CIKM AnalytiCup 2017, which attracted
493 participants, another challenge (633 participants), and the ChaLearn AutoML
challenge 2017 (687 participants).
Since 2016, CodaLab has offered the possibility of organizing machine learning challenges with
code submission. The simplest machine learning challenges require only the submission of
results, which are compared to a solution (or key) by a scoring program. Result-submission
challenges are less computationally expensive than code-submission challenges, but they offer
fewer possibilities. In particular, code submission allows conducting fair benchmarks by
executing the submitted code under the same conditions for all participants.
CodaLab has been providing free computing resources for challenge organizers who want to run
high-impact events, within a pre-approved budget. Since version 1.5, organizers can
hook up their own compute workers to the backend of CodaLab and redirect code submissions to them,
enabling big-data competitions that run at the organizers' expense. For very
special dedicated projects, CodaLab can also be customized, since it is an open-source project.