Submitting an analysis to Rivet
Analyses from LHC (and other) experiments
Our thanks to the LHC experiments who've really got into the swing of submitting Rivet analysis routines directly to us -- this is by far the best way to ensure high-quality analyses that match the experiment analysis procedures.
IMPORTANT: please make sure that your analysis submission uses reference data taken from HepData, e.g. the pre-publication staged version of the experiment data submission to HepData. This is the canonical repository for all HEP analysis data, and Rivet needs to stay synchronised with it for future maintainability of our analysis routines: please help us to avoid divergence.
Rivet plugins tested and validated by the experimental groups of ALICE, ATLAS, CMS and LHCb should now be copied directly to the Rivet downloads/contrib area by the responsible contacts of the experiments.
We have machinery in place that detects new files in that folder and alerts us to take action, but it's also helpful if you drop a short email to rivet@… to let us know about the upload.
We would like to ask you to scp a single .tar.gz tarball for each new analysis to that area, containing:
- the analysis files (.cc, .info, .plot, .yoda)
- a rivet-plots HTML directory with your validation plots
For our ease of integration into the Rivet releases, it's preferred if the analysis files are all in the same base- or sub-directory of the tarball.
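For illustration, a tarball layout along these lines works well (the analysis name EXPT_2024_I1234567 is just a placeholder, not a required naming scheme):

```
EXPT_2024_I1234567.tar.gz
  EXPT_2024_I1234567/
    EXPT_2024_I1234567.cc
    EXPT_2024_I1234567.info
    EXPT_2024_I1234567.plot
    EXPT_2024_I1234567.yoda
    rivet-plots/        <- HTML directory with your validation plots
```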
Regarding the validation: ideally the labels in these plots should allow us to identify the corresponding line in the corresponding publication. Where this is no longer possible, e.g. because the original MC samples have been deleted, a clear identification of the MC used in the validation, and perhaps a run card, will be very helpful. Reproducibility is good!
If you have written a Rivet analysis and want to make it "official", then you need to submit it to the Rivet authors to become part of the released package.
Note that we don't guarantee to accept all analyses in their initially submitted form, or maybe at all. Every analysis that we accept is one which we need to maintain from then on, and in particular for MC_* analyses (those which don't directly correspond to a published experimental analysis) we aim to make the collection as clear and non-overlapping as possible. This said, we have never yet rejected an analysis completely, although we have asked for significant changes!
Enough doom and gloom -- we're very happy that you want to contribute your analysis for community use. But we still have to maintain these things, and that means that both the physics performance and the code quality need to be good at submission time.
For the physics quality, you should make sure that your analysis code corresponds as closely as possible to the treatment of the MC lines in the paper, and that you can reproduce the behaviour of at least one set of MC lines from the paper -- if possible using the exact same MC sample as the paper used. Please find a way that you can show us these plots.
Rivet has a set of code rules under CodingStyleGuide. Mostly these are good guidance for clear code (despite being for a different language, the Python style guide is also full of generally good advice for clear, readable, and maintainable code), but some are purely chosen for stylistic consistency with the rest of the Rivet system. Please pay attention to these, and don't use a different style that you personally prefer in analyses that you wish to make official.
As well as the layout, naming conventions, etc., please try to use features of the Rivet library as much as possible. For example, did you know that:
- Jet and Particle have their own pT(), eta(), phi(), etc. methods, so you don't have to use an intermediate call to momentum()
- Even better, they have pid() and abspid() methods, so no more need for abs(particle.pid())
- There are also many helpful functions like inRange(x, low, high)
Make sure that you have put correct and helpful information into the .plot files associated with your analysis. Make sure that the LaTeX is well-formed and that quotes in YAML blocks match. If you have checked in detail that your analysis is correct and matches the MC lines in the paper (or whatever the best metric for "faithful implementation" is), then mark the status as "VALIDATED". Put your name and a long-term email address in the .info file so users (and us!) can find you when things go wrong!
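As an illustration, a single .plot entry might look like the following. The analysis name and histogram path here are hypothetical; use the paths your routine actually books.

```
# BEGIN PLOT /EXPT_2024_I1234567/d01-x01-y01
Title=Leading jet transverse momentum
XLabel=$p_\perp$ [GeV]
YLabel=$\mathrm{d}\sigma/\mathrm{d}p_\perp$ [pb/GeV]
LogY=1
# END PLOT
```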
Thanks again for putting something back :-)