Commit f777489d authored by Vít Novotný's avatar Vít Novotný
Browse files

evaluate_run @ evaluate.py: support for ranks starting at zero (cont.)

parent 69913b98
Pipeline #62919 failed in 47 seconds
@@ -36,7 +36,7 @@ trained using subsets of the `task1` and `task2` tasks.
#### Using the `train` subset to train your supervised system
``` sh
-$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.16
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.17
$ python
>>> from arqmath_eval import get_topics, get_judged_documents, get_ndcg
>>>
@@ -65,7 +65,7 @@ Here is the documentation of the available evaluation functions:
#### Using the `validation` subset to compare various parameters of your system
``` sh
-$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.16
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.17
$ python
>>> from arqmath_eval import get_topics, get_judged_documents
>>>
@@ -96,7 +96,7 @@ $ git push # publish your new result and the upd
#### Using the `all` subset to compute the NDCG' score of an ARQMath submission
``` sh
-$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.16
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.17
$ python -m arqmath_eval.evaluate MIRMU-task1-Ensemble-auto-both-A.tsv all
0.238
```
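The NDCG' ("NDCG prime") score printed above is nDCG computed over a condensed result list: documents without relevance judgements are removed from the ranking before the standard nDCG formula is applied. A minimal self-contained sketch of that computation follows; it only illustrates the metric and is not the `arqmath_eval` implementation (the function name `ndcg_prime` and the toy data are hypothetical).

```python
from math import log2

def ndcg_prime(ranked_docs, judgements, k=1000):
    """nDCG' sketch: drop unjudged documents, then compute nDCG@k.

    ranked_docs -- document identifiers in ranked order
    judgements  -- dict mapping judged documents to relevance grades
    """
    # Condense the list: keep only documents that were judged.
    condensed = [doc for doc in ranked_docs if doc in judgements][:k]
    # Discounted cumulative gain of the condensed ranking.
    dcg = sum(judgements[doc] / log2(rank + 2)
              for rank, doc in enumerate(condensed))
    # Ideal DCG: judged documents sorted by decreasing relevance.
    ideal = sorted(judgements.values(), reverse=True)[:k]
    idcg = sum(rel / log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: 'unjudged' is skipped rather than counted as irrelevant.
judgements = {'doc1': 3, 'doc2': 0, 'doc3': 1}
ranking = ['doc3', 'unjudged', 'doc1', 'doc2']
print(round(ndcg_prime(ranking, judgements), 3))  # prints 0.797
```

Skipping unjudged documents (rather than treating them as non-relevant) is what makes NDCG' robust when a run retrieves many documents outside the judgement pool.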
@@ -5,7 +5,7 @@ from setuptools import setup
setup(
name='arqmath_eval',
-    version='0.0.16',
+    version='0.0.17',
description='Evaluation of ARQMath systems',
packages=['arqmath_eval'],
package_dir={'arqmath_eval': 'scripts'},