Commit 28d0269b authored by Vít Novotný

Point links in README.md to GitHub

parent de8752c0
+8 −8
@@ -36,7 +36,7 @@ trained using subsets of the `task1` and `task2` tasks.
#### Using the `train` subset to train your supervised system

``` sh
-$ pip install --force-reinstall git+https://gitlab.fi.muni.cz/xstefan3/arqmath-eval@0.0.13
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.13
$ python
>>> from arqmath_eval import get_topics, get_judged_documents, get_ndcg
>>>
@@ -64,7 +64,7 @@ Here is the documentation of the available evaluation functions:
#### Using the `validation` subset to compare various parameters of your system

``` sh
-$ pip install --force-reinstall git+https://gitlab.fi.muni.cz/xstefan3/arqmath-eval@0.0.13
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.13
$ python
>>> from arqmath_eval import get_topics, get_judged_documents
>>>
@@ -95,18 +95,18 @@ $ git push # publish your new result and the upd
#### Using the `all` subset to compute the NDCG' score of an ARQMath submission

``` sh
-$ pip install --force-reinstall git+https://gitlab.fi.muni.cz/xstefan3/arqmath-eval@0.0.13
+$ pip install --force-reinstall git+https://github.com/MIR-MU/ARQMath-eval@0.0.13
$ python -m arqmath_eval.evaluate MIRMU-task1-Ensemble-auto-both-A.tsv
0.238
```

 [arqmath-task1]:              https://www.cs.rit.edu/~dprl/ARQMath/Task1-answers.html (Task 1: Find Answers)
 [arqmath-task2]:              https://www.cs.rit.edu/~dprl/ARQMath/task2-formulas.html (Task 2: Formula Search)
-[get_judged_documents]:       https://gitlab.fi.muni.cz/xstefan3/arqmath-eval/-/blob/master/scripts/common.py#L61
-[get_ndcg]:                   https://gitlab.fi.muni.cz/xstefan3/arqmath-eval/-/blob/master/scripts/common.py#L94
-[get_random_ndcg]:            https://gitlab.fi.muni.cz/xstefan3/arqmath-eval/-/blob/master/scripts/common.py#L129
-[get_random_normalized_ndcg]: https://gitlab.fi.muni.cz/xstefan3/arqmath-eval/-/blob/master/scripts/common.py#L174
-[get_topics]:                 https://gitlab.fi.muni.cz/xstefan3/arqmath-eval/-/blob/master/scripts/common.py#L34
+[get_judged_documents]:       https://github.com/MIR-MU/ARQMath-eval/blob/master/scripts/common.py#L61
+[get_ndcg]:                   https://github.com/MIR-MU/ARQMath-eval/blob/master/scripts/common.py#L94
+[get_random_ndcg]:            https://github.com/MIR-MU/ARQMath-eval/blob/master/scripts/common.py#L129
+[get_random_normalized_ndcg]: https://github.com/MIR-MU/ARQMath-eval/blob/master/scripts/common.py#L174
+[get_topics]:                 https://github.com/MIR-MU/ARQMath-eval/blob/master/scripts/common.py#L34
 [ntcir-11-math-2]:            http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.686.444&rep=rep1&type=pdf (NTCIR-11 Math-2 Task Overview)
 [ntcir-12-mathir]:            https://www.cs.rit.edu/~rlaz/files/ntcir12-mathir.pdf (NTCIR-12 MathIR Task Overview)
 [treceval-format]:            https://stackoverflow.com/a/8175382/657401 (How to evaluate a search/retrieval engine using trec_eval?)
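
The NDCG' score that `python -m arqmath_eval.evaluate` reports above (e.g. `0.238`) can be illustrated with a minimal sketch. This is a generic implementation of the NDCG' idea (unjudged documents are removed from the ranking before computing ordinary NDCG), not the package's actual code; the function names and the `None`-marks-unjudged convention here are my own:

``` python
from math import log2

def dcg(relevances):
    # Discounted cumulative gain with the usual log2 position discount.
    return sum(rel / log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_prime(ranked_judgments):
    # NDCG': drop unjudged documents (None) from the ranking first,
    # then compute DCG over what remains, normalized by the ideal DCG.
    judged = [rel for rel in ranked_judgments if rel is not None]
    ideal = dcg(sorted(judged, reverse=True))
    return dcg(judged) / ideal if ideal > 0 else 0.0

# A toy ranking with relevance judgments 0-3; None marks unjudged documents.
print(round(ndcg_prime([3, None, 1, 2, None, 0]), 3))
```

Because unjudged documents are skipped rather than treated as non-relevant, NDCG' does not penalize a system for retrieving documents that simply were not in the judgment pool, which is why it is preferred for pooled collections such as ARQMath.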