Scripts for evaluating fanfiction NLP processing pipeline (fanfiction-nlp)
To evaluate models, run evaluate_{pipeline,booknlp}.py.
To run a significance test comparing model performance (using the predictions saved by evaluate_{pipeline,booknlp}.py), run compare_models.py.
Both scripts require settings and file paths specified in *.cfg configuration files (not included on GitHub).
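Since the *.cfg files are not in the repository, their exact structure is not documented here. A minimal sketch of what such a file might look like, assuming Python's configparser INI format; all section and key names below are hypothetical placeholders, not the actual schema:

```
# Hypothetical example config; real section/key names may differ.
[Settings]
task = coref

[Filepaths]
predictions_dir = /path/to/saved/predictions
gold_annotations = /path/to/gold/annotations
output_dir = /path/to/results
```

Check the argument parsing or config loading code in evaluate_{pipeline,booknlp}.py for the keys each script actually expects.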