Evaluation of the paper entitled "Energy-aware Architecture for Information Search in the Semantic Web of Things".
This project was developed over more than a year. Moreover, since it builds on a previous project, some of the code is more than two years old and may consequently be difficult to understand. I did my best to produce a neat design, and I have really tried to document it clearly. However, if after carefully reading the documentation and the source code you still don't understand something or cannot run something, just mail me and I'll do my best to help you ;-)
- src contains the source code of the different modules which compose this project.
- clueseval has the code and the evaluation for the clues
- clues has the code about this core data structure (i.e. utility methods, parsing and serializations)
- evaluation has the evaluation for Section 6.1
- netuse
- database contains the data models, defined through mongoengine, used to persist the simulation results in MongoDB.
- debugging (ignore it)
- evaluation has the parametrization and processing for the evaluations of Sections 6.2 (number_requests), 6.3 (activity) and 6.4 (dynamism)
- mdns has the code to simulate the mDNS and DNS-SD protocols
- tracers contains the code to write the simulated HTTP and UDP traces to a file or to MongoDB
- triplespace contains the semantic management classes. It has this name because the project originally evolved from another one used to assess different Triple Space implementations. For more information, see this paper.
- commons has the code which is used by both clueseval and netuse modules
- testing has code used by the unit tests
- test contains the unit tests for the different modules of the project.
Note that each of the simulations shown in the paper is located under /netuse/evaluation and has the following files:
- parametrize.py creates the parameters needed for each simulation in the MongoDB database.
- processor.py processes the results of each simulation and generates a summary JSON file.
- diagram.py uses the JSON file generated by processor.py to render the charts shown in the paper.
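The processor.py → diagram.py handoff described above can be sketched as follows. This is a hypothetical illustration only: the real scripts define their own summary formats, and the field names ("simulation", "values") are invented for this example.

```python
import json
import tempfile

def write_summary(path, results):
    """Condense simulation results into a summary JSON file (processor.py role)."""
    with open(path, "w") as f:
        json.dump({"simulation": "number_requests", "values": results}, f)

def load_summary(path):
    """Load the summary that a charting script would plot (diagram.py role)."""
    with open(path) as f:
        return json.load(f)

# Write and read back a toy summary through a temporary file.
with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tmp:
    summary_path = tmp.name

write_summary(summary_path, [10, 20, 30])
summary = load_summary(summary_path)
print(summary["values"])  # -> [10, 20, 30]
```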
To install everything at once, use the bash script in the project's top directory:
bash install.bash
Alternatively, install the project and its dependencies using pip:
- Recommended option for development (check out the code and edit it whenever you need):
pip install -e git+https://gomezgoiri@bitbucket.org/gomezgoiri/networkusage.git#egg=netuse
- If you have already downloaded the code and you don't need to edit it, you can simply do:
pip install ./
- If a previous version was already installed, use:
sudo pip install ./ --upgrade
And to uninstall it:
sudo pip uninstall netuse
During the installation process, some dependencies won't be installed correctly. Until the setup.py is fixed, install them manually:
sudo pip install numpy
sudo pip install simpy
After installing them, you should patch "n3meta.py" (to fix an issue with N3 parsing) and "InfixOWL.py":
patch [installation-path]/rdflib/syntax/parsers/n3p/n3meta.py ./patches/n3meta.py.diff
patch [installation-path]/FuXi/Syntax/InfixOWL.py ./patches/InfixOWL.py.patch
Then, download the semantic files needed for the simulation. Note that there is a subfolder with a lot of unnecessary base data. To avoid downloading it, make the checkout non-recursive (-N) and then download only the needed folders.
svn co https://dev.morelab.deusto.es/svn/aigomez/trunk/dataset/ -N [localfolder]/
svn co https://dev.morelab.deusto.es/svn/aigomez/trunk/dataset/base_ontologies/ [localfolder]/base_ontologies
svn co https://dev.morelab.deusto.es/svn/aigomez/trunk/dataset/data/ [localfolder]/data
Then, create a symbolic link from ~/dev/dataset to the actual location of the dataset:
ln -s path/[localfolder] ~/dev/dataset
The dependencies are described in the setup.py file. However, since I have not used it in a while, I have also listed, just in case, all the modules installed (pip freeze) in the virtualenvwrapper environment I used to run the simulations.
These requirements can be found in the requirements.txt file. To install them in your Python environment, simply run:
pip install -r requirements.txt
- Dataset:
- By default, the dataset is supposed to be located in ~/dev/dataset.
- However, all the entry points which need it can receive a different path as a parameter (e.g. '-ds', '--data-set').
- MongoDB:
- The database connection details can be changed in src/netuse/database/__init__.py
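The '-ds'/'--data-set' parameter mentioned above could be defined roughly as below. This is a hypothetical sketch using argparse; the actual entry points define their own parsers, and the destination name (dataset_path) is invented here. Only the option names and the default location mirror what this README describes.

```python
import argparse
import os

parser = argparse.ArgumentParser(description="Run a simulation")
parser.add_argument("-ds", "--data-set",
                    dest="dataset_path",  # invented attribute name
                    default=os.path.expanduser("~/dev/dataset"),
                    help="path to the semantic dataset")

# Simulate passing an alternative dataset location on the command line.
args = parser.parse_args(["-ds", "/opt/dataset"])
print(args.dataset_path)  # -> /opt/dataset
```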
To run a concrete simulation, follow these steps:
- Choose the simulation to run under /netuse/evaluation.
- Parametrize the simulation using parametrize.py (see the Directory structure section).
- Run the simulations.
# If you have installed this project using the setup.py file, an entry point to the simulation class has already been installed in your system:
simulate
# Or, if you prefer to run it in the background:
nohup simulate &> output_file.out
# Or, if the previous two don't work, simply:
python src/netuse/evaluation/simulate.py
Note that:
- By default, the simulation script runs as many simulations in parallel as the host machine has processors.
- You can run all the simulations of a simulation set using different machines as long as they all have access to the same MongoDB database where the parametrization is stored.
To analyze the results of the last simulation runs, you can...
- Summarize the results of the simulations with the proper processor.py
- Generate a chart which summarizes the results using diagram.py
To run any test, just execute the desired one using:
python test/[subfolder/]name.py
Check COPYING.txt.