This section describes one of the ways to deploy the Semantic BMS APIs. There are certainly other ways; however, this one
is tested and pre-configured.
To deploy the application in an Apache Tomcat container (tested with Tomcat 8.5 and higher; requires a Servlet 3.0 implementation):
* Download and install Apache Tomcat (on Windows, the ZIP distribution is sufficient)
* The preferred and pre-configured location for the Tomcat installation is C:\apache-tomcat
* Create a directory named sbms in the webapps folder of your Tomcat installation (e.g. C:\apache-tomcat\webapps\sbms)
* Copy the files from the SemanticBMSClient project into the sbms/client directory
* If your Tomcat installation directory is not C:\apache-tomcat:
* Set up the SemanticAPI to use the desired TDB location by updating the tdb.path property in the semantics.properties file
(The TDB creation itself is described in the following steps)
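For illustration, the relevant entry in semantics.properties could look as follows (the value is an assumption matching the default layout described above; Java properties files treat backslashes as escape characters, so forward slashes are used):
```
tdb.path=C:/apache-tomcat/webapps/sbms/WEB-INF/tdb
```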
* Build the SemanticAPI and DataAccessAPI projects using Maven:
```
mvn clean install
```
* Copy the resulting build (located in <Yourgitrepository>/semanticBMS/SemanticAPI/target/SemanticAPI-1.0 for the SemanticAPI)
into the sbms directory (e.g. C:\apache-tomcat\webapps\sbms) - the WEB-INF and META-INF folders
are meant to be placed directly in the sbms directory and merged from both projects (SemanticAPI and DataAccessAPI), as sketched below
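A minimal sketch of this copy step; the repository path is an assumption, and the DataAccessAPI target folder name is inferred by analogy with the SemanticAPI one:
```
# Assumed paths - adjust to your checkout and Tomcat location; /E merges subdirectories
robocopy C:\git\semanticBMS\SemanticAPI\target\SemanticAPI-1.0 C:\apache-tomcat\webapps\sbms /E
robocopy C:\git\semanticBMS\DataAccessAPI\target\DataAccessAPI-1.0 C:\apache-tomcat\webapps\sbms /E
```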
* Check the ont-policy.rdf file and update paths to the local ontology definitions if needed (both in the source project and in the Tomcat deployment)
* Prepare the TDB and place it under the webapps/sbms/WEB-INF/tdb directory. There are multiple ways to prepare the TDB. Refer to the TestBench/README.md file.
* Run Apache Tomcat
* The APIs are available at http://localhost:8080/sbms/semantics and http://localhost:8080/sbms/data
* To test the Semantic API, visit http://localhost:8080/sbms/semantics/types/type or run the performance tests.
Sample requests issued by the testbench's performance benchmark (described below):
```
Request "2a: All DPs according to strict criteria + grouping" $auth ($Server + "/sbms/semantics/datapoints/?fields=bmsId&grouping=scope.floor&type=Input&source.type=TemperatureSensor&source.location=S01B04&scope.type=Room&property.domain=Air&property.quality=temperature")
Request "2b: The same query as above, different building" $auth ($Server + "/sbms/semantics/datapoints/?fields=bmsId&grouping=scope.floor&type=Input&source.type=TemperatureSensor&source.location=S02B04&scope.type=Room&property.domain=Air&property.quality=temperature")
Request "3: Generic query with large number of results + grouping" $auth ($Server + "/sbms/semantics/datapoints/?fields=bmsId&grouping=scope.building&source.type=HumiditySensor&property.domain=Air")
```
# TDB creation and performance benchmarking for the Semantic API
The testbench is used for performance benchmarking of the Semantic API of the SemanticBMS middleware layer. As a "byproduct",
it allows users to create their own TDBs and fill them with data.
The PowerShell scripts available in this toolset can be used for generating the sample data, loading them into the triple store,
and running the performance benchmarks. The toolset was tested on Windows 10. On older versions of Windows, it will probably not
run correctly due to the older version of the PowerShell environment.
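To check which PowerShell version is available:
```
# Windows 10 ships PowerShell 5; older releases may not run these scripts correctly
$PSVersionTable.PSVersion
```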
For other platforms (e.g. Linux), there is a pre-generated NT file (./NT-Data/sample.nt) that can be loaded directly into the TDB using the Apache Jena tdbloader command-line
utility. For the performance testing, other tools, such as curl or wget, must be used.
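A minimal sketch of loading the sample file, assuming the Apache Jena bin directory is on the PATH and ./tdb is the desired TDB location:
```
tdbloader --loc=./tdb ./NT-Data/sample.nt
```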
The generated data represent a mock-up facility of an organization.
They consist of several sites and buildings. Each building is equipped with a PLC, an energy meter and two data points representing energy consumption - overall and last month.
Each room is equipped with a temperature sensor, a humidity sensor, a motion sensor and a PLC that publishes the following data points:
* Air temperature
* Air humidity
* Setpoint temperature
* Occupancy
For each of the data points, there are two trends defined - one in the PLC itself, the other in the archive historian database.
The benchmarking procedure consists of several API calls to the datapoints endpoint. The queries test the following scenarios:
1. Querying for all available information about a specific data point
2. Selecting data points according to a number of restrictions (e.g. a specific device type and observed property at a certain location)
3. Selecting a large number of data points based on loose criteria (all humidity sensors in the database)
4. Selecting a large number of data points based on loose criteria with the "bottleneck" attribute (all humidity sensors in the database + their data point type)
5. Selecting a large number of data points based on loose criteria with all the available information retrieved (all temperature sensors in the database + all attributes)
The aim of the benchmarking is to prove that the query performance is sufficient for the intended purposes, as performance was not
a primary design goal during the development.
The prerequisites to run this benchmark are:
* Apache Jena 3.2.0 - the distribution should be downloaded and extracted to the hard drive
* Apache Tomcat 8.5 or higher - the distribution should be downloaded and extracted to the hard drive. Then, the JAVA_HOME environment variable must be set manually (see the sketch below)
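A sketch of setting JAVA_HOME from PowerShell; the JDK path is an assumption, adjust it to your installation:
```
# Persist JAVA_HOME for the current user (open a new shell afterwards)
[Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\Program Files\Java\jdk1.8.0_121", "User")
```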
## Usage:
Generally, the optional parameters are not needed when following the expected workflow - they are used only for special use cases.
If you intend to use the provided pre-generated data, proceed directly to step 3 and use the optional parameter to specify the correct input file.
1. [Optional] Run the script Generate-CSVs.ps1
The script generates CSV files into the CSV-Data folder.
Optional parameters:
-Sites [Default: 5] - number of sites in the dataset
-Buildings [Default: 5] - number of buildings in each site
-Floors [Default: 3] - number of floors in each building
-Rooms [Default: 10] - number of rooms in each floor
-OutFolder [Default: ".\CSV-Data"] - output folder for the generated CSV
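For example, to generate a smaller dataset than the default (parameters as documented above):
```
.\Generate-CSVs.ps1 -Sites 2 -Buildings 2 -Floors 2 -Rooms 4
```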
2. [Optional] Run the script Generate-Triples.ps1
The script generates an N-Triples file into the NT-Data folder, based on the CSV files located in the .\CSV-Data folder.
Optional parameters:
-InFolder [Default: .\CSV-Data] - location of the source CSV files generated by Generate-CSVs script
...
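A usage sketch for this step, passing the documented default input folder explicitly:
```
.\Generate-Triples.ps1 -InFolder .\CSV-Data
```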
A pre-generated NT file for the test mock-up with the default size is available in the NT-Data folder as sample.nt.
3. Run the script Create-TDB.ps1
The script will create the TDB triplestore using the Apache Jena framework.
Optional parameters:
-TDBPath [Default: \apache-tomcat\webapps\sbms\WEB-INF\tdb] - location of the resulting TDB directory
-NTPath [Default: .\NT-Data\data.nt] - Path to the source NT file
-JenaRoot [Default: \apache-jena-3.2.0] - Path to the directory with the Apache Jena distribution
...
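A usage sketch combining the documented parameters, e.g. to load the pre-generated sample data into the default Tomcat deployment:
```
.\Create-TDB.ps1 -NTPath .\NT-Data\sample.nt -JenaRoot C:\apache-jena-3.2.0 -TDBPath C:\apache-tomcat\webapps\sbms\WEB-INF\tdb
```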
Statistics of the TDB can be found in the <Yourtdblocation>\stats.opt file.
4. Deploy the Semantic API into Apache Tomcat (see the Readme on the project site)
5. Set up the SemanticAPI to use the created TDB if needed
Update the tdb.path property in the semantics.properties file
6. [Optional] Run Apache Tomcat
Execute the bin\startup.bat script
This step is recommended if you plan to run several rounds of benchmarks, as the Tomcat container then does not have to be
started anew each time the tests are executed
7. Run the script Performance-Test.ps1
The script will start the performance benchmark and output the results to standard output
(use .\Performance-Test.ps1 | Out-File results.txt to redirect them to a file)
If you did not start Apache Tomcat in the previous step, use the -ManageTomcat switch. You can also set the path to
the Apache Tomcat installation directory by the -TomcatHome param (Default is C:\apache-tomcat)
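For example, to let the script manage Tomcat itself and write the results to a file:
```
.\Performance-Test.ps1 -ManageTomcat -TomcatHome C:\apache-tomcat | Out-File results.txt
```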
8. [Optional] Stop Apache Tomcat
Execute the bin\shutdown.bat script if you did not use the -ManageTomcat option and started the server manually in step 6