diff --git a/docs/docs.xml b/DocBook/docs.xml similarity index 66% rename from docs/docs.xml rename to DocBook/docs.xml index f7e12a22..cb304136 100644 --- a/docs/docs.xml +++ b/DocBook/docs.xml @@ -17,41 +17,257 @@ - Installation Guide + Installation and Configuration
- HammerDB v3.2 New Features + Release Notes - Bug fixes + The following are the release notes for HammerDB v4.0. - [#132] Running buildschema in CLI for TPC-H has incorrect VU - count +
+ Nomenclature Change
+
+ In the database.xml file and the User Interface the workload
+ names have changed to TPROC-C and TPROC-H. This is a nomenclature
+ change only, made to show that the workloads are fair-use
+ implementations derived from the TPC specifications; it does not
+ change the functionality of the workloads compared to prior
+ versions that used the TPC-C and TPC-H terminology.
+
- [#133] Use all warehouses omits warehouse 1 and fails if more - VUsers than warehouses +
+ Stored Procedure Refactoring and Performance
+
+ At version 4.0 the stored procedures for the Oracle and
+ PostgreSQL TPROC-C workloads have been refactored. This increases the
+ expected performance, and consequently the performance of
+ HammerDB v4.0 cannot be compared directly to the performance of
+ v3.3 or previous releases. Additionally, for some workloads HammerDB
+ v4.0 changes the relationship between the NOPM and TPM metrics
+ compared to previous versions. As a result of the stored procedure
+ refactoring using bulk operations, more work is processed per commit
+ and therefore in these cases the NOPM has increased whilst the TPM
+ remains the same. This indicates a real measure of increased
+ throughput by doing more work per database transaction, and
+ consequently NOPM is now listed first as the primary metric in
+ reporting output. However, as raised in HammerDB
+ GitHub Issue #111, there may be cases where there is a
+ dependency on the wording of the HammerDB log. For this reason a
+ configuration option named first_result is provided in the generic.xml
+ file. If this option is set to NOPM then the v4.0 format is used; if
+ set to TPM then the output is compatible with v3.3.
+
+ <benchmark>
+<rdbms>Oracle</rdbms>
+<bm>TPC-C</bm>
+<first_result>NOPM</first_result>
+</benchmark>
+
- [#134] Buildschema in CLI errors if 1 virtual user and more than 1 - warehouse +
+ Redis Deprecated + + The Redis workload has been deprecated and no longer features by + default in the main HammerDB menu. In particular as a single-threaded + database without support for stored procedures it was considered that + Redis was not suitable for running workloads derived from the TPC + specifications and could not reach similar levels of performance as + the relational databases currently supported. Redis can still be + enabled for unsupported use by uncommenting the Redis database entry + in database.xml. + + <!--Redis deprecated, uncomment to enable as unsupported +<redis> +<name>Redis</name> +<description>Redis</description> +<prefix>redis</prefix> +<library>redis</library> +<workloads>{TPROC-C}</workloads> +<commands>redis</commands> +</redis> +--> +
- [GH#58] Bug when building TPC-C and TPC-H on Azure +
+ Known Third-Party Driver Issues - Updated time profiler to report percentile values at 10 second - intervals + HammerDB has a dependency on 3rd party driver libraries to + connect to the target databases. The following are known issues with + some of the 3rd party drivers that HammerDB uses. - Updated PostgreSQL Oracle SLEV Stored Procedure to return stock - count +
+ Oracle on Windows: Oracle Bug 12733000 OCIStmtRelease crashes
+ or hangs if called after freeing the service context handle
+
+ If you are running HammerDB against Oracle on Windows there is
+ a long-established bug in Oracle that can cause application crashes
+ in multi-threaded applications on Windows. This bug can be
+ investigated on the My Oracle Support website under the following
+ reference: Bug 12733000 OCIStmtRelease crashes or hangs if called
+ after freeing the service context handle. To resolve this Oracle
+ issue add the following entries to the SQLNET.ORA file on your
+ HammerDB client.
+
+ SQLNET.AUTHENTICATION_SERVICES = (NTS)
+DIAG_ADR_ENABLED=OFF
+DIAG_SIGHANDLER_ENABLED=FALSE
+DIAG_DDE_ENABLED=FALSE
+
- Updated hammerdbcli to enable autostart with script +
+ SQL Server on Linux: unixODBC's handle validation may become + a performance bottleneck + + Using the HammerDB client for SQL Server on Linux can be + slower than the same client on Windows when using the default + installed unixODBC drivers on many Linux distributions. As described + in the SQL + Server Programming Guidelines "When using the + driver with highly multithreaded applications, unixODBC's handle + validation may become a performance bottleneck. In such scenarios, + significantly more performance may be obtained by compiling unixODBC + with the --enable-fastvalidate option. However, beware that this may + cause applications which pass invalid handles to ODBC APIs to crash + instead of returning SQL_INVALID_HANDLE errors." + Recompiling unixODBC with the --enable-fastvalidate option has been + measured to improve client performance by 2X. Example configure + options used to build unixODBC are shown as follows: + + ./configure --prefix=/usr/local/unixODBC --enable-gui=no --enable-drivers=no --enable-iconv +--with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE --enable-threads=yes --enable-fastvalidate +
+
+ +
+ Linux Font Pre-Installation Requirements

- Added PostgreSQL v11+ compatible Stored Procedures to use instead
- of Functions

+ On Linux HammerDB requires the Xft FreeType-based font drawing
+ library for X installed as follows:

- Added HTTP Web Service interface

+ Ubuntu:
+
+ $ sudo apt-get install libxft-dev
+
+ Red Hat:
+
+ $ yum install libXft
+
+
+ +
+ Documentation License and Copyright + + Copyright (C) 2020 Steve Shaw. + + Permission is granted to copy, distribute and/or modify this + document under the terms of the GNU Free Documentation License, Version + 1.3 or any later version published by the Free Software Foundation; with + no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A + copy of the license is included in the section entitled "GNU Free + Documentation License". +
+ +
+ HammerDB v4.0 New Features + + Updated Binaries to: tcl8.6.10, tk8.6.10, thread2.8.5, oratcl4.6, + mysqltcl3.052, pgtcl2.1.1, db2tcl2.0.0 + + HammerDB v4.0 New Features are all referenced to GitHub issues, + where more details for each new feature and related pull requests can be + found here: https://github.com/TPC-Council/HammerDB/issues + + [TPC-Council#152] Inclusive Language Updates + + [TPC-Council#149] Add runtimer and waittocomplete routines to + CLI + + [TPC-Council#148] HammerDB CLI on Windows can have incorrect + colours + + [TPC-Council#140] Update SQL Server ODBC Driver + + [TPC-Council#139] Fix XML closing tags for sprocs and + connpool + + [TPC-Council#138] Update SQL Server library version number for + TCL8.6.10 + + [TPC-Council#137] Update versions, changelog and OSS-TPC-x + workloads + + [TPC-Council#136] update dbms_random for PostgreSQL neword + infinite loop + + [TPC-Council#135] multiple labelled connections / connect pool + feature + + [TPC-Council#132] Use Existing User and DB in Postgres + + [TPC-Council#126] Adding docs directory and documentation + files + + [TPC-Council#124] Make column order in stock_i1 index in + PostgreSQL consistent with Oracle + + [TPC-Council#121] Fix temp environment variables not + detected + + [TPC-Council#119] Fix for Issue #118 using diset with variables + with spaces + + [TPC-Council#115] Add partitioning hints in PostgreSQL neword + transaction + + [TPC-Council#106] Format details repo + + [TPC-Council#105] Deprecated Redis + + [TPC-Council#104] Reverse print out of metrics to put NOPM ahead + of TPM + + [TPC-Council#98] MySQL prepared statements and socket + option + + [TPC-Council#97] Partitioning for Oracle ORDERS and HISTORY + + [TPC-Council#95] PostgreSQL prepared statements for functions and + tablespace option for schema builds + + [TPC-Council#92] Refactor Oracle and Postgres procedural code for + performance + + [TPC-Council#91] Added Master/Slave Modes to CLI + + [TPC-Council#86] Added 
Geomean calculation to TPC-H + workloads + + [TPC-Council#82] Fix distribution of Warehouse across VUs for All + Warehouse option + + [TPC-Council#46] Consider changing "Benchmark Options" from TPC-C + or TPC-H to something that underscores "based on TPC-C/TPC-H" + + [TPC-Council#19] GUI HD Display
Test Matrix
+ The following test matrix is provided as a guide based on the
+ operating system and database releases that HammerDB has been built and
+ tested against. The matrix is not an exclusive support or configuration
+ matrix and HammerDB has been designed to be compatible with the
+ supported databases running on different architectures and operating
+ systems. HammerDB is built for the x86-64 architecture on Linux and
+ Windows. Where the database is not running on either Linux or Windows on
+ x86-64 HammerDB can be run on Linux or Windows and connect to the target
+ database on another architecture over a network.
+
HammerDB has been built and tested on the following x86 64-bit
Linux and Windows releases.

@@ -71,8 +287,7 @@
Linux

- Ubuntu 17.04 - 17.10 - 18.04 / RHEL 7.3 - RHEL 7.4 -
- RHEL 7.5 - RHEL 7.6
+ Ubuntu 17.X 18.X 19.X / RHEL 7.X RHEL 8.X

@@ -84,17 +299,7 @@

- On Linux HammerDB requires the Xft FreeType-based font drawing
- library for X installed as follows:
-
- Ubuntu:
-
- $ sudo apt-get install libxft-devRed
- Hat:
-
- $ yum install libXft
-
- HammerDB has been built and testing on the following x86 64-bit
+ HammerDB has been built and tested on the following x86-64 64-bit
Databases.

@@ -113,13 +318,13 @@
Oracle (TimesTen)

- 12c / 18c
+ 12c / 18c / 19c

- SQL Server
+ Microsoft SQL Server

- 2017
+ 2017 / 2019

@@ -131,20 +336,142 @@
MySQL (MariaDB) (Amazon Aurora)

- 5.7 / 8.0 / 10.2 / 10.3 / 10.4
+ 5.7 / 8.0 / 10.2 / 10.3 / 10.4 / 10.5

PostgreSQL (EnterpriseDB) (Amazon Redshift)
(Greenplum)

- 10.2 / 10.3
+ 10.2 / 10.3 / 11 / 12 / 13
+
+
+
+
+ +
+ Checksum Verification + + Checksums for the HammerDB 4.0 installation files are shown in the + table below. The integrity of the HammerDB installation files can be + verified on Windows with the Microsoft File Checksum Integrity + Verifier. + + fciv -both HammerDB-4.0-Win-x86-64-Setup.exe + + and on Linux with md5sum and sha1sum + + md5sum HammerDB-4.0-Linux.tar.gz +sha1sum HammerDB-4.0-Linux.tar.gz + + + v4.0 Checksum Verification + + + + + File + + MD5 + + SHA-1 + + + + + + Hammerdb-4.0-win-x64-setup.exe + + 98259a467bffae8a5665e11440f9b9b0 + + f42667f985cb31cc7cd8ee5a9ff95442ad32b648 + + + HammerDB-4.0-Linux-x64-installer.run + + 251e3098d93ac8ab2cdd5e10f114c79a + + 59f449769a9ef10961e8406d43cb0109f9bf75e6 + + + + HammerDB-4.0-Win.zip + + fd935a9d4cf68d00fab8694f45f0267b + + bd4ed3a7f1fdc3188a5439afed1cdb5bcdb14d67 + + + + HammerDB-4.0-Linux.tar.gz + + fa9c4e2654a49f856cecf63c8ca9be5b + + a6a35c234d324077c7819ca4ae4aa8eabaff4c15 + + + +
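Rather than comparing the hashes by eye, md5sum -c can check a file against an expected value automatically. The following is a sketch using a stand-in file in a temporary directory; in practice the expected.md5 line would hold the hash from the table above and be run against the real download:

```shell
# Sketch: scripted checksum verification. A stand-in file is created here
# for illustration; substitute the real download and the hash from the table.
cd "$(mktemp -d)"
printf 'stand-in' > HammerDB-4.0-Linux.tar.gz
md5sum HammerDB-4.0-Linux.tar.gz > expected.md5   # normally: echo "<hash>  <file>" > expected.md5
md5sum -c expected.md5 && echo "checksum verified"
```

md5sum -c prints "<file>: OK" when the hash matches and exits non-zero otherwise; sha1sum supports the same -c flag for the SHA-1 column.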
+ + Checksums for the HammerDB 3.3 installation files are shown in the + table below. The integrity of the HammerDB installation files can be + verified on Windows with the Microsoft File Checksum Integrity + Verifier. + + fciv -both HammerDB-3.3-Win-x86-64-Setup.exe + + and on Linux with md5sum and sha1sum + + md5sum HammerDB-3.3-Linux.tar.gz +sha1sum HammerDB-3.3-Linux.tar.gz + + + v3.3 Checksum Verification + + + - Redis + File - 4.0.6 + MD5 + + SHA-1 + + + + + + HammerDB-3.3-Win-x86-64-Setup.exe + + fcf40f89d40ed793e21cf36c456dfbd4 + + 580c470c062c1247fe0d20d1e0ae112d5060b3d3 + + + + HammerDB-3.3-Linux-x86-64-Install + + e6db6bf807f9d0c38262727bed858372 + + 1aed3d97256c1e22b56cd9d1e5d95fd7d28a1bad + + + + HammerDB-3.3-Win.zip + + 2aab7cb47069877676d1c734bea52768 + + aac24182dd41f1c862950834c13f97cbfbb10b0c + + + + HammerDB-3.3-Linux.tar.gz + + c0857cff6d91aaac80e90d50d92d67d5 + + 3a0a7552e74c0f8cb07376301b23c6a5ca29d0f1 @@ -166,27 +493,7 @@ Double click on the Setup file and the language choice is shown. -
- Language Choice - - - - - - -
Click continue to begin the installation.
- -
- Install Continue - - - - - - -
- - Click next to acknowledge the version + Click continue to begin the installation.
HammerDB Version @@ -198,10 +505,10 @@
- Accept or modify the destination location. + Read and Accept the GPL License Agreement.
- Destination + GPL v3 License @@ -210,10 +517,10 @@
- Review the settings before starting copying files. + Choose the installation directory.
- Review + Choose the Installation Directory @@ -222,7 +529,7 @@
- The files will be copied and the uninstall built + Press Next to begin the install.
Files copying @@ -234,7 +541,8 @@
- Click Finish and launch HammerDB + The installer will extract the files into the chosen + directory.
Complete @@ -246,6 +554,20 @@
+ Complete the Install by viewing the Readme File and running + HammerDB. If both options are chosen HammerDB will run after the + Readme is closed. + +
+ Complete the Setup Wizard + + + + + + +
+ HammerDB will start ready for you to use
@@ -300,7 +622,7 @@ Uninstalling HammerDB For a zipfile installation, delete the hammerDB directory. For - an installer based installation doubl-click on uninstall and follow + an installer based installation double-click on uninstall and follow the on-screen prompts.
@@ -327,9 +649,9 @@
Self Extracting Installer - To install from the self-extracting installer refer to the - previous section on the self-extracting installer for Windows, the - installation method is the same. + To install from the self-extracting installer using a graphical + environment refer to the previous section on the self-extracting + installer for Windows, the installation method is the same.
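If the self-extracting installer file arrives without execute permission, as is common after a browser download, it needs to be made executable before it can be launched. A sketch with a stand-in file; the installer filename is taken from the checksum table earlier in this guide:

```shell
# Sketch: make the .run installer executable before launching it.
# A stand-in file is used here; apply the same steps to the real download.
cd "$(mktemp -d)"
touch HammerDB-4.0-Linux-x64-installer.run
chmod +x HammerDB-4.0-Linux-x64-installer.run
[ -x HammerDB-4.0-Linux-x64-installer.run ] && echo "ready: ./HammerDB-4.0-Linux-x64-installer.run"
```
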
@@ -337,10 +659,10 @@ To install from the tar.gz run the command - tar -zxvf HammerDB-3.0.tar.gz + tar -zxvf HammerDB-4.0.tar.gz This will extract HammerDB into a directory named - HammerDB-3.0. + HammerDB-4.0.
@@ -362,14 +684,24 @@
- Verifying Client Libraries
+ Verifying the Installation of Database Client Libraries

For all of the databases that HammerDB supports it is necessary to
- have a 3rd party client library installed that HammerDB can use to
- connect and interact with the database. This client library will also be
- installed with database server software. The HammerDB command line tool
- can be used to check the status of library availability for all
- databases.
+ have a third-party client library installed that HammerDB can use to
+ connect and interact with the database. This client library will also
+ typically be installed with database server software. HammerDB does not
+ statically link the third-party libraries, to minimise executable size and
+ provide flexibility in the third-party libraries used. For example if a
+ bug is detected in a particular library then this can be upgraded
+ without requiring the HammerDB libraries to be rebuilt. However as the
+ client libraries are dynamically linked it is essential that the correct
+ client libraries are already installed and environment variables set to
+ ensure that HammerDB can find the correct libraries. Note that it is
+ only necessary to load the libraries for the database that you are
+ testing.
+
+ The HammerDB command line tool can be used to check the status of
+ library availability for all databases.

To run this utility run the following command

@@ -377,37 +709,154 @@

and type librarycheck.

- HammerDB CLI v3.0
-Copyright (C) 2003-2018 Steve Shaw
+ HammerDB CLI v4.0
+Copyright (C) 2003-2020 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
Checking database library for Oracle
-Error: failed to load Oratcl - can't read "env(ORACLE_HOME)": no such variable
-Ensure that Oracle client libraries are installed and the location in the LD_LIBRARY_PATH environment variable
+Success ... 
loaded library Oratcl for Oracle
Checking database library for MSSQLServer
-Success ... loaded library tclodbc for MSSQLServer
+Success ... loaded library tdbc::odbc for MSSQLServer
Checking database library for Db2
Success ... loaded library db2tcl for Db2
Checking database library for MySQL
Success ... loaded library mysqltcl for MySQL
Checking database library for PostgreSQL
Success ... loaded library Pgtcl for PostgreSQL
-Checking database library for Redis
-Success ... loaded library redis for Redis
-hammerdb
+hammerdb>

- in the example it can be seen that the environment is not set for
- Oracle however all of the other libraries were found and correctly
- loaded. The following table illustrates the first level library that
- HammerDB requires however there may be additional dependencies. Refer to
- the Test Matrix to determine which database versions HammerDB was built
- against. On Linux the command ldd and on Windows the Dependency Walker
- Utility to determine additional dependencies. On Linux the
- LD_LIBRARY_PATH environment variable can be set to the location of
- installed libraries and PATH on Windows.
+ In the example it can be seen that the libraries for all databases
+ were found and loaded. The following table illustrates the first-level
+ library that HammerDB requires; however there may be additional
+ dependencies. Refer to the Test Matrix to determine which database
+ versions HammerDB was built against. On Windows the Dependency Walker
+ Utility can be used to determine the dependencies and on Linux
+ the command ldd.
+
+ For example on Windows use Dependency Walker to open the HammerDB
+ library for your chosen database. In the following example
+ libmysqltcl.dll is opened for MySQL. This shows that the key dependency
+ is on the 64-bit libmysql.dll.
+
+ Dependency Walker MySQL + + + + + + +
+ + Right-clicking on this library shows the properties including + where it was found. + +
+ LIBMYSQL.DLL Properties + + + + + + +
+ + This location was set in the Environment variables under the Path + option. + +
+ Environment Variables + + + + + + +
+
+ As shown below, HammerDB found the correct MySQL 8.0 library
+ because the path to the 64-bit MySQL 8.0 library was set correctly in
+ the environment variables.
+
+ Path environment variable + + + + + + +
+ + On Linux we run a similar test with librarycheck, however in this + instance the library file is not found, although note that it identifies + the file that is missing as libmysqlclient.so.21. + + Checking database library for MySQL +Error: failed to load mysqltcl - couldn't load file "/home/steve/HammerDB-4.0/lib/mysqltcl-3.052/libmysqltcl3.052.so": libmysqlclient.so.21: cannot open shared object file: No such file or directory +Ensure that MySQL client libraries are installed and the location in the LD_LIBRARY_PATH environment variable + + + We can investigate further using the ldd command in an equivalent + way to dependency walker on Windows. This also identifies the file that + is missing. + + $ ldd libmysqltcl3.052.so +linux-vdso.so.1 (0x00007ffc44f7d000) +libmysqlclient.so.21 => not found +libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f33e73e7000) +/lib64/ld-linux-x86-64.so.2 (0x00007f33e79e2000) + + + Checking in our MySQL installation we can find the file + libmysqlclient.so.21. + + $ pwd +/opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib +$ ls libmysqlclient* +libmysqlclient.a libmysqlclient.so libmysqlclient.so.21 libmysqlclient.so.21.1.18 + + + Therefore we know that the file is installed, however we need to + tell HammerDB where to find it. This is done by adding the MySQL library + to the LD_LIBRARY_PATH. + + $ export LD_LIBRARY_PATH=/opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib:$LD_LIBRARY_PATH + + Reversing our steps we can see that the library is now + found. 
+
+ $ ldd libmysqltcl3.052.so
+linux-vdso.so.1 (0x00007fff7f7e6000)
+libmysqlclient.so.21 => /opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib/libmysqlclient.so.21 (0x00007f92b0153000)
+libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f92afd62000)
+libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f92afb43000)
+libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f92af93f000)
+libssl.so.1.1 => /opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib/libssl.so.1.1 (0x00007f92af6b5000)
+libcrypto.so.1.1 => /opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib/libcrypto.so.1.1 (0x00007f92af270000)
+librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f92af068000)
+libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f92aecdf000)
+libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f92ae941000)
+libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f92ae729000)
+/lib64/ld-linux-x86-64.so.2 (0x00007f92b0c34000)

and library check confirms that it can be loaded.

+ Checking database library for MySQL
+Success ... loaded library mysqltcl for MySQL

+ Adding the export command to the .bash_profile ensures that it will
+ be found each time HammerDB is launched from a new shell.
+
+ The following table shows the libraries that are required for each
+ database version. All libraries are 64-bit. Note that some databases are
+ considerably more flexible in library versions and therefore the
+ following section is important to ensure that you install the correct
+ library for your needs.
3rd party libraries @@ -425,7 +874,7 @@ hammerdb Oracle Linux - libclntsh.so. + libclntsh.so @@ -437,7 +886,7 @@ hammerdb SQL Server Linux - libodbc.so. + libodbc.so @@ -449,7 +898,7 @@ hammerdb Db2 Linux - libdb2.so. + libdb2.so @@ -481,12 +930,6 @@ hammerdb LIBPQ.DLL - - - Redis - - Built in library -
@@ -497,7 +940,7 @@ hammerdb When using the Oracle instant client Oratcl uses the additional environment variable ORACLE_LIBRARY to identify the Oracle client library. On the Windows the Oracle client library is called oci.dll in - a location such as: C:\oraclexe\app\oracle\product\11.2.0\server\bin + a location such as: C:\oraclexe\app\oracle\product\11.2.0\server\bin. On Linux the library is called libclntsh.so where this is typically a symbolic link to a product specific name such as libclntsh.so.12.1 for Oracle 12c. An example .bash_profile file is shown for a typical @@ -554,11 +997,18 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

MySQL

- HammerDB version 3.0 has been built and tested against a MySQL
- 5.7 client installation. On Linux this means that HammerDB will
- require a MySQL client library called libmysqlclient.so.20. This
- client library needs to be referenced in the LD_LIBRARY_PATH in the
- same way described for Oracle previously in this section.
+ HammerDB version 4.0 (and version 3.3) has been built and tested
+ against a MySQL 8.0 client installation; HammerDB versions 3.0-3.2 were
+ built against MySQL 5.7. On Linux this means that HammerDB will
+ require a MySQL client library called libmysqlclient.so.21 for
+ HammerDB versions 4.0 and 3.3, and libmysqlclient.so.20 for version 3.2
+ and earlier. This client library needs to be referenced in the
+ LD_LIBRARY_PATH as shown previously in this section. Note that for
+ testing MariaDB you also need the libmysqlclient.so.21 from an
+ installation of MySQL 8.0. You do not need a full MySQL 8.0 server
+ installation as the only file you need is "libmysqlclient.so.21". With
+ this file installed and in the library path the HammerDB client can
+ connect to MariaDB.
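A quick way to confirm before launching HammerDB that the MySQL 8.0 client library is present and on the loader path is sketched below; the install path is an example taken from this section, not a required location:

```shell
# Sketch: check for the libmysqlclient.so.21 that HammerDB 4.0/3.3 needs
# and add its directory to LD_LIBRARY_PATH. The path is an example only;
# substitute your own MySQL client install location.
MYSQL_LIB=/opt/mysql-8.0.18-linux-glibc2.12-x86_64/lib
if [ -e "$MYSQL_LIB/libmysqlclient.so.21" ]; then
    export LD_LIBRARY_PATH="$MYSQL_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    echo "libmysqlclient.so.21 found in $MYSQL_LIB"
else
    echo "libmysqlclient.so.21 not found - install the MySQL 8.0 client first"
fi
```
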
@@ -574,13 +1024,116 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

+ -
- Redis +
+ XML Configuration - Redis The Redis client package is included with HammerDB for all - installations and requires no further configuration. -
+ HammerDB configuration and settings are defined in a number of XML + files in the config directory. You may edit these files to change the + configuration on startup, however it is recommended to save a copy of + the original in case of incompatible changes. + +
+ XML Configuration Files + + + + + + + + By default the databases in the GUI menu are listed in the order + that the workloads were added to HammerDB. If you wish to change the + order to put a particular database first you can change the rdbms + value in generic.xml to the name of the database of your choice. The + name entry for a particular database can be found in the database.xml + file. For example the following would set SQL Server to be the + database at the top of the menu on startup. + + <benchmark> +<rdbms>MSSQLServer</rdbms> +<bm>TPC-C</bm> +<first_result>NOPM</first_result> +</benchmark> +
+
+ +
+ Themes and Scalable Graphics
+
+ HammerDB v4.0 includes an updated graphical interface that scales
+ to UHD displays such as Microsoft PixelSense displays. The
+ behaviour of the display is set in the theme section of
+ generic.xml.
+
+ <theme>
+<scaling>auto</scaling>
+<scaletheme>auto</scaletheme>
+<pixelsperpoint>auto</pixelsperpoint>
+</theme>
+
+ By default scaling, scaletheme and pixelsperpoint are all set
+ to auto. This means that HammerDB will detect the display settings and
+ scale the interface accordingly. For example the image shows HammerDB
+ v3.3 and v4.0 on the same UHD display.
+
+ Scaling Graphics + + + + + + +
+
+ However some displays or third-party X Window System servers may not be
+ updated to support scalable graphics. In this case the scaling value can
+ be set to fixed and a standard 96 dpi display will be used with the
+ fixed themes from HammerDB v3.3.
+
+ <theme>
+<scaling>fixed</scaling>
+<scaletheme>auto</scaletheme>
+<pixelsperpoint>auto</pixelsperpoint>
+</theme>
+
+ The scaletheme value will accept settings of "auto", "awlight",
+ "arc" or "breeze". If set to the default of "auto", "awlight" will be
+ used on Linux.
+
+ Linux Theme Awlight + + + + + + +
+ + and "breeze" on Windows. + +
+ Windows Theme Breeze + + + + + + +
+ + The scale factor can be fine-tuned by setting the pixelsperpoint + value. By running the command puts [ tk scaling ] in the console you can + determine the current value. By setting this value slightly larger or + smaller than the default you can adjust the scaling to your system. This + value is not intended for large scale changes from the default as + settings have been adjusted to the detected value. + + (HammerDB-4.0) 49 % puts [ tk scaling ] +1.3333333333333333
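For example, if the detected value above produced an interface slightly too large for your display, a fixed pixelsperpoint a little below it could be set in the theme section of generic.xml; the value 1.25 here is purely illustrative:

```xml
<theme>
<scaling>auto</scaling>
<scaletheme>auto</scaletheme>
<pixelsperpoint>1.25</pixelsperpoint>
</theme>
```
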
@@ -593,9 +1146,9 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

+ file. Select your chosen database and TPROC-C and press OK. This Quick + Start guide uses Microsoft SQL Server, however the process is the same for + all other supported databases.
Building the Schema @@ -610,13 +1163,13 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

- Click on the Benchmark tree view and under TPC-C and select TPC-C - Schema Build Options to display the TPC-C Schema Options window. Within - this window enter the connection details of your database software. - These options will vary depending on the database chosen. Select a - number of warehouses, 10 is good for a first test and set the Virtual - Users to build schema to the number of CPU cores on your system. Click - OK. + Click on the Benchmark tree view, under TPROC-C select TPROC-C + Schema Build Options to display the TPROC-C Schema Options window. + Within this window enter the connection details of your database + software. These options will vary depending on the database chosen. + Select a number of warehouses, 10 is good for a first test and set the + Virtual Users to build schema to the number of CPU cores on your system. + Click OK.
Build Options @@ -802,11 +1355,15 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

On completion observe that the Monitor Virtual User reports a
- value for TPM and a value for NOPM. TPM is the transactions per minute
- and is a unique value to how each database processes transactions. NOPM
- stands for New Orders per Minute and is database independent
- Consequently TPM cannot be used to compare tests running the same
- database software however NOPM can.
+ value for NOPM and a value for TPM. NOPM stands for New Orders per
+ Minute and is extracted from the database schema; it is therefore
+ database independent, meaning it is valid to compare NOPM between
+ different databases. TPM is the transactions per minute and is a value
+ unique to how each database processes transactions. NOPM is the key
+ performance metric; however TPM is the metric that correlates with
+ database performance tools' measurement of transactions per second or
+ minute and can therefore be used by database engineers for deeper
+ analysis of database performance.
Test Result @@ -839,7 +1396,7 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}
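To make the relationship between the two metrics concrete, here is an illustrative calculation only; the counts are invented, not HammerDB output. NOPM comes from the growth in the schema's new-order count over the timed interval, while TPM comes from the database engine's own transaction counter, so the two can diverge when more work is done per commit:

```shell
# Illustrative arithmetic only - these counts are invented, not measured.
NEW_ORDERS=200000       # increase in new orders recorded in the schema
DB_TRANSACTIONS=450000  # transactions counted by the database engine
MINUTES=5               # timed duration of the test
echo "NOPM: $((NEW_ORDERS / MINUTES))"
echo "TPM:  $((DB_TRANSACTIONS / MINUTES))"
```

With these invented figures NOPM would be 40000 and TPM 90000; only the NOPM figure would be comparable across different database engines.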

- Introduction to OLTP Testing (OSS-TPC-C) + Introduction to OLTP Testing (TPROC-C derived from TPC-C)
What is a Transactional Workload @@ -854,68 +1411,83 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

- What is the TPC and OSS-TPC-C? + What is the TPC and the TPROC-C workload derived from + TPC-C? Designing and implementing a database benchmark is a significant challenge. Many performance tests and tools experience difficulties in comparing system performance especially in the area of scalability, the ability of a test conducted on a certain system and schema size to be comparable with a test on a larger scale system. When system vendors - wish to publish benchmark information about database performance they - have long had to access to such sophisticated test specifications to do - so and the TPC is the industry body most widely recognized for defining - benchmarks in the database industry recognized by all of the leading - database vendors. TPC-C is the benchmark published by the TPC for Online - Transaction Processing and you can view the published TPC-C results at the - TPC website. + wish to publish validated benchmark information about database + performance they have needed to access sophisticated test specifications + and the TPC is the industry body most widely recognized for defining + benchmarks. TPC specifications are the only published benchmarks in the + database industry recognized by all of the leading database vendors. + TPC-C is the benchmark published by the TPC for Online Transaction + Processing and you can view the published TPC-C results at the TPC + website. The TPC Policies allow for derivations of TPC Benchmark Standards - that comply with the TPC Fair Use rules. The HammerDB OSS-TPC-C + that comply with the TPC Fair Use rules. TPROC-C is the OLTP workload + implemented in HammerDB derived from the TPC-C specification with + modification to make running HammerDB straightforward and cost-effective + on any of the supported database environments. 
The HammerDB TPROC-C workload is an open source workload derived from the TPC-C Benchmark Standard and as such is not comparable to published TPC-C results, as - the results do not comply with the TPC-C Benchmark Standard. + the results comply with a subset rather than the full TPC-C Benchmark + Standard. The name for the HammerDB workload TPROC-C means "Transaction + Processing Benchmark derived from the TPC "C" specification".
- HammerDB Transactional OSS-TPC-C based workloads + HammerDB TPROC-C workload - The HammerDB OSS-TPC-C workload is intentionally not fully optimized + The HammerDB TPROC-C workload is intentionally not fully optimized and not biased towards any particular database implementation or system - hardware (being open source you are free to inspect all of the HammerDB - source code). The intent is to provide an out of the box type experience. - The crucial element is to reiterate the point made in the previous section - that the HammerDB workloads are designed to be reliable, scalable and tested - to produce accurate, repeatable and consistent results. In other words HammerDB - is designed to measure relative as opposed to absolute database performance - between systems. What this means is if you run a test against one particular - configuration of hardware and software and re-run the same test against exactly - the same configuration you will get exactly the same result within the bounds of - the random selection of transactions which will typically be within 1%. Results - Any differences between results are directly as a result of changes you have - made to the configuration (or management overhead of your system such as database - checkpoints or user/administrator error). Testing has proven that HammerDB - tests re-run multiple times unattended (see the autopilot feature) on the - same reliable configuration produce performance profiles that will overlay - each other almost identically. The Figure below illustrates an example of - this consistency and shows the actual results of 2 sequences of tests run - unattended one after another against one of the supported databases with - the autopilot feature from 1 to 144 virtual users to test modifications to - a WAL (Write Ahead Log File). In other words HammerDB will give you the same - results each time, if your results vary you need to focus entirely on your - database, OS and hardware configuration. 
+ hardware; being open source, you are free to inspect all of the HammerDB
+ source code and to submit pull requests to update or enhance the
+ workloads. The intent is to provide an out-of-the-box type experience
+ when testing a database without requiring complex configurations or
+ additional third-party software beyond HammerDB and the
+ database you are testing. HammerDB can be run on any environment from a
+ simple laptop based express type database install right through to 8, 16
+ and 32 CPU socket servers and clusters. The crucial element is to
+ reiterate the point made in the previous section that the HammerDB
+ workloads are designed to be reliable, scalable and tested to produce
+ accurate, repeatable and consistent results. In other words HammerDB is
+ designed to measure relative as opposed to absolute database performance
+ between systems. This means that if you run a test against one
+ particular configuration of hardware and software and re-run the same
+ test against exactly the same configuration you will get exactly the
+ same result within the bounds of the random selection of transactions,
+ which will typically be within 1%. Any differences between results are
+ a direct result of changes you have made to the configuration (or
+ management overhead of your system such as database checkpoints or
+ user/administrator error). Testing has proven that HammerDB tests re-run
+ multiple times unattended (see the autopilot feature) on the same
+ reliable configuration produce performance profiles that will overlay
+ each other almost identically. The Figure below illustrates an example
+ of this consistency and shows the actual results of 2 sequences of tests
+ run unattended one after another against one of the supported databases
+ with the autopilot feature from 1 to 144 virtual users to test
+ modifications to a WAL (Write Ahead Log File). 
In other words HammerDB + will give you the same results each time, if your results vary you need + to focus entirely on your database, OS and hardware + configuration. @@ -940,54 +1512,46 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

Comparing HammerDB results
-          HammerDB implements a workload based on the TPC-C specification
-          however does NOT implement a full specification TPC-C benchmark and the
-          transaction results from HammerDB cannot be compared with the official
-          published TPC-C benchmarks in any manner. Official Audited TPC-C
-          benchmarks are extremely costly, time consuming and complex to establish
-          and maintain. The HammerDB implementation based on the specification of
-          the TPC-C benchmark is designed to capture the essence of TPC-C in a
-          form that can be run at low cost on any system bringing professional,
-          reliable and predictable load testing to all database environments. For
-          this reason HammerDB results cannot and should NOT be compared or used
-          with the term tpmC in any circumstance. You are permitted however to
-          observe for your own benefit whether a correlation exists between the
-          ratios of HammerDB results conducted on different systems and
-          officially, audited and published results. HammerDB workloads produce 2
-          statistics to compare systems called TPM and NOPM respectively. TPM is
-          the specific database transactional measurement typically defined as the
-          number of user commits plus the number of user rollbacks. Being database
-          specific TPM values cannot be compared between different database types.
-          On the other hand the NOPM value is based on a metric captured from
-          within the test schema itself. As such NOPM (New Orders per minute) is a
-          performance metric independent of any particular database implementation
-          and is the recommended primary metric to use.
+          HammerDB implements a workload called TPROC-C based on the TPC-C
+          specification, however it does NOT implement a full
+          specification TPC-C benchmark and the transaction results from HammerDB
+          cannot be compared with the official published TPC-C benchmarks in any
+          manner. Official Audited TPC-C benchmarks are extremely costly, time
+          consuming and complex to establish and maintain. 
The HammerDB
+          implementation based on the specification of the TPC-C benchmark is
+          designed to capture the essence of TPC-C in a form that can be run at
+          low cost on any system bringing professional, reliable and predictable
+          load testing to all database environments. For this reason HammerDB
+          results cannot and should NOT be compared or used with the term tpmC in
+          any circumstance. HammerDB workloads produce 2 statistics to compare
+          systems called TPM and NOPM respectively. The NOPM value is based on a
+          metric captured from within the test schema itself. As such NOPM (New
+          Orders per minute), as a performance metric independent of any
+          particular database implementation, is the recommended primary metric
+          to use.
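To make the relationship between the two statistics concrete, the arithmetic behind them can be sketched as follows. This is an illustrative sketch only: the function name and the counter snapshots are invented for the example, and HammerDB captures and reports these figures internally so you do not need to compute them yourself.

```python
def tpm_nopm(start_txn, end_txn, start_orders, end_orders, minutes):
    """Return (TPM, NOPM) from counter snapshots taken at the start
    and end of the timed test period.

    start_txn/end_txn: a database-specific transaction counter
      (e.g. user commits plus user rollbacks), which is why TPM is
      not comparable between different database engines.
    start_orders/end_orders: a count of new orders taken from the
      test schema itself, which is why NOPM (New Orders per minute)
      is independent of the database implementation.
    """
    tpm = (end_txn - start_txn) / minutes
    nopm = (end_orders - start_orders) / minutes
    return round(tpm), round(nopm)

# Hypothetical snapshots around a 5 minute timed run
print(tpm_nopm(0, 2_500_000, 0, 540_000, 5))  # (500000, 108000)
```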
- Understanding the TPC-C workload - - TPC-C implements a computer system to fulfil orders from customers - to supply products from a company. The company sells 100,000 items and - keeps its stock in warehouses. Each warehouse has 10 sales districts and - each district serves 3000 customers. The customers call the company - whose operators take the order, each order containing a number of items. - Orders are usually satisfied from the local warehouse however a small - number of items are not in stock at a particular point in time and are - supplied by an alternative warehouse. It is important to note that the - size of the company is not fixed and can add Warehouses and sales - districts as the company grows. For this reason your test schema can be - as small or large as you wish with a larger schema requiring a more - powerful computer system to process the increased level of transactions. - The TPC-C schema is shown below, in particular note how the number of - rows in all of the tables apart from the ITEM table which is fixed is - dependent upon the number of warehouses you choose to create your - schema. - - + Understanding the TPROC-C workload derived from TPC-C + + The TPC-C specification on which TPROC-C is based implements a + computer system to fulfil orders from customers to supply products from + a company. The company sells 100,000 items and keeps its stock in + warehouses. Each warehouse has 10 sales districts and each district + serves 3000 customers. The customers call the company whose operators + take the order, each order containing a number of items. Orders are + usually satisfied from the local warehouse however a small number of + items are not in stock at a particular point in time and are supplied by + an alternative warehouse. It is important to note that the size of the + company is not fixed and can add Warehouses and sales districts as the + company grows. 
For this reason your test schema can be as small or large
+          as you wish, with a larger schema requiring a more powerful computer
+          system to process the increased level of transactions. The TPROC-C
+          schema is shown below; note in particular how the number of rows in all
+          of the tables, apart from the fixed-size ITEM table, is dependent upon
+          the number of warehouses you choose when creating your schema.
- TPC-C Schema + TPROC-C Schema @@ -995,8 +1559,8 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}

For additional clarity please note that the term Warehouse in - the context of TPC-C bears no relation to a Data Warehousing workload, - as you have seen TPC-C defines a transactional based system and not a + the context of TPROC-C bears no relation to a Data Warehousing workload, + as you have seen TPROC-C defines a transactional based system and not a decision support (DSS) one. In addition to the computer system being used to place orders it also enables payment and delivery of orders and the ability to query the stock levels of warehouses. Consequently the @@ -1031,116 +1595,101 @@ Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-17.0.so.1.1 UsageCount=1}}
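The way the table cardinalities scale with the warehouse count can be sketched as follows. This is an illustration based on the figures quoted in this section and the TPC-C specification the schema is derived from (a fixed 100,000 item catalogue, 10 districts per warehouse, 3,000 customers per district, one stock row per item per warehouse); the function name is invented for the example.

```python
def tproc_c_rowcounts(warehouses):
    """Approximate initial row counts for a TPROC-C schema.

    ITEM is fixed at 100,000 rows regardless of schema size; every
    other table scales with the number of warehouses chosen.
    """
    districts = warehouses * 10          # 10 sales districts per warehouse
    customers = districts * 3000         # 3,000 customers per district
    return {
        "ITEM": 100_000,                 # fixed, independent of warehouses
        "WAREHOUSE": warehouses,
        "DISTRICT": districts,
        "CUSTOMER": customers,
        "STOCK": warehouses * 100_000,   # one stock row per item per warehouse
        "ORDERS": customers,             # one initial order per customer
    }

print(tproc_c_rowcounts(100)["CUSTOMER"])  # 3000000
```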

- Generating Performance Profiles - - A particular advantage of HammerDB is is the ability to generate a - performance profile as the load increases on your system (see the Autopilot - feature for doing this in an unattended manner). - -
- Performance Profile - - - - - - -
-
-          This graph shows the relative performance of real tests on
-          different database configurations, this example in fact shows the same
-          database on the same server with the same configuration, however with
-          source code modifications to improve scalability. It is evident that the
-          performance improvement only becomes visible once the system reaches
-          beyond 16 Virtual Users. It is also clear that in both cases adding
-          Virtual Users beyond peak performance results in lower throughput. It
-          should therefore be clear that your testing goal for transactional
-          systems should be to measure the performance profile of your system
-          across all levels of utilisation starting with 1 virtual user through to
-          peak utilisation.
+          TPROC-C key similarities and differences from TPC-C
+
+          HammerDB can be seen as a subset of the full TPC-C
+          specification, intentionally modified to make the workload simpler and
+          easier to run. The key similarities are the schema definition and data
+          and the 5 transactions implemented as stored procedures. The key
+          difference is that by default HammerDB will run without keying and
+          thinking time enabled (note that enabling event driven scaling will
+          enable keying and thinking time to be run with a large number of user
+          sessions). This means that HammerDB TPROC-C will run a CPU and memory
+          intensive version of the TPC-C workload. In turn this also means that
+          the number of virtual users and the required data set will be much
+          smaller than for a full TPC-C implementation to reach maximum levels of
+          performance. HammerDB also does not implement terminals as the full
+          specification does. Nevertheless with the HammerDB TPROC-C
+          implementation a large number of client systems and third-party
+          middleware is not required nor a very large data set to reach maximum
+          levels of performance whilst still providing a robust test of relational
+          database capabilities.
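A simple rate model illustrates why removing keying and thinking time makes the workload CPU and memory intensive. The function and all of the timings below are purely illustrative arithmetic, not HammerDB code or measured values:

```python
def per_user_tpm(txn_ms, keying_s=0.0, thinking_s=0.0):
    """Transactions per minute a single virtual user can drive when
    each transaction takes txn_ms of database time plus optional
    keying and thinking delays (all values illustrative)."""
    cycle_s = txn_ms / 1000 + keying_s + thinking_s
    return 60 / cycle_s

# Without delays a user drives the database continuously...
print(round(per_user_tpm(2)))                             # 30000
# ...with keying and thinking time the same user is mostly idle, so
# many more users (and warehouses) are needed for the same load.
print(round(per_user_tpm(2, keying_s=2, thinking_s=10)))  # 5
```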
- Generating Time Profiles - - In addition to performance profiles based on throughput you should - also take note of transaction response times. Whereas performance - profiles show the cumulative performance of all of the virtual users - running on the system, response times show performance based on the - experience of the individual user. When comparing systems both - throughput and response time are important comparative measurements. - HammerDB includes a time profiling package called etprof that enables - you to select an individual user and measure the response times. This - functionality is enabled by selecting Time Profile checkbox in the - driver options. When enabled the time profile will report response time - percentile values at 10 second intervals as well as cumulative values - for all of the test at the end of the test run. - - Hammerdb Log @ Fri Jul 05 09:55:26 BST 2019 -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- -Vuser 1:Beginning rampup time of 1 minutes -Vuser 2:Processing 1000000 transactions with output suppressed... -Vuser 3:Processing 1000000 transactions with output suppressed... -Vuser 4:Processing 1000000 transactions with output suppressed... -Vuser 5:Processing 1000000 transactions with output suppressed... -Vuser 2:|PERCENTILES 2019-07-05 09:55:46 to 2019-07-05 09:55:56 -Vuser 2:|neword|MIN-391|P50%-685|P95%-1286|P99%-3298|MAX-246555|SAMPLES-3603 -Vuser 2:|payment|MIN-314|P50%-574|P95%-1211|P99%-2253|MAX-89367|SAMPLES-3564 -Vuser 2:|delivery|MIN-1128|P50%-1784|P95%-2784|P99%-6960|MAX-267012|SAMPLES-356 -Vuser 2:|slev|MIN-723|P50%-884|P95%-1363|P99%-3766|MAX-120687|SAMPLES-343 -Vuser 2:|ostat|MIN-233|P50%-568|P95%-1325|P99%-2387|MAX-82538|SAMPLES-365 -Vuser 2:|gettimestamp|MIN-2|P50%-4|P95%-7|P99%-14|MAX-39|SAMPLES-7521 -Vuser 2:|prep_statement|MIN-188|P50%-209|P95%-1067|P99%-1067|MAX-1067|SAMPLES-6 -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -... 
-Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -Vuser 2:|PERCENTILES 2019-07-05 09:59:26 to 2019-07-05 09:59:36 -Vuser 2:|neword|MIN-410|P50%-678|P95%-1314|P99%-4370|MAX-32030|SAMPLES-4084 -Vuser 2:|payment|MIN-331|P50%-583|P95%-1271|P99%-3152|MAX-43996|SAMPLES-4142 -Vuser 2:|delivery|MIN-1177|P50%-2132|P95%-3346|P99%-4040|MAX-8492|SAMPLES-416 -Vuser 2:|slev|MIN-684|P50%-880|P95%-1375|P99%-1950|MAX-230733|SAMPLES-364 -Vuser 2:|ostat|MIN-266|P50%-688.5|P95%-1292|P99%-1827|MAX-9790|SAMPLES-427 -Vuser 2:|gettimestamp|MIN-3|P50%-4|P95%-7|P99%-14|MAX-22|SAMPLES-8639 -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -Vuser 2:|PERCENTILES 2019-07-05 09:59:36 to 2019-07-05 09:59:46 -Vuser 2:|neword|MIN-404|P50%-702|P95%-1296|P99%-4318|MAX-71663|SAMPLES-3804 -Vuser 2:|payment|MIN-331|P50%-597|P95%-1250|P99%-4190|MAX-47539|SAMPLES-3879 -Vuser 2:|delivery|MIN-1306|P50%-2131|P95%-4013|P99%-8742|MAX-25095|SAMPLES-398 -Vuser 2:|slev|MIN-713|P50%-913|P95%-1438|P99%-2043|MAX-7434|SAMPLES-386 -Vuser 2:|ostat|MIN-268|P50%-703|P95%-1414|P99%-3381|MAX-249963|SAMPLES-416 -Vuser 2:|gettimestamp|MIN-3|P50%-4|P95%-8|P99%-16|MAX-27|SAMPLES-8079 -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -Vuser 1:3 ..., -Vuser 1:Test complete, Taking end Transaction Count. 
-Vuser 1:4 Active Virtual Users configured -Vuser 1:TEST RESULT : System achieved 468610 SQL Server TPM at 101789 NOPM -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -Vuser 2:|PROCNAME | EXCLUSIVETOT| %| CALLNUM| AVGPERCALL| CUMULTOT| -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ -Vuser 2:|neword | 82051665|39.96%| 93933| 873| 88760245| -Vuser 2:|payment | 73823956|35.95%| 93922| 786| 80531339| -Vuser 2:|delivery | 22725292|11.07%| 9577| 2372| 23418195| -Vuser 2:|slev | 14396765| 7.01%| 9340| 1541| 14402033| -Vuser 2:|ostat | 10202116| 4.97%| 9412| 1083| 10207260| -Vuser 2:|gettimestamp | 2149552| 1.05%| 197432| 10| 13436919| -Vuser 2:|TOPLEVEL | 2431| 0.00%| 1| 2431| NOT AVAILABLE| -Vuser 2:|prep_statement | 1935| 0.00%| 5| 387| 1936| -Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ - - The output from etprof taken from each system should be used in - context with the overall performance profile to break down the overall - system throughput to the timing of the individual transactions - themselves and then graphed to show the comparison. - -
- Response Times - - - - - - -
+ How many warehouses to create for the TPROC-C test
+
+          This is a very typical FAQ and although detailed in the
+          documentation some extra details may help in sizing and configuration.
+          For a basic starting point create a schema with 250-500 warehouses per
+          server CPU socket; for more detail, size as follows.
+
+          The official TPC-C test has a fixed number of users per warehouse
+          and uses keying and thinking time so that the workload generated by each
+          user is not intensive. However most people use HammerDB with keying and
+          thinking time disabled and therefore each virtual user can approximately
+          drive the CPU resources of one CPU core on the database server.
+          Therefore the relationship between virtual users, warehouses and cores
+          can be seen: you need considerably fewer virtual users and warehouses to
+          drive a system to maximum throughput than the official test.
+
+          Additionally it is important to understand the workload. By
+          default each virtual user has the concept of a home warehouse where
+          approximately 75% of its workload will take place. With HammerDB this
+          home warehouse is chosen at random at the start of the test and remains
+          fixed. For example it can then be understood that if you configure a
+          schema with 1000 warehouses and run a test with 10 virtual users by
+          default most of the workload will be concentrated upon 10 warehouses.
+          (It is important to note the “by default” clause here as there are
+          exceptions to change this behaviour if desired). Also with the home
+          warehouse chosen at random it should be clear that the number of
+          warehouses should be configured so that when the maximum number of
+          virtual users that you will run are configured there is a good chance
+          that the selection of a home warehouse at random will be evenly
+          distributed across the available warehouses with one or possibly 2
+          virtual users selecting the same home warehouse at random but not
+          more. 
+
+          As an example configuring a 10 warehouse schema and running 100
+          virtual users against this schema would be an error in configuration as
+          it would be expected for 10 virtual users or more to select the same
+          warehouse. Doing this would mean that the workload would spend
+          considerably more time in lock contention and would not produce valid
+          results. Typically a ratio of 4 to 5 warehouses per virtual user would
+          be a minimum value to ensure an even distribution of virtual users to
+          warehouses. Therefore for the 100 virtual users 400 to 500 warehouses
+          should be a minimum to be configured. As noted configuring more should
+          not have a major impact on results as depending on the number of virtual
+          users used in the test most of the warehouses will be idle (and ideally
+          most of the warehouses you are using will be cached in memory in your
+          buffer cache so the I/O to the data area is minimal).
+
+          As one virtual user can drive most of the capacity of one CPU core
+          the actual value for the number of warehouses you choose will depend
+          upon the number of cores per socket. Note that if using CPUs with
+          Hyper-Threading you should allow for additional CPU capacity, so size
+          as if there were 35% more physical cores. Also depending on your chosen
+          database some database software will not scale to fully utilise all
+          cores, see the best practice guides for guidance on your chosen
+          database. If CPU utilisation is limited then you will need fewer
+          warehouses configured and virtual users for the test.
+
+          It should also be clear that there is no completely accurate
+          one-size-fits-all type guidance for warehouse sizing as different
+          databases will scale differently and some may need more warehouses and
+          virtual users than others to reach peak performance. A common error is
+          to size many thousands of warehouses and virtual users with the aim of
+          reaching high performance but instead resulting in high levels of
+          contention and low performance. 
Even for highly scalable databases on + large systems upper limits for tests without keying and thinking time + are in the region of 2000 warehouses for up to 500 virtual users for + maximum performance. + + The exceptions to these guidelines are given in the section + Advanced Driver Script Options in the following Chapter namely the Use + All Warehouses and Event Driven Scaling options. For advanced users + these options enable increasing the amount of data area I/O and running + more Virtual Users less intensively with keying and thinking time + respectively.
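The guidance above can be distilled into a rough sizing heuristic. This is only a sketch of the rules of thumb in this section (250-500 warehouses per socket, 4-5 warehouses per virtual user, sizing as if Hyper-Threading added 35% more cores); it is not an official HammerDB sizing tool and the function name is invented for the example.

```python
def suggest_warehouses(cpu_sockets, max_virtual_users, hyperthreading=False):
    """Rough warehouse count for a TPROC-C schema without keying and
    thinking time, following the rules of thumb above."""
    per_socket = cpu_sockets * 250        # low end of the 250-500 range
    per_vuser = max_virtual_users * 4     # at least 4-5 per virtual user
    warehouses = max(per_socket, per_vuser)
    if hyperthreading:
        # size as if there were 35% more physical cores
        warehouses = int(warehouses * 1.35)
    return warehouses

print(suggest_warehouses(2, 100))  # 500
```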
@@ -1148,26 +1697,16 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+-------
      The goal of HammerDB is to make database performance data open
      source and enable database professionals a fast and low-cost way to compare
-      and contrast database systems. HamerDB maintains a list of 3rd party
-      publications on the HammerDB website and shares performance results on
-      Twitter where anyone is free to share their own test data.
-
- Publishing Results - - - - - - -
+ and contrast database systems. HammerDB maintains a list of 3rd party
+      publications on the HammerDB website.
-    How to Run an OLTP Workload
+    How to Run a TPROC-C Workload

-    This Chapter provides a general overview on the HammerDB OLTP
+    This Chapter provides a general overview of the HammerDB TPROC-C
     workload and gives you an introduction to conducting OLTP (Online
     Transaction Processing) workloads on all of the supported databases. This
     will equip you with the essentials for assessing the ability of any system
@@ -1183,10 +1722,18 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+-------
     the load generation server is run on a separate system from the SUT with
     the load generated across the network. It is possible to run HammerDB on
     the same system as the SUT however this will be expected to produce
-    different results from a network based load. Both the SUT and the load
-    generation server may be virtualized or container databases although
-    similarly results may differ from a native hardware based
-    installation.
+    different results from a network based load. For example, where the
+    database software is highly scalable, running HammerDB on the same
+    system will result in lower performance as the database software will
+    not be able to take advantage of all of the available CPU. Conversely
+    where the database software is less scalable and there is more network
+    overhead it can take more virtual users to reach the same levels of
+    performance using an additional load generation server compared to
+    running HammerDB on the SUT. Both the SUT and the load generation server
+    may be virtualized or container databases although similarly results may
+    differ from a native hardware based installation. In all cases when
+    comparing performance results you should ensure that you are comparing
+    across the same test and network configurations.
SUT Database Server Configuration @@ -1201,12 +1748,19 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+------- and I/O to maximize CPU utilization keying and thinking time should be set to FALSE (keying and thinking time is detailed later in this guide). To achieve this you should aim to create a schema with - approximately 250-500 warehouses per CPU socket. As long as it is not - too small resulting in contention the schema size should not - significantly impact results. You should have sufficient memory to - cache as much of your test schema in memory as possible. If keying and + approximately 250-500 warehouses per CPU socket. By default each + Virtual User will select a home warehouse at random and most of its + work takes place on that home warehouse and therefore the schema + sizing of 250-500 warehouses per socket should ensure that when the + Virtual Users login the choice of a home warehouse at random is evenly + distributed without a large number of Virtual Users selecting the same + home warehouse. As long as it is not too small resulting in contention + the schema size should not significantly impact results when testing + in a default configuration. You should have sufficient memory to cache + as much of your test schema in memory as possible. If keying and thinking time is set to TRUE you will need a significantly larger - schema and number of virtual users to create a meaningful system load. + schema and number of virtual users to create a meaningful system load + and should consider the advanced event-driven scaling option. Reductions in memory will place more emphasis on the I/O performance of the database containing the schema. If the allocated memory is sufficient most of the data will be cached during an OLTP test and I/O @@ -1239,6 +1793,142 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+------- running the same version of SQL Server as the SUT.
+
+
+      CPU Single-Threaded Performance Calibration
+
+      By far one of the most common configuration errors with database
+      performance testing is to have configured the CPUs to run in powersave
+      mode. On some Linux operating systems this is the default
+      configuration and therefore it is recommended to verify the CPU
+      single-threaded performance and operating mode before running database
+      workloads. One way to do this is to use the Julian
+      Dyke CPU performance test (referenced by permission of Julian
+      Dyke and there are versions shown below to run directly in HammerDB
+      and for Oracle PL/SQL and SQL Server T-SQL). Note that the timings are
+      not meant to be equivalent and it is expected that the HammerDB based
+      test is approximately twice as fast as PL/SQL or T-SQL. The reason for
+      the faster performance is that the TCL version is compiled into
+      bytecode and you can observe this by running a Linux utility such as
+      perf to see that the top function is TEBCresume (Tcl Execute ByteCode
+      Resume). During normal HammerDB operations TEBCresume should also be
+      the top function for the same reason.
+
+      Samples: 67K of event 'cycles:ppp', Event count (approx.): 33450114923
+Overhead Shared Object Symbol
+ 33.56% libtcl8.6.so [.] TEBCresume
+ 7.68% libtcl8.6.so [.] Tcl_GetDoubleFromObj
+ 6.28% libtcl8.6.so [.] EvalObjvCore
+ 6.14% libtcl8.6.so [.] TclNRRunCallbacks
+
+
+      The goal of running these tests is to ensure that your CPU runs
+      the test at the CPU advertised boost frequency. To do this you can use
+      the turbostat utility on Linux and the Task Manager utility on
+      Windows. By default the tests run for 10000000 iterations however this
+      can be extended if desired to allow sufficient time to monitor that the
+      boost frequency is operational. For the HammerDB version save the
+      script shown and run it using the CLI. A commented out command is
+      shown that can be uncommented to observe the bytecode for a particular
+      procedure. 
+ + proc runcalc {} { +set n 0 +for {set f 1} {$f <= 10000000} {incr f} { +set n [ expr {[::tcl::mathfunc::fmod $n 999999] + sqrt($f)} ] +} +return $n +} +#puts "bytecode:[::tcl::unsupported::disassemble proc runcalc]" +set start [clock milliseconds] +set output [ runcalc ] +set end [ clock milliseconds] +set duration [expr {($end - $start)}] +puts "Res = [ format %.02f $output ]" +puts "Time elapsed : [ format %.03f [ expr $duration/1000.0 ] ]" + + The expected result is 873729.72 as shown in the example output. + Depending on the CPU used the default completion time should be up to + 3 seconds, if longer then investigating the CPU configuration is + recommended. + + hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.990 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.966 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.980 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.976 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.972 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.988 + +hammerdb>source CPUTEST.tcl +Res = 873729.72 +Time elapsed : 2.976 + + + + The following listing shows the original Julian Dyke PL/SQL CPU + test that can be run in an Oracle instance. Example timings are given + at the website link above. + + SET SERVEROUTPUT ON +SET TIMING ON + +DECLARE + n NUMBER := 0; +BEGIN + FOR f IN 1..10000000 + LOOP + n := MOD (n,999999) + SQRT (f); + END LOOP; + DBMS_OUTPUT.PUT_LINE ('Res = '||TO_CHAR (n,'999999.99')); +END; +/ + + The following listing shows the same routine in T-SQL for SQL + Server. 
+
+      USE [tpcc]
+GO
+SET ANSI_NULLS ON
+GO
+CREATE PROCEDURE [dbo].[CPUSIMPLE]
+AS
+  BEGIN
+    DECLARE
+      @n numeric(16,6) = 0,
+      @a DATETIME,
+      @b DATETIME
+    DECLARE
+      @f int
+    SET @f = 1
+    SET @a = CURRENT_TIMESTAMP
+    WHILE @f <= 10000000
+      BEGIN
+        SET @n = @n % 999999 + sqrt(@f)
+        SET @f = @f + 1
+      END
+    SET @b = CURRENT_TIMESTAMP
+    PRINT 'Timing = ' + ISNULL(CAST(DATEDIFF(MS, @a, @b) AS VARCHAR),'')
+    PRINT 'Res = ' + ISNULL(CAST(@n AS VARCHAR),'')
+  END
+
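For comparison, the same routine can also be sketched in Python. This is an unofficial port, not part of HammerDB; as with the PL/SQL and T-SQL versions the absolute timing is not comparable with the Tcl test, but the computed result should match.

```python
import math
import time

# Unofficial Python port of the Julian Dyke CPU test shown above.
# The same double-precision fmod/sqrt loop is used, so the result
# should match the other versions, while elapsed time is only
# meaningful for comparing like with like.
def runcalc(iterations=10_000_000):
    n = 0.0
    for f in range(1, iterations + 1):
        n = math.fmod(n, 999999) + math.sqrt(f)
    return n

start = time.time()
result = runcalc()
print(f"Res = {result:.2f}")                       # Res = 873729.72
print(f"Time elapsed : {time.time() - start:.3f}")
```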
+
Administrator PC Configuration
@@ -1246,7 +1936,12 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+-------
      PC must have the minimal requirement to display the graphical output
      from the load generation server. The PC should also have the ability to
      connect to the SUT to monitor performance by the installation of an
-      appropriate database client.
+      appropriate database client. For Linux clients where remote desktop
+      displays are used it is recommended to use VNC instead of X Windows
+      for better graphics performance, in particular when using v4.0 SVG
+      based scalable graphics. Running X Windows over long distances is
+      known to impact display refresh rates and is not a HammerDB
+      issue.
@@ -1262,25 +1957,21 @@ Vuser 2:+-----------------+--------------+------+--------+--------------+------- You should have the Oracle database software installed and a test database created and running. During the installation make a note of your system user password, you will need it for the test schema - creation. You may at your discretion use an existing database however - please note that HammerDB load testing can drive your system - utilization to maximum levels and therefore testing an active - production system is not recommended. After your database server is - installed you should create a tablespace into which the test data will - be installed allowing disk space according to the guide previously in - this chpater. For example the following shows creating the tablespace - in the ASM disk group DATA: + creation. (Note that the system user is used and not sys). You may at + your discretion use an existing database however please note that + HammerDB load testing can drive your system utilization to maximum + levels and therefore testing an active production system is not + recommended. After your database server is installed you should create + a tablespace into which the test data will be installed allowing disk + space according to the guide previously in this chapter. For example + the following shows creating the tablespace in the ASM disk group + DATA: SQL> create bigfile tablespace tpcctab datafile '+DATA' size 100g; - f you are running HammerDB against Oracle on Windows there is - long established bug in Oracle that can cause application crashes for - multi-threaded applications on Windows. Note that again this bug is an - Oracle bug and not a HammerDB bug and can be investigated on the My - Oracle Support website with the following reference. Bug 12733000 - OCIStmtRelease crashes or hangs if called after freeing the service - context handle. To resolve this Oracle issue add the following entry - to your SQLNET.ORA file. 
+ If you are running HammerDB against Oracle on Windows add the + following entry to your SQLNET.ORA file for the reasons described in + the HammerDB release notes. SQLNET.AUTHENTICATION_SERVICES = (NTS) DIAG_ADR_ENABLED=OFF DIAG_SIGHANDLER_ENABLED=FALSE @@ -1328,10 +2019,10 @@ OK (30 msec) configuration on your SQL Server. Additionally your chosen method of authentication is required to be compatible with your chosen ODBC driver. To discover the available drivers use the ODBC Data Source - Administrator tool. The driver name should be entered into HammerDB - exactly as shown in the Data Source Administrator. The default value - is “ODBC Driver 13 for SQL Server” on Windows and “ODBC Driver 17 for - SQL Server” on Linux. + Administrator tool on Windows and the command database drivers on + Linux. The driver name should be entered into HammerDB exactly as + shown in the Data Source Administrator. The default value is “ODBC + Driver 17 for SQL Server” for both Windows and Linux.
ODBC Drivers @@ -1347,23 +2038,26 @@ OK (30 msec)
Db2

-      To connect to Db2 requires an ODBC interface and therefore it is
-      also necessary to install the DB2 client software IBM Data Server
-      Driver for ODBC and CLI. Note that HammerDB connects to Db2 via CLI as
-      the db2tcl interface is C based interface enabling CLI connectivity.
-      ODBC is not used for HammerDB connectivity to Db2. Configure your
-      db2dsdriver.cfg file with the hostname, port and database that you
-      have created on the server.
+      To connect to Db2 requires the IBM CLI interface. Note that CLI
+      in this context means "call level interface" and should not be
+      confused with the HammerDB command-line interface. Db2 CLI is the
+      'C' language interface that HammerDB uses. ODBC is not used for
+      HammerDB connectivity to Db2 however both ODBC and CLI drivers are
+      packaged together and therefore for Db2 connectivity it is necessary
+      to install the Db2 client software IBM Data Server Driver for ODBC and
+      CLI. This is also typically installed with the Db2 database software.
+      Configure your db2dsdriver.cfg file with the hostname, port and
+      database that you have created on the server.

 db2inst1:~/odbc_cli/clidriver/cfg> more db2dsdriver.cfg
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<configuration>
 <dsncollection>
- <dsn alias="TPCC" host="db2v1032bit" name="TPCC" port="50001"/>
+ <dsn alias="TPCC" host="db2v1064bit" name="TPCC" port="50001"/>
 </dsncollection>
 <databases>
- <database host="db2v1032bit" name="TPCC" port="50001"/>
+ <database host="db2v1064bit" name="TPCC" port="50001"/>
 </databases>
</configuration>
@@ -1373,7 +2067,7 @@ OK (30 msec)
[db2inst1@~/sqllib/cfg]$ more db2cli.ini
[TPCC]
UID=db2inst1
-PWD=oracle
+PWD=ibmdb2
SysSchema=SYSIBM
SchemaList="'SYSIBM','TPCC'"
DeferredPrepare=1
@@ -1388,7 +2082,7 @@ TxnIsolation=1
StmtConcentrator=OFF

-      You should have the DB2 database software installed and ready to
+      You should have the Db2 database software installed and ready to
       accept connections as shown below. 
db2inst1~]$ db2stop @@ -1421,8 +2115,8 @@ For more detailed help, refer to the Online Reference Manual. db2 => - With DB2 installed and running manually create and configure a - DB2 Database according to your requirements. Pay particular attention + With Db2 installed and running manually create and configure a + Db2 Database according to your requirements. Pay particular attention to setting a LOGFILSIZ appropriate to your environment, otherwise you are likely to receive a transaction log full error message during the schema build. Additionally HammerDB is bufferpool and tablespace aware @@ -1511,8 +2205,13 @@ DB20000I The SQL command completed successfully. You should have the MySQL database software installed and running. Make sure you set a password for either the root user or a - user with the correct privileges to create the TPC-C database, for - example: + user with the correct privileges to create the TPROC-C database, for + example the following on MySQL 8.0. + + mysql> alter user 'root'@'localhost' identified by 'mysql'; +Query OK, 0 rows affected (0.00 sec) + + and the following on MySQL 5.6 -bash-4.1$ ./mysqladmin -u root password mysqlBy default a MySQL installation will allow connection to the local server @@ -1599,39 +2298,20 @@ host all all 127.0.0.1/32 md5 host all all 192.168.1.67/32 md5
- -
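Because db2dsdriver.cfg is plain XML, its connection entries can be sanity-checked programmatically before a test is run. The sketch below is illustrative only and is not part of HammerDB; it parses the db2dsdriver.cfg example shown earlier and checks that each dsn alias has a database entry with a matching host and port:

```python
# Sketch: sanity-check the DSN entries in a db2dsdriver.cfg before
# running a test. The XML content copies the example shown above; the
# checking logic is illustrative only and is not part of HammerDB.
import xml.etree.ElementTree as ET

cfg = b"""<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<configuration>
  <dsncollection>
    <dsn alias="TPCC" host="db2v1064bit" name="TPCC" port="50001"/>
  </dsncollection>
  <databases>
    <database host="db2v1064bit" name="TPCC" port="50001"/>
  </databases>
</configuration>"""

root = ET.fromstring(cfg)
databases = {(d.get("host"), d.get("port")) for d in root.iter("database")}

# Every dsn alias should have a database entry with the same host and
# port, otherwise connections to that alias will fail at runtime.
for dsn in root.iter("dsn"):
    key = (dsn.get("host"), dsn.get("port"))
    status = "matched" if key in databases else "UNMATCHED"
    print(dsn.get("alias"), "->", key[0], key[1], status)
```

A mismatch between the dsncollection and databases sections is a common cause of Db2 connection errors, so a check of this kind can save a failed run.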
- Redis - - Redis is described as a key-value cache and store as opposed to - a database and is by design a single-threaded server and is not - designed to benefit from multiple CPUs or multiple CPU cores. On the - other hand HammerDB is a multi-threaded load-testing and benchmarking - tool and is specifically designed to use multiple CPU cores and test - multi-threaded databases. Therefore the expectation should be set that - a single Redis server will not achieve the same level of performance - as one of the traditional RDBMS servers with a standard OLTP workload. - Scalability of Redis beyond a single core can be achieved with - clustering or sharding which is beyond the scope of this guide. It is - important to note however that as HammerDB is multi-threaded this does - not necessarily mean that a single virtual user scenario within Redis - is always optimal. HammerDB may generate a workload with multiple - virtual users serviced by a single single-threaded Redis server and - achieve a higher level of performance and therefore HammerDB testing - can help determine this optimal Redis client to server ratio. -
Configuring Schema Build Options - To create the OLTP test schema based on the TPC-C specification + To create the OLTP test schema based on the TPROC-C specification you will need to select which benchmark and database you wish to use by - choosing select benchmark from under the Options menu or under the - benchmark tree-view. The initial settings are determined by the values - in your XML configuration files. The following example shows the - selection of SQL Server however the process is the same for all - databases. + choosing select benchmark from under the Options menu or double-clicking + on the chosen database under the benchmark tree-view. (For the currently + selected database double left-click shows the benchmark options and + double right-click expands the tree view). The initial settings are + determined by the values in your XML configuration files. The following + example shows the selection of SQL Server however the process is the + same for all databases.
Benchmark Options @@ -1643,9 +2323,9 @@ host all all 192.168.1.67/32 md5
- To create the TPC-C schema select the TPC-C schema options menu - tab from the benchmark tree-view or the options menu. This menu will - change dynamically according to your chosen database. + To create the TPROC-C schema select the TPROC-C schema options + menu tab from the benchmark tree-view or the options menu. This menu + will change dynamically according to your chosen database.
Schema Build Options @@ -1657,17 +2337,13 @@ host all all 192.168.1.67/32 md5
- If selected from the Options menu the schema options window is - divided into two sections. The “Build Options” section details the - general login information and where the schema will be built and the - “Driver Options” for the Driver Script to run after the schema is built. - If selected from the benchmark tree-view only the “Build Options” are - shown and these are the only options of importance at this stage. Note - that in any circumstance you do not have to rebuild the schema every - time you change the “Driver Options”, once the schema has been built - only the “Driver Options” may need to be modified. For the “Build - Options” fill in the values according to the database where the schema - will be built as follows. + The “Build Options” section details the general login information + and where the schema will be built and these are the only options of + importance at this stage. Note that you do not have to rebuild the + schema every time you change the “Driver Options”; once the schema has + been built only the “Driver Options” may need to be modified. For the + “Build Options” fill in the values according to the database where the + schema will be built as follows.
Oracle Schema Build Options @@ -1683,7 +2359,7 @@ host all all 192.168.1.67/32 md5 - Oracle Build Options + Oracle Options @@ -1716,43 +2392,43 @@ host all all 192.168.1.67/32 md5 The system user password is the password for the “system” user you entered during database creation. The system user already exists in all Oracle databases and has the - necessary permissions to create the TPC-C user. + necessary permissions to create the TPROC-C user. - TPC-C User - - The TPC-C user is the name of a user to be created that - will own the TPC-C schema. This user can have any name you - choose but must not already exist and adhere to the standard - rules for naming Oracle users. You may if you wish run the - schema creation multiple times and have multiple TPC-C schemas - created with ownership under a different user you create each - time. + TPROC-C User + + The TPROC-C user is the name of a user to be created + that will own the TPROC-C schema. This user can have any name + you choose but must not already exist and adhere to the + standard rules for naming Oracle users. You may if you wish + run the schema creation multiple times and have multiple + TPROC-C schemas created with ownership under a different user + you create each time. - TPC-C User Password + TPROC-C User Password - The TPC-C user password is the password to be used for - the TPC-C user you create and must adhere to the standard - rules for Oracle user password. You will need to remember the - TPC-C user name and password for running the TPC-C driver - script after the schema is built. + The TPROC-C user password is the password to be used + for the TPROC-C user you create and must adhere to the + standard rules for Oracle user password. You will need to + remember the TPROC-C user name and password for running the + TPROC-C driver script after the schema is built. 
- TPC-C Default Tablespace + TPROC-C Default Tablespace - The TPC-C default tablespace is the tablespace that - will be the default for the TPC-C user and therefore the + The TPROC-C default tablespace is the tablespace that + will be the default for the TPROC-C user and therefore the tablespace to be used for the schema creation. The tablespace must have sufficient free space for the schema to be created. - Order Line Tablespace + TPROC-C Order Line Tablespace If the “Number of Warehouses” as described below is set to 200 or more then the “Partition Order Line Table” option @@ -1760,15 +2436,17 @@ host all all 192.168.1.67/32 md5 a different tablespace for the Order Line table only becomes active. For high performance schemas this gives the option of using both a separate tablespace and memory cache for the - order line table with a different block size. + order line table with a different block size. Where a + different cache and blocksize is used 16k is + recommended. - TPC-C Temporary Tablespace + TPROC-C Temporary Tablespace - The TPC-C temporary tablespace is the temporary + The TPROC-C temporary tablespace is the temporary tablespace that already exists in the database to be used by - the TPC-C User. + the TPROC-C User. @@ -1787,18 +2465,23 @@ host all all 192.168.1.67/32 md5 also disables table locks. These options can provide additional levels of scalability on high performance systems where contention is observed however will not provide - significant at entry level. + significant performance gains on entry level systems. When + Hash Clusters are enabled table locks are also disabled with + the command "ALTER TABLE XXX DISABLE TABLE LOCK" and these + locks will need to be re-enabled to drop the schema when + required. - Partition Order Line Table + Partition Tables When more than 200 warehouses are selected this option uses Oracle partitioning to divide the Order Line table into partitions of 100 warehouses each. 
Using partitioning enables scalability for high performance schemas and should be considered along with using a separate tablespace for the Order Line - table. + table. Selecting this option also partitions the Orders and + History tables. @@ -1855,7 +2538,7 @@ host all all 192.168.1.67/32 md5 usage is essential for workloads operating on In-memory databases. For a full implementation of in-memory tables a primary key is mandatory, however by definition the HISTORY table does not have a - primary key. Therefore to implement all tables as in-memory and + primary key. Therefore to implement all tables as in-memory an identity column has been added to the HISTORY table. It is important to note that despite the nomenclature of in-memory and on-disk databases in fact most of the workload of the on-disk database @@ -2028,7 +2711,7 @@ GO - SQL Server Database + TPROC-C SQL Server Database The SQL Server Database is the name of the Database to be created on the SQL Server to contain the schema. If @@ -2053,7 +2736,7 @@ GO In-Memory Hash bucket Multiplier The size of the In-memory database is specified at - creation time, however the OLTP/TPC-C schema allows for + creation time, however the OLTP/TPROC-C schema allows for the insertion of additional rows. This value enables the creation of larger tables for orders, new_order and order_line to allow for these inserts. Note: Do not @@ -2137,28 +2820,28 @@ GO - Db2 User + TPROC-C Db2 User The name of the operating system user to connect to the Db2 database, for example db2inst1.
- Db2 Password + TPROC-C Db2 Password The password for the operating system Db2 user by default “ibmdb2” - Db2 Database + TPROC-C Db2 Database The name of the Db2 database that you have already created, for example “tpcc” - Db2 Default Tablespace + TPROC-C Db2 Default Tablespace The name of the existing tablespace where tables should be located if a specific tablespace has not been @@ -2167,7 +2850,8 @@ GO - Db2 Tablespace List (Space Separated Values) + TPROC-C Db2 Tablespace List (Space Separated + Values) When partitioning is selected, a space separated list of Tablespace initials followed by a pre-existing tablespace @@ -2218,7 +2902,7 @@ GO warehouses are configured and transparently divides the schema into 10 separate tables for the larger tables for improved scalability and performance. This option is - recommended for larger configuration. + recommended for larger configurations. @@ -2268,6 +2952,15 @@ GO be the default port of 3306. + + MySQL Socket + + The MySQL Socket option is enabled on Linux only. If + HammerDB is running on the same server and the MySQL Host is + 127.0.0.1 or localhost then HammerDB will open a connection + on the socket given instead of using TCP/IP. + + MySQL User @@ -2275,7 +2968,7 @@ GO create a database and you previously granted access to from the load generation server. The root user already exists in all MySQL databases and has the necessary permissions to - create the TPC-C database. + create the TPROC-C database. @@ -2283,15 +2976,15 @@ GO The MySQL user password is the password for the user defined as the MySQL User. You will need to remember the - MySQL user name and password for running the TPC-C driver + MySQL user name and password for running the TPROC-C driver script after the database is built. - MySQL Database + TPROC-C MySQL Database The MySQL Database is the database that will be - created containing the TPC-C schema creation. There must + created containing the TPROC-C schema.
There must be sufficient free space for the database to be created. @@ -2413,121 +3106,53 @@ GO - PostgreSQL User + TPROC-C PostgreSQL User The PostgreSQL User is the user (role) that will be - created that owns the database containing the TPC-C + created that owns the database containing the TPROC-C schema. - PostgreSQL User Password + TPROC-C PostgreSQL User Password The PostgreSQL User Password is the password that will be specified for the PostgreSQL user when it is created. - - - - PostgreSQL Database - - The PostgreSQL Database is the database that will be - created and owned by the PostgreSQL User that contains the - TPC-C schema. - - - - EnterpriseDB Oracle Compatible - - Choosing EnterpriseDB Oracle compatible creates a - schema using the Oracle compatible features of EnterpriseDB - in an installation of Postgres Plus Advanced Server. This - build uses Oracle PL/SQL for the creation of the stored - procedures. - - - - PostgreSQL Stored Procedures - - When running on PostgreSQL v11 or upwards use - PostgreSQL stored procedures instead of functions. - - - - Number of Warehouses - - The Number of Warehouses is selected by a listbox. - You should set this value to number of warehouses you have - chosen for your test. - - - - Virtual Users to Build Schema - - The Virtual Users to Build Schema is the number of - Virtual Users to be created on the Load Generation Server - that will complete your multi-threaded schema build. You - should set this value to either the number of warehouses you - are going to create (You cannot set the number of virtual - users lower than the number of warehouses value) or the - number of cores/Hyper-Threads on your Load Generation - Server. If you have a significantly larger core/Hyper-Thread - count on your Database Server then also installing HammerDB - locally on this server as well to run the schema build can - take advantage of the higher core count to run the build - more quickly. - - - -
-
- -
- Redis Schema Build Options - -
- Redis Build Options - - - - - - -
- - - Redis Build Options + - - - Option + TPROC-C PostgreSQL Database - Description + The PostgreSQL Database is the database that if it + does not already exist will be created and owned by the + PostgreSQL User that contains the TPROC-C schema. If the + named database has already been created and is empty then + that database will be used to create the schema. - - - Redis Host + TPROC-C PostgreSQL Tablespace - The host name of the SUT running Redis which the load - generation server running HammerDB will connect to. + The PostgreSQL Tablespace in which to create the + schema. By default the tablespace is pg_default. - Redis Port + EnterpriseDB Oracle Compatible - The port of the Redis service. By default this will - be 6379. + Choosing EnterpriseDB Oracle compatible creates a + schema using the Oracle compatible features of EnterpriseDB + in an installation of Postgres Plus Advanced Server. This + build uses Oracle PL/SQL for the creation of the stored + procedures. - Redis Namespace + PostgreSQL Stored Procedures - The Redis namespace is equivalent to a separate - schema – assign a number for the namespace for the TPC-C - schema. + When running on PostgreSQL v11 or upwards use + PostgreSQL stored procedures instead of functions. @@ -2615,7 +3240,10 @@ GO users will depending on the database create the indexes, stored procedures and gather the statistics. When the schema build is complete Virtual User 1 will display the message SCHEMA COMPLETE and all virtual - users will show that they completed their action successfully. + users will show an end timestamp and that they completed their action + successfully. If this is not the case then the build did not complete + successfully, the schema is not valid for testing and should therefore + be deleted and reinstalled.
Schema complete @@ -2635,6 +3263,20 @@ GO SQL>drop user tpcc cascade; + Note that if Hash Clusters were used it will first be necessary + to re-enable the table locks as follows before deleting the + schema. + ALTER TABLE WAREHOUSE ENABLE TABLE LOCK; +ALTER TABLE DISTRICT ENABLE TABLE LOCK; +ALTER TABLE CUSTOMER ENABLE TABLE LOCK; +ALTER TABLE ITEM ENABLE TABLE LOCK; +ALTER TABLE STOCK ENABLE TABLE LOCK; +ALTER TABLE ORDERS ENABLE TABLE LOCK; +ALTER TABLE NEW_ORDER ENABLE TABLE LOCK; +ALTER TABLE ORDER_LINE ENABLE TABLE LOCK; +ALTER TABLE HISTORY ENABLE TABLE LOCK; + When you have created your schema you can verify the contents with SQL*PLUS or your favourite admin tool as the newly created user. @@ -2951,37 +3593,6 @@ tpcc=# select relname, n_tup_ins - n_tup_del as rowcount from pg_stat_user_table order_line | 3001170 customer | 300000 (9 rows)
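The row counts shown above can be sanity-checked against the nominal initial cardinalities of the TPC-C specification from which the TPROC-C schema derives. In the sketch below, expected_rows is a hypothetical helper, not a HammerDB function, and the ORDER_LINE figure is an average (roughly 10 lines per order), so real builds vary slightly:

```python
# Sketch: nominal initial row counts for a TPROC-C schema, based on
# the TPC-C cardinalities the workload derives from. expected_rows is
# a hypothetical helper, not part of HammerDB; ORDER_LINE is an
# average (about 10 lines per order) so real builds differ slightly.
def expected_rows(warehouses):
    w = warehouses
    return {
        "warehouse": w,
        "district": 10 * w,
        "customer": 30000 * w,
        "history": 30000 * w,
        "orders": 30000 * w,
        "new_order": 9000 * w,
        "order_line": 300000 * w,   # ~10 rows per order on average
        "stock": 100000 * w,
        "item": 100000,             # fixed, independent of warehouses
    }

# The pg_stat_user_tables output above was for a 10 warehouse schema:
counts = expected_rows(10)
print(counts["customer"])     # 300000, matching the query output
print(counts["order_line"])   # nominal 3000000; the output showed 3001170
```

If a table is far from its nominal count, the build most likely failed part way through and the schema should be deleted and rebuilt.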
- Deleting or Verifying the Redis Schema - - If you have made a mistake simply close the application and run - the following SQL to undo the user you have created. - - 127.0.0.1:6379[1]> flushdb -OK - - - you can browse the created schema, for example: - - [redis@MERLIN redis-2.8.13]$ ./src/redis-cli -127.0.0.1:6379> select 1 -OK -127.0.0.1:6379[1]> keys WAREHOUSE:* - 1) "WAREHOUSE:9" - 2) "WAREHOUSE:5" - 3) "WAREHOUSE:4" - 4) "WAREHOUSE:2" - 5) "WAREHOUSE:8" - 6) "WAREHOUSE:6" - 7) "WAREHOUSE:3" - 8) "WAREHOUSE:10" - 9) "WAREHOUSE:1" -10) "WAREHOUSE:7" -(0.53s) -127.0.0.1:6379[1]>
@@ -3004,7 +3615,8 @@ OK This displays the Driver Script Options dialog. The connection options are common to the Schema Build Dialog in addition to new Driver - Options. + Options. For advanced options more details are provided in the + subsequent section.
Driver Script Options @@ -3030,12 +3642,12 @@ OK
- TPC-C Driver Script + TPROC-C Driver Script For all databases you have the option of selecting a Test Driver Script or a Timed Driver Script. This choice will dynamically change the Driver Script that is loaded when the - TPC-C Driver Script menu option is chosen. The Test Driver + TPROC-C Driver Script menu option is chosen. The Test Driver Script is intended for verifying and testing a configuration only by displaying virtual user output for a small number of virtual users. In particular both Windows and Linux graphical @@ -3062,7 +3674,7 @@ OK value and the Iterations value set in the Virtual User Options window. The Iterations value in the Virtual User Options window determines the number of times that a script will be run in its - entirety. The total_iterations value is internal to the TPC-C + entirety. The total_iterations value is internal to the TPROC-C driver script and determines the number of times the internal loop is iterated, i.e. for {set it 0} {$it < $total_iterations} {incr it} { ... } In other words if total_iterations is set to @@ -3099,45 +3711,34 @@ OK Keying and Thinking Time Keying and Thinking Time is shown as KEYANDTHINK in the
An official TPC-C benchmark implements 10 - users per warehouse all simulating this real user experience and - it should therefore be clear that the main impact of KEYANDTHINK - being set to TRUE is that you will need a significant number of - warehouses and users in order to generate a meaningful workload - and hence an extensive testing infrastructure. The positive side - is that when testing hundreds or thousands of virtual users you - will be testing a workload scenario that will be closer to a - real production environment. Whereas with KEYANDTHINK set to - TRUE each user will execute maybe 2 or 3 transactions a minute - you should not underestimate the radical difference that setting - KEYANDTHINK to FALSE will have on your workload. Instead of 2 or - 3 transactions each user will now execute tens of thousands of - transactions a minute. Clearly KEYANDTHINK will have a big - impact on the number of virtual users and warehouses you will - need to configure to run an accurate workload, if this parameter - is set to TRUE you will need at least hundreds or thousands of - virtual users and warehouses, if FALSE then you will need to - begin testing with 1 or 2 threads, building from here up to a - maximum workload with the number of warehouses set to a level - where the users are not contending for the same data. A common - error is to set KEYANDTHINK to FALSE and then create hundreds of - users for an initial test, this form of testing will only - exhibit contention for data between users and nothing about the - potential of the system. If you do not have an extensive testing - infrastructure and a large number of warehouses configured set - KEYANDTHINK to FALSE (whilst remembering that you are not - simulating a real TPC-C type test) and beginning your testing - with 1 virtual user building up the number of virtual users for - each subsequent test in order to plot a transaction - profile. + Driver Script. 
This parameter will have the biggest impact on + the type of workload that your test will generate. Keying and + thinking time is an integral part of a TPROC-C test in order to + simulate the effect of the workload being run by a real user who + takes time to key in an actual order and think about the output. + If KEYANDTHINK is set to TRUE each user will simulate this real + user type workload, testing a scenario that is closer to a real + production environment. Whereas with KEYANDTHINK set to TRUE each + user will execute maybe 2 or 3 transactions a minute, with + KEYANDTHINK set to FALSE each user will execute tens of thousands + of transactions a minute. If this parameter is set to TRUE you + will need at least hundreds or thousands of virtual users and + warehouses; if FALSE then you will need to begin testing with 1 + or 2 Virtual Users, building from here up to a maximum workload + with the number of warehouses set to a level where the users are + not contending for the same data. The default mode is to run with + KEYANDTHINK set to FALSE and this is the method that will drive + the highest transaction rates. To run with KEYANDTHINK set to + TRUE the event driven scaling feature has been introduced to + scale up the number of sessions connecting to the system. This + feature is activated by selecting the Asynchronous Scaling option + (which will also enable Keying and Thinking time). When enabled + you are able to configure multiple sessions per Virtual User. + Each Virtual User will then manage multiple clients processing + the Keying and Thinking time asynchronously. With this feature + you are able to configure significantly more sessions than with a + single Virtual User configuration.
When selected client side time + profiling will be conducted for the first active virtual user + and output written to the logfile. + + + + Asynchronous Scaling + + Enable the event driven scaling feature to configure + multiple client sessions per Virtual User. When selected this + will also enable the Keying and Thinking Time option. As the + keying and thinking time is managed asynchronously this option + is not valid to be run without keying and thinking time. + Asynchronous Scaling is also a feature that is appropriate to + test connection pooling by scaling up the number of client + sessions that connect to the database. + + + + Asynch Client per Virtual User + + Configures the number of sessions that each Virtual User + will connect to the database and manage. For example if there + are 5 Virtual Users and 10 Asynchronous Clients there will be 50 + active connections to the database. + + + + Asynch Client Login Delay + + The delay that each Virtual User will allow before + logging on each asynchronous client. + + + + Asynchronous Verbose + + Report asynchronous operations such as the time taken for + keying and thinking time. + + + + XML Connect Pool + + XML Connect Pool is intended for simultaneously testing + multiple instances of related clustered databases and when + selected the virtual user database connections will open a pool + of connections defined in the database specific XML file for + example mssqlscpool.xml for SQL Server located in the directory + connectpool in the config directory. Note that each virtual user + (or asynchronous client) will open and hold all of the defined + connections. The monitor virtual user and each virtual user will + also continue to open the main standalone database connection. 
+ The monitor virtual user will continue to report NOPM and TPM + and the virtual users to extract the warehouse count from this + standalone connection and therefore the reliance is on the + database to accurately report cluster wide transactions and for + the instances to have the same warehouse count. For verification + of the results from the master connection when using connect + pooling HammerDB will also report client side transactions. To + use the XML Connect Pool the XML configuration file should be + modified according to the cluster database names with each + connection defined by the tag c1, c2 c3 etc respectively. Under + the sprocs section in the XML file is defined which stored + procedures will use which connections and what policy is to be + used. The policy can be first_named, last_named, random or + round_robin. For example with connections c1, c2 and c3 for + neworder and a policy of round_robin the first neworder + transaction would execute against connection c1, the second c2, + the third c3 and the fourth c1. first_named uses the first given + connection, last_named the last and random chooses a connection + at random. stocklevel and orderstatus are read only stored + procedures that may be run against read only cluster nodes. + There is no restriction on the number of connections that may be + opened per virtual user. For further information on the + connections opened there is a commented information line in the + driver script such as #puts "sproc_cur:$st connections:[ set + $cslist ] cursors:[set $cursor_list] number of cursors:[set + $len] execs:[set $cnt]" prior to the opening of the standalone + connection that may be uncommented for more detail when the + script is run. + + + + Mode + + The mode value is taken from the operational mode setting + set under the Mode Options menu tab under the Mode menu. If set + to Local or Primary then the monitor thread takes snapshots, if + set to Replica no snapshots are taken. 
This is useful if + multiple instances of HammerDB are running in Primary and + Replica mode against a clustered database configuration to + ensure that only one instance takes the snapshots. + + + +
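The connection selection policies described for the XML Connect Pool (first_named, last_named, random and round_robin) behave as in the following sketch, which illustrates the selection behaviour only and is not HammerDB's own code:

```python
# Sketch of the XML Connect Pool policies described above. This is an
# illustration of the selection behaviour only, not HammerDB's code.
import itertools
import random

def make_chooser(policy, connections):
    """Return a function yielding a connection name per transaction."""
    if policy == "first_named":
        return lambda: connections[0]
    if policy == "last_named":
        return lambda: connections[-1]
    if policy == "random":
        return lambda: random.choice(connections)
    if policy == "round_robin":
        cycle = itertools.cycle(connections)
        return lambda: next(cycle)
    raise ValueError(policy)

# With connections c1, c2, c3 for neworder and a round_robin policy the
# first transaction uses c1, the second c2, the third c3, the fourth c1.
choose = make_chooser("round_robin", ["c1", "c2", "c3"])
print([choose() for _ in range(4)])   # ['c1', 'c2', 'c3', 'c1']
```

As the text notes, read only stored procedures such as stocklevel and orderstatus could be given a policy that directs them to read only cluster nodes, while the write-heavy procedures use the primary connections.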
+
+ +
+ Advanced Driver Script Options + + This section includes advanced driver script options intended for + expert usage. These options can be used independently or simultaneously + for advanced testing scenarios. + +
Use All Warehouses for increased I/O + + By default each Virtual User will select one home warehouse at + random and keep that home warehouse for the duration of a run, meaning + the majority of its workload will take place on a single warehouse. + This means that when running for example 10 Virtual Users most of the + workload will take place on 10 warehouses regardless of whether 100, + 1000 or 10,000 are configured in the schema. Use All Warehouses is an + option that enables increased I/O to the database data area by + assigning all of the warehouses in the schema to the Virtual Users in + turn. The Virtual Users will then select a new warehouse for each + transaction. Consequently the schema size impacts the overall level of + performance, placing a greater emphasis on I/O. To select this option + check the Use All Warehouses check-box.
+ Use All Warehouses Option + + + + + + +
+ + On running the workload it can now be seen that the Virtual + Users will evenly assign the available warehouses between them. + +
+ Use All Warehouses + + + + + + +
The listing shows an example of a schema with 30 warehouses and + 3 Virtual Users. This approach is particularly applicable when testing + the I/O capabilities of a database. + + Vuser 1:Beginning rampup time of 2 minutes
+Vuser 2:VU 2 : Assigning WID=1 based on VU count 3, Warehouses = 30 (1 out of 10)
+Vuser 2:VU 2 : Assigning WID=4 based on VU count 3, Warehouses = 30 (2 out of 10)
+Vuser 2:VU 2 : Assigning WID=7 based on VU count 3, Warehouses = 30 (3 out of 10)
+Vuser 2:VU 2 : Assigning WID=10 based on VU count 3, Warehouses = 30 (4 out of 10)
+Vuser 2:VU 2 : Assigning WID=13 based on VU count 3, Warehouses = 30 (5 out of 10)
+Vuser 2:VU 2 : Assigning WID=16 based on VU count 3, Warehouses = 30 (6 out of 10)
+Vuser 2:VU 2 : Assigning WID=19 based on VU count 3, Warehouses = 30 (7 out of 10)
+Vuser 2:VU 2 : Assigning WID=22 based on VU count 3, Warehouses = 30 (8 out of 10)
+Vuser 2:VU 2 : Assigning WID=25 based on VU count 3, Warehouses = 30 (9 out of 10)
+Vuser 2:VU 2 : Assigning WID=28 based on VU count 3, Warehouses = 30 (10 out of 10)
+Vuser 2:Processing 1000000 transactions with output suppressed...
+Vuser 3:VU 3 : Assigning WID=2 based on VU count 3, Warehouses = 30 (1 out of 10) +Vuser 3:VU 3 : Assigning WID=5 based on VU count 3, Warehouses = 30 (2 out of 10) +Vuser 3:VU 3 : Assigning WID=8 based on VU count 3, Warehouses = 30 (3 out of 10) +Vuser 3:VU 3 : Assigning WID=11 based on VU count 3, Warehouses = 30 (4 out of 10) +Vuser 3:VU 3 : Assigning WID=14 based on VU count 3, Warehouses = 30 (5 out of 10) +Vuser 3:VU 3 : Assigning WID=17 based on VU count 3, Warehouses = 30 (6 out of 10) +Vuser 3:VU 3 : Assigning WID=20 based on VU count 3, Warehouses = 30 (7 out of 10) +Vuser 3:VU 3 : Assigning WID=23 based on VU count 3, Warehouses = 30 (8 out of 10) +Vuser 3:VU 3 : Assigning WID=26 based on VU count 3, Warehouses = 30 (9 out of 10) +Vuser 3:VU 3 : Assigning WID=29 based on VU count 3, Warehouses = 30 (10 out of 10) +Vuser 3:Processing 1000000 transactions with output suppressed... +Vuser 4:VU 4 : Assigning WID=3 based on VU count 3, Warehouses = 30 (1 out of 10) +Vuser 4:VU 4 : Assigning WID=6 based on VU count 3, Warehouses = 30 (2 out of 10) +Vuser 4:VU 4 : Assigning WID=9 based on VU count 3, Warehouses = 30 (3 out of 10) +Vuser 4:VU 4 : Assigning WID=12 based on VU count 3, Warehouses = 30 (4 out of 10) +Vuser 4:VU 4 : Assigning WID=15 based on VU count 3, Warehouses = 30 (5 out of 10) +Vuser 4:VU 4 : Assigning WID=18 based on VU count 3, Warehouses = 30 (6 out of 10) +Vuser 4:VU 4 : Assigning WID=21 based on VU count 3, Warehouses = 30 (7 out of 10) +Vuser 4:VU 4 : Assigning WID=24 based on VU count 3, Warehouses = 30 (8 out of 10) +Vuser 4:VU 4 : Assigning WID=27 based on VU count 3, Warehouses = 30 (9 out of 10) +Vuser 4:VU 4 : Assigning WID=30 based on VU count 3, Warehouses = 30 (10 out of 10) +
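The assignment pattern in the listing, where each active Virtual User takes every n-th warehouse starting from its own position, can be reproduced with a short sketch. assign_warehouses is a hypothetical helper written to mirror the log output, not a HammerDB function:

```python
# Sketch of the Use All Warehouses assignment seen in the listing:
# each active Virtual User takes every n-th warehouse starting from
# its own 1-based position (Vuser 1 is the monitor, so in the listing
# position 1 corresponds to Vuser 2). assign_warehouses is a
# hypothetical helper mirroring the log output, not HammerDB code.
def assign_warehouses(position, vu_count, warehouses):
    """WIDs for the active Virtual User at the given 1-based position."""
    return list(range(position, warehouses + 1, vu_count))

# 30 warehouses and 3 active Virtual Users, as in the listing:
for pos in (1, 2, 3):
    print(pos, assign_warehouses(pos, 3, 30))
```

Position 1 yields WIDs 1, 4, 7, ... 28, matching the Vuser 2 lines in the listing, with positions 2 and 3 matching Vuser 3 and Vuser 4 respectively.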
+ +
+ Time Profile for measuring Response Times + + In addition to performance profiles based on throughput you + should also take note of transaction response times. Whereas + performance profiles show the cumulative performance of all of the + virtual users running on the system, response times show performance + based on the experience of the individual user. When comparing systems + both throughput and response time are important comparative + measurements. HammerDB includes a time profiling package called etprof + that enables you to select an individual user and measure the response + times. This functionality is enabled by selecting Time Profile + checkbox in the driver options. When enabled the time profile will + show response time percentile values at 10 second intervals, reporting + the minimum, 50th percentile, 95th percentile, 99th percentile and + maximum for each of the procedures during that 10 second interval as + well as cumulative values for all of the test at the end of the test + run. The time profile values are recorded in microseconds. + + Hammerdb Log @ Fri Jul 05 09:55:26 BST 2019 ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- +Vuser 1:Beginning rampup time of 1 minutes +Vuser 2:Processing 1000000 transactions with output suppressed... +Vuser 3:Processing 1000000 transactions with output suppressed... +Vuser 4:Processing 1000000 transactions with output suppressed... +Vuser 5:Processing 1000000 transactions with output suppressed... 
+Vuser 2:|PERCENTILES 2019-07-05 09:55:46 to 2019-07-05 09:55:56 +Vuser 2:|neword|MIN-391|P50%-685|P95%-1286|P99%-3298|MAX-246555|SAMPLES-3603 +Vuser 2:|payment|MIN-314|P50%-574|P95%-1211|P99%-2253|MAX-89367|SAMPLES-3564 +Vuser 2:|delivery|MIN-1128|P50%-1784|P95%-2784|P99%-6960|MAX-267012|SAMPLES-356 +Vuser 2:|slev|MIN-723|P50%-884|P95%-1363|P99%-3766|MAX-120687|SAMPLES-343 +Vuser 2:|ostat|MIN-233|P50%-568|P95%-1325|P99%-2387|MAX-82538|SAMPLES-365 +Vuser 2:|gettimestamp|MIN-2|P50%-4|P95%-7|P99%-14|MAX-39|SAMPLES-7521 +Vuser 2:|prep_statement|MIN-188|P50%-209|P95%-1067|P99%-1067|MAX-1067|SAMPLES-6 +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ +... +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ +Vuser 2:|PERCENTILES 2019-07-05 09:59:26 to 2019-07-05 09:59:36 +Vuser 2:|neword|MIN-410|P50%-678|P95%-1314|P99%-4370|MAX-32030|SAMPLES-4084 +Vuser 2:|payment|MIN-331|P50%-583|P95%-1271|P99%-3152|MAX-43996|SAMPLES-4142 +Vuser 2:|delivery|MIN-1177|P50%-2132|P95%-3346|P99%-4040|MAX-8492|SAMPLES-416 +Vuser 2:|slev|MIN-684|P50%-880|P95%-1375|P99%-1950|MAX-230733|SAMPLES-364 +Vuser 2:|ostat|MIN-266|P50%-688.5|P95%-1292|P99%-1827|MAX-9790|SAMPLES-427 +Vuser 2:|gettimestamp|MIN-3|P50%-4|P95%-7|P99%-14|MAX-22|SAMPLES-8639 +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ +Vuser 2:|PERCENTILES 2019-07-05 09:59:36 to 2019-07-05 09:59:46 +Vuser 2:|neword|MIN-404|P50%-702|P95%-1296|P99%-4318|MAX-71663|SAMPLES-3804 +Vuser 2:|payment|MIN-331|P50%-597|P95%-1250|P99%-4190|MAX-47539|SAMPLES-3879 +Vuser 2:|delivery|MIN-1306|P50%-2131|P95%-4013|P99%-8742|MAX-25095|SAMPLES-398 +Vuser 2:|slev|MIN-713|P50%-913|P95%-1438|P99%-2043|MAX-7434|SAMPLES-386 +Vuser 2:|ostat|MIN-268|P50%-703|P95%-1414|P99%-3381|MAX-249963|SAMPLES-416 +Vuser 2:|gettimestamp|MIN-3|P50%-4|P95%-8|P99%-16|MAX-27|SAMPLES-8079 +Vuser 
2:+-----------------+--------------+------+--------+--------------+--------------+ +Vuser 1:3 ..., +Vuser 1:Test complete, Taking end Transaction Count. +Vuser 1:4 Active Virtual Users configured +Vuser 1:TEST RESULT : System achieved 468610 SQL Server TPM at 101789 NOPM +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ +Vuser 2:|PROCNAME | EXCLUSIVETOT| %| CALLNUM| AVGPERCALL| CUMULTOT| +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ +Vuser 2:|neword | 82051665|39.96%| 93933| 873| 88760245| +Vuser 2:|payment | 73823956|35.95%| 93922| 786| 80531339| +Vuser 2:|delivery | 22725292|11.07%| 9577| 2372| 23418195| +Vuser 2:|slev | 14396765| 7.01%| 9340| 1541| 14402033| +Vuser 2:|ostat | 10202116| 4.97%| 9412| 1083| 10207260| +Vuser 2:|gettimestamp | 2149552| 1.05%| 197432| 10| 13436919| +Vuser 2:|TOPLEVEL | 2431| 0.00%| 1| 2431| NOT AVAILABLE| +Vuser 2:|prep_statement | 1935| 0.00%| 5| 387| 1936| +Vuser 2:+-----------------+--------------+------+--------+--------------+--------------+ + + After capturing the response time the script below can be run at + the command line and provided with a logfile with the data for one run + only. Note that it is important that you only provide a logfile for + one run of a HammerDB benchmark to convert, otherwise all of the data + will be combined from multiple runs. When run on a logfile with data + such as shown above this will output the data in tab delimited format + that can be interpreted by a spreadsheet. 
+
+ #!/bin/tclsh
+ set filename [lindex $argv 0]
+ set fp [open "$filename" r]
+ set file_data [ read $fp ]
+ set data [split $file_data "\n"]
+ foreach line $data {
+ if {[ string match *PERCENTILES* $line ]} {
+ set timeval "[ lindex [ split $line ] 3 ]"
+ append xaxis "$timeval\t"
+ }
+ }
+ puts "TIME INTERVALS"
+ puts "\t$xaxis"
+ foreach storedproc {neword payment delivery slev ostat} {
+ puts [ string toupper $storedproc ]
+ foreach line $data {
+ if {[ string match *PROCNAME* $line ]} { break }
+ if {[ string match *$storedproc* $line ]} {
+ regexp {MIN-[0-9.]+} $line min
+ regsub {MIN-} $min "" min
+ append minlist "$min\t"
+ regexp {P50%-[0-9.]+} $line p50
+ regsub {P50%-} $p50 "" p50
+ append p50list "$p50\t"
+ regexp {P95%-[0-9.]+} $line p95
+ regsub {P95%-} $p95 "" p95
+ append p95list "$p95\t"
+ regexp {P99%-[0-9.]+} $line p99
+ regsub {P99%-} $p99 "" p99
+ append p99list "$p99\t"
+ regexp {MAX-[0-9.]+} $line max
+ regsub {MAX-} $max "" max
+ append maxlist "$max\t"
+ }
+ }
+ puts -nonewline "MIN\t"
+ puts $minlist
+ unset -nocomplain minlist
+ puts -nonewline "P50\t"
+ puts $p50list
+ unset -nocomplain p50list
+ puts -nonewline "P95\t"
+ puts $p95list
+ unset -nocomplain p95list
+ puts -nonewline "P99\t"
+ puts $p99list
+ unset -nocomplain p99list
+ puts -nonewline "MAX\t"
+ puts $maxlist
+ unset -nocomplain maxlist
+ }
+ close $fp
+
+ Pass the name of the logfile for the run where response times
+ were captured and redirect the output to a file with a spreadsheet
+ extension name. Note that it is important to output the data to a
+ file rather than to a terminal, and then cut and paste the data into
+ a spreadsheet. If output to a terminal it may format the output by
+ removing the tab characters which are essential to the formatting.
+
+ $ ./extracttp.tcl pgtp.log > pgtp.txt
+
+ With Excel 2013 and above you can give this file a .xls
+ extension and open it. 
If you do it will give the following warning, + however if you click OK it will open with the correctly formatted + data. + +
+ Excel Warning + + + + + + +
+ + Alternatively if we open the file with the .txt extension it + will show 3 steps for the Text Import Wizard. Click through the Wizard + until Finish, After clicking Finish the data has been imported into + the spreadsheet without warnings. Highlight the rows you want to graph + by clicking on the row numbers. + +
+ Highlighted Rows + + + + + + +
+ + Click on Insert and Recommended Charts, the default graph + produced by Excel is shown below with the addition of a vertical axis + title and a chart header. When saving the spreadsheet it is saved in + Excel format rather than the imported Tab (Text Delimited). + +
+ Response Time Graph + + + + + + +
+
+ +
+
+ Event Driven Scaling for Keying and Thinking Times
+
+ Event driven scaling enables virtual users to scale to
+ thousands of sessions running with keying and thinking time enabled,
+ adding the ability to test scenarios with large numbers of
+ connections or with connection pooling. When running transactional
+ workloads with HammerDB the default mode is CPU intensive, meaning
+ that one virtual user will run as many transactions as possible
+ without keying and thinking time enabled. When keying and thinking
+ time is enabled there is a long time delay both before and after
+ running a transaction, meaning that each Virtual User will spend most
+ of its time idle; however creating a very large number of Virtual
+ Users consumes significant resources on the load test generation
+ server. Consequently event driven scaling enables each Virtual User
+ to create multiple database sessions and manage the keying and
+ thinking time for each asynchronously in an event-driven loop,
+ enabling HammerDB to create a much larger session count within an
+ existing Virtual User footprint. Note that this feature is only
+ designed to work with keying and thinking time enabled, as it is only
+ the keying and thinking time that is managed asynchronously.
+
+ To configure this feature select Asynchronous Scaling, noting
+ that Keying and Thinking Time is automatically selected. Select a
+ number of Asynch Clients per Virtual User and set the Asynch Login
+ Delay in milliseconds. This Login Delay means that each client will
+ wait for this time after the previous client has logged in before
+ logging in itself. For detailed output select Asynchronous Verbose. 
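The asynchronous model can be sketched conceptually as follows (a hypothetical Python asyncio illustration, not HammerDB's own Tcl event loop; all names are invented for the example): each client coroutine awaits its keying and thinking time cooperatively, so a single thread, like a single Virtual User, can service many mostly idle sessions.

```python
import asyncio
import random

# Hypothetical sketch only: one coroutine per database session. Keying and
# thinking delays are awaited asynchronously, so a single thread (like one
# HammerDB Virtual User) can interleave many mostly-idle sessions.
async def client(client_id, results):
    for _ in range(2):                    # two transactions per client
        await asyncio.sleep(0.01)         # keying time (scaled down for the demo)
        tx = random.choice(["neword", "payment"])
        results.append((client_id, tx))   # stand-in for executing the transaction
        await asyncio.sleep(0.02)         # thinking time (scaled down for the demo)

async def virtual_user(n_clients):
    results = []
    tasks = []
    for cid in range(n_clients):          # stagger logins, like the Asynch Login Delay
        tasks.append(asyncio.create_task(client(cid, results)))
        await asyncio.sleep(0.001)
    await asyncio.gather(*tasks)
    return results

results = asyncio.run(virtual_user(50))
print(len(results))                       # prints 100: 2 transactions x 50 clients
```

The delays here stand in for the multi-second keying and thinking times of the real workload; the point is only that the idle time, not the transaction itself, is what is managed asynchronously.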
+ Note that with this feature it is important to allow the clients + enough time to both login fully before measuring performance and also + at the end it will take additional time for the clients to all + complete their current keying and thinking time and to exit before the + Virtual User reports all clients as complete. + +
+ Asynchronous Options + + + + + + +
+ + When all Virtual Users have logged in (example from SQL Server) + the session count will show as the number of Virtual Users multiplied + by the Asynchronous Clients. + + SELECT DB_NAME(dbid) as DBName, COUNT(dbid) as NumberOfConnections FROM sys.sysprocesses WHERE dbid > 0 GROUP BY dbid; + +
+ Session Count + + + + + + +
+ + As each Asynchronous Client logs in it will be reported in the + Virtual User output. + +
+ Logging In Asynchronous Clients + + + + + + +
+ + When the workload is running with Asynchronous Verbose enabled + HammerDB will report the events as they happen. + +
+ Asynchronous Workload Running + + + + + + +
+ + With logging enabled and Asynchronous Verbose HammerDB will + report the events as they happen for each Virtual User such as when + they enter keying or thinking time and when they process a + transaction. + + Vuser 6:keytime:payment:vuser6:ac9:3 secs +Vuser 7:keytime:payment:vuser7:ac92:3 secs +Vuser 7:thinktime:delivery:vuser7:ac77:3 secs +Vuser 3:keytime:payment:vuser3:ac30:3 secs +Vuser 9:keytime:delivery:vuser9:ac49:2 secs +Vuser 7:vuser7:ac77:w_id:21:payment +Vuser 9:keytime:neword:vuser9:ac64:18 secs +Vuser 3:thinktime:neword:vuser3:ac72:15 secs +Vuser 3:vuser3:ac72:w_id:4:payment +Vuser 3:keytime:neword:vuser3:ac52:18 secs +Vuser 7:thinktime:neword:vuser7:ac43:8 secs +Vuser 7:vuser7:ac43:w_id:6:payment +Vuser 7:keytime:ostat:vuser7:ac9:2 secs +Vuser 3:keytime:payment:vuser3:ac9:3 secs +Vuser 3:thinktime:payment:vuser3:ac97:7 secs +Vuser 11:keytime:payment:vuser11:ac42:3 secs +Vuser 5:keytime:neword:vuser5:ac42:18 secs +Vuser 9:thinktime:ostat:vuser9:ac71:3 secs +Vuser 3:vuser3:ac97:w_id:24:payment +Vuser 9:vuser9:ac71:w_id:9:delivery +Vuser 9:keytime:delivery:vuser9:ac69:2 secs +Vuser 5:keytime:delivery:vuser5:ac19:2 secs +Vuser 11:thinktime:neword:vuser11:ac53:13 secs +Vuser 11:vuser11:ac53:w_id:8:neword +Vuser 9:keytime:delivery:vuser9:ac2:2 secs +Vuser 7:thinktime:neword:vuser7:ac81:12 secs +Vuser 3:keytime:neword:vuser3:ac47:18 secs +Vuser 7:vuser7:ac81:w_id:5:payment +Vuser 3:keytime:payment:vuser3:ac81:3 secs +Vuser 7:keytime:slev:vuser7:ac46:2 secs +Vuser 11:thinktime:payment:vuser11:ac65:2 secs +Vuser 11:vuser11:ac65:w_id:21:slev +Vuser 9:keytime:neword:vuser9:ac86:18 secs +Vuser 11:thinktime:payment:vuser11:ac20:1 secs +Vuser 7:thinktime:neword:vuser7:ac76:9 secs +Vuser 11:vuser11:ac20:w_id:6:payment +Vuser 7:vuser7:ac76:w_id:1:payment +Vuser 11:keytime:delivery:vuser11:ac79:2 secs +Vuser 9:thinktime:neword:vuser9:ac57:15 secs +Vuser 11:thinktime:payment:vuser11:ac30:14 secs +Vuser 9:vuser9:ac57:w_id:3:ostat +Vuser 11:vuser11:ac30:w_id:5:neword 
+Vuser 9:keytime:payment:vuser9:ac3:3 secs +Vuser 11:keytime:payment:vuser11:ac62:3 secs +Vuser 3:keytime:payment:vuser3:ac35:3 secs +Vuser 7:keytime:neword:vuser7:ac88:18 secs +Vuser 11:keytime:payment:vuser11:ac96:3 secs +Vuser 11:thinktime:payment:vuser11:ac47:8 secs +Vuser 11:vuser11:ac47:w_id:4:neword +Vuser 3:thinktime:payment:vuser3:ac24:21 secs +Vuser 5:keytime:neword:vuser5:ac37:18 secs +Vuser 7:keytime:payment:vuser7:ac16:3 secs +Vuser 11:keytime:payment:vuser11:ac88:3 secs +Vuser 3:vuser3:ac24:w_id:16:neword +Vuser 11:thinktime:slev:vuser11:ac25:6 secs +Vuser 11:vuser11:ac25:w_id:3:payment +Vuser 5:thinktime:payment:vuser5:ac40:2 secs +Vuser 5:vuser5:ac40:w_id:26:neword +Vuser 5:thinktime:neword:vuser5:ac63:7 secs +Vuser 5:vuser5:ac63:w_id:10:payment + + One particular advantage of this type of workload is to be able + to run a fixed throughput test defined by the number of Virtual + Users. + +
+ Steady State + + + + + + +
+ + On completion of the workloads the Monitor Virtual User will + report the number of Active Sessions and the performance achieved. The + active Virtual Users will report when all of the asynchronous clients + have completed their workloads and logged off. + +
+ Asynchronous Workload Complete + + + + + + +
+
+ The event driven scaling feature is not intended to replace the
+ default CPU intensive mode of testing and it is expected that this
+ will continue to be the most popular methodology. Instead the ability
+ to scale up client sessions with keying and thinking time adds
+ additional test scenarios for highly scalable systems and in
+ particular provides an effective methodology for testing middle tier
+ or proxy systems.
+
+ +
+
+ XML Connect Pool for Cluster Testing
+
+ The XML Connect Pool is intended for simultaneously testing
+ multiple related instances of a clustered database. It enables each
+ Virtual User to open a pool of connections (note that each virtual
+ user or asynchronous client will open and hold all of the defined
+ connections) and to direct individual transactions to run on a
+ specific instance according to a pre-defined policy. With this
+ approach it is possible, for example, to direct the read-write
+ transactions to the primary instance of a cluster whilst directing
+ the read-only transactions to the secondary.
+
+ Connect Pooling + + + + + + +
Note that for testing or evaluation of this feature it is
+ also possible to direct one HammerDB client to test multiple separate
+ instances at the same time provided that the instances have exactly
+ the same warehouse count as shown in the example below. However for a
+ valid and comparable test, consistency should be ensured between the
+ database instances. For example, directing transactions against any
+ instance in an Oracle RAC configuration would be valid, as would
+ running the read-only transactions against a secondary read-only
+ instance in a cluster; running against separate unrelated instances
+ is possible for testing but not comparable for performance results.
+ The monitor virtual user will continue to connect to the instance
+ defined in the driver options and report NOPM and TPM from this
+ standalone connection only; the reliance is therefore on the database
+ to accurately report cluster wide transactions and on the instances
+ to have the same warehouse count. Nevertheless when using XML connect
+ pooling a client side transaction count will also be reported to
+ provide detailed transaction data from all Virtual
+ Users.
+
+ The configuration is defined in the database specific XML file
+ in the config/connpool directory. It is recommended to make a backup
+ of the file before it is modified. The XML configuration file is in
+ two sections, connections and sprocs. In the connections section the
+ XML configuration file should be modified according to the cluster
+ database names, with each connection defined by the tags c1, c2, c3
+ respectively. There is no restriction on the number of connections
+ that you define. The sprocs section of the XML configuration file
+ defines which stored procedures will use which connections and what
+ policy is to be used. The policy can be first_named, last_named,
+ random or round_robin. For example, with connections c1, c2 and c3
+ for neworder and a policy of round_robin, the first neworder
+ transaction would execute against connection c1, the second c2, the
+ third c3, the fourth c1 and so on. For all databases and all stored
+ procedures prepared statements are used, meaning that a statement is
+ prepared for each connection for each virtual user and a reference to
+ that prepared statement is kept for execution.
+
+ For further information on the connections opened there is a
+ commented information line in the driver script such as #puts
+ "sproc_cur:$st connections:[ set $cslist ] cursors:[set $cursor_list]
+ number of cursors:[set $len] execs:[set $cnt]" prior to the opening of
+ the standalone connection that may be uncommented for more detail when
+ the script is run. 
+ + <connpool> +<connections> + <c1> + <mssqls_server>(local)\SQLDEVELOP</mssqls_server> + <mssqls_linux_server>host1</mssqls_linux_server> + <mssqls_tcp>false</mssqls_tcp> + <mssqls_port>1433</mssqls_port> + <mssqls_azure>false</mssqls_azure> + <mssqls_authentication>windows</mssqls_authentication> + <mssqls_linux_authent>sql</mssqls_linux_authent> +<mssqls_odbc_driver>ODBC Driver 17 for SQL Server</mssqls_odbc_driver> +<mssqls_linux_odbc>ODBC Driver 17 for SQL Server</mssqls_linux_odbc> + <mssqls_uid>sa</mssqls_uid> + <mssqls_pass>admin</mssqls_pass> +<mssqls_dbase>tpcc1</mssqls_dbase> + </c1> + <c2> + <mssqls_server>(local)\SQLDEVELOP</mssqls_server> + <mssqls_linux_server>host2</mssqls_linux_server> + <mssqls_tcp>false</mssqls_tcp> + <mssqls_port>1433</mssqls_port> + <mssqls_azure>false</mssqls_azure> + <mssqls_authentication>windows</mssqls_authentication> + <mssqls_linux_authent>sql</mssqls_linux_authent> +<mssqls_odbc_driver>ODBC Driver 17 for SQL Server</mssqls_odbc_driver> +<mssqls_linux_odbc>ODBC Driver 17 for SQL Server</mssqls_linux_odbc> + <mssqls_uid>sa</mssqls_uid> + <mssqls_pass>admin</mssqls_pass> +<mssqls_dbase>tpcc2</mssqls_dbase> + </c2> + <c3> + <mssqls_server>(local)\SQLDEVELOP</mssqls_server> + <mssqls_linux_server>host3</mssqls_linux_server> + <mssqls_tcp>false</mssqls_tcp> + <mssqls_port>1433</mssqls_port> + <mssqls_azure>false</mssqls_azure> + <mssqls_authentication>windows</mssqls_authentication> + <mssqls_linux_authent>sql</mssqls_linux_authent> +<mssqls_odbc_driver>ODBC Driver 17 for SQL Server</mssqls_odbc_driver> +<mssqls_linux_odbc>ODBC Driver 17 for SQL Server</mssqls_linux_odbc> + <mssqls_uid>sa</mssqls_uid> + <mssqls_pass>admin</mssqls_pass> +<mssqls_dbase>tpcc3</mssqls_dbase> + </c3> +</connections> +<sprocs> + <neworder> +<connections>c1 c2 c3</connections> + <policy>round_robin</policy> +</neworder> + <payment> +<connections>c1 c2</connections> + <policy>first_named</policy> +</payment> + <delivery> +<connections>c2 
c3</connections> + <policy>last_named</policy> +</delivery> + <stocklevel> +<connections>c1 c2 c3</connections> + <policy>random</policy> +</stocklevel> + <orderstatus> +<connections>c2 c3</connections> + <policy>round_robin</policy> +</orderstatus> +</sprocs> +</connpool> + + + After modifying the XML configuration file select XML Connect + Pool in the Driver Options to activate this feature. + +
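The four connection policies can be illustrated with a short sketch (hypothetical Python, not HammerDB's Tcl implementation): given the connection list defined for a stored procedure, each policy determines which connection the next execution uses, so with the round_robin neworder definition above the first four executions go to c1, c2, c3 and back to c1.

```python
import random
from itertools import cycle

# Hypothetical sketch of the four connect pool policies; HammerDB's own
# implementation is in Tcl. Each stored procedure keeps its own chooser,
# so round_robin positions advance independently per procedure.
def make_chooser(policy, connections):
    if policy == "first_named":
        return lambda: connections[0]
    if policy == "last_named":
        return lambda: connections[-1]
    if policy == "random":
        return lambda: random.choice(connections)
    if policy == "round_robin":
        it = cycle(connections)
        return lambda: next(it)
    raise ValueError(f"unknown policy: {policy}")

neworder = make_chooser("round_robin", ["c1", "c2", "c3"])
payment = make_chooser("first_named", ["c1", "c2"])

print([neworder() for _ in range(4)])   # prints ['c1', 'c2', 'c3', 'c1']
print(payment())                        # prints c1
```

In the real driver script the chooser would return the prepared statement handle held for that connection rather than a name.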
+ XML Connect Pool + + + + + + +
+ + For this example the additional information for the comments is + also added to illustrate the connections made. + +
+ Connections Comment + + + + + + +
+ + When the Virtual Users are run the logfile shows that + connections are made for the active Virtual Users according to the + connections and policies defined in the XML configuration file. Also + prepared statements are created and held in a pool for execution + against the defined policy. Also note that the standalone connection + "tpcc1" is also made to monitor the transaction rates and define the + warehouse count for the run. + + Vuser 2:sproc_cur:neword_st connections:{odbcc1 odbcc2 odbcc3} cursors:::oo::Obj23::Stmt::3 ::oo::Obj28::Stmt::3 ::oo::Obj33::Stmt::3 number of cursors:3 execs:0 +Vuser 2:sproc_cur:payment_st connections:{odbcc1 odbcc2} cursors:::oo::Obj23::Stmt::4 ::oo::Obj28::Stmt::4 number of cursors:2 execs:0 +Vuser 2:sproc_cur:ostat_st connections:{odbcc2 odbcc3} cursors:::oo::Obj28::Stmt::5 ::oo::Obj33::Stmt::4 number of cursors:2 execs:0 +Vuser 2:sproc_cur:delivery_st connections:{odbcc1 odbcc2 odbcc3} cursors:::oo::Obj23::Stmt::5 ::oo::Obj28::Stmt::6 ::oo::Obj33::Stmt::5 number of cursors:3 execs:0 +Vuser 2:sproc_cur:slev_st connections:{odbcc2 odbcc3} cursors:::oo::Obj28::Stmt::7 ::oo::Obj33::Stmt::6 number of cursors:2 execs:0 +Vuser 3:sproc_cur:neword_st connections:{odbcc1 odbcc2 odbcc3} cursors:::oo::Obj23::Stmt::3 ::oo::Obj28::Stmt::3 ::oo::Obj33::Stmt::3 number of cursors:3 execs:0 +Vuser 3:sproc_cur:payment_st connections:{odbcc1 odbcc2} cursors:::oo::Obj23::Stmt::4 ::oo::Obj28::Stmt::4 number of cursors:2 execs:0 +Vuser 3:sproc_cur:ostat_st connections:{odbcc2 odbcc3} cursors:::oo::Obj28::Stmt::5 ::oo::Obj33::Stmt::4 number of cursors:2 execs:0 +Vuser 3:sproc_cur:delivery_st connections:{odbcc1 odbcc2 odbcc3} cursors:::oo::Obj23::Stmt::5 ::oo::Obj28::Stmt::6 ::oo::Obj33::Stmt::5 number of cursors:3 execs:0 +Vuser 3:sproc_cur:slev_st connections:{odbcc2 odbcc3} cursors:::oo::Obj28::Stmt::7 ::oo::Obj33::Stmt::6 number of cursors:2 execs:0 + + On completion of the run the NOPM and TPM is recorded. 
This is
+ the area where it is of particular importance to be aware of the
+ database and cluster configuration for the results to be consistent.
+ It is therefore valid to reiterate that if the cluster and standalone
+ connection does not record all of the transactions in the cluster then
+ the NOPM results will only be returned for the standalone connection.
+ By way of example, in the test configuration shown there are 3
+ separate databases and the standalone connection is made to tpcc1.
+ Therefore the test results show the NOPM value at approximately one
+ third of the ratio expected against the TPM value that records all of
+ the transactions against the SQL Server. For this reason the CLIENT
+ SIDE TPM is also shown. In this example the neworder value per minute
+ is 78319, a close equivalent to 3 x 26207, and therefore gives an
+ indication of the NOPM value for multiple instances in a non-cluster
+ configuration. In this case 3 connections were made to tpcc1, tpcc2
+ and tpcc3 and the connections were chosen to round robin between
+ them, therefore the actual NOPM is 3X that recorded from just the
+ standalone connection. In a correctly configured cluster environment
+ it would be the same and the results would be both consistent and
+ valid. Be aware that these client side values are recorded during both
+ rampup and timed test periods and therefore may not accurately reflect
+ the results from a valid timed test.
+
+ Vuser 1:2 Active Virtual Users configured
+Vuser 1:TEST RESULT : System achieved 26207 NOPM from 180515 SQL Server TPM
+Vuser 1:CLIENT SIDE TPM : neworder 78319 payment 78185 delivery 7855 stocklevel 7826 orderstatus 7809
+
- This option should be selected in conjunction with having
- enabled output to the logfile. When selected client side time
- profiling will be conducted for the first active virtual user
- and output written to the logfile. 
- + In addition to the CLIENT SIDE TPM each Virtual User will also + report the total number of transactions that it processed from the + time that it started running to the end of the test. - - Mode + Vuser 2:VU2 processed neworder 275335 payment 273822 delivery 27495 stocklevel 27588 orderstatus 27568 transactions +Vuser 3:VU3 processed neworder 272901 payment 273475 delivery 27493 stocklevel 27194 orderstatus 27097 transactions + - The mode value is taken from the operational mode setting - set under the Mode Options menu tab under the Mode menu. If set - to Local or Master then the monitor thread takes snapshots, if - set to Slave no snapshots are taken. This is useful if multiple - instances of HammerDB are running in Master and Slave mode to - ensure that only one instance takes the snapshots. - - - - + The XML Connect Pool feature provides advanced features for the + expert user to test clusters and multiple instances simultaneously, it + also gives the user a high degree of control on how this is used, + therefore it is at the users discretion to use these settings + appropriately to ensure consistent results. +
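The NOPM ratio described above can be checked with simple arithmetic, using the figures from the example output (an illustrative sketch only): with three databases behind a round_robin policy the standalone tpcc1 connection sees roughly one third of the cluster-wide new orders.

```python
# Figures taken from the example run above (standalone connection to tpcc1)
nopm_standalone = 26207      # NOPM reported from the standalone connection
client_neworder_tpm = 78319  # CLIENT SIDE TPM for neworder across all instances
instances = 3                # tpcc1, tpcc2 and tpcc3 with a round_robin policy

estimated_total_nopm = nopm_standalone * instances
print(estimated_total_nopm)  # prints 78621, close to the client side figure
deviation = abs(estimated_total_nopm - client_neworder_tpm) / client_neworder_tpm
print(f"{deviation:.2%}")    # prints 0.39%
```

In a genuine cluster where the database reports transactions cluster-wide, no such scaling would be needed and NOPM would already reflect the whole cluster.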
- Additional Driver Script Options for Server Side Reports: Oracle, - Db2 and EnterpriseDB PostgreSQL + Additional Driver Script Options for Stored Procedures and Server + Side Reports: PostgreSQL, MySQL, Oracle, Db2 and EnterpriseDB + PostgreSQL + +
+
+ PostgreSQL Stored Procedures
+
+ With PostgreSQL by default the 5 TPROC-C transactions are
+ implemented using PostgreSQL functions. From PostgreSQL v11.0 there is
+ the option to use PostgreSQL stored procedures instead. However
+ prepared statements are supported by PostgreSQL only for functions
+ and not for stored procedures, and therefore if using the XML connect
+ pool feature only PostgreSQL functions are supported.
+ +
+
+ MySQL Prepared Statements
+
+ With MySQL there is the option to use server side prepared
+ statements. This option is mandatory if using the XML connect pool
+ feature.
+
Oracle AWR Reports @@ -3277,7 +4664,7 @@ edb=#
Loading the Driver Script - After selected the Driver Script Options the Driver Script is + After selecting the Driver Script Options the Driver Script is loaded. The configured options can be seen in the Driver Script window and also modified directly there. The Load option can also be used to refresh the script to the configured Options. @@ -3357,9 +4744,9 @@ edb=# Repeat Delay(ms) is the time that each Virtual User will wait before running its next Iteration of the Driver - Script. For the TPC-C workload this should be considered as an - 'outer loop' to the 'inner loop' of the Total Transactions per - User in the TPC-C Driver Script. + Script. For the TPROC-C workload this should be considered as + an 'outer loop' to the 'inner loop' of the Total Transactions + per User in the TPROC-C Driver Script. @@ -3373,7 +4760,7 @@ edb=# Show Output Show Output will report Virtual User Output to the - Virtual User Output Window, For TPC-C tests this should be + Virtual User Output Window, For TPROC-C tests this should be enabled. @@ -3443,9 +4830,10 @@ edb=# When complete the Monitor Virtual User will report the Test - Result, if logging has been selected these values will also be reported - in the log. Where supported additional database side server report - information will also be reported. + Result, refer to Chapter for the configuration of how the NOPM and TPM + values are reported. If logging has been selected these values will also + be reported in the log. Where supported additional database side server + report information will also be reported.
Virtual Users Complete @@ -3466,7 +4854,7 @@ edb=# - Autopilot + Autopilot for Automated Testing To automate this process of repeated tests HammerDB provides the autopilot feature that enables you to configure a single test to be @@ -3544,16 +4932,16 @@ edb=# stop the test and create the next Virtual Users in the sequence. You should configure this value in relation to the Minutes for Ramup Time and Minutes for Test Duration given in the Timed Test - options shown in Figure 31. For example if the values in the - test script are 2 and 5 minutes respectively then 10 minutes for - the Autopilot Options is a good value to allow the test to - complete before the next test in the sequence is run. If the - test overruns the time interval and the Virtual Users are still - running the sequence will wait for the Virtual Users to complete - before proceeding however note any pending output will be - discarded and therefore for example if the TPM and NOPM values - have not been reported by the time the test is stopped they will - not be reported at all. + options. For example if the values in the test script are 2 and + 5 minutes respectively then 10 minutes for the Autopilot Options + is a good value to allow the test to complete before the next + test in the sequence is run. If the test overruns the time + interval and the Virtual Users are still running the sequence + will wait for the Virtual Users to complete before proceeding + however note any pending output will be discarded and therefore + for example if the TPM and NOPM values have not been reported by + the time the test is stopped they will not be reported at + all. @@ -3658,24 +5046,24 @@ edb=#
- Extending Autopilot + Extending Autopilot to start automatically Autopilot can be started automatically by adding the keyword “auto” followed by the name of a script to run, this script must end in the extension .tcl. - ./hammerdb.tcl auto -Usage: hammerdb.tcl [ auto [ script_to_autoload.tcl ] ] + ./hammerdb auto +Usage: hammerdb [ auto [ script_to_autoload.tcl ] ] For example - ./hammerdb.tcl auto newtpccscript.tcl + ./hammerdb auto newtpccscript.tcl On doing so HammerDB will now load the script newtpccscript.tcl at startup and immediately enter the autopilot sequence defined in - config.xml. Upon completion HammerDB will exit. As detailed in the post - linked above this functionality enables the potential to run workloads + config.xml. Upon completion HammerDB will exit. This functionality + enables the potential to run scripted workloads with the HammerDB GUI such as the following with multiple sequences of autopilot interspersed with a database refresh. @@ -3698,7 +5086,7 @@ do echo "Running tests for series: $s" sed -i "s/<autopilot_sequence>.*<\/autopilot_sequence>/<autopilot_sequence>${s}<\/autopilot_sequence>/" $CONFIGFILE - (cd /usr/local/hammerDB/ && ./hammerdb.tcl auto TPCC.postgres.tcl) + (cd /usr/local/hammerDB/ && ./hammerdb auto TPCC.postgres.tcl) echo "Reloading data" ssh postgres@postgres '/var/lib/pgsql/reloadData.sh' @@ -3717,10 +5105,10 @@ done as opposed to the NOPM value as TPM is selected from a database in-memory table and therefore sampling does not impact the test being measured. NOPM on the other hand is sampled from the schema itself and is therefore only - measured at the start and end of the test to minimize the impact. To - configure the Transaction Counter select the Transactions tree-view. If - Virtual Users are running the Transaction Counter Options can be selected - from the menu. + measured at the start and end of the test to minimize the impact of + testing upon performance. 
To configure the Transaction Counter select the + Transactions tree-view. If Virtual Users are running the Transaction + Counter Options can be selected from the menu.
Transaction Counter Options @@ -3860,28 +5248,6 @@ done select sum(xact_commit + xact_rollback) from pg_stat_database
-
- Redis Transaction Counter - - For Redis the connection parameters are the same as the schema - options. The refresh rate determines the sampling interval. - -
- Redis TX Counter Options - - - - - - -
- - the info command is used to sample the transaction rate extracting - the value total_commands_processed. - - info (total_commands_processed) -
-
Running the Transaction Counter @@ -3930,9 +5296,10 @@ done While active the Transaction Counter Window can be dragged out of - the main HammerDB display to be displayed in an standalone window. To - return to the main display close the window and it will be - re-embedded. + the main HammerDB display to be displayed in an standalone window by + selecting and dragging the notebook tab. To return to the main display + close the window and it will be re-embedded in the main + interface.
Transaction Counter standalone. @@ -3970,8 +5337,8 @@ sysstat version 11.5.7 agent directory. $./agent -Initializing HammerDB Metric Agent 3.0 -HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit) +Initializing HammerDB Metric Agent 4.0 +HammerDB Metric Agent active @ id 20376 hostname CRANE (Ctrl-C to Exit) On Windows double-click on agent.bat in the agent directory. @@ -3986,9 +5353,11 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit)
- On Windows you may see the following security alert as the agent - will open a port for communication, access needs to be permitted to - enable communication. + On both Windows and Linux your Firewall configuration should + permit communication between the hosts where the agent and the display + are running, for example on Windows you may see the following security + alert as the agent will open a port for communication, access needs to + be permitted to enable communication.
Security Alert @@ -4087,7 +5456,16 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit) + evenly distributed across all cores. A typical example is where all of + the network interrupt handling is done on the first core, this will be + evident from the HammerDB CPU metrics showing the first core at 100% + system utilisation. + + The agent to display configuration is compatible to run + interchangeably between Linux and Windows with both the agent and + display on either of the operating systems. Additionally the agent may + be run to display the CPU metrics whilst the load is run from the + command line or another system.
Metrics running @@ -4122,7 +5500,8 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit)Oracle Database Metrics When the Oracle Database is selected on both Windows and Linux an - additional option is available to connect to the Oracle Database. + additional option is available to connect to the Oracle Database and + display detailed performance metrics.
Oracle Metrics Options @@ -4136,9 +5515,10 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit)When the metrics button is pressed HammerDB connects to the database and displays graphical information from the Active Session - History detailing wait events. In the example below the graph shows that - at the beginning of a workload the top wait event was User IO, followed - by CPU activity however a significant wait event on Commit. + History detailing wait events. By default in embedded mode the Oracle + Database Metrics will display the Active Session History Graph. For + detailed Oracle Database Metrics the Notebook tab should be dragged out + and expanded to display in a separate window.
Oracle Metrics Display Linux @@ -4150,19 +5530,46 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit)
- As the graph extracts information from the Active Session History, - it is possible to select a section from the window and display the wait - events related to that period of time. The buttons enable the viewing of - SQL text, the explain plan, IO statistics and SQL statistics. Note that - the CPU metrics functionality as previously described is available under - the CPU button. + When displayed in a separate window, it is possible to make a + selection from the window and display the wait events related to that + period of time. When the SQL_ID is selected the buttons then enable the + detailed viewing of SQL text, the explain plan, IO statistics and SQL + statistics related to that SQL. + +
+ Oracle Metrics Display Windows + + + + + + +
+ + When an event is selected the analysis shows details related to + that particular event. + +
+ Oracle Metrics Event + + + + + + +
+ + The CPU Metrics button displays the current standard HammerDB CPU + Metrics display in an embedded Window and requires the agent running on + the database server. The CPU metrics are not recorded as historical data + relating to the Active Session History.
- Oracle Metrics Display Windows + Oracle Database CPU Metrics - +
@@ -4170,104 +5577,281 @@ HammerDB Metric Agent active @ id 13029 hostname CRANE (Ctrl-C to Exit) - Command Line Interface (CLI) + Remote Primary and Replica Modes - HammerDB can be run from the command line without a graphical - interface. It is recommend that new users become familiar with using the - graphical interface before using the command line as the command line - offers the same workflow and therefore once the graphical interface is - understood learning the command line will be more straightforward. The CLI - implements equivalent readline functionality for navigation. The CLI can - be used in conjunction with scripting to build a powerful automated - environment. + HammerDB allows for multiple instances of the HammerDB program to + run in Primary and Replica modes. Running with multiple modes enables the + additional instances to be controlled by a single Primary instance either + on the same load testing server or across the network. This functionality + can be particularly applicable when testing virtualized environments where + the desire is to test multiple databases running in virtualized guests at + the same time. Similarly this functionality is useful for clustered + databases with multiple instances such as Oracle Real Application Clusters + where it is desired to partition a load precisely across servers. HammerDB Remote + Modes are entirely operating system independent and therefore an instance + of HammerDB running on Windows can be Primary to one or more instances + running on Linux and vice versa. Additionally there is no requirement for + the workload to be the same and therefore it would be possible to connect + multiple instances of HammerDB running on Windows and Linux simultaneously + testing SQL Server, Oracle, MySQL and PostgreSQL workloads in a + virtualized environment. In the bottom right hand corner of the interface + the status bar shows the mode that HammerDB is running in. By default this + will be Local Mode. -
- Start the CLI +
+ Mode - To start the command line in interactive mode on Linux run: + + + + + +
- hammerdb>steve@CRANE:~/HammerDB-3.0$ ./hammerdbcli -HammerDB CLI v3.0 -Copyright (C) 2003-2018 Steve Shaw -Type "help" for a list of commands -The xml is well-formed, applying configuration -hammerdb> + + Primary Mode - On Windows double-click hammerdbcli.bat + From the tree-view select Mode Options.
- hammerdbcli.bat + Mode Options - +
- This will display a console command Window. On Windows this - console command window has been designed to run with white text on a - black background. If necessary the colours can be changed using the - Windows COLOR command. + This displays the Mode Options as shown in Figure 3 confirming + that the current mode is Local.
- CLI Windows + Mode Options Select - + +
- It is also possible to run a script directly from the command - line by providing the auto argument preceding the name of a script to - tun. For example the following script is named buildcli.tcl on a - Windows based system. Note that the line "vwait forever" has been - added to the end of the script to be run. This is required to enter - the event loop for correct processing of the script when being called - in this manner. + Select Primary Mode and click OK. - #!/bin/tclsh -puts "SETTING CONFIGURATION" -global complete -proc wait_to_complete {} { -global complete -set complete [vucomplete] -if {!$complete} { after 5000 wait_to_complete } else { exit } -} -dbset db mssqls -diset connection mssqls_server (local)\\SQLDEVELOP -diset tpcc mssqls_count_ware 2 -diset tpcc mssqls_num_vu 2 -print dict -buildschema -wait_to_complete -vwait forever +
+ Primary Mode Select + + + + + + +
- By following hammerdbcli.bat (or hammerdbcli on Linux) by the - auto argument and the name of the script, + Confirm the selection. -
- Run buildcli.tcl +
+ Mode Confirmation + + + + + + +
+ + This will show that Primary Mode is now active and the ID and + hostname it is running on. + +
+ Mode Active - + -
+
Note that this will also be recorded in the console display + and the current Mode is displayed in the status bar at the bottom right of + the Window. + + Setting Primary Mode at id : 18808, hostname : osprey +
+ + + Replica Mode + + On another instance of HammerDB select Replica Mode, enter the id + and hostname of the Primary and select OK. + +
+ Replica Mode + + + + + + +
+ + Confirm the change and observe the Mode connection on both the + Replica + + Setting Replica Mode at id : 18424, hostname : osprey +Replica connecting to osprey 18808 : Connection succeeded +Primary call back successful + + and the Primary. There is no restriction on the number of Replicas + that can be connected to one Primary. + + Received a new replica connection from host fe80::9042:505b:49de:beb4%26 +New replica joined : {18424 osprey} +
+ + + Primary Distribution + + The Primary Distribution button in the edit menu now becomes + active to distribute scripts across instances. + +
+ Primary Distribution + + + + + + +
+ + Pressing this button enables the distribution of the contents of + the Script Editor to all connected instances. + + Distributing to 18424 osprey ...Primary Distribution Succeeded + + The TPROC-C timed driver scripts reference the operating Mode and + when loaded will set the parameter "mode" to the operating Mode running + on that system. + + If loaded locally a script will show the Mode that the instance of + HammerDB is running in, which by default will be "Local". + + set mode "Local" ;# HammerDB operational mode + + Once the Mode is set to "Primary", when the script is loaded on the + Primary it will show the correct mode. + + set mode "Primary" ;# HammerDB operational mode + + When distributed from the Primary to the Replica the Replica will + change the mode to the correct setting. + + set mode "Replica" ;# HammerDB operational mode + + Once a Replica is connected to a Primary all actions that are + taken on the Primary will be replicated on the Replica. All of your + workload choices, such as creating, running and closing down Virtual Users, + will be replicated automatically on the connected Replicas, enabling + control and simultaneous timing from a central point. This enables + workloads to be directed to different database instances simultaneously. + When operating in Replica Mode the Monitor Virtual User on that instance + of HammerDB will not capture any performance data and will report that "No + snapshots are taken"; the Replica will only run the active Virtual + Users. Note that running a schema creation with multiple connected + instances is not supported. +
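Because the mode parameter is set in an ordinary Tcl driver script, a script can act on it directly. The following is a hypothetical sketch, not part of the shipped driver scripts, showing how a script could branch on the mode value described above:

```tcl
# Hypothetical sketch: branch on the operational mode set by HammerDB.
switch $mode {
    "Primary" { puts "Primary instance: actions replicate to connected Replicas" }
    "Replica" { puts "Replica instance: running the distributed workload" }
    default   { puts "Local instance: standalone run" }
}
```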
+ Operating in Replica Mode + + + + + + +
+ + When the workload is complete the Primary will terminate the + Virtual Users on the Replicas, meaning that running in Remote Mode + configurations is compatible with Autopilot. +
+ Replica Mode terminated + + + + + + +
+ + To capture the performance metrics on the Replica + as well as the Primary, the operational mode can be manually changed to + "Local". In this case the Replicas will capture performance data from + the database instances that they are connected to. + + set mode "Local" ;# HammerDB operational mode + + To disable Remote Modes select Local Mode on the Primary; on + confirmation all connected instances will return to Local Mode. + + Remote modes functionality in the CLI can be accessed using the + switchmode command, with the GUI and CLI being interchangeable, and + therefore a number of CLI Replicas can be connected to a GUI Primary if + desired.
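The steps above can be sketched as a hypothetical hammerdbcli session using the switchmode command documented later in this guide; the id and hostname values are illustrative and will differ on each system:

```tcl
# On the coordinating instance (hypothetical hammerdbcli session):
switchmode Primary
# On each Replica, pass the id and hostname reported by the Primary
# (18808 and osprey are illustrative values):
switchmode Replica 18808 osprey
# To return to standalone operation:
switchmode Local
```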
+ + + + Command Line Interface (CLI) + + HammerDB can be run from the command line without a graphical + interface. It is recommended that new users become familiar with using the + graphical interface before using the command line, as the command line + offers the same workflow; therefore once the graphical interface is + understood, learning the command line will be more straightforward. The CLI + implements equivalent readline functionality for navigation. The CLI can + be used in conjunction with scripting to build a powerful automated + environment. Both the CLI and the GUI run exactly the same commands + underneath the interactive layers; for example, when operational the + Virtual Users run identical workloads and therefore performance + measurements between the CLI and GUI are interchangeable. +
+ Start the CLI + + To start the command line in interactive mode on Linux run: + + steve@CRANE:~/HammerDB-4.0$ ./hammerdbcli +HammerDB CLI v4.0 +Copyright (C) 2003-2020 Steve Shaw +Type "help" for a list of commands +The xml is well-formed, applying configuration +hammerdb> + + On Windows double-click hammerdbcli.bat + +
+ hammerdbcli.bat + + + + + + +
- The script is then run directly without interaction. + This will display a console command Window. On Windows this + console command window has been designed to run with white text on a + black background and sets the colour scheme accordingly. -
- buildcli.tcl running +
+ CLI Windows - - - - - -
+ + + + +
@@ -4276,7 +5860,12 @@ vwait forever To learn CLI commands type "help". - HammerDB v3.1 CLI Help Index + HammerDB CLI v4.0 +Copyright (C) 2003-2020 Steve Shaw +Type "help" for a list of commands +The xml is well-formed, applying configuration +hammerdb>help +HammerDB v4.0 CLI Help Index Type "help command" for more details on specific commands below @@ -4287,16 +5876,22 @@ Type "help command" for more details on specific commands below dbset dgset diset + distributescript librarycheck loadscript print quit + runtimer + switchmode vucomplete vucreate vudestroy vurun vuset vustatus + waittocomplete + +hammerdb> The commands have the following functionality. @@ -4396,6 +5991,15 @@ Type "help command" for more details on specific commands below Oracle. + + distributescript + + Usage: distributescript + + In Primary mode distributes the script loaded by the Primary + to the connected Replicas. + + librarycheck @@ -4441,6 +6045,32 @@ Type "help command" for more details on specific commands below interface + + runtimer + + runtimer - Usage: runtimer seconds + + Helper routine to run a timer in the main hammerdbcli + thread to keep it busy for a period of time whilst the virtual + users run a workload. The timer will return when vucomplete + returns true or the timer reaches the seconds value. Usually + followed by vudestroy. + + + + switchmode + + Usage: switchmode [mode] ?PrimaryID? + ?PrimaryHostname? + + Changes the remote mode to Primary, Replica or Local. + When Primary it will report an id and a hostname. Equivalent to + the Mode option in the graphical interface. Mode to switch to + must be one of Local, Primary or Replica. If Mode is Replica + then the ID and Hostname of the Primary to connect to must be + given. + + vucomplete @@ -4509,6 +6139,18 @@ Type "help command" for more details on specific commands below successfully or "FINISH FAILED" for virtual users that encountered an error.
+ + + waittocomplete + + Usage: waittocomplete + + Helper routine to enable the main hammerdbcli thread to + keep it busy until vucomplete is detected. When vucomplete is + detected, exit is called, causing all virtual users and the main + hammerdbcli thread to terminate. Often used when calling + hammerdb from external scripting commands. + + @@ -4523,35 +6165,34 @@ Type "help command" for more details on specific commands below prompted. hammerdb>dbset db orac -Unknown prefix orac, choose one from ora mssqls db2 mysql pg redis +Unknown prefix orac, choose one from ora mssqls db2 mysql pg When a valid option is chosen the database is set. - hammerdb>dbset db redis -Database set to Redis + hammerdb>dbset db mssqls +Database set to MSSQLServer The print command can be used to confirm the chosen database and available options. hammerdb>print db -Database Redis set. +Database MSSQLServer set. To change do: dbset db prefix, one of: -Oracle = ora MSSQLServer = mssqls Db2 = db2 MySQL = mysql PostgreSQL = pg Redis = redis +Oracle = ora MSSQLServer = mssqls Db2 = db2 MySQL = mysql PostgreSQL = pg Similarly the workload is also selected from the available - configuration also prompting if an incorrect value is chosen. + configuration, also prompting if an incorrect value is chosen. When a + correct value is chosen the selection is confirmed. For backward + compatibility with existing scripts, TPROC-C and TPC-C and TPROC-H and + TPC-H are interchangeable.
- hammerdb>dbset bm TPC-H -Unknown benchmark TPC-H, choose one from TPC-C + hammerdb>dbset bm TPROC-H +Benchmark set to TPROC-H for MSSQLServer - when a correct value is chosen the selection is confirmed - - hammerdb>dbset bm TPC-C -Benchmark set to TPC-C for Redis - - The print bm command is used to confirm the benchmark +hammerdb>dbset bm TPC-C +Benchmark set to TPC-C for MSSQLServer - hammerdb>print bm +hammerdb>print bm Benchmark set to TPC-C After the database and workload is selected the print dict command @@ -4559,55 +6200,98 @@ Benchmark set to TPC-C database. hammerdb>print dict -Dictionary Settings for Redis +Dictionary Settings for MSSQLServer connection { - redis_host = 127.0.0.1 - redis_port = 6379 - redis_namespace = 1 + mssqls_server = (local) + mssqls_linux_server = localhost + mssqls_tcp = false + mssqls_port = 1433 + mssqls_azure = false + mssqls_authentication = windows + mssqls_linux_authent = sql + mssqls_odbc_driver = ODBC Driver 17 for SQL Server + mssqls_linux_odbc = ODBC Driver 17 for SQL Server + mssqls_uid = sa + mssqls_pass = admin } tpcc { - redis_count_ware = 1 - redis_num_vu = 1 - redis_total_iterations = 1000000 - redis_raiseerror = false - redis_keyandthink = false - redis_driver = test - redis_rampup = 2 - redis_duration = 5 - redis_allwarehouse = false - redis_timeprofile = false + mssqls_count_ware = 1 + mssqls_num_vu = 1 + mssqls_dbase = tpcc + mssqls_imdb = false + mssqls_bucket = 1 + mssqls_durability = SCHEMA_AND_DATA + mssqls_total_iterations = 1000000 + mssqls_raiseerror = false + mssqls_keyandthink = false + mssqls_checkpoint = false + mssqls_driver = test + mssqls_rampup = 2 + mssqls_duration = 5 + mssqls_allwarehouse = false + mssqls_timeprofile = false + mssqls_async_scale = false + mssqls_async_client = 10 + mssqls_async_verbose = false + mssqls_async_delay = 1000 + mssqls_connect_pool = false } Use the diset command to change these values for example for the number of warehouses to build. 
- hammerdb>diset tpcc redis_count_ware 10 -Changed tpcc:redis_count_ware from 1 to 10 for Redis + hammerdb>diset tpcc mssqls_count_ware 10 +Changed tpcc:mssqls_count_ware from 1 to 10 for MSSQLServer and the number of virtual users to build them. - hammerdb>diset tpcc redis_num_vu 4 -Changed tpcc:redis_num_vu from 1 to 4 for Redis + hammerdb>diset tpcc mssqls_num_vu 4 +Changed tpcc:mssqls_num_vu from 1 to 4 for MSSQLServer + + If the dict value to be set has a special character using curly + brackets around the value will prevent the interpretation of the special + character. - print dict can confirm the selection. + hammerdb>diset connection mssqls_server {(local)\SQLDEVELOP} +Changed connection:mssqls_server from (local) to (local)\SQLDEVELOP for MSSQLServer + + print dict will show the changed values. hammerdb>print dict -Dictionary Settings for Redis +Dictionary Settings for MSSQLServer connection { - redis_host = 127.0.0.1 - redis_port = 6379 - redis_namespace = 1 + mssqls_server = (local)\SQLDEVELOP + mssqls_linux_server = localhost + mssqls_tcp = false + mssqls_port = 1433 + mssqls_azure = false + mssqls_authentication = windows + mssqls_linux_authent = sql + mssqls_odbc_driver = ODBC Driver 17 for SQL Server + mssqls_linux_odbc = ODBC Driver 17 for SQL Server + mssqls_uid = sa + mssqls_pass = admin } tpcc { - redis_count_ware = 10 - redis_num_vu = 4 - redis_total_iterations = 1000000 - redis_raiseerror = false - redis_keyandthink = false - redis_driver = test - redis_rampup = 2 - redis_duration = 5 - redis_allwarehouse = false - redis_timeprofile = false + mssqls_count_ware = 10 + mssqls_num_vu = 4 + mssqls_dbase = tpcc + mssqls_imdb = false + mssqls_bucket = 1 + mssqls_durability = SCHEMA_AND_DATA + mssqls_total_iterations = 1000000 + mssqls_raiseerror = false + mssqls_keyandthink = false + mssqls_checkpoint = false + mssqls_driver = test + mssqls_rampup = 2 + mssqls_duration = 5 + mssqls_allwarehouse = false + mssqls_timeprofile = false + 
mssqls_async_scale = false + mssqls_async_client = 10 + mssqls_async_verbose = false + mssqls_async_delay = 1000 + mssqls_connect_pool = false }
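Because hammerdbcli commands such as diset are ordinary Tcl commands, repeated settings can also be applied programmatically. A minimal sketch, assuming the MSSQLServer parameter names shown in the dict output above:

```tcl
# Hypothetical sketch: apply several tpcc dict settings in one loop.
foreach {key value} {
    mssqls_count_ware 10
    mssqls_num_vu     4
} {
    diset tpcc $key $value
}
```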
@@ -4616,71 +6300,74 @@ tpcc { Run the buildschema command and the build will commence without prompting using your configuration and if successful report the status - at the end of the build. + at the end of the build. Note that, exactly as in the GUI, the build is + multithreaded with Virtual Users running simultaneously. - hammerdb>buildschema + hammerdb>buildschema Script cleared -Building 10 Warehouses with 5 Virtual Users, 4 active + 1 Monitor VU(dict value redis_num_vu is set to 4) -Ready to create a 10 Warehouse Redis TPC-C schema -in host 127.0.0.1:6379 in namespace 1? +Building 10 Warehouses with 5 Virtual Users, 4 active + 1 Monitor VU(dict value mssqls_num_vu is set to 4) +Ready to create a 10 Warehouse MS SQL Server TPROC-C schema +in host (LOCAL)\SQLDEVELOP in database TPCC? Enter yes or no: replied yes Vuser 1 created - WAIT IDLE Vuser 2 created - WAIT IDLE Vuser 3 created - WAIT IDLE Vuser 4 created - WAIT IDLE Vuser 5 created - WAIT IDLE -RUNNING - TPC-C creation Vuser 1:RUNNING Vuser 1:Monitor Thread -Vuser 1:CREATING REDIS SCHEMA IN NAMESPACE 1 -Vuser 1:Connection made to Redis at 127.0.0.1:6379 -Vuser 1:Selecting Namespace 1 +Vuser 1:CREATING TPCC SCHEMA +Vuser 1:CHECKING IF DATABASE tpcc EXISTS +Vuser 1:CREATING DATABASE tpcc +Vuser 1:CREATING TPCC TABLES Vuser 1:Loading Item Vuser 2:RUNNING Vuser 2:Worker Thread Vuser 2:Waiting for Monitor Thread... -Vuser 2:Connection made to Redis at 127.0.0.1:6379 -Vuser 2:Selecting Namespace 1 Vuser 2:Loading 2 Warehouses start:1 end:2 -Vuser 2:Start:Mon Apr 09 11:20:43 BST 2018 +Vuser 2:Start:Thu Oct 22 17:56:27 BST 2020 Vuser 2:Loading Warehouse Vuser 2:Loading Stock Wid=1 Vuser 3:RUNNING Vuser 3:Worker Thread Vuser 3:Waiting for Monitor Thread...
-Vuser 3:Connection made to Redis at 127.0.0.1:6379 -Vuser 3:Selecting Namespace 1 Vuser 3:Loading 2 Warehouses start:3 end:4 -Vuser 3:Start:Mon Apr 09 11:20:44 BST 2018 +Vuser 3:Start:Thu Oct 22 17:56:27 BST 2020 Vuser 3:Loading Warehouse Vuser 3:Loading Stock Wid=3 Vuser 4:RUNNING Vuser 4:Worker Thread Vuser 4:Waiting for Monitor Thread... -Vuser 4:Connection made to Redis at 127.0.0.1:6379 -Vuser 4:Selecting Namespace 1 Vuser 4:Loading 2 Warehouses start:5 end:6 -Vuser 4:Start:Mon Apr 09 11:20:44 BST 2018 +Vuser 4:Start:Thu Oct 22 17:56:28 BST 2020 Vuser 4:Loading Warehouse Vuser 4:Loading Stock Wid=5 Vuser 5:RUNNING Vuser 5:Worker Thread Vuser 5:Waiting for Monitor Thread... -Vuser 5:Connection made to Redis at 127.0.0.1:6379 -Vuser 5:Selecting Namespace 1 Vuser 5:Loading 2 Warehouses start:7 end:10 -Vuser 5:Start:Mon Apr 09 11:20:45 BST 2018 +Vuser 5:Start:Thu Oct 22 17:56:28 BST 2020 Vuser 5:Loading Warehouse Vuser 5:Loading Stock Wid=7 ..... -Vuser 5:End:Mon Apr 09 11:27:13 BST 2018 +Vuser 5:Loading Orders for D=10 W=10 +Vuser 5:...1000 +Vuser 5:...2000 +Vuser 5:...3000 +Vuser 5:Orders Done +Vuser 5:End:Thu Oct 22 18:02:45 BST 2020 Vuser 5:FINISHED SUCCESS Vuser 1:Workers: 0 Active 4 Done -Vuser 1:REDIS SCHEMA COMPLETE +Vuser 1:CREATING TPCC INDEXES +Vuser 1:CREATING TPCC STORED PROCEDURES +Vuser 1:UPDATING SCHEMA STATISTICS +Vuser 1:TPCC SCHEMA COMPLETE Vuser 1:FINISHED SUCCESS ALL VIRTUAL USERS COMPLETE + +hammerdb> The vustatus command can confirm the status of each Virtual @@ -4711,40 +6398,58 @@ No Virtual Users found output is strongly recommended as a test workload will print considerable output to the command prompt. 
- hammerdb>diset tpcc redis_driver timed + hammerdb>diset tpcc mssqls_driver timed Clearing Script, reload script to activate new setting Script cleared -Changed tpcc:reddriver from test to timed for Redis +Changed tpcc:mssqls_driver from test to timed for MSSQLServer Configure workload settings, in this example the rampup and duration times are set. - hammerdb>diset tpcc redis_rampup 1 -Changed tpcc:redis_rampup from 2 to 1 for Redis + hammerdb>diset tpcc mssqls_rampup 1 +Changed tpcc:mssqls_rampup from 2 to 1 for MSSQLServer -hammerdb>diset tpcc redis_duration 3 -Changed tpcc:redis_duration from 5 to 3 for Redis +hammerdb>diset tpcc mssqls_duration 3 +Changed tpcc:mssqls_duration from 5 to 3 for MSSQLServer Confirm the settings with the print dict command. hammerdb>print dict -Dictionary Settings for Redis +Dictionary Settings for MSSQLServer connection { - redis_host = 127.0.0.1 - redis_port = 6379 - redis_namespace = 1 + mssqls_server = (local)\SQLDEVELOP + mssqls_linux_server = localhost + mssqls_tcp = false + mssqls_port = 1433 + mssqls_azure = false + mssqls_authentication = windows + mssqls_linux_authent = sql + mssqls_odbc_driver = ODBC Driver 17 for SQL Server + mssqls_linux_odbc = ODBC Driver 17 for SQL Server + mssqls_uid = sa + mssqls_pass = admin } tpcc { - redis_count_ware = 10 - redis_num_vu = 4 - redis_total_iterations = 1000000 - redis_raiseerror = false - redis_keyandthink = false - redis_driver = timed - redis_rampup = 1 - redis_duration = 3 - redis_allwarehouse = false - redis_timeprofile = false + mssqls_count_ware = 10 + mssqls_num_vu = 4 + mssqls_dbase = tpcc + mssqls_imdb = false + mssqls_bucket = 1 + mssqls_durability = SCHEMA_AND_DATA + mssqls_total_iterations = 1000000 + mssqls_raiseerror = false + mssqls_keyandthink = false + mssqls_checkpoint = false + mssqls_driver = timed + mssqls_rampup = 1 + mssqls_duration = 3 + mssqls_allwarehouse = false + mssqls_timeprofile = false + mssqls_async_scale = false + mssqls_async_client = 10 + 
mssqls_async_verbose = false + mssqls_async_delay = 1000 + mssqls_connect_pool = false } When all the settings have been chosen load the driver script with @@ -4753,25 +6458,35 @@ tpcc { hammerdb>loadscript Script loaded, Type "print script" to view - The loaded script can be viewed with the print script - command. + The loaded script can be viewed with the print script command. + Note that the driver script is exactly the same as the driver script + observed in the GUI. There is no difference whatsoever in what is run in + the CLI compared to the GUI. If there is a wish to change the script a + modified version can be loaded with the customscript command and it is + therefore recommended to use the GUI to save a version of the script to + modify. - hammerdb>print script -#!/usr/local/bin/tclsh8.6 -#THIS SCRIPT TO BE RUN WITH VIRTUAL USER OUTPUT ENABLED + #!/usr/local/bin/tclsh8.6 #EDITABLE OPTIONS################################################## -set library redis ;# Redis Library -set total_iterations 1000000 ;# Number of transactions before logging off -set RAISEERROR "false" ;# Exit script on Redis error (true or false) +set library tdbc::odbc ;# SQL Server Library +set version 1.1.1 ;# SQL Server Library Version +set total_iterations 1000000;# Number of transactions before logging off +set RAISEERROR "false" ;# Exit script on SQL Server error (true or false) set KEYANDTHINK "false" ;# Time for user thinking and keying (true or false) +set CHECKPOINT "false" ;# Perform SQL Server checkpoint when complete (true or false) set rampup 1; # Rampup time in minutes before first Transaction Count is taken set duration 3; # Duration in minutes before second Transaction Count is taken set mode "Local" ;# HammerDB operational mode -set host "127.0.0.1" ;# Address of the server hosting Redis -set port "6379" ;# Port of the Redis Server, defaults to 6379 -set namespace "1" ;# Namespace containing the TPC Schema +set authentication "windows";# Authentication Mode (WINDOWS 
or SQL) +set server {(local)\SQLDEVELOP1};# Microsoft SQL Server Database Server +set port "1433";# Microsoft SQL Server Port +set odbc_driver {ODBC Driver 17 for SQL Server};# ODBC Driver +set uid "sa";#User ID for SQL Server Authentication +set pwd "admin";#Password for SQL Server Authentication +set tcp "false";#Specify TCP Protocol +set azure "false";#Azure Type Connection +set database "tpcc";# Database containing the TPC Schema #EDITABLE OPTIONS################################################## - ...
@@ -4789,10 +6504,14 @@ set namespace "1" ;# Namespace containing the TPC Schema 0 Virtual Users created The vuset command is used to configure the Virtual User options, - for example the number for create. + for example the number of Virtual Users to create. hammerdb>vuset vu 4 + and to enable logging. + + hammerdb>vuset logtotemp 1 + print vuconf confirms the configuration. hammerdb>print vuconf @@ -4801,7 +6520,7 @@ User Delay(ms) = 500 Repeat Delay(ms) = 500 Iterations = 1 Show Output = 1 -Log Output = 0 +Log Output = 1 Unique Log Name = 0 No Log Buffer = 0 Log Timestamps = 0 @@ -4818,6 +6537,8 @@ Vuser 2 created - WAIT IDLE Vuser 3 created - WAIT IDLE Vuser 4 created - WAIT IDLE Vuser 5 created - WAIT IDLE +Logging activated +to C:/Users/Steve/AppData/Local/Temp/hammerdb.log 5 Virtual Users Created with Monitor VU vustatus can confirm this status. @@ -4836,31 +6557,21 @@ Vuser 5 created - WAIT IDLE To begin the workload type vurun. hammerdb>vurun -RUNNING - Redis TPC-C Vuser 1:RUNNING -Vuser 1:Connection made to Redis at 127.0.0.1:6379 -Vuser 1:Selecting Namespace 1 Vuser 1:Beginning rampup time of 1 minutes Vuser 2:RUNNING -Vuser 2:Connection made to Redis at 127.0.0.1:6379 -Vuser 2:Selecting Namespace 1 Vuser 2:Processing 1000000 transactions with output suppressed... Vuser 3:RUNNING -Vuser 3:Connection made to Redis at 127.0.0.1:6379 -Vuser 3:Selecting Namespace 1 Vuser 3:Processing 1000000 transactions with output suppressed... Vuser 4:RUNNING -Vuser 4:Connection made to Redis at 127.0.0.1:6379 -Vuser 4:Selecting Namespace 1 Vuser 4:Processing 1000000 transactions with output suppressed... Vuser 5:RUNNING -Vuser 5:Connection made to Redis at 127.0.0.1:6379 -Vuser 5:Selecting Namespace 1 Vuser 5:Processing 1000000 transactions with output suppressed... The vustatus command can confirm the change in status. 
hammerdb>vustatus + 1 = RUNNING 2 = RUNNING 3 = RUNNING @@ -4875,7 +6586,7 @@ falseThe test runs as per the configuration and reports the result at the end and the Virtual User status. Note that when complete the vucomplete command can confirm this. - Vuser 1:Rampup 1 minutes complete ... + hammerdb>Vuser 1:Rampup 1 minutes complete ... Vuser 1:Rampup complete, Taking start Transaction Count. Vuser 1:Timing test period of 3 in minutes Vuser 1:1 ..., @@ -4883,15 +6594,17 @@ Vuser 1:2 ..., Vuser 1:3 ..., Vuser 1:Test complete, Taking end Transaction Count. Vuser 1:4 Active Virtual Users configured -Vuser 1:TEST RESULT : System achieved 2887487 Redis TPM at 24374 NOPM +Vuser 1:TEST RESULT : System achieved 101005 NOPM from 232149 SQL Server TPM Vuser 1:FINISHED SUCCESS Vuser 5:FINISHED SUCCESS Vuser 4:FINISHED SUCCESS Vuser 3:FINISHED SUCCESS Vuser 2:FINISHED SUCCESS ALL VIRTUAL USERS COMPLETE + hammerdb>vucomplete -true +true +hammerdb> To complete the test type vudestroy. @@ -4914,39 +6627,21 @@ Script cleared "The Tcl Programming Language: A Comprehensive Guide by Ashok P. Nadkarni (ISBN: 9781548679644)" - The following example shows an automated test script for a Redis - database that has previously been created. In this example the script - runs a timed tests for a duration of a minute for 1, 2 and 4 Virtual - Users in a similar manner to autopilot functionality with a timer set to - run for 2 minutes. Note that in the timer the update command is included - to process events received from the Virtual Users during the test. - Similarly the functionality of the vucomplete command can be observed. - When called as a command ie [ vucomplete ] and returning a boolean value - this command can be used in the timing loop to observe when the Virtual - Users have completed and once notified stop the timer and proceed with - the next test in the sequence. 
+ The following example shows an automated test script for a + Microsoft SQL Server database that has previously been created. In this + example the script runs timed tests for a duration of a minute for 1, + 2 and 4 Virtual Users in a similar manner to autopilot functionality + with a timer set to run for 2 minutes. Note that from HammerDB v4.0 the + runtimer command is included. The timer is set to a period of time for + the test to run; however, if vucomplete returns true during the test it + will also return. #!/usr/bin/tclsh -proc runtimer { seconds } { -set x 0 -set timerstop 0 -while {!$timerstop} { - incr x - after 1000 - if { ![ expr {$x % 60} ] } { - set y [ expr $x / 60 ] - puts "Timer: $y minutes elapsed" - } - update - if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 } - } -return -} puts "SETTING CONFIGURATION" -dbset db redis -diset tpcc redis_driver timed -diset tpcc redis_rampup 0 -diset tpcc redis_duration 1 +dbset db mssqls +diset tpcc mssqls_driver timed +diset tpcc mssqls_rampup 0 +diset tpcc mssqls_duration 1 vuset logtotemp 1 loadscript puts "SEQUENCE STARTED" @@ -4965,81 +6660,68 @@ puts "TEST SEQUENCE COMPLETE" name of the script. The following output is produced without further intervention whilst also writing the output to the logfile.
- ./hammerdbcli -HammerDB CLI v3.0 -Copyright (C) 2003-2018 Steve Shaw + HammerDB CLI v4.0 +Copyright (C) 2003-2020 Steve Shaw Type "help" for a list of commands The xml is well-formed, applying configuration hammerdb>source cliexample.tcl SETTING CONFIGURATION -Database set to Redis +Database set to MSSQLServer Clearing Script, reload script to activate new setting Script cleared -Changed tpcc:redis_driver from test to timed for Redis -Changed tpcc:redis_rampup from 2 to 0 for Redis -Changed tpcc:redis_duration from 5 to 1 for Redis +Changed tpcc:mssqls_driver from test to timed for MSSQLServer +Changed tpcc:mssqls_rampup from 2 to 0 for MSSQLServer +Changed tpcc:mssqls_duration from 5 to 1 for MSSQLServer Script loaded, Type "print script" to view SEQUENCE STARTED 1 VU TEST Vuser 1 created MONITOR - WAIT IDLE Vuser 2 created - WAIT IDLE Logging activated -to /tmp/hammerdb.log +to C:/Users/Hdb/AppData/Local/Temp/hammerdb.log 2 Virtual Users Created with Monitor VU -RUNNING - Redis TPC-C Vuser 1:RUNNING -Vuser 1:Connection made to Redis at 127.0.0.1:6379 -Vuser 1:Selecting Namespace 1 Vuser 1:Beginning rampup time of 0 minutes Vuser 1:Rampup complete, Taking start Transaction Count. Vuser 1:Timing test period of 1 in minutes Vuser 2:RUNNING -Vuser 2:Connection made to Redis at 127.0.0.1:6379 -Vuser 2:Selecting Namespace 1 Vuser 2:Processing 1000000 transactions with output suppressed... Vuser 1:1 ..., - Timer: 1 minutes elapsed Vuser 1:Test complete, Taking end Transaction Count. 
+Timer: 1 minutes elapsed Vuser 1:1 Active Virtual Users configured -Vuser 1:TEST RESULT : System achieved 1564723 Redis TPM at 13100 NOPM +Vuser 1:TEST RESULT : System achieved 35576 NOPM from 81705 SQL Server TPM Vuser 1:FINISHED SUCCESS Vuser 2:FINISHED SUCCESS ALL VIRTUAL USERS COMPLETE -Destroying Virtual Users -Virtual Users Destroyed +runtimer returned after 61 seconds +vudestroy success 2 VU TEST Vuser 1 created MONITOR - WAIT IDLE Vuser 2 created - WAIT IDLE Vuser 3 created - WAIT IDLE Logging activated -to /tmp/hammerdb.log +to C:/Users/Hdb/AppData/Local/Temp/hammerdb.log 3 Virtual Users Created with Monitor VU -RUNNING - Redis TPC-C Vuser 1:RUNNING -Vuser 1:Connection made to Redis at 127.0.0.1:6379 -Vuser 1:Selecting Namespace 1 Vuser 1:Beginning rampup time of 0 minutes Vuser 1:Rampup complete, Taking start Transaction Count. Vuser 1:Timing test period of 1 in minutes Vuser 2:RUNNING -Vuser 2:Connection made to Redis at 127.0.0.1:6379 -Vuser 2:Selecting Namespace 1 Vuser 2:Processing 1000000 transactions with output suppressed... Vuser 3:RUNNING -Vuser 3:Connection made to Redis at 127.0.0.1:6379 -Vuser 3:Selecting Namespace 1 Vuser 3:Processing 1000000 transactions with output suppressed... Vuser 1:1 ..., - Timer: 1 minutes elapsed Vuser 1:Test complete, Taking end Transaction Count. 
Vuser 1:2 Active Virtual Users configured -Vuser 1:TEST RESULT : System achieved 2266472 Redis TPM at 19068 NOPM +Vuser 1:TEST RESULT : System achieved 60364 NOPM from 138633 SQL Server TPM +Timer: 1 minutes elapsed Vuser 1:FINISHED SUCCESS -Vuser 3:FINISHED SUCCESS Vuser 2:FINISHED SUCCESS +Vuser 3:FINISHED SUCCESS ALL VIRTUAL USERS COMPLETE -Destroying Virtual Users -Virtual Users Destroyed +runtimer returned after 60 seconds +vudestroy success 4 VU TEST Vuser 1 created MONITOR - WAIT IDLE Vuser 2 created - WAIT IDLE @@ -5047,47 +6729,63 @@ Vuser 3 created - WAIT IDLE Vuser 4 created - WAIT IDLE Vuser 5 created - WAIT IDLE Logging activated -to /tmp/hammerdb.log +to C:/Users/Hdb/AppData/Local/Temp/hammerdb.log 5 Virtual Users Created with Monitor VU -RUNNING - Redis TPC-C Vuser 1:RUNNING -Vuser 1:Connection made to Redis at 127.0.0.1:6379 -Vuser 1:Selecting Namespace 1 Vuser 1:Beginning rampup time of 0 minutes Vuser 1:Rampup complete, Taking start Transaction Count. Vuser 1:Timing test period of 1 in minutes Vuser 2:RUNNING -Vuser 2:Connection made to Redis at 127.0.0.1:6379 -Vuser 2:Selecting Namespace 1 Vuser 2:Processing 1000000 transactions with output suppressed... Vuser 3:RUNNING -Vuser 3:Connection made to Redis at 127.0.0.1:6379 -Vuser 3:Selecting Namespace 1 Vuser 3:Processing 1000000 transactions with output suppressed... Vuser 4:RUNNING -Vuser 4:Connection made to Redis at 127.0.0.1:6379 -Vuser 4:Selecting Namespace 1 Vuser 4:Processing 1000000 transactions with output suppressed... Vuser 5:RUNNING -Vuser 5:Connection made to Redis at 127.0.0.1:6379 -Vuser 5:Selecting Namespace 1 Vuser 5:Processing 1000000 transactions with output suppressed... Vuser 1:1 ..., Vuser 1:Test complete, Taking end Transaction Count. 
- Timer: 1 minutes elapsed Vuser 1:4 Active Virtual Users configured -Vuser 1:TEST RESULT : System achieved 2648261 Redis TPM at 22397 NOPM +Vuser 1:TEST RESULT : System achieved 103055 NOPM from 236412 SQL Server TPM Vuser 1:FINISHED SUCCESS +Vuser 2:FINISHED SUCCESS Vuser 4:FINISHED SUCCESS -Vuser 3:FINISHED SUCCESS Vuser 5:FINISHED SUCCESS -Vuser 2:FINISHED SUCCESS +Vuser 3:FINISHED SUCCESS ALL VIRTUAL USERS COMPLETE -Destroying Virtual Users -Virtual Users Destroyed +runtimer returned after 59 seconds +vudestroy success TEST SEQUENCE COMPLETE hammerdb> + + It is a common requirement to also want to drive HammerDB CLI + scripts from an external scripting tool. For this reason HammerDB v4.0 + also includes the command waittocomplete. This can be seen as + complementary to runtimer, whereas runtimer will run in the main thread + for a specified period of time, waittocomplete will wait for an + indeterminate period of time until vucomplete is set to true. Therefore + waittocomplete is particularly beneficial for schema builds which may + take different periods of time but is complete when all Virtual Users + have finished their task. An example automated build is shown. + + #!/bin/tclsh +dbset db mssqls +diset tpcc mssqls_count_ware 10 +diset tpcc mssqls_num_vu 4 +diset connection mssqls_server {(local)\SQLDEVELOP} +vuset logtotemp 1 +print dict +buildschema +waittocomplete + + The HammerDB CLI will accept the argument auto to run a specified + script automatically. + + hammerdbcli.bat auto autorunbuild.tcl + + Using this approach it is possible to build complex test scenarios + automating both build and test functionality.
@@ -5130,9 +6828,9 @@ hammerdb> On starting the Web service with the hammerdbws command HammerDB will listen on the specified port for HTTP requests. - [oracle@vulture HammerDB-3.2]$ ./hammerdbws -HammerDB Web Service v3.2 -Copyright (C) 2003-2019 Steve Shaw + [oracle@vulture HammerDB-4.0]$ ./hammerdbws +HammerDB Web Service v4.0 +Copyright (C) 2003-2020 Steve Shaw Type "help" for a list of commands The xml is well-formed, applying configuration Initialized new SQLite in-memory database @@ -5965,7 +7663,8 @@ JSON format - Introduction to Analytic Testing (OSS-TPC-H) and Cloud Queries + Introduction to Analytic Testing (TPROC-H derived from TPC-H) and + Cloud Queries Analytic workloads can also be interchangeably described as Decision Support, Data Warehousing or Business Intelligence, the basis of these @@ -5974,59 +7673,58 @@ JSON format reading as opposed to modifying data and therefore requires a distinct approach. The ability of a database to process transactions gives limited information towards the ability of a database to support query based - workloads and vice-versa, therefore both OSS-TPC-C and OSS-TPC-H workloads - complement each other in investigating the capabilities of a particular - database. When reading large volumes of data to satisfy query workloads it - should be apparent that if multiple CPU cores are available reading with a - single processing thread is going to leave a significant amount of - resources underutilized. Consequently the most effective Analytic Systems - employ a feature called Parallel Query to break down such queries into - multiple sub tasks to complete the query more quickly. Additional features - such as column orientation, compression and partitioning can also be used - to improve parallel query performance. 
Advances in server technologies in - particular large numbers of CPU cores available with large memory - configurations have popularised both in-memory and column store - technologies as a means to enhance Parallel Query performance. Examples of - databases supported by HammerDB that support some or all of these enhanced - query technologies are the Oracle Database, SQL Server, Db2, MariaDB and - PostgreSQL, databases that do not support any of these technologies are - single threaded query workloads and cannot be expected to complete these - workloads as quickly. As a NoSQL database Redis does not support Analytic - workloads. If you are unfamiliar with row-oriented and column-store - technologies then it is beneficial to read one of the many guides - explaining the differences and familiarising with the technologies - available in the database that you have chosen to test. With commercial - databases you should also ensure that your license includes the ability to - run Parallel workloads as you may have a version of a database that - supports single-threaded workloads only. + workloads and vice-versa, therefore both TPROC-C and TPROC-H based + workloads complement each other in investigating the capabilities of a + particular database. When reading large volumes of data to satisfy query + workloads it should be apparent that if multiple CPU cores are available + reading with a single processing thread is going to leave a significant + amount of resources underutilized. Consequently the most effective + Analytic Systems employ a feature called Parallel Query to break down such + queries into multiple sub tasks to complete the query more quickly. + Additional features such as column orientation, compression and + partitioning can also be used to improve parallel query performance. 
+ Advances in server technologies in particular large numbers of CPU cores
+ available with large memory configurations have popularised both in-memory
+ and column store technologies as a means to enhance Parallel Query
+ performance. Examples of databases supported by HammerDB that support some
+ or all of these enhanced query technologies are the Oracle Database, SQL
+ Server, Db2, MariaDB and PostgreSQL; databases that do not support any of
+ these technologies run single-threaded query workloads and cannot be
+ expected to complete these workloads as quickly. If you are unfamiliar
+ with row-oriented and column-store technologies then it is beneficial to
+ read one of the many guides explaining the differences and familiarising
+ with the technologies available in the database that you have chosen to
+ test. With commercial databases you should also ensure that your license
+ includes the ability to run Parallel workloads as you may have a version
+ of a database that supports single-threaded workloads only.
- What is OSS-TPC-H? - - To complement the OLTP type OSS-TPC-C workload HammerDB also contains - a Fair Use derivation of the decision support based TPC-H Benchmark Standard. - The HammerDB OSS-TPC-H workload is an open source workload derived from the - TPC-H Benchmark Standard and as such is not comparable to published TPC-H - results, as the results do not comply with the TPC-H Benchmark Standard. - - OSS-TPC-H in simple terms can be thought of as complementing the workload - implemented in OSS-TPC-C related to the activities of a wholesale supplier. - However, whereas OSS-TPC-C simulates an online ordering system OSS-TPC-H - represents the typical workload of a retailer running analytical queries - about their operations. To do this OSS-TPC-H is represented by a set of - business focused ad-hoc queries (in addition to concurrent data updates - and deletes) and is measured upon the time it takes to complete these queries. - In particular the focus is upon highly complex queries that require the - processing of large volumes of data. Also in similarity to OSS-TPC-C the - schema size is not fixed and is dependent upon a Scale Factor and there - your schema your test schema can also be as small or large as you wish with - a larger schema requiring a more powerful computer system to process the - increased data volume for queries. However, in contrast to OSS-TPC-C it is - not valid to compare the test results of query load tests taken at different - Scale Factors shown as SF in the Schema diagram. + What is TPROC-H derived from TPC-H? + + To complement the OLTP type TPROC-C workload HammerDB also + contains a Fair Use derivation of the decision support based TPC-H + Benchmark Standard. The HammerDB TPROC-H workload is an open source + workload derived from the TPC-H Benchmark Standard and as such is not + comparable to published TPC-H results, as the results do not comply with + the TPC-H Benchmark Standard. 
TPROC-H in simple terms can be thought of + as complementing the workload implemented in TPROC-C related to the + activities of a wholesale supplier. However, whereas TPROC-C simulates + an online ordering system TPROC-H represents the typical workload of a + retailer running analytical queries about their operations. To do this + TPROC-H is represented by a set of business focused ad-hoc queries (in + addition to concurrent data updates and deletes) and is measured upon + the time it takes to complete these queries. In particular the focus is + upon highly complex queries that require the processing of large volumes + of data. Also in similarity to TPROC-C the schema size is not fixed and + is dependent upon a Scale Factor and therefore your schema can also be + as small or large as you wish with a larger schema requiring a more + powerful computer system to process the increased data volume for + queries. However, in contrast to TPROC-C it is not valid to compare the + test results of query load tests taken at different Scale Factors shown + as SF in the Schema diagram.
- OSS-TPC-H Schema. + TPROC-H Schema. @@ -6194,39 +7892,38 @@ order by set. HammerDB provides full capabilities to run this refresh set both automatically as part of a Power test and concurrently with a Throughput test. Note however that once a refresh set is run the schema is required - to be refreshed and it is prudent to backup and restore a HammerDB OSS-TPC-H - based schema where running a refresh set is planned. + to be refreshed and it is prudent to backup and restore a HammerDB + TPROC-H based schema where running a refresh set is planned.
- Choosing a Database for running OSS-TPC-H workloads + Choosing a Database for running TPROC-H workloads - TPC-H workloads run complex queries scanning large volumes of data - and therefore require the use of database features such as parallel + TPROC-H workloads run complex queries scanning large volumes of + data and therefore require the use of database features such as parallel query and in-memory column stores to maximise performance. With the - available HammerDB OSS-TPC-H based workloads the three databases that + available HammerDB TPROC-H based workloads the three databases that support these features are the Enterprise Editions of Oracle, SQL Server and Db2 and therefore these databases will deliver the best experience - for building and running OSS-TPC-H. Over time there has been improvement + for building and running TPROC-H. Over time there has been improvement with open-source and open-source derived databases in the ability to run - OSS-TPC-H workloads. For example PostgreSQL supports Parallel Query and the - PostgreSQL derived versions of Amazon Redshift and Greenplum offer + TPROC-H workloads. For example PostgreSQL supports Parallel Query and + the PostgreSQL derived versions of Amazon Redshift and Greenplum offer further accelerated query solutions. MySQL does not support an analytic storage engine however the MariaDB column store storage is best suited for running analytic tests against MySQL. Nevertheless it is known that with some or all of the open source solutions a number of queries either - fail or are extrmemly long running due to the limitations of the - databases themselves (and not HammerDB) therefore these workloads should - be viewed in an experimental manner as they will not result in the - ability to generate a QphH value. + fail or are extremely long running due to the limitations of the + databases themselves (and not HammerDB) in optimizing the + queries.
Oracle - The Oracle database is fully featured for running OSS-TPC-H based + The Oracle database is fully featured for running TPROC-H based workloads and presents two options for configuring the database either row oriented parallel query or the In-Memory Column Store (IM column - store). Both of these configurations are able to run a full OSS-TPC-H + store). Both of these configurations are able to run a full TPROC-H workload and are configured on the database as opposed to configuring with HammerDB.
@@ -6234,7 +7931,7 @@ order by
Microsoft SQL Server - SQL Server is able to support a full OSS-TPC-H workload and offers + SQL Server is able to support a full TPROC-H workload and offers row oriented parallel query as well as in-memory column store configured. The clustered columnstore build is selected through the HammerDB Build Options. @@ -6253,7 +7950,7 @@ order by
Db2

- Db2 can support a full OSS-TPCH workload through row oriented
+ Db2 can support a full TPROC-H workload through row oriented
 parallel query and Db2 BLU in-memory column store. The column store is
 selected through the Db2 Organize by options.

@@ -6282,7 +7979,7 @@ order by
 bulk after generating with the HammerDB datagen operation.
- PostgreSQL TPC-H + PostgreSQL TPROC-H @@ -6304,7 +8001,7 @@ order by selected with the Data Warehouse Storage Engine Option.
- MySQL MariaDB OSS-TPC-H + MySQL MariaDB TPROC-H @@ -6313,46 +8010,39 @@ order by
- -
- Redis - - Redis does not support analytic workloads and therefore HammerDB - does not have a OSS-TPC-H workload for Redis. -
Benchmarking Database Cloud Services - In addition to the OS-TPC-H workload there are also a set of Cloud + In addition to the TPROC-H workload there are also a set of Cloud Analytic Queries made publicly available by Oracle for comparison of Cloud Analytic services. These queries run against a derived TPC-H - schema and are included with HammerDB for running against Oracle, - Amazon Aurora and Amazon Redshift with Amazon Aurora and Redshift - being based upon and compatible with MySQL and PostgreSQL respectively. - Note however that in similarity to MySQL Amazon does not have the - features to support analytics such as parallel query or a column - store option and therefore running the analytic tests against Aurora - although possible is not likely to generate the best results. Amazon - Redshift however is a column oriented database based on PostgreSQL - and suitable for running analytic workloads. + schema and are included with HammerDB for running against Oracle, Amazon + Aurora and Amazon Redshift with Amazon Aurora and Redshift being based + upon and compatible with MySQL and PostgreSQL respectively. Note however + that in similarity to MySQL Amazon Aurora does not have the features to + support analytics such as parallel query or a column store option and + therefore running the analytic tests against Aurora although possible is + not likely to generate the best results. Amazon Redshift however is a + column oriented database based on PostgreSQL and suitable for running + analytic workloads. For the Cloud Analytic workload the Oracle specification requires a schema size of 10TB, it is recommended to create the schema with HammerDB using the Generating and Bulk Loading Data feature and this - guide details how to do this for both Oracle and Redshift and this - is particularly recommended when uploading data to the cloud. - - You are permitted to run both the in-built OSS-TPC-H queries and - the Cloud Analytic Queries against the same database. 
This new query
- set is enabled under the OSS-TPC-H Driver Script Options dialog by
- selecting the Cloud Analytic Queries checkbox. This query set reports
- the geometric mean of the completed queries that returns rows for
- circumstances where the query set is run on a scale factor size of
- less than 10TB. Given the similarity of the Oracle implementation to
- the existing OSS-TPC-H workload the following example illustrates
- running the workload against Amazon Redshift.
+ guide details how to do this for both Oracle and Redshift and this is
+ particularly recommended when uploading data to the cloud.
+
+ You are permitted to run both the in-built TPROC-H queries and the
+ Cloud Analytic Queries against the same database. This new query set is
+ enabled under the TPROC-H Driver Script Options dialog by selecting the
+ Cloud Analytic Queries checkbox. This query set reports the geometric
+ mean of the completed queries that return rows, for circumstances where
+ the query set is run on a scale factor size of less than 10TB. Given the
+ similarity of the Oracle implementation to the existing TPROC-H workload
+ the following example illustrates running the workload against Amazon
+ Redshift.
Redshift Cloud Analytic Workload @@ -6384,8 +8074,8 @@ order by - Create the OSS-TPC-H schema within Redshift using the HammerDB - Generating and Bulk Loading Data feature. Under PostgreSQL OSS-TPC-H + Create the TPROC-H schema within Redshift using the HammerDB + Generating and Bulk Loading Data feature. Under PostgreSQL TPROC-H Driver Options use the Redshift Endpoint as your PostgreSQL host and 5439 as your port. Set the user and password to the credentials you have set under the Amazon AWS console. To run the Cloud Analytic @@ -6438,30 +8128,31 @@ order by parallel performance to drive the workload. When using in-memory column store features processors that support SIMD/AVX instructions sets are also required for the vectorisation of column scans. HammerDB by default - provides OSS-TPC-H schemas at Scale Factors 1,10,30,100,300 and 1000 (larger - can be configured if required). The Scale Factors correspond to the - schema size in Gigabytes. As with the official TPC-H tests the results - at one schema size should not be compared with the results derived with - another schema size. As the analytic workload utilizes parallel query - where available it is possible for a single virtual user to use all of - the CPU resources on the SUT at any schema size. Nevertheless there is - still a relation with all of the hardware resources available including - memory and I/O and a larger system will benefit from tests run a larger - schema size. The actual sizing of hardware resources of hardware - resources is beyond the scope of this document however at the basic - level with traditional parallel execution and modern CPU capabilities - I/O read performance is typically the constraining factor. 
Note that
- also in contrast to an OLTP workload high throughput transaction log
- write performance is not a requirement, however in similarity to the
- OLTP workload storage based on SSD disks will usually offer significant
- improvements in performance over standard hard disks although in this
- case it is the benefits of read bandwidth as opposed to the IOPs
- benefits of SSDs for OLTP. When using the in-memory column store memory
- capacity and bandwidth feature and if fully cached in memory storage
- performance is not directly a factor for query performance. Nevertheless
- data loads are an important consideration for in-memory data and
- therefore I/O and SSD read performance remain important for loading the
- data into memory to be available for scans.
+ provides TPROC-H schemas at Scale Factors 1,10,30,100,300 and 1000
+ (larger can be configured if required). The Scale Factors correspond to
+ the schema size in Gigabytes. As with official TPC-H tests the
+ results at one schema size should not be compared with the results
+ derived with another schema size. As the analytic workload utilizes
+ parallel query where available it is possible for a single virtual user
+ to use all of the CPU resources on the SUT at any schema size.
+ Nevertheless there is still a relation with all of the hardware
+ resources available including memory and I/O and a larger system will
+ benefit from tests run at a larger schema size. The actual sizing of
+ hardware resources is beyond the scope of this
+ document however at the basic level with traditional parallel execution
+ and modern CPU capabilities I/O read performance is typically the
+ constraining factor. 
Note that also in contrast to an OLTP workload high
+ throughput transaction log write performance is not a requirement,
+ however in similarity to the OLTP workload storage based on SSD disks
+ will usually offer significant improvements in performance over standard
+ hard disks although in this case it is the benefits of read bandwidth as
+ opposed to the IOPs benefits of SSDs for OLTP. When using the in-memory
+ column store feature, memory capacity and bandwidth are the key
+ resources, and if the data is fully cached in memory, storage
+ performance is not directly a factor for query performance. Nevertheless
+ data loads are an important consideration for in-memory data and
+ therefore I/O and SSD read performance remain important for loading the
+ data into memory to be available for
+ scans.
@@ -6599,7 +8290,7 @@ SQL1063N DB2START processing was successful. In your db2cli.ini set the following parameter for each of the - databases that you plan to create to test the OSS-TPC-H workload, this + databases that you plan to create to test the TPROC-H workload, this will prevent failure due to SQLSTATE 01003 “Null values were eliminated from the argument of a column function” when running the workload thereby preventing the query set from completing. @@ -6612,7 +8303,7 @@ IgnoreWarnList="'01003'" When your database server is installed you should create a - database into which the test data will be installed, for OSS-TPC-H + database into which the test data will be installed, for TPROC-H workloads a large pagesize is recommended. [db2inst1 ~]$ db2 create database tpch pagesize 32 k @@ -6664,13 +8355,12 @@ infinidb_import_for_batchinsert_delimiter=7
Configuring Schema Build Options - To create the analytic test schema based on OSS-TPC-H - you will need to select which benchmark and database you - wish to use by choosing select benchmark from under the Options menu or - under the benchmark tree-view. The initial settings are determined by - the values in your XML configuration files. The following example shows - the selection of SQL Server however the process is the same for all - databases. + To create the analytic test schema based on the TPROC-H you will + need to select which benchmark and database you wish to use by choosing + select benchmark from under the Options menu or under the benchmark + tree-view. The initial settings are determined by the values in your XML + configuration files. The following example shows the selection of SQL + Server however the process is the same for all databases.
Benchmark Options @@ -6682,12 +8372,12 @@ infinidb_import_for_batchinsert_delimiter=7
- To create the OSS-TPC-H schema select the OSS-TPC-H schema options menu - tab from the benchmark tree-view or the options menu. This menu will - change dynamically according to your chosen database. + To create the TPROC-H schema select the TPROC-H schema options + menu tab from the benchmark tree-view or the options menu. This menu + will change dynamically according to your chosen database.
- OSS-TPC-H Schema Build Options + TPROC-H Schema Build Options @@ -6712,7 +8402,7 @@ infinidb_import_for_batchinsert_delimiter=7 Oracle Schema Build Options
- Oracle OSS-TPC-H Build Options + Oracle TPROC-H Build Options @@ -6755,47 +8445,48 @@ infinidb_import_for_batchinsert_delimiter=7 The system user password is the password for the “system” user you entered during database creation. The system user already exists in all Oracle databases and has - the necessary permissions to create the TPC-H user. + the necessary permissions to create the TPROC-H + user. - TPC-H User + TPROC-H User - The TPC-H user is the name of a user to be created - that will own the OSS-TPC-H schema. This user can have any name - you choose but must not already exist and adhere to the + The TPROC-H user is the name of a user to be created + that will own the TPROC-H schema. This user can have any + name you choose but must not already exist and adhere to the standard rules for naming Oracle users. You may if you wish run the schema creation multiple times and have multiple - OSS-TPC-H schemas created with ownership under a different user - you create each time. + TPROC-H schemas created with ownership under a different + user you create each time. - TPC-H User Password + TPROC-H User Password - The TPC-H user password is the password to be used - for the TPC-H user you create and must adhere to the + The TPROC-H user password is the password to be used + for the TPROC-H user you create and must adhere to the standard rules for Oracle user password. You will need to - remember the TPC-H user name and password for running the - TPC-H driver script after the schema is built. + remember the TPROC-H user name and password for running the + TPROC-H driver script after the schema is built. - TPC-H Default Tablespace + TPROC-H Default Tablespace - The TPC-H default tablespace is the tablespace that - will be the default for the TPC-H user and therefore the + The TPROC-H default tablespace is the tablespace that + will be the default for the TPROC-H user and therefore the tablespace to be used for the schema creation. 
The tablespace must have sufficient free space for the schema to be created. - TPC-H Temporary Tablespace + TPROC-H Temporary Tablespace - The TPC-H temporary tablespace is the temporary + The TPROC-H temporary tablespace is the temporary tablespace that already exists in the database to be used by - the TPC-H User. + the TPROC-H User. @@ -6863,7 +8554,7 @@ infinidb_import_for_batchinsert_delimiter=7 SQL Server The Microsoft SQL Server is the host name or host - name and instance where the TPC-H database will be + name and instance where the TPROC-H database will be created. @@ -7138,7 +8829,7 @@ infinidb_import_for_batchinsert_delimiter=7 create a database and you previously granted access to from the load generation server. The root user already exists in all MySQL databases and has the necessary permissions to - create the TPC-H database. + create the TPROC-H database. @@ -7146,7 +8837,7 @@ infinidb_import_for_batchinsert_delimiter=7 The MySQL user password is the password for the user defined as the MySQL User. You will need to remember the - MySQL user name and password for running the OSS-TPC-H driver + MySQL user name and password for running the TPROC-H driver script after the database is built. @@ -7154,7 +8845,7 @@ infinidb_import_for_batchinsert_delimiter=7 MySQL Database The MySQL Database is the database that will be - created containing the OSS-TPC-H schema creation. There must + created containing the TPROC-H schema creation. There must have sufficient free space for the database to be created. @@ -7267,7 +8958,7 @@ infinidb_import_for_batchinsert_delimiter=7 PostgreSQL User The PostgreSQL User is the user (role) that will be - created that owns the database containing the TPC-H + created that owns the database containing the TPROC-H schema. @@ -7284,7 +8975,7 @@ infinidb_import_for_batchinsert_delimiter=7 The PostgreSQL Database is the database that will be created and owned by the PostgreSQL User that contains the - OSS-TPC-H schema. 
+ TPROC-H schema. @@ -7340,7 +9031,7 @@ infinidb_import_for_batchinsert_delimiter=7 tree-view.
- Build OSS-TPC-H Schema + Build TPROC-H Schema @@ -7536,7 +9227,7 @@ Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Sep 17 11:24:10 2 After you have run a workload with a refresh function drop the - TPCH user as follows: + TPROC-H user as follows: SQL> drop user tpch cascade; @@ -7958,7 +9649,7 @@ SET Options.
- OSS-TPC-H Driver Options
+ TPROC-H Driver Options

@@ -8118,7 +9809,9 @@ SET
 and also modified directly there. The Load option can also be used to
 refresh the script to the configured Options. Pay particular attention
 to the Scale Factor value shown as "scale_factor" if different from the
- schema that you have loaded.
+ schema that you have loaded. The "scale_factor" value is inherited from
+ the schema build, so you may need to change it manually in the script to
+ match the size of the schema that you plan to test.
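In the CLI the same check can be scripted before the driver script is loaded. The following is a hypothetical sketch only: the `mssqls_scale_fact` key name is an assumption following the SQL Server prefix convention used elsewhere in this guide, and the value 10 is illustrative.

```tcl
# Hypothetical HammerDB CLI sketch: align the driver Scale Factor with
# the schema that was actually built before loading the driver script.
dbset db mssqls
dbset bm TPROC-H
diset tpch mssqls_scale_fact 10   ;# set to the Scale Factor used at build time
loadscript
print script                      ;# verify "scale_factor" in the loaded script
```

Reloading the script after changing the dict value avoids editing the generated script by hand.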
Driver Script Loaded @@ -8199,7 +9892,7 @@ SET Repeat Delay(ms) is the time that each Virtual User will wait before running its next Iteration of the Driver - Script. For OSS-TPC-H this is an external loop before running + Script. For TPROC-H this is an external loop before running another query set, however should not be more than 1 when the refresh function is enabled. @@ -8215,7 +9908,7 @@ SET Show Output Show Output will report Virtual User Output to the - Virtual User Output Window, For OSS-TPC-H tests this should be + Virtual User Output Window, For TPROC-H tests this should be enabled. @@ -8282,8 +9975,9 @@ SET
- When complete the Virtual User will show the query set
- time.
+ When complete the Virtual User will show the query set time as
+ well as the geometric mean of the queries that returned rows,
+ together with a count of those queries.
Single Virtual User Complete @@ -8298,53 +9992,55 @@ SET And the log will show the Query times. Note how the queries are run in a pre-determined random order. - Hammerdb Log @ Thu Apr 19 15:26:25 BST 2018 + Hammerdb Log @ Fri Oct 23 15:31:59 BST 2020 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- Vuser 1:Executing Query 14 (1 of 22) -Vuser 1:query 14 completed in 9.449 seconds +Vuser 1:query 14 completed in 7.301 seconds Vuser 1:Executing Query 2 (2 of 22) -Vuser 1:query 2 completed in 0.235 seconds +Vuser 1:query 2 completed in 0.952 seconds Vuser 1:Executing Query 9 (3 of 22) -Vuser 1:query 9 completed in 16.28 seconds +Vuser 1:query 9 completed in 24.457 seconds Vuser 1:Executing Query 20 (4 of 22) -Vuser 1:query 20 completed in 1.254 seconds +Vuser 1:query 20 completed in 1.249 seconds Vuser 1:Executing Query 6 (5 of 22) -Vuser 1:query 6 completed in 1.671 seconds +Vuser 1:query 6 completed in 1.978 seconds Vuser 1:Executing Query 17 (6 of 22) -Vuser 1:query 17 completed in 0.63 seconds +Vuser 1:query 17 completed in 1.079 seconds Vuser 1:Executing Query 18 (7 of 22) -Vuser 1:query 18 completed in 10.106 seconds +Vuser 1:query 18 completed in 19.45 seconds Vuser 1:Executing Query 8 (8 of 22) -Vuser 1:query 8 completed in 13.784 seconds +Vuser 1:query 8 completed in 11.962 seconds Vuser 1:Executing Query 21 (9 of 22) -Vuser 1:query 21 completed in 46.308 seconds +Vuser 1:query 21 completed in 58.399 seconds Vuser 1:Executing Query 13 (10 of 22) -Vuser 1:query 13 completed in 20.612 seconds +Vuser 1:query 13 completed in 17.475 seconds Vuser 1:Executing Query 3 (11 of 22) -Vuser 1:query 3 completed in 1.416 seconds +Vuser 1:query 3 completed in 4.463 seconds Vuser 1:Executing Query 22 (12 of 22) -Vuser 1:query 22 completed in 0.808 seconds +Vuser 1:query 22 completed in 2.39 seconds Vuser 1:Executing Query 16 (13 of 22) -Vuser 1:query 16 completed in 2.0 seconds +Vuser 1:query 16 completed in 1.152 seconds Vuser 1:Executing Query 4 (14 of 22) -Vuser 1:query 4 
completed in 18.848 seconds +Vuser 1:query 4 completed in 19.246 seconds Vuser 1:Executing Query 11 (15 of 22) -Vuser 1:query 11 completed in 6.146 seconds +Vuser 1:query 11 completed in 2.593 seconds Vuser 1:Executing Query 15 (16 of 22) -Vuser 1:query 15 completed in 1.886 seconds +Vuser 1:query 15 completed in 2.253 seconds Vuser 1:Executing Query 1 (17 of 22) -Vuser 1:query 1 completed in 12.699 seconds +Vuser 1:query 1 completed in 19.213 seconds Vuser 1:Executing Query 10 (18 of 22) -Vuser 1:query 10 completed in 5.707 seconds +Vuser 1:query 10 completed in 21.596 seconds Vuser 1:Executing Query 19 (19 of 22) -Vuser 1:query 19 completed in 1.534 seconds +Vuser 1:query 19 completed in 20.239 seconds Vuser 1:Executing Query 5 (20 of 22) -Vuser 1:query 5 completed in 15.7 seconds +Vuser 1:query 5 completed in 19.305 seconds Vuser 1:Executing Query 7 (21 of 22) -Vuser 1:query 7 completed in 6.19 seconds +Vuser 1:query 7 completed in 6.117 seconds Vuser 1:Executing Query 12 (22 of 22) -Vuser 1:query 12 completed in 10.954 seconds -Vuser 1:Completed 1 query set(s) in 204 seconds +Vuser 1:query 12 completed in 15.223 seconds +Vuser 1:Completed 1 query set(s) in 278 seconds +Vuser 1:Geometric mean of query times returning rows (22) is 6.82555 +
Changing the Query Order @@ -8378,15 +10074,16 @@ if { $myposition > 40 } { set myposition [ expr $myposition % 40 ] } Many test environments are sufficient with running single Virtual User tests. With available parallel and column store configurations this test is sufficient to stress an entire system. Nevertheless a component - of the OSS-TPC-H test is the refresh function should be run either side - of the Power Test. To enable this functionality HammerDB has a special - power test mode, whereby if refresh_on is set to true as shown and only - one virtual user is configured then HammerDB will run a Power Test. Note - that once you selected refresh_on for a single Virtual User in Power - Test Mode the value of update_sets will be set to 1 and the value of - trickle_refresh set to 0 and the value of REFRESH_VERBOSE set to false, - all these values will be set automatically to ensure optimal running of - the Power Test. + of the TPROC-H test is the refresh function, which should be run + either side of the Power Test. To enable this functionality HammerDB + has a special power test mode, whereby if refresh_on is set to true as + shown and only one virtual user is configured then HammerDB will run a + Power Test. Note that once you have selected refresh_on for a single + Virtual User in Power Test Mode the values of update_sets, + trickle_refresh and REFRESH_VERBOSE will be set automatically to 1, 0 + and false respectively to ensure optimal running of the Power + Test.
Power Test Options @@ -8403,7 +10100,7 @@ if { $myposition > 40 } { set myposition [ expr $myposition % 40 ] } for your schema.
- OSS-TPC-H refresh on + TPROC-H refresh on @@ -8441,58 +10138,59 @@ if { $myposition > 40 } { set myposition [ expr $myposition % 40 ] } and you can collect the refresh and query times from the log. - Hammerdb Log @ Thu Apr 19 16:08:22 BST 2018 + Hammerdb Log @ Fri Oct 23 15:41:39 BST 2020 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- Vuser 1:New Sales refresh -Vuser 1:New Sales refresh complete in 46.282 seconds +Vuser 1:New Sales refresh complete in 54.15 seconds Vuser 1:Completed 1 update set(s) Vuser 1:Executing Query 14 (1 of 22) -Vuser 1:query 14 completed in 0.943 seconds +Vuser 1:query 14 completed in 7.868 seconds Vuser 1:Executing Query 2 (2 of 22) -Vuser 1:query 2 completed in 0.332 seconds +Vuser 1:query 2 completed in 0.334 seconds Vuser 1:Executing Query 9 (3 of 22) -Vuser 1:query 9 completed in 21.281 seconds +Vuser 1:query 9 completed in 21.87 seconds Vuser 1:Executing Query 20 (4 of 22) -Vuser 1:query 20 completed in 1.163 seconds +Vuser 1:query 20 completed in 0.816 seconds Vuser 1:Executing Query 6 (5 of 22) -Vuser 1:query 6 completed in 1.93 seconds +Vuser 1:query 6 completed in 0.926 seconds Vuser 1:Executing Query 17 (6 of 22) -Vuser 1:query 17 completed in 0.504 seconds +Vuser 1:query 17 completed in 1.299 seconds Vuser 1:Executing Query 18 (7 of 22) -Vuser 1:query 18 completed in 13.358 seconds +Vuser 1:query 18 completed in 19.289 seconds Vuser 1:Executing Query 8 (8 of 22) -Vuser 1:query 8 completed in 11.419 seconds +Vuser 1:query 8 completed in 4.232 seconds Vuser 1:Executing Query 21 (9 of 22) -Vuser 1:query 21 completed in 55.767 seconds +Vuser 1:query 21 completed in 59.815 seconds Vuser 1:Executing Query 13 (10 of 22) -Vuser 1:query 13 completed in 20.412 seconds +Vuser 1:query 13 completed in 13.889 seconds Vuser 1:Executing Query 3 (11 of 22) -Vuser 1:query 3 completed in 2.435 seconds +Vuser 1:query 3 completed in 5.773 seconds Vuser 1:Executing Query 22 (12 of 22) -Vuser 1:query 22 completed in 1.011 seconds +Vuser 1:query 22 
completed in 0.928 seconds Vuser 1:Executing Query 16 (13 of 22) -Vuser 1:query 16 completed in 2.08 seconds +Vuser 1:query 16 completed in 0.792 seconds Vuser 1:Executing Query 4 (14 of 22) -Vuser 1:query 4 completed in 18.064 seconds +Vuser 1:query 4 completed in 19.258 seconds Vuser 1:Executing Query 11 (15 of 22) -Vuser 1:query 11 completed in 5.83 seconds +Vuser 1:query 11 completed in 0.497 seconds Vuser 1:Executing Query 15 (16 of 22) -Vuser 1:query 15 completed in 2.003 seconds +Vuser 1:query 15 completed in 9.436 seconds Vuser 1:Executing Query 1 (17 of 22) -Vuser 1:query 1 completed in 13.503 seconds +Vuser 1:query 1 completed in 16.067 seconds Vuser 1:Executing Query 10 (18 of 22) -Vuser 1:query 10 completed in 5.548 seconds +Vuser 1:query 10 completed in 22.284 seconds Vuser 1:Executing Query 19 (19 of 22) -Vuser 1:query 19 completed in 1.525 seconds +Vuser 1:query 19 completed in 19.648 seconds Vuser 1:Executing Query 5 (20 of 22) -Vuser 1:query 5 completed in 9.765 seconds +Vuser 1:query 5 completed in 18.98 seconds Vuser 1:Executing Query 7 (21 of 22) -Vuser 1:query 7 completed in 3.69 seconds +Vuser 1:query 7 completed in 6.089 seconds Vuser 1:Executing Query 12 (22 of 22) -Vuser 1:query 12 completed in 3.226 seconds -Vuser 1:Completed 1 query set(s) in 196 seconds +Vuser 1:query 12 completed in 12.512 seconds +Vuser 1:Completed 1 query set(s) in 263 seconds +Vuser 1:Geometric mean of query times returning rows (22) is 5.43452 Vuser 1:Old Sales refresh -Vuser 1:Old Sales refresh complete in 23.926 seconds +Vuser 1:Old Sales refresh complete in 16.016 seconds Vuser 1:Completed 1 update set(s) Be aware that some databases are considerably better at running @@ -8553,7 +10251,11 @@ Cannot insert duplicate key in object 'dbo.orders'. The duplicate key value is (
- Once set the refresh and queries will run as expected. + Once set, the refresh and queries will run as expected. If you + observe foreign key or constraint violation errors after restoring a + schema, verify that the scale factor in the driver script matches the + scale factor of the schema you have restored.
SQL Server with Snapshot Isolation @@ -8570,8 +10272,7 @@ Cannot insert duplicate key in object 'dbo.orders'. The duplicate key value is ( full query set to complete. You do not need to wait for the trickled refresh function to complete, however must have configured enough update sets to ensure that the refresh function remains running whilst - the throughput test completes. In the example the value is 494 seconds - for the slowest query set to complete. + the throughput test completes.
Throughput test complete @@ -8588,7 +10289,7 @@ Cannot insert duplicate key in object 'dbo.orders'. The duplicate key value is (
Calculate the Geometric Mean - For comparison of HammerDB OSS-TPC-H workloads between systems it is + For comparison of HammerDB TPROC-H workloads between systems it is recommended to use the geometric mean of the query times. This can be a straightforward comparison between power tests. When comparing throughput tests it is recommended to compare the geomean of the slowest @@ -8623,187 +10324,6 @@ GEOMEAN 4.011822724
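The GEOMEAN spreadsheet calculation above can equally be scripted. The following is a minimal sketch in Python; the query times are illustrative placeholder values rather than results from any particular benchmark run:

```python
import math

# Query times in seconds for queries that returned rows (illustrative values).
query_times = [7.301, 0.952, 24.457, 1.249, 1.978]

# Geometric mean: the exponential of the arithmetic mean of the natural logs,
# equivalent to the GEOMEAN spreadsheet function used above.
geomean = math.exp(sum(math.log(t) for t in query_times) / len(query_times))
print(f"Geometric mean of {len(query_times)} query times is {geomean:.5f}")
```

The log-sum form is preferred over multiplying all the times together because the product of a full 22-query set can overflow or lose precision.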
- - Remote Modes - - HammerDB allows for multiple instances of the HammerDB program to - run in Master and Slave modes. Running with multiple modes enables the - additional instances to be controlled by a single master instance either - on the same load testing server or across the network. This functionality - can be particularly applicable when testing Virtualized environments and - the desire is to test multiple databases running in virtualized guests at - the same time. Similarly this functionality is useful for clustered - databases with multiple instances such as Oracle Real Application Clusters - and wishing to partition a load precisely across servers. HammerDB Remote - Modes are entirely operating system independent and therefore an instance - of HammerDB running on Windows can be Master to one or more instances - running on Linux and vice versa. Additionally there is no requirement for - the workload to be the same and therefore it would be possible to connect - multiple instances of HammerDB running on Windows and Linux simultaneously - testing SQL Server, Oracle, MySQL and PostrgreSQL workloads in a - virtualized environment. In the bottom right hand corner of the interface - the status bar shows the mode that HammerDB is running in. By default this - will be Local Mode. - -
- Mode - - - - - - -
- - - Master Mode - - From the tree-view select Mode Options. - -
- Mode Options - - - - - - -
- - This displays the Mode Options as shown in Figure 3 confirming - that the current mode is Local. - -
- Mode Options Select - - - - - - -
- - Select Master Mode and click OK. - -
- Master Mode Select - - - - - - -
- - Confirm the selection. - -
- Mode Confirmation - - - - - - -
- - This will show that Master Mode is now active and the ID and - hostname it is running on. - -
- Mode Active - - - - - - -
Note that this will also be recorded in the console display - and the current Mode displayed in the status bar at the bottom right of - the Window.
- - Setting Master Mode at id : 14296, hostname : osprey -
- - - Slave Mode - - On another instance of HammerDB select Slave Mode, enter the id - and hostname of the master and select OK. - -
- Slave Mode - - - - - - -
- - Confirm the change and observe the Mode connection on both the - Slave - - Setting Slave Mode at id : 11264, hostname : osprey -Slave connecting to osprey 14296 : Connection suceeded -Master call back successful - - and the Master - - Received a new slave connection from host fe80::69db:2b4d:edd7:962a%16 -New slave joined : {11264 osprey} -
- - - Master Distribution - - The Master Distribution button in the edit menu now becomes active - to distribute scripts across instances. - -
- Master Distribution - - - - - - -
- - Pressing this button enables the distribution of the contents of - the Script Editor to all connected instances. - - Distributing to 11264 osprey ...Succeeded - - The contents however may remain to be individually edited on - remote instances. Note that in particular the OLTP timed tests account - for the particular mode that an instance is running. - - If loaded locally a script will show the Mode that the instance of - HammerDB is running in. - - set mode "Slave" ;# HammerDB operational mode - - If distributed it will inherit the mode and will need to be - changed manually. - - set mode "Master" ;# HammerDB operational mode - - When running in slave Mode the Monitor Virtual User is created but - idle and does not monitor the workload. - - Now run your workload as normal on the Master, all of your - workload choices of creating and running and closing down virtual users - will be replicated automatically on the connected Slaves enabling - control and simultaneous timing from a central point. However, note that - running a schema creation with multiple connected instances is not - supported. - - To disable Remote Modes select Local Mode on the Master on - confirmation all connected instances will return to Local Mode. -
-
- Generating and Loading Bulk Datasets @@ -8873,9 +10393,9 @@ New slave joined : {11264 osprey}
- Depending upon whether you have selected a TPC-C or TPC-H + Depending upon whether you have selected a TPROC-C or TPROC-H benchmark under the benchmark options the dialog will be different. For - the TPC-H options select the Scale Factor, the directory that you have + the TPROC-H options select the Scale Factor, the directory that you have pre-created and the number of Virtual Users to generate the schema. @@ -9014,21 +10534,21 @@ New slave joined : {11264 osprey} generate data for. ./hammerdbcli -HammerDB CLI v3.0 -Copyright (C) 2003-2018 Steve Shaw +HammerDB CLI v4.0 +Copyright (C) 2003-2020 Steve Shaw Type "help" for a list of commands The xml is well-formed, applying configuration hammerdb>print datagen Data Generation set to build a TPC-C schema for Oracle with 1 warehouses with 1 virtual users in /tmp -hammerdb>dbset bm TPC-H -Benchmark set to TPC-H for Oracle +hammerdb>dbset bm TPROC-H +Benchmark set to TPROC-H for Oracle hammerdb>print datagen Data Generation set to build a TPC-H schema for Oracle with 1 scale factor with 1 virtual users in /tmp -hammerdb>dbset bm TPC-C -Benchmark set to TPC-C for Oracle +hammerdb>dbset bm TPROC-C +Benchmark set to TPROC-C for Oracle hammerdb>print datagen Data Generation set to build a TPC-C schema for Oracle with 1 warehouses with 1 virtual users in /tmp @@ -9089,14 +10609,14 @@ Vuser 2:Generating Stock Wid=1 Generating a template database is exceptionally straightforward. From the HammerDB documentation follow the steps for Build a Schema and - create the smallest size database such as Scale Factor 1 for OSS-TPC-H. This - database can then be used as a template to capture the DDL. 
Note that if - you stop the database creation after the tables are created but before - all of the data is loaded objects such as indexes will not have been - created and will not therefore be included in generated DDL, this may or - may not be an issue for the type of schema you are intending to build, - for example for a column store such as Amazon Redshift, indexes are not - a requirement. + create the smallest size database such as Scale Factor 1 for TPROC-H. + This database can then be used as a template to capture the DDL. Note + that if you stop the database creation after the tables are created but + before all of the data is loaded objects such as indexes will not have + been created and will not therefore be included in generated DDL, this + may or may not be an issue for the type of schema you are intending to + build, for example for a column store such as Amazon Redshift, indexes + are not a requirement.
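The generated flat files use a simple delimited format: one row per line, fields separated by '|', and NULL represented as an empty field, which is the format the \copy and LOAD DATA commands later in this section assume. As a sketch, writing a file in that format from Python could look like the following; the file name and column values are illustrative only, not real TPROC-C data:

```python
import csv

# Write a '|'-delimited flat file: one row per line, NULL represented
# as an empty field (illustrative rows, not real TPROC-C data).
rows = [
    [1, 1, 1, "", "2020-10-23 15:31:59"],  # NULL column -> empty field
    [2, 1, 1, 5, "2020-10-23 15:32:10"],
]
with open("orders_1.tbl", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|", lineterminator="\n")
    writer.writerows(rows)
```

The empty field in the first row is why the loader commands shown below need explicit NULL handling, such as WITH NULL AS '' for PostgreSQL or a SET with nullif for MySQL.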
Capture and run the table creation DDL @@ -9514,7 +11034,7 @@ Records: 150000 Deleted: 0 Skipped: 0 Warnings: 0 Query OK, 150000 rows affected (1.56 sec) Records: 150000 Deleted: 0 Skipped: 0 Warnings: 0 -For the MySQL TPC-C schema NULLS are not automatically +For the MySQL TPROC-C schema NULLS are not automatically recognised and a SET command is required as follows for the ORDER_LINE and ORDERS table: @@ -9552,7 +11072,7 @@ set o_carrier_id = nullif(@id1,''); psql -U postgres -d tpch -f TPCHCOPY.sql With PostgreSQL additional lines are required to handle NULL - value for the TPC-C schema as follows: + value for the TPROC-C schema as follows: \copy order_line from '/home/postgres/TPCCDATA/order_line_1.tbl' WITH NULL AS '' DELIMITER AS '|'; \copy orders from '/home/postgres/TPCCDATA/orders_1.tbl' WITH NULL AS '' DELIMITER AS '|'; @@ -9762,7 +11282,7 @@ EXEC #140169830722080:c=0,e=214,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=558 For more information on the trace file format the document Note:39817.1 Subject “Interpreting Raw SQL_TRACE output “ available from - My Oracle Support a, however this knowledge is not essential as HammerDB + My Oracle Support, however this knowledge is not essential as HammerDB can convert this raw format into a form that can be replayed.
@@ -9883,21 +11403,7 @@ EXEC #140169830722080:c=0,e=214,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=558 The tracefile has now been replayed with multiple Virtual Users illustrating the basic building blocks for creating a bespoke Oracle - workload from a captured and converted trace file. As a further examples - using the same logon trigger for the TPCC user the HammerDB OLTP - workload has now been loaded and for example purposes the number of - transactions set to 10. Once converted HammerDB will correctly format, - extract and insert bind variables for workload replay. - -
- Bind Variables - - - - - - -
+ workload from a captured and converted trace file. @@ -9912,8 +11418,8 @@ EXEC #140169830722080:c=0,e=214,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=558 failed. To get the actual Oracle error message underlying this error you can use the TCL "catch" command to capture the error and print it out inline. For example if you wanted to see the error in the neword - procedure of the TPC-C script change the oraplexec line to look like the - following : + procedure of the TPROC-C script change the oraplexec line to look like + the following : if {[ catch {oraplexec $curn1 $sql5 :no_w_id $no_w_id :no_max_w_id $w_id_input :no_d_id $no_d_id :no_c_id $no_c_id :no_o_ol_cnt $ol_cnt :no_c_discount {NULL} :no_c_last {NULL} :no_c_credit {NULL} :no_d_tax {NULL} :no_w_tax {NULL} :no_d_next_o_id {0} :timestamp $date} message]} { puts $message @@ -9950,14 +11456,601 @@ PL/SQL: Statement ignored} 0 0 4 8} - Copyright + HTTP and HTTPS Testing + + HammerDB can also be used for HTTP and HTTPS testing by using + bespoke scripts. HammerDB uses the TLS package for HTTPS + encryption. + + + HTTPS Script - Copyright © 2019 by Steve Shaw All rights reserved. + An example HTTPS script is shown. Modify the URL to your own URL. + The example will fetch the URL in a loop for 10 iterations and report + the time taken. - This documentation or any portion thereof may not be reproduced or - used in any manner whatsoever without permission. 
+ #!./bin/tclsh8.6 + package require http + package require tls + #package require twapi_crypto - http://www.hammerdb.com +proc http::Log {args} { +puts $args +} + +proc cb {token} { +if { [ http::status $token ] != "ok" } { error "Response not OK [ http::error $token]" } else { +http::cleanup $token + } +set ::done 1 +} + + +set url "https://MYURL" +#http::register https 443 [list ::twapi::tls_socket -async ] +http::register https 443 [list ::tls::socket -async -tls1 1 -ssl3 false -ssl2 false -autoservername true ] + +set response [ http::geturl $url -keepalive true ] + puts [ http::status $response ] + set so [ lsearch -inline [ chan names ] sock* ] + +chan configure $so -buffering none -encoding binary -blocking 1 -translation crlf -eofchar {{} {}} + ::tls::handshake $so + puts [ ::tls::status $so ] + +chan configure $so -buffering full -buffersize 8192 -encoding binary -blocking 0 -translation crlf -eofchar {{} {}} + set test_count 10 + set response_count 0 + set total_time 0 + set start_time [clock milliseconds] + for {set i 0} {$i < $test_count} {incr i} { + set start_time [clock milliseconds] + set ::done 0 +set response [ http::geturl $url -binary true -blocksize 8192 -timeout 10000 -keepalive true -command cb ] +vwait ::done + set end_time [clock milliseconds] + set elapsed [expr {$end_time - $start_time}] + incr total_time $elapsed + puts "Call $i took $elapsed" + } + set end_time [clock milliseconds] +puts "$test_count iterations in $total_time at an average of [expr {$total_time / $test_count}]" + + + + + + HTTPS Output + + The following shows the output of the script run against + www.hammerdb.com. 
+ + Vuser 1:^A1 URL https://www.hammerdb.com - token ::http::1 +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::1} keepalive +Vuser 1:^B1 begin sending request - token ::http::1 +Vuser 1:^C1 end sending request - token ::http::1 +Vuser 1:^D1 begin receiving response - token ::http::1 +Vuser 1:^E1 end of response headers - token ::http::1 +Vuser 1:^F1 end of response body (unchunked) - token ::http::1 +Vuser 1:ok +Vuser 1:sha1_hash 1C27C238029A8E34DBC1C7DBDE500DCD42C38446 subject CN=hammerdb.com issuer CN=Encryption Everywhere DV TLS CA - G1,OU=www.digicert.com,O=DigiCert Inc,C=US notBefore Sep 9 00:00:00 2020 GMT notAfter Sep 19 12:00:00 2021 GMT serial 0D9F3C403C1001E7A648E621BECF67F1 certificate +-----BEGIN CERTIFICATE----- +... +-----END CERTIFICATE----- + sbits 256 cipher ECDHE-RSA-AES256-GCM-SHA384 +Vuser 1:^A2 URL https://www.hammerdb.com - token ::http::2 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::2} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::2} keepalive +Vuser 1:^B2 begin sending request - token ::http::2 +Vuser 1:^C2 end sending request - token ::http::2 +Vuser 1:^D2 begin receiving response - token ::http::2 +Vuser 1:^E2 end of response headers - token ::http::2 +Vuser 1:^F2 end of response body (unchunked) - token ::http::2 +Vuser 1:Call 0 took 175 +Vuser 1:^A3 URL https://www.hammerdb.com - token ::http::3 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::3} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::3} keepalive +Vuser 1:^B3 begin sending request - token ::http::3 +Vuser 1:^C3 end sending request - token ::http::3 +Vuser 1:^D3 begin receiving response - token ::http::3 +Vuser 1:^E3 end of response headers - token ::http::3 +Vuser 1:^F3 end of response body (unchunked) - token ::http::3 +Vuser 1:Call 1 took 62 +Vuser 1:^A4 URL https://www.hammerdb.com - token ::http::4 +Vuser 1:{reusing socket 
sock55f342ae0870 for www.hammerdb.com:443 - token ::http::4} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::4} keepalive +Vuser 1:^B4 begin sending request - token ::http::4 +Vuser 1:^C4 end sending request - token ::http::4 +Vuser 1:^D4 begin receiving response - token ::http::4 +Vuser 1:^E4 end of response headers - token ::http::4 +Vuser 1:^F4 end of response body (unchunked) - token ::http::4 +Vuser 1:Call 2 took 56 +Vuser 1:^A5 URL https://www.hammerdb.com - token ::http::5 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::5} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::5} keepalive +Vuser 1:^B5 begin sending request - token ::http::5 +Vuser 1:^C5 end sending request - token ::http::5 +Vuser 1:^D5 begin receiving response - token ::http::5 +Vuser 1:^E5 end of response headers - token ::http::5 +Vuser 1:^F5 end of response body (unchunked) - token ::http::5 +Vuser 1:Call 3 took 65 +Vuser 1:^A6 URL https://www.hammerdb.com - token ::http::6 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::6} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::6} keepalive +Vuser 1:^B6 begin sending request - token ::http::6 +Vuser 1:^C6 end sending request - token ::http::6 +Vuser 1:^D6 begin receiving response - token ::http::6 +Vuser 1:^E6 end of response headers - token ::http::6 +Vuser 1:^F6 end of response body (unchunked) - token ::http::6 +Vuser 1:Call 4 took 73 +Vuser 1:^A7 URL https://www.hammerdb.com - token ::http::7 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::7} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::7} keepalive +Vuser 1:^B7 begin sending request - token ::http::7 +Vuser 1:^C7 end sending request - token ::http::7 +Vuser 1:^D7 begin receiving response - token ::http::7 +Vuser 1:^E7 end of response headers - token ::http::7 +Vuser 1:^F7 end of response 
body (unchunked) - token ::http::7 +Vuser 1:Call 5 took 62 +Vuser 1:^A8 URL https://www.hammerdb.com - token ::http::8 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::8} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::8} keepalive +Vuser 1:^B8 begin sending request - token ::http::8 +Vuser 1:^C8 end sending request - token ::http::8 +Vuser 1:^D8 begin receiving response - token ::http::8 +Vuser 1:^E8 end of response headers - token ::http::8 +Vuser 1:^F8 end of response body (unchunked) - token ::http::8 +Vuser 1:Call 6 took 77 +Vuser 1:^A9 URL https://www.hammerdb.com - token ::http::9 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::9} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::9} keepalive +Vuser 1:^B9 begin sending request - token ::http::9 +Vuser 1:^C9 end sending request - token ::http::9 +Vuser 1:^D9 begin receiving response - token ::http::9 +Vuser 1:^E9 end of response headers - token ::http::9 +Vuser 1:^F9 end of response body (unchunked) - token ::http::9 +Vuser 1:Call 7 took 52 +Vuser 1:^A10 URL https://www.hammerdb.com - token ::http::10 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::10} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::10} keepalive +Vuser 1:^B10 begin sending request - token ::http::10 +Vuser 1:^C10 end sending request - token ::http::10 +Vuser 1:^D10 begin receiving response - token ::http::10 +Vuser 1:^E10 end of response headers - token ::http::10 +Vuser 1:^F10 end of response body (unchunked) - token ::http::10 +Vuser 1:Call 8 took 52 +Vuser 1:^A11 URL https://www.hammerdb.com - token ::http::11 +Vuser 1:{reusing socket sock55f342ae0870 for www.hammerdb.com:443 - token ::http::11} +Vuser 1:{Using sock55f342ae0870 for www.hammerdb.com:443 - token ::http::11} keepalive +Vuser 1:^B11 begin sending request - token ::http::11 +Vuser 1:^C11 end sending 
request - token ::http::11 +Vuser 1:^D11 begin receiving response - token ::http::11 +Vuser 1:^E11 end of response headers - token ::http::11 +Vuser 1:^F11 end of response body (unchunked) - token ::http::11 +Vuser 1:Call 9 took 53 +Vuser 1:10 iterations in 727 at an average of 72 + + + + + + GNU Free Documentation License + + ## GNU Free Documentation License + + Version 1.3, 3 November 2008 + + Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, + Inc. <https://fsf.org/> + + Everyone is permitted to copy and distribute verbatim copies of this + license document, but changing it is not allowed. + + #### 0. PREAMBLE The purpose of this License is to make a manual, + textbook, or other functional and useful document "free" in the sense of + freedom: to assure everyone the effective freedom to copy and redistribute + it, with or without modifying it, either commercially or noncommercially. + Secondarily, this License preserves for the author and publisher a way to + get credit for their work, while not being considered responsible for + modifications made by others. + + This License is a kind of "copyleft", which means that derivative + works of the document must themselves be free in the same sense. It + complements the GNU General Public License, which is a copyleft license + designed for free software. + + We have designed this License in order to use it for manuals for + free software, because free software needs free documentation: a free + program should come with manuals providing the same freedoms that the + software does. But this License is not limited to software manuals; it can + be used for any textual work, regardless of subject matter or whether it + is published as a printed book. We recommend this License principally for + works whose purpose is instruction or reference. + + #### 1. 
APPLICABILITY AND DEFINITIONS + + This License applies to any manual or other work, in any medium, + that contains a notice placed by the copyright holder saying it can be + distributed under the terms of this License. Such a notice grants a + world-wide, royalty-free license, unlimited in duration, to use that work + under the conditions stated herein. The "Document", below, refers to any + such manual or work. Any member of the public is a licensee, and is + addressed as "you". You accept the license if you copy, modify or + distribute the work in a way requiring permission under copyright + law. + + A "Modified Version" of the Document means any work containing the + Document or a portion of it, either copied verbatim, or with modifications + and/or translated into another language. + + A "Secondary Section" is a named appendix or a front-matter section + of the Document that deals exclusively with the relationship of the + publishers or authors of the Document to the Document's overall subject + (or to related matters) and contains nothing that could fall directly + within that overall subject. (Thus, if the Document is in part a textbook + of mathematics, a Secondary Section may not explain any mathematics.) The + relationship could be a matter of historical connection with the subject + or with related matters, or of legal, commercial, philosophical, ethical + or political position regarding them. + + The "Invariant Sections" are certain Secondary Sections whose titles + are designated, as being those of Invariant Sections, in the notice that + says that the Document is released under this License. If a section does + not fit the above definition of Secondary then it is not allowed to be + designated as Invariant. The Document may contain zero Invariant Sections. + If the Document does not identify any Invariant Sections then there are + none. 
+ + The "Cover Texts" are certain short passages of text that are + listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says + that the Document is released under this License. A Front-Cover Text may + be at most 5 words, and a Back-Cover Text may be at most 25 words. + + A "Transparent" copy of the Document means a machine-readable copy, + represented in a format whose specification is available to the general + public, that is suitable for revising the document straightforwardly with + generic text editors or (for images composed of pixels) generic paint + programs or (for drawings) some widely available drawing editor, and that + is suitable for input to text formatters or for automatic translation to a + variety of formats suitable for input to text formatters. A copy made in + an otherwise Transparent file format whose markup, or absence of markup, + has been arranged to thwart or discourage subsequent modification by + readers is not Transparent. An image format is not Transparent if used for + any substantial amount of text. A copy that is not "Transparent" is called + "Opaque". + + Examples of suitable formats for Transparent copies include plain + ASCII without markup, Texinfo input format, LaTeX input format, SGML or + XML using a publicly available DTD, and standard-conforming simple HTML, + PostScript or PDF designed for human modification. Examples of transparent + image formats include PNG, XCF and JPG. Opaque formats include proprietary + formats that can be read and edited only by proprietary word processors, + SGML or XML for which the DTD and/or processing tools are not generally + available, and the machine-generated HTML, PostScript or PDF produced by + some word processors for output purposes only. + + The "Title Page" means, for a printed book, the title page itself, + plus such following pages as are needed to hold, legibly, the material + this License requires to appear in the title page. 
For works in formats + which do not have any title page as such, "Title Page" means the text near + the most prominent appearance of the work's title, preceding the beginning + of the body of the text. + + The "publisher" means any person or entity that distributes copies + of the Document to the public. + + A section "Entitled XYZ" means a named subunit of the Document whose + title either is precisely XYZ or contains XYZ in parentheses following + text that translates XYZ in another language. (Here XYZ stands for a + specific section name mentioned below, such as "Acknowledgements", + "Dedications", "Endorsements", or "History".) To "Preserve the Title" of + such a section when you modify the Document means that it remains a + section "Entitled XYZ" according to this definition. + + The Document may include Warranty Disclaimers next to the notice + which states that this License applies to the Document. These Warranty + Disclaimers are considered to be included by reference in this License, + but only as regards disclaiming warranties: any other implication that + these Warranty Disclaimers may have is void and has no effect on the + meaning of this License. + + #### 2. VERBATIM COPYING + + You may copy and distribute the Document in any medium, either + commercially or noncommercially, provided that this License, the copyright + notices, and the license notice saying this License applies to the + Document are reproduced in all copies, and that you add no other + conditions whatsoever to those of this License. You may not use technical + measures to obstruct or control the reading or further copying of the + copies you make or distribute. However, you may accept compensation in + exchange for copies. If you distribute a large enough number of copies you + must also follow the conditions in section 3. + + You may also lend copies, under the same conditions stated above, + and you may publicly display copies. + + #### 3. 
COPYING IN QUANTITY + + If you publish printed copies (or copies in media that commonly have + printed covers) of the Document, numbering more than 100, and the + Document's license notice requires Cover Texts, you must enclose the + copies in covers that carry, clearly and legibly, all these Cover Texts: + Front-Cover Texts on the front cover, and Back-Cover Texts on the back + cover. Both covers must also clearly and legibly identify you as the + publisher of these copies. The front cover must present the full title + with all words of the title equally prominent and visible. You may add + other material on the covers in addition. Copying with changes limited to + the covers, as long as they preserve the title of the Document and satisfy + these conditions, can be treated as verbatim copying in other + respects. + + If the required texts for either cover are too voluminous to fit + legibly, you should put the first ones listed (as many as fit reasonably) + on the actual cover, and continue the rest onto adjacent pages. + + If you publish or distribute Opaque copies of the Document numbering + more than 100, you must either include a machine-readable Transparent copy + along with each Opaque copy, or state in or with each Opaque copy a + computer-network location from which the general network-using public has + access to download using public-standard network protocols a complete + Transparent copy of the Document, free of added material. If you use the + latter option, you must take reasonably prudent steps, when you begin + distribution of Opaque copies in quantity, to ensure that this Transparent + copy will remain thus accessible at the stated location until at least one + year after the last time you distribute an Opaque copy (directly or + through your agents or retailers) of that edition to the public. 
+ + It is requested, but not required, that you contact the authors of + the Document well before redistributing any large number of copies, to + give them a chance to provide you with an updated version of the + Document. + + #### 4. MODIFICATIONS + + You may copy and distribute a Modified Version of the Document under + the conditions of sections 2 and 3 above, provided that you release the + Modified Version under precisely this License, with the Modified Version + filling the role of the Document, thus licensing distribution and + modification of the Modified Version to whoever possesses a copy of it. In + addition, you must do these things in the Modified Version: + + - A. Use in the Title Page (and on the covers, if any) a title + distinct from that of the Document, and from those of previous versions + (which should, if there were any, be listed in the History section of the + Document). You may use the same title as a previous version if the + original publisher of that version gives permission. + + - B. List on the Title Page, as authors, one or more persons or + entities responsible for authorship of the modifications in the Modified + Version, together with at least five of the principal authors of the + Document (all of its principal authors, if it has fewer than five), unless + they release you from this requirement. + + - C. State on the Title page the name of the publisher of the + Modified Version, as the publisher. + + - D. Preserve all the copyright notices of the Document. + + - E. Add an appropriate copyright notice for your modifications + adjacent to the other copyright notices. + + - F. Include, immediately after the copyright notices, a license + notice giving the public permission to use the Modified Version under the + terms of this License, in the form shown in the Addendum below. + + - G. Preserve in that license notice the full lists of Invariant + Sections and required Cover Texts given in the Document's license + notice. + + - H. 
Include an unaltered copy of this License. + + - I. Preserve the section Entitled "History", Preserve its Title, + and add to it an item stating at least the title, year, new authors, and + publisher of the Modified Version as given on the Title Page. If there is + no section Entitled "History" in the Document, create one stating the + title, year, authors, and publisher of the Document as given on its Title + Page, then add an item describing the Modified Version as stated in the + previous sentence. + + - J. Preserve the network location, if any, given in the Document + for public access to a Transparent copy of the Document, and likewise the + network locations given in the Document for previous versions it was based + on. These may be placed in the "History" section. You may omit a network + location for a work that was published at least four years before the + Document itself, or if the original publisher of the version it refers to + gives permission. + + - K. For any section Entitled "Acknowledgements" or "Dedications", + Preserve the Title of the section, and preserve in the section all the + substance and tone of each of the contributor acknowledgements and/or + dedications given therein. + + - L. Preserve all the Invariant Sections of the Document, unaltered + in their text and in their titles. Section numbers or the equivalent are + not considered part of the section titles. + + - M. Delete any section Entitled "Endorsements". Such a section may + not be included in the Modified Version. + + - N. Do not retitle any existing section to be Entitled + "Endorsements" or to conflict in title with any Invariant Section. + + - O. Preserve any Warranty Disclaimers. + + If the Modified Version includes new front-matter sections or + appendices that qualify as Secondary Sections and contain no material + copied from the Document, you may at your option designate some or all of + these sections as invariant. 
+ To do this, add their titles to the list of + Invariant Sections in the Modified Version's license notice. These titles + must be distinct from any other section titles. + + You may add a section Entitled "Endorsements", provided it contains + nothing but endorsements of your Modified Version by various parties, for + example, statements of peer review or that the text has been approved by + an organization as the authoritative definition of a standard. + + You may add a passage of up to five words as a Front-Cover Text, and + a passage of up to 25 words as a Back-Cover Text, to the end of the list + of Cover Texts in the Modified Version. Only one passage of Front-Cover + Text and one of Back-Cover Text may be added by (or through arrangements + made by) any one entity. If the Document already includes a cover text for + the same cover, previously added by you or by arrangement made by the same + entity you are acting on behalf of, you may not add another; but you may + replace the old one, on explicit permission from the previous publisher + that added the old one. + + The author(s) and publisher(s) of the Document do not by this + License give permission to use their names for publicity for or to assert + or imply endorsement of any Modified Version. + + #### 5. COMBINING DOCUMENTS + + You may combine the Document with other documents released under + this License, under the terms defined in section 4 above for modified + versions, provided that you include in the combination all of the + Invariant Sections of all of the original documents, unmodified, and list + them all as Invariant Sections of your combined work in its license + notice, and that you preserve all their Warranty Disclaimers. + + The combined work need only contain one copy of this License, and + multiple identical Invariant Sections may be replaced with a single copy. 
+ If there are multiple Invariant Sections with the same name but different + contents, make the title of each such section unique by adding at the end + of it, in parentheses, the name of the original author or publisher of + that section if known, or else a unique number. Make the same adjustment + to the section titles in the list of Invariant Sections in the license + notice of the combined work. + + In the combination, you must combine any sections Entitled "History" + in the various original documents, forming one section Entitled "History"; + likewise combine any sections Entitled "Acknowledgements", and any + sections Entitled "Dedications". You must delete all sections Entitled + "Endorsements". + + #### 6. COLLECTIONS OF DOCUMENTS + + You may make a collection consisting of the Document and other + documents released under this License, and replace the individual copies + of this License in the various documents with a single copy that is + included in the collection, provided that you follow the rules of this + License for verbatim copying of each of the documents in all other + respects. + + You may extract a single document from such a collection, and + distribute it individually under this License, provided you insert a copy + of this License into the extracted document, and follow this License in + all other respects regarding verbatim copying of that document. + + #### 7. AGGREGATION WITH INDEPENDENT WORKS + + A compilation of the Document or its derivatives with other separate + and independent documents or works, in or on a volume of a storage or + distribution medium, is called an "aggregate" if the copyright resulting + from the compilation is not used to limit the legal rights of the + compilation's users beyond what the individual works permit. When the + Document is included in an aggregate, this License does not apply to the + other works in the aggregate which are not themselves derivative works of + the Document. 
+ + If the Cover Text requirement of section 3 is applicable to these + copies of the Document, then if the Document is less than one half of the + entire aggregate, the Document's Cover Texts may be placed on covers that + bracket the Document within the aggregate, or the electronic equivalent of + covers if the Document is in electronic form. Otherwise they must appear + on printed covers that bracket the whole aggregate. + + #### 8. TRANSLATION + + Translation is considered a kind of modification, so you may + distribute translations of the Document under the terms of section 4. + Replacing Invariant Sections with translations requires special permission + from their copyright holders, but you may include translations of some or + all Invariant Sections in addition to the original versions of these + Invariant Sections. You may include a translation of this License, and all + the license notices in the Document, and any Warranty Disclaimers, + provided that you also include the original English version of this + License and the original versions of those notices and disclaimers. In + case of a disagreement between the translation and the original version of + this License or a notice or disclaimer, the original version will + prevail. + + If a section in the Document is Entitled "Acknowledgements", + "Dedications", or "History", the requirement (section 4) to Preserve its + Title (section 1) will typically require changing the actual title. + + #### 9. TERMINATION + + You may not copy, modify, sublicense, or distribute the Document + except as expressly provided under this License. Any attempt otherwise to + copy, modify, sublicense, or distribute it is void, and will automatically + terminate your rights under this License. 
+ + However, if you cease all violation of this License, then your + license from a particular copyright holder is reinstated (a) + provisionally, unless and until the copyright holder explicitly and + finally terminates your license, and (b) permanently, if the copyright + holder fails to notify you of the violation by some reasonable means prior + to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is + reinstated permanently if the copyright holder notifies you of the + violation by some reasonable means, this is the first time you have + received notice of violation of this License (for any work) from that + copyright holder, and you cure the violation prior to 30 days after your + receipt of the notice. + + Termination of your rights under this section does not terminate the + licenses of parties who have received copies or rights from you under this + License. If your rights have been terminated and not permanently + reinstated, receipt of a copy of some or all of the same material does not + give you any rights to use it. + + #### 10. FUTURE REVISIONS OF THIS LICENSE + + The Free Software Foundation may publish new, revised versions of + the GNU Free Documentation License from time to time. Such new versions + will be similar in spirit to the present version, but may differ in detail + to address new problems or concerns. See + <https://www.gnu.org/licenses/>. + + Each version of the License is given a distinguishing version + number. If the Document specifies that a particular numbered version of + this License "or any later version" applies to it, you have the option of + following the terms and conditions either of that specified version or of + any later version that has been published (not as a draft) by the Free + Software Foundation. If the Document does not specify a version number of + this License, you may choose any version ever published (not as a draft) + by the Free Software Foundation. 
If the Document specifies that a proxy + can decide which future versions of this License can be used, that proxy's + public statement of acceptance of a version permanently authorizes you to + choose that version for the Document. + + #### 11. RELICENSING + + "Massive Multiauthor Collaboration Site" (or "MMC Site") means any + World Wide Web server that publishes copyrightable works and also provides + prominent facilities for anybody to edit those works. A public wiki that + anybody can edit is an example of such a server. A "Massive Multiauthor + Collaboration" (or "MMC") contained in the site means any set of + copyrightable works thus published on the MMC site. + + "CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 + license published by Creative Commons Corporation, a not-for-profit + corporation with a principal place of business in San Francisco, + California, as well as future copyleft versions of that license published + by that same organization. + + "Incorporate" means to publish or republish a Document, in whole or + in part, as part of another Document. + + An MMC is "eligible for relicensing" if it is licensed under this + License, and if all works that were first published under this License + somewhere other than this MMC, and subsequently incorporated in whole or + in part into the MMC, (1) had no cover texts or invariant sections, and + (2) were thus incorporated prior to November 1, 2008. + + The operator of an MMC Site may republish an MMC contained in the + site under CC-BY-SA on the same site at any time before August 1, 2009, + provided the MMC is eligible for relicensing. 
diff --git a/DocBook/docs/images/annot-close.png b/DocBook/docs/images/annot-close.png new file mode 100644 index 00000000..b9e1a0d5 Binary files /dev/null and b/DocBook/docs/images/annot-close.png differ diff --git a/DocBook/docs/images/annot-open.png b/DocBook/docs/images/annot-open.png new file mode 100644 index 00000000..71040ec8 Binary files /dev/null and b/DocBook/docs/images/annot-open.png differ diff --git a/DocBook/docs/images/blank.png b/DocBook/docs/images/blank.png new file mode 100644 index 00000000..764bf4f0 Binary files /dev/null and b/DocBook/docs/images/blank.png differ diff --git a/DocBook/docs/images/callouts/1.png b/DocBook/docs/images/callouts/1.png new file mode 100644 index 00000000..7d473430 Binary files /dev/null and b/DocBook/docs/images/callouts/1.png differ diff --git a/DocBook/docs/images/callouts/10.png b/DocBook/docs/images/callouts/10.png new file mode 100644 index 00000000..997bbc82 Binary files /dev/null and b/DocBook/docs/images/callouts/10.png differ diff --git a/DocBook/docs/images/callouts/11.png b/DocBook/docs/images/callouts/11.png new file mode 100644 index 00000000..ce47dac3 Binary files /dev/null and b/DocBook/docs/images/callouts/11.png differ diff --git a/DocBook/docs/images/callouts/12.png b/DocBook/docs/images/callouts/12.png new file mode 100644 index 00000000..31daf4e2 Binary files /dev/null and b/DocBook/docs/images/callouts/12.png differ diff --git a/DocBook/docs/images/callouts/13.png b/DocBook/docs/images/callouts/13.png new file mode 100644 index 00000000..14021a89 Binary files /dev/null and b/DocBook/docs/images/callouts/13.png differ diff --git a/DocBook/docs/images/callouts/14.png b/DocBook/docs/images/callouts/14.png new file mode 100644 index 00000000..64014b75 Binary files /dev/null and b/DocBook/docs/images/callouts/14.png differ diff --git a/DocBook/docs/images/callouts/15.png b/DocBook/docs/images/callouts/15.png new file mode 100644 index 00000000..0d65765f Binary files /dev/null and 
b/DocBook/docs/images/callouts/15.png differ diff --git a/DocBook/docs/images/callouts/2.png b/DocBook/docs/images/callouts/2.png new file mode 100644 index 00000000..5d09341b Binary files /dev/null and b/DocBook/docs/images/callouts/2.png differ diff --git a/DocBook/docs/images/callouts/3.png b/DocBook/docs/images/callouts/3.png new file mode 100644 index 00000000..ef7b7004 Binary files /dev/null and b/DocBook/docs/images/callouts/3.png differ diff --git a/DocBook/docs/images/callouts/4.png b/DocBook/docs/images/callouts/4.png new file mode 100644 index 00000000..adb8364e Binary files /dev/null and b/DocBook/docs/images/callouts/4.png differ diff --git a/DocBook/docs/images/callouts/5.png b/DocBook/docs/images/callouts/5.png new file mode 100644 index 00000000..4d7eb460 Binary files /dev/null and b/DocBook/docs/images/callouts/5.png differ diff --git a/DocBook/docs/images/callouts/6.png b/DocBook/docs/images/callouts/6.png new file mode 100644 index 00000000..0ba694af Binary files /dev/null and b/DocBook/docs/images/callouts/6.png differ diff --git a/DocBook/docs/images/callouts/7.png b/DocBook/docs/images/callouts/7.png new file mode 100644 index 00000000..472e96f8 Binary files /dev/null and b/DocBook/docs/images/callouts/7.png differ diff --git a/DocBook/docs/images/callouts/8.png b/DocBook/docs/images/callouts/8.png new file mode 100644 index 00000000..5e60973c Binary files /dev/null and b/DocBook/docs/images/callouts/8.png differ diff --git a/DocBook/docs/images/callouts/9.png b/DocBook/docs/images/callouts/9.png new file mode 100644 index 00000000..a0676d26 Binary files /dev/null and b/DocBook/docs/images/callouts/9.png differ diff --git a/DocBook/docs/images/caution.png b/DocBook/docs/images/caution.png new file mode 100644 index 00000000..afb9ceae Binary files /dev/null and b/DocBook/docs/images/caution.png differ diff --git a/DocBook/docs/images/cgh7-6.PNG b/DocBook/docs/images/cgh7-6.PNG new file mode 100644 index 00000000..1eee0bb6 Binary files /dev/null 
and b/DocBook/docs/images/cgh7-6.PNG differ diff --git a/docs/resources/ch13-28.PNG b/DocBook/docs/images/ch-28.PNG similarity index 100% rename from docs/resources/ch13-28.PNG rename to DocBook/docs/images/ch-28.PNG diff --git a/docs/resources/ch1-1.png b/DocBook/docs/images/ch1-1.png similarity index 100% rename from docs/resources/ch1-1.png rename to DocBook/docs/images/ch1-1.png diff --git a/DocBook/docs/images/ch1-10.PNG b/DocBook/docs/images/ch1-10.PNG new file mode 100644 index 00000000..4de1f141 Binary files /dev/null and b/DocBook/docs/images/ch1-10.PNG differ diff --git a/DocBook/docs/images/ch1-11.PNG b/DocBook/docs/images/ch1-11.PNG new file mode 100644 index 00000000..d233b34d Binary files /dev/null and b/DocBook/docs/images/ch1-11.PNG differ diff --git a/DocBook/docs/images/ch1-12.PNG b/DocBook/docs/images/ch1-12.PNG new file mode 100644 index 00000000..15631757 Binary files /dev/null and b/DocBook/docs/images/ch1-12.PNG differ diff --git a/DocBook/docs/images/ch1-13.PNG b/DocBook/docs/images/ch1-13.PNG new file mode 100644 index 00000000..fdbb924c Binary files /dev/null and b/DocBook/docs/images/ch1-13.PNG differ diff --git a/DocBook/docs/images/ch1-14.PNG b/DocBook/docs/images/ch1-14.PNG new file mode 100644 index 00000000..79bfcd5e Binary files /dev/null and b/DocBook/docs/images/ch1-14.PNG differ diff --git a/DocBook/docs/images/ch1-15.PNG b/DocBook/docs/images/ch1-15.PNG new file mode 100644 index 00000000..1560434a Binary files /dev/null and b/DocBook/docs/images/ch1-15.PNG differ diff --git a/DocBook/docs/images/ch1-16.PNG b/DocBook/docs/images/ch1-16.PNG new file mode 100644 index 00000000..e4599470 Binary files /dev/null and b/DocBook/docs/images/ch1-16.PNG differ diff --git a/DocBook/docs/images/ch1-17.PNG b/DocBook/docs/images/ch1-17.PNG new file mode 100644 index 00000000..377dccc4 Binary files /dev/null and b/DocBook/docs/images/ch1-17.PNG differ diff --git a/DocBook/docs/images/ch1-18.PNG b/DocBook/docs/images/ch1-18.PNG new file mode 
100644 index 00000000..6912946f Binary files /dev/null and b/DocBook/docs/images/ch1-18.PNG differ diff --git a/DocBook/docs/images/ch1-19.png b/DocBook/docs/images/ch1-19.png new file mode 100644 index 00000000..d224b7c4 Binary files /dev/null and b/DocBook/docs/images/ch1-19.png differ diff --git a/docs/resources/ch1-2.png b/DocBook/docs/images/ch1-2.png similarity index 100% rename from docs/resources/ch1-2.png rename to DocBook/docs/images/ch1-2.png diff --git a/docs/resources/ch1-3.PNG b/DocBook/docs/images/ch1-3.PNG similarity index 100% rename from docs/resources/ch1-3.PNG rename to DocBook/docs/images/ch1-3.PNG diff --git a/DocBook/docs/images/ch1-4.PNG b/DocBook/docs/images/ch1-4.PNG new file mode 100644 index 00000000..2c8bac46 Binary files /dev/null and b/DocBook/docs/images/ch1-4.PNG differ diff --git a/DocBook/docs/images/ch1-5.PNG b/DocBook/docs/images/ch1-5.PNG new file mode 100644 index 00000000..a52f465e Binary files /dev/null and b/DocBook/docs/images/ch1-5.PNG differ diff --git a/DocBook/docs/images/ch1-6.PNG b/DocBook/docs/images/ch1-6.PNG new file mode 100644 index 00000000..91f91109 Binary files /dev/null and b/DocBook/docs/images/ch1-6.PNG differ diff --git a/DocBook/docs/images/ch1-7.PNG b/DocBook/docs/images/ch1-7.PNG new file mode 100644 index 00000000..eb5e9930 Binary files /dev/null and b/DocBook/docs/images/ch1-7.PNG differ diff --git a/DocBook/docs/images/ch1-8.PNG b/DocBook/docs/images/ch1-8.PNG new file mode 100644 index 00000000..c1d5fbfc Binary files /dev/null and b/DocBook/docs/images/ch1-8.PNG differ diff --git a/DocBook/docs/images/ch1-8a.PNG b/DocBook/docs/images/ch1-8a.PNG new file mode 100644 index 00000000..0d197486 Binary files /dev/null and b/DocBook/docs/images/ch1-8a.PNG differ diff --git a/DocBook/docs/images/ch1-9.PNG b/DocBook/docs/images/ch1-9.PNG new file mode 100644 index 00000000..25f0eed8 Binary files /dev/null and b/DocBook/docs/images/ch1-9.PNG differ diff --git a/docs/resources/ch10-1.png 
b/DocBook/docs/images/ch10-1.png similarity index 100% rename from docs/resources/ch10-1.png rename to DocBook/docs/images/ch10-1.png diff --git a/DocBook/docs/images/ch10-10.PNG b/DocBook/docs/images/ch10-10.PNG new file mode 100644 index 00000000..dd2b6b42 Binary files /dev/null and b/DocBook/docs/images/ch10-10.PNG differ diff --git a/docs/resources/ch10-11.png b/DocBook/docs/images/ch10-11.png similarity index 100% rename from docs/resources/ch10-11.png rename to DocBook/docs/images/ch10-11.png diff --git a/docs/resources/ch10-12.png b/DocBook/docs/images/ch10-12.png similarity index 100% rename from docs/resources/ch10-12.png rename to DocBook/docs/images/ch10-12.png diff --git a/docs/resources/ch10-13.png b/DocBook/docs/images/ch10-13.png similarity index 100% rename from docs/resources/ch10-13.png rename to DocBook/docs/images/ch10-13.png diff --git a/docs/resources/ch10-14.png b/DocBook/docs/images/ch10-14.png similarity index 100% rename from docs/resources/ch10-14.png rename to DocBook/docs/images/ch10-14.png diff --git a/docs/resources/ch10-15.png b/DocBook/docs/images/ch10-15.png similarity index 100% rename from docs/resources/ch10-15.png rename to DocBook/docs/images/ch10-15.png diff --git a/docs/resources/ch10-16.png b/DocBook/docs/images/ch10-16.png similarity index 100% rename from docs/resources/ch10-16.png rename to DocBook/docs/images/ch10-16.png diff --git a/DocBook/docs/images/ch10-2.PNG b/DocBook/docs/images/ch10-2.PNG new file mode 100644 index 00000000..f90392b4 Binary files /dev/null and b/DocBook/docs/images/ch10-2.PNG differ diff --git a/DocBook/docs/images/ch10-3.PNG b/DocBook/docs/images/ch10-3.PNG new file mode 100644 index 00000000..96e781f6 Binary files /dev/null and b/DocBook/docs/images/ch10-3.PNG differ diff --git a/DocBook/docs/images/ch10-4.PNG b/DocBook/docs/images/ch10-4.PNG new file mode 100644 index 00000000..9aa3857d Binary files /dev/null and b/DocBook/docs/images/ch10-4.PNG differ diff --git a/docs/resources/ch10-5.PNG 
b/DocBook/docs/images/ch10-5.PNG similarity index 100% rename from docs/resources/ch10-5.PNG rename to DocBook/docs/images/ch10-5.PNG diff --git a/DocBook/docs/images/ch10-6.PNG b/DocBook/docs/images/ch10-6.PNG new file mode 100644 index 00000000..7a9970a7 Binary files /dev/null and b/DocBook/docs/images/ch10-6.PNG differ diff --git a/DocBook/docs/images/ch10-7.PNG b/DocBook/docs/images/ch10-7.PNG new file mode 100644 index 00000000..aa97324e Binary files /dev/null and b/DocBook/docs/images/ch10-7.PNG differ diff --git a/DocBook/docs/images/ch10-8.PNG b/DocBook/docs/images/ch10-8.PNG new file mode 100644 index 00000000..31951ea5 Binary files /dev/null and b/DocBook/docs/images/ch10-8.PNG differ diff --git a/docs/resources/ch10-9.PNG b/DocBook/docs/images/ch10-9.PNG similarity index 100% rename from docs/resources/ch10-9.PNG rename to DocBook/docs/images/ch10-9.PNG diff --git a/DocBook/docs/images/ch11-1.PNG b/DocBook/docs/images/ch11-1.PNG new file mode 100644 index 00000000..b67c9091 Binary files /dev/null and b/DocBook/docs/images/ch11-1.PNG differ diff --git a/DocBook/docs/images/ch11-10.PNG b/DocBook/docs/images/ch11-10.PNG new file mode 100644 index 00000000..634b0e8b Binary files /dev/null and b/DocBook/docs/images/ch11-10.PNG differ diff --git a/DocBook/docs/images/ch11-2.PNG b/DocBook/docs/images/ch11-2.PNG new file mode 100644 index 00000000..1246e744 Binary files /dev/null and b/DocBook/docs/images/ch11-2.PNG differ diff --git a/DocBook/docs/images/ch11-3.PNG b/DocBook/docs/images/ch11-3.PNG new file mode 100644 index 00000000..9ffc1399 Binary files /dev/null and b/DocBook/docs/images/ch11-3.PNG differ diff --git a/DocBook/docs/images/ch11-4.PNG b/DocBook/docs/images/ch11-4.PNG new file mode 100644 index 00000000..8c171d4e Binary files /dev/null and b/DocBook/docs/images/ch11-4.PNG differ diff --git a/DocBook/docs/images/ch11-5.PNG b/DocBook/docs/images/ch11-5.PNG new file mode 100644 index 00000000..cdbc9464 Binary files /dev/null and 
b/DocBook/docs/images/ch11-5.PNG differ diff --git a/DocBook/docs/images/ch11-6.PNG b/DocBook/docs/images/ch11-6.PNG new file mode 100644 index 00000000..2fe01f4d Binary files /dev/null and b/DocBook/docs/images/ch11-6.PNG differ diff --git a/DocBook/docs/images/ch11-7.PNG b/DocBook/docs/images/ch11-7.PNG new file mode 100644 index 00000000..711170bc Binary files /dev/null and b/DocBook/docs/images/ch11-7.PNG differ diff --git a/docs/resources/ch11-8.PNG b/DocBook/docs/images/ch11-8.PNG similarity index 100% rename from docs/resources/ch11-8.PNG rename to DocBook/docs/images/ch11-8.PNG diff --git a/DocBook/docs/images/ch11-9.PNG b/DocBook/docs/images/ch11-9.PNG new file mode 100644 index 00000000..587e55da Binary files /dev/null and b/DocBook/docs/images/ch11-9.PNG differ diff --git a/docs/resources/ch12-1.PNG b/DocBook/docs/images/ch12-1.PNG similarity index 100% rename from docs/resources/ch12-1.PNG rename to DocBook/docs/images/ch12-1.PNG diff --git a/DocBook/docs/images/ch12-2.PNG b/DocBook/docs/images/ch12-2.PNG new file mode 100644 index 00000000..4fe6502d Binary files /dev/null and b/DocBook/docs/images/ch12-2.PNG differ diff --git a/DocBook/docs/images/ch12-3.PNG b/DocBook/docs/images/ch12-3.PNG new file mode 100644 index 00000000..2520edd0 Binary files /dev/null and b/DocBook/docs/images/ch12-3.PNG differ diff --git a/docs/resources/ch12-4.PNG b/DocBook/docs/images/ch12-4.PNG similarity index 100% rename from docs/resources/ch12-4.PNG rename to DocBook/docs/images/ch12-4.PNG diff --git a/DocBook/docs/images/ch12-5.PNG b/DocBook/docs/images/ch12-5.PNG new file mode 100644 index 00000000..aa840d25 Binary files /dev/null and b/DocBook/docs/images/ch12-5.PNG differ diff --git a/DocBook/docs/images/ch12-6.png b/DocBook/docs/images/ch12-6.png new file mode 100644 index 00000000..bf05c433 Binary files /dev/null and b/DocBook/docs/images/ch12-6.png differ diff --git a/docs/resources/ch12-7.png b/DocBook/docs/images/ch12-7.png similarity index 100% rename from 
docs/resources/ch12-7.png rename to DocBook/docs/images/ch12-7.png diff --git a/DocBook/docs/images/ch12-8.png b/DocBook/docs/images/ch12-8.png new file mode 100644 index 00000000..11795f07 Binary files /dev/null and b/DocBook/docs/images/ch12-8.png differ diff --git a/docs/resources/ch13-1.PNG b/DocBook/docs/images/ch13-1.PNG similarity index 100% rename from docs/resources/ch13-1.PNG rename to DocBook/docs/images/ch13-1.PNG diff --git a/DocBook/docs/images/ch13-10.PNG b/DocBook/docs/images/ch13-10.PNG new file mode 100644 index 00000000..4fc1d5e7 Binary files /dev/null and b/DocBook/docs/images/ch13-10.PNG differ diff --git a/DocBook/docs/images/ch13-11.PNG b/DocBook/docs/images/ch13-11.PNG new file mode 100644 index 00000000..f0915c37 Binary files /dev/null and b/DocBook/docs/images/ch13-11.PNG differ diff --git a/DocBook/docs/images/ch13-12.PNG b/DocBook/docs/images/ch13-12.PNG new file mode 100644 index 00000000..ca404a1a Binary files /dev/null and b/DocBook/docs/images/ch13-12.PNG differ diff --git a/docs/resources/ch13-13.PNG b/DocBook/docs/images/ch13-13.PNG similarity index 100% rename from docs/resources/ch13-13.PNG rename to DocBook/docs/images/ch13-13.PNG diff --git a/docs/resources/ch13-14.PNG b/DocBook/docs/images/ch13-14.PNG similarity index 100% rename from docs/resources/ch13-14.PNG rename to DocBook/docs/images/ch13-14.PNG diff --git a/docs/resources/ch13-15.PNG b/DocBook/docs/images/ch13-15.PNG similarity index 100% rename from docs/resources/ch13-15.PNG rename to DocBook/docs/images/ch13-15.PNG diff --git a/docs/resources/ch13-16.PNG b/DocBook/docs/images/ch13-16.PNG similarity index 100% rename from docs/resources/ch13-16.PNG rename to DocBook/docs/images/ch13-16.PNG diff --git a/DocBook/docs/images/ch13-17.PNG b/DocBook/docs/images/ch13-17.PNG new file mode 100644 index 00000000..a99d3e8e Binary files /dev/null and b/DocBook/docs/images/ch13-17.PNG differ diff --git a/DocBook/docs/images/ch13-18.PNG b/DocBook/docs/images/ch13-18.PNG new file 
mode 100644 index 00000000..38e6d1cc Binary files /dev/null and b/DocBook/docs/images/ch13-18.PNG differ diff --git a/DocBook/docs/images/ch13-19.PNG b/DocBook/docs/images/ch13-19.PNG new file mode 100644 index 00000000..14de30b9 Binary files /dev/null and b/DocBook/docs/images/ch13-19.PNG differ diff --git a/DocBook/docs/images/ch13-2.PNG b/DocBook/docs/images/ch13-2.PNG new file mode 100644 index 00000000..a5e264c4 Binary files /dev/null and b/DocBook/docs/images/ch13-2.PNG differ diff --git a/DocBook/docs/images/ch13-20.PNG b/DocBook/docs/images/ch13-20.PNG new file mode 100644 index 00000000..e2f256d2 Binary files /dev/null and b/DocBook/docs/images/ch13-20.PNG differ diff --git a/DocBook/docs/images/ch13-21.PNG b/DocBook/docs/images/ch13-21.PNG new file mode 100644 index 00000000..92174846 Binary files /dev/null and b/DocBook/docs/images/ch13-21.PNG differ diff --git a/DocBook/docs/images/ch13-22.PNG b/DocBook/docs/images/ch13-22.PNG new file mode 100644 index 00000000..174975fe Binary files /dev/null and b/DocBook/docs/images/ch13-22.PNG differ diff --git a/DocBook/docs/images/ch13-23.PNG b/DocBook/docs/images/ch13-23.PNG new file mode 100644 index 00000000..9024c422 Binary files /dev/null and b/DocBook/docs/images/ch13-23.PNG differ diff --git a/DocBook/docs/images/ch13-24.PNG b/DocBook/docs/images/ch13-24.PNG new file mode 100644 index 00000000..122022a2 Binary files /dev/null and b/DocBook/docs/images/ch13-24.PNG differ diff --git a/DocBook/docs/images/ch13-25.PNG b/DocBook/docs/images/ch13-25.PNG new file mode 100644 index 00000000..d72645f0 Binary files /dev/null and b/DocBook/docs/images/ch13-25.PNG differ diff --git a/DocBook/docs/images/ch13-26.PNG b/DocBook/docs/images/ch13-26.PNG new file mode 100644 index 00000000..bda73dc3 Binary files /dev/null and b/DocBook/docs/images/ch13-26.PNG differ diff --git a/DocBook/docs/images/ch13-27.PNG b/DocBook/docs/images/ch13-27.PNG new file mode 100644 index 00000000..bba17927 Binary files /dev/null and 
b/DocBook/docs/images/ch13-27.PNG differ diff --git a/DocBook/docs/images/ch13-28.PNG b/DocBook/docs/images/ch13-28.PNG new file mode 100644 index 00000000..b1c5172d Binary files /dev/null and b/DocBook/docs/images/ch13-28.PNG differ diff --git a/DocBook/docs/images/ch13-29.PNG b/DocBook/docs/images/ch13-29.PNG new file mode 100644 index 00000000..c99f7388 Binary files /dev/null and b/DocBook/docs/images/ch13-29.PNG differ diff --git a/DocBook/docs/images/ch13-3.PNG b/DocBook/docs/images/ch13-3.PNG new file mode 100644 index 00000000..3314a62e Binary files /dev/null and b/DocBook/docs/images/ch13-3.PNG differ diff --git a/DocBook/docs/images/ch13-30.PNG b/DocBook/docs/images/ch13-30.PNG new file mode 100644 index 00000000..8658edeb Binary files /dev/null and b/DocBook/docs/images/ch13-30.PNG differ diff --git a/docs/resources/ch13-31.PNG b/DocBook/docs/images/ch13-31.PNG similarity index 100% rename from docs/resources/ch13-31.PNG rename to DocBook/docs/images/ch13-31.PNG diff --git a/DocBook/docs/images/ch13-32.PNG b/DocBook/docs/images/ch13-32.PNG new file mode 100644 index 00000000..47c7489c Binary files /dev/null and b/DocBook/docs/images/ch13-32.PNG differ diff --git a/DocBook/docs/images/ch13-33.PNG b/DocBook/docs/images/ch13-33.PNG new file mode 100644 index 00000000..07e2b61d Binary files /dev/null and b/DocBook/docs/images/ch13-33.PNG differ diff --git a/docs/resources/ch13-34.PNG b/DocBook/docs/images/ch13-34.PNG similarity index 100% rename from docs/resources/ch13-34.PNG rename to DocBook/docs/images/ch13-34.PNG diff --git a/DocBook/docs/images/ch13-4.PNG b/DocBook/docs/images/ch13-4.PNG new file mode 100644 index 00000000..b8e7b186 Binary files /dev/null and b/DocBook/docs/images/ch13-4.PNG differ diff --git a/DocBook/docs/images/ch13-5.PNG b/DocBook/docs/images/ch13-5.PNG new file mode 100644 index 00000000..3c9d92e5 Binary files /dev/null and b/DocBook/docs/images/ch13-5.PNG differ diff --git a/DocBook/docs/images/ch13-6.PNG 
b/DocBook/docs/images/ch13-6.PNG new file mode 100644 index 00000000..d9b0b16e Binary files /dev/null and b/DocBook/docs/images/ch13-6.PNG differ diff --git a/DocBook/docs/images/ch13-7.PNG b/DocBook/docs/images/ch13-7.PNG new file mode 100644 index 00000000..ac3122b0 Binary files /dev/null and b/DocBook/docs/images/ch13-7.PNG differ diff --git a/DocBook/docs/images/ch13-8.PNG b/DocBook/docs/images/ch13-8.PNG new file mode 100644 index 00000000..c9ecb961 Binary files /dev/null and b/DocBook/docs/images/ch13-8.PNG differ diff --git a/DocBook/docs/images/ch13-9.PNG b/DocBook/docs/images/ch13-9.PNG new file mode 100644 index 00000000..90979138 Binary files /dev/null and b/DocBook/docs/images/ch13-9.PNG differ diff --git a/DocBook/docs/images/ch2-1.PNG b/DocBook/docs/images/ch2-1.PNG new file mode 100644 index 00000000..9fe8f3f4 Binary files /dev/null and b/DocBook/docs/images/ch2-1.PNG differ diff --git a/DocBook/docs/images/ch2-10.PNG b/DocBook/docs/images/ch2-10.PNG new file mode 100644 index 00000000..61ac6f66 Binary files /dev/null and b/DocBook/docs/images/ch2-10.PNG differ diff --git a/DocBook/docs/images/ch2-11.PNG b/DocBook/docs/images/ch2-11.PNG new file mode 100644 index 00000000..4c34b532 Binary files /dev/null and b/DocBook/docs/images/ch2-11.PNG differ diff --git a/DocBook/docs/images/ch2-12.PNG b/DocBook/docs/images/ch2-12.PNG new file mode 100644 index 00000000..c287b188 Binary files /dev/null and b/DocBook/docs/images/ch2-12.PNG differ diff --git a/DocBook/docs/images/ch2-13.PNG b/DocBook/docs/images/ch2-13.PNG new file mode 100644 index 00000000..46bd543b Binary files /dev/null and b/DocBook/docs/images/ch2-13.PNG differ diff --git a/DocBook/docs/images/ch2-14.PNG b/DocBook/docs/images/ch2-14.PNG new file mode 100644 index 00000000..79d1f113 Binary files /dev/null and b/DocBook/docs/images/ch2-14.PNG differ diff --git a/DocBook/docs/images/ch2-15.PNG b/DocBook/docs/images/ch2-15.PNG new file mode 100644 index 00000000..888f9f3a Binary files /dev/null 
and b/DocBook/docs/images/ch2-15.PNG differ diff --git a/DocBook/docs/images/ch2-16.PNG b/DocBook/docs/images/ch2-16.PNG new file mode 100644 index 00000000..9f4ece24 Binary files /dev/null and b/DocBook/docs/images/ch2-16.PNG differ diff --git a/DocBook/docs/images/ch2-17.PNG b/DocBook/docs/images/ch2-17.PNG new file mode 100644 index 00000000..231f87a4 Binary files /dev/null and b/DocBook/docs/images/ch2-17.PNG differ diff --git a/DocBook/docs/images/ch2-2.PNG b/DocBook/docs/images/ch2-2.PNG new file mode 100644 index 00000000..8a3dc000 Binary files /dev/null and b/DocBook/docs/images/ch2-2.PNG differ diff --git a/DocBook/docs/images/ch2-3.PNG b/DocBook/docs/images/ch2-3.PNG new file mode 100644 index 00000000..a72c5c79 Binary files /dev/null and b/DocBook/docs/images/ch2-3.PNG differ diff --git a/DocBook/docs/images/ch2-4.PNG b/DocBook/docs/images/ch2-4.PNG new file mode 100644 index 00000000..dc8c071f Binary files /dev/null and b/DocBook/docs/images/ch2-4.PNG differ diff --git a/DocBook/docs/images/ch2-5.PNG b/DocBook/docs/images/ch2-5.PNG new file mode 100644 index 00000000..6e85b503 Binary files /dev/null and b/DocBook/docs/images/ch2-5.PNG differ diff --git a/DocBook/docs/images/ch2-6.PNG b/DocBook/docs/images/ch2-6.PNG new file mode 100644 index 00000000..66979732 Binary files /dev/null and b/DocBook/docs/images/ch2-6.PNG differ diff --git a/DocBook/docs/images/ch2-7.PNG b/DocBook/docs/images/ch2-7.PNG new file mode 100644 index 00000000..325235da Binary files /dev/null and b/DocBook/docs/images/ch2-7.PNG differ diff --git a/DocBook/docs/images/ch2-8.PNG b/DocBook/docs/images/ch2-8.PNG new file mode 100644 index 00000000..f5b49cc2 Binary files /dev/null and b/DocBook/docs/images/ch2-8.PNG differ diff --git a/DocBook/docs/images/ch2-9.PNG b/DocBook/docs/images/ch2-9.PNG new file mode 100644 index 00000000..f40bb8d1 Binary files /dev/null and b/DocBook/docs/images/ch2-9.PNG differ diff --git a/docs/resources/ch3-1.PNG b/DocBook/docs/images/ch3-1.PNG 
similarity index 100% rename from docs/resources/ch3-1.PNG rename to DocBook/docs/images/ch3-1.PNG diff --git a/docs/resources/ch3-2.png b/DocBook/docs/images/ch3-2.png similarity index 100% rename from docs/resources/ch3-2.png rename to DocBook/docs/images/ch3-2.png diff --git a/docs/resources/ch3-3.PNG b/DocBook/docs/images/ch3-3.PNG similarity index 100% rename from docs/resources/ch3-3.PNG rename to DocBook/docs/images/ch3-3.PNG diff --git a/DocBook/docs/images/ch3-4.PNG b/DocBook/docs/images/ch3-4.PNG new file mode 100644 index 00000000..a30519b9 Binary files /dev/null and b/DocBook/docs/images/ch3-4.PNG differ diff --git a/docs/resources/ch3-5.PNG b/DocBook/docs/images/ch3-5.PNG similarity index 100% rename from docs/resources/ch3-5.PNG rename to DocBook/docs/images/ch3-5.PNG diff --git a/DocBook/docs/images/ch4-1.PNG b/DocBook/docs/images/ch4-1.PNG new file mode 100644 index 00000000..d5b780c8 Binary files /dev/null and b/DocBook/docs/images/ch4-1.PNG differ diff --git a/DocBook/docs/images/ch4-10.PNG b/DocBook/docs/images/ch4-10.PNG new file mode 100644 index 00000000..032f1e48 Binary files /dev/null and b/DocBook/docs/images/ch4-10.PNG differ diff --git a/DocBook/docs/images/ch4-11.PNG b/DocBook/docs/images/ch4-11.PNG new file mode 100644 index 00000000..9f6fc55a Binary files /dev/null and b/DocBook/docs/images/ch4-11.PNG differ diff --git a/DocBook/docs/images/ch4-12.PNG b/DocBook/docs/images/ch4-12.PNG new file mode 100644 index 00000000..1d5ed8bc Binary files /dev/null and b/DocBook/docs/images/ch4-12.PNG differ diff --git a/DocBook/docs/images/ch4-13.PNG b/DocBook/docs/images/ch4-13.PNG new file mode 100644 index 00000000..e9f1564e Binary files /dev/null and b/DocBook/docs/images/ch4-13.PNG differ diff --git a/DocBook/docs/images/ch4-14.PNG b/DocBook/docs/images/ch4-14.PNG new file mode 100644 index 00000000..f6478b9e Binary files /dev/null and b/DocBook/docs/images/ch4-14.PNG differ diff --git a/DocBook/docs/images/ch4-15.PNG 
b/DocBook/docs/images/ch4-15.PNG new file mode 100644 index 00000000..615ef055 Binary files /dev/null and b/DocBook/docs/images/ch4-15.PNG differ diff --git a/DocBook/docs/images/ch4-15a.png b/DocBook/docs/images/ch4-15a.png new file mode 100644 index 00000000..9aee3e96 Binary files /dev/null and b/DocBook/docs/images/ch4-15a.png differ diff --git a/DocBook/docs/images/ch4-15b.png b/DocBook/docs/images/ch4-15b.png new file mode 100644 index 00000000..edcb149b Binary files /dev/null and b/DocBook/docs/images/ch4-15b.png differ diff --git a/DocBook/docs/images/ch4-15c.png b/DocBook/docs/images/ch4-15c.png new file mode 100644 index 00000000..b0b79fce Binary files /dev/null and b/DocBook/docs/images/ch4-15c.png differ diff --git a/DocBook/docs/images/ch4-15d.png b/DocBook/docs/images/ch4-15d.png new file mode 100644 index 00000000..bbd56611 Binary files /dev/null and b/DocBook/docs/images/ch4-15d.png differ diff --git a/DocBook/docs/images/ch4-15e.PNG b/DocBook/docs/images/ch4-15e.PNG new file mode 100644 index 00000000..c0b36927 Binary files /dev/null and b/DocBook/docs/images/ch4-15e.PNG differ diff --git a/DocBook/docs/images/ch4-15f.PNG b/DocBook/docs/images/ch4-15f.PNG new file mode 100644 index 00000000..80bd9c70 Binary files /dev/null and b/DocBook/docs/images/ch4-15f.PNG differ diff --git a/DocBook/docs/images/ch4-15g.PNG b/DocBook/docs/images/ch4-15g.PNG new file mode 100644 index 00000000..953a2666 Binary files /dev/null and b/DocBook/docs/images/ch4-15g.PNG differ diff --git a/DocBook/docs/images/ch4-15h.PNG b/DocBook/docs/images/ch4-15h.PNG new file mode 100644 index 00000000..b0ee4349 Binary files /dev/null and b/DocBook/docs/images/ch4-15h.PNG differ diff --git a/DocBook/docs/images/ch4-15i.PNG b/DocBook/docs/images/ch4-15i.PNG new file mode 100644 index 00000000..46f6c7e5 Binary files /dev/null and b/DocBook/docs/images/ch4-15i.PNG differ diff --git a/DocBook/docs/images/ch4-15j.PNG b/DocBook/docs/images/ch4-15j.PNG new file mode 100644 index 
00000000..a4100d97 Binary files /dev/null and b/DocBook/docs/images/ch4-15j.PNG differ diff --git a/DocBook/docs/images/ch4-15k.PNG b/DocBook/docs/images/ch4-15k.PNG new file mode 100644 index 00000000..0c5307c5 Binary files /dev/null and b/DocBook/docs/images/ch4-15k.PNG differ diff --git a/DocBook/docs/images/ch4-15l.PNG b/DocBook/docs/images/ch4-15l.PNG new file mode 100644 index 00000000..7b4acbd4 Binary files /dev/null and b/DocBook/docs/images/ch4-15l.PNG differ diff --git a/DocBook/docs/images/ch4-15m.PNG b/DocBook/docs/images/ch4-15m.PNG new file mode 100644 index 00000000..428f2354 Binary files /dev/null and b/DocBook/docs/images/ch4-15m.PNG differ diff --git a/DocBook/docs/images/ch4-15n.PNG b/DocBook/docs/images/ch4-15n.PNG new file mode 100644 index 00000000..04c06d70 Binary files /dev/null and b/DocBook/docs/images/ch4-15n.PNG differ diff --git a/DocBook/docs/images/ch4-16.PNG b/DocBook/docs/images/ch4-16.PNG new file mode 100644 index 00000000..8e9067c6 Binary files /dev/null and b/DocBook/docs/images/ch4-16.PNG differ diff --git a/DocBook/docs/images/ch4-17.PNG b/DocBook/docs/images/ch4-17.PNG new file mode 100644 index 00000000..264e93e5 Binary files /dev/null and b/DocBook/docs/images/ch4-17.PNG differ diff --git a/DocBook/docs/images/ch4-18.PNG b/DocBook/docs/images/ch4-18.PNG new file mode 100644 index 00000000..b6c4f12c Binary files /dev/null and b/DocBook/docs/images/ch4-18.PNG differ diff --git a/DocBook/docs/images/ch4-19.PNG b/DocBook/docs/images/ch4-19.PNG new file mode 100644 index 00000000..25ff8f28 Binary files /dev/null and b/DocBook/docs/images/ch4-19.PNG differ diff --git a/DocBook/docs/images/ch4-2.PNG b/DocBook/docs/images/ch4-2.PNG new file mode 100644 index 00000000..966b7e73 Binary files /dev/null and b/DocBook/docs/images/ch4-2.PNG differ diff --git a/DocBook/docs/images/ch4-20.PNG b/DocBook/docs/images/ch4-20.PNG new file mode 100644 index 00000000..149665be Binary files /dev/null and b/DocBook/docs/images/ch4-20.PNG differ 
diff --git a/DocBook/docs/images/ch4-21.PNG b/DocBook/docs/images/ch4-21.PNG new file mode 100644 index 00000000..5fb1d641 Binary files /dev/null and b/DocBook/docs/images/ch4-21.PNG differ diff --git a/docs/resources/ch4-22.png b/DocBook/docs/images/ch4-22.png similarity index 100% rename from docs/resources/ch4-22.png rename to DocBook/docs/images/ch4-22.png diff --git a/docs/resources/ch4-23.png b/DocBook/docs/images/ch4-23.png similarity index 100% rename from docs/resources/ch4-23.png rename to DocBook/docs/images/ch4-23.png diff --git a/DocBook/docs/images/ch4-3.PNG b/DocBook/docs/images/ch4-3.PNG new file mode 100644 index 00000000..29033528 Binary files /dev/null and b/DocBook/docs/images/ch4-3.PNG differ diff --git a/DocBook/docs/images/ch4-4.PNG b/DocBook/docs/images/ch4-4.PNG new file mode 100644 index 00000000..56729318 Binary files /dev/null and b/DocBook/docs/images/ch4-4.PNG differ diff --git a/DocBook/docs/images/ch4-5.PNG b/DocBook/docs/images/ch4-5.PNG new file mode 100644 index 00000000..a1e2490a Binary files /dev/null and b/DocBook/docs/images/ch4-5.PNG differ diff --git a/DocBook/docs/images/ch4-6.PNG b/DocBook/docs/images/ch4-6.PNG new file mode 100644 index 00000000..92f092cd Binary files /dev/null and b/DocBook/docs/images/ch4-6.PNG differ diff --git a/DocBook/docs/images/ch4-7.PNG b/DocBook/docs/images/ch4-7.PNG new file mode 100644 index 00000000..877fb3b5 Binary files /dev/null and b/DocBook/docs/images/ch4-7.PNG differ diff --git a/DocBook/docs/images/ch4-8.PNG b/DocBook/docs/images/ch4-8.PNG new file mode 100644 index 00000000..ab616d18 Binary files /dev/null and b/DocBook/docs/images/ch4-8.PNG differ diff --git a/docs/resources/ch4-9.PNG b/DocBook/docs/images/ch4-9.PNG similarity index 100% rename from docs/resources/ch4-9.PNG rename to DocBook/docs/images/ch4-9.PNG diff --git a/DocBook/docs/images/ch5-1.PNG b/DocBook/docs/images/ch5-1.PNG new file mode 100644 index 00000000..45e249cc Binary files /dev/null and 
b/DocBook/docs/images/ch5-1.PNG differ diff --git a/DocBook/docs/images/ch5-2.PNG b/DocBook/docs/images/ch5-2.PNG new file mode 100644 index 00000000..4c3dfced Binary files /dev/null and b/DocBook/docs/images/ch5-2.PNG differ diff --git a/DocBook/docs/images/ch5-3.PNG b/DocBook/docs/images/ch5-3.PNG new file mode 100644 index 00000000..aab41683 Binary files /dev/null and b/DocBook/docs/images/ch5-3.PNG differ diff --git a/DocBook/docs/images/ch5-4.PNG b/DocBook/docs/images/ch5-4.PNG new file mode 100644 index 00000000..89e78bce Binary files /dev/null and b/DocBook/docs/images/ch5-4.PNG differ diff --git a/DocBook/docs/images/ch5-5.PNG b/DocBook/docs/images/ch5-5.PNG new file mode 100644 index 00000000..504dd811 Binary files /dev/null and b/DocBook/docs/images/ch5-5.PNG differ diff --git a/DocBook/docs/images/ch5-6.PNG b/DocBook/docs/images/ch5-6.PNG new file mode 100644 index 00000000..dc90d121 Binary files /dev/null and b/DocBook/docs/images/ch5-6.PNG differ diff --git a/DocBook/docs/images/ch5-7.PNG b/DocBook/docs/images/ch5-7.PNG new file mode 100644 index 00000000..97012b01 Binary files /dev/null and b/DocBook/docs/images/ch5-7.PNG differ diff --git a/DocBook/docs/images/ch6-1.PNG b/DocBook/docs/images/ch6-1.PNG new file mode 100644 index 00000000..ea64d92d Binary files /dev/null and b/DocBook/docs/images/ch6-1.PNG differ diff --git a/DocBook/docs/images/ch6-10.PNG b/DocBook/docs/images/ch6-10.PNG new file mode 100644 index 00000000..961ff4f2 Binary files /dev/null and b/DocBook/docs/images/ch6-10.PNG differ diff --git a/DocBook/docs/images/ch6-11.PNG b/DocBook/docs/images/ch6-11.PNG new file mode 100644 index 00000000..f0e5970c Binary files /dev/null and b/DocBook/docs/images/ch6-11.PNG differ diff --git a/DocBook/docs/images/ch6-2.PNG b/DocBook/docs/images/ch6-2.PNG new file mode 100644 index 00000000..8ec39c0b Binary files /dev/null and b/DocBook/docs/images/ch6-2.PNG differ diff --git a/DocBook/docs/images/ch6-3.PNG b/DocBook/docs/images/ch6-3.PNG new file 
mode 100644 index 00000000..fea0c38e Binary files /dev/null and b/DocBook/docs/images/ch6-3.PNG differ diff --git a/DocBook/docs/images/ch6-4.PNG b/DocBook/docs/images/ch6-4.PNG new file mode 100644 index 00000000..9b40a474 Binary files /dev/null and b/DocBook/docs/images/ch6-4.PNG differ diff --git a/DocBook/docs/images/ch6-5.PNG b/DocBook/docs/images/ch6-5.PNG new file mode 100644 index 00000000..594c1fa3 Binary files /dev/null and b/DocBook/docs/images/ch6-5.PNG differ diff --git a/DocBook/docs/images/ch6-6.PNG b/DocBook/docs/images/ch6-6.PNG new file mode 100644 index 00000000..b0940145 Binary files /dev/null and b/DocBook/docs/images/ch6-6.PNG differ diff --git a/docs/resources/ch6-7.PNG b/DocBook/docs/images/ch6-7.PNG similarity index 100% rename from docs/resources/ch6-7.PNG rename to DocBook/docs/images/ch6-7.PNG diff --git a/docs/resources/ch6-8.PNG b/DocBook/docs/images/ch6-8.PNG similarity index 100% rename from docs/resources/ch6-8.PNG rename to DocBook/docs/images/ch6-8.PNG diff --git a/docs/resources/ch6-9.PNG b/DocBook/docs/images/ch6-9.PNG similarity index 100% rename from docs/resources/ch6-9.PNG rename to DocBook/docs/images/ch6-9.PNG diff --git a/DocBook/docs/images/ch7-1.PNG b/DocBook/docs/images/ch7-1.PNG new file mode 100644 index 00000000..00da4a72 Binary files /dev/null and b/DocBook/docs/images/ch7-1.PNG differ diff --git a/docs/resources/ch7-10.png b/DocBook/docs/images/ch7-10.png similarity index 100% rename from docs/resources/ch7-10.png rename to DocBook/docs/images/ch7-10.png diff --git a/DocBook/docs/images/ch7-11.PNG b/DocBook/docs/images/ch7-11.PNG new file mode 100644 index 00000000..8159c25a Binary files /dev/null and b/DocBook/docs/images/ch7-11.PNG differ diff --git a/DocBook/docs/images/ch7-12.png b/DocBook/docs/images/ch7-12.png new file mode 100644 index 00000000..b96b9fac Binary files /dev/null and b/DocBook/docs/images/ch7-12.png differ diff --git a/DocBook/docs/images/ch7-13.png b/DocBook/docs/images/ch7-13.png new file 
mode 100644 index 00000000..cda7394c Binary files /dev/null and b/DocBook/docs/images/ch7-13.png differ diff --git a/DocBook/docs/images/ch7-14.PNG b/DocBook/docs/images/ch7-14.PNG new file mode 100644 index 00000000..e6dec3ae Binary files /dev/null and b/DocBook/docs/images/ch7-14.PNG differ diff --git a/DocBook/docs/images/ch7-15.PNG b/DocBook/docs/images/ch7-15.PNG new file mode 100644 index 00000000..be19ccaa Binary files /dev/null and b/DocBook/docs/images/ch7-15.PNG differ diff --git a/DocBook/docs/images/ch7-17.PNG b/DocBook/docs/images/ch7-17.PNG new file mode 100644 index 00000000..38e6d1cc Binary files /dev/null and b/DocBook/docs/images/ch7-17.PNG differ diff --git a/DocBook/docs/images/ch7-2.PNG b/DocBook/docs/images/ch7-2.PNG new file mode 100644 index 00000000..379de16b Binary files /dev/null and b/DocBook/docs/images/ch7-2.PNG differ diff --git a/DocBook/docs/images/ch7-3.PNG b/DocBook/docs/images/ch7-3.PNG new file mode 100644 index 00000000..35b4cc48 Binary files /dev/null and b/DocBook/docs/images/ch7-3.PNG differ diff --git a/DocBook/docs/images/ch7-4.PNG b/DocBook/docs/images/ch7-4.PNG new file mode 100644 index 00000000..61fd4e4d Binary files /dev/null and b/DocBook/docs/images/ch7-4.PNG differ diff --git a/DocBook/docs/images/ch7-5.PNG b/DocBook/docs/images/ch7-5.PNG new file mode 100644 index 00000000..89afd37b Binary files /dev/null and b/DocBook/docs/images/ch7-5.PNG differ diff --git a/docs/resources/ch7-6.PNG b/DocBook/docs/images/ch7-6.PNG similarity index 100% rename from docs/resources/ch7-6.PNG rename to DocBook/docs/images/ch7-6.PNG diff --git a/DocBook/docs/images/ch7-7.PNG b/DocBook/docs/images/ch7-7.PNG new file mode 100644 index 00000000..c44c3213 Binary files /dev/null and b/DocBook/docs/images/ch7-7.PNG differ diff --git a/DocBook/docs/images/ch7-8.PNG b/DocBook/docs/images/ch7-8.PNG new file mode 100644 index 00000000..0e4ea9ce Binary files /dev/null and b/DocBook/docs/images/ch7-8.PNG differ diff --git 
a/DocBook/docs/images/ch7-9.PNG b/DocBook/docs/images/ch7-9.PNG new file mode 100644 index 00000000..eb2187aa Binary files /dev/null and b/DocBook/docs/images/ch7-9.PNG differ diff --git a/DocBook/docs/images/ch8-1.PNG b/DocBook/docs/images/ch8-1.PNG new file mode 100644 index 00000000..00e5fe41 Binary files /dev/null and b/DocBook/docs/images/ch8-1.PNG differ diff --git a/DocBook/docs/images/ch8-2.PNG b/DocBook/docs/images/ch8-2.PNG new file mode 100644 index 00000000..c923bb95 Binary files /dev/null and b/DocBook/docs/images/ch8-2.PNG differ diff --git a/docs/resources/ch8-2a.PNG b/DocBook/docs/images/ch8-2a.PNG similarity index 100% rename from docs/resources/ch8-2a.PNG rename to DocBook/docs/images/ch8-2a.PNG diff --git a/docs/resources/ch8-2b.PNG b/DocBook/docs/images/ch8-2b.PNG similarity index 100% rename from docs/resources/ch8-2b.PNG rename to DocBook/docs/images/ch8-2b.PNG diff --git a/docs/resources/ch9-1.png b/DocBook/docs/images/ch9-1.png similarity index 100% rename from docs/resources/ch9-1.png rename to DocBook/docs/images/ch9-1.png diff --git a/DocBook/docs/images/ch9-10.PNG b/DocBook/docs/images/ch9-10.PNG new file mode 100644 index 00000000..1a2305df Binary files /dev/null and b/DocBook/docs/images/ch9-10.PNG differ diff --git a/DocBook/docs/images/ch9-11.PNG b/DocBook/docs/images/ch9-11.PNG new file mode 100644 index 00000000..3d51e7f2 Binary files /dev/null and b/DocBook/docs/images/ch9-11.PNG differ diff --git a/DocBook/docs/images/ch9-12.PNG b/DocBook/docs/images/ch9-12.PNG new file mode 100644 index 00000000..10f3120d Binary files /dev/null and b/DocBook/docs/images/ch9-12.PNG differ diff --git a/docs/resources/ch9-13.png b/DocBook/docs/images/ch9-13.png similarity index 100% rename from docs/resources/ch9-13.png rename to DocBook/docs/images/ch9-13.png diff --git a/docs/resources/ch9-14.png b/DocBook/docs/images/ch9-14.png similarity index 100% rename from docs/resources/ch9-14.png rename to DocBook/docs/images/ch9-14.png diff --git 
a/docs/resources/ch9-2.PNG b/DocBook/docs/images/ch9-2.PNG similarity index 100% rename from docs/resources/ch9-2.PNG rename to DocBook/docs/images/ch9-2.PNG diff --git a/docs/resources/ch9-3.PNG b/DocBook/docs/images/ch9-3.PNG similarity index 100% rename from docs/resources/ch9-3.PNG rename to DocBook/docs/images/ch9-3.PNG diff --git a/docs/resources/ch9-4.png b/DocBook/docs/images/ch9-4.png similarity index 100% rename from docs/resources/ch9-4.png rename to DocBook/docs/images/ch9-4.png diff --git a/docs/resources/ch9-5.png b/DocBook/docs/images/ch9-5.png similarity index 100% rename from docs/resources/ch9-5.png rename to DocBook/docs/images/ch9-5.png diff --git a/docs/resources/ch9-6.png b/DocBook/docs/images/ch9-6.png similarity index 100% rename from docs/resources/ch9-6.png rename to DocBook/docs/images/ch9-6.png diff --git a/docs/resources/ch9-7.png b/DocBook/docs/images/ch9-7.png similarity index 100% rename from docs/resources/ch9-7.png rename to DocBook/docs/images/ch9-7.png diff --git a/docs/resources/ch9-8.PNG b/DocBook/docs/images/ch9-8.PNG similarity index 100% rename from docs/resources/ch9-8.PNG rename to DocBook/docs/images/ch9-8.PNG diff --git a/DocBook/docs/images/ch9-9.PNG b/DocBook/docs/images/ch9-9.PNG new file mode 100644 index 00000000..316ecc7e Binary files /dev/null and b/DocBook/docs/images/ch9-9.PNG differ diff --git a/DocBook/docs/images/ch9ws-1.PNG b/DocBook/docs/images/ch9ws-1.PNG new file mode 100644 index 00000000..0516e580 Binary files /dev/null and b/DocBook/docs/images/ch9ws-1.PNG differ diff --git a/DocBook/docs/images/draft.png b/DocBook/docs/images/draft.png new file mode 100644 index 00000000..59673fe1 Binary files /dev/null and b/DocBook/docs/images/draft.png differ diff --git a/docs/resources/hammerDB-H-Logo-FB.png b/DocBook/docs/images/hammerDB-H-Logo-FB.png similarity index 100% rename from docs/resources/hammerDB-H-Logo-FB.png rename to DocBook/docs/images/hammerDB-H-Logo-FB.png diff --git 
a/DocBook/docs/images/home.png b/DocBook/docs/images/home.png new file mode 100644 index 00000000..1e64f066 Binary files /dev/null and b/DocBook/docs/images/home.png differ diff --git a/DocBook/docs/images/important.png b/DocBook/docs/images/important.png new file mode 100644 index 00000000..ad334ece Binary files /dev/null and b/DocBook/docs/images/important.png differ diff --git a/DocBook/docs/images/next.png b/DocBook/docs/images/next.png new file mode 100644 index 00000000..71de589a Binary files /dev/null and b/DocBook/docs/images/next.png differ diff --git a/DocBook/docs/images/note.png b/DocBook/docs/images/note.png new file mode 100644 index 00000000..a27c493b Binary files /dev/null and b/DocBook/docs/images/note.png differ diff --git a/DocBook/docs/images/prev.png b/DocBook/docs/images/prev.png new file mode 100644 index 00000000..c80367ba Binary files /dev/null and b/DocBook/docs/images/prev.png differ diff --git a/DocBook/docs/images/tip.png b/DocBook/docs/images/tip.png new file mode 100644 index 00000000..dcb61cc9 Binary files /dev/null and b/DocBook/docs/images/tip.png differ diff --git a/DocBook/docs/images/toc-blank.png b/DocBook/docs/images/toc-blank.png new file mode 100644 index 00000000..6ffad17a Binary files /dev/null and b/DocBook/docs/images/toc-blank.png differ diff --git a/DocBook/docs/images/toc-minus.png b/DocBook/docs/images/toc-minus.png new file mode 100644 index 00000000..abbb020c Binary files /dev/null and b/DocBook/docs/images/toc-minus.png differ diff --git a/DocBook/docs/images/toc-plus.png b/DocBook/docs/images/toc-plus.png new file mode 100644 index 00000000..941312ce Binary files /dev/null and b/DocBook/docs/images/toc-plus.png differ diff --git a/DocBook/docs/images/up.png b/DocBook/docs/images/up.png new file mode 100644 index 00000000..a1586e0f Binary files /dev/null and b/DocBook/docs/images/up.png differ diff --git a/DocBook/docs/images/warning.png b/DocBook/docs/images/warning.png new file mode 100644 index 
00000000..2bf57364 Binary files /dev/null and b/DocBook/docs/images/warning.png differ diff --git a/docs/resources/ch1-10.PNG b/docs/resources/ch1-10.PNG deleted file mode 100644 index 9896eb69..00000000 Binary files a/docs/resources/ch1-10.PNG and /dev/null differ diff --git a/docs/resources/ch1-11.PNG b/docs/resources/ch1-11.PNG deleted file mode 100644 index bba13bbd..00000000 Binary files a/docs/resources/ch1-11.PNG and /dev/null differ diff --git a/docs/resources/ch1-12.PNG b/docs/resources/ch1-12.PNG deleted file mode 100644 index bf49114b..00000000 Binary files a/docs/resources/ch1-12.PNG and /dev/null differ diff --git a/docs/resources/ch1-4.PNG b/docs/resources/ch1-4.PNG deleted file mode 100644 index e6fb2a5f..00000000 Binary files a/docs/resources/ch1-4.PNG and /dev/null differ diff --git a/docs/resources/ch1-5.PNG b/docs/resources/ch1-5.PNG deleted file mode 100644 index f62ed38a..00000000 Binary files a/docs/resources/ch1-5.PNG and /dev/null differ diff --git a/docs/resources/ch1-6.PNG b/docs/resources/ch1-6.PNG deleted file mode 100644 index c16d2705..00000000 Binary files a/docs/resources/ch1-6.PNG and /dev/null differ diff --git a/docs/resources/ch1-7.PNG b/docs/resources/ch1-7.PNG deleted file mode 100644 index f7c5f024..00000000 Binary files a/docs/resources/ch1-7.PNG and /dev/null differ diff --git a/docs/resources/ch1-8.PNG b/docs/resources/ch1-8.PNG deleted file mode 100644 index 3794104f..00000000 Binary files a/docs/resources/ch1-8.PNG and /dev/null differ diff --git a/docs/resources/ch1-9.PNG b/docs/resources/ch1-9.PNG deleted file mode 100644 index 74542049..00000000 Binary files a/docs/resources/ch1-9.PNG and /dev/null differ diff --git a/docs/resources/ch10-10.PNG b/docs/resources/ch10-10.PNG deleted file mode 100644 index f56e9541..00000000 Binary files a/docs/resources/ch10-10.PNG and /dev/null differ diff --git a/docs/resources/ch10-2.PNG b/docs/resources/ch10-2.PNG deleted file mode 100644 index 25cbfd97..00000000 Binary files 
a/docs/resources/ch10-2.PNG and /dev/null differ diff --git a/docs/resources/ch10-3.PNG b/docs/resources/ch10-3.PNG deleted file mode 100644 index 8d0c4cd9..00000000 Binary files a/docs/resources/ch10-3.PNG and /dev/null differ diff --git a/docs/resources/ch10-4.PNG b/docs/resources/ch10-4.PNG deleted file mode 100644 index 951c38e8..00000000 Binary files a/docs/resources/ch10-4.PNG and /dev/null differ diff --git a/docs/resources/ch10-6.PNG b/docs/resources/ch10-6.PNG deleted file mode 100644 index cef5b389..00000000 Binary files a/docs/resources/ch10-6.PNG and /dev/null differ diff --git a/docs/resources/ch10-7.PNG b/docs/resources/ch10-7.PNG deleted file mode 100644 index cb8a82c2..00000000 Binary files a/docs/resources/ch10-7.PNG and /dev/null differ diff --git a/docs/resources/ch10-8.PNG b/docs/resources/ch10-8.PNG deleted file mode 100644 index f1aea81f..00000000 Binary files a/docs/resources/ch10-8.PNG and /dev/null differ diff --git a/docs/resources/ch11-1.PNG b/docs/resources/ch11-1.PNG deleted file mode 100644 index 8b9337c7..00000000 Binary files a/docs/resources/ch11-1.PNG and /dev/null differ diff --git a/docs/resources/ch11-2.PNG b/docs/resources/ch11-2.PNG deleted file mode 100644 index 21089f57..00000000 Binary files a/docs/resources/ch11-2.PNG and /dev/null differ diff --git a/docs/resources/ch11-3.PNG b/docs/resources/ch11-3.PNG deleted file mode 100644 index b250bea7..00000000 Binary files a/docs/resources/ch11-3.PNG and /dev/null differ diff --git a/docs/resources/ch11-4.PNG b/docs/resources/ch11-4.PNG deleted file mode 100644 index ca406138..00000000 Binary files a/docs/resources/ch11-4.PNG and /dev/null differ diff --git a/docs/resources/ch11-5.PNG b/docs/resources/ch11-5.PNG deleted file mode 100644 index 7e79c5ee..00000000 Binary files a/docs/resources/ch11-5.PNG and /dev/null differ diff --git a/docs/resources/ch11-6.PNG b/docs/resources/ch11-6.PNG deleted file mode 100644 index e5442218..00000000 Binary files a/docs/resources/ch11-6.PNG 
and /dev/null differ diff --git a/docs/resources/ch11-7.PNG b/docs/resources/ch11-7.PNG deleted file mode 100644 index 44a31677..00000000 Binary files a/docs/resources/ch11-7.PNG and /dev/null differ diff --git a/docs/resources/ch12-2.PNG b/docs/resources/ch12-2.PNG deleted file mode 100644 index bd527735..00000000 Binary files a/docs/resources/ch12-2.PNG and /dev/null differ diff --git a/docs/resources/ch12-3.PNG b/docs/resources/ch12-3.PNG deleted file mode 100644 index b8165374..00000000 Binary files a/docs/resources/ch12-3.PNG and /dev/null differ diff --git a/docs/resources/ch12-5.PNG b/docs/resources/ch12-5.PNG deleted file mode 100644 index e715dd01..00000000 Binary files a/docs/resources/ch12-5.PNG and /dev/null differ diff --git a/docs/resources/ch12-6.png b/docs/resources/ch12-6.png deleted file mode 100644 index da4ed3e7..00000000 Binary files a/docs/resources/ch12-6.png and /dev/null differ diff --git a/docs/resources/ch12-8.png b/docs/resources/ch12-8.png deleted file mode 100644 index 3b941cee..00000000 Binary files a/docs/resources/ch12-8.png and /dev/null differ diff --git a/docs/resources/ch13-10.PNG b/docs/resources/ch13-10.PNG deleted file mode 100644 index 952db874..00000000 Binary files a/docs/resources/ch13-10.PNG and /dev/null differ diff --git a/docs/resources/ch13-11.PNG b/docs/resources/ch13-11.PNG deleted file mode 100644 index 8d08465f..00000000 Binary files a/docs/resources/ch13-11.PNG and /dev/null differ diff --git a/docs/resources/ch13-12.PNG b/docs/resources/ch13-12.PNG deleted file mode 100644 index cbf75523..00000000 Binary files a/docs/resources/ch13-12.PNG and /dev/null differ diff --git a/docs/resources/ch13-17.PNG b/docs/resources/ch13-17.PNG deleted file mode 100644 index 611dde3c..00000000 Binary files a/docs/resources/ch13-17.PNG and /dev/null differ diff --git a/docs/resources/ch13-18.PNG b/docs/resources/ch13-18.PNG deleted file mode 100644 index 1387d09b..00000000 Binary files a/docs/resources/ch13-18.PNG and /dev/null 
differ diff --git a/docs/resources/ch13-19.PNG b/docs/resources/ch13-19.PNG deleted file mode 100644 index 460cce2c..00000000 Binary files a/docs/resources/ch13-19.PNG and /dev/null differ diff --git a/docs/resources/ch13-2.PNG b/docs/resources/ch13-2.PNG deleted file mode 100644 index 25cbfd97..00000000 Binary files a/docs/resources/ch13-2.PNG and /dev/null differ diff --git a/docs/resources/ch13-20.PNG b/docs/resources/ch13-20.PNG deleted file mode 100644 index 206ddf6d..00000000 Binary files a/docs/resources/ch13-20.PNG and /dev/null differ diff --git a/docs/resources/ch13-21.PNG b/docs/resources/ch13-21.PNG deleted file mode 100644 index f4f5d53c..00000000 Binary files a/docs/resources/ch13-21.PNG and /dev/null differ diff --git a/docs/resources/ch13-23.PNG b/docs/resources/ch13-23.PNG deleted file mode 100644 index 33d09330..00000000 Binary files a/docs/resources/ch13-23.PNG and /dev/null differ diff --git a/docs/resources/ch13-24.PNG b/docs/resources/ch13-24.PNG deleted file mode 100644 index c9af3d88..00000000 Binary files a/docs/resources/ch13-24.PNG and /dev/null differ diff --git a/docs/resources/ch13-25.PNG b/docs/resources/ch13-25.PNG deleted file mode 100644 index 3dc2a6fb..00000000 Binary files a/docs/resources/ch13-25.PNG and /dev/null differ diff --git a/docs/resources/ch13-26.PNG b/docs/resources/ch13-26.PNG deleted file mode 100644 index 376636f7..00000000 Binary files a/docs/resources/ch13-26.PNG and /dev/null differ diff --git a/docs/resources/ch13-27.PNG b/docs/resources/ch13-27.PNG deleted file mode 100644 index f239f4b0..00000000 Binary files a/docs/resources/ch13-27.PNG and /dev/null differ diff --git a/docs/resources/ch13-29.PNG b/docs/resources/ch13-29.PNG deleted file mode 100644 index 5e76c89d..00000000 Binary files a/docs/resources/ch13-29.PNG and /dev/null differ diff --git a/docs/resources/ch13-3.PNG b/docs/resources/ch13-3.PNG deleted file mode 100644 index 85ec4fde..00000000 Binary files a/docs/resources/ch13-3.PNG and /dev/null 
differ diff --git a/docs/resources/ch13-30.PNG b/docs/resources/ch13-30.PNG deleted file mode 100644 index 84a24769..00000000 Binary files a/docs/resources/ch13-30.PNG and /dev/null differ diff --git a/docs/resources/ch13-32.PNG b/docs/resources/ch13-32.PNG deleted file mode 100644 index 023f7508..00000000 Binary files a/docs/resources/ch13-32.PNG and /dev/null differ diff --git a/docs/resources/ch13-33.PNG b/docs/resources/ch13-33.PNG deleted file mode 100644 index f56e74f3..00000000 Binary files a/docs/resources/ch13-33.PNG and /dev/null differ diff --git a/docs/resources/ch13-4.PNG b/docs/resources/ch13-4.PNG deleted file mode 100644 index 0a34e1e5..00000000 Binary files a/docs/resources/ch13-4.PNG and /dev/null differ diff --git a/docs/resources/ch13-5.PNG b/docs/resources/ch13-5.PNG deleted file mode 100644 index 969abea5..00000000 Binary files a/docs/resources/ch13-5.PNG and /dev/null differ diff --git a/docs/resources/ch13-6.PNG b/docs/resources/ch13-6.PNG deleted file mode 100644 index df7e6cda..00000000 Binary files a/docs/resources/ch13-6.PNG and /dev/null differ diff --git a/docs/resources/ch13-7.PNG b/docs/resources/ch13-7.PNG deleted file mode 100644 index 305d018c..00000000 Binary files a/docs/resources/ch13-7.PNG and /dev/null differ diff --git a/docs/resources/ch13-8.PNG b/docs/resources/ch13-8.PNG deleted file mode 100644 index a99ea036..00000000 Binary files a/docs/resources/ch13-8.PNG and /dev/null differ diff --git a/docs/resources/ch13-9.PNG b/docs/resources/ch13-9.PNG deleted file mode 100644 index cb4fd23d..00000000 Binary files a/docs/resources/ch13-9.PNG and /dev/null differ diff --git a/docs/resources/ch2-1.PNG b/docs/resources/ch2-1.PNG deleted file mode 100644 index dc71b28f..00000000 Binary files a/docs/resources/ch2-1.PNG and /dev/null differ diff --git a/docs/resources/ch2-10.PNG b/docs/resources/ch2-10.PNG deleted file mode 100644 index 6af65768..00000000 Binary files a/docs/resources/ch2-10.PNG and /dev/null differ diff --git 
a/docs/resources/ch2-11.PNG b/docs/resources/ch2-11.PNG deleted file mode 100644 index c6c35445..00000000 Binary files a/docs/resources/ch2-11.PNG and /dev/null differ diff --git a/docs/resources/ch2-12.PNG b/docs/resources/ch2-12.PNG deleted file mode 100644 index c91eec34..00000000 Binary files a/docs/resources/ch2-12.PNG and /dev/null differ diff --git a/docs/resources/ch2-13.PNG b/docs/resources/ch2-13.PNG deleted file mode 100644 index 73d8bf63..00000000 Binary files a/docs/resources/ch2-13.PNG and /dev/null differ diff --git a/docs/resources/ch2-14.PNG b/docs/resources/ch2-14.PNG deleted file mode 100644 index d3ab233f..00000000 Binary files a/docs/resources/ch2-14.PNG and /dev/null differ diff --git a/docs/resources/ch2-15.PNG b/docs/resources/ch2-15.PNG deleted file mode 100644 index eff9b587..00000000 Binary files a/docs/resources/ch2-15.PNG and /dev/null differ diff --git a/docs/resources/ch2-16.PNG b/docs/resources/ch2-16.PNG deleted file mode 100644 index a24cd5b2..00000000 Binary files a/docs/resources/ch2-16.PNG and /dev/null differ diff --git a/docs/resources/ch2-2.PNG b/docs/resources/ch2-2.PNG deleted file mode 100644 index 88d70697..00000000 Binary files a/docs/resources/ch2-2.PNG and /dev/null differ diff --git a/docs/resources/ch2-3.PNG b/docs/resources/ch2-3.PNG deleted file mode 100644 index 6ab2f849..00000000 Binary files a/docs/resources/ch2-3.PNG and /dev/null differ diff --git a/docs/resources/ch2-4.PNG b/docs/resources/ch2-4.PNG deleted file mode 100644 index ffeaedf4..00000000 Binary files a/docs/resources/ch2-4.PNG and /dev/null differ diff --git a/docs/resources/ch2-5.PNG b/docs/resources/ch2-5.PNG deleted file mode 100644 index b80be677..00000000 Binary files a/docs/resources/ch2-5.PNG and /dev/null differ diff --git a/docs/resources/ch2-6.PNG b/docs/resources/ch2-6.PNG deleted file mode 100644 index 8310912a..00000000 Binary files a/docs/resources/ch2-6.PNG and /dev/null differ diff --git a/docs/resources/ch2-7.PNG 
b/docs/resources/ch2-7.PNG deleted file mode 100644 index 4b0e25e9..00000000 Binary files a/docs/resources/ch2-7.PNG and /dev/null differ diff --git a/docs/resources/ch2-8.PNG b/docs/resources/ch2-8.PNG deleted file mode 100644 index a442ae8e..00000000 Binary files a/docs/resources/ch2-8.PNG and /dev/null differ diff --git a/docs/resources/ch2-9.PNG b/docs/resources/ch2-9.PNG deleted file mode 100644 index 82bfe807..00000000 Binary files a/docs/resources/ch2-9.PNG and /dev/null differ diff --git a/docs/resources/ch4-1.PNG b/docs/resources/ch4-1.PNG deleted file mode 100644 index 454329ca..00000000 Binary files a/docs/resources/ch4-1.PNG and /dev/null differ diff --git a/docs/resources/ch4-10.PNG b/docs/resources/ch4-10.PNG deleted file mode 100644 index 4a9a3c62..00000000 Binary files a/docs/resources/ch4-10.PNG and /dev/null differ diff --git a/docs/resources/ch4-11.PNG b/docs/resources/ch4-11.PNG deleted file mode 100644 index 6d95ad27..00000000 Binary files a/docs/resources/ch4-11.PNG and /dev/null differ diff --git a/docs/resources/ch4-12.PNG b/docs/resources/ch4-12.PNG deleted file mode 100644 index 67df54e2..00000000 Binary files a/docs/resources/ch4-12.PNG and /dev/null differ diff --git a/docs/resources/ch4-13.PNG b/docs/resources/ch4-13.PNG deleted file mode 100644 index ea28bfb6..00000000 Binary files a/docs/resources/ch4-13.PNG and /dev/null differ diff --git a/docs/resources/ch4-14.PNG b/docs/resources/ch4-14.PNG deleted file mode 100644 index ac9d810f..00000000 Binary files a/docs/resources/ch4-14.PNG and /dev/null differ diff --git a/docs/resources/ch4-15.PNG b/docs/resources/ch4-15.PNG deleted file mode 100644 index 949b8e82..00000000 Binary files a/docs/resources/ch4-15.PNG and /dev/null differ diff --git a/docs/resources/ch4-16.PNG b/docs/resources/ch4-16.PNG deleted file mode 100644 index d298f276..00000000 Binary files a/docs/resources/ch4-16.PNG and /dev/null differ diff --git a/docs/resources/ch4-17.PNG b/docs/resources/ch4-17.PNG deleted file 
mode 100644 index 347409e4..00000000 Binary files a/docs/resources/ch4-17.PNG and /dev/null differ diff --git a/docs/resources/ch4-18.PNG b/docs/resources/ch4-18.PNG deleted file mode 100644 index 6e0f7d13..00000000 Binary files a/docs/resources/ch4-18.PNG and /dev/null differ diff --git a/docs/resources/ch4-19.PNG b/docs/resources/ch4-19.PNG deleted file mode 100644 index d3e8e3c9..00000000 Binary files a/docs/resources/ch4-19.PNG and /dev/null differ diff --git a/docs/resources/ch4-2.PNG b/docs/resources/ch4-2.PNG deleted file mode 100644 index dc71b28f..00000000 Binary files a/docs/resources/ch4-2.PNG and /dev/null differ diff --git a/docs/resources/ch4-20.PNG b/docs/resources/ch4-20.PNG deleted file mode 100644 index 00a08702..00000000 Binary files a/docs/resources/ch4-20.PNG and /dev/null differ diff --git a/docs/resources/ch4-21.PNG b/docs/resources/ch4-21.PNG deleted file mode 100644 index 1c7a99a4..00000000 Binary files a/docs/resources/ch4-21.PNG and /dev/null differ diff --git a/docs/resources/ch4-3.PNG b/docs/resources/ch4-3.PNG deleted file mode 100644 index dc2643eb..00000000 Binary files a/docs/resources/ch4-3.PNG and /dev/null differ diff --git a/docs/resources/ch4-4.PNG b/docs/resources/ch4-4.PNG deleted file mode 100644 index e029e497..00000000 Binary files a/docs/resources/ch4-4.PNG and /dev/null differ diff --git a/docs/resources/ch4-5.PNG b/docs/resources/ch4-5.PNG deleted file mode 100644 index d0cdfb31..00000000 Binary files a/docs/resources/ch4-5.PNG and /dev/null differ diff --git a/docs/resources/ch4-6.PNG b/docs/resources/ch4-6.PNG deleted file mode 100644 index 71f0bc19..00000000 Binary files a/docs/resources/ch4-6.PNG and /dev/null differ diff --git a/docs/resources/ch4-7.PNG b/docs/resources/ch4-7.PNG deleted file mode 100644 index c6bdb736..00000000 Binary files a/docs/resources/ch4-7.PNG and /dev/null differ diff --git a/docs/resources/ch4-8.PNG b/docs/resources/ch4-8.PNG deleted file mode 100644 index 5264a1a1..00000000 Binary files 
a/docs/resources/ch4-8.PNG and /dev/null differ diff --git a/docs/resources/ch5-1.PNG b/docs/resources/ch5-1.PNG deleted file mode 100644 index c6c1c998..00000000 Binary files a/docs/resources/ch5-1.PNG and /dev/null differ diff --git a/docs/resources/ch5-2.PNG b/docs/resources/ch5-2.PNG deleted file mode 100644 index 66ffea1c..00000000 Binary files a/docs/resources/ch5-2.PNG and /dev/null differ diff --git a/docs/resources/ch5-3.PNG b/docs/resources/ch5-3.PNG deleted file mode 100644 index 686c64ee..00000000 Binary files a/docs/resources/ch5-3.PNG and /dev/null differ diff --git a/docs/resources/ch5-4.PNG b/docs/resources/ch5-4.PNG deleted file mode 100644 index d6317d1a..00000000 Binary files a/docs/resources/ch5-4.PNG and /dev/null differ diff --git a/docs/resources/ch5-5.PNG b/docs/resources/ch5-5.PNG deleted file mode 100644 index 16436cc7..00000000 Binary files a/docs/resources/ch5-5.PNG and /dev/null differ diff --git a/docs/resources/ch5-6.PNG b/docs/resources/ch5-6.PNG deleted file mode 100644 index 809982a6..00000000 Binary files a/docs/resources/ch5-6.PNG and /dev/null differ diff --git a/docs/resources/ch5-7.PNG b/docs/resources/ch5-7.PNG deleted file mode 100644 index 8b0c70c9..00000000 Binary files a/docs/resources/ch5-7.PNG and /dev/null differ diff --git a/docs/resources/ch6-1.PNG b/docs/resources/ch6-1.PNG deleted file mode 100644 index 55f4eaef..00000000 Binary files a/docs/resources/ch6-1.PNG and /dev/null differ diff --git a/docs/resources/ch6-10.PNG b/docs/resources/ch6-10.PNG deleted file mode 100644 index c2fd987d..00000000 Binary files a/docs/resources/ch6-10.PNG and /dev/null differ diff --git a/docs/resources/ch6-11.PNG b/docs/resources/ch6-11.PNG deleted file mode 100644 index 586579c8..00000000 Binary files a/docs/resources/ch6-11.PNG and /dev/null differ diff --git a/docs/resources/ch6-2.PNG b/docs/resources/ch6-2.PNG deleted file mode 100644 index cc1f5b02..00000000 Binary files a/docs/resources/ch6-2.PNG and /dev/null differ diff 
--git a/docs/resources/ch6-3.PNG b/docs/resources/ch6-3.PNG deleted file mode 100644 index f76364fa..00000000 Binary files a/docs/resources/ch6-3.PNG and /dev/null differ diff --git a/docs/resources/ch6-4.PNG b/docs/resources/ch6-4.PNG deleted file mode 100644 index 0559f47d..00000000 Binary files a/docs/resources/ch6-4.PNG and /dev/null differ diff --git a/docs/resources/ch6-5.PNG b/docs/resources/ch6-5.PNG deleted file mode 100644 index e74e73da..00000000 Binary files a/docs/resources/ch6-5.PNG and /dev/null differ diff --git a/docs/resources/ch6-6.PNG b/docs/resources/ch6-6.PNG deleted file mode 100644 index 16f4fb06..00000000 Binary files a/docs/resources/ch6-6.PNG and /dev/null differ diff --git a/docs/resources/ch7-1.PNG b/docs/resources/ch7-1.PNG deleted file mode 100644 index 0946c88f..00000000 Binary files a/docs/resources/ch7-1.PNG and /dev/null differ diff --git a/docs/resources/ch7-11.PNG b/docs/resources/ch7-11.PNG deleted file mode 100644 index 19a25c38..00000000 Binary files a/docs/resources/ch7-11.PNG and /dev/null differ diff --git a/docs/resources/ch7-12.png b/docs/resources/ch7-12.png deleted file mode 100644 index 5f57a350..00000000 Binary files a/docs/resources/ch7-12.png and /dev/null differ diff --git a/docs/resources/ch7-13.png b/docs/resources/ch7-13.png deleted file mode 100644 index 9bb3f457..00000000 Binary files a/docs/resources/ch7-13.png and /dev/null differ diff --git a/docs/resources/ch7-2.PNG b/docs/resources/ch7-2.PNG deleted file mode 100644 index 894f27db..00000000 Binary files a/docs/resources/ch7-2.PNG and /dev/null differ diff --git a/docs/resources/ch7-3.PNG b/docs/resources/ch7-3.PNG deleted file mode 100644 index 3ec4991a..00000000 Binary files a/docs/resources/ch7-3.PNG and /dev/null differ diff --git a/docs/resources/ch7-4.PNG b/docs/resources/ch7-4.PNG deleted file mode 100644 index cc7fc525..00000000 Binary files a/docs/resources/ch7-4.PNG and /dev/null differ diff --git a/docs/resources/ch7-5.PNG 
b/docs/resources/ch7-5.PNG deleted file mode 100644 index 6e0aba5e..00000000 Binary files a/docs/resources/ch7-5.PNG and /dev/null differ diff --git a/docs/resources/ch7-7.PNG b/docs/resources/ch7-7.PNG deleted file mode 100644 index d6b6d274..00000000 Binary files a/docs/resources/ch7-7.PNG and /dev/null differ diff --git a/docs/resources/ch7-8.PNG b/docs/resources/ch7-8.PNG deleted file mode 100644 index 2a1e0b45..00000000 Binary files a/docs/resources/ch7-8.PNG and /dev/null differ diff --git a/docs/resources/ch7-9.PNG b/docs/resources/ch7-9.PNG deleted file mode 100644 index 779715f8..00000000 Binary files a/docs/resources/ch7-9.PNG and /dev/null differ diff --git a/docs/resources/ch8-1.PNG b/docs/resources/ch8-1.PNG deleted file mode 100644 index 7868d81f..00000000 Binary files a/docs/resources/ch8-1.PNG and /dev/null differ diff --git a/docs/resources/ch8-2.PNG b/docs/resources/ch8-2.PNG deleted file mode 100644 index d8633dcc..00000000 Binary files a/docs/resources/ch8-2.PNG and /dev/null differ diff --git a/docs/resources/ch9-10.PNG b/docs/resources/ch9-10.PNG deleted file mode 100644 index df7e6cda..00000000 Binary files a/docs/resources/ch9-10.PNG and /dev/null differ diff --git a/docs/resources/ch9-11.PNG b/docs/resources/ch9-11.PNG deleted file mode 100644 index 305d018c..00000000 Binary files a/docs/resources/ch9-11.PNG and /dev/null differ diff --git a/docs/resources/ch9-12.PNG b/docs/resources/ch9-12.PNG deleted file mode 100644 index a99ea036..00000000 Binary files a/docs/resources/ch9-12.PNG and /dev/null differ diff --git a/docs/resources/ch9-9.PNG b/docs/resources/ch9-9.PNG deleted file mode 100644 index 5fd6fabf..00000000 Binary files a/docs/resources/ch9-9.PNG and /dev/null differ