Automatic benchmarks and plots (#11)
The automatic benchmarks and plots are still NOT IMPLEMENTED.

Nevertheless, the following has been done, which warrants merging:

- Switch from the deprecated CSCMAT format to a JSON-based format for storing LDPC matrices
- Enable the C++ code to load matrices from the `bincsc.json` format (a sketch of the file contents is shown below)
- Update documentation and the list of LDPC codes to make them more readable
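
To illustrate the new storage format, here is a minimal, hypothetical sketch (not an actual file from this repository) of the fields a `bincsc.json` file carries, based on the field names used by the loader added in this commit; the exact indexing convention of `colptr`/`rowval` is defined by LDPCStorage.jl:

```cpp
// Hypothetical toy example of bincsc.json contents, embedded as a raw string
// and parsed with nlohmann::json (the same parser bundled with this repository).
#include <cstdint>
#include <iostream>
#include <vector>

#include <nlohmann/json.hpp>

int main() {
    // Toy values purely to illustrate the field names; real matrices are much larger.
    constexpr auto example = R"({
        "format": "BINCSCJSON",
        "colptr": [0, 1, 2, 3, 4, 5, 6],
        "rowval": [0, 1, 0, 1, 0, 1]
    })";

    nlohmann::json data = nlohmann::json::parse(example);
    std::vector<std::uint32_t> colptr = data["colptr"];
    std::vector<std::uint32_t> rowval = data["rowval"];
    std::cout << "columns: " << colptr.size() - 1
              << ", non-zeros: " << rowval.size() << '\n';
}
```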


* Changes type of BP max-iteration variable in FER simulation

* Fixes typo in example

* Reformats codes/README.md

* Adds `apt-get update` to CI. Hopefully fixes CI crash

* Splits directory into benchmarks_runtime and benchmarks_error_rate

* Adds missing CMakeLists file.

* Makes constexpr variables in header files also inline (C++17 feature)

* Specifies C++ standard version. Gets rid of warning.

* Some more simulation results for pure-LDPC FER simulations

* Uses template parameters in static assertions for rate adaption (as opposed to std::array::size, which is not a constant expression in this context; see the illustration after this list)

* Stores all matrices in new json file format. Updates README.

LDPCStorage.jl now uses a new JSON-based format. This format will be used in this project from now on, as well as by the new version of the Julia package.
Generation of C++ header files is now also part of LDPCStorage.jl; the script `ldpc_codegen.jl` in this repository now only implements a command line interface.
Note that the C++ simulator in this project still only supports parsing of binary matrices from the deprecated .cscmat files. This will be fixed later.

* Removes obsolete .cscmat matrix files

* Adds protographs and QC exponents to table in codes/README.md

* Adds test for reading rate adaption from file

* Copies rate adaption csv file into the tests binary directory.

* Adds rate adaption csv that is used for testing

* Partial support for json file input (only bincsc.json, i.e., simulator does not accept qc-exponents)

* Adds running minimal error rate benchmarks to CI

* Includes simulators in code coverage

* Fix paths in CI

* Fix typo in CI script

* Switch CI BUILD_TYPE to Debug for correct coverage information

* Updates READMEs

* Try to fix ignore pattern for gcovr

* Fix typo

* Try to get coverage to exclude external dependencies
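
The static-assertion change mentioned above can be illustrated with a short, self-contained sketch (hypothetical names, not code from this repository): a reference parameter cannot be used in a constant expression, whereas the deduced non-type template parameter can.

```cpp
// Why the commit prefers the template parameter N over std::array::size()
// inside static_assert. All names here are hypothetical.
#include <array>
#include <cstddef>

template <typename T, std::size_t N>
void check_rate_adaption(const std::array<T, N>& rows_to_combine) {
    // static_assert(rows_to_combine.size() % 2 == 0);  // ill-formed: a reference
    //                                                   // parameter is not usable
    //                                                   // in a constant expression
    static_assert(N % 2 == 0, "expects pairs of row indices");  // OK: N is a constant
    (void)rows_to_combine;  // suppress unused-parameter warning
}

int main() {
    std::array<int, 4> rows{0, 1, 2, 3};
    check_rate_adaption(rows);
}
```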
XQP-Munich authored May 17, 2023
1 parent 53afef5 commit 6aafb99
Showing 51 changed files with 26,489 additions and 427 deletions.
19 changes: 15 additions & 4 deletions .github/workflows/ci-cmake_tests.yml
@@ -14,7 +14,7 @@ on:
workflow_dispatch:

env:
BUILD_TYPE: RelWithDebInfo
BUILD_TYPE: Debug

defaults:
run:
@@ -47,16 +47,27 @@ jobs:
working-directory: ${{runner.workspace}}/build/examples
run: ./demo_error_correction

- name: Run unit tests with coverage
- name: Install lcov and gcov (tools to get C++ coverage information)
working-directory: ${{runner.workspace}}/build/
run: |
set -x # enable debugging in bash
sudo apt-get update
sudo apt-get -qq -y install lcov gcovr
- name: Run unit tests with coverage
working-directory: ${{runner.workspace}}/build/
run: |
set -x # enable debugging in bash
lcov --zerocounters --directory .
# run all unit tests (ctest)
GTEST_OUTPUT=xml:test-results/ GTEST_COLOR=1 ctest -V -C $BUILD_TYPE
gcovr -v --root ../ --xml-pretty --xml coverage.xml
# run also the simulator with very few frames to simulate
./benchmarks_error_rate/rate_adapted_fer -cp "./tests/test_reading_bincscjson_format_block_6144_proto_2x6_313422410401.bincsc.json" -mf 3
./benchmarks_error_rate/critical_rate_simulation -cp "./tests/test_reading_bincscjson_format_block_6144_proto_2x6_313422410401.bincsc.json" -rp "./tests/rate_adaption_2x6_block_6144_for_testing.csv" -nf 3
# collect coverage, etc.
gcovr -v --root ../ --xml-pretty --xml coverage.xml --exclude '.*/external/'
echo
cat ./coverage.xml # pring coverage information
cat ./coverage.xml # print coverage information
- name: Publish to codecov
uses: codecov/codecov-action@v2
16 changes: 8 additions & 8 deletions CMakeLists.txt
@@ -8,6 +8,10 @@ project(LDPC4QKD
DESCRIPTION "Rate adaptive distributed source coding using LDPC codes."
)

# Enable testing. This is not necessary for using GTest,
# only provides integration with CTest (CMake test suite)
enable_testing()

# Allows `include`'ing custom CMake modules from this directory.
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/")

@@ -103,21 +107,17 @@ add_subdirectory(src)

add_subdirectory(examples)

if (BUILD_UNIT_TESTS)
# Enable testing. This is not necessary for using GTest,
# only provides integration with CTest (CMake test suite)
# Note that this line has to be located before adding the sub-directory with the tests.
# Having this line in the test/CMakeLists.txt file doesn't work (I don't know why).
enable_testing()
add_subdirectory(benchmarks_error_rate)

if (BUILD_UNIT_TESTS)
add_subdirectory(tests)
else (BUILD_UNIT_TESTS)
message(STATUS "Unit tests are not built, as specified by user.")
endif (BUILD_UNIT_TESTS)


if (BUILD_RUNTIME_BENCHMARKS)
add_subdirectory(benchmarks)
add_subdirectory(benchmarks_runtime)
else (BUILD_RUNTIME_BENCHMARKS)
message(STATUS "Performance benchmarks are not built, as specified by user.")
message(STATUS "Runtime performance benchmarks are not built, as specified by user.")
endif (BUILD_RUNTIME_BENCHMARKS)
6 changes: 4 additions & 2 deletions README.md
@@ -62,7 +62,7 @@ Let us know if you're having problems with the provided materials, wish to contr

### Data files
- A number of LDPC codes (a list and the actual LDPC matrices are in the folder `codes`).
Their parity check matrices are stored in a custom file format (called `CSCMAT`).
Their parity check matrices are stored in a custom file format (called `qccsc.json`).
- Simulations results done using [AFF3CT](https://github.com/aff3ct/aff3ct), showing FER of the LDPC matrices at various channel parameters.
- Simulation results done using the decoder in this repository, showing FER of LDPC matrices, their rate adapted versions, and average rate under rate adaption (Work in progress!).
- For each LDPC matrix, a specification of rate adaption.
@@ -112,4 +112,6 @@ If you need some feature for your applications, let us know, e.g. by creating an

## Attributions

- Some of the simulation/benchmarking programs use [CmdParser](https://github.com/FlorianRappl/CmdParser), a simple command line argument parser (MIT license, the sources are included in this repository at `benchmarks/CmdParser-1.1.0`).
Some of the simulation/benchmarking programs use
- [CmdParser](https://github.com/FlorianRappl/CmdParser), a simple command line argument parser (MIT license, the sources are included in this repository at `external/CmdParser-91aaa61e`). Copyright (c) 2015 - 2016 Florian Rappl
- [json](https://github.com/nlohmann/json), a JSON parser (MIT License and others, see header `external/json-6af826d/json.hpp`). © 2013-2022 Niels Lohmann.
75 changes: 0 additions & 75 deletions benchmarks/code_simulation_helpers.hpp

This file was deleted.

48 changes: 48 additions & 0 deletions benchmarks_error_rate/CMakeLists.txt
@@ -0,0 +1,48 @@
# CMake file for building the error correction demo.

cmake_minimum_required(VERSION 3.19)

project(ErrorCorrectionDemoForQKD)


# ---------------------------------------------------------------------------------- Rate adapted performance simulation
add_executable(critical_rate_simulation main_critical_rate_simulation.cpp
code_simulation_helpers.hpp)

target_compile_features(critical_rate_simulation PUBLIC cxx_std_17)

target_link_libraries(critical_rate_simulation
PRIVATE
# build options
compiler_warnings
project_options

# libraries
LDPC4QKD::LDPC4QKD
)

target_include_directories(critical_rate_simulation
PRIVATE
"${CMAKE_CURRENT_LIST_DIR}/../"
)

# ------------------------------------------------------------- frame error rate (FER) simulation (allows rate adaption)
add_executable(rate_adapted_fer main_rate_adapted_fer.cpp
code_simulation_helpers.hpp)

target_compile_features(rate_adapted_fer PUBLIC cxx_std_17)

target_link_libraries(rate_adapted_fer
PRIVATE
# build options
compiler_warnings
project_options

# libraries
LDPC4QKD::LDPC4QKD
)

target_include_directories(rate_adapted_fer
PRIVATE
"${CMAKE_CURRENT_LIST_DIR}/../"
)
173 changes: 173 additions & 0 deletions benchmarks_error_rate/code_simulation_helpers.hpp
@@ -0,0 +1,173 @@
//
// Created by alice on 08.09.21.
//

#ifndef LDPC4QKD_CODE_SIMULATION_HELPERS_HPP
#define LDPC4QKD_CODE_SIMULATION_HELPERS_HPP

#include <filesystem> // C++17

#include "external/json-6af826d/json.hpp" // external json parser library

#include "read_ldpc_file_formats.hpp"
#include "src/rate_adaptive_code.hpp"


namespace LDPC4QKD::CodeSimulationHelpers {
template<typename T>
void noise_bitstring_inplace(std::mt19937_64 &rng, std::vector<T> &src, double err_prob) {
std::bernoulli_distribution distribution(err_prob);

for (std::size_t i = 0; i < src.size(); i++) {
if (distribution(rng)) {
src[i] = !src[i];
} else {
src[i] = src[i];
}
}
}


// Shannon binary entropy
template<typename T>
double h2(T p) {
return -p * ::log2(p) - (1 - p) * ::log2(1 - p);
}


template<typename T>
double avg(const std::vector<T> &in) {
double tmp{};
for (auto i : in) {
tmp += static_cast<double>(i);
}
return tmp / static_cast<double>(in.size());
}


/*!
* Loads LDPC code (and optionally also rate adaption) from files.
* WARNING: if the templated types are too small, the numbers in the files are static_cast down!
*
* @tparam Bit type of matrix entires (only values zero and one are used, default to bool)
* @tparam colptr_t unsigned integer type that fits ("number of non-zero matrix entries" + 1)
* @tparam idx_t unsigned integer type fitting number of columns N (thus also number of rows M)
* @param cscmat_file_path path to CSCMAT file, where the LDPC code is loaded from.
* (no QC-exponents allowed, just binary compressed sparse column (CSC) representation!)
* @param rate_adaption_file_path Path to load rate adaption from.
* (csv file of line index pairs, which are combined at each rate adaption step)
* If unspecified, no rate adaption will be available.
* @return Rate adaptive code
*/
template<typename Bit=bool,
typename colptr_t=std::uint32_t,
typename idx_t=std::uint32_t>
LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t> load_ldpc_from_cscmat(
const std::string &cscmat_file_path, const std::string &rate_adaption_file_path=""
) {
auto pair = LDPC4QKD::read_matrix_from_cscmat<colptr_t, idx_t>(cscmat_file_path);
auto colptr = pair.first;
auto row_idx = pair.second;

if(rate_adaption_file_path.empty()) {
return LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t>(colptr, row_idx);
} else {
std::vector<idx_t> rows_to_combine = read_rate_adaption_from_csv<idx_t>(rate_adaption_file_path);
return LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t>(colptr, row_idx, rows_to_combine);
}
}

/*!
* Loads LDPC code (and optionally also rate adaption) from files.
* WARNING: if the templated types are too small, the numbers in the files are static_cast down!
*
* @tparam Bit type of matrix entires (only values zero and one are used, default to bool)
* @tparam colptr_t unsigned integer type that fits ("number of non-zero matrix entries" + 1)
* @tparam idx_t unsigned integer type fitting number of columns N (thus also number of rows M)
* @param json_file_path path to json file, where the LDPC code is loaded from.
* (no QC-exponents allowed, just binary compressed sparse column (CSC) representation!)
* @param rate_adaption_file_path Path to load rate adaption from.
* (csv file of line index pairs, which are combined at each rate adaption step)
* If unspecified, no rate adaption will be available.
* @return Rate adaptive code
*/
template<typename Bit=bool,
typename colptr_t=std::uint32_t,
typename idx_t=std::uint32_t>
LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t> load_ldpc_from_json(
const std::string &json_file_path, const std::string &rate_adaption_file_path = ""
) {
try {
std::ifstream fs(json_file_path);
if (!fs) {
throw std::runtime_error("Invalid file.");
}

// parse JSON file
using json = nlohmann::json;
std::ifstream f(json_file_path);
json data = json::parse(f);

if (data["format"] == "BINCSCJSON") { // binary matrix stored
std::vector<colptr_t> colptr = data["colptr"];
std::vector<idx_t> rowval = data["rowval"];

if (rate_adaption_file_path.empty()) {
return LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t>(colptr, rowval);
} else {
std::vector<idx_t> rows_to_combine = read_rate_adaption_from_csv<idx_t>(rate_adaption_file_path);
return LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t>(colptr, rowval, rows_to_combine);
}
} else if (data["format"] == "COMPRESSED_SPARSE_COLUMN") { // quasi-cyclic exponents stored
// std::vector<std::uint64_t> colptr = data["colptr"];
// std::vector<std::uint64_t> rowval = data["rowval"];
// std::vector<std::uint64_t> nzval = data["nzval"];
// TODO reconstructing the matrix from qc-exponents and storing as sparse needs some work...
throw std::runtime_error("Reading QC-exponents not supported yet! Use LDPCStorage.jl to convert to bincsc.json format!");
} else {
throw std::runtime_error("Unexpected format within json file.");
}
} catch (const std::exception &e) {
std::stringstream s;
s << "Failed to read LDPC code from file '" << json_file_path << "'. Reason:\n" << e.what() << "\n";
throw std::runtime_error(s.str());
} catch (...) {
std::stringstream s;
s << "Failed to read LDPC code from file '" << json_file_path << "' due to unknown error.";
throw std::runtime_error(s.str());
}
}

/*!
* Loads LDPC code (and optionally also rate adaption) from files. Parser chosen depends on File ending!
* WARNING: if the templated types are too small, the numbers in the files are static_cast down!
*
* @tparam Bit type of matrix entires (only values zero and one are used, default to bool)
* @tparam colptr_t unsigned integer type that fits ("number of non-zero matrix entries" + 1)
* @tparam idx_t unsigned integer type fitting number of columns N (thus also number of rows M)
* @param file_path path to file, where the LDPC code is loaded from.
* (no QC-exponents allowed, just binary compressed sparse column (CSC) representation!)
* @param rate_adaption_file_path Path to load rate adaption from.
* (csv file of line index pairs, which are combined at each rate adaption step)
* If unspecified, no rate adaption will be available.
* @return Rate adaptive code
*/
template<typename Bit=bool,
typename colptr_t=std::uint32_t,
typename idx_t=std::uint32_t>
LDPC4QKD::RateAdaptiveCode<Bit, colptr_t, idx_t> load_ldpc(
const std::string &file_path, const std::string &rate_adaption_file_path=""
) {
std::filesystem::path filePath = file_path;
if (filePath.extension() == ".cscmat") {
return load_ldpc_from_cscmat(file_path, rate_adaption_file_path);
} else if (filePath.extension() == ".json") {
return load_ldpc_from_json(file_path, rate_adaption_file_path);
} else {
throw std::runtime_error("Expected file with extension .cscmat or .json");
}
}

}

#endif //LDPC4QKD_CODE_SIMULATION_HELPERS_HPP
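
As a usage sketch for the helpers added in this file (the file paths below are hypothetical; the signatures are those of `load_ldpc` above, which dispatches on the file extension):

```cpp
// Hypothetical usage of load_ldpc: loading an LDPC code with and without
// rate adaption. Paths are placeholders, not files from this repository.
#include "benchmarks_error_rate/code_simulation_helpers.hpp"

int main() {
    using namespace LDPC4QKD::CodeSimulationHelpers;

    // bincsc.json input, no rate adaption:
    auto code = load_ldpc("codes/example.bincsc.json");

    // bincsc.json input plus a CSV of row-index pairs used for rate adaption:
    auto rate_adapted_code = load_ldpc("codes/example.bincsc.json",
                                       "codes/example_rate_adaption.csv");
}
```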