
Lightning Integration Testing Framework

Setup

The tests are written as py.test unit tests. This facilitates the handling of fixtures and makes it easy to instrument the tests, e.g., to write a test once and run it against various combinations of clients. To install the Python dependencies and bitcoind use the following:

apt-get install bitcoind python3 python3-pip
pip3 install -r requirements.txt

We suggest running this in a virtualenv in order to guard against dependency changes on your system.
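
For example, a minimal setup using Python's built-in venv module could look like this (the directory name venv is just a convention):

python3 -m venv venv               # create an isolated environment
source venv/bin/activate           # activate it for the current shell
pip install -r requirements.txt    # install the test dependencies into it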

We currently do not bundle the binaries that we test against. In order for the tests to run you'll need to compile the various clients and move/link the binaries into the directories where the tests can find them. Please refer to the various compilation instructions and make sure that the structure matches the following diagram:

├── btcd
├── eclair
│   └── eclair-node.jar
├── lightningd
│   ├── lightningd
│   ├── lightningd_channel
│   ├── lightningd_gossip
│   ├── lightningd_handshake
│   ├── lightningd_hsm
│   └── lightningd_opening
└── lnd
    └── lnd

The binaries for lightningd and lnd should be pretty self-explanatory. The binary artifact for eclair is the JAR file containing all dependencies for the eclair-node subproject; it is usually called eclair-node_X.Y.Z-SNAPSHOT-XXXXXX-capsule-fat.jar and can be found in the target subdirectory.
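
As a rough sketch, assuming the directories above sit in the root of this repository and the clients were built under ~/src (both paths are purely illustrative), the binaries could be symlinked into place like this:

# All source paths below are examples; point them at your actual build outputs.
ln -s ~/src/lightning/lightningd/lightningd lightningd/lightningd
ln -s ~/src/eclair/eclair-node/target/eclair-node_X.Y.Z-SNAPSHOT-XXXXXX-capsule-fat.jar eclair/eclair-node.jar
ln -s ~/src/lnd/lnd lnd/lnd

The lightningd helper binaries (lightningd_channel, lightningd_gossip, etc.) need to be linked alongside lightningd in the same way.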

Running the tests

The tests rely on py.test to create the fixtures, wire them into the tests, and run the tests themselves. Execute all tests by running:

py.test -v test.py

This will run through all possible combinations of the implementations and report success/failure for each one.


To run only tests involving a certain implementation you can also run the following (taking lightningd as an example):

py.test -v test.py -k LightningNode
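
Since -k accepts standard py.test keyword expressions, you can also combine or exclude implementations, e.g.:

py.test -v test.py -k 'LightningNode and not EclairNode'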

Not sure where a test dies? Make the whole thing extremely verbose with this:

TEST_DEBUG=1 py.test -v test.py -s -k 'testConnect[EclairNode_LightningNode]'

Should you want to jump into an interactive session when something is about to fail, run the following:

py.test -v test.py --pdb

This will run the tests until a failure would be recorded, then start the Python debugging console instead. In the console you have a Python REPL with access to the context of the current test, and you can interact with the clients via their RPCs to gather more information about what is going wrong.
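
A typical session then uses the standard pdb commands, for example l to show where the test stopped, pp locals() to list the fixtures and node objects in scope, and interact to open a full Python REPL sharing the same namespace; the available names depend on the specific test:

(Pdb) l
(Pdb) pp locals()
(Pdb) interact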
