mirror of
https://github.com/aljazceru/python-teos.git
synced 2026-02-01 12:44:25 +01:00
Merge pull request #109 from talaia-labs/100-improve-config
Improves config file format
This commit is contained in:
@@ -73,10 +73,10 @@ jobs:
name: Setup teos
command: |
. venv/bin/activate
cp test/teos/e2e/teos-conf.py teos/conf.py
python3 -m generate_keys -d ~/.teos/
python3 -m generate_keys -n cli -d ~/.teos_cli/
cp ~/.teos/teos_pk.der ~/.teos_cli/
cp test/teos/e2e/teos.conf ~/.teos/

# Run E2E tests
INSTALL.md
@@ -5,14 +5,14 @@
There are two ways of running `teos`: running it as a module or adding the library to the `PYTHONPATH` env variable.

## Running `teos` as a Module
The easiest way to run `teos` is as a module. To do so you need to use `python -m`. From the teos parent directory run:
The **easiest** way to run `teos` is as a module. To do so you need to use `python -m`. From the teos parent directory run:

python -m teos.teosd
python -m teos.teosd -h

Notice that if you run `teos` as a module, you'll need to replace all the calls from `python teosd.py` to `python -m teos.teosd`

## Modifying `PYTHONPATH`
Alternatively, you can add `teos` to your `PYTHONPATH`. You can do so by running:
**Alternatively**, you can add `teos` to your `PYTHONPATH` by running:

export PYTHONPATH=$PYTHONPATH:<absolute_path_to_teos_parent>
@@ -24,7 +24,14 @@ You should also include the command in your `.bashrc` to avoid having to run it

echo 'export PYTHONPATH=$PYTHONPATH:<absolute_path_to_teos_parent>' >> ~/.bashrc

## Modify Configuration Parameters
If you'd like to modify some of the configuration defaults (such as the user directory, where the logs will be stored) you can do so in the config file located at:
Once the `PYTHONPATH` is set, you should be able to run `teos` straightaway. Try it by running:

<absolute_path_to_teos_parent>/teos/conf.py
cd <absolute_path_to_cli_parent>/teos
python teosd.py -h

## Modify Configuration Parameters
If you'd like to modify some of the configuration defaults (such as the bitcoind rpcuser and password) you can do so in the config file located at:

<data_dir>/.teos/teos.conf

`<data_dir>` defaults to your home directory (`~`).
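Since `<data_dir>` defaults to `~`, the config path can be resolved with standard home-directory expansion; an illustrative Python check (not part of the repo):

```python
import os

# <data_dir> defaults to the user's home directory (~), so once expanded
# the teos config file lives at <home>/.teos/teos.conf.
conf_path = os.path.join(os.path.expanduser("~"), ".teos", "teos.conf")

# Expansion yields an absolute path ending in the fixed suffix.
print(os.path.isabs(conf_path))
print(conf_path.endswith(os.path.join(".teos", "teos.conf")))
```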
README.md
@@ -14,34 +14,95 @@ The Eye of Satoshi is a Lightning watchtower compliant with [BOLT13](https://git

Additionally, tests for every module can be found at `tests`.

By default, `teos` will run on `regtest`. In order to run it on another network you need to change your `bitcoin.conf` (to run in the proper network) and your `conf.py` to match the network name and rpc port:
## Dependencies
Refer to [DEPENDENCIES.md](DEPENDENCIES.md)

## Installation

Refer to [INSTALL.md](INSTALL.md)

## Running TEOS

You can run `teos` by running `teosd.py` under `teos`:

```
BTC_RPC_PORT = 18443
BTC_NETWORK = "regtest"
python -m teos.teosd
```

### Running TEOS
You can run `teos` by running `teosd.py` under `teos`.
`teos` comes with a default configuration that can be found at [teos/\_\_init\_\_.py](teos/__init__.py).

`teos` comes with a default configuration file (check [conf.py](teos/conf.py)). The configuration file includes, amongst others, where your data folder is placed, what network it connects to, etc.
The configuration includes, amongst others, where your data folder is placed, what network it connects to, etc.

To run `teos` you need a set of keys (to sign appointments) stored in your data directory. You can follow [generate keys](#generate-keys) to generate them.

### Interacting with a TEOS Instance
### Configuration file and command line parameters

To change the configuration defaults you can:

- Define a configuration file following the template (check [teos/template.conf](teos/template.conf)) and place it in the `data_dir` (that defaults to `~/.teos/`)

and / or

- Add some global options when running the daemon (run `teosd.py -h` for more info).

## Running TEOS in another network

By default, `teos` runs on `mainnet`. In order to run it on another network you need to change the network parameter in the configuration file or pass the network parameter as a command line option. Notice that if `teos` does not find a `bitcoind` node running in the same network that it is set to run, it will refuse to run.

### Modifying the configuration file

The configuration file options to change the network where `teos` will run are `btc_rpc_port` and `btc_network` under the `bitcoind` section:

```
[bitcoind]
btc_rpc_user = "user"
btc_rpc_passwd = "passwd"
btc_rpc_connect = "localhost"
btc_rpc_port = 8332
btc_network = "mainnet"
```

For regtest, it should look like:

```
[bitcoind]
btc_rpc_user = "user"
btc_rpc_passwd = "passwd"
btc_rpc_connect = "localhost"
btc_rpc_port = 18443
btc_network = "regtest"
```
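A hypothetical helper to confirm that `btc_rpc_port` and `btc_network` agree (the map below simply restates bitcoind's default RPC ports; it is not part of `teos`):

```python
# bitcoind's default RPC ports per network.
DEFAULT_RPC_PORTS = {"mainnet": 8332, "testnet": 18332, "regtest": 18443}

def port_matches_network(btc_network, btc_rpc_port):
    """Return True when the configured RPC port is the usual one for the network."""
    return DEFAULT_RPC_PORTS.get(btc_network) == btc_rpc_port

print(port_matches_network("mainnet", 8332))   # → True
print(port_matches_network("regtest", 18443))  # → True
print(port_matches_network("regtest", 8332))   # → False
```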
### Passing command line options to `teosd`

Some configuration options can also be passed as options when running `teosd`. We can, for instance, pick the network as follows:

```
python -m teos.teosd --btcnetwork=regtest --btcrpcport=18443
```
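A hedged sketch of how long options like these can be folded into an override dictionary with `getopt` (`teosd`'s real parser may differ; the option names are taken from the command above):

```python
from getopt import getopt

# Parse the two options from the example above into a dict that could
# override config-file values (illustrative only).
argv = ["--btcnetwork=regtest", "--btcrpcport=18443"]
opts, _ = getopt(argv, "", ["btcnetwork=", "btcrpcport="])

command_line_conf = {}
for opt, arg in opts:
    if opt == "--btcnetwork":
        command_line_conf["BTC_NETWORK"] = arg
    if opt == "--btcrpcport":
        command_line_conf["BTC_RPC_PORT"] = int(arg)

print(command_line_conf)  # → {'BTC_NETWORK': 'regtest', 'BTC_RPC_PORT': 18443}
```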
## Interacting with a TEOS Instance

You can interact with a `teos` instance (either run by yourself or someone else) by using `teos_cli` under `cli`.

Since `teos_cli` works independently of `teos`, it uses a different configuration file (check [cli/conf.py](cli/conf.py)).
Since `teos_cli` works independently of `teos`, it uses a different configuration. The defaults can be found at [cli/\_\_init\_\_.py](cli/__init__.py). The same approach as with `teosd` is followed:

`teos_cli` also needs an independent set of keys (that can be generated following [generate keys](#generate-keys)) as well as the public key of the tower instance (`teos_pk.der`). The same data directory can be used for both if you are running things locally.
- A config file (`~/.teos_cli/teos_cli.conf`) can be set to change the defaults.
- Some options can also be changed via command line.
- The configuration file template can be found at [cli/template.conf](cli/template.conf).

`teos_cli` needs an independent set of keys and, on top of that, a copy of the tower's public key (`teos_pk.der`). Check [generate keys](#generate-keys) for more on how to set this up.

Notice that `teos_cli` is a simple way to interact with `teos`, but ideally that should be part of your wallet functionality (hence why they are independent entities). `teos_cli` can be used as an example of how to send data to a [BOLT13](https://github.com/sr-gi/bolt13) compliant watchtower.
### Generate Keys
## Generate Keys

In order to generate a pair of keys for `teos` (or `teos_cli`) you can use `generate_keys.py`.

The script generates a set of keys (`teos_sk.der` and `teos_pk.der`) in the current directory, by default. The name and output directory can be changed using `-n` and `-d` respectively.
The script generates and stores a set of keys on disk (by default it outputs them in the current directory and names them `teos_sk.der` and `teos_pk.der`). The name and output directory can be changed using `-n` and `-d` respectively.

The following command will generate a set of keys for `teos` and store it in the default data directory (`~/.teos`):
```
@@ -59,12 +120,5 @@ Notice that `cli` needs a copy of the tower public key, so you should make a cop

cp ~/.teos/teos_pk.der ~/.teos_cli/teos_pk.der
```

## Dependencies
Refer to [DEPENDENCIES.md](DEPENDENCIES.md)

## Installation

Refer to [INSTALL.md](INSTALL.md)

## Contributing
Refer to [CONTRIBUTING.md](CONTRIBUTING.md)
@@ -2,16 +2,23 @@

`teos_cli` has some dependencies that can be satisfied by following [DEPENDENCIES.md](DEPENDENCIES.md). If your system already satisfies the dependencies, you can skip that part.

There are two ways of running `teos_cli`: adding the library to the `PYTHONPATH` env variable, or running it as a module.
There are two ways of running `teos_cli`: running it as a module or adding the library to the `PYTHONPATH` env variable.

## Running `teos_cli` as a module
The **easiest** way to run `teos_cli` is as a module. To do so you need to use `python -m`. From `cli` **parent** directory run:

python -m cli.teos_cli -h

Notice that if you run `teos_cli` as a module, you'll need to replace all the calls from `python teos_cli.py <argument>` to `python -m cli.teos_cli <argument>`

## Modifying `PYTHONPATH`
In order to run `teos_cli`, you should set your `PYTHONPATH` env variable to include the folder that contains the `cli` folder. You can do so by running:
**Alternatively**, you can add `teos_cli` to your `PYTHONPATH` by running:

export PYTHONPATH=$PYTHONPATH:<absolute_path_to_cli_parent>

For example, for user alice running a UNIX system and having `cli` in her home folder, she would run:
For example, for user alice running a UNIX system and having `python-teos` in her home folder, she would run:

export PYTHONPATH=$PYTHONPATH:/home/alice/
export PYTHONPATH=$PYTHONPATH:/home/alice/python-teos/

You should also include the command in your `.bashrc` to avoid having to run it every time you open a new terminal. You can do it by running:
@@ -22,14 +29,10 @@ Once the `PYTHONPATH` is set, you should be able to run `teos_cli` straightaway.

cd <absolute_path_to_cli_parent>/cli
python teos_cli.py -h

## Running `teos_cli` as a module
Python code can also be run as a module; to do so you need to use `python -m`. From `cli` **parent** directory run:

python -m cli.teos_cli -h

Notice that if you run `teos_cli` as a module, you'll need to replace all the calls from `python teos_cli.py <argument>` to `python -m cli.teos_cli <argument>`

## Modify configuration parameters
If you'd like to modify some of the configuration defaults (such as the user directory, where the logs and appointment receipts will be stored) you can do so in the config file located at:

<absolute_path_to_cli_parent>/cli/conf.py
<data_dir>/.teos_cli/teos_cli.conf

`<data_dir>` defaults to your home directory (`~`).
@@ -15,8 +15,8 @@ Refer to [INSTALL.md](INSTALL.md)

#### Global options

- `-s, --server`: API server where to send the requests. Defaults to https://teos.pisa.watch (modifiable in conf.py)
- `-p, --port`: API port where to send the requests. Defaults to 443 (modifiable in conf.py)
- `-s, --server`: API server where to send the requests. Defaults to 'localhost' (modifiable in conf file).
- `-p, --port`: API port where to send the requests. Defaults to '9814' (modifiable in conf file).
- `-h, --help`: shows a list of commands or help for a specific command.

#### Commands
@@ -68,7 +68,7 @@ if `-f, --file` **is** specified, then the command expects a path to a json file

### get_appointment

This command is used to get information about an specific appointment from the Eye of Satoshi.
This command is used to get information about a specific appointment from the Eye of Satoshi.

**Appointment can be in three states:**
@@ -146,8 +146,15 @@ or

If you wish to read about the underlying API, and how to write your own tool to interact with it, refer to [TEOS-API.md](TEOS-API.md).

## Are you reckless? Try me on mainnet
Would you like to try me on `mainnet` instead of `testnet`? Add `-s https://mainnet.teos.pisa.watch` to your calls, for example:
## Try our live instance

By default, `teos_cli` will connect to your local instance (running on localhost). There are also a couple of live instances running, one for mainnet and one for testnet:

- testnet endpoint = `teos.pisa.watch`
- mainnet endpoint = `teosmainnet.pisa.watch`

### Connecting to the mainnet instance
Add `-s https://teosmainnet.pisa.watch` to your calls, for example:

```
python teos_cli.py -s https://teosmainnet.pisa.watch add_appointment -f dummy_appointment_data.json
@@ -155,4 +162,4 @@ python teos_cli.py -s https://teosmainnet.pisa.watch add_appointment -f dummy_ap

You can also change the config file to avoid specifying the server every time:

`DEFAULT_TEOS_API_SERVER = "https://teosmainnet.pisa.watch"`
`TEOS_SERVER = "https://teosmainnet.pisa.watch"`
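The server value is joined with the port and, when no scheme is given, prefixed with `http://` (this mirrors the `teos_url` handling in `cli/teos_cli.py` shown later in this diff); as a standalone sketch:

```python
def make_teos_url(server, port):
    """Join server and port, defaulting to http:// when no scheme is present."""
    url = "{}:{}".format(server, port)
    if not url.startswith("http"):
        url = "http://" + url
    return url

print(make_teos_url("localhost", 9814))                      # → http://localhost:9814
print(make_teos_url("https://teosmainnet.pisa.watch", 443))  # → https://teosmainnet.pisa.watch:443
```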
@@ -1,28 +1,16 @@
import os
import cli.conf as conf
from common.tools import extend_paths, check_conf_fields, setup_logging, setup_data_folder

DATA_DIR = os.path.expanduser("~/.teos_cli/")
CONF_FILE_NAME = "teos_cli.conf"
LOG_PREFIX = "cli"

# Load config fields
conf_fields = {
    "DEFAULT_TEOS_API_SERVER": {"value": conf.DEFAULT_TEOS_API_SERVER, "type": str},
    "DEFAULT_TEOS_API_PORT": {"value": conf.DEFAULT_TEOS_API_PORT, "type": int},
    "DATA_FOLDER": {"value": conf.DATA_FOLDER, "type": str},
    "CLIENT_LOG_FILE": {"value": conf.CLIENT_LOG_FILE, "type": str, "path": True},
    "APPOINTMENTS_FOLDER_NAME": {"value": conf.APPOINTMENTS_FOLDER_NAME, "type": str, "path": True},
    "CLI_PUBLIC_KEY": {"value": conf.CLI_PUBLIC_KEY, "type": str, "path": True},
    "CLI_PRIVATE_KEY": {"value": conf.CLI_PRIVATE_KEY, "type": str, "path": True},
    "TEOS_PUBLIC_KEY": {"value": conf.TEOS_PUBLIC_KEY, "type": str, "path": True},
DEFAULT_CONF = {
    "TEOS_SERVER": {"value": "localhost", "type": str},
    "TEOS_PORT": {"value": 9814, "type": int},
    "LOG_FILE": {"value": "teos_cli.log", "type": str, "path": True},
    "APPOINTMENTS_FOLDER_NAME": {"value": "appointment_receipts", "type": str, "path": True},
    "CLI_PUBLIC_KEY": {"value": "cli_pk.der", "type": str, "path": True},
    "CLI_PRIVATE_KEY": {"value": "cli_sk.der", "type": str, "path": True},
    "TEOS_PUBLIC_KEY": {"value": "teos_pk.der", "type": str, "path": True},
}

# Expand user (~) if found and check fields are correct
conf_fields["DATA_FOLDER"]["value"] = os.path.expanduser(conf_fields["DATA_FOLDER"]["value"])
# Extend relative paths
conf_fields = extend_paths(conf_fields["DATA_FOLDER"]["value"], conf_fields)

# Sanity check fields and build config dictionary
config = check_conf_fields(conf_fields)

setup_data_folder(config.get("DATA_FOLDER"))
setup_logging(config.get("CLIENT_LOG_FILE"), LOG_PREFIX)
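The `"path": True` entries in `DEFAULT_CONF` are relative names that get joined onto the data directory; a standalone re-creation of that extension step (illustrative, mirroring `extend_paths` in `common/config_loader.py`):

```python
import os

# Minimal stand-in for the path extension applied to DEFAULT_CONF entries
# flagged with "path": True (relative names are joined onto the data dir).
DATA_DIR = os.path.expanduser("~/.teos_cli/")

fields = {
    "TEOS_PORT": {"value": 9814, "type": int},
    "LOG_FILE": {"value": "teos_cli.log", "type": str, "path": True},
}

for key, field in fields.items():
    if field.get("path") is True and isinstance(field.get("value"), str):
        field["value"] = os.path.join(DATA_DIR, field["value"])

print(fields["LOG_FILE"]["value"].endswith("teos_cli.log"))  # → True
print(fields["TEOS_PORT"]["value"])  # → 9814
```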
cli/conf.py
@@ -1,14 +0,0 @@
# TEOS-SERVER
DEFAULT_TEOS_API_SERVER = "http://localhost"
DEFAULT_TEOS_API_PORT = 9814

# WT-CLI
DATA_FOLDER = "~/.teos_cli/"

CLIENT_LOG_FILE = "cli.log"
APPOINTMENTS_FOLDER_NAME = "appointment_receipts"

# KEYS
TEOS_PUBLIC_KEY = "teos_pk.der"
CLI_PRIVATE_KEY = "cli_sk.der"
CLI_PUBLIC_KEY = "cli_pk.der"
cli/help.py
@@ -1,3 +1,19 @@
def show_usage():
    return (
        "USAGE: "
        "\n\tpython teos_cli.py [global options] command [command options] [arguments]"
        "\n\nCOMMANDS:"
        "\n\tadd_appointment \tRegisters a json formatted appointment with the tower."
        "\n\tget_appointment \tGets json formatted data about an appointment from the tower."
        "\n\thelp \t\t\tShows a list of commands or help for a specific command."
        "\n\nGLOBAL OPTIONS:"
        "\n\t-s, --server \tAPI server where to send the requests. Defaults to 'localhost' (modifiable in conf file)."
        "\n\t-p, --port \tAPI port where to send the requests. Defaults to '9814' (modifiable in conf file)."
        "\n\t-d, --debug \tshows debug information and stores it in teos_cli.log."
        "\n\t-h --help \tshows this message."
    )


def help_add_appointment():
    return (
        "NAME:"
cli/template.conf (new file)
@@ -0,0 +1,4 @@
[teos]
TEOS_SERVER = localhost
TEOS_PORT = 9814
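The template above is plain INI, so it can be read with Python's `configparser`; a minimal sketch (the file content is inlined here rather than read from disk):

```python
import configparser

# The [teos] section from cli/template.conf, inlined for illustration.
template = """
[teos]
TEOS_SERVER = localhost
TEOS_PORT = 9814
"""

parser = configparser.ConfigParser()
parser.read_string(template)

server = parser.get("teos", "TEOS_SERVER")
port = parser.getint("teos", "TEOS_PORT")  # raw values are strings unless coerced
print(server, port)  # → localhost 9814
```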
cli/teos_cli.py
@@ -1,24 +1,27 @@
import os
import sys
import time
import json
import requests
import time
import binascii
from sys import argv
from uuid import uuid4
from coincurve import PublicKey
from getopt import getopt, GetoptError
from requests import ConnectTimeout, ConnectionError
from requests.exceptions import MissingSchema, InvalidSchema, InvalidURL

from cli import config, LOG_PREFIX
from cli.help import help_add_appointment, help_get_appointment
from common.blob import Blob
from cli.help import show_usage, help_add_appointment, help_get_appointment
from cli import DEFAULT_CONF, DATA_DIR, CONF_FILE_NAME, LOG_PREFIX

import common.cryptographer
from common.blob import Blob
from common import constants
from common.logger import Logger
from common.appointment import Appointment
from common.config_loader import ConfigLoader
from common.cryptographer import Cryptographer
from common.tools import setup_logging, setup_data_folder
from common.tools import check_sha256_hex_format, check_locator_format, compute_locator

logger = Logger(actor="Client", log_name_prefix=LOG_PREFIX)
@@ -77,7 +80,7 @@ def load_keys(teos_pk_path, cli_sk_path, cli_pk_path):
    return teos_pk, cli_sk, cli_pk_der


def add_appointment(args):
def add_appointment(args, teos_url, config):
    """
    Manages the add_appointment command, from argument parsing, through sending the appointment to the tower, until
    saving the appointment receipt.
@@ -98,12 +101,17 @@ def add_appointment(args):
    Args:
        args (:obj:`list`): a list of arguments to pass to ``parse_add_appointment_args``. Must contain a json encoded
            appointment, or the file option and the path to a file containing a json encoded appointment.
        teos_url (:obj:`str`): the teos base url.
        config (:obj:`dict`): a config dictionary following the format of :func:`create_config_dict <common.config_loader.ConfigLoader.create_config_dict>`.

    Returns:
        :obj:`bool`: True if the appointment is accepted by the tower and the receipt is properly stored, false if any
            error occurs during the process.
    """

    # Currently the base_url is the same as the add_appointment_endpoint
    add_appointment_endpoint = teos_url

    teos_pk, cli_sk, cli_pk_der = load_keys(
        config.get("TEOS_PUBLIC_KEY"), config.get("CLI_PRIVATE_KEY"), config.get("CLI_PUBLIC_KEY")
    )
@@ -151,7 +159,7 @@ def add_appointment(args):
    data = {"appointment": appointment.to_dict(), "signature": signature, "public_key": hex_pk_der.decode("utf-8")}

    # Send appointment to the server.
    server_response = post_appointment(data)
    server_response = post_appointment(data, add_appointment_endpoint)
    if server_response is None:
        return False

@@ -174,7 +182,7 @@ def add_appointment(args):
    logger.info("Appointment accepted and signed by the Eye of Satoshi")

    # All good, store appointment and signature
    return save_appointment_receipt(appointment.to_dict(), signature)
    return save_appointment_receipt(appointment.to_dict(), signature, config)


def parse_add_appointment_args(args):
@@ -225,13 +233,14 @@ def parse_add_appointment_args(args):
    return appointment_data


def post_appointment(data):
def post_appointment(data, add_appointment_endpoint):
    """
    Sends appointment data to the add_appointment endpoint to be processed by the tower.

    Args:
        data (:obj:`dict`): a dictionary containing three fields: an appointment, the client-side signature, and the
            der-encoded client public key.
        add_appointment_endpoint (:obj:`str`): the teos endpoint where to send appointments to.

    Returns:
        :obj:`dict` or ``None``: a json-encoded dictionary with the server response if the data can be posted.
@@ -241,7 +250,6 @@ def post_appointment(data):
    logger.info("Sending appointment to the Eye of Satoshi")

    try:
        add_appointment_endpoint = "{}:{}".format(teos_api_server, teos_api_port)
        return requests.post(url=add_appointment_endpoint, json=json.dumps(data), timeout=5)

    except ConnectTimeout:
@@ -252,8 +260,8 @@ def post_appointment(data):
        logger.error("Can't connect to the Eye of Satoshi's API. Server cannot be reached")
        return None

    except requests.exceptions.InvalidSchema:
        logger.error("No transport protocol found. Have you missed http(s):// in the server url?")
    except (InvalidSchema, MissingSchema, InvalidURL):
        logger.error("Invalid URL. No schema, or invalid schema, found ({})".format(add_appointment_endpoint))

    except requests.exceptions.Timeout:
        logger.error("The request timed out")
@@ -264,7 +272,7 @@ def process_post_appointment_response(response):
    Processes the server response to an add_appointment request.

    Args:
        response (:obj:`requests.models.Response`): a ``Response` object obtained from the sent request.
        response (:obj:`requests.models.Response`): a ``Response`` object obtained from the sent request.

    Returns:
        :obj:`dict` or :obj:`None`: a dictionary containing the tower's response data if it can be properly parsed and
@@ -297,13 +305,14 @@ def process_post_appointment_response(response):
    return response_json
def save_appointment_receipt(appointment, signature):
def save_appointment_receipt(appointment, signature, config):
    """
    Saves an appointment receipt to disk. A receipt consists of an appointment and a signature from the tower.

    Args:
        appointment (:obj:`Appointment <common.appointment.Appointment>`): the appointment to be saved on disk.
        signature (:obj:`str`): the signature of the appointment performed by the tower.
        config (:obj:`dict`): a config dictionary following the format of :func:`create_config_dict <common.config_loader.ConfigLoader.create_config_dict>`.

    Returns:
        :obj:`bool`: True if the appointment is properly saved, false otherwise.
@@ -333,12 +342,13 @@ def save_appointment_receipt(appointment, signature):
        return False
def get_appointment(locator):
def get_appointment(locator, get_appointment_endpoint):
    """
    Gets information about an appointment from the tower.

    Args:
        locator (:obj:`str`): the appointment locator used to identify it.
        get_appointment_endpoint (:obj:`str`): the teos endpoint where to get appointments from.

    Returns:
        :obj:`dict` or :obj:`None`: a dictionary containing the appointment data if the locator is valid and the tower
@@ -351,7 +361,6 @@ def get_appointment(locator):
        logger.error("The provided locator is not valid", locator=locator)
        return None

    get_appointment_endpoint = "{}:{}/get_appointment".format(teos_api_server, teos_api_port)
    parameters = "?locator={}".format(locator)

    try:
@@ -373,49 +382,27 @@ def get_appointment(locator):
        logger.error("The request timed out")
def show_usage():
    return (
        "USAGE: "
        "\n\tpython teos_cli.py [global options] command [command options] [arguments]"
        "\n\nCOMMANDS:"
        "\n\tadd_appointment \tRegisters a json formatted appointment with the tower."
        "\n\tget_appointment \tGets json formatted data about an appointment from the tower."
        "\n\thelp \t\t\tShows a list of commands or help for a specific command."
        "\n\nGLOBAL OPTIONS:"
        "\n\t-s, --server \tAPI server where to send the requests. Defaults to https://teos.pisa.watch (modifiable in "
        "config.py)"
        "\n\t-p, --port \tAPI port where to send the requests. Defaults to 443 (modifiable in conf.py)"
        "\n\t-d, --debug \tshows debug information and stores it in teos_cli.log"
        "\n\t-h --help \tshows this message."
    )
def main(args, command_line_conf):
    # Loads config and sets up the data folder and log file
    config_loader = ConfigLoader(DATA_DIR, CONF_FILE_NAME, DEFAULT_CONF, command_line_conf)
    config = config_loader.build_config()

    setup_data_folder(DATA_DIR)
    setup_logging(config.get("LOG_FILE"), LOG_PREFIX)

if __name__ == "__main__":
    teos_api_server = config.get("DEFAULT_TEOS_API_SERVER")
    teos_api_port = config.get("DEFAULT_TEOS_API_PORT")
    commands = ["add_appointment", "get_appointment", "help"]
    # Set the teos url
    teos_url = "{}:{}".format(config.get("TEOS_SERVER"), config.get("TEOS_PORT"))
    # If an http or https prefix is found, leaves the server as is. Otherwise defaults to http.
    if not teos_url.startswith("http"):
        teos_url = "http://" + teos_url

    try:
        opts, args = getopt(argv[1:], "s:p:h", ["server", "port", "help"])

        for opt, arg in opts:
            if opt in ["-s", "server"]:
                if arg:
                    teos_api_server = arg

            if opt in ["-p", "--port"]:
                if arg:
                    teos_api_port = int(arg)

            if opt in ["-h", "--help"]:
                sys.exit(show_usage())
        if args:
            command = args.pop(0)

            if command in commands:
                if command == "add_appointment":
                    add_appointment(args)
                    add_appointment(args, teos_url, config)

                elif command == "get_appointment":
                    if not args:
@@ -427,7 +414,8 @@ if __name__ == "__main__":
                    if arg_opt in ["-h", "--help"]:
                        sys.exit(help_get_appointment())

                    appointment_data = get_appointment(arg_opt)
                    get_appointment_endpoint = "{}/get_appointment".format(teos_url)
                    appointment_data = get_appointment(arg_opt, get_appointment_endpoint)
                    if appointment_data:
                        print(appointment_data)
@@ -453,8 +441,33 @@ if __name__ == "__main__":
        else:
            logger.error("No command provided. Use help to check the list of available commands")

    except json.JSONDecodeError:
        logger.error("Non-JSON encoded appointment passed as parameter")


if __name__ == "__main__":
    command_line_conf = {}
    commands = ["add_appointment", "get_appointment", "help"]

    try:
        opts, args = getopt(argv[1:], "s:p:h", ["server", "port", "help"])

        for opt, arg in opts:
            if opt in ["-s", "--server"]:
                if arg:
                    command_line_conf["TEOS_SERVER"] = arg

            if opt in ["-p", "--port"]:
                if arg:
                    try:
                        command_line_conf["TEOS_PORT"] = int(arg)
                    except ValueError:
                        sys.exit("port must be an integer")

            if opt in ["-h", "--help"]:
                sys.exit(show_usage())

        main(args, command_line_conf)

    except GetoptError as e:
        logger.error("{}".format(e))

    except json.JSONDecodeError as e:
        logger.error("Non-JSON encoded appointment passed as parameter")
common/config_loader.py (new file)
@@ -0,0 +1,114 @@
|
||||
import os
|
||||
import configparser
|
||||
|
||||
|
||||
class ConfigLoader:
|
||||
"""
|
||||
The :class:`ConfigLoader` class is in charge of loading all the configuration parameters to create a config dict
|
||||
that can be used to set all configurable parameters of the system.
|
||||
|
||||
Args:
|
||||
data_dir (:obj:`str`): the path to the data directory where the configuration file may be found.
|
||||
default_conf (:obj:`dict`): a dictionary populated with the default configuration params and the expected types.
|
||||
The format is as follows:
|
||||
|
||||
{"field0": {"value": value_from_conf_file, "type": expected_type, ...}}
|
||||
|
||||
command_line_conf (:obj:`dict`): a dictionary containing the command line parameters that may replace the
|
||||
ones in default / config file.
|
||||
|
||||
Attributes:
|
||||
data_dir (:obj:`str`): the path to the data directory where the configuration file may be found.
|
||||
conf_file_path (:obj:`str`): the path to the config file (the file may not exist).
|
||||
conf_fields (:obj:`dict`): a dictionary populated with the configuration params and the expected types.
|
||||
            follows the same format as default_conf.
        command_line_conf (:obj:`dict`): a dictionary containing the command line parameters that may replace the
            ones in default / config file.
    """

    def __init__(self, data_dir, conf_file_name, default_conf, command_line_conf):
        self.data_dir = data_dir
        self.conf_file_path = self.data_dir + conf_file_name
        self.conf_fields = default_conf
        self.command_line_conf = command_line_conf

    def build_config(self):
        """
        Builds a config dictionary from command line, config file and default configuration parameters.

        The priority is as follows:

        - command line
        - config file
        - defaults

        Returns:
            :obj:`dict`: a dictionary containing all the configuration parameters.
        """

        if os.path.exists(self.conf_file_path):
            file_config = configparser.ConfigParser()
            file_config.read(self.conf_file_path)

            if file_config:
                for sec in file_config.sections():
                    for k, v in file_config.items(sec):
                        k_upper = k.upper()
                        if k_upper in self.conf_fields:
                            if self.conf_fields[k_upper]["type"] == int:
                                try:
                                    self.conf_fields[k_upper]["value"] = int(v)
                                except ValueError:
                                    err_msg = "{} is not an integer ({}).".format(k, v)
                                    raise ValueError(err_msg)
                            else:
                                self.conf_fields[k_upper]["value"] = v

        # Override the defaults / conf file values with the command line parameters
        for k, v in self.command_line_conf.items():
            self.conf_fields[k]["value"] = v

        # Extend relative paths
        self.extend_paths()

        # Sanity check fields and build config dictionary
        config = self.create_config_dict()

        return config

    def create_config_dict(self):
        """
        Checks that the configuration fields (self.conf_fields) have the right type and creates a config dict if so.

        Returns:
            :obj:`dict`: a dictionary with the same keys as the provided one, but containing only the "value" field as
            value if the provided ``conf_fields`` were correct.

        Raises:
            ValueError: if any of the dictionary elements does not have the expected type.
        """

        conf_dict = {}

        for field in self.conf_fields:
            value = self.conf_fields[field]["value"]
            correct_type = self.conf_fields[field]["type"]

            if isinstance(value, correct_type):
                conf_dict[field] = value
            else:
                err_msg = "{} variable in config is of the wrong type".format(field)
                raise ValueError(err_msg)

        return conf_dict

    def extend_paths(self):
        """
        Extends the relative paths of the ``conf_fields`` dictionary with ``data_dir``.

        If an absolute path is given, it remains unchanged.
        """

        for key, field in self.conf_fields.items():
            if field.get("path") is True and isinstance(field.get("value"), str):
                self.conf_fields[key]["value"] = os.path.join(self.data_dir, self.conf_fields[key]["value"])
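The defaults < config file < command line precedence that `build_config` implements can be sketched in isolation. This is a minimal standalone sketch, not the tower's actual loader; the field names and the `merge` helper are illustrative:

```python
# Sketch of the three-way merge ConfigLoader performs:
# defaults are overridden by the config file, which is overridden by the CLI.
DEFAULT_CONF = {
    "BTC_RPC_PORT": {"value": 8332, "type": int},
    "BTC_NETWORK": {"value": "mainnet", "type": str},
    "LOG_FILE": {"value": "teos.log", "type": str, "path": True},
}


def merge(defaults, file_items, cli_items):
    conf = {k: dict(v) for k, v in defaults.items()}
    for k, v in file_items.items():
        # config file keys are lower-case, so match on the upper-cased name
        key = k.upper()
        if key in conf:
            conf[key]["value"] = int(v) if conf[key]["type"] == int else v
    for k, v in cli_items.items():
        # command line wins over both defaults and config file
        conf[k]["value"] = v
    return {k: f["value"] for k, f in conf.items()}


config = merge(DEFAULT_CONF, {"btc_rpc_port": "18443"}, {"BTC_NETWORK": "regtest"})
```

Fields untouched by either source keep their default, mirroring how `build_config` only mutates `conf_fields` entries it finds elsewhere.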
@@ -1,6 +1,7 @@
import re
import os
import logging
from pathlib import Path

from common.constants import LOCATOR_LEN_HEX


@@ -50,63 +51,7 @@ def setup_data_folder(data_folder):
        data_folder (:obj:`str`): the path of the folder
    """

    if not os.path.isdir(data_folder):
        os.makedirs(data_folder, exist_ok=True)


def check_conf_fields(conf_fields):
    """
    Checks that the provided configuration fields have the right type.

    Args:
        conf_fields (:obj:`dict`): a dictionary populated with the configuration file params and the expected types.
            The format is as follows:

            {"field0": {"value": value_from_conf_file, "type": expected_type, ...}}

    Returns:
        :obj:`dict`: a dictionary with the same keys as the provided one, but containing only the "value" field as value
        if the provided ``conf_fields`` were correct.

    Raises:
        ValueError: if any of the dictionary elements does not have the expected type.
    """

    conf_dict = {}

    for field in conf_fields:
        value = conf_fields[field]["value"]
        correct_type = conf_fields[field]["type"]

        if (value is not None) and isinstance(value, correct_type):
            conf_dict[field] = value
        else:
            err_msg = "{} variable in config is of the wrong type".format(field)
            raise ValueError(err_msg)

    return conf_dict


def extend_paths(base_path, config_fields):
    """
    Extends the relative paths of a given ``config_fields`` dictionary with a given ``base_path``.

    Paths in the config file are based on DATA_PATH; this method extends them so they are all absolute.

    Args:
        base_path (:obj:`str`): the base path to prepend to the other paths.
        config_fields (:obj:`dict`): a dictionary of configuration fields containing a ``path`` flag, as follows:

            {"field0": {"value": value_from_conf_file, "path": True, ...}}

    Returns:
        :obj:`dict`: a ``config_fields`` with the flagged paths updated.
    """

    for key, field in config_fields.items():
        if field.get("path") is True:
            config_fields[key]["value"] = base_path + config_fields[key]["value"]

    return config_fields
    Path(data_folder).mkdir(parents=True, exist_ok=True)


def setup_logging(log_file_path, log_name_prefix):
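The removed `extend_paths` helper glued paths together with string concatenation (`base_path + value`), while the new `ConfigLoader.extend_paths` uses `os.path.join`. The paths below are made up, but the behavioral difference is standard library semantics:

```python
import os

# Why the switch matters: os.path.join leaves an absolute path untouched,
# whereas naive concatenation nests it under the data dir.
base = "/home/user/.teos"

relative = os.path.join(base, "teos.log")            # data dir is prepended
absolute = os.path.join(base, "/var/log/teos.log")   # absolute path wins, kept as-is
concatenated = base + "/var/log/teos.log"            # concatenation mangles it
```

This is what lets a user point `LOG_FILE` or `DB_PATH` at an absolute location without the loader rewriting it.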
@@ -1,4 +1,5 @@
import os.path
from pathlib import Path
from getopt import getopt
from sys import argv, exit


@@ -48,15 +49,14 @@ if __name__ == "__main__":
        if opt in ["-d", "--dir"]:
            output_dir = arg

    if output_dir.endswith("/"):
        output_dir = output_dir[:-1]
    # Create the output folder if it does not exist (and all the parents if they don't either)
    Path(output_dir).mkdir(parents=True, exist_ok=True)

    SK_FILE_NAME = "{}/{}_sk.der".format(output_dir, name)
    PK_FILE_NAME = "{}/{}_pk.der".format(output_dir, name)
    SK_FILE_NAME = os.path.join(output_dir, "{}_sk.der".format(name))
    PK_FILE_NAME = os.path.join(output_dir, "{}_pk.der".format(name))

    if os.path.exists(SK_FILE_NAME):
        print('A key with name "{}" already exists. Aborting.'.format(SK_FILE_NAME))
        exit(1)
        exit('A key with name "{}" already exists. Aborting.'.format(SK_FILE_NAME))

    sk = ec.generate_private_key(ec.SECP256K1, default_backend())
    pk = sk.public_key()
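The hunk above collapses a `print(...)` followed by `exit(1)` into a single `exit(msg)`. That works because `sys.exit` with a non-integer argument prints the argument to stderr and exits with status 1. A quick self-contained check (the message string is illustrative):

```python
import subprocess
import sys

# Run a child interpreter that calls exit() with a string, and inspect
# its exit status and stderr, which is exactly what the diff relies on.
proc = subprocess.run(
    [sys.executable, "-c", "from sys import exit; exit('key already exists. Aborting.')"],
    capture_output=True,
    text=True,
)
```

`proc.returncode` is 1 and the message lands on `proc.stderr`, so the two-line original and the one-line replacement are observably equivalent.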
@@ -1,38 +1,26 @@
import os
import teos.conf as conf
from common.tools import check_conf_fields, setup_logging, extend_paths, setup_data_folder
from teos.utils.auth_proxy import AuthServiceProxy

HOST = "localhost"
PORT = 9814
DATA_DIR = os.path.expanduser("~/.teos/")
CONF_FILE_NAME = "teos.conf"
LOG_PREFIX = "teos"

# Load config fields
conf_fields = {
    "BTC_RPC_USER": {"value": conf.BTC_RPC_USER, "type": str},
    "BTC_RPC_PASSWD": {"value": conf.BTC_RPC_PASSWD, "type": str},
    "BTC_RPC_HOST": {"value": conf.BTC_RPC_HOST, "type": str},
    "BTC_RPC_PORT": {"value": conf.BTC_RPC_PORT, "type": int},
    "BTC_NETWORK": {"value": conf.BTC_NETWORK, "type": str},
    "FEED_PROTOCOL": {"value": conf.FEED_PROTOCOL, "type": str},
    "FEED_ADDR": {"value": conf.FEED_ADDR, "type": str},
    "FEED_PORT": {"value": conf.FEED_PORT, "type": int},
    "DATA_FOLDER": {"value": conf.DATA_FOLDER, "type": str},
    "MAX_APPOINTMENTS": {"value": conf.MAX_APPOINTMENTS, "type": int},
    "EXPIRY_DELTA": {"value": conf.EXPIRY_DELTA, "type": int},
    "MIN_TO_SELF_DELAY": {"value": conf.MIN_TO_SELF_DELAY, "type": int},
    "SERVER_LOG_FILE": {"value": conf.SERVER_LOG_FILE, "type": str, "path": True},
    "TEOS_SECRET_KEY": {"value": conf.TEOS_SECRET_KEY, "type": str, "path": True},
    "DB_PATH": {"value": conf.DB_PATH, "type": str, "path": True},
# Default conf fields
DEFAULT_CONF = {
    "BTC_RPC_USER": {"value": "user", "type": str},
    "BTC_RPC_PASSWD": {"value": "passwd", "type": str},
    "BTC_RPC_CONNECT": {"value": "127.0.0.1", "type": str},
    "BTC_RPC_PORT": {"value": 8332, "type": int},
    "BTC_NETWORK": {"value": "mainnet", "type": str},
    "FEED_PROTOCOL": {"value": "tcp", "type": str},
    "FEED_CONNECT": {"value": "127.0.0.1", "type": str},
    "FEED_PORT": {"value": 28332, "type": int},
    "MAX_APPOINTMENTS": {"value": 100, "type": int},
    "EXPIRY_DELTA": {"value": 6, "type": int},
    "MIN_TO_SELF_DELAY": {"value": 20, "type": int},
    "LOG_FILE": {"value": "teos.log", "type": str, "path": True},
    "TEOS_SECRET_KEY": {"value": "teos_sk.der", "type": str, "path": True},
    "DB_PATH": {"value": "appointments", "type": str, "path": True},
}

# Expand user (~) if found and check fields are correct
conf_fields["DATA_FOLDER"]["value"] = os.path.expanduser(conf_fields["DATA_FOLDER"]["value"])
# Extend relative paths
conf_fields = extend_paths(conf_fields["DATA_FOLDER"]["value"], conf_fields)

# Sanity check fields and build config dictionary
config = check_conf_fields(conf_fields)

setup_data_folder(config.get("DATA_FOLDER"))
setup_logging(config.get("SERVER_LOG_FILE"), LOG_PREFIX)
teos/api.py
@@ -5,7 +5,6 @@ from flask import Flask, request, abort, jsonify

from teos import HOST, PORT, LOG_PREFIX
from common.logger import Logger
from teos.inspector import Inspector
from common.appointment import Appointment

from common.constants import HTTP_OK, HTTP_BAD_REQUEST, HTTP_SERVICE_UNAVAILABLE, LOCATOR_LEN_HEX
@@ -17,9 +16,18 @@ logger = Logger(actor="API", log_name_prefix=LOG_PREFIX)


class API:
    def __init__(self, watcher, config):
    """
    The :class:`API` is in charge of the interface between the user and the tower. It handles and serves user
    requests.

    Args:
        inspector (:obj:`Inspector <teos.inspector.Inspector>`): an ``Inspector`` instance to check the correctness of
            the received data.
        watcher (:obj:`Watcher <teos.watcher.Watcher>`): a ``Watcher`` instance to pass the requests to.
    """

    def __init__(self, inspector, watcher):
        self.inspector = inspector
        self.watcher = watcher
        self.config = config

    def add_appointment(self):
        """
@@ -48,8 +56,7 @@ class API:
        if request.is_json:
            # Check content type once if properly defined
            request_data = json.loads(request.get_json())
            inspector = Inspector(self.config)
            appointment = inspector.inspect(
            appointment = self.inspector.inspect(
                request_data.get("appointment"), request_data.get("signature"), request_data.get("public_key")
            )
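The `API` change above is a dependency-injection move: the `Inspector` is built once at startup and handed to the `API`, instead of being rebuilt from config on every request. A toy sketch of the pattern (all class bodies here are simplified stand-ins, not the tower's real logic):

```python
class Inspector:
    """Validates a single appointment field, standing in for the real checks."""

    def __init__(self, min_to_self_delay):
        self.min_to_self_delay = min_to_self_delay

    def inspect(self, to_self_delay):
        return to_self_delay >= self.min_to_self_delay


class API:
    """Receives the inspector as a collaborator instead of constructing it."""

    def __init__(self, inspector, watcher=None):
        self.inspector = inspector  # injected once, reused for every request
        self.watcher = watcher

    def add_appointment(self, to_self_delay):
        return self.inspector.inspect(to_self_delay)


api = API(Inspector(min_to_self_delay=20))
```

Besides avoiding a per-request allocation, this makes the `API` testable with a stub inspector, which is likely part of why the PR restructures it this way.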
@@ -11,10 +11,16 @@ class BlockProcessor:
    """
    The :class:`BlockProcessor` contains methods related to the blockchain. Most of its methods require communication
    with ``bitcoind``.

    Args:
        btc_connect_params (:obj:`dict`): a dictionary with the parameters to connect to bitcoind
            (rpc user, rpc passwd, host and port)
    """

    @staticmethod
    def get_block(block_hash):
    def __init__(self, btc_connect_params):
        self.btc_connect_params = btc_connect_params

    def get_block(self, block_hash):
        """
        Gives a block given a block hash by querying ``bitcoind``.

@@ -28,7 +34,7 @@ class BlockProcessor:
        """

        try:
            block = bitcoin_cli().getblock(block_hash)
            block = bitcoin_cli(self.btc_connect_params).getblock(block_hash)

        except JSONRPCException as e:
            block = None
@@ -36,8 +42,7 @@ class BlockProcessor:

        return block

    @staticmethod
    def get_best_block_hash():
    def get_best_block_hash(self):
        """
        Returns the hash of the current best chain tip.

@@ -48,7 +53,7 @@ class BlockProcessor:
        """

        try:
            block_hash = bitcoin_cli().getbestblockhash()
            block_hash = bitcoin_cli(self.btc_connect_params).getbestblockhash()

        except JSONRPCException as e:
            block_hash = None
@@ -56,8 +61,7 @@ class BlockProcessor:

        return block_hash

    @staticmethod
    def get_block_count():
    def get_block_count(self):
        """
        Returns the block height of the best chain.

@@ -68,7 +72,7 @@ class BlockProcessor:
        """

        try:
            block_count = bitcoin_cli().getblockcount()
            block_count = bitcoin_cli(self.btc_connect_params).getblockcount()

        except JSONRPCException as e:
            block_count = None
@@ -76,8 +80,7 @@ class BlockProcessor:

        return block_count

    @staticmethod
    def decode_raw_transaction(raw_tx):
    def decode_raw_transaction(self, raw_tx):
        """
        Deserializes a given raw transaction (hex encoded) and builds a dictionary representing it with all the
        associated metadata given by ``bitcoind`` (e.g. confirmation count).
@@ -92,7 +95,7 @@ class BlockProcessor:
        """

        try:
            tx = bitcoin_cli().decoderawtransaction(raw_tx)
            tx = bitcoin_cli(self.btc_connect_params).decoderawtransaction(raw_tx)

        except JSONRPCException as e:
            tx = None
@@ -100,8 +103,7 @@ class BlockProcessor:

        return tx

    @staticmethod
    def get_distance_to_tip(target_block_hash):
    def get_distance_to_tip(self, target_block_hash):
        """
        Computes the distance between a given block hash and the best chain tip.

@@ -117,10 +119,10 @@ class BlockProcessor:

        distance = None

        chain_tip = BlockProcessor.get_best_block_hash()
        chain_tip_height = BlockProcessor.get_block(chain_tip).get("height")
        chain_tip = self.get_best_block_hash()
        chain_tip_height = self.get_block(chain_tip).get("height")

        target_block = BlockProcessor.get_block(target_block_hash)
        target_block = self.get_block(target_block_hash)

        if target_block is not None:
            target_block_height = target_block.get("height")
@@ -129,8 +131,7 @@ class BlockProcessor:

        return distance

    @staticmethod
    def get_missed_blocks(last_know_block_hash):
    def get_missed_blocks(self, last_know_block_hash):
        """
        Computes the blocks between the current best chain tip and a given block hash (``last_know_block_hash``).

@@ -144,19 +145,18 @@ class BlockProcessor:
            child of ``last_know_block_hash``.
        """

        current_block_hash = BlockProcessor.get_best_block_hash()
        current_block_hash = self.get_best_block_hash()
        missed_blocks = []

        while current_block_hash != last_know_block_hash and current_block_hash is not None:
            missed_blocks.append(current_block_hash)

            current_block = BlockProcessor.get_block(current_block_hash)
            current_block = self.get_block(current_block_hash)
            current_block_hash = current_block.get("previousblockhash")

        return missed_blocks[::-1]

    @staticmethod
    def is_block_in_best_chain(block_hash):
    def is_block_in_best_chain(self, block_hash):
        """
        Checks whether or not a given block is on the best chain. Blocks are identified by block_hash.

@@ -173,7 +173,7 @@ class BlockProcessor:
            KeyError: if the block cannot be found in the blockchain.
        """

        block = BlockProcessor.get_block(block_hash)
        block = self.get_block(block_hash)

        if block is None:
            # This should never happen as long as we are using the same node, since bitcoind never drops orphan blocks
@@ -185,8 +185,7 @@ class BlockProcessor:
        else:
            return False

    @staticmethod
    def find_last_common_ancestor(last_known_block_hash):
    def find_last_common_ancestor(self, last_known_block_hash):
        """
        Finds the last common ancestor between the current best chain tip and the last block known by us (older block).

@@ -204,8 +203,8 @@ class BlockProcessor:
        target_block_hash = last_known_block_hash
        dropped_txs = []

        while not BlockProcessor.is_block_in_best_chain(target_block_hash):
            block = BlockProcessor.get_block(target_block_hash)
        while not self.is_block_in_best_chain(target_block_hash):
            block = self.get_block(target_block_hash)
            dropped_txs.extend(block.get("tx"))
            target_block_hash = block.get("previousblockhash")
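The `get_distance_to_tip` logic above is easy to exercise without `bitcoind`. The toy class and in-memory chain below are illustrative stand-ins for the real `BlockProcessor` and RPC calls:

```python
class ToyBlockProcessor:
    """Mimics get_best_block_hash / get_block / get_distance_to_tip over a dict chain."""

    def __init__(self, chain):
        # chain maps block_hash -> {"height": h, "previousblockhash": prev}
        self.chain = chain
        self.tip = max(chain, key=lambda h: chain[h]["height"])

    def get_best_block_hash(self):
        return self.tip

    def get_block(self, block_hash):
        return self.chain.get(block_hash)

    def get_distance_to_tip(self, target_block_hash):
        target = self.get_block(target_block_hash)
        if target is None:
            return None  # unknown block, same as a failed RPC lookup
        tip_height = self.get_block(self.get_best_block_hash())["height"]
        return tip_height - target["height"]


chain = {
    "a": {"height": 100, "previousblockhash": None},
    "b": {"height": 101, "previousblockhash": "a"},
    "c": {"height": 102, "previousblockhash": "b"},
}
bp = ToyBlockProcessor(chain)
```

The distance is simply the height difference to the tip, which is also what `Responder.on_sync` uses to decide whether it is recovering from a crash.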
@@ -1,6 +1,6 @@
class Builder:
    """
    The :class:`Builder` class is in charge or reconstructing data loaded from the database and build the data
    The :class:`Builder` class is in charge of reconstructing data loaded from the database and build the data
    structures of the :obj:`Watcher <teos.watcher.Watcher>` and the :obj:`Responder <teos.responder.Responder>`.
    """
@@ -39,13 +39,18 @@ class Carrier:
    The :class:`Carrier` is the class in charge of interacting with ``bitcoind`` to send/get transactions. It uses
    :obj:`Receipt` objects to report about the sending outcome.

    Args:
        btc_connect_params (:obj:`dict`): a dictionary with the parameters to connect to bitcoind
            (rpc user, rpc passwd, host and port)

    Attributes:
        issued_receipts (:obj:`dict`): a dictionary of issued receipts to prevent resending the same transaction over
            and over. It should periodically be reset to prevent it from growing unbounded.

    """

    def __init__(self):
    def __init__(self, btc_connect_params):
        self.btc_connect_params = btc_connect_params
        self.issued_receipts = {}

    # NOTCOVERED
@@ -69,7 +74,7 @@ class Carrier:

        try:
            logger.info("Pushing transaction to the network", txid=txid, rawtx=rawtx)
            bitcoin_cli().sendrawtransaction(rawtx)
            bitcoin_cli(self.btc_connect_params).sendrawtransaction(rawtx)

            receipt = Receipt(delivered=True)

@@ -119,8 +124,7 @@ class Carrier:

        return receipt

    @staticmethod
    def get_transaction(txid):
    def get_transaction(self, txid):
        """
        Queries transaction data from ``bitcoind`` given a transaction id.

@@ -134,7 +138,7 @@ class Carrier:
        """

        try:
            tx_info = bitcoin_cli().getrawtransaction(txid, 1)
            tx_info = bitcoin_cli(self.btc_connect_params).getrawtransaction(txid, 1)

        except JSONRPCException as e:
            tx_info = None
@@ -4,8 +4,6 @@ from threading import Thread, Event, Condition

from teos import LOG_PREFIX
from common.logger import Logger
from teos.conf import FEED_PROTOCOL, FEED_ADDR, FEED_PORT, POLLING_DELTA, BLOCK_WINDOW_SIZE
from teos.block_processor import BlockProcessor

logger = Logger(actor="ChainMonitor", log_name_prefix=LOG_PREFIX)

@@ -22,6 +20,8 @@ class ChainMonitor:
    Args:
        watcher_queue (:obj:`Queue`): the queue to be used to send block hashes to the ``Watcher``.
        responder_queue (:obj:`Queue`): the queue to be used to send block hashes to the ``Responder``.
        block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance.
        bitcoind_feed_params (:obj:`dict`): a dict with the feed (ZMQ) connection parameters.

    Attributes:
        best_tip (:obj:`str`): a block hash representing the current best tip.
@@ -34,9 +34,13 @@ class ChainMonitor:
        watcher_queue (:obj:`Queue`): a queue to send new best tips to the :obj:`Watcher <teos.watcher.Watcher>`.
        responder_queue (:obj:`Queue`): a queue to send new best tips to the
            :obj:`Responder <teos.responder.Responder>`.
        polling_delta (:obj:`int`): time between polls (in seconds).
        max_block_window_size (:obj:`int`): max size of last_tips.
        block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance.
    """

    def __init__(self, watcher_queue, responder_queue):
    def __init__(self, watcher_queue, responder_queue, block_processor, bitcoind_feed_params):
        self.best_tip = None
        self.last_tips = []
        self.terminate = False
@@ -48,11 +52,22 @@ class ChainMonitor:
        self.zmqSubSocket = self.zmqContext.socket(zmq.SUB)
        self.zmqSubSocket.setsockopt(zmq.RCVHWM, 0)
        self.zmqSubSocket.setsockopt_string(zmq.SUBSCRIBE, "hashblock")
        self.zmqSubSocket.connect("%s://%s:%s" % (FEED_PROTOCOL, FEED_ADDR, FEED_PORT))
        self.zmqSubSocket.connect(
            "%s://%s:%s"
            % (
                bitcoind_feed_params.get("FEED_PROTOCOL"),
                bitcoind_feed_params.get("FEED_CONNECT"),
                bitcoind_feed_params.get("FEED_PORT"),
            )
        )

        self.watcher_queue = watcher_queue
        self.responder_queue = responder_queue

        self.polling_delta = 60
        self.max_block_window_size = 10
        self.block_processor = block_processor

    def notify_subscribers(self, block_hash):
        """
        Notifies the subscribers (``Watcher`` and ``Responder``) about a new block. It does so by putting the hash in
@@ -66,14 +81,13 @@ class ChainMonitor:
        self.watcher_queue.put(block_hash)
        self.responder_queue.put(block_hash)

    def update_state(self, block_hash, max_block_window_size=BLOCK_WINDOW_SIZE):
    def update_state(self, block_hash):
        """
        Updates the state of the ``ChainMonitor``. The state is represented as the ``best_tip`` and the list of
        ``last_tips``. ``last_tips`` is bounded to ``max_block_window_size``.

        Args:
            block_hash (:obj:`str`): the new best tip.
            max_block_window_size (:obj:`int`): the maximum length of the ``last_tips`` list.

        Returns:
            :obj:`bool`: ``True`` if the state was successfully updated, ``False`` otherwise.
@@ -83,7 +97,7 @@ class ChainMonitor:
            self.last_tips.append(self.best_tip)
            self.best_tip = block_hash

            if len(self.last_tips) > max_block_window_size:
            if len(self.last_tips) > self.max_block_window_size:
                self.last_tips.pop(0)

            return True
@@ -91,22 +105,19 @@ class ChainMonitor:
        else:
            return False

    def monitor_chain_polling(self, polling_delta=POLLING_DELTA):
    def monitor_chain_polling(self):
        """
        Monitors ``bitcoind`` via polling. Once the method is fired, it keeps monitoring as long as ``terminate`` is
        not set. Polling is performed once every ``polling_delta`` seconds. If a new best tip is found, the shared lock
        is acquired, the state is updated, the subscribers are notified, and finally the lock is released.

        Args:
            polling_delta (:obj:`int`): the time delta between polls.
        """

        while not self.terminate:
            self.check_tip.wait(timeout=polling_delta)
            self.check_tip.wait(timeout=self.polling_delta)

            # Terminate could have been set while the thread was blocked in wait
            if not self.terminate:
                current_tip = BlockProcessor.get_best_block_hash()
                current_tip = self.block_processor.get_best_block_hash()

                self.lock.acquire()
                if self.update_state(current_tip):
@@ -138,16 +149,13 @@ class ChainMonitor:
        logger.info("New block received via zmq", block_hash=block_hash)
        self.lock.release()

    def monitor_chain(self, polling_delta=POLLING_DELTA):
    def monitor_chain(self):
        """
        Main :class:`ChainMonitor` method. It initializes the ``best_tip`` to the current one (by querying the
        :obj:`BlockProcessor <teos.block_processor.BlockProcessor>`) and creates two threads, one per monitoring
        approach (``zmq`` and ``polling``).

        Args:
            polling_delta (:obj:`int`): the time delta between polls by the ``monitor_chain_polling`` thread.
        """

        self.best_tip = BlockProcessor.get_best_block_hash()
        Thread(target=self.monitor_chain_polling, daemon=True, kwargs={"polling_delta": polling_delta}).start()
        self.best_tip = self.block_processor.get_best_block_hash()
        Thread(target=self.monitor_chain_polling, daemon=True).start()
        Thread(target=self.monitor_chain_zmq, daemon=True).start()

@@ -81,7 +81,7 @@ class Cleaner:
        logger.error("Some UUIDs not found in the db", locator=locator, all_uuids=uuids)

    else:
        logger.error("Locator map not found in the db", uuid=locator)
        logger.error("Locator map not found in the db", locator=locator)

    @staticmethod
    def delete_expired_appointments(expired_appointments, appointments, locator_uuid_map, db_manager):
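The `update_state` logic in the `ChainMonitor` hunks above (a `best_tip` plus a `last_tips` window bounded by `max_block_window_size`) can be isolated into a small toy class. Names are illustrative; the duplicate-rejection rule matches the behavior implied by the diff, not a verbatim copy:

```python
class TipTracker:
    """Tracks the best tip and a bounded window of recently seen tips."""

    def __init__(self, max_block_window_size=10):
        self.best_tip = None
        self.last_tips = []
        self.max_block_window_size = max_block_window_size

    def update_state(self, block_hash):
        # Reject the current tip and recently seen tips (duplicate notifications)
        if block_hash == self.best_tip or block_hash in self.last_tips:
            return False
        if self.best_tip is not None:
            self.last_tips.append(self.best_tip)
        self.best_tip = block_hash
        # Keep the window bounded, dropping the oldest entry
        if len(self.last_tips) > self.max_block_window_size:
            self.last_tips.pop(0)
        return True


t = TipTracker(max_block_window_size=2)
```

Because both the ZMQ thread and the polling thread call `update_state`, the duplicate check is what prevents the same block from being pushed to the `Watcher` and `Responder` queues twice.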
teos/conf.py (removed)
@@ -1,26 +0,0 @@
# bitcoind
BTC_RPC_USER = "user"
BTC_RPC_PASSWD = "passwd"
BTC_RPC_HOST = "localhost"
BTC_RPC_PORT = 18443
BTC_NETWORK = "regtest"

# ZMQ
FEED_PROTOCOL = "tcp"
FEED_ADDR = "127.0.0.1"
FEED_PORT = 28332

# TEOS
DATA_FOLDER = "~/.teos/"
MAX_APPOINTMENTS = 100
EXPIRY_DELTA = 6
MIN_TO_SELF_DELAY = 20
SERVER_LOG_FILE = "teos.log"
TEOS_SECRET_KEY = "teos_sk.der"

# CHAIN MONITOR
POLLING_DELTA = 60
BLOCK_WINDOW_SIZE = 10

# LEVELDB
DB_PATH = "appointments"
teos/help.py (new file)
@@ -0,0 +1,13 @@
def show_usage():
    return (
        "USAGE: "
        "\n\tpython teosd.py [global options]"
        "\n\nGLOBAL OPTIONS:"
        "\n\t--btcnetwork \t\tNetwork bitcoind is connected to. Either mainnet, testnet or regtest. Defaults to 'mainnet' (modifiable in conf file)."
        "\n\t--btcrpcuser \t\tbitcoind rpcuser. Defaults to 'user' (modifiable in conf file)."
        "\n\t--btcrpcpassword \tbitcoind rpcpassword. Defaults to 'passwd' (modifiable in conf file)."
        "\n\t--btcrpcconnect \tbitcoind rpcconnect. Defaults to 'localhost' (modifiable in conf file)."
        "\n\t--btcrpcport \t\tbitcoind rpcport. Defaults to '8332' (modifiable in conf file)."
        "\n\t--datadir \t\tspecify data directory. Defaults to '~/.teos' (modifiable in conf file)."
        "\n\t-h --help \t\tshows this message."
    )
@@ -8,7 +8,6 @@ from common.cryptographer import Cryptographer, PublicKey
from teos import errors, LOG_PREFIX
from common.logger import Logger
from common.appointment import Appointment
from teos.block_processor import BlockProcessor

logger = Logger(actor="Inspector", log_name_prefix=LOG_PREFIX)
common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)
@@ -26,10 +25,16 @@ ENCRYPTED_BLOB_MAX_SIZE_HEX = 2 * 2048

class Inspector:
    """
    The :class:`Inspector` class is in charge of verifying that the appointment data provided by the user is correct.

    Args:
        block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance.
        min_to_self_delay (:obj:`int`): the minimum to_self_delay accepted in appointments.

    """

    def __init__(self, config):
        self.config = config
    def __init__(self, block_processor, min_to_self_delay):
        self.block_processor = block_processor
        self.min_to_self_delay = min_to_self_delay

    def inspect(self, appointment_data, signature, public_key):
        """
@@ -49,7 +54,7 @@ class Inspector:
        Errors are defined in :mod:`Errors <teos.errors>`.
        """

        block_height = BlockProcessor.get_block_count()
        block_height = self.block_processor.get_block_count()

        if block_height is not None:
            rcode, message = self.check_locator(appointment_data.get("locator"))
@@ -279,10 +284,10 @@ class Inspector:
                to_self_delay, pow(2, 32)
            )

        elif to_self_delay < self.config.get("MIN_TO_SELF_DELAY"):
        elif to_self_delay < self.min_to_self_delay:
            rcode = errors.APPOINTMENT_FIELD_TOO_SMALL
            message = "to_self_delay too small. The to_self_delay should be at least {} (current: {})".format(
                self.config.get("MIN_TO_SELF_DELAY"), to_self_delay
                self.min_to_self_delay, to_self_delay
            )

        if message is not None:
@@ -5,8 +5,6 @@ from threading import Thread

from teos import LOG_PREFIX
from common.logger import Logger
from teos.cleaner import Cleaner
from teos.carrier import Carrier
from teos.block_processor import BlockProcessor

CONFIRMATIONS_BEFORE_RETRY = 6
MIN_CONFIRMATIONS = 6
@@ -125,17 +123,22 @@ class Responder:
        is populated by the :obj:`ChainMonitor <teos.chain_monitor.ChainMonitor>`.
        db_manager (:obj:`DBManager <teos.db_manager.DBManager>`): a ``DBManager`` instance to interact with the
            database.
        carrier (:obj:`Carrier <teos.carrier.Carrier>`): a ``Carrier`` instance to send transactions to bitcoind.
        block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance
            to get data from bitcoind.
        last_known_block (:obj:`str`): the last block known by the ``Responder``.

    """

    def __init__(self, db_manager):
    def __init__(self, db_manager, carrier, block_processor):
        self.trackers = dict()
        self.tx_tracker_map = dict()
        self.unconfirmed_txs = []
        self.missed_confirmations = dict()
        self.block_queue = Queue()
        self.db_manager = db_manager
        self.carrier = Carrier()
        self.carrier = carrier
        self.block_processor = block_processor
        self.last_known_block = db_manager.load_last_block_hash_responder()

    def awake(self):
@@ -144,8 +147,7 @@ class Responder:

        return responder_thread

    @staticmethod
    def on_sync(block_hash):
    def on_sync(self, block_hash):
        """
        Whether the :obj:`Responder` is in sync with ``bitcoind`` or not. Used when recovering from a crash.

@@ -165,8 +167,7 @@ class Responder:
            :obj:`bool`: whether or not the :obj:`Responder` and ``bitcoind`` are in sync.
        """

        block_processor = BlockProcessor()
        distance_from_tip = block_processor.get_distance_to_tip(block_hash)
        distance_from_tip = self.block_processor.get_distance_to_tip(block_hash)

        if distance_from_tip is not None and distance_from_tip > 1:
            synchronized = False
@@ -266,11 +267,11 @@ class Responder:

        # Distinguish fresh bootstraps from bootstraps from db
        if self.last_known_block is None:
            self.last_known_block = BlockProcessor.get_best_block_hash()
            self.last_known_block = self.block_processor.get_best_block_hash()

        while True:
            block_hash = self.block_queue.get()
            block = BlockProcessor.get_block(block_hash)
            block = self.block_processor.get_block(block_hash)
            logger.info("New block received", block_hash=block_hash, prev_block_hash=block.get("previousblockhash"))

            if len(self.trackers) > 0 and block is not None:
@@ -377,7 +378,7 @@ class Responder:
        if appointment_end <= height and penalty_txid not in self.unconfirmed_txs:

            if penalty_txid not in checked_txs:
                tx = Carrier.get_transaction(penalty_txid)
                tx = self.carrier.get_transaction(penalty_txid)
            else:
                tx = checked_txs.get(penalty_txid)
teos/template.conf (new file)
@@ -0,0 +1,22 @@
[bitcoind]
btc_rpc_user = user
btc_rpc_passwd = passwd
btc_rpc_connect = localhost
btc_rpc_port = 8332
btc_network = mainnet

# [zmq]
feed_protocol = tcp
feed_connect = 127.0.0.1
feed_port = 28332

[teos]
max_appointments = 100
expiry_delta = 6
min_to_self_delay = 20

# [chain monitor]
polling_delta = 60
block_window_size = 10
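Reading a template like the one above with `configparser` shows two details the new `ConfigLoader` has to work around: option names come back lower-cased, and every value comes back as a string. The snippet uses an abbreviated inline template rather than the real file:

```python
import configparser

# Abbreviated stand-in for teos/template.conf
template = """
[bitcoind]
btc_rpc_port = 8332
btc_network = mainnet
"""

parser = configparser.ConfigParser()
parser.read_string(template)
items = dict(parser.items("bitcoind"))
```

This is why `build_config` calls `k.upper()` before matching keys against `DEFAULT_CONF`, and why fields typed `int` go through an explicit `int(v)` cast with a `ValueError` on failure.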
@@ -1,20 +1,26 @@
from getopt import getopt
import os
from sys import argv, exit
from getopt import getopt, GetoptError
from signal import signal, SIGINT, SIGQUIT, SIGTERM

import common.cryptographer
from common.logger import Logger
from common.config_loader import ConfigLoader
from common.cryptographer import Cryptographer
from common.tools import setup_logging, setup_data_folder

from teos import config, LOG_PREFIX
from teos.api import API
from teos.help import show_usage
from teos.watcher import Watcher
from teos.builder import Builder
from teos.carrier import Carrier
from teos.inspector import Inspector
from teos.responder import Responder
from teos.db_manager import DBManager
from teos.chain_monitor import ChainMonitor
from teos.block_processor import BlockProcessor
from teos.tools import can_connect_to_bitcoind, in_correct_network
from teos import LOG_PREFIX, DATA_DIR, DEFAULT_CONF, CONF_FILE_NAME

logger = Logger(actor="Daemon", log_name_prefix=LOG_PREFIX)
common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)
@@ -29,20 +35,29 @@ def handle_signals(signal_received, frame):
exit(0)


def main():
def main(command_line_conf):
global db_manager, chain_monitor

signal(SIGINT, handle_signals)
signal(SIGTERM, handle_signals)
signal(SIGQUIT, handle_signals)

# Loads config and sets up the data folder and log file
config_loader = ConfigLoader(DATA_DIR, CONF_FILE_NAME, DEFAULT_CONF, command_line_conf)
config = config_loader.build_config()
setup_data_folder(DATA_DIR)
setup_logging(config.get("LOG_FILE"), LOG_PREFIX)

logger.info("Starting TEOS")
db_manager = DBManager(config.get("DB_PATH"))

if not can_connect_to_bitcoind():
bitcoind_connect_params = {k: v for k, v in config.items() if k.startswith("BTC")}
bitcoind_feed_params = {k: v for k, v in config.items() if k.startswith("FEED")}

if not can_connect_to_bitcoind(bitcoind_connect_params):
logger.error("Can't connect to bitcoind. Shutting down")

elif not in_correct_network(config.get("BTC_NETWORK")):
elif not in_correct_network(bitcoind_connect_params, config.get("BTC_NETWORK")):
logger.error("bitcoind is running on a different network, check conf.py and bitcoin.conf. Shutting down")

else:
@@ -51,10 +66,23 @@ def main():
if not secret_key_der:
raise IOError("TEOS private key can't be loaded")

watcher = Watcher(db_manager, Responder(db_manager), secret_key_der, config)
block_processor = BlockProcessor(bitcoind_connect_params)
carrier = Carrier(bitcoind_connect_params)

responder = Responder(db_manager, carrier, block_processor)
watcher = Watcher(
db_manager,
block_processor,
responder,
secret_key_der,
config.get("MAX_APPOINTMENTS"),
config.get("EXPIRY_DELTA"),
)

# Create the chain monitor and start monitoring the chain
chain_monitor = ChainMonitor(watcher.block_queue, watcher.responder.block_queue)
chain_monitor = ChainMonitor(
watcher.block_queue, watcher.responder.block_queue, block_processor, bitcoind_feed_params
)

watcher_appointments_data = db_manager.load_watcher_appointments()
responder_trackers_data = db_manager.load_responder_trackers()
@@ -89,7 +117,6 @@ def main():

# Populate the block queues with data if they've missed some while offline. If the blocks of both match
# we don't perform the search twice.
block_processor = BlockProcessor()

# FIXME: 32-reorgs-offline dropped txs are not used at this point.
last_common_ancestor_watcher, dropped_txs_watcher = block_processor.find_last_common_ancestor(
@@ -123,16 +150,41 @@ def main():
# Fire the API and the ChainMonitor
# FIXME: 92-block-data-during-bootstrap-db
chain_monitor.monitor_chain()
API(watcher, config=config).start()
API(Inspector(block_processor, config.get("MIN_TO_SELF_DELAY")), watcher).start()
except Exception as e:
logger.error("An error occurred: {}. Shutting down".format(e))
exit(1)


if __name__ == "__main__":
opts, _ = getopt(argv[1:], "", [""])
for opt, arg in opts:
# FIXME: Leaving this here for future option/arguments
pass
command_line_conf = {}

main()
try:
opts, _ = getopt(
argv[1:],
"h",
["btcnetwork=", "btcrpcuser=", "btcrpcpassword=", "btcrpcconnect=", "btcrpcport=", "datadir=", "help"],
)
for opt, arg in opts:
if opt in ["--btcnetwork"]:
command_line_conf["BTC_NETWORK"] = arg
if opt in ["--btcrpcuser"]:
command_line_conf["BTC_RPC_USER"] = arg
if opt in ["--btcrpcpassword"]:
command_line_conf["BTC_RPC_PASSWD"] = arg
if opt in ["--btcrpcconnect"]:
command_line_conf["BTC_RPC_CONNECT"] = arg
if opt in ["--btcrpcport"]:
try:
command_line_conf["BTC_RPC_PORT"] = int(arg)
except ValueError:
exit("btcrpcport must be an integer")
if opt in ["--datadir"]:
DATA_DIR = os.path.expanduser(arg)
if opt in ["-h", "--help"]:
exit(show_usage())

except GetoptError as e:
exit(e)

main(command_line_conf)

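The `__main__` block above collects command-line overrides into `command_line_conf` and hands them to `ConfigLoader`, which (per the tests added in this PR) resolves values with the priority command line > conf file > defaults. A hypothetical stand-in for that merge step (`merge_config` is not part of the codebase, just a sketch of the precedence rule):

```python
def merge_config(default_conf, conf_file_conf, command_line_conf):
    # Later updates win: defaults < conf file < command line.
    config = dict(default_conf)
    config.update(conf_file_conf)
    config.update(command_line_conf)
    return config

config = merge_config(
    {"BTC_NETWORK": "mainnet", "BTC_RPC_PORT": 8332},  # defaults
    {"BTC_NETWORK": "regtest"},                        # conf file
    {"BTC_RPC_PORT": 18443},                           # e.g. --btcrpcport
)
```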
@@ -1,7 +1,6 @@
from http.client import HTTPException
from socket import timeout
from http.client import HTTPException

import teos.conf as conf
from teos.utils.auth_proxy import AuthServiceProxy, JSONRPCException

"""
@@ -10,25 +9,38 @@ Tools is a module with general methods that can be used by different entities in th


# NOTCOVERED
def bitcoin_cli():
def bitcoin_cli(btc_connect_params):
"""
An ``http`` connection with ``bitcoind`` using the ``json-rpc`` interface.

Args:
btc_connect_params (:obj:`dict`): a dictionary with the parameters to connect to bitcoind
(rpc user, rpc passwd, host and port)

Returns:
:obj:`AuthServiceProxy <teos.utils.auth_proxy.AuthServiceProxy>`: An authenticated service proxy to ``bitcoind``
that can be used to send ``json-rpc`` commands.
"""

return AuthServiceProxy(
"http://%s:%s@%s:%d" % (conf.BTC_RPC_USER, conf.BTC_RPC_PASSWD, conf.BTC_RPC_HOST, conf.BTC_RPC_PORT)
"http://%s:%s@%s:%d"
% (
btc_connect_params.get("BTC_RPC_USER"),
btc_connect_params.get("BTC_RPC_PASSWD"),
btc_connect_params.get("BTC_RPC_CONNECT"),
btc_connect_params.get("BTC_RPC_PORT"),
)
)


# NOTCOVERED
def can_connect_to_bitcoind():
def can_connect_to_bitcoind(btc_connect_params):
"""
Checks if the tower has connection to ``bitcoind``.

Args:
btc_connect_params (:obj:`dict`): a dictionary with the parameters to connect to bitcoind
(rpc user, rpc passwd, host and port)
Returns:
:obj:`bool`: ``True`` if the connection can be established. ``False`` otherwise.
"""
@@ -36,18 +48,23 @@ def can_connect_to_bitcoind():
can_connect = True

try:
bitcoin_cli().help()
bitcoin_cli(btc_connect_params).help()
except (timeout, ConnectionRefusedError, JSONRPCException, HTTPException, OSError):
can_connect = False

return can_connect


def in_correct_network(network):
def in_correct_network(btc_connect_params, network):
"""
Checks if ``bitcoind`` and the tower are configured to run in the same network (``mainnet``, ``testnet`` or
``regtest``)

Args:
btc_connect_params (:obj:`dict`): a dictionary with the parameters to connect to bitcoind
(rpc user, rpc passwd, host and port)
network (:obj:`str`): the network the tower is connected to.

Returns:
:obj:`bool`: ``True`` if the network configuration matches. ``False`` otherwise.
"""
@@ -56,7 +73,7 @@ def in_correct_network(network):
testnet3_genesis_block_hash = "000000000933ea01ad0ee984209779baaec3ced90fa3f408719526f8d77f4943"
correct_network = False

genesis_block_hash = bitcoin_cli().getblockhash(0)
genesis_block_hash = bitcoin_cli(btc_connect_params).getblockhash(0)

if network == "mainnet" and genesis_block_hash == mainnet_genesis_block_hash:
correct_network = True

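The refactored `bitcoin_cli` builds its connection string from the `BTC_*` entries of the config dictionary instead of module-level constants. The URL construction in isolation, with example credentials only (`rpc_url` is a standalone sketch, not a function from the codebase):

```python
def rpc_url(btc_connect_params):
    # Same format string bitcoin_cli uses above, minus the AuthServiceProxy wrapper.
    return "http://%s:%s@%s:%d" % (
        btc_connect_params.get("BTC_RPC_USER"),
        btc_connect_params.get("BTC_RPC_PASSWD"),
        btc_connect_params.get("BTC_RPC_CONNECT"),
        btc_connect_params.get("BTC_RPC_PORT"),
    )

url = rpc_url(
    {"BTC_RPC_USER": "user", "BTC_RPC_PASSWD": "passwd",
     "BTC_RPC_CONNECT": "localhost", "BTC_RPC_PORT": 8332}
)
```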
@@ -3,15 +3,13 @@ from queue import Queue
from threading import Thread

import common.cryptographer
from common.cryptographer import Cryptographer
from common.appointment import Appointment
from common.tools import compute_locator

from common.logger import Logger
from common.tools import compute_locator
from common.appointment import Appointment
from common.cryptographer import Cryptographer

from teos import LOG_PREFIX
from teos.cleaner import Cleaner
from teos.block_processor import BlockProcessor

logger = Logger(actor="Watcher", log_name_prefix=LOG_PREFIX)
common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)
@@ -34,11 +32,12 @@ class Watcher:

Args:
db_manager (:obj:`DBManager <teos.db_manager>`): a ``DBManager`` instance to interact with the database.
sk_der (:obj:`bytes`): a DER encoded private key used to sign appointment receipts (signaling acceptance).
config (:obj:`dict`): a dictionary containing all the configuration parameters. Used locally to retrieve
``MAX_APPOINTMENTS`` and ``EXPIRY_DELTA``.
block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance to
get blocks from bitcoind.
responder (:obj:`Responder <teos.responder.Responder>`): a ``Responder`` instance.

sk_der (:obj:`bytes`): a DER encoded private key used to sign appointment receipts (signaling acceptance).
max_appointments (:obj:`int`): the maximum amount of appointments accepted by the ``Watcher`` at the same time.
expiry_delta (:obj:`int`): the additional time the ``Watcher`` will keep an expired appointment around.

Attributes:
appointments (:obj:`dict`): a dictionary containing a simplification of the appointments (:obj:`Appointment
@@ -48,23 +47,28 @@ class Watcher:
appointments with the same ``locator``.
block_queue (:obj:`Queue`): A queue used by the :obj:`Watcher` to receive block hashes from ``bitcoind``. It is
populated by the :obj:`ChainMonitor <teos.chain_monitor.ChainMonitor>`.
config (:obj:`dict`): a dictionary containing all the configuration parameters. Used locally to retrieve
``MAX_APPOINTMENTS`` and ``EXPIRY_DELTA``.
db_manager (:obj:`DBManager <teos.db_manager>`): A db manager instance to interact with the database.
block_processor (:obj:`BlockProcessor <teos.block_processor.BlockProcessor>`): a ``BlockProcessor`` instance to
get blocks from bitcoind.
responder (:obj:`Responder <teos.responder.Responder>`): a ``Responder`` instance.
signing_key (:mod:`PrivateKey`): a private key used to sign accepted appointments.
max_appointments (:obj:`int`): the maximum amount of appointments accepted by the ``Watcher`` at the same time.
expiry_delta (:obj:`int`): the additional time the ``Watcher`` will keep an expired appointment around.

Raises:
ValueError: if `teos_sk_file` is not found.

"""

def __init__(self, db_manager, responder, sk_der, config):
def __init__(self, db_manager, block_processor, responder, sk_der, max_appointments, expiry_delta):
self.appointments = dict()
self.locator_uuid_map = dict()
self.block_queue = Queue()
self.config = config
self.db_manager = db_manager
self.block_processor = block_processor
self.responder = responder
self.max_appointments = max_appointments
self.expiry_delta = expiry_delta
self.signing_key = Cryptographer.load_private_key_der(sk_der)

def awake(self):
@@ -102,7 +106,7 @@ class Watcher:

"""

if len(self.appointments) < self.config.get("MAX_APPOINTMENTS"):
if len(self.appointments) < self.max_appointments:

uuid = uuid4().hex
self.appointments[uuid] = {"locator": appointment.locator, "end_time": appointment.end_time}
@@ -139,7 +143,7 @@ class Watcher:

while True:
block_hash = self.block_queue.get()
block = BlockProcessor.get_block(block_hash)
block = self.block_processor.get_block(block_hash)
logger.info("New block received", block_hash=block_hash, prev_block_hash=block.get("previousblockhash"))

if len(self.appointments) > 0 and block is not None:
@@ -148,7 +152,7 @@ class Watcher:
expired_appointments = [
uuid
for uuid, appointment_data in self.appointments.items()
if block["height"] > appointment_data.get("end_time") + self.config.get("EXPIRY_DELTA")
if block["height"] > appointment_data.get("end_time") + self.expiry_delta
]

Cleaner.delete_expired_appointments(
@@ -265,7 +269,7 @@ class Watcher:
except ValueError:
penalty_rawtx = None

penalty_tx = BlockProcessor.decode_raw_transaction(penalty_rawtx)
penalty_tx = self.block_processor.decode_raw_transaction(penalty_rawtx)
decrypted_blobs[appointment.encrypted_blob.data] = (penalty_tx, penalty_rawtx)

if penalty_tx is not None:

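The `expired_appointments` comprehension above now reads `expiry_delta` from the instance instead of the config dict; the check itself is unchanged. Worked through in isolation with made-up appointment data:

```python
expiry_delta = 6  # extra blocks an expired appointment is kept around
appointments = {
    "uuid1": {"end_time": 100},  # 107 > 100 + 6 -> expired
    "uuid2": {"end_time": 110},  # still being watched
}
height = 107  # current block height

expired_appointments = [
    uuid
    for uuid, appointment_data in appointments.items()
    if height > appointment_data.get("end_time") + expiry_delta
]
```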
@@ -1,6 +1,10 @@
import pytest
import random

from cli import DEFAULT_CONF

from common.config_loader import ConfigLoader


@pytest.fixture(scope="session", autouse=True)
def prng_seed():
@@ -11,3 +15,10 @@ def get_random_value_hex(nbytes):
pseudo_random_value = random.getrandbits(8 * nbytes)
prv_hex = "{:x}".format(pseudo_random_value)
return prv_hex.zfill(2 * nbytes)


def get_config():
config_loader = ConfigLoader(".", "teos_cli.conf", DEFAULT_CONF, {})
config = config_loader.build_config()

return config

@@ -12,10 +12,12 @@ from common.cryptographer import Cryptographer

from common.blob import Blob
import cli.teos_cli as teos_cli
from test.cli.unit.conftest import get_random_value_hex
from test.cli.unit.conftest import get_random_value_hex, get_config

common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=teos_cli.LOG_PREFIX)

config = get_config()

# dummy keys for the tests
dummy_sk = PrivateKey()
dummy_pk = dummy_sk.public_key
@@ -25,9 +27,7 @@ another_sk = PrivateKey()

# Replace the key in the module with a key we control for the tests
teos_cli.teos_public_key = dummy_pk
# Replace endpoint with dummy one
teos_cli.teos_api_server = "https://dummy.com"
teos_cli.teos_api_port = 12345
teos_endpoint = "{}:{}/".format(teos_cli.teos_api_server, teos_cli.teos_api_port)
teos_endpoint = "http://{}:{}/".format(config.get("TEOS_SERVER"), config.get("TEOS_PORT"))

dummy_appointment_request = {
"tx": get_random_value_hex(192),
@@ -107,7 +107,7 @@ def test_add_appointment(monkeypatch):

response = {"locator": dummy_appointment.locator, "signature": get_dummy_signature()}
responses.add(responses.POST, teos_endpoint, json=response, status=200)
result = teos_cli.add_appointment([json.dumps(dummy_appointment_request)])
result = teos_cli.add_appointment([json.dumps(dummy_appointment_request)], teos_endpoint, config)

assert len(responses.calls) == 1
assert responses.calls[0].request.url == teos_endpoint
@@ -128,7 +128,9 @@ def test_add_appointment_with_invalid_signature(monkeypatch):
}

responses.add(responses.POST, teos_endpoint, json=response, status=200)
result = teos_cli.add_appointment([json.dumps(dummy_appointment_request)])
result = teos_cli.add_appointment([json.dumps(dummy_appointment_request)], teos_endpoint, config)

shutil.rmtree(config.get("APPOINTMENTS_FOLDER_NAME"))

assert result is False

@@ -164,7 +166,7 @@ def test_post_appointment():
}

responses.add(responses.POST, teos_endpoint, json=response, status=200)
response = teos_cli.post_appointment(json.dumps(dummy_appointment_request))
response = teos_cli.post_appointment(json.dumps(dummy_appointment_request), teos_endpoint)

assert len(responses.calls) == 1
assert responses.calls[0].request.url == teos_endpoint
@@ -181,27 +183,27 @@ def test_process_post_appointment_response():

# A 200 OK with a correct json response should return the json of the response
responses.add(responses.POST, teos_endpoint, json=response, status=200)
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request))
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request), teos_endpoint)
assert teos_cli.process_post_appointment_response(r) == r.json()

# If we modify the response code for a rejection (let's say 404) we should get None
responses.replace(responses.POST, teos_endpoint, json=response, status=404)
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request))
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request), teos_endpoint)
assert teos_cli.process_post_appointment_response(r) is None

# The same should happen if the response is not in json
responses.replace(responses.POST, teos_endpoint, status=404)
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request))
r = teos_cli.post_appointment(json.dumps(dummy_appointment_request), teos_endpoint)
assert teos_cli.process_post_appointment_response(r) is None


def test_save_appointment_receipt(monkeypatch):
appointments_folder = "test_appointments_receipts"
teos_cli.config["APPOINTMENTS_FOLDER_NAME"] = appointments_folder
config["APPOINTMENTS_FOLDER_NAME"] = appointments_folder

# The function creates a new directory if it does not exist
assert not os.path.exists(appointments_folder)
teos_cli.save_appointment_receipt(dummy_appointment.to_dict(), get_dummy_signature())
teos_cli.save_appointment_receipt(dummy_appointment.to_dict(), get_dummy_signature(), config)
assert os.path.exists(appointments_folder)

# Check that the receipt has been saved by checking the file names
@@ -216,10 +218,11 @@ def test_get_appointment():
# Response of get_appointment endpoint is an appointment with status added to it.
dummy_appointment_full["status"] = "being_watched"
response = dummy_appointment_full
get_appointment_endpoint = teos_endpoint + "get_appointment"

request_url = "{}get_appointment?locator={}".format(teos_endpoint, response.get("locator"))
request_url = "{}?locator={}".format(get_appointment_endpoint, response.get("locator"))
responses.add(responses.GET, request_url, json=response, status=200)
result = teos_cli.get_appointment(response.get("locator"))
result = teos_cli.get_appointment(response.get("locator"), get_appointment_endpoint)

assert len(responses.calls) == 1
assert responses.calls[0].request.url == request_url
@@ -229,9 +232,10 @@ def test_get_appointment():

@responses.activate
def test_get_appointment_err():
locator = get_random_value_hex(16)
get_appointment_endpoint = teos_endpoint + "get_appointment"

# Test that get_appointment handles a connection error appropriately.
request_url = "{}get_appointment?locator=".format(teos_endpoint, locator)
request_url = "{}?locator=".format(get_appointment_endpoint, locator)
responses.add(responses.GET, request_url, body=ConnectionError())

assert not teos_cli.get_appointment(locator)
assert not teos_cli.get_appointment(locator, get_appointment_endpoint)

244 test/common/unit/test_config_loader.py Normal file
@@ -0,0 +1,244 @@
import os
import shutil
import pytest
from copy import deepcopy
from configparser import ConfigParser
from common.config_loader import ConfigLoader

DEFAULT_CONF = {
"FOO_STR": {"value": "var", "type": str},
"FOO_STR_2": {"value": "var", "type": str},
"FOO_INT": {"value": 12345, "type": int},
"FOO_INT2": {"value": 6789, "type": int},
"FOO_STR_PATH": {"value": "foo.var", "type": str, "path": True},
"FOO_STR_PATH_2": {"value": "foo2.var", "type": str, "path": True},
}

CONF_FILE_CONF = {
"FOO_STR": {"value": "var", "type": str},
"FOO_INT2": {"value": 6789, "type": int},
"FOO_STR_PATH": {"value": "foo.var", "type": str, "path": True},
"ADDITIONAL_FOO": {"value": "additional_var", "type": str},
}

COMMAND_LINE_CONF = {
"FOO_STR": {"value": "cmd_var", "type": str},
"FOO_INT": {"value": 54321, "type": int},
"FOO_STR_PATH": {"value": "var.foo", "type": str, "path": True},
}

data_dir = "test_data_dir/"
conf_file_name = "test_conf.conf"

conf_file_data = {k: v["value"] for k, v in CONF_FILE_CONF.items()}
cmd_data = {k: v["value"] for k, v in COMMAND_LINE_CONF.items()}


@pytest.fixture(scope="module")
def conf_file_conf():
config_parser = ConfigParser()

config_parser["foo_section"] = conf_file_data

os.mkdir(data_dir)

with open(data_dir + conf_file_name, "w") as fout:
config_parser.write(fout)

yield conf_file_data

shutil.rmtree(data_dir)


def test_init():
conf_loader = ConfigLoader(data_dir, conf_file_name, DEFAULT_CONF, COMMAND_LINE_CONF)
assert conf_loader.data_dir == data_dir
assert conf_loader.conf_file_path == data_dir + conf_file_name
assert conf_loader.conf_fields == DEFAULT_CONF
assert conf_loader.command_line_conf == COMMAND_LINE_CONF


def test_build_conf_only_default():
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)

conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, {})
config = conf_loader.build_config()

for k, v in config.items():
assert k in DEFAULT_CONF
assert isinstance(v, DEFAULT_CONF[k].get("type"))

if DEFAULT_CONF[k].get("path"):
assert v == foo_data_dir + DEFAULT_CONF[k].get("value")
else:
assert v == DEFAULT_CONF[k].get("value")


def test_build_conf_with_conf_file(conf_file_conf):
default_conf_copy = deepcopy(DEFAULT_CONF)

conf_loader = ConfigLoader(data_dir, conf_file_name, default_conf_copy, {})
config = conf_loader.build_config()

for k, v in config.items():
# Check that we have only loaded parameters that were already in the default conf. Additional params are not
# loaded
assert k in DEFAULT_CONF
assert isinstance(v, DEFAULT_CONF[k].get("type"))

# If a value is in the conf file, it will overwrite the one in the default conf
if k in conf_file_conf:
comp_v = conf_file_conf[k]
else:
comp_v = DEFAULT_CONF[k].get("value")

if DEFAULT_CONF[k].get("path"):
assert v == data_dir + comp_v
else:
assert v == comp_v


def test_build_conf_with_command_line():
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)

conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, cmd_data)
config = conf_loader.build_config()

for k, v in config.items():
# Check that we have only loaded parameters that were already in the default conf. Additional params are not
# loaded
assert k in DEFAULT_CONF
assert isinstance(v, DEFAULT_CONF[k].get("type"))

# If a value is in the command line conf, it will overwrite the one in the default conf
if k in COMMAND_LINE_CONF:
comp_v = cmd_data[k]
else:
comp_v = DEFAULT_CONF[k].get("value")

if DEFAULT_CONF[k].get("path"):
assert v == foo_data_dir + comp_v
else:
assert v == comp_v


def test_build_conf_with_all(conf_file_conf):
default_conf_copy = deepcopy(DEFAULT_CONF)

conf_loader = ConfigLoader(data_dir, conf_file_name, default_conf_copy, cmd_data)
config = conf_loader.build_config()

for k, v in config.items():
# Check that we have only loaded parameters that were already in the default conf. Additional params are not
# loaded
assert k in DEFAULT_CONF
assert isinstance(v, DEFAULT_CONF[k].get("type"))

# The priority is: cmd, conf file, default
if k in cmd_data:
comp_v = cmd_data[k]
elif k in conf_file_conf:
comp_v = conf_file_conf[k]
else:
comp_v = DEFAULT_CONF[k].get("value")

if DEFAULT_CONF[k].get("path"):
assert v == data_dir + comp_v
else:
assert v == comp_v


def test_build_invalid_data(conf_file_conf):
# Let's first try with only default
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)
default_conf_copy["FOO_INT"]["value"] = "foo"

conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, {})

with pytest.raises(ValueError):
conf_loader.build_config()

# Set back the default value
default_conf_copy["FOO_INT"]["value"] = DEFAULT_CONF["FOO_INT"]["value"]

# Only conf file now
conf_file_conf["FOO_INT2"] = "foo"
# Save the wrong data
config_parser = ConfigParser()
config_parser["foo_section"] = conf_file_data
with open(data_dir + conf_file_name, "w") as fout:
config_parser.write(fout)

conf_loader = ConfigLoader(data_dir, conf_file_name, default_conf_copy, {})

with pytest.raises(ValueError):
conf_loader.build_config()

# Only command line now
cmd_data["FOO_INT"] = "foo"
conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, cmd_data)

with pytest.raises(ValueError):
conf_loader.build_config()

# All together
# Set back a wrong default
default_conf_copy["FOO_STR"]["value"] = 1234
conf_loader = ConfigLoader(data_dir, conf_file_name, default_conf_copy, cmd_data)

with pytest.raises(ValueError):
conf_loader.build_config()


def test_create_config_dict():
# create_config_dict should create a dictionary with the config fields in ConfigLoader.config_fields as long as
# the type of the field "value" matches the type in "type". The conf source does not matter here.
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)
conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, {})
config = conf_loader.create_config_dict()

assert isinstance(config, dict)
for k, v in config.items():
assert k in config
assert isinstance(v, default_conf_copy[k].get("type"))


def test_create_config_dict_invalid_type():
# If any type does not match the expected one, we should get a ValueError
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)

# Modify a field so the type does not match
default_conf_copy["FOO_STR_2"]["value"] = 1234

conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, {})

with pytest.raises(ValueError):
conf_loader.create_config_dict()


def test_extend_paths():
# Test that only items with the path flag are extended
foo_data_dir = "foo/"
default_conf_copy = deepcopy(DEFAULT_CONF)

conf_loader = ConfigLoader(foo_data_dir, conf_file_name, default_conf_copy, {})
conf_loader.extend_paths()

for k, field in conf_loader.conf_fields.items():
if isinstance(field.get("value"), str):
if field.get("path") is True:
assert conf_loader.data_dir in field.get("value")
else:
assert conf_loader.data_dir not in field.get("value")

# Check that absolute paths are not extended
absolute_path = "/foo/var"
conf_loader.conf_fields["ABSOLUTE_PATH"] = {"value": absolute_path, "type": str, "path": True}
conf_loader.extend_paths()

assert conf_loader.conf_fields["ABSOLUTE_PATH"]["value"] == absolute_path
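The `test_extend_paths` test above pins down the semantics of `ConfigLoader.extend_paths`: fields flagged with `"path"` are prefixed with the data directory, while absolute paths are left untouched. A minimal sketch of that rule (`extend_path` is a hypothetical helper, not the real implementation):

```python
import os

def extend_path(data_dir, value):
    # Absolute paths are kept as-is; relative ones get the data dir prefix.
    if os.path.isabs(value):
        return value
    return data_dir + value
```

For example, `extend_path("foo/", "foo.var")` yields `"foo/foo.var"`, while `extend_path("foo/", "/foo/var")` comes back unchanged.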
@@ -1,10 +1,5 @@
import os
import pytest
import logging
from copy import deepcopy

# FIXME: Import from teos. Common should not import anything from cli nor teos.
from teos import conf_fields

from common.constants import LOCATOR_LEN_BYTES
from common.tools import (
@@ -12,16 +7,11 @@ from common.tools import (
check_locator_format,
compute_locator,
setup_data_folder,
check_conf_fields,
extend_paths,
setup_logging,
)
from test.common.unit.conftest import get_random_value_hex


conf_fields_copy = deepcopy(conf_fields)


def test_check_sha256_hex_format():
# Only 32-byte hex encoded strings should pass the test
wrong_inputs = [None, str(), 213, 46.67, dict(), "A" * 63, "C" * 65, bytes(), get_random_value_hex(31)]
@@ -75,39 +65,6 @@ def test_setup_data_folder():
os.rmdir(test_folder)


def test_check_conf_fields():
# The test should work with a valid config_fields (obtained from a valid conf.py)
assert type(check_conf_fields(conf_fields_copy)) == dict


def test_bad_check_conf_fields():
# Create a messed up version of the file that should throw an error.
conf_fields_copy["BTC_RPC_USER"] = 0000
conf_fields_copy["BTC_RPC_PASSWD"] = "password"
conf_fields_copy["BTC_RPC_HOST"] = 000

# We should get a ValueError here.
with pytest.raises(Exception):
check_conf_fields(conf_fields_copy)


def test_extend_paths():
# Test that only items with the path flag are extended
config_fields = {
"foo": {"value": "foofoo"},
"var": {"value": "varvar", "path": True},
"foovar": {"value": "foovarfoovar"},
}
base_path = "base_path/"
extended_config_field = extend_paths(base_path, config_fields)

for k, field in extended_config_field.items():
if field.get("path") is True:
assert base_path in field.get("value")
else:
assert base_path not in field.get("value")


def test_setup_logging():
# Check that setup_logging creates two new logs for every prefix
prefix = "foo"

@@ -3,19 +3,31 @@ import random
|
||||
from multiprocessing import Process
|
||||
from decimal import Decimal, getcontext
|
||||
|
||||
import teos.conf as conf
|
||||
from teos.teosd import main
|
||||
from teos import DATA_DIR, CONF_FILE_NAME, DEFAULT_CONF
|
||||
from teos.utils.auth_proxy import AuthServiceProxy
|
||||
|
||||
from common.config_loader import ConfigLoader
|
||||
|
||||
|
||||
getcontext().prec = 10
|
||||
END_TIME_DELTA = 10
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def bitcoin_cli():
|
||||
# return AuthServiceProxy("http://%s:%s@%s:%d" % (conf.BTC_RPC_USER, conf.BTC_RPC_PASSWD, conf.BTC_RPC_HOST, 18444))
|
||||
config = get_config(DATA_DIR, CONF_FILE_NAME, DEFAULT_CONF)
|
||||
print(config)
|
||||
# btc_connect_params = {k: v["value"] for k, v in DEFAULT_CONF.items() if k.startswith("BTC")}
|
||||
|
||||
return AuthServiceProxy(
|
||||
"http://%s:%s@%s:%d" % (conf.BTC_RPC_USER, conf.BTC_RPC_PASSWD, conf.BTC_RPC_HOST, conf.BTC_RPC_PORT)
|
||||
"http://%s:%s@%s:%d"
|
||||
% (
|
||||
config.get("BTC_RPC_USER"),
|
||||
config.get("BTC_RPC_PASSWD"),
|
||||
config.get("BTC_RPC_CONNECT"),
|
||||
config.get("BTC_RPC_PORT"),
|
||||
)
|
||||
)
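The fixture above now builds the bitcoind JSON-RPC endpoint from the loaded config instead of the old `teos.conf` module constants. Standalone, the string construction looks like this (the config values are the defaults from the e2e `teos.conf` shown later in this diff):

```python
# Illustrative only: format a bitcoind JSON-RPC endpoint the same way the
# bitcoin_cli fixture does, from a flat config dict.
config = {
    "BTC_RPC_USER": "user",
    "BTC_RPC_PASSWD": "passwd",
    "BTC_RPC_CONNECT": "localhost",
    "BTC_RPC_PORT": 18445,
}

rpc_url = "http://%s:%s@%s:%d" % (
    config.get("BTC_RPC_USER"),
    config.get("BTC_RPC_PASSWD"),
    config.get("BTC_RPC_CONNECT"),
    config.get("BTC_RPC_PORT"),
)
# → "http://user:passwd@localhost:18445"
```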


@@ -51,7 +63,7 @@ def create_txs(bitcoin_cli):


def run_teosd():
    teosd_process = Process(target=main, daemon=True)
    teosd_process = Process(target=main, kwargs={"command_line_conf": {}}, daemon=True)
    teosd_process.start()

    return teosd_process

@@ -116,3 +128,10 @@ def build_appointment_data(bitcoin_cli, commitment_tx_id, penalty_tx):
    }

    return appointment_data


def get_config(data_folder, conf_file_name, default_conf):
    config_loader = ConfigLoader(data_folder, conf_file_name, default_conf, {})
    config = config_loader.build_config()

    return config
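The `ConfigLoader` internals are not part of this diff, only its call signature. A plausible minimal sketch of what `build_config` may do under these assumptions (start from the nested defaults, then override from an INI file in the data folder; the function below is hypothetical, not the `common.config_loader` implementation):

```python
import configparser
import os


def build_config(data_folder, conf_file_name, default_conf):
    # Hypothetical sketch: flatten {"KEY": {"value": ...}} defaults, then
    # override with any key found in <data_folder>/<conf_file_name>.
    # INI keys are lowercase in the file, so they are upper-cased to match.
    config = {k: v["value"] for k, v in default_conf.items()}

    parser = configparser.ConfigParser()
    parser.read(os.path.join(data_folder, conf_file_name))
    for section in parser.sections():
        for key, value in parser.items(section):
            config[key.upper()] = value

    return config
```

Values read from the file arrive as strings under this sketch; the real loader may additionally cast them to the type of the default.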

@@ -1,26 +0,0 @@
# bitcoind
BTC_RPC_USER = "user"
BTC_RPC_PASSWD = "passwd"
BTC_RPC_HOST = "localhost"
BTC_RPC_PORT = 18445
BTC_NETWORK = "regtest"

# ZMQ
FEED_PROTOCOL = "tcp"
FEED_ADDR = "127.0.0.1"
FEED_PORT = 28335

# TEOS
DATA_FOLDER = "~/.teos/"
MAX_APPOINTMENTS = 100
EXPIRY_DELTA = 6
MIN_TO_SELF_DELAY = 20
SERVER_LOG_FILE = "teos.log"
TEOS_SECRET_KEY = "teos_sk.der"

# CHAIN MONITOR
POLLING_DELTA = 60
BLOCK_WINDOW_SIZE = 10

# LEVELDB
DB_PATH = "appointments"
20
test/teos/e2e/teos.conf
Normal file

@@ -0,0 +1,20 @@
[bitcoind]
btc_rpc_user = user
btc_rpc_passwd = passwd
btc_rpc_connect = localhost
btc_rpc_port = 18445
btc_network = regtest

# [zmq]
feed_protocol = tcp
feed_connect = 127.0.0.1
feed_port = 28335

[teos]
max_appointments = 100
expiry_delta = 6
min_to_self_delay = 20

# [chain monitor]
polling_delta = 60
block_window_size = 10
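Note a quirk of the new file format above: the `[zmq]` and `[chain monitor]` section headers are commented out while their keys are not. If this file is parsed with a standard INI parser (the actual ConfigLoader is not shown in this diff, so this is an assumption), a full-line `#` comment is ignored and the following keys fall into the preceding section:

```python
import configparser

# Illustrative: with configparser, "# [zmq]" is a comment, so feed_protocol
# is read as a key of [bitcoind], not of a zmq section.
sample = """\
[bitcoind]
btc_rpc_user = user

# [zmq]
feed_protocol = tcp

[teos]
max_appointments = 100
"""

parser = configparser.ConfigParser()
parser.read_string(sample)
assert "feed_protocol" in parser["bitcoind"]
assert parser["teos"]["max_appointments"] == "100"
```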

@@ -3,12 +3,10 @@ import binascii
from time import sleep
from riemann.tx import Tx


from teos import HOST, PORT
from cli import teos_cli
from common.blob import Blob
from cli import teos_cli, DATA_DIR, DEFAULT_CONF, CONF_FILE_NAME

import common.cryptographer
from common.blob import Blob
from common.logger import Logger
from common.tools import compute_locator
from common.appointment import Appointment

@@ -20,15 +18,20 @@ from test.teos.e2e.conftest import (
    get_random_value_hex,
    create_penalty_tx,
    run_teosd,
    get_config,
)
from cli import config as cli_conf

cli_config = get_config(DATA_DIR, CONF_FILE_NAME, DEFAULT_CONF)
common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix="")

# We'll use teos_cli to add appointments. The expected input format is a list of arguments with a json-encoded
# appointment
teos_cli.teos_api_server = "http://{}".format(HOST)
teos_cli.teos_api_port = PORT
# # We'll use teos_cli to add appointments. The expected input format is a list of arguments with a json-encoded
# # appointment
# teos_cli.teos_api_server = "http://{}".format(HOST)
# teos_cli.teos_api_port = PORT

teos_base_endpoint = "http://{}:{}".format(cli_config.get("TEOS_SERVER"), cli_config.get("TEOS_PORT"))
teos_add_appointment_endpoint = teos_base_endpoint
teos_get_appointment_endpoint = teos_base_endpoint + "/get_appointment"

# Run teosd
teosd_process = run_teosd()

@@ -43,7 +46,7 @@ def broadcast_transaction_and_mine_block(bitcoin_cli, commitment_tx, addr):
def get_appointment_info(locator):
    # Check that the justice has been triggered (the appointment has moved from Watcher to Responder)
    sleep(1)  # Let's add a bit of delay so the state can be updated
    return teos_cli.get_appointment(locator)
    return teos_cli.get_appointment(locator, teos_get_appointment_endpoint)


def test_appointment_life_cycle(bitcoin_cli, create_txs):

@@ -52,7 +55,7 @@ def test_appointment_life_cycle(bitcoin_cli, create_txs):
    appointment_data = build_appointment_data(bitcoin_cli, commitment_tx_id, penalty_tx)
    locator = compute_locator(commitment_tx_id)

    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True

    appointment_info = get_appointment_info(locator)
    assert appointment_info is not None

@@ -102,7 +105,7 @@ def test_appointment_malformed_penalty(bitcoin_cli, create_txs):
    appointment_data = build_appointment_data(bitcoin_cli, commitment_tx_id, mod_penalty_tx.hex())
    locator = compute_locator(commitment_tx_id)

    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True

    # Broadcast the commitment transaction and mine a block
    new_addr = bitcoin_cli.getnewaddress()

@@ -132,7 +135,7 @@ def test_appointment_wrong_key(bitcoin_cli, create_txs):
    appointment = Appointment.from_dict(appointment_data)

    teos_pk, cli_sk, cli_pk_der = teos_cli.load_keys(
        cli_conf.get("TEOS_PUBLIC_KEY"), cli_conf.get("CLI_PRIVATE_KEY"), cli_conf.get("CLI_PUBLIC_KEY")
        cli_config.get("TEOS_PUBLIC_KEY"), cli_config.get("CLI_PRIVATE_KEY"), cli_config.get("CLI_PUBLIC_KEY")
    )
    hex_pk_der = binascii.hexlify(cli_pk_der)

@@ -140,7 +143,7 @@ def test_appointment_wrong_key(bitcoin_cli, create_txs):
    data = {"appointment": appointment.to_dict(), "signature": signature, "public_key": hex_pk_der.decode("utf-8")}

    # Send appointment to the server.
    response = teos_cli.post_appointment(data)
    response = teos_cli.post_appointment(data, teos_add_appointment_endpoint)
    response_json = teos_cli.process_post_appointment_response(response)

    # Check that the server has accepted the appointment

@@ -176,8 +179,8 @@ def test_two_identical_appointments(bitcoin_cli, create_txs):
    locator = compute_locator(commitment_tx_id)

    # Send the appointment twice
    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True

    # Broadcast the commitment transaction and mine a block
    new_addr = bitcoin_cli.getnewaddress()

@@ -210,8 +213,8 @@ def test_two_appointment_same_locator_different_penalty(bitcoin_cli, create_txs)
    appointment2_data = build_appointment_data(bitcoin_cli, commitment_tx_id, penalty_tx2)
    locator = compute_locator(commitment_tx_id)

    assert teos_cli.add_appointment([json.dumps(appointment1_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment2_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment1_data)], teos_add_appointment_endpoint, cli_config) is True
    assert teos_cli.add_appointment([json.dumps(appointment2_data)], teos_add_appointment_endpoint, cli_config) is True

    # Broadcast the commitment transaction and mine a block
    new_addr = bitcoin_cli.getnewaddress()

@@ -238,7 +241,7 @@ def test_appointment_shutdown_teos_trigger_back_online(create_txs, bitcoin_cli):
    appointment_data = build_appointment_data(bitcoin_cli, commitment_tx_id, penalty_tx)
    locator = compute_locator(commitment_tx_id)

    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True

    # Restart teos
    teosd_process.terminate()

@@ -276,7 +279,7 @@ def test_appointment_shutdown_teos_trigger_while_offline(create_txs, bitcoin_cli
    appointment_data = build_appointment_data(bitcoin_cli, commitment_tx_id, penalty_tx)
    locator = compute_locator(commitment_tx_id)

    assert teos_cli.add_appointment([json.dumps(appointment_data)]) is True
    assert teos_cli.add_appointment([json.dumps(appointment_data)], teos_add_appointment_endpoint, cli_config) is True

    # Check that the appointment is still in the Watcher
    appointment_info = get_appointment_info(locator)

@@ -1,32 +1,40 @@
import os
import pytest
import random
import requests
from time import sleep
from shutil import rmtree
from threading import Thread

from coincurve import PrivateKey

from common.blob import Blob
from teos.responder import TransactionTracker
from teos.tools import bitcoin_cli
from teos.db_manager import DBManager
from common.appointment import Appointment
from common.tools import compute_locator

from bitcoind_mock.transaction import create_dummy_transaction
from bitcoind_mock.bitcoind import BitcoindMock
from bitcoind_mock.conf import BTC_RPC_HOST, BTC_RPC_PORT
from bitcoind_mock.transaction import create_dummy_transaction

from teos.carrier import Carrier
from teos.tools import bitcoin_cli
from teos.db_manager import DBManager
from teos import LOG_PREFIX, DEFAULT_CONF
from teos.responder import TransactionTracker
from teos.block_processor import BlockProcessor

from teos import LOG_PREFIX
import common.cryptographer
from common.blob import Blob
from common.logger import Logger
from common.tools import compute_locator
from common.appointment import Appointment
from common.constants import LOCATOR_LEN_HEX
from common.config_loader import ConfigLoader
from common.cryptographer import Cryptographer

common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)

# Set params to connect to regtest for testing
DEFAULT_CONF["BTC_RPC_PORT"]["value"] = 18443
DEFAULT_CONF["BTC_NETWORK"]["value"] = "regtest"

bitcoind_connect_params = {k: v["value"] for k, v in DEFAULT_CONF.items() if k.startswith("BTC")}
bitcoind_feed_params = {k: v["value"] for k, v in DEFAULT_CONF.items() if k.startswith("FEED")}
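The two comprehensions above split the nested `DEFAULT_CONF` dict into flat parameter dicts by key prefix. In isolation (with a small made-up subset of the conf for illustration):

```python
# Illustrative subset of a DEFAULT_CONF-style dict: each key maps to a
# metadata dict holding at least a "value" entry.
DEFAULT_CONF = {
    "BTC_RPC_PORT": {"value": 18443},
    "BTC_NETWORK": {"value": "regtest"},
    "FEED_PORT": {"value": 28335},
    "MAX_APPOINTMENTS": {"value": 100},
}

# Keep only keys with a given prefix, unwrapping the "value" field.
bitcoind_connect_params = {k: v["value"] for k, v in DEFAULT_CONF.items() if k.startswith("BTC")}
bitcoind_feed_params = {k: v["value"] for k, v in DEFAULT_CONF.items() if k.startswith("FEED")}
# bitcoind_connect_params → {"BTC_RPC_PORT": 18443, "BTC_NETWORK": "regtest"}
# bitcoind_feed_params   → {"FEED_PORT": 28335}
```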


@pytest.fixture(scope="session")
def run_bitcoind():

@@ -54,6 +62,16 @@ def db_manager():
    rmtree("test_db")


@pytest.fixture(scope="module")
def carrier():
    return Carrier(bitcoind_connect_params)


@pytest.fixture(scope="module")
def block_processor():
    return BlockProcessor(bitcoind_connect_params)


def generate_keypair():
    sk = PrivateKey()
    pk = sk.public_key

@@ -84,7 +102,7 @@ def fork(block_hash):

def generate_dummy_appointment_data(real_height=True, start_time_offset=5, end_time_offset=30):
    if real_height:
        current_height = bitcoin_cli().getblockcount()
        current_height = bitcoin_cli(bitcoind_connect_params).getblockcount()

    else:
        current_height = 10

@@ -151,23 +169,7 @@ def generate_dummy_tracker():


def get_config():
    data_folder = os.path.expanduser("~/.teos")
    config = {
        "BTC_RPC_USER": "username",
        "BTC_RPC_PASSWD": "password",
        "BTC_RPC_HOST": "localhost",
        "BTC_RPC_PORT": 8332,
        "BTC_NETWORK": "regtest",
        "FEED_PROTOCOL": "tcp",
        "FEED_ADDR": "127.0.0.1",
        "FEED_PORT": 28332,
        "DATA_FOLDER": data_folder,
        "MAX_APPOINTMENTS": 100,
        "EXPIRY_DELTA": 6,
        "MIN_TO_SELF_DELAY": 20,
        "SERVER_LOG_FILE": data_folder + "teos.log",
        "TEOS_SECRET_KEY": data_folder + "teos_sk.der",
        "DB_PATH": "appointments",
    }
    config_loader = ConfigLoader(".", "teos.conf", DEFAULT_CONF, {})
    config = config_loader.build_config()

    return config

@@ -5,10 +5,11 @@ from time import sleep
from threading import Thread

from teos.api import API
from teos.watcher import Watcher
from teos.responder import Responder
from teos.tools import bitcoin_cli
from teos import HOST, PORT
from teos.watcher import Watcher
from teos.tools import bitcoin_cli
from teos.inspector import Inspector
from teos.responder import Responder
from teos.chain_monitor import ChainMonitor

from test.teos.unit.conftest import (

@@ -18,8 +19,11 @@ from test.teos.unit.conftest import (
    generate_dummy_appointment_data,
    generate_keypair,
    get_config,
    bitcoind_connect_params,
    bitcoind_feed_params,
)


from common.constants import LOCATOR_LEN_BYTES


@@ -33,15 +37,21 @@ config = get_config()


@pytest.fixture(scope="module")
def run_api(db_manager):
def run_api(db_manager, carrier, block_processor):
    sk, pk = generate_keypair()

    watcher = Watcher(db_manager, Responder(db_manager), sk.to_der(), get_config())
    chain_monitor = ChainMonitor(watcher.block_queue, watcher.responder.block_queue)
    responder = Responder(db_manager, carrier, block_processor)
    watcher = Watcher(
        db_manager, block_processor, responder, sk.to_der(), config.get("MAX_APPOINTMENTS"), config.get("EXPIRY_DELTA")
    )

    chain_monitor = ChainMonitor(
        watcher.block_queue, watcher.responder.block_queue, block_processor, bitcoind_feed_params
    )
    watcher.awake()
    chain_monitor.monitor_chain()

    api_thread = Thread(target=API(watcher, config).start)
    api_thread = Thread(target=API(Inspector(block_processor, config.get("MIN_TO_SELF_DELAY")), watcher).start)
    api_thread.daemon = True
    api_thread.start()

@@ -131,7 +141,7 @@ def test_get_all_appointments_responder():
    locators = [appointment["locator"] for appointment in appointments]
    for locator, dispute_tx in locator_dispute_tx_map.items():
        if locator in locators:
            bitcoin_cli().sendrawtransaction(dispute_tx)
            bitcoin_cli(bitcoind_connect_params).sendrawtransaction(dispute_tx)

    # Confirm transactions
    generate_blocks(6)

@@ -173,7 +183,7 @@ def test_request_appointment_watcher(new_appt_data):
def test_request_appointment_responder(new_appt_data):
    # Let's do something similar to what we did with the watcher but now we'll send the dispute tx to the network
    dispute_tx = locator_dispute_tx_map[new_appt_data["appointment"]["locator"]]
    bitcoin_cli().sendrawtransaction(dispute_tx)
    bitcoin_cli(bitcoind_connect_params).sendrawtransaction(dispute_tx)

    r = add_appointment(new_appt_data)
    assert r.status_code == 200

@@ -1,7 +1,6 @@
import pytest

from teos.block_processor import BlockProcessor
from test.teos.unit.conftest import get_random_value_hex, generate_block, generate_blocks, fork
from test.teos.unit.conftest import get_random_value_hex, generate_block, generate_blocks, fork, bitcoind_connect_params


hex_tx = (

@@ -14,19 +13,16 @@ hex_tx = (
)


@pytest.fixture
def best_block_hash():
    return BlockProcessor.get_best_block_hash()


def test_get_best_block_hash(run_bitcoind, best_block_hash):
def test_get_best_block_hash(run_bitcoind, block_processor):
    best_block_hash = block_processor.get_best_block_hash()
    # As long as bitcoind is running (or mocked in this case) we should always a block hash
    assert best_block_hash is not None and isinstance(best_block_hash, str)


def test_get_block(best_block_hash):
def test_get_block(block_processor):
    best_block_hash = block_processor.get_best_block_hash()
    # Getting a block from a block hash we are aware of should return data
    block = BlockProcessor.get_block(best_block_hash)
    block = block_processor.get_block(best_block_hash)

    # Checking that the received block has at least the fields we need
    # FIXME: We could be more strict here, but we'll need to add those restrictions to bitcoind_sim too

@@ -34,75 +30,75 @@ def test_get_block(best_block_hash):
    assert block.get("hash") == best_block_hash and "height" in block and "previousblockhash" in block and "tx" in block


def test_get_random_block():
    block = BlockProcessor.get_block(get_random_value_hex(32))
def test_get_random_block(block_processor):
    block = block_processor.get_block(get_random_value_hex(32))

    assert block is None


def test_get_block_count():
    block_count = BlockProcessor.get_block_count()
def test_get_block_count(block_processor):
    block_count = block_processor.get_block_count()
    assert isinstance(block_count, int) and block_count >= 0


def test_decode_raw_transaction():
def test_decode_raw_transaction(block_processor):
    # We cannot exhaustively test this (we rely on bitcoind for this) but we can try to decode a correct transaction
    assert BlockProcessor.decode_raw_transaction(hex_tx) is not None
    assert block_processor.decode_raw_transaction(hex_tx) is not None


def test_decode_raw_transaction_invalid():
def test_decode_raw_transaction_invalid(block_processor):
    # Same but with an invalid one
    assert BlockProcessor.decode_raw_transaction(hex_tx[::-1]) is None
    assert block_processor.decode_raw_transaction(hex_tx[::-1]) is None


def test_get_missed_blocks():
    target_block = BlockProcessor.get_best_block_hash()
def test_get_missed_blocks(block_processor):
    target_block = block_processor.get_best_block_hash()

    # Generate some blocks and store the hash in a list
    missed_blocks = []
    for _ in range(5):
        generate_block()
        missed_blocks.append(BlockProcessor.get_best_block_hash())
        missed_blocks.append(block_processor.get_best_block_hash())

    # Check what we've missed
    assert BlockProcessor.get_missed_blocks(target_block) == missed_blocks
    assert block_processor.get_missed_blocks(target_block) == missed_blocks

    # We can see how it does not work if we replace the target by the first element in the list
    block_tip = missed_blocks[0]
    assert BlockProcessor.get_missed_blocks(block_tip) != missed_blocks
    assert block_processor.get_missed_blocks(block_tip) != missed_blocks

    # But it does again if we skip that block
    assert BlockProcessor.get_missed_blocks(block_tip) == missed_blocks[1:]
    assert block_processor.get_missed_blocks(block_tip) == missed_blocks[1:]


def test_get_distance_to_tip():
def test_get_distance_to_tip(block_processor):
    target_distance = 5

    target_block = BlockProcessor.get_best_block_hash()
    target_block = block_processor.get_best_block_hash()

    # Mine some blocks up to the target distance
    generate_blocks(target_distance)

    # Check if the distance is properly computed
    assert BlockProcessor.get_distance_to_tip(target_block) == target_distance
    assert block_processor.get_distance_to_tip(target_block) == target_distance


def test_is_block_in_best_chain():
    best_block_hash = BlockProcessor.get_best_block_hash()
    best_block = BlockProcessor.get_block(best_block_hash)
def test_is_block_in_best_chain(block_processor):
    best_block_hash = block_processor.get_best_block_hash()
    best_block = block_processor.get_block(best_block_hash)

    assert BlockProcessor.is_block_in_best_chain(best_block_hash)
    assert block_processor.is_block_in_best_chain(best_block_hash)

    fork(best_block.get("previousblockhash"))
    generate_blocks(2)

    assert not BlockProcessor.is_block_in_best_chain(best_block_hash)
    assert not block_processor.is_block_in_best_chain(best_block_hash)


def test_find_last_common_ancestor():
    ancestor = BlockProcessor.get_best_block_hash()
def test_find_last_common_ancestor(block_processor):
    ancestor = block_processor.get_best_block_hash()
    generate_blocks(3)
    best_block_hash = BlockProcessor.get_best_block_hash()
    best_block_hash = block_processor.get_best_block_hash()

    # Create a fork (forking creates a block if the mock is set by events)
    fork(ancestor)

@@ -111,6 +107,6 @@ def test_find_last_common_ancestor():
    generate_blocks(5)

    # The last common ancestor between the old best and the new best should be the "ancestor"
    last_common_ancestor, dropped_txs = BlockProcessor.find_last_common_ancestor(best_block_hash)
    last_common_ancestor, dropped_txs = block_processor.find_last_common_ancestor(best_block_hash)
    assert last_common_ancestor == ancestor
    assert len(dropped_txs) == 3

@@ -5,6 +5,7 @@ from queue import Queue
from teos.builder import Builder
from teos.watcher import Watcher
from teos.responder import Responder

from test.teos.unit.conftest import (
    get_random_value_hex,
    generate_dummy_appointment,

@@ -12,8 +13,11 @@ from test.teos.unit.conftest import (
    generate_block,
    bitcoin_cli,
    get_config,
    bitcoind_connect_params,
)

config = get_config()


def test_build_appointments():
    appointments_data = {}

@@ -89,8 +93,15 @@ def test_populate_block_queue():
    assert len(blocks) == 0


def test_update_states_empty_list(db_manager):
    w = Watcher(db_manager=db_manager, responder=Responder(db_manager), sk_der=None, config=None)
def test_update_states_empty_list(db_manager, carrier, block_processor):
    w = Watcher(
        db_manager=db_manager,
        block_processor=block_processor,
        responder=Responder(db_manager, carrier, block_processor),
        sk_der=None,
        max_appointments=config.get("MAX_APPOINTMENTS"),
        expiry_delta=config.get("EXPIRY_DELTA"),
    )

    missed_blocks_watcher = []
    missed_blocks_responder = [get_random_value_hex(32)]

@@ -103,13 +114,20 @@ def test_update_states_empty_list(db_manager):
    Builder.update_states(w, missed_blocks_responder, missed_blocks_watcher)


def test_update_states_responder_misses_more(run_bitcoind, db_manager):
    w = Watcher(db_manager=db_manager, responder=Responder(db_manager), sk_der=None, config=get_config())
def test_update_states_responder_misses_more(run_bitcoind, db_manager, carrier, block_processor):
    w = Watcher(
        db_manager=db_manager,
        block_processor=block_processor,
        responder=Responder(db_manager, carrier, block_processor),
        sk_der=None,
        max_appointments=config.get("MAX_APPOINTMENTS"),
        expiry_delta=config.get("EXPIRY_DELTA"),
    )

    blocks = []
    for _ in range(5):
        generate_block()
        blocks.append(bitcoin_cli().getbestblockhash())
        blocks.append(bitcoin_cli(bitcoind_connect_params).getbestblockhash())

    # Updating the states should bring both to the same last known block.
    w.awake()

@@ -120,14 +138,21 @@ def test_update_states_responder_misses_more(run_bitcoind, db_manager):
    assert w.responder.last_known_block == blocks[-1]


def test_update_states_watcher_misses_more(run_bitcoind, db_manager):
def test_update_states_watcher_misses_more(db_manager, carrier, block_processor):
    # Same as before, but data is now in the Responder
    w = Watcher(db_manager=db_manager, responder=Responder(db_manager), sk_der=None, config=get_config())
    w = Watcher(
        db_manager=db_manager,
        block_processor=block_processor,
        responder=Responder(db_manager, carrier, block_processor),
        sk_der=None,
        max_appointments=config.get("MAX_APPOINTMENTS"),
        expiry_delta=config.get("EXPIRY_DELTA"),
    )

    blocks = []
    for _ in range(5):
        generate_block()
        blocks.append(bitcoin_cli().getbestblockhash())
        blocks.append(bitcoin_cli(bitcoind_connect_params).getbestblockhash())

    w.awake()
    w.responder.awake()

@@ -1,6 +1,3 @@
import pytest

from teos.carrier import Carrier
from bitcoind_mock.transaction import create_dummy_transaction
from test.teos.unit.conftest import generate_blocks, get_random_value_hex
from teos.rpc_errors import RPC_VERIFY_ALREADY_IN_CHAIN, RPC_DESERIALIZATION_ERROR

@@ -14,11 +11,6 @@ from teos.rpc_errors import RPC_VERIFY_ALREADY_IN_CHAIN, RPC_DESERIALIZATION_ERR
sent_txs = []


@pytest.fixture(scope="module")
def carrier():
    return Carrier()


def test_send_transaction(run_bitcoind, carrier):
    tx = create_dummy_transaction()

@@ -56,15 +48,15 @@ def test_send_transaction_invalid_format(carrier):
    assert receipt.delivered is False and receipt.reason == RPC_DESERIALIZATION_ERROR


def test_get_transaction():
def test_get_transaction(carrier):
    # We should be able to get back every transaction we've sent
    for tx in sent_txs:
        tx_info = Carrier.get_transaction(tx)
        tx_info = carrier.get_transaction(tx)

        assert tx_info is not None


def test_get_non_existing_transaction():
    tx_info = Carrier.get_transaction(get_random_value_hex(32))
def test_get_non_existing_transaction(carrier):
    tx_info = carrier.get_transaction(get_random_value_hex(32))

    assert tx_info is None

@@ -3,17 +3,16 @@ import time
from queue import Queue
from threading import Thread, Event, Condition

from teos.block_processor import BlockProcessor
from teos.chain_monitor import ChainMonitor

from test.teos.unit.conftest import get_random_value_hex, generate_block
from test.teos.unit.conftest import get_random_value_hex, generate_block, bitcoind_connect_params, bitcoind_feed_params


def test_init(run_bitcoind):
def test_init(run_bitcoind, block_processor):
    # run_bitcoind is started here instead of later on to avoid race conditions while it initializes

    # Not much to test here, just sanity checks to make sure nothing goes south in the future
    chain_monitor = ChainMonitor(Queue(), Queue())
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)

    assert chain_monitor.best_tip is None
    assert isinstance(chain_monitor.last_tips, list) and len(chain_monitor.last_tips) == 0

@@ -27,8 +26,8 @@ def test_init(run_bitcoind):
    assert isinstance(chain_monitor.responder_queue, Queue)


def test_notify_subscribers():
    chain_monitor = ChainMonitor(Queue(), Queue())
def test_notify_subscribers(block_processor):
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)
    # Subscribers are only notified as long as they are awake
    new_block = get_random_value_hex(32)

@@ -42,11 +41,11 @@ def test_notify_subscribers():
    assert chain_monitor.responder_queue.get() == new_block


def test_update_state():
def test_update_state(block_processor):
    # The state is updated after receiving a new block (and only if the block is not already known).
    # Let's start by setting a best_tip and a couple of old tips
    new_block_hash = get_random_value_hex(32)
    chain_monitor = ChainMonitor(Queue(), Queue())
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)
    chain_monitor.best_tip = new_block_hash
    chain_monitor.last_tips = [get_random_value_hex(32) for _ in range(5)]

@@ -63,14 +62,15 @@ def test_update_state():
    assert chain_monitor.best_tip == another_block_hash and new_block_hash == chain_monitor.last_tips[-1]
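The `ChainMonitor.update_state` implementation is not shown in this diff, but the assertion above pins down its behavior: a genuinely new block becomes `best_tip` and the previous tip is appended to `last_tips`. A hypothetical minimal sketch of that state transition (names and return values are assumptions, not the teos code):

```python
class ChainMonitorState:
    # Hypothetical sketch of the tip-update behavior the test above exercises:
    # a new block replaces best_tip and the old best_tip joins last_tips;
    # already-known blocks are ignored.
    def __init__(self):
        self.best_tip = None
        self.last_tips = []

    def update_state(self, block_hash):
        if block_hash == self.best_tip or block_hash in self.last_tips:
            return False  # already known, nothing to do
        self.last_tips.append(self.best_tip)
        self.best_tip = block_hash
        return True
```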


def test_monitor_chain_polling(db_manager):
def test_monitor_chain_polling(db_manager, block_processor):
    # Try polling with the Watcher
    wq = Queue()
    chain_monitor = ChainMonitor(wq, Queue())
    chain_monitor.best_tip = BlockProcessor.get_best_block_hash()
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)
    chain_monitor.best_tip = block_processor.get_best_block_hash()
    chain_monitor.polling_delta = 0.1

    # monitor_chain_polling runs until terminate if set
    polling_thread = Thread(target=chain_monitor.monitor_chain_polling, kwargs={"polling_delta": 0.1}, daemon=True)
    polling_thread = Thread(target=chain_monitor.monitor_chain_polling, daemon=True)
    polling_thread.start()

    # Check that nothing changes as long as a block is not generated

@@ -88,10 +88,10 @@ def test_monitor_chain_polling(db_manager):
    polling_thread.join()


def test_monitor_chain_zmq(db_manager):
    rq = Queue()
    chain_monitor = ChainMonitor(Queue(), rq)
    chain_monitor.best_tip = BlockProcessor.get_best_block_hash()
def test_monitor_chain_zmq(db_manager, block_processor):
    responder_queue = Queue()
    chain_monitor = ChainMonitor(Queue(), responder_queue, block_processor, bitcoind_feed_params)
    chain_monitor.best_tip = block_processor.get_best_block_hash()

    zmq_thread = Thread(target=chain_monitor.monitor_chain_zmq, daemon=True)
    zmq_thread.start()

@@ -106,9 +106,9 @@ def test_monitor_chain_zmq(db_manager):
    assert chain_monitor.responder_queue.empty()


def test_monitor_chain(db_manager):
def test_monitor_chain(db_manager, block_processor):
    # Not much to test here, this should launch two threads (one per monitor approach) and finish on terminate
    chain_monitor = ChainMonitor(Queue(), Queue())
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)

    chain_monitor.best_tip = None
    chain_monitor.monitor_chain()

@@ -131,15 +131,16 @@ def test_monitor_chain(db_manager):
    generate_block()


def test_monitor_chain_single_update(db_manager):
def test_monitor_chain_single_update(db_manager, block_processor):
    # This test tests that if both threads try to add the same block to the queue, only the first one will make it
    chain_monitor = ChainMonitor(Queue(), Queue())
    chain_monitor = ChainMonitor(Queue(), Queue(), block_processor, bitcoind_feed_params)

    chain_monitor.best_tip = None
    chain_monitor.polling_delta = 2

    # We will create a block and wait for the polling thread. Then check the queues to see that the block hash has only
    # been added once.
    chain_monitor.monitor_chain(polling_delta=2)
    chain_monitor.monitor_chain()
    generate_block()

    watcher_block = chain_monitor.watcher_queue.get()

@@ -1,26 +1,27 @@
from binascii import unhexlify

from teos.errors import *
from teos.inspector import Inspector
from common.appointment import Appointment
from teos.block_processor import BlockProcessor
from teos.conf import MIN_TO_SELF_DELAY

from test.teos.unit.conftest import get_random_value_hex, generate_dummy_appointment_data, generate_keypair, get_config

from common.constants import LOCATOR_LEN_BYTES, LOCATOR_LEN_HEX
from common.cryptographer import Cryptographer
from common.logger import Logger

from teos import LOG_PREFIX
from teos.inspector import Inspector
from teos.block_processor import BlockProcessor

import common.cryptographer
from common.logger import Logger
from common.appointment import Appointment
from common.cryptographer import Cryptographer
from common.constants import LOCATOR_LEN_BYTES, LOCATOR_LEN_HEX

from test.teos.unit.conftest import (
    get_random_value_hex,
    generate_dummy_appointment_data,
    generate_keypair,
    bitcoind_connect_params,
    get_config,
)

common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)


inspector = Inspector(get_config())
APPOINTMENT_OK = (0, None)

NO_HEX_STRINGS = [
    "R" * LOCATOR_LEN_HEX,
    get_random_value_hex(LOCATOR_LEN_BYTES - 1) + "PP",
@@ -41,6 +42,11 @@ WRONG_TYPES = [
]
WRONG_TYPES_NO_STR = [[], unhexlify(get_random_value_hex(LOCATOR_LEN_BYTES)), 3.2, 2.0, (), object, {}, object()]

config = get_config()
MIN_TO_SELF_DELAY = config.get("MIN_TO_SELF_DELAY")
block_processor = BlockProcessor(bitcoind_connect_params)
inspector = Inspector(block_processor, MIN_TO_SELF_DELAY)


def test_check_locator():
    # Right appointment type, size and format
@@ -200,7 +206,7 @@ def test_inspect(run_bitcoind):

    # Valid appointment
    locator = get_random_value_hex(LOCATOR_LEN_BYTES)
    start_time = BlockProcessor.get_block_count() + 5
    start_time = block_processor.get_block_count() + 5
    end_time = start_time + 20
    to_self_delay = MIN_TO_SELF_DELAY
    encrypted_blob = get_random_value_hex(64)

@@ -1,27 +1,33 @@
import json
import pytest
import random
from queue import Queue
from uuid import uuid4
from queue import Queue
from shutil import rmtree
from copy import deepcopy
from threading import Thread

from teos.db_manager import DBManager
from teos.responder import Responder, TransactionTracker
from teos.block_processor import BlockProcessor
from teos.chain_monitor import ChainMonitor
from teos.carrier import Carrier
from teos.tools import bitcoin_cli
from teos.db_manager import DBManager
from teos.chain_monitor import ChainMonitor
from teos.responder import Responder, TransactionTracker

from common.constants import LOCATOR_LEN_HEX
from bitcoind_mock.transaction import create_dummy_transaction, create_tx_from_hex
from test.teos.unit.conftest import generate_block, generate_blocks, get_random_value_hex
from test.teos.unit.conftest import (
    generate_block,
    generate_blocks,
    get_random_value_hex,
    bitcoind_connect_params,
    bitcoind_feed_params,
)


@pytest.fixture(scope="module")
def responder(db_manager):
    responder = Responder(db_manager)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue)
def responder(db_manager, carrier, block_processor):
    responder = Responder(db_manager, carrier, block_processor)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue, block_processor, bitcoind_feed_params)
    chain_monitor.monitor_chain()

    return responder
@@ -61,7 +67,7 @@ def create_dummy_tracker_data(random_txid=False, penalty_rawtx=None):
    if random_txid is True:
        penalty_txid = get_random_value_hex(32)

    appointment_end = bitcoin_cli().getblockcount() + 2
    appointment_end = bitcoin_cli(bitcoind_connect_params).getblockcount() + 2
    locator = dispute_txid[:LOCATOR_LEN_HEX]

    return locator, dispute_txid, penalty_txid, penalty_rawtx, appointment_end
@@ -86,21 +92,21 @@ def test_tracker_init(run_bitcoind):
    )


def test_on_sync(run_bitcoind, responder):
def test_on_sync(run_bitcoind, responder, block_processor):
    # We're on sync if we're 1 or less blocks behind the tip
    chain_tip = BlockProcessor.get_best_block_hash()
    assert Responder.on_sync(chain_tip) is True
    chain_tip = block_processor.get_best_block_hash()
    assert responder.on_sync(chain_tip) is True

    generate_block()
    assert Responder.on_sync(chain_tip) is True
    assert responder.on_sync(chain_tip) is True


def test_on_sync_fail(responder):
def test_on_sync_fail(responder, block_processor):
    # This should fail if we're more than 1 block behind the tip
    chain_tip = BlockProcessor.get_best_block_hash()
    chain_tip = block_processor.get_best_block_hash()
    generate_blocks(2)

    assert Responder.on_sync(chain_tip) is False
    assert responder.on_sync(chain_tip) is False


def test_tracker_to_dict():
@@ -147,8 +153,8 @@ def test_tracker_from_dict_invalid_data():
        assert True


def test_init_responder(temp_db_manager):
    responder = Responder(temp_db_manager)
def test_init_responder(temp_db_manager, carrier, block_processor):
    responder = Responder(temp_db_manager, carrier, block_processor)
    assert isinstance(responder.trackers, dict) and len(responder.trackers) == 0
    assert isinstance(responder.tx_tracker_map, dict) and len(responder.tx_tracker_map) == 0
    assert isinstance(responder.unconfirmed_txs, list) and len(responder.unconfirmed_txs) == 0
@@ -156,8 +162,8 @@ def test_init_responder(temp_db_manager):
    assert responder.block_queue.empty()


def test_handle_breach(db_manager):
    responder = Responder(db_manager)
def test_handle_breach(db_manager, carrier, block_processor):
    responder = Responder(db_manager, carrier, block_processor)

    uuid = uuid4().hex
    tracker = create_dummy_tracker()
@@ -176,7 +182,11 @@ def test_handle_breach(db_manager):
    assert receipt.delivered is True


def test_handle_breach_bad_response(responder):
def test_handle_breach_bad_response(db_manager, block_processor):
    # We need a new carrier here, otherwise the transaction will be flagged as previously sent and receipt.delivered
    # will be True
    responder = Responder(db_manager, Carrier(bitcoind_connect_params), block_processor)

    uuid = uuid4().hex
    tracker = create_dummy_tracker()

@@ -262,10 +272,10 @@ def test_add_tracker_already_confirmed(responder):
    assert penalty_txid not in responder.unconfirmed_txs


def test_do_watch(temp_db_manager):
def test_do_watch(temp_db_manager, carrier, block_processor):
    # Create a fresh responder to simplify the test
    responder = Responder(temp_db_manager)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue)
    responder = Responder(temp_db_manager, carrier, block_processor)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue, block_processor, bitcoind_feed_params)
    chain_monitor.monitor_chain()

    trackers = [create_dummy_tracker(penalty_rawtx=create_dummy_transaction().hex()) for _ in range(20)]
@@ -293,7 +303,7 @@ def test_do_watch(temp_db_manager):
    # And broadcast some of the transactions
    broadcast_txs = []
    for tracker in trackers[:5]:
        bitcoin_cli().sendrawtransaction(tracker.penalty_rawtx)
        bitcoin_cli(bitcoind_connect_params).sendrawtransaction(tracker.penalty_rawtx)
        broadcast_txs.append(tracker.penalty_txid)

    # Mine a block
@@ -312,7 +322,7 @@ def test_do_watch(temp_db_manager):
    # Do the rest
    broadcast_txs = []
    for tracker in trackers[5:]:
        bitcoin_cli().sendrawtransaction(tracker.penalty_rawtx)
        bitcoin_cli(bitcoind_connect_params).sendrawtransaction(tracker.penalty_rawtx)
        broadcast_txs.append(tracker.penalty_txid)

    # Mine a block
@@ -321,9 +331,9 @@ def test_do_watch(temp_db_manager):
    assert len(responder.tx_tracker_map) == 0


def test_check_confirmations(db_manager):
    responder = Responder(db_manager)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue)
def test_check_confirmations(db_manager, carrier, block_processor):
    responder = Responder(db_manager, carrier, block_processor)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue, block_processor, bitcoind_feed_params)
    chain_monitor.monitor_chain()

    # check_confirmations checks, given a list of transaction for a block, what of the known penalty transaction have
@@ -378,11 +388,11 @@ def test_get_txs_to_rebroadcast(responder):
    assert txs_to_rebroadcast == list(txs_missing_too_many_conf.keys())


def test_get_completed_trackers(db_manager):
    initial_height = bitcoin_cli().getblockcount()
def test_get_completed_trackers(db_manager, carrier, block_processor):
    initial_height = bitcoin_cli(bitcoind_connect_params).getblockcount()

    responder = Responder(db_manager)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue)
    responder = Responder(db_manager, carrier, block_processor)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue, block_processor, bitcoind_feed_params)
    chain_monitor.monitor_chain()

    # A complete tracker is a tracker that has reached the appointment end with enough confs (> MIN_CONFIRMATIONS)
@@ -417,7 +427,7 @@ def test_get_completed_trackers(db_manager):
    }

    for uuid, tracker in all_trackers.items():
        bitcoin_cli().sendrawtransaction(tracker.penalty_rawtx)
        bitcoin_cli(bitcoind_connect_params).sendrawtransaction(tracker.penalty_rawtx)

    # The dummy appointments have a end_appointment time of current + 2, but trackers need at least 6 confs by default
    generate_blocks(6)
@@ -438,9 +448,9 @@ def test_get_completed_trackers(db_manager):
    assert set(completed_trackers_ids) == set(ended_trackers_keys)


def test_rebroadcast(db_manager):
    responder = Responder(db_manager)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue)
def test_rebroadcast(db_manager, carrier, block_processor):
    responder = Responder(db_manager, carrier, block_processor)
    chain_monitor = ChainMonitor(Queue(), responder.block_queue, block_processor, bitcoind_feed_params)
    chain_monitor.monitor_chain()

    txs_to_rebroadcast = []

@@ -1,17 +1,17 @@
from teos.tools import can_connect_to_bitcoind, in_correct_network, bitcoin_cli

from common.tools import check_sha256_hex_format
from test.teos.unit.conftest import bitcoind_connect_params


def test_in_correct_network(run_bitcoind):
    # The simulator runs as if it was regtest, so every other network should fail
    assert in_correct_network("mainnet") is False
    assert in_correct_network("testnet") is False
    assert in_correct_network("regtest") is True
    assert in_correct_network(bitcoind_connect_params, "mainnet") is False
    assert in_correct_network(bitcoind_connect_params, "testnet") is False
    assert in_correct_network(bitcoind_connect_params, "regtest") is True


def test_can_connect_to_bitcoind():
    assert can_connect_to_bitcoind() is True
    assert can_connect_to_bitcoind(bitcoind_connect_params) is True


# def test_can_connect_to_bitcoind_bitcoin_not_running():
@@ -22,7 +22,7 @@ def test_can_connect_to_bitcoind():

def test_bitcoin_cli():
    try:
        bitcoin_cli().help()
        bitcoin_cli(bitcoind_connect_params).help()
        assert True

    except Exception:

@@ -4,11 +4,19 @@ from shutil import rmtree
from threading import Thread
from coincurve import PrivateKey

from teos import LOG_PREFIX
from teos.carrier import Carrier
from teos.watcher import Watcher
from teos.responder import Responder
from teos.tools import bitcoin_cli
from teos.chain_monitor import ChainMonitor
from teos.responder import Responder
from teos.db_manager import DBManager
from teos.chain_monitor import ChainMonitor
from teos.block_processor import BlockProcessor

import common.cryptographer
from common.logger import Logger
from common.tools import compute_locator
from common.cryptographer import Cryptographer

from test.teos.unit.conftest import (
    generate_blocks,
@@ -16,14 +24,9 @@ from test.teos.unit.conftest import (
    get_random_value_hex,
    generate_keypair,
    get_config,
    bitcoind_feed_params,
    bitcoind_connect_params,
)
from teos.conf import EXPIRY_DELTA, MAX_APPOINTMENTS

import common.cryptographer
from teos import LOG_PREFIX
from common.logger import Logger
from common.tools import compute_locator
from common.cryptographer import Cryptographer

common.cryptographer.logger = Logger(actor="Cryptographer", log_name_prefix=LOG_PREFIX)

@@ -33,6 +36,7 @@ START_TIME_OFFSET = 1
END_TIME_OFFSET = 1
TEST_SET_SIZE = 200

config = get_config()

signing_key, public_key = generate_keypair()

@@ -50,8 +54,22 @@ def temp_db_manager():

@pytest.fixture(scope="module")
def watcher(db_manager):
    watcher = Watcher(db_manager, Responder(db_manager), signing_key.to_der(), get_config())
    chain_monitor = ChainMonitor(watcher.block_queue, watcher.responder.block_queue)
    block_processor = BlockProcessor(bitcoind_connect_params)
    carrier = Carrier(bitcoind_connect_params)

    responder = Responder(db_manager, carrier, block_processor)
    watcher = Watcher(
        db_manager,
        block_processor,
        responder,
        signing_key.to_der(),
        config.get("MAX_APPOINTMENTS"),
        config.get("EXPIRY_DELTA"),
    )

    chain_monitor = ChainMonitor(
        watcher.block_queue, watcher.responder.block_queue, block_processor, bitcoind_feed_params
    )
    chain_monitor.monitor_chain()

    return watcher
@@ -89,9 +107,11 @@ def test_init(run_bitcoind, watcher):
    assert isinstance(watcher.appointments, dict) and len(watcher.appointments) == 0
    assert isinstance(watcher.locator_uuid_map, dict) and len(watcher.locator_uuid_map) == 0
    assert watcher.block_queue.empty()
    assert isinstance(watcher.config, dict)
    assert isinstance(watcher.signing_key, PrivateKey)
    assert isinstance(watcher.block_processor, BlockProcessor)
    assert isinstance(watcher.responder, Responder)
    assert isinstance(watcher.max_appointments, int)
    assert isinstance(watcher.expiry_delta, int)
    assert isinstance(watcher.signing_key, PrivateKey)


def test_add_appointment(watcher):
@@ -120,7 +140,7 @@ def test_add_too_many_appointments(watcher):
    # Any appointment on top of those should fail
    watcher.appointments = dict()

    for _ in range(MAX_APPOINTMENTS):
    for _ in range(config.get("MAX_APPOINTMENTS")):
        appointment, dispute_tx = generate_dummy_appointment(
            start_time_offset=START_TIME_OFFSET, end_time_offset=END_TIME_OFFSET
        )
@@ -160,7 +180,7 @@ def test_do_watch(watcher, temp_db_manager):

    # Broadcast the first two
    for dispute_tx in dispute_txs[:2]:
        bitcoin_cli().sendrawtransaction(dispute_tx)
        bitcoin_cli(bitcoind_connect_params).sendrawtransaction(dispute_tx)

    # After generating enough blocks, the number of appointments should have reduced by two
    generate_blocks(START_TIME_OFFSET + END_TIME_OFFSET)
@@ -169,7 +189,7 @@ def test_do_watch(watcher, temp_db_manager):

    # The rest of appointments will timeout after the end (2) + EXPIRY_DELTA
    # Wait for an additional block to be safe
    generate_blocks(EXPIRY_DELTA + START_TIME_OFFSET + END_TIME_OFFSET)
    generate_blocks(config.get("EXPIRY_DELTA") + START_TIME_OFFSET + END_TIME_OFFSET)

    assert len(watcher.appointments) == 0
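The recurring change in this diff is a move from class-level access (e.g. `BlockProcessor.get_best_block_hash()`, module-level config constants) to instances built from explicit connection and config parameters and passed into collaborators. A minimal sketch of that dependency-injection pattern, using simplified stand-in classes rather than the actual teos API:

```python
# Hypothetical stand-ins illustrating the pattern; not the real teos classes.

class BlockProcessor:
    """Built from explicit connection params instead of being used statically."""

    def __init__(self, connect_params):
        self.connect_params = connect_params

    def get_best_block_hash(self):
        # Placeholder: the real implementation would query bitcoind.
        return "deadbeef" * 8


class Responder:
    """Receives its collaborators instead of constructing them internally."""

    def __init__(self, db_manager, carrier, block_processor):
        self.db_manager = db_manager
        self.carrier = carrier
        self.block_processor = block_processor

    def on_sync(self, chain_tip):
        # Now an instance method: compares against the injected processor's tip.
        return chain_tip == self.block_processor.get_best_block_hash()


bitcoind_connect_params = {"host": "localhost", "port": 18443}  # illustrative values
block_processor = BlockProcessor(bitcoind_connect_params)
responder = Responder(db_manager=None, carrier=None, block_processor=block_processor)
assert responder.on_sync(block_processor.get_best_block_hash())
```

Besides enabling the config-file changes, this lets the tests swap in per-test fixtures (`carrier`, `block_processor`) rather than sharing hidden class state.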