Data used to be stored both in memory and on disk (db). This commit modifies the Watcher, Responder and Cleaner so they only keep the needed maps in memory and load information from disk when necessary.
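A minimal sketch of the idea (not the actual teos code): only a lightweight index stays in memory and the full record is read from the database on demand. `db_manager` and `load_appointment` are hypothetical names used for illustration.

```python
class Watcher:
    def __init__(self, db_manager):
        self.db_manager = db_manager
        self.locator_uuid_map = {}  # only the lightweight index stays in memory

    def get_appointment(self, uuid):
        # Hit the disk (db) only when the full appointment data is needed
        return self.db_manager.load_appointment(uuid)
```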
ChainMonitor is the actor in charge of checking for new blocks. It improves on the previous zmq_subscriber by also polling, therefore making sure that no blocks are missed.
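A rough sketch of the polling side, assuming a `get_best_block_hash()` helper that queries bitcoind; the real ChainMonitor also listens on zmq, and both paths can feed the same notification method so duplicate tips are ignored. Names and the polling delta are assumptions.

```python
import threading
import time


class ChainMonitor:
    def __init__(self, polling_delta=60):
        self.polling_delta = polling_delta
        self.last_tip = None
        self.lock = threading.Lock()

    def notify(self, block_hash):
        # Called by both the zmq callback and the polling loop
        with self.lock:
            if block_hash != self.last_tip:
                self.last_tip = block_hash
                # ... queue the block for the Watcher / Responder

    def monitor_chain_polling(self, get_best_block_hash):
        while True:
            self.notify(get_best_block_hash())
            time.sleep(self.polling_delta)
```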
Documentation and tests are still required. Partially covers #31
get_missed_blocks is always called after calling find_last_common_ancestor, so the latter was being called twice, returning the same value as its input. There does not seem to be any reason for that atm.
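A toy illustration of the redundancy; the function names come from the commit text, their bodies here are stand-ins.

```python
def find_last_common_ancestor(block_hash):
    return block_hash  # stand-in: last block both chains share


def get_missed_blocks(ancestor_hash):
    return []  # stand-in: blocks mined after the ancestor


last_common_ancestor = find_last_common_ancestor("tip_hash")

# Before: the ancestor was recomputed inside the call, yielding the same value.
missed = get_missed_blocks(find_last_common_ancestor(last_common_ancestor))

# After: the already-computed value is passed directly.
missed = get_missed_blocks(last_common_ancestor)
```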
The API was never made an object since I couldn't find a way of working around the Flask decorators.
By using dispatch we can get around the issues in #14 and will be able to create better mocks for the API
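A minimal sketch of what the dispatch-based approach could look like: instead of decorating module-level functions with `@app.route`, routes are registered against the methods of an API object via Flask's `add_url_rule`, which makes the object easy to instantiate and mock in tests. The class and route names are illustrative, not the real API.

```python
from flask import Flask, jsonify


class API:
    def __init__(self):
        self.app = Flask(__name__)
        # Register the bound method as the view function instead of using a decorator
        self.app.add_url_rule("/add_appointment", view_func=self.add_appointment, methods=["POST"])

    def add_appointment(self):
        return jsonify({"result": "ok"})


if __name__ == "__main__":
    API().app.run()
```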
Appointment serialization used to be part of the Cryptographer (signature_format), but it makes more sense for it to be an Appointment method. Therefore the cli also needs Appointment.
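A hedged sketch of the move: serialization lives on the appointment itself rather than in a Cryptographer helper. The field names and JSON encoding below are assumptions; only the placement of the method follows the commit.

```python
import json


class Appointment:
    def __init__(self, locator, encrypted_blob, to_self_delay):
        self.locator = locator
        self.encrypted_blob = encrypted_blob
        self.to_self_delay = to_self_delay

    def serialize(self):
        # Deterministic byte representation used when signing the appointment
        return json.dumps(
            {
                "locator": self.locator,
                "encrypted_blob": self.encrypted_blob,
                "to_self_delay": self.to_self_delay,
            },
            sort_keys=True,
        ).encode("utf-8")
```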
Also fixes comments based on reviews
The Watcher used to receive a secret key file path in the __init__ to load a secret key for signing. That made testing the Watcher hard, since the file needed to be present. Changing it so the main (pisad) loads the file from disk and passes the data to the Watcher on init.
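A sketch of the change, assuming illustrative names: the caller (pisad) reads the key file and hands the raw bytes to the Watcher, so tests can pass any byte string without touching disk.

```python
class Watcher:
    def __init__(self, signing_key_der):
        self.signing_key_der = signing_key_der  # raw key data, not a file path


def load_key(path):
    with open(path, "rb") as f:
        return f.read()


# In pisad (main), with a hypothetical key file name:
# watcher = Watcher(load_key("pisa_sk.der"))
```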
get_txs_to_rebroadcast was being triggered based on received transactions instead of stored txs. Fixing that.
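A toy sketch of the intended data flow (the structures are simplified stand-ins): rebroadcast candidates come from the transactions the Responder already stores, not from whatever arrived in the latest block.

```python
def get_txs_to_rebroadcast(stored_txs, unconfirmed_txs):
    # Only transactions we are tracking and that remain unconfirmed qualify
    return [txid for txid in stored_txs if txid in unconfirmed_txs]
```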
Some of the names in the Responder were poorly picked (not descriptive enough). Tries to fix that.
The ``Job`` class has been renamed to ``TransactionTracker``.
``add_response`` has been renamed to ``handle_breach`` and ``create_job`` to ``add_tracker``.
All the variables that had `job` in them have already been updated.
We were passing some unnecessary parameters to the Cleaner (locator) that could be derived from other data (uuid and appointments). Also standardises the order of the parameters to match the rest of the methods.
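A rough sketch of the simplification, with method and field names that are approximations rather than the real ones: the locator is looked up from the appointment data instead of being passed in, and the parameter order mirrors the other Cleaner methods.

```python
class Cleaner:
    @staticmethod
    def delete_completed_appointment(uuid, appointments, locator_uuid_map, db_manager):
        # Derive the locator from the stored appointment instead of taking it as a parameter
        locator = appointments[uuid].get("locator")

        appointments.pop(uuid)
        if locator in locator_uuid_map:
            locator_uuid_map[locator].remove(uuid)
            if not locator_uuid_map[locator]:
                locator_uuid_map.pop(locator)

        # Hypothetical db call: remove the on-disk copy as well
        db_manager.delete_watcher_appointment(uuid)
```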