lightningd: simplify peer destruction.

We have to do a dance when we get a reconnect in openingd, because we
don't normally expect to free both the owner and the peer.  That dance
is a layering violation: freeing a peer should itself clean up the
owner's pointer to it, avoiding the double free and letting us
eliminate the dance.
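
As a rough sketch of that layering (hypothetical structs and helpers,
not the actual lightningd code): the peer's teardown clears its owner's
back-pointer, so the owner can be freed afterwards without touching a
stale peer.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative only: these types and field names are made up. */
    struct owner;

    struct peer {
        struct owner *owner;    /* subdaemon currently owning this peer */
    };

    struct owner {
        struct peer *peer;      /* peer it serves, or NULL */
    };

    /* Freeing a peer clears the owner's pointer to it, so a later free
     * of the owner cannot double-free the peer. */
    static void free_peer(struct peer *peer)
    {
        if (peer->owner)
            peer->owner->peer = NULL;
        free(peer);
    }

    static void free_owner(struct owner *owner)
    {
        /* Just detach: the peer outlives its owner on reconnect. */
        if (owner->peer)
            owner->peer->owner = NULL;
        free(owner);
    }

    int main(void)
    {
        struct owner *o = calloc(1, sizeof(*o));
        struct peer *p = calloc(1, sizeof(*p));
        o->peer = p;
        p->owner = o;

        /* Reconnect case: the peer goes away first, then its owner. */
        free_peer(p);
        free_owner(o);  /* safe: o->peer was reset by free_peer() */
        printf("freed both without a double free\n");
        return 0;
    }

The point of the sketch is only the ownership direction: teardown
responsibility sits with the peer, so callers never need a
reconnect-time dance that frees both sides in a particular order.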

The free order is now different, and the test_reconnect_openingd test
was overprecise about the log messages it expected.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Author: Rusty Russell
Date: 2017-10-11 20:30:50 +10:30
Committed-by: Christian Decker
Parent: 61786b9c90
Commit: 871d0b1d74
3 changed files with 19 additions and 15 deletions

@@ -1250,25 +1250,22 @@ class LightningDTests(BaseLightningDTests):
         tx = l1.bitcoin.rpc.getrawtransaction(txid)
         l1.rpc.addfunds(tx)
 
-        # It closes on us, we forget about it.
+        # l2 closes on l1, l1 forgets.
         self.assertRaises(ValueError, l1.rpc.fundchannel, l2.info['id'], 20000)
         assert l1.rpc.getpeer(l2.info['id']) == None
 
         # Reconnect.
         l1.rpc.connect('localhost', l2.info['port'], l2.info['id'])
 
-        # Truncate (hack to release old openingd).
-        with open(os.path.join(l2.daemon.lightning_dir, 'dev_disconnect'), "w"):
-            pass
-
         # We should get a message about old one exiting.
-        l2.daemon.wait_for_log('Subdaemon lightning_openingd died')
+        l2.daemon.wait_for_log('Peer has reconnected, state OPENINGD')
+        l2.daemon.wait_for_log('Owning subdaemon lightning_openingd died')
 
         # Should work fine.
         l1.rpc.fundchannel(l2.info['id'], 20000)
         l1.daemon.wait_for_log('sendrawtx exit 0')
 
-        # Just to be sure, second openingd should die too.
+        # Just to be sure, second openingd hand over to channeld.
         l2.daemon.wait_for_log('Subdaemon lightning_openingd died \(0\)')
 
     def test_reconnect_normal(self):