Duplicate connections to some onion peers #1408

Closed
opened 2018-12-13 13:43:24 +01:00 by g1itch · 12 comments
g1itch commented 2018-12-13 13:43:24 +01:00 (Migrated from github.com)

The description is based on messages from `[chan] bitmessage`, but I saw it myself a couple of weeks ago (I wrongly thought it was related to my #1394).

![image](https://user-images.githubusercontent.com/4012700/49939372-56add980-fee5-11e8-8f6c-de2f83123ebd.png)

Changes proposed today:

```patch
diff --git a/src/network/connectionpool.py b/src/network/connectionpool.py
index e599cdf..c5ba701 100644
--- a/src/network/connectionpool.py
+++ b/src/network/connectionpool.py
@@ -93,7 +93,7 @@ class BMConnectionPool(object):
                     del self.inboundConnections[connection.destination.host]
                 except KeyError:
                     pass
-        connection.close()
+        connection.handle_close()
 
     def getListeningIP(self):
         if BMConfigParser().safeGet("bitmessagesettings", "onionhostname").endswith(".onion"):
```
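For context on why this one-line change could matter: in `asyncore`, `close()` just tears down the socket and removes the dispatcher from the socket map, while `handle_close()` is the hook subclasses override to also release their own state. A minimal sketch of that distinction, with illustrative names only (not the real `AdvancedDispatcher`):

```python
import asyncore


# Minimal sketch, assuming the pool's connections follow asyncore's
# dispatcher model (PyBitmessage's network layer is asyncore-based).
# The write_buf attribute is illustrative, not real AdvancedDispatcher state.
class SketchConnection(asyncore.dispatcher):
    def __init__(self, sock=None):
        asyncore.dispatcher.__init__(self, sock)
        self.write_buf = b''

    def handle_close(self):
        # The overridable hook: release protocol state, *then* close.
        # Calling close() directly bypasses this method, so the socket
        # dies but the connection's own bookkeeping is never cleaned up.
        self.write_buf = b''
        self.close()  # removes the dispatcher from the asyncore socket map
```

If that reading is right, routing `removeConnection()` through `handle_close()` lets the connection object finish its own teardown instead of being closed out from under it.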

stman commented 2018-12-13 14:00:13 +01:00 (Migrated from github.com)

Are you saying this was not an attack on Bitmessage to sandbox users?

g1itch commented 2018-12-13 14:25:11 +01:00 (Migrated from github.com)

> Are you saying this was not an attack on Bitmessage to sandbox users?

I don't know

g1itch commented 2018-12-13 15:50:22 +01:00 (Migrated from github.com)

My own screenshot with the same peer:
![image](https://user-images.githubusercontent.com/4012700/49946100-ed36c680-fef6-11e8-9bd6-c020fbf311c1.png)

And with another peer, from 11.28:
![image](https://user-images.githubusercontent.com/4012700/49946162-12c3d000-fef7-11e8-8b16-2fd7d880cfbb.png)

stman commented 2018-12-13 17:10:08 +01:00 (Migrated from github.com)

Have you implemented a mechanism that selects the best-rated peers and connects to them first?

Because if that is the case, I think this could be a sandboxing attack.

PeterSurda commented 2018-12-15 10:19:38 +01:00 (Migrated from github.com)

@stman The rating adjusts dynamically and influences the likelihood that the peer will be retried in the future. The client isn't supposed to connect to the same peer multiple times; that's a bug. It could also be a UI bug, erroneously forgetting to remove an entry from the list upon disconnect. Looking at @g1itch's patch, I think it's probably the UI. I haven't had time to investigate yet, but I also saw something similar happening on my local client.
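To illustrate the behaviour described here (an illustrative sketch only; the names and weighting below are assumptions, not PyBitmessage's actual rating code): a rating skews *which* known peer gets tried next, while peers already holding a connection slot are excluded outright.

```python
import random


def pick_peer(known_nodes, current_connections):
    """known_nodes maps peer -> rating in [-1.0, 1.0]; hypothetical helper."""
    # Never consider a peer we already have a connection to.
    candidates = [p for p in known_nodes if p not in current_connections]
    if not candidates:
        return None
    # Shift ratings to positive weights so higher-rated peers win more often.
    weights = [(known_nodes[p] + 1.0) / 2.0 + 0.01 for p in candidates]
    r = random.uniform(0, sum(weights))
    for peer, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return peer
    return candidates[-1]
```

With an exclusion like the `candidates` filter above, the same onion peer could never occupy two outbound slots at once, which is why duplicates point at either the pool's bookkeeping or the UI list rather than the selection logic itself.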

stman commented 2018-12-15 14:08:57 +01:00 (Migrated from github.com)

Yep.

I think it must be seriously investigated. In my view, this mechanism of preferentially connecting to the best peers, even randomly, is at the root of the trick exploited to sandbox me and several other users and to silently sabotage Bitmessage's peer ring.

stman commented 2018-12-15 15:39:57 +01:00 (Migrated from github.com)

Peter, I suggest a small conference call on Mumble to explain to anyone interested how, in my view, the sandboxing could be operating.

g1itch commented 2018-12-15 17:55:50 +01:00 (Migrated from github.com)

Personally, I doubt that the proposed patch solves the issue. To test it, I added logging in `network.connectionpool`:

```patch
diff --git a/src/network/connectionpool.py b/src/network/connectionpool.py
index e599cdfb..5444e736 100644
--- a/src/network/connectionpool.py
+++ b/src/network/connectionpool.py
@@ -253,6 +253,7 @@ class BMConnectionPool(object):
         for i in self.inboundConnections.values() + self.outboundConnections.values() + self.listeningSockets.values() + self.udpSockets.values():
             if not (i.accepting or i.connecting or i.connected):
                 reaper.append(i)
+                logger.warning('Reap connection to %s', i.destination)
             else:
                 try:
                     if i.state == "close":
```

Today I noticed a strange log message:

```
2018-12-14 18:02:49,870 - WARNING - Reap connection to Peer(host='', port=8444)
```

But when the duplicate connections appear, there is no such message in the log.
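If it helps interpret those two observations: both are consistent with the reaper only catching sockets that are fully down. A toy reproduction of the condition (the `Peer` namedtuple mirrors the one PyBitmessage defines, in `state.py` if memory serves; the stub connection is purely hypothetical):

```python
from collections import namedtuple

# Hedged sketch of the reaper condition in the patch above.
Peer = namedtuple('Peer', ['host', 'port'])


class StubConnection(object):
    def __init__(self, destination, connected):
        self.destination = destination
        self.accepting = False
        self.connecting = False
        self.connected = connected


# A half-initialised connection matches the reaper test (and the odd
# empty-host log line) ...
dead = StubConnection(Peer(host='', port=8444), connected=False)
# ... but a duplicate that still reports connected=True never matches,
# consistent with no "Reap connection" message when duplicates appear.
dup = StubConnection(Peer(host='example.onion', port=8444), connected=True)

for conn in (dead, dup):
    if not (conn.accepting or conn.connecting or conn.connected):
        print('Reap connection to %s' % (conn.destination,))
```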

stman commented 2018-12-15 18:26:06 +01:00 (Migrated from github.com)

I insist: we need to talk through Mumble. In less than 30 minutes we will collectively solve this issue for good. And we must solve this issue.

g1itch commented 2018-12-16 18:08:13 +01:00 (Migrated from github.com)

I confirm that it can be reproduced by adding `trustedpeer = <thatpeer.onion>:8444` to keys.dat.
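For anyone else reproducing this: `trustedpeer` lives in the `[bitmessagesettings]` section of keys.dat. A sketch of the fragment (the onion address below is a placeholder, not the actual peer):

```
[bitmessagesettings]
trustedpeer = exampleonionaddr.onion:8444
```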

stman commented 2018-12-16 21:10:43 +01:00 (Migrated from github.com)

Ok then, I'm taking a few minutes to write down my sandboxing scenario.

Of course, all of these are just assumptions, and some close variants must also be considered.

It is a multistage attack scenario.

  • The attacker wishing to sandbox others creates a bunch of BM nodes in VMs, preferably with onion addresses, which allows duplication much more easily than with real IP addresses. He ensures very good connectivity to the network, modifying the source code to accept far more than 8 connections and to avoid his own peers, so that he always has the most synchronized peers on the network, therefore gaining the best ratings from all other peers.

  • He keeps his system running for a sufficient period of time to ensure that most of the peers connecting to the BM network will learn the addresses of his peers, with the highest rating possible.

Let's summarize this first attack stage:

  • Creation of many fake peers (arbitrarily, 10 times more than the current real number of distinct Bitmessage users), preferably using onion addresses (easier, fully scalable), and ensuring they all get the best rating by giving those fake peers good internet bandwidth and keeping them running 24/7.

The second stage of the attack is the following:

  • The attacker takes advantage of the fact that PyBitmessage is the most widely used client.

  • The attacker takes advantage of the fact that this client rates peers and now connects in priority to the highest-rated peers.

  • The attacker takes advantage of the fact that the latest versions of PyBitmessage limit the number of outgoing connections to 8, and that most users don't leave their setup running 24/7, so they mostly rely on outgoing connections to reach the network.

  • Under such conditions, the probability that a targeted user like me, connecting casually to the network, gets connected first to the attacker's peers is therefore very high.

  • By modifying the source code of his fake peers, the attacker can easily implement a mechanism to detect when a targeted user is exclusively connected to his fake peers. This is possible thanks to the peer authentication mechanism; all the attacker wishing to sandbox a user has to do is modify the PyBitmessage source code and add a special communication channel between his fake nodes, to know when all 8 connections from the targeted peer have landed on his attacker peer ring. He just has to build a kind of hypervisor over all his running peers.

Doing so, he can know when targeted peers are under his control, because they are exclusively connected to his peers; none of the targeted users are connected to each other, and they all go through his peer ring to communicate.

From there, it is fucking easy for him to make his fake peer ring filter messages on demand, and to start slowly filtering out most messages coming from the real Bitmessage network.

And he’s done. He can isolate selected peers on demand.

This is why a motherfucking guy once told me I should preferably connect exclusively through Tor; I remember that. I asked him: but what if one is entrapped in a fake peer ring? He never answered me (in my head, that raises an alarm, you know).

FUCK JEFF BEZOS, THE CIA, THE NSA and their fucking transnational
corruption ring.

g1itch commented 2018-12-18 17:04:27 +01:00 (Migrated from github.com)

Hmm, I confirm that the patch from `[chan] bitmessage` solves the issue (at least when I reproduce it with `trustedpeer` in the config). Debug:

```
2018-12-18 17:54:43,476 - DEBUG - Outbound proxy connection to t7hb2h5gvudfht6u.onion:8444
2018-12-18 17:54:55,509 - DEBUG - remoteProtocolVersion: 3
2018-12-18 17:54:55,510 - DEBUG - services: 0x00000000
2018-12-18 17:54:55,510 - DEBUG - time offset: -6
2018-12-18 17:54:55,511 - DEBUG - my external IP: 127.0.0.1
2018-12-18 17:54:55,511 - DEBUG - remote node incoming address: t7hb2h5gvudfht6u.onion:8444
2018-12-18 17:54:55,511 - DEBUG - user agent: /PyBitmessage:0.6.3.2/
2018-12-18 17:54:55,512 - DEBUG - streams: [1]
2018-12-18 17:54:55,512 - DEBUG - Initial skipping processing getdata for 14.78s
2018-12-18 17:54:55,605 - DEBUG - Sending huge inv message with 26208 objects to just this one peer
2018-12-18 17:54:58,270 - DEBUG - Bad checksum, ignoring
2018-12-18 17:54:58,271 - DEBUG - Closing due to invalid command close
2018-12-18 17:54:59,486 - DEBUG - Outbound proxy connection to t7hb2h5gvudfht6u.onion:8444
2018-12-18 17:55:11,500 - DEBUG - remoteProtocolVersion: 3
2018-12-18 17:55:11,501 - DEBUG - services: 0x00000000
2018-12-18 17:55:11,501 - DEBUG - time offset: -6
2018-12-18 17:55:11,502 - DEBUG - my external IP: 127.0.0.1
2018-12-18 17:55:11,502 - DEBUG - remote node incoming address: t7hb2h5gvudfht6u.onion:8444
2018-12-18 17:55:11,502 - DEBUG - user agent: /PyBitmessage:0.6.3.2/
2018-12-18 17:55:11,503 - DEBUG - streams: [1]
2018-12-18 17:55:11,503 - DEBUG - Initial skipping processing getdata for 14.79s
2018-12-18 17:55:11,617 - DEBUG - Sending huge inv message with 26207 objects to just this one peer
2018-12-18 17:55:14,146 - DEBUG - Bad checksum, ignoring
2018-12-18 17:55:14,147 - DEBUG - Closing due to invalid command close
2018-12-18 17:55:15,504 - DEBUG - Outbound proxy connection to t7hb2h5gvudfht6u.onion:8444
...
```
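For what it's worth, the loop above (connect, handshake, huge inv, bad checksum, close, reconnect) suggests a new outbound channel gets opened while the previous one is still registered. Whatever the final fix does internally, the invariant it has to restore is roughly this sketch (attribute names mirror `BMConnectionPool`'s dictionaries from the patches quoted earlier in this issue; the helper itself is hypothetical, not the actual patch):

```python
# Hypothetical guard, not the actual fix: refuse a new outbound channel
# while the pool still holds any connection to that destination.
# outboundConnections appears to be keyed by Peer and inboundConnections
# by host, as in the removeConnection() code at the top of this issue.
def may_connect(pool, destination):
    if destination in pool.outboundConnections:
        return False  # an outbound channel to this peer still exists
    if destination.host in pool.inboundConnections:
        return False  # the peer already holds an inbound slot
    return True
```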
Reference: Bitmessage/PyBitmessage-2024-08-21#1408