- track pending hashId more accurately
- add a timeout and cleanup so that the download queues don't get
  stuck and memory is freed
- randomise download order (only works for inv commands with
more than 1 entry)
- replace PendingDownload singleton dict with a Queue
- total memory and CPU requirements should be reduced
- get rid of someObjectsOfWhichThisRemoteNodeIsAlreadyAware. It has very
  little practical effect and only uses memory
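
A minimal sketch of the reworked download bookkeeping described above (class
and constant names are hypothetical, not the actual PyBitmessage code): a
queue of pending hashes with a timeout-based cleanup and randomised request
order.

    import random
    import time
    from collections import OrderedDict

    REQUEST_TIMEOUT = 60  # assumed timeout before a stuck request is dropped

    class DownloadQueue(object):
        """Tracks object hashes we still have to download."""

        def __init__(self):
            self.pending = OrderedDict()  # hashId -> time request was sent, or None

        def add(self, hash_ids):
            for hash_id in hash_ids:
                self.pending.setdefault(hash_id, None)

        def next_batch(self, count=100):
            """Pick up to `count` not-yet-requested hashes in random order."""
            candidates = [h for h, sent in self.pending.items() if sent is None]
            random.shuffle(candidates)
            batch = candidates[:count]
            now = time.time()
            for hash_id in batch:
                self.pending[hash_id] = now
            return batch

        def cleanup(self):
            """Drop timed-out requests so the queue can't get stuck and memory is freed."""
            now = time.time()
            for hash_id, sent in list(self.pending.items()):
                if sent is not None and now - sent > REQUEST_TIMEOUT:
                    del self.pending[hash_id]
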
- if there are too many nodes, delete only the oldest ones in bootstrap
  provider mode; in normal mode, ignore new nodes as before
- in bootstrap provider mode, penalise nodes announced by others by 1
day instead of 3 hours
- if knownNodes grows to 20000, instead of ignoring new nodes, forget
the 1000 oldest ones
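
To illustrate the knownNodes trimming (a simplified sketch, assuming
knownNodes maps a peer to its last-seen timestamp; the real structure is
keyed per stream):

    MAX_KNOWN_NODES = 20000   # threshold mentioned above
    NODES_TO_FORGET = 1000    # how many of the oldest entries to drop

    def trim_known_nodes(known_nodes):
        if len(known_nodes) < MAX_KNOWN_NODES:
            return
        # drop the oldest entries instead of ignoring newly announced nodes
        oldest = sorted(known_nodes, key=known_nodes.get)[:NODES_TO_FORGET]
        for peer in oldest:
            del known_nodes[peer]
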
- drop connection after sendaddr if too many connections, even if it's
an outbound one
- if the maximum number of total connections is lower than the maximum
  number of outbound connections, activate bootstrap provider mode
- in this mode, check all addresses received before announcing them
- so basically it only announces those addresses it successfully
connected to
- maxtotalconnections = maximum number of total full connections
(incoming + outgoing) the node will allow. Default 200 as it was.
- maxbootstrapconnections = number of additional connections (on top of
  the total) that will act in bootstrap mode, closing after sending the
  list of addresses. Default 20 as it was.
- maxaddrperstreamsend = initial address list maximum size, per
participating stream. Default 500. Child streams get half. The
response is chunked into pieces of max. 1000 addresses as that's the
protocol limit.
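
A sketch of how maxaddrperstreamsend could be applied (function names are
illustrative, not the actual implementation): participating streams get the
full per-stream limit, child streams half of it, and the resulting address
list is chunked to the 1000-entry protocol limit.

    MAX_ADDR_PER_MESSAGE = 1000   # protocol limit per addr message

    def per_stream_limit(maxaddrperstreamsend, is_child_stream):
        """Child streams get half of the configured per-stream limit."""
        return maxaddrperstreamsend // 2 if is_child_stream else maxaddrperstreamsend

    def chunked(addresses, size=MAX_ADDR_PER_MESSAGE):
        """Split the response into pieces of at most 1000 addresses."""
        for i in range(0, len(addresses), size):
            yield addresses[i:i + size]
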
- rearranged code to reduce cyclic dependencies
- doCleanShutdown is separated out into shutdown.py
- shared queues are separated out into queues.py (a simplified sketch
  follows below)
- some default values were moved to defaults.py
- knownnodes partially moved to knownnodes.py
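
For illustration, the separated queues.py boils down to a module that merely
instantiates the shared queues so other modules can import them without
pulling in the rest of shared.py (a simplified sketch; the actual module may
define more, or differently named, queues):

    # queues.py (sketch)
    import Queue

    workerQueue = Queue.Queue()
    UISignalQueue = Queue.Ueue() if False else Queue.Queue()  # shared UI signal queue
    addressGeneratorQueue = Queue.Queue()
    objectProcessorQueue = Queue.Queue()
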
- complete the version and SSL handshake first, and only then feed
errors into the stream and close connection
- this allows more accurate error handling on both sides
- also the timeOffset error trigger is now more accurate, but requires
more nodes to upgrade
- version command sends list of all participating streams
- biginv sends lists of hosts for all streams the peer wants (plus
immediate children)
- objects will spread to all peers that advertise the associated stream
- please note these are just network subsystem adjustments, streams
aren't actually usable yet
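
A rough sketch of the stream handling above (hypothetical helper, assuming
Bitmessage's usual numbering where the children of stream n are 2n and
2n + 1): biginv covers the streams the peer announced in its version message
plus their immediate children.

    def hosts_for_biginv(peer_streams, known_nodes_by_stream):
        """Collect known hosts for every stream the peer wants, plus children."""
        wanted = set()
        for stream in peer_streams:
            wanted.update((stream, 2 * stream, 2 * stream + 1))
        hosts = []
        for stream in wanted:
            hosts.extend(known_nodes_by_stream.get(stream, []))
        return hosts
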
- queues were too short
- some error handling was missing
- remove nonblocking repeats in receive data thread
- singleCleaner shouldn't wait unnecessarily
- Missing renamed to PendingDownload
- PendingDownload now only retries 3 times rather than 6 to download an
  object
- Added PendingUpload, replacing invQueueSize
- PendingUpload has both a "len" method (number of objects not yet
  uploaded) and a "progress" method, which returns a float from 0
  (nothing done) to 1 (all uploaded) and considers not only the objects
  but also how many nodes they have been uploaded to
- PendingUpload tracks when an object has actually been uploaded to the
  remote node instead of just assuming it an arbitrary time after the
  corresponding "inv" has been sent
- Network status tab's "Objects to be synced" shows the sum of
PendingUpload and PendingDownload sizes
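
A minimal sketch of what PendingUpload's two methods express (names and the
exact progress formula are illustrative, not the actual implementation):

    class PendingUpload(object):
        def __init__(self):
            # hashId -> set of peers that still need the object
            self.hashes = {}

        def add(self, hash_id, peers):
            self.hashes[hash_id] = set(peers)

        def done(self, hash_id, peer):
            """Called once the object has actually been sent to that node."""
            peers = self.hashes.get(hash_id)
            if peers is None:
                return
            peers.discard(peer)
            if not peers:
                del self.hashes[hash_id]

        def __len__(self):
            # number of objects that have not been fully uploaded yet
            return len(self.hashes)

        def progress(self, total_peers):
            """0.0 = nothing done, 1.0 = every object uploaded to every peer."""
            if not self.hashes or total_peers == 0:
                return 1.0
            remaining = sum(len(peers) for peers in self.hashes.values())
            return 1.0 - float(remaining) / (len(self.hashes) * total_peers)
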
- remember what was requested from which node
- remember if it was received
- re-request object if we haven't received any new object for more than
a minute
- tries to avoid calling senddata if it would block receiveDataThread,
  allowing for more asynchronous operation
- request objects in chunks of 100 (CPU performance optimisation)
- moved logic into a Missing singleton
- shouldn't try to download duplicates anymore, only requests a hash
once every 5 minutes and not from the same host
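
The bookkeeping above amounts to something like the following sketch
(hypothetical names; the constants are the ones mentioned in this list):

    import random
    import time

    REQUEST_CHUNK = 100    # getdata is sent in chunks of 100 hashes
    RETRY_INTERVAL = 300   # a hash is asked for at most once every 5 minutes
    STALL_TIMEOUT = 60     # re-request if nothing new arrived for a minute

    class MissingObjects(object):
        def __init__(self):
            self.requests = {}               # hashId -> (peer, time requested)
            self.last_object_time = time.time()

        def pick(self, peer, wanted_hashes):
            """Choose up to 100 hashes this peer may be asked for."""
            now = time.time()
            stalled = now - self.last_object_time > STALL_TIMEOUT
            eligible = []
            for hash_id in wanted_hashes:
                previous = self.requests.get(hash_id)
                if previous is None:
                    eligible.append(hash_id)
                    continue
                prev_peer, asked_at = previous
                # don't re-ask the same host and honour the retry interval,
                # unless the whole download has stalled for over a minute
                if stalled or (prev_peer != peer and now - asked_at > RETRY_INTERVAL):
                    eligible.append(hash_id)
            random.shuffle(eligible)
            chunk = eligible[:REQUEST_CHUNK]
            for hash_id in chunk:
                self.requests[hash_id] = (peer, now)
            return chunk

        def received(self, hash_id):
            """Remember that the object arrived so it isn't requested again."""
            self.requests.pop(hash_id, None)
            self.last_object_time = time.time()
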
- removed obsolete variables
- the "Objects to be synced" in the Network tab should now be correct
- removed some checks which aren't necessary anymore in my opinion
- fix missing self in Throttle (thanks landscape.io)
- send buffer to send multiple commands in one TCP packet
- recv/send operation size is now based on the bandwidth limit
- send queue limited to 100 entries
- buffer getdata commands to fill send queue, instead of waiting for the
data packet to arrive first (i.e. allow getdata to work asynchronously)
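
A sketch of the send buffering (illustrative only): serialized commands are
queued and flushed together, and the per-operation size is derived from the
configured bandwidth limit.

    class SendBuffer(object):
        def __init__(self, bandwidth_limit=0, lower=4096, upper=131072):
            self.data = b''
            # roughly one throttling interval's worth of data per operation,
            # clamped to sane bounds; 0 means unlimited
            if bandwidth_limit:
                self.chunk_size = max(lower, min(upper, bandwidth_limit))
            else:
                self.chunk_size = upper

        def queue(self, packet):
            """Append a serialized command (verack, inv, getdata, ...)."""
            self.data += packet

        def flush(self, sock):
            """Send several queued commands with a single send() call."""
            if self.data:
                sent = sock.send(self.data[:self.chunk_size])
                self.data = self.data[sent:]
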
- SSL handshake would often fail, because verack packet was being sent
at the same time as the do_handshake was executed in a different
thread. This makes it so that do_handshake waits until verack is done
sending.
- also minor modifications in SSLContext initialisation
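
Conceptually the fix is an ordering constraint between the two threads,
roughly like this sketch (hypothetical names):

    import threading

    verack_sent = threading.Event()   # set by the sending thread

    def send_verack(sock, verack_payload):
        sock.sendall(verack_payload)
        verack_sent.set()

    def start_tls(ssl_sock, timeout=20):
        """Don't start do_handshake until the plaintext verack has left the
        socket, otherwise the two writes race each other."""
        if not verack_sent.wait(timeout):
            raise RuntimeError("verack not sent in time, aborting TLS handshake")
        ssl_sock.do_handshake()
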
- fixes errors introduced in the earlier refactoring
- more variables moved to state.py
- path finding functions moved to paths.py
- remembers IPv6 network unreachable (in the future can be used to skip
IPv6 for a while)
- got rid of shared config parser and made it into a singleton
- refactored safeConfigGetBoolean as a method of the config singleton
- refactored safeConfigGet as a method of the config singleton
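
In essence the config singleton exposes the old helpers as methods, roughly
like this sketch (class name and error handling here are illustrative):

    import ConfigParser

    class BMConfigParser(ConfigParser.SafeConfigParser):
        def safeGetBoolean(self, section, option):
            try:
                return self.getboolean(section, option)
            except (ConfigParser.NoSectionError, ConfigParser.NoOptionError,
                    ValueError, AttributeError):
                return False

        def safeGet(self, section, option, default=None):
            try:
                return self.get(section, option)
            except (ConfigParser.NoSectionError, ConfigParser.NoOptionError,
                    ValueError, AttributeError):
                return default
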
- moved softwareVersion from shared.py into version.py
- moved some global variables from shared.py into state.py
- moved some protocol-specific functions from shared.py into protocol.py
- minor refactoring, made it into a singleton instead of a shared global
variable. This makes it a little bit cleaner and moves the class into
a separate file
- removed duplicate inventory locking
- renamed singleton.py to singleinstance.py (this is the code that
ensures only one instance of PyBitmessage runs at the same time)
- TLS handshake in python is apparently always asynchronous, so it needs
proper handling of SSLWantReadError and SSLWantWriteError
- also adds a timeout and a proper shutdown if handshake fails
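
A sketch of driving the handshake to completion on a non-blocking socket,
with the timeout and shutdown mentioned above (parameter values are
illustrative):

    import select
    import ssl
    import time

    def tls_handshake(sock, timeout=20):
        """Retry do_handshake() until it completes, waiting on the socket
        whenever it wants to read or write, and give up after `timeout`."""
        deadline = time.time() + timeout
        while True:
            try:
                sock.do_handshake()
                return True
            except ssl.SSLWantReadError:
                select.select([sock], [], [], max(0, deadline - time.time()))
            except ssl.SSLWantWriteError:
                select.select([], [sock], [], max(0, deadline - time.time()))
            if time.time() > deadline:
                sock.close()   # proper shutdown if the handshake never finishes
                return False
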
- sometimes SSL connections unnecessarily disconnected on non-fatal
errors. This should fix that. This is however a short term solution
because of migrating to asyncore which has its own error handling
- if your time is off by more than an hour, you won't be able to
establish a connection to the network. This patch adds a UI
notification so that the user can understand why he can't connect.
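
The check itself is simple; the change is in surfacing it to the user.
A sketch (the notification hook is hypothetical):

    import time

    MAX_TIME_OFFSET = 3600   # more than an hour of skew and peers reject us

    def check_time_offset(peer_time, ui_notify):
        """Compare the timestamp from the peer's version message with our
        clock and warn the user instead of failing silently."""
        offset = peer_time - int(time.time())
        if abs(offset) > MAX_TIME_OFFSET:
            ui_notify("Your clock is off by %d seconds; connections to the "
                      "network will fail until it is corrected." % offset)
            return False
        return True
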