- will try to report "Server full" over the protocol for up to 10
  connections over the limit, instead of simply dropping them
- if connected to the same host inbound and outbound, handle as server
full (prevents duplicate connections)
- channels have to be closed explicitly: I thought this happened
  automatically through garbage collection, but as the channel is still
  assigned in the asyncore map, it needs to be done manually. Previously
  the file handle limit was exceeded and the program crashed
- dandelion would always think there was a cycle and trigger fluff
- the cycle fluff trigger didn't correctly re-download and re-announce
  the object; it now remembers between the (d)inv and object commands
  that it is in a fluff trigger phase (a sketch follows this entry)
- fixes and feedback from @gfanti and @amiller
- addresses #1049
- minor refactoring
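A minimal sketch of the fluff-trigger bookkeeping described above, under
the assumption that a (d)inv handler marks the object and the object
handler later re-announces it. The handler and helper names (on_dinv,
on_object, request_object, announce_to_all) are hypothetical, not
PyBitmessage's actual API:

```python
fluff_trigger = set()  # hashes waiting to be fluffed when they arrive

def request_object(obj_hash):
    pass  # placeholder: send a getdata for this object

def announce_to_all(obj_hash):
    pass  # placeholder: queue a plain inv to every connection

def on_dinv(obj_hash, cycle_detected):
    if cycle_detected:
        fluff_trigger.add(obj_hash)  # remember the fluff trigger phase...
        request_object(obj_hash)     # ...and re-download the object

def on_object(obj_hash, payload):
    if obj_hash in fluff_trigger:
        fluff_trigger.discard(obj_hash)
        announce_to_all(obj_hash)    # re-announce, i.e. fluff
```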
- two global child stems with a fixed mapping between parent and
  child stem (see the sketch after this entry)
- allow child stems which don't support dandelion
- only allow outbound connections to be stems
- adjust stems when opening/closing outbound connections (this should
  allow partial dandelion functionality when not enough outbound
  connections are available, instead of breaking)
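A minimal sketch of the stem handling described in this entry: two
global child stems chosen among outbound connections, a fixed
parent-to-child mapping, and adjustment when an outbound connection
closes. The class and method names are illustrative, not the actual
PyBitmessage code:

```python
import random

class StemMapper(object):
    """Two global child stems with a fixed parent -> child mapping."""

    def __init__(self, outbound_connections):
        # Only outbound connections qualify as stems.
        self.stems = random.sample(outbound_connections,
                                   min(2, len(outbound_connections)))
        self.parent_map = {}

    def map_parent(self, parent):
        """Return the child stem fixed for this parent."""
        if not self.stems:
            return None  # too few outbound connections: caller fluffs
        if parent not in self.parent_map:
            self.parent_map[parent] = random.choice(self.stems)
        return self.parent_map[parent]

    def remove_connection(self, conn):
        """Adjust stems and mappings when a connection closes."""
        if conn in self.stems:
            self.stems.remove(conn)
            self.parent_map = {p: c for p, c in self.parent_map.items()
                               if c is not conn}
```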
- reduce buffer size to 128kB (was 2MB)
- IP address handling uses str instead of buffer (the latter, even
  though it should be faster, breaks the code on Windows)
- read up to the full buffer once the connection is fully established
  (otherwise downloads become too slow due to the loop time; see the
  sketch after this entry). This reverts a change made in d28a7bfb86
- more exception handling
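A minimal sketch of the read strategy from the two buffer-related items
above, with assumed attribute names (fully_established, expectBytes,
read_buf):

```python
BUFFER_SIZE = 128 * 1024  # reduced from 2 MB

class Conn(object):       # stand-in for a connection object
    fully_established = False
    expectBytes = 24
    read_buf = b""

def bytes_to_read(conn):
    if not conn.fully_established:
        # During the handshake, read only the next expected unit.
        return conn.expectBytes
    # Fully established: read up to the full buffer, so downloads are
    # not throttled by the asyncore loop interval.
    return BUFFER_SIZE - len(conn.read_buf)

print(bytes_to_read(Conn()))  # 24 while handshaking
```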
- only use outbound connections for stems
  (thanks to @amiller for the info)
- don't create stems if config disabled
- addresses #1049
- allow loopback addresses; you can now bind different loopback IP
  addresses on a single system and they will auto-cross-connect
- always listen for discovery on 0.0.0.0
- the [network] bind option now applies to the TCP socket as well as
  the UDP socket (see the sketch after this entry)
- closing socket iterator fix
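A minimal sketch of the bind behaviour described above, assuming an
illustrative loopback alias and port; the exact config handling differs
in the real code:

```python
import socket

BIND_ADDR = "127.0.0.2"  # hypothetical [network] bind value
PORT = 8444              # illustrative port

# The bind address now applies to the TCP listener as well.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcp.bind((BIND_ADDR, PORT))
tcp.listen(5)

# Discovery always listens on the wildcard address, so nodes bound to
# different loopback IPs on one machine can still auto-cross-connect.
discovery = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
discovery.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
discovery.bind(("0.0.0.0", PORT))
```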
- get rid of per-connection writeQueue/receiveQueue, and instead use
strings and locking
- minor code cleanup
- all state handlers should now set expectBytes
- almost all data processing happens in ReceiveDataThread, and
  AsyncoreThread does almost only I/O (plus TLS); AsyncoreThread simply
  puts the connection object into a queue when it has data for
  processing (see the sketch below)
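A minimal sketch of that split, with illustrative names: the I/O side
appends raw bytes under a lock and enqueues the connection, while
ReceiveDataThread drains the queue and runs state handlers, each of
which sets expectBytes for the next state:

```python
import threading
import queue

receive_queue = queue.Queue()  # connections with pending data

class Connection(object):
    def __init__(self):
        self.read_buf = b""            # plain string buffer plus a lock
        self.lock = threading.Lock()
        self.state = "header"
        self.expectBytes = 24          # e.g. a fixed-size header

    def append_data(self, data):
        """Called from the asyncore (I/O) thread."""
        with self.lock:
            self.read_buf += data
        receive_queue.put(self)        # hand off for processing

    def process(self):
        """Called from ReceiveDataThread."""
        while True:
            with self.lock:
                if len(self.read_buf) < self.expectBytes:
                    return             # wait for more I/O
                chunk = self.read_buf[:self.expectBytes]
                self.read_buf = self.read_buf[self.expectBytes:]
            getattr(self, "state_" + self.state)(chunk)

    def state_header(self, chunk):
        self.state = "payload"
        self.expectBytes = 100         # would come from the parsed header

    def state_payload(self, chunk):
        self.state = "header"
        self.expectBytes = 24

def receive_data_thread():
    while True:                        # almost all processing lives here
        receive_queue.get().process()
```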
- allow poll, epoll and kqueue handlers; kqueue is untested and
  unoptimised, poll and epoll seem to work OK (Linux). An epoll sketch
  follows below
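A minimal sketch of an epoll-based readiness loop of the kind such a
handler uses (select.epoll exists only on Linux); the socket setup is
illustrative:

```python
import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen(5)
listener.setblocking(False)

ep = select.epoll()
ep.register(listener.fileno(), select.EPOLLIN)

# One iteration; the real loop runs forever, and can afford a longer
# timeout because data processing happens in ReceiveDataThread.
for fd, events in ep.poll(timeout=0.1):
    if fd == listener.fileno() and events & select.EPOLLIN:
        conn, addr = listener.accept()
        conn.close()

ep.close()
listener.close()
```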
- stack depth threshold handler in decode_payload_content; it is
  recursive and I think was causing occasional RuntimeErrors (see the
  sketch below). Fixes #964
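A minimal sketch of such a guard on a toy recursive decoder; the real
decode_payload_content differs, but the idea is to bail out well before
Python raises RuntimeError:

```python
import sys

# Refuse to recurse past half the interpreter's limit instead of
# letting Python raise RuntimeError mid-parse.
MAX_DEPTH = sys.getrecursionlimit() // 2

def decode_nested(data, depth=0):
    """Toy recursive decoder: each b'(' opens a nested level."""
    if depth > MAX_DEPTH:
        raise ValueError("payload nests too deeply, refusing to decode")
    if data.startswith(b"("):
        return [decode_nested(data[1:], depth + 1)]
    return data

print(decode_nested(b"((x"))  # [[b'x']]
```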
- longer asyncore loops, as data is now handled in ReceiveDataThread
- randomise node order when deciding what to download; this should
  prevent retries from being stuck on the same node
- socks cleanup (socks5 works ok, socks4a untested but should work too)
- implemented by ignoring getdata during the delay rather than by
  sleeping as in the threaded model
- it can happen that a valid getdata request is received during the
  delay; a node should be implemented so that it retries the download,
  which may not be the case with older PyBitmessage versions or other
  implementations
- downloads are now also tracked globally, so the same object is not
  requested from multiple peers at the same time (see the sketch after
  this entry)
- retries at the earliest every minute
- stops trying to download an object after an hour
- minor fixes in retrying downloading invalid objects
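A minimal sketch of the global tracking with the timing rules above
(one request per object at a time, retry after a minute at the
earliest, give up after an hour); the structures are illustrative:

```python
import time
import threading

REQUEST_RETRY = 60      # seconds before the same hash may be re-requested
REQUEST_EXPIRY = 3600   # seconds after which we stop trying altogether

pending = {}            # object hash -> (first requested, last requested)
pending_lock = threading.Lock()

def may_request(obj_hash):
    """Decide whether a getdata for obj_hash should go out now."""
    now = time.time()
    with pending_lock:
        first, last = pending.get(obj_hash, (None, None))
        if first is not None and now - first > REQUEST_EXPIRY:
            return False        # tried for an hour, give up
        if last is not None and now - last < REQUEST_RETRY:
            return False        # requested recently, possibly elsewhere
        pending[obj_hash] = (first if first is not None else now, now)
        return True
```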
- outbound peers now have a rating
- it's also shown in the network status tab
- currently it ranges from -1 to +1, changes in steps of 0.1 and uses
  the hyperbolic function 0.05/(1.0 - rating) to convert the rating
  into the probability with which we should connect to that node when
  it is randomly chosen (see the sketch after this entry)
- it increases when we successfully establish a full outbound connection
to a node, and decreases when we fail to do that
- onion nodes have priority when using SOCKS
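A minimal sketch of the rating logic from this entry; the 0.1 steps and
the 0.05/(1.0 - rating) curve come from the text, while capping the
result at 1.0 (and handling a rating of exactly +1) is an assumption to
avoid division by zero:

```python
class Peer(object):
    def __init__(self):
        self.rating = 0.0                  # ranges from -1 to +1

    def connection_success(self):          # full outbound connection made
        self.rating = min(1.0, self.rating + 0.1)

    def connection_failure(self):          # outbound connection failed
        self.rating = max(-1.0, self.rating - 0.1)

    def connect_probability(self):
        if self.rating >= 1.0:
            return 1.0                     # assumed cap: avoid div by zero
        return min(1.0, 0.05 / (1.0 - self.rating))

peer = Peer()
for _ in range(5):
    peer.connection_success()
print(peer.connect_probability())         # ~0.1 at rating ~0.5
```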
- this thread is for spreading new/updated addresses to active
  connections, analogously to the InvThread
- it doesn't do anything yet; this is just a dummy queue at the moment
- should prevent the same object being re-requested indefinitely
- locking for object tracking
- move SSL-specific error handling to TLSDispatcher
- observe the maximum connection limit when accepting a new connection
  (see the sketch at the end of this list)
- stack depth test (for debugging purposes)
- separate download thread
- connection pool init moved to main thread
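A minimal sketch tying together the accept-time limit above and the
"Server full" allowance from the first entry; the numbers and the
helper are illustrative:

```python
MAX_CONNECTIONS = 200  # illustrative; the real limit comes from config
OVERFLOW = 10          # extra connections that get a "Server full" reply

def send_server_full_and_close(conn):
    pass  # placeholder: speak just enough protocol to report the error

def handle_accept(listener, connections):
    pair = listener.accept()
    if pair is None:       # asyncore's accept can return None
        return
    conn, addr = pair
    if len(connections) >= MAX_CONNECTIONS + OVERFLOW:
        conn.close()                       # far over the limit: just drop
    elif len(connections) >= MAX_CONNECTIONS:
        send_server_full_and_close(conn)   # report "Server full" first
    else:
        connections.add(conn)
```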