- new options in the network section: onionsocksproxytype,
onionsockshostname and onionsocksport. These allow separating
connectivity types for onion and non-onion addresses, e.g. connecting
to clear nodes over clearnet and to onions over Tor (see the example
below)
- also remove some obsolete imports
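
Example (a minimal keys.dat sketch): the onionsocks* option names come
from the items above, while the section name and the host/port values
are assumptions for illustration.

    [bitmessagesettings]
    ; non-onion peers connect directly, onion peers go through a local
    ; Tor SOCKS5 proxy
    socksproxytype = none
    onionsocksproxytype = SOCKS5
    onionsockshostname = 127.0.0.1
    onionsocksport = 9050
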
- dandelion fixes
- try to wait as long as possible before expiration if there are no
outbound connections
- expire in invThread rather than singleCleaner thread
- deduplication of code in inv and dinv command methods
- turn on by default, seems to work correctly now
- turn off dandelion if outbound connections are disabled
- start tracking downloads earlier, and faster download loop
- remove some obsolete lines
- minor PEP8 updates
- caching of whether an object exists in the inventory was somehow
removed during the storage refactoring (or it never worked). Existence
checks are now cached in the sqlite storage backend
- Use MyForm.statusbar set in __init__ instead of MyForm.statusBar()
- MyForm.updateStatusBar() to show the message in MyForm methods
- queues.UISignalQueue.put(('updateStatusBar', msg)) in dialogs
- will try to report "Server full" over the protocol for up to 10 extra
connections over the limit, instead of simply dropping them
- if connected to the same host inbound and outbound, handle as server
full (prevents duplicate connections)
- change the forking exit order to what systemd expects (wait until the
child is ready, then exit the parent, then the grandparent); see the
sketch below
- fix signal handler if prctl not installed
- revert recent PID file changes
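
A rough sketch of that forking exit order, assuming a pipe is used to
signal readiness (the function and the readiness byte are illustrative,
not the actual implementation):

    import os
    import sys

    def daemonize(run_service):
        readiness_r, readiness_w = os.pipe()  # child -> parent channel
        intermediate_pid = os.fork()
        if intermediate_pid > 0:             # grandparent (original process)
            os.waitpid(intermediate_pid, 0)  # wait for the parent to exit...
            sys.exit(0)                      # ...then exit last, as systemd expects
        os.setsid()
        if os.fork() > 0:                    # parent (intermediate process)
            os.close(readiness_w)
            os.read(readiness_r, 1)          # block until the child is ready
            sys.exit(0)
        os.close(readiness_r)
        # child: finish initialisation, report readiness, then run
        os.write(readiness_w, b'1')
        os.close(readiness_w)
        run_service()
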
- recent changes caused the "tag" (and "payload") columns in the
inventory table in messages.dat to be stored as blobs. Searches by tag
(e.g. pubkey lookups) stopped working. This fixes it.
- I thought this was done automatically through garbage collection, but
since the channel is still referenced in the asyncore map, it needs to
be done manually. Basically the file handle limit was exceeded and it
crashed
- dandelion would always think there is a cycle and trigger fluff
- cycle fluff trigger didn't correctly re-download and re-announce the
object. Now it remembers between (d)inv and object commands that it's
in a fluff trigger phase.
- fixes and feedback from @gfanti and @amiller
- addresses #1049
- minor refactoring
- two global child stems with fixed mapping between parent and
child stem
- allow child stems which don't support dandelion
- only allow outbound connections to be stems
- adjust stems if opening/closing outbound connections (should
allow partial dandelion functionality when not enough outbound
connections are available instead of breaking)
- reduce buffer size to 128kB (was 2MB)
- IP address handling uses str instead of buffer (the latter, even
though it should be faster, breaks the code on Windows)
- read up to full buffer after fully established (otherwise
downloads become too slow due to the loop time). This reverts
a change made in d28a7bfb86
- more exception handling
- only use outbound connections for stems
(thanks to @amillter for info)
- don't create stems if config disabled
- addresses #1049
the desktop sound theme, with pycanberra for example. The plugin name
should start with 'theme' in that case, whereas the names of plugins
that play a sound file start with 'file'.
- don't treat "@" in label as an email address
- ask for confirmation before autoregistering. It confused some
newbies into thinking that bitmessage requires payment
- calling "shutdown" now cleanly shuts down PyBitmessage, however the
call may not return so you need to add an error handler to the call.
With python for example, wrap the "shutdown()" in
"try:/except socket.error"
- allow loopback addresses, now you can bind different loopback IP
addresses on a single system and they will auto-cross-connect
- always listen for discovery on 0.0.0.0
- the [network] bind option now applies to the TCP socket as well as
the UDP socket
- closing socket iterator fix
- get rid of per-connection writeQueue/receiveQueue, and instead use
strings and locking
- minor code cleanup
- all state handlers now should set expectBytes
- almost all data processing happens in ReceiveDataThread, and
AsyncoreThread is almost only I/O (plus TLS). AsyncoreThread simply
puts the connection object into the queue when it has some data for
processing
- allow poll, epoll and kqueue handlers. kqueue is untested and
unoptimised, poll and epoll seem to work ok (linux)
- stack depth threshold handler in decode_payload_content; this is
recursive and I think it was causing occasional RuntimeErrors.
Fixes #964
- longer asyncore loops, as now data is handled in ReceiveDataThread
- randomise node order when deciding what to download. Should prevent
retries being stuck to the same node
- socks cleanup (socks5 works ok, socks4a untested but should work too)
- implemented by ignoring getdata during the delay rather than sleeping
as it was in the threaded model
- it can happen that a valid getdata request is received during the
delay. A node should be implemented so that it retries the download;
that may not be the case with older PyBitmessage versions or other
implementations
- now tracks downloads globally too, so it doesn't request the same
object from multiple peers at the same time
- retries at the earliest every minute
- stops trying to download an object after an hour
- minor fixes in retrying downloading invalid objects
- outbound peers now have a rating
- it's also shown in the network status tab
- currently it's between -1 and +1, changes in 0.1 steps and uses a
hyperbolic function, 0.05/(1.0 - rating), to convert the rating into
the probability with which we should connect to that node when it is
randomly chosen (see the sketch below)
- it increases when we successfully establish a full outbound connection
to a node, and decreases when we fail to do that
- onion nodes have priority when using SOCKS
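
A small sketch of that mapping; the clamping bounds are an assumption
rather than taken from the code:

    def connection_probability(rating):
        """Map a peer rating in [-1, +1] to a connection probability."""
        rating = max(-1.0, min(1.0, rating))
        if rating >= 1.0:  # avoid division by zero at a perfect rating
            return 1.0
        return min(1.0, 0.05 / (1.0 - rating))

    # connection_probability(0.0)  -> 0.05
    # connection_probability(0.9)  -> 0.5
    # connection_probability(-1.0) -> 0.025
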
- this thread is for spreading new/updated addresses in active
connections, analogous to the InvThread
- it doesn't do anything yet, this is just a dummy queue at the moment
- not used yet, just an inactive helper function
- I received feedback that OpenSSL.rand isn't more secure than
os.urandom. I read several debates/analyses about it and concur
- should prevent the same object being re-requested indefinitely
- locking for object tracking
- move SSL-specific error handling to TLSDispatcher
- observe maximum connection limit when accepting a new connection
- stack depth test (for debugging purposes)
- separate download thread
- connection pool init moved to main thread
- update to 6044df5adf
- objects that are expired or in wrong stream are not re-requested
anymore, even if they aren't stored in the inventory
- the previous option "acceptmismatch" now only affects whether such
objects are stored in the inventory
- a new config file option, network/acceptmismatch, allows the
inventory to store objects that have expired or are from a stream we're
not interested in. Having this on will prevent re-requesting objects
that other nodes incorrectly advertise. It defaults to false
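
Example (a keys.dat sketch; the [network] section follows the option
path given above, and the non-default value is shown):

    [network]
    acceptmismatch = True
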
- better handling of WSA* checks on non-windows systems
- handle EBADF on Windows/select
- better timeouts / loop lengths in main asyncore loop and
spawning new connections
- remove InvThread prints
- asyncore is now on by default
- inv announcements implemented
- bandwidth limit implemented / fixed
- stats on download / upload speed now work
- make prints into logger
- limit knownNodes to 20k as it was before
- green light fixed
- other minor fixes
- bm headers and commands are only read up to the expected length.
On a very fast connection (e.g. a local VM), reading the verack
also consumed part of the TLS handshake
- some debugging info moved from print to logger.debug
- tls handshake cleanup
- bugfixes
- UDP socket for local peer discovery
- new function assembleAddr to unify creating address command
- open port checker functionality (inactive)
- sendBigInv is done in a thread separate from the network IO
thread
- separate queue for processing blocking stuff on reception
- rewrote write buffer as a queue
- some addr handling
- the number of half-open connections is now correct
- Network status UI works but current speed isn't implemented yet
- Track per connection and global transferred bytes
- Add locking to write queue so that other threads can put stuff
there
- send ping on timeout (instead of closing the connection)
- implement open port checker (untested, never triggered yet)
- error handling on IO
- most of the stuff is done so it partially works
- disabled pollers other than select (debugging necessary)
- can switch in the settings, section network, option asyncore (defaults
to False)
- bmconfigparser.py now allows putting default values for a specific
option in the file
- address sections are now detected by the "BM-" prefix rather than by
just ignoring bitmessagesettings. There can now be other sections,
making for a cleaner config file
- in some cases when the IPv6 stack is available and onionbindip is set
to an IPv4 address, socket.bind doesn't change the bound address,
ending up listening on everything
- immediately return from initCL() if numpy or pyopencl is unavailable
(no ImportError because of the resetPoW() call)
- use glob to find the C extension even if it is named like
`bitmsghash.x86_64-linux-gnu.so`
If user chooses to show the Settings dialog:
- activate the "Network Settings" tab
- remove option 'dontconnect' if settings have been saved
- track pending hashId more accurately
- add timeout and a cleanup so that the download queues don't
get stuck and memory is freed
- randomise download order (only works for inv commands with
more than 1 entry)
- replace PendingDownload singleton dict with a Queue
- total memory and CPU requirements should be reduced
- get rid of someObjectsOfWhichThisRemoteNodeIsAlreadyAware. It has
very little practical effect and only uses memory
- if there are too many nodes, only delete the oldest nodes in
bootstrap provider mode; in normal mode, ignore new nodes as before
- in bootstrap provider mode, penalise nodes announced by others by 1
day instead of 3 hours
- version command struct for faster unpacking
- increase read buffer to 2MB to allow a full command to fit
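
A sketch of precompiling such a struct. The layout shown is the
fixed-size prefix of a version payload as I understand the Bitmessage
protocol (version, services, timestamp, two 26-byte address blocks,
nonce); treat it as illustrative rather than the actual code:

    import struct

    # ">"  : network (big-endian) byte order
    # "I"  : protocol version, "Q": services bitfield, "q": timestamp
    # "26s": addr_recv, "26s": addr_from, "Q": nonce
    VERSION_PREFIX = struct.Struct('>IQq26s26sQ')

    def unpack_version_prefix(payload):
        """Unpack the fixed-size part of a version payload in one call."""
        return VERSION_PREFIX.unpack(payload[:VERSION_PREFIX.size])
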
- initial bitmessage protocol class (WIP)
- error handling
- remove duplicate method
- finished proxy design
- socks4a and socks5 implemented
- authentication not tested
- resolver for both socks4a and socks5
- http client example using the proxy
- if knownNodes grows to 20000, instead of ignoring new nodes, forget
the 1000 oldest ones
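
A rough sketch of that trimming, assuming knownNodes for a stream is a
dict mapping peers to last-seen timestamps (the data structure here is
an assumption for illustration):

    MAX_KNOWN_NODES = 20000
    FORGET_COUNT = 1000

    def trim_known_nodes(nodes):
        """Forget the 1000 least recently seen peers at the 20k limit."""
        if len(nodes) < MAX_KNOWN_NODES:
            return
        for peer in sorted(nodes, key=nodes.get)[:FORGET_COUNT]:
            del nodes[peer]
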
- drop connection after sendaddr if too many connections, even if it's
an outbound one
- if the maximum total connections are lower than the maximum outbound
connections, activate bootstrap provider mode
- in this mode, check all addresses received before announcing them
- so basically it only announces those addresses it successfully
connected to
- I can't get the dynamic loading to work on OSX in frozen mode
- I think that if someone wants to build a frozen executable with custom
messagetypes modules, he can edit the file
- so now it lists the existing types manually (for frozen mode only)
- maxtotalconnections = maximum number of total full connections
(incoming + outgoing) the node will allow. Default 200 as it was.
- maxbootstrapconnections = number of additional (to total) connections
that will act in bootstrap mode, closing after sending the list of
addresses. Default 20 as it was.
- maxaddrperstreamsend = initial address list maximum size, per
participating stream. Default 500. Child streams get half. The
response is chunked into pieces of at most 1000 addresses, as that's
the protocol limit.
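
The three options above, as a keys.dat sketch with their documented
defaults (which section they belong to is not stated above, so that
part is left out):

    ; defaults as described above
    maxtotalconnections = 200
    maxbootstrapconnections = 20
    maxaddrperstreamsend = 500
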
- on OpenBSD, you can't have a socket that supports both IPv4 and IPv6.
This adds handling for that error; it will then try IPv4 only, just
like for other similar errors.
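
A sketch of that fallback; the exact exception handling is
illustrative, but the idea follows the item above (try a dual-stack
IPv6 socket first, then retry with IPv4 only):

    import socket

    def open_listening_socket(port):
        try:
            sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            # request a dual-stack socket; OpenBSD refuses to clear V6ONLY
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
            sock.bind(('', port))
        except (OSError, socket.error):
            # fall back to IPv4 only, just like for other similar errors
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.bind(('0.0.0.0', port))
        sock.listen(5)
        return sock
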
- there were reports of errors in FreeBSD (I could only reproduce some)
and Gentoo without IPv4 support (I don't have a VM for testing ready)
- adds an exception handler for a double task_done in case the sender
thread has to close prematurely (I saw this triggered on FreeBSD 11)
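
A tiny sketch of what such a guard can look like (the helper is
illustrative, not the actual sender-thread code):

    def safe_task_done(queue):
        """Call Queue.task_done(), tolerating a premature shutdown."""
        try:
            queue.task_done()
        except ValueError:
            # task_done() was called more times than there were items,
            # which can happen when the sender thread closes prematurely
            pass
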
- listening socket opening error handler was broken (triggered if you
can't open a socket with both IPv4 and IPv6 support)
- error handler for socket.accept. Reported on FreeBSD 10.3
- fixes #854
- TTL to chans shouldn't be too low so the UI gives feedback
- a warning when sending would either require a lot of refactoring or
wouldn't have good usability
- don't do subprocess in SafeHTMLParser, it doesn't work in frozen mode
and an attempt to fix it would take too much refactoring and I'm not
even sure it would work
- instead, make it handle broken unicode correctly
- I think the previous reports of freezes were caused by trying to
interpret data as unicode, causing a crash
- it does about 1MB/s on my machine, so a timeout is not a big problem
- spec file for pyinstaller detects architecture (32 or 64bit)
- spec file uses os.path.join
- spec file creates and adds the list of messagetypes
- added MinGW/MSyS support in Makefile
- separate Makefile.msvc for MSVC
- bitmsghash.cpp minor adjustments to build also on MSVC/MinGW
- if frozen mode, messagetypes loads the list of files from a text file
generated during archive building rather than from a directory
- separate Makefile for BSD make
- auto-compile will detect BSD and pass the correct parameters to make
- C PoW builds on OpenBSD and detects number of cores
- "new" folder consistently appears in chans and "All accounts"
- "Sent" message list sorting fix
- When editing a label, keys.dat is saved and the lineEdit completer
is updated
- addressbook is updated when adding/deleting a new chan
- saveKnownNodes replaced the repeated pickle.dump
- uses "with knownNodesLock" instead of acquire/release
- outgoingSynSender had an unnecessary loop during shutdown causing
excessive CPU usage / GUI freezing
- networkDefaultProofOfWorkNonceTrialsPerByte and
networkDefaultPayloadLengthExtraBytes cyclic import fix
- PyBitmessage should launch now when there's no keys.dat
- rearranged code to reduce cyclic dependencies
- doCleanShutdown is separated in shutdown.py
- shared queues are separated in queues.py
- some default values were moved to defaults.py
- knownnodes partially moved to knownnodes.py
- complete the version and SSL handshake first, and only then feed
errors into the stream and close connection
- this allows more accurate error handling on both sides
- also the timeOffset error trigger is now more accurate, but requires
more nodes to upgrade
- version command sends list of all participating streams
- biginv sends lists of hosts for all streams the peer wants (plus
immediate children)
- objects will spread to all peers that advertise the associated stream
- please note these are just network subsystem adjustments, streams
aren't actually usable yet
- queues were too short
- some error handling was missing
- remove nonblocking repeats in receive data thread
- singleCleaner shouldn't wait unnecessarily
- sendinv and sendaddress are sometimes being sent to connections that
haven't been established yet, resulting in complaints about stream
mismatch. The error should only be displayed once the connection has
been established and the remote node provides its stream number
- Missing renamed to PendingDownload
- PendingDownload now only retries 3 times rather than 6 to download an
object
- Added PendingUpload, replacing invQueueSize
- PendingUpload has both the "len" method (number of objects not
uploaded) as well as "progress" method, which is a float from 0
(nothing done) to 1 (all uploaded) which considers not only objects
but also how many nodes they are uploaded to
- PendingUpload tracks when the object is successfully uploaded to the
remote node instead of just adding an arbitrary time after they have
been sent the corresponding "inv"
- Network status tab's "Objects to be synced" shows the sum of
PendingUpload and PendingDownload sizes
- sometimes a node would send an "inv" about an object but then not
provide it when requested. This could be because it expired in the
meantime, or it was an attack or a bug. This patch will forget that the
object exists if it was requested too many times and not received.
- remember what was requested from which node
- remember if it was received
- re-request object if we haven't received any new object for more than
a minute
- rely on dict quasi-random order instead of an additional shuffle
- request an object once per minute
- stop check after count objects have been found
- tries to avoid calling senddata if it would block receiveDataThread,
allowing for more asynchronous operation
- request objects in chunks of 100 (CPU performance optimisation)
- moved logic into a Missing singleton
- shouldn't try to download duplicates anymore, only requests a hash
once every 5 minutes and not from the same host
- removed obsoleted variables
- the "Objects to be synced" in the Network tab should now be correct
- removed some checks which aren't necessary anymore in my opinion
- fix missing self in Throttle (thanks landscape.io)
- send buffer to send multiple commands in one TCP packet
- recv/send operation size now based on the bandwidth limit
- send queue limited to 100 entries
- buffer getdata commands to fill send queue, instead of waiting for the
data packet to arrive first (i.e. allow getdata to work asynchronously)
- SSL handshake would often fail, because verack packet was being sent
at the same time as the do_handshake was executed in a different
thread. This makes it so that do_handshake waits until verack is done
sending.
- also minor modifications in SSLContext initialisation
- fixes errors introduced in the earlier refactoring
- more variables moved to state.py
- path finding functions moved to paths.py
- remembers IPv6 network unreachable (in the future can be used to skip
IPv6 for a while)
- got rid of shared config parser and made it into a singleton
- refactored safeConfigGetBoolean as a method of the config singleton
- refactored safeConfigGet as a method of the config singleton
- moved softwareVersion from shared.py into version.py
- moved some global variables from shared.py into state.py
- moved some protocol-specific functions from shared.py into protocol.py
- minor refactoring, made it into singleton instead of a shared global
variable. This makes it a little bit cleaner and moves the class into
a separate file
- removed duplicate inventory locking
- renamed singleton.py to singleinstance.py (this is the code that
ensures only one instance of PyBitmessage runs at the same time)
- the TLS handshake in Python is apparently always asynchronous, so it
needs proper handling of SSLWantReadError and SSLWantWriteError
- also adds a timeout and a proper shutdown if the handshake fails
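
A sketch of such a handshake loop on a non-blocking socket; the timeout
value and the select-based waiting are illustrative choices, not the
actual TLSDispatcher code:

    import select
    import socket
    import ssl
    import time

    def do_tls_handshake(sock, context, timeout=20):
        """Drive do_handshake() on a non-blocking socket to completion."""
        tls = context.wrap_socket(sock, do_handshake_on_connect=False)
        deadline = time.time() + timeout
        while True:
            try:
                tls.do_handshake()
                return tls
            except ssl.SSLWantReadError:
                select.select([tls], [], [], max(0.0, deadline - time.time()))
            except ssl.SSLWantWriteError:
                select.select([], [tls], [], max(0.0, deadline - time.time()))
            if time.time() > deadline:
                tls.close()  # proper shutdown if the handshake fails
                raise socket.timeout("TLS handshake timed out")
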
- Linux users often don't know that the C PoW is available and complain
it's slow. This will try to build it, and adds availability
notification in the status bar
- also, the updateStatusBar signal now allows emphasised notifications,
which will remain visible for a longer period of time and also
reappear if a status change happened in the meantime
- sometimes SSL connections unnecessarily disconnected on non-fatal
errors. This should fix that. This is however a short term solution
because of migrating to asyncore which has its own error handling
- when you have multiple OpenCL drivers at the same time, e.g. intel and
nvidia, they won't mix leading to crashes. This patch makes it
possible to select which driver to use by listing the available
vendors
- refactored to use the .ui file
- input logic change, address is always optional
- interactive input validation
- runs asynchronously to the main window
- address generator thread can now validate chans in addition to just
adding them
- a user report indicated there is confusion about the address error
messages. They thought it referred to the sender address; however, it
refers to the recipient address. This makes it clearer
- if your time is off by more than an hour, you won't be able to
establish a connection to the network. This patch adds a UI
notification so that the user can understand why he can't connect.
- this has been tested on Windows as well, and has been cleaned up.
There is now a permanent parser thread, and it restarts when the
parsing takes more than 1 second
- Fixes #900
- while 448ceaa74c fixed slow rendering on Windows, there was still a
bug where overly long messages caused freezing of the hyperlink regexp
parser, which appears to happen on all platforms. Maybe it's a freeze,
maybe it just takes too long. This patch aborts the regexp parser after
1 second and simply displays the message without clickable hyperlinks
(see the sketch below). This doesn't affect HTML mode because there the
links are kept as they are
- Fixes #900
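
A simplified sketch of the 1-second abort, using a worker thread and a
result queue; the regexp and the names are illustrative, not the actual
SafeHTMLParser code:

    import re
    import threading
    from Queue import Queue, Empty  # "queue" in Python 3

    URL_RE = re.compile(r'https?://\S+')  # illustrative pattern

    def linkify_with_timeout(text, timeout=1.0):
        """Return text with clickable links, or plain text on timeout."""
        result = Queue()

        def worker():
            result.put(URL_RE.sub(r'<a href="\g<0>">\g<0></a>', text))

        parser = threading.Thread(target=worker)
        parser.daemon = True  # don't block shutdown if the regexp hangs
        parser.start()
        try:
            return result.get(timeout=timeout)
        except Empty:
            return text  # display the message without clickable links
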
- some messages (e.g. some long messages on Windows, or binary data)
cause the body to take an excessive amount of time to render. This
change is based on a workaround I found at
http://www.qtcentre.org/threads/8188-bug-setLineWrapMode
- most status messages are transient, so they are now only displayed for
10 seconds
- when trying to quit while disconnected or not fully synced, a
three-choice message box now appears: Yes for waiting, No for
closing anyway, and Cancel for aborting the shutdown procedure
- this copyright character has been plaguing the pylupdate4 parser, and
multiple unsuccessful attempts to fix it have been made and then
reverted. It is now replaced with an HTML entity; hopefully this will
finally fix it.
- UI will now display notifications in the status bar if the connection
to the proxy itself is broken. This should give better feedback to
people who are unfamiliar with tor and misconfigured it
- The proxy error handling in the background was slightly improved as
well
- fixes "fast python" (multiprocessing) PoW
- python PoW (both slow and fast) interruptible on *NIX
- signal handler should handle multiple processes and threads correctly
(only tested on Linux)
- popup window asking whether to interrupt the PoW when quitting the Qt
GUI
- PoW status in "sent" folder fixes and now also displays broadcast
status which didn't exist before
- Fixes #894
- namecoin connection errors now have severity "info" instead of
"error", because it just confuses people who don't have namecoin
configured
- partially addresses #893
- namecoin lookup now also includes name of the record in the recipient
field
- namecoin lookups now support multiple semicolon-separated recipients
like the other recipient-related functions. If there are multiple
recipients, the namecoin lookup will look up the last entry on the
line; for example, if you have "a; b; c" in the recipient line, it will
look up "c"
- bitmessage could end up having no known nodes and then it would
freeze. Now it shouldn't freeze; however, it can still end up without
known nodes until a restart in some cases (e.g. when suspending the
computer for more than 3 days while BM is running)
- you can now use SMTP to send messages
- uses the bmaddr.lan domain
- listens on 127.0.0.1:8425 if you set "smtpd" to True
- mandatory authentication with smtpdusername and smtpdpassword (see
the sketch below)
- handles old dialog versions better if using curses
- can spawn SMTP delivery thread if configured (only when in daemon
mode)
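
A client-side sketch of sending through that local SMTP listener; the
host and port follow the description above, while the addresses and
credentials are placeholders:

    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText("Hello over Bitmessage")
    msg["Subject"] = "Test"
    msg["From"] = "BM-senderaddress@bmaddr.lan"   # placeholder identity
    msg["To"] = "BM-recipientaddress@bmaddr.lan"  # placeholder recipient

    smtp = smtplib.SMTP("127.0.0.1", 8425)
    smtp.login("smtpduser", "secret")  # must match smtpdusername/smtpdpassword
    smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
    smtp.quit()
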
- daemonized mode now works more like it's properly supposed to on unix
(double fork etc.). You may have to adjust your init scripts; when
using upstart, for example, you should now use "expect daemon"
- daemon mode now cleanly shuts down when TERM/INT signal is received
- PyBitmessage used to quit on a full disk only when running in daemon
mode. When this happened with the QT-GUI, it would end up in a
half-frozen state instead. Quitting is the safer choice
Fixes #572
- helper classes for encoding/decoding messages
- includes both the old encoding as well as the new extended one
(msgpack+zlib)
- the classes are not used yet and are intended for experimenting
- when running a hidden service, the IP of the tor relay was a part of
the verack message. In setups where it's not 127.0.0.1 it may leak
info about network topology
- thanks for an anonymous bug report
- will send the correct combination of hostname and port
- if proxyhostname is a hostname and an IP address, it will now allow
multiple parallel connections for hidden service
- PyBitmessage can now run as a hidden service on Tor
- three new variables in keys.dat: onionhostname, onionport, onionbindip
- you need to manually add a hidden service to tor
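
A sketch of the two sides of that setup; the paths, the .onion hostname
and the port are placeholders, and the keys.dat section is assumed to
be [bitmessagesettings]:

    ## torrc: publish the local Bitmessage listening port as a hidden
    ## service
    HiddenServiceDir /var/lib/tor/bitmessage/
    HiddenServicePort 8444 127.0.0.1:8444

    ## keys.dat
    [bitmessagesettings]
    onionhostname = exampleonionaddress.onion
    onionport = 8444
    onionbindip = 127.0.0.1
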
- bitmsghash should now build and run on BSD (thanks for
FreeBSD/Dragonfly maintainers for assistance)
- if it cannot detect the number of cores, will default to one thread
(previously it broke)
Two file merge conflicts, __init__.py and upnp.py, were not resolved
correctly by the automatic resolving (probably because the affected
code was written by other people and I merged it into the mailchuck
fork). This changes it to the same code that is in the mailchuck fork.
On Windows, the encoding was always the default Windows encoding and
didn't change when you used a language in BM that required a different
encoding. This mainly affected the date & time in the received column
and the startup info on the network status tab.
The plural/paucal form support was not compatible with pylupdate4; it
didn't correctly parse the 3-argument calls to translate. This fixes it
and updates the sources accordingly.
Some parts of strings did not use the proper locale. For example, date
and time strings were always output with the US locale. This fixes it.
There are still some cases where localisation is not implemented and
could be changed from str(string) to locale.str(string); see the
example below.
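
A small illustration of that remaining str(string) vs
locale.str(string) difference (the German locale is just an example and
must be available on the system):

    import locale

    locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8')  # example locale
    value = 1234.5
    print(str(value))         # '1234.5' - ignores the locale
    print(locale.str(value))  # '1234,5' - locale decimal separator
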
- it shows that it needs to wait for PoW to finish
- it waits a bit for new objects to be distributed
- it displays a better progress indicator in the status bar
Previously, people who didn't understand how PyBitmessage works
sometimes shut it down immediately after they wrote a message. This
would have caused the message to be stuck in the queue locally and not
sent. Now, it will indicate that the PoW still needs to finish, and it
will wait a bit longer so that the message can spread. It's not a
completely correct approach, because it does not know whether the
message was really retrieved after the "inv" notification was sent.