Memory leak when remote peer requests too many objects #1414
Reference: Bitmessage/PyBitmessage-2024-12-21#1414
I noticed huge memory usage on some of my nodes. After adding memory usage tracking to PyBM, it looks like some clients request too many objects (possibly even duplicates), and PyBitmessage then fills the socket write buffers with the object data without any length constraint. This is due to the design of the upload code. In most cases not checking the write buffer size isn't a problem; I don't think a remote node has any other way of causing too much data to be sent than requesting a lot of objects.
On one of my machines I got 11GB RAM consumed by this, even though there weren't that many connections.
There are several approaches to addressing this, and they can be combined.
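One such approach could be a simple cap on each connection's write buffer: stop serving `getdata` requests once the buffer exceeds a threshold, and resume as it drains. A minimal sketch under assumed names (`MAX_WRITE_BUF`, `Connection`, `queue_object` are illustrative, not PyBitmessage's actual API):

```python
# Hypothetical sketch: defer serving requested objects once a connection's
# outgoing buffer grows past a cap, instead of buffering without limit.

MAX_WRITE_BUF = 2 * 1024 * 1024  # 2 MiB cap per connection (assumed value)

class Connection:
    def __init__(self):
        self.write_buf = bytearray()
        self.pending_requests = []  # object hashes the peer asked for

    def queue_object(self, payload: bytes) -> bool:
        """Append object data unless the buffer is already over the cap."""
        if len(self.write_buf) >= MAX_WRITE_BUF:
            return False  # defer; retry after the buffer drains
        self.write_buf += payload
        return True

    def service_requests(self, store):
        """Serve queued requests while staying under the buffer cap."""
        while self.pending_requests:
            if not self.queue_object(store[self.pending_requests[0]]):
                break  # buffer full; remaining requests stay queued
            self.pending_requests.pop(0)
```

With this, a peer requesting thousands of objects only ever pins a bounded amount of RAM per connection; the rest of its requests wait until the socket actually accepts more data.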
Now it's consuming 99.5GB on my workstation. Good thing I have 128GB of RAM.
I'll try to create `uploadthread.py` modelled after `downloadthread.py`, with one `RandomTrackingDict` per connection. This should auto-throttle uploads and, as a side effect, limit the size of the write buffer. A big write buffer also degrades the performance of the `AsyncoreThread`, because `slice_write_buf` needs to move around gigabytes of data.
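To illustrate that last point with a generic example (not PyBitmessage code): trimming the front of a buffer by re-slicing copies everything that remains, so draining a multi-gigabyte buffer in small chunks does a huge amount of redundant copying. Tracking an offset into a `memoryview` copies only the chunk being consumed:

```python
# Generic illustration: two ways to drain a byte buffer chunk by chunk.

class SlicingBuffer:
    """Drains by re-slicing: each consume() copies the whole remainder."""
    def __init__(self, data: bytes):
        self.buf = data

    def consume(self, n: int) -> bytes:
        chunk, self.buf = self.buf[:n], self.buf[n:]  # O(len(buf)) copy
        return chunk

class OffsetBuffer:
    """Drains by advancing an offset: consume() copies only the chunk."""
    def __init__(self, data: bytes):
        self.view = memoryview(data)
        self.pos = 0

    def consume(self, n: int) -> bytes:
        chunk = bytes(self.view[self.pos:self.pos + n])
        self.pos += n
        return chunk
```

Both produce identical output; the difference is that the slicing version does O(total²) work over a full drain, which is why keeping the write buffer small (or avoiding front-slicing entirely) matters once buffers reach gigabyte scale.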