Memory leak when remote peer requests too many objects #1414

Closed
opened 2018-12-18 21:09:50 +01:00 by PeterSurda · 2 comments
PeterSurda commented 2018-12-18 21:09:50 +01:00 (Migrated from github.com)

I noticed huge memory usage on some of my nodes. I managed to add memory usage tracking to PyBM, and it looks like some clients request too many objects (possibly even duplicates), and PyBitmessage then fills the socket write buffers with the object data without any length constraints. This is due to the design of the upload code. In most cases not checking the write buffer size isn't a problem; I don't think a remote node has any other way of causing too much data to be sent than requesting a lot of objects.

On one of my machines I got 11GB RAM consumed by this, even though there weren't that many connections.

There are several approaches to addressing this, which can be combined (a rough sketch of the first two follows the list).

  • limit number of processed upload requests (very easy but reduces transfer performance)
  • limit write buffer size (very easy but may break smooth network transfers, it may actually affect other types of commands)
  • suppress duplicate uploading for some time (easy but incomplete)
  • upload asynchronously, in a separate thread, similarly to how the download thread works (complicated but a proper fix)
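
For illustration, here is a minimal sketch of the first two mitigations (a per-request cap and a write buffer limit). The `Connection` class, `handle_getdata`, `write_buf`, and the constants are hypothetical stand-ins, not the actual PyBitmessage API; they only show the shape of the check.

```python
# Minimal sketch (hypothetical names, not the real PyBitmessage classes):
# cap how many requested objects are served per getdata pass and refuse to
# grow the socket write buffer past a fixed limit.

MAX_WRITE_BUF_SIZE = 2 * 1024 * 1024   # ~2 MiB of queued outgoing data
MAX_OBJECTS_PER_REQUEST = 100          # objects served per getdata pass


class Connection:
    def __init__(self, inventory):
        self.inventory = inventory     # maps inv hash -> serialized object payload
        self.write_buf = bytearray()   # outgoing socket buffer
        self.deferred = set()          # requests we postponed for later

    def handle_getdata(self, requested_hashes):
        served = 0
        for inv_hash in requested_hashes:
            if (served >= MAX_OBJECTS_PER_REQUEST
                    or len(self.write_buf) > MAX_WRITE_BUF_SIZE):
                # Don't let one peer balloon our memory; serve the rest later.
                self.deferred.add(inv_hash)
                continue
            payload = self.inventory.get(inv_hash)
            if payload is None:
                continue               # we don't have it, skip silently
            self.write_buf += payload
            served += 1
```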
PeterSurda commented 2018-12-18 21:51:25 +01:00 (Migrated from github.com)

Now it's consuming 99.5GB on my workstation. Good thing I have 128GB of RAM.

PeterSurda commented 2018-12-18 22:12:39 +01:00 (Migrated from github.com)

I'll try to create `uploadthread.py` modelled after `downloadthread.py` with one `RandomTrackingDict` per connection. This should auto-throttle uploads and, as a side effect, limit the size of the write buffer.

A big write buffer also causes suboptimal performance of the `AsyncoreThread` because `slice_write_buf` needs to move around gigabytes of data.
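
For reference, a rough sketch of how such an upload thread could look, assuming a per-connection `pending_upload` queue with a `random_keys()` helper standing in for `RandomTrackingDict` (the names, constants, and structure here are placeholders, not the actual PyBitmessage code):

```python
# Sketch of an upload thread that drains per-connection request queues in
# small batches, mirroring how a download thread polls its connections.
import threading
import time

MAX_BUF = 2 * 1024 * 1024      # don't queue more than ~2 MiB per connection
REQUESTS_PER_ITERATION = 100   # objects served per connection per loop


class UploadThread(threading.Thread):
    def __init__(self, connections, inventory):
        super().__init__(name="Uploader")
        self.connections = connections   # iterable of connection objects
        self.inventory = inventory       # maps inv hash -> payload bytes
        self.daemon = True

    def run(self):
        while True:
            uploaded = 0
            for conn in self.connections:
                # conn.pending_upload stands in for the per-connection
                # RandomTrackingDict mentioned above.
                if not conn.pending_upload or len(conn.write_buf) > MAX_BUF:
                    continue
                batch = conn.pending_upload.random_keys(REQUESTS_PER_ITERATION)
                for inv_hash in batch:
                    payload = self.inventory.get(inv_hash)
                    if payload is not None:
                        conn.write_buf += payload
                        uploaded += 1
                    del conn.pending_upload[inv_hash]
            if not uploaded:
                time.sleep(0.4)          # nothing to do, back off briefly
```

Because the queue is drained a small batch at a time and skipped whenever the buffer is already full, the write buffer should stay near `MAX_BUF`, which would also keep `slice_write_buf` cheap.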
