Network I/O buffer operations slow #1379
Operations like `slice_write_buf`, `slice_read_buf` and `append_write_buf` are slow because they copy data around and reallocate memory. There are several things that can be done to improve this without major refactoring:

- Instead of growing the buffer with `read_buf.extend`, preallocate it once with `buffer = bytearray(max_size)`. This has some tradeoffs, e.g. more memory is required, and removing data from the buffer then takes more time (so we need to avoid removing data from it). A sketch follows below.
- Use `recv_into`/`recvfrom_into` instead of `recv`/`recvfrom`. These put the data directly into the buffer rather than allocating new strings.

I ran some benchmarks: slicing and appending tops out at about 1 MB/s, even with bytearrays. A preallocated buffer can do about 20 GB/s (20k times better), and a slice of a memoryview about 6 GB/s (6k times better). Obviously it depends on other criteria; I was using 1 kB chunks of data within the buffer.
Some operations don't work on buffers, e.g. you can't use a buffer slice as a dict key, but I think most of these have already been addressed earlier.
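As a concrete illustration of that limitation (the `inventory` dict here is a made-up example, not the project's actual inventory structure): a slice of a memoryview over a writable bytearray cannot be hashed, so the key material still has to be copied out, e.g. with `tobytes()`.

```python
buf = bytearray(b"spam command and payload")
view = memoryview(buf)

inventory = {}
try:
    inventory[view[:4]] = True       # slices over a writable buffer are not hashable
except (TypeError, ValueError):      # TypeError on Python 2, ValueError on Python 3
    inventory[view[:4].tobytes()] = True   # copy out just the key material

print(inventory)                     # {'spam': True} / {b'spam': True}
```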
Edit: Appending to bytearrays doesn't seem to cause performance problems, only slicing from the beginning.
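That matches what a quick `timeit` comparison shows; a rough sketch (numbers vary with machine, buffer size and chunk size, 1 kB chunks assumed here):

```python
import timeit

SETUP = "buf = bytearray(10 * 1024 * 1024); chunk = b'x' * 1024"

# appending to a bytearray amortizes to roughly constant time per call...
append = timeit.timeit("buf.extend(chunk)", setup=SETUP, number=5000)

# ...while trimming the front copies the whole remaining tail every time
trim_front = timeit.timeit("del buf[:1024]", setup=SETUP, number=5000)

print("extend 1 kB:     %.3fs for 5000 ops" % append)
print("trim 1 kB front: %.3fs for 5000 ops" % trim_front)
```

On CPython the front trim is O(remaining length) per call, which is why keeping data in place (or tracking a read offset) wins over repeatedly slicing the head off the read buffer.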