New POW calculation module #1284

Open
Kleshni wants to merge 38 commits from Kleshni/POW into v0.6
Kleshni commented 2018-06-23 11:01:18 +02:00 (Migrated from github.com)

For #1227. It's currently not connected to the PyBitmessage code; I'm going to get that done sooner or later.

It provides the `WorkProver` thread class, which manages tasks and schedules them to solvers. There are four possible solvers:

  • Single-threaded `DumbSolver` in Python. I optimised it so it now works ~2 times faster; in particular, I replaced `hashlib.sha512` with `SHA512` from OpenSSL. Since you need OpenSSL to assemble a message anyway, I think it's safe to use it in a fallback solver. (A sketch of the underlying nonce search follows this list.)

  • Multiprocess-based `ForkingSolver` in Python. It works faster too, because it relies on `DumbSolver`.

  • Multithreaded `FastSolver` in C. I tried to utilize SIMD, but it gave only a small speed-up of about 7 %, so it still calls OpenSSL.

  • And `GPUSolver` in OpenCL.
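
A minimal sketch of the double-SHA512 nonce search that such a fallback solver performs. `hashlib` is used here for brevity, whereas `DumbSolver` as described above binds OpenSSL's SHA512 instead; the function signature is illustrative.

```python
import hashlib
import struct


def findNonce(initialHash, target):
    # Search for a nonce whose trial value (the first 8 bytes of
    # SHA512(SHA512(nonce || initialHash)), read as a big-endian integer)
    # does not exceed the target derived from the proof-of-work difficulty.
    nonce = 0
    trialValue = target + 1

    while trialValue > target:
        nonce += 1
        digest = hashlib.sha512(hashlib.sha512(struct.pack(">Q", nonce) + initialHash).digest()).digest()
        trialValue, = struct.unpack(">Q", digest[:8])

    return nonce
```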

The library was tested on a 4-core AMD64 Linux machine with an integrated GPU and on 45-CPU virtual machines with hyperthreading enabled:

  • X86 Linux;
  • AMD64 FreeBSD;
  • AMD64 OpenBSD with the `bsd.mp` kernel;
  • AMD64 NetBSD;
  • AMD64 Windows 7.

I failed to install Hackintosh on QEMU, so somebody else has to test it there.

PeterSurda (Migrated from github.com) reviewed 2018-06-23 11:01:18 +02:00
PeterSurda commented 2018-06-23 11:05:03 +02:00 (Migrated from github.com)

Have you tried frozen mode on Windows?

Kleshni commented 2018-06-23 11:31:28 +02:00 (Migrated from github.com)

No, but `ForkingSolver` is disabled in that case, in keeping with the current behaviour. All file paths (to the C library and the OpenCL kernel) are relative to the code directory, whose path must be provided by the calling code.
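
For reference, the usual way a frozen (PyInstaller/py2exe) build is detected in Python is shown below; the solver names are only illustrative, not necessarily the keys the module uses.

```python
import sys

# ForkingSolver is skipped in a frozen build, matching the behaviour described above.
if getattr(sys, "frozen", False):
    candidateSolvers = ["gpu", "fast", "dumb"]
else:
    candidateSolvers = ["gpu", "fast", "forking", "dumb"]
```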

omkar1117 (Migrated from github.com) requested changes 2018-06-23 12:09:35 +02:00
@ -0,0 +1,42 @@
Please keep this module independent from the outside code, so that it can be reused in other applications.
omkar1117 (Migrated from github.com) commented 2018-06-23 11:17:58 +02:00

In that case, why can't we make it a separate package and import it where required?

omkar1117 (Migrated from github.com) commented 2018-06-23 11:19:26 +02:00

`pass` is a dangerous statement; please log it or at least write a print statement...
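
Roughly what is being asked for, I assume: record the failure instead of silently passing. The call inside the `try` block and the logger name are placeholders.

```python
import logging

logger = logging.getLogger(__name__)

try:
    solver = loadFastSolver()          # placeholder for whatever raises here
except Exception:
    logger.exception("Falling back to the dumb solver")   # logs the traceback instead of a bare pass
    solver = None
```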

omkar1117 (Migrated from github.com) commented 2018-06-23 11:23:40 +02:00

parallelism = self.solver.parallelism if self.solver else 0

omkar1117 (Migrated from github.com) commented 2018-06-23 11:27:44 +02:00

if not name and not self.solverName

omkar1117 (Migrated from github.com) commented 2018-06-23 11:28:02 +02:00

logging is required for this functionality.

omkar1117 (Migrated from github.com) commented 2018-06-23 11:28:26 +02:00

if not self.solver

omkar1117 (Migrated from github.com) commented 2018-06-23 11:28:41 +02:00

if not name

omkar1117 (Migrated from github.com) commented 2018-06-23 11:45:55 +02:00

doc string required here

omkar1117 (Migrated from github.com) commented 2018-06-23 11:46:34 +02:00

docstring here

omkar1117 (Migrated from github.com) commented 2018-06-23 11:47:06 +02:00

pep8 validation

omkar1117 (Migrated from github.com) commented 2018-06-23 11:50:56 +02:00

docstring here please

omkar1117 (Migrated from github.com) commented 2018-06-23 11:53:15 +02:00

if not self.tasks

omkar1117 (Migrated from github.com) commented 2018-06-23 11:53:58 +02:00

pep8 validation

omkar1117 (Migrated from github.com) commented 2018-06-23 11:56:19 +02:00

@PeterSurda, I think we need optimization here.

@ -0,0 +90,4 @@
self.currentTaskID = None
def notifyStatus(self):
if self.statusUpdated is None:
omkar1117 (Migrated from github.com) commented 2018-06-23 11:26:47 +02:00

if not self.statusUpdated:
    return

@ -0,0 +154,4 @@
self.notifyStatus()
def addTask(self, ID, headlessPayload, TTL, expiryTime, byteDifficulty, lengthExtension):
omkar1117 (Migrated from github.com) commented 2018-06-23 11:33:56 +02:00

add docs to the function.

@ -0,0 +223,4 @@
task = self.tasks[self.currentTaskID]
if task.expiryTime is None:
omkar1117 (Migrated from github.com) commented 2018-06-23 11:56:49 +02:00

if not task.expiryTime

@ -0,0 +234,4 @@
appendedSeed = self.seed + struct.pack(">Q", self.roundsCounter)
self.roundsCounter += 1
try:
omkar1117 (Migrated from github.com) commented 2018-06-23 11:57:09 +02:00

pep8 validation

omkar1117 (Migrated from github.com) commented 2018-06-23 11:57:55 +02:00

docs here and pep8 validation.

omkar1117 (Migrated from github.com) commented 2018-06-23 11:58:58 +02:00

doc type required here

omkar1117 (Migrated from github.com) commented 2018-06-23 11:59:54 +02:00

documentation required here

@ -0,0 +2,4 @@
import os.path
import platform
import subprocess
import sys
omkar1117 (Migrated from github.com) commented 2018-06-23 11:58:39 +02:00

please write all the imports in alphabetic order

@ -0,0 +13,4 @@
if platform.architecture()[0] == "64bit":
suffix = "-64"
omkar1117 (Migrated from github.com) commented 2018-06-23 11:59:12 +02:00

pep8 validation

omkar1117 (Migrated from github.com) commented 2018-06-23 12:07:47 +02:00

remove unnecessary spaces

@ -0,0 +1,117 @@
import multiprocessing
import os
omkar1117 (Migrated from github.com) commented 2018-06-23 12:02:19 +02:00

pep8 validation for the file

Kleshni (Migrated from github.com) reviewed 2018-06-23 12:19:14 +02:00
@ -0,0 +1,42 @@
Please keep this module independent from the outside code, so that it can be reused in other applications.
Kleshni (Migrated from github.com) commented 2018-06-23 12:19:14 +02:00

It's a Python package with an `__init__.py` file, and it's intended to be imported like `import workprover`. It could be moved to a separate repository, but I think it's easier and safer to keep it in the same file tree.

Kleshni (Migrated from github.com) reviewed 2018-06-23 12:32:58 +02:00
Kleshni (Migrated from github.com) commented 2018-06-23 12:32:58 +02:00

`self.availableSolvers` is visible to the outside code, so it can log or print if `fast` is missing. It should show a message in the GUI status bar like the current code does.

Kleshni (Migrated from github.com) reviewed 2018-06-23 12:34:25 +02:00
Kleshni (Migrated from github.com) commented 2018-06-23 12:34:24 +02:00

It's less readable.

Kleshni (Migrated from github.com) reviewed 2018-06-23 12:39:07 +02:00
@ -0,0 +90,4 @@
self.currentTaskID = None
def notifyStatus(self):
if self.statusUpdated is None:
Kleshni (Migrated from github.com) commented 2018-06-23 12:39:06 +02:00

It's less accurate.

`self.statusUpdated` must be either a function or `None`. If it were 0 or an empty string, that would be a programming error, and a further call would raise an exception, making the error noticeable.
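
A small illustration of the distinction; the class and values below are made up for the example, not taken from the PR.

```python
class Example(object):
    def __init__(self, statusUpdated=None):
        self.statusUpdated = statusUpdated

    def notifyStatus(self, status):
        # Only a genuinely absent callback is skipped; a wrong value such as
        # 0 or "" still reaches the call below and raises, exposing the bug.
        if self.statusUpdated is None:
            return

        self.statusUpdated(status)


Example().notifyStatus("ready")                  # silently does nothing
Example(statusUpdated=0).notifyStatus("ready")   # TypeError: 'int' object is not callable
```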

Kleshni (Migrated from github.com) reviewed 2018-06-23 12:44:27 +02:00
Kleshni (Migrated from github.com) commented 2018-06-23 12:44:26 +02:00

It calls the `self.statusUpdated` callback, and the outside code should log this and display it in the GUI.

g1itch (Migrated from github.com) reviewed 2018-06-23 13:50:50 +02:00
g1itch (Migrated from github.com) left a comment

Seems to work for me: two "singleWorker" threads appear when doing PoW. Oh, I realized it's not connected so far :(

g1itch commented 2018-06-23 14:25:37 +02:00 (Migrated from github.com)

@Kleshni `workprover.test` also contains tabs

g1itch commented 2018-06-23 14:48:01 +02:00 (Migrated from github.com)

Some tests [are failing](https://travis-ci.org/g1itch/PyBitmessage/builds/395814851). It seems to be expected for `TestGPUSolver`; what about the rest?

Kleshni commented 2018-06-23 15:26:38 +02:00 (Migrated from github.com)

`TestSolver` is not a test case, it's a base class for other test cases. Now it should be skipped during automatic test discovery.

`TestFastSolver` fails to load the compiled `libfastsolver.so` for a reason I don't know. A print in the corresponding `except` could clarify this.
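
I don't know exactly how the skipping is done in this branch, but one common pattern is to keep the shared checks in a class that is not itself a `unittest.TestCase`; everything below is an illustration, not the PR's code.

```python
import unittest


class SolverChecks(object):
    """Shared assertions; not a TestCase, so discovery never collects it on its own."""

    def test_parallelism_is_positive(self):
        self.assertGreaterEqual(self.makeSolver().parallelism, 1)


class FakeSolver(object):
    """Stand-in for a real solver class, just for this illustration."""
    parallelism = 1


class TestFakeSolver(SolverChecks, unittest.TestCase):
    def makeSolver(self):
        return FakeSolver()


if __name__ == "__main__":
    unittest.main()
```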

g1itch commented 2018-06-23 17:09:41 +02:00 (Migrated from github.com)

[Strange thing](https://travis-ci.org/g1itch/PyBitmessage/builds/395846736):

OSError: /home/travis/build/g1itch/PyBitmessage/src/workprover/fastsolver/libfastsolver.so: undefined symbol: SHA512_Update

It seems to be specific to ubuntu:trusty or Travis-CI.

Nevertheless, could you please add `pybitmessage.workprover` to `packages` and decorate `TestGPUSolver` like I did?

Kleshni commented 2018-06-25 10:04:14 +02:00 (Migrated from github.com)

I have registered with Travis-CI and debugged the issue. It was a very stupid mistake: shared libraries like `-lpthread` and `-lcrypto` should be listed after the object files on the linker command line.

Travis-CI also gives an OS X environment, so I tested the module there.

PeterSurda commented 2018-06-27 11:42:20 +02:00 (Migrated from github.com)

`packages/pyinstaller/bitmessagemain.spec` needs updating.

g1itch (Migrated from github.com) reviewed 2018-06-27 11:47:35 +02:00
g1itch (Migrated from github.com) commented 2018-06-27 11:47:35 +02:00

Could you please comment on #1283 and maybe change the import order too, if you agree?

g1itch commented 2018-06-28 16:39:13 +02:00 (Migrated from github.com)

Could you please also add the extension and a workaround for dh_python in Debian: a9955e5?

Kleshni commented 2018-06-28 17:00:26 +02:00 (Migrated from github.com)

Maybe just invoke the makefile from `setup.py`?

g1itch commented 2018-06-28 17:10:50 +02:00 (Migrated from github.com)

The extension approach is more pythonic and there is no need for a makefile. You can see it even in the [tests log](https://travis-ci.org/g1itch/PyBitmessage/builds/397318626).
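
For context, a sketch of what the `Extension` approach could look like in `setup.py`; the module path, source file and library list below are my assumptions, not this PR's actual build configuration.

```python
from setuptools import setup, Extension

setup(
    name="pybitmessage",
    ext_modules=[
        Extension(
            "pybitmessage.workprover.fastsolver",                 # assumed module path
            sources=["src/workprover/fastsolver/fastsolver.c"],   # assumed source location
            libraries=["crypto", "pthread"],                      # linked after the objects by the build machinery
        ),
    ],
)
```

Building it as an `ext_modules` entry presumes the C code exports a CPython init function, which is exactly the objection raised in the reply below.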

Kleshni commented 2018-06-28 17:20:43 +02:00 (Migrated from github.com)

But that raises the need for a workaround, while the makefile is needed anyway. And a pythonic extension is not a `ctypes` library; it must use the Python C API to be directly importable:

A C extension for CPython is a shared library (e.g. a .so file on Linux, .pyd on Windows), which exports an initialization function.

omkar1117 (Migrated from github.com) requested changes 2018-07-01 18:48:05 +02:00
omkar1117 (Migrated from github.com) left a comment

Imports should be placed at the top of the file in alphabetic order.

After that, `from` statements should be written in alphabetic order.

@ -13,2 +13,4 @@
import socket
import threading
import time
from binascii import hexlify, unhexlify
omkar1117 (Migrated from github.com) commented 2018-07-01 18:44:52 +02:00

Please place all the imports in order

omkar1117 (Migrated from github.com) commented 2018-07-01 18:45:46 +02:00

Better to log instead of passing on the exception.
A bare `pass` statement here could be dangerous.

omkar1117 (Migrated from github.com) commented 2018-07-01 18:47:05 +02:00

Place these imports at the start of the script.

g1itch commented 2018-07-05 09:22:06 +02:00 (Migrated from github.com)

Are you going to connect the new POW code soon? This PR still doesn't include the actual use of this code for doing work.

Kleshni commented 2018-07-05 09:27:19 +02:00 (Migrated from github.com)

Yes, I'm rewriting the `class_singleWorker` module.

Kleshni commented 2018-07-07 17:45:34 +02:00 (Migrated from github.com)

I almost finished rewriting `class_singleWorker`, but I have a problem with the function for requesting pubkeys. It uses the `retrynumber` column in the database for determining the TTL and raising it exponentially on every next attempt.

But a proper implementation of this scheme gets very complicated if two or more messages need the same pubkey and have different retry numbers. It would be wise not to send two simultaneous getpubkey requests in this case, and not to send one at all if a request is already present in the inventory. So there emerges a question of how to handle the `retrynumber` column in such situations, and what to do with it when some other user sends a getpubkey request for a key we also need.

I think we can get rid of this column; I mean, use it only for resending the message itself but ignore it for getpubkeys. This would simplify the algorithm: just check that there are no requests in the inventory, and send one with a fixed TTL.
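
A rough sketch of that simplified flow; every name below is illustrative, not the actual PyBitmessage API.

```python
GETPUBKEY_TTL = 28 * 24 * 60 * 60       # fixed TTL instead of exponential retrynumber growth


def maybeRequestPubkey(tag, inventoryGetpubkeyTags, sendGetpubkey):
    # If a matching request already circulates (ours or somebody else's),
    # there is nothing to do; otherwise send a single request with a fixed TTL.
    if tag in inventoryGetpubkeyTags:
        return

    sendGetpubkey(tag, GETPUBKEY_TTL)
```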

The comment near the [line](https://github.com/Bitmessage/PyBitmessage/blob/529559d06a3cd1c89838482acaf0fd16a018a057/src/class_singleWorker.py#L1324) calculating the TTL says that the 2.5-day period was chosen "fairly arbitrarily". Maybe redefine it to the maximum of 28 days? Getpubkey requests are negligibly small and are sent with the default network difficulty; they don't need too much work.

There are currently 13661 getpubkeys in the inventory, but only 84 of them have different tags! The others are duplicates, which I propose to avoid by simplifying the sending algorithm. And there are 40939 different pubkeys, so I conclude that the need for getpubkeys is very small, and there is no big reason to save on short TTLs.

Less frequent resending and not repeating already existing requests can also affect privacy in a positive way.

(Another question: the current requesting function specifies the object type as 0 in one place and 1 in another, but I think that's certainly a bug.)

omkar1117 commented 2018-07-09 05:07:30 +02:00 (Migrated from github.com)

@Kleshni can I also have the functional flow along with the PR? It would make it easier to test the internals.

If possible, can you attach a doc with some snapshots, please?

Kleshni commented 2018-07-09 13:20:30 +02:00 (Migrated from github.com)

Sorry, I don't know what the "functional flow" is. I tried to google it, but it seems that I would have to learn a lot of new material to satisfy your request.

with some snapshots

Snapshots of what?

Maybe I should add doc strings to functions or something like that?

Kleshni commented 2018-07-16 05:03:50 +02:00 (Migrated from github.com)

I don't completely understand why the TTL randomization is needed:

TTL += random.randrange(-300, 300)
expiryTime = int(time.time() + TTL)

Possibly, it's supposed to hide the origin node of the message because it makes neighbour nodes unable to tell whether the message was generated right now or its retranslation took some time.

But it seems very weak:

  1. Maybe it could hide singular messages, but if a node generates messages continuously (like the time service broadcast), the random addition averages out to its expected value of -0.5 seconds (the mean of `random.randrange(-300, 300)`) - a constant.

  2. An attacker can guess the original TTL, which is usually a multiple of 3600 seconds. If the expiry time of a received object is `currentTime + originalTTL + 299`, then it's obvious that the random addition was 299 seconds - it can't be more. In this case it hides nothing.

  3. TTL affects POW difficulty, POW difficulty affects the nonce value, the nonce value is known to everybody. It's another way to leak the real sending time.

I don't know how to solve the first two problems, but the last can be solved by randomizing the expiry time instead:

TTL += 599
expiryTime = int(time.time() + TTL - random.randrange(600))

So I propose to change the code this way.
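
A quick check of point 1: the current addition has an expected value of about -0.5 seconds, so over many messages it behaves like a constant offset.

```python
import random

samples = [random.randrange(-300, 300) for _ in range(1000000)]
print(sum(samples) / float(len(samples)))   # prints approximately -0.5
```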

Kleshni commented 2018-07-20 02:15:38 +02:00 (Migrated from github.com)

I deleted the file `bitmessageqt/settings.py`. It was originally generated by the `pyuic4` utility from `bitmessageqt/settings.ui` and was not actually needed, because `*.ui` files can be used directly without conversion to `*.py`.

The generated files must remain untouched because they need regeneration on every change to the source files, but `settings.py` was manually modified one day. This prevented easy modification of the settings window, so I solved it by moving the changes out of this file and deleting it.
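
For illustration, this is how a `.ui` file can be loaded directly with PyQt4, without a `pyuic4`-generated module; the class name and file path here are examples, not the PR's code.

```python
from PyQt4 import QtGui, uic


class SettingsDialog(QtGui.QDialog):
    def __init__(self, parent=None):
        super(SettingsDialog, self).__init__(parent)
        uic.loadUi("bitmessageqt/settings.ui", self)   # builds the widgets described in the .ui file onto self
```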

Kleshni commented 2018-07-22 09:46:46 +02:00 (Migrated from github.com)

The same for the main window.

omkar1117 (Migrated from github.com) reviewed 2018-07-23 08:13:31 +02:00
@ -56,3 +51,3 @@
from statusbar import BMStatusBar
from network.asyncore_pollchoose import set_rates
import sound
import re
omkar1117 (Migrated from github.com) commented 2018-07-20 18:03:47 +02:00

please write the imports at the start of the file in alphabetic order

omkar1117 (Migrated from github.com) commented 2018-07-23 08:13:08 +02:00

can't we make it dynamic or iterable?

Kleshni (Migrated from github.com) reviewed 2018-07-23 08:25:20 +02:00
Kleshni (Migrated from github.com) commented 2018-07-23 08:25:20 +02:00

We just need to delete these splitters.

Kleshni commented 2018-07-23 08:26:38 +02:00 (Migrated from github.com)

Added a right-click menu option to cancel a message or broadcast. "Move to Trash" now also cancels POW before deleting.

omkar1117 (Migrated from github.com) requested changes 2018-07-25 06:09:30 +02:00
@ -52,8 +52,8 @@ else:
arch=64
omkar1117 (Migrated from github.com) commented 2018-07-25 05:55:47 +02:00

PEP8 validation required here.

omkar1117 (Migrated from github.com) commented 2018-07-25 06:02:59 +02:00

please add doc string to the class.

omkar1117 (Migrated from github.com) commented 2018-07-25 06:03:47 +02:00

please add doc string to the class

omkar1117 (Migrated from github.com) commented 2018-07-25 06:04:41 +02:00

please write all the imports at the start of the file.

omkar1117 (Migrated from github.com) commented 2018-07-25 06:05:06 +02:00

Please place all the imports at the start of the file...

@ -37,1 +37,4 @@
def resendStaleMessages():
staleMessages = sqlQuery("""
SELECT "toaddress", "ackdata", "status" FROM "sent"
omkar1117 (Migrated from github.com) commented 2018-07-25 06:06:11 +02:00

Pep8 validation here.

@ -161,0 +195,4 @@
stream, readLength = addresses.decodeVarint(payload[readPosition: readPosition + 9])
readPosition += readLength
tag = buffer(payload[readPosition: readPosition + 32]) # May be shorter than 32 bytes for getpubkeys
omkar1117 (Migrated from github.com) commented 2018-07-25 06:06:34 +02:00

pep8 validation here.

omkar1117 (Migrated from github.com) commented 2018-07-25 06:08:09 +02:00

if not name and not self.solverName:
    pass

omkar1117 (Migrated from github.com) commented 2018-07-25 06:08:25 +02:00

if self.solver:
    ….

omkar1117 (Migrated from github.com) commented 2018-07-25 06:08:36 +02:00

if name:
    ….

@ -0,0 +1,262 @@
import Queue
omkar1117 (Migrated from github.com) commented 2018-07-25 06:07:41 +02:00

please place all the imports in alphabetic order.

Kleshni (Migrated from github.com) reviewed 2018-07-25 07:05:13 +02:00
Kleshni (Migrated from github.com) commented 2018-07-25 07:05:12 +02:00

These imports must be conditional.
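
A minimal example of what a conditional import looks like in this context; `pyopencl` is just an example of a solver-specific dependency that may be absent.

```python
try:
    import pyopencl                 # only needed when the GPU solver is selected
except ImportError:
    pyopencl = None                 # the rest of the module still works without it
```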

Kleshni commented 2018-07-29 13:24:00 +02:00 (Migrated from github.com)

Added API methods for dealing with raw objects in the inventory, solving #1225 and superseding #1226.

`disseminateRawObject`

Argument: a hex-encoded object without packet headers, starting from the nonce.

Tries to send the object to the network. POW must already be calculated.

Returns an error or the object's inventory hash.

`getRawObject`

Argument: an inventory hash.

Returns an error or a JSON object with the following fields:

  • `hash` - the inventory hash;
  • `expiryTime` - a UNIX timestamp;
  • `objectType` - an integer;
  • `stream`;
  • `tag` - the hex-encoded object tag;
  • `payload` - the hex-encoded object without packet headers, starting from the nonce.

`listRawObjects`

Parameters: the desired object type or `None`, stream number or `None`, hex-encoded tag or `None`.

Returns a list of hex-encoded hashes.

`queueRawObject`

Arguments: TTL in seconds and a hex-encoded object without packet headers, nonce and expiry time, starting from the object type.

Queues the object for POW calculation and sending to the network.

Returns a unique handle to track the object's state.

`cancelQueuedRawObject`

Argument: a handle returned from the `queueRawObject` method.

Tries to cancel a previously queued object.

`checkQueuedRawObject`

Argument: a handle returned from the `queueRawObject` method.

Returns an array with the first element being the current status of the object:

  • `queued`;
  • `doingwork`;
  • `failed` for invalid objects;
  • `sent`. In this case the second item of the array is the hex-encoded inventory hash;
  • `canceled`;
  • `notfound` for wrong handles and for handles that previously returned `failed`, `sent` or `canceled`.

If a queued object enters the `notfound` state, it can mean that the daemon was restarted, because queued objects are not saved to disk.
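
A hypothetical client-side example over PyBitmessage's XML-RPC API; the address, credentials and exact argument conventions below are assumptions, only the method names come from this comment.

```python
import xmlrpclib

api = xmlrpclib.ServerProxy("http://apiusername:apipassword@127.0.0.1:8442/")

rawObjectHex = "..."                                         # hex-encoded object prepared elsewhere
handle = api.queueRawObject(4 * 24 * 60 * 60, rawObjectHex)  # TTL in seconds, then the payload

status = api.checkQueuedRawObject(handle)
if status[0] == "sent":
    print(api.getRawObject(status[1]))                       # second item is the inventory hash
```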

Kleshni commented 2018-07-29 13:47:44 +02:00 (Migrated from github.com)

I deleted the `getMessageDataByDestination{Hash,Tag}` methods because they were undocumented and didn't work at all, so I concluded that no-one ever used them. They are now replaced by `getRawObject` and `listRawObjects`.

The same for `disseminatePreEncryptedMsg` and `disseminatePubkey`.

g1itch (Migrated from github.com) reviewed 2018-08-02 15:52:45 +02:00
g1itch (Migrated from github.com) commented 2018-08-02 15:52:45 +02:00

undefined name 'GPUSolverError'

g1itch commented 2018-08-02 16:00:14 +02:00 (Migrated from github.com)

I got 3 additional threads (`workprover.WorkProver` instances) which don't stop when I run `src/pybitmessage.py -t`:
https://travis-ci.org/g1itch/PyBitmessage/builds/411231409

This patch helped:

diff --git a/src/singleworker.py b/src/singleworker.py
index 9fa1391c..e1eeda3f 100644
--- a/src/singleworker.py
+++ b/src/singleworker.py
@@ -270,17 +270,19 @@ def setBestSolver():
 
 setBestSolver()
 
+
 class singleWorker(threading.Thread, helper_threading.StoppableThread):
     name = "singleWorker"
 
     def __init__(self):
-        super(self.__class__, self).__init__()
+        super(self.__class__, self).__init__(name="singleWorker")
 
         self.initStop()
 
     def stopThread(self):
         queues.workerQueue.put(("stopThread", "data"))
 
+        workProver.commandsQueue.put(("shutdown", ))
         super(self.__class__, self).stopThread()
 
     def run(self):
Kleshni commented 2018-09-17 13:46:39 +02:00 (Migrated from github.com)

Any plans on merging?

dimyme commented 2018-09-20 13:21:57 +02:00 (Migrated from github.com)

Your POW branch works nicely here! Great job!

PeterSurda commented 2018-09-28 11:41:42 +02:00 (Migrated from github.com)

@Kleshni I haven't looked at it yet, but it first needs to pass all the code quality checks and all commits need to be signed (you can squash commits if it helps).

However, others gave good feedback so chances are good.

PeterSurda commented 2018-09-28 11:45:28 +02:00 (Migrated from github.com)

Also, please split this into multiple patches. The PoW, the API and the other changes should be separate. You'll have better chances of getting it merged that way too.

PeterSurda commented 2019-04-08 11:25:07 +02:00 (Migrated from github.com)

@Kleshni would it be ok if I assigned someone else to the task to help you with cleaning it up? You'd have to give them write access to your repo.

Kleshni commented 2019-04-24 07:19:14 +02:00 (Migrated from github.com)

OK.

PeterSurda commented 2019-04-24 08:53:25 +02:00 (Migrated from github.com)

@Kleshni please give @omkar1117 write access to the POW branch. This way your commits will be preserved and you can still get paid through tip4commit.

Kleshni commented 2019-04-25 10:00:00 +02:00 (Migrated from github.com)

I have sent him an invite yesterday but forgot to notify you 👀

omkar1117 commented 2019-05-08 19:10:07 +02:00 (Migrated from github.com)

@Kleshni could you please resolve the conflicts?

PeterSurda commented 2019-06-08 08:46:13 +02:00 (Migrated from github.com)

@Kleshni Omkar says he doesn't have write access. He could fork it into his own repo, but since this PR is already open I would prefer to continue this way. Can you check and let me know?

Kleshni commented 2019-06-11 10:30:41 +02:00 (Migrated from github.com)

This is what I see in the repository settings:
![Screenshot](https://files.catbox.moe/i8o2jh.png)
