merge refactorinsert into travis' ui work

Kevin Froman 2019-06-28 17:26:37 -05:00
commit d70afbf92b
115 changed files with 2280 additions and 1556 deletions

View File

@@ -11,7 +11,7 @@
(***pre-alpha & experimental, not well tested or easy to use yet***)
[![Open Source Love](https://badges.frapsoft.com/os/v3/open-source.png?v=103)](https://github.com/ellerbrock/open-source-badges/)
-<img src='https://gitlab.com/beardog/Onionr/badges/master/build.svg'> - [Onionr.net](https://onionr.net/)
+<img src='https://gitlab.com/beardog/Onionr/badges/master/build.svg'> - [Onionr.net](https://onionr.net/) - [.onion](http://onionr.onionkvc5ibm37bmxwr56bdxcdnb6w3wm4bdghh5qo6f6za7gn7styid.onion/)
<hr>
@@ -19,7 +19,7 @@
# About
-Onionr is a decentralized, peer-to-peer communication and storage network, designed to be anonymous and resistant to (meta)data analysis, spam, and corruption.
+Onionr is a decentralized, peer-to-peer communication network, designed to be anonymous and resistant to (meta)data analysis, spam, and corruption.
Onionr stores data in independent packages referred to as 'blocks'. The blocks are synced to all other nodes in the network. Blocks and user IDs cannot be easily proven to have been created by a particular user. Even if there is enough evidence to believe that a specific user created a block, nodes still operate behind Tor or I2P and as such cannot be trivially unmasked.
@@ -85,7 +85,7 @@ The following applies to Ubuntu Bionic. Other distros may have different package
`$ sudo apt install python3-pip python3-dev tor`
-* Have python3.6+, python3-pip, Tor (daemon, not browser) installed (python3-dev recommended)
+* Have python3.6+, python3-pip, Tor (daemon, not browser) installed. python3-dev is recommended.
* Clone the git repo: `$ git clone https://gitlab.com/beardog/onionr`
* cd into install direction: `$ cd onionr/`
* Install the Python dependencies ([virtualenv strongly recommended](https://virtualenv.pypa.io/en/stable/userguide/)): `$ pip3 install --require-hashes -r requirements.txt`
@@ -123,7 +123,7 @@ Note: probably not tax deductible
Email: beardog [ at ] mailbox.org
-Onionr Mail: TRH763JURNY47QPBTTQ4LLPYCYQK6Q5YA33R6GANKZK5C5DKCIGQ====
+Onionr Mail: TRH763JURNY47QPBTTQ4LLPYCYQK6Q5YA33R6GANKZK5C5DKCIGQ
## Disclaimers and legal

docs/README.md Normal file
View File

@@ -0,0 +1,19 @@
# Onionr Documentation
The Onionr [whitepaper](whitepaper.md) is the best place to start for both users and developers.
## User Documentation
* [Installation](usage/install.md)
* [First steps](usage/firststeps.md)
* [Using Onionr Mail](usage/mail.md)
* [Using Onionr web pages](usage/pages.md)
* [Staying safe/anonymous](usage/safety.md)
## Developer Documentation
* [Development environment setup](dev/setup.md)
* [Technical overview](dev/overview.md)
* [Project layout](dev/layout.md)
* [Plugin development guide](dev/plugins.md)
* [Testing](dev/testing.md)

docs/usage/firststeps.md Normal file
View File

@@ -0,0 +1,7 @@
# Onionr First Steps
After installing Onionr, there are several things to do:
1. Set up a [deterministic address](usage/deterministic.md) (optional)
2. Add friends' IDs
3. Publish your ID

docs/usage/install.md Normal file
View File

@@ -0,0 +1,13 @@
# Onionr Installation
The following steps work, broadly speaking, on Windows, Mac, and Linux.
1. Verify Python 3.6+ is installed; if it's not, see https://www.python.org/downloads/ (a quick check is sketched below)
2. Verify Tor is installed (it does not need to be running; the binary can be placed in the system path or the Onionr directory)
3. [Optional but recommended]: set up a virtual environment using [virtualenv](https://virtualenv.pypa.io/en/latest/) and activate it
4. Clone Onionr: `git clone https://gitlab.com/beardog/onionr`
5. Install the Python module dependencies: `pip3 install --require-hashes -r requirements.txt`
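A minimal check for step 1, run from Python itself; the requirement comes from the docs above, the exact message wording is my own:

```python
# Confirm the interpreter meets the Python 3.6+ requirement before installing anything else.
import sys

if sys.version_info < (3, 6):
    sys.exit('Onionr requires Python 3.6 or higher; found %d.%d' % sys.version_info[:2])
```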

View File

@@ -23,12 +23,12 @@ from gevent import Timeout
import flask
from flask import request, Response, abort, send_from_directory
import core
-from onionrblockapi import Block
-import onionrutils, onionrexceptions, onionrcrypto, blockimporter, onionrevents as events, logger, config
+import onionrexceptions, onionrcrypto, blockimporter, onionrevents as events, logger, config, onionrblockapi
import httpapi
from httpapi import friendsapi, profilesapi, configapi, miscpublicapi
from onionrservices import httpheaders
import onionr
+from onionrutils import bytesconverter, stringvalidators, epoch, mnemonickeys
config.reload()
class FDSafeHandler(WSGIHandler):
@@ -99,7 +99,7 @@ class PublicAPI:
resp = httpheaders.set_default_onionr_http_headers(resp)
# Network API version
resp.headers['X-API'] = onionr.API_VERSION
-self.lastRequest = clientAPI._core._utils.getRoundedEpoch(roundS=5)
+self.lastRequest = epoch.get_rounded_epoch(roundS=5)
return resp
@app.route('/')
@@ -178,9 +178,8 @@ class API:
self.debug = debug
self._core = onionrInst.onionrCore
-self.startTime = self._core._utils.getEpoch()
+self.startTime = epoch.get_epoch()
self._crypto = onionrcrypto.OnionrCrypto(self._core)
-self._utils = onionrutils.OnionrUtils(self._core)
app = flask.Flask(__name__)
bindPort = int(config.get('client.client.port', 59496))
self.bindPort = bindPort
@@ -335,9 +334,9 @@ class API:
@app.route('/getblockbody/<name>')
def getBlockBodyData(name):
resp = ''
-if self._core._utils.validateHash(name):
+if stringvalidators.validate_hash(name):
try:
-resp = Block(name, decrypt=True).bcontent
+resp = onionrblockapi.Block(name, decrypt=True).bcontent
except TypeError:
pass
else:
@@ -347,7 +346,7 @@ class API:
@app.route('/getblockdata/<name>')
def getData(name):
resp = ""
-if self._core._utils.validateHash(name):
+if stringvalidators.validate_hash(name):
if name in self._core.getBlockList():
try:
resp = self.getBlockData(name, decrypt=True)
@@ -372,9 +371,9 @@ class API:
def site(name):
bHash = name
resp = 'Not Found'
-if self._core._utils.validateHash(bHash):
+if stringvalidators.validate_hash(bHash):
try:
-resp = Block(bHash).bcontent
+resp = onionrblockapi.Block(bHash).bcontent
except onionrexceptions.NoDataAvailable:
abort(404)
except TypeError:
@@ -433,7 +432,7 @@ class API:
@app.route('/getHumanReadable/<name>')
def getHumanReadable(name):
-return Response(self._core._utils.getHumanReadableID(name))
+return Response(mnemonickeys.get_human_readable_ID(name))
@app.route('/insertblock', methods=['POST'])
def insertBlock():
@@ -498,14 +497,14 @@ class API:
def getUptime(self):
while True:
try:
-return self._utils.getEpoch() - self.startTime
+return epoch.get_epoch() - self.startTime
except (AttributeError, NameError):
# Don't error on race condition with startup
pass
def getBlockData(self, bHash, decrypt=False, raw=False, headerOnly=False):
-assert self._core._utils.validateHash(bHash)
-bl = Block(bHash, core=self._core)
+assert stringvalidators.validate_hash(bHash)
+bl = onionrblockapi.Block(bHash, core=self._core)
if decrypt:
bl.decrypt()
if bl.isEncrypted and not bl.decrypted:
@@ -521,8 +520,8 @@ class API:
pass
else:
validSig = False
-signer = self._core._utils.bytesToStr(bl.signer)
-if bl.isSigned() and self._core._utils.validatePubKey(signer) and bl.isSigner(signer):
+signer = bytesconverter.bytes_to_str(bl.signer)
+if bl.isSigned() and stringvalidators.validate_pub_key(signer) and bl.isSigner(signer):
validSig = True
bl.bheader['validSig'] = validSig
bl.bheader['meta'] = ''
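For context on the `epoch.get_epoch()` / `epoch.get_rounded_epoch()` calls this diff switches to, a minimal sketch of what such helpers might look like; only the names and the `roundS` parameter appear in the diff, the implementation below is an assumption.

```python
# Hypothetical sketch of onionrutils/epoch.py; names match the calls above, internals are assumed.
from math import floor
from time import time

def get_epoch():
    '''Return the current Unix time as an integer (assumed behavior).'''
    return floor(time())

def get_rounded_epoch(roundS=60):
    '''Return the epoch rounded down to the nearest roundS seconds (assumed behavior).'''
    epoch_time = get_epoch()
    return epoch_time - (epoch_time % roundS)
```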

View File

@@ -18,6 +18,7 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import core, onionrexceptions, logger
+from onionrutils import validatemetadata, blockmetadata
def importBlockFromData(content, coreInst):
retData = False
@@ -34,17 +35,17 @@ def importBlockFromData(content, coreInst):
except AttributeError:
pass
-metas = coreInst._utils.getBlockMetadataFromData(content) # returns tuple(metadata, meta), meta is also in metadata
+metas = blockmetadata.get_block_metadata_from_data(content) # returns tuple(metadata, meta), meta is also in metadata
metadata = metas[0]
-if coreInst._utils.validateMetadata(metadata, metas[2]): # check if metadata is valid
+if validatemetadata.validate_metadata(coreInst, metadata, metas[2]): # check if metadata is valid
if coreInst._crypto.verifyPow(content): # check if POW is enough/correct
-logger.info('Block passed proof, saving.')
+logger.info('Block passed proof, saving.', terminal=True)
try:
blockHash = coreInst.setData(content)
except onionrexceptions.DiskAllocationReached:
pass
else:
coreInst.addToBlockDB(blockHash, dataSaved=True)
-coreInst._utils.processBlockMetadata(blockHash) # caches block metadata values to block database
+blockmetadata.process_block_metadata(coreInst, blockHash) # caches block metadata values to block database
retData = True
return retData
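A hedged usage sketch of the import path shown above; `importBlockFromData(content, coreInst)` and its boolean return come from the diff, the surrounding setup and file name are assumptions.

```python
# Hypothetical caller of blockimporter.importBlockFromData (setup assumed).
import core, blockimporter

core_inst = core.Core()
with open('exported-block.dat', 'rb') as f:  # hypothetical file name
    if blockimporter.importBlockFromData(f.read(), core_inst):
        print('Block imported and saved')
    else:
        print('Block failed metadata or proof-of-work checks')
```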

View File

@@ -27,6 +27,7 @@ from communicatorutils import downloadblocks, lookupblocks, lookupadders
from communicatorutils import servicecreator, connectnewpeers, uploadblocks
from communicatorutils import daemonqueuehandler, announcenode, deniableinserts
from communicatorutils import cooldownpeer, housekeeping, netcheck
+from onionrutils import localcommand, epoch, basicrequests
from etc import humanreadabletime
import onionrservices, onionr, onionrproofs
@@ -90,7 +91,7 @@ class OnionrCommunicatorDaemon:
plugins.reload()
# time app started running for info/statistics purposes
-self.startTime = self._core._utils.getEpoch()
+self.startTime = epoch.get_epoch()
if developmentMode:
OnionrCommunicatorTimers(self, self.heartbeat, 30)
@@ -176,7 +177,7 @@ class OnionrCommunicatorDaemon:
self.shutdown = True
pass
-logger.info('Goodbye. (Onionr is cleaning up, and will exit)')
+logger.info('Goodbye. (Onionr is cleaning up, and will exit)', terminal=True)
try:
self.service_greenlets
except AttributeError:
@@ -184,7 +185,7 @@ class OnionrCommunicatorDaemon:
else:
for server in self.service_greenlets:
server.stop()
-self._core._utils.localCommand('shutdown') # shutdown the api
+localcommand.local_command(self._core, 'shutdown') # shutdown the api
time.sleep(0.5)
def lookupAdders(self):
@@ -252,7 +253,7 @@ class OnionrCommunicatorDaemon:
break
else:
if len(self.onlinePeers) == 0:
-logger.debug('Couldn\'t connect to any peers.' + (' Last node seen %s ago.' % humanreadabletime.human_readable_time(time.time() - self.lastNodeSeen) if not self.lastNodeSeen is None else ''))
+logger.debug('Couldn\'t connect to any peers.' + (' Last node seen %s ago.' % humanreadabletime.human_readable_time(time.time() - self.lastNodeSeen) if not self.lastNodeSeen is None else ''), terminal=True)
else:
self.lastNodeSeen = time.time()
self.decrementThreadCount('getOnlinePeers')
@@ -293,12 +294,12 @@ class OnionrCommunicatorDaemon:
def printOnlinePeers(self):
'''logs online peer list'''
if len(self.onlinePeers) == 0:
-logger.warn('No online peers')
+logger.warn('No online peers', terminal=True)
else:
-logger.info('Online peers:')
+logger.info('Online peers:', terminal=True)
for i in self.onlinePeers:
score = str(self.getPeerProfileInstance(i).score)
-logger.info(i + ', score: ' + score)
+logger.info(i + ', score: ' + score, terminal=True)
def peerAction(self, peer, action, data='', returnHeaders=False):
'''Perform a get request to a peer'''
@@ -309,20 +310,21 @@ class OnionrCommunicatorDaemon:
if len(data) > 0:
url += '&data=' + data
-self._core.setAddressInfo(peer, 'lastConnectAttempt', self._core._utils.getEpoch()) # mark the time we're trying to request this peer
-retData = self._core._utils.doGetRequest(url, port=self.proxyPort)
+self._core.setAddressInfo(peer, 'lastConnectAttempt', epoch.get_epoch()) # mark the time we're trying to request this peer
+retData = basicrequests.do_get_request(self._core, url, port=self.proxyPort)
# if request failed, (error), mark peer offline
if retData == False:
try:
self.getPeerProfileInstance(peer).addScore(-10)
self.removeOnlinePeer(peer)
-if action != 'ping':
+if action != 'ping' and not self.shutdown:
+logger.warn('Lost connection to ' + peer, terminal=True)
self.getOnlinePeers() # Will only add a new peer to pool if needed
except ValueError:
pass
else:
-self._core.setAddressInfo(peer, 'lastConnect', self._core._utils.getEpoch())
+self._core.setAddressInfo(peer, 'lastConnect', epoch.get_epoch())
self.getPeerProfileInstance(peer).addScore(1)
return retData # If returnHeaders, returns tuple of data, headers. if not, just data string
@@ -339,7 +341,7 @@ class OnionrCommunicatorDaemon:
return retData
def getUptime(self):
-return self._core._utils.getEpoch() - self.startTime
+return epoch.get_epoch() - self.startTime
def heartbeat(self):
'''Show a heartbeat debug message'''
@@ -359,19 +361,19 @@ class OnionrCommunicatorDaemon:
def announce(self, peer):
'''Announce to peers our address'''
if announcenode.announce_node(self) == False:
-logger.warn('Could not introduce node.')
+logger.warn('Could not introduce node.', terminal=True)
def detectAPICrash(self):
'''exit if the api server crashes/stops'''
-if self._core._utils.localCommand('ping', silent=False) not in ('pong', 'pong!'):
+if localcommand.local_command(self._core, 'ping', silent=False) not in ('pong', 'pong!'):
for i in range(300):
-if self._core._utils.localCommand('ping') in ('pong', 'pong!') or self.shutdown:
+if localcommand.local_command(self._core, 'ping') in ('pong', 'pong!') or self.shutdown:
break # break for loop
time.sleep(1)
else:
# This executes if the api is NOT detected to be running
events.event('daemon_crash', onionr = self._core.onionrInst, data = {})
-logger.error('Daemon detected API crash (or otherwise unable to reach API after long time), stopping...')
+logger.fatal('Daemon detected API crash (or otherwise unable to reach API after long time), stopping...', terminal=True)
self.shutdown = True
self.decrementThreadCount('detectAPICrash')
@@ -388,5 +390,4 @@ def run_file_exists(daemon):
if os.path.isfile(daemon._core.dataDir + '.runcheck'):
os.remove(daemon._core.dataDir + '.runcheck')
return True
return False
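The `localcommand.local_command(core, ...)` calls above replace the old `_utils.localCommand`; here is a hedged sketch of what such a helper might do. The client port config key appears earlier in this diff, while the requests usage and error handling are assumptions.

```python
# Hypothetical sketch of onionrutils/localcommand.py (assumed implementation).
import requests  # assumed dependency

def local_command(coreInst, command, silent=True, post=False, postData={}):
    '''Send an HTTP request to the local client API and return the response text (assumed behavior).'''
    port = coreInst.config.get('client.client.port', 59496)  # config key appears earlier in this diff
    url = 'http://127.0.0.1:%s/%s' % (port, command)
    try:
        if post:
            return requests.post(url, data=postData).text
        return requests.get(url).text
    except requests.exceptions.RequestException:
        return False  # callers above treat a falsy return as "API unreachable"
```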

View File

@@ -20,6 +20,7 @@
import base64
import onionrproofs, logger
from etc import onionrvalues
+from onionrutils import basicrequests, bytesconverter
def announce_node(daemon):
'''Announce our node to our peers'''
@@ -52,7 +53,7 @@ def announce_node(daemon):
combinedNodes = ourID + peer
if ourID != 1:
#TODO: Extend existingRand for i2p
-existingRand = daemon._core._utils.bytesToStr(daemon._core.getAddressInfo(peer, 'powValue'))
+existingRand = bytesconverter.bytes_to_str(daemon._core.getAddressInfo(peer, 'powValue'))
# Reset existingRand if it no longer meets the minimum POW
if type(existingRand) is type(None) or not existingRand.endswith('0' * ov.announce_pow):
existingRand = ''
@@ -75,8 +76,8 @@ def announce_node(daemon):
daemon.announceCache[peer] = data['random']
if not announceFail:
logger.info('Announcing node to ' + url)
-if daemon._core._utils.doPostRequest(url, data) == 'Success':
-logger.info('Successfully introduced node to ' + peer)
+if basicrequests.do_post_request(daemon._core, url, data) == 'Success':
+logger.info('Successfully introduced node to ' + peer, terminal=True)
retData = True
daemon._core.setAddressInfo(peer, 'introduced', 1)
daemon._core.setAddressInfo(peer, 'powValue', data['random'])
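A minimal sketch of the `bytesconverter.bytes_to_str` helper used above; the behavior (decode bytes, pass strings through) is inferred from how it is called and is an assumption.

```python
# Hypothetical onionrutils/bytesconverter.py helper (assumed implementation).
def bytes_to_str(data):
    try:
        return data.decode('utf-8')
    except AttributeError:
        return data  # already a str
```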

View File

@@ -20,6 +20,7 @@
import time, sys
import onionrexceptions, logger, onionrpeers
from utils import networkmerger
+from onionrutils import stringvalidators, epoch
# secrets module was added into standard lib in 3.6+
if sys.version_info[0] == 3 and sys.version_info[1] < 6:
from dependencies import secrets
@@ -30,7 +31,7 @@ def connect_new_peer_to_communicator(comm_inst, peer='', useBootstrap=False):
retData = False
tried = comm_inst.offlinePeers
if peer != '':
-if comm_inst._core._utils.validateID(peer):
+if stringvalidators.validate_transport(peer):
peerList = [peer]
else:
raise onionrexceptions.InvalidAddress('Will not attempt connection test to invalid address')
@@ -72,9 +73,9 @@ def connect_new_peer_to_communicator(comm_inst, peer='', useBootstrap=False):
# Add a peer to our list if it isn't already since it successfully connected
networkmerger.mergeAdders(address, comm_inst._core)
if address not in comm_inst.onlinePeers:
-logger.info('Connected to ' + address)
+logger.info('Connected to ' + address, terminal=True)
comm_inst.onlinePeers.append(address)
-comm_inst.connectTimes[address] = comm_inst._core._utils.getEpoch()
+comm_inst.connectTimes[address] = epoch.get_epoch()
retData = address
# add peer to profile list if they're not in it

View File

@@ -17,6 +17,7 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
+from onionrutils import epoch
def cooldown_peer(comm_inst):
'''Randomly add an online peer to cooldown, so we can connect a new one'''
onlinePeerAmount = len(comm_inst.onlinePeers)
@@ -28,7 +29,7 @@ def cooldown_peer(comm_inst):
# Remove peers from cooldown that have been there long enough
tempCooldown = dict(comm_inst.cooldownPeer)
for peer in tempCooldown:
-if (comm_inst._core._utils.getEpoch() - tempCooldown[peer]) >= cooldownTime:
+if (epoch.get_epoch() - tempCooldown[peer]) >= cooldownTime:
del comm_inst.cooldownPeer[peer]
# Cool down a peer, if we have max connections alive for long enough
@@ -38,7 +39,7 @@ def cooldown_peer(comm_inst):
while finding:
try:
toCool = min(tempConnectTimes, key=tempConnectTimes.get)
-if (comm_inst._core._utils.getEpoch() - tempConnectTimes[toCool]) < minTime:
+if (epoch.get_epoch() - tempConnectTimes[toCool]) < minTime:
del tempConnectTimes[toCool]
else:
finding = False
@@ -46,6 +47,6 @@ def cooldown_peer(comm_inst):
break
else:
comm_inst.removeOnlinePeer(toCool)
-comm_inst.cooldownPeer[toCool] = comm_inst._core._utils.getEpoch()
+comm_inst.cooldownPeer[toCool] = epoch.get_epoch()
comm_inst.decrementThreadCount('cooldown_peer')

View File

@@ -19,6 +19,7 @@
'''
import logger
import onionrevents as events
+from onionrutils import localcommand
def handle_daemon_commands(comm_inst):
cmd = comm_inst._core.daemonQueue()
response = ''
@@ -39,7 +40,7 @@ def handle_daemon_commands(comm_inst):
if response == '':
response = 'none'
elif cmd[0] == 'localCommand':
-response = comm_inst._core._utils.localCommand(cmd[1])
+response = localcommand.local_command(comm_inst._core, cmd[1])
elif cmd[0] == 'pex':
for i in comm_inst.timers:
if i.timerFunction.__name__ == 'lookupAdders':
@@ -49,7 +50,7 @@ def handle_daemon_commands(comm_inst):
if cmd[0] not in ('', None):
if response != '':
-comm_inst._core._utils.localCommand('queueResponseAdd/' + cmd[4], post=True, postData={'data': response})
+localcommand.local_command(comm_inst._core, 'queueResponseAdd/' + cmd[4], post=True, postData={'data': response})
response = ''
comm_inst.decrementThreadCount('daemonCommands')
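A hedged sketch of the round trip implied by `queueResponseAdd/` above: the client registers a command with a response ID, the communicator answers by posting the result back under that ID. `daemonQueueAdd` and `daemonQueueGetResponse` appear later in this commit's core.py diff, and the `'failure'` sentinel mirrors the old `daemonQueueWaitForResponse`; the rest is an assumption.

```python
# Hypothetical client-side helper for the daemon queue response flow (assumed).
import time, uuid

def queue_and_wait(core_inst, command, data='', poll_seconds=1):
    response_id = str(uuid.uuid4())
    core_inst.daemonQueueAdd(command, data=data, responseID=response_id)
    response = 'failure'
    while response == 'failure':  # API returns 'failure' until the communicator has answered (assumed)
        response = core_inst.daemonQueueGetResponse(response_id)
        time.sleep(poll_seconds)
    return response
```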

View File

@@ -27,5 +27,5 @@ def insert_deniable_block(comm_inst):
# This assumes on the libsodium primitives to have key-privacy
fakePeer = onionrvalues.DENIABLE_PEER_ADDRESS
data = secrets.token_hex(secrets.randbelow(1024) + 1)
-comm_inst._core.insertBlock(data, header='pm', encryptType='asym', asymPeer=fakePeer, meta={'subject': 'foo'})
+comm_inst._core.insertBlock(data, header='pm', encryptType='asym', asymPeer=fakePeer, disableForward=True, meta={'subject': 'foo'})
comm_inst.decrementThreadCount('insert_deniable_block')
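For reference, a hedged usage sketch of `Core.insertBlock` as exercised above; the keyword names come from this hunk, while the example values, the plaintext call, and the `'txt'` header type are assumptions.

```python
# Hypothetical insertBlock calls (core_inst and recipient_pubkey are assumed to exist).
core_inst.insertBlock('hello world', header='txt')                 # plaintext block (header type assumed)
core_inst.insertBlock('secret note', header='pm', encryptType='asym',
                      asymPeer=recipient_pubkey, disableForward=True,
                      meta={'subject': 'greeting'})                # asymmetrically encrypted mail-style block
```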

View File

@@ -19,6 +19,7 @@
'''
import communicator, onionrexceptions
import logger, onionrpeers
+from onionrutils import blockmetadata, stringvalidators, validatemetadata
def download_blocks_from_communicator(comm_inst):
assert isinstance(comm_inst, communicator.OnionrCommunicatorDaemon)
@@ -47,7 +48,7 @@ def download_blocks_from_communicator(comm_inst):
continue
if comm_inst._core._blacklist.inBlacklist(blockHash):
continue
-if comm_inst._core._utils.storageCounter.isFull():
+if comm_inst._core.storage_counter.isFull():
break
comm_inst.currentDownloading.append(blockHash) # So we can avoid concurrent downloading in other threads of same block
if len(blockPeers) == 0:
@@ -72,9 +73,9 @@ def download_blocks_from_communicator(comm_inst):
pass
if realHash == blockHash:
content = content.decode() # decode here because sha3Hash needs bytes above
-metas = comm_inst._core._utils.getBlockMetadataFromData(content) # returns tuple(metadata, meta), meta is also in metadata
+metas = blockmetadata.get_block_metadata_from_data(content) # returns tuple(metadata, meta), meta is also in metadata
metadata = metas[0]
-if comm_inst._core._utils.validateMetadata(metadata, metas[2]): # check if metadata is valid, and verify nonce
+if validatemetadata.validate_metadata(comm_inst._core, metadata, metas[2]): # check if metadata is valid, and verify nonce
if comm_inst._core._crypto.verifyPow(content): # check if POW is enough/correct
logger.info('Attempting to save block %s...' % blockHash[:12])
try:
@@ -84,7 +85,7 @@ def download_blocks_from_communicator(comm_inst):
removeFromQueue = False
else:
comm_inst._core.addToBlockDB(blockHash, dataSaved=True)
-comm_inst._core._utils.processBlockMetadata(blockHash) # caches block metadata values to block database
+blockmetadata.process_block_metadata(comm_inst._core, blockHash) # caches block metadata values to block database
else:
logger.warn('POW failed for block %s.' % blockHash)
else:
@@ -110,6 +111,7 @@ def download_blocks_from_communicator(comm_inst):
if removeFromQueue:
try:
del comm_inst.blockQueue[blockHash] # remove from block queue both if success or false
+logger.info('%s blocks remaining in queue' % [len(comm_inst.blockQueue)], terminal=True)
except KeyError:
pass
comm_inst.currentDownloading.remove(blockHash)
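The `realHash == blockHash` comparison above relies on hashing the downloaded bytes before decoding; a minimal sketch of that check, assuming block IDs are hex SHA3-256 digests (the exact hash scheme is not shown in this hunk).

```python
# Hedged sketch of the integrity check implied above (SHA3-256 hex digests assumed).
import hashlib

def sha3_hash(data: bytes) -> str:
    return hashlib.sha3_256(data).hexdigest()

content = b'example block payload'   # stand-in for the bytes downloaded from the peer
blockHash = sha3_hash(content)        # the hash we asked the peer for
realHash = sha3_hash(content)         # recomputed over what was actually received
assert realHash == blockHash          # only then is the block decoded and saved
```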

View File

@@ -20,6 +20,7 @@
import sqlite3
import logger
from onionrusers import onionrusers
+from onionrutils import epoch
def clean_old_blocks(comm_inst):
'''Delete old blocks if our disk allocation is full/near full, and also expired blocks'''
@@ -29,7 +30,7 @@ def clean_old_blocks(comm_inst):
comm_inst._core.removeBlock(bHash)
logger.info('Deleted block: %s' % (bHash,))
-while comm_inst._core._utils.storageCounter.isFull():
+while comm_inst._core.storage_counter.isFull():
oldest = comm_inst._core.getBlockList()[0]
comm_inst._core._blacklist.addToDB(oldest)
comm_inst._core.removeBlock(oldest)
@@ -41,7 +42,7 @@ def clean_keys(comm_inst):
'''Delete expired forward secrecy keys'''
conn = sqlite3.connect(comm_inst._core.peerDB, timeout=10)
c = conn.cursor()
-time = comm_inst._core._utils.getEpoch()
+time = epoch.get_epoch()
deleteKeys = []
for entry in c.execute("SELECT * FROM forwardKeys WHERE expire <= ?", (time,)):
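A hedged sketch completing the expiry query above with a deletion step; the `forwardKeys` table and `expire` column come from the diff, while the DELETE statement and connection handling are assumptions.

```python
# Hypothetical standalone version of the forward-key cleanup (paths and DELETE step assumed).
import sqlite3
from onionrutils import epoch

def clean_expired_forward_keys(peer_db_path):
    conn = sqlite3.connect(peer_db_path, timeout=10)
    c = conn.cursor()
    now = epoch.get_epoch()
    expired = list(c.execute("SELECT * FROM forwardKeys WHERE expire <= ?", (now,)))
    if expired:
        c.execute("DELETE FROM forwardKeys WHERE expire <= ?", (now,))  # assumed cleanup statement
        conn.commit()
    conn.close()
    return len(expired)
```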

View File

@@ -18,6 +18,7 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import logger
+from onionrutils import stringvalidators
def lookup_new_peer_transports_with_communicator(comm_inst):
logger.info('Looking up new addresses...')
@@ -39,7 +40,7 @@ def lookup_new_peer_transports_with_communicator(comm_inst):
invalid = []
for x in newPeers:
x = x.strip()
-if not comm_inst._core._utils.validateID(x) or x in comm_inst.newPeers or x == comm_inst._core.hsAddress:
+if not stringvalidators.validate_transport(x) or x in comm_inst.newPeers or x == comm_inst._core.hsAddress:
# avoid adding if its our address
invalid.append(x)
for x in invalid:

View File

@@ -18,62 +18,71 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import logger, onionrproofs
+from onionrutils import stringvalidators, epoch

def lookup_blocks_from_communicator(comm_inst):
logger.info('Looking up new blocks...')
tryAmount = 2
newBlocks = ''
existingBlocks = comm_inst._core.getBlockList()
triedPeers = [] # list of peers we've tried this time around
maxBacklog = 1560 # Max amount of *new* block hashes to have already in queue, to avoid memory exhaustion
lastLookupTime = 0 # Last time we looked up a particular peer's list
+new_block_count = 0
for i in range(tryAmount):
listLookupCommand = 'getblocklist' # This is defined here to reset it each time
if len(comm_inst.blockQueue) >= maxBacklog:
break
if not comm_inst.isOnline:
break
# check if disk allocation is used
-if comm_inst._core._utils.storageCounter.isFull():
+if comm_inst._core.storage_counter.isFull():
logger.debug('Not looking up new blocks due to maximum amount of allowed disk space used')
break
peer = comm_inst.pickOnlinePeer() # select random online peer
# if we've already tried all the online peers this time around, stop
if peer in triedPeers:
if len(comm_inst.onlinePeers) == len(triedPeers):
break
else:
continue
triedPeers.append(peer)
# Get the last time we looked up a peer's stamp to only fetch blocks since then.
# Saved in memory only for privacy reasons
try:
lastLookupTime = comm_inst.dbTimestamps[peer]
except KeyError:
lastLookupTime = 0
else:
listLookupCommand += '?date=%s' % (lastLookupTime,)
try:
newBlocks = comm_inst.peerAction(peer, listLookupCommand) # get list of new block hashes
except Exception as error:
logger.warn('Could not get new blocks from %s.' % peer, error = error)
newBlocks = False
else:
-comm_inst.dbTimestamps[peer] = comm_inst._core._utils.getRoundedEpoch(roundS=60)
+comm_inst.dbTimestamps[peer] = epoch.get_rounded_epoch(roundS=60)
if newBlocks != False:
# if request was a success
for i in newBlocks.split('\n'):
-if comm_inst._core._utils.validateHash(i):
+if stringvalidators.validate_hash(i):
# if newline seperated string is valid hash
if not i in existingBlocks:
# if block does not exist on disk and is not already in block queue
if i not in comm_inst.blockQueue:
if onionrproofs.hashMeetsDifficulty(i) and not comm_inst._core._blacklist.inBlacklist(i):
if len(comm_inst.blockQueue) <= 1000000:
comm_inst.blockQueue[i] = [peer] # add blocks to download queue
+new_block_count += 1
else:
if peer not in comm_inst.blockQueue[i]:
if len(comm_inst.blockQueue[i]) < 10:
comm_inst.blockQueue[i].append(peer)
+if new_block_count > 0:
+block_string = ""
+if new_block_count > 1:
+block_string = "s"
+logger.info('Discovered %s new block%s' % (new_block_count, block_string), terminal=True)
comm_inst.decrementThreadCount('lookupBlocks')
return
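Several hunks in this commit swap `_utils.storageCounter` for a `storage_counter` attribute built from a new `storagecounter` module; a hedged sketch of what such a counter could look like. Only `isFull()`, `addBytes()`, `removeBytes()`, and the `disk-usage.txt` file appear in the diff; the config key and internals below are assumptions.

```python
# Hypothetical storagecounter.py (file path from core.usageFile; 'allocations.disk' key is assumed).
class StorageCounter:
    def __init__(self, coreInst):
        self._core = coreInst
        self.dataFile = coreInst.usageFile  # e.g. data dir + 'disk-usage.txt'

    def getAmount(self):
        try:
            with open(self.dataFile, 'r') as f:
                return int(f.read())
        except (FileNotFoundError, ValueError):
            return 0

    def _write(self, amount):
        with open(self.dataFile, 'w') as f:
            f.write(str(amount))

    def isFull(self):
        return self.getAmount() >= self._core.config.get('allocations.disk', 2000000000)

    def addBytes(self, amount):
        new_total = self.getAmount() + amount
        if new_total > self._core.config.get('allocations.disk', 2000000000):
            return False
        self._write(new_total)
        return new_total

    def removeBytes(self, amount):
        new_total = max(self.getAmount() - amount, 0)
        self._write(new_total)
        return new_total
```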

View File

@@ -20,17 +20,19 @@
'''
import logger
from utils import netutils
+from onionrutils import localcommand, epoch
def net_check(comm_inst):
'''Check if we are connected to the internet or not when we can't connect to any peers'''
rec = False # for detecting if we have received incoming connections recently
+c = comm_inst._core
if len(comm_inst.onlinePeers) == 0:
try:
-if (comm_inst._core._utils.getEpoch() - int(comm_inst._core._utils.localCommand('/lastconnect'))) <= 60:
+if (epoch.get_epoch() - int(localcommand.local_command(c, '/lastconnect'))) <= 60:
comm_inst.isOnline = True
rec = True
except ValueError:
pass
-if not rec and not netutils.checkNetwork(comm_inst._core._utils, torPort=comm_inst.proxyPort):
+if not rec and not netutils.checkNetwork(c, torPort=comm_inst.proxyPort):
if not comm_inst.shutdown:
logger.warn('Network check failed, are you connected to the Internet, and is Tor working?')
comm_inst.isOnline = False

View File

@@ -55,7 +55,7 @@ class OnionrCommunicatorTimers:
logger.debug('%s is currently using the maximum number of threads, not starting another.' % self.timerFunction.__name__)
else:
self.daemonInstance.threadCounts[self.timerFunction.__name__] += 1
-newThread = threading.Thread(target=self.timerFunction, args=self.args)
+newThread = threading.Thread(target=self.timerFunction, args=self.args, daemon=True)
newThread.start()
else:
self.timerFunction()

View File

@@ -18,6 +18,8 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import communicator, onionrblockapi
+from onionrutils import stringvalidators
def service_creator(daemon):
assert isinstance(daemon, communicator.OnionrCommunicatorDaemon)
core = daemon._core
@@ -30,7 +32,7 @@ def service_creator(daemon):
if not b in daemon.active_services:
bl = onionrblockapi.Block(b, core=core, decrypt=True)
bs = utils.bytesToStr(bl.bcontent) + '.onion'
-if utils.validatePubKey(bl.signer) and utils.validateID(bs):
+if stringvalidators.validate_pub_key(bl.signer) and stringvalidators.validate_transport(bs):
signer = utils.bytesToStr(bl.signer)
daemon.active_services.append(b)
daemon.active_services.append(signer)

View File

@@ -20,15 +20,17 @@
import logger
from communicatorutils import proxypicker
import onionrblockapi as block
+from onionrutils import localcommand, stringvalidators, basicrequests
def upload_blocks_from_communicator(comm_inst):
# when inserting a block, we try to upload it to a few peers to add some deniability
triedPeers = []
finishedUploads = []
-comm_inst.blocksToUpload = comm_inst._core._crypto.randomShuffle(comm_inst.blocksToUpload)
+core = comm_inst._core
+comm_inst.blocksToUpload = core._crypto.randomShuffle(comm_inst.blocksToUpload)
if len(comm_inst.blocksToUpload) != 0:
for bl in comm_inst.blocksToUpload:
-if not comm_inst._core._utils.validateHash(bl):
+if not stringvalidators.validate_hash(bl):
logger.warn('Requested to upload invalid block')
comm_inst.decrementThreadCount('uploadBlock')
return
@@ -40,9 +42,9 @@ def upload_blocks_from_communicator(comm_inst):
url = 'http://' + peer + '/upload'
data = {'block': block.Block(bl).getRaw()}
proxyType = proxypicker.pick_proxy(peer)
-logger.info("Uploading block to " + peer)
-if not comm_inst._core._utils.doPostRequest(url, data=data, proxyType=proxyType) == False:
-comm_inst._core._utils.localCommand('waitforshare/' + bl, post=True)
+logger.info("Uploading block to " + peer, terminal=True)
+if not basicrequests.do_post_request(core, url, data=data, proxyType=proxyType) == False:
+localcommand.local_command(core, 'waitforshare/' + bl, post=True)
finishedUploads.append(bl)
for x in finishedUploads:
try:
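A hedged sketch of the `basicrequests.do_post_request` helper used above; the signature mirrors the calls in this diff, while the SOCKS proxy details and timeouts are assumptions.

```python
# Hypothetical onionrutils/basicrequests.py (proxy setup and timeouts assumed).
import requests  # assumed dependency

def do_post_request(coreInst, url, data={}, port=0, proxyType='tor'):
    '''POST through the local Tor (or I2P) proxy; return the response text, or False on failure (assumed).'''
    if proxyType == 'tor':
        proxies = {'http': 'socks4a://127.0.0.1:%s' % (port,)}  # port would be the Tor SOCKS port
    elif proxyType == 'i2p':
        proxies = {'http': 'http://127.0.0.1:4444'}
    else:
        return False
    try:
        return requests.post(url, data=data, proxies=proxies,
                             allow_redirects=False, timeout=(15, 30)).text
    except requests.exceptions.RequestException:
        return False
```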

View File

@@ -17,22 +17,20 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
-import sqlite3, os, sys, time, json, uuid
+import os, sys, json
import logger, netcontroller, config
from onionrblockapi import Block
+import coredb
import deadsimplekv as simplekv
-import onionrutils, onionrcrypto, onionrproofs, onionrevents as events, onionrexceptions
+import onionrcrypto, onionrproofs, onionrevents as events, onionrexceptions
import onionrblacklist
from onionrusers import onionrusers
+from onionrstorage import removeblock, setdata
import dbcreator, onionrstorage, serializeddata, subprocesspow
from etc import onionrvalues, powchoice
+from onionrutils import localcommand, stringvalidators, bytesconverter, epoch
+from onionrutils import blockmetadata
+import storagecounter
-if sys.version_info < (3, 6):
-try:
-import sha3
-except ModuleNotFoundError:
-logger.fatal('On Python 3 versions prior to 3.6.x, you need the sha3 module')
-sys.exit(1)
class Core:
def __init__(self, torPort=0):
@@ -45,6 +43,10 @@
self.dataDir += '/'
try:
+self.usageFile = self.dataDir + 'disk-usage.txt'
+self.config = config
+self.maxBlockSize = 10000000 # max block size in bytes
self.onionrInst = None
self.queueDB = self.dataDir + 'queue.db'
self.peerDB = self.dataDir + 'peers.db'
@@ -64,6 +66,7 @@
self.dbCreate = dbcreator.DBCreator(self)
self.forwardKeysFile = self.dataDir + 'forward-keys.db'
self.keyStore = simplekv.DeadSimpleKV(self.dataDir + 'cachedstorage.dat', refresh_seconds=5)
+self.storage_counter = storagecounter.StorageCounter(self)
# Socket data, defined here because of multithreading constraints with gevent
self.killSockets = False
@@ -72,11 +75,6 @@
self.socketReasons = {}
self.socketServerResponseData = {}
-self.usageFile = self.dataDir + 'disk-usage.txt'
-self.config = config
-self.maxBlockSize = 10000000 # max block size in bytes
if not os.path.exists(self.dataDir):
os.mkdir(self.dataDir)
if not os.path.exists(self.dataDir + 'blocks/'):
@@ -104,15 +102,14 @@ class Core:
logger.warn('Warning: address bootstrap file not found ' + self.bootstrapFileLocation)
self.use_subprocess = powchoice.use_subprocess(self)
-self._utils = onionrutils.OnionrUtils(self)
# Initialize the crypto object
self._crypto = onionrcrypto.OnionrCrypto(self)
self._blacklist = onionrblacklist.OnionrBlackList(self)
self.serializer = serializeddata.SerializedData(self)
except Exception as error:
-logger.error('Failed to initialize core Onionr library.', error=error)
-logger.fatal('Cannot recover from error.')
+logger.error('Failed to initialize core Onionr library.', error=error, terminal=True)
+logger.fatal('Cannot recover from error.', terminal=True)
sys.exit(1)
return
@@ -128,88 +125,19 @@ class Core:
'''
Adds a public key to the key database (misleading function name)
'''
-assert peerID not in self.listPeers()
-# This function simply adds a peer to the DB
-if not self._utils.validatePubKey(peerID):
-return False
-events.event('pubkey_add', data = {'key': peerID}, onionr = self.onionrInst)
-conn = sqlite3.connect(self.peerDB, timeout=30)
-hashID = self._crypto.pubKeyHashID(peerID)
-c = conn.cursor()
-t = (peerID, name, 'unknown', hashID, 0)
-for i in c.execute("SELECT * FROM peers WHERE id = ?;", (peerID,)):
-try:
-if i[0] == peerID:
-conn.close()
-return False
-except ValueError:
-pass
-except IndexError:
-pass
-c.execute('INSERT INTO peers (id, name, dateSeen, hashID, trust) VALUES(?, ?, ?, ?, ?);', t)
-conn.commit()
-conn.close()
-return True
+return coredb.keydb.addkeys.add_peer(self, peerID, name)

def addAddress(self, address):
'''
Add an address to the address database (only tor currently)
'''
-if type(address) is None or len(address) == 0:
-return False
-if self._utils.validateID(address):
-if address == config.get('i2p.ownAddr', None) or address == self.hsAddress:
-return False
-conn = sqlite3.connect(self.addressDB, timeout=30)
-c = conn.cursor()
-# check if address is in database
-# this is safe to do because the address is validated above, but we strip some chars here too just in case
-address = address.replace('\'', '').replace(';', '').replace('"', '').replace('\\', '')
-for i in c.execute("SELECT * FROM adders WHERE address = ?;", (address,)):
-try:
-if i[0] == address:
-conn.close()
-return False
-except ValueError:
-pass
-except IndexError:
-pass
-t = (address, 1)
-c.execute('INSERT INTO adders (address, type) VALUES(?, ?);', t)
-conn.commit()
-conn.close()
-events.event('address_add', data = {'address': address}, onionr = self.onionrInst)
-return True
-else:
-#logger.debug('Invalid ID: %s' % address)
-return False
+return coredb.keydb.addkeys.add_address(self, address)

def removeAddress(self, address):
'''
Remove an address from the address database
'''
-if self._utils.validateID(address):
-conn = sqlite3.connect(self.addressDB, timeout=30)
-c = conn.cursor()
-t = (address,)
-c.execute('Delete from adders where address=?;', t)
-conn.commit()
-conn.close()
-events.event('address_remove', data = {'address': address}, onionr = self.onionrInst)
-return True
-else:
-return False
+return coredb.keydb.removekeys.remove_address(self, address)

def removeBlock(self, block):
'''
@@ -217,18 +145,7 @@ class Core:
**You may want blacklist.addToDB(blockHash)
'''
-if self._utils.validateHash(block):
-conn = sqlite3.connect(self.blockDB, timeout=30)
-c = conn.cursor()
-t = (block,)
-c.execute('Delete from hashes where hash=?;', t)
-conn.commit()
-conn.close()
-dataSize = sys.getsizeof(onionrstorage.getData(self, block))
-self._utils.storageCounter.removeBytes(dataSize)
-else:
-raise onionrexceptions.InvalidHexHash
+removeblock.remove_block(self, block)

def createAddressDB(self):
'''
@@ -254,67 +171,19 @@ class Core:
Should be in hex format!
'''
-if not os.path.exists(self.blockDB):
-raise Exception('Block db does not exist')
-if self._utils.hasBlock(newHash):
-return
-conn = sqlite3.connect(self.blockDB, timeout=30)
-c = conn.cursor()
-currentTime = self._utils.getEpoch() + self._crypto.secrets.randbelow(301)
-if selfInsert or dataSaved:
-selfInsert = 1
-else:
-selfInsert = 0
-data = (newHash, currentTime, '', selfInsert)
-c.execute('INSERT INTO hashes (hash, dateReceived, dataType, dataSaved) VALUES(?, ?, ?, ?);', data)
-conn.commit()
-conn.close()
-return
-def getData(self, hash):
-'''
-Simply return the data associated to a hash
-'''
-data = onionrstorage.getData(self, hash)
-return data
+coredb.blockmetadb.add.add_to_block_DB(self, newHash, selfInsert, dataSaved)

def setData(self, data):
'''
Set the data assciated with a hash
'''
-data = data
-dataSize = sys.getsizeof(data)
-if not type(data) is bytes:
-data = data.encode()
-dataHash = self._crypto.sha3Hash(data)
-if type(dataHash) is bytes:
-dataHash = dataHash.decode()
-blockFileName = self.blockDataLocation + dataHash + '.dat'
-if os.path.exists(blockFileName):
-pass # TODO: properly check if block is already saved elsewhere
-#raise Exception("Data is already set for " + dataHash)
-else:
-if self._utils.storageCounter.addBytes(dataSize) != False:
-onionrstorage.store(self, data, blockHash=dataHash)
-conn = sqlite3.connect(self.blockDB, timeout=30)
-c = conn.cursor()
-c.execute("UPDATE hashes SET dataSaved=1 WHERE hash = ?;", (dataHash,))
-conn.commit()
-conn.close()
-with open(self.dataNonceFile, 'a') as nonceFile:
-nonceFile.write(dataHash + '\n')
-else:
-raise onionrexceptions.DiskAllocationReached
-return dataHash
+return onionrstorage.setdata.set_data(self, data)
+def getData(self, hash):
+'''
+Simply return the data associated to a hash
+'''
+return onionrstorage.getData(self, hash)
def daemonQueue(self):
'''
@@ -322,117 +191,31 @@ class Core:
This function intended to be used by the client. Queue to exchange data between "client" and server.
'''
-retData = False
-if not os.path.exists(self.queueDB):
-self.dbCreate.createDaemonDB()
-else:
-conn = sqlite3.connect(self.queueDB, timeout=30)
-c = conn.cursor()
-try:
-for row in c.execute('SELECT command, data, date, min(ID), responseID FROM commands group by id'):
-retData = row
-break
-except sqlite3.OperationalError:
-self.dbCreate.createDaemonDB()
-else:
-if retData != False:
-c.execute('DELETE FROM commands WHERE id=?;', (retData[3],))
-conn.commit()
-conn.close()
-events.event('queue_pop', data = {'data': retData}, onionr = self.onionrInst)
-return retData
+return coredb.daemonqueue.daemon_queue(self)

def daemonQueueAdd(self, command, data='', responseID=''):
'''
Add a command to the daemon queue, used by the communication daemon (communicator.py)
'''
-retData = True
-date = self._utils.getEpoch()
-conn = sqlite3.connect(self.queueDB, timeout=30)
-c = conn.cursor()
-t = (command, data, date, responseID)
-try:
-c.execute('INSERT INTO commands (command, data, date, responseID) VALUES(?, ?, ?, ?)', t)
-conn.commit()
-except sqlite3.OperationalError:
-retData = False
-self.daemonQueue()
-events.event('queue_push', data = {'command': command, 'data': data}, onionr = self.onionrInst)
-conn.close()
-return retData
+return coredb.daemonqueue.daemon_queue_add(self, command, data, responseID)

def daemonQueueGetResponse(self, responseID=''):
'''
Get a response sent by communicator to the API, by requesting to the API
'''
-assert len(responseID) > 0
-resp = self._utils.localCommand('queueResponse/' + responseID)
-return resp
-def daemonQueueWaitForResponse(self, responseID='', checkFreqSecs=1):
-resp = 'failure'
-while resp == 'failure':
-resp = self.daemonQueueGetResponse(responseID)
-time.sleep(1)
-return resp
-def daemonQueueSimple(self, command, data='', checkFreqSecs=1):
-'''
-A simplified way to use the daemon queue. Will register a command (with optional data) and wait, return the data
-Not always useful, but saves time + LOC in some cases.
-This is a blocking function, so be careful.
-'''
-responseID = str(uuid.uuid4()) # generate unique response ID
-self.daemonQueueAdd(command, data=data, responseID=responseID)
-return self.daemonQueueWaitForResponse(responseID, checkFreqSecs)
+return coredb.daemonqueue.daemon_queue_get_response(self, responseID)

def clearDaemonQueue(self):
'''
Clear the daemon queue (somewhat dangerous)
'''
-conn = sqlite3.connect(self.queueDB, timeout=30)
-c = conn.cursor()
-try:
-c.execute('DELETE FROM commands;')
-conn.commit()
-except:
-pass
-conn.close()
-events.event('queue_clear', onionr = self.onionrInst)
-return
+return coredb.daemonqueue.clear_daemon_queue(self)
def listAdders(self, randomOrder=True, i2p=True, recent=0): def listAdders(self, randomOrder=True, i2p=True, recent=0):
''' '''
Return a list of addresses Return a list of addresses
''' '''
conn = sqlite3.connect(self.addressDB, timeout=30) return coredb.keydb.listkeys.list_adders(self, randomOrder, i2p, recent)
c = conn.cursor()
if randomOrder:
addresses = c.execute('SELECT * FROM adders ORDER BY RANDOM();')
else:
addresses = c.execute('SELECT * FROM adders;')
addressList = []
for i in addresses:
if len(i[0].strip()) == 0:
continue
addressList.append(i[0])
conn.close()
testList = list(addressList) # create new list to iterate
for address in testList:
try:
if recent > 0 and (self._utils.getEpoch() - self.getAddressInfo(address, 'lastConnect')) > recent:
raise TypeError # If there is no last-connected date or it was too long ago, don't add peer to list if recent is not 0
except TypeError:
addressList.remove(address)
return addressList
def listPeers(self, randomOrder=True, getPow=False, trust=0):
'''
@ -441,35 +224,7 @@ class Core:
randomOrder determines if the list should be in a random order
trust sets the minimum trust to list
'''
conn = sqlite3.connect(self.peerDB, timeout=30)
return coredb.keydb.listkeys.list_peers(self, randomOrder, getPow, trust)
c = conn.cursor()
payload = ''
if trust not in (0, 1, 2):
logger.error('Tried to select invalid trust.')
return
if randomOrder:
payload = 'SELECT * FROM peers WHERE trust >= ? ORDER BY RANDOM();'
else:
payload = 'SELECT * FROM peers WHERE trust >= ?;'
peerList = []
for i in c.execute(payload, (trust,)):
try:
if len(i[0]) != 0:
if getPow:
peerList.append(i[0] + '-' + i[1])
else:
peerList.append(i[0])
except TypeError:
pass
conn.close()
return peerList
def getPeerInfo(self, peer, info):
'''
@ -482,46 +237,13 @@ class Core:
trust int 4
hashID text 5
'''
conn = sqlite3.connect(self.peerDB, timeout=30)
return coredb.keydb.userinfo.get_user_info(self, peer, info)
c = conn.cursor()
command = (peer,)
infoNumbers = {'id': 0, 'name': 1, 'adders': 2, 'dateSeen': 3, 'trust': 4, 'hashID': 5}
info = infoNumbers[info]
iterCount = 0
retVal = ''
for row in c.execute('SELECT * FROM peers WHERE id=?;', command):
for i in row:
if iterCount == info:
retVal = i
break
else:
iterCount += 1
conn.close()
return retVal
def setPeerInfo(self, peer, key, data):
'''
Update a peer for a key
'''
return coredb.keydb.userinfo.set_peer_info(self, peer, key, data)
conn = sqlite3.connect(self.peerDB, timeout=30)
c = conn.cursor()
command = (data, peer)
# TODO: validate key on whitelist
if key not in ('id', 'name', 'pubkey', 'forwardKey', 'dateSeen', 'trust'):
raise Exception("Got invalid database key when setting peer info")
c.execute('UPDATE peers SET ' + key + ' = ? WHERE id=?', command)
conn.commit()
conn.close()
return
def getAddressInfo(self, address, info):
'''
@ -538,117 +260,35 @@ class Core:
trust 8
introduced 9
'''
return coredb.keydb.transportinfo.get_address_info(self, address, info)
conn = sqlite3.connect(self.addressDB, timeout=30)
c = conn.cursor()
command = (address,)
infoNumbers = {'address': 0, 'type': 1, 'knownPeer': 2, 'speed': 3, 'success': 4, 'powValue': 5, 'failure': 6, 'lastConnect': 7, 'trust': 8, 'introduced': 9}
info = infoNumbers[info]
iterCount = 0
retVal = ''
for row in c.execute('SELECT * FROM adders WHERE address=?;', command):
for i in row:
if iterCount == info:
retVal = i
break
else:
iterCount += 1
conn.close()
return retVal
def setAddressInfo(self, address, key, data):
'''
Update an address for a key
'''
return coredb.keydb.transportinfo.set_address_info(self, address, key, data)
conn = sqlite3.connect(self.addressDB, timeout=30)
c = conn.cursor()
command = (data, address)
if key not in ('address', 'type', 'knownPeer', 'speed', 'success', 'failure', 'powValue', 'lastConnect', 'lastConnectAttempt', 'trust', 'introduced'):
raise Exception("Got invalid database key when setting address info")
else:
c.execute('UPDATE adders SET ' + key + ' = ? WHERE address=?', command)
conn.commit()
conn.close()
return
def getBlockList(self, dateRec = None, unsaved = False):
'''
Get list of our blocks
'''
if dateRec == None:
return coredb.blockmetadb.get_block_list(self, dateRec, unsaved)
dateRec = 0
conn = sqlite3.connect(self.blockDB, timeout=30)
c = conn.cursor()
execute = 'SELECT hash FROM hashes WHERE dateReceived >= ? ORDER BY dateReceived ASC;'
args = (dateRec,)
rows = list()
for row in c.execute(execute, args):
for i in row:
rows.append(i)
conn.close()
return rows
def getBlockDate(self, blockHash):
'''
Returns the date a block was received
'''
return coredb.blockmetadb.get_block_date(self, blockHash)
conn = sqlite3.connect(self.blockDB, timeout=30)
c = conn.cursor()
execute = 'SELECT dateReceived FROM hashes WHERE hash=?;'
args = (blockHash,)
for row in c.execute(execute, args):
for i in row:
return int(i)
conn.close()
return None
def getBlocksByType(self, blockType, orderDate=True):
'''
Returns a list of blocks by the type
'''
return coredb.blockmetadb.get_blocks_by_type(self, blockType, orderDate)
conn = sqlite3.connect(self.blockDB, timeout=30)
c = conn.cursor()
if orderDate:
execute = 'SELECT hash FROM hashes WHERE dataType=? ORDER BY dateReceived;'
else:
execute = 'SELECT hash FROM hashes WHERE dataType=?;'
args = (blockType,)
rows = list()
for row in c.execute(execute, args):
for i in row:
rows.append(i)
conn.close()
return rows
def getExpiredBlocks(self):
'''Returns a list of expired blocks'''
conn = sqlite3.connect(self.blockDB, timeout=30)
return coredb.blockmetadb.expiredblocks.get_expired_blocks(self)
c = conn.cursor()
date = int(self._utils.getEpoch())
execute = 'SELECT hash FROM hashes WHERE expire <= %s ORDER BY dateReceived;' % (date,)
rows = list()
for row in c.execute(execute):
for i in row:
rows.append(i)
conn.close()
return rows
def updateBlockInfo(self, hash, key, data):
'''
@ -665,18 +305,7 @@ class Core:
dateClaimed - timestamp claimed inside the block, only as trustworthy as the block author is
expire - expire date for a block
'''
return coredb.blockmetadb.updateblockinfo.update_block_info(self, hash, key, data)
if key not in ('dateReceived', 'decrypted', 'dataType', 'dataFound', 'dataSaved', 'sig', 'author', 'dateClaimed', 'expire'):
return False
conn = sqlite3.connect(self.blockDB, timeout=30)
c = conn.cursor()
args = (data, hash)
c.execute("UPDATE hashes SET " + key + " = ? where hash = ?;", args)
conn.commit()
conn.close()
return True
def insertBlock(self, data, header='txt', sign=False, encryptType='', symKey='', asymPeer='', meta = {}, expire=None, disableForward=False):
'''
@ -684,7 +313,7 @@
encryptType must be specified to encrypt a block
'''
allocationReachedMessage = 'Cannot insert block, disk allocation reached.'
if self._utils.storageCounter.isFull():
if self.storage_counter.isFull():
logger.error(allocationReachedMessage)
return False
retData = False
@ -692,11 +321,9 @@
if type(data) is None:
raise ValueError('Data cannot be none')
createTime = self._utils.getRoundedEpoch()
createTime = epoch.get_epoch()
# check nonce
dataNonce = bytesconverter.bytes_to_str(self._crypto.sha3Hash(data))
#print(data)
dataNonce = self._utils.bytesToStr(self._crypto.sha3Hash(data))
try:
with open(self.dataNonceFile, 'r') as nonces:
if dataNonce in nonces:
@ -769,14 +396,18 @@
signature = self._crypto.symmetricEncrypt(signature, key=symKey, returnEncoded=True).decode()
signer = self._crypto.symmetricEncrypt(signer, key=symKey, returnEncoded=True).decode()
elif encryptType == 'asym':
if self._utils.validatePubKey(asymPeer):
if stringvalidators.validate_pub_key(asymPeer):
# Encrypt block data with forward secrecy key first, but not meta
jsonMeta = json.dumps(meta)
jsonMeta = self._crypto.pubKeyEncrypt(jsonMeta, asymPeer, encodedData=True).decode()
data = self._crypto.pubKeyEncrypt(data, asymPeer, encodedData=True).decode()
signature = self._crypto.pubKeyEncrypt(signature, asymPeer, encodedData=True).decode()
signer = self._crypto.pubKeyEncrypt(signer, asymPeer, encodedData=True).decode()
onionrusers.OnionrUser(self, asymPeer, saveUser=True)
try:
onionrusers.OnionrUser(self, asymPeer, saveUser=True)
except ValueError:
# if peer is already known
pass
else:
raise onionrexceptions.InvalidPubkey(asymPeer + ' is not a valid base32 encoded ed25519 key')
@ -804,25 +435,28 @@
retData = False
else:
# Tell the api server through localCommand to wait for the daemon to upload this block to make statistical analysis more difficult
if self._utils.localCommand('/ping', maxWait=10) == 'pong!':
if localcommand.local_command(self, '/ping', maxWait=10) == 'pong!':
self._utils.localCommand('/waitforshare/' + retData, post=True, maxWait=5)
if self.config.get('general.security_level', 1) == 0:
localcommand.local_command(self, '/waitforshare/' + retData, post=True, maxWait=5)
self.daemonQueueAdd('uploadBlock', retData)
else:
pass
self.addToBlockDB(retData, selfInsert=True, dataSaved=True)
self._utils.processBlockMetadata(retData)
blockmetadata.process_block_metadata(self, retData)
if retData != False:
if plaintextPeer == onionrvalues.DENIABLE_PEER_ADDRESS:
events.event('insertdeniable', {'content': plaintext, 'meta': plaintextMeta, 'hash': retData, 'peer': self._utils.bytesToStr(asymPeer)}, onionr = self.onionrInst, threaded = True)
events.event('insertdeniable', {'content': plaintext, 'meta': plaintextMeta, 'hash': retData, 'peer': bytesconverter.bytes_to_str(asymPeer)}, onionr = self.onionrInst, threaded = True)
else:
events.event('insertblock', {'content': plaintext, 'meta': plaintextMeta, 'hash': retData, 'peer': self._utils.bytesToStr(asymPeer)}, onionr = self.onionrInst, threaded = True)
events.event('insertblock', {'content': plaintext, 'meta': plaintextMeta, 'hash': retData, 'peer': bytesconverter.bytes_to_str(asymPeer)}, onionr = self.onionrInst, threaded = True)
return retData
def introduceNode(self):
'''
Introduces our node into the network by telling X many nodes our HS address
'''
if self._utils.localCommand('/ping', maxWait=10) == 'pong!':
if localcommand.local_command(self, '/ping', maxWait=10) == 'pong!':
self.daemonQueueAdd('announceNode')
logger.info('Introduction command will be processed.')
logger.info('Introduction command will be processed.', terminal=True)
else:
logger.warn('No running node detected. Cannot introduce.')
logger.warn('No running node detected. Cannot introduce.', terminal=True)
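As a rough sketch of the insertion path above (assumptions: my_core is a Core instance, recipient_pubkey is a valid base32 ed25519 public key, and the 'pm' header value is illustrative):
# hypothetical: plaintext, signed block
block_hash = my_core.insertBlock('hello world', header='txt', sign=True)
# hypothetical: asymmetrically encrypted block addressed to a specific peer
private_hash = my_core.insertBlock('hi', header='pm', sign=True, encryptType='asym', asymPeer=recipient_pubkey)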


@ -0,0 +1 @@
from . import keydb, blockmetadb, daemonqueue


@ -0,0 +1,77 @@
'''
Onionr - Private P2P Communication
This module works with information relating to blocks stored on the node
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
from . import expiredblocks, updateblockinfo, add
def get_block_list(core_inst, dateRec = None, unsaved = False):
'''
Get list of our blocks
'''
if dateRec == None:
dateRec = 0
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
execute = 'SELECT hash FROM hashes WHERE dateReceived >= ? ORDER BY dateReceived ASC;'
args = (dateRec,)
rows = list()
for row in c.execute(execute, args):
for i in row:
rows.append(i)
conn.close()
return rows
def get_block_date(core_inst, blockHash):
'''
Returns the date a block was received
'''
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
execute = 'SELECT dateReceived FROM hashes WHERE hash=?;'
args = (blockHash,)
for row in c.execute(execute, args):
for i in row:
return int(i)
conn.close()
return None
def get_blocks_by_type(core_inst, blockType, orderDate=True):
'''
Returns a list of blocks by the type
'''
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
if orderDate:
execute = 'SELECT hash FROM hashes WHERE dataType=? ORDER BY dateReceived;'
else:
execute = 'SELECT hash FROM hashes WHERE dataType=?;'
args = (blockType,)
rows = list()
for row in c.execute(execute, args):
for i in row:
rows.append(i)
conn.close()
return rows
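A hypothetical caller of this module might look like the following sketch (my_core is an assumed Core instance; the coredb import path matches the new package __init__ shown earlier):
# hypothetical usage of the block metadata helpers above
from coredb import blockmetadb
all_blocks = blockmetadb.get_block_list(my_core)
text_blocks = blockmetadb.get_blocks_by_type(my_core, 'txt')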


@ -0,0 +1,43 @@
'''
Onionr - Private P2P Communication
Add an entry to the block metadata database
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import os, sqlite3
from onionrutils import epoch, blockmetadata
def add_to_block_DB(core_inst, newHash, selfInsert=False, dataSaved=False):
'''
Add a hash value to the block db
Should be in hex format!
'''
if not os.path.exists(core_inst.blockDB):
raise Exception('Block db does not exist')
if blockmetadata.has_block(core_inst, newHash):
return
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
currentTime = epoch.get_epoch() + core_inst._crypto.secrets.randbelow(301)
if selfInsert or dataSaved:
selfInsert = 1
else:
selfInsert = 0
data = (newHash, currentTime, '', selfInsert)
c.execute('INSERT INTO hashes (hash, dateReceived, dataType, dataSaved) VALUES(?, ?, ?, ?);', data)
conn.commit()
conn.close()
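Note that the stored dateReceived is intentionally offset by up to 300 seconds (randbelow(301)) so receipt times are less useful for traffic analysis. A hypothetical call, where block_hash is an assumed hex hash string:
# hypothetical: record a newly received block hash as already saved to disk
add_to_block_DB(my_core, block_hash, dataSaved=True)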


@ -0,0 +1,35 @@
'''
Onionr - Private P2P Communication
Get a list of expired blocks still stored
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
from onionrutils import epoch
def get_expired_blocks(core_inst):
'''Returns a list of expired blocks'''
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
date = int(epoch.get_epoch())
execute = 'SELECT hash FROM hashes WHERE expire <= %s ORDER BY dateReceived;' % (date,)
rows = list()
for row in c.execute(execute):
for i in row:
rows.append(i)
conn.close()
return rows


@ -0,0 +1,33 @@
'''
Onionr - Private P2P Communication
Update block information in the metadata database by a field name
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
def update_block_info(core_inst, hash, key, data):
if key not in ('dateReceived', 'decrypted', 'dataType', 'dataFound', 'dataSaved', 'sig', 'author', 'dateClaimed', 'expire'):
return False
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
args = (data, hash)
c.execute("UPDATE hashes SET " + key + " = ? where hash = ?;", args)
conn.commit()
conn.close()
return True
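A hypothetical call using one of the whitelisted keys above (my_core and block_hash assumed):
# hypothetical: mark a block's data as saved
update_block_info(my_core, block_hash, 'dataSaved', 1)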


@ -0,0 +1,97 @@
'''
Onionr - Private P2P Communication
Write and read the daemon queue, which is how messages are passed into the onionr daemon in a more
direct way than the http api
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3, os
import onionrevents as events
from onionrutils import localcommand, epoch
def daemon_queue(core_inst):
'''
Gives commands to the communication process/daemon by reading an sqlite3 database
This function is intended to be used by the client. Queue to exchange data between "client" and server.
'''
retData = False
if not os.path.exists(core_inst.queueDB):
core_inst.dbCreate.createDaemonDB()
else:
conn = sqlite3.connect(core_inst.queueDB, timeout=30)
c = conn.cursor()
try:
for row in c.execute('SELECT command, data, date, min(ID), responseID FROM commands group by id'):
retData = row
break
except sqlite3.OperationalError:
core_inst.dbCreate.createDaemonDB()
else:
if retData != False:
c.execute('DELETE FROM commands WHERE id=?;', (retData[3],))
conn.commit()
conn.close()
events.event('queue_pop', data = {'data': retData}, onionr = core_inst.onionrInst)
return retData
def daemon_queue_add(core_inst, command, data='', responseID=''):
'''
Add a command to the daemon queue, used by the communication daemon (communicator.py)
'''
retData = True
date = epoch.get_epoch()
conn = sqlite3.connect(core_inst.queueDB, timeout=30)
c = conn.cursor()
t = (command, data, date, responseID)
try:
c.execute('INSERT INTO commands (command, data, date, responseID) VALUES(?, ?, ?, ?)', t)
conn.commit()
except sqlite3.OperationalError:
retData = False
core_inst.daemonQueue()
events.event('queue_push', data = {'command': command, 'data': data}, onionr = core_inst.onionrInst)
conn.close()
return retData
def daemon_queue_get_response(core_inst, responseID=''):
'''
Get a response sent by communicator to the API, by requesting to the API
'''
assert len(responseID) > 0
resp = localcommand.local_command(core_inst, 'queueResponse/' + responseID)
return resp
def clear_daemon_queue(core_inst):
'''
Clear the daemon queue (somewhat dangerous)
'''
conn = sqlite3.connect(core_inst.queueDB, timeout=30)
c = conn.cursor()
try:
c.execute('DELETE FROM commands;')
conn.commit()
except:
pass
conn.close()
events.event('queue_clear', onionr = core_inst.onionrInst)
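A rough sketch of the round trip these functions implement (my_core is an assumed Core instance; the response ID is an arbitrary caller-chosen string):
# hypothetical client side: queue a command for the daemon
daemon_queue_add(my_core, 'announceNode', responseID='example-response-id')
# hypothetical daemon side: pop the oldest queued command, or False if the queue is empty
cmd = daemon_queue(my_core)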


@ -0,0 +1 @@
from . import addkeys, listkeys, removekeys, userinfo, transportinfo


@ -0,0 +1,91 @@
'''
Onionr - Private P2P Communication
add user keys or transport addresses
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
import onionrevents as events
from onionrutils import stringvalidators
def add_peer(core_inst, peerID, name=''):
'''
Adds a public key to the key database (misleading function name)
'''
if peerID in core_inst.listPeers() or peerID == core_inst._crypto.pubKey:
raise ValueError("specified id is already known")
# This function simply adds a peer to the DB
if not stringvalidators.validate_pub_key(peerID):
return False
events.event('pubkey_add', data = {'key': peerID}, onionr = core_inst.onionrInst)
conn = sqlite3.connect(core_inst.peerDB, timeout=30)
hashID = core_inst._crypto.pubKeyHashID(peerID)
c = conn.cursor()
t = (peerID, name, 'unknown', hashID, 0)
for i in c.execute("SELECT * FROM peers WHERE id = ?;", (peerID,)):
try:
if i[0] == peerID:
conn.close()
return False
except ValueError:
pass
except IndexError:
pass
c.execute('INSERT INTO peers (id, name, dateSeen, hashID, trust) VALUES(?, ?, ?, ?, ?);', t)
conn.commit()
conn.close()
return True
def add_address(core_inst, address):
'''
Add an address to the address database (only tor currently)
'''
if type(address) is None or len(address) == 0:
return False
if stringvalidators.validate_transport(address):
if address == core_inst.config.get('i2p.ownAddr', None) or address == core_inst.hsAddress:
return False
conn = sqlite3.connect(core_inst.addressDB, timeout=30)
c = conn.cursor()
# check if address is in database
# this is safe to do because the address is validated above, but we strip some chars here too just in case
address = address.replace('\'', '').replace(';', '').replace('"', '').replace('\\', '')
for i in c.execute("SELECT * FROM adders WHERE address = ?;", (address,)):
try:
if i[0] == address:
conn.close()
return False
except ValueError:
pass
except IndexError:
pass
t = (address, 1)
c.execute('INSERT INTO adders (address, type) VALUES(?, ?);', t)
conn.commit()
conn.close()
events.event('address_add', data = {'address': address}, onionr = core_inst.onionrInst)
return True
else:
return False
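A hypothetical usage sketch (my_core assumed; the key and address values are placeholders, and both calls validate their input before inserting):
# hypothetical: store a newly learned public key and transport address
add_peer(my_core, example_pubkey, name='alice')
add_address(my_core, example_onion_address)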


@ -0,0 +1,83 @@
'''
Onionr - Private P2P Communication
get lists for user keys or transport addresses
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
import logger
from onionrutils import epoch
def list_peers(core_inst, randomOrder=True, getPow=False, trust=0):
'''
Return a list of public keys (misleading function name)
randomOrder determines if the list should be in a random order
trust sets the minimum trust to list
'''
conn = sqlite3.connect(core_inst.peerDB, timeout=30)
c = conn.cursor()
payload = ''
if trust not in (0, 1, 2):
logger.error('Tried to select invalid trust.')
return
if randomOrder:
payload = 'SELECT * FROM peers WHERE trust >= ? ORDER BY RANDOM();'
else:
payload = 'SELECT * FROM peers WHERE trust >= ?;'
peerList = []
for i in c.execute(payload, (trust,)):
try:
if len(i[0]) != 0:
if getPow:
peerList.append(i[0] + '-' + i[1])
else:
peerList.append(i[0])
except TypeError:
pass
conn.close()
return peerList
def list_adders(core_inst, randomOrder=True, i2p=True, recent=0):
'''
Return a list of transport addresses
'''
conn = sqlite3.connect(core_inst.addressDB, timeout=30)
c = conn.cursor()
if randomOrder:
addresses = c.execute('SELECT * FROM adders ORDER BY RANDOM();')
else:
addresses = c.execute('SELECT * FROM adders;')
addressList = []
for i in addresses:
if len(i[0].strip()) == 0:
continue
addressList.append(i[0])
conn.close()
testList = list(addressList) # create new list to iterate
for address in testList:
try:
if recent > 0 and (epoch.get_epoch() - core_inst.getAddressInfo(address, 'lastConnect')) > recent:
raise TypeError # If there is no last-connected date or it was too long ago, don't add peer to list if recent is not 0
except TypeError:
addressList.remove(address)
return addressList
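For example, a caller might request only trusted keys or only recently seen addresses (my_core assumed; values illustrative):
# hypothetical usage of the listing helpers above
trusted_keys = list_peers(my_core, trust=1)
recent_addresses = list_adders(my_core, recent=3600)  # only addresses connected within the last hour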


@ -0,0 +1,40 @@
'''
Onionr - Private P2P Communication
Remove a transport address but don't ban them
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
import onionrevents as events
from onionrutils import stringvalidators
def remove_address(core_inst, address):
'''
Remove an address from the address database
'''
if stringvalidators.validate_transport(address):
conn = sqlite3.connect(core_inst.addressDB, timeout=30)
c = conn.cursor()
t = (address,)
c.execute('Delete from adders where address=?;', t)
conn.commit()
conn.close()
events.event('address_remove', data = {'address': address}, onionr = core_inst.onionrInst)
return True
else:
return False


@ -0,0 +1,72 @@
'''
Onionr - Private P2P Communication
get or set transport address meta information
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
def get_address_info(core_inst, address, info):
'''
Get info about an address from its database entry
address text, 0
type int, 1
knownPeer text, 2
speed int, 3
success int, 4
powValue 5
failure int 6
lastConnect 7
trust 8
introduced 9
'''
conn = sqlite3.connect(core_inst.addressDB, timeout=30)
c = conn.cursor()
command = (address,)
infoNumbers = {'address': 0, 'type': 1, 'knownPeer': 2, 'speed': 3, 'success': 4, 'powValue': 5, 'failure': 6, 'lastConnect': 7, 'trust': 8, 'introduced': 9}
info = infoNumbers[info]
iterCount = 0
retVal = ''
for row in c.execute('SELECT * FROM adders WHERE address=?;', command):
for i in row:
if iterCount == info:
retVal = i
break
else:
iterCount += 1
conn.close()
return retVal
def set_address_info(core_inst, address, key, data):
'''
Update an address for a key
'''
conn = sqlite3.connect(core_inst.addressDB, timeout=30)
c = conn.cursor()
command = (data, address)
if key not in ('address', 'type', 'knownPeer', 'speed', 'success', 'failure', 'powValue', 'lastConnect', 'lastConnectAttempt', 'trust', 'introduced'):
raise Exception("Got invalid database key when setting address info")
else:
c.execute('UPDATE adders SET ' + key + ' = ? WHERE address=?', command)
conn.commit()
conn.close()


@ -0,0 +1,69 @@
'''
Onionr - Private P2P Communication
get or set information about a user id
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3
def get_user_info(core_inst, peer, info):
'''
Get info about a peer from their database entry
id text 0
name text, 1
adders text, 2
dateSeen not null, 3
trust int 4
hashID text 5
'''
conn = sqlite3.connect(core_inst.peerDB, timeout=30)
c = conn.cursor()
command = (peer,)
infoNumbers = {'id': 0, 'name': 1, 'adders': 2, 'dateSeen': 3, 'trust': 4, 'hashID': 5}
info = infoNumbers[info]
iterCount = 0
retVal = ''
for row in c.execute('SELECT * FROM peers WHERE id=?;', command):
for i in row:
if iterCount == info:
retVal = i
break
else:
iterCount += 1
conn.close()
return retVal
def set_peer_info(core_inst, peer, key, data):
'''
Update a peer for a key
'''
conn = sqlite3.connect(core_inst.peerDB, timeout=30)
c = conn.cursor()
command = (data, peer)
# TODO: validate key on whitelist
if key not in ('id', 'name', 'pubkey', 'forwardKey', 'dateSeen', 'trust'):
raise Exception("Got invalid database key when setting peer info")
c.execute('UPDATE peers SET ' + key + ' = ? WHERE id=?', command)
conn.commit()
conn.close()
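A hypothetical read/update pair using the column names listed in the docstring above (my_core and peer_key assumed):
# hypothetical: read and then update a peer's stored name
current_name = get_user_info(my_core, peer_key, 'name')
set_peer_info(my_core, peer_key, 'name', 'example name')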


@ -1,5 +1,5 @@
'''
Onionr - P2P Anonymous Storage Network
Onionr - Private P2P Communication
This file registers plugin's flask blueprints for the client http server
'''


@ -26,7 +26,8 @@ friends = Blueprint('friends', __name__)
@friends.route('/friends/list')
def list_friends():
pubkey_list = {}
friend_list = contactmanager.ContactManager.list_friends(core.Core())
c = core.Core()
friend_list = contactmanager.ContactManager.list_friends(c)
for friend in friend_list:
pubkey_list[friend.publicKey] = {'name': friend.get_info('name')}
return json.dumps(pubkey_list)


@ -21,6 +21,8 @@ import base64
from flask import Response
import logger
from etc import onionrvalues
from onionrutils import stringvalidators, bytesconverter
def handle_announce(clientAPI, request):
'''
accept announcement posts, validating POW
@ -51,8 +53,8 @@ def handle_announce(clientAPI, request):
except AttributeError:
pass
if powHash.startswith('0' * onionrvalues.OnionrValues().announce_pow):
newNode = clientAPI._core._utils.bytesToStr(newNode)
newNode = bytesconverter.bytes_to_str(newNode)
if clientAPI._core._utils.validateID(newNode) and not newNode in clientAPI._core.onionrInst.communicatorInst.newPeers:
if stringvalidators.validate_transport(newNode) and not newNode in clientAPI._core.onionrInst.communicatorInst.newPeers:
clientAPI._core.onionrInst.communicatorInst.newPeers.append(newNode)
resp = 'Success'
else:


@ -19,11 +19,12 @@
'''
from flask import Response, abort
import config
from onionrutils import bytesconverter, stringvalidators
def get_public_block_list(clientAPI, publicAPI, request):
# Provide a list of our blocks, with a date offset
dateAdjust = request.args.get('date')
bList = clientAPI._core.getBlockList(dateRec=dateAdjust)
if config.get('general.hide_created_blocks', True):
if clientAPI._core.config.get('general.hide_created_blocks', True):
for b in publicAPI.hideBlocks:
if b in bList:
# Don't share blocks we created if they haven't been *uploaded* yet, makes it harder to find who created a block
@ -33,15 +34,15 @@ def get_public_block_list(clientAPI, publicAPI, request):
def get_block_data(clientAPI, publicAPI, data):
'''data is the block hash in hex'''
resp = ''
if clientAPI._utils.validateHash(data):
if stringvalidators.validate_hash(data):
if not config.get('general.hide_created_blocks', True) or data not in publicAPI.hideBlocks:
if not clientAPI._core.config.get('general.hide_created_blocks', True) or data not in publicAPI.hideBlocks:
if data in clientAPI._core.getBlockList():
block = clientAPI.getBlockData(data, raw=True)
try:
block = block.encode() # Encode in case data is binary
except AttributeError:
abort(404)
block = clientAPI._core._utils.strToBytes(block)
block = bytesconverter.str_to_bytes(block)
resp = block
if len(resp) == 0:
abort(404)


@ -17,20 +17,20 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
from onionrutils import bytesconverter
import onionrcrypto
class KeyManager:
def __init__(self, crypto):
assert isinstance(crypto, onionrcrypto.OnionrCrypto)
self._core = crypto._core
self._utils = self._core._utils
self.keyFile = crypto._keyFile
self.crypto = crypto
def addKey(self, pubKey=None, privKey=None):
if type(pubKey) is type(None) and type(privKey) is type(None):
pubKey, privKey = self.crypto.generatePubKey()
pubKey = self.crypto._core._utils.bytesToStr(pubKey)
pubKey = bytesconverter.bytes_to_str(pubKey)
privKey = self.crypto._core._utils.bytesToStr(privKey)
privKey = bytesconverter.bytes_to_str(privKey)
try:
if pubKey in self.getPubkeyList():
raise ValueError('Pubkey already in list: %s' % (pubKey,))


@ -126,24 +126,24 @@ def get_file():
return _outputfile
def raw(data, fd = sys.stdout, sensitive = False):
def raw(data, fd = sys.stdout, terminal = False):
'''
Outputs raw data to console without formatting
'''
if get_settings() & OUTPUT_TO_CONSOLE:
if terminal and (get_settings() & OUTPUT_TO_CONSOLE):
try:
ts = fd.write('%s\n' % data)
except OSError:
pass
if get_settings() & OUTPUT_TO_FILE and not sensitive:
if get_settings() & OUTPUT_TO_FILE:
try:
with open(_outputfile, "a+") as f:
f.write(colors.filter(data) + '\n')
except OSError:
pass
def log(prefix, data, color = '', timestamp=True, fd = sys.stdout, prompt = True, sensitive = False):
def log(prefix, data, color = '', timestamp=True, fd = sys.stdout, prompt = True, terminal = False):
'''
Logs the data
prefix : The prefix to the output
@ -158,7 +158,7 @@ def log(prefix, data, color = '', timestamp=True, fd = sys.stdout, prompt = True
if not get_settings() & USE_ANSI:
output = colors.filter(output)
raw(output, fd = fd, sensitive = sensitive)
raw(output, fd = fd, terminal = terminal)
def readline(message = ''):
'''
@ -210,37 +210,37 @@ def confirm(default = 'y', message = 'Are you sure %s? '):
return default == 'y'
# debug: when there is info that could be useful for debugging purposes only
def debug(data, error = None, timestamp = True, prompt = True, sensitive = False, level = LEVEL_DEBUG):
def debug(data, error = None, timestamp = True, prompt = True, terminal = False, level = LEVEL_DEBUG):
if get_level() <= level:
log('/', data, timestamp = timestamp, prompt = prompt, sensitive = sensitive)
log('/', data, timestamp = timestamp, prompt = prompt, terminal = terminal)
if not error is None:
debug('Error: ' + str(error) + parse_error())
# info: when there is something to notify the user of, such as the success of a process
def info(data, timestamp = False, prompt = True, sensitive = False, level = LEVEL_INFO):
def info(data, timestamp = False, prompt = True, terminal = False, level = LEVEL_INFO):
if get_level() <= level:
log('+', data, colors.fg.green, timestamp = timestamp, prompt = prompt, sensitive = sensitive)
log('+', data, colors.fg.green, timestamp = timestamp, prompt = prompt, terminal = terminal)
# warn: when there is a potential for something bad to happen
def warn(data, error = None, timestamp = True, prompt = True, sensitive = False, level = LEVEL_WARN):
def warn(data, error = None, timestamp = True, prompt = True, terminal = False, level = LEVEL_WARN):
if not error is None:
debug('Error: ' + str(error) + parse_error())
if get_level() <= level:
log('!', data, colors.fg.orange, timestamp = timestamp, prompt = prompt, sensitive = sensitive)
log('!', data, colors.fg.orange, timestamp = timestamp, prompt = prompt, terminal = terminal)
# error: when only one function, module, or process of the program encountered a problem and must stop
def error(data, error = None, timestamp = True, prompt = True, sensitive = False, level = LEVEL_ERROR):
def error(data, error = None, timestamp = True, prompt = True, terminal = False, level = LEVEL_ERROR):
if get_level() <= level:
log('-', data, colors.fg.red, timestamp = timestamp, fd = sys.stderr, prompt = prompt, sensitive = sensitive)
log('-', data, colors.fg.red, timestamp = timestamp, fd = sys.stderr, prompt = prompt, terminal = terminal)
if not error is None:
debug('Error: ' + str(error) + parse_error())
# fatal: when the something so bad has happened that the program must stop
def fatal(data, error = None, timestamp=True, prompt = True, sensitive = False, level = LEVEL_FATAL):
def fatal(data, error = None, timestamp=True, prompt = True, terminal = False, level = LEVEL_FATAL):
if not error is None:
debug('Error: ' + str(error) + parse_error(), sensitive = sensitive)
debug('Error: ' + str(error) + parse_error(), terminal = terminal)
if get_level() <= level:
log('#', data, colors.bg.red + colors.fg.green + colors.bold, timestamp = timestamp, fd = sys.stderr, prompt = prompt, sensitive = sensitive)
log('#', data, colors.bg.red + colors.fg.green + colors.bold, timestamp = timestamp, fd = sys.stderr, prompt = prompt, terminal = terminal)
# returns a formatted error message
def parse_error():
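The effect of the sensitive-to-terminal change above is that console output becomes opt-in per call; a rough sketch of the new convention (messages taken from elsewhere in this commit):
logger.info('Finished starting Tor.', terminal=True)  # shown on the console (if console output is enabled) and written to the log file
logger.debug('internal state details')  # written to the log file only under the new default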


@ -20,7 +20,6 @@
import subprocess, os, sys, time, signal, base64, socket
from shutil import which
import logger, config
from onionrblockapi import Block
config.reload()
def getOpenPort():
# taken from (but modified) https://stackoverflow.com/a/2838309 by https://stackoverflow.com/users/133374/albert ccy-by-sa-3 https://creativecommons.org/licenses/by-sa/3.0/
@ -124,14 +123,14 @@ HiddenServicePort 80 ''' + self.apiServerIP + ''':''' + str(self.hsPort)
try:
tor = subprocess.Popen([self.torBinary, '-f', self.torConfigLocation], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except FileNotFoundError:
logger.fatal("Tor was not found in your path or the Onionr directory. Please install Tor and try again.")
logger.fatal("Tor was not found in your path or the Onionr directory. Please install Tor and try again.", terminal=True)
sys.exit(1)
else:
# Test Tor Version
torVersion = subprocess.Popen([self.torBinary, '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for line in iter(torVersion.stdout.readline, b''):
if 'Tor 0.2.' in line.decode():
logger.error('Tor 0.3+ required')
logger.fatal('Tor 0.3+ required', terminal=True)
sys.exit(1)
break
torVersion.kill()
@ -140,17 +139,18 @@ HiddenServicePort 80 ''' + self.apiServerIP + ''':''' + str(self.hsPort)
try:
for line in iter(tor.stdout.readline, b''):
if 'bootstrapped 100' in line.decode().lower():
logger.info(line.decode())
break
elif 'opening socks listener' in line.decode().lower():
logger.debug(line.decode().replace('\n', ''))
else:
logger.fatal('Failed to start Tor. Maybe a stray instance of Tor used by Onionr is still running? This can also be a result of file permissions being too open')
logger.fatal('Failed to start Tor. Maybe a stray instance of Tor used by Onionr is still running? This can also be a result of file permissions being too open', terminal=True)
return False
except KeyboardInterrupt:
logger.fatal('Got keyboard interrupt. Onionr will exit soon.', timestamp = False, level = logger.LEVEL_IMPORTANT)
logger.fatal('Got keyboard interrupt. Onionr will exit soon.', timestamp = False, level = logger.LEVEL_IMPORTANT, terminal=True)
return False
logger.debug('Finished starting Tor.', timestamp=True)
logger.info('Finished starting Tor.', terminal=True)
self.readyState = True
try:


@ -21,14 +21,23 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sys
ONIONR_TAGLINE = 'Private P2P Communication - GPLv3 - https://Onionr.net'
ONIONR_VERSION = '0.0.0' # for debugging and stuff
ONIONR_VERSION_TUPLE = tuple(ONIONR_VERSION.split('.')) # (MAJOR, MINOR, VERSION)
API_VERSION = '0' # increments of 1; only change when something fundamental about how the API works changes. This way other nodes know how to communicate without learning too much information about you.
MIN_PY_VERSION = 6
if sys.version_info[0] == 2 or sys.version_info[1] < MIN_PY_VERSION:
sys.stderr.write('Error, Onionr requires Python 3.%s+' % (MIN_PY_VERSION,))
sys.stderr.write('Error, Onionr requires Python 3.%s+\n' % (MIN_PY_VERSION,))
sys.exit(1)
from utils import detectoptimization
if detectoptimization.detect_optimization():
sys.stderr.write('Error, Onionr cannot be run in optimized mode\n')
sys.exit(1)
import os, base64, random, shutil, time, platform, signal
from threading import Thread
import api, core, config, logger, onionrplugins as plugins, onionrevents as events
import onionrutils
import netcontroller
from netcontroller import NetController
from onionrblockapi import Block
@ -40,17 +49,13 @@ try:
except ImportError:
raise Exception("You need the PySocks module (for use with socks5 proxy to use Tor)")
ONIONR_TAGLINE = 'Private P2P Communication - GPLv3 - https://Onionr.net'
ONIONR_VERSION = '0.0.0' # for debugging and stuff
ONIONR_VERSION_TUPLE = tuple(ONIONR_VERSION.split('.')) # (MAJOR, MINOR, VERSION)
API_VERSION = '0' # increments of 1; only change when something fundamental about how the API works changes. This way other nodes know how to communicate without learning too much information about you.
class Onionr:
def __init__(self):
'''
Main Onionr class. This is for the CLI program, and does not handle much of the logic.
In general, external programs and plugins should not use this class.
'''
self.API_VERSION = API_VERSION
self.userRunDir = os.getcwd() # Directory user runs the program from
self.killed = False
@ -72,7 +77,7 @@ class Onionr:
data_exists = Onionr.setupConfig(self.dataDir, self)
if netcontroller.torBinary() is None:
logger.error('Tor is not installed')
logger.error('Tor is not installed', terminal=True)
sys.exit(1)
# If block data folder does not exist
@ -101,7 +106,6 @@ class Onionr:
self.onionrCore = core.Core()
self.onionrCore.onionrInst = self
#self.deleteRunFiles()
self.onionrUtils = onionrutils.OnionrUtils(self.onionrCore)
self.clientAPIInst = '' # Client http api instance
self.publicAPIInst = '' # Public http api instance
@ -159,7 +163,7 @@ class Onionr:
sys.stderr.write(file.read().decode().replace('P', logger.colors.fg.pink).replace('W', logger.colors.reset + logger.colors.bold).replace('G', logger.colors.fg.green).replace('\n', logger.colors.reset + '\n').replace('B', logger.colors.bold).replace('A', '%s' % API_VERSION).replace('V', ONIONR_VERSION))
if not message is None:
logger.info(logger.colors.fg.lightgreen + '-> ' + str(message) + logger.colors.reset + logger.colors.fg.lightgreen + ' <-\n', sensitive=True)
logger.info(logger.colors.fg.lightgreen + '-> ' + str(message) + logger.colors.reset + logger.colors.fg.lightgreen + ' <-\n', terminal=True)
def deleteRunFiles(self):
try:
@ -200,7 +204,7 @@ class Onionr:
'''
def exportBlock(self):
commands.exportblocks(self)
commands.exportblocks.export_block(self)
def showDetails(self):
commands.onionrstatistics.show_details(self)
@ -232,13 +236,13 @@ class Onionr:
def listPeers(self):
logger.info('Peer transport address list:')
for i in self.onionrCore.listAdders():
logger.info(i)
logger.info(i, terminal=True)
def getWebPassword(self):
return config.get('client.webpassword')
def printWebPassword(self):
logger.info(self.getWebPassword(), sensitive = True)
logger.info(self.getWebPassword(), terminal=True)
def getHelp(self):
return self.cmdhelp
@ -263,13 +267,13 @@ class Onionr:
if len(sys.argv) >= 4:
config.reload()
config.set(sys.argv[2], sys.argv[3], True)
logger.debug('Configuration file updated.')
logger.info('Configuration file updated.', terminal=True)
elif len(sys.argv) >= 3:
config.reload()
logger.info(logger.colors.bold + sys.argv[2] + ': ' + logger.colors.reset + str(config.get(sys.argv[2], logger.colors.fg.red + 'Not set.')))
logger.info(logger.colors.bold + sys.argv[2] + ': ' + logger.colors.reset + str(config.get(sys.argv[2], logger.colors.fg.red + 'Not set.')), terminal=True)
else:
logger.info(logger.colors.bold + 'Get a value: ' + logger.colors.reset + sys.argv[0] + ' ' + sys.argv[1] + ' <key>')
logger.info(logger.colors.bold + 'Get a value: ' + logger.colors.reset + sys.argv[0] + ' ' + sys.argv[1] + ' <key>', terminal=True)
logger.info(logger.colors.bold + 'Set a value: ' + logger.colors.reset + sys.argv[0] + ' ' + sys.argv[1] + ' <key> <value>')
logger.info(logger.colors.bold + 'Set a value: ' + logger.colors.reset + sys.argv[0] + ' ' + sys.argv[1] + ' <key> <value>', terminal=True)
def execute(self, argument):
'''
@ -289,11 +293,11 @@ class Onionr:
Displays the Onionr version
'''
function('Onionr v%s (%s) (API v%s)' % (ONIONR_VERSION, platform.machine(), API_VERSION))
function('Onionr v%s (%s) (API v%s)' % (ONIONR_VERSION, platform.machine(), API_VERSION), terminal=True)
if verbosity >= 1:
function(ONIONR_TAGLINE)
function(ONIONR_TAGLINE, terminal=True)
if verbosity >= 2:
function('Running on %s %s' % (platform.platform(), platform.release()))
function('Running on %s %s' % (platform.platform(), platform.release()), terminal=True)
def doPEX(self):
'''make communicator do pex'''
@ -304,7 +308,7 @@ class Onionr:
'''
Displays a list of keys (used to be called peers) (?)
'''
logger.info('%sPublic keys in database: \n%s%s' % (logger.colors.fg.lightgreen, logger.colors.fg.green, '\n'.join(self.onionrCore.listPeers())))
logger.info('%sPublic keys in database: \n%s%s' % (logger.colors.fg.lightgreen, logger.colors.fg.green, '\n'.join(self.onionrCore.listPeers())), terminal=True)
def addPeer(self):
'''
@ -347,14 +351,14 @@ class Onionr:
Displays a "command not found" message
'''
logger.error('Command not found.', timestamp = False)
logger.error('Command not found.', timestamp = False, terminal=True)
def showHelpSuggestion(self):
'''
Displays a message suggesting help
'''
if __name__ == '__main__':
logger.info('Do ' + logger.colors.bold + sys.argv[0] + ' --help' + logger.colors.reset + logger.colors.fg.green + ' for Onionr help.')
logger.info('Do ' + logger.colors.bold + sys.argv[0] + ' --help' + logger.colors.reset + logger.colors.fg.green + ' for Onionr help.', terminal=True)
def start(self, input = False, override = False):
'''


@ -18,6 +18,7 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3, os, logger
from onionrutils import epoch, bytesconverter
class OnionrBlackList:
def __init__(self, coreInst):
self.blacklistDB = coreInst.dataDir + 'blacklist.db'
@ -28,7 +29,7 @@ class OnionrBlackList:
return
def inBlacklist(self, data):
hashed = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(data))
hashed = bytesconverter.bytes_to_str(self._core._crypto.sha3Hash(data))
retData = False
if not hashed.isalnum():
@ -56,7 +57,7 @@ class OnionrBlackList:
def deleteExpired(self, dataType=0):
'''Delete expired entries'''
deleteList = []
curTime = self._core._utils.getEpoch()
curTime = epoch.get_epoch()
try:
int(dataType)
@ -98,7 +99,7 @@ class OnionrBlackList:
2=pubkey
'''
# we hash the data so we can remove data entirely from our node's disk
hashed = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(data))
hashed = bytesconverter.bytes_to_str(self._core._crypto.sha3Hash(data))
if len(hashed) > 64:
raise Exception("Hashed data is too large")
@ -115,7 +116,7 @@ class OnionrBlackList:
if self.inBlacklist(hashed):
return
insert = (hashed,)
blacklistDate = self._core._utils.getEpoch()
blacklistDate = epoch.get_epoch()
try:
self._dbExecute("INSERT INTO blacklist (hash, dataType, blacklistDate, expire) VALUES(?, ?, ?, ?);", (str(hashed), dataType, blacklistDate, expire))
except sqlite3.IntegrityError:


@ -21,6 +21,7 @@
import core as onionrcore, logger, config, onionrexceptions, nacl.exceptions
import json, os, sys, datetime, base64, onionrstorage
from onionrusers import onionrusers
from onionrutils import stringvalidators, epoch
class Block:
blockCacheOrder = list() # NEVER write your own code that writes to this!
@ -88,7 +89,7 @@ class Block:
# Check for replay attacks
try:
if self.core._utils.getEpoch() - self.core.getBlockDate(self.hash) < 60:
if epoch.get_epoch() - self.core.getBlockDate(self.hash) < 60:
assert self.core._crypto.replayTimestampValidation(self.bmetadata['rply'])
except (AssertionError, KeyError, TypeError) as e:
if not self.bypassReplayCheck:
@ -441,7 +442,7 @@ class Block:
'''
try:
if (not self.isSigned()) or (not self.getCore()._utils.validatePubKey(signer)):
if (not self.isSigned()) or (not stringvalidators.validate_pub_key(signer)):
return False
return bool(self.getCore()._crypto.edVerify(self.getSignedData(), signer, self.getSignature(), encodedData = encodedData))

View File

@ -22,6 +22,7 @@ import webbrowser, sys
import logger import logger
from . import pubkeymanager, onionrstatistics, daemonlaunch, filecommands, plugincommands, keyadders from . import pubkeymanager, onionrstatistics, daemonlaunch, filecommands, plugincommands, keyadders
from . import banblocks, exportblocks, openwebinterface, resettor from . import banblocks, exportblocks, openwebinterface, resettor
from onionrutils import importnewblocks
def show_help(o_inst, command): def show_help(o_inst, command):
@ -110,8 +111,8 @@ def get_commands(onionr_inst):
'listconn': onionr_inst.listConn, 'listconn': onionr_inst.listConn,
'list-conn': onionr_inst.listConn, 'list-conn': onionr_inst.listConn,
'import-blocks': onionr_inst.onionrUtils.importNewBlocks, 'import-blocks': importnewblocks.import_new_blocks,
'importblocks': onionr_inst.onionrUtils.importNewBlocks, 'importblocks': importnewblocks.import_new_blocks,
'introduce': onionr_inst.onionrCore.introduceNode, 'introduce': onionr_inst.onionrCore.introduceNode,
'pex': onionr_inst.doPEX, 'pex': onionr_inst.doPEX,

View File

@ -19,21 +19,22 @@
''' '''
import sys import sys
import logger import logger
from onionrutils import stringvalidators
def ban_block(o_inst): def ban_block(o_inst):
try: try:
ban = sys.argv[2] ban = sys.argv[2]
except IndexError: except IndexError:
ban = logger.readline('Enter a block hash:') ban = logger.readline('Enter a block hash:')
if o_inst.onionrUtils.validateHash(ban): if stringvalidators.validate_hash(ban):
if not o_inst.onionrCore._blacklist.inBlacklist(ban): if not o_inst.onionrCore._blacklist.inBlacklist(ban):
try: try:
o_inst.onionrCore._blacklist.addToDB(ban) o_inst.onionrCore._blacklist.addToDB(ban)
o_inst.onionrCore.removeBlock(ban) o_inst.onionrCore.removeBlock(ban)
except Exception as error: except Exception as error:
logger.error('Could not blacklist block', error=error) logger.error('Could not blacklist block', error=error, terminal=True)
else: else:
logger.info('Block blacklisted') logger.info('Block blacklisted', terminal=True)
else: else:
logger.warn('That block is already blacklisted') logger.warn('That block is already blacklisted', terminal=True)
else: else:
logger.error('Invalid block hash') logger.error('Invalid block hash', terminal=True)

View File

@ -23,9 +23,10 @@ from threading import Thread
import onionr, api, logger, communicator import onionr, api, logger, communicator
import onionrevents as events import onionrevents as events
from netcontroller import NetController from netcontroller import NetController
from onionrutils import localcommand
def _proper_shutdown(o_inst): def _proper_shutdown(o_inst):
o_inst.onionrUtils.localCommand('shutdown') localcommand.local_command(o_inst.onionrCore, 'shutdown')
sys.exit(1) sys.exit(1)
def daemon(o_inst): def daemon(o_inst):
@ -38,13 +39,8 @@ def daemon(o_inst):
logger.debug('Runcheck file found on daemon start, deleting in advance.') logger.debug('Runcheck file found on daemon start, deleting in advance.')
os.remove('%s/.runcheck' % (o_inst.onionrCore.dataDir,)) os.remove('%s/.runcheck' % (o_inst.onionrCore.dataDir,))
Thread(target=api.API, args=(o_inst, o_inst.debug, onionr.API_VERSION)).start() Thread(target=api.API, args=(o_inst, o_inst.debug, onionr.API_VERSION), daemon=True).start()
Thread(target=api.PublicAPI, args=[o_inst.getClientApi()]).start() Thread(target=api.PublicAPI, args=[o_inst.getClientApi()], daemon=True).start()
try:
time.sleep(0)
except KeyboardInterrupt:
logger.debug('Got keyboard interrupt, shutting down...')
_proper_shutdown(o_inst)
apiHost = '' apiHost = ''
while apiHost == '': while apiHost == '':
@ -56,18 +52,25 @@ def daemon(o_inst):
time.sleep(0.5) time.sleep(0.5)
#onionr.Onionr.setupConfig('data/', self = o_inst) #onionr.Onionr.setupConfig('data/', self = o_inst)
logger.raw('', terminal=True)
# print nice header thing :)
if o_inst.onionrCore.config.get('general.display_header', True):
o_inst.header()
o_inst.version(verbosity = 5, function = logger.info)
logger.debug('Python version %s' % platform.python_version())
if o_inst._developmentMode: if o_inst._developmentMode:
logger.warn('DEVELOPMENT MODE ENABLED (NOT RECOMMENDED)', timestamp = False) logger.warn('Development mode enabled', timestamp = False, terminal=True)
net = NetController(o_inst.onionrCore.config.get('client.public.port', 59497), apiServerIP=apiHost) net = NetController(o_inst.onionrCore.config.get('client.public.port', 59497), apiServerIP=apiHost)
logger.debug('Tor is starting...') logger.info('Tor is starting...', terminal=True)
if not net.startTor(): if not net.startTor():
o_inst.onionrUtils.localCommand('shutdown') localcommand.local_command(o_inst.onionrCore, 'shutdown')
sys.exit(1) sys.exit(1)
if len(net.myID) > 0 and o_inst.onionrCore.config.get('general.security_level', 1) == 0: if len(net.myID) > 0 and o_inst.onionrCore.config.get('general.security_level', 1) == 0:
logger.debug('Started .onion service: %s' % (logger.colors.underline + net.myID)) logger.debug('Started .onion service: %s' % (logger.colors.underline + net.myID))
else: else:
logger.debug('.onion service disabled') logger.debug('.onion service disabled')
logger.debug('Using public key: %s' % (logger.colors.underline + o_inst.onionrCore._crypto.pubKey)) logger.info('Using public key: %s' % (logger.colors.underline + o_inst.onionrCore._crypto.pubKey[:52]), terminal=True)
try: try:
time.sleep(1) time.sleep(1)
@ -75,20 +78,12 @@ def daemon(o_inst):
_proper_shutdown(o_inst) _proper_shutdown(o_inst)
o_inst.onionrCore.torPort = net.socksPort o_inst.onionrCore.torPort = net.socksPort
communicatorThread = Thread(target=communicator.startCommunicator, args=(o_inst, str(net.socksPort))) communicatorThread = Thread(target=communicator.startCommunicator, args=(o_inst, str(net.socksPort)), daemon=True)
communicatorThread.start() communicatorThread.start()
while o_inst.communicatorInst is None: while o_inst.communicatorInst is None:
time.sleep(0.1) time.sleep(0.1)
# print nice header thing :)
if o_inst.onionrCore.config.get('general.display_header', True):
o_inst.header()
# print out debug info
o_inst.version(verbosity = 5, function = logger.debug)
logger.debug('Python version %s' % platform.python_version())
logger.debug('Started communicator.') logger.debug('Started communicator.')
events.event('daemon_start', onionr = o_inst) events.event('daemon_start', onionr = o_inst)
@ -109,10 +104,10 @@ def daemon(o_inst):
signal.signal(signal.SIGINT, _ignore_sigint) signal.signal(signal.SIGINT, _ignore_sigint)
o_inst.onionrCore.daemonQueueAdd('shutdown') o_inst.onionrCore.daemonQueueAdd('shutdown')
o_inst.onionrUtils.localCommand('shutdown') localcommand.local_command(o_inst.onionrCore, 'shutdown')
net.killTor() net.killTor()
time.sleep(3) time.sleep(5) # Give threads time to finish; otherwise any "daemon" threads are killed abruptly at exit (http://docs.python.org/library/threading.html#threading.Thread.daemon)
o_inst.deleteRunFiles() o_inst.deleteRunFiles()
return return
@ -124,7 +119,7 @@ def kill_daemon(o_inst):
Shutdown the Onionr daemon Shutdown the Onionr daemon
''' '''
logger.warn('Stopping the running daemon...', timestamp = False) logger.warn('Stopping the running daemon...', timestamp = False, terminal=True)
try: try:
events.event('daemon_stop', onionr = o_inst) events.event('daemon_stop', onionr = o_inst)
net = NetController(o_inst.onionrCore.config.get('client.port', 59496)) net = NetController(o_inst.onionrCore.config.get('client.port', 59496))
@ -135,12 +130,12 @@ def kill_daemon(o_inst):
net.killTor() net.killTor()
except Exception as e: except Exception as e:
logger.error('Failed to shutdown daemon.', error = e, timestamp = False) logger.error('Failed to shutdown daemon.', error = e, timestamp = False, terminal=True)
return return
def start(o_inst, input = False, override = False): def start(o_inst, input = False, override = False):
if os.path.exists('.onionr-lock') and not override: if os.path.exists('.onionr-lock') and not override:
logger.fatal('Cannot start. Daemon is already running, or it did not exit cleanly.\n(if you are sure that there is not a daemon running, delete .onionr-lock & try again).') logger.fatal('Cannot start. Daemon is already running, or it did not exit cleanly.\n(if you are sure that there is not a daemon running, delete .onionr-lock & try again).', terminal=True)
else: else:
if not o_inst.debug and not o_inst._developmentMode: if not o_inst.debug and not o_inst._developmentMode:
lockFile = open('.onionr-lock', 'w') lockFile = open('.onionr-lock', 'w')
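The daemon=True changes above rely on standard Python threading semantics: daemon threads are terminated abruptly once the main thread exits, which is why the shutdown path now sleeps five seconds instead of three before returning. A self-contained illustration of that behaviour (not Onionr code):

import threading, time

def background_work():
    while True:
        time.sleep(1)  # stands in for the API servers / communicator loops

t = threading.Thread(target=background_work, daemon=True)
t.start()

# No join(): when the main thread returns, the daemon thread is killed abruptly,
# so real shutdown code waits briefly to let in-flight work settle first.
time.sleep(5)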

View File

@ -17,26 +17,28 @@
You should have received a copy of the GNU General Public License You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import sys import sys, os
import logger, onionrstorage import logger, onionrstorage
from onionrutils import stringvalidators
def doExport(o_inst, bHash): def doExport(o_inst, bHash):
exportDir = o_inst.dataDir + 'block-export/' exportDir = o_inst.dataDir + 'block-export/'
if not os.path.exists(exportDir): if not os.path.exists(exportDir):
if os.path.exists(o_inst.dataDir): if os.path.exists(o_inst.dataDir):
os.mkdir(exportDir) os.mkdir(exportDir)
else: else:
logger.error('Onionr not initialized') logger.error('Onionr not initialized', terminal=True)
data = onionrstorage.getData(o_inst.onionrCore, bHash) data = onionrstorage.getData(o_inst.onionrCore, bHash)
with open('%s/%s.dat' % (exportDir, bHash), 'wb') as exportFile: with open('%s/%s.dat' % (exportDir, bHash), 'wb') as exportFile:
exportFile.write(data) exportFile.write(data)
logger.info('Block exported as file', terminal=True)
def export_block(o_inst): def export_block(o_inst):
exportDir = o_inst.dataDir + 'block-export/' exportDir = o_inst.dataDir + 'block-export/'
try: try:
assert o_inst.onionrUtils.validateHash(sys.argv[2]) assert stringvalidators.validate_hash(sys.argv[2])
except (IndexError, AssertionError): except (IndexError, AssertionError):
logger.error('No valid block hash specified.') logger.error('No valid block hash specified.', terminal=True)
sys.exit(1) sys.exit(1)
else: else:
bHash = sys.argv[2] bHash = sys.argv[2]
o_inst.doExport(bHash) doExport(o_inst, bHash)

View File

@ -21,6 +21,7 @@
import base64, sys, os import base64, sys, os
import logger import logger
from onionrblockapi import Block from onionrblockapi import Block
from onionrutils import stringvalidators
def add_file(o_inst, singleBlock=False, blockType='bin'): def add_file(o_inst, singleBlock=False, blockType='bin'):
''' '''
Adds a file to the onionr network Adds a file to the onionr network
@ -31,18 +32,18 @@ def add_file(o_inst, singleBlock=False, blockType='bin'):
contents = None contents = None
if not os.path.exists(filename): if not os.path.exists(filename):
logger.error('That file does not exist. Improper path (specify full path)?') logger.error('That file does not exist. Improper path (specify full path)?', terminal=True)
return return
logger.info('Adding file... this might take a long time.') logger.info('Adding file... this might take a long time.', terminal=True)
try: try:
with open(filename, 'rb') as singleFile: with open(filename, 'rb') as singleFile:
blockhash = o_inst.onionrCore.insertBlock(base64.b64encode(singleFile.read()), header=blockType) blockhash = o_inst.onionrCore.insertBlock(base64.b64encode(singleFile.read()), header=blockType)
if len(blockhash) > 0: if len(blockhash) > 0:
logger.info('File %s saved in block %s' % (filename, blockhash)) logger.info('File %s saved in block %s' % (filename, blockhash), terminal=True)
except: except:
logger.error('Failed to save file in block.', timestamp = False) logger.error('Failed to save file in block.', timestamp = False, terminal=True)
else: else:
logger.error('%s add-file <filename>' % sys.argv[0], timestamp = False) logger.error('%s add-file <filename>' % sys.argv[0], timestamp = False, terminal=True)
def getFile(o_inst): def getFile(o_inst):
''' '''
@ -52,16 +53,16 @@ def getFile(o_inst):
fileName = sys.argv[2] fileName = sys.argv[2]
bHash = sys.argv[3] bHash = sys.argv[3]
except IndexError: except IndexError:
logger.error("Syntax %s %s" % (sys.argv[0], '/path/to/filename <blockhash>')) logger.error("Syntax %s %s" % (sys.argv[0], '/path/to/filename <blockhash>'), terminal=True)
else: else:
logger.info(fileName) logger.info(fileName, terminal=True)
contents = None contents = None
if os.path.exists(fileName): if os.path.exists(fileName):
logger.error("File already exists") logger.error("File already exists", terminal=True)
return return
if not o_inst.onionrUtils.validateHash(bHash): if not stringvalidators.validate_hash(bHash):
logger.error('Block hash is invalid') logger.error('Block hash is invalid', terminal=True)
return return
with open(fileName, 'wb') as myFile: with open(fileName, 'wb') as myFile:

View File

@ -25,15 +25,15 @@ def add_peer(o_inst):
except IndexError: except IndexError:
pass pass
else: else:
if o_inst.onionrUtils.hasKey(newPeer): if newPeer in o_inst.onionrCore.listPeers():
logger.info('We already have that key') logger.info('We already have that key', terminal=True)
return return
logger.info("Adding peer: " + logger.colors.underline + newPeer) logger.info("Adding peer: " + logger.colors.underline + newPeer, terminal=True)
try: try:
if o_inst.onionrCore.addPeer(newPeer): if o_inst.onionrCore.addPeer(newPeer):
logger.info('Successfully added key') logger.info('Successfully added key', terminal=True)
except AssertionError: except AssertionError:
logger.error('Failed to add key') logger.error('Failed to add key', terminal=True)
def add_address(o_inst): def add_address(o_inst):
try: try:
@ -42,8 +42,8 @@ def add_address(o_inst):
except IndexError: except IndexError:
pass pass
else: else:
logger.info("Adding address: " + logger.colors.underline + newAddress) logger.info("Adding address: " + logger.colors.underline + newAddress, terminal=True)
if o_inst.onionrCore.addAddress(newAddress): if o_inst.onionrCore.addAddress(newAddress):
logger.info("Successfully added address.") logger.info("Successfully added address.", terminal=True)
else: else:
logger.warn("Unable to add address.") logger.warn("Unable to add address.", terminal=True)

View File

@ -18,9 +18,11 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import os, uuid, time import os, uuid, time
import logger, onionrutils import logger
from onionrblockapi import Block from onionrblockapi import Block
import onionr import onionr
from onionrutils import checkcommunicator, mnemonickeys
from utils import sizeutils
def show_stats(o_inst): def show_stats(o_inst):
try: try:
@ -29,13 +31,13 @@ def show_stats(o_inst):
signedBlocks = len(Block.getBlocks(signed = True)) signedBlocks = len(Block.getBlocks(signed = True))
messages = { messages = {
# info about local client # info about local client
'Onionr Daemon Status' : ((logger.colors.fg.green + 'Online') if o_inst.onionrUtils.isCommunicatorRunning(timeout = 9) else logger.colors.fg.red + 'Offline'), 'Onionr Daemon Status' : ((logger.colors.fg.green + 'Online') if checkcommunicator.is_communicator_running(o_inst.onionrCore, timeout = 9) else logger.colors.fg.red + 'Offline'),
# file and folder size stats # file and folder size stats
'div1' : True, # this creates a solid line across the screen, a div 'div1' : True, # this creates a solid line across the screen, a div
'Total Block Size' : onionrutils.humanSize(onionrutils.size(o_inst.dataDir + 'blocks/')), 'Total Block Size' : sizeutils.human_size(sizeutils.size(o_inst.dataDir + 'blocks/')),
'Total Plugin Size' : onionrutils.humanSize(onionrutils.size(o_inst.dataDir + 'plugins/')), 'Total Plugin Size' : sizeutils.human_size(sizeutils.size(o_inst.dataDir + 'plugins/')),
'Log File Size' : onionrutils.humanSize(onionrutils.size(o_inst.dataDir + 'output.log')), 'Log File Size' : sizeutils.human_size(sizeutils.size(o_inst.dataDir + 'output.log')),
# count stats # count stats
'div2' : True, 'div2' : True,
@ -65,32 +67,32 @@ def show_stats(o_inst):
groupsize = width - prewidth - len('[+] ') groupsize = width - prewidth - len('[+] ')
# generate stats table # generate stats table
logger.info(colors['title'] + 'Onionr v%s Statistics' % onionr.ONIONR_VERSION + colors['reset']) logger.info(colors['title'] + 'Onionr v%s Statistics' % onionr.ONIONR_VERSION + colors['reset'], terminal=True)
logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset']) logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset'], terminal=True)
for key, val in messages.items(): for key, val in messages.items():
if not (type(val) is bool and val is True): if not (type(val) is bool and val is True):
val = [str(val)[i:i + groupsize] for i in range(0, len(str(val)), groupsize)] val = [str(val)[i:i + groupsize] for i in range(0, len(str(val)), groupsize)]
logger.info(colors['key'] + str(key).rjust(maxlength) + colors['reset'] + colors['border'] + ' | ' + colors['reset'] + colors['val'] + str(val.pop(0)) + colors['reset']) logger.info(colors['key'] + str(key).rjust(maxlength) + colors['reset'] + colors['border'] + ' | ' + colors['reset'] + colors['val'] + str(val.pop(0)) + colors['reset'], terminal=True)
for value in val: for value in val:
logger.info(' ' * maxlength + colors['border'] + ' | ' + colors['reset'] + colors['val'] + str(value) + colors['reset']) logger.info(' ' * maxlength + colors['border'] + ' | ' + colors['reset'] + colors['val'] + str(value) + colors['reset'], terminal=True)
else: else:
logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset']) logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset'], terminal=True)
logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset']) logger.info(colors['border'] + '-' * (maxlength + 1) + '+' + colors['reset'], terminal=True)
except Exception as e: except Exception as e:
logger.error('Failed to generate statistics table.', error = e, timestamp = False) logger.error('Failed to generate statistics table. ' + str(e), error = e, timestamp = False, terminal=True)
def show_details(o_inst): def show_details(o_inst):
details = { details = {
'Node Address' : o_inst.get_hostname(), 'Node Address' : o_inst.get_hostname(),
'Web Password' : o_inst.getWebPassword(), 'Web Password' : o_inst.getWebPassword(),
'Public Key' : o_inst.onionrCore._crypto.pubKey, 'Public Key' : o_inst.onionrCore._crypto.pubKey,
'Human-readable Public Key' : o_inst.onionrCore._utils.getHumanReadableID() 'Human-readable Public Key' : mnemonickeys.get_human_readable_ID(o_inst.onionrCore)
} }
for detail in details: for detail in details:
logger.info('%s%s: \n%s%s\n' % (logger.colors.fg.lightgreen, detail, logger.colors.fg.green, details[detail]), sensitive = True) logger.info('%s%s: \n%s%s\n' % (logger.colors.fg.lightgreen, detail, logger.colors.fg.green, details[detail]), terminal = True)
def show_peers(o_inst): def show_peers(o_inst):
randID = str(uuid.uuid4()) randID = str(uuid.uuid4())
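The statistics table now measures disk usage through sizeutils.human_size and sizeutils.size instead of the old onionrutils helpers. A plausible sketch of such helpers, assuming the usual byte-formatting approach (illustrative only; the real utils/sizeutils module may differ):

import os

def human_size(num, suffix='B'):
    # Format a byte count for display, e.g. 10240 -> '10.0 KB'
    for unit in ('', 'K', 'M', 'G', 'T'):
        if abs(num) < 1024.0:
            return '%.1f %s%s' % (num, unit, suffix)
        num /= 1024.0
    return '%.1f P%s' % (num, suffix)

def size(path):
    # Size in bytes of a file, or of every file under a directory
    if os.path.isfile(path):
        return os.path.getsize(path)
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total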

View File

@ -19,12 +19,13 @@
''' '''
import webbrowser import webbrowser
import logger import logger
from onionrutils import getclientapiserver
def open_home(o_inst): def open_home(o_inst):
try: try:
url = o_inst.onionrUtils.getClientAPIServer() url = getclientapiserver.get_client_API_server(o_inst.onionrCore)
except FileNotFoundError: except FileNotFoundError:
logger.error('Onionr seems to not be running (could not get api host)') logger.error('Onionr seems to not be running (could not get api host)', terminal=True)
else: else:
url = 'http://%s/#%s' % (url, o_inst.onionrCore.config.get('client.webpassword')) url = 'http://%s/#%s' % (url, o_inst.onionrCore.config.get('client.webpassword'))
logger.info('If Onionr does not open automatically, use this URL: ' + url) logger.info('If Onionr does not open automatically, use this URL: ' + url, terminal=True)
webbrowser.open_new_tab(url) webbrowser.open_new_tab(url)

View File

@ -24,18 +24,18 @@ import logger, onionrplugins as plugins
def enable_plugin(o_inst): def enable_plugin(o_inst):
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
plugin_name = sys.argv[2] plugin_name = sys.argv[2]
logger.info('Enabling plugin "%s"...' % plugin_name) logger.info('Enabling plugin "%s"...' % plugin_name, terminal=True)
plugins.enable(plugin_name, o_inst) plugins.enable(plugin_name, o_inst)
else: else:
logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1])) logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1]), terminal=True)
def disable_plugin(o_inst): def disable_plugin(o_inst):
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
plugin_name = sys.argv[2] plugin_name = sys.argv[2]
logger.info('Disabling plugin "%s"...' % plugin_name) logger.info('Disabling plugin "%s"...' % plugin_name, terminal=True)
plugins.disable(plugin_name, o_inst) plugins.disable(plugin_name, o_inst)
else: else:
logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1])) logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1]), terminal=True)
def reload_plugin(o_inst): def reload_plugin(o_inst):
''' '''
@ -44,11 +44,11 @@ def reload_plugin(o_inst):
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
plugin_name = sys.argv[2] plugin_name = sys.argv[2]
logger.info('Reloading plugin "%s"...' % plugin_name) logger.info('Reloading plugin "%s"...' % plugin_name, terminal=True)
plugins.stop(plugin_name, o_inst) plugins.stop(plugin_name, o_inst)
plugins.start(plugin_name, o_inst) plugins.start(plugin_name, o_inst)
else: else:
logger.info('Reloading all plugins...') logger.info('Reloading all plugins...', terminal=True)
plugins.reload(o_inst) plugins.reload(o_inst)
@ -62,7 +62,7 @@ def create_plugin(o_inst):
plugin_name = re.sub('[^0-9a-zA-Z_]+', '', str(sys.argv[2]).lower()) plugin_name = re.sub('[^0-9a-zA-Z_]+', '', str(sys.argv[2]).lower())
if not plugins.exists(plugin_name): if not plugins.exists(plugin_name):
logger.info('Creating plugin "%s"...' % plugin_name) logger.info('Creating plugin "%s"...' % plugin_name, terminal=True)
os.makedirs(plugins.get_plugins_folder(plugin_name)) os.makedirs(plugins.get_plugins_folder(plugin_name))
with open(plugins.get_plugins_folder(plugin_name) + '/main.py', 'a') as main: with open(plugins.get_plugins_folder(plugin_name) + '/main.py', 'a') as main:
@ -76,12 +76,12 @@ def create_plugin(o_inst):
with open(plugins.get_plugins_folder(plugin_name) + '/info.json', 'a') as main: with open(plugins.get_plugins_folder(plugin_name) + '/info.json', 'a') as main:
main.write(json.dumps({'author' : 'anonymous', 'description' : 'the default description of the plugin', 'version' : '1.0'})) main.write(json.dumps({'author' : 'anonymous', 'description' : 'the default description of the plugin', 'version' : '1.0'}))
logger.info('Enabling plugin "%s"...' % plugin_name) logger.info('Enabling plugin "%s"...' % plugin_name, terminal=True)
plugins.enable(plugin_name, o_inst) plugins.enable(plugin_name, o_inst)
else: else:
logger.warn('Cannot create plugin directory structure; plugin "%s" exists.' % plugin_name) logger.warn('Cannot create plugin directory structure; plugin "%s" exists.' % plugin_name, terminal=True)
except Exception as e: except Exception as e:
logger.error('Failed to create plugin directory structure.', e) logger.error('Failed to create plugin directory structure.', e, terminal=True)
else: else:
logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1])) logger.info('%s %s <plugin>' % (sys.argv[0], sys.argv[1]), terminal=True)

View File

@ -20,7 +20,9 @@
import sys, getpass import sys, getpass
import logger, onionrexceptions import logger, onionrexceptions
from onionrutils import stringvalidators, bytesconverter
from onionrusers import onionrusers, contactmanager from onionrusers import onionrusers, contactmanager
import unpaddedbase32
def add_ID(o_inst): def add_ID(o_inst):
try: try:
sys.argv[2] sys.argv[2]
@ -28,41 +30,46 @@ def add_ID(o_inst):
except (IndexError, AssertionError) as e: except (IndexError, AssertionError) as e:
newID = o_inst.onionrCore._crypto.keyManager.addKey()[0] newID = o_inst.onionrCore._crypto.keyManager.addKey()[0]
else: else:
logger.warn('Deterministic keys require random and long passphrases.') logger.warn('Deterministic keys require random and long passphrases.', terminal=True)
logger.warn('If a good passphrase is not used, your key can be easily stolen.') logger.warn('If a good passphrase is not used, your key can be easily stolen.', terminal=True)
logger.warn('You should use a series of hard to guess words, see this for reference: https://www.xkcd.com/936/') logger.warn('You should use a series of hard to guess words, see this for reference: https://www.xkcd.com/936/', terminal=True)
pass1 = getpass.getpass(prompt='Enter at least %s characters: ' % (o_inst.onionrCore._crypto.deterministicRequirement,)) pass1 = getpass.getpass(prompt='Enter at least %s characters: ' % (o_inst.onionrCore._crypto.deterministicRequirement,))
pass2 = getpass.getpass(prompt='Confirm entry: ') pass2 = getpass.getpass(prompt='Confirm entry: ')
if o_inst.onionrCore._crypto.safeCompare(pass1, pass2): if o_inst.onionrCore._crypto.safeCompare(pass1, pass2):
try: try:
logger.info('Generating deterministic key. This can take a while.') logger.info('Generating deterministic key. This can take a while.', terminal=True)
newID, privKey = o_inst.onionrCore._crypto.generateDeterministic(pass1) newID, privKey = o_inst.onionrCore._crypto.generateDeterministic(pass1)
except onionrexceptions.PasswordStrengthError: except onionrexceptions.PasswordStrengthError:
logger.error('Passphrase must use at least %s characters.' % (o_inst.onionrCore._crypto.deterministicRequirement,)) logger.error('Passphrase must use at least %s characters.' % (o_inst.onionrCore._crypto.deterministicRequirement,), terminal=True)
sys.exit(1) sys.exit(1)
else: else:
logger.error('Passwords do not match.') logger.error('Passwords do not match.', terminal=True)
sys.exit(1) sys.exit(1)
o_inst.onionrCore._crypto.keyManager.addKey(pubKey=newID, try:
privKey=privKey) o_inst.onionrCore._crypto.keyManager.addKey(pubKey=newID,
logger.info('Added ID: %s' % (o_inst.onionrUtils.bytesToStr(newID),)) privKey=privKey)
except ValueError:
logger.error('That ID is already available; you can change to it with the change-id command.', terminal=True)
return
logger.info('Added ID: %s' % (bytesconverter.bytes_to_str(newID),), terminal=True)
def change_ID(o_inst): def change_ID(o_inst):
try: try:
key = sys.argv[2] key = sys.argv[2]
key = unpaddedbase32.repad(key.encode()).decode()
except IndexError: except IndexError:
logger.warn('Specify pubkey to use') logger.warn('Specify pubkey to use', terminal=True)
else: else:
if o_inst.onionrUtils.validatePubKey(key): if stringvalidators.validate_pub_key(key):
if key in o_inst.onionrCore._crypto.keyManager.getPubkeyList(): if key in o_inst.onionrCore._crypto.keyManager.getPubkeyList():
o_inst.onionrCore.config.set('general.public_key', key) o_inst.onionrCore.config.set('general.public_key', key)
o_inst.onionrCore.config.save() o_inst.onionrCore.config.save()
logger.info('Set active key to: %s' % (key,)) logger.info('Set active key to: %s' % (key,), terminal=True)
logger.info('Restart Onionr if it is running.') logger.info('Restart Onionr if it is running.', terminal=True)
else: else:
logger.warn('That key does not exist') logger.warn('That key does not exist', terminal=True)
else: else:
logger.warn('Invalid key %s' % (key,)) logger.warn('Invalid key %s' % (key,), terminal=True)
def friend_command(o_inst): def friend_command(o_inst):
friend = '' friend = ''
@ -70,23 +77,23 @@ def friend_command(o_inst):
# Get the friend command # Get the friend command
action = sys.argv[2] action = sys.argv[2]
except IndexError: except IndexError:
logger.info('Syntax: friend add/remove/list [address]') logger.info('Syntax: friend add/remove/list [address]', terminal=True)
else: else:
action = action.lower() action = action.lower()
if action == 'list': if action == 'list':
# List out peers marked as our friend # List out peers marked as our friend
for friend in contactmanager.ContactManager.list_friends(o_inst.onionrCore): for friend in contactmanager.ContactManager.list_friends(o_inst.onionrCore):
logger.info(friend.publicKey + ' - ' + friend.get_info('name')) logger.info(friend.publicKey + ' - ' + friend.get_info('name'), terminal=True)
elif action in ('add', 'remove'): elif action in ('add', 'remove'):
try: try:
friend = sys.argv[3] friend = sys.argv[3]
if not o_inst.onionrUtils.validatePubKey(friend): if not stringvalidators.validate_pub_key(friend):
raise onionrexceptions.InvalidPubkey('Public key is invalid') raise onionrexceptions.InvalidPubkey('Public key is invalid')
if friend not in o_inst.onionrCore.listPeers(): if friend not in o_inst.onionrCore.listPeers():
raise onionrexceptions.KeyNotKnown raise onionrexceptions.KeyNotKnown
friend = onionrusers.OnionrUser(o_inst.onionrCore, friend) friend = onionrusers.OnionrUser(o_inst.onionrCore, friend)
except IndexError: except IndexError:
logger.warn('Friend ID is required.') logger.warn('Friend ID is required.', terminal=True)
action = 'error' # set to 'error' so that the finally block does not process anything action = 'error' # set to 'error' so that the finally block does not process anything
except onionrexceptions.KeyNotKnown: except onionrexceptions.KeyNotKnown:
o_inst.onionrCore.addPeer(friend) o_inst.onionrCore.addPeer(friend)
@ -94,9 +101,9 @@ def friend_command(o_inst):
finally: finally:
if action == 'add': if action == 'add':
friend.setTrust(1) friend.setTrust(1)
logger.info('Added %s as friend.' % (friend.publicKey,)) logger.info('Added %s as friend.' % (friend.publicKey,), terminal=True)
elif action == 'remove': elif action == 'remove':
friend.setTrust(0) friend.setTrust(0)
logger.info('Removed %s as friend.' % (friend.publicKey,)) logger.info('Removed %s as friend.' % (friend.publicKey,), terminal=True)
else: else:
logger.info('Syntax: friend add/remove/list [address]') logger.info('Syntax: friend add/remove/list [address]', terminal=True)
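The deterministic-key path above ('Generating deterministic key. This can take a while.') stretches a passphrase into an Ed25519 seed. A hedged sketch of how such a derivation can be done with PyNaCl; the salt and cost parameters here are placeholders, not Onionr's actual choices:

import nacl.pwhash, nacl.signing, nacl.encoding

def generate_deterministic(passphrase: bytes):
    # Stretch the passphrase into a 32-byte seed with Argon2id, then derive an Ed25519 keypair.
    seed = nacl.pwhash.argon2id.kdf(
        32, passphrase, salt=b'\x00' * nacl.pwhash.argon2id.SALTBYTES,
        opslimit=nacl.pwhash.argon2id.OPSLIMIT_SENSITIVE,
        memlimit=nacl.pwhash.argon2id.MEMLIMIT_SENSITIVE)  # deliberately slow
    key = nacl.signing.SigningKey(seed)
    pub = key.verify_key.encode(encoder=nacl.encoding.Base32Encoder)
    priv = key.encode(encoder=nacl.encoding.Base32Encoder)
    return pub, priv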

View File

@ -19,11 +19,13 @@
''' '''
import os, shutil import os, shutil
import logger, core import logger, core
from onionrutils import localcommand
def reset_tor(): def reset_tor():
c = core.Core() c = core.Core()
tor_dir = c.dataDir + 'tordata' tor_dir = c.dataDir + 'tordata'
if os.path.exists(tor_dir): if os.path.exists(tor_dir):
if c._utils.localCommand('/ping') == 'pong!': if localcommand.local_command(c, '/ping') == 'pong!':
logger.warn('Cannot delete Tor data while Onionr is running') logger.warn('Cannot delete Tor data while Onionr is running', terminal=True)
else: else:
shutil.rmtree(tor_dir) shutil.rmtree(tor_dir)

View File

@ -19,8 +19,10 @@
''' '''
import os, binascii, base64, hashlib, time, sys, hmac, secrets import os, binascii, base64, hashlib, time, sys, hmac, secrets
import nacl.signing, nacl.encoding, nacl.public, nacl.hash, nacl.pwhash, nacl.utils, nacl.secret import nacl.signing, nacl.encoding, nacl.public, nacl.hash, nacl.pwhash, nacl.utils, nacl.secret
import unpaddedbase32
import logger, onionrproofs import logger, onionrproofs
import onionrexceptions, keymanager, core from onionrutils import stringvalidators, epoch, bytesconverter
import onionrexceptions, keymanager, core, onionrutils
import config import config
config.reload() config.reload()
@ -37,8 +39,8 @@ class OnionrCrypto:
# Load our own pub/priv Ed25519 keys, gen & save them if they don't exist # Load our own pub/priv Ed25519 keys, gen & save them if they don't exist
if os.path.exists(self._keyFile): if os.path.exists(self._keyFile):
if len(config.get('general.public_key', '')) > 0: if len(self._core.config.get('general.public_key', '')) > 0:
self.pubKey = config.get('general.public_key') self.pubKey = self._core.config.get('general.public_key')
else: else:
self.pubKey = self.keyManager.getPubkeyList()[0] self.pubKey = self.keyManager.getPubkeyList()[0]
self.privKey = self.keyManager.getPrivkey(self.pubKey) self.privKey = self.keyManager.getPrivkey(self.pubKey)
@ -93,9 +95,10 @@ class OnionrCrypto:
def pubKeyEncrypt(self, data, pubkey, encodedData=False): def pubKeyEncrypt(self, data, pubkey, encodedData=False):
'''Encrypt to a public key (Curve25519, taken from base32 Ed25519 pubkey)''' '''Encrypt to a public key (Curve25519, taken from base32 Ed25519 pubkey)'''
pubkey = unpaddedbase32.repad(bytesconverter.str_to_bytes(pubkey))
retVal = '' retVal = ''
box = None box = None
data = self._core._utils.strToBytes(data) data = bytesconverter.str_to_bytes(data)
pubkey = nacl.signing.VerifyKey(pubkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_public_key() pubkey = nacl.signing.VerifyKey(pubkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_public_key()
@ -120,7 +123,7 @@ class OnionrCrypto:
privkey = self.privKey privkey = self.privKey
ownKey = nacl.signing.SigningKey(seed=privkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_private_key() ownKey = nacl.signing.SigningKey(seed=privkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_private_key()
if self._core._utils.validatePubKey(privkey): if stringvalidators.validate_pub_key(privkey):
privkey = nacl.signing.SigningKey(seed=privkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_private_key() privkey = nacl.signing.SigningKey(seed=privkey, encoder=nacl.encoding.Base32Encoder()).to_curve25519_private_key()
anonBox = nacl.public.SealedBox(privkey) anonBox = nacl.public.SealedBox(privkey)
else: else:
@ -129,7 +132,7 @@ class OnionrCrypto:
return decrypted return decrypted
def symmetricEncrypt(self, data, key, encodedKey=False, returnEncoded=True): def symmetricEncrypt(self, data, key, encodedKey=False, returnEncoded=True):
'''Encrypt data to a 32-byte key (Salsa20-Poly1305 MAC)''' '''Encrypt data with a 32-byte key (Salsa20-Poly1305 MAC)'''
if encodedKey: if encodedKey:
encoding = nacl.encoding.Base64Encoder encoding = nacl.encoding.Base64Encoder
else: else:
@ -179,7 +182,7 @@ class OnionrCrypto:
def generateDeterministic(self, passphrase, bypassCheck=False): def generateDeterministic(self, passphrase, bypassCheck=False):
'''Generate a Ed25519 public key pair from a password''' '''Generate a Ed25519 public key pair from a password'''
passStrength = self.deterministicRequirement passStrength = self.deterministicRequirement
passphrase = self._core._utils.strToBytes(passphrase) # Convert to bytes if not already passphrase = bytesconverter.str_to_bytes(passphrase) # Convert to bytes if not already
# Validate passphrase length # Validate passphrase length
if not bypassCheck: if not bypassCheck:
if len(passphrase) < passStrength: if len(passphrase) < passStrength:
@ -199,7 +202,7 @@ class OnionrCrypto:
if pubkey == '': if pubkey == '':
pubkey = self.pubKey pubkey = self.pubKey
prev = '' prev = ''
pubkey = pubkey.encode() pubkey = bytesconverter.str_to_bytes(pubkey)
for i in range(self.HASH_ID_ROUNDS): for i in range(self.HASH_ID_ROUNDS):
try: try:
prev = prev.encode() prev = prev.encode()
@ -248,8 +251,8 @@ class OnionrCrypto:
difficulty = onionrproofs.getDifficultyForNewBlock(blockContent, ourBlock=False, coreInst=self._core) difficulty = onionrproofs.getDifficultyForNewBlock(blockContent, ourBlock=False, coreInst=self._core)
if difficulty < int(config.get('general.minimum_block_pow')): if difficulty < int(self._core.config.get('general.minimum_block_pow')):
difficulty = int(config.get('general.minimum_block_pow')) difficulty = int(self._core.config.get('general.minimum_block_pow'))
mainHash = '0000000000000000000000000000000000000000000000000000000000000000'#nacl.hash.blake2b(nacl.utils.random()).decode() mainHash = '0000000000000000000000000000000000000000000000000000000000000000'#nacl.hash.blake2b(nacl.utils.random()).decode()
puzzle = mainHash[:difficulty] puzzle = mainHash[:difficulty]
@ -263,7 +266,7 @@ class OnionrCrypto:
@staticmethod @staticmethod
def replayTimestampValidation(timestamp): def replayTimestampValidation(timestamp):
if core.Core()._utils.getEpoch() - int(timestamp) > 2419200: if epoch.get_epoch() - int(timestamp) > 2419200:
return False return False
else: else:
return True return True
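Several hunks above repad base32 keys with unpaddedbase32.repad before handing them to PyNaCl. The point is simply to restore the '=' padding that Onionr strips from keys for display; a rough equivalent of that behaviour, written as an assumption about the library rather than its actual source:

def repad(data: bytes) -> bytes:
    # Base32 only decodes when the length is a multiple of 8, so append
    # however many '=' characters were stripped for display.
    return data + b'=' * (-len(data) % 8)

# e.g. repad(b'MFRGG') == b'MFRGG===', which base64.b32decode() accepts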

View File

@ -19,6 +19,7 @@
''' '''
import sqlite3 import sqlite3
import core, config, logger import core, config, logger
from onionrutils import epoch
config.reload() config.reload()
class PeerProfiles: class PeerProfiles:
''' '''
@ -106,7 +107,7 @@ def peerCleanup(coreInst):
if PeerProfiles(address, coreInst).score < minScore: if PeerProfiles(address, coreInst).score < minScore:
coreInst.removeAddress(address) coreInst.removeAddress(address)
try: try:
if (int(coreInst._utils.getEpoch()) - int(coreInst.getPeerInfo(address, 'dateSeen'))) >= 600: if (int(epoch.get_epoch()) - int(coreInst.getPeerInfo(address, 'dateSeen'))) >= 600:
expireTime = 600 expireTime = 600
else: else:
expireTime = 86400 expireTime = 86400

View File

@ -19,6 +19,7 @@
''' '''
import onionrplugins, core as onionrcore, logger import onionrplugins, core as onionrcore, logger
from onionrutils import localcommand
class DaemonAPI: class DaemonAPI:
def __init__(self, pluginapi): def __init__(self, pluginapi):
@ -40,7 +41,7 @@ class DaemonAPI:
return return
def local_command(self, command): def local_command(self, command):
return self.pluginapi.get_utils().localCommand(self, command) return localcommand.local_command(self.pluginapi.get_core(), command)
def queue_pop(self): def queue_pop(self):
return self.get_core().daemonQueue() return self.get_core().daemonQueue()
@ -169,9 +170,6 @@ class pluginapi:
def get_core(self): def get_core(self):
return self.core return self.core
def get_utils(self):
return self.get_core()._utils
def get_crypto(self): def get_crypto(self):
return self.get_core()._crypto return self.get_core()._crypto

View File

@ -18,7 +18,8 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import multiprocessing, nacl.encoding, nacl.hash, nacl.utils, time, math, threading, binascii, sys, json import multiprocessing, nacl.encoding, nacl.hash, nacl.utils, time, math, threading, binascii, sys, json
import core, onionrutils, config, logger, onionrblockapi import core, config, logger, onionrblockapi
from onionrutils import bytesconverter
config.reload() config.reload()
@ -29,12 +30,7 @@ def getDifficultyModifier(coreOrUtilsInst=None):
''' '''
classInst = coreOrUtilsInst classInst = coreOrUtilsInst
retData = 0 retData = 0
if isinstance(classInst, core.Core): useFunc = classInst.storage_counter.getPercent
useFunc = classInst._utils.storageCounter.getPercent
elif isinstance(classInst, onionrutils.OnionrUtils):
useFunc = classInst.storageCounter.getPercent
else:
useFunc = core.Core()._utils.storageCounter.getPercent
percentUse = useFunc() percentUse = useFunc()
@ -56,7 +52,7 @@ def getDifficultyForNewBlock(data, ourBlock=True, coreInst=None):
if isinstance(data, onionrblockapi.Block): if isinstance(data, onionrblockapi.Block):
dataSize = len(data.getRaw().encode('utf-8')) dataSize = len(data.getRaw().encode('utf-8'))
else: else:
dataSize = len(onionrutils.OnionrUtils.strToBytes(data)) dataSize = len(bytesconverter.str_to_bytes(data))
if ourBlock: if ourBlock:
minDifficulty = config.get('general.minimum_send_pow', 4) minDifficulty = config.get('general.minimum_send_pow', 4)

View File

@ -21,6 +21,7 @@ import time
import stem import stem
import core import core
from . import connectionserver, bootstrapservice from . import connectionserver, bootstrapservice
from onionrutils import stringvalidators, basicrequests
class OnionrServices: class OnionrServices:
''' '''
@ -39,14 +40,14 @@ class OnionrServices:
When a client wants to connect, contact their bootstrap address and tell them our When a client wants to connect, contact their bootstrap address and tell them our
ephemeral address for our service by creating a new ConnectionServer instance ephemeral address for our service by creating a new ConnectionServer instance
''' '''
assert self._core._utils.validateID(address) assert stringvalidators.validate_transport(address)
BOOTSTRAP_TRIES = 10 # How many times to attempt contacting the bootstrap server BOOTSTRAP_TRIES = 10 # How many times to attempt contacting the bootstrap server
TRY_WAIT = 3 # Seconds to wait before trying bootstrap again TRY_WAIT = 3 # Seconds to wait before trying bootstrap again
# HTTP is fine because .onion/i2p is encrypted/authenticated # HTTP is fine because .onion/i2p is encrypted/authenticated
base_url = 'http://%s/' % (address,) base_url = 'http://%s/' % (address,)
socks = self._core.config.get('tor.socksport') socks = self._core.config.get('tor.socksport')
for x in range(BOOTSTRAP_TRIES): for x in range(BOOTSTRAP_TRIES):
if self._core._utils.doGetRequest(base_url + 'ping', port=socks, ignoreAPI=True) == 'pong!': if basicrequests.do_get_request(self._core, base_url + 'ping', port=socks, ignoreAPI=True) == 'pong!':
# if bootstrap server is online, tell them our service address # if bootstrap server is online, tell them our service address
connectionserver.ConnectionServer(peer, address, core_inst=self._core) connectionserver.ConnectionServer(peer, address, core_inst=self._core)
else: else:
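basicrequests.do_get_request replaces the old _utils.doGetRequest wrapper in this bootstrap loop. In essence it is an HTTP GET routed through Tor's SOCKS port so that .onion hosts resolve; a hedged sketch under that assumption (requires requests with SOCKS support, and the real helper presumably also uses core_inst for config and headers):

import requests

def do_get_request(core_inst, url, port=0, ignoreAPI=False, timeout=(15, 30)):
    # ignoreAPI is accepted only to mirror the call sites above
    proxies = {'http': 'socks4a://127.0.0.1:%s' % (port,),
               'https': 'socks4a://127.0.0.1:%s' % (port,)}
    try:
        return requests.get(url, proxies=proxies, timeout=timeout).text
    except requests.exceptions.RequestException:
        return False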

View File

@ -24,6 +24,7 @@ from flask import Flask, Response
import core import core
from netcontroller import getOpenPort from netcontroller import getOpenPort
from . import httpheaders from . import httpheaders
from onionrutils import stringvalidators, epoch
def bootstrap_client_service(peer, core_inst=None, bootstrap_timeout=300): def bootstrap_client_service(peer, core_inst=None, bootstrap_timeout=300):
''' '''
@ -32,7 +33,7 @@ def bootstrap_client_service(peer, core_inst=None, bootstrap_timeout=300):
if core_inst is None: if core_inst is None:
core_inst = core.Core() core_inst = core.Core()
if not core_inst._utils.validatePubKey(peer): if not stringvalidators.validate_pub_key(peer):
raise ValueError('Peer must be valid base32 ed25519 public key') raise ValueError('Peer must be valid base32 ed25519 public key')
bootstrap_port = getOpenPort() bootstrap_port = getOpenPort()
@ -61,7 +62,7 @@ def bootstrap_client_service(peer, core_inst=None, bootstrap_timeout=300):
@bootstrap_app.route('/bs/<address>', methods=['POST']) @bootstrap_app.route('/bs/<address>', methods=['POST'])
def get_bootstrap(address): def get_bootstrap(address):
if core_inst._utils.validateID(address + '.onion'): if stringvalidators.validate_transport(address + '.onion'):
# Set the bootstrap address then close the server # Set the bootstrap address then close the server
bootstrap_address = address + '.onion' bootstrap_address = address + '.onion'
core_inst.keyStore.put(bs_id, bootstrap_address) core_inst.keyStore.put(bs_id, bootstrap_address)
@ -76,7 +77,7 @@ def bootstrap_client_service(peer, core_inst=None, bootstrap_timeout=300):
# Create the v3 onion service # Create the v3 onion service
response = controller.create_ephemeral_hidden_service({80: bootstrap_port}, key_type = 'NEW', key_content = 'ED25519-V3', await_publication = True) response = controller.create_ephemeral_hidden_service({80: bootstrap_port}, key_type = 'NEW', key_content = 'ED25519-V3', await_publication = True)
core_inst.insertBlock(response.service_id, header='con', sign=True, encryptType='asym', core_inst.insertBlock(response.service_id, header='con', sign=True, encryptType='asym',
asymPeer=peer, disableForward=True, expire=(core_inst._utils.getEpoch() + bootstrap_timeout)) asymPeer=peer, disableForward=True, expire=(epoch.get_epoch() + bootstrap_timeout))
# Run the bootstrap server # Run the bootstrap server
try: try:
http_server.serve_forever() http_server.serve_forever()
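For reference, the create_ephemeral_hidden_service call above is plain stem usage: it publishes a v3 onion service that lives only as long as the controller connection. A standalone illustration (assumes a local Tor control port on 9051 with authentication already set up):

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    response = controller.create_ephemeral_hidden_service(
        {80: 8080}, key_type='NEW', key_content='ED25519-V3', await_publication=True)
    print('Serving as %s.onion' % response.service_id)
    # The service disappears when this controller connection closes.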

View File

@ -24,6 +24,7 @@ import core, logger, httpapi
import onionrexceptions import onionrexceptions
from netcontroller import getOpenPort from netcontroller import getOpenPort
import api import api
from onionrutils import stringvalidators, basicrequests
from . import httpheaders from . import httpheaders
class ConnectionServer: class ConnectionServer:
@ -33,7 +34,7 @@ class ConnectionServer:
else: else:
self.core_inst = core_inst self.core_inst = core_inst
if not core_inst._utils.validatePubKey(peer): if not stringvalidators.validate_pub_key(peer):
raise ValueError('Peer must be valid base32 ed25519 public key') raise ValueError('Peer must be valid base32 ed25519 public key')
socks = core_inst.config.get('tor.socksport') # Load config for Tor socks port for proxy socks = core_inst.config.get('tor.socksport') # Load config for Tor socks port for proxy
@ -71,7 +72,7 @@ class ConnectionServer:
try: try:
for x in range(3): for x in range(3):
attempt = self.core_inst._utils.doPostRequest('http://' + address + '/bs/' + response.service_id, port=socks) attempt = basicrequests.do_post_request(self.core_inst, 'http://' + address + '/bs/' + response.service_id, port=socks)
if attempt == 'success': if attempt == 'success':
break break
else: else:

View File

@ -17,7 +17,8 @@
You should have received a copy of the GNU General Public License You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import core, sys, sqlite3, os, dbcreator import core, sys, sqlite3, os, dbcreator, onionrexceptions
from onionrutils import bytesconverter, stringvalidators
DB_ENTRY_SIZE_LIMIT = 10000 # Will be a config option DB_ENTRY_SIZE_LIMIT = 10000 # Will be a config option
@ -65,7 +66,7 @@ def deleteBlock(coreInst, blockHash):
def store(coreInst, data, blockHash=''): def store(coreInst, data, blockHash=''):
assert isinstance(coreInst, core.Core) assert isinstance(coreInst, core.Core)
assert coreInst._utils.validateHash(blockHash) assert stringvalidators.validate_hash(blockHash)
ourHash = coreInst._crypto.sha3Hash(data) ourHash = coreInst._crypto.sha3Hash(data)
if blockHash != '': if blockHash != '':
assert ourHash == blockHash assert ourHash == blockHash
@ -80,9 +81,9 @@ def store(coreInst, data, blockHash=''):
def getData(coreInst, bHash): def getData(coreInst, bHash):
assert isinstance(coreInst, core.Core) assert isinstance(coreInst, core.Core)
assert coreInst._utils.validateHash(bHash) assert stringvalidators.validate_hash(bHash)
bHash = coreInst._utils.bytesToStr(bHash) bHash = bytesconverter.bytes_to_str(bHash)
# First check DB for data entry by hash # First check DB for data entry by hash
# if no entry, check disk # if no entry, check disk
@ -94,4 +95,6 @@ def getData(coreInst, bHash):
retData = block.read() retData = block.read()
else: else:
retData = _dbFetch(coreInst, bHash) retData = _dbFetch(coreInst, bHash)
if retData is None:
raise onionrexceptions.NoDataAvailable("Block data for %s is not available" % [bHash])
return retData return retData

View File

@ -0,0 +1,21 @@
import sys, sqlite3
import onionrexceptions, onionrstorage
from onionrutils import stringvalidators
def remove_block(core_inst, block):
'''
remove a block from this node (does not automatically blacklist)
You may also want to call blacklist.addToDB(blockHash)
'''
if stringvalidators.validate_hash(block):
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
t = (block,)
c.execute('Delete from hashes where hash=?;', t)
conn.commit()
conn.close()
dataSize = sys.getsizeof(onionrstorage.getData(core_inst, block))
core_inst.storage_counter.removeBytes(dataSize)
else:
raise onionrexceptions.InvalidHexHash

View File

@ -0,0 +1,36 @@
import sys, sqlite3
import onionrstorage, onionrexceptions
def set_data(core_inst, data):
'''
Set the data associated with a hash
'''
data = data
dataSize = sys.getsizeof(data)
if not type(data) is bytes:
data = data.encode()
dataHash = core_inst._crypto.sha3Hash(data)
if type(dataHash) is bytes:
dataHash = dataHash.decode()
blockFileName = core_inst.blockDataLocation + dataHash + '.dat'
try:
onionrstorage.getData(core_inst, dataHash)
except onionrexceptions.NoDataAvailable:
if core_inst.storage_counter.addBytes(dataSize) != False:
onionrstorage.store(core_inst, data, blockHash=dataHash)
conn = sqlite3.connect(core_inst.blockDB, timeout=30)
c = conn.cursor()
c.execute("UPDATE hashes SET dataSaved=1 WHERE hash = ?;", (dataHash,))
conn.commit()
conn.close()
with open(core_inst.dataNonceFile, 'a') as nonceFile:
nonceFile.write(dataHash + '\n')
else:
raise onionrexceptions.DiskAllocationReached
else:
raise Exception("Data is already set for " + dataHash)
return dataHash
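Taken together, the two new modules above give the storage layer a store/remove pair keyed by SHA3 hash. A hypothetical usage sketch; the module names setdata and removeblock are assumptions for illustration, since the diff does not show the new files' paths:

import core
from onionrstorage import setdata, removeblock  # assumed module names

c = core.Core()
block_hash = setdata.set_data(c, b'example payload')  # stores the data, returns its sha3 hash
removeblock.remove_block(c, block_hash)  # deletes it again and releases the storage quota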

View File

@ -18,10 +18,13 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import os, json, onionrexceptions import os, json, onionrexceptions
import unpaddedbase32
from onionrusers import onionrusers from onionrusers import onionrusers
from onionrutils import bytesconverter, epoch
class ContactManager(onionrusers.OnionrUser): class ContactManager(onionrusers.OnionrUser):
def __init__(self, coreInst, publicKey, saveUser=False, recordExpireSeconds=5): def __init__(self, coreInst, publicKey, saveUser=False, recordExpireSeconds=5):
publicKey = unpaddedbase32.repad(bytesconverter.str_to_bytes(publicKey)).decode()
super(ContactManager, self).__init__(coreInst, publicKey, saveUser=saveUser) super(ContactManager, self).__init__(coreInst, publicKey, saveUser=saveUser)
self.dataDir = coreInst.dataDir + '/contacts/' self.dataDir = coreInst.dataDir + '/contacts/'
self.dataFile = '%s/contacts/%s.json' % (coreInst.dataDir, publicKey) self.dataFile = '%s/contacts/%s.json' % (coreInst.dataDir, publicKey)
@ -39,7 +42,7 @@ class ContactManager(onionrusers.OnionrUser):
dataFile.write(data) dataFile.write(data)
def _loadData(self): def _loadData(self):
self.lastRead = self._core._utils.getEpoch() self.lastRead = epoch.get_epoch()
retData = {} retData = {}
if os.path.exists(self.dataFile): if os.path.exists(self.dataFile):
with open(self.dataFile, 'r') as dataFile: with open(self.dataFile, 'r') as dataFile:
@ -59,7 +62,7 @@ class ContactManager(onionrusers.OnionrUser):
if self.deleted: if self.deleted:
raise onionrexceptions.ContactDeleted raise onionrexceptions.ContactDeleted
if (self._core._utils.getEpoch() - self.lastRead >= self.recordExpire) or forceReload: if (epoch.get_epoch() - self.lastRead >= self.recordExpire) or forceReload:
self.data = self._loadData() self.data = self._loadData()
try: try:
return self.data[key] return self.data[key]

View File

@ -17,7 +17,9 @@
You should have received a copy of the GNU General Public License You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import onionrblockapi, logger, onionrexceptions, json, sqlite3, time import logger, onionrexceptions, json, sqlite3, time
from onionrutils import stringvalidators, bytesconverter, epoch
import unpaddedbase32
import nacl.exceptions import nacl.exceptions
def deleteExpiredKeys(coreInst): def deleteExpiredKeys(coreInst):
@ -25,7 +27,7 @@ def deleteExpiredKeys(coreInst):
conn = sqlite3.connect(coreInst.forwardKeysFile, timeout=10) conn = sqlite3.connect(coreInst.forwardKeysFile, timeout=10)
c = conn.cursor() c = conn.cursor()
curTime = coreInst._utils.getEpoch() curTime = epoch.get_epoch()
c.execute("DELETE from myForwardKeys where expire <= ?", (curTime,)) c.execute("DELETE from myForwardKeys where expire <= ?", (curTime,))
conn.commit() conn.commit()
conn.execute("VACUUM") conn.execute("VACUUM")
@ -37,7 +39,7 @@ def deleteTheirExpiredKeys(coreInst, pubkey):
c = conn.cursor() c = conn.cursor()
# Prepare the insert # Prepare the insert
command = (pubkey, coreInst._utils.getEpoch()) command = (pubkey, epoch.get_epoch())
c.execute("DELETE from forwardKeys where peerKey = ? and expire <= ?", command) c.execute("DELETE from forwardKeys where peerKey = ? and expire <= ?", command)
@ -55,8 +57,7 @@ class OnionrUser:
Takes an instance of onionr core, a base32 encoded ed25519 public key, and a bool saveUser Takes an instance of onionr core, a base32 encoded ed25519 public key, and a bool saveUser
saveUser determines if we should add a user to our peer database or not. saveUser determines if we should add a user to our peer database or not.
''' '''
if ' ' in coreInst._utils.bytesToStr(publicKey).strip(): publicKey = unpaddedbase32.repad(bytesconverter.str_to_bytes(publicKey)).decode()
publicKey = coreInst._utils.convertHumanReadableID(publicKey)
self.trust = 0 self.trust = 0
self._core = coreInst self._core = coreInst
@ -103,7 +104,7 @@ class OnionrUser:
deleteExpiredKeys(self._core) deleteExpiredKeys(self._core)
retData = '' retData = ''
forwardKey = self._getLatestForwardKey() forwardKey = self._getLatestForwardKey()
if self._core._utils.validatePubKey(forwardKey[0]): if stringvalidators.validate_pub_key(forwardKey[0]):
retData = self._core._crypto.pubKeyEncrypt(data, forwardKey[0], encodedData=True) retData = self._core._crypto.pubKeyEncrypt(data, forwardKey[0], encodedData=True)
else: else:
raise onionrexceptions.InvalidPubkey("No valid forward secrecy key available for this user") raise onionrexceptions.InvalidPubkey("No valid forward secrecy key available for this user")
@ -158,10 +159,10 @@ class OnionrUser:
conn = sqlite3.connect(self._core.forwardKeysFile, timeout=10) conn = sqlite3.connect(self._core.forwardKeysFile, timeout=10)
c = conn.cursor() c = conn.cursor()
# Prepare the insert # Prepare the insert
time = self._core._utils.getEpoch() time = epoch.get_epoch()
newKeys = self._core._crypto.generatePubKey() newKeys = self._core._crypto.generatePubKey()
newPub = self._core._utils.bytesToStr(newKeys[0]) newPub = bytesconverter.bytes_to_str(newKeys[0])
newPriv = self._core._utils.bytesToStr(newKeys[1]) newPriv = bytesconverter.bytes_to_str(newKeys[1])
command = (self.publicKey, newPub, newPriv, time, expire + time) command = (self.publicKey, newPub, newPriv, time, expire + time)
@ -176,7 +177,7 @@ class OnionrUser:
conn = sqlite3.connect(self._core.forwardKeysFile, timeout=10) conn = sqlite3.connect(self._core.forwardKeysFile, timeout=10)
c = conn.cursor() c = conn.cursor()
pubkey = self.publicKey pubkey = self.publicKey
pubkey = self._core._utils.bytesToStr(pubkey) pubkey = bytesconverter.bytes_to_str(pubkey)
command = (pubkey,) command = (pubkey,)
keyList = [] # list of tuples containing pub, private for peer keyList = [] # list of tuples containing pub, private for peer
@ -190,7 +191,8 @@ class OnionrUser:
return list(keyList) return list(keyList)
def addForwardKey(self, newKey, expire=DEFAULT_KEY_EXPIRE): def addForwardKey(self, newKey, expire=DEFAULT_KEY_EXPIRE):
if not self._core._utils.validatePubKey(newKey): newKey = bytesconverter.bytes_to_str(unpaddedbase32.repad(bytesconverter.str_to_bytes(newKey)))
if not stringvalidators.validate_pub_key(newKey):
# Do not add if something went wrong with the key # Do not add if something went wrong with the key
raise onionrexceptions.InvalidPubkey(newKey) raise onionrexceptions.InvalidPubkey(newKey)
@ -198,7 +200,7 @@ class OnionrUser:
c = conn.cursor() c = conn.cursor()
# Get the time we're inserting the key at # Get the time we're inserting the key at
timeInsert = self._core._utils.getEpoch() timeInsert = epoch.get_epoch()
# Look at our current keys for duplicate key data or time # Look at our current keys for duplicate key data or time
for entry in self._getForwardKeys(): for entry in self._getForwardKeys():

View File

@ -1,572 +0,0 @@
'''
Onionr - Private P2P Communication
OnionrUtils offers various miscellaneous utility functions to Onionr.
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
# Misc functions that do not fit in the main api, but are useful
import sys, os, sqlite3, binascii, time, base64, json, glob, shutil, math, re, urllib.parse, string
import requests
import nacl.signing, nacl.encoding
from onionrblockapi import Block
import onionrexceptions, config, logger
from onionr import API_VERSION
import onionrevents
import storagecounter
from etc import pgpwords, onionrvalues
from onionrusers import onionrusers
if sys.version_info < (3, 6):
try:
import sha3
except ModuleNotFoundError:
logger.fatal('On Python 3 versions prior to 3.6.x, you need the sha3 module')
sys.exit(1)
config.reload()
class OnionrUtils:
'''
Various useful functions for validation, connectivity, and other miscellaneous tasks
'''
def __init__(self, coreInstance):
#self.fingerprintFile = 'data/own-fingerprint.txt' #TODO Remove since probably not needed
self._core = coreInstance # onionr core instance
self.timingToken = '' # for when we make local connections to our http api, to bypass timing attack defense mechanism
self.avoidDupe = [] # list used to prevent duplicate requests per peer for certain actions
self.peerProcessing = {} # dict of current peer actions: peer, actionList
self.storageCounter = storagecounter.StorageCounter(self._core) # used to keep track of how much data onionr is using on disk
return
def getTimeBypassToken(self):
'''
Load our timingToken from disk for faster local HTTP API
'''
try:
if os.path.exists(self._core.dataDir + 'time-bypass.txt'):
with open(self._core.dataDir + 'time-bypass.txt', 'r') as bypass:
self.timingToken = bypass.read()
except Exception as error:
logger.error('Failed to fetch time bypass token.', error = error)
return self.timingToken
def getRoundedEpoch(self, roundS=60):
'''
Returns the epoch, rounded down to given seconds (Default 60)
'''
epoch = self.getEpoch()
return epoch - (epoch % roundS)
def getClientAPIServer(self):
retData = ''
try:
with open(self._core.privateApiHostFile, 'r') as host:
hostname = host.read()
except FileNotFoundError:
raise FileNotFoundError
else:
retData += '%s:%s' % (hostname, config.get('client.client.port'))
return retData
def localCommand(self, command, data='', silent = True, post=False, postData = {}, maxWait=20):
'''
Send a command to the local http API server, securely. Intended for local clients, DO NOT USE for remote peers.
'''
self.getTimeBypassToken()
# TODO: URL encode parameters, just as an extra measure. May not be needed, but should be added regardless.
hostname = ''
waited = 0
while hostname == '':
try:
hostname = self.getClientAPIServer()
except FileNotFoundError:
time.sleep(1)
waited += 1
if waited == maxWait:
return False
if data != '':
data = '&data=' + urllib.parse.quote_plus(data)
payload = 'http://%s/%s%s' % (hostname, command, data)
try:
if post:
retData = requests.post(payload, data=postData, headers={'token': config.get('client.webpassword'), 'Connection':'close'}, timeout=(maxWait, maxWait)).text
else:
retData = requests.get(payload, headers={'token': config.get('client.webpassword'), 'Connection':'close'}, timeout=(maxWait, maxWait)).text
except Exception as error:
if not silent:
logger.error('Failed to make local request (command: %s):%s' % (command, error))
retData = False
return retData
def getHumanReadableID(self, pub=''):
'''gets a human readable ID from a public key'''
if pub == '':
pub = self._core._crypto.pubKey
pub = base64.b16encode(base64.b32decode(pub)).decode()
return ' '.join(pgpwords.wordify(pub))
def convertHumanReadableID(self, pub):
'''Convert a human readable pubkey id to base32'''
pub = pub.lower()
return self.bytesToStr(base64.b32encode(binascii.unhexlify(pgpwords.hexify(pub.strip()))))
def getBlockMetadataFromData(self, blockData):
'''
Accepts block contents as a string and returns a tuple of
(metadata, meta, data), where meta is the internal metadata, which will be
returned as an encrypted base64 string if it is encrypted, or a dict if not.
'''
meta = {}
metadata = {}
data = blockData
try:
blockData = blockData.encode()
except AttributeError:
pass
try:
metadata = json.loads(blockData[:blockData.find(b'\n')].decode())
except json.decoder.JSONDecodeError:
pass
else:
data = blockData[blockData.find(b'\n'):].decode()
if not metadata['encryptType'] in ('asym', 'sym'):
try:
meta = json.loads(metadata['meta'])
except KeyError:
pass
meta = metadata['meta']
return (metadata, meta, data)
def processBlockMetadata(self, blockHash):
'''
Read metadata from a block and cache it to the block database
'''
curTime = self.getRoundedEpoch(roundS=60)
myBlock = Block(blockHash, self._core)
if myBlock.isEncrypted:
myBlock.decrypt()
if (myBlock.isEncrypted and myBlock.decrypted) or (not myBlock.isEncrypted):
blockType = myBlock.getMetadata('type') # we would use myBlock.getType() here, but it is bugged with encrypted blocks
signer = self.bytesToStr(myBlock.signer)
valid = myBlock.verifySig()
if myBlock.getMetadata('newFSKey') is not None:
onionrusers.OnionrUser(self._core, signer).addForwardKey(myBlock.getMetadata('newFSKey'))
try:
if len(blockType) <= 10:
self._core.updateBlockInfo(blockHash, 'dataType', blockType)
except TypeError:
logger.warn("Missing block information")
pass
# Set block expire time if specified
try:
expireTime = myBlock.getHeader('expire')
assert len(str(int(expireTime))) < 20 # test that expire time is an integer of sane length (for epoch)
except (AssertionError, ValueError, TypeError) as e:
expireTime = onionrvalues.OnionrValues().default_expire + curTime
finally:
self._core.updateBlockInfo(blockHash, 'expire', expireTime)
if not blockType is None:
self._core.updateBlockInfo(blockHash, 'dataType', blockType)
onionrevents.event('processblocks', data = {'block': myBlock, 'type': blockType, 'signer': signer, 'validSig': valid}, onionr = self._core.onionrInst)
else:
pass
#logger.debug('Not processing metadata on encrypted block we cannot decrypt.')
def escapeAnsi(self, line):
'''
Remove ANSI escape codes from a string with regex
taken or adapted from: https://stackoverflow.com/a/38662876 by user https://stackoverflow.com/users/802365/%c3%89douard-lopez
cc-by-sa-3 license https://creativecommons.org/licenses/by-sa/3.0/
'''
ansi_escape = re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[@-~]')
return ansi_escape.sub('', line)
def hasBlock(self, hash):
'''
Check for new block in the list
'''
conn = sqlite3.connect(self._core.blockDB)
c = conn.cursor()
if not self.validateHash(hash):
raise Exception("Invalid hash")
for result in c.execute("SELECT COUNT() FROM hashes WHERE hash = ?", (hash,)):
if result[0] >= 1:
conn.commit()
conn.close()
return True
else:
conn.commit()
conn.close()
return False
def hasKey(self, key):
'''
Check for key in list of public keys
'''
return key in self._core.listPeers()
def validateHash(self, data, length=64):
'''
Validate if a string is a valid hash hex digest (does not compare, just checks length and charset)
'''
retVal = True
if data == False or data == True:
return False
data = data.strip()
if len(data) != length:
retVal = False
else:
try:
int(data, 16)
except ValueError:
retVal = False
return retVal
def validateMetadata(self, metadata, blockData):
'''Validate that metadata meets the Onionr spec (does not validate proof value computation); takes either a dictionary or a JSON string'''
# TODO, make this check sane sizes
retData = False
maxClockDifference = 120
# convert to dict if it is json string
if type(metadata) is str:
try:
metadata = json.loads(metadata)
except json.JSONDecodeError:
pass
# Validate the metadata dict for invalid keys or values that exceed size limits
maxAge = config.get("general.max_block_age", onionrvalues.OnionrValues().default_expire)
if type(metadata) is dict:
for i in metadata:
try:
self._core.requirements.blockMetadataLengths[i]
except KeyError:
logger.warn('Block has invalid metadata key ' + i)
break
else:
testData = metadata[i]
try:
testData = len(testData)
except (TypeError, AttributeError) as e:
testData = len(str(testData))
if self._core.requirements.blockMetadataLengths[i] < testData:
logger.warn('Block metadata key ' + i + ' exceeded maximum size')
break
if i == 'time':
if not self.isIntegerString(metadata[i]):
logger.warn('Block metadata time stamp is not integer string or int')
break
isFuture = (metadata[i] - self.getEpoch())
if isFuture > maxClockDifference:
logger.warn('Block timestamp is skewed to the future over the max %s: %s' % (maxClockDifference, isFuture))
break
if (self.getEpoch() - metadata[i]) > maxAge:
logger.warn('Block is outdated: %s' % (metadata[i],))
break
elif i == 'expire':
try:
assert int(metadata[i]) > self.getEpoch()
except AssertionError:
logger.warn('Block is expired: %s less than %s' % (metadata[i], self.getEpoch()))
break
elif i == 'encryptType':
try:
assert metadata[i] in ('asym', 'sym', '')
except AssertionError:
logger.warn('Invalid encryption mode')
break
else:
# if metadata loop gets no errors, it does not break, therefore metadata is valid
# make sure we do not have another block with the same data content (prevent data duplication and replay attacks)
nonce = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(blockData))
try:
with open(self._core.dataNonceFile, 'r') as nonceFile:
if nonce in nonceFile.read():
retData = False # we've seen that nonce before, so we can't pass metadata
raise onionrexceptions.DataExists
except FileNotFoundError:
retData = True
except onionrexceptions.DataExists:
# do not set retData to True, because nonce has been seen before
pass
else:
retData = True
else:
logger.warn('In call to utils.validateMetadata, metadata must be JSON string or a dictionary object')
return retData
def validatePubKey(self, key):
'''
Validate if a string is a valid base32 encoded Ed25519 key
'''
retVal = False
if type(key) is type(None):
return False
try:
nacl.signing.SigningKey(seed=key, encoder=nacl.encoding.Base32Encoder)
except nacl.exceptions.ValueError:
pass
except base64.binascii.Error as err:
pass
else:
retVal = True
return retVal
@staticmethod
def validateID(id):
'''
Validate if an address is a valid tor or i2p hidden service
'''
try:
idLength = len(id)
retVal = True
idNoDomain = ''
peerType = ''
# i2p b32 addresses are 60 characters long (including .b32.i2p)
if idLength == 60:
peerType = 'i2p'
if not id.endswith('.b32.i2p'):
retVal = False
else:
idNoDomain = id.split('.b32.i2p')[0]
# Onion v2's are 22 (including .onion), v3's are 62 with .onion
elif idLength == 22 or idLength == 62:
peerType = 'onion'
if not id.endswith('.onion'):
retVal = False
else:
idNoDomain = id.split('.onion')[0]
else:
retVal = False
if retVal:
if peerType == 'i2p':
try:
id.split('.b32.i2p')[2]
except:
pass
else:
retVal = False
elif peerType == 'onion':
try:
id.split('.onion')[2]
except:
pass
else:
retVal = False
if not idNoDomain.isalnum():
retVal = False
# Validate address is valid base32 (when capitalized and minus extension); v2/v3 onions and .b32.i2p use base32
for x in idNoDomain.upper():
if x not in string.ascii_uppercase and x not in '234567':
retVal = False
return retVal
except:
return False
@staticmethod
def isIntegerString(data):
'''Check if a string is a valid base10 integer (also returns true if already an int)'''
try:
int(data)
except (ValueError, TypeError) as e:
return False
else:
return True
def isCommunicatorRunning(self, timeout = 5, interval = 0.1):
try:
runcheck_file = self._core.dataDir + '.runcheck'
if not os.path.isfile(runcheck_file):
open(runcheck_file, 'w+').close()
# self._core.daemonQueueAdd('runCheck') # deprecated
starttime = time.time()
while True:
time.sleep(interval)
if not os.path.isfile(runcheck_file):
return True
elif time.time() - starttime >= timeout:
return False
except:
return False
def importNewBlocks(self, scanDir=''):
'''
This function is intended to scan for new blocks ON THE DISK and import them
'''
blockList = self._core.getBlockList()
exist = False
if scanDir == '':
scanDir = self._core.blockDataLocation
if not scanDir.endswith('/'):
scanDir += '/'
for block in glob.glob(scanDir + "*.dat"):
if block.replace(scanDir, '').replace('.dat', '') not in blockList:
exist = True
logger.info('Found new block on disk %s' % block)
with open(block, 'rb') as newBlock:
block = block.replace(scanDir, '').replace('.dat', '')
if self._core._crypto.sha3Hash(newBlock.read()) == block.replace('.dat', ''):
self._core.addToBlockDB(block.replace('.dat', ''), dataSaved=True)
logger.info('Imported block %s.' % block)
self._core._utils.processBlockMetadata(block)
else:
logger.warn('Failed to verify hash for %s' % block)
if not exist:
logger.info('No blocks found to import')
def progressBar(self, value = 0, endvalue = 100, width = None):
'''
Outputs a progress bar with a percentage. Write \n after use.
'''
if width is None:
width, height = shutil.get_terminal_size((80, 24))
bar_length = width - 6
percent = float(value) / endvalue
arrow = '-' * int(round(percent * bar_length)-1) + '>'
spaces = ' ' * (bar_length - len(arrow))
sys.stdout.write("\r{0}{1}%".format(arrow + spaces, int(round(percent * 100))))
sys.stdout.flush()
def getEpoch(self):
'''returns epoch'''
return math.floor(time.time())
def doPostRequest(self, url, data={}, port=0, proxyType='tor'):
'''
Do a POST request through a local tor or i2p instance
'''
if proxyType == 'tor':
if port == 0:
port = self._core.torPort
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'}
else:
return
headers = {'user-agent': 'PyOnionr', 'Connection':'close'}
try:
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.post(url, data=data, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except requests.exceptions.RequestException as e:
logger.debug('Error: %s' % str(e))
retData = False
return retData
def doGetRequest(self, url, port=0, proxyType='tor', ignoreAPI=False, returnHeaders=False):
'''
Do a get request through a local tor or i2p instance
'''
retData = False
if proxyType == 'tor':
if port == 0:
raise onionrexceptions.MissingPort('Socks port required for Tor HTTP get request')
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'}
else:
return
headers = {'user-agent': 'PyOnionr', 'Connection':'close'}
response_headers = dict()
try:
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30), )
# Check server is using same API version as us
if not ignoreAPI:
try:
response_headers = r.headers
if r.headers['X-API'] != str(API_VERSION):
raise onionrexceptions.InvalidAPIVersion
except KeyError:
raise onionrexceptions.InvalidAPIVersion
retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except ValueError as e:
logger.debug('Failed to make GET request to %s' % url, error = e, sensitive = True)
except onionrexceptions.InvalidAPIVersion:
if 'X-API' in response_headers:
logger.debug('Using API version %s. Cannot communicate with node\'s API version of %s.' % (API_VERSION, response_headers['X-API']))
else:
logger.debug('Using API version %s. API version was not sent with the request.' % API_VERSION)
except requests.exceptions.RequestException as e:
if not 'ConnectTimeoutError' in str(e) and not 'Request rejected or failed' in str(e):
logger.debug('Error: %s' % str(e))
retData = False
if returnHeaders:
return (retData, response_headers)
else:
return retData
@staticmethod
def strToBytes(data):
try:
data = data.encode()
except AttributeError:
pass
return data
@staticmethod
def bytesToStr(data):
try:
data = data.decode()
except AttributeError:
pass
return data
def size(path='.'):
'''
Returns the size of a folder's contents in bytes
'''
total = 0
if os.path.exists(path):
if os.path.isfile(path):
total = os.path.getsize(path)
else:
for entry in os.scandir(path):
if entry.is_file():
total += entry.stat().st_size
elif entry.is_dir():
total += size(entry.path)
return total
def humanSize(num, suffix='B'):
'''
Converts from bytes to a human readable format.
'''
for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
if abs(num) < 1024.0:
return "%.1f %s%s" % (num, unit, suffix)
num /= 1024.0
return "%.1f %s%s" % (num, 'Yi', suffix)

View File

View File

@ -0,0 +1,90 @@
'''
Onionr - Private P2P Communication
Do HTTP GET or POST requests through a proxy
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import requests
import logger, onionrexceptions
def do_post_request(core_inst, url, data={}, port=0, proxyType='tor'):
'''
Do a POST request through a local tor or i2p instance
'''
if proxyType == 'tor':
if port == 0:
port = core_inst.torPort
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'}
else:
return
headers = {'user-agent': 'PyOnionr', 'Connection':'close'}
try:
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.post(url, data=data, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except requests.exceptions.RequestException as e:
logger.debug('Error: %s' % str(e))
retData = False
return retData
def do_get_request(core_inst, url, port=0, proxyType='tor', ignoreAPI=False, returnHeaders=False):
'''
Do a get request through a local tor or i2p instance
'''
API_VERSION = core_inst.onionrInst.API_VERSION
retData = False
if proxyType == 'tor':
if port == 0:
raise onionrexceptions.MissingPort('Socks port required for Tor HTTP get request')
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'}
else:
return
headers = {'user-agent': 'PyOnionr', 'Connection':'close'}
response_headers = dict()
try:
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30), )
# Check server is using same API version as us
if not ignoreAPI:
try:
response_headers = r.headers
if r.headers['X-API'] != str(API_VERSION):
raise onionrexceptions.InvalidAPIVersion
except KeyError:
raise onionrexceptions.InvalidAPIVersion
retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except ValueError as e:
pass
except onionrexceptions.InvalidAPIVersion:
if 'X-API' in response_headers:
logger.debug('Using API version %s. Cannot communicate with node\'s API version of %s.' % (API_VERSION, response_headers['X-API']))
else:
logger.debug('Using API version %s. API version was not sent with the request.' % API_VERSION)
except requests.exceptions.RequestException as e:
if not 'ConnectTimeoutError' in str(e) and not 'Request rejected or failed' in str(e):
logger.debug('Error: %s' % str(e))
retData = False
if returnHeaders:
return (retData, response_headers)
else:
return retData
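A minimal usage sketch of the new request helpers, assuming a running Tor SOCKS proxy and a configured Core instance; the .onion URLs are placeholders:

import core
from onionrutils import basicrequests

core_inst = core.Core()
# GET requests require an explicit socks port; ignoreAPI skips the X-API version check
reply = basicrequests.do_get_request(core_inst, 'http://example.onion/ping',
                                     port=core_inst.config.get('tor.socksport'), ignoreAPI=True)
# POST falls back to core_inst.torPort when port is left at 0
basicrequests.do_post_request(core_inst, 'http://example.onion/submit', data={'key': 'value'})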

View File

@ -0,0 +1,108 @@
'''
Onionr - Private P2P Communication
Module to fetch block metadata from raw block data and process it
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import json, sqlite3
import logger, onionrevents
from onionrusers import onionrusers
from etc import onionrvalues
import onionrblockapi
from . import epoch, stringvalidators, bytesconverter
def get_block_metadata_from_data(blockData):
'''
Accepts block contents as a string and returns a tuple of
(metadata, meta, data), where meta is the internal metadata, which will be
returned as an encrypted base64 string if it is encrypted, or a dict if not.
'''
meta = {}
metadata = {}
data = blockData
try:
blockData = blockData.encode()
except AttributeError:
pass
try:
metadata = json.loads(blockData[:blockData.find(b'\n')].decode())
except json.decoder.JSONDecodeError:
pass
else:
data = blockData[blockData.find(b'\n'):].decode()
if not metadata['encryptType'] in ('asym', 'sym'):
try:
meta = json.loads(metadata['meta'])
except KeyError:
pass
meta = metadata['meta']
return (metadata, meta, data)
def process_block_metadata(core_inst, blockHash):
'''
Read metadata from a block and cache it to the block database
'''
curTime = epoch.get_rounded_epoch(roundS=60)
myBlock = onionrblockapi.Block(blockHash, core_inst)
if myBlock.isEncrypted:
myBlock.decrypt()
if (myBlock.isEncrypted and myBlock.decrypted) or (not myBlock.isEncrypted):
blockType = myBlock.getMetadata('type') # we would use myBlock.getType() here, but it is bugged with encrypted blocks
signer = bytesconverter.bytes_to_str(myBlock.signer)
valid = myBlock.verifySig()
if myBlock.getMetadata('newFSKey') is not None:
onionrusers.OnionrUser(core_inst, signer).addForwardKey(myBlock.getMetadata('newFSKey'))
try:
if len(blockType) <= 10:
core_inst.updateBlockInfo(blockHash, 'dataType', blockType)
except TypeError:
logger.warn("Missing block information")
pass
# Set block expire time if specified
try:
expireTime = myBlock.getHeader('expire')
assert len(str(int(expireTime))) < 20 # test that expire time is an integer of sane length (for epoch)
except (AssertionError, ValueError, TypeError) as e:
expireTime = onionrvalues.OnionrValues().default_expire + curTime
finally:
core_inst.updateBlockInfo(blockHash, 'expire', expireTime)
if blockType is not None:
core_inst.updateBlockInfo(blockHash, 'dataType', blockType)
onionrevents.event('processblocks', data = {'block': myBlock, 'type': blockType, 'signer': signer, 'validSig': valid}, onionr = core_inst.onionrInst)
else:
pass
def has_block(core_inst, hash):
'''
Check for new block in the list
'''
conn = sqlite3.connect(core_inst.blockDB)
c = conn.cursor()
if not stringvalidators.validate_hash(hash):
raise Exception("Invalid hash")
for result in c.execute("SELECT COUNT() FROM hashes WHERE hash = ?", (hash,)):
if result[0] >= 1:
conn.commit()
conn.close()
return True
else:
conn.commit()
conn.close()
return False
return False
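Illustrative only: feeding raw block text to the extraction helper. The header and payload below are made-up example values:

from onionrutils import blockmetadata

raw = '{"encryptType": "", "meta": "{}"}\nhello world'
metadata, meta, data = blockmetadata.get_block_metadata_from_data(raw)
# metadata is the outer JSON header, meta the inner metadata, data the remaining payload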

View File

@ -0,0 +1,14 @@
def str_to_bytes(data):
'''Converts a string to bytes with .encode()'''
try:
data = data.encode('UTF-8')
except AttributeError:
pass
return data
def bytes_to_str(data):
try:
data = data.decode('UTF-8')
except AttributeError:
pass
return data
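These converters are deliberately forgiving: passing data that is already the target type is a no-op. A quick sketch:

from onionrutils import bytesconverter

assert bytesconverter.str_to_bytes('onionr') == b'onionr'
assert bytesconverter.bytes_to_str(b'onionr') == 'onionr'
assert bytesconverter.bytes_to_str('already a str') == 'already a str'  # no-op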

View File

@ -0,0 +1,38 @@
'''
Onionr - Private P2P Communication
Check if the communicator is running
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import time, os
def is_communicator_running(core_inst, timeout = 5, interval = 0.1):
try:
runcheck_file = core_inst.dataDir + '.runcheck'
if not os.path.isfile(runcheck_file):
open(runcheck_file, 'w+').close()
starttime = time.time()
while True:
time.sleep(interval)
if not os.path.isfile(runcheck_file):
return True
elif time.time() - starttime >= timeout:
return False
except:
return False
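A brief sketch of polling the communicator status. The module name checkcommunicator is an assumption (its import is not shown in this diff); a Core instance supplies dataDir:

import core
from onionrutils import checkcommunicator  # assumed module name

core_inst = core.Core()
if checkcommunicator.is_communicator_running(core_inst, timeout=5):
    print('communicator appears to be running')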

View File

@ -0,0 +1,30 @@
'''
Onionr - Private P2P Communication
Get floored epoch, or rounded epoch
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import math, time
def get_rounded_epoch(roundS=60):
'''
Returns the epoch, rounded down to given seconds (Default 60)
'''
epoch = get_epoch()
return epoch - (epoch % roundS)
def get_epoch():
'''returns epoch'''
return math.floor(time.time())
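Both helpers are pure functions of the system clock; a quick sketch of the rounding behaviour:

from onionrutils import epoch

now = epoch.get_epoch()                # floored unix time
rounded = epoch.get_rounded_epoch(60)  # the same value, rounded down to the minute
assert rounded <= now and now - rounded < 60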

View File

@ -0,0 +1,10 @@
import re
def escape_ANSI(line):
'''
Remove ANSI escape codes from a string with regex
adapted from: https://stackoverflow.com/a/38662876 by user https://stackoverflow.com/users/802365/%c3%89douard-lopez
cc-by-sa-3 license https://creativecommons.org/licenses/by-sa/3.0/
'''
ansi_escape = re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[@-~]')
return ansi_escape.sub('', line)
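A usage sketch; the escape sequences in the string are only an example:

from onionrutils import escapeansi

colored = '\x1b[31mred text\x1b[0m'
assert escapeansi.escape_ANSI(colored) == 'red text'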

View File

@ -0,0 +1,29 @@
'''
Onionr - Private P2P Communication
Return the client api server address and port, which is usually random
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
def get_client_API_server(core_inst):
retData = ''
try:
with open(core_inst.privateApiHostFile, 'r') as host:
hostname = host.read()
except FileNotFoundError:
raise FileNotFoundError
else:
retData += '%s:%s' % (hostname, core_inst.config.get('client.client.port'))
return retData

View File

@ -0,0 +1,48 @@
'''
Onionr - Private P2P Communication
import new blocks from disk, providing transport agnosticism
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import glob
import logger, core
from onionrutils import blockmetadata
def import_new_blocks(core_inst=None, scanDir=''):
'''
This function is intended to scan for new blocks ON THE DISK and import them
'''
if core_inst is None:
core_inst = core.Core()
blockList = core_inst.getBlockList()
exist = False
if scanDir == '':
scanDir = core_inst.blockDataLocation
if not scanDir.endswith('/'):
scanDir += '/'
for block in glob.glob(scanDir + "*.dat"):
if block.replace(scanDir, '').replace('.dat', '') not in blockList:
exist = True
logger.info('Found new block on disk %s' % block, terminal=True)
with open(block, 'rb') as newBlock:
block = block.replace(scanDir, '').replace('.dat', '')
if core_inst._crypto.sha3Hash(newBlock.read()) == block.replace('.dat', ''):
core_inst.addToBlockDB(block.replace('.dat', ''), dataSaved=True)
logger.info('Imported block %s.' % block, terminal=True)
blockmetadata.process_block_metadata(core_inst, block)
else:
logger.warn('Failed to verify hash for %s' % block, terminal=True)
if not exist:
logger.info('No blocks found to import', terminal=True)
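Sketch of triggering a disk scan for sideloaded blocks; when no Core instance is passed the function creates its own:

from onionrutils import importnewblocks

# scans blockDataLocation for *.dat files whose sha3 hash matches their filename
importnewblocks.import_new_blocks()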

View File

@ -0,0 +1,51 @@
'''
Onionr - Private P2P Communication
send a command to the local API server
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import urllib, requests, time
import logger
from onionrutils import getclientapiserver
def local_command(core_inst, command, data='', silent = True, post=False, postData = {}, maxWait=20):
'''
Send a command to the local http API server, securely. Intended for local clients, DO NOT USE for remote peers.
'''
# TODO: URL encode parameters, just as an extra measure. May not be needed, but should be added regardless.
hostname = ''
waited = 0
while hostname == '':
try:
hostname = getclientapiserver.get_client_API_server(core_inst)
except FileNotFoundError:
time.sleep(1)
waited += 1
if waited == maxWait:
return False
if data != '':
data = '&data=' + urllib.parse.quote_plus(data)
payload = 'http://%s/%s%s' % (hostname, command, data)
try:
if post:
retData = requests.post(payload, data=postData, headers={'token': core_inst.config.get('client.webpassword'), 'Connection':'close'}, timeout=(maxWait, maxWait)).text
else:
retData = requests.get(payload, headers={'token': core_inst.config.get('client.webpassword'), 'Connection':'close'}, timeout=(maxWait, maxWait)).text
except Exception as error:
if not silent:
logger.error('Failed to make local request (command: %s):%s' % (command, error), terminal=True)
retData = False
return retData
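A sketch of the intended call pattern, mirroring how the cliui plugin uses it later in this commit:

import core
from onionrutils import localcommand

core_inst = core.Core()
if localcommand.local_command(core_inst, 'ping', maxWait=5) == 'pong!':
    print('daemon is up')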

View File

@ -0,0 +1,27 @@
'''
Onionr - Private P2P Communication
convert a base32 string (intended for ed25519 user ids) to pgp word list
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import base64
from etc import pgpwords
def get_human_readable_ID(core_inst, pub=''):
'''gets a human readable ID from a public key'''
if pub == '':
pub = core_inst._crypto.pubKey
pub = base64.b16encode(base64.b32decode(pub)).decode()
return ' '.join(pgpwords.wordify(pub))
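A sketch; the module name gethumanreadableid is assumed here (the import is not visible in this diff). With no pub argument, the node's own public key is wordified:

import core
from onionrutils import gethumanreadableid  # assumed module name

core_inst = core.Core()
print(gethumanreadableid.get_human_readable_ID(core_inst))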

View File

@ -0,0 +1,119 @@
'''
Onionr - Private P2P Communication
validate various string data types
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import base64, string
import unpaddedbase32, nacl.signing, nacl.encoding
from onionrutils import bytesconverter
def validate_hash(data, length=64):
'''
Validate if a string is a valid hash hex digest (does not compare, just checks length and charset)
Length is only invalid if it is *more* than the specified length
'''
retVal = True
if data == False or data == True:
return False
data = data.strip()
if len(data) > length:
retVal = False
else:
try:
int(data, 16)
except ValueError:
retVal = False
return retVal
def validate_pub_key(key):
'''
Validate if a string is a valid base32 encoded Ed25519 key
'''
if type(key) is type(None):
return False
# Accept keys that have no = padding
key = unpaddedbase32.repad(bytesconverter.str_to_bytes(key))
retVal = False
try:
nacl.signing.SigningKey(seed=key, encoder=nacl.encoding.Base32Encoder)
except nacl.exceptions.ValueError:
pass
except base64.binascii.Error as err:
pass
else:
retVal = True
return retVal
def validate_transport(id):
try:
idLength = len(id)
retVal = True
idNoDomain = ''
peerType = ''
# i2p b32 addresses are 60 characters long (including .b32.i2p)
if idLength == 60:
peerType = 'i2p'
if not id.endswith('.b32.i2p'):
retVal = False
else:
idNoDomain = id.split('.b32.i2p')[0]
# Onion v2's are 22 (including .onion), v3's are 62 with .onion
elif idLength == 22 or idLength == 62:
peerType = 'onion'
if not id.endswith('.onion'):
retVal = False
else:
idNoDomain = id.split('.onion')[0]
else:
retVal = False
if retVal:
if peerType == 'i2p':
try:
id.split('.b32.i2p')[2]
except:
pass
else:
retVal = False
elif peerType == 'onion':
try:
id.split('.onion')[2]
except:
pass
else:
retVal = False
if not idNoDomain.isalnum():
retVal = False
# Validate address is valid base32 (when capitalized and minus extension); v2/v3 onions and .b32.i2p use base32
for x in idNoDomain.upper():
if x not in string.ascii_uppercase and x not in '234567':
retVal = False
return retVal
except Exception as e:
return False
def is_integer_string(data):
'''Check if a string is a valid base10 integer (also returns true if already an int)'''
try:
int(data)
except (ValueError, TypeError) as e:
return False
else:
return True
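Quick illustrations of the validators; the onion address is a synthetic 22-character v2-style example, not a real service:

from onionrutils import stringvalidators

assert stringvalidators.is_integer_string('42')
assert not stringvalidators.validate_pub_key('definitely not a key')
assert stringvalidators.validate_transport('aaaaaaaaaaaaaaaa.onion')  # v2/v3 .onion and .b32.i2p accepted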

View File

@ -0,0 +1,97 @@
'''
Onionr - Private P2P Communication
validate new block's metadata
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import json
import logger, onionrexceptions
from etc import onionrvalues
from onionrutils import stringvalidators, epoch, bytesconverter
def validate_metadata(core_inst, metadata, blockData):
'''Validate that metadata meets the Onionr spec (does not validate proof value computation); takes either a dictionary or a JSON string'''
# TODO, make this check sane sizes
retData = False
maxClockDifference = 120
# convert to dict if it is json string
if type(metadata) is str:
try:
metadata = json.loads(metadata)
except json.JSONDecodeError:
pass
# Validate the metadata dict for invalid keys or values that exceed size limits
maxAge = core_inst.config.get("general.max_block_age", onionrvalues.OnionrValues().default_expire)
if type(metadata) is dict:
for i in metadata:
try:
core_inst.requirements.blockMetadataLengths[i]
except KeyError:
logger.warn('Block has invalid metadata key ' + i)
break
else:
testData = metadata[i]
try:
testData = len(testData)
except (TypeError, AttributeError) as e:
testData = len(str(testData))
if core_inst.requirements.blockMetadataLengths[i] < testData:
logger.warn('Block metadata key ' + i + ' exceeded maximum size')
break
if i == 'time':
if not stringvalidators.is_integer_string(metadata[i]):
logger.warn('Block metadata time stamp is not integer string or int')
break
isFuture = (metadata[i] - epoch.get_epoch())
if isFuture > maxClockDifference:
logger.warn('Block timestamp is skewed to the future over the max %s: %s' % (maxClockDifference, isFuture))
break
if (epoch.get_epoch() - metadata[i]) > maxAge:
logger.warn('Block is outdated: %s' % (metadata[i],))
break
elif i == 'expire':
try:
assert int(metadata[i]) > epoch.get_epoch()
except AssertionError:
logger.warn('Block is expired: %s less than %s' % (metadata[i], epoch.get_epoch()))
break
elif i == 'encryptType':
try:
assert metadata[i] in ('asym', 'sym', '')
except AssertionError:
logger.warn('Invalid encryption mode')
break
else:
# if metadata loop gets no errors, it does not break, therefore metadata is valid
# make sure we do not have another block with the same data content (prevent data duplication and replay attacks)
nonce = bytesconverter.bytes_to_str(core_inst._crypto.sha3Hash(blockData))
try:
with open(core_inst.dataNonceFile, 'r') as nonceFile:
if nonce in nonceFile.read():
retData = False # we've seen that nonce before, so we can't pass metadata
raise onionrexceptions.DataExists
except FileNotFoundError:
retData = True
except onionrexceptions.DataExists:
# do not set retData to True, because nonce has been seen before
pass
else:
retData = True
else:
logger.warn('In call to validate_metadata, metadata must be a JSON string or a dictionary object')
return retData
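A sketch of calling the validator. The module name validatemetadata is an assumption, as is the exact set of permitted metadata keys (those limits come from core_inst.requirements, outside this diff); it accepts either a dict or a JSON string:

import core, json
from onionrutils import validatemetadata, epoch  # validatemetadata: assumed module name

core_inst = core.Core()
header = json.dumps({'time': epoch.get_epoch(), 'encryptType': ''})
ok = validatemetadata.validate_metadata(core_inst, header, 'example block payload')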

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
This is an interactive menu-driven CLI interface for Onionr This is an interactive menu-driven CLI interface for Onionr
''' '''
@ -23,6 +23,7 @@ import threading, time, uuid, subprocess, sys
import config, logger import config, logger
from onionrblockapi import Block from onionrblockapi import Block
import onionrplugins import onionrplugins
from onionrutils import localcommand
plugin_name = 'cliui' plugin_name = 'cliui'
PLUGIN_VERSION = '0.0.1' PLUGIN_VERSION = '0.0.1'
@ -48,7 +49,7 @@ class OnionrCLIUI:
def isRunning(self): def isRunning(self):
while not self.shutdown: while not self.shutdown:
if self.myCore._utils.localCommand('ping', maxWait=5) == 'pong!': if localcommand.local_command(self.myCore, 'ping', maxWait=5) == 'pong!':
self.running = 'Yes' self.running = 'Yes'
else: else:
self.running = 'No' self.running = 'No'
@ -100,7 +101,7 @@ class OnionrCLIUI:
elif choice == "": elif choice == "":
pass pass
else: else:
logger.error("Invalid choice") logger.error("Invalid choice", terminal=True)
return return
def on_init(api, data = None): def on_init(api, data = None):

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
This is an interactive menu-driven CLI interface for Onionr This is an interactive menu-driven CLI interface for Onionr
''' '''

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Microblogging Platform & Social network Onionr - Private P2P Communication
This default plugin allows users to encrypt/decrypt messages without using blocks This default plugin allows users to encrypt/decrypt messages without using blocks
''' '''
@ -21,6 +21,7 @@
# Imports some useful libraries # Imports some useful libraries
import logger, config, threading, time, datetime, sys, json import logger, config, threading, time, datetime, sys, json
from onionrblockapi import Block from onionrblockapi import Block
from onionrutils import stringvalidators
import onionrexceptions, onionrusers import onionrexceptions, onionrusers
import locale import locale
locale.setlocale(locale.LC_ALL, '') locale.setlocale(locale.LC_ALL, '')
@ -43,16 +44,16 @@ class PlainEncryption:
pass pass
try: try:
if not self.api.get_core()._utils.validatePubKey(sys.argv[2]): if not stringvalidators.validate_pub_key(sys.argv[2]):
raise onionrexceptions.InvalidPubkey raise onionrexceptions.InvalidPubkey
except (ValueError, IndexError) as e: except (ValueError, IndexError) as e:
logger.error("Peer public key not specified") logger.error("Peer public key not specified", terminal=True)
except onionrexceptions.InvalidPubkey: except onionrexceptions.InvalidPubkey:
logger.error("Invalid public key") logger.error("Invalid public key", terminal=True)
else: else:
pubkey = sys.argv[2] pubkey = sys.argv[2]
# Encrypt if public key is valid # Encrypt if public key is valid
logger.info("Please enter your message (ctrl-d or -q to stop):") logger.info("Please enter your message (ctrl-d or -q to stop):", terminal=True)
try: try:
for line in sys.stdin: for line in sys.stdin:
if line == '-q\n': if line == '-q\n':
@ -72,12 +73,12 @@ class PlainEncryption:
plaintext = data plaintext = data
encrypted = self.api.get_core()._crypto.pubKeyEncrypt(plaintext, pubkey, encodedData=True) encrypted = self.api.get_core()._crypto.pubKeyEncrypt(plaintext, pubkey, encodedData=True)
encrypted = self.api.get_core()._utils.bytesToStr(encrypted) encrypted = self.api.get_core()._utils.bytesToStr(encrypted)
logger.info('Encrypted Message: \n\nONIONR ENCRYPTED DATA %s END ENCRYPTED DATA' % (encrypted,)) logger.info('Encrypted Message: \n\nONIONR ENCRYPTED DATA %s END ENCRYPTED DATA' % (encrypted,), terminal=True)
def decrypt(self): def decrypt(self):
plaintext = "" plaintext = ""
data = "" data = ""
logger.info("Please enter your message (ctrl-d or -q to stop):") logger.info("Please enter your message (ctrl-d or -q to stop):", terminal=True)
try: try:
for line in sys.stdin: for line in sys.stdin:
if line == '-q\n': if line == '-q\n':
@ -91,17 +92,17 @@ class PlainEncryption:
myPub = self.api.get_core()._crypto.pubKey myPub = self.api.get_core()._crypto.pubKey
decrypted = self.api.get_core()._crypto.pubKeyDecrypt(encrypted, privkey=self.api.get_core()._crypto.privKey, encodedData=True) decrypted = self.api.get_core()._crypto.pubKeyDecrypt(encrypted, privkey=self.api.get_core()._crypto.privKey, encodedData=True)
if decrypted == False: if decrypted == False:
logger.error("Decryption failed") logger.error("Decryption failed", terminal=True)
else: else:
data = json.loads(decrypted) data = json.loads(decrypted)
logger.info('Decrypted Message: \n\n%s' % data['data']) logger.info('Decrypted Message: \n\n%s' % data['data'], terminal=True)
try: try:
logger.info("Signing public key: %s" % (data['signer'],)) logger.info("Signing public key: %s" % (data['signer'],), terminal=True)
assert self.api.get_core()._crypto.edVerify(data['data'], data['signer'], data['sig']) != False assert self.api.get_core()._crypto.edVerify(data['data'], data['signer'], data['sig']) != False
except (AssertionError, KeyError) as e: except (AssertionError, KeyError) as e:
logger.warn("WARNING: THIS MESSAGE HAS A MISSING OR INVALID SIGNATURE") logger.warn("WARNING: THIS MESSAGE HAS A MISSING OR INVALID SIGNATURE", terminal=True)
else: else:
logger.info("Message has good signature.") logger.info("Message has good signature.", terminal=True)
return return
def on_init(api, data = None): def on_init(api, data = None):

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
HTTP endpoints for controlling IMs HTTP endpoints for controlling IMs
''' '''
@ -22,13 +22,13 @@ from flask import Response, request, redirect, Blueprint, send_from_directory
import core import core
core_inst = core.Core() core_inst = core.Core()
flask_blueprint = Blueprint('clandestine_control', __name__) flask_blueprint = Blueprint('esoteric_control', __name__)
@flask_blueprint.route('/clandestine/ping') @flask_blueprint.route('/esoteric/ping')
def ping(): def ping():
return 'pong!' return 'pong!'
@flask_blueprint.route('/clandestine/send/<peer>', methods=['POST']) @flask_blueprint.route('/esoteric/send/<peer>', methods=['POST'])
def send_message(peer): def send_message(peer):
data = request.get_json(force=True) data = request.get_json(force=True)
core_inst.keyStore.refresh() core_inst.keyStore.refresh()
@ -40,14 +40,14 @@ def send_message(peer):
core_inst.keyStore.flush() core_inst.keyStore.flush()
return Response('success') return Response('success')
@flask_blueprint.route('/clandestine/gets/<peer>') @flask_blueprint.route('/esoteric/gets/<peer>')
def get_sent(peer): def get_sent(peer):
sent = core_inst.keyStore.get('s' + peer) sent = core_inst.keyStore.get('s' + peer)
if sent is None: if sent is None:
sent = [] sent = []
return Response(json.dumps(sent)) return Response(json.dumps(sent))
@flask_blueprint.route('/clandestine/addrec/<peer>', methods=['POST']) @flask_blueprint.route('/esoteric/addrec/<peer>', methods=['POST'])
def add_rec(peer): def add_rec(peer):
data = request.get_json(force=True) data = request.get_json(force=True)
core_inst.keyStore.refresh() core_inst.keyStore.refresh()
@ -59,7 +59,7 @@ def add_rec(peer):
core_inst.keyStore.flush() core_inst.keyStore.flush()
return Response('success') return Response('success')
@flask_blueprint.route('/clandestine/getrec/<peer>') @flask_blueprint.route('/esoteric/getrec/<peer>')
def get_messages(peer): def get_messages(peer):
core_inst.keyStore.refresh() core_inst.keyStore.refresh()
existing = core_inst.keyStore.get('r' + peer) existing = core_inst.keyStore.get('r' + peer)

View File

@ -1,5 +1,5 @@
{ {
"name" : "clandestine", "name" : "esoteric",
"version" : "1.0", "version" : "1.0",
"author" : "onionr" "author" : "onionr"
} }

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
Instant message conversations with Onionr peers Instant message conversations with Onionr peers
''' '''
@ -23,8 +23,9 @@ import locale, sys, os, threading, json
locale.setlocale(locale.LC_ALL, '') locale.setlocale(locale.LC_ALL, '')
import onionrservices, logger import onionrservices, logger
from onionrservices import bootstrapservice from onionrservices import bootstrapservice
from onionrutils import stringvalidators, epoch, basicrequests
plugin_name = 'clandestine' plugin_name = 'esoteric'
PLUGIN_VERSION = '0.0.0' PLUGIN_VERSION = '0.0.0'
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__))) sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)))
import controlapi, peerserver import controlapi, peerserver
@ -36,7 +37,7 @@ def exit_with_error(text=''):
logger.error(text) logger.error(text)
sys.exit(1) sys.exit(1)
class Clandestine: class Esoteric:
def __init__(self, pluginapi): def __init__(self, pluginapi):
self.myCore = pluginapi.get_core() self.myCore = pluginapi.get_core()
self.peer = None self.peer = None
@ -57,8 +58,8 @@ class Clandestine:
else: else:
message += '\n' message += '\n'
except EOFError: except EOFError:
message = json.dumps({'m': message, 't': self.myCore._utils.getEpoch()}) message = json.dumps({'m': message, 't': epoch.get_epoch()})
print(self.myCore._utils.doPostRequest('http://%s/clandestine/sendto' % (self.transport,), port=self.socks, data=message)) print(basicrequests.do_post_request(self.myCore, 'http://%s/esoteric/sendto' % (self.transport,), port=self.socks, data=message))
message = '' message = ''
except KeyboardInterrupt: except KeyboardInterrupt:
self.shutdown = True self.shutdown = True
@ -66,7 +67,7 @@ class Clandestine:
def create(self): def create(self):
try: try:
peer = sys.argv[2] peer = sys.argv[2]
if not self.myCore._utils.validatePubKey(peer): if not stringvalidators.validate_pub_key(peer):
exit_with_error('Invalid public key specified') exit_with_error('Invalid public key specified')
except IndexError: except IndexError:
exit_with_error('You must specify a peer public key') exit_with_error('You must specify a peer public key')
@ -77,7 +78,7 @@ class Clandestine:
self.socks = self.myCore.config.get('tor.socksport') self.socks = self.myCore.config.get('tor.socksport')
print('connected with', peer, 'on', peer_transport_address) print('connected with', peer, 'on', peer_transport_address)
if self.myCore._utils.doGetRequest('http://%s/ping' % (peer_transport_address,), ignoreAPI=True, port=self.socks) == 'pong!': if basicrequests.do_get_request(self.myCore, 'http://%s/ping' % (peer_transport_address,), ignoreAPI=True, port=self.socks) == 'pong!':
print('connected', peer_transport_address) print('connected', peer_transport_address)
threading.Thread(target=self._sender_loop).start() threading.Thread(target=self._sender_loop).start()
@ -89,6 +90,6 @@ def on_init(api, data = None):
''' '''
pluginapi = api pluginapi = api
chat = Clandestine(pluginapi) chat = Esoteric(pluginapi)
api.commands.register(['clandestine'], chat.create) api.commands.register(['esoteric'], chat.create)
return return

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
HTTP endpoints for communicating with peers HTTP endpoints for communicating with peers
''' '''
@ -19,9 +19,10 @@
''' '''
import sys, os, json import sys, os, json
import core import core
from onionrutils import localcommand
from flask import Response, request, redirect, Blueprint, abort, g from flask import Response, request, redirect, Blueprint, abort, g
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__))) sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)))
direct_blueprint = Blueprint('clandestine', __name__) direct_blueprint = Blueprint('esoteric', __name__)
core_inst = core.Core() core_inst = core.Core()
storage_dir = core_inst.dataDir storage_dir = core_inst.dataDir
@ -35,11 +36,11 @@ def request_setup():
g.host = host g.host = host
g.peer = core_inst.keyStore.get('dc-' + g.host) g.peer = core_inst.keyStore.get('dc-' + g.host)
@direct_blueprint.route('/clandestine/ping') @direct_blueprint.route('/esoteric/ping')
def pingdirect(): def pingdirect():
return 'pong!' return 'pong!'
@direct_blueprint.route('/clandestine/sendto', methods=['POST', 'GET']) @direct_blueprint.route('/esoteric/sendto', methods=['POST', 'GET'])
def sendto(): def sendto():
try: try:
msg = request.get_json(force=True) msg = request.get_json(force=True)
@ -47,9 +48,9 @@ def sendto():
msg = '' msg = ''
else: else:
msg = json.dumps(msg) msg = json.dumps(msg)
core_inst._utils.localCommand('/clandestine/addrec/%s' % (g.peer,), post=True, postData=msg) localcommand.local_command(core_inst, '/esoteric/addrec/%s' % (g.peer,), post=True, postData=msg)
return Response('success') return Response('success')
@direct_blueprint.route('/clandestine/poll') @direct_blueprint.route('/esoteric/poll')
def poll_chat(): def poll_chat():
return Response(core_inst._utils.localCommand('/clandestine/gets/%s' % (g.peer,))) return Response(localcommand.local_command(core_inst, '/esoteric/gets/%s' % (g.peer,)))

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Microblogging Platform & Social network Onionr - Private P2P Communication
This file primarily serves to allow specific fetching of flow board messages This file primarily serves to allow specific fetching of flow board messages
''' '''

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Microblogging Platform & Social network Onionr - Private P2P Communication
This default plugin handles "flow" messages (global chatroom style communication) This default plugin handles "flow" messages (global chatroom style communication)
''' '''
@ -22,6 +22,7 @@
import threading, time, locale, sys, os import threading, time, locale, sys, os
from onionrblockapi import Block from onionrblockapi import Block
import logger, config import logger, config
from onionrutils import escapeansi, epoch
locale.setlocale(locale.LC_ALL, '') locale.setlocale(locale.LC_ALL, '')
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__))) sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)))
@ -40,10 +41,10 @@ class OnionrFlow:
return return
def start(self): def start(self):
logger.warn("Please note: everything said here is public, even if a random channel name is used.") logger.warn("Please note: everything said here is public, even if a random channel name is used.", terminal=True)
message = "" message = ""
self.flowRunning = True self.flowRunning = True
newThread = threading.Thread(target=self.showOutput) newThread = threading.Thread(target=self.showOutput, daemon=True)
newThread.start() newThread.start()
try: try:
self.channel = logger.readline("Enter a channel name or none for default:") self.channel = logger.readline("Enter a channel name or none for default:")
@ -59,11 +60,12 @@ class OnionrFlow:
else: else:
if message == "q": if message == "q":
self.flowRunning = False self.flowRunning = False
expireTime = self.myCore._utils.getEpoch() + 43200 expireTime = epoch.get_epoch() + 43200
if len(message) > 0: if len(message) > 0:
logger.info('Inserting message as block...', terminal=True)
self.myCore.insertBlock(message, header='txt', expire=expireTime, meta={'ch': self.channel}) self.myCore.insertBlock(message, header='txt', expire=expireTime, meta={'ch': self.channel})
logger.info("Flow is exiting, goodbye") logger.info("Flow is exiting, goodbye", terminal=True)
return return
def showOutput(self): def showOutput(self):
@ -74,18 +76,16 @@ class OnionrFlow:
for block in self.myCore.getBlocksByType('txt'): for block in self.myCore.getBlocksByType('txt'):
block = Block(block) block = Block(block)
if block.getMetadata('ch') != self.channel: if block.getMetadata('ch') != self.channel:
#print('not chan', block.getMetadata('ch'))
continue continue
if block.getHash() in self.alreadyOutputed: if block.getHash() in self.alreadyOutputed:
#print('already')
continue continue
if not self.flowRunning: if not self.flowRunning:
break break
logger.info('\n------------------------', prompt = False) logger.info('\n------------------------', prompt = False, terminal=True)
content = block.getContent() content = block.getContent()
# Escape new lines, remove trailing whitespace, and escape ansi sequences # Escape new lines, remove trailing whitespace, and escape ansi sequences
content = self.myCore._utils.escapeAnsi(content.replace('\n', '\\n').replace('\r', '\\r').strip()) content = escapeansi.escape_ANSI(content.replace('\n', '\\n').replace('\r', '\\r').strip())
logger.info(block.getDate().strftime("%m/%d %H:%M") + ' - ' + logger.colors.reset + content, prompt = False) logger.info(block.getDate().strftime("%m/%d %H:%M") + ' - ' + logger.colors.reset + content, prompt = False, terminal=True)
self.alreadyOutputed.append(block.getHash()) self.alreadyOutputed.append(block.getHash())
time.sleep(5) time.sleep(5)
except KeyboardInterrupt: except KeyboardInterrupt:

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
This processes metadata for Onionr blocks This processes metadata for Onionr blocks
''' '''
@ -23,6 +23,7 @@ import logger, config
import os, sys, json, time, random, shutil, base64, getpass, datetime, re import os, sys, json, time, random, shutil, base64, getpass, datetime, re
from onionrblockapi import Block from onionrblockapi import Block
import onionrusers, onionrexceptions import onionrusers, onionrexceptions
from onionrutils import stringvalidators
plugin_name = 'metadataprocessor' plugin_name = 'metadataprocessor'
@ -36,7 +37,7 @@ def _processForwardKey(api, myBlock):
key = myBlock.getMetadata('newFSKey') key = myBlock.getMetadata('newFSKey')
# We don't need to validate here probably, but it helps # We don't need to validate here probably, but it helps
if api.get_utils().validatePubKey(key): if stringvalidators.validate_pub_key(key):
peer.addForwardKey(key) peer.addForwardKey(key)
else: else:
raise onionrexceptions.InvalidPubkey("%s is not a valid pubkey key" % (key,)) raise onionrexceptions.InvalidPubkey("%s is not a valid pubkey key" % (key,))

View File

@ -1,5 +1,5 @@
''' '''
Onionr - P2P Microblogging Platform & Social network. Onionr - Private P2P Communication
This plugin acts as a plugin manager, and allows the user to install other plugins distributed over Onionr. This plugin acts as a plugin manager, and allows the user to install other plugins distributed over Onionr.
''' '''
@ -22,6 +22,7 @@
import logger, config import logger, config
import os, sys, json, time, random, shutil, base64, getpass, datetime, re import os, sys, json, time, random, shutil, base64, getpass, datetime, re
from onionrblockapi import Block from onionrblockapi import Block
from onionrutils import importnewblocks, stringvalidators
plugin_name = 'pluginmanager' plugin_name = 'pluginmanager'
@ -180,11 +181,11 @@ def blockToPlugin(block):
shutil.unpack_archive(source, destination) shutil.unpack_archive(source, destination)
pluginapi.plugins.enable(name) pluginapi.plugins.enable(name)
logger.info('Installation of %s complete.' % name) logger.info('Installation of %s complete.' % name, terminal=True)
return True return True
except Exception as e: except Exception as e:
logger.error('Failed to install plugin.', error = e, timestamp = False) logger.error('Failed to install plugin.', error = e, timestamp = False, terminal=True)
return False return False
@ -236,13 +237,13 @@ def pluginToBlock(plugin, import_block = True):
# hash = pluginapi.get_core().insertBlock(, header = 'plugin', sign = True) # hash = pluginapi.get_core().insertBlock(, header = 'plugin', sign = True)
if import_block: if import_block:
pluginapi.get_utils().importNewBlocks() importnewblocks.import_new_blocks(pluginapi.get_core())
return hash return hash
else: else:
logger.error('Plugin %s does not exist.' % plugin) logger.error('Plugin %s does not exist.' % plugin, terminal=True)
except Exception as e: except Exception as e:
logger.error('Failed to convert plugin to block.', error = e, timestamp = False) logger.error('Failed to convert plugin to block.', error = e, timestamp = False, terminal=True)
return False return False
@ -261,7 +262,7 @@ def installBlock(block):
install = False install = False
logger.info(('Will install %s' + (' v' + version if not version is None else '') + ' (%s), by %s') % (name, date, author)) logger.info(('Will install %s' + (' v' + version if not version is None else '') + ' (%s), by %s') % (name, date, author), terminal=True)
# TODO: Convert to single line if statement # TODO: Convert to single line if statement
if os.path.exists(pluginapi.plugins.get_folder(name)): if os.path.exists(pluginapi.plugins.get_folder(name)):
@ -273,12 +274,12 @@ def installBlock(block):
blockToPlugin(block.getHash()) blockToPlugin(block.getHash())
addPlugin(name) addPlugin(name)
else: else:
logger.info('Installation cancelled.') logger.info('Installation cancelled.', terminal=True)
return False return False
return True return True
except Exception as e: except Exception as e:
logger.error('Failed to install plugin.', error = e, timestamp = False) logger.error('Failed to install plugin.', error = e, timestamp = False, terminal=True)
return False return False
def uninstallPlugin(plugin): def uninstallPlugin(plugin):
@ -291,12 +292,12 @@ def uninstallPlugin(plugin):
remove = False remove = False
if not exists: if not exists:
logger.warn('Plugin %s does not exist.' % plugin, timestamp = False) logger.warn('Plugin %s does not exist.' % plugin, timestamp = False, terminal=True)
return False return False
default = 'y' default = 'y'
if not installedByPluginManager: if not installedByPluginManager:
logger.warn('The plugin %s was not installed by %s.' % (plugin, plugin_name), timestamp = False) logger.warn('The plugin %s was not installed by %s.' % (plugin, plugin_name), timestamp = False, terminal=True)
default = 'n' default = 'n'
remove = logger.confirm(message = 'All plugin data will be lost. Are you sure you want to proceed %s?', default = default) remove = logger.confirm(message = 'All plugin data will be lost. Are you sure you want to proceed %s?', default = default)
@ -306,20 +307,20 @@ def uninstallPlugin(plugin):
pluginapi.plugins.disable(plugin) pluginapi.plugins.disable(plugin)
shutil.rmtree(pluginFolder) shutil.rmtree(pluginFolder)
logger.info('Uninstallation of %s complete.' % plugin) logger.info('Uninstallation of %s complete.' % plugin, terminal=True)
return True return True
else: else:
logger.info('Uninstallation cancelled.') logger.info('Uninstallation cancelled.')
except Exception as e: except Exception as e:
logger.error('Failed to uninstall plugin.', error = e) logger.error('Failed to uninstall plugin.', error = e, terminal=True)
return False return False
# command handlers # command handlers
def help(): def help():
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]', terminal=True)
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]', terminal=True)
def commandInstallPlugin(): def commandInstallPlugin():
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
@ -345,20 +346,20 @@ def commandInstallPlugin():
if pkobh is None: if pkobh is None:
# still nothing found, try searching repositories # still nothing found, try searching repositories
logger.info('Searching for public key in repositories...') logger.info('Searching for public key in repositories...', terminal=True)
try: try:
repos = getRepositories() repos = getRepositories()
distributors = list() distributors = list()
for repo, records in repos.items(): for repo, records in repos.items():
if pluginname in records: if pluginname in records:
logger.debug('Found %s in repository %s for plugin %s.' % (records[pluginname], repo, pluginname)) logger.debug('Found %s in repository %s for plugin %s.' % (records[pluginname], repo, pluginname), terminal=True)
distributors.append(records[pluginname]) distributors.append(records[pluginname])
if len(distributors) != 0: if len(distributors) != 0:
distributor = None distributor = None
if len(distributors) == 1: if len(distributors) == 1:
logger.info('Found distributor: %s' % distributors[0]) logger.info('Found distributor: %s' % distributors[0], terminal=True)
distributor = distributors[0] distributor = distributors[0]
else: else:
distributors_message = '' distributors_message = ''
@ -368,11 +369,11 @@ def commandInstallPlugin():
distributors_message += ' ' + logger.colors.bold + str(index) + ') ' + logger.colors.reset + str(dist) + '\n' distributors_message += ' ' + logger.colors.bold + str(index) + ') ' + logger.colors.reset + str(dist) + '\n'
index += 1 index += 1
logger.info((logger.colors.bold + 'Found distributors (%s):' + logger.colors.reset + '\n' + distributors_message) % len(distributors)) logger.info((logger.colors.bold + 'Found distributors (%s):' + logger.colors.reset + '\n' + distributors_message) % len(distributors), terminal=True)
valid = False valid = False
while not valid: while not valid:
choice = logger.readline('Select the number of the key to use, from 1 to %s, or press Ctrl+C to cancel:' % (index - 1)) choice = logger.readline('Select the number of the key to use, from 1 to %s, or press Ctrl+C to cancel:' % (index - 1), terminal=True)
try: try:
choice = int(choice) choice = int(choice)
@ -380,7 +381,7 @@ def commandInstallPlugin():
distributor = distributors[int(choice)] distributor = distributors[int(choice)]
valid = True valid = True
except KeyboardInterrupt: except KeyboardInterrupt:
logger.info('Installation cancelled.') logger.info('Installation cancelled.', terminal=True)
return True return True
except: except:
pass pass
@ -388,42 +389,42 @@ def commandInstallPlugin():
if not distributor is None: if not distributor is None:
pkobh = distributor pkobh = distributor
except Exception as e: except Exception as e:
logger.warn('Failed to lookup plugin in repositories.', timestamp = False) logger.warn('Failed to lookup plugin in repositories.', timestamp = False, terminal=True)
return True return True
if pkobh is None: if pkobh is None:
logger.error('No key for this plugin found in keystore or repositories, please specify.', timestamp = False) logger.error('No key for this plugin found in keystore or repositories, please specify.', timestamp = False, terminal=True)
return True return True
valid_hash = pluginapi.get_utils().validateHash(pkobh) valid_hash = stringvalidators.validate_hash(pkobh)
real_block = False real_block = False
valid_key = pluginapi.get_utils().validatePubKey(pkobh) valid_key = stringvalidators.validate_pub_key(pkobh)
real_key = False real_key = False
if valid_hash: if valid_hash:
real_block = Block.exists(pkobh) real_block = Block.exists(pkobh)
elif valid_key: elif valid_key:
real_key = pluginapi.get_utils().hasKey(pkobh) real_key = pkobh in pluginapi.get_core().listPeers()
blockhash = None blockhash = None
if valid_hash and not real_block: if valid_hash and not real_block:
logger.error('Block hash not found. Perhaps it has not been synced yet?', timestamp = False) logger.error('Block hash not found. Perhaps it has not been synced yet?', timestamp = False, terminal=True)
logger.debug('Is valid hash, but does not belong to a known block.') logger.debug('Is valid hash, but does not belong to a known block.', terminal=True)
return True return True
elif valid_hash and real_block: elif valid_hash and real_block:
blockhash = str(pkobh) blockhash = str(pkobh)
logger.debug('Using block %s...' % blockhash) logger.debug('Using block %s...' % blockhash, terminal=True)
installBlock(blockhash) installBlock(blockhash)
elif valid_key and not real_key: elif valid_key and not real_key:
logger.error('Public key not found. Try adding the node by address manually, if possible.', timestamp = False) logger.error('Public key not found. Try adding the node by address manually, if possible.', timestamp = False, terminal=True)
logger.debug('Is valid key, but the key is not a known one.') logger.debug('Is valid key, but the key is not a known one.', terminal=True)
elif valid_key and real_key: elif valid_key and real_key:
publickey = str(pkobh) publickey = str(pkobh)
logger.debug('Using public key %s...' % publickey) logger.debug('Using public key %s...' % publickey, terminal=True)
saveKey(pluginname, pkobh) saveKey(pluginname, pkobh)
@ -455,14 +456,14 @@ def commandInstallPlugin():
except Exception as e: except Exception as e:
pass pass
logger.warn('Only continue the installation if you are absolutely certain that you trust the plugin distributor. Public key of plugin distributor: %s' % publickey, timestamp = False) logger.warn('Only continue the installation if you are absolutely certain that you trust the plugin distributor. Public key of plugin distributor: %s' % publickey, timestamp = False, terminal=True)
logger.debug('Most recent block matching parameters is %s' % mostRecentVersionBlock) logger.debug('Most recent block matching parameters is %s' % mostRecentVersionBlock, terminal=True)
installBlock(mostRecentVersionBlock) installBlock(mostRecentVersionBlock)
else: else:
logger.error('Unknown data "%s"; must be public key or block hash.' % str(pkobh), timestamp = False) logger.error('Unknown data "%s"; must be public key or block hash.' % str(pkobh), timestamp = False, terminal=True)
return return
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin> [public key/block hash]', terminal=True)
return True return True
@ -470,12 +471,12 @@ def commandUninstallPlugin():
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
uninstallPlugin(sys.argv[2]) uninstallPlugin(sys.argv[2])
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin>') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin>', terminal=True)
return True return True
def commandSearchPlugin(): def commandSearchPlugin():
logger.info('This feature has not been created yet. Please check back later.') logger.info('This feature has not been created yet. Please check back later.', terminal=True)
return True return True
def commandAddRepository(): def commandAddRepository():
@ -484,7 +485,7 @@ def commandAddRepository():
blockhash = sys.argv[2] blockhash = sys.argv[2]
if pluginapi.get_utils().validateHash(blockhash): if stringvalidators.validate_hash(blockhash):
if Block.exists(blockhash): if Block.exists(blockhash):
try: try:
blockContent = json.loads(Block(blockhash, core = pluginapi.get_core()).getContent()) blockContent = json.loads(Block(blockhash, core = pluginapi.get_core()).getContent())
@ -492,25 +493,25 @@ def commandAddRepository():
pluginslist = dict() pluginslist = dict()
for pluginname, distributor in blockContent['plugins']: for pluginname, distributor in blockContent['plugins']:
if pluginapi.get_utils().validatePubKey(distributor): if stringvalidators.validate_pub_key(distributor):
pluginslist[pluginname] = distributor pluginslist[pluginname] = distributor
logger.debug('Found %s records in repository.' % len(pluginslist)) logger.debug('Found %s records in repository.' % len(pluginslist), terminal=True)
if len(pluginslist) != 0: if len(pluginslist) != 0:
addRepository(blockhash, pluginslist) addRepository(blockhash, pluginslist)
logger.info('Successfully added repository.') logger.info('Successfully added repository.', terminal=True)
else: else:
logger.error('Repository contains no records, not importing.', timestamp = False) logger.error('Repository contains no records, not importing.', timestamp = False, terminal=True)
except Exception as e: except Exception as e:
logger.error('Failed to parse block.', error = e) logger.error('Failed to parse block.', error = e, terminal=True)
else: else:
logger.error('Block hash not found. Perhaps it has not been synced yet?', timestamp = False) logger.error('Block hash not found. Perhaps it has not been synced yet?', timestamp = False, terminal=True)
logger.debug('Is valid hash, but does not belong to a known block.') logger.debug('Is valid hash, but does not belong to a known block.')
else: else:
logger.error('Unknown data "%s"; must be block hash.' % str(pkobh), timestamp = False) logger.error('Unknown data "%s"; must be block hash.' % str(pkobh), timestamp = False, terminal=True)
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [block hash]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [block hash]', terminal=True)
return True return True
@ -520,19 +521,19 @@ def commandRemoveRepository():
blockhash = sys.argv[2] blockhash = sys.argv[2]
if pluginapi.get_utils().validateHash(blockhash): if stringvalidators.validate_hash(blockhash):
if blockhash in getRepositories(): if blockhash in getRepositories():
try: try:
removeRepository(blockhash) removeRepository(blockhash)
logger.info('Successfully removed repository.') logger.info('Successfully removed repository.', terminal=True)
except Exception as e: except Exception as e:
logger.error('Failed to parse block.', error = e) logger.error('Failed to parse block.', error = e, terminal=True)
else: else:
logger.error('Repository has not been imported, nothing to remove.', timestamp = False) logger.error('Repository has not been imported, nothing to remove.', timestamp = False, terminal=True)
else: else:
logger.error('Unknown data "%s"; must be block hash.' % str(pkobh)) logger.error('Unknown data "%s"; must be block hash.' % str(pkobh), terminal=True)
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [block hash]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [block hash]', terminal=True)
return True return True
@ -545,11 +546,11 @@ def commandPublishPlugin():
if os.path.exists(pluginfolder) and not os.path.isfile(pluginfolder): if os.path.exists(pluginfolder) and not os.path.isfile(pluginfolder):
block = pluginToBlock(pluginname) block = pluginToBlock(pluginname)
logger.info('Plugin saved in block %s.' % block) logger.info('Plugin saved in block %s.' % block, terminal=True)
else: else:
logger.error('Plugin %s does not exist.' % pluginname, timestamp = False) logger.error('Plugin %s does not exist.' % pluginname, timestamp = False, terminal=True)
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin>') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' <plugin>', terminal=True)
def commandCreateRepository(): def commandCreateRepository():
if len(sys.argv) >= 3: if len(sys.argv) >= 3:
@ -573,22 +574,22 @@ def commandCreateRepository():
if distributor is None: if distributor is None:
distributor = getKey(pluginname) distributor = getKey(pluginname)
if distributor is None: if distributor is None:
logger.error('No distributor key was found for the plugin %s.' % pluginname, timestamp = False) logger.error('No distributor key was found for the plugin %s.' % pluginname, timestamp = False, terminal=True)
success = False success = False
plugins.append([pluginname, distributor]) plugins.append([pluginname, distributor])
if not success: if not success:
logger.error('Please correct the above errors, then recreate the repository.') logger.error('Please correct the above errors, then recreate the repository.', terminal=True)
return True return True
blockhash = createRepository(plugins) blockhash = createRepository(plugins)
if not blockhash is None: if not blockhash is None:
logger.info('Successfully created repository. Execute the following command to add the repository:\n ' + logger.colors.underline + '%s --add-repository %s' % (script, blockhash)) logger.info('Successfully created repository. Execute the following command to add the repository:\n ' + logger.colors.underline + '%s --add-repository %s' % (script, blockhash), terminal=True)
else: else:
logger.error('Failed to create repository, an unknown error occurred.') logger.error('Failed to create repository, an unknown error occurred.', terminal=True)
else: else:
logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [plugins...]') logger.info(sys.argv[0] + ' ' + sys.argv[1] + ' [plugins...]', terminal=True)
return True return True
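Taken together, the pluginmanager changes follow one pattern: validation and block-import helpers move from the shared utils object to plain functions in `onionrutils`, and anything meant for the user gains `terminal=True`. Condensed into a single hypothetical helper (the old calls are quoted from the left column above; the function itself is not part of the plugin):

```python
# Before: helpers reached through the shared utils object
#   pluginapi.get_utils().validateHash(blockhash)
#   pluginapi.get_utils().validatePubKey(distributor)
#   pluginapi.get_utils().importNewBlocks()
# After: plain functions from onionrutils, plus terminal=True on user-facing logging.
import logger  # Onionr's logging module, as imported at the top of the plugin
from onionrutils import importnewblocks, stringvalidators

def add_block_if_valid(blockhash, pluginapi):
    """Hypothetical helper condensing the migrated pattern; not a plugin function."""
    if not stringvalidators.validate_hash(blockhash):
        logger.error('Unknown data "%s"; must be block hash.' % blockhash, terminal=True)
        return False
    importnewblocks.import_new_blocks(pluginapi.get_core())
    return True
```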
@ -1,3 +1,22 @@
'''
Onionr - Private P2P Communication
Load the user's inbox and return it as a list
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import onionrblockapi import onionrblockapi
def load_inbox(myCore): def load_inbox(myCore):
inbox_list = [] inbox_list = []
@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
HTTP endpoints for mail plugin. HTTP endpoints for mail plugin.
''' '''
@ -21,6 +21,7 @@ import sys, os, json
from flask import Response, request, redirect, Blueprint, abort from flask import Response, request, redirect, Blueprint, abort
import core import core
from onionrusers import contactmanager from onionrusers import contactmanager
from onionrutils import stringvalidators
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__))) sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)))
import loadinbox, sentboxdb import loadinbox, sentboxdb
@ -34,7 +35,7 @@ def mail_ping():
@flask_blueprint.route('/mail/deletemsg/<block>', methods=['POST']) @flask_blueprint.route('/mail/deletemsg/<block>', methods=['POST'])
def mail_delete(block): def mail_delete(block):
if not c._utils.validateHash(block): if not stringvalidators.validate_hash(block):
abort(504) abort(504)
existing = kv.get('deleted_mail') existing = kv.get('deleted_mail')
if existing is None: if existing is None:
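The mail endpoint keeps the same validate-then-act shape after the refactor: the path parameter is checked with the shared validator before the key-value store is touched. A minimal sketch of that pattern with a hypothetical blueprint (the Flask calls are real; the names and body are placeholders, not the plugin's actual code):

```python
# Minimal sketch of the validate-then-act endpoint pattern shown above.
from flask import Blueprint, abort

from onionrutils import stringvalidators

demo_blueprint = Blueprint('mail_demo', __name__)

@demo_blueprint.route('/mail/deletemsg/<block>', methods=['POST'])
def delete_message(block):
    if not stringvalidators.validate_hash(block):
        abort(504)  # same status code the plugin returns for malformed hashes
    # ...record the hash in the deleted_mail key-value entry, as above...
    return 'success'
```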
@ -1,5 +1,5 @@
''' '''
Onionr - P2P Anonymous Storage Network Onionr - Private P2P Communication
This default plugin handles private messages in an email like fashion This default plugin handles private messages in an email like fashion
''' '''
@ -23,6 +23,7 @@ import logger, config, threading, time, datetime
from onionrblockapi import Block from onionrblockapi import Block
import onionrexceptions import onionrexceptions
from onionrusers import onionrusers from onionrusers import onionrusers
from onionrutils import stringvalidators, escapeansi, bytesconverter
import locale, sys, os, json import locale, sys, os, json
locale.setlocale(locale.LC_ALL, '') locale.setlocale(locale.LC_ALL, '')
@ -73,7 +74,7 @@ class OnionrMail:
blockCount = 0 blockCount = 0
pmBlockMap = {} pmBlockMap = {}
pmBlocks = {} pmBlocks = {}
logger.info('Decrypting messages...') logger.info('Decrypting messages...', terminal=True)
choice = '' choice = ''
displayList = [] displayList = []
subject = '' subject = ''
@ -108,7 +109,7 @@ class OnionrMail:
displayList.append('%s. %s - %s - <%s>: %s' % (blockCount, blockDate, senderDisplay[:12], subject[:10], blockHash)) displayList.append('%s. %s - %s - <%s>: %s' % (blockCount, blockDate, senderDisplay[:12], subject[:10], blockHash))
while choice not in ('-q', 'q', 'quit'): while choice not in ('-q', 'q', 'quit'):
for i in displayList: for i in displayList:
logger.info(i) logger.info(i, terminal=True)
try: try:
choice = logger.readline('Enter a block number, -r to refresh, or -q to stop: ').strip().lower() choice = logger.readline('Enter a block number, -r to refresh, or -q to stop: ').strip().lower()
except (EOFError, KeyboardInterrupt): except (EOFError, KeyboardInterrupt):
@ -135,27 +136,27 @@ class OnionrMail:
else: else:
cancel = '' cancel = ''
readBlock.verifySig() readBlock.verifySig()
senderDisplay = self.myCore._utils.bytesToStr(readBlock.signer) senderDisplay = bytesconverter.bytes_to_str(readBlock.signer)
if len(senderDisplay.strip()) == 0: if len(senderDisplay.strip()) == 0:
senderDisplay = 'Anonymous' senderDisplay = 'Anonymous'
logger.info('Message received from %s' % (senderDisplay,)) logger.info('Message received from %s' % (senderDisplay,), terminal=True)
logger.info('Valid signature: %s' % readBlock.validSig) logger.info('Valid signature: %s' % readBlock.validSig, terminal=True)
if not readBlock.validSig: if not readBlock.validSig:
logger.warn('This message has an INVALID/NO signature. ANYONE could have sent this message.') logger.warn('This message has an INVALID/NO signature. ANYONE could have sent this message.', terminal=True)
cancel = logger.readline('Press enter to continue to message, or -q to not open the message (recommended).') cancel = logger.readline('Press enter to continue to message, or -q to not open the message (recommended).')
print('') print('')
if cancel != '-q': if cancel != '-q':
try: try:
print(draw_border(self.myCore._utils.escapeAnsi(readBlock.bcontent.decode().strip()))) print(draw_border(escapeansi.escape_ANSI(readBlock.bcontent.decode().strip())))
except ValueError: except ValueError:
logger.warn('Error presenting message. This is usually due to a malformed or blank message.') logger.warn('Error presenting message. This is usually due to a malformed or blank message.', terminal=True)
pass pass
if readBlock.validSig: if readBlock.validSig:
reply = logger.readline("Press enter to continue, or enter %s to reply" % ("-r",)) reply = logger.readline("Press enter to continue, or enter %s to reply" % ("-r",))
print('') print('')
if reply == "-r": if reply == "-r":
self.draft_message(self.myCore._utils.bytesToStr(readBlock.signer,)) self.draft_message(bytesconverter.bytes_to_str(readBlock.signer,))
else: else:
logger.readline("Press enter to continue") logger.readline("Press enter to continue")
print('') print('')
@ -168,7 +169,7 @@ class OnionrMail:
entering = True entering = True
while entering: while entering:
self.get_sent_list() self.get_sent_list()
logger.info('Enter a block number or -q to return') logger.info('Enter a block number or -q to return', terminal=True)
try: try:
choice = input('>') choice = input('>')
except (EOFError, KeyboardInterrupt) as e: except (EOFError, KeyboardInterrupt) as e:
@ -182,11 +183,11 @@ class OnionrMail:
try: try:
self.sentboxList[int(choice)] self.sentboxList[int(choice)]
except (IndexError, ValueError) as e: except (IndexError, ValueError) as e:
logger.warn('Invalid block.') logger.warn('Invalid block.', terminal=True)
else: else:
logger.info('Sent to: ' + self.sentMessages[self.sentboxList[int(choice)]][1]) logger.info('Sent to: ' + self.sentMessages[self.sentboxList[int(choice)]][1], terminal=True)
# Print ansi escaped sent message # Print ansi escaped sent message
logger.info(self.myCore._utils.escapeAnsi(self.sentMessages[self.sentboxList[int(choice)]][0])) logger.info(escapeansi.escape_ANSI(self.sentMessages[self.sentboxList[int(choice)]][0]), terminal=True)
input('Press enter to continue...') input('Press enter to continue...')
finally: finally:
if choice == '-q': if choice == '-q':
@ -199,9 +200,9 @@ class OnionrMail:
self.sentMessages = {} self.sentMessages = {}
for i in self.sentboxTools.listSent(): for i in self.sentboxTools.listSent():
self.sentboxList.append(i['hash']) self.sentboxList.append(i['hash'])
self.sentMessages[i['hash']] = (self.myCore._utils.bytesToStr(i['message']), i['peer'], i['subject']) self.sentMessages[i['hash']] = (bytesconverter.bytes_to_str(i['message']), i['peer'], i['subject'])
if display: if display:
logger.info('%s. %s - %s - (%s) - %s' % (count, i['hash'], i['peer'][:12], i['subject'], i['date'])) logger.info('%s. %s - %s - (%s) - %s' % (count, i['hash'], i['peer'][:12], i['subject'], i['date']), terminal=True)
count += 1 count += 1
return json.dumps(self.sentMessages) return json.dumps(self.sentMessages)
@ -217,10 +218,10 @@ class OnionrMail:
recip = logger.readline('Enter peer address, or -q to stop:').strip() recip = logger.readline('Enter peer address, or -q to stop:').strip()
if recip in ('-q', 'q'): if recip in ('-q', 'q'):
raise EOFError raise EOFError
if not self.myCore._utils.validatePubKey(recip): if not stringvalidators.validate_pub_key(recip):
raise onionrexceptions.InvalidPubkey('Must be a valid ed25519 base32 encoded public key') raise onionrexceptions.InvalidPubkey('Must be a valid ed25519 base32 encoded public key')
except onionrexceptions.InvalidPubkey: except onionrexceptions.InvalidPubkey:
logger.warn('Invalid public key') logger.warn('Invalid public key', terminal=True)
except (KeyboardInterrupt, EOFError): except (KeyboardInterrupt, EOFError):
entering = False entering = False
else: else:
@ -234,7 +235,7 @@ class OnionrMail:
pass pass
cancelEnter = False cancelEnter = False
logger.info('Enter your message, stop by entering -q on a new line. -c to cancel') logger.info('Enter your message, stop by entering -q on a new line. -c to cancel', terminal=True)
while newLine != '-q': while newLine != '-q':
try: try:
newLine = input() newLine = input()
@ -249,7 +250,7 @@ class OnionrMail:
message += newLine message += newLine
if not cancelEnter: if not cancelEnter:
logger.info('Inserting encrypted message as Onionr block....') logger.info('Inserting encrypted message as Onionr block....', terminal=True)
blockID = self.myCore.insertBlock(message, header='pm', encryptType='asym', asymPeer=recip, sign=self.doSigs, meta={'subject': subject}) blockID = self.myCore.insertBlock(message, header='pm', encryptType='asym', asymPeer=recip, sign=self.doSigs, meta={'subject': subject})
@ -261,16 +262,16 @@ class OnionrMail:
while True: while True:
sigMsg = 'Message Signing: %s' sigMsg = 'Message Signing: %s'
logger.info(self.strings.programTag + '\n\nUser ID: ' + self.myCore._crypto.pubKey) logger.info(self.strings.programTag + '\n\nUser ID: ' + self.myCore._crypto.pubKey, terminal=True)
if self.doSigs: if self.doSigs:
sigMsg = sigMsg % ('enabled',) sigMsg = sigMsg % ('enabled',)
else: else:
sigMsg = sigMsg % ('disabled (Your messages cannot be trusted)',) sigMsg = sigMsg % ('disabled (Your messages cannot be trusted)',)
if self.doSigs: if self.doSigs:
logger.info(sigMsg) logger.info(sigMsg, terminal=True)
else: else:
logger.warn(sigMsg) logger.warn(sigMsg, terminal=True)
logger.info(self.strings.mainMenu.title()) # print out main menu logger.info(self.strings.mainMenu.title(), terminal=True) # print out main menu
try: try:
choice = logger.readline('Enter 1-%s:\n' % (len(self.strings.mainMenuChoices))).lower().strip() choice = logger.readline('Enter 1-%s:\n' % (len(self.strings.mainMenuChoices))).lower().strip()
except (KeyboardInterrupt, EOFError): except (KeyboardInterrupt, EOFError):
@ -285,12 +286,12 @@ class OnionrMail:
elif choice in (self.strings.mainMenuChoices[3], '4'): elif choice in (self.strings.mainMenuChoices[3], '4'):
self.toggle_signing() self.toggle_signing()
elif choice in (self.strings.mainMenuChoices[4], '5'): elif choice in (self.strings.mainMenuChoices[4], '5'):
logger.info('Goodbye.') logger.info('Goodbye.', terminal=True)
break break
elif choice == '': elif choice == '':
pass pass
else: else:
logger.warn('Invalid choice.') logger.warn('Invalid choice.', terminal=True)
return return
def add_deleted(keyStore, bHash): def add_deleted(keyStore, bHash):
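Several of the replacements above lean on `onionrutils.bytesconverter.bytes_to_str`, which lets the mail UI treat a block signer the same whether it arrives as bytes or str. A stand-in with that behaviour might look like this (an assumption from the call sites, not the real module):

```python
# Hypothetical bytes_to_str; the real onionrutils helper may handle more cases.
def bytes_to_str(data, encoding='utf-8'):
    if isinstance(data, bytes):
        return data.decode(encoding)
    return str(data)
```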
@ -1,5 +1,5 @@
''' '''
Onionr - P2P Microblogging Platform & Social network Onionr - Private P2P Communication
This file handles the sentbox for the mail plugin This file handles the sentbox for the mail plugin
''' '''
@ -19,6 +19,7 @@
''' '''
import sqlite3, os import sqlite3, os
import core import core
from onionrutils import epoch
class SentBox: class SentBox:
def __init__(self, mycore): def __init__(self, mycore):
assert isinstance(mycore, core.Core) assert isinstance(mycore, core.Core)
@ -60,7 +61,7 @@ class SentBox:
def addToSent(self, blockID, peer, message, subject=''): def addToSent(self, blockID, peer, message, subject=''):
self.connect() self.connect()
args = (blockID, peer, message, subject, self.core._utils.getEpoch()) args = (blockID, peer, message, subject, epoch.get_epoch())
self.cursor.execute('INSERT INTO sent VALUES(?, ?, ?, ?, ?)', args) self.cursor.execute('INSERT INTO sent VALUES(?, ?, ?, ?, ?)', args)
self.conn.commit() self.conn.commit()
self.close() self.close()
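The sentbox change is a single line: the row timestamp now comes from the shared `epoch` helper instead of a Core utils method. Pulled out of the class, the insert looks roughly like this (the five-column table layout is assumed from the INSERT statement above):

```python
# Rough standalone version of addToSent; layout assumed from the statement above.
import sqlite3

from onionrutils import epoch

def add_to_sent(db_path, block_id, peer, message, subject=''):
    conn = sqlite3.connect(db_path)
    args = (block_id, peer, message, subject, epoch.get_epoch())
    conn.execute('INSERT INTO sent VALUES(?, ?, ?, ?, ?)', args)
    conn.commit()
    conn.close()
```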
@ -1,12 +1,14 @@
friendList = [] friendList = {}
convoListElement = document.getElementsByClassName('conversationList')[0] convoListElement = document.getElementsByClassName('conversationList')[0]
function createConvoList(){ function createConvoList(){
for (var x = 0; x < friendList.length; x++){ console.log(friendList)
for (friend in friendList){
var convoEntry = document.createElement('div') var convoEntry = document.createElement('div')
convoEntry.classList.add('convoEntry') convoEntry.classList.add('convoEntry')
convoEntry.setAttribute('data-pubkey', friendList[x]) convoEntry.setAttribute('data-pubkey', friend)
convoEntry.innerText = friendList[x] convoEntry.innerText = friendList[friend]
convoListElement.append(convoEntry) convoListElement.append(convoEntry)
} }
} }
@ -20,7 +22,7 @@ fetch('/friends/list', {
var keys = [] var keys = []
for(var k in resp) keys.push(k) for(var k in resp) keys.push(k)
for (var i = 0; i < keys.length; i++){ for (var i = 0; i < keys.length; i++){
friendList.push(keys[i]) friendList[keys[i]] = resp[keys[i]]['name']
} }
createConvoList() createConvoList()
}) })
@ -113,4 +113,8 @@ input{
color: black; color: black;
font-size: 1.5em; font-size: 1.5em;
width: 10%; width: 10%;
}
.content{
min-height: 1000px;
} }
@ -58,9 +58,9 @@ function openReply(bHash, quote, subject){
// Add quoted reply // Add quoted reply
var splitQuotes = quote.split('\n') var splitQuotes = quote.split('\n')
for (var x = 0; x < splitQuotes.length; x++){ for (var x = 0; x < splitQuotes.length; x++){
splitQuotes[x] = '>' + splitQuotes[x] splitQuotes[x] = '> ' + splitQuotes[x]
} }
quote = '\n' + splitQuotes.join('\n') quote = '\n' + key.substring(0, 12) + ' wrote:' + '\n' + splitQuotes.join('\n')
document.getElementById('draftText').value = quote document.getElementById('draftText').value = quote
setActiveTab('send message') setActiveTab('send message')
} }
@ -77,7 +77,7 @@ function openThread(bHash, sender, date, sigBool, pubkey, subjectLine){
var sigMsg = 'signature' var sigMsg = 'signature'
// show add unknown contact button if peer is unknown but still has pubkey // show add unknown contact button if peer is unknown but still has pubkey
if (sender == pubkey){ if (sender === pubkey && sender !== myPub && sigBool){
addUnknownContact.style.display = 'inline' addUnknownContact.style.display = 'inline'
} }