
Hardmap guide


Load balancing for Onionspray using only hardmap consists mainly of replicating the same installation across many projects or systems.

This document covers some concrete hardmap setups, as well as their efficacy.

Using a configuration manager

To improve your workflow, you can integrate your favourite configuration manager to deploy Onionspray projects and onionsites across all your machines.


As mentioned in the introduction and in the topologies document, Onionspray instances can be allocated to multiple CPUs and/or multiple servers.

Besides being relatively easy to configure, load distribution between instances with this simple replication scheme follows an unpredictable pattern, as it depends on a random internal timer set to an interval between 60 and 120 minutes to re-publish descriptors (or on an irregular event that requires republishing). According to the specification,

every time a hidden service publishes its descriptor, it also sets up a timer for a random time between 60 minutes and 120 minutes in the future. When the timer triggers, the hidden service needs to publish its descriptor again to the responsible HSDirs for that time period.

Republishing happens mostly because descriptors expire in the Tor network after around 3 hours, so the service keeps re-uploading them at a random time every 60-120 minutes to stay ahead of that, and also to account for relay restarts.

Now keep in mind that there are several reasons why a service would re-upload a descriptor before that 60-120 minute timeframe¹:

  1. PoW seed change/expiration, which happens between 105 and 120 minutes, and may trigger a republish every 5 minutes if the effort changed noticeably.
  2. Introduction circuit collapse, which can happen at any time.
  3. Introduction point rotation (based on the number of introductions, or on a random time between 18 and 24 hours).
  4. Hashring changes: if the directory information changes and the hashring changed as a result, there is a republish (with every new Tor network consensus).
  5. SRV (shared random value) changes in the consensus.

All these events and timers influence the republishing pattern, which is why it's so unpredictable.

Even if this pattern is irregular, it still gives you failover in case one or more instances go down: the service survives the loss of up to N - 1 instances, where N is the total number of instances.

Also, during high load on an onionsite with Proof of Work (PoW) enabled and kicking in, all involved tor instances are expected to republish descriptors more often (on the scale of a few minutes rather than hours), increasing the effectiveness of hardmap-based load balancing.

Use softmaps to have a more evenly distributed load balancing

If you really want a more even and regular distribution of load across instances, use softmaps instead of hardmaps.

Replication strategies

CPU-based load balancing

With different Onion Services

It's expected that the Operating System will automatically distribute Tor and NGINX instances between the available CPUs when running multiple Onionspray projects on the same machine.

We can take advantage of this behavior by distributing our Onion Services across different Onionspray projects, instead of running a single project with all our .onions.

You could have this setup:

user@onionspray:/home/user/onionspray$ grep hardmap *.conf
project1.conf:hardmap obep6i4wljp2atxkbjkeq7ycvhqa7eabs4rbocfut54wxer44bkfhqyd
project1.conf:hardmap 3g4ul3iy55bkbjlesn26nj2oocsgot2mp63cintnyk4jshbhqtpofpid
project1.conf:hardmap vnbuts3d3vfjjrgknxkma2wuyifwgs27qfkasm2h45zap7msamxwload
project2.conf:hardmap 3ln46qs2g7wzv75kjjc3r4ia5jv2mrojjdgby2vcdtsr3cswhg3l7xqd
project2.conf:hardmap 66yaevkgze2sxcetn45ynkp76rpgsuqg22hw2bj33qpc5plklmay2cyd
project2.conf:hardmap odxuyg67aqpedgjt5qxexnuixve3ule7o45yjubdbd3onqvlmzwzikad
project3.conf:hardmap gwi5cpyyci4xlvgczchcyq353hukjdsgvthtaebqyqlmftds7ioprwyd
project3.conf:hardmap 7zsdpgioajonm556fqgjts6huqcr3dc7ofvigl7sofzggnp6jq5ie5yd
project3.conf:hardmap zppyz4ta6en6x5z25zy5fztvuds4khrwcuvdveumhikzhbzpwsm3u7ad

... which distributes services across three distinct projects, meaning three independent tor daemons and NGINX instances. This gives better load balancing than having everything in the same project, like this:

user@onionspray:/home/user/onionspray$ grep hardmap *.conf
project1.conf:hardmap obep6i4wljp2atxkbjkeq7ycvhqa7eabs4rbocfut54wxer44bkfhqyd
project1.conf:hardmap 3g4ul3iy55bkbjlesn26nj2oocsgot2mp63cintnyk4jshbhqtpofpid
project1.conf:hardmap vnbuts3d3vfjjrgknxkma2wuyifwgs27qfkasm2h45zap7msamxwload
project1.conf:hardmap 3ln46qs2g7wzv75kjjc3r4ia5jv2mrojjdgby2vcdtsr3cswhg3l7xqd
project1.conf:hardmap 66yaevkgze2sxcetn45ynkp76rpgsuqg22hw2bj33qpc5plklmay2cyd
project1.conf:hardmap odxuyg67aqpedgjt5qxexnuixve3ule7o45yjubdbd3onqvlmzwzikad
project1.conf:hardmap gwi5cpyyci4xlvgczchcyq353hukjdsgvthtaebqyqlmftds7ioprwyd
project1.conf:hardmap 7zsdpgioajonm556fqgjts6huqcr3dc7ofvigl7sofzggnp6jq5ie5yd
project1.conf:hardmap zppyz4ta6en6x5z25zy5fztvuds4khrwcuvdveumhikzhbzpwsm3u7ad

With the same Onion Service

We could also put the same Onion Service in more than one project, for example using the wikipedia.tconf sample configuration. This example is given for educational purposes only, as it is not very effective (see the discussion in the topologies document).

CPU-based load balancing for a single service has limited effectiveness

Be aware that running the same Onion Service on multiple CPUs of the same machine is not very effective: since descriptors are not republished very often, you may end up with an alternating pattern where a single CPU is used more than the others during each publishing period.

To do so, first copy the sample as wikipedia0.tconf:

cp examples/wikipedia.tconf wikipedia0.tconf

Then edit wikipedia0.tconf, changing the line set project wikipedia to set project wikipedia0, and run the configuration procedure:

./onionspray config wikipedia0.tconf

Now replicate the configuration and project folder:

cp wikipedia0.conf wikipedia1.conf
cp -a projects/wikipedia0 projects/wikipedia1

Edit wikipedia1.conf, changing the line set project wikipedia0 to set project wikipedia1, then reconfigure it:

./onionspray config wikipedia1.conf

Start both instances and check that they're running:

./onionspray start -a
./onionspray ps -a

You should get something like this:

user@onionspray:/home/user/onionspray$ ./onionspray start -a
:::: start wikipedia0 ::::
:::: start wikipedia1 ::::
user@onionspray:/home/user/onionspray$ ./onionspray ps -a
:::: onionspray processes ::::
root         406  0.0  0.0  11336  1348 ?        Ss   14:00   0:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -g daemon on; master_process on;
user         522  5.6  3.6 224944 72904 ?        Sl   14:00   0:11 tor -f /home/user/onionspray/projects/wikipedia0/tor.conf
user         527  0.0  0.3 289076  6124 ?        Ss   14:00   0:00 nginx: master process nginx -c /home/user/onionspray/projects/wikipedia0/nginx.conf
user         528  0.0  0.3 290300  7668 ?        S    14:00   0:00 nginx: worker process
user         529  0.0  0.3 290300  7668 ?        S    14:00   0:00 nginx: worker process
user         530  0.0  0.3 288692  6420 ?        S    14:00   0:00 nginx: cache manager process
user         541  6.3  3.3 229412 66760 ?        Sl   14:00   0:12 tor -f /home/user/onionspray/projects/wikipedia1/tor.conf
user         546  0.0  0.3 289076  6208 ?        Ss   14:00   0:00 nginx: master process nginx -c /home/user/onionspray/projects/wikipedia1/nginx.conf
user         547  0.0  0.3 290300  7684 ?        S    14:00   0:00 nginx: worker process
user         548  0.0  0.3 290300  7684 ?        S    14:00   0:00 nginx: worker process
user         549  0.0  0.3 288692  6444 ?        S    14:00   0:00 nginx: cache manager process
user         569  0.0  0.0   2576  1612 pts/0    S+   14:00   0:00 /bin/sh ./onionspray ps -a

Now check that they're running the same set of Onion Services:

user@onionspray:/home/user/onionspray$ ./onionspray maps -a
:::: maps wikipedia0 ::::
3g4ul3iy55bkbjlesn26nj2oocsgot2mp63cintnyk4jshbhqtpofpid.onion wikipedia0 hardmap
vnbuts3d3vfjjrgknxkma2wuyifwgs27qfkasm2h45zap7msamxwload.onion wikipedia0 hardmap
3ln46qs2g7wzv75kjjc3r4ia5jv2mrojjdgby2vcdtsr3cswhg3l7xqd.onion wikipedia0 hardmap
66yaevkgze2sxcetn45ynkp76rpgsuqg22hw2bj33qpc5plklmay2cyd.onion wikipedia0 hardmap
odxuyg67aqpedgjt5qxexnuixve3ule7o45yjubdbd3onqvlmzwzikad.onion wikipedia0 hardmap
gwi5cpyyci4xlvgczchcyq353hukjdsgvthtaebqyqlmftds7ioprwyd.onion wikipedia0 hardmap
7zsdpgioajonm556fqgjts6huqcr3dc7ofvigl7sofzggnp6jq5ie5yd.onion wikipedia0 hardmap
zppyz4ta6en6x5z25zy5fztvuds4khrwcuvdveumhikzhbzpwsm3u7ad.onion wikipedia0 hardmap
p5ynfqzanoamhsmhpl4uz467bd2u4bxabdxaikp7mozyorlpsc6ijnid.onion wikipedia0 hardmap
n6o5h5oamgfkgwcctgwkh5iozdxr4lb6omf5e2ifj2aexo6ipqj36gad.onion wikipedia0 hardmap
edfcbbl6jsu7ls7ldjblqsq4wn7jcepmyndaoflultnyovp5lupdwsad.onion wikipedia0 hardmap
wbi6oaqbsnatevl2gbguq4lbxrqzio7mxwxf7rnabcpy7zdm6kxlcjid.onion wikipedia0 hardmap
:::: maps wikipedia1 ::::
3g4ul3iy55bkbjlesn26nj2oocsgot2mp63cintnyk4jshbhqtpofpid.onion wikipedia1 hardmap
vnbuts3d3vfjjrgknxkma2wuyifwgs27qfkasm2h45zap7msamxwload.onion wikipedia1 hardmap
3ln46qs2g7wzv75kjjc3r4ia5jv2mrojjdgby2vcdtsr3cswhg3l7xqd.onion wikipedia1 hardmap
66yaevkgze2sxcetn45ynkp76rpgsuqg22hw2bj33qpc5plklmay2cyd.onion wikipedia1 hardmap
odxuyg67aqpedgjt5qxexnuixve3ule7o45yjubdbd3onqvlmzwzikad.onion wikipedia1 hardmap
gwi5cpyyci4xlvgczchcyq353hukjdsgvthtaebqyqlmftds7ioprwyd.onion wikipedia1 hardmap
7zsdpgioajonm556fqgjts6huqcr3dc7ofvigl7sofzggnp6jq5ie5yd.onion wikipedia1 hardmap
zppyz4ta6en6x5z25zy5fztvuds4khrwcuvdveumhikzhbzpwsm3u7ad.onion wikipedia1 hardmap
p5ynfqzanoamhsmhpl4uz467bd2u4bxabdxaikp7mozyorlpsc6ijnid.onion wikipedia1 hardmap
n6o5h5oamgfkgwcctgwkh5iozdxr4lb6omf5e2ifj2aexo6ipqj36gad.onion wikipedia1 hardmap
edfcbbl6jsu7ls7ldjblqsq4wn7jcepmyndaoflultnyovp5lupdwsad.onion wikipedia1 hardmap
wbi6oaqbsnatevl2gbguq4lbxrqzio7mxwxf7rnabcpy7zdm6kxlcjid.onion wikipedia1 hardmap

Whenever you access one of these Onion Services, the Tor network serves a descriptor published by one of these instances, and thus that instance will be used.

You can replicate all your existing projects the same way: just copy the project files and the config, and update the project name. But remember that replicating services across CPUs has limited effect, as mentioned earlier.

Server-based load balancing

There are basically two options for replicating your existing Onionspray projects to one or more additional machines:

  1. Recommended: syncing configuration and keys.
  2. Not recommended: syncing the whole installation.

Installation path and user

It's advised to use the same installation path and user on all machines. This ensures uniformity in your setup and makes it easier to manage.

Remote access

Both methods assume the main machine can connect to the other machines via SSH, preferably with key-based authentication.

Scheduled restarts

Given the unpredictable descriptor republishing pattern, which depends on either a random internal timer between 60 and 120 minutes or on other events that require a republish, it's advised to start Onionspray on each server at a different time, to decrease the chances that they'll follow a similar descriptor republishing pattern.

If unsure about the interval, try to space the startups across your "cluster" uniformly. Example: if you have two servers, space startups by 90 minutes, which is halfway between 60 and 120 minutes, and so on.
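As a minimal sketch, the staggered startup can be scripted; the hostnames, install path, and loop below are illustrative, and the remote commands are left commented out so the script can be dry-run first:

```shell
#!/bin/sh
# Staggered-startup sketch (hypothetical hosts and install path).
# 90 minutes is the midpoint of the 60-120 minute republish window.
STAGGER=$((90 * 60))   # spacing between startups, in seconds

for host in machine1 machine2; do
  echo "starting Onionspray on $host"
  # ssh "user@$host" 'cd /path/to/onionspray && ./onionspray start -a'
  # sleep "$STAGGER"   # wait before starting the next server
done
```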

You can also consider restarting Onionspray instances now and then, at different days and times, to increase this randomness.
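Such scheduled restarts could be driven by cron, with each machine using a different day and time; the entry below is a hypothetical sketch (project name and path are illustrative) using the bounce command shown later in this guide:

```
# hypothetical crontab entry on one machine: restart myproject early on
# Sunday; pick a different weekday/hour per machine to stay out of phase
17 4 * * 0  cd /path/to/onionspray && ./onionspray bounce myproject
```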

Recommended

Start by copying the configuration and Onion Service keys from a project in machine1 to machine2:

user@machine1$ scp myproject.conf user@machine2:/path/to/onionspray/
user@machine1$ scp secrets/some-onion-address.*key user@machine2:/path/to/onionspray/secrets/

Then configure the project on machine2:

user@machine2$ ./onionspray config myproject.conf

The third step consists of copying the existing HTTPS keys and certificates:

user@machine1$ scp projects/myproject/ssl/* user@machine2:/path/to/onionspray/projects/myproject/ssl/

Finally, start the project on machine2:

user@machine2$ ./onionspray bounce myproject

You can automate this task by creating a script that replicates some or all of your projects across all machines, taking advantage of the onionspray-workers.conf file, which can be used to list all your remote machines (one per line).
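A dry-run sketch of such a script might look like this; the project name, paths, and hostnames are hypothetical, and the commands are only printed, so you can inspect them before swapping echo for real execution:

```shell
#!/bin/sh
# Dry-run replication sketch: print, for each host listed in a workers
# file (one hostname per line), the commands that would push a project.
push_project() {
  project="$1"; workers="$2"
  while read -r host; do
    echo "scp $project.conf user@$host:/path/to/onionspray/"
    echo "scp secrets/*.key user@$host:/path/to/onionspray/secrets/"
    echo "ssh user@$host './onionspray config $project.conf && ./onionspray bounce $project'"
  done < "$workers"
}

# demo with a hypothetical workers list
printf 'machine2\nmachine3\n' > /tmp/onionspray-workers.conf
push_project myproject /tmp/onionspray-workers.conf
```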

Not recommended

This method is experimental, might break the Onionspray installation in the remote machines, and is not recommended.

This method syncs the whole installation folder, excluding logfiles, pidfiles, and other files relevant only to the current system.

Create an onionspray-workers.conf file with the list of remote machines:


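A minimal example of such a file, with hypothetical hostnames (one remote machine per line):

```
machine2.example.com
machine3.example.com
```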
Then do a destructive push:

./onionspray rnap

  1. See the discussion in the Periodically republish the descriptor issue.