Nextcloud goodies - standalone Signaling service
TL;DR: 

These are the instructions for installing the standalone Nextcloud signaling server, which enables video calls with more than a few participants.

This guide assumes you'll be installing the signaling server bundle on a blank server running Debian 10. All the components can also be installed on the same server that runs Nextcloud.

Install and set up the signaling server

Get signaling server source

Struktur AG was kind enough to release the standalone signaling server it developed for Nextcloud as open source. Wget or git clone the repo to the server.

$ wget https://github.com/strukturag/nextcloud-spreed-signaling/archive/master.zip

Install unzip if it's missing

$ sudo apt install unzip 

And then unzip the archive

$ unzip master.zip

Build signaling binary

The signaling server must be built from source. You'll need git, go (version 1.10 or greater) and make to do that.

$ sudo apt install git golang make

Switch to the folder you just extracted and compile the signaling binary

$ cd nextcloud-spreed-signaling-master
$ make build

If everything goes right, you'll have the binary built in the bin folder. Next you'll want to copy it to a more proper location.

$ sudo cp bin/signaling /usr/local/bin/

Set up the configuration file

The default configuration file is server.conf.in which you'll want to copy and rename

$ sudo mkdir /etc/signaling
$ sudo cp server.conf.in /etc/signaling/server.conf

Open the configuration file with a text editor of your choice. You'll need to modify at least the following.

Uncomment the listen line under the [http] block to enable listening on localhost

[http]
# IP and port to listen on for HTTP requests.
# Comment line to disable the listener.
listen = 127.0.0.1:8080
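If you prefer scripting the edit, the uncommenting can be done with sed. A minimal sketch, assuming the commented-out line has the form shown (demonstrated here against a stand-in file; on the real server point CONF at /etc/signaling/server.conf):

```shell
# Demo: uncomment the listen line with sed. CONF is a stand-in file created
# for illustration; use /etc/signaling/server.conf on the real server.
CONF=/tmp/server.conf
printf '%s\n' '[http]' '#listen = 127.0.0.1:8080' > "$CONF"
sed -i 's/^#listen = /listen = /' "$CONF"
grep '^listen' "$CONF"    # → listen = 127.0.0.1:8080
```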

Create and add three secret keys - two under [sessions] and one under [clients].

[sessions]
# Secret value used to generate checksums of sessions. This should be a random
# string of 32 or 64 bytes.
hashkey = insert-key-here

# Optional key for encrypting data in the sessions. Must be either 16, 24 or
# 32 bytes.
# If no key is specified, data will not be encrypted (not recommended).
blockkey = insert-key-here

[clients]
# Shared secret for connections from internal clients. This must be the same
# value as configured in the respective internal services.
internalsecret = insert-key-here

You can use openssl to easily create sufficiently random keys for all three. The following command outputs a 32-character string (24 random bytes, base64 encoded).

$ openssl rand -base64 24 
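As a convenience, here is a sketch that generates all three secrets in one go. The lengths follow the comments in server.conf (hashkey should be 32 or 64 bytes, blockkey 16, 24 or 32 bytes); the hex/base64 choices are just one reasonable option. Paste the printed lines into the corresponding sections.

```shell
# Generate the three secrets; lengths chosen to satisfy the comments in
# server.conf (hex encoding doubles the byte count in characters).
HASHKEY=$(openssl rand -hex 32)            # 64-character string
BLOCKKEY=$(openssl rand -hex 16)           # 32-character string (valid AES key length)
INTERNALSECRET=$(openssl rand -base64 24)  # 32-character string
printf 'hashkey = %s\nblockkey = %s\ninternalsecret = %s\n' \
    "$HASHKEY" "$BLOCKKEY" "$INTERNALSECRET"
```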

Uncomment and modify the backends line under the [backend] block to specify backend IDs for all Nextcloud instances you want to use the signaling server with.

[backend]
# Comma-separated list of backend ids from which clients are allowed to connect
# from. Each backend will have isolated rooms, i.e. clients connecting to room
# "abc12345" on backend 1 will be in a different room than clients connected to
# a room with the same name on backend 2. Also sessions connected from different
# backends will not be able to communicate with each other.
backends = bunnywurst, jabberwocky

Uncomment and modify/add a configuration section for each backend you specified in the [backend] block

# Backend configurations as defined in the "[backend]" section above. The
# section names must match the ids used in "backends" above.
[bunnywurst]
# URL of the Nextcloud instance
url = https://bunnywurst.example.com

# Shared secret for requests from and to the backend servers. This must be the
# same value as configured in the Nextcloud admin ui.
secret = make-up-a-password-here

[jabberwocky]
url = https://jabberwocky.example.com
secret = and-another-password-here

Uncomment the NATS url line under the [nats] block. Signaling uses the NATS messaging server, which we'll install next.

[nats]
# Url of NATS backend to use. This can also be a list of URLs to connect to
# multiple backends. For local development, this can be set to ":loopback:"
# to process NATS messages internally instead of sending them through an
# external NATS backend.
url = nats://localhost:4222

Uncomment and add the Janus MCU configuration under [mcu]. Janus MCU (Multipoint Control Unit) is a video mixer used to offload video processing from the clients to the server. We'll install and configure it alongside the NATS server in the next sections.

[mcu]
# The type of the MCU to use. Currently only "janus" and "proxy" are supported.
# Leave empty to disable MCU functionality.
type = janus

# For type "janus": the URL to the websocket endpoint of the MCU server.
# For type "proxy": a space-separated list of proxy URLs to connect to.
url = ws://127.0.0.1:8188

Save and close the file.
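Before moving on, it's worth making sure no placeholder secrets slipped through. A minimal sketch (run here against a demo file for illustration; on the real server point CONF at /etc/signaling/server.conf):

```shell
# Demo: scan for leftover placeholder values. CONF is a stand-in file;
# use /etc/signaling/server.conf on the real server.
CONF=/tmp/demo-server.conf
printf 'hashkey = insert-key-here\n' > "$CONF"
if grep -q 'insert-key-here' "$CONF"; then
    echo "WARNING: placeholder secrets remain in $CONF"
fi
```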


Install and configure NATS messaging server

The NATS server must be installed manually as there is no apt source. Go to https://nats.io/download/nats-io/nats-server/ to find the correct binary for your architecture (most likely amd64).

When I wrote this the latest server version was 2.1.8, so:

$ wget https://github.com/nats-io/nats-server/releases/download/v2.1.8/nats-server-v2.1.8-linux-amd64.zip

And unzip

$ unzip nats-server-v2.1.8-linux-amd64.zip

Move the NATS binary to a more proper location

$ sudo mv nats-server-v2.1.8-linux-amd64/nats-server /usr/local/bin

You can check that the server works by running it in the foreground:

$ nats-server

Output should be something like:

[24148] 2020/10/29 14:29:44.999376 [INF] Starting nats-server version 2.1.8
[24148] 2020/10/29 14:29:44.999466 [INF] Git commit [c0b574f]
[24148] 2020/10/29 14:29:44.999568 [INF] Listening for client connections on 0.0.0.0:4222
[24148] 2020/10/29 14:29:44.999604 [INF] Server id is NB2NLTMFODUSTXPX6BWTTDKPCMLKZKNXWVGJLOXTLEETQQLFNGCYNSX4
[24148] 2020/10/29 14:29:44.999629 [INF] Server is ready

Press Ctrl+C to terminate the server.


Install and configure Janus MCU

Janus is available in Buster backports! If you don't have them enabled already just add the following line to /etc/apt/sources.list

deb http://deb.debian.org/debian buster-backports main

After that, update the package list and install janus

$ sudo apt update
$ sudo apt install janus

The Janus package registers itself as a Debian system service, so there's no need to do that by hand. You will have to write the configuration, however. Unlike signaling and NATS, Janus ships multiple sample config files, located in /etc/janus/

janus.eventhandler.nanomsgevh.jcfg.sample
janus.eventhandler.rabbitmqevh.jcfg.sample
janus.eventhandler.sampleevh.jcfg.sample
janus.eventhandler.wsevh.jcfg.sample
janus.jcfg.sample
janus.logger.jsonlog.jcfg.sample
janus.plugin.audiobridge.jcfg.sample
janus.plugin.duktape.jcfg.sample
janus.plugin.echotest.jcfg.sample
janus.plugin.lua.jcfg.sample
janus.plugin.nosip.jcfg.sample
janus.plugin.recordplay.jcfg.sample
janus.plugin.sip.jcfg.sample
janus.plugin.streaming.jcfg.sample
janus.plugin.textroom.jcfg.sample
janus.plugin.videocall.jcfg.sample
janus.plugin.videoroom.jcfg.sample
janus.plugin.voicemail.jcfg.sample
janus.transport.http.jcfg.sample
janus.transport.nanomsg.jcfg.sample
janus.transport.pfunix.jcfg.sample
janus.transport.rabbitmq.jcfg.sample
janus.transport.websockets.jcfg.sample

You'll have to enable the videoroom plugin, the websocket transport and the general Janus config. Just copy their config samples into place:

$ sudo cp /etc/janus/janus.plugin.videoroom.jcfg.sample /etc/janus/janus.plugin.videoroom.jcfg
$ sudo cp /etc/janus/janus.transport.websockets.jcfg.sample /etc/janus/janus.transport.websockets.jcfg
$ sudo cp /etc/janus/janus.jcfg.sample /etc/janus/janus.jcfg

The videoroom plugin config can be used as is, but the websocket transport needs to be confined to localhost. Open the websocket transport config file (/etc/janus/janus.transport.websockets.jcfg) and modify the ws_ip line under the general block to bind the server to localhost.

ws_ip = "127.0.0.1"    # Whether we should bind this server to a specific IP address only
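For context, the relevant lines of the general block should then look roughly like the following (other lines as in the stock sample; ws_port = 8188 is what matches the mcu url = ws://127.0.0.1:8188 set in the signaling config earlier):

```
general: {
        ws = true              # enable the plain websocket transport
        ws_port = 8188         # must match the mcu url in signaling's server.conf
        ws_ip = "127.0.0.1"    # bind to localhost only
}
```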

In the general Janus config (/etc/janus/janus.jcfg) you'll have to disable the unused plugins and transports. Find the sections named plugins and transports (they're separate) and change them to the following:

plugins: {
    disable = "libjanus_voicemail.so,
        libjanus_videocall.so,
        libjanus_textroom.so,
        libjanus_streaming.so,
        libjanus_sip.so,
        libjanus_recordplay.so,
        libjanus_nosip.so,
        libjanus_lua.so,
        libjanus_echotest.so,
        libjanus_duktape.so,
        libjanus_audiobridge.so,
        libjanus_jsonlog.so"
}

[...]

transports: {
    disable = "libjanus_rabbitmq.so,
        libjanus_http.so,
        libjanus_nanomsg.so,
        libjanus_pfunix.so"
}

After the configuration is done, we must restart the service:

$ sudo service janus restart

Create system services for NATS and Signaling

Now we have all the required components for the signaling server to work, but in order to fully automate the setup we'll have to create users and system service files for both NATS and Signaling.

Create NATS user and service file

First create a system user for NATS

$ sudo useradd --system nats

Then create a new system service file at /etc/systemd/system/nats.service with the following content:

[Unit]
Description=NATS messaging server

[Service]
ExecStart=/usr/local/bin/nats-server
User=nats
Restart=on-failure

[Install]
WantedBy=multi-user.target

Remember we moved the NATS server binary to /usr/local/bin? The unit simply tells systemd to start the NATS server as a background job, and to restart it in case of failure.

Next we'll have to enable the new service

$ sudo systemctl enable nats.service

After enabling, it can be started

$ sudo service nats start

And after starting, it's worth checking that it actually works

$ sudo service nats status

If the service is listed as active (running) it should be working as expected.

Create Signaling user and service file

The process is almost the same for the Signaling server's user and service configuration

$ sudo useradd --system signaling

The service file goes to /etc/systemd/system/signaling.service and its contents should be:

[Unit]
Description=Nextcloud Talk signaling server

[Service]
ExecStart=/usr/local/bin/signaling --config /etc/signaling/server.conf
User=signaling
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start the service

$ sudo systemctl enable signaling.service
$ sudo service signaling start
$ sudo service signaling status

Signaling service should now be listed as active (running) as well.


Install apache and certbot

We're almost done! The signaling service is working, but it hasn't been exposed to the internet yet. If you're running Nextcloud on the same machine as the signaling server you can skip this part. If you're not (as this guide assumes), we'll need to install the Apache web server as a reverse proxy in front of the signaling server. Apache will also handle SSL/TLS encryption, with certificates from certbot.

Start by installing apache and certbot packages

$ sudo apt install apache2 certbot

After the install is complete, certain apache modules must be enabled

$ sudo a2enmod proxy proxy_http proxy_wstunnel ssl rewrite

Configure apache as reverse proxy

Create a new apache virtual host file at /etc/apache2/sites-available/signaling.conf and fill it with the following:

<VirtualHost [SYSTEM IP]:443>
ServerName [SYSTEM FQDN]
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/[SYSTEM FQDN]/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/[SYSTEM FQDN]/privkey.pem

# Enable proxying Websocket requests to the standalone signaling server.
ProxyPass "/" "ws://127.0.0.1:8080/"

RewriteEngine On
# Websocket connections from the clients.
RewriteRule ^/spreed$ - [L]
# Backend connections from Nextcloud.
RewriteRule ^/api/(.*) http://127.0.0.1:8080/api/$1 [L,P]
</VirtualHost>

Replace the square-bracketed parts in VirtualHost, ServerName and SSLCertificate(Key)File with your server's IP address and FQDN (fully qualified domain name). Note that the certificate files don't exist yet; we'll create them next.
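If you'd rather script the substitution, here is a minimal sed sketch (shown against a stand-in file; on the real server run it against /etc/apache2/sites-available/signaling.conf, and the FQDN/IP values below are examples to replace with your own):

```shell
# Demo: substitute the [SYSTEM IP] / [SYSTEM FQDN] placeholders with sed.
# VHOST is a stand-in file; use the real vhost path on your server.
VHOST=/tmp/signaling.conf
printf '<VirtualHost [SYSTEM IP]:443>\nServerName [SYSTEM FQDN]\n' > "$VHOST"
FQDN=signaling.example.com   # example value, use your own FQDN
IP=203.0.113.10              # example value, use your own IP
sed -i -e "s/\[SYSTEM FQDN\]/$FQDN/g" -e "s/\[SYSTEM IP\]/$IP/g" "$VHOST"
cat "$VHOST"
```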

If you don't use apache for anything else, you can disable apache's listening socket on port 80 completely. This lets certbot fetch and renew certificates without shutting down apache first. To disable port 80, open /etc/apache2/ports.conf and simply comment out the Listen 80 line

#Listen 80

Obtain SSL/TLS certificate for apache

To ensure security and keep Nextcloud happy we'll need an SSL/TLS certificate. Replace [SYSTEM FQDN] with your actual system domain name.

$ sudo certbot certonly -d [SYSTEM FQDN]

Certbot will ask for the authentication method (select standalone) and, on the first run, also an email address for urgent renewal and security notices, agreement to the Terms of Service, and whether you want to join the EFF's (Electronic Frontier Foundation) mailing list. You'll have to agree to the Terms of Service; the other two are optional.

After getting the certificate you can enable the reverse proxy site

$ sudo a2ensite signaling.conf

If your config is ready, you can restart apache

$ sudo service apache2 restart

Your signaling server is now ready to be used with Nextcloud, but hold your horses: there's still one thing you might want to do.

Automate certificate renewal

By creating a simple cron job you can automatically check and renew the certificate; otherwise it will expire after three months. Open the root crontab by typing:

$ sudo crontab -e

Crontab asks you to pick a default editor on the first run (nano is the easiest). When the cron file is open, paste the following line at the end of the file

0 0 * * * certbot renew && systemctl reload apache2.service

The above will work only if you disabled port 80 for apache. If you didn't, the apache service must be stopped before each renewal attempt. In that case, use the following instead:

0 0 * * * systemctl stop apache2.service && certbot renew && systemctl start apache2.service

Both jobs attempt to renew the certificate every day at midnight (the five fields are minute, hour, day of month, month and weekday). The latter option stops apache completely for the renewal. Renewal usually takes only a few seconds, but it will interrupt any conference that happens to be going on at that moment.

Bottom line: 

The signaling server enables roughly 50 call participants per typical Xeon core. Memory shouldn't be a concern, since most of the signaling work is decoding and re-encoding video on the fly, not storing it.
