How to install an Oasis¶
The NOMAD Central Repository is a service that runs at the Max Planck computing facility in Garching, Germany. However, the NOMAD software is open source, and everybody can run it. Any service that uses NOMAD software independently is called a NOMAD Oasis. A NOMAD Oasis does not need to be fully isolated. For example, you can publish uploads from your NOMAD Oasis to the central NOMAD installation.
Note
Register your Oasis: If you installed (or even just plan to install) a NOMAD Oasis, please register your Oasis with FAIRmat and help us to assist you in the future.
Basic installation¶
You install NOMAD Oasis from a NOMAD distribution project. We provide a template for these distribution projects.
Warning
You can install NOMAD Oasis by directly using our template as described below. However, for a production installation, we recommend creating your own distribution project based on the template by pressing the "Use this template" button on the top right of the template's GitHub page.
The project contains all necessary config files and will allow you to version your configuration, build custom images with plugins, and much more. Your own distro project is mandatory for configuring plugins.
The same documentation is also available in the created project's README.md, with the necessary GitHub URLs already changed for you.
- Make sure you have docker installed. Docker nowadays comes with `docker compose` built in; previously, you needed to install the stand-alone `docker-compose`.
- Clone the `nomad-distro-template` repository, or download the repository as a zip file.
- On Linux only, recursively change the owner of the `.volumes` directory to the nomad user (1000).
- Pull the images specified in the `docker-compose.yaml`. Note that the image needs to be public, or you need to provide a PAT (see "Important" note above).
- Run it with docker compose in detached (`--detach` or `-d`) mode.
- Optionally, you can now test that NOMAD is running.
- Finally, open http://localhost/nomad-oasis in your browser to start using your new NOMAD Oasis.
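Put together, the steps above might look like this; the repository URL follows the nomad-distro-template project and may change, so treat it as an assumption:

```shell
# 1. Verify docker and the built-in compose plugin
docker --version
docker compose version

# 2. Clone the distribution template (URL assumed)
git clone https://github.com/FAIRmat-NFDI/nomad-distro-template.git
cd nomad-distro-template

# 3. On Linux only: give the nomad user (uid 1000) ownership of the volumes
sudo chown -R 1000 .volumes

# 4. Pull the images referenced in docker-compose.yaml
docker compose pull

# 5. Start everything in detached mode
docker compose up -d

# 6. Optionally check that NOMAD is running
curl localhost/nomad-oasis/alive
```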
To run NORTH (the NOMAD Remote Tools Hub), the hub container needs to run docker, and the container has to be run under the docker group. You need to replace the default group id `991` in the `docker-compose.yaml`'s `hub` section with your system's docker group id. Run `id` if you are a docker user, or `getent group | grep docker`, to find your system's docker gid. The user id 1000 is used as the nomad user inside all containers.
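Finding the docker group id might look like this (the reported gid will differ between systems):

```shell
# Show the docker group entry; the third colon-separated field is the gid
getent group docker || true    # e.g. "docker:x:991:alice"

# Extract just the gid (empty output if the group does not exist)
getent group docker | cut -d: -f3
```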
Please see the Jupyter image section below for more information on the jupyter NORTH image being generated in this repository.
Planning your installation¶
Hardware consideration¶
This depends, of course, on how much data you need to manage and process. Data storage is the obvious aspect here. NOMAD keeps all files that it manages as they are. The files that NOMAD produces in addition (e.g. through parsing) are typically smaller than the original raw files. Therefore, you can base your storage requirements on the size of the data files that you expect to manage. The additional mongo database and elasticsearch index are comparatively small.
Storage speed is another consideration. You can work with NAS systems. All that NOMAD needs is a "regular" POSIX filesystem as an interface, so anything you can mount (e.g. on the docker host) should be fine. Processing obviously relies on read/write speed, but this is just a matter of convenience, as the processing is designed to run as managed asynchronous tasks. Local storage might be favorable for mongodb and elasticsearch operation, but it is not a must.
The amount of compute resources (e.g. processor cores) is also a matter of convenience (and of the number of expected users). Four CPU cores are typically enough to support a research group and run the application, processing, and databases in parallel. Smaller systems still work, e.g. for testing.
There should be enough RAM to run databases, application, and processing at the same time. The minimum requirements here can be quite low, but for processing the metadata for individual files is kept in memory. For large DFT geometry-optimizations this can add up quickly, especially if many CPU cores are available for processing entries in parallel. We recommend at least 2GB per core and a minimum of 8GB. You also need to consider RAM and CPU for running tools like jupyter, if you opt to use NOMAD NORTH.
Sharing data through log transfer and data privacy notice¶
NOMAD includes a log transfer function. When enabled, it automatically collects and transfers non-personalized logging data to us. Currently, this functionality is experimental and requires opt-in. However, in upcoming versions of NOMAD Oasis, we might change this to opt-out.
To enable this functionality, add `logtransfer.enabled: true` to your `nomad.yaml`.
The service collects log data and aggregated statistics, such as the number of users or the number of uploaded datasets. In any case, this data does not personally identify any users, nor does it contain any uploaded data. All data is in an aggregated and anonymized form.
The data is solely used by the NOMAD developers and FAIRmat, including but not limited to:
- Analyzing and monitoring system performance to identify and resolve issues.
- Improving our NOMAD software based on usage patterns.
- Generating aggregated and anonymized reports.
We do not share any collected data with any third parties.
We may update this data privacy notice from time to time to reflect changes in our data practices. We encourage you to review this notice periodically for any updates.
Using the central user management¶
Our recommendation is to use the central user management provided by nomad-lab.eu. We simplified its use and you can use it out-of-the-box. You can even run your system from `localhost` (e.g. for initial testing). The central user management system does not communicate with your OASIS directly. Therefore, you can run your OASIS without exposing it to the public internet.
There are two requirements. First, your users must be able to reach the OASIS. If a user is logging in, she/he is redirected to the central user management server and after login, she/he is redirected back to the OASIS. These redirects are executed by your user's browser and do not require direct communication.
Second, your OASIS must be able to reach (via HTTP) the central user management and the central NOMAD installation. This is necessary for non-JWT-based authentication methods and to retrieve existing users for data-sharing features.
The central user management will make future synchronization of data between NOMAD installations easier, and we generally recommend using the central system. In principle, however, you can also run your own user management; see the section on your own user management below.
Configuration files¶
The nomad-distro-template provides all the necessary configuration files. We strongly recommend creating your own distribution project based on the template. This will allow you to version your configuration, build custom images with plugins, and much more.
In this section, you can learn about settings that you might need to change. The config files are:
docker-compose.yaml
configs/nomad.yaml
configs/nginx.conf
All docker containers are configured via docker-compose and the respective docker-compose.yaml
file.
The other files are mounted into the docker containers.
docker-compose.yaml¶
The most basic docker-compose.yaml
to run an OASIS looks like this:
services:
  # broker for celery
  rabbitmq:
    restart: unless-stopped
    image: rabbitmq:4
    container_name: nomad_oasis_rabbitmq
    environment:
      - RABBITMQ_ERLANG_COOKIE=SWQOKODSQALRPCLNMEQG
      - RABBITMQ_DEFAULT_USER=rabbitmq
      - RABBITMQ_DEFAULT_PASS=rabbitmq
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    healthcheck:
      test: [ "CMD", "rabbitmq-diagnostics", "--silent", "--quiet", "ping" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # the search engine
  elastic:
    restart: unless-stopped
    image: elasticsearch:7.17.24
    container_name: nomad_oasis_elastic
    environment:
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    volumes:
      - elastic:/usr/share/elasticsearch/data
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://elastic:9200/_cat/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 60s

  # the user data db
  mongo:
    restart: unless-stopped
    image: mongo:5
    container_name: nomad_oasis_mongo
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - mongo:/data/db
      - ./.volumes/mongo:/backup
    command: mongod --logpath=/dev/null # --quiet
    healthcheck:
      test: [ "CMD", "mongo", "mongo:27017/test", "--quiet", "--eval", "'db.runCommand({ping:1}).ok'" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad worker (processing)
  worker:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_worker
    environment:
      NOMAD_SERVICE: nomad_oasis_worker
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
    depends_on:
      rabbitmq:
        condition: service_healthy
      elastic:
        condition: service_healthy
      mongo:
        condition: service_healthy
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
    command: ./run-worker.sh

  # nomad app (api + proxy)
  app:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_app
    environment:
      NOMAD_SERVICE: nomad_oasis_app
      NOMAD_SERVICES_API_PORT: 80
      NOMAD_FS_EXTERNAL_WORKING_DIRECTORY: "$PWD"
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
      NOMAD_NORTH_HUB_HOST: north
    depends_on:
      rabbitmq:
        condition: service_healthy
      elastic:
        condition: service_healthy
      mongo:
        condition: service_healthy
      north:
        condition: service_started
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
    command: ./run.sh
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://localhost:8000/-/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad remote tools hub (JupyterHUB, e.g. for AI Toolkit)
  north:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_north
    environment:
      NOMAD_SERVICE: nomad_oasis_north
      NOMAD_NORTH_DOCKER_NETWORK: nomad_oasis_network
      NOMAD_NORTH_HUB_CONNECT_IP: north
      NOMAD_NORTH_HUB_IP: "0.0.0.0"
      NOMAD_NORTH_HUB_HOST: north
      NOMAD_SERVICES_API_HOST: app
      NOMAD_FS_EXTERNAL_WORKING_DIRECTORY: "$PWD"
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
      - /var/run/docker.sock:/var/run/docker.sock
    user: '1000:991'
    command: python -m nomad.cli admin run hub
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://localhost:8081/nomad-oasis/north/hub/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad proxy (a reverse proxy for nomad)
  proxy:
    restart: unless-stopped
    image: nginx:stable-alpine
    container_name: nomad_oasis_proxy
    command: nginx -g 'daemon off;'
    volumes:
      - ./configs/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      app:
        condition: service_healthy
      worker:
        condition: service_started # TODO: service_healthy
      north:
        condition: service_healthy
    ports:
      - "80:80"

volumes:
  mongo:
    name: "nomad_oasis_mongo"
  elastic:
    name: "nomad_oasis_elastic"
  rabbitmq:
    name: "nomad_oasis_rabbitmq"
  keycloak:
    name: "nomad_oasis_keycloak"

networks:
  default:
    name: nomad_oasis_network
Changes necessary:
- The group in the value of the hub's `user` parameter needs to match the docker group on the host. This ensures that the user which runs the hub has the rights to access the host's docker.
- On Windows or macOS computers, you have to run the `app` and `worker` containers without `user: '1000:1000'` and the `north` container with `user: root`.
A few things to notice:
- The app, worker, and north services use the NOMAD docker image. Here we use the `latest` tag, which gives you the latest beta version of NOMAD. You might want to change this to `stable`, a version tag (format is `vX.X.X`, you find all releases here), or a specific branch tag.
- All services use docker volumes for storage. This could be changed to host mounts.
- It mounts two configuration files that need to be provided (see below): `nomad.yaml`, `nginx.conf`.
- The only exposed port is `80` (proxy service). This could be changed to a desired port if necessary.
- The NOMAD images are pulled from our GitLab at MPCDF; the other services use images from a public registry (Docker Hub).
- All containers will be named `nomad_oasis_*`. These names can be used later to reference the containers with the `docker` command.
- The services are set up to restart `unless-stopped`; you might want to change this to `no` while debugging errors, to prevent indefinite restarts.
- Make sure that the `PWD` environment variable is set. NORTH needs to create bind mounts that require absolute paths, and we need to pass the current working directory to the configuration via the `PWD` variable (see the hub service in the `docker-compose.yaml`).
- The `north` service needs to run docker containers. We have to use the system's docker group as its group. You might need to replace `991` with your system's docker group id.
nomad.yaml¶
NOMAD app and worker read a nomad.yaml
for configuration.
services:
  api_host: 'localhost'
  api_base_path: '/nomad-oasis'

oasis:
  is_oasis: true
  uses_central_user_management: true

north:
  jupyterhub_crypt_key: '978bfb2e13a8448a253c629d8dd84ff89587f30e635b753153960930cad9d36d'

meta:
  deployment: 'oasis'
  deployment_url: 'https://my-oasis.org/api'
  maintainer_email: 'me@my-oasis.org'

logtransfer:
  enabled: false

mongo:
  db_name: nomad_oasis_v1

elastic:
  entries_index: nomad_oasis_entries_v1
  materials_index: nomad_oasis_materials_v1
You should change the following:
- Replace `localhost` with the hostname of your server. The user management will redirect your users back to this host. Make sure this is a hostname your users can use.
- Replace `deployment`, `deployment_url`, and `maintainer_email` with representative values. The `deployment_url` should be the URL of the deployment's API (it should end with `/api`).
- To enable the log transfer, set `logtransfer.enabled: true` (see the data privacy notice above).
- You can change `api_base_path` to run NOMAD under a different path prefix.
- You should generate your own `north.jupyterhub_crypt_key`. You can generate one with `openssl rand -hex 32`.
- On Windows or macOS, you have to add `hub_connect_ip: 'host.docker.internal'` to the `north` section.
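For example, generating a fresh crypt key:

```shell
# 32 random bytes, hex-encoded: a 64-character string for north.jupyterhub_crypt_key
openssl rand -hex 32
```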
A few things to notice:
- Under `mongo` and `elastic` you can configure database and index names. This might be useful if you need to run multiple NOMADs with the same databases.
- All managed files are stored under `.volumes` of the current directory.
nginx.conf¶
The GUI container serves as a proxy that forwards requests to the app container. The proxy is an nginx server and needs a configuration similar to this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name localhost;
    proxy_set_header Host $host;

    gzip_min_length 1000;
    gzip_buffers 4 8k;
    gzip_http_version 1.0;
    gzip_disable "msie6";
    gzip_vary on;
    gzip on;
    gzip_proxied any;
    gzip_types
        text/css
        text/javascript
        text/xml
        text/plain
        application/javascript
        application/x-javascript
        application/json;

    location / {
        proxy_pass http://app:8000;
    }

    location ~ /nomad-oasis\/?(gui)?$ {
        rewrite ^ /nomad-oasis/gui/ permanent;
    }

    location /nomad-oasis/gui/ {
        proxy_intercept_errors on;
        error_page 404 = @redirect_to_index;
        proxy_pass http://app:8000;
    }

    location @redirect_to_index {
        rewrite ^ /nomad-oasis/gui/index.html break;
        proxy_pass http://app:8000;
    }

    location ~ \/gui\/(service-worker\.js|meta\.json)$ {
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;
        proxy_pass http://app:8000;
    }

    location ~ /api/v1/uploads(/?$|.*/raw|.*/bundle?$) {
        client_max_body_size 35g;
        proxy_request_buffering off;
        proxy_pass http://app:8000;
    }

    location ~ /api/v1/.*/download {
        proxy_buffering off;
        proxy_pass http://app:8000;
    }

    location /nomad-oasis/north/ {
        client_max_body_size 500m;
        proxy_pass http://north:9000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
A few things to notice:
- It configures the base path (`nomad-oasis`). It needs to be changed if you use a different base path.
- You can use the server for additional content if you like.
- `client_max_body_size` sets a limit on the possible upload size.
You can add an additional reverse proxy in front, or modify the nginx in the `docker-compose.yaml`, to support HTTPS.
If you operate the GUI container behind another proxy, keep in mind that your proxy should not buffer requests/responses, to allow streaming of large requests/responses for `api/v1/uploads` and `api/v1/.*/download`.
An nginx location on an additional reverse proxy could have these directives to ensure the correct HTTP headers and to allow the download and upload of large files:
client_max_body_size 35g;
proxy_set_header Host $host;
proxy_pass_request_headers on;
proxy_buffering off;
proxy_request_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://<your-oasis-host>/nomad-oasis;
Plugins¶
Plugins allow the customization of a NOMAD deployment in terms of which search apps, normalizers, parsers, and schema packages are available. In order for these customizations to be activated, they have to be configured and installed into an Oasis. The basic template comes with a core set of plugins. If you want to configure your own set of plugins, using the template and creating your own distro project is mandatory.
Plugins are configured via the `pyproject.toml` file. Based on this file, the distro project CI pipeline creates the NOMAD docker image that is used by your installation. Only plugins configured in the `pyproject.toml` file will be installed into the docker image, and only those plugins installed in the used docker image are available in your Oasis.
Please refer to the template README to learn how to add your own plugins.
Starting and stopping NOMAD services¶
If you prepared the above files, simply use the usual `docker compose` commands to start everything.
To make sure you have the latest docker images for everything, pull them first.
In the beginning, and to simplify debugging, it is recommended to start the services separately.
The `-d` (or `--detach`) option runs containers in the background as daemons. Later you can run all services at once.
Running all services also includes NORTH. When you use a tool in NORTH for the first time, your docker needs to pull the image that contains this tool. Be aware that this might take longer than timeouts allow, and starting a tool for the very first time might fail.
You can also use docker to stop and remove faulty containers that run as daemons.
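The start/stop workflow described above might look like this (service names as in the docker-compose.yaml from this guide):

```shell
# Pull the latest images referenced in docker-compose.yaml
docker compose pull

# Start the databases and broker first, then app and worker in the foreground
docker compose up -d mongo elastic rabbitmq
docker compose up app worker

# Later, start all services (including NORTH) in detached mode
docker compose up -d

# Stop and remove the containers again
docker compose down
```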
You can wait for the start-up with curl, using the app's `alive` endpoint.
If everything works, the GUI should be available under http://localhost/nomad-oasis (see above).
If you run into problems, use the dev-tools of your browser to check the JavaScript logs or monitor the network traffic for HTTP 500/400/404/401 responses.
To see if at least the API works, check it directly with curl.
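These checks might look like this; the paths assume the default `nomad-oasis` base path, and `/api/v1/info` is assumed to be the NOMAD API's info route:

```shell
# Wait for the app to answer on its alive endpoint
curl --fail http://localhost/nomad-oasis/alive

# Check that the API responds
curl http://localhost/nomad-oasis/api/v1/info
```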
To see logs or 'go into' a running container, you can access the individual containers with their names and the usual docker commands:
If you want to report problems with your OASIS, please provide the logs for:
- nomad_oasis_app
- nomad_oasis_worker
- nomad_oasis_gui
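Inspecting logs or opening a shell in a running container uses the standard docker commands, for example with the container names defined in this guide's docker-compose.yaml:

```shell
# Follow the logs of a container
docker logs -f nomad_oasis_app

# Open a shell inside a running container
docker exec -it nomad_oasis_worker sh
```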
Provide and connect your own user management¶
NOMAD uses keycloak for its user management. NOMAD uses keycloak in two ways. First, the user authentication uses the OpenID Connect/OAuth interfaces provided by keycloak. Second, NOMAD uses the keycloak realm-management API to get a list of existing users. Keycloak is highly customizable and numerous options to connect keycloak to existing identity providers exist.
This tutorial assumes that you have some understanding of what keycloak is and how it works.
The NOMAD Oasis installation with your own keycloak is very similar to the regular docker-compose installation above. There are just three changes.
- The `docker-compose.yaml` has an added keycloak service.
- The `nginx.conf` is modified to add another location for keycloak.
- The `nomad.yaml` has modifications that tell NOMAD to use your own keycloak and not the official NOMAD keycloak.
You can start with the regular installation above and manually adapt the config, or download the already updated configuration files: nomad-oasis-with-keycloak.zip.
The download also contains an additional configs/nomad-realm.json
that allows you
to create an initial keycloak realm that is configured for NOMAD automatically.
First, the docker-compose.yaml
:
services:
  # keycloak user management
  keycloak:
    restart: unless-stopped
    image: quay.io/keycloak/keycloak:16.1.1
    container_name: nomad_oasis_keycloak
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=password
      - KEYCLOAK_FRONTEND_URL=http://localhost/keycloak/auth
      - KEYCLOAK_IMPORT="/tmp/nomad-realm.json"
    command:
      - "-Dkeycloak.import=/tmp/nomad-realm.json -Dkeycloak.migration.strategy=IGNORE_EXISTING"
    volumes:
      - keycloak:/opt/jboss/keycloak/standalone/data
      - ./configs/nomad-realm.json:/tmp/nomad-realm.json
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://127.0.0.1:9990/health/live" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 30s

  # broker for celery
  rabbitmq:
    restart: unless-stopped
    image: rabbitmq:4
    container_name: nomad_oasis_rabbitmq
    environment:
      - RABBITMQ_ERLANG_COOKIE=SWQOKODSQALRPCLNMEQG
      - RABBITMQ_DEFAULT_USER=rabbitmq
      - RABBITMQ_DEFAULT_PASS=rabbitmq
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    healthcheck:
      test: [ "CMD", "rabbitmq-diagnostics", "--silent", "--quiet", "ping" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # the search engine
  elastic:
    restart: unless-stopped
    image: elasticsearch:7.17.24
    container_name: nomad_oasis_elastic
    environment:
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    volumes:
      - elastic:/usr/share/elasticsearch/data
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://elastic:9200/_cat/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 60s

  # the user data db
  mongo:
    restart: unless-stopped
    image: mongo:5
    container_name: nomad_oasis_mongo
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - mongo:/data/db
      - ./.volumes/mongo:/backup
    command: mongod --logpath=/dev/null # --quiet
    healthcheck:
      test: [ "CMD", "mongo", "mongo:27017/test", "--quiet", "--eval", "'db.runCommand({ping:1}).ok'" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad worker (processing)
  worker:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_worker
    environment:
      NOMAD_SERVICE: nomad_oasis_worker
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
    depends_on:
      rabbitmq:
        condition: service_healthy
      elastic:
        condition: service_healthy
      mongo:
        condition: service_healthy
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
    command: ./run-worker.sh

  # nomad app (api + proxy)
  app:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_app
    environment:
      NOMAD_SERVICE: nomad_oasis_app
      NOMAD_SERVICES_API_PORT: 80
      NOMAD_FS_EXTERNAL_WORKING_DIRECTORY: "$PWD"
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
    depends_on:
      rabbitmq:
        condition: service_healthy
      elastic:
        condition: service_healthy
      mongo:
        condition: service_healthy
      keycloak:
        condition: service_started
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
    command: ./run.sh
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://localhost:8000/-/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad remote tools hub (JupyterHUB, e.g. for AI Toolkit)
  north:
    restart: unless-stopped
    image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
    container_name: nomad_oasis_north
    environment:
      NOMAD_SERVICE: nomad_oasis_north
      NOMAD_NORTH_DOCKER_NETWORK: nomad_oasis_network
      NOMAD_NORTH_HUB_CONNECT_IP: north
      NOMAD_NORTH_HUB_IP: "0.0.0.0"
      NOMAD_NORTH_HUB_HOST: north
      NOMAD_SERVICES_API_HOST: app
      NOMAD_FS_EXTERNAL_WORKING_DIRECTORY: "$PWD"
      NOMAD_RABBITMQ_HOST: rabbitmq
      NOMAD_ELASTIC_HOST: elastic
      NOMAD_MONGO_HOST: mongo
    depends_on:
      keycloak:
        condition: service_started
      app:
        condition: service_started
    volumes:
      - ./configs/nomad.yaml:/app/nomad.yaml
      - ./.volumes/fs:/app/.volumes/fs
      - /var/run/docker.sock:/var/run/docker.sock
    user: '1000:991'
    command: python -m nomad.cli admin run hub
    healthcheck:
      test: [ "CMD", "curl", "--fail", "--silent", "http://localhost:8081/nomad-oasis/north/hub/health" ]
      interval: 10s
      timeout: 10s
      retries: 30
      start_period: 10s

  # nomad proxy (a reverse proxy for nomad)
  proxy:
    restart: unless-stopped
    image: nginx:stable-alpine
    container_name: nomad_oasis_proxy
    command: nginx -g 'daemon off;'
    volumes:
      - ./configs/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      keycloak:
        condition: service_healthy
      app:
        condition: service_healthy
      worker:
        condition: service_started # TODO: service_healthy
      north:
        condition: service_healthy
    ports:
      - "80:80"

volumes:
  mongo:
    name: "nomad_oasis_mongo"
  elastic:
    name: "nomad_oasis_elastic"
  rabbitmq:
    name: "nomad_oasis_rabbitmq"
  keycloak:
    name: "nomad_oasis_keycloak"

networks:
  default:
    name: nomad_oasis_network
A few notes:
- You have to change the `KEYCLOAK_FRONTEND_URL` variable to match your host and set a path prefix.
- The environment variables on the keycloak service allow using keycloak behind the nginx proxy with a path prefix, e.g. `keycloak`.
- By default, keycloak will use a simple H2 file database stored in the given volume. Keycloak offers many other options to connect to SQL databases.
- We will use keycloak with our nginx proxy here, but you can also host-bind the port `8080` to access keycloak directly.
- We mount and use the downloaded `configs/nomad-realm.json` to configure a NOMAD-compatible realm on the first startup of keycloak.
Second, we add a keycloak location to the nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name localhost;
    proxy_set_header Host $host;

    location /keycloak {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        rewrite /keycloak/(.*) /$1 break;
        proxy_pass http://keycloak:8080;
    }

    location / {
        proxy_pass http://app:8000;
    }

    location ~ /nomad-oasis\/?(gui)?$ {
        rewrite ^ /nomad-oasis/gui/ permanent;
    }

    location /nomad-oasis/gui/ {
        proxy_intercept_errors on;
        error_page 404 = @redirect_to_index;
        proxy_pass http://app:8000;
    }

    location @redirect_to_index {
        rewrite ^ /nomad-oasis/gui/index.html break;
        proxy_pass http://app:8000;
    }

    location ~ \/gui\/(service-worker\.js|meta\.json)$ {
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;
        proxy_pass http://app:8000;
    }

    location ~ /api/v1/uploads(/?$|.*/raw|.*/bundle?$) {
        client_max_body_size 35g;
        proxy_request_buffering off;
        proxy_pass http://app:8000;
    }

    location ~ /api/v1/.*/download {
        proxy_buffering off;
        proxy_pass http://app:8000;
    }

    location /nomad-oasis/north/ {
        client_max_body_size 500m;
        proxy_pass http://north:9000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
A few notes:
- Again, we are using `keycloak` as a path prefix. We configure the headers to allow keycloak to pick up the rewritten URL.
Third, we modify the keycloak configuration in the nomad.yaml
:
services:
  api_host: 'localhost'
  api_base_path: '/nomad-oasis'

oasis:
  is_oasis: true
  uses_central_user_management: false

north:
  jupyterhub_crypt_key: '978bfb2e13a8448a253c629d8dd84ff89587f30e635b753153960930cad9d36d'

keycloak:
  server_url: 'http://keycloak:8080/auth/'
  public_server_url: 'http://localhost/keycloak/auth/'
  realm_name: nomad
  username: 'admin'
  password: 'password'

meta:
  deployment: 'oasis'
  deployment_url: 'https://my-oasis.org/api'
  maintainer_email: 'me@my-oasis.org'

logtransfer:
  enabled: false

mongo:
  db_name: nomad_oasis_v1

elastic:
  entries_index: nomad_oasis_entries_v1
  materials_index: nomad_oasis_materials_v1
You should change the following:
- There are two URLs to configure for keycloak. The `server_url` is used by the NOMAD services to communicate with keycloak directly within the docker network. The `public_server_url` is used by the UI to perform the authentication flow. You need to replace `localhost` in `public_server_url` with `<yourhost>`.
A few notes:
- The particular `admin_user_id` is the Oasis admin user in the provided example realm configuration. See below.
If you open `http://<yourhost>/keycloak/auth` in a browser, you can access the admin console. The default user and password are `admin` and `password`.
Keycloak uses realms to manage users and clients. A NOMAD-compatible realm is imported by default. The realm comes with a test user `test` and password `password`.
A few notes on the realm configuration:
- Realm and client settings are almost all default keycloak settings.
- You should change the password of the admin user in the nomad realm.
- The admin user in the nomad realm has the additional `view-users` client role for `realm-management` assigned. This is important, because NOMAD will use this user to retrieve the list of possible users for managing co-authors and reviewers on NOMAD uploads.
- The realm has one client, `nomad_public`. This has a basic configuration. You might want to adapt it to your own policies. In particular, you can alter the valid redirect URIs to your own host.
- We disabled the HTTPS requirement on the default realm for simplicity. You should change this for a production system.
Further steps¶
This is an incomplete list of potential things to customize your NOMAD experience.
- Learn how to develop plugins that can be installed in an Oasis
- Write .yaml based schemas and ELNs
- Learn how to use the tabular parser to manage data from .xls or .csv
- Add specialized NORTH tools
- Restricting user access
Other installation options¶
Kubernetes¶
Attention
This is just preliminary documentation and many details are missing.
There is a NOMAD Helm chart. First, we need to add the NOMAD Helm chart repository:
Next, we need a minimal `values.yaml` that configures the individual kubernetes resources created by our Helm chart:
The `jupyterhub`, `mongodb`, `elasticsearch`, and `rabbitmq` sections follow the respective official Helm charts' configuration.
Run the Helm chart and install NOMAD:
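A sketch of the Helm workflow described above; the repository URL and release name here are placeholders (take the real values from the official NOMAD documentation):

```shell
# Add the NOMAD Helm chart repository (placeholder URL)
helm repo add nomad-lab https://<nomad-helm-repo-url>
helm repo update

# Install the chart with your minimal values.yaml
helm install nomad-oasis nomad-lab/nomad -f values.yaml
```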
Base Linux (without docker)¶
Not recommended. We do not provide official support for this type of installation. It is possible
to run NOMAD without docker. You can infer the necessary steps from the provided docker-compose.yaml
.
Troubleshooting¶
Here are some common problems that may occur in an OASIS installation:
`jwt.exceptions.ImmatureSignatureError: The token is not yet valid (iat)`: The authentication information from the central authentication is contained in a special piece of signed information (a JWT) that contains details about the signed-in person. This information also contains a timestamp, called `iat`, which indicates the point in time at which the information was issued. The above error indicates that the server looking at the token thinks it has not been issued yet.
The underlying reason is a time difference between the two servers (the one creating the JWT and the one validating it), as these might very well be different physical machines. To fix this problem, you should ensure that the time on the servers is up to date (e.g. a network port on the server may be closed, preventing it from synchronizing the time). Note that the servers do not need to be in the same timezone, as internally everything is converted to UTC+0.
NOMAD in networks with restricted Internet access¶
Some network environments do not allow direct Internet connections and require the use of an outbound proxy. However, NOMAD needs to connect to the central user management or elasticsearch, and thus requires an active Internet connection (at least on Windows) to work.
In these cases, you need to configure docker to use your proxy. See https://docs.docker.com/network/proxy/ for details.
An example file `~/.docker/config.json` could look like this:
{
  "proxies": {
    "default": {
      "httpProxy": "http://<proxy>:<port>",
      "httpsProxy": "http://<proxy>:<port>",
      "noProxy": "127.0.0.0/8,elastic,localhost"
    }
  }
}
Since not all used services respect proxy variables, one also has to change the docker compose config file `docker-compose.yaml` for elasticsearch to:
elastic:
  restart: unless-stopped
  image: elasticsearch:7.17.24
  container_name: nomad_oasis_elastic
  environment:
    # ES_JAVA_OPTS must be a single entry; duplicate entries would override each other
    - ES_JAVA_OPTS=-Xms512m -Xmx512m -Djava.net.useSystemProxies=true -Dhttps.proxyHost=<proxy> -Dhttps.proxyPort=<port> -Dhttps.nonProxyHosts=localhost|127.0.0.1|elastic
    - discovery.type=single-node
  volumes:
    - elastic:/usr/share/elasticsearch/data
  healthcheck:
    test:
      - "CMD"
      - "curl"
      - "--fail"
      - "--silent"
      - "http://elastic:9200/_cat/health"
    interval: 10s
    timeout: 10s
    retries: 30
    start_period: 60s
Unfortunately, there is no way yet to use the NORTH tools with the central user management, since the jupyterhub spawner does not respect proxy variables. It has not been tested yet with an authentication setup that does not require the proxy, e.g. a local keycloak server.
If you have issues, please contact us on Discord in the oasis channel.
NOMAD behind a firewall¶
It is also possible that your docker containers are not able to talk to each other. This could be due to restrictive settings on your server. The firewall should allow both inbound and outbound HTTP and HTTPS traffic, and the corresponding rules need to be added. Furthermore, inbound traffic needs to be enabled for the port used by the nginx service.
In this case, you should make sure this test runs through: https://docs.docker.com/network/network-tutorial-standalone/
If not, please contact your server provider for help.
Elasticsearch and open files limit¶
Even when run in docker, elasticsearch might require you to change your system's resource limits, as described in the elasticsearch documentation here. You can temporarily change the open files limit like this: