utils - compose for static server

This commit is contained in:
vadoli 2021-08-07 08:29:59 +00:00
parent 093205da18
commit acff670c63
10 changed files with 338 additions and 36 deletions

utils/README.md Normal file

@ -0,0 +1,123 @@
# Utilities

## Running a workspace on a cloud server

Running a workspace in the cloud has many benefits:
- you can use the workspace on any device with internet access, even a tablet
- it is great for collaboration (anyone can work together with you)
- you get access to a more powerful machine
- you can use the workspace for long-running or periodic jobs

There are 2 security considerations to take into account when running a workspace in the cloud:
1. encrypted https connection
2. authentication

To enable https and authentication for the workspace, add a reverse proxy to the workspace deployment.
The utility `remote.py` generates everything needed to run a workspace on a cloud server behind a reverse proxy:
self-signed certificates, a Traefik config and a docker-compose file. For example:
```
python remote.py --workspace="base-workspace" --port="8020" --host="68.183.69.198" --user="user1" --password="pass1"
```
**IMPORTANT: it is best to execute this Python script inside a workspace running in docker on your local laptop**

The following 5 arguments must be provided:
- `--workspace` - name of the workspace (all lowercase)
- `--port` - port for the workspace UI. The workspace will also take N consecutive ports after this one; base-workspace, for example, uses 10 ports (see the sketch below)
- `--host` - IP or hostname of the server where the workspace will be deployed
- `--user` - any username
- `--password` - any password
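
For illustration, here is a minimal sketch (not part of `remote.py`) of how the docker port mapping follows from `--port`; the assumption that base-workspace occupies 10 consecutive ports after the entry port is taken from the example above:
```
# Hypothetical helper: compute the docker port mapping a workspace will need,
# given the entry port and the number of consecutive ports it uses.
def port_range(entry_port: int, n_ports: int = 10) -> str:
    """Return a docker-style port mapping such as '8020-8030:8020-8030'."""
    last = entry_port + n_ports            # e.g. 8020 + 10 = 8030
    return f"{entry_port}-{last}:{entry_port}-{last}"

print(port_range(8020))  # -> 8020-8030:8020-8030
```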
After the command is executed, a new folder `remote` is created in the same directory. Copy this folder to the remote server.

*Hint: to copy the folder to the remote server, you can start a base workspace on the remote with a local volume mounted - `docker run --name space-temp -v /home/tmp:/tmp -p 8020-8030:8020-8030 -e WRK_HOST="<ip-of-your-remote-server>" alnoda/base-workspace` - copy the folder through it, and remove this temporary workspace right after that.*

On the remote server, cd into this folder and execute
```
docker-compose up -d
```
The workspace is now running, secured with https and user/password authentication. Note that a self-signed certificate is used, so the browser will
display a warning when you try to access the workspace UI at `https://<ip-of-your-remote-server>:8020`. Accept the warning and proceed.
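
To verify the deployment from your laptop, a minimal smoke test could look like this (a sketch: it assumes the `requests` library is installed and reuses the host, port and credentials from the example above; `verify=False` is only acceptable here because the certificate is self-signed):
```
# Smoke test (sketch): check that https and basic auth are working.
import requests

resp = requests.get(
    "https://68.183.69.198:8020",
    auth=("user1", "pass1"),   # the --user / --password passed to remote.py
    verify=False,              # self-signed certificate, skip verification
    timeout=10,
)
print(resp.status_code)        # 200 means the workspace UI is reachable
```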
## Serve Static Website

A web application should be deployed with a domain name and over https. For this we suggest using a docker-compose file.

### Example: generate and serve docs

Open the terminal of the workspace that has the UI, and build the docs:
> `cd /home/docs`
> `mkdocs build -d /home/static-server/doc`
You can check that the static website is now served by the built-in static web server, for example:
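
A quick sanity check from the workspace terminal (a sketch, assuming the build directory used above):
```
# Confirm that mkdocs produced output where the static server expects it.
import os

site_dir = "/home/static-server/doc"
assert os.path.isdir(site_dir) and os.listdir(site_dir), "docs were not built"
print(f"{len(os.listdir(site_dir))} entries in {site_dir}")
```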
SSH to the server where the workspace is running, and commit the workspace to a new image. Assuming the workspace name is `remote_workspace_1`:
> `docker commit remote_workspace_1 docs:0.1`

Now we will run a container from the image `docs:0.1` and add a Traefik reverse proxy with https. Before doing this, you need to buy a domain name
and set an A record for your new domain pointing to the IP of the server where the docs are running:
```
version: "3.3"
services:
traefik:
image: "traefik:v2.4"
container_name: "traefik"
command:
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.myresolver.acme.tlschallenge=true"
- "--certificatesresolvers.myresolver.acme.email=blackmaster@gmail.com"
- "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
ports:
- "443:443"
- "80:80"
volumes:
- "./letsencrypt:/letsencrypt"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
workspace:
image: "docs:0.1"
container_name: "workspace"
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.httprepl.redirectregex.regex=^http://(.*)"
- "traefik.http.middlewares.httprepl.redirectregex.replacement=https://$${1}"
- "traefik.http.middlewares.add-context.redirectregex.regex=^https:\\/\\/([^\\/]+)\\/?$$"
- "traefik.http.middlewares.add-context.redirectregex.replacement=https://$$1/doc/pages/home/home/"
- "traefik.http.services.STATICFS_URLhttp.loadbalancer.server.port=8022"
- "traefik.http.routers.STATICFS_URLhttp.service=STATICFS_URL"
- "traefik.http.routers.STATICFS_URLhttp.rule=PathPrefix(`/`)"
- "traefik.http.routers.STATICFS_URLhttp.entrypoints=web"
- "traefik.http.routers.STATICFS_URLhttp.middlewares=httprepl"
- "traefik.http.services.STATICFS_URL.loadbalancer.server.port=8022"
- "traefik.http.routers.STATICFS_URL.service=STATICFS_URL"
- "traefik.http.routers.STATICFS_URL.rule=Host(`elnoda.org`)"
- "traefik.http.routers.STATICFS_URL.entrypoints=websecure"
- "traefik.http.routers.STATICFS_URL.middlewares=basic-auth"
- "traefik.http.routers.STATICFS_URL.tls=true"
- "traefik.http.routers.STATICFS_URL.tls.certresolver=myresolver"
- "traefik.http.routers.STATICFS_URL.middlewares=add-context"
```
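
Note that the compose file above references a `basic-auth` middleware that still needs a `basicauth.users` definition, and since docker stores labels as a key/value map, of the two `...STATICFS_URL.middlewares` labels only the last one (`add-context`) takes effect. Below is a hedged sketch of how the missing labels could be generated; it mirrors the `make_authlabels` helper used by `remote.py` (see the hunk below), but it is only a guess at such a helper, not the actual implementation, and it assumes the `passlib` package is available:
```
# Sketch: generate the basic-auth users label and chain both middlewares.
from passlib.hash import apr_md5_crypt

def make_authlabels(user: str, password: str) -> list:
    # docker-compose requires `$` in the hash to be escaped as `$$`
    hashed = apr_md5_crypt.hash(password).replace("$", "$$")
    return [
        f"traefik.http.middlewares.basic-auth.basicauth.users={user}:{hashed}",
        # chain both middlewares so basic-auth is not overridden by add-context
        "traefik.http.routers.STATICFS_URL.middlewares=basic-auth,add-context",
    ]

print("\n".join(make_authlabels("user1", "pass1")))
```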


@ -130,7 +130,7 @@ def get_compose_dict(workspace_name, host_ip, start_port, user, password):
# Add Workspace values to the dict
y["services"]["workspace"] = {}
y["services"]["workspace"]["image"] = f"alnoda/{workspace_name}"
y["services"]["workspace"]["environment"] = {"WRK_HOST": host_ip}
y["services"]["workspace"]["environment"] = {"WRK_HOST": host_ip, "WRK_PROTO": https}
y["services"]["workspace"]["labels"] = get_workspace_labels(ep)
# Add auth
authlabels = make_authlabels(user, password)
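
For orientation, here is a sketch of how `get_compose_dict` might be driven; the signature is taken from the hunk above, while the import and the output path are assumptions:
```
# Sketch only: build the compose dict and write it out as docker-compose.yaml.
import yaml
from remote import get_compose_dict   # assumes remote.py is importable

compose = get_compose_dict(
    workspace_name="base-workspace",
    host_ip="68.183.69.198",
    start_port=8020,
    user="user1",
    password="pass1",
)
with open("remote/docker-compose.yaml", "w") as f:
    yaml.dump(compose, f)
```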

utils/staticserver.py Normal file

@ -0,0 +1,107 @@
"""
Utility to generate docker-compose.yaml file to launch
static file server using built-in static fileserver tool
python staticserver.py --image="docs:0.1" --domain="elnoda.org" --email="blackmaster@gmail.com" --homepage="doc/pages/home/home/"
"""
import os
import yaml
import shutil
import argparse
import textwrap
import subprocess
def get_workspace_labels(domain, homepage):
""" Create list of Traefik labels for the Workspace service
"""
labels = [
"traefik.enable=true",
"traefik.http.middlewares.httprepl.redirectregex.regex=^http://(.*)",
"traefik.http.middlewares.httprepl.redirectregex.replacement=https://$${1}",
"traefik.http.services.STATICFS_URLhttp.loadbalancer.server.port=8022",
"traefik.http.routers.STATICFS_URLhttp.service=STATICFS_URL",
"traefik.http.routers.STATICFS_URLhttp.rule=PathPrefix(`/`)",
"traefik.http.routers.STATICFS_URLhttp.entrypoints=web",
"traefik.http.routers.STATICFS_URLhttp.middlewares=httprepl",
"traefik.http.services.STATICFS_URL.loadbalancer.server.port=8022",
"traefik.http.routers.STATICFS_URL.service=STATICFS_URL",
"traefik.http.routers.STATICFS_URL.entrypoints=websecure",
"traefik.http.routers.STATICFS_URL.middlewares=basic-auth",
"traefik.http.routers.STATICFS_URL.tls=true",
"traefik.http.routers.STATICFS_URL.tls.certresolver=myresolver",
"traefik.http.routers.STATICFS_URL.middlewares=add-context"
]
varlab = [
f"traefik.http.routers.STATICFS_URL.rule=Host(`{domain}`)",
f"traefik.http.middlewares.add-context.redirectregex.replacement=https://$$1/{homepage}",
"traefik.http.middlewares.add-context.redirectregex.regex=^https:\\/\\/([^\\/]+)\\/?$$"
]
labels.extend(varlab)
return labels
def get_compose_dict(image, domain, homepage, email):
""" Create dict of values for docker-compose. This dict is
to be transformed into docker-compose.yaml
"""
traefik_command = [
"--providers.docker=true",
"--providers.docker.exposedbydefault=false",
"--entrypoints.web.address=:80",
"--entrypoints.websecure.address=:443",
"--certificatesresolvers.myresolver.acme.tlschallenge=true",
f"--certificatesresolvers.myresolver.acme.email={email}",
"--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
]
# Create dict with Traefik values
y = {}
y["version"] = "3.3"
y["services"] = {}
y["services"]["traefik"] = {}
y["services"]["traefik"]["image"] = "traefik:v2.4"
#y["services"]["traefik"]["container_name"] = "trafik_container"
y["services"]["traefik"]["command"] = traefik_command
y["services"]["traefik"]["ports"] = [
"443:443",
"80:80"
]
y["services"]["traefik"]["volumes"] = [
"./letsencrypt:/letsencrypt",
"/var/run/docker.sock:/var/run/docker.sock:ro"
]
y["services"]["workspace"] = {}
y["services"]["workspace"]["image"] = f"{image}"
y["services"]["workspace"]["labels"] = get_workspace_labels(domain, homepage)
return y
def main(cmd_args):
""" Create YAML file for deployment of static website using
static web server
"""
image = cmd_args.image
domain = cmd_args.domain
homepage = cmd_args.homepage
email = cmd_args.email
    try:
        os.remove("./docker-compose.yaml")
    except FileNotFoundError:
        # nothing to clean up on a fresh run
        pass
comp_dict = get_compose_dict(image, domain, homepage, email)
with open("./docker-compose.yaml", "a") as y:
y.write(yaml.dump(comp_dict, default_style='"'))
return
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--image")
parser.add_argument("--domain")
parser.add_argument("--email")
parser.add_argument("--homepage")
cmd_args = parser.parse_args()
main(cmd_args)
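
After running the utility, a quick sanity check of the generated file could look like this (a sketch, using the example values from the docstring above):
```
# Sketch: confirm the generated compose file parses and targets the right
# image and domain before copying it to the server.
import yaml

with open("docker-compose.yaml") as f:
    comp = yaml.safe_load(f)

labels = comp["services"]["workspace"]["labels"]
assert comp["services"]["workspace"]["image"] == "docs:0.1"
assert any("Host(`elnoda.org`)" in label for label in labels)
```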


@ -36,6 +36,10 @@ RUN echo "------------------------------------------------------ filebrowser, un
&& cd /opt/serve && . env/bin/activate && npm install -g serve \
&& echo "------------------------------------------------------ mkdocs" \
&& pip install -r /home/abc/installed-python-packages/mkdocs-requirements.txt \
&& echo "------------------------------------------------------ utils" \
&& git clone https://github.com/bluxmit/alnoda-workspaces /tmp/alnoda-workspaces \
&& mv /tmp/alnoda-workspaces/utils /home/abc/ \
&& rm -rf /tmp/alnoda-workspaces \
&& echo "------------------------------------------------------ user" \
&& mkdir -p /home/static-server \
&& chown -R abc /opt/cronicle \
@ -48,6 +52,8 @@ RUN echo "------------------------------------------------------ filebrowser, un
&& mkdir -p /var/log/filebrowser && chown -R abc /var/log/filebrowser \
&& mkdir -p /var/log/ungit && chown -R abc /var/log/ungit \
&& mkdir -p /var/log/static-file-server && chown -R abc /var/log/static-file-server \
&& mkdir -p /var/log/mkdocs && chown -R abc /var/log/mkdocs
&& mkdir -p /var/log/mkdocs && chown -R abc /var/log/mkdocs \
&& chown -R abc /home/abc/utils \
&& chown -R abc /home/abc/installed-python-packages
USER abc


@ -5,6 +5,37 @@
Base-Workspace is an attempt to use docker as a light-weight Virtual Machine with batteries included, intended to be used
entirely through WEB-based interfaces - its own WEB-UI, WEB-based terminal, filebrowser, visual scheduler and other applications.
#### Try it out
``` docker run --name space-1 --user=root -d -p 8020-8030:8020-8030 alnoda/base-workspace```
## Contents
* [Why this image](#why-this-image)
* [Use-cases](#use-cases)
* [Features](#features)
* [Launch Workspace](#launch-workspace)
* [Workspace terminal](#workspace-terminal)
* [Multiple workspaces](#multiple-workspaces)
* [Open more ports](#open-more-ports)
* [Docker in docker](#docker-in-docker)
* [Run on remote server](#run-on-remote-server)
* [Use Workspace](#use-workspace)
* [Install applications](#install-applications)
* [Schedule jobs with Cron](#schedule-jobs-with-cron)
* [Python](#python)
* [Node.js](#node.js)
* [Run applications and services inside the workspace](#run-applications-and-services-inside-the-workspace)
* [Manage workspaces](#manage-workspaces)
* [Start and stop workspaces](#start-and-stop-workspaces)
* [Create new workspace image](#create-new-workspace-image)
* [Manage workspace images](#manage-workspace-images)
* [Save and load workspace images](#save-and-load-workspace-images)
* [Move workspace to the cloud](#move-workspace-to-the-cloud)
## Why this image
> TL;DR
> You can provide your users with many virtual environments, manage just one server, and have less work with server configuration management.
@ -22,27 +53,6 @@ running inside the workspace.
Base-Workspace can be used as isolated environment on local machine, or as alternative to VM on the cloud server. It can run as root,
or as default **abc** user that is allowed to use *apt-get*.
## Contents
* [Use-cases](#use-cases)
* [Features](#features)
* [Launch Workspace](#launch-workspace)
* [Workspace terminal](#workspace-terminal)
* [Multiple workspaces](#multipl-workspaces)
* [Open more ports](#open-more-ports)
* [Docker in docker](#docker-in-docker)
* [Run on remote server](#run-on-remote-server)
* [Use Workspace](#use-workspace)
* [Install applications](#install-applications)
* [Schedule jobs with Cron](#schedule-jobs-with-cron)
* [Python](#python)
* [Node.js](#node.js)
* [Run applications and services inside the workspace](#run-applications-and-service-inside-the-workspace)
* [Manage workspaces](#manage-workspaces)
* [Start and stop containers](#start-and-stop-containers)
* [Create new workspace image](#create-new-workspace-image)
* [Manage workspace images](#manage-workspace-images)
* [Save and load workspace images](#save-and-load-workspace-images)
## Use-cases
@ -189,12 +199,12 @@ docker exec -it --user=root space-1 /bin/zsh
### Run on remote server
Because workspace is just a docker image, running it in cloud is as easy as running it on local laptop.
Because the workspace is just a docker image, running it on any other server is as easy as running it on a local laptop.
Running on remote server allows you to collaborate easily by providing access to the workspace for other users.
Running on a remote server makes it much simpler to collaborate, because you can simply share the workspace credentials with your peers, and they will be able to use it.
You can also run applications that should run permanently, and run jobs on schedule.
There are only 3 steps needed to run workspace in cloud:
The simplest deployment of the workspace requires only 3 steps:
- get virtual server on your favourite cloud (Digital Ocean, Linode, AWS, GC, Azure ...)
- [install docker](https://docs.docker.com/engine/install/) on this server
@ -215,6 +225,25 @@ If docker-in-docker is required, then
docker run --name space-1 -d -p 8020-8030:8020-8030 -e WRK_HOST="<ip-of-your-remote-server>" -v /var/run/docker.sock:/var/run/docker.sock alnoda/base-workspace
```
This launches the workspace in the cloud, but such a workspace is not secure: everyone who knows the IP of your server will be able to use it.

***You might want to restrict access to the workspace, and secure communication with it using encryption***

Base-Workspace contains a utility that generates everything needed to launch the workspace in the cloud securely.
If you want to run the workspace on the remote server securely, start Base-Workspace on your local laptop first, open its terminal and
use the utility `/home/abc/utils/remote.py` to generate a docker-compose project with TLS certificates. Simply execute

> `python /home/abc/utils/remote.py --workspace="base-workspace" --port="8020" --host="68.183.69.198" --user="user1" --password="pass1"`

**NOTE:** you have to specify the correct host (the IP of the server you want to run the workspace on), and a user and password of your choice.

You will see that the folder `/home/abc/utils/remote` is created. Copy this folder to the remote server (any location). SSH to the server, cd into
the directory you copied and execute `docker-compose up -d`.

That's it, your workspace is running securely on the remote server, using
self-signed TLS certificates for encrypted https communication between your laptop and the remote workspace,
and with authentication added.
## Use Workspace
@ -435,7 +464,33 @@ Pushing image to registry is merely 2 extra commands: 1) tag image; 2) push imag
You will be able to pull image on any device, local or cloud.
### Move workspace to the cloud
The ease of running the workspace in the cloud, and the ability to move workspaces between a local machine and a remote server,
is one of the main features of the workspace, and the reason why the workspace is entirely in docker.

It is often the case that an experiment started on a personal notebook requires more computational
resources, must run for a long period of time, or must be executed periodically. All of these are
reasons to move a workspace to a cloud server. Usually this is a hassle, but this workspace can be moved
to the remote server easily.

The easiest way to move the workspace to the cloud is to get your own private docker registry. Then moving a workspace from a laptop to
a remote server takes only 3 commands (a Docker SDK sketch follows the lists below):
1. [Commit the workspace to an image](#create-new-workspace-image)
2. [Push the workspace image to your docker registry](https://docs.docker.com/engine/reference/commandline/push/)
3. SSH to the remote server, and [run the workspace there](#run-on-remote-server)
If you don't want to use a container registry, then there are 2 more steps involved:
1. [Commit the workspace to an image](#create-new-workspace-image)
2. [Save the image to a file](#save-and-load-workspace-images)
3. Copy the file to the remote server. There are many options:
    - launch a filexchange workspace on the remote server
    - use [cyberduck](https://cyberduck.io/)
    - use [scp](https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/)
4. [Load the workspace image from the file](#save-and-load-workspace-images) on the remote server
5. [Start the workspace on the remote server](#run-on-remote-server)
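
For those who prefer scripting these steps, here is a hedged sketch using the Docker SDK for Python (`pip install docker`) instead of the CLI; the registry name and tags are placeholders:
```
# Sketch: commit, push, and (on the remote server) pull-and-run a workspace
# with the Docker SDK for Python. Registry and names are placeholders.
import docker

client = docker.from_env()

# 1. commit the running workspace container to an image
container = client.containers.get("space-1")
container.commit(repository="registry.example.com/my-workspace", tag="0.1")

# 2. push the image to your private registry (assumes prior `docker login`)
client.images.push("registry.example.com/my-workspace", tag="0.1")

# 3. on the remote server: pull the image and run the workspace
# client.images.pull("registry.example.com/my-workspace", tag="0.1")
# client.containers.run(
#     "registry.example.com/my-workspace:0.1",
#     ports={f"{port}/tcp": port for port in range(8020, 8031)},
#     detach=True,
# )
```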


@ -1,7 +1,7 @@
**This is a starting point to create docs for this workspace!**
> Don't neglect documenting your workspace! Soon you will forget what you were doing with it.
> This page is designed for you to modify it and write down everything you need to know next time you come here.
> This page is created for you to modify it and write down everything you need to know next time you come here.
In order to change this page, simply modify the file `/home/docs/docs/README.md`. Changes will be applied automatically - the
server that serves this page has live reload.


@ -37,6 +37,11 @@ def define_env(env):
host = os.environ["WRK_HOST"]
except:
pass
proto = "http"
try:
proto = os.environ["WRK_PROTO"]
except:
pass
# Entry port - port relative to which other ports will be calculated
entry_port = 8020
try:
@ -48,6 +53,6 @@ def define_env(env):
port = port_increments[env] + entry_port
except:
port = 80
return f"http://{host}:{port}"
return f"{proto}://{host}:{port}"


@ -13,7 +13,7 @@ nav:
# ===========================================================
site_name: Base Workspace
repo_url: https://github.com/Alnoda/workspaces-in-docker/tree/main/workspaces/base-workspace
repo_url: https://github.com/bluxmit/alnoda-workspaces
site_url: https://alnoda.org
edit_uri: ""


@ -32,7 +32,7 @@ When it runs on the remote server, access can be restricted with a password.
* [Save and load images](#save-and-load-images)
* [Move workspace to the cloud](#move-workspace-to-the-cloud)
* [Collaborate and share workspaces](#collaborate-and-share-workspaces)
* [Extra features](#extra-features)
* [Extend](#extend)
* [Java](#java)
* [Run applications permanently](#run-applications-permanently)
@ -455,11 +455,12 @@ resources, must be running for a long period of time, or executed periodically.
the reasons to move a workspace to the cloud server. Usually it is a hassle, but this workspace can be moved
to the remote server easily.
The easiest way to move workspace to the cloud is to get your private docker registry. Then to run workspace on remote server it is only 3 commands:
The easiest way to move workspace to the cloud is to get your private docker registry. Then moving a workspace from a laptop to
a remote server is only 3 commands:
1. [Commit the workspace to an image](#save-and-load-images)
2. [Push workspace to your docker registry](https://docs.docker.com/engine/reference/commandline/push/)
3. ssh to remote server, and run workspace from your registry
3. SSH to the remote server, and [run the workspace there](#run-in-cloud)
If you don't want to use a container registry, then there are 2 more steps involved:
@ -479,7 +480,7 @@ Same as with moving workspaces to the cloud - it is trivial to share workspaces w
- share common docker registry
- start workspace in cloud and collaborate in real time
## Extra features
## Extend
### Java


@ -36,6 +36,11 @@ def define_env(env):
host = os.environ["WRK_HOST"]
except:
pass
proto = "http"
try:
proto = os.environ["WRK_PROTO"]
except:
pass
# Entry port - port relative to which other ports will be calculated
entry_port = 8020
try:
@ -47,6 +52,6 @@ def define_env(env):
port = port_increments[env] + entry_port
except:
port = 80
return f"http://{host}:{port}"
return f"{proto}://{host}:{port}"