NoseyNick's Nebula - an Artemis SBS server-in-the-cloud

"Nebula" is my Artemis SBS server in the cloud. Specifically, it's a bunch of scripts that can kick off an AWS or Google Cloud server running a fresh, untouched Ubuntu Linux; install WINE and a bunch of other prerequisites; install and configure Artemis; run it; click "server"; and be ready for your players to connect within 5 minutes. You can then browse to the server and remote-control it over the web, connect your own VNC client, or (if you're an expert) connect using only SSH - there are commands installed to "click buttons for you".


Step 1: Pick your region and VM size (once)

Unfortunately an Artemis server requires a VM with a spec that's too big for the AWS "free tier", so your VM is going to cost you some money to run. However, I've found a spec that seems to run Artemis OK[*] and supports a reasonable number of clients/players for 17c/hr. I think 50c for 3hrs of Artemis is OK, isn't it? Remember how far 50c used to go at the arcade? When you're done you can completely nuke the VM: there are no disks to keep around, so no cost when you're NOT using the server.
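For a quick sanity check on that arithmetic (the rate here is the example c5.xlarge price from 2019-03; always check current prices yourself):

```shell
# Back-of-envelope session cost: hourly rate (in cents) times hours played.
RATE_CENTS_PER_HR=17   # example c5.xlarge rate, 2019-03
HOURS=3
echo "$((RATE_CENTS_PER_HR * HOURS)) cents"   # prints "51 cents"
```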

Your "region" should probably be chosen according to where most of your players are located, with a bit of an eye on price. See also Varun Agrawal's AWS ping test or Jesús Federic's WS latency test - you may want to ask all your players to try one for a few minutes and collect the results.

For price comparisons, start at AWS's "on demand" pricing page and filter to 4 vCPUs; usually the "compute optimised" types are cheapest. Nebula is best tested on c5.xlarge. You should obviously check prices for yourself (I am not going to pay your bill for you), but last time I checked (2019-03) some appropriate prices (all vCPU=4) were:

0.2140USD/hr: ecu=17 memory=8GiB region=ap-northeast-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1920USD/hr: ecu=17 memory=8GiB region=ap-northeast-2 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.2432USD/hr: ecu=Variable memory=16GiB region=ap-northeast-3 CPUType=IntelXeonFamily Family=GenPurpose instanceType=t2.xlarge
0.2520USD/hr: ecu=16 memory=7.5GiB region=ap-northeast-3 CPUType=IntelXeonE5-2666v3(Haswell) Family=Compute instanceType=c4.xlarge

0.1700USD/hr: ecu=17 memory=8GiB region=ap-south-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1960USD/hr: ecu=17 memory=8GiB region=ap-southeast-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.2112USD/hr: ecu=Variable memory=16GiB region=ap-southeast-2 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.2160USD/hr: ecu=NA memory=16GiB region=ap-southeast-2 CPUType=AMDEPYC7571 Family=GenPurpose instanceType=m5a.xlarge
0.2220USD/hr: ecu=17 memory=8GiB region=ap-southeast-2 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1856USD/hr: ecu=Variable memory=16GiB region=ca-central-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1860USD/hr: ecu=17 memory=8GiB region=ca-central-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1920USD/hr: ecu=Variable memory=16GiB region=eu-central-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1940USD/hr: ecu=17 memory=8GiB region=eu-central-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1728USD/hr: ecu=Variable memory=16GiB region=eu-north-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1820USD/hr: ecu=17 memory=8GiB region=eu-north-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1824USD/hr: ecu=Variable memory=16GiB region=eu-west-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1920USD/hr: ecu=17 memory=8GiB region=eu-west-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1888USD/hr: ecu=Variable memory=16GiB region=eu-west-2 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.2020USD/hr: ecu=17 memory=8GiB region=eu-west-2 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1888USD/hr: ecu=Variable memory=16GiB region=eu-west-3 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.2020USD/hr: ecu=17 memory=8GiB region=eu-west-3 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.2620USD/hr: ecu=17 memory=8GiB region=sa-east-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1664USD/hr: ecu=Variable memory=16GiB region=us-east-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1700USD/hr: ecu=17 memory=8GiB region=us-east-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1664USD/hr: ecu=Variable memory=16GiB region=us-east-2 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1700USD/hr: ecu=17 memory=8GiB region=us-east-2 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1984USD/hr: ecu=Variable memory=16GiB region=us-west-1 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.2120USD/hr: ecu=17 memory=8GiB region=us-west-1 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

0.1664USD/hr: ecu=Variable memory=16GiB region=us-west-2 CPUType=IntelSkylakeE52686v5(2.5GHz) Family=GenPurpose instanceType=t3.xlarge
0.1700USD/hr: ecu=17 memory=8GiB region=us-west-2 CPUType=IntelXeonPlatinum8124M Family=Compute instanceType=c5.xlarge

Step 2: AWS account (once)

You'll need to get yourself an AWS account ("create a free account"), unless you already have one of course!

You'll want an SSH keypair in the right region. If you're on Linux you probably already know what SSH is; if you're on Windows you'll probably want PuTTY Secure Shell. This is how you'll connect securely across the internet to your Nebula server.
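If you'd rather use the AWS CLI than the console, a keypair can be created like this (the key name and region here are just examples, not anything Nebula requires):

```shell
# Create an SSH keypair in your chosen region and keep the private key safe.
aws ec2 create-key-pair --region us-east-1 --key-name nebula-key \
    --query KeyMaterial --output text > nebula-key.pem
chmod 400 nebula-key.pem   # SSH refuses private keys that are world-readable
```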

You'll want a Network Security Group in the right region. Call it "2010", then:

Add an inbound rule: protocol TCP, port 2010, source "Anywhere" (unless you know all the IPs of all your players?!?). This will allow players to connect to the Artemis server, which runs on TCP port 2010.

Add an inbound rule: protocol TCP, port 22, source "Anywhere" (unless you want to be really specific about where you will connect from for administration).

Add an inbound rule: protocol TCP, port 80 (you might just be able to choose "HTTP"), source "Anywhere" (unless you want to be really specific about where you will connect from for the web remote-control). DONE.
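The same "2010" group can also be sketched from the AWS CLI (this assumes a default VPC and a us-east-1 region; adjust to taste):

```shell
# Create the security group and open the three inbound TCP ports described above.
aws ec2 create-security-group --region us-east-1 \
    --group-name 2010 --description "Artemis Nebula: game (2010), SSH (22), web (80)"
for port in 2010 22 80; do
  aws ec2 authorize-security-group-ingress --region us-east-1 \
      --group-name 2010 --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```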

There's no cost for any of the above SO FAR.

Step 3: Start your VM

Rookies / cadets who prefer the AWS web console...

  • Fire up the "Launch Instance Wizard"
  • Make sure you're in the right region (drop-down in the top-right)
  • Select Ubuntu Server (Nebula prefers Ubuntu 18.04 LTS; x86, NOT ARM)
  • Choose an Instance Type: c5.xlarge (or whatever you chose above)
  • "Configure Instance Details" (NOT review+launch)
  • "Shutdown behaviour: Terminate". You do not need to keep the VM around; you can nuke it when done and start again "from scratch" next time
  • Advanced Details... paste this into "User data":
    #!/bin/bash
    wget -T3 -O- http://noseynick.org/artemis/nebula/init.sh \
      | NAME=271ben VNCPASS=something bash

    It is STRONGLY recommended that you set VNCPASS to something secure otherwise this can be (ab)used by your players or anyone who stumbles across the Web UI remote control and does a little research.
  • Skip to step 6: "Configure Security Group"
  • "Select an existing security group" and choose the "2010" one you made above (or you could make it for the first time - leave SSH/TCP/22 and add Custom/TCP/2010/Anywhere)
  • Review and Launch, Launch
  • Choose the SSH keypair you created/uploaded above (or you could make a new one at this point)
  • Launch Instance
  • Return to the list of instances, make a note of the "Public IP" of your shiny new server!

Expert engineers who are using Linux and/or able to run BASH scripts and have the AWS CLI installed and configured...

Download start.sh. Check the variables at the top. You probably want to change SSHKEY=name-of-your-ssh-key, you may want to change REGION=us-east-1 and a corresponding IMAGE_ID=ami-927185ef, and maybe INSTANCE_TYPE=c5.xlarge - see the notes in the top of the script. All except INSTANCE_TYPE can be set in a creds.sh script if you wish. Some stuff is also driven off of NAME which can be set when you run the script (see below).
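A creds.sh might look something like this - the variable names are the ones mentioned above; the values are placeholders you must replace:

```shell
# Hypothetical creds.sh, read by start.sh (see the notes in start.sh itself).
SSHKEY=name-of-your-ssh-key   # the SSH keypair from Step 2
REGION=us-east-1              # pick from the price list in Step 1
IMAGE_ID=ami-927185ef         # Ubuntu 18.04 LTS AMI; must match REGION
```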

Then just run it:

./start.sh 271ben

Make a note of the Public IP it tells you.

Step 4: Wait for it to start

Rookies / cadets who would prefer a web UI...

After a minute or so you should be able to connect a web browser to the Public IP above. Just http://192.0.2.3/ or whatever. You should see a log summary, which will go through approximately the following steps:

YYYY-MM-DD HH:MM:SS : ENV:
YYYY-MM-DD HH:MM:SS : Web UI install
YYYY-MM-DD HH:MM:SS : Keeping config
YYYY-MM-DD HH:MM:SS : Nebula server booting
YYYY-MM-DD HH:MM:SS : SSH config...
YYYY-MM-DD HH:MM:SS : APT updates and installs...
YYYY-MM-DD HH:MM:SS : X11 ...
YYYY-MM-DD HH:MM:SS : Downloading Artemis 271ben bits...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : VNC setup...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : Artemis mods and config ...
YYYY-MM-DD HH:MM:SS : DONE - stop the clock!

At this point you should see a link to remote-control your Artemis server using a browser-based remote-control. You'll need the password you set in VNCPASS above (you DID set VNCPASS=something, right?), and then you should see your Artemis server running!

If it DOESN'T work, in particular if you see the message "Artemis failed to start properly" or if you see "Artemis.exe has crashed" in the remote-control, then you're currently out of luck.

Coming soon! a simple menu with the ability to restart Artemis.exe

In the meantime, unless you can follow the expert instructions below, you should probably terminate the server and start again: see Step 6: Shutdown below, then go back to Step 3 above to start another VM.

Expert engineers should be able to connect to it using SSH (or PuTTY) within a minute, for example if your Public IP was 192.0.2.3:

ssh -L5900:127.1:5900 ubuntu@192.0.2.3  # <-- use real IP address here
tail -1000f /var/log/cloud-init.log  # to watch boot progress

If using PuTTY, connect to the supplied IP as user "ubuntu", and arrange a "local port forward" from local port 5900 to 127.0.0.1 port 5900.

If it works, you'll see a lot of headers like:

######################################################################
#### YYYY-MM-DD HH:MM:SS : Web UI install
######################################################################
... and a lot of other debugging output in between. When done, you should see something like...
#### YYYY-MM-DD HH:MM:SS : Waiting 30 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 29 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 28 secs for server...
tcp    0    0    0.0.0.0:2010    0.0.0.0:*    LISTEN
######################################################################
#### YYYY-MM-DD HH:MM:SS : DONE - stop the clock!
######################################################################
If it DOESN'T work, in particular if you don't see the "tcp 0 0 0.0.0.0:2010" line, you may want to hit control-C and re-start the entire process:
wget -T3 -O- http://noseynick.org/artemis/nebula/init.sh \
  | sudo NAME=271ben VNCPASS=something bash
If it still doesn't work... Sorry, get hold of me (probably on the United Stellar Navy Discord server) and tell me what went wrong. I'll probably appreciate a copy of your /var/log/cloud-init.log file from Step 4 above, and/or I might want to log into your server to nose around - I'll discuss how to do that once you get hold of me.

Step 5: Play!

You, and your other players, can point your Artemis clients at the Public IP address.

You're going to need to choose game type, skill level, Terrain/LethalTerrain/FriendlyShips/Monsters/Anomalies etc though, and you're going to need to start the game. There are now 3 ways to do this:

  1. Point a VNC client at 127.0.0.1 (via the SSH port-forward from Step 4), EG vncviewer 127.0.0.1, OR
  2. Browse to the IP address of your server and use the (less secure) web-based VNC remote-control OR
  3. Run a series of commands to click buttons for you, and (without being able to see them) HOPE it has done what you expected
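The actual "click buttons for you" commands are installed on the server itself; under the hood, a helper like that can be built on xdotool against the server's X display. A purely illustrative sketch (the coordinates are made up, NOT the real Artemis button positions):

```shell
# Hypothetical "click for me" helper: move the pointer and left-click
# at a given position on the server's X display. Requires xdotool.
export DISPLAY=:0                   # the display the Artemis server runs on
xdotool mousemove 512 400 click 1   # coordinates are illustrative only
```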

Some of the special commands available to you are:

Some other useful scripts:

Some regular Linux commands that may be useful:

Step 6: Shutdown

From inside the VM: sudo /sbin/poweroff

From the AWS CLI: aws ec2 terminate-instances --region $REGION --instance-ids $ID

Coming soon! a simple menu with the ability to shutdown from within VNC (including the web-based remote-control)

From the AWS web console, make sure you're in the right region (if you forgot, then yes, sorry, you're going to have to check every single region). Click on the VM in the list. Click "Actions", "Instance State", "Terminate".
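If you're worried you may have left something running somewhere, the AWS CLI can sweep every region for you (read-only; it terminates nothing):

```shell
# List any running or pending instances in EVERY region - forgotten VMs cost money.
for r in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $r =="
  aws ec2 describe-instances --region "$r" \
      --filters Name=instance-state-name,Values=running,pending \
      --query 'Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress]' \
      --output text
done
```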

Video

Davis has made a video of the above - thanks Davis! He takes you through the AWS web console "Launch Instance Wizard", then using PuTTY and VNC on Windows, and terminating via the AWS console. Some minor updates:

How It Works

Optional Extras

aws-pricing.pl produces the price list above. It is not guaranteed to be up-to-date or correct, and I'm not responsible for any errors on your AWS bill.

A number of environment variables can be set in "Step 3: Start your VM", or when re-running manually in Step 4. The examples show NAME=271ben. If you're using start.sh from your own machine, you can set them in the environment, or in a creds.sh which will be read by start.sh. If you're using the AWS web console, you'll add them in your "User data" between the | and bash.

Performance

Artemis.exe crashes very occasionally, usually at startup - probably no more or less often than any other (Windows) Artemis server, to be honest   😕

I usually test by cranking everything up to level 11, "Many Many Many Many Many", making a few dozen connections from another server, and turning on the undocumented AI player (press "E" on the server). This isn't quite the same as testing with real players, the connections aren't "doing anything", but it does prove that the server can handle the same number of in-game objects, and push the packets onto the network.

We have tested with fair-sized groups of human players too. "Load average" stats during typical USN games show the machine using about 2.5 - 3.5 of the 4 CPUs allocated to it, with no "steal time", so Artemis seems not to be bottlenecked. AWS still uses "powers of 2" CPU counts, so 2 would be too few and 8 too many; 4 seems to be the Goldilocks "just right".
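You can check steal time yourself from /proc/stat: on the "cpu" line, the 8th number after "cpu" (awk field $9) is steal jiffies. A one-liner, shown here against a sample line so the output is predictable (on a real box, pipe in `head -1 /proc/stat` instead):

```shell
# Compute steal time as a percentage of all CPU jiffies.
# /proc/stat "cpu" line fields: user nice system idle iowait irq softirq steal ...
echo "cpu 100 0 50 800 10 0 5 35 0 0" | awk '/^cpu / {
  total = 0
  for (i = 2; i <= NF; i++) total += $i
  printf "steal: %.1f%%\n", 100 * $9 / total
}'   # prints "steal: 3.5%"
```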

Rear Admiral Dave Trinh of the TSN 1st Light Division has referred to this as "our blazing fast Nebula server", and is hoping to work up to a full 8-ship fleet with dozens of online players.

Starry commented on the speed, performance, and lack of lag ("even from the UK") when playing some of our test games running in AWS Ohio.

Davis at Eastern Front is testing it out, "Good News! Nosey Nick Waterman has found a way to create a server that cuts the cost of a server by A LOT. our first server was projected to cost ABOUT 60.00 a month now with his creation we are running at .15 an hour of play! I would like to thank Nosey Nick Waterman for all the developing he has done for the community's to enjoy this game in this way" [...] "Fully tested and got 1 FULL game played and only cost .25 soooooo yea".

After most Nebula USN games, I also get my Artemis-Puppy bot to post a quick survey (see puppy-poll.sh), which looks a bit like...

Artemis-Puppy BOT: Thanks Crew! Please rate the performance of this server - it's a bit of an experiment and I'd appreciate feedback:
A: Marvellous
B: Pretty good
C: Fair
D: Not good
E: Bad
F: Awful

Total votes so far (2019-02-19): A 8, B 10, C 2, D 1, E 0, F 0. (Note: the bot itself votes for A-F to make the options appear, making it LOOK like every letter has at least 1 vote per survey, but these "fake votes" have been removed from these totals.)

... so it looks like USUALLY the players agree these servers are "Pretty good" to "Marvellous" 😀

Version History

I've not kept a very accurate version history, but some noted changes / milestones are:

2017-12-15 First recorded Artemis-on-WINE-on-Linux-on-AWS tests for USN
2018-01-21 Tested with 2 player ships and 4 fighters for nearly 1hr game
2018-03-01 First recorded use with the name "Nebula"
2018-03-09 First reporting back to Discord
2018-03-22 First archived copy of cloud-init-output.log
2018-04-16 Tested at level 11 with 3 player ships and many fighters
2018-08-02 First Google Cloud tests - Nebula goes multi-cloud!
2018-11-14 First recorded use with Eastern Front (no EF mod)
2019-02-10 Tested in us-east-2 for TSN Canada fleet
2019-02-15 Big 1hr & 45min multi-ship TSN 4LD Canada games - performance was great!
2019-02-16 Added mission downloads especially for nginecho and OpenSpace. Better system load reports. Clarified license / copyright
2019-02-18 Davis for Eastern Front made a HOWTO YouTube vid - Thanks Davis!
2019-02-21 Upgraded from Ubuntu 16.04 to 18.04 LTS. Ubuntu WINE is now fine (and much faster than winehq). Added support for Eastern Front mod + ships + missions. Doc overhaul
2019-02-26 Updated aws-pricing.pl for new URLs / formats
2019-03-03 Downloader improvements. Initial support for TSN-RP mods + sandbox + ships. Tidied init.sh, modularised mkclick, keepconf, missions, downloader, SSH config, others. Doc polishing. AWS ping test links
2019-03-04 start.sh dumps USERDATA for better debug / re-use
2019-03-07 Simplified / modularised ship name scripts. Further downloader improvements. Improved PublicIP detection. Support for list of MISSIONS=URLs
2019-03-14 TSN-RP ship names updated. New nebula@ email address. More doc updates. Updated AWS prices. Moved to /artemis/nebula URL to reflect more-than-AWS support. More consistent logging to ~/logs/
2019-03-17 Introduced Web UI / VNC! Fixed VNCPASS (now much more important!)
2019-03-23 Major doc overhaul, mostly to reflect Web UI, and add this changelog!

Coming Soon? Maybe?


(See also my other Artemis tools)