NoseyNick's Nebula - an Artemis SBS server-in-the-cloud

"Nebula" is my Artemis SBS server in the cloud. Artemis, cloud, Nebula, get it?

Specifically, it's a bunch of scripts that can kick off an AWS (or Google Cloud?) server running a fresh untouched Ubuntu Linux; it will install WINE and a bunch of other prerequisites, install and configure Artemis, run it, click "server", and be ready for your players to connect within 5 minutes. You can then browse to the server and remote-control it over the web, connect your own VNC client, or (if you're an expert) connect using only SSH - there are commands installed to "click buttons for you".

Contents...

Step 1: Pick your region and VM size (once)

Unfortunately an Artemis server requires a VM with a spec that's too big for the AWS "free tier". Your VM is going to cost you some money to run; however, I've found a spec that seems to run Artemis OK[*] and supports a reasonable number of clients/players for 17c/hr. I think 50c for 3hrs of Artemis is OK, isn't it? Remember how far 50c used to go at the arcade? When you're done with it you can completely nuke it - there are no disks to keep around, so there's no cost when NOT using the server.

Your "region" should probably be chosen according to where most of your players are located, with a bit of an eye on price. See also Varun Agrawal's AWS ping test or Jesús Federic's AWS latency test - you may want to ask all your players to try one for a few minutes and collect the results.

For price comparisons, start at AWS's "on demand" pricing page, or the neat EC2Instances.info tool. You want 4 vCPUs, maybe 8 for big multi-ship "fleet" games (Intel - NOT ARM / AWS Graviton, nor even AMD). Usually the "compute optimised" instances are cheapest. Nebula is best tested on c5.xlarge and c5.2xlarge. Artemis Cosmos 3 support is a bit experimental but seems to benefit from more CPU, probably 8 vCPUs (t3a.2xlarge? c6a.2xlarge?) or maybe even 16 vCPUs (c6a.4xlarge? c5.4xlarge?). You should obviously check prices for yourself (I am not going to pay your bill for you), but last time I checked (2022-08-06) some appropriate prices (all vcpu=4) were:

0.196USD/hr: vcpu=4 memory=8GiB region=ap-southeast-1 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 memory=8GiB region=eu-west-3 instanceType=c5.xlarge
0.214USD/hr: vcpu=4 memory=8GiB region=ap-northeast-1 instanceType=c5.xlarge
0.17USD/hr:  vcpu=4 memory=8GiB region=us-east-1 instanceType=c5.xlarge
0.17USD/hr:  vcpu=4 memory=8GiB region=us-east-2 instanceType=c5.xlarge
0.17USD/hr:  vcpu=4 memory=8GiB region=ap-south-1 instanceType=c5.xlarge
0.228USD/hr: vcpu=4 memory=8GiB region=af-south-1 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 memory=8GiB region=eu-south-1 instanceType=c5.xlarge
0.192USD/hr: vcpu=4 memory=8GiB region=ap-northeast-2 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 memory=8GiB region=us-west-2-lax-1 instanceType=c5.xlarge
0.222USD/hr: vcpu=4 memory=8GiB region=ap-southeast-2 instanceType=c5.xlarge
0.192USD/hr: vcpu=4 memory=8GiB region=eu-west-1 instanceType=c5.xlarge
0.214USD/hr: vcpu=4 memory=8GiB region=ap-northeast-3 instanceType=c5.xlarge
0.186USD/hr: vcpu=4 memory=8GiB region=ca-central-1 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 memory=8GiB region=eu-west-2 instanceType=c5.xlarge
0.17USD/hr:  vcpu=4 memory=8GiB region=us-west-2 instanceType=c5.xlarge
0.194USD/hr: vcpu=4 memory=8GiB region=eu-central-1 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 memory=8GiB region=us-gov-west-1 instanceType=c5.xlarge
0.212USD/hr: vcpu=4 memory=8GiB region=us-west-1 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 memory=8GiB region=us-gov-east-1 instanceType=c5.xlarge
0.196USD/hr: vcpu=4 memory=8GiB region=ap-southeast-3 instanceType=c5.xlarge
0.182USD/hr: vcpu=4 memory=8GiB region=eu-north-1 instanceType=c5.xlarge
0.216USD/hr: vcpu=4 memory=8GiB region=ap-east-1 instanceType=c5.xlarge
0.262USD/hr: vcpu=4 memory=8GiB region=sa-east-1 instanceType=c5.xlarge
0.211USD/hr: vcpu=4 memory=8GiB region=me-south-1 instanceType=c5.xlarge

Step 2: AWS account (once)

You'll need to get yourself an AWS account ("create a free account"), unless you already have one of course!

You no longer need to create or import an SSH keypair in the right region, but it can still be handy. If you're on Linux you probably already know what SSH is and probably already have an SSH key - you can import your id_rsa.pub. If you're on Windows you might want PuTTY (a Secure Shell client) - a way to connect securely across the internet to your Nebula server - but you probably don't need it any more: you can just browse to your Nebula server (see below).
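If you do still want a keypair and you prefer the AWS CLI (v2), importing an existing public key looks roughly like this - a sketch, where the key name is a placeholder and the region must match the one you picked in Step 1:

aws ec2 import-key-pair --region us-east-1 --key-name my-nebula-key \
  --public-key-material fileb://$HOME/.ssh/id_rsa.pub   # your existing public key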

You'll want a Network Security Group in the right region. Call it "2010", and allow inbound SSH (TCP 22), HTTP (TCP 80), and Custom TCP 2010, from Anywhere. Artemis Cosmos also needs UDP 2010, 2023, and 3100 (see the Cosmos notes below).

Click DONE.

NOTE: There's no cost for any of the above SO FAR
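If you prefer the AWS CLI over the web console, create-sg.sh (see Optional Extras below) can create the "2010" group for you, or a rough hand-rolled sketch - assuming your account's default VPC, with us-east-1 standing in for your chosen region - looks like:

aws ec2 create-security-group --region us-east-1 \
  --group-name 2010 --description "Artemis Nebula"
for PORT in 22 80 2010; do        # SSH, HTTP (Web UI), Artemis
  aws ec2 authorize-security-group-ingress --region us-east-1 \
    --group-name 2010 --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done
# Artemis Cosmos also needs UDP 2010, 2023 and 3100 (see the Cosmos notes below):
for PORT in 2010 2023 3100; do
  aws ec2 authorize-security-group-ingress --region us-east-1 \
    --group-name 2010 --protocol udp --port "$PORT" --cidr 0.0.0.0/0
done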

Step 3: Start your VM

Rookies / cadets who prefer the AWS web console...

  • Fire up the "Launch Instance Wizard"
  • Make sure you're in the right region (drop-down in the top-right)
  • Select Ubuntu Server (Nebula prefers Ubuntu 20.04 LTS. X86 NOT ARM)
    ... or for Artemis Cosmos 3, find an Ubuntu 23.10 image on Ubuntu's image locator, again x86/amd64 NOT ARM.
  • Choose an Instance Type: c5.xlarge (or whatever you chose above, probably c6a.2xlarge for Artemis Cosmos 3)
  • "Configure Instance Details" (NOT review+launch)
  • "Shutdown behaviour: Terminate". You do not need to keep the VM around, you can nuke it when done, and start again "from scratch" next time
  • Advanced Details... paste this into "User data" (or see Optional Extras for other things that you can set in here):
    #!/bin/bash
    wget -t3 -T3 -O- https://noseynick.org/artemis/nebula/init.sh \
      | NAME=280stock VNCPASS=something bash
    It is STRONGLY recommended that you set VNCPASS to something secure, otherwise this can be (ab)used by your players, or by anyone who stumbles across the Web UI remote control and does a little research.
  • Skip to "Configure Security Group"
  • "Select an existing security group" and choose the "2010" one you made above (or you could make it for the first time - leave SSH/TCP/22 and add Custom/TCP/2010/Anywhere, HTTP/TCP/80)
  • Review and Launch, Launch
  • Choose the SSH keypair you created/uploaded above (or you could make a new (disposable) one at this point)
  • Launch Instance
  • Return to the list of instances, make a note of the "Public IP" of your shiny new server!

Expert engineers who are using Linux and/or able to run BASH scripts and have the AWS CLI installed and configured...

Download start.sh. Check the variables at the top. You probably want to change SSHKEY=name-of-your-ssh-key, you may want to change REGION=us-east-1 and a corresponding IMAGE_ID=ami-12345678, and maybe INSTANCE_TYPE=c5.xlarge - see the notes at the top of the script. All except INSTANCE_TYPE can be set in a creds.sh script if you wish. Some settings are also driven by NAME, which can be set when you run the script (see below).
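For example, a minimal creds.sh sketch (the values below are placeholders - use your own key name, region, and AMI ID):

# creds.sh - read by start.sh; values are placeholders
SSHKEY=name-of-your-ssh-key   # the keypair name from Step 2
REGION=us-east-1              # your chosen region from Step 1
IMAGE_ID=ami-12345678         # an Ubuntu 20.04 x86_64 AMI ID for that region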

Then just run it:

./start.sh 280stock

Make a note of the Public IP it tells you.
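If you lose track of the IP (or close the terminal), you can ask the AWS CLI again - a quick sketch, with us-east-1 standing in for your chosen region:

aws ec2 describe-instances --region us-east-1 \
  --filters Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]' \
  --output text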

Step 4: Wait for it to start

Rookies / cadets who would prefer a web UI...

After a minute or so you should be able to connect a web browser to the Public IP above. Just http://192.0.2.3/ or whatever. You should see a log summary, which will go through approximately the following steps:

YYYY-MM-DD HH:MM:SS : ENV:
YYYY-MM-DD HH:MM:SS : Web UI install
YYYY-MM-DD HH:MM:SS : Keeping config
YYYY-MM-DD HH:MM:SS : Nebula server booting
YYYY-MM-DD HH:MM:SS : SSH config...
YYYY-MM-DD HH:MM:SS : APT updates and installs...
YYYY-MM-DD HH:MM:SS : X11 ...
YYYY-MM-DD HH:MM:SS : Downloading Artemis 271ben bits...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : VNC setup...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : Artemis mods and config ...
YYYY-MM-DD HH:MM:SS : Nebula Server is ready!
YYYY-MM-DD HH:MM:SS : DONE - stop the clock!

At this point you should see a link to a browser-based remote-control for your Artemis server - you'll need the password you set in VNCPASS above (you DID set VNCPASS=something, right?) - and then you should see your Artemis server running!

If it DOESN'T work, in particular if you see the message "Artemis failed to start properly" or if you see "Artemis.exe has crashed" in the remote-control, then you may need to restart Artemis. Click "In case of emergency: Restart Artemis" and wait a minute or so.

Worst-case, you could terminate the server and start again. See Step 6: Shutdown below, then go back to Step 3 above to start another VM

Expert engineers should be able to connect to it using SSH (or PuTTY) within a minute, for example if your Public IP was 192.0.2.3:

ssh -L5900:127.1:5900 ubuntu@192.0.2.3  # <-- use real IP here
tail -1000f /var/log/cloud-init-output.log  # to watch boot progress

If using PuTTY, you'll want to connect to the supplied IP, user "ubuntu", and arrange for a "local port forward" of local port 5900 to 127.0.0.1 port 5900.

If it works, you'll see a lot of headers like:

############################################################
#### YYYY-MM-DD HH:MM:SS : Web UI install
############################################################
... and a lot of other debugging output in between. When done, you should see something like...
#### YYYY-MM-DD HH:MM:SS : Waiting 30 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 29 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 28 secs for server...
tcp    0    0    0.0.0.0:2010    0.0.0.0:*    LISTEN
############################################################
#### YYYY-MM-DD HH:MM:SS : DONE - stop the clock!
############################################################
If it DOESN'T work, in particular if you don't see the "tcp 0 0 0.0.0.0:2010" line, you may want to hit control-C and restart the entire process by re-running init.sh, the same way it was run in Step 3 (with your own NAME and VNCPASS):
wget -t3 -T3 -O- https://noseynick.org/artemis/nebula/init.sh \
  | NAME=280stock VNCPASS=something bash
If it still doesn't work... Sorry, get hold of me (probably on the United Stellar Navy Discord server) and tell me what went wrong. I'll probably appreciate a copy of your /var/log/cloud-init-output.log file from Step 4 above, and/or I might want to log into your server to nose around - I'll discuss how to do that once you get hold of me.

Step 5: Play!

You, and your other players, can point your Artemis clients at the Public IP address (not the one shown on-screen in the remote control).

You're going to need to choose game type, skill level, Terrain/LethalTerrain/FriendlyShips/Monsters/Anomalies etc though, and you're going to need to start the game. There are now 3 ways to do this:

  1. Browse to the IP address of your server and use the web-based VNC remote-control OR
  2. Point a VNC client at 127.0.0.1, EG vncviewer 127.0.0.1 (see the sketch after this list) OR
  3. SSH to the machine and run a series of commands to click buttons for you, and (without being able to see them) HOPE it has done what you expected
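For option 2, a minimal sketch, re-using the SSH port-forward from Step 4 (192.0.2.3 stands in for your real Public IP):

ssh -L5900:127.1:5900 ubuntu@192.0.2.3   # tunnel VNC over SSH
vncviewer 127.0.0.1                      # then enter your VNCPASS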

Some of the special commands available to you are:

Some other useful scripts:

Some regular Linux commands that may be useful:

Step 6: Shutdown

From the VNC remote-control, click "In case of emergency: Power Off" in the bottom-left. This assumes you followed the advice for "Shutdown behaviour: Terminate"

If you are SSHed into the VM: sudo poweroff - again as long as you followed the "Shutdown behaviour: Terminate" advice

From the AWS web console, make sure you're in the right region (if you forgot, then yes, sorry, you're going to have to check every single region). Click on the VM in the list. Click "Actions", "Instance State", "Terminate".

From the AWS CLI: aws ec2 terminate-instances --region $REGION --instance-ids $ID
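If you've lost track of the region or instance ID, a hedged sketch that hunts through every region for running instances (assuming the AWS CLI is installed and configured):

for REGION in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws ec2 describe-instances --region "$REGION" \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress]' \
    --output text | sed "s/^/$REGION: /"
done

Then terminate-instances as above, with the region and instance ID it found.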

Video

Davis has made a video of the above - Thanks Davis! He takes you through using the AWS web console "Launch Instance Wizard", then use of PuTTY and VNC on Windows, and terminating via the AWS console. Some minor updates:

How It Works

Optional Extras

create-sg.sh can configure the "2010" Network Security Group for you.

aws-pricing.pl produces the price list above. It is not guaranteed to be up-to-date or correct - I'm not responsible for any errors on your AWS bill or anything.

A number of environment variables can be set in "Step 3: Start your VM", or if you are re-running manually in Step 4. The examples show NAME=280stock. If you're using start.sh from your own machine, you can set them in the environment, or in a creds.sh which will be read by start.sh. If you're using the AWS web console, you'll add them in your "User data" between the | and bash.
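For example, a User data sketch with a couple of extra variables set (variable names as documented elsewhere on this page; pick your own values, especially VNCPASS):

#!/bin/bash
wget -t3 -T3 -O- https://noseynick.org/artemis/nebula/init.sh \
  | NAME=280stock VNCPASS=something SHUTDOWN=5hours NEB_WIDTH=800 NEB_HEIGHT=600 bash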

Performance

Artemis.exe crashes very occasionally, usually at startup - probably no more or less often than any other (Windows) Artemis server, to be honest   😕

I usually test by cranking everything up to level 11, "Many Many Many Many Many", making a few dozen connections from another server, and turning on the undocumented AI player (press "E" on the server). This isn't quite the same as testing with real players, the connections aren't "doing anything", but it does prove that the server can handle the same number of in-game objects, and push the packets onto the network. It gets less INPUT than a real game but I'm not sure if there's much I can do about that.

We have tested with fair-sized groups of human players too. "Load average" stats during typical USN games show the machine using about 2.5 - 3.5 of the 4 CPUs allocated to it, with no "steal time", so Artemis seems to not be bottle-necked. AWS still uses "powers of 2" CPUs, so 2 would be too few, 8 would be too many, 4 seems to be the Goldilocks "just right".
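If you want to check this for yourself during a game, standard Linux tools over SSH will do it - a quick sketch, nothing Nebula-specific:

uptime               # load average, compare against the number of vCPUs
top -bn1 | head -5   # the "%Cpu(s)" line shows steal time as "st"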

Rear Admiral Dave Trinh of the TSN 1st Light Division has referred to this as "our blazing fast Nebula server", has presented Nebula at some trans-Canada events, and is hoping to work up to a full 8-ship fleet with dozens of online players.

Starry commented on the speed, performance, and lack of lag ("even from the UK") when playing some of our test games running in AWS Ohio.

Davis at Eastern Front is testing it out, "Good News! Nosey Nick Waterman has found a way to create a server that cuts the cost of a server by A LOT. our first server was projected to cost ABOUT 60.00 a month now with his creation we are running at .15 an hour of play! I would like to thank Nosey Nick Waterman for all the developing he has done for the community's to enjoy this game in this way" [...] "Fully tested and got 1 FULL game played and only cost .25 soooooo yea".

After Nebula USN games, I used to get my Artemis-Puppy bot to post a quick survey (see puppy-poll.sh), which looks a bit like...

Artemis-Puppy BOT (with its cute puppy avatar): "Thanks Crew! Please rate the performance of this server - it's a bit of an experiment and I'd appreciate feedback:"

  A: Marvellous
  B: Pretty good
  C: Fair
  D: Not good
  E: Bad
  F: Awful

Total votes so far (2019-02-19; the bot itself votes for A-F to make the options appear, making it LOOK like every letter has at least 1 vote per survey, but these "fake votes" have been removed from these totals):

  A: 8   B: 10   C: 2   D: 1   E: 0   F: 0

... so it looks like USUALLY the players agree these servers are "Pretty good" to "Marvellous" 😀

Version History

I've not kept a very accurate version history, but some noted changes / milestones are:

2017-12-15 First recorded Artemis-on-WINE-on-Linux-on-AWS tests for USN
2018-01-21 Tested with 2 player ships and 4 fighters for nearly 1hr game
2018-03-01 First recorded use with the name "Nebula"
2018-03-09 First reporting back to Discord
2018-03-22 First archived copy of cloud-init-output.log
2018-04-16 Tested at level 11 with 3 player ships and many fighters
2018-08-02 First Google Cloud tests - Nebula goes multi-cloud!
2018-11-14 First recorded use with Eastern Front (no EF mod)
2019-02-10 Tested in us-east-2 for TSN Canada fleet
2019-02-15 Big 1hr & 45min multi-ship TSN 4LD Canada games - performance was great!
2019-02-16 Added mission downloads especially for nginecho and OpenSpace. Better system load reports. Clarified license / copyright
2019-02-18 Davis for Eastern Front made a HOWTO YouTube vid - Thanks Davis!
2019-02-21 Upgraded from Ubuntu 16.04 to 18.04 LTS. Ubuntu WINE is now fine (and much faster than winehq). Added support for Eastern Front mod + ships + missions. Doc overhaul
2019-02-26 Updated aws-pricing.pl for new URLs / formats
2019-03-03 Downloader improvements. Initial support for TSN-RP mods + sandbox + ships. Tidied init.sh, modularised mkclick, keepconf, missions, downloader, SSH config, others. Doc polishing. AWS ping test links
2019-03-04 start.sh dumps USERDATA for better debug / re-use
2019-03-07 Simplified / modularised ship name scripts. Further downloader improvements. Improved PublicIP detection. Support for list of MISSIONS=URLs
2019-03-14 TSN-RP ship names updated. New nebula@ email address. More doc updates. Updated AWS prices. Moved to /artemis/nebula URL to reflect more-than-AWS support. More consistent logging to ~/logs/
2019-03-17 Introduced Web UI / VNC! Fixed VNCPASS (now much more important!)
2019-03-23 Major doc overhaul, mostly to reflect Web UI, and add this changelog!
2019-03-31 Added support for Starry's Hermes, SHUTDOWN=5hours, more reliable (longer) click-* commands, mini-menu for Power Off / Restart Artemis. TeamSpeak support.
2019-04-11 Added Artemis 2.7.2 beta (stock). Fixed a SHUTDOWN=Nhours bug. Ran TSN 4th Light Division (London, Ontario, Canada) games.
2019-04-14 Added Artemis 2.7.2 beta with Ben's / TSN-RP / EF mods (untested). Upgraded EF Mod to V1.1. Fixed minor html5 bugs in documentation. Fixed covering "back" button.
2019-04-23 Rolled 2.7.1 EF Mod back to previous (unspecified) version to workaround crash bug.
2019-05-01 The announcement of Nebula's first serious 8-ship test!
2019-05-03 Borrowed DaveT's fancy Nebula image from the above announcement, and installed it into the boot / log screen of running Nebula servers.
2019-05-10 Multi-ship PvP Arena mission game
2019-06-12 Updated for Eastern Front mod v1.2. TSN-RP ship updated. Some minor tidying.
2019-06-14 Fixed an old bug where "apt" was sometimes waiting for interactive input (which will never happen), despite asking it not to. If "APT package installs (may take a minute or so)" actually took MUCH longer, this is why, sorry :-/
2019-06-29 Added EC2Instances.info link, experimented with a1.xlarge (bad), t3a.xlarge (bad), t3.xlarge (works?)
2019-07-05 Added support for a bunch of versions of Artemis (2.7.0, 2.7.1, 2.7.2) with a bunch of versions of Ben's Mod (4.2.12, 4.3.2, 4.3.4, 4.3.5, and "latest")
2019-07-30 Fixed some obscure new compatibility bug between NoVNC / WebSockify
2019-09-04 Upgraded Teamspeak 3.6.1 to 3.9.1 - TS3 now works again instead of crashing. Added some code to start.sh to upload+use your (properly formatted) ~/.ssh/id_rsa.pub if no SSHKEY was specified
2019-09-06 Fixed some issues unzipping newer BensModLatest.zip
2019-11-16 Added NAME=tsn-rp-git for latest version from github
2019-12-01 Added NAME=271tsn-rp-s-git for special low-poly server version from github.
2019-12-06 Added front/left/right/rear/tac/lrs/info options to mainscreen.pl
2019-12-11 Configs and fleet config for TSN Trans-Canada exercise. Worked around another new(ish) bug in NoVNC.
2019-12-23 Added support for Artemis 2.7.4
2019-12-24 Lots of TCP Tuning... In our December coast-to-coast TSN-Canada 8-ship game, we had some issues with "hung connections", believed to be caused by the DSL/Cable/Mobile internet providers of the clients. Either way, we had many issues with "We lost our [Helm/Weap/Eng] console and can't get back on", then many game crashes, and TCP sockets stuck in FIN_WAIT / TIME_WAIT states that made it difficult to restart the server too. Our hypothesis was that "stuck (half-closed) TCP connections" were responsible for most, probably all, of these issues. It was hard to completely reproduce in later testing, but with deliberately-broken connections we could somewhat unreliably reproduce the issues. We can't fix the issues on all of the clients, but we CAN make the server more tolerant: it will now kill off any "stuck" TCP connections within about 12-15 seconds, and should also be much better at restarting, with a much lower delay from the FIN_WAIT / TIME_WAIT states too. Since this tuning, we have been unable to reproduce this type of crash, though obviously there are still plenty of other ways to crash Artemis 😕 We unfortunately CAN'T solve "we are locked out of helm and can't get back in" - there's only so much we can do on the outside of Artemis without having access to the code on the inside 😕
2019-12-30 Switched the web interface background image to the awesome "Nick on Engineering" image DaveT made of+for me
2020-02-26 Preliminary support for Artemis 2.7.5
2020-02-29 Some more 2.7.5-compatibility tweaks.
2020-03-01 International TSN-Canada multi-ship game on 2.7.5. Rock-solid, no crashes, no noted client hanging, but certainly no client lock-outs. Some lagginess for transatlantic players fixed by reducing server update rate. Overall: FUN!
2020-08-03 Experimental support for Empty Epsilon instead of Artemis - try setting EE_VER=EE-2020.04.09 with Ubuntu 18.04, or EE_VER=EE-2020.08.07 with Ubuntu 20.04
2020-08-08 Upgraded from Ubuntu 18.04 to 20.04 by default - see start.sh. Artemis seems fine under WINE, and newer beta EE works (as above). Dropped support for 16.04 - it needed a rather hacky WINE install.
2020-08-30 First scheduled Nebula server, where I pre-arranged in advance for a server to start for DaveT whilst I was away, and email him the credentials to remote-control it. Turns out the server started fine, but the email with the credentials bounced, for a totally trivial fixable reason - won't be a problem again. If you need to BOOK a Nebula server, speak to me, I can arrange for one to start even if I'm not around! 😀
2020-10-30 Added support for Eastern Front Mod 3.0.
Made shellcheck clean - should improve security and stability slightly.
2020-11-12 Made 2.7.5 (stock/vanilla) the new default - about time!
2021-02-11 8 Xim carriers, each with 6 Xim Bombers (no shuttles!) - WOW! 😀
2021-03-19 Some TeamSpeak-related tweaks for TSN Canada Joint Forces Operations
2021-03-21 Some fixes to the start scripts
2021-03-31 Added Empty Epsilon EE_VER=latest option. Some minor BASH shell fixes (found some that were not yet shellcheck clean)
2021-04-23 Updated some examples from 271ben to 275stock
2021-05-01 Documentation updates, especially around EE_VER and EE_PROXY. Improved debug log collection.
2021-08-09 Fixed some timeouts and retries, after spotting a server trying TWENTY TIMES (slowly) to fetch a mission script that wasn't there!
2021-11-27 Support for Artemis 2.7.99
2021-12-31 Support for Artemis 2.8.0, and updated most nebula links to https
2022-01-03 Nebula now fixes Artemis 2.8.0's torpedoTubeLoadingCoeff=1.0 in EF Mod (or Ben's Mod?) until the mod authors add the fix themselves
2022-01-28 Experimental support for 2.8.0 with TSN expansion. Disabled 2XX-tsn-rp-s-git "low poly" variants as this mod can no longer be downloaded from the official source.
2022-03-06 Added EF Mod 3.1
2022-03-23 Minor updates and doc tidying
2022-05-05 Now when discord CHAN=xxx and TOKEN=yyy are provided, game results will be sent to this channel, and when TESTCHAN=zzz is provided, it also gets sent the VNC remote control URL. Thanks for the idea Davis! Also some major tidying / cleaning of start.sh, and all scripts all now shellcheck clean again.
2022-05-27 Some improvements to start.sh to make it more raspberry-pi-compatible for Davis / Eastern Front's bot. Some other general tidying of several other scripts.
2022-08-03 Some doc tweaks. Missions and their logs are also now available on your Nebula server's built-in webserver - most useful for downloading mission logs (if any)
2022-08-06 Fancy logo, at last! 😀
Lots of other doc tweaks.
Experimental support for Artemis 2.8.1-beta, as seen at Artemis Armada 2022. You will need your own copies of the 2.8.1-beta client, then boot with NAME=281b-stock or NAME=281b-vanilla or just NAME=281b. Also available: NAME=281b-ben, NAME=281b-ef, NAME=281b-tsn and so on. Do not forget the b in 281b because it is a beta.
2022-11-20 Had some issues trying to download TeamSpeak server because it was too old. Upgraded code to find the latest instead of a specific version that will drift out-of-date.
2023-01-21 Artemis Cosmos 3.0.6.2 Pre-Release initial support!
Make sure your pre-prepared Network Security Group allows UDP 2010, 2023, and 3100 in addition to the usual TCP.
Requires Ubuntu 23.10, which you can probably most easily find on Ubuntu's image locator - search for (or pick from the dropdowns) 23.10 in your region, and x86/amd64 (NOT ARM). This is mostly for WINE 7.0.
You probably want 8 or even 16vCPUs. I am experimenting with 8vCPU INSTANCE_TYPE=t3a.2xlarge, c6a.2xlarge, t3.2xlarge, c6i.2xlarge, c5.2xlarge, or 16vCPU c6a.4xlarge or c5.4xlarge. Let me know what works for you!
You need more disk space too - increase the default Ubuntu Root Volume from the default 8GB to 10GB or even 12GB.
Use NAME=cos062test or NAME=3062PRE-REL.
Does NOT support MISSIONS, SHIPS, HERMES options (yet), nor the discord TOKEN, CHAN, TESTCHAN, nor the new NEB_WIDTH/NEB_HEIGHT (see below)
2023-03-10 Artemis Cosmos 3.0.6.6 Pre-Release See notes above. Use NAME=cos066test or NAME=3066PRE-REL
2023-04-14 Tested the VNC remote-control on an iPad! Works almost perfectly, but only after adding a new feature! Set NEB_WIDTH=800 NEB_HEIGHT=600 to set the Artemis.ini gameWindowWidth and gameWindowHeight!
2023-06-10 Improved mod download/fixup code. Upgraded Eastern Front NAME=280ef (newer "EF MOD SERVER 3.5"). Standardised Artemis Cosmos 3.0.X.Y pre-release download/install. Added NAME=cos070test or NAME=3070PRE-REL for Cosmos 3.0.7.0 PRE-RELEASE. Lots more testing Artemis Cosmos 3.0.X.Y releases, and better support for it, especially killing/restarting if/when it goes wrong. Minor bugfixes in HTML / logging / banner formatting.
2023-06-27 Support for Artemis 2.8.1, as seen at Artemis Armada 2022. Boot with NAME=281stock NOW THE DEFAULT! or NAME=281vanilla. Also available: NAME=281ben, NAME=281ef, NAME=281tsn and so on.
2023-07-08 I'm part of Thom's "inner circle", working on some later Cosmos 3.0.X.Y releases! Note UDP port 2023 is now used - make sure to add it to your pre-prepared Network Security Group!
2023-11-12 Cosmos 0.9.0 released to Steam Early Access, and Nebula is ready! Follow the Cosmos notes and use NAME=cos090test
2023-11-14 Some Cosmos 0.9.0 debugging for Darrin.

Coming Soon? Maybe?


(See also my other Artemis tools)