C# On Linux ARM

Adam C. Clifton
26 Jun 2022

MNT Reform

Here's my MNT Reform. It's a laptop with a mechanical keyboard, and while it is physically robust, it's not a high performance monster. The performance is adequate, but it can't handle all the bloat of modern software.

I hope to turn this little laptop into a distraction free environment for a lot of my development of GameStrut. It's still in the very early stages, but I know at the very least it will involve C# to support Unity3d. So I've been experimenting with how to build and debug C# on Linux ARM.

Runtimes

The two ways of running compiled code that I've looked into are dotnet and Mono.

.NET Core, aka dotnet, is the successor to the original .NET Framework. It adds the cross platform support the original was lacking, but also removes some things that were available in the previous incarnation. The cuts are not a big deal for me: I'll be writing everything from scratch anyway, so I have no legacy dependencies or requirements, and my code is usually pretty vanilla so it will run anywhere.

I had some trouble installing dotnet via the Microsoft apt package server on my Reform, as it is using Debian 12, and perhaps also because it is ARM. I was able to install easily using the dotnet-install.sh script and manually adding the ~/.dotnet directory to the PATH.
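
For reference, the manual install went something like this; a sketch, where the LTS channel is just one option and the PATH line assumes the script's default install location:

wget https://dot.net/v1/dotnet-install.sh
chmod +x dotnet-install.sh
# Install the current LTS release; --channel can also take a specific version.
./dotnet-install.sh --channel LTS
# The script installs to ~/.dotnet by default, so add that to PATH (e.g. in ~/.bashrc).
export PATH="$PATH:$HOME/.dotnet"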

The other option is Mono. It has been around for a long time, initially developed as an open source Linux runtime for the .NET Framework. I'll probably lean towards Mono as I used it previously for Puzzle Quest 3 and had no problems there. It is also the closest to what Unity3d uses internally, so it would be good to stay somewhat in sync there as well.

While Mono is available in the Debian apt repository, it's a bit old, so it's best to follow the install instructions on mono-project.com to get the latest version.
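
On Debian those instructions boil down to adding Mono's apt repository and signing key, roughly as below; the exact key and suite name change over time, so do check the site first:

sudo apt install apt-transport-https dirmngr gnupg ca-certificates
# Import the Mono project signing key (as listed on mono-project.com; verify it there).
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
# Add the repository, substituting the suite to match your Debian release.
echo "deb https://download.mono-project.com/repo/debian stable-buster main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list
sudo apt update
sudo apt install mono-devel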

Building

C# code is usually built from a project file generated by Visual Studio. It's just a simple XML file, and there's enough info on the net if I need to modify one or build it from scratch.
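
For illustration, here's about the smallest modern (SDK style) project file, written out with a shell heredoc; the project name and target framework here are placeholders, and Visual Studio generated projects are more verbose but follow the same XML shape:

# Hypothetical minimal project file.
cat > Hello.csproj <<'EOF'
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
</Project>
EOF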

The msbuild command works the same way as its Windows counterpart, building a project and outputting the results with little fuss. It has lots of options and is reliable, but it is a little slower than I'd like.

csc is the official Microsoft C# compiler (and probably what msbuild uses internally). It does not know anything about projects, so you must supply all the files to be compiled as command line arguments, similar to something like gcc.

mcs is the Mono version of the C# compiler. As far as I can tell it's basically the same as the official counterpart, but a little faster.

I've run a very unscientific speed test on these ways of building a project:

time msbuild
real    0m12.497s

time csc -recurse:*.cs -out:out.exe
real    0m5.476s

time mcs -recurse:*.cs -out:out.exe
real    0m2.575s

This was a smallish project, but it did have about 70 tiny source files. Mono's mcs is the clear winner here, but it remains to be seen whether these differences will matter much in a larger project.

IDEs

While I can use a text editor and build and run from the command line, it would be nice to have a development environment to help.

There's a couple of features I'm specifically looking for:

  • Omnibox - A feature in newer IDEs, similar to web browsers, where a single textbox can accept many different commands. Though primarily I'm interested in being able to jump to files by name, to quickly move about the project.
  • IntelliSense - This is where the IDE scans your project and uses that information to make your life better: things like auto complete for variable and function names, and being able to jump directly to where things are defined.
  • Debugging - The ability to set breakpoints directly in the IDE and inspect a running app, like selecting a variable in the source file and seeing its current value. This can be a lot easier than setting up a bunch of print statements and running the program to spit them out. (There's a command line sketch of the Mono side of this below.)
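
As a point of reference for that last item, the Mono runtime has a built in soft debugger that IDEs attach to. A sketch of starting it by hand, assuming the mcs style build from the previous section (the port number is arbitrary):

# Build with debug symbols so the debugger has something to work with.
mcs -debug -recurse:*.cs -out:out.exe
# Start the app suspended, listening for a debugger client on port 55555.
mono --debug --debugger-agent=transport=dt_socket,server=y,suspend=y,address=127.0.0.1:55555 out.exe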

Visual Studio Code

Fully featured, but on the slow side. I think it will unfortunately end up being just too slow to keep using in the long run, especially as the project and VSC itself get bigger over the years and start demanding more resources.

✓ Omnibox
✓ IntelliSense
✓ Debugging - I was having trouble with exceptions not being able to retrieve their info, but that's probably a weird config issue that could be fixed if I were committed to using VSC long term.

Monodevelop

A faster GUI than VS Code, but still a teeny bit on the slow side. Things like opening a project or starting a build can stall for a little bit. I would probably use this for some deep debugging alongside a different editor.

x Omnibox
✓ IntelliSense
✓ Debugging

Sublime Text

A fast enough GUI; it could be better with things like startup time, but I can live with it. I'm using it right now to write this blog post!
The problem is I was unable to get the C# plugin to actually work :(.

✓ Omnibox
x IntelliSense - The Sublime Text 3 plugin for C# is abandoned and no longer working.
x Debugging - Even with the plugin I don't think this was supported.
x Jump to error

Geany

A very speedy editor that unfortunately does not have a lot of C# support.

✓ Omnibox - While it does not have an actual omnibox, Geany has several plugins to jump to files by name; I chose Quick Open.
x IntelliSense - Not available, but it does have some typing completion from class names in the project.
x Debugging - Not available.

Vim

I have used vim with OmniSharp in the past, but it felt a bit too clunky to me once several disparate packages were glued together to create the functionality.

And it felt like a long process of tweaking and learning to vim better was ahead of me...

✓ Omnibox - Jump to file can be added with the CtrlP plugin.
✓ IntelliSense - There is plugin support for all the expected OmniSharp functionality.
x Debugging - I have not had this set up; I assume it's possible, but it may be a lot of work.

Conclusion

At this point I'm not sure which IDE, or combination of IDEs, I'll start out with. Definitely something towards the faster end. The last worst/best option is to write my own IDE with the minimal feature set I want. But that's a very dumb idea. Stay tuned for my next post!


Dynamic Asset Downloading in Real Racing 2

Adam C. Clifton
13 Jun 2022

In the beginning...

Starting with Real Racing 2, we were looking to add a special GUI for managing the player's account and save games. We also intended to (and did!) use this UI across other games, so it was built with HTML and JavaScript instead of the C++ and OpenGL of the games themselves. It would have been difficult to insert ourselves into each game's input and rendering pipeline while also supporting other languages, so it was a lot easier to just pop up a web browser and do everything there.

We also supported making server requests from this embedded JavaScript. They went through the same system as normal game requests, so there was no real limit on what could be done there. This allowed us to create a reusable interface for core functionality (account management, save game backup and restore) that could be themed simply for each game, while allowing extra functionality to be added per game if needed; for example, Real Racing, Spy Mouse and Flight Control Rocket implemented their own leaderboards using this tech.

An in game leaderboard
Special thanks to Touch Arcade for this screenshot. Since the servers are no longer online, the leaderboards are now lost to the ages.

All the assets for this UI (HTML, JavaScript, images) were shipped with the game and loaded off disk to reduce load times. During development of this feature the edit and test loop was quite painful: we would have to edit the HTML, rebuild the game in Xcode, redeploy the whole game to the device and launch it before we could see our changes.

So, in an attempt to save time and restore my sanity, I developed a system to automatically refresh these files if they changed, when launching the game or returning from multitasking. This chopped a lot of steps from the process and saved a lot of time. I could now just edit the files and refresh them quickly on the device.

Eventually, to save the bandwidth of downloading every file every time, this was improved so the device could report which files it already had, and the server would send back only the updates needed.

The game itself also changed so that when loading a file it would first check the asset download folder, and only if the file wasn't there load the asset that shipped with the game. This simple change let us do a lot: we could substitute any asset in the game simply by serving a new one from the server.
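
The lookup itself is trivial; here's a sketch of the idea in shell (the real code was C++ inside the engine, and these directory names are made up for illustration):

# Hypothetical sketch: prefer a downloaded override if one exists,
# otherwise fall back to the asset that shipped with the game.
asset_path() {
    if [ -f "downloads/$1" ]; then
        echo "downloads/$1"
    else
        echo "shipped/$1"
    fi
}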

Flipped BMW Logo

Just as we were about to publicly release the game, in fact after the final submission had been made to the App Store, we found out that the BMW badge on one of the cars in the game was the wrong way around. The blue and white checker pattern was flipped, so we would not be able to get approval from BMW to release the game.

BMW logo, not actually flipped
Here is the car in question, with a blue arrow pointing to the badge that was flipped. But note that the logo is actually correct here!

This was the first big success of the asset replacement system, as we were able to seamlessly fix the logo, by downloading a new asset when our players first launched the game, well before they even had a chance to see the car for the first time.

The most straightforward way to fix the issue would have been to correct the texture that is drawn onto the car, but that was a large file, almost 1 MB if I recall correctly, so having every player download it was going to be a lot of data. We managed to find a more efficient fix by changing the 3D mesh of the car instead, flipping the texture coordinates to mirror the logo. This fix was about a tenth of the size.

Fixing Broken Save Games

A few months after release, some players reported bugs where the game would crash after completing a race. We eventually worked out that there was a fixed size array of results that was appended to after every race, and these players had simply run out of space.

The fix was straightforward enough: just keep the best result for every event and remove the rest from the array. But this would require us to ship an update and wait for it to pass review, so affected players would either have to wait a week for the fix to arrive before they could continue playing, or delete their saved game and start over from the beginning.

But since we already had the tools to back up and restore a player's save game, we had a third option: remotely fix the players' save games.

Using asset replacement, we added a new button to the HTML UI that would do three things:

  • Using the existing backup functionality, upload the save game to our servers.
  • Send a new request to the server to fix the player's most recent save. The game devs created a tiny command line program, based off the C++ game code, that would load a save, strip out the unnecessary results and write the file back to disk. A new endpoint was written on the server to pull the player's most recent save from the database, run the program to correct it, then update the database with the corrected save file.
  • Again using existing functionality, the client then restores the save game from the server, so now the local save file is fixed!

This allowed us to get a fix out within the day, without having to rush out a new build of the game that would have been delayed up to a week by QA and App Store review.

AB Testing

Once the game had been up and running for a while, we eventually extended the system to support AB testing. It was somewhat rudimentary: we'd select a control and an experiment group of players as they launched the game for the first time. For the experiment group we would send a different set of game assets that changed how the data driven game worked; for example, we could make cars cheaper or give more rewards. Then after some time we could compare the two groups of users in our analytics database and see what effect the changes had.


Static Site Hosting On Amazon S3 and Cloudfront

Adam C. Clifton
8 Jun 2022

This site is just simple generated HTML files that I host on AWS. It is delivered by Cloudfront, a network of servers around the world, so that requests can be handled by a server nearby rather than crossing oceans. This makes things speedy and also dirt cheap, as HTML websites are teeny tiny, so there's not much to store or transfer.

Below are the basic instructions so you can set this up for yourself. These instructions are also for me, so next time I want to setup a new site and have completely forgotten how to do this, I have something to refer to.

S3

We'll be creating two S3 buckets, one to store the website files and one to store the logs.

For the website bucket, the first thing is to give it a useful name; I'll be referring to it as [WEB_BUCKET_NAME] through the rest of this post. You'll also need to uncheck the box that blocks all public access, which is fine since the website is going to be public anyway. You should use the web interface here to upload a file to the bucket as well, preferably your default page, index.html. We can use this file to test that things are working before we're set up to upload the whole site.

Then you'll need to go in and tweak some properties; in particular, at the bottom there is the option for static website hosting. Enable that and set the index document to match your default page, index.html. Even though Cloudfront can serve everything directly from the bucket without this enabled, we want to use it as it handles default documents better.

The box we unchecked when creating the bucket only stopped blocking public access; we still need to enable it. To do that, go to the permissions tab and set this bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[WEB_BUCKET_NAME]/*"
        }
    ]
}

Now, in theory, your bucket should be available over HTTP; you can confirm that by pointing a browser to the website endpoint (adjust the region if your bucket isn't in us-east-1):
http://[WEB_BUCKET_NAME].s3-website-us-east-1.amazonaws.com/

This URL is also available at the bottom of the properties tab, now that we have enabled static hosting.
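
You can also check it from the command line:

# -I makes a HEAD request, so you just see the status line and headers.
curl -I http://[WEB_BUCKET_NAME].s3-website-us-east-1.amazonaws.com/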

Now we can set up the S3 bucket that stores the logs. It's much easier than the website bucket: just give it a name, [LOG_BUCKET_NAME], and leave public access blocked.

Account

We also need a user account to be able to upload files and download logs from our two buckets, so head over to IAM and start creating a new user. Pick a name and select the checkbox for "Access key - Programmatic access". We won't be setting any permissions at this point; we will edit the user later to add them directly, so just keep clicking next and eventually your user will be created.

You'll receive an Access key ID and Secret access key; make sure you record these for later. We'll refer to them as [S3_KEY] and [S3_SECRET].

Now jump back to the users list and click on your newly created user, then click "Add inline policy" and the JSON tab, and copy and paste in these permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": [
                "arn:aws:s3:::[WEB_BUCKET_NAME]",
                "arn:aws:s3:::[WEB_BUCKET_NAME]/*",
                "arn:aws:s3:::[LOG_BUCKET_NAME]",
                "arn:aws:s3:::[LOG_BUCKET_NAME]/*"
            ],
            "Sid": "Stmt1464826210000",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ]
        }
    ]
}

Click review policy, give it a name, then finally click create policy. Now our user should be all set up to upload and download from our S3 buckets.

Uploading

With the account set up, we can create a simple bash script to upload our HTML and assets to the server.

We'll be using the AWS CLI tools. On Debian or Ubuntu this can be installed with apt:
sudo apt install awscli

Then we can create a simple script to sync our local copy of the website to the S3 bucket:

#!/bin/bash
export AWS_ACCESS_KEY_ID=[S3_KEY]
export AWS_SECRET_ACCESS_KEY=[S3_SECRET]
export AWS_DEFAULT_REGION=us-west-2
aws s3 sync [LOCAL_WEB_DIR] s3://[WEB_BUCKET_NAME]/ --delete

Where [LOCAL_WEB_DIR] is the folder on your local disk, something like ./html/.
The --delete switch will delete anything in the bucket that is not in your local dir. This means that after the upload completes, the S3 bucket will be a perfect mirror of your local dir.
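
Since --delete removes files, it's worth knowing the AWS CLI also has a --dryrun switch that prints what a sync would do without actually doing it; handy the first time you run the script:

aws s3 sync [LOCAL_WEB_DIR] s3://[WEB_BUCKET_NAME]/ --delete --dryrun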

And with that your website should be up and running at the URL we tried earlier.

Cloudfront

Now it's time to get things running from our own domain, with https and over the CDN. So head over to the Cloudfront section of the AWS dashboard and create a new distribution.

Firstly, select your origin; this is the S3 bucket you have already set up, and it should be listed in the drop down box. You can leave most settings as the default all the way down until "Alternate domain name"; here you want to add the domain name you'll use for your site and select an SSL certificate for it. If you don't have one already, click request certificate and follow that process.

Lower down you can enable standard logging and select the bucket you created earlier to store the logs.

After completing creation, you should be able to access the distribution by its internal domain name, something like abc123.cloudfront.net.

Now, inside the DNS settings of your domain, you can set a CNAME to the Cloudfront internal domain name from above. Once the DNS updates (this can take a while depending on your previously set TTL), going to your domain in a web browser should bring up your website!
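
You can verify the record from the command line once it has propagated (www.example.com stands in for your domain here):

# Should print the Cloudfront internal domain, e.g. abc123.cloudfront.net.
dig www.example.com CNAME +short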

Logs

Similar to the upload script, we can make a script to download our logs from the server.

We can then use GoAccess to process the downloaded logs and generate an HTML report.

On Debian or Ubuntu it can be installed via apt:

sudo apt install goaccess

For information about installing on other platforms, check the GoAccess website.

From here we just need a script to download the logs and generate the report:

#!/bin/bash
export AWS_ACCESS_KEY_ID=[S3_KEY]
export AWS_SECRET_ACCESS_KEY=[S3_SECRET]
export AWS_DEFAULT_REGION=us-west-2
aws s3 sync s3://[LOG_BUCKET_NAME]/ ./log/
zcat log/*.gz | goaccess --log-format CLOUDFRONT --date-format CLOUDFRONT --time-format CLOUDFRONT -o report.html

After running that script you can check out report.html to see all the pretty graphs and stats from your log.


© Numbat Logic Pty Ltd 2014 - 2022