[BB], 2021 in review

And here we are. The first of January 2022.

In many ways nothing changed: we still have covid, Brexit is still a thing and the cost of everything is still skyrocketing.
On the other hand plenty has changed. The handover of the Trump administration happened (after an attempted coup), Betty White literally just died and we’re getting vaccination shots out of the wazoo (which you should get if you haven’t already).

2021 wasn’t a bad year but it wasn’t a great year either, and I feel some of that is reflected in [BB].

Donations

Similar to 2020 we’re starting off with donations, for transparency and to give a further peek behind the curtain. As of writing we have a $700 surplus, which is the most (+/- $50) we’ve ever had in reserve, which is astronomical. [BB] isn’t run on the basis of making a profit, and assuming we didn’t receive another penny from today onwards we could run barebones operations for seven months before running out; for full service it’s more like five and a half months.

This is significant! It shows that our current monthly model for Platinum has a proven track record alongside all the other small bits and bobs we offer. Internally we’re regularly discussing whether we should bring more to the table, although we’re not entirely sure what that would be. If you have any ideas feel free to message someone in [BB] about it!
Last year I mentioned that if the surplus grew big enough we’d consider upgrading parts of (or the entire) machine, and if this continues it may very well be on the cards.

S&Box

Although S&Box isn’t part of [BB] at the moment, it’s unlikely it will be part of [BB]’s future either. Earlier this year I was granted access to S&Box and had a bit of a tinker with it to see what was viable, what it would take to port gamemodes like Surf across and what we could potentially end up with.

The discoveries were somewhat depressing, although enlightening in their own regard. S&Box is essentially a standalone game portal where Facepunch are attempting to entice developers (à la Roblox) into making unique and original content. This would be okay if not for a few problems:

  • There is no native Source 1 mounting functionality. This means:
    • Source 1 maps cannot simply be loaded up in S&Box
    • Source 1 models cannot simply be loaded up in S&Box
    • Source 1 materials cannot simply be loaded up in S&Box
  • Facepunch are declining to pay for a license for Half-Life / Source universe assets, so we have to rely on what they make and what we could potentially make.

Long story short: it’s not viable to consider any of the current [BB] projects for S&Box. There were some neat ideas (e.g. Towns) on display and the toolkit is nice, but the key difference is this: Garry’s Mod is a mod that exposes the functionality of the Source (1) engine, within which we can build toys; S&Box aims to give developers access to Source 2 in a similar fashion to Unreal or Unity, but entirely within Facepunch’s control.


Beyond that, 2021 was actually quite exploratory and refreshing for [BB], with a number of large changes that have probably had some of the biggest impacts ever!

Web API 2.0

As you may (or may not) have heard and seen, we have a new web API over at https://bbservers.dev/v2/ ! While I won’t retrace too much of what the previous blog posts state, the overall impact and development speed this has allowed for is fairly significant. The key thing to take away is that it allows us to side-load data without it having to go directly through the game servers. A small amount of both BBase and Surf is now actually powered via this web API and it’s running entirely without issue. Additionally we’ve had some people build systems using it, which has been really cool to see.

More and more functionality will continue to be built on this over 2022 and beyond which I can’t wait for and it’ll be real neat to see what people come up with as a response!

Surf

For Surf we had a win and a temporary postponement.

The win was that ranks came out! Here within [BB] we’re incredibly happy with how the ranking system turned out and with the overall reception to it. I think I’ve only seen one person state they’d prefer the 1/x system, which to me is a serious win and shows we’ve made something far more interesting to players of Surf. What has been a bit funny, though, is people making up their own theories / lore as to how the ranking system works, even though it tells you in F1 > Surf Rankings > Help. It has led to a few eye-rolling moments but that’s just how it is, I suppose!

What has helped the ranking system flourish further has been the addition of the Platinum Rivals / Challengers system. Our Platinum members seem to have appreciated this information and even the competition it creates, which has been fun to watch play out.

The postponement comes in the fact that the replay system isn’t out yet. Internally, a big focus within the last year for [BB] became scalability: making systems that work harmoniously, with adequate room to grow or to walk back if need be, and a big concern for the replay system was getting the storage mechanism wrong. Where replay currently stands is that we can record a player’s surf experience just fine – however storing that data, sending it to the client and unpacking it was proving tricky to “get right”: or at least what felt right logistically.
With the creation of the 2.0 web API this has been partially solved: clients can load the data without the server having to load it and then send it down. Replay hasn’t been forgotten, we just want to get it right!

Outside of the above, Surf continued to have a strong year and we welcomed many more players to the club! Updates to many maps were made, often in the name of stopping a few cheeky exploiters we had here and there (although I do offer my thanks to those showing us our blind spots).

Here’s to another good year for Surf!

BBase

BBase didn’t have any particularly ground-breaking changes this year other than support for the new Web API. In many regards it was a year of bug fixes (quotes working in chat messages!) and we added social controls for players for when there’s disruption on the servers.

One thing that has been noted (and we’ve had feedback on) is that some systems are starting to feel a bit creaky, or that we’ve outgrown them a bit, which I’m in agreement with. There are some loose plans on how we’ll go about correcting some of these issues – but I ask you to bear with us on this. Some of these systems date back as far as 2013, where we literally ported them from previous gamemodes that failed (e.g. the inventory system is from Life) or were designed with very specific parameters in mind (e.g. the timetrial system). Additionally there are only so many times we can update or rewrite parts of systems before we throw our hands in the air and simply cannot face them anymore.

But hey, at least you can turn other players’ volume down on the scoreboard now!

TTT

Is dead. Other people can do it better than us and I don’t think many people enjoyed our version of it. It’s gone and isn’t coming back. It’ll be replaced by Prop Hunt once I get the front-end sorted.

Other servers

Gofish, Climb and Deathrun aren’t specifically listed here. They’re still around and continue to get some players. Deathrun has some proposed changes coming but no guarantees. Climb can probably be designated “out of beta” now given it has timetrials, although it needs a final pass just to make sure we’re happy with it. Gofish: boats, anyone?


Closing Words

“Content” probably best sums up the state of affairs at [BB] in 2021. Given the turmoil and changes that have impacted everyone in 2021 (in personal lives and within [BB]) we’ve probably had the best possible outcome that we could ask for. We continue to provide a great experience to our players and in kind, our players give us a fantastic community.

Pour one out for those that couldn’t make it to 2022, and for the rest of us that have, let’s try to make it the best year yet.

New Web (API) Foundations (Part Seven, Finale)

So at this point we’ve covered:

  • Why we chose the language we did.
  • Why we rolled for GraphQL instead of REST.
  • What the queries looked like.
  • Hitting home that this couldn’t go out until authentication and authorization were done.

So what exactly is left?

There were two things left that were fairly key to getting it all ready for prime time:

  • A Lua interface for a GMod server / client to use
  • Caching

While it wouldn’t have been a massive issue if we couldn’t get a Lua-based interface working, it definitely would have been a knock against the entire process. However, at the end of the day a GraphQL query is really just a POST request made in a JSON-like form; the only downside is that you have to properly escape strings because of… poor design, really, on GraphQL’s side.

To make it as effortless as could be, the signature that the client and server both use for querying is BB.MakeAPIRequest, with all of the exact same logic – one could even call it shared (because it is)!
Even the way the client and server handle the authentication function BB.GetWebAPIToken() uses the same signature, though the underlying logic is different. Because we also ultimately get a JSON blob back (an actual JSON blob!) we’re able to use standard GMod functions to convert it to a table and pass the data to our callback function. Nice and easy!
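To illustrate what’s happening under the hood, here’s a minimal Go sketch of the same request shape (the query and the exact path are illustrative, not lifted from the Lua layer). The key point is that the query string just gets escaped into a JSON envelope and POSTed:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// A GraphQL request is just a JSON body with a "query" field.
	// json.Marshal does the string escaping that GLua has to do by hand.
	body, _ := json.Marshal(map[string]string{
		"query": `{ player(steamID64: "76561197981751723") { name cubes } }`,
	})

	resp, err := http.Post("https://bbservers.dev/v2/", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is a plain JSON blob, which is exactly why the GMod
	// side can decode it with standard functions and hand a table to a callback.
	var out map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out)
}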


At this point the web API had been made publicly available to anyone that wanted to use it, and /portal in-game had been updated to use it as a small test, which worked incredibly well! By now I was pretty satisfied with the state and effectiveness of the API; those that had been using it directly seemed pretty happy, and even to this day it seems to get some usage, which suggests adding public access wasn’t for naught!

The final thing needed was caching. While we didn’t need this for the go-live (and it wasn’t there), it was certainly going to help response times and load on both the webserver and the database. We have a fair bit of data that rarely changes, or changes in a very controlled way. For example: we calculate rank data hourly and seasonal scores once per day, so why force the database to run those queries every time when we can store the output in a serialized format and just present that data? It also makes sense as some of those queries can be slightly expensive computationally to run.

There was a slight problem with implementing caching though. In most languages you can wrap your cache layer around your database layer and it’s very much seamless; Golang (at the time of writing) doesn’t easily support this.

On and off for around three months I toyed with different ways of going about it, getting it as tightly connected as I could, before I realised that in true Go fashion I’d be writing boilerplate for this again. Once generics are released with Go 1.18 (assuming the underlying libraries also support them) it’s likely around 60% of the caching code added can be condensed into more generic functions, but for the time being it is how it is. It doesn’t impact performance in this case; it just requires lots of keyboard tapping!
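For flavour, here’s a minimal sketch of the kind of thing in question: a tiny TTL cache keyed by string, storing pre-serialized payloads (names are illustrative, not the actual [BB] code). Pre-generics, a typed variant of something like this ends up copy-pasted per shape of data.

package cache

import (
	"sync"
	"time"
)

type entry struct {
	value   []byte    // pre-serialized payload, ready to serve
	expires time.Time // when this entry stops being valid
}

type Cache struct {
	mu    sync.RWMutex
	items map[string]entry
}

func New() *Cache {
	return &Cache{items: make(map[string]entry)}
}

// Get returns the cached payload if it's present and hasn't expired.
func (c *Cache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	if !ok || time.Now().After(e.expires) {
		return nil, false
	}
	return e.value, true
}

// Set stores a payload with a per-key TTL (e.g. an hour for rank data).
func (c *Cache) Set(key string, value []byte, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = entry{value: value, expires: time.Now().Add(ttl)}
}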

In the vast majority of tests done, caching lowered response times by about 70 – 85%. There are some additional optimizations that could be done here, but some of that is outside the scope of the application and more in the realm of hardware & CDN optimization. Database querying from the game server (via the web API) also dropped fairly significantly, and we were actually able to offload a number of data requests and in-game data syncing to the web API instead of streaming it down in-game, which means users see less of a “freeze” when they first join. Another great win!


All in all, this edition of the web API has been a huge success. It’s not even a year old and yet it’s far more capable than the v1 API ever was. It’s also a significant amount cheaper to run in terms of resources – in fact I’m not sure I’ve ever seen something this resource-light outside of other compiled applications.

On average the v2 API uses around 6MB – 10MB of RAM. Yep, you’re reading that correctly. It uses quite literally nothing, with the actual API binary being a grand total of… 15MB in size. CPU-wise it may as well be invisible because it uses nothing! Even under load it’s stupidly efficient and I’m actually pretty impressed at how efficient Go has become. I really couldn’t be happier with the end result of the API and what it’s capable of.
For the interim I plan to keep it as a read-only API, mostly because I don’t yet see a strong need to manipulate the data through a common API like this.

There are one or two smaller things that could potentially benefit from it, but whether those actions would even be part of the API is another question entirely. Given that these would be actions we wouldn’t want a typical person to even be made aware of (e.g. adding a donation), it’s likely they’d end up with their own bespoke controls; unless we could control what introspection on the GraphQL API returns.

But that’s the new web API for you! Hopefully it’s been an interesting read and a peek behind the curtain, and it’d be fantastic if any feedback could be given via the forums or Discord.

Until next time!

New Web (API) Foundations (Part Six)

Last post we covered authentication and how it was effectively split three ways depending on the action the end-user was trying to perform. Now we’re covering authorization, which is conceptually far simpler! We’ll begin by splitting authorization in two: what we consider “personal” authorization and “administrative” authorization. These are concepts only and are implemented in a logical fashion as opposed to a physical one.

As I mentioned in Part 5, privacy to me is incredibly important, in that I believe in the right to fully control your data: organizations and groups should make a best-effort attempt at implementing controls which grant that control. In the event controls can’t easily be granted, then the assumption should always be that the user doesn’t want their information shared and it should be withheld at all costs, with the exception of the user directly using it themselves.

This is exactly how our API functions: unless a key has been flagged as what we call a “system” key (bit of a misnomer, but hey), any API key generated is by default unable to query player-specific data outside of the account it was generated against. Admittedly our behaviour around how we relay this isn’t the most consistent and at some point I may tighten it up, but what is true is that you end up with one of two responses – a 403, or we just return your own data instead. Furthermore, because JWTs use the same underlying permission system, they’re subject to the exact same rules! In fact, because we only ever want JWTs to be used by players in game, we set the permissions of every JWT as it passes through to one that does not have system access. Nice and easy! A rough sketch of that rule follows.
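Something like this, conceptually (the names and shapes here are hypothetical, not lifted from the actual API):

// Hypothetical permission gate – illustrative only.
type Credential struct {
	AccountID int
	IsSystem  bool // "system" keys may query any player's data
}

// canQueryPlayer reports whether this credential may load data for the
// requested account. Non-system keys (and therefore all JWTs) only ever
// see their own data; anything else gets the 403.
func canQueryPlayer(c Credential, requestedAccountID int) bool {
	if c.IsSystem {
		return true
	}
	return c.AccountID == requestedAccountID
}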

So outside of player privacy we have a few more key types, and these are incredibly boring. These permission flags mostly deal with things such as whether a key can issue JWTs, whether a key can wipe out cache keys and so on. Ultimately we don’t want the average API key (or JWT) to access what are administrative actions, even though the routes aren’t actively known; at the end of the day, a lock only works to keep honest people out.

There’s one last authorization flag which exists – API key rate limiting. This is less of a flag and more a “once you’ve made this many queries, you’ve got to wait until reset”. While not strictly a flag in a binary sense, it still acts as authorization rather than authentication, because your credentials are correct – we just can’t let you see the data for the time being. A sketch of the idea is below.
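Again with hypothetical names, and assuming a simple fixed window:

package ratelimit

import "time"

// Limiter is a fixed-window counter per API key – illustrative only.
type Limiter struct {
	Limit  int           // queries allowed per window
	Window time.Duration // e.g. an hour
	count  int
	reset  time.Time
}

// Allow records a query and reports whether the key is still under quota.
// Once the window lapses the counter resets – the "wait until reset"
// behaviour described above.
func (l *Limiter) Allow(now time.Time) bool {
	if now.After(l.reset) {
		l.count = 0
		l.reset = now.Add(l.Window)
	}
	l.count++
	return l.count <= l.Limit
}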

That sums up authorization, and it was a fair bit smaller this time! Next post: the summary?

New Web (API) Foundations (Part Five)

As a short recap over the four posts previous to this, we have:

  • Reviewed what failed with v1 of the API and set some ground rules of what powers the underlying API.
  • Set some logical rules as to what the API must fulfil in terms of being usable and released to the public.
  • Decided to try using GraphQL instead of REST to try and improve things.
  • Implemented a proof of concept which was successful and at this point we have a working API!

While things were maintainable and in general better performing, what we hadn’t done was sort out authentication & authorization. A quick primer, because security is hard and people tend to get the two terms mixed up:

  • Authentication is the process of working out if you should even have access to the thing you’re trying to access. This can take the form of an API key or a username / password; in short, a key to a lock.
  • Authorization is the process of working out what you should have access to. Are you an admin? Are you banned? Essentially what can you see, touch and do?

Even back in v1 of the API there were only really ever plans for authentication; authorization was never a consideration. However, I have fairly strong views towards privacy and believe a person should always have the right to shield their data from others, so authorization was absolutely at the top of my list of things to do.

As this post is going to be fairly lengthy we’ll be focusing on authentication.

The API actually has three ways of authenticating with it. That’s right, not one or two, but three! To complicate things further, one of these routes is intended to be digested by humans while the other two are intended for automation. But why is that?

As part of the desire for this to be available to the public, we need the ability for humans to fetch API keys, which means we need to grant the ability for a human to “log in” to the API. Whilst they’re not directly interacting with the API itself, what they are interacting with is a layer above the API: the ability to see information about the API, manage keys and other “meta” actions. However, I have zero desire to have users register with a username, password and the other details websites typically have you register with, so instead we leverage Steam’s OpenID and authenticate you that way; this means we don’t need to store credentials, you don’t need another account and everyone gets a seamless process. Win win!

Although we have humans now logging in, what we don’t have yet is a form of automated login for computers to use.


Over the years engineers have come up with fun ways of dealing with authentication for automated systems. Many may not remember it thanks to the work done since, but once upon a time sites like Facebook had you “import” friends by literally handing over the username & password to your email account at somewhere like GMail. You were simply expected to hand over the keys to your metaphorical kingdom for what were often very useless perks, because it was the norm! Many a person lost time, accounts, money and more to scams and phishing, all because humans were trained to do this.

Thankfully it’s 2021 and we’ve come up with overall better solutions even if they’re not always used in the correct ways.

Our second way of authenticating with the API is authentication keys, more commonly known as API keys, which is what we’ll call them from this point on. API keys take many forms; in our case we generate a key for you (which you need to keep secret!) and it’s tied to your account. Once the key is supplied we no longer hold a direct record of said key and instead only store a hash of it. We do this for a few reasons (a sketch of the hashing follows the list below):

  • Firstly we can’t guarantee Steam accounts are properly secure. We can’t tell if the user has 2FA enabled or anything about their account security at all. What this means is that if that account is compromised then the attacker can gain access anywhere that the account holder has access to with Steam OpenID.
    • We only display part of the hash for management purposes so the attacker doesn’t really have much to go on.
    • Equally however there isn’t much that can be done to stop the attacker deleting and generating new keys but it does give us an audit trail!
  • Secondly if we were compromised at the database level then the attacker has immediate access to all the API keys and could use them merrily until we voided them all which isn’t a nice experience.
    • Instead by hashing them we buy time and the ability for users to rotate their keys out as the data isn’t available in its raw format. It’s not much but it is better than nothing.
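As a minimal sketch of that flow (the specific hash function and prefix length shown are assumptions for illustration, not necessarily what the API uses):

package apikey

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
)

// New generates a random key for the user plus the hash we actually store.
// The raw key is shown to the user once and then discarded on our side.
func New() (rawKey, storedHash string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	rawKey = hex.EncodeToString(buf)
	sum := sha256.Sum256([]byte(rawKey))
	return rawKey, hex.EncodeToString(sum[:]), nil
}

// DisplayFragment is the short prefix shown for management purposes –
// enough for the owner to identify a key, useless to an attacker.
func DisplayFragment(storedHash string) string {
	return storedHash[:8]
}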

This form of authentication requires checking against the database every single time to ensure the key is valid, and includes other checks such as: is the key blocked, has it hit its limit and so on. This is perfectly fine for systems where there’s a developer and an application querying data, but what about users playing on the server? Generating keys dynamically only to last a certain amount of time is a pain – is there a way they don’t have to authenticate against the database?

Enter JWTs

JWT, pronounced “jot” for some daft reason, stands for “JSON Web Token”. Here’s the breakdown:

  • JWTs are signed blobs of JSON data.
  • The data is cryptographically signed.
  • But the data is not encrypted (so don’t go storing sensitive details in there).

I’m going to segue for a moment and have a minor rant. JWTs should not be used as de-facto authentication where user sessions are critical and important. Even with refresh tokens! Too many web developers misunderstand the best-case usage of JWTs and where they’re actually useful, and they often get used as a crutch. Too often web applications use a JWT which has no expiry or, worse yet, a single JWT which is valid forever (or until someone bothers to swap out the certs). If you are unable to revoke one person’s authentication without having to destroy all the other keys (i.e. other users’ authentication), your lock is terrible and you should feel bad.

In short: Only use JWTs for incredibly short sessions where long-term access isn’t expected or as a bridge to tell a server “it’s okay to give this person a proper key”. /rantover

Because JWTs are signed blobs of data and are generally verified using certificates, we can authenticate the user directly on the server by checking the signature of the data, instead of storing an API key and relaying every authentication request back to the database.

This is what a JWT looks like in its Base64 encoded format: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhY2NvdW50SUQiOiIxIiwiZXhwIjoxNjM0OTM5NTkzLCJpYXQiOjE2MzQ5Mzc3OTMsIm5iZiI6MTYzNDkzNzczMywic3RlYW1JRDY0IjoiNzY1NjExOTc5ODE3NTE3MjMifQ.LFyWwquR1rvFHvfhJC2ISuXDZXPfbW_Dqi2tKytsgRNqbQE1otEIVkvLjnkkuQfnhhQsaQhXDuYxB8xW7En3joCnDA_xLZ7-s3rDU2wUBexwlRI5Oal9B9aebfvkTxNzycszH_oaRcS3yqzP5Lmx_hZJnLaACj8GYMO1brzcZVX7PobKXeSVjM2BbjENtGhs6RBBatWeKyfWo-LhY0TdmrcRiUKiYJEOoYeUaibN198JPEEJp699VZYwits65DMIg2wfKexZyIENJhWnCyIXYMCqD7tIXUjgCEGLGDjdPeZUCH_oykjtLMnj8Zz_d58mqd18Z2f58SHiwqlV-118TwX5E-WpLjWfBPI-zibVGprtoVomg9INqqblezpaKxyGSd3L1feIcC0a3HQr4CIn5yHC2_iPKYYrrn2H7a4gPZ-SPFQLWv3sFKTyH5R5nu68320jlPIoepVjZGFeU1ZacrbqwX-xg7ejrFuPsVHitTNaueHvNSnp6Aa-RdQtzHGuCkJtU5SOxFT7nNY13VclK3GdOWdOQ94OL6dFJACGBEj4aslRWiEryekgVgDT7Jhee-ycZG4Ms1PoorANbv8pfGxm9Q7jtSwerKeQt8bWguYaxQqpIMw85qYaoXsgHp_azjHnC1-TaxIT_BlGPj9ENl5HYxxdkZyFRnpxsF5EIpc

Essentially a huge blob of text that makes no sense at all. When decoded, however, we get:

{
  "accountID": "1",
  "exp": 1634939593,
  "iat": 1634937793,
  "nbf": 1634937733,
  "steamID64": "76561197981751723"
}

You can verify this @ jwt.io.

Every player that plays on BB now gets one of these delivered automatically and it’s used to query things like map names, title data and so on.

At one point in time this was a valid JWT for my account! From an authentication point of view all we actually care about is the signature from the JWT in the Base64 data, as well as the exp and nbf fields. exp is when the token expires (Friday, October 22, 2021 21:53:13) and nbf is “not before”: effectively this token isn’t valid before Friday, October 22, 2021 21:22:13, to prevent potential replay attacks if the server were misconfigured for whatever reason. Because only we (within the API) hold the keys for these specific signatures, if someone were to try and send us a spoofed message the request would immediately fail.

JWTs are perfect for the problem we have with player authentication on the server. Only the player needs to store a copy of the JWT, and using the magic of maths and cryptography we can verify a player is who they say they are with very little effort. Better yet, we can define on the key when it expires, so users get at most 30 minutes per JWT to make requests before it becomes a useless blob of text. Furthermore, we are the issuers of the JWT, meaning you can only request a fresh key every so often and only when we say so. This gives us a degree of control and helps to prevent abuse.
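As a sketch of the verification side, here’s roughly what checking one of these looks like in Go using the community github.com/golang-jwt/jwt library (illustrative only – the API’s actual wiring may differ):

package auth

import (
	"crypto/rsa"
	"fmt"

	"github.com/golang-jwt/jwt/v4"
)

// verify checks the signature against our public key; the exp and nbf
// claims are enforced automatically by the library during Parse.
func verify(tokenString string, pub *rsa.PublicKey) (jwt.MapClaims, error) {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Refuse anything that isn't the RSA family we issue (RS256).
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
		}
		return pub, nil
	})
	if err != nil {
		return nil, err // bad signature, expired, or not valid yet
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok || !token.Valid {
		return nil, fmt.Errorf("invalid token")
	}
	return claims, nil
}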


With that, we’ve covered the three different methods users can use to identify themselves to the API. The next post will look towards authorization and the importance of making sure players can only see the data they’re meant to see.

New Web (API) Foundations (Part Four)

At the time I had already written a few reasonable applications in Golang, all of which leaned into the strengths of the language. One strength is that Golang is strongly typed – so strongly typed, in fact, that generics (or the any type to you TSers) don’t exist in Go yet. Even though on average I’m writing 20-35% more code, it’s never really felt like an issue because the on-save checks generally catch 99% of my mistakes with compilation catching the last 1%; besides, with tiny binaries and fast execution it’s always been a bit of a dream.

As a result of these really positive experiences this all felt like we were very much on track and that this would probably be lightning fast. After all, if I can do this and it works so easily then this must be the future… right?

Well building this API turned out to have a couple of curveballs, though none of them were necessarily the fault of the language.

For the initial implementation of the API I decided to implement three key routes to test out gqlgen, these being:

  • Player details
  • Item metadata
  • Title metadata

The reason behind these is that each one loosely touched upon a key benefit GraphQL can give us.

  • Player details: We have both “core” and “optional” details.
    • Core data could be considered Cubes, SteamID and so on.
    • Optional details are more like loading a player’s inventory, their achievements etc.
  • Item metadata: We can have optional parameters which impact the data we get back.
    • Specify an item ID? You only get that item. Don’t specify? All items.
    • We’re guaranteed to return everything even if you don’t ask for it, because it’s part and parcel of the item data.
  • Title metadata: This was actually going to have some upcoming changes so being able to set it now and then in theory update it without any worry to the recipient proved to be an interesting experiment.
    • Also in theory other than the ID, all returned arguments are optional.

gqlgen is all codegen: you just “provide” your schema. For the Player this looks something like the following:

type Player {
  id: Int!
  steamID64: String!
  name: String!
  cubes: Int
  title: String
  titles: [Int!]
  timeSpent: Int
  lastPlayed: Int
  lastServer: String
  firstJoined: Int
  isPlatinum: Boolean
  achievements: [Achievement]
  discordSnowflake: Int
  inventory: [Item]
}

And likewise the actual “query” object looks something like this:

type Query {
  player(account_id: Int, steamID64: String): Player!
}

For the sake of clarity, anything marked with a ! is required; if the ! is omitted, it’s optional. So requesting a Player object takes either an account ID or a steamID64; if neither is present we throw back an error.
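For example, a valid query against this schema might look like the following (the field selection is illustrative):

{
  player(steamID64: "76561197981751723") {
    name
    cubes
    isPlatinum
  }
}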


At this point we’re now pivoting, just when it was getting good!

In theory, at this point all I had to do was run go generate ./... (from the base directory) and everything would be codegenned: types would be made and we’d be in business. That wasn’t the case here.

Although it’s now resolved, some of the libraries gqlgen depends on had apparently gotten very much out of sync, meaning the latest release of it was entirely broken. Typically when you pull libraries for the first time to learn them you don’t expect this to be the case, so I lost around 3 hours trying to debug this (with some furious googling amongst other things). Eventually, after seeing a few reported issues suggesting the latest few releases were a wee bit broken, rolling back to an older version resolved it.

But our woes weren’t entirely over yet. Due to how gqlgen resolves packages and types, each import is in theory a fully qualified library in Go. So an import for all of our codegen looks like "github.com/BB-Games/bbapi/src/generated", which isn’t exactly correct because at the time of writing it’s trying to reference a remote repository that doesn’t exist! And even if it did, what about future code? This makes no sense!

The good news is this is a problem Go has already solved, especially if you’re using module mode (which since 1.16 you should be!). After a bit of research it was thankfully as easy as adding replace github.com/BB-Games/bbapi/src/generated => ./src/generated to the go.mod file (which is a bit like a schema / packages / requirements file) and Go knows immediately where to look instead.
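In context, the go.mod ends up looking something like this (the Go version and gqlgen pin shown are illustrative):

module github.com/BB-Games/bbapi

go 1.16

require github.com/99designs/gqlgen v0.13.0

replace github.com/BB-Games/bbapi/src/generated => ./src/generated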

Far from deal breakers, but absolutely things that can trip up even a seasoned developer.


When things were eventually working it was quite interesting to see what got generated! While I won’t post all of it (as it’s a whole lot) here’s an example of the “basic” output of the Player object:

type Player struct {
	ID               int            `json:"id"`
	SteamID64        string         `json:"steamID64"`
	Name             string         `json:"name"`
	Cubes            *int           `json:"cubes"`
	Title            *string        `json:"title"`
	Titles           []int          `json:"titles"`
	TimeSpent        *int           `json:"timeSpent"`
	LastPlayed       *int           `json:"lastPlayed"`
	LastServer       *string        `json:"lastServer"`
	FirstJoined      *int           `json:"firstJoined"`
	IsPlatinum       *bool          `json:"isPlatinum"`
	Achievements     []*Achievement `json:"achievements"`
	DiscordSnowflake *int           `json:"discordSnowflake"`
	Inventory        []*Item        `json:"inventory"`
}

And the corresponding resolver (receiver) function that requests hit:

func (r *queryResolver) Player(ctx context.Context, accountID *int, steamID64 *string) (*model.Player, error) {
	// TODO: Fill me out
	return nil, nil
}

At first glance it seems rudimentary, even basic, but there are some pretty clever nuances going on here.

  • Firstly, gqlgen tags each struct entry with the corresponding JSON tag to make marshalling / unmarshalling JSON super easy. While standard practice is to do this with known types, the fact it does it for you is nice.
  • Secondly, the accountID and steamID64 arguments are both pointers. Golang doesn’t do optional arguments, but a pointer in an argument can be nil, meaning if it doesn’t exist you can treat it in a safe manner.
  • Finally: if you have any custom objects or methods (e.g. a database connection), queryResolver can hold them for you, meaning you can have per-request data and no worries about global state floating about. A sketch of that wiring follows this list.
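For instance, the dependency-holding side might look something like this, following gqlgen’s convention (the database field is an assumption for illustration):

package graph

import "database/sql"

// Resolver holds server-wide dependencies; gqlgen threads it through to
// every generated resolver, so nothing needs to live in global state.
type Resolver struct {
	DB *sql.DB
}

// Generated by gqlgen: queryResolver embeds *Resolver, which is how the
// Player method above can reach r.DB.
type queryResolver struct{ *Resolver }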

Already we have a nice chunk of work done for us! So when filled out you’d have something like the following (use your imagination a bit here):

func (r *queryResolver) Player(ctx context.Context, accountID *int, steamID64 *string) (*model.Player, error) {
	var playerDetails model.Player

	// We query for stuff and stuff it all into the playerDetails object..

	return &playerDetails, nil
}

gqlgen then handles converting all that into a JSON response and serving it to the player. Once I filled in the first three entries (player / item metadata / title metadata) I had a pretty strong handle on the system; everything is nicely typed, returns what it says on the tin and there’s no weird interface{} magic going on. To give an idea of how easy this was, in 90% of cases each endpoint was no more than a few minutes’ worth of work to implement, the exception being the data for surf ranks. Nothing special as to why: I just wanted to try enums in GraphQL and needed a few bits of logic to handle different leaderboard types (a sketch of the enum idea is below).
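For flavour, a GraphQL enum looks something like this (the type and value names here are hypothetical, not the real leaderboard types):

enum LeaderboardType {
  GLOBAL
  SEASONAL
}

# used as an argument, e.g. surfRanks(leaderboard: LeaderboardType): [Player!]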

When I had originally set up the Python API it had taken me a reasonable amount of time to test, validate, be happy with the end results, handcraft a load of things and so on. gqlgen meant I literally had the entire basic system up and running in a night! Any additions since then literally take a few minutes to add and validate, with sometimes a bit of extra time if the endpoint has to be “protected”.


At this point I compiled it and threw it online for testing for myself and Killermon. While this API wasn’t yet finished (I was determined for authentication + authorization to be done first) I wanted to make sure that this could be placed online without too much effort and that it would work.

It simply worked. I have enough practice and history with nginx that this sort of thing is a breeze and yeah, it was immediately queryable!

However, we absolutely needed authentication and authorization to be handled. This wouldn’t be a success without those, and I was utterly determined to make sure that was sorted, and sorted properly. That said – given the way I had gone about this, was there even a sane way to do that?

Tune in next time for that part!

New Web (API) Foundations (Part Three)

Let’s recap what we’ve discussed so far.

Firstly, we looked at how version one of the API exists and is written in Python, but never really got the love and attention it needed and, in return, gave the same lacking love and attention.

Secondly, we identified a set of hard requirements for what language / basis we want to be using and settled on Golang.

Next, we’re looking at functional requirements and, more so, the underlying “how” of things getting passed around!


For the longest time you’ve had various ways of interacting with systems across the internet, the core of that being “I send a request with a given message payload and I get a response back”. For the sake of brevity we’re just going to assume that REST was our only option previously. It’s lightweight enough and a reasonable design pattern that most engineers roll with anyways.

In many ways REST is nice, especially if designed properly. You have your HTTP verbs (or methods) such as GET and POST alongside URL paths, which can describe the route in a sense or even pass in key information, for example: mywebsite.com/user/1, which could describe the user behind ID 1. Each endpoint is typically clearly defined and often requires the person hitting it to be specific and maintain an idea of what they’re going for on their end.

The downside to REST, however, is that it can feel very piecemeal: you may have to hit anywhere from 1 to 10 or 11 endpoints just to get the information you want. Furthermore, with REST-based endpoints you’re always going to get the data back in a defined, rigid structure every time; you don’t care about certain data? Well, tough, because it’s getting loaded and provided unless it exists on another endpoint. Adding to this, you often have to remember to group your endpoints, and depending on the language / framework used there may be a degree of repetitiveness or boilerplate that is consistently in your way.

Don’t get me wrong: REST still very much has its place and I still actively use it but for scale or a public API it often leans on external tools to help it.

The initial plan was to potentially use a REST pattern with Golang, given that Go makes it incredibly easy, especially with routers like Gorilla Mux and Chi out there. Although Go makes things far smoother performance-wise, all I was really doing was changing the language: there were no under-the-hood improvements or design changes, which meant I wasn’t necessarily making my life any easier. Given the internet has moved on a fair bit since I last checked the standards for best practices as a developer, I figured I’d go see what other options were available.

Enter GraphQL

GraphQL is somewhat of a distinct flavour change from REST. GraphQL doesn’t so much compete with the ideas of REST as change the discussion from “how do I get my information from this REST API with these endpoints, and what verbs do I need” to “we don’t care about that, I’m giving you a request for a data model in JSON, give me back a response in JSON”. And it actually does what it says on the tin in that respect. Furthermore, the goal of GraphQL is to create a modelled representation of data, so if a user, say, has an inventory and achievements, then under REST those would likely be two separate GET endpoints. In GraphQL they’re simply two additional fields you ask for, and the back-end does the legwork of making that data available for you in one move. Potentially three requests have been squashed into one. Very nice!

Example request in GraphQL:

{
  hero {
    name
  }
}

Example Response:

{
  "hero": {
      "name": "Luke Skywalker"
  }
}

So fairly standard stuff really.

GraphQL isn’t without its problems, especially in Golang, where you have a few major libraries to work from and after that you’re sort of on your own. The question for Golang and GraphQL for me then became:

  • Are any of these libraries sane and play to the strengths of Golang?
  • Can we build a wrapper in GMod to sanely interact with GraphQL?
    • Context: in GLua there exists no (known) GraphQL interface
  • Does this make for more or less work?

The answers after some prototyping and testing were as follows:

  • Yes, one of the libraries works very nicely with Golang and provides code-generation meaning we can type out something smaller and get much more in return!
  • Requests are mostly just JSON blobs so yes, we can!
  • Surprisingly, less work!

At this point it seems fairly magical and, because of how the library (gqlgen) and Golang work together, intercepting requests for an authentication layer seems fairly trivial, so how about we now try building something that works!


To recap: at this stage the idea is to see if we can make this work within the confines of GraphQL and Go, which at the time was very much an unknown. Bigger yet was the risk that this might not even work properly with GMod, which could mean much of this going to waste.

Thankfully that didn’t happen, and in the next post we can look at how this came about, alongside some unexpected pitfalls and lessons learned along the way!

New Web (API) Foundations (Part Two)

So version two, what could that even begin to look like?

By trade I’m what I would consider more of a systems engineer when it comes to programming. I like making things tick and work together, so what I am definitely not is a web developer by trade. Sure, I’ve made the occasional website here and there, hosted various webpages and made the occasional web API, but if we’re talking at scale then systems programming (especially distributed) is where some of my strengths lie. In fact, thinking about the wider group of programmers at [BB], we’re all systems engineers moonlighting as game engineers!

What this meant from my point of view was two things:

  • I’m very much open to the wide range of technologies that have come along to make “web dev” better
  • I’m likely going to be paralysed by the choice of options available.

As a result I drew up a number of hard requirements from a language point of view – things that were needed, and if an option discounted them (or too many of them) then I likely had to rule that option out. These were as follows:

  • Had to be relatively easy to maintain
    • I didn’t want to have to be juggling package versions
    • It should be something I can write the foundation of once and not have to worry down the line
    • I wasn’t going to get unknown or odd behaviour because the underlying system wasn’t concrete
  • Had to be as lightweight as possible, both in terms of the compiled endpoint and the amount of code written.
    • Additionally if the runtime CPU / Memory usage could be as light as possible, this was a bonus
  • Needs a strong standard library built in to the language.
    • If this isn’t a thing, how easy is it to pull in external libraries that fulfil this function well?
  • Required as few external dependencies as possible to get it up and running.
    • For example with Python, you can run it straight up on port <x> and use nginx to reverse proxy it; however, this will lead to very poor performance very quickly. So you use a tool like uwsgi to manage it, which comes with its own problems. We do not want this.
    • In an ideal world, the web server is as production ready as can be.
  • Routing and passing information has to be sane.
    • I don’t care about mutability vs immutability, I just care I can get my information in a sane fashion.
  • I want to automate what I can, tooling should be available to get me “far enough” that I don’t have to worry about the underlying wiring.
    • But I do care enough that if what is generated is of poor quality or performance, I don’t want this.
  • Middleware
    • I hate this term in the web-dev world, but as it’s ubiquitously used I’ll roll with it
    • If I want to intercept requests and manipulate them (e.g. authentication) how easy is this to do?

At this point I wasn’t even so much caring about REST vs SOAP vs GraphQL vs whatever, I just wanted something that gave a solid base.


I spent about a month going back and forth, prototyping and seeing what I could come up with. Python was ruled out incredibly quickly because I just wasn’t interested in dealing with it any more as a web language. I wasn’t touching PHP with a bargepole because, although it’s gotten better, it still feels like a language where I need a framework to get the most out of it. I’m also of the opinion that ad-hoc file-based interpreted languages like PHP are an incredibly outdated concept.

This left me with node.js and Golang. I’d been working with a company that used node.js as their main language, and I had personally been on a Go kick. I already knew the downsides of node.js (and by proxy, express) but figured it wouldn’t be fair to entirely discount it if I hadn’t tried to do something in it personally.

Yeah, no, that was knocked out very quickly. npm and yarn do their best to deal with libraries in node.js; however, as there’s a lacklustre standard library to begin with, your applications quickly begin to swell in file-size and it’s a performance hog. Sure, it’s fast on benchmarks, but the trade-offs feel like too much for it to be acceptable.

So that left me with Golang. Sure, you could argue at this point in the post that I had a favourable bias towards Golang, and that’s not necessarily incorrect. I do like the language, but it has its own downsides: still a bit immature in certain areas, sweeping changes still being made between some releases, and the lack of generics (as we’ll find out in the future) has proven to be slightly painful. But for the time being: it fulfils the criteria.

At this point we haven’t even considered how we’re getting the data out or authentication – but we’re one step closer. Now we need to fulfil our second set of criteria. On to the next blog post!

New Web (API) Foundations (Part One)

A foreword: this blog post (and the follow-ups) is a bit more technical than usual, so there won’t be too much in the way of in-depth explanations of how things work; instead, terms will be used with at most brief explanations. You should be able to follow the general gist, though, as some concepts and explanations will be mentioned.


Much like how I’ve rewritten the starting paragraph to this blog post three times now, trying to extract data from our databases in meaningful ways is tricky, just like getting the information out of my brain to talk about is tricky.

See, I already know this information, much like the database knows its information, but I have to find a medium and a form that allows me to relay it to you, the reader. Additionally I have to process and parse exactly what I can tell you, what might be oversharing and what might not even be enough. I want you to have the exact information you need: no more, no less.

Four to five years ago I wrote the first version of the BB Web API in Python, with the goal of having a way to extract data from the database without needing a direct connection to it. One could argue there was no direct need for it because ultimately I hold the keys to the kingdom, and if I ever needed to generate database credentials for an application, well, that’s not exactly difficult for me. But what if I don’t want to do that? What if I want to make it easier for other developers to query the same data in a reliable fashion that doesn’t require me generating a set of credentials every time? Furthermore, what if I only want to give out a portion of the data as opposed to the entire lump? So off we went.

On reflection, it was an abject failure. Don’t get me wrong; the API works and is still used by a few smaller systems today (with the desire to sunset these endpoints), but the underlying goals that this API set out to solve were never actually solved. There’s no proper authentication or authorization, meaning if you know the right incantations and routes to hit you can load the data of any user (minus some personal information such as IPs). As a result, what was meant to eventually become a public API was forever kept internal and very much underutilized. Not only was it underutilized, it couldn’t even be used within our GMod systems on the client, for risk of someone going digging and fetching data about players that we felt wouldn’t be ideal to have out there.

Additionally, what documentation there was had to be kept internal and required me to manually write and update it each and every time, which is a bit of a pain. You can get tools which help with documenting REST APIs but, in all honesty, they’re not that great and in many cases arguably hinder rather than help you.

Finally, the toolset used caused additional problems. While I quite like using Python (and still use it for many projects and systems today), I was using bottlepy, a web framework similar to Flask and Django (but closer to Flask). I generally picked bottlepy historically because it’s a very lightweight framework; the only problem is it’s currently halfway between version 0.12 and 0.13, which creates some hilarious inconsistencies at best and downright issues at worst. Going further, hosting web applications in Python is… not for the faint-hearted. The tooling available to you outside of managed systems (e.g. Heroku) isn’t exactly great and comes with its own set of problems, so a very small web API with fewer than 20 routes wasn’t always the fastest and was at least 80MB in size. Still smaller than node.js, though. The actual management of this got easier over time with some adjustments, but it still wasn’t great. This is more of a reflective point overall, because my general go-to for web-based services is now Golang (more on this in later posts).

We’ve always had the occasional request for exposed web endpoints, as people want to do things with the data we have on hand. This might be to build a better ban page than what we had, crunch stats about their timetrial data, or even just fun little tools and a peek behind the curtain – but we could never provide for that. Furthermore, I was more and more often finding myself wanting a way to pull data from the database, and expanding the (now named) version 1 web API always felt like a chore.

And so the brainstorming and gear-turning process started, to answer the question: what would a version 2 of the API even look like?

Tune in for Part 2 appearing in the next few days (hopefully)!

BB’s 15th Anniversary!

Before we start the proper meat of the post: apologies for the lack of blog posts this year so far! As ever there’s been plenty going on, and posts like these tend to fall by the wayside, especially when we don’t have an active feedback loop for them. There have been a few discussions on how we can improve that feedback loop – but until then, feel free to ask anyone on the team their thoughts and whether they want to write a post.


[BB] has turned 15! I don’t think when Ben started [BB] all those years ago he expected it to last beyond five years, never mind fifteen. We’ve had our ups and downs, loud and quiet moments, but I wholeheartedly believe the last five have been some of our best. We’ve found our stride in what we offer and it only seems to be getting better, with more things available!

As ever with our anniversary events, we’re offering players the ability to try Platinum for a week with !tryplat (or /tryplat), various goodies with this year’s crates and a global 150% cube bonus! Furthermore, we’re rolling out a number of new bits of content over the next few weeks, including (but not limited to) new items, Climb achievements, new surf maps and more! Although it was released slightly before the anniversary, we also applied a number of major updates to Climb with the intention of them being done for the anniversary, so go give it a try @ 96.44.144.137:27015 or reach it via !portal through any of our game servers!

One thing we’ve opted not to do this year is grant a discount on the Platinum membership, but instead we’ve released the ability to donate for Platinum in 6 / 12 month intervals which do come with a very slight discount. For those hoping for the return of permanent Platinum, I would take this as a sign that for the foreseeable future it won’t be returning via monetarily available means.

Please feel free during this time to take to the forums or Discord to talk about how [BB] has had an impact on you, where you’d like to see it go and where it’s taken you so far!

On Saturday 21st August I’ll be available for anyone that wants to chat and ask questions in a Q&A style stage channel called “Ted-Talks” in the [BB] Discord @ 9pm BST (8pm UTC, 4pm ET) for any and all questions in regards to [BB]! If no one turns up / there’s not enough for that, then we’ll just chat in the “Lounge” instead.

[BB], 2020 in review

2020. The year of Corona and the impending end of the Trump administration. What a year it has been!

Generally I don’t look outwards in these blog posts, but I feel it’s worth highlighting that not only did the world feel serious changes and impacts, so did [BB]. I can firmly state from the top of this blog post that 2020 was an incredibly great year for us, and 2021 is actually shaping up to be all the better.

There’s plenty to chew through this year so, unlike previous years, some things are condensed or dropped. If you want to know more about any specific topic, feel free to ask in Discord or on the Forums!

Donations

Wait what? We’re starting with donations first? What gives?!

Internally at [BB] this has actually been the biggest change for us, namely the conversion of Platinum from a lifetime ($15) option to a monthly ($7.99) option. Many players who pass through assume that we get huge turnover and that our community is a cash-cow. This couldn’t be further from the truth. [BB] has personally cost me (at least) $11,000 over the lifetime of its existence and continues to do so, which is why we changed the donation model. What I didn’t expect is how much impact this seemingly redundant and worse-for-value change would bring.

In 2020 [BB] went from rarely getting even one or two donations a month to donations covering the main server expenses for 8 of the 12 months this year. In three of the remaining four months we received enough donations to cover at least 50% of the server expenses. Again, to clarify: the only thing we did was change one donation system (Platinum) from $15 (lifetime) to $7.99 a month. Previous lifetime holders kept their lifetime Platinum.

This is huge.

Don’t get me wrong here: we’re not suddenly making boatloads of cash. The main server costs $99 a month, which is exclusively for [BB], alongside a webserver ($20 a month) that I use for my own projects (so I eat the cost of that) and our GitHub Teams membership (3 seats x $4 = $12), so any excess goes towards these fees and then rolls over to the next month.

It’s been a strong lesson in business for all of us really, especially the fact that if a service can’t wipe its own arse, someone eventually has to pay (that person being me). We believe in providing an experience rather than catering to the highest payer, and we’d prefer to remain that way. If this model (maybe with a few tweaks – annual membership, anyone?) can sustain us, then I think everyone on the team will be more than happy with that.

There are other changes that also factor in, such as improved awareness (we’ve been slightly more imposing with promotion, sorry!) and everyone being home due to covid, but so far the biggest factor in uptake does seem to be the price change.

What would be nice is if we generated enough of a surplus to upgrade the machine. It’s over 10 years old at this point and, while it’s still going incredibly strong, hardware has come on leaps and bounds since then. But I digress.

Branding

We changed from bbroleplay to bbservers. This change had been pending for a while, but ultimately I wanted / needed the association of bbservers with a hosting company to drop from Google and other search engines. We haven’t been a roleplay community since 2013 and honestly, these days RP communities have (understandably) a bit of a bad reputation.

If roleplay ever returns we might specifically brand it with the bbroleplay moniker. Until then, time to move on!

Server Changes

Earlier this year I turned off Escape, and about a month ago I turned off BBuild. Escape needs some love and fundamental gameplay changes, and until those are done it doesn’t make sense to keep it online in its current format.

BBuild (Sandbox) only really ever existed back in the day to complement our roleplay server, so people building shacks / shops could build without constraint or disturbance. The reality is very few people use Sandbox purely to build these days, and those that do have dedicated, niche communities designed for it. Because we change some internal GMod functions to harden against exploits, things like Wiremod also regularly broke.

If there’s appetite / demand for it in the future we can easily bring it back. Until then, rest easy sweet prince.

Servers as a whole

Other than the servers we shut down as per the last paragraphs, all of our other servers have seen a notable uptick in player counts. Deathrun may not yet have a consistent playerbase, but across the past year there have been players on it! Likewise the Surf T2-6 server is seeing low average player counts with very spiky overall player counts. This is up from 2019, where the average was often 0.

Surf

It goes without saying that Surf is as popular as ever. If anything it’s more popular than ever, as competitors have closed their doors. We still aren’t quite catering to the more advanced or pro players, but we are working to resolve that. Currently our new focus for Surf is the new ranking system, which works in a similar fashion to other servers (e.g. rank y/x) and is for all intents and purposes now going through front-end design. More on this soon!

A focal point for 2021 is the replay system. Work has been done on replays here and there throughout 2020 but much of this was exploratory work to see what was feasible, what we wanted to achieve and so on. I’m hopeful that by Summer we should see an initial version of this floating about somewhere.

BBase

As mentioned in last year’s review, we felt that BBase was in a pretty mature position that lets us do what we need to without fighting ourselves. This year that has absolutely rung true: we’ve probably had some of the fewest “sweeping” changes BBase has ever had, and yet we’ve also added the most functionality it’s ever had, such as the GitHub integration, mass uncrating and so on!

Steam Group Membership

Group membership is back! Two years after it died, I finally got around to writing an updated GMod C++ module which supports this. It’s back and gives you a 5% bonus cube gain, alongside Discord!

Discord

We now recognise server boosters, and we gave boosters + Platinum members the ability to create temporary voice channels. The next step I’m considering is a notification system for the sales-bot so it can notify users of specific keywords and / or sales, instead of them having to actively monitor the channel!

Garry’s Mod player base & S&Box

Last year I mentioned that the GMod playerbase seemed to be dwindling and that S&Box was nowhere to be found, which was going to make for interesting times. Covid came along and gave the GMod playerbase an overall boost, although it already seems to be globally diminishing again. Internally we’ve been tracking activity on our own servers (with our own tools) and, while we’re not seeing the peaks of covid traffic, our drop-off has been far slower and shallower than what the wider community is experiencing. In this regard we’re cautiously optimistic and we’ll keep an eye on things. Coupled with donations being higher than ever, it’s likely we’re going to be just fine.

Garry has formally started working on S&Box and shown media of it, which is good news. While we have no formal plans here at [BB] as to what we’ll do when S&Box comes out, we’re watching it with close interest. While I’m making no promises about moving any of our gamemodes to S&Box, it’s something we are certainly keeping open for discussion.

That said, until we’re given actual access to S&Box we have no way of knowing what technical challenges await us and what will even be available. One thing we can tell is that everything will be far more stripped back: unlike GMod, which can mount multiple Source-engine games, S&Box cannot. Cautious optimism is how I’d best describe my current stance on S&Box.

Closing Words

As a community I think [BB] is in the best place it’s ever been. We provide an experience that the entire team at [BB] is proud of and we do it while being mostly self-sufficient and in a way that is unique to us. It feels odd that such a trying year has also been our best year. It has proven though that we can succeed where others fall apart and I think going forwards [BB] will only get stronger yet.