Port posts over

Actual layout, parsing of publish dates and titles is TBD.
This commit is contained in:
Gabriel Simmer 2023-07-20 14:05:34 +01:00
parent a5ff805136
commit 1ad416634d
Signed by: arch
GPG key ID: C81B106D46C5B875
53 changed files with 2888 additions and 4 deletions


@ -14,6 +14,4 @@
 **/.direnv
 **/gs.*
 **/result
-**/posts
-!**/posts/.keep
 fly.toml

2
.gitignore vendored

@ -13,5 +13,3 @@ target/
 .direnv/
 gs.*
 result
-posts/
-!posts/.keep

52
posts/30-04-2016.md Normal file

@ -0,0 +1,52 @@
---
title: 30/04/2016
date: 2016-04-30
---
#### DD/MM/YYYY -- a monthly check-in
A lot has happened this month. From NodeMC v5 and v6 to
a new website redesign to a plethora of new project ideas, May is going
to be a much busier month, so I think I should check in.
NodeMC, one of my most beloved projects thus far, has been going far
better than I could have ever imagined, with more than 30 unique cloners
in the past 30 days and about 300 unique visitors -- numbers that have far
outpaced my other projects. And to top it off, 13 lovely stars. Version
1.5.0 (aka v5) [recently launched with plugin
support](https://nodemc.space/blog/3), and has been a monumental release for me in terms of
what it has taught me and what the contributors and I have achieved. A
big hats-off to [md678685](https://twitter.com/md678685) for helping with the plugin system and other fixes
within the release. NodeMC v6, however, is going to be even bigger.
[Jared Allard](https://jaredallard.me/) of nexe fame (among other projects) has taken
interest in the project and has rewritten the bulk of it using ES6
standards, which both md678685 and I have been learning, and has
recently decided to rewrite the stock dashboard using the React
framework. I could not be happier working with him on v6.
On a smaller note, [my personal website](http://gabrielsimmer.com)
has had a bit of a facelift. I decided to do so after months of using a
free HTML5 template that really did not offer a whole lot of room to
customize. My new site allows me to add pretty much an unlimited number
of projects and other information as I see fit. It's built using my
favorite CSS framework [Skeleton](http://getskeleton.com/),
which I will forever see as superior to Bootstrap, despite not being
updated in more than a year (I may have some free time to fork it), and
using a nice font called [Elusive Icons](http://elusiveicons.com/) for
the small icons. I'm throwing the source up on GitHub as I type this
post ([it's up!](https://github.com/gmemstr/GabrielSimmer.com)).
I have a lot of projects in my head. Too many to count or even write
down. It's going to be a crazy few months as some of my more long-term
projects (some of them mine, others I'm just working on) are realized
and launched. I really don't want to say much, but I know that several
online communities, namely the Minecraft and GitHub communities, will be
very excited when I am able to talk more freely on what I have coming
up.
_*Okay, maybe I'll tease one thing -- I recently bought the YourCI.space domain, which I am not just parking!*_
Just a quick heads up as well, everyone should go check out [Software Engineering Daily](http://softwareengineeringdaily.com/). I am going to be on the show later next month, but I recommend you subscribe to that wonderful podcast regardless. I will be sure to link the specific episode when it comes out over on [my Twitter](https://twitter.com/gmemstr).
> "If you have good thoughts they will shine out of your face like
> sunbeams and you will always look lovely." -- Roald Dahl

0
posts/_index.md Normal file


@ -0,0 +1,36 @@
---
title: A bit about batch
date: 2016-01-21
---
#### Or: why there is no NodeMC installer
Can we just agree on something -- Batch scripting is
terrible and should go die in a fiery pit where all the other Windows
software goes? I don't think I can name any useful features of batch
scripting besides text-based (and frankly pointless) DOS-like games. It
sucks.
I recently tried to port the NodeMC install script from Linux (bash) to
Windows (batch), and while it seemed possible, the simple tasks of
downloading a file, unzipping it, and moving some others around
proved to be utterly impossible.
_A quick note before I proceed -- Yes, I realize something like a custom built installer in a more traditional language would have been possible, however I wanted to see what I could do without it. Also I'm lazy._
First and foremost, there is no native way to download files properly.
Most Linux distros ship with cURL or wget (or they are installed fairly early
on), which are both great options for downloading and saving files from
the internet. On Windows, it is suggested
[BITS](https://en.wikipedia.org/wiki/Background_Intelligent_Transfer_Service) could do the job. However, on execution, it simply
*does not work*. I got complaints from Windows about how it didn't want
to do the thing. *Fine*. Let's move on to the other infuriating thing.
Stock Windows has the ability to unzip files just fine. So why the hell
can I not call that from batch? There is no reason I shouldn't be able
to. But alas, it cannot be done, [at least not
easily](https://stackoverflow.com/questions/21704041/creating-batch-script-to-unzip-a-file-without-additional-zip-tools) \*grumble grumble\*
In conclusion: Batch is useless. It should be eradicated and replaced
with something useful. Because as of now it has very little if any
redeeming qualities that make me want to use it.

33
posts/a-post-a-day.md Normal file

@ -0,0 +1,33 @@
---
title: A Post A Day...
date: 2015-10-28
---
#### Keeps the psychiatrist away.
I'm trying to keep up with my streak of a post every day on Medium,
mostly because I've found it really fun to write. I think Wednesdays
will become a sort of 'weekly check-in', going over what I have done and
what there is yet to do.
So, what have I accomplished this week?
- Created a link shortener.
- Made the staff page for Creator Studios.
- Started updating [my homepage](http://gabrielsimmer.com).
- Fixed an issue with Moat-JS (pushing out soon).
- Removed mc.gabrielsimmer.com and all the deprecated projects.
- Moved to Atom for development.
And what do I hope to accomplish?
- Fix my link shortener.
- Work on the video submission page for Creator Studios.
- Whatever else needs doing.
- Complete my week-long streak of Medium posts.
It's hard to list what I want to accomplish because things just come up
that I have no control over. But nonetheless, I will report back next
Wednesday.
Happy coding!

29
posts/ajax-is-cool.md Normal file

@ -0,0 +1,29 @@
---
title: AJAX Is Cool
date: 2015-11-07
---
#### Loading content has never been smoother
I started web development (proper web development anyways) using pretty much only PHP and HTML -- I didn't touch JavaScript at all, and CSS was usually handled by Bootstrap. I always thought I was doing it wrong, but I usually concluded the same thing every time.
> "JavaScript is too complex. I'll stick with PHP."
Only recently have I been getting into JavaScript, and surprisingly have
been enjoying it. I've been using mainly AJAX and jQuery for GET/POST
requests for loading and displaying content, which I do realize is *just
barely* scratching the surface of what it can do, but it's super useful
because I can do things such as displaying a loading animation while it
fetches the data or providing nearly on-the-fly updates of a page's
content (I am working on updating ServMineUp to incorporate AJAX for
this reason). I'm absolutely loving my time with it, and I hope I'll be
able to post more little snippets of code like I did for
[HoverZoom.js](/posts/hoverzoom-js). Meanwhile, I encourage everyone to try their hand at AJAX & JavaScript. It's powerful and amazing.
#### Useful Resources
[jQuery API](http://api.jquery.com/)
[JavaScript on Codecademy](https://www.codecademy.com/learn/javascript)
[JavaScript docs on MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript)

39
posts/an-api-a-day.md Normal file

@ -0,0 +1,39 @@
---
title: An API A Day
date: 2016-07-11
---
#### Keeps... other productivity away?
[Update: GitHub org is available here with more info & rules.](https://github.com/apiaday)
I've been in a bit of a slump lately. NodeMC hasn't
been as inspiring and development has been a bit slow, and other projects
are on hold as I wait for third parties, so I haven't really been
working on much development-wise.
#### So I came up with a challenge: **An API A Day**.
The premise is simple -- Every day, I pick a new API to build an
application with from [this list](https://www.reddit.com/r/webdev/comments/3wrswc/what_are_some_fun_apis_to_play_with/) (from reddit). From the time I start I have the rest
of the day to build a fully-functional prototype, bugs allowed but core
functionality must be there. And it can't just be a "display all the
data" type app, it has to be interactive in some form.
An example of this is [Artdio](http://gmem.pw/artdio), which was the inspiration for this challenge. I
built the page in about 3 hours using SoundCloud's JavaScript API
wrapper, just as a little "how does this work" sort of challenge.
So how is this going to be organised?
I'm going to create a GitHub organization that will house the different
challenges as separate repositories. To contribute, all you'll need to
do is fork the specific day / API, copy your project into its own
folder (named YOURPROJECTNAME-YOURUSERNAME), then create a pull request
to the main repository. I don't know the specific order I personally
will be going through these APIs, so chances are I will bulk-create
repositories so you can jump around at your own pace, or you can request
a specific repository be created.
If you have any questions or need something cleared up, feel free to
[tweet at me :)](https://twitter.com/gmem_)


@ -0,0 +1,120 @@
---
title: Building a Large-Scale Server Monitor
date: 2017-01-24
---
#### Some insight into the development of [Platypus](https://github.com/ggservers/platypus)
If you've been around for a while, you may be aware I
work at [GGServers](https://ggservers.com) as a developer primarily focused on exploring new
areas of technology and computing. My most recent project has been
Platypus, a replacement to our very old status page
([here](https://status.ggservers.com/), yes we know it's down). Essentially, I had three
goals I needed to fulfil.
1. Be able to check whether a panel (what we call our servers, as they
   host the Multicraft panel) within our large network is offline.
   This is by far the easiest part of the project, however
   implementation and accuracy were a problem.
2. Be able to fetch server usage statistics from a custom script which
   can be displayed on a webpage so we can accurately monitor which
   servers are under- or over-utilised.
3. Build a Slack bot to post updates of downed panels into our panel
   reporting channel.
#### Some Rationale
![The plain-Jane HTML frontend, stats are static until scripts are deployed!](https://cdn-images-1.medium.com/max/600/1*dn3zU7rRapONwU0XoV1Ylw.png)
*Why did you choose Python? Why not Node.js or even PHP (like our
current status page)?* Well, I wanted to learn Python, because it's a
language I never fully appreciated until I built tfbots.trade (which is
broken, I know, I haven't gotten around to fixing it). At that point, I
sort of fell in love with the language, the wonderful syntax and PEP8
formatting. Regardless of whether I loved it or not, it is also a hugely
important language in the world of development, so it's worth learning.
*Why do you use JSON for all the data?* I like JSON. It's easy to work
with, with solid standards and is very human readable.
#### Tackling Panel Scanning
[Full video](https://youtu.be/xAXT1mOFccM)
Right so the most logical way to see if a panel is down is to make a
request and see if it responds. So that's what I did. However there were
a few gotchas along the way.
First, sometimes our panels aren't actually **down**, but just take a
little bit to respond because of various things like CPU load, RAM
usage, etc., so I needed to determine a timeout value so that scanning
doesn't take too long (CloudFlare adds some latency between a client and
the actual "can't reach server" message). Originally, I had this set to
one second, thinking that even though my own internet isn't fast enough,
the VPS I deployed it to should have a fast enough network to reach
them. This turned out to not be true -- I eventually settled on 5
seconds, which is ample time for most panels to respond.
Originally I believed that just fetching the first page of the panel (in
our case, the login for Multicraft) would be effective enough.
Unfortunately what I did not consider is all the legwork the panel
itself has to do to render out that view (Multicraft is largely
PHP-based). But fortunately, the request doesn't really care about the
result it gets back (*yet*). So to make it easier, I told the script to
get whatever is in the /platy/ route. This of course makes it easier for
deployment of the stat scripts, but I'll get to those in a bit.
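To make that concrete, the scan boils down to something like the following (a minimal Python sketch using only the standard library; the actual host handling and error handling in Platypus differ):

```python
from urllib import request, error

TIMEOUT = 5  # seconds; one second proved too short for slower panels


def panel_is_up(host: str) -> bool:
    """Return True if a panel answers on its /platy/ route within the timeout."""
    try:
        # We only care that *something* responds, not what it says (yet).
        with request.urlopen(f"http://{host}/platy/", timeout=TIMEOUT):
            return True
    except (error.URLError, OSError):
        return False
```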
Caching the results of this scan is taken care of by my useful JSON
caching Python module, which I haven't forked off because I don't feel
it's very fleshed out. That said, I've used it in two of my handful of
Python projects (tfbots and Platypus) and it has come in very handy
([here's a gist of it](https://gist.github.com/gmemstr/78d7525b1397c35b7db6cfa858f766c0)). It handles writing and reading cache data with no
outside modules aside from those shipped with Python.
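The gist above has the real thing; the core idea is roughly this (a simplified sketch, not the actual module):

```python
import json
import os
import time


class JsonCache:
    """Tiny file-backed cache that stores a payload alongside the time it was written."""

    def __init__(self, path: str, max_age: int = 300):
        self.path = path
        self.max_age = max_age  # seconds before the cached data is considered stale

    def write(self, data) -> None:
        with open(self.path, "w") as fh:
            json.dump({"time": time.time(), "data": data}, fh)

    def read(self):
        """Return the cached payload, or None if it is missing or stale."""
        if not os.path.exists(self.path):
            return None
        with open(self.path) as fh:
            blob = json.load(fh)
        if time.time() - blob["time"] > self.max_age:
            return None
        return blob["data"]
```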
#### Stat Scripts
An integral part of a status page within a Minecraft hosting company is
being able to see the usage stats from our panels. I wrote two scripts
to help with this, one in Python and one in PHP, which both return the
same data. It wasn't completely necessary to write two versions, but I
was not sure which one would be favoured for deployment, and I figured
PHP was a safe bet because we already have PHP installed on our panels.
The Python script was a backup, or an option for anyone who wanted to use Platypus
without the kerfuffle of PHP.
The script(s) monitor three important usage statistics: CPU, RAM and
disk space. They return this info as a JSON array, with no extra frills.
The Python script implements a minimal HTTP server to handle requests as
well, and only relies on the psutil module for getting stats.
![Script returns some basic info](https://cdn-images-1.medium.com/max/800/1*Zm2es9y_7pmNlh7D675eNQ.png)
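For a rough idea of what the Python version does, here is an illustrative sketch using psutil (not the deployed script; the port is an assumption):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import psutil


def usage_stats() -> dict:
    # CPU, RAM and disk usage as percentages.
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "ram": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }


class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(usage_stats()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatsHandler).serve_forever()  # port chosen for illustration
```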
#### Perry the Platypus
Aka the Slack bot, as we have affectionately nicknamed it. This was the
simplest part of the project to implement thanks to the
straightforward library Slack has for Python. Every hour, he/she/it
(gender undecided, let's not force gender roles people! /s) posts to our
panel report channel with a list of the downed panels. This is the part
most subject to change as well, because after a while it feels a lot like
a very annoying poke in the face every hour.
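Stripped down, the bot is just an hourly loop posting a message. A sketch using the current slack_sdk package (not the library or code we actually used, and with made-up channel and helper names):

```python
import time

from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token; kept out of source in practice


def get_downed_panels() -> list[str]:
    # Placeholder: in Platypus this would read the cached scan results.
    return []


def report(panels: list[str]) -> None:
    if panels:
        client.chat_postMessage(
            channel="#panel-reports",  # channel name is made up
            text="Downed panels: " + ", ".join(panels),
        )


while True:
    report(get_downed_panels())
    time.sleep(60 * 60)  # once an hour
```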
#### Going Forward
I aim to continue working on Platypus for a while; I am currently
implementing multiprocessing so that when it scans for downed panels,
the web server can still respond. I am having some funky issues with
that though, namely the fact Flask seems to be blocking the execution of
functions once it is started. I'm sure there's a fix, I just haven't
found it yet. I also want to make the frontend more functional -- I am
definitely implementing refreshing stats with as little JavaScript as
possible, and maybe making it slightly more compact and readable. As for
the backend, I feel like it's pretty much where it needs to be, although
it could be a touch faster.
Refactoring the code is also on my to-do list, but that is for much,
much farther down the line.
I also need an adorable logo for the project.
![From slate.com](https://cdn-images-1.medium.com/max/800/0*15v5v1q_81L1rTGV.jpg)


@ -0,0 +1,64 @@
#+title: Chromium Foundation
#+date: 2021-12-03
*** We need to divorce Chromium from Google
The world of browsers is pretty bleak. We essentially have one viable player, Chromium,
flanked by the smaller players of Firefox, spiraling in a slow painful self destruction,
and Safari, a browser well optimized for Apple products.
/note: We're specifically discussing browsers, but you can roughly equate the arguments I make later with the browser's respective engines./
The current state of browsers is a difficult one. Chromium has the backing of notable
megacorps and is under the stewardship of Google, which come with a number of perks
(great featureset, performance, talented minds working on a singular project, etc)
and a number of downsides as well (Google has a vested interest in advertisements, for example).
Firefox is off in the corner with a declining marketshare as it alienates its users
in a flailing attempt to gain users from the Chromium market, including dumping XUL addons
in favour of WebExtensions, some rather unnecessary UI refreshes (subjective, I suppose),
and various other unsavoury moves that leave a bad taste in everyone's mouth. And Safari
is in the other corner, resisting the web-first movement as applications move to web technologies
and APIs in the name of +control+ privacy and efficiency. While I don't think Safari necessarily
holds back the web, I think it could make a more concerted effort to steer it.
With all that said, it's easy to come to the conclusion that the web has a monoculture browser
problem; over time, Chromium will emerge the obvious victor. And that's not great, but not because
there would be only one browser engine.
The web in 2021 is a complex place - due to a number of factors it's no longer simple documents located on
webservers, but we now have what are ostensibly desktop applications loaded in a nested operating system. For better or worse,
this is where we've ended up, and that brings a /lot/ of hard problems to solve for a new browser. This is
why I believe there really haven't been any new mainstream (key word!) browser engines - the web is simply
too complex. The browser pushing this "forward" (somewhat subjective) is Chromium, but Chromium is controlled
by Google. While there are individuals and other corporations contributing code, Google still controls Chromium, and
this makes a fair few people uneasy given Google's primary revenue source - ads, and in turn tracking. Logically,
we want a more diverse set of browsers to choose from, free of Google's influence! Forget V8/Blink, we need
independent engines! Full backing for Gecko and WebKit! Well, yes, but actually, no. We need to throw
effort behind one engine free of Google's clutches, but it should be Chromium/V8/Blink.
Hear me out (credit to @Slackwise for planting the seed of this in my head) - we should really opt to tear the most successful
engine from Google's clutches and spin it off into its own entity. One with a nonprofit structure similar
to how Linux manages itself (a good example of a large scale effort in a similar vein). The web is simply
too complex at this point for new engines to thrive (see: Servo), and the other two options, Gecko from Mozilla/Firefox
and WebKit from Safari/Apple, are having a really hard time evolving and playing catch up. With a foundation dedicated to the
engine, and a licensing or sponsorship model built out, I genuinely believe that it would be better
in the long run for the health and trust of the internet. We can still have Chromium derivatives, with
their unique takes or spins, so it would not reduce the choice available (besides, people choose based on features, not engine).
Concentrating effort into a single browser engine rather than fragmenting the effort across a handful might allow
for some really great changes to the core of the engine, whether it be performance, better APIs, more privacy
respecting approaches, and so on. It also finally eliminates the problem of cross browser incompatibilities.
Would it stagnate? Maybe. It's entirely possible this is a terrible idea that would stifle innovation. But
given the success and evolution of other projects with a matching scale (Linux), and the constant demands
for new "things" for the web, I feel confident that we could maintain a healthy internet with a single engine.
And remember, it's okay for something to be "done". Constantly shipping new features isn't necessarily a plus -
while we see new features shipping as a sign of activity and life, it's perfectly fine for us to take a step back
and work on bugs and speed improvements. And if something isn't satisfactory, I'm pretty confident that the
project could be properly forked with improvements or changes made later upstreamed, in the spirit of open
source and collaboration.
There are calls for breaking up the large technology companies, but I don't really want to delve much
into that here, or even consider this a call to action. Instead, I want this to serve as mild musings and
hopefully get the seed of an idea out there, an idea discussed a few times in a private Discord guild. I don't
expect this to ever become a reality without some strongarming from some government body, but I hold out some
hope.


@ -0,0 +1,76 @@
---
title: Using CouchDB as a Website Backend
date: 2022-09-25
---
**She's a Coucher!**
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/m1eooqIyjbM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Not too long ago, I was shown [CouchDB](https://couchdb.apache.org/), a wondrous database promising to let me relax. It was presented
as a database that would be able to effectively act as a standalone app backend, without any need for a custom backend for simple applications.
At the time, I didn't think I had any use for a document store (or "NoSQL database"), nor did I want to remove any of my existing custom
backends - I spent so long writing them! Deploying them! But then, as I started to think about what would need to go into adding a new
type of commission my girlfriend wanted to accept to her website, including uploading files, managing the schema, etc., I realised that actually,
her website is pretty well served by a document database.
Allow me to justify; the content [on their website](https://artbybecki.com) is completely standalone - there's no need for relations, and I want
to allow flexibility to add whatever content or commission types that she wants without me needing to update a bunch of Go code and deploy it
(as much as I may love Fly.io at the moment), while also performing any migrations to the PostgreSQL database.
So with that in mind, I started to write a proof of concept, moving the database of the existing
Go backend to use CouchDB instead. This was surprisingly easy - CouchDB communicates over HTTP, returning JSON, so I just needed to use the stdlib
HTTP client Go provides. But I found that the more I wrote, the more I was just layering a thin proxy over CouchDB that didn't need to exist!
Granted, this thin proxy did do several nice things, like conveniently shortcutting views, or providing a simple token-based authentication for
editing entries. But at the end of the day, I was just passing CouchDB JSON directly through the API, and realised I could probably scrap it
altogether. Not only is it one less API to maintain and update, but I get to play around with a new concept - directly querying and modifying
a database from the frontend of the website! Something typically labelled as unwanted, but given CouchDB's method of communication I was willing
to give it a shot.
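To give a flavour of how thin that layer was: fetching a view from CouchDB is a single HTTP call. Sketched here in Python with made-up database and view names (the proof of concept used Go's stdlib client, and the site now makes this call from the browser):

```python
import json
from urllib import request

# Ask a hypothetical "commissions/by_type" view for its rows, including the full documents.
url = "http://localhost:5984/website/_design/commissions/_view/by_type?include_docs=true"
with request.urlopen(url) as resp:
    rows = json.load(resp)["rows"]

for row in rows:
    print(row["doc"].get("title"))  # "title" is an assumed field
```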
Thus, I `rm -rf backend/`'d and got to work, getting a handle of how CouchDB works. The transition was really easy - the data I wanted was still
being returned in a format I could easily handle, and after writing some simple views I got a prototype put together.
_views are just JS!_
![Screenshot of Fauxton interface showing view code snippet](https://i.imgur.com/JaZr8qU.png)
(this does still mean there's a bit of manual legwork I have to do when she wants to add a new type, but I'd have to tweak the frontend anyways)
The tricky part came when it was time to move the admin interface to use the CouchDB API. I wanted to use CouchDB's native auth, of course, and
ideally the cookies that it provides on one of its authentication endpoints. The best I could come up with, for the moment, is storing the username
and password as a base64 encoded string and sending it along as basic HTTP authentication. These are only stored in-memory, so while
I do feel a shred of guilt storing the username and password in essentially plaintext, it's at least difficult to get to - and the application is only
used by me and my partner, so the radius is relatively small.
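Concretely, that just means putting a standard Basic Authorization header on requests to CouchDB. Sketched in Python for brevity (the site builds the same header in the browser; the URL and credentials are placeholders):

```python
import base64
from urllib import request

credentials = base64.b64encode(b"username:password").decode()  # base64("user:pass")

req = request.Request(
    "http://localhost:5984/website/some-doc-id",  # illustrative document URL
    headers={"Authorization": f"Basic {credentials}"},
)
with request.urlopen(req) as resp:
    print(resp.status)
```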
One minor note on this topic: permissions. CouchDB doesn't have a granular permission system, and is sort of an all-or-nothing thing - either your
database is open for everyone to read and write, or just one group/user. Thankfully, you can use a design document with a validation function to restrict
the modification of the database to users or groups. It's a little annoying that this isn't technically native, but it does seem to be working just fine,
so until something breaks it seems like the best approach.
There was also the question of where to store images - the custom API I wrote uploaded
images to Backblaze B2, which is proxied through Cloudflare and processed with Cloudinary's free offering for optimising images. Thankfully, the answer
is "just shove it into CouchDB!". CouchDB natively understands attachments for documents, so I don't have to do any funky base64 encoding into the
document itself. It's hooked up to Cloudinary as a data source, so images are cached and processed on their CDN - the B2/Cloudflare approach was
okay, if a little slow, but using CouchDB for this was _really_ slow, so this caching is pretty much mandatory. Also on the caching front, I opted
to put an AWS Cloudfront distribution in front of the database itself to cache the view data. While this slows down updates, it also lessens the
load on the database (currently running on a small Hetzner VPS) and speeds up fetching the data.
_side note: Given CouchDB's replication features, and my want to have a mostly globally distributed CDN for the data, I'm considering looking into
using CouchDB on Fly.io and replicating the database(s) between regions! Follow me on [Twitter](https://twitter.com/gmem_) or [Mastodon](https://tech.lgbt/@arch)
for updates._
Migration from the previous API and the development environment was a breeze as well - I wrote a simple Python script that just pulls the API
and inserts the objects in the response into their own documents, then uploading the images associated with the object to the database itself.
The entire process is pretty quick, and only has to be done once more when I finally merge the frontend changes that point to the database's API.
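In spirit, that script was little more than the following loop (a rough reconstruction with assumed URLs and field names, using the requests library, not the original code):

```python
import requests

OLD_API = "https://example.com/api/commissions"  # old Go backend; URL assumed
COUCH = "http://localhost:5984/website"          # target CouchDB database

for item in requests.get(OLD_API).json():
    doc_id = item["slug"]  # field names are assumptions
    # Create the document itself...
    rev = requests.put(f"{COUCH}/{doc_id}", json=item).json()["rev"]
    # ...then attach each associated image to it; every attachment bumps the revision.
    for image_url in item.get("images", []):
        name = image_url.rsplit("/", 1)[-1]
        resp = requests.put(
            f"{COUCH}/{doc_id}/{name}",
            params={"rev": rev},
            data=requests.get(image_url).content,
            headers={"Content-Type": "image/png"},
        )
        rev = resp.json()["rev"]
```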
Using a database directly as an API for a CRUD interface is a very odd feeling, and even more odd exposing it directly to end users of a website.
But all things considered, it works really well and I'm excited to keep exploring what CouchDB can offer. I don't know if I have enough of an
understanding of the database to recommend it as a backend replacement for simple applications, but I _do_ recommend considering it for simple
APIs, and shipping with a web GUI for management (Fauxton) is incredibly helpful for experimenting. My stance on "SQL all the things!" has shifted
substantially and I recognise that 1) traditional SQL databases are actually a _really_ clunky way of handling webapp data and 2) it's fine to not
have relational data.
I'm going to be exploring the database much more, with CouchDB and PouchDB, [SurrealDB](https://surrealdb.com/), and continuing to keep an eye on
SQLite thanks to [LiteFS](https://github.com/superfly/litefs) and [Litestream](https://litestream.io/) piquing my, and the rest of the internet's,
interest. I also want to invest a little bit of time into time series databases like Prometheus, InfluxDB or QuestDB, although those are a little
lower on my priority list.


@ -0,0 +1,86 @@
#+title: Creating an Artist's Website
#+date: 2022-05-14
*** So my girlfriend is doing commissions...
If you're coming to this post expecting some magical journey into the design
and implementation of some fancy abstract website, I'm sorry to disappoint -
this will instead be a relatively technical post about building out a /very/
simple web frontend for both displaying commission information, building out an API
(or more specifically, a headless content management system) for managing a few
specific bits of content on said site (including images and text. Yay!), and the
trials and tribulations of deploying said API, which includes a rewrite from
TypeScript to Go.
A little more background, to clarify who the artist in question is and their use case.
My girlfriend has recently been taking on some art commissions, specifically around
texturing avatars for VRChat. To make life easier, I decided to create a website
for her that she could direct potential clients to, allowing them to have a look at
the work she's completed, pricing estimates, and contact information. It began life as
a simple gallery of photos, but didn't quite fit the above goals, and thus a plan was
born (after consultation with the client) - custom made information sheets for each
character, paired with a little blurb about what was done to reach the end product.
The goal of having it be editable through an interface rather than manually editing
HTML was forefront in my mind, so early into this design I opted to store the
information as a static object in the JavaScript, intending to swap it out later.
Frontend isn't really my speciality, so we'll say that it was relatively
straightforward to put together and move on to the exciting piece - the API.
My initial reaction was to leverage CloudFlare Pages' /functions/ feature, which
allows the creation of "serverless" functions alongside the static website (they
also offer dedicated "Workers", but these appear to be intended as standalone
endpoints rather than being developed in tandem with the site). Storing the commission
sheets and the associated data was easy with the K/V store offered to the functions,
but I began to encounter issues as soon as files got involved. While the runtime the
functions are contained in /seems/ complete, it's a bit deceptive - in this instance,
I found that the =File= / =Blob= API simply didn't exist, which put a big block in
front of my plan to more or less proxy the image over to the GitHub repository,
allowing me to store the images for free and very easily link and load them.
Unfortunately, GitHub's API requires the contents of the files to be base64 encoded,
and the limitations of the function's runtime environment made this difficult. I did
manage to get the files to upload, but it would transform into a text file rather
than the PNG it should be.
After wrestling with this problem for a day, attempting various changes, I decided
to throw the whole function idea into the bin and look at a traditional long-running
server process, and ditched the idea of storing files in GitHub as it would only lead
to frustrations when trying to do development and push changes, opting instead for
Backblaze's B2 offering (mostly because I'd used it before and the pricing was
reasonable, paired with the free bandwidth as it goes through CloudFlare). Not wanting
to pass up an opportunity to pick up at least one new technology, I opted to leverage
[[https://fly.io][Fly.io]]'s free plan. My initial impression of Fly.io was limited, having only read
a handful of their blog posts, but upon further inspection (and I'm still standing
by this after using the product for a while) it felt more like an evolved, but less
mature, Heroku, offering a very similar fashion of deployment but with some added
niceties like persistent volumes and free custom domains.
The first prototype leveraged SQLite with a persistent volume, since I didn't expect to
need anything more complex - a flat file structure would have been fine, but where's the
fun in that? And this actually worked fine, but I quickly found out that during deploys,
the API would be unavailable as it updated to the latest version, and I figured the best
way to resolve this would be to scale up to 2 instances of the app, so there would always
be one instance available as the update rolled out. "Ah!" the keen-eyed reader may say,
"how will you replicate the SQLite database?" This... was a problem I had not considered,
and thus went looking for answers to avoid spinning up a hosted database. With Fly.io
having just acquired a company that builds a product specifically for this purpose, I
figure this feature may be coming in the future, but after a little digging I decided to
opt for a PostgreSQL database running on Fly.io. Blissfully easy to set up, with two
commands required to create the database then create a connection string for the app
itself, injected as an environment variable. After some manual migration (there were
only a few records to migrate, so better sooner than later) and a deployment to swap over
to the Postgres database in the Go app, we were off to the races! Deployments now don't
take the API offline, and I can scale up as I need without worrying about replicating the
SQLite database. Success!
/sidenote: this codebase is unlikely to be open sourced anytime soon because it's... real messy. but keep an eye on my [[https://tech.lgbt/@arch][Mastodon]]/
I know I've glossed over the file storage with Backblaze B2 a bit, but it's not really
anything to note as exciting. The setup with CloudFlare was largely a [[https://www.backblaze.com/blog/free-image-hosting-with-cloudflare-transform-rules-and-backblaze-b2/][standard affair]] with
some DNS entries and URL rules, and leveraging the S3 API and Go libraries made it a
breeze to set up in the API itself. It's "fast enough", with caching, and the peering
agreements between CloudFlare and Backblaze mean I only pay for the storage, which is much
less than it would cost to use some other S3-compatible provider (say, AWS itself).
My current task is getting the admin panel up to snuff, but it's very functional at the
moment and easy enough for my girlfriend to use to update the content of the site, so
at this point I'm satisfied with the [[https://artbybecki.com][current result]]. I now await further instructions.


@ -0,0 +1,89 @@
#+title: Current Infrastructure (2022)
#+date: 2022-07-11
*** Keep it interesting
My personal infrastructure has evolved quite significantly over the years, from
a single Raspberry Pi 1, to a Raspberry Pi 2 and ThinkCenter mini PC, to my
current setup consisting of two Raspberry Pis, a few cloud servers, and a NAS that
is currently being put together.
At the heart of my infrastructure is my [[https://tailscale.com/kb/1136/tailnet/][tailnet]]. All machines, server, desktop, mobile, whatever,
get added to the network, mostly for ease of access. One of my Pis at home serves as an exit
node, exposing my home's subnet (sometimes called dot-fifty because of the default subnet the
router creates) so I can access the few devices that I can't install Tailscale on. The
simplicity of adding new devices to the network has proved very useful, and has encouraged me to
adopt this everything-on-one-network approach.
The servers on the network run a few key pieces of infrastructure. At home, the same Pi that
serves as the Tailscale exit node also runs [[https://k3s.io/][k3s]] to coordinate the hosting of my Vaultwarden,
container registry, and [[github.com/gmemstr/hue-webapp][hue webapp]] applications. This same Pi also serves Pihole, which has yet
to be moved into k3s (but it will be soon). While k3s is a fairly optimised distribution of
Kubernetes, it does beg the question "why deploy it? why not just run docker, or docker compose,
or whatever else?". The simple answer is "I wanted to". The other simple answer is that it is
an excellent learning exercise. I deal with Kubernetes on a fairly regular basis both at my
day job and at [[https://furality.org][Furality]] (I'll be writing a dedicated post delving into the tech powering that),
so having one or two personal deployments doesn't hurt for experimentation and learning. Plus,
it's actually simplified my workflow for deploying applications to self host, and forced me to
set up proper CI/CD workflows to push things to my personal container registry. This isn't
anything special, just [[https://docs.docker.com/registry/deploying/][Docker's own registry server]], which I can push whatever images I want to and
pull them down on whatever machine I need, provided said machine is connected to the tailnet.
Thankfully Tailscale is trivial to use in CI/CD pipelines, so I don't ever have to expose
this registry to the wider internet.
Also at home I have my media library, which runs off a Raspberry Pi 4b connected to a 4TB external
hard drive. This is the first thing that will be moved to the NAS being built, as it can struggle
with some media workloads. It hosts an instance of [[https://jellyfin.org/][Jellyfin]] for watching things back, but I tend
to use the exposed network share instead, since the Pi can sometimes struggle to encode video
to serve through the browser. Using it as a "dumb" network share is mostly fine, but you do
lose some of the nice features that come with a more full featured client, like resuming playback
across devices or a nicer interface for picking what to watch. There's really nothing much more
to say about this Pi. When the NAS is built, the work it does will be moved to that, and the k3s
configuration currently running on my Pi 3b will move to it. At that point it's likely I'll
actually cluster the two together, depending on whether I find another workload for it.
Over in the datacenter world, I have a few things running that are slightly less critical. For
starters, I rent a 1TB storage box from [[https://www.hetzner.com][Hetzner]] for backing things up off-site. Most of it is just
junk, and I really should get around to sorting it out, but there's a lot of files and directories
and it's easier to just lug it around (I say that, it might actually be easier to just remove
most of it since I rarely access the bulk of it). This is also where backups of my Minecraft server
are sent to on a daily basis. This Minecraft server runs on [[https://www.oracle.com/uk/cloud/free/][Oracle Cloud's free tier]], specifically
on a 4-core 12GB ARM based server. It performs pretty well considering it's only really me and my
girlfriend playing on the server, and while I may not be the biggest fan of Oracle, it doesn't
cost me anything (I do keep checking to make sure though!). Also running on Oracle Cloud is an
instance of [[https://github.com/louislam/uptime-kuma][Uptime Kuma]], which is a fairly simple monitoring tool that makes requests to whatever
services I need to keep an eye on every minute or so. This runs on the tiny AMD-based server
the free tier provides, and while I ran into a bit of trouble with the default request interval
for each service (it's currently monitoring 12 different services), randomising the intervals
a bit seems to have smoothed everything out.
Among the services being monitored is a small project I'm working on that is currently hosted
on a Hetzner VPS. This VPS is also running k3s, and serves up [[https://mc.gmem.ca][mc.gmem.ca]] while I work on the beta
version. The setup powering it is fairly straightforward, with a Kubernetes deployment pulling
images from my container registry, the container image itself being built and pushed with
[[https://sourcehut.org/][sourcehut]]'s build service. Originally, I tried hosting this on the same server as the Minecraft
server, but despite being able to build images for different architectures, it proved very slow
and error prone, so I opted to instead grab a cheap VPS to host it for the time being. I don't
foresee the need to scale it up anytime soon, but it will be easy enough to do.
A fair number of services I deploy or write rely on SQLite as a datastore, since I don't see much
point in deploying/maintaining a full database server like Postgres, so I've taken to playing
around with [[https://litestream.io/][Litestream]], which was recently "acquired" by Fly.io. This replicates over to the
aforementioned Hetzner storage box, and I might add a second target to the configuration for
peace of mind.
Speaking of Fly.io, I also leverage that! Mostly as an experiment, but I did have a valid
use case for it as well. My girlfriend does commissions for VRChat avatars, and needed a place to
showcase her work. I opted to build out a custom headless CMS and simple frontend (with Go and
SvelteKit, respectively) to create [[https://artbybecki.com/][Art by Becki]]. I'm no frontend dev, but the design is simple
enough and the "client" is happy with it. The frontend itself is hosted on CloudFlare Pages (most
of my sites or services have their DNS managed through CloudFlare), and images are served from
Backblaze B2. I covered all this in my previous post [[/posts/creating-an-artists-website/][Creating an Artist's Website]] so you
can read more about the site there. My own website (and this blog) is hosted with GitHub Pages,
so nothing to really write about on that front.
And with that, I think that's everything I currently self host, and how. I'm continuing to refine
the setup, and my current goals are to build the NAS I desperately need and find a proper solution
for writing/maintaining a personal knowledgebase. Be sure to either follow me on Mastodon [[https://tech.lgbt/@arch][tech.lgbt/@arch]]
or Twitter [[https://twitter.com/gmem_][twitter.com/gmem_]]. I'm sure I'll have a followup post when I finally get my NAS built
and deployed, with whatever trials and tribulations I encounter along the way.


@ -0,0 +1,57 @@
---
title: DIY API Documentation
date: 2016-02-24
---
#### How difficult can writing my own API doc pages be?
I needed a good documentation solution for
[NodeMC](https://nodemc.space)'s
RESTful API. But alas, I could find no solutions that really met my
particular need. Most API documentation services I could find were
either aimed more towards web APIs, like Facebook's or the various APIs
from Microsoft, very, very slow, or just far too expensive for what I
wanted to do (I'm looking at you,
[readme.io](http://readme.io)). So,
as I usually do, I decided to tackle this issue myself.
![The current docs!](https://cdn-images-1.medium.com/max/800/1*-ojv-n3P9P3Tn_49VO0Iug.png)
I knew I wanted to use Markdown for writing the docs, so the first step
was to find a Markdown-to-HTML converter that I could easily automate
for a smoother workflow. After a bit of research, I came along
[Pandoc](http://pandoc.org/), a
converter that does pretty much everything I need, including adding in
CSS resources to the exported file. Excellent. There are also quite a few
integrations for several Markdown (or text) editors, but none for VS Code
so I didn't need to worry about those, choosing instead to use the \*nix
watch command to run my 'makefile' every second to build to HTML.
The next decision I had to make was what to use for CSS. I was very
tempted to use Bootstrap, which I have always used for pretty much all
of my projects that I needed a grid system for. However, instead, I
decided on the much more lightweight
[Skeleton](http://getskeleton.com/)
framework, which does pretty much everything I need in a much smaller
package. Admittedly it's not as feature-packed as Bootstrap, but it does
the job for something that is designed to be mostly text for developers
who want to get around quickly. Plus, it's not too bad looking.
So the final piece of the puzzle was "how can I present the information
attractively?", which took a little bit more time to figure out. I
wanted to do something like what most traditional companies will do,
with a sidebar table of contents, headers, etc. The easiest way to do
this was a bit of custom HTML and a handy set of Pandoc parameters, and we were
off to the races.
Now at this point you're probably wondering why I'm not just using
Jekyll, and the answer to that is... well, I just didn't. Honestly I
wanted to try to roll my own Jekyll-esque tool, which while slightly
less efficient still gets the job done.
So where can you see these docs in action? Well, you can view the
finished result over at
[docs.nodemc.space](http://docs.nodemc.space), and the source code for the docs (where you can make
suggestions as pull requests) is available on [my
GitHub](https://github.com/gmemstr/NodeMC-Docs), which I hope can be used by other people to build
their own cool docs.

58
posts/emacs-induction.org Normal file

@ -0,0 +1,58 @@
#+title: Emacs Induction
#+date: 2021-09-01
*** Recently, I decided to pick up Emacs.
/sidenote: this is my first post using orgmode. so apologies for any weirdness./
I've always been fascinated with the culture around text editors. Each one is formed
of its own clique of dedicated users, with either a flourishing ecosystem or floundering
community (see: Atom). You have the vim users, swearing by the keyboard shortcuts,
the VSCode users, pledging allegiance +to the flag+ Microsoft and Node, the Sublime
fanatics, with their focused and fast editor, and the emacs nerds, living and breathing
(lisp). And all the other editors and their hardcore users (seriously, we could spend
all day listing them). And the fantastic thing is, all of them (except Notepad) are
perfectly valid options for development. Thanks to the advent of the Language Server
Protocol, most extensible text editors can be turned into competent code editors (not
necessarily IDE replacements, but good enough for small or day-to-day use).
Up until recently, I've been using Sublime. It's a focused experience with a very small
development team and a newly revived ecosystem, and native applications for any platform
I care to use. I've used VSCode, Atom, and Notepad++ previously, but never really delved
much into the world of "text based" (for lack of better term?) editors, including vim
and emacs. The most exposure was using nano for quick configuration edits on servers or
vim for git commit messages. Emacs evaded me, and I had little interest in switching
away from the editors I already understood. But as I grew as a developer and explored
new topics, including Clojure and Lisps in general, I quickly realized that to go further
I would need to dig deeper into more foreign concepts and stray from the C-like languages
I was so comfortable with. The first few days at CircleCI, I was introduced to [[https://clojure.org/][Clojure]],
and I quickly grew more comfortable with the language and concepts (although I am
nowhere near experienced enough to write a more complete application), and I have that
to thank for my real interest in lisps.
Several failed attempts later, I managed to get a handle on how Guix works on a surface
level. My motivation for this was trying to package Sublime Text, and while I made
significant progress, I hit some hard blockers that proved tough to defeat. This
sparked me to invest time into emacs, the operating system with an okay text editor.
For a while, leading up to this, I've subscribed to and consumed [[https://www.youtube.com/c/systemcrafters][System Crafters]], an
excellent resource for getting started with emacs configuration (among other related
topics). It was part of my inspiration to pick up emacs and play around with it - I don't
typically enjoy watching video based tutorials, especially for programming, but thanks
to the livestreamed format presented it was much easier to consume.
So far, I'm enjoying it. Now that I have more of a handle on how lisps work, it's a much
smoother experience, and I do encourage developers to exit their comfort zone of C-like
languages and poke around a lisp. There's a learning curve, for sure, but the concepts
can be applied to non lisp languages as well. The configuration for my emacs setup is
(so far) relatively straightforward, and I haven't spent much time setting it up with
language servers or specific language modes, but for writing it's pretty snappy (and
pretty). [[https://orgmode.org][Orgmode]] is a very interesting experience coming from being a staunch Markdown
defender, but it's not a huge adjustment and the experience with emacs is sublime. It's
also usable outside of emacs, although I can't speak to the experience, and GitHub
supports it natively (and Hugo, thank goodness). [[https://justin.abrah.ms/emacs/literate_programming.html][Literate programming]] also seems like
a really neat idea for blog posts and documentation, and I might switch my repository
READMEs over to it for things like configuration templates. These are still early days
though - I've only been using emacs for a few days and am still working out where it
fits in to my development workflow beyond markdown/orgmode documents.
/sidenote: emacs or Emacs?/


@ -0,0 +1,38 @@
---
title: Enjoying my time with Node.js
date: 2015-12-19
---
#### Alternatively, I found a project to properly learn it
A bit of background -- I've been using primarily PHP for any backend
that I've needed to do, which, while it works, most certainly doesn't seem
quite right. I have nothing against PHP, however it feels a bit dirty,
almost like I'm cheating when using it. I didn't know any other way,
though, so I stuck with it.
Well I recently found a project I could use to learn Node.js -- a
Minecraft server control panel -- and I've actually been enjoying it,
much more than I have PHP. Here's a demo of my project:
https://www.youtube.com/embed/c0IGKEmHyOM?feature=oembed
It's all served (very quickly) by a Node.js backend, that wraps around
the Minecraft server and uses multiple POST and GET routes for various
functions, such as saving files. The best part about it is how fast it
is (obviously), but the second greatest thing is the efficiency. For
example, in PHP, for me to implement some new thing, I'd most likely
need to create a new file, fill in my variables and methods, and point
my JavaScript (or AJAX) towards it. And I have no real good way of
debugging it. However with Node.js, it's three lines of code (no
seriously) to implement a new route with Express that will perform a
function. Not only that, but it's *so easy to debug.* Because of how
it's run, instead of just producing a 500 error page, it can actually
log the error before shutting off the program, which is so much more
useful than the old 'cat /var/log/apache2/error.log'.
My advice to anyone looking to get into web development is *learn
Node.js.* Not only is it a new web technology that is only increasing in
size, but it's powerful, open, with about a billion extensions, and can
help you learn more JavaScript, a big part of dynamic content on HTML5
websites.


@ -0,0 +1,60 @@
#+title: From Guix to NixOS
#+date: 2021-09-29
*it's a matter of compromise*
I really, really like the idea of a declarative model for things - define a state
you want an application to be in, and let the system figure out how to get there.
It's what drew me to Terraform (my first real exposure to declarative systems)
and what eventually led me to paying particular attention to Guix. NixOS was on
my radar, but having been inducted into the [[/posts/emacs-induction][land of lisps]] I didn't particularly
like the configuration language.
It was inevitable I would end up with a Guix install, but it didn't last long -
the move was motivated by a recent kernel upgrade I had done on my up-until-then
very stable Arch install. The hard lockups got irritating to the point I decided
to hop distributions, and Guix was the natural choice - I had already been
experimenting with it in a virtual machine, and had written a rather complete
configuration file as well. But I quickly ran into a big roadblock:
/graphics drivers/. I rather like playing video games on the old Steam, and
having an nvidia graphics card means the graphics drivers included in the Linux
kernel don't quite cut it. But don't panic! There's a community driven [[https://gitlab.com/nonguix/nonguix][nonguix]]
repository that focuses on packaging nonfree applications, drivers and kernels
for Guix systems (my install image [[https://github.com/SystemCrafters/guix-installer/][was based on it, with the nonfree kernel]]).
Unfortunately, support for nvidia drivers is spotty - this isn't really the fault
of the community, with nvidia being rather disliked in the Linux kernel and
community ([[https://www.youtube.com/watch?v=_36yNWw_07g][case in point]]). No problem though. A few versions behind isn't that
big of a deal, since I don't play many new triple-A games anyways. Alas, the
troubles arose again when I ran into incompatibilities with GNOME and GDM. While
it has been reported to be compatible with an alternative login manager and
desktop environment, I was comfortable with GNOME (despite disliking the overall
look and feel, it's a comfortable place to be and I'm rather tired of tiling
window managers). Having at this point become incredibly frustrated with the
edit-reboot-test loop, I decided to instead turn to NixOS, which benefitted from
not adhering to free-software-only to a fault. While I do crave the ability to
only use free software, at some point I have to make a compromise to continue
enjoying the hobbies and activities that I do.
A fresh NixOS install USB in hand, I set about spinning up a new installation
on my desktop. The initial install had a few missteps - while my memory is
somewhat fuzzy on the cause, my first go led to a constant boot into systemd
emergency mode. Going back to square one seemed to resolve the issue, and I had
a happily running NixOS install at the end of the day. So I guess I'm learning
Nix! Which is really not the worst thing ever - given the package manager itself
can run on more systems (including macOS, making it a likely candidate to replace
=brew= for me), I can't think of any real downsides to having it in my toolbelt.
It furthers my goal of having a declarative setup for the systems I use, which is a slow
but ambitious end game.
My initial impressions of Nix's DSL versus Guix's Guile (scheme) is that of
curiosity. Nix's language comes across as more of a configuration system akin
to (and I hesitate to make this comparison, but I'm in the early stages of
learning) YAML or TOML, while Guile is (as expected) a very flexible lisp. While
I do still invest my time into learning and embracing lisps (especially in
emacs), and I really want to return to Guix at some point, I feel the current
compromise I have to take will lead me to never doing so (unless I switched to
an AMD GPU and optimized for nonfree operation). So my current path is to stick
with NixOS and optimize and share my configurations (you can expect to see them
on my [[https://github.com/gmemstr/dotfiles][dotfiles repository]] soon-ish).
I hope to write more on this soon!


@ -0,0 +1,205 @@
---
title: Infrastructure at Furality
date: 2022-08-17
---
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/_KmcIv6XU3U" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>
</iframe>
**[You can find the slide deck here](https://docs.google.com/presentation/d/1V2UuCbXzLQaXZrPQq7SapuL-KuBpDxVuAkZhLSigHSA/edit?usp=sharing)**
Back in November of 2021, [Furality
Legends](https://past.furality.org/f4/) convention took place, and I
attended along with my SO [Becki](https://artbybecki.com). It was an
interesting experience, and I bought a VR headset (an Oculus Quest 2)
about halfway through to properly immerse myself. During the tech
enthusiast meetup, a volunteer of the convention popped in, and while
speaking to another attendee mentioned they were open to new volunteers.
Inspired, and eager to improve my skills in DevOps (I was employed at
CircleCI, about to transition to my current employer), I promptly sent
an email in with a very short introduction, and ended up joining the
convention's DevOps department. Despite the name, the DevOps team
encompasses all web-related development (it's important to distinguish
this from the Unity/world development team) including the F.O.X. API (a
currently monolithic PHP application), web frontend for both the portal
and main organisation website, and a few other pieces required to run
the convention smoothly. I landed on the infrastructure team, a hybrid
of Platform and Developer Experience. Coming off of Legends, the team
lead, Junaos, was starting to investigate alternate means of hosting the
backends and frontends that weren't just a pile of servers (you can see
what our infrastructure used to look like [here during the DevOps panel
at Legends](https://youtu.be/vmmyzFFn_Uo)), so I joined at a really
opportune time for influencing the direction we took.
![Initial email sent to Furality to volunteer](/images/furality-email.png)
While the infrastructure team is also responsible for maintaining the
streaming infrastructure required to run the convention club, live
stream, live panels, and more, this is *relatively* hands off, and I
didn't have a ton of involvement in that side of things. Alofoxx goes
into more detail during the panel.
The technical requirements of Furality are somewhat unique. We have a
few events per year, with a crazy amount of activity (in the ~150 req/s
range to our API during Aqua) over a weekend, then very little until
the next event. Furality is entirely made up of volunteers, so scheduling
things can be tricky; while there is some overlap in availability, it
can be tough to ensure people are online to monitor services or fix
bugs, especially during the offseason. With these things in mind, some
key focuses emerge:
1. Aggressive auto scaling, both up and down
2. Automate as much as possible, especially when getting code into
production
3. Monitor everything
Of those three, I think only the 1st point is really unique to us.
Points 2 and 3 can apply pretty widely to other tech companies (the
DevOps department is, operationally, a small tech startup).
We picked Kubernetes to help solve these three focuses, and I think we
did pretty damn well. But before I explain how I came to that
conclusion, let's dive into the points a little deeper, talk about how
Kubernetes addresses each issue, and maybe touch on *why you wouldn't*
want to use Kubernetes.
![Furality infrastructure diagram of our cluster and services](/images/furality-infra-diagram.jpg)
### Aggressive auto scaling, both up and down
As mentioned, Furality has a major spike of activity a few times a year
for a weekend (with some buffer on either side), followed by a minuscule
amount of user interaction in between. While this is doable with
provisioned VPSs through Terraform and custom images built with Packer,
it feels a little bit cumbersome. Ideally, we define a few data points,
and the system reacts when thresholds are met to scale up the number of
instances of the API running. Since the API is stateless (everything
feeds back to a single MySQL database), we aren't too worried about
things being lost if a user hits one instance then another.
One perk of this system being for a convention is we can examine the
scheduled events taking place and use that to predict when we need to
pay particular attention to our systems. That 150 requests per second
figure is rounded down, and came during our opening ceremonies, when attendees
were flocking to the portal to request invites to worlds, intent on
watching the stream. The backend team had the foresight to implement a
decent caching layer for some of the more expensive data retrieval
operations, and all said and done there was no real "outage" due to load
(with the definition of outage being a completely inaccessible service
or website). Things just got a bit slow as the queue consumers sending
out invites fell behind (and occasionally crashed outright) - a bit of
tweaking to their scaling sorted it out.
Part of the way through building out the infrastructure, I was
questioning our decision to opt for Kubernetes over something else. But
it actually proved to be a solid choice for our use case, especially for
scaling, since we could automatically scale the number of pods, and in
turn nodes for our cluster, by defining the few metrics we wanted to
watch (primarily looking at requests being handled by php-fpm and CPU
usage). We scaled up pretty aggressively, and maxed out at about 20
`s-4vcpu-8gb` DigitalOcean nodes. With a bit more tuning I'm sure we
could have optimised our scaling a little better, but we were intent on
ensuring a smooth experience for con-goers, and opted for the "if it
works" mentality.
Scaling down was a bit tricky. During the off season we need to keep
nearly all the same services running, but with much smaller capacities
to facilitate some of the portal and internal functionality, as well as
ongoing development environments. Because the bulk of Furality's income
happens during the convention, it's important to keep off-season costs
low, and this is one of the reasons we opted for DigitalOcean as the
server host. We ended up with a slightly larger cluster than we started
out with pre-convention, even after aggressively scaling down and
imposing resource limits on pods. Scaling down our database, which we
sized up 3 times during the convention with no notable downtime, was
also a bit tricky, as DigitalOcean has removed the ability to scale down
via their API. Instead, we migrated the data manually to a smaller
instance, doing various sanity checks before fully decommissioning the
previous deployment.
### Automate as much as possible, especially when getting code into production
It can be hard to wrangle people for code reviews, or to manually update
deployments on servers. At one point, updating the F.O.X. API required
ssh'ing into individual servers and doing a `git pull`, or running an
Ansible playbook to run a similar command. This was somewhat error
prone, requiring human intervention, and could lead to drift in some
instances. To address this, we needed a way of automatically pushing up
changes, and having the servers update as required, while also making
sure our Terraform configuration was the source of truth for how our
infrastructure was set up.
To accomplish this, we built out Otter, which is a small application
listening for webhooks from our CI/CD processes; it takes the data it
receives, updates our Terraform HCL files with the new image tag, and
opens a pull request for review. It's not a perfect system, still
requiring some human intervention to not only merge the changes but also
apply the changes through Terraform Cloud, but it was better than
nothing, and let us keep everything in Terraform.
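To give an idea of the kind of change Otter automates, picture an image tag pinned in HCL - a hypothetical snippet, not our real modules, where only the tag changes between releases:

```hcl
# Hypothetical example; Otter's pull requests amount to bumping this tag
# after CI builds and pushes a new image.
locals {
  fox_api_image = "registry.example.com/furality/fox-api:v1.2.3"
}
```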
![Otter service mascot, an otter carrying a stack of boxes wearing a hard hat](/images/furality-otter.png)
![Example Otter pull request](/images/otter-pr.png)
We also built out Dutchie, a little internal tool that gates our API
documentation behind OAuth and renders it in a nice format using
SwaggerUI. It fetches the spec directly from the GitHub repository, so
it's always up to date, and as a bonus we can fetch specific branches,
essentially getting dev/prod/whatever-else versioning very easily.
### Monitor everything
We already had Grafana and Graylog instances up and running, so this is
pretty much a solved problem for us. We have Fluentd and Prometheus
running in the cluster (along with an exporter running alongside our API
pod for php-fpm metrics) that feed into the relevant services. From
there we can put up pretty dashboards for some teams and really verbose
ones for ourselves.
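The "exporter running alongside our API pod" bit is the usual sidecar pattern - roughly this shape, with illustrative names (the exact image and scrape configuration here are assumptions, not our real manifests):

```yaml
# Sketch of the API pod with a php-fpm exporter sidecar. Prometheus scrapes
# the exporter's port and the metrics feed the Grafana dashboards below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fox-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fox-api
  template:
    metadata:
      labels:
        app: fox-api
    spec:
      containers:
        - name: api
          image: registry.example.com/furality/fox-api:latest
        - name: php-fpm-exporter
          image: hipages/php-fpm_exporter:2
          ports:
            - containerPort: 9253 # default port for this exporter
```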
![Grafana Dashboard showing general metrics](/images/furality-grafana-0.jpg)
![Grafana dashboard show php, rabbitmq and redis stats](/images/furality-grafana-1.jpg)
### What could have been done better?
From the outset, we opted to deploy a *lot* to our Kubernetes cluster,
including our Discord bots, Tolgee for translations, and a few other
internal services, in addition to our custom services for running the
convention. Thankfully we had the foresight to deploy our static sites
to a static provider, CloudFlare Pages. Trying to run absolutely
everything in our cluster was almost more trouble than it was worth,
such as when a pod with a Discord bot would be killed and moved to another
node (requiring the attached volume for the database to be moved), or
the general cognitive load and demand of maintaining these additional
services that didn't benefit much from running in the cluster. We're
probably going to move some of these services out of our cluster,
specifically the Discord bots, to free up resources and ensure a more
stable uptime for those critical tools.
Another thing that we found somewhat painful was defining our cluster
state in Terraform, rather than a Kubernetes-native solution. We ended
up accruing a fair amount of technical debt in our infrastructure state
repository, and running everything through Terraform Cloud drastically
slowed down pushing out updates to configurations. While it was nice to
keep our configuration declarative and in one place, it proved to be a
significant bottleneck.
### What happens next?
We don't really know! As it stands, I'm fairly confident our existing
infrastructure could weather another convention, but we know there are
some places we could improve, and the move did introduce a fair amount
of technical debt that we need to clean up. For example, we're using
Terraform to control everything from server provisioning to Kubernetes
cluster, and want to move the management of our cluster to something
more "cloud native" (our current focus is ArgoCD). There is also some
improvements that could be done to our ability to scale down, and
general cost optimisation. Now that we have a baseline understanding of
what to expect with this more modern and shiney solution, we can iterate
on our infrastructure and keep working towards an "ideal system",
something you don't normally have the chance to do in a traditional full
time employment role. Whatever it is we do, I'll be very excited to talk
about it at the next DevOps panel.
If you have any questions, feel free to poke me [on Twitter](https://twitter.com/gmem_)
or [on Mastodon](https://tech.lgbt/@arch).

View file

@ -0,0 +1,16 @@
---
title: Hello, world! (Again)
date: 2021-07-06
---
Welcome back to the blog! Sort of.
After much thought, I've decided to move my blog off of Medium and back to a place I can fully control. It wasn't really a difficult decision; Medium as a platform has changed a fair amount since I began using it, just as I have. Rather than nuking the entirety of my previous posts, I exported and cleaned them up for archival (although I did seriously consider just starting fresh).
There are some things I should address about these posts. First, they are a product of their time. I don't say that to excuse racism (there is none in these old posts), but rather both my writing style and view of the world. These older posts span from 2015 to 2018, when I was between the ages of 15 and 18 (at the time of this post, I'm 21). I've done my best to clean up the posts after the Medium export, but content is left as-is where possible. While I maintain an archive of the original export, I've decided against committing the raw files and any drafts that my account contained (mostly because they're in very rough states and are mostly for personal nostalgia/reflection).
I will not be editing these older posts. However, posts from this one onwards are fair game!
On the technical side, this site is using [Hugo](https://gohugo.io) to generate static files. It seemed like the simplest option. Commits run through a CircleCI pipeline (I work there, and am most familiar with the platform, hence no GitHub Actions) and get pushed to a branch that GitHub Pages serves up. There are simpler approaches, but the important thing is that the content is in plaintext, rather than a platform's database, and can be moved to wherever is needed.
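For the curious, the pipeline itself is nothing fancy - roughly this shape (a simplified sketch rather than the real config; the deploy script is a made-up placeholder for the step that pushes the generated site to the pages branch):

```yaml
# Minimal CircleCI sketch: build the Hugo site, then push public/ to the
# branch GitHub Pages serves. The deploy script here is hypothetical.
version: 2.1
jobs:
  build-and-deploy:
    docker:
      - image: cibuilds/hugo:latest
    steps:
      - checkout
      - run: hugo --minify
      - run: ./scripts/deploy-to-pages.sh
workflows:
  deploy:
    jobs:
      - build-and-deploy
```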
I'm unsure how regular posts will be to this blog, but there may be the odd post here and there - follow me on Twitter ([@gmem_](https://twitter.com/gmem_)) to be notified, or subscribe to the [generated RSS feed](https://blog.gabrielsimmer.com/index.xml).

51
posts/hoverzoom-js.md Normal file
View file

@ -0,0 +1,51 @@
---
title: HoverZoom.js
date: 2015-11-05
---
#### A quick script
I'm working on a new, super secret project, and as such I'm going to
post bits and pieces of code that are generic enough but could be
useful.
This particular one will zoom in when you hover over an image, and only
requires jQuery to work. Enjoy!
```javascript
/*
HoverZoom.js by Gabriel Simmer
Provides zoom-in on hover over element,
requires jQuery :)
Obviously you can change "img" to whatever
you'd like, e.g ".image" or "#image"
*/
var imgHeight = 720;
var imgWidth = 1280; // Naturally you should replace these with your own values

$(document).on({
  mouseenter: function () {
    $("img").animate({
      "height": imgHeight,
      "width": imgWidth
    });
  },
  mouseleave: function () {
    $("img").animate({
      "height": "100%",
      "width": "100%"
    });
  }
}, "img");
```
Of course, be sure to include the following at the bottom of your *body*
element:
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<!-- HoverZoom JS -->
<script src="js/hoverzoom.js"></script>
```

View file

@ -0,0 +1,52 @@
---
title: I discovered a /r/programmingcirclejerk of NodeMC...
date: 2016-03-02
---
#### Haters do exist, but hey, publicity right?!
So recently I was looking at the graphs for NodeMC's
traffic on GitHub and realized... there was a thread from
/r/programmingcirclejerk (which I won't link for obvious reasons) that
was directing a bit of traffic (actually quite a bit) to the GitHub
repository. So out of idle curiosity of just having woken up, I decided
*"Why not?"* and opened up the thread. I was greeted with what on the
surface seemed like hatred towards me and my product but upon further
investigation I found it seemed more like general mockery towards a few of my
decisions or wording -- or just Node.js in general. So let's analyze
and reply to some comments!
![Top comment](https://cdn-images-1.medium.com/max/800/1*lK1kKdzGiKVu0pyf0PYGGQ.png)
So this is obviously attacking my wording in the README... I claim that
because NodeMC is written in Node.js, it is fast. And I admit, maybe
Node.js in general is not that fast. *However* -- NodeMC has actually
proven to be quite speedy in tests, so I stand by my statement, perhaps
with some tweaked wording...
![](https://cdn-images-1.medium.com/max/800/1*z7IAUMH05TmOiJPrb1Y_Rg.png)
![](https://cdn-images-1.medium.com/max/800/1*xWqHee4G3FlMGeZnZaN57Q.png)
This is most likely touching on (or slapping) my little confusion about
\*nix permissions. The EPIPE error was due to the fact Java couldn't
access the Minecraft server jarfiles, and was throwing an error that
Node.js just had no f\*kng clue what to do with. I did manage to fix
it.
![](https://cdn-images-1.medium.com/max/800/1*Hcwl8cD1kWwns5lqgpoebg.png)
Unfortunately, I do have to agree with this commenter, Node.js isn't
exactly the most reliable thing, and **most definitely** not the easiest
to debug \*grumble EPIPE grumble\*. Now that said, it's not as
unreliable as Windows 10 \*badum tish\*.
![](https://cdn-images-1.medium.com/max/800/1*QGsJMCN-p0QagVKINTKjvg.png)
And the final comment. I do want to learn Ruby at some point. But I did
laugh when I saw this lovely comment.
![](https://cdn-images-1.medium.com/max/800/1*rIupWDN17_YNRPbrcHt5aA.png)
And of course, my comment, to finish off the pile of steaming comments.
I love you all\~

View file

@ -0,0 +1,43 @@
---
title: I'm Using Notepad++
date: 2015-10-27
---
#### Again
UPDATE! I've started using Atom (thanks to Oliver Dunk for reminding me
about it), enjoy it a ton with the remote-edit package. I'll have a
follow-up post with more thoughts :P
Maybe I'm just crazy. Or maybe there are some things I just really need
my development area to do. Because of my job as CTO of a YouTube
Network, I need to do a lot of work on the website, which means I need
real-time access to it. I have a locally hosted server using XAMPP, but
it's not enough -- for example, when I have subdirectories, or
a .htaccess rule, I have to do this
```
<link href="/projectname/css/bootstrap.min.css" rel="stylesheet">
```
which is obviously incorrect when deploying it to a real server.
Maybe I should explain a bit more -- for the past few months, I've been
sticking with Visual Studio Code, Microsoft's entry into the text-editor-that-also-has-syntax-highlighting-and-other-IDE-stuff area
(or as I sometimes call it, simple IDEs). It's not bad, but for things
like plugin support, it simply isn't there.
So why Notepad++? A piece of software that looks like it's from the 80s?
Well, for one, NppFTP. A small plugin that allows me to edit and upload
files in basically real-time on a server. I've been using it for about 20
minutes and it's already *super* useful. Second, oodles of
customization. I'm just getting back into it, but I already like it (the
top bar is still ugly though).
![Seriously what do half these
buttons do](https://cdn-images-1.medium.com/max/800/1*ANZjRhNHF9e6Hd3uzAl-dg.png)
As soon as vsCode adds FTP support, I'll probably immediately switch
back to it. I love the aesthetic, and the autocomplete is *decent*. But
for the time being, I'm sticking with Notepad++.

View file

@ -0,0 +1,37 @@
---
title: Improving Old Sites
date: 2015-10-30
---
#### The magical 15%
Because of the nature of my schooling (mostly online), I'm provided with
a lot of very old, often dead, links. But, when a link does work, I'm
often greeted with a hideous site.
![I realize this is probably a joke site at this point.](https://cdn-images-1.medium.com/max/800/1*HI07Jtm7z-mTzkvXF6tnnA.png)
(This wasn't for a school thing by the way).
How do you improve such a thing! Well, first is my magical "padding by
percent". Open up the "Inspect Element" tool and let's get to work.
![](https://cdn-images-1.medium.com/max/800/1*8BS7lmnoQYZzR6k5OTU-hg.png)
I tend to go for a padding of about 15%. It's usually a pretty
comfortable place to be. But... that background... and that font...
let's tweak that. Just a bit.
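The tweaks boil down to something like this - approximate values only, since every site needs slightly different numbers:

```css
/* Rough approximation of the inspector tweaks: breathing room on the sides,
   a softer background, and a nicer font. */
body {
  padding: 0 15%;
  background: #fafafa;
  font-family: "Helvetica Neue", Arial, sans-serif;
}
```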
![Much better!](https://cdn-images-1.medium.com/max/800/1*y1K8LfbiKS2ZDYpQk7ak6Q.png)
A few simple steps go a long way to improve the ability to use and
read a site.
![The entirety of my tweaks.](https://cdn-images-1.medium.com/max/800/1*emPdafUnQibWZI4RSXmEvQ.png)
Obviously, this is a lot of work for just one little site. But hopefully
soon I'll be able to get my own custom Chrome plugin working for this,
because it's one of my biggest issues with using the web.
And for my next post, I'll talk about the benefits of learning how to
use a .htaccess file!

View file

@ -0,0 +1,44 @@
---
title: It's Wrong to Compare Tech Companies
date: 2015-10-23
---
Alright, everyone calm down, it's not always true but read on to see my
reasoning.
Recently I was talking to my friend and fellow web developer [Sam
Bunting](http://sambunting.biz/). We set on to the topic of Google being evil after me asking why he chose a Windows Phone over something like an Android phone or an iPhone. His reasoning was simply that
> Because in all honesty, I like Microsoft and dislike Google, The
> phones for a fairly powerful spec is quite cheap compared to things
> like iPhones, I really like flat design and personalisation of my
> stuff (Which I can do really easily on Windows Phones) and it is fast,
> not just out of the box, but *\[after a lot of use\]* it appears to
> have not lost any performance at all.
He pointed out that he feels Google is an 'evil company', and that they get everything they want -- and what they don't get they bully out of
existence. We went on to compare Google vs. Microsoft (a friendly
sparring match if you will) and I left with the conclusion that we
really should not be comparing those two companies -- just like how we shouldn't compare apples to oranges.
Tech companies try to innovate (at least, most of them). They find new
ways to do things and earn some revenue off that. They do things the way
they want to. The way they see is the 'right way'. It's not really fair
to compare Android to Windows Phone because they're aiming for two
different things. Android is going for customizability and aiming for
all markets, whereas Windows Phone seems to be aiming more for the
low-to-medium end business-oriented smartphones for people who want to
get stuff done and don't care about changing their launcher or flashing
custom ROMs. It's like that for iOS too. iOS is more focused on Apple's
closed garden, and people who invest in their (rather pricey compared to
the competition) technology want to be enclosed in Apple's walled garden
of safety, where maybe the ability to change their icons isn't there but
the safety and speed (although with iOS 9 that's debatable) are present.
Obviously, some companies could be easy to compare, like Intel and AMD,
or Nvidia and AMD, but that's because they're in the same sort of
business -- they make processors and graphics cards for PCs, among
other endeavours. But for the most part I don't think it's fair to
compare Microsoft vs. Google vs. Apple. They're all going their own
directions.

View file

@ -0,0 +1,54 @@
---
title: Let's clear up some things about GGServer
date: 2017-03-15
---
#### We get a lot of hate directed to us because we aren't as transparent. Let's fix that.
I've been a developer at GGServers for about a year
now. In that time I've gotten a good sense of how things work behind the
scenes at the budget Minecraft server host. I've (this is going to sound
canned but it's genuine) dedicated myself to improving the state of the
technology running everything. I've had to put up with hate and abuse
from unhappy customers, some justified others not so much. One thing
that some people have pointed out is how opaque communications from us
can be -- hell, in the four years we've had our Twitter we've tweeted
less than 1,000 times (250 tweets/year on average).
Let's make something clear -- We are still a very small company. Like,
*really* small. We have five dedicated support staff, answering tickets
of various technical depth. These people unfortunately are not
"full-time". They don't come in to an office 9am-5pm, answer your
tickets and go home. They're spread out. When we say 24/7 support, we
don't mean your ticket will be taken care of immediately, we mean that
our support system doesn't go down. We -- I -- Get it, you want your
question answered right away, and properly. We've gotten a lot better at
replying to things quickly, and typically manage to get the entire
ticket queue handled every week or so.
Let's talk about server downtime. It sucks. You can't access your
servers, can't pull your files to move to another host. Currently, we
have one person in our limited staff who has access to our hosting
provider, and he is busy very often. Our previous systems administrator
just sort of went AWOL, leaving us a bit stuck in terms of responding to
server downtime. I was just recently granted the title of sysadmin, and
am still waiting on our provider to grant me access to our hardware. And
to be clear this does not mean I get a keycard to datacenters where I
can walk to a rack of servers, pull ours out and tinker. It means I have
as-close-to-hardware-as-possible remote access, where I can troubleshoot
servers as if I was using a monitor plugged into it, but I never
physically touch the bare metal. *But* it does mean I can get servers up
and running properly. Once our provider allows me access. Hopefully that
clears that up a bit.
We try to limit our interactions outside of tickets for two reasons.
One, we can log everything that is filed in our system. Two, and most
importantly, we're not quite sure what we can and cannot say. There's a
fine line between being transparent and divulging company secrets. I
feel comfortable saying what I am because I don't feel anyone could gain
a competitive advantage knowing these facts. I would love to get access
to the Twitter account and start using it to our advantage but it's
going to take a while.
Anyways, if you guys have any more questions or comments, keep it civil
and I will do my best to answer.

View file

@ -0,0 +1,51 @@
---
title: Let's talk about the Windows Taskbar
date: 2015-11-21
---
#### Or a small rant about why I hate the Windows (10) UI
Windows (insert version here) doesn't look good. I believe this is a very well established fact. OSX? That is a good interface. GNOME 3? It's okay. XFCE? Pretty sharp. So why can't Microsoft learn from them?
First off, Microsoft is stubborn. I've learned this very recently. They
haven't changed much of their UI design over the years, haven't really
rethought it. It's annoying, to say the least. Sure, they moved to
making sharper edges (pun intended). And sure, they're trying their tiled
displays. But it's all garbage, compared to something like OSX's
fantastic UI.
![Image credit to extremetech.com](https://cdn-images-1.medium.com/max/1200/1*p2uneWh0xlKU5P2l6PFGog.jpeg)
As you can see, OSX is *clean*. It's *soft.* It doesn't clutter up the
taskbar the way Windows does, it doesn't have massive top bars on
applications. The UI isn't fragmented between Metro (or Modern,
whatever) and the programs of old. It's all cohesive.
Your obvious response to this may be something along the lines of *"Well
Windows is open! We can change it!"*. But the thing is, *I don't want to
change the OSX interface.* I freely admit, I am a bit of a sucker for
design, and Apple usually takes the cake with their interfaces (despite
being crap for customization, looking at you iOS).
My biggest problem is the Windows taskbar. I know, it's not something
some people usually harp on, but I'm going to do it. First of all, it's
too big. Notice how on OSX (I will use OSX as an example here) the bar
is not only a lot shorter, *it grows only as needed.* Second, the
Windows 10 taskbar just feels like it *wants* to be cluttered up,
because there's so much space, whereas on OSX, it's short and simple.
Nicely laid out and organized.
Let's take a look at my desktop for a second.
![Clean. And no taskbar. Huh.](https://cdn-images-1.medium.com/max/1200/1*137X7RWX2eFqEZazGQg-yg.png)
It flows. It works. It's simple. Everything I need is in that little
rainmeter dock (more info on my entire desktop [here](https://www.reddit.com/r/desktops/comments/3tha11/had_some_inspiration_minimal_once_again/)), the clock is out of the way, and I can focus on work. The Windows taskbar is hidden at the top, made the smallest size possible, so that it stays out of the way (and at the top to mimic my favorite desktop environment XFCE). I don't want it taking up the real estate on my screen, and when a notification comes in (which is fairly often), I don't want to see it blinking in the taskbar. *I know I've
received a message, and I will check it when I want.*
To summarize, Microsoft has to learn about good design. They need to
pull a 1080. I kind of liked their flatter, sharper interface, but now I
like my minimal, no frills or gimmicks setup.
At some point all this will be nullified because I'll be on Xubuntu or
some other XFCE distro, but for now these are my feelings.

90
posts/ltx-2017.md Normal file
View file

@ -0,0 +1,90 @@
---
title: LTX 2017
date: 2017-07-30
---
#### The first Linus Tech Tips tech carnival
![Don't tell Min](https://cdn-images-1.medium.com/max/1200/1*yivY7HBqRftSIW7enKYSuA.jpeg)
Anticipation was high as we approached the Langley
Convention Center by bus, as my girlfriend and I eagerly awaited meeting
one of our role models Linus Sebastian, and the crew of personalities
that helped script, film, edit and perfect the seven videos a week Linus
Media Group create, not including their secondary channels.
![LTX map, which ended up not being quite accurate](https://cdn-images-1.medium.com/max/600/1*m6cIdWVGNL20toPCzXRRKQ.jpeg)
We were off to a good start -- all of our transit was on time, we
spotted one of their newer hires on the bus there, and everyone was very
friendly (I made the mistake of thinking that the event started at 10AM,
it actually began at 11). I had a slight nagging in the back of my head
however. The day before, the Linus Tech Tips account had tweeted out the
map of the floor, and revealed the fact it was going to take place in an
ice hockey rink (sans ice, unfortunately). That itself is not a bad
thing -- hockey rinks are decently sized for such an event. However
upon having our tickets scanned and claiming our ticket tier swag, we
entered into the absolutely packed arena. And when I say packed, I mean
*very* packed -- Becky and I had a hard time getting from one end to
another.
The two primary causes that I could see include the fact that there were
a ridiculous number of people there for the area, and the positioning of
booths. The main attractions were positioned right by the door, which
included the virtual reality booth, Razer, and 16K gaming. The floor
also included a case toss (unfortunately not very well shielded), some
carnival-esque inflatable games, some hardware maker booths, some other
miscellaneous booths, and the main stage (we'll get to the stage in a
minute). In fairness, the booths were all pretty spot on for the kind of
audience Linus attracts, and the choices were very well done. The case
toss had a major issue, however -- it was cordoned off with just ropes,
with metal railings at the end to prevent too much destruction. Many
cases ended up outside the area when thrown, whether slipping underneath
the ropes or going over the top. Paranoia was high on my part on the
topic of getting hit.
![From the stage looking toward the entrance](https://cdn-images-1.medium.com/max/800/1*7cnfR-74sBi0u_rwd2sswA.jpeg)
The highlight of the event was meeting Linus himself, face to face.
Throughout the event whenever his feet touched the floor a circle of
fans would appear around him, and if he moved you'd better watch out the
for line of people behind him. However he handled it very well, and so
did the cluster of fans -- nobody was pushy or overly aggressive, and
he took his time with each person. We asked if he could sign both my
girlfriend's laptop with the dbrand dragon skin (which he liked so much
on the laptop he wanted to have it) and a hat, which he happily did. We
also nabbed a selfie, which was awesome of him.
![Linus, Becky and I](https://cdn-images-1.medium.com/max/1200/1*O6Yfba9oeZwIb5T6KjSdxQ.jpeg)
Unfortunately because of the seemingly lack of volunteers, most of the
LMG (Linus Media Group) employees were running the marquee booths. This
meant that they had less time to roam the show floor and meet fans, and
while I appreciate the enthusiasm they had for their specific booth, it
would have been nice to be able to say hi to more of them. I managed to
meet Colton, Linus, Luke and Yvonne, and those meetings were somewhat
spontaneous. The other members of the team were usually busy, either
filming or helping run booths (or in the case of the sales team, making
sure reveals went as they should have).
On a lighter note, the stage shows were great. There were some audio
issues, and they did acknowledge them. The energy level was high, and
the performances were entertaining, with the unboxing of the Razer toasters,
AMD Vega graphics card and a whole computer powered by AMD Threadripper
and Vega. It was a bit unfortunate there was no barrier at the edge of
the stage, which resulted in a lot of crowding as people tried to get
closer for the best view (this was probably advantageous when Linus
dropped a 4K Razer Blade laptop though). The raffle was a bit lackluster
on our end as we didn't win anything, but the winners who did came away
with an absolutely amazing set of prizes, ranging from headphones to a
Titan Xp.
By the end of the day, we were both exhausted. We had a bit of a bad
taste in our mouths as our expectations had been higher, or at the very
least different. I wouldn't say disappointed -- that isn't the right
word. It was a mix of eagerness for the next year and longing to work at
LMG and help plan (or just work). We're both glad we went to the first
one, so we could get a taste of what the next year could be like and
also so we could say we were there.
![(Bad) Panorama of the floor](https://cdn-images-1.medium.com/max/2560/1*QnlNePBTV9XT8q9Rh_6b2w.jpeg)

View file

@ -0,0 +1,77 @@
---
title: Making a link shortener
date: 2015-10-26
---
My first thought when I got the following message
> Also, do you know how to make a URL shortner?
was *yes, probably, all things considered*. The last two days I've been
working on it, and it's turned out semi-well. There's a bug or two that
needs fixing, but I figured I could post a short tutorial on how to do a
simple one.
Right, I'm assuming a few things:
- You have a web server
- You have a short domain/subdomain
- You have MySQL & PhpMyAdmin
- You know how to use PHP
With that out of the way, let's set up our MySQL database. You'll need a
table called **links**, which should have the columns laid out
like so:
```yaml
links:
- actual [text]
- short [text] (no other special values are required)
```
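If you'd rather create the table from the MySQL console than through PhpMyAdmin, the equivalent is roughly this (the column names need to match what shorten.php expects):

```sql
-- The shortener only needs the destination URL and its short code.
CREATE TABLE links (
  actual TEXT NOT NULL,
  short  TEXT NOT NULL
);
```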
Now, in our file named shorten.php (which you should download
[here](https://ghostbin.com/paste/xk7gh)), we need to edit a few things. First, make sure you change the PDO connection to point at your database. Then, change
```php
$b = generate_random_letters(5);
```
to be any length you'd like. Lastly, make sure
```php
value="url.com
```
is the domain or subdomain of your site.
Great! Now that we can create short links, we need to interpret them. In
long.php ([link](https://ghostbin.com/paste/yzbdj)), change the first
```php
header('Location: url.com');
```
to redirect to your main website. This will redirect users to the main
website if there is no short link.
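For reference, the logic in long.php boils down to something like the following - a simplified sketch of the linked file, with placeholder connection details and fallback URL:

```php
<?php
// Look up the short code from the query string and redirect to the full URL.
$pdo = new PDO('mysql:host=localhost;dbname=shortener', 'user', 'password');

$short = isset($_GET['short']) ? $_GET['short'] : '';
$stmt = $pdo->prepare('SELECT actual FROM links WHERE short = ? LIMIT 1');
$stmt->execute(array($short));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row) {
    header('Location: ' . $row['actual']);
} else {
    // No matching short link - fall back to the main site.
    header('Location: https://example.com');
}
exit;
```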
Fantastic, you're all done! As a bonus though, you can use a .htaccess
file to tidy up your URL.
```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([A-Za-z0-9-]+)/?$ long.php?short=$1 [NC,L]
```
So instead of *http://url.com/long.php?short=efkea*, it will be
[*http://url.com/efkea*](http://url.com/efkea).
That's all for today :)
#### Files index:
[shorten.php -- GhostBin](https://ghostbin.com/paste/xk7gh)
[long.php -- GhostBin](https://ghostbin.com/paste/yzbdj)
[.htaccess -- GhostBin](https://ghostbin.com/paste/vznww)

39
posts/moat-mobile.md Normal file
View file

@ -0,0 +1,39 @@
---
title: Moat Mobile
date: 2015-11-11
---
## Or "The evolution of my web skills"
![](https://cdn-images-1.medium.com/max/1200/1*mJn71fHwI6K3pfZlY5MN8Q.png)
Moat, for the uninitiated (so, most of you), is my original project to
learn how to use APIs. The API I chose was for [Voat.co](http://voat.co), a reddit competit- sorry, news aggregator that looks an awful lot like another
website.
It started off pretty rough -- in fact, you can go preview it [here (dead link)](http://gabrielsimmer.com/moat/)
![Using pretty much just pure PHP](https://cdn-images-1.medium.com/max/800/1*0kycKbtMpPuQPSduytG2gg.png)
It... worked, but the UI wasn't really where I wanted it. I was also
using the Voat alpha API, which was really slow. I looked ahead, and
started working on a version that utilized JavaScript and AJAX, so I
could display some sort of loading animation. I also used Bootstrap for
it, so that I could scale it better on mobile.
![Looks nicer. But functions about the same.](https://cdn-images-1.medium.com/max/800/1*r8Z-FLErTE4ldh9F8EZ-_g.png)
The logical next step was to upgrade the interface, since so far it has
been terrible. Again, I wanted to use Bootstrap, and I wanted to make it
as mobile friendly as possible. And what's the best way of doing that?
By using a [material design bootstrap
theme](http://fezvrasta.github.io/bootstrap-material-design/). I also used [Material Floating Buttons](http://nobitagit.github.io/material-floating-button/) to give it navigation that made sense. I also made JavaScript do all the formatting work, using the .get() function in jQuery, and routing requests through my own server as an API middleman, since browsers restrict AJAX requests to other sites (for understandable reasons, but [here is how you can bypass it](https://ghostbin.com/paste/kf3pf)). And here is the final product.
![Not too bad.](https://cdn-images-1.medium.com/max/800/1*dPZW7uPaJRClAs-SRhtbPg.png)
The FAB requires a bit of tweaking, and I have a bit of functionality to
add, but this is the product so far. I doubt I'll touch the styling for
quite a while, unless it's to make the UI more material design like.
You can fork the project on the [GitHub
page](https://github.com/gmemstr/moat-mobile) if you so please, and be sure to read the FAQ if you want to know what you can help out with.

View file

@ -0,0 +1,105 @@
---
title: Moving Away From Google
date: 2017-11-23
---
#### I'm starting to move outside the comfort bubble
If you've kept up with me on Twitter, you'll know what
a huge fan of Google I am -- I have a Google Pixel XL (1st generation),
a Google Wifi mesh network, a Google Home, and rely on Google services
for a huge amount of work I do, including Google Drive/Docs, Gmail,
Maps, Keep, and up until yesterday, Play Music.
But I'm starting to get tired of Google's products for a very simple and
maybe even petty reason. *Their design is the least consistent thing
ever.* I get it, Google is a huge company with tons of teams working on
different things, but I find it hard to keep using their services when
the interface for products is just straight up terrible compared to
competition. Recently I switched away from using Google Play Music in
favour of Spotify, which I had previously been using, and like the
interface a lot more, as it's very consistent and not the garish orange
of GPM. Despite being material design, the interface feels clunky and
ugly compared to the refinement of Spotify, most likely in part due to
work on the iOS app. Plus, it has a pretty great desktop app (albeit an
Electron app, but I digress). All of the Drive apps (Docs, Sheets,
Slides, etc) have a very clean and well designed look, but switching to
the web version of Gmail is jarring, and the mobile app is simplistic at
best -- it gets the job done at the very least. Not to mention I've
found refreshing very slow compared to my own mail server I run for [Git
Galaxy](https://gitgalaxy.com), which
feels odd because logically Gmail should be faster, if not on par,
considering Google's massive architecture. Hangouts is a beautifully
designed experience, but it's become a very obvious second class citizen
of Google's arsenal, thanks in part to Duo and Allo (we'll get to second
class citizens in Google's world in a minute). It also does not support
Firefox at all, even the latest beta version 58 (which is my daily
driver), which requires me to keep Google Chrome installed -- clever
move.
Let's switch gears away from UI/UX experience and talk about apps that
Google seems to have forgotten or lost interest in. While I am aware
some of these apps may have an active development team, they don't seem
to be priority for Google as a whole, and this can lead to frustrations,
such as the example above -- Hangouts does not support Firefox, even
the beta 58 I currently run. I recognize Firefox Quantum just recently
launched, but they had betas and nightlies available leading up to the
release, so I don't believe there is any excuse for Hangouts to not work
outside of Chrome (I have not tested Edge). Also on topic of messaging
apps from Google, they also offer Duo and Allo, two apps that
essentially split the functionality of Hangouts in two. While some of
the Android community was very vocal about this, and a large number of
Hangouts users worried they were going to have to move, these feelings
seem to have petered out, although it does still seem possible for
Google to pull the plug and force everyone over to their new offerings.
The feeling of being second rate extends to Gmail as well, at least to a
certain extent. Google is in no rush to shut down their email
service, as it's very valuable for keeping consumers locked in to their
ecosystem and also provides them with a metric ton of data that they can
sift through and utilize for advertising, [although recently they've
claimed they will no longer do this](http://www.wired.co.uk/article/google-reading-personal-emails-privacy). That said, we don't necessarily know if Google is
still reading our emails and doing something else with the data. Which
leads nicely into the next topic.
Security and privacy are things Google values greatly, but it seems
it's moreso to gain the trust of the consumer than to keep their own
noses out of your data. Let's face it, Google is an advertising company,
and little else. Everything they do is to expose people to more
advertisements and maximize engagement to attract more advertisers.
Their reach is incredible, especially considering AdSense and Analytics.
With these two platforms in their arsenal, they have an almost infinite
reach across the internet, living and collecting data on millions of
websites that implement these services. The recent "Adpocalypse" on
YouTube seems to be a case of runaway algorithms attempting to optimize
YouTube for advertisers, or at least that's what most theorize. And
frankly, it's not necessarily a bad thing, but consumers need to
recognize that Google is watching, and doesn't necessarily have the end
user in mind when it comes to accomplishing their goals.
So in summary -- Why am I slowly moving away from Google?
Their apps are inconsistent, which is to be expected in such a huge
company with so many teams working on different things. Projects that
aren't the forefront of Google's priorities suffer heavily, especially
in an age where "the web" is accelerating very quickly and design is
being refined constantly. Also, Google is an advertising company, at
their core. It's how they earn the majority of their revenue, and
despite ad blockers, it will continue to be. This isn't a problem on
its own, and I actually used to embrace the feeling of contributing so
much data to Google, but the honeymoon period has worn off now, and I
feel like I should cut back -- after all, millions of other people are
contributing just as much data, so Google won't notice if they suddenly
lose one. I run Firefox Nightly on my phone instead of Chrome, and am
actively looking for alternatives to many of their other services,
notably Gmail and Google Docs.
> Google does a bit of everything okay, but sometimes it's better to pay
> a bit for a much more specific service that does one thing incredibly
> well.
The final nail in the coffin for Google Play Music, if you were
wondering, was the fact that YouTube Red is not available in Canada yet,
which infuriates me and requires the use of a VPN. This coupled with the
fact Google Play Music has a fairly limited catalogue compared to
Spotify makes it difficult to recommend. If you can get YouTube Red with
it, however, the price is worth it.

62
posts/my-2017-project.md Normal file
View file

@ -0,0 +1,62 @@
---
title: My 2017 Project
date: 2016-12-22
---
#### This one... this one will be fun
As some of you may know, or not know, I am a developer
at heart, writing and playing with code to my heart's content. However
there are some other areas of technology I really, *really* want to play
with. These last few months I've been toying with the idea of building
my own homelab, teaching myself the basics of enterprise and small-scale
networks (and then breaking them, of course). I've also wanted to look
into server farms and whatnot, but that seems a bit too much considering
my budget and space, among other things.
#### The Plan
First, I want to start with a really solid second hand network switch.
The primary function of this is to allow me to extend my ethernet
capabilities -- right now I'm essentially limited to one physical
ethernet jack and have to rely on WiFi for everything else. This will
allow me to have a permanent home for my Pi, laptop station, and
whatever else I add to the network. Plus I think network switches look
really cool.
Next, once I can afford it, I want to either build or buy a good
rackmount NAS, along with an actual rack to mount it on (and add the
switch to said rack). Ideally I'd want to have around 8TB of storage to
play with initially, a few 2TB drives most likely with unRaid. I'd want
a rack mount case that can support a fair few drives so later down the
line I can add more and larger disks. Specs wise, I have no clue what a
NAS would need, but I would assume nothing too high-end. If all else
fails, I'd end up buying a second hand one off eBay and going from
there. This will then connect to the switch -- whether I allow it to
communicate with the outside world I can't say for sure yet (this is
after all a rambly brainstorm kind of blog post). This NAS would be a
"hot" server, one that is frequently read from, modified, written to,
and so forth.
The second NAS box would be a highly-redundant backup system, limited to
just the internal network and comprised of many tried-and-tested drives.
This server would be upgraded and read from far less than the "hot" NAS,
but needs to make up for lack of read speed in sheer bytes of data it
can hold and keep intact even in cases like drive failure. This box
would most likely be bought second hand, depending on the situation.
Capacity I do not have concretely in my head, however I want to aim for
about 30TB of raw storage (5x6TB drives, 10x3TB drives if the server is
big enough).
The final system, the pièce de résistance if you will, is a high-end
dedicated computation-heavy server. In my head (and heart) this would be
equipped with two Intel Xeon processors, one or two GPUs (one for gaming, one
dedicated to crunching numbers like a Quadro), and a couple SSDs to keep
it happy in terms of storage. This box would be the server that handles
media encoding, 3D rendering (most likely renting it out later), serving
up websites, and whatever the heck else I can get the power-hungry thing
to do. Overkill, most likely, and I'd probably end up selling some computing
space in the form of VPSs and whatnot, but it would be a damn cool thing
to have around (my power bill would never be happy).
Anywho, time to get cracking.

79
posts/my-career.md Normal file
View file

@ -0,0 +1,79 @@
---
title: My Career
date: 2015-10-23
---
This is less of a resumé and more of a look back at some of the projects
I've been involved in, most of which failed -- and *usually* not
because of me.
I believe my first ever attempt to make a name for myself was as part of
the long-defunct "NK Team".
> [I was involved pretty early on](https://twitter.com/TheNkTeam/status/201040884190543872)
I joined on because of my "skills" (aka I knew how to do
HTML -- barely) to develop the website. I also became their PR person,
handling the Twitter account. It went okay, honestly. We were building a
prison Minecraft server and actually had a fairly nice sized community
built up. It fell apart when I left after realizing where things were
headed, and that Andrew, the leader, was a complete and utter *dick*.
> [Last ever tweet from the account](https://twitter.com/TheNkTeam/status/217286027591688194)
I believe my second team I joined was Team Herobrine, a modding team my
cousin was already a part of.
> [Tweet](https://twitter.com/TeamHerobrine/status/249592769541193728)
It was attempting to bring an Aether-level mod that featured
"Herobrine's Realm". And looking back, it was doomed from the start. The
lead was a kid probably around 11 (I honestly don't remember) who, while
he did create an actual working mod that I tested myself, I am convinced
he just took some sample code and threw it together. It really went
nowhere, and eventually fell apart when the lead developer stopped
showing up on Skype. Honestly, it was a cool project that did have
potential.
I don't quite remember what order some of these projects/groups come in,
but this was around the time Team Herobrine was dying off. My friends
and I from high school decided we wanted to make our own Minecraft
server, named Tactical Kingdom.
> [Tweet](https://twitter.com/TKingdoms/status/298607656774561792)
It actually got pretty far in -- we were pretty much ready to launch,
but then our host disappeared in a puff of smoke, taking with it our
hard work and money. I still haven't learned to do frequent backups of
anything though. (You can go check out the crappy website courtesy of
the WayBack Machine [here](https://web.archive.org/web/20130512015002/http://tacticalkingdoms.clanteam.com/).)
I pause here to reflect and try to recall what else I did. I honestly
can't remember. I did some solo projects, mostly bad maps nobody should
play.
Later on, more recently, I applied to be a map builder for the infamous
SwampMC server, where I met some wonderful people, one of whom I now
hold very close to my heart.
(Sadly, I can't find the first Tweet from when I joined.)
It was a cool community with some awesome people. But sometimes awesome
people don't work well together, especially when people overlap in
power. Power struggles caused the server, and the spinoff HydroMC (or
HydroGamesMC depending who you ask) to disappear, the group of friends
once dedicated to it now completely disintegrated. There is a bright
side to it though. While working on it, I became the developer, and
developed my coding ability, which has landed me as CTO of Creator
Studios.
> [Tweet](https://twitter.com/CreatorStudios_/status/656958850176786432)
Which is utterly fantastic. I do enjoy developing things on my own, but
I do like having guidelines and rules to follow from time to time too.
There's probably other projects I was involved with that I have
forgotten. If any come to mind, I'll probably do a follow up post about
them. And to any wondering about Total Block SMP, that's something I
will be discussing later.

View file

@ -0,0 +1,91 @@
---
title: NodeMC Developer Log Number Something
date: 2016-02-10
---
#### A more developery post
NodeMC has changed a lot from what I envisioned in the
beginning. When I first began development, nearly three months ago now
(and about 52 git commits), I had envisioned a single product,
everything packaged into one executable, that probably wouldn't be used by
anyone but me and one or two of my friends. However, I quickly found
myself leaning towards something very... different.
It began when I started looking for ways to package NodeMC. My plan
was to develop a full dashboard then open the source up and provide a
few binaries. I had a silly idea it would be a quick project. I started
in December, my first git commit dated the 17th (although I think I
started on the 16th). I thought about it as a complete thing, dashboard
and whatnot all packed into one executable. The first thing that moved
my direction to the one I'm going in now was the fact that I could not
figure out how to package other files into my executable made with
[EncloseJS](http://enclosejs.com/). I
made the decision to instead allow people to make their own dashboards
and apps around the application.
![Three months of git commits on NodeMC](https://cdn-images-1.medium.com/max/800/1*v3jOiqGff74xqOOa6UQslg.png)
When looking for investors, it came down to the Minecraft hosts I'd used
before and knew were running the old Multicraft dashboard. I have nothing
against Multicraft -- I think it's a pretty good dashboard, and the
recent UI refresh makes it look much better. However I knew for a fact
several hosts didn't upgrade, so I asked them first. I wanted to sell
NodeMC to a host and develop it for them exclusively. My first target
was ImChimp, whose owner [Alex](https://twitter.com/AlexHH25)
has given me support in the past (and helped run the infamous
server-that-shall-not-be-named). Unfortunately, he wasn't interested,
and who can blame him, because at the time I had a very rudimentary
demo.
https://www.youtube.com/watch?v=25ZVtFHwiCE
I did a bit more work and eventually was able to show off a much more
refined version to James from [GGServers](https://ggservers.net).
He was interested, and invested some money into the project to pay for a
VPS to use for testing and hosting the [nodemc.space](https://nodemc.space)
website, and a domain that was on sale (and would lead to my decision
for major release names). I can confidently say that without his
investment NodeMC would have probably been left as abandonware on
GitHub.
Also thanks to James, I was given a list of things that are essential
for Minecraft server dashboards, especially if you want to have
multi-server hosts using it. This included custom jar files, file
manipulation, logins with authentication, and more. Taking this list, I
worked hard to implement the features I needed. Below is the playlist
for all my dev logs.
https://www.youtube.com/watch?v=V-K8A6zQam0
It's been an interesting few months. I've learned many things about
developing things in Node.js, from methods to the limits of the
JavaScript language.
Since the beginning of this month, I've been making a huge effort to
make MultiNodeMC work, building it out with logins, setup pages, server
management, and everything else a server host admin needs. A very
interesting aspect that I've never given much thought to is login and
authentication, storing passwords, and keeping it all *secure*. A huge
shoutout to [Oliver](https://www.oliverdunk.com/) for giving pointers on how to cut down on security
vulnerabilities. He encouraged me to implement the API key feature for
NodeMC to prevent unauthorized access of files.
Recently, and what made me rethink my methods of distributing the
binaries, was my EncloseJS license key recently ran out. I have been
looking at [nexe](https://github.com/jaredallard/nexe) as an alternative, which while it works (and seems to
be slightly better at binary compression) isn't great because when I
deployed it onto the VPS, it produced an error saying that glibc wasn't
the correct version. This made me pause and wonder what on Earth I'm
getting into. To clarify, with EncloseJS, you literally just need to
send out the binary (and any files not packed into it), not worrying too
much about dependencies because there are pretty much... none. That
said, I believe nexe may be the way forward for me, and I'll be working
on compiling it for all the distributions that I need to.
A question I've been asked quite a bit is **will you open-source this**?
The answer is... no, not yet. I'll be opening up CORE (the basic
application) around the time version 1.4.0 of NodeMC is released. I have
no plans on open-sourcing MultiNodeMC at this time, however if I ever
abandon the project I promise to release the full sourcecode to the
public.

56
posts/on-keyboards.md Normal file
View file

@ -0,0 +1,56 @@
---
title: On Keyboards
date: 2021-08-19
---
#### My current keyboard is a GMMK Pro
But before diving in that keyboard, let's get into my history with keyboards. My first "proper" mechanical keyboard was a Corsair K70 with Speed Silver switches - a now massive 100% keyboard that was quite pretty. While I liked the speed silver switches for typing, gaming was another question. I tended to rest my hands quite heavily on the keyboard, and would accidentally actuate keys from time to time. At some point around this time (2017) I bought a Razer BlackWidow TKL (ten-keyless, or without the numberpad), which introduced me to alternative keyboard form factors and the Razer Green switch, which are essentially Cherry MX Blues with a different coloured stem; that is, clicky, but not great. At some point, I must have sold them, as I quickly moved on to a Ducky One TKL with genuine Cherry MX Blue switches. In those days, I wasn't very invested in the keyboard scene, and thought of Cherry as being the "be all end all" for switches.
_The Corsair K70 keyboard. Very gamery._
![Corsair K70 Keyboard](/images/corsair-k70.png)
_The Razer BlackWidow keyboard, from a terrible panorama that is my only evidence of owning one_
![Razer BlackWidow TKL Keyboard](/images/razer-blackwidow.png)
This Ducky One lasted me about a year or so, during which time I bought some (very nice) SA style keycaps (which I ended up removing relatively quickly as I didn't like the profile). At some point, however, the draw of smaller keyboards was too much, and I found myself ordering an Anne Pro off AliExpress (or one of those sites) with Gateron Brown switches (tactile, ish). Being a 60% form factor, it was a major adjustment, but I got there and it was a great relief freeing up some more room on the tiny desk I had at the time. Commuting to an office every day was also somewhat simplified, although I ended up purchasing two of the Anne Pro's successor, the Anne Pro 2. It was roughly the same keyboard, but with slightly better materials and some refinements to the software control and bluetooth interface. I ordered one with Cherry MX Blue switches and another with Kailh Box Blacks, to experience a heavier linear switch. This was also my first real exposure to alternative switch makers, and I was a massive fan of the Box Blacks. I found the brown switches were a bit too linear, and the blues were blues -- clicky, but not in a pleasant way, especially during extended use.
_Ducky One TKL_
![Ducky One TKL Keyboard](/images/ducky-one.png)
_Ducky One TKL with SA style keycaps_
![Ducky One TKL with SA style keycaps](/images/ducky-one-sa-keycaps.png)
These Anne Pros lasted me several years, switching between them as I wanted and even bringing one traveling (since it was just that compact). And I still love them. But I knew it was time to upgrade. I found myself missing the navigation cluster, and wishing for a slightly more premium experience. So I started doing some research, and quickly fell down a rabbit hole.
_The Anne Pro collection - left to right, black with Cherry MX Blues, white with Kailh Box Blacks, Anne Pro with Gateron Browns_
![Anne Pro collection](/images/anne-pros.png)
Keyboards are dangerous. The ignorant are lucky -- there is so much information out there that it is possible to become overwhelmed very quickly. Everything from the material keycaps are made of, to the acoustic properties of case materials, to the specific plastics _each part_ of a switch is made from. I had little clue what I was doing, but eventually settled on a few things: I wanted a navigation cluster, I wanted something slightly more compact than a full size keyboard, and I wanted premium materials.
I ended up with the following:
* [GMMK Pro](https://www.pcgamingrace.com/products/glorious-gmmk-pro-75-barebone-black)
* [GMMK Pro FR4 Plate](https://www.mechmods.co.uk/products/gmmk-pro-fr4-plate-by-avx-works)
* [Durock v2 stabilizers](https://www.mechmods.co.uk/products/durock-screw-in-stabilisers-v2?variant=40494419476674) (hand lubed)
* [AKKO NEON ASA doubleshot PBT keycaps](https://epomaker.com/products/akko-neon-asa-keycaps-set?_pos=3&_sid=8e158a975&_ss=r)
* [Glorious Pandas](https://www.pcgamingrace.com/products/glorious-panda-mechanical-switches) (tactile, to be hand lubed soon)
_Give it a listen - fair warning, not everything is finely tuned or lubed!_
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/mDrq4B2k2KM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
And so far? Writing this post on that keyboard? I'm pretty happy with it (for the... one evening I've used it so far)! The process of swapping out the plate and lubing the stabilizers was a bit tedious and frustrating being my first time, and I do need to do some fine tuning with the lubrication on the switches and stabilizers (you can clearly hear the rattle on my spacebar in the audio clip), but overall it's now a case of teaching myself a slightly more spacious layout, not needing to leap for the `fn` key every time I want to use my arrow keys or take a screenshot. Being a hotswap board (that is, rather than soldering switches to the PCB, they slot into sockets), I do plan to experiment with other switches in the future, to truly nail down my preference. There are also a number of community modifications documented that are intended to tweak the keyboard to your liking (when I say keyboard in the context of custom builds, I specifically mean the case and PCB), but I don't know if I'll end up trying any of them out. Unfortunately, it weighs an impressive 1768 grams, so traveling with it is out of the question, but I do still have my original Anne Pro handy that I plan to use as a testing grounds for modifications before selling it (be sure to follow me [on Twitter](https://twitter.com/gmem_) if that sounds interesting, or to follow my plans).
_The GMMK Pro in question_
![GMMK Pro top view](/images/gmmk-pro-top.png)
![GMMK Pro side view](/images/gmmk-pro-side.png)
This is by no means my "end game" keyboard. I do plan on investing myself further in this hobby, but slowly. I already have a small list of switches to try and kits to experiment with, and have some inkling of how I'd want to custom design a keyboard. With time.

View file

@ -0,0 +1,79 @@
#+title: A Reflection on Operating Systems
#+date: 2021-11-13
*macOS. Windows. Linux. It's all bad.*
Clickbaity line aside, it's worth digging into the "Big Three" when it comes to operating
systems. Over the years I've used them in various forms and versions, on personal desktops,
production servers, work laptops, and so on. I've seen both the good and the bad for most
of the operating system choices discussed, but I will in no way claim to be an expert. Before
going any further, it's worth doing a quick rundown of Linux distributions I've used.
- Ubuntu (various versions over the years)
- Arch Linux (A favourite of mine)
- GNU Guix
- NixOS
- Various Ubuntu-based distributions
Specific versions aside, I've been able to watch Linux distros evolve over the past ~9 years.
Along the way I've used Windows 7/8/8.1/10/11 (Vista is in there somewhere, but my memory is
fuzzy), and macOS/OSX versions.
All these choices have various upsides and downsides. This particular post is motivated by
a recent change in my life; after years(ish) of running Linux on my desktop, stubbornly
refusing to install Windows, I finally did it. I switched to an operating system that makes
me feel less in control of my desktop. An operating system with very strange bugs that
should not exist ([[https://arstechnica.com/gadgets/2021/11/expired-windows-11-certificate-breaks-some-built-in-apps-and-tools/][like snipping tool breaking due to an expired certificate]]). An operating
system that just does its own thing, and is still incredibly expensive.
Gripes about Windows aside, the change I made was mostly for reasons related to gaming.
Gaming on Linux has come a /long/ way, to the point it was almost second nature to be
playing relatively recent AAA games on it. There was also a sense of trying to wrestle
Linux - towards the end of my journey I was using NixOS, which is an excellent declarative
operating system [[/posts/from-guix-to-nixos][that I've covered before]]. While the package repository it offers is very
complete, there were a few instances where I found myself needing to reach for the unstable
package repository, or debating whether to write my own Nix packages, or diving into long
GitHub discussions about a specific issue. I found myself with little energy to actually
pursue these things, with my job as a Software Development Engineer sucking up what
motivation/eagerness I had to deal with technical issues. How I ended up on NixOS is
detailed in the aforelinked (that's not a word, but work with me on this) blog post,
which did not help with my frustrations with Linux on the desktop. Eventually this
built up into a crescendo of the Windows 11 install screen and self-loathing.
I'm not proud of this move, nor particularly happy, but at the very least WSL has come
a long way from the initial versions, now supporting fancy things like GUI apps (I'm
currently typing this in emacs running in WSL, which is still a bit weird to me).
#+ATTR_HTML :title emacs on Windows in WSL :alt emacs gui running from WSL
[[file:/images/emacs-on-windows.png]]
Jetbrains editors also have okay support for WSL, so it's feasible to do what little
personal development I do these days in a mostly complete Linux environment.
macOS isn't something I feel it's necessary to touch on here since this is a fairly personal post
about my journey, but for the sake of completeness it's worth mentioning it's been my main
work OS and I do have my share of complaints about it. Primarily, window management is
very cumbersome without a third party application. While I haven't run a tiling window
manager in a while, I do like having the option of arranging my windows in that way.
I've opted for [[https://rectangleapp.com/][Rectangle]], which works well enough that I am satisfied and not wrestling
the urge to buy an application.
It may seem that the summary is that macOS has the fewest problems, but it does still suffer
from being the most locked down of the three choices (I /know/ Linux is a kernel, not an
operating system itself, but most Linux based operating systems are pretty similar and covering
them under "Linux" is just easier). I'm not necessarily worried that Apple is going to kill
the ability to install third party applications on their desktop and laptop platforms, since
those are a mainstay of those platforms, but every so often I do wonder what that could look like.
Windows is an "okay" middleground of "flexible enough to do everything" and "closed enough that
I don't have to spend too much time DIYing solutions". When you run into problems on Windows, you'll
have to wrestle for control to maybe fix it, but it may be possible. On macOS, good luck - you're
at the mercy of Apple's priorities and a reinstall may be in your future. On Linux? It's a 50/50
chance of that issue being totally unique to your hardware and/or software combination - good luck
(but at least you have the opportunity of fixing it yourself and contributing to the community).
Overall I do want to return to Linux. But given my recent frustrations with it, I'm going to hold
off until I'm either in a position or mindset to contribute properly. Linux has a long way to go
on desktop, but I desperately want it to succeed. The sooner we stop relying on closed platforms
the better (we just need to sort out the UI/UX crisis for FOSS). For the time being, I'm going
to explore Windows 11 and what it offers for developers, and keep trucking along with macOS as
a work environment as long as my employers offer it.

View file

@ -0,0 +1,53 @@
---
title: (Part 1) The Android Twitter Client Showdown
date: 2016-12-06
---
#### Falcon Pro, Talon, Flamingo and Fenix go head to head
I've been using Twitter for a long time now. I first
signed up in October of the year 2010, four years after the initial
launch. I've used it fairly regularly, with a total of 14,800+ tweets and
have met a fairly wide range of people on the platform. I don't claim to
be an expert, or even well known on the platform (276 followers is hardly
"famous"). But, over the years, I have garnered much experience with
different Twitter clients, starting with the web client as it was back
in 2010, then moving to the official Twitter app on my old 1st
generation iPad (no, not pro or anything... 1st gen iPad). Somehow I
stumbled onto Tweetbot, which became my de facto Twitter client for iOS
(and still would be, if I were using iOS today).
But back to the point. Over the next few weeks, or indeed months because
I have a busy few coming up, I will be taking a few Android Twitter
clients for a spin: [Talon](https://play.google.com/store/apps/details?id=com.klinker.android.twitter_l), [Flamingo](https://play.google.com/store/apps/details?id=com.samruston.twitter), [Falcon Pro 3](https://play.google.com/store/apps/details?id=com.jv.materialfalcon&hl=en), and [Fenix](https://play.google.com/store/apps/details?id=it.mvilla.android.fenix). All four of these apps offer fast, customizable
experiences for Twitter, perhaps not with features on par with the
official Twitter app, but you can blame the fact Twitter is locking down
their API big time
\[[Forbes](http://www.forbes.com/sites/ianmorris/2014/08/20/twitter-is-destroying-itself/#77a6716971b7),
[ITWorld](http://www.itworld.com/article/2712336/unified-communications/the-death-of-the-twitter-api-or-a-new-beginning-.html)\].
My methodology is pretty straightforward. During the month of December,
I will use each one for one week, testing and prodding each one to
unlock the depths. Then, once all (four) have gone through the gauntlet,
I will then allow myself the freedom of choice -- whichever one I find
myself using the most will be the "winner".
#### Initial Impressions
All four of these support multiple accounts, however with Falcon Pro
there is a small caveat, which is for each account you want to add you
are required to purchase a new "account slot", which is about \$2. The
initial purchase to add one account is roughly on par with the rest of the
apps at about \$5; Talon comes in at \$3.99, Fenix at \$6.49, and Flamingo
at \$2.79, the cheapest and also the newest of the four (all
prices USD). Each one features material design, interestingly enough in
a variety of styles. Each app has its own customization, each with its
own advantages. Flamingo feels the most tweakable, letting you set your
own colours for basically everything. Falcon Pro is by far the least
customizable, however that is not necessarily a bad thing depending on
your use case and how much you enjoy fiddling with settings.
_Album of each Twitter client's interface_
> "All life is an experiment. The more experiments you make the
> better." -- Ralph Waldo Emerson

View file

@ -0,0 +1,71 @@
---
title: (Part 2) ATCS --- Flamingo
date: 2016-12-08
---
Flamingo is a relatively new Twitter client, created by
Sam Ruston. It goes for \$2.79 USD on the Google Play store, and
frankly, is a solid Twitter client. Battery usage is fine, and overall
performance is great.
#### Customizability
It is a fantastic app for Twitter, and is one of the more customizable
clients if you like that sort of thing. You can tweak all the individual
colours, or pick from a pre-made theme. You can change the fonts, how
the pages are arranged and how they look, and a whole lot more (actually while
writing this review I found a way to turn off the giant navbar at the
top, so bonus points!) through the settings. It might seem a bit
overwhelming, and for the most part I left it with the default theme
with some small tweaks.
For you night mode lovers, there is a beautiful implementation here. You
can tweak the highlight colour to your liking and also the background,
between dark, black, and "midnight blue". Of course, you can also set a
timeframe for when you want this mode enabled -- one slightly annoying
thing is you can't have it enabled all the time, however the theming
options available should compensate for that fact.
#### Syncing
Retrieving tweets from Twitter's servers was always very snappy and
fast, and I noticed no real performance hit in terms of battery or CPU
usage when having tweet streaming on. I definitely recommend you turn it
on -- I did not test it with background syncing enabled, and didn't
really see any need to turn it on during my week with the app. There is
Tweet Marker support, which comes in handy if you use multiple clients, and I did
leave it enabled (as I usually do).
There is a built-in proxy server setting, which is nice if you want to
browse only Twitter using a proxy server. There are also several other
options for turning off all retweets or replies, disabling auto refresh
on startup and tweeting, and there's even a setting for TeslaUnread so you
can pick whether you'd like to see unread notifications or tweets in your
timeline (I assume you'll need to enable background sync for this to
work properly).
#### Notifications
Notifications, as with any third party Twitter app I've tried, were slow
to come in, but seemed to be much quicker when the app had recently been
launched. I don't know if this is a Twitter API restriction or just an
Android oddity, but I'm willing to guess it has to do with Twitter's
rate limiting. Regardless, plenty of options here. You can enable
notifications for individual accounts, as well as which notifications to
show for each account (mentions, DMs, new followers, etc). There are
also options to enable notifications only from verified accounts, if you
interact with those enough, and allowing you to see which account
received the notification, which as a multi-account user I left on.
#### Other notes
I love Flamingo so much -- the number of options it gives you to make
your setup truly unique is astounding. Individual themes per account,
*alternate app icons*, composition notifications, there is so much you
can do with this app. I have some minor issues, like it confusing my
accounts' timelines, direct messages, and notifications, but overall it is
a solid app that I highly recommend to those who want to get the
absolute most out of their Twitter client.
[Get Flamingo on the Google Play
Store -- \$2.79](https://play.google.com/store/apps/details?id=com.samruston.twitter&hl=en)

View file

@ -0,0 +1,74 @@
---
title: Porting Websites to Jekyll
date: 2016-01-07
---
#### Why on Earth use SSI...
Jekyll, to the uninitiated, is a static site generator for websites. Instead of using PHP include statements or SSI, it generates all the pages into plain-jane HTML. It uses a fairly straightforward file structure, making most projects very clean and nice to work with. I've personally used it for the [NodeMC](http://nodemc.space) website and the new [floorchan.org](https://floorchan.org) site, and have absolutely loved it. So how can you convert your website to use Jekyll?
I'll be using Strata from HTML5Up.net ([link](http://html5up.net/strata))
to build a very simple and quick static page blog. This shouldn't take
long. (I'm also assuming you have Jekyll installed; if not, here's a
[quick start guide](https://jekyllrb.com/docs/quickstart/).)
First, extract the files from the zip. We'll need to create a few files
for Jekyll to work. We need the following:
```
_includes/
_layouts/
_config.yml
```
Inside the \_config.yml file, we'll need a few things just to make sure
Jekyll understands us, and maybe for future use. A simple setup would
look like this:
```yaml
name: Strata
description: A template ported to Jekyll
```
Let's now focus on the \_layouts/ directory. This is where your
templates live, dictating how a page will look. You'll need to
take your header and any static components you want to share across all
pages and place them in here, like navbars, footers, etc. Where you want
the content to go, you add {{content}}, which will pull the content from
the file and place it there on the generated page. It is recommended you
name this 'default.html' for easier referencing. Now you can go
into your index.html in the root of your project and put in your
content. You need to declare a title (more on this in a second) and a
layout to use. Enclose the lines between three dashes. Here's an
example.
```markdown
---
title: Strata
layout: default
---
lorem ipsum
```
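If it helps to picture it, a minimal `_layouts/default.html` might look something like the sketch below -- the stylesheet path and surrounding markup are placeholders, since yours will come from the template you're porting:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- page.title comes from the page's front matter, site.name from _config.yml -->
    <title>{{ page.title }} - {{ site.name }}</title>
    <link rel="stylesheet" href="/assets/css/main.css">
  </head>
  <body>
    <!-- shared navbar/header markup goes here -->
    {{ content }}
    <!-- shared footer markup goes here -->
  </body>
</html>
```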
And voila! You have ported your website over to Jekyll. You can run
'jekyll serve' to run a server on port :4000 to preview your website,
and 'jekyll build' to build the website to a static format to a \_site/
directory.
![It works!](https://cdn-images-1.medium.com/max/1200/1*WHbyFDZGHl_6eYlJc436Uw.png)
There is of course quite a bit more you can do with Jekyll, like
creating blogs and such, but maybe I'll cover that later -- or maybe
not at all, because it seems pretty well documented online.
The advantage of using Jekyll should be fairly obvious -- because it
generates HTML pages, it requires less processing overhead than PHP or
SSI. It also means that there are no entry points for SQL injections or
any of those nasty things. And one of the biggest advantages in my mind
is the layout system, so you can quickly change something across all
pages.
\~gabriel

View file

@ -0,0 +1,66 @@
---
title: Python / Flask Logins
date: 2017-02-22
---
#### This was fun! \*twitch\*
Bloody hell, where do I start... So I recently got back
from a two-week vacation down to LA (Disneyland) and then further south
to Mexico. During those two weeks I did little to no coding, which
greatly relaxed me and allowed me to think about what my goals with my
many projects were.
And then I got home. And decided that the best thing to do (besides
getting a violent cold) was to start work on a login system for
Platypus. You know, so that admins can edit servers and whatnot. Oh boy
are we in for some fun.
#### Initial Approach
At first I wanted to use Flask-Login, because that seemed like the
logical way of doing things. It integrated with Flask, which is
fantastic because I use that framework for literally everything (sorry
Django, not feeling you yet). It (seemed) to provide an easy way to
handle restricting views to logged in users. And thus I set out.
The first thing I noticed was that, like Flask, Flask-Login assumes
nothing about your stack or how you should implement things. It requires
you to write your own class for users and implement methods for
retrieving users and passwords from a database, and also validating
users login details. And then it hit me. Flask-Login is for *session*
management, not *user* management. Back to the drawing board, slightly
red faced when I realised what I was doing wrong.
#### IYWIDP, DIY
**I**f **y**ou **w**ant **i**t **d**one **p**roperly, **d**o **i**t
**y**ourself. And so I did. I grabbed bcrypt's Python implementation and
started writing my own system that relies on old school cookies as
authentication. There were some false starts, but I eventually rigged
together something that works, albeit with duct tape. What happens is
thus. First, user requests /admin, which is obviously not a route we
want unrestricted. So Flask grabs a cookie the browser provides and
checks it against the current session token internally. If the two don't
match or the cookie is blank, you're redirected to a login screen. The
login form POSTS the data to the login route, which compares the passed
password to the encrypted, salted, and hashed password stored (as most
logins do). Then, the function returns a unique key (actually a bcrypt
salt) that acts as the session token. Cookie is set, user is sent to
admin page. Brilliant!
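To make that flow a bit more concrete, here's a rough sketch of it in Flask -- to be clear, this is not the actual Platypus code, and the route names, cookie name, and user lookup are stand-ins:

```python
# Rough sketch of the cookie-based flow described above (not the Platypus source).
import bcrypt
from flask import Flask, make_response, redirect, request

app = Flask(__name__)
session_token = None  # only one active session at a time, as noted below


def stored_password_hash(username):
    # Placeholder: look up the bcrypt hash for this user in your database.
    return None


@app.route("/admin")
def admin():
    # Compare the browser's cookie against the current session token.
    if session_token is None or request.cookies.get("session") != session_token:
        return redirect("/login")
    return "admin page"


@app.route("/login", methods=["GET", "POST"])
def login():
    global session_token
    if request.method == "GET":
        return "login form goes here"
    hashed = stored_password_hash(request.form["username"])
    if hashed and bcrypt.checkpw(request.form["password"].encode(), hashed):
        # A stringified bcrypt salt doubles as the session token.
        session_token = bcrypt.gensalt().decode()
        response = make_response(redirect("/admin"))
        response.set_cookie("session", session_token)
        return response
    return redirect("/login")
```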
Obviously there are some drawbacks that are not entirely intentional.
For one, only one user can be auth'd at a time. This isn't a
particularly troublesome problem in my deployment, however it's
definitely not ideal. Also the session key is a bcrypt salt
stringified -- this looks a bit funky but was a quick hacky way to
generate a pseudo-random key. It's never used for anything beyond
authenticating the browser.
*Hopefully it's secure enough.*
Now anyone who wants to have a crack at breaking the login, go right
ahead, I won't stop you. Hell, I encourage it, and file issues as you
see fit.
[Platypus on GitHub](https://github.com/ggservers/platypus)

View file

@ -0,0 +1,57 @@
---
title: Samsung Could Take Over The Digital Assistant Market
date: 2017-08-06
---
#### Bixby on phones is just the beginning.
The world was shaken when Samsung decided to roll out
their own digital assistant on their flagship Galaxy S8 phones. As in,
it was shaken by the collective groan that the tech community let loose
upon learning that it would even have its own dedicated button. But
there's something else I believe is lurking beneath the surface.
Samsung is an enormous company, making everything from phones to vacuum
cleaners to dishwashers. Notably missing, however, is any indication of
a competitor to Amazon Echo, Google Home or Apple's newly announced
HomePod. Granted -- Bixby is fairly new, and is still learning. However
given the wide range of products Samsung offers, the conclusion that is
easy to draw is that they are planning on putting Bixby into their own
home appliances, for a *truly* "smart" home.
Bixby itself is a fairly standard first generation virtual assistant. It
can answer questions and control phone functions fairly well (including
installing apps and changing system settings), but when put head to head
with Google Assistant or Siri it falls a bit flat (you can check out
MKBHD's comparison video [here](https://www.youtube.com/watch?v=BkpAro4zIwU)). The interface is very Samsung, not unattractive but
not my personal taste. I especially like the Google Now-like interface
(which it actually replaces on the S8), and believe it looks as good as,
if not better than, Google Now itself (Pixel XL owner here). However the
interaction with the assistant itself definitely needs some work before
I would consider it on par with its competitors.
Already, many of their appliances are going "smart". They tout their
smart fridges as being "Family Hubs" (trademarked, obviously), and run
the Tizen operating system, an IoT operating system by the Linux
Foundation. This means that they're already building off an open source
project (which they already have a lot of experience with, launching
phones running Tizen OS in India), and gives them a very, very good
opportunity to implement Bixby. In everything. Smart ovens -- exist.
Smart dishwashers -- why not. Would you like a smart vacuum cleaner?
Consider it done. And they could all be running Bixby when Samsung
decides it's smart enough.
Now -- the argument that Google, Amazon or Apple could take over your
home with their products is there, however the problem with that is that
they do not make every day appliances. Google has made some moves with
Nest, and all three have assistant pods/bluetooth speakers, but it would
take years for them to catch up to Samsung in terms of brand recognition
in the appliances market. Not everyone needs a Google Home, but most
people need a fridge, and that might happen to come with Tizen and
Bixby. The average person may not have an Amazon Echo, but they'll
probably have an oven, dishwasher, laundry machine... Perhaps not all,
but at least one. And Samsung is well established.
So look forward to finally having a completely mic'd up home, courtesy
of Samsung, listening, learning, adapting and assisting you every day.
Whether or not it explodes (in popularity) we will see.

View file

@ -0,0 +1,29 @@
---
title: Saving the Link Shortener (Quick Post)
date: 2015-10-30
---
#### Aka a very silly mistake
You may remember the link shortener I wrote about in a past blog post.
Well, I fixed an issue where it couldn't decide if a link was http://,
https://, or just blank. Here's what I did.
Basically, I had my strpos() arguments in the wrong order. You need the haystack
*first*, not second. That was the issue. Here's the corrected code:
```php
// strict comparison, so a match at position 0 isn't treated as "not found"
if (strpos($link[0]['actual'], 'http://') === 0 || strpos($link[0]['actual'], 'https://') === 0) {
    header('Location: ' . $link[0]['actual']);
}
```
And then, of course, the little else statement in case it doesn't match:
```php
} else {
    header('Location: http://' . $link[0]['actual']);
}
```
I do love PHP.

View file

@ -0,0 +1,29 @@
#+title: Slightly Intelligent Home
#+date: 2023-03-06
#+attr_html: :alt An internet connected house (Midjourney)
[[/images/slightly-intelligent-home.png]]
/Generated with the Midjourney bot "An internet connected house"/
I'm not overly-eager to automate my whole home. Leaving aside security concerns, it's also really, really expensive. However, there are some small quality-of-life things I've added over time or kept over the years that I've found very helpful. It's an evolving thing and there are a few "smart home" objects I would never, ever touch (internet connected "smart" deadlocks anyone?), but here and there I've found some useful internet-of-things-things to add.
Lights and lightbulbs are maybe the biggest no-brainer thing you could automate around your home, at least in my eyes. Automate them to wake you up, turn off when you leave the house, turn on when it gets dark in the evening, when you're away, and so on. To that end for the last few years I've had a set of hue bulbs (an older version with their hub) installed in my bedroom, upstairs office, and bird room. Over the years the app has gotten a bit clunky and slow so I ended up [[https://github.com/gmemstr/hue-webapp][making my own webapp]] for toggling lights, but I don't have any plans to upgrade the kit anytime soon. We'll talk about how I control the lights, and home, further down.
#+attr_html: :alt Simple hue webapp :title Hue webapp
[[/images/hue-webapp.png]]
We have a camera pointed at our bird's cage to keep an eye on her. It's a fairly straightforward Neos SmartCam/Wyzecam V2/Xiaomi Xiaofang 1S/Whatever other branding it exists under, and has been flashed with the [[https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/][Dafang Hacks custom firmware]] for a plain RTSP stream (among other more standard protocols being exposed). This means I can open it with any player (usually VLC) without any problems. It's not the most powerful camera, streaming only 720p at the moment (24FPS, 300kbps), but considering the simple use case of checking in on our bird when we're out of the house it serves its purpose well. Not exactly "smart" but it is still part of the system.
The next thing we added were some semi-connected thermometers. By semi-connected I mean they communicate over bluetooth low energy (BLE), so don't have a direct connection to the internet. This is somewhat preferable since it forces more processing to be done offline/in home. Unfortunately this also means we're very tied to the mildly hideous mobile app. At the very least we can still check the display on the front for the values.
Finally, at the end of the tour, we have a simple connected plug for our bird's UV lamp. We try to maintain a pretty regular schedule for her, so being able to automate the lamp is a huge plus as it's a really good indication of when it's daytime (we're in Britain, land of no sun. It's raining as I type this) and when it's time for bed.
Controlling all of this, especially in a centralised manner, is a little tricky. The pieces didn't really click until I remembered [[https://homebridge.io/][Homebridge]] existed. This spurred my adoption of HomeKit and the Apple Home app, and the purchase of an Apple HomePod mini. For all intents and purposes, this works really well. The rather simple UI is very fitting for my minimal/progressive "smart" home approach, and with Homebridge I've managed to bring in the camera and bluetooth thermometers (Homebridge is running on my Raspberry Pi based k3s cluster, which is easy since the Raspberry Pis have bluetooth to pick up the signal!). Using the Home app also means a much nicer experience for my partner, without needing to fiddle with multiple apps or deal with my very /engineered/ web interfaces, and allows us both to feed into the automations (e.g. don't turn off all the lights when one of us is still home).
As a sort of hidden benefit of Homebridge, I've been able to bring the thermometer metrics into my Grafana Cloud instance (more on this in the future) as the values from the plugin are printed to the log and shipped off to Loki. From there I do some regex on the logs to extract values. It's as bad as one might expect - that is to say, not horrible, but sometimes inaccurate.
#+ATTR_HTML: :alt Grafana dashboard for climate metrics :title Climate metrics in Grafana
[[/images/grafana-climate.png]]
From here the next step is to obtain some connected thermostatic valves for our radiators around the house. The radiator valves have ended up in slightly awkward or hard to reach places so being able to connect them up and adjust as needed (especially scheduling them for evening VR sessions) would be a huge plus. Beyond that, I'm unsure what else I would want to introduce in the home - most of the common bases are covered, especially when it comes to keeping an eye on things when we're out of the house. But who knows - keep up with me [[https://floofy.tech/@arch][on Mastodon]] and we'll see what happens next.

21
posts/sliproad.md Normal file
View file

@ -0,0 +1,21 @@
---
title: Sliproad
date: 2021-07-10
---
### Another case of "just do it yourself"
The want for quickly sharing files across devices and having an easy interface for uploading and downloading them is a relatively common want among people, both for those in the technology sphere or otherwise. Most may opt for services like Dropbox or Google Drive, which offer good clients for desktop and mobile, but I wanted to take it a step further and self host a solution. While options like ownCloud/Nextcloud and the like exist, I wasn't happy with the platforms for a variety of reasons (mostly with regard to their feature set being relatively large and ill-suited to my usual workflow). I also wanted to be able to bring in other file storage providers into one interface, since I store backups and whatnot on external providers (this came into play later, but we'll get to that).
Thus the concept of _sliproad_, originally just "pi nas" (changed for obvious reasons), was born. The initial commit was February 24th, 2019, [and really isn't much to look at](https://github.com/gmemstr/sliproad/commit/7b091438d43d77300c4be8afb64e2735dd423d71) - just reading a configuration defining a "cold" and "hot" storage location. The idea behind this stemmed from my use of two drives attached to my server at the time (a small Thinkcenter PC), one being an external hard drive and the other a much faster USB 3 SSD. For simplicity, I leveraged server-side rendered templates for file listings, and the API wasn't really of importance at this point.
![My old "server" setup](/images/old-server-setup.png)
For a long while, this sufficed. It was more or less a file explorer for a remote filesystem with two degrees of performance. But I wanted to expand on the frontend, specifically looking for more dynamic features. I began to work on decoupling functionality into an API rather than server-side templates in March of 2019, and that evolved to the implementation of "providers". I'm not entirely sure what sparked this idea, besides beginning to use Backblaze around the time of the rewrite. Previously, I ran a simple bash script to rsync my desktop's important files to the Thinkcenter's cold storage, but understood I needed offsite backups. Offloading this to Backblaze's B2 service was an option (and very worth the price) but I sacrificed ease of use when looking through backed up files. Bringing the various file providers under one roof allowed me to keep using the same interface and gave me the option of expanding the methods of interfacing with the filesystems provided. Around this time I was looking to rebrand, and taking a cue from highways chose the name "sliproad" to signify the merging of filesystems into one "road" (or API).
Coming back around to my want for a more robust frontend - while rewriting and decoupling the frontend rendering, I originally opted to rewrite the interface using React. This was off the heels of a relatively good experience rewriting my server monitor's interface ([Platypus](#)) using it, but it was quickly abandoned as I grew frustrated with the process of running both the React development server and the sliproad server in parallel to develop them in tandem. Eventually I opted to delete it and instead moved to a much more simplified form factor, with plain HTML, CSS and JavaScript. This ended up being a great move when Go's bundling of files into executables came to the stable branch, which meant I could deploy a single executable to my Raspberry Pi or wherever I need to run the project (I regularly run it on my desktop or work laptop to quickly nab files between them, rather than uploading them to the Pi as a go between).
And this is where Sliproad is currently. I've been tweaking the internals a bit to hopefully make future "providers" easier to add (spurred on by AWS S3 support) and working on figuring out how to handle authentication in the future, but the application itself works well for my use case. It's entirely possible it will work well for someone else, but that's pretty secondary. For the time being, I'm happy keeping the repository and code base "as is" and consider the project largely finished.
_Side note: I'm intentionally omitting the brief period I tried to rewrite the application in Rust. My intention was to rewrite with speed in mind, but ultimately it wasn't something I found myself wanting to keep up, given the level of functionality of the application in the current language, Go._

View file

@ -0,0 +1,65 @@
---
title: State of the Union
date: 2017-10-26
---
#### Where are the projects at?
I have a lot of projects on my plate right now. While
a full-time student, I am also working on expanding my portfolio
and knowledge for the real world, which means a lot of projects.
My current project I'm focusing on is the podcast hosting app written in
Go, named [**Pogo**](https://pogoapp.net). It's a straightforward CMS for managing podcast
episodes, and automatically generates an RSS feed. It is more than
stable in the current release, and I'd personally feel confident using
it in production (your mileage may vary). Pogo currently features
multiple user support, a flat directory structure for storing episodes
alongside their respective shownotes, *mostly* correct RSS (few bugs to
iron out, but all readers I have tested manage it fine), and a rich
admin interface built out in Vue.js, which includes custom CSS and
episode management.
I am currently working on the user management aspect of Pogo,
implementing a much more sane method of storing and modifying user
accounts, and looking into permissions for restricting certain
functions. Previously, users were stored in a JSON file, became
notoriously difficult to manage in Golang (not impossible however).
Thus, I have moved to the much more portable SQLite3 -- I do have plans
to explore the possibility of picking SQLite3 or MySQL (or MariaDB
etc.), however I plan to focus most of my efforts on ensuring SQLite3
compatibility. With this will come an admin interface for adding and
managing users, which in the current release requires you to manually
add them into the JSON file (and manually generate a bcrypt hash...).
Once the users branch has been merged into the master branch, work will
be done to rework the frontend to use Vue.js instead of plain
JavaScript. I've also been really happy with the current traffic and
outside contributions thanks to my efforts to promote it "organically"
and Hacktoberfest, from which some contributors have found the project.
Another project I've been looking at again is
[**Platypus**](https://getplatypus.io). The simple real time server usage monitor I wrote
back at GGServers has been lying dormant for a long time, and I can't
remember where I left off. It was ready to be deployed, but was not the
focus of the company at the time and I ended up moving it back to my
personal Github. I'm still very proud of the achievement of writing such
a platform in Python, but I want to start rewriting it in Go. The
reasons are twofold; one, I have become very familiar with Go in the
past few months, and believe it could offer much better performance when
it comes to scaling the application. It's never really been tested at
the large scale it should have been, and I'm still a bit leery of that
aspect. I do want to reach out to some larger companies to see if they'd
be interested in giving me a hand with this. Regardless, a rewrite in
Go + Vue.js is definitely on my mind, and improving the AOR interface so
anyone can write their own version in whatever they already have on
their server.
And I continue to work on articles for [**Git Galaxy**](https://gitgalaxy.com),
writing about whatever comes to mind when it comes to open source
software. I'm currently working on a Hacktoberfest experience roundup,
and researching another opinion piece along the lines of the [Opening
Schools to the
Students](https://gitgalaxy.com/opening-schools-to-the-students/). Analytic numbers are looking solid, and I am more
than happy with how it's turning out.
That is the state of my projects.

10
posts/tailscale-oidc.org Normal file
View file

@ -0,0 +1,10 @@
#+title: Controlling Your Fate with OIDC and Tailscale
#+date: 2023-05-21
I think the urge to self host services in a way that makes it difficult, if not impossible, for a third party to take away your ability to use the service is an itch many of us in tech have encountered and tried to scratch in some way or another. In my experience there are three approaches. First, one can opt for a third party provider to host the underlying server, with the freedom to install and operate whatever you want inside. This trades away a great deal of that autonomy in return for convenience, which is a bit of a theme among the approaches. Next is utilising hardware within your own home, such as a Raspberry Pi or a spare computer, to do the same. And finally you have the approach of outright rejecting servers and going for services that run entirely on-device where possible.
For myself, I went with the second approach of hosting things using hardware in my home where possible, with a small amount of "cloud" mixed in mostly for redundancy or backups, and I've [[/posts/current-infrastructure-2022/][blogged about it before]]. Connecting it all together is Tailscale, and despite being a major piece, it still relied on a third party provider - /Google/. Rather unfortunate given they're a large company I'm trying to interact with less, but I didn't really have another option until Tailscale rolled out support for [[https://tailscale.com/kb/1240/sso-custom-oidc/][custom OIDC providers]]! The one stumbling block I had was in regards to getting [[https://webfinger.net/][a webfinger]] up and running, as I assumed the OIDC provider had to be hosted on the same domain. Thankfully, this isn't actually the case, and I need to give a huge thank you to [[https://tenforward.social/@noracodes/110293900506448915][Nora]] for pointing me in the right direction. I quickly signed up for a basic managed Keycloak instance through [[https://www.cloud-iam.com/][Cloud-IAM]] and set about putting up a =.well-known/webfinger= file on my [[https://gmem.ca][gmem.ca]] domain. It's important to note that although I opted for a managed service to handle Identity and Access Management, it's theoretically trivial for me to migrate elsewhere by updating the webfinger and checking with Tailscale support.
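For reference, the WebFinger document Tailscale looks up is tiny -- a minimal sketch along these lines, where the account and the issuer URL are placeholders rather than my real values (the =rel= value is the standard OIDC issuer relation):

#+begin_src json
{
  "subject": "acct:someone@gmem.ca",
  "links": [
    {
      "rel": "http://openid.net/specs/connect/1.0/issuer",
      "href": "https://example.cloud-iam.com/auth/realms/example-realm"
    }
  ]
}
#+end_src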
The =.well-known/webfinger= endpoint for my domain started as a static file in an S3 bucket, and that worked well enough for myself. However, I wanted to grant my partner access to my tailnet, and realised that the static file wouldn't cut it. So, after an evening of hacking while recovering from COVID-19, I got a basic Rust-based AWS Lambda function written. Functionally, very simple - it pulls a webfinger JSON file from that same AWS bucket, finds the =subject= matching the query parameter, and returns the spec-conforming file. It's very straightforward, and I picked Rust to both learn Rust in the context of Lambdas and to keep any sort of resource usage as low as possible. The source is available [[https://git.sr.ht/~gmem/webfinger][on sourcehut]], although the documentation is a little lacking since I have yet to fully import the resources into my infrastructure Terraform. While the static JSON file would have still worked (maybe) when adding my partner to the tailnet, since the OIDC provider would still be the correct one, it doesn't hurt to set myself up for the future.
And with that, I have my Tailscale tailnet entirely under my domain, using my own OIDC provider, with the ability to add people as needed! Surprisingly straightforward, and entirely free. It's a few steps removed from running my own [[https://github.com/juanfont/headscale][headscale]] instance, but I don't have any desire to set that up at the moment since my primary use for Tailscale is to not worry about the networking between my devices.

View file

@ -0,0 +1,31 @@
---
title: The Importance of Income
date: 2015-11-02
---
#### It's scary how much a small number can mean
I'm preparing myself to start launching a long-term project that I have
been working on for the past few months. It's something I'm going to be
doing on the side and I don't hope to earn much of a living off
it -- for one simple reason. Per month, if the service takes off, I'd
have to pay around $108.88USD.
I don't really know how that compares to other startups. But considering
it's a (so far) one man operation and I'm not expecting to make a living
off it, I think it's rather cheap compared to something like
[Twitter](http://www.amazon.ca/Hatching-Twitter-Story-Friendship-Betrayal/dp/1591847087). Also, about $99 of that wouldn't need to be accounted for (pun unintended) until I hit around 500 users, which would mean at least the first couple of months would only cost me \$9.88, out of pocket. The \$99, by the way, would be for the 'Startup' level of [UserApp.io](https://www.userapp.io/), which I am using for user accounts and payments -- I could ditch it in favor of my own user system (and in fact at some point I plan to), but for now I don't want to focus on that, and therefore must pay the price when I hit the user cap.
So how can I make the project self-sufficient? Well, donations will be a
big part of it. There will be two ranks -- a $5 and a $15 -- that will hopefully be able to manage the upkeep cost. These will also be
monthly subscriptions, which means if as little as 20 users (out of the
500 cap) bought the $5 rank, that would be $100 I wouldn't have to
worry about. A minimum of 8 people buying the $15 rank would make the
entire project entirely self-funded, at $120 a month of income.
Now this is making a lot of assumptions, such as if the project will
actually take off, or if 20 people in 500 donate, but I think it's a
decent assumption that it will work.
Another important factor is investors, but I can cover that another
time.

View file

@ -0,0 +1,55 @@
---
title: The TVDB Google App Script
date: 2016-12-06
---
#### Let's learn something new!
My girlfriend Becky and I enjoy watching TV shows
together. So much so, in fact, that we've started putting together a
spreadsheet of the shows we need to binge watch together. So just for
the hell of it, I threw together a rather basic (and messy) Google App
Script that lives in the spreadsheet, which pulls information from [The
TVDB](https://thetvdb.com/) regarding
how many episodes there are and how long it would take to binge watch
all the shows (roughly).
Some things I learned while working in the Google App Script IDE. First,
tabs are only two spaces, instead of the usual four I work with in
Python. Which messed me up slightly when I first started, but I got much
more used to it. Second, it's just JavaScript, really. I expected some
sort of stripped down programming language but it's really just a
slightly tweaked JS environment, with some functions that allow you to
interact with Google Docs/Sheets/Forms etc. And finally, I learned just
how useful Google App Scripts can be -- I never really used them, and
believed them to be a waste of time to learn. Alas, I was wrong, my
elitist thinking getting the better of me.
So let's talk about the actual script. You can find the whole thing in
the Gist below, a slightly tidied up and revised version. You'll need an
API key from The TVDB, and I highly recommend you check out [their API
docs](https://api.thetvdb.com/swagger) so you know exactly what sort of info and routes
we're using.
Essentially, what happens is this. First, we search (using
`searchTvdb(show, network)`) for the specific show, using the network to
filter out results in case there are multiple shows with the same name.
Next, we take the showId it returns and query TVDB for the info
corresponding to the show -- we're most interested in the average
runtime from this query. We also ask TVDB for the summary of the
episodes, which returns the number of seasons, episodes aired, and so
on. We aggregate all this data into one clump of data and then throw it
into the spreadsheet.
It's very inefficient, I realize that. There are plenty of things I
could probably improve performance-wise; however, it works fine. I expect
the more shows in the spreadsheet the longer it will take (about 12
seconds with a 14 item list), but I'll refine a bit in the future.
![How the spreadsheet looks](https://cdn-images-1.medium.com/max/600/1*a6NU2Lv_H2gHrffRT2pfEw.png)
[Gist source](https://gist.github.com/gmemstr/d0024ab38a9cd0aae3a8cce25202c9b0)
> "If everyone demanded peace instead of another television set, then
> there'd be peace." -- [**John
> Lennon**](https://www.goodreads.com/author/show/19968.John_Lennon) **(Probably)**

View file

@ -0,0 +1,57 @@
---
title: Things Twitter Should Do Better
date: 2016-01-14
---
#### Just some notes...
Twitter is going downhill (in my opinion). Since Jack
Dorsey became CEO (again), there have been some... *interesting*
additions. The number one, most useless thing they wasted their time on
is **Twitter Moments**. Moments is basically a summary of media the
people you follow have tweeted. I'd like to see some stats on how many
people use it, and how many of those people clicked it either by
accident or out of idle curiosity. I don't think it's something Twitter
should have spent time developing -- perhaps their time would have been
better spent improving their existing things, like making the Twitter
mobile app more consistent or streamlining Twitter for Web -- or hell,
even making TweetDeck nicer to use.
The official Twitter app for Android (not sure about iOS) is a mess. The
most prominent issue I have is the fact that the top navigation bar will
change what icons it has where, either when I switch between accounts or
just restart the app. Their bottom nav is terrible too -- it will cycle
between a clunky floating action button, a bar with a faux-text input
and camera icon, and a three-section bar consisting of a new tweet,
camera, and gallery icons. What is this? Why on Earth do we need three
different nav styles -- that it *cycles between?!* I want to know both
how and why this decision was made, if it can count as a decision.
I think the worst part of this whole Twitter app debacle is that they
have people at Twitter who know how to do Twitter apps. The creator of
[Falcon Pro](http://getfalcon.pro/)
works at Twitter, and I consider his app an absolutely amazing Twitter
experience. So why is the official experience so terrible? And why are they
focusing on new things like an edit button (rumored), 10,000 character
tweets (again, rumored), and **bloody Twitter Moments**.
#### So, Mr. Jack Dorsey, how could you make Twitter decent again?
First, take a step back. Remember what you envisioned Twitter being. No,
not a text-only messaging system, that's a terrible idea, and remember
how much debt Twitter was in after that. No, rather, take a step back
and simplify. Cut out the 'features' you think are fantastic. Make it a
140-character microblogging platform. Cut out anything that you can't see
any reason to have. Open up your API more and let devs work on what they
want to. Why? Because people want and have freedom of choice. If they
don't want to use the official Twitter app, fine. Let them use an
unofficial one, an unofficial one that has actual access to something
resembling an API, not extremely basic functions. And fix your own
official apps, so people are actually tempted to use them. Keep the
140-character limit, because it's a gimmick that worked. Return Likes to
Favorites, because you're not Facebook, as much as you want to work
there. Maybe fix up Tweetdeck so it's a little nicer to use, modernize
the UI and make it flow. Promote the Falcon Pro author to a position
where he can make some UI/UX suggestions and actually get heard.
And above all, *please don't kill the blue bird.*

14
posts/use-rss.org Normal file
View file

@ -0,0 +1,14 @@
#+title: Use RSS
#+date: 2023-03-24
RSS is one of those wonderful technologies that continues to endure, and I genuinely hope that more people realize how useful it can be and embrace the fairly universal syndication format available rather than relying on algorithmically driven aggregators.
For a long time, I've been solely reliant on community-driven link aggregators like HackerNews or Lobste.rs (Reddit as well, although I've long since stopped using it). Why bother visiting sites individually, or seeking out content myself, when I could rely on other people to curate the content? That isn't to say I outright ignored RSS - I still relied on it extensively for podcasts. At some point, I even made a contribution to the [[https://github.com/gorilla/feeds/pull/41][gorilla/feeds project]] for a [[https://github.com/gmemstr/pogo][podcasting platform]] (sidenote: that codebase still works. If I were hosting my own podcast, I'd probably pick it back up). I was really happy with RSS for podcasts, but I wasn't taking it very seriously for articles and blog posts. After all, at that point, I didn't really follow anyone in particular - or if I did, Twitter was all I needed to keep up with them.
Over time, I started to notice a few recurring blogs I consistently read when they cropped up on the aggregators and decided that it was worth following them more closely than a Twitter follow. So, I went and downloaded NetNewsWire and set about adding a few feeds. Along the way, I added the RSS feeds for Lobste.rs and HackerNews - I still found the sites useful and mildly addictive, for varying reasons, and having them unified into a single app was helpful for skimming without the attraction of a large comments section or massive number of votes.
Eventually, the passive adding of interesting RSS feeds caused my collection to grow considerably. Often times a single interesting article from the fediverse or the aggregators will result in the blog being added to my RSS reader. To manage and sync all of them across devices, I was using NetNewsWire's iCloud sync integration, until I started to hit iCloud's rate limits. After an admittedly small amount of research, I opted for [[https://www.freshrss.org/][FreshRSS]] since NetNewsWire supported it natively, and it seemed simple enough to spin up in my homelab Kubernetes cluster. And since that point, it's been happily syncing my RSS feeds and keeping track of what I've read. I have no real notes about FreshRSS - it's a fairly boring and straightforward PHP application, with a reasonable web interface and a handy bookmarklet for subscribing to feeds (it will attempt to parse out the feed URL for a blog page).
It's worth noting that I never had the opportunity to use the apparently wonderful Google Reader, so I have no way of comparing my current setup to what it offered. NetNewsWire is a fairly "dumb" reader, but it does the job well.
At this point I'm subscribed to 48 different feeds for various personal blogs, aggregators, corporate blogs, and webcomics, and I do my best to keep the OPML file up to date [[https://gabrielsimmer.com/feeds.xml][on my website]] if anyone is looking for inspiration. I highly encourage others to /use RSS/ where possible, and preferably with a relatively "dumb" reader that avoids algorithmic curation - or at least offers the ability to ignore it. I also encourage you to subscribe to this blog's own RSS feed! I have a few posts in the pipeline around machine learning and its impacts and implications on code, literary and artistic works. And a post about coffee coming soon.

View file

@ -0,0 +1,48 @@
---
title: Watching a Cryptocurrency Grow
date: 2018-01-22
---
#### GRLC is the only coin you'll ever need
Last weekend, a new cryptocurrency launched: Garlicoin, a fork of
Litecoin based around garlic bread. It may seem silly at first, but keep
in mind that sometimes "silly" coins can have a huge impact, like the
time [Dogecoin sponsored a NASCAR
racer](https://www.nascar.com/en_us/news-media/blogs/Off-Track/posts/doge-reddit-josh-wise-talladega-superspeedway-aarons-499.html). Regardless of what you may believe, it's always
worth keeping an eye on meme coins (not memecoin, which is an actual
coin).
Currently, it's Monday, the day after the release, and already the
Garlicoin community is thriving, both on the incredibly active Discord
and the /r/garlicoin subreddit. There is also /r/garlicmarket, which
will be the focus of much of this post. Regardless, this is very much a
community driven coin, with a number of mining pools already available
([see here](http://pools.garlicoin.fun/)) and about 297063 coins available so far. The market
cap is at 69 million, so it should be interesting to see how that plays
out. It's also been pretty interesting to see the newcomers to the
cryptocurrency space asking questions and figuring things out, like why
having a pool with more than 51% of the total hashing power [is a bad
thing](https://learncryptography.com/cryptocurrency/51-attack).
#### So how valuable is Garlicoin right now? Should I be investing into it?
First, it's actually doing pretty well in terms of value. The
/r/garlicmarket subreddit seems to have established that 1GRLC (1
garlicoin) is about equivalent to a whole 1USD. This will likely
fluctuate over time, but considering how quickly they can be mined and
the current circulating supply it's pretty impressive. It's been fascinating
to watch new trade offers be posted offering 1USD for 1GRLC.
You can even [buy lockpicks with it if you want](https://www.reddit.com/r/GarlicMarket/comments/7s8bkw/shop_lock_picks_and_security_tools_usa_global/), on the same exchange rate.
As for whether or not you should invest now, I would say absolutely.
With the way things are currently going, 1USD being roughly 1GRLC, it
may be better to mine for them rather than trade or buy outright and
watch the price closely, because with cryptocurrency you never really
know what's going to happen.
If you want to get started, head over to the [official website](http://garlicoin.io/), where
you can find wallets and links to get started with mining.
My GRLC (vanity) address if you're interested:
GMeMMdtqRTUF7V9FmsdtuFcej69DnyKhnY

View file

@ -0,0 +1,21 @@
#+title: Well Known Fursona
#+date: 2023-05-06
Fursonas are a wonderful thing. They are a way of expressing one's inner self in a relatively safe manner, whether it be as a dark grey wolf or a vibrant pink and red cross between a dragon and a hyena. It's a fantastic escape from reality, and frankly I think more people should embrace the concept. I've adopted one as an extension of myself, giving me a character I can use consistently. As time goes on I've been trying to figure out my own self-identity, and having a fursona does help. The opportunities to meet interesting people who share this interest also cannot be overstated, and there's a significant number of people in the tech industry who are furries. It's fantastic.
Getting to know someone's fursona can be tricky, though. Some may have at most a reference sheet illustrating a character's important markings or features, while others may have an entire novel series of backstory. As a lover of standards, I find these wildly different approaches both a blessing and a curse, so I was very interested when I saw some talk about [[https://github.com/theHedgehog0/fursona-schema][Pyrox's fursona-schema]] repository. I feel that the =/.well-known= path is underutilised at the moment, and I found the idea of putting my fursona information in there amusing. The question was, /how to host it?/
Typically when it comes to static websites there is a plethora of options, which can lead to a bit of analysis paralysis: GitHub Pages, Vercel, Netlify, Cloudflare Pages, AWS S3 or Amplify, 'dumb' shared hosts, and more. In this instance, I decided to opt for an AWS S3 bucket with Cloudfront as a CDN/cache to keep costs down (Cloudfront includes 1TB of egress for free). The main advantage is that, thanks to Amazon's dominance, S3 has more or less become the standard API for interacting with object stores, so there are plenty of tools and libraries available should I want to expand on it. The disadvantage is that I miss out on the fancy auto deployments and integrations that other providers offer, but given this is /just/ hosting some static files (including a static [[https://gmem.ca/.well-known/webfinger][webfinger]]) I'm not particularly worried about it.
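For the curious, pushing objects into the bucket is about as simple as it gets. Here's a minimal sketch using the AWS SDK for JavaScript v3 - the bucket name, region, and the exact =.well-known= file name are placeholders rather than my real configuration.
#+begin_src typescript
// Minimal sketch: upload static files to the S3 bucket fronted by
// Cloudfront. Bucket name, region, and object keys are placeholders,
// not my actual setup.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const client = new S3Client({ region: "us-east-1" });

async function upload(localPath: string, key: string, contentType: string) {
  const body = await readFile(localPath);
  await client.send(
    new PutObjectCommand({
      Bucket: "example-fursona-bucket",
      Key: key,
      Body: body,
      ContentType: contentType,
    })
  );
}

// e.g. a hypothetical well-known document and the landing page
upload("fursona.json", ".well-known/fursona.json", "application/json")
  .then(() => upload("index.html", "index.html", "text/html"))
  .catch(console.error);
#+end_src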
#+attr_html: :alt A simple HTML page using dog onomatopoeia for words
[[/images/silly-site.png]]
/As a sidenote, thanks to the webfinger file, I now have a custom OIDC provider setup for my [[https://tailscale.com][tailnet]]! I'll probably talk about this in another post./
With my first in-person furry convention coming up, I thought it would be neat to expand on the idea by making a landing page for my fursona, one others could visit to learn more without needing to read some JSON. So I got to work with my current frontend framework of choice, SvelteKit (for better or worse), and Tailwind for styling (I may as well give it a fair shake). It's also worth noting that I'm still very much in my "research" phase of using ChatGPT and Copilot as pair programmers (in a sense), so those tools were involved but generally didn't change my approach. You can find the finished product at [[https://fursona.gmem.ca/][fursona.gmem.ca]].
Generally speaking the application itself is very simple - a landing page prompting for a domain, and a basic card-based page for displaying the fursonas from that domain. Initially, I made a =fetch()= request directly from the frontend to the resource, but ran into issues with CORS. Not wanting to require every site to set CORS headers before people could use it, I wrote a small amount of TypeScript that acts as a very simple proxy. I opted to build and deploy this on Vercel, since they back SvelteKit's development, I have experience deploying SvelteKit sites to them, and I can use Vercel Edge Functions for the proxy. With that said, I have had some minor annoyances with Vercel's performance; I'm hoping to dig into why at some point, but the site works for now.
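For anyone curious what that proxy looks like, here's a rough sketch of a Vercel Edge Function that does the job - the =?domain= query parameter and the exact =.well-known= path are illustrative guesses rather than the code actually running behind fursona.gmem.ca.
#+begin_src typescript
// Sketch of a CORS-friendly proxy as a Vercel Edge Function. The query
// parameter and well-known path here are illustrative, not the real values.
export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  const domain = new URL(req.url).searchParams.get("domain");
  if (!domain) {
    return new Response("missing ?domain= parameter", { status: 400 });
  }
  // Fetch server-side so the browser never talks to the remote origin
  // directly, which is what made CORS a problem in the first place.
  const upstream = await fetch(`https://${domain}/.well-known/fursona.json`);
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      "content-type":
        upstream.headers.get("content-type") ?? "application/json",
      "access-control-allow-origin": "*",
    },
  });
}
#+end_src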
I'm mostly familiar with SvelteKit, but Tailwind was new to me. While I'm not in love with it, having a pre-built CSS framework let me build quickly, and the resulting CSS files are relatively small. Cramming all the styling into the =class= of an element can get out of hand very quickly, though, so I'm not entirely sold on it. All that said, the site packs down to 55.17kB (29.21kB transferred) for the landing page and about 63.3kB (33.7kB) for the card page itself (I say "about" because I had to calculate the value manually, so I might be slightly off). This doesn't take caching into account, and most assets are cached. Of course a site without SvelteKit and/or Tailwind might pack down smaller, and could be built with server-side rendering for an even smaller footprint if we wanted. But the focus of this exercise, for lack of a better term, was Edge Functions and their speed, plus trying out Tailwind.
Overall, I'm happy with the result, and it will serve its purpose at FWA. I plan to attach at least one NFC sticker to the back of my custom badge that links to [[https://fursona.gmem.ca/gmem.ca][my own page]]. I hope that some others also find it helpful, and that it encourages adoption of the well-known schema. If you do find me at FWA, I'd be more than happy to program a sticker for your page as well! And thank you to everyone who has taken time to help test the application and suggest improvements. I'm planning to continue iterating on the application over time, as needed.

View file

@ -0,0 +1,69 @@
---
title: Where NodeMC is headed.
date: 2016-06-02
---
#### aka the "SE Daily Follow-up Post"
I talked about where I wanted to take NodeMC on the
Software Engineering Daily podcast last month. I also discussed a few
things about the future of Minecraft, it becoming less of a game and
more of a tool, and how NodeMC will help fulfil a need for quick and
easy-to-deploy Minecraft servers.
Before we get into the future, let's talk about what's happening for
NodeMC v6 (6.0.0). First and foremost, we've rewritten pretty much every
aspect of NodeMC using the ES6 specification, transforming the
monolithic single file of code into an agile, easy-to-scale platform
of microservices. Not only does this mean I have to learn ES6, it also
means it's much easier to add new features and API routes. We're adding
an auto-updater (which you will hopefully be able to turn on or off),
switching to semver for versioning, and overhauling the plugin
system with permissions and a better API for interfacing with the core.
We're also changing several of the routes to make more sense. Every
route implemented in version 5.0.0 or below is labelled '/v1/'. New
routes introduced in v6.0.0 are '/v2/'. Routes that interface with the
server are directed to '/server/', and so forth. Each of these routes
is generated with Express and some fancy magic (Jared goes into it a
bit more
[here](https://medium.com/@jaredallard/why-i-moved-from-monolithic-backends-to-microservices-d9955b9464b2#.bexdzdpzw)) so that we can keep all the code clean.
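To give a rough idea of what that looks like (this is a hypothetical sketch rather than the actual NodeMC source), versioned mounting in Express ends up looking something like this:

```typescript
// Hypothetical sketch of versioned route mounting, not the real NodeMC
// code: legacy routes live under /v1/, new routes under /v2/, and routes
// that interface with the Minecraft server itself under /server/.
import express, { Router } from "express";

const app = express();

const serverInfo = Router().get("/info", (_req, res) => {
  res.json({ nodemc: "6.0.0" });
});

const serverControl = Router().post("/start", (_req, res) => {
  res.json({ started: true });
});

app.use("/v1/server", serverInfo);   // carried over from 5.0.0 and below
app.use("/v2/server", serverInfo);   // new-style routes introduced in v6
app.use("/server", serverControl);   // acts on the server process itself

app.listen(3000, () => console.log("API listening on :3000"));
```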
So, back to the main topic of this post.
#### Where is NodeMC going?
The direction NodeMC is headed in is software-as-a-service. I want
to accommodate the rapid change in direction of Minecraft, as it
becomes less of a game and more of a platform for creative works. We've
seen it used as more than just a game before, with things like [the UN
using Minecraft to re-develop neighbourhoods](http://www.polygon.com/2014/4/22/5641044/minecraft-block-by-block-united-nations-project) or it being [used for
teaching](http://education.minecraft.net/minecraftedu/), and it makes me feel we're heading in the
direction of this sandbox game becoming a tool for creative,
educational, and professional work.
NodeMC as a SaaS basically means this: companies that want to quickly
deploy and manage Minecraft servers will be able to spin them up either
through the NodeMC interface or through their own UI. A typical example
might look something like this.
Company A wants to design a new housing complex really quickly to show
to some clients, and they feel Minecraft is the best way of doing that.
They would visit the NodeMC website and hit the "New Server" button,
picking the flat world preset with one or two plugins like WorldEdit.
Once the designers are done with their job, they run a command to zip
the world file, save the zip to the cloud, and shut off the server.
Company A can then spin up "viewing servers" that allow clients to log
in and explore the project freely. Everything is stored in the cloud,
and if Company A wants, they can download the zip file or run the world
through a processor first to export it to a 3D design program.
> TL;DR: Starts server for building at click of a button \> Builds
> mockup \> Saves world to the cloud \> Viewing server deployed for
> clients automatically.
Obviously this is not a small task, and it will require a *ton* more
work on NodeMC. Right now v6 is focused on the ES6 rewrite, the
dashboard written in React, and the plugin system. I'm already drawing
up v7 plans, which are going to help drive NodeMC in the direction I
want to take it. And who knows, maybe this will go in other unexpected
directions.

View file

@ -0,0 +1,50 @@
---
title: Why New Social Media Fails
date: 2017-06-16
---
#### (for now)
We all know and love social media. In this day and age,
it's almost impossible to avoid it. As social creatures we crave the
satisfaction of being connected to so many people at once. But we also
hate the platforms that are available -- Facebook is data hungry,
Twitter has some bizarre developer policies and is rife with bots,
reddit is a hive mind... the list goes on. So why can't we just make a
new social platform that solves all these issues?
The sad truth is that incentivizing people to move to another platform
is really difficult. We can observe this with something like Google
Allo, which surged in popularity when it was announced but has
apparently completely stalled in downloads (via [Android Police](http://www.androidpolice.com/2016/11/29/google-allo-hit-5-million-downloads-in-5-days-two-months-later-its-momentum-seems-utterly-stalled/)). A handful of people I talk to on a regular basis
have it installed, but we also end up communicating a lot on Twitter,
which defeats the purpose (I also have a sneaking suspicion I might be
the only person they have on Allo).
> [Tweet from @gmem_](https://twitter.com/gmem_/status/870790497727422464)
> [Tweet from @jaredallard](https://twitter.com/jaredallard/status/870790618238210048)
Unfortunately Facebook is no different. Despite my best efforts, for a
lot of people, including my bosses, it has pretty much completely
replaced SMS. Facebook Messenger is the only messaging platform they
use, and who can blame them -- it's a solid messaging app. I say that
with a hint of sarcasm, given its reputation for battery drain
([reddit](https://www.reddit.com/r/Nexus6P/comments/4jpcvt/absolutely_ridiculous_facebook_messenger_battery/), although I've seen some reports [suggesting they fixed it](http://mashable.com/2017/01/11/facebook-messenger-battery-drain/#jfK9Pvnv9iqP) recently).
And other social networks... have fared no better. The two biggest
names out there, Facebook and Twitter, dominate. Startups have
attempted to build better platforms -- I remember signing up for the
beta of [App.net](https://en.wikipedia.org/wiki/App.net), which ended up shutting down because the company
simply ran out of money, due both to low adoption rates and low
conversion of free users to paying customers. Niche platforms like
[Mastodon](https://mastodon.social/about) have risen, and they have potential, but adoption will
likely remain low and very niche.
Now there is a bright side to all this. There will always be a place for
startup social platforms -- remember, we all thought MySpace couldn't
fail (and when was the last time you thought of *MySpace*?). There will
be a time when Twitter doesn't manage to keep up with the times, or
people wise up and ditch Facebook completely. I don't want to discourage
people from exploring the potential of their own twist on the idea; who
knows what could happen.