sc and dustman

October 9, 2016

In my career as a programmer and, generally speaking, software developer, I've often had to deal with problems for which I couldn't find any solution online, so I ended up writing a solution from scratch.

Once I wanted to make a “speedcoding” video, the coding version of speedpainting. “Use Chronolapse”, I hear you say. “I am a Linux user”, you hear me say. And while it is true that Chronolapse has a Linux version, it's also true that it fucking sucks. It crashed on me about 10 times while I was trying to use it, and I got pretty sick of it. So what did I do? I wrote my own! And so, sc was born.

sc stands for SpeedCoding, and it is, of course, software for speedcoding. It's really something I wrote in a few minutes, for my very specific use case, which is using it on KDE (it executes spectacle). Its code is absolutely minimal, and it can be adapted to other “screenshot taking systems” very easily, because all you need to change is one function. I've been using it to make my speedcoding videos, and various other attempts at them, which always end up being too long or too short.
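
To give an idea of how little code something like this takes: the following is not sc's actual source, just a minimal sketch of the same idea, assuming spectacle is in your PATH. Adapting it to another “screenshot taking system” means changing takeScreenshot and nothing else.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// takeScreenshot shells out to KDE's spectacle: -b runs it in the
// background, -n suppresses the notification, -o sets the output file.
func takeScreenshot(path string) error {
	return exec.Command("spectacle", "-b", "-n", "-o", path).Run()
}

func main() {
	// Grab a frame every two seconds; stitch them into a video afterwards.
	for i := 0; ; i++ {
		if err := takeScreenshot(fmt.Sprintf("frame%05d.png", i)); err != nil {
			fmt.Println("screenshot failed:", err)
		}
		time.Sleep(2 * time.Second)
	}
}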

Today I've made a second tool, in more or less the same style. This time it took a couple of hours, but still not too long, and I didn't write any particularly huge amount of code for it. It also does one thing, and does it well. While it may sound stupid, this is my new spam filter. And it's called dustman.

So here's the story, which I predict will take slightly longer to explain. Basically, setting up an email server in 2016 still sucks. And apparently it always will. Anyway, I set up the email server on my server, zxq, way back in the past, like 2 years ago, and it has worked ever since, in one way or another. Early this year I made it work using my MySQL server, which basically meant I didn't have to modify two files each time I added a new user. That had already been a nightmare to get working, but in the end I made it. Now, fast forward a few months, to me about 3 months ago. The amount of spam email I was (and am) getting is amazing: I receive a spam email every 10 minutes, so a spam filter was in order.

DigitalOcean, from which I read the guide on how to set up the mail server, had a guide on how to set up spamassassin, which unfortunately didn't work completely. My desired behaviour is very simple:

  • Emails that are spam go to the Junk folder.
  • Emails that are not stay in the inbox.

Turns out this is one of the most difficult things to do in the world, at least on my server. So I swore and I swore, day and night, but still nothing. The best I managed was being able to receive email (with spam going to Junk), but with no way in the world to send email. So I rolled back the changes; at least spamassassin now warned me with a nice *****SPAM***** in the subject line, which made deleting all spam emails fairly easy. Still, I was doing it manually, and you probably know how much a programmer hates doing things by hand. I had been postponing this job for about a month, but today I finally did it and wrote dustman, probably one of the hackiest ways of filtering spam.

What it does, basically, is get notified of file changes in the mail folder. If an email is being created or modified, it checks the email's headers, and if X-Spam-Flag is present and set to “YES”, it moves the file to the Junk folder. And since it uses file system notifications, it moves stuff so fast that your email client doesn't even notice it was ever in the inbox. Hack-ish, but it works.
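
For the curious, here's a minimal sketch of that idea (not dustman's actual source): it uses the fsnotify package for the file system notifications and net/mail for the header check, and the Maildir paths are made up, so adjust them to your setup.

package main

import (
	"log"
	"net/mail"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// isSpam parses the headers of the mail file and checks X-Spam-Flag.
func isSpam(path string) bool {
	f, err := os.Open(path)
	if err != nil {
		return false
	}
	defer f.Close()
	msg, err := mail.ReadMessage(f)
	if err != nil {
		return false
	}
	return msg.Header.Get("X-Spam-Flag") == "YES"
}

func main() {
	// Made-up Maildir paths: point these at your actual mail folders.
	inbox := "/var/vmail/zxq.co/user/new"
	junk := "/var/vmail/zxq.co/user/.Junk/new"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(inbox); err != nil {
		log.Fatal(err)
	}

	for ev := range w.Events {
		// Only care about emails being created or modified.
		if ev.Op&(fsnotify.Create|fsnotify.Write) == 0 {
			continue
		}
		if isSpam(ev.Name) {
			dest := filepath.Join(junk, filepath.Base(ev.Name))
			if err := os.Rename(ev.Name, dest); err != nil {
				log.Println("move failed:", err)
			}
		}
	}
}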

Getting this cool stuff

Now, if you're here, you might be wondering how you can download these things. Well, the easiest way is to install Go, set up your GOPATH, and then go get them. Easy as that!

Building a large-scale osu! private server from scratch

October 3, 2016

Hello there! It's been a while. In the past few months, I've been spending most of my spare time working on Ripple, an osu! private server I built with a friend of mine, Nyo. Ripple is the first large-scale project I've built, so I'd like to share my experience from my year and something writing code for it.

A bit of a summary, for those of you not coming here from Ripple:

Ripple was born out of my mind on the 12th of August, 2015. A few days later, my pal Nyo came in and started contributing a hell of a lot of code, and in about a week or so we were set up: we had a domain, we had the server working (it was score-only, it did not have bancho), and we were playing on it with about 10 friends of ours. But shortly after, about a month in, I decided to shut down the project, because peppy was going to add HTTPS to osu!, and I believed it would be impossible to get it working (turns out it isn't). Fast forward from September to January: Nyo decides it's time to bring Ripple back to life, especially since justm3, a famous member of a certain osu! hacking community, released the source code of his bancho server.

The reason we didn't have a Bancho server when we started out with Ripple was that we were clueless about how it worked, and I, who was in charge of understanding the protocol, understood jack shit about it, especially since I knew little about binary encoding. Anyway, taking justm3's code as a reference, Nyo wrote a bancho server. It was ugly, it used PHP and MySQL (not even redis!), but it worked for the moment. We still refrained from advertising ourselves to the public, but we started letting people in slowly, so we had more users than we did initially.

Now, at this moment in history, Ripple performed terribly. It was all PHP, and as you may know, PHP pretty much sucks when it comes to performance, especially for real-time chatting and streaming systems. So the Great Rewrite started (I'm kidding, we didn't actually name it that).

So, how did we optimise Ripple?

Rewrite Bancho

The very first thing Nyo did soon after getting the PHP bancho (nicknamed potatobancho) done was rewriting it in Python. And well, guess what: it worked waaay better. A lot of the code is still in use, although, as in most programs, a lot of it has been rewritten.

Rewrite the cron

So, here's the thing. Up until a few months ago, we had a PHP script that did some administration tasks, such as recalculating leaderboards, ranked scores, accuracies, and all that stuff. Here's the source. It was first made for the first version of Ripple, and no surprise here, it sucked. Keep in mind that was waaaay back, when we had 300 or so users (today we have over 10,000). Since we wanted it to actually be fast and scalable, I took care of it and rewrote it in Go in about a week or so. This was the outcome (see especially the last part, where the workers are maximised). It went from being a slow-ass script to being a modular program that is super fast: what used to take 3 minutes was done in 10 seconds or so. Even today, the cron still takes only about 30 seconds to execute, and I'd say that's pretty good scaling.
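
To show what I mean by maximising the workers, here's a toy sketch of the pattern (not the cron's real code; recalcUser is a made-up stand-in for the actual recalculation work):

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// recalcUser stands in for the real work: recalculating a user's
// accuracy, ranked score, leaderboard position, and so on.
func recalcUser(id int) {
	fmt.Println("recalculated user", id)
}

func main() {
	userIDs := make(chan int)
	var wg sync.WaitGroup

	// "Maximise the workers": spawn one worker goroutine per CPU,
	// all draining the same channel of user IDs.
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range userIDs {
				recalcUser(id)
			}
		}()
	}

	// Feed the pool, then wait for every worker to finish.
	for id := 1; id <= 100; id++ {
		userIDs <- id
	}
	close(userIDs)
	wg.Wait()
}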

Rewrite the score server

This was another very important thing to do. As we grew, it became more important for responses to osu!'s unofficial API (the one the client/game itself uses) to be extremely fast, so that people can have a flawless experience playing the game. Again, this was done with Python. LETS is probably one of our most important projects, as it's really the core of a lot of stuff, like leaderboards, score submission and downloading replays. While it's still not as fast as we'd wish, mainly due to technical issues which would take a lot of time to fix, it's still pretty fast and does a good fucking job at replacing the old shitty PHP scripts.

Add LIMITs to your queries

Something a lot of people forget to do when writing queries is adding LIMITs whenever they can. As it turned out for us, along with adding a hell of a lot of indexes to our tables, LIMIT 1 made for our second-best improvement in performance.
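
To illustrate (the table and column names here are made up, not Ripple's real schema), the trick is simply telling MySQL it can stop scanning as soon as it finds the row you want:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:password@/ripple") // made-up DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Only the best score is needed: without LIMIT 1, MySQL may keep
	// scanning rows long after it has found what we asked for.
	var best int64
	err = db.QueryRow(
		`SELECT score FROM scores
		 WHERE userid = ? AND beatmap_md5 = ?
		 ORDER BY score DESC
		 LIMIT 1`,
		1009, "0eba2ff2d6e64f9bd1e686ca58bf8b77",
	).Scan(&best)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("best score:", best)
}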

Getting rid of the current website

If you do not have horrible taste in design, you'll have noticed that Ripple's design is absolutely disgusting as of right now. That's the main reason why I'm currently building a replacement, codenamed “Hanayo” after the Love Live! character. There's also a performance reason: the PHP version was very resource-straining and especially very slow. Pages usually took about 500ms to one second to generate, and the admin control panel went even higher than that, up to 2 seconds.

Currently Hanayo is really fast: requests usually take ~100ms, which is still a huge improvement. I can say that developing Ripple over the past year has helped me a lot in developing my skills as a programmer, mostly when it comes to building large-scale products, since Ripple is really the first project of mine that I can say succeeded. It's also the first project I've started and built entirely together with a friend, which in my opinion means a lot, since it shows how important working with someone else is when making a project. For the two of us, the biggest driver of development was that whenever one of us didn't feel like writing code for Ripple, seeing the other at work made him want to get back to it.

Snowflake IDs, JSON and Go

June 2, 2016

I've recently started making a library for the Discord API. The Discord API uses snowflakes to generate its large ID numbers, and to prevent integer overflows in various languages, it wraps snowflake IDs in a string. In Go, this can be overcome in a simple manner by adding ,string to the JSON tag:

type MyJSON struct {
	ID uint64 `json:"id,string"` // ",string": the number arrives quoted
}

var m MyJSON
json.Unmarshal([]byte(`{ "id": "1956019865191951" }`), &m)
fmt.Println(m.ID)
// Output: 1956019865191951


Neat, isn’t it? So now you have solved your problem with that busty uint64 being wrapped into a string, by simply placing ,string in the tag of your struct field.

But have you really?

I happened to have to unmarshal an array of these Snowflake IDs. Here’s what I tried:

var into struct {
	Els []uint64 `json:"els,string"`
}
err := json.Unmarshal([]byte(`{"els":["1941", "918592", "9581958129", "5819235812"]}`), &into)
if err != nil {
	fmt.Println(err)
	// Error: json: cannot unmarshal string into Go value of type uint64
}


So, how can we fix it? Well, with custom types, of course! Here is a trivial (and unsafe) example of a Snowflake type with UnmarshalJSON and MarshalJSON methods:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	var into struct {
		Els []Snowflake `json:"els,string"`
	}
	err := json.Unmarshal([]byte(`{"els":["1941", "918592", "9581958129", "5819235812"]}`), &into)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(into.Els)
	// Output: [1941 918592 9581958129 5819235812]
}

type Snowflake uint64

// UnmarshalJSON strips the surrounding quotes and parses the number.
// (Unsafe: it assumes data is at least two bytes long and actually quoted.)
func (f *Snowflake) UnmarshalJSON(data []byte) error {
	i, err := strconv.ParseUint(string(data[1:len(data)-1]), 10, 64)
	if err != nil {
		return err
	}
	*f = Snowflake(i)
	return nil
}

// MarshalJSON wraps the number back into a JSON string.
func (f Snowflake) MarshalJSON() ([]byte, error) {
	return []byte(`"` + strconv.FormatUint(uint64(f), 10) + `"`), nil
}

As you can see, this solution plays pretty nicely, and requires very little code. I've wrapped the Snowflake type into a package, with adequate tests, documentation, examples and an actually safe implementation of the unmarshaller. It now gets down to this!

How to set up deployment of jekyll websites with gogs + drone

December 13, 2015

I'm not a big fan of WordPress (nor of any CMS in general). The main reason is that I like to save every single bit of my resources, and the biggest CMS in the world is a huge memory hog. There are some lightweight solutions, such as Anchor; or, if you fancy Python, my good ol' friend QuadPiece has created his own CMS, flask-blog. Still, nothing dynamically generated can be as low on resources as a static site.

About a year and a half ago, I came across GitHub Pages, and thus jekyll. I'm telling you, I didn't even really try to run jekyll at the time, because I was still a newbie to the programming world and didn't want to bother setting up jekyll via the Windows guide. As I think I've already mentioned somewhere on this blog, developing on Windows when you're not actually doing .NET development is a pain in the fucking ass. That's also the case for Ruby.

Fast forward to about March of this year: I decided to replace my blog once again, and this time put it on the front page of my website (well, actually a redirect). I grabbed jekyll and a cool theme, changed it a bit, created a placeholder post, and boom, ready to go. Push to the website.

At first, I would manually upload the _site folder's contents to the server. After that, I started generating the site with jekyll on the server itself. Today I decided that had to change, so I put the blog under version control, as if it were a GitHub Pages project.

A few months ago I came across gogs, which I think is the best git system in the world, despite it lacking many features. Why do I love it so much? Performance! Gogs runs on very few megabytes of RAM and doesn't eat much of your CPU, so, given my performance obsession, it was perfect for me.

Shortly after, I installed drone on my server and integrated it with my gogs instance, and it's been running amazingly. I don't really run tests on GitHub, because the few times I tried to set up Travis CI I failed, but with drone everything went smoothly, and I love it so far.

Getting back to the blog talk: I put my jekyll blog under version control, just like any 1337 on earth, and pushed it to a repo on GitHub. Then, to create a system like the one on GitHub Pages, I set up drone to regenerate the blog each time I push something new. Here's the final .drone.yml:

# Check that it builds successfully
build:
  image: grahamc/jekyll
  commands:
    - jekyll build

# And then push to prod!
deploy:
  rsync:
    when:
      branch: master
    user: $$SSH_USER
    host: $$SSH_HOST
    port: 22
    source: ./_site/*
    target: $$SSH_FOLDER
    delete: false
    recursive: true

Understanding it is fairly simple: build the website, and then, if the branch is master and the build passed, push it to the website via rsync. Easy, isn't it? Now I don't have to ssh into my server to update the blog. Yay git! Yay automation!

© Morgan Bazalgette 2015-2017.