It's been a month...
It's been a little over a month since I last gave you an update and I'm back. Let's talk about events in my life, my newfound love for Rust, why prediction markets are actually not a great business, and why EV doesn't tell the whole story.
A quick update on my professional life
Due to unforeseen circumstances, the dev shop I work for has stopped working with Paragon, the place where I built the social staking protocol and was cleaning up that trading bot. We were lucky that there was another client in the pipeline, so no downtime. But it was still bittersweet.
In a sense, I'm glad I don't have to put up with the messiness of that trading bot codebase anymore. It was genuinely frustrating. If you read the last email, you know that when we inherited it, it had zero tests and was very convoluted.
We managed to make some major improvements there as a team: we added tests for, refactored, and fixed two of the most important hot paths, buying and selling tokens. But the DCA feature remained as it was, and to be frank, DCA was the messiest, most complex part of the codebase.
I liked the challenge and I wish I could have taken it to its conclusion, to get the thing fully stable, but it is what it is.
Now I’m working on one of the things that I know best: an indexer. I’ll talk more about it in a future email, when I have interesting things to tell you about it. A hint: I’ll load test Node and MongoDB in the context of indexing EVM chains.
Another important update I should give you is that…
I have a son now!
I’ll keep this one short cause I know most of you are not here to read about my baby. It’s been a ride though. A beautiful ride, but a ride nonetheless. I’m used to sleeping 8 hours a night and having a newborn doesn’t really mix well with that, lol.
Having him here has imbued me with a whole new sense of responsibility and maturity. It also made me realize that mothers are heroes. They go through so much for us as kids and we don't even remember most of it.
Back to technical topics…
Rust: 1 year in review
It’s not really a year, but that doesn’t matter. I’ve used it enough to have an opinion. Back in December 2024 I got hired for this temporary contract to help build a prediction market PoC on Solana.
That eventually led to more Solana program work on a social staking protocol and even a legacy trading bot. You probably already know this by now, but in case you don't and would like to hear more, you can read the first release of this newsletter.
I can’t say that my first few months with Rust were enjoyable. Not because the language was bad. I actually think Rust is fairly easy to grasp if you avoid pointers and concurrency / parallelism (which I was already familiar with from my previous work with Go).
=====
Some explanations for more junior devs.
A "pointer" is a variable that points to a slot in memory that already exists. Many languages hide pointers from you and simply copy the value into a new slot for every new variable. You use pointers to make more efficient use of RAM (fewer copies of the same data) and of CPU (copying costs CPU time).
"Concurrency" is when you make progress on multiple things by interleaving them, one at a time. You make a query to a db and, while you wait for it to respond, you do something else.
"Parallelism" is when you use multiple CPU cores to compute XYZ while also computing ABC, literally at the same time.
Node, with its async/await, gives you concurrency but not parallelism, unless you use worker threads. Go's goroutines give you both: the runtime interleaves them and also spreads them across OS threads (tunable via GOMAXPROCS). In Rust, you can do both with Tokio.
=====
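To make the distinction concrete, here's a minimal Rust sketch of parallelism using plain OS threads (std::thread rather than Tokio, just to keep it dependency-free; the two computations are made up):

```rust
use std::thread;

// Two independent computations run on separate OS threads, so they can
// execute on different CPU cores at the same time (parallelism).
fn compute_in_parallel() -> (u64, u64) {
    let xyz = thread::spawn(|| (1u64..=1_000_000).sum::<u64>());
    let abc = thread::spawn(|| (1u64..=1_000_000).map(|n| n * 2).sum::<u64>());
    // join() blocks until each thread finishes and hands back its result
    (xyz.join().unwrap(), abc.join().unwrap())
}

fn main() {
    let (xyz, abc) = compute_in_parallel();
    println!("xyz = {xyz}, abc = {abc}");
}
```

With async/await (Tokio), you'd get the concurrency half of the picture too: tasks that yield while waiting on I/O instead of blocking a whole thread.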
Rust was hard for me because of the tooling. Not once, but twice have I complained (or ranted) about it in r/rust. rust-analyzer almost became the death of me.
A friend of mine joked with me once that those rants are what got me reported and banned on X, lol.
Once I managed to finally fix my rust-analyzer by cleaning various env var configs and migrating from Cursor to Zed, things actually took a nice turn.
I could finally appreciate not having nil pointers like in Go (which I'd coded in for about 1.5 years before Rust). I grew to love that it forces me to handle errors and other potential issues, which neither Go nor TypeScript really does. I started liking the match expression and a bunch of other things about it.
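A tiny sketch of what I mean (the lookup function and names are made up): absence is an explicit Option rather than a nil pointer, and match forces you to handle both outcomes before you can touch the value.

```rust
// A hypothetical lookup: there is no nil pointer to forget about;
// "not found" is an explicit value of its own.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alex"),
        _ => None,
    }
}

fn main() {
    // The compiler will not let us use the name without covering both arms.
    match find_user(1) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```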
And, now, after having used Rust at both work and on various personal experiments (like geiger.rs), I can say that I actually like it.
I like its strictness. I think it can make you faster, just like TypeScript makes you faster than JavaScript, because you end up generating fewer "dumb" bugs.
It's easy to compile for many targets, and you can even make it callable from Python (I think maybe even TypeScript). You have no GC pauses, which for most software isn't a meaningful performance gain, but it is there.
You can even build frontend apps with it using Leptos. It's not going to be much more performant, because the main bottleneck is still DOM updates, but it can be useful if you have to do intensive computations.
Ultimately, the mob of Rust cultists does the language a huge disservice. At least it did to me: for a long time I discounted Rust because of it. It really is a solid language.
I still wouldn’t use it for most things I build as a web / crypto dev, simply because it is harder to hire for. Trust me, I’ve had to do it. TypeScript & Solidity still are the kings (i.e.: most known, most used) for most of the work I do.
But I am keeping Rust in my back pocket. It is a neat tool and I’d go as far as to say it has replaced Go for me. And I was a pretty big Go fan (love the simplicity).
Stop with the premature optimization
I actually want to address one more thing before I let the Rust topic go. Earlier I said “I actually think Rust is fairly easy to grasp if you avoid pointers and concurrency / parallelism”.
I strongly stand by that. Rust is way easier to grasp if you avoid those. You're way more productive, too (low productivity tends to be a common complaint about Rust).
And I know the mob will descend on this blasphemy and try to burn me at the stake. But the truth is that most code out there is not bottlenecked (whether in speed or scale) by memory or compute; it's bottlenecked by I/O.
For more junior readers, I/O is reading and writing from/to the disk or a database. It's making network requests. It's writing to stdout (i.e., logs in the terminal).
The real performance killers in most software most of us will ever write are N+1 db queries, complex db queries, missing connection pooling, blocking I/O without async, missing caching, long-distance network trips, and too many network trips.
You can use a profiler to figure out where it actually makes sense to optimize for CPU / memory, but it should probably not be a core concern for you initially. Shipping should be, at least for most software.
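To illustrate the most common killer on that list, here's a sketch of N+1 versus a batched fetch. The "database" is just an in-memory map with made-up data; imagine every lookup against it costing a network round trip.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical users table.
fn demo_db() -> HashMap<u32, &'static str> {
    HashMap::from([(1, "ana"), (2, "bo"), (3, "cy")])
}

// N+1 pattern: one "query" per order, so N orders cost N round trips.
fn fetch_names_n_plus_one(
    db: &HashMap<u32, &'static str>,
    order_user_ids: &[u32],
) -> (usize, Vec<&'static str>) {
    let mut round_trips = 0;
    let mut names = Vec::new();
    for id in order_user_ids {
        round_trips += 1; // each iteration would hit the database
        if let Some(name) = db.get(id) {
            names.push(*name);
        }
    }
    (round_trips, names)
}

// Batched: collect the unique ids and fetch them in one
// `WHERE id IN (...)`-style query, i.e. a single round trip.
fn fetch_names_batched(
    db: &HashMap<u32, &'static str>,
    order_user_ids: &[u32],
) -> (usize, Vec<&'static str>) {
    let unique: HashSet<u32> = order_user_ids.iter().copied().collect();
    let names = unique.iter().filter_map(|id| db.get(id).copied()).collect();
    (1, names)
}

fn main() {
    let db = demo_db();
    let orders = [1, 2, 3, 2, 1];
    println!("N+1: {} trips", fetch_names_n_plus_one(&db, &orders).0);
    println!("batched: {} trip", fetch_names_batched(&db, &orders).0);
}
```

Five round trips become one, and the gap grows linearly with the number of orders.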
Prediction markets / eSports betting are a shitty business
In the last newsletter I mentioned my idea for Bumble Bet, a would-be prediction market for eSports.
Well, over time I've learned that prediction markets are actually a hard business. Part of that comes from doing more research, and part from my new contract being on a crypto gambling project.
For one, this is an entertainment business. You have to keep people entertained to keep them around, otherwise they’ll leave for the newest hottest thing.
My father had a restaurant, a club and a hotel and the core problem was precisely this. You have to perpetually keep re-inventing your design, brand & spiel or people will get bored.
He also had a scrap yard and a concrete batch plant. No re-inventing there, just offer a good service at a good price and it’s “easy” mode.
That said, this is not your biggest problem. Not even close.
CAC (customer acquisition cost) can fluctuate a lot and it really depends on who you ask, but I’ve seen numbers ranging from $60 up to $500 in the gambling industry. Supposedly, the CAC for Polymarket / Kalshi is far lower at $20 to $60 because they had a lot of organic growth.
The first problem with this high CAC is that you need to spend a lot of money to capture a meaningful amount of users. At $60 CAC, 1,000 users = $60,000 in marketing spend. This means it's hard to indie hack this business; you need outside capital.
That said, even if the CAC is high, if it’s profitable it shouldn’t be hard to raise outside capital, no? Well, this is where it gets messy.
Rake (the % out of betting volume that you keep), betting volume of a user and ARPU (average revenue per user) are a bit hard to pin down.
From what I’ve seen, we can generally expect a rake of 0% to 5%. Call it a 3%.
As for ARPU, I believe Polymarket and the like sit high at ~$800 because their betting volume per user per year is also very high ($25k+). But you have to take into account that there are a lot of political bets which drive these numbers high.
For eSports betting, I've managed to find that the global ARPU is closer to around $33. In various countries, like the US or Germany, it might get up to $60. Sweden seems to be a big spender at $125.
This is all obviously very problematic.
Yes, in a world where you spend $20 on marketing to make $800 you have plenty of margin for anything else you might need like product development or regulation-associated costs.
The “spend $20, get $800” seems like something unique to political prediction markets though.
In a world where you spend $60 in marketing to make $30 in revenue, you’re underwater from the start.
Even if you bet on (pun-intended) your average user playing for 2-3 years and you’re playing the long game, you’re still spending $60 to make *maybe* $90 and you’ve just invited cashflow management problems in.
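The arithmetic above fits in a few lines. Here's a back-of-envelope sketch using the rough estimates from this section (the ARPU and retention figures are the guesses discussed above, not real data):

```rust
// Lifetime margin per user = ARPU per year * years retained - CAC.
fn lifetime_margin(arpu_per_year: f64, years_retained: f64, cac: f64) -> f64 {
    arpu_per_year * years_retained - cac
}

fn main() {
    // eSports betting: ~$30/user/year, an optimistic 3-year retention, $60 CAC
    let esports = lifetime_margin(30.0, 3.0, 60.0);
    // Political prediction markets: ~$800 ARPU over a year, ~$20 CAC
    let political = lifetime_margin(800.0, 1.0, 20.0);
    println!("esports: ${esports} margin, political: ${political} margin");
}
```

A razor-thin $30 over three years versus $780 in one: same formula, wildly different businesses.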
Let me also tell you, the regulation-associated costs are a huge issue as well… as I’ve learned from my current contract at the crypto gambling site. It’s not just dollar cost, but a huge time sink too.
I’m not saying it can’t be done. Obviously averages are averages and I don’t think you should subscribe to being average. That said, these aspects have moderated my desire to tackle this business.
Lately I've been enamored with the idea of making a toxic flow detector, but that's a topic for the next email, when it's closer to being done.
EV doesn’t tell the whole story
Since we’ve talked about gambling and I also promised you charts before, let’s talk about a topic that I find very interesting, especially as it pertains to startups or indie hacking.
Let's say I give you a game where you start with $100. You are allowed to bet 3 times only. Each bet must be 33% of your current portfolio (so $33.33 to start). Your win probability each round is 2/3, and every time you win I double your money: your stake comes back as twice what you put in.
The expected value of one round is 1.33x the stake (2/3 × 2), which works out to an expected portfolio multiplier of about 1.37x over the 3-bet game ((10/9)³ ≈ 1.37), which is nice.

3 bets at 1/3 your budget
But, in reality, if 100 people take that sequence of bets, about 26 of them will lose money.
Isn’t that insane?

30 bets at 1/3 your budget
Even if you play up to 30 rounds, you still end up with a loss about 17.1% of the time.
This, my friends, is an example of ergodicity… or, actually, non-ergodicity.
In an ergodic game, ensemble average == time average. What happens to a group on average is what happens to you on average over time.
In a non-ergodic game, time average != ensemble average. Past outcomes permanently change your future possibilities and you can get "stuck" in certain states.
Our game here is not as bad. A starker example is 6 people each playing Russian roulette once versus you playing it 6 times: each member of the group has a 5/6 chance of surviving their single round, but your chance of surviving all 6 of your own rounds (with a re-spin each time) is (5/6)⁶ ≈ 33%.
There is a lot of material out there on the topic. But, simply put, the EV (expected value) doesn't tell the whole story.
This is a bit of a common problem in many real life situations: startups, quant trading, business, etc. When uncertainty exists, you need to optimize for long-term survival… long enough to let the EV play out.

30 bets of 1/30 your budget
In this new chart, we bet 30 rounds and each time we only bet 1/30 of our portfolio. You can see two things:
the portfolio value doesn’t get as high (within 30 bets) because your bets are smaller
you’ll only lose money 4.4% of the time instead of the earlier 26.2% and 17.1%.
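You don't even need to simulate to get these percentages; they fall out of the binomial distribution. A sketch, assuming (as the numbers in the charts suggest) that a win pays 1:1, i.e. each win multiplies the portfolio by (1 + f) and each loss by (1 - f), where f is the fraction staked:

```rust
// P(exactly k wins in n rounds) = C(n, k) * p^k * (1-p)^(n-k)
fn binom_pmf(n: u64, k: u64, p: f64) -> f64 {
    let mut c = 1.0; // build C(n, k) iteratively to avoid integer overflow
    for i in 0..k {
        c *= (n - i) as f64 / (i + 1) as f64;
    }
    c * p.powi(k as i32) * (1.0 - p).powi((n - k) as i32)
}

// Sum the probability of every win-count whose final multiplier ends below 1.
fn loss_probability(rounds: u64, fraction: f64, p_win: f64) -> f64 {
    (0..=rounds)
        .filter(|&wins| {
            let losses = (rounds - wins) as i32;
            (1.0 + fraction).powi(wins as i32) * (1.0 - fraction).powi(losses) < 1.0
        })
        .map(|wins| binom_pmf(rounds, wins, p_win))
        .sum()
}

fn main() {
    let p = 2.0 / 3.0;
    println!("{:.1}%", 100.0 * loss_probability(3, 1.0 / 3.0, p)); // ≈ 26%
    println!("{:.1}%", 100.0 * loss_probability(30, 1.0 / 3.0, p)); // ≈ 17%
    println!("{:.1}%", 100.0 * loss_probability(30, 1.0 / 30.0, p)); // ≈ 4%
}
```

The 3-bet case is exactly 7/27: you lose money whenever you win 1 round or fewer out of 3.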
You can call this bankroll management. You can read about the Kelly Criterion and a whole bunch of other similar things if this is interesting to you.
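The Kelly Criterion itself is a one-liner. A sketch for this game, again assuming a 1:1 payout: the formula is f* = (b·p - q) / b, where b is the net odds (profit per unit staked on a win), p is the win probability, and q = 1 - p.

```rust
// Kelly fraction: the stake size that maximizes long-run log growth.
fn kelly_fraction(net_odds: f64, p_win: f64) -> f64 {
    (net_odds * p_win - (1.0 - p_win)) / net_odds
}

fn main() {
    // Our game: 1:1 payout (net_odds = 1), p = 2/3.
    let f = kelly_fraction(1.0, 2.0 / 3.0);
    println!("Kelly says stake {:.4} of the bankroll", f); // 1/3
}
```

Interestingly, full Kelly for this game is exactly the 1/3 stake from the first charts: growth-optimal, but still a bumpy ride, which is why practitioners often bet a fraction of Kelly (a half, or a tenth) to cut drawdown risk.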
I’ll leave you with a little puzzle to think through. If someone offers you $1,000,000 (one million USD) or a fair coin flip (50% odds) for $1,000,000,000 (one billion USD), what do you take? Assume you can’t sell your rights to the flip or game the system in any way.
Email me with your answer and thought process if you want.