Programming Leftovers
-
RC Week 9: Parallels of Proofs and Programs | nicholas@web
I have three weeks left at Recurse Center. This last week was significantly less productive for me than usual, because I've been pretty fatigued and just recovered from a cold. But I still got some work done that I'm proud of. More than that, I'm excited for the coming three weeks!
I was fatigued for most of the week, so I didn't do very much coding. In spite of that, I made some really good progress on IsabellaDB through some pairing sessions! A friend reminded me that a few years ago I was deeply skeptical of pair programming (I knew it worked for some people, but I was convinced I was not one of those people). This week cemented what I learned earlier in batch: Pair programming is a highly effective tool for getting work done. It's not an all-the-time thing for me, and it's highly dependent on having the right pair for the right problem, but it's a great time.
Through pairing this week, I was able to finish out both a basic move explorer (show the list of legal moves, click one to make that move) and my sparse bitmap implementation. This lays the groundwork for the more interesting features I am building with IsabellaDB. Next up is displaying win/loss/draw percentages in an opening tree so you can explore openings. After that, I'll build some filters to explore openings for a certain subset of games (played in the last 12 months, etc.). And then after that, I'll generalize it into a query engine over all the games, so you can do things like search for sequences of positions (want to see how often the Caro-Kann transposes into a French Defense?) or features of positions/games (want to find all the Botez Gambits?).
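To make the opening-tree idea concrete, here is a rough Rust sketch of the kind of win/loss/draw aggregation described above. The names and types (`OpeningNode`, `insert_game`, keying children by move text) are illustrative assumptions, not IsabellaDB's actual code.

```rust
use std::collections::HashMap;

// Hypothetical node in an opening tree: win/loss/draw counts for the
// position reached after the moves leading to this node.
#[derive(Default)]
struct OpeningNode {
    wins: u64,
    losses: u64,
    draws: u64,
    children: HashMap<String, OpeningNode>, // keyed by move text, e.g. "e4"
}

enum Outcome {
    Win,
    Loss,
    Draw,
}

impl OpeningNode {
    // Walk one game's move list, bumping the counts on every node we pass.
    fn insert_game(&mut self, moves: &[&str], outcome: &Outcome) {
        if let Some((first, rest)) = moves.split_first() {
            let child = self.children.entry(first.to_string()).or_default();
            match outcome {
                Outcome::Win => child.wins += 1,
                Outcome::Loss => child.losses += 1,
                Outcome::Draw => child.draws += 1,
            }
            child.insert_game(rest, outcome);
        }
    }

    fn win_percent(&self) -> f64 {
        let total = (self.wins + self.losses + self.draws) as f64;
        if total == 0.0 { 0.0 } else { 100.0 * self.wins as f64 / total }
    }
}

fn main() {
    let mut root = OpeningNode::default();
    root.insert_game(&["e4", "c6"], &Outcome::Draw); // a Caro-Kann game
    root.insert_game(&["e4", "e6"], &Outcome::Win); // a French Defense game
    println!("1.e4 win rate: {:.1}%", root.children["e4"].win_percent());
}
```

Filters over subsets of games (by date, rating, etc.) would then just select which games get fed into `insert_game`, which is why the sparse bitmap work is the groundwork here.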
-
How to do Pairwise Comparisons in R? - Data Science Tutorials
How to do pairwise comparisons in R: to evaluate whether there is a statistically significant difference between the means of three or more independent groups, a one-way ANOVA is used; pairwise comparisons then identify which specific groups differ.
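For reference (a standard textbook formula, not part of the linked tutorial), the one-way ANOVA test statistic compares between-group to within-group variance for k groups with n_i observations in group i and N observations in total:

```latex
% One-way ANOVA F statistic: k groups, n_i observations in group i, N total.
% \bar{x}_i is the mean of group i, \bar{x} the grand mean.
F = \frac{\mathrm{MS}_{\mathrm{between}}}{\mathrm{MS}_{\mathrm{within}}}
  = \frac{\sum_{i=1}^{k} n_i \, (\bar{x}_i - \bar{x})^2 / (k - 1)}
         {\sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2 / (N - k)}
```

A significant F only says that at least one group mean differs, which is why follow-up pairwise comparisons (with a multiple-testing correction) are needed.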
-
Learning Data Science: Predictive Maintenance with Decision Trees - Learning Machines
"openEO is an open source, community-based API for cloud-based processing of Earth Observation data. This blog introduces the R openeo client, and demonstrates a sample analysis using the openEO Platform for processing." https://www.r-bloggers.com/2022/11/processing-large-scale-satellite-imagery-with-openeo-platform-and-r/
-
Processing large scale satellite imagery with openEO Platform and R | R-bloggers
openEO is an open source, community-based API for cloud-based processing of Earth Observation data. This blog introduces the R openeo client, and demonstrates a sample analysis using the openEO Platform for processing.
-
Whisperings in the Academy - Weird Data Science
The noblest of human endeavours is to enlighten the uninitiated consciousness; to bare its awareness before the endless and terrifying vistas that lie beyond darkness and ignorance.
In pursuit of such necessarily painful revelations the Oxford Internet Institute at the University of Oxford — the unwitting host on which the investigations here parasitise — recently hosted an inaugural Halloween lecture. This oration drew on several years of dark explorations chronicled in this blog, to inculcate into a new generation of unprepared and curious minds the horror and necessity of subjecting our reality to the insidious power of statistical science. Through what seems a dangerously careless oversight, this brief glimpse of truth was recorded and made available for posterity.
-
CodeGuessr — Andrew Healey
I recently shipped CodeGuessr. It's like GeoGuessr... but for code. Given a random code snippet, you have to guess which popular open source project it belongs to.
-
Site Update: Version 3.0
When I ported my website from Go to Rust back in 2020, I needed a library like Go's html/template to template out the HTML that my site uses. At the time there were many options I could pick from, but I ended up choosing ructe because it would compile the templates into my application binary instead of having to ship them alongside my website. This also means that the optimizer can chew through my templates and make them even faster than html/template. Native code will always be faster than interpreted code.
This worked for a while, but I started running into ergonomics problems as I continued to use ructe. The great part about ructe is that because the templates are compiled to Rust anyway, you can use any Rust logic or types you want. The horrible part about ructe is that your editor's autocomplete and type checking don't work inside the templates. Debugging compile failures of your templates requires that you understand how the generated code works. This isn't really as much of an issue as I'm making it sound, but it's a papercut nonetheless.
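To illustrate what "templates compiled into the binary" amounts to, here is a hand-written sketch of the shape such a generated function takes: an ordinary Rust function that writes markup into any `Write`. This is an illustration of the idea only, not ructe's actual generated code.

```rust
use std::io::{self, Write};

// Illustration only -- not ructe's actual generated code. A compiled template
// is, roughly, an ordinary Rust function that writes markup into any `Write`,
// so arguments are type-checked and the optimizer sees plain Rust code.
// (A real template engine would also HTML-escape interpolated values.)
fn page_html(out: &mut impl Write, title: &str, body: &str) -> io::Result<()> {
    write!(out, "<html><head><title>{}</title></head>", title)?;
    write!(out, "<body><article>{}</article></body></html>", body)
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    page_html(&mut buf, "Site Update: Version 3.0", "Hello from a compiled template!")?;
    io::stdout().write_all(&buf)
}
```

The trade-off described above follows directly from this shape: you get full Rust in templates and native-code speed, but errors surface in generated code rather than in the template file your editor understands.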
-
TWC 192: Frosting a cake without flipping the spatula
-
Cleaning up and upgrading third-party crates
Typically, you'd want a production application to use a stable version of Rust. At the time of this writing, that's Rust 1.65.0, which stabilizes a bunch of long-awaited features (GATs, let-else, MIR inlining, split debug info, etc.).
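As a point of reference, here is a minimal example of let-else, one of the features stabilized in 1.65. This snippet is my own illustration, not code from the linked post.

```rust
// let-else, stabilized in Rust 1.65: bind a refutable pattern or diverge.
fn parse_port(input: &str) -> Option<u16> {
    // Without let-else this would need a match or an if-let with an early return.
    let Ok(port) = input.trim().parse::<u16>() else {
        return None;
    };
    Some(port)
}

fn main() {
    assert_eq!(parse_port("8080"), Some(8080));
    assert_eq!(parse_port("not a port"), None);
}
```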
-
Migrating from warp to axum
Back when I wrote this codebase, warp was the best (and more or less the only) option for something relatively high-level on top of hyper.
I was never super fond of warp's model — it's a fine crate, just not for me.
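For a sense of the migration target, here is a minimal axum route and server written against the axum 0.7-era API. This is a generic sketch under that version assumption, not the actual handlers from the codebase being migrated.

```rust
use axum::{extract::Path, routing::get, Router};

// A plain async function acts as the handler; extractors like Path
// pull typed data out of the request.
async fn hello(Path(name): Path<String>) -> String {
    format!("Hello, {name}!")
}

#[tokio::main]
async fn main() {
    // Routes are declared on a Router rather than composed as warp filters.
    let app = Router::new().route("/hello/:name", get(hello));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .expect("bind address");
    axum::serve(listener, app).await.expect("serve app");
}
```

The appeal for many people is exactly this: handlers are ordinary async functions with typed arguments, instead of chains of combinators.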
-
Deploying at the edge
One thing I didn't really announce (because I wanted to make sure it worked before I did) is that I've completely migrated my website from a CDN (Content Delivery Network) to an ADN (Application Delivery Network), and that required some architectural changes.