news
Back End/Databases: pgtwin as OCF Agent, Kubernetes v1.35, MySQL vs PostgreSQL Performance, and DuckDB Considered Harmful
-
pgtwin as OCF Agent
When I was looking for a solution that could provide High Availability across two datacenters, the only option that remained viable and comprehensible to me was Corosync/Pacemaker. The reason I actually need this is that mainframe environments typically use two datacenters, since z/OS operates nicely that way. The application I had to set up is Kubernetes on GNU/Linux on Z, and since Kubernetes itself normally runs with 3 or more nodes, I had to find a different solution. I found that I could run Kubernetes against an external database with https://github.com/k3s-io/kine, and, being no DBA, I selected PostgreSQL as a first try.
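For context, k3s (which embeds kine) can be pointed at an external PostgreSQL database instead of etcd via its `datastore-endpoint` setting. A minimal sketch of the server config, assuming a hypothetical database host and credentials:

```yaml
# /etc/rancher/k3s/config.yaml -- read by k3s at startup.
# Hostname, user, password, and database name below are placeholders;
# substitute your own Pacemaker-managed PostgreSQL endpoint.
datastore-endpoint: "postgres://kineuser:kinepass@pg-ha.example.internal:5432/kine?sslmode=require"
```

With a setup like this, kine translates Kubernetes' etcd API calls into SQL against the PostgreSQL instance, which is what lets a Pacemaker-managed two-node database provide the cluster's state store.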
-
Kubernetes Blog ☛ Kubernetes v1.35: Extended Toleration Operators to Support Numeric Comparisons (Alpha)
Many production Kubernetes clusters blend on-demand (higher-SLA) and spot/preemptible (lower-SLA) nodes to optimize costs while maintaining reliability for critical workloads. Platform teams need a safe default that keeps most workloads away from risky capacity, while allowing specific workloads to opt-in with explicit thresholds like "I can tolerate nodes with failure probability up to 5%".
Today, Kubernetes taints and tolerations can match exact values or check for existence, but they can't compare numeric thresholds. You'd need to create discrete taint categories, use external admission controllers, or accept less-than-optimal placement decisions.
In Kubernetes v1.35, we're introducing Extended Toleration Operators as an alpha feature. This enhancement adds Gt (Greater Than) and Lt (Less Than) operators to spec.tolerations, enabling threshold-based scheduling decisions that unlock new possibilities for SLA-based placement, cost optimization, and performance-aware workload distribution.
-
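A sketch of what such a toleration might look like, assuming a hypothetical `example.com/failure-probability` taint key on spot nodes and assuming the alpha `Lt` operator tolerates taints whose numeric value is below the stated threshold (this is an alpha feature, so the exact semantics may change):

```yaml
# Hypothetical Pod that opts in to spot nodes with failure probability under 5%.
# The taint key and values are illustrative, not from the source article.
apiVersion: v1
kind: Pod
metadata:
  name: spot-tolerant-app
spec:
  tolerations:
  - key: "example.com/failure-probability"
    operator: "Lt"        # alpha in v1.35: numeric comparison instead of Equal/Exists
    value: "5"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```

Before this feature, expressing "up to 5%" would have required discrete taint categories or an external admission controller, as the article notes.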
Igor Roztropiński ☛ MySQL vs PostgreSQL Performance: throughput & latency, reads & writes
MySQL, the Dolphin, and Postgres, the Elephant, are two of the best and most widely used open-source databases. They are often compared across multiple angles: supported features, SQL dialect differences, architecture & internals, resource utilization and, of course, performance. Today, we will jump into performance as deeply and broadly as possible - running many (17) test cases with all kinds of queries and workloads, using a few tables to simulate various scenarios that occur most often in the real world, and measuring both throughput & latency. Let's then start to get the answer: [...]
-
[Old] Remy Wang ☛ DuckDB Considered Harmful
So why shouldn’t you compare to DuckDB? Because scientific progress is not a straight line. If we require every next paper to outperform the previous one, we are forcing ourselves into a greedy algorithm that will get trapped in local optima. The relational model itself took decades to mature and become competitive with earlier navigational systems, and it took around 30 years for deep learning to be vindicated. Of course, ideas that can be immediately applied to current systems are valuable and impactful, but revisiting core principles and restarting from scratch is more likely to lead to breakthroughs. The latter is particularly challenging in the context of database research, because the field is so mature that system architecture has converged to a few highly optimized key components, and any improvement to one component will receive diminishing returns. But if we want to bring vitality back to the field, we have to invest in risky ideas that do not pay off immediately.