Servers: DNS, HPC, Kubernetes, and Databases
-
How to set up your own open source DNS server
The Domain Name System (DNS) associates a domain name (like example.com) with an IP address (like 93.184.216.34). This is how your web browser knows where in the world to look for data when you enter a URL or when a search engine returns a URL for you to visit. DNS is a great convenience for internet users, but it's not without drawbacks. For instance, paid advertisements appear on web pages because your browser naturally uses DNS to resolve where those ads "live" on the internet. Similarly, software that tracks your movements online is often enabled by services resolved over DNS. You don't want to turn off DNS entirely, because it's very useful. But you can run your own DNS service so you have more control over how it's used.
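Under the hood, any application can ask the operating system's resolver, which in turn queries DNS, to perform this name-to-address mapping. Here is a minimal Python sketch using only the standard library (the hostname passed in is up to you):

```python
import socket

def resolve_ipv4(hostname: str) -> list[str]:
    """Return the sorted, deduplicated IPv4 addresses the resolver reports for a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr is (ip, port).
    return sorted({info[4][0] for info in infos})

# "localhost" resolves without touching the network, so it makes a safe demo.
print(resolve_ipv4("localhost"))
```

Running it with a public hostname such as example.com shows the address your configured DNS server hands back, which is exactly the answer a self-hosted DNS service would let you control.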
I believe it's vital that you run your own DNS server so you can block advertisements and keep your browsing private, away from providers attempting to analyze your online interactions. I've used Pi-hole in the past and still recommend it today. However, lately I've been running the open source project AdGuard Home on my network, and I've found that it has some unique features worth exploring.
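The core trick Pi-hole and AdGuard Home share is simple: when a client asks for a domain on a blocklist, the server answers with an unroutable address instead of forwarding the query upstream. A toy sketch of that decision logic (the blocklist entries and the stand-in upstream table are invented for illustration):

```python
# Hypothetical blocklist entries; real servers load millions of these from feeds.
BLOCKLIST = {"ads.example.net", "tracker.example.org"}

def answer(domain: str, upstream: dict[str, str]) -> str:
    """Return a sinkhole address for blocked domains, otherwise the upstream answer."""
    if domain in BLOCKLIST:
        return "0.0.0.0"  # the browser gets an address it can't fetch anything from
    return upstream.get(domain, "NXDOMAIN")

# A dictionary standing in for a real upstream resolver.
upstream = {"example.com": "93.184.216.34"}
print(answer("ads.example.net", upstream))  # 0.0.0.0
print(answer("example.com", upstream))      # 93.184.216.34
```

Because every device on your network points at your server, one blocklist protects all of them at once.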
AdGuard Home
Of the open source DNS options I've used, AdGuard Home is the easiest to set up and maintain. You get many DNS resolution solutions, such as DNS over TLS, DNS over HTTPS, and DNS over QUIC, in a single project.
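For example, AdGuard Home's configuration file (AdGuardHome.yaml) lets you list encrypted upstreams side by side, using a URL scheme prefix to select the protocol. The fragment below is illustrative only; the resolver hostnames are placeholders, so check the project's documentation for the current syntax and choose upstreams you trust:

```yaml
dns:
  upstream_dns:
    - tls://dns.example-resolver.com              # DNS over TLS
    - https://dns.example-resolver.com/dns-query  # DNS over HTTPS
    - quic://dns.example-resolver.com             # DNS over QUIC
```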
-
HPC and me
Recently I found that quite a few of my Twitter and Mastodon followers work in high-performance computing (HPC). At first I was surprised, because I’m not an HPC person, even if I love high-performance computers. Then I realized that there are quite a few overlaps, and one of my best friends is also deeply involved in HPC. My work, logging, is also a fundamental part of HPC environments.
Let’s start with a direct connection to HPC: one of my best friends, Gabor Samu, works in HPC. He is a product manager for one of the leading commercial HPC workload managers, IBM Spectrum LSF Suites. I often interact with his posts on both Twitter and Mastodon.
I love high-performance computers and non-x86 architectures. Of course, high-performance computers aren’t the exclusive domain of HPC today. Just think of web and database servers, CAD and video-editing workstations, AI, and so on. But there is definitely an overlap. Some of the fastest HPC systems are built around non-x86 architectures. You can find many of them on the TOP500 list. Arm and POWER systems have even made it into the top 10, and occupied the #1 position for years.
-
Kubernetes is the key to cloud, but cost containment is critical
What’s driving the growth of open source container orchestrator Kubernetes? A study by Pepperdata shows how companies are using K8s and the challenges they face in getting a handle on cloud costs.
-
Synchronize databases more easily with open source tools
Change Data Capture (CDC) uses server agents to record the insert, update, and delete activity applied to database tables. CDC provides details of those changes in an easy-to-use relational format. For each modified row, it captures the column information and the metadata needed to apply the change to the target environment. A change table that mirrors the column structure of the tracked source table stores this information.
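As a sketch of the idea (not SeaTunnel's actual API), a change table can be modeled as a log whose rows carry the same columns as the source row plus operation metadata:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeLog:
    """A toy change table: each entry mirrors the source columns plus op metadata."""
    entries: list[dict] = field(default_factory=list)

    def capture(self, operation: str, row: dict) -> None:
        # The operation name and a sequence number travel with the full column
        # values, so a consumer can later replay the change against a target table.
        self.entries.append({"op": operation, "seq": len(self.entries), **row})

log = ChangeLog()
log.capture("insert", {"id": 1, "name": "alice"})
log.capture("update", {"id": 1, "name": "alicia"})
log.capture("delete", {"id": 1, "name": "alicia"})
print(log.entries[1]["op"])  # update
```

Real CDC implementations read this stream from the database's transaction log rather than from application code, but the shape of the captured record is the same.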
Capturing change data is no easy feat. However, the open source Apache SeaTunnel project, a data integration platform, provides CDC functionality with a design philosophy and feature set that make these captures possible, with features above and beyond existing solutions.
CDC usage scenarios
A classic use case for CDC is data synchronization or backup between heterogeneous databases. In one scenario, you might synchronize data between MySQL, PostgreSQL, MariaDB, and similar databases. In another, you could synchronize the data to a full-text search engine. With CDC, you can also create backups of data based on what CDC has captured.
When designed well, a data analysis system obtains data for processing by subscribing to changes in the target data tables. There's no need to embed the analysis process into the existing system.
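A subscriber can keep a replica in step by replaying each captured event in order. A minimal illustration, where the event format (an `op` field plus the row keyed by a primary key `id`) is invented for the example:

```python
def apply_change(replica: dict, event: dict) -> None:
    """Replay one captured change (keyed by primary key 'id') onto a replica table."""
    key = event["id"]
    if event["op"] == "delete":
        replica.pop(key, None)
    else:
        # Insert and update both become an upsert on the replica.
        replica[key] = {k: v for k, v in event.items() if k != "op"}

replica: dict[int, dict] = {}
events = [
    {"op": "insert", "id": 1, "name": "alice"},
    {"op": "update", "id": 1, "name": "alicia"},
    {"op": "delete", "id": 1},
]
for e in events:
    apply_change(replica, e)
print(replica)  # {} -- the row was inserted, updated, then deleted
```

Because the subscriber only consumes the change stream, the source database and the existing application code never need to know the analysis system exists.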
-
mysqldump: Couldn’t execute ‘FLUSH TABLES’: Access denied; you need (at least one of) the RELOAD or FLUSH_TABLES privilege(s) for this operation (1227)
This article is a copy/paste/modify of mysqldump: Error: ‘Access denied; you need (at least one of) the PROCESS privilege(s) for this operation’ when trying to dump tablespaces.
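As the error message itself suggests, the usual fix is to give the dump user the RELOAD privilege (or, on recent MySQL 8.0 releases, the FLUSH_TABLES dynamic privilege). A sketch of the grant, assuming a hypothetical backup account named 'backup'@'localhost':

```sql
-- Grant the classic static privilege that covers FLUSH TABLES:
GRANT RELOAD ON *.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```

After re-running mysqldump as that user, the FLUSH TABLES step should succeed. Granting only the narrower FLUSH_TABLES dynamic privilege is preferable on versions that support it, since RELOAD also permits other FLUSH operations.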