
Linux Journal


Safeguarding Linux Landscapes: Backup and Restore Strategies

Thursday 14th of September 2023 04:00:00 PM
by George Whittaker

Introduction

In the dynamic world of Linux environments, safeguarding data stands paramount. Whether you manage a personal machine or a fleet of servers, a well-understood backup and restore strategy can be a game-changer. This article surveys the multifaceted avenues of Linux backup and restore, explaining why a fortified plan matters and how it keeps your data secure and retrievable.

Understanding the Linux File System

Before delving into the intricacies of backup and restore strategies, it's vital to understand the Linux file system. Linux supports several file systems such as ext4, XFS, and Btrfs, each boasting unique features that govern how data is stored and retrieved. Appreciating the nuances of these file systems can significantly influence your backup and restore strategy, rendering it more robust and suited to your specific needs.

Backup Strategies

Protection starts with a proper backup strategy. Let's explore various backup avenues available in Linux environments.

Manual Backup

Utilizing Basic Linux Commands

Linux offers potent commands like cp, tar, and rsync to facilitate manual backups. These commands are versatile, allowing users to specify exactly what to back up.

Pros:

  • Full control over the backup process
  • No additional software required

Cons:

  • Requires good knowledge of Linux commands
  • Time-consuming and prone to human errors
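
As a sketch, assuming a home directory at /home/alice and a mounted backup disk at /mnt/backup (both hypothetical paths), a manual backup might look like this:

```shell
# Archive the directory into a compressed, timestamped tarball
# (-c create, -z gzip, -p preserve permissions, -f output file):
tar -czpf /mnt/backup/home-$(date +%F).tar.gz -C /home alice

# Alternatively, mirror it with rsync (-a preserves permissions and
# timestamps; --delete drops files that vanished from the source):
rsync -a --delete /home/alice/ /mnt/backup/alice/
```

The tar approach produces self-contained snapshots; the rsync approach keeps one up-to-date mirror and transfers only what changed.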

Automated Backup

Cron Jobs

Cron jobs make it possible to schedule backups at regular intervals, automating the backup process and reducing the possibility of human error.
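
As an illustration, a crontab entry (added with `crontab -e`; the paths here are hypothetical) that runs an rsync backup every night at 2:30 AM might look like:

```shell
# m   h   dom mon dow  command
30    2   *   *   *    rsync -a --delete /home/alice/ /mnt/backup/alice/
```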

Linux Backup Solutions

Bacula and Amanda stand tall as holistic solutions offering a range of features to facilitate automated backups.

Pros:

  • Regular automatic backups
  • Comprehensive solutions with detailed reporting

Cons:

  • Can be complex to set up initially
  • Potential overhead on system resources

Restore Strategies

Having a backup is half the journey; being adept at restoration completes it. Let’s delineate various restoration strategies pertinent to Linux environments.

Manual Restore

Restoring with Linux Commands

Using Linux commands for restoration carries the same pros and cons as using them for backups, offering control but requiring expertise.
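
As a sketch, assuming a hypothetical tar archive at /mnt/backup/home-2023-09-14.tar.gz and an rsync mirror at /mnt/backup/alice/, restoration could look like:

```shell
# Unpack the archive; -C /home restores files under their original parent:
sudo tar -xzpf /mnt/backup/home-2023-09-14.tar.gz -C /home

# Or copy files back from an rsync mirror:
sudo rsync -a /mnt/backup/alice/ /home/alice/
```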


Navigating the Landscape of Linux File System Types

Tuesday 12th of September 2023 04:00:00 PM
by George Whittaker

Introduction

In the Linux environment, the file system acts as a backbone, orchestrating the systematic storage and retrieval of data. It is a hierarchical structure that outlines how data is organized, stored, and accessed on a storage device. Understanding the different Linux file system types can profoundly aid both developers and administrators in optimizing system performance and ensuring data security. This article delves deep into the intricate world of Linux file system types, tracing their evolutionary history and dissecting their features to provide a roadmap for selecting the appropriate file system for your needs.

History of Linux File Systems

Early Adventures in Linux File Systems

In the early 90s, Linux used relatively rudimentary file systems such as the one borrowed from Minix, which soon gave way to the extended file systems ext and ext2. These were foundational in framing the modern Linux file systems we see today.

The Journey from ext2 to ext4

The extended family of file systems transitioned from ext2 to ext3, introducing journaling features, and eventually culminated in the development of ext4, which brought forth substantial improvements in performance and storage capabilities.

Understanding Linux File System Types

Dive into the fascinating world of Linux file systems, each characterized by its unique features and functionalities that cater to various demands and preferences.

The Extended Family
  • ext2

    • Features and Limitations: Known for its simplicity and robustness, ext2 lacks journaling capabilities, which can be a drawback in data recovery scenarios.
    • Use Cases: Ideal for USB drives and flash memory where journaling isn't a priority.
  • ext3

    • Features and Limitations: Building upon ext2, ext3 introduced journaling capabilities, improving data integrity yet lagging in performance compared to its successors.
    • Use Cases: Suitable for systems requiring data reliability without the need for top-tier performance.
  • ext4


How to Change the Hostname in Debian 12 BookWorm

Tuesday 5th of September 2023 04:00:00 PM
by George Whittaker

Introduction

In the vast realm of networked computers, each device needs a unique identifier—a name that allows it to be distinguishable from the crowd. This unique identifier is known as the "hostname." Whether you are working in a large corporate network or simply tinkering with a personal Linux box, you might find yourself needing to change this hostname at some point. This comprehensive guide walks you through the process of changing the hostname in Debian 12 BookWorm, one of the latest iterations of the popular Linux distribution Debian.


Prerequisites

Before diving into the nitty-gritty, ensure you have the following:

  1. Access to a Terminal: You can access the terminal through your GUI or via SSH if you're working remotely.
  2. Superuser or sudo Privileges: Administrative access is necessary to make system-wide changes.
  3. Basic Understanding of Linux Command Line: Knowing how to navigate the terminal will be beneficial.
  4. Installed Instance of Debian 12 BookWorm: The instructions are tailored for this specific version.

Key Terminology

To make sure we're on the same page, let's clarify some terminology:

  1. Hostname: A label assigned to a machine on a network.
  2. Superuser: The administrator with full access to the Linux system.
  3. sudo: Command that allows permitted users to execute a command as a superuser.
  4. /etc/hostname and /etc/hosts: Configuration files storing hostname information.
Backup Current Settings

It's always prudent to backup important configurations before making any changes. Open the terminal and run:

sudo cp /etc/hostname /etc/hostname.bak
sudo cp /etc/hosts /etc/hosts.bak

This creates backup copies of your current hostname and hosts files.

Method 1: Using the hostnamectl Command

Step 1: Check Current Hostname

To see your current hostname, run the following command:

hostnamectl

Step 2: Change the Hostname

To change your hostname, execute:

sudo hostnamectl set-hostname new-hostname

Replace new-hostname with your desired hostname. For instance, to change the hostname to "mydebian," you'd run:

sudo hostnamectl set-hostname mydebian

Step 3: Verify the Changes

Use the hostnamectl command again to check if the hostname has been updated:

hostnamectl

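One follow-up step hostnamectl does not perform: /etc/hosts may still reference the old name, which can slow down name resolution for sudo and other tools. A sketch, using the hypothetical old name "old-hostname" and the new name "mydebian":

```shell
# Replace every occurrence of the old hostname in /etc/hosts:
sudo sed -i 's/old-hostname/mydebian/g' /etc/hosts
```

Afterwards, verify with `cat /etc/hosts` that the 127.0.1.1 line carries the new name.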

The Arch Decision: Evaluating If a Leap From Manjaro to EndeavourOS Is Right for You

Thursday 31st of August 2023 04:00:00 PM
by George Whittaker

Introduction

In the expansive universe of Linux distributions, the choice of which one to use can be overwhelming. Among the galaxies of options, two Arch-based stars have shone increasingly brightly: Manjaro and EndeavourOS. Both are rooted in the Arch Linux ecosystem, yet they cater to different kinds of users and offer unique experiences. If you're currently a Manjaro user contemplating the switch to EndeavourOS, this article aims to help you make an informed decision.

Background Information

What is Manjaro?

Manjaro is an Arch-based Linux distribution that is designed to be user-friendly and accessible. Known for its 'Install and Go' philosophy, Manjaro offers ease of use, making it suitable for Linux newcomers. It comes with a variety of desktop environments like XFCE, KDE, and GNOME, among others. Manjaro also features its own package manager, Pamac, which makes software installation a breeze. Automatic updates and built-in stability checks make it a go-to choice for those who want the power of Arch Linux without its complexities.

What is EndeavourOS?

EndeavourOS is also an Arch-based Linux distribution, but it aims to stay closer to vanilla Arch. Targeted at intermediate to advanced users, EndeavourOS offers an almost bare-bones experience with the freedom to customize your system as you see fit. While it ships with an installer, the overall setup is more hands-on than Manjaro's. It aims to provide the user with an Arch experience with minimal added features, relying mostly on the Arch User Repository (AUR) and Pacman for package management.

Comparison Criteria

To make an apples-to-apples comparison between Manjaro and EndeavourOS, we'll evaluate them based on the following criteria:

  • Ease of Installation
  • Package Management
  • Desktop Environments
  • System Performance
  • Software Availability
  • Community Support
  • Stability and Updates
Detailed Comparison

Ease of Installation

Manjaro offers an incredibly user-friendly installation process via its Calamares installer. It is mostly automated and requires only minimal user interaction.

EndeavourOS, on the other hand, offers a more hands-on installation process. Though it also offers an installer, it allows for more customization during the setup, which might be more appealing to advanced users but intimidating for beginners.

Package Management

Manjaro uses Pamac for package management, which offers a clean, easy-to-use graphical interface. It also supports AUR, enabling a wide range of software availability.


How to Set or Modify the Path Variable in Linux

Tuesday 29th of August 2023 04:00:00 PM
by George Whittaker

Introduction

The Linux command line is a powerful tool that gives you complete control over your system. But to unleash its full potential, you must understand the environment in which it operates. One crucial component of this environment is the PATH variable. It's like a guide that directs the system to where it can find the programs you're asking it to run. In this article, we will delve into what the PATH variable is, why it's important, and how to modify it to suit your needs.

What is the PATH Variable?

The PATH is an environment variable in Linux and other Unix-like operating systems. It contains a list of directories that the shell searches through when you enter a command. Each directory is separated by a colon (:). When you type in a command like ls or gcc, the system looks through these directories in the order they appear in the PATH variable to find the executable file for the command.
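
You can watch this lookup in action from any shell (the exact output will vary by system):

```shell
# Print the search list itself:
echo "$PATH"

# Ask the shell which executable a command resolves to:
command -v ls
```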

For example, if your PATH variable contains the following directories:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

and you type ls, the system will first look for the ls executable in /usr/local/sbin. If it doesn't find it there, it will move on to /usr/local/bin, and so on until it finds the executable or exhausts all directories in the PATH.

Why Modify the PATH Variable?

The default PATH variable usually works well for most users. However, there are scenarios where you might need to modify it:

  • Adding Custom Scripts: If you have custom scripts stored in a particular directory, adding that directory to your PATH allows you to run those scripts as commands from any location.

  • Software in Non-standard Locations: Some software may be installed in directories that are not in the default PATH. Adding such directories allows you to run the software without specifying its full path.

  • Productivity: Including frequently-used directories in your PATH can make your workflow more efficient, reducing the need to type full directory paths.
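
To make the first scenario concrete, here is a sketch that puts a hypothetical ~/scripts directory on the PATH and then runs a script from anywhere by name:

```shell
# Create a scripts directory and a tiny executable script in it:
mkdir -p "$HOME/scripts"
printf '#!/bin/sh\necho hello from my script\n' > "$HOME/scripts/hello"
chmod +x "$HOME/scripts/hello"

# Add the directory to PATH for this session, then call the script by name:
export PATH="$PATH:$HOME/scripts"
hello    # prints: hello from my script
```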

Temporarily Modifying the PATH Variable Using the export Command

To temporarily add a new directory to your PATH for the current session, you can use the export command as follows:

export PATH=$PATH:/new/directory/path

This modification will last until you close your terminal session.

Using the PATH=$PATH:/your/path Syntax

Alternatively, you can modify the PATH variable using the following syntax:

PATH=$PATH:/your/path


A Brief Story of Time and Timeout

Thursday 24th of August 2023 04:00:00 PM
by Nawaz Abbasi

When working in a Linux terminal, you often encounter situations where you need to monitor the execution time of a command or limit its runtime. The time and timeout commands are powerful tools that can help you achieve these tasks. In this tutorial, we'll explore how to use both commands effectively, along with practical examples.

Using the time Command

The time command in Linux is used to measure the execution time of a specified command or process. It provides information about the real, user, and system time used by the command. The real time represents the actual elapsed time, while the user time accounts for the CPU time consumed by the command, and the system time indicates the time spent by the system executing on behalf of the command.

Syntax

time [options] command

Example

Let's say you want to measure the time taken to execute the ls command:

time ls

The output will provide information like:

real    0m0.005s
user    0m0.001s
sys     0m0.003s

In this example, the real time is the actual time taken for the command to execute, while user and sys times indicate CPU time spent in user and system mode, respectively.

Using the timeout Command

The timeout command allows you to run a command with a specified time limit. If the command does not complete within the specified time, timeout will terminate it. This can be especially useful when dealing with commands that might hang or run indefinitely.

Syntax

timeout [options] duration command

Example

Suppose you want to limit the execution of a potentially time-consuming command, such as a backup script, to 1 minute:

timeout 1m ./backup.sh

If the script completes within 1 minute, the command will finish naturally. However, if it exceeds the time limit, timeout will terminate it.

By default, timeout sends the SIGTERM signal to the command when the time limit is reached. You can also specify which signal to send using the -s (--signal) option.
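
For example, to escalate to SIGKILL for a process that might ignore SIGTERM (the script name here is hypothetical):

```shell
# SIGKILL cannot be caught or ignored, so the process is stopped for sure:
timeout -s SIGKILL 30s ./stubborn-task.sh
```

When timeout has to kill the command with SIGKILL, its exit status is 137 (128 + signal number 9), versus the usual 124 for an ordinary timeout.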

Combining time and timeout

You can also combine the time and timeout commands to measure the execution time of a command within a time-constrained environment.
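
A sketch combining the two, with `sleep 10` standing in for any long-running job: cap the job at two seconds and measure how long it actually ran.

```shell
# time reports how long timeout (and the job inside it) took:
time timeout 2s sleep 10

# timeout exits with status 124 when the limit was hit:
timeout 2s sleep 10 || echo "exit status: $?"
```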


UNIX vs Linux: What's the Difference?

Tuesday 22nd of August 2023 04:00:00 PM
by George Whittaker

In the intricate landscape of operating systems, two prominent players have shaped the digital realm for decades: UNIX and Linux. While these two systems might seem similar at first glance, a deeper analysis reveals fundamental differences that have implications for developers, administrators, and users. In this comprehensive article, we embark on a journey to uncover the nuances that set UNIX and Linux apart, shedding light on their historical origins, licensing models, system architectures, communities, user interfaces, market applications, security paradigms, and more.

Historical Context

UNIX, a pioneer in the world of operating systems, emerged in the late 1960s at AT&T Bell Labs. Developed by a team led by Ken Thompson and Dennis Ritchie, UNIX was initially created as a multitasking, multi-user platform for research purposes. In the subsequent decades, commercialization efforts led to the rise of various proprietary UNIX versions, each tailored to specific hardware platforms and industries.

In the early 1990s, a Finnish computer science student named Linus Torvalds ignited the open-source revolution by developing the Linux kernel. Unlike UNIX, which was mainly controlled by vendors, Linux leveraged the power of collaborative development. The open-source nature of Linux invited contributions from programmers across the globe, leading to rapid innovation and the creation of diverse distributions, each with unique features and purposes.

Licensing and Distribution

One of the most significant differentiators between UNIX and Linux lies in their licensing models. UNIX, being proprietary, often required licenses for usage and customization. This restricted the extent to which users could modify and distribute the system.

Conversely, Linux operates under open-source licenses, most notably the GNU General Public License (GPL). This licensing model empowers users to study, modify, and distribute the source code freely. The result is a plethora of Linux distributions catering to various needs, such as the user-friendly Ubuntu, the stability-focused CentOS, and the community-driven Debian.

Kernel and System Architecture

The architecture of the kernel—the core of an operating system—plays a crucial role in defining its behavior and capabilities. UNIX systems typically employ monolithic kernels, meaning that essential functions like memory management, process scheduling, and hardware drivers are tightly integrated.

Linux also utilizes a monolithic kernel, but it introduces modularity through loadable kernel modules. This enables dynamic expansion of kernel functionality without requiring a complete system reboot. Furthermore, the collaborative nature of Linux development ensures broader hardware support and adaptability to evolving technological landscapes.


The 8 Best SSH Clients for Linux

Thursday 17th of August 2023 04:00:00 PM
by George Whittaker

Introduction

SSH, or Secure Shell, is a cryptographic network protocol for operating network services securely over an unsecured network. It's a vital part of modern server management, providing secure remote access to systems. SSH clients, applications that leverage SSH protocol, are an essential tool for system administrators, developers, and IT professionals. In the world of Linux, where remote server management is common, choosing the right SSH client can be crucial. This article will explore the 8 best SSH clients available for Linux.

The Criteria for Selection

When selecting the best SSH clients for Linux, several factors must be taken into consideration:


Performance

The speed and efficiency of an SSH client can make a significant difference in day-to-day tasks.

Security Features

With the critical nature of remote connections, the chosen SSH client must have robust security features.

Usability and Interface Design

The client should be easy to use, even for those new to SSH, with a clean and intuitive interface.

Community Support and Documentation

Available support and comprehensive documentation can be essential for troubleshooting and learning.

Compatibility with Different Linux Distributions

A wide compatibility ensures that the client can be used across various Linux versions.

The 8 Best SSH Clients for Linux

OpenSSH

Overview

OpenSSH is the most widely used SSH client and server system. It’s open-source and found in most Linux distributions.

Key Features

  • Key management
  • SCP and SFTP support
  • Port forwarding
  • Strong encryption
Installation Process

OpenSSH can be installed using package managers like apt-get or yum.

Pros and Cons

Pros:

  • Highly secure
  • Widely supported
  • Flexible

Cons:

  • Can be complex for beginners
PuTTY

Overview

PuTTY is a free and open-source terminal emulator. It’s known for its simplicity and wide range of features.

Key Features

  • Supports SSH, Telnet, rlogin
  • Session management
  • GUI-based configuration
Installation Process

PuTTY can be installed from the official website or through Linux package managers.

Pros and Cons

Pros:

  • User-friendly
  • Extensive documentation



Linux Containers Unleashed: A Comprehensive Guide to the Technology Revolutionizing Modern Computing

Tuesday 15th of August 2023 04:00:00 PM
by George Whittaker

Introduction

Definition of Linux Containers

Linux Containers (LXC) are a lightweight virtualization technology that allows you to run multiple isolated Linux systems (containers) on a single host. Unlike traditional virtual machines, containers share the host system's kernel, providing efficiency and speed.

Brief History and Evolution

The concept of containerization dates back to the early mainframes, but it began to take a recognizable form with the advent of chroot in Unix in 1979. The Linux Containers (LXC) project, started in 2008, built on kernel features such as namespaces and cgroups and laid the groundwork for the popular tools we use today like Docker and Kubernetes.

Importance in Modern Computing Environments

Linux Containers play a vital role in modern development, enabling efficiency in resource usage, ease of deployment, and scalability. From individual developers to large-scale cloud providers, containers are a fundamental part of today's computing landscape.

Linux Containers (LXC) Explained

Architecture

Containers vs. Virtual Machines

While Virtual Machines (VMs) emulate entire operating systems, including the kernel, containers share the host kernel. This leads to a significant reduction in overhead, making containers faster and more efficient.

The Kernel's Role

The Linux kernel is fundamental to containers. It employs namespaces to provide isolation and cgroups for resource management. The kernel orchestrates various operations, enabling containers to run as isolated user space instances.
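
The namespaces a process belongs to are visible under /proc, which is a quick way to see this machinery firsthand on any Linux system:

```shell
# Each symlink names one namespace type the process is a member of:
ls -l /proc/self/ns
# Typical entries include: cgroup, ipc, mnt, net, pid, user, uts
```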

User Space Tools

Tools like Docker, Kubernetes, and OpenVZ interface with the kernel to manage containers, providing user-friendly commands and APIs.

Features

Isolation

Containers provide process and file system isolation, ensuring that applications run in separate environments, protecting them from each other.

Resource Control

Through cgroups, containers can have resource limitations placed on CPU, memory, and more, allowing precise control over their utilization.

Network Virtualization

Containers can have their own network interfaces, enabling complex network topologies and isolation.

Popular Tools

Docker

Docker has become synonymous with containerization, offering a complete platform to build, ship, and run applications in containers.


Kubernetes

Kubernetes is the de facto orchestration system for managing containerized applications across clusters of machines, providing tools for deploying applications, scaling them, and managing resources.


OpenVZ

OpenVZ is a container-based virtualization solution for Linux, focusing on simplicity and efficiency, particularly popular in VPS hosting environments.


5 Reasons To Choose Ubuntu Cinnamon Over Anything Else

Thursday 10th of August 2023 04:00:00 PM
by George Whittaker

Introduction

Ubuntu, a popular open-source operating system based on Debian, is known for its ease of use and the variety of flavors it offers. Each flavor comes with a different desktop environment and features, and one of the latest additions to this list is Ubuntu Cinnamon.

In this article, we will explore five reasons why some users might prefer Ubuntu Cinnamon over other Ubuntu flavors, such as Ubuntu GNOME, Kubuntu, Xubuntu, and others.

Reason 1: User-Friendly Interface

Cinnamon Desktop Environment

Ubuntu Cinnamon leverages the Cinnamon desktop environment, initially developed for Linux Mint. Known for its traditional and intuitive design, it offers an experience that’s familiar to users migrating from other operating systems like Windows.

Ease of Use

Ubuntu Cinnamon is renowned for its simplicity and ease of use. The layout is straightforward, with a clear application menu, taskbar, and system tray. This layout helps new users adapt quickly without a steep learning curve.


Compared to GNOME’s more minimalistic approach or KDE's feature-rich environment, Cinnamon hits a sweet spot of being both functional and not overly complex. Its usability strikes a chord with both newbies and seasoned Linux users.

Visual Appeal

The visual aesthetics of Ubuntu Cinnamon, with its clean lines and modern look, can be appealing to many users. The default themes are both elegant and eye-pleasing, without being distracting.

Reason 2: Performance Efficiency

System Requirements

One of Ubuntu Cinnamon's strengths is its ability to run smoothly on a wide range of hardware configurations, from older machines to the latest PCs. It consumes less memory compared to some other Ubuntu flavors, providing a responsive experience even on limited resources.

Speed and Responsiveness

Ubuntu Cinnamon is known for its speed and quick response times. The Cinnamon desktop environment is lighter, and users often report faster boot times and overall system responsiveness.


When compared to other desktop environments like KDE, which might require more system resources, Ubuntu Cinnamon's efficiency becomes evident, making it a great choice for performance-conscious users.

Reason 3: Customization Flexibility

Cinnamon allows for extensive customization. From the panel layout to the window behaviors, almost everything can be tweaked to fit personal preferences.


How to Count Files in a Directory in Linux?

Tuesday 8th of August 2023 04:00:00 PM
by George Whittaker

Introduction

File counting in a directory is a common task that many users might need to perform. It could be for administrative purposes, understanding disk usage, or organizing files in a systematic manner. Linux, an open-source operating system known for its powerful command-line interface, offers multiple ways to accomplish this task. In this article, we'll explore various techniques to count files in a directory, catering to both command-line enthusiasts and those who prefer graphical interfaces.


Prerequisites

Before proceeding, it is essential to have some basic knowledge of the command line in Linux. If you're new to the command line, you might want to familiarize yourself with some introductory tutorials. Here's how you can get started:

  • Accessing the Terminal: Most Linux distributions provide a terminal application that you can find in the Applications menu. You can also use shortcut keys like Ctrl+Alt+T in some distributions.

  • Basic Command Line Skills: Understanding how to navigate directories and basic command usage will be helpful.

Using the ‘ls’ Command and Piping with ‘wc’

The ‘ls’ Command

The ls command in Linux is used to list files and directories. You can use it with the wc command to count files.

Counting Files with ‘ls’ and ‘wc’

You can count files in a directory by using the following command:

ls -1 | wc -l

Here, ls -1 lists the files in a single column, and wc -l counts the lines, effectively giving you the number of files.
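
One caveat worth knowing: ls skips hidden files (dotfiles) by default. To include them in the count, add -A, which lists hidden entries but excludes the . and .. pseudo-directories:

```shell
# Count all entries, including hidden files:
ls -1A | wc -l
```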


Example

In your home directory, you can run:

cd ~
ls -1 | wc -l

Utilizing the ‘find’ Command

The ‘find’ Command

find is a powerful command that allows you to search for files and directories. You can use it to count files as well.

Counting Files with ‘find’

To count all the files in the current directory and its subdirectories, use:

find . -type f | wc -l


Example

To count only text files in a directory, you can use:

find . -name "*.txt" -type f | wc -l
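
Unlike ls, find recurses into subdirectories by default. To restrict the count to the current directory only, add -maxdepth 1:

```shell
# Count regular files in the current directory, ignoring subdirectories:
find . -maxdepth 1 -type f | wc -l
```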

Implementing the ‘tree’ Command

Introduction to ‘tree’

The tree command displays directories as trees, with directory paths as branches and filenames as leaves.


Installation

If ‘tree’ is not installed, you can install it using:

sudo apt-get install tree   # Debian/Ubuntu
sudo yum install tree       # RedHat/CentOS


Add a User to sudo Group in Debian 12 Linux

Thursday 3rd of August 2023 04:00:00 PM
by George Whittaker

Introduction

In Linux systems, including Debian 12, the sudo group grants users the ability to execute administrative commands. This provides them with the privileges to install, update, and delete software, modify system configurations, and more.

Administrative permissions are vital for maintaining and controlling the operating system. They allow you to perform tasks that regular users cannot, ensuring security and overall system health.

This article is intended for system administrators, advanced users, or anyone responsible for managing Debian 12 systems.

Administering sudo permissions must be done with care. Inappropriate use of sudo can lead to system vulnerabilities, damage, or data loss.

Prerequisites Debian 12 System Requirements

Ensure that you have Debian 12 installed with the latest updates.

Necessary Permissions

You must have root or sudo access to modify user groups.

How to Open a Terminal Window

Press "Ctrl + Alt + T" or search for "Terminal" in the application menu.

Understanding the sudo Group

The sudo group allows users to execute commands as a superuser or another user. It promotes better security by limiting root access. However, misuse can lead to system instability. Root has unlimited access, while sudo provides controlled administrative access.

Identifying the User

List Existing Users

cut -d: -f1 /etc/passwd

Select the User

Choose the username you wish to add to the sudo group.

Check Existing sudo Group Membership

groups <username>

Adding the User to the sudo Group

Command-line Method

Open a Terminal

Start by opening the terminal window.

Switching to Root User

su -

Using the usermod Command

usermod -aG sudo <username>

Verifying the Addition

groups <username>

Graphical User Interface (GUI) Method
  1. Open Users and Groups management.
  2. Find the user, select Properties, and check the "sudo" box.
  3. Confirm and apply changes.

Troubleshooting

If errors occur, consult system logs, or use:

journalctl -xe

Reverting the Change

Remove the user from the sudo group using:

gpasswd -d <username> sudo

Further Resources

Check man pages, forums, or official Debian documentation.


Organizing Secure Document Collaboration: How to Install ONLYOFFICE DocSpace Server on Linux

Tuesday 1st of August 2023 04:00:00 PM
by George Whittaker

Introduction

Nowadays, online document collaboration is a must for everyone. Almost every day, you need to co-edit documents with your teammates and work on office files with a variety of external users.

Keeping this in mind, the open-source project ONLYOFFICE released DocSpace, a solution that connects people and files and takes document collaboration to the next level. Let's look at its features and installation options.

Key features

ONLYOFFICE DocSpace is intended to improve collaboration on documents with the various people you need to interact with, for example, your colleagues, teammates, customers, partners, contractors, sponsors, etc.

The platform comes with integrated online viewers and editors allowing you to work with files of multiple formats, including text docs, digital forms, sheets, presentations, PDFs, e-books, and multimedia.


Rooms

ONLYOFFICE DocSpace provides a room-based environment which allows organizing a clear file structure depending on your needs or project goals. DocSpace rooms are group spaces with a pre-set access level, ensuring quick file sharing and avoiding unnecessary repeated actions.

Currently, two types of rooms are available:

  • Collaboration rooms to co-author docs, track changes and communicate in real time.
  • Custom rooms for any custom purpose, for example, to request document review or comments, or share any content for viewing only.

In the future releases, the ONLYOFFICE developers are going to add further room types such as form filling rooms and private rooms for end-to-end encrypted collaboration.

User roles

Flexible access permissions allow you to fine-tune the access to the whole space or separate rooms. Available actions with files in a room depend on the given role.


Mount Drives with Ease: A Guide to Automounting in Linux GUI and CLI

Thursday 27th of July 2023 04:00:00 PM
by George Whittaker

Understanding how to efficiently automate tasks on Linux can significantly simplify your daily operations. One such routine task is mounting drives, which can be performed automatically, saving you precious time. If you're a GNOME user, you will be pleased to know that this interface makes auto-mounting drives particularly effortless. By following the steps outlined below, you'll be on your way to becoming proficient at auto-mounting drives on Linux with GNOME in no time.

Why Automount?

Before we delve into the process, it's important to comprehend why automounting is a handy feature. Normally, when a storage drive is connected to your Linux system, it does not become instantly accessible. You must manually mount the drive every time you boot up. Automounting eliminates this hassle by ensuring the drive is automatically accessible when the system starts. Now that you know why this is crucial, let's delve into the process.

Getting Started: Install Disks Utility

If you're a GNOME desktop user, you're already equipped with a built-in utility called 'Disks'. If not, don't worry, installing it is easy:

  1. Open your terminal.
  2. Type the following command: sudo apt-get install gnome-disk-utility.
  3. Provide your password when prompted, and hit enter.
  4. Allow the installation process to complete.

Now, you are ready to use the 'Disks' utility, the key tool to automounting drives on your Linux system.

A Step-by-step Guide to Automount Drives with GNOME

Now, let's dive into the process of setting up the automount feature.

Launch the Disks Utility

Open the 'Disks' utility from your GNOME desktop's menu. In the left panel, you'll see a list of drives attached to your system. Choose the one you wish to automount.

Adjust Mount Options

Next, locate and click the 'additional partition options' button, represented by two gears under the Volumes section. Select 'Edit Mount Options' from the drop-down menu.

Set Automount Preferences

By default, the 'User Session Defaults' option is turned on. Turn it off to manually set your preferences. Now, tick the 'Mount at system startup' checkbox to ensure that the drive mounts automatically at boot. Additionally, you might want to select the 'Show in user interface' option for the drive to be visible in the file manager.

Save Changes and Test

After setting your preferences, click 'OK'. A prompt will request your password to authenticate changes. Provide it, then restart your computer to test if the drive mounts automatically.
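
Behind the scenes, the Disks utility records these choices in /etc/fstab. As a hedged sketch (the UUID and mount point below are hypothetical examples, not values from your system), the result looks something like this:

```shell
# List partitions and their UUIDs first:
lsblk -f
# A resulting /etc/fstab entry might look like the following line
# (UUID and mount point are hypothetical examples):
# UUID=1234-ABCD  /mnt/data  ext4  defaults,nofail  0  2
```

The `nofail` option is a common safeguard so boot does not hang if the drive is absent.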

Go to Full Article

A Comprehensive Guide to Using PuTTY for SSH into Linux

Tuesday 25th of July 2023 04:00:00 PM
by George Whittaker

Whether you're an experienced developer or a beginner trying to establish a secure connection between your computer and a remote Linux server, PuTTY is a tool you can rely on. Let's delve into understanding how to utilize PuTTY to Secure Shell (SSH) into a Linux machine from a Windows operating system.

Introduction to PuTTY

PuTTY is an open-source, free SSH client for Windows. It enables users to remotely access computers over networks and run commands as if they were sitting in front of the terminal. It's a versatile tool that's widely used in network administration, software development, and other IT-related professions.

Downloading and Installing PuTTY

Getting started with PuTTY is straightforward. Head over to the official PuTTY download page and select the appropriate version for your Windows OS. It's typically best to choose the latest stable version. After downloading the installer, run it, and follow the prompts to successfully install PuTTY on your machine.

Configuring PuTTY for SSH Connections

Before initiating an SSH connection, you need to gather some vital information: the IP address or hostname of the Linux server you're connecting to, the port number, and your username.

Open PuTTY and you'll see a configuration window. Under "Session," in the "Host Name (or IP address)" field, type the IP address or hostname of your Linux server. Ensure the "Port" field is set to 22, which is the default SSH port.

Select SSH under "Connection type" and then move on to the "Saved Sessions" field. Input a name for this connection configuration for future use. Once done, click "Save" to keep these settings. This way, you won't need to input these details every time you want to establish a connection.

Initiating the SSH Connection

With your session saved, you're ready to connect. Select your saved session and click "Open." A new window with a console interface will open, and a prompt will ask for your username. Input the username for your Linux server. Hit "Enter," and you'll be asked for your password. Type in your password and hit "Enter" again. Remember, the cursor won't move while you're typing your password; this is a standard security feature.

Dealing with PuTTY Security Alerts

The first time you establish a connection, PuTTY will display a security alert to confirm the server's authenticity. This alert safeguards against potential man-in-the-middle attacks. PuTTY will show the server's SSH key fingerprint, which you should compare with the fingerprint of your Linux server. If the fingerprints match, click "Yes" to add the server's host key to PuTTY's cache. If the alert pops up in subsequent sessions, there's a possibility your server's security may have been compromised.
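
To perform that comparison, you can print the host key fingerprint directly on the Linux server. The key file path below is a common default and may differ on your distribution:

```shell
# Show the fingerprint of the server's Ed25519 host key for comparison
# with the fingerprint PuTTY displays in its security alert.
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```

If the server uses an RSA or ECDSA host key instead, point `ssh-keygen -lf` at the corresponding `.pub` file in /etc/ssh.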

Go to Full Article

Running Multiple Linux Commands Simultaneously

Thursday 20th of July 2023 04:00:00 PM
by George Whittaker

Understanding how to execute multiple commands at once in Linux can significantly improve your efficiency and productivity. This article will guide you through various ways you can run multiple Linux commands in a single line and even how to automate repetitive tasks.

Understanding the Basics

Before delving into advanced techniques, you should familiarize yourself with the command line or Terminal, Linux's powerful tool. Here, you can perform tasks by typing a sequence of commands. While it may seem daunting at first, learning to use it can open up a new world of efficiency and productivity.

Running Commands Consecutively

If you want to run multiple commands consecutively, i.e., run the next command after the previous one finishes, use the semicolon (;). For instance, command1 ; command2 ; command3 will execute command1, wait for it to finish, and then execute command2 and so on.
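
A quick illustration: the semicolon chains commands so each runs after the previous one finishes, whether or not it succeeded:

```shell
# 'false' exits with an error, but the next command still runs,
# because ';' ignores the exit status of the previous command.
false ; echo "this still runs"
```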

Executing Commands in Parallel

To run commands simultaneously or in parallel, use the ampersand (&). The ampersand sends the preceding command to the background, allowing the next command to start immediately. For instance, command1 & command2 runs command1 in the background while command2 runs in the foreground, so both execute at the same time.
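
As a sketch, two background-capable tasks started with '&' overlap in time, and `wait` blocks until the background jobs complete:

```shell
# Both sleeps run at once: the first is sent to the background by '&',
# the second runs in the foreground, so total time is ~2s, not ~4s.
sleep 2 & sleep 2
wait   # block until any remaining background jobs finish
echo "both finished"
```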

Using the Logical Operators

You can also employ logical operators (&& and ||) to run commands based on the success or failure of the previous command. The '&&' operator will execute the next command if the previous one succeeds. For instance, command1 && command2 will only execute command2 if command1 is successful. Conversely, the '||' operator will execute the next command only if the previous one fails.
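
A minimal sketch of both operators (the directory path is an example):

```shell
# '&&' runs the right-hand command only if the left one succeeds;
# '||' runs it only if the left one fails.
mkdir -p /tmp/demo && echo "directory ready"
ls /no/such/path 2>/dev/null || echo "listing failed, running fallback"
```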

Grouping Commands

If you have a group of commands that you want to execute as a unit, you can use parentheses, which run the group in a subshell. For example, (command1 ; command2) & command3 runs command1 followed by command2 sequentially in a background subshell, while command3 starts immediately in the foreground.
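
A minimal sketch of grouping: the parentheses form a subshell, and '&' after the closing parenthesis backgrounds the whole group:

```shell
# The group runs sequentially in a background subshell, while the
# third command runs immediately in the foreground.
(sleep 1 ; echo "group finished") & echo "third command"
wait   # wait for the background group before exiting
```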

Utilizing Command Line Pipes

Pipes are an invaluable tool when you want to pass the output of one command as the input to another. You can do this by using the vertical bar (|). For instance, command1 | command2 would pass the output of command1 as input to command2.
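
For example, piping a list of words through `sort` and `head` picks out the alphabetically first entry:

```shell
# Each '|' feeds the previous command's output into the next command.
printf 'cherry\napple\nbanana\n' | sort | head -n 1   # prints "apple"
```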

Automating Repetitive Tasks

If you frequently execute a particular set of commands, you can write a simple bash script to automate the process. All you have to do is write the commands in a text file and save it with a .sh extension. For example, you can create a script file and write:
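
The sample script itself is cut off above; as a hedged sketch (the file name backup.sh and the paths are hypothetical), such a script might look like:

```shell
#!/bin/bash
# backup.sh (hypothetical name): archive a directory and report the result.
src="$HOME/Documents"                 # hypothetical source directory
dest="/tmp/documents_backup.tar.gz"   # hypothetical archive path
tar -czf "$dest" "$src" 2>/dev/null \
  && echo "backup written to $dest" \
  || echo "backup failed"
```

Make it executable with chmod +x backup.sh and run it with ./backup.sh.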

Go to Full Article

Running HIP VPLS on a NanoPI R2S

Tuesday 11th of July 2023 04:00:00 PM
by Dmitriy Kuptsov Introduction

In our previous article we demonstrated a working prototype of Host Identity Based Virtual Private LAN Service, or HIP-VPLS, using the Mininet framework. Here we are going to demonstrate how to deploy this system on real hardware, using the NanoPi R2S as the platform for HIP-VPLS. As a reminder, Virtual Private LAN Services (VPLS) provide the means for building Layer 2 communication on top of an existing IP network. VPLS can be built using various approaches; however, when building a production-grade VPLS solution, one needs a clear picture of how aspects such as security, mobility, and L2 issues will be solved.

Host Identity Protocol (HIP) was originally designed to split the dual role of IP addresses. In other words, HIP is a Layer 3.5 solution that sits between the IP and transport layers. HIP uses hashes of public keys as identifiers. These identifiers, or Host Identity Tags (HITs), are exposed to the transport layer and never change (strictly speaking, they might change if the system administrator decides to rotate the RSA or ECDSA key pairs, for instance, but that happens rarely). On the other hand, HIP uses routable IP addresses (either IPv4 or IPv6) as locators, which are used to deliver the HIP and IPSec packets between the endpoints. Overall, to identify each other and exchange secret keys, HIP relies on a 4-way handshake (also known as the HIP base exchange, or HIP BEX for short). During the BEX, peers negotiate a set of cryptographic algorithms to be used, identify each other (since HITs are permanent and bound to public keys, HIP can employ a simple HIT-based firewall to filter out untrusted connections), exchange keys (HIP can use the Diffie-Hellman and Elliptic Curve Diffie-Hellman algorithms), and even protect against Denial of Service attacks using computational puzzles (these are based on cryptographic hash functions and the ability of peers to find collisions in hash functions; the complexity of a solution is regulated by the responder in the HIP BEX). HIP also supports mobility and uses a separate handshake procedure during which a peer notifies its counterpart about changes in its locator (that is, the IP address used for routing purposes).

Go to Full Article

Minarca: A Backup Solution You'll Love

Wednesday 7th of June 2023 04:00:00 PM
by Patrik Dufresne Introduction

Data backup is a crucial aspect of information management. Both businesses and individuals face risks such as hard drive failure, human error or cyberattacks, which can cause the loss of important data. There are many backup solutions on the market, but many are expensive or difficult to use.

That's where Minarca comes in. Developed by Patrik Dufresne of IKUS Software, Minarca is an open source backup solution designed to offer a simplified user experience while providing management and monitoring tools for system administrators. So let's take a closer look at how Minarca came about and how it compares to other solutions.

History and evolution of the project

Minarca is a data backup software, whose name comes from the combination of the Latin words "mi" and "arca", meaning "my box" or "my safe". The Minarca story begins with Rdiffweb, a web application developed in 2006 by Josh Nisly and other contributors to serve as the web interface to rdiff-backup.

In 2012, Patrik Dufresne became interested in Rdiffweb and decided to improve its graphical interface. Since then, Rdiffweb has continued to evolve, including permissions management, quota management, reporting, statistical analysis, notifications and LDAP integration. However, Rdiffweb has remained a tool for technically competent people who are able to configure an SSH server, secure it and install rdiff-backup on all the machines to be backed up from the command line.

It was with the goal of making data backup more accessible to less technical users that the development of Minarca began in 2014, building on the work done in Rdiffweb. The goal was to provide a fully integrated, turnkey, easy-to-use solution.

Since its inception, Minarca has gone through several versions, including an early version of the agent written in Java for Linux and Windows. In 2020, the agent was rewritten in Python to better support the Linux, Windows, and macOS operating systems. Minarca is now a complete data backup solution that is accessible to everyone, regardless of technical skill level.

The benefits of Minarca

Comparison with Rdiffweb

Minarca is the logical continuation of the Rdiffweb web application. Developed to provide a simplified backup experience, Minarca is designed to support both administrators and users. Unlike Rdiffweb, Minarca offers rapid deployment on Linux, Windows, and macOS through the Minarca agent. It also manages storage space, simplifies SSH key exchange, and supports multiple versions of rdiff-backup simultaneously. Finally, Minarca improves security by isolating the execution of backups, enhancing the protection of sensitive data.

Go to Full Article

Illuminating Your Console: Enhancing Your Linux Command Line Experience with ccat

Wednesday 31st of May 2023 04:00:00 PM
by George Whittaker Introducing ccat

ccat stands for "colorized cat." It's a simple yet powerful tool that, like the traditional cat command, reads files sequentially, writing them to standard output. However, the ccat command adds a visual advantage - color-coding. It makes your command-line experience more user-friendly, improving the readability and understanding of your code.

Installing ccat

Before diving in, you need to ensure you have ccat installed on your system. This process varies based on the Linux distribution you're using, but here are the most common methods:

For Ubuntu, Debian, and derivatives, the process begins by downloading the latest .deb package from the official ccat GitHub repository. After downloading the package, you can install it using the dpkg command:

sudo dpkg -i /path/to/downloaded_file.deb

For Arch Linux and Manjaro, clone the ccat package from the AUR and build it with makepkg:

git clone <ccat AUR repository URL>
cd ccat
makepkg -si

For other distributions, you can build ccat from source. To do so, ensure you have Go installed on your system, clone the ccat repository, then build and install:

git clone <ccat repository URL>
cd ccat
go build
sudo mv ccat /usr/local/bin/

Using ccat

Now that you have ccat installed, let's see it in action. The usage of ccat follows the same pattern as the cat command, replacing cat with ccat:

ccat file_name

You will notice that different types of text (such as comments, keywords, and strings) are colorized differently, providing a more visually-pleasing and organized output. For example, comments might be displayed in blue, keywords in bold yellow, and strings in green.

If you want to use ccat as your default cat command, you can create an alias. Add the following line to your .bashrc or .zshrc file:

alias cat='ccat'

Remember to source the .bashrc/.zshrc file after updating it or simply close and reopen your terminal.

Customizing ccat

Customization is a key benefit of ccat. You can adjust color settings for different types of text in your output, tailoring them to your preference.

Go to Full Article

A Comprehensive Guide to the wc Command in Linux

Monday 29th of May 2023 04:00:00 PM
by George Whittaker

One of the most valuable utilities offered by Unix and Linux-based systems is the wc command. This versatile command stands for "word count" and offers you a simple, yet powerful way to analyze text files. By comprehending the full scope of wc, you'll increase your proficiency with command-line operations, making your interaction with Unix or Linux systems more productive and efficient.

Introducing the wc Command

At its core, wc performs a simple task: it counts. However, the objects of its attention include not only words, but also characters, lines, and bytes in files. When executed with a file name and no options, wc prints three values: the newline, word, and byte counts, followed by the file name. The maximum line length is reported only when you request it with the -L option.

The basic syntax of the wc command is: wc [options] [file].

Options and Usage

Let's look at the different options you can use with wc and how they work. The options will modify the output of wc, providing you with more targeted information. These options are entered in the command line after wc and before the file name.

  1. -l: This option enables you to count the lines in a file. For example, wc -l file1 will return the number of lines in 'file1'.
  2. -w: The -w option tells wc to count the words in a file, with wc -w file1 returning the number of words in 'file1'.
  3. -c or -m: These options command wc to count the bytes or characters in a file respectively. The command wc -c file1 or wc -m file1 returns the number of bytes or characters in 'file1'.
  4. -L: With the -L option, wc determines the length of the longest line in a file. To find the length of the longest line in 'file1', you would use wc -L file1.

It's important to note that you can use multiple options at the same time. For example, wc -lw file1 will return both the number of lines and words in 'file1'.
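
A quick illustration (the sample file is created on the spot, and its path is an example):

```shell
# Create a two-line sample file, then count lines and words in one call.
printf 'one two\nthree four five\n' > /tmp/file1
wc -lw /tmp/file1   # line count (2) and word count (5), then the file name
```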

Reading from Standard Input

The wc command can also read from standard input (stdin), not just from a file. This is useful when you want to count the words, lines, or characters from a stream of text that is not saved in a file. You simply type wc, hit enter, and then start typing the text. Once you're done, press Ctrl + D to stop and wc will return the counts.
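
Reading from standard input also means wc works at the end of a pipe:

```shell
# wc counts whatever arrives on standard input; here, four words.
echo "the quick brown fox" | wc -w
```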

Go to Full Article

More in Tux Machines

digiKam 7.7.0 is released

After three months of active maintenance and another bug triage, the digiKam team is proud to present version 7.7.0 of its open source digital photo manager. See below the list of most important features coming with this release. Read more

Dilution and Misuse of the "Linux" Brand

Samsung, Red Hat to Work on Linux Drivers for Future Tech

The metaverse is expected to uproot system design as we know it, and Samsung is one of many hardware vendors re-imagining data center infrastructure in preparation for a parallel 3D world. Samsung is working on new memory technologies that provide faster bandwidth inside hardware for data to travel between CPUs, storage and other computing resources. The company also announced it was partnering with Red Hat to ensure these technologies have Linux compatibility. Read more

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install Go on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library. It is reliable, builds quickly, and produces efficient software that scales well. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. In this guide, we are going to learn how to install golang 1.19beta on Ubuntu 22.04. Go 1.19beta1 is not yet released; there is still much work in progress, including the documentation.

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project to help you test your ansible roles. I’m using molecule for automatically testing the ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn't know, MongoDB is a high-performance, highly scalable document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, in MongoDB data is structured in a JSON-like format inside records referred to as documents. The open-source nature of MongoDB makes it an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9. You can follow the same instructions for CentOS and Rocky Linux.

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. It started as an alternative to WordPress and went on to become an alternative to Substack by focusing on memberships and newsletters. The creators of Ghost offer managed Pro hosting, but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux Handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of the Nginx reverse proxy, you can also use another piece of software called Traefik with Docker. It is a popular open-source cloud-native application proxy, API gateway, edge router, and more. I use Traefik to secure my websites using an SSL certificate obtained from Let's Encrypt. Once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.