25 Unix Interview Questions and Answers

Prepare for your next technical interview with this guide on Unix, featuring common and advanced questions to enhance your understanding and skills.

Unix is a powerful, multiuser operating system that has been foundational in the development of many modern computing environments. Known for its robustness, flexibility, and security, Unix is widely used in server environments, mainframes, and high-performance computing systems. Its command-line interface and scripting capabilities make it a preferred choice for system administrators and developers who need to manage complex systems efficiently.

This article provides a curated selection of Unix interview questions designed to test and enhance your understanding of key concepts and practical skills. By working through these questions, you will be better prepared to demonstrate your proficiency in Unix during technical interviews, showcasing your ability to handle real-world challenges effectively.

Unix Interview Questions and Answers

1. How do you change the permissions of a file to make it readable, writable, and executable by the owner only?

In Unix, file permissions determine who can read, write, or execute a file. These permissions are represented by a combination of characters or octal numbers. The command used to change file permissions is chmod.

To make a file readable, writable, and executable by the owner only, you can use the chmod command with the appropriate octal value or symbolic representation. The octal value for this permission set is 700, where:

  • 7 represents read (4), write (2), and execute (1) permissions for the owner.
  • 0 represents no permissions for the group.
  • 0 represents no permissions for others.

Example:

chmod 700 filename

Alternatively, you can use the symbolic representation:

chmod u=rwx,go= filename
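
To confirm the change took effect, you can print the permission bits afterwards. A quick sketch using a throwaway file (stat -c '%a', which prints the octal mode, is a GNU coreutils option):

```shell
# Create a scratch file and restrict it to the owner only.
touch /tmp/demo_chmod_file
chmod 700 /tmp/demo_chmod_file

# stat -c '%a' prints the octal permission bits (GNU coreutils).
mode=$(stat -c '%a' /tmp/demo_chmod_file)
echo "$mode"    # prints 700
```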

2. Write a script to list all currently running processes that contain the word “apache”.

To list all currently running processes that contain the word “apache”, you can use a combination of Unix commands such as ps and grep. The ps command is used to display information about active processes, and grep is used to filter the output based on a specific pattern.

Example:

ps aux | grep apache | grep -v grep

In this script:

  • ps aux lists all currently running processes.
  • grep apache filters the list to include only those processes that contain the word “apache”.
  • grep -v grep excludes the grep command itself from the results.
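
Where it is available, pgrep is a more concise alternative: it searches the process table directly and avoids the grep -v grep workaround entirely. A small sketch (the -f and -a options shown here are from the Linux procps-ng version):

```shell
# -f matches against the full command line; -a prints the command
# line alongside each matching PID.
pgrep -fa apache || echo "no matching processes"
```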

3. How would you search for the string “error” in all .log files within a directory?

To search for the string “error” in all .log files within a directory, you can use the grep command. grep is a powerful Unix utility that searches through text using patterns. The command can be combined with wildcards to search through multiple files.

Example:

grep "error" *.log

This command will search for the string “error” in all files with the .log extension in the current directory. Note that combining the recursive option with a glob, as in grep -r "error" *.log, does not do what you might expect: the shell expands *.log against the current directory only, so subdirectories are not searched. To search recursively through subdirectories while still restricting the match to .log files, pass a directory and use the --include option:

grep -r --include="*.log" "error" .

4. Explain how you would set and export an environment variable in the shell.

Environment variables in Unix are used to store data that can be accessed by the operating system and various applications. They are often used to configure settings and preferences for the shell and other programs. Setting and exporting an environment variable allows it to be available to child processes spawned by the shell.

To set an environment variable, you use the VAR_NAME=value syntax. To export it so that it is available to child processes, you use the export command.

Example:

# Set the environment variable
MY_VAR="Hello, World!"

# Export the environment variable
export MY_VAR
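
The two steps can also be combined into one, and you can confirm that the variable really is visible to child processes by reading it back from a subshell. A quick sketch:

```shell
# Set and export in a single step.
export MY_VAR="Hello, World!"

# A child process (here, a new shell) inherits exported variables.
child_view=$(sh -c 'echo "$MY_VAR"')
echo "$child_view"    # prints Hello, World!
```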

5. What command would you use to check the current IP address of your machine?

To check the current IP address of your machine in a Unix environment, you can use the ifconfig or ip command. The ifconfig command is the older, traditional tool (now deprecated on many Linux distributions), while the ip command, from the iproute2 suite, is the modern replacement and the preferred choice on current Linux systems.

Example using ifconfig:

ifconfig

Example using ip:

ip addr show

Both commands will display network interface information, including the IP address of your machine.

6. Write a command to combine the output of ls and grep to find all files starting with “test”.

In Unix, you can combine the output of ls and grep using a pipe to filter the list of files and find those that start with “test”. The ls command lists the files in the directory, and grep is used to search for patterns within the output.

Example:

ls | grep '^test'

In this command:

  • ls lists all files in the current directory.
  • The pipe | passes the output of ls to grep.
  • grep '^test' filters the files, showing only those that start with “test”. The ^ character denotes the beginning of a line in regular expressions.
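
Parsing the output of ls works for simple names, but the shell can do this filtering itself with a glob pattern, which is generally more robust (filenames containing spaces or newlines can confuse a pipeline). A self-contained sketch in a scratch directory:

```shell
# Create a scratch directory with a few files to filter.
dir=$(mktemp -d)
cd "$dir"
touch test_one test_two other_file

# The glob test* is expanded by the shell itself; no pipeline needed.
ls test*    # lists test_one and test_two
```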

7. How would you schedule a job to run at 2 AM every day using cron?

Cron is a time-based job scheduler in Unix-like operating systems. It allows users to schedule jobs (commands or scripts) to run periodically at fixed times, dates, or intervals. The cron daemon runs in the background and checks the crontab (cron table) for scheduled jobs.

To schedule a job to run at 2 AM every day, you need to edit the crontab file and add a specific entry. The crontab file uses a specific syntax to define the schedule:

* * * * * command_to_run
- - - - -
| | | | |
| | | | +---- Day of the week (0 - 7) (Sunday=0 or 7)
| | | +------ Month (1 - 12)
| | +-------- Day of the month (1 - 31)
| +---------- Hour (0 - 23)
+------------ Minute (0 - 59)

To schedule a job to run at 2 AM every day, you would use the following crontab entry:

0 2 * * * /path/to/your/script.sh

This entry means “run the script located at /path/to/your/script.sh at 2:00 AM every day.”

To edit the crontab file, you can use the crontab -e command, which opens the crontab file in the default text editor. After adding the entry, save and close the file. The cron daemon will automatically pick up the changes and schedule the job accordingly.

8. How do you add a new user to the system and assign them to a group?

To add a new user to the system and assign them to a group in Unix, you can use the useradd command followed by the usermod command. The useradd command is used to create a new user, and the usermod command is used to modify the user’s group membership.

Example:

# Add a new user
sudo useradd -m newuser

# Assign the user to a group
sudo usermod -aG groupname newuser

In the above example, the -m option with useradd creates a home directory for the new user. The -aG option with usermod appends the user to the specified group without removing them from other groups.

9. Which command would you use to check disk usage of all mounted filesystems?

To check the disk usage of all mounted filesystems in Unix, you can use the df command. This command provides a summary of disk space usage for all mounted filesystems. The -h option can be added to make the output human-readable, displaying sizes in powers of 1024 (e.g., 1K, 1M, 1G).

Example:

df -h

This command will display the disk usage of all mounted filesystems in a human-readable format, showing the total size, used space, available space, and the percentage of space used for each filesystem.
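
df reports usage per filesystem; to find out what is consuming space within a directory tree, its companion du (disk usage) is the usual follow-up. A sketch, using /tmp as an example tree:

```shell
# -s summarizes the total for each argument, -h prints human-readable sizes.
du -sh /tmp 2>/dev/null || true

# Sort the largest entries of the tree to the top (GNU sort -h
# understands human-readable suffixes such as K, M, G).
du -h /tmp 2>/dev/null | sort -rh | head -n 5
```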

10. How do you install a package using the apt package manager?

The apt package manager is a tool used in Debian-based Linux distributions for managing software packages. It simplifies the process of installing, updating, and removing packages. To install a package using apt, you can use the apt-get install command followed by the package name.

Example:

sudo apt-get install package_name

In this command, sudo is used to execute the command with superuser privileges, apt-get is the command-line tool for handling packages, and install is the action to be performed. Replace package_name with the name of the package you wish to install. On newer Debian-based systems, sudo apt install package_name is an equivalent, slightly friendlier front end for interactive use.

11. Name three tools you can use to monitor system performance.

Three tools commonly used to monitor system performance in Unix are:

  1. top: This command provides a dynamic, real-time view of the system’s processes. It displays information such as CPU usage, memory usage, and process IDs. It is useful for identifying processes that are consuming excessive resources.
  2. vmstat: This tool reports information about processes, memory, paging, block IO, traps, and CPU activity. It is useful for providing a snapshot of system performance and identifying bottlenecks.
  3. iostat: This command provides statistics on CPU and input/output operations for devices and partitions. It is useful for monitoring disk activity and performance, helping to identify potential issues with storage devices.
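
Typical non-interactive invocations of these tools look like the following. vmstat and iostat take an interval and a count; iostat ships in the sysstat package, which may need to be installed first, so each call below is guarded:

```shell
# One batch iteration of top (no interactive screen), trimmed.
command -v top >/dev/null && top -b -n 1 | head -n 15 || echo "top not installed"

# Five vmstat samples, one second apart.
command -v vmstat >/dev/null && vmstat 1 5 || echo "vmstat not installed"

# Extended per-device I/O statistics, three samples two seconds apart.
command -v iostat >/dev/null && iostat -x 2 3 || echo "iostat not installed"
```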

12. Write a shell script that takes a filename as an argument and counts the number of lines in the file.

To write a shell script that takes a filename as an argument and counts the number of lines in the file, you can use the wc (word count) command, which is commonly used in Unix for this purpose. The wc -l option specifically counts the number of lines.

Here is a simple shell script to achieve this:

#!/bin/bash

if [ $# -eq 0 ]; then
    echo "No filename provided"
    exit 1
fi

filename=$1

if [ ! -f "$filename" ]; then
    echo "File not found!"
    exit 1
fi

line_count=$(wc -l < "$filename")
echo "The file '$filename' has $line_count lines."
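
One detail worth knowing: wc -l file prints the filename after the count, while redirecting the file into wc (wc -l < file, as the script above does) prints the count alone, which is easier to capture in a variable. A quick sketch with a throwaway file:

```shell
# Build a three-line sample file.
printf 'one\ntwo\nthree\n' > /tmp/demo_lines.txt

# With a filename argument, wc echoes the name back.
wc -l /tmp/demo_lines.txt        # 3 /tmp/demo_lines.txt

# Redirecting the file in yields the bare count, easy to capture.
count=$(wc -l < /tmp/demo_lines.txt)
echo "$count"                    # 3
```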

13. Explain how to load and unload a kernel module.

Kernel modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system. This is particularly useful for adding support for new hardware, filesystems, or other features.

To load a kernel module, you can use the insmod or modprobe command. The modprobe command is generally preferred because it handles module dependencies automatically. To unload a kernel module, you can use the rmmod or modprobe -r command.

Example:

# Load a kernel module
sudo modprobe module_name

# Unload a kernel module
sudo modprobe -r module_name

14. Describe the steps to configure a static IP address on a Unix system.

Configuring a static IP address on a Unix system involves several steps:

  • Identify the network interface: Determine the name of the network interface you want to configure. This can be done using the ifconfig or ip a command.
  • Edit the network configuration file: Depending on the Unix distribution, you will need to edit specific configuration files. For example, on a Debian-based system, you would edit /etc/network/interfaces, while on a Red Hat-based system, you would edit /etc/sysconfig/network-scripts/ifcfg-<interface>.
  • Add static IP configuration: In the configuration file, specify the static IP address, netmask, gateway, and DNS servers. For example, in a Debian-based system, you might add the following lines to /etc/network/interfaces:
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
  • Restart the network service: After editing the configuration file, restart the network service to apply the changes. This can be done using the systemctl restart networking command on a systemd-based system or service networking restart on a SysVinit-based system.
  • Verify the configuration: Use the ifconfig or ip a command to verify that the static IP address has been correctly assigned to the network interface.

15. How do you mount a filesystem located on /dev/sdb1 to /mnt/data?

To mount a filesystem located on /dev/sdb1 to /mnt/data, you can use the mount command in Unix. The mount command attaches the filesystem found on a device to the filesystem hierarchy at the specified mount point.

The basic syntax for the mount command is:

mount [options] device directory

In this case, the device is /dev/sdb1 and the directory is /mnt/data. The command would be:

mount /dev/sdb1 /mnt/data

This command mounts the filesystem on /dev/sdb1 to the directory /mnt/data; mounting normally requires root privileges, so prefix the commands with sudo if you are not root. You also need to ensure that the directory /mnt/data exists before running the command. If it does not exist, you can create it using the mkdir command:

mkdir -p /mnt/data
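
A mount made this way does not survive a reboot. To make it persistent, an entry is typically added to /etc/fstab. A sketch of such an entry, assuming the filesystem on /dev/sdb1 is ext4:

```
# /etc/fstab entry: device  mount-point  fs-type  options  dump  pass
/dev/sdb1  /mnt/data  ext4  defaults  0  2
```

After editing /etc/fstab, running sudo mount -a mounts everything listed in it and doubles as a syntax check for the new entry.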

16. What are some best practices for securing a Unix system?

Securing a Unix system involves several best practices to ensure the system remains protected from unauthorized access and vulnerabilities. Some of the key best practices include:

  • Regular Updates: Keep the system and all installed software up to date with the latest security patches and updates.
  • Strong Password Policies: Enforce strong password policies, including complexity requirements and regular password changes.
  • Access Controls: Implement strict access controls using file permissions and user roles to limit access to sensitive data and system functions.
  • Firewall Configuration: Configure firewalls to restrict incoming and outgoing traffic to only necessary services and ports.
  • SSH Security: Disable root login via SSH, use key-based authentication, and change the default SSH port to reduce the risk of brute-force attacks.
  • Intrusion Detection: Implement intrusion detection systems (IDS) to monitor and alert on suspicious activities.
  • Regular Audits: Conduct regular security audits and vulnerability assessments to identify and address potential security issues.
  • Backup and Recovery: Maintain regular backups and have a disaster recovery plan in place to ensure data integrity and availability.
  • Logging and Monitoring: Enable comprehensive logging and monitoring to track system activities and detect any anomalies.
  • Minimize Services: Disable or remove unnecessary services and applications to reduce the attack surface.
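
The SSH-related items above translate into a few directives in the server's configuration file, /etc/ssh/sshd_config. A sketch of the relevant lines (the port number is only an example; remember to restart the SSH service, e.g. sudo systemctl restart sshd, after editing):

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no          # disable direct root login
PasswordAuthentication no   # require key-based authentication
Port 2222                   # non-default port (example value)
```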

17. Describe a strategy for backing up and recovering data on a Unix system.

A comprehensive strategy for backing up and recovering data on a Unix system involves several key components:

1. Types of Backups:

  • Full Backup: A complete copy of all data.
  • Incremental Backup: Only the data that has changed since the last backup.
  • Differential Backup: Data that has changed since the last full backup.

2. Backup Tools:

  • Common tools include rsync, tar, and dd. These tools can be used to create backups and automate the process using cron jobs.
  • For more advanced needs, tools like Bacula, Amanda, or Duplicity can be used.

3. Backup Storage:

  • Backups should be stored in a secure location, preferably off-site or in a cloud storage service to protect against physical damage to the primary site.
  • Ensure that the storage medium is reliable and has sufficient capacity.

4. Backup Schedule:

  • Regularly scheduled backups are important. The frequency depends on the criticality of the data and how often it changes.
  • A common approach is to perform full backups weekly and incremental backups daily.

5. Recovery Plan:

  • Test the recovery process regularly to ensure that backups are valid and can be restored quickly.
  • Document the recovery procedures and ensure that staff are trained to perform data recovery.

6. Security:

  • Encrypt backups to protect sensitive data.
  • Use secure transfer methods (e.g., scp, sftp) to move backups to remote locations.
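
As a minimal illustration of the full-backup step with standard tools, the sketch below archives a directory with tar, verifies that the archive is readable, and restores it to a separate location; all file and path names here are placeholders:

```shell
# Create some sample data to back up.
src=$(mktemp -d)
printf 'important data\n' > "$src/notes.txt"

# Full backup: a compressed archive of the whole directory.
tar -czf /tmp/demo_backup.tar.gz -C "$src" .

# Verify the archive is readable by listing its contents.
tar -tzf /tmp/demo_backup.tar.gz

# Restore into a separate directory (the recovery test).
restore=$(mktemp -d)
tar -xzf /tmp/demo_backup.tar.gz -C "$restore"
cat "$restore/notes.txt"    # prints: important data
```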

18. What is SELinux and how does it enhance security?

SELinux is a security module in the Linux kernel that provides a mechanism for enforcing mandatory access control (MAC) policies. It was developed by the National Security Agency (NSA) to add an additional layer of security to the Linux operating system. SELinux operates by defining a set of rules that specify how processes and users can interact with files, network ports, and other resources.

SELinux enhances security in several ways:

  • Mandatory Access Control (MAC): Unlike traditional discretionary access control (DAC), where users can set permissions on files they own, MAC policies are enforced by the system and cannot be modified by users. This ensures a higher level of security.
  • Least Privilege: SELinux policies are designed to grant the minimum necessary permissions to users and processes, reducing the risk of unauthorized access or damage.
  • Role-Based Access Control (RBAC): SELinux supports RBAC, allowing administrators to define roles with specific permissions, making it easier to manage and audit access controls.
  • Type Enforcement (TE): This is the primary mechanism used by SELinux to enforce policies. It assigns types to files, processes, and other resources, and defines rules that govern how these types can interact.
  • Multi-Level Security (MLS): SELinux can enforce MLS policies, which are useful in environments that require strict separation of data based on sensitivity levels.
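
On systems where SELinux is present, its state and the security contexts it assigns can be inspected with a few read-only commands. A sketch, guarded so it degrades gracefully where the SELinux tooling is not installed:

```shell
# getenforce reports the current mode: Enforcing, Permissive, or Disabled.
if command -v getenforce >/dev/null 2>&1; then
    getenforce
    # ls -Z shows the SELinux security context (user:role:type:level)
    # attached to a file.
    ls -Z /etc/passwd
else
    echo "SELinux tools not installed on this system"
fi
```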

19. Name and describe a tool you would use to debug a running process.

One of the most commonly used tools to debug a running process in Unix is the GNU Debugger, or gdb. gdb allows you to see what is happening inside a program while it executes or what it was doing at the moment it crashed. It provides a wide range of features to help you track down and fix bugs in your code.

Key features of gdb include:

  • Breakpoints: You can set breakpoints in your code to pause execution at specific points, allowing you to inspect the state of the program.
  • Step Execution: You can step through your code line by line to observe the flow of execution and the state of variables.
  • Variable Inspection: You can inspect and modify the values of variables to understand how they change over time.
  • Backtraces: You can generate backtraces to see the call stack at any point in time, which is particularly useful for diagnosing crashes.
  • Core Dumps: You can analyze core dumps to determine the state of the program at the time of a crash.

To attach gdb to a running process, you can use the following command:

gdb -p <pid>

Replace <pid> with the process ID of the running process you want to debug. Once attached, you can use gdb commands to set breakpoints, step through code, and inspect variables.
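
For a quick, scriptable look at a process without an interactive session, gdb can also run in batch mode. The sketch below starts a throwaway process, attaches, prints a backtrace, and detaches; attaching may fail on hardened systems where the kernel's ptrace policy restricts it, so the call is hedged accordingly:

```shell
# Start a long-running process to inspect.
sleep 60 &
pid=$!

# Attach non-interactively: print a backtrace, then detach.
if command -v gdb >/dev/null 2>&1; then
    gdb -p "$pid" -batch -ex bt -ex detach \
        || echo "gdb could not attach (ptrace restricted?)"
else
    echo "gdb not installed"
fi

kill "$pid"
```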

20. What are the benefits of using virtualization on Unix systems?

Virtualization on Unix systems provides several key benefits:

  • Resource Efficiency: Virtualization allows multiple virtual machines (VMs) to run on a single physical machine, optimizing the use of hardware resources. This leads to better CPU, memory, and storage utilization.
  • Isolation: Each VM operates in its own isolated environment, which enhances security by preventing one VM from affecting others. This isolation also helps in testing and development, as different environments can be created and destroyed without impacting the host system.
  • Scalability: Virtualization makes it easier to scale resources up or down based on demand. New VMs can be quickly provisioned to handle increased load, and decommissioned when no longer needed.
  • Disaster Recovery: Virtual machines can be easily backed up and restored, providing a robust disaster recovery solution. Snapshots of VMs can be taken at any point in time, allowing for quick recovery in case of system failures.
  • Cost Savings: By consolidating multiple workloads onto fewer physical machines, organizations can reduce hardware and maintenance costs. This also leads to lower energy consumption and cooling requirements.
  • Flexibility: Virtualization allows for the creation of different operating system environments on the same physical hardware. This is particularly useful for testing software across different OS versions and configurations.

21. How would you set up and manage containers on a Unix system?

Containerization is a lightweight form of virtualization that allows you to run applications in isolated environments. Containers are more efficient than traditional virtual machines because they share the host system’s kernel and resources. On a Unix system, Docker is one of the most popular tools for setting up and managing containers.

To set up and manage containers on a Unix system, follow these steps:

  • Install Docker: First, you need to install Docker on your Unix system. Docker provides a convenient way to package, distribute, and run applications in containers.
  • Pull Docker Images: Docker images are pre-configured environments that contain everything needed to run an application. You can pull images from Docker Hub or create your own custom images using a Dockerfile.
  • Run Containers: Once you have the necessary Docker images, you can run containers using the docker run command. This command creates a new container instance from the specified image and starts it.
  • Manage Containers: Docker provides various commands to manage running containers, such as docker ps to list running containers, docker stop to stop a container, and docker rm to remove a container.
  • Networking and Storage: Docker also allows you to configure networking and storage for your containers. You can create custom networks to enable communication between containers and use volumes to persist data.
  • Orchestration: For managing multiple containers and ensuring high availability, you can use orchestration tools like Docker Compose, Kubernetes, or OpenShift. These tools help automate the deployment, scaling, and management of containerized applications.
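
The workflow above, expressed as commands, might look like the following sketch. It assumes Docker is installed and the daemon is running, and it skips quietly where that is not the case; nginx:alpine and demo_web are just example names:

```shell
# Container lifecycle sketch; skips gracefully when Docker
# (or a running daemon) is not available.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker pull nginx:alpine                               # fetch an image
    docker run -d --name demo_web -p 8080:80 nginx:alpine  # start a container
    docker ps --filter name=demo_web                       # list it
    docker stop demo_web                                   # stop it
    docker rm demo_web                                     # remove it
else
    echo "Docker not available; skipping"
fi
```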

22. Explain the role of the init system in Unix.

The init system in Unix is the first process that the kernel starts when the system boots up. It has a process ID (PID) of 1 and is responsible for bringing the system to a usable state. The init system performs several key functions:

  • System Initialization: It initializes the system by setting up the environment, mounting file systems, and starting essential services.
  • Service Management: It manages system services and daemons, ensuring they are started, stopped, and restarted as needed. This includes services like networking, logging, and user sessions.
  • Runlevels: The init system uses runlevels to define different states of the system, such as single-user mode, multi-user mode, and shutdown. Each runlevel has a specific set of services that should be running.
  • Shutdown and Reboot: It handles system shutdown and reboot processes, ensuring that all services are stopped gracefully and that the system is safely powered off or restarted.

There are different implementations of the init system, such as System V init, Upstart, and systemd. Each has its own way of managing services and runlevels, but the core responsibilities remain the same.
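
On a systemd-based machine you can observe these responsibilities directly: PID 1 is the init process, and systemctl is the interface to service management. A read-only sketch, guarded for non-systemd environments:

```shell
# PID 1 is always the init process, whatever the implementation.
ps -p 1 -o comm=

# On systemd machines, /run/systemd/system exists and systemctl works.
if [ -d /run/systemd/system ]; then
    systemctl list-units --type=service --state=running | head -n 5
else
    echo "not a systemd system"
fi
```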

23. What are the differences between hard links and soft links?

In Unix, hard links and soft links (also known as symbolic links) are two different methods of creating links to files.

A hard link is essentially an additional name for an existing file. Both the original file and the hard link share the same inode number, meaning they point to the same data on the disk. If the original file is deleted, the data remains accessible through the hard link. Hard links cannot span across different file systems and cannot link to directories.

A soft link, on the other hand, is a special type of file that contains a path to another file or directory. Unlike hard links, soft links have their own inode number and can span across different file systems. If the original file is deleted, the soft link becomes a dangling link, pointing to a non-existent file.

Key differences include:

  • Inode Sharing: Hard links share the same inode number as the original file, while soft links have a different inode number.
  • File System Boundaries: Hard links cannot cross file system boundaries, whereas soft links can.
  • Directory Linking: Hard links cannot link to directories, but soft links can.
  • Deletion Impact: Deleting the original file does not affect hard links, but it renders soft links invalid.
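
These differences are easy to observe with ln and ls -li (which prints inode numbers in the first column). A self-contained sketch in a scratch directory:

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'hello\n' > original.txt

ln original.txt hardlink.txt      # hard link: same inode, same data
ln -s original.txt softlink.txt   # soft link: its own inode, stores a path

ls -li                            # first column shows the inode numbers

rm original.txt
cat hardlink.txt                  # still prints: hello (data survives)
cat softlink.txt 2>/dev/null || echo "dangling link"   # soft link now broken
```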

24. Describe the function and typical use cases of the cron daemon.

The cron daemon is a background process that enables users to schedule jobs (commands or scripts) to run automatically at specified times and intervals. It reads configuration files known as “crontabs” for predefined tasks and their schedules. Each user can have their own crontab file, and there is also a system-wide crontab file.

Typical use cases for the cron daemon include:

  • System Maintenance: Automating tasks such as cleaning up temporary files, rotating logs, or updating software packages.
  • Backups: Scheduling regular backups of important data to ensure data integrity and availability.
  • Monitoring: Running scripts to monitor system health, resource usage, or application performance.
  • Data Processing: Automating data processing tasks such as generating reports, aggregating data, or running batch jobs.

A typical crontab entry consists of five fields representing the time and date, followed by the command to be executed. For example, the following crontab entry schedules a script to run every day at midnight:

0 0 * * * /path/to/script.sh

25. What is the significance of the /var directory?

The /var directory in Unix-based systems stands for “variable.” It is used to store files that are expected to grow and change frequently as the system runs. This directory is essential for the proper functioning of the system and various applications.

Some of the key subdirectories within /var include:

  • /var/log: Stores log files generated by the system and applications. These logs are important for debugging and monitoring system performance.
  • /var/spool: Contains directories for print spools and mail queues. This is where tasks are queued for processing.
  • /var/tmp: Used for temporary files that need to be preserved between reboots. Unlike /tmp, which is often cleared on reboot, /var/tmp retains its contents.
  • /var/lib: Holds state information pertaining to applications. For example, databases might store their data files here.
  • /var/cache: Stores cached data for applications to speed up operations. This data can usually be regenerated if needed.