Red Hat Certified Specialist in Security (EX415) Practice Exam

This practice test is designed to assess your readiness to take the Red Hat Certified Specialist in Security (EX415) exam. This test covers securing Red Hat servers in a production environment, as well as the many objectives listed in the official Red Hat curriculum. This test is 4 hours long, just like the real EX415 exam. *This course is not approved or sponsored by Red Hat.*


Path Info

Level
Intermediate
Duration
2h 0m
Published
Jul 02, 2019


Table of Contents

  1. Challenge

    1. On Host2, set up auditing for low disk space alerts to email root when the available disk space reaches 100 MB. Also, restrict audit logs to consume no more than 100 MB of disk space, and limit the number of audit buffers to 2560.

    1. Edit the /etc/audit/auditd.conf file and configure the following:
      • space_left = 100
      • space_left_action = email
    2. Edit the /etc/audit/auditd.conf file and set the max_log_file and num_logs values so that their product equals 100 MB (for example, max_log_file = 20 and num_logs = 5).
    3. Edit the file /etc/audit/rules.d/audit.rules and include a line consisting of -b 2560.
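The edits above can also be scripted. The following is a sketch using sed against a sample copy of the config; the sample values are assumptions standing in for the shipped defaults, and the 20 MB × 5 logs split is one way to reach the 100 MB cap. On Host2 you would point `conf` at /etc/audit/auditd.conf and the rules path at /etc/audit/rules.d/audit.rules.

```shell
# Sketch: apply the Challenge 1 settings non-interactively.
# Demo on a sample copy; on Host2, set conf=/etc/audit/auditd.conf.
mkdir -p /tmp/ex415_c1
conf=/tmp/ex415_c1/auditd.conf
cat > "$conf" <<'EOF'
space_left = 75
space_left_action = syslog
max_log_file = 8
num_logs = 5
EOF

# Alert root by email when free space drops to 100 MB
sed -i 's/^space_left = .*/space_left = 100/' "$conf"
sed -i 's/^space_left_action = .*/space_left_action = email/' "$conf"

# Cap total audit log usage at 100 MB (max_log_file * num_logs = 20 * 5)
sed -i 's/^max_log_file = .*/max_log_file = 20/' "$conf"
sed -i 's/^num_logs = .*/num_logs = 5/' "$conf"

# The buffer setting goes in the rules file instead
echo '-b 2560' > /tmp/ex415_c1/audit.rules
```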
  2. Challenge

    2. On Host2, configure the audit rules to meet STIG compliance, then make sure all the audit changes are put into effect.

    1. Make a backup of the current audit rules using the following command:
    cp /etc/audit/rules.d/audit.rules /etc/audit/rules.d/audit.rules_backup
    
    

    2. Copy the STIG audit rules into the audit.rules file with the following commands:

    ```
    sudo su
    cd /usr/share/doc/audit-2.8.4/rules
    cat 10-base-config.rules 30-stig.rules 99-finalize.rules >> /etc/audit/rules.d/audit.rules
    ```

    3. The 10-base-config.rules we copied in also includes a buffer size setting. We need to remove it so that only the -b 2560 setting from the previous challenge remains:

    nano /etc/audit/rules.d/audit.rules
    

    Remove the following line: -b 8192.

    4. Restart the auditd service so the changes take effect:
    service auditd restart  
    
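Deleting the extra buffer line can also be done non-interactively with sed. A sketch on a sample rules file (the sample lines are assumptions); on Host2 the target is /etc/audit/rules.d/audit.rules.

```shell
# Sketch: drop the -b 8192 line the copied rules brought in, keeping -b 2560.
mkdir -p /tmp/ex415_c2
rules=/tmp/ex415_c2/audit.rules
printf '%s\n' '-b 2560' '-D' '-b 8192' '-f 1' > "$rules"

# Delete only the exact -b 8192 line
sed -i '/^-b 8192$/d' "$rules"
```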
  3. Challenge

    3. On Host2, create an audit report for all executed events in the logs. Name the report `host2-audit-report.txt` and save it to the `cloud_user`'s home directory.

    1. Create an audit report for all executed events:
    aureport -x > /home/cloud_user/host2-audit-report.txt
    
  4. Challenge

    4. On Control1, create a custom OpenSCAP policy to check to ensure the Telnet and FTP servers are removed, and `firewalld` is installed and running. Name the customized policy `control1_custom.xml` in the `cloud_user`'s home directory.

    1. VNC to Control1.
    2. Open SCAP Workbench:
      • Applications > System Tools > SCAP Workbench
    3. For Select content to load, choose RHEL7.
    4. Click the Customize button next to Profile.
    5. Provide a New Profile ID of: xccdf_org.ssgproject.content_profile_C2S_control1.
    6. In the customization window:
      1. Click the Deselect All button at the top.
      2. Under Services > Obsolete Services > Telnet, check the box next to Uninstall telnet-server Package.
      3. Under Services > FTP Server > Disable vsftpd if Possible, check the box next to Uninstall vsftpd Package.
      4. Under System Settings > Network Configurations and Firewalls > firewalld > Inspect and Activate Default firewalld Rules, check the boxes next to Verify firewalld Enabled and Install firewalld.
    7. Click the OK button at the bottom of the customization window.
    8. Now, in the SCAP Workbench window, click on File, Save Customization Only, and name the customization "control1_custom.xml".
    9. Close SCAP-Workbench.
  5. Challenge

    5. On Control1, use SCAP-Workbench to scan `Control1` (Local Machine) using the newly created `control1_custom` profile. Then, create a report of the scan results named `control1_scan_report.html`.

    1. From within the SCAP Workbench window, select Local Machine as the target, then click the Scan button at the bottom to start a scan using the custom profile.
    2. Once the scan is finished, click the Close button in the Diagnostics window.
    3. Click the Save Results button at the bottom, and select HTML Report.
    4. Enter "control1_scan_report.html" as the name of the report, and click Save.
  6. Challenge

    6. On Control1, generate an SSH key for the `ansible` user, then copy that key to `Host2` in order to use Ansible later on.

    1. To create a key pair for the ansible user on the Control1 host, run the following commands:
      sudo su - ansible
      ssh-keygen
      
    2. Press Enter at each prompt to accept all defaults.
    3. Copy the public key to Host2.
      ssh-copy-id Host2
      
    4. Accept the host key if prompted, and authenticate as the ansible user.
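If you prefer to skip the prompts entirely, ssh-keygen accepts the key path and passphrase on the command line. A sketch that writes a key pair to a scratch directory; on Control1 you would run it as the ansible user and let the defaults place the key in ~/.ssh.

```shell
# Sketch: generate an RSA key pair without any prompts.
# -f sets the output path (a scratch dir here), -N '' an empty passphrase.
mkdir -p /tmp/ex415_c6
rm -f /tmp/ex415_c6/id_rsa /tmp/ex415_c6/id_rsa.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/ex415_c6/id_rsa

# ssh-copy-id would then append the public key to Host2's authorized_keys:
#   ssh-copy-id -i /tmp/ex415_c6/id_rsa.pub Host2
```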
  7. Challenge

    7. On Control1, SCAP-Workbench was used to create an Ansible playbook to remediate `Host2` issues. Add `Host2` to an inventory file in the Ansible home directory, then download and run the `remediate.yml` playbook against `Host2`.

    1. On Control1, create an inventory file in the ansible user's home directory and add Host2 to it:
    sudo su - ansible
    
    nano inventory
    
    [Host2]
    X.X.X.X   (Private IP Address of Host2)
    
    2. Download the remediate.yml playbook by running:
    wget -P /home/ansible/ https://raw.githubusercontent.com/linuxacademy/content-security-redhat-ex415/master/remediate.yml
    
    3. Run the remediate.yml playbook against Host2:
    ansible-playbook -i inventory remediate.yml
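The inventory format above can be scripted with a heredoc. A sketch on a scratch path; the IP below is a placeholder, so substitute Host2's real private IP.

```shell
# Sketch: build the Ansible inventory non-interactively.
# 10.0.1.20 is a placeholder for Host2's private IP.
mkdir -p /tmp/ex415_c7
cat > /tmp/ex415_c7/inventory <<'EOF'
[Host2]
10.0.1.20
EOF

# The playbook run would then be:
#   ansible-playbook -i /tmp/ex415_c7/inventory remediate.yml
```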
    
  8. Challenge

    8. On Host3, set up AIDE to monitor the `/accounting` directory using the `DIR` settings group, and monitor the `/applications/payroll` for all access events. Configure AIDE to run a check every morning at 1 AM.

    1. Install AIDE:
    yum install -y aide
    
    2. Initialize AIDE (this will take about 5 minutes to complete):
    /usr/sbin/aide --init
    
    3. Copy the initialized database to production:
    cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
    
    4. Define the directory to monitor:
    nano /etc/aide.conf
    
    /accounting     DIR
    
    5. In the same file, add an application to monitor each time it's accessed:
    APP_ACCESS = a
    /applications/payroll   APP_ACCESS
    
    6. Create a cron job to run aide --check at 1 AM daily (note that /etc/crontab requires a user field):
    nano /etc/crontab
    
    0 1 * * * root /usr/sbin/aide --check 
    
    7. Now we need to update the AIDE database since we made changes to what was monitored (this will also take about 5 minutes to complete), then move the new database into place:
    /usr/sbin/aide --update
    
    cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz  
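The aide.conf additions and the cron entry can also be appended from the shell. A sketch against sample files; on Host3 the real targets are /etc/aide.conf and /etc/crontab.

```shell
# Sketch: append the AIDE monitoring rules and the daily check.
mkdir -p /tmp/ex415_c8
aideconf=/tmp/ex415_c8/aide.conf
cron=/tmp/ex415_c8/crontab

# Monitor /accounting with the predefined DIR group, and define a custom
# group that logs access (a) for the payroll application.
cat >> "$aideconf" <<'EOF'
/accounting     DIR
APP_ACCESS = a
/applications/payroll   APP_ACCESS
EOF

# Run an AIDE check every day at 1 AM (minute hour dom month dow user cmd)
echo '0 1 * * * root /usr/sbin/aide --check' >> "$cron"
```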
    
  9. Challenge

    9. On Host1, only permit SSH access for `root` from host `Control1`, and be sure root SSH access is enabled globally as well. Also, grant `cloud_user` SSH access from anywhere. Ensure these changes take effect immediately.

    1. The first step is to permit root logins by removing the comment in front of the line #PermitRootLogin yes in the /etc/ssh/sshd_config file.
    2. Next, add root@control1 and cloud_user to the AllowUsers line in the /etc/ssh/sshd_config file (creating the line if it does not exist), so that it reads: AllowUsers root@control1 cloud_user
    3. Now we need to restart the sshd service so the changes we made will take effect:
    systemctl restart sshd  
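Both edits can be made with sed. The sketch below works on a sample file whose lines stand in for the stock sshd_config (an assumption); on Host1 you would point `cfg` at /etc/ssh/sshd_config.

```shell
# Sketch: apply the Challenge 9 sshd_config edits non-interactively.
mkdir -p /tmp/ex415_c9
cfg=/tmp/ex415_c9/sshd_config
printf '%s\n' '#PermitRootLogin yes' '#AllowUsers' > "$cfg"

# Enable root logins globally...
sed -i 's/^#PermitRootLogin yes/PermitRootLogin yes/' "$cfg"
# ...then restrict logins: root only from control1, cloud_user from anywhere.
sed -i 's/^#AllowUsers.*/AllowUsers root@control1 cloud_user/' "$cfg"

# On Host1, validate the config and apply it:
#   sshd -t && systemctl restart sshd
```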
    
  10. Challenge

    10. On Host1, install USBGuard and configure it to allow devices with the name `Yubikey-Waddle` or serial number `1337h4x0r`. Configure it to block all devices that don't match these rules. USBGuard will need to run at boot.

    1. Install USBGuard:
    yum install -y usbguard
    
    2. Start the USBGuard service:
    systemctl start usbguard.service
    
    3. Generate a base policy for USBGuard:
    usbguard generate-policy > /etc/usbguard/rules.conf
    
    4. Restart the USBGuard service after creating the base policy:
    systemctl restart usbguard.service
    
    5. Enable the USBGuard service to start at boot:
    systemctl enable usbguard.service
    
    6. Create a local file named rules.conf and add two allow lines:
    nano rules.conf
    

    Enter these two lines:

    allow name "Yubikey-Waddle"
    allow serial "1337h4x0r"
    

    Press Ctrl+X to quit, and save at the prompt.

    7. Commit the USBGuard rule changes by running the following command:
    install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf
    
    8. Edit the /etc/usbguard/usbguard-daemon.conf file:
    nano /etc/usbguard/usbguard-daemon.conf
    

    Set the ImplicitPolicyTarget to block.

    ImplicitPolicyTarget=block
    

    Press Ctrl+X to quit, and save at the prompt.

    9. Restart the USBGuard service:
    systemctl restart usbguard.service
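The rules file and daemon setting can be assembled from the shell as well. A sketch on sample paths; on Host1 the real targets are the local rules.conf and /etc/usbguard/usbguard-daemon.conf.

```shell
# Sketch: build the USBGuard allow-list and block policy non-interactively.
mkdir -p /tmp/ex415_c10
rules=/tmp/ex415_c10/rules.conf
daemon=/tmp/ex415_c10/usbguard-daemon.conf

# Allow only the two known devices.
cat > "$rules" <<'EOF'
allow name "Yubikey-Waddle"
allow serial "1337h4x0r"
EOF

# Block any device that matches no allow rule.
printf 'ImplicitPolicyTarget=allow\n' > "$daemon"   # sample stand-in line
sed -i 's/^ImplicitPolicyTarget=.*/ImplicitPolicyTarget=block/' "$daemon"
```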
    
  11. Challenge

    11. On Host1, ensure the Helpdesk group has permissions to edit USBGuard rules.

    1. Update USBGuard to permit the Helpdesk group to make changes to USBGuard.
    nano /etc/usbguard/usbguard-daemon.conf
    

    Change the IPCAllowedGroups line to the following:

    IPCAllowedGroups=Helpdesk
    
    1. Restart the USBGuard service.
    systemctl restart usbguard.service
    
  12. Challenge

    12. On Host3, install PAM and configure an account lockout policy to lock accounts out for 15 minutes after 3 failed login attempts. Do not include root in the account lockout policy.

    1. To install PAM, run the following command:
    sudo yum install -y pam-devel
    
    2. To set up an account lockout policy that locks accounts for 15 minutes after 3 consecutive failed logins and excludes the root account from the policy, add the following lines to both /etc/pam.d/password-auth and /etc/pam.d/system-auth:
    auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=900
    auth        [default=die] pam_faillock.so authfail audit deny=3 unlock_time=900
    
    • The first line above should be the second uncommented line in the files.
    • The second line above should be the fourth uncommented line in the files.
    3. Next, we need to add the following line as the first line in the account section of the password-auth and system-auth files:
    account     required      pam_faillock.so 
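For reference, a typical RHEL 7 auth section after these edits looks like the following; the pam_env, pam_unix, pam_succeed_if, and pam_deny lines are the assumed stock defaults, and the line ordering is what matters.

```
auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=900
auth        sufficient    pam_unix.so nullok try_first_pass
auth        [default=die] pam_faillock.so authfail audit deny=3 unlock_time=900
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_faillock.so
account     required      pam_unix.so
```

Because the pam_faillock lines omit the even_deny_root option, failed logins as root do not lock the root account, which satisfies the challenge's exclusion requirement.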
    
  13. Challenge

    13. On Host3, create a password complexity policy that requires all new passwords to be at least 14 characters in length, contain at least 4 different character classes, and have at least 4 numbers in it.

    1. To create the password requirements in the policy, we need to edit the /etc/security/pwquality.conf file and include the following:
    minlen = 14  
    minclass = 4  
    dcredit = -4    
    
    2. In order to put the new policy into effect, we need to add the following line to the /etc/pam.d/passwd file:
    password    required    pam_pwquality.so retry=3
    
    • This line should be inserted as the first line with the word "password", the third uncommented line in the default configuration of the file.
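To make the three settings concrete, here is a plain-shell illustration of what the policy enforces; this is not how pam_pwquality is implemented, just a sketch of the semantics of minlen=14 (length), minclass=4 (all four character classes), and dcredit=-4 (at least 4 digits).

```shell
# Illustration only: mirrors the policy's intent in plain shell.
check_pw() {
  pw=$1
  [ ${#pw} -ge 14 ] || return 1                       # minlen = 14
  case $pw in *[a-z]*) ;; *) return 1 ;; esac         # lowercase class
  case $pw in *[A-Z]*) ;; *) return 1 ;; esac         # uppercase class
  case $pw in *[0-9]*) ;; *) return 1 ;; esac         # digit class
  case $pw in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac  # special class
  digits=$(printf '%s' "$pw" | tr -cd '0-9' | wc -c)
  [ "$digits" -ge 4 ]                                 # dcredit = -4
}
```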
  14. Challenge

    14. On Host3, ensure users `dschrute` and `mscott` have full `sudo` access.

    1. Add the following lines to the /etc/sudoers file via visudo:
    dschrute     ALL=(ALL)       ALL
    mscott       ALL=(ALL)       ALL
    
  15. Challenge

    15. On Host1, create a new volume 100 MB in size named `data_lv`, which is to be part of the `luks_vg` volume group.

    1. View a list of available volume groups:
    vgs
    
    2. Create a new logical volume:
    lvcreate -L 100M -n data_lv luks_vg
    
    3. Verify that the new logical volume was created:
    lvs
    
  16. Challenge

    16. On Host1, encrypt the new `data_lv` volume with LUKS, then format it using `ext4` and mount it to the `/data` directory. Lastly, write a test file to the `/data` directory named `test.txt`.

    1. Run the following command:
    cryptsetup luksFormat /dev/mapper/luks_vg-data_lv
    
    • Type YES at the prompt.
    • Enter the passphrase Pinehead1! at the next two prompts.
    2. Check for TYPE=crypto_LUKS in the output of this command:
    blkid | grep data
    
    3. Next, open the encrypted volume:
    cryptsetup luksOpen /dev/mapper/luks_vg-data_lv data_lv
    
    • Enter the passphrase Pinehead1! at the prompt.
    4. Check for data_lv in the output of this command:
    ls /dev/mapper
    
    5. Run the following command to overwrite all of the storage on the new volume:
    shred -v -n1 /dev/mapper/data_lv
    
    6. Next, format the new volume using ext4 with the following command:
    mkfs.ext4 /dev/mapper/data_lv
    
    7. Next, mount the volume to /data:
    mount /dev/mapper/data_lv /data
    
    8. Check for lost+found in the output of this command:
    ls /data
    
    9. Check the status of the new encrypted volume:
    cryptsetup -v status data_lv
    
    10. Create the test file:
    touch /data/test.txt
    
  17. Challenge

    17. On Host2, change the LUKS passphrase for the `patient_lv` volume to `Itscoldinthesnow!32`. The original passphrase is `Pinehead1!`. No data should be lost during this process.

    1. We first need to identify which volume group patient_lv belongs to; the device-mapper name of a LUKS-encrypted logical volume includes the volume group name.

    Run the following command, and look for device in the output:

    cryptsetup -v status patient_lv
    
    2. Run the following command to change the passphrase:
    sudo cryptsetup luksChangeKey /dev/mapper/luks_vg-patient_lv
    
    • Enter the original passphrase (Pinehead1!) at the prompt.
    • Enter the new passphrase (Itscoldinthesnow!32) at the prompt.
    • Re-enter the new passphrase (Itscoldinthesnow!32) to confirm.
  18. Challenge

    18. In preparation for deploying NBDE, set up Control1 as an NBDE Tang server.

    1. On Control1, install Tang:
    yum install -y tang
    
    2. Configure Tang to run at boot and start it now:
    systemctl enable tangd.socket --now
    
    3. Verify that two Tang keys were created:
    ls /var/db/tang
    

    There should be two files in that directory with the file extension .jwk.

    4. Lastly, copy the IP address of Control1 to your clipboard (we'll need it later):
    ip addr
    
  19. Challenge

    19. On Host3, encrypt the `/dev/xvdg` disk using the NBDE Tang keys on Control1. Then, ensure the NBDE keys are set to retrieve automatically at boot.

    1. First, install the necessary Clevis packages on Host3:
    yum install -y clevis clevis-luks clevis-dracut
    
    2. Next, encrypt the /dev/xvdg disk with the Tang key from Control1:
    clevis bind luks -d /dev/xvdg tang '{"url":"http://10.0.1.<Control1_IP>"}'
    
    • Type Y to trust the keys.
    • Type y to initialize.
    • Enter Pinehead1! for the existing LUKS passphrase.
    3. Verify that the key was entered into the LUKS header of /dev/xvdg:
    luksmeta show -d /dev/xvdg
    
    4. Verify that slot 1 is active and there is a key value next to it.
    5. Lastly, force the retrieval of the Tang key at boot (this will take about 2 minutes to complete):
    dracut -f
    
  20. Challenge

    20. On Host1, ensure SELinux is put into `enforcing` mode and the host boots into `enforcing` mode.

    1. Check the SELinux state:
    getenforce
    

    This will show that it is in disabled mode. We need to change it to permissive mode in /etc/selinux/config first; booting into permissive mode lets the filesystem be relabeled without SELinux denials interfering with the boot.

    2. Edit /etc/selinux/config and set SELinux to permissive mode:
    nano /etc/selinux/config
    
    SELINUX=permissive
    
    3. Reboot the host:
    shutdown -r now
    
    4. Check the SELinux state:
    getenforce
    
    • This will show that it is in permissive mode. We need to change it to enforcing mode.
    5. Set SELinux to enforcing mode:
    setenforce 1
    
    6. Verify that SELinux is now in enforcing mode:
    getenforce
    

    We can see our change worked and SELinux is now in enforcing mode.

    7. Ensure SELinux boots into enforcing mode.

    Edit the SELinux configuration file:

    nano /etc/selinux/config
    
    SELINUX=enforcing
    

    Save the changes.
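The boot-time mode change can be scripted with sed. A sketch on a sample copy of the config (the sample lines are assumptions); on Host1 the target is /etc/selinux/config, and the runtime change is still setenforce 1.

```shell
# Sketch: set the boot-time SELinux mode non-interactively.
mkdir -p /tmp/ex415_c20
cfg=/tmp/ex415_c20/config
printf 'SELINUX=permissive\nSELINUXTYPE=targeted\n' > "$cfg"

# Switch only the SELINUX= line; SELINUXTYPE is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' "$cfg"

# At runtime on Host1: setenforce 1, then confirm with getenforce.
```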

  21. Challenge

    21. On Host1, configure SELinux-confined users by mapping Linux user `jhalpert` to SELinux user `user_u` and Linux user `pbeesly` to SELinux user `staff_u`.

    1. Map Linux user jhalpert to SELinux user user_u:
    semanage login -a -s user_u jhalpert
    
    1. Map Linux user pbeesly to SELinux user staff_u:
    semanage login -a -s staff_u pbeesly  
    
    1. Check the user mappings:
    semanage login -l  
    
    • We can see our Linux users successfully mapped to the assigned SELinux users.
