
Using and Administering Linux: Volume 1 Zero to SysAdmin: Getting Started — David Both


Using and Administering Linux: Volume 1
David Both
Raleigh, NC, USA

ISBN-13 (pbk): 978-1-4842-5048-8
ISBN-13 (electronic): 978-1-4842-5049-5
https://doi.org/10.1007/978-1-4842-5049-5

Copyright © 2020 by David Both

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image, we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Louise Corrigan
Development Editor: James Markham
Coordinating Editor: Nancy Chen
Cover designed by eStudioCalamar
Cover image designed by Freepik (www.freepik.com)

Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail [emailprotected], or visit www.springeronline.com.
Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

For information on translations, please e-mail [emailprotected], or visit http://www.apress.com/rights-permissions.

Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.

Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book’s product page, located at www.apress.com/9781484250488. For more detailed information, please visit http://www.apress.com/source-code.

Printed on acid-free paper

This book – this course – is dedicated to all Linux and open source course developers and trainers.

:(){ :|:& };:

Table of Contents

About the Author ..... xix
About the Technical Reviewer ..... xxi
Acknowledgments ..... xxiii
Introduction ..... xxv

Chapter 1: Introduction ..... 1
    Objectives ..... 1
    About Linux ..... 1
    The birth of Windows ..... 3
    Black box syndrome ..... 3
    The birth of Linux ..... 5
    The open box ..... 6
    The Linux Truth ..... 7
    Knowledge ..... 8
    Flexibility ..... 9
    Stability ..... 10
    Scalability ..... 11
    Security ..... 11
    Freedom ..... 12
    Longevity ..... 13
    Should I be a SysAdmin? ..... 15
    About this course ..... 17
    About the experiments ..... 18
    What to do if the experiments do not work ..... 20
    Terminology ..... 21



    How to access the command line ..... 21
    Chapter summary ..... 22
    Exercises ..... 22

Chapter 2: Introduction to Operating Systems ..... 23
    Objectives ..... 23
    Choice – Really! ..... 23
    What is an operating system? ..... 24
    Hardware ..... 25
    The operating system ..... 30
    Typical operating system functions ..... 31
    Memory management ..... 32
    Multitasking ..... 32
    Multiuser ..... 33
    Process management ..... 34
    Interprocess communication ..... 35
    Device management ..... 35
    Error handling ..... 36
    Utilities ..... 36
    A bit of history ..... 37
    Starting with UNICS ..... 37
    UNIX ..... 38
    A (very) brief history of Linux ..... 41
    Core utilities ..... 41
    GNU coreutils ..... 42
    util-linux ..... 43
    Copyleft ..... 44
    Games ..... 44
    Chapter summary ..... 45
    Exercises ..... 45



Chapter 3: The Linux Philosophy for SysAdmins ..... 47
    Objectives ..... 47
    Background ..... 47
    The structure of the philosophy ..... 48
    The tenets ..... 50
    Data streams are a universal interface ..... 50
    Transforming data streams ..... 51
    Everything is a file ..... 52
    Use the Linux FHS ..... 52
    Embrace the CLI ..... 53
    Be the lazy SysAdmin ..... 54
    Automate everything ..... 54
    Always use shell scripts ..... 55
    Test early test often ..... 55
    Use common sense naming ..... 56
    Store data in open formats ..... 57
    Use separate filesystems for data ..... 58
    Make programs portable ..... 59
    Use open source software ..... 60
    Strive for elegance ..... 61
    Find the simplicity ..... 61
    Use your favorite editor ..... 63
    Document everything ..... 63
    Back up everything – frequently ..... 65
    Follow your curiosity ..... 65
    There is no should ..... 66
    Mentor the young SysAdmins ..... 67
    Support your favorite open source project ..... 67
    Reality bytes ..... 68
    Chapter summary ..... 69
    Exercises ..... 69


Chapter 4: Preparation ..... 71
    Objectives ..... 71
    Overview ..... 71
    Got root? ..... 72
    Hardware specifications ..... 73
    Host software requirements ..... 74
    Installing VirtualBox ..... 75
    Install VirtualBox on a Linux host ..... 75
    Install VirtualBox on a Windows host ..... 82
    Creating the VM ..... 86
    VirtualBox Manager ..... 86
    Configuring the virtual network ..... 88
    Preparing disk space ..... 90
    Download the ISO image file ..... 104
    Creating the VM ..... 105
    Chapter summary ..... 114
    Exercises ..... 115

Chapter 5: Installing Linux ..... 117
    Objectives ..... 117
    Overview ..... 117
    Boot the Fedora live image ..... 118
    Installing Fedora ..... 123
    Start the installation ..... 124
    Set the hostname ..... 125
    Hard drive partitioning ..... 126
    About swap space ..... 136
    Begin the installation ..... 140
    Set the root password ..... 141
    Create the student user ..... 143
    Finishing the installation ..... 144
    Exit the installer ..... 144


    Shut down the Live system ..... 145
    Reconfigure the VM ..... 146
    Create a snapshot ..... 146
    First boot ..... 148
    What to do if the experiments do not work ..... 149
    Chapter summary ..... 151
    Exercises ..... 151

Chapter 6: Using the Xfce Desktop ..... 153
    Objectives ..... 153
    Why Xfce ..... 153
    The desktop ..... 154
    The file manager ..... 156
    Stability ..... 156
    xfce4-terminal emulator ..... 156
    Configurability ..... 157
    Getting started ..... 157
    Login ..... 159
    Exploring the Xfce desktop ..... 162
    Settings Manager ..... 165
    Adding launchers to Panel 2 ..... 166
    Preferred applications ..... 168
    Desktop appearance ..... 170
    Appearance ..... 170
    Multiple desktops ..... 173
    Installing updates ..... 175
    Chapter summary ..... 178
    Exercises ..... 179

Chapter 7: Using the Linux Command Line ..... 181
    Objectives ..... 181
    Introduction ..... 181


    Preparation ..... 182
    Defining the command line ..... 183
    CLI terminology ..... 183
    Command prompt ..... 184
    Command line ..... 184
    Command-line interface ..... 184
    Command ..... 185
    Terminal ..... 185
    Console ..... 187
    Virtual consoles ..... 188
    Terminal emulator ..... 195
    Pseudo-terminal ..... 196
    Session ..... 197
    Shell ..... 198
    Secure Shell (SSH) ..... 201
    screen ..... 201
    The GUI and the CLI ..... 204
    Some important Linux commands ..... 205
    The PWD ..... 206
    Directory path notation styles ..... 206
    Moving around the directory tree ..... 207
    Tab completion facility ..... 212
    Exploring files ..... 214
    More commands ..... 217
    Command recall and editing ..... 220
    Chapter summary ..... 223
    Exercises ..... 223

Chapter 8: Core Utilities ..... 225
    Objectives ..... 225
    GNU coreutils ..... 225
    util-linux ..... 230


    Chapter summary ..... 236
    Exercises ..... 237

Chapter 9: Data Streams ..... 239
    Objectives ..... 239
    Data streams as raw materials ..... 239
    Text streams – A universal interface ..... 241
    STDIO file handles ..... 241
    Preparing a USB thumb drive ..... 242
    Generating data streams ..... 247
    Test a theory with yes ..... 250
    Exploring the USB drive ..... 254
    Randomness ..... 261
    Pipe dreams ..... 262
    Building pipelines ..... 264
    Redirection ..... 265
    Just grep’ing around ..... 268
    Cleanup ..... 269
    Chapter summary ..... 270
    Exercises ..... 271

Chapter 10: Text Editors ..... 273
    Objectives ..... 273
    Why we need text editors ..... 273
    Vim ..... 275
    Other editors ..... 276
    Emacs ..... 276
    gedit ..... 276
    Leafpad ..... 277
    Kate ..... 277
    xfw ..... 277
    xed ..... 277


    Learning Vim ..... 277
    Disabling SELinux ..... 278
    Use your favorite text editor ..... 280
    Chapter summary ..... 281
    Exercises ..... 281

Chapter 11: Working As Root ..... 283
    Objectives ..... 283
    Why root? ..... 283
    More about the su command ..... 284
    Getting to know the root account ..... 286
    Disadvantages of root ..... 292
    Escalating user privilege ..... 293
        The bad ways ..... 293
        Using sudo ..... 293
    Using su as root ..... 305
    Chapter summary ..... 306
    Exercises ..... 306

Chapter 12: Installing and Updating Software ..... 309
    Objectives ..... 309
        Dependency hell ..... 309
    RPM ..... 310
    YUM ..... 315
    DNF ..... 316
        Installing packages ..... 317
        Installing updates ..... 320
        Post-update tasks ..... 323
        Removing packages ..... 324
    Groups ..... 326
    Adding repositories ..... 327



    About the kernel ..... 330
    Chapter summary ..... 332
    Exercises ..... 332

Chapter 13: Tools for Problem Solving ..... 335
    Objectives ..... 335
    The art of problem solving ..... 336
        The five steps of problem solving ..... 336
        Knowledge ..... 337
        Observation ..... 338
        Reasoning ..... 339
        Action ..... 340
        Test ..... 340
    System performance and problem solving ..... 341
        top ..... 342
    Other top-like tools ..... 358
        htop ..... 359
        atop ..... 361
    More tools ..... 364
        Memory tools ..... 364
        Tools that display disk I/O statistics ..... 366
    The /proc filesystem ..... 369
    Exploring hardware ..... 372
    Monitoring hardware temperatures ..... 374
        Monitoring hard drives ..... 377
    System statistics with SAR ..... 386
        Installation and configuration ..... 386
        Examining collected data ..... 386
    Cleanup ..... 391
    Chapter summary ..... 392
    Exercises ..... 393


Chapter 14: Terminal Emulator Mania ..... 395
    Objectives ..... 395
    About terminals ..... 395
    My requirements ..... 396
        rxvt ..... 398
        xfce4-terminal ..... 398
        LXTerminal ..... 402
        Tilix ..... 404
        Konsole ..... 410
        Terminator ..... 412
    Chapter summary ..... 415
    Exercises ..... 415

Chapter 15: Advanced Shell Topics ..... 417
    Objectives ..... 417
    The Bash shell ..... 418
    Shell options ..... 418
    Shell variables ..... 420
    Commands ..... 421
        The PATH ..... 422
        Internal commands ..... 424
        External commands ..... 427
        Forcing the use of external commands ..... 428
    Compound commands ..... 429
    Time-saving tools ..... 433
        Brace expansion ..... 433
        Special pattern characters ..... 435
        Sets ..... 438
        Meta-characters ..... 440
    Using grep ..... 440
    Finding files ..... 445


    Chapter summary ..... 448
    Exercises ..... 448

Chapter 16: Linux Boot and Startup ..... 451
    Objectives ..... 451
    Overview ..... 451
    Hardware boot ..... 452
    Linux boot ..... 453
        GRUB ..... 454
        Configuring GRUB ..... 464
        The Linux kernel ..... 470
    Linux startup ..... 471
        systemd ..... 471
        Graphical login screen ..... 478
    About the login ..... 487
        CLI login screen ..... 487
        GUI login screen ..... 488
    Chapter summary ..... 489
    Exercises ..... 490

Chapter 17: Shell Configuration ..... 491
    Objectives ..... 491
    Starting the shell ..... 492
        Non-login shell startup ..... 495
        Login shell startup ..... 495
        Exploring the global configuration scripts ..... 496
        Exploring the local configuration scripts ..... 499
        Testing it ..... 500
    Exploring the environment ..... 504
        User shell variables ..... 505



    Aliases ..... 508
    Chapter summary ..... 510
    Exercises ..... 510

Chapter 18: Files, Directories, and Links ..... 513
    Objectives ..... 513
    Introduction ..... 514
    Preparation ..... 514
    User accounts and security ..... 516
    File attributes ..... 517
        File ownership ..... 517
        File permissions ..... 520
        Directory permissions ..... 522
        Implications of Group ownership ..... 522
        umask ..... 527
        Changing file permissions ..... 529
        Applying permissions ..... 531
        Timestamps ..... 532
    File meta-structures ..... 533
        The directory entry ..... 533
        The inode ..... 533
    File information ..... 533
    Links ..... 536
        Hard links ..... 537
    Chapter summary ..... 546
    Exercises ..... 546

Chapter 19: Filesystems ..... 549
    Objectives ..... 549
    Overview ..... 549
    Definitions ..... 550
    Filesystem functions ..... 551


    The Linux Filesystem Hierarchical Standard ..... 553
        The standard ..... 553
        Problem solving ..... 556
        Using the filesystem incorrectly ..... 556
        Adhering to the standard ..... 557
    Linux unified directory structure ..... 557
    Filesystem types ..... 559
    Mounting ..... 561
    The Linux EXT4 filesystem ..... 562
        Cylinder groups ..... 563
        The inode ..... 569
        Journal ..... 570
    Data allocation strategies ..... 572
        Data fragmentation ..... 573
    Repairing problems ..... 578
        The /etc/fstab file ..... 578
        Repairing damaged filesystems ..... 585
    Creating a new filesystem ..... 594
        Finding space ..... 595
        Add a new virtual hard drive ..... 596
    Other filesystems ..... 604
    Chapter summary ..... 606
    Exercises ..... 606

Bibliography ..... 609
    Books ..... 609
    Web sites ..... 610

Index ..... 615


About the Author

David Both is an open source software and GNU/Linux advocate, trainer, writer, and speaker. He has been working with Linux and open source software for more than 20 years and has been working with computers for over 45 years. He is a strong proponent of and evangelist for the “Linux Philosophy for System Administrators.” David has been in the IT industry for over 40 years. Mr. Both worked for IBM for 21 years and, while working as a Course Development Representative in Boca Raton, FL, in 1981, wrote the training course for the first IBM PC. He has taught RHCE classes for Red Hat and has worked at MCI WorldCom, Cisco, and the State of North Carolina. In most of the places he has worked since leaving IBM in 1995, he has taught classes on Linux ranging from Lunch’n’Learns to full five-day courses. Helping others learn about Linux and open source software is one of his great pleasures.

David prefers to purchase the components and build his own computers from scratch to ensure that each new computer meets his exacting specifications. Building his own computers also means not having to pay the Microsoft tax. His latest build is an ASUS TUF X299 motherboard and an Intel i9 CPU with 16 cores (32 CPUs) and 64GB of RAM in a ThermalTake Core X9 case.

He has written articles for magazines including Linux Magazine, Linux Journal, and OS/2 back when there was such a thing. His article “Complete Kickstart,” co-authored with a colleague at Cisco, was ranked 9th in the Linux Magazine Top Ten Best System Administration Articles list for 2008. He currently writes prolifically and is a volunteer community moderator for Opensource.com. He particularly enjoys learning new things while researching his articles.

David currently lives in Raleigh, NC, with his very supportive wife and a strange rescue dog that is mostly Jack Russell. David also likes reading, travel, the beach, old M*A*S*H reruns, and spending time with his two children, their spouses, and four grandchildren.
David can be reached at [emailprotected] or on Twitter @LinuxGeek46.

About the Technical Reviewer

Jason Baker has been a Linux user since the early 2000s, ever since stuffing a Slackware box under his desk and trying to make the darn thing work. He is a writer and presenter on a variety of open source projects and technologies, much of which can be found on Opensource.com. A Red Hat Certified System Administrator, he is currently the managing editor of Enable SysAdmin, Red Hat’s community publication for system administrators. When he’s not at work, he enjoys tinkering with hardware and using open source tools to play with maps and other visualizations of cool data sets. He lives in Chapel Hill, NC, with his wife, Erin, and their rescue cat, Mary.


Acknowledgments

Writing a book is not a solitary activity, and this massive three-volume Linux training course required a team effort so much more than most.

The most important person in this effort has been my awesome wife, Alice, who has been my head cheerleader and best friend throughout. I could not have done this without your support and love.

I am grateful for the support and guidance of Louise Corrigan, senior editor for open source at Apress, who believed in me and my vision for this book. This book would not have been possible without her.

To my coordinating editor, Nancy Chen, I owe many thanks for her hours of work and guidance, and for being there to discuss many aspects of this book. As it grew and then continued to grow some more, our discussions were invaluable in helping to shape the final format of this work.

And to Jim Markham, my development editor, who quietly kept an eye and a guiding hand on the vast volume of material in these three volumes to ensure that the end result would meet the needs of you, my readers – and, most importantly, of you as students.

Jason Baker, my intrepid technical reviewer, has done an outstanding job of ensuring the technical accuracy of the first two volumes and part of the third volume of this course. Due to the major changes made in some parts of the course as its final form materialized, he retested some chapters in their entirety to help ensure that I had not screwed anything up. Jason also made important suggestions that have significantly enhanced the quality and scope of the entire three-volume work. These volumes are much better for his contributions. Of course, any remaining errors and omissions are my responsibility alone.


Introduction

First, thank you for purchasing Using and Administering Linux: Volume 1 – Zero to SysAdmin: Getting Started. The Linux training course upon which you have embarked is significantly different from other training that you could purchase to learn about Linux.

About this course

This Linux training course, Using and Administering Linux – Zero to SysAdmin, consists of three volumes. Each of these three volumes is closely connected, and they build upon each other. For those new to Linux, it’s best to start here with Volume 1, where you’ll be guided through the creation of a virtual laboratory – a virtual network and a virtual machine – which will be used and modified by many of the experiments in all three volumes. More experienced Linux users can begin with later volumes and download the script that will set up the VM for the start of Volumes 2 and 3. Instructions provided with the script specify the configuration of the virtual network and the virtual machine. Refer to the following volume overviews to select the volume of this course most appropriate for your current skill level.

This Linux training course differs from others because it is a complete self-study course. Newcomers should start at the beginning of Volume 1 and read the text, perform all of the experiments, and complete all of the chapter exercises through to the end of Volume 3. If you do this, even if you are starting from zero knowledge about Linux, you can learn the tasks necessary to become a Linux system administrator – a SysAdmin.

Another difference is that all of the experiments are performed on one or more virtual machines (VMs) in a virtual network. Using the free software VirtualBox, you will create this virtual environment on any reasonably sized host, whether Linux or Windows. In this virtual environment, you are free to experiment on your own and make mistakes that could damage the Linux installation of a hardware host, yet still be able to recover completely by restoring the Linux VM from any one of multiple snapshots. This flexibility to take risks and recover easily makes it possible to learn more than would otherwise be possible.

Introduction

I have always found that I learn more from my mistakes than I ever have when things work as they are supposed to. For this reason, I suggest that rather than immediately reverting to an earlier snapshot when you run into trouble, you try to figure out how the problem was created and how best to recover from it. If, after a reasonable period of time, you have not resolved the problem, that is the point at which reverting to a snapshot makes sense.

Each chapter has specific learning objectives, interactive experiments, and review exercises that include both hands-on experiments and review questions. I learned this format when I worked as a course developer for IBM from 1978 through 1981. It is a tried-and-true format that works well for self-study.

These course materials can also be used as reference materials. I have used my previous course materials for reference for many years, and they have been very useful in that role. I have kept this as one of my goals for this set of materials.

Note Not all of the review exercises in this course can be answered simply by reviewing the chapter content. For some questions you will need to design your own experiment in order to find a solution. In many cases there will likely be multiple solutions, and any that produce the correct results are “correct” ones.

Process

The process that goes with this format is just as important as the format of the course – really, even more so. The first thing that a course developer must do is generate a list of requirements that define both the structure and the content of the course. Only then can the process of writing the course proceed. In fact, I often find it helpful to write the review questions and exercises before I create the rest of the content, and I have worked in this manner in many chapters of this course.

These courses present a complete, end-to-end Linux training course for students like you who know before you start that you want to learn to be a Linux system administrator – a SysAdmin. This Linux course will allow you to learn Linux right from the beginning with the objective of becoming a SysAdmin.


Introduction

Many Linux training courses begin with the assumption that the first course a student should take is one designed to start them as users. Those courses may discuss the role of root in system administration but ignore topics that are important to future SysAdmins. Other courses ignore system administration altogether. A typical second course will introduce the student to system administration, while a third may tackle advanced administration topics.

Frankly, this baby-step approach did not work well for many of us who are now Linux SysAdmins. We became SysAdmins, in part at least, due to our intense desire – our deep need – to learn as much as possible as quickly as possible. It is also, I think in large part, due to our highly inquisitive natures. We learn a basic command and then start asking questions, experimenting with it to see what its limits are, what breaks it, and what using it can break. We explore the man(ual) pages and other documentation to learn the extreme usages to which it might be put. If things don't break by themselves, we break them intentionally to see how they work and to learn how to fix them. We relish our own failures because we learn more from fixing them than we do when things always work as they are supposed to.

In this course we will dive deep into Linux system administration almost from the very beginning. You will learn many of the Linux tools required to use and administer Linux workstations and servers – usually multiple tools that can be applied to each of these tasks. This course contains many experiments to provide you with the kind of hands-on experiences that SysAdmins appreciate. All of these experiments guide you one step at a time into the elegant and beautiful depths of the Linux experience. You will learn that Linux is simple and that simplicity is what makes it both elegant and knowable.
Based on my own years working with Unix and Linux, the course materials contained in these three volumes are designed to introduce you to the practical, daily tasks you will perform as a Linux user and, at the same time, as a Linux system administrator – a SysAdmin.

But I do not know everything – that is just not possible – no SysAdmin does. Further, no two SysAdmins know exactly the same things, because that too is impossible. We have each started with different knowledge and skills; we have different goals; we have different experiences because the systems on which we work have failed in different ways, had different hardware, were embedded in different networks, had different distributions installed, and have many other differences. We use different tools and approaches to problem solving because the many different mentors and teachers we had used different sets of tools from each other; we use different Linux distributions; we think differently; and we know different things about the hardware on which Linux runs. Our past is much of what makes us what we are and what defines us as SysAdmins.


So I will show you things in this course – things that I think are important for you to know – things that, in my opinion, will provide you with the skills to use your own curiosity and creativity to find solutions that I would never think of, to problems I have never encountered.

What this course is not

This course is not a certification study guide. It is not designed to help you pass a certification test of any type. This course is intended purely to help you become a good or perhaps even great SysAdmin, not to pass a test.

There are a few good certification tests. Red Hat and Cisco certifications are among the best because they are based on the test-taker's ability to perform specific tasks. I am not familiar with any of the other certification tests because I have not taken them. But the courses you can take and the books you can purchase to help you pass those tests are designed to help you pass the tests, not to teach you to administer a Linux host or network. That does not make them bad – just different from this course.

Content overview

Because there are three volumes to this course, and because I reference other chapters, some of which may be in other volumes, we need a method for specifying the volume in which the referenced material exists. If the material is in another volume, I will always specify the volume number, that is, "Chapter 2 in Volume 3" or "Volume 2, Chapter 5." If the material is in the same volume as the reference to it, I may simply specify the chapter number; however, I may also reference the current volume number for clarity.

This overview of the contents of each volume should serve as an orientation guide if you need to locate specific information. If you are trying to decide whether to purchase this book and its companion volumes, it will give you a good overview of the entire course.



Using and Administering Linux: Volume 1 – Zero to SysAdmin: Getting Started

Volume 1 of this training course introduces operating systems in general and Linux in particular. It briefly explores The Linux Philosophy for SysAdmins1 in preparation for the rest of the course. Chapter 4 then guides you through the use of VirtualBox to create a virtual machine (VM) and a virtual network to use as a test laboratory for performing the many experiments that are used throughout the course. In Chapter 5, you will install the Xfce version of Fedora – a popular and powerful Linux distribution – on the VM. In Chapter 6, you will learn to use the Xfce desktop, which will enable you to leverage your growing command-line interface (CLI) expertise as you proceed through the course.

Chapters 7 and 8 will get you started using the Linux command line and introduce you to some of the basic Linux commands and their capabilities. In Chapter 9, you will learn about data streams and the Linux tools used to manipulate them. And in Chapter 10, you will learn a bit about several text editors, which are indispensable to advanced Linux users and system administrators.

Chapters 11 through 13 start your work as a SysAdmin and take you through some specific tasks such as installing software updates and new software. Chapters 14 and 15 discuss more terminal emulators and some advanced shell skills. In Chapter 16, you will learn about the sequence of events that take place as the computer boots and Linux starts up. Chapter 17 shows you how to configure your shell to personalize it in ways that can seriously enhance your command-line efficiency. Finally, Chapters 18 and 19 dive into all things file and filesystems.

1. Introduction
2. Introduction to Operating Systems
3. The Linux Philosophy for SysAdmins
4. Preparation
5. Installing Linux
6. Using the Xfce Desktop
7. Using the Linux Command Line

1. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018


8. Core Utilities
9. Data Streams
10. Text Editors
11. Working As Root
12. Installing and Updating Software
13. Tools for Problem Solving
14. Terminal Emulator Mania
15. Advanced Shell Topics
16. Linux Boot and Startup
17. Shell Configuration
18. Files, Directories, and Links
19. Filesystems

Using and Administering Linux: Volume 2 – Zero to SysAdmin: Advanced Topics

Volume 2 of Using and Administering Linux introduces you to some incredibly powerful and useful advanced topics that every SysAdmin must know. In Chapters 1 and 2, you will experience an in-depth exploration of logical volume management – and what that even means – as well as the use of file managers to manipulate files and directories. Chapter 3 introduces the concept that, in Linux, everything is a file. You will also learn some fun and interesting uses of the fact that everything is a file.

In Chapter 4, you will learn to use several tools that enable the SysAdmin to manage and monitor running processes. Chapter 5 enables you to experience the power of the special filesystems, such as /proc, which enable us as SysAdmins to monitor and tune the kernel while it is running – without a reboot. Chapter 6 will introduce you to regular expressions and the power that using them for pattern matching can bring to the command line, while Chapter 7 discusses managing printers and printing from the command line. In Chapter 8, you will use several tools to unlock the secrets of the hardware on which your Linux operating system is running.


Chapters 9 through 11 show you how to do some simple – and not so simple – command-line programming and how to automate various administrative tasks. You will begin to learn the details of networking in Chapter 12, and Chapters 13 through 15 show you how to manage the many services that are required in a Linux system. You will also explore the underlying software that manages the hardware and can detect when hardware devices such as USB thumb drives are installed, and how the system reacts to that. Chapter 16 shows you how to use the logs and journals to look for clues to problems and confirmation that things are working correctly. Chapters 17 and 18 show you how to enhance the security of your Linux systems, including how to perform easy local and remote backups.

1. Logical Volume Management
2. File Managers
3. Everything Is a File
4. Managing Processes
5. Special Filesystems
6. Regular Expressions
7. Printing
8. Hardware Detection
9. Command-Line Programming
10. Automation with BASH Scripts
11. Time and Automation
12. Networking
13. systemd
14. dbus and Udev
15. Using Logs and Journals
16. Managing Users
17. Security
18. Backups


Using and Administering Linux: Volume 3 – Zero to SysAdmin: Network Services

In Volume 3 of Using and Administering Linux, you will start by creating a new VM on the existing virtual network. This new VM will be used as a server for the rest of this course, and it will replace some of the functions performed by the virtual router that is part of our virtual network.

Chapter 2 begins this transformation from simple workstation to server by adding a new network interface card (NIC) to the VM so that it can act as a firewall and router and then changing its network configuration from DHCP to static. This includes configuring both NICs so that one is connected to the existing virtual router to allow connections to the outside world and the other NIC connects to the new "inside" network that will contain the existing VM.

Chapters 3 and 4 guide you through setting up the necessary services, DHCP and DNS, which are required to support a managed, internal network, and Chapter 5 takes you through configuration of SSHD to provide secure remote access between Linux hosts. In Chapter 6, you will convert the new server into a router with a simple yet effective firewall.

You will learn to install and configure an enterprise-class e-mail server that can detect and block most spam and malware in Chapters 7 through 9. Chapter 10 takes you through setting up a web server, and in Chapter 11, you will set up WordPress, a flexible and powerful content management system. In Chapter 12, you return to e-mail by setting up a mailing list using Mailman. Then Chapter 13 guides you through sharing files to both Linux and Windows hosts. Sometimes accessing a desktop remotely is the only way to do some things, so in Chapter 14, you will do just that. Chapter 15 shows you how to set up a time server on your network and how to determine its accuracy.
Although we have incorporated security in all aspects of what has already been covered, Chapter 16 covers some additional security topics. Chapter 17 approaches package management from the other direction by guiding you through the process of creating an RPM package for the distribution of your own scripts and configuration files.



Finally, Chapter 18 will get you started in the right direction because I know you are going to ask, "Where do I go from here?"

1. Preparation
2. Server Configuration
3. DHCP
4. Name Services – DNS
5. Remote Access with SSH
6. Routing and Firewalls
7. Introducing E-mail
8. E-mail Clients
9. Combating Spam
10. Apache Web Server
11. WordPress
12. Mailing Lists
13. File Sharing with NFS and SAMBA
14. Using Remote Desktop Access
15. Does Anybody Know What Time It Is?
16. Security
17. Advanced Package Management
18. Where Do I Go from Here?

Taking this course

Although designed primarily as a self-study guide, this course can be used effectively in a classroom environment. This course can also be used very effectively as a reference. Many of the original course materials I wrote for Linux training classes I used to teach as an independent trainer and consultant were valuable to me as references. The experiments became models for performing many tasks and later became the basis for


automating many of those same tasks. I have used many of those original experiments in parts of this course, because they are still relevant and provide an excellent reference for many of the tasks I still need to do.

You will see as you proceed through the course that it uses many software programs considered to be older and perhaps obsolete, like Sendmail, Procmail, BIND, the Apache web server, and much more. Despite their age, or perhaps because of it, the software I have chosen to run my own systems and servers and to use in this course has been well proven and is all still in widespread use. I believe that the software we will use in these experiments has properties that make it especially valuable in learning the in-depth details of how Linux and those services work. Once you have learned those details, moving to any other software that performs the same tasks will be relatively easy. In any event, none of that "older" software is anywhere near as difficult or obscure as some people seem to think it is.

Who should take this course

If you want to learn to be an advanced Linux user and SysAdmin, this course is for you. Most SysAdmins have an extremely high level of curiosity and a deep-seated need to learn Linux system administration. We like to take things apart and put them back together again to learn how they work. We enjoy fixing things and are not hesitant about diving in to fix the computer problems that our friends and coworkers bring us.

We want to know what happens when some part of computer hardware fails, so we save defective components such as motherboards, RAM, and hard drives with which we can run tests. As I write this, I have a known defective hard drive inserted in a hard drive docking station connected to my primary workstation, and I have been using it to test failure scenarios that will appear later in this course.

Most importantly, we do all of this for fun and would continue to do so even if we had no compelling vocational reason for doing so. Our intense curiosity about computer hardware and Linux leads us to collect computers and software like others collect stamps or antiques. Computers are our avocation – our hobby. Some people like boats, sports, travel, coins, stamps, trains, or any of thousands of other things, and they pursue them relentlessly as a hobby. For us – the true SysAdmins – that is what our computers are.



That does not mean we are not well-rounded and do not do other things. I like to travel, read, go to museums and concerts, and ride historical trains, and my stamp collection is still there, waiting for me when I decide to take it up again. In fact, the best SysAdmins, at least the ones I know, are all multifaceted. We are involved in many different things, and I think that is due to our inexhaustible curiosity about pretty much everything. So if you have an insatiable curiosity about Linux and want to learn about it – regardless of your past experience or lack thereof – then this course is most definitely for you.

Who should not take this course

If you do not have a strong desire to learn about or to administer Linux systems, this course is not for you. If all you want – or need – to do is use a couple of apps on a Linux computer that someone has put on your desk, this course is not for you. If you have no curiosity about what superpowers lie underneath the GUI desktop, this course is not for you.

Why this course

Someone asked me why I wanted to write this course. My answer is simple – I want to give back to the Linux community. I have had several amazing mentors over the span of my career, and they taught me many things – things I find worth sharing with you, along with much that I have learned for myself.

This course – all three volumes of it – started its existence as the slide presentations and lab projects for three Linux courses I created and taught. For a number of reasons, I do not teach those classes anymore. However, I would still like to pass on my knowledge and as many of the tips and tricks I have learned for the administration of Linux as possible. I hope that with this course, I can pass on at least some of the guidance and mentoring that I was fortunate enough to have in my own career.


CHAPTER 1

Introduction

Objectives

After reading this chapter, you will be able to:

• Define the value proposition of Linux

• Describe at least four attributes that make Linux desirable as an operating system

• Define the meaning of the term "free" when it is applied to open source software

• State the Linux Truth and its meaning

• Describe how open source software makes the job of the SysAdmin easier

• List some of the traits found in a typical SysAdmin

• Describe the structure of the experiments used throughout this course

• List two types of terminal environments that can be used to access the Linux command line

About Linux

The value of any software lies in its usefulness, not in its price.
—Linus Torvalds1

1. Wikipedia, Linus Torvalds, https://en.wikipedia.org/wiki/Linus_Torvalds


© David Both 2020 D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_1




The preceding quote from Linus Torvalds, the creator of Linux,2 perfectly describes the value proposition of free open source software (FOSS) and particularly Linux. Expensive software that performs poorly or does not meet the needs of its users can in no way be worth any amount of money. On the other hand, free software that meets the needs of its users has great value to those users. Most open source software3 falls into the latter category. It is software that millions of people find extremely useful, and that is what gives it such great value. I have personally downloaded and used only one proprietary software application in the more than 20 years that I have been using Linux.

Linux itself is a complete, open source operating system that is open, flexible, stable, scalable, and secure. Like all operating systems, it provides a bridge between the computer hardware and the application software that runs on it. It also provides tools that can be used by a system administrator, a SysAdmin, to monitor and manage the following things:

1. The functions and features of the operating system itself

2. Productivity software like word processors; spreadsheets; financial, scientific, industrial, and academic software; and much more

3. The underlying hardware, for example, temperatures and operational status

4. Software updates to fix bugs

5. Upgrades to move from one release level of the operating system to the next higher level

The tasks that need to be performed by the system administrator are inseparable from the philosophy of the operating system, both in terms of the tools which are available to perform them and the freedom afforded to the SysAdmin in the performance of those tasks. Let's look very briefly at the origins of both Linux and Windows and explore a bit about how the philosophies of their creators affect the job of a SysAdmin.

2. Wikipedia, History of Linux, https://en.wikipedia.org/wiki/History_of_Linux
3. Wikipedia, Open Source Software, https://en.wikipedia.org/wiki/Open-source_software




The birth of Windows

The proprietary DEC VAX/VMS4 operating system was designed by developers who subscribed to a closed philosophy – that is, that the user should be protected from the internal "vagaries" of the system5 because the users are afraid of computers.

Dave Cutler,6 who wrote the DEC VAX/VMS operating system, is also the chief architect of Windows NT, the parent of all current forms of Windows. Cutler was hired away from DEC by Microsoft with the specific intention of having him write Windows NT. As part of his deal with Microsoft, he was allowed to bring many of his top engineers from DEC with him. Therefore, it should be no surprise that the Windows versions of today, however far removed from Windows NT they might be, remain hidden behind this veil of secrecy.

Black box syndrome

Let's look at what proprietary software means to someone trying to fix it. I will use a trivial black box example to represent some hypothetical compiled, proprietary software. This software was written by a hypothetical company that wants to keep the source code a secret so that its alleged "trade secrets" cannot be stolen.

As the hypothetical user of this hypothetical proprietary software, I have no knowledge of what happens inside the bit of compiled machine language code to which I have access. Part of that restriction is contractual – notice that I do not say "legal" – in a license agreement that forbids me from reverse engineering the machine code to produce the source code.

The sole function of this hypothetical code is to print "no" if the number input is less than 17 and to print "yes" if the input is 17 or more. This result might be used to determine whether my customer receives a discount on orders of 17 units or more. After using this software for a number of weeks/months/years, everything seems normal until one of my customers complains that they should have received the discount but did not.
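The hidden logic in this example can be sketched in a few lines of Python. This is purely illustrative: the function name and structure are my invention, not any vendor's actual code, and the point is only to show how small the defect inside the sealed box might be.

```python
# Hypothetical sketch of the logic sealed inside the proprietary binary.
# The business rule: orders of 17 units or more should earn the discount.
def discount(units: int) -> str:
    # BUG: '>' should be '>=', so an order of exactly 17 units fails.
    return "yes" if units > 17 else "no"

for n in (16, 17, 18):
    print(n, discount(n))   # prints: 16 no, 17 no (wrong!), 18 yes
```

With the source sealed away, all a user can observe is this input/output behavior; the one-character cause of the failure remains invisible.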

4. Renamed to OpenVMS circa late 1991
5. Gancarz, Mike, Linux and the Unix Philosophy, Digital Press, 2003, 146–148
6. ITPro Today, Windows NT and VMS: The Rest of the Story, www.itprotoday.com/management-mobility/windows-nt-and-vms-rest-story



Simple testing of input numbers from 0 to 16 produces the correct output of "no." Testing of numbers from 18 and up produces the correct output of "yes." Testing of the number 17 results in an incorrect output of "no." Why? We have no way of knowing why! The program fails on the edge case of exactly 17. I can surmise that there is an incorrect logical comparison in the code, but without access to the source code, I can neither verify this nor fix it myself.

So I report this problem to the vendor from whom I purchased the software. They tell me they will fix it in the next release. "When will that be?" I ask. "In about six months – or so," they reply. I must now task one of my workers to check the results of every sale to verify whether the customer should receive the discount. If they should, we assign other people to cut a refund check and send that along with a letter explaining the situation.

After a few months with no work on a fix from the vendor, I call to try to determine the status of the fix. They tell me that they have decided not to fix the problem because I am the only one having it. The translation of this is "Sorry, you don't spend enough money with us to warrant fixing the problem." They also tell me that the new owners, the venture capital company that bought out the company from which I purchased the software, will no longer be selling or supporting that software anyway.

I am left with useless – less than useless – software that will never be fixed and that I cannot fix myself. Neither can anyone else who purchased that software fix it if they ever run into this problem. Because it is completely closed and the sealed box in which it exists is impenetrable, proprietary software is unknowable. Windows is like this. Even most Windows support staff have no idea how it works inside.
This is why the most common advice to fix Windows problems is to reboot the computer – because it is impossible to reason about a closed, unknowable system of any kind. Operating systems like Windows that shield their users from the power they possess were developed starting with the basic assumption that the users are not smart or knowledgeable enough to be trusted with the full power that computers can actually provide. These operating systems are restrictive and have user interfaces – both command line and graphical – which enforce those restrictions by design. These restrictive user interfaces force regular users and SysAdmins alike into an enclosed room with no windows and then slam the door shut and triple lock it. That locked room prevents them from doing many clever things that can be done with Linux.




The command-line interfaces of such limiting operating systems offer relatively few commands, providing a de facto limit on the possible activities in which anyone might engage. Some users find this a comfort. I do not, and apparently neither do you, judging from the fact that you are reading this book.

The birth of Linux

The short version of this story is that the developers of Unix, led by Ken Thompson7 and Dennis Ritchie,8 designed Unix to be open and accessible in a way that made sense to them. They created rules, guidelines, and procedural methods and then designed them into the structure of the operating system. That worked well for system developers, and it also – partly, at least – worked for SysAdmins (system administrators). That collection of guidance from the originators of the Unix operating system was codified in the excellent book The Unix Philosophy, by Mike Gancarz, and then later updated by Mr. Gancarz as Linux and the Unix Philosophy.9

Another fine book, The Art of Unix Programming,10 by Eric S. Raymond, provides the author's philosophical view of programming in a Unix environment. It is also somewhat of a history of the development of Unix as it was experienced and recalled by the author. This book is also available in its entirety at no charge on the Internet.11

In 1991, in Helsinki, Finland, Linus Torvalds was taking computer science classes using Minix,12 a tiny variant of Unix that was written by Andrew S. Tanenbaum.13 Torvalds was not happy with Minix, as it had many deficiencies, at least to him. So he wrote his own operating system and shared that fact and the code on the Internet. This little operating system, which started as a hobby, eventually became known as Linux as a tribute to its creator and was distributed under the GNU GPL 2 open source license.14

7. https://en.wikipedia.org/wiki/Ken_Thompson
8. https://en.wikipedia.org/wiki/Dennis_Ritchie
9. Mike Gancarz, "Linux and the Unix Philosophy," Digital Press – an imprint of Elsevier Science, 2003, ISBN 1-55558-273-7
10. Eric S. Raymond, "The Art of Unix Programming," Addison-Wesley, September 17, 2003, ISBN 0-13-142901-9
11. Eric S. Raymond, "The Art of Unix Programming," www.catb.org/esr/writings/taoup/html/index.html/
12. https://en.wikipedia.org/wiki/MINIX
13. https://en.wikipedia.org/wiki/Andrew_S._Tanenbaum
14. https://en.wikipedia.org/wiki/GNU_General_Public_License



Wikipedia has a good history of Linux,15 as does DigitalOcean.16 For a more personal history, read Linus Torvalds' own book, Just for Fun.17

The open box

Let's imagine the same software as in the previous example, but this time written by a company that open sourced it and provides the source code should I want it. The same situation occurs. In this case, I report the problem, and they reply that no one else has had this problem and that they will look into it but don't expect to fix it soon.

So I download the source code. I immediately see the problem and write a quick patch for it. I test the patch on some samples of my own customer transactions – in a test environment, of course – and find that the results show the problem has been fixed. I submit the patch to them along with my basic test results. They tell me that is cool, insert the patch into their own code base, run it through testing, and determine that the fix works. At that point they add the revised code into the main trunk of their code base, and all is well.

Of course, if they get bought out or otherwise become unable or unwilling to maintain the software, the result would be the same. I would still have the open source code, could fix it, and could make it available to whoever took over the development of the open source product.

This scenario has taken place more than once. In one instance, I took over the development of a bit of shell script code from a developer in Latvia who no longer had the time to maintain it, and I maintained it for several years. In another instance, a large company purchased the software firm that produced StarOffice and open sourced that office suite under the name OpenOffice.org. Later, a large computer company purchased OpenOffice.org. The new organization decided they would create their own version of the software starting from the existing code. That turned out to be quite a flop. Most of the developers of the open source version migrated to a new, open organization that maintains the reissued software, now called LibreOffice. OpenOffice now languishes and has few developers, while LibreOffice flourishes.
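To make the patch scenario concrete, here is what the fix to the earlier hypothetical discount check might look like: a one-character change from a greater-than to a greater-than-or-equal comparison. Again, this is illustrative Python of my own invention, not code from any real product.

```python
# Hypothetical patched version of the open source discount check.
def discount(units: int) -> str:
    # Fixed: '>=' now includes the edge case of exactly 17 units.
    return "yes" if units >= 17 else "no"

for n in (16, 17, 18):
    print(n, discount(n))   # prints: 16 no, 17 yes, 18 yes
```

Because the source is open, any user with basic programming skills can find this, fix it, test it, and submit the patch upstream, which is exactly what happens in the story above.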
One advantage of open source software is that the source code is always available. Any developers can take it over and maintain it. Even if an individual or an organization

h ttps://en.wikipedia.org/wiki/History_of_Linux Juell, Kathleen, A Brief History of Linux, www.digitalocean.com/community/tutorials/ brief-history-of-linux 17 Torvalds, Linus, and Diamond, David, Just for fun: The story of an accidental revolutionary, HarperBusiness, 2001 15 16

6



tries to take it over and make it proprietary, they cannot, because the original code is out there and can be "forked" into a new but identical product by any developer or group. In the case of LibreOffice, there are thousands of people around the world contributing new code and fixes when they are required. Having the source code available is one of the main advantages of open source because anyone with the skills can look at it, fix it, and then make that fix available to the rest of the community surrounding that software.

§§§

In the context of open source software, the term "open" means that the source code is freely available for all to see and examine without restriction. Anyone with appropriate skills has legal permission to make changes to the code to enhance its functionality or to fix a bug.

For the latest release of the Linux kernel as I write this, version 4.17, released on June 03, 2018, over 1,700 developers from a multitude of disparate organizations around the globe contributed 13,500 changes to the kernel code. That does not even consider the changes to other core components of the Linux operating system, such as the core utilities, or major software applications such as LibreOffice, the powerful office suite that I use for writing my books and articles as well as for spreadsheets, drawings, presentations, and more. Projects such as LibreOffice have hundreds of their own developers.

This openness makes it easy for SysAdmins – and everyone else, for that matter – to explore all aspects of the operating system and to fully understand how any or all of it is supposed to work. This means that it is possible to apply one's full knowledge of Linux to use its powerful and open tools in a methodical reasoning process that can be leveraged for problem solving.

The Linux Truth

Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.
—Doug Gwyn

This quote summarizes the overriding truth and the philosophies of both Unix and Linux – that the operating system must trust the user. Only by extending this full measure of trust can the user access the full power made possible by the operating system. This truth applies to Linux because of its heritage as a direct descendant of Unix.

Chapter 1

Introduction

The Linux Truth results in an operating system that places no restrictions or limits on the things that users, particularly the root user, can do. The root user can do anything on a Linux computer. There are no limits of any type on the root user. Although there are a very few administrative speed bumps placed in the path of the root user, root can always remove those slight impediments and do all manner of stupid and clever things. Non-root users have a few limits placed on them, but they can still do plenty of clever things as well. The primary limits placed on non-root users are intended – mostly – to prevent them from doing things that interfere with others' ability to freely use the Linux host. These limits in no way prevent regular users from doing great harm to their own user accounts.

Even the most experienced users can do "stupid things" using Linux. My experience has been that recovery from my own not-so-infrequent stupidity has been made much easier by the open access to the full power of the operating system. I find that most times a few commands can resolve the problem without even a reboot. On a few occasions, I have had to switch to a lower runlevel to fix a problem. Only very infrequently have I needed to boot to recovery mode in order to edit a configuration file that I managed to damage so badly it caused serious problems, including failure to boot. It takes knowledge of the underlying philosophy, the structure, and the technology of Linux to be able to fully unleash its power, especially when things are broken. Linux just requires a bit of understanding and knowledge on the part of the SysAdmin to fully unlock its potential.
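As a quick illustration of how Linux distinguishes root from everyone else, the id command reports who you are; root is always user ID 0. This is a minimal sketch you can run from any shell:

```shell
# Print the current user name and numeric user ID (UID).
# The root user always has UID 0; regular users typically start at 1000.
id -un
id -u
```

If the second command prints 0, you are root and nothing on the system is off-limits to you.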

Knowledge

Anyone can memorize or learn commands and procedures, but rote memorization is not true knowledge. Without knowledge of the philosophy and how it is embodied in the elegant structure and implementation of Linux, applying the correct commands as tools to resolve complex problems is not possible. I have seen smart people who had a vast knowledge of Linux be unable to resolve a relatively simple problem because they were unaware of the elegance of the structure beneath the surface.

As a SysAdmin, part of my responsibility in many of my jobs has been to assist with hiring new employees. I participated in many technical interviews of people who had passed many Microsoft certifications and who had fine resumes. I also participated in

18. The root user is the administrator of a Linux host and can do everything and anything. Compared to other operating systems, non-root Linux users also have very few restrictions, but we will see later in this course that there are some limits imposed on them.


many interviews in which we were looking for Linux skills, but very few of those applicants had certifications. This was at a time when Microsoft certifications were the big thing, but it was during the early days of Linux in the data center, and few applicants were yet certified. We usually started these interviews with questions designed to determine the limits of the applicant's knowledge. Then we would get into the more interesting questions, ones that would test their ability to reason through a problem to find a solution. I noticed some very interesting results. Few of the Windows certificate owners could reason their way through the scenarios we presented, while a very large percentage of the applicants with a Linux background were able to do so. I think that result was due in part to the fact that obtaining the Windows certificates relied upon memorization rather than actual hands-on experience, combined with the fact that Windows is a closed system, which prevents SysAdmins from truly understanding how it works. I think that the Linux applicants did so much better because Linux is open on multiple levels and, as a result, logic and reason can be used to identify and resolve any problem. Any SysAdmin who has been using Linux for some time has had to learn about the architecture of Linux and has had a decent amount of experience with the application of knowledge, logic, and reason to the solution of problems.

Flexibility

To me, flexibility means the ability to run on any platform, not just Intel and AMD processors. Scalability is about power, but flexibility is about running on many processor architectures. Wikipedia has a list of CPU architectures supported by Linux, and it is a long one. By my automated count, there are over 100 CPU architectures on which Linux is currently known to run. Note that this list changes; CPUs get added to and dropped from the list. But the point is well taken that Linux will run on many architectures. Even if your particular architecture is not currently supported, with some work Linux can be recompiled to run on it; it runs on essentially any 64-bit system and on some 32-bit ones.

This broad-ranging hardware support means that Linux can run on everything from my Raspberry Pi to my television, to vehicle entertainment systems, to cell phones, to

19. Wikipedia, List of Linux-supported computer architectures, https://en.wikipedia.org/wiki/List_of_Linux-supported_computer_architectures
20. Raspberry Pi web site, www.raspberrypi.org/


DVRs, to the computers on the International Space Station (ISS), to all 500 of the fastest supercomputers back on Earth, and much more. A single operating system can run nearly any computing device, from the smallest to the largest, from any vendor.
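To see which of those many architectures your own system is, the uname command prints the machine hardware name – for example, x86_64 on most PCs or aarch64 on many ARM devices:

```shell
# Print the CPU architecture the running kernel was built for
uname -m
```

The same command works unchanged on every one of those platforms, which is itself a small demonstration of the flexibility being described here.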

Stability

Stability can have multiple meanings when the term is applied to Linux by different people. My own definition of the term, as it applies to Linux, is that it can run for weeks or months without crashing or causing problems that make me worry I might lose data for any of the critical projects I am working on. Today's Linux easily meets that requirement. I always have several computers running Linux at any given time, and they are all rock solid in this sense. They run without interruption. I have workstations, a server, a firewall, and some that I use for testing, and they all just run.

This is not to say that Linux never has any problems. Nothing is perfect. Many of those problems have been caused by my own misconfiguration of one or more features, but a few have been caused by problems with some of the software I use. Sometimes a software application will crash, but that is very infrequent and usually related to issues I have had with the KDE desktop.

If you read my personal technical web site, you know that I have had some problems with the KDE GUI desktop over the years and that it has had two significant periods of instability. In the first of these instances, many years ago around the time of Fedora 10, KDE was transitioning from KDE 3 to the KDE Plasma 4 desktop, which offered many interesting features. At that time most of the KDE-specific applications I used had not been fully rewritten for the new desktop environment, so they lacked required functionality or would just crash. During the second, most recent, and still ongoing instance, the desktop just locks up, crashes, or fails to work properly. In both of these cases, I was able to use a different desktop to get my work done in a completely stable environment. In the first case, I used the Cinnamon desktop, and in this most recent instance, I am using the LXDE desktop. However, the underlying software, the kernel, and the programs running underneath the surface – they all

21. ZDNet, The ISS just got its own Linux supercomputer, www.zdnet.com/article/the-iss-just-got-its-own-linux-supercomputer/
22. Wikipedia, TOP500, https://en.wikipedia.org/wiki/TOP500


continued to run without problem. So this is the second layer of stability; if one thing crashes, even the desktop, the underlying stuff continues to run. To be fair, KDE is improving, and many of the problems in this round have been resolved. I never did lose any data, but I did lose a bit of time. Although I still like KDE, the LXDE desktop is my current favorite, and I also like the Xfce desktop.
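One easy way to check this kind of uptime on your own hosts is the kernel's /proc/uptime file, whose first field is the number of seconds since boot. A small sketch that converts it into something readable:

```shell
# The first field of /proc/uptime is seconds since boot;
# convert it to days and hours for readability.
awk '{d=int($1/86400); h=int(($1%86400)/3600);
      printf "This host has been up %d days, %d hours\n", d, h}' /proc/uptime
```

On a stable Linux host that days figure routinely climbs into the hundreds between reboots.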

Scalability

Scalability is extremely important for any software, particularly for an operating system. Running the same operating system on everything from watches and phones (Android) to laptops, powerful workstations, servers, and even the most powerful supercomputers on the planet can make life much simpler for the network administrator or the IT manager. Linux is the only operating system on the planet today that can provide that level of scalability. Since November of 2017, Linux has powered all of the fastest supercomputers in the world. As of this writing, in July 2019, one hundred percent – all – of the top 500 supercomputers in the world run Linux of one form or another, and this is expected to continue. There are usually specialized distributions of Linux designed for supercomputers. Linux also powers much smaller devices such as Android phones and Raspberry Pi single-board computers.

Supercomputers are very fast, and many different calculations can be performed simultaneously. It is, however, very unusual for a single user to have access to the entire resources of a supercomputer. Many users share those resources, each performing his or her own set of complex calculations. Linux can run on any computer, from the smallest to the largest and anything in between.

Security

We will talk a lot about security as we proceed through these courses. Security is a critical consideration in these days of constant attacks from the Internet. If you think that they are not after you, too, let me tell you that they are. Your computer is under constant attack every hour of every day.

Most Linux distributions are very secure right from the installation. Many tools are provided both to ensure tight security where it is needed and to allow specified

23. Top 500, www.top500.org/statistics/list/


access into the computer. For example, you may wish to allow SSH access from a limited number of remote hosts, access to the web server from anywhere in the world, and e-mail to be sent to a Linux host from anywhere. Yet you may also want to block, at least temporarily, access attempts by black hat hackers attempting to force their way in. Other security measures protect your personal files from other users on the same host while still providing mechanisms for you to share the files you choose with others.

Many of the security mechanisms that we will discuss in these courses were designed and built into Linux right from its inception. The architecture of Linux is designed from the ground up, like that of Unix, its progenitor, to provide security mechanisms that can protect files and running processes from malicious intervention from both internal and external sources. Linux security is not an add-on feature; it is an integral part of Linux. Because of this, most of our discussions that relate to security will be embedded as an integral part of the text throughout this book. There is a chapter about security, but it is intended to cover those few things not covered elsewhere.
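The per-user file protection just mentioned is implemented with standard ownership and permission bits. This short sketch, working in a throwaway temporary directory so nothing real is touched, makes one file private to its owner and another readable by everyone (stat -c is the GNU coreutils form found on Fedora and other Linux distributions):

```shell
# Work in a throwaway directory so nothing real is touched
dir=$(mktemp -d)

echo "my secrets" > "$dir/private.txt"
chmod 600 "$dir/private.txt"    # rw for the owner, no access for anyone else

echo "team notes" > "$dir/shared.txt"
chmod 644 "$dir/shared.txt"     # rw for the owner, read-only for everyone else

# Show the octal mode of each file
stat -c '%a %n' "$dir"/*.txt

rm -rf "$dir"                   # clean up
```

Another user on the same host can read shared.txt but will get "Permission denied" on private.txt – the sharing-with-protection balance described above.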

Freedom

Freedom has an entirely different meaning when applied to free open source software (FOSS) than it does in most other circumstances. In FOSS, free is the freedom to do what I want with software. It means that I have easy access to the source code and that I can make changes to the code and recompile it if I need or want to. Freedom means that I can download a copy of Fedora Linux, or Firefox, or LibreOffice, and install it on as many computers as I want to. It means that I can share that downloaded code by providing copies to my friends or installing it on computers belonging to my customers – both the executables and the sources.

Freedom also means that we do not need to worry about the license police showing up on our doorsteps and demanding huge sums of money to become compliant. This has happened at some companies that "over-installed" the number of licenses that they had available for an operating system or office suite. It means that I don't have to type in a long, long "key" to unlock the software I have purchased or downloaded.

Our software rights

The rights to the freedoms that we have with open source software should be part of the license we receive when we download open source software. The definition for


open source software is found at the Open Source Initiative web site. This definition describes the freedoms and responsibilities that are part of using open source software. The issue is that there are many licenses that claim to be open source. Some are, and some are not. In order to be true open source software, the license must meet the requirements specified in this definition. The definition is not a license – it specifies the terms to which any license must conform if the software to which it is attached is to be legally considered open source. If any of the defined terms do not exist in a license, then the software to which it refers is not true open source software. All of the software used in this book is open source software.

I have not included that definition here, despite its importance, because it is not really the focus of this book. You can go to the web site previously cited, or you can read more about it in my book, The Linux Philosophy for SysAdmins. I strongly recommend that you at least go to the web site and read the definition so that you will more fully understand what open source really is and what rights you have. I also like the description of Linux at Opensource.com, as well as their long list of other open source resources.

Longevity

Longevity – an interesting word. I use it here to help clarify some of the statements that I hear many people make. These statements are usually along the lines of "Linux can extend the life of existing hardware" or "Keep old hardware out of landfills or unmonitored recycling facilities." The idea is that you can use your old computer longer and that, by doing so, you lengthen the useful life of the computer and decrease the number of computers you need to purchase in your lifetime. This both reduces demand for new computers and reduces the number of old computers being discarded.

Linux prevents the planned obsolescence continually enforced by the ongoing requirement for more and faster hardware to support upgrades. It means I do not need to add more RAM or hard drive space just to upgrade to the latest version of the operating system.

24. Opensource.org, The Open Source Definition, https://opensource.org/docs/osd
25. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018, 311–316
26. Opensource.com, What is Linux?, https://opensource.com/resources/linux
27. Opensource.com, Resources, https://opensource.com/resources


Another aspect of longevity is that open source software stores data in open and well-documented formats. Documents that I wrote over a decade ago are still readable by current versions of the same software I used then, such as LibreOffice and its predecessors, OpenOffice and, before that, StarOffice. I never need to worry that a software upgrade will relegate my old files to the bit bucket.

Keep the hardware relevant

For one example, until it recently died, I had an old Lenovo ThinkPad W500 that I purchased in May of 2006. It was old and clunky and heavy compared to many of today's laptops, but I liked it a lot, and it was my only laptop. I took it with me on most trips and used it for training. It had enough power in its Intel Core 2 Duo 2.8GHz processor, 8GB of RAM, and 300GB hard drive to support Fedora running a couple of virtual machines, to be the router and firewall between a classroom network and the Internet, to connect to a projector to display my slides, and to demonstrate the use of Linux commands. I used Fedora 28 on it, the very latest. That is pretty amazing considering that this laptop, which I affectionately called vgr, was a bit over 12 years old. The ThinkPad died of multiple hardware problems in October of 2018, and I replaced it with a System76 Oryx Pro with 32GB of RAM, an Intel i7 with 6 cores (12 CPU threads), and 2TB of SSD storage. I expect to get at least a decade of service out of this new laptop.

And then there is my original EeePC 900 netbook with an Intel Atom CPU at 1.8GHz, 2GB of RAM, and an 8GB SSD. It ran Fedora up through Fedora 28 for ten years before it too started having hardware problems.

Linux can most definitely keep old hardware useful. I have several old desktop workstations that are still useful with Linux on them. Although none are as old as vgr, I have at least one workstation with an Intel motherboard from 2008, one from 2010, and at least three from 2012.

Resist malware

Another reason that I can keep old hardware running longer is that Linux is very resistant to malware infections. It is not completely immune to malware, but none of my systems have ever been infected. Even my laptop, which connects to all kinds of wired and wireless networks that I do not control, has never been infected.

28. System76 Home page, https://system76.com/


Without the massive malware infections that cause most people's computers to slow to an unbearable crawl, my Linux systems – all of them – keep running at top speed. It is this constant slowdown, even after many "cleanings" at the big box stores or the strip mall computer stores, that causes most people to think that their computers are old and useless. So they throw them away and buy new ones. If Linux can keep my 12-year-old laptop and other old systems running smoothly, it can surely keep many others running as well.

Should I be a SysAdmin?

Since this book is intended to help you become a SysAdmin, it would be useful for you to know whether you might already be one, whether you are aware of that fact or not, or whether you exhibit some propensity toward system administration. Let's look at some of the tasks a SysAdmin may be asked to perform and some of the qualities one might find in a SysAdmin.

Wikipedia defines a system administrator as "a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multiuser computers, such as servers." In my experience, this can include computer and network hardware, software, racks and enclosures, computer rooms or space, and much more.

The typical SysAdmin's job can include a very large number of tasks. In a small business, a SysAdmin may be responsible for doing everything computer related. In larger environments, multiple SysAdmins may share responsibility for all of the tasks required to keep things running. In some cases, you may not even know you are a SysAdmin; your manager may have simply told you to start maintaining one or more computers in your office – that makes you a SysAdmin, like it or not.

There is also the term "DevOps," which is used to describe the intersection of the formerly separate development and operations organizations. In the past, this has been primarily about closer cooperation between development and operations, and it included teaching SysAdmins to write code. The focus is now shifting to teaching programmers how to perform operational tasks. Attending to SysAdmin tasks makes these folks SysAdmins, too, at least for part of the time. While I was working at Cisco, I had a DevOps

29. Wikipedia, System Administrator, https://en.wikipedia.org/wiki/System_administrator
30. Charity, "Ops: It's everyone's job now," https://opensource.com/article/17/7/state-systems-administration


type of job. Part of the time I wrote code to test Linux appliances, and the rest of the time I was a SysAdmin in the lab where those appliances were tested. It was a very interesting and rewarding time in my career.

I have created this short list to help you determine whether you might have some of the qualities of a SysAdmin. You know you are a SysAdmin if...

1. You think this book might be a fun read.
2. You would rather spend time learning about computers than watch television.
3. You like to take things apart to see how they work.
4. Sometimes those things still work when you are required by someone else to reassemble them.
5. People frequently ask you to help them with their computers.
6. You know what open source means.
7. You document everything you do.
8. You find computers easier to interact with than most humans.
9. You think the command line might be fun.
10. You like to be in complete control.
11. You understand the difference between "free as in beer" and "free as in speech" when applied to software.
12. You have installed a computer.
13. You have ever repaired or upgraded your own computer.
14. You have installed or tried to install Linux.
15. You have a Raspberry Pi.
16. You leave the covers off your computer because you replace components frequently.
17. ...etc...

You get the idea. I could list a lot more things that might make you a good candidate to be a SysAdmin, but I am sure you can think of plenty more that apply to you. The bottom line here is that you are curious, you like to explore the internal workings of


devices, you want to understand how things work – particularly computers – you enjoy helping people, and you would rather be in control of at least some of the technology that we encounter in our daily lives than let it completely control you.

About this course

If you ask me a question about how to perform some task in Linux, I am the Linux guy who explains how Linux works before answering the question – at least that is the impression I give most people. My tendency is to explain how things work, and I think that it is very important for SysAdmins to understand why things work as they do, along with the architecture and structure of Linux, in order to be most effective. So I will explain a lot of things in detail as we go through this course. For the most part, it will not be a course in which you are told to type commands without some reasoning behind them. The preparation in Chapter 4 will also have some explanation, though perhaps not as much as the rest of the book. Without these explanations, the use of the commands would be just rote memorization, and that is not how most of us SysAdmins learn best.

UNIX is very simple, it just needs a genius to understand its simplicity.
—Dennis Ritchie

The explanations I provide will sometimes include historical references because the history of Unix and Linux is illustrative of why and how Linux is so open and easy to understand. The preceding Ritchie quote also applies to Linux because Linux was designed to be a version of Unix. Yes, Linux is very simple. You just need a little guidance and mentoring to show you how to explore it yourself. That is part of what you will learn in this course. Part of the simplicity of Linux is that it is completely open and knowable, and you can access any and all of it in very powerful and revealing ways. This course contains many experiments that are designed to explore the architecture of Linux as well as to introduce you to new commands.

Why do you think that Windows support – regardless of where you get it – always starts with rebooting the system? Because it is a closed system, and closed systems

31. Wikipedia, Dennis Ritchie, https://en.wikipedia.org/wiki/Dennis_Ritchie


cannot ever be knowable. As a result, the easiest approach to solving problems is to reboot the system rather than to dig into the problem, find the root cause, and fix it.

About the experiments

As a hands-on SysAdmin, I like to experiment with the command line in order to learn new commands, new ways to perform tasks, and how Linux works. Most of the experiments I have devised for this book are ones that I have performed in my own explorations, with perhaps some minor changes to accommodate their use in a course using virtual machines. I use the term "experiments" because they are intended to be much more than simple lab projects that are designed to be followed blindly, with no opportunity for you, the student, to follow your own curiosity and wander far afield. These experiments are designed to be the starting points for your own explorations. This is one reason to use a VM for them, so that production machines will be out of harm's way and you can safely try things that pique your curiosity.

Using virtualization software such as VirtualBox enables us to run a software implementation of standardized hardware. It allows us to run one or more software computers (VMs), in which we can install any operating system, on a single hardware computer. It seems complex, but we will go through creating a virtual network and a virtual machine (VM) in Chapter 4 as we prepare for the experiments.

All SysAdmins are curious, hands-on people, even though we have different ways of learning. I think it is helpful for SysAdmins to have hands-on experience. That is what the experiments are for – to provide an opportunity to go beyond the theoretical and apply the things you learn in a practical way. Although some of the experiments are a bit contrived in order to illustrate a particular point, they are nevertheless valid. These enlightening experiments are not tucked away at the end of each chapter, or the book, where they can be easily ignored – they are embedded in the text and are an integral part of the flow of this book. I recommend that you perform the experiments as you proceed through the book.
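If you are curious, VirtualBox VMs can also be created from the command line with its VBoxManage tool. The VM name testvm1 and the sizes below are placeholder choices of mine, not a required setup – Chapter 4 walks through the actual preparation for this course:

```shell
# Hypothetical sketch: create and register a 64-bit Fedora VM named "testvm1",
# then give it 4GB of RAM and 2 virtual CPUs. Adjust to suit your own hardware.
VBoxManage createvm --name "testvm1" --ostype Fedora_64 --register
VBoxManage modifyvm "testvm1" --memory 4096 --cpus 2
```

The GUI accomplishes the same thing; the command-line form is simply easier to repeat and to script.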
The commands, and sometimes the results, for each experiment will appear in "experiment" sections as shown in the following. Some experiments need only a single command and so will have only one "experiment" section. Other experiments may be more complex and so are split among two or more experiment sections.


SAMPLE EXPERIMENT

This is an example of an experiment. Each experiment will have instructions and code for you to enter and run on your computer. Many experiments will have a series of instructions in a prose format like this paragraph. Just follow the instructions, and the experiments will work just fine:

1. Some experiments will have a list of steps to perform.
2. Step 2.
3. etc...

Code that you are to enter for the experiments will look like this.

This is the end of the experiment.

Some of these experiments can be performed as a non-root user; that is much safer than doing everything as root. However, you will need to be root for many of these experiments. These experiments are considered safe for use on a VM designated for training, such as the one that you will create in Chapter 4. Regardless of how benign they may seem, you should not perform any of these experiments on a production system, whether physical or virtual.

There are times when I want to present code that is interesting but which you should not run as part of one of the experiments. For such situations, I will place the code and any supporting text in a CODE SAMPLE section as shown in the following.

CODE SAMPLE

Code that is intended to illustrate a point, but which you should not even think about running on any computer, will be contained in a section like this one:

echo "This is sample code which you should never run."

Warning Do not perform the experiments presented in this book on a production system. You should use a virtual machine that is designated for this training.


What to do if the experiments do not work

These experiments are intended to be self-contained and not dependent upon any setup, except for the USB thumb drive, or the results of previously performed experiments. Certain Linux utilities and tools must be present, but these should all be available on a standard Fedora Linux workstation installation or any other mainstream general-use distribution. Therefore, all of these experiments should "just work." We all know how that goes, right?

So when something does fail, the first things to do are the obvious. Verify that the commands were entered correctly. This is the most common problem I encounter. You may see an error message indicating that the command was not found. The Bash shell shows the bad command; in this case I made up badcommand. It then gives a brief description of the problem. This error message is displayed for both missing and misspelled commands. Check the command spelling and syntax multiple times to verify that it is correct:

[student@testvm1 ~]$ badcommand
bash: badcommand: command not found...

Use the man command to view the manual pages (man pages) in order to verify the correct syntax and spelling of commands. Ensure that the required commands are, in fact, installed. Install any that are not already installed. For experiments that require you to be logged in as root, ensure that you have done so. There should be only a few of these, but performing them as a non-root user will not work.

There is not much else that should go wrong – but if you encounter a problem that you cannot resolve using these tips, contact me at [emailprotected], and I will do my best to help figure out the problem.
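A quick way to check whether a command exists at all – and whether it is a binary, a shell built-in, or an alias – is the POSIX command -v utility or the Bash type built-in:

```shell
# Where does the ls command live, and what kind of command is it?
command -v ls
type ls

# A missing command produces no output and a nonzero exit code
command -v badcommand || echo "badcommand is not installed"
```

If command -v reports nothing, the program either is not installed or is not in your PATH, which narrows the problem down immediately.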


Terminology

It is important to clarify a bit of terminology before we proceed. In this course I will refer to computers with multiple terms. A "computer" is a hardware or virtual machine for computing. A computer is also referred to as a "node" when it is connected to a network. A network node can be any type of device, including routers, switches, computers, and more. The term "host" generally refers to a computer that is a node on a network, but I have also encountered it used to refer to an unconnected computer.

How to access the command line

All of the modern mainstream Linux distributions provide at least three ways to access the command line. If you use a graphical desktop, most distributions come with multiple terminal emulators from which to choose. I prefer Krusader, Tilix, and especially xfce4-terminal, but you can use any terminal emulator that you like.

Linux also provides the capability for multiple virtual consoles to allow for multiple logins from a single keyboard and monitor (KVM). Virtual consoles can be used on systems that don't have a GUI desktop, but they can be used even on systems that do have one. Each virtual console is assigned to a function key corresponding to the console number. So vc1 would be assigned to function key F1, and so on. It is easy to switch to and from these sessions. On a physical computer, you can hold down the Ctrl and Alt keys and press F2 to switch to vc2. Then hold down the Ctrl and Alt keys and press F1 to switch to vc1 and the graphical interface.

The last method to access the command line on a Linux computer is via a remote login. Telnet was common before security became such an issue; now Secure Shell (SSH) is used for remote access. For some of the experiments, you will need to log in more than once or start multiple terminal sessions in the GUI desktop. We will go into much more detail about terminal emulators, console sessions, and shells as we proceed through this book.
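You can ask which kind of session you are in by querying the controlling terminal of your own shell ($$ is the shell's process ID). A virtual console shows up as something like tty2, while a GUI terminal emulator or an SSH session shows up as a pseudo-terminal such as pts/0:

```shell
# Print the controlling terminal of the current shell.
# tty2 = virtual console 2; pts/N = terminal emulator or SSH session.
ps -o tty= -p $$
```

Try it once from a virtual console and once from a desktop terminal emulator to see the difference.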

32. Keyboard, Video, Mouse


Chapter summary

Linux was designed from the very beginning as an open and freely available operating system. Its value lies in the power, reliability, security, and openness that it brings to the marketplace for operating systems, not just in the fact that it can be had for free in monetary terms. Because Linux is open and free in the sense that it can be freely used, shared, and explored, its use has spread into all aspects of our lives.

The tasks a SysAdmin might be asked to do are many and varied. You may already be doing some of these or at least have some level of curiosity about how Linux works or how to make it work better for you.

Most of the experiments encountered in this book must be performed at the command line. The command line can be accessed in multiple ways and with any one or more of several available and acceptable terminal emulators.

Exercises

Note that a couple of the following questions are intended to cause you to think about your desire to become a SysAdmin. There are no right answers to these questions, only yours, and you are not required to write them down or to share them. They are simply designed to prompt you to be a bit introspective about yourself and being a SysAdmin:

1. From where does open source software derive its value?
2. What are the four defining characteristics of Linux?
3. As of the time you read this, how many of the world's top 500 supercomputers use Linux as their operating system?
4. What does the "Linux Truth" mean to Linux users and administrators?
5. What does "freedom" mean with respect to open source software?
6. Why do you want to be a SysAdmin?
7. What makes you think you would be a good SysAdmin?
8. How would you access the Linux command line if there were no GUI desktop installed on the Linux host?


CHAPTER 2

Introduction to Operating Systems

Objectives

In this chapter you will learn to

• Describe the functions of the main hardware components of a computer
• List and describe the primary functions of an operating system
• Briefly outline the reasons that prompted Linus Torvalds to create Linux
• Describe how the Linux core utilities support the kernel and together create an operating system

Choice – Really!

Every computer requires an operating system. The operating system you use on your computer is at least as important as – or more so than – the hardware you run it on. The operating system (OS) is the software that determines the capabilities and limits of your computer or device. It also defines the personality of your computer. The most important single choice you will make concerning your computer is that of the operating system, which creates a useful tool out of it.

Computers have no ability to do anything without software. If you turn on a computer which has no software, it simply generates revenue for the electric company in return for adding a little heat to the room. There are far less expensive ways to heat a room.



The operating system is the first level of software which allows your computer to perform useful work. Understanding the role of the operating system is key to making informed decisions about your computer. Of course, most people do not realize that there even is a choice when it comes to operating systems. Fortunately, Linux does give us a choice. Some vendors such as EmperorLinux, System76, and others are now selling systems that already have Linux installed. Others, like Dell, sometimes try out the idea of Linux by selling a single model with few options. We can always just purchase a new computer, install Linux on it, and wipe out whatever other operating system might have previously been there. My preference is to purchase the parts from a local computer store or the Internet and build my own computers to my personal specifications. Most people don’t know that they have either of these options and, if they did, would not want to try anyway.

What is an operating system?

Books about Linux are books about an operating system. So – what is an operating system? This is an excellent question – one which most training courses and books I have read either skip over completely or answer very superficially. The answer can aid the SysAdmin's understanding of Linux and its great power, and the answer is not simple.

Many people look at their computer's display, see the graphical desktop (GUI, graphical user interface), and think that is the operating system. The GUI is only a small part of the operating system. It provides an interface in the form of a desktop metaphor that is understandable to many users, but it is what is underneath the GUI desktop that is the real operating system. The fact is that for advanced operating systems like Linux, the desktop is just another application, and there are multiple desktops from which to choose. We will cover the Xfce desktop in Chapter 6 of this volume because that is the desktop I recommend for use with this course. We will also explore window managers, a simpler form of desktop, in Chapter 16 of this volume.



In this chapter and throughout the rest of this course, I will elaborate on the answer to this question, but it is helpful to understand a little about the structure of the hardware which comprises a computer system. Let’s take a brief look at the hardware components of a modern Intel computer.

Hardware

There are many different kinds of computers, from single-board computers (SBCs) like the Arduino and the Raspberry Pi to desktop computers, servers, mainframes, and supercomputers. Many of these use Intel or AMD processors, but others do not. For the purposes of this series of books, I will work with Intel X86_64 hardware. Generally, if I say Intel, you can assume I mean the X86_64 processor series and supporting hardware; AMD X86_64 hardware should produce the same results, and the same hardware information will apply.

Motherboard

Most Intel-based computers have a motherboard that contains many components of the computer such as bus and I/O controllers. It also has connectors to install RAM and a CPU, which are the primary components that need to be added to a motherboard to make it functional. Single-board computers are self-contained on a single board and do not require any additional hardware because components such as RAM, video, network, USB, and other interfaces are all an integral part of the board.

Some motherboards contain a graphics processing unit (GPU) to connect the video output to a monitor. If they do not, a video card can be added to the main computer I/O bus, usually PCI or PCI Express (PCIe). Other I/O devices like a keyboard, mouse, and external hard drives and USB memory sticks can be connected via the USB bus. Most modern motherboards have one or two Gigabit Ethernet network interface cards (NICs) and four or six SATA connectors for hard drives.

Random-access memory (RAM) is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM, from where they can be quickly moved into the

Wikipedia, Conventional PCI, https://en.wikipedia.org/wiki/Conventional_PCI
Wikipedia, PCI Express, https://en.wikipedia.org/wiki/PCI_Express
Wikipedia, Serial ATA, https://en.wikipedia.org/wiki/Serial_ATA



CPU cache. RAM and cache memory are both volatile memory; that is, the data stored in them is lost if the computer is turned off. The computer can also erase or alter the contents of RAM, and this is one of the things that gives computers their great flexibility and power.

Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. DVDs and CD-ROMs store data permanently and can be read by the computer but not overwritten, much like read-only memory (ROM), which can be read but not erased or altered. The exception is that some DVD and CD-ROM discs are re-writable. Hard drives and DVD drives are connected to the motherboard through SATA adapters.

Solid-state drives (SSDs) are the solid-state equivalent of hard drives. They have the same characteristics in terms of long-term storage of data because it is persistent through reboots and when the computer is powered off. Also like hard drives with rotating magnetic disks, SSDs allow data to be erased, moved, and managed when needed.

Printers are used to transfer data from the computer to paper. Sound cards convert data to sound as well as the reverse. USB storage devices can be used to store data for backup or transfer to other computers. Network interface cards (NICs) are used to connect the computer to a network, hardwired or wireless, so that it can communicate easily with other computers attached to the network.

The processor

Let's take a moment to explore the CPU and define some terminology in an effort to help reduce confusion. Five terms are important when we talk about processors: processor, CPU, socket, core, and thread. The Linux command lscpu, as shown in Figure 2-1, gives us some important information about the installed processor(s) as well as clues about terminology. I use my primary workstation for this example.



[root@david ~]# lscpu
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               32
On-line CPU(s) list:  0-31
Thread(s) per core:   2
Core(s) per socket:   16
Socket(s):            1
NUMA node(s):         1
Vendor ID:            GenuineIntel
CPU family:           6
Model:                85
Model name:           Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz
Stepping:             4
CPU MHz:              3542.217
CPU max MHz:          4400.0000
CPU min MHz:          1200.0000
BogoMIPS:             5600.00
Virtualization:       VT-x
L1d cache:            32K
L1i cache:            32K
L2 cache:             1024K
L3 cache:             22528K
NUMA node0 CPU(s):    0-31
Flags:                <snip>

Figure 2-1. The output of the lscpu command gives us some information about the processor installed in a Linux host. It also helps us understand the current terminology to use when discussing processors.

The first thing to notice in Figure 2-1 is that the term "processor" never appears. The term "processor" is commonly used to refer generically to any hardware unit that performs some form of computation. It can refer to the CPU – central processing

Wikipedia, Processor, https://en.wikipedia.org/wiki/Processor
Wikipedia, Central processing unit, https://en.wikipedia.org/wiki/Central_processing_unit



unit – of the computer, to a graphics processing unit (GPU) that performs calculations relating to graphical video displays, or to any number of other types of processors. The terms processor and CPU tend to be used interchangeably when referring to the physical package that is installed in your computer.

Using Intel terminology, which can be a bit fluid, the processor is the physical package that can contain one or more computing cores. Figure 2-2 shows an Intel i5-2500 processor which contains four cores. Because the processor package is plugged into a socket, and a motherboard may have multiple sockets, the lscpu utility numbers the sockets. Figure 2-1 shows the information for the processor in socket number 1 on the motherboard. If this motherboard had additional sockets, lscpu would list them separately.

Figure 2-2. An Intel Core i5 processor may contain one, two, or four cores. Photo courtesy of Wikimedia Commons, CC BY-SA 4.0 International.

Wikipedia, Graphics processing unit, https://en.wikipedia.org/wiki/Graphics_processing_unit
Wikipedia, Arithmetic logic unit, https://en.wikipedia.org/wiki/Arithmetic_logic_unit

A core, which is sometimes referred to as a compute core, is the smallest physical hardware component of a processor that can actually perform arithmetic and logical computations; that is, it is composed of a single arithmetic logic unit (ALU) and its



required supporting components. Every computer has at least one processor with one or more cores. Most modern Intel processors have two, four, or six cores, and many have eight or more. The cores make up the brains of the computer; they are the part responsible for executing each of the instructions specified by the software utilities and application programs.

The line in the lscpu results that specifies the number of cores contained in the processor package is "Core(s) per socket." For this socket on my primary workstation, there are sixteen (16) cores. That means there are 16 separate computing devices in the processor plugged into this socket.

But wait – there's more! The line "CPU(s)" shows that there are 32 CPUs on this socket. How can that be? Look at the line named "Thread(s) per core": the number there is 2, so 16 x 2 = 32. That is the math, but not the explanation. The short explanation is that compute cores are really fast – so fast that a single stream of instructions and data is not enough to keep them busy all the time, even in a very compute-intensive environment. The details of why this is so are beyond the scope of this book, but suffice it to say that before hyper-threading, most compute cores would sit waiting with nothing to do while the slower external memory circuitry tried to feed them sufficient streams of program instructions and data to keep them active. Rather than let precious compute cycles go to waste, Intel developed hyper-threading technology, which allows a single core to process two streams of instructions and data by switching between them. This enables a single core to perform almost as well as two, so the term CPU is used to indicate that a single hyper-threaded core is reasonably close to the functional equivalent of two CPUs.

But there are some caveats. Hyper-threading is not particularly helpful if all you are doing is word processing and spreadsheets. It is intended to improve performance in high-performance computing environments where every CPU compute cycle is important in speeding the results.
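You can check this arithmetic on your own system. A minimal sketch for a Linux host (lscpu is part of util-linux and may be absent on minimal installations, so it is guarded):

```shell
# nproc (GNU coreutils) prints the number of logical CPUs available --
# that is, sockets x cores-per-socket x threads-per-core.
nproc

# lscpu -p emits one machine-readable line per logical CPU; counting the
# non-comment lines gives the same total.
if command -v lscpu >/dev/null 2>&1; then
    lscpu -p=CPU,CORE,SOCKET | grep -vc '^#'
fi
```

On the workstation in Figure 2-1, both commands would report 32.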

Peripherals Peripherals are hardware devices that can be plugged into the computer via the various types of interface ports. USB devices such as external hard drives and thumb drives are typical of this type of hardware. Other types include keyboards, mice, and printers.



Printers can also be connected using the very old parallel printer ports, which I still see on some new motherboards, but most can be attached using USB or a network connection. Displays are commonly connected using HDMI, DVI, DisplayPort, or VGA connectors. Peripheral devices can also include such items as USB hubs, disk drive docking stations, plotters, and more.

The operating system

All of these hardware pieces of the computer must work together. Data must be moved into the computer and then between its various components. Programs must be loaded from long-term storage on the hard drive into RAM where they can be executed. Processor time needs to be allocated between running applications. Access by application programs to the hardware components of the computer, such as RAM, disk drives, and printers, must be managed. It is the task of the operating system to provide these functions. The operating system manages the operation of the computer and of the application software which runs on the computer.

The definition

A simple definition of an operating system is that it is a program, much like any other program. It is different only in that its primary function is to manage the movement of data in the computer. This definition refers specifically to the kernel of the operating system. The operating system kernel manages access to the hardware devices of the computer by utility and application programs. The operating system also manages system services such as memory allocation – the assignment of specific virtual memory locations to various programs when they request memory – the movement of data from various storage devices into memory where it can be accessed by the CPU, communications with other computers and devices via the network, display of data in text or graphic format on the display, printing, and much more.

The Linux kernel provides an API – application programming interface – for other programs to use in order to access the kernel functions. For example, a program that needs to have more memory allocated to its data structures uses a kernel function call to request that memory. The kernel then allocates the memory and notifies the program that the additional memory is available.


The Linux kernel also manages access to the CPUs as computing resources. It uses a complex algorithm to determine which processes are allocated some CPU time, when, and for how long. If necessary, the kernel can interrupt a running program in order to allow another program to have some CPU time.

An operating system kernel like Linux can do little on its own. It requires other programs – utilities – to perform basic functions such as creating a directory on the hard drive, and then other utilities to access that directory, create files in it, and manage those files. These utility programs perform functions like creating files, deleting files, copying files from one place to another, setting display resolution, and complex processing of textual data. We will cover the use of many of these utilities as we proceed through this book.

Typical operating system functions

Any operating system has a set of core functions which are the primary reason for its existence. These are the functions that enable the operating system to manage itself, the hardware on which it runs, and the application programs and utilities that depend upon it to allocate system resources to them:

• Memory management
• Managing multitasking
• Managing multiple users
• Process management
• Interprocess communication
• Device management
• Error handling and logging

Let's look briefly at these functions.



Memory management

Linux and other modern operating systems use advanced memory management strategies to virtualize real memory – random-access memory (RAM) and swap memory (disk) – into a single virtual memory space which can be used as if it were all physical RAM. Portions of this virtual memory can be allocated by the memory management functions of the kernel to programs that request memory.

The memory management components of the operating system are responsible for assigning virtual memory space to applications and utilities and for translating between virtual memory spaces and physical memory. The kernel allocates and deallocates memory and assigns physical memory locations based upon requests, either implicit or explicit, from application programs. In cooperation with the CPU, the kernel also manages access to memory to ensure that programs only access those regions of memory which have been assigned to them. Part of memory management includes managing the swap partition or file and the movement of memory pages between RAM and the swap space on the hard drive.

Virtual memory eliminates the need for the application programmer to deal directly with memory management because it provides a single virtual memory address space for each program. It also isolates each application's memory space from that of every other, thus making the program's memory space safe from being overwritten or viewed by other programs.
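You can observe the RAM and swap the kernel is managing. A minimal sketch for a Linux host (the free command, from the procps package, may be absent on minimal systems, so it is guarded):

```shell
# /proc/meminfo is the kernel's own report of physical RAM and swap.
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree):' /proc/meminfo

# free summarizes RAM and swap together -- the combined space the kernel
# virtualizes for running programs.
if command -v free >/dev/null 2>&1; then
    free -h
fi
```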

Multitasking

Linux, like most modern operating systems, can multitask. That means that it can manage two, three, or hundreds of processes at the same time. Part of process management is managing the multiple processes that are all running on a Linux computer. I usually have several programs running at one time, such as LibreOffice Write (a word processor), an e-mail program, a spreadsheet, a file manager, a web browser, and usually multiple terminal sessions in which I interact with the Linux command-line interface (CLI). Right now, as I write this sentence, I have multiple documents open in several LibreOffice Write windows. This enables me to see what I have written in other documents and to work on multiple chapters at the same time.

Wikipedia, Random-access memory, https://en.wikipedia.org/wiki/Random-access_memory
Wikipedia, Virtual memory, https://en.wikipedia.org/wiki/Virtual_memory



But those programs usually do little or nothing until we give them things to do by typing words into the word processor or clicking an e-mail to display it. I also have several terminal emulators running and use them to log in to various local and remote computers that I manage and am responsible for. Linux itself always has many programs running in the background – called daemons – that help Linux manage the hardware and other software running on the host. These programs are usually not noticed by users unless we specifically look for them. Some of the tools you will learn about in this book can reveal these otherwise hidden programs.

Even with all of its own programs running in the background and users' programs running, a modern Linux computer uses only a few of its compute cycles and spends most of its time waiting for things to happen. Linux can download and install its own updates while performing any or all of the preceding tasks simultaneously – without the need for a reboot. Wait... what?! That's right. Linux does not usually need to reboot before, during, or after installing updates or when installing new software. After a new kernel or glibc (the GNU C Library) is installed, however, you may wish to reboot the computer to activate it, but you can do that whenever you want and not be forced to reboot multiple times during an update or even stop doing your work while the updates are installed.
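You can see these background processes for yourself. A minimal sketch for a Linux host, where every running process appears as a numbered directory under /proc:

```shell
# Each running process is a numeric directory under /proc on Linux.
echo "Processes currently running: $(ls /proc | grep -c '^[0-9][0-9]*$')"

# Show the command names of a few of them; many will be daemons and
# kernel threads that run unnoticed in the background.
for pid in $(ls /proc | grep '^[0-9][0-9]*$' | head -5); do
    printf '%s: %s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
done
```

Even on an apparently idle system the count is rarely small.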

Multiuser

The multitasking functionality of Linux extends to its ability to host multiple users – tens or hundreds of them – all running the same or different programs at the same time on one single computer.

Multiuser capability means a number of different things. First, it can mean a single user who has logged in multiple times via a combination of the GUI desktop interface and the command line using one or more terminal sessions. We will explore the extreme flexibility available when using terminal sessions a bit later in this course. Second, multiuser means just that – many different users logged in at the same time, each doing their own thing, and each isolated and protected from the activities of the others. Some users can be logged in locally and others from anywhere in the world with an Internet connection, if the host computer is properly configured. The role of the operating system is to allocate resources to each user and to ensure that any tasks, that is, processes, they have running have sufficient resources without impinging upon the resources allocated to other users.


Process management

The Linux kernel manages the execution of all tasks running on the system. The Linux operating system is multitasking from the moment it boots up. Many of those tasks are the background tasks required to manage a multitasking and – for Linux – a multiuser environment. These tools take only a small fraction of the CPU resources available on even modest computers.

Each running program is a process. It is the responsibility of the Linux kernel to perform process management. The scheduler portion of the kernel allocates CPU time to each running process based on its priority and whether it is capable of running. A task which is blocked – perhaps it is waiting for data to be delivered from the disk or for input from the keyboard – does not receive CPU time. The Linux kernel will also preempt a lower-priority task when a task with a higher priority becomes unblocked and capable of running.

In order to manage processes, the kernel creates data abstractions that represent each process. Part of the data required is the memory map that defines the memory allocated to the process and whether it is data or executable code. The kernel maintains information about the execution status, such as how recently the program had some CPU time, how much time, and a number called the "nice" number. It uses that information, including the nice number, to calculate the priority of the process. The kernel uses the priorities of all of the processes to determine which process(es) will be allocated some CPU time.

Note that not all processes need CPU time simultaneously. In fact, for most desktop workstations in normal circumstances, usually only two or three processes at most need to be on the CPU at any given time, which means that a simple quad-core processor can easily handle this type of CPU load. If there are more programs – processes – running than there are CPUs in the system, the kernel is responsible for determining which process to interrupt in order to replace it with a different one that needs some CPU time.
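The nice number mentioned above can be set from the command line. A minimal sketch using the GNU coreutils nice command:

```shell
# With no arguments, nice prints the niceness of the current shell.
nice

# Launch a command with its niceness raised by 10; the scheduler gives it
# proportionally less CPU time when higher-priority work is runnable.
# (Only root may lower a niceness, i.e., raise a process's priority.)
nice -n 10 nice
```

The second command prints the inherited niceness plus 10, confirming that the child process runs at a lower scheduling priority.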

Process management is discussed in Chapter 4 of Volume 2.


Interprocess communication

Interprocess communication (IPC) is vital to any multitasking operating system. Many programs must be synchronized with each other to ensure that their work is properly coordinated. Interprocess communication is the tool that enables this type of inter-program cooperation.

The kernel manages a number of IPC methods. Shared memory is used when two tasks need to pass data between them. The Linux clipboard is a good example of shared memory: data which is cut or copied to the clipboard is stored in shared memory, and when the stored data is pasted into another application, that application looks for the data in the clipboard's shared memory area. Named pipes can be used to communicate data between two programs. Data can be pushed into the pipe by one program, and the other program can pull the data out of the other end. A program may collect data very quickly and push it into the pipe, while another program takes the data out of the other end and either displays it on the screen or stores it to disk, handling the data at its own rate.
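A named pipe can be created and used directly from the shell. A minimal sketch using mkfifo (the pipe name here is arbitrary):

```shell
# Create a named pipe (FIFO) in a temporary location.
fifo=/tmp/ipc_demo_$$
mkfifo "$fifo"

# The writer runs in the background; it blocks until a reader opens
# the other end of the pipe.
echo "hello through the pipe" > "$fifo" &

# The reader pulls the data out of the other end at its own pace.
cat "$fifo"

# Remove the pipe when finished.
rm "$fifo"
```

Running the writer and reader in two separate terminal sessions makes the blocking behavior easy to observe.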

Device management

The kernel manages access to the physical hardware through the use of device drivers. Although we tend to think of this in terms of various types of hard drives and other storage devices, it also manages other input/output (I/O) devices such as the keyboard, mouse, display, printers, and so on. This includes management of pluggable devices such as USB storage devices and external USB and eSATA hard drives.

Access to physical devices must be managed carefully, or more than one application might attempt to control the same device at the same time. The Linux kernel manages devices so that only one program actually has control of or access to a device at any given moment. One example of this is a COM (communications) port, which is used with serial communications, such as a serial modem to connect to the Internet over telephone lines when a cable connection is not available. Only one program can communicate through a COM port at any given time. If you are using the COM port to get your e-mail from the Internet, for example, and try to start another program which attempts to use the same COM port, the Linux kernel detects that the COM port is already in use. The kernel then uses the hardware error handler to display a message on the screen that the COM port is in use.


For managing disk I/O devices, including USB, parallel and serial port I/O, and filesystem I/O, the kernel does not actually handle physical access to the disk but rather manages the requests for disk I/O submitted by the various running programs. It passes these requests on to the filesystem – whether it be EXT[2,3,4], VFAT, HPFS, CDFS (CD-ROM filesystem), NFS (Network Filesystem), or some other filesystem type – and manages the transfer of data between the filesystem and the requesting programs. We will see later how all types of hardware – whether they are storage devices or something else attached to a Linux host – are handled as if they were files. This results in some amazing capabilities and interesting possibilities.
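You can already glimpse this "everything is a file" idea in the /dev directory. A minimal sketch:

```shell
# Devices appear as files under /dev. In ls -l output, the first character
# of the mode marks the type: 'c' for character devices, 'b' for block devices.
ls -l /dev/null /dev/zero

# They can be read and written like ordinary files; /dev/null silently
# discards everything written to it.
echo "this text vanishes" > /dev/null
```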

Error handling

Errors happen. As a result, the kernel needs to identify these errors when they occur. The kernel may take some action such as retrying the failing operation, displaying an error message to the user, and logging the error message to a log file. In many cases, the kernel can recover from errors without human intervention; in others, human intervention may be required. For example, if the user attempts to unmount a USB storage device that is in use, the kernel will detect this and post a message to the umount program, which usually sends the error message to the user interface. The user must then take whatever action is necessary to ensure that the storage device is no longer in use and then attempt to unmount the device.
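Commands report such failures through diagnostic messages and nonzero exit statuses, which scripts can act on. A minimal, safe sketch (it deliberately provokes an error on a path that does not exist rather than touching a real device):

```shell
# Attempt an operation that must fail; capture the diagnostic message.
errfile=/tmp/err_$$
if ! rmdir /no/such/directory_$$ 2>"$errfile"; then
    echo "operation failed: $(cat "$errfile")"
fi
rm -f "$errfile"
```

The same pattern – test the exit status, then decide what to do – underlies most error handling in shell scripts.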

Utilities

In addition to its kernel functions, most operating systems provide a number of basic utility programs which enable users to manage the computer on which the operating system resides. These are commands such as cp, ls, and mv, as well as the various shells, such as bash, ksh, and csh, which make managing the computer so much easier. These utilities are not truly part of the operating system; they are merely provided as useful tools that the SysAdmin can use to perform administrative tasks. In Linux, these are often the GNU core utilities. However, common usage groups the kernel together with the utilities into a single conceptual entity that we call the operating system.

The Linux command to unmount a device is actually umount.
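A quick taste of the core utilities in action, sketched with a throwaway directory (the file names are arbitrary):

```shell
# Create a scratch directory and exercise a few GNU core utilities.
dir=$(mktemp -d)

echo "sample data" > "$dir/original.txt"   # create a file
cp "$dir/original.txt" "$dir/copy.txt"     # copy it
mv "$dir/copy.txt" "$dir/renamed.txt"      # rename (move) it
ls "$dir"                                  # list the results

rm -r "$dir"                               # clean up
```

Each of these commands is covered in depth in later chapters.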


A bit of history

Entire books have been written just about the history of Linux and Unix, so I will attempt to make this as short as possible. It is not necessary to know this history to be able to use Unix or Linux, but you may find it interesting. I have found it very useful to know some of this history because it has helped me to understand the Unix and Linux Philosophy and to formulate my own philosophy, which I discuss in my book, The Linux Philosophy for SysAdmins, and a good bit in the three volumes of this course.

Starting with UNICS

The history of Linux begins with UNICS, which was originally written as a gaming platform to run a single game. Ken Thompson was an employee at Bell Labs in the late 1960s – before the breakup – working on a complex project called Multics. Multics was an acronym that stood for Multiplexed Information and Computing Service. It was supposed to be a multitasking operating system for the GE (yes, General Electric) 645 mainframe computer. It was a huge, costly, complex project with three very large organizations, GE, Bell Labs, and MIT, working on it.

Although Multics never amounted to much more than a small bump along the road of computer history, it did introduce a good number of then-innovative features that had never before been available in an operating system. These features included multitasking and multiuser capabilities. Ken Thompson, one of the developers of Multics, had written a game called Space Travel that ran under Multics. Unfortunately, due at least in part to the committee-driven design of Multics, the game ran very slowly. It was also very expensive to run, at about $50 per iteration. As with many projects developed by committees, Multics died a slow, agonizing death, and the platform on which the Space Travel game ran was no longer available.

Wikipedia, History of Linux, https://en.wikipedia.org/wiki/History_of_Linux
Wikipedia, History of Unix, https://en.wikipedia.org/wiki/History_of_Unix
Apress, The Linux Philosophy for SysAdmins, www.apress.com/us/book/9781484237298
Wikipedia, GE 645, https://en.wikipedia.org/wiki/GE_645
Wikipedia, Ken Thompson, https://en.wikipedia.org/wiki/Ken_Thompson
Wikipedia, Space Travel, https://en.wikipedia.org/wiki/Space_Travel_(video_game)


Thompson then rewrote the game to run on a DEC PDP-7 computer, similar to the one in Figure 2-3, that was just sitting around gathering dust. In order to make the game run on the DEC, he and some of his buddies, Dennis Ritchie and Rudd Canaday, first had to write an operating system for the PDP-7. Because it could only handle two simultaneous users – far fewer than Multics had been designed for – they called their new operating system UNICS, for UNiplexed Information and Computing Service, as a bit of geeky humor.

UNIX

Some time later, the UNICS name was modified slightly to UNIX, and that name has stuck ever since. In 1970, recognizing its potential, Bell Labs provided some financial support for the Unix operating system, and development began in earnest. In 1972 the entire operating system was rewritten in C to make it more portable and easier to maintain than the assembly language in which it had originally been written. By 1978, Unix was in fairly wide use inside AT&T Bell Labs and at many universities. Due to the high demand, AT&T decided to release a commercial version of Unix in 1982. Unix System III was based on the seventh version of the operating system. In 1983, AT&T released Unix System V Release 1. For the first time, AT&T promised to maintain upward compatibility for future versions, so programs written to run on SVR1 would also run on SVR2 and future releases. Because this was a commercial version, AT&T began charging license fees for the operating system. Also, in order to promote the spread of Unix and to assist many large universities in their computing programs, AT&T gave away the source code of Unix to many of these institutions of higher learning. This caused one of the best and one of the worst situations for Unix. The best thing about AT&T giving the source code to universities was that it promoted rapid development of new features. It also promoted the rapid divergence of Unix into many distributions. System V was an important milestone in the history of Unix, and many Unix variants today are based on System V. The most current release is SVR4, which is a serious attempt to reconverge the many variants that split off during those early years. SVR4 contains most of the features of both System V and BSD – hopefully the best features.

20. Wikipedia, Dennis Ritchie, https://en.wikipedia.org/wiki/Dennis_Ritchie


Figure 2-3.  A DEC PDP-7 similar to the one used by Ken Thompson and Dennis Ritchie to write the UNICS[sic] operating system. This one is located in Oslo, and the picture was taken in 2005 before restoration began. Photo courtesy of Wikimedia, CC BY-SA 1.0

The Berkeley Software Distribution (BSD)

The University of California at Berkeley got into the Unix fray very early. Many of the students who attended the school added their own favorite features to BSD Unix. Eventually only a very tiny portion of BSD was still AT&T code. Because of this, it was very different from, though still similar to, System V. Ultimately the remaining AT&T portion of BSD was totally rewritten as well, and folks using it no longer needed to purchase a license from AT&T.


The Unix Philosophy

The Unix Philosophy is an important part of what makes Unix unique and powerful. Because of the way that Unix was developed, and the particular people involved in that development, the Unix Philosophy was an integral part of the process of creating Unix and played a large part in many of the decisions about its structure and functionality. Much has been written about the Unix Philosophy, and the Linux Philosophy is essentially the same as the Unix Philosophy because of Linux's direct line of descent from Unix. The original Unix Philosophy was intended primarily for the system developers. In fact, the developers of Unix, led by Thompson and Ritchie, designed Unix in a way that made sense to them, creating rules, guidelines, and procedural methods and then designing them into the structure of the operating system. That worked well for system developers, and it also worked – partly, at least – for SysAdmins (system administrators). That collection of guidance from the originators of the Unix operating system was codified in the excellent book The Unix Philosophy, by Mike Gancarz, and then later updated by Mr. Gancarz as Linux and the Unix Philosophy.[21] Another fine and very important book, The Art of Unix Programming[22] by Eric S. Raymond, provides the author's philosophical and practical views of programming in a Unix environment. It is also somewhat of a history of the development of Unix as it was experienced and recalled by the author. This book is also available in its entirety at no charge on the Internet.[23] I learned a lot from all three of those books. They all have great value to Unix and Linux programmers. In my opinion, Linux and the Unix Philosophy and The Art of Unix Programming should be required reading for Linux programmers, system administrators, and DevOps personnel. I strongly recommend that you read these two books in particular. I have been working with computers for over 45 years.
It was not until I started working with Unix and Linux, and started reading some of the articles and books about Unix, Linux, and the common philosophy they share, that I understood the reasons why

21. Gancarz, Mike, Linux and the Unix Philosophy, Digital Press – an imprint of Elsevier Science, 2003, ISBN 1-55558-273-7
22. Raymond, Eric S., The Art of Unix Programming, Addison-Wesley, September 17, 2003, ISBN 0-13-142901-9
23. Raymond, Eric S., The Art of Unix Programming, www.catb.org/esr/writings/taoup/html/index.html


many things in the Linux and Unix worlds are done as they are. Such understanding can be quite useful in learning new things about Linux and in being able to reason through problem solving.

A (very) brief history of Linux

Linus Torvalds, the creator of Linux, was a student at the University of Helsinki in 1991. The university was using a very small version of Unix called Minix for school projects. Linus was not very happy with Minix and decided to write his own Unix-like operating system.[24] Linus wrote the kernel of Linux and used the then ubiquitous PC with an 80386 processor as the platform for his operating system because that is what he had on hand as his home computer. He released an early version in 1991 and the first public version in March of 1992. Linux spread quickly, in part because many of the people who downloaded the original versions were hackers like Linus who had good ideas that they wanted to contribute. These contributors, with guidance from Torvalds, grew into a loose international affiliation of hackers dedicated to improving Linux. Linux is now found in almost all parts of our lives.[25] It is ubiquitous, and we depend upon it in many places that we don't normally even think about. Our mobile phones, televisions, automobiles, the International Space Station, most supercomputers, the backbone of the Internet, and most of the web sites on the Internet all utilize Linux. For more detailed histories of Linux, see Wikipedia[26] and its long list of references and sources.

Core utilities

Linus Torvalds wrote the Linux kernel, but the rest of the operating system was written by others. These utilities were the GNU core utilities, developed by Richard M. Stallman (aka RMS) and others as part of their intended free GNU operating system.

24. Torvalds, Linus, and Diamond, David, Just for Fun, HarperCollins, 2001, 61–64, ISBN 0-06-662072-4
25. Opensource.com, Places to find Linux, https://opensource.com/article/18/5/places-find-linux?sc_cid=70160000001273HAAQ
26. Wikipedia, History of Linux, https://en.wikipedia.org/wiki/History_of_Linux


SysAdmins use these core utilities regularly, pretty much without thinking about them. There is also another set of basic utilities, util-linux, that we should look at because they, too, are important Linux utilities. Together, these two sets of utilities comprise many of the most basic tools – the core – of the Linux system administrator's toolbox. These utilities address tasks that include management and manipulation of text files, directories, data streams, various types of storage media, process controls, filesystems, and much more. The basic functions of these tools are the ones that allow SysAdmins to perform many of the tasks required to administer a Linux computer. These tools are indispensable because, without them, it is not possible to accomplish any useful work on a Unix or Linux computer. GNU is a recursive acronym that stands for "GNU's Not Unix." The GNU utilities were developed under the auspices of the Free Software Foundation (FSF) to provide free software to programmers and developers. Most distributions of Linux contain the GNU utilities.
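As a small taste of what these tools feel like in use, here is a sketch of a short shell session. The file and directory names are hypothetical; printf, wc, sort, mkdir, and cp are all real core utilities:

```shell
# Create a small text file with printf
printf 'charlie\nalpha\nbravo\n' > names.txt

# wc counts lines, words, and bytes; -l prints only the line count
wc -l names.txt

# sort orders the lines of the data stream
sort names.txt

# mkdir and cp manage directories and files
mkdir -p backup
cp names.txt backup/
```

Each of these commands reads or writes a simple data stream, which is a large part of what makes them so easy to combine.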

GNU coreutils

To understand the origins of the GNU core utilities, we need to take a short trip in the Wayback Machine to the early days of Unix at Bell Labs. Unix was originally written so that Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna could continue with something they had started while working on a large multitasking and multiuser computer project called Multics. That little something was a game called "Space Travel." As is true today, it always seems to be the gamers who drive the technology of computing forward. This new operating system was much more limited than Multics, as only two users could log in at a time, so it was called Unics. This name was later changed to UNIX. Over time, UNIX turned out to be such a success that Bell Labs began essentially giving it away to universities, and later to companies, for the cost of the media and shipping. Back in those days, system-level software was shared between organizations and programmers as they worked to achieve common goals within the context of system administration. Eventually the PHBs[27] at AT&T decided that they should start making money on Unix and started using more restrictive – and expensive – licensing. This was taking place at a time when software in general was becoming more proprietary, restricted, and closed. It was becoming impossible to share software with other users and organizations.

27. PHB: Pointy-Haired Bosses, a reference to the boss in the Dilbert comics


Some people did not like this and fought it with – free software. Richard M. Stallman[28] led a group of rebels who were trying to write an open and freely available operating system that they called the "GNU Operating System." This group created the GNU utilities but did not produce a viable kernel. When Linus Torvalds first wrote and compiled the Linux kernel, he needed a set of very basic system utilities to even begin to perform marginally useful work. The kernel does not provide these commands or even any type of command shell such as Bash; the kernel is useless by itself. So Linus used the freely available GNU core utilities and recompiled them for Linux. This gave him a complete, though quite basic, operating system. You can learn about all of the individual programs that comprise the GNU utilities by entering the command info coreutils at a terminal command line. The utilities are grouped by function to make specific ones easier to find; highlight the group you want more information on, and press the Enter key. There are 102 utilities in that list, covering many of the functions necessary to perform basic tasks on a Unix or Linux host. However, many basic utilities are missing. For example, the mount and umount commands are not in this list. Those, and many of the other commands that are not in the GNU coreutils, can be found in the util-linux collection.

util-linux

The util-linux package of utilities contains many of the other common commands that SysAdmins use. These 107 utilities are distributed by the Linux Kernel Organization, and virtually every distribution uses them. (The GNU core utilities, for their part, were formed in 2003 by combining three older collections – fileutils, shellutils, and textutils – into a single package.) These two collections of basic Linux utilities, the GNU core utilities and util-linux, together provide the basic utilities required to administer a basic Linux system. As I researched this book, I found several interesting utilities in this list that I never knew about. Many of these commands are seldom needed, but when you do need one, they are indispensable. Between these two collections, there are over 200 Linux utilities.

28. Wikipedia, Richard M. Stallman, https://en.wikipedia.org/wiki/Richard_Stallman
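A quick way to see which collection a particular command comes from is to ask it for its version string. This is a sketch; the exact version numbers in the comments are illustrative only:

```shell
# sort is one of the GNU core utilities
sort --version | head -n 1     # e.g. "sort (GNU coreutils) 9.1"

# mount is not in coreutils; it comes from the util-linux package
mount --version | head -n 1    # e.g. "mount from util-linux 2.38"
```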


Linux has many more commands, but these are the ones that are needed to manage the most basic functions of the typical Linux host. The lscpu utility that I used earlier in this chapter is distributed as part of the util-linux package. I find it easiest to refer to these two collections together as the Linux core utilities.
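For instance, lscpu (from util-linux) and nproc (from the GNU coreutils) can both be run from any terminal session, and neither requires root privileges:

```shell
# lscpu summarizes the CPU architecture of the host
lscpu | head -n 6

# nproc prints the number of processing units available
nproc
```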

Copyleft

Just because Linux and its source code are freely available does not mean that there are no legal or copyright issues involved. Linux is copyrighted under the GNU General Public License Version 2 (GPL2). The GNU GPL2 is actually called a copyleft instead of a copyright by most people in the industry because its terms are so significantly different from those of most commercial licenses. The terms of the GPL allow you to distribute or even to sell Linux (or any other copylefted software), but you must make the complete source code available without restrictions of any kind, as well as the compiled binaries. The original owner – Linus Torvalds in the case of parts of the Linux kernel – retains copyright to the portions of the Linux kernel he wrote, and other contributors to the kernel retain the copyright to their portions of the software, no matter by whom or how much it is modified or added to.

Games

One thing that my research has uncovered, and which I find interesting, is that right from the beginning, it has been the gamers who have driven technology. At first it was things like Tic-Tac-Toe on an old IBM 1401, then Space Travel on Unics and the PDP-7, Adventure and many other text-based games on Unix, single-player 2D video games on the IBM PC and DOS, and now first-person shooters (FPS) and massively multiplayer online games (MMOGs) on powerful Intel and AMD computers with lots of RAM, expensive and very sensitive keyboards, and extremely high-speed Internet connections. Oh, yes, and lights. Lots of lights inside the case, on the keyboard and mouse, and even built into the motherboards. In many instances these lights are programmable. AMD and Intel are intensely competitive in the processor arena, and both companies provide very high-powered versions of their products to feed the gaming community. These powerful hardware products also provide significant benefits to other communities, like writers.


For me, having many CPUs and huge amounts of RAM and disk space makes it possible to run several virtual machines simultaneously. This enables me to have two or three VMs that represent the ones you will use for the experiments that will help you explore Linux in this book, plus other, crashable and disposable VMs that I use to test various scenarios.

Chapter summary

Linux is an operating system that is designed to manage the flow and storage of programs and data in a modern Intel computer. It consists of a kernel, which was written by Linus Torvalds, and two sets of system-level utilities that provide the SysAdmin with the ability to manage and control the functions of the system and the operating system itself. These two sets of utilities, the GNU utilities and util-linux, together comprise a collection of over 200 Linux core utilities that are indispensable to the Linux SysAdmin. Linux must work very closely with the hardware in order to perform many of its functions, so we also looked at the major components of a modern Intel-based computer.

Exercises

1. What is the primary function of an operating system?
2. List at least four additional functions of an operating system.
3. Describe the purpose of the Linux core utilities as a group.
4. Why did Linus Torvalds choose to use the GNU core utilities for Linux instead of writing his own?


CHAPTER 3

The Linux Philosophy for SysAdmins

Objectives

In this chapter you will learn

• The historical background of the Linux Philosophy for SysAdmins

• A basic introduction to the tenets of the Linux Philosophy for SysAdmins

• How the Linux Philosophy for SysAdmins can help you learn to be a better SysAdmin

Background

The Unix Philosophy is an important part of what makes Unix[1] unique and powerful. Much has been written about the Unix Philosophy, and the Linux Philosophy is essentially the same as the Unix Philosophy because of its direct line of descent from Unix. The original Unix Philosophy was intended primarily for the system developers. Having worked with Unix and Linux for over 20 years as of this writing, I have found that the Linux Philosophy has contributed greatly to my own efficiency and effectiveness as a SysAdmin. I have always tried to follow the Linux Philosophy because my experience has been that a rigorous adherence to it, regardless of the pressure applied by a legion of Pointy-Haired Bosses (PHBs), will always pay dividends in the long run.

1. https://en.wikipedia.org/wiki/Unix


© David Both 2020 D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_3


The original Unix and Linux Philosophy was intended for the developers of those operating systems. Although system administrators could apply many of the tenets to their daily work, many important tenets that address things unique to SysAdmins were missing. Over the years that I have been working with Linux and Unix, I have formulated my own philosophy – one which applies more directly to the everyday life and tasks of the system administrator. My philosophy is based in part upon the original Unix and Linux Philosophy, as well as the philosophies of my mentors. My book, The Linux Philosophy for SysAdmins,[2] is the result of my SysAdmin approach to the Linux Philosophy, and much of this chapter is taken directly from that book. Because the name "Linux Philosophy for SysAdmins" is a bit long, most of the time I will refer to it in this book simply as the "Philosophy."

The structure of the philosophy

There are three layers to the Linux Philosophy for System Administrators, in a way that is similar to Maslow's hierarchy of needs.[3] These layers are also symbolic of our growth through progressively higher levels of enlightenment. The bottom layer is the foundation – the basic commands and knowledge that we as SysAdmins need in order to perform the lowest level of our jobs. The middle layer consists of those practical tenets that build on the foundation and inform the daily tasks of the SysAdmin. The top layer contains the tenets that fulfill our higher needs as SysAdmins and which encourage and enable us to share our knowledge. The first and most basic layer of the philosophy is the foundation. It is about "The Linux Truth," data streams, Standard Input/Output (STDIO), transforming data streams, small command-line programs, and the meaning of "everything is a file," for example. The middle layer contains the functional aspects of the philosophy. Embracing the command line, we expand our command-line programs to create tested and maintainable shell programs that we save and can use repeatedly and even share. We become the "lazy admin" and begin to automate everything. We use the Linux filesystem hierarchy appropriately and store data in open formats. These are the functional portions of the philosophy.

2. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018, ISBN 978-1-4842-3729-8
3. Wikipedia, Maslow's hierarchy of needs, https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs


Figure 3-1.  The hierarchy of the Linux Philosophy for SysAdmins

The top layer of the philosophy is about enlightenment. We begin to progress beyond merely performing our SysAdmin tasks and just getting the job done; our understanding of the elegance and simplicity in the design of Linux is perfected. We begin striving to do our own work elegantly, keeping solutions simple, simplifying existing but complex solutions, and creating usable and complete documentation. We begin to explore and experiment simply for the sake of gaining new knowledge. At this stage of enlightenment, we begin to pass our knowledge and methods to those new to the profession, and we actively support our favorite open source projects. In my opinion, it is impossible to learn about many Linux commands and utilities without learning about the structure and philosophy of Linux. Working on the command line requires such knowledge. At the same time, working on the command line engenders the very knowledge required to use it. If you use the command line long enough, you will find that you have learned at least something about the intrinsic beauty of Linux without even trying. If you then follow your own curiosity about what you have already learned, the rest will be revealed. Does that sound a bit Zen? It should, because it is.


The tenets

Here we look briefly at each of the tenets of the Linux Philosophy for SysAdmins. As we proceed through this book, I will point out many places where these tenets apply and what they reveal about the underlying structure of Linux. We will also discover many practical applications of the philosophy that you will be able to use every day. This list must necessarily be terse, and it cannot cover all aspects of each tenet. If you are interested in learning more, you should refer to The Linux Philosophy for SysAdmins[4] for more information and the details of each tenet.

Data streams are a universal interface

Everything in Linux revolves around streams of data – particularly text streams. In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a SysAdmin, your job is intimately associated with manipulating the creation and flow of these data streams. The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things and of manipulating data streams. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux.

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

—Doug McIlroy, Basics of the Unix Philosophy[5,6]

STDIO was developed by Ken Thompson[7] as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use

4. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018, ISBN 978-1-4842-3729-8
5. Raymond, Eric S., The Art of Unix Programming, www.catb.org/esr/writings/taoup/html/ch01s06.html
6. Linuxtopia, Basics of the Unix Philosophy, www.linuxtopia.org/online_books/programming_books/art_of_unix_programming/ch01s06.html
7. Wikipedia, Ken Thompson, https://en.wikipedia.org/wiki/Ken_Thompson


standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device. Data streams are the raw materials upon which the core utilities and many other CLI tools perform their work. As its name implies, a data stream is a stream of data being passed from one file, device, or program to another using STDIO.
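A trivial pipeline shows STDIO at work. Each program writes to STDOUT and the next reads from STDIN, with no intermediate files; the sample names are, of course, made up:

```shell
# printf generates a stream of text, sort orders it, and uniq
# removes adjacent duplicate lines
printf 'carol\nalice\nbob\nalice\n' | sort | uniq
# alice
# bob
# carol
```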

Transforming data streams

This tenet explores the use of pipes to connect streams of data from one utility program to another using STDIO. The function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file. Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the SysAdmin to perform some transformational operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As has already been mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to the computer. I use the term "transform" in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the SysAdmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file. The standard term for these programs, "filters," implies something with which I don't agree. By definition, a filter is a device or a tool that removes something, as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don't want to breathe. So, although they do sometimes filter out unwanted data from a stream, I much prefer the term "transformers" because these utilities do so much more.
They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers.


The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the Linux core utilities are transformer programs and use STDIO.
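Here is a minimal sketch of such a transformer pipeline with redirection at the end; the input data and the output file name are hypothetical:

```shell
# grep selects only the lines containing "an", tr transforms the
# stream to uppercase, and > redirects the result into a file
printf 'apple\nbanana\ncherry\n' | grep 'an' | tr 'a-z' 'A-Z' > fruit.txt

cat fruit.txt
# BANANA
```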

Everything is a file

This is one of the most important concepts that makes Linux especially flexible and powerful: everything is a file. That is, everything can be the source of a data stream, the target of a data stream, or, in many cases, both. In this book you will explore what "everything is a file" really means and learn to use that to your great advantage as a SysAdmin.

The whole point with "everything is a file" is ... the fact that you can use common tools to operate on different things. —Linus Torvalds in an e-mail

The idea that everything is a file has some interesting and amazing implications. This concept makes it possible to copy a boot record, a disk partition, or an entire hard drive, including the boot record, because the entire hard drive is a file, just as are the individual partitions. Other possibilities include using the cp (copy) command to print a PDF file to a compatible printer, using the echo command to send messages from one terminal session to another, and using the dd command to copy ISO image files to a USB thumb drive. "Everything is a file" is possible because all devices are implemented by Linux as these things called device special files, which are located in the /dev/ directory. Device files are not device drivers; rather, they are gateways to devices that are exposed to the user. We will discuss device special files in some detail throughout this course, as well as in Volume 2, Chapter 3.
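A safe way to experiment with this idea is to use device special files that carry no risk of damage. In the sketch below, /dev/zero and /dev/null stand in for a real disk device such as /dev/sda, which would require root privileges:

```shell
# Writing to a device file: /dev/null silently discards its input
echo "this text simply disappears" > /dev/null

# Reading from a device file: copy one 512-byte block (the size of
# a classic boot record) from /dev/zero into an ordinary file
dd if=/dev/zero of=fake-boot-record.img bs=512 count=1

# The resulting file is exactly 512 bytes long
ls -l fake-boot-record.img
```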

Use the Linux FHS

The Linux Filesystem Hierarchy Standard (FHS) defines the structure of the Linux directory tree. It names a set of standard directories and designates their purposes. This standard has been put in place to ensure that all distributions of Linux are consistent in their directory usage. Such consistency makes writing and maintaining shell and compiled programs easier for SysAdmins because the programs, their configuration files,


and their data, if any, should be located in the standard directories. This tenet is about storing programs and data in the standard and recommended locations in the directory tree and the advantages of doing so. As SysAdmins, our tasks include everything from fixing problems to writing CLI programs to perform many of our tasks for us and for others. Knowing where data of various types are intended to be stored on a Linux system can be very helpful in resolving problems as well as in preventing them. The latest Filesystem Hierarchy Standard (3.0)[8] is defined in a document maintained by the Linux Foundation.[9] The document is available in multiple formats from their web site, as are historical versions of the FHS.
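You can see several of the FHS-defined directories on any Linux host. A quick sketch:

```shell
# A few of the standard directories named by the FHS:
#   /etc  - local configuration files
#   /home - user home directories
#   /var  - variable data such as logs and spools
#   /tmp  - temporary files
ls -ld /etc /var /tmp
```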

Embrace the CLI

The force is with Linux, and the force is the command-line interface – the CLI. The vast power of the Linux CLI lies in its complete lack of restrictions. Linux provides many options for accessing the command line, such as virtual consoles, many different terminal emulators, shells, and other related software that can enhance your flexibility and productivity. The command line is a tool that provides a text-mode interface between the user and the operating system. It allows the user to type commands into the computer for processing and to see the results. The Linux command-line interface is implemented with shells such as Bash (Bourne again shell), csh (C shell), and ksh (Korn shell), to name just three of the many that are available. The function of any shell is to pass commands typed by the user to the operating system, which executes the commands and returns the results to the shell. Access to the command line is through a terminal interface of some type. There are three primary types of terminal interface that are common in modern Linux computers, but the terminology can be confusing. These three interfaces are virtual consoles, terminal emulators that run on a graphical desktop, and SSH remote connections. We will explore the terminology, virtual consoles, and one terminal emulator in Chapter 7. Several different terminal emulators are covered in Chapter 14.

8. The Linux Foundation, The Linux Filesystem Hierarchy Standard, http://refspecs.linuxfoundation.org/fhs.shtml
9. The Linux Foundation maintains documents defining many Linux standards. It also sponsors the work of Linus Torvalds.


Be the lazy SysAdmin

Despite everything we were told by our parents, teachers, bosses, and well-meaning authority figures, and despite the hundreds of quotes about hard work that I found with a Google search, getting your work done well and on time is not the same as working hard. One does not necessarily imply the other. I am a lazy SysAdmin. I am also a very productive SysAdmin. Those two seemingly contradictory statements are not mutually exclusive; rather, they are complementary in a very positive way. Efficiency is the only way to make this possible. This tenet is about working hard at the right tasks to optimize our own efficiency as SysAdmins. Part of this is about automation, which we will explore in detail in Chapter 10 of Volume 2 but also throughout this course. The greater part of this tenet is about finding many of the myriad ways to use the shortcuts already built into Linux. Things like using aliases as shortcuts to reduce typing – but probably not in the way you think of them if you come from a Windows background – naming files so that they can be easily found in lists, and using the file name completion facility that is part of Bash, the default Linux shell for most distributions, all contribute to making life easier for lazy SysAdmins.
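Aliases are among the simplest of these shortcuts. A sketch (the alias name ll is a common convention, but both the name and the option list are arbitrary choices):

```shell
# Define a typing shortcut: in an interactive Bash session, typing
# "ll" now runs "ls -l --color=auto"
alias ll='ls -l --color=auto'

# Display the definition of the alias
alias ll
```

An alias defined this way lasts only for the current shell session; to make it permanent, it is usually added to ~/.bashrc.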

Automate everything

The function of computers is to automate mundane tasks in order to allow us humans to concentrate on the tasks that the computers cannot – yet – do. As SysAdmins, those of us who run and manage the computers most closely, we have direct access to the tools that can help us work more efficiently, and we should use those tools to maximum benefit. In Chapter 8, "Be a Lazy SysAdmin," of The Linux Philosophy for SysAdmins,[10] I state, "A SysAdmin is most productive when thinking – thinking about how to solve existing problems and about how to avoid future problems; thinking about how to monitor Linux computers in order to find clues that anticipate and foreshadow those future problems; thinking about how to make her job more efficient; thinking about how to automate all of those tasks that need to be performed whether every day or once a year." SysAdmins are next most productive when creating the shell programs that automate the solutions that they have conceived while appearing to be unproductive. The more

10. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018, 132, ISBN 978-1-4842-3729-8


Chapter 3

The Linux Philosophy for SysAdmins

automation we have in place, the more time we have available to fix real problems when they occur and to contemplate how to automate even more than we already have. I have learned that, for me at least, writing shell programs– also known as scripts– provides the best single strategy for leveraging my time. Once a shell program has been written, it can be rerun as many times as needed.
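Automation often ends with scheduling: once a script exists, a cron entry can run it without further human attention. The fragment below is a hypothetical example; the script path is invented for illustration:

```
# Hypothetical crontab entry: run a maintenance script at 01:30 every day.
# Fields: minute  hour  day-of-month  month  day-of-week  command
30 1 * * * /usr/local/bin/nightly-maintenance.sh
```

Such an entry is installed with `crontab -e`; we will look at cron in more detail later in the course.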

Always use shell scripts

When writing programs to automate– well, everything– always use shell scripts rather than compiled utilities and tools. Because shell scripts are stored in plain text11 format, they can be viewed and modified by humans just as easily as by computers. You can examine a shell program and see exactly what it does and whether there are any obvious errors in the syntax or logic. This is a powerful example of what it means to be open.

A shell script or program is an executable file that contains at least one shell command. Most contain more than a single command, and some shell scripts have thousands of lines of code. Taken together, these commands are the ones necessary to perform a desired task with a specifically defined result.

Context is important, and this tenet should be considered in the context of our jobs as SysAdmins. The SysAdmin’s job differs significantly from those of developers and testers. In addition to resolving both hardware and software problems, we manage the day-to-day operation of the systems under our care. We monitor those systems for potential problems and make all possible efforts to prevent those problems before they impact our users. We install updates and perform full release-level upgrades to the operating system. We resolve problems caused by our users. SysAdmins develop code to do all of those things and more; then we test that code; and then we support that code in a production environment.
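As a sketch of what such a script might look like, here is a short, readable shell function; the name, target directory, and retention period are hypothetical, and anyone can read the plain text to see exactly what it does before running it:

```shell
#!/usr/bin/env bash
# cleanup_old_files: delete regular files under a directory that have
# not been accessed in a given number of days.
cleanup_old_files() {
    local target_dir="$1"
    local max_age_days="$2"
    # -xdev prevents find from crossing into other mounted filesystems.
    find "$target_dir" -xdev -type f -atime +"$max_age_days" -print -delete
}

# Example: remove files in a scratch directory idle for more than 7 days.
# cleanup_old_files /tmp/scratch 7
```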

Test early, test often

There is always one more bug.
—Lubarsky’s Law of Cybernetic Entomology

11. Wikipedia, Plain text, https://en.wikipedia.org/wiki/Plain_text


Lubarsky– whoever he might be– is correct. We can never find all of the bugs in our code. For every one I find, there always seems to be another that crops up, usually at a very inopportune time.

Testing affects the ultimate outcome of the many tasks SysAdmins do and is an integral part of the philosophy. However, testing is not just about programs. It is also about verifying that problems– whether caused by hardware, software, or the seemingly endless ways that users can find to break things– that we are supposed to have resolved actually have been. These problems can be with application or utility software we wrote, system software, applications, and hardware. Just as importantly, testing is also about ensuring that the code is easy to use and the interface makes sense to the user.

Testing is hard work, and it requires a well-designed test plan based on the requirements statements. Regardless of the circumstances, start with a test plan. Even a very basic test plan provides some assurance that testing will be consistent and will cover the required functionality of the code. Any good plan includes tests to verify that the code does everything it is supposed to. That is, if you enter X and click button Y, you should get Z as the result. So you write a test that creates those conditions and then verifies that Z is the result.

The best plans include tests to determine how well the code fails. The specific scenarios explicitly covered by the test plan are important, but they may fail to anticipate the havoc that can be caused by unanticipated or even completely random input. This situation can be at least partially covered by fuzzy testing, in which someone or some tool randomly bangs on the keyboard until something bad happens. For SysAdmins, testing in production, which some people consider to be a new thing, is a common practice.
There is no test plan that can be devised by a lab full of testers that can possibly equal a few minutes in the real world of production.
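A basic test plan can itself be a small script. This sketch checks both the “enter X, expect Z” case and how well the failure path fails; the command under test and the expected values are illustrative only:

```shell
#!/usr/bin/env bash
# Minimal test-plan sketch: verify expected output for known input,
# and verify that the failure path actually reports failure.

# Test 1: known input should produce the expected result.
result=$(printf 'alpha\nbeta\n' | grep -c 'a')
if [ "$result" -eq 2 ]; then
    echo "PASS: got expected count"
else
    echo "FAIL: expected 2, got $result"
fi

# Test 2: how well does it fail? Searching a nonexistent file must
# return a non-zero exit code rather than silently succeeding.
if ! grep 'a' /nonexistent/file 2>/dev/null; then
    echo "PASS: failure case returns an error status"
fi
```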

Use common sense naming

The lazy SysAdmin does everything possible to reduce unnecessary typing, and I take that seriously. This tenet expands on that, but there is much more to it than just reducing the amount of typing I need to do. It is also about the readability of scripts and naming things so that they can be understood more quickly.


One of the original Unix Philosophy tenets was to always use lowercase and keep names short.12 An admirable goal, but not one so easily met in the world of the SysAdmin. In many respects my own tenet would seem a complete refutation of the original. However, the original was intended for a different audience, and this one is intended for SysAdmins with a different set of needs.

The ultimate goal is to create scripts that are readable and easily understood in order to make them easily maintainable, and then to use other, simple scripts and cron jobs to automate running those scripts. Keeping the script names reasonably short also reduces typing when executing those scripts from the command line, but that is mostly irrelevant when starting them from another script or as cron jobs.

Readable scripts depend upon easily understandable and readable variable names. Sometimes, as with script names, these names may be longer but more understandable than many I have encountered in the past. Variable names like $DeviceName are much more understandable than $D5 and make a script easier to read.

Note that most of the Linux command names are short, but they also have meaning. After working at the command line for a while, you will understand most of these. For example, the ls command means list the contents of a directory. Other command names contain the “ls” string, such as lsusb to list the USB devices connected to the host or lsblk to list the block devices– hard drives– in the host.
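The point about variable names is easy to demonstrate; the device path below is purely illustrative:

```shell
#!/usr/bin/env bash
# Opaque: a future maintainer must hunt through the script to learn
# what $D5 holds.
D5="/dev/sda"
echo "Checking $D5"

# Self-documenting: the intent is obvious at a glance.
DeviceName="/dev/sda"
echo "Checking device $DeviceName"
```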

Store data in open formats

The reason we use computers is to manipulate data. It used to be called “Data Processing” for a reason, and that was an accurate description. We still process data, although it may be in the form of video and audio streams, network and wireless streams, word processing data, spreadsheets, images, and more. It is all still just data. We work with and manipulate text data streams with the tools we have available to us in Linux.

That data usually needs to be stored, and when there is a need to store data, it is always better to store it in open file formats than closed ones. Although many user application programs store data in plain text formats, including simple flat plain text and XML, this tenet is mostly about configuration data and scripts that relate directly to Linux. However, any type of data should be stored as plain text if possible.

12. Early Unix systems had very little memory compared to today’s systems, so saving a few bytes in a name was important. Unix and Linux are case sensitive, so an extra keystroke to hit the shift key was extra work.


“Open source” is about the code and making the source code available to any and all who want to view or modify it. “Open data”13 is about the openness of the data itself. The term open data does not mean just having access to the data itself; it also means that the data can be viewed, used in some manner, and shared with others. The exact manner in which those goals are achieved may be subject to some sort of attribution and open licensing. As with open source software, such licensing is intended to ensure the continued open availability of the data and not to restrict it in any manner that would prevent its use.

Open data is knowable. That means that access to it is unfettered. Truly open data can be read freely and understood without the need for further interpretation or decryption. In the SysAdmin world, open means that the data we use to configure, monitor, and manage our Linux hosts is easy to find, read, and modify when necessary. It is stored in formats that permit that ease of access, such as plain text. When a system is open, the data and software can all be managed by open tools– tools that work with plain text.
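Because open, plain-text data needs no special decoder, any standard text tool can inspect it. This sketch uses a made-up configuration file to show the idea:

```shell
#!/usr/bin/env bash
# Create a sample plain-text configuration file; the contents are
# hypothetical, standing in for real files such as those under /etc.
cat > /tmp/sample.conf <<'EOF'
# This line is a comment
hostname=testhost
domain=example.com
EOF

# Any ordinary tool can read and filter it; no proprietary program needed.
grep -v '^#' /tmp/sample.conf
```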

Use separate filesystems for data

There is a lot to this particular tenet, and it requires understanding the nature of Linux filesystems and mount points.

Note The primary meaning of the term “filesystem” in this tenet is a segment of the directory tree that is located on a separate partition or logical volume that must be mounted on a specified mount point of the root filesystem to enable access to it. We also use the term to describe the structure of the metadata on the partition or volume, such as EXT4, XFS, or another structure. These different usages should be clear from their context.

There are at least three excellent reasons for maintaining separate filesystems on our Linux hosts. First, when hard drives crash, we may lose some or all of the data on a damaged filesystem, but, as we will see, data on other filesystems on the crashed hard drive may still be salvageable.

13. Wikipedia, Open Data, https://en.wikipedia.org/wiki/Open_data


Second, despite having access to huge amounts of hard drive space, it is possible to fill up a filesystem. When that happens, separate filesystems can minimize the immediate effects and make recovery easier. Third, upgrades can be made easier when certain directory trees such as /home are located on separate filesystems. This makes it easy to upgrade without needing to restore that data from a backup.

I have frequently encountered all three of these situations in my career. In some instances, there was only a single partition, the root (/) partition, and so recovery was quite difficult. Recovery from these situations was always much easier and faster when the host was configured with separate filesystems.

Keeping data of all types safe is part of the SysAdmin’s job. Using separate filesystems for storing that data can help us accomplish that. This practice can also help us achieve our objective of being lazy SysAdmins. Backups do allow us to recover most of the data that would otherwise be lost in a crash scenario, but using separate filesystems may allow us to recover all of the data from unaffected filesystems right up to the moment of a crash. Restoring from backups takes much longer.
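As an illustration, a hypothetical /etc/fstab fragment that keeps /home and /var on their own filesystems might look like the following; the device names are invented for the example:

```
# <device>              <mount point>  <type>  <options>  <dump>  <fsck>
/dev/mapper/vg01-root   /              ext4    defaults   1       1
/dev/mapper/vg01-home   /home          ext4    defaults   1       2
/dev/mapper/vg01-var    /var           ext4    defaults   1       2
```

With /home on its own volume, the root filesystem can be reinstalled or upgraded while the users’ data stays untouched.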

Make programs portable

Portable programs make life much easier for the lazy SysAdmin. Portability is an important consideration because it allows programs to be used on a wide range of operating system and hardware platforms. Using interpreted languages such as Bash, Python, and Perl that can run on many types of systems can save loads of work.

Programs written in compiled languages such as C must be recompiled at the very least when porting from one platform to another. In many cases, platform-specific code must be maintained in the sources in order to support the different hardware platforms that the binaries are expected to run on. This generates a lot of extra work, both writing and testing the programs. Perl, Bash, and many other scripting languages are available in most environments. With very few exceptions, programs written in Perl, Bash, Python, PHP, and other languages can run unchanged on many different platforms.

Linux runs on a lot of hardware architectures.14 Wikipedia maintains a long list of hardware architectures supported by Linux, but here are just a few. Of course Linux

14. Wikipedia, List of Linux-supported computer architectures, https://en.wikipedia.org/wiki/List_of_Linux-supported_computer_architectures


supports Intel and AMD. It also supports 32- and 64-bit ARM architectures, which are found in practically every mobile phone on the planet and in devices such as the Raspberry Pi.15 Most mobile phones use a form of Linux called Android.

Use open source software

This tenet may not mean exactly what you think it does. Most times we think of open source software as something like the Linux kernel, LibreOffice, or any of the thousands of open source software packages that make up our favorite distribution. In the context of system administration, open source means the scripts that we write to automate our work.

Open source software is software with source code that anyone can inspect, modify, and enhance.16
—Opensource.com

The web page from which the preceding quote was taken contains a well-written discussion of open source software, including some of the advantages of open source. I suggest you read that article and consider how it applies to the code we write– our scripts. The implications are there if we look for them.

The official definition of open source is quite terse. The annotated version of the open source definition17 at opensource.org contains ten sections that explicitly and succinctly define the conditions that must be met for software to be considered truly open source. This definition is important to the Linux Philosophy for SysAdmins. You do not have to read this definition, but I suggest you do so in order to gain a more complete understanding of what the term open source really means. However, I can summarize a bit.

Open source software is open because it can be read, modified, and shared because its source code is freely available to anyone who wants it. This “free as in speech” approach to software promotes worldwide participation by individuals and

15. Raspberry Pi Foundation, www.raspberrypi.org/
16. Opensource.com, What is open source?, https://opensource.com/resources/what-open-source
17. Opensource.org, The Open Source Definition (Annotated), https://opensource.org/osd-annotated


organizations in the creation and testing of high-quality code that can be shared freely by everyone. Being a good user of open source also means that we SysAdmins should share our own code, the code that we write to solve our own problems, and license it with one of the open source licenses.

Strive for elegance

Elegance is one of those things that can be difficult to define. I know it when I see it, but putting what I see into a terse definition is a challenge. Using the Linux dict command, WordNet provides one definition of elegance as “a quality of neatness and ingenious simplicity in the solution of a problem (especially in science or mathematics); ‘the simplicity and elegance of his invention.’”

In the context of this book, I assert that elegance is a state of beauty and simplicity in the design and working of both hardware and software. When a design is elegant, software and hardware work better and are more efficient. The user is aided by simple, efficient, and understandable tools. Creating elegance in a technological environment is hard. It is also necessary. Elegant solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by accident; you must work for it.

Find the simplicity

The quality of simplicity is a large part of technical elegance. The tenets of the Linux Philosophy helped me to solidify my understanding of the truth that Linux is simple and that the simplicity is illuminated by the philosophy.

UNIX is basically a simple operating system, but you have to be a genius to understand the simplicity.18
—Dennis Ritchie

In this tenet we search for the simplicity of Linux. I cringe when I see articles with titles like 77 Linux commands and utilities you’ll actually use19 and 50 Most Frequently

18 19


Used UNIX / Linux Commands (With Examples).20 These titles imply that there are sets of commands that you must memorize or that knowing large numbers of commands is important. I do read many of these articles, but I am usually looking for new and interesting commands, commands that might help me resolve a problem or simplify a command-line program.

I never tried to learn all of those Linux commands, regardless of what numbers you might come up with as the total for “all.” I just started by learning the commands I needed at any given moment for whatever project was at hand. I started to learn more commands because I took on personal projects and ones for work that stretched my knowledge to the limit and forced me to find commands previously unknown to me in order to complete those projects. My repertoire of commands grew over time, and I became more proficient at the application of those commands to resolve problems; I began finding jobs that paid me more and more money to play with Linux, my favorite toy.

As I learned about piping and redirection, about Standard Streams and STDIO, as I read about the Unix Philosophy and then the Linux Philosophy, I started to understand how and why the command line made Linux and the core utilities so powerful. I learned about the elegance of writing command-line programs that manipulated data streams in amazing ways.

I also discovered that some commands are, if not completely obsolete, then seldom used and only in unusual circumstances. For this reason alone, it does not make sense to find a list of Linux commands and memorize them. It is not an efficient use of your time as a SysAdmin to learn many commands that may never be needed. The simplicity here is to learn what you need to do the task at hand. There will be plenty of tasks in the future that will require you to learn other commands.

When writing our own administrative scripts, simplicity is also key. Each of our scripts should do only one thing and do it well.
Complex programs are difficult to use and to maintain.

Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.
—Alan Perlis21

20. The Geek Stuff, www.thegeekstuff.com/2010/11/50-linux-commands/?utm_source=feedburner
21. Wikipedia, Alan Perlis, https://en.wikipedia.org/wiki/Alan_Perlis

Use your favorite editor

Why is this a tenet of The Linux Philosophy for System Administrators? Because arguing about editors can be the cause of a great deal of wasted energy. Everyone has their favorite text editor, and it might not be the same as mine. So what?

I use Vim as my editor. I have used it for years and like it very much. I am used to it. It meets my needs more than any other editor I have tried. If you can say that about your editor– whichever one that might be– then you are in editor Nirvana.

I started using vi when I began learning Solaris over 20 years ago. My mentor suggested that I start learning to edit with vi because it would always be present on every system. That has proven to be true, whether the operating system is Solaris or Linux. The vi editor is always there, so I can count on it. For me, this works.

Vim is the new vi, but I can still use the vi command to launch Vim. The vi editor can also be used as the editor for Bash command-line editing. Although the default for command editing is Emacs-like, I use the vi option because I already know the vi keystrokes. Other tools that use vi editing are the crontab and visudo commands; both of these are wrappers around vi. Lazy developers use code that already exists, especially when it is open source. Using existing editors for these tools is an excellent example of that.

It does not matter to me what tools you use, and it should not matter to anyone else, either. What really matters is getting the job done. Whether you use Vim or Emacs, systemd or SystemV, RPM or DEB, what difference does it make? The bottom line here is that you should use the tools with which you are most comfortable and that work best for you.
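For example, switching Bash's command-line editing from the default Emacs-style keystrokes to vi-style takes a single line:

```shell
# Enable vi-style command-line editing; add to ~/.bashrc to make it permanent.
set -o vi

# Confirm the option is enabled; prints a line such as "vi    on".
set -o | grep '^vi'
```

At an interactive prompt, pressing Esc then k recalls the previous command, and / searches the command history, using the same keystrokes as the vi editor itself.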

Document everything

Real programmers don’t comment their code. If it was hard to write, it should be hard to understand and harder to modify.
—Unknown

I, too, would want to remain anonymous if I had written that. It might even have been meant as sarcasm or irony. Regardless, this does seem to be the attitude of many developers and SysAdmins. There is a poorly disguised ethos among some developers and SysAdmins that one must figure everything out for oneself in order to join the


club– whatever club that might be. If you cannot figure it out, they imply, you should go do something else because you don’t belong. First, that is not true. Second, most developers, programmers, and SysAdmins that I know definitely do not subscribe to this view. In fact, the best ones, some of whom have been my mentors over the years, exemplify the exact opposite. The best of the best make documentation– good documentation– a high priority in everything they do.

I have used a lot of software whose creators subscribed to the philosophy that all code is self-explanatory. I have also been required to fix a lot of code that was completely uncommented and otherwise undocumented as well. It seems that many developers and SysAdmins figure that if the program works for them, it does not need to be documented. I have been the SysAdmin assigned to fix uncommented code on more than one occasion. That is one of the least enjoyable tasks I have ever had to do. Part of the problem is that many PHBs do not see documentation as a high priority. I have been involved in many aspects of the IT industry, and fortunately most of the companies I worked for believed that documentation was not only important but crucial to the task at hand, regardless of what that task was.

And yet there is a lot of really good documentation out there. For example, the documentation for LibreOffice is excellent. It includes several documents in multiple formats, including HTML and PDF, that range from “Getting Started” to a very complete user’s guide for each of the LibreOffice applications. The documentation for Red Hat Enterprise Linux (RHEL) and CentOS and that for Fedora– which are all very closely related distributions– is also among the best I have seen in my more than 40 years of working in the IT industry.

Good documentation is not easy and takes time.
It also requires an understanding of the audience– not only in relation to the purpose of the documentation but also the technical expertise of the intended readers as well as the languages and cultures of the readers. Rich Bowen covered that quite nicely in his fine article at Opensource.com, “RTFM? How to write a manual worth reading.”22 There is also the question of what constitutes good documentation for a SysAdmin. We explore these things in this tenet, which is mostly about documenting the scripts we write.

22. Bowen, Rich, Opensource.com, RTFM? How to write a manual worth reading, https://opensource.com/business/15/5/write-better-docs


Back up everything– frequently

Nothing can ever go wrong with my computer, and I will never lose my data. If you believe that, I have a bridge you might like to buy. I have experienced data loss for a myriad of reasons, many of them my own fault. Keeping decent backups has always enabled me to continue with minimal disruption. This tenet is concerned with some of the more common reasons for data loss and methods for preventing data loss and facilitating easy recovery.

Recently, very recently, I encountered a problem in the form of a hard drive crash that destroyed the data in my home directory. I had been expecting this for some time, so it came as no surprise. The first indication I had that something was wrong was a series of e-mails from the SMART (Self-Monitoring, Analysis and Reporting Technology) enabled hard drive on which my home directory resided.23 Each of these e-mails indicated that one or more sectors had become defective and that the defective sectors had been taken offline and reserved sectors allocated in their place. This is normal operation; hard drives are intentionally designed with reserved sectors for exactly this situation, and the data is stored in a reserved sector instead of the defective one.

When the hard drive finally failed– I left it in my computer until it failed as a test– I replaced the drive, partitioned and formatted it appropriately, copied my files from the backup to the new drive, did a little testing, and was good to go. Backups save time, effort, and money. Don’t be caught without backups. You will need them.

Follow your curiosity

People talk about lifelong learning and how it keeps one mentally alert and youthful. The same is true of SysAdmins. There is always more to learn, and I think that is what keeps most of us happy and always ready to tackle the next problem. Continuous learning helps to keep our minds and skills sharp no matter what our age.

I love to learn new things. I was fortunate in that my curiosity led me to a lifetime of working with my favorite toys– computers. There are certainly plenty of new things to learn about computers; the industry and technology are constantly changing. There are

23. Your host must have a mail transfer agent (MTA) such as Sendmail installed and running. The /etc/aliases file must have an entry to send root’s e-mail to your e-mail address.


many things on Earth and in this universe to be curious about. Computers and related technology just seem to be the thing I enjoy the most. I also assume that you must be curious because you are reading this book.

Curiosity got me into Linux in the first place, but it was a long and winding road. Over a period of many years, my curiosity led me through many life events that led me to a job at IBM, which led to writing the first training course for the original IBM PC, which led to a job at a company where I was able to learn Unix, which led me to Linux because Unix was too expensive to use at home, which led to a job at Red Hat which ... you get the idea. Now I write about Linux.

Follow your own curiosity. You should explore the many aspects of Linux and go wherever your curiosity leads you. It was only by following my curiosity, first about electronics, then computers, programming, operating systems, Linux, servers, networking, and more, that I have been able to do so many fun and interesting things.

There is no should

This tenet is about possibilities. It is also the most Zen of all the tenets. It is more about how our minds work to solve problems than it is about specific technology. It is also about overcoming, or at least recognizing, some of the obstacles that prevent us from fully utilizing the potential we have in ourselves.

In “The Wrath of Khan,” Spock says, “There are always possibilities.” With Linux there are always possibilities– many ways to approach and solve problems. This means that you may perform a task in one way, while another SysAdmin may do it in another. There is no one way in which tasks “should” be done. There is only the way you have done it. If the results meet the requirements, then the manner in which they were reached is perfection.

I believe that we Linux SysAdmins approach solving Linux problems with fewer constraints on our thinking than those who appear to think more in terms of “harnessing” and “restrictions.” We have so many simple yet powerful tools available to us that we do not find ourselves constrained by either the operating system or any inhibitive manner of thinking about the tools we use or the operational methods with which we may apply them.

Rigid logic and rules do not give us SysAdmins enough flexibility to perform our jobs efficiently. We don’t especially care about how things “should” be done. SysAdmins are


not easily limited by the “shoulds” with which others try to constrain us. We use logical and critical thinking that is flexible, that produces excellent results, and that enables us to learn more while we are at it. We don’t just think outside the box. We are the ones who destroy the boxes that others try to make us work inside. For us, there is no “should.”

Mentor the young SysAdmins

I have taken many training courses over the years, and most have been very useful in helping me to learn more about Unix and Linux as well as a host of other subjects. But training– as useful and important as it is– cannot cover many essential aspects of performing SysAdmin duties.

Some things can only be taught by a good mentor in a real-world environment, usually while you are under extreme pressure to fix a critical problem. A good mentor will allow you to do the actual work in these situations so that you can have a valuable learning experience, while keeping the wolves at bay, taking the heat while you work uninterrupted. A great mentor will also be able to create a learning opportunity from every situation, no matter how critical.

This tenet is also about teaching the young SysAdmins critical thinking and the application of the scientific method to the art of solving problems. It is about passing on what you have received.

Support your favorite open source project

Linux and a very large proportion of the programs that we run on it are open source programs. Many of the larger projects, such as the kernel itself, are supported directly by foundations set up for that purpose, such as the Linux Foundation, and/or by corporations and other organizations that have an interest in doing so.

As a SysAdmin, I write a lot of scripts, and I like doing so, but I am not an application programmer. Nor do I want to be, because I enjoy the work of a SysAdmin, which allows for a different kind of programming. So, for the most part, contributing code to an open source project is not a good option for me. There are other ways to contribute, such as answering questions on lists or web sites, submitting bug reports, writing documentation, writing articles for web sites like Opensource.com, teaching, and contributing money. And I use some of those options. This tenet is about exploring some of the ways in which you might contribute. As in mentoring, this is a way to give back to the community.


Reality bytes

The Linux Philosophy for SysAdmins is a technical philosophy which would not normally be considered to be very practical. But there is “truth” here. Reality imposes itself upon SysAdmins every day in a multitude of ways. Always following every one of these tenets is possible in theory, but in practice it is quite improbable.

In the “real” world, we SysAdmins face some incredible challenges just to get our assigned work completed. Deadlines, management, and other pressures force us to make decisions many times a day about what to do next and how to do it. Meetings usually waste our time– not always, but usually. Finding time and money for training is unheard of in many organizations and requires selling your SysAdmin soul in others.

Adhering to the philosophy does pay high-value returns in the long run. Still, reality always intrudes on the ever so perfect philosophical realm. Without room for flexibility, any philosophy is merely doctrine, and that is not what the Linux Philosophy for System Administrators is about. This tenet explores how reality may affect us as system administrators.

Computers are easy– people are hard.
—Bridget Kromhout

SysAdmins must work and interact with people, whether they be users, technical professionals on other teams, our peers, or management. It can be difficult, but we do need to do so from time to time. We need to discuss our work with other people who have differing levels of knowledge.

Knowledge is not a binary condition; it is analog. People have a wide disparity in the amount of knowledge that they have about computers and technology. This ranges from seemingly less than none to very knowledgeable. Their level of knowledge is important in how we interact with them.

I have been accused of overexplaining things, but I would rather overexplain than under-explain. Some have even called it “mansplaining,” but that is not really my intent. I have found that all technical people, regardless of gender, gender preference or identification, or other identifying characteristics, have the same tendency to explain things from the ground up when asked a simple question. That is because the answers are never as simple as the questions.


Chapter 3

The Linux Philosophy for SysAdmins

Chapter summary

This chapter is a brief overview of The Linux Philosophy for SysAdmins. The philosophy is my mental framework for how I work as a SysAdmin. It has been a useful tool for me, and as we proceed through this course, I will point out and explain why and how these tenets apply to certain situations or tasks. Working in accordance with the tenets of the philosophy will enhance our productivity and efficiency as we perform our work.

Exercises

Perform the following exercises to complete this chapter:

1. Why do you think that the Linux Philosophy for SysAdmins is important?

2. Do any of the tenets discussed in this chapter suggest that you might do things differently?


CHAPTER 4

Preparation

Objectives

In this chapter you will

•	Choose a computer host on which to install VirtualBox and a virtual machine (VM) on which you can perform the experiments

•	Install VirtualBox on the hardware you chose

•	Create a small VM with which to safely perform experiments

•	Configure the virtual network adapter as needed for this course

Overview

Some tasks need to be accomplished in order to prepare for the experiments in this Linux training course. Most lab environments use physical machines for training purposes, but this course will ultimately use at least two Linux hosts in a private network in order to enable a realistic environment for learning about being a SysAdmin. It is also helpful for these hosts to be left untouched between one course and the next, or during long breaks in the middle of any of the courses, so a normal classroom environment is not optimal for learning Linux. Also, most people who want to learn Linux do not have that many physical hosts and a private network available. Even if you work for a company that supports your training with money (a very big consideration for many people) and the time to take classes (usually an even scarcer commodity), I have never seen a company or public training center that can dedicate multiple computers to a single student during a class and keep them untouched between classes that may be scheduled months apart.

© David Both 2020 D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_4


Because of these factors, this series of three volumes that make up our training manual (which is what this is) uses virtual machines (VMs) in a virtual network that can be installed on a modest system with specifications to which nearly everyone should have access. The VMs can thus be used for this volume and saved for use in the next two volumes. Of course they can always be restored from the snapshots we will take at one or two checkpoints, or even recreated from scratch if necessary. This is one advantage of VMs over physical hosts: it is easy to recover from really bad mistakes. Hopefully the use of multiple VMs to create a virtual network on a single physical host will provide a safe virtual computing and network environment in which to learn by making mistakes. In this chapter you also begin to do the work of the SysAdmin. One of the many tasks that SysAdmins do is install Linux, and that is what you will do in this chapter, after we install the VirtualBox virtualization software. I will try to explain as much as I can as we go through this chapter, but there are probably some things you won't yet understand. Don't worry; we will get to them. In this chapter you will begin to use some Linux commands, most of which you may not know or understand. For now I will explain a little about some of the commands we will encounter, but so long as you enter them as they are given here, you should have no problems. In many cases, if you make an error when typing a command, the system will respond with an error message that should help you understand what is wrong.
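The checkpoint snapshots mentioned above can be managed from the VirtualBox Manager GUI, and, once VirtualBox is installed and a VM exists, from the command line as well. This is only a sketch for reference; the VM name "StudentVM1" is a placeholder for whatever you name your own VM:

```shell
# Take a named snapshot of a VM ("StudentVM1" is a placeholder VM name)
VBoxManage snapshot "StudentVM1" take "Checkpoint-1" --description "Known-good state"

# List the snapshots that exist for the VM
VBoxManage snapshot "StudentVM1" list

# Power the VM off, then roll it back to the named snapshot
VBoxManage controlvm "StudentVM1" poweroff
VBoxManage snapshot "StudentVM1" restore "Checkpoint-1"
```

These commands require VirtualBox to be installed, which we do later in this chapter, so there is nothing to run yet.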

Got root?

Root is the primary user account on a Linux system. Root is the god of Linux, the administrator, the SysAdmin, the privileged user. Root can do anything. There is an entire chapter a bit later in this book about root and the powers available to root that go far beyond those of mere mortals and non-privileged users. This course is intended to enable you to safely use those root privileges, but we are not going to start by having you merely dip your toes into the water and wade in deeper a little bit at a time. I was always told to just dive into the water and get wet all over, and that is what we do in this chapter: dive right in. So I hereby declare that you are now root. Much of what we do from this point on will be performed as root, and you are the SysAdmin.


Hardware specifications

In order to perform the experiments contained in this course, you must have access to a single physical computer that can run a single virtual machine for Volumes 1 and 2 and at least two virtual machines for the third volume in this series. These hardware specifications are intended to provide you with some guidance for selecting a computer for use with all three volumes of this course. Because the VMs will not be running large, complex programs, the load on them, in terms of CPU and RAM, will be relatively low. Disk usage may be somewhat high because the virtual disks for the VMs may take up a significant amount of disk space after some of the experiments, and you will also make occasional snapshots of the virtual disks in order to make recovery from otherwise catastrophic failures relatively simple. This volume, Learning to Use and Administer Linux, uses a single VM, but the hardware specifications listed here should be enough to handle at least three virtual machines because at least two, and possibly three, will be required for the last volume of this course. You should consider these hardware specifications as a minimum for use during these courses; more is always better. The motherboard, processor, and memory should be 64-bit, as many of the 32-bit versions of Linux are no longer supported. Table 4-1 lists the minimum physical hardware requirements for this course.

Table 4-1.  Physical system minimum hardware requirements

Processor: An Intel i5 or i7 processor, or an AMD equivalent; at least four cores plus hyper-threading, with support for virtualization; 2.5GHz or higher CPU speed.

Motherboard: Capable of supporting the processor you selected earlier; USB support for a keyboard and mouse; video output that matches the video connector on your display (see below), such as VGA, HDMI, or DVI.

Memory: I recommend at least 8GB of RAM for your host system. This will allow sufficient memory for multiple VMs and still leave enough available for the host itself.

Hard drive: An internal or external hard drive with at least 300GB of free space for storage of virtual machine disk drives.

Network: One Ethernet network interface card (NIC) that supports 1Gb connections.

USB keyboard and mouse: Seems obvious, but just being thorough.

Video display: Any decent monitor will do so long as it is at least HD resolution.

Internet connection: The physical host must have an Internet connection with at least 2Mb/s download speed. Greater download speed is highly recommended and will make downloading faster and result in less waiting.
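Table 4-1 calls for a 64-bit processor with hardware virtualization support. On a Linux system you can check both requirements before committing to a computer; this is a small aside using standard tools, not part of the book's procedure:

```shell
# Report the machine hardware name; x86_64 means the CPU and kernel are 64-bit
uname -m

# Count the CPU flags that indicate hardware virtualization support:
# vmx is Intel VT-x and svm is AMD-V; if none are found, print a message
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "No hardware virtualization support found"
```

Virtualization support must usually also be enabled in the BIOS or UEFI firmware settings, even when the CPU flag is present.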

Host software requirements

In all probability, the computer you use will already have an operating system installed, most likely Windows. Preferably you will have the latest version, which, as of this writing, is Windows 10 with all updates installed. The preferred operating system for your physical lab host is Fedora 29 or the most recent version of Fedora currently available. Any recent 64-bit version of Linux is fine, but I strongly recommend the most recent version of Fedora because that is what I am using in these books, so you won't need to make any adjustments for other distributions in this chapter. You will be using Fedora on the virtual machines anyway, so this makes the most sense. Regardless of which operating system is installed as the host OS on your lab system, you should use VirtualBox as the virtualization platform for these experiments because it is open source and free of charge. All of the procedures for creating the VMs and the virtual network are based on VirtualBox, so I strongly suggest that you use it for virtualizing your lab environment. Other virtualization tools would probably work, but it would be your own responsibility to install and configure them and the VMs you create. No other software is required for the physical system that will host the virtual environment.


Installing VirtualBox

The VirtualBox virtualization software can be downloaded from the pages linked at www.virtualbox.org/wiki/Downloads.

Note  You must have root access, that is, the root password, on a Linux host, or be the administrator on a Windows host in order to install VirtualBox. You will also need to have a non-root user account on the Linux host.

Install VirtualBox on a Linux host

This section covers the steps required to install VirtualBox on a Fedora Linux host. If you have a Windows host, you can skip to the next section. For this book we will download the files from the VirtualBox web site. If you are using a different Linux distribution, the steps will be mostly the same, but you should use the VirtualBox package and the package manager commands for your own distribution. In the following steps, the # character is the command prompt for root. Do not enter it; it is displayed on the console or terminal screen to indicate that the command line is waiting for input. You will type the commands that are shown in boldface type in the following instructions. After typing each command and ensuring that it is correct, press the Enter key to submit the command to the shell for processing. Don't worry if you don't understand what these commands are doing. If you enter them just as they are given, they will all work just fine. You are doing tasks that are typical of those you will be doing as a SysAdmin, so you might as well jump right in. However, if you do not feel that you can safely do this, you should have the SysAdmin who has responsibility for this host do it for you. Your entries are shown in bold. Press the Enter key when you see <Enter> if there is no data to enter, such as when you take a default that requires no keyboard input:

1. Log in to your Linux host GUI desktop as a non-root user. In the example in Figure 4-1, I use the student user ID. You may need your SysAdmin to create an account for you and set the password.


Figure 4-1.  Select the non-root user account and type the password for that account

Note  The login GUI may look different on your system, but it will have the same elements that will enable you to select a user account and enter the password.

2. After the GUI desktop has finished loading, open your favorite web browser.

3. Enter the following URL to display the Linux download page: https://www.virtualbox.org/wiki/Linux_Downloads. If this download page does not appear, you can go to the VirtualBox home page and click through to the Downloads section.


4. Download the VirtualBox package suitable for your Linux distribution into the ~/Downloads directory. In Figure 4-2 the mouse pointer (the hand with the pointing finger) is pointing to the AMD version for Fedora 26-28. The AMD version is the 64-bit version of VirtualBox and is used for both AMD and Intel processors. Do not use the i386 version.

Figure 4-2.  Download the VirtualBox package appropriate to your distribution. The VirtualBox version will probably be more recent than the one shown here


5. When the Save file dialog pops up, be sure to verify the directory location to which your browser saves the file. This might be ~/Downloads for browsers like Chrome, and other browsers may ask you to specify the location. If you have a choice, use ~/Downloads.

6. Click the Save button.

7. Click the Downloads link on the left of the web page.

8. Under the section " … Oracle VM VirtualBox Extension Pack," select the All supported platforms link to download the Extension Pack.

9. When the Save file dialog pops up, be sure to select ~/Downloads as the destination directory.

10. Click the Save button.

Now that both files we will need have been downloaded, we can install VirtualBox.

11. Launch a terminal session on the desktop, and use the su command to switch to the root user:

[student@david ~]$ su
Password: <Enter the password for root>
[root@david ~]#

12. Make ~/Downloads the present working directory (PWD), and verify that the files just downloaded are located there:

[root@fedora29vm ~]# cd /home/student/Downloads/ ; ll *Virt*
-rw-rw-r--. 1 student student  23284806 Dec 18 13:36 Oracle_VM_VirtualBox_Extension_Pack-6.0.0.vbox-extpack
-rw-rw-r--. 1 student student 136459104 Dec 18 13:36 VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64.rpm


13. We need to install all the current updates and some RPMs that are needed for VirtualBox to work. They may already be installed on your Fedora computer, but attempting to install them again will not cause a problem. The dnf command is the package manager for Fedora Linux and can be used to install, remove, and update packages:

[root@fedora29vm Downloads]# dnf -y update

Reboot the physical computer after installing the latest updates. It is not always necessary to reboot after installing updates on a Linux computer unless the kernel has been updated, but I suggest doing it here in case the kernel has been updated. It is important for the next steps that the running kernel be the most recent one, or the installation of VirtualBox may not complete properly. Then make /home/student/Downloads the PWD:

[root@fedora29vm ~]# cd /home/student/Downloads/
[root@david Downloads]# dnf -y install elfutils-libelf-devel kernel-devel

I did not include any of the output from these commands in order to save some space.

14. Now install the VirtualBox RPM with this dnf command. Note that the command needs to be entered on a single line. It can wrap on your screen if there are not enough columns in your terminal; just don't press the Enter key until you have entered the entire command. Be sure to use the correct name for your VirtualBox installation file, which will probably be different from this one:

[root@fedora29vm Downloads]# dnf -y install VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64.rpm
Last metadata expiration check: 0:04:17 ago on Tue 18 Dec 2018 04:40:44 PM EST.
Dependencies resolved.


===========================================================================
 Package          Arch     Version                    Repository       Size
===========================================================================
Installing:
 VirtualBox-6.0   x86_64   6.0.0_127566_fedora29-1    @commandline    130 M
Installing dependencies:
 SDL              x86_64   1.2.15-33.fc29             fedora          202 k

Transaction Summary
===========================================================================
Install  2 Packages

Total size: 130 M
Total download size: 202 k
Installed size: 258 M
Downloading Packages:
SDL-1.2.15-33.fc29.x86_64.rpm                  112 kB/s | 202 kB     00:01
---------------------------------------------------------------------------
Total                                           58 kB/s | 202 kB     00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                    1/1
  Installing       : SDL-1.2.15-33.fc29.x86_64                          1/2
  Running scriptlet: VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64      2/2
  Installing       : VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64      2/2
  Running scriptlet: VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64      2/2
Creating group 'vboxusers'. VM users must be member of that group!
  Verifying        : SDL-1.2.15-33.fc29.x86_64                          1/2
  Verifying        : VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64      2/2


Installed:
  VirtualBox-6.0-6.0.0_127566_fedora29-1.x86_64   SDL-1.2.15-33.fc29.x86_64

Complete!

15. We now install the Extension Pack, which provides some additional functionality for the guest operating systems. Note that the command needs to be entered on a single line. It can wrap on your screen if there are not enough columns in your terminal; just don't press the Enter key until you have entered the entire command:

[root@fedora29vm Downloads]# VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.0.0.vbox-extpack
VirtualBox Extension Pack Personal Use and Evaluation License (PUEL)
<Snip the long license>

16. Press the Y key when asked to accept the license.

Do you agree to these license terms and conditions (y/n)? y
License accepted. For batch installation add
--accept-license=56be48f923303c8cababb0bb4c478284b688ed23f16d775d729b89a2e8e5f9eb
to the VBoxManage command line.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".
[root@david Downloads]#

Do not close the root terminal session. It will be used to prepare an external USB hard drive on which we will store the virtual hard drives and other files required for the virtual machines that we will create. From this point on, using the VirtualBox Manager GUI interface is the same whether you are running Windows or Linux.
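Before moving on, it is worth verifying that VirtualBox and the Extension Pack installed correctly. This is a sketch of one way to do that from the root terminal; it assumes the installation above completed, and the "student" account is the example non-root user used in this chapter:

```shell
# Print the installed VirtualBox version
VBoxManage --version

# List installed extension packs; the Oracle Extension Pack should appear here
VBoxManage list extpacks

# The installer created the vboxusers group; VM users must be members of it,
# so confirm the group exists and add the non-root user to it
getent group vboxusers
usermod -aG vboxusers student
```

The group membership takes effect the next time the student user logs in.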


Install VirtualBox on a Windows host

This section covers the steps required to install VirtualBox on a host running a currently supported version of Windows. This procedure downloads the VirtualBox installer and then installs VirtualBox and the VirtualBox Extension Pack. If you have never worked as a SysAdmin before, just follow the directions as given, and everything should work. However, if you do not feel that you can safely do this, you should have the SysAdmin who has responsibility for this host do it for you:

1. Log in to your Windows host as an administrative user.

2. Install all current updates.

3. Open your browser.

4. Enter the following URL in the browser: https://www.virtualbox.org

5. Click the large "Download VirtualBox" button in the middle of the screen to continue to the download page.

6. Locate the section heading VirtualBox X.X.XX platform packages, where X.X.XX is the most current version of VirtualBox.

7. Locate the Windows hosts link and click it.

8. When the Save as window opens, as in Figure 4-3, ensure that the download target is the Downloads directory, which should be the default.


Figure 4-3.  The Save As window. Be sure to download the VirtualBox installer for Windows into the Downloads directory

9. Click the Save button.

10. When the file has finished downloading, open the File Explorer and click Downloads.

11. Locate the VirtualBox installer, and double-click it to launch it.

12. When the setup wizard Welcome dialog shown in Figure 4-4 appears, click the Next button. This will take you to the Custom Setup dialog.


Figure 4-4.  The Oracle VirtualBox Setup Wizard

13. Do not make any changes in the Custom Setup dialog; click Next to continue.

14. Again, do not make any changes in the second Custom Setup dialog; click Next to continue.

15. If a dialog appears with a warning about resetting the network interfaces, just click Yes to continue.

16. When the Ready to install window is displayed, click Install.

17. You will see a dialog asking whether you want to allow this app to make changes to your device. Click Yes to continue.

18. When the completion dialog is displayed, remove the check from the box to start VirtualBox after the installation.


19. Click the Finish button to complete the basic installation. You should now have a shortcut on your desktop to launch VirtualBox. However, we still need to install the Extension Pack, which helps to integrate VMs more closely into the Windows desktop.

20. Use your browser to navigate to the URL www.virtualbox.org/wiki/Downloads

21. Locate the section VirtualBox X.X.X Oracle VM VirtualBox Extension Pack, and click the All Platforms link under it.

22. When the file has finished downloading, open the File Explorer and click Downloads.

23. Locate the Oracle Extension Pack file, and double-click it to launch VirtualBox and install the Extension Pack.

24. When the dialog window titled VirtualBox Question is displayed, click Install to continue.

25. The license will be displayed in a dialog window. Scroll down to the bottom, and when the I Agree button is no longer grayed out, click it.

26. Once again, click Yes when the message appears asking whether you want to allow this app to make changes. You will receive a verification dialog window when the Extension Pack software has been installed.

27. Click OK to close that dialog; this leaves the VirtualBox Manager welcome window displayed on the screen.

From this point on, using the VirtualBox Manager GUI interface is the same whether you are running Windows or Linux.


Creating the VM

Before setting up the VM itself, we want to create a virtual network with a specific configuration. This will enable the experiments in this course to work as designed, and it will provide the basis for the virtual network in Volume 3 of this course. After the virtual network has been configured, we will create the virtual machine and configure it properly for use in the experiments. This VM will also be used in the follow-on course.

VirtualBox Manager

Both tasks, configuring the virtual network and creating the VM, are accomplished using the VirtualBox Manager, a GUI interface that is used to create and manage VMs. Start by locating the Oracle VM VirtualBox item in the application launcher on your desktop. The icon should look like Figure 4-5.

Figure 4-5.  The VirtualBox icon

Click this icon to launch the VirtualBox Manager. The first time the VirtualBox Manager is launched, it displays the VirtualBox Welcome shown in Figure 4-6.


Figure 4-6.  The VirtualBox Manager welcome is displayed the first time it is launched

The VirtualBox Manager is identical in both Windows and Linux, and the steps required to create your VMs are the same. Although VirtualBox can be managed from the command line, and I am a strong proponent of using the command line, I find that the VirtualBox Manager GUI interface is quick and easy enough for the type of work I am doing here. For the purposes of this book, it will probably be easier for you as well; using the GUI will certainly enable you to more easily find and understand the available options.
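For the curious, the command-line management mentioned above uses the same VBoxManage program we used to install the Extension Pack. A couple of illustrative commands, shown only as a sketch since we will be using the GUI in this course ("StudentVM1" is a placeholder VM name):

```shell
# List all VMs registered with VirtualBox, and those currently running
VBoxManage list vms
VBoxManage list runningvms

# Show the detailed configuration of one VM
VBoxManage showvminfo "StudentVM1"
```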


Configuring the virtual network

Before creating the virtual machine, let's configure the virtual network. The virtual network is a private network that exists only on the VirtualBox host. It is designed to allow the user to manage access to the outside world. The virtual router that is created also provides services such as DHCP and name services for the VMs created on the virtual network. VirtualBox has a number of interesting options for connecting the VM hosts to a network. The Oracle VM VirtualBox User Manual1 lists these options with excellent descriptions of the capabilities of each as well as their limitations. The simplest is the default, Network Address Translation2 (NAT), which allows the VM to talk to the Internet but does not allow multiple VM hosts to talk to each other. Because we will need our VM to be able to communicate with at least one more host in Volume 3 of this course, this option won't be appropriate for us. We will instead use the NAT Network option, which allows hosts to communicate with each other on the virtual network as well as with the outside world through a virtual router. The limitation of the NAT Network option is that it does not allow communication from the physical host into the virtual network. We can overcome that limitation if we need to, and the NAT Network option gives us the virtual network environment that most closely resembles a real network, so that is what we will use. We will discuss networking in more detail later in this course, but for now, the folks at whatismyipaddress.com, referenced in footnote 2, have the best short description of NAT, while Wikipedia3 has an excellent, if somewhat long and esoteric, discussion of NAT. We will use the VirtualBox Manager to create and configure the virtual NAT Network:

1. The VirtualBox Manager should be open. If it is not, start the VirtualBox Manager now.

2. On the menu bar, click File ➤ Preferences.

3. Click the Network folder on the left side of the Preferences window as shown in Figure 4-7.

1 The Oracle VM VirtualBox User Manual (PDF), https://download.virtualbox.org/virtualbox/5.2.16/UserManual.pdf, 96-107
2 https://whatismyipaddress.com/nat
3 Wikipedia, Network Address Translation, https://en.wikipedia.org/wiki/Network_address_translation


Figure 4-7.  Select the Network folder to add a NAT Network

4. On the right side of the Preferences dialog box, click the little network adapter icon with the green + (plus) sign to add a new NAT network. The network is added and configured automatically.

5. Double-click the new NAT network, or the bottom icon on the right side of the Preferences dialog box, and change the Network Name to StudentNetwork as in Figure 4-8.


Figure 4-8.  Change the Network Name to StudentNetwork

6. Click the OK button to complete the name change, and then click the OK button on the Preferences dialog. The virtual network configuration is complete.
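For reference, a NAT Network like the one just created in the GUI can also be created with VBoxManage. This is only a sketch; the 10.0.2.0/24 address range is an assumption (VirtualBox chooses a default range for a new NAT network, and yours may differ), and the GUI procedure above is all that is actually required:

```shell
# Create a NAT network named StudentNetwork with DHCP enabled
# (the 10.0.2.0/24 range is an assumed example, not a required value)
VBoxManage natnetwork add --netname StudentNetwork --network "10.0.2.0/24" --enable --dhcp on

# Verify that the new network exists
VBoxManage natnetwork list
```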

Preparing disk space In order to have space for the virtual machines that we will be using in this course, it may be necessary to clear some space on a hard drive. You should make backups of your system before taking this step. If you have a host with about 300GB of free hard drive space already available for your home directory, you can skip this section. If you have less than that amount of space available, you will need to allocate some disk space for storing the virtual hard drives and other files required for the virtual machines. I found it a useful alternative to allocate an external USB hard drive on which to locate the virtual machines for the experiments in this course. I don’t have an external hard drive smaller than 500GB, and I had this one on hand, so it is what I used. I suggest using an external USB hard drive that is designated by the vendor to be at least 300GB capacity. In reality, less than that will be available to the user after the partition is created and formatted. We will destroy all of the existing data on this external hard drive and repartition it, so be sure to make a backup of any data on this external hard drive that you might want to keep.


Windows

These steps will guide you in configuring an external USB hard drive to use for the experiments on a Windows 10 host. If you have a Linux host, you can skip this section:

1. Using the Start menu, locate and open the Computer Management tool.

2. Select Storage and then Disk Management.

3. Verify the disks that are currently available on your Windows host.

4. Plug the USB cable on the hard drive into a free USB connector on your computer.

5. After a moment or two, the disk management tool will display the new disk drive, as shown in Figure 4-9. On my Windows VM, this new disk is Disk 1, and the space is shown as unallocated because I previously deleted the existing partition. This may be a different disk for you.

Figure 4-9.  Disk 1 is the new external USB hard drive


6. Right-click Disk 1, and choose New Simple Volume to begin preparing the drive. The New Simple Volume Wizard welcome dialog is displayed.

7. Click Next to continue.

8. Do not make any changes on the Specify Volume Size dialog; this will assign the entire physical drive to this partition. Click Next to continue.

9. Accept the suggested drive letter on the Assign Drive Letter or Path dialog. On my Windows VM, this is E:. The drive assignment will most likely be different on your host. Be sure to make a note of this drive letter because you will need it soon. Click the Next button to continue.

10. Accept the defaults on the Format Partition dialog, as you can see in Figure 4-10. Click Next to continue.

Figure 4-10.  Accept the defaults to format the partition


11. Click the Finish button on the Completing the New Simple Volume Wizard dialog to start the format process. Figure 4-11 shows the final result.

Figure 4-11.  The completed disk partition

Note that the final formatted disk provides less than the 500GB specified by the drive vendor.

Linux

This section will guide you through adding an external USB hard drive to your Linux host. This hard drive will be the storage location for the virtual hard drives and other files required for the virtual machines used in the experiments in the rest of this course. There is a GUI desktop tool for Linux that works very much like the disk management tool for Windows. Just so you can see that we could do it that way, I have included a screenshot of the disk tool in Figure 4-12.


Figure 4-12.  The Linux GUI disk management tools provide functionality similar to that of the Windows Disk Manager tools. We are not going to use them

We are not going to use the gnome-disks GUI tool shown in Figure 4-12. Instead we are going to use the command-line interface (CLI), because there is no time like the present to start learning the command-line tools. This way you will become familiar with the tools themselves as well as some other concepts such as device identification. We will go into great detail about many of the things you encounter here as we proceed through the course. Your entries are shown in bold. Press the Enter key when you see <Enter>. You must be root to perform all of the following tasks:

1. You should already have a terminal open and be logged in as root. Run the following command, as I did on my physical workstation, to determine whether you have enough space available:

[root@david /]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                         32G   40K   32G   1% /dev
tmpfs                            32G   24M   32G   1% /dev/shm
tmpfs                            32G  2.2M   32G   1% /run
tmpfs                            32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/vg_david1-root      9.8G  437M  8.9G   5% /
/dev/mapper/vg_david1-usr        45G  9.6G   33G  23% /usr
/dev/mapper/vg_david3-home      246G   46G  190G  20% /home
/dev/mapper/vg_david2-Virtual   787G  425G  323G  57% /Virtual
/dev/mapper/vg_david2-stuff     246G  115G  119G  50% /stuff
/dev/sdb2                       4.9G  433M  4.2G  10% /boot
/dev/sdb1                       5.0G   18M  5.0G   1% /boot/efi
/dev/mapper/vg_david1-tmp        45G  144M   42G   1% /tmp
/dev/mapper/vg_david1-var        20G  6.6G   12G  36% /var
tmpfs                           6.3G   24K  6.3G   1% /run/user/1000
/dev/mapper/vg_Backups-Backups  3.6T  1.9T  1.6T  54% /media/Backups
/dev/sde1                       3.6T  1.5T  2.0T  42% /media/4T-Backup
/dev/sdi1                       457G   73M  434G   1% /Experiments

This is the output of the df command on my workstation. It shows the space available on each disk volume of my workstation; the output from this command on your physical host will be different. I have a couple of places that conform to the LHFS4 on which I could locate the virtual machines' data on my filesystems, but I chose to use the /Experiments filesystem and directory rather than mix this data in with other data, even that of my other virtual machines. You will now configure your external USB hard drive as I did /Experiments.

2. Plug in the external USB hard drive. It will take a few moments for it to spin up and be initialized.

4 We will discuss the Linux Hierarchical Filesystem Standard (LHFS) in Chapter 19. The LHFS defines the approved directory structure of the Linux filesystem and provides direction on what types of files are to be located in which directories.

95
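If you only care about one location rather than the whole mount table, df accepts a path argument. This is a small sketch; the /tmp path is just an example, and the `--output` option is GNU df:

```shell
# Check available space for a single directory instead of the whole table.
df -h /tmp
# GNU df can also narrow the output to specific columns:
df --output=avail,target /tmp
```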


3. Run the following command to determine the drive ID assigned to the new device:

[root@david /]# dmesg
[258423.969703] usb 1-14.4: new high-speed USB device number 24 using xhci_hcd
[258424.060505] usb 1-14.4: New USB device found, idVendor=1058, idProduct=070a, bcdDevice=10.32
[258424.060509] usb 1-14.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[258424.060511] usb 1-14.4: Product: My Passport 070A
[258424.060512] usb 1-14.4: Manufacturer: Western Digital
[258424.060513] usb 1-14.4: SerialNumber: 575850314133304434303739
[258424.062534] usb-storage 1-14.4:1.0: USB Mass Storage device detected
[258424.063769] usb-storage 1-14.4:1.0: Quirks match for vid 1058 pid 070a: 200000
[258424.064704] scsi host14: usb-storage 1-14.4:1.0
[258425.108670] scsi 14:0:0:0: Direct-Access     WD  My Passport 070A  1032 PQ: 0 ANSI: 4
[258425.109453] scsi 14:0:0:1: CD-ROM            WD  Virtual CD 070A   1032 PQ: 0 ANSI: 4
[258425.112633] scsi 14:0:0:2: Enclosure         WD  SES Device        1032 PQ: 0 ANSI: 4
[258425.115424] sd 14:0:0:0: Attached scsi generic sg11 type 0
[258425.115609] sd 14:0:0:0: [sdi] 975400960 512-byte logical blocks: (499 GB/465 GiB)
[258425.117416] sd 14:0:0:0: [sdi] Write Protect is off
[258425.117426] sd 14:0:0:0: [sdi] Mode Sense: 23 00 10 00
[258425.118978] sd 14:0:0:0: [sdi] No Caching mode page found
[258425.118986] sd 14:0:0:0: [sdi] Assuming drive cache: write back
[258425.120216] sr 14:0:0:1: [sr2] scsi3-mmc drive: 51x/51x caddy
[258425.120460] sr 14:0:0:1: Attached scsi CD-ROM sr2
[258425.120641] sr 14:0:0:1: Attached scsi generic sg12 type 5
[258425.120848] ses 14:0:0:2: Attached Enclosure device
[258425.120969] ses 14:0:0:2: Attached scsi generic sg13 type 13
[258425.134787] sdi: sdi1


[258425.140464] sd 14:0:0:0: [sdi] Attached SCSI disk
[root@david /]#

The data from the preceding dmesg command is displayed at the end of a long list of kernel messages. The dmesg command displays the kernel messages, which are useful in situations like this as well as when debugging problems. The numbers inside the square braces, such as [258425.134787], are the time in seconds, down to the microsecond, since the computer was booted. We are looking for the drive device identifier so that we can use it in the next few commands; in this case, the device identifier for the entire hard drive is sdi. The sdi1 device is the first partition on the drive. We are going to delete the existing partition in order to start from the very beginning, because that is what I would do with any new disk device. On your Linux host, the drive identifier is more likely to be /dev/sdb or /dev/sdc.
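When the dmesg listing is long, grep can pull out just the [sdX] device names. This sketch runs the extraction on a captured sample line rather than on live kernel messages, so it works anywhere:

```shell
# Extract the "[sdX]" drive identifier from a captured dmesg-style line.
# The sample text is copied from the listing above; on a live system you
# could pipe real output instead:  dmesg | grep -o '\[sd[a-z]*\]' | sort -u
sample='[258425.115609] sd 14:0:0:0: [sdi] 975400960 512-byte logical blocks: (499 GB/465 GiB)'
dev=$(printf '%s\n' "$sample" | grep -o '\[sd[a-z]\]' | tr -d '[]' | head -n 1)
echo "$dev"   # prints: sdi
```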

Warning  Be sure you use the correct device identifier for the USB hard drive in the next step, or you might wipe out your main hard drive and all of its data.

4. Start fdisk, and then determine whether there are any existing partitions and how many:

[root@david /]# fdisk /dev/sdi

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdi: 465.1 GiB, 499405291520 bytes, 975400960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos


Disk identifier: 0x00021968

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdi1        2048 975400959 975398912 465.1G 83 Linux

If there are no partitions on the hard drive, skip the next step.

5. Delete the existing partitions, and then create a new one with fdisk. Be sure to use /dev/sdi and not /dev/sdi1, because we are working on the disk, not the partition. The d sub-command deletes the existing partition:

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

If there are more partitions on the hard drive, delete those too, also using d.

6. Now let's create the new partition and write the results to the partition table on the USB drive. We use the n sub-command to create a new partition and then mostly just press the Enter key to take the defaults. This would be a bit more complex if we were going to create multiple partitions on this hard drive; we will do that later in this course. Your entries are shown in bold. Press the Enter key when you see <Enter> to take the defaults:

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): <Enter>
Using default response p.
Partition number (1-4, default 1): <Enter>
First sector (2048-975400959, default 2048): <Enter>
Last sector, +sectors or +size{K,M,G,T,P} (2048-975400959, default 975400959): <Enter>

Created a new partition 1 of type 'Linux' and of size 465.1 GiB.


7. If you do not get the following message, skip this step. We must respond with y to remove the previous partition signature:

Partition #1 contains a ext4 signature.
Do you want to remove the signature? [Y]es/[N]o: y
The signature will be removed by a write command.

8. The p sub-command prints the current partition table and disk information to the terminal:

Command (m for help): p
Disk /dev/sdi: 465.1 GiB, 499405291520 bytes, 975400960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00021968

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdi1        2048 975400959 975398912 465.1G 83 Linux

Filesystem/RAID signature on partition 1 will be wiped.

Command (m for help):

9. If your operating system automatically mounted the new partition when you created it, be sure to unmount (eject) it. Now write the revised partition table to the disk, and exit back to the command line:

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@david /]#
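The interactive fdisk session can also be scripted with sfdisk, another util-linux tool. Because sfdisk will happily operate on an ordinary file, you can rehearse the partitioning safely on a scratch image before touching a real drive; the file name and size here are arbitrary:

```shell
# Rehearse partitioning on a scratch image file -- no real disk is touched.
truncate -s 100M scratch.img          # create a 100MB sparse file
echo 'type=83' | sfdisk scratch.img   # one Linux (type 83) partition using all space
sfdisk -l scratch.img                 # list the partition table just written
rm scratch.img                        # clean up
```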


10. Create an EXT4 filesystem on the partition. Be careful to specify the correct device identifier so that the correct partition is formatted:

[root@david /]# mkfs -t ext4 /dev/sdi1
mke2fs 1.44.2 (14-May-2018)
Creating filesystem with 121924864 4k blocks and 30482432 inodes
Filesystem UUID: 1f9938a0-82cd-40fb-8069-57be0acd13fd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

[root@david /]#

11. Now let's add a label to the partition. This label makes it easy for us humans to identify a disk device. It also allows the computer to use the label to identify and mount the device in the correct location in the filesystem directory structure. We will get to that in a few steps:

[root@david /]# e2label /dev/sdi1 Experiments
[root@david /]# e2label /dev/sdi1
Experiments
[root@david /]#

The second invocation of the e2label command, without a label argument, lists the current label for that partition.

12. Create the Experiments directory. This will be the directory on which we mount the filesystem that we are creating on the USB drive. Create it in the root (/) directory:

[root@david ~]# mkdir /Experiments


13. At this point we could mount the filesystem on the USB drive onto the /Experiments directory, but let's make it a bit easier by adding a line to the /etc/fstab (filesystem table) file. This will reduce the amount of typing we need to do in the long run. The easy way to do this, since we have not yet discussed the use of editors, is to use the following simple command to append the line we need to the end of the existing fstab file. Be sure to enter the entire command on a single line:

[root@david ~]# echo "LABEL=Experiments /Experiments ext4 user,owner,noauto,defaults 0 0" >> /etc/fstab

If it wraps around on your terminal, that is OK. Just do not hit the Enter key until you have typed the entire line. Be sure to use the double >> or you will overwrite the entire fstab file. That would not be a good thing. We will talk about backups and other options for editing files later, but for now just be careful.

14. Mount the new drive, and verify that it is present:

[root@david ~]# mount /Experiments ; df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                         32G   40K   32G   1% /dev
tmpfs                            32G   34M   32G   1% /dev/shm
tmpfs                            32G  2.2M   32G   1% /run
tmpfs                            32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/vg_david1-root      9.8G  437M  8.9G   5% /
/dev/mapper/vg_david1-usr        45G  9.6G   33G  23% /usr
/dev/mapper/vg_david3-home      246G   46G  190G  20% /home
/dev/mapper/vg_david2-Virtual   787G  425G  323G  57% /Virtual
/dev/mapper/vg_david2-stuff     246G  115G  119G  50% /stuff
/dev/sdb2                       4.9G  433M  4.2G  10% /boot
/dev/sdb1                       5.0G   18M  5.0G   1% /boot/efi
/dev/mapper/vg_david1-tmp        45G  144M   42G   1% /tmp
/dev/mapper/vg_david1-var        20G  6.8G   12G  37% /var
tmpfs                           6.3G   28K  6.3G   1% /run/user/1000
/dev/mapper/vg_Backups-Backups  3.6T  1.9T  1.6T  56% /media/Backups
/dev/sde1                       3.6T  1.5T  2.0T  43% /media/4T-Backup
/dev/sdh1                       458G  164G  272G  38% /run/media/dboth/USB-X47GF
/dev/sdi1                       457G   73M  434G   1% /Experiments

I have highlighted the line for our new device in bold at the bottom of the output. This tells us that the new filesystem has been properly mounted on the root filesystem. It also tells us how much space is used and how much is available. The -h option tells the df command to display the numeric results in human-readable format instead of bytes. Go ahead and run the df command without any options, and see the difference. Which is easier to read and interpret?

15. Now look at the contents of our new directory:

[root@david ~]# ll -a /Experiments/
total 24
drwxr-xr-x   3 root root  4096 Aug  8 09:34 .
dr-xr-xr-x. 24 root root  4096 Aug  8 11:18 ..
drwx------   2 root root 16384 Aug  8 09:34 lost+found

If you see the lost+found directory, then everything is working as it should.

16. We still have a bit more to do to prepare this directory. First, we need to change the group ownership and permissions of this directory so that VirtualBox users can have access to it. Let's look at its current state first. Piping the output of the ll command through grep allows us to see only the Experiments directory for clarity:

[root@david ~]# cd / ; ll | grep Exp
drwxr-xr-x   3 root root  4096 Aug  8 09:34 Experiments

This way we can verify that the changes actually happen.
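The difference between > and >> used for /etc/fstab in step 13 is worth rehearsing once on a scratch file before you ever aim it at a real configuration file; a minimal demonstration, with an arbitrary file name:

```shell
# Demonstrate append (>>) versus overwrite (>) on a scratch file,
# never on /etc/fstab itself.
printf 'line one\n' >  demo.txt   # > creates or overwrites the file
printf 'line two\n' >> demo.txt   # >> appends to the existing file
wc -l < demo.txt                  # prints: 2
printf 'oops\n'     > demo.txt    # > again: the previous two lines are gone
cat demo.txt                      # prints: oops
rm demo.txt
```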


17. Make the changes. First we change the PWD (present working directory) to the root directory (/). Then we make the changes and finally verify them:

[root@david /]# cd /
[root@david /]# chgrp root /Experiments
[root@david /]# chmod g+w /Experiments
[root@david /]# ll | grep Exp
drwxrwxr-x   3 root root  4096 Aug  8 09:34 Experiments
[root@david /]#
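The effect of chmod g+w is easy to see on a scratch directory of your own; this sketch (the directory name is arbitrary) uses stat, a GNU coreutils command, to show the permission string before and after:

```shell
# Demonstrate chmod g+w on a scratch directory.
mkdir -p permdemo
chmod 755 permdemo           # start from rwxr-xr-x
stat -c '%A' permdemo        # prints: drwxr-xr-x
chmod g+w permdemo           # add write permission for the group
stat -c '%A' permdemo        # prints: drwxrwxr-x
rmdir permdemo
```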

Some things you might have noticed here, or possibly even before this, are worth explaining now. The chgrp (change group) and chmod (change file mode, i.e., access permissions) commands were quiet; they did not announce their success. This reflects one of the tenets of the Linux Philosophy: "silence is golden." Also, the ll command is an alias that expands into ls -l to give a long listing of the current directory. We will go into much more detail about things like this as we get further into the course.

18. Now we need to add our own non-root user account to the vboxusers group in the /etc/group file. I use my own personal ID in this case, but you should use the non-root account you are logged into to create and use the virtual machine:

[root@david /]# cd /etc
[root@david etc]# grep vboxusers group
vboxusers:x:973:
[root@david etc]# usermod -G vboxusers dboth
[root@david etc]# grep vboxusers group
vboxusers:x:973:dboth
[root@david /]#

Note that usermod -G replaces the user's entire list of supplementary groups with the list you provide; on a system where the account already belongs to other supplementary groups, usermod -aG appends the new group instead. You have completed preparation of the hard drive. Regardless of whether you prepared this USB hard drive on a Windows or Linux host, you are already doing the work of a SysAdmin. These are exactly the types of tasks required of SysAdmins on a regular basis.
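You can confirm a group change took effect with the id command, run as the user in question (the output will of course reflect whatever account you run it from):

```shell
# Show the current user's UID, primary group, and supplementary groups.
id
# Only the group names, on one line:
id -nG
```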


Download the ISO image file

Now is a good time to download the Fedora5 ISO live image file. This is just a file containing an image that we can copy to a CD or USB thumb drive. You can insert the CD or thumb drive into a computer and boot from it to run Linux in a test-drive environment. Booting this live image device on your computer will not make any changes to the hard drive of the computer until you install Linux. For our purposes, we will not need to create a hardware device; all we need to do is download the image, so this will be very easy. The VM we create will boot directly from the live image file when we are ready to install Linux; no external physical media will be needed.

We will use the Fedora 29 image for Xfce,6 which is one of the alternate desktops. We could use KDE or GNOME, but for this course, we will use Xfce, which is much smaller and uses far fewer system resources. It is also fast and has all of the features we need in a desktop for this course without a lot of extra features that cause code bloat and reduced performance. The Xfce desktop is also very stable, so it does not change much between Fedora releases, which occur every six months or so.7

For Fedora 29, which is the current release as of this writing, the file Fedora-Xfce-Live-x86_64-29-20181029.1.iso is about 1.5GB in size. Be sure to use the Fedora release that is most current at the time you take this course:

1. Use your favorite browser, and navigate to the URL https://spins.fedoraproject.org/xfce/download/index.html.

2. Click the button with the Download label.

3. For students with a Linux host, select the /tmp directory in which to store the download, and click the Save button. If you have a Windows host or a browser that does not allow you to select a download directory, the default download directory is fine.

5. Fedora Project, Fedora's Mission and Foundations, https://docs.fedoraproject.org/en-US/project/
6. Fedora Project, Xfce, https://spins.fedoraproject.org/xfce/
7. For us, this Xfce stability means that the desktop images in this book will be correct through several releases of Fedora.


4. If the downloaded file is not in the /tmp directory, move or copy it from the ~/Downloads directory to /tmp:

[dboth@david ~]$ cd Downloads/ ; ll Fedora*
-rw-rw-r-- 1 dboth dboth 1517289472 Dec 20 12:56 Fedora-Xfce-Live-x86_64-29-20181029.1.iso
[dboth@david Downloads]$ mv Fedora* /tmp
[dboth@david Downloads]$

We will use this file when we install Fedora Linux on the VM, but we need to create the virtual machine first.
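Before trusting a downloaded ISO, it is good practice to verify its checksum against the CHECKSUM file Fedora publishes alongside each image. The sha256sum invocation looks like this; it is demonstrated here on a scratch file so the output format is visible without the real ISO:

```shell
# Compute a SHA-256 checksum. For a real ISO, compare the result against
# the value in Fedora's published CHECKSUM file for that release.
printf 'sample data\n' > sample.iso   # stand-in for the real ISO file
sha256sum sample.iso
rm sample.iso
```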

Creating the VM

To create the VM we will use in the rest of this course, we first create it and then make some configuration changes:

1. Switch back to the VirtualBox Manager to perform these steps.

2. Click the Machine Tools icon. This shows the list of current virtual machines and the configuration details of the one that is selected.

3. I already have several VMs in five groups. Don't worry about creating or using groups in VirtualBox; that is not necessary to the success of these experiments. Click the New icon to start the process of creating the new VM. Enter the following data as shown in Figure 4-13.


Figure 4-13.  Creating the virtual machine with the name StudentVM1

4. In the Create Virtual Machine window, type the VM name, StudentVM1.

5. For the Machine Folder, type /Experiments.

6. Select Linux as the operating system type in the Type field.

7. For the Version, select Fedora (64-bit).

8. Set the memory size (RAM) to 4096MB. The memory size can be changed at any time later, so long as the VM is powered off. For now this should be more than enough RAM.


9. Click the Create button to continue to the Create Virtual Hard Disk dialog shown in Figure 4-14.

Figure 4-14.  Click the folder icon with the green ^ character to change the default file location. Type in /Experiments to prepend the VM name

10. If you have set up a different location from the default, click the little folder icon with the green ^ sign on it, as shown in Figure 4-14. This opens an operating system dialog that allows you to choose the location in which you want to store your virtual machines, including the virtual hard drives. I have set up a separate 500GB hard drive and mounted it on /Experiments, so I selected the /Experiments directory. Note that the VM name is automatically appended to whatever location you choose. The .vdi extension denotes the VirtualBox Disk Image file format. You could select other formats, but the VDI format will be perfect for our needs.


11. Use the slider or the text box to set 60GB as the size of the virtual hard drive. Be sure to use the default dynamic allocation of disk space. This ensures that the disk will take up no more space on the physical hard drive than is actually needed. For example, even though we specified this disk size as 60GB, if we only use 24GB, the space required on the physical hard drive will be about 24GB. This space allocation will expand as needed.

12. Click the Create button to create the virtual hard drive and continue.

13. At this point the basic virtual machine has been created, but we need to make a few changes to its configuration. Click the entry for the new VM. If the VM details are not shown on the right side of the VirtualBox Manager as they are in Figure 4-15, click the Details button using the menu icon on the right side of the StudentVM1 entry in the VM list.


Figure 4-15.  The details for the StudentVM1 virtual machine we just created

14. Click the Settings icon to open the Settings dialog in Figure 4-16, and then select the System page in the list on the left. Deselect the Floppy disk icon, and then use the down arrow button to move it down the Boot Order to below the Hard Disk. Leave the Pointing Device set to USB Tablet.


Figure 4-16.  Move the Floppy disk down the boot order, and remove the check mark beside it

15. Still on the System settings page, select the Processor tab, as in Figure 4-17, and increase the number of CPUs from 1 to 2 for the StudentVM1 virtual machine.


Figure 4-17.  Set the number of CPUs to 2

16. If your physical host has 8GB of RAM or more, click the Display settings, and increase the amount of video memory to 128MB, as shown in Figure 4-18. It is neither necessary nor recommended that you enable 2D or 3D video acceleration, because neither is needed for this course.


Figure 4-18.  With sufficient RAM in the physical host, you can increase the amount of video memory assigned to the virtual machine

17. Click the Storage dialog as shown in Figure 4-19. The port count for the VM must be at least 5 in order to add new disk devices in later chapters. Previous versions of VirtualBox defaulted to 2 ports, while VirtualBox 6.0 defaults to only 1, which means we need to add more ports to the existing SATA controller (but not another controller) in order to accommodate additional SATA storage devices in later chapters. Increase the port count to 5 or more. We will need some of these additional drives in Chapter 19 of this volume and Chapter 1 of Volume 2.


Figure 4-19.  Set the number of SATA ports to 5

18. Select the Network settings page, and, on the Adapter 1 tab, select NAT Network in the Attached to: field, as seen in Figure 4-20. Because we have created only one NAT Network, the StudentNetwork, that network will be selected for us. Click the little blue triangle next to Advanced to view the rest of the configuration for this device. Do not change anything else on this page.


Figure 4-20.  Selecting the NAT Network option automatically selects the StudentNetwork because it is the only NAT Network we have created

19. Click the OK button to save the changes we have made. The virtual machine is now configured and ready for us to install Linux.

Chapter summary

You have finished preparations for installing Fedora and performing the experiments in the rest of this course. You prepared an external USB disk drive to hold the virtual machine we will use in this course, and you have created that VM. You have also made some modifications to the VM that could not be made during its initial creation, such as the network adapter settings and the number of processors allocated to the VM.

We will install the latest release of Fedora in Chapter 5. Note that you will be required to create another virtual machine and install Linux on it in Volume 3 of this course. The steps in creating the VM and installing Linux on it will be nearly the same. The only difference is that the second VM will need a different name.


Exercises

Do the following exercises to complete this chapter:

1. Define "virtual machine."

2. What command used in this chapter might be used to discover information about the hardware components of a computer system?

3. How does "NAT Network" differ from "NAT" as a network type when using VirtualBox?

4. Why might we want more than a single network adapter on a VM?


CHAPTER 5

Installing Linux

Objectives

In this chapter you will learn to

•	Install the latest version of Fedora on your VM
•	Partition a hard drive using recommended standards
•	Describe and explain the use of swap space
•	State the amount of swap space recommended in the Fedora documentation
•	Create snapshots of your VM

Overview

In this chapter you begin to do the work of the SysAdmin. One of the many tasks that SysAdmins do is install Linux, and that is what you will do in this chapter. I will try to explain as much as I can as we go through this chapter, but there are probably some things you won't yet understand. Don't worry; we will get to them.

Just as a reminder, this book uses Fedora 29 with the Xfce desktop for the experiments that we will be doing. You should be sure to use the most current version of Fedora Xfce for this course. Both the Xfce desktop and the Linux tools we will be using are stable and will not change appreciably over the next several releases of Fedora.

Please install Fedora as the Linux distribution for this course. This will make it much easier for you because you won't have to make allowances for the differences that exist between Fedora and some other distributions. Even other Red Hat-based distributions such as RHEL and CentOS differ from Fedora. You will find, however, that after finishing this course, the knowledge you gain from it will transfer easily to other distributions.

© David Both 2020 D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_5


Boot the Fedora live image

If this were a physical host, you would create a physical USB thumb drive with the ISO image on it and plug it into a USB slot on your host. In order to boot the live ISO image in our VM, we need to "insert" it into a logical device:

1. Open the Settings for the StudentVM1 VM.

2. Select the Storage page.

3. Click the Empty disk icon on the IDE controller. If you do not have an IDE controller on your VM, which is possible but very unlikely, you can right-click the white space in the Storage Devices panel and choose to add a new IDE controller. Only one IDE controller can be added.

4. Click the CD icon to the right of the Optical Drive field of the IDE controller. As you can see in Figure 5-1, this opens a selection list that enables us to select which ISO image to mount1 on this device.

5. Unlike my workstation, your computer will probably have no images in this list. Select the Choose Virtual Optical Disk File item.

1. We will discuss the term "mount" and all it means in Chapter 19. For now, if you want more information, see Wikipedia, https://en.wikipedia.org/wiki/Mount_(computing).


Figure 5-1.  Select Choose Virtual Optical Disk File to locate and mount the ISO image

6. Navigate to the location in which you stored the file when you downloaded it, click the file, and then click Open to set the mount. In Figure 5-2 we see the ISO image file, which is located in the /tmp directory.


Figure 5-2.  Select the ISO image file, and then click Open

7. Verify that the correct file is selected for the IDE controller in the Storage Devices box, as shown in Figure 5-3. Click OK. The Fedora live ISO image file is now "inserted" in the virtual optical drive, and we are ready to boot the VM for the first time.


Figure 5-3.  The Fedora live image ISO file is now "inserted" in the virtual optical drive

8. To boot the VM, be sure that the StudentVM1 virtual machine is selected, and click the green Start arrow in the icon bar of the VirtualBox Manager. This launches the VM, which opens a window in which the VM runs, and boots the image file. The first screen you see is shown in Figure 5-4. The first time you use VirtualBox on any physical host, you will also get a message, "You have the Auto capture keyboard option turned on. This will cause the Virtual Machine to automatically capture the keyboard every time the VM window is activated...," and then you'll get a similar message about mouse pointer integration. They're just informational, but you can change these settings if you like.

9. This first screen has a countdown timer, and the second item is already selected. After the timer counts down to zero, or when you press the Enter key, this selection will first test the install medium to detect any errors and then boot to the installer if there are no problems. We can skip the test because it is far less useful for our image file than it would be for a physical DVD or USB thumb drive. Press the up arrow on your keyboard to highlight the entry Start Fedora-Xfce-Live 29, as shown in Figure 5-4, and press the Enter key on your keyboard.

Figure 5-4.  Select the Start Fedora-Xfce-Live 29 menu item, and press Enter

10. The VM boots into a login screen as shown in Figure 5-5. The only user account is the Live System User, and there is no password. Click the Log In button to access the live desktop.


Figure 5-5.  Click the Log In button to log in

Your VM is now booted to the live image, and you could spend some time exploring Linux without installing it. In fact, when I go shopping at my local computer store (I stay away from the big box stores because they never have what I want), I take my trusty live Linux thumb drive and try out the various systems that the store has on display. This lets me test Linux on them without disturbing the Windows installations that are already there. We do not need to do any exploration right now, although you can if you like. We will do plenty of exploration after the installation. So let's get right to the installation.

Installing Fedora

Installing Fedora from the live image is easy, especially when using all of the defaults. We won't use the defaults, because we are going to make a few changes, the most complex one being to the virtual hard drive partitioning. If you have any questions about the details of installation and want more information, you can go to the Fedora installation documentation at https://docs.fedoraproject.org/en-US/fedora/f29/install-guide/install/Installing_Using_Anaconda/. This URL will be different for later versions of Fedora. Just be sure to use the correct Fedora release number when you enter the URL.

Start the installation

To start the Fedora Linux installation, double-click the Install to Hard Drive icon on the desktop, as shown in Figure 5-6. As on any physical or virtual machine, the live image does not touch the hard drive until we tell it to install Linux.

Figure 5-6.  Double-click the Install to Hard Drive icon to start the Fedora installation

Double-clicking Install to Hard Drive launches the Anaconda installer. The first screen displayed by Anaconda is the Welcome screen, where you can choose the language that will be used during the installation process. If your preferred language is not English, select the correct language for you on this screen. Then click the Continue button.

Set the hostname

Click the Network & Host Name option on the Installation Summary dialog, as shown in Figure 5-7. This hostname is the name by which the computer knows itself. It is the hostname that you will see in the command prompt. The external world, that is, any node on the network to which this host is connected, sees the computer under the hostname set up in whichever name service you are using. So it is possible that you might ping or ssh to a computer using one name and find that it has a different name once you are logged into it. By convention, computer hostnames are usually in lowercase. Note that the name of the VM is in mixed case, StudentVM1, but that is not the hostname and has no network usage.

Figure 5-7.  Select Network & Host Name to set the hostname for the VM


In the Host Name field, type the hostname studentvm1 in all lowercase letters, and then click Apply. That is all we need to do in this dialog, so click the blue Done button at the upper left. This will take you back to the Installation Summary dialog. Note that none of the live images offers options for selecting additional software packages to install. If you want to install additional software, you must do it after the basic installation.
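Once the system is installed, the hostname can be inspected (and, on systemd distributions such as Fedora, changed) from the command line as well:

```shell
# Print the hostname the running system knows itself by.
hostname
# On Fedora, hostnamectl can both display and set it, for example:
#   hostnamectl set-hostname studentvm1
```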

Hard drive partitioning

The second and most important thing we need to change is to partition the hard drive in a more standard, recommended manner. We do this rather than taking the default, which is easy for most beginners but which is definitely not the best partitioning setup for a workstation intended for training a SysAdmin. We will explore the details of why this partitioning scheme is better in Chapter 19 of this volume.

In Figure 5-7, notice that the Installation Destination has a caution icon and the text Automatic partitioning, in red. Click Installation Destination, and you get the dialog shown in Figure 5-8.


Figure 5-8.  Select Custom for Storage Configuration, then click Done

We only have a single virtual disk drive in this VM, but if we had multiple hard drives, they could be selected here as part of the installation target. The size of the VM display window at this point may be too small to contain the entire dialog box. It is hard to see, but there is a scroll bar on the right side of this dialog. Scroll down using the scroll bar or the scroll wheel on your mouse until you get to the bottom. You should also be able to resize the window in which the VM is running to make it big enough to see the entire dialog box, as in Figure 5-9.


You should see Storage Configuration and three options. We are going to perform a custom configuration, so select the middle radio button, Custom. Then click Done.

The next dialog, which you can see in Figure 5-9, is the one in which we will do a good deal of work. What we need to do is create a partitioning scheme like the one shown in Table 5-1. The partition sizes in this table are not appropriate for a real-world working system, but they are more than sufficient for use in this educational environment.

Figure 5-9.  The Manual Partitioning dialog


However, that said, I have an old ASUS EeePC netbook with a built-in 4GB SSD-like hard drive and a 32GB removable SD card that I have set up as part of the volume group that, along with the system drive, totals 36GB. I have installed Fedora Linux 28 on it along with LibreOffice. I use this little system for presentations, note taking in some meetings, and for Seti@Home.2 There is still over 17GB of "disk" space available. So it is possible and not unreasonable to install a working Fedora system with a GUI desktop in about 20GB. Of course it would be somewhat limited, but it would be usable.

Table 5-1.  The disk partitions – filesystems – and their sizes

Mount point    Partition    Filesystem type    Size (GiB)    Label
/boot          Standard     EXT4                1.0          boot
/ (root)       LVM          EXT4                2.0          root
/usr           LVM          EXT4               15.0          usr
/home          LVM          EXT4                2.0          home
/var           LVM          EXT4               10.0          var
/tmp           LVM          EXT4                5.0          tmp
swap           swap         swap                4.0          swap
Total                                          39.0

In Table 5-1, you can see what are usually considered the standard filesystems that most books and SysAdmins – well, at least I – recommend. Note that for Red Hat-based distributions, including Fedora, the directory structure is always created, but separate filesystems – partitions – may or may not be. Because we created a brand-new virtual hard drive for this VM, there should theoretically be no existing partitions on it. If you are not following these instructions exactly or are using a physical or virtual hard drive with existing partitions, use this page to delete all existing partitions before you continue any further. If, as in Figure 5-9, you see the message that you have not created any mount points, then continue. To add the first partition, click the plus (+) button as illustrated in Figure 5-9. This results in the display of the Add Mount Point dialog box as shown in Figure 5-10. Select /boot as the first mount point, and type 1G in the Desired Capacity field.

2. SETI@Home, http://setiweb.ssl.berkeley.edu/index.php


Figure 5-10.  Set the mount point and size desired for the /boot partition

Although we will go into more detail in later chapters, let's take a moment to talk about partitions, filesystems, and mount points. Hopefully this will temporarily answer questions you might have about the apparently conflicting and definitely confusing terminology. First, the entire Linux directory structure, starting at the top with the root (/) directory, can be called the Linux filesystem. A raw partition on a hard drive or a logical volume can be formatted with an EXT3, EXT4, BTRFS, XFS, or other filesystem meta-structure. The partition can then be called a filesystem. If the partition is for the /home directory, for example, it will be called the /home filesystem. The /home filesystem is then mounted on the /home mount point, which is simply the /home directory on the root filesystem, and it then becomes a logical and functional part of the root filesystem. Just remember that not all root-level directories can be separate filesystems, and for some others it just doesn't make sense to separate them.
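The format-then-mount cycle described above can be sketched with a couple of commands. The device name /dev/sdb1 is an invented example, and both commands require root:

```
[root@studentvm1 ~]# mkfs -t ext4 /dev/sdb1
[root@studentvm1 ~]# mount /dev/sdb1 /home
```

mkfs writes the EXT4 meta-structure onto the raw partition, turning it into a filesystem; mount then attaches that filesystem at the /home mount point – the /home directory on the root filesystem.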


So after all of the partitions are defined, Anaconda, the installation program, will create the volume group, the logical volumes, any raw partitions such as /boot, and the entire directory tree including the mount points (directories) on the / filesystem; format the volumes or partitions with the selected filesystem type (EXT4 for the most part); and create the /etc/fstab file to define the mounts and their mount points so that the kernel knows about them and can find them every time the system is booted. Again, more on all of this later. After entering the correct data for this partition, click the Add mount point button to proceed. At this point the Manual Partitioning dialog looks like Figure 5-11. Notice that if the VM window is a bit small, there is a scroll bar at the right side of the screen. If you hover your mouse there, the scroll bar becomes a bit wider and so is easier to see and manipulate. You can also resize the VM window if you have not already done so.
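The /etc/fstab entries that Anaconda generates look something like the following sketch. The UUID placeholder and the volume group name here are invented for illustration, not the values your installation will produce:

```
UUID=...                             /boot   ext4   defaults   1 2
/dev/mapper/fedora_studentvm1-root   /       ext4   defaults   1 1
/dev/mapper/fedora_studentvm1-home   /home   ext4   defaults   1 2
/dev/mapper/fedora_studentvm1-swap   swap    swap   defaults   0 0
```

Each line names the device (by path, UUID, or label), its mount point, the filesystem type, the mount options, and the dump and fsck-order fields.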

Figure 5-11.  Creating the /boot partition


If necessary, scroll down so that you can see the Label field. Enter the label for this partition as "boot" without the quotes. As mentioned before, I find that labels make working with various components of the filesystem much easier than it would be without them. After typing in the label, click the Update Settings button to save the changes you made. The /boot partition contains the files required for the system to boot up and get to a minimal state of functionality. Because full-featured filesystem kernel drivers – such as those that would allow the use of logical volume management (LVM) – are not available at the beginning of this process, the /boot partition must be a standard, non-LVM3 Linux partition with an EXT4 filesystem. These settings were chosen automatically when the /boot partition was created. We will study the boot and startup sequences in some detail in Chapter 16. After saving the updated settings for the /boot filesystem, the rest of the partitions can be created as logical volumes in a volume group. We will discuss logical volume management (LVM) in Chapter 1 of Volume 2, but for now it is important to know that LVM makes managing and resizing logical volumes very easy. For example, recently the logical volume I was using to store my virtual machines filled up while I was creating a new VM. VirtualBox politely stopped with a warning message indicating it was out of disk space and that it could continue when additional disk space was made available. I wish all software were that nice. Most times one would think about deleting existing files, but all I had in this filesystem were files for VMs that I needed. I was able to increase the size of the logical volume containing the directory in which my VMs are stored.
Using logical volume management made it possible to add space to the volume group, assign some of that space to the logical volume, and then increase the size of the filesystem, all without rebooting the computer or even terminating and restarting VirtualBox. When the task of adding space to the logical volume on which my VMs reside was complete, I simply clicked the button in the warning dialog to continue, and creation of the VM proceeded as if nothing had happened. Let's continue creating mount points. Once again, start by clicking the + button. Select / (the root filesystem), and type 2G for the size as shown in Figure 5-12. Click Add mount point to continue. The root filesystem is the top level of the Linux directory tree on any Linux host. All other filesystems will be mounted at various mount points on the root filesystem.
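That kind of online expansion boils down to a short sequence of commands. This is a hedged sketch: the device, volume group, and logical volume names are invented, and every command requires root:

```
[root@myworkstation ~]# pvcreate /dev/sdc1
[root@myworkstation ~]# vgextend vg_vms /dev/sdc1
[root@myworkstation ~]# lvextend -L +50G /dev/vg_vms/virtual
[root@myworkstation ~]# resize2fs /dev/vg_vms/virtual
```

pvcreate prepares the new partition for use by LVM, vgextend adds it to the volume group, lvextend grows the logical volume, and resize2fs expands the EXT4 filesystem to fill the larger volume, all while the filesystem remains mounted and in use.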

3. Logical Volume Manager


Figure 5-12.  Adding the root filesystem

Now scroll down in the right pane of the Manual Partitioning dialog, and type in the label "root" as shown in Figure 5-13. Notice that the device type is now LVM, for Logical Volume Management, and there is a volume group name. We are not yet done, because we want to do one more thing before proceeding. If we do nothing else to define the size of the volume group that will be created when the hard drive is formatted, the volume group will take only the 41G or so that we specify for our filesystems in Table 5-1, and it will leave the rest of the disk empty and inaccessible. We could fix that later, and the result would work, but it would be less than elegant.


In order to include all of the remaining space available on our virtual disk in the volume group (VG), we need to modify the VG specification. Click the Modify button under Volume Group.

Figure 5-13.  After entering the "root" label, click Modify to make changes to the volume group

We will not need to modify the volume group size more than once. After making the change to the volume group while creating this logical volume (LV), the VG size is set, and we don't need to do this for the following LVs. The only change we need to make on the rest of the logical volumes is to set the label.


The Configure Volume Group dialog would also allow us to change other things like the name of the volume group, but unless there is some imperative to do so, we should leave the rest of these configuration items alone. Nothing that we will do in this course requires any further changes to the volume group configuration. Under the Size policy selection box in the Configure Volume Group dialog box, click As large as possible as shown in Figure 5-14. This will cause the volume group to expand to include all of the remaining free space on the hard drive. Then click Save. Add the label "root," and click the Update Settings button.

Figure 5-14.  Configuring the volume group to use all available disk space


Go ahead and add the other partitions, except for the swap partition, as shown in Table 5-1. You will notice that the /usr and /tmp partitions are not in the list of mount points. For these partitions, just type in the mount point names, being sure to use the leading slash (/), and then proceed as you would with any other partition.

About swap space

Before you create the swap partition, this would be a good time to discuss swap, also known as paging. Swap space is a common and important aspect of computing today, regardless of operating system. Linux uses swap space and can use either a dedicated swap partition or a file on a regular filesystem or logical volume. SysAdmins have differing ideas about swap space – in particular, how much is the right amount. Although there are no definitive answers here, there are some explanations and guidelines to get you started.

Types of memory

There are two basic types of memory in a typical computer. Random-access memory (RAM) is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off. Hard drives are magnetic media or solid-state devices (SSDs) used for long-term storage of data and programs. Magnetic media and SSDs are nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU cannot directly access the programs and data on the hard drive; they must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. USB memory devices are used as if they were removable hard drives, and the operating system treats them as hard drives. During the boot process, a computer copies specific operating system programs, such as the kernel and startup programs like init or systemd, and data from the hard drive into RAM, where they are accessed directly by the computer's processor, the CPU (central processing unit).


Swap

The primary function of swap space is to substitute disk space for RAM when real RAM fills up and more space is needed. For example, assume you have a computer system with 2GB of RAM. If you start up programs that don't fill that RAM, everything is fine, and no swapping is required. But say the spreadsheet you are working on grows when you add more rows to it, and it now fills all of RAM. Without swap space available, you would have to stop work on the spreadsheet until you could free up some of your limited RAM by closing down some other programs. Swap space allows the use of disk space as a memory substitute when not enough RAM is available. The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for "paging," or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel's memory management code and can be paged back into RAM if they are needed. The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory.
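You can see both numbers, and therefore the total virtual memory, on any running Linux host. This minimal sketch reads them from /proc/meminfo, where the values are reported in kibibytes:

```shell
# Virtual memory = RAM + swap, read from /proc/meminfo (values in kB).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "RAM: ${ram_kb} kB"
echo "Swap: ${swap_kb} kB"
echo "Virtual memory: $((ram_kb + swap_kb)) kB"
```

The free command presents the same information in a friendlier format.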

Types of Linux swap

Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies – a standard disk partition or logical volume that is designated as swap space by the mkswap command. A swap file can be used if there is no free disk space in which to create a new swap partition, and no space in a volume group in which a logical volume can be created for swap space. It is just a regular file that is created and preallocated to a specified size; then the mkswap command is run to configure it as swap space. I don't recommend using a file for swap space unless it is absolutely necessary, or unless you have so much system RAM that you find it unlikely Linux would ever use your swap file unless something was going wrong, but you still want to prevent crashing or thrashing in unusual circumstances. I have discovered that even on my very large workstation with 64G of RAM, some swap space is used during backups and other operations that can take huge amounts of RAM and use it as buffers for temporary storage.
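Creating and activating a swap file follows the pattern just described. This is a sketch – the 2GB size and the /swapfile path are arbitrary choices, and all of these commands require root:

```
[root@studentvm1 ~]# dd if=/dev/zero of=/swapfile bs=1M count=2048
[root@studentvm1 ~]# chmod 600 /swapfile
[root@studentvm1 ~]# mkswap /swapfile
[root@studentvm1 ~]# swapon /swapfile
```

The chmod step keeps non-root users from reading memory pages written to the file, mkswap writes the swap signature, and swapon puts the new space into use immediately. An entry in /etc/fstab makes the change permanent.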


Thrashing

Thrashing can occur when total virtual memory, both RAM and swap space, becomes nearly full. The system spends so much time paging blocks of memory between swap space and RAM that little time is left for real work. The typical symptoms of this are fairly obvious:

•	The system becomes completely unresponsive or very, very slow.

•	If you can issue a command like top that shows system load and memory usage, you will see that the load is very high, perhaps as much as 30–40 times the number of CPUs in the system.

•	RAM is almost completely allocated, and swap space is seeing significant usage.

What is the right amount of swap space?

Many years ago, the rule of thumb for the amount of swap space that should be allocated was 2X the amount of RAM installed in the computer. Of course, that was when computers typically had RAM amounts measured in KB or MB. So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule of thumb took into account the fact that RAM sizes were typically quite small at that time and the fact that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.

RAM has become quite inexpensive, and many computers these days have amounts of RAM that extend into tens or hundreds of gigabytes. Most of my newer computers have at least 4 or 8GB of RAM, one has 32GB, and my main workstation has 64GB. When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. As a consequence, the recommended swap space is considered a function of system memory workload, not of system memory. Table 5-2 provides the Fedora Project's recommended size of a swap partition, depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage. The "recommended" swap partition size is established automatically during a default installation, but I usually find that to be either too large or too small for my needs. The Fedora 29 Installation Guide4 defines the current thinking about swap space allocation; Table 5-2 is my version of its table of recommendations. Note that other versions of Fedora and other Linux distributions may differ slightly from this table in some aspects, but it is the same table used by Red Hat Enterprise Linux for its recommendations. The recommendations in Table 5-2 have been very stable since Fedora 19.

Table 5-2.  Recommended System Swap Space in Fedora 29 Documentation

Amount of RAM installed in system    Recommended swap space    Recommended swap space with hibernation
≤ 2GB                                2X RAM                    3X RAM
2GB–8GB                              = RAM                     2X RAM
8GB–64GB                             4G to 0.5X RAM            1.5X RAM
>64GB                                Min 4GB                   Hibernation not recommended

Of course most Linux administrators have their own ideas about the appropriate amount of swap space – as well as about pretty much everything else. Table 5-3 contains my own recommendations based on my experiences in multiple environments.

Table 5-3.  Recommended System Swap Space per the author

Amount of RAM installed in system    Recommended swap space
≤ 2GB                                2X RAM
2GB–8GB                              = RAM
>8GB                                 8GB
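The rule of thumb in Table 5-3 can be expressed as a small shell function. This is a sketch for illustration; the function name is my own invention:

```shell
# Table 5-3 as a function: input is installed RAM in GiB,
# output is the recommended swap space in GiB.
recommended_swap() {
    local ram_gib=$1
    if [ "$ram_gib" -le 2 ]; then
        echo $(( ram_gib * 2 ))     # up to 2GB: 2X RAM
    elif [ "$ram_gib" -le 8 ]; then
        echo "$ram_gib"             # 2GB-8GB: equal to RAM
    else
        echo 8                      # over 8GB: 8GB is enough
    fi
}

recommended_swap 16    # prints 8
```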

Neither of these tables may work for your specific environment, but they do give you a place to start. The main consideration in both tables is that as the amount of RAM increases, adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. In order to test the Fedora (and RHEL) swap space recommendations, I have used their recommendation of 0.5X RAM on my two largest systems, the ones with 32 and 64GB of RAM. Even when running four or five VMs, multiple documents in LibreOffice, Thunderbird, the Chrome web browser, several terminal emulator sessions, the Xfce file manager, and a number of other background applications, the only time I see any use of swap is during the backups I have scheduled for every morning at about 2am. Even then, swap usage is no more than 16MB – yes, megabytes. Don't forget – these results are for my system with my loads and do not necessarily apply to your particular real-world environment.

4. Fedora Documentation, Installation Guide, https://docs.fedoraproject.org/en-US/fedora/f29/

Finish partitioning

Now go ahead and enter the data to create the swap partition as shown in Table 5-1. Note that once you select "swap" in the Add New Mount Point dialog, the swap partition does not actually have a mount point, as it is accessible only by the Linux kernel and not by users, even root. This is just a mechanism for allowing you to choose the swap partition while doing manual partitioning. When you have created all of the partitions listed in Table 5-1, click the blue Done button. You will then see a dialog entitled Summary of Changes. Click Accept Changes to return to the Installation Summary dialog.

Begin the installation

We have now completed all of the configuration items needed for our VM. To start the installation procedure, click the blue Begin Installation button. We have a couple of tasks that need to be performed during the installation; we do not need to wait until the installation has completed before we set the root password and add a non-root user. Notice in Figure 5-15 that there are warnings superimposed over the Root Password and User Creation options. It is not required that we create a non-root user, and we could do it later, but since we have the opportunity to do so now, let's go ahead and take care of both of these remaining tasks.


Figure 5-15.  The installation process has started

Set the root password

Click Root Password to set the password for root. Type in the password twice as shown in Figure 5-16. Notice the warning message at the bottom of the root password dialog, which says that the password I entered is based on a dictionary word. Because of the weak password, you must click the blue Done button twice to verify that you really want to use this weak password. If, as root, you set a weak password for root or a non-privileged user from the command line, you would receive a similar message, but you could continue anyway. This is because root can do anything, even set poor passwords for themselves or non-root users. Non-privileged users must set a good password and are not allowed to circumvent the rules for the creation of good passwords. However, you should enter a stronger password – one which does not generate any warnings – and then click the Done button.

Figure 5-16.  Setting the root password

After setting the root password, you will be back at the installation dialog as in Figure 5-15, and the Root Password item will no longer have a warning message.


Create the student user

Click the User Creation icon, and you will enter the User Creation dialog shown in Figure 5-17. Enter the data as shown, and click the blue Done button.

Figure 5-17.  Creating the student user

After specifying the user information, you will be back at the main installation dialog. The installation may not be complete yet. If not, wait until it completes as shown in Figure 5-18, and then proceed.


Finishing the installation

When completed, the Anaconda installer dialog will indicate "Complete" on the progress bar, and the success message at the bottom right in Figure 5-18 will be displayed, along with the blue Quit button.

Exit the installer

This terminology may be a bit confusing. Quit means to quit the Anaconda installer, which is an application running on the live image desktop. The hard drive has been partitioned and formatted, and Fedora has already been installed. Click Quit to exit the Anaconda installer.

Figure 5-18.  The installation is complete


Shut down the Live system

Before we do anything else, look at the Live system's Xfce desktop. It looks and works the same as the Xfce desktop you will use when we reboot the VM using its own virtual disk instead of the Live system. The only difference will be that some of the Live filesystem icons will no longer be present. So using this desktop will be the same as using the Xfce desktop on any installed system. Figure 5-19 shows how to shut down the Live system. The Xfce panel across the top of the screen starts with the Applications launcher on the left and has space for the icons of running applications, a clock, the system tray containing icons for various functions and notifications, and the User button on the far right, which always displays the name of the currently logged-in user.

Figure 5-19.  Shut down the VM after the installation is complete

Click the Live System User button, and then click the Shut Down action button. A dialog with a 30-second countdown will display. This dialog will allow you to shut down immediately or cancel the shutdown. If you do nothing, the system will shut down when the 30-second timer counts down to zero. This shutdown will power off the VM, and the VM window will close.

Reconfigure the VM

Before rebooting the VM, we need to reconfigure it a little by removing the Fedora ISO image file from the virtual optical drive. If we were to leave the ISO image inserted in the virtual drive, the VM would boot from the image:

1. Open the Settings for StudentVM1.

2. Click Storage.

3. Select the Fedora Live CD, which is under the IDE controller in the Storage Devices panel.

4. Click the little CD icon on the Optical Drive line in the Attributes panel.

5. At the bottom of the list, choose the menu option Remove Disk From Virtual Drive. The entry under the IDE controller should now be empty.

6. Click the OK button of the Settings dialog.

The StudentVM1 virtual machine is now ready to run the experiments you will encounter in the rest of this course.
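If you prefer the command line, the same change can be made on the host with VBoxManage. This is a sketch; the controller name and the port and device numbers are assumptions that may differ on your VM, so check the output of VBoxManage showvminfo StudentVM1 first:

```
VBoxManage storageattach StudentVM1 --storagectl IDE --port 0 --device 0 --medium emptydrive
```

The emptydrive medium ejects the ISO image while leaving the virtual optical drive itself attached to the VM.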

Create a snapshot

Before we boot the VM, we want to create a snapshot that you can return to in case the VM gets borked up so badly that you cannot recover without starting over. The snapshot will make it easy to recover to a pristine system without having to perform a complete reinstallation. Figure 5-20 shows the Snapshots view for the StudentVM1 virtual machine which we just created. To get to this view in the VirtualBox Manager, select the StudentVM1 VM, and then click the menu icon on the right side of the StudentVM1 selection bar. This pops up a short menu that contains Snapshots. Click the Snapshots view button in the icon bar. The Current State entry is the only one shown, so there are no snapshots.


You can take many snapshots of the same virtual machine as you progress through this course, which will make it easy to back up to a recent snapshot instead of going all the way back to the first one, which we will create here. I suggest creating a snapshot at the end of each chapter if you have enough room on the hard drive where the virtual machine files are stored.

Figure 5-20.  The Snapshots view of StudentVM1 before taking a snapshot

To create a snapshot, simply click the Take button – the one with the green + sign. This opens the Take Snapshot of Virtual Machine dialog, where you can change the default name to something else. There is also a description field where you can enter any type of notes or identifying data that you want. I kept the default name and just entered "Before first boot" in the description field. Enter whatever you want in the description field, but I suggest keeping the default snapshot names. The Snapshots view looks like Figure 5-21 after taking your first snapshot.
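Snapshots can also be taken from the host command line with VBoxManage. A sketch, with the snapshot name and description being whatever you choose:

```
VBoxManage snapshot StudentVM1 take "Snapshot 1" --description "Before first boot"
```

This is handy for scripting a snapshot at the end of each chapter rather than clicking through the GUI.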


Figure 5-21.  After taking the first snapshot of StudentVM1

First boot

It is now time to boot up the VM:

1. Select the StudentVM1 virtual machine.

2. Be sure that the Current State of the VM is selected in the Snapshots dialog.

3. Click the Start icon in the icon bar of the VirtualBox Manager. You could also right-click the VM and select Start from the pop-up menu.

4. The VM should boot to a GUI login screen like the one shown in Figure 5-22.


Figure 5-22.  The Fedora 29 GUI login screen

But don't log in just yet. We will get to that in Chapter 6, where we will explore this login screen and some other things a bit before we actually log in and explore the Xfce desktop. If you are not ready to continue to the next chapter, you can leave the VM running in this state or shut it down from the login screen. In the upper right corner of the VM login screen is a universal On/Off symbol. Click that, and select Shut Down ... to power off the VM.

What to do if the experiments do not work

Starting in the next chapter, you will have experiments to perform as part of learning to become a SysAdmin. These experiments are intended to be self-contained and not dependent upon any setup, except for the results of previously performed experiments or preparation. Certain Linux utilities and tools must be present, but these should all be installed or available to install on a standard Fedora Linux workstation installation. If any of these tools need to be installed, there will be a preparation section before the experiment in which they are needed. Installing tools like this is, after all, another part of being a SysAdmin. All of these experiments should "just work" assuming we install the requisite tools. We all know how that goes, right? So when something does fail, the first things to do are the obvious:

1. Ensure that the required tools were installed as part of the chapter preparation section. Not all chapters will need a preparation section.

2. Verify that the commands were entered correctly. This is the most common problem I encounter myself; it sometimes seems as if my fingers are not typing the things my brain sends to them.

3. You may see an error message indicating that the command was not found. The Bash shell shows the bad command; in this case I made up badcommand. It then gives a brief description of the problem. This error message is displayed for both missing and misspelled commands. Check the command spelling and syntax multiple times to verify that it is correct:

[student@testvm1 ~]$ badcommand
bash: badcommand: command not found...

4. Use the man command to view the manual pages (man pages) in order to verify the correct syntax and spelling of commands.

5. Ensure that the required commands are, in fact, installed. Install them if they are not already installed.

6. For experiments that require you to be logged in as root, ensure that you have done so. Many of the experiments in this course require that you be logged in as root – performing them as a non-root user will not work, and the tools will throw errors.

7. For the experiments that require being performed as a non-root user, be sure that you are using the student account.
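A quick way to perform step 5 is with the Bash built-in command -v, which reports whether a command can be found in your PATH. A minimal sketch, where lsof is just an example command:

```shell
# Check whether a required command is installed before starting an experiment.
if command -v lsof >/dev/null 2>&1; then
    echo "lsof is installed"
else
    echo "lsof is not installed -- try: dnf install lsof"
fi
```

The which utility does much the same job, but command -v is built into the shell and always available.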


There is not much else that should go wrong– but if you encounter a problem that you cannot make work using these tips, contact me at [emailprotected], and I will do my best to help figure out the problem.

Chapter summary

We have now installed the latest release of Fedora Linux on the virtual machine we created in the previous chapter. We discussed the terminology surrounding filesystems, and you should now be able to list the directories that are typically recommended for mounting as separate filesystems. We have created a snapshot of the VM in case we run into problems and need to roll back to the beginning.

Exercises

Perform the following exercises to complete this chapter:

1. Can the name of the volume group created by the Anaconda installer be changed during the installation?

2. How much swap space is recommended in the Fedora documentation for a host with 10GB of RAM that does not require hibernation?

3. On what factors are the swap space recommendations based?

4. How much total space was used by the installation?

5. What is the purpose of snapshots?

6. Is it possible to take a snapshot while the VM is up and running?


CHAPTER 6

Using the Xfce Desktop

Objectives

In this chapter you will learn

•	Why Xfce is a good desktop to use for this course as well as for regular use

•	The basic usage and navigation of the Xfce desktop

•	How to launch programs

•	The basic usage of the xfce4-terminal emulator

•	How to install all current updates as well as some new software

•	How to use the Settings Manager

•	How to add program launchers to the bottom panel

•	How to configure the Xfce desktop

Why Xfce

Xfce seems like an unusual choice of desktop for a Linux course, rather than the more common GNOME or KDE desktops. I started using Xfce a few months ago, and I find that I like it a lot and am enjoying its speed and lightness. The Xfce desktop is thin and fast, with an overall elegance that makes it easy to figure out how to do things. Its lightweight construction conserves both memory and CPU cycles. This makes it ideal for older hosts with few resources to spare for a desktop, and for resource-constrained virtual machines. However, Xfce is flexible and powerful enough to satisfy my needs as a power user.

© David Both 2020 D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_6


The desktop

Xfce is a lightweight desktop that has a very small memory footprint and low CPU usage compared to some other desktops such as KDE and GNOME. On my system the programs that make up the Xfce desktop take a tiny amount of memory for such a powerful desktop. Very low CPU usage is also a hallmark of Xfce; with such a small memory footprint, I am not especially surprised that it is also very sparing of CPU cycles. The Xfce desktop, as seen in Figure 6-1, is simple and uncluttered with fluff. The basic desktop has two panels and a vertical line of icons on the left side. Panel 0 is at the bottom and consists of some basic application launchers, as well as the Applications icon, which provides access to all of the applications on the system. The panels can be modified with additional items, such as new launchers, or by altering their height and width. Panel 1 is at the top and has an Applications launcher as well as a Workspace Switcher that allows the user to switch between multiple workspaces. A workspace is an organizational entity like a desktop, and having multiple workspaces is like having multiple desktops on which to work, with a different project on each.


Figure 6-1.  The Xfce desktop with the Thunar file manager and the xfce4-terminal open

The icons down the left side of the desktop are the home directory and Trash icons. The desktop can also display icons for the complete filesystem directory tree and for any connected pluggable USB storage devices. These icons can be used to mount and unmount a device, as well as to open the default file manager. They can also be hidden if you prefer, with the filesystem, Trash, and home directory icons being separately controllable. The removable drives can be hidden or displayed as a group.


The file manager

Thunar is the default file manager for Xfce. It is simple, easy to use and configure, and very easy to learn. While not as full featured as file managers like Konqueror or Dolphin, it is quite capable and very fast. Thunar cannot create multiple panes in its window, but it does provide tabs so that multiple directories can be open at the same time. Thunar also has a very nice sidebar that, like the desktop, shows icons for the complete filesystem directory tree and any connected USB storage devices. Devices can be mounted or unmounted, and removable media such as CDs can be ejected. Thunar can also use helper applications such as Ark to open archive files when they are clicked. Archives such as zip, tar, and rpm files can be viewed, and individual files can be copied out of them. Having used a number of different file managers, I must say that I like Thunar for its simplicity and ease of use. It is easy to navigate the filesystem using the sidebar.

Stability

The Xfce desktop is very stable. New releases seem to be on a three-year cycle, although updates are provided as necessary. The current version is 4.12, which was released in February of 2015. The rock-solid nature of the Xfce desktop is very reassuring after having issues with KDE. The Xfce desktop has never crashed for me, and it has never spawned daemons that gobbled up system resources. It just sits there and works, which is what I want.

Xfce is simply elegant. Simplicity is one of the hallmarks of elegance. Clearly the programmers who write and maintain Xfce and its component applications are great fans of simplicity. This simplicity is very likely the reason that Xfce is so stable, but it also results in a clean look, a responsive interface, an easily navigable structure that feels natural, and an overall elegance that makes it a pleasure to use.

The xfce4-terminal emulator

The xfce4-terminal emulator is a powerful emulator that, like many other terminal emulators, uses tabs to allow multiple terminals in a single window. This terminal emulator is simple compared to emulators like Tilix, Terminator, and Konsole, but it gets the job done. The tab names can be changed, and the tabs can be rearranged by drag and drop, by using the arrow icons on the toolbar, or with the options on the menu


bar. One thing I especially like about the tabs on the Xfce terminal emulator is that they display the name of the host to which they are connected, regardless of how many intermediate hosts the connection passes through; that is, host1 → host2 → host3 → host4 properly shows host4 in the tab. Other emulators show host2 at best. Many aspects of its function and appearance can be easily configured to suit your needs. Like other Xfce components, this terminal emulator uses very little in the way of system resources.

Configurability

Within its limits, Xfce is very configurable. While not offering as much configurability as a desktop like KDE, it is far more configurable, and more easily so, than GNOME, for example. I found that the Settings Manager is the doorway to everything that is needed to configure Xfce. The individual configuration apps are separately available, but the Settings Manager collects them all into one window for ease of access. All of the important aspects of the desktop can be configured to meet my own personal needs and preferences.

Getting started

Before we log in for the first time, let's take a quick look at the GUI login screen shown in Figure 6-2. There are some interesting things to explore here. The login screen, that is, the greeter, is displayed and controlled by the display manager, lightdm,¹ which is only one of several graphical login managers called display managers.² Each display manager also has one or more greeters – graphical interfaces – which can be changed by the user.

In the center of the screen is the login dialog. The user student is already selected because there are no other users who can log in at the GUI. The root user is not allowed to log in using the GUI. Like everything else in Linux, this behavior is configurable, but I recommend against changing it. If other users had been created for this host, they would be selectable using the selection bar.

The panel across the top of the login screen contains information and controls. Starting from the left, we see first the name of the host. Many of the display managers

¹ Wikipedia, LightDM, https://en.wikipedia.org/wiki/LightDM
² Wikipedia, Display Manager, https://en.wikipedia.org/wiki/X_display_manager_(program_type)


I have used – and there are several – do not display the hostname. In the center of the control panel is the current date and time. On the right side of the panel, we first find – again from left to right – a circle that contains "XF," which stands for Xfce. This control allows you to select any one of multiple desktops if you have more than Xfce installed. Linux has many desktops available, such as KDE, GNOME, Xfce, LXDE, Mate, and many more. You can install any or all of these and switch between them whenever you log in; you need to select the desired desktop before you log in.
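If you are curious which display manager your own system is running, systemd-based distributions such as Fedora point a well-known symlink at the active one. This is a side note, not part of the book's experiments; the fallback message covers hosts that have no display manager at all.

```shell
# On systemd-based systems, /etc/systemd/system/display-manager.service
# is a symlink to the unit file for the active display manager
# (e.g., lightdm.service). On a host without a graphical login, or
# without systemd, print a note instead of failing.
readlink /etc/systemd/system/display-manager.service 2>/dev/null \
  || echo "no display-manager.service symlink found"
```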

Figure 6-2.  Type in the password, and click the Log In button


The next item we encounter is language selection. This control allows you to select any one of hundreds of languages to use on the desktop. Next is a figure of a person with arms and legs spread wide; this provides accessibility choices such as large fonts and high-contrast colors for the desktop. Last, and furthest to the right, is the virtual power button. Click this and you get a submenu that allows you to suspend, hibernate, restart (reboot), and shut down (power off) the system.

Login

Before we can use the Xfce desktop, we need to log in. The StudentVM1 virtual machine should already be up and running and waiting for you to log in as shown in Figure 6-2; if you closed it at the end of the previous chapter, start it now. Click the VM screen, then type in the password you chose for the student user, and click the Log In button.

The first time you log in to Xfce, you are given a choice of panel configuration. The panel(s) can contain application launchers, a time and date calendar, a system tray with icons that allow access to things like the network, updates, the clipboard, and more. I strongly suggest using the default configuration rather than an empty panel. You can make changes to the panel later, but starting with an empty one creates a lot of unnecessary work right now.


Figure 6-3.  Select the default panel configuration

Just click Use default config to continue to the Xfce desktop, which now has a panel at the top and one at the bottom, as shown in Figure 6-4. The top panel contains several components that provide access to and control over some important functions. On the far left of the top panel is the Applications menu. Click this to see a menu and several submenus that allow you to select and launch programs and utilities. Just click the desired application to launch it. Next is some currently empty space where the icons for running applications will be displayed.

Then we have four squares, one of which is dark gray while the other three are lighter gray. This is the desktop selector, and the darker square is the currently selected desktop. The purpose of having more than one desktop is to enable placing windows for different projects on different desktops to help keep things organized. Application


windows and icons are displayed in the desktop selector if any are running. Just click the desired desktop to switch to it. Applications can be moved from one desktop to another. Drag the application from one desktop in the switcher to another, or right-click the application title bar to raise a menu that provides a desktop switching option.

Figure 6-4.  The Xfce desktop

To the immediate right of the desktop switcher is the clock. You can right-click the clock to configure it to display the date as well as the time in different formats. Next is the system tray, which contains icons to install software updates; to connect, disconnect, and check the status of the network; and to check the battery status. The network is connected by default at boot time, and you can also find information about the current connection. On a laptop, you would also have wireless information.


Soon after you log in, and at regular intervals thereafter, the dnf-dragora program – the orange and blue icon that is hard to see – will check for updates and notify you if there are any. There will very likely be a large number after the installation and first boot. For now just ignore this. Do not try to install updates yet; we will do that from the command line later in this chapter.

The bottom panel contains launchers for some basic applications. Be sure to note the second icon from the left, which launches the xfce4-terminal emulator. We will look at the rest of these launchers in more detail soon.

Exploring the Xfce desktop

Let's spend some time exploring the Xfce desktop itself. This includes reducing the annoyance level of the screensaver, doing some configuration to set default applications, adding launchers to Panel 2 – the bottom panel – to make them more easily accessible, and using multiple desktops.

As we proceed through this exploration of the Xfce desktop, you should take time to do a bit of exploration on your own. I find that is the way I learn best. I like to fiddle with things to try to get them the way I want them – or until they break – whichever comes first. When they break, I get to figure out what went wrong and fix them.

Like all decent desktops, Xfce has a screensaver that also locks the screen. This can get annoying – as it has for me while I write this – so we are going to reconfigure the screensaver first. Figure 6-5 shows us how to get started.


Figure 6-5.  Launching the screensaver configuration application

EXPERIMENT 6-1

Do this experiment as the student user. In this experiment we explore the screensaver and then turn it off so it won't interfere with our work.

1. To launch the screensaver application, use Panel 1 (the top one) and select Applications ➤ Settings ➤ Screensaver.

2. Figure 6-6 shows the Screensaver Preferences dialog. The Mode is currently set to Random Screen Saver, which selects savers from the checked ones in the list. Scroll down and select some of them to see what they look like in the preview box on the right. I selected XanalogTV for this screenshot because


it is interesting and it does bring back memories. Go ahead and "experiment" – all right, play – with this because it is fun.

Figure 6-6.  Experimenting with the screensaver application

This page also allows you to select timeouts for screen blanking and how often to cycle to a new random saver.

3. Click the Advanced tab. This dialog allows configuration of text and image manipulation. It also provides power management configuration for the display.

4. To disable the screensaver, return to the Display Modes tab, click the Mode button, and select Disable Screen Saver.

5. Close the Screensaver Preferences dialog.

For my physical hosts, I usually select the blank screen for my screensaver and set the time long enough that it won't blank while I am still working at my desk but not touching the mouse or keyboard. I set the screen to lock a few minutes after that. My tolerance levels change over time, so I do reset these occasionally. You should set them to your own needs.


Settings Manager

Let's look at how we can access the various Xfce desktop settings. There are two ways to do so. One is to use the Applications button on Panel 1, select Settings, and then select the specific setting item you want to view or change. The other is to open the Settings Manager at the top of the Settings menu. The Settings Manager has all of the other settings in one window for easy access. Figure 6-7 shows both options. On the left, you can see the Applications menu selection, and on the right is the Settings Manager.

Figure 6-7.  There are two ways of accessing the various Xfce desktop settings. Notice that I have resized the window of the StudentVM1 virtual machine so that there would be enough vertical space to show all of the settings in the Settings Manager


Adding launchers to Panel 2

I prefer to use the Settings Manager, and I also like to make the Settings Manager itself easier to access. Not that three clicks to go through the menu tree are a hardship, but one click is always better than three. This is part of being the lazy SysAdmin: less typing and fewer mouse clicks are always more efficient. So let's take a side trip to add the Settings Manager icon to Panel 2, the bottom panel, as a launcher.

EXPERIMENT 6-2

In this experiment we will add the Settings Manager to Panel 2 on the Xfce desktop.

1. Open the Applications menu as shown in Figure 6-7, and locate the Settings Manager at the top of the Settings menu.

2. Click the Settings Manager as if you were going to open it, but hold the mouse button down, and drag it to the left side of Panel 2 as I have in Figure 6-8. Hover over the small space at the end of the panel until a vertical red bar appears. This bar shows where the new launcher will be added.


Figure 6-8.  Adding the Settings Manager to Panel 2

3. When the red bar is in the desired location on the panel, release the mouse button to drop the launcher there.

4. A dialog opens asking whether you want to "Create new launcher from 1 desktop file." Click the Create Launcher button. The new launcher now appears on Panel 2 as shown in Figure 6-9.

Figure 6-9.  The new Settings Manager launcher on Panel 2

You can now launch the Settings Manager from the panel. You could have placed the launcher anywhere on the panel or on the desktop.


Note that only one click is required to launch applications from the panel. I add all of my most used applications to Panel 2 which prevents me from having to search for them in the menus every time I want to use one of them. As we work our way through this course, you can add more launchers to the panel to enhance your own efficiency.

Preferred applications

We can now return to setting our preferred applications. Default applications are choices like which terminal emulator or web browser you want all other applications to launch when one is needed. For example, you might want your word processor to launch Chrome when you click a URL embedded in the text. Xfce calls these preferred applications.

The preferred terminal emulator is already configured as the xfce4-terminal, which you have had an opportunity to use. We will go into much more detail about the xfce4-terminal in Chapter 7.

The icons at the bottom of the Xfce desktop, in Panel 2, include a couple for which we should choose default applications: the web browser and the file manager. If you were to click the web browser icon – the Earth with a mouse pointer on it – you would be given a choice of which of the installed web browsers you want to use as the default. At the moment, only the Firefox web browser is installed, so there aren't any real choices available. There is also a better way, and that is to make all of the preferred application selections at one time.

EXPERIMENT 6-3

In this experiment we will set the preferred applications for the student user.

1. If the Settings Manager is not already open, open it now.

2. Locate the Preferred Applications icon in the Settings dialog, and click it once to open it. This dialog opens to its Internet tab, which allows selection of the browser and e-mail application. Neither has a preferred application at this time, so we need to set one for the browser.

3. To set Firefox as the default browser, click the selection bar that says "No application selected" for the web browser. The only option at this time is Firefox, so select that.


4. Switch to the Utilities tab of the Preferred Applications dialog shown in Figure 6-10. Notice that both items here already have selections made: Thunar is the only option available as the file manager, and the Xfce terminal is the only option for the terminal emulator.

5. The fact that there are no other options available for any of these applications is due to the extremely basic installation that is performed by the desktop installers.

6. Click the All Settings button shown in Figure 6-10 to return to the main Settings Manager.

Figure 6-10.  The Utilities tab of the Preferred Applications dialog allows selection of the default GUI file manager and the default terminal emulator

The Thunar file manager is one of the best ones I have used. There are many file managers, and several of them are available for Fedora Linux. The same is true of the Xfce terminal – it is one of the best of many very good ones. In my opinion, even if there were other choices available here, these are excellent ones, and I would not change them. We will cover file managers in more detail in Chapter 2 of Volume 2.


Desktop appearance

Changing the appearance of the desktop is managed by more than one of the settings tools in the Settings Manager. I like to play – er, experiment – with these as my moods change. Well, not that often, but every few weeks. I like to try different things, and this is one harmless way of making changes that can be fun.

Appearance

We start with the Appearance tool, which allows us to select various aspects of the look of the user interface. Although Xfce does not have the vast number of configuration options that KDE does, it has more than some other desktops. I like a lot of flexibility in changing the look of my desktop, and I am quite satisfied with the amount of flexibility I get with the Xfce desktop. It is flexible enough for me without being overly complex.

The Appearance tool has four tabs that provide controls to adjust different parts of the Xfce desktop. The Appearance dialog opens to the Style tab. This tab is mostly about color schemes, but it also has some effect on the rendering of buttons and sliders. For example, controls may have a flat or 3D appearance in different styles. The second tab, Icons, allows selection of an icon theme from among several available ones; others can be downloaded and installed as well. The third tab, Fonts, allows the user to select a font theme for the desktop: a default variable-width font as well as a default monospace font. The fourth tab, Settings, determines whether the icons have text and where that text is located. It also provides the ability to determine whether some buttons and menu items have images on them. You can also turn sounds for events on or off on this tab.

EXPERIMENT 6-4 This experiment will provide you with an opportunity to try making changes to the look and feel of your desktop. Experimenting with these changes can suck up a lot of time, so try not to get too distracted by it. The main idea here is to allow you to familiarize yourself with changing the appearance of the Xfce desktop.


To begin, open the Settings Manager using the icon you added to Panel 2 in Experiment 6-2. Then click the Appearance icon, which is in the upper left of the Settings Manager window. Figure 6-11 shows the Style tab. This tab allows you to choose the basic color scheme and some of the visual aspects of the Xfce desktop. Click some of the different schemes to see how they look in your VM. I have noticed (at the time of this writing) that the Xfce selections look good with respect to the colors, but the menu bars, on windows that have them, seem to jam the menu items together so that they become difficult to read. For your new style, you should consider one of the others. I like the Adwaita-dark, Arc-Dark-solid, and Crux styles.

Figure 6-11.  Setting the style elements of the Xfce desktop

Now go to the Icons tab, and select some different icon schemes to see how they look. This is not the mouse pointer icon, but the application icons. I like the Fedora icon set. Notice that all changes take place almost as soon as you select them.


When you have finished setting the appearance of your desktop, click the All Settings button to return to the main settings dialog. Then click Window Manager. These settings enable you to change the look of the window decorations – things like the title bar, the icons on the title bar, and the size and look of the window borders. In Figure 6-12 I have chosen the B6 window decorations. Try some of the other themes in this menu.

The Keyboard tab allows you to change some of the keyboard shortcuts, but I seldom make any changes here. The Focus tab gives you the ability to determine when a window gets the focus so that it is the active window. The Advanced tab determines whether windows snap to invisible grid lines when moved and the granularity of the grid. It also allows you to configure how windows dragged to the edge of the screen act. Leave the Settings Manager open for now.

Figure 6-12.  The Window Manager settings allow you to change the look of the window decorations


You should also take a little time to explore the other dialogs found in the Settings Manager. Don’t forget that you can return to the Settings Manager at any time to change the appearance of your desktop. So if you don’t like tomorrow what you selected today, you can choose another look and feel for your desktop. Configuring the look and feel of the desktop may seem a bit frivolous, but I find that having a desktop that looks good to me and that has launchers for the applications I use most frequently and that can be easily modified goes a long way to making my work pleasant and easy. Besides, it is fun to play with these settings, and SysAdmins just want to have fun.

Multiple desktops

Another feature of the Xfce desktop – and of all but the simplest of the others I have used – is the ability to use multiple desktops, or workspaces as they are called in Xfce. I use this feature often, and many people find it useful to organize their work by placing the windows belonging to each project on a different desktop. For example, I have four workspaces on my Xfce desktop. I have my e-mail, an instance of the Chromium web browser, and a terminal session on my main workspace. I have VirtualBox and all of my running VMs in a second workspace, along with another terminal session. I have my writing tools on a third workspace, including various documents that are open in LibreOffice, another instance of Chromium for research, a file manager to open and manage the documents that comprise this book, and another terminal emulator session with multiple tabs, each of which is logged in via SSH to one of the VMs I have running.
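As an aside, the workspace count can also be read or changed from the command line with Xfce's xfconf-query tool. This is a sketch, not part of the book's experiments; the xfwm4 channel and property name are assumptions about a standard Xfce configuration, and the guard keeps the snippet harmless on systems without Xfce.

```shell
# Query the number of Xfce workspaces via the xfwm4 settings channel.
# This requires a running Xfce session; otherwise print a note.
if command -v xfconf-query >/dev/null 2>&1; then
  xfconf-query -c xfwm4 -p /general/workspace_count
  # To change it, e.g., to 4 workspaces (uncomment to apply):
  # xfconf-query -c xfwm4 -p /general/workspace_count -s 4
else
  echo "xfconf-query not available (no Xfce session here)"
fi
```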

EXPERIMENT 6-5

This experiment is designed to give you practice with using multiple desktops. Your desktop should look very similar to that in Figure 6-13, with the Settings Manager and Thunar file manager open.


Figure 6-13.  Move the Thunar file manager to another workspace using the System menu

To start, click the filing cabinet icon in the center of Panel 2 (the bottom panel). If you hover the mouse pointer over this folder, a tool tip will pop up showing the title "File Manager." The default file manager is Thunar, and it can be used to explore the files and directories in your home directory as well as other system directories to which you have access, such as /tmp.

But we want to move this file manager to a different desktop. There are two ways to do this. First, right-click anywhere on the file manager's title bar at the top of the window. Then select Move to Another Workspace as in Figure 6-13, and then click Workspace 3. You could also access the same menu with a right-click on the button for the running application in the top panel, Panel 1.

The Workspace Switcher now shows the window for the file manager in workspace 3, while the Settings Manager is still in workspace 1, as shown in Figure 6-14. You can click any workspace in the switcher to go immediately to that workspace, so click workspace 3 to go there.


Figure 6-14.  The Workspace Switcher shows windows in workspaces 1 and 3

Notice that the windows in the switcher are a reasonable approximation of their relative size on the workspaces that the switcher represents. The windows in the switcher also have icons that represent the application running in the window. This makes it fairly easy to use the switcher to move windows from one workspace to another. However, if the panel size is too small, the windows may not be replicated in the desktop switcher, or just the outline of the window will be present without an icon. If there are no windows in the desktop switcher, you should skip the next paragraph.

Drag the file manager icon from workspace 3 to workspace 4 and drop it there. The file manager window disappears from workspace 3, and its icon is now in workspace 4. Click workspace 4 to go there.

As with all things Linux, there are multiple ways to manage these workspaces and the application windows in each. I find that there are times when placing the windows that belong to a specific project on a workspace by themselves is a good way to reduce the clutter on my primary workspace.

Installing updates

It is important to ensure that the Linux operating system and software are always up to date. Although it is possible to install updates using the dnfdragora software management tool found in the system tray on the desktop, SysAdmins are more likely to perform updates from the command line. Software updates are installed to fix problems with existing versions or to add new function; they do not install a complete new release version of Fedora. The last experiment in this chapter explores using a terminal session on the desktop, as root, to install software updates.


EXPERIMENT 6-6

On the bottom panel, Panel 2, click the Terminal Emulator icon once; it is the third from the left in Figure 6-15. You can hover the mouse pointer over the icon to view a terse description of the program represented by the icon.

Figure 6-15.  Use Panel 2 to open a terminal session

1. Updates can only be installed by root. Even if we used the graphical dnfdragora software management tool on the desktop, we would need the root password. We need to switch user to root in the terminal session:

[student@studentvm1 ~]$ su -
Password: <Enter the root password>
[root@studentvm1 ~]#


You may have already noticed that we always add a dash after the su command, like so: su -. We will go into more detail about this in a later chapter, but for now it is sufficient to say that the dash ensures that root is working in the correct environment. The root user has its own home directory, environment variables like the path ($PATH), and some command-line tools that are a bit different for root than for other users.

2. Now we install all of the available updates. This is very important because it is always a best practice to ensure that things are working as they should by having the latest updates installed. The latest updates will contain the most recent security patches as well as functional fixes. This is easy, but it will require waiting while the process completes. The nice thing is that Linux updates, even when they do require a reboot, don't reboot automatically; you can continue working until you are ready to reboot. Enter the following command:

[root@studentvm1 ~]# dnf -y update

On my VM this installed over 375 updates. The number may vary greatly depending upon how recent the ISO image from which you installed Linux is and upon how many updates have been released since. I have not shown the lengthy output produced by this command, but you should pay some attention to it as the dnf command does its work; this will give you an idea of what to expect when you do updates later. The installation of some updates, especially some kernel packages, may appear to stop for a period of time or to hang. Don't worry; this is normal.

3. Because the kernel was updated, we will reboot so that the new kernel is loaded. There are ways to do this in the GUI, but I prefer rebooting from the command line. After the updates have been installed and the message "Complete!" is displayed, we will do the reboot – but not before:

[root@studentvm1 ~]# reboot


4. During the reboot, be sure to look at the GRUB menu. Note that there are multiple kernels shown – two, for now. You can use the up and down arrow keys on your keyboard to select a different kernel than the default, which is always the most recent. We will talk more about this later, but having multiple kernels from which to boot can be very helpful at times. Don't change this for now.

5. Log in to the desktop and open a terminal session. There is something else that needs to be done after an update to ensure that the man(ual) pages – the help facility – are up to date. I have had times when the database was not properly updated and the man command did not display the man page for a command. This command ensures that all of the man pages are up to date:

[root@studentvm1 ~]# mandb
<snip>
Purging old database entries in /usr/share/man/ko...
Processing manual pages under /usr/share/man/ko...
Purging old database entries in /usr/local/share/man...
Processing manual pages under /usr/local/share/man...
0 man subdirectories contained newer manual pages.
0 manual pages were added.
0 stray cats were added.
2 old database entries were purged.

Not very much resulted from this on my system, but two old database entries were purged.
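A quick way to correlate the GRUB entries you saw in step 4 with what is installed is to list the kernel packages; on Fedora each installed kernel package corresponds to a GRUB menu entry. This sketch is an aside, not part of the book's experiment, and the guard keeps it runnable on non-RPM systems.

```shell
# List installed kernel packages on an RPM-based system such as Fedora.
# Each installed kernel should match a selectable entry in the GRUB menu.
if command -v rpm >/dev/null 2>&1; then
  rpm -q kernel
else
  echo "rpm not available on this system"
fi
```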

Chapter summary

You have logged in using the GUI greeter for the Xfce desktop and familiarized yourself with the desktop. You launched and learned very basic usage of the xfce4-terminal emulator, and you installed all current updates. You have explored the Xfce desktop and learned a number of ways to configure it to create a different look and feel. You have also explored some ways to make the desktop work a bit more efficiently for you, such as adding launchers to the panel and using multiple desktops.


I did an online search to try to discover what Xfce means, and there is a historical reference to XForms Common Environment, but Xfce no longer uses the Xforms tools. Some years ago I found a reference to “Xtra fine computing environment,” and I like that a lot and will use that despite not being able to find the page reference again.

Exercises

Perform the following exercises to complete this chapter:

1. What does the term "lightweight" mean when applied to the Xfce desktop?
2. Do you think that using multiple workspaces will be beneficial to you and the way you like to work?
3. How many options are there for the terminal emulator in the Preferred Applications configuration dialog?
4. Can you change the number of available workspaces?
5. What is the name of the default file manager for the Xfce desktop?
6. How does this file manager compare to others you have used?
7. How do you obtain a terminal login as the root user?


CHAPTER 7

Using the Linux Command Line

Objectives

In this chapter you will learn:

•	Command-line terminology and the differences between the terms terminal, console, shell, command line, and session

•	Three different methods for gaining access to the Linux command-line interface (CLI)

•	To use the Bash shell

•	About some other, alternative shells

•	Why it can be useful to have multiple command-line sessions open simultaneously

•	At least three different ways to deal with multiple command-line interfaces

•	Some basic but important Linux commands

Introduction

The Linux command line is "Linux Command Central" to a SysAdmin. The Linux CLI is a nonrestrictive interface because it places no limits on how you use it. A graphical user interface (GUI) is by definition a very restrictive interface. You can only perform the tasks you are allowed in a prescribed manner, and all of that is



chosen by the programmer. You cannot go beyond the limits of the imagination of the programmer who wrote the code or, more likely, the restrictions placed on the programmer by the Pointy-Haired Bosses. In my opinion, the greatest drawback of any graphical interface is that it suppresses any possibility for automation. No GUI offers any capability to truly automate tasks. Instead, there are only repetitive mouse clicks to perform the same or similar operations multiple times on slightly different data. Simple "search and replace" operations are about the best it gets with most GUI programs.

The CLI, on the other hand, allows for great flexibility in performing tasks. The reason for this is that each Linux command, not just the GNU core utilities but also the vast majority of the Linux commands, was written using tenets of the Linux Philosophy such as "Everything is a file," "Always use STDIO," "Each program should do one thing well," "Avoid captive user interfaces," and so on. You get the idea, and I will discuss each of these tenets later in this book, so don't worry too much if you don't yet understand what they mean. The bottom line for the SysAdmin is that when developers follow the tenets, the power of the command line can be fully exploited. The vast power of the Linux CLI lies in its complete lack of restrictions.

In this chapter we will begin to explore the command line in ways that will illuminate the power that it literally places at your fingertips. There are many options for accessing the command line, such as virtual consoles, many different terminal emulators, and other related software that can enhance your flexibility and productivity. All of those possibilities will be covered in this chapter, as well as some specific examples of how the command line can perform seemingly impossible tasks, or just satisfy the Pointy-Haired Boss.

Preparation

Before we get any further into our discussion about the command line, there is a little preparation we need to take care of. The default Linux shell is Bash, which is the one I prefer. Like many other things, there are many shells from which you can choose. Many of these shells are available for both Linux and Unix systems, including OS X. We will be looking at a few of them and are going to install them here, along with a couple of other interesting programs that we will explore later.



PREPARATION

Not all distributions install the software packages we will use during this chapter, so we will install them now. These packages are primarily shells. If one or more of these packages are already installed, a message will be displayed to indicate that, but the rest of the packages will still install correctly. Some additional packages will be installed to meet the prerequisites of the ones we are installing. Do this as root:

[root@studentvm1 ~]# dnf -y install tilix screen ksh tcsh zsh sysstat

On my test VM, the command installed the packages listed and some other packages to meet dependencies.
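A quick check, not part of the book's exercises, is to confirm what the installation changed. Shell packages register themselves as valid login shells in /etc/shells, and `command -v` reports whether a binary is on the PATH:

```shell
# List the shells registered as valid login shells on this system.
# Installing a shell package normally adds an entry to this file.
cat /etc/shells

# Verify that one of the newly installed shells is now on the PATH.
# If it is missing, print a message instead of failing silently.
command -v zsh || echo "zsh not found"
```

If `zsh` was installed correctly, `command -v zsh` prints its full path, typically something like /usr/bin/zsh.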

Defining the command line

The command line is a tool that provides a text mode interface between the user and the operating system. The command line allows the user to type commands into the computer for processing and to see the results. The Linux command-line interface is implemented with shells such as Bash (Bourne again shell), csh (C shell), and ksh (Korn shell), to name just three of the many that are available. The function of any shell is to interpret commands typed by the user and pass the results to the operating system, which executes the commands and returns the results to the shell.

Access to the command line is through a terminal interface of some type. There are three primary types of terminal interface that are common in modern Linux computers, but the terminology can be confusing. So indulge me while I define those terms, as well as some other terms that relate to the command line, in some detail.

CLI terminology

There are several terms relating to the command line that are often used interchangeably. This indiscriminate usage of the terms caused me a good bit of confusion when I first started working with Unix and Linux. I think it is important for


SysAdmins to understand the differences between the terms console, virtual console, terminal, terminal emulator, terminal session, and shell. Of course you can use whatever terminology works for you, so long as you get your point across. Within the pages of this book, I will try to be as precise as possible because the reality is that there are significant differences in the meanings of these terms, and it sometimes matters.

Command prompt

The command prompt is a string of characters like this one that sits there with a cursor, which may be flashing, waiting to prompt you to enter a command:

[student@studentvm1 ~]$ ■

The typical command prompt in a modern Linux installation consists of the username, the hostname, and the present working directory (PWD), also known as the "current" directory, all enclosed in square brackets. The tilde (~) character indicates the home directory.
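In Bash, the prompt shown above is built from the shell variable PS1; this is a Bash convention, and other shells use different mechanisms. A minimal sketch:

```shell
# Bash prompt escapes: \u expands to the username, \h to the hostname,
# and \W to the basename of the present working directory.
# This sets a prompt matching the one shown above.
PS1='[\u@\h \W]\$ '

# Show the raw prompt string; the shell expands the escapes at display time.
printf '%s\n' "$PS1"
```

Changing PS1 at the command line only affects the current session; making it permanent means setting it in a Bash startup file such as ~/.bashrc.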

Command line

The command line is the line on the terminal that contains the command prompt and any command you enter.

All of the modern mainstream Linux distributions provide at least three ways to access the command line. If you use a graphical desktop, most distributions come with multiple terminal emulators from which to choose. The graphical terminal emulators run in a window on the GUI desktop, and more than one terminal emulator can be open at a time.

Linux also provides the capability for multiple virtual consoles to allow for multiple logins from a single keyboard, video display, and mouse (KVM). Virtual consoles can be used on systems that don't have a GUI desktop, but they can be used even on systems that do have one.

The last method to access the command line on a Linux computer is via a remote login. Telnet was a common tool for remote access for many years, but because of greatly increased security concerns, it has largely been replaced by Secure Shell (SSH).

Command-line interface

The command-line interface is any text mode user interface to the Linux operating system that allows the user to type commands and see the results as textual output.


Command

Commands are what you type on the command line in order to tell Linux what you want it to do for you. Commands have a general syntax that is easy to understand. The basic command syntax for most shells is:

command [-o(ptions)] [arg1] [arg2] ... [argX]

Options may also be called switches. They are usually a single character and are binary in meaning, that is, they turn on a feature of the command, such as using the -l option in ls -l to show a long listing of the directory contents. Arguments are usually text or numerical data that the command needs in order to function or produce the correct results. For example, the name of a file, directory, username, and so on would be an argument. Many of the commands that you will discover in this course use one or more options and, sometimes, an argument.

If you run a command that simply returns to the CLI command prompt without printing any additional data to the terminal, don't worry; that is what is supposed to happen with most commands. If a Linux command works as it is supposed to, most of the time it will not display any result at all. Only if there is an error will any message display. This is in line with the part of the Linux Philosophy, and there is a significant discussion about that which I won't cover here, that says, "Silence is golden."

Command names are also usually very short. This is called the "Lazy Admin" part of the Linux Philosophy; less typing is better. The command names also usually have some literal relation to their function. Thus the "ls" command means "list" the directory contents, "cd" means change directory, and so on. Note that Linux is case sensitive. Commands will not work if entered in uppercase: ls will work, but LS will not. File and directory names are also case sensitive.
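To make the syntax concrete, here is a small sketch (my own example, not from the book's experiments) showing a command, an option, and an argument:

```shell
# ls alone lists the current directory: a command with no options or arguments.
ls

# -l is an option (switch) that turns on the long listing format;
# /etc is an argument naming the directory to act on.
ls -l /etc

# Single-character options can usually be combined: -l (long) plus -a
# (all, including hidden files whose names start with a dot).
ls -la /etc
```

The same command/option/argument pattern applies to nearly every command you will meet in this course.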

Terminal

The word "terminal" originally referred to an old bit of hardware that provided a means of interacting with a mainframe or Unix computer host. In this book the term will refer to terminal emulator software that performs the same function. The terminal is not the computer; terminals merely connect to mainframes and Unix systems. Terminals, the hardware type, are usually connected to their host computer through a long serial cable. Terminals such as the DEC VT100 shown in Figure 7-1 are usually called "dumb terminals" to differentiate them from a PC or other


small computer that may act as a terminal when connecting to a mainframe or Unix host. Dumb terminals have just enough logic in them to display data from the host and to transfer keystrokes back to the host. All of the processing and computing is performed on the host to which the terminal is connected.

Figure 7-1. A DEC VT100 dumb terminal. This file is licensed under the Creative Commons Attribution 2.0 Generic license. Author: Jason Scott.

Terminals that are even older, such as mechanical teletype machines (TTY), predate the common use of CRT displays. They used rolls of newsprint-quality paper to provide a record of both the input and results of commands. The first college course I took on computer programming used these TTY devices, which were connected by telephone line at 300 bits per second to a GE (yes, General Electric) time-sharing computer a couple of hundred miles away. Our university could not afford a computer of its own at that time.

Much of the terminology pertaining to the command line is rooted in the historical usage of these dumb terminals of both types. For example, the term TTY is still in common use, but I have not seen an actual TTY device in many years. Look again in the /dev directory of your Linux or Unix computer, and you will find a large number of TTY device files.

Terminals were designed with the singular purpose of allowing users to interact with the computer to which they were attached by typing commands and viewing the results

Chapter 7

Using theLinux Command Line

on the roll of paper or the screen. The term "terminal" tends to imply a hardware device that is separate from the computer while being used to communicate and interact with it.
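As a quick aside (my own example, assuming a standard /dev layout), you can see the TTY device files mentioned above for yourself:

```shell
# The /dev directory still contains TTY device files, the modern
# descendants of hardware teletype terminals.  Show the first few.
ls /dev/tty* | head -n 5
```

On a typical system this lists /dev/tty plus numbered entries such as /dev/tty1 for the virtual consoles discussed later in this chapter.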

Console

A console is a special terminal because it is the primary terminal connected to a host. It is the terminal at which the system operator would sit to enter commands and perform tasks that were not allowed at other terminals connected to the host. The console is also the only terminal on which the host would display system-level error messages when problems occurred.

Figure 7-2. Unix developers Ken Thompson and Dennis Ritchie. Thompson is sitting at a teletype terminal used as a console to interface with a DEC computer running Unix. Photo by Peter Hamer; uploaded by Magnus Manske.


There can be many terminals connected to mainframe and Unix hosts, but only one can act as a console. On most mainframes and Unix hosts, the console was connected through a dedicated connection that was designated specifically for the console. Like Unix, Linux has runlevels, and some of the runlevels such as runlevel 1, single user mode, and recovery mode are used only for maintenance. In these runlevels, only the console is functional to allow the SysAdmin to interact with the system and perform maintenance.

Note  KVM stands for keyboard, video, and mouse, the three devices that most people use to interact with their computers.

On a PC the physical console is usually the keyboard, monitor (video), and sometimes the mouse (KVM) that are directly attached to the computer. These are the physical devices used to interact with BIOS during the BIOS boot sequence, and they can be used during the early stages of the Linux boot process to interact with GRUB to choose a different kernel to boot or to modify the boot command to boot into a different runlevel. Because of the close physical connection of the KVM devices to the computer, the SysAdmin must be physically present at this console during the boot process in order to interact with the computer. Remote access is not available to the SysAdmin during the boot process and only becomes available when the SSHD service is up and running.

Virtual consoles

Modern personal computers and servers that run Linux do not usually have dumb terminals that can be used as a console. Linux typically provides the capability for multiple virtual consoles to allow for multiple logins from a single, standard PC keyboard and monitor. Red Hat Enterprise Linux, CentOS, and Fedora Linux usually provide for six or seven virtual consoles for text mode logins. If a graphical interface is used, the first virtual console, vc1, becomes the first graphical (GUI) session after the X Window System (X) starts, and vc7 becomes the second GUI session.

Each virtual console is assigned to a function key corresponding to the console number. So vc1 would be assigned to function key F1, and so on. It is easy to switch to and from these sessions. On a physical computer, you can hold down the Ctrl-Alt keys and press F2 to switch to vc2. Then hold down the Ctrl-Alt keys and press F1 to switch to vc1 and what is usually the graphical desktop interface. We will cover how to do this on a VM in Experiment 7-1. If there is no GUI running, vc1 will be simply another text console.


Figure 7-3. Login prompt for virtual console 2

Virtual consoles provide a means to access multiple consoles using a single physical system console: the keyboard, video display, and mouse (KVM). This gives administrators more flexibility to perform system maintenance and problem solving. There are some other means for additional flexibility, but virtual consoles are always available if you have physical access to the system or a directly attached KVM device or some logical KVM extension such as Integrated Lights-Out (ILO). Other means, such as the screen command, might not be available in some environments, and a GUI desktop will probably not be available on most servers.

Using virtual consoles

EXPERIMENT 7-1

For this experiment you will use one of the virtual consoles to log in to the command line as root. The command line is where you will do most of your work as a system administrator. You will have an opportunity to use a terminal session in the GUI desktop later, but this is what your system will look like if you do not have a GUI.

1. If you were on a physical host, you would press Ctrl-Alt-F2 to access virtual console 2. Because we are on virtual machines, however, pressing that key combination would take us to a virtual console of the physical host. We need to do something a bit different for the virtual machine. Click the VM to give it the focus. There is a key called the Host Key that we will use to simulate the Ctrl-Alt key combination. The current Host Key is indicated in the lower right corner of the VM window, as you can see in Figure 7-4. As you can see there, I have changed the default Host Key on my VirtualBox installation to be the Left WinKey because I find it easier to use than the Right Ctrl key. The WinKeys are the keys on your physical keyboard that have the Windows icon on them. Use the File ➤ Preferences menu on the VM window's menu bar, and then choose Input to change the Host Key and other key combinations.




Figure 7-4. The Right WinKey is the default Host Key, but I have changed mine to the Left WinKey because it is easier for me to use

To change to virtual console 2 (vc2) now that the VM has the focus, press and hold the Host Key for your VM, then press the F2 key (HostKey-F2) on your keyboard. Your VM window should now look like that in Figure 7-5. Note that I have resized the VM window so that the entire window can be easily shown here.

Figure 7-5. The VM window showing the virtual console 2 login

2. If you are not already logged in, and you probably are not, log in to virtual console session 2 as root. Type root on the Login line, and press the Enter key as shown in Figure 7-6. Type in your root password, and press Enter again. You should now be logged in and at the command prompt.



Figure 7-6. Vc2 after logging in as root

The # prompt shows that this is a root login.

3. Use HostKey-F3 to change to virtual console session 3 (vc3). Log in on this console as student. Note that any user can be logged in multiple times using any combination of the virtual consoles and GUI terminal emulators. Note the $ prompt, which denotes the prompt for a non-root (non-privileged) user. In vc3, run the ls -la command. Notice the Bash and other configuration files, most of which start with a dot (.). Your listing will probably be different from my listing:

[student@studentvm1 ~]$ ls -la
total 160
drwx------. 15 student student  4096 Sep  2 09:14 .
drwxr-xr-x.  5 root    root     4096 Aug 19 08:52 ..
-rw-------.  1 student student    19 Aug 29 13:04 .bash_history
-rw-r--r--.  1 student student    18 Mar 15 09:56 .bash_logout
-rw-r--r--.  1 student student   193 Mar 15 09:56 .bash_profile
-rw-r--r--.  1 student student   231 Mar 15 09:56 .bashrc
drwx------.  9 student student  4096 Sep  2 09:15 .cache
drwx------.  8 student student  4096 Aug 19 15:35 .config
drwxr-xr-x.  2 student student  4096 Aug 18 17:10 Desktop
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Documents
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Downloads
-rw-------.  1 student student    16 Aug 18 10:21 .esd_auth
drwx------.  3 student student  4096 Aug 18 10:21 .gnupg
-rw-------.  1 student student  1550 Sep  2 09:13 .ICEauthority
drwxr-xr-x.  3 student student  4096 Aug 18 10:21 .local
drwxr-xr-x.  4 student student  4096 Apr 25 02:19 .mozilla
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Music
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Pictures
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Public
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Templates
-rw-r-----.  1 student student     5 Sep  2 09:13 .vboxclient-clipboard.pid
-rw-r-----.  1 student student     5 Sep  2 09:13 .vboxclient-display.pid
-rw-r-----.  1 student student     5 Sep  2 09:13 .vboxclient-draganddrop.pid
-rw-r-----.  1 student student     5 Sep  2 09:13 .vboxclient-seamless.pid
drwxr-xr-x.  2 student student  4096 Aug 18 10:21 Videos
-rw-rw-r--.  1 student student 18745 Sep  2 09:24 .xfce4-session.verbose-log
-rw-rw-r--.  1 student student 20026 Sep  2 09:12 .xfce4-session.verbose-log.last
-rw-rw-r--.  1 student student  8724 Aug 18 21:45 .xscreensaver
-rw-------.  1 student student  1419 Sep  2 09:13 .xsession-errors
-rw-------.  1 student student  1748 Sep  2 09:12 .xsession-errors.old
[student@studentvm1 ~]$

4. Use the clear command to clear the console screen:

[student@studentvm1 ~]$ clear

The reset command resets all terminal settings. This is useful if the terminal becomes unusable or unreadable, such as after cat'ing a binary file. Even if you cannot read the reset command as you input it, it will still work. I have on occasion had to use the reset command twice in a row.

5. If you are not currently logged in to a terminal emulator session in the GUI, do so now. Use HostKey-F1 to return to the GUI, and open the terminal emulator. Because you are already logged in to the GUI desktop, it is unnecessary to log in to the terminal emulator session.

6. Open a terminal window if you do not already have one open, and type w to list currently logged-in users and uptime. You should see at least three logins: one for root on tty2, one for student on tty3, and one for student on tty1, which is the GUI console session:

[student@studentvm1 ~]$ w
 16:48:31 up 2 days,  7:35,  5 users,  load average: 0.05, 0.03, 0.01
USER     TTY      LOGIN@    IDLE    JCPU    PCPU  WHAT
student  tty1     Sun09     2days   10.41s  0.05s /bin/sh /etc/xdg/xfce4/xinitrc -- vt
student  pts/1    Sun09     18:57m  0.15s   0.05s sshd: student [priv]
root     tty2     13:07     3:41m   0.02s   0.02s -bash
student  pts/3    13:17     4.00s   0.05s   0.03s w
student  tty3     13:21     3:24m   0.03s   0.03s -bash
[student@studentvm1 ~]$

I have more logins listed than you will because I have also logged in "remotely" from the physical host workstation using SSH. This makes it a bit easier for me to copy and paste the results of the commands. Due to the setup of the virtual network, you will not be able to SSH into the virtual machine.

Notice the first line of data, which shows student logged in on TTY1. TTY1 is the GUI desktop. You will also see the logins for TTY2 and TTY3 as well as two logins using pseudo-terminals (pts), pts/1 and pts/3. These are my remote SSH login sessions.

7. Enter the who command. It provides similar, but slightly different, information from w:

[student@studentvm1 ~]$ who
student  tty1        2018-09-02 09:13 (:0)
student  pts/1       2018-09-02 09:26 (192.168.0.1)
root     tty2        2018-09-04 13:07
student  pts/3       2018-09-04 13:17 (192.168.0.1)
student  tty3        2018-09-04 13:21
[student@studentvm1 ~]$

In the results of the who command, you can also see the IP address from which I logged in using SSH. The (:0) string is not an emoji; it is an indicator that TTY1 is attached to display :0, the first display.

8. Type whoami to display your current login name:

[student@studentvm1 ~]$ whoami
student
[student@studentvm1 ~]$

Of course your login name is also displayed in the text of the command prompt. However, you may not always be who you think you are.


9. Type the id command to display your real and effective ID and GID. The id command also shows a list of the groups to which your user ID belongs:

[student@studentvm1 ~]$ id
uid=1000(student) gid=1000(student) groups=1000(student) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[student@studentvm1 ~]$

We will discuss user IDs, groups, and group IDs in detail later. The part of the output from the id command that starts with "context" is split onto a second line here, but it should be displayed on a single line in your terminal. However, the split here is a convenient way to see the SELinux information. SELinux is Security-Enhanced Linux; the code was written by the NSA to ensure that even if a hacker gains access to a host protected by SELinux, the potential damage is extremely limited. We will cover SELinux in a little more detail in Volume 3, Chapter 17.

10. Switch back to console session 2. Use the whoami, who, and id commands the same as in the other console session. Let's also use the who am i command:

[student@studentvm1 ~]$ whoami
student
[student@studentvm1 ~]$ who
root     pts/1       2019-01-13 14:13 (192.168.0.1:S.0)
root     pts/2       2019-01-14 12:09 (192.168.0.1:S.1)
student  pts/3       2019-01-15 16:15 (192.168.0.1)
student  tty1        2019-01-15 21:53 (:0)
student  pts/5       2019-01-15 22:04 (:pts/4:S.0)
student  pts/6       2019-01-15 22:04 (:pts/4:S.1)
student  tty2        2019-01-15 22:05
student  tty3        2019-01-15 22:06
student  pts/8       2019-01-15 22:19
[student@studentvm1 ~]$ id
uid=1000(student) gid=1000(student) groups=1000(student) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[student@studentvm1 ~]$ who am i
student  pts/8       2019-01-15 22:19

11. Log out of all the virtual console sessions.

12. Use Ctrl-Alt-F1 (HostKey-F1) to return to the GUI desktop.
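As a supplement to the experiment, the id command used above can also extract individual fields with options, which is handy in scripts. A short sketch:

```shell
# id options (GNU coreutils) for pulling out single fields:
id -u     # numeric user ID (UID)
id -un    # username for that UID
id -g     # numeric primary group ID (GID)
id -Gn    # names of all groups this user belongs to
```

For example, `id -un` produces the same result as whoami, which makes it a useful drop-in replacement in shell scripts.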


The virtual consoles are assigned to device files such as /dev/tty2 for virtual console 2, as in Figure 7-3. We will go into much more detail on device files throughout this course and especially in Chapter 3 of Volume 2. The Linux console2 is the terminal emulator for the Linux virtual consoles.

Terminal emulator

Let's continue with our terminology. A terminal emulator is a software program that emulates a hardware terminal. Most of the current graphical terminal emulators, like the xfce4-terminal emulator seen in Figure 7-7, can emulate several different types of hardware terminals. Most terminal emulators are graphical programs that run on any Linux graphical desktop environment like Xfce, KDE, Cinnamon, LXDE, GNOME, and others.

You can see in Figure 7-7 that a right-click on the xfce4-terminal emulator window brings up a menu that allows opening another tab or another emulator window. This figure also shows that there are currently two tabs open. You can see them just under the menu bar.

Figure 7-7. The xfce4-terminal emulator with two tabs open

2. Wikipedia, Linux Console, https://en.wikipedia.org/wiki/Linux_console




The first terminal emulator was Xterm,3 which was originally developed in 1984 by Thomas Dickey.4 The original Xterm is still maintained and is packaged as part of many modern Linux distributions. Other terminal emulators include xfce4-terminal,5 GNOME-terminal,6 Tilix,7 rxvt,8 Terminator,9 Konsole,10 and many more. Each of these terminal emulators has a set of interesting features that appeal to specific groups of users. Some have the capability to open multiple tabs or terminals in a single window. Others provide just the minimum set of features required to perform their function and are typically used when small size and efficiency are called for.

My favorite terminal emulators are xfce4-terminal, Konsole, and Tilix because they offer the ability to have many terminal emulator sessions in a single window. The xfce4-terminal and Konsole do this using multiple tabs that I can switch between. Tilix offers the ability to tile multiple emulator sessions in a window session as well as providing multiple sessions. My current terminal emulator of choice is xfce4-terminal, primarily because it offers a good feature set that is as good as Konsole's and yet is also very lightweight and uses far fewer system resources. Other terminal emulator software provides many of these features but not as adroitly and seamlessly as the xfce4-terminal and Tilix.

For this course we will use the xfce4-terminal because it is the default for the Xfce desktop, it is very sparing of system resources, and it has all of the features we need. We will install and explore other terminal emulators in Chapter 14 of this volume.

Pseudo-terminal

A pseudo-terminal is a Linux device file to which a terminal emulator is attached in order to interface with the operating system. The device files for pseudo-terminals are located in the /dev/pts directory and are created only when a new terminal emulator session is launched. That can be a new terminal emulator window or a new tab or

3. Wikipedia, Xterm, https://en.wikipedia.org/wiki/Xterm
4. Wikipedia, Thomas Dickey, https://en.wikipedia.org/wiki/Thomas_Dickey
5. Xfce Documentation, Xfce4-terminal, https://docs.xfce.org/apps/terminal/introduction
6. Wikipedia, GNOME terminal, https://en.wikipedia.org/wiki/GNOME_Terminal
7. Fedora Magazine, Tilix, https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/
8. Wikipedia, Rxvt, https://en.wikipedia.org/wiki/Rxvt
9. Wikipedia, Terminator, https://en.wikipedia.org/wiki/Terminator_(terminal_emulator)
10. KDE, Konsole terminal emulator, https://konsole.kde.org/


panel in an existing window of one of the terminal emulators that support multiple sessions in a single window. The device files in /dev/pts are simply a number for each emulator session that is opened. The first emulator session would be /dev/pts/1, for example.
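A quick sketch of this in action (my own example): the tty command prints the device special file of the terminal it is connected to, and you can list the allocated pseudo-terminals directly:

```shell
# tty prints the device file for the current terminal: a /dev/pts/N file
# in a terminal emulator, or /dev/ttyN on a virtual console.  When there
# is no controlling terminal it prints "not a tty"; the || true keeps
# this snippet safe to run non-interactively.
tty || true

# List the pseudo-terminal device files currently allocated.
ls /dev/pts
```

Opening a second terminal emulator tab and running tty in it should show a different /dev/pts number.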

Device special files

Let's take a brief side trip. Linux handles almost everything as a file. This has some interesting and amazing implications. This concept makes it possible to copy an entire hard drive, boot record included, because the entire hard drive is a file, just as are the individual partitions. "Everything is a file" is possible because all devices are implemented by Linux as these things called device files. Device files are not device drivers; rather, they are gateways to devices that are exposed to the user. Device files are technically known as device special files.11

Device files are employed to provide the operating system and, even more importantly in an open operating system, the users, an interface to the devices that they represent. All Linux device files are located in the /dev directory, which is an integral part of the root (/) filesystem because they must be available to the operating system during the early stages of the boot process, before other filesystems are mounted. We will encounter device special files throughout this course, and you will have an opportunity to experiment extensively with them in Chapter 3 of Volume 2. For now, just having a bit of information about device special files will suffice.
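One easy way to recognize a device special file, shown here as my own quick example, is the first character of a long listing:

```shell
# Device special files show "c" (character device) or "b" (block device)
# as the first character of a long listing, and a major,minor device
# number pair where a regular file would show its size.
ls -l /dev/null /dev/zero
```

Both of these are character devices, so each line begins with "c"; a disk device such as a partition would instead begin with "b".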

Session

Session is another of those terms that can apply to different things, and yet it retains essentially the same meaning. The most basic application of the term is to a terminal session: a single terminal emulator connected to a single user login and shell. So in its most basic sense, a session is a single window or virtual console logged in to a local or remote host with a command-line shell running in it. The xfce4-terminal emulator supports multiple sessions by placing each session in a separate tab.

11. Wikipedia, Device File, https://en.wikipedia.org/wiki/Device_file



Shell

A shell is the command interpreter for the operating system. Each of the many shells available for Linux interprets the commands typed by the user or SysAdmin into a form usable by the operating system. When the results are returned to the shell program, it displays them on the terminal. The default shell for most Linux distributions is the Bash shell. Bash stands for Bourne again shell because the Bash shell is based upon the older Bourne shell, which was written by Stephen Bourne in 1977. Many other shells are available. The four I list here are the ones I encounter most frequently, but many others exist:12

•	csh: The C shell, for programmers who like the syntax of the C language

•	ksh: The Korn shell, written by David Korn and popular with Unix users

•	tcsh: A version of csh with more ease-of-use features

•	zsh: Which combines many features of other popular shells

All shells have some built-in commands that supplement or replace the commands provided by the core utilities. Open the man page for bash and find the "SHELL BUILTIN COMMANDS" section to see the list of commands provided by the shell itself.

I have used the C shell, the Korn shell, and the Z shell. I still like the Bash shell better than any of the others I have tried. Each shell has its own personality and syntax. Some will work better for you and others not so well. Use the one that works best for you, but that might require that you at least try some of the others. You can change shells quite easily.
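A quick way to see the difference between built-ins and external commands is Bash's type built-in. A brief sketch:

```shell
# "type" reports how Bash will interpret a command name.
type cd     # cd is a shell builtin
type ls     # ls resolves to an external file provided by the core
            # utilities; the exact path shown depends on your distribution
```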

Using different shells

So far we have been using the Bash shell, so you have had a brief experience with it. There are some other shells that might be better suited for your needs. We will look at three others in this experiment.

12. Wikipedia, Comparison of command shells, https://en.wikipedia.org/wiki/Comparison_of_command_shells


EXPERIMENT 7-2

Because most Linux distributions use the Bash shell as the default, I will assume that is the one you have been using and that it is your default shell. In our preparation for this chapter, we installed three other shells: ksh, tcsh, and zsh. Do this experiment as the user student. First, look at your command prompt, which should look like this:

[student@studentvm1 ~]$

This is the standard Bash prompt for a non-root user. Now let's change this to the ksh shell. Just enter the name of the shell:

[student@studentvm1 ~]$ ksh
$

You can tell by the difference in the prompt that this is a different shell. Run a couple of simple commands such as ls and free just to see that there is no difference in how the commands work. This is because most of the commands are separate from the shell, except for the built-ins. Try the ll command:

$ ll
ksh: ll: not found [No such file or directory]
$

That fails because Korn shell aliases are different from Bash aliases. Try scrolling up to get a command history like Bash's. It does not work. Now let's try zsh.

$ zsh
This is the Z Shell configuration function for new users, zsh-newuser-install.
You are seeing this message because you have no zsh startup files (the files
.zshenv, .zprofile, .zshrc, .zlogin in the directory ~). This function can
help you with a few settings that should make your use of the shell easier.

You can:

(q)  Quit and do nothing. The function will be run again next time.


(0)  Exit, creating the file ~/.zshrc containing just a comment.
     That will prevent this function being run again.

(1)  Continue to the main menu.

--- Type one of the keys in parentheses ---

If you continue by entering "1," you will be taken through a series of menus that will help you configure the Z shell to suit your needs – as best you might know them at this stage. I chose "q" to just go on to the prompt, which looks just a bit different from the Bash prompt:

[student@studentvm1]~%

Run a few simple commands while you are in the Z shell. Then type exit twice to get back to the original, top-level Bash shell:

[student@studentvm1]~% w
 14:30:25 up 3 days,  6:12,  3 users,  load average: 0.00, 0.00, 0.02
USER     TTY      LOGIN@   IDLE    JCPU   PCPU  WHAT
student  pts/0    Tue08    0.00s   0.07s  0.00s w
root     pts/1    Wed06    18:48   0.26s  0.26s -bash
student  pts/2    08:14    6:16m   0.03s  0.03s -bash
[student@studentvm1]~% exit
$ exit
[student@studentvm1 ~]$

What do you think might happen if you start a Bash shell while you are already in a Bash shell?

[student@studentvm1 ~]$ bash
[student@studentvm1 ~]$ ls
Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
[student@studentvm1 ~]$ exit
exit
[student@studentvm1 ~]$

You just get into another Bash shell, is what. This illustrates more than it might appear superficially. First there is the fact that each shell is a layer. Starting a new shell does not terminate the previous one. When you started ksh from Bash, the Bash shell remained in the background, and when you exited from ksh, you were returned to the waiting Bash shell.


It turns out that this is exactly what happens when running any command or process from a shell. The command runs in its own session, and the parent shell– process– waits until that sub-command returns and control is returned to it before being able to continue processing further commands. So if you have a script which runs other commands– which is the purpose of a script– the script runs each command, waiting for it to finish before moving on to run the next command. That behavior can be modified by appending an ampersand (&) to the end of a command, which places the called command in the background and allows the user to continue to interact with the shell, or for the script to continue processing more commands. You would only want to do this with commands that do not require further human interaction or output to STDOUT.You would also not want to run commands in the background when the results of that command are needed by other commands that will be run later but perhaps before the background task has finished. Because of the many options available to SysAdmins and users in Linux, there is little need for moving programs to the background. Just open another terminal emulator on the desktop, start another terminal emulator in a screen session, or switch to an available virtual console. This capability might be more useful in scripts to launch programs that will run while your script continues to process other commands. You can change your shell with the chsh command so that it will be persistent every time you log in and start a new terminal session. We will explore terminal emulators and shells in more detail in Chapter 14.
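The foreground-versus-background behavior described above can be sketched with sleep standing in for a long-running command:

```shell
# Foreground: the shell waits until the command finishes.
sleep 1

# Background: the trailing "&" returns control to the shell immediately.
sleep 2 &
bgpid=$!                       # PID of the background job
echo "shell is free while sleep runs in the background (PID $bgpid)"

# "wait" blocks until the background job completes -- useful in scripts
# that must not use a job's results before the job has finished.
wait "$bgpid"
echo "background job finished"
```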

Secure Shell (SSH)

SSH is not really a shell. The ssh command starts a secure communication link between itself as the client and another host with the SSHD server running on it. The actual command shell used at the server end is whatever default shell is set for that account on the server side, such as the Bash shell. SSH is simply a protocol that creates a secure communications tunnel between two Linux hosts.

screen

You might at first think of "screen" as the device on which your Linux desktop is displayed. That is one meaning. For SysAdmins like us, screen is a program, a screen manager that enhances the power of the command line. The screen utility allows


launching multiple shells in a single terminal session and provides means to navigate between the running shells.

I have many times had a remote session running a program when the communications link failed. When that happened, the running program was terminated as well, and I had to restart it from the beginning. It could get very frustrating. The screen program can prevent that. A screen session will continue to run even if the connectivity to the remote host is broken because the network connection fails. It also allows the intentional disconnection of the screen session from the terminal session and reconnecting later from the same or a different computer. All of the CLI programs running in the screen terminal sessions will continue to run on the remote host. This means that once communication is reestablished, one can log back into the remote host and use the screen -r command at the remote command line to reattach the screen session to the terminal.

So I can start up a bunch of terminal sessions in screen and use Ctrl-a + d to disconnect from screen and log out. Then I can go to another location, log in to a different host, SSH to the host running screen, log in, and use the screen -r command to reconnect to the screen session, and all of the terminal sessions and their respective programs will still be running.

The screen command can be useful in some environments where physical access to a hardware console is not available to provide access to the virtual consoles but the flexibility of multiple shells is needed. You will probably find it convenient to use the screen program, and in some cases, it will be necessary to do so in order to work quickly and efficiently.

EXPERIMENT 7-3

In this experiment we explore the use of the screen program. Perform this experiment in a terminal session as the student user. Before we begin, let's discuss how to send commands to the screen program itself in order to do things like open a new terminal and switch between running terminal sessions.

In this experiment I provide instructions such as "press Ctrl-a + c" to open a new terminal, for example. That means that you should hold down the Control key while you press the "a" key; at this point you can release the Control and "a" keys because you have alerted the screen program that the next keystroke is intended for it. Now press the "c" key. This sequence of


keystrokes seems a bit complicated, but I soon learned it as muscle memory, and it is quite natural by now. I'm sure the same will be true for you, too.

For the Ctrl-a + " (double quote) sequence, which shows a list of all open terminals in that screen session, do Ctrl-a, release those keys, and then press Shift + ". For the Ctrl-a + Ctrl-a sequence, which toggles between the most recent two terminal sessions, you must continue to hold down the Control key and press the "a" key twice.

1. Enter the screen command, which will clear the display and leave you at a command prompt. You are now in the screen display manager with a single terminal session open and displayed in the window.

2. Type any command such as ls to have something displayed in the terminal session besides the command prompt.

3. Press Ctrl-a + c to open a new shell within the screen session.

4. Enter a different command, such as df -h, in this new terminal.

5. Type Ctrl-a + Ctrl-a to switch between the terminals.

6. Enter Ctrl-a + c to open a third terminal.

7. Type Ctrl-a + " to list the open terminals. Choose any one except the last one by using the up/down arrow keys, and hit the Enter key to switch to that terminal.

8. To close the selected terminal, type exit and press the Enter key.

9. Type Ctrl-a + " to verify that the terminal is gone. Notice that the terminal with the number you have chosen to close is no longer there and that the other terminals have not been renumbered.

10. To reopen a fresh terminal, use Ctrl-a + c.

11. Type Ctrl-a + " to verify that the new terminal has been created. Notice that it has been opened in the place of the terminal that was previously closed.

12. To disconnect from the screen session and all open terminals, press Ctrl-a + d. Note that this leaves all of the terminals and the programs in them intact and still running.

13. Enter the screen -list command on the command line to list all of the current screen sessions.
This can be useful to ensure that you reconnect to the correct screen session if there are multiple ones.


14. Use the command screen -r to reconnect to the active screen session. If multiple active screen sessions are open, then a list of them will be displayed, and you can choose the one to which you wish to connect; you will have to enter the name of the screen session to which you want to connect.

I recommend that you not open a new screen session inside of an existing screen session. It can be difficult to switch between the terminals because the screen program does not always understand to which of the embedded sessions to send the command.

I use the screen program all the time. It is a powerful tool that provides me with extreme flexibility for working on the command line.

The GUI and the CLI

You may like and use any of the many graphical user interfaces, that is, desktops, which are available with almost all Linux distributions; you may even switch between them because you find one particular desktop such as KDE more usable for certain tasks and another like GNOME better suited for other tasks. But you will also find that most of the graphical tools required to manage a Linux computer are simply wrappers around the underlying CLI commands that actually perform those functions.

A graphical interface cannot approach the power of the CLI because the GUI is inherently limited to those functions the programmers have decided you should have access to. This is how Windows and other restrictive operating systems work. They only allow you to have access to the functions and power that they decide you should have. This might be because the developers think you really do want to be shielded from the full power of your computer, or it might be due to the fact that they don't think you are capable of dealing with that level of power, or it might be that writing a GUI to do everything a CLI can do is time-consuming and a low priority for the developer.

Just because the GUI is limited in some ways does not mean that good SysAdmins cannot leverage it to make their jobs easier. I do find that I can leverage the GUI with more flexibility for my command-line tasks. By allowing multiple terminal windows on the desktop, or by using advanced terminal emulation programs such as xfce4-terminal and Tilix that are designed for a GUI environment, I can improve my productivity. Having multiple terminals open on the desktop gives me the capability of being logged


into multiple computers simultaneously. I can also be logged into any one computer multiple times, having open multiple terminal sessions using my own user ID and more terminal sessions as root. For me, having multiple terminal sessions available at all times, in multiple ways, is what the GUI is all about. A GUI can also provide me with access to programs like LibreOffice, which I am using to write this book, graphical e-mail and web browsing applications, and much more. But the real power for SysAdmins is in the command line.

Linux uses the GNU core utilities, which were originally written by Richard M. Stallman,13 aka RMS, as the free, open source utilities required by any free version of Unix or Unix-like operating systems. The GNU core utilities are the basic file, shell, and text manipulation utilities of any GNU operating system such as GNU/Linux and can be counted upon by any SysAdmin to be present on every version of Linux. In addition, every Linux distribution has an extended set of utilities that provide even more functions.

You can enter the command info coreutils to view a list of the GNU core utilities and select individual commands for more information. You can also use the man command to view the man page for each of these commands and all of the many hundreds of other Linux commands that are also standard with every distribution.

Some important Linux commands

The most basic Linux commands are those that allow you to determine and change your current location in the directory structure, create, manage, and look at files, view various aspects of system status, and more. These next experiments will introduce you to some basic commands that enable you to do all of these things. They also cover some advanced commands that are frequently used during the process of problem determination.

Most of the commands covered in these experiments have many options, some of which can be quite esoteric. These experiments are neither meant to cover all of the Linux commands available (there are several hundred) nor are they intended to cover all of the options on any of these commands. This is meant only as an introduction to these commands and their uses.

13. Wikipedia, Richard M. Stallman, https://en.wikipedia.org/wiki/Richard_Stallman

The PWD

The acronym PWD means present working directory. The PWD is important because all command actions take place in the PWD unless another location is explicitly specified in the command. The pwd command means "print working directory," that is, print the name of the current directory on the shell output.

Directory path notation styles

A path is a notational method for referring to directories in the Linux directory tree. This gives us a method for expressing the path to a directory or a file that is not in the pwd. The term pwd refers to the present working directory, which you might know as the "current directory." Linux uses paths extensively for easy location of and access to executable files, making it unnecessary to type the entire path to the executable. For example, it is easier to type "ls" than it is to type "/usr/bin/ls" to run the ls command. The shell uses the PATH variable, where it finds a list of directories in which to search for the executable by the name "ls".
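You can ask the shell which file that PATH search actually resolves to. The command -v built-in performs the same directory-by-directory search the shell does when it runs a command:

```shell
# Show the full path the shell resolves for "ls" by searching the
# directories in $PATH in order; the exact path varies by distribution.
command -v ls
```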

EXPERIMENT 7-4

This simple experiment displays the content of the PATH environment variable for the student user:

[student@studentvm1 ~]$ echo $PATH
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/student/.local/bin:/home/student/bin
[student@studentvm1 ~]$

The various paths – directories – that the shell will search are listed in the output from the preceding command. Each path is separated by a colon (:).

There are two types of notation we can use to express a path – absolute and relative. An absolute path is specified completely, starting with the root directory. So if the pwd is the Downloads directory of my home directory, I would specify the absolute path as /home/student/Downloads. With that as my pwd, if I need to specify the absolute path to my Documents/Work directory, that would look like this: /home/student/Documents/Work. I could also specify that path in relative notation from my current pwd as ../Documents/Work. I could also use the notation ~/Documents/Work because the tilde (~) is a shorthand notation for my home directory.
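The following sketch walks through the same idea with throwaway directories under /tmp; the pathdemo names are arbitrary and used only for illustration:

```shell
# Build a small tree to play with.
mkdir -p /tmp/pathdemo/Documents/Work /tmp/pathdemo/Downloads

# Absolute path: complete, starting from the root directory.
cd /tmp/pathdemo/Downloads

# Relative path: ".." climbs one level, then we descend two.
cd ../Documents/Work
pwd    # shows the absolute equivalent of where the relative path took us
```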

Moving around the directory tree

Let's start by looking at how to move around the Linux filesystem directory tree at the command line. Many times working on or in a directory is easier if it is the present working directory (pwd), which is also known as the current directory. Moving around the filesystem is a very important capability, and there are a number of shortcuts that can help as well.

EXPERIMENT 7-5

Perform this experiment as the student user. You should already be logged in to the Xfce desktop with an Xfce terminal session open as the student user. If not, do that now. Moving around the Linux filesystem directory tree is important for many reasons. You will use these skills throughout this course and in real life as a SysAdmin.

1. Start in the terminal session as the user student. Check the present working directory (PWD):

[student@studentvm1 tmp]$ pwd
/tmp
[student@studentvm1 tmp]$ cd
[student@studentvm1 ~]$ pwd
/home/student
[student@studentvm1 ~]$

The first time I checked, the pwd was the /tmp directory because I had been working there. Your PWD will probably be your home directory (~). Using the cd command with no options always makes your home directory the pwd. Notice in the command prompt that the tilde (~) is a shorthand indicator for your home directory.

2. Now just do a simple command to view the content of your home directory. These directories are created when a new user does the first GUI login to the account:


[student@studentvm1 ~]$ ll
total 212
drwxr-xr-x. 2 student student 4096 Aug 18 17:10 Desktop
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Documents
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Downloads
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Music
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Pictures
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Public
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Templates
drwxr-xr-x. 2 student student 4096 Aug 18 10:21 Videos
[student@studentvm1 ~]$

This command does not show the so-called hidden files in your home directory, which makes it easier to scan the rest of the contents.

3. Let's create a few files to work with since there are none other than the hidden configuration files created by default. The following command-line program will create a few files so that we have more than just directories to look at. We will look at command-line programming in some detail as we proceed through the course. Enter the program all on one line:

[student@studentvm1 ~]$ for I in dmesg.txt dmesg1.txt dmesg2.txt dmesg3.txt dmesg4.txt ; do dmesg > $I ; done
[student@studentvm1 ~]$ ll
total 252
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Desktop
-rw-rw-r--. 1 student student 41604 Sep 30 16:13 dmesg1.txt
-rw-rw-r--. 1 student student 41604 Sep 30 16:13 dmesg2.txt
-rw-rw-r--. 1 student student 41604 Sep 30 16:13 dmesg3.txt
-rw-rw-r--. 1 student student 41604 Sep 30 16:13 dmesg4.txt
-rw-rw-r--. 1 student student 41604 Sep 30 16:13 dmesg.txt
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Documents
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Downloads
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Music
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Pictures
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Public
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Templates
drwxr-xr-x. 2 student student  4096 Sep 29 15:31 Videos
[student@studentvm1 ~]$


This long listing shows the ownership and file permissions for each file and directory. The string drwxr-xr-x shows first, with the leading "d", that this is a directory; a file would have a dash (-) in that position. The file permissions are three triplets of (R)ead, (W)rite, and e(X)ecute. Each triplet represents User, the owner of the file; Group, the group that owns the file; and Other, for all other users. These permissions represent something a bit different on a directory. We will explore file and directory ownership and permissions in more detail in Chapter 18.

4. Make /var/log the pwd and list the contents:

[student@studentvm1 ~]$ cd /var/log ; ll
total 18148
drwxrwxr-x. 2 root    root       4096 Aug 13 16:24 anaconda
drwx------. 2 root    root       4096 Jul 18 13:27 audit
drwxr-xr-x. 2 root    root       4096 Feb  9  2018 blivet-gui
-rw-------. 1 root    root      74912 Sep  2 09:13 boot.log
-rw-rw----. 1 root    utmp        768 Sep  2 09:26 btmp
-rw-rw----. 1 root    utmp        384 Aug 18 10:21 btmp-20180901
<snip>
drwxr-xr-x. 2 lightdm lightdm    4096 Sep  2 09:13 lightdm
-rw-------. 1 root    root          0 Sep  2 03:45 maillog
-rw-------. 1 root    root          0 Apr 25 02:21 maillog-20180819
-rw-------. 1 root    root          0 Aug 19 03:51 maillog-20180831
-rw-------. 1 root    root          0 Aug 31 14:47 maillog-20180902
-rw-------. 1 root    root    2360540 Sep  6 13:03 messages
-rw-------. 1 root    root    1539520 Aug 19 03:48 messages-20180819
-rw-------. 1 root    root    1420556 Aug 31 14:44 messages-20180831
-rw-------. 1 root    root     741931 Sep  2 03:44 messages-20180902
drwx------. 3 root    root       4096 Jul  8 22:49 pluto
-rw-r--r--. 1 root    root       1040 Jul 18 07:39 README
<snip>
-rw-r--r--. 1 root    root      29936 Sep  4 16:48 Xorg.0.log
-rw-r--r--. 1 root    root      28667 Sep  2 09:12 Xorg.0.log.old
-rw-r--r--. 1 root    root      23533 Aug 18 10:16 Xorg.9.log
[student@studentvm1 log]$

Can you determine which are files and which are directories?
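The stat command can decode those permission strings for you. A small sketch using a scratch file; the file name is arbitrary, and the octal value you see depends on your umask:

```shell
# Create a scratch file and print its symbolic and octal permissions.
# %A and %a are GNU stat format flags.
touch /tmp/permdemo.txt
stat -c 'symbolic: %A  octal: %a' /tmp/permdemo.txt
```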


5. Try to display the content of the current maillog file:

[student@studentvm1 log]$ cat maillog
cat: maillog: Permission denied
[student@studentvm1 log]$

6. If you are using Fedora as recommended, there should be a README file in /var/log. Use the cat command to view the contents:

[student@studentvm1 log]$ cat README

Why can you view the contents of this file?

7. Let's change the pwd to /etc:

[student@studentvm1 log]$ cd /etc ; pwd
/etc
[student@studentvm1 etc]$

8. Now change to the Documents subdirectory of your home directory (~):

[student@studentvm1 etc]$ cd ~/Documents/ ; ll
total 0
[student@studentvm1 Documents]$

Notice that we used the tilde (~) to represent our home directory, which would otherwise have to be typed out as /home/student/Documents.

9. Now I want to return to the /etc directory, but we can save a bit of typing using this shortcut:

[student@studentvm1 Documents]$ cd -
/etc
[student@studentvm1 etc]$

The dash (-), aka the minus sign, will always return you to the previous pwd. How? Let's look a bit at the environment, which defines many environment variables including $PWD and $OLDPWD. The env command prints all of the current environment variables, and the grep command extracts and sends to STDOUT only those lines that contain "pwd":

[student@studentvm1 etc]$ env | grep -i pwd
PWD=/etc


OLDPWD=/home/student/Documents
[student@studentvm1 etc]$

The dash (-), when used as an option to the cd command, is a shorthand notation for the $OLDPWD variable. The command could also be issued in the following manner:

[student@studentvm1 Documents]$ cd $OLDPWD
[student@studentvm1 etc]$

10. Let's go to a directory that is a couple layers deep. First we return to our home directory and create a new directory that has a few levels of parents. The mkdir command can do that when used with the -p option:

[student@studentvm1 etc]$ cd ; mkdir -p ./testdir1/testdir2/testdir3/testdir4/testdir5 testdir6 testdir7
[student@studentvm1 ~]$ tree
.
├── Desktop
├── dmesg1.txt
├── dmesg2.txt
├── dmesg3.txt
├── dmesg.txt
├── Documents
├── Downloads
├── Music
├── newfile.txt
├── Pictures
├── Public
├── Templates
├── testdir1
│   └── testdir2
│       └── testdir3
│           └── testdir4
│               └── testdir5
├── testdir6
├── testdir7
└── Videos


We also did some other fun stuff with that command to make new directories. The first string was a directory with a number of parents. Then we also added two more directories to be created in the current directory. The mkdir utility, like so many others, accepts a list of arguments, not just a single one. In this case the list was of new directories to create.

11. There is also a shorthand notation for the PWD that we can use in commands. The variable $PWD would work, but the dot (.) is much faster. So for some commands that need a source and target directory, we can use the . for either. Note that in the previous step, the top of the tree command output starts with a dot, which indicates the current directory:

[student@studentvm1 ~]$ mv ./dmesg2.txt /tmp
[student@studentvm1 ~]$ cp /tmp/dmesg2.txt .
[student@studentvm1 ~]$ cp /tmp/dmesg2.txt ./dmesg4.txt

In this experiment we have looked at how to navigate the directory tree and how to create new directories. We have also practiced using some of the notational shortcuts available to us.

Tab completion facility

Bash provides a facility for completing partially typed program and hostnames, file names, and directory names. Type the partial command or a file name as an argument to a command, and press the Tab key. If the host, file, directory, or program exists and the remainder of the name is unique, Bash will complete the entry of the name. Because the Tab key is used to initiate the completion, this feature is sometimes referred to as "Tab completion."

Tab completion is programmable and can be configured to meet many different needs. However, unless you have specific needs that are not met by the standard configurations provided by Linux, the core utilities, and other CLI applications, there should never be a reason to change the defaults.
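As a small taste of programmable completion, Bash's complete built-in can attach a word list to a command name. Everything here is hypothetical; "deploy" is a made-up command name used only to show the mechanism:

```shell
# Offer three fixed words as completions for the first argument of a
# (made-up) "deploy" command.
complete -W "staging production rollback" deploy

# In an interactive shell, typing "deploy st<Tab>" would now complete
# to "deploy staging". "complete -p" prints the rule we just defined.
complete -p deploy
```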


Note The Bash man page has a detailed and mostly unintelligible explanation of “programmable completion.” The book Beginning the Linux Command Line has a short and more readable description,14 and Wikipedia15 has more information, examples, and an animated GIF to aid in understanding this feature. Experiment 7-6 provides a very short introduction to command completion.

EXPERIMENT 7-6

Perform this experiment as the student user. Your home directory should have a subdirectory named Documents for this experiment. Most Linux distributions create a Documents subdirectory for each user.

Be sure that your home directory is the PWD. We will use completion to change into the ~/Documents directory. Type the following partial command into the terminal:

[student@studentvm1 ~]$ cd D<Tab>

<Tab> means to press the Tab key once. Nothing happens because there are three directories that start with "D." You can see them by pressing the Tab key twice in rapid succession, which lists all of the directories that match what you have already typed:

[student@studentvm1 ~]$ cd D<Tab><Tab>
Desktop/   Documents/  Downloads/
[student@studentvm1 ~]$ cd D

Now add the "o" to the command, and press Tab twice more:

[student@studentvm1 ~]$ cd Do<Tab><Tab>
Documents/  Downloads/
[student@studentvm1 ~]$ cd Do

14. Van Vugt, Sander. Beginning the Linux Command Line, (Apress 2015), 22.
15. Wikipedia, Command-line completion, https://en.wikipedia.org/wiki/Command-line_completion

You should see a list of both directories that start with "Do." Now add the "c" to the command, and press the Tab key once:

[student@studentvm1 ~]$ cd Doc<Tab>
[student@studentvm1 ~]$ cd Documents/

So if you type cd Doc<Tab>, the rest of the directory name is completed in the command.

Let's take a quick look at completion for commands. In this case the command is relatively short, as most are. Assume we want to determine the current uptime for the host:

[student@studentvm1 ~]$ up<Tab><Tab>
update-alternatives      updatedb                 update-mime-database
update-ca-trust          update-desktop-database  update-pciids
update-crypto-policies   update-gtk-immodules     update-smart-drivedb
upower                   uptime
[student@studentvm1 ~]$ up

We can see several commands that begin with "up", and we can also see that typing one more letter, "t", will complete enough of the uptime command that the rest will be unique:

[student@studentvm1 ~]$ uptime
 07:55:05 up 1 day, 10:01,  7 users,  load average: 0.00, 0.00, 0.00

The completion facility only completes the command, directory, or file name when the remaining text string needed is unequivocally unique. Tab completion works for commands, some sub-commands, file names, and directory names. I find that completion is most useful for completing directory and file names, which tend to be longer, and a few of the longer commands and some sub-commands.

Many Linux commands are so short already that using the completion facility can actually be less efficient than typing the command. Short Linux command names are quite in keeping with being a lazy SysAdmin. So it just depends on whether you find it more efficient or consistent to use completion on short commands. Once you learn which commands are worthwhile for Tab completion and how much you need to type, you can use those that you find helpful.

Exploring files

The commands we will be exploring in this next experiment are all related to creating and manipulating files as objects.


EXPERIMENT 7-7

Perform this experiment as the student user. You should already be logged in to your Linux computer as the user student in the GUI and have an xfce4-terminal session open.

1. Open a new tab by selecting File from the terminal menu bar, and select Open Tab from the drop-down menu. The new tab will become the active one, and it is already logged in as the user student. An alternate and easy way to open a new tab in terminal is to right-click anywhere in the terminal window and select Open Tab from the pop-up menu.

2. Enter the pwd command to determine the present working directory (pwd). It should be /home/student as shown here:

[student@studentvm1 ~]$ pwd
/home/student
[student@studentvm1 ~]$

3. If the pwd is not your home directory, change to your home directory using the cd command without any options or arguments.

4. Let's create some new files like you did as root in an earlier project. Use the following commands to create some files:

[student@studentvm1 ~]$ touch newfile.txt
[student@studentvm1 ~]$ df -h > diskusage.txt

5. Use the command ls -lah to display a long list of all files in your home directory and display their sizes in human-readable format. Note that the time displayed on each file is the mtime, which is the time the file or directory was last modified. There are a number of "hidden" files that have a dot (.) as the first character of their names. Use ls -lh if you don't need to see all of the hidden files.

6. The command touch dmesg2.txt changes all of the times for that file:

[student@studentvm1 ~]$ touch dmesg2.txt
[student@studentvm1 ~]$ ls -lh
total 212K


drwxr-xr-x. 2 student student 4.0K Aug 18 17:10 Desktop
-rw-rw-r--. 1 student student 1.8K Sep  6 09:08 diskusage.txt
-rw-rw-r--. 1 student student  44K Sep  6 10:52 dmesg1.txt
-rw-rw-r--. 1 student student  44K Sep  6 10:54 dmesg2.txt
-rw-rw-r--. 1 student student  44K Sep  6 10:52 dmesg3.txt
-rw-rw-r--. 1 student student  44K Sep  6 10:52 dmesg.txt
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Documents
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Downloads
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Music
-rw-rw-r--. 1 student student    0 Sep  6 10:52 newfile.txt
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Pictures
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Public
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Templates
drwxr-xr-x. 2 student student 4.0K Aug 18 10:21 Videos
[student@studentvm1 ~]$

7. Enter the commands ls -lc and ls -lu to view the ctime (time the inode last changed) and atime (time the file was last accessed, i.e., used or its contents viewed), respectively.

8. Enter the command cat dmesg1.txt, but don't worry about the fact that the data spews off the screen. Now use the commands ls -l, ls -lc, and ls -lu to again view the dates and times of the files, and notice that the file dmesg1.txt has had its atime changed. The atime of a file is the time that it was last accessed for reading by some program. Note that the ctime has also changed. Why? If you don't figure this out now, it will be covered later, so no worries.

9. Enter stat dmesg1.txt to display a complete set of information about this file, including its [acm]times, its size, permissions, the number of disk data blocks assigned to it, its ownership, and even its inode number. We will cover inodes in detail in a later session:

[student@studentvm1 ~]$ stat dmesg1.txt
  File: dmesg1.txt
  Size: 44297       Blocks: 88         IO Block: 4096   regular file
Device: fd07h/64775d     Inode: 213     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ student)   Gid: ( 1000/ student)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2018-09-06 10:58:48.725941316 -0400


Modify: 2018-09-06 10:52:51.428402753 -0400
Change: 2018-09-06 10:52:51.428402753 -0400
 Birth: -
[student@studentvm1 ~]$

Notice that the stat command displays the file's timestamps to nanosecond precision, as shown by the nine-digit fractional seconds. This has been the case since Fedora 14. The reason for the change is that the previous granularity of full seconds was not fine enough for high-speed, high-volume transaction-based environments in which the sequence of transaction timestamps matters.
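stat's -c option (GNU coreutils) prints selected fields, which makes it easy to watch each of the three timestamps independently. A sketch in a scratch directory:

```shell
# Watch the three timestamps separately with stat's format option.
cd "$(mktemp -d)"              # work in a scratch directory
touch demo.txt
stat -c 'atime: %x' demo.txt   # last access
stat -c 'mtime: %y' demo.txt   # last data modification
stat -c 'ctime: %z' demo.txt   # last metadata (inode) change
chmod 600 demo.txt             # changes metadata only ...
stat -c 'mtime: %y  ctime: %z' demo.txt   # ... so ctime moves, mtime does not
```

This also demonstrates the answer to the "Why did ctime change?" question above: any inode change, including a permissions change, updates ctime.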

Note  The /tmp directory is readable and writable by all users. This makes it a good place to share files temporarily, but that can also make it a security issue.

10. Perhaps you were curious – that is a good thing – and repeated step 8 of this experiment multiple times, in which case you would have noticed that the atime did not change after the first cat command accessed the file content. This is because the file content is now in cache and does not need to be accessed again to read it. Use the following commands to change the content, and then stat the file to view the results:

[student@studentvm1 ~]$ echo "hello world" >> dmesg1.txt ; cat dmesg1.txt ; stat dmesg1.txt

11. Move the file dmesg3.txt to the /tmp directory with the command mv dmesg3.txt /tmp. Use the ls command in both the current directory and the /tmp directory to verify that the file has been moved.

12. Enter the command rm /tmp/dmesg3.txt to delete the file, and use the ls command to verify that it has been deleted.

This experiment explored creating, copying, and moving files. It also introduced some tools that expose metadata about files.

More commands

There are some additional commands that you will find useful.


EXPERIMENT 7-8

Perform this experiment as the student user. Start by looking at what happens when a command displays too much data and it scrolls off the top of the screen.

1. The dmesg command displays the messages generated by Linux during the boot process. Enter the command dmesg and watch the output quickly scroll off the screen. There is a lot of data there that could be missed.

2. Enter the dmesg | less command. You should see the top of the output from the dmesg command. At the bottom of the terminal, you should see a colon and the cursor, as in the following example:

:■

To see a single new line at the bottom of the screen, press the Enter key.

3. Press the Space bar to see a whole new page of output from the command.

4. You can also use the Up and Down arrow keys to move one line at a time in the respective direction. The Page Up and Page Down keys can be used to move up or down a page at a time. Use these four keys to navigate the output stream for a few moments. You will see (END) at the bottom left of the screen when the end of the data stream has been reached.

5. You can also specify a line number and use the G key to "Goto" the specified line number. The following entry will go to line 256, which will display at the top of the terminal:

256G

6. Capital G without a line number takes you to the end of the data stream:

G

7. Lowercase g takes you to the beginning of the data stream:

g

8. Press the q key to quit and return to the command line. The movement commands in less are very similar to those of vim, so this should be familiar.

Chapter 7

Using theLinux Command Line

Time and date are important, and the Linux date and cal commands provide some interesting capabilities.

9. Enter the date command to display today's date:

[student@studentvm1 ~]$ date
Sun Sep 23 15:47:03 EDT 2018
[student@studentvm1 ~]$

10. Enter the cal command to display a calendar for the current month:

[student@studentvm1 ~]$ cal
   September 2018
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30
[student@studentvm1 ~]$

11. Enter the following command to display a calendar for the entire year of 1949:

[student@studentvm1 ~]$ cal 1949
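The date command's + format strings, and GNU date's -d option as shipped with Fedora, let you extract individual fields or ask cal-style questions about arbitrary dates. A brief sketch:

```shell
# Extract fields from the current date, or query another date entirely.
date +%Y-%m-%d             # ISO-style date, e.g., 2018-09-23
date +%A                   # name of today's weekday
date -d "1948-09-01" +%A   # GNU extension: Sept 1, 1948 fell on a Wednesday
```

The -d option accepts many free-form date strings; see man date for the full list of format sequences.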

12. Use the command cat /etc/passwd | less to display the contents of the password file. Hint: it does not actually contain any passwords. After browsing around a bit, quit from less.

13. Enter the following command to generate a data stream and pipe the results through the wc (word count) command to count the lines, words, and characters in the data stream:

[student@studentvm1 ~]$ cat /etc/services | wc
  11473  63130 692241
[student@studentvm1 ~]$


This shows that the wc command counted 11,473 lines, 63,130 words, and 692,241 characters in the data stream. The numbers in your result should be the same or very close. The services file is a list of the standard assigned and recognized ports used by various network services to communicate between computers.

14. The wc command can be used on its own. Use wc -l /etc/services to count the lines in that file. That option is a lowercase L, for "line."
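wc also has single-purpose options that count just one thing at a time. A sketch using a small sample file instead of /etc/services:

```shell
# Count lines, words, or bytes individually with wc.
printf 'one two\nthree\n' > sample.txt
wc -l sample.txt   # lines only
wc -w sample.txt   # words only
wc -c sample.txt   # bytes only
```

Reading from STDIN (wc -l < sample.txt) prints just the number without the file name, which is convenient in scripts.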

Command recall and editing

Lazy admins don't like typing. We especially don't like repetitive typing, so we look for ways to save time and keystrokes. Using the Bash shell history can help do that. The history command displays the last 1,000 commands issued from the command line. You can use the Up and Down arrow keys to scroll through that history on the command line and then execute the same or modified commands with no or minimal retyping.

Command-line editing can make entering lots of similar commands easier. Previous commands can be located by using the Up arrow key to scroll back through the command history. Then some simple editing can be performed to modify the original command. The Left and Right arrow keys move through the command being edited, the Backspace key deletes characters, and simply typing completes the revised command.
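The history list can also be driven from a script rather than interactively. This is a sketch using Bash's history builtin; note that history is off by default in non-interactive shells, so it must be enabled first:

```shell
# Manage the in-memory history list programmatically.
set -o history                        # enable history (on by default interactively)
history -s "df -h > diskusage.txt"    # append an entry without executing it
history -s "cat diskusage.txt"
history | tail -n 2                   # show the two most recent entries
```

Interactively, the same list is what the Up arrow and the !XXX expansion used in the next experiment draw from.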

EXPERIMENT 7-9

Start this experiment as the student user; we will switch to root partway through. In this experiment we look at using the Bash history, command-line recall, and editing the recalled command line.

1. Enter the history command to view the current command history:

[student@studentvm1 ~]$ history
    1  su
    2  poweroff
    3  su
    4  ls -la


    5  clear
    6  w
    7  who
    8  whoami
    9  id
   10  ksh
   11  exit
   12  infor core-utils
   13  info core-utils
   14  info coreutils
   15  info utils-linux
   16  info utilslinux
   17  info utils
   18  info coreutils
   19  ls -la
   20  tty
   21  stty
<snip>
  220  hwclock --systohc -v
  221  cd /root
  222  vgs
  223  less /etc/sudoers
  224  cd /tmp/testdir1
  225  ll
  226  tree
  227  vim ascii-program.sh
<snip>
  257  dnf list installed
  258  dnf list installed | wc
  259  dnf list available | wc
  260  dnf list available
  261  dnf info zorba
  262  dnf info zipper
  263  history
[student@studentvm1 ~]$

2. Use the up arrow key to scroll through the history on the command line.


3. When you find a nondestructive command, like one of the many ls commands that should be in the history, just press the Enter key to issue that command again.

4. Use the history command to view the history again. Pick a command you want to execute again, and enter the following command, where XXX is the number of that command. Then press the Enter key:

[student@studentvm1 ~]$ !XXX

5. Switch to a root terminal session to perform the rest of this experiment.

6. Change the PWD to /var/log and do a listing of the files there. You will see, among others, a file named boot.log. We will use this file for some of the next tasks.

7. Use the cat command to print the contents of the boot.log file to the screen:

[root@studentvm1 log]# cat boot.log

8. Count the lines in the boot.log file. Use the Up arrow key to return to the previous command. The changes to the command are added at the end, so just type until the command looks like this:

[root@studentvm1 log]# cat boot.log | wc

9. Now view the lines that have the word "kernel" in them. Return to the previous command using the Up arrow key. Backspace to remove "wc" but leave the pipe (|). Add the grep command, which we will cover in more detail in Chapter 9, to show only those lines containing "kernel":

[root@studentvm1 log]# cat boot.log | grep kernel

10. But what if some lines contain "Kernel" with an uppercase K? Return to the last command, and use the Left arrow key to move the cursor to the space between "grep" and "kernel"; then add -i (ignore case) so the command looks like this:

[root@studentvm1 log]# cat boot.log | grep -i kernel

11. Edit that last command to add | wc to the end to count the total lines with the word “kernel” in both upper- and lowercases.
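The pipelines above can be shortened, since grep reads files directly and can count matching lines itself. A sketch using a small sample file in place of boot.log:

```shell
# grep reads files directly; -c counts matching lines without needing wc.
printf 'kernel: a\nKernel: b\nuser: c\n' > boot_sample.txt   # sample data
grep -i kernel boot_sample.txt | wc -l   # count via a pipeline
grep -ic kernel boot_sample.txt          # same count in a single command
```

Both forms print 2 for this sample: the case-insensitive match catches "kernel" and "Kernel" but not "user".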


Although using the CLI history as in these examples may seem a bit trivial, if you have to repeat some very long and complex commands, it can save a lot of typing – and perhaps some mistyping, which can be even more frustrating.

Chapter summary

I hope you can see from these simple examples just a little of the vast power available to the SysAdmin when using the command line. In this chapter you have discovered that Linux provides a large number of methods to access the command line and perform your work as a SysAdmin. You can use the virtual consoles and any of a number of different terminal emulators and shells. You can combine those with the screen program to further enhance the flexibility you have at the command line. We have also explored a number of important Linux commands and learned how to recall and edit commands from the Bash history.

The examples in this chapter are informative in themselves, but they are also just the beginning. As we proceed through this course, you will encounter many ways in which the power and flexibility of the command line is enhanced by combining the many options discussed in this chapter.

Exercises

Complete the following exercises to finish this chapter:

1. Why does the Bash shell use different characters to denote root and non-root sessions, that is, $ and #?

2. Why do you think that there are so many different shells available for Linux?

3. If you already have a favorite terminal emulator, how does it compare to the Xfce terminal emulator, and which features of each do you prefer?

4. What is the function of any terminal emulator?

5. If you prefer a shell other than Bash, which one and why?


6. What command would you use to temporarily switch to the tcsh shell?

7. How does SSH differ from virtual consoles and terminal emulators?

8. Can an unprivileged user such as student display the contents of the /var/log/messages file? Why or why not – from a technical perspective rather than that of an architectural design decision?

9. What command would you use to return the PWD to the previous PWD?

10. What do the last two entries of the student user's PATH tell you?

11. Can the cat command be used to list the contents of more than one file at a time?

12. If you want to repeat the previous command, how would you do that without typing it in again?

13. How can you list all of the commands previously issued at the command line?


CHAPTER 8

Core Utilities

Objectives

In this chapter you will learn

• Some history of the GNU core utilities
• Some history of the util-linux utilities
• How to use some of the basic core utilities

I have recently been doing research for some articles and books I am writing – yes, this one among others – and the GNU core utilities keep showing up. All SysAdmins use these utilities regularly, pretty much without thinking about them. There is another set of basic utilities, util-linux, which we should also look at because they are also important to Linux. Together, these two sets of utilities comprise many of the most basic tools the Linux system administrator uses to complete everyday tasks. These tasks include management and manipulation of text files, directories, data streams, various types of storage media, process controls, filesystems, and much more. The primary functions of these tools allow SysAdmins to perform many of the basic tasks required to administer a Linux computer. These tools are indispensable; without them, it is not possible to accomplish any useful work on a Linux computer.

GNU coreutils

To understand the origins of the GNU core utilities, we need to take a short trip in the Wayback Machine to the early days of Unix at Bell Labs. Unix was originally written so that Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna could continue with something they had started while working on a large multitasking and multiuser computer project called Multics. That little something was a game called "Space Travel."


As is true today, it always seems to be the gamers who drive computing technology forward. This new operating system was much more limited than Multics – only two users could log in at a time – so it was called Unics. The name was later changed to Unix.

Over time, Unix turned out to be such a success that Bell Labs began essentially giving it away to universities, and later to companies, for the cost of the media and shipping. Back in those days, system-level software was shared between organizations and programmers as they worked to achieve common goals within the context of system administration.

Eventually the PHBs at AT&T decided that they should start making money on Unix and began using more restrictive – and expensive – licensing. This was taking place at a time when software in general was becoming more proprietary, restricted, and closed. It was becoming impossible to share software with other users and organizations.1

Some people did not like this and fought it with free software. Richard M. Stallman, aka RMS, led a group of rebels who were trying to write an open and freely available operating system that they called the "GNU Operating System." This group created what would become the GNU core utilities2 but has not yet produced a viable kernel.

When Linus Torvalds first began working on and compiled the Linux kernel, he needed a set of very basic system utilities to even begin to perform marginally useful work. The kernel does not provide the commands themselves or any type of command shell such as Bash; it is useless by itself. So Linus used the freely available GNU core utilities and recompiled them for Linux. This gave him a complete, though quite basic, operating system. These commands were originally three separate collections – fileutils, shellutils, and textutils – which were combined into the GNU core utilities in 2002.

EXPERIMENT 8-1

This experiment can be performed as the student user. You can learn about all of the individual programs that comprise the GNU utilities with the info command. If you do not already have a terminal emulator open on the Xfce desktop, please open one now:

[student@studentvm1 ~]$ info coreutils
Next: Introduction,  Up: (dir)

1. Wikipedia, History of Unix, https://en.wikipedia.org/wiki/History_of_Unix
2. GNU Operating System, Core Utilities, www.gnu.org/software/coreutils/coreutils.html


GNU Coreutils
*************

This manual documents version 8.29 of the GNU core utilities, including the standard programs for text and file manipulation.

Copyright © 1994-2017 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

* Menu:

* Introduction::                 Caveats, overview, and authors
* Common options::               Common options
* Output of entire files::       cat tac nl od base32 base64
* Formatting file contents::     fmt pr fold
* Output of parts of files::     head tail split csplit
* Summarizing files::            wc sum cksum b2sum md5sum sha1sum sha2
* Operating on sorted files::    sort shuf uniq comm ptx tsort
* Operating on fields::          cut paste join
* Operating on characters::      tr expand unexpand
* Directory listing::            ls dir vdir dircolors
* Basic operations::             cp dd install mv rm shred
* Special file types::           mkdir rmdir unlink mkfifo mknod ln link readlink
* Changing file attributes::     chgrp chmod chown touch
* Disk usage::                   df du stat sync truncate
* Printing text::                echo printf yes
* Conditions::                   false true test expr
* Redirection::                  tee
* File name manipulation::       dirname basename pathchk mktemp realpath
* Working context::              pwd stty printenv tty
* User information::             id logname whoami groups users who
* System context::               date arch nproc uname hostname hostid uptime
* SELinux context::              chcon runcon


* Modified command invocation::  chroot env nice nohup stdbuf timeout
* Process control::              kill
* Delaying::                     sleep
* Numeric operations::           factor numfmt seq
* File permissions::             Access modes
* File timestamps::              File timestamp issues
* Date input formats::           Specifying date strings
* Opening the software toolbox:: The software tools philosophy
* GNU Free Documentation License:: Copying and sharing this manual
* Concept index::                General index

— The Detailed Node Listing —

-----Info: (coreutils)Top, 344 lines --Top---------------------------------------

The utilities are grouped by function to make specific ones easier to find. This page is interactive. Use the arrow keys on the keyboard to highlight the group you want more information on, and press the Enter key. Scroll down the list so that the block cursor is on the line "Working context::" and press Enter. The following page is displayed:

Next: User information,  Prev: File name manipulation,  Up: Top

19 Working context
******************

This section describes commands that display or alter the context in which you are working: the current directory, the terminal settings, and so forth. See also the user-related commands in the next section.

* Menu:

* pwd invocation::       Print working directory.
* stty invocation::      Print or change terminal characteristics.
* printenv invocation::  Print environment variables.
* tty invocation::       Print file name of terminal on standard input.

Now highlight the bottom line of the listed utilities and press Enter:

Prev: printenv invocation,  Up: Working context


19.4 'tty': Print file name of terminal on standard input
=========================================================

'tty' prints the file name of the terminal connected to its standard input. It prints 'not a tty' if standard input is not a terminal.

Synopsis:

     tty [OPTION]...

The program accepts the following option. Also see *note Common options::.

'-s'
'--silent'
'--quiet'
     Print nothing; only return an exit status.

Exit status:

     0  if standard input is a terminal
     1  if standard input is a non-terminal file
     2  if given incorrect arguments
     3  if a write error occurs

You can read the information about this utility. So now let's use it. If you don't already have a second terminal emulator open and ready, open a new one now – you might want to open a second tab in the existing xfce4-terminal emulator. This way you can see, or easily switch between, the Info page and the command line on which you will be working. Enter the following command in the second terminal:

[student@studentvm1 ~]$ tty
/dev/pts/52
[student@studentvm1 ~]$
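The -s option documented above is where tty earns its keep in scripts: it prints nothing and only sets the exit status. A sketch of branching on it:

```shell
# Branch on whether standard input is a terminal.
if tty -s; then
    echo "stdin is a terminal: $(tty)"
else
    echo "stdin is not a terminal (pipe, file, or cron job)"
fi
```

This lets a script suppress prompts or colorized output when it is being run from cron or with its input redirected.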

You can see we are getting essentially the same information as we did from the w and who commands, but in a format that shows the complete path to the device special file. This is useful when you need that information in a script because it is easier than writing code to extract the data needed from either of those other two commands.

To do some basic maneuvering in Info, use the following keys. A node is a page about a specific command or group of commands:


• p: Previous Info node in the menu sequence
• n: Next Info node in the menu sequence
• u: Up one menu layer
• l (lowercase L): Last visited node in history
• q: Quit the Info facility
• H: Help / exit help

Take some time to use the Info facility to look at a few of the core utilities. You have learned a bit about the GNU utilities in this experiment, and you have also received a quick tutorial in using the info utility to locate information about Linux commands. To learn more about using the Info facility, use the command info info. And – of course – all of these utilities can be found in the man pages, but the documentation in the Info facility is more complete.

There are 102 utilities in the GNU core utilities. They cover many of the functions necessary to perform basic tasks on a Unix or Linux host. However, many basic utilities are still missing. For example, the mount and umount commands are not in this group. Those and many of the other commands that are not in the GNU coreutils can be found in the util-linux collection.
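You can check for yourself which collection a given command ships in. A sketch: locate the binary with command -v, and on an RPM-based system such as Fedora ask rpm which package owns it (the rpm lines are illustrative and assume an rpm-based host):

```shell
# Locate the binaries for a util-linux command and a coreutils command.
command -v mount cat
# On an rpm-based system such as Fedora (illustrative, not run here):
#   rpm -qf "$(command -v mount)"   # reports a util-linux package
#   rpm -qf "$(command -v cat)"     # reports a coreutils package
```

Debian-family systems can answer the same question with dpkg -S instead of rpm -qf.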

util-linux

The util-linux3 package contains many of the other common commands that SysAdmins use. These utilities are distributed by the Linux Kernel Organization. As you can see from the following list, they cover many aspects of Linux system administration:

agetty        fsck.minix    mkfs.bfs      setpriv
blkdiscard    fsfreeze      mkfs.cramfs   setsid
blkid         fstab         mkfs.minix    setterm
blockdev      fstrim        mkswap        sfdisk
cal           getopt        more          su
cfdisk        hexdump       mount         sulogin

3. Wikipedia, util-linux, https://en.wikipedia.org/wiki/Util-linux


chcpu         hwclock       mountpoint    swaplabel
chfn          ionice        namei         swapoff
chrt          ipcmk         newgrp        swapon
chsh          ipcrm         nologin       switch_root
colcrt        ipcs          nsenter       tailf
col           isosize       partx         taskset
colrm         kill          pg            tunelp
column        last          pivot_root    ul
ctrlaltdel    ldattach      prlimit       umount
ddpart        line          raw           unshare
delpart       logger        readprofile   utmpdump
dmesg         login         rename        uuidd
eject         look          renice        uuidgen
fallocate     losetup       reset         vipw
fdformat      lsblk         resizepart    wall
fdisk         lscpu         rev           wdctl
findfs        lslocks       rtcwake       whereis
findmnt       lslogins      runuser       wipefs
flock         mcookie       script        write
fsck          mesg          scriptreplay  zramctl
fsck.cramfs   mkfs          setarch

Note that some of these utilities have been deprecated and will likely fall out of the collection at some point in the future. The Wikipedia reference for util-linux has some information on many of the utilities, and the man pages can be used to learn their details, but there are no corresponding Info pages for these utilities. Notice that mount and umount are part of this group of commands. Let's look at a couple of these utilities just to see what they are about.

EXPERIMENT 8-2

Do this experiment as the student user. Let's start with the cal command, which generates a calendar. Without any options, it shows the current month with today's date highlighted:


[student@studentvm1 ~]$ cal
   September 2018
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30
[student@studentvm1 ~]$

Using the -3 option prints three months with the current month in the middle:

[student@studentvm1 ~]$ cal -3
      August 2018         September 2018          October 2018
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
          1  2  3  4                     1      1  2  3  4  5  6
 5  6  7  8  9 10 11   2  3  4  5  6  7  8   7  8  9 10 11 12 13
12 13 14 15 16 17 18   9 10 11 12 13 14 15  14 15 16 17 18 19 20
19 20 21 22 23 24 25  16 17 18 19 20 21 22  21 22 23 24 25 26 27
26 27 28 29 30 31     23 24 25 26 27 28 29  28 29 30 31
                      30
[student@studentvm1 ~]$

Using a year as an argument displays a calendar of that entire year:

[student@studentvm1 ~]$ cal 1948
                               1948

       January               February                 March
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
             1  2  3   1  2  3  4  5  6  7      1  2  3  4  5  6
 4  5  6  7  8  9 10   8  9 10 11 12 13 14   7  8  9 10 11 12 13
11 12 13 14 15 16 17  15 16 17 18 19 20 21  14 15 16 17 18 19 20
18 19 20 21 22 23 24  22 23 24 25 26 27 28  21 22 23 24 25 26 27
25 26 27 28 29 30 31  29                    28 29 30 31

        April                   May                   June
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
             1  2  3                     1         1  2  3  4  5
 4  5  6  7  8  9 10   2  3  4  5  6  7  8   6  7  8  9 10 11 12
11 12 13 14 15 16 17   9 10 11 12 13 14 15  13 14 15 16 17 18 19
18 19 20 21 22 23 24  16 17 18 19 20 21 22  20 21 22 23 24 25 26
25 26 27 28 29 30     23 24 25 26 27 28 29  27 28 29 30
                      30 31

        July                  August               September
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
             1  2  3   1  2  3  4  5  6  7            1  2  3  4
 4  5  6  7  8  9 10   8  9 10 11 12 13 14   5  6  7  8  9 10 11
11 12 13 14 15 16 17  15 16 17 18 19 20 21  12 13 14 15 16 17 18
18 19 20 21 22 23 24  22 23 24 25 26 27 28  19 20 21 22 23 24 25
25 26 27 28 29 30 31  29 30 31              26 27 28 29 30

       October               November              December
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
                1  2      1  2  3  4  5  6            1  2  3  4
 3  4  5  6  7  8  9   7  8  9 10 11 12 13   5  6  7  8  9 10 11
10 11 12 13 14 15 16  14 15 16 17 18 19 20  12 13 14 15 16 17 18
17 18 19 20 21 22 23  21 22 23 24 25 26 27  19 20 21 22 23 24 25
24 25 26 27 28 29 30  28 29 30              26 27 28 29 30 31
31
[student@studentvm1 ~]$

Use the command man cal to find additional information about the cal command. I do use the cal command, so you might find it useful, too.

I use some commands to find information about the hardware – real or virtual – to which I am logged in. For example, it can be useful for a SysAdmin to know about the CPU:

[student@studentvm1 ~]$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz
Stepping:            4
CPU MHz:             2807.986
BogoMIPS:            5615.97
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            22528K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase avx2 invpcid rdseed clflushopt

The lscpu command provides a great deal of information about the installed CPU(s). Some of this information is very useful when writing scripts that need to know it. Note that VirtualBox sees most hardware and passes on a virtualized version that looks just the same as the physical device.

The lsblk command – list block devices, which are usually disk drives – is very useful in helping me to understand the structure of the partitions, volume groups, and physical and logical volumes of disks using logical volume management (LVM):


[student@studentvm1 ~]$ lsblk -i
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   60G  0 disk
|-sda1                                 8:1    0    1G  0 part /boot
`-sda2                                 8:2    0   59G  0 part
  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm
  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm
  |-fedora_studentvm1-swap           253:4    0    8G  0 lvm  [SWAP]
  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr
  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home
  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var
  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp
sr0                                   11:0    1 1024M  0 rom
[student@studentvm1 ~]$

I used the -i option to produce the results in ASCII format because it transfers better to a document like this, but you should also try the command without any options to get a version that looks a little nicer on the display.

The df command (from the original GNU core utilities) shows similar data but with somewhat different detail:

[student@studentvm1 ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                            2.0G     0  2.0G   0% /dev
tmpfs                               2.0G     0  2.0G   0% /dev/shm
tmpfs                               2.0G  1.2M  2.0G   1% /run
tmpfs                               2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/fedora_studentvm1-root  2.0G   49M  1.8G   3% /
/dev/mapper/fedora_studentvm1-usr    15G  3.8G   11G  27% /usr
/dev/sda1                           976M  185M  724M  21% /boot
/dev/mapper/fedora_studentvm1-tmp   4.9G   21M  4.6G   1% /tmp
/dev/mapper/fedora_studentvm1-var   9.8G  494M  8.8G   6% /var


/dev/mapper/fedora_studentvm1-home  2.0G  7.3M  1.8G   1% /home
tmpfs                               395M  8.0K  395M   1% /run/user/1000
tmpfs                               395M     0  395M   0% /run/user/0

I used the -h option to show the disk space in easily human-readable numbers like GB and MB. Note that the names of commands that list things tend to start with "ls", which in Linux-speak usually means "list."

There are several temporary filesystems shown in the output of both the df and lsblk commands. We will talk about some temporary filesystems later in this course. We will also explore the logical volume manager (LVM) that creates entries like /dev/mapper/fedora_studentvm1-tmp.
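As a small aside, df also accepts a path argument and then reports only the filesystem holding that path, while its coreutils sibling du measures usage by directory tree rather than by filesystem. A quick sketch:

```shell
# Per-filesystem vs. per-directory disk usage.
df -h /tmp                        # only the filesystem that contains /tmp
du -sh /tmp 2>/dev/null || true   # total size of files under /tmp
                                  # (stderr silenced: unreadable files are expected
                                  #  for an unprivileged user)
```

The path form of df is handy in scripts that need to check free space on one specific mount before writing to it.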

Chapter summary

These two collections of basic Linux utilities, the GNU core utilities and util-linux, together provide the tools required to administer a basic Linux system. As I researched this chapter, I found several interesting utilities in this list that I never knew about. Many of these commands are seldom needed, but when you do need them, they are indispensable. Between these two collections, there are over 200 Linux utilities. The typical Linux distribution has many more commands, but these are the ones that are needed to manage the most basic functions of the typical Linux host.

We explored a couple of commands from each of these utility packages, and we will definitely encounter more as we proceed through this course. It makes much more sense to cover only the utilities that we will encounter and use the most rather than to try to learn all of these commands.

Just a note about terminology so that we are working with the same understanding: from this point on in this course, when I say core utilities, I mean both sets of these utilities. If I intend to refer to either set individually, I will name it explicitly.


Chapter 8

Core Utilities

Exercises

Complete these exercises to finish this chapter:

1. What is the overall purpose of these two groups of core utilities?
2. Why were the GNU core utilities important to Linus Torvalds?
3. Which core utility would you use to determine how much space is left in each filesystem?
4. What is the model name of the CPU in your VM?
5. How many CPUs does your physical host have, and how many are allocated to the VM?
6. Does allocating a CPU to the VM make it unavailable to the host machine?


CHAPTER 9

Data Streams

Objectives

In this chapter you will learn

•	How text data streams form the architectural basis for the extreme flexibility of the Linux command line

•	How to generate streams of text data

•	How to use pipes, STDIO, and many of the core utilities to manipulate text data streams

•	How to redirect data streams to and from files

•	The basic usage of some of the special device files in the /dev directory

Data streams as raw materials

Everything in Linux revolves around streams of data, particularly text streams. Data streams are the raw materials upon which the core utilities and many other CLI tools perform their work. As its name implies, a data stream is a stream of data, specifically text data, being passed from one file, device, or program to another using Standard Input/Output (STDIO).

This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file.

I use the term "transform" in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the SysAdmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file.

© David Both 2020
D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_9
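As a minimal sketch of the idea, the pipeline below generates a small text stream, passes it through two transformer programs, and finally redirects the result to a file. The file name /tmp/fruit.txt is arbitrary, chosen just for this demonstration.

```shell
# printf generates a three-line text stream; sort and head transform it;
# the > operator redirects the final stream into a file.
printf 'banana\napple\ncherry\n' | sort | head -n 2 > /tmp/fruit.txt

# The file now holds the first two lines of the sorted stream.
cat /tmp/fruit.txt
```

The cat command shows "apple" and "banana": sort rearranged the stream, head kept only its first two lines, and redirection captured the result.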


The standard term, "filters," implies something with which I don't agree. By definition, a filter is a device or a tool that removes something, just as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don't want to breathe.

Although they do sometimes filter out unwanted data from a stream, I much prefer the term "transformers" because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and much more. Feel free to use whichever term you prefer, but I prefer transformers.

Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the SysAdmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As has already been mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device1 connected to a computer.

The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the core utilities are transformer programs and use STDIO.

I recently Googled "data stream," and most of the top hits are concerned with processing huge amounts of streaming data in single entities such as streaming video and audio or financial institutions processing streams consisting of huge numbers of individual transactions.
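To make the "transformer" idea concrete, here is a small pipeline in which each stage does more than merely filter: seq generates the data, sed adds text to every line, and sort -r rearranges the whole stream.

```shell
# seq emits the numbers 1 through 3; sed prefixes each line with "line "
# (adding data, not removing it); sort -r reverses the order of the lines.
seq 3 | sed 's/^/line /' | sort -r
```

The output is the three lines in reverse order: "line 3", "line 2", "line 1". Nothing was filtered out; every stage transformed the stream in some way.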
This is not what we are talking about here, although the concept is the same, and a case could be made that current applications use the stream processing functions of Linux as the model for processing many types of data. In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a SysAdmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this chapter we will explore data streams: what they are, how to create them, and a little bit about how to use them.

1. In Linux systems all hardware devices are treated as files. More about this in Chapter 3 of Volume 2.
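Because every device is a file, device files can stand in for regular files in a stream. A brief sketch using two of the special device files in /dev: /dev/null silently discards anything written to it, and /dev/zero supplies an endless stream of zero bytes.

```shell
# Anything redirected to /dev/null simply disappears.
echo "this text is discarded" > /dev/null

# /dev/zero produces zero bytes on demand; take eight of them
# and display them in hexadecimal with od.
head -c 8 /dev/zero | od -An -tx1
```

The od command prints eight 00 bytes, confirming that /dev/zero delivers exactly the stream of zeros it promises.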


Text streams – A universal interface

The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux.

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

—Doug McIlroy, Basics of the Unix Philosophy2,3

STDIO was developed by Ken Thompson4 as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device.

STDIO file handles

There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program – at least, of those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively.

STDIN, File handle 0, is Standard Input, which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done.
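A brief sketch of redirecting STDIN from a file rather than the keyboard. The file /tmp/nums.txt is just a scratch file created for the demonstration.

```shell
# Create a small unsorted file to read from.
printf '3\n1\n2\n' > /tmp/nums.txt

# The < operator attaches the file to sort's STDIN (file handle 0),
# so sort reads from the file instead of waiting for keyboard input.
sort < /tmp/nums.txt
```

sort prints 1, 2, and 3, one per line, exactly as if those numbers had been typed at the keyboard.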

2. Eric S. Raymond, The Art of Unix Programming, www.catb.org/esr/writings/taoup/html/ch01s06.html
3. Linuxtopia, Basics of the Unix Philosophy, www.linuxtopia.org/online_books/programming_books/art_of_unix_programming/ch01s06.html
4. Wikipedia, Ken Thompson, https://en.wikipedia.org/wiki/Ken_Thompson


STDOUT, File handle 1, is Standard Output, which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing.

STDERR is associated with File handle 2. The data stream for STDERR is also usually sent to the display. If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that even when the data stream itself is not displayed on the terminal, STDERR is, so the user will still see any errors resulting from execution of the program. STDERR can also be redirected to the same file, or passed on to the next transformer program in a pipeline.

STDIO is implemented in a standard C library header file, stdio.h, which can be included in the source code of programs so that it can be compiled into the resulting executable.
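The separation of the two output streams can be sketched with a command that produces both normal output and an error. Here ls is given one path that exists and one that does not; the path /no/such/file and the two scratch file names are arbitrary choices for the demonstration.

```shell
# STDOUT (file handle 1) goes to one file, STDERR (file handle 2) to
# another. ls exits non-zero because one argument failed, so `|| true`
# keeps a strict shell (set -e) from stopping at this line.
ls /etc/hosts /no/such/file > /tmp/stdout.txt 2> /tmp/stderr.txt || true

cat /tmp/stdout.txt   # the successful listing: /etc/hosts
cat /tmp/stderr.txt   # the error message about /no/such/file
```

Had only > been used, the error message would still have appeared on the terminal, which is exactly the behavior described above.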

Preparing a USB thumb drive

You can perform some of the following experiments safely with a USB thumb drive that is not being used for anything else. I found an 8 GB thumb drive that I have no other current use for, so I set it up to use with these experiments. You can use any size USB stick that you have on hand; a small one, even just a few MB in size, is perfectly fine. The thumb drive you use should have a VFAT partition on it; unless you have intentionally formatted the device with another type of filesystem, it should meet the requirements for the experiments in this chapter.

PREPARATION 9-1

Prepare the USB device for use with some of these experiments.

1. If a terminal session as root is not already open, open one on the virtual machine that you will be using for these experiments and log in as root.

2. Insert the USB device in an available USB slot on your physical host computer.

3. At the top of the VM window, in the menu bar, click Devices ➤ USB. Locate the specific device you just inserted. It will probably look a lot like Figure 9-1 as a "generic mass storage device." Another of my devices was identified as a "USB Disk."


4. Click the device, and within a moment or two, a new disk device icon should appear on your VM desktop. This is how you know that you found the correct device.
