ONE HOUR Linux SESSIONS: 2014-2019
sessions in reverse chronological order, from 2019 going back to 2014

2019




Remember: SAW - simple always works
String too many special characters together and most of us Linux lovers won't remember or understand them.
The goal here is to understand and remember.
The intended audience for this material is power users, Sys Admins, Analysts, Engineers, and Web Masters.
Programmers may pick up a few tricks, but anything they share will likely far exceed our limited comprehension buffers.

NOTE: the LinuxMeister dot net domain had expired but has been RENEWED; the files remain accessible using this path: http://johnmeister.com/linux/ as well as through LinuxMeister.net!


The Art of Linux System Administration - on Amazon and O'Reilly Media (Designed to help prepare for the LPIC-2 exams.)

Session #151 - May 31, 2019 -

Session #151 Linux at Lunch - the joys of vim - 05/31/2019  
THIS SESSION:
"vim info"  - presented by one  of our regulars.

One of the resources listed is a "free" vim scripting book; I've included the HTML version on my website.
It is NOT a how-to book, but it has a lot of useful information in it:
 "Learn Vimscript the Hard Way" by Steve Losh
other vim info at:
  vim info





Session #150 - May 17, 2019 -

Session #150 Linux at Lunch - RANDOMIZATION - 05/17/2019  

THIS SESSION:
Randomization tools: creating random numbers for adding a prefix to files and updating id3tags using BASH and Perl. 

PROBLEM: MP3 files were playing in sequence on a couple of different players that had no random-play feature... 
 using $RANDOM at the command line worked, but not in a shell script... leading me to a Perl solution:
http://johnmeister.com/linux/Scripts/rename-files-with-random-number-suffix.sh.txt 
http://johnmeister.com/linux/Scripts/id3tool-change-title.html
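
As a rough illustration of the idea (a minimal sketch, not the linked script): $RANDOM is a bash built-in, so a script run under plain /bin/sh won't see it, which is one common reason it "works at the command line but not in a script". A bash-only version of the rename might look like:

    #!/bin/bash
    # sketch only: prefix each MP3 with a zero-padded random number so simple players shuffle the order
    for f in *.mp3; do
        prefix=$(printf '%05d' "$RANDOM")      # $RANDOM: bash built-in, 0-32767
        mv -- "$f" "${prefix}_${f}"
    done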


Session #149 - April 12, 2019 -

Session #149 Linux at Lunch - 04/12/2019  

THIS SESSION:
  1. Look through content on http://LinuxMeister.net
  2. working on a new "booklet": "Using Linux:BASICS" - basically a non-computer user guide to using Linux
     - install Linux (dual boot Linux Mint with Mate desktop on Windows 10)
     - show the MATE desktop settings, menus and customization
     - show the basic tools like Thunderbird, Firefox, LibreOffice or OpenOffice, Stellarium, DarkTable, and other free applications
     - will likely make the book "free" or set the price, which can be "free"
     - plan to use "Simply Linux:BASICS" as the starting point, removing a lot of content to keep it focused on use.

Session #148 - March 29, 2019 -

Session #148 Linux at Lunch - 03/29/2019  

THIS SESSION:
  1. update on id3tool - http://johnmeister.com/linux/Notes/id3tool-info.html
  2. migrating from LibreOffice-->OpenOffice at the command line
  3. editing colors in HTML using vi - http://johnmeister.com/linux/colors.html

id3tool process:
1) copy original mp3's into new directories, create ##_AAA, cd ##_AAA; ls >> idAAA ; vi idAAA
     cp -r ../ORIGINAL-MP3s/58_Hebrews/ 58_HEB ; cd 58_HEB ; ls >> id3heb
2) edit to "mv" from old long name to new short name
     vi id3heb
     :%s/.*/mv & 58_00_Heb_&/g
     # edit by removing the long previous name and add number for each track - using searches and replaces a tedious manual process...
3) execute mv ; ls >> idAAA ; edit id3tool info saving old info behind #
     sh ./id3heb       # (this renames all the files to the new, shorter file name)
     ls >> id3heb      # appending to the file
     vi id3heb
     :24,$s/.*/# &/g   # first select all the original "mv" lines to the bottom, then comment them out
     :1,23s/.*/id3tool -t "&" -c Z -y 2018 -G "Speech" -a "Hebrews - Letters" -n "bridgechristianfellowshipcom" -r "Rick Crawford" &/g
4) execute id3tool info
     sh ./id3heb       # (this updates all the id3tag information in files)
     id3tool *.mp3     # check to see if it updated and didn't truncate, if it did, edit "id3heb" and execute again
5) mv id3tool script idAAA to ../../ORIGINAL-BCF-MP3s/NOTES/
     mv id3heb ../../ORIGINAL-MP3s/NOTES/   # save a copy of each script for the 66 directories to help identify proper time and sequence
6) scp -r NEW_DIR/ server:/web/bible/mp3 ; scp NEW_DIR/*.mp3 server:/web/BIBLE/MP3-ALL/   # one method was to ssh and cp -r
7) repeat for all 66 books; finished work: http://johnmeister.com/BIBLE/BCF-MP3-ALL/
     --> id3tool 58_11_Heb_9-10v18.mp3
         Filename: 58_11_Heb_9-10v18.mp3
         Song Title: 58_11_Heb_9-10v18
         Artist: Rick Crawford
         Album: Hebrews - Letters
         Note: bridgechristianfellowshipcom
         Track: 11
         Year: 2018
         Genre: Speech (0x65)

migrating from LibreOffice to Apache's OpenOffice.org suite

NOTE: these steps are for any debian based Linux distro: Ubuntu, Mint or Debian, but with minor tweaks it'll work for all distros.
BEFORE:
    --> ls -al /usr/bin/soffice
        lrwxrwxrwx 1 root root 34 Nov 27 02:13 /usr/bin/soffice -> ../lib/libreoffice/program/soffice
AFTER:
    --> ls -al /usr/bin/soffice
        lrwxrwxrwx 1 root root 32 Oct 23 04:31 /usr/bin/soffice -> /opt/openoffice4/program/soffice
  1. sudo apt-get update
  2. sudo apt-get -y remove --purge libreoffice* libexttextcat-data* && sudo apt-get -y autoremove
  3. cp ~/bin/Apache_OpenOffice_4.1.6_Linux_x86-64_install-deb_en-US.tar.gz /tmp
       # NOTE: instructions called out an older version; found the current version; it was simple to download and copy to /tmp
       # file downloaded and copied to /tmp: Apache_OpenOffice_4.1.6_Linux_x86-64_install-deb_en-US.tar.gz
       # using wget:
       #   cd /tmp && wget http://downloads.sourceforge.net/project/openofficeorg.mirror/4.1.6/binaries/en-US/Apache_OpenOffice_4.1.6_Linux_x86-64_install-deb_en-US.tar.gz
  4. cd /tmp
  5. tar -xvf Apache_OpenOffice*.tar.gz # wildcards save typing...
  6. sudo dpkg -i en-US/DEBS/*.deb # using the debian packaging tool, if using SuSE, use zypper, Rh/Centos, yum, etc. Process is the same.
  7. sudo dpkg -i en-US/DEBS/desktop-integration/*.deb # this put the icons in the menu and made it all happy.

    editing colors in HTML using vi

    http://johnmeister.com/linux/colors.html
    for info on RGB/Hexadecimal values: https://www.rapidtables.com/web/color/RGB_Color.html
    Use sRGB (standard Red Green Blue color space) rather than Adobe's proprietary colors for most applications.
    sRGB is recommended by most photographers, especially for the web and monitors.
    see: https://photographylife.com/srgb-vs-adobe-rgb-vs-prophoto-rgb and https://kenrockwell.com/tech/adobe-rgb.htm
    RGB = Red Green Blue - the three primary colors.
    Red, green and blue use 8 bits each, with integer values from 0 to 255: 256*256*256 = 16,777,216 colors.
    From the wiki page on shades of white: the color Platinum is represented by the value #E5E4E2, i.e. R=229 G=228 B=226 (values range from 0 through 255).
    the color Red is FF0000
    the color Green is 00FF00 (actually comes out lime green in html!)
    the color Blue is 0000FF

    this is the html code that displays the table below

    <table border="5" bgcolor="FFFFFF">
    <tr> <td bgcolor="#FFCCCC"> Law </td> <td bgcolor="#FFCCCC"> #FFE5CC </td> <td bgcolor="#FFCCCC"> 255 </td> <td bgcolor="#FFCCCC"> 229 </td> <td bgcolor="#FFCCCC"> 204 </td></tr>
    <tr> <td bgcolor="#FFFFCC"> History </td> <td bgcolor="#FFFFCC"> #FFFFCC </td> <td bgcolor="#FFFFCC"> 255 </td> <td bgcolor="#FFFFCC"> 255 </td> <td bgcolor="#FFFFCC"> 204 </td></tr>
    <tr> <td bgcolor="#E5FFCC"> Wisdom </td> <td bgcolor="#E5FFCC"> #E5FFCC </td> <td bgcolor="#E5FFCC"> 229 </td> <td bgcolor="#E5FFCC"> 255 </td> <td bgcolor="#E5FFCC"> 204 </td></tr>
    <tr> <td bgcolor="#E9FFDB"> Prophets </td> <td bgcolor="#E9FFDB"> #E9FFDB </td> <td bgcolor="#E9FFDB"> 233 </td> <td bgcolor="#E9FFDB"> 255 </td> <td bgcolor="#E9FFDB"> 219 </td></tr>
    <tr> <td bgcolor="#CCFFE5"> Gospels </td> <td bgcolor="#CCFFE5"> #CCFFE5 </td> <td bgcolor="#CCFFE5"> 204 </td> <td bgcolor="#CCFFE5"> 255 </td> <td bgcolor="#CCFFE5"> 229 </td></tr>
    <tr> <td bgcolor="#CCFFFF"> Paul </td> <td bgcolor="#CCFFFF"> #CCFFFF </td> <td bgcolor="#CCFFFF"> 204 </td> <td bgcolor="#CCFFFF"> 255 </td> <td bgcolor="#CCFFFF"> 255 </td></tr>
    <tr> <td bgcolor="#CCE5FF"> Letters </td> <td bgcolor="#CCE5FF"> #CCE5FF </td> <td bgcolor="#CCE5FF"> 204 </td> <td bgcolor="#CCE5FF"> 229 </td> <td bgcolor="#CCE5FF"> 255 </td></tr>
    <tr> <td bgcolor="#EFCCFF"> Revelation </td> <td bgcolor="#EFCCFF"> #EFCCFF </td> <td bgcolor="#EFCCFF"> 239 </td> <td bgcolor="#EFCCFF"> 204 </td> <td bgcolor="#EFCCFF"> 255 </td></tr>
    </table>
    Law #FFE5CC 255 229 204
    History #FFFFCC 255 255 204
    Wisdom #E5FFCC 229 255 204
    Prophets #E9FFDB 233 255 219
    Gospels #CCFFE5 204 255 229
    Paul #CCFFFF 204 255 255
    Letters #CCE5FF 204 229 255
    Revelation #EFCCFF 239 204 255
    seeking colors that are easy to read, have a logical flow, and are pleasing to the eye... the current colors were picked in Excel.

Session #147 - March 22, 2019 -

Session #147 Linux at Lunch - 03/22/2019  
 
THIS SESSION:
After I wrote the Win10/BASH book we talked about last week, I continued on to finish the Linux book that was the starting point. 
 http://johnmeister.com/linux/Overview/ 
So, this week we will go through that book, "Simply Linux: BASICS" https://www.smashwords.com/books/view/705084 • Published: Feb. 19, 2017 • Words: 48,820 • Language: English • ISBN: 9781370361489
just as a note... the Smashwords royalties this month for all five books came out to $7.43... :)

Session #146 - March 15, 2019 -

Session #146 Linux at Lunch - 03/15/2019  
 
THIS SESSION:
Let's use BASH on Windows 10! 
Will go through the book on Using BASH on Windows 10.
https://www.smashwords.com/books/view/703463
Ubuntu BASH on Windows 10 performs much better than Cygwin. Comparing BASH on various systems, almost everything works the same, as I note in this table:
http://johnmeister.com/linux/FileSystems/Linux-bashrc-vim-differences.jpg

Session #145 - February 22, 2019 -

Session #145 Linux at Lunch - 02/22/2019  
 
THIS SESSION:
1) cal - calendar - using it with julian and mdays for calculating time and scheduling 
    cal 02 2019
    cal 2019
    cal -j 2019
2) find - using find to locate particular types of files 
     http://johnmeister.com/linux/Commands/find-examples.html
     http://johnmeister.com/linux/Commands/find-copy.html 
3) xargs - using xargs along with find to manage files found with find (a combined example follows item 5 below). 
     http://johnmeister.com/linux/Commands/xargs-some-examples.html
4) du - using disk usage to find directories with large files 
    --> find /home/luser/ -type d -exec du -sh {} \;  | grep -v ^0 | grep -v ^[0-9].0K
5) id3tool - tool to update metadata in MP3 and related files: titles, albums, year, author, etc. 
         http://johnmeister.com/linux/Commands/id3tool-command-info.html
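
As a combined, hedged example of items 2-4 above (the path and the 20M size threshold are made up for illustration), find, xargs and du can be chained to report the largest matching files:

    find /home/luser/Music -type f -name "*.mp3" -size +20M -print0 | xargs -0 du -h | sort -h | tail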

Session #144 - February 8, 2019 -

Session #144 Linux at Lunch - 02/08/2019  
 
THIS SESSION:

1) basics of disk and removable media and file systems: 
        - fdisk / parted (partitions) 
            http://johnmeister.com/linux/FileSystems/newdisk-setup.html
            http://johnmeister.com/linux/FileSystems/usingParted8TB.html
        - df (disk free)   
                 alias: alias dfa='df -h | grep sda' 
                 alias dfh='df -h | grep -v fs | grep -v boot | grep -v udev'
        - du (disk usage)    du -sh *
        - mount/umount ( /etc/fstab, mounting and unmounting filesystems)
               http://johnmeister.com/linux/FileSystems/mount-external-drive-rysnc.html
               http://johnmeister.com/linux/FileSystems/NFS-info.html
        - gvfs (gnome virtual file system)
                http://johnmeister.com/linux/SysAdmin/Undo-Systemd-Resolved-a-script-and-results.html
        - using /proc
                http://johnmeister.com/linux/FileSystems/RAID-info.html
                --> cat /proc/diskstats 
                    ...
                    8       0 sda 201275 864 56505586 678268 14613 5748 1758344 1076088 0 736088 1754128
                    8       1 sda1 201249 864 56503490 678248 14612 5748 1758344 1076088 0 736068 1754104
				--> cat /proc/devices
				Character devices:
				  1 mem
				  3 cons
				  5 /dev/tty
				  5 /dev/console
				  5 /dev/ptmx
				  9 st
				13 misc
				14 sound
				117 ttyS
				136 tty
				Block devices:
				  2 fd
				  8 sd
				11 sr
				65 sd
				66 sd
				67 sd
				68 sd
				69 sd
				70 sd
				71 sd
        - using dmesg and /var/log/messages (YMMV)
                --> dmesg | grep usb | wc -l
                    67
        - using lsusb
            --> lsusb
                    Bus 001 Device 001: ID 8086:8d26 Intel Corp.
                    Bus 002 Device 001: ID 8086:8d2d Intel Corp.
                    Bus 003 Device 005: ID 0424:2744 Standard Microsystems Corp.
                    Bus 003 Device 006: ID 0424:2744 Standard Microsystems Corp.
                    Bus 003 Device 003: ID 045e:0039 Microsoft Corp. IntelliMouse Optical
                    Bus 003 Device 004: ID 045e:00db Microsoft Corp. Natural Ergonomic Keyboard 4000 V1.0
                    Bus 003 Device 002: ID 08e6:3437 Gemalto (was Gemplus) GemPC Twin SmartCard Reader
                    Bus 001 Device 002: ID 8087:8002 Intel Corp.
                    Bus 002 Device 002: ID 8087:800a Intel Corp.
                    Bus 003 Device 001: ID 8086:8d31 Intel Corp.

2) read removable media
        - using SD and microSD card readers in Linux, Mac, VirtualBox and cygwin
        - using USB devices in Linux, Mac, VirtualBox and cygwin

3)   ran into a problem with one of my SD card scripts over the weekend and had to rewrite it - will trace through the scripts
    http://johnmeister.com/linux/Scripts/get-images-from-SD-card.sh.html

4) quick mention of lsusb and dmesg - helped a friend back east with an industrial equipment problem - USB to RS232
    searched on:  linux u232-p9 driver download 
    https://ubuntuforums.org/showthread.php?t=1716756
    boot up; plug in device ;  lsusb  ; dmesg | tail 
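
    A hedged follow-up to those steps (the exact kernel message varies by driver): once the adapter is plugged in, the serial device it created can usually be confirmed with:

        lsusb                     # confirm the USB-to-RS232 adapter shows up on the bus
        dmesg | tail              # look for a "converter now attached to ttyUSB0"-style line
        ls -l /dev/ttyUSB*        # the serial device node the driver created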

NOTE:  from the discussion, I wanted to share a couple of Mike's inputs; however, he can't completely recommend the installation...
    https://yourunclemike.github.io/lunch-n-learn_2016-09/index.html  there are some issues... HOWEVER,
    the issues seem to be related to the distribution and version, and I'll leave this here because the steps are still helpful.
    He also suggested a website to try out various distros, as he points out it's less trouble than setting up a VM, or
    a thumbdrive for a trial run:  https://distrotest.net/    to see all the distros out there:   http://distrowatch.com
    Also... there's this:  https://manjaro.org/download/  

If you're going to setup a system for personal use my personal recommendation stands with Linux Mint with the MATE desktop.
If you're possibly working in IT and will support an Oracle or Apache system I'd recommend SuSE Linux.  
If you're stuck with supporting RedHat, I suggest Centos for your personal system, in an Oracle VirtualBox VM... hosted on Mint or SuSE. ;)  



Session #143 - February 1, 2019 -

Session #143 Linux at Lunch - 02/01/2019  
 
THIS SESSION:
 1) improved the script for thumbnails: 

    http://johnmeister.com/linux/Scripts/make-page-index-with-thumbnails.txt 
output example: http://johnmeister.com/2019/01-Jan-26-Mountains/tindex.html 

key feature: "convert" 
    The convert program is a member of the ImageMagick(1) suite of tools. 
    
 2) cygwin updating in the lab:
    The process to update cygwin has been to delete everything in LAB_USERS and cygwin64 prior to copying with Microsoft's xcopy.
    FIRST:   copy files from a system that is configured, either "push" or "pull"
         xcopy /E  c:\LAB_USERS \\LAB-K2\C$\LAB_USERS
            xcopy /E  \\LAB-E3\C$\LAB_USERS c:\LAB_USERS
    THEN,  xcopy the cygwin files into the cygwin64 directory.
            xcopy /E c:\LAB_USERS\...  C:\cygwin64   
    (making sure cygwin64 is empty or it'll have issues even if switched to overwrite)

I ran into a NUMBER of issues with Microsoft hanging up or not copying reliably, and permissions were wrong
    despite setting switches properly, but generally xcopy worked...
Also had to update the desktop link to cygwin 
    - adding this to the "start in" directory:  c:\cygwin64\home\%USERNAME%

By copying the entire cygwin64 directory I was hoping to not have to do an install on the workstation.
I tried tar files, zip files, but the only thing that seemed to work "reliably" to replicate the 
    install throughout the lab was to xcopy into an empty directory.

HOWEVER, in trying to make ssh work I have since learned that using Microsoft's xcopy across the network 
    did NOT preserve permissions and ssh has failed. There are complexities with cygwin on Microsoft.

4) using cygwin to clean some files and directories globally using find, xargs, du and integrating Microsoft commands.

5) had a long term plan, incorporate scripts like this into the lab: 
    http://johnmeister.com/linux/Scripts/chksys.sh.html
        http://johnmeister.com/linux/Scripts/run-now-on-all-hosts.sh.html  
    (launch the first script via this one... on a cron job...)
  HOWEVER, unless I can find a way of installing cygwin via a template, tar file or other means 
        so as not to install on each system... it'll take a long time.

6) also working on id3tools for MP3's  and exiftools for digital photography, 
    and using "dvdbackup -i /dev/sr0 -M"   which works splendidly with VLC.
The dvdbackup tool creates a directory that is 4.4G in size and VLC reads it just like the DVD.  
A great tool if you don't want to wear down your battery spinning a DVD, or if you don't have an optical drive, 
or don't want to travel with your DVD library.  I've used handbrake to convert chapters on the dvd to mkv files.  
  future session will discuss loops and scripting using id3 and exif... have been distracted with other scripts.


Session #142 - January 18, 2019 -

Session #142 Linux at Lunch - 01/18/2019  
 
THIS SESSION:

- follow up on performance improvements on Linux laptop after a few minor hardware changes (Memory and SSD upgrade)
     - on HP Elitebook 840/G3 started with a 256GB SSD with "adequate" performance and 8gb of memory.
  LINUX MINT 19.0 with MATE desktop results:  
        - first test:  baseline 256GB ssd/8gb memory:                       13.33 seconds - base performance
        - first step was doubling the memory, 256gb ssd/16gb memory:         4.67 seconds - 65% improvement! (2x memory!)
        - second step was to replace ssd with faster 1TB ssd/16gb:           3.33 seconds - an additional 28.57% improvement.

  OVERALL PERFORMANCE IMPROVEMENT on LINUX was 75%!!!  with a $175 ssd and $40 memory chip on a $299 used laptop.
    - performance test script is:  http://johnmeister.com/linux/Microsoft/LOADTEST-OS-COMPARISON.html  
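
  The linked page has the actual test; as a generic sketch of the idea (sizes and paths here are illustrative, not the script's), a repeatable timing test just wraps a fixed amount of disk and filesystem work in "time":

    time ( dd if=/dev/zero of=/tmp/loadtest.img bs=1M count=1024 conv=fsync 2>/dev/null ; \
           find /usr -type f > /dev/null ; \
           rm -f /tmp/loadtest.img )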

- created a script to create an index page, thumbnail index page and the ALL.html for my photography:
    
    http://johnmeister.com/linux/Scripts/make-page-index-with-thumbnails.txt

    INDEX:  http://johnmeister.com/2019/01-Jan-13-Skagit-Birds/index.html
    THUMBNAIL: http://johnmeister.com/2019/01-Jan-13-Skagit-Birds/tindex.html 
    ALL.html: http://johnmeister.com/2019/01-Jan-13-Skagit-Birds/ALL.html

-  will walk through this script as well:    
    
    http://johnmeister.com/linux/Scripts/z-archive-it.html

- fix-filenames:  a script to undo bad habits - created a quick for loop to fix a number of directories
    
    http://johnmeister.com/linux/Scripts/for-loop-for-fix-filenames.txt

    
    http://johnmeister.com/linux/Scripts/fix-filenames.sh.html


Session #141 - January 11, 2019 -

Session #141 Linux at Lunch - 01/11/2019  ... was going to stop but...  will explain during the session. 
 SESSION #1 was held on November 26, 2014   
 
THIS SESSION:

- setup a new laptop using SUSE LINUX LEAP 15.0 - did an update and stuff broke... so I nuked it...
    installed LINUX MINT MATE 19.0 Tara... and... recorded (most of) the steps:

    http://johnmeister.com/linux/SysAdmin/Install-n-Configure-Linux-MINT-MATE.html 

... found that DNS was broken...  so... I googled the issue and found the villain:  systemd-resolved.

why???  how hard is it to enter a line in /etc/resolv.conf?
    
    http://johnmeister.com/linux/SysAdmin/Undo-Systemd-Resolved-a-script-and-results.html

- created a script to clean up my "bin" directories on all of my Linux systems... the issue is that when I
rsync I'm picking up subdirectories that have source s/w... so rather than trying to identify which directories
to rsync, I will only rsync the top level of bin.  So my archived or out of date scripts that I use for reference
will need to be moved out of "ARCHIVED" into the main directory. The plan is to write a script to prepend the file
name with "z_" and then add a suffix with the date.  This way I'll have backup copies and a library of old scripts
from which to recover the "perls of wisdom", especially important because of CRS (can't remember stuff). 
I looked through old scripts and these class notes to find the technique...   Worked first try.
    
    http://johnmeister.com/linux/Scripts/z-archive-it.html
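
The linked page has the real script; a minimal sketch of the idea (assuming the files to archive are passed as arguments) is just a rename loop:

    #!/bin/bash
    # sketch only: prepend "z_" and append the date so archived scripts keep their history and sort together
    for f in "$@"; do
        [ -f "$f" ] && mv -- "$f" "z_${f}.$(date +%Y-%m-%d)"
    done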

- using cygwin on windows for file management and regular expressions - 
    this will include using "du", find and even rsync to manage backups of local files to thumbdrives or shares.
             
             http://johnmeister.com/linux/Scripts/Engineering/Get-a-CLUE-w-Linux.pdf

- using time to see how well cygwin performs (cf. w/linux if possible)   (time )

- fix-filenames:  a script to undo bad habits
    
    http://johnmeister.com/linux/Scripts/fix-filenames.sh.html

- determine disk hogs  (du -sh) and other file management tricks of sys admins (that create more problems later! :)

- find large files and move, remove, etc.  (find ....) (page with examples is AWOL... !!!)
       
       http://johnmeister.com/linux/Commands/find-copy.html

- using script to capture tasks you'd like to make into a script  (script name.RAW; cat name.RAW | col -b > name.txt )

- using windows command in cygwin  - one such interoperable script would be to clean windows:
    
    http://johnmeister.com/linux/Microsoft/batchfile-to-clean-your-windows.html
    (of course this script/batch file needs to have documents and settings changed to user...)

Session #140 - November 30, 2018 -

Session #140 Linux at Lunch - 11/30/2018 - LAST SESSION 
 SESSION #1 was held on November 26, 2014   
this will be the last session, at least for 2018...  
    ending this series at #140 on Nov 30, 2018  - 4 years and 4 days... 
 
THIS SESSION:
- using cygwin on windows for file management and regular expressions
             
             http://johnmeister.com/linux/Scripts/Engineering/Get-a-CLUE-w-Linux.pdf
- using time to see how well cygwin performs (cf. w/linux if possible)   (time )
- fix-filenames:  a script to undo bad habits
    
    http://johnmeister.com/linux/Scripts/fix-filenames.sh.html
- determine disk hogs  (du -sh) and other file management tricks of sys admins (that create more problems later! :)
- find large files and move, remove, etc.  (find ....) (page with examples is AWOL... !!!)
       
       http://johnmeister.com/linux/Commands/find-copy.html
- using script to capture tasks you'd like to make into a script  (script name.RAW; cat name.RAW | col -b > name.txt )
- using windows command in cygwin 

Session #139 - November 9, 2018 -

Session #139 - November 9 - Linux at Lunch
This SESSION: crontab, wrapper scripts and backup techniques

- wrapper script to create a copy of a file with a sortable date prefixed to file name 
    http://johnmeister.com/linux/Scripts/using-DATE-prefix-or-suffix.html 
    http://johnmeister.com/linux/Scripts/wrapper-script.html 
    http://johnmeister.com/linux/Scripts/wrapper-copy-w-date.sh.html 
    http://johnmeister.com/linux/Scripts/filedate.sh.html 
- identify a list of key files: e.g. .bashrc, .vimrc, .viminfo, .ssh/authorized_keys, /etc/hosts, /etc/passwd, etc.
        http://johnmeister.com/linux/Scripts/Replace-README-using-find.html
        http://johnmeister.com/linux/SysAdmin/Editing-Apache2-etc-default-server.conf.html
- create script to check key files and create a dated backup copy of updates in ~/bin/KEY (a combined sketch follows this list)
        http://johnmeister.com/linux/Scripts/LOOPS-shell-examples.html
        http://johnmeister.com/linux/SysAdmin/system-check-script-July-2012.sh.html
- create rsync script to sync directory of backup copies with another system
        http://johnmeister.com/linux/Scripts/rsync-directories.sh.html
        http://johnmeister.com/linux/SysAdmin/Using-rsync.html
        http://johnmeister.com/linux/Notes/RSYNC-cmd.html
        http://johnmeister.com/linux/SysAdmin/SysAdmin-scripts-n-tools.html
- incorporate scripts to check key files and rsync into crontab
        http://johnmeister.com/linux/Notes/crontab-email-setup.html
        http://johnmeister.com/linux/Notes/wget-n-crontab.html
        http://johnmeister.com/linux/SysAdmin/Cron-Rsync-sync-up-drives.sh.html
        http://johnmeister.com/linux/FileSystems/man.rsync.txt
- demonstrate how to use tar and compress (etc.)  to create an additional backup copy to be archived
        http://johnmeister.com/linux/FileSystems/creating-TAR-n-ZIP.html
        http://johnmeister.com/linux/Commands/MAN/man.tar.html
        http://johnmeister.com/linux/Notes/how-to-TAR-home.html
        http://johnmeister.com/linux/SysAdmin/UNIX/OldTapeBackup-ufsdump-restore/
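
Pulling the items above together, a hedged sketch (the key-file list, hostname, archive name and schedule below are examples, not the linked scripts):

    #!/bin/bash
    # backup-key-files.sh - sketch only: dated copies of key files into ~/bin/KEY, then rsync off-box
    KEY_FILES="$HOME/.bashrc $HOME/.vimrc $HOME/.ssh/authorized_keys /etc/hosts"
    DEST="$HOME/bin/KEY"
    mkdir -p "$DEST"
    for f in $KEY_FILES; do
        [ -f "$f" ] && cp "$f" "$DEST/$(basename "$f").$(date +%Y%m%d-%H%M)"
    done
    rsync -av "$DEST/" otherhost:bin/KEY/                                     # hostname is illustrative
    tar -czf "$HOME/KEY-backup-$(date +%Y%m%d).tar.gz" -C "$HOME/bin" KEY     # optional archived copy

    # example crontab entry (crontab -e) to run it nightly at 23:30:
    # 30 23 * * * /home/luser/bin/backup-key-files.sh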

Session #138 - October 26, 2018 -

Session #138 Linux at Lunch - 10/26/2018 - 
Basics of RAID - (redundant array of inexpensive disks):
 
 http://johnmeister.com/linux/FileSystems/RAID-info.html

logical volume manager on Linux will allow you to use JBOD for RAID:
 
 http://johnmeister.com/linux/FileSystems/setup-LVM.html 
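
The linked page has the full walk-through; as a rough sketch (device names are examples), concatenating two disks JBOD-style into one ext4 volume with LVM looks something like:

    pvcreate /dev/sdb1 /dev/sdc1                  # mark the partitions as physical volumes
    vgcreate vg_data /dev/sdb1 /dev/sdc1          # pool them into one volume group
    lvcreate -l 100%FREE -n lv_data vg_data       # one logical volume spanning both disks
    mkfs.ext4 /dev/vg_data/lv_data
    mount /dev/vg_data/lv_data /data              # add an /etc/fstab entry to make it permanent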

using more than one computer at home and with external drives connected via USB is a reasonable backup method: 
will discuss different scenarios, and using rsync: 

http://johnmeister.com/linux/Scripts/rsync-directories.sh.htm

Session #137 - October 12, 2018 -

Session #137 Linux at Lunch - 10/12/2018 - review environments and scripts, using cygwin for windows commands
FRIDAY'S SESSION:
    - discuss environments (.bashrc)   http://johnmeister.com/linux/basic-setup-for-BASH.html
    - review scripts and discuss:   http://johnmeister.com/linux/Scripts/
    - discuss cmd in cygwin:   http://johnmeister.com/linux/Microsoft/CygWin-when-penguins-are-not-allowed-to-roam-free.html


Session #136 - September 28, 2018 -

Session #136- September 28, FRIDAY: READING a file into another file...
SITUATION RESOLVED: 1,189 web pages have been updated.
Linux at Lunch - Session #136 - 9/28/2018 - completed file insertion using SED and SHELL

FRIDAY'S SESSION:

READ a file into another file... COMPLETED...  
1,189 web pages NAVIGATION added (by Monday night!)...

review and discuss the process and tools used.  
Show how using and editing the history allowed for faster turnaround by adding commands via semicolon on one line.

http://johnmeister.com/linux/Scripts/SED-used-to-insert-Table-of-Content-in-HTML-pages.txt

http://johnmeister.com/BIBLE/STUDY 


Session #135 - September 21, 2018 -

Session #135- September 21, FRIDAY: READING a file into another file...
SITUATION: Have 1,189 web pages that I'd like to add NAVIGATION to. 

However, there are 66 groupings of these pages with different navigation. 

So I need to create a custom navigation for each of these 66 groupings - 
have a basic set of internal vi sed commands. 

Once that's created I need to insert that navigation, or more correctly, 
a Table of Contents, into each of those like pages. 

Then scp or rsync my local system with my server and test. 


1) I have a manual process: 
    1) vi *files-with-same-name* 
    2) <esc> / key words 
    3) dd ; <esc>:r toc-filename 
    4) <esc>:wn 
    5) repeat until fingers bleed then scp -rp <updated files> server:/path

2) someone said PYTHON, his name was NOT monty...  so... with his help I got it working:
        1) created the toc (table of contents) file, then found the key word search string
        2) dummied up files with simple names and tested - worked
        3) attempted to insert a simple for loop for *files-with-same-name*, failed
        4) BRUTE FORCE SCRIPT worked:  http://johnmeister.com/linux/Scripts/Python-script-attempt-without-a-loop.txt
        5) the use of python without knowledge, training or a clear cheat sheet was MORE WORK

3) someone said SED... it wasn't Fred... but it was said by more than one... so...
        1) got some basic stuff working...
        2) can find the pattern and read the toc file... however, it doesn't output the original file to stdout
        3) I could not get sed to edit in place with the pattern as stated
        4) am reading the sed & awk book by O'Reilly and hoping to write a sed script before Friday
        5) one possibility is to replace the complex with PERL, then...
        6) sed '/banana/r file2' file1 > newfile
        7) GOOD NEWS at 23:13:38 19 Sep - PROBLEM: needed to add ".bak" to the -i in MacOSX BASH / SED!!!
                  Once I escaped all the special characters I still got this error: 
                sed: 1: "mal.html": invalid command code m     
                SEARCH pointed out the problem was a "safety feature".
     sed -i.bak '/\<br\>\<center\>\<hr width=\"50%\"\>\<\/center\>\<\/td\>/r toc-mal' mal.html

       8) now I can write a for loop to get through all of the groupings!
       9) thanks to the folks who said to use SED.    :)
      10) one other note... had to add one more line to remove the string that was supposed to be replaced:
     sed -i.bak '/\<br\>\<center\>\<hr width=\"50%\"\>\<\/center\>\<\/td\>/d' mal.html

4) PERL still seems like a viable tool...
        1) however, my perl scripting class is long expired (1998)
        2) but... I can use my perl -pi -e 's/complex string for/ banana/g'  files   
            - and then use the sed line from #6 above.
        3) I'm going to have my nose in the O'Reilly book on sed & awk and see what h and H and r really do in sed.
        4) Considering the damage I did with perl on prior sessions, I'm stopping at SED this trip.
        5) although perl is still in my tool box, just fixed an error where I had "Revelation" in the title of Isaiah!

        --> grep Revelation *Isa*     #  oops...  all 66 books had it!  didn't edit my template properly
        --> perl -pi -e 's/Revelation/Isaiah/g' *Isa*
        --> grep Revelation *Isa*     #  all gone... 

ONE THING I'VE LEARNED is that a bunch of my really smart friends will say, oh, just use this or that... most of them just leave you with that, or say RTM... or they could do it with this or that tool... and let you research it. I've heard and experienced this for decades... it's very common for a developer or advanced sys admin to say these things; it's another thing for those helpful and knowledgeable folks to actually help with the exact syntax and make it work. Generally it is never as easy as any of us thought. But every little bit of info, advice and suggestions helps.

Have a friend back east who has been helping me with Python... he's run into the same issues I have, but worked it through... he's promised to show me how to set up a for loop tonight (Wed)... I'm trying to make a generic script that does this and just change the variables.

I have manually edited those 1,189 files several times... and have ideas for more enhancements. :) Despite my lack of coding skills, the web site is coming along nicely.

Session #134 - September 14, 2018 -

Session #134- September 14, FRIDAY: SIMPLE ALWAYS WORKS!
TWO THINGS TO SORT OUT:
    1) ERROR:   xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option 
    2) difference in counts between xargs and -exec when using the same command 
         (...and of course any differences with grep -i and escaping ".") 

Once again, upon searching with this error I find myself in 
     the WASTELAND of FORUMS with misinformation and unanswered questions.
Leaving me to setup a known situation and case and testing the variables myself. 
Will also try to do this in Cygwin so we can also use find and perl on Windows... 

HOWEVER, one of the lunch time linux folks suggested finding the rogue file, found it, fixed it and updated server...

------------------------------------------------ 
## using xargs gave an error, and a slightly different count than using -exec  
------------------------------------------------ 
--> time find . -type f -name "*.html" | xargs grep -i "johnmeister.com" | wc -l 
xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option 
52843 
   real 0m10.360s 
   user 0m5.439s 
   sys 0m0.321s 
------------------------------------------------ 
--> time find . -type f -name "*.html" -exec grep "wagoneers\.com" {} \; | wc -l 
61947 
   real 3m29.839s 
   user 0m53.282s 
   sys 0m25.615s 
------------------------------------------------ 
Here's the steps used to sort out the single quote thing and update the server:

    
    http://johnmeister.com/linux/Scripts/Using-find-perl-to-fix-wagoneers.com.html

---------
UPDATE FROM SESSION:
find tmp -type f -name 'sed-xargs-test.txt' | xargs sed -ir 's$before$after$g'
    sed with a "-i" = in-place editing
    sed with a "-r" = extended regex   (more perl compatible) (still must be careful with parens!)
    (note: with GNU sed it is safer to give the flags separately, e.g. "sed -r -i", since a character attached directly to -i is taken as a backup suffix)

the find with parens destroyed thousands of files... must escape parens:  \(  \)

NOTE:
sed 's/[a-z]*\(-processing\)/post\1/' <<
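
To illustrate the parenthesis note above (a made-up example, not the command that caused the damage): in sed's default basic-regex mode \( \) form a group and bare ( ) are literal, while -r (extended regex) reverses that, so the same grouping is written two ways:

    echo "pre-processing" | sed 's/[a-z]*\(-processing\)/post\1/'      # basic regex: prints "post-processing"
    echo "pre-processing" | sed -r 's/[a-z]*(-processing)/post\1/'     # extended regex: same output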


Session #133 - September 7, 2018 -

Session #133- September 7, FRIDAY: SIMPLE ALWAYS WORKS!
1) using grep, find and perl... 
   or how to fix 3,359 web pages in 8 minutes and 40 seconds. 
- 
http://johnmeister.com/linux/Notes/Perl-to-fix-incomplete-URL.html 
- 
http://johnmeister.com/linux/Commands/PERL-examples-2018-09-Sep-06.txt
- will fix some entries on all the pages in real time, e.g.: 
   a) grep for suspect string, pick a file:   
     grep "Cor rinthian" *   # 56 files were found with this error.
     1091_47_2_Corinthians_13_nas.html:<th colspan="3" bgcolor="aaccff">  2 Cor rinthians 13 <!-- DAY and VERSE info here --> </th>
   b)  what IS a good idea at this point is to make a COPY of the file in /tmp as a precaution!
     cp 1091_47_2_Corinthians_13_nas.html /tmp/save-for-a-bit   # (history will help you remember the real name)
   c) select one file, as noted above, try perl string, grep again or edit file to validate,
     perl -pi -e 's/Cor rinthians /Corinthians /g' 1091_47_2_Corinthians_13_nas.html
   d) test: grep "Cor rinthian" 1091_47_2_Corinthians_13_nas.html    # expected response is none... nothing... "silence is golden"
   e) now, recall history and change the file name 1091... to a wildcard, or restricted wildcard, e.g. *Cori*
   f) perl -pi -e 's/Cor rinthians /Corinthians /g' *
   g) at this point there are two other directories that have related files, would cd ../OTHERDIR1, recall command via history, then cd again
   h) while not completely necessary, removing the file from /tmp (or wherever you parked it) is a good idea.

2) combine the use of find to fix or change various strings in the text, for example (using "a string" as a placeholder for the actual string):

grep "a string" *     # two files came up in the subdirectory
perl -pi -e 's/a string//g' bashrc-n-history-details.html    # this fixed one file, there's one left
grep "a string" *    # just one file
find . -type f -name 'setup-diverse-systems-prototype.sh.html'  -exec perl -pi -e 's/a string//g' {} \;
grep "a string" *    # none found

   At this point we change the string to hit all html files and go to the top of each directory where this might exist.

GOOD IDEA TO TEST FIRST:
find . -type f -name '*.html'  -exec grep "a string" {} \;
  <img src="http://johnmeister.com/pix/practical-suggestions-for-Microsoft-Windows-small.jpg" width="86%"><br></a>
THEN EXECUTE REPLACEMENT OF STRING:
find . -type f -name '*.html'  -exec perl -pi -e 's/a string//g' {} \;

--> time find . -type f -name '*.html' -exec grep "even free" {} \;
   real 4m17.062s
   user 0m57.338s
   sys 0m31.313s

SEE ALSO: http://johnmeister.com/linux/SysAdmin/How-to-update-18-thousand-pages.html for use of xargs or just find...

3) time permitting, review the work of a couple years' using wget, perl, sed, grep, awk, vi, html and find... to build a complete website that works with any browser or device and is portable and stands alone.


Session #132 - August 17, 2018 -

Session #132- August 17, FRIDAY: SIMPLE ALWAYS WORKS!   

1) dd of 8TB - took 16.36 days... not doing THAT again. :) 
http://johnmeister.com/linux/FileSystems/usingParted8TB.html 
http://johnmeister.com/linux/FileSystems/rescuing-failing-drive-w-dd.html 
------------------------------------------------ 
root@LOVING-LINUX -root- [/root]
------------------------------------------------
--> dd if=/dev/sdc of=/dev/sdd
------------------------------------------------
dd: reading `/dev/sdc': Input/output error
43184776+0 records in
43184776+0 records out
22110605312 bytes (22 GB) copied, 4404.01 s, 5.0 MB/s
------------------------------------------------

2)   rsync script:

3) sequentially number files - had to do a bit of manual processing... 
        http://johnmeister.com/linux/Scripts/number-file-names.sh.html, 
had to  number files from 1 to 1,189 to get:  http://bibletech.net from http://bibletech.net/READ

4) used paste, head, cut, cat, and a few other commands to merge several separate files into 1,189 html files.  

5)  Reordered the daily reading to follow the Hebrew version's ordering of these texts... 
http://bibletech.net/Mikrah  - looking at it I need to create an index file for it... 
that will involve:   ls > index.html ; vi index.html  ;  :%s$.*$ & $g  ;
and then will need to put them into various sections, add headings and insert the table format associated with my base README.html template.
Which is mostly laid out in this script:   
http://johnmeister.com/linux/Scripts/mk-webpage-2015-05-08.html  (this creates an "ALL.html" page)
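
A hedged sketch of that index step as a loop (file and title handling is simplified; the linked script does much more):

    echo "<html><body><ul>" > index.html
    for f in *.html; do
        [ "$f" = "index.html" ] && continue
        echo "  <li><a href=\"$f\">$f</a></li>" >> index.html
    done
    echo "</ul></body></html>" >> index.html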


Session #131 - July 27, 2018 -

Session #131- July 27, FRIDAY: SIMPLE ALWAYS WORKS!   

1) migrated from 4TB to 8TB - 
    used parted, http://johnmeister.com/linux/FileSystems/usingParted8TB.html 

2) then used rsync and dd - rsync from 4TB to 8TB took over a day... dd has been almost a week... not done... 

3) using script to rsync and update files between computers - 
      strategy for use including using external drives kept in air tight metal box. 

4) modifying a script to sequentially number files: 
  http://johnmeister.com/linux/Scripts/number-file-names.sh.html number files from 1 to 1,189 
  http://bibletech.net/BY-CHAPTER/ and then create an index that matches what's already encoded! 
  Will be using awk, sed, and likely perl. 
 Some of the scripts will be on cygwin, others on SuSE. 
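
A hedged sketch of the sequential-numbering idea (the linked script handles more cases):

    n=1
    for f in *.html; do
        mv -- "$f" "$(printf '%04d' "$n")_${f}"    # 0001_, 0002_, ... keeps ls and the index in order
        n=$((n+1))
    done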

other scripts useful for archiving: 
http://johnmeister.com/linux/Scripts/wrapper-copy-w-date.sh.html 
http://johnmeister.com/linux/Scripts/wrapper-KSH-experiment.html 
http://johnmeister.com/linux/Scripts/wrapper-script.html 
after you get a wrapper script or PROCESS in place, you can expand it across systems 
     and do some things with ssh/scp and rsync: 
http://johnmeister.com/linux/Scripts/rsync-directories.sh.html 
http://johnmeister.com/linux/Scripts/run-now-on-all-hosts.sh.html 


Session #130 - June 15, 2018 - "home brew" revision control

Session #130- June 15, FRIDAY: SIMPLE ALWAYS WORKS! - "home brew" revision control and file management 

Talking with a few of the regular attendees, I feel inspired to discuss the creation of a "home brew" revision control system. 

I've "cooked" up a few things over the years to manage system files and notes. 
Keeping it simple keeps it portable. 

Simple is what you can remember... YMMV. 


http://johnmeister.com/linux/Scripts/wrapper-copy-w-date.sh.html 

http://johnmeister.com/linux/Scripts/wrapper-KSH-experiment.html 

http://johnmeister.com/linux/Scripts/wrapper-script.html 

then, after you get a wrapper script or PROCESS in place, you can expand it across systems 
and do some things with ssh/scp and rsync: 


http://johnmeister.com/linux/Scripts/rsync-directories.sh.html 

http://johnmeister.com/linux/Scripts/run-now-on-all-hosts.sh.html 


#!/bin/bash
# vicp.sh - created 2015-09-01 jm - wrapper script
# updated 2018-06-12 to add a "BAK" directory in local path, be sure to chmod 755
##################################################################################
# if desired, add an archive Directory path to copied file name
# /usr/bin/cp $1 $1.before-`date +'%Y%m%d-%H%M'`       # BEFORE
# /usr/bin/cp $1 BAK/$1.after-`date +'%Y%m%d-%H%M'`    # AFTER
##################################################################################
if [ ! -d BAK ]       # tests for directory, if not, then mkdir...
# echo in-loop        # commented to test loop - had a problem with special characters
then
    mkdir BAK
    # echo dirbak     # comment to check process.. a space, a tick and an apostrophe...
    echo "history directory made"
fi
cp $1 BAK/$1.before-`date +'%Y%m%d-%H%M'`
# ls -al BAK          # added this to see that the loop worked (cutting and pasting has issues!)
vi $1
cp $1 BAK/$1.after-`date +'%Y%m%d-%H%M'`
##################################################################################
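
Typical use, assuming the script above is saved as vicp.sh, made executable (chmod 755) and placed on the PATH:

    vicp.sh notes.txt     # copies notes.txt into ./BAK with a "before" timestamp, opens vi, then saves an "after" copy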
Working as a Systems Administrator you do NOT always have the luxury of advanced tools and applications... while there are good tools available in UNIX and Linux and likely Cygwin, they are NOT installed by default, and if they are they may not be configured. You can't trust them. So, yeah, I know there are tools, but they're not always in YOUR tool box when you're dealing with bare metal or on "other" systems.

My focus in training and sharing is on what I've used and can remember from the real world. If you're using a system all the time in a stable environment you may be able to add the "better" tools... or do what I did and "grow" my scripts to do more things. These scripts work because I've used them to manage enterprises. I make them work, then build a web page to share them... and... cut and paste into any environment I might find myself in... because trying to remember some of these things is a challenge... syntax can be complex.

Simple Always Works. I'm pretty simple, and always seem to be working...

Session #129 - June 8, 2018 - GPARTED & SuSE 42.3 update; CygWin/BASH continued

Session #129 - June 8, FRIDAY: SIMPLE ALWAYS WORKS!    

SITUATION:  Toshiba R600 Portege laptop, 500GB drive - SuSE 42.2 - EOL.  Needed to update, and low on space.
RESOURCES:  DVD with SuSE 42.3 Leap; 1TB SSD from recently deceased laptop; and Gparted on a USB stick.

Thought I would need to use EZGig or Clonezilla... EZGig was version 2... out of date, failed... downloaded v6, but 
it required the hardware device... long ago deceased and disposed of... Tried Clonezilla... bad disk... dug out my USB stick with 
Gparted that we've used in prior sessions to rescue systems, and dd for cloning:
http://johnmeister.com/linux/Notes/Gparted-for-Recovery/ALL.html
http://johnmeister.com/linux/FileSystems/rescuing-failing-drive-w-dd.html

http://johnmeister.com/linux/FileSystems/Gparted-clone-expand-partition-n-upgrade-SuSE.html

But I forgot that one can "copy" partitions with Gparted...  so I tested it, and it worked.
Getting the system to boot to the correct drive was a challenge... had to be careful of /dev/sda as it had the good SuSE 42.2 with MATE... attached to a USB SATA dual drive unit was the target 1TB SSD (which had a full up MINT/MATE environment on it with a corrupted MATE menu; the files had already been copied to another 1TB SATA drive). And then of course the USB stick with Gparted (Debian Linux), which was what I needed to boot to.

It took a few tries to get the correct boot device, but once Gparted started it was all good. What I ended up doing was removing the 1TB drive from the USB/SATA "toaster", and that allowed the system to find the USB stick; once the Gparted screen appeared and BEFORE it loaded its kernel, I reinserted the 1TB drive. If you boot into the Gparted kernel without both drives (source and target) in place it won't work; they need to be registered in the kernel when it starts.

The process steps were:

1) boot into Gparted (disk or thumbdrive) - make sure all disks are attached
2) identify drives, e.g.

    /dev/sda1  swap 10GB
    /dev/sda2  root 138GB  (SuSE 42.2)
    /dev/sda3  home 318GB

    /dev/sdb1  swap 8GB
    /dev/sdb2  root 989GB  (Mint/Mate)

    /dev/sdc1  root 512M  (Gparted/Debian)

3) select /dev/sdb - create partition (gpt) 
    - yes, delete all partitions ; apply

4) select /dev/sda1 - copy ; select /dev/sdb, paste 
    (leave the same size, 10GB for 8GB of memory is adequate for swap)

5) select /dev/sda2 - copy ; select /dev/sdb, paste 
    (leave the same size, 138GB for the OS is adequate)

6) select /dev/sda3 - copy ; select /dev/sdb, paste 
    - AND expand to fill the drive 998GB or whatever.

7) after being completely certain you did this right, click APPLY and wait... 
    for what may be a long time...

8) once complete, reboot the system and verify it's ok... 
    this is where the real work begins.

9) disassemble the laptop and swap the drives, making sure you keep them straight.
    remember, the original drive is still good and correct.

10) swap drives (YMMV), boot up to the cloned drive... good luck, you'll need it.
    - at this point things can get very complicated...
    - if your system used UUID instead of KERNEL devices (i.e. some cryptic string, vs. /dev/sda1)
       you will have a problem... the UUID of the device you cloned will be different...
       that means you'll have to edit /etc/fstab and likely go into /boot and grub.cfg and so on.
    - if you can't boot at this point, and you're smart enough to use OpenSuSE, you have options.

11) Assuming you were bright enough to use OpenSuSE, you have options.
    - you can fiddle with files, or boot up to the install media.
    - when booting to the SuSE Install disk, select upgrade (if you select install the disk 
        is smart enough to realize that it's already installed, and allow you to upgrade,
        which isn't something that often happens with other distros... of course, YMMV.)
    - when doing the upgrade I always select at least one or two additional packages,
        by doing this I make sure the system is changed so that the boot features are also changed.

12) the system will do its upgrade and usually fix the booting issue.
    - if not either dig into grub or put the old drive back in.
    - once the old drive is in, modify grub.cfg and /etc/fstab to show KERNEL drivers, not UUID,
      or you can get fancy and fake out the UUID on your system to make the new drive appear to be
      the old drive... confused?  yeah, me too... so I ignore the warning about kernel names and
      go that way... however, if you swap drives around on your bus by accident, that will BITE you! be careful. 
    once the system boots up you can restore your old repos (non-oss) and make sure everything is working.
    (I forgot to save my old repos... needed non-oss for virtualbox and VLC... not hard to add back in.).

This was way more fun than CygWin... 


Session #128 - May 11, 2018 - BASH Commands in CYGWIN - continued

Session #128- May 11, FRIDAY: SIMPLE ALWAYS WORKS!    just not always as fast as we'd like...              
 
Plan to continue working with CygWin and using it for parsing information;
                while not as fast as native Linux, it's cost-effective (as in free and no additional hardware required) and convenient. 
I recognize that this tool is the best way of getting power users familiar with regular expressions and Linux!
 
I have been testing various configurations and working with others to learn more about this tool.  I was able to get ssh working in Cygwin and am
researching it further.  The plan is to see if we can ssh between Microsoft systems!   This will greatly help me manage my computer lab!!!!
Also considering making X-windows work in Cygwin...  lots to explore with this tool, one computer, one OS, but many tools! 
 
The focus on the sessions will be about the commands and scripts, 
    and I will comment on how these commands relate to the Linux environment where they differ.
 
Using CYGWIN for these sessions provides the LOWEST COMMON DENOMINATOR to help folks learn regular expressions and Linux commands.

Session #127 - April 27, 2018 - BASH Commands in CYGWIN - continued

remember:  SIMPLE ALWAYS WORKS!  Find the lowest common denominator in your setup and tools, master them.

April 27, FRIDAY - Linux at Lunch 
Linux at Lunch - Session #127 - Real World Linux Commands - in CYGWIN continued.
Working through the list of commands to see which are built-in, which are commands and which are not there.

THIS WEEK: parsing info from the Microsoft "systeminfo" command, and also some CSV data from Active Directory.
    Will identify details about a system and create a CSV file to input into Excel, or OpenOffice, or LibreOffice.

 http://johnmeister.com/linux/SysAdmin/system-info-to-CSV-sysadmin.sh.html 

 http://johnmeister.com/linux/Notes/Real-world-Linux-Commands.html


Session #126 - April 13, 2018 - BASH Commands in CYGWIN - continued

remember:  SIMPLE ALWAYS WORKS!  Find the lowest common denominator in your setup and tools, master them.

April 13, FRIDAY - Linux at Lunch 
Linux at Lunch - Session #126 - Real World Linux Commands - in CYGWIN continued.
Working through the list of commands to see which are built-in, which are commands and which are not there.
 http://johnmeister.com/linux/Notes/Real-world-Linux-Commands.html


Session #125 - April 6, 2018 - BASH Commands in CYGWIN

remember:  SIMPLE ALWAYS WORKS!  Find the lowest common denominator in your setup and tools, master them.

April 6, FRIDAY - Linux at Lunch 
Linux at Lunch - Session #125 - Real World Linux Commands - in CYGWIN 


http://johnmeister.com/linux/Notes/Real-world-Linux-Commands.html


Session #124 - March 30, 2018 - building interoperability

remember:  SIMPLE ALWAYS WORKS!  Find the lowest common denominator in your setup and tools, master them.

March 30, FRIDAY - Linux at Lunch 
Linux at Lunch - Session #124 - building a base of interoperability with Linux

Two topics this week:
Discuss a variety of file management practices to ensure interoperability.
Using Cygwin on Windows and working with Linux or Mac with the same files.
Moving files between systems on a network, via email or via thumbdrive.
Setting up a thumbdrive to work on Mac, Linux and Microsoft.
Using rsync, winscp, putty, Exceed, VNC, VM's, ssh, scp, config mgt and backup tools.
Linux commands involving  ssh setup, rysnc, scp, and creating a function or alias to make backup copies of files.

Useful links for learning more about Linux:
http://johnmeister.com/linux/Overview/
http://johnmeister.com/linux/Commands/commands-bash.html
http://johnmeister.com/linux/vi/ - the only editor worth learning if you're going to work at the command line in Linux/UNIX
 http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-1.html
http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-2.html
http://johnmeister.com/linux/Commands/Man-Pages-Lab/create-man-txt.sh.html
http://johnmeister.com/linux/Commands/vi-examples-of-sed-strings.html
http://johnmeister.com/linux/Commands/MAN/LPI-commands.html
http://johnmeister.com/linux/Intro-to-Linux/Special-Characters.pdf
http://johnmeister.com/linux/Intro-to-Linux/Slides/ALL.html
http://johnmeister.com/linux/Intro-to-Linux/OLDER-NOTES/
http://johnmeister.com/linux/Scripts/Engineering/
for home, BASH Win10 - older info: http://johnmeister.com/linux/Microsoft/Win10-BASH/
https://www.smashwords.com/books/search?query=John+E+Meister+Jr  link to e-books
http://johnmeister.com/linux/The_Art_of_Linux_System_Administration.html
http://www.oreilly.com/pub/au/6963 videos:  http://shop.oreilly.com/product/0636920050209.do


Session #123 - March 23, 2018 - /bin vs. /usr/bin; systemd (systemctl and journalctl)

remember:  SIMPLE ALWAYS WORKS!  Find the lowest common denominator in your setup and tools, master them.

Two topics this week:
1) FHS (Filesystem Hierarchy Standard) info related to setting up .bashrc and related VIM issues:

http://johnmeister.com/linux/FileSystems/Linux-bashrc-vim-differences.jpg

2) "very" quick refresher on systemd, logs, and services:
http://johnmeister.com/linux/Notes/systemd-info.html

3) open talk on future sessions - thinking going through very basic regular expression use might be helpful... thoughts?



Session #122 - March 9, 2018 - wget, paste and reg ex tools

Linux at Lunch - Session #122 - March 9, 2018 - wget, paste, sed, and other useful tools in practice
          Linux at Lunch sessions EVERY OTHER WEEK during 2018:
            March 9 (#122), March 23 (#123); April 6 (#124), April 20 (#125);
            May 4 (#126), May 18 (#127); June 8 (#128); July 27 (#129);
            August 10 (#130), Aug 24 (#131); Sep 7 (#132), Sep 21 (#133);
            Oct 19 (#134); Nov 16 (#135) - END OF SERIES (? mmv...).

Updated a long term project, you can use "wget" to copy this entire directory to a local drive, 
allowing you to view it without a network.  TO DOWNLOAD:

   wget -r -np -nd -P RKNG http://johnmeister.com/tech/RKNG

alternate method:
mkdir RKNG ; cd RKNG ; wget -r -np -nd  http://johnmeister.com/tech/RKNG

the -r is for recursive (goes down the path listed, 
    can be limited with a "-l # " , e.g. "  -l 2 " for two levels down)

the  -np is for "no parent"  (doesn't go back up to the root file system - only gets the path specified)

the  -nd is to prevent copying the entire directory structure from the top down... FQDN and path...

the  -P RKNG - is the designated path to copy to.  
        use whatever name you prefer, or mkdir "name" ; cd "name" 

That project made use of a lot of sed, grep, awk, perl and vi edits.  
It is a self-contained directory with no external links, 
using only internal html references using name and href designations.

Useful links for learning more about Linux:
 http://johnmeister.com/linux/Overview/
http://johnmeister.com/linux/Commands/commands-bash.txt
http://johnmeister.com/linux/vi/ - seriously vi is the only editor worth 
        really learning if you're going to work at the command line in Linux/UNIX
 http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-1.html
http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-2.html
http://johnmeister.com/linux/Commands/Man-Pages-Lab/create-man-txt.sh.txt
http://johnmeister.com/linux/Commands/vi-examples-of-sed-strings.html
http://johnmeister.com/linux/Commands/MAN/LPI-commands.html
http://johnmeister.com/linux/Intro-to-Linux/Special-Characters.pdf
http://johnmeister.com/linux/Intro-to-Linux/Slides/ALL.html
http://johnmeister.com/linux/Intro-to-Linux/OLDER-NOTES/
http://johnmeister.com/linux/Scripts/Engineering/
for home, BASH Win10 - older info: http://johnmeister.com/linux/Microsoft/Win10-BASH/
  
https://www.smashwords.com/books/search?query=John+Meister  link to e-books
http://johnmeister.com/linux/The_Art_of_Linux_System_Administration.html
 http://www.oreilly.com/pub/au/6963 

giving some thought to the following option, not completely sure yet:
 < esc >:wq!  

Session #121 - February 23, 2018 - FRIDAY - cygwin scripting environments


Linux at Lunch - Session #121 - Feb 23, 2018 - cygwin, Linux 101, CSV, exiftool
Session #121 February 23, 2018 - FRIDAY - Linux at Lunch -  session opens 15 minutes before...
1) cygwin  - basic steps to setup
            (for home, BASH Win10 - older info: 
            http://johnmeister.com/linux/Microsoft/Win10-BASH/Win10-BASH-install.html)
2) Linux 101 - overview of commands
3) CSV - script showing conversion of system info to CSV to Excel
    part of the Video series:  http://johnmeister.com/linux/The_Art_of_Linux_System_Administration.html
    Script To Create System Info Spreadsheet 06m23s 0808-SA-tools managing the system resources 
    using a spreadsheet
4) exiftool and CSV
-------------------------------------------
This week we'll talk about installing Cygwin, show the basic setup with .bashrc and .vimrc,
 http://johnmeister.com/linux/basic-setup-for-BASH.html
 and then discuss commands briefly from the Overview page linked below, kind of a Linux 101, 
basically just talking about the various grouping of commands and then moving into some scripts 
to show the commands in action.  http://johnmeister.com/linux/Overview/
http://johnmeister.com/linux/Overview/LinuxOverview.pdf

ADDITIONAL INFO:
http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-1.html
http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-2.html
http://johnmeister.com/linux/Commands/vi-examples-of-sed-strings.html
http://johnmeister.com/linux/Commands/MAN/LPI-commands.html
http://johnmeister.com/linux/Intro-to-Linux/Special-Characters.pdf
http://johnmeister.com/linux/Intro-to-Linux/Slides/ALL.html
http://johnmeister.com/linux/Intro-to-Linux/OLDER-NOTES/

Then we will walk through the CSV script and show how to read data in and out of Excel
    with CSV (comma-separated values).
http://johnmeister.com/linux/SysAdmin/system-info-to-CSV-sysadmin.sh.html
from :  http://johnmeister.com/linux/The_Art_of_Linux_System_Administration.html

Script To Create System Info Spreadsheet 06m23s 0808-SA-tools - managing the system resources
using a spreadsheet.  I have an idea for a useful script for my photography: run exiftool,
filter out specific details, place them in CSV format with the server link, and add them to Excel.

http://johnmeister.com/linux/Scripts/extract-base-EXIF-info.sh.html

  I can then open that file in OpenOffice and create a PDF or an HTML page for navigation.
Always interested in which lens I used, exposures and so on.
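
A minimal sketch of that idea (the tag names and output file are illustrative; exiftool's -csv option does the formatting):

    # pull a few fields of interest into a CSV that Excel or OpenOffice Calc can open directly
    exiftool -csv -Model -LensModel -FocalLength -FNumber -ExposureTime -ISO *.jpg > photo-info.csv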



Session #120 - February 16, 2018 - FRIDAY - scripting environments

discussion of scripts

a couple of scripts and PERFORMANCE



Session #119 - February 9, 2018 - FRIDAY - updated Engineering Week presentation and adventures with AWK
UPDATED the material - will go through the new PowerPoint.
During the Engineering Week presentation I will provide an overview of Linux with links to more information,
and then describe a time savings of over 182 man-hours using Regular Expressions in an engineering analysis.
The presentation is scheduled for Tuesday, 20 February 2018, from 1300-1400 hrs at the Bomarc Complex.

How to use Regular Expressions to save 13 hours of Microsoft Excel sorting time with 5 SECONDS in a Linux Shell. Seriously... there are pictures... kind of... (btw, I love spreadsheets... but... less than FIVE SECONDS!!!)
(using cygwin on Win7, pure Linux about 0.2 seconds)
Then we will discuss some simple features of AWK; the ones we can remember.
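
The engineering data itself isn't shown here, but the general shape of that kind of win looks like this (the file name, pattern, and column numbers are hypothetical):

    # keep only the rows of interest from a large CSV export and sort them numerically
    # by the third column - seconds at the command line instead of hours of manual
    # filtering and re-sorting in a spreadsheet
    time grep -E 'FAIL|MARGINAL' results-export.csv | sort -t, -k3,3n > flagged-sorted.csv

    # or let awk do both the filtering and the column selection
    awk -F, '$5 > 100 { print $1","$3","$5 }' results-export.csv > over-limit.csv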


Session #118 - January 26, 2018 - FRIDAY - dry run of Engineering Week presentation
 During the Engineering Week presentation I will provide an overview of Linux with links to more information,
and then describe a time savings of over 182 man-hours using Regular Expressions in an engineering analysis.

Presentation is scheduled for Tuesday, 20 February 2018 from 1300-1400 hrs at the Bomarc Complex. 

How to use Regular Expressions to save 13 hours of Microsoft Excel sorting time with 5 SECONDS in a Linux Shell. Seriously... there are pictures... kind of... (btw, I love spreadsheets... but... FIVE SECONDS!!!)


Session #117 - January 19, 2018 - FRIDAY - BASH command precedence
Discussion of command precedence and useful HTML features, as demonstrated in the pages linked below:
There are two major lessons:
1) path considerations, special characters, built-ins, commands and aliases.
     see prior material at: bashrc-n-history-details.html
     see prior material at: basic-setup-for-BASH.html
     see example .bashrc file: bashrc-basic.html
2) practical use of HTML features, e.g. code for buttons used to hide material until expanded.
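
A quick way to see that precedence in action at the prompt (the alias is just an example):

    alias ls='ls --color=auto'   # an alias shadows the /bin/ls binary
    type ls                      # reports that ls is aliased
    type -a echo                 # lists the builtin first, then the /bin/echo found on the PATH
    command ls                   # skips aliases and functions; still finds builtins and PATH commands
    \ls                          # a leading backslash also bypasses the alias
    /bin/ls                      # a full path bypasses everything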




Session #116 - January 12, 2018 - FRIDAY - using command line tools to build a web page
############################################### 
1)  FILES TO MERGE INTO PARALLEL COLUMNS: used excel to create list  
      (moved columns in excel), copy/paste
 
    http://johnmeister.com/bible/ReadInOneYear/one-year-plan.txt
  - Added leading zeros so all numbers were 3 digits for sorting, underscores for spaces.
  - Global replace in vi to add "touch" at the front of each line (e.g. :%s/^/touch /), save, and run: sh ./listofnames
    (creates the empty files), then ls > Daily-Passages.html, edited and:
 
    http://johnmeister.com/bible/ReadInOneYear/Daily-Passages.html

Took the list and extracted only the chapters using cut to get the ranges needed.
   ls Day_212_Jul_07_31_Isaiah_57-59.html | cut -c 25-29  
        # tested placement of verses... off by 1
   ls Day_212_Jul_07_31_Isaiah_57-59.html | cut -c 26-30
   ls | cut -c 26-30 > create-files.sh 
        #  then I took the list of 365 days and created the list of chapters
--> echo Day_212_Jul_07_31_Isaiah_57-59.html | cut -c 26-30
57-59       #  so, VS="57-59" and would be created for each section. 
############################################### 
2)  created copies: kjv, nas, greek 
   cp gnt.txt g$VS.txt ; cp KJVb.txt k$VS.txt; cp NAS-NT.txt n$VS.txt 
############################################### 
3) vi - edit out all but chapters for reading: 
    vi file, /Mark 15, k, 1G, /Mark 17,dG,:wn... last,:wq
############################################### 
4) check lines on all three files (using cat and wc -l)
cat g$VS.txt | wc -l ; cat k$VS.txt | wc -l ; cat n$VS.txt | wc -l
####  time saver:  cat g*.txt | wc -l ; cat k*.txt | wc -l ; cat n*.txt | wc -l
############################################### 
5) copy empty file from web directory  (would do this as the last step in the script)
   DAY1 would be file in edit, DAYN is the next day.
############################################### 
6) paste tags and text together into file
  paste -d '\n' tags1.txt k$VS.txt tags2.txt n$VS.txt tags3.txt g$VS.txt tags4.txt > $DAY1
#### manual / time saver mode (grabbed from history, command line complete of Day...): 
#  paste -d '\n' tags1.txt k*.txt tags2.txt n*.txt tags3.txt g*.txt tags4.txt \
#                    > Day_294_Oct_10_21_Luke_1-2.html
############################################### 
7) vi $DAY1 - deleted empty rows at end;  :r FILE-BASE.txt ; type day and verses; save
############################################### 
8) scp file to server and mv to web directory
############################################### 
9) repeat for all 365 days: 1,189 chapters; 31,102 verses and 781,621 words (YMMV)

##########################################
# script for part of the process:
##########################################
#!/bin/bash
#############
DAY1="Day_293_Oct_10_20_Mark_15-16.html" ; VS="15-16"
DAYN="Day_294_Oct_10_21_Luke_1-2.html"
#############
cp gnt.txt g$VS.txt ; cp KJVb.txt k$VS.txt ; cp NAS-NT.txt n$VS.txt
vi ?$VS.txt
cat g$VS.txt | wc -l ; cat k$VS.txt | wc -l ; cat n$VS.txt | wc -l
read    # safety net - edit files in a different shell - hit enter when ready
#### THE MAIN COURSE: tags have html formatting including colors
paste -d '\n' tags1.txt k$VS.txt tags2.txt n$VS.txt tags3.txt g$VS.txt tags4.txt > $DAY1
###########
cat FILE-BASE.txt >> $DAY1    # I found that trimming the file of empty lines first was better
vi $DAY1 ; ls -al ; echo $DAY1
scp $DAY1 server:/home/luser/website/ReadInOneYear/
mv $DAY1 ../ReadInOneYear/
mv ?$VS.txt hold-tmp    # a safety net - just in case... (used it once so far)
#############
echo "next day"
cp ../10_Oct/$DAYN .
# cp ../07_Jul/Day_191_Jul_07_10_Ecclesiastes_1-4.html .    # manual steps for smaller sections
echo "next $DAYN" ; echo "===================="
###################################################### could queue up more days
# DAY1="Day_286_Oct_10_13_Mark_1-3.html" ; VS="1-3"
# DAYN="Day_287_Oct_10_14_Mark_4-5.html"
######################################################



2017


Session #115 - December 15, 2017 - FRIDAY - using find and perl
note on HTML: the color for the cell above is "ivory"; the font color is "black".
Colors are specified as rrggbb (red, green, blue), with each value between 0 and 255 (hex ff): 255,0,0 = red ; 0,255,0 = green ; 0,0,255 = blue.
(color names can be used too...)

NOTE: LAST SESSION OF 2017
http://johnmeister.com/linux/Intro-to-Linux/OLDER-NOTES/One-Hour-Linux-Sessions-2014-2017.html

using find and perl to update HTML pages for URL correction and consolidation of domains

http://johnmeister.com/linux/SysAdmin/Using-find-n-perl-to-manage-URLs.html
also: time, grep, col, and Apache: /etc/apache2/errors.conf and creating a missing.html page
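
The general pattern for that kind of fix looks like this (a sketch - the old and new URLs here are placeholders, not the exact substitution used):

    # rewrite an old domain to the consolidated one in every HTML file under the current directory
    find . -name '*.html' -print0 | \
        xargs -0 perl -pi -e 's!http://old-domain\.example/!http://johnmeister.com/linux/!g'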

Session #114 - December 8, 2017 - FRIDAY - Oracle VirtualBox and LVM


Expanding the capacity of a dynamic vdi disk in Oracle's VirtualBox

Built a default 64-bit Linux VM with CentOS; however, it needed more space,
   so the capacity had to be increased from the default of 8GB to about 14GB.
https://www.virtualbox.org/

http://johnmeister.com/linux/SysAdmin/Virtualization/VM_setup_SuSE_13.2/ALL.html
  
Steps to increase capacity of a vdi drive in VirtualBox 
        (Linux or MacOSx host) at the COMMAND LINE:

-->sudo VBoxManage modifyhd /home/luser/VirtualBox_VMs/Centos-Lab/Centos_Lab.vdi --resize 14480
 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Once that was done, checked settings in VirtualBox;  
        was good... but local drive in VM still not expanded.

FROM INSIDE THE VM, at the command line:  
    pvs ; pvdisplay ; vgs ; vgdisplay; lvs ; lvdisplay  
        # displays system values
    fdisk /dev/sda      
        #  create a partition in the "expanded" virtual drive
        p, n, p, 3, t, 8e, w    # print, new primary partition #3, type 8e (Linux LVM), write
    pvcreate /dev/sda3    ; pvs; vgs    
        # create the physical volume, check it
   vgextend centos_centos7-vm /dev/sda3  
        # extend the existing volume group onto the expanded physical drive
   vgs ; pvscan ; lvdisplay ; df -h; vgdisplay
   lvextend /dev/mapper/centos_centos7--vm-root /dev/sda3 
        # extend the logical volume into the expanded volume group
   df -h . ; vgs ; lvs 
   resize2fs /dev/mapper/centos_centos7--vm-root  
       ## FAILURE - resize2fs only handles ext2/3/4 file systems; the CentOS root is XFS
       ##           (should have used xfs_growfs to extend the file system)
--> resize2fs /dev/centos_centos7-vm/root   (FAILS on an RH-type system with an XFS root)
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/centos_centos7-vm/root
Couldn't find valid filesystem superblock.

FIX:
--> xfs_growfs  /dev/centos_centos7-vm/root

http://johnmeister.com/linux/FileSystems/ADD-6GB-to-vdi-CentOS-VM.html

process:  Order of precedence and logic:   - PHYSICAL   - VOLUME  - LOGICAL 
PHYSICAL VOLUME(s): pvs; pvdisplay; pvresize
VOLUME GROUP(s): vgs; vgdisplay; vgextend        Add physical volumes to a volume group
LOGICAL VOLUME(s): lvs ; lvdisplay; lvextend        Add space to a logical volume

        http://johnmeister.com/linux/FileSystems/setup-LVM.html

        http://johnmeister.com/linux/FileSystems/lvm-commands.html
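
Pulled together, the whole expansion reduces to a short sequence (a sketch using the same device and volume names as above; -l +100%FREE simply takes whatever space the new physical volume added):

    pvcreate /dev/sda3                                     # register the new partition as a physical volume
    vgextend centos_centos7-vm /dev/sda3                   # grow the volume group onto it
    lvextend -l +100%FREE /dev/centos_centos7-vm/root      # grow the root logical volume
    xfs_growfs /                                           # grow the XFS file system (resize2fs is for ext2/3/4)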


Session #113 - December 1, 2017 - FRIDAY - RECAP of 3 years of Linux at lunch!


recap of 3 years worth of sessions

http://johnmeister.com/linux/Intro-to-Linux/session-list-2017.html

DETAILS 2014-2017 Linux sessions

SESSION #2 - December 3, 2014 - SAW:"simple always works"

provided link to exercise #1,
discussed environment (.bashrc), path, chmod, discussed "SAW" -
SAW:"simple always works",
History files viewed with sort, uniq, and how to create notes for reference:
ls, sort, grep, uniq, wc -l
(after using a command, use history recall, add echo and quotes around it,
and then append to ~/bin/cool-commands.txt e.g.

echo "ls *.jpg > ALL.html ; \
perl -pi -e 's/(.*)/<img src="$1"><BR>$1<HR>/g' ALL.html ; \
perl -pi -e 's/<img src=""><BR><HR>//g' ALL.html ; \
cat ALL.html" >> ~/bin/cool-commands.txt

----------------------------------------------------------------------------
Links: http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-1.html
(do this on your system, use "script exercise-1.raw",
when finished type exit, then cat exercise-1.raw | col -b > exercise-1.txt
# then edit in vi and save)
http://johnmeister.com/linux/Scripts/chksys.sh.html
(use the vi editor to create and run on your system)
http://johnmeister.com/linux/Scripts/man-page-create-textfiles.sh.txt
(mkdir and cd LAB, then create this script in vi and run)
http://johnmeister.com/linux/Commands/grep-awk-uniq.html
(the output of the script above is needed to use the commands,
YMMV, counts may be different)


SESSION #1 - November 26, 2014 - OVERVIEW

	 provided overview of Linux using the pdf: 
        http://johnmeister.com/linux/Overview/LinuxOverview.pdf  
	5 basic commands: man, ls (ls -al, ls -Al), cd, pwd, more 
	discussed .bashrc and showed a few script examples, talked about "script"
----------------------------------------------------------------------------
Links:	http://johnmeister.com/linux/Overview/LinuxOverview.pdf    
        (print out and use as a guide)
	http://johnmeister.com/linux/Intro-to-Linux/Special-Characters.pdf 
        (print out and keep as a guide)
	http://johnmeister.com/linux/Notes/bashrc-the-dotfile.html  
        (copy and build your own .bashrc for Cygwin or Linux)
	http://johnmeister.com/linux/Scripts/chksys.sh.html  
	http://johnmeister.com/linux/Intro-to-Linux/lab-exercise-1.html  
        (use the commands and the vi editor)


THE CORE MATERIAL: The power of the command line - Simply Linux: Basics
We will work our way through the Linux book (Simply Linux: Basics), under construction, based on http://johnmeister.com/linux/Overview/
	http://johnmeister.com/linux/Overview/LinuxOverview.pdf  (print out and use as a guide)
	http://johnmeister.com/linux/Overview/Linux-PowerPoint-2004-overview.pdf
	http://johnmeister.com/linux/Notes/Real-world-Linux-Commands.
	http://johnmeister.com/linux/Intro-to-Linux/Special-Characters.pdf  (print out and use as a guide)
	http://johnmeister.com/linux/Notes/bashrc-the-dotfile.  (copy and build your own .bashrc for Cygwin or Linux)
