Dockercraft

I woke up this morning not feeling like I was enough of a hipster admin, so I decided to write this keyword… er… educational post about Docker. I am going to spare you the explanation of what Docker is and what it does; you can find that elsewhere. So let's say you have Docker installed somewhere and you want to try running your own Minecraft server. You could do something like this: ...
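
Something along these lines gets you a running server (a minimal sketch; the itzg/minecraft-server image and its flags are assumptions here, not necessarily the exact commands from the full post):

<code>
# Start a Minecraft server container in the background, exposing the default port 25565.
# The EULA environment variable is required by the itzg/minecraft-server image.
docker run -d \
  --name minecraft \
  -p 25565:25565 \
  -e EULA=TRUE \
  itzg/minecraft-server

# Follow the server logs to watch world generation
docker logs -f minecraft
</code>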

March 3, 2015 · 2 min · Jered Sutton

When strace isn't enough Part 1

An important tool in any Linux admin’s toolkit is the venerable strace command. It lets us see what a program is actually doing. As awesome as strace can be, it doesn’t tell us everything. This series of articles will get you familiar with some of the other commands and approaches for gaining insight into program execution. ...
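
As a taste of the kind of tooling the series covers (the specific commands in the full articles may differ; these are simply common companions to strace):

<code>
# strace shows system calls, but not calls into shared libraries; ltrace covers those.
strace -f -e trace=open,openat -p 1234    # watch file opens in a running process (1234 is a placeholder PID)
ltrace -c ls >/dev/null                   # summarize the library calls made by ls
</code>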

November 4, 2013 · 3 min · jsutton

Back to Basics: 5 tips for the IT worker in all of us

As I have worked in the IT industry, I have come across a few ideas that seem to appear over and over. These ideas are fundamentally simple, and they are good things to think about regardless of what level you work at in IT. Take some time to meditate on these and become an IT superhero. ...

October 27, 2013 · 3 min · jsutton

Quick Tip: View Linux process limits

I have on several occasions needed to troubleshoot issues which wound up being caused by Linux limiting the number of open files for a given process. This can be an annoying issue to troubleshoot, since many programs do not handle the condition gracefully and Linux does not log anything by default that points to it. This really applies to all of the Linux process limits, not just open files. ...
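
The quickest way to see what a running process is actually allowed to do is to read its limits straight out of /proc (a quick sketch; substitute your own PID):

<code>
# Show all limits for a running process (1234 is a placeholder PID)
cat /proc/1234/limits

# Just the open-files limit, via util-linux's prlimit
prlimit --nofile --pid 1234

# The limits your current shell hands to new processes
ulimit -n     # soft limit on open files
ulimit -Hn    # hard limit
</code>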

July 14, 2013 · 2 min · jsutton

Bash Nagios plugin

Today let's have a look at one way to construct a Nagios plugin in Bash. I would usually write these in Perl, but sometimes that is not possible. This plugin is actually written to be executed via NRPE.

<code>
#!/bin/bash
# bash nagios plugin

###
# Variables
###
OK=0
WARNING=1
CRITICAL=2
UNKNOWN=-1
TO_RETURN=${OK}
TO_OUTPUT=''

# Print usage information and exit
print_usage(){
  echo -e "\n" \
    "usage: ./check_uptime -w 20 -c 30 \n" \
    "\n" \
    "-w <days> warning value\n" \
    "-c <days> critical value\n" \
    "-h this help\n" \
    "\n" && exit 1
}

###
# Options
###
# Loop through $@ to find flags
while getopts ":hw:c:" FLAG; do
  case "${FLAG}" in
    w) # Warning value
      WARNING_VALUE="${OPTARG}" ;;
    c) # Critical value
      CRITICAL_VALUE="${OPTARG}" ;;
    h) # Print usage information
      HELP=1;;
    [:?]) # Print usage information
      print_usage;;
  esac
done

###
# Functions
###
log_date(){
  echo $(date +"%b %e %T")
}

error() {
  NOW=$(log_date)
  echo "${NOW}: ERROR: $1"
  exit 1
}

warning() {
  NOW=$(log_date)
  echo "${NOW}: WARNING: $1"
}

info() {
  NOW=$(log_date)
  echo "${NOW}: INFO: $1"
}

# Extract the number of days of uptime
get_cmd_output(){
  uptime | sed 's/.*up \([0-9]*\) day.*/\1/' || error "failed to run command"
}

###
# Program execution
###
[ "${HELP}" ] && print_usage

if [ ${WARNING_VALUE} ] && [ ${CRITICAL_VALUE} ]
then
  CMD_OUTPUT=$(get_cmd_output)
else
  print_usage
fi

if [ "${CMD_OUTPUT}" ] && [ ${CMD_OUTPUT} -gt ${CRITICAL_VALUE} ]
then
  TO_RETURN=${CRITICAL}
elif [ "${CMD_OUTPUT}" ] && [ ${CMD_OUTPUT} -gt ${WARNING_VALUE} ]
then
  TO_RETURN=${WARNING}
elif [ "${CMD_OUTPUT}" ] && [ ${CMD_OUTPUT} -gt 0 ]
then
  TO_RETURN=${OK}
else
  TO_RETURN=${UNKNOWN}
fi

if [ ${TO_RETURN} == ${CRITICAL} ]
then
  TO_OUTPUT="CRITICAL "
elif [ ${TO_RETURN} == ${WARNING} ]
then
  TO_OUTPUT="WARNING "
elif [ ${TO_RETURN} == ${OK} ]
then
  TO_OUTPUT="OK "
else
  TO_OUTPUT="UNKNOWN "
fi

TO_OUTPUT="${TO_OUTPUT}| uptime=${CMD_OUTPUT};${WARNING_VALUE};${CRITICAL_VALUE}"

echo "${TO_OUTPUT}"
exit ${TO_RETURN}
</code>

Let's break it down… ...
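
To sanity-check the plugin by hand before wiring it into Nagios, run it directly and look at the exit code; the nrpe.cfg entry below is a sketch, and the plugin path is an assumption that will vary by install:

<code>
# Run the check by hand: warn at 20 days of uptime, go critical at 30
./check_uptime -w 20 -c 30
echo $?    # 0=OK, 1=WARNING, 2=CRITICAL, per the values defined in the script

# Example nrpe.cfg command definition (adjust the path for your install)
command[check_uptime]=/usr/local/nagios/libexec/check_uptime -w 20 -c 30
</code>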

April 29, 2013 · 3 min · jsutton

Bash Parallel Execution

If you have ever wanted an easy way to execute multiple jobs in parallel in Bash, then this is the snippet for you. It was originally posted on Stack Exchange and has been modified a bit.

<code>
#!/bin/bash

# how many jobs to run at one time
JOBS_AT_ONCE=20

# The bgxupdate and bgxlimit functions below allow for
# running X jobs in parallel in bash. They are taken from:
# http://stackoverflow.com/questions/1537956/bash-limit-the-number-of-concurrent-jobs/1685440#1685440

# bgxupdate - update active processes in a group.
#   Works by transferring each process to a new group
#   if it is still active.
# in:  bgxgrp   - current group of processes.
# out: bgxgrp   - new group of processes.
# out: bgxcount - number of processes in new group.
bgxupdate() {
  bgxoldgrp=${bgxgrp}
  bgxgrp=""
  ((bgxcount = 0))
  bgxjobs=" $(jobs -pr | tr '\n' ' ')"
  for bgxpid in ${bgxoldgrp} ; do
    echo "${bgxjobs}" | grep " ${bgxpid} " >/dev/null 2>&1
    if [[ $? -eq 0 ]] ; then
      bgxgrp="${bgxgrp} ${bgxpid}"
      ((bgxcount = bgxcount + 1))
    fi
  done
}

# bgxlimit - start a sub-process with a limit.
#   Loops, calling bgxupdate until there is a free
#   slot to run another sub-process. Then runs it
#   and updates the process group.
# in:  $1     - the limit on processes.
# in:  $2+    - the command to run for the new process.
# in:  bgxgrp - the current group of processes.
# out: bgxgrp - new group of processes
bgxlimit() {
  bgxmax=$1 ; shift
  bgxupdate
  while [[ ${bgxcount} -ge ${bgxmax} ]] ; do
    sleep 1
    bgxupdate
  done
  if [[ "$1" != "-" ]] ; then
    $* &
    bgxgrp="${bgxgrp} $!"
  fi
}

bgxgrp="process_group_1"

# Queue one ssh job per host, never running more than JOBS_AT_ONCE at a time
for LINE in `cat hosts`
do
  CHECK_SCRIPT='echo $(hostname),$(cat /etc/debian_version)'
  bgxlimit $JOBS_AT_ONCE ssh ${LINE} "${CHECK_SCRIPT}"
done

# Wait until all queued processes are done.
bgxupdate
while [[ ${bgxcount} -ne 0 ]] ; do
  oldcount=${bgxcount}
  while [[ ${oldcount} -eq ${bgxcount} ]] ; do
    sleep 1
    bgxupdate
  done
done
</code>

In this script the primary changes are defining the maximum number of simultaneous jobs and doing somewhat useful work: each job returns the remote host's hostname and Debian version. ...
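
To try it out, all the loop needs is a hosts file with one hostname per line and SSH keys already set up; something like this (the hostnames and the script's filename are placeholders):

<code>
# hosts: one hostname per line
cat > hosts <<'EOF'
web01.example.com
web02.example.com
db01.example.com
EOF

# Run the script; at most JOBS_AT_ONCE ssh sessions will be in flight at once
./parallel_exec.sh
</code>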

April 14, 2013 · 2 min · jsutton

Turboprop

As an extension of my previous post on parallel execution, I present turboprop. The initial version of this script optimizes a MySQL database, with multiple tables being processed at the same time. In the future it may be extended to allow more operations from the command line, such as MySQL dumps.

<code>
#!/bin/bash
# turboprop

# how many jobs to run at one time
JOBS_AT_ONCE=20

# Command to run in parallel, in this case mysqlcheck -o
COMMAND="mysqlcheck -o"

# Print usage information and exit
print_usage(){
  echo -e "\n" \
    "usage: ./turboprop -d databasename \n" \
    "Optimizes mysql tables in parallel\n" \
    "-d <databasename> Database to optimize\n" \
    "-h this help\n" \
    "\n" && exit 1
}

###
# Options
###
# Loop through $@ to find flags
while getopts ":hd:" FLAG; do
  case "${FLAG}" in
    d) # Database name
      DB=${OPTARG} ;;
    h) # Print usage
      print_usage;;
    [:?])
      print_usage;;
  esac
done

[ ! ${DB} ] && print_usage

###
# Functions
###
# The bgxupdate and bgxlimit functions below allow for
# running X jobs in parallel in bash. They are taken from:
# http://stackoverflow.com/questions/1537956/bash-limit-the-number-of-concurrent-jobs/1685440#1685440

# bgxupdate - update active processes in a group.
#   Works by transferring each process to a new group
#   if it is still active.
# in:  bgxgrp   - current group of processes.
# out: bgxgrp   - new group of processes.
# out: bgxcount - number of processes in new group.
bgxupdate() {
  bgxoldgrp=${bgxgrp}
  bgxgrp=""
  ((bgxcount = 0))
  bgxjobs=" $(jobs -pr | tr '\n' ' ')"
  for bgxpid in ${bgxoldgrp} ; do
    echo "${bgxjobs}" | grep " ${bgxpid} " >/dev/null 2>&1
    if [[ $? -eq 0 ]] ; then
      bgxgrp="${bgxgrp} ${bgxpid}"
      ((bgxcount = bgxcount + 1))
    fi
  done
}

# bgxlimit - start a sub-process with a limit.
#   Loops, calling bgxupdate until there is a free
#   slot to run another sub-process. Then runs it
#   and updates the process group.
# in:  $1     - the limit on processes.
# in:  $2+    - the command to run for the new process.
# in:  bgxgrp - the current group of processes.
# out: bgxgrp - new group of processes
bgxlimit() {
  bgxmax=$1 ; shift
  bgxupdate
  while [[ ${bgxcount} -ge ${bgxmax} ]] ; do
    sleep 1
    bgxupdate
  done
  if [[ "$1" != "-" ]] ; then
    $* &
    bgxgrp="${bgxgrp} $!"
  fi
}

###
# Program Execution
###
bgxgrp="process_group_1"

# -N skips the column header so the loop only sees table names;
# mysqlcheck needs the database name in front of each table name.
for TABLE in `mysql -N ${DB} -e 'show tables'`
do
  bgxlimit ${JOBS_AT_ONCE} ${COMMAND} ${DB} ${TABLE}
done

# Wait until all queued processes are done.
bgxupdate
while [[ ${bgxcount} -ne 0 ]] ; do
  oldcount=${bgxcount}
  while [[ ${oldcount} -eq ${bgxcount} ]] ; do
    sleep 1
    bgxupdate
  done
done
</code>
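
Typical usage is just pointing it at a database (the name here is a placeholder); credentials are whatever your mysql client is already configured to use:

<code>
# Optimize every table in exampledb, running up to 20 mysqlcheck jobs at once
./turboprop -d exampledb
</code>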

April 7, 2013 · 3 min · jsutton

Agile vs Agile

I have had occasion recently to read about different development methodologies, and in doing so I realized that I had a fundamental misunderstanding of agile. What struck me as interesting is that so many people derive so many different meanings from this word. Here are just a few: ...

April 1, 2013 · 2 min · jsutton

Bash Best(ish) practices part 4

This is the final post in the series. I will follow up this post by committing my code to GitHub for easy access.

<code>
#!/bin/bash
# bash template

# Print usage information and exit
print_usage(){
  echo -e "\n" \
    "usage: ./bashtemplate -o option \n" \
    "\n" \
    "-o <option> an option\n" \
    "-h this help\n" \
    "\n" && exit 1
}

###
# Options
###
# Loop through $@ to find flags
while getopts ":ho:" FLAG; do
  case "${FLAG}" in
    o) # Our option
      OPTION="${OPTARG}" ;;
    h) # Print usage information
      HELP=1;;
    [:?]) # Print usage information
      print_usage;;
  esac
done

###
# Functions
###
log_date(){
  echo $(date +"%b %e %T")
}

error() {
  NOW=$(log_date)
  echo "${NOW}: ERROR: $1"
  exit 1
}

warning() {
  NOW=$(log_date)
  echo "${NOW}: WARNING: $1"
}

info() {
  NOW=$(log_date)
  echo "${NOW}: INFO: $1"
}

# Do something
do_something(){
  info "Doing something..."
  warning "Encountered an anomaly while doing something."
}

###
# Program execution
###
[ "${HELP}" ] && print_usage

if [ ${OPTION} ]
then
  info "Executing do_something()"
  do_something
else
  print_usage
fi
</code>

I have removed much of the extraneous stuff from the previous examples, as I would like this to be a generally useful starting point for writing scripts. ...
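
A run with the template as-is looks roughly like this (timestamps are illustrative):

<code>
$ ./bashtemplate -o somevalue
Mar 30 12:00:01: INFO: Executing do_something()
Mar 30 12:00:01: INFO: Doing something...
Mar 30 12:00:01: WARNING: Encountered an anomaly while doing something.
</code>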

March 30, 2013 · 2 min · jsutton

Bash Best(ish) practices part 3

This is the third installment of the best(ish) series. In this installment we will look at some basic logging functions that simplify life. Here is the code so far.

<code>
#!/bin/bash
# Copy some files

# Print usage information and exit
print_usage(){
  echo -e "\n" \
    "usage: ./copy_stuff -s sourcedir -d destdir \n" \
    "\n" \
    "-s <sourcedir> source directory\n" \
    "-d <destdir> destination directory\n" \
    "-h this help\n" \
    "\n" && exit 1
}

###
# Options
###
# Loop through $@ to find flags
while getopts ":hs:d:" FLAG; do
  case "${FLAG}" in
    s) # Our source
      SOURCE="${OPTARG}" ;;
    d) # Our destination
      DEST="${OPTARG}" ;;
    h) # Print usage information
      HELP=1;;
    [:?]) # Print usage information
      print_usage;;
  esac
done

###
# Functions
###
log_date(){
  echo $(date +"%b %e %T")
}

error() {
  NOW=$(log_date)
  echo "${NOW}: ERROR: $1"
  exit 1
}

warning() {
  NOW=$(log_date)
  echo "${NOW}: WARNING: $1"
}

info() {
  NOW=$(log_date)
  echo "${NOW}: INFO: $1"
}

# Copy some files from one place to another
copy_files(){
  info "Copying ${SOURCE}/foo to ${DEST}/foo"
  info "Copying ${SOURCE}/bar to ${DEST}/bar"
  info "Copying ${SOURCE}/baz/foo to ${DEST}/baz/foo"
  info "Copying ${SOURCE}/baz/bar/foo to ${DEST}/baz/bar/foo"
  info "Copying ${SOURCE}/dir/somefile to ${DEST}/dir/someotherfile"
  echo
}

# Restart some service
restart_service(){
  info "Stopping the service"
  info "Making sure the service is stopped"
  info "Reticulating splines..."
  warning "splines not reticulated"
  info "Starting the service"
  echo
}

# Test to see if the service is running
test_service(){
  info "service is tested"
}

###
# Program execution
###
[ "${HELP}" ] && print_usage

if [ ${SOURCE} ] && [ ${DEST} ]
then
  info "Copying files from ${SOURCE} to ${DEST}"
  copy_files
  info "Restarting the service"
  restart_service
  info "Testing the service"
  test_service
fi
</code>

Let's break it down… ...
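
Run against a couple of throwaway directories, the logging functions produce output along these lines (paths and timestamps are placeholders, and the trace is trimmed):

<code>
$ ./copy_stuff -s /tmp/src -d /tmp/dst
Mar 24 09:15:02: INFO: Copying files from /tmp/src to /tmp/dst
Mar 24 09:15:02: INFO: Copying /tmp/src/foo to /tmp/dst/foo
Mar 24 09:15:02: INFO: Reticulating splines...
Mar 24 09:15:02: WARNING: splines not reticulated
Mar 24 09:15:02: INFO: Starting the service
</code>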

March 24, 2013 · 3 min · jsutton