Intro to K8S Helm Presentation

On Thursday, June 13, I gave a brief presentation introducing Kubernetes Helm at Rackspace's San Antonio HQ as part of the monthly Kubernetes San Antonio meetup. At the end, I gave a quick demonstration on creating a simple Helm chart that went okay, minus a few technical difficulties.
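
If you want to try the demo yourself, a simple chart can be scaffolded and installed with just a few commands. This is a rough sketch using the Helm 2-era CLI that was current at the time; the chart and release names are illustrative.

# Scaffold a chart skeleton, sanity check it, and install it into the cluster
helm create mychart
helm lint mychart
helm install --name demo ./mychart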

Considering this was the first presentation I gave at a meetup, there were obviously areas I could improve on for future presentations; however, you have to start somewhere.

I will say that preparing a presentation greatly helps you better understand a topic, whether it is a new technology or just about anything else. Even though presenting may be scary to some, it is definitely worth it.

Attached are the slides from the presentation.

Software Usability Abstract

There are a few useful notes on software usability / Human Computer Interaction (HCI) written on the hallway whiteboards in the CenturyLink St. Louis office that I feel provide a good summary of the topic. These reminded me of the HCI class I took with Bill White.

“To Design is to plan, to order, to relate, and to control. In short, it opposes all means of disorder and accident” – Emil Ruder

Elements of User Centered Design

  • P: Personas
  • UA: User Analytics
  • UT: Usability Testing
  • CI: Contextual Inquiry (Context by user)
  • ID: Interactive Design
  • HE: Heuristic Analysis (Best Practice Review)
  • PT: Prototype Testing
  • CA: Competitive Analysis
  • IA: Information Architecture
  • VD: Visual Design

Usability: The effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments.


10 Usability Best Practices

  1. Visibility of System Status: Give users appropriate feedback about what is going on.
  2. User Control and Freedom: Support undo, redo and exit points to help users leave an unwanted state caused by mistakes.
  3. Aesthetic and Minimalist Design: Don’t show irrelevant or rarely needed information since every extra element diminishes the relevance of the others.
  4. Flexibility and Efficiency of Use: Make the system efficient for different experience levels through shortcuts, advanced tools and frequent actions.
  5. Help and Documentation: Make necessary help and documentation easy to find and search focused.
  6. Match Between System and the Real World: Use real world words, concepts and conventions familiar to the users in a natural and logical order.
  7. Error Prevention: Prevent problems from occurring: eliminate error prone conditions or check for them before users commit to action.
  8. Consistency and Standards: Follow platform conventions through consistent words, situations and actions.
  9. Recognition Rather than Recall: Make objects, actions, and options visible at the appropriate time to minimize users’ memory load and facilitate decisions.
  10. Help Users Recognize, Diagnose, and Recover from Errors: Express error messages in plain language (no codes) to indicate the problem and suggest solutions.

Deploying WordPress with Terraform

Terraform is an open source Infrastructure as Code (IaC) tool developed by HashiCorp, and it supports most major public cloud platforms such as AWS, Google Cloud, Azure, DigitalOcean, and so on.

What makes Infrastructure as Code special? It enables you to automatically deploy and manage the infrastructure needed to run technology stacks, such as WordPress, through software instead of manual processes. IaC enables one to deploy and manage cloud servers, create subnets, update DNS and more with code that is stored in a version control system such as Git. In fact, this website was deployed using both Terraform and Ansible. Terraform provisioned the DigitalOcean cloud server, configured firewall rules that only allow HTTPS traffic through CloudFlare, and configured the DNS records on CloudFlare for this website. Ansible was then used to configure the cloud server with WordPress. Here is the GitHub repository for those of you who are interested: HauptJ/WordPress-CloudFlare-Terraform

One useful feature of Terraform is that it keeps track of the resources it created in a state file. This makes managing those resources easier later on in case you ever want to add, update, or destroy them.

First you need to declare your service providers. In this example, I am using DigitalOcean for their Infrastructure as a Service (IaaS) offerings and CloudFlare for their DNS and Software as a Service (SaaS) offerings. Terraform interacts with the service providers’ APIs, so you will need to have the required API tokens readily available. For security reasons, storing hard-coded credentials such as API tokens in a repository is a terrible practice. One safe method of passing in credentials is to set them as temporary environment variables.
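
For reference, here is a minimal sketch of what the provider declarations might look like, using Terraform 0.11-era syntax; the variable names are illustrative.

variable "do_token" {}
variable "cloudflare_email" {}
variable "cloudflare_token" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

provider "cloudflare" {
  email = "${var.cloudflare_email}"
  token = "${var.cloudflare_token}"
}

The tokens can then be exported as TF_VAR_do_token, TF_VAR_cloudflare_email, and TF_VAR_cloudflare_token environment variables before running Terraform, so they never land in the repository.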

After you declare your providers, you can start creating resources such as servers. Terraform provides provisioners to configure servers after they are deployed. Unfortunately, it does not support Ansible out of the box. However, Adam Mckaig created a Golang project that utilizes the Terraform state file to generate a dynamic inventory for Ansible. Here is the GitHub repository for his tool: adammck/terraform-inventory

As shown below, you will notice the local-exec provisioner that executes Ansible with a specified Terraform state file. The state file is used to generate a dynamic inventory for Ansible that contains the wordpress server as a member of the wordpress group.
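
A hedged sketch of that wiring; the droplet arguments and the playbook name wordpress.yml are illustrative.

resource "digitalocean_droplet" "wordpress" {
  image    = "centos-7-x64"
  name     = "wordpress"
  region   = "nyc3"
  size     = "s-1vcpu-1gb"
  ssh_keys = ["${var.ssh_fingerprint}"]

  # Generate a dynamic inventory from the state file with
  # terraform-inventory and run the playbook against it.
  provisioner "local-exec" {
    command = "TF_STATE=terraform.tfstate ansible-playbook --inventory-file=$(which terraform-inventory) wordpress.yml"
  }
}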

Terraform can also configure firewall policies, or security groups as they are called in AWS, with many cloud service providers. Below is an example of a firewall policy that only allows incoming HTTPS traffic from CloudFlare’s IPv6 address ranges. The idea with this is to prevent rogue actors from trying to bypass the Web Application Firewall (WAF) protecting this site. This also demonstrates a cool feature that a few WAF providers offer, which is the ability to proxy incoming IPv4 requests over IPv6 to the end host server. For this website, Nginx is only configured to listen for HTTPS connections over IPv6, and CloudFlare is not only proxying IPv4 connections, it is also redirecting HTTP requests to use HTTPS.

In order to associate this firewall policy with the DigitalOcean cloud server, you need to specify the droplet ID, which is generated as the server is deployed; you can see this in the droplet_ids argument of the sketch below.
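
A hedged sketch of such a firewall resource, with only a subset of CloudFlare's published IPv6 ranges shown:

resource "digitalocean_firewall" "wordpress" {
  name        = "wordpress-https-cloudflare"
  droplet_ids = ["${digitalocean_droplet.wordpress.id}"]

  # Only allow HTTPS in from CloudFlare's IPv6 ranges (subset shown).
  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["2400:cb00::/32", "2606:4700::/32"]
  }

  # Allow everything out.
  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}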

After the server is deployed, the DNS records are updated so CloudFlare can proxy HTTPS requests to the server. For management purposes, I also configured an IPv6 AAAA record and an IPv4 A record to point directly to the server, bypassing CloudFlare. This is to make logging into the server with SSH easier in the rare event it is necessary for troubleshooting purposes or to inspect anomalies caused by malicious actors.

You will want to make sure that the DNS records are created after the server is deployed. To do so, you can specify the resources that need to be created beforehand using depends_on, as demonstrated in the sketch below.
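
A hedged sketch of the records, using the older CloudFlare provider syntax that takes a domain argument; the record names are illustrative.

resource "cloudflare_record" "www" {
  domain     = "${var.domain}"
  name       = "www"
  type       = "AAAA"
  value      = "${digitalocean_droplet.wordpress.ipv6_address}"
  proxied    = true
  depends_on = ["digitalocean_droplet.wordpress"]
}

# Unproxied record for direct SSH access to the server.
resource "cloudflare_record" "direct" {
  domain     = "${var.domain}"
  name       = "direct"
  type       = "AAAA"
  value      = "${digitalocean_droplet.wordpress.ipv6_address}"
  proxied    = false
  depends_on = ["digitalocean_droplet.wordpress"]
}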

Once you have all the resources defined, you can deploy them. To do so, you first need to initialize Terraform by running terraform init. Then you need to generate a deployment plan by running terraform plan -out=wordpress.tfplan. After the plan file has been generated, you can use it to deploy everything by running terraform apply wordpress.tfplan.

If you wish to destroy everything, you can generate a plan to destroy the resources you created by running terraform plan -destroy -out=wordpress_down.tfplan. Then you can destroy everything by running terraform apply wordpress_down.tfplan.

Fatpipe.org IP Packet Header Drawings

Matt Baxter created very descriptive IP packet header drawings in 2008. Unfortunately, his website, which was where he posted them, has been inaccessible for a long time. I was able to recover copies of the original PDF files using the Internet Archive’s Wayback Machine. I uploaded the PDF versions of the drawings here in case anyone else is interested in scalable, high-quality copies of them.

TCP Packet Header

IPv4 Packet Header

IPv6 Packet Header

UDP Packet Header

ICMP Packet Header

Integration Testing Ansible Playbooks with Travis CI and Docker

The process behind performing integration tests on Ansible playbooks is almost exactly the same as the one used to test individual roles. In fact, this tutorial is based on a modified version of Continuous Testing of Ansible Roles with Docker and Travis CI by Ignacio Sánchez Ginés. This tutorial is a demonstration of how I set up continuous integration testing of the WordPress playbook I wrote to deploy this website.

First of all, you will need to create an Ansible inventory file with localhost as a host in your desired host group. As a group variable, you will need to specify the name of the local user and that the connection is local.
See: Working with Inventory

For example, you should have something like this in your hosts file.
[wordpress]
localhost

[wordpress:vars]
ansible_connection=local
ansible_user=root

Now that you have a hosts file, you will COPY it into your Docker image in your Dockerfile. You will obviously want to install ansible, sudo, and any other dependencies you may need using RUN commands. If you need to install an Ansible Galaxy role, you can do so with a RUN command after Ansible has been installed.
See: Dockerfile reference and Best practices for writing Dockerfiles

Installing Ansible Galaxy Roles:
# Install Dependencies from Ansible Galaxy
RUN ansible-galaxy install geerlingguy.repo-epel
RUN ansible-galaxy install geerlingguy.repo-remi

This is the CentOS 7 Dockerfile I use to CI test my WordPress playbook.
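
As a rough guide, a minimal CentOS 7 test image along these lines can look like the following sketch; the packages and paths are illustrative rather than the exact file.

FROM centos:7

# Install Ansible, sudo, and friends from EPEL
RUN yum -y install epel-release && \
    yum -y install ansible sudo which && \
    yum clean all

# Install dependencies from Ansible Galaxy
RUN ansible-galaxy install geerlingguy.repo-epel
RUN ansible-galaxy install geerlingguy.repo-remi

# Copy the inventory file into the image
COPY hosts /etc/ansible/hosts

# Boot systemd as PID 1 so service tasks behave like on a real host
CMD ["/usr/sbin/init"]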

If you wish to use Ubuntu or Fedora, you can find similar systemd and init based images in Ignacio’s GitHub repository.

Now you need to create a .travis.yml configuration file. Below, you can find the one I use to test my WordPress playbook. I run a syntax check before I try running the actual playbook. Also, if your playbook contains secret variables such as passwords or API keys, I recommend encrypting them using Ansible Vault, and keeping them in a directory that is outside of your repository. You can create a “dummy” secrets file for testing purposes.
See: How To Use Vault to Protect Sensitive Ansible Data on Ubuntu 16.04
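
As a reference point, here is a hedged sketch of such a .travis.yml, assuming a single CentOS 7 build with the playbook mounted into the container at /ansible; the paths and names are illustrative.

---
language: python
services: docker

env:
  - distribution: centos
    version: 7

before_install:
  # Build the test image from the matching Dockerfile
  - 'sudo docker build --no-cache --rm --file=Dockerfile.${distribution}-${version} --tag=${distribution}-${version}:ansible .'

script:
  # Boot the container with systemd as PID 1 and the playbook mounted read-only
  - 'sudo docker run --detach --privileged --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume="${PWD}:/ansible:ro" --name wordpress ${distribution}-${version}:ansible'
  # Syntax check first, then run the playbook itself
  - 'sudo docker exec wordpress ansible-playbook /ansible/wordpress.yml --syntax-check'
  - 'sudo docker exec wordpress ansible-playbook /ansible/wordpress.yml'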

If you would like to keep your Dockerfiles in a subdirectory, you can specify the location using the --file flag, and replace the ., which specifies the directory you are currently in, with the name of the subdirectory you want.

For example, if you want to store your Dockerfiles in the ./travis subdirectory, your build command would look something like this.
- 'sudo docker build --no-cache --rm --file=travis/Dockerfile.${distribution}-${version} --tag=${distribution}-${version}:ansible travis'

If you have variables you would like to test different values for, you can use sed to change them.
See: Sed – An Introduction and Tutorial by Bruce Barnett

For example:
- '/bin/sed -e "s/resty_install_from_source: true/resty_install_from_source: false/" -i defaults/main.yml'

After you change the variables, you will obviously need to run another series of tests. Below is the CI test configuration I am using for my OpenResty role.

Finally, you will need to enable Travis CI on your repository; you can reference the Travis CI getting started guide for instructions.

If you prefer a video tutorial on how to set up Travis CI, here is a decent one.

Regular Expressions and In-Place Slice Manipulation in Go

Regular expressions are very useful for parsing strings. If you need to replace a substring or split a string into a slice, you should consider using regular expressions. I will admit that I am not an expert in using them; however, I will not dismiss their usefulness. You may find the RexEgg.com Regex Cheat Sheet very useful.

In Go, the regexp package includes many useful functions that utilize regular expressions.

Here is a simple CLI program that calculates the sum of the integer values in a string. It splits the string into a slice and then sums up all of the integer values in the slice. In this application, the string is split up using the regular expression [^0-9]+. Inside the brackets, the ^ negates the character class, so we match characters that are not digits in the range [0-9], while the + “greedy” quantifier matches runs of one or more such characters. In this example, we are simply delimiting the string based on non-digit values such as letters and other special characters.

One important thing to note about func (*Regexp) Split is that it creates an empty element in the slice if the string begins or ends with a delimiter match. In this example, this is handled by checking the error returned when strconv.Atoi(intSlice[i]) fails to convert the element at a given index to an integer. When that happens, the bad element is deleted in place by appending the elements that follow it to the elements before it: intSlice = append(intSlice[:i], intSlice[i+1:]...). After you remove the element, the loop index (and any cached slice length) needs to be decremented so that the element shifted into the freed slot is not skipped.

SEE: SliceTricks
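
Here is a minimal sketch of the whole program under those assumptions; it demonstrates the same technique, though it is not necessarily the exact file.

package main

import (
    "fmt"
    "os"
    "regexp"
    "strconv"
)

func main() {
    if len(os.Args) < 2 {
        fmt.Println("usage: stringSum <string>")
        os.Exit(1)
    }

    // Split the argument on runs of non-digit characters.
    re := regexp.MustCompile("[^0-9]+")
    intSlice := re.Split(os.Args[1], -1)

    sum := 0
    for i := 0; i < len(intSlice); i++ {
        n, err := strconv.Atoi(intSlice[i])
        if err != nil {
            // Empty element produced by Split at the start or end of the
            // string: delete it in place and re-check the same index.
            intSlice = append(intSlice[:i], intSlice[i+1:]...)
            i--
            continue
        }
        sum += n
    }
    fmt.Println("sum:", sum)
}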

USAGE:
go run .\stringSum.go st1ngW1thIn7s

Screenshot:

Golang String Sum Windows

Traversing Directories Recursively and Sorting Objects by Attribute Value in Go

Let’s say you would like to sort all the files in a directory, as well as its subdirectories, by an attribute like file size.

Approach:

First, you need to recursively traverse, or walk, the specified directory, which is easy in Golang with the filepath.Walk() function from the path/filepath package. In order to use filepath.Walk(), you need a walkfunc, which can be declared as an anonymous function as demonstrated in the sample program below. The walkfunc also lets you handle the errors that occur when you encounter directories or files you do not have permission to access.

Recommended tutorial: Flavio Copes – LIST THE FILES IN A FOLDER WITH GO

As you walk the directories, you will need to get the size of each file, which on Linux is ultimately served by the stat() system call. In Golang, you can use the os.Stat() function to get an object of type FileInfo, which exposes a Size() method, which is what we want; conveniently, the walkfunc passed to filepath.Walk() already receives a FileInfo for each file. As you get the file size for each file in the directory, you will need to store the relative file path and the size in bytes of each file as an object in a slice of file objects.

Once you have the file names and sizes of all of the files, you will then need to sort the files by the size attribute, either from largest to smallest or from smallest to largest, which can easily be done using sort.Slice() from the sort package. If you want to sort the files from smallest to largest, you would use a less / < function like: sort.Slice(files, func(i, j int) bool { return files[i].size < files[j].size }). However, if you wish to sort from largest to smallest, you would use a greater / > function instead, which would look like this: sort.Slice(files, func(i, j int) bool { return files[i].size > files[j].size }). Finally, you can simply print the desired number of files from the sorted slice.

It is also worth knowing that in many Linux filesystems, such as ext3 and ext4, filenames are stored in a directory table that lists files as name (key), inode (value) pairs. The directory table gives you the names of all of the files, and a file’s inode gives you its size in bytes.

The relationship between the directory entry, an inode, and blocks of an allocated file

Here is a command line utility that takes in a directory as a string, a number of files as an int, and an order to sort them as a string. It then lists the largest or smallest files in the directory ordered by size.

Recommended tutorial: Rapid7 – Building a Simple CLI Tool with Golang
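
A minimal sketch of such a utility; the flag handling matches the instructions below, while the exact structure of the original may differ.

package main

import (
    "flag"
    "fmt"
    "os"
    "path/filepath"
    "sort"
)

type file struct {
    path string
    size int64
}

// sortFiles sorts the slice in place; a slice header wraps a pointer to its
// backing array, so nothing needs to be returned to the caller.
func sortFiles(order string, files []file) {
    if order == "smallest" {
        sort.Slice(files, func(i, j int) bool { return files[i].size < files[j].size })
    } else {
        sort.Slice(files, func(i, j int) bool { return files[i].size > files[j].size })
    }
}

func main() {
    dirPtr := flag.String("dir", ".", "directory to walk")
    cntPtr := flag.Int("cnt", 10, "number of files to list")
    sortPtr := flag.String("sort", "largest", "sort order: largest or smallest")
    flag.Parse()

    var files []file
    walkErr := filepath.Walk(*dirPtr, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            // Skip directories and files we cannot access.
            return nil
        }
        if !info.IsDir() {
            files = append(files, file{path: path, size: info.Size()})
        }
        return nil
    })
    if walkErr != nil {
        fmt.Println(walkErr)
        os.Exit(1)
    }

    sortFiles(*sortPtr, files)
    for i := 0; i < *cntPtr && i < len(files); i++ {
        fmt.Printf("%d bytes\t%s\n", files[i].size, files[i].path)
    }
}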

Note that since slices are references to arrays, you do not have to pass them as pointers. You can see this demonstrated with the call to sortFiles(*sortPtr, files) in the sketch above, as nothing is returned from it.
See: The Minimum You Need To Know About Arrays and Slices in Golang

Instructions:
Feed in the directory using --dir, the file count using --cnt, and the sort order, either largest or smallest, using --sort

Example:
go run .\largestFiles.go --dir . --cnt 10 --sort smallest

Golang Sorted Files Windows

On a side note, Golang’s os.Stat() works quite well on Windows systems, which do not have a native stat() system call.

Sources:

  1. https://unix.stackexchange.com/questions/18605/how-are-directories-implemented-in-unix-filesystems
  2. https://premaseem.wordpress.com/2016/02/14/what-is-inode-in-linux-unit/
  3. http://www.grymoire.com/Unix/Inodes.html

Installing Infinoted on Debian and Ubuntu

Note: This is an old post from 2016 that was recovered from the database of my old website.

Overview: Gobby is a cross-platform collaborative text editor that enables Google Docs style editing. Unlike Etherpad, Gobby is geared more toward editing code, with support for syntax highlighting, as demonstrated below.

Gobby Syntax Highlighting

I have used this program for multiple group projects, and everyone found it very beneficial as it allowed us all to work on the same file(s) at once, even when we were not in the same room. The built-in chat box enabled us to easily communicate while we worked on the project remotely. As an added bonus, the server used by Gobby, Infinoted, can be used with other text editors such as gedit. Hosting the server does not require a very powerful machine, as a simple Raspberry Pi 2 will suffice. You can download the Gobby client from the project’s website.

Installation:

Note: items marked in red indicate they may vary based on how you decide to install this service.

Whenever you want to install something on a Debian-based Linux distribution, you first want to make sure your package manager, apt, is up to date. To do so, run the following command.

sudo apt-get update

After you update your package manager, it is always a good idea to make sure all your packages are up to date.

sudo apt-get upgrade

Now it is safe to install Infinoted using the package manager.

sudo apt-get install infinoted

Once you have finished installing Infinoted, you need to configure it. For security reasons, it is recommended to run a service as a dedicated user and group. You can choose any username and / or group you want, but it is recommended to choose something that will be easy to recognize, for example gobby or infinoted. The following instructions use gobby for both the user and the group. The following command will create a new system user and group named gobby with no login access and the home directory /home/services/gobby:

sudo adduser --system --gecos "Infinoted Service" --disabled-password --group --home /home/services/gobby gobby

If you prefer, you could use /var/lib/gobby as the home directory instead. You should now recursively set the permissions on the gobby home directory and everything under it to 770. Setting the permission level to 770 allows only the owner and members of the group to read, write and execute files.

sudo chmod -R 770 /home/services/gobby

The following command should be optional, but it is very useful if you encounter ownership problems in the gobby home directory. It recursively sets both the directory owner and group to gobby. Both should have been set already when you created the gobby user.

sudo chown -R gobby:gobby /home/services/gobby

Now you need to add any users who should be able to easily access the files edited on the server to the gobby group. To do so, use the following command as a template, replacing “username” with the desired username (without quotes).

sudo adduser "username" gobby

Next, you need to add the keys, data and export directories under the gobby home directory. If you set the directory permissions correctly you should not have to use sudo to create these directories.

mkdir /home/services/gobby/keys

mkdir /home/services/gobby/data

mkdir /home/services/gobby/export

Now it is time to create the Infinoted configuration file, infinoted.conf, in which you specify your desired settings. First, create the file using touch.

sudo touch /etc/xdg/infinoted.conf

Note: the configuration must be located in either /etc/xdg/ or $HOME/.config/ as infinoted looks for the configuration file in the following order.

  1. /etc/xdg/infinoted.conf
  2. $HOME/.config/infinoted.conf

You can use Nano, or your favorite Linux text editor, to edit infinoted.conf.

sudo nano /etc/xdg/infinoted.conf

You should add the following to the file.

Note: you should set your own password, so don’t just blindly copy this.

[infinoted]
security-policy=require-tls
key-file=/home/services/gobby/keys/infinoted-key.pem
certificate-file=/home/services/gobby/keys/infinoted-cert.pem
password=strong_password
autosave-interval=5
root-directory=/home/services/gobby/data
sync-directory=/home/services/gobby/export
sync-interval=25

After you save the configuration file, you should set its permissions. This step should be optional, but it is recommended if you encounter permission errors.

sudo chown gobby:gobby /etc/xdg/infinoted.conf
sudo chmod 550 /etc/xdg/infinoted.conf

Now you can generate the certificate and key files and test Infinoted.

infinoted --create-certificate --create-key

If everything worked correctly, you should see this:

Infinoted Generate Cert and Key

Once you see the output above, close Infinoted with ctrl-c. To run Infinoted on startup, you need to create a systemd unit file. Note: this only applies if you are running Debian version 8 or Ubuntu version 15.04 or later. If you are using an earlier version, you will have to improvise with Upstart or init (obsolete). Create a file called /etc/systemd/system/infinoted.service which should contain the following:
[Unit]
Description=Infinoted Daemon
After=network-online.target

[Service]
Type=simple
User=gobby
Group=gobby
UMask=007
ExecStart=/usr/bin/infinoted
Restart=on-failure
# Configures the time to wait before the service is stopped forcefully.
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target

Now start the service using systemctl start infinoted and to verify the service is running, use systemctl status infinoted. If Infinoted is running correctly you should see this:

Systemd Infinoted Running

Finally, enable Infinoted to start on boot.

systemctl enable infinoted

References: Gobby project website

Redis Caching WordPress on CentOS 7

Caching WordPress with Nginx and Redis is quite simple if you are using Ubuntu, as you can just follow this tutorial, but what if you want to use CentOS, or even better, want to automate the setup using Ansible?

First of all, you will need to install the EPEL and REMI repositories and, of course, Redis.
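
On CentOS 7, that can be done with a few commands like these; the REMI release URL is the standard one for EL7, but verify it before running.

sudo yum -y install epel-release
sudo yum -y install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
sudo yum -y install redis
sudo systemctl enable redis
sudo systemctl start redis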

You will need to have the following Nginx modules installed: srcache-nginx-module, HttpRedisModule, redis2-nginx-module, and set-misc-nginx-module. Fortunately, there is a third party Nginx distribution called OpenResty that comes with all of the required modules. You can either build it from source or install it from their repository. Even better, you can install it using Ansible.

If you wish to install it from source using Ansible, here are the tasks.

The template used for generating the build script.

Or if you wish to use a pre-built binary from a repository, you can use these tasks.

Now, of course, you will need to configure OpenResty and Systemd, so here are example configuration files that you can modify to your liking.

Systemd service file

Nginx.conf
Here is a link to the nginx.conf I use for this website.

default.conf
Note: If you wish to use a vhost, you can modify this and use it as a sitename.conf
Here is a link to the default.conf I am using for this website. Yes, CloudFlare accepts self-signed certificates from origin servers. I know it is not a pretty solution, but Let’s Encrypt does not work behind reverse proxy servers or in firewalled local development environments.
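
To give an idea of what these modules do together, here is a rough sketch of the srcache and Redis wiring inside the server block; the cache key, ports, and expiry time are illustrative, not the exact configuration for this site.

location / {
    # Build a cache key from the request and try Redis first;
    # on a miss, store the generated response for an hour.
    set $key $scheme$request_method$host$request_uri;
    set_escape_uri $escaped_key $key;
    srcache_fetch GET /redis $key;
    srcache_store PUT /redis2 key=$escaped_key&exptime=3600;

    try_files $uri $uri/ /index.php?$args;
}

location /redis {
    # Cache lookups (HttpRedisModule)
    internal;
    set_md5 $redis_key $args;
    redis_pass 127.0.0.1:6379;
}

location /redis2 {
    # Cache writes (redis2-nginx-module)
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    set_md5 $key;
    redis2_query set $key $echo_request_body;
    redis2_query expire $key $exptime;
    redis2_pass 127.0.0.1:6379;
}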

Now, you will have to install the php71-php-pecl-redis package and configure the connection to the Redis server so it can cache PHP sessions.
As a reference, here are the tasks I use to install and configure PHP for this website.
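
The session handler itself boils down to two ini settings; this is a sketch assuming Redis on localhost, and the ini file path will vary with the REMI PHP 7.1 packages.

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"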

Now, if everything works, you will be able to SSH into the server, run redis-cli, and then run the monitor command to see output like this whenever a page is requested.

redis-cli monitor output

Using Golang to Generate Custom Cover Letters

Writing a cover letter for every application is quite cumbersome. So why not automate the process? That is why I wrote a simple Go application to help. The logic is quite simple: all you have to do is fill out a templated LaTeX .tex file and then compile it. If you want to go even further, the process for generating a custom cover Email is almost the same. Instead of using a templated out .tex file, you just use a templated out .html file.

For this tutorial, I recommend using the popular Moderncv Classic LaTeX template. If you wish to use this LaTeX template for your resume as well, I recommend moving the cover letter and CV / resume sections in main.tex to separate files. To do so, you can use the \include LaTeX command. StackExchange: When should I use \input vs. \include?

Here is an example of a templated out cover letter .tex file

First, you take in the arguments that depend on the position as command line flags. Read this tutorial on building a simple CLI tool.

Using the provided arguments, you can generate default statements.

You might also want to get the current date. Read Date and time formatting in Go.

The user-specified and generated values are then passed into a map, which is used to fill in the templated out .tex file.

You then read the templated out text file, and when a key attribute from the map is found in the file, you replace it with its corresponding value attribute.
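
A minimal sketch of that read-and-replace step; the placeholder keys, file name, and values are illustrative.

package main

import (
    "fmt"
    "io/ioutil"
    "strings"
)

// fillTemplate reads the templated .tex file and replaces every occurrence
// of each key in the map with its corresponding value.
func fillTemplate(templatePath string, replacements map[string]string) (string, error) {
    raw, err := ioutil.ReadFile(templatePath)
    if err != nil {
        return "", err
    }
    letter := string(raw)
    for key, value := range replacements {
        letter = strings.Replace(letter, key, value, -1)
    }
    return letter, nil
}

func main() {
    replacements := map[string]string{
        "<<COMPANY>>":  "Example Corp",
        "<<POSITION>>": "Site Reliability Engineer",
        "<<DATE>>":     "January 2, 2006",
    }
    letter, err := fillTemplate("coverletter.tex", replacements)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(letter)
}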

Now you want to save the string to a new .tex file.

If you wish to just send a cover Email, you do not have to write to a new file as the string can serve as the body of the Email. As already stated, you should use a templated .html file to generate cover Emails.

Now to build the .pdf file, you will have to call an external command to run pdflatex. Read this tutorial on running shelled out commands in Golang.
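
A short sketch of that step; the -interaction=nonstopmode flag keeps pdflatex from stopping to prompt on errors, and the file name is illustrative.

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Shell out to pdflatex and surface its output for debugging.
    cmd := exec.Command("pdflatex", "-interaction=nonstopmode", "coverletter.tex")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatalf("pdflatex failed: %v", err)
    }
}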

If you are an overachiever and would like to be able to automatically send generated cover Emails, the gomail library makes this easy. Read the gomail README.
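
A sketch based on the example in the gomail README; the SMTP host, addresses, and credentials are placeholders.

package main

import (
    "log"

    "gopkg.in/gomail.v2"
)

func main() {
    m := gomail.NewMessage()
    m.SetHeader("From", "you@example.com")
    m.SetHeader("To", "recruiter@example.com")
    m.SetHeader("Subject", "Application for Example Position")
    // The generated cover letter string serves as the HTML body.
    m.SetBody("text/html", "<p>Generated cover letter body goes here.</p>")

    d := gomail.NewDialer("smtp.example.com", 587, "you@example.com", "password")
    if err := d.DialAndSend(m); err != nil {
        log.Fatal(err)
    }
}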

If you would like to keep track of where you sent your applications, you can simply write to a log file. I recommend writing to a CSV-based log file, as CSV files import easily into Microsoft Excel and LibreOffice Calc.
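
A minimal sketch of appending one row per application to a CSV log; the file name and columns are illustrative.

package main

import (
    "encoding/csv"
    "log"
    "os"
    "time"
)

func main() {
    // Append to the log file, creating it if it does not exist.
    f, err := os.OpenFile("applications.csv", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    w := csv.NewWriter(f)
    row := []string{time.Now().Format("2006-01-02"), "Example Corp", "Site Reliability Engineer"}
    if err := w.Write(row); err != nil {
        log.Fatal(err)
    }
    w.Flush()
    if err := w.Error(); err != nil {
        log.Fatal(err)
    }
}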

You can also pass an “application” object by reference instead of passing variables by value.

You can find my full implementation with sample template files in this git repository.