Articles

Automatic DNS for Kubernetes Ingresses with ExternalDNS

ExternalDNS is a relatively new Kubernetes Incubator project that makes Ingresses and Services available via DNS. It currently supports AWS Route 53 and Google Cloud DNS. There are several similar tools available with varying features and capabilities, such as route53-kubernetes, Mate, and the DNS controller from Kops. While it is not there yet, the goal is for ExternalDNS to include all of the functionality of the other options by 1.0.
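As a rough sketch of how it works (the hostname and resource names below are placeholders), ExternalDNS watches Ingress resources and creates DNS records for the hosts it finds:

```yaml
# Minimal sketch: ExternalDNS creates a record for each Ingress host it sees.
# The domain and service names are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com        # ExternalDNS creates a DNS record for this host
    http:
      paths:
      - path: /
        backend:
          serviceName: example
          servicePort: 80
```

For Services of type LoadBalancer, the external-dns.alpha.kubernetes.io/hostname annotation plays the same role.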


Deploy Kubernetes in an Existing AWS VPC with Kops and Terraform

Kubernetes Visualization via Weave Scope

Kops is a relatively new tool that can be used to deploy production-ready Kubernetes clusters on AWS. It has the ability to create a highly available cluster spanning multiple availability zones and supports a private networking topology. By default, Kops will create all of the required resources on AWS for you — the EC2 instances, the VPC and subnets, the required DNS entries in Route53, the load balancers for exposing the Kubernetes API, and all of the other necessary infrastructure components.

For organizations that use Terraform, Kops can instead be used to generate a Terraform configuration for all of the aforementioned AWS resources. This will allow them to use the familiar terraform plan and terraform apply workflow to build and update their Kubernetes infrastructure. The Terraform configuration that Kops generates will include new VPC, subnet, and route resources.

But what if you want to use Kops to generate a Terraform configuration for a Kubernetes cluster in an existing VPC? In this post, I will walk through the process to achieve this.
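As a rough sketch of the Terraform output flow (the cluster name, state store, zones, VPC ID, and CIDR are all placeholders, and flags may differ slightly between Kops releases):

```bash
# Sketch only: the cluster name, S3 state store, zones, and VPC details are
# placeholders. --target=terraform emits a Terraform configuration instead of
# creating the resources directly.
kops create cluster \
  --name=cluster.example.com \
  --state=s3://example-kops-state-store \
  --zones=us-east-1a,us-east-1b \
  --vpc=vpc-12345678 \
  --network-cidr=10.0.0.0/16 \
  --target=terraform \
  --out=.

terraform plan
terraform apply
```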


Terraform State Move - Refactoring Terraform Against Existing Infrastructure

Terraform State Move Example

Have you ever wanted to refactor a Terraform configuration against existing infrastructure? In the past, modifying the Terraform state required manually editing a potentially large and confusing JSON file. Recent versions of Terraform make it possible to manipulate the state file using supported CLI commands. With this capability, it is significantly easier to refactor an existing Terraform configuration into modules without affecting the underlying infrastructure in any way. If you are importing existing cloud infrastructure into Terraform, you will also likely be using the terraform state * commands to build a modular configuration.
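For example (the resource and module names here are hypothetical), moving a resource into a module without touching the real infrastructure looks like this:

```bash
# Hypothetical names: move an existing resource address into a module so the
# next plan shows no changes to the underlying infrastructure.
terraform state list
terraform state mv aws_security_group.web module.web.aws_security_group.web
terraform plan   # should report no changes if the refactor is consistent
```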


Using Docker Native Health Checks

In version 1.12, Docker added the ability to perform health checks directly in the Docker engine — without needing external monitoring tools or sidecar containers. Built so that the new Swarm mode orchestration layer can reschedule unhealthy containers or remove them from the load balancer pool, health checks can also be used outside of Swarm mode.
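Declaring a check is a single Dockerfile instruction. A minimal sketch (the base image, timing values, and endpoint are placeholders):

```dockerfile
# Sketch: base image, intervals, and URL are placeholders. curl is installed
# here only so the health check command has something to run.
FROM nginx:1.11
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

The resulting health state shows up in docker ps and under .State.Health in docker inspect.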

Docker Health Status


Using Google Container Registry (GCR) with Minikube

Image Pull Failed

Are you using the Google Container Registry (GCR) and seeing the dreaded ImagePullBackOff status on your pods in minikube? Are you seeing errors in your pod events like this?


Rolling updates with Kubernetes: Replication Controllers vs Deployments

A rolling update is the process of updating an application — whether it is a new version or just updated configuration — in a serial fashion. By updating one instance at a time, you are able to keep the application up and running. If you were to update all instances at the same time, your application would likely experience downtime. In addition, performing a rolling update allows you to catch errors during the process so that you can roll back before they affect all of your users.
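With Kubernetes Deployments, this behavior is declared right on the object. A minimal sketch (the names, image, and surge settings are placeholders):

```yaml
# Sketch: names, image, and replica/surge values are placeholders.
# Newer clusters use apps/v1 and also require a matching spec.selector.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at a time
      maxSurge: 1         # at most one extra pod during the update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.11
        ports:
        - containerPort: 80
```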


Building a Kubernetes Cluster on AWS

The excellent Kubernetes documentation includes a guide that covers how to build and run a Kubernetes cluster on AWS with the kube-up script. However, when it comes to customizing that install, the details are a little sparse. In this post, I am going to go over just one way you can customize the cluster. Hopefully, this will provide a little more transparency about what is going on under the hood and give you a little more control over how your cluster is built.
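Much of that customization happens through environment variables read by the kube-up script. A rough sketch (the variable names and values vary between Kubernetes releases, so verify them against cluster/aws/config-default.sh in your checkout):

```bash
# Sketch: variable names and values are assumptions that differ by release;
# check cluster/aws/config-default.sh before relying on them.
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=us-west-2a
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export NUM_NODES=4
cluster/kube-up.sh
```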


Kong on Mantl

Kong is an “Open-source, Microservice & API Management Layer built on top of NGINX”. Mantl is a “modern, batteries included platform for rapidly deploying globally distributed services”. I put together a short video on running Kong on Mantl. Are you interested in learning more about Mantl, Mesos, Marathon, Kubernetes, Swarm, Nomad, and other tools? Sign up below!


Introducing AWS Keymaster

AWS Keymaster is a simple utility that allows you to import your own key pair into all AWS regions with a single command. Distributed as a single binary with no dependencies, AWS Keymaster is easy to deploy and run. It is also available as a Docker image.
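I won't reproduce Keymaster's own command line here, but as a rough sketch of what it automates, expressed with the plain AWS CLI (the key name and public key path are placeholders):

```bash
# Rough equivalent using the AWS CLI directly (not Keymaster's own syntax);
# the key name and public key path are placeholders. With AWS CLI v2, use
# fileb:// for --public-key-material.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws ec2 import-key-pair \
    --region "$region" \
    --key-name my-key \
    --public-key-material file://$HOME/.ssh/id_rsa.pub
done
```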


Waiting on EC2 Resources

When using the AWS CLI, did you know you could run a command that waits for a specific resource or condition to occur before moving on to the next? For example, you might want to write a script that starts an EC2 instance and then, only after it is up and running, perform an additional task. Without the aws ec2 wait command, this could be a bit of a challenge involving a loop and some polling for the state. However, this is actually kind of trivial with the wait command at our disposal.
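For example (the instance ID is a placeholder):

```bash
# Start an instance and block until it is running; the ID is a placeholder.
aws ec2 start-instances --instance-ids i-0abcd1234
aws ec2 wait instance-running --instance-ids i-0abcd1234
# ...continue with whatever depends on the instance being up
```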

EC2 Wait Instance-Running


AWS CloudFormation vs Terraform

I am a firm believer in the benefits of programmable and repeatable infrastructure for organizations of all sizes. There is a wide range of tools available to help you along this path, but I just want to touch on two of them today: CloudFormation and Terraform.


Troubleshooting AWS Elastic Beanstalk Errors

When errors occur in your Elastic Beanstalk environment, the root cause may not always be obvious. In the browser, you may get a 502 Bad Gateway error or an error like:

An unhandled lowlevel error occured. The application logs may have details.
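Either way, the first step is usually to pull the environment's logs. With the EB CLI that is a one-liner (the environment name is a placeholder):

```bash
# Fetch recent logs for the environment; "my-env" is a placeholder.
eb logs my-env
```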


Dockerized Postgresql Development Environment

My local development environment is kind of a mess. I am running OS X and use a variety of techniques to run the projects I work on. For example, I have local PostgreSQL, Redis, and Memcached servers running to support some projects. Every once in a while, I run into issues where the version of a service I am running for one project is not compatible with the version I need on another project. This problem can be solved with virtualized, per-project development environments using a tool like Vagrant. On most of my recent projects, this is actually how I do things. But not on all of them. I certainly could go back and create Vagrant environments for the projects that depend on local resources, but I haven’t found the time.


Securing a Server with Ansible

A while back, Bryan Kennedy wrote a post describing how he spends the first 5 minutes configuring and securing a new Linux server. He runs through the list of commands and configuration settings that address things like:


How to Convince your Boss to Invest in Continuous Delivery

If you are a developer, you might already be aware of the benefits of continuous delivery. It can be frustrating if your management is not also on board. They might not see it as a worthy investment at this point. Maybe they feel your team is too busy fighting fires. Or maybe there are important features to ship, and taking the time to develop a continuous delivery workflow would take away from that. Perhaps they have decided that an infrequent deploy schedule is best for your application.


Enable Gzip Compression on Apache Shared Hosting

I was working on tuning the performance of a site that happened to be hosted on a shared hosting provider - Dreamhost in this case. One of the simplest things you can do to improve performance is enable Gzip compression for HTTP requests. This is supported in all modern browsers and provides a quick win by reducing the size of HTTP responses and, therefore, improving response times. The instructions on how to enable this will vary based on your web server and the level of control you have.
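On Apache shared hosts that allow .htaccess overrides, something along these lines usually does it (a sketch; it assumes the host has mod_deflate enabled, and the MIME type list is illustrative):

```apache
# Sketch for an .htaccess file; assumes mod_deflate is enabled by the host.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/css
  AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```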


Ensuring a Command Module Task is Repeatable with Ansible

Ansible Playbook Changed

A few readers have pointed out to me that there is a small improvement I could make to the simple Ansible playbook I created for my Ansible Quick Start post. Idempotence is an important concept in Ansible, and the last task in the playbook was violating that principle. Here is the original task:


CloudFront with an S3 Origin

In a previous post, I covered how to set up CloudFront as an asset host for a Rails application using the same site as the origin. It is also possible to use an S3 bucket as the origin. The easiest way I know of to make this work with Rails is to use the asset_sync gem.
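A minimal sketch of the initializer (the bucket name and credential environment variables are placeholders):

```ruby
# config/initializers/asset_sync.rb -- sketch only; the bucket and the
# environment variable names are placeholders.
AssetSync.configure do |config|
  config.fog_provider          = 'AWS'
  config.fog_directory         = ENV['FOG_DIRECTORY']        # S3 bucket name
  config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end
```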


Web Fonts with CloudFront

In my last post, I may have been a little cavalier when I said it is a “no-brainer” to use CloudFront to serve assets for your Rails application. In truth, there are a few issues that can make things more complicated. One of those is the ability to serve web fonts.
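One common approach (a sketch, assuming an Apache origin with mod_headers available) is to send a CORS header for font files so browsers will accept them from the CDN domain:

```apache
# Sketch: requires mod_headers; the extensions cover common web font formats.
<IfModule mod_headers.c>
  <FilesMatch "\.(eot|otf|ttf|woff|woff2)$">
    Header set Access-Control-Allow-Origin "*"
  </FilesMatch>
</IfModule>
```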


Using CloudFront to Speed up your Rails Application

Update: November 9, 2014. A few people asked me how to handle serving web fonts using CloudFront. I created a new post here that covers a few options.

Moving your static assets (images, CSS, JavaScript, etc.) to a Content Delivery Network is a quick, easy, and impactful win for the performance of your Rails application. CDNs are designed to distribute your content to multiple geographic locations and to serve it up to your users in the most optimal way possible. Using a CDN also lets you reduce the number of requests your web servers need to handle. This is especially important when you are hosted on a platform like Heroku. You don’t want your precious (and expensive) dynos spending their time serving up images.
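The Rails side of the change is small (the CloudFront domain below is a placeholder):

```ruby
# config/environments/production.rb -- the distribution domain is a placeholder.
config.action_controller.asset_host = 'https://d111111abcdef8.cloudfront.net'
```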


Automated Rails Deployments with Jenkins and Capistrano

Continuous integration and continuous deployment are two important elements of building successful web applications. Frequently merging code together and running automated tests tends to result in a healthier code base and improves the speed at which a development team can release features and fix bugs. And, by automating the deployment process, you can ensure that your team can deploy confidently and quickly. In this post, I am going to summarize a quick way to achieve a simple continuous deployment workflow for a Rails application using Capistrano and Jenkins.
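At its simplest, the Jenkins job is just an "Execute shell" build step that runs the Capistrano deploy (a sketch; the stage name is a placeholder):

```bash
# Sketch of a Jenkins "Execute shell" build step; "production" is a placeholder stage.
bundle install --deployment
bundle exec cap production deploy
```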


11 Capistrano Plugins To Simplify Your Rails Deployments

The Rails deployment story has improved dramatically since the early days, but it can still be challenging. Compiling assets, running migrations, achieving zero-downtime deployments, and load balancing your application servers are some of the tasks that you’ll want to handle as part of your deployment strategy. Many deployment processes still tend to be a mixture of automation and manual work. The goal is a fully automated, repeatable, and fast deployment process. That sounds simple on paper, but as many of us already know, getting there is time-consuming, error-prone, and has the tendency to make you want to rip your hair out or throw your keyboard out the window.


Book recommendation: PostgreSQL 9.0 High Performance

PostgreSQL 9.0 High Performance

Lately, I have been working on a reporting system that involves some pretty complex queries against a large data set. I am reasonably proficient at writing SQL, but I’ve always felt that performance tuning queries was a bit of a dark art. Trying to interpret long and cryptic query plans just made my head hurt. I needed something to help demystify this stuff, and I found this book: PostgreSQL 9.0 High Performance. It is actually not just about query performance optimization - far from it. In fact, the majority of the book covers other aspects of building a high-performance PostgreSQL installation, such as:


Updating ruby-build to get the latest rubies

No rocket science here - just because I always forget… If you are using rbenv with the ruby-build plugin and want to upgrade to the latest version of Ruby, you might have to update ruby-build to get the latest definitions.
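For reference (this assumes ruby-build was installed as a git clone under ~/.rbenv/plugins; Homebrew users can run brew upgrade ruby-build instead):

```bash
# Assumes ruby-build lives in ~/.rbenv/plugins as a git clone;
# Homebrew users can run `brew upgrade ruby-build` instead.
cd ~/.rbenv/plugins/ruby-build && git pull
rbenv install --list      # the newest definitions should now show up
rbenv install 2.4.1       # version number is just an example
```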


Ansible Quick Start - A Brief Introduction

Recently, I have been working with Ansible, an IT automation, configuration management, and provisioning tool along the same lines as Chef and Puppet. If you are responsible for managing servers - even if it is just one - you would be well served to learn one of these tools.


How to Prepare for a Successful Launch

Launching a new web application can be a nerve-wracking experience. Besides being nervous about how your users are going to respond to your product, you also have to worry about whether the site will stay up and running in the first place. It is important to be well prepared for any issues you may encounter when you unleash your web application to the public for the first time. It can be hard to predict how much (or how little) traffic you are going to get. You don’t want to get paralyzed by fear and doubt and delay your launch while you (prematurely?) optimize everything you can, but you also don’t want to end up with a failing application and no plan to get it back up and running quickly. There are certainly some things you can do to prepare for your launch and to handle an unexpected load of traffic.