<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Jason's Blog</title>
    <description>A place for me to talk about things. It's possible some of them may interest you.</description>
    <link>https://jasonmorgan.github.io/</link>
    <atom:link href="https://jasonmorgan.github.io/feed.xml" rel="self" type="application/rss+xml" />
    <pubDate>Wed, 24 Mar 2021 17:27:06 +0000</pubDate>
    <lastBuildDate>Wed, 24 Mar 2021 17:27:06 +0000</lastBuildDate>
    <generator>Jekyll v3.9.0</generator>
    
      <item>
        <title>Kubernetes Cluster Migrations with Velero and Tanzu Mission Control</title>
        <description>&lt;p&gt;Hey folks! I ran into an interesting situation with a customer the other day. They are using &lt;a href=&quot;https://tanzu.vmware.com/mission-control&quot;&gt;Tanzu Mission Control&lt;/a&gt; (TMC) with their Kubernetes clusters and are trying to migrate from EKS to Tanzu Kubernetes Grid (TKG) clusters. I’m going to ignore the relative merits of EKS vs TKG entirely and go straight to the how of the migration.&lt;/p&gt;

&lt;p&gt;The rest of this article talks about how to use kubectl, the Velero CLI, and TMC to migrate workloads from one cluster to another using Velero’s backup and restore functionality. I hope you find it useful, and if there’s anything else you’d like to hear about, please ping me on &lt;a href=&quot;https://twitter.com/RJasonMorgan&quot;&gt;Twitter&lt;/a&gt; or &lt;a href=&quot;https://www.linkedin.com/in/jasonmorgan2/&quot;&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;

&lt;h2 id=&quot;the-set-up&quot;&gt;The Set Up&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;The TKG CLI
    &lt;ul&gt;
      &lt;li&gt;v1.2.0&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;2 TKG clusters
    &lt;ul&gt;
      &lt;li&gt;I called mine workload1 and workload2 because I’m super creative&lt;/li&gt;
      &lt;li&gt;Joining the clusters to TMC is out of scope for this article
        &lt;ul&gt;
          &lt;li&gt;Look &lt;a href=&quot;https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-getstart/GUID-F0162E40-8D47-45D7-9EA1-83B64B380F5C.html&quot;&gt;here&lt;/a&gt; if you’d like an article on that&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Access to Tanzu Mission Control
    &lt;ul&gt;
      &lt;li&gt;See your friendly neighborhood VMware salesperson&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The Velero CLI
    &lt;ul&gt;
      &lt;li&gt;v1.4.2&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;kubectl
    &lt;ul&gt;
      &lt;li&gt;v1.19.3&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The Helm CLI
    &lt;ul&gt;
      &lt;li&gt;v3.1.2&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;An app to migrate
    &lt;ul&gt;
      &lt;li&gt;We’ll be deploying WordPress from the Bitnami Helm repository&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;getting-things-ready&quot;&gt;Getting things ready&lt;/h2&gt;

&lt;p&gt;First things first, let’s deploy a WordPress instance using the well-built and well-curated Helm charts from the good folks over at Bitnami. You can dig into them in more detail &lt;a href=&quot;https://bitnami.com/application-catalog&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the code block below you’ll see the steps to add the Bitnami Helm repo and install WordPress in its own namespace.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Add the helm repo if needed&lt;/span&gt;
helm repo add bitnami https://charts.bitnami.com/bitnami

&lt;span class=&quot;c&quot;&gt;# Update to be sure you have the latest charts&lt;/span&gt;
helm repo update

&lt;span class=&quot;c&quot;&gt;# Lookup wordpress&lt;/span&gt;
helm search repo wordpress

&lt;span class=&quot;c&quot;&gt;# Checkout the chart values&lt;/span&gt;
helm show values bitnami/wordpress | less

&lt;span class=&quot;c&quot;&gt;## We're going to override the blog name and the service type which you can find in the values file as wordpressBlogName and service.type.&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Create our new namespace&lt;/span&gt;
kubectl create ns wordpress

&lt;span class=&quot;c&quot;&gt;# Deploy wordpress&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## use -n to set the namespace and use --set to override the variables we identified in the values file.&lt;/span&gt;
helm &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;new-blog bitnami/wordpress &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; wordpress &lt;span class=&quot;nt&quot;&gt;--set&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;wordpressBlogName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;new-blog &lt;span class=&quot;nt&quot;&gt;--set&lt;/span&gt; service.type&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ClusterIP
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Take a second here and browse to your wordpress page. It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/wordpress1.png&quot; alt=&quot;wordpress&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If you’re having trouble getting access to the page here’s the port-forward command I used: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl port-forward svc/new-blog-wordpress 8080:80 -n wordpress&lt;/code&gt;. The site will now be available at localhost:8080.&lt;/p&gt;

&lt;h2 id=&quot;backing-it-up&quot;&gt;Backing it up&lt;/h2&gt;

&lt;p&gt;Quick disclosure here: everything I’m telling you to do in the TMC console can be done instead via kubectl and the Velero CLI. I’m not covering any of that in this article.&lt;/p&gt;

&lt;h3 id=&quot;enroll-in-data-protection&quot;&gt;Enroll in Data Protection&lt;/h3&gt;

&lt;p&gt;Head over to your cluster group and find your clusters.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/tmc1.png&quot; alt=&quot;clusterGroups&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Once again I’m going to save everyone a little time and point you at the &lt;a href=&quot;https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-using/GUID-5EF38D8D-2085-4924-B78B-D49C63064F31.html#GUID-5EF38D8D-2085-4924-B78B-D49C63064F31&quot;&gt;VMware docs&lt;/a&gt; for enabling data protection on your clusters. It’s a fairly painless process, and VMware gives you an easy-to-use CloudFormation template to get the environment set up.&lt;/p&gt;

&lt;p&gt;You want to ensure you have data protection enabled on both clusters.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/tmc2.png&quot; alt=&quot;ClusterWithDP&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;run-a-backup&quot;&gt;Run a Backup&lt;/h3&gt;

&lt;p&gt;In the TMC console, select your cluster, in my case workload1, and head over to the Data Protection tab. From there you can schedule a backup, and you have a few options. Feel free to back up the full cluster or go by namespace; I selected my wordpress namespace from before so I could simplify my restore.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/tmc3.png&quot; alt=&quot;tmc-workspace&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Once that’s done, validate that your backup completed successfully.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/tmc4.png&quot; alt=&quot;tmc-backup&quot; /&gt;&lt;/p&gt;

&lt;p&gt;With that we have our starting point for migrating our WordPress blog from one cluster to another! The next section will have us heading back to the CLI.&lt;/p&gt;

&lt;h2 id=&quot;migrating&quot;&gt;Migrating&lt;/h2&gt;

&lt;p&gt;I start out with a quick check that Velero is installed on each cluster, validating my versions at the same time.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;velero version
Client:
        Version: v1.4.2
        Git commit: 56a08a4d695d893f0863f697c2f926e27d70c0c5
Server:
        Version: v1.4.2

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Seeing that tells me Velero is good to go. Make sure you check out each cluster before moving on.&lt;/p&gt;

&lt;h3 id=&quot;finding-the-backup&quot;&gt;Finding the Backup&lt;/h3&gt;

&lt;p&gt;We’re going to take a quick detour and talk about Kubernetes custom resource definitions (CRDs) and how they relate to tools that integrate with the Kubernetes API. CRDs are a handy way to extend Kubernetes and allow apps like Velero to build their own constructs right into the cluster. You’re going to see examples using both kubectl and the Velero CLI to perform various tasks.&lt;/p&gt;

&lt;p&gt;With that in mind, we’re going to explore the Kubernetes API to find our backup and figure out how we can get our target cluster, workload2, to see the backup from the source cluster, workload1. When you look at the clusters, in either the TMC console or the CLI, you’ll only see backups on the source cluster. We’ll dive into that in the next section.&lt;/p&gt;

&lt;h4 id=&quot;velero-contructs&quot;&gt;Velero Constructs&lt;/h4&gt;

&lt;p&gt;Velero is going to create a bunch of new object types, but we’re really only interested in two at this point: Backups and Backup Storage Locations. You can see a bit more by running the following command: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get crd | grep velero.io&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/term1.png&quot; alt=&quot;crds&quot; /&gt;&lt;/p&gt;

&lt;h5 id=&quot;backups&quot;&gt;Backups&lt;/h5&gt;

&lt;p&gt;Neither construct is particularly complicated. Backups are just that, and they’re stored in, and retrieved from, backup locations. When we ask our clusters about backups, workload1, the source cluster, sees a backup called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wp&lt;/code&gt;. The backup &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wp&lt;/code&gt; represents the WordPress blog we want to move from workload1 to workload2. Unfortunately for us, only our source cluster can see it for now. You can explore the clusters with the following commands.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# With the velero cli&lt;/span&gt;
velero backup get

&lt;span class=&quot;c&quot;&gt;# via kubectl&lt;/span&gt;
kubectl get backups &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero
&lt;span class=&quot;c&quot;&gt;## Velero puts its backups and other objects in the velero namespace by default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can see from the output that the backup is only visible from workload1. We’re going to check out the backup locations and see what we can do about it.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/term2.png&quot; alt=&quot;backups&quot; /&gt;&lt;/p&gt;

&lt;h5 id=&quot;backup-locations&quot;&gt;Backup Locations&lt;/h5&gt;

&lt;p&gt;Looking at the screen grab, you can see the storage location for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wp&lt;/code&gt; is listed as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jmo-dp&lt;/code&gt;. In fact, both workload1 and workload2 have a backup location called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jmo-dp&lt;/code&gt;, but when you dig deeper you see that the values for each don’t line up.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# getting the backup location via velero&lt;/span&gt;
velero backup-location get

&lt;span class=&quot;c&quot;&gt;# getting the backup location via kubectl&lt;/span&gt;
kubectl get backupstoragelocations.velero.io &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you run the Velero CLI version of that command you’ll see something like the image below, and you’ll note that while the s3 bucket name is the same, the prefixes the backups get stored under are different.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/term3.png&quot; alt=&quot;velero-locations&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I’m going to go ahead and spoil any remaining mystery at this point. In order to get our cross-cluster restore working, we need to copy the backup location from workload1 over to workload2. When you dig into the backup storage location on workload1, you’ll see an object that looks a lot like this:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# The below yaml has been modified to drop non required fields&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;velero.io/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;BackupStorageLocation&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;jmo-dp&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;velero&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;bucket&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vmware-tmc-1234&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;profile&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;jmo-dp&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;s3ForcePathStyle&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;false&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;objectStorage&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;bucket&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vmware-tmc-1234&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;01ET5VEFZTES40MAJYZ98MC6G6/&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;provider&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;aws&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In order to successfully do a cross-cluster restore we need to create a new backup location on workload2 and add new credentials to the Velero cloud config secret.&lt;/p&gt;

&lt;h4 id=&quot;creating-our-objects&quot;&gt;Creating our Objects&lt;/h4&gt;

&lt;p&gt;Check out the Velero docs &lt;a href=&quot;https://velero.io/docs/&quot;&gt;here&lt;/a&gt;. That’s a good place to start, but in order to get this working I also needed to look at the &lt;a href=&quot;https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/backupstoragelocation.md&quot;&gt;aws backup plugin&lt;/a&gt;. After diving in there you’ll see that the value under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;spec.config.profile&lt;/code&gt; refers to the aws credentials that Velero will use to access the backups.&lt;/p&gt;

&lt;p&gt;Let’s create a new backup location object on workload2. I’m going to show an example using my cluster config but you need to get the appropriate values from your own source and target clusters.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;velero.io/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;BackupStorageLocation&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;source-cluster&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# renamed from jmo-dp, this will represent the source for our cross-cluster restore.&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;velero&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;bucket&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vmware-tmc-1234&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;profile&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;source-creds&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# renamed from jmo-dp, this will represent a new entry in the cloud-config secret.&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;s3ForcePathStyle&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;false&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;objectStorage&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;bucket&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vmware-tmc-1234&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;01ET5VEFZTES40MAJYZ98MC6G6/&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;provider&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;aws&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once you have your backup location object made and saved, I called mine &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;backupLocation.yaml&lt;/code&gt;, you can apply it to your target cluster, workload2. To be extra clear here: we are creating a backup location object on workload2 based on the jmo-dp backup location from workload1. You can create the new object any way you like; I did it by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl apply -f backupLocation.yaml&lt;/code&gt;. You can explore the new backup location in workload2, but you’ll quickly see in the Velero logs that it doesn’t have permission to access backups. To get that working we need to migrate the backup credentials from workload1 over to workload2.&lt;/p&gt;
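
&lt;p&gt;If you want to see that permission failure for yourself before fixing it, a quick check looks something like this (this assumes the default data protection install, where Velero runs as a deployment named velero in the velero namespace, as we saw earlier):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Confirm the new backup location exists on workload2&lt;/span&gt;
velero backup-location get

&lt;span class=&quot;c&quot;&gt;# Scan the velero controller logs for access errors against the source prefix&lt;/span&gt;
kubectl logs deployment/velero &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; error
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;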

&lt;h5 id=&quot;migrating-credentials&quot;&gt;Migrating Credentials&lt;/h5&gt;

&lt;p&gt;In order to use workload1’s backup we need to update workload2’s cloud-credentials, the secret velero uses to backup and restore clusters. You can view your cloud-credentials secret like this &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get secret cloud-credentials -n velero -o json | jq -r .data.cloud | base64 -d &amp;amp;&amp;amp; echo&lt;/code&gt; and you’ll see some output like:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# I replaced all my values, expect to see real information in your output&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;jmo-dp] &lt;span class=&quot;c&quot;&gt;# this is the credential name you referenced in the backup location object&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;aws_access_key_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_ID
&lt;span class=&quot;nv&quot;&gt;aws_secret_access_key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_SECRET
&lt;span class=&quot;nv&quot;&gt;aws_session_token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUPER_LONG_TOKEN

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;default]
&lt;span class=&quot;nv&quot;&gt;aws_access_key_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_ID
&lt;span class=&quot;nv&quot;&gt;aws_secret_access_key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_SECRET
&lt;span class=&quot;nv&quot;&gt;aws_session_token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUPER_LONG_TOKEN
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We’re going to add the values from workload1, our source cluster, under a new heading, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;source-creds&lt;/code&gt;. So here we’re going to do a little credential surgery: create a new entry in this file, base64 encode it, and then replace the cloud-credentials secret. It’s not exactly an easy or seamless process, but I expect VMware will work on making cross-cluster restores easier in a future version of TMC.&lt;/p&gt;

&lt;h5 id=&quot;updating-creds&quot;&gt;Updating Creds&lt;/h5&gt;

&lt;p&gt;In order to update the secret I did the following:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Start by getting the cloud credentials for each cluster.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Pipe the credentials from workload2 to a creds.txt file&lt;/span&gt;
kubectl get secret cloud-credentials &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; json | jq &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; .data.cloud | &lt;span class=&quot;nb&quot;&gt;base64&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; creds.txt

&lt;span class=&quot;c&quot;&gt;## Edit the file and add an entry to it under the name [source-creds] this should be the jmo-dp value from workload1&lt;/span&gt;
vim creds.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Example data:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;source-creds] &lt;span class=&quot;c&quot;&gt;# Our new entry&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;aws_access_key_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_ID_FROM_WORKLOAD1
&lt;span class=&quot;nv&quot;&gt;aws_secret_access_key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_SECRET_FROM_WORKLOAD1
&lt;span class=&quot;nv&quot;&gt;aws_session_token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUPER_LONG_TOKEN_FROM_WORKLOAD1

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;jmo-dp] &lt;span class=&quot;c&quot;&gt;# this is the credential name you referenced in the backup location object&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;aws_access_key_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_ID_FROM_WORKLOAD2
&lt;span class=&quot;nv&quot;&gt;aws_secret_access_key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_SECRET_FROM_WORKLOAD2
&lt;span class=&quot;nv&quot;&gt;aws_session_token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUPER_LONG_TOKEN_FROM_WORKLOAD2

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;default]
&lt;span class=&quot;nv&quot;&gt;aws_access_key_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_ID_FROM_WORKLOAD2
&lt;span class=&quot;nv&quot;&gt;aws_secret_access_key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ACCESS_KEY_SECRET_FROM_WORKLOAD2
&lt;span class=&quot;nv&quot;&gt;aws_session_token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUPER_LONG_TOKEN_FROM_WORKLOAD2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then we’re going to re-encode it and update our secret on workload2.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Re encode your credentials&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## You can do this anyway you like as long as the data is good, adding my method in case you just want to follow along.&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat &lt;/span&gt;creds.txt | &lt;span class=&quot;nb&quot;&gt;base64&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'\n'&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;## I have a clipboard utility so I really ran &lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat &lt;/span&gt;creds.txt | &lt;span class=&quot;nb&quot;&gt;base64&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'\n'&lt;/span&gt; | clip

&lt;span class=&quot;c&quot;&gt;# On our workload2 cluster&lt;/span&gt;
kubectl edit secret &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero cloud-credentials
&lt;span class=&quot;c&quot;&gt;## delete the string in the data.cloud section of the secret and replace it with the base64 string from above.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;EDITOR’S NOTE&lt;/em&gt;&lt;/strong&gt; Watch that your AWS token didn’t expire. In the time it took me to write and test this process I had to get a new token and re-edit the cloud-credentials secret.&lt;/p&gt;

&lt;p&gt;With that done let’s run some commands on workload2 to see if our new backup location is configured correctly and available.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Show your backups&lt;/span&gt;
velero backup get

&lt;span class=&quot;c&quot;&gt;# via kubectl &lt;/span&gt;
kubectl get backups &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; velero
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;running-the-restore&quot;&gt;Running the Restore&lt;/h3&gt;

&lt;p&gt;Now that we’re happy with the Velero config, we’re heading back to the TMC console to handle the restore. Again, feel free to do the rest via the Velero CLI if you prefer.&lt;/p&gt;
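
&lt;p&gt;If you’d rather stay on the command line, the Velero CLI equivalent is short. The restore name here is just my own label; pick whatever you like:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On workload2, create a restore from the wp backup&lt;/span&gt;
velero restore create wp-restore &lt;span class=&quot;nt&quot;&gt;--from-backup&lt;/span&gt; wp

&lt;span class=&quot;c&quot;&gt;# Watch its progress&lt;/span&gt;
velero restore get
velero restore describe wp-restore
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;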

&lt;p&gt;Browse over to your cluster and go back to the data protection tab. You should be able to see the wp backup.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/restore1.png&quot; alt=&quot;restore1&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Select the backup and click restore, you’ll see a menu like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/restore2.png&quot; alt=&quot;restore2&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Restore either the entire cluster or the namespace. It doesn’t make any difference for me as I only backed up the namespace.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/restore3.png&quot; alt=&quot;restore3&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now just wait for your restore to complete.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/velero/restore4.png&quot; alt=&quot;restore4&quot; /&gt;&lt;/p&gt;

&lt;p&gt;And that’s it! Go peek at your migrated WordPress blog.&lt;/p&gt;

&lt;h2 id=&quot;validating&quot;&gt;Validating&lt;/h2&gt;

&lt;p&gt;Hop on to the terminal again and ensure you’re working with workload2. You can now run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl port-forward svc/new-blog-wordpress 8080:80 -n wordpress&lt;/code&gt; and the site will be available at localhost:8080.&lt;/p&gt;
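
&lt;p&gt;Before you port-forward, it’s worth a quick check that the restore actually recreated the namespace and its contents; something like the following should show the WordPress pods, services, and deployments coming back up:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Confirm the workloads were recreated on workload2&lt;/span&gt;
kubectl get all &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; wordpress

&lt;span class=&quot;c&quot;&gt;# Check that the persistent volume claims came back too&lt;/span&gt;
kubectl get pvc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; wordpress
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;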

&lt;h2 id=&quot;wrap-up&quot;&gt;Wrap Up&lt;/h2&gt;

&lt;p&gt;With that we’ve walked through the full process of doing a cross-cluster restore with Velero and Tanzu Mission Control. I wouldn’t say it’s the easiest process in the world, but it gives you a chance to dive into a bit of how Velero works under the hood. If you think this is something TMC should support out of the box, the folks over at VMware would love to hear from you. Regardless, you now have the steps required to do it on your own.&lt;/p&gt;

&lt;p&gt;Thanks so much for reading! I’d love to hear any feedback you have; please hit me up on &lt;a href=&quot;https://twitter.com/RJasonMorgan&quot;&gt;Twitter&lt;/a&gt; or &lt;a href=&quot;https://www.linkedin.com/in/jasonmorgan2/&quot;&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Jason&lt;/p&gt;
</description>
        <pubDate>Tue, 02 Feb 2021 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/tmc-velero-cluster-migration</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/tmc-velero-cluster-migration</guid>
        
        <category>tmc</category>
        
        <category>velero</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        
        
      </item>
    
      <item>
        <title>Multi Cluster Service Mesh on TKG with Linkerd</title>
        <description>&lt;p&gt;Hey folks! Thanks for stopping by! Today I’m going to dive into using the &lt;a href=&quot;https://linkerd.io/&quot;&gt;Linkerd&lt;/a&gt; service mesh to route traffic between two &lt;a href=&quot;https://tanzu.vmware.com/kubernetes-grid&quot;&gt;Tanzu Kubernetes Grid&lt;/a&gt;, or TKG, clusters.&lt;/p&gt;

&lt;p&gt;We’re going to look at this because I see more and more folks looking at either spanning services between clusters or connecting an app in one cluster with a service in another. It could be for high availability reasons, in order to isolate workloads with more strict regulatory requirements, or even just to let stateful services run in their own clusters. Linkerd provides a secure, and relatively easy, way to do this and I’m going to set it up today. I hope you’re able to follow along and I’d love to hear if you have any thoughts on the process.&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;

&lt;h2 id=&quot;the-set-up&quot;&gt;The Set Up&lt;/h2&gt;

&lt;p&gt;What we’re using:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The &lt;a href=&quot;https://my.vmware.com/group/vmware/downloads/details?downloadGroup=TKG-100&amp;amp;productId=988&amp;amp;rPId=45068&quot;&gt;tkg&lt;/a&gt; cli
    &lt;ul&gt;
      &lt;li&gt;to create, manage, and scale our k8s clusters&lt;/li&gt;
      &lt;li&gt;made with version 1.2.0&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The &lt;a href=&quot;https://linkerd.io/2/getting-started/#step-1-install-the-cli&quot;&gt;linkerd&lt;/a&gt; cli
    &lt;ul&gt;
      &lt;li&gt;to do all our Linkerd work&lt;/li&gt;
      &lt;li&gt;made with version stable-2.9.0&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Our &lt;a href=&quot;https://github.com/JasonMorgan/podinfo&quot;&gt;podinfo&lt;/a&gt; app&lt;/li&gt;
  &lt;li&gt;The &lt;a href=&quot;https://github.com/smallstep/cli/releases/tag/v0.15.3&quot;&gt;step&lt;/a&gt; cli
    &lt;ul&gt;
      &lt;li&gt;to generate our certificates&lt;/li&gt;
      &lt;li&gt;made with version 0.15.3&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Tanzu Mission Control (Optional component)
    &lt;ul&gt;
      &lt;li&gt;To manage our tkg clusters&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The tmc cli (Optional component)
    &lt;ul&gt;
      &lt;li&gt;To connect our clusters to Tanzu Mission Control (TMC)&lt;/li&gt;
      &lt;li&gt;You can download this from your Tanzu Mission Control portal&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;creating-and-managing-our-clusters&quot;&gt;Creating and Managing our Clusters&lt;/h2&gt;

&lt;p&gt;I’ve previously set up a tkg management cluster in AWS and used it to provision two new clusters.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;tkg create cluster &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; dev &lt;span class=&quot;nt&quot;&gt;-w&lt;/span&gt; 1 workload1
tkg create cluster &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; dev &lt;span class=&quot;nt&quot;&gt;-w&lt;/span&gt; 1 workload2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once tkg finished creating the clusters I pulled down the relevant kubeconfig files and added them to Tanzu Mission Control. You have a couple of options for adding them to TMC, but I chose to use the cli.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Get the kubeconfigs&lt;/span&gt;
tkg get credentials workload1 &lt;span class=&quot;nt&quot;&gt;--export-file&lt;/span&gt; ~/configs/workload1
tkg get credentials workload2 &lt;span class=&quot;nt&quot;&gt;--export-file&lt;/span&gt; ~/configs/workload2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here are the optional TMC commands; if you aren’t using TMC, please skip ahead.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# login to TMC&lt;/span&gt;
tmc login

&lt;span class=&quot;c&quot;&gt;# Add your clusters&lt;/span&gt;
tmc cluster attach &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; workload1 &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; jmo &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; ~/configs/workload1
tmc cluster attach &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; workload2 &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; jmo &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; ~/configs/workload2
&lt;span class=&quot;c&quot;&gt;## I had already created a cluster group in TMC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once the clusters appear in TMC you can move on to the next step.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/tmc-screen-grab.png&quot; alt=&quot;tmc-screen-grab&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;install-linkerd-in-multi-cluster-mode&quot;&gt;Install Linkerd in Multi Cluster Mode&lt;/h2&gt;

&lt;h3 id=&quot;setting-up-the-ca&quot;&gt;Setting up the CA&lt;/h3&gt;

&lt;p&gt;I’m following the Linkerd multi cluster docs, which you can find &lt;a href=&quot;https://linkerd.io/2/tasks/multicluster/&quot;&gt;here&lt;/a&gt;. I start by creating a new root certificate authority with &lt;a href=&quot;https://smallstep.com/cli/&quot;&gt;step&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Generate the root ca&lt;/span&gt;

step certificate create identity.linkerd.cluster.local root.crt root.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--profile&lt;/span&gt; root-ca &lt;span class=&quot;nt&quot;&gt;--no-password&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--insecure&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--san&lt;/span&gt; identity.linkerd.cluster.local

&lt;span class=&quot;c&quot;&gt;# Generate an intermediary ca&lt;/span&gt;

step certificate create identity.linkerd.cluster.local issuer.crt issuer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--profile&lt;/span&gt; intermediate-ca &lt;span class=&quot;nt&quot;&gt;--not-after&lt;/span&gt; 8760h &lt;span class=&quot;nt&quot;&gt;--no-password&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--insecure&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--ca&lt;/span&gt; root.crt &lt;span class=&quot;nt&quot;&gt;--ca-key&lt;/span&gt; root.key &lt;span class=&quot;nt&quot;&gt;--san&lt;/span&gt; identity.linkerd.cluster.local

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We use the CA to establish a common trust between the meshes in both clusters. You can find a lot more useful detail about the certificates and how they’re used &lt;a href=&quot;https://linkerd.io/2/tasks/generate-certificates/&quot;&gt;here&lt;/a&gt; and &lt;a href=&quot;https://linkerd.io/2/features/automatic-mtls/#how-does-it-work&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;install-the-mesh&quot;&gt;Install the Mesh&lt;/h3&gt;

&lt;h4 id=&quot;test-your-clusters&quot;&gt;Test your Clusters&lt;/h4&gt;

&lt;p&gt;First things first, let’s ensure our clusters are good to go. Linkerd’s cli comes with a handy dandy check feature to see if your cluster is ready.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;linkerd check &lt;span class=&quot;nt&quot;&gt;--pre&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-pre-check.png&quot; alt=&quot;pre&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Depending on your kubernetes API and Linkerd versions you may get some warnings about the CRD API version, but they won’t impact the install.&lt;/p&gt;

&lt;p&gt;If you are using TMC you’ll get additional warnings about Pod Security Policies (PSPs). You can safely ignore them provided you aren’t enforcing PSPs; otherwise you’ll need to configure them appropriately, which is beyond the scope of this article.&lt;/p&gt;

&lt;h4 id=&quot;running-the-install&quot;&gt;Running the Install&lt;/h4&gt;

&lt;p&gt;Provided you’re happy with the results of your pre checks it’s time to run the install. I like to set up two terminals, one for each cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run this once per cluster&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;## Be sure your paths to the certs are valid, either by executing this in the same directory as the step command or by fixing the path.&lt;/span&gt;

linkerd &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--identity-trust-anchors-file&lt;/span&gt; root.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--identity-issuer-certificate-file&lt;/span&gt; issuer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--identity-issuer-key-file&lt;/span&gt; issuer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-linkerd-install.png&quot; alt=&quot;install&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Once that’s wrapped up, wait a few minutes for the pods to boot. You can watch the linkerd namespace or, if you’re using TMC, take a peek at the workloads under the workloads tab or by looking at one of the worker nodes.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run this for each cluster&lt;/span&gt;
watch kubectl get pods &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; linkerd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/tmc-linkerd-node-pods.png&quot; alt=&quot;pods&quot; /&gt;&lt;/p&gt;

&lt;p&gt;After the control plane is available you can check the status of Linkerd using the cli. Linkerd goes out of its way to be easy to use and debug; with that in mind, the cli’s check option gives you an at-a-glance health check. Once again, be sure you’re running this for each workload cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run this once per cluster&lt;/span&gt;
linkerd check
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-linkerd-check.png&quot; alt=&quot;check&quot; /&gt;&lt;/p&gt;

&lt;h4 id=&quot;adding-multi-cluster-support&quot;&gt;Adding Multi Cluster Support&lt;/h4&gt;

&lt;p&gt;Now that we’re happy with the per cluster install, we need to extend it to handle multiple clusters.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run this once per cluster&lt;/span&gt;
linkerd multicluster &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; - 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-linkerd-multi.png&quot; alt=&quot;multi&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can check the status of your multicluster install a few different ways. We’re going to exercise the Linkerd cli a bit, then take a peek at our kubernetes objects. The big thing we’re looking for is that our new gateway has a load balancer assigned to it.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run a check with linkerd&lt;/span&gt;
linkerd check &lt;span class=&quot;nt&quot;&gt;--multicluster&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Check the new pod&lt;/span&gt;
kubectl get pods &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; linkerd-multicluster

&lt;span class=&quot;c&quot;&gt;# Check for a loadbalancer attached to the linkerd-gateway service&lt;/span&gt;
kubectl get svc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; linkerd-multicluster

&lt;span class=&quot;c&quot;&gt;# Ensure the load balancer has been assigned&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With our new gateways up and running, we now want to “link” our clusters. You can find a more detailed explanation in the &lt;a href=&quot;https://linkerd.io/2/tasks/multicluster/#linking-the-clusters&quot;&gt;docs&lt;/a&gt;, but the short version is that running the link commands allows the clusters to talk to each other and build/maintain the service mirrors.&lt;/p&gt;

&lt;p&gt;This part can be a little tricky so I’ll include a screen grab after. What we need to do is run our Linkerd link command to generate the yaml from one cluster, then use kubectl to apply it to the other cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;
&lt;span class=&quot;c&quot;&gt;# Run for each cluster&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## I accomplish this by running two terminals&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;### one with the $KUBECONFIG variable set to ~/configs/workload1&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#### Then run:&lt;/span&gt;
linkerd multicluster &lt;span class=&quot;nb&quot;&gt;link&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--cluster-name&lt;/span&gt; workload2 &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt; ~/configs/workload2 | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; - 

&lt;span class=&quot;c&quot;&gt;### one with the $KUBECONFIG variable set to ~/configs/workload2&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#### Then run:&lt;/span&gt;
linkerd multicluster &lt;span class=&quot;nb&quot;&gt;link&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--cluster-name&lt;/span&gt; workload1 &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt; ~/configs/workload1 | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can, hopefully, see that a little more clearly in the screen grab below. Note the red section of the prompt indicates the current kubernetes context.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/terminal-linkerd-mc-link.png&quot; alt=&quot;terminal-view&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Next you’ll want to validate the clusters are properly linked. Start by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;linkerd multicluster gateways&lt;/code&gt; for each cluster. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/terminal-linkerd-mc-gws.png&quot; alt=&quot;terminal-gws&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Note that each cluster sees the other’s gateway. On top of that we can rerun our Linkerd check to see more multicluster health check outputs.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;linkerd check &lt;span class=&quot;nt&quot;&gt;--multicluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With that, all your checks should pass and your clusters are ready for some workloads.&lt;/p&gt;

&lt;h2 id=&quot;getting-our-test-service-installed&quot;&gt;Getting our Test Service Installed&lt;/h2&gt;

&lt;p&gt;The default docs want you to use two podinfo configs mapped to the values east and west. I wanted to pair them up with my cluster names, so I pulled the app config into its &lt;a href=&quot;https://github.com/JasonMorgan/podinfo&quot;&gt;own repo&lt;/a&gt;. You can also use the east/west names from the docs by pulling the app definitions from the &lt;a href=&quot;https://github.com/linkerd/website/tree/master/multicluster&quot;&gt;Linkerd website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When running the commands, be sure to apply a different version of the app manifest to each cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# the podinfo repo has a config for each of the 2 clusters; apply one per cluster&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## on workload1&lt;/span&gt;
kubectl apply &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; github.com/jasonmorgan/podinfo/workload1/
&lt;span class=&quot;c&quot;&gt;## on workload2&lt;/span&gt;
kubectl apply &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; github.com/jasonmorgan/podinfo/workload2/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-linkerd-workloads.png&quot; alt=&quot;workloads &quot; /&gt;&lt;/p&gt;

&lt;p&gt;With that done, now is a good time to check out our new podinfo web service and see what it looks like. Run the following command for each cluster; be sure to either use a different local port for each or run them one at a time, then browse to the page.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl port-forward svc/frontend 8081:8080 &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then open a browser to localhost:8081. You should see something that looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/page.png&quot; alt=&quot;cuttle&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Later on when we get to &lt;a href=&quot;#splitting-traffic&quot;&gt;Traffic Splitting&lt;/a&gt; you’ll be able to see the traffic shift from one cluster to the other with the port-forwarding.&lt;/p&gt;

&lt;h2 id=&quot;linking-or-exporting-a-service&quot;&gt;Linking, or Exporting, a Service&lt;/h2&gt;

&lt;p&gt;When we decide we want to share a service between clusters we need to let Linkerd know which services to mirror. We do that with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mirror.linkerd.io/exported=true&lt;/code&gt; label; alternatively, if you’d like to modify the label key, you can find it on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;links.multicluster.linkerd.io&lt;/code&gt; object in the linkerd-multicluster namespace.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# You'll only need to run this on the workloads in one cluster. I ran it on workload2&lt;/span&gt;
kubectl label svc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;test &lt;/span&gt;podinfo mirror.linkerd.io/exported&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can now check the other cluster for the new service. I got the following output on workload1:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get svc

NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;             AGE
frontend            ClusterIP   100.65.191.192   &amp;lt;none&amp;gt;        8080/TCP            38m
podinfo             ClusterIP   100.70.110.153   &amp;lt;none&amp;gt;        9898/TCP,9999/TCP   38m
podinfo-workload2   ClusterIP   100.68.52.124    &amp;lt;none&amp;gt;        9898/TCP,9999/TCP   7s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
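
&lt;p&gt;As a quick sanity check, a sketch assuming the service names above and that the image in your frontend pod ships curl, you can call the mirrored service from a meshed pod on workload1:&lt;/p&gt;

```shell
# from workload1: call the mirrored workload2 service through the linkerd gateway
kubectl exec -n test deploy/frontend -- curl -s http://podinfo-workload2.test.svc.cluster.local:9898
```

&lt;p&gt;A JSON response from podinfo means the request made it to workload2 and back.&lt;/p&gt;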

&lt;p&gt;Technically you’ve now completed the task of sharing a service between clusters with Linkerd on Tanzu Kubernetes Grid. That being said, we still don’t have any cool demos to show off, so let’s keep going.&lt;/p&gt;

&lt;p&gt;You can move on to the next section where we split traffic. If you’d like some more detailed tests, check out &lt;a href=&quot;https://linkerd.io/2/tasks/multicluster/#exporting-the-services&quot;&gt;this&lt;/a&gt; section of the docs, or to see a walkthrough of validating TLS look &lt;a href=&quot;https://linkerd.io/2/tasks/multicluster/#security&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;splitting-traffic&quot;&gt;Splitting Traffic&lt;/h2&gt;

&lt;p&gt;This is where we start to see some of the power of a tool like Linkerd in combination with the &lt;a href=&quot;https://smi-spec.io/&quot;&gt;Service Mesh Interface&lt;/a&gt; (SMI). SMI aims to provide for our mesh layer what CNI, the &lt;a href=&quot;https://landscape.cncf.io/selected=container-network-interface-cni&quot;&gt;Container Network Interface&lt;/a&gt;, provides for our pod networks: a standard interface for defining common tasks like splitting traffic, surfacing metrics, or defining access controls. SMI is still in its early days, but you’ll be able to see some of what it can do in this example. We’re going to leverage the TrafficSplit spec to share requests between the podinfo services on workload clusters 1 and 2.&lt;/p&gt;

&lt;p&gt;Let’s create a TrafficSplit object and hand it off to our kube cluster.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;split.smi-spec.io/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TrafficSplit&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;podinfo&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;test&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;service&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;podinfo&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;backends&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;service&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;podinfo&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;weight&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;50&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;service&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;podinfo-workload2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;weight&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;50&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The file above tells Linkerd to split traffic for the podinfo service between the local podinfo on workload1, known locally as podinfo, and the podinfo service on workload2, known locally as podinfo-workload2. Save the yaml content from above into a file called split.yaml.&lt;/p&gt;
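
&lt;p&gt;One way to save it, purely as a convenience, is with a quoted heredoc so the shell leaves the yaml untouched:&lt;/p&gt;

```shell
# write the TrafficSplit manifest from above to split.yaml
# (the quoted 'EOF' prevents any shell expansion inside the heredoc)
cat <<'EOF' > split.yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  service: podinfo
  backends:
  - service: podinfo
    weight: 50
  - service: podinfo-workload2
    weight: 50
EOF
```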

&lt;p&gt;Before we talk any more about it, let’s get it up and running and check out the output. Start with the port-forward operation we used when we first deployed the podinfo app; be sure you’re running it against workload1.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl port-forward svc/frontend 8081:8080 &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Browse to localhost:8081 and keep the tab open; we’re going to watch what happens when we set up traffic splitting.&lt;/p&gt;

&lt;p&gt;Now apply that split.yaml we created earlier to your workload1 cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; split.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/term-linkerd-split.png&quot; alt=&quot;split&quot; /&gt;&lt;/p&gt;

&lt;p&gt;At this point you should see your browser switching between the local and remote podinfo services. With that you’ve successfully split traffic between two kubernetes clusters with Linkerd! This is pretty neat as an example but think about some other ways we could apply this. We could isolate PCI workloads to a PCI cluster or run backing services in one cluster and front end apps in another.&lt;/p&gt;
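
&lt;p&gt;If you’d rather watch the split in numbers than in a browser, the stable-2.9 cli can report per-backend traffic for the split (in releases after 2.9 this moved to the viz extension). Run it against workload1, where the TrafficSplit lives:&lt;/p&gt;

```shell
# show per-backend success rate and request volume for the split
linkerd stat trafficsplit -n test
```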

&lt;h2 id=&quot;wrap-up&quot;&gt;Wrap Up&lt;/h2&gt;

&lt;p&gt;Well, I hope y’all were able to follow along, and I really hope you got to have a “that’s pretty cool” moment when we split the traffic for podinfo between clusters. I certainly enjoyed it. If this is interesting, I’d recommend digging a little deeper into Linkerd and expanding on this example by connecting an app between clusters.&lt;/p&gt;

&lt;p&gt;Once I got the hang of it I was able to run through the end to end example in about 30 minutes. I’m definitely looking forward to seeing one of my customers give this a shot and I’ll be eagerly waiting for multi cluster networking with Linkerd to support database connections.&lt;/p&gt;

&lt;p&gt;Thanks so much for reading and I’d love to hear any feedback you have,&lt;/p&gt;

&lt;p&gt;Jason&lt;/p&gt;
</description>
        <pubDate>Sun, 06 Dec 2020 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/multi-cluster-linkerd</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/multi-cluster-linkerd</guid>
        
        <category>linkerd</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        <category>mesh</category>
        
        
        
      </item>
    
      <item>
        <title>Installing code-server in Kubernetes</title>
<description>&lt;p&gt;Hey folks! I wanted to write a quick article today on getting started with code-server on kubernetes. Code-server is an open source project from the folks at &lt;a href=&quot;https://coder.com&quot;&gt;Coder&lt;/a&gt; that makes it easy to run and manage a cloud-based vscode instance you can connect to and operate remotely. Installing it in kubernetes has some neat benefits, like letting you develop your app alongside whatever versions of other services you want. If you’re following a strong gitops flow, it’s pretty straightforward to build a dev cluster that closely mirrors one of your production environments; it also lets you log into the same vscode instance from any device and location you want.&lt;/p&gt;

&lt;h2 id=&quot;getting-started&quot;&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;Lately, I’ve gotten into the habit of writing “getting started” guides and hosting them in Github. You can find my “getting started” guide for code-server &lt;a href=&quot;https://github.com/JasonMorgan/code-server-getting-started&quot;&gt;here&lt;/a&gt;. Feel free to hop over there if you want to get up and running; the rest of this post will be about the choices I made, and why I made them, while building the manifest and container.&lt;/p&gt;

&lt;h2 id=&quot;the-dockerfile&quot;&gt;The Dockerfile&lt;/h2&gt;

&lt;div class=&quot;language-Dockerfile highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt;&lt;span class=&quot;s&quot;&gt; codercom/code-server&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;COPY&lt;/span&gt;&lt;span class=&quot;s&quot;&gt; .vscode /home/coder/.local/share/code-server&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;RUN &lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;-Lo&lt;/span&gt; shellcheck-v0.7.1.linux.x86_64.tar.xz https://github.com/koalaman/shellcheck/releases/download/v0.7.1/shellcheck-v0.7.1.linux.x86_64.tar.xz &lt;span class=&quot;se&quot;&gt;\
&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;tar&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-xvf&lt;/span&gt; shellcheck-v0.7.1.linux.x86_64.tar.xz &lt;span class=&quot;se&quot;&gt;\
&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x shellcheck-v0.7.1/shellcheck &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo mv &lt;/span&gt;shellcheck-v0.7.1/shellcheck /usr/local/bin/ &lt;span class=&quot;se&quot;&gt;\
&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; shellcheck&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo chown&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-R&lt;/span&gt; coder:coder /home/coder/.local/share/code-server &lt;span class=&quot;se&quot;&gt;\
&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class=&quot;nt&quot;&gt;-LO&lt;/span&gt; https://storage.googleapis.com/kubernetes-release/release/&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; https://storage.googleapis.com/kubernetes-release/release/stable.txt&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;/bin/linux/amd64/kubectl &lt;span class=&quot;se&quot;&gt;\
&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x kubectl &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo mv &lt;/span&gt;kubectl /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you aren’t really familiar with Dockerfile instructions, I dig through the FROM/RUN/COPY stuff a bit more in the &lt;a href=&quot;https://github.com/JasonMorgan/code-server-getting-started/blob/master/docker-container.md&quot;&gt;git repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For my instance I decided I wanted my code-server to mirror my local vscode environment, and the best way to do that was to copy over my extensions directly. I won’t dig into it here, but it turns out that in spite of vscode being open source, some components, like the extension store, have special access rules around them. Long story short, if you’re using a variant of vscode, like code-server, you don’t necessarily have access to all the vscode extensions you might be using. You can sidestep any concern about extension stores by copying your current .vscode directory into the container. Just be aware that the default extensions directory in code-server is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.local/share/code-server&lt;/code&gt;, and once you move the extensions over you need to take ownership of the files. Beyond that, some extensions, like shellcheck and kubernetes, require a binary to be available on your path, so you’ll need to modify your Dockerfile to pull down anything you need.&lt;/p&gt;

&lt;p&gt;If you want to see a bit more detail about the dockerfile check out my comments in the &lt;a href=&quot;https://github.com/JasonMorgan/code-server-getting-started/blob/master/docker-container.md&quot;&gt;getting started guide&lt;/a&gt;.&lt;/p&gt;
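
&lt;p&gt;If you’re building your own variant, the build loop is the usual one; the image name below is a placeholder for your own registry and repo:&lt;/p&gt;

```shell
# build the customized image and push it somewhere your cluster can pull from
docker build -t your-registry/code-server:custom .
docker push your-registry/code-server:custom
```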

&lt;h2 id=&quot;the-manifest&quot;&gt;The Manifest&lt;/h2&gt;

&lt;p&gt;Before I go anywhere I want to point out that I got my initial template from the folks over at DigitalOcean. You can see &lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-set-up-the-code-server-cloud-ide-platform-on-digitalocean-kubernetes&quot;&gt;their post&lt;/a&gt; about deploying code-server on kubernetes. I thought the whole thing was really well done and it’s probably worth your time to read through, especially if you want a second opinion.&lt;/p&gt;

&lt;p&gt;A lot of this is going to be pretty standard: we have a deployment, a service, and a persistent volume claim. I want to run my code-server using the image I customized and pushed to dockerhub, I want to be able to access it reliably via a service, and I want to persist its data disk so I don’t lose my work if something happens to the pod.&lt;/p&gt;

&lt;p&gt;When we get to persisting the data we have to watch out for a couple of things. First and foremost, file permissions. I didn’t have any issues on Docker Desktop, but mounting volumes on AWS set the file system owner to root, and code-server runs as coder by default, so I modified my prep script to chown the home directory. After that I built out a little script to set up a clean working directory and to try to clone whatever repo I told the init container to pull. You can find all this in the manifest file &lt;a href=&quot;https://github.com/JasonMorgan/code-server-getting-started/blob/master/code-server.yaml&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
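
&lt;p&gt;The prep logic can be sketched roughly like this; the paths, uid, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GIT_REPO&lt;/code&gt; variable are illustrative rather than the repo’s exact script:&lt;/p&gt;

```shell
#!/bin/sh
# illustrative init-container prep script, not the repo's exact version:
# fix volume ownership for the coder user and set up a clean working directory
set -e

# in the real init container this would be /home/coder; kept relative here so
# the sketch can run anywhere
HOME_DIR="${HOME_DIR:-$PWD/coder-home}"

mkdir -p "$HOME_DIR/project"

# AWS-provisioned volumes come up owned by root, but code-server runs as coder (uid 1000)
chown -R 1000:1000 "$HOME_DIR" 2>/dev/null || true

# clone a starter repo into the workspace if one was requested and isn't there yet
if [ -n "$GIT_REPO" ] && [ ! -d "$HOME_DIR/project/.git" ]; then
  git clone "$GIT_REPO" "$HOME_DIR/project"
fi
```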

&lt;h2 id=&quot;the-ingress-and-tls&quot;&gt;The Ingress and TLS&lt;/h2&gt;

&lt;p&gt;This ended up being trickier than I anticipated, as code-server uses websockets to get that fancy editor experience and Project Contour didn’t (and maybe still doesn’t) have great documentation on getting that working. I swapped over to nginx’s ingress, which supports websockets by default, and was up and running. If you aren’t using cert-manager I’d highly recommend you check it out. The ACME HTTP challenge works great, and cert-manager plus nginx, or contour, will auto-generate certificates for you on demand. This blog has some articles about setting up cert-manager, and contour has a great tutorial &lt;a href=&quot;https://projectcontour.io/guides/cert-manager/&quot;&gt;here&lt;/a&gt;. If you want to use nginx instead, like I did, you can check out the how-to on cert-manager’s &lt;a href=&quot;https://cert-manager.io/docs/tutorials/acme/ingress/&quot;&gt;docs page&lt;/a&gt;. The helm stuff is a bit dated but installing nginx’s ingress is pretty straightforward at this point.&lt;/p&gt;

&lt;p&gt;I ended up using the following ingress:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Ingress&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;nginx.ingress.kubernetes.io/force-ssl-redirect&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;true&quot;&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;kubernetes.io/tls-acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;true&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;tls&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;secretName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server.my-domain.com&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;rules&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server.my-domain.com&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;http&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;paths&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;backend&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;serviceName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;code-server&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;servicePort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Be sure to add the annotations for forcing SSL so you don’t accidentally hit the unencrypted endpoint.&lt;/p&gt;

&lt;h2 id=&quot;the-wrap-up&quot;&gt;The Wrap Up&lt;/h2&gt;

&lt;p&gt;That’s all I have folks! Hope you enjoyed reading it and if there’s anything you’d like to read about in the future hit me up on the kubernetes slack, @jmo, or on twitter @rjasonmorgan.&lt;/p&gt;
</description>
        <pubDate>Mon, 27 Jul 2020 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/install-code-server</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/install-code-server</guid>
        
        <category>code-server</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        
        
      </item>
    
      <item>
        <title>Initial Experience with the Helm 3 Alpha</title>
<description>&lt;p&gt;Hey folks, I’ve been working with Helm 3 since the alpha came out (some amount of time ago that I’m unwilling to look up). It’s been a pretty good experience so far, but I wanted to sketch out my experience to date. I may come back and update this later as I run into new issues.&lt;/p&gt;

&lt;h2 id=&quot;goodbye-tiller&quot;&gt;Goodbye Tiller&lt;/h2&gt;

&lt;p&gt;I’m super psyched that Tiller is gone. I think I get what Tiller was intended to do, but it represented both an operational and a security failure plane. That’s OK if it provides significant value, but I didn’t see that value. Even though most of my examples in this blog involve using helm, I rarely used Tiller myself. I would use helm templating to generate yaml manifests and then just apply those directly. It was a decent system but it was definitely kind of a pain. Helm 3 drops Tiller, and once it matures it will probably be my go-to.&lt;/p&gt;

&lt;h3 id=&quot;why-helm&quot;&gt;Why Helm&lt;/h3&gt;

&lt;p&gt;Just wanted to write this down somewhere. I know there are a few different, and seemingly really good, alternatives to Helm. I don’t really use them. I don’t have anything against any particular tool or approach; I just believe that for package managers the top feature I care about is whether it’s a standard. Helm is, IMO, the closest thing to a standard for k8s “package” management. So I’ll use helm, write charts, and use tools that support helm packages. I’d rather see helm get better and evolve than chase the “best” tool. If the standard changes, or solidifies somewhere else, I’ll migrate to that.&lt;/p&gt;

&lt;h2 id=&quot;its-an-alpha&quot;&gt;It’s an Alpha&lt;/h2&gt;

&lt;p&gt;It’s an Alpha. Some stuff is broken, or doesn’t work, or kind of doesn’t work, or sometimes doesn’t work. So far I’ve noticed that it really doesn’t like dealing with resources that get namespace labels, particularly if they aren’t for the namespace you’re currently pinned to. Also helm upgrade seems to work great as long as you don’t actually want objects to update or change… But that being said it’s definitely worth trying out. A lot of what you want to do just works and you can get around the stuff that doesn’t work by adding the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--dry-run&lt;/code&gt; switch to your install command or using the templating function to just create yaml files you can apply normally.&lt;/p&gt;
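The workaround mentioned above, rendering manifests with the templating function and applying them directly, looks roughly like this. The release and chart names are placeholders, and flags may differ slightly between Helm 3 alphas.

```shell
# Render the chart to plain YAML and apply it with kubectl, skipping Helm's
# release machinery entirely ("my-release" and "./my-chart" are placeholders):
helm template my-release ./my-chart --namespace my-namespace > manifests.yaml
kubectl apply -f manifests.yaml

# Or preview what an install would submit without touching the cluster:
helm install my-release ./my-chart --dry-run
```

You lose release tracking and rollbacks this way, but you also sidestep most of the alpha's upgrade and namespace quirks.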

&lt;h3 id=&quot;things-ive-noticed&quot;&gt;Things I’ve Noticed&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm upgrade&lt;/code&gt; doesn’t seem to do much&lt;/li&gt;
  &lt;li&gt;It really doesn’t like when you are creating objects in multiple namespaces&lt;/li&gt;
  &lt;li&gt;It doesn’t seem to like or necessarily respect it when you use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--namespace&lt;/code&gt; flag&lt;/li&gt;
  &lt;li&gt;The syntax has changed in what I think is a real positive way&lt;/li&gt;
  &lt;li&gt;The naming no longer appends some chart-based name afterward
    &lt;ul&gt;
      &lt;li&gt;I love this cause for some reason I really care about the names of things&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;check-it-out&quot;&gt;Check it Out&lt;/h2&gt;

&lt;p&gt;I’d suggest you check it out and start using it. For those of us who can order k8s clusters on demand, because we use one of the *KS’s, it’s really easy to grab a cluster, try out some of our existing workflows with new tools, then blow it away. For the DIY k8s-ers out there, Docker for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Whatever OS&lt;/code&gt; or minikube makes it pretty easy to test things out.&lt;/p&gt;
</description>
        <pubDate>Fri, 31 May 2019 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/helm3</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/helm3</guid>
        
        <category>helm</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        
        
      </item>
    
      <item>
        <title>Issuing Certificates with Cert-Manager and Let's Encrypt</title>
<description>&lt;p&gt;I go out of my way to secure all my sites with a valid https certificate. I’m also fairly cheap, which left me with a dilemma: &lt;a href=&quot;https://aws.amazon.com/certificate-manager/&quot;&gt;AWS’s certificate manager&lt;/a&gt; used to be my only option. It was handy, but I was effectively stuck on AWS and had to use an Elastic Load Balancer, ELB, to terminate my TLS connections. Before we go too deep, if any of the terms I’m using are a little ambiguous I’d recommend checking out &lt;a href=&quot;https://www.websecurity.symantec.com/security-topics/what-is-ssl-tls-https&quot;&gt;this article&lt;/a&gt; from Symantec on the meaning of SSL, TLS, and https. The first section covers the definitions well enough that hopefully you’ll feel comfortable with the difference between the terms.&lt;/p&gt;

&lt;h2 id=&quot;concepts&quot;&gt;Concepts&lt;/h2&gt;

&lt;h3 id=&quot;lets-encrypt&quot;&gt;Let’s Encrypt&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://letsencrypt.org/&quot;&gt;Let’s Encrypt&lt;/a&gt; is a free service that allows you to programmatically generate TLS certificates. It’s not always super well documented or easy to use, but once you get it in place you can generate, use, and renew certificates on a regular basis for free. On top of that, Let’s Encrypt pioneered a new IETF standard for programmatically doing Domain Validation. I’ll get into the how and why of that below.&lt;/p&gt;

&lt;h3 id=&quot;cert-manager&quot;&gt;Cert-Manager&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.cert-manager.io/en/latest/#&quot;&gt;cert-manager&lt;/a&gt; is a kubernetes service that interacts with Let’s Encrypt, or another CA, on your behalf to programmatically request, generate, and renew certificates. We’re going to limit our scope today to working with Let’s Encrypt. Go &lt;a href=&quot;https://docs.cert-manager.io/en/latest/getting-started/install.html#installing-with-helm&quot;&gt;here&lt;/a&gt; to skip some of this guide and just install the latest cert-manager. They’re moving to a non-default helm repo and have their own getting started instructions. I’ll also cover it in more detail in the &lt;a href=&quot;#Code&quot;&gt;Getting to the Code&lt;/a&gt; section.&lt;/p&gt;

&lt;h4 id=&quot;acme-and-acmev2&quot;&gt;ACME and ACMEv2&lt;/h4&gt;

&lt;p&gt;Let’s Encrypt originally came out with a protocol they called ACME, which was a mechanism for programmatically doing &lt;a href=&quot;https://en.wikipedia.org/wiki/Domain-validated_certificate&quot;&gt;domain validation&lt;/a&gt;. This year they got ACME accepted as an &lt;a href=&quot;https://tools.ietf.org/html/rfc8555&quot;&gt;IETF standard&lt;/a&gt; and they called it ACMEv2. From a practical standpoint ACME/ACMEv2 supports two mechanisms for domain validation.&lt;/p&gt;

&lt;p&gt;With Let’s Encrypt we want to be careful to manage our interactions with the API. Let’s Encrypt provides a free service to anyone that wants certificates. Awesome, right? The downside is they throttle requests against their production APIs. Let’s Encrypt has been changing their rate limits as they mature their infrastructure; you can keep up to date with them &lt;a href=&quot;https://letsencrypt.org/docs/rate-limits/&quot;&gt;here&lt;/a&gt;. The upshot is that you don’t want to test whether your issuer and request are valid against the production API. To validate them before hitting production, Let’s Encrypt provides a staging API that allows you to happily mess up as much as you like. &lt;strong&gt;&lt;em&gt;Clarification&lt;/em&gt;&lt;/strong&gt;: the staging API is also rate limited, but it’s much more forgiving than the prod API, so please don’t spam it either.&lt;/p&gt;

&lt;h5 id=&quot;dns&quot;&gt;DNS&lt;/h5&gt;

&lt;p&gt;DNS Validation, covered in &lt;a href=&quot;https://tools.ietf.org/html/rfc8555#section-8.4&quot;&gt;section 8.4 of the IETF RFC&lt;/a&gt;, is when you prove you’re authoritative for the domain by creating a custom DNS entry.&lt;/p&gt;
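Concretely, per the RFC, that custom DNS entry is a TXT record at the _acme-challenge label under your domain, containing a digest derived from the challenge token. You can watch for it yourself; the domain below is a placeholder.

```shell
# DNS-01 in practice: the CA asks you to publish a TXT record at
# _acme-challenge.<your-domain> and then queries it to prove you control
# the zone. Check that the record is visible (example.com is a placeholder):
dig +short TXT _acme-challenge.example.com
```

With cert-manager and the route53 provider shown later in this post, that record is created and cleaned up for you automatically.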

&lt;h5 id=&quot;http&quot;&gt;HTTP&lt;/h5&gt;

&lt;p&gt;HTTP Validation, covered in &lt;a href=&quot;https://tools.ietf.org/html/rfc8555#section-8.3&quot;&gt;section 8.3 of the IETF RFC&lt;/a&gt;, involves putting a specific file on a web server at a specific URL.&lt;/p&gt;
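That specific URL is the well-known ACME challenge path defined by the RFC. A quick way to see what the CA sees; the domain and token below are placeholders.

```shell
# HTTP-01 in practice: the CA fetches a token file over plain HTTP from a
# well-known path on your server (example.com and TOKEN are placeholders):
curl http://example.com/.well-known/acme-challenge/TOKEN
```

Note the fetch happens over plain HTTP on port 80, which is one reason this method can be awkward behind ingresses that force SSL redirects.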

&lt;h4 id=&quot;issuers&quot;&gt;Issuers&lt;/h4&gt;

&lt;p&gt;Issuers refer to the service that will act to issue a given certificate. Ultimately your CA issues the actual certificate. The issuer in the context of cert-manager refers to the broker between your kubernetes cluster and the CA.&lt;/p&gt;

&lt;p&gt;In order to get your issuer up and running correctly you need to pick a domain validation method. Let’s Encrypt offers free Domain Validation (DV) certs, which basically just check that the entity requesting the certificate has effective control of the domain in question. If that doesn’t sound like a whole ton of validation, you’re right, it’s not. &lt;a href=&quot;https://www.troyhunt.com/cloudflare-ssl-and-unhealthy-security-absolutism/&quot;&gt;Troy Hunt&lt;/a&gt;, among others, has a lot of good talk tracks on what exactly SSL/TLS do, and more importantly don’t do, to secure a given site. The long and the short of it is that a TLS connection makes it really hard to snoop on the traffic between a browser and a site. That’s it.&lt;/p&gt;

&lt;p&gt;Back to DV: Let’s Encrypt has been pioneering programmatic ways to do Domain Validation, and they recently got ACMEv2 adopted as an &lt;a href=&quot;https://tools.ietf.org/html/rfc8555&quot;&gt;IETF&lt;/a&gt; standard. With cert-manager you have two options for DV, HTTP and DNS.&lt;/p&gt;

&lt;h4 id=&quot;certificates&quot;&gt;Certificates&lt;/h4&gt;

&lt;p&gt;Certificates refer to the actual certificate you intend to generate: basically, what URL/host are you trying to secure and where do you want to store your secret? I honestly prefer to think of these objects as requests, since the actual certificate is stored as a kubernetes secret, but that doesn’t really matter.&lt;/p&gt;

&lt;h2 id=&quot;code&quot;&gt;Code&lt;/h2&gt;

&lt;p&gt;This guide is working with cert-manager version 0.7.0; for the latest docs check out &lt;a href=&quot;https://docs.cert-manager.io/en/latest/#&quot;&gt;cert-manager’s site&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;installing-cert-manager&quot;&gt;Installing cert-manager&lt;/h3&gt;

&lt;p&gt;The helm chart has a good &lt;a href=&quot;https://github.com/jetstack/cert-manager/blob/release-0.7/deploy/charts/cert-manager/README.md#installing-the-chart&quot;&gt;installation guide&lt;/a&gt;, which I won’t spend a ton of time on here.&lt;/p&gt;

&lt;p&gt;We need to first apply the cert-manager &lt;a href=&quot;https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml&quot;&gt;CRDs&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next create the cert-manager namespace, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl create namespace cert-manager&lt;/code&gt;, though personally I create and maintain a yaml version of the namespace that I can apply in bulk as necessary. Once the namespace is up you need to apply a label to disable cert-manager validation: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=&quot;true&quot;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For the namespace definition I use:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Namespace&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert-manager&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;certmanager.k8s.io/disable-validation&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;true&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you’re using helm you can complete the install with this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;
&lt;span class=&quot;c&quot;&gt;## Add the Jetstack Helm repository&lt;/span&gt;
helm repo add jetstack https://charts.jetstack.io

&lt;span class=&quot;c&quot;&gt;# Update your local Helm chart repository cache&lt;/span&gt;
helm repo update

&lt;span class=&quot;c&quot;&gt;## Install the cert-manager helm chart&lt;/span&gt;
helm &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; cert-manager &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; cert-manager &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt; v0.7.0 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  jetstack/cert-manager

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;configuring-a-cluster-issuer&quot;&gt;Configuring a Cluster Issuer&lt;/h3&gt;

&lt;p&gt;I start from the assumption that I won’t run multi-tenant clusters, so I always build out cluster issuers as opposed to creating individual issuers per namespace.&lt;/p&gt;

&lt;p&gt;In order to test out my issuer I create a ClusterIssuer that will connect to the staging API. I modify the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;spec.acme.server&lt;/code&gt; to use the staging API, https://acme-staging-v02.api.letsencrypt.org/directory. The full staging issuer is included below.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;certmanager.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterIssuer&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-stage&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string, you'll reference this when you create a certificate&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https://acme-staging-v02.api.letsencrypt.org/directory&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# LetsEncrypt API URL&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;email&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;jason@59s.io&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Your email address&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;# Name of a secret used to store the ACME account private key&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;privateKeySecretRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-stage&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;# ACME DNS-01 provider configurations&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;dns01&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;

      &lt;span class=&quot;c1&quot;&gt;# Here we define a list of DNS-01 providers that can solve DNS challenges&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;providers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dns&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# arbitrary string, you'll reference this later in your request.&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;route53&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;region&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;us-east-1&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Region of the zone&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;hostedZoneID&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;IMAVALIDHOSTEDZONEID&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;# optional if ambient credentials are available; see ambient credentials documentation&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;accessKeyID&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;IMAVALIDAWSACCESSKEY&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Because we're using a cluster issuer this secret, with the properties you see below, needs to be placed in the cert-manager namespace.&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;secret-name&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;property-name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;using-staging&quot;&gt;Using Staging&lt;/h4&gt;

&lt;p&gt;Use staging. Until you’re super comfortable that your certificate request and issuer work well, stick with the staging API. The staging API will generate an invalid certificate and store it in your kubernetes cluster. You don’t actually want to use the staging certificate for anything other than validating that your issuer and certificate request are correct.&lt;/p&gt;
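One way to confirm you really got a staging certificate is to pull the stored secret back out and inspect the issuer. A sketch, using the secret and namespace names from the example request in this post; adjust both for your own setup.

```shell
# Decode the issued certificate and check its issuer and expiry. A staging
# cert will show a Let's Encrypt "Fake" intermediate as the issuer, which is
# how you know not to serve it to real users.
kubectl get secret 59s-io-tls-stage --namespace n1 \
  --output 'jsonpath={.data.tls\.crt}' | base64 --decode \
  | openssl x509 -noout -issuer -enddate
```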

&lt;h3 id=&quot;configuring-a-certificate-request&quot;&gt;Configuring a certificate request&lt;/h3&gt;

&lt;p&gt;Once your issuer is up and running you’ll request a certificate. Again, we start with the staging API then swap over to the prod API later.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;certmanager.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Certificate&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s-io-stage&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;n1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;secretName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s-io-tls-stage&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string, name of the secret you want to create&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;issuerRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-stage&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterIssuer&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;commonName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*.59s.io'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;dnsNames&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s.io&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;dns01&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;provider&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dns&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# use the provider name from your issuer&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;domains&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*.59s.io'&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Super sweet wildcard certificates let me use a single ingress for all my sites. LetsEncrypt started issuing wildcard certs back in 2018.&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I like to watch the logs at this point to see what cert-manager is doing and ensure it’s able to issue my certificate. Once it’s issued you can begin migrating over to the production API.&lt;/p&gt;
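Assuming a standard helm install, tailing the controller logs and checking on the request looks something like this; the deployment name and the certificate/secret names may differ in your cluster.

```shell
# Follow cert-manager's logs while it works through the DNS-01 challenge:
kubectl logs --namespace cert-manager deploy/cert-manager --follow

# Check the request's status and events, then confirm the secret exists:
kubectl describe certificate 59s-io-stage --namespace n1
kubectl get secret 59s-io-tls-stage --namespace n1
```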

&lt;h3 id=&quot;moving-to-the-production-api&quot;&gt;Moving to the Production API&lt;/h3&gt;

&lt;p&gt;We’re going to effectively copy the issuer and certificate request from above: swap out the names so that prod replaces staging, and update the server URL to point at the production API.&lt;/p&gt;

&lt;h4 id=&quot;cluster-issuer&quot;&gt;Cluster Issuer&lt;/h4&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;certmanager.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterIssuer&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string, you'll reference this when you create a certificate&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# LetsEncrypt API URL&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;email&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;jason@59s.io&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Your email address&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;# Name of a secret used to store the ACME account private key&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;privateKeySecretRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;# ACME DNS-01 provider configurations&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;dns01&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;

      &lt;span class=&quot;c1&quot;&gt;# Here we define a list of DNS-01 providers that can solve DNS challenges&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;providers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dns&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# arbitrary string, you'll reference this later in your request.&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;route53&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;region&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;us-east-1&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Region of the zone&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;hostedZoneID&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;IMAVALIDHOSTEDZONEID&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;# optional if ambient credentials are available; see ambient credentials documentation&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;accessKeyID&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;IMAVALIDAWSACCESSKEY&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Because we're using a ClusterIssuer, this secret, with the properties you see below, needs to be placed in the cert-manager namespace.&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;secret-name&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;property-name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
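&lt;p&gt;One thing the manifest above assumes is that the AWS credentials secret already exists. A minimal sketch of creating it, reusing the placeholder names from the issuer (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;secret-name&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;property-name&lt;/code&gt;) and a placeholder key value:&lt;/p&gt;

```shell
# Hypothetical sketch: create the secret the route53 solver reads.
# The names secret-name / property-name come from the issuer manifest above;
# replace the literal value with your real AWS secret access key.
kubectl create secret generic secret-name \
  --namespace cert-manager \
  --from-literal=property-name=IMAVALIDAWSSECRETACCESSKEY
```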

&lt;h4 id=&quot;certificate-request&quot;&gt;Certificate Request&lt;/h4&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;certmanager.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Certificate&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s-io-prod&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;n1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;secretName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s-io-tls-prod&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Arbitrary string, name of the secret you want to create&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;issuerRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterIssuer&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;commonName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*.59s.io'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;dnsNames&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s.io&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;dns01&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;provider&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dns&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# use the provider name from your issuer&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;domains&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*.59s.io'&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Super sweet wildcard certificates let me use a single ingress for all my sites. LetsEncrypt started issuing wildcard certs back in 2018.&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;59s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
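&lt;p&gt;After you apply the Certificate you can watch cert-manager do its thing. A sketch, using the names from the manifest above:&lt;/p&gt;

```shell
# Watch cert-manager work the request; look for a Ready/issued condition.
kubectl describe certificate 59s-io-prod -n n1

# The issued cert and key land in the secret named by spec.secretName.
kubectl get secret 59s-io-tls-prod -n n1
```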

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Use certificates to secure your sites. They’re free, the technical burden of requesting and maintaining them is relatively low, and the process can be fully automated with tools like cert-manager. Kubernetes also gives those tools the building blocks they need to simplify issuing, renewing, and storing certificates.&lt;/p&gt;

&lt;p&gt;I’ll do a follow-on post on pairing a wildcard DNS entry and a wildcard certificate with an NGINX ingress to automatically secure any sites you want to build on a given cluster.&lt;/p&gt;
</description>
        <pubDate>Fri, 26 Apr 2019 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/cert-manager</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/cert-manager</guid>
        
        <category>certificates</category>
        
        <category>LetsEncrypt</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        <category>cert-manager</category>
        
        
        
      </item>
    
      <item>
        <title>Concourse in Kubernetes</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://concourse-ci.org/&quot;&gt;Concourse&lt;/a&gt; is a handy tool for running build jobs or any other arbitrary code you want, otherwise known as a CI server. It has it’s own &lt;a href=&quot;https://en.wikipedia.org/wiki/Domain-specific_language&quot;&gt;DSL&lt;/a&gt;, domain specific language, that you have to learn but it wasn’t too much of a burden for me to pick up. If you’re looking for some help getting started with concourse I recommend &lt;a href=&quot;https://concoursetutorial.com/&quot;&gt;this delightful tutorial&lt;/a&gt; from the folks over at Starke &amp;amp; Wayne.&lt;/p&gt;

&lt;h2 id=&quot;deploying-concourse&quot;&gt;Deploying Concourse&lt;/h2&gt;

&lt;p&gt;You have a lot of options for deploying your own Concourse instance. Here are a few:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;a Bosh release&lt;/li&gt;
  &lt;li&gt;Pivotal provides a somewhat curated release over at network.pivotal.io&lt;/li&gt;
  &lt;li&gt;EngineerBetter has &lt;a href=&quot;https://github.com/EngineerBetter/concourse-up&quot;&gt;Concourse-Up&lt;/a&gt;
    &lt;ul&gt;
      &lt;li&gt;This will deploy
        &lt;ul&gt;
          &lt;li&gt;a Bosh Director, think Kubernetes for VMs&lt;/li&gt;
          &lt;li&gt;Credhub, a secret store similar to Vault
            &lt;ul&gt;
              &lt;li&gt;it will even connect concourse to credhub so it can securely store and retrieve credentials&lt;/li&gt;
            &lt;/ul&gt;
          &lt;/li&gt;
          &lt;li&gt;A prometheus instance with Grafana&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;It’s pretty sweet overall and if you’re running in GCP or AWS it’ll work well for you&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/starkandwayne/bucc&quot;&gt;BUCC&lt;/a&gt; from Stark and Wayne
    &lt;ul&gt;
      &lt;li&gt;Supposed to be similar to Concourse-Up but more IaaS-agnostic&lt;/li&gt;
      &lt;li&gt;I personally haven’t gotten it working so take that for what it’s worth&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;concourse-in-k8s&quot;&gt;Concourse in K8s&lt;/h2&gt;

&lt;p&gt;I think Bosh is cool, but I’m often impatient. Kubernetes lets me do a lot of what I do with Bosh, but it does it with containers, which I can start and modify a lot faster than VMs. It also does a bunch of other really cool things that I like, which we can get into later.&lt;/p&gt;

&lt;h3 id=&quot;helm-chart&quot;&gt;Helm Chart&lt;/h3&gt;

&lt;p&gt;I’m not going to get into what &lt;a href=&quot;https://helm.sh/&quot;&gt;helm&lt;/a&gt; is, and the rest of this article will assume you have some familiarity there. If you want me to write an article about that, tweet me or something. Someone out there, I’m not sure who, is maintaining the stable/concourse chart. It works really well and they’ve been doing a pretty good job keeping it up to date.&lt;/p&gt;

&lt;h3 id=&quot;getting-into-the-code&quot;&gt;Getting into the code&lt;/h3&gt;

&lt;p&gt;Kubernetes is effectively tons of YAML so we’re going to dive into that now.&lt;/p&gt;

&lt;h4 id=&quot;pre-reqs&quot;&gt;Pre reqs&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;a Kubernetes Cluster
    &lt;ul&gt;
      &lt;li&gt;I’m going to assume RBAC is enabled as that seems pretty standard&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Helm&lt;/li&gt;
  &lt;li&gt;kubectl&lt;/li&gt;
  &lt;li&gt;Some kind of editor&lt;/li&gt;
  &lt;li&gt;A terminal&lt;/li&gt;
  &lt;li&gt;A clean working directory&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;getting-started-with-helm&quot;&gt;Getting Started with Helm&lt;/h4&gt;

&lt;p&gt;Open your terminal and get your connection to your kube cluster squared away. You can validate that it’s up and running with this command.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we’re going to set up our tiller account and initialize helm. If you have any questions while you’re doing this please refer to &lt;a href=&quot;https://helm.sh/docs/using_helm/#role-based-access-control&quot;&gt;helm’s guide&lt;/a&gt; to installing tiller with RBAC enabled.&lt;/p&gt;

&lt;h5 id=&quot;tiller-account-and-cluster-role-binding&quot;&gt;Tiller Account and Cluster Role Binding&lt;/h5&gt;

&lt;p&gt;I have a file lying around with my tiller service account and cluster role binding. Save it as tiller.yaml in your current working directory.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ServiceAccount&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;tiller&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kube-system&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;tiller&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;roleRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;apiGroup&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterRole&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cluster-admin&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;subjects&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ServiceAccount&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;tiller&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kube-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The big thing here is making sure that the tiller account exists and has appropriate permissions to start launching new applications.&lt;/p&gt;

&lt;p&gt;After saving the tiller file we’re going to apply it to our kube cluster.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; tiller.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should see output similar to this:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h5 id=&quot;helm-init&quot;&gt;Helm Init&lt;/h5&gt;

&lt;p&gt;With our service account in place we’re clear to start tiller.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm init &lt;span class=&quot;nt&quot;&gt;--service-account&lt;/span&gt; tiller
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can test the install by running&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should get feedback about both the helm and tiller versions.&lt;/p&gt;
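&lt;p&gt;If you want to double-check tiller itself, one way to do it (the deployment name below is what &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm init&lt;/code&gt; creates by default):&lt;/p&gt;

```shell
# helm init creates a deployment named tiller-deploy in kube-system.
kubectl get deployment tiller-deploy -n kube-system
```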

&lt;p&gt;If you want to be done with the article now and just run the vanilla version of concourse that comes with the chart run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm install stable/concourse&lt;/code&gt; and follow the instructions in the output to access your instance. The rest of the article is just diving into the values file.&lt;/p&gt;

&lt;h4 id=&quot;getting-the-chart&quot;&gt;Getting the chart&lt;/h4&gt;

&lt;p&gt;To get started you can find the chart by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm search concourse&lt;/code&gt; or just going to the &lt;a href=&quot;https://github.com/helm/charts/tree/master/stable/concourse&quot;&gt;stable/concourse folder&lt;/a&gt; in the stable helm chart repo.&lt;/p&gt;

&lt;p&gt;I like to download a copy locally to get started. Also, as a side note, I’m going to avoid the most recent version of the chart, as I’m not yet ready to dig into the Concourse 5.0.0 release. You can view chart versions by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm search stable/concourse -l&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To download the chart locally&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm fetch stable/concourse &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt; 3.8.0 &lt;span class=&quot;nt&quot;&gt;--untar&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I like to download the charts locally and render them myself using helm’s templating functionality but that’s a topic for another day.&lt;/p&gt;

&lt;p&gt;Keep in mind you have 2 version numbers in play here: the version of the chart and the version of the app the chart is managing. 3.8.0 refers to the chart version, and 4.2.2 is the version of Concourse it runs. We’re going to change that slightly when we deploy the app, since 4.2.2 still bundles the bad old version of runc; we’ll move to 4.2.3 instead.&lt;/p&gt;

&lt;p&gt;I always maintain my own version of a chart’s values file even when I accept the defaults.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cp &lt;/span&gt;concourse/values.yaml values.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;important-modifications&quot;&gt;Important Modifications&lt;/h4&gt;

&lt;p&gt;To get you started, the helm chart bundles a pretty good explanation of the values right in the values file. The &lt;a href=&quot;https://github.com/helm/charts/tree/master/stable/concourse&quot;&gt;repo page&lt;/a&gt; also does a pretty good job getting you up and running. Make the modifications recommended below to the values.yaml file you have saved in your working directory.&lt;/p&gt;

&lt;h5 id=&quot;secrets&quot;&gt;Secrets&lt;/h5&gt;

&lt;p&gt;The chart ships with &lt;a href=&quot;https://github.com/helm/charts/tree/master/stable/concourse#secrets&quot;&gt;some secrets built into it&lt;/a&gt; to make it easier to try out. Obviously, if you plan on actually using Concourse you ought to swap those out. Lucky for me, &lt;a href=&quot;https://github.com/helm/charts/tree/master/stable/concourse#secrets&quot;&gt;the documentation&lt;/a&gt; has that subject pretty well covered, so I’ll let you follow those steps yourself.&lt;/p&gt;

&lt;h5 id=&quot;persistence&quot;&gt;Persistence&lt;/h5&gt;

&lt;p&gt;Read &lt;a href=&quot;https://github.com/helm/charts/tree/master/stable/concourse#persistence&quot;&gt;this section&lt;/a&gt; carefully. They aren’t joking about the workers filling up your local disks; I managed to bring down 3 workers when I tried to save a little money by skipping the PVCs.&lt;/p&gt;
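&lt;p&gt;As a sketch of what that looks like in values.yaml — the key names below are how I remember the stable/concourse chart laying them out, so double-check them against your own copy of the values file:&lt;/p&gt;

```yaml
# Persistence settings for the Concourse workers (sketch; verify key names
# against your values.yaml). Dedicated PVCs keep build volumes off the
# node's local disk.
persistence:
  enabled: true
  worker:
    size: 20Gi   # size to taste; busy workers chew through disk
```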

&lt;h4 id=&quot;deploying-our-concourse-instance&quot;&gt;Deploying our Concourse Instance&lt;/h4&gt;

&lt;p&gt;Assuming you’ve gone ahead and followed the steps above, we’re going to make one more modification to our values file, then deploy. We need to swap the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;imageTag&lt;/code&gt; value from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;4.2.2&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;4.2.3&lt;/code&gt;. Again, just for clarity: we need 4.2.3 because 4.2.2 still bundles a bad version of runc (CVE-2019-5736) that we don’t need to be putting out there.&lt;/p&gt;
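&lt;p&gt;In values.yaml that’s a one-line change; the snippet below assumes the chart’s default image repository:&lt;/p&gt;

```yaml
# Concourse image settings in values.yaml (sketch)
image: concourse/concourse   # chart default image repository
imageTag: "4.2.3"            # 4.2.2 ships a vulnerable runc; 4.2.3 does not
```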

&lt;p&gt;With that done we can go ahead and set up our concourse instance.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;stable/concourse &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt; 3.8.0 &lt;span class=&quot;nt&quot;&gt;--values&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Helm’s output will tell you how to access your instance and that will be enough to get up and running.&lt;/p&gt;
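&lt;p&gt;As a hedged example of what that usually looks like — the release and service names here are placeholders, so take the real ones from helm’s output:&lt;/p&gt;

```shell
# Forward the Concourse web service locally; my-release-web is a
# placeholder, use the service name helm's output gives you.
kubectl port-forward svc/my-release-web 8080:8080

# In another terminal, log in with the fly CLI and poke around.
fly -t local login -c http://127.0.0.1:8080
fly -t local pipelines
```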

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The Concourse helm chart is pretty functional and the maintainers seem to be doing a good job with it. I’d recommend it for getting started running concourse in k8s. My next post will go over using cert-manager to automatically generate certificates that you can use for apps like concourse.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;
</description>
        <pubDate>Sun, 17 Mar 2019 00:00:00 +0000</pubDate>
        <link>https://jasonmorgan.github.io/concourse-in-k8s</link>
        <guid isPermaLink="true">https://jasonmorgan.github.io/concourse-in-k8s</guid>
        
        <category>ci</category>
        
        <category>concourse</category>
        
        <category>kubernetes</category>
        
        <category>k8s</category>
        
        
        
      </item>
    
  </channel>
</rss>
