vSphere

Prerequisites
  • karina is installed
  • A machine image with matching versions of kubeadm, kubectl and kubelet, plus either docker or containerd
  • Access to a vCenter server
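
The karina.yaml used below resolves its vSphere settings from GOVC_* environment variables via !!env tags. The exports below are a minimal sketch; every value is a placeholder to replace with your own:

export GOVC_FQDN=vcenter.example.com             # vCenter hostname (placeholder)
export GOVC_USER=administrator@vsphere.local     # placeholder
export GOVC_PASS=changeme                        # placeholder
export GOVC_DATACENTER=dc1
export GOVC_CLUSTER=cluster1
export GOVC_FOLDER=kubernetes
export GOVC_DATASTORE=datastore1
export GOVC_DATASTORE_URL=ds:///vmfs/volumes/... # from the Datastore summary page
export GOVC_RESOURCE_POOL=pool1
export GOVC_NETWORK=VM-Network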

Generate CAs

# generate CA for kubernetes api-server authentication
karina ca generate --name root-ca \
  --cert-path .certs/root-ca.crt \
  --private-key-path .certs/root-ca.key \
  --password foobar --expiry 10

# generate ingressCA for ingress certificates
karina ca generate --name ingress-ca \
  --cert-path .certs/ingress-ca.crt \
  --private-key-path .certs/ingress-ca.key \
  --password foobar --expiry 10
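
To sanity-check the generated CAs, you can inspect the certificates with openssl, e.g.:

openssl x509 -in .certs/root-ca.crt -noout -subject -enddate
openssl x509 -in .certs/ingress-ca.crt -noout -subject -enddate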

Create karina.yaml

karina.yaml

## Cluster name
name: example-cluster

## Prefix to be added to VM hostnames
hostPrefix: vsphere-k8s-

vsphere:
  username: !!env GOVC_USER
  datacenter: !!env GOVC_DATACENTER
  cluster: !!env GOVC_CLUSTER
  folder: !!env GOVC_FOLDER
  datastore: !!env GOVC_DATASTORE
  # can be found on the Datastore summary page
  datastoreUrl: !!env GOVC_DATASTORE_URL
  password: !!env GOVC_PASS
  hostname: !!env GOVC_FQDN
  resourcePool: !!env GOVC_RESOURCE_POOL
  csiVersion: v2.0.0
  cpiVersion: v1.1.0

## Endpoint for externally hosted consul cluster
## NOTE: a working consul config is required to verify
##       that the primary master is available.
consul: 10.100.0.13

## Domain that the cluster will be available at
## NOTE: domain must be supplied for vSphere clusters
domain: 10.100.0.0.nip.io

# Name of consul datacenter
datacenter: lab

dns:
  disabled: true

# The CA certs generated in the Generate CAs step above
ca:
  cert: .certs/root-ca.crt
  privateKey: .certs/root-ca.key
  password: foobar
ingressCA:
  cert: .certs/ingress-ca.crt
  privateKey: .certs/ingress-ca.key
  password: foobar

versions:
  kubernetes: v1.18.15
serviceSubnet: 10.96.0.0/16
podSubnet: 10.97.0.0/16

## The VM configuration for master nodes
master:
  count: 1
  cpu: 2  # NOTE: minimum of 2
  memory: 4
  disk: 10
  network: !!env GOVC_NETWORK
  cluster: !!env GOVC_CLUSTER
  prefix: m
  template: "kube-v1.18.15"
workers:
  worker-group-a:
    prefix: w
    network: !!env GOVC_NETWORK
    cluster: !!env GOVC_CLUSTER
    count: 1
    cpu: 2
    memory: 4
    disk: 10
    template: kube-v1.18.15

See other examples in the test vSphere platform fixtures.

See the Configuration Reference for details of available configurations.
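
Before provisioning, you can optionally sanity-check vCenter connectivity with VMware's govc CLI. Note that govc reads GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD, which differ slightly from the variable names used above:

export GOVC_URL=https://$GOVC_FQDN
export GOVC_USERNAME=$GOVC_USER
export GOVC_PASSWORD=$GOVC_PASS
export GOVC_INSECURE=1   # only if vCenter uses a self-signed certificate
govc about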

Provision the cluster

Provision the cluster with:

karina provision vsphere-cluster -c karina.yaml
karina deploy phases --crd --base --calico -c karina.yaml

Access the cluster

Export a kubeconfig:

karina kubeconfig admin -c karina.yaml > kubeconfig.yaml
export KUBECONFIG=$PWD/kubeconfig.yaml

kubectl commands can then be used to access the cluster for the duration of the session, e.g.:

kubectl get nodes
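
To block until all nodes report Ready, for example:

kubectl wait --for=condition=Ready node --all --timeout=10m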

Run E2E Tests

karina test all --e2e -c karina.yaml

Tear down the cluster

karina terminate -c karina.yaml
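
Once the cluster is gone, you may also want to remove the kubeconfig exported earlier, e.g.:

rm -f kubeconfig.yaml
unset KUBECONFIG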