remove integration tests
parent dc86531454, commit d997e470a0
14 changed files with 0 additions and 416 deletions
@@ -130,23 +130,3 @@ nextcloud-image-publish:
  stage: image
  script:
    - cd infrastructure/nextcloud && pyb image publish

#.nextcloud-integrationtest:
#  stage: integrationtest
#  image: registry.gitlab.com/gitlab-org/cluster-integration/helm-install-image/releases/3.7.1-kube-1.20.11-alpine-3.14
#  services:
#    - name: registry.gitlab.com/gitlab-org/cluster-integration/test-utils/k3s-gitlab-ci/releases/v1.22.2-k3s2
#      alias: k3s
#  script:
#    - apk add curl sudo bash
#    - apk add wget curl bash sudo openjdk8
#    - wget -P /etc/apk/keys/ https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
#    - apk add --no-cache --repository=https://apkproxy.herokuapp.com/sgerrand/alpine-pkg-leiningen leiningen
#
#    - mkdir -p ${HOME}/.kube/
#    - curl -f k3s:8081 > ${HOME}/.kube/config
#    - kubectl version
#    - kubectl cluster-info
#    - echo "---------- Integration test -------------"
#    - pwd
#    - cd ./src/test/resources/local-integration-test/ && ./setup-local-s3-on-k3d.sh
@@ -1,7 +0,0 @@
{:postgres-db-user "nextcloud"
 :postgres-db-password "dbpass"
 :nextcloud-admin-user "cloudadmin"
 :nextcloud-admin-password "cloudpassword"
 :aws-access-key-id ""
 :aws-secret-access-key ""
 :restic-password "test-password"}
@@ -1,6 +0,0 @@
{:fqdn "cloudhost"
 :issuer :staging
 :nextcloud-data-volume-path "/var/cloud"
 :postgres-data-volume-path "/var/postgres"
 :restic-repository "s3://k3stesthost/mybucket"
 :local-integration-test true}
@@ -1,84 +0,0 @@
# Usage

`setup-local-s3.sh [BUCKET_NAME]` (example below):
- [BUCKET_NAME] is optional, "mybucket" will be used if not specified
- sets up a k3s instance
- installs a localstack pod
- creates http and https routing to localstack via localhost
- saves the self-signed certificate as ca.crt
- uses the certificate to initialize a restic repo at `https://k3stesthost/BUCKET_NAME`

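A minimal example run, assuming the script is executed from `src/test/resources/local-integration-test`; the follow-up restic check is illustrative, not part of the script:

```
./setup-local-s3.sh mybucket

# the repo initialized by the script can then be inspected with the saved certificate
export RESTIC_PASSWORD="test-password"
restic snapshots --cacert ca.crt -r s3://k3stesthost/mybucket
```
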
Note: If you cannot reach "k3stesthost/health", make sure the ingress' IP resolves to the required host names k3stesthost and cloudhost. `sudo k3s kubectl get ingress` shows the ingress' IP (e.g. 10.0.2.15); add a matching line to "/etc/hosts", e.g. `10.0.2.15 k3stesthost cloudhost`.

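A sketch of that lookup and the /etc/hosts entry (the jsonpath index assumes a single ingress in the default namespace):

```
INGRESS_IP=$(sudo k3s kubectl get ingress -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
echo "$INGRESS_IP k3stesthost cloudhost" | sudo tee -a /etc/hosts
```
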
`start-k3s.sh`:
- creates and starts a k3s instance

`k3s-uninstall.sh`:
- deletes everything k3s related

## Other useful commands
- `sudo k3s kubectl get pods`
- `curl k3stesthost/health`
  expected: `{"services": {"s3": "running"}, "features": {"persistence": "disabled", "initScripts": "initialized"}}`

#### Requires AWS-CLI
- create bucket `aws --endpoint-url=http://k3stesthost s3 mb s3://mybucket`
- list buckets `aws --endpoint-url=http://k3stesthost s3 ls`
- upload something `aws --endpoint-url=http://k3stesthost s3 cp test.txt s3://mybucket`
- check files `aws --endpoint-url=http://k3stesthost s3 ls s3://mybucket`

## Run docker locally

```
docker pull docker:19.03.12-dind
docker run -d --privileged --name integration-test docker:19.03.12-dind
docker exec integration-test sh -c "apk add bash"
```

Set up docker container integration-test:

```
docker cp ../../../../../c4k-nextcloud/ integration-test:/
docker exec -it integration-test sh
cd /c4k-nextcloud/src/test/resources/local-integration-test
./setup-docker.sh
```

## Deploy nextcloud

### Requirements

* leiningen (install with: `sudo apt install leiningen`)
* In the project's root execute: `lein uberjar`
* Change the file "valid-config.edn" according to your settings (e.g. `:fqdn "cloudhost"` and `:restic-repository "s3://k3stesthost/mybucket"`).

### Deploy to k3s

* Create and deploy the k8s yaml:
  `java -jar target/uberjar/c4k-nextcloud-standalone.jar valid-config.edn valid-auth.edn | sudo k3s kubectl apply -f -`

Some of the steps may take a few minutes to take effect, but eventually nextcloud should be available at: https://cloudhost

### Deploy to k3d

k3d runs k3s inside a container. To install k3d, run `curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash` or have a look at https://k3d.io/v5.0.3/.

* Start a k3d cluster to deploy s3, nextcloud and test backup and restore on it: `./setup-local-s3-on-k3d.sh`

Some steps may take a couple of minutes to take effect, but eventually nextcloud should be available at: https://cloudhost

#### Remove k3d cluster

`k3d cluster delete nextcloud`

## Test in local gitlab runner

See https://stackoverflow.com/questions/32933174/use-gitlab-ci-to-run-tests-locally

This needs to be done in the project root:

`docker run -d --name gitlab-runner --restart always -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest`

`docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker nextcloud-integrationtest --docker-privileged --docker-volumes '/var/run/docker.sock:/var/run/docker.sock'`
@@ -1,20 +0,0 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: localstack-cert
  namespace: default
spec:
  secretName: localstack-secret
  commonName: k3stesthost
  dnsNames:
    - k3stesthost
  issuerRef:
    name: selfsigning-issuer
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigning-issuer
spec:
  selfSigned: {}
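To check this Certificate by hand (assuming cert-manager is already installed, as the setup scripts do), something along these lines should work:

```
kubectl apply -f certificate.yaml
# Ready means cert-manager has written the localstack-secret TLS secret
kubectl wait certificate/localstack-cert --for=condition=Ready --timeout=120s
kubectl get secret localstack-secret -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
```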
@@ -1,44 +0,0 @@
@startuml

autonumber

skinparam sequenceBox {
  borderColor White
}

participant gitlab_runner

box "outer container" #LightBlue

participant .gitlab_ci
participant PreparingCommands
participant test_script

end box


box "k3s" #CornSilk

participant k3s_api_server
participant backup_pod

end box


gitlab_runner -> k3s_api_server: run k3s as container
gitlab_runner -> .gitlab_ci : run

.gitlab_ci -> PreparingCommands : Install packages (curl bash ...)
.gitlab_ci -> PreparingCommands : get k3s_api_server config for k3s_api_server

.gitlab_ci -> test_script : run

test_script -> k3s_api_server: apply cert-manager
test_script -> k3s_api_server: apply localstack
test_script -> k3s_api_server: enable tls / create certificates
test_script -> k3s_api_server: apply cloud
test_script -> k3s_api_server: create backup_pod (by scale to 1)
test_script -> backup_pod: backup
test_script -> backup_pod: restore

@enduml
@@ -1,17 +0,0 @@
# Set the default kube context if present
DEFAULT_KUBE_CONTEXTS="$HOME/.kube/config"
if test -f "${DEFAULT_KUBE_CONTEXTS}"
then
  export KUBECONFIG="$DEFAULT_KUBE_CONTEXTS"
fi

# Additional contexts should be in ~/.kube/custom-contexts/
CUSTOM_KUBE_CONTEXTS="$HOME/.kube/custom-contexts"
mkdir -p "${CUSTOM_KUBE_CONTEXTS}"
OIFS="$IFS"
IFS=$'\n'
for contextFile in `find "${CUSTOM_KUBE_CONTEXTS}" -type f -name "*.yml"`
do
  export KUBECONFIG="$contextFile:$KUBECONFIG"
done
IFS="$OIFS"
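An illustrative session, assuming the snippet above is sourced from the shell profile: the kubeconfig that start-k3d.sh writes into ~/.kube/custom-contexts/ is then picked up automatically.

```
mkdir -p ~/.kube/custom-contexts
./start-k3d.sh        # writes ~/.kube/custom-contexts/k3d-config.yml
exec bash             # re-runs the profile snippet so KUBECONFIG includes the new file
kubectl config get-contexts
```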
@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: localstack
spec:
  selector:
    matchLabels:
      app: localstack
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: localstack
    spec:
      containers:
        - image: localstack/localstack
          name: localstack-app
          imagePullPolicy: IfNotPresent
          env:
            - name: SERVICES
              value: s3
---
# service
apiVersion: v1
kind: Service
metadata:
  name: localstack-service
spec:
  selector:
    app: localstack
  ports:
    - port: 4566
---
apiVersion: v1
kind: Secret
metadata:
  name: localstack-secret
type: Opaque
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-localstack
  annotations:
    cert-manager.io/cluster-issuer: selfsigning-issuer
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-entry-point: https
  namespace: default
spec:
  tls:
    - hosts:
        - k3stesthost
      secretName: localstack-secret
  rules:
    - host: k3stesthost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: localstack-service
                port:
                  number: 4566
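Once these manifests are applied together with certificate.yaml, the localstack S3 endpoint can be probed through the ingress (host name resolution as described in the README note):

```
kubectl get pods -l app=localstack
# expected to report s3 as "running" (or "available", depending on the localstack version)
curl k3stesthost/health
```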
@@ -1,48 +0,0 @@
#!/bin/bash

set -x

docker volume create k3s-server

name='inttst'

[[ $(docker ps -f "name=$name" --format '{{.Names}}') == $name ]] || docker run --name $name -d --privileged --tmpfs /run --tmpfs /var/run --restart always -e K3S_TOKEN=12345678901234 -e K3S_KUBECONFIG_OUTPUT=./kubeconfig.yaml -e K3S_KUBECONFIG_MODE=666 -v k3s-server:/var/lib/rancher/k3s:z -v $(pwd):/output:z -p 6443:6443 -p 80:80 -p 443:443 rancher/k3s server --cluster-init --tls-san k3stesthost --tls-san cloudhost

docker ps

export timeout=30; while ! docker exec $name sh -c "test -f /var/lib/rancher/k3s/server/kubeconfig.yaml"; do if [ "$timeout" == 0 ]; then echo "ERROR: Timeout while waiting for file."; break; fi; sleep 1; ((timeout--)); done

mkdir -p $HOME/.kube/

docker cp $name:/var/lib/rancher/k3s/server/kubeconfig.yaml $HOME/.kube/config

if [ "$timeout" == 0 ]
then
  echo -------------------------------------------------------
  find / -name "kubeconfig.yaml";
  echo -------------------------------------------------------
  docker ps -a
  echo -------------------------------------------------------
  exit 1
fi

echo "127.0.0.1 kubernetes" >> /etc/hosts

apk add wget curl bash sudo openjdk8

wget -P /etc/apk/keys/ https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
apk add --no-cache --repository=https://apkproxy.herokuapp.com/sgerrand/alpine-pkg-leiningen leiningen

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

sleep 20 # allow some time to start up k3s
docker ps -a

swapoff -a # can this be removed ?

export KUBECONFIG=$HOME/.kube/config

pwd
cd ./c4k-nextcloud/src/test/resources/local-integration-test && ./setup-local-s3-on-k3d.sh
@@ -1,60 +0,0 @@
#!/bin/bash

set -x

function main()
{
    # enable tls for k3s with cert-manager
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml

    kubectl apply -f localstack.yaml

    until kubectl apply -f certificate.yaml
    do
        echo "[INFO] Waiting for certificate ..."
        sleep 30
    done

    # wait for ingress to be ready
    bash -c 'external_ip=""; while [ -z $external_ip ]; do echo "[INFO] Waiting for end point..."; external_ip=$(kubectl get ingress -o jsonpath="{$.items[*].status.loadBalancer.ingress[*].ip}"); [ -z "$external_ip" ] && sleep 10; done; echo "End point ready - $external_ip";'

    export INGRESS_IP=$(kubectl get ingress ingress-localstack -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")

    cd ../../../../ # c4k-nextcloud project root
    lein uberjar
    java -jar target/uberjar/c4k-nextcloud-standalone.jar config-local.edn auth-local.edn | kubectl apply -f -

    CLOUD_POD=$(kubectl get pod -l app=cloud-app -o name)
    kubectl wait $CLOUD_POD --for=condition=Ready --timeout=240s

    # wait for the nextcloud config file to be available
    timeout 180 bash -c "kubectl exec -t $CLOUD_POD -- bash -c \"until [ -f /var/www/html/config/config.php ]; do sleep 10; done\""

    # ensure an instance of pod backup-restore
    kubectl scale deployment backup-restore --replicas 1

    # wait for localstack health endpoint
    echo "$INGRESS_IP k3stesthost cloudhost" >> /etc/hosts
    until curl --fail --silent k3stesthost/health | grep -oe '"s3": "available"' -oe '"s3": "running"'
    do
        curl --fail k3stesthost/health
        echo "[INFO] Waiting for s3 running"
        sleep 10
    done

    BACKUP_POD=$(kubectl get pod -l app=backup-restore -o name)
    kubectl wait $BACKUP_POD --for=condition=Ready --timeout=240s

    kubectl exec -t $BACKUP_POD -- bash -c "echo \"$INGRESS_IP k3stesthost cloudhost\" >> /etc/hosts"
    kubectl exec -t $BACKUP_POD -- /usr/local/bin/init.sh

    echo ================= BACKUP =================
    kubectl exec -t $BACKUP_POD -- /usr/local/bin/backup.sh

    sleep 10 # avoid race conditions

    echo ================= RESTORE =================
    kubectl exec -t $BACKUP_POD -- /usr/local/bin/restore.sh
}

main "$@"
@@ -1,34 +0,0 @@
function main()
{
    local bucket_name="${1:-mybucket}"; shift

    ./start-k3s.sh

    sudo k3s kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml

    sudo k3s kubectl apply -f localstack.yaml

    until sudo k3s kubectl apply -f certificate.yaml
    do
        echo "*** Waiting for certificate ... ***"
        sleep 10
    done
    echo

    echo
    echo "[INFO] Waiting for localstack health endpoint"
    until curl --connect-timeout 3 -s -f -o /dev/null "k3stesthost/health"
    do
        sleep 5
    done
    echo

    sudo k3s kubectl get secret localstack-secret -o jsonpath="{.data.ca\.crt}" | base64 --decode > ca.crt

    #aws --endpoint-url=http://localhost s3 mb s3://$bucket_name
    export RESTIC_PASSWORD="test-password"
    restic init --cacert ca.crt -r s3://k3stesthost/$bucket_name

}

main "$@"
@@ -1,9 +0,0 @@
function main()
{
    ./start-k3s.sh

    sudo k3s kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml

}

main
@@ -1 +0,0 @@
KUBECONFIG=~/.kube/custom-contexts/k3d-config.yml k3d cluster create nextcloud --k3s-arg '--tls-san cloudhost@loadbalancer' --port 80:80@loadbalancer --port 443:443@loadbalancer --api-port 6443 --kubeconfig-update-default
@@ -1 +0,0 @@
curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3stesthost INSTALL_K3S_EXEC='--tls-san cloudhost' sh -