update to nextra 4

This commit is contained in:
2025-09-06 19:19:45 +02:00
parent d17a565130
commit 7864c38371
48 changed files with 998 additions and 500 deletions

`content/dev_ops/_meta.ts`:

```ts
export default {
  'github-actions': 'Github Actions',
  hosting: 'Hosting',
}
```

---
tags:
- Github Actions
- DRY
---
# Composite Actions
Often we reuse `steps` across our different GitHub Actions workflows. As we generally want to follow [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) principles (and are lazy), every duplicated step has potential for improvement.
> There is also a [good guide/tutorial by James Wallis](https://wallis.dev/blog/composite-github-actions), which this is mainly inspired by.
## Composite Actions vs Reusable Workflows
Within GitHub Actions there are two ways to achieve this: **Composite Actions** and **Reusable Workflows**. Here is a [good comparison by cardinalby](https://cardinalby.github.io/blog/post/github-actions/dry-reusing-code-in-github-actions/).
## Key Points of Composite Actions
- Can live in the same repository, but can also be outsourced into its own.
- Share the same filesystem -> no build artifacts need to be passed around.
- Secrets cannot be accessed directly, need to be passed.
- Each action has to have its own directory with an `action.yaml` file inside it.
- When executing raw commands we need to specify the `shell` we are running in.
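If an action is outsourced into its own repository, it is referenced by `owner/repo@ref` instead of a local path. A hypothetical sketch (the repository name `foo/latex-build-action` is made up for illustration):

```yaml
steps:
  - uses: actions/checkout@v3
  # Hypothetical: the composite action published in its own repository
  - uses: foo/latex-build-action@v1
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
```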
## Example
The example shows how to extract part of a GitHub Actions workflow into a composite action. In this case: building some LaTeX files.
```
.github/
├── actions
│ └── build
│ └── action.yaml
└── workflows
├── preview.yml
└── release.yml
```
`.github/actions/build/action.yaml`:
```yaml
name: 'Latex Builder'
description: 'Checkout and build LaTeX files.'
inputs:
# As we cannot access secrets directly, they must be passed
github-token:
description: 'GitHub token for authentication.'
required: true
runs:
using: 'composite' # This is the magic
steps:
- uses: actions/cache@v3
name: Tectonic Cache
with:
path: ~/.cache/Tectonic
key: ${{ runner.os }}-tectonic-${{ hashFiles('**/*.tex') }}
restore-keys: |
${{ runner.os }}-tectonic-
- uses: wtfjoke/setup-tectonic@v2
with:
github-token: ${{ inputs.github-token }}
- name: Run Tectonic
run: make tectonic
shell: bash # This would not be required in the normal action file
```
`.github/workflows/preview.yml`:
```yaml
name: 'Preview'
on:
# ...
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/build
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Upload PDFs
uses: actions/upload-artifact@v2
with:
name: PDFs
path: '*.pdf'
```
`.github/workflows/release.yml`:
```yaml
name: 'Release'
on:
# ...
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/build
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Release
uses: ncipollo/release-action@v1
with:
allowUpdates: true
artifacts: '*.pdf'
token: ${{ secrets.GITHUB_TOKEN }}
```
## Gotchas
- If we use a local composite action, the `actions/checkout@v3` step cannot live inside the composite action itself: the action file is part of the repository, so it does not exist on the runner until after checkout.

---
tags:
- Github Actions
- Pages
- Static Site
---
# GitHub Pages with Actions
Publish static sites to GitHub Pages using GitHub Actions.
## Example
The example uses `docs` as the built folder containing the static site.
```yaml
name: Docs
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: 'pages'
cancel-in-progress: true
jobs:
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
# Build some static assets
- uses: actions/configure-pages@v3
- uses: actions/upload-pages-artifact@v1
with:
path: './docs'
- id: deployment
uses: actions/deploy-pages@v1
```
## Path prefix
Note that a path prefix is required, as GitHub Pages project sites are published under `https://<username>.github.io/<repo>/`.
### Vite
For Vite you can set it with the [base option](https://vitejs.dev/config/shared-options.html#base).
```bash
vite build --emptyOutDir --base=./
```
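The same can be set permanently in `vite.config.ts` instead of on the command line. A sketch (the repo name `my-repo` and the `docs` output folder are assumptions, adjust to your setup):

```typescript
import { defineConfig } from 'vite'

export default defineConfig({
  // GitHub Pages serves project sites under /<repo>/,
  // so asset URLs need this prefix (replace with your repo name).
  base: '/my-repo/',
  build: {
    // Matches the `docs` folder used in the workflow above.
    outDir: 'docs',
    emptyOutDir: true,
  },
})
```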

---
tags:
- LaTeX
- Github Actions
- CD
- Pipeline
- Tectonic
---
# Building LaTeX in Github Actions
This pipeline uses [tectonic](https://tectonic-typesetting.github.io) as the build system for LaTeX. Covered here are:
- Custom fonts
- Pipeline
- Upload generated files as artifacts
## Fonts
If we are using custom fonts, we need to make them available first. This means checking them into the repo (or downloading them remotely). In this case I chose to store them as Git LFS files.
On most Linux systems you can install custom fonts under `~/.fonts`.
```
./fonts/
├── Open_Sans.zip
├── Roboto_Mono.zip
└── install.sh
```
```sh
#!/bin/sh
# Install the bundled fonts for the current user.
TARGET=~/.fonts
mkdir -p "$TARGET"
unzip -o -d "$TARGET/roboto_mono" "./fonts/Roboto_Mono.zip"
unzip -o -d "$TARGET/open_sans" "./fonts/Open_Sans.zip"
```
## Pipeline
```yaml
name: 'Build LaTeX'
on:
pull_request:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
# Optional Cache of downloaded Tex packages
- uses: actions/cache@v3
name: Tectonic Cache
with:
path: ~/.cache/Tectonic
key: ${{ runner.os }}-tectonic-${{ hashFiles('**/*.tex') }}
restore-keys: |
${{ runner.os }}-tectonic-
# Install tectonic
- uses: wtfjoke/setup-tectonic@v2
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install fonts
run: ./fonts/install.sh
- name: Build
run: tectonic src/main.tex
- name: Upload PDFs
uses: actions/upload-artifact@v2
with:
name: PDFs
path: '*.pdf'
```

# Publish Docker images
This is how to publish a Docker image simultaneously to the official Docker Hub and GitHub (ghcr.io) registries.
**Supported features**
- **x86** and **arm** images
- Push to **both** registries
- Semver tag labeling
We will assume that our image is called `foo/bar`, so our username is `foo` and the actual package is `bar`.
```yaml
name: Publish Docker image
on:
release:
types: [published]
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
install: true
- name: Docker Labels
id: meta
uses: docker/metadata-action@v5
with:
images: |
foo/bar
ghcr.io/${{ github.repository }}
# This assumes your repository is also github.com/foo/bar
# You could also use ghcr.io/foo/some-package as long as you are the user/org "foo"
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_TOKEN }}
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
```
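With the semver patterns above, publishing a release tagged e.g. `v1.2.3` should produce the following image tags for both registries:

```
foo/bar:1.2.3
foo/bar:1.2
foo/bar:1
ghcr.io/foo/bar:1.2.3
ghcr.io/foo/bar:1.2
ghcr.io/foo/bar:1
```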

---
tags:
- docker registry
- hosting
- authentication
---
# Set up your own authenticated Docker Registry
## Resources
- https://earthly.dev/blog/private-docker-registry/
- https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-20-04
- https://github.com/docker/get-involved/blob/90c9470fd66c9318fec9c6f0914cb70fa87b9bf9/content/en/docs/CommunityLeaders/EventHandbooks/Docker101/registry/_index.md?plain=1#L203
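Condensed from the resources above, a minimal compose sketch using the official `registry:2` image with htpasswd basic auth (ports, paths, and the `auth` folder layout are assumptions):

```yaml
version: '3.8'
services:
  registry:
    image: registry:2
    restart: unless-stopped
    ports:
      - 5000:5000
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: 'Registry Realm'
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    volumes:
      - ./auth:/auth:ro
      - ./data:/var/lib/registry
```

The `htpasswd` file can be generated with `htpasswd -Bbn user pass > auth/htpasswd` (the registry requires bcrypt, hence `-B`).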

# Imgproxy with caching
A simple Docker Compose file that caches the transformed [imgproxy](https://github.com/imgproxy/imgproxy) responses using nginx.
```yaml
version: '3.8'
volumes:
cache:
services:
img:
image: darthsim/imgproxy
environment:
# Required for nginx
IMGPROXY_BIND: 0.0.0.0:80
# Security
IMGPROXY_MAX_SRC_RESOLUTION: 100
IMGPROXY_ALLOWED_SOURCES: https://images.unsplash.com/,https://images.pexels.com/
# Transforms
IMGPROXY_ENFORCE_WEBP: true
IMGPROXY_ENFORCE_AVIF: true
IMGPROXY_ONLY_PRESETS: true
IMGPROXY_PRESETS: default=resizing_type:fit,250=size:250:250,500=size:500:500,1000=size:1000:1000,1500=size:1500:1500,2000=size:2000:2000
proxy:
image: nginx
ports:
- 80:80
volumes:
- ./proxy.conf:/etc/nginx/conf.d/default.conf:ro
- cache:/tmp
```
```
# proxy.conf
# Set cache to 30 days, 1GB.
# Only use the uri as the cache key, as it's the only input for imgproxy.
proxy_cache_path /tmp levels=1:2 keys_zone=backcache:8m max_size=1g inactive=30d;
proxy_cache_key "$uri";
proxy_cache_valid 200 302 30d;
server
{
listen 80;
server_name _;
location /
{
proxy_pass_request_headers off;
proxy_set_header HOST $host;
proxy_set_header Accept $http_accept;
proxy_pass http://img;
proxy_cache backcache;
}
}
```
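With `IMGPROXY_ONLY_PRESETS` enabled, requests through the proxy reference preset names. Assuming no signature key is configured (so the `insecure` placeholder is used) and one of the allowed sources, a request URL looks roughly like:

```
http://localhost/insecure/preset:500/plain/https://images.unsplash.com/photo-12345
```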

# Outline
[Outline](https://www.getoutline.com/) does not make it super easy to avoid paying for their hosted version, so a few things are a bit rough. Here are the [official docs](https://wiki.generaloutline.com/s/hosting/doc/hosting-outline-nipGaCRBDu).
1. Copy `docker-compose.yaml` and `.env`
2. Fill in missing values
3. Manually create a bucket called `wiki` in the MinIO dashboard.
```yaml
version: '3.8'
networks:
proxy:
external: true
services:
outline:
image: outlinewiki/outline
restart: unless-stopped
env_file: .env
command: sh -c "yarn db:migrate --env production-ssl-disabled && yarn start"
depends_on:
- db
- redis
- storage
networks:
- default
- proxy
labels:
- traefik.enable=true
- traefik.http.routers.outline.rule=Host(`example.org`)
- traefik.http.routers.outline.entrypoints=secure
- traefik.http.routers.outline.tls.certresolver=cf
redis:
restart: unless-stopped
image: redis
db:
image: postgres:15-alpine
restart: unless-stopped
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
# PGSSLMODE: disable
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: outline
storage:
image: minio/minio
restart: unless-stopped
command: server /data --console-address ":80"
volumes:
- ./data/s3:/data
environment:
- MINIO_ROOT_USER=user
- MINIO_ROOT_PASSWORD=pass
- MINIO_DOMAIN=s3.example.org
networks:
- proxy
labels:
- traefik.enable=true
- traefik.http.routers.s3.rule=Host(`s3.example.org`)
- traefik.http.routers.s3.entrypoints=secure
- traefik.http.routers.s3.tls.certresolver=cf
- traefik.http.routers.s3.service=s3-service
- traefik.http.services.s3-service.loadbalancer.server.port=9000
- traefik.http.routers.s3-dash.rule=Host(`s3-dash.example.org`)
- traefik.http.routers.s3-dash.entrypoints=secure
- traefik.http.routers.s3-dash.tls.certresolver=cf
- traefik.http.routers.s3-dash.service=s3-dash-service
- traefik.http.services.s3-dash-service.loadbalancer.server.port=80
```
```env
# https://github.com/outline/outline/blob/main/.env.sample
# REQUIRED
NODE_ENV=production
SECRET_KEY=
UTILS_SECRET=
DATABASE_URL=postgres://user:pass@db:5432/outline
PGSSLMODE=disable
REDIS_URL=redis://redis:6379
URL=https://example.org
PORT=3000
COLLABORATION_URL=
AWS_ACCESS_KEY_ID=user
AWS_SECRET_ACCESS_KEY=pass
AWS_S3_ACCELERATE_URL=https://s3.example.org/wiki
AWS_S3_UPLOAD_BUCKET_URL=https://s3.example.org/wiki
AWS_S3_UPLOAD_BUCKET_NAME=wiki
AWS_S3_FORCE_PATH_STYLE=false
# AUTHENTICATION
# Third party signin credentials, at least ONE OF EITHER Google, Slack,
# or Microsoft is required for a working installation or you'll have no sign-in
# options.
# To configure Slack auth, you'll need to create an Application at
# => https://api.slack.com/apps
#
# When configuring the Client ID, add a redirect URL under "OAuth & Permissions":
# https://<URL>/auth/slack.callback
SLACK_CLIENT_ID=
SLACK_CLIENT_SECRET=
# To configure Google auth, you'll need to create an OAuth Client ID at
# => https://console.cloud.google.com/apis/credentials
#
# When configuring the Client ID, add an Authorized redirect URI:
# https://<URL>/auth/google.callback
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
# To configure Microsoft/Azure auth, you'll need to create an OAuth Client. See
# the guide for details on setting up your Azure App:
# => https://wiki.generaloutline.com/share/dfa77e56-d4d2-4b51-8ff8-84ea6608faa4
AZURE_CLIENT_ID=
AZURE_CLIENT_SECRET=
AZURE_RESOURCE_APP_ID=
# To configure generic OIDC auth, you'll need some kind of identity provider.
# See documentation for whichever IdP you use to acquire the following info:
# Redirect URI is https://<URL>/auth/oidc.callback
OIDC_CLIENT_ID=
OIDC_CLIENT_SECRET=
OIDC_AUTH_URI=
OIDC_TOKEN_URI=
OIDC_USERINFO_URI=
# Specify which claims to derive user information from
# Supports any valid JSON path with the JWT payload
OIDC_USERNAME_CLAIM=preferred_username
# Display name for OIDC authentication
OIDC_DISPLAY_NAME=OpenID
# Space separated auth scopes.
OIDC_SCOPES=openid profile email
# OPTIONAL
# Base64 encoded private key and certificate for HTTPS termination. This is only
# required if you do not use an external reverse proxy. See documentation:
# https://wiki.generaloutline.com/share/1c922644-40d8-41fe-98f9-df2b67239d45
SSL_KEY=
SSL_CERT=
# If using a Cloudfront/Cloudflare distribution or similar it can be set below.
# This will cause paths to javascript, stylesheets, and images to be updated to
# the hostname defined in CDN_URL. In your CDN configuration the origin server
# should be set to the same as URL.
CDN_URL=
# Auto-redirect to https in production. The default is true but you may set to
# false if you can be sure that SSL is terminated at an external loadbalancer.
FORCE_HTTPS=false
# Have the installation check for updates by sending anonymized statistics to
# the maintainers
ENABLE_UPDATES=true
# How many processes should be spawned. As a reasonable rule divide your servers
# available memory by 512 for a rough estimate
WEB_CONCURRENCY=1
# Override the maximum size of document imports, could be required if you have
# especially large Word documents with embedded imagery
MAXIMUM_IMPORT_SIZE=5120000
# You can remove this line if your reverse proxy already logs incoming http
# requests and this ends up being duplicative
#DEBUG=http
# For a complete Slack integration with search and posting to channels the
# following configs are also needed, some more details
# => https://wiki.generaloutline.com/share/be25efd1-b3ef-4450-b8e5-c4a4fc11e02a
#
SLACK_VERIFICATION_TOKEN=your_token
SLACK_APP_ID=A0XXXXXXX
SLACK_MESSAGE_ACTIONS=true
# Optionally enable google analytics to track pageviews in the knowledge base
GOOGLE_ANALYTICS_ID=
# Optionally enable Sentry (sentry.io) to track errors and performance,
# and optionally add a Sentry proxy tunnel for bypassing ad blockers in the UI:
# https://docs.sentry.io/platforms/javascript/troubleshooting/#using-the-tunnel-option)
SENTRY_DSN=
SENTRY_TUNNEL=
# To support sending outgoing transactional emails such as "document updated" or
# "you've been invited" you'll need to provide authentication for an SMTP server
SMTP_HOST=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_EMAIL=
SMTP_REPLY_EMAIL=
SMTP_TLS_CIPHERS=
SMTP_SECURE=true
# The default interface language. See translate.getoutline.com for a list of
# available language codes and their rough percentage translated.
DEFAULT_LANGUAGE=en_US
# Optionally enable rate limiter at application web server
RATE_LIMITER_ENABLED=true
# Configure default throttling parameters for rate limiter
RATE_LIMITER_REQUESTS=1000
RATE_LIMITER_DURATION_WINDOW=60
```

---
tags:
- docker
- vpn
- transmission
- torrent
---
# Dockerised Transmission over VPN
This setup runs a VPN client container, so all your Linux ISOs are downloaded over a VPN.
It works by running the amazing gluetun container, giving it a name (`container_name: vpn`), and setting `network_mode: "container:vpn"` on every container whose traffic should go through the VPN.
The two containers don't have to be in the same docker-compose file.
All traffic is then routed through the VPN container, which is also where the ports are exposed.
Many VPN providers are supported; see the gluetun docs.
```yaml
version: '3.8'
services:
vpn:
image: qmcgaw/gluetun
container_name: vpn
restart: unless-stopped
cap_add:
- NET_ADMIN
ports:
- 9091:9091
environment:
- VPN_SERVICE_PROVIDER=nordvpn
- SERVER_REGIONS=Switzerland
- OPENVPN_USER=
- OPENVPN_PASSWORD=
transmission:
image: lscr.io/linuxserver/transmission:latest
restart: unless-stopped
network_mode: 'container:vpn'
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- ./data/config:/config
- ./data/source:/watch
- /media/storage/dl:/downloads
```
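Since the containers don't have to share a compose file, the Transmission service could also live in a separate file, as long as the gluetun container is already running under the name `vpn`. A sketch:

```yaml
# Separate compose file — assumes the "vpn" container from the
# first file is already up; all traffic is routed through it.
version: '3.8'
services:
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    restart: unless-stopped
    network_mode: 'container:vpn'
    volumes:
      - ./data/config:/config
```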