Switch to terraform

PAIN_POINTS.md (new file, 204 lines)
@@ -0,0 +1,204 @@

# NetBird GitOps - Remaining Pain Points

This document captures challenges discovered during the POC that need resolution before production use.

## Context

**Use case:** ~100 operators, each with 2 devices (BlastPilot + BlastGS-Agent)

**Workflow:** Ticket-based onboarding; an engineer creates a PR, and the merge triggers setup key creation

**Current pain:** Manual setup key creation and peer renaming in the dashboard

---

## Pain Point 1: Peer Naming After Enrollment

### Problem

When a peer enrolls using a setup key, it appears in the NetBird dashboard with its hostname (e.g., `DESKTOP-ABC123` or `raspberrypi`). These hostnames are:

- Often generic and meaningless
- Not controllable via IaC (the peer generates its own keypair locally)
- Confusing when managing 100+ devices

**Desired state:** The peer appears as `pilot-ivanov` or `gs-unit-042` immediately after enrollment.

### Root Cause

NetBird's architecture requires peers to self-enroll:

1. The setup key defines which groups the peer joins
2. The peer runs `netbird up --setup-key <key>`
3. The peer generates its WireGuard keypair locally
4. The peer registers with the management server using its local hostname
5. **There is no API link between "which setup key was used" and "which peer enrolled"**
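
A quick way to see this is to list peers through the management REST API right after an enrollment. The sketch below is illustrative only: it assumes a PAT in `NETBIRD_TOKEN` and the self-hosted API at the management domain; the response shape follows the public NetBird API docs and should be double-checked against the deployed server version.

```bash
# List peers and show what NetBird knows about each one.
# The returned objects carry the hostname-derived name, IP, groups, etc.,
# but no field referencing the setup key that enrolled the peer.
curl -s \
  -H "Authorization: Token $NETBIRD_TOKEN" \
  "https://netbird-poc.networkmonitor.cc/api/peers" | jq '.[] | {id, name, ip}'
```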

### Options

| Option | Description | Effort | Tradeoffs |
|--------|-------------|--------|-----------|
| **A. Manual rename** | Engineer renames the peer in the dashboard after enrollment | Zero | 30 seconds per device, human in the loop |
| **B. Polling service** | Service watches for new peers, matches by timing/IP, renames | Medium | More infrastructure, heuristic matching |
| **C. Per-user tracking groups** | Unique group per user, find the peer by group membership | High | Group sprawl, cleanup needed |
| **D. Installer modification** | Modify BlastPilot/BlastGS-Agent to set the hostname before enrollment | N/A | Code freeze constraint |

### Recommendation

**Option A** is acceptable for ~100 operators with a ticket-based workflow:

- Ticket arrives -> engineer creates PR -> PR merges -> engineer sends setup key -> operator enrolls -> **engineer renames peer (30 sec)**
- Total engineer time per onboarding: ~5 minutes
- No additional infrastructure

**Option B** is worth considering if:

- Onboarding volume increases significantly
- Full automation is required (no human in the loop); a possible shape for this service is sketched below
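
If Option B ever becomes necessary, its core is a small loop that finds recently connected peers with default-looking names and renames them via the API. The snippet is a rough sketch under stated assumptions: the peer-update endpoint follows the public NetBird API (and may require more fields than `name`, e.g. `ssh_enabled`; check the API reference), while the matching logic that produces `PEER_ID` is entirely hypothetical and would need hardening.

```bash
# Hypothetical polling step: rename a newly enrolled peer.
# PEER_ID and NEW_NAME would come from matching logic (timing, ticket data);
# both are placeholders here.
PEER_ID="<peer-id-from-matching-logic>"
NEW_NAME="pilot-ivanov"

curl -s -X PUT \
  -H "Authorization: Token $NETBIRD_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"$NEW_NAME\"}" \
  "https://netbird-poc.networkmonitor.cc/api/peers/$PEER_ID"
```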

---

## Pain Point 2: Per-User vs Per-Role Setup Keys

### Current State

Setup keys are defined per-role in `terraform/setup_keys.tf`:

```hcl
resource "netbird_setup_key" "gs_onboarding" {
  name        = "ground-station-onboarding"
  type        = "reusable"
  auto_groups = [netbird_group.ground_stations.id]
  ...
}
```

This means:

- One reusable key per role
- The key is shared across all operators of that role
- No way to track "this key was issued to Ivanov"

### Problems

1. **No audit trail** - Can't answer "who enrolled device X?"
2. **Revocation is all-or-nothing** - Revoking `pilot-onboarding` affects everyone
3. **No usage attribution** - Can't enforce "one device per operator"

### Options

| Option | Description | Effort | Tradeoffs |
|--------|-------------|--------|-----------|
| **A. Accept per-role keys** | Current state, manual tracking in the ticket system | Zero | No IaC-level audit trail |
| **B. Per-user setup keys** | Create a key per onboarding request | Low | More keys to manage, cleanup needed |
| **C. One-off keys** | Each key has `usage_limit = 1` | Low | Key is consumed after use, good for audit |

### Recommendation

**Option C (one-off keys)** provides the best tradeoff:

- Create a unique key per onboarding ticket
- The key auto-expires after first use
- Clear audit trail: the key name links to the ticket number
- Easy to implement:

```hcl
# Example: ticket-based one-off key
resource "netbird_setup_key" "ticket_1234_pilot" {
  name        = "ticket-1234-pilot-ivanov"
  type        = "one-off"
  auto_groups = [netbird_group.pilots.id]
  usage_limit = 1
  ephemeral   = false
}
```

**Workflow:**

1. Ticket ACHILLES-1234: "Onboard pilot Ivanov"
2. Engineer adds setup key `ticket-1234-pilot-ivanov` to Terraform
3. PR merged, key created
4. Engineer sends the key to the operator (see Pain Point 3)
5. Operator uses the key; it is consumed
6. After enrollment, the engineer renames the peer to `pilot-ivanov`
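
One supporting piece for this workflow: each per-ticket key also needs a matching Terraform output so the engineer can retrieve it after CI applies the change (this is what the summary workflow's `terraform output -raw ticket_1234_pilot_key` relies on). A minimal sketch, assuming the provider exposes the plaintext key as a `key` attribute (check the provider docs for the actual attribute name):

```hcl
# outputs.tf - expose the per-ticket key for retrieval after apply.
# Marking it sensitive keeps it out of plan/apply logs, though it still
# ends up in the state file (see Pain Point 3).
output "ticket_1234_pilot_key" {
  value     = netbird_setup_key.ticket_1234_pilot.key
  sensitive = true
}
```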

---

## Pain Point 3: Secure Key Distribution

### Problem

After CI/CD creates a setup key, how does it reach the operator?

Setup keys are sensitive:

- Anyone with the key can enroll a device into the network
- Keys may be reusable (depends on configuration)
- Keys should be transmitted securely

### Current State

Setup keys are output by Terraform:

```bash
terraform output -raw gs_setup_key
```

But:

- This requires local Terraform access
- There is no automated distribution mechanism
- Keys end up in the state file (committed to git in the POC - not ideal)

### Options

| Option | Description | Effort | Tradeoffs |
|--------|-------------|--------|-----------|
| **A. Manual retrieval** | Engineer runs `terraform output` locally | Zero | Requires CLI access, manual process |
| **B. CI output to ticket** | CI posts the key to the ticket system via API | Medium | Keys in ticket history (audit trail) |
| **C. Secrets manager** | Store keys in Vault/1Password, notify the engineer | Medium | Another system to integrate |
| **D. Encrypted email** | CI encrypts the key, emails it to the operator | High | Key management complexity |

### Recommendation

**Option A** for now (consistent with the manual rename):

- Engineer retrieves the key after CI completes
- Engineer sends the key to the operator via a secure channel (Signal, encrypted email)
- Ticket is updated with a "key sent" status

**Option B** is worth implementing if:

- Volume increases
- Full automation is wanted
- The ticket system has a secure "hidden fields" feature; a rough sketch of such a CI step follows
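
As a sketch of Option B, the apply job could end with a step that reads the Terraform output and attaches it to the originating ticket. Everything below is illustrative: the ticket system URL, endpoint, and field names are placeholders, not a real integration.

```bash
# Hypothetical CI step: push the freshly created key to the ticket that
# requested it. TICKET_API_URL, the token, and the JSON shape are placeholders
# for whatever the ticket system actually exposes.
KEY="$(terraform output -raw ticket_1234_pilot_key)"

curl -s -X POST \
  -H "Authorization: Bearer $TICKET_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"ticket\": \"ACHILLES-1234\", \"hidden_field\": \"setup_key\", \"value\": \"$KEY\"}" \
  "$TICKET_API_URL/tickets/ACHILLES-1234/fields"
```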

---

## Summary: Recommended Workflow

Given the constraints (code freeze, ~100 operators, ticket-based), the pragmatic workflow is:

```
1. Ticket created: "Onboard pilot Ivanov with BlastPilot + GS"

2. Engineer adds to Terraform:
   - ticket-1234-pilot (one-off, 7 days)
   - ticket-1234-gs    (one-off, 7 days)

3. Engineer creates PR, gets review, merges

4. CI/CD applies changes, keys created

5. Engineer retrieves keys:
   terraform output -raw ticket_1234_pilot_key

6. Engineer sends keys to operator via secure channel

7. Operator enrolls both devices

8. Engineer renames peers in dashboard:
   DESKTOP-ABC123 -> pilot-ivanov
   raspberrypi    -> gs-ivanov

9. Engineer closes ticket
```

**Total engineer time:** ~10 minutes per onboarding (pair of devices)
**Automation level:** Groups, policies, key creation automated; naming and distribution manual

---

## Future Improvements (If Needed)

1. **Webhook listener** for peer enrollment events -> auto-rename based on timing correlation
2. **Ticket system integration** for automated key distribution
3. **Custom installer** that prompts for a device name before enrollment
4. **Batch onboarding tool** for multiple operators at once

These can be addressed incrementally as the operation scales.

README.md (new file, 213 lines)
@@ -0,0 +1,213 @@

# NetBird GitOps PoC

Proof of concept for managing NetBird VPN configuration via Infrastructure as Code (IaC) with a GitOps workflow using Terraform.

## Project Status: POC Complete

**Start date:** 2026-02-15
**Status:** Core functionality working, remaining pain points documented

### What Works

- [x] NetBird self-hosted instance deployed (`netbird-poc.networkmonitor.cc`)
- [x] Gitea CI/CD server deployed (`gitea-poc.networkmonitor.cc`)
- [x] Gitea Actions runner for CI/CD
- [x] Terraform implementation - creates groups, policies, setup keys
- [x] CI/CD pipeline - PR shows plan, merge-to-main applies changes

### Remaining Pain Points

See [PAIN_POINTS.md](./PAIN_POINTS.md) for detailed analysis of:
- Peer naming automation (no link between setup keys and enrolled peers)
- Per-user vs per-role setup keys
- Secure key distribution to operators

---

## Architecture

```
+-------------------+     PR/Merge      +-------------------+
|     Engineer      | ----------------> |       Gitea       |
|    (edits .tf)    |                   |   (gitea-poc.*)   |
+-------------------+                   +-------------------+
                                                  |
                                                  | CI/CD
                                                  v
                                        +-------------------+
                                        |     Terraform     |
                                        |   (in Actions)    |
                                        +-------------------+
                                                  |
                                                  | API calls
                                                  v
+-------------------+      Enroll       +-------------------+
|     Operators     | ----------------> |      NetBird      |
| (use setup keys)  |                   |  (netbird-poc.*)  |
+-------------------+                   +-------------------+
```

## Directory Structure

```
netbird-gitops-poc/
├── ansible/                  # Deployment playbooks
│   ├── caddy/                # Shared reverse proxy
│   ├── gitea/                # Standalone Gitea (no OAuth)
│   ├── gitea-runner/         # Gitea Actions runner
│   └── netbird/              # NetBird with embedded IdP
├── terraform/                # Terraform configuration (Gitea repo content)
│   ├── .gitea/workflows/     # CI/CD workflow
│   │   └── terraform.yml
│   ├── main.tf               # Provider config
│   ├── variables.tf          # Input variables
│   ├── groups.tf             # Group resources
│   ├── policies.tf           # Policy resources
│   ├── setup_keys.tf         # Setup key resources
│   ├── outputs.tf            # Output values
│   ├── terraform.tfstate     # State (committed for POC)
│   ├── terraform.tfvars      # Secrets (gitignored)
│   └── terraform.tfvars.example
├── README.md
└── PAIN_POINTS.md
```

## Quick Start

### Prerequisites

- VPS with Docker
- DNS records pointing to the VPS
- Ansible installed locally
- Terraform installed locally (for initial setup)

### 1. Deploy Infrastructure

```bash
# 1. NetBird (generates secrets, needs vault password)
cd ansible/netbird
./generate-vault.sh
ansible-vault encrypt group_vars/vault.yml
ansible-playbook -i poc-inventory.yml playbook-ssl.yml --ask-vault-pass

# 2. Gitea
cd ../gitea
ansible-playbook -i poc-inventory.yml playbook.yml

# 3. Caddy (reverse proxy for both)
cd ../caddy
ansible-playbook -i poc-inventory.yml playbook.yml

# 4. Gitea Runner (get token from Gitea Admin -> Actions -> Runners)
cd ../gitea-runner
ansible-playbook -i poc-inventory.yml playbook.yml -e vault_gitea_runner_token=<TOKEN>
```

### 2. Initial Terraform Setup (Local)

```bash
cd terraform

# Create tfvars with your NetBird PAT
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with the actual token

# Initialize and apply
terraform init
terraform apply
```
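
For reference, `terraform.tfvars` only needs the PAT. The variable name below is an assumption chosen to match the `NETBIRD_TOKEN` CI secret; use whatever name `variables.tf` actually declares.

```hcl
# terraform.tfvars (gitignored) - illustrative only
netbird_token = "nbp_XXXXXXXXXXXXXXXXXXXXXXXX"
```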

### 3. Push to Gitea

```bash
cd terraform
git init
git add .
git commit -m "Initial Terraform config"
git remote add origin git@gitea-poc.networkmonitor.cc:admin/netbird-iac.git
git push -u origin main
```

### 4. Configure Gitea Secrets

In the Gitea repository, go to Settings -> Actions -> Secrets and add:
- `NETBIRD_TOKEN`: Your NetBird PAT

### 5. Make Changes via GitOps

Edit Terraform files locally, then push a branch to create a PR:

```hcl
# groups.tf - add a new group
resource "netbird_group" "new_team" {
  name = "new-team"
}
```

```bash
git checkout -b add-new-team
git add groups.tf
git commit -m "Add new-team group"
git push -u origin add-new-team
# Create PR in Gitea -> CI runs terraform plan
# Merge PR -> CI runs terraform apply
```

---

## CI/CD Workflow

The `.gitea/workflows/terraform.yml` workflow:

| Event | Action |
|-------|--------|
| Pull Request | `terraform plan` (preview changes) |
| Push to main | `terraform apply` (apply changes) |
| After apply | Commit the updated state file |

**State Management:** State is committed to git (acceptable for a single-operator POC). For production, use a remote backend.
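
For orientation, a minimal shape of such a workflow is sketched below. This is not the committed `.gitea/workflows/terraform.yml`, just an illustration of the plan-on-PR / apply-on-main split; the action references, the `github` compatibility context, and the `TF_VAR_netbird_token` variable name are assumptions.

```yaml
# Illustrative sketch of a Gitea Actions workflow with the same behaviour.
name: terraform
on:
  pull_request:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      TF_VAR_netbird_token: ${{ secrets.NETBIRD_TOKEN }}  # assumes variables.tf declares netbird_token
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - name: Plan (PRs)
        if: github.event_name == 'pull_request'   # Gitea Actions exposes the GitHub-compatible context
        run: terraform plan
      - name: Apply (main)
        if: github.event_name == 'push'
        run: terraform apply -auto-approve
      - name: Commit updated state
        if: github.event_name == 'push'
        run: |
          git config user.name "ci"
          git config user.email "ci@localhost"
          git add terraform.tfstate
          git commit -m "Update terraform state" || true
          git push
```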

---

## Key Discoveries

### NetBird API Behavior

1. **Peer IDs are not predictable** - Generated server-side at enrollment time
2. **No setup key -> peer link** - NetBird doesn't record which setup key enrolled a peer
3. **Peers self-enroll** - Peers cannot be created via the API (the WireGuard keypair is generated locally)
4. **Terraform URL format** - Use `https://domain.com`, NOT `https://domain.com/api`
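
Point 4 in practice, as a hedged sketch of the provider block in `main.tf`: the exact argument names depend on the NetBird Terraform provider version in use, so treat the names below as placeholders and check the provider documentation. The point being illustrated is only the URL without the `/api` suffix.

```hcl
# main.tf sketch - provider argument names are assumptions
provider "netbird" {
  management_url = "https://netbird-poc.networkmonitor.cc"  # not .../api
  token          = var.netbird_token
}
```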

---

## Credentials Reference (POC Only)

| Service | Credential | Location |
|---------|------------|----------|
| NetBird PAT | `nbp_T3yD...` | Dashboard -> Team -> Service Users |
| Gitea | admin user | Created during setup |
| VPS | root | `observability-poc.networkmonitor.cc` |

**Warning:** Rotate all credentials before any production use.

---

## Cleanup

```bash
# Destroy Terraform resources
cd terraform
terraform destroy

# Stop VPS services
ssh root@observability-poc.networkmonitor.cc
cd /opt/caddy && docker compose down
cd /opt/gitea && docker compose down
cd /opt/netbird && docker compose down
```

---

## Next Steps

See [PAIN_POINTS.md](./PAIN_POINTS.md) for remaining challenges to address before production use.

ansible/caddy/group_vars/caddy_servers.yml (new file, 25 lines)
@@ -0,0 +1,25 @@

---
# =============================================================================
# Shared Caddy Reverse Proxy Configuration
# =============================================================================
# Single Caddy instance handling all PoC services.

# =============================================================================
# Let's Encrypt Configuration
# =============================================================================
letsencrypt_email: "vlad.stus@gmail.com"

# =============================================================================
# Paths
# =============================================================================
caddy_base_dir: "/opt/caddy"

# =============================================================================
# Services to proxy
# =============================================================================
gitea_domain: "gitea-poc.networkmonitor.cc"
gitea_http_port: 3000
gitea_network: "gitea_gitea"

netbird_domain: "netbird-poc.networkmonitor.cc"
netbird_network: "netbird_netbird"

ansible/caddy/playbook.yml (new file, 134 lines)
@@ -0,0 +1,134 @@

---
# =============================================================================
# Shared Caddy Reverse Proxy Playbook
# =============================================================================
# Deploys single Caddy instance that proxies to Gitea and NetBird.
# Run AFTER deploying Gitea and NetBird (needs their networks).
#
# Prerequisites:
#   1. Gitea deployed (creates gitea_gitea network)
#   2. NetBird deployed (creates netbird_netbird network)
#   3. DNS records pointing to VPS
#
# Usage:
#   ansible-playbook -i poc-inventory.yml playbook.yml
# =============================================================================

- name: Deploy Shared Caddy Reverse Proxy
  hosts: caddy_servers
  become: true
  vars_files:
    - group_vars/caddy_servers.yml

  pre_tasks:
    - name: Check if Gitea network exists
      ansible.builtin.command:
        cmd: docker network inspect {{ gitea_network }}
      register: gitea_network_check
      failed_when: false
      changed_when: false

    - name: Check if NetBird network exists
      ansible.builtin.command:
        cmd: docker network inspect {{ netbird_network }}
      register: netbird_network_check
      failed_when: false
      changed_when: false

    - name: Warn about missing networks
      ansible.builtin.debug:
        msg: |
          WARNING: Some service networks don't exist yet.
          Gitea network ({{ gitea_network }}): {{ 'EXISTS' if gitea_network_check.rc == 0 else 'MISSING - deploy Gitea first' }}
          NetBird network ({{ netbird_network }}): {{ 'EXISTS' if netbird_network_check.rc == 0 else 'MISSING - deploy NetBird first' }}

          Caddy will fail to start until both networks exist.
      when: gitea_network_check.rc != 0 or netbird_network_check.rc != 0

  tasks:
    # =========================================================================
    # Stop existing Caddy if running elsewhere
    # =========================================================================
    - name: Check for Caddy in Gitea deployment
      ansible.builtin.stat:
        path: /opt/gitea/docker-compose.yml
      register: gitea_compose

    - name: Stop Caddy in Gitea deployment
      ansible.builtin.shell: |
        cd /opt/gitea && docker compose stop caddy && docker compose rm -f caddy
      when: gitea_compose.stat.exists
      failed_when: false
      changed_when: true

    # =========================================================================
    # Caddy Directory Structure
    # =========================================================================
    - name: Create Caddy directory
      ansible.builtin.file:
        path: "{{ caddy_base_dir }}"
        state: directory
        mode: "0755"

    # =========================================================================
    # Deploy Configuration Files
    # =========================================================================
    - name: Deploy docker-compose.yml
      ansible.builtin.template:
        src: templates/docker-compose.yml.j2
        dest: "{{ caddy_base_dir }}/docker-compose.yml"
        mode: "0644"

    - name: Deploy Caddyfile
      ansible.builtin.template:
        src: templates/Caddyfile.j2
        dest: "{{ caddy_base_dir }}/Caddyfile"
        mode: "0644"
      register: caddyfile_changed

    # =========================================================================
    # Start Caddy
    # =========================================================================
    - name: Pull Caddy image
      ansible.builtin.command:
        cmd: docker compose pull
        chdir: "{{ caddy_base_dir }}"
      changed_when: true

    - name: Start Caddy
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: "{{ caddy_base_dir }}"
      changed_when: true

    - name: Reload Caddy config if changed
      ansible.builtin.command:
        cmd: docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
        chdir: "{{ caddy_base_dir }}"
      when: caddyfile_changed.changed
      failed_when: false
      changed_when: true

    # =========================================================================
    # Deployment Summary
    # =========================================================================
    - name: Display deployment status
      ansible.builtin.debug:
        msg: |
          ============================================
          Shared Caddy Deployed!
          ============================================

          Proxying:
            - https://{{ gitea_domain }} -> gitea:{{ gitea_http_port }}
            - https://{{ netbird_domain }} -> netbird services

          ============================================

          View logs:
            ssh root@{{ ansible_host }} "cd {{ caddy_base_dir }} && docker compose logs -f"

          Reload config after changes:
            ssh root@{{ ansible_host }} "cd {{ caddy_base_dir }} && docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile"

          ============================================

ansible/caddy/poc-inventory.yml (new file, 8 lines)
@@ -0,0 +1,8 @@

---
all:
  children:
    caddy_servers:
      hosts:
        caddy-poc:
          ansible_host: observability-poc.networkmonitor.cc
          ansible_user: root

ansible/caddy/templates/Caddyfile.j2 (new file, 56 lines)
@@ -0,0 +1,56 @@

# =============================================================================
# Shared Caddy - NetBird GitOps PoC
# =============================================================================
{
    servers :80,:443 {
        protocols h1 h2c h2 h3
    }
    email {{ letsencrypt_email }}
}

(security_headers) {
    header * {
        Strict-Transport-Security "max-age=3600; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        X-XSS-Protection "1; mode=block"
        -Server
        Referrer-Policy strict-origin-when-cross-origin
    }
}

# =============================================================================
# Gitea
# =============================================================================
{{ gitea_domain }} {
    import security_headers
    reverse_proxy gitea:{{ gitea_http_port }}
}

# =============================================================================
# NetBird
# =============================================================================
{{ netbird_domain }} {
    import security_headers

    # Embedded IdP OAuth2 endpoints
    reverse_proxy /oauth2/* management:80
    reverse_proxy /.well-known/openid-configuration management:80
    reverse_proxy /.well-known/jwks.json management:80

    # NetBird Relay
    reverse_proxy /relay* relay:80

    # NetBird Signal (gRPC)
    reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000

    # NetBird Management API (gRPC)
    reverse_proxy /management.ManagementService/* h2c://management:80

    # NetBird Management REST API
    reverse_proxy /api/* management:80

    # NetBird Dashboard (catch-all)
    reverse_proxy /* dashboard:80
}

ansible/caddy/templates/docker-compose.yml.j2 (new file, 35 lines)
@@ -0,0 +1,35 @@

networks:
  # Connect to Gitea network
  gitea:
    name: {{ gitea_network }}
    external: true
  # Connect to NetBird network
  netbird:
    name: {{ netbird_network }}
    external: true

services:
  caddy:
    image: caddy:alpine
    container_name: caddy
    restart: unless-stopped
    networks:
      - gitea
      - netbird
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - {{ caddy_base_dir }}/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "2"

volumes:
  caddy_data:
  caddy_config:

ansible/gitea-runner/group_vars/gitea_runner_servers.yml (new file, 23 lines)
@@ -0,0 +1,23 @@

---
# =============================================================================
# Gitea Actions Runner Configuration
# =============================================================================

# Gitea instance URL
gitea_url: "https://gitea-poc.networkmonitor.cc"

# Runner registration token (get from Gitea: Site Administration → Actions → Runners)
# This will be set via vault or command line
gitea_runner_token: "{{ vault_gitea_runner_token }}"

# Runner name
gitea_runner_name: "poc-runner"

# Runner labels (determines what jobs it can run)
gitea_runner_labels: "ubuntu-latest:docker://node:20-bookworm,ubuntu-22.04:docker://ubuntu:22.04"

# Installation directory
gitea_runner_dir: "/opt/gitea-runner"

# Runner version
gitea_runner_version: "0.2.11"

ansible/gitea-runner/playbook.yml (new file, 160 lines)
@@ -0,0 +1,160 @@

---
# =============================================================================
# Gitea Actions Runner Deployment
# =============================================================================
# Deploys act_runner for Gitea Actions CI/CD.
#
# Prerequisites:
#   1. Gitea instance running with Actions enabled
#   2. Runner registration token from Gitea admin
#
# Get registration token:
#   1. Go to Gitea → Site Administration → Actions → Runners
#   2. Click "Create new runner"
#   3. Copy the registration token
#
# Usage:
#   ansible-playbook -i poc-inventory.yml playbook.yml -e vault_gitea_runner_token=<TOKEN>
# =============================================================================

- name: Deploy Gitea Actions Runner
  hosts: gitea_runner_servers
  become: true
  vars_files:
    - group_vars/gitea_runner_servers.yml

  pre_tasks:
    - name: Validate runner token is provided
      ansible.builtin.assert:
        that:
          - gitea_runner_token is defined
          - gitea_runner_token | length > 0
        fail_msg: |
          Runner token not provided!
          Get it from: {{ gitea_url }}/admin/actions/runners
          Run with: -e vault_gitea_runner_token=<TOKEN>

  tasks:
    # =========================================================================
    # Docker (required for container-based jobs)
    # =========================================================================
    - name: Check if Docker is installed
      ansible.builtin.command: docker --version
      register: docker_check
      changed_when: false
      failed_when: false

    - name: Fail if Docker not installed
      ansible.builtin.fail:
        msg: "Docker is required. Run the gitea or netbird playbook first to install Docker."
      when: docker_check.rc != 0

    # =========================================================================
    # Create Runner Directory
    # =========================================================================
    - name: Create runner directory
      ansible.builtin.file:
        path: "{{ gitea_runner_dir }}"
        state: directory
        mode: "0755"

    # =========================================================================
    # Download act_runner
    # =========================================================================
    - name: Download act_runner binary
      ansible.builtin.get_url:
        url: "https://gitea.com/gitea/act_runner/releases/download/v{{ gitea_runner_version }}/act_runner-{{ gitea_runner_version }}-linux-amd64"
        dest: "{{ gitea_runner_dir }}/act_runner"
        mode: "0755"

    # =========================================================================
    # Register Runner
    # =========================================================================
    - name: Check if runner is already registered
      ansible.builtin.stat:
        path: "{{ gitea_runner_dir }}/.runner"
      register: runner_config

    - name: Register runner with Gitea
      ansible.builtin.command:
        cmd: >
          {{ gitea_runner_dir }}/act_runner register
          --instance {{ gitea_url }}
          --token {{ gitea_runner_token }}
          --name {{ gitea_runner_name }}
          --labels {{ gitea_runner_labels }}
          --no-interactive
        chdir: "{{ gitea_runner_dir }}"
      when: not runner_config.stat.exists
      register: register_result

    - name: Show registration result
      ansible.builtin.debug:
        var: register_result.stdout_lines
      when: register_result is changed

    # =========================================================================
    # Create Systemd Service
    # =========================================================================
    - name: Create systemd service for runner
      ansible.builtin.copy:
        dest: /etc/systemd/system/gitea-runner.service
        mode: "0644"
        content: |
          [Unit]
          Description=Gitea Actions Runner
          After=network.target docker.service
          Requires=docker.service

          [Service]
          Type=simple
          User=root
          WorkingDirectory={{ gitea_runner_dir }}
          ExecStart={{ gitea_runner_dir }}/act_runner daemon
          Restart=always
          RestartSec=10

          [Install]
          WantedBy=multi-user.target

    - name: Reload systemd
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Start and enable runner service
      ansible.builtin.systemd:
        name: gitea-runner
        state: started
        enabled: true

    # =========================================================================
    # Verify
    # =========================================================================
    - name: Wait for runner to be active
      ansible.builtin.pause:
        seconds: 5

    - name: Check runner status
      ansible.builtin.systemd:
        name: gitea-runner
      register: runner_status

    - name: Display deployment status
      ansible.builtin.debug:
        msg: |
          ============================================
          Gitea Actions Runner Deployed!
          ============================================

          Service status: {{ runner_status.status.ActiveState }}

          The runner should now appear in:
            {{ gitea_url }}/admin/actions/runners

          Labels available:
            {{ gitea_runner_labels }}

          View logs:
            journalctl -u gitea-runner -f

          ============================================

ansible/gitea-runner/poc-inventory.yml (new file, 8 lines)
@@ -0,0 +1,8 @@

---
all:
  children:
    gitea_runner_servers:
      hosts:
        gitea-runner-poc:
          ansible_host: observability-poc.networkmonitor.cc
          ansible_user: root

ansible/gitea/cleanup-full.yml (new file, 80 lines)
@@ -0,0 +1,80 @@

---
# =============================================================================
# Gitea - Full Cleanup
# =============================================================================
# Removes containers and optionally all data
#
# Usage (containers only):
#   ansible-playbook -i inventory.yml cleanup-full.yml
#
# Usage (including data - DESTRUCTIVE):
#   ansible-playbook -i inventory.yml cleanup-full.yml -e "remove_data=true"
# =============================================================================

- name: Gitea - Full Cleanup
  hosts: gitea_servers
  become: true
  vars_files:
    - group_vars/gitea_servers.yml
  vars:
    remove_data: false

  tasks:
    - name: Display warning
      ansible.builtin.debug:
        msg: |
          ============================================
          WARNING: Full Cleanup
          ============================================
          remove_data: {{ remove_data }}
          Data directory: {{ gitea_data_dir }}
          ============================================

    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ gitea_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop and remove containers with volumes
      ansible.builtin.command:
        cmd: docker compose down -v --remove-orphans
        chdir: "{{ gitea_base_dir }}"
      when: compose_file.stat.exists
      ignore_errors: true
      changed_when: true

    - name: Remove OAuth setup script
      ansible.builtin.file:
        path: "{{ gitea_base_dir }}/setup-gitea-oauth.sh"
        state: absent

    - name: Remove Gitea data directory
      ansible.builtin.file:
        path: "{{ gitea_data_dir }}"
        state: absent
      when: remove_data | bool

    - name: Remove base directory
      ansible.builtin.file:
        path: "{{ gitea_base_dir }}"
        state: absent
      when: remove_data | bool

    - name: Display cleanup status
      ansible.builtin.debug:
        msg: |
          ============================================
          Gitea - Full Cleanup Complete
          ============================================

          Containers: Removed
          Data: {{ 'REMOVED' if remove_data else 'Preserved at ' + gitea_data_dir }}

          Note: Authentik OAuth application still exists.
          To remove, go to Authentik admin panel:
            https://{{ authentik_domain }}/if/admin/#/core/applications

          To redeploy:
            ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass

          ============================================

ansible/gitea/cleanup-soft.yml (new file, 47 lines)
@@ -0,0 +1,47 @@

---
# =============================================================================
# Gitea - Soft Cleanup
# =============================================================================
# Stops containers but preserves configuration and data
#
# Usage:
#   ansible-playbook -i inventory.yml cleanup-soft.yml
# =============================================================================

- name: Gitea - Soft Cleanup
  hosts: gitea_servers
  become: true
  vars_files:
    - group_vars/gitea_servers.yml

  tasks:
    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ gitea_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop containers (preserve data)
      ansible.builtin.command:
        cmd: docker compose down
        chdir: "{{ gitea_base_dir }}"
      when: compose_file.stat.exists
      ignore_errors: true
      changed_when: true

    - name: Display cleanup status
      ansible.builtin.debug:
        msg: |
          ============================================
          Gitea - Soft Cleanup Complete
          ============================================

          Containers stopped. Data preserved at:
            {{ gitea_data_dir }}

          To restart:
            ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass

          To fully remove:
            ansible-playbook -i inventory.yml cleanup-full.yml

          ============================================

ansible/gitea/group_vars/gitea_servers.yml (new file, 30 lines)
@@ -0,0 +1,30 @@

---
# =============================================================================
# Gitea GitOps PoC Configuration
# =============================================================================
# Standalone Gitea installation without external OAuth.
# Used for hosting Terraform/Pulumi repos and CI/CD pipelines.

# =============================================================================
# Domain Configuration
# =============================================================================
gitea_domain: "gitea-poc.networkmonitor.cc"
gitea_ssh_domain: "gitea-poc.networkmonitor.cc"

# =============================================================================
# Let's Encrypt Configuration
# =============================================================================
letsencrypt_email: "vlad.stus@gmail.com"

# =============================================================================
# Paths
# =============================================================================
gitea_base_dir: "/opt/gitea"
gitea_data_dir: "{{ gitea_base_dir }}/gitea_data"

# =============================================================================
# Docker Configuration
# =============================================================================
gitea_image: "gitea/gitea:latest"
gitea_http_port: 3000
gitea_ssh_port: 2222

ansible/gitea/group_vars/vault.yml (new file, 20 lines)
@@ -0,0 +1,20 @@

---
# =============================================================================
# Gitea Deployment Vault Secrets
# =============================================================================
# Copy to vault.yml and encrypt:
#   cp vault.yml.example vault.yml
#   # Edit vault.yml with your values
#   ansible-vault encrypt vault.yml
#
# Run playbook with:
#   ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
# =============================================================================

# =============================================================================
# Authentik API Access
# =============================================================================
# Bootstrap token from Authentik deployment
# Get from VPS:
#   ssh root@auth.stuslab.cc "grep AUTHENTIK_BOOTSTRAP_TOKEN /opt/authentik/authentik.env"
#vault_authentik_bootstrap_token: "PASTE_AUTHENTIK_BOOTSTRAP_TOKEN_HERE"

ansible/gitea/group_vars/vault.yml.example (new file, 14 lines)
@@ -0,0 +1,14 @@

---
# =============================================================================
# Gitea PoC Vault Secrets
# =============================================================================
# This PoC deployment doesn't use external OAuth, so no secrets are required.
# This file exists for consistency with the deployment pattern.
#
# If you add secrets later:
#   cp vault.yml.example vault.yml
#   # Edit vault.yml
#   ansible-vault encrypt vault.yml

# Placeholder - no secrets needed for standalone Gitea
# vault_example_secret: "changeme"

ansible/gitea/inventory.yml (new file, 9 lines)
@@ -0,0 +1,9 @@

---
all:
  children:
    gitea_servers:
      hosts:
        gitea-homelab:
          ansible_host: 94.130.181.201
          ansible_user: root
          ansible_python_interpreter: /usr/bin/python3

ansible/gitea/migration/cleanup-full.yml (new file, 89 lines)
@@ -0,0 +1,89 @@

---
# =============================================================================
# Gitea - Full Cleanup
# =============================================================================
# Removes containers and optionally all data
#
# Usage (containers only):
#   ansible-playbook -i inventory.yml cleanup-full.yml
#
# Usage (including data - DESTRUCTIVE):
#   ansible-playbook -i inventory.yml cleanup-full.yml -e "remove_data=true"
#
# Usage (including backups - VERY DESTRUCTIVE):
#   ansible-playbook -i inventory.yml cleanup-full.yml -e "remove_data=true" -e "remove_backups=true"
# =============================================================================

- name: Gitea - Full Cleanup
  hosts: gitea_servers
  become: true
  vars_files:
    - group_vars/gitea_servers.yml
  vars:
    # Safety flags - must be explicitly enabled
    remove_data: false
    remove_backups: false

  tasks:
    - name: Display warning
      ansible.builtin.debug:
        msg: |
          ============================================
          WARNING: Full Cleanup
          ============================================
          remove_data: {{ remove_data }}
          remove_backups: {{ remove_backups }}

          Data directory: {{ gitea_data_dir }}
          Backup directory: {{ gitea_backup_dir }}
          ============================================

    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ gitea_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop and remove containers with volumes
      ansible.builtin.command:
        cmd: docker compose down -v --remove-orphans
        chdir: "{{ gitea_base_dir }}"
      when: compose_file.stat.exists
      ignore_errors: true
      changed_when: true

    - name: Remove OAuth setup script
      ansible.builtin.file:
        path: "{{ gitea_base_dir }}/setup-gitea-oauth.sh"
        state: absent

    - name: Remove Gitea data directory
      ansible.builtin.file:
        path: "{{ gitea_data_dir }}"
        state: absent
      when: remove_data | bool

    - name: Remove backup directory
      ansible.builtin.file:
        path: "{{ gitea_backup_dir }}"
        state: absent
      when: remove_backups | bool

    - name: Display cleanup status
      ansible.builtin.debug:
        msg: |
          ============================================
          Gitea - Full Cleanup Complete
          ============================================

          Containers: Removed
          Data: {{ 'REMOVED' if remove_data else 'Preserved at ' + gitea_data_dir }}
          Backups: {{ 'REMOVED' if remove_backups else 'Preserved at ' + gitea_backup_dir }}

          Note: Authentik OAuth application still exists.
          To remove, go to Authentik admin panel:
            https://{{ authentik_domain }}/if/admin/#/core/applications

          To redeploy:
            ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass

          ============================================

ansible/gitea/migration/cleanup-soft.yml (new file, 50 lines)
@@ -0,0 +1,50 @@

---
# =============================================================================
# Gitea - Soft Cleanup
# =============================================================================
# Stops containers but preserves configuration and data
#
# Usage:
#   ansible-playbook -i inventory.yml cleanup-soft.yml
# =============================================================================

- name: Gitea - Soft Cleanup
  hosts: gitea_servers
  become: true
  vars_files:
    - group_vars/gitea_servers.yml

  tasks:
    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ gitea_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop containers (preserve data)
      ansible.builtin.command:
        cmd: docker compose down
        chdir: "{{ gitea_base_dir }}"
      when: compose_file.stat.exists
      ignore_errors: true
      changed_when: true

    - name: Display cleanup status
      ansible.builtin.debug:
        msg: |
          ============================================
          Gitea - Soft Cleanup Complete
          ============================================

          Containers stopped. Data preserved at:
            {{ gitea_data_dir }}

          Backups at:
            {{ gitea_backup_dir }}

          To restart:
            ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass

          To fully remove:
            ansible-playbook -i inventory.yml cleanup-full.yml

          ============================================

ansible/gitea/migration/group_vars/gitea_servers.yml (new file, 69 lines)
@@ -0,0 +1,69 @@

---
# =============================================================================
# Gitea Migration Configuration
# =============================================================================
# Migrating from stuslab.cc to code.stuslab.cc with Authentik OAuth
#
# Before running:
#   1. Ensure Authentik is deployed at auth.stuslab.cc
#   2. Create group_vars/vault.yml from vault.yml.example
#   3. Add DNS record: code.stuslab.cc -> 94.130.181.201
#   4. Run: ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
# =============================================================================

# =============================================================================
# Domain Configuration
# =============================================================================
# Old domain (will redirect to new)
gitea_old_domain: "stuslab.cc"

# New domain for Gitea
gitea_domain: "code.stuslab.cc"

# SSH domain (for git clone URLs)
gitea_ssh_domain: "code.stuslab.cc"

# =============================================================================
# Let's Encrypt Configuration
# =============================================================================
letsencrypt_email: "vlad.stus@gmail.com"

# =============================================================================
# Paths
# =============================================================================
# Existing Gitea installation path on VPS
gitea_base_dir: "/root/gitea"

# Data directory (contains repos, database, config)
gitea_data_dir: "{{ gitea_base_dir }}/gitea_data"

# Backup directory on VPS
gitea_backup_dir: "/root/gitea-backups"

# =============================================================================
# Authentik Configuration
# =============================================================================
# Domain where Authentik is deployed
authentik_domain: "auth.stuslab.cc"

# OAuth provider name (must match exactly in Gitea UI)
gitea_oauth_provider_name: "Authentik"

# OAuth client ID (used in Authentik and Gitea)
gitea_oauth_client_id: "gitea"

# =============================================================================
# Docker Configuration
# =============================================================================
gitea_image: "gitea/gitea:latest"
gitea_http_port: 3000
gitea_ssh_port: 2222

# =============================================================================
# Migration Flags
# =============================================================================
# Create backup before migration (recommended)
gitea_create_backup: true

# Upload backup to GDrive via rclone (requires rclone configured on VPS)
gitea_backup_to_gdrive: false
51
ansible/gitea/migration/group_vars/vault.yml
Normal file
51
ansible/gitea/migration/group_vars/vault.yml
Normal file
@@ -0,0 +1,51 @@
|
|||||||
|
$ANSIBLE_VAULT;1.1;AES256
|
||||||
|
66313066626635366538383531303838363335366332373763343030373535343935343463363037
|
||||||
|
3661653331333337613763316135653338636265656238300a343233383237316565306161326435
|
||||||
|
62616533386336333932393230383332383839363366373566306165383936366361663864393231
|
||||||
|
3536343039663639650a643539323937623334616230363337306661616463313239306438326238
|
||||||
|
31663535333137323831303266336161353232626564613436613732626461343733623963376565
|
||||||
|
61303663326633616263613461383263353734303462363634393562663064663332363738303832
|
||||||
|
66636663653762343636323936656362646236383539666464373862336461363864373963313039
|
||||||
|
31313166656665663035353130643761616161353837313839636631373236343666343838653837
|
||||||
|
36366266636339323931383362646634343164666138633364623538383466363662656635636366
|
||||||
|
33326637303363353961633434376330623836666434383237346430373739333333396539636366
|
||||||
|
32396339663930353131323032343433656332373635643638623862363164636661313735626639
|
||||||
|
36613838366231636636623439393137353138613562646664336366663864306664316130656237
|
||||||
|
33643235646334306336613662303532653033343034643737326230653161326136313132666231
|
||||||
|
64323734623231623933353763383564353438343236323333613461363031363530356431393461
|
||||||
|
38636532636532633532613862636635353532666330373034353164326662656638356233306633
|
||||||
|
37653532306530633135393232316635333863626564666231623961366237366161656437623665
|
||||||
|
39643134623835316139623236633166636364313866343636326466393035653365626130363533
|
||||||
|
31633137653463333561653132636234633230373030376633623166383364646536646261633731
|
||||||
|
37626538623831613431353766656661346565643633353034343533316134616166316136306339
|
||||||
|
35323666306439393865626465396336623662353161396366653532326633346436336566646336
|
||||||
|
38373539353334386134646237653534343430343439366533383738653938336530666266636563
|
||||||
|
66313130313438363830386538306662393264643838656136623136386565303366636362306564
|
||||||
|
62343030616361616661393063313938663433323662373531333435333032353831663537636461
|
||||||
|
62666665646566656562303666333830363337663436633435653934656137626664616163303461
|
||||||
|
32376363353534366235383635333538316431313736663237623966363431343434386263376132
|
||||||
|
37353764313136323335633133343466343830343366363536303237333835303165333337636230
|
||||||
|
37643132643866616633376566623264633534343334306537316461616132336265626537333666
|
||||||
|
61353933366532363363613465313861333362383531306230343238313633633934626264366530
|
||||||
|
64316335623637363537336162303933393935613734326535613738333262323033373935313632
|
||||||
|
63393332346132353735356161393438643264343264326634353562613536303566623464646363
|
||||||
|
61663639336466666364353838323931323134333461303831383265626139303135303566376433
|
||||||
|
64383339373961303137616530616632366562326662646131363534613065623363633731313639
|
||||||
|
36353363633836316436666564396438353161623765356230333166346436346662373032336263
|
||||||
|
34613135623138306331626264316132363838376363373462616338613432343737646231333563
|
||||||
|
33633062613030643832663263376231316431616239373639646532623639646362393234656364
|
||||||
|
61393462346631633365613463323361626664316563656461646137386332366565366135623364
|
||||||
|
36343664333039343538353663346532623733386464306265396565363966363535353837366238
|
||||||
|
36643635623131313636393237643737343565656166653337656666636231343066383962306539
|
||||||
|
64303666613437353039353630353633353630336336636539333166373561626634353363623765
|
||||||
|
62626464386130646536323933653464656332373632366535633436346336306337313063356466
|
||||||
|
66663233616434383230316564343132663132373431396137623334333636363231336334333535
|
||||||
|
63336464623736306531653039333833316631393636363861613938386563613136636561626663
|
||||||
|
66323638653337333732326335376630633065623437386330323136623766313334306663613866
|
||||||
|
38383636353934386662633232303239656134633162396432393363336138366239323330643161
|
||||||
|
39666333393032373363633435316136366663643931366561643735633262323236373465323363
|
||||||
|
34323163353461616433613464646435326335336464333962646361666662656566636339646335
|
||||||
|
31633266663761666432656464323135343534346663383862306461323762306461626161356265
|
||||||
|
64653965643563643263386430653933613566303537636563636536366133383838336335316363
|
||||||
|
31653666323965346535646439316163346166343261656432343465386634313037323736376464
|
||||||
|
3562623165376161663466356130613064366433323662346430
|
||||||
20
ansible/gitea/migration/group_vars/vault.yml.example
Normal file
@@ -0,0 +1,20 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# Gitea Migration Vault Secrets
|
||||||
|
# =============================================================================
|
||||||
|
# Copy to vault.yml and encrypt:
|
||||||
|
# cp vault.yml.example vault.yml
|
||||||
|
# # Edit vault.yml with your values
|
||||||
|
# ansible-vault encrypt vault.yml
|
||||||
|
#
|
||||||
|
# Run playbook with:
|
||||||
|
# ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# Authentik API Access
|
||||||
|
# =============================================================================
|
||||||
|
# Bootstrap token from Authentik deployment
|
||||||
|
# Get from VPS:
|
||||||
|
# ssh root@auth.stuslab.cc "grep AUTHENTIK_BOOTSTRAP_TOKEN /opt/authentik/authentik.env"
|
||||||
|
vault_authentik_bootstrap_token: "PASTE_AUTHENTIK_BOOTSTRAP_TOKEN_HERE"
|
||||||
9
ansible/gitea/migration/inventory.yml
Normal file
@@ -0,0 +1,9 @@
|
|||||||
|
---
|
||||||
|
all:
|
||||||
|
children:
|
||||||
|
gitea_servers:
|
||||||
|
hosts:
|
||||||
|
gitea-homelab:
|
||||||
|
ansible_host: 94.130.181.201
|
||||||
|
ansible_user: root
|
||||||
|
ansible_python_interpreter: /usr/bin/python3
|
||||||
308
ansible/gitea/migration/playbook.yml
Normal file
@@ -0,0 +1,308 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# Gitea Migration Playbook
|
||||||
|
# =============================================================================
|
||||||
|
# Migrates Gitea from stuslab.cc to code.stuslab.cc with Authentik OAuth
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. Authentik deployed at auth.stuslab.cc
|
||||||
|
# 2. DNS record: code.stuslab.cc -> 94.130.181.201
|
||||||
|
# 3. group_vars/vault.yml with authentik bootstrap token
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
|
||||||
|
#
|
||||||
|
# What this playbook does:
|
||||||
|
# 1. Validates existing Gitea installation
|
||||||
|
# 2. Creates backup of gitea_data/
|
||||||
|
# 3. Updates app.ini with new domain settings
|
||||||
|
# 4. Deploys updated Caddyfile with redirect
|
||||||
|
# 5. Creates OAuth application in Authentik
|
||||||
|
# 6. Verifies migration success
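#
# Tip: the lineinfile/template tasks below support check mode, so a preview run with
#   ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass --check --diff
# shows the intended app.ini and Caddyfile changes without applying them
# (command-based tasks are skipped in check mode).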
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
- name: Migrate Gitea to code.stuslab.cc
|
||||||
|
hosts: gitea_servers
|
||||||
|
become: true
|
||||||
|
vars_files:
|
||||||
|
- group_vars/gitea_servers.yml
|
||||||
|
- group_vars/vault.yml
|
||||||
|
|
||||||
|
# ===========================================================================
|
||||||
|
# Pre-flight Validation
|
||||||
|
# ===========================================================================
|
||||||
|
pre_tasks:
|
||||||
|
- name: Validate required variables
|
||||||
|
ansible.builtin.assert:
|
||||||
|
that:
|
||||||
|
- gitea_domain is defined
|
||||||
|
- gitea_old_domain is defined
|
||||||
|
- authentik_domain is defined
|
||||||
|
- vault_authentik_bootstrap_token is defined
|
||||||
|
- vault_authentik_bootstrap_token != "PASTE_AUTHENTIK_BOOTSTRAP_TOKEN_HERE"
|
||||||
|
fail_msg: |
|
||||||
|
Required variables not configured!
|
||||||
|
1. Copy vault.yml.example to vault.yml
|
||||||
|
2. Add your Authentik bootstrap token
|
||||||
|
3. Encrypt with: ansible-vault encrypt group_vars/vault.yml
|
||||||
|
|
||||||
|
- name: Check existing Gitea installation
|
||||||
|
ansible.builtin.stat:
|
||||||
|
path: "{{ gitea_base_dir }}/gitea_data/gitea/conf/app.ini"
|
||||||
|
register: existing_gitea
|
||||||
|
|
||||||
|
- name: Fail if no existing Gitea found
|
||||||
|
ansible.builtin.fail:
|
||||||
|
msg: |
|
||||||
|
No existing Gitea installation found at {{ gitea_base_dir }}
|
||||||
|
Expected app.ini at: {{ gitea_base_dir }}/gitea_data/gitea/conf/app.ini
|
||||||
|
when: not existing_gitea.stat.exists
|
||||||
|
|
||||||
|
- name: Display pre-flight status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
Gitea Migration Pre-flight Check
|
||||||
|
============================================
|
||||||
|
Current domain: {{ gitea_old_domain }}
|
||||||
|
Target domain: {{ gitea_domain }}
|
||||||
|
Authentik: {{ authentik_domain }}
|
||||||
|
Base dir: {{ gitea_base_dir }}
|
||||||
|
============================================
|
||||||
|
|
||||||
|
- name: Verify Authentik is reachable
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "https://{{ authentik_domain }}/api/v3/core/brands/"
|
||||||
|
method: GET
|
||||||
|
headers:
|
||||||
|
Authorization: "Bearer {{ vault_authentik_bootstrap_token }}"
|
||||||
|
status_code: 200
|
||||||
|
timeout: 30
|
||||||
|
register: authentik_check
|
||||||
|
ignore_errors: true
|
||||||
|
|
||||||
|
- name: Warn if Authentik not reachable
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
WARNING: Authentik not reachable at https://{{ authentik_domain }}
|
||||||
|
OAuth setup will be skipped. You can run it manually later.
|
||||||
|
when: authentik_check.failed
|
||||||
|
|
||||||
|
tasks:
|
||||||
|
# =========================================================================
|
||||||
|
# Stage 1: Backup (skip if recent backup exists)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create backup directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ gitea_backup_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
when: gitea_create_backup
|
||||||
|
|
||||||
|
- name: Check for existing backup from today
|
||||||
|
ansible.builtin.find:
|
||||||
|
paths: "{{ gitea_backup_dir }}"
|
||||||
|
patterns: "gitea-backup-{{ ansible_date_time.date | replace('-', '') }}*.tar.gz"
|
||||||
|
register: existing_backups
|
||||||
|
when: gitea_create_backup
|
||||||
|
|
||||||
|
- name: Set backup needed flag
|
||||||
|
ansible.builtin.set_fact:
|
||||||
|
backup_needed: "{{ gitea_create_backup and (existing_backups.files | length == 0) }}"
|
||||||
|
|
||||||
|
- name: Skip backup message
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: "Backup already exists from today: {{ existing_backups.files[0].path | basename }}. Skipping backup."
|
||||||
|
when:
|
||||||
|
- gitea_create_backup
|
||||||
|
- existing_backups.files | length > 0
|
||||||
|
|
||||||
|
- name: Stop Gitea container for consistent backup
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose stop gitea
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
when: backup_needed
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Generate backup timestamp
|
||||||
|
ansible.builtin.set_fact:
|
||||||
|
backup_timestamp: "{{ ansible_date_time.iso8601_basic_short }}"
|
||||||
|
when: backup_needed
|
||||||
|
|
||||||
|
- name: Create backup archive
|
||||||
|
ansible.builtin.archive:
|
||||||
|
path: "{{ gitea_data_dir }}"
|
||||||
|
dest: "{{ gitea_backup_dir }}/gitea-backup-{{ backup_timestamp }}.tar.gz"
|
||||||
|
format: gz
|
||||||
|
when: backup_needed
|
||||||
|
|
||||||
|
- name: Display backup status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: "Backup created: {{ gitea_backup_dir }}/gitea-backup-{{ backup_timestamp }}.tar.gz"
|
||||||
|
when: backup_needed
|
||||||
|
|
||||||
|
- name: Upload backup to GDrive (if configured)
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: "rclone copy {{ gitea_backup_dir }}/gitea-backup-{{ backup_timestamp }}.tar.gz GDrive:backups/gitea/"
|
||||||
|
when:
|
||||||
|
- backup_needed
|
||||||
|
- gitea_backup_to_gdrive
|
||||||
|
ignore_errors: true
|
||||||
|
register: gdrive_upload
|
||||||
|
changed_when: gdrive_upload.rc == 0
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Stage 2: Domain Migration (app.ini)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Update app.ini ROOT_URL
|
||||||
|
ansible.builtin.lineinfile:
|
||||||
|
path: "{{ gitea_data_dir }}/gitea/conf/app.ini"
|
||||||
|
regexp: '^ROOT_URL\s*='
|
||||||
|
line: "ROOT_URL = https://{{ gitea_domain }}/"
|
||||||
|
backup: true
|
||||||
|
|
||||||
|
- name: Update app.ini SSH_DOMAIN
|
||||||
|
ansible.builtin.lineinfile:
|
||||||
|
path: "{{ gitea_data_dir }}/gitea/conf/app.ini"
|
||||||
|
regexp: '^SSH_DOMAIN\s*='
|
||||||
|
line: "SSH_DOMAIN = {{ gitea_ssh_domain }}"
|
||||||
|
|
||||||
|
- name: Update app.ini DOMAIN
|
||||||
|
ansible.builtin.lineinfile:
|
||||||
|
path: "{{ gitea_data_dir }}/gitea/conf/app.ini"
|
||||||
|
regexp: '^DOMAIN\s*='
|
||||||
|
line: "DOMAIN = {{ gitea_domain }}"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Stage 3: Deploy Updated Configuration
|
||||||
|
# =========================================================================
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/docker-compose.yml.j2
|
||||||
|
dest: "{{ gitea_base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
backup: true
|
||||||
|
|
||||||
|
- name: Deploy Caddyfile with domain redirect
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/Caddyfile.j2
|
||||||
|
dest: "{{ gitea_base_dir }}/Caddyfile"
|
||||||
|
mode: "0644"
|
||||||
|
backup: true
|
||||||
|
|
||||||
|
- name: Start services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Wait for Caddy to start (port 443)
|
||||||
|
ansible.builtin.wait_for:
|
||||||
|
port: 443
|
||||||
|
host: 127.0.0.1
|
||||||
|
delay: 5
|
||||||
|
timeout: 60
|
||||||
|
|
||||||
|
- name: Wait for Gitea container to be healthy
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose ps gitea --format json
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
register: gitea_container
|
||||||
|
until: "'running' in gitea_container.stdout"
|
||||||
|
retries: 12
|
||||||
|
delay: 5
|
||||||
|
changed_when: false
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Stage 4: Authentik OAuth Setup
|
||||||
|
# =========================================================================
|
||||||
|
- name: Install jq for OAuth setup script
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name: jq
|
||||||
|
state: present
|
||||||
|
update_cache: true
|
||||||
|
when: not authentik_check.failed
|
||||||
|
|
||||||
|
- name: Deploy Gitea OAuth setup script
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/setup-gitea-oauth.sh.j2
|
||||||
|
dest: "{{ gitea_base_dir }}/setup-gitea-oauth.sh"
|
||||||
|
mode: "0755"
|
||||||
|
when: not authentik_check.failed
|
||||||
|
|
||||||
|
- name: Run Gitea OAuth setup on Authentik
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: "{{ gitea_base_dir }}/setup-gitea-oauth.sh"
|
||||||
|
register: oauth_setup
|
||||||
|
when: not authentik_check.failed
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Display OAuth setup output
|
||||||
|
ansible.builtin.debug:
|
||||||
|
var: oauth_setup.stdout_lines
|
||||||
|
when:
|
||||||
|
- not authentik_check.failed
|
||||||
|
- oauth_setup is defined
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Stage 5: Verification
|
||||||
|
# =========================================================================
|
||||||
|
- name: Wait for Gitea to be healthy on new domain
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "https://{{ gitea_domain }}/api/v1/version"
|
||||||
|
method: GET
|
||||||
|
status_code: 200
|
||||||
|
timeout: 30
|
||||||
|
validate_certs: true
|
||||||
|
register: gitea_health
|
||||||
|
until: gitea_health.status == 200
|
||||||
|
retries: 12
|
||||||
|
delay: 10
|
||||||
|
ignore_errors: true
|
||||||
|
|
||||||
|
- name: Check old domain redirect
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "https://{{ gitea_old_domain }}/"
|
||||||
|
method: GET
|
||||||
|
follow_redirects: none
|
||||||
|
status_code: [301, 302, 308]
|
||||||
|
validate_certs: true
|
||||||
|
register: redirect_check
|
||||||
|
ignore_errors: true
|
||||||
|
|
||||||
|
- name: Display migration status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
Gitea Migration Complete!
|
||||||
|
============================================
|
||||||
|
|
||||||
|
New URL: https://{{ gitea_domain }}
|
||||||
|
Old URL: https://{{ gitea_old_domain }} (redirects)
|
||||||
|
|
||||||
|
Health check: {{ 'PASSED' if gitea_health.status == 200 else 'PENDING - may need DNS propagation' }}
|
||||||
|
Redirect check: {{ 'PASSED' if redirect_check.status in [301, 302, 308] else 'PENDING' }}
|
||||||
|
|
||||||
|
Backup: {{ gitea_backup_dir }}/gitea-backup-{{ backup_timestamp | default('N/A') }}.tar.gz
|
||||||
|
|
||||||
|
============================================
|
||||||
|
MANUAL STEPS REQUIRED:
|
||||||
|
============================================
|
||||||
|
|
||||||
|
1. DNS (if not done):
|
||||||
|
Add A record: code.stuslab.cc -> 94.130.181.201
|
||||||
|
|
||||||
|
2. OAuth Configuration in Gitea UI:
|
||||||
|
- Go to: https://{{ gitea_domain }}/admin/auths/new
|
||||||
|
- See credentials: cat /tmp/gitea-oauth-credentials.json
|
||||||
|
|
||||||
|
3. Test git operations:
|
||||||
|
ssh -T git@{{ gitea_ssh_domain }} -p {{ gitea_ssh_port }}
|
||||||
|
git clone git@{{ gitea_ssh_domain }}:user/repo.git
|
||||||
|
|
||||||
|
============================================
|
||||||
|
|
||||||
|
View logs:
|
||||||
|
ssh root@{{ ansible_host }} "cd {{ gitea_base_dir }} && docker compose logs -f"
|
||||||
|
|
||||||
|
============================================
|
||||||
17
ansible/gitea/migration/templates/Caddyfile.j2
Normal file
@@ -0,0 +1,17 @@
|
|||||||
|
{
|
||||||
|
email {{ letsencrypt_email }}
|
||||||
|
}
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# Primary Domain - Gitea
|
||||||
|
# =============================================================================
|
||||||
|
{{ gitea_domain }} {
|
||||||
|
reverse_proxy gitea:{{ gitea_http_port }}
|
||||||
|
}
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# Old Domain - Permanent Redirect
|
||||||
|
# =============================================================================
|
||||||
|
{{ gitea_old_domain }} {
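# The {uri} placeholder below preserves the original request path and query string on the redirect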
|
||||||
|
redir https://{{ gitea_domain }}{uri} permanent
|
||||||
|
}
|
||||||
49
ansible/gitea/migration/templates/docker-compose.yml.j2
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
networks:
|
||||||
|
gitea:
|
||||||
|
external: false
|
||||||
|
|
||||||
|
services:
|
||||||
|
gitea:
|
||||||
|
image: {{ gitea_image }}
|
||||||
|
container_name: gitea
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- gitea
|
||||||
|
environment:
|
||||||
|
- USER_UID=1000
|
||||||
|
- USER_GID=1000
|
||||||
|
volumes:
|
||||||
|
- {{ gitea_data_dir }}:/data
|
||||||
|
- /etc/timezone:/etc/timezone:ro
|
||||||
|
- /etc/localtime:/etc/localtime:ro
|
||||||
|
ports:
|
||||||
|
- "{{ gitea_ssh_port }}:22"
|
||||||
|
logging:
|
||||||
|
driver: "json-file"
|
||||||
|
options:
|
||||||
|
max-size: "100m"
|
||||||
|
max-file: "2"
|
||||||
|
|
||||||
|
caddy:
|
||||||
|
image: caddy:alpine
|
||||||
|
container_name: caddy
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- gitea
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
- "443:443"
|
||||||
|
- "443:443/udp"
|
||||||
|
volumes:
|
||||||
|
- {{ gitea_base_dir }}/Caddyfile:/etc/caddy/Caddyfile
|
||||||
|
- caddy_data:/data
|
||||||
|
- caddy_config:/config
|
||||||
|
logging:
|
||||||
|
driver: "json-file"
|
||||||
|
options:
|
||||||
|
max-size: "100m"
|
||||||
|
max-file: "2"
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
caddy_data:
|
||||||
|
caddy_config:
|
||||||
252
ansible/gitea/migration/templates/setup-gitea-oauth.sh.j2
Normal file
@@ -0,0 +1,252 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
# =============================================================================
|
||||||
|
# Gitea OAuth Application Setup for Authentik
|
||||||
|
# =============================================================================
|
||||||
|
# Creates OAuth2 provider and application in Authentik for Gitea
|
||||||
|
# Outputs credentials for manual Gitea UI configuration
|
||||||
|
#
|
||||||
|
# Generated by ansible - do not edit manually
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
AUTHENTIK_DOMAIN="{{ authentik_domain }}"
|
||||||
|
GITEA_DOMAIN="{{ gitea_domain }}"
|
||||||
|
CLIENT_ID="{{ gitea_oauth_client_id }}"
|
||||||
|
PROVIDER_NAME="{{ gitea_oauth_provider_name }}"
|
||||||
|
OUTPUT_FILE="/tmp/gitea-oauth-credentials.json"
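# NOTE: this file will contain the OAuth client secret; remove it once Gitea is configured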
|
||||||
|
|
||||||
|
# Bootstrap token from Authentik
|
||||||
|
API_TOKEN="{{ vault_authentik_bootstrap_token }}"
|
||||||
|
|
||||||
|
echo "============================================"
|
||||||
|
echo "Gitea OAuth Application Setup"
|
||||||
|
echo "============================================"
|
||||||
|
echo ""
|
||||||
|
echo "Authentik: https://${AUTHENTIK_DOMAIN}"
|
||||||
|
echo "Gitea: https://${GITEA_DOMAIN}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Test API access
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Testing Authentik API access..."
|
||||||
|
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/brands/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
|
||||||
|
if [ "$HTTP_CODE" != "200" ]; then
|
||||||
|
echo "ERROR: API authentication failed (HTTP $HTTP_CODE)"
|
||||||
|
echo "Check that vault_authentik_bootstrap_token is correct"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
echo "Authentik API ready!"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get authorization flow PK
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding authorization flow..."
|
||||||
|
AUTH_FLOW_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/flows/instances/?designation=authorization" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
AUTH_FLOW_PK=$(echo "$AUTH_FLOW_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Authorization flow: $AUTH_FLOW_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get invalidation flow PK
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding invalidation flow..."
|
||||||
|
INVALID_FLOW_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/flows/instances/?designation=invalidation" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
INVALID_FLOW_PK=$(echo "$INVALID_FLOW_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Invalidation flow: $INVALID_FLOW_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get signing certificate
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding signing certificate..."
|
||||||
|
CERT_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/crypto/certificatekeypairs/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
SIGNING_KEY_PK=$(echo "$CERT_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Signing key: $SIGNING_KEY_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get scope mappings
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Getting scope mappings..."
|
||||||
|
SCOPE_MAPPINGS=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/propertymappings/provider/scope/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
|
||||||
|
OPENID_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="openid") | .pk')
|
||||||
|
PROFILE_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="profile") | .pk')
|
||||||
|
EMAIL_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="email") | .pk')
|
||||||
|
|
||||||
|
echo "Scopes: openid=$OPENID_PK, profile=$PROFILE_PK, email=$EMAIL_PK"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Check if provider already exists
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Checking for existing Gitea provider..."
|
||||||
|
EXISTING_PROVIDER=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/?name=Gitea" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
EXISTING_PK=$(echo "$EXISTING_PROVIDER" | jq -r '.results[0].pk // empty')
|
||||||
|
|
||||||
|
if [ -n "$EXISTING_PK" ] && [ "$EXISTING_PK" != "null" ]; then
|
||||||
|
echo "Provider already exists (PK: $EXISTING_PK), updating..."
|
||||||
|
|
||||||
|
# Update existing provider
|
||||||
|
PROVIDER_RESPONSE=$(curl -s -X PATCH \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/${EXISTING_PK}/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"redirect_uris\": [
|
||||||
|
{\"matching_mode\": \"strict\", \"url\": \"https://${GITEA_DOMAIN}/user/oauth2/${PROVIDER_NAME}/callback\"}
|
||||||
|
]
|
||||||
|
}")
|
||||||
|
|
||||||
|
PROVIDER_PK="$EXISTING_PK"
|
||||||
|
CLIENT_SECRET=$(echo "$EXISTING_PROVIDER" | jq -r '.results[0].client_secret // empty')
|
||||||
|
else
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Create OAuth2 Provider (confidential client for Gitea)
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Creating Gitea OAuth2 Provider..."
|
||||||
|
PROVIDER_RESPONSE=$(curl -s -X POST \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"name\": \"Gitea\",
|
||||||
|
\"authorization_flow\": \"${AUTH_FLOW_PK}\",
|
||||||
|
\"invalidation_flow\": \"${INVALID_FLOW_PK}\",
|
||||||
|
\"signing_key\": \"${SIGNING_KEY_PK}\",
|
||||||
|
\"client_type\": \"confidential\",
|
||||||
|
\"client_id\": \"${CLIENT_ID}\",
|
||||||
|
\"redirect_uris\": [
|
||||||
|
{\"matching_mode\": \"strict\", \"url\": \"https://${GITEA_DOMAIN}/user/oauth2/${PROVIDER_NAME}/callback\"}
|
||||||
|
],
|
||||||
|
\"access_code_validity\": \"minutes=10\",
|
||||||
|
\"access_token_validity\": \"hours=1\",
|
||||||
|
\"refresh_token_validity\": \"days=30\",
|
||||||
|
\"property_mappings\": [\"${OPENID_PK}\", \"${PROFILE_PK}\", \"${EMAIL_PK}\"],
|
||||||
|
\"sub_mode\": \"user_email\",
|
||||||
|
\"include_claims_in_id_token\": true,
|
||||||
|
\"issuer_mode\": \"per_provider\"
|
||||||
|
}")
|
||||||
|
|
||||||
|
PROVIDER_PK=$(echo "$PROVIDER_RESPONSE" | jq -r '.pk // empty')
|
||||||
|
CLIENT_SECRET=$(echo "$PROVIDER_RESPONSE" | jq -r '.client_secret // empty')
|
||||||
|
|
||||||
|
if [ -z "$PROVIDER_PK" ] || [ "$PROVIDER_PK" = "null" ]; then
|
||||||
|
echo "ERROR: Failed to create provider"
|
||||||
|
echo "$PROVIDER_RESPONSE" | jq .
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "Provider PK: $PROVIDER_PK"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Check if application already exists
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Checking for existing Gitea application..."
|
||||||
|
EXISTING_APP=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/applications/?slug=gitea" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
EXISTING_APP_SLUG=$(echo "$EXISTING_APP" | jq -r '.results[0].slug // empty')
|
||||||
|
|
||||||
|
if [ -z "$EXISTING_APP_SLUG" ] || [ "$EXISTING_APP_SLUG" = "null" ]; then
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Create Application
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Creating Gitea Application..."
|
||||||
|
APP_RESPONSE=$(curl -s -X POST \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/applications/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"name\": \"Gitea\",
|
||||||
|
\"slug\": \"gitea\",
|
||||||
|
\"provider\": ${PROVIDER_PK},
|
||||||
|
\"meta_launch_url\": \"https://${GITEA_DOMAIN}\",
|
||||||
|
\"open_in_new_tab\": false
|
||||||
|
}")
|
||||||
|
|
||||||
|
APP_SLUG=$(echo "$APP_RESPONSE" | jq -r '.slug // empty')
|
||||||
|
if [ -z "$APP_SLUG" ] || [ "$APP_SLUG" = "null" ]; then
|
||||||
|
echo "WARNING: Failed to create application (may already exist)"
|
||||||
|
else
|
||||||
|
echo "Application created: $APP_SLUG"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo "Application already exists: $EXISTING_APP_SLUG"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Output credentials
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
cat > "$OUTPUT_FILE" << EOF
|
||||||
|
{
|
||||||
|
"client_id": "${CLIENT_ID}",
|
||||||
|
"client_secret": "${CLIENT_SECRET}",
|
||||||
|
"auto_discover_url": "https://${AUTHENTIK_DOMAIN}/application/o/gitea/.well-known/openid-configuration",
|
||||||
|
"scopes": "email profile",
|
||||||
|
"provider_name": "${PROVIDER_NAME}"
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
|
||||||
|
echo "============================================"
|
||||||
|
echo "OAuth Setup Complete!"
|
||||||
|
echo "============================================"
|
||||||
|
echo ""
|
||||||
|
echo "Credentials saved to: ${OUTPUT_FILE}"
|
||||||
|
echo ""
|
||||||
|
echo "========================================"
|
||||||
|
echo "MANUAL CONFIGURATION REQUIRED IN GITEA"
|
||||||
|
echo "========================================"
|
||||||
|
echo ""
|
||||||
|
echo "1. Log into Gitea as admin:"
|
||||||
|
echo " https://${GITEA_DOMAIN}/user/login"
|
||||||
|
echo ""
|
||||||
|
echo "2. Navigate to:"
|
||||||
|
echo " Site Administration -> Authentication Sources -> Add"
|
||||||
|
echo ""
|
||||||
|
echo "3. Fill in the form:"
|
||||||
|
echo " Authentication Type: OAuth2"
|
||||||
|
echo " Authentication Name: ${PROVIDER_NAME}"
|
||||||
|
echo " OAuth2 Provider: OpenID Connect"
|
||||||
|
echo " Client ID: ${CLIENT_ID}"
|
||||||
|
echo " Client Secret: ${CLIENT_SECRET}"
|
||||||
|
echo " OpenID Connect Auto Discovery URL:"
|
||||||
|
echo " https://${AUTHENTIK_DOMAIN}/application/o/gitea/.well-known/openid-configuration"
|
||||||
|
echo " Additional Scopes: email profile"
|
||||||
|
echo ""
|
||||||
|
echo "4. Click 'Add Authentication Source'"
|
||||||
|
echo ""
|
||||||
|
echo "5. Test by logging out and clicking 'Sign in with ${PROVIDER_NAME}'"
|
||||||
|
echo ""
|
||||||
|
echo "========================================"
|
||||||
|
echo ""
|
||||||
|
echo "Credentials JSON:"
|
||||||
|
cat "$OUTPUT_FILE"
|
||||||
|
echo ""
|
||||||
209
ansible/gitea/playbook.yml
Normal file
@@ -0,0 +1,209 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# Gitea PoC Deployment Playbook (Standalone)
|
||||||
|
# =============================================================================
|
||||||
|
# Deploys standalone Gitea without external OAuth.
|
||||||
|
# Used for hosting Terraform/Pulumi repos and CI/CD pipelines.
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. DNS record: gitea-poc.networkmonitor.cc -> VPS IP
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# ansible-playbook -i poc-inventory.yml playbook.yml
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
- name: Deploy Gitea Code Hosting
|
||||||
|
hosts: gitea_servers
|
||||||
|
become: true
|
||||||
|
vars_files:
|
||||||
|
- group_vars/gitea_servers.yml
|
||||||
|
|
||||||
|
pre_tasks:
|
||||||
|
- name: Validate required variables
|
||||||
|
ansible.builtin.assert:
|
||||||
|
that:
|
||||||
|
- gitea_domain is defined
|
||||||
|
fail_msg: "gitea_domain must be defined in group_vars/gitea_servers.yml"
|
||||||
|
|
||||||
|
tasks:
|
||||||
|
# =========================================================================
|
||||||
|
# Prerequisites
|
||||||
|
# =========================================================================
|
||||||
|
- name: Update apt cache
|
||||||
|
ansible.builtin.apt:
|
||||||
|
update_cache: true
|
||||||
|
cache_valid_time: 3600
|
||||||
|
|
||||||
|
- name: Install prerequisites
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- apt-transport-https
|
||||||
|
- ca-certificates
|
||||||
|
- curl
|
||||||
|
- gnupg
|
||||||
|
- lsb-release
|
||||||
|
- jq
|
||||||
|
state: present
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Docker Installation
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create keyrings directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: /etc/apt/keyrings
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
- name: Add Docker GPG key
|
||||||
|
ansible.builtin.shell: |
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
||||||
|
chmod a+r /etc/apt/keyrings/docker.gpg
|
||||||
|
args:
|
||||||
|
creates: /etc/apt/keyrings/docker.gpg
|
||||||
|
|
||||||
|
- name: Add Docker repository
|
||||||
|
ansible.builtin.apt_repository:
|
||||||
|
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
|
||||||
|
state: present
|
||||||
|
filename: docker
|
||||||
|
|
||||||
|
- name: Install Docker packages
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- docker-ce
|
||||||
|
- docker-ce-cli
|
||||||
|
- containerd.io
|
||||||
|
- docker-buildx-plugin
|
||||||
|
- docker-compose-plugin
|
||||||
|
state: present
|
||||||
|
update_cache: true
|
||||||
|
|
||||||
|
- name: Start and enable Docker
|
||||||
|
ansible.builtin.systemd:
|
||||||
|
name: docker
|
||||||
|
state: started
|
||||||
|
enabled: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Gitea Directory Structure
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create Gitea directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ gitea_base_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
- name: Create Gitea data directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ gitea_data_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Deploy Configuration Files
|
||||||
|
# =========================================================================
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/docker-compose.yml.j2
|
||||||
|
dest: "{{ gitea_base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy Caddyfile
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/Caddyfile.j2
|
||||||
|
dest: "{{ gitea_base_dir }}/Caddyfile"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Firewall (UFW)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Install UFW
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name: ufw
|
||||||
|
state: present
|
||||||
|
|
||||||
|
- name: Allow SSH
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "22"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "80"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTPS
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "443"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow Gitea SSH
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "{{ gitea_ssh_port }}"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Enable UFW
|
||||||
|
community.general.ufw:
|
||||||
|
state: enabled
|
||||||
|
policy: deny
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Start Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Pull Docker images
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose pull
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Start Gitea services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Wait for Gitea to be ready
|
||||||
|
# =========================================================================
|
||||||
|
- name: Wait for Gitea container to be healthy
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose ps gitea --format json
|
||||||
|
chdir: "{{ gitea_base_dir }}"
|
||||||
|
register: gitea_container
|
||||||
|
until: "'running' in gitea_container.stdout"
|
||||||
|
retries: 12
|
||||||
|
delay: 5
|
||||||
|
changed_when: false
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Deployment Summary
|
||||||
|
# =========================================================================
|
||||||
|
- name: Display deployment status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
Gitea Container Deployed!
|
||||||
|
============================================
|
||||||
|
|
||||||
|
Container: gitea (port {{ gitea_http_port }} internal)
|
||||||
|
SSH: port {{ gitea_ssh_port }} exposed
|
||||||
|
|
||||||
|
============================================
|
||||||
|
NEXT STEPS:
|
||||||
|
============================================
|
||||||
|
|
||||||
|
1. Deploy shared Caddy:
|
||||||
|
cd ../caddy && ansible-playbook -i poc-inventory.yml playbook.yml
|
||||||
|
|
||||||
|
2. Then access https://{{ gitea_domain }}
|
||||||
|
|
||||||
|
============================================
|
||||||
|
|
||||||
|
View logs:
|
||||||
|
ssh root@{{ ansible_host }} "cd {{ gitea_base_dir }} && docker compose logs -f"
|
||||||
|
|
||||||
|
============================================
|
||||||
8
ansible/gitea/poc-inventory.yml
Normal file
@@ -0,0 +1,8 @@
|
|||||||
|
---
|
||||||
|
all:
|
||||||
|
children:
|
||||||
|
gitea_servers:
|
||||||
|
hosts:
|
||||||
|
gitea-poc:
|
||||||
|
ansible_host: observability-poc.networkmonitor.cc
|
||||||
|
ansible_user: root
|
||||||
7
ansible/gitea/templates/Caddyfile.j2
Normal file
@@ -0,0 +1,7 @@
|
|||||||
|
{
|
||||||
|
email {{ letsencrypt_email }}
|
||||||
|
}
|
||||||
|
|
||||||
|
{{ gitea_domain }} {
|
||||||
|
reverse_proxy gitea:{{ gitea_http_port }}
|
||||||
|
}
|
||||||
25
ansible/gitea/templates/docker-compose.yml.j2
Normal file
@@ -0,0 +1,25 @@
|
|||||||
|
networks:
|
||||||
|
gitea:
|
||||||
|
external: false
|
||||||
|
|
||||||
|
services:
|
||||||
|
gitea:
|
||||||
|
image: {{ gitea_image }}
|
||||||
|
container_name: gitea
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- gitea
|
||||||
|
environment:
|
||||||
|
- USER_UID=1000
|
||||||
|
- USER_GID=1000
|
||||||
|
volumes:
|
||||||
|
- {{ gitea_data_dir }}:/data
|
||||||
|
- /etc/timezone:/etc/timezone:ro
|
||||||
|
- /etc/localtime:/etc/localtime:ro
|
||||||
|
ports:
|
||||||
|
- "{{ gitea_ssh_port }}:22"
|
||||||
|
logging:
|
||||||
|
driver: "json-file"
|
||||||
|
options:
|
||||||
|
max-size: "100m"
|
||||||
|
max-file: "2"
|
||||||
246
ansible/gitea/templates/setup-gitea-oauth.sh.j2
Normal file
@@ -0,0 +1,246 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
# =============================================================================
|
||||||
|
# Gitea OAuth Application Setup for Authentik
|
||||||
|
# =============================================================================
|
||||||
|
# Creates OAuth2 provider and application in Authentik for Gitea
|
||||||
|
# Outputs credentials for manual Gitea UI configuration
|
||||||
|
#
|
||||||
|
# Generated by ansible - do not edit manually
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
AUTHENTIK_DOMAIN="{{ authentik_domain }}"
|
||||||
|
GITEA_DOMAIN="{{ gitea_domain }}"
|
||||||
|
CLIENT_ID="{{ gitea_oauth_client_id }}"
|
||||||
|
PROVIDER_NAME="{{ gitea_oauth_provider_name }}"
|
||||||
|
OUTPUT_FILE="/tmp/gitea-oauth-credentials.json"
|
||||||
|
|
||||||
|
# Bootstrap token from Authentik
|
||||||
|
API_TOKEN="{{ vault_authentik_bootstrap_token }}"
|
||||||
|
|
||||||
|
echo "============================================"
|
||||||
|
echo "Gitea OAuth Application Setup"
|
||||||
|
echo "============================================"
|
||||||
|
echo ""
|
||||||
|
echo "Authentik: https://${AUTHENTIK_DOMAIN}"
|
||||||
|
echo "Gitea: https://${GITEA_DOMAIN}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Test API access
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Testing Authentik API access..."
|
||||||
|
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/brands/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
|
||||||
|
if [ "$HTTP_CODE" != "200" ]; then
|
||||||
|
echo "ERROR: API authentication failed (HTTP $HTTP_CODE)"
|
||||||
|
echo "Check that vault_authentik_bootstrap_token is correct"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
echo "Authentik API ready!"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get authorization flow PK
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding authorization flow..."
|
||||||
|
AUTH_FLOW_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/flows/instances/?designation=authorization" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
AUTH_FLOW_PK=$(echo "$AUTH_FLOW_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Authorization flow: $AUTH_FLOW_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get invalidation flow PK
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding invalidation flow..."
|
||||||
|
INVALID_FLOW_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/flows/instances/?designation=invalidation" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
INVALID_FLOW_PK=$(echo "$INVALID_FLOW_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Invalidation flow: $INVALID_FLOW_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get signing certificate
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Finding signing certificate..."
|
||||||
|
CERT_RESPONSE=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/crypto/certificatekeypairs/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
SIGNING_KEY_PK=$(echo "$CERT_RESPONSE" | jq -r '.results[0].pk')
|
||||||
|
echo "Signing key: $SIGNING_KEY_PK"
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Get scope mappings
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Getting scope mappings..."
|
||||||
|
SCOPE_MAPPINGS=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/propertymappings/provider/scope/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
|
||||||
|
OPENID_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="openid") | .pk')
|
||||||
|
PROFILE_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="profile") | .pk')
|
||||||
|
EMAIL_PK=$(echo "$SCOPE_MAPPINGS" | jq -r '.results[] | select(.scope_name=="email") | .pk')
|
||||||
|
|
||||||
|
echo "Scopes: openid=$OPENID_PK, profile=$PROFILE_PK, email=$EMAIL_PK"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Check if provider already exists
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Checking for existing Gitea provider..."
|
||||||
|
EXISTING_PROVIDER=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/?name=Gitea" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
EXISTING_PK=$(echo "$EXISTING_PROVIDER" | jq -r '.results[0].pk // empty')
|
||||||
|
|
||||||
|
if [ -n "$EXISTING_PK" ] && [ "$EXISTING_PK" != "null" ]; then
|
||||||
|
echo "Provider already exists (PK: $EXISTING_PK), updating..."
|
||||||
|
|
||||||
|
PROVIDER_RESPONSE=$(curl -s -X PATCH \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/${EXISTING_PK}/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"redirect_uris\": [
|
||||||
|
{\"matching_mode\": \"strict\", \"url\": \"https://${GITEA_DOMAIN}/user/oauth2/${PROVIDER_NAME}/callback\"}
|
||||||
|
]
|
||||||
|
}")
|
||||||
|
|
||||||
|
PROVIDER_PK="$EXISTING_PK"
|
||||||
|
CLIENT_SECRET=$(echo "$EXISTING_PROVIDER" | jq -r '.results[0].client_secret // empty')
|
||||||
|
else
|
||||||
|
# Create OAuth2 Provider
|
||||||
|
echo "Creating Gitea OAuth2 Provider..."
|
||||||
|
PROVIDER_RESPONSE=$(curl -s -X POST \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/providers/oauth2/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"name\": \"Gitea\",
|
||||||
|
\"authorization_flow\": \"${AUTH_FLOW_PK}\",
|
||||||
|
\"invalidation_flow\": \"${INVALID_FLOW_PK}\",
|
||||||
|
\"signing_key\": \"${SIGNING_KEY_PK}\",
|
||||||
|
\"client_type\": \"confidential\",
|
||||||
|
\"client_id\": \"${CLIENT_ID}\",
|
||||||
|
\"redirect_uris\": [
|
||||||
|
{\"matching_mode\": \"strict\", \"url\": \"https://${GITEA_DOMAIN}/user/oauth2/${PROVIDER_NAME}/callback\"}
|
||||||
|
],
|
||||||
|
\"access_code_validity\": \"minutes=10\",
|
||||||
|
\"access_token_validity\": \"hours=1\",
|
||||||
|
\"refresh_token_validity\": \"days=30\",
|
||||||
|
\"property_mappings\": [\"${OPENID_PK}\", \"${PROFILE_PK}\", \"${EMAIL_PK}\"],
|
||||||
|
\"sub_mode\": \"user_email\",
|
||||||
|
\"include_claims_in_id_token\": true,
|
||||||
|
\"issuer_mode\": \"per_provider\"
|
||||||
|
}")
|
||||||
|
|
||||||
|
PROVIDER_PK=$(echo "$PROVIDER_RESPONSE" | jq -r '.pk // empty')
|
||||||
|
CLIENT_SECRET=$(echo "$PROVIDER_RESPONSE" | jq -r '.client_secret // empty')
|
||||||
|
|
||||||
|
if [ -z "$PROVIDER_PK" ] || [ "$PROVIDER_PK" = "null" ]; then
|
||||||
|
echo "ERROR: Failed to create provider"
|
||||||
|
echo "$PROVIDER_RESPONSE" | jq .
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "Provider PK: $PROVIDER_PK"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Check if application already exists
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
echo "Checking for existing Gitea application..."
|
||||||
|
EXISTING_APP=$(curl -s \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/applications/?slug=gitea" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json")
|
||||||
|
EXISTING_APP_SLUG=$(echo "$EXISTING_APP" | jq -r '.results[0].slug // empty')
|
||||||
|
|
||||||
|
if [ -z "$EXISTING_APP_SLUG" ] || [ "$EXISTING_APP_SLUG" = "null" ]; then
|
||||||
|
echo "Creating Gitea Application..."
|
||||||
|
APP_RESPONSE=$(curl -s -X POST \
|
||||||
|
"https://${AUTHENTIK_DOMAIN}/api/v3/core/applications/" \
|
||||||
|
-H "Authorization: Bearer ${API_TOKEN}" \
|
||||||
|
-H "Accept: application/json" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d "{
|
||||||
|
\"name\": \"Gitea\",
|
||||||
|
\"slug\": \"gitea\",
|
||||||
|
\"provider\": ${PROVIDER_PK},
|
||||||
|
\"meta_launch_url\": \"https://${GITEA_DOMAIN}\",
|
||||||
|
\"open_in_new_tab\": false
|
||||||
|
}")
|
||||||
|
|
||||||
|
APP_SLUG=$(echo "$APP_RESPONSE" | jq -r '.slug // empty')
|
||||||
|
if [ -z "$APP_SLUG" ] || [ "$APP_SLUG" = "null" ]; then
|
||||||
|
echo "WARNING: Failed to create application (may already exist)"
|
||||||
|
else
|
||||||
|
echo "Application created: $APP_SLUG"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo "Application already exists: $EXISTING_APP_SLUG"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
# Output credentials
|
||||||
|
# -----------------------------------------------------------------------------
|
||||||
|
cat > "$OUTPUT_FILE" << EOF
|
||||||
|
{
|
||||||
|
"client_id": "${CLIENT_ID}",
|
||||||
|
"client_secret": "${CLIENT_SECRET}",
|
||||||
|
"auto_discover_url": "https://${AUTHENTIK_DOMAIN}/application/o/gitea/.well-known/openid-configuration",
|
||||||
|
"scopes": "email profile",
|
||||||
|
"provider_name": "${PROVIDER_NAME}"
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
|
||||||
|
echo "============================================"
|
||||||
|
echo "OAuth Setup Complete!"
|
||||||
|
echo "============================================"
|
||||||
|
echo ""
|
||||||
|
echo "Credentials saved to: ${OUTPUT_FILE}"
|
||||||
|
echo ""
|
||||||
|
echo "========================================"
|
||||||
|
echo "MANUAL CONFIGURATION REQUIRED IN GITEA"
|
||||||
|
echo "========================================"
|
||||||
|
echo ""
|
||||||
|
echo "1. Log into Gitea as admin:"
|
||||||
|
echo " https://${GITEA_DOMAIN}/user/login"
|
||||||
|
echo ""
|
||||||
|
echo "2. Navigate to:"
|
||||||
|
echo " Site Administration -> Authentication Sources -> Add"
|
||||||
|
echo ""
|
||||||
|
echo "3. Fill in the form:"
|
||||||
|
echo " Authentication Type: OAuth2"
|
||||||
|
echo " Authentication Name: ${PROVIDER_NAME}"
|
||||||
|
echo " OAuth2 Provider: OpenID Connect"
|
||||||
|
echo " Client ID: ${CLIENT_ID}"
|
||||||
|
echo " Client Secret: ${CLIENT_SECRET}"
|
||||||
|
echo " OpenID Connect Auto Discovery URL:"
|
||||||
|
echo " https://${AUTHENTIK_DOMAIN}/application/o/gitea/.well-known/openid-configuration"
|
||||||
|
echo " Additional Scopes: email profile"
|
||||||
|
echo ""
|
||||||
|
echo "4. Click 'Add Authentication Source'"
|
||||||
|
echo ""
|
||||||
|
echo "5. Test by logging out and clicking 'Sign in with ${PROVIDER_NAME}'"
|
||||||
|
echo ""
|
||||||
|
echo "========================================"
|
||||||
|
echo ""
|
||||||
|
echo "Credentials JSON:"
|
||||||
|
cat "$OUTPUT_FILE"
|
||||||
|
echo ""
|
||||||
292
ansible/netbird/README.md
Normal file
@@ -0,0 +1,292 @@
|
|||||||
|
# NetBird Deployment
|
||||||
|
|
||||||
|
Self-hosted NetBird VPN with embedded IdP (no external SSO required).
|
||||||
|
|
||||||
|
## Quick Start
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd ansible/netbird
|
||||||
|
|
||||||
|
# 1. Generate secrets
|
||||||
|
./generate-vault.sh
|
||||||
|
ansible-vault encrypt group_vars/vault.yml
|
||||||
|
|
||||||
|
# 2. Deploy
|
||||||
|
ansible-playbook playbook-ssl.yml -i inventory.yml --ask-vault-pass
|
||||||
|
|
||||||
|
# 3. Create admin (manual - see below)
|
||||||
|
|
||||||
|
# 4. Create PAT (manual - see below)
|
||||||
|
|
||||||
|
# 5. Provision groups and users
|
||||||
|
ansible-playbook setup-groups.yml -i inventory.yml --ask-vault-pass
|
||||||
|
ansible-playbook setup-users.yml -i inventory.yml --ask-vault-pass
|
||||||
|
```
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
- Ubuntu 22.04+ VPS with public IP
|
||||||
|
- Ports 80, 443, 3478 (TCP/UDP) open (see the firewall sketch after this list)
|
||||||
|
- Ansible 2.14+
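
If the VPS firewall is not yet configured, the required ports can be opened up front. A minimal sketch, assuming `ufw` is used and SSH should stay reachable:

```bash
# Mirror the port list above before deploying
ufw allow 22/tcp      # keep SSH access before enabling the firewall
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 3478/tcp
ufw allow 3478/udp
ufw enable
```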
|
||||||
|
|
||||||
|
## Deployment Modes
|
||||||
|
|
||||||
|
| Playbook | Use Case |
|
||||||
|
| --------------------- | ------------------------------------------------------------------------ |
|
||||||
|
| `playbook-ssl-ip.yml` | HTTPS with self-signed cert on IP (recommended for isolated deployments) |
|
||||||
|
| `playbook-ssl.yml` | HTTPS with Let's Encrypt (requires domain) |
|
||||||
|
| `playbook-no-ssl.yml` | HTTP only (not recommended) |
|
||||||
|
|
||||||
|
## Full Workflow
|
||||||
|
|
||||||
|
### Step 1: Configure Inventory
|
||||||
|
|
||||||
|
Edit `inventory.yml`:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
all:
|
||||||
|
children:
|
||||||
|
netbird_servers:
|
||||||
|
hosts:
|
||||||
|
netbird-vps:
|
||||||
|
ansible_host: YOUR_SERVER_IP
|
||||||
|
ansible_user: root
|
||||||
|
ansible_python_interpreter: /usr/bin/python3
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Generate Secrets
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./generate-vault.sh
|
||||||
|
ansible-vault encrypt group_vars/vault.yml
|
||||||
|
```
|
||||||
|
|
||||||
|
This creates:
|
||||||
|
|
||||||
|
- `vault_turn_password` - TURN server authentication
|
||||||
|
- `vault_relay_secret` - Relay server secret
|
||||||
|
- `vault_encryption_key` - Embedded IdP encryption (BACK THIS UP!)
|
||||||
|
- `vault_admin_password` - Initial admin password
|
||||||
|
- `vault_netbird_service_pat` - Leave empty for now
|
||||||
|
|
||||||
|
### Step 3: Deploy NetBird
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ansible-playbook playbook-ssl-ip.yml -i inventory.yml --ask-vault-pass
|
||||||
|
```
|
||||||
|
|
||||||
|
Wait for the deployment to complete. The dashboard will then be available at `https://YOUR_IP`.
|
||||||
|
|
||||||
|
### Step 4: Create Admin User (Manual)
|
||||||
|
|
||||||
|
1. Open `https://YOUR_IP` in a browser (accept the certificate warning)
|
||||||
|
2. Create admin account:
|
||||||
|
- **Email**: `admin@achilles.local` (from `group_vars/netbird_servers.yml`)
|
||||||
|
- **Password**: Use `vault_admin_password` from vault
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# View password
|
||||||
|
ansible-vault view group_vars/vault.yml | grep vault_admin_password
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 5: Create Service User & PAT (Manual)
|
||||||
|
|
||||||
|
After logging in as admin:
|
||||||
|
|
||||||
|
1. Go to **Team** → **Service Users**
|
||||||
|
2. Click **Create Service User**
|
||||||
|
- Name: `Automation Service`
|
||||||
|
- Role: `Admin`
|
||||||
|
3. Click on the created service user
|
||||||
|
4. Click **Create Token**
|
||||||
|
- Name: `ansible-automation`
|
||||||
|
- Expiration: 365 days
|
||||||
|
5. **Copy the token** (shown only once!)
|
||||||
|
|
||||||
|
### Step 6: Store PAT in Vault
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ansible-vault edit group_vars/vault.yml
|
||||||
|
```
|
||||||
|
|
||||||
|
Set:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
vault_netbird_service_pat: "nbp_xxxxxxxxxxxxxxxxxxxx"
|
||||||
|
```
|
||||||
|
|
||||||
|
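
To verify the stored PAT before running the provisioning playbooks, call the management API with it directly. A sketch, assuming the `Authorization: Token <PAT>` scheme used by the NetBird API and `jq` on your workstation:

```bash
# Pull the PAT back out of the vault and count visible users
PAT=$(ansible-vault view group_vars/vault.yml | awk -F'"' '/vault_netbird_service_pat/ {print $2}')
curl -k -s -H "Authorization: Token $PAT" https://YOUR_IP/api/users | jq 'length'   # expect >= 1
```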

### Step 7: Create Groups

```bash
ansible-playbook setup-groups.yml -i inventory.yml --ask-vault-pass
```

This creates:

- Battalion groups: `battalion-1-pilots`, `battalion-1-ground-stations`, etc.
- Dev team group: `dev-team`
- Setup keys for each group
- Access policies (battalion isolation + dev access)

### Step 8: Provision Users

Edit `group_vars/netbird_servers.yml` to define users:

```yaml
netbird_users:
  # Dev team (full access)
  - email: "vlad.stus@achilles.local"
    name: "Vlad Stus"
    role: "admin"
    auto_groups:
      - "dev-team"

  # Battalion users
  - email: "pilot1.bat1@achilles.local"
    name: "Pilot One B1"
    role: "user"
    battalion: "battalion-1"
    type: "pilot"

  - email: "gs-operator1.bat1@achilles.local"
    name: "GS Operator One B1"
    role: "user"
    battalion: "battalion-1"
    type: "ground-station"
```

Then run:

```bash
ansible-playbook setup-users.yml -i inventory.yml --ask-vault-pass
```

**Dry run** (preview without creating):

```bash
ansible-playbook setup-users.yml -i inventory.yml --ask-vault-pass -e "dry_run=true"
```

Credentials are saved to `files/credentials/users-YYYY-MM-DD.yml`.
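
The credentials directory is gitignored, so the generated file is the only record of the initial passwords. To review the most recent run (assuming the file is plain YAML, as the path suggests):

```bash
# List generated credential files and print the newest one
ls -1 files/credentials/
cat "$(ls -1t files/credentials/users-*.yml | head -1)"
```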

## User Roles

| Role    | Permissions                                |
| ------- | ------------------------------------------ |
| `owner` | Full control, can delete account           |
| `admin` | Manage users, groups, policies, setup keys |
| `user`  | Connect peers, view own peers              |

## User Types (for battalion assignment)

| Type             | Auto-Group                    |
| ---------------- | ----------------------------- |
| `pilot`          | `{battalion}-pilots`          |
| `ground-station` | `{battalion}-ground-stations` |
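
The table above is simply the `{battalion}-{type}s` naming convention. A throwaway helper (hypothetical, not part of the playbooks) that mirrors it can be handy when scripting around setup keys:

```bash
# Mirror the auto-group convention from the table above
auto_group() {
  local battalion="$1" type="$2"
  case "$type" in
    pilot)          echo "${battalion}-pilots" ;;
    ground-station) echo "${battalion}-ground-stations" ;;
    *)              echo "unknown type: $type" >&2; return 1 ;;
  esac
}

auto_group battalion-1 pilot            # -> battalion-1-pilots
auto_group battalion-1 ground-station   # -> battalion-1-ground-stations
```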

## Backup & Restore

**Create backup** (downloads to `~/achilles-backups/netbird/`):

```bash
ansible-playbook backup.yml -i inventory.yml
```

**Restore latest backup**:

```bash
ansible-playbook restore.yml -i inventory.yml
```

**Restore specific backup**:

```bash
ansible-playbook restore.yml -i inventory.yml -e "backup_file=netbird-backup-20250116T120000.tar.gz"
```

Backups include:

- Management SQLite database (peers, routes, policies, users)
- Configuration files (docker-compose.yml, Caddyfile, etc.)
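
Before relying on a backup, confirm the newest archive actually contains the database and config files listed above (assuming the archive naming shown in the restore example):

```bash
# Inspect the newest downloaded archive without extracting it
tar -tzf "$(ls -1t ~/achilles-backups/netbird/netbird-backup-*.tar.gz | head -1)"
```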

## Cleanup

**Soft cleanup** (stop containers, keep data):

```bash
ansible-playbook cleanup-soft.yml -i inventory.yml
```

**Full cleanup** (remove everything including data):

```bash
ansible-playbook cleanup-full.yml -i inventory.yml
```
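
To confirm a full cleanup left nothing behind, a quick look on the server should show no NetBird containers, volumes, or config directory. A sketch; it assumes containers and volumes are prefixed `netbird`, matching the compose project in `/opt/netbird`:

```bash
ssh root@YOUR_IP 'docker ps -a --filter name=netbird; docker volume ls --filter name=netbird; ls -d /opt/netbird || echo "config dir removed"'
```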

## Connecting Peers

After provisioning, users connect with:

```bash
# Using setup key (for automated deployments)
netbird up --management-url https://YOUR_IP --setup-key SETUP_KEY

# Using user credentials (interactive)
netbird up --management-url https://YOUR_IP
# Then login with email/password in browser
```
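
Once `netbird up` returns, the client can report its own connection state, which is the quickest way to confirm the peer reached the management server:

```bash
# Local peer state after enrollment
netbird status

# More detail, including relay reachability and connected peers
netbird status -d
```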

## File Structure

```
ansible/deployments/netbird/
├── backup.yml               # Backup management DB and config
├── cleanup-full.yml         # Remove everything
├── cleanup-soft.yml         # Stop containers only
├── generate-vault.sh        # Generate random secrets
├── group_vars/
│   ├── netbird_servers.yml  # Main configuration
│   ├── vault.yml            # Encrypted secrets
│   └── vault.yml.example    # Template for vault
├── inventory.yml            # Server inventory
├── playbook-no-ssl.yml      # HTTP deployment
├── playbook-ssl-ip.yml      # HTTPS with self-signed (IP)
├── playbook-ssl.yml         # HTTPS with Let's Encrypt
├── restore.yml              # Restore from backup
├── setup-bootstrap.yml      # Bootstrap admin (if API available)
├── setup-groups.yml         # Create groups, keys, policies
├── setup-users.yml          # Provision users
├── templates/               # Jinja2 templates
└── files/
    └── credentials/         # Generated user passwords (gitignored)
```

## Troubleshooting

### Certificate Warning

Expected for SSL-IP mode. Accept the self-signed certificate in your browser.

### "Unauthenticated" after login

This is a JWKS race-condition bug in NetBird. Wait 30 seconds and try again, or restart the management container:

```bash
ssh root@YOUR_IP "cd /opt/netbird && docker compose restart management"
```

### API returns 404 on /api/instance/setup

The bootstrap API isn't exposed in all NetBird versions. Create the admin manually in the dashboard.

### View logs

```bash
ssh root@YOUR_IP "cd /opt/netbird && docker compose logs -f"
```

### Check container status

```bash
ssh root@YOUR_IP "cd /opt/netbird && docker compose ps"
```
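
If the containers are up but the dashboard still misbehaves, two quick probes from your workstation narrow things down (`-k` is only needed for the self-signed SSL-IP mode):

```bash
# Dashboard should return HTTP 200
curl -k -s -o /dev/null -w "dashboard: %{http_code}\n" https://YOUR_IP/

# Management API should answer; 401/403 without a token is expected
curl -k -s -o /dev/null -w "api: %{http_code}\n" https://YOUR_IP/api/users
```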
75
ansible/netbird/cleanup-full.yml
Normal file
@@ -0,0 +1,75 @@
---
# =============================================================================
# Full Cleanup - Remove everything including data
# =============================================================================
# WARNING: This will delete all NetBird data including:
# - Peer configurations
# - User accounts
# - Groups and policies
# - TLS certificates
#
# Run: ansible-playbook -i inventory.yml cleanup-full.yml

- name: Full Cleanup - Remove everything
  hosts: netbird_servers
  become: true
  vars_files:
    - group_vars/netbird_servers.yml

  tasks:
    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ netbird_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop and remove containers with volumes
      ansible.builtin.command:
        cmd: docker compose down -v
        chdir: "{{ netbird_base_dir }}"
      when: compose_file.stat.exists
      changed_when: true

    - name: Remove any orphaned NetBird volumes
      ansible.builtin.command:
        cmd: docker volume rm {{ item }}
      loop:
        - netbird_management
        - netbird_caddy_data
      ignore_errors: true
      changed_when: true

    - name: Remove configuration directory
      ansible.builtin.file:
        path: "{{ netbird_base_dir }}"
        state: absent

    - name: Prune unused Docker images
      ansible.builtin.command:
        cmd: docker image prune -af --filter "label=org.opencontainers.image.title=netbird*"
      changed_when: true
      ignore_errors: true

    - name: Display cleanup summary
      ansible.builtin.debug:
        msg: |
          ============================================
          Full Cleanup Complete!
          ============================================

          Removed:
          - All NetBird containers
          - All NetBird Docker volumes
          - Configuration directory: {{ netbird_base_dir }}
          - Unused NetBird Docker images

          NetBird has been completely removed.
          To redeploy, run the appropriate playbook:
            ansible-playbook playbook-ssl-ip.yml -i inventory.yml --ask-vault-pass
          or
            ansible-playbook playbook-ssl.yml -i inventory.yml --ask-vault-pass
          or
            ansible-playbook playbook-no-ssl.yml -i inventory.yml --ask-vault-pass

          Then bootstrap admin:
            ansible-playbook setup-bootstrap.yml -i inventory.yml --ask-vault-pass
          ============================================
65
ansible/netbird/cleanup-soft.yml
Normal file
@@ -0,0 +1,65 @@
---
# =============================================================================
# Soft Cleanup - Stop containers, preserve data
# =============================================================================
# Run: ansible-playbook -i inventory.yml cleanup-soft.yml

- name: Soft Cleanup - Stop containers, preserve data
  hosts: netbird_servers
  become: true
  vars_files:
    - group_vars/netbird_servers.yml

  tasks:
    - name: Check if docker-compose.yml exists
      ansible.builtin.stat:
        path: "{{ netbird_base_dir }}/docker-compose.yml"
      register: compose_file

    - name: Stop and remove containers (preserve volumes)
      ansible.builtin.command:
        cmd: docker compose down
        chdir: "{{ netbird_base_dir }}"
      when: compose_file.stat.exists
      changed_when: true

    - name: Get preserved Docker volumes
      ansible.builtin.command:
        cmd: docker volume ls -q --filter name=netbird
      register: preserved_volumes
      changed_when: false
      ignore_errors: true

    - name: Get config files
      ansible.builtin.find:
        paths: "{{ netbird_base_dir }}"
        patterns: "*"
      register: config_files

    - name: Display cleanup summary
      ansible.builtin.debug:
        msg: |
          ============================================
          Soft Cleanup Complete!
          ============================================

          Stopped and removed:
          - All NetBird containers

          Preserved (data intact):
          - Docker volumes:
          {% for vol in preserved_volumes.stdout_lines %}
            - {{ vol }}
          {% endfor %}
          - Configuration directory: {{ netbird_base_dir }}
          - Configuration files:
          {% for file in config_files.files %}
            - {{ file.path | basename }}
          {% endfor %}

          To restart services:
            cd {{ netbird_base_dir }} && docker compose up -d

          To perform full cleanup (wipe data):
            ansible-playbook cleanup-full.yml -i inventory.yml
          ============================================
9
ansible/netbird/dev-inventory.yml
Normal file
@@ -0,0 +1,9 @@
---
all:
  children:
    netbird_servers:
      hosts:
        netbird-vps:
          ansible_host: dev.netbird.achilles-rnd.cc
          ansible_user: app
          ansible_python_interpreter: /usr/bin/python3
9
ansible/netbird/ext-inventory.yml
Normal file
@@ -0,0 +1,9 @@
---
all:
  children:
    netbird_servers:
      hosts:
        netbird-vps:
          ansible_host: ext.netbird.achilles-rnd.cc
          ansible_user: app
          ansible_python_interpreter: /usr/bin/python3
71
ansible/netbird/generate-vault.sh
Executable file
@@ -0,0 +1,71 @@
#!/bin/bash
# =============================================================================
# Generate vault.yml with random passwords
# =============================================================================
# Usage: ./generate-vault.sh
# Output: group_vars/vault.yml (ready to encrypt)

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VAULT_FILE="$SCRIPT_DIR/group_vars/vault.yml"

# Generate alphanumeric passwords (no special chars - safe for connection strings)
generate_password() {
    local length=${1:-32}
    openssl rand -base64 48 | tr -d '/+=\n' | head -c "$length"
}

# Generate base64 encryption key (for AES-256-GCM)
generate_encryption_key() {
    openssl rand -base64 32
}

echo "Generating vault.yml with random passwords..."

cat > "$VAULT_FILE" << EOF
---
# =============================================================================
# NetBird v1.6 Vault Secrets
# =============================================================================
# Generated: $(date -Iseconds)
# Encrypt with: ansible-vault encrypt group_vars/vault.yml

# TURN server password
vault_turn_password: "$(generate_password 32)"

# Relay secret
vault_relay_secret: "$(generate_password 32)"

# Encryption key for embedded IdP (AES-256-GCM)
# CRITICAL: Back this up! Loss prevents recovery of user data.
vault_encryption_key: "$(generate_encryption_key)"

# =============================================================================
# User Provisioning
# =============================================================================

# Initial admin password (for setup-bootstrap.yml)
vault_admin_password: "$(generate_password 20)"

# Service user PAT for API automation
# LEAVE EMPTY - fill after running setup-bootstrap.yml and creating PAT in dashboard
vault_netbird_service_pat: ""
EOF

echo ""
echo "Generated: $VAULT_FILE"
echo ""
echo "Contents:"
echo "----------------------------------------"
cat "$VAULT_FILE"
echo "----------------------------------------"
echo ""
echo "Next steps:"
echo " 1. Review the file above"
echo " 2. Encrypt: ansible-vault encrypt group_vars/vault.yml"
echo " 3. Deploy: ansible-playbook -i inventory.yml playbook-ssl-ip.yml --ask-vault-pass"
echo " 4. Bootstrap: ansible-playbook -i inventory.yml setup-bootstrap.yml --ask-vault-pass"
echo " 5. Create service user PAT in dashboard, add to vault.yml"
echo " 6. Groups: ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass"
echo " 7. Users: ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass"
73
ansible/netbird/group_vars/netbird_servers.yml
Normal file
@@ -0,0 +1,73 @@
---
# =============================================================================
# NetBird GitOps PoC Configuration
# =============================================================================
# Lightweight deployment using NetBird's native user management.
# No external IdP dependency.

# =============================================================================
# Domain Configuration
# =============================================================================
netbird_domain: "netbird-poc.networkmonitor.cc"
netbird_protocol: "https"

# =============================================================================
# Let's Encrypt Configuration
# =============================================================================
letsencrypt_email: "vlad.stus@gmail.com"

# =============================================================================
# Paths
# =============================================================================
netbird_base_dir: "/opt/netbird"

# =============================================================================
# Network Configuration
# =============================================================================
netbird_dns_domain: "netbird.local"

# =============================================================================
# TURN Server Configuration
# =============================================================================
turn_user: "netbird"
turn_password: "{{ vault_turn_password }}"

# =============================================================================
# Relay Configuration
# =============================================================================
relay_secret: "{{ vault_relay_secret }}"

# =============================================================================
# Embedded IdP Encryption Key
# =============================================================================
encryption_key: "{{ vault_encryption_key }}"

# =============================================================================
# Docker Configuration
# =============================================================================
netbird_version: "0.63.0"
dashboard_version: "v2.27.1"
caddy_version: "2.10.2"
coturn_version: "4.8.0-r0"

# =============================================================================
# PoC Groups (for Terraform/Pulumi comparison)
# =============================================================================
# These mirror Achilles network structure for testing IaC tools
poc_groups:
  - name: "ground-stations"
    display_name: "Ground Stations"
  - name: "pilots"
    display_name: "Pilots"
  - name: "operators"
    display_name: "Operators"
  - name: "fusion-servers"
    display_name: "Fusion Servers"

# =============================================================================
# Admin User Configuration (for setup-bootstrap.yml)
# =============================================================================
netbird_admin_user:
  email: "admin@poc.local"
  name: "PoC Administrator"
  password: "{{ vault_admin_password }}"
58
ansible/netbird/group_vars/vault.yml
Normal file
@@ -0,0 +1,58 @@
|
|||||||
|
$ANSIBLE_VAULT;1.1;AES256
|
||||||
|
35363237356164656566323662333037363362353262303931363066386262323061636431333535
|
||||||
|
3938623466643935666439373239323731633432633166360a393938373433626136323237346338
|
||||||
|
39623463663566336662343365643338313162656161613963363262383038326366333730323733
|
||||||
|
6137393662316165330a326562663631313637353837333335643838303663356162376361363732
|
||||||
|
39393132306330643530393235303136363936343065613361646635666564636436366332366137
|
||||||
|
30633965336434653938646339343662653932663330353934343837626335343163326637666331
|
||||||
|
34373261616639323635326266346562383065656463373863383039626365656233386230346265
|
||||||
|
31393731323530313937323038633135376134663863646137336261643862396561336262636637
|
||||||
|
38616536613565623631646363613564623934623736633865626162346330313038663636623438
|
||||||
|
65663565313630356433623735663631333932336435663036393839653237383363316162306436
|
||||||
|
32383735643434336166383236383464333462346339653638393231316562383331613163303762
|
||||||
|
32386464353761333238613562386565316437343265323765373833336666303462656639616662
|
||||||
|
32663732373162653239626537313861356466643835643965633737376138363466303736663233
|
||||||
|
32313439623163643664643961356337323330316365326231616331666336663562323661313261
|
||||||
|
63356130313736303165303365646139346131646165323432383930623630303430353361636635
|
||||||
|
37333263373930613930313533623731613264336236623335346364323734613134666465306564
|
||||||
|
65313161643831343264363134303066653630343538326165316562666463613633653666613436
|
||||||
|
31383331613734366538623636356663613432663138356135666531323534333532353731343561
|
||||||
|
31303062306434343534333564336263646564303266373661393837313561343465623734386265
|
||||||
|
36336432666163383432353330613862393934303066323463353561393236653963653034363731
|
||||||
|
31346635666132303436356230383031623330303861613539663139616266313865313932383035
|
||||||
|
38373531386237306233663963613132353435326234383364616136323636636537633235633364
|
||||||
|
35613038353730323463346561336231613938656664333030313534396438396538353738336434
|
||||||
|
34323963663434633133643739336164623337626339363566323965346136346365626336393737
|
||||||
|
66653165306438616535623532313530653338626131353035623832393961643133363561636562
|
||||||
|
36303262656231633138336462663332656430306538343461383566623437323830303733333066
|
||||||
|
36383834393365393566366333396266346364303232363462663632346236353936616534643438
|
||||||
|
64623564383038323038643135356136646262636263623232383136366533636261343536353763
|
||||||
|
35666137316262383138646337646133383762346436333137393737613830313064356231643635
|
||||||
|
38616164646166363064663962373433313431303861353433356462643865343361646161646263
|
||||||
|
63316565393835633163313763336662383636313061636439643966363834623331363561306138
|
||||||
|
31633830633531306435633463396332633639316562643334393865396234373831333031643463
|
||||||
|
33346466653237343838636639626633633930343465346562623934643732393466393765643162
|
||||||
|
36346166643066343766373135383037363834346331343736623537373033383565343864393038
|
||||||
|
39396438316437653066303966396261643536373865366463306235326139306365316534393730
|
||||||
|
61613966303139643631343831383334656561333730663033653461323139653663313033613664
|
||||||
|
64643464323433373833356661383062356465356535396534323336636662303733313636373433
|
||||||
|
30613565306165303865363333316631653231636561313737373135383263343532343939333162
|
||||||
|
33313338343335313436656239316234363231313264303063333337636637643137393536626661
|
||||||
|
66623164636263663663383535663235336432646363393663626363323939666638616335633566
|
||||||
|
35303934386630616361343362333361316164356532363964613133633136336435623434343037
|
||||||
|
62383733636130303335323163663538333430363465353965333064316530346165653031303832
|
||||||
|
61613164356537633436313338636131646161636631376339383237663536336533653361393666
|
||||||
|
66363032346431623666326163393633656136303435356430653937323566653261376339623532
|
||||||
|
31643232336538626138353433616563656666326630356530346131396162666133666366316562
|
||||||
|
32356635663337396662303931633031363963656665383238356662383063303734313333313931
|
||||||
|
66613764343836356637396336373833323338623632366630326566623231633138623363366132
|
||||||
|
34393566626662643635643036393763666331623431393931366136613566396631393937626132
|
||||||
|
33646361346262333730333830343562393635316363373435306333353033316566356238646235
|
||||||
|
33376665633937613431303763316564666339626564313737383237393432313365356566313234
|
||||||
|
30663636363833313261616630393535376163323637346666613130623338623134633737616237
|
||||||
|
34373565306338383531633932623366343864653563313062613131303564356164653137626634
|
||||||
|
32333431663365343365346665383032663437636666316163386436633261313839623235373838
|
||||||
|
61376131393238623834663838333265316536383439353862633334653135386137353864373034
|
||||||
|
39303037363661613263653665376231386266393061646435353038633935623163333630313336
|
||||||
|
33343532373565333461373666396335666664663838313037383864643033666538316163336663
|
||||||
|
3031
|
||||||
42
ansible/netbird/group_vars/vault.yml.example
Normal file
@@ -0,0 +1,42 @@
---
# =============================================================================
# NetBird v1.6 Vault Secrets
# =============================================================================
# Copy to vault.yml, edit values, then encrypt:
#   cp vault.yml.example vault.yml
#   # Edit vault.yml with your values
#   ansible-vault encrypt vault.yml
#
# Or use: ./generate-vault.sh to auto-generate all secrets

# =============================================================================
# TURN/Relay Configuration
# =============================================================================

# TURN server password (alphanumeric only)
# Generate: openssl rand -base64 32 | tr -d '/+=\n'
vault_turn_password: "YourTurnPassword2024"

# Relay secret (alphanumeric only)
# Generate: openssl rand -base64 32 | tr -d '/+=\n'
vault_relay_secret: "YourRelaySecret2024"

# =============================================================================
# Embedded IdP Encryption Key
# =============================================================================
# CRITICAL: Back this up! Loss prevents recovery of user data.
# Generate: openssl rand -base64 32
vault_encryption_key: "YourBase64EncryptionKey=="

# =============================================================================
# User Provisioning (for setup-bootstrap.yml and setup-users.yml)
# =============================================================================

# Initial admin password (for setup-bootstrap.yml)
# Generate: openssl rand -base64 16 | tr -d '/+=\n'
vault_admin_password: "YourAdminPassword2024"

# Service user PAT for API automation
# LEAVE EMPTY UNTIL AFTER BOOTSTRAP!
# Create manually in dashboard: Team → Service Users → Create Token
vault_netbird_service_pat: ""
9
ansible/netbird/inventory.yml
Normal file
@@ -0,0 +1,9 @@
---
all:
  children:
    netbird_servers:
      hosts:
        netbird-vps:
          ansible_host: your-server.example.com # CHANGE THIS
          ansible_user: root
          ansible_python_interpreter: /usr/bin/python3
244
ansible/netbird/playbook-no-ssl.yml
Normal file
@@ -0,0 +1,244 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# NetBird v1.6 Deployment - No-SSL Mode (HTTP only)
|
||||||
|
# =============================================================================
|
||||||
|
# Lightweight deployment for LAN/air-gapped networks.
|
||||||
|
# Access dashboard by IP address only.
|
||||||
|
#
|
||||||
|
# WARNING: All traffic is unencrypted. Only use on isolated networks.
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. VPS/server accessible on local network
|
||||||
|
# 2. Update inventory.yml with your server IP
|
||||||
|
# 3. Create group_vars/vault.yml from vault.yml.example
|
||||||
|
#
|
||||||
|
# Run:
|
||||||
|
# ansible-playbook -i inventory.yml playbook-no-ssl.yml --ask-vault-pass
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
- name: Deploy NetBird v1.6 (No-SSL Mode)
|
||||||
|
hosts: netbird_servers
|
||||||
|
become: true
|
||||||
|
gather_facts: true
|
||||||
|
vars_files:
|
||||||
|
- group_vars/netbird_servers.yml
|
||||||
|
- group_vars/vault.yml
|
||||||
|
|
||||||
|
pre_tasks:
|
||||||
|
- name: Set no-SSL variables (override group_vars)
|
||||||
|
ansible.builtin.set_fact:
|
||||||
|
netbird_domain: "{{ ansible_default_ipv4.address }}"
|
||||||
|
netbird_protocol: "http"
|
||||||
|
relay_protocol: "rel"
|
||||||
|
relay_port: 80
|
||||||
|
signal_port: 80
|
||||||
|
single_account_domain: "netbird.local"
|
||||||
|
|
||||||
|
tasks:
|
||||||
|
# =========================================================================
|
||||||
|
# Prerequisites
|
||||||
|
# =========================================================================
|
||||||
|
- name: Update apt cache
|
||||||
|
ansible.builtin.apt:
|
||||||
|
update_cache: true
|
||||||
|
cache_valid_time: 3600
|
||||||
|
|
||||||
|
- name: Install prerequisites
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- apt-transport-https
|
||||||
|
- ca-certificates
|
||||||
|
- curl
|
||||||
|
- gnupg
|
||||||
|
- lsb-release
|
||||||
|
- jq
|
||||||
|
state: present
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Docker Installation
|
||||||
|
# =========================================================================
|
||||||
|
- name: Check if Docker is installed
|
||||||
|
ansible.builtin.command: docker --version
|
||||||
|
register: docker_installed
|
||||||
|
changed_when: false
|
||||||
|
failed_when: false
|
||||||
|
|
||||||
|
- name: Create keyrings directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: /etc/apt/keyrings
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker GPG key
|
||||||
|
ansible.builtin.shell: |
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
||||||
|
chmod a+r /etc/apt/keyrings/docker.gpg
|
||||||
|
args:
|
||||||
|
creates: /etc/apt/keyrings/docker.gpg
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker repository
|
||||||
|
ansible.builtin.apt_repository:
|
||||||
|
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
|
||||||
|
state: present
|
||||||
|
filename: docker
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Install Docker packages
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- docker-ce
|
||||||
|
- docker-ce-cli
|
||||||
|
- containerd.io
|
||||||
|
- docker-buildx-plugin
|
||||||
|
- docker-compose-plugin
|
||||||
|
state: present
|
||||||
|
update_cache: true
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Start and enable Docker
|
||||||
|
ansible.builtin.systemd:
|
||||||
|
name: docker
|
||||||
|
state: started
|
||||||
|
enabled: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# NetBird Directory Structure
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create NetBird directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ netbird_base_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Deploy Configuration Files
|
||||||
|
# =========================================================================
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/docker-compose.yml.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy Caddyfile (No-SSL mode)
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/Caddyfile-no-ssl.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/Caddyfile"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy management.json
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/management.json.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/management.json"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy dashboard.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/dashboard.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/dashboard.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy relay.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/relay.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/relay.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy turnserver.conf
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/turnserver.conf.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/turnserver.conf"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Firewall (UFW)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Install UFW
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name: ufw
|
||||||
|
state: present
|
||||||
|
|
||||||
|
- name: Allow SSH
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "22"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "80"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow TURN UDP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: udp
|
||||||
|
|
||||||
|
- name: Allow TURN TCP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Enable UFW
|
||||||
|
community.general.ufw:
|
||||||
|
state: enabled
|
||||||
|
policy: deny
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Start Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Pull Docker images
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose pull
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Start NetBird services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Wait for Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Wait for Management API to be available
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "http://{{ netbird_domain }}/api/users"
|
||||||
|
method: GET
|
||||||
|
status_code: [200, 401, 403]
|
||||||
|
register: api_check
|
||||||
|
until: api_check.status in [200, 401, 403]
|
||||||
|
retries: 30
|
||||||
|
delay: 10
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Display Summary
|
||||||
|
# =========================================================================
|
||||||
|
- name: Display deployment status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
NetBird v1.6 Deployed Successfully! (No-SSL)
|
||||||
|
============================================
|
||||||
|
|
||||||
|
WARNING: Running in HTTP mode - traffic is unencrypted!
|
||||||
|
Only use on isolated/air-gapped networks.
|
||||||
|
|
||||||
|
Dashboard: http://{{ netbird_domain }}
|
||||||
|
|
||||||
|
Initial Setup:
|
||||||
|
1. Access the dashboard by IP
|
||||||
|
2. Create your first user (admin)
|
||||||
|
3. Generate setup keys for battalions
|
||||||
|
|
||||||
|
Connect peers with:
|
||||||
|
netbird up --management-url http://{{ netbird_domain }} --setup-key <KEY>
|
||||||
|
|
||||||
|
View logs:
|
||||||
|
ssh root@{{ ansible_host }} "cd {{ netbird_base_dir }} && docker compose logs -f"
|
||||||
|
============================================
|
||||||
258
ansible/netbird/playbook-ssl-ip.yml
Normal file
@@ -0,0 +1,258 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# NetBird v1.6 Deployment - SSL Mode with Public IP (Self-Signed)
|
||||||
|
# =============================================================================
|
||||||
|
# Uses Caddy's internal CA with self-signed certificates for HTTPS on IP.
|
||||||
|
# Browser will show certificate warning - this is expected.
|
||||||
|
#
|
||||||
|
# Note: Let's Encrypt supports IP certificates now, but Caddy's implementation
|
||||||
|
# is incomplete (GitHub issue #7399). Using self-signed as reliable fallback.
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. VPS with public IP address
|
||||||
|
# 2. Port 80 and 443 accessible
|
||||||
|
# 3. Create group_vars/vault.yml from vault.yml.example
|
||||||
|
#
|
||||||
|
# Run:
|
||||||
|
# ansible-playbook -i inventory.yml playbook-ssl-ip.yml --ask-vault-pass
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
- name: Deploy NetBird v1.6 (SSL Mode - Public IP Self-Signed)
|
||||||
|
hosts: netbird_servers
|
||||||
|
become: true
|
||||||
|
gather_facts: true
|
||||||
|
vars_files:
|
||||||
|
- group_vars/netbird_servers.yml
|
||||||
|
- group_vars/vault.yml
|
||||||
|
|
||||||
|
pre_tasks:
|
||||||
|
- name: Set SSL-IP variables (override group_vars)
|
||||||
|
ansible.builtin.set_fact:
|
||||||
|
netbird_domain: "{{ ansible_default_ipv4.address }}"
|
||||||
|
netbird_protocol: "https"
|
||||||
|
relay_protocol: "rels"
|
||||||
|
relay_port: 443
|
||||||
|
signal_port: 443
|
||||||
|
single_account_domain: "netbird.local"
|
||||||
|
|
||||||
|
tasks:
|
||||||
|
# =========================================================================
|
||||||
|
# Prerequisites
|
||||||
|
# =========================================================================
|
||||||
|
- name: Update apt cache
|
||||||
|
ansible.builtin.apt:
|
||||||
|
update_cache: true
|
||||||
|
cache_valid_time: 3600
|
||||||
|
|
||||||
|
- name: Install prerequisites
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- apt-transport-https
|
||||||
|
- ca-certificates
|
||||||
|
- curl
|
||||||
|
- gnupg
|
||||||
|
- lsb-release
|
||||||
|
- jq
|
||||||
|
state: present
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Docker Installation
|
||||||
|
# =========================================================================
|
||||||
|
- name: Check if Docker is installed
|
||||||
|
ansible.builtin.command: docker --version
|
||||||
|
register: docker_installed
|
||||||
|
changed_when: false
|
||||||
|
failed_when: false
|
||||||
|
|
||||||
|
- name: Create keyrings directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: /etc/apt/keyrings
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker GPG key
|
||||||
|
ansible.builtin.shell: |
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
||||||
|
chmod a+r /etc/apt/keyrings/docker.gpg
|
||||||
|
args:
|
||||||
|
creates: /etc/apt/keyrings/docker.gpg
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker repository
|
||||||
|
ansible.builtin.apt_repository:
|
||||||
|
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
|
||||||
|
state: present
|
||||||
|
filename: docker
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Install Docker packages
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- docker-ce
|
||||||
|
- docker-ce-cli
|
||||||
|
- containerd.io
|
||||||
|
- docker-buildx-plugin
|
||||||
|
- docker-compose-plugin
|
||||||
|
state: present
|
||||||
|
update_cache: true
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Start and enable Docker
|
||||||
|
ansible.builtin.systemd:
|
||||||
|
name: docker
|
||||||
|
state: started
|
||||||
|
enabled: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# NetBird Directory Structure
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create NetBird directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ netbird_base_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Deploy Configuration Files
|
||||||
|
# =========================================================================
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/docker-compose.yml.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy Caddyfile (SSL-IP mode)
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/Caddyfile-ssl-ip.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/Caddyfile"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy management.json
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/management.json.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/management.json"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy dashboard.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/dashboard.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/dashboard.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy relay.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/relay.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/relay.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy turnserver.conf
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/turnserver.conf.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/turnserver.conf"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Firewall (UFW)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Install UFW
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name: ufw
|
||||||
|
state: present
|
||||||
|
|
||||||
|
- name: Allow SSH
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "22"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTP (for ACME challenge)
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "80"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTPS
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "443"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTPS UDP (HTTP/3)
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "443"
|
||||||
|
proto: udp
|
||||||
|
|
||||||
|
- name: Allow TURN UDP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: udp
|
||||||
|
|
||||||
|
- name: Allow TURN TCP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Enable UFW
|
||||||
|
community.general.ufw:
|
||||||
|
state: enabled
|
||||||
|
policy: deny
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Start Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Pull Docker images
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose pull
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Start NetBird services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Wait for Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Wait for Management API to be available
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "https://{{ netbird_domain }}/api/users"
|
||||||
|
method: GET
|
||||||
|
status_code: [200, 401, 403]
|
||||||
|
validate_certs: false
|
||||||
|
register: api_check
|
||||||
|
until: api_check.status in [200, 401, 403]
|
||||||
|
retries: 30
|
||||||
|
delay: 10
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Display Summary
|
||||||
|
# =========================================================================
|
||||||
|
- name: Display deployment status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
NetBird v1.6 Deployed Successfully! (SSL-IP)
|
||||||
|
============================================
|
||||||
|
|
||||||
|
Dashboard: https://{{ netbird_domain }}
|
||||||
|
|
||||||
|
Note: Using self-signed certificate (Caddy internal CA).
|
||||||
|
Your browser will show a certificate warning - accept it to proceed.
|
||||||
|
|
||||||
|
Initial Setup:
|
||||||
|
1. Access the dashboard (accept certificate warning)
|
||||||
|
2. Create your first user (admin)
|
||||||
|
3. Generate setup keys for battalions
|
||||||
|
|
||||||
|
Connect peers with:
|
||||||
|
netbird up --management-url https://{{ netbird_domain }} --setup-key <KEY>
|
||||||
|
|
||||||
|
View logs:
|
||||||
|
ssh root@{{ ansible_host }} "cd {{ netbird_base_dir }} && docker compose logs -f"
|
||||||
|
============================================
|
||||||
242
ansible/netbird/playbook-ssl.yml
Normal file
@@ -0,0 +1,242 @@
|
|||||||
|
---
|
||||||
|
# =============================================================================
|
||||||
|
# NetBird v1.6 Deployment - SSL Mode (Let's Encrypt)
|
||||||
|
# =============================================================================
|
||||||
|
# Lightweight deployment without Authentik SSO.
|
||||||
|
# Uses NetBird native user management.
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. Domain with DNS A record pointing to VPS IP
|
||||||
|
# 2. Port 80 open for ACME challenge
|
||||||
|
# 3. Update inventory.yml with your VPS IP/domain
|
||||||
|
# 4. Update group_vars/netbird_servers.yml with your domain
|
||||||
|
# 5. Create group_vars/vault.yml from vault.yml.example
|
||||||
|
#
|
||||||
|
# Run:
|
||||||
|
# ansible-playbook -i inventory.yml playbook-ssl.yml --ask-vault-pass
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
- name: Deploy NetBird v1.6 (SSL Mode)
|
||||||
|
hosts: netbird_servers
|
||||||
|
become: true
|
||||||
|
vars_files:
|
||||||
|
- group_vars/netbird_servers.yml
|
||||||
|
- group_vars/vault.yml
|
||||||
|
vars:
|
||||||
|
# SSL-specific settings
|
||||||
|
netbird_protocol: "https"
|
||||||
|
relay_protocol: "rels"
|
||||||
|
relay_port: 443
|
||||||
|
signal_port: 443
|
||||||
|
|
||||||
|
tasks:
|
||||||
|
# =========================================================================
|
||||||
|
# Prerequisites
|
||||||
|
# =========================================================================
|
||||||
|
- name: Update apt cache
|
||||||
|
ansible.builtin.apt:
|
||||||
|
update_cache: true
|
||||||
|
cache_valid_time: 3600
|
||||||
|
|
||||||
|
- name: Install prerequisites
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- apt-transport-https
|
||||||
|
- ca-certificates
|
||||||
|
- curl
|
||||||
|
- gnupg
|
||||||
|
- lsb-release
|
||||||
|
- jq
|
||||||
|
state: present
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Docker Installation
|
||||||
|
# =========================================================================
|
||||||
|
- name: Check if Docker is installed
|
||||||
|
ansible.builtin.command: docker --version
|
||||||
|
register: docker_installed
|
||||||
|
changed_when: false
|
||||||
|
failed_when: false
|
||||||
|
|
||||||
|
- name: Create keyrings directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: /etc/apt/keyrings
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker GPG key
|
||||||
|
ansible.builtin.shell: |
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
||||||
|
chmod a+r /etc/apt/keyrings/docker.gpg
|
||||||
|
args:
|
||||||
|
creates: /etc/apt/keyrings/docker.gpg
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Add Docker repository
|
||||||
|
ansible.builtin.apt_repository:
|
||||||
|
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
|
||||||
|
state: present
|
||||||
|
filename: docker
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Install Docker packages
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name:
|
||||||
|
- docker-ce
|
||||||
|
- docker-ce-cli
|
||||||
|
- containerd.io
|
||||||
|
- docker-buildx-plugin
|
||||||
|
- docker-compose-plugin
|
||||||
|
state: present
|
||||||
|
update_cache: true
|
||||||
|
when: docker_installed.rc != 0
|
||||||
|
|
||||||
|
- name: Start and enable Docker
|
||||||
|
ansible.builtin.systemd:
|
||||||
|
name: docker
|
||||||
|
state: started
|
||||||
|
enabled: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# NetBird Directory Structure
|
||||||
|
# =========================================================================
|
||||||
|
- name: Create NetBird directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ netbird_base_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Deploy Configuration Files
|
||||||
|
# =========================================================================
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/docker-compose.yml.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
# Caddyfile is NOT deployed here - shared Caddy handles reverse proxy
|
||||||
|
# See ../caddy/playbook.yml
|
||||||
|
|
||||||
|
- name: Deploy management.json
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/management.json.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/management.json"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy dashboard.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/dashboard.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/dashboard.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy relay.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/relay.env.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/relay.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy turnserver.conf
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: templates/turnserver.conf.j2
|
||||||
|
dest: "{{ netbird_base_dir }}/turnserver.conf"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Firewall (UFW)
|
||||||
|
# =========================================================================
|
||||||
|
- name: Install UFW
|
||||||
|
ansible.builtin.apt:
|
||||||
|
name: ufw
|
||||||
|
state: present
|
||||||
|
|
||||||
|
- name: Allow SSH
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "22"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTP (ACME challenge)
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "80"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow HTTPS
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "443"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Allow TURN UDP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: udp
|
||||||
|
|
||||||
|
- name: Allow TURN TCP
|
||||||
|
community.general.ufw:
|
||||||
|
rule: allow
|
||||||
|
port: "3478"
|
||||||
|
proto: tcp
|
||||||
|
|
||||||
|
- name: Enable UFW
|
||||||
|
community.general.ufw:
|
||||||
|
state: enabled
|
||||||
|
policy: deny
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Start Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Pull Docker images
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose pull
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Start NetBird services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Wait for Services
|
||||||
|
# =========================================================================
|
||||||
|
- name: Wait for management container to be running
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose ps management --format json
|
||||||
|
chdir: "{{ netbird_base_dir }}"
|
||||||
|
register: management_container
|
||||||
|
until: "'running' in management_container.stdout"
|
||||||
|
retries: 12
|
||||||
|
delay: 5
|
||||||
|
changed_when: false
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Display Summary
|
||||||
|
# =========================================================================
|
||||||
|
- name: Display deployment status
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================
|
||||||
|
NetBird v1.6 Containers Deployed!
|
||||||
|
============================================
|
||||||
|
|
||||||
|
Containers: dashboard, signal, relay, management, coturn
|
||||||
|
|
||||||
|
============================================
|
||||||
|
NEXT STEPS:
|
||||||
|
============================================
|
||||||
|
|
||||||
|
1. Deploy shared Caddy:
|
||||||
|
cd ../caddy && ansible-playbook -i poc-inventory.yml playbook.yml
|
||||||
|
|
||||||
|
2. Then access https://{{ netbird_domain }}
|
||||||
|
|
||||||
|
============================================
|
||||||
|
|
||||||
|
View logs:
|
||||||
|
ssh root@{{ ansible_host }} "cd {{ netbird_base_dir }} && docker compose logs -f"
|
||||||
|
============================================
|
||||||
8
ansible/netbird/poc-inventory.yml
Normal file
@@ -0,0 +1,8 @@
---
all:
  children:
    netbird_servers:
      hosts:
        netbird-poc:
          ansible_host: observability-poc.networkmonitor.cc
          ansible_user: root
9
ansible/netbird/prod-inventory.yml
Normal file
@@ -0,0 +1,9 @@
---
all:
  children:
    netbird_servers:
      hosts:
        netbird-vps:
          ansible_host: achilles-rnd.cc
          ansible_user: root
          ansible_python_interpreter: /usr/bin/python3
197
ansible/netbird/setup-bootstrap.yml
Normal file
@@ -0,0 +1,197 @@
---
# =============================================================================
# NetBird v1.6 - Instance Bootstrap
# =============================================================================
# Run ONCE on fresh deployment to:
# 1. Check if instance needs setup
# 2. Create initial admin user via API
# 3. Display instructions for manual PAT creation
#
# NOTE: The embedded IdP (Dex) does not support password grants,
# so PATs cannot be created programmatically without dashboard access.
# After this playbook, you MUST manually:
# 1. Login to dashboard with admin credentials
# 2. Create a service user
# 3. Generate PAT for service user
# 4. Store PAT in vault.yml as vault_netbird_service_pat
#
# Run:
#   ansible-playbook -i inventory.yml setup-bootstrap.yml --ask-vault-pass
# =============================================================================

- name: Bootstrap NetBird Instance
  hosts: netbird_servers
  become: false
  gather_facts: true
  vars_files:
    - group_vars/netbird_servers.yml
    - group_vars/vault.yml
  vars:
    # For SSL-IP mode, use server IP; for domain mode, use netbird_domain
    netbird_api_host: "{{ hostvars[inventory_hostname].ansible_host | default(netbird_domain) }}"
    netbird_api_url: "https://{{ netbird_api_host }}/api"

  tasks:
    # =========================================================================
    # Check Instance Status
    # =========================================================================
    - name: Check instance status
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/instance"
        method: GET
        validate_certs: false
        status_code: [200, 404]
      register: instance_status
      delegate_to: localhost
      ignore_errors: true

    - name: Debug instance status
      ansible.builtin.debug:
        msg: "Instance API response: status={{ instance_status.status | default('N/A') }}, json={{ instance_status.json | default('N/A') }}"

    - name: Determine setup status
      ansible.builtin.set_fact:
        setup_required: >-
          {{
            instance_status.status != 200 or
            (instance_status.json.setup_required | default(false))
          }}
        instance_check_failed: "{{ instance_status.status != 200 }}"

    - name: Check PAT status
      ansible.builtin.set_fact:
        pat_configured: "{{ vault_netbird_service_pat is defined and vault_netbird_service_pat | length > 0 }}"

    - name: Display status - already configured
      ansible.builtin.debug:
        msg: |
          ============================================
          Instance Already Configured
          ============================================
          {% if pat_configured %}
          Service PAT: ✓ Configured

          Ready to provision users:
            ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass
            ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass
          {% else %}
          Service PAT: ✗ Not configured

          NEXT STEPS:
          1. Login to dashboard: https://{{ netbird_api_host }}
             (Accept self-signed certificate warning)
          2. Create service user: Team → Service Users → Create
          3. Generate PAT: Select user → Create Token
          4. Store in vault: ansible-vault edit group_vars/vault.yml
             Set: vault_netbird_service_pat: "<your-token>"
          5. Then run:
             ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass
             ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass
          {% endif %}
          ============================================
      when: not setup_required

    - name: End play - instance already configured
      ansible.builtin.meta: end_play
      when: not setup_required

    # =========================================================================
    # Bootstrap Admin User
    # =========================================================================
    - name: Attempt to create initial admin user
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/instance/setup"
        method: POST
        headers:
          Content-Type: "application/json"
        body_format: json
        body:
          email: "{{ netbird_admin_user.email }}"
          password: "{{ netbird_admin_user.password }}"
          name: "{{ netbird_admin_user.name }}"
        validate_certs: false
        status_code: [200, 201, 404]
      register: bootstrap_result
      delegate_to: localhost

    - name: Handle 404 - API endpoint not available
      ansible.builtin.debug:
        msg: |
          ============================================
          Bootstrap API Not Available (404)
          ============================================
          The /api/instance/setup endpoint returned 404.
          This means either:
          - Admin already exists, OR
          - This endpoint isn't exposed in your NetBird version

          Create admin manually in the dashboard:
          1. Go to: https://{{ netbird_api_host }}
          2. Create admin account:
             Email: {{ netbird_admin_user.email }}
             Password: {{ netbird_admin_user.password }}

          Then create service user + PAT:
          3. Go to Team → Service Users → Create
          4. Generate PAT: Select user → Create Token
          5. Store in vault: ansible-vault edit group_vars/vault.yml
             Set: vault_netbird_service_pat: "<your-token>"

          Then provision users:
            ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass
            ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass
          ============================================
      when: bootstrap_result.status == 404

    - name: End play if already bootstrapped
      ansible.builtin.meta: end_play
      when: bootstrap_result.status == 404

    - name: Display bootstrap result
      ansible.builtin.debug:
        msg: |
          ============================================
          NetBird Instance Bootstrapped!
          ============================================

          Admin User Created:
            Email: {{ netbird_admin_user.email }}
            Password: {{ netbird_admin_user.password }}
            User ID: {{ bootstrap_result.json.user_id | default('N/A') }}

          ============================================
          MANUAL STEPS REQUIRED:
          ============================================

          1. Login to dashboard:
             https://{{ netbird_api_host }}
             (Accept self-signed certificate warning)

          2. Login with:
             Email: {{ netbird_admin_user.email }}
             Password: {{ netbird_admin_user.password }}

          3. Create Service User:
             - Go to Team → Service Users
             - Click "Create Service User"
             - Name: "Automation Service"
             - Role: "Admin"
             - Save the user

          4. Create PAT for Service User:
             - Select the service user
             - Click "Create Token"
             - Name: "ansible-automation"
             - Expiration: 365 days
             - COPY THE TOKEN (shown only once!)

          5. Store PAT in vault:
             ansible-vault edit group_vars/vault.yml
             Set: vault_netbird_service_pat: "<your-token>"

          6. Run group and user provisioning:
             ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass
             ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass

          ============================================
      when: setup_required | default(true)
367
ansible/netbird/setup-groups.yml
Normal file
@@ -0,0 +1,367 @@
---
# =============================================================================
# NetBird v1.6 - Battalion Group & Access Control Setup
# =============================================================================
# Creates:
# - Groups for each battalion (pilots + ground stations)
# - Dev team group with full access
# - Setup keys with auto-group assignment
# - Access control policies (battalion isolation + dev access)
#
# Prerequisites:
# 1. NetBird deployed and running (playbook-ssl.yml or playbook-no-ssl.yml)
# 2. Admin user created via dashboard
# 3. PAT (Personal Access Token) generated from dashboard
#
# Run:
#   ansible-playbook -i inventory.yml setup-groups.yml --ask-vault-pass
# =============================================================================

- name: Configure NetBird Battalion Access Control
  hosts: netbird_servers
  become: false
  gather_facts: false
  vars_files:
    - group_vars/netbird_servers.yml
    - group_vars/vault.yml
  vars:
    # For SSL-IP mode, use server IP; for domain mode, use netbird_domain
    netbird_api_host: "{{ hostvars[inventory_hostname].ansible_host | default(netbird_domain) }}"
    netbird_api_url: "https://{{ netbird_api_host }}/api"
    # Use PAT from vault, or allow override via command line
    netbird_pat: "{{ vault_netbird_service_pat }}"

  pre_tasks:
    - name: Validate PAT is provided
      ansible.builtin.assert:
        that:
          - netbird_pat is defined
          - netbird_pat | length > 0
        fail_msg: |
          Service PAT not configured in vault.yml!
          1. Create service user + PAT in dashboard
          2. Store in vault: ansible-vault edit group_vars/vault.yml
             Set: vault_netbird_service_pat: "<your-token>"

  tasks:
    # =========================================================================
    # Get Existing Groups (to avoid duplicates)
    # =========================================================================
    - name: Get existing groups
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: GET
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: existing_groups
      delegate_to: localhost

    - name: Extract existing group names
      ansible.builtin.set_fact:
        existing_group_names: "{{ existing_groups.json | map(attribute='name') | list }}"

    # =========================================================================
    # Create Battalion Groups
    # =========================================================================
    - name: Create battalion pilot groups
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ item.name }}-pilots"
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ battalions }}"
      when: "item.name + '-pilots' not in existing_group_names"
      register: pilot_groups_created
      delegate_to: localhost

    - name: Create battalion ground station groups
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ item.name }}-ground-stations"
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ battalions }}"
      when: "item.name + '-ground-stations' not in existing_group_names"
      register: gs_groups_created
      delegate_to: localhost

    - name: Create dev team group
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ dev_team_group }}"
        validate_certs: false
        status_code: [200, 201]
      when: "dev_team_group not in existing_group_names"
      delegate_to: localhost

    # =========================================================================
    # Re-fetch Groups to Get IDs
    # =========================================================================
    - name: Get all groups with IDs
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: GET
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: all_groups
      delegate_to: localhost

    - name: Build group ID mapping
      ansible.builtin.set_fact:
        group_id_map: "{{ group_id_map | default({}) | combine({item.name: item.id}) }}"
      loop: "{{ all_groups.json }}"

    - name: Get All group ID
      ansible.builtin.set_fact:
        all_group_id: "{{ (all_groups.json | selectattr('name', 'equalto', 'All') | first).id }}"

    # =========================================================================
    # Create Setup Keys
    # =========================================================================
    - name: Get existing setup keys
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/setup-keys"
        method: GET
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: existing_keys
      delegate_to: localhost

    - name: Extract existing setup key names
      ansible.builtin.set_fact:
        existing_key_names: "{{ existing_keys.json | map(attribute='name') | list }}"

    - name: Create setup keys for battalion pilots
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/setup-keys"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ item.name }}-pilot-key"
          type: "reusable"
          expires_in: 31536000  # 1 year in seconds
          revoked: false
          auto_groups:
            - "{{ group_id_map[item.name + '-pilots'] }}"
          usage_limit: 0  # unlimited
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ battalions }}"
      when: "item.name + '-pilot-key' not in existing_key_names"
      register: pilot_keys
      delegate_to: localhost

    - name: Create setup keys for battalion ground stations
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/setup-keys"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ item.name }}-gs-key"
          type: "reusable"
          expires_in: 31536000
          revoked: false
          auto_groups:
            - "{{ group_id_map[item.name + '-ground-stations'] }}"
          usage_limit: 0
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ battalions }}"
      when: "item.name + '-gs-key' not in existing_key_names"
      register: gs_keys
      delegate_to: localhost

    - name: Create setup key for dev team
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/setup-keys"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "dev-team-key"
          type: "reusable"
          expires_in: 31536000
          revoked: false
          auto_groups:
            - "{{ group_id_map[dev_team_group] }}"
          usage_limit: 0
        validate_certs: false
        status_code: [200, 201]
      when: "'dev-team-key' not in existing_key_names"
      register: dev_key
      delegate_to: localhost

    # =========================================================================
    # Create Access Control Policies
    # =========================================================================
    - name: Get existing policies
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/policies"
        method: GET
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: existing_policies
      delegate_to: localhost

    - name: Extract existing policy names
      ansible.builtin.set_fact:
        existing_policy_names: "{{ existing_policies.json | map(attribute='name') | list }}"

    - name: Create battalion internal access policies
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/policies"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ item.display_name }} - Internal Access"
          description: "Allow {{ item.display_name }} pilots to access their ground stations"
          enabled: true
          rules:
            - name: "{{ item.name }}-pilot-to-gs"
              description: "Pilots can access ground stations"
              enabled: true
              sources:
                - "{{ group_id_map[item.name + '-pilots'] }}"
              destinations:
                - "{{ group_id_map[item.name + '-ground-stations'] }}"
              bidirectional: true
              protocol: "all"
              action: "accept"
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ battalions }}"
      when: "item.display_name + ' - Internal Access' not in existing_policy_names"
      delegate_to: localhost

    - name: Create dev team full access policy
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/policies"
        method: POST
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "Dev Team - Full Access"
          description: "Dev team can access all peers for troubleshooting"
          enabled: true
          rules:
            - name: "dev-full-access"
              description: "Dev team has access to all peers"
              enabled: true
              sources:
                - "{{ group_id_map[dev_team_group] }}"
              destinations:
                - "{{ all_group_id }}"
              bidirectional: true
              protocol: "all"
              action: "accept"
        validate_certs: false
        status_code: [200, 201]
      when: "'Dev Team - Full Access' not in existing_policy_names"
      delegate_to: localhost

    # =========================================================================
    # Fetch and Display Setup Keys
    # =========================================================================
    - name: Get all setup keys
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/setup-keys"
        method: GET
        headers:
          Authorization: "Token {{ netbird_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: final_keys
      delegate_to: localhost

    - name: Display configuration summary
      ansible.builtin.debug:
        msg: |
          ============================================
          Battalion Access Control Configured!
          ============================================

          Groups Created:
          {% for bat in battalions %}
          - {{ bat.name }}-pilots
          - {{ bat.name }}-ground-stations
          {% endfor %}
          - {{ dev_team_group }}

          Access Control Matrix:
          {% for bat in battalions %}
          [{{ bat.display_name }}]
            {{ bat.name }}-pilots <--> {{ bat.name }}-ground-stations
          {% endfor %}
          [Dev Team]
            {{ dev_team_group }} --> All (full access)

          Setup Keys (use these to register peers):
          {% for key in final_keys.json %}
          {% if key.name.endswith('-pilot-key') or key.name.endswith('-gs-key') or key.name == 'dev-team-key' %}
            {{ key.name }}: {{ key.key }}
          {% endif %}
          {% endfor %}

          Peer Registration Commands:
          {% for bat in battalions %}
          # {{ bat.display_name }} Pilot:
          netbird up --management-url {{ netbird_protocol }}://{{ netbird_domain }} \
            --setup-key <{{ bat.name }}-pilot-key> \
            --hostname pilot-{{ bat.name }}-<callsign>

          # {{ bat.display_name }} Ground Station:
          netbird up --management-url {{ netbird_protocol }}://{{ netbird_domain }} \
            --setup-key <{{ bat.name }}-gs-key> \
            --hostname gs-{{ bat.name }}-<location>

          {% endfor %}
          # Dev Team:
          netbird up --management-url {{ netbird_protocol }}://{{ netbird_domain }} \
            --setup-key <dev-team-key> \
            --hostname dev-<name>

          ============================================
281
ansible/netbird/setup-users.yml
Normal file
@@ -0,0 +1,281 @@
---
# =============================================================================
# NetBird v1.6 - User Provisioning
# =============================================================================
# Creates users with embedded IdP and stores generated passwords.
# Requires: service user PAT in vault.yml (see setup-bootstrap.yml)
#
# Run:
#   ansible-playbook -i inventory.yml setup-users.yml --ask-vault-pass
#
# Optional variables:
#   -e "dry_run=true"    Preview changes without creating users
# =============================================================================

- name: Provision NetBird Users
  hosts: netbird_servers
  become: false
  gather_facts: true
  vars_files:
    - group_vars/netbird_servers.yml
    - group_vars/vault.yml
  vars:
    # For SSL-IP mode, use server IP; for domain mode, use netbird_domain
    netbird_api_host: "{{ hostvars[inventory_hostname].ansible_host | default(netbird_domain) }}"
    netbird_api_url: "https://{{ netbird_api_host }}/api"
    dry_run: false

  pre_tasks:
    # =========================================================================
    # Validate Prerequisites
    # =========================================================================
    - name: Validate service PAT is provided
      ansible.builtin.assert:
        that:
          - vault_netbird_service_pat is defined
          - vault_netbird_service_pat | length > 0
        fail_msg: |
          Service PAT not configured!
          Run setup-bootstrap.yml first, then add PAT to vault.yml

    - name: Verify API connectivity with PAT
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/users"
        method: GET
        headers:
          Authorization: "Token {{ vault_netbird_service_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: api_check
      delegate_to: localhost

    - name: Display connection status
      ansible.builtin.debug:
        msg: "API connection successful. Found {{ api_check.json | length }} existing users."

  tasks:
    # =========================================================================
    # Fetch Existing State
    # =========================================================================
    - name: Get existing users
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/users"
        method: GET
        headers:
          Authorization: "Token {{ vault_netbird_service_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: existing_users_response
      delegate_to: localhost

    - name: Extract existing user emails
      ansible.builtin.set_fact:
        existing_user_emails: "{{ existing_users_response.json | map(attribute='email') | list }}"

    - name: Get existing groups
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/groups"
        method: GET
        headers:
          Authorization: "Token {{ vault_netbird_service_pat }}"
          Accept: "application/json"
        validate_certs: false
        status_code: [200]
      register: existing_groups_response
      delegate_to: localhost

    - name: Build group ID mapping
      ansible.builtin.set_fact:
        group_id_map: "{{ group_id_map | default({}) | combine({item.name: item.id}) }}"
      loop: "{{ existing_groups_response.json }}"

    # =========================================================================
    # Resolve Auto-Groups for Users
    # =========================================================================
    - name: Resolve auto-groups for users
      ansible.builtin.set_fact:
        resolved_users: "{{ resolved_users | default([]) + [user_with_groups] }}"
      vars:
        battalion_group: >-
          {%- if item.battalion is defined and item.battalion -%}
          {%- if item.type | default('pilot') == 'pilot' -%}
          {{ item.battalion }}-pilots
          {%- else -%}
          {{ item.battalion }}-ground-stations
          {%- endif -%}
          {%- endif -%}
        final_auto_groups: >-
          {{ item.auto_groups | default([]) + ([battalion_group | trim] if battalion_group | trim else []) }}
        resolved_group_ids: >-
          {{ final_auto_groups | map('extract', group_id_map) | select('defined') | list }}
        user_with_groups:
          email: "{{ item.email }}"
          name: "{{ item.name }}"
          role: "{{ item.role | default('user') }}"
          auto_groups: "{{ resolved_group_ids }}"
          auto_group_names: "{{ final_auto_groups }}"
          battalion: "{{ item.battalion | default(none) }}"
          skip: "{{ item.email in existing_user_emails }}"
      loop: "{{ netbird_users | default([]) }}"

    # =========================================================================
    # Display Plan
    # =========================================================================
    - name: Count users to process
      ansible.builtin.set_fact:
        users_to_create: "{{ resolved_users | default([]) | rejectattr('skip') | list }}"
        users_to_skip: "{{ resolved_users | default([]) | selectattr('skip') | list }}"

    - name: Display provisioning plan
      ansible.builtin.debug:
        msg: |
          ============================================
          User Provisioning Plan
          ============================================
          Mode: {{ 'DRY RUN' if dry_run else 'EXECUTE' }}

          Users to CREATE ({{ users_to_create | length }}):
          {% for user in users_to_create %}
          - {{ user.email }}
            Name: {{ user.name }}
            Role: {{ user.role }}
            Groups: {{ user.auto_group_names | join(', ') or 'None' }}
          {% endfor %}
          {% if users_to_create | length == 0 %}
          (none)
          {% endif %}

          Users to SKIP - already exist ({{ users_to_skip | length }}):
          {% for user in users_to_skip %}
          - {{ user.email }}
          {% endfor %}
          {% if users_to_skip | length == 0 %}
          (none)
          {% endif %}
          ============================================

    - name: End play in dry run mode
      ansible.builtin.meta: end_play
      when: dry_run | bool

    - name: End play if no users to create
      ansible.builtin.meta: end_play
      when: users_to_create | length == 0

    # =========================================================================
    # Create Users
    # =========================================================================
    - name: Create credentials directory
      ansible.builtin.file:
        path: "{{ playbook_dir }}/files/credentials"
        state: directory
        mode: "0700"
      delegate_to: localhost

    - name: Create new users
      ansible.builtin.uri:
        url: "{{ netbird_api_url }}/users"
        method: POST
        headers:
          Authorization: "Token {{ vault_netbird_service_pat }}"
          Content-Type: "application/json"
          Accept: "application/json"
        body_format: json
        body:
          email: "{{ item.email }}"
          name: "{{ item.name }}"
          role: "{{ item.role }}"
          auto_groups: "{{ item.auto_groups }}"
          is_service_user: false
        validate_certs: false
        status_code: [200, 201]
      loop: "{{ users_to_create }}"
      register: created_users
      delegate_to: localhost

    # =========================================================================
    # Store Credentials
    # =========================================================================
    - name: Build credentials list
      ansible.builtin.set_fact:
        user_credentials: "{{ user_credentials | default([]) + [credential] }}"
      vars:
        matching_user: "{{ resolved_users | selectattr('email', 'equalto', item.json.email) | first }}"
        credential:
          email: "{{ item.json.email }}"
          name: "{{ item.json.name }}"
          password: "{{ item.json.password | default('N/A') }}"
          user_id: "{{ item.json.id }}"
          role: "{{ item.json.role }}"
          created_at: "{{ ansible_date_time.iso8601 }}"
          groups: "{{ matching_user.auto_group_names | default([]) }}"
      loop: "{{ created_users.results }}"
      when: item.json is defined

    - name: Save credentials to file
      ansible.builtin.copy:
        content: |
          ---
          # =============================================================================
          # NetBird User Credentials
          # =============================================================================
          # Generated: {{ ansible_date_time.iso8601 }}
          # Instance: {{ netbird_domain }}
          #
          # WARNING: Store securely! Passwords cannot be retrieved again.
          # =============================================================================

          users:
          {% for user in user_credentials %}
            - email: "{{ user.email }}"
              name: "{{ user.name }}"
              password: "{{ user.password }}"
              user_id: "{{ user.user_id }}"
              role: "{{ user.role }}"
              groups:
          {% for group in user.groups %}
                - "{{ group }}"
          {% endfor %}
          {% if user.groups | length == 0 %}
                []
          {% endif %}
          {% endfor %}
        dest: "{{ playbook_dir }}/files/credentials/users-{{ ansible_date_time.date }}.yml"
        mode: "0600"
      delegate_to: localhost
      when:
        - user_credentials is defined
        - user_credentials | length > 0

    # =========================================================================
    # Display Summary
    # =========================================================================
    - name: Display provisioning summary
      ansible.builtin.debug:
        msg: |
          ============================================
          User Provisioning Complete!
          ============================================

          Created Users ({{ user_credentials | default([]) | length }}):
          {% for user in user_credentials | default([]) %}
          {{ user.email }}
            Password: {{ user.password }}
            Role: {{ user.role }}
            Groups: {{ user.groups | join(', ') or 'None' }}
          {% endfor %}

          Credentials saved to:
            {{ playbook_dir }}/files/credentials/users-{{ ansible_date_time.date }}.yml

          IMPORTANT:
          1. Share passwords securely with users
          2. Encrypt or delete credentials file after distribution:
             ansible-vault encrypt files/credentials/users-{{ ansible_date_time.date }}.yml
          3. Users should change passwords on first login

          Login URL: https://{{ netbird_api_host }}
          ============================================
      when: user_credentials is defined
35
ansible/netbird/templates/Caddyfile-no-ssl.j2
Normal file
@@ -0,0 +1,35 @@
# =============================================================================
# NetBird v1.6 Caddyfile - No-SSL Mode (HTTP only, LAN access)
# =============================================================================
# WARNING: This configuration transmits data in plaintext.
# Only use on isolated/air-gapped networks.
{
    servers :80 {
        protocols h1 h2c
    }
    # Disable automatic HTTPS
    auto_https off
}

:80 {
    # Embedded IdP OAuth2 endpoints
    reverse_proxy /oauth2/* management:80
    reverse_proxy /.well-known/openid-configuration management:80
    reverse_proxy /.well-known/jwks.json management:80

    # NetBird Relay
    reverse_proxy /relay* relay:80

    # NetBird Signal (gRPC)
    reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000

    # NetBird Management API (gRPC)
    reverse_proxy /management.ManagementService/* h2c://management:80

    # NetBird Management REST API
    reverse_proxy /api/* management:80

    # NetBird Dashboard (catch-all)
    reverse_proxy /* dashboard:80
}
61
ansible/netbird/templates/Caddyfile-ssl-ip.j2
Normal file
@@ -0,0 +1,61 @@
# =============================================================================
# NetBird v1.6 Caddyfile - SSL Mode with Public IP (Self-Signed)
# =============================================================================
# Uses Caddy's internal CA to generate self-signed certificates for IP access.
# Note: Let's Encrypt IP certificates are supported, but Caddy's implementation
# is incomplete (issue #7399). Using self-signed certificates as a reliable fallback.
{
    servers :80,:443 {
        protocols h1 h2c h2
    }
    # Required for IP-based TLS - clients don't send SNI for IP addresses.
    # Docker networking makes Caddy see internal IPs, so we need default_sni.
    default_sni {{ netbird_domain }}
}

(security_headers) {
    header * {
        Strict-Transport-Security "max-age=3600; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        X-XSS-Protection "1; mode=block"
        -Server
        Referrer-Policy strict-origin-when-cross-origin
    }
}

:80 {
    # Redirect HTTP to HTTPS
    redir https://{host}{uri} permanent
}

# Bind to the IP address explicitly so Caddy knows which certificate to generate
https://{{ netbird_domain }} {
    # Use Caddy's internal CA for a self-signed certificate
    tls internal {
        protocols tls1.2 tls1.3
    }

    import security_headers

    # Embedded IdP OAuth2 endpoints
    reverse_proxy /oauth2/* management:80
    reverse_proxy /.well-known/openid-configuration management:80
    reverse_proxy /.well-known/jwks.json management:80

    # NetBird Relay
    reverse_proxy /relay* relay:80

    # NetBird Signal (gRPC)
    reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000

    # NetBird Management API (gRPC)
    reverse_proxy /management.ManagementService/* h2c://management:80

    # NetBird Management REST API
    reverse_proxy /api/* management:80

    # NetBird Dashboard (catch-all)
    reverse_proxy /* dashboard:80
}
45
ansible/netbird/templates/Caddyfile-ssl.j2
Normal file
@@ -0,0 +1,45 @@
# =============================================================================
# NetBird v1.6 Caddyfile - SSL Mode (Let's Encrypt)
# =============================================================================
{
    servers :80,:443 {
        protocols h1 h2c h2 h3
    }
    email {{ letsencrypt_email }}
}

(security_headers) {
    header * {
        Strict-Transport-Security "max-age=3600; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        X-XSS-Protection "1; mode=block"
        -Server
        Referrer-Policy strict-origin-when-cross-origin
    }
}

{{ netbird_domain }} {
    import security_headers

    # Embedded IdP OAuth2 endpoints
    reverse_proxy /oauth2/* management:80
    reverse_proxy /.well-known/openid-configuration management:80
    reverse_proxy /.well-known/jwks.json management:80

    # NetBird Relay
    reverse_proxy /relay* relay:80

    # NetBird Signal (gRPC)
    reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000

    # NetBird Management API (gRPC)
    reverse_proxy /management.ManagementService/* h2c://management:80

    # NetBird Management REST API
    reverse_proxy /api/* management:80

    # NetBird Dashboard (catch-all)
    reverse_proxy /* dashboard:80
}
22
ansible/netbird/templates/dashboard.env.j2
Normal file
@@ -0,0 +1,22 @@
# =============================================================================
# NetBird Dashboard Environment (v1.6 - Embedded IdP)
# =============================================================================
# Uses NetBird's embedded IdP - no external auth required

# Endpoints
NETBIRD_MGMT_API_ENDPOINT={{ netbird_protocol }}://{{ netbird_domain }}
NETBIRD_MGMT_GRPC_API_ENDPOINT={{ netbird_protocol }}://{{ netbird_domain }}

# OIDC - using embedded IdP
AUTH_AUDIENCE=netbird-dashboard
AUTH_CLIENT_ID=netbird-dashboard
AUTH_CLIENT_SECRET=
AUTH_AUTHORITY={{ netbird_protocol }}://{{ netbird_domain }}/oauth2
USE_AUTH0=false
AUTH_SUPPORTED_SCOPES=openid profile email groups
AUTH_REDIRECT_URI=/nb-auth
AUTH_SILENT_REDIRECT_URI=/nb-silent-auth

# SSL (handled by Caddy)
NGINX_SSL_PORT=443
LETSENCRYPT_DOMAIN=none
97
ansible/netbird/templates/docker-compose.yml.j2
Normal file
@@ -0,0 +1,97 @@
# =============================================================================
# NetBird v1.6 - Lightweight Deployment (No Authentik, No Caddy)
# =============================================================================
# Services: Dashboard, Signal, Relay, Management, Coturn
# Caddy is deployed separately as shared reverse proxy.

services:
  # ---------------------------------------------------------------------------
  # NetBird Dashboard
  # ---------------------------------------------------------------------------
  dashboard:
    image: netbirdio/dashboard:{{ dashboard_version }}
    restart: unless-stopped
    networks: [netbird]
    env_file:
      - {{ netbird_base_dir }}/dashboard.env
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # NetBird Signal Server
  # ---------------------------------------------------------------------------
  signal:
    image: netbirdio/signal:{{ netbird_version }}
    restart: unless-stopped
    networks: [netbird]
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # NetBird Relay Server
  # ---------------------------------------------------------------------------
  relay:
    image: netbirdio/relay:{{ netbird_version }}
    restart: unless-stopped
    networks: [netbird]
    env_file:
      - {{ netbird_base_dir }}/relay.env
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # NetBird Management Server
  # ---------------------------------------------------------------------------
  management:
    image: netbirdio/management:{{ netbird_version }}
    restart: unless-stopped
    networks: [netbird]
    volumes:
      - netbird_management:/var/lib/netbird
      - {{ netbird_base_dir }}/management.json:/etc/netbird/management.json
    command: [
      "--port", "80",
      "--log-file", "console",
      "--log-level", "info",
      "--disable-anonymous-metrics=false",
      "--single-account-mode-domain={{ single_account_domain | default(netbird_domain) }}",
      "--dns-domain={{ netbird_dns_domain }}"
    ]
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # Coturn TURN/STUN Server
  # ---------------------------------------------------------------------------
  coturn:
    image: coturn/coturn:{{ coturn_version }}
    restart: unless-stopped
    volumes:
      - {{ netbird_base_dir }}/turnserver.conf:/etc/coturn/turnserver.conf:ro
    network_mode: host
    command:
      - "-c"
      - "/etc/coturn/turnserver.conf"
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
        max-file: "2"

volumes:
  netbird_management:

networks:
  netbird:
49
ansible/netbird/templates/management.json.j2
Normal file
@@ -0,0 +1,49 @@
{
  "Stuns": [
    {
      "Proto": "udp",
      "URI": "stun:{{ netbird_domain }}:3478"
    }
  ],
  "TURNConfig": {
    "Turns": [
      {
        "Proto": "udp",
        "URI": "turn:{{ netbird_domain }}:3478",
        "Username": "{{ turn_user }}",
        "Password": "{{ turn_password }}"
      }
    ],
    "TimeBasedCredentials": false
  },
  "Relay": {
    "Addresses": [
      "{{ relay_protocol }}://{{ netbird_domain }}:{{ relay_port }}/relay"
    ],
    "CredentialsTTL": "168h",
    "Secret": "{{ relay_secret }}"
  },
  "Signal": {
    "Proto": "{{ netbird_protocol }}",
    "URI": "{{ netbird_domain }}:{{ signal_port }}"
  },
  "Datadir": "/var/lib/netbird",
  "DataStoreEncryptionKey": "{{ encryption_key }}",
  "StoreConfig": {
    "Engine": "sqlite"
  },
  "HttpConfig": {
    "Address": "0.0.0.0:80"
  },
  "IdpManagerConfig": {
    "ManagerType": "none"
  },
  "EmbeddedIdP": {
    "Enabled": true,
    "Issuer": "{{ netbird_protocol }}://{{ netbird_domain }}/oauth2",
    "DashboardRedirectURIs": [
      "{{ netbird_protocol }}://{{ netbird_domain }}/nb-auth",
      "{{ netbird_protocol }}://{{ netbird_domain }}/nb-silent-auth"
    ]
  }
}
8
ansible/netbird/templates/relay.env.j2
Normal file
@@ -0,0 +1,8 @@
# =============================================================================
# NetBird Relay Environment
# =============================================================================

NB_LOG_LEVEL=info
NB_LISTEN_ADDRESS=:80
NB_EXPOSED_ADDRESS={{ relay_protocol }}://{{ netbird_domain }}:{{ relay_port }}/relay
NB_AUTH_SECRET={{ relay_secret }}
15
ansible/netbird/templates/turnserver.conf.j2
Normal file
@@ -0,0 +1,15 @@
# =============================================================================
# Coturn TURN/STUN Server Configuration
# =============================================================================

listening-port=3478
external-ip={{ ansible_default_ipv4.address }}
relay-ip={{ ansible_default_ipv4.address }}
fingerprint
lt-cred-mech
user={{ turn_user }}:{{ turn_password }}
realm={{ netbird_domain }}
log-file=stdout
no-tls
no-dtls
no-cli
51
terraform/.gitea/workflows/terraform.yml
Normal file
@@ -0,0 +1,51 @@
name: Terraform

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  TF_VAR_netbird_token: ${{ secrets.NETBIRD_TOKEN }}

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.7.0

      - name: Terraform Init
        run: terraform init

      - name: Terraform Format Check
        run: terraform fmt -check
        continue-on-error: true

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve

      - name: Commit state changes
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          git config user.name "Terraform CI"
          git config user.email "ci@localhost"
          git add terraform.tfstate terraform.tfstate.backup 2>/dev/null || true
          if ! git diff --staged --quiet; then
            git commit -m "chore: update terraform state [skip ci]"
            git push
          fi
12
terraform/.gitignore
vendored
Normal file
@@ -0,0 +1,12 @@
# Terraform
.terraform/
*.tfplan
crash.log
crash.*.log

# Secrets (tfvars contains the PAT token)
terraform.tfvars
*.auto.tfvars

# State files are committed for this POC (single operator)
# For production, use remote backend instead
25
terraform/.terraform.lock.hcl
generated
Normal file
@@ -0,0 +1,25 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/netbirdio/netbird" {
  version     = "0.0.8"
  constraints = "~> 0.0.8"
  hashes = [
    "h1:2EOFY+2GNGCZJKfzUxIRtC9nSzDPpaqUE2urhgaf/Ys=",
    "zh:0871acec1ec4d73453f68210b2bdc7062e987f610c1fb98b9db793744df86f12",
    "zh:16ab23609fa36e8fde7ffefbd2b3213abf904f5128981718eb9336895dbd1732",
    "zh:18fac816d51fcf160825c2fd138aba898bc14720bd5620b916c8f969f210753a",
    "zh:3c64aa00ed10c15834af8a59187268cc9cd263862c4a4f3478f8db0f8478b4f0",
    "zh:503430b1fde77d01e5b8a0843f75212d414f2c5613bf3788118612ea97260f33",
    "zh:890df766e9b839623b1f0437355032a3c006226a6c200cd911e15ee1a9014e9f",
    "zh:8be272b0b1600cfc7849d66921b403fef0f35b3236eba7886766724ad0693220",
    "zh:99100210da2c127ad0a5440983ad3e0c3211ec2ee23d61bff334ae5e366a97fc",
    "zh:9dbc18a8d15c8af494e8534eb080fea16265858733250a6f6fcaac86ab1267a7",
    "zh:c8300eb51406998d72c6dced64df83739c5cb59773575c3a75badcbf6122cd34",
    "zh:dcef1377ff20c18353c88e350f353418ba3faca0d1d07f3cb828146c09ca3e5f",
    "zh:e0366b9ef2929fc4d7ab08e888196e2b005eb29b151d25a6644be001f62af5b9",
    "zh:e8954bf072bafaf61dd3fad76aab927109e8538c1dcab93d8415be13a764e394",
    "zh:edbd0560a215bec8366d239640fbd6516ff39d211f611fda1cd456f759bc1f4b",
    "zh:f349d5e7ddc1f6e7d9f97bbfe7ade9cb7a3369b8d54d4389d5cb863f7a7adcb8",
  ]
}
86
terraform/README.md
Normal file
86
terraform/README.md
Normal file
@@ -0,0 +1,86 @@
|
|||||||
|
# NetBird IaC
|
||||||
|
|
||||||
|
Terraform configuration for managing NetBird VPN resources via GitOps.
|
||||||
|
|
||||||
|
## Resources Managed
|
||||||
|
|
||||||
|
- **Groups:** ground-stations, pilots, operators, fusion-servers
|
||||||
|
- **Policies:** Access control between groups
|
||||||
|
- **Setup Keys:** For peer enrollment
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
### Making Changes
|
||||||
|
|
||||||
|
1. Edit the relevant `.tf` file
|
||||||
|
2. Create a PR
|
||||||
|
3. CI runs `terraform plan` - review the changes
|
||||||
|
4. Merge PR
|
||||||
|
5. CI runs `terraform apply` - changes applied
|
||||||
|
|
||||||
|
### Adding a New Group
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# groups.tf
|
||||||
|
resource "netbird_group" "new_team" {
|
||||||
|
name = "new-team"
|
||||||
|
}
|
||||||
|
```
### Adding a Setup Key (Per-Ticket)

```hcl
# setup_keys.tf
resource "netbird_setup_key" "ticket_1234_pilot" {
  name        = "ticket-1234-pilot-ivanov"
  type        = "one-off"
  auto_groups = [netbird_group.pilots.id]
  usage_limit = 1
  ephemeral   = false
}

# outputs.tf
output "ticket_1234_pilot_key" {
  value     = netbird_setup_key.ticket_1234_pilot.key
  sensitive = true
}
```

### Retrieving Setup Keys

After apply, retrieve keys locally:

```bash
terraform output -raw gs_setup_key
terraform output -raw pilot_setup_key
```

## Local Development

```bash
# Create tfvars (copy from example)
cp terraform.tfvars.example terraform.tfvars
# Edit with your NetBird PAT

# Init and plan
terraform init
terraform plan

# Apply (be careful!)
terraform apply
```

## CI/CD

Configured in `.gitea/workflows/terraform.yml`:
- PR: `terraform plan`
- Merge to main: `terraform apply`

Required secrets in Gitea:
- `NETBIRD_TOKEN`: NetBird PAT

## State Management

State is committed to git (`terraform.tfstate`). This is acceptable for single-operator scenarios but not recommended for production with multiple operators.

For production, configure a remote backend (S3, Terraform Cloud, etc.).
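A minimal sketch of what that could look like with an S3-compatible backend, added to the `terraform {}` block in `main.tf` (bucket, key, and region values are placeholders, not existing infrastructure):

```hcl
terraform {
  backend "s3" {
    bucket = "netbird-terraform-state"   # placeholder bucket name
    key    = "netbird/terraform.tfstate" # object path for the state file
    region = "eu-central-1"              # placeholder region
  }
}
```

After adding the backend, `terraform init` should prompt to migrate the existing local state.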
16
terraform/groups.tf
Normal file
@@ -0,0 +1,16 @@
# Groups matching Achilles network structure
resource "netbird_group" "ground_stations" {
  name = "ground-stations"
}

resource "netbird_group" "pilots" {
  name = "pilots"
}

resource "netbird_group" "operators" {
  name = "operators"
}

resource "netbird_group" "fusion_servers" {
  name = "fusion-servers"
}
13
terraform/main.tf
Normal file
@@ -0,0 +1,13 @@
terraform {
  required_providers {
    netbird = {
      source  = "netbirdio/netbird"
      version = "~> 0.0.8"
    }
  }
}

provider "netbird" {
  management_url = var.netbird_management_url
  token          = var.netbird_token
}
18
terraform/outputs.tf
Normal file
@@ -0,0 +1,18 @@
output "gs_setup_key" {
  value     = netbird_setup_key.gs_onboarding.key
  sensitive = true
}

output "pilot_setup_key" {
  value     = netbird_setup_key.pilot_onboarding.key
  sensitive = true
}

output "group_ids" {
  value = {
    ground_stations = netbird_group.ground_stations.id
    pilots          = netbird_group.pilots.id
    operators       = netbird_group.operators.id
    fusion_servers  = netbird_group.fusion_servers.id
  }
}
52
terraform/policies.tf
Normal file
@@ -0,0 +1,52 @@
# Access policies for Achilles network
resource "netbird_policy" "pilot_to_gs" {
  name        = "pilot-to-ground-station"
  description = "Allow pilots to connect to ground stations"
  enabled     = true

  rule {
    name          = "pilot-gs-access"
    enabled       = true
    sources       = [netbird_group.pilots.id]
    destinations  = [netbird_group.ground_stations.id]
    bidirectional = true
    protocol      = "all"
    action        = "accept"
  }
}

resource "netbird_policy" "operator_full_access" {
  name        = "operator-full-access"
  description = "Operators can access all network resources"
  enabled     = true

  rule {
    name         = "operator-all"
    enabled      = true
    sources      = [netbird_group.operators.id]
    destinations = [
      netbird_group.ground_stations.id,
      netbird_group.pilots.id,
      netbird_group.fusion_servers.id
    ]
    bidirectional = true
    protocol      = "all"
    action        = "accept"
  }
}

resource "netbird_policy" "fusion_to_gs" {
  name        = "fusion-to-ground-station"
  description = "Fusion servers coordinate with ground stations"
  enabled     = true

  rule {
    name          = "fusion-gs"
    enabled       = true
    sources       = [netbird_group.fusion_servers.id]
    destinations  = [netbird_group.ground_stations.id]
    bidirectional = true
    protocol      = "all"
    action        = "accept"
  }
}
17
terraform/setup_keys.tf
Normal file
@@ -0,0 +1,17 @@
# Setup keys for peer onboarding
resource "netbird_setup_key" "gs_onboarding" {
  name        = "ground-station-onboarding"
  type        = "reusable"
  auto_groups = [netbird_group.ground_stations.id]
  usage_limit = 0 # unlimited
  ephemeral   = false
}

# Comment to trigger CI
resource "netbird_setup_key" "pilot_onboarding" {
  name        = "pilot-onboarding"
  type        = "reusable"
  auto_groups = [netbird_group.pilots.id]
  usage_limit = 0
  ephemeral   = false
}
343
terraform/terraform.tfstate
Normal file
@@ -0,0 +1,343 @@
{
"version": 4,
"terraform_version": "1.14.4",
"serial": 17,
"lineage": "2e6257d6-c04c-6864-63e8-38721dda9040",
"outputs": {
"group_ids": {
"value": {
"fusion_servers": "d68tmmml93fs73c93ek0",
"ground_stations": "d68tmmml93fs73c93ejg",
"operators": "d68tmmml93fs73c93el0",
"pilots": "d68tmmml93fs73c93ekg"
},
"type": [
"object",
{
"fusion_servers": "string",
"ground_stations": "string",
"operators": "string",
"pilots": "string"
}
]
},
"gs_setup_key": {
"value": "A19E165B-A6EB-4494-AA1D-F85D3EA9D382",
"type": "string",
"sensitive": true
},
"pilot_setup_key": {
"value": "0267D2E9-E4A5-451C-A1BC-513F32A9FCD8",
"type": "string",
"sensitive": true
}
},
"resources": [
{
"mode": "managed",
"type": "netbird_group",
"name": "fusion_servers",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ek0",
"issued": "api",
"name": "fusion-servers",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "ground_stations",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ejg",
"issued": "api",
"name": "ground-stations",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "operators",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93el0",
"issued": "api",
"name": "operators",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "pilots",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ekg",
"issued": "api",
"name": "pilots",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "fusion_to_gs",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Fusion servers coordinate with ground stations",
"enabled": true,
"id": "d68tmmul93fs73c93en0",
"name": "fusion-to-ground-station",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": null,
"destination_resource": null,
"destinations": [
"d68tmmml93fs73c93ejg"
],
"enabled": true,
"id": "d68tmmul93fs73c93en0",
"name": "fusion-gs",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": [
"d68tmmml93fs73c93ek0"
]
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0,
"dependencies": [
"netbird_group.fusion_servers",
"netbird_group.ground_stations"
]
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "operator_full_access",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Operators can access all network resources",
"enabled": true,
"id": "d68tmmul93fs73c93eng",
"name": "operator-full-access",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": null,
"destination_resource": null,
"destinations": [
"d68tmmml93fs73c93ejg",
"d68tmmml93fs73c93ekg",
"d68tmmml93fs73c93ek0"
],
"enabled": true,
"id": "d68tmmul93fs73c93eng",
"name": "operator-all",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": [
"d68tmmml93fs73c93el0"
]
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0,
"dependencies": [
"netbird_group.fusion_servers",
"netbird_group.ground_stations",
"netbird_group.operators",
"netbird_group.pilots"
]
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "pilot_to_gs",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Allow pilots to connect to ground stations",
"enabled": true,
"id": "d68tmmul93fs73c93epg",
"name": "pilot-to-ground-station",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": null,
"destination_resource": null,
"destinations": [
"d68tmmml93fs73c93ejg"
],
"enabled": true,
"id": "d68tmmul93fs73c93epg",
"name": "pilot-gs-access",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": [
"d68tmmml93fs73c93ekg"
]
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0,
"dependencies": [
"netbird_group.ground_stations",
"netbird_group.pilots"
]
}
]
},
{
"mode": "managed",
"type": "netbird_setup_key",
"name": "gs_onboarding",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allow_extra_dns_labels": false,
"auto_groups": [
"d68tmmml93fs73c93ejg"
],
"ephemeral": false,
"expires": "0001-01-01T00:00:00Z",
"expiry_seconds": 0,
"id": "d68v8pml93fs73c93itg",
"key": "A19E165B-A6EB-4494-AA1D-F85D3EA9D382",
"last_used": "0001-01-01T00:00:00Z",
"name": "ground-station-onboarding",
"revoked": false,
"state": "valid",
"type": "reusable",
"updated_at": "2026-02-15T16:29:26Z",
"usage_limit": 0,
"used_times": 0,
"valid": true
},
"sensitive_attributes": [
[
{
"type": "get_attr",
"value": "key"
}
]
],
"identity_schema_version": 0,
"dependencies": [
"netbird_group.ground_stations"
]
}
]
},
{
"mode": "managed",
"type": "netbird_setup_key",
"name": "pilot_onboarding",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allow_extra_dns_labels": false,
"auto_groups": [
"d68tmmml93fs73c93ekg"
],
"ephemeral": false,
"expires": "0001-01-01T00:00:00Z",
"expiry_seconds": 0,
"id": "d68v8pml93fs73c93isg",
"key": "0267D2E9-E4A5-451C-A1BC-513F32A9FCD8",
"last_used": "0001-01-01T00:00:00Z",
"name": "pilot-onboarding",
"revoked": false,
"state": "valid",
"type": "reusable",
"updated_at": "2026-02-15T16:29:26Z",
"usage_limit": 0,
"used_times": 0,
"valid": true
},
"sensitive_attributes": [
[
{
"type": "get_attr",
"value": "key"
}
]
],
"identity_schema_version": 0,
"dependencies": [
"netbird_group.pilots"
]
}
]
}
],
"check_results": null
}
299
terraform/terraform.tfstate.backup
Normal file
@@ -0,0 +1,299 @@
{
"version": 4,
"terraform_version": "1.14.4",
"serial": 9,
"lineage": "2e6257d6-c04c-6864-63e8-38721dda9040",
"outputs": {
"group_ids": {
"value": {
"fusion_servers": "d68tmmml93fs73c93ek0",
"ground_stations": "d68tmmml93fs73c93ejg",
"operators": "d68tmmml93fs73c93el0",
"pilots": "d68tmmml93fs73c93ekg"
},
"type": [
"object",
{
"fusion_servers": "string",
"ground_stations": "string",
"operators": "string",
"pilots": "string"
}
]
}
},
"resources": [
{
"mode": "managed",
"type": "netbird_group",
"name": "fusion_servers",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ek0",
"issued": "api",
"name": "fusion-servers",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "ground_stations",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ejg",
"issued": "api",
"name": "ground-stations",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "operators",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93el0",
"issued": "api",
"name": "operators",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_group",
"name": "pilots",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "d68tmmml93fs73c93ekg",
"issued": "api",
"name": "pilots",
"peers": [],
"resources": []
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "fusion_to_gs",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Fusion servers coordinate with ground stations",
"enabled": true,
"id": "d68tmmul93fs73c93en0",
"name": "fusion-to-ground-station",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": "Fusion to GS access",
"destination_resource": null,
"destinations": null,
"enabled": true,
"id": "d68tmmul93fs73c93en0",
"name": "fusion-gs",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": null
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "operator_full_access",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Operators can access all network resources",
"enabled": true,
"id": "d68tmmul93fs73c93eng",
"name": "operator-full-access",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": "Full operator access",
"destination_resource": null,
"destinations": null,
"enabled": true,
"id": "d68tmmul93fs73c93eng",
"name": "operator-all",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": null
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_policy",
"name": "pilot_to_gs",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Allow pilots to connect to ground stations",
"enabled": true,
"id": "d68tmmul93fs73c93epg",
"name": "pilot-to-ground-station",
"rule": [
{
"action": "accept",
"bidirectional": true,
"description": "Pilots can access ground stations",
"destination_resource": null,
"destinations": null,
"enabled": true,
"id": "d68tmmul93fs73c93epg",
"name": "pilot-gs-access",
"port_ranges": null,
"ports": null,
"protocol": "all",
"source_resource": null,
"sources": null
}
],
"source_posture_checks": null
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_setup_key",
"name": "gs_onboarding",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allow_extra_dns_labels": false,
"auto_groups": [
"d68tmmml93fs73c93ejg"
],
"ephemeral": false,
"expires": "0001-01-01T00:00:00Z",
"expiry_seconds": null,
"id": "d68tmmul93fs73c93eo0",
"key": null,
"last_used": "0001-01-01T00:00:00Z",
"name": "ground-station-onboarding",
"revoked": false,
"state": "valid",
"type": "reusable",
"updated_at": "2026-02-15T14:42:35Z",
"usage_limit": 0,
"used_times": 0,
"valid": true
},
"sensitive_attributes": [
[
{
"type": "get_attr",
"value": "key"
}
]
],
"identity_schema_version": 0
}
]
},
{
"mode": "managed",
"type": "netbird_setup_key",
"name": "pilot_onboarding",
"provider": "provider[\"registry.terraform.io/netbirdio/netbird\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allow_extra_dns_labels": false,
"auto_groups": [
"d68tmmml93fs73c93ekg"
],
"ephemeral": false,
"expires": "2026-03-17T14:42:35Z",
"expiry_seconds": null,
"id": "d68tmmul93fs73c93eq0",
"key": null,
"last_used": "0001-01-01T00:00:00Z",
"name": "pilot-onboarding",
"revoked": false,
"state": "valid",
"type": "reusable",
"updated_at": "2026-02-15T14:42:35Z",
"usage_limit": 0,
"used_times": 0,
"valid": true
},
"sensitive_attributes": [
[
{
"type": "get_attr",
"value": "key"
}
]
],
"identity_schema_version": 0
}
]
}
],
"check_results": null
}
4
terraform/terraform.tfvars.example
Normal file
@@ -0,0 +1,4 @@
# Copy to terraform.tfvars and fill in your PAT
# Note: URL should NOT include /api suffix - provider adds it automatically
# netbird_management_url = "https://netbird-poc.networkmonitor.cc"
netbird_token = "nbp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
11
terraform/variables.tf
Normal file
@@ -0,0 +1,11 @@
variable "netbird_management_url" {
  type        = string
  description = "NetBird Management API URL"
  default     = "https://netbird-poc.networkmonitor.cc"
}

variable "netbird_token" {
  type        = string
  sensitive   = true
  description = "NetBird admin PAT"
}