Point-to-Point Connections
For a point-to-point connection, two communication endpoints are needed. To demonstrate the connection, we include a test webserver configuration in our basic setup.
Envoy & XTRA
XTRA proxy is built around Envoy Proxy. Envoy is an open-source, high-performance edge and service proxy designed for cloud-native applications. XTRA incorporates a custom WebAssembly (WASM) filter into Envoy Proxy, which is compatible with a wide range of Envoy Proxy configurations and handles encryption and decryption operations seamlessly.
Keep in mind: an Envoy Proxy configuration that loads the XTRA filter is required to leverage XTRA's encryption and decryption operations, so plan your Envoy configuration around the XTRA functionality you intend to deploy.
Configuration YAMLs
When deploying XTRA, a number of Kubernetes manifests must be applied to ensure XTRA is deployed successfully. In this section, we break down the minimum set of manifests needed to get a point-to-point setup working.
1. Namespace
A namespace houses the resources deployed to a cluster. The namespace can be named whatever you like, but the name must be the same across all HOPR manifests.
---
apiVersion: v1
kind: Namespace
metadata:
  name: hopr-p2p
2. Configmap: Envoy yaml
The Envoy configuration ConfigMap defines the listeners, clusters, and admin interface for the Envoy setup.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hopr-envoyconfig
  namespace: hopr-p2p
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: ingress
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 18000
        filter_chains:
        - filters:
          - name: envoy.filters.network.wasm
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
              config:
                vm_config:
                  runtime: "envoy.wasm.runtime.v8"
                  code:
                    local:
                      filename: "/etc/envoy/xtra.wasm"
          - name: envoy.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
              stat_prefix: ingress
              cluster: local_service
      - name: egress
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 18001
        filter_chains:
        - filters:
          - name: envoy.filters.network.wasm
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
              config:
                vm_config:
                  runtime: "envoy.wasm.runtime.v8"
                  code:
                    local:
                      filename: "/etc/envoy/xtra.wasm"
          - name: envoy.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
              stat_prefix: egress
              cluster: remote_service
      clusters:
      - name: local_service
        connect_timeout: 0.25s
        type: STATIC
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: mock_local
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 8000
      - name: remote_service
        connect_timeout: 5.00s
        type: STRICT_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: mock_remote
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: my-awesome-test.hoprapi.com
                    port_value: 18000
      - name: xtra
        connect_timeout: 0.25s
        type: STATIC
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: xtra
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 8080
    admin:
      access_log_path: "/dev/null"
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8001
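The wiring this configuration sets up can be summarized in a short sketch. The routing facts (ports, cluster targets) are taken from the YAML above; modeling egress as "encrypt and forward to the peer" and ingress as "decrypt and hand off locally" follows the filter descriptions in this guide:

```python
# Sketch of the point-to-point topology defined in envoy.yaml above.
# Each side runs an ingress listener (WASM filter decrypts, tcp_proxy
# hands off to the local webserver) and an egress listener (WASM filter
# encrypts, tcp_proxy forwards to the peer's ingress).
listeners = {
    "ingress": {"port": 18000, "cluster": "local_service"},
    "egress":  {"port": 18001, "cluster": "remote_service"},
}
clusters = {
    "local_service":  ("127.0.0.1", 8000),                     # test webserver
    "remote_service": ("my-awesome-test.hoprapi.com", 18000),  # peer's ingress
}

# The egress listener must target the remote side's *ingress* port, so
# that encrypted traffic always lands on a listener that can decrypt it.
peer_host, peer_port = clusters[listeners["egress"]["cluster"]]
assert peer_port == listeners["ingress"]["port"]
print(f"egress forwards to {peer_host}:{peer_port} (remote ingress)")
```

This symmetry is what makes the setup point-to-point: both sides run the same listener pair, differing only in the `remote_service` address.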
3. Configmap: test webserver.py
A test webserver used as one of the two communication endpoints.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hopr-webserver-script
  namespace: hopr-p2p
data:
  test-webserver.py: |
    #!/usr/bin/env python3
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import os

    host_name = "0.0.0.0"
    server_port = 8000

    class MyServer(BaseHTTPRequestHandler):
        def do_GET(self):
            msg = os.environ.get("SERVER_TEXT", "Hello from the other side!")
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(bytes("hopr XTRA p2p Test\n", "utf-8"))
            self.wfile.write(bytes(f"{msg}\n", "utf-8"))

    if __name__ == "__main__":
        webserver = HTTPServer((host_name, server_port), MyServer)
        print(f"Server started at http://{host_name}:{server_port}")
        try:
            webserver.serve_forever()
        except KeyboardInterrupt:
            pass
        webserver.server_close()
        print("Server stopped.")
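If you want to sanity-check the webserver behavior before deploying, a handler like the one above can be exercised in-process. This is a minimal local sketch (the handler body is reproduced inline and bound to an ephemeral port, not port 8000):

```python
# Minimal in-process check of a handler like test-webserver.py's.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(b"hopr XTRA p2p Test\n")

    def log_message(self, *args):
        pass  # keep request logging quiet for the check

server = HTTPServer(("127.0.0.1", 0), TestHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}") as resp:
    body = resp.read().decode("utf-8")
server.shutdown()
print(body)  # → hopr XTRA p2p Test
```

In the cluster, this same response body is what should eventually appear in the peer's curl-test logs once the XTRA routers are in the path.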
4. Secret: docker configuration file
When onboarding with HOPR, a Docker config.json is given to you. This config.json must be included in a Secret so the cluster has the permissions required to pull images from the HOPR repository.
---
apiVersion: v1
kind: Secret
metadata:
  name: hopr-registrycreds
  namespace: hopr-p2p
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewogICJhdXRocyI6IHsKICAgICJjaGlwcy5ob3ByLmV4b2ZmaWMuaW8iOiB7CiAgICAgICJ1c2VybmFtZSI6ICJqb24iLAogICAgICAicGFzc3dvcmQiOiAiSGFyYm9yMTIzNDUiLAogICAgICAiZW1haWwiOiAiam9uYXRoYW4uZ29yZG9uQGVpdHIudGVjaCIsCiAgICAgICJhdXRoIjogImFtOXVPa2hoY21KdmNqRXlNelExIgogICAgfQogIH0KfQo=
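The `.dockerconfigjson` value is nothing more than a base64-encoded JSON document. If you need to build or inspect one by hand, the structure looks like this (the registry host matches this guide; the username and password below are placeholders, not real credentials):

```python
# Sketch: constructing and decoding a .dockerconfigjson value.
import base64
import json

registry = "chips.hopr.exoffic.io"
username, password = "example-user", "example-pass"  # placeholders

config = {
    "auths": {
        registry: {
            "username": username,
            "password": password,
            # "auth" is base64 of "username:password"
            "auth": base64.b64encode(f"{username}:{password}".encode()).decode(),
        }
    }
}

# This string is what goes under data/.dockerconfigjson in the Secret.
dockerconfigjson = base64.b64encode(json.dumps(config).encode()).decode()

# Decoding recovers the original structure.
decoded = json.loads(base64.b64decode(dockerconfigjson))
print(decoded["auths"][registry]["username"])  # → example-user
```

Decoding the value you were given during onboarding the same way is a quick check that the Secret contains what you expect.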
5. Secret: hopr license
When onboarding with HOPR, a HOPR license file is given to you. The license file and its associated key need to be included in a Secret for customers to access specific functionality. Note that the xtra-keyserver container in the Deployment below also reads a CHIPS_ALGORITHM key from this Secret, so include the algorithm value provided during onboarding.
---
apiVersion: v1
kind: Secret
metadata:
  name: hopr-license
  namespace: hopr-p2p
type: Opaque
stringData:
  HOPR_KEY: "46O0IJNIM9VPQk4Be7hO72S4mUCdI9B6JmjbD0m4Vv0="
  HOPR_LICENSE: "WwIWOuohjcSlXFjaG+TcCOuk7AXxlM8xaM3r5eTubnnVN+bY2m2DB87CieiTEhhtZ6lKszmoP0GDiwYOVtlpsx1KiuTn1Q16ZODBgWzXFiOpQVNSgQrO938LKzORaoPxTLEk"
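Unlike the registry Secret above, this one uses `stringData`, which accepts plain text; Kubernetes base64-encodes it into `data` on admission. A quick sketch of the relationship (the value below is a placeholder):

```python
# Sketch: what Kubernetes does with a stringData value.
import base64

plain = "example-license-value"  # placeholder, not a real license

# Stored form: what you would see under `data` when reading the Secret
# back (e.g. via `kubectl get secret hopr-license -o yaml`).
stored = base64.b64encode(plain.encode()).decode()

# Decoding the stored form recovers the original stringData value.
assert base64.b64decode(stored).decode() == plain
print(stored)
```

This is why the license values above can be pasted as-is, while the `.dockerconfigjson` value in the previous Secret had to be pre-encoded under `data`.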
6. MAID Volume
A MAID volume is needed for persistent storage of XTRA's Machine Alias Identity (MAID) identifier.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hopr-maid-storage
  namespace: hopr-p2p
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
7. Deployment
The Deployment YAML deploys the following containers:
- curl-test: This container simulates a client. On an interval, it contacts the xtra-router container which should encrypt and proxy the communication out to the XTRA test endpoint on the Internet. The URL environment variable may need to be changed if the Envoy listener configuration is modified.
- test-webserver: This container simulates a server. It provides a very simple web server with a configurable message, which should be able to be viewed in the curl-test container logs on the "other side".
- xtra-router: This container proxies network communication inside the Pod. It can be configured in many ways to accomplish a desired result via the ConfigMap shown previously. This ConfigMap is mounted as the /etc/envoy/envoy.yaml file in the container, which is loaded and used as the running configuration by Envoy.
- xtra-keyserver: This container is used by the WASM plugin within the xtra-router container. It provides a rotating key to the router on demand, which is used to encrypt/decrypt network communication. The license Secret is consumed by this container as environment variables. The CHIPS algorithm shown is the default for the public XTRA test endpoint, but it can be changed for scenarios where the remote endpoint uses a different one.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hopr-p2p
  namespace: hopr-p2p
  labels:
    app: hopr-p2p
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hopr-p2p
  template:
    metadata:
      labels:
        app: hopr-p2p
    spec:
      containers:
        - name: curl-test
          image: chips.hopr.exoffic.io/hopr/continuouscurl:v0.1.1
          env:
            - name: CURLARGS
              value: "-s"
            - name: URL
              value: "http://localhost:18001"
        - name: test-webserver
          image: python:3.10-buster
          command: ["python"]
          args: ["/tmp/test-webserver.py"]
          ports:
            - containerPort: 8000
              name: webserver
          env:
            - name: SERVER_TEXT
              value: "Hello from side A!"
          volumeMounts:
            - name: webserver-script
              mountPath: /tmp/test-webserver.py
              subPath: test-webserver.py
        - name: xtra-router
          image: chips.hopr.exoffic.io/hopr/xtra-wasm-filter:v0.3.0
          ports:
            - containerPort: 18000
              name: envoy-ingress
          volumeMounts:
            - name: envoy-config
              mountPath: /etc/envoy/envoy.yaml
              subPath: envoy.yaml
        - name: xtra-keyserver
          image: chips.hopr.exoffic.io/hopr/xtra:v0.6.3
          env:
            - name: POP_LOG_LEVEL
              value: "DEBUG"
            - name: CHIPS_ALGORITHM
              valueFrom:
                secretKeyRef:
                  name: hopr-license
                  key: CHIPS_ALGORITHM
            - name: HOPR_LICENSE
              valueFrom:
                secretKeyRef:
                  name: hopr-license
                  key: HOPR_LICENSE
            - name: HOPR_KEY
              valueFrom:
                secretKeyRef:
                  name: hopr-license
                  key: HOPR_KEY
          volumeMounts:
            - name: hopr-maid
              mountPath: /maid
      imagePullSecrets:
        - name: hopr-registrycreds
      volumes:
        - name: envoy-config
          configMap:
            name: hopr-envoyconfig
        - name: webserver-script
          configMap:
            name: hopr-webserver-script
        - name: hopr-maid
          persistentVolumeClaim:
            claimName: hopr-maid-storage
8. Service
A Service resource is deployed to expose the XTRA ingress router as a NodePort on a Kubernetes host.
---
apiVersion: v1
kind: Service
metadata:
  name: hopr-webserver-service
  namespace: hopr-p2p
spec:
  selector:
    app: hopr-p2p
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - name: envoy-proxy-to-webserver
      protocol: TCP
      nodePort: 30002
      port: 18000
      targetPort: envoy-ingress
You're All Set
Once you have applied the YAML files and the Pod is running, you should be up and running with a basic XTRA setup. Check the logs of the curl-test container: if the remote endpoint's test message appears between the "Sleeping 30 ..." interval lines, XTRA is configured correctly. Congratulations!