Peplink as home gateway/firewall!

I’m a big fan of routers and firewalls. I loved the idea of running pfSense back in the day, and before m0n0wall/pfSense, I used to run a custom FreeBSD firewall!

Do you remember m0n0wall?
Yes, the father of pfSense, and some may say that m0n0wall is also the father of OPNsense!

About a year ago, I decided on a branded router/firewall for the home, because of just one feature. Yes, only one feature made me buy this Peplink Balance 20X.

SpeedFusion

Peplink’s patented SpeedFusion technology powers enterprise VPNs that tap into the bandwidth of multiple low-cost cable, DSL, 3G/4G/LTE, and other links connected anywhere on your corporate or institutional WAN. Whether you’re transferring a few documents or driving real-time POS data, video feeds, and VoIP conversations, SpeedFusion pumps all your data down a single bonded data-pipe that’s budget-friendly, ultra-fast, and easily configurable to suit any networking environment.

This is the description from Peplink’s official website [ https://www.peplink.com/technology/speedfusion-bonding-technology/ ]

[Image: a diagram of a network]

There are free alternatives to SpeedFusion, but none of them work as seamlessly; sometimes I just forget SpeedFusion is even there.

In my case, I have two Internet connections: one with CLARO (300/75) and a second one with OrbitCable (10/5). Why the second Internet connection? Well, it’s cheap, and in case CLARO has issues, I can still be online, read emails, or even do a Teams call.

A year ago I was running OPNsense with dual WAN and it was OK. Then I got my hands on an old Balance 20 (the previous, slower model of the 20X) and played a little bit with SpeedFusion. At that moment I became convinced that this solution is better for multiple internet connections thanks to the bonding options: the failover is transparent, because the IP of the VM hosting the other side of the SpeedFusion tunnel is the one used for establishing connections.

It looks like this (almost…):

[Image: a diagram of my network]

My drawing skills are dead.

[Image: screenshot of a traceroute]

Traceroute from a machine with a policy rule that sends traffic via the SpeedFusion tunnel. A few machines use this policy and go out to the internet through the bonded tunnel.

[Image: screenshot of another traceroute]

Traceroute from a machine without a route policy rule, going out via CLARO. This is the normal behavior for the entire network; if CLARO goes down, the traffic moves over to Orbit.

What are the advantages of this?

  • Using the SpeedFusion tunnel, remote servers don’t see my real WAN IP, so if one of the WANs goes down, connections don’t reset.
  • The classic dual WAN/load balancing is still available; using policies, you can manage how the WANs are used inside the tunnel.
  • You can publish internal services using the SpeedFusion VM’s WAN IP address, and you only need to open ports on that VM hosted in the remote data center.

Any disadvantages?

  • Yes. Some sites detect my connection as a bot/crawler, and I need to complete captchas to get into them (Cloudflare, eBay, and others).
  • Speed: the bandwidth available inside the VPN is 100Mbps, a hardware limitation of the Balance 20X.
  • Price: on this model I have to pay for the 2nd WAN. It only has one Ethernet WAN, so you need to create a Virtual WAN, which costs $49/year; the Balance 20 has two Ethernet WANs. I wasn’t aware of this until I got my hands on the 20X.

Special Use Case?

I’ve been running an ASN-enabled network for almost 6 years. A big part of this network is connecting different Linux VMs with BGP via GRE/WireGuard tunnels, to be able to route to the internet using a /24 of publicly routable IPv4 addresses and a /40 of IPv6 addresses.

There is a MikroTik RB3011 connected directly to the Peplink. Over this connection, a GRE tunnel is formed with another MikroTik (a CHR) running in the same virtual network as the SpeedFusion VM. The CHR receives a default route from a Debian VM with BGP sessions to BuyVM routers; there are a lot of configurations in place. Before this setup, there were two WireGuard tunnels to different places to form the BGP sessions; now I only need one, which runs on top of the two WANs.
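The GRE leg between the two MikroTiks can be sketched in the RouterOS CLI. This is a minimal, hypothetical example, not my real config: the interface names are made up and the addresses come from the documentation ranges:

```routeros
# On the RB3011: GRE tunnel toward the CHR in the data center
# (local/remote addresses are placeholders)
/interface gre add name=gre-chr local-address=192.0.2.10 remote-address=198.51.100.20
/ip address add address=10.255.0.1/30 interface=gre-chr

# On the CHR: the mirror side of the same tunnel
/interface gre add name=gre-home local-address=198.51.100.20 remote-address=192.0.2.10
/ip address add address=10.255.0.2/30 interface=gre-home
```

With the /30 up on both ends, the BGP session (and the default route from the Debian VM) rides over the tunnel, and SpeedFusion below it decides which WAN actually carries the packets.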

[Image: terminal output]

It’s cleaner, I think… this is a topic for an upcoming post!

 

HomeLAB is going almost 100% HarvesterHCI

 


I just decided that Proxmox is not challenging and interesting enough to keep me trying stuff and investing learning time in it. Yeah, a lot of Proxmox lovers will hate me for saying that, but the time for a new platform/way of doing things is upon us, and maybe you already noticed: I don’t do VMware stuff anymore.

I just want to run Kubernetes: a lot of Kubernetes clusters for testing solutions like Cilium and others. My real issue with Proxmox is the integration with automation tools like Terraform (there is no official provider for it), and the storage plugins for consuming Proxmox storage on Kubernetes are terrible. I have always liked Harvester, and the latest integration of downstream clusters running on Harvester and deployed with Rancher is awesome!

So, this is the plan.

Currently I’m running a 3-node Proxmox cluster with Ceph and a few NVMe and Intel SSD drives that are good enough for decent storage performance. I will remove 2 nodes from that cluster and convert them to Harvester, since there is now support for a 2-node cluster + witness node. The witness node will run on the standalone Proxmox host (Harvester doesn’t have USB pass-through yet, and I run unRAID on that host).

The 2 HP z440 will be my new 2-node Harvester cluster.

These machines will have 4 drives each: 2 NVMe and 2 Intel SSDs. In the RAM department there will be 256GB available, and enough CPU to run at least 3 or 4 RKE2 clusters for testing and production stuff.

I’ll update on the progress…

 

Cilium BGP Lab with LoadBalancing and more!

 

At this point, we know how to install Cilium and create a BGP peering with our routers. Now we need to let the outside world reach our Kubernetes apps.

If you don’t have the KinD cluster with Cilium yet, go to https://arielantigua.com/weblog/2024/07/cilium-bgp-lab-locally/ first.

When using Cilium, you can reach an application using the Pod IP address or a LoadBalancer IP assigned to a Service. In the previous article we only advertised the Pod CIDR to our BGP neighbors; let’s add more, so we can get closer to a real deployment.

If you have already cloned the repo, do a pull to get the new config files and other additions in the Makefile; or better yet, do a fresh clone of the repo and start from scratch. That’s the idea of the repo!
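For LoadBalancer IPs, Cilium hands out addresses from an IP pool CRD, and the BGP configuration can then advertise those IPs to the peers. A minimal sketch (the CIDR is a placeholder, and on some Cilium versions the field is `cidrs` instead of `blocks`):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:
    - cidr: "10.100.0.0/24"   # placeholder range for Service LoadBalancer IPs
```

With a pool like this in place, Services of type LoadBalancer get an IP from the range, and a service selector on the BGP virtual router announces it to the neighbors.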

Continue reading «Cilium BGP Lab with LoadBalancing and more!»

Cilium BGP Lab, locally!

Maybe you already know about Cilium. You don’t?
Go read https://docs.cilium.io/en/stable/overview/intro/ and come back!

Hello again!
So now you want to learn about Cilium’s BGP functionality. For me, this is one of the most exciting features of Cilium; maybe the reason is the addiction I already have to BGP, who knows (AS207036). Back to the point: with Cilium you can establish a BGP session with your routers (ToR, border, or core; you decide) and announce PodCIDRs or LoadBalancer IPs for Services.
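To give an idea of what that looks like, a BGP peering in Cilium can be declared with a CRD. This is a minimal sketch, not the lab’s actual config: the ASNs and peer address are made up, and field names may vary between Cilium versions (recent releases also offer the newer CiliumBGPClusterConfig API):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  virtualRouters:
    - localASN: 65001
      exportPodCIDR: true   # announce each node's PodCIDR
      neighbors:
        - peerAddress: "172.18.0.100/32"   # the router (placeholder address)
          peerASN: 65000
```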

For this learning exercise, we will use KinD and other tools to run a K8s cluster locally on any Linux (or Windows or macOS) machine. There is a lot of info on the internet on how to get KinD up and running, and even how to install Cilium; I decided to build a collection of Cilium labs (ciliumlabs) to speed up the process of getting a Cilium testing environment up and running.

First, go and clone the repo. All the information is in the README of each lab type; in this case, bgp/README.md has the steps to get this ready, but we first need to install the prerequisites, which are listed in the main README file. In my environment all of this is met, so I can proceed with the lab creation.
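For reference, a KinD cluster prepared for Cilium usually disables the default CNI so Cilium can take over. A minimal sketch of such a config; the repo’s actual files may differ:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  disableDefaultCNI: true   # let Cilium provide the CNI
  kubeProxyMode: none       # optional: use Cilium's kube-proxy replacement
```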

Continue reading «Cilium BGP Lab, locally!»

Platform9 KubeVirt solution.

 

This is my opinion about this platform.

A few weeks ago, Platform9 announced a hands-on lab for their KubeVirt implementation. After using Harvester to run VMs, mainly for deploying Rancher RKE clusters, I got my hands on this platform, and the differences are huge.

First, Platform9 keeps its offering very close to the upstream project. What does this mean? It looks like you installed KubeVirt manually in your K8s cluster, and this is good: you become more familiar with the solution itself, and when the time comes to move to another KubeVirt offering, the changes will be minimal.
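To illustrate what “close to upstream” means: on a platform like this, a VM is just the standard KubeVirt CRD. A minimal VirtualMachine manifest as a sketch (the name, image, and sizes are arbitrary examples, not anything specific to Platform9):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm           # example name
spec:
  running: true          # start the VM right away
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example image
```

If a vendor stays close to upstream, manifests like this apply unchanged with kubectl, which is exactly the portability argument above.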

Continue reading «Platform9 KubeVirt solution.»