Jun 25, 2017

The Environment

I have a RamNode VDS, which is a wonderful thing to have. 8GB RAM, 2 dedicated vCPU, an entire IPv6 /64, and a bunch of addon IPv4 addresses.

It’s currently being used as an LXD container host, and more and more things are becoming containerized on this box to replace other servers I have from RamNode, to cut back on overall costs.

But there’s one problem with LXD: the default LXD bridge is usually set up to masquerade over the host node’s default IP route. This is problematic if we want certain containers to be accessible over different IPs on the host, with different firewall rules for each.

We also have a second issue here: The IP addresses are not directly routed as “routable” addresses. They’re routed as on-link addresses, so we can’t use Stephane Graber’s suggested default of setting the host interface to manual, and letting the bridges and LXD containers negotiate for the actual static public IP addresses.

So, we are left with one solution: 1:1 NAT addressing with a static public IP address NAT’d to a static internal IP address.

1:1 NAT

A brief description of NAT in general

For those who may not know exactly what NAT is, it’s the automatic rewriting of packets from an internal private network so they go out over a specific public IP address, and the corresponding translation of inbound packets so replies get back to the proper internal IP address.

Anyone with a router on their home network which is getting a dynamic IP address is actually using NAT, whether they’re aware of it or not. (There are some exceptions: some routers properly receive an entire IPv6 /64 prefix or so, which is in turn dynamically assigned to clients connecting to the network, like on Comcast’s Gateway/Modem/Router combos. And some enterprise networks have so many IPv4 and IPv6 addresses that every machine gets its own statically assigned, publicly routed address. But the average network, with a handful of IP addresses, web gateways, etc., is going to have some form of NAT involved, which is also how most firewalls work.)
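As a concrete illustration, that home-router style of many-to-one NAT boils down to a single masquerade rule on a Linux gateway. This is only a hypothetical sketch; the interface name eth0 and the 192.168.1.0/24 subnet are assumptions, not anything from my setup:

```shell
# Enable IPv4 forwarding so the box will route packets at all.
sysctl -w net.ipv4.ip_forward=1

# Rewrite the source address of anything leaving eth0 to the gateway's
# own (possibly dynamic) public address: classic many-to-one NAT.
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
```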

What makes 1:1 NAT different?

1:1 NAT is mostly the same, but a little bit different. In a 1:1 NAT setup, you set a specific traffic route. That is, if you have an internal IP address and want its traffic to go out over one particular public IP address, and only that one, you configure the system NAT to route that internal IP address over that single public IP, and not over the other IPs on the gateway to the Internet. This way, you can have different internal IP addresses using different external IP addresses, provided the system doing the NAT routing actually holds those public IP addresses.

You can do this on IPv4 without question. IPv6 gets… interesting.

LXD: The Bridges

Here’s where I do some magic. I have, actually, two separate LXD bridges / network segments – lxdbr0 and NoNAT0. They are each configured slightly differently.

lxdbr0: The MASQUERADE-ing bridge

I have the standard lxdbr0, which has a private IPv4 address and CIDR range as the host IP address for the bridge; it also has the IPv6 address and range of fd42:a6c7:f9ea:ff9c::1/64. This network is set up with the mostly-default bridge settings of ipv4.nat: "true" and ipv6.nat: "true", which allow all of the IP addresses on that bridge’s internal network to ‘pretend’ to be the default IP route for the host machine. Their traffic goes out over the primary IP address of my VDS, since that’s the default route.
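For reference, these settings can be inspected or adjusted with LXD’s network commands. A sketch, assuming a managed lxdbr0 (on most installs these values are already the defaults):

```shell
# Turn masquerading on for both address families on the default bridge.
lxc network set lxdbr0 ipv4.nat true
lxc network set lxdbr0 ipv6.nat true

# Inspect the resulting configuration.
lxc network show lxdbr0
```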

NoNAT0: A Non-MASQUERADE-ing Bridge

This is the ‘other’ bridge on my host. This is for all the containers getting 1:1 NAT.

It has its own private IPv4 address and CIDR range, and the IPv6 address and range of fd00:0000:dead:beef::1/64. But, unlike lxdbr0 above, it has the bridge settings of ipv4.nat: "false" and ipv6.nat: "false". This tells LXD not to automatically configure the MASQUERADE rules on the host node. That means there is no NAT traversal for the containers on this bridge, and we have to set up 1:1 NAT for anything to traverse outbound.

Another big note: I do not leave IPv6 unconfigured on this bridge. IPv4 is DHCP’d, and while I reserve a specific DHCP range for IPv6, I usually set up static IPv6 addresses on the containers later.

This is the config of my NoNAT0 bridge, which you may want to emulate to get IPv6 1:1 NAT working properly:

  ipv4.dhcp: "true"
  ipv4.firewall: "false"
  ipv4.nat: "false"
  ipv6.address: fd00:0000:dead:beef::1/64
  ipv6.dhcp: "true"
  ipv6.dhcp.ranges: fd00:0000:dead:beef:ffff:ffff:fff0:0001-fd00:0000:dead:beef:ffff:ffff:fff9:ffff
  ipv6.dhcp.stateful: "true"
  ipv6.firewall: "false"
  ipv6.nat: "false"
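If you want to build an equivalent bridge from scratch, the following sketch should reproduce that config in one shot. The ipv4.address value here is my own placeholder (any private /24 will do), and the dhcp.ranges key is left out for brevity:

```shell
# Create a non-masquerading bridge; key=value pairs map straight
# onto the YAML config shown above.
lxc network create NoNAT0 \
    ipv4.address=10.10.10.1/24 \
    ipv4.dhcp=true \
    ipv4.firewall=false \
    ipv4.nat=false \
    ipv6.address=fd00:0000:dead:beef::1/64 \
    ipv6.dhcp=true \
    ipv6.dhcp.stateful=true \
    ipv6.firewall=false \
    ipv6.nat=false

# Attach an existing container's eth0 to the new bridge.
lxc network attach NoNAT0 my-container eth0
```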

1:1 NAT: The Setup

For sane 1:1 NAT, especially on RamNode virtual systems, we need to know what the host system’s network interface is. That’s easily obtained with ifconfig (or ip addr). On Ubuntu 16.04 and up, interfaces usually have ens-prefixed names. On my system, all the IP addresses on the host are assigned to ens3, the only interface on the host system.
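Two quick ways to find that interface name on a modern system (the ip tool ships in iproute2, which is present on essentially every current distribution):

```shell
# List interfaces and their addresses (modern replacement for ifconfig).
ip -br addr show

# Or pull just the interface that carries the default route.
ip route show default | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}'
```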

A very big notice here: when we set up 1:1 NAT, we *reserve* that NAT route for the addresses named in the rules. That is, if we wanted a different private IP address to go out over the specific public IPv6 address we’re setting up in the 1:1 NAT route, we can’t sanely do that without adding it to the rules for that IP route. Anything not covered by the rules will use the default route instead (for MASQUERADE-configured bridges).

Now, onto the configurations. Note that whether you have a GUI or not, it’s easiest to mess with netfilter and NAT via the command line. So we’ll do that, via iptables for IPv4, and ip6tables for IPv6.

Note that you may also want to install iptables-persistent, or your distribution’s equivalent, to “keep” the rules you add across reboots. I use Ubuntu and Debian systems, so that’s the package name I know. CentOS-based systems save the firewall config differently, but the initial rules to set up 1:1 NAT are actually the same across most systems.
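On Debian/Ubuntu, persisting the rules looks roughly like this (package and service names from memory; CentOS users would reach for service iptables save or firewalld instead):

```shell
# Install the persistence package; the installer offers to save the
# rules that are currently loaded.
apt-get install -y iptables-persistent

# Save the current IPv4 and IPv6 rule sets so they survive a reboot.
netfilter-persistent save
```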

Setting Up 1:1 IPv4 NAT

Let’s assume we have the public IP address 203.0.113.10 on the host system (a documentation placeholder from 203.0.113.0/24 – substitute your real address), and the container I want to set up the 1:1 NAT for has the internal IP 10.10.10.10 on my NoNAT0 bridge, set statically. We can also assume the LXD container is named ‘my-container’.

On the host, we just need to execute the following commands to make everything work. The commented lines (preceded with a #) are just there for information of what each command does. You’ll either need to be the root user, or have sudo access, to execute these rules. Also note that I add a comment that will show up when I do iptables -t nat -L -n -v or just plain iptables -L -n -v, so I know what rules do at-a-glance.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for Inbound traffic on the host Interface.
iptables -t nat -A PREROUTING -i ens3 -d 203.0.113.10 -j DNAT --to-destination 10.10.10.10 -m comment --comment "1:1 NAT - 203.0.113.10 <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for Outbound traffic on the host Interface.
iptables -t nat -A POSTROUTING -o ens3 -s 10.10.10.10 -j SNAT --to-source 203.0.113.10 -m comment --comment "1:1 NAT - 203.0.113.10 <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic.
iptables -A FORWARD -s 10.10.10.10 -j ACCEPT -m comment --comment "1:1 NAT - 203.0.113.10 <-> lxd:my-container"
iptables -A FORWARD -d 10.10.10.10 -j ACCEPT -m comment --comment "1:1 NAT - 203.0.113.10 <-> lxd:my-container"

Once that’s in place, you should be able to access external IPv4 addresses from within the container, and reach directly into the container via the public IP address.
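A quick sanity check, using the placeholder addresses from above (icanhazip.com is just one example of an external what’s-my-IP service):

```shell
# From the host: confirm the rules are in place and counting packets.
iptables -t nat -L -n -v | grep my-container

# From inside the container: confirm outbound traffic works, and that
# the world sees the 1:1 NAT'd public address, not the host's default IP.
lxc exec my-container -- ping -c 3 8.8.8.8
lxc exec my-container -- curl -4 -s https://icanhazip.com
```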

Setting up 1:1 IPv6 NAT

Now, before anyone tells me that IPv6 NAT doesn’t exist: you’re right, NAT was never made part of the IPv6 specification. However, the Linux kernel (among others) added the ability to rewrite source and destination IPv6 addresses on packets via similar SNAT/DNAT rules.

Let’s assume we’re still working with the same container (‘my-container’ from the IPv4 section), and that the container has the static private IPv6 address fd00:0000:dead:beef:cafe::ff10. Let’s also assume the host node has the IP address 2583:ffff:ffff::dead:beef:cafe on its ens3 interface, and that’s what we want to 1:1 NAT. (I made this public IPv6 address up off the top of my head; if it’s actually in use, I apologize!)
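One way to pin that static private IPv6 address on the container, assuming its NIC device is named eth0 and is attached to the managed NoNAT0 bridge, is LXD’s per-device address key (the override subcommand exists on reasonably recent LXD releases):

```shell
# Override the profile-inherited NIC so per-container keys can be set,
# then pin the static IPv6 address handed out by the bridge's dnsmasq.
lxc config device override my-container eth0
lxc config device set my-container eth0 ipv6.address fd00:0000:dead:beef:cafe::ff10
lxc restart my-container
```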

The rules are nearly identical, except that they use ip6tables.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for Inbound traffic on the host Interface.
ip6tables -t nat -A PREROUTING -i ens3 -d 2583:ffff:ffff::dead:beef:cafe -j DNAT --to-destination fd00:0000:dead:beef:cafe::ff10 -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for Outbound traffic on the host Interface.
ip6tables -t nat -A POSTROUTING -o ens3 -s fd00:0000:dead:beef:cafe::ff10 -j SNAT --to-source 2583:ffff:ffff::dead:beef:cafe -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic. Note that the FORWARD source match uses the container's
# private address, since SNAT hasn't happened yet at that point.
ip6tables -A FORWARD -s fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"
ip6tables -A FORWARD -d fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

Now, your container should be able to work with IPv6 addresses, and be seen as the public IPv6 address on the Internet.
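And the IPv6 equivalent of the earlier sanity check:

```shell
# Confirm the ip6tables NAT rules are matching packets.
ip6tables -t nat -L -n -v | grep my-container

# From inside the container: outbound IPv6 (use ping6 on older iputils),
# and the public address the outside world sees.
lxc exec my-container -- ping -6 -c 3 2001:4860:4860::8888
lxc exec my-container -- curl -6 -s https://icanhazip.com
```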

And now you’re done!

You should be good to go with 1:1 NAT. Just make sure to set up the firewall on the *container* to protect it as if it were the server itself and not just a container; do the same final hardening steps you would on any public-facing server.
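As a starting point for that container-side firewall, here is a minimal default-deny inbound policy, run inside the container. The allowed ports are assumptions for illustration; adjust them to your actual services:

```shell
# Allow established/related traffic and loopback first.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT

# Allow SSH and HTTPS inbound (example services only).
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Drop everything else inbound by default.
iptables -P INPUT DROP
```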
