Jun 25, 2017
 

The Environment

I have a RamNode VDS, which is a wonderful thing to have. 8GB RAM, 2 dedicated vCPU, an entire IPv6 /64, and a bunch of addon IPv4 addresses.

It's currently being used as an LXD container host, with more and more things becoming containerized on this box to replace other servers I have from RamNode, to cut back on "overall" costs.

But there's one problem with LXD: the default LXD bridge usually gets set up to masquerade as the host node's default IP route. This is problematic if we want certain containers to be accessible over different IPs on the host, with different firewall rules for each.

We also have a second issue here: The IP addresses are not directly routed as “routable” addresses. They’re routed as on-link addresses, so we can’t use Stephane Graber’s suggested default of setting the host interface to manual, and letting the bridges and LXD containers negotiate for the actual static public IP addresses.

So, we are left with one solution: 1:1 NAT addressing with a static public IP address NAT’d to a static internal IP address.


1:1 NAT

A brief description of NAT in general

For those who may not know exactly what NAT is, it's the automatic routing of internal private network packets out over a specific public IP address, and the corresponding inbound traversal of packets so they get back to the proper internal IP address.

Anyone with a router on their home network which is getting a dynamic IP address is actually using NAT, whether they're aware of it or not. (There are some exceptions – some routers properly receive an entire /64 prefix or such of IPv6 addresses, which is in turn dynamically assigned to clients connecting to the network, like on Comcast's Gateway/Modem/Router combos. And some enterprise networks have so many IPv4 and IPv6 addresses that they are individually statically assigned and routed to machines, which means every system has a public IP address. But the average network, with a handful of IP addresses, web gateways, etc., is going to have some form of NAT involved, which is how most firewalls work.)

What makes 1:1 NAT different?

1:1 NAT is mostly the same, but a little bit different. In a 1:1 NAT setup, you set a specific traffic route. That is, if you have internal IP addresses 10.20.30.40 and 10.20.30.41 and want them to go out over public IP address 1.2.3.4 and only that, you configure the system NAT to specifically route those two internal IP addresses over the single public IP address of 1.2.3.4, and not over the other IPs on the gateway to the Internet. In this way, you can have different internal IP addresses using different external IP addresses, provided the system doing the NAT routing has those public IP addresses.

You can do this on IPv4 without question. IPv6 gets… interesting.


LXD: The Bridges

Here’s where I do some magic. I have, actually, two separate LXD bridges / network segments – lxdbr0 and NoNAT0. They are each configured slightly differently.

lxdbr0: The MASQUERADE-ing bridge

I have the standard lxdbr0, which has the IP address and CIDR range of 10.74.250.1/24 as the host IP address for the bridge; it also has the IPv6 address and range of fd42:a6c7:f9ea:ff9c::1/64. This network is set up with the mostly-default bridge settings of ipv4.nat: "true" and ipv6.nat: "true", which allow all of the IP addresses on that bridge's internal network to 'pretend' to be the default IP route for the host machine. Traffic from these containers therefore leaves over the primary IP address of my VDS, as that's the default route.

NoNAT0: A Non-MASQUERADE-ing Bridge

This is the ‘other’ bridge on my host. This is for all the containers getting 1:1 NAT.

It has the IPv4 address and CIDR range of 10.74.251.1/24, and the IPv6 address and range of fd00:0000:dead:beef::1/64. But, unlike lxdbr0 above, it has the bridge settings of ipv4.nat: "false" and ipv6.nat: "false". This tells LXD not to automatically configure the MASQUERADE rules on the host node. This means there is no NAT traversal for the containers on this bridge, and we have to set up 1:1 NAT for anything to traverse outbound.

Another big note: I do not leave the containers' IPv6 unconfigured. IPv4 is DHCP'd, and while I reserve a specific range for DHCP in IPv6, I usually set up static IPv6 addresses later.

This is the config of my NoNAT0 bridge, which you may want to emulate to get IPv6 1:1 NAT working properly:

config:
  ipv4.address: 10.74.251.1/24
  ipv4.dhcp: "true"
  ipv4.dhcp.ranges: 10.74.251.200-10.74.251.250
  ipv4.firewall: "false"
  ipv4.nat: "false"
  ipv6.address: fd00:0000:dead:beef::1/64
  ipv6.dhcp: "true"
  ipv6.dhcp.ranges: fd00:0000:dead:beef:ffff:ffff:fff0:0001-fd00:0000:dead:beef:ffff:ffff:fff9:ffff
  ipv6.dhcp.stateful: "true"
  ipv6.firewall: "false"
  ipv6.nat: "false"
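
If you're creating a bridge like this from scratch, the equivalent lxc commands look roughly like the following. This is a sketch, assuming a reasonably recent LXD that supports the lxc network subcommands; the bridge name, container name (my-container), and address ranges are just the examples from this post.

# Create the non-MASQUERADE-ing bridge with the settings shown above.
lxc network create NoNAT0 \
    ipv4.address=10.74.251.1/24 \
    ipv4.dhcp=true \
    ipv4.dhcp.ranges=10.74.251.200-10.74.251.250 \
    ipv4.firewall=false \
    ipv4.nat=false \
    ipv6.address=fd00:0000:dead:beef::1/64 \
    ipv6.dhcp=true \
    ipv6.dhcp.stateful=true \
    ipv6.firewall=false \
    ipv6.nat=false

# Additional keys (such as ipv6.dhcp.ranges) can be set afterwards:
lxc network set NoNAT0 ipv6.dhcp.ranges fd00:0000:dead:beef:ffff:ffff:fff0:0001-fd00:0000:dead:beef:ffff:ffff:fff9:ffff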

1:1 NAT: The Setup

For sane 1:1 NAT, especially on RamNode virtual systems, we need to know what the host system's network interface is. That's easily obtained with ifconfig. On Ubuntu 16.04 and up, interfaces usually get ens-prefixed names. On my system, all the IP addresses on the host are bound to ens3, the only network interface on the host system.

A very big notice here: when we set up 1:1 NAT, we *prevent* anything else from using that NAT route. That is, if we wanted a different private IP address to go out over the specific public IP address we're setting up in the 1:1 NAT route, it won't, unless we also add it to the rules for that IP route; otherwise, the default route will be used (for MASQUERADE-configured bridges).

Now, onto the configurations. Note that whether you have a GUI or not, it’s easiest to mess with netfilter and NAT via the command line. So we’ll do that, via iptables for IPv4, and ip6tables for IPv6.
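
One sanity check before adding any rules: packet forwarding has to be enabled on the host for the FORWARD and NAT rules below to pass any traffic. LXD normally enables this when it creates its bridges, but it's worth confirming:

# Both of these should report "= 1" on a working LXD host.
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding

# If either reports 0, enable it (and add the setting to /etc/sysctl.conf to persist it).
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1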

Note that you may also want to install iptables-persistent, or its equivalent in your distribution, to persist the rules being added. I use Ubuntu and Debian systems, so that's the package name I know. CentOS-based systems will differ in where the firewall config is saved, but the initial rules to set up 1:1 NAT are actually the same across most systems.
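
On Ubuntu and Debian, persisting the rules looks roughly like this (iptables-persistent reads its saved rule sets from /etc/iptables/rules.v4 and rules.v6):

# Install the persistence package.
sudo apt-get install iptables-persistent

# After adding the 1:1 NAT rules below, save the current rule sets so they
# are restored at boot.
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'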

Setting Up 1:1 IPv4 NAT

Let’s assume we have the public IP address of 2.3.4.5 on the host system, and the container I want to set up the 1:1 NAT for has the internal IP on my NoNAT0 bridge of 10.74.251.63. We’ll also assume that this is statically set on the NoNAT0 bridge. We can also assume the LXD container is named ‘my-container’.
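
As an aside, one way to pin that static internal address is through LXD itself, by setting ipv4.address on the container's NIC device. This is just a sketch and an assumption on my part: it needs a recent enough LXD, and it assumes the container's NIC on NoNAT0 doesn't already exist in a profile (if it does, set ipv4.address on that existing device instead).

# Hand the container a fixed DHCP lease of 10.74.251.63 on the NoNAT0 bridge
# (the device name eth0 here is arbitrary).
lxc config device add my-container eth0 nic nictype=bridged parent=NoNAT0 ipv4.address=10.74.251.63
lxc restart my-container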

On the host, we just need to execute the following commands to make everything work. The commented lines (preceded with a #) are just there to explain what each command does. You'll either need to be the root user, or have sudo access, to execute these rules. Also note that I add a comment that will show up when I do iptables -t nat -L -n -v or just plain iptables -L -n -v, so I know what each rule does at a glance.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for Inbound traffic on the host Interface.
iptables -t nat -A PREROUTING -i ens3 -d 2.3.4.5 -j DNAT --to-destination 10.74.251.63 -m comment --comment "1:1 NAT - 2.3.4.5 <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for Outbound traffic on the host Interface.
iptables -t nat -A POSTROUTING -o ens3 -s 10.74.251.63 -j SNAT --to-source 2.3.4.5 -m comment --comment "1:1 NAT - 2.3.4.5 <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic.
iptables -A FORWARD -s 10.74.251.63 -j ACCEPT -m comment --comment "1:1 NAT - 2.3.4.5 <-> lxd:my-container"
iptables -A FORWARD -d 10.74.251.63 -j ACCEPT -m comment --comment "1:1 NAT - 2.3.4.5 <-> lxd:my-container"

Once that’s in place, you should be able to access external IPv4 addresses from within the container, and reach directly into the container via the public IP address.
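
A quick sanity check at this point (assuming the container is running and has ping and curl available; icanhazip.com is just an example "what's my IP" service):

# From inside the container: outbound IPv4 should work...
lxc exec my-container -- ping -c 3 8.8.8.8
# ...and the outside world should see the container as 2.3.4.5, not the host's default IP.
lxc exec my-container -- curl -4 -s https://icanhazip.com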

Setting up 1:1 IPv6 NAT

Now before anyone tells me that IPv6 NAT doesn’t exist, you’re right – the whole set of NAT functionality for IPv6 doesn’t really exist in the IPv6 spec. However, the Linux kernel and others did create the ability to change source and destination IPv6 addresses on packets via similar SNAT/DNAT rules.

Let's assume we're still working with the same container ('my-container' from the IPv4 section), and that the container has the static private IPv6 address of fd00:0000:dead:beef:cafe::ff10. Let's also assume the host node has the IP address of 2583:ffff:ffff::dead:beef:cafe on its ens3 interface, and that's what we want to 1:1 NAT. (I created this public IPv6 address off the top of my head; if it is actually in use, I apologize!)

The rules are essentially the same, except that they use ip6tables.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for Inbound traffic on the host Interface.
ip6tables -t nat -A PREROUTING -i ens3 -d 2583:ffff:ffff::dead:beef:cafe -j DNAT --to-destination fd00:0000:dead:beef:cafe::ff10 -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for Outbound traffic on the host Interface.
ip6tables -t nat -A POSTROUTING -o ens3 -s fd00:0000:dead:beef:cafe::ff10 -j SNAT --to-source 2583:ffff:ffff::dead:beef:cafe -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic.
ip6tables -A FORWARD -s fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"
ip6tables -A FORWARD -d fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

Now, your container should be able to work with IPv6 addresses, and be seen as the public IPv6 address on the Internet.
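
The same sanity check as on the IPv4 side, this time over IPv6 (again assuming ping6 and curl are available in the container):

# Outbound IPv6 from the container (Google's public DNS server over IPv6).
lxc exec my-container -- ping6 -c 3 2001:4860:4860::8888
# The outside world should see the container as the public IPv6 address, not the fd00:: one.
lxc exec my-container -- curl -6 -s https://icanhazip.com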


And now you’re done!

You should be good to go with 1:1 NAT. Just make sure to set up the firewall on the *container* to protect it as if it were the server itself and not just a container, and do the same final hardening steps you would for any Internet-facing server.
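
A minimal hardening sketch, assuming an Ubuntu container with ufw installed and SSH as the only service you want reachable from outside (adjust the allowed ports for whatever the container actually serves):

# Inside the container (or via lxc exec from the host): default-deny inbound, allow SSH, enable.
lxc exec my-container -- ufw default deny incoming
lxc exec my-container -- ufw default allow outgoing
lxc exec my-container -- ufw allow ssh
lxc exec my-container -- ufw --force enable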

Oct 13, 2016
 

After a long enough time, I’ve finally got builds working again! NGINX PPAs now have updated builds pending.

  • Stable PPA: Resync 1.10.1 packages with Debian, incorporate 1.10.1-3 into the PPAs. (Includes dynamic modules as exist in Unstable)
  • Mainline PPA: Resync basic packaging with Debian, using 1.10.1 as a base for Mainline packaging. Bump Mainline version to 1.11.5 for the PPA.

These updates *do* have a resync with Debian on the packaging, which should address some issues, and also a bump in Mainline to the latest release there.

As of October 13, 2016, at 10:54 AM (UTC-04:00), these are not yet available in the primary Stable or Mainline PPAs, and exist in the staging repository. I’m waiting on the builds to finish running and uploading first, because I can’t copy them over until that’s done.

As of October 13, 2016, at 11:04 AM (UTC-04:00), the process of copying the packages from the staging repositories into the main PPAs has begun. They should be available for use soon.


That said, these updates being overdue as they are, I will have to make some decisions here. And these decisions are effective as of now, mostly for Precise and older Ubuntu releases using the PPAs.

  • Ubuntu Precise will continue to get Stable PPA updates until April 2017. After April 2017, Precise will no longer be supported in the PPA.
  • Ubuntu Precise will no longer receive Mainline PPA updates effective October 15, 2016. We saw this before when we tried to backport newer NGINX to older Lucid releases long ago. The trouble with supporting old releases is multi-fold, but with regards to NGINX and Precise the two primary issues are supporting the build dependencies which continue to evolve as newer versions are available, and the timeline for Ubuntu to End Of Life the old Precise release. Precise is scheduled to go End of Life in April 2017. People still using Precise should be upgrading to Trusty or later at this point. Given this timeline of support, and the build dependencies issues, it will become far too difficult to maintain Mainline for Precise. (If an update to Mainline includes Security content, then an update will be made to Precise; however, no other updates will happen to Precise, so go and upgrade your Precise servers sooner rather than later!)

This may inconvenience some people using Precise, but unfortunately it’s getting too difficult to maintain NGINX for ancient releases.


Speaking of old releases, the PPAs are getting a cleanup too. Vivid and Wily packages, both releases now EOL, will have their packages removed shortly.

The Road Ahead for NGINX in Ubuntu

Jul 11, 2016
 

Hello, everyone! Two blog posts and a flurry of tweets in a day, what the heck has gotten into me?

Some fun things have happened in the last development cycle leading up to Xenial for nginx! Let's recap a couple of the big 'great' happenings:

  • NGINX 1.9.x was accepted into Xenial during the development process.
  • Later in the dev cycle, we were given the ACK by the Security Team to enable the HTTP/2 module (yay, HTTP/2 support!)
  • Close to the end, that was also updated to 1.10.x post-release to get us onto a Stable version for the duration of the LTS! Yay, an LTS with a Stable version!

All in all, a good dev cycle for getting NGINX into the Ubuntu repositories! Now, we look ahead to the future.


First, a note about Wily. The NGINX PPAs will no longer get any Wily updates, as of today. With Wily's End of Life date only a couple of weeks away, I can't guarantee there'll be any updates beyond security-critical ones.

This means, for the most part, that bugs which are against the Wily package in Ubuntu also get less scrutiny as we focus on the future. Any such Wily-filed bugs will need to be confirmed in another release of an equal or newer version (basically, Xenial or later) before I poke at them or another person pokes at them (this doesn’t prevent the community from submitting patches though). This also means people on Wily boxes who want to get continued NGINX support should upgrade to Xenial because I can’t guarantee they’ll get updates as they wish. And once Wily goes EOL, they get nothing.


Secondly, the road ahead. Up in Debian, they’re starting to test builds against the next OpenSSL version (1.1.0). Unfortunately, NGINX Stable 1.10.x doesn’t build. After poking upstream, I’ve learned there is a fix for this… but for NGINX Mainline… and it won’t be backported to 1.10.x. This is a little bit of a headache, for a couple reasons.

  1. NGINX Stable 1.10.x is not going to be able to be supported at some point in the future in Ubuntu, because it won’t have OpenSSL support.
  2. To get NGINX Mainline as the version in Ubuntu, I need to merge in the quite-evil Debian ‘dynamic modules’ support.
  3. Further, to get NGINX Mainline into Ubuntu during a development cycle, I need to go and pull in from Debian Experimental, and then build test against the older OpenSSL to make sure nothing dies off.

The big issue here is mostly that we don't know the full timeline of OpenSSL 1.1.0 being released in Debian. I have assurances from the Ubuntu Security Team, however, that OpenSSL 1.1.0 will not be included until packages don't Fail to Build from Source (FTBFS) against it, which means I don't have to act on this immediately.

The additional headache added to this list, though, is that while I merge in Dynamic Module Support, it is not 100% 'supported' yet in Debian, and it won't be totally supported in a sane way for packages which ship third-party modules. There have been discussion threads about some third-party modules packaging their modules to work as dynamic modules for Ubuntu Universe / Debian. This is a double-edged sword. Not only do I have to worry about NGINX updates, but I will have to start making sure all the dynamic modules get rebuilt for each upload. I'll be working to try and find a better solution to this, but it will hold up updates at times, given the signature-based approach to dynamic modules that exists currently. We'll work through this at some point, though, and make it more supportable in the future.

——

Just wanted to give you all some insights into the future of NGINX, and the headaches I will have to work through, for Ubuntu’s packages going forward.

NGINX Mainline PPA: 1.11.2 is being built and released; Ubuntu Wily EOL: No new NGINX versions in PPAs for Wily.

Jul 11, 2016
 

Been a while since I posted about NGINX on my blog.

Anyways, good news. NGINX 1.11.2 has been uploaded to the staging PPA, and is in the process of being built. If there are no issues with the builds, then I'll push the packages to the main Mainline PPA when they're completed.

NGINX 1.11.2 includes a few new features, but also a bunch of bugfixes:

Changes with nginx 1.11.2                                        05 Jul 2016

    *) Change: now nginx always uses internal MD5 and SHA1 implementations;
       the --with-md5 and --with-sha1 configure options were canceled.

    *) Feature: variables support in the stream module.

    *) Feature: the ngx_stream_map_module.

    *) Feature: the ngx_stream_return_module.

    *) Feature: a port can be specified in the "proxy_bind", "fastcgi_bind",
       "memcached_bind", "scgi_bind", and "uwsgi_bind" directives.

    *) Feature: now nginx uses the IP_BIND_ADDRESS_NO_PORT socket option
       when available.

    *) Bugfix: a segmentation fault might occur in a worker process when
       using HTTP/2 and the "proxy_request_buffering" directive.

    *) Bugfix: the "Content-Length" request header line was always added to
       requests passed to backends, including requests without body, when
       using HTTP/2.

    *) Bugfix: "http request count is zero" alerts might appear in logs when
       using HTTP/2.

    *) Bugfix: unnecessary buffering might occur when using the "sub_filter"
       directive; the issue had appeared in 1.9.4.

All in all this is a good thing.

However, for Ubuntu Wily 15.10 server users, who use the Mainline PPA, this is the last update for the Mainline PPA for Ubuntu Wily. Ubuntu Wily goes End of Life on July 28, 2016. This means it will no longer be supported by Ubuntu upstream, and will receive no new security updates, bug fix updates, etc. on that date. With the EOL date being so close, this is the last upload to the Mainline PPA for Ubuntu Wily. (This also holds true for the Stable PPA – there will be no new Wily updates except for security updates that may happen between now and July 28th)

planet.ubuntu-wisconsin.org downtime – May 03, 2016

May 01, 2016
 

planet.ubuntu-wisconsin.org may have temporary downtime on May 3rd, 2016 for system updates on the underlying server.

Downtime is not a guarantee, but should downtime occur, it is not predicted to be down for more than 30 minutes.

If you have any questions about this downtime, please contact Thomas Ward with those questions.

Ubuntu Server 16.04 on RamNode KVM VPSes: A How-To Guide Using the ISOs

Apr 26, 2016
 

This is a guide to make RamNode KVM VPSes run Ubuntu 16.04 Server, by manually setting it up via the Ubuntu Server 16.04 ISO.

  1. PREREQUISITES:
    • A new RamNode KVM VPS (or one you don’t mind losing all the data on).
    • Knowledge of Linux
    • Access to VPSCP (Solus)
  2. Start by setting up the VPS with a pre-made image (14.04 Minimal works).
  3. Login to the VPS, copy the /etc/network/interfaces information, and store it in a file somewhere off the server for reference. We'll use this later if the auto-configuration fails, or to know what data needs to be provided for the networking to work.
  4. Login to VPSCP.
  5. Go to your VPS settings, open the settings for your VM, and under the CDRom tab select the “Ubuntu 16.04 Server x86_64” item, and hit “Mount”.
  6. Under the “Settings” tab on the VPSCP for the VM, make sure Boot Order is set to “(1) CDROM (2) Hard Disk”
  7. Either use VPSCP to shut down the VPS, or login via SSH and then shut down the VPS.
  8. Click “Boot”. The VPS will now boot, and boot from the CDROM image.
  9. In the VPSCP, click the “VNC” button for your VPS. You can use the HTML5 VNC client or use an actual VNC client to connect to the connection information available on the VNC Viewer page.
  10. Once VNC is up, select your language from the CDROM prompt, then select “Install Ubuntu” on the screen that remains.
  11. Follow the screen prompts, providing the relevant information requested by the system. When it gets to the option for partitioning, and says “reuse partitions” or “Erase disk”, select the “Erase Disk” option that says to use LVM.
  12. Go through the rest of the prompts, and select the software features you want to install. Once it’s installed, the VPS will reboot. Close the VNC connection.
  13. Go back to VPSCP, and click “Shutdown” on the VPS control panel. The VPS will not boot when you select “Boot from first hard disk” from the CDROM menu.
  14. Under “Settings”, change your “Boot Order” to “(1) Hard Disk (2) CDROM” or to “Hard disk only”.
  15. Before we start the VPS, we need to reconfigure the networking via the "Reconfigure Networking" option in VPSCP – this will install the Solus network configuration to get the networking up and working. Note that this configuration will need to be edited afterwards.
  16. Once the "Reconfigure Networking" step is completed, and your VPS boots, connect once again to the VNC. Login to the server, using the credentials you set up during the installation steps via the ISO. Note that the VPS will not have working networking at this step – you must UPDATE the configuration to adapt for the Predictable Network Interface Naming, which Solus does not yet support in its network auto-configuration.
  17. Run the following command, and take a note as to what network interface name(s) come up other than lo (it may show up as ens3 or similar):
    ifconfig -a
  18. We now need to edit /etc/network/interfaces. Wherever eth0 shows up in the interfaces file, replace it with the interface name you gathered from step 17 (a one-line sed command for this is shown after this list), then reboot the VPS.
  19. Once your VPS reboots, we have to test its connectivity to the Internet.
    1. If you selected “OpenSSH server” or “SSH Server” during the installation steps, then you can attempt to directly SSH to your server, specifying the user you configured, and the IP address, for the connection details.
    2. If you did not select to install an SSH server, then connect to the server via the VNC, and login with the credentials you specified during installation.
  20. Once on the server, make sure you get ping replies for both of these commands.
    ping 8.8.8.8
    ping6 2001:4860:4860::8888
  21. If you received ping replies from the above commands, you have successfully redone the network configuration on the server, and everything is all ready for you to begin using your Ubuntu 16.04 VPS on RamNode! If not, verify the network settings put in place by Solus match the network settings you should be using.
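
For the interface rename in step 18, a one-liner like this is usually enough (this assumes the new interface name turned out to be ens3; substitute whatever ifconfig -a actually reported):

# Replace every occurrence of eth0 with the real interface name, keeping a backup copy.
sudo sed -i.bak 's/\beth0\b/ens3/g' /etc/network/interfaces
# Then reboot so the new configuration is applied cleanly.
sudo reboot
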
Apr 12, 2016
 

Hello again! NGINX 1.9.14 is now available in Ubuntu Xenial. There are quite a few things we should make known to everyone who currently uses nginx in Ubuntu with php5-fpm!


HTTP/2 is now enabled

Yes, HTTP/2 is now enabled for nginx-core, nginx-full, and nginx-extras in Ubuntu Xenial. Add http2 to your SSL listener line in your server blocks, and HTTP/2 will be enabled for that port and site.
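
To confirm HTTP/2 is actually negotiated after you reload nginx, a quick check with a curl built with HTTP/2 support works (yourdomain.example is a placeholder for your own site):

# The first line of the response should read "HTTP/2 200" rather than "HTTP/1.1 200".
curl -sI --http2 https://yourdomain.example/ | head -n 1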

For HTTP/2 on non-Xenial Ubuntu releases, you can use the Mainline PPA for Wily and later. Anything before Wily does not have full HTTP/2 support, and very likely will not be usable to get HTTP/2 working as intended.


Ubuntu Xenial ships php7.0-fpm, and not php5-fpm, and this will break existing site configurations

The Ubuntu Xenial packages for nginx have already been updated for this change, pointing to php7.0-fpm instead of php5-fpm.

However, users who have existing site configurations will not benefit from these changes. They must manually apply the changes.

Effectively, this is what a default setup uses to interface with the default php5-fpm setup on Ubuntu versions before Xenial, passing all PHP processing to the php5-fpm backend. This is from the default configuration file, but it’s still similar for all PHP passing:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
    
        # With php5-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

In Ubuntu Xenial, the TCP listener for php7.0-cgi is unchanged; however, for php7.0-fpm, it will be necessary to update existing site configurations to look like this:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
    
        # With php7.0-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php7.0-fpm:
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

This will prevent HTTP 502 Bad Gateway errors, and will use the updated php7.0-fpm instead of the php5-fpm packages.
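
After editing an existing site configuration, a quick test-and-reload keeps a typo from taking the site down (standard nginx and systemd commands on Xenial):

# Validate the configuration, and only reload nginx if the check passes.
sudo nginx -t && sudo systemctl reload nginx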

(If for some reason you still want to have php5-fpm under Xenial, you will not be able to get support from Ubuntu for this; you will need to use a PPA. I explain this on a different post on my blog.)

Ubuntu Xenial: Adding php5.6 to Xenial

Apr 12, 2016
 

Ubuntu Xenial will not ship php5 at all.

The only way to get continued php5 access is to use a PPA, specifically Ondřej Surý’s PPA for co-installable php5 and php7.0. However, this is not supported by the Ubuntu Server Team or the Ubuntu Security Team, and you accept the risks therein of using PPAs for getting php5.

The packages are *not* named php5 but are instead named php5.6.

So, to add php5.6-fpm to Xenial, you would do something like this to add the PPA, update, and then also install php5.6-fpm and dependencies:

sudo apt-get install software-properties-common
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php5.6-fpm

(Note that I have not tested this; this is, however, supposedly usable based on user experience data gathered on Ask Ubuntu by myself.)

This should be a similar process for any of the other php5.6 packages you would need. However, you do NOT need to re-add the PPA if it’s already on your system.
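
Once installed, you'd want to confirm the FPM service is running and find its socket to point nginx at. This is only a sketch: the socket path shown (/run/php/php5.6-fpm.sock) is my assumption about how these packages are laid out, so check what actually appears under /run/php/:

# Confirm the FPM service came up, and locate its listening socket.
systemctl status php5.6-fpm
ls -l /run/php/
# Expect something like /run/php/php5.6-fpm.sock; that is the path you'd use in the
# fastcgi_pass line of your nginx site configuration.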

planet.ubuntu-wisconsin.org now operational

Feb 05, 2016
 

Thanks to a joint effort between Thomas Ward (https://launchpad.net/~teward) providing server power and disk space and Simon Quigley (https://launchpad.net/~tsimonq2) providing the drive behind the project, the Ubuntu Wisconsin LoCo Team’s Planet feed aggregation site, here, is now operational.

Please contact Simon Quigley for information about this feed aggregator (and how to get added to it), or the Ubuntu Wisconsin LoCo Team.

If there is a technical issue with this site, or if the site is not responding for some reason, please contact Thomas Ward.

NGINX PPA Cleanup

Feb 05, 2016
 

The NGINX PPAs have had some cleanup done to them today.

Previously, the PPAs kept the ‘older’ package versions in them for now-EOL releases (this included keeping ancient versions for Maverick, Natty, Oneiric, Quantal, Raring, Saucy, and Utopic). This was decided upon in order to prevent people from seeing 404 errors on PPA checking. We also included a large list of “Final Version” items for each Ubuntu release, stating there would be no more updates for that release, but keeping the ancient packages in place for installation.

Looking back on this, this was a bad thing for multiple reasons. Firstly, it means people on older releases can still use the PPA for that release, which means versions of NGINX with known security holes could still be in use. Secondly, it implies that we still 'support' the use of older releases of Ubuntu in the PPAs. This has the security connotation that we are OK with people using no-longer-updated releases, which in turn have their own security holes.

So, today, in an effort to discourage the use of ancient Ubuntu versions which get no security updates or support anymore, I've made a change to the way the PPAs will operate going forward: unless a release recently went End of Life, nginx packages in the PPAs for older Ubuntu releases are no longer going to be kept, and will be deleted a week after the Ubuntu release goes End of Life.

Therefore, as of today, I have deleted all the packages in the NGINX PPAs (both Stable and Mainline, in both staging and release PPAs) for the following releases of Ubuntu:

  • Maverick (10.10)
  • Natty (11.04)
  • Oneiric (11.10)
  • Quantal (12.10)
  • Raring (13.04)
  • Saucy (13.10)
  • Utopic (14.10)

People still using ancient versions of NGINX or Ubuntu are strongly recommended to upgrade to get continued support and security/bug fixes.