NGINX Updates: Ubuntu Bionic, and Mainline and Stable PPAs

Apr 18, 2018

NGINX has been updated in multiple places.

Ubuntu Bionic 18.04

Ubuntu Bionic 18.04 now has NGINX 1.14.0 in its repositories, and will very likely keep 1.14.0 for the lifecycle of 18.04, from April 2018 through April 2023, once it is released.

NGINX PPAs: Mainline and Stable

There are two major things to note:

First: Ubuntu Trusty 14.04 is no longer supported in the PPAs, and will not receive the updated NGINX versions. This is due to the older libraries in the 14.04 release, which are too old to compile the third-party modules included from the Debian packages. Individuals on 14.04 who want newer releases should strongly consider using the upstream repositories instead, as those packages don’t require the newer libraries that the PPA packages do.

Second: With the exception of Ubuntu Trusty 14.04, the NGINX PPAs are in the process of being updated with NGINX Stable 1.14.0 and NGINX Mainline 1.13.12. Note that 1.14.0 is equal to 1.13.12 in terms of features, so you should probably use NGINX 1.14.0 instead of 1.13.12 for now. NGINX Mainline will be updated to 1.15.x when NGINX makes a ‘new’ Mainline release that is ahead of NGINX Stable.
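For reference, adding the PPAs looks like this; these are the usual nginx Stable and Mainline (development) PPA names on Launchpad. Pick one or the other, not both:

```shell
# NGINX Stable PPA:
sudo add-apt-repository ppa:nginx/stable
# Or, for the NGINX Mainline PPA instead:
# sudo add-apt-repository ppa:nginx/development
sudo apt-get update
sudo apt-get install nginx
```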

My Secret for Multiple Python Environments without Ubuntu Packages for them: `pyenv`, and `virtualenv`.

Dec 04, 2017

Been a while since I last posted, heh. Been busy, unfortunately.

In any case, I ran into multiple problems recently. Some Python scripts I was writing worked fine on Python 3.5, but failed on a newer system running Python 3.6.3 (though they somehow worked on 3.6.1). This was a small headache, because my 16.04 system only has Python 3.5 available on it.

So, I had four choices:

  1. Use multiple systems (virtual or bare metal) or containers to get newer Ubuntu versions, which could be a headache.
  2. Upgrade my own system to a newer version (which I don’t want to do, because I track LTSes for my regular-use laptop).
  3. Build multiple Python versions on my system in a way that doesn’t clobber the system packages, or
  4. Use Python virtualenv and pyenv.

Now, fortunately for me, I’m a power user. I could have pursued any of these options.

I could have done option 1, because my computer can handle amazing things like VMs, or I could have used LXD to install a newer OS image and used that for testing. However, that requires me to duplicate whatever I’m working on and run it separately or run its tests separately, which can cause some… headaches. Mainly because I can’t use PyCharm IDE to run everything locally and test, or use its debugger for the newer version.

Option 2 is a no-go – my preference is to stick to LTSes for my system, and not use the cutting-edge because of problems that usually arise.

Option 3 is painful because I have to compile each version I need from source. Granted, I’d have to do the same for any newer unpackaged version, or install versions made by others which could be malicious.

This leaves option 4 – use virtualenv for custom project directories and execution environments, or for PyCharm IDE, set up a pyenv with a given Python version, and point PyCharm IDE at that. This option works because everything runs in userspace for these environments and doesn’t require me to do much messing around with my system.

Now, you’re probably wondering a couple of things: what are virtualenv and pyenv, and why should I use it. I’ll address both. I’ll also address how to get all this set up.

What are virtualenv and pyenv, and what do I use them for?

This is a two-part question. Put simply, pyenv provides a method for creating Python installations and environments in a local directory, usually within the user’s home directory in a .pyenv folder, with a fairly complicated layout. You can also create environments based on standard Python versions and use those environments as-is. This allows you to maintain multiple Python setups for multiple Python versions.

That said, there’s also a headache – if you want to install a specific program that needs to be its own new item via pip3 it can cause some issues. Especially if you want to not have to alter your pristine environment. A prime example of this is the PostgreSQL administration tool called pgadmin4 because it needs a newer Python version than what Ubuntu 16.04 has. For these cases (with Python 3.0 and above only), I also pair pyenv with the standard python3-virtualenv software. I used virtualenv to create a pgadmin4 environment based on Python 3.6.3. This allowed me to make a pristine environment for pgadmin4 to modify. This has its own ups and downs, but sorta worked for what I need.

However, for my other programs and scripts I develop, I only need the standard Python 3.6.3 environment. To that end, I will only explain the use of pyenv here.

Installing pyenv, and making it work.

Now, this is the fun part. I started with the instructions here, which is a pretty good guide for 15.10. It’s also valid here, so I’ll bring you the relevant steps.

  1. Install dependencies.
    sudo apt-get update && sudo apt-get upgrade
    sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev git
  2. Download the pyenv script.
    curl -L | bash
  3. Update your .bash_profile or .bashrc file with the following:
    export PATH="$HOME/.pyenv/bin:$PATH"
    eval "$(pyenv init -)"
    eval "$(pyenv virtualenv-init -)"
  4. If you want to make this work instantly without restarting your terminal, run source ~/.bashrc. Otherwise, restart your Terminal or SSH session.

That’s really all to make that work. However, there’s a lot of usage parts of relevance here.

Install a different Python version with pyenv

This is the easy part. It’ll download and compile Python versions from the source tarballs from Python upstream. It’s useful because you can get multiple Python environments.

Let’s start by installing an oddball Python version, like Python 2.5 (specifically, 2.5.6, the last version of the 2.5.x series) which is out of date. Just run pyenv install 2.5.6, let it download, and then compile and install it.

But, how do I use it?

Well, let’s assume we’re in my computer, where the python3 executable on the system itself is actually 3.5.2. I need 3.6.3. I ran pyenv install 3.6.3 and it installed. Now, how do I use it?

Well, that is the usefulness of the pyenv software: I can change the local Python environment temporarily. By running pyenv local 3.6.3, the system will put me into the 3.6.3 environment. This is useful because, if I run python3 after executing pyenv local 3.6.3, the Python 3 version is 3.6.3 for that session. And if you want to switch back to the system’s 3.5.2 version, just run pyenv local system to return to the standard system version of Python, which is 3.5.2 on a 16.04 system.
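As a quick sketch of that workflow (the project path is hypothetical; the versions assume a 16.04 host with 3.6.3 installed via pyenv):

```shell
cd ~/projects/myscript   # hypothetical project directory
pyenv local 3.6.3        # writes a .python-version file in this directory
python3 --version        # reports Python 3.6.3 while in this directory
pyenv local system       # back to the distro Python (3.5.2 on 16.04)
```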

You mentioned PyCharm IDE… how do I use this with pyenv?

Well, thankfully, we don’t actually have to do any of the pyenv setup via PyCharm. PyCharm is smart enough to detect obvious locations, such as the pyenv installation directories, and can offer them as usable environments. You just have to adjust the environment settings for the given PyCharm project (or run configuration) to use the alternate Python version as the runtime environment. This is fairly straightforward: for any given PyCharm project, there’s a “Project Interpreter” setting where you can choose the specific interpreter to use.

This means you can choose the Python version to test and program against. In that setting, there is a small gear icon next to the interpreter selection dropdown. Click it, choose “More…”, and if you’re lucky, you’ll see the pyenv Python version in there. If not, you just have to add it. It’ll be in /home/$USER/.pyenv/versions/VERSION/bin/pythonX.Y, where you replace $USER with your username, VERSION with the version number you installed, and X.Y with the major and minor version numbers (for example, 3.6 for 3.6.3). Then PyCharm can work with that version.

It’ll be a bit more complex than this, of course, but this is the basics.

Adventures in Networking: RamNode VDS, IPv4 and IPv6 on-link addresses, LXD, Bridges, and 1:1 NAT (IPv4 and IPv6)

Jun 25, 2017

The Environment

I have a RamNode VDS, which is a wonderful thing to have. 8GB RAM, 2 dedicated vCPU, an entire IPv6 /64, and a bunch of addon IPv4 addresses.

It’s currently being used as an LXD container host, with more and more things are becoming containerized on this box to replace other servers of mine that I have from RamNode, to cut back on “overall” costs.

But there’s one problem with LXD: The default LXD bridge usually gets set up to masquerade as the host node’s default IP route. This is problematic if we want certain containers to be accessible over different IPs on the host, and different firewall rules for each.

We also have a second issue here: The IP addresses are not directly routed as “routable” addresses. They’re routed as on-link addresses, so we can’t use Stephane Graber’s suggested default of setting the host interface to manual, and letting the bridges and LXD containers negotiate for the actual static public IP addresses.

So, we are left with one solution: 1:1 NAT addressing with a static public IP address NAT’d to a static internal IP address.

1:1 NAT

A brief description of NAT in general

For those who may not know exactly what NAT is, it’s the automatic routing of internal private network packets out over a specific public IP address, and the corresponding inbound traversal of packets so it gets back to the proper internal IP address.

Anyone with a router on their home network that’s getting a dynamic IP address is actually using NAT, whether they’re aware of it or not. (There are some exceptions – some routers properly receive an entire /64 prefix or so of IPv6 addresses, which is in turn dynamically assigned to clients connecting to the network, like on Comcast’s Gateway/Modem/Router combos. And some enterprise networks have so many IPv4 and IPv6 addresses that they are individually statically assigned and routed to machines, which means every system has a public IP address. But the average network, with a handful of IP addresses, web gateways, etc., is going to have some form of NAT involved, which is how most firewalls work.)

What makes 1:1 NAT different?

1:1 NAT is mostly the same, but a little bit different. In a 1:1 NAT setup, you set a specific traffic route. That is, if you have internal IP addresses (say, and and want them to go out over public IP address and only that, you configure the system NAT to specifically route those two internal IP addresses over the single public IP address of, and not over other IPs on the gateway to the Internet. (Those addresses are just examples.) In this way, you can have different internal IP addresses using different external IP addresses, provided the system doing the NAT routing has those public IP addresses.

You can do this on IPv4 without question. IPv6 gets… interesting.

LXD: The Bridges

Here’s where I do some magic. I have, actually, two separate LXD bridges / network segments – lxdbr0 and NoNAT0. They are each configured slightly differently.

lxdbr0: The MASQUERADE-ing bridge

I have the standard lxdbr0, which has an IPv4 address and CIDR range like as the host IP address for the bridge (yours will differ); it also has the IPv6 address and range of fd42:a6c7:f9ea:ff9c::1/64. This network is set up with the mostly-default bridge settings of ipv4.nat: "true" and ipv6.nat: "true", which allow all of the IP addresses on that bridge’s internal network to ‘pretend’ to be the default IP route for the host machine. Traffic from this bridge goes out over the primary IP address of my VDS, as that’s the default route.

NoNAT0: A Non-MASQUERADE-ing Bridge

This is the ‘other’ bridge on my host. This is for all the containers getting 1:1 NAT.

It has an IPv4 address and CIDR range of (an example I’ll use throughout), and the IPv6 address and range of fd00:0000:dead:beef::1/64. But, unlike lxdbr0 above, it has the bridge settings of ipv4.nat: "false" and ipv6.nat: "false". This tells LXD not to automatically configure the MASQUERADE rules on the host node. This means there is no NAT traversal for the containers on this bridge, and we have to set up 1:1 NAT for anything to traverse outbound.

Another big note: I do not leave the container IPv6 unconfigured. IPv4 is DHCP’d, and while I reserve a specific range for DHCP in IPv6, I usually set up static IPv6 addresses later.

This is the config of my NoNAT0 bridge, which you may want to emulate to get IPv6 1:1 NAT working properly:

  ipv4.dhcp: "true"
  ipv4.firewall: "false"
  ipv4.nat: "false"
  ipv6.address: fd00:0000:dead:beef::1/64
  ipv6.dhcp: "true"
  ipv6.dhcp.ranges: fd00:0000:dead:beef:ffff:ffff:fff0:0001-fd00:0000:dead:beef:ffff:ffff:fff9:ffff
  ipv6.dhcp.stateful: "true"
  ipv6.firewall: "false"
  ipv6.nat: "false"
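If you want to build an equivalent bridge from scratch, here’s a sketch using the lxc network commands (LXD 2.3 or later; the IPv4 subnet is an example of my own choosing):

```shell
lxc network create NoNAT0
lxc network set NoNAT0 ipv4.address
lxc network set NoNAT0 ipv4.nat false
lxc network set NoNAT0 ipv6.address fd00:0000:dead:beef::1/64
lxc network set NoNAT0 ipv6.dhcp.stateful true
lxc network set NoNAT0 ipv6.nat false
# Attach an existing container's eth0 to the new bridge:
lxc network attach NoNAT0 my-container eth0
```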

1:1 NAT: The Setup

For sane 1:1 NAT, especially on RamNode virtual systems, we need to know the host system’s network interface. That’s easily obtained with ifconfig. On Ubuntu 16.04 and up, it’s usually ens-prefixed; on my system, all the host’s IP addresses are on ens3, the only interface known on the host system.

A very big notice here: when we set up 1:1 NAT, we *dedicate* that NAT route. That is, if we want a different private IP address to also go out over the specific public address we’re setting up in the 1:1 NAT route, it has to be added to the rules for that IP route; otherwise it can’t sanely use it, and the default route will be used instead (for MASQUERADE-configured bridges).

Now, onto the configurations. Note that whether you have a GUI or not, it’s easiest to mess with netfilter and NAT via the command line. So we’ll do that, via iptables for IPv4, and ip6tables for IPv6.

Note that you may also want to install iptables-persistent, or whatever it is in your distribution, to “keep” the rules being added. I use Ubuntu and Debian systems, so that’s the package name I know. CentOS-based systems will differ in where the firewall config is saved, but the initial rules to set up 1:1 NAT are actually the same across most systems.
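On Ubuntu and Debian, that looks roughly like this (the package and save mechanism differ on CentOS and friends):

```shell
sudo apt-get install iptables-persistent
# After adding your NAT rules, save both the IPv4 and IPv6 rule sets
# so they survive a reboot:
sudo netfilter-persistent save
```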

Setting Up 1:1 IPv4 NAT

Let’s assume we have the public IP address of on the host system, and the container I want to set up the 1:1 NAT for has the internal IP on my NoNAT0 bridge of We’ll also assume that this is statically set on the NoNAT0 bridge. We can also assume the LXD container is named ‘my-container’.

On the host, we just need to execute the following commands to make everything work. The commented lines (preceded with a #) are just there for information of what each command does. You’ll either need to be the root user, or have sudo access, to execute these rules. Also note that I add a comment that will show up when I do iptables -t nat -L -n -v or just plain iptables -L -n -v, so I know what rules do at-a-glance.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for inbound traffic on the host interface.
iptables -t nat -A PREROUTING -i ens3 -d -j DNAT --to-destination -m comment --comment "1:1 NAT - <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for outbound traffic on the host interface.
iptables -t nat -A POSTROUTING -o ens3 -s -j SNAT --to-source -m comment --comment "1:1 NAT - <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic.
iptables -A FORWARD -s -j ACCEPT -m comment --comment "1:1 NAT - <-> lxd:my-container"
iptables -A FORWARD -d -j ACCEPT -m comment --comment "1:1 NAT - <-> lxd:my-container"

Once that’s in place, you should be able to access external IPv4 addresses from within the container, and reach directly into the container via the public IP address.

Setting up 1:1 IPv6 NAT

Now before anyone tells me that IPv6 NAT doesn’t exist, you’re right – the whole set of NAT functionality for IPv6 doesn’t really exist in the IPv6 spec. However, the Linux kernel and others did create the ability to change source and destination IPv6 addresses on packets via similar SNAT/DNAT rules.

Let’s assume we’re still working with the same container (‘my-container’ from the IPv4 section), and that the container has the IPv6 static private address of fd00:0000:dead:beef:cafe::ff10. Let’s also assume the host node has the IP address of 2583:ffff:ffff::dead:beef:cafe on its ens3 interface, and that’s what we want to 1:1 NAT. (I created this public IPv6 address off the top of my head, if it actually is in use, I apologize!)

The rules are similar, but different, in that they use ip6tables.

# This rule adds the proper route for the public IP of the host to the private IP 
# of the container, for Inbound traffic on the host Interface.
ip6tables -t nat -A PREROUTING -i ens3 -d 2583:ffff:ffff::dead:beef:cafe -j DNAT --to-destination fd00:0000:dead:beef:cafe::ff10 -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# This rule adds the proper route for the private IP of the container to the public IP 
# of the host, for outbound traffic on the host interface.
ip6tables -t nat -A POSTROUTING -o ens3 -s fd00:0000:dead:beef:cafe::ff10 -j SNAT --to-source 2583:ffff:ffff::dead:beef:cafe -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

# These next two rules are necessary to make sure that we can actually do the NAT
# forwarding of traffic. (Note both match the container's private address: the
# SNAT/DNAT translations happen outside the FORWARD chain.)
ip6tables -A FORWARD -s fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"
ip6tables -A FORWARD -d fd00:0000:dead:beef:cafe::ff10 -j ACCEPT -m comment --comment "1:1 NAT - 2583:ffff:ffff::dead:beef:cafe <-> lxd:my-container"

Now, your container should be able to work with IPv6 addresses, and be seen as the public IPv6 address on the Internet.

And now you’re done!

You should be good to go with 1:1 NAT. Just make sure to set up the firewall on the *container* to protect it as if it were the server itself and not a container – do those final hardening steps the same as you would for any server.
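As a minimal hardening sketch inside the container, assuming ufw is available and you only want SSH and HTTPS exposed:

```shell
lxc exec my-container -- sh -c '
  apt-get install -y ufw
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp    # SSH
  ufw allow 443/tcp   # HTTPS
  ufw --force enable
'
```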

Oct 13, 2016

After a long enough time, I’ve finally got builds working again! NGINX PPAs now have updated builds pending.

  • Stable PPA: Resync 1.10.1 packages with Debian, incorporate 1.10.1-3 into the PPAs. (Includes dynamic modules as exist in Unstable)
  • Mainline PPA: Resync basic packaging with Debian, using 1.10.1 as a base for Mainline packaging. Bump Mainline version to 1.11.5 for the PPA.

These updates *do* have a resync with Debian on the packaging, which should address some issues, and also a bump in Mainline to the latest release there.

As of October 13, 2016, at 10:54 AM (UTC-04:00), these are not yet available in the primary Stable or Mainline PPAs, and exist in the staging repository. I’m waiting on the builds to finish running and uploading first, because I can’t copy them over until that’s done.

As of October 13, 2016, at 11:04 AM (UTC-04:00), the packages have been copied from the staging repositories into the primary PPAs. They should be available for use soon.

That said, these updates being overdue as they are, I will have to make some decisions here. And these decisions are effective as of now, mostly for Precise and older Ubuntu releases using the PPAs.

  • Ubuntu Precise will continue to get Stable PPA updates until April 2017. After April 2017, Precise will no longer be supported in the PPA.
  • Ubuntu Precise will no longer receive Mainline PPA updates effective October 15, 2016. We saw this before when we tried to backport newer NGINX to older Lucid releases long ago. The trouble with supporting old releases is multi-fold, but with regards to NGINX and Precise the two primary issues are supporting the build dependencies which continue to evolve as newer versions are available, and the timeline for Ubuntu to End Of Life the old Precise release. Precise is scheduled to go End of Life in April 2017. People still using Precise should be upgrading to Trusty or later at this point. Given this timeline of support, and the build dependencies issues, it will become far too difficult to maintain Mainline for Precise. (If an update to Mainline includes Security content, then an update will be made to Precise; however, no other updates will happen to Precise, so go and upgrade your Precise servers sooner rather than later!)

This may inconvenience some people using Precise, but unfortunately it’s getting too difficult to maintain NGINX for ancient releases.

Speaking of old releases, the PPAs are getting a cleanup too. Vivid and Wily packages, both releases now EOL, will be having their packages removed shortly.

The Road Ahead for NGINX in Ubuntu

Jul 11, 2016

Hello, everyone! Two blog posts and a flurry of tweets in a day, what the heck has gotten into me?

Some fun things have happened in the last development cycle leading up to Xenial for nginx! Let’s recap a couple of the big ‘great’ happenings:

  • NGINX 1.9.x was accepted into Xenial during the development process.
  • Later in the dev cycle, we were given the ACK by the Security Team to enable the HTTP/2 module (yay, HTTP/2 support!)
  • Close to the end of the cycle, that was updated to 1.10.x post-release, to get us onto a Stable version for the duration of the LTS! Yay, an LTS with a Stable version!

All in all, a good dev cycle for getting NGINX into the Ubuntu repositories! Now, we look ahead to the future.

First, a note about Wily: as of today, the NGINX PPAs will no longer get any Wily updates. With Wily’s End of Life date only a couple of weeks away, I can’t guarantee there’ll be any updates beyond security-critical ones.

This means, for the most part, that bugs filed against the Wily package in Ubuntu will get less scrutiny as we focus on the future. Any such Wily-filed bugs will need to be confirmed in a release with an equal or newer version (basically, Xenial or later) before I or anyone else pokes at them (this doesn’t prevent the community from submitting patches, though). It also means people on Wily boxes who want continued NGINX support should upgrade to Xenial, because I can’t guarantee they’ll get the updates they wish. And once Wily goes EOL, they get nothing.

Secondly, the road ahead. Up in Debian, they’re starting to test builds against the next OpenSSL version (1.1.0). Unfortunately, NGINX Stable 1.10.x doesn’t build. After poking upstream, I’ve learned there is a fix for this… but for NGINX Mainline… and it won’t be backported to 1.10.x. This is a little bit of a headache, for a couple reasons.

  1. NGINX Stable 1.10.x is not going to be able to be supported at some point in the future in Ubuntu, because it won’t have OpenSSL support.
  2. To get NGINX Mainline as the version in Ubuntu, I need to merge in the quite-evil Debian ‘dynamic modules’ support.
  3. Further, to get NGINX Mainline into Ubuntu during a development cycle, I need to go and pull in from Debian Experimental, and then build test against the older OpenSSL to make sure nothing dies off.

The big issues of this are mostly that we don’t know the full timeline of OpenSSL 1.1.0 being released in Debian. I have assurances from the Ubuntu Security Team, however, that OpenSSL 1.1.0 will not be included until packages don’t Fail to Build from Source (FTBFS) against it. Which means that I don’t have to act on this immediately.

The additional headache on this list is that, while I can merge in dynamic module support, it is not 100% ‘supported’ yet in Debian, and it won’t be totally supported in a sane way for packages which ship third-party modules. There have been discussion threads about some third-party modules packaging themselves as dynamic modules for Ubuntu Universe / Debian. This is a double-edged sword: not only do I have to worry about NGINX updates, I will also have to make sure all the dynamic modules get rebuilt for each upload. I’ll be working to find a better solution, but given the signature-based approach to dynamic modules that exists currently, this will sometimes hold up updates. We’ll work through this, though, and make it more supportable in the future.


Just wanted to give you all some insights into the future of NGINX, and the headaches I will have to work through, for Ubuntu’s packages going forward.

NGINX Mainline PPA: 1.11.2 is being built and released; Ubuntu Wily EOL: No new NGINX versions in PPAs for Wily.

Jul 11, 2016

Been a while since I posted about NGINX on my blog.

Anyways, good news: NGINX 1.11.2 has been uploaded to the staging PPA, and is in the process of being built. If there are no issues with the builds, I’ll push the packages to the main Mainline PPA when they’re completed.

NGINX 1.11.2 includes a few new features, but also a bunch of bugfixes:

Changes with nginx 1.11.2                                        05 Jul 2016

    *) Change: now nginx always uses internal MD5 and SHA1 implementations;
       the --with-md5 and --with-sha1 configure options were canceled.

    *) Feature: variables support in the stream module.

    *) Feature: the ngx_stream_map_module.

    *) Feature: the ngx_stream_return_module.

    *) Feature: a port can be specified in the "proxy_bind", "fastcgi_bind",
       "memcached_bind", "scgi_bind", and "uwsgi_bind" directives.

    *) Feature: now nginx uses the IP_BIND_ADDRESS_NO_PORT socket option
       when available.

    *) Bugfix: a segmentation fault might occur in a worker process when
       using HTTP/2 and the "proxy_request_buffering" directive.

    *) Bugfix: the "Content-Length" request header line was always added to
       requests passed to backends, including requests without body, when
       using HTTP/2.

    *) Bugfix: "http request count is zero" alerts might appear in logs when
       using HTTP/2.

    *) Bugfix: unnecessary buffering might occur when using the "sub_filter"
       directive; the issue had appeared in 1.9.4.

All in all this is a good thing.

However, for Ubuntu Wily 15.10 server users, who use the Mainline PPA, this is the last update for the Mainline PPA for Ubuntu Wily. Ubuntu Wily goes End of Life on July 28, 2016. This means it will no longer be supported by Ubuntu upstream, and will receive no new security updates, bug fix updates, etc. on that date. With the EOL date being so close, this is the last upload to the Mainline PPA for Ubuntu Wily. (This also holds true for the Stable PPA – there will be no new Wily updates except for security updates that may happen between now and July 28th.) downtime – May 03, 2016

May 01, 2016 may have temporary downtime on May 3rd, 2016 for system updates on the underlying server.

Downtime is not a guarantee, but should downtime occur, it is not predicted to be down for more than 30 minutes.

If you have any questions about this downtime, please contact Thomas Ward with those questions.

Ubuntu Server 16.04 on RamNode KVM VPSes: A How-To Guide Using the ISOs

Apr 26, 2016

This is a guide to make RamNode KVM VPSes run Ubuntu 16.04 Server, by manually setting it up via the Ubuntu Server 16.04 ISO.

  1. You will need:
    • A new RamNode KVM VPS (or one you don’t mind losing all the data on).
    • Knowledge of Linux.
    • Access to VPSCP (Solus).
  2. Start by setting up the VPS with a pre-made image (14.04 Minimal works).
  3. Login to the VPS, get the /etc/network/interfaces information, and store it in a file kept off the server as a reference for the settings. We’ll use this later if the configuration fails, or to know what data needs to be provided for the system to work.
  4. Login to VPSCP.
  5. Go to your VPS settings, open the settings for your VM, and under the CDRom tab select the “Ubuntu 16.04 Server x86_64” item, and hit “Mount”.
  6. Under the “Settings” tab on the VPSCP for the VM, make sure Boot Order is set to “(1) CDROM (2) Hard Disk”
  7. Either use VPSCP to shut down the VPS, or login via SSH and then shut down the VPS.
  8. Click “Boot”. The VPS will now boot, and boot from the CDROM image.
  9. In the VPSCP, click the “VNC” button for your VPS. You can use the HTML5 VNC client or use an actual VNC client to connect to the connection information available on the VNC Viewer page.
  10. Once VNC is up, select your language from the CDROM prompt, then select “Install Ubuntu” on the screen that remains.
  11. Follow the screen prompts, providing the relevant information requested by the system. When it gets to the option for partitioning, and says “reuse partitions” or “Erase disk”, select the “Erase Disk” option that says to use LVM.
  12. Go through the rest of the prompts, and select the software features you want to install. Once it’s installed, the VPS will reboot. Close the VNC connection.
  13. Go back to VPSCP, and click “Shutdown” on the VPS control panel. The VPS will now boot from the installed system when you select “Boot from first hard disk” from the CDROM menu.
  14. Under “Settings”, change your “Boot Order” to “(1) Hard Disk (2) CDROM” or to “Hard disk only”.
  15. Before we start the VPS, we need to reconfigure the networking – use the “Reconfigure Networking” option in VPSCP, which installs the Solus network configuration to get the networking up and working. Note that this configuration will need to be edited.
  16. Once the “Reconfigure Networking” step is completed, and your VPS boots, connect once again to the VNC. Login to the server, using the credentials you set up during the installation steps via the ISO. Note that the VPS will not have working networking at this step – you must UPDATE the configuration to adapt for the Predictable Network Interface Naming, which Solus does not yet support in its network auto-configuration.
  17. Run the following command, and take a note as to what network interface name(s) come up other than lo (it may show up as ens3 or similar):
    ifconfig -a
  18. We now need to edit /etc/network/interfaces. Wherever eth0 shows up in the interfaces file, replace it with the interface name you gathered in step 17.
  19. Reboot the VPS. Once it comes back up, we have to test its connectivity to the Internet.
    1. If you selected “OpenSSH server” or “SSH Server” during the installation steps, then you can attempt to directly SSH to your server, specifying the user you configured, and the IP address, for the connection details.
    2. If you did not select to install an SSH server, then connect to the server via the VNC, and login with the credentials you specified during installation.
  20. Once on the server, make sure you get ping replies for both of these commands.
    ping6 2001:4860:4860::8888
  21. If you received ping replies from the above commands, you have successfully redone the network configuration on the server, and everything is all ready for you to begin using your Ubuntu 16.04 VPS on RamNode! If not, verify the network settings put in place by Solus match the network settings you should be using.
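For reference, after the edit in step 18, a minimal static /etc/network/interfaces might look something like this (the addresses are placeholders; keep the values Solus generated for your VPS):

auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
    gateway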
Apr 12, 2016

Hello again! NGINX 1.9.14 is now available in Ubuntu Xenial. There are quite a few things we should make known to everyone who currently uses nginx with php5-fpm in Ubuntu!

HTTP/2 is now enabled

Yes, HTTP/2 is now enabled for nginx-core, nginx-full, and nginx-extras in Ubuntu Xenial. Add http2 to your SSL listener line in your server blocks, and HTTP/2 will be enabled for that port and site.
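For example, a server block’s listener lines would change to something like this (the server name and certificate paths are placeholders for your existing values):

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        ssl_certificate     /etc/ssl/certs/example.pem;
        ssl_certificate_key /etc/ssl/private/example.key;

        # ... the rest of your existing site configuration ...
    }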

For HTTP/2 on non-Xenial Ubuntu releases, you can use the Mainline PPA for Wily and later. Anything before Wily does not have full HTTP/2 support, and very likely will not be usable to get HTTP/2 working as intended.

Ubuntu Xenial ships php7.0-fpm, and not php5-fpm, and this will break existing site configurations

The Ubuntu Xenial packages for nginx have already been updated for this change, pointing to php7.0-fpm instead of php5-fpm.

However, users who have existing site configurations will not benefit from these changes. They must manually apply the changes.

Effectively, this is what a default setup uses to interface with the default php5-fpm setup on Ubuntu versions before Xenial, passing all PHP processing to the php5-fpm backend. This is from the default configuration file, but it’s still similar for all PHP passing:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php5-cgi alone:
        # fastcgi_pass;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

In Ubuntu Xenial, the TCP listener for php7.0-cgi will be unchanged, however for php7.0-fpm, it will be necessary to update the configuration to look like this for existing site configurations:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php7.0-cgi alone:
        # fastcgi_pass;
        # With php7.0-fpm:
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

This will prevent HTTP 502 Bad Gateway errors, and will use the updated php7.0-fpm instead of the php5-fpm packages.

(If for some reason you still want to have php5-fpm under Xenial, you will not be able to get support from Ubuntu for this; you will need to use a PPA. I explain this on a different post on my blog.)

Ubuntu Xenial: Adding php5.6 to Xenial

Apr 12, 2016

Ubuntu Xenial will not ship php5 at all.

The only way to get continued php5 access is to use a PPA, specifically Ondřej Surý’s PPA for co-installable php5 and php7.0. However, this is not supported by the Ubuntu Server Team or the Ubuntu Security Team, and you accept the risks therein of using PPAs for getting php5.

The packages are *not* named php5 but are instead named php5.6.

So, to add php5.6-fpm to Xenial, you would do something like this to add the PPA, update, and then also install php5.6-fpm and dependencies:

sudo apt-get install python-software-properties
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php5.6-fpm

(Note that I have not tested this; this is, however, supposedly usable based on user experience data gathered on Ask Ubuntu by myself.)

This should be a similar process for any of the other php5.6 packages you would need. However, you do NOT need to re-add the PPA if it’s already on your system.