Tuesday 9 May 2023

Let's Encrypt on OPNSense, using a local Bind server because I'm too cheap for Namecheap API

I've recently been migrating my home network to a ProxMox + OPNSense based router. I used to use a fairly high end consumer grade tri-band router/AP flashed with dd-wrt, but I've long been frustrated with the fact that it basically could not be updated - whenever I tried a newer version of dd-wrt it always ended in major stability issues forcing me to downgrade, and even if that wasn't an issue, dd-wrt recommends erasing the nvram when applying an update, which effectively means wiping all the settings and configuring everything again from scratch. That means that even if those stability issues have since been resolved, I can't really afford to try an update to find out, and as a result I'm effectively running firmware that is almost a decade old, susceptible to who knows what security vulnerabilities.

I've been pondering what to do about this for years, but a few recent factors have finally pushed me to upgrade:

  • We have a smart home now, and the number of devices trying to connect to the 2.4GHz WiFi simultaneously was overwhelming our consumer grade WiFi devices and we'd often find a device unable to connect ("Kettle isn't responding", or we'd see one of the esphome fallback hotspots show up). Our TpLink router provided by our old ISP has a hard limit of 30 devices, and I don't think my other consumer grade APs were doing much better. When every light switch/bulb is a device on your network, this becomes an issue very quickly.
  • We recently upgraded to NBN Fibre to the Premises with gigabit down, and our old WiFi devices were nowhere near this fast. Even the brand new TpLink WiFi 6 router provided by our new ISP cannot actually handle this speed - on WiFi with the largest channel width it supports (80MHz) it maxes out just shy of 700mbps even at point blank range.
  • We had a recent incident where our dd-wrt access point/router mysteriously locked up for several hours paralyzing our home network and smart home, and nothing I could do would make it responsive. WiFi was down, the switch was down, I couldn't even get to the admin page to find out what in the blazes was going on, and no amount of rebooting would help - actually, it seemed like every time it was about to bring up the WiFi the fault light illuminated and it rebooted itself. After a few hours it mysteriously started working again, and since dd-wrt doesn't save logs I have no idea what happened, but given how old the firmware was it wouldn't surprise me at all if it was the victim of a wireless Denial of Service attack. Unfortunately I didn't have any other devices that supported monitor mode ready to run Kismet or similar to prove this.

So, given that consumer grade WiFi+router combo devices tend to be poor at both tasks we've now separated them - our WiFi is now on a Ubiquiti WiFi 6 Pro access point, which is capable of doing around 1.5gbps on the 5GHz network (to nearby devices on a 160MHz channel, but even the 80MHz channel can do over 900mbps, whipping the ISP provided TpLink) and claims to be able to support 300+ simultaneous devices, which should hopefully sort out our smart home connectivity issues for the foreseeable future (though we might still need a second for devices with poor signal strength on the other side of the house - still using a consumer grade AP for those...).

As for the router component - that's now an OPNSense software router running in a virtual machine under ProxMox on one of these mini routers from AliExpress.

As for choosing OPNSense over PFSense - for the moment that choice is made for me as PFSense doesn't yet support the 2.5gbps network ports on this device. When that changes I may consider it as I do generally value stability over bleeding edge, and OPNSense has not exactly been bug free so far (though the development team have responded near instantly to the bug reports I've filed so far, so that's a huge plus). The nice thing about running these under ProxMox is that I'll be able to shut down the OPNSense VM and boot up a PFSense VM in its place when it's ready to try out, and I can easily switch back if need be.

Since installing the new router I've been slowly migrating services over to it from my previous router and old HP Microserver - Dynamic DNS, regular DNS and DHCP are now on OPNSense (not exactly without incident - but a DHCP bug report was filed and the OPNSense dev team fixed the issue in under 2 hours. I do miss being able to just edit a dnsmasq config file directly as we could do in dd-wrt, but realistically the web forms work fine in OPNSense). The unifi controller is now in one ProxMox container and frigate is in another. I've still got a few other services to move like Home Assistant and Plex, but there's a few others I want to set up that will need signed SSL certificates, so today's task was figuring out how to get Let's Encrypt working in OPNSense... and oh gawd this turned out to be not such an easy task. This was very much a case of one thing after another after another after another... And this is why I'm writing this blog post now, while it's still fresh in my mind, so that next time I go through this I can refer back to it.

Previously I've had this all working on Debian on my HP Microserver, where it basically places a challenge file on the web server to prove to Let's Encrypt that I own the web server that the domain name points to, and I remember it taking me a while to figure out how to make that work, but I remember that it wasn't too difficult in the end - at least I didn't deem that experience worthy of a blog post! OPNSense's os-acme-client plugin supports essentially this same method so my first thought was to use that... but there were a couple of problems that meant I ultimately did not attempt it:

  • The introduction page in the OPNSense ACME plugin says this method is "not recommended" and that "Other challenge types should be preferred".
  • This method requires that the acme plugin temporarily takes over port 80 / 443 on the router, leading to some brief downtime when this happens. My current setup under Debian is not subject to this as the plugin is able to use the running apache web server so can complete the challenge with no downtime. In reality this probably isn't much concern for a home network, as the downtime would be infrequent and brief, and home internet doesn't exactly have the best uptime anyway... but it is still not desirable.
  • The plugin has three settings, "IP Auto-Discovery", "Interface" and "IP Address", that all state "NOTE:This will ONLY work if the official IP addresses are LOCALLY configured on your OPNsense firewall", which is not currently the case for me as I still have the ISP provided router between my OPNSense router and the Internet (so my OPNSense router has a private IP on its WAN interface) - the ISP router is needed to provide a VoIP service (why this ISP doesn't use one of the UNI-V ports on the NBN NTD box like my previous ISP did, I don't know).
  • Even if I bypassed my ISP router so that the OPNSense router would have a public IP, if the "IP Address" field is mandatory (which is unclear, possibly one or both of the other settings would suffice in its place), my IP address is not static (ISP charges extra for that), and I do not want to have to edit anything if my IP changes (this will be a recurring theme throughout the rest of this post).

Ok, that leaves... DNS-01 as the only option... that or forgoing setting this up on OPNSense altogether, but I also want to play with using OpenVPN under OPNSense at a later date, and as I understand it that needs a signed SSL certificate so I have multiple reasons to push on (Edit: DO NOT use Let's Encrypt for OpenVPN, there are serious security concerns with doing so. Always use your own personal CA for OpenVPN)...

My darkstarsword.net domain is registered through Namecheap, and Namecheap is supported by the acme.sh/Let's Encrypt script, and it looks very simple to use - only needing a user and API key filled out. I already have an API key that I use for dynamic DNS and I don't even need to fill out my IP address - perfect!!! Or at least that's what I would be saying if I hadn't read the acme script's documentation on Namecheap first or noted some bug reports warning of dynamic DNS entries being wiped out after running the script. The API key they want is not the one used for dynamic DNS - it's a business / dev tools API key that is only available if your account has more than $50 credit (the fact that I've already paid 10 years in advance doesn't count apparently) or meets some other requirements. And you DO need to fill in your IP address on Namecheap's side - and as noted earlier, I don't want to go and edit anything when my IP changes.

So, that's out.

What are my options? Migrate to a different DNS provider that doesn't have such arduous requirements? Self hosting a name server for the whole domain doesn't seem viable either - again, my IP address is not static, and I want darkstarsword.net to be stable since many of the subdomains I've added point to various cloud servers that should remain available even if my home internet is down - like, for instance, this blog. The acme.sh documentation does talk about a DNS Alias mode, but that suggests it needs a second domain, and then I'd need to register that at another name provider, which doesn't seem much better than just migrating my existing domain... but wait, why does it need a separate domain? It's just setting up a CNAME record pointing at the other domain - couldn't that point to a subdomain of my existing domain instead? Could that subdomain have its nameserver self hosted on my own equipment and then have OPNSense update that? Yes, yes it can.

To try to clarify things I'm going to substitute some of the fun hostnames I'm using for more descriptive ones. In namecheap (or whatever other DNS provider you are using) you want entries similar to the following:

  • Type="A+ Dynamic DNS Record" Host="dyndns" - This will be dynamically updated to point to your home IP.
  • Type="NS Record" Host="home_subdomain" Value="dyndns.example.net." - This creates a subdomain managed by a nameserver running on your home IP.
  • Type="CNAME Record" Host="_acme-challenge.dyndns" Value="_acme-challenge.home_subdomain.example.net." - This tells the Let's Encrypt acme.sh challenge script to look for the challenge TXT record in your home_subdomain when creating an SSL certificate for "dyndns.example.net".

The A+ Dynamic DNS record type is specific to namecheap I think; other providers might work differently. On OPNSense this is updated via the os-ddclient plugin - install it via System -> Firmware -> Plugins and configure it under Services -> Dynamic DNS. This was reasonably straightforward to set up and I didn't encounter any issues here. Make sure that the name is resolving to your home IP before proceeding.
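
A quick sanity check you can run from any machine with Python, substituting the placeholder name used above for your own:

import socket

# Should print your current home IP once the dynamic DNS record has updated -
# compare it against what your router reports as its public address.
print(socket.gethostbyname('dyndns.example.net'))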

You can add additional CNAME records for additional hosts that you want certificates for, just substituting "....dyndns" in the Host field, or if you want to create a wildcard certificate just use Host="_acme-challenge" instead.

Next step is to install a DNS server on OPNSense... well, it already has Unbound and/or dnsmasq for your internal DNS, but AFAIK neither of those will work and so we need another one, and of course we can't just replace them because there's a bunch of features in OPNSense that only work with one or both of those, so... we'll be running two DNS servers on different ports. Some people elect to have one of these forward requests to the other, but I'm not going to do that as my internal network has no need of BIND, and the Internet has no need of my internal DNS, so at least for now I'll keep them independent of each other.

Head over to System -> Firmware -> Plugins and install os-bind. Start setting it up under Services -> BIND -> Configuration.

In the ACLs tab, create a new ACL, call it "anywhere" and set networks to "0.0.0.0/0" (maybe we can lock this down to just Let's Encrypt IPs + localhost/LAN?).

Back in the General tab, enable the plugin, change "Listen IPs" from "0.0.0.0" to "any" (this will be unnecessary soon - I spotted they fixed this on GitHub earlier today), change "Allow Query" to the "anywhere" ACL you just created and save. At this point you might want to verify that you can connect to BIND from your LAN - I was stuck here for some time until I worked out the issue with Listen IPs:

dig @192.168.1.1 -p 53530 example.com +short
93.184.216.34

Now, head over to the Primary Zones tab (I guess this used to be called Master Zones?) and create a zone for your home subdomain. Following the naming examples above and substituting with your own, set "Zone Name" to "home_subdomain.example.net", "Allow Query" to the "anywhere" ACL, "Mail Admin" to your email, and "DNS Server" to "dyndns.example.net".

Now create an NS record in this zone - without this BIND will refuse to load the zone. Leave the "Name" field blank, set "Type" to "NS" and set "Value" to "dyndns.example.net." - note, the trailing . is important here to indicate this is a fully qualified domain name, otherwise it would point to a sub-sub-sub...sub?-domain and BIND would complain about that too. Note that just because you need the trailing . here doesn't mean you need it elsewhere, and there's probably a few places that would break if you add it (and some where it won't matter or gets automatically added if it's missing, like on namecheap).

Now go and look at the Log Files section for BIND, and make sure you see "zone home_subdomain.example.net/IN: loaded serial ..." and not some error.

Next head on over to Firewall -> NAT -> Port Forward and add a new entry. Interface should be "WAN" (probably already set), Protocol needs to be changed to "TCP/UDP" (important, DNS needs both), Destination should be "WAN Address", "Destination Port Range" should have both From and To set to "DNS", "Redirect Target IP" should be "127.0.0.1" and "Redirect Target Port" should be "(other)" 53530. Put something meaningful in the Description field, such as "External DNS -> BIND (for ACME LetsEncrypt)", and save, then apply changes to the firewall when prompted.

At this point you might want to test whether this is working - I added a "test" A record to my zone in BIND pointing at a recognisable IP address and was able to confirm that "test.home_subdomain.example.net" successfully resolved to that IP, and I didn't have to explicitly point dig at my name server - it was able to follow the breadcrumb trail through namecheap to my BIND server and find the record. I did this test from an external server, but since we didn't set up any forwarding between Unbound and BIND, testing from your LAN should be nearly equivalent.
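
If you'd rather script that check, here's a rough Python equivalent (this assumes the dnspython package is installed, and uses the placeholder names from earlier in this post):

import dns.resolver

# Follows the normal delegation chain: registrar NS record -> your BIND server.
for rr in dns.resolver.resolve('test.home_subdomain.example.net', 'A'):
    print(rr)

# Confirm the ACME alias CNAME points into the delegated subdomain.
for rr in dns.resolver.resolve('_acme-challenge.dyndns.example.net', 'CNAME'):
    print(rr)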

Alright, home stretch - all that's left is setting up the ACME Plugin to use Let's Encrypt and start issuing certificates. Unfortunately this part went anything but smoothly for me, but given how quickly OPNSense devs move, the issues I encountered will likely already be fixed for you by the time you read this - they're already in github while I'm writing this.

Over in System -> Firmware -> Plugins install os-acme-client. Then head on over to Services -> ACME Client to configure it. Under Settings enable the plugin and apply. Under Accounts create two new accounts, one with the ACME CA set to "Let's Encrypt" and the second set to "Let's Encrypt Test CA" - the former is the real one, the latter we use to make sure things work without worrying about being rate limited if something goes wrong. Give them distinct names so you can tell them apart at a glance and fill out your email. You can ignore the EAB fields.

Take a detour over to System -> Access -> Users and edit the root user. Find "API Keys" near the bottom and click the plus to add a new one. This will give you an apikey.txt file that you should open as you will need it in a moment.

Head back over to Services -> ACME Client -> Challenge Types and add a new entry. I named mine "OPNSense Bind Plugin" and set the type to "DNS-01" and "DNS Service" to "OPNSense BIND Plugin". I left "OPNSense Server (FQDN)" set to "localhost" (this is for the dns update script running on OPNSense to find the OPNSense API, it's not used by Let's Encrypt so I don't see any reason to use anything other than localhost here) and "OPNSense Server Port" on 443 - you may need to change this if you are using that port for another service like nginx and have relocated the OPNSense web interface to another port (in my case 443 is still being port forwarded to my old server, though this will likely change soon). "User API key" and "User API token" should be filled out with the "key=....." and "secret=....." (without the literal "key=" and "secret=" part) values from the apikey.txt file you obtained in the previous step. Save.

Almost done - under Certificates create a new certificate. Set the "Common Name" to "dyndns.example.net" (substituting for your own host and domain, obviously). If you are going to create a test certificate first (recommended), write something like "test" in the Description field and set the account to the "Let's Encrypt Test CA" from earlier. "Challenge Type" should be "OPNSense Bind Plugin" and "DNS Alias Mode" should be "Challenge Alias Mode" (meaning the CNAME record you added in Namecheap a few pages ago is pointing to a record in your home subdomain named "_acme-challenge" - you can use the other option here if you decided you were too cool for that name. Automatic might work too - I haven't tried it), and "Challenge Alias" should be "home_subdomain.example.net".

Save. Make sure your certificate is enabled and click the "Issue/Renew All Certificates" button (or the one next to the certificate if you want to do it individually). Check the logs (both system + ACME) and see if it worked. For me it didn't - I got an "Invalid domain" error that cost me a few hours of debugging before I found it was fallout from the global movement to strike the potentially insensitive terms "master" and "slave" from general use, but that's fixed now (on GitHub at the time of writing, hopefully live by the time anyone reads this).

If that worked, then duplicate the certificate, change the description and account to the real live "Let's Encrypt" CA, save, disable the test certificate and issue the real one. Also maybe delete the test certificate from System -> Trust -> Certificates.

That's as far as I've got for now - I haven't actually started using the certificate for anything yet (hopefully that part will be a bit easier), but I think this is enough for one blog post. Before I go though, some food for thought - while setting this up I have been wondering if there might be any security concerns with this setup, and potentially there could be. If an attacker was using the same ISP as you they could potentially try to take your IP - say they went to your house and shut off your power at the breaker box, then started rapidly connecting and disconnecting their own internet hoping to be randomly assigned the IP address that you were using and that your dynamic DNS entry still points to until you get back online to refresh it. If they succeed they would potentially be able to issue certificates for your domains, which they could then use to masquerade as your servers in future MITM attacks. Maybe it's a good idea not to set up the wildcard _acme-challenge, so that they are limited to hijacking names you intended for your home services, which are probably not going to be of much use to them anyway - sure, they could theoretically MITM you while you're on a coffee shop WiFi connecting back to your home servers, but if they are capable of that you have much bigger problems on your hands. I don't think most people should be overly concerned about this, and if you are, consider asking your ISP for a static IP address - after all, if this is a legitimate concern in your threat model it's worth remembering that there are a host of other similar issues possible with a dynamic IP.

Monday 4 January 2016

Dealing with Ultra High Packet Loss

"The Brown Fox was quick, even in the face of obstacles"
  - Ian Munsie, 2016

Over the last couple of weeks my Internet connection has developed a fault which is resulting in rather high packet loss. Even doing a simple ping test to 8.8.8.8 shows up to about 26% packet loss! Think about that - that means that roughly 1 in every 4 packets might get dropped. A technician from my ISP visited last week and a Telstra technician (Telstra being the company responsible for the copper phone lines) is coming this Friday to hopefully sort it out, but in the meantime I'm stuck with this lossy link.

Trying to use the Internet with such high packet loss really reveals just how poorly TCP copes with this situation. See, TCP is designed around the assumption that a lost packet means the link is congested and that it should slow down. But that is not the case for my link - a packet has a high chance of being dropped even when there is no congestion whatsoever.

That leads to a situation where anything using TCP will slow down after only a few packets have been sent and at least one has been lost, and then a few packets later it will slow down again, and then again, and again, and again... While my connection should be able to maintain 300KB/s (theoretically more, but that's a ball park figure it has been able to achieve in practice in the past), right now I'm getting closer to 3KB/s, and some connections just hang indefinitely (it's hit or miss whether I can even finish a speedtest.net run). Interactive traffic is also affected, but fares slightly better - an HTML page probably only needs one packet for the HTML, so there's a 3/4 chance it will load on the first attempt... but every javascript or css file it links only has a 3/4 chance of loading, and every image has a lower chance (since they are larger and will take several packets or more) - some of those will be retried, but some will ultimately give up when several retries also get lost.
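
To put some rough numbers on that (a back of the envelope estimate that ignores retransmissions and assumes each packet is lost independently):

# Chance that an object needing n packets arrives with no losses on the
# first attempt, at 26% packet loss:
loss = 0.26
for n in (1, 4, 10, 50):
    print(n, 'packets:', round((1 - loss) ** n * 100, 1), '%')
# prints roughly 74%, 30%, 4.9% and ~0% respectively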

Now, TCP is a great protocol - the majority of the Internet runs on it and things generally work pretty well, but it's just not designed for this situation (and there's a couple of other situations which it is not suitable for either, such as very high latency links - it will not be a good choice as we advance further into space for example). The advent of WiFi led to some improvements in congestion avoidance protocols and tunables so that it doesn't immediately assume that packet loss means congestion, but even then it can only tolerate a very small amount of packet loss before performance starts to suffer - and experimenting with different algorithms and tunables made no appreciable difference to my situation whatsoever.

So, I started thinking - what we need is a protocol that does not take packet loss to mean congestion. This protocol would instead base its estimate of the available bandwidth on how much data is actually being received, and more to the point - how that changes as it varies how much data is being transmitted.

So, for instance, say it started transmitting at (let's pick an arbitrary number) 100KB/s and the receiver replied to tell the sender that it was receiving 75KB/s (25% packet loss). At this point TCP would go "oh shit, congestion - slow down!", but our theoretical protocol would instead try sending 125KB/s to see what happens - if the receiver replies to say it is now receiving 100KB/s then it knows that it has not yet hit the bandwidth limit and the discrepancy is just down to packet loss. It could then increase to 200KB/s, then 300KB/s, until it finally finds the point where the receiver is no longer able to receive any more data.

It could also try reducing the data being sent - if there is no change in the amount being received then it knows that it was sending too fast for no good reason, while if there is a change then it knows that the original rate was ok. The results would of course need to be smoothed out to cope with real world fluctuations and the algorithm would have to periodically repeat this experiment to cope with changes in actual congestion, but with some tuning the result should be quite a bit better than what we can achieve with TCP in this situation (at least for longer downloads over relatively low latency links that can respond to changes in bandwidth faster - this would still not be a good choice for space exploration).
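
As a sketch of that idea (this is just an illustration of the heuristic described above, not what my implementation below actually does):

class RateProbe:
    """Toy version of the probing heuristic: loss alone never triggers a
    slow-down; we only back off when sending faster stops delivering more
    data to the receiver."""

    def __init__(self, rate=100000, step=1.25):
        self.rate = float(rate)   # current send rate in bytes/sec
        self.step = step          # multiplicative probe factor
        self.last_goodput = 0.0   # receive rate reported last interval

    def update(self, goodput):
        """Call once per feedback interval with the receiver-reported rate."""
        if goodput > self.last_goodput * 1.05:
            # More data is getting through - the earlier shortfall was loss,
            # not congestion, so keep probing upwards.
            self.rate *= self.step
        elif goodput < self.last_goodput * 0.95:
            # Sending faster made things worse - we overshot the link, back off.
            self.rate /= self.step
        # Otherwise goodput is flat: hold the current rate and re-probe later.
        self.last_goodput = goodput
        return self.rate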

This protocol would need to keep track of which packets have been transmitted but not yet acknowledged, and just resend them after a while. It should not slow down until all acknowledgements have been received - if it has other packets that haven't been sent yet it could just send them and resend unacknowledged packets a little later, or if there's only a few packets it should just opportunistically resend them until they are acknowledged. It would want to be a little smart in how acknowledgements themselves are handled - in this situation an acknowledgement itself has just as much chance of being lost as a data packet, and each lost acknowledgement would mean the packets it was trying to acknowledge will be resent. But we can make some of these redundant and acknowledge a packet several times to have the best chance that the sender will see at least one acknowledgement before it tries to resend the packet.
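
The bookkeeping for that might look something like this (again, just a sketch of the idea, simplified well beyond what a real implementation needs):

import time

class RetransmitQueue:
    """Packets stay queued until acknowledged and are resent once they have
    been outstanding longer than a timeout derived from the link latency."""

    def __init__(self, rto=0.5):
        self.rto = rto      # resend timeout in seconds
        self.pending = {}   # sequence number -> (payload, time last sent)

    def sent(self, seq, payload):
        self.pending[seq] = (payload, time.time())

    def acked(self, seq):
        # Redundant acks for the same chunk are harmless - pop() simply does
        # nothing the second time, which is why acking a chunk several times
        # is cheap insurance against lost acks.
        self.pending.pop(seq, None)

    def due_for_resend(self):
        now = time.time()
        return [seq for seq, (_, last) in self.pending.items()
                if now - last > self.rto]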

So, I've started working on an implementation of this in Python. This is very much a first cut and should largely be considered a highly experimental proof of concept - it currently just transfers a file over UDP between two machines, has no heuristics to estimate available bandwidth (I just tell it what rate to run at), and its acknowledgement & resend systems need some work to reduce the number of unnecessary packets being resent, but given this download was previously measured in Bytes per second (and Steam was estimating this download would take "more than 1 year", so I had my remote server download it using steamcmd), I'd say this is a significant improvement:

Sent: 928M/2G 31% (238337 chunks, 69428 resends totalling 270M, 101 not acked) @ 294K/s
Received: 928M (238322 chunks, 5672 duplicates) @ 245K/s

The "101 not acknowledged" is just due to my hardcoding that as the maximum number of unacknowledged packets that can be pending before it starts resending - it needs to be switched to use a length of time that has elapsed since the packet was last sent compared to the latency of the network and some heuristics. With some work I should also be able to get the number of unnecessary packets being resent down quite a bit (but 5672 / 69428 is close to 10%, which is actually pretty good - this is resending 30% of the packets and the link has 26% packet loss).

Feel free to check out the code - just keep in mind that it is still highly experimental (and one 3GB file I transferred earlier ended up corrupt and had to be repaired with rsync - still need to investigate exactly what happened there) and the usage is likely to change so I won't document how to use it (hint: it supports --help):

https://raw.githubusercontent.com/DarkStarSword/junk/master/quickfox.py

Thursday 24 December 2015

Stereo Photography

I have two main hobbies at the moment - I'm one of the top currently active shaderhackers that make video games work in stereo 3D and one of the developers on 3DMigoto to make this possible, and I am also into photography. I sometimes combine both of these hobbies as well, in the form of stereo photography and was recently asked about this subject.

Stereo photography can become a rather tricky subject due to some (unsolvable) technical issues I'll touch on a little below, but it can be quite fun nevertheless.

Camera

The camera I mostly use for this is a Fujifilm FinePix Real 3D W3:

https://en.wikipedia.org/wiki/Fujifilm_FinePix_Real_3D

It includes two lenses separated by some distance similar to human eyes (it's actually a little wider than my eyes) and takes two photos of the same subject simultaneously from different perspectives (it has other 3D modes as well, but nothing that couldn't be done with a regular camera). It also has a glasses-free 3D display built into the camera ("sweet spot" based, meaning you have to look at it straight on), which allows you to see in advance how the 3D photos will look, and is handy to show subjects themselves in 3D, which they always like.

It is also possible to take stereo photographs with any camera by taking two photos from slightly different perspectives, but it can be difficult to get the orientation right between the two shots, and if the subject moves (or the wind blows a leaf, etc.) the photos will not quite match up between the eyes. There are various rigs available to remove some of the error from this process.

It is also possible to use two individual (preferably identical) cameras simultaneously if their settings (focal point, focal length, f-stop) are identical and their shutters are synchronised. At some point, I'd really like to try this set up using two DSLRs with "tilt-shift" lenses rather than ordinary lenses as my experience working with stereo projections in computer graphics leads me to believe that could result in a superior stereo photograph if setup correctly with a known display size, but trying that would be somewhat expensive and I have never heard of anyone else doing it.

Viewing Options

There are a number of options available to view a stereo photograph, each with their advantages and disadvantages: computer monitors, TVs or projectors using either active shutter glasses or passive polarised glasses, anaglyph (red-cyan) glasses with any display, displays / photographs with a lenticular lens array over the top for glasses-free 3D viewing, or just simply using the cross-eyed or distance viewing techniques to see a 3D photo with no special display, or by using the mirror technique.

I personally have a laptop with a 3D display (no longer being manufactured), and a 3D DLP projector (BenQ W1070).

3D computer monitors usually use nvidia 3D Vision and are 120Hz (or higher) active displays, and the V2 ones feature a low-persistence backlight (it turns off while the glasses change eyes to reduce crosstalk and increase the perceived brightness). These use nvidia's proprietary active shutter glasses, which are 60Hz per eye. These types of displays are a pretty good choice, but do suffer from some degree of crosstalk, and depend on nvidia's proprietary drivers (also, for Linux the documentation suggests a Quadro card may be required, though I have seen reports that it might be possible to make it work with a GeForce card like we do in Windows).

3D televisions have several different 3D formats they may use. Side-by-side is usually the easiest option (though not necessarily the best as it halves the horizontal resolution) and is supported by geeqie and mplayer. 3D televisions are a poor choice for stereo content as they tend to suffer from exceptionally bad crosstalk thanks to the long time it takes the pixels to change (that is, each eye can see part of the image intended for the other eye), and they tend to have pretty high latency (fine for photos, not good for gaming), but have the advantage that they are fairly common and you may already have one. Which glasses they use and whether they are active or passive will depend on the specific TV. I believe that some use DLP glasses, which are standard.

For 3D projectors we only really consider 3D DLP projectors. These are similar to 3D TVs, but they are generally a much better choice - they have zero crosstalk thanks to the speed at which the DLP mirrors are able to switch (much faster than even the best LCD) and when used for gaming are generally much lower latency than TVs. Their disadvantages are the space required (short throw versions are available for smaller rooms), the need to keep the room dark (or use a rather expensive black projector screen), and having to replace the bulb every now and then. The active DLP glasses they use follow a standard so you are not forced to use the projector's own brand of glasses, though beware that the projector probably won't come with any and they will need to be purchased separately. The IR signal used to synchronise the glasses is emitted from the projector and simply bounced off the projector screen.

Given the typical screen size of a projector, these have the highest risk of violating infinity for pre-rendered content (displaying an object's left and right images further apart than your eyes), and photos may require a parallax adjustment that offsets their left and right images before they can be viewed comfortably. Movies are already calibrated for a larger screen (IMAX), so no need to worry there (but 3D movies also generally suck as a result of this), and games can calibrate to whatever screen size they are being used with for the best result.

Anaglyph glasses are a low-cost option ($2 from ebay) that can be used with any display, but I would not recommend this for anything other than trying out 3D since the false colours and high crosstalk result in eye-strain. I cannot tolerate anaglyph for more than a few minutes, whereas I can comfortably wear active shutter glasses all day with 3D games. In Linux, geeqie and mplayer can both output stereo content in several forms of anaglyph (compromising between more realistic colours and less crosstalk between the eyes).

Displays with a lenticular lens array do not require glasses to view - the Fujifilm camera I use has one of these on the back. They usually require the viewer to have their head in a specific position ("sweet spot") however, though there are some that use eye-tracking to compensate for this in real time and can support a very small number of viewers anywhere in the room (I'm not sure if any of those are consumer grade yet though).

Fujifilm also produces a 3D photo frame that is aimed at users of their camera with the same sort of lenticular lens array over it. I have yet to purchase this as I have my doubts as to its general usefulness, since the fact that it still has a sweet spot means the viewer must stand in a specific spot and cannot enjoy the photos from anywhere in the room.

It is also possible to print out a photo with the left and right views interlaced and place a lenticular lens array on the photo itself, allowing for 3D prints. Fujifilm has a service to do this, but it is not available in Australia and I have yet to track down an alternative print service available here. Apparently it is possible to purchase the supplies to do this yourself.

The cross-eyed and distance viewing methods do not require any special displays as they are simply techniques you can use to view a pair of stereo images placed side-by-side. The images must be fairly close together and should not be more than about 7cm or so wide, perhaps even less. The further apart the images are on the screen, the harder these techniques are to achieve. These will not give you the same impact as using glasses with a proper 3D display, but they don't cost anything and with a bit of practice can become easy.

This is an example of a photo I took with the left and right reversed for cross-eyed viewing. The trick is to go cross-eyed until the two images merge into one. To help practice this technique, hold your finger up half way between your face and the display and look at your finger instead of the display. Focus on your finger and slowly move it forwards or backwards until the images on the display behind it have merged together, then try to refocus your eyes on the 3D image without pointing them back at the screen. It may take a few attempts while you get used to the technique.

This image is set up for the distance viewing method. For this method you need to relax your eyes and allow them to defocus from the screen and look behind the display until the images merge, then try to refocus on the image without looking back at the screen.

The mirror technique works by placing a mirror in front of your nose (in this case facing to the left) so you can see a reflection of the image in the mirror. Focus on the image in the mirror and it should pop into 3D. This can be easier than the above techniques since it does not require your eyes to be looking in a different direction to their focus, and can comfortably be used to view larger stereo images, though it can be difficult to fit the entire image in the mirror (you may have to move your head back or forwards). Also, since most mirrors are imperfect (especially at this angle) they may show a double image (click for a larger version which may be easier to use this technique):

Subjects

I've found that there are certain subjects that work well in stereo that don't work at all in 2D, yet just as many that work better in 2D than 3D. If you ever see a scene that looks really interesting to your eyes, but plain and uninteresting in a 2D photo once the depth has been lost (or replaced with a depth of field blur), it might just be a candidate to try in stereo - here's a good example of this:


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]

In 2D all the rocks blend together and it becomes a plain and uninteresting shot, but in 3D the individual rocky outcroppings can easily be distinguished from one another and the shot is interesting. Here's another example:


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]

In 2D there is nothing interesting about this shot and I would delete it, but in 3D the depth of the hole is apparent and the shot is interesting (still not really a keeper, just interesting to show the 3D).

If the subject will not gain much from 3D, it may be better shot with the additional control that a DSLR provides in 2D and without the technical problems that stereo photography brings. 3D tends to work better for closer subjects rather than those further away, and when the subject links multiple depths together.

If the subjects are too far away or too far apart they may appear as layered 2D images, which can be ok, but does not really do stereo photography justice. Zooming in on a distant subject with the camera will not provide the same stereo effect as moving closer to it (the same thing happens in 2D - you might be familiar with the dolly zoom effect, but in 3D it is far more pronounced).

For instance, this photo did not gain much from being shot in stereo as everything is just too far away and the effect is not very pronounced (displaying this on a larger screen may help a little):


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]

Stereo photography can work especially well to show detail that is lost in a 2D image - most photographers will see running water and immediately set their camera to use a longer exposure time to get that classic artistic streaking effect, but in 3D you might do the opposite and try to freeze the water in the frame so you can examine its structure in detail (I have better examples, but not that I can post here):


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]

In video games, playing in stereo brings out a lot of detail that players would usually ignore - grass, leaves and rocks are no longer just there to "not look weird because they are missing" - they now have real detail and players will stop and admire just how much effort the 3D artist put into them (or in some cases how little). The same works in a stereo photo - if I were taking these in 2D I would probably have focused on an individual flower or leaf and used depth of field to emphasise it, but in 3D the wider scene is interesting as the detail on every single flower, leaf and blade of grass is apparent (if possible, best viewed on a larger screen to see the detail more clearly):


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]


[Stereo photo - viewing links: Crosseyed | Distance | Mirror Left | Mirror Right | Anaglyph]

Issues

The Fujifilm camera should only be used in landscape orientation when both lenses are used since the lenses must be aligned horizontally - otherwise the images will be misaligned between the eyes and will cause eye-strain and will not be pleasant to view in stereo (if possible at all). This can be corrected in post to some extent, but only to a point - if the photo was a full 90 degrees out it will not be possible to correct (you could still salvage either of the two images as a 2D photo).

That's not to say that portraits can't be taken in stereo, but the lenses have to be aligned horizontally, whether that means using a different rig, or taking a wider angle landscape shot and cropping it to portrait.

A stereo camera sees the world in much the same way our eyes do:

\   \    /   /
 \   \  /   /
  \   \/   /
   \  /\  /
    \/  \/

But the problem is that this is not beamed directly into our eyes - it has to be shown on an intermediate display, and we don't quite see that display the same way. There's not much that can be done about this in photography or film-making, which is one of several reasons that 3D movies are usually not considered to be very good. I do think that a pair of tilt-shift lenses could help here, but even that would not help with the fact that we do not know ahead of time what size display will be used to view the image later.

The reason the display size is important is that if the left and right images of an object are displayed on the screen further apart than the viewer's eyes (regardless of how far away the display is), the object will appear to be beyond infinity, which quickly becomes uncomfortable or impossible to view. The only way to combat this is to shift the offset of the two images until nothing is more than about 7cm apart on the largest display it might ever be shown on. Displaying the same content on a smaller screen then quickly diminishes the strength of the stereo effect - this is another reason 3D movies are considered poor: they are calibrated for enormous screens like the IMAX theatre in Sydney, so their 3D effect is reduced on anything smaller, and by the time you are viewing one in a home theatre there is almost no 3D left.
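
To put rough numbers on that (the figures here are illustrative assumptions, not measurements):

# The on-screen separation of a point at infinity scales linearly with the
# physical width of the display the content is shown on.
def parallax_on(target_width_cm, calibrated_parallax_cm, calibrated_width_cm):
    return calibrated_parallax_cm * target_width_cm / calibrated_width_cm

# Content calibrated so infinity sits 6.5cm apart on a 2m wide screen
# (roughly a typical adult eye separation):
print(parallax_on(200, 6.5, 200))   # 6.5cm  - fine on the original screen
print(parallax_on(400, 6.5, 200))   # 13.0cm - beyond infinity, uncomfortable
print(parallax_on(40,  6.5, 200))   # 1.3cm  - comfortable but very flat 3D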

But video games do not suffer this same problem - they are rendered live and know the size of the display they are being rendered on, and can use this information to skew the projection so the viewing frustum for each eye will touch the edge of the screen at the point of convergence, plus they can dial the overall strength of the 3D effect and the point of convergence up and down as desired:

\-    \                           /    -/
  \-   \                         /   -/
    \-  \                       /  -/
      \- \      screen of      / -/
        \-\     known size    /-/
           \-----------------/   <-- point of convergence
            \\-           -//
             \ \-       -/ /
              \  \-   -/  /
               \   \-/   /
                \ -/ \- /
                 o     o
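
For reference, the per-eye adjustment boils down to a one line correction in clip space - this is roughly the formula nvidia's driver applies (and that we reproduce in 3DMigoto shader fixes), shown here as Python purely for illustration:

def stereo_x(x, w, separation, convergence, eye):
    """Adjust a vertex's clip-space X for one eye (eye = -1 left, +1 right).
    Points at w == convergence get no offset (they sit at screen depth),
    more distant points are pushed apart, closer ones pulled together."""
    return x + eye * separation * (w - convergence)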

3D screenshots of games are still a problem however - if they are scaled up to a larger display they may violate infinity, and if they are scaled down to a smaller display they will have a reduced 3D effect. Now that you know this, here are some screenshots I have taken in various games that are calibrated to a 17" display for a comparison of how they look compared to the photos:

http://photos.3dvisionlive.com/DarkStarSword/

Without the nvidia plugin that site is pretty useless, but I made a user script to add a download button to it to get at the raw side-by-side images that can be saved as .jps files and opened with a stereo photo viewer such as geeqie (Linux), sView (Windows, Linux, Mac, Android) or nvidia photo viewer (Windows):

https://github.com/DarkStarSword/3d-fixes/raw/master/3dvisionlive_download_button.user.js

Tuesday 28 August 2012

Nokia N9 Bluetooth PAN, USB & Dummy Networks

Please note: All of these instructions assume you have developer mode enabled and are familiar with using the Linux console. One of the variants of dummy networking I present here also requires a package to be installed with Inception or use of an open-mode kernel to disable aegis. I present an alternative method to use a pseudo-dummy network for people who do not wish to do that.

Background

Earlier this year I bought a Nokia N9 (then took it in for service TWICE due to a defective GPS, then returned it for a refund since Nokia had returned it un-repaired both times, then bought a new one for $200 less than I originally paid, then bought a second for my fiancé).

The SIM card I use in the N9 is a pretty basic TPG $1/month deal, which is fine for the small amount of voice calls I make, but its 50MB of data per month is not really enough, so I'd like it to use alternative networks wherever possible.

When working on another computer with an Internet connection, I could simply hook up the N9 via USB networking and have the computer give it a route to the Internet. That works well, but has the problem that any applications using the N9's Internet Connectivity framework (anything designed for the platform is supposed to do this via libconic) would not know that there was an Internet connection and would refuse to work - so I had to find a way to convince them that there was an active Internet connection using a dummy network. Also, this obviously wouldn't work when I was away from a computer.

I also happen to carry a pure data SIM card in my Optus MyTab with me all the time (being my primary Internet connection), so when I'm on the go I'd like to be able to connect to the Internet on the N9 via the tablet rather than use the small amount of data from the TPG SIM.

The MyTab is running CyanogenMod 7 (I'm not a fan of Android, but at $130 to try it out the price was right), so I am able to switch on the WiFi tethering on the tablet and connect that way, but it has a couple of problems:

  • It needs to be manually activated before use
  • It needs to be manually deactivated to allow the bluetooth tethering to work
  • It isn't very stable (holding a wakelock helps a lot - the terminal application can be used for this purpose)
  • It's a bit of a battery drain (at least the tablet has a huge battery)

The MyTab also supports tethering over bluetooth PAN (which I regularly use at home), so it made a lot of sense to me to connect the N9 to the tablet using that as well when I am out and about. Unfortunately, the N9 does not come with any software to connect to a bluetooth network, and I couldn't manage to find anyone else who had successfully done this (There are a couple of threads discussing it).

Fortunately, the N9 has a normal Linux userspace under the hood (one reason I'd take this over Android any day), which includes bluez 4.x and as such I was able to use that to make it do bluetooth PAN.

USB Network

Let's start with USB Networking since it is already supported on the N9 and works out of the box once developer mode is enabled (select SDK mode when plugging in).

Here's a few tricks you can do to streamline the process of using the USB network to gain an Internet connection. You will also want to follow the steps under one of the Dummy Networking sections below to allow applications (such as the web browser) to use it.

On the host, add this section to your /etc/network/interfaces (this is for Debian based distributions; if you use something else you will have to work out the equivalent):

allow-hotplug usb0
iface usb0 inet static
    address 192.168.2.14
    netmask 255.255.255.0
    up iptables -t nat -I POSTROUTING -j MASQUERADE
    up iptables -A FORWARD -i usb0 -j ACCEPT
    up iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
    up echo 1 > /proc/sys/net/ipv4/ip_forward
    down echo 0 > /proc/sys/net/ipv4/ip_forward
    down iptables -F FORWARD
    down iptables -t nat -F POSTROUTING

Next, modify the same file on the N9 so that the usb0 section looks like this (this section already exists - I've just extended it a little):

auto usb0
iface usb0 inet static
    address 192.168.2.15
    netmask 255.255.255.0
    gateway 192.168.2.14
    up /usr/lib/sdk-connectivity-tool/usbdhcpd.sh 192.168.2.14
    down /usr/lib/sdk-connectivity-tool/usbdhcpd.sh stop
    up echo nameserver 208.67.222.222 >> /var/run/resolv.conf
    up echo nameserver 208.67.220.220 >> /var/run/resolv.conf
    down rm /var/run/resolv.conf

Now whenever you plug in the N9 and choose SDK mode it should automatically get an Internet connection with no further interaction required and you should be able to ping hosts on the Internet :)

But, you will probably notice that most applications (like the web browser) will still bring up the "Connect to internet" dialog whenever you use them and will refuse to work. To make these applications work we need to create a dummy network that they can "connect" to, while in reality they actually use the USB network.

    USB Networking Notes:
  • The iptables commands on the host will alter the firewall and routing rules to allow the N9 to connect to the Internet through the host. If you use your own firewall with other forwarding rules you may want to remove those lines and add the appropriate rules to your firewall instead.
  • The above commands will turn off all forwarding on the host and purge the FORWARD and POSTROUTING tables when the N9 is unplugged - if your host is a router for other things you definitely will want to remove those lines.
  • The two IP addresses used for the DNS lookups on the N9 are those of OpenDNS.org - you might want to replace them with some other appropriate servers. OpenDNS should be accessible from any Internet connection, which is why I chose them.
  • The N9 will use the most recently modified file under /var/run/resolv.conf* (specifically those listed in /etc/dnsmasq.conf) for DNS lookups. Which means that connecting to a WiFi/3G network AFTER bringing up the USB network would override the DNS settings. I suggest setting the DNS settings for your dummy network to match to avoid that problem.
  • The N9 doesn't run the down rules when it should, rather they seem to be delayed until the USB cable is plugged in again, when they are run immediately before the up rules. Because of the previous note, this isn't really an issue for the dnsmasq update, but it may be an issue if you wanted to do something more advanced.
  • Alternatively, there is an icd2 plugin for USB networking for the N900 available on gitorious. I haven't had a look at this yet to see if it works on the N9 or how it compares to the above technique. This would require installation with Inception.

Dummy Network

This approach to setting up a dummy network isn't for everyone. You are going to need to compile a package in the Harmattan platform SDK (or bug me to upload the one I built somewhere) and install it on the device with Inception, or use an open mode kernel. If you don't feel comfortable with this, you might prefer to use the technique discussed in the Alternative Dummy Network section instead.

First grab the dummy icd plugin from https://maemo.gitorious.org/icd2-network-modules

[host]$ cd /scratchbox/users/$USER/home/$USER
[host]$ git clone git://gitorious.org/icd2-network-modules/libicd-network-dummy.git
[host]$ scratchbox
[sbox]$ sb-menu
 Select -> HARMATTAN_ARMEL
[sbox]$ cd libicd-network-dummy
[sbox]$ dpkg-buildpackage -rfakeroot

Now copy /scratchbox/users/$USER/home/$USER/libicd-network-dummy_0.14_armel.deb to the N9, then install and configure it on the N9 with:

[N9]$ /usr/sbin/incept libicd-network-dummy_0.14_armel.deb

[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/type DUMMY
[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/name 'Dummy network'

[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/icd2

Next time the connect to Internet dialog appears you should see a new entry called 'Dummy network' that you can "connect" to so that everything thinks there is an Internet connection, while they really use your USB or bluetooth connection.

Alternative Dummy Network

This isn't ideal in that it enables the WiFi & creates a network that nearby people can see, but it does have the advantage that it works out of the box and does not require Inception or Open Mode.

Open up settings -> internet connection -> create new connection

Fill out the settings like this:

Connection name: dummy
Network Name (SSID): dummy
Use Automatically: No
network mode: ad hoc
Security method: None

Under Advanced settings, fill out these:

Auto-retrieve IP address: No
IP address: 0.0.0.0
Subnet mask: 0.0.0.0
Default gateway: 0.0.0.0

Auto-retrieve DNS address: No
Primary DNS address: 208.67.222.222
Secondary DNS address: 208.67.220.220

These are the OpenDNS.org DNS servers - feel free to substitute your own.

Then if the 'Connect to internet' dialog comes up you can connect to 'dummy', which will satisfy that while leaving your real USB/bluetooth network alone.

Bluetooth Personal Area Networking (PAN)

This is very much a work in progress that I hope to polish up and eventually package up and turn into an icd2 plugin so that it will nicely integrate into the N9's internet connectivity framework.

First thing's first - you will need to enable the bluetooth PAN plugin on the N9, by finding the DisablePlugins line in /etc/bluetooth/main.conf and removing 'network' from the list so that it looks something like:

[General]

# List of plugins that should not be loaded on bluetoothd startup
# DisablePlugins = network,hal
DisablePlugins = hal

# Default adapter name
...

Then restart bluetooth by running:

[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/bluetoothd

Until I package this up more nicely you will need to download my bluetooth tethering script from:

https://raw.github.com/DarkStarSword/junk/master/blue-tether.py

You will need to edit the dev_dbaddr in the script to match the bluetooth device you are connecting to. Note that I will almost certainly change this to read from a config file in the very near future, so you should double check the instructions in the script first.

Put the modified script on the N9 under /home/user/blue-tether.py

You will first need to pair with the device you are connecting to using the N9's bluetooth GUI as usual.

Once paired, you may run the script from the terminal with develsh -c ./blue-tether.py

The bluetooth connection will remain up until you press enter in the terminal window. Currently it does not detect if the connection goes away, so you would need to restart it in that case.
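
For the curious, the core of what the script does comes down to a handful of D-Bus calls against BlueZ 4.x - something along these lines (a from-memory sketch, not the actual contents of blue-tether.py, so treat the interface and method names as assumptions):

import dbus

BDADDR = '00:11:22:33:44:55'   # bluetooth address of the device sharing its connection

bus = dbus.SystemBus()
manager = dbus.Interface(bus.get_object('org.bluez', '/'), 'org.bluez.Manager')
adapter_path = manager.DefaultAdapter()
adapter = dbus.Interface(bus.get_object('org.bluez', adapter_path), 'org.bluez.Adapter')
device_path = adapter.FindDevice(BDADDR)
network = dbus.Interface(bus.get_object('org.bluez', device_path), 'org.bluez.Network')

iface = network.Connect('nap')   # returns the new interface name, e.g. 'bnep0'
print('Connected via %s - now configure an IP on it (e.g. udhcpc -i %s)' % (iface, iface))
raw_input('Press enter to disconnect...')   # the N9 ships Python 2
network.Disconnect()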

For convenience you may create a desktop entry for it by creating a file under /usr/share/applications/blue-tether.desktop with the following contents:

[Desktop Entry]
Type=Application
Name=Blue Net
Categories=System;
Exec=invoker --type=e /usr/bin/meego-terminal -n -e develsh -c /home/user/blue-tether.py
Icon=icon-m-bluetooth-lan

Again, this is very much an active work in progress - expect to see a packaged version soon, and hopefully an icd2 plugin before too long.

One Outstanding Graphical Niggle

You may have noticed that the dummy plugin doesn't have its own icon - in the connect to Internet dialog it seems to pick a random icon, and once connected the status bar displays it as though it was a cellular data connection. As far as I can tell, the icons (and other connectivity related GUI elements) are selected by /usr/lib/conniaptype/lib*iaptype.so which is loaded by /usr/lib/libconinetdui.so which is in turn used by /usr/bin/sysuid. I haven't managed to find any API references or documentation for these, and I suspect that, being part of Nokia's GUI, they fall firmly on the closed source side of Harmattan. This would be nice to do properly if I want to create my own icd2 plugins, so if anyone has some pointers for this, please leave a note in the comments.

Why is Inception required for real dummy networking?

Well, it's because the Internet Connectivity Daemon requests CAP::sys_module (i.e. the capability to load kernel modules):

~ $ ariadne sh
Password for 'root':

/home/user # accli -I -b /usr/sbin/icd2
Credentials:
        UID::root
        GID::root
        CAP::kill
        CAP::net_bind_service
        CAP::net_admin
        CAP::net_raw
        CAP::ipc_lock
        CAP::sys_module
        SRC::com.nokia.maemo
        AID::com.nokia.maemo.icd2.
        icd2::icd2
        icd2::icd2-plugin
        Cellular

Because of this, aegis will only allow it to load libraries that originated from a source that has the ability to grant CAP::sys_module, which unfortunately (but understandably, given what the capability allows) is only the system firmware by default, so attempting to load the dummy plugin results in this (in dmesg):

credp: icd2: credential 0::16 not present in source SRC::9990007
Aegis: credp_kcheck failed 9990007 libicd_network_dummy.so
Aegis: libicd_network_dummy.so verification failed (source origin check)

Ideally the developers would have thought of this and separated the kernel module loading out into a separate daemon, so that icd2 would not require this credential and could therefore load third-party plugins. Since that is not the case, we have to use Inception to install the dummy plugin from a source that is able to grant the same permissions the system firmware enjoys. (Note that the library does not actually request any permissions - libraries always inherit the permissions of the binary that loads them - it just needs to have come from a source that could have granted it that permission.)

Also, if anyone could clarify what the icd2::icd2-plugin credential is for I would appreciate it - I feel like I've missed something, because its documented purpose (to load icd2 plugins) seems rather pointless to me (icd2 loads libraries based on gconf settings, which it can do just as well without this permission... so what is the point of it?).

Thursday 23 February 2012

Tiling tmux Keybindings

When most people use a computer, they are using either a compositing or stacking window manager - which basically means that windows can overlap. The major alternative to this model is known as a tiling window manager, where the window manager lays out and sizes windows so that they do not overlap each other.

I started using a tiling window manager called wmii some years ago after buying a 7" EeePC netbook and trying to find alternative software more suited to the characteristics of that machine. Most of the software I ended up using on that machine I now use on all of my Linux boxes, because I found that it suits my workflow so much better.

Wmii as a window manager primarily focuses on organising windows into tags (like multiple desktops) and columns. Within a column windows can either be sized evenly, or a single window can take up the whole height of the column, optionally with the title bars of the other windows visible (think minimised windows on steroids).

Wmii is very heavily keyboard driven (which is one of its strengths from my point of view), though a mouse can be used for basic navigation as well. It is also heavily extensible via scripting languages - in fact, almost all interactions with the window manager are actually driven by its wmiirc script. It defaults to a shell script, but also ships with equivalent python and ruby scripts (the base functionality is the same in each), and is easy to extend.

By default, keyboard shortcuts provide ways to navigate left and right between columns, up and down between windows within a column, and to switch between 10 numbered tags (more tags are possible, but rarely needed). Moving a window is as simple as holding down shift while performing the same key combos used to navigate. Columns and tags are automatically created as needed (moving a window to the right of the rightmost column creates a new column, for example) and automatically destroyed when no longer used.

Recent versions of wmii also work really well with multiple monitors (though there is still some room for improvement in this area), allowing windows to be moved between monitors really easily with the same shortcuts used to move windows between columns (and the way it differentiates between creating a new column on the right of the left monitor and moving the window to the right monitor is pure genius).

Naturally, with such a powerful window manager I want to use it to manage all my windows and all my shells. The problem with this, of course, is SSH - specifically, what happens when I have many remote shells open at the same time and the network goes away. You see, I've been opening a new terminal and SSH connection for each remote shell so I can use wmii to manage them, which works great until I need to suspend my laptop or unplug it to go to a meeting - then I have to spend time re-establishing each session, getting it back to the right working directory, and so on, and I've lost the shell history specific to each terminal.

Normally people would start screen on the remote server if they expect their session to go away, and screen can also manage a number of shells simultaneously, which would be great... except that it is nowhere near as good at managing those shells as wmii is at managing windows, and if I'm going to switch it would need to be pretty darn close.

I've been aware for some time of an alternative to screen called tmux which seemed to be much more sane and feature-rich than screen, so the other day I decided to see if I could configure tmux to be a realistic option for managing many shells on a remote machine that I could detach and re-attach from when suspending my laptop.

Tmux supports multiple sessions, "windows" (like tags in wmii), and "panes" (like windows in wmii). I managed to come up with the below configuration file which sets up a bunch of keybindings similar to the ones I use in wmii (but using the Alt modifier instead of the Windows key) to move windows... err... "panes" and to navigate between them.

Unlike wmii, tmux is not focussed around columns, which technically gives it more flexibility in how the panes are arranged, but sacrifices some of the precision that the column focus gives wmii (in this regard tmux is more similar to some of the other tiling window managers available).

None of these shortcut keys need to have the tmux prefix key pressed first, as that would have defeated the whole point of this exercise:

Alt + ' - Split window vertically *
Alt + Shift + ' - Split window horizontally

Alt + h/j/k/l - Navigate left/down/up/right between panes within a window
Alt + Shift + h/j/k/l - Swap window with the one before or after it **

Alt + Ctrl + h/j/k/l - Resize pane *** - NOTE: Since many environments use Ctrl+Alt+L to lock the screen, you may want to change these to use the arrow keys instead.

Alt + number - Switch to this tag... err... "window" number, creating it if it doesn't already exist.
Alt + Shift + number - Send the currently selected pane to this window number, creating it if it doesn't already exist.

Alt + d - Tile all panes **
Alt + s - Make selected pane take up the maximum height and tile other panes off to the side **
Alt + m - Make selected pane take up the maximum width and tile other panes below **

Alt + f - Make the current pane take up the full window (actually, break it out into a new window). Reverse with Alt + Shift + number **

Alt + PageUp - Scroll pane back one page and enter copy mode. Release the alt and keep pressing page up/down to scroll and press enter when done.

* Win+Enter opens a new terminal in wmii, but Alt+Enter is already used by xterm, so I picked the key next to it

** These don't mirror the corresponding wmii bindings because I could find no exact equivalent, so I tried to make them do something similar and sensible instead.

*** By default there is no shortcut key to resize windows in wmii (though the python version of the wmiirc script provides a resize mode which is similar), so I added some to my scripts.


~/.tmux.conf (Download Latest Version Here)

# Split + spawn new shell:
# I would have used enter like wmii, but xterm already uses that, so I use the
# key next to it.
bind-key -n M-"'" split-window -v
bind-key -n M-'"' split-window -h

# Select panes:
bind-key -n M-h select-pane -L
bind-key -n M-j select-pane -D
bind-key -n M-k select-pane -U
bind-key -n M-l select-pane -R

# Move panes:
# These aren't quite what I want, as they *swap* panes *numerically* instead of
# *moving* the pane in a specified *direction*, but they will do for now.
bind-key -n M-H swap-pane -U
bind-key -n M-J swap-pane -D
bind-key -n M-K swap-pane -U
bind-key -n M-L swap-pane -D

# Resize panes (Note: Ctrl+Alt+L conflicts with the lock screen shortcut in
# many environments - you may want to consider the below alternative shortcuts
# for resizing instead):
bind-key -n M-C-h resize-pane -L
bind-key -n M-C-j resize-pane -D
bind-key -n M-C-k resize-pane -U
bind-key -n M-C-l resize-pane -R

# Alternative resize panes keys without ctrl+alt+l conflict:
# bind-key -n M-C-Left resize-pane -L
# bind-key -n M-C-Down resize-pane -D
# bind-key -n M-C-Up resize-pane -U
# bind-key -n M-C-Right resize-pane -R

# Window navigation (Oh, how I would like a for loop right now...):
bind-key -n M-0 if-shell "tmux list-windows|grep ^0" "select-window -t 0" "new-window -t 0"
bind-key -n M-1 if-shell "tmux list-windows|grep ^1" "select-window -t 1" "new-window -t 1"
bind-key -n M-2 if-shell "tmux list-windows|grep ^2" "select-window -t 2" "new-window -t 2"
bind-key -n M-3 if-shell "tmux list-windows|grep ^3" "select-window -t 3" "new-window -t 3"
bind-key -n M-4 if-shell "tmux list-windows|grep ^4" "select-window -t 4" "new-window -t 4"
bind-key -n M-5 if-shell "tmux list-windows|grep ^5" "select-window -t 5" "new-window -t 5"
bind-key -n M-6 if-shell "tmux list-windows|grep ^6" "select-window -t 6" "new-window -t 6"
bind-key -n M-7 if-shell "tmux list-windows|grep ^7" "select-window -t 7" "new-window -t 7"
bind-key -n M-8 if-shell "tmux list-windows|grep ^8" "select-window -t 8" "new-window -t 8"
bind-key -n M-9 if-shell "tmux list-windows|grep ^9" "select-window -t 9" "new-window -t 9"

# Window moving (the sleep 0.1 here is a hack, anyone know a better way?):
bind-key -n M-')' if-shell "tmux list-windows|grep ^0" "join-pane -d -t :0" "new-window -d -t 0 'sleep 0.1' \; join-pane -d -t :0"
bind-key -n M-'!' if-shell "tmux list-windows|grep ^1" "join-pane -d -t :1" "new-window -d -t 1 'sleep 0.1' \; join-pane -d -t :1"
bind-key -n M-'@' if-shell "tmux list-windows|grep ^2" "join-pane -d -t :2" "new-window -d -t 2 'sleep 0.1' \; join-pane -d -t :2"
bind-key -n M-'#' if-shell "tmux list-windows|grep ^3" "join-pane -d -t :3" "new-window -d -t 3 'sleep 0.1' \; join-pane -d -t :3"
bind-key -n M-'$' if-shell "tmux list-windows|grep ^4" "join-pane -d -t :4" "new-window -d -t 4 'sleep 0.1' \; join-pane -d -t :4"
bind-key -n M-'%' if-shell "tmux list-windows|grep ^5" "join-pane -d -t :5" "new-window -d -t 5 'sleep 0.1' \; join-pane -d -t :5"
bind-key -n M-'^' if-shell "tmux list-windows|grep ^6" "join-pane -d -t :6" "new-window -d -t 6 'sleep 0.1' \; join-pane -d -t :6"
bind-key -n M-'&' if-shell "tmux list-windows|grep ^7" "join-pane -d -t :7" "new-window -d -t 7 'sleep 0.1' \; join-pane -d -t :7"
bind-key -n M-'*' if-shell "tmux list-windows|grep ^8" "join-pane -d -t :8" "new-window -d -t 8 'sleep 0.1' \; join-pane -d -t :8"
bind-key -n M-'(' if-shell "tmux list-windows|grep ^9" "join-pane -d -t :9" "new-window -d -t 9 'sleep 0.1' \; join-pane -d -t :9"

# Set default window number to 1 instead of 0 for easier key combos:
set-option -g base-index 1

# Pane layouts (these use the same shortcut keys as wmii for similar actions,
# but don't really mirror its behaviour):
bind-key -n M-d select-layout tiled
bind-key -n M-s select-layout main-vertical \; swap-pane -s 0
bind-key -n M-m select-layout main-horizontal \; swap-pane -s 0

# Make pane full-screen:
bind-key -n M-f break-pane
# This isn't right, it should go back where it came from:
# bind-key -n M-F join-pane -t :0

# We can't use shift+PageUp, so use Alt+PageUp then release Alt to keep
# scrolling:
bind-key -n M-PageUp copy-mode -u

# Don't interfere with vi keybindings:
set-option -s escape-time 0

# Enable mouse. Mostly to make selecting text within a pane not also grab pane
# borders or text from other panes. Unfortunately, tmux's mouse handling leaves
# something to be desired - no double/triple click support to select a
# word/line, all mouse buttons are intercepted (middle click = I want to paste
# damnit!), no automatic X selection integration(*)...
set-window-option -g mode-mouse on
set-window-option -g mouse-select-pane on
set-window-option -g mouse-resize-pane on
set-window-option -g mouse-select-window on

# (*) This enables integration with the clipboard via termcap extensions. This
# relies on the terminal emulator passing this on to X, so to make this work
# you will need to edit your X resources to allow it - details below.
set-option -s set-clipboard on
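
Incidentally, if typing out those ten nearly identical window navigation lines offends you as much as it offends me, a tiny shell script can generate them - paste the output into the config, or write it to a file and pull it in with tmux's source-file command. Just a sketch:

#!/bin/sh
# Generate the Alt+number window navigation bindings instead of writing them
# out by hand.
for i in 0 1 2 3 4 5 6 7 8 9; do
    printf '%s\n' "bind-key -n M-$i if-shell \"tmux list-windows|grep ^$i\" \"select-window -t $i\" \"new-window -t $i\""
done

The Alt+Shift+number move bindings could be generated the same way with a small lookup table mapping each digit to its shifted character.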


You may also need to alter your ~/.Xresources file to make some things work (this is for xterm):

~/.Xresources (My Personal Version)

/* Make Alt+x shortcuts work in xterm */
XTerm*.metaSendsEscape: true
UXTerm*.metaSendsEscape: true

/* Allow tmux to set X selections (ie, the clipboard) */
XTerm*.disallowedWindowOps: 20,21,SetXprop
UXTerm*.disallowedWindowOps: 20,21,SetXprop

/* For some reason, this gets cleared when reloading this file: */
*customization: -color

To reload this file without logging out and back in, run:
xrdb ~/.Xresources

There's a pretty good chance that I'll continue to tweak this, so I'll try to update this post anytime I add something cool.

Edit 27/02/2012: Added mouse & clipboard integration & covered changes to .Xresources file.

Friday 17 February 2012

SSH passwordless login WITHOUT public keys

I was recently in a situation where I needed SSH & rsync over SSH to be able to log into a remote site without prompting for a password (it was being called from within a script, and it would have been non-trivial to make the script pass in a password, especially as OpenSSH does not provide a trivial mechanism for scripts to pass in passwords - see below).

Normally in this situation one would generate a public / private keypair and use that to log in without a prompt, either by leaving the private key unencrypted (ie, not protected by a passphrase), or by loading the private key into an SSH agent prior to attempting to log in (e.g. with ssh-add).

Unfortunately the server in question did not respect my ~/.ssh/authorized_keys file, so public key authentication was not an option (boo).


Well, it turns out that you can pre-authenticate SSH sessions such that an already open session is used to authenticate new sessions (actually, new sessions are basically tunnelled over the existing connection).

The option in question needs a couple of things set up to work, and it isn't obviously documented as a way to allow passwordless authentication - I had read the man page multiple times and hadn't realised what it could do until Mikey at work pointed it out to me.

To get this to work you first need to create (or modify) your ~/.ssh/config as follows:

Host *
  ControlPath ~/.ssh/master_%h_%p_%r


Now, manually connect to the host with the -M flag to ssh and enter your password as normal:

ssh -M user@host

Now, as long as you leave that connection open, further normal connections (without the -M flag) will use it instead of creating their own, and will not require authentication.
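
For example, with the master connection from above left running in another terminal, further commands like these (the host and paths are just placeholders) will ride over it and skip the password prompt entirely:

ssh user@host uptime
rsync -av /some/local/dir/ user@host:backups/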


Edit:
Note that you may instead edit your ~/.ssh/config as follows to have SSH always create and use master connections automatically, without having to specify -M. However, some people prefer to specify manually when to use shared connections, so that low latency interactive sessions don't share bandwidth with high throughput upload/download sessions - mixing the two can have a huge impact on the interactive session.

Host *
  ControlPath ~/.ssh/master_%h_%p_%r
  ControlMaster auto



Alternate method, possibly useful for scripting


Another method I was looking at using was specifying a program to return the password in the SSH_ASKPASS environment variable. Unfortunately, this environment variable is only used in some rare circumstances (namely, when no tty is present, such as when a GUI program calls SSH or rsync), and would not normally be used when running SSH from a terminal (or in the script as I was doing).

Once I found out about the -M option I stopped pursuing this line of thinking, but it may be useful in a script if the above pre-authentication method is not practical (perhaps for unattended machines).

To make SSH respect the SSH_ASKPASS environment variable when running from a terminal, I wrote a small LD_PRELOAD library libnotty.so that intercepts calls to open("/dev/tty") and causes them to fail.

If anyone is interested, the code for this is in my junk repository (libnotty.so & notty.sh). You will also need a small script that echoes the password (I hope it goes without saying that you should check the permissions on it) and point the SSH_ASKPASS environment variable at it.

https://github.com/DarkStarSword/junk
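
To give an idea of how the pieces would fit together, the whole thing ends up looking something like this - the password, host and library path are obviously placeholders, and note that SSH ignores SSH_ASKPASS unless DISPLAY is set to something:

#!/bin/sh
# askpass.sh: just print the password on stdout (chmod 700 this and guard it well)
echo 'not-my-real-password'

Then, with libnotty.so built from the repository:

export DISPLAY=:0                 # SSH_ASKPASS is ignored unless DISPLAY is set
export SSH_ASKPASS=$HOME/askpass.sh
LD_PRELOAD=/path/to/libnotty.so ssh user@host uptime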

Git trick: Deleting non-ancestor tags

Today I cloned the git tree for the pandaboard kernel, only to find that it didn't include the various kernel version tags from upstream, so running things like git describe or git log v3.0.. didn't work.

My first thought was to fetch just the tags from an upstream copy of the Linux kernel I had on my local machine:

git fetch -t ~/linus

Unfortunately I hadn't thought that through very well, as that local tree also contained all the tags from the linux-next tree and the tip tree, as well as a whole bunch more from various distro trees and several other random ones, which I didn't want cluttering up my copy of the pandaboard kernel tree.

This led me to try to find a way to delete all the non-ancestor tags (compared to the current branch) to simplify the tree. This may also be useful to others who want to remove unused objects and make the tree smaller after a git gc - that didn't factor into my needs, as I had passed ~/linus to git clone with --reference, so the objects were being shared.

Anyway, this is the script I came up with. Note that it only compares the tags with the ancestors of the *current HEAD*, so make sure you are on a branch whose history includes all the tags you want to keep first. Alternatively you could modify the script to collate the ancestor tags of every local/remote branch first, though this is left as an exercise for the reader.


#!/bin/sh
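# Delete any local tag that does not point at an ancestor of the current HEAD.
# The git log below lists every commit reachable from HEAD that has a ref
# pointing at it, so a tag's commit will appear in that list iff the tag is an
# ancestor.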

ancestor_tags=$(mktemp)
echo -n Looking up ancestor tags...\ 
git log --simplify-by-decoration --pretty='%H' > $ancestor_tags
echo done.

for tag in $(git tag --list); do
 echo -n "$tag"
 commit=$(git show "$tag" | awk '/^commit [0-9a-f]+$/ {print $2}' | head -n 1)
 echo -n ...\ 
 if [ -z "$commit" ]; then
  echo has no commit, deleting...
  git tag -d "$tag"
  continue
 fi
 if grep $commit $ancestor_tags > /dev/null; then
  echo is an ancestor
 else
  echo is not an ancestor, deleting...
  git tag -d "$tag"
 fi
done

rm -fv $ancestor_tags


Also note that this may still leave unwanted tags in if they are a direct ancestor of the current HEAD - for instance, I found a bunch of tags from the tip tree had remained afterwards, but they were much more manageable to delete with a simple for loop and a pattern.
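
In my case that clean up amounted to nothing more sophisticated than this (the pattern is made up - substitute whatever the offending tags actually look like in your tree):

for tag in $(git tag --list 'some-unwanted-pattern*'); do
 git tag -d "$tag"
done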