LUG Community Blogs

Steve Engledow (stilvoid): Sorted

Planet ALUG - 1 hour 49 min ago

I decided to restructure the folder I keep code in (~/code, natch) - taking my cue from how Go does it - so that the folder structure represents where code has come from.

As with all things, moving a couple of hundred folders by hand seemed far too daunting so I wrote a bash script to do it.

This script enters each subdirectory within the current directory and, if it has a git remote, moves it to a folder that represents the git remote's path.

For example, if I had a folder called scripts whose git remote pointed at a particular host and path, the script would move the folder to a directory named after that host and path.

#!/bin/bash

# Target directory for renamed folders
BASE=/home/steve/code/sorted

for i in $(find ./ -maxdepth 1 -mindepth 1 -type d); do
    cd "$i"
    folder="$(git remote -v 2>/dev/null | head -n 1 | awk '{print $2}' | sed -e 's/^.*:\/\///' | sed -e 's/:/\//' | sed -e 's/^.*@//' | sed -e 's/\.git$//')"
    cd ..

    if [ -n "$folder" ]; then
        mkdir -p "$BASE/$(dirname $folder)"
        mv "$i" "$BASE/$folder"
    fi
done

Yes it's horrid but it did today's job ;)

Categories: LUG Community Blogs

Steve Kemp: Spent the weekend improving the internet

Planet HantsLUG - Sun, 29/11/2015 - 15:58

This weekend I've mostly been tidying up some personal projects and things.

This was updated to use recaptcha on the sign-up page, which is my attempt to cut down on the 400+ spam-registrations it receives every day.

I've purged a few thousand bogus-accounts, which largely existed to point to spam-sites in their profile-pages. I go through phases where I do this, but my heuristics have always been a little weak.

This site offers free dynamic DNS for a few hundred users. I closed fresh signups due to it being abused by spammers, but it does have some users and I sometimes add new people who ask politely.

Unfortunately some users hammer it, trying to update their DNS records every 60 seconds or so. (One user has spent the past few months updating their IP address every 30 seconds, ironically their external IP hadn't changed in all that time!)

So I suspended a few users, and implemented a minimum-update threshold: Nobody can update their IP address more than once every fifteen minutes now.
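
As a rough sketch of that kind of threshold (this is not the site's actual code; the stamp-file location and the 900-second window are just assumptions for illustration):

#!/bin/bash
# Reject dynamic DNS updates that arrive less than 15 minutes after the previous one.
STAMP="/var/tmp/ddns-last-update-$USER"
NOW=$(date +%s)
LAST=$(cat "$STAMP" 2>/dev/null || echo 0)

if [ $(( NOW - LAST )) -lt 900 ]; then
    echo "Update rejected: wait at least 15 minutes between updates."
    exit 1
fi

echo "$NOW" > "$STAMP"
# ...perform the actual DNS record update here...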

Literate Emacs Configuration File

Working towards my stateless home-directory I've been tweaking my dotfiles, and the last thing I did today was move my Emacs configuration over to a literate style.

My main emacs configuration-file is now a markdown file, which contains inline-code. The inline-code is parsed at runtime, and executed when Emacs launches. The init.el file which parses/evals is pretty simple, and I'm quite pleased with it. Over time I'll extend the documentation and move some of the small snippets into it.

Offsite backups

My home system(s) always had a local backup, maintained on an external 2TB disk-drive, along with a remote copy of some static files which were maintained using rsync. I've now switched to having a virtual machine host the external backups with proper incrementals - via attic, which beats my previous "only one copy" setup.
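
For anyone unfamiliar with attic, the workflow is roughly the following - a sketch only, the repository location and retention policy here are made up rather than taken from my setup:

$ attic init backuphost.example.com:home.attic
$ attic create --stats backuphost.example.com:home.attic::$(date +%Y-%m-%d) /home/steve
$ attic prune --keep-daily=7 --keep-weekly=4 backuphost.example.com:home.attic

Each create produces a deduplicated archive, so daily runs only store what changed; prune then keeps a rolling set of dailies and weeklies.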

Virtual Machine Backups

On a whim a few years ago I registered which I use to maintain backups of my personal virtual machines. That still works, though I'll probably drop the domain and use or similar in the future.

FWIW the external backups are hosted on BigV, which gives me a 2TB "archive" disk for £40 a month. Perfect.

Categories: LUG Community Blogs

Mick Morgan: cameron meets corbyn

Planet ALUG - Sat, 28/11/2015 - 20:01

(With thanks to David Malki!)

Categories: LUG Community Blogs


Planet SurreyLUG - Thu, 26/11/2015 - 07:47

A couple of experiences recently have reminded me that I ought to rely more on profilers when debugging performance problems. Usually I'll start with logging/printing or inline REPLs to figure out where the time is going, but in heavily iterative functions, a profiler identifies the cause much more quickly.

1. Augeas path expression performance

Two users mentioned that Augeas's path expression performance was terrible in some circumstances. One was working on an Augeas provider for Puppet and it was slow matching entries when there were hundreds or thousands of services defined in a Nagios config, while netcf developers reported the utility was very slow with hundreds of VLAN interfaces.

Specifically it was found that filtering of nodes from the in-memory tree was slow when they had the same label, even when accessed directly by index, e.g. /files/foo[0] or /files/foo[512]. Running Valgrind's callgrind utility against a simple test run of the binary showed a huge proportion of time in the ns_add and memmove functions.

Using KCachegrind to analyse the output file from callgrind:
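
The workflow is roughly as follows - the exact augtool invocation used for the test isn't given here, so the print command is only illustrative:

$ valgrind --tool=callgrind augtool print /files/etc/hosts
$ kcachegrind callgrind.out.*

callgrind writes a callgrind.out.<pid> file into the current directory, which KCachegrind then visualises as the callee map described below.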

Visually it's very easy to spot where the time's taken up from the callee map. The two candidates are the red and yellow squares shown on the left, which are being called through two different paths (hence showing up twice). The third large group of functions are from the readline library initialisation (since I was testing the CLI, not the API), and the fourth block is Augeas' own initialisation. These have a much more regular spread of time spent in functions.

The bulk of the time in the two functions on the left was eliminated by David by:

  1. Replacing an expensive check in ns_add that was attempting to build a set of unique nodes. Instead of rescanning the existing set each time to see if the new node was already present, a flag inside the node indicates if it's already been added.
  2. Replacing many memmove calls with batched calls. Instead of removing candidate nodes from the start of a list by memmoving the rest of the list towards the start as each one is identified, the number to remove is batched and removed in one operation.

After fixing it, the group of expensive ns_* functions have moved completely in the callee map. They're now over on the right hand side of KCachegrind, leaving aug_init being the most expensive part, followed by readline, followed by the actual filtering.

2. JGrep filtering performance

Puppet module tests for theforeman/puppet had recently started timing out during startup of the test. The dependencies would install, the RSpec test framework would start up and there would be a long pause before it started listing tests that were running. This caused a timeout on Travis CI and the cancellation of the builds after 10 minutes of inactivity.

Julien dug into it and narrowed it down to a method call to rspec-puppet-facts, which loads hashes of system data (facts) for each supported operating system and generates sets of tests for each OS. This in turn calls facterdb, a simple library that loads JSON files into memory and provides a filtering API using jgrep. On my system, this was taking about seven or eight seconds per call and per test, so a total of about two minutes on this one method call.

I was rather prejudiced when I started looking at this, as I knew from chasing a Heisenbug into facterdb a few weeks earlier that it loaded all of its JSON files into one large string before passing it into JGrep to be deserialised and filtered. I figured that it was either taking a long time to parse or deserialise the huge blob of JSON or that it was too large to filter quickly. Putting some basic logging into the method showed that I was completely wrong about the JSON, it was extremely quick to parse and was a non-issue.

Next up, with some basic logging and inserting pry (a REPL) into jgrep, I could see that parsing the filter string was taking about 2/3 of the time and then applying it another 1/3. This surprised me, because although the filter string was reasonably complex (it lists all of the supported OSes and versions), it was only a few lines long. Putting ruby-prof around the JGrep method call quickly showed where the issues lay.

ruby-prof has a variety of output formats, including writing callgrind-compatible files which can be imported straight into KCachegrind. These quickly showed an alarming amount of time splitting strings and matching regular expressions:

The string splitting was only in order to access individual characters in an array during parsing of the grep filter, so this could easily be replaced by String#[] for an instant performance boost. The regex matching was mostly to check whether certain characters were in strings, or if the string started with certain substrings, both of which could be done much more cheaply with String#include? and String#start_with?. After these two small changes, the map looks a lot healthier with a more even distribution of runtime between methods:

Actual runtime has dropped from 7-8 seconds to a fraction of a second, and tests no longer time out during startup.

Both callgrind and ruby-prof are extremely easy to use - it's just a matter of running a binary under Valgrind for the former, and wrapping a code block in a RubyProf.profile call for the latter - and the results from fixing some low-hanging performance problems are very satisfying!

Categories: LUG Community Blogs

Steve Kemp: A transient home-directory?

Planet HantsLUG - Wed, 25/11/2015 - 14:00

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.
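
mr reads a simple ~/.mrconfig file listing each repository and how to check it out; a minimal sketch (with made-up repository URLs) looks something like this:

[code/dotfiles]
checkout = git clone git@example.com:steve/dotfiles.git dotfiles

[code/scripts]
checkout = git clone git@example.com:steve/scripts.git scripts

With that in place, mr checkout clones anything that's missing and mr update pulls everything in one go.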

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example, I like "Ctrl-Alt-t" to open a new gnome-terminal window, and that's something I configure the first time I log in to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

Categories: LUG Community Blogs

Facebook experts: how do I stop all re-shared posts appearing in my feed?

Planet SurreyLUG - Wed, 25/11/2015 - 10:29

I’d like to stop all re-shared posts appearing in my Facebook feed, and only
see newly created content from my contacts. Is this possible?

The post Facebook experts: how do I stop all re-shared posts appearing in my feed? appeared first on life at warp.

Categories: LUG Community Blogs

Engine stalled yesterday. #ill

Planet SurreyLUG - Wed, 25/11/2015 - 10:20

The problem with being the engine of your business is that, sometimes, engines stall.

Recovering now.

The post Engine stalled yesterday. #ill appeared first on life at warp.

Categories: LUG Community Blogs

Self-hosting the IndieWeb

Planet SurreyLUG - Tue, 24/11/2015 - 10:15

Discovering the IndieWeb movement was a 2015 highlight for me. It addressed many of my concerns about the direction of the modern internet, especially regarding ownership of and control over one's own data. But to truly own your own data, self-hosting is a must!

Background: Self-hosting your own stuff

I’m an ideas person. I have a number of projects – or, rather, project ideas – lined up, which I need to record and review. My blog provides me with the ideal space for that, as some ideas may attract the attention of others who are also interested. But why does this matter?

As someone who naturally likes to share experiences and knowledge, I see no benefit in not sharing my ideas too. After all, the web is all about sharing ideas. This matters to me, because the web is widely regarded as the most valuable asset civilised society has today (aside from the usual – like natural resources, power, warmth and sustenance)!

Owning your own data

As a small business owner, I sometimes benefit from various common business practices. For example, the standard accounting principle of straight-line depreciation means that after several years, capital assets purchased by the business have little-to-no value left on the books and become potential liabilities (both in the financial and risk-management sense). This means I am able to get hold of used, good-condition computing hardware that is 4-5 years old at very little cost.


Even 10 year old servers still make for good general purpose machines. I’ll be using one of these for this blog, soon.  Expect plenty of caching!


This is useful for me, as a blogger and an IndieWeb advocate, as I can not only publish and manage all my own data, but also physically host my own data too. As I have fibre broadband running to my house, it’s now feasible to serve my blog at reasonable speeds with 10-20 Mbit/s upstream (“download speed” to you), which is sufficient for my likely traffic and audience.

This ties in nicely with one of my core beliefs, that people should be able to manage all their own data if they choose. I am technically competent enough, and have the means at my disposal to do it. So why not!

Another driver towards this is that I wish to permanently separate “work” and “pleasure”. My business web hosting and cloud service is for my customers. Yes, we host our own web content as a business, but personal content? Well, in the interests of security and vested interests, I am pushing towards making  personal content something that is only hosted for a paying customer.

Of course, I would encourage anyone to start their own adventure self-hosting too!

Many bridges to cross

Naturally, taking on this type of arrangement has various challenges attached. Here is a selection of the tasks still to be achieved:

  • Convert some space in house for hosting
    • Create a level screed
    • Sort out wiring
    • Fire detection/resistance considerations
    • Power supply (e.g. UPS)
    • Physical security
  • Get server cabinet & rack it up
  • Configure firewall(s)/routing accordingly
  • Implement back-up – and possibly failover – processes
Step one: documentation

Whilst I am progressing these endeavours, it would be remiss if I didn’t document them. There is a lot to be said for the benefits (to a devop, anyway) of hosting one’s own sites and data, but naturally my blog must carry on while I am in the process of building its new home.

A quick jiggle around of my site’s menu structure will hopefully clarify where you can see this work, going forwards (hint, check the projects menu).

Taking it from here

If you are interested in hosting your own servers and being in direct control over your content/data, why not subscribe to this blog’s RSS feed or subscribe by email (form towards footer). Or if you have comments, just “Leave a Reply” beneath!

The post Self-hosting the IndieWeb appeared first on life at warp.

Categories: LUG Community Blogs

Mick Morgan: christmas present

Planet ALUG - Mon, 23/11/2015 - 19:12

Like most people in the UK at this time of the year I’ve been doing some on-line shopping lately. Consequently I’m waiting for several deliveries. Some delivery companies (DHL are a good example) actually allow you to track your parcels on-line. In order to do this they usually send out text or email messages giving the tracking ID. Today I received an email purporting to come from UKMail. That email message said:

UKMail Info!
Your parcel has not been delivered to your address November 23, 2015, because nobody was at home.
Please view the information about your parcel, print it and go to the post office to receive your package.

UKMail expressly disclaims all conditions, guarantees and warranties, express or implied, in respect of the Service. Where the law prevents such exclusion and implies conditions and warranties into this contract, where legally permissible the liability of UKMail for breach of such condition,
guarantee or warranty is limited at the option of UKMail to either supplying the Service again or paying the cost of having the service supplied again. If you don’t receive a package within 30 working days UKMail will charge you for it’s keeping. You can find any information about the procedure and conditions of parcel keeping in the nearest post office.

Best regards,

I /very/ nearly opened the attached file. That is probably the closest I have come to reacting incorrectly to a phishing attack. Nice try guys. And a very good piece of social engineering given the time of year.

Virustotal suggests that the attached file is a malicious word macro container. Interestingly though, only 7 of the 55 AV products that Virustotal uses identified the attachment as malicious. And even they couldn’t agree on the identity of the malware. I suspect that it may be a relatively new piece of code.

Categories: LUG Community Blogs

Varidesk Pro Plus 48 Standing Desk

Planet SurreyLUG - Sun, 22/11/2015 - 18:25

In my recent post on the Bac Posture Stand I mentioned that I had purchased the Varidesk Pro Plus 48. I promised to review this "in due course" and that time has now come!

The Varidesk Pro Plus 48 is a "desk riser" that enables you to choose your working height. You can start the day standing up and then, after your executive lunch, you can spend your afternoon sitting down.

N.B. This post includes Amazon Associates links; I have never actually received anything from them, but I live in hope!!


At £437 (including delivery) this is a very expensive piece of equipment indeed. The alternative is to buy an actual standing desk and in many cases this will be a better option. The beauty of the Varidesk is that you are able to retain your existing furniture.

You do also need the Varidesk Standing Mat, which is another £67.50 (including delivery).

There are cheaper solutions, such as the LIFT Standing Desk mentioned on Bad Voltage, but at the time I was unable to locate this in the UK. You can even build your own.


In the end I purchased the Varidesk Pro Plus 48 from Amazon*. Not only was it almost the only option that I could find, it was a professional looking unit and the reviews were also very positive.

*Please note that the photo on the Amazon item is actually incorrect and the Pro Plus 48 actually looks like the photo above, visit the Varidesk website for details. I did contact Varidesk about this anomaly and they explained that they had informed Amazon on multiple occasions and were unable to get the photo corrected.

To enhance my new standing desk I also purchased a number of other items:


Typically the Varidesk arrived when I was on a day off. It turned up on a pallet, resulting in much consternation at work. In fact, once the packaging was removed, the item was the desired size and all was well.

There is no assembly required, simply unpackage and lift onto your desk. Be warned though, the Varidesk does weigh a lot and will require two fairly strong people to manoeuvre it into place.

I was a little concerned as to how this Pro Plus 48 would fit on my corner desk. I did originally order the Varidesk Cube Corner 48, but I received an email from Varidesk explaining that this unit had been discontinued and cancelling and refunding my order.

In the event the Pro Plus 48 fitted my corner desk beautifully.

First Impressions

The Varidesk Pro Plus 48 appears to be extremely well constructed and solidly made. It has what I assume are bolt holes for those scared of the desk falling over, but in practice the weight of the metal base is sufficient.

The desk can be raised and lowered without physical effort, although sometimes cables will get under the desk requiring their removal before it will lower fully. I cannot blame Varidesk for that, but you need to give some thought about cable routing to minimise the likelihood of this happening.

The First Month

After a month of using my Varidesk I am very happy with it. I would love to be able to report a stunning transformation of my painful back, but sadly my back is still very sore. That said, I certainly feel healthier from standing up and being more active.

Standing up all day is not without its issues, my feet and legs get tired, so I tend to sit down at lunchtime and late afternoon. But I have also found that sitting down for too long has started feeling very uncomfortable. Perhaps it always did? But now I can do something about it, raising up my desk and enjoying standing up again, if only for a while.

In short, for me at least, being able to choose to sit or stand feels very natural and it would feel very strange now to be forced to spend the entire day sitting down.

I believe I now have the closest thing possible to the perfect desk.

Categories: LUG Community Blogs

Jonathan McDowell: Updating a Brother HL-3040CN firmware from Linux

Planet ALUG - Sat, 21/11/2015 - 13:27

I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32 bit enabled on my house server and I really wish I’d either bought a GDI printer that had an open driver (Samsung were great for this in the past) or something that did PCL or Postscript (my parents have an Xerox Phaser that Just Works). However I don’t print much (still just on my first set of toner) and once setup the driver hasn’t needed much kicking.

A more major problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM but the updater wants the full printer driver setup installed and that seems like overkill. I did a bit of poking around and found reference in the service manual to the ability to do an update via USB and a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

First I queried my printer details:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.11\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.02\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

I used that to craft an update file which I sent to Brother via curl:

curl -X POST -d @hl3040cn-update.xml -H "Content-Type:text/xml" --sslv3

This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

cat LZ2599_N.djf | nc hl3040cn.local 9100

The LCD on the front of printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf which I was then able to send to the printer in the same way. This led to:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.19\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.04\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

Cool, eh?

[Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

Categories: LUG Community Blogs

Howto | Add ActiveDirectory Addressbook to Sylpheed Email

Planet SurreyLUG - Thu, 19/11/2015 - 21:00

Where we require a lightweight mail client, we tend to use Sylpheed, the lightweight mailer from which Claws Mail was originally forked.

It seems unlikely that you would be able to add an ActiveDirectory Address Book into such a lightweight email client, and indeed the manual states:

### FIXME: write this part.

But in fact it was trivially easy:


Whilst these instructions worked for us, do be aware that we are using Samba4 ActiveDirectory. In theory this is a drop-in replacement for Windows ActiveDirectory and these instructions should work unchanged.

Add LDAP Addressbook

Firstly run Sylpheed and go to Tools, Addressbook. Within the Sylpheed Addressbook go to File, New LDAP Server. You should now see a screen like this:

Having entered the Name, Hostname and Port you are able to "Check Server", to ensure connectivity. Next either enter your Search Base, or click on the … button to select from the detected Search Bases.

Item         Explanation                   Example
Name         Addressbook or server name    example
Hostname     ActiveDirectory host name     ads.example.lan
Port         LDAP port number*             389 or 636
Search Base  Your AD domain in LDAP form   DC=example,DC=lan

*You should probably choose 636 when connecting via a public network, and you may need to open ports on your router.
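
If you want to sanity-check the connection details outside Sylpheed first, a quick ldapsearch (from the ldap-utils package) against the same server should return entries; the hostname, bind DN and search base here simply follow the example values used in this post:

$ ldapsearch -H ldap://ads.example.lan:389 -D "chris@example.lan" -W -b "DC=example,DC=lan" "(objectclass=*)"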

Now select the Extended tab and you should see the following screen:

Item             Explanation                        Example
Search Criteria  This simple example worked for us  (objectclass=*)
Bind DN          Your ActiveDirectory username      chris@example.lan
Bind Password    Your ActiveDirectory password      -

Now click on OK to finish.


You should now have a Search field available; enter a colleague's first name, click Search, and you should be presented with their email addresses.


As far as I can tell the addressbook lookup is not automatic and you have to click on the addressbook icon in the Compose Window and search for the person, in order to add them to the To: field. A bit clunky perhaps, but arguably not so very different from the need in Outlook to press Check Names to look up new addresses. Needless to say - once the address is in the recent address list, it is auto-completed in the future.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Wombat Upgrade

Planet HantsLUG - Wed, 18/11/2015 - 22:21

Today the order went in for a major rebuild of Wombat. Some parts will remain from the original, but overall most of the system will be replaced with more modern parts:

  • The new CPU has double the core count, higher clock speed and better features. It should be faster under both single and multi-threaded use. It should also use less electricity and be cooler.
  • The new GPU should be much faster, it's on a faster bus, and it has proprietary driver support (if required).
  • The new SATA controller is more modern and should be much faster than the hard disk needs; the current controller is an older generation than the disk.
  • The RAM is much faster - two generations faster and there is four times as much of it.

Overall it should be faster, use less electricity, and be thermally cooler. It won't be as fast as my desktop, but it should be noticeably faster and my better half should be happy enough - especially as I shouldn't have to touch the data on the hard disk, which was only recently reinstalled.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Newstalgia

Planet ALUG - Tue, 17/11/2015 - 23:57

Well, well, after talking about my time at university only yesterday, tonight I saw Christian Death who were more or less the soundtrack to my degree with their compilation album The Bible.

I'm pleased to say they seemed like thoroughly nice folks and played a good gig. Proving my lack of musical snobbery (which, to be honest, generally goes with the goth scene), I only knew one song but enjoyed everything they played, from new stuff to old.

The support band was a local act called Painted Heathers who (for me, at least) set the scene nicely and represented an innovative, modern take on the musical theme behind the act they were supporting. Essentially an indie band with goth leanings (I wonder if they agree), they suit my current musical bent (I play keyboards in an indie band) and the mood I was in. They're very new, young, and I shall spread their word for them if I can :)

Categories: LUG Community Blogs

Howto | Install ESET Remote Administrator on Ubuntu

Planet SurreyLUG - Tue, 17/11/2015 - 20:12

After much research I decided to purchase ESET Antivirus for our Windows clients. But rather than install ESET Antivirus on each client in turn, I decided to install ESET Remote Administrator on an Ubuntu VPS. ESET Remote Administrator is a server program for controlling ESET Antivirus (and other ESET programs) on clients: you can use it to deploy the ESET Agent, which in turn can install and monitor the ESET Antivirus software.

Our Windows PCs are controlled by an ActiveDirectory server (actually a Samba4 ActiveDirectory server, although that should not make any difference to these instructions).

I found the ESET instructions for so doing decidedly sketchy, but I eventually managed to install it and took some notes as I went. I cannot promise these instructions are complete, but used in conjunction with the ESET instructions they may be of help.

Update Hosts

Edit /etc/hosts:

localhost.example.lan localhost.localdomain localhost
eset.example.lan eset

Test by typing the following two commands and checking the output matches:

# hostname
eset
# hostname -f
eset.example.lan

Dependencies

# apt-get install mysql-server unixodbc libmyodbc cifs-utils libjcifs-java winbind libqtwebkit4 xvfb
# dpkg-reconfigure libmyodbc

Install MySQL Server

Edit /etc/mysql/my.cnf:


Restart MySQL

# service mysql restart

Configure MySQL

# mysql -u root -p

I cannot remember whether I had to create the database and/or the username and password by hand; I suggest you try without first, and then with.
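
If it turns out they are needed, something along these lines should do it - the era_db and era names match the install command below, and the password is just a placeholder:

# mysql -u root -p -e "CREATE DATABASE era_db; CREATE USER 'era'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL PRIVILEGES ON era_db.* TO 'era'@'localhost'; FLUSH PRIVILEGES;"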

Install ESET Remote Administration Server

Replace ??? with actual:

# sh --license-key=??? \
    --db-type="MySQL Server" --db-driver="MySQL" --db-hostname= --db-port=3306 \
    --server-root-password="???" --db-name="era_db" --db-admin-username="root" --db-admin-password="???" \
    --db-user-username="era" --db-user-password="???" \
    --cert-hostname="eset" --cert-auth-common-name="eset.example.lan" --cert-organizational-unit="eset.example.lan" \
    --cert-organization="example ltd" --cert-locality="UK" \
    --ad-server="ads.example.lan" --ad-user-name="era" --ad-user-password="???"

In case of error, read the following carefully:

/var/log/eset/RemoteAdministrator/EraServerInstaller.log

Install Tomcat7

# apt-get install tomcat7 tomcat7-admin

Wait 5 minutes for Tomcat to start, then visit http://localhost:8080 to check it has worked.

Configure SSL

See SSL/TLS Configuration HOW-TO.

Step 1 - Generate Keystore

$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/share/tomcat7/.keystore

Use password 'changeit'.

At the time of writing $JAVA_HOME is /usr/lib/jvm/java-7-openjdk-amd64/jre

Step 2 - Configure Tomcat

You should have two sections like this:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/usr/share/tomcat7/.keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />

These will already exist, but need uncommenting and adjusting with keystore details.
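
On Ubuntu's tomcat7 package the Connector definitions live in server.xml; edit it and restart Tomcat for the change to take effect:

# editor /etc/tomcat7/server.xml
# service tomcat7 restart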


You should now be able to go to https://localhost:8443.

Join Windows Domain

The ERA server must be joined to the domain

Install samba and stop any samba services that start.

# apt-get install samba krb5-user smbclient
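
The packages start their daemons straight away, so stop them until the configuration and domain join are complete (these are the service names used by Ubuntu's samba and winbind packages):

# service smbd stop
# service nmbd stop
# service winbind stop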

Edit /etc/samba/smb.conf:

[global]
    workgroup = EXAMPLE
    security = ADS
    realm = EXAMPLE.COM
    dedicated keytab file = /etc/krb5.keytab
    kerberos method = secrets and keytab
    server string = Samba 4 Client %h
    winbind enum users = yes
    winbind enum groups = yes
    winbind use default domain = yes
    winbind expand groups = 4
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind normalize names = Yes
    idmap config * : backend = tdb
    idmap config * : range = 2000-9999
    idmap config EXAMPLE : backend = ad
    idmap config EXAMPLE : range = 10000-999999
    idmap config EXAMPLE : schema_mode = rfc2307
    printcap name = cups
    cups options = raw
    usershare allow guests = yes
    domain master = no
    local master = no
    preferred master = no
    os level = 20
    map to guest = bad user
    username map = /etc/samba/smbmap

Create /etc/samba/smbmap:

!root = EXAMPLE\Administrator Administrator administrator

Edit /etc/krb5.conf:

[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true
    ticket_lifetime = 24h
    forwardable = yes

Make sure that /etc/resolv.conf points to the AD DC, and dns is setup correctly.
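
For example, /etc/resolv.conf should end up looking something like this, where the nameserver address is a placeholder for your AD DC's IP:

nameserver 192.168.0.10
search example.lan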

Then run this command:

# net ads join -U Administrator@EXAMPLE.COM

Enter Administrators password when requested.

Edit /etc/nsswitch.conf and add 'winbind' to passwd & group lines.
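
On a stock Ubuntu install the two lines then look like this:

passwd:         compat winbind
group:          compat winbind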

Start samba services.
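
Once the services are running, the join and winbind lookups can be verified with the standard Samba tools:

# net ads testjoin
# wbinfo -u
# wbinfo -g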

Update DNS

For some reason the join does not always create the DNS entry on Samba4, so you may need to add this manually.
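
On the Samba4 DC itself this can be done with samba-tool; the zone, host name and address below are placeholders following the examples in this post:

# samba-tool dns add ads.example.lan example.lan eset A 192.168.0.20 -U Administrator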

Categories: LUG Community Blogs

Intermittent USB3 Drive Mount

Planet SurreyLUG - Tue, 17/11/2015 - 14:38

I have a problem with a Samsung M3 Portable USB3 external hard drive only working intermittently. I use a couple of these for off-site backups for work and so I need them working reliably. In fairness a reboot does cure the problem, but I hate that as a solution.

To troubleshoot this problem, the first step was to check the system devices:

# ls -al /dev/disk/by-id

But only my main two system drives were showing, and not the USB3 drive, which would have been sdc, being the third drive (a, b, c).

System Logs

The next step was to check the system logs:

# dmesg | tail

This showed no issues at all. Then I checked syslog:

# tail /var/log/syslog

This was completely empty, and had not been written to for almost a year. I can't quite believe I haven't needed to check the logs in all that time, but there you are.

I checked that rsyslogd was running, and it was, as user syslog. The /var/log/syslog file was owned by root with group adm, and although syslog was a member of the adm group, the files all had user read/write permissions only (-rw-------).

This was easily fixed:

# chmod g+rw /var/log/*
# service rsyslog restart

Now the syslog file was being written to, but there was a problem writing to /dev/xconsole:

Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info=""] start
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's groupid changed to 103
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's userid changed to 101
Nov 17 12:20:48 asusi5 rsyslogd-2039: Could no open output pipe '/dev/xconsole': No such file or directory [try ]
Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info=""] exiting on signal 15.

So I duly visited the link mentioned, which gave instructions for disabling /dev/xconsole, but this made me nervous and further research suggested that that was indeed the correct fix for headless servers, but possibly not for a desktop PC. Instead I used the following fix:

# mknod -m 640 /dev/xconsole c 1 3
# chown syslog:adm /dev/xconsole
# service rsyslog restart

USB Powersaving

Now at least it seems that my syslog is working correctly. Unfortunately unplugging and plugging in the USB drive still was not writing to the logs! When I plugged in the drive the blue light would flash repeatedly and then switch off. I would have believed that the drive had a fault, if it weren't for the fact that rebooting the PC solves the problem.

Thinking that perhaps this problem was USB3 related, I decided to Google for "USB3 drive not recognised" which found this post. Except, that post was only relevant when operating on battery power, whereas I am using a plugged in desktop PC. Clearly that page could not be relevant? Except that in my notification area there is a battery icon, relating to my Uninterruptible Power Supply. But surely Ubuntu couldn't be treating my UPS as if it were a battery? Could it?

In order to find out the power status of my USB devices I followed the suggestion of typing:

# grep . /sys/bus/usb/devices/*/power/control

Most were flagged as "on", but a number of devices were flagged as "auto". I decided to try switching them on to see if that made any difference:

# for F in /sys/bus/usb/devices/*/power/control; do echo on >"${F}"; done
# grep . /sys/bus/usb/devices/*/power/control

Now all the devices were showing as on. Time to try out the drive again - I unplugged it and plugged it back in again. This time the power light flashed repeatedly and then went solid.

Unfortunately the drive was still not mounted, but at least it was alive now. What next? Ah yes, I should check the logs to see if they have any new messages, now that the drive is powered and my logs are working:

# dmesg | tail
[15243.369812] usb 4-3: reset SuperSpeed USB device number 2 using xhci_hcd
[15243.616871] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff880213c71800

Faulty USB-SATA Bridge

At last I have something to go on. I searched for that error message, which took me to this page, which suggested:

Reason is faulthy usb-sata bridge ASM1051 in USB3 drives. It has nothing to do with motherboards. See this. Workaround was made to kernel 3.17-rc5. Now those disks works. Some perfectly and some works but data transfer is not smooth.

Could that be my problem? I checked my current Kernel:

# uname -r
3.13.0-68-generic

So yes, it could be. As I cannot read the device yet, I cannot check whether ASM1051 is present in my USB3 drive.

Having uninstalled 34 obsolete kernels and a further 8 newer kernels that weren't in use, I was in a position to install the latest Kernel from Vivid with:

# apt install linux-generic-lts-vivid

The problem of course is that in order to boot onto the newer Kernel I must reboot, thereby fixing the problem anyway.

And finally

It would be easy to portray this as clear evidence of the difficulty of running Linux; certainly Samsung will have thoroughly tested their drive on Windows and Mac operating systems. That said, such problems are not unheard of on the more popular operating systems, and debugging them there is, I would argue, far harder than following the logical steps above.

Having rebooted, as expected, the drive worked. I tried running lshw to see if ASM1051 was present, but I could not find it. Of course upgrading the Kernel could have fixed the problem anyway.

For the present I will labour under the comforting belief that the problem is fixed and will update this post again either when the problem reoccurs or in a week's time if all is well!

Categories: LUG Community Blogs

Steve Engledow (stilvoid): More ale

Planet ALUG - Tue, 17/11/2015 - 00:35

There are several reasons I took the degree course I did - joint honours in Philosophy and Linguistics - but the prime one is that I really felt I wanted to study something I would enjoy rather than something that would guarantee me a better career. To be honest, my degree subject versus the line of work I'm in - software development - is usually a good talking point in interviews and has probably landed me more jobs than if I'd had a degree in Computer Science.

The recent events in Paris, leaving politics aside, were an undeniably bad thing and the recent news coverage and various (sometimes depressingly moronic) Facebook posts on the subject got me thinking about moral philosophy again. Specifically, given we see conflicts between groups of people ostensibly because they live according to different moral codes (let's ignore the fact that their motivations are clearly not based on this at all) and those codes are complex and ambiguous (some might say intentionally so), can there be a moral code that's simple, unambiguous, and agreeable?

My 6th form philosophy teacher, Dr. John Beresford-Fry (Dr. Fry - I can't find him online. If anyone knows how to contact him, I'd love to speak to him again), believed he had a simple code that worked:

Do nothing gratuitous.

To my mind, that doesn't quite cut it; I don't think it's actually possible to do anything completely gratuitously; there's always some reason or reasoning behind an action. Maybe he meant something more subtle and it's been lost on me.

Some years ago, I thought I had a nice simple formulation:

Act as though everyone has the right to do whatever they wish.


You may do whatever you want so long as it doesn't restrict anybody's right to do the same.

Today though, I was going round in very big circles trying to think that one through. It works ok for simple, extreme cases (murder, rape, theft) and even plays nicely (I think) in some grey areas (streaming movies illegally) but I really couldn't figure out how to apply it to anyone in a position of power. How could an MP apply that rule when voting on bringing in a law to raise the minimum wage?

Come to think of it, how could an MP apply any rule when voting on any law?

Then I remembered the conclusion I came to when I was nearing the end of my philosophy course: the sentimentalists or the nihilists probably have it right.

Oh well, it kept me busy for a bit, eh ;)

Note to self: I had an idea for a game around moral philosophy, don't forget it!

Categories: LUG Community Blogs

Wordpress Comments Migration to Disqus

Planet SurreyLUG - Mon, 16/11/2015 - 21:00

I assumed that, when I migrated from Wordpress to Jekyll, one of the casualties would be the old Wordpress comments. I had decided to use DISQUS for comments on Jekyll, partly because it is the "new thing" and partly because it is incredibly easy to achieve. But I never for a moment considered that migrating the old comments would be a possibility.

In fact, not only is it possible, but it is almost trivially easy.

I did encounter a few issues, as I had foolishly renamed some pages to more logical names, thereby breaking all Google and other inbound links, and also breaking the link with Disqus comments. That was a very bad idea and as a consequence I spent quite a while renaming files back again, but other than that it all worked flawlessly. If you are going to rename pages, it is probably best to do it in Wordpress before migrating to Jekyll.

Well done Disqus for making something that you would expect to be difficult, easy. And thank you Wordpress for providing a decent export routine, without which my data would have been locked away.

I look forward to your comments!


Categories: LUG Community Blogs