17.11.2010 00:47

Cfengine3 bootstrap

Cfengine has been around for a very long time. Its dependencies are minimal, its resource needs minimal, its performance excellent. Version 2 of cfengine is well documented, thoroughly discussed and described in numerous tutorials. But v2 is also deprecated in favor of cfengine3. It so happens that v3 was a rewrite, and many things changed. There are a few tutorials on v3 worth mentioning, but they all describe deploying the policy host, and little else.

I've been reading "Automating Linux and Unix System Administration" recently, and the book covers cfengine2 extensively. The authors describe a complete, extremely modular cfengine setup. I had problems wrapping my brain around methods for doing the same in v3. After spending some time with the cfengine reference manual I finally had a solid basic policy for deploying the policy host and clients, which I want to share here: cf3-bootstrap.tar.bz2.

After installing cfengine with its workdir defined as, let's say, /srv/cfengine, we can bootstrap the policy server:

# /usr/sbin/cf-key
# cd /srv/cfengine/
# tar -xvjf cf3-bootstrap.tar.bz2 ; rm -i $_
# cd masterfiles/
# cp /usr/share/doc/cfengine/cfengine_stdlib.cf cf-stdlib.cf
# emacs update.cf
# /usr/sbin/cf-agent --bootstrap
# /usr/sbin/cf-execd -v -F
The cf-key run will create the basic cfengine directory tree, and will generate the host's public and private keys (cfengine functions somewhat like SSH in this regard). We edit update.cf to define our policy server and its hostname. Other files will require more edits for customization, but the update bundle is all that is needed for bootstrapping the policy host (and later pulling changes in policy).

The method for bootstrapping a client is similar. We run cf-key, and we copy the failsafe.cf and update.cf files from the policy host to /srv/cfengine/inputs. Then we run cf-agent from that directory to bootstrap, and finally do cf-execd's initial run (which, by the way, configures a cron job so cf-execd is executed periodically).
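
Assuming the same /srv/cfengine workdir, and with "policyserver" standing in for the policy host's name, that boils down to something like this (a sketch; adjust the paths to your setup):
# /usr/sbin/cf-key
# scp policyserver:/srv/cfengine/masterfiles/failsafe.cf /srv/cfengine/inputs/
# scp policyserver:/srv/cfengine/masterfiles/update.cf /srv/cfengine/inputs/
# cd /srv/cfengine/inputs/
# /usr/sbin/cf-agent --bootstrap
# /usr/sbin/cf-execd -v -F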

A few words about basic cfengine operation and policy. The policy is kept, and modified, in the masterfiles directory. Remember, the policy host also manages itself, which means it too copies the policy to its inputs directory and runs it from there. Clients pull the policy from the masterfiles directory on your policy host to their inputs directory, from where it is acted upon. The policy defines that our policy server runs the cf-serverd daemon, which acts as a file server for our clients. The first time a client contacts the server their keys will be exchanged.

The example bundles I provided keep the service configuration files, and any other data for distribution, in the clientfiles directory. I am a long way from templating in cfengine3, but the authors of the book I mentioned are willing to argue that distributing complete files is the right way to do things.

Now a few words about the policy I shared. I wanted a modular policy, one that will easily scale from 3 servers to 23. Cfengine configuration and basic policy live in the masterfiles directory. I make use of the cfengine community library (distributed with the cfengine package), and I also maintain a site-specific library.cf file. The promises.cf file is the main cfengine policy file; it imports all needed policy files and defines policies that should run on all hosts. All site-specific policies are in the site/ sub-directory. The main file, site.cf, defines policies that should run per host (or groups of hosts), and does nothing else. Policies for daemons or anything else get their own file. I included a few basic ones with examples: one with some example Apache controls, one for editing resolv.conf, and one generic example that pretends to run actions on system users.

Where to take it from bootstrapping? First and foremost, policy should be kept in revision control; Git will do nicely. With several cfengine policy hosts controlling clusters or groups of machines, we can think about masters of master servers. In that scenario the policy hosts can pull their complete policies with Git from the master(s). Developing complex policies will take some time, but it's time worth investing. It will make your job better, and it's also a good prospect for the future. I don't remember seeing an ad for a bigger shop that didn't require cfengine or puppet experience.
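
Putting the policy under Git takes only a minute (assuming the masterfiles location from above):
# cd /srv/cfengine/masterfiles
# git init
# git add .
# git commit -m "Initial cfengine3 policy"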


Written by anrxc | Permalink | Filed under work, code

16.09.2010 15:18

Mobile broadband and OpenVPN on GNU/Linux

I wrote a short article on GPRS more than a year ago. Now I have finally moved on to HSPA, after GPRS failed me in an emergency. I was unable to get OpenVPN working over it, and to make matters worse I found my (extremely expensive) accumulated bandwidth had disappeared from the T-Com account. I decided to switch providers, and technology, and bought a Huawei E180 mobile broadband modem from TELE2.

With a fairly recent Linux kernel the modem will be recognized correctly as storage (it has a MicroSD card slot), a CD-ROM (the read-only part of the stick with MS Windows software/driver) and a modem. So, usb_modeswitch is not needed. The modem can be plugged in, and the usbserial and ppp_generic modules should be loaded. As a PPP dialer I continue to use wvdial, configured like so:

; wvdial configuration
;   /etc/wvdial.conf
;
[Dialer tele2pin]
Modem = /dev/ttyUSB0
baud = 460800
Init1 = AT+CPIN=XYZW

[Dialer tele2]
Modem = /dev/ttyUSB0
baud = 460800
Stupid Mode = 1
Init2 = ATZ
Init3 = AT&F E1 V1 X1 &D2 &C1 S0=0
Init4 = AT+CGDCONT=1,"IP","data.tele2.hr"
Phone = *99#
Username = none
Password = none
Under the tele2pin dialer a PIN must be provided in place of "XYZW". The username and password in the tele2 dialer are not used, but must be provided. The APN "mobileinternet.tele2.hr" always produced a "No carrier detected" message, so I switched to "data.tele2.hr". Stupid Mode is enabled so no time is wasted waiting on the prompt, and that's about all you need to know.

To initiate a connection a PIN must be provided first; that's why there's a separate dialer section for it. Afterwards a connection can be established:
# wvdial tele2pin
# wvdial tele2
Unfortunately OpenVPN failed to work with this setup as well. Some mobile providers block ports, some do double NAT or otherwise mess with VPN connections... but not TELE2, and I eventually got it to work. I found it was a routing problem, after seeing the following pppd message in the syslog:
pppd: Could not determine remote IP address: defaulting to 10.64.64.64
The routing table would look like this:
Kernel IP routing table
Destination     Gateway Genmask         Flags Metric Ref Use Iface
10.64.64.64     *       255.255.255.255 UH    0      0     0 ppp0
default         *       0.0.0.0         U     0      0     0 ppp0
I determined the gateway by doing a simple traceroute to google.com, and a quick fix was:
# route del default gw 0.0.0.0
# route add -host 130.244.219.90 dev ppp0
# route add default gw 130.244.219.90
# route del -host 10.64.64.64
# /etc/rc.d/openvpn start
The packets will be properly routed now, but the DNS servers of the mobile provider can no longer be reached. Change your DNS servers to the ones provided by your (virtual) private network, or to any DNS servers you would normally use.
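
For example, keeping a copy of the provider's file around and pointing resolution at the VPN's resolver (the address below is only a placeholder for whatever your setup actually provides):
# cp /etc/resolv.conf /etc/resolv.conf.tele2
# echo "nameserver 10.8.0.1" > /etc/resolv.conf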


Written by anrxc | Permalink | Filed under crypto, main

04.08.2010 18:16

Books on systems administration

I had some money to spare recently, and I decided to buy some good technical books. Actually I went out to find some great books, not so much with an agenda to learn from them as to use them as references for years to come.

The first book I settled on was Principles of Network and System Administration by Mark Burgess. The last edition is from 2003, but this book is still excellent and unique in many ways. It is one of the few books (if not the only one) approaching the topic of systems administration from a scientific standpoint, treating it as a branch of engineering. It is a book on principles, best practice, ethics, users and the management of human-computer systems. At times its approach seems perhaps too academic for business environments, where something is always going wrong (Murphy is out to get you) and it is often hard to maintain the levels of operation this book advocates. Still, it's something every good sysadmin should aspire to; I enjoyed re-reading it, and it's just the kind of book I had in mind.

It felt kind of karmic that the fourth, 20th-anniversary edition of the Unix and Linux System Administration Handbook was released these days. This series of books (which previously treated UNIX and Linux separately) by Evi Nemeth (and others) is well known, and I just had to buy this edition. It is very different from the book above: a 1300-page volume filled with practical matters, including real-world experiences of its authors. The foreword says it is not one of those "administration" books targeting users who run a UNIX system in their garage; it's rather oriented towards running UNIX in the enterprise. This revised edition covers AIX, HP-UX, Solaris, RedHat, SuSE and Ubuntu, and as usual dedicates a large part of the book to networking. It may also be worth knowing that it includes chapters on topics like DNSSEC and virtualization (the 3rd edition was released back in 2000).


Written by anrxc | Permalink | Filed under work, books

26.06.2010 22:08

Notes on systems monitoring

Often it's hard to beat a few lines of shell script to perform a basic monitoring task on a personal system. A system-load or file-system monitoring script running from cron is extremely easy to write and set up, while being invaluable in emergencies.
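
To illustrate what I mean, here is about the simplest useful example: a file-system monitor small enough to live in a crontab (the threshold and the alert address are of course just examples):

#!/bin/sh
# Mail an alert when any mounted file-system crosses the usage threshold.
LIMIT=90
FULL=$(df -P | awk -v limit="$LIMIT" 'NR > 1 { sub("%", "", $5); if ($5 + 0 > limit) print $6 " is at " $5 "%" }')
if [ -n "$FULL" ]; then
    echo "$FULL" | mail -s "ALERT: disk usage on $(hostname)" admin@example.com
fi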

I wrote plenty of those over the years (load, fs, pacct, daemons... monitors), which I use in combination with network and intrusion monitoring. And here's how it works: e-mail alerts from all personal systems are sent out to a third-party system, where they are immediately relayed to my local mailbox and then archived, but not marked as read. That way a backup remains, and the archive mailbox is also available through IMAP. While having e-mail alerts is good, a live alert is better. Live in my case means IM, and if I'm on the road, SMS, which is easily done through a service like email2sms offered by the mobile service provider. The IM part, though, is more to the point of this article.

Years ago, while learning Python, I wrote two bots, an IRC one and a Jabber one. Dealing with a protocol like IRC is great for learning; it involves many different libraries and problems: text processing, sockets, databases, accessing web resources... When I lost interest (don't we all) I decided that maintaining and running the Jabber version was the better choice, and besides, I love that protocol. Sometime later, hooking the Jabber bot into the existing alert system was easy thanks to the imaplib library. The bot connects to the IMAP server and relays, to Jabber, all alerts which have the Unseen flag. Once read, an alert is flagged as Seen, and that's all there is to it. Live alerts 24/7, over my favorite protocol, in my favorite messaging client.

If you are interested in writing your own bots you can check out the xmpppy library. Handling the connection, presence and subscriptions can be done in as little as 20 lines of code. To complete the cycle I should also mention some of my favorites for local monitoring. For process monitoring the htop project provides a great interactive replacement for the top from procps. Finally, no article would be complete without mentioning nmon - an amazing AIX and Linux performance monitor, developed (and unofficially supported) by an IBM employee.


Written by anrxc | Permalink | Filed under jabber, work, code

31.05.2010 20:15

Introducing rybackup, again

One of the earliest articles here was about my personal backup solution: rsync based, with rotating backup-snapshots, implemented as a simple shell script. It served me well, but I was never fully satisfied with it; it had a few Todo items attached which I never got around to addressing. I couldn't motivate myself to write Bash, and the script was working the way it was designed, running on a dozen of my machines.

After reading "Python for Unix System Administration" last week I decided it's time to rewrite it, in Python. Result is available in a git repository; rybackup.git. The script is designed to backup chosen directories, and files to an NFS backup server, or removable storage. Maintaining an arbitrary number of backup-snapshots going back hourly/daily/weekly/monthly as long as you need, rotating them as it goes. All of this is controlled by a few settings at the top of the script. In addition it has a few functions making it suitable to use for backup of eCryptfs encrypted home directories. Script will exit with EAGAIN if the eCryptfs mount is active, and relies on Dillon's cron to retry once in a while, after receiving the signal. To give you an idea of how it works, here is a directory tree after a few months of rotating snapshots:

2010-05-30 16:04 daily.1/
2010-05-29 00:04 daily.2/
2010-05-28 20:04 daily.3/
2010-05-31 20:03 hourly.1/
2010-05-31 16:04 hourly.2/
2010-05-31 00:05 hourly.3/
2010-05-30 20:03 hourly.4/
2010-04-04 00:05 monthly.1/
2010-03-10 00:04 monthly.2/
2010-05-28 00:02 weekly.1/
2010-05-13 00:04 weekly.2/

$ du -hs ; du -hs hourly.1
1.5G    .
778M    hourly.1
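
The du output hints at the usual rsync snapshot trick: files that didn't change between snapshots are hard-linked rather than stored again, which is why the whole tree costs little more than a single full snapshot. Stripped of all the settings and the eCryptfs handling, one rotation cycle boils down to something like this (a sketch of the idea, not the actual rybackup code; paths are examples):

#!/bin/sh
# Rotate the hourly snapshots; the oldest one drops off the end.
BACKUP=/mnt/backup
rm -rf "$BACKUP/hourly.4"
[ -d "$BACKUP/hourly.3" ] && mv "$BACKUP/hourly.3" "$BACKUP/hourly.4"
[ -d "$BACKUP/hourly.2" ] && mv "$BACKUP/hourly.2" "$BACKUP/hourly.3"
[ -d "$BACKUP/hourly.1" ] && mv "$BACKUP/hourly.1" "$BACKUP/hourly.2"
# Sync the live data; files identical to the previous snapshot are
# hard-linked by rsync instead of being copied again.
rsync -a --delete --link-dest="$BACKUP/hourly.2" /home/anrxc/ "$BACKUP/hourly.1/"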

While I was at it, I wrote a simple restore script, ryrestore. The script makes it easy to restore a file or a directory, so I don't have to dig through the backup server. Here is an example of how it works:
# ~/.backup/ryrestore.py /home/anrxc/.xinitrc

0: monthly.1 [2010-04-04 00:05:33]
1: monthly.2 [2010-03-10 00:04:10]
2: daily.2   [2010-05-29 00:04:21]
3: hourly.3  [2010-05-31 00:05:26]
4: hourly.2  [2010-05-31 16:04:16]
5: hourly.1  [2010-05-31 20:03:57]
6: hourly.4  [2010-05-30 20:03:56]
7: daily.3   [2010-05-28 20:04:01]
8: weekly.2  [2010-05-13 00:04:04]
9: daily.1   [2010-05-30 16:04:28]
10: weekly.1 [2010-05-28 00:02:52]

Snapshot: 5

sending incremental file list
.xinitrc

sent 2634 bytes  received 31 bytes  5330.00 bytes/sec
total size is 2543  speedup is 0.95


Written by anrxc | Permalink | Filed under crypto, code

30.05.2010 02:49

The Bridge Trilogy

The first time I read Virtual Light by William Gibson I liked it, but I didn't appreciate it as much as it deserved. Only after Pattern Recognition and the current series did I realize just how good the whole Bridge Trilogy really is. To me the path towards Pattern Recognition is evident in every chapter, hinted at in every reference. While waiting for the Zero History release I decided to buy the whole Bridge series and read it again. I'm halfway through now, and I'm loving every page of it. I was born too late to really appreciate Neuromancer anyway; it never influenced me, not like it did the previous generation. It wasn't even what introduced me to cyberpunk; that was Neal Stephenson's Snow Crash. Maybe that's the reason I love Gibson's recent work so much: his amazing portrayal of the present, his ability to spot patterns and nodal points where nobody else does... bits of the literal future right here, right now.

Virtual Light is set in the year 2005. Tokyo is recovering from a huge earthquake, and society as a whole from AIDS. This is where we meet Chevette Washington, a bike courier. She lives on the San Francisco Bay Bridge, where squatters have built settlements. Visiting San Francisco, the bridge and Chevette's roommate (Skinner, a bridge veteran) is Yamazaki, a student of sociology from Japan. Another character is Rydell, a security guy and former policeman, who is brought in to help investigate a theft of VR glasses - which, as it happens, were stolen by Chevette, on an impulse.

The second book, Idoru, finds Rydell working for hotel security, where he befriends a guest, Colin Laney, a data analyst. Laney has a singular gift - he can intuitively spot trends developing within masses of seemingly unrelated data. Through Yamazaki, who is now in Tokyo, Rydell finds a new job for Laney. The assignment is for Lo/Rez - the hottest rock band on earth. The lead singer has just announced that he intends to marry Rei Toei, a software agent and Japanese idoru. Chia, a member of a Lo/Rez fan club from Seattle, travels to Tokyo to visit the local chapter of the fan club and find out if rumors about the wedding are true.

The third book sees the culmination of all these events, and although All Tomorrow's Parties includes many of the same characters, it's not a direct sequel to either. The book offers its own story line, and is perhaps the best of the three. Laney can now see significant "nodal points" in the vast streams of data in the worldwide computer network, and he owes this gift to an experimental drug he received during his youth. Such nodal points are rare but significant events in history that forever change society, even though they might not be recognizable as such when they occur. Laney isn't quite sure what's going to happen when society reaches this latest nodal point, but he knows it's going to be big, and he knows it's going to occur in San Francisco. On the Bay Bridge.

What happens when we reach the nodal point? Finding out is a perfect prelude to Zero History coming in September.


Written by anrxc | Permalink | Filed under cyberpunk, books

24.05.2010 01:56

Working in Arch Linux

The arrival of my new workstation saw the culmination of a 2-year quest to (drastically) improve my desktop environment. The machine came pre-installed with Ubuntu and LUKS, which I disposed of, installing the best desktop/workstation OS available at the moment: Arch Linux. Its solid UNIX foundations, its philosophy and its package management deserve an article in their own right, so that is all I'll say about it now. Once the OS was installed I cloned my dotfiles.git repository and was ready to go. During these past few years I wrote about various software I use every day, but to see these components work in unison, to see the interaction and the big picture, is what matters to me most.

Following the order of my awesome tag layout is a good path through my workspace. But first I should mention Zenburn, a color scheme I discovered a few years ago, which now plays a very important role. Just about everything on my desktop follows the scheme's guidelines, everything but GTK and QT widgets. Zenburn is easy on my eyes and has saved me a lot of headaches.

First tag is "term" where my terminals reside, Zenburn themed urxvt and screen connecting me to the outside world. An SSH client and Irssi are often found there. Long lasting sessions are always on that tag, but for quick terminal jobs the scratch module provides me with disposable terminals that slide-in or pop-up. While working, awesome's fair layout ensures each terminal gets an equal part of the screen, and one that requires my total attention I often maximize.

Next tag is "emacs", probably the most important tag, where I code, write and take notes. The Emacs org-mode plays a crucial role, I use its format for notes, documentation, keeping track of projects and working hours, auth credentials, personal agenda and much more. I do use eCryptfs, but every sensitive file is also GPG encrypted with some help of Emacs epa-mode. Which brings me to the GPG agent which I mentioned in many previous articles. Every time Emacs needs my key a PIN entry dialog will appear, every time I open a new SSH session a dialog will appear to unlock that key. I have dozens of crypto keys but it's easy to keep track of them in this manner.

Next tag is "web" with Firefox and vimperator that changed my browsing drastically. Once I wrote about connecting awesome with org-mode, and this is the tag where I utilize that connection the most. The Mod4+q key-binding spawns a little remember frame for taking a note, or automatically pasting the clipboard selection. I store huge amounts of web data in this way. Another very important connection is passing text field contents to Emacs, for editing. I use it almost exclusively for managing support tickets, once the ticket is opened in Emacs the post-mode is invoked.

Speaking of e-mail, the "mail" tag comes next: the realm of Alpine and awesome's magnifier layout. Most of the time there are two instances running, one personal and one connected to the company's IMAP server. By the way, Alpine handles a 500k mailbox with ease, and only days ago I heard a Thunderbird user complain it couldn't handle just 60. Where would I be without it, I can't imagine. Every time a new mail comes in the tag turns red, because of the urgent flag; one key press and the client which triggered the event is automatically focused. Since I use Topal, this tag too spawns a lot of PIN entry dialogs.

My fifth tag is reserved for IM, where Gajim was used almost exclusively until I needed OTR encryption on a daily basis. Now I run Pidgin, and I was very surprised that it took very little effort to make it look and behave exactly like Gajim. I spend a lot of time on this tag, and it was very important to have Zenburn in Pidgin, otherwise all other efforts would be useless. The following tag, "rss", was very important while I was freelancing. Akregator would fetch the new projects feed every 5 minutes, and often that responsiveness alone would land jobs. The last tag is "media", a floating layout tag with smplayer, utorrent, ROX, Okular... mostly for multimedia, and for reading.


Written by anrxc | Permalink | Filed under main, desktop, work, emacs

22.04.2010 18:11

Awesome widget properties

[Image: Awesome progress bars]

The next stable release of the awesome window manager will introduce some new widget properties. When graphs and progress-bars were ported to Lua, in the 3.3 to 3.4 transition, some of the properties were lost - most notably the progress-bar ticks, and the graphs' ability to draw multiple values at once. Well, they are back, and will be included in awesome v3.4.5! To tell the truth they are not as nice as the old properties, because I tried to keep them as simple as possible (by design and implementation).

The progress-bar ticks introduce two new methods: "set_ticks_gap" and "set_ticks_size". The default gap size is 1 and the tick size 4, relative to the default progress-bar width of 100px. That's what the picture above shows: the defaults. But if you use a lot of custom properties and change the progress-bar size, it's up to you to pick the perfect gap and tick size for that progress-bar.

The graph stacking (also called multigraph by some) introduces these new methods: "set_stack" (false by default) and "set_stack_colors" (i.e. {"red", "white", "blue"}). The order of colors matters, because the "add_value" method now accepts an (optional) last argument, an index of a color from your stack color group. With these properties you can draw graphs similar to those found in Gnome: feed them multiple values and, by specifying a color index, have them all drawn on the same graph.

The remaining two are smaller properties, but could be as important as the others to some people. The first of them found its way into awesome in the current 3.4.4 release. The progress-bar "max_value" property allows you to feed your progress-bars any value without having to scale it to the 0-1 range; graph widgets already supported this. The last property is the progress-bar "offset", which may not be included after all, but some future user might want it, so I'll link to the mailing list patch. With offset the progress-bar is drawn distanced from the border by as many pixels as the offset argument specifies.


Written by anrxc | Permalink | Filed under desktop, code

19.04.2010 02:05

GNU/Linux and ThinkPad SL510

I got a new workstation last month, a laptop from the ThinkPad SL series. The TuxMobil article about installing Arch Linux on it is here. Overall it works well, but I soon regretted the decision to go with Lenovo. The ACPI support is almost non-existent: none of the function keys work, there's no bluetooth rfkill so it constantly draws power, and the machine can't wake up from suspend.

It is my workstation, but still, what use is a laptop without any power management features? It's 2010, and I can barely comprehend the suspend/hibernate situation in Linux. The last two years with my TravelMate have been a constant battle: 3 months of suspend working, followed by periods when it was broken. The last of these is especially ugly, as it breaks hibernation for people with Intel graphics. Worst of all, even in periods when it was working you still couldn't suspend, because you couldn't trust it.

These machines actually have the IdeaPad firmware, which rules out using thinkpad_acpi. Next up was lenovo-sl-laptop, a third-party module which provides support for SL models, but only up to the SL500. Then I turned to asus-laptop, which provides official in-kernel support for ThinkPad SL. Unfortunately, after inspecting the DSDT, developers concluded SL510 support is not possible. These machines expose a WMI interface, but it's not handled by any current module, and developing one will not be easy.

I don't want to write to kernel mailing lists or Lenovo until I find more owners of the SL510, or some other model with the same interface. Individually we could be ignored; together maybe we can get the ball rolling towards "lenovo-sl-wmi".


Written by anrxc | Permalink | Filed under main, desktop, work

17.04.2010 18:42

Illustrated Primers

[Image: Tablet computer]

The iPad was released and sales are sky high; software-wise it is terrible, but the fact makes me happy anyway. We are getting closer to some of the ideas laid down in 1994 by Neal Stephenson in his book The Diamond Age. Even though the age of nanoscience is only just beginning, there are some fundamental similarities between his Illustrated Primer, today's eBook readers, the OLPC and the iPad.

These are the primers of the early 21st century: beautiful devices that we read from, learn from and play with. We could consider our laptops as primers, but I can't wait to get my hands on one of those devices - at this point most likely the Sony PRS-600. Even though it's much different from the iPad, it is still my first choice, because of the iPad's software limitations but also for practicality. I would use a pad mostly for reading anyway, and here E-Ink has the advantage in contrast and battery life. Multiple new devices, by just about every big player on the market, have already been announced. Some of them will run GNU/Linux, and in the long run that will probably prove to be the best choice.

The mock-ups of the next-generation OLPC, the XO-2, are probably the closest, especially considering their role: to truly serve in the education of children. The now classic article Sic Transit Gloria Laptopi, by Ivan Krstic, addresses some problems, and reminds me once more that I shouldn't get carried away. There's still a long way to go. Gillian 'gus' Andrews gave an interesting talk on the subject at "The Last HOPE" conference. The audio is still available: Hacking the Young Lady's Illustrated Primer.


Written by anrxc | Permalink | Filed under cyberpunk, books, media