After using Acquia Drupal for a while, I took advantage of a trial subscription to the Acquia Network. The network’s services showed me that I had files present in my install that the agent could not account for.
I suspected this was happening because of the way I manage my Acquia Drupal installation with Mercurial. So, I’ve modified my previous process (and updated my instructions) to extract the downloaded tar archive with the --recursive-unlink option. This option appears to successfully remove the contents of every directory before putting new files back into them.
When the archive is extracted in this way, my repository’s working directory shows modified, unknown, and deleted files. This allows me to treat each category of files individually before I commit the changes for a Drupal update as a revision.
$ hg status
The modified files will be tracked normally because they’ve already been added to the Mercurial repository, so I don’t need to do anything special for them.
The unknown files are ones that are completely new, and have not appeared in the same position in a previous revision. They have yet to be tracked by Mercurial, so I have to add them to the repository. To add just those unknown files, then, I have to pick them out from the status listing:
$ hg status --unknown
In order to operate just on those ﬁles to add them to the repository, I run a for loop:
This changes the “?” status to “A,” because the files are now being tracked by Mercurial.
I use the “--no-status” flag on the “status” command so that just the file paths are printed; the status codes are omitted, which makes the output suitable as the targets of the “add” command in the loop.
I do the same basic steps with deleted files. These are files that were in previous revisions but were deleted by the --recursive-unlink option during the tar extraction and not replaced by the extraction of the new Acquia Drupal tar archive. If the deleted files had been replaced by the tar extraction, they would either be unchanged (and would not show up in the “status” output) or be marked as modified.
To remove the files that are marked as deleted from the repository’s working directory:
However, that may be the same as simply using the following, which I have to explore further:
$ hg remove --after
So, to record all of these changes in the repository, I run the loop for the unknown files and the loop for the deleted files. The modified files are already tracked, so I don’t need to do anything additional for them. After that, a “commit” will record all of the changes (modifications, additions, and deletions) in the repo.
These commands are based on my current understanding of Mercurial, and they do work for me right now. There could certainly be another better way to do this in one fell swoop — or at least fewer steps. I would welcome that, so if you’re aware of a way, feel free to comment or contact me.
Update: I found that the “hg addremove” command cleanly replaces all of the shell loops I mentioned above. Therefore, I recommend using it instead of the “for” loops I described.
Apache Solr provides a Web service front end for the Apache Lucene indexing and search engine library. Both Solr and Lucene (upon which Solr depends) are Java-based, which has implications for shared Web hosting.
Drupal is an open source CMS, and I happen to use it on a shared Web hosting provider as of this writing. Drupal is gaining support for Apache Solr through a module that has had a lot of input from Acquia (the “Red Hat” of Drupal).
Dries Buytaert of Acquia has some interesting perspective on search for the Web and CMSes in some recent articles on his site. Specifically, he talks about Acquia Search, a Solr-based search service that is being offered to Drupal sites on the Acquia Network. He discusses the advantages afforded by good search capabilities for both visitors to a Drupal Web site and for site administrators.
I’ve used Acquia Search (in beta), and it has been great. It’s very fast compared to the core Drupal Search module. The ability to perform faceted searches, word stemming, spell checking, and more is all tremendous. (You can see it in action in the search field in the site sidebar, as long as my Acquia Network subscription from the beta lasts.)
But Acquia Search is part of a larger service offering, the Acquia Network, which ultimately makes it too expensive for me on my personal sites. It’s priced out of reach: one year of the service costs more than two years of Web hosting, domain registrations, and separate e-mail hosting for my domains do today. I think it’s clear that Acquia is aiming at a different market, and that’s fine.
My idle thought, however, is that search by itself is a compelling feature even for small Web sites like mine. It’s as compelling as hosting files, like HTML or PHP or images, or serving databases, like MySQL and PostgreSQL.
If search is as important as Dries notes in his posts (a market that is large, growing, and of universal importance), then great search is a compelling feature for sites at many levels. After serving the files and serving the database, it may be the next big service that a Web hosting provider could offer. And today, Web hosting offers a range of pricing (and service levels) to meet various needs.
I could see advertising that for some monthly fee, a Web host offers 55 GB of storage and 550 GB of monthly data transfer and unlimited MySQL databases — and oh, by the way, some reasonable level of indexing/search with Apache Solr and/or Sphinx or whatever. Although I hate to suggest it, search could even be an optional add-on, as many providers treat dedicated IP addresses or SSL or the like.
There may be an additional win, in that separate servers could be optimized for search to offload that processing from the Web server. It could even be something that a Web host contracts out or partners with another to provide — maybe even with a company like Acquia that’s already set up their infrastructure to scale on Amazon EC2.
Especially if other CMSes, such as WordPress, get Solr integration (as with this WordPress Solr plugin), the case for Web hosts offering something like Solr search becomes convincing.
I see a lot of complaints about Belkin SOHO-series KVM (keyboard, video, and mouse) switches. I think many of these complaints are warranted; I’ve used two of these KVMs for a long time and have some familiarity with them.
However, one complaint that does have a workaround covers the mapping of the Mac’s Command and Option keys. For the hybrid PS/2-USB KVMs I have used, you must use a PS/2 keyboard and mouse.
That PS/2 connector requirement means your keyboard is almost certainly labeled for PC/Windows use. If you connect any Macs, you’ll be frustrated by the layout of the Command and Option keys. Initially, the Alt key will act like Option, and the Windows key will behave as if it’s Command. This is the opposite of what you’d expect from an Apple keyboard, or any other keyboard designed primarily for Mac use.
The good news is that this behavior can be changed, and it applies individually to each KVM port. If you have a Mac on Port 1 and a Windows computer on Port 2, they can each have the settings you’d expect. To do so, switch to the port connecting to a Mac and press Esc-A. This puts that port in the “Mac function” mode. In this mode, PS/2 Alt is Command and PS/2 Windows is Option.
Other keys also change, according to a table from an addendum to the Belkin manual. Given that it was a separate sheet in the box, I’m not surprised that many people have apparently missed it.
PS/2 keyboard key    Mac function
Delete               Forward delete (removes text to the right of the insertion point)
Scroll Lock          Power key (documented as a shortcut to the Shut Down menu command)
To reverse the setting back to the previous function mode, press Esc-Y to disable the remapping. Again, you have to do this on a port-by-port basis.
If you ever switch the computers connected to the ports, you will need to disable this change for each affected port; the mapping by no means updates itself dynamically, which is why there’s a problem in the first place.
I owe Greg Madore for this tip, as he’s the one who originally found it for me.
Unfortunately, this does not fix another failing of the Belkin SOHO KVMs for my kind of work: the inability to change startup behavior on Macs. (I have not yet seen a KVM with keyboard emulation that consistently allows the use of startup keys, such as C, T, Option, H, etc., to change the startup behavior of a Mac. That capability would be extremely handy for KVMs used in technical support scenarios.)
I discovered — after I’d set up OpenID delegation (using the Drupal OpenID URL module and Sam Ruby’s instructions) — that each OpenID used with a Drupal site needs to be associated with a Drupal account.
Therefore, even though OpenID delegation may point to a previously-associated provider, such as Verisign Labs’ Personal Identity Portal, or PIP, it acts as its own identity. The delegating URL is a URL in its own right, so this makes some sense even if it is not convenient when you set up delegation after starting to use other OpenIDs.
I had to teach each of my Drupal accounts on various sites that I wanted to use my own URL in addition to any previously-associated OpenIDs.
Apparently, I’ve been installing too many applications on my iPod touch. The other day, I got this warning from it while trying to use the App Store application to download a new app: “There is not enough space to download this application. Please delete some photos or videos.”
Trimming the applications list is a lot less satisfying than filling it up.
It should come as no surprise that Apple Installer installation packages can contain scripts. These scripts are supposed to conduct important operations during the course of the software installation.
However, when you are the system administrator of more than one Mac, you find that developers sometimes miss a good balance between what you think should be in the installer payload versus what should be in its scripts. The payload of an installer, by definition, is the set of files and links that should be installed, along with information on where they should be installed as well as how (i.e., ownership and permissions).
Therefore, developers should not need to run scripts that create or delete files; files should be created from the payload itself, and if a file must be deleted during the install, then consider that perhaps you’re doing it wrong. Likewise, there should be little need to move or copy files, because as many copies as desired can be installed from the payload. Similarly, any changes of ownership or permissions should be taken care of in the payload.
Perhaps I’m being a purist here. I’m certainly accused of that, from time to time. However, this just makes sense to me and I happen to think that many developers are similarly logical people. They just aren’t the kind of logical people who happen to spend effort on software installation, especially the kind that results in a deployment-friendly installer package.
So how do we as administrators verify the quality of the scripts in installers? Is there a way we can quickly peer into them to decide if any of the scripts’ steps will be superfluous or even (gasp!) harmful?
Well, I have a quick suggestion for scanning packaged installers. The following one-liner shell command will search an installer package or metapackage for scripts that have the kinds of steps outlined above.
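Here is a sketch of such a scan, wrapped in a function for readability. The pre*/post* script-name patterns and the list of commands in the regex are my assumptions about what is worth flagging, not necessarily the original expression:

```shell
# Sketch: search a bundle-style .pkg/.mpkg for pre/postflight-style
# scripts that run file-manipulation commands arguably belonging in
# the payload. Name patterns and command list are assumptions.
scan_pkg_scripts() {
    find "$1" -type f \( -name 'pre*' -o -name 'post*' \) -print0 |
        xargs -0 grep -nHE '\b(rm|cp|mv|ln|mkdir|chown|chmod)\b'
}
```

Run it as, say, `scan_pkg_scripts /Volumes/Example/Example.mpkg` (a hypothetical path); each match prints as script-path:line-number:matching-line.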
It isn’t comprehensive, but it’s a start. The output displays the offending file and line number, so you can conduct a more careful examination of the matches it finds.
I haven’t run this on an exhaustive list of installation packages, but I have already seen at least one installer that produces worrisome output.
Update: I’ve changed the regex for the pre/postflight scripts so that it is more general than what I originally posted. I’m also having some problems with the snippet working against a certain installer whose scripts I know have cp and chmod commands. So, I may be back to the drawing board with this; comments are welcome.
After the news hit about Time Warner Cable’s intent to charge different rates for tiers of monthly data transfer, with an enormous $1/GB fee for overages, it seems eminently sane to consider the competition.
In Rochester, that competition is Frontier DSL. For a long time, that basically meant there was no competition, I’m very sorry to say.
However, the changes to TWC’s fee structure may be so extreme that even that level of competition is good. While I don’t think our household’s monthly data transfer is excessive, I’m reasonably sure (based on the data I’ve collected from our broadband router) that we’ll blow right past the 5 GB/month tier and maybe the 10 GB/month one. We would have to (and by that I mean I would have to, really) adopt more austere usage of the family Internet connection than we’re accustomed to. Thus, I’m examining the pros and cons of Frontier’s high-speed Internet service.
With Frontier DSL, my family should:
However, there are some drawbacks to Frontier DSL. My family would be concerned about:
Anyway, while we’re mulling this over, the news is playing out on sites like StopTheCap and StopTWC! Meanwhile, I’m more than a little annoyed at the traditional news media avoiding some of the other angles surrounding this topic — the pricing change as a way to protect cable television revenues, the local monopoly (and how cable infrastructure compares to its telephone equivalent), the impact on increasingly Internet-dependent households during a recession, how this might change the habits of people (including employees working at home), and so on.
It was extremely satisfying to see the following dialog about the volume expansion when logging into my Infrant ReadyNAS:
The overall process of expansion took about four days, converting from four 320 GB drives to four 1 TB drives. (For reference, I selected the Hitachi 7K1000.B 0A38016 drive from ZipZoomFly, and did so almost entirely on price — despite my longstanding misgivings about IBM/Hitachi drives.)
The capacity expansion took longer than strictly necessary, because I wrote zeros across each of the drives before installing them, one at a time, in the ReadyNAS. (I ended up writing zeros to each drive twice, switching from the Disk Utility to the command line equivalent.)
Near the end of the process, I found out that the automatic X-RAID™ expansion doesn’t happen until you reboot the ReadyNAS after upgrading the last drive. I had also enabled a snapshot on the ReadyNAS, which also prevented the automatic volume capacity expansion, so I had to delete that.
John C. Welch’s article, On Installers, is linked from Daring Fireball today. He links to me — thank you very much John, for that and for the kind words about my signal-to-noise ratio (whatever my front page says on that score right now) — placing me one jump away from Daring Fireball.
I was a little worried about that until I checked my Web analytics account. Luckily, my link is third to last and at the end of a long article, or my Web host might be having words with me about traffic.
It was very good to be mentioned, and even be situated in auspicious company between Greg and Nigel. All of us are current or former Radmind admins, and as a group I think Radmind admins tend to know a bit about the foibles of vendors’ installers. Along its famous learning curve, Radmind teaches you a lot about the filesystem and about what’s going into it.
Anyway, for anyone completely new here, you can follow the Mac OS X system administration topic on its own — and skip others, like random Python, Mercurial, Western New York sports, Drupal, and personal chatter.
Under Leopard, all local users are members of lpadmin, but I think network users are not. So this method won’t grant network users CUPS rights.
To confirm Greg’s suspicions, I ran the following shell snippet.
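Here is a sketch of such a snippet, wrapping Leopard’s dsmemberutil checkmembership verb in a function and looping over the fictional account names and a few of the computational groups (the exact group list in the original may have been longer):

```shell
# Sketch: check each fictional account against a few computational
# groups with dsmemberutil (available on Mac OS X 10.5 and later).
check_memberships() {
    for user in mobile_account_user network_account_user local_account_user; do
        for group in localaccounts netaccounts lpadmin; do
            printf '%s in %s: ' "$user" "$group"
            dsmemberutil checkmembership -U "$user" -G "$group"
        done
    done
}
```

For each check, dsmemberutil prints a sentence along the lines of “user is a member of the group” or “user is not a member of the group.”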
This loops through the fictional accounts “mobile_account_user,” “network_account_user,” and “local_account_user.” These accounts are, as you might expect, a locally-cached mobile account from a network directory, a wholly network directory-based account, and a simple local admin account. While the accounts presented here are fictional, the results were confirmed on a live system bound to a directory service.
The rest of the snippet determines if the accounts are members of any of the specified computational groups that debuted in Leopard. The last group checked is the “lpadmin” group. By looking at these group memberships, we can determine whether Leopard thinks that the account being tested is a local or network account.
Running the snippet above, with the right accounts available, produces the following output:
So, it appears mobile and local users get added to the lpadmin group automatically in Leopard, but network accounts do not.
Note that I didn’t check whether membership in the “admin” group made a difference, and I didn’t isolate for that factor.
I found it interesting that the mobile account is a member of “netaccounts” but not “localaccounts.” (By group membership alone, I’m not sure you could identify whether an account was a mobile account or not. Yet, that kind of test is part of the point of having these computational groups in the first place.)