I discovered — after I’d set up OpenID delegation (using the Drupal OpenID URL module and Sam Ruby’s instructions) — that each OpenID used with a Drupal site needs to be associated with a Drupal account.
Therefore, even though OpenID delegation may point to a previously associated provider, such as VeriSign Labs’ Personal Identity Portal (PIP), it acts as its own identity: the delegating URL is a URL in its own right. This makes some sense, even if it isn’t convenient when you set up delegation after you’ve already started using other OpenIDs.
I had to teach each of my Drupal accounts on various sites that I wanted to use my own URL in addition to any previously-associated OpenIDs.
Apparently, I’ve been installing too many applications on my iPod touch. The other day, I got this warning from it while trying to use the App Store application to download a new app: “There is not enough space to download this application. Please delete some photos or videos.”
Trimming the applications list is a lot less satisfying than filling it up.
It should come as no surprise that Apple Installer packages can contain scripts. These scripts are meant to perform important operations during installation.
However, when you administer more than one Mac, you find that developers sometimes miss a good balance between what belongs in the installer payload and what belongs in its scripts. The payload of an installer, by definition, is the set of files and links to be installed, along with information on where they should be installed and how (i.e., ownership and permissions).
Therefore, developers should not need scripts that create or delete files: files should be created from the payload itself, and if a file must be deleted during the install, consider that perhaps you’re doing it wrong. Likewise, there should be little need to move or copy files, because as many copies as desired can be installed from the payload. Similarly, changes to ownership or permissions should be handled in the payload.
Perhaps I’m being a purist here; I’m certainly accused of that from time to time. However, this just makes sense to me, and I happen to think that many developers are similarly logical people. They just aren’t the kind of logical people who spend effort on software installation, especially the kind that results in a deployment-friendly installer package.
So how do we as administrators verify the quality of the scripts in installers? Is there a way we can quickly peer into them to decide whether any of the scripts’ steps will be superfluous or even (gasp!) harmful?
Well, I have a quick suggestion for scanning packaged installers. The following one-liner shell command will search an installer package or metapackage for scripts that have the kinds of steps outlined above.
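As a sketch of that kind of scan (not the exact one-liner; Example.mpkg is a placeholder bundle-style package, and the demo setup fabricates one so the commands can run as-is):

```shell
# Demo setup: a fake bundle-style package containing a postflight
# script that copies a file, the kind of step a payload should handle.
mkdir -p Example.mpkg/Contents/Resources
printf '#!/bin/sh\ncp /tmp/a /tmp/b\n' > Example.mpkg/Contents/Resources/postflight

# The scan: look inside the package for pre/postflight-style scripts
# and flag file-manipulating commands, printing each offending file
# and line number for closer inspection.
find Example.mpkg -type f \( -name 'pre*' -o -name 'post*' \) \
  -exec grep -HnEw 'rm|mv|cp|mkdir|ln|chmod|chown' {} +
# prints: Example.mpkg/Contents/Resources/postflight:2:cp /tmp/a /tmp/b
```

Note that flat (xar-based) packages keep their scripts inside the archive, so they would need to be expanded first (e.g. with pkgutil --expand on 10.5 and later) before a scan like this can see them.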
However, it’s a start. The output displays the offending file and line number, so you can examine the matches it finds more carefully.
I haven’t run this on an exhaustive list of installation packages, but I have already seen at least one installer that produces worrisome output.
Update: I’ve changed the regex for the pre/postflight scripts so that it is more general than what I originally posted. I’m also having some problems getting the snippet to work with a certain installer whose scripts I know contain cp and chmod commands. So, I may be back to the drawing board with this; comments are welcome.
After the news hit about Time Warner Cable’s intent to charge different rates for tiers of monthly data transfer — and an enormous $1/GB fee for overages — it seems eminently sane to consider the competition.
In Rochester, that competition is Frontier DSL. For a long time, that basically meant there was no competition, I’m very sorry to say.
However, the changes to TWC’s fee structure may be so extreme that even that level of competition is good. While I don’t think our household’s monthly data transfer is excessive, I’m reasonably sure (based on the data I’ve collected from our broadband router) that we’ll blow right past the 5 GB/month tier and maybe the 10 GB/month one. We would have to — and by that I mean, I would have to, really — adopt a more austere use of the family Internet connection than we’re accustomed to. Thus, I’m examining the pros and cons of Frontier’s high-speed Internet service.
With Frontier DSL, my family should:
However, there are some drawbacks to Frontier DSL. My family would be concerned about:
Anyway, while we’re mulling this over, the news is playing out on sites like StopTheCap and StopTWC! Meanwhile, I’m more than a little annoyed at the traditional news media avoiding some of the other angles surrounding this topic — the pricing change as a way to protect cable television revenues, the local monopoly (and how cable infrastructure compares to its telephone equivalent), the impact on increasingly Internet-dependent households during a recession, how this might change the habits of people (including employees working at home), and so on.
It was extremely satisfying to see the following dialog about the volume expansion when logging into my Infrant ReadyNAS:
The overall process of expansion took about four days, converting from four 320 GB drives to four 1 TB drives. (For reference, I selected the Hitachi 7K1000.B 0A38016 drive from ZipZoomFly, and did so almost entirely on price — despite my longstanding misgivings about IBM/Hitachi drives.)
The capacity expansion took longer than strictly necessary, because I wrote zeros across each of the drives before installing them, one at a time, in the ReadyNAS. (I ended up writing zeros to each drive twice, switching from Disk Utility to its command-line equivalent.)
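The command-line equivalent amounts to streaming /dev/zero at the target with dd. Here is a harmless sketch against a scratch file; for a real drive, the of= target would instead be its raw device node, verified first with diskutil list:

```shell
# Write zeros over a target. Here the target is a 16 KB scratch file,
# so the sketch is safe to run; for a real drive it would be the raw
# device node (e.g. /dev/rdisk2, confirmed with `diskutil list`).
dd if=/dev/zero of=zero-demo.img bs=1024 count=16 2>/dev/null

# Verify: count the non-zero bytes remaining; it should be 0.
tr -d '\0' < zero-demo.img | wc -c
```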
Near the end of the process, I found out that the automatic X-RAID™ expansion doesn’t happen until you reboot the ReadyNAS after upgrading the last drive. I had also enabled a snapshot on the ReadyNAS, which also prevented the automatic volume capacity expansion, so I had to delete that.
John C. Welch’s article, On Installers, is linked from Daring Fireball today. He links to me — thank you very much John, for that and for the kind words about my signal-to-noise ratio (whatever my front page says on that score right now) — placing me one jump away from Daring Fireball.
I was a little worried about that until I checked my Web analytics account. Luckily, my link is third to last and at the end of a long article, or my Web host might be having words with me about traffic.
It was very good to be mentioned, and even be situated in auspicious company between Greg and Nigel. All of us are current or former Radmind admins, and as a group I think Radmind admins tend to know a bit about the foibles of vendors’ installers. Along its famous learning curve, Radmind teaches you a lot about the filesystem and about what’s going into it.
Anyway, for anyone completely new here, you can follow the Mac OS X system administration topic on its own — and skip others, like random Python, Mercurial, Western New York sports, Drupal, and personal chatter.
Under Leopard, all local users are members of lpadmin, but I think network users are not. So this method won’t grant network users CUPS rights.
To confirm Greg’s suspicions, I ran the following shell snippet.
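A reconstruction of the idea looks something like this (a sketch: the account names are fictional placeholders, the group list mirrors Leopard’s computational groups, and the dsmemberutil call is guarded so the loop degrades gracefully where the tool doesn’t exist — substitute real short names on a live system):

```shell
# For each (fictional) test account, check membership in Leopard's
# computational groups, ending with lpadmin. dsmemberutil is
# macOS-only, so the call is guarded with a `command -v` test.
for user in mobile_account_user network_account_user local_account_user; do
  echo "== ${user}"
  for group in localaccounts netaccounts interactusers consoleusers lpadmin; do
    if command -v dsmemberutil >/dev/null 2>&1; then
      printf '%s: ' "${group}"
      dsmemberutil checkmembership -U "${user}" -G "${group}"
    fi
  done
done
```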
This loops through the fictional accounts “mobile_account_user,” “network_account_user,” and “local_account_user.” These accounts are, as you might expect, a locally cached mobile account from a network directory, a wholly network directory-based account, and a simple local admin account. While the accounts presented here are fictional, the results were confirmed on a live system bound to a directory service.
The rest of the snippet determines whether the accounts are members of any of the specified computational groups that debuted in Leopard. The last group checked is the “lpadmin” group. By looking at these group memberships, we can determine whether Leopard thinks the account being tested is a local or network account.
Running the snippet above, with the right accounts available, produces the following output:
So, it appears mobile and local users get added to the lpadmin group automatically in Leopard, but network accounts do not.
Note that I didn’t check whether membership in the “admin” group made a difference; I didn’t isolate for that factor.
I found it interesting that the mobile account is a member of “netaccounts” but not “localaccounts.” (By group membership alone, I’m not sure you could identify whether an account is a mobile account. Yet, that kind of test is part of the point of having these computational groups in the first place.)
I’m keenly interested in the CalDigit RAID card for the Mac Pro. It looks like a much better solution than the Apple RAID card to the storage problem facing certain Mac Pro owners — a First World problem if ever there was one.
I’ve asked myself, “Now that you have this beast, how do you fill up its drive bays?”
The answer is somewhat difficult. You can put four drives in the bays, but in order to get a single volume, you’d minimally need software RAID. For example, you could configure a RAID 10 volume with Disk Utility. You could get the expensive Apple RAID card. You might populate a Drobo and connect it via FireWire. Or, you could get a CalDigit RAID card — which is the only bootable, fully internal RAID controller I’m aware of that competes with Apple’s card.
One advantage of a solution that fits completely inside the Mac Pro case is that you have one less power cord to deal with. In this sense, the CalDigit RAID Card seems preferable to a Drobo. The CalDigit card interfaces directly with the Mac Pro’s own SATA ports, so you can use the existing internal drive bays and slide drives into the normal SATA/power connectors.
I just need to find one on sale …
I’ve been struggling with my Site5 Web hosting account for two years. In many respects, it has been great — good service at a price I was willing to pay. However, my biggest single aggravation has been that a primary domain name is associated with the hosting account, and that primary domain could not be cleanly hosted in a subdirectory of the account. URL rewriting in an .htaccess file had been my workaround for a long time, but it never really did everything I wanted, and it was complicated.
The hosting account was up for renewal today. I had a decision to make: keep the account and the hassle, keep the account and find a solution to the hassle, or back up my data and move on.
I’m happy to say that I’m keeping the account and it appears that all of the frustration regarding the subdirectory has been eliminated.
The CEO of Site5 helped me out, after seeing my complaints on Twitter. He suggested that I request changing the primary domain to a dummy, nonexistent domain name. With that done, I could then create a domain pointer for my former primary domain (this one, actually), linking it to a subfolder of my account’s public_html directory.
I made the support request, which was fulfilled promptly. The change was made and it works.
Now, it looks like some issues I’d been having have cleared up. Namely, the Global Redirect module I use in Drupal correctly redirects from URLs like /node/300 to their human-friendly paths.
From the “I didn’t post this when it was current” files is an article from Stories of Apple, titled Ten years ago: here comes Mac OS X Server. On January 5, 1999, Apple announced Mac OS X Server.
I had access to a Macintosh Server G4 running Mac OS X Server 1.2. I recall being pretty baffled by it at the time, especially when the setup assistant wanted to configure an entire network routed by the server. The OS looked like a darker version of classic Mac OS, but was very different in every other respect from that OS I’d become so comfortable with. The filesystem layout was foreign. The administration tools were Web-based, and relatively poor (to my thinking) compared to AppleShare IP 6’s. However, there was the sense that this new system was the future, and that Mac OS X Server 1.0 and 1.2 were the gateways to it. I wanted to know more.
And my, how far we’ve all come in a decade.
[Via Eric Z.]