I see a lot of complaints about Belkin SOHO-series KVM (keyboard, video, and mouse) switches. I think many of these complaints are warranted; I’ve used two of these KVMs for a long time and have some familiarity with them.
However, one common complaint does have a workaround: the mapping of the Mac’s Command and Option keys. On the hybrid PS/2-USB KVMs I have used, you must use a PS/2 keyboard and mouse.
That PS/2 requirement ensures that your keyboard will be labeled for PC/Windows use. If you connect any Macs, you’ll be frustrated by the layout of the Command and Option keys. Initially, the Alt key acts like Option, and the Windows key behaves like Command. This is the opposite of what you’d expect from an Apple keyboard, or from any other keyboard designed primarily for Mac use.
The good news is that this behavior can be changed, and it applies individually to each KVM port. If you have a Mac on Port 1 and a Windows computer on Port 2, they can each have the settings you’d expect. To do so, switch to the port connecting to a Mac and press Esc-A. This puts that port in the “Mac function” mode. In this mode, PS/2 Alt is Command and PS/2 Windows is Option.
Other keys also change, according to a table from an addendum to the Belkin manual. Given that it was a separate sheet in the box, I’m not surprised that many people have apparently missed it.
| PS/2 keyboard key | Mac function |
| --- | --- |
| Delete | Forward delete (deletes text to the right of the insertion point) |
| Scroll Lock | Power key (documented as a shortcut for the Shut Down menu command) |
To revert a port to the previous function mode, press Esc-Y, which disables the remapping. Again, you have to do this on a port-by-port basis.
If you ever switch the computers connected to the ports, you will need to disable this change for each affected port; it by no means updates itself dynamically. That’s why there’s a problem in the first place.
I owe Greg Madore for this tip, as he’s the one who originally found it for me.
Unfortunately, this does not fix another failing of the Belkin SOHO KVMs for my kind of work — namely, the inability to change startup behavior on Macs. (I have not yet seen a KVM with keyboard emulation that consistently allows the use of startup keys — such as C, T, Option, H, etc. — to change the startup behavior of a Mac. That capability would be extremely handy for KVMs used in technical support scenarios.)
I discovered — after I’d set up OpenID delegation (using the Drupal OpenID URL module and Sam Ruby’s instructions) — that each OpenID used with a Drupal site needs to be associated with a Drupal account.
Therefore, even though OpenID delegation may point to a previously-associated provider, such as Verisign Labs’ Personal Identity Portal, or PIP, it acts as its own identity. The delegating URL is a URL in its own right, so this makes some sense even if it is not convenient when you set up delegation after starting to use other OpenIDs.
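For reference, delegation itself is just two link elements in the delegating page’s head, per the OpenID 1.1 convention Sam Ruby describes. The URLs below are placeholders, not my actual endpoints:

```html
<!-- Hypothetical example: delegate this page's URL to a provider.
     Substitute the provider's real server endpoint and your identity URL.
     (OpenID 2.0 uses rel="openid2.provider" and rel="openid2.local_id".) -->
<link rel="openid.server" href="https://pip.example.com/server" />
<link rel="openid.delegate" href="https://yourname.pip.example.com/" />
```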
I had to teach each of my Drupal accounts on various sites that I wanted to use my own URL in addition to any previously-associated OpenIDs.
Apparently, I’ve been installing too many applications on my iPod touch. The other day, I got this warning from it while trying to use the App Store application to download a new app: “There is not enough space to download this application. Please delete some photos or videos.”
Trimming the applications list is a lot less satisfying than filling it up.
It should come as no surprise that Apple Installer installation packages can contain scripts. These scripts are supposed to conduct important operations during the course of the software installation.
However, when you are the system administrator of more than one Mac, you find that developers sometimes miss the right balance between what belongs in the installer payload and what belongs in its scripts. The payload of an installer, by definition, comprises the files and links to be installed, along with information on where they should be installed as well as how (i.e., ownership and permissions).
Therefore, developers should not need to run scripts that create or delete files: files should be created from the payload itself, and if a file must be deleted during the install, consider that perhaps you’re doing it wrong. Likewise, there should be little need to move or copy files, because as many copies as desired can be installed from the payload. Similarly, changes to ownership or permissions should be taken care of in the payload.
Perhaps I’m being a purist here. I’m certainly accused of that, from time to time. However, this just makes sense to me and I happen to think that many developers are similarly logical people. They just aren’t the kind of logical people who happen to spend effort on software installation, especially the kind that results in a deployment-friendly installer package.
So how do we as administrators verify the quality of the scripts in installers? Is there a way we can quickly peer into them to decide if any of the scripts’ steps will be superfluous or even (gasp!) harmful?
Well, I have a quick suggestion for scanning packaged installers. The following one-liner shell command will search an installer package or metapackage for scripts that have the kinds of steps outlined above.
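Here is a sketch of the kind of scan I mean. Note that the first two commands fabricate a stand-in package, “Demo.mpkg,” purely so the example is self-contained; point the `find` at a real .pkg or .mpkg bundle instead.

```shell
# Fabricate a tiny stand-in package with a postflight script, purely
# for demonstration -- replace "Demo.mpkg" with a real bundle path.
mkdir -p Demo.mpkg/Contents/Resources
printf '#!/bin/sh\nchmod 755 /Library/Foo\n' > Demo.mpkg/Contents/Resources/postflight

# Flag script lines that create, delete, move, or re-permission files --
# work that arguably belongs in the payload instead.
find Demo.mpkg -type f \( -name '*flight' -o -name '*install' \) \
  -exec grep -nHE '(^|[[:space:];|&])(cp|mv|rm|ln|mkdir|chown|chmod)([[:space:]]|$)' {} +
```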
It’s not a rigorous audit, but it’s a start. The output displays the offending file and line number, so you can examine the matches it finds more carefully.
I haven’t run this on an exhaustive list of installation packages, but I have already seen at least one installer that produces worrisome output.
Update: I’ve changed the regex for the pre/postflight scripts so that it is more general than what I originally posted. I’m also having some problems getting the snippet to work with a certain installer whose scripts I know contain cp and chmod commands. So, I may be back to the drawing board with this; comments are welcome.
After the news hit about Time Warner Cable’s intent to charge different rates for tiers of monthly data transfer — and an enormous $1/GB fee for overages — it seems eminently sane to consider the competition.
In Rochester, that competition is Frontier DSL. For a long time, that basically meant there was no competition, I’m very sorry to say.
However, the changes to TWC’s fee structure may be so extreme that even that level of competition looks good. While I don’t think our household’s monthly data transfer is excessive, I’m reasonably sure (based on the data I’ve collected from our broadband router) that we’ll blow right past the 5 GB/month tier and maybe the 10 GB/month one. We would have to — and by that I mean I would have to, really — develop a more austere pattern of family Internet usage than we’re accustomed to. Thus, I’m examining the pros and cons of Frontier’s high-speed Internet service.
With Frontier DSL, my family should:
However, there are some drawbacks to Frontier DSL. My family would be concerned about:
Anyway, while we’re mulling this over, the news is playing out on sites like StopTheCap and StopTWC! Meanwhile, I’m more than a little annoyed at the traditional news media avoiding some of the other angles surrounding this topic — the pricing change as a way to protect cable television revenues, the local monopoly (and how cable infrastructure compares to its telephone equivalent), the impact on increasingly Internet-dependent households during a recession, how this might change the habits of people (including employees working at home), and so on.
It was extremely satisfying to see the following dialog about the volume expansion when logging into my Infrant ReadyNAS:
The overall process of expansion took about four days, converting from four 320 GB drives to four 1 TB drives. (For reference, I selected the Hitachi 7K1000.B 0A38016 drive from ZipZoomFly, and did so almost entirely on price — despite my longstanding misgivings about IBM/Hitachi drives.)
The capacity expansion took longer than strictly necessary, because I wrote zeros across each of the drives before installing them, one at a time, in the ReadyNAS. (I ended up writing zeros to each drive twice, switching from Disk Utility to its command-line equivalent.)
Near the end of the process, I found out that the automatic X-RAID™ expansion doesn’t happen until you reboot the ReadyNAS after upgrading the last drive. I had also enabled a snapshot on the ReadyNAS, which also prevented the automatic volume capacity expansion, so I had to delete that.
John C. Welch’s article, On Installers, is linked from Daring Fireball today. He links to me — thank you very much John, for that and for the kind words about my signal-to-noise ratio (whatever my front page says on that score right now) — placing me one jump away from Daring Fireball.
I was a little worried about that until I checked my Web analytics account. Luckily, my link is third to last and at the end of a long article, or my Web host might be having words with me about traffic.
It was very good to be mentioned, and even be situated in auspicious company between Greg and Nigel. All of us are current or former Radmind admins, and as a group I think Radmind admins tend to know a bit about the foibles of vendors’ installers. Along its famous learning curve, Radmind teaches you a lot about the filesystem and about what’s going into it.
Anyway, for anyone completely new here, you can follow the Mac OS X system administration topic on its own — and skip others, like random Python, Mercurial, Western New York sports, Drupal, and personal chatter.
Under Leopard, all local users are members of lpadmin, but I think network users are not. So this method won’t grant network users CUPS rights.
To confirm Greg’s suspicions, I ran the following shell snippet.
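What follows is a sketch of that snippet, not a verbatim copy: the account names are fictional, and a small guard lets it degrade gracefully on systems without Mac OS X’s dsmemberutil.

```shell
# Sketch only: the account names are fictional, and the exact groups
# checked on the live system may have differed slightly.
check_membership() {
  # dsmemberutil is a Mac OS X (10.5+) tool; fall back to a message
  # elsewhere so the sketch still runs.
  if command -v dsmemberutil >/dev/null 2>&1; then
    dsmemberutil checkmembership -U "$1" -G "$2"
  else
    echo "(dsmemberutil unavailable; would check $1 in $2)"
  fi
}

for user in mobile_account_user network_account_user local_account_user; do
  for group in localaccounts netaccounts lpadmin; do
    printf '%s / %s: ' "$user" "$group"
    check_membership "$user" "$group"
  done
done
```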
This loops through three fictional accounts: “mobile_account_user,” “network_account_user,” and “local_account_user.” These are, as you might expect, a locally cached mobile account from a network directory, a wholly network directory-based account, and a simple local admin account. While the accounts presented here are fictional, the results were confirmed on a live system bound to a directory service.
The rest of the snippet determines if the accounts are members of any of the specified computational groups that debuted in Leopard. The last group checked is the “lpadmin” group. By looking at these group memberships, we can determine whether Leopard thinks that the account being tested is a local or network account.
Running the snippet above, with the right accounts available, produces the following output:
So, it appears mobile and local users get added to the lpadmin group automatically in Leopard, but network accounts do not.
Note that I didn’t check whether membership in the “admin” group made a difference; I didn’t isolate for that factor.
I found it interesting that the mobile account is a member of “netaccounts” but not “localaccounts.” (By group membership alone, I’m not sure you could identify whether an account was a mobile account or not. Yet, that kind of test is part of the point of having these computational groups in the first place.)
I’m keenly interested in the CalDigit RAID card for the Mac Pro. It looks like a much better solution than the Apple RAID card to the storage problem — a First World problem if ever there was one — facing certain Mac Pro owners.
I’ve asked myself, “Now that you have this beast, how do you fill up its drive bays?”
The answer is somewhat difficult. You can put four drives in the bays, but in order to get a single volume out of them, you’d minimally need software RAID. For example, you could configure a RAID 10 volume with Disk Utility. You could get the expensive Apple RAID card. You might populate a Drobo and connect it via FireWire. Or, you could get a CalDigit RAID card, which is the only bootable, fully internal RAID controller I’m aware of that competes with Apple’s card.
One advantage of a solution that fits completely inside the Mac Pro case is that you have one less power cord to deal with. In this sense, the CalDigit RAID Card seems preferable to a Drobo. The CalDigit card interfaces directly with the Mac Pro’s own SATA ports, so you can use the existing internal drive bays and slide drives into the normal SATA/power connectors.
I just need to find one on sale …
I’ve been struggling with my Site5 Web hosting account for two years. In many respects, it has been great: good service at a price I was willing to pay. However, my single biggest aggravation has been that a primary domain name is associated with the hosting account, and that primary domain could not cleanly be hosted in a subdirectory of the account. URL rewriting in an .htaccess file had been my workaround for a long time, but it never really did everything I wanted, and it was complicated.
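For the record, that workaround was the familiar rewrite-into-a-subdirectory recipe. This is a sketch with a hypothetical domain and subdirectory, not my actual rules:

```apache
# Hypothetical .htaccess in public_html/: send requests for the primary
# domain into a subdirectory, assuming the site lives in
# public_html/example.com/.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteCond %{REQUEST_URI} !^/example\.com/
RewriteRule ^(.*)$ /example.com/$1 [L]
```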
The hosting account was up for renewal today. I had a decision to make: keep the account and the hassle, keep the account and find a solution to the hassle, or back up my data and move on.
I’m happy to say that I’m keeping the account and it appears that all of the frustration regarding the subdirectory has been eliminated.
The CEO of Site5 helped me out, after seeing my complaints on Twitter. He suggested that I request changing the primary domain to a dummy, nonexistent domain name. With that done, I could then create a domain pointer for my former primary domain (this one, actually), linking it to a subfolder of my account’s public_html directory.
I made the support request, which was fulfilled promptly, and the change works.
Now, it looks like some issues I’d been having have cleared up. Namely, the Global Redirect module I use in Drupal correctly redirects from URLs like /node/300 to their human-friendly paths.