Mac OS X Snow Leopard appears to roll in the functionality of the separate Keychain Minder tool. Keychain Minder has provided a way for system administrators to help keep the keychain passphrase in sync with the login account password. That can be very helpful for users in a directory services environment, because users may change their password in ways outside Mac OS X, thereby leaving the keychain passphrase out of sync.
The keychain passphrase is separate from the password used to log in to a Mac OS X user account. By default, however, the password on the login account is set as the passphrase for that user’s default keychain. When the password and passphrase get out of sync, it can cause a lot of confusion for those who don’t understand what’s going on.
I’d wager it’s a rare Mac OS X user who intentionally sets their login account password and keychain passphrase to be different, as I do. Therefore, keeping the two in sync is a benefit in a large percentage of cases.
Snow Leopard implements this feature as a preference item in Keychain Access, under the First Aid tab. It’s labeled “Synchronize login keychain password with account.” (I would have rephrased that as “default keychain” since keychains by other names can be the default keychain; the default name just happens to be “login” nowadays.)
Keychain Minder stored its settings in the com.afp548.KeychainMinder.plist preferences file. This doesn’t seem to have any impact, one way or another, on this particular keychain preference.
So, I looked for and eventually discovered that the new built-in feature of Snow Leopard stores its state in the SyncLoginPassword key of the com.apple.keychainaccess.plist file. You can see this change by using the defaults command in Terminal:
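For example, something along these lines should show and change the setting — the domain and key are as I found them, but the boolean value type below is my assumption:

```shell
# Read the current state of the preference for the logged-in user:
defaults read com.apple.keychainaccess SyncLoginPassword

# Disable the synchronization (value type is an assumption on my part):
defaults write com.apple.keychainaccess SyncLoginPassword -bool false
```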
You will want to have this preference disabled on any user accounts — likely power users — whose login account passwords will differ from their keychain passphrases. Otherwise, they will get prompted regularly to “Synchronize,” “Create New,” or “Continue” during the login process.
Throughout the history of Mac OS X’s inclusion of Kerberos, there has been a Kerberos application available in /System/Library/CoreServices. This utility was the graphical interface for managing Kerberos tickets. It put a Mac OS X face on the MIT Kerberos command-line tools like kinit, klist, and kdestroy.
In Snow Leopard, that utility is replaced by a new application called “Ticket Viewer.” I’m not sure of the reason for the change, as it seems arbitrary, but it is what it is. (And it’s not as if Apple hasn’t changed more heavily-used applications’ names — the Print Center vs. Printer Setup Utility situation springs to mind.)
Those of you who may have linked to the Kerberos application — I liked having a symlink in /Applications/Utilities, for example — should update those links. (I also have to update my explicit indexing of that application for LaunchBar, because it doesn’t scan all of /System/Library/CoreServices by default.)
The Ticket Viewer is also available from the application menu in Keychain Access. It has the keyboard shortcut of Command-Option-K.
After using Acquia Drupal for a while, I took advantage of a trial subscription to the Acquia Network. The network’s services showed me that I had files present in my install that the agent could not account for.
I suspected this was happening because of the way I manage my Acquia Drupal installation with Mercurial. So, I’ve modified my previous process (and updated my instructions) to extract the downloaded tar archive with the --recursive-unlink option. This option appears to successfully remove the contents of every directory before putting new files back into them.
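To illustrate what --recursive-unlink does, here is a minimal, self-contained sketch — the paths and file names are throwaway examples, not the real Drupal tree:

```shell
set -e
demo=$(mktemp -d)

# Build a pretend "release" archive containing one file.
mkdir "$demo/site"
echo '<?php' > "$demo/site/index.php"
tar -C "$demo" -czf "$demo/release.tar.gz" site

# Simulate a leftover file that is absent from the new release.
echo 'stale' > "$demo/site/removed-module.txt"

# Extract with --recursive-unlink: each directory in the archive is
# emptied before its contents are written back.
tar -C "$demo" -xzf "$demo/release.tar.gz" --recursive-unlink

ls "$demo/site"   # index.php is back; removed-module.txt is gone
```

Note that --recursive-unlink is a GNU tar option, so the BSD tar shipped with some systems won’t have it.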
When the archive is extracted in this way, my repository’s working directory shows modified, unknown, and deleted files. This allows me to treat each category of files individually before I commit the changes for a Drupal update as a revision.
$ hg status
The modified files will be tracked normally because they’ve already been added to the Mercurial repository, so I don’t need to do anything special for them.
The unknown files are ones that are completely new, and have not appeared in the same position in a previous revision. They have yet to be tracked by Mercurial, so I have to add them to the repository. To add just those unknown files, then, I have to pick them out from the status listing:
$ hg status --unknown
In order to operate just on those files to add them to the repository, I run a for loop:
This changes the “?” status to “A,” because the files are now being tracked by Mercurial.
I use the “--no-status” flag on the “status” command so that just the file paths are printed; the actual status code is not, which is appropriate for the target of the “add” command in the loop.
I do the same basic steps with deleted files. These are files that were in the previous revisions but have been deleted by the --recursive-unlink option from the tar extraction and not replaced with the extraction of the new Acquia Drupal tar archive. If the deleted files had been replaced by the tar extraction, they would either be unchanged (which would not show up in the “status” output) or marked as modified.
To remove the files that are marked as deleted from the repository’s working directory:
However, that may be the same as simply using the following, which I have to explore further:
$ hg remove --after
So, to capture all of these changes in the repository, I run the loop for the unknown files and the loop for the deleted files. The modified files are already tracked, so I don’t need to do anything additional for them. After that, a “commit” will record all of the changes — modifications, additions, and deletions — in the repo.
These commands are based on my current understanding of Mercurial, and they do work for me right now. There could certainly be another better way to do this in one fell swoop — or at least fewer steps. I would welcome that, so if you’re aware of a way, feel free to comment or contact me.
Update: I found that the “hg addremove” command cleanly replaces all of the shell loops I mentioned above. Therefore, I recommend using it instead of the “for” loops I described.
Apache Solr provides a Web service front end for the Apache Lucene indexing and search engine library. Both Solr and Lucene (upon which Solr depends) are Java-based, which has implications for shared Web hosting.
Drupal is an open source CMS, and I happen to use it on a shared Web hosting provider as of this writing. Drupal is gaining support for Apache Solr through a module that has had a lot of input from Acquia (the “Red Hat” of Drupal).
Dries Buytaert of Acquia has some interesting perspective on search for the Web and CMSes in some recent articles on his site. Specifically, he talks about Acquia Search, a Solr-based search service that is being offered to Drupal sites on the Acquia Network. He discusses the advantages afforded by good search capabilities for both visitors to a Drupal Web site and for site administrators.
I’ve used Acquia Search (in beta), and it has been great. It’s very fast compared to the core Drupal Search module. The ability to perform faceted searches, word stemming, spell checking, and more is all tremendous. (You can see it in action in the search field in the site sidebar, as long as my Acquia Network subscription from the beta lasts.)
But Acquia Search is part of a larger service offering — the Acquia Network — which ultimately makes it too expensive for me on my personal sites. It’s priced out of reach for me — a single year costs more than two years of Web hosting, domain registrations, and separate e-mail hosting for my domains do today. I think it’s clear that Acquia is aiming at a different market, and that’s fine.
My idle thought, however, is that search by itself is a compelling feature even for small Web sites like mine. It’s as compelling as hosting files, like HTML or PHP or images, or serving databases, like MySQL and PostgreSQL.
If search is as important as Dries notes in his posts — the market so large and growing, and of such universal importance — then great search is a compelling feature to have for many levels of sites. After serving the files and serving the database, it may be the next big service that a Web hosting provider could offer. And today, Web hosting offers a range of pricing (and service levels) to meet various needs.
I could see advertising that for some monthly fee, a Web host offers 55 GB of storage and 550 GB of monthly data transfer and unlimited MySQL databases — and oh, by the way, some reasonable level of indexing/search with Apache Solr and/or Sphinx or whatever. Although I hate to suggest it, search could even be an optional add-on, as many providers treat dedicated IP addresses or SSL or the like.
There may be an additional win, in that separate servers could be optimized for search to offload that processing from the Web server. It could even be something that a Web host contracts out or partners with another to provide — maybe even with a company like Acquia that’s already set up their infrastructure to scale on Amazon EC2.
Especially if other CMSes, such as WordPress, get Solr integration — as with this WordPress Solr plugin — then the case for Web hosts offering something like Solr search gets convincing.
I see a lot of complaints about Belkin SOHO-series KVM (keyboard, video, and mouse) switches. I think many of these complaints are warranted; I’ve used two of these KVMs for a long time and have some familiarity with them.
However, one complaint that does have a workaround covers the mapping of the Mac’s Command and Option keys. For the hybrid PS/2-USB KVMs I have used, you must use a PS/2 keyboard and mouse.
That PS/2 requirement ensures that your keyboard is going to be labeled for PC/Windows use. If you connect any Macs, you’ll be frustrated by the key layout of Command and Option. Initially, the Alt key will act like Option, and the Windows key will behave as if it’s Command. This is the opposite of what you’d expect from an Apple keyboard — or another keyboard designed primarily for Mac use.
The good news is that this behavior can be changed, and it applies individually to each KVM port. If you have a Mac on Port 1 and a Windows computer on Port 2, they can each have the settings you’d expect. To do so, switch to the port connecting to a Mac and press Esc-A. This puts that port in the “Mac function” mode. In this mode, PS/2 Alt is Command and PS/2 Windows is Option.
Other keys also change, according to a table from an addendum to the Belkin manual. Given that it was a separate sheet in the box, I’m not surprised that many people have apparently missed it.
| PS/2 keyboard key | Mac function |
| --- | --- |
| Delete | Forward Delete (deletes text to the right of the insertion point) |
| Scroll Lock | Power key (documented as a shortcut for the Shut Down menu command) |
To reverse the setting, press Esc-Y to disable the remapping and return the port to its previous function mode. Again, you have to do this on a port-by-port basis.
If you ever switch the computers connected to the ports, you will need to disable this change for each affected port; it by no means updates itself dynamically. That’s why there’s a problem in the first place.
I owe Greg Madore for this tip, as he's the one who originally found it for me.
Unfortunately, this does not fix another failing of the Belkin SOHO KVMs for my kind of work — namely, the inability to change startup behavior on Macs. (I have not yet seen a KVM with keyboard emulation that consistently allows the use of modifier keys — such as C, T, Option, H, etc. — to change the startup behavior of a Mac. That capability would be extremely handy for KVMs used in technical support scenarios.)
I discovered — after I’d set up OpenID delegation (using the Drupal OpenID URL module and Sam Ruby’s instructions) — that each OpenID used with a Drupal site needs to be associated with a Drupal account.
Therefore, even though OpenID delegation may point to a previously-associated provider, such as Verisign Labs’ Personal Identity Portal, or PIP, it acts as its own identity. The delegating URL is a URL in its own right, so this makes some sense even if it is not convenient when you set up delegation after starting to use other OpenIDs.
I had to teach each of my Drupal accounts on various sites that I wanted to use my own URL in addition to any previously-associated OpenIDs.
Apparently, I’ve been installing too many applications on my iPod touch. The other day, I got this warning from it while trying to use the App Store application to download a new app: “There is not enough space to download this application. Please delete some photos or videos.”
Trimming the applications list is a lot less satisfying than filling it up.
It should come as no surprise that Apple Installer installation packages can contain scripts. These scripts are supposed to conduct important operations during the course of the software installation.
However, when you are the system administrator of more than one Mac, you find that developers sometimes miss a good balance between what you think should be in the installer payload versus what should be in its scripts. The payload of an installer, by definition, is the set of files and links to be installed, along with information on where they should be installed as well as how (i.e., ownership and permissions).
Therefore, developers should not need to run scripts that create or delete files — files should be created from the payload itself, and if a file must be deleted during the install, then consider that perhaps you’re doing it wrong. Likewise, there should be little need to move or copy files, because as many copies as desired can be installed from the payload. Similarly, any changes to ownership or permissions should be taken care of in the payload.
Perhaps I’m being a purist here. I’m certainly accused of that, from time to time. However, this just makes sense to me and I happen to think that many developers are similarly logical people. They just aren’t the kind of logical people who happen to spend effort on software installation, especially the kind that results in a deployment-friendly installer package.
So how do we as administrators verify the quality of the scripts in installers? Is there a way we can quickly peer into them to decide if any of the scripts’ steps will be superfluous or even (gasp!) harmful?
Well, I have a quick suggestion for scanning packaged installers. The following one-liner shell command will search an installer package or metapackage for scripts that have the kinds of steps outlined above.
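The original one-liner isn’t reproduced here, so the following is my own sketch of the idea — the script names and the regex are assumptions on my part, demonstrated against a throwaway fake package so the output format is easy to see:

```shell
# Build a fake package containing a postflight script with the kinds of
# commands that arguably belong in the payload instead.
pkg=$(mktemp -d)
mkdir -p "$pkg/Contents/Resources"
printf '#!/bin/sh\ncp /tmp/a /Library/b\nchmod 777 /Library/b\n' \
    > "$pkg/Contents/Resources/postflight"

# The scan: search pre/postflight scripts for file-manipulation commands.
# grep -Hn prints the offending file and line number for each match.
find "$pkg" -type f \( -name '*preflight' -o -name '*postflight' \) -exec \
    grep -HnE '(^|[^[:alnum:]_])(cp|mv|rm|mkdir|ln|chown|chmod)([^[:alnum:]_]|$)' {} +
```

Against a real installer, you would point find at the mounted package (or metapackage) instead of the temporary directory built above.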
It won’t catch everything, but it’s a start. The output displays the offending file and line number, so you can conduct a more careful examination of the matches it finds.
I haven’t run this on an exhaustive list of installation packages, but I have already seen at least one installer that produces worrisome output.
Update: I’ve changed the regex for the pre/postflight script so that it is more general than what I originally posted. I’m also having some problems with the snippet working with a certain installer whose scripts I know have cp and chmod commands. So, I may be back to the drawing board with this; comments are welcome.
After the news hit about Time Warner Cable’s intent to charge different rates for tiers of monthly data transfer — and an enormous $1/GB fee for overages — it seems eminently sane to consider the competition.
In Rochester, that competition is Frontier DSL. For a long time, that basically meant there was no competition, I’m very sorry to say.
However, the changes to TWC’s fee structure may be so extreme that even that level of competition is good. While I don’t think our household monthly data transfer is excessive, I’m reasonably sure (based on what I’ve seen from the data I’ve collected from our broadband router) that we’ll blow right past the 5 GB/month tier and maybe the 10 GB/month one. We would have to — and by that I mean, I would have to, really — develop some more austere usage of the family Internet connection than we’re accustomed to. Thus, I’m examining the pro and con positions for Frontier’s high-speed Internet service.
With Frontier DSL, my family should:
However, there are some drawbacks to Frontier DSL. My family would be concerned about:
Anyway, while we’re mulling this over, the news is playing out on sites like StopTheCap and StopTWC! Meanwhile, I’m more than a little annoyed at the traditional news media avoiding some of the other angles surrounding this topic — the pricing change as a way to protect cable television revenues, the local monopoly (and how cable infrastructure compares to its telephone equivalent), the impact on increasingly Internet-dependent households during a recession, how this might change the habits of people (including employees working at home), and so on.
It was extremely satisfying to see the following dialog about the volume expansion when logging into my Infrant ReadyNAS:
The overall process of expansion took about four days, converting from four 320 GB drives to four 1 TB drives. (For reference, I selected the Hitachi 7K1000.B 0A38016 drive from ZipZoomFly, and did so almost entirely on price — despite my longstanding misgivings about IBM/Hitachi drives.)
The capacity expansion took longer than strictly necessary, because I wrote zeros across each of the drives before installing them, one at a time, in the ReadyNAS. (I ended up writing zeros to each drive twice, switching from the Disk Utility to the command line equivalent.)
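For reference, the command-line counterpart I’m referring to is along these lines — the disk identifier below is only an example, and the command irreversibly erases the disk, so it is shown commented out:

```shell
# DANGER: zeroes the entire disk. The identifier here is an example only;
# confirm yours with `diskutil list` before running anything like this.
# diskutil zeroDisk /dev/disk2
```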
Near the end of the process, I found out that the automatic X-RAID™ expansion doesn’t happen until you reboot the ReadyNAS after upgrading the last drive. I had also enabled a snapshot on the ReadyNAS, which also prevented the automatic volume capacity expansion, so I had to delete that.