Here are links to the resources related to my talk with Rich Trouton on Friday, July 11, 2014, at the Penn State MacAdmins Conference 2014.
If one of the management tools at hand is more server-driven, like DeployStudio or Casper, but can scope and tell clients to execute scripts, this integration may work. Casper can do this with policies that can be scoped to groups or smart groups — or even directory service groups.
However, Munki is client-driven. Its smart client needs to make decisions using information on the local system, interpreting instructions pulled from a server. A lot of those instructions are pretty static, but Munki has more tricks available. How can we make that smart client that depends upon local decisions work with a server-driven system?
Casper is extensible; its policies can run scripts, and there’s nothing I’m aware of stopping a policy script from writing out a Facter fact. A DeployStudio script can do it, too. Specifically, we’ll use a type of external fact called a structured data fact.
The structured data fact can be written as a plain text file in /private/etc/facter/facts.d/ (for the open source version of Facter). (It could also be a JSON or YAML file, if we’d rather use those.) Whatever the format, the file contains one or more key-value pairs.
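For example, a plain text external fact file dropped into that directory might look like this (the fact names and values here are hypothetical, just to illustrate the key-value format):

```
role=kiosk
building=main-campus
```

Facter would then report `role` and `building` as facts alongside its built-in ones.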
Casper is interesting here because it has policies whose scripts could write out Facter facts automatically, while also supporting Self Service policies whose scripts could do the same when the user selects them.
This wouldn’t work if Munki didn’t have a means to use Facter facts, of course. Would I have led us down this path if it didn’t? Out of the box, that capability isn’t there, but Munki can be extended with administrator-provided conditional scripts. Luckily, Tim Sutton comes to the rescue (again!) with a Munki condition script for facts from Facter. Add this, and the way is open for us to build Munki conditional phrases to scope deployments based on the values of these facts.
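To sketch what that scoping could look like, a Munki manifest might carry a conditional_items entry keyed off a fact. (This fragment is illustrative: the `facter_role` key and `KioskApp` item are hypothetical, and the exact key naming depends on how the condition script writes facts into ConditionalItems.plist — check the script’s source for its scheme.)

```xml
<key>conditional_items</key>
<array>
    <dict>
        <key>condition</key>
        <string>facter_role == "kiosk"</string>
        <key>managed_installs</key>
        <array>
            <string>KioskApp</string>
        </array>
    </dict>
</array>
```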
So here’s a rough outline of the process.
Roll this all back a step to deployment time, since we may want this to start as far upstream as possible. A tool like DeployStudio could run scripts that write facts, or copy whole structured data fact files, during a deployment workflow. This could be entirely automated or perhaps based on entries in the computer record in the DeployStudio database.
There, we just connected different systems and scoped a Munki deployment based on external information.
I haven’t heard of anyone using this technique, so if this is old hat to you, I’d love to hear how it has worked out.
It has now been one year since I started my new position with Tamman Technologies and moved my family to the Greater Philadelphia area. It feels like everything has changed, but everything has stayed the same. Plus ça change, plus c’est la même chose, as the saying goes.
It has been a good change, a happy change. While it was a big move, it felt more dramatic to me a year ago than it does now, when it is just reality. I don’t have all of my thoughts about it collected and organized right now, but I felt that I should at least mark this point in time.
I’d also like to thank all of our family and friends that made the move possible, bearable, and successful.
I am a little late to this, but my AutoPkg recipe repository is now available alongside many others. So, I made the switch:
So, if you’ve been following my whimsical AutoPkg repository (Acorn! Fantastical! LaunchBar! XRG!), I suggest you switch, too.
In case you were still interested, I’ve updated the source code link for my Penn State MacAdmins Conference 2013 Luggage talk. My examples from the talk are finally on-line, so you can follow along in a little more detail if the information already presented in the slides wasn’t enough.
The main repository for the talk contains links to several Mercurial subrepositories, since I tracked each example separately as its own project.
Munki writes out data into the ManagedInstallReport.plist file when it runs. The InstallResults key in the report shows if anything at all was installed. If something was installed during the last Munki cycle, can we programmatically filter out just the OS X Update?
The answer is yes! If it weren’t, of course, this would be an even shorter article. No short articles!
Let’s take a look at the Python code to do this. You can enter the following lines of code in the interactive Python interpreter. To get there, type “python” at the command prompt in Terminal on an OS X system.
We’ll start with the assumption that the OS X 10.8.4 update has just been installed. In that case, it would be listed in the current ManagedInstallReport.plist. (This won’t be the case on your own computer, of course, unless the same OS X update was just installed by Munki. For more on other conditions, stay tuned.) Create a variable for the plist file path as follows.
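A minimal version of that step, using the default report location:

```python
# Default location of the report Munki writes after each run.
report_path = "/Library/Managed Installs/ManagedInstallReport.plist"
```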
Import the Python “sys” module and add “/usr/local/munki” to the sys.path. This tells Python to look for importable modules there. This is roughly akin to adding to the PATH environment variable in a UNIX shell.
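That step looks like this in the interpreter:

```python
import sys

# Make Munki's bundled Python modules (munkilib) importable,
# much like adding a directory to PATH in a UNIX shell.
munki_dir = "/usr/local/munki"
if munki_dir not in sys.path:
    sys.path.append(munki_dir)
```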
Since the Munki tools are installed — otherwise, we wouldn’t be worried about interpreting Munki install results — we can depend upon the availability of the “munkilib” Python module. That happens to include FoundationPlist, which is a handy way to read property lists. (In the following example, I import FoundationPlist as “plistlib” to hearken back to the name of an older Python module that did the same.)
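The import, with one addition of mine: a fallback to the standard-library plistlib so the snippet still runs on a machine without the Munki tools installed (on a real Munki client, the first import succeeds).

```python
import sys

sys.path.append("/usr/local/munki")

# On a Munki client, munkilib is available; elsewhere, fall back to
# the standard library's plist module so the example is runnable.
try:
    from munkilib import FoundationPlist as plistlib
except ImportError:
    import plistlib
```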
Read the Managed Installs Report plist file from the path given earlier. Pull out only the contents of the “InstallResults” array from the property list data.
Iterate through the “InstallResults” array to find each dictionary whose name is “OS X Update” and whose “applesus” value is “true.” There should only ever be one result, because only one OS X update should be installed during any given Munki run.
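Here is that filtering step. The sample data below stands in for the “InstallResults” array (on a live system it would come from reading ManagedInstallReport.plist, as above); the field values are illustrative.

```python
# Sample data standing in for report["InstallResults"].
install_results = [
    {"name": "Firefox", "applesus": False, "status": 0},
    {"name": "OS X Update", "applesus": True, "version": "10.8.4", "status": 0},
]

# Keep only Apple Software Update items named "OS X Update".
os_x_updates = [item for item in install_results
                if item.get("name") == "OS X Update" and item.get("applesus")]
```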
Printing the result displays information from the matching dictionary.
I was involved in a discussion today about the utility of adding scripts that can be run via “periodic.” It’s extremely handy to be able to drop or deploy scripts into the locations examined by periodic when it is scheduled. It’s so handy, I can’t believe I haven’t written about this before.
I’ve long been a fan of /usr/sbin/periodic and its ability to run other scripts. It has been tied together with first cron and then launchd on the OS X platform for years, where maintenance scripts are run on a daily, weekly, and monthly basis.
Periodic itself is a shell script from FreeBSD that runs other executables in a specified directory. It will run any executables at the path specified by the argument following the command. By default, it treats the argument as a directory within /private/etc/periodic, but you can also specify an arbitrary path.
Apple’s OS X maintenance jobs consist of scripts found within these periodic subfolders:

- /private/etc/periodic/daily
- /private/etc/periodic/weekly
- /private/etc/periodic/monthly
Running “periodic daily” to execute the contents of the “daily” folder above is triggered by /System/Library/LaunchDaemons/com.apple.periodic-daily.plist.
The weekly and monthly launchd jobs are similar, differing in their StartCalendarInterval values. I’m pretty sure that the timing of the weekly job has shifted from previous OS X releases to what I see on my system today.
| Job | Day | Time (system local) |
| --- | --- | --- |
| Daily | Every day | 3:15 AM |
| Weekly | Every week, day 6 (Saturday) | 3:15 AM |
| Monthly | Every month, day 1 | 5:30 AM |
Once you place a new script into any of the existing folders, it will be called on the schedule specified by these three launchd tasks. The next time it runs, the periodic utility automatically picks up on any new executables in whatever subfolder it examines.
But, you don’t have to stop there. You can add your own periodic subfolders and wire them up with launchd. Add your own subfolders in /private/etc/periodic. Then, these subfolders can be activated via new launchd jobs that you create (and place in /Library/LaunchDaemons).
For example, to run all of the scripts in the new folder /private/etc/periodic/morning before the start of each business day, you could specify:
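A launchd job along these lines would do it, saved to /Library/LaunchDaemons (the label and the 7:00 AM schedule are just examples — adjust to taste):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.periodic-morning</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/sbin/periodic</string>
        <string>morning</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>7</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```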
In researching the InstallResults key from the ManagedInstallReport.plist, I found that Munki automatically archives previous ManagedInstallReport.plist files. The archive is stored at the path /Library/Managed Installs/Archives.
I had 100 timestamped, archived reports there. That was such a specific number that it didn’t seem to be a coincidence. Sure enough, in the source code of munkicommon.py, there is a segment devoted to trimming the archived reports down to the last 100. Excess reports over 100 are deleted.
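A minimal sketch of that trimming logic — not Munki’s actual code, just the idea, assuming the timestamped filenames sort chronologically:

```python
import os

ARCHIVE_DIR = "/Library/Managed Installs/Archives"


def trim_archives(archive_dir=ARCHIVE_DIR, keep=100):
    """Delete all but the newest `keep` archived reports.

    Timestamped filenames sort chronologically, so the oldest
    reports come first in a sorted listing.
    """
    reports = sorted(os.listdir(archive_dir))
    for name in reports[:-keep]:
        os.remove(os.path.join(archive_dir, name))
```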
These archived reports are helpful if you are seeking to find an example of something specific in the ManagedInstallReport output. In my case, I was looking to see how the report presented a system software installation, like the OS X 10.8.4 Update. On my home systems, I install those packages through the standard public Apple Software Update mechanism, but the update itself could have been obtained through a Software Update Server (SUS).
Luckily, the installation of the OS X 10.8.4 Update was recent enough that it did appear in the AppleUpdates and InstallResults of some of my archived ManagedInstallReport files. To find where, I opened up the “Archives” folder as a Project in BBEdit and performed a Project-wide search for “OS X Update.” When I found a single file with two “hits,” I was reasonably sure that was the one where the install happened. For reference, its AppleUpdates key:
… and the InstallResults key: