Myself? I now look after a total of 25 Mac Servers, 21 of which are dedicated to our JSS & are installed across 8 different sites globally. Some of the challenges of this are touched upon in the JNUC2014 panel I was on, titled Thinking Big: Scaling JSS Infrastructures for the Mobile Workforce.
The focus of the post below is the Mac Minis which are used onsite as Casper Suite Distribution Points, NetBoot Servers & Caching Servers.
Why Mac Minis?
There are a few reasons we chose, & subsequently stuck with, Mac Minis:
- Cost: Depending on your infrastructure, the Mini can be quite cheap. The cost of a Mini including AppleCare & a rack kit can be less than the cost of some annual OS licenses.
- Storage: Even if the above is not a factor, storage can get pricey; try costing 1TB of RAID10 on a 3PAR or LeftHand node.
- Portability: OK, so this is an odd one when looking at a server, but the Mini’s diminutive form definitely helps in multi-office environments; shipping a Mini between sites is inexpensive & at times it can even be transported as hand luggage.
- OSX Server: Some people will see this as a distinct disadvantage, but I have managed OSX Server since 10.3 & am somewhat comfortable with it. There are also a number of services offered via OSX Server that make it perfect for Casper Distribution Points & “On Site Replicas”.
“Redundancy? What redundancy?” I hear people exclaim. Well, when we put the 1st set of Mini Servers in around 2011, they shipped with 2 x SATA disks that we set up in a RAID Mirror (read: RAID1).
That gave us a very limited level of redundancy per Mini per site, with the JSS having failovers set if one site’s server was down. (There are some people I know who will tell you a very colourful story of this redundant setup failing for them.) But 4 years later, only 3 of the 24 disks in the 12 Minis installed have failed.
Now with the start of 2015 we have “doubled” the redundancy by doubling the Minis. Some may call this cheating when talking about redundancy & cost, but I’ll hopefully explain both throughout this post.
Each 2011 Mini was rack mounted in a Sonnet RackMac Mini, which as shown below has space for 2 Minis.
So the rack enclosures were already onsite; the other consideration is that the Minis did not incur any recurring annual costs.
In fact, we depreciated them at 33% per year, so after 3 years the Minis were due to be replaced. But with them still in working order, & the new Minis only having 2 drives in the flavour of a Fusion Drive, we decided to keep the older Minis in place until they stop working.
Having at least 2 Minis per site has allowed us to leverage several of the load balancing & failover features of the JSS & OSX Server for the services that the Minis are used for.
Before I proceed: there is little point having a server onsite if your clients are not going to point to it, so it’s important to have your Network Segments in place as you do this.
One question that comes up often is how to define a Network Segment for clients that are off WAN. Well, the JSS assigns a Network Segment to a client based on its “IP Address” (note: not “Reported IP Address”), & assigns the most restrictive Network Segment that applies to that client’s “IP Address”.
For example, say we have the following Network Segments defined:
- Segment Name: External, IP Range: 184.108.40.206 – 255.255.255.255.
- Segment Name: Internal, IP Range 10.1.1.1 – 10.1.1.255
If a Mac client reports back an IP address of 220.127.116.11, it will fall under the “External” Network Segment.
Whereas if a Mac client reports back the IP 10.1.1.14 it will fall under the “Internal” Network Segment.
Clients should update their IP with the JSS at every check-in & at the start of policy execution, & so should then fall within the correct Network Segment.
With Network Segments defined, we can point clients to their nearest or accessible server for;
- Distribution Point.
- NetBoot Server.
- Software Update Server.
But really, there is no need for the last two to be defined & I’ll mention why in their respective sections below.
As we’re using OSX Server on these Minis, we’ll have both AFP & HTTP distribution points defined for each server under “File Share Distribution Points” within the JSS.
With both defined, the JSS will use HTTP for all actions except when using Casper Imaging, unless you override this manually.
I personally prefer HTTP over AFP for everything else as it allows for “resumable downloads”, which can be handy with an ever mobile workforce.
To create an HTTP distribution point, you need to create a symlink in the web server root directory that points to your share; JAMF have a great article on this.
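As a rough sketch, assuming a default OSX Server web root & a share named CasperShare (both paths are examples, check JAMF’s article for your OSX Server version):

```shell
# Symlink the share into the default web root so it is served over HTTP
# (both paths are hypothetical examples; adjust for your setup).
sudo ln -s /Shares/CasperShare /Library/Server/Web/Data/Sites/Default/CasperShare
```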
With that configured, each “File Share Distribution Point” in the JSS is set to failover to the other Mini on site & has “Randomized Load Sharing” ticked to also share the load between the two.
This way both Minis will be used, & if one goes down all clients that have been pointed to this “File Share Distribution Point” will failover to the other automagically.
NetBoot
Or the NetInstall service, as it has been renamed in OSX Server.app. It not only allows for imaging Macs, but also network booting them into an OS.
Within the .nbi folder is an NBImageInfo.plist, within which is a property for the .nbi’s Index. If this number is 4096+ then the client knows there is an identical image stored on multiple servers for load balancing.
So if a site has 2 Minis each hosting the same .nbi with the same Index, only the one .nbi will be shown in the Startup Disk prefpane or boot picker. If one of those Minis is offline, clients will still see the one image, hosted from the single server.
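The Index can be checked & set with PlistBuddy; the NetBoot path & image name below are hypothetical examples:

```shell
# Read the current Index of an .nbi (path & image name are examples).
sudo /usr/libexec/PlistBuddy -c "Print :Index" \
  "/Library/NetBoot/NetBootSP0/MyImage.nbi/NBImageInfo.plist"

# Set it to 4096+ on each server hosting the identical image,
# so clients treat the servers as load-balanced peers.
sudo /usr/libexec/PlistBuddy -c "Set :Index 4096" \
  "/Library/NetBoot/NetBootSP0/MyImage.nbi/NBImageInfo.plist"
```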
This presents an issue when defining a NetBoot Server in the JSS, as currently only one NetBoot Server can be defined per Network Segment. To get around this limitation we do not define a NetBoot Server within the JSS.
Clients can still be started up from a NetBoot Image via policy or Casper Remote, they will just pick the default image from the server they choose when negotiating the load balancing.
Historically we would run the Software Update service & then cascade the updates we enabled to all onsite Minis.
This wasted several hundred GBs hosting un-needed updates (something that Reposado could have resolved).
Not only that, but as with NetBoot Servers only one Software Update Server can be defined per Network Segment, & lastly, if a client is off-WAN you would then need to point them to a managed SUS that they could access.
Unlike Software Update, clients do not need to be pointed to the caching server; instead the caching server needs to be configured to offer updates to clients. Also unlike Software Update, the caching server will only hold the updates clients have requested.
Once the caching server is configured, clients can be updated via policy or Casper Remote. They will contact Apple & will then be redirected to their caching server if there is one configured, meaning that we do not need a Software Update Server defined within the JSS.
The caching service is also “peer aware” meaning that:
When your network has more than one caching server, the servers automatically become peers and can consult and share cached software.
If a peer is not available when queried, the original server will download the software & offer it to the clients. This brings redundancy to the service.
One issue with this service is that you cannot block updates at a service level; if you’re in an environment where that is a requirement then this may not be something for you to move to. However, if you enable all updates & only need to block the odd “bad” update, then you can set clients to ignore these “bad” updates via the below command:
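A sketch of the ignore command; the update label below is a hypothetical example (run `softwareupdate --list` on a client to find the real labels):

```shell
# Tell the client to ignore a "bad" update by its label
# (the label shown is an example only).
sudo softwareupdate --ignore "OSXUpd10.10.2"
```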
Usually with “bad” Apple updates, they are reissued after a while, in which case the below can be run once the “bad” update becomes “good”:
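Something like the following; note it clears the whole ignore list, not a single label:

```shell
# Clear the client's ignored-updates list so reissued updates are offered again.
sudo softwareupdate --reset-ignored
```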
OSX Server has a number of alerts that can be delivered via either Email or Push Notifications. When set to deliver via Push Notifications, any Mac that has connected to your server using Server.app will receive these Alerts in its Notification Center.
Personally I prefer Email Alerts; depending on your environment this may require some configuration. The file to edit is located at: /Library/Server/Mail/Config/postfix/main.cf (this location & file are only created once Server.app has been downloaded & its first run completed).
By default OSX Server.app will query your environment’s DNS for an MX record & will then attempt to send email via that server. If the MX record route does not work for you, scroll to the end of main.cf & look for the “relayhost = ” entry & enter your desired mail server, so it becomes “relayhost = mail.mycompany.com”, being mindful of the file’s permissions as per normal.
Once done we’ll need to enable postfix’s launch daemon:
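On OSX this can be done by loading postfix’s bundled Launch Daemon:

```shell
# Enable & start postfix via its system Launch Daemon.
sudo launchctl load -w /System/Library/LaunchDaemons/org.postfix.master.plist
```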
To test alert deliveries, set an email address in Server.app’s alert section, click the Action pop-up menu, then select Send Test Alert.
You can now also use Erik Gomez’s Cacher to get some awesome stats from the caching service.
Simple Network Management Protocol (SNMP) is a protocol that can be used to monitor a Mini’s resource usage remotely.
To configure you’ll need to edit: /etc/snmp/snmpd.conf
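A minimal read-only snmpd.conf sketch; the community string, location & contact below are placeholders to replace with your own:

```
# Minimal read-only SNMP configuration (all values are examples).
rocommunity mycommunitystring
syslocation "Server Room, Site 1"
syscontact email@example.com
```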
Once edited as needed, the service’s Launch Daemon will need to be enabled & run:
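For example, via the bundled net-snmp Launch Daemon:

```shell
# Enable & start snmpd via its system Launch Daemon.
sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
```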
Keeping it all in Sync
With so many Minis distributed, a means is needed to keep some of the data hosted on them in sync, predominantly the NetBoot/NetInstall images & the “File Share Distribution Point”.
If we were to use a JDS, then we’d not need to look at other methods to sync the “File Share Distribution Point” as the jamfds binary will keep the JDS in sync.
However, we’re not using the JDS as that syncing can happen up to 5 minutes after uploading items to the Master Distribution Point, which is not so great when Minis are connected to the Master via MPLS links with limited bandwidth.
So we use rsync scripts, triggered via Launch Daemons, to run at a time when the MPLS is not being used. If you were only worried about syncing the “File Share Distribution Points” & could sync during working hours, you can do this via Casper Admin.
When running rsync scripts in an automated fashion we obviously can’t have the rsync command prompting for a password, & we also want to run the rsync as root to avoid permissions issues.
So within terminal we elevate to root:
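For example:

```shell
# Open a root shell (you'll be prompted for your admin password).
sudo -s
```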
Once there we generate an RSA key, pressing return at any asked questions:
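Something like:

```shell
# Generate an RSA key pair for root; press return at each prompt to accept
# the default location & an empty passphrase (required for unattended rsync).
ssh-keygen -t rsa
```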
With the RSA key generated, we need to grab the public key. Copy the returned text:
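On OSX, root’s home is /var/root, so:

```shell
# Print root's public key; copy the output for the next step.
cat /var/root/.ssh/id_rsa.pub
```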
Now SSH into the server that hosts the share we wish to rsync, accepting any prompts to add to known hosts.
Now elevate to root again, open the “authorized_keys” file & add the key we generated earlier.
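Roughly like the below, assuming an admin account on the remote server (the username & truncated key are placeholders):

```shell
# On the Mini, connect to the server hosting the share.
ssh admin@server.mycompany.com

# On the remote server, elevate to root & append the copied public key.
sudo -s
mkdir -p /var/root/.ssh
echo "ssh-rsa AAAA...your-copied-key... root@mini01" >> /var/root/.ssh/authorized_keys

# SSH is fussy about these permissions.
chmod 700 /var/root/.ssh
chmod 600 /var/root/.ssh/authorized_keys
```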
We’re now ready to run a script like the below via a LaunchDaemon, changing these values as wanted:
- server.mycompany.com to the server that is hosting the folder we wish to sync.
- /Some/Folder to the path to the folder wanted to sync.
- email@example.com to an email address to send an alert to if the rsync failed.
If you wanted to run the script but with no
The above can be tested by running the script manually, once you’re happy that the script is working we can trigger it via a LaunchDaemon such as the below:
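A sketch of such a LaunchDaemon, using the placeholder label & paths described below (Weekday 6 is Saturday):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.launchdaemon.name</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/rsync/script</string>
    </array>
    <key>StandardErrorPath</key>
    <string>/path/to/error.log</string>
    <key>StandardOutPath</key>
    <string>/path/to/success.log</string>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Weekday</key>
        <integer>6</integer>
        <key>Hour</key>
        <integer>22</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```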
Changing the below to wanted values:
- com.launchdaemon.name to the LaunchDaemons name
- /path/to/rsync/script to the path to the rsync script
- /path/to/error.log & /path/to/success.log will log the actions of the LaunchDaemon. If this is wanted then these folders need to be created 1st.
- The “StartCalendarInterval” should be adjusted to a time that suits; in this example it is set to run at 22:00 every Saturday.
The below will run the script whenever the LaunchDaemon is loaded, & so is handy for testing purposes.
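That is, adding this key pair to the LaunchDaemon’s dict:

```xml
    <key>RunAtLoad</key>
    <true/>
```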
As ever with launchd items, the file’s permissions are important & should be 644 for a LaunchDaemon.
If the above runs & is successful, you should have entries in the /path/to/success.log file given in the LaunchDaemon.
The 1st entry is a sync with no items needing to be synced, the second is a sync with some items synced.
If you defined a mail server then you’ll get an email like the below, on an error only.
DHCP or Static IP?
If you define the “File Share Distribution Point” within the JSS using the Mini’s FQDN, then a Static IP “may” not be needed.
The “may” part comes down to how your mail server &/or SNMP monitoring servers discover/authenticate clients, as well as how your network handles DNS updates.
If your DNS is Windows hosted & is Dynamic DNS, your Mini should update its entry if bound to AD. If that all rings true, I would advise you limit the updates to the Ethernet interface, so that if any other adaptors are plugged in there is no confusion amongst the running services:
I hope the above rambling post helps some of you out there, & maybe makes you feel better for having that Mini in the server room. Below is how some of our Mac Servers are racked, & yep, we have 2 Mac Pros clustered & running our JSS.