For several years I’ve been involved in methods to deal with Adobe Creative Cloud packages via AutoPkg.
Well, due to changes to the packages' contents I’ve created yet another method (which is hopefully the last one).
Details on this method can be found below, as well as a history of the various methods employed over the years.
If you deploy Adobe Creative Cloud, and you’re not leaving folks to solely update via the Creative Cloud Desktop App, you might want to be notified of new updates.
Well, Adobe does have a way to notify you via email. But it’s a little hidden, and doesn’t cover all Adobe Creative Cloud products.
See below for how to subscribe to these email updates.
As forewarned in my prior post, here’s a post detailing methods to block the upgrade to macOS Big Sur.
In truth, the majority of this post will be rehashing items mentioned in a previous post titled: Blocking macOS Catalina with Jamf Pro.
But there are a couple of amendments, with most not being Jamf Pro specific.
Jamf has a wealth of documentation available for its various products; below is a little tip on how to access the most current documentation, with 99% less clickbait.
Tonight, Apple released macOS Catalina.
See below on how to block this upgrade with Jamf Pro.
With Jamf Pro 10.6.0, Jamf changed the default storage engine from MyISAM to InnoDB.
Jamf also released a tool to convert the tables with the release of 10.7.0.
However, as the Jamf-supplied tools are not currently available as a standalone download for those of us that do manual installations, & as we host our datajar.mobi instances on Kubernetes… we went the manual route.
Below is how we performed these conversions.
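A manual conversion like this generally boils down to finding the MyISAM tables and issuing an `ALTER TABLE` per table. The sketch below is an illustration of that approach, not the exact script we used: the `jamfsoftware` schema name is the Jamf Pro default, and in practice the table list would come from a query against `information_schema` such as `SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'jamfsoftware' AND ENGINE = 'MyISAM';`.

```python
# Sketch: generate the ALTER TABLE statements needed to convert a list of
# MyISAM tables to InnoDB. The table names here are illustrative; the real
# list would be pulled from information_schema as noted above.

def innodb_conversion_statements(tables):
    """Return one ALTER TABLE ... ENGINE=InnoDB statement per table."""
    return ["ALTER TABLE `{}` ENGINE=InnoDB;".format(t) for t in tables]

# The generated statements can then be fed to the mysql CLI or a client
# library, one at a time, during a maintenance window.
for stmt in innodb_conversion_statements(["computers", "policies"]):
    print(stmt)
```

Converting table-by-table also makes it easy to resume if a large table's conversion is interrupted.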
As mentioned before, when using Box with ADFS for SSO there are more than a few limitations.
In an attempt to overcome them, I took on Box’s API. The first hurdle was trying to connect to it, as Box uses OAuth2, which massively differs from the authentication used by other APIs I’ve accessed, such as AirWatch.
However, I’ve found a method & detailed it below. This method is used throughout all my Box API scripts.
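For context on why OAuth2 differs: instead of sending credentials with every call, you exchange an authorization code for a short-lived access token at Box's token endpoint (`https://api.box.com/oauth2/token`), then send that token on subsequent requests. The sketch below only builds the token request, it doesn't send it; the client ID, secret, and code are placeholders, not real credentials.

```python
# Sketch: build the form-encoded POST for Box's OAuth2 token endpoint.
# Nothing is sent over the network here; urlopen(req) would perform the
# actual exchange once real credentials are in place.
from urllib.parse import urlencode
from urllib.request import Request

def box_token_request(client_id, client_secret, auth_code):
    """Return a Request that exchanges an authorization code for a token."""
    body = urlencode({
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("utf-8")
    return Request(
        "https://api.box.com/oauth2/token",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Placeholder values for illustration only.
req = box_token_request("my-client-id", "my-client-secret", "my-auth-code")
print(req.full_url)
```

The JSON response to this request contains the `access_token` (and a `refresh_token`) used for the actual API calls.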
So the past few posts detailed how to parse Portfolio’s logs to find troublesome files as well as how to restart Portfolio’s services.
Well, when Portfolio is struggling to catalog files, it can generate massive amounts of temp files & folders, which if not maintained can fill the Portfolio host’s hard drive.
Not a great situation, so I’ve written the below to help automate the maintenance.
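The core of such maintenance is an age-based purge: walk the temp directory & delete anything older than a cutoff. The sketch below illustrates the idea; the actual temp path and age threshold depend on the Portfolio install, so both are parameters here rather than the real values.

```python
# Sketch: delete files under a Portfolio temp directory that are older
# than max_age_days. The directory path is supplied by the caller, as the
# real location depends on the Portfolio installation.
import os
import time

def purge_old_files(temp_dir, max_age_days=1):
    """Delete files older than max_age_days; return the paths removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    # topdown=False so files are seen before their parent directories.
    for root, _dirs, files in os.walk(temp_dir, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Run from a daily scheduled task, this keeps the temp area from ever accumulating more than a day or so of debris.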
As mentioned in my previous post, we’re currently cataloging around 40TB of data in Portfolio.
However, again as per the previous post, we’ve found some troublesome files that cause issues during cataloging. Once Portfolio attempts to catalog these files, the processes used can hang & get stuck on those files.
Portfolio then spawns more & more processes for the next files it finds until the box running Portfolio eventually runs out of resources.
To maintain the host’s uptime, we’re using the below script to restart the services daily.
As mentioned before, we use Portfolio to archive old projects.
However since moving to Portfolio v1.x (from v11), we’ve had numerous issues. The main one has been getting the 40TB of data we host re-cataloged into Portfolio.
I’ve recently deep-dived into Portfolio & found some files that cause issues; these can be found via the logs. Parsing them allows me to “fix” the troublesome files or delete any corrupt ones.
Below is how I’m parsing the logs.
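The gist of this kind of log parsing is a regex that picks the file path out of each error line. The sketch below assumes a simple log format (lines containing "ERROR" followed by a path); the actual pattern would need adjusting to match the Portfolio version's real log format.

```python
# Sketch: pull the offending file paths out of catalog error lines.
# The "ERROR ... /path" line shape is an assumption about the log format,
# not the documented Portfolio format.
import re

ERROR_LINE = re.compile(r"ERROR.*?(/\S+)")

def troublesome_files(log_lines):
    """Return the unique file paths found on error lines, in order seen."""
    seen = []
    for line in log_lines:
        match = ERROR_LINE.search(line)
        if match and match.group(1) not in seen:
            seen.append(match.group(1))
    return seen
```

De-duplicating while preserving order matters here, as a stuck process tends to log the same file repeatedly.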