Xsan 2.3 & Lion to Xsan 3 & Mountain Lion upgrade

rlackeyjr:

Hi guys,

I have a very happy Xsan 2.3 SAN running on Lion and need to add some Mountain Lion clients. The first time around I gave up, as the ML client refused to authenticate... I couldn't even add it. So I just installed Lion on the new Mac Pro that came with ML and it worked fine, but that's not so practical anymore, so I think it's time to bring the whole shebang up to ML.

I have searched for information on this, as I'll be upgrading a live Xsan. Is there anything I need to be aware of?

Thanks,

Rich

thomasb:

Hi,

We have been running 10.8.3 and Xsan 3 for a few months now at one of our smaller regional offices, and it seems to be running fine.

We are going to upgrade our big MultiSAN with 7 MDCs and 100 clients from 10.7.5 to 10.8.x and Xsan 3 this summer, and we'll upgrade 3-4 smaller installations before that.

Remember to back up all of your Xsan and volume config files, including getting a cvlabel backup via Terminal:
[code]cvlabel -c > ~/Desktop/lunlabels.txt[/code]
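If it helps, this is a sketch of the pre-upgrade backup routine we run on each MDC. The folder name is just an example; /Library/Preferences/Xsan is where Xsan 2.x keeps its config and volume .cfg files, and if cvlabel and cvadmin aren't in your PATH on Xsan 2.x they live in /Library/Filesystems/Xsan/bin.
[code]
# Run on each MDC before the upgrade (paths and names are examples)
mkdir -p ~/Desktop/xsan-backup

# Copy the Xsan configuration and volume .cfg files
sudo cp -R /Library/Preferences/Xsan ~/Desktop/xsan-backup/

# Save the current LUN labels
sudo cvlabel -c > ~/Desktop/xsan-backup/lunlabels.txt

# Note which controller is hosting each volume, for reference
sudo cvadmin -e select > ~/Desktop/xsan-backup/volumes.txt
[/code]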

You can find detailed instructions from Apple here:
http://help.apple.com/advancedserveradmin/mac/10.8/

Look under "Other services > Xsan > Upgrade from a previous version of Xsan".

Apple wrote:
[b]Upgrade procedures[/b]
There are two different sets of instructions depending on what you are upgrading:

[b]1. If you want to upgrade your current metadata controllers to Xsan 3 and OS X Server, see this help topic:[/b]

Upgrade your SAN software

Step 1: Back up your SAN volumes
Step 2: Disable Spotlight on all volumes
Step 3: Upgrade the primary controller to OS X Mountain Lion and OS X Server
Step 4: Upgrade the remaining controllers
Step 5: Reestablish Open Directory replicas
Step 6: Upgrade the SAN clients
Step 7: Enable extended attributes
Step 8: Change filename case sensitivity
Step 9: Reenable Spotlight

[b]2. If you need to replace your current metadata controllers in addition to upgrading to Xsan 3 and OS X Server, see this help topic:[/b]

Upgrade SAN hardware and software

Step 1: Back up your SAN volumes
Step 2: Disable Spotlight on all volumes
Step 3: Adjust volume failover priorities
Step 4: Convert all standby controllers to clients
Step 5: Unmount and stop all volumes
Step 6: Connect new computers to the SAN
Step 7: Migrate the primary controller to a new computer
Step 8: Migrate previous standby controllers to new client computers
Step 9: Convert clients to standby controllers
Step 10: Migrate remaining SAN clients
Step 11: Enable extended attributes
Step 12: Change filename case sensitivity
Step 13: Reenable Spotlight
Step 14: Re-create your MultiSAN configuration[/quote]
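For the "Disable Spotlight" and "Reenable Spotlight" steps in both lists, Xsan Admin has a per-volume Spotlight setting, but mdutil from Terminal does the same thing per volume; "SanVol" below is just a placeholder for your volume name.
[code]
# Disable Spotlight indexing on an Xsan volume ("SanVol" is a placeholder)
sudo mdutil -i off /Volumes/SanVol

# Verify the indexing status
sudo mdutil -s /Volumes/SanVol

# After the upgrade is done, turn it back on:
# sudo mdutil -i on /Volumes/SanVol
[/code]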

Remember that for volume failover to work with Xsan, both controllers need to have the volume(s) mounted.
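Before taking each controller down, a quick sanity check is to see which MDC is hosting each volume and, if needed, force a failover from Terminal; this is just a sketch, with "SanVol" as a placeholder volume name.
[code]
# List active volumes (the output shows which controller is hosting each one)
sudo cvadmin -e select

# Force "SanVol" to fail over to its standby controller
sudo cvadmin -e "fail SanVol"
[/code]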

The only issue I've been able to come across so far by searching Google is this: http://list-archives.org/2013/04/07/xsan-users-lists-apple-com/issues-wi...

It is supposed to work fine though, according to Apple: http://support.apple.com/kb/HT3755

I have not yet tested Xsan 2.2.2 clients on 10.6.8 against 10.8.3 Xsan 3 MDCs, but I'm planning on testing this in our lab environment before upgrading our big MultiSAN.
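When we do, the first thing I'll compare on each machine is what cvversions reports, since the tools moved between releases; these are the paths I expect for Xsan 2.x versus Xsan 3, so treat them as a sketch.
[code]
# Xsan 2.x (Snow Leopard / Lion) clients and MDCs:
/Library/Filesystems/Xsan/bin/cvversions

# Xsan 3 (Mountain Lion) clients and MDCs:
/System/Library/Filesystems/acfs.fs/Contents/bin/cvversions
[/code]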

abstractrude:

My only note is that I had a 10.7 > 10.8 upgrade completely delete the OD master. I had a backup, of course, but the whole directory was just gone after the upgrade. Other than that, the upgrades seem to be going well.
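One way to make that backup from Terminal before upgrading is an Open Directory archive via slapconfig; the archive path below is just an example, and it will prompt you for a password to encrypt the image.
[code]
# On the OD master, before the upgrade (archive path is an example)
sudo slapconfig -backupdb /Volumes/Backup/od-archive
[/code]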

-Trevor Carlson
THUMBWAR

digitaldesktop:

We have a system that is running 10.8.2 on the MDCs, and a combination of 10.8.2, 10.7.5 and 10.6.8 on the clients. We saw the same issue where LUNs were not being discovered on the 10.6.8 clients.

We tracked it down to how the Macs do LUN discovery. It changed dramatically in 10.7 and 10.8, and it is also related to how Apple changed its multipathing support; more specifically, Apple changed its support of ALUA (Asymmetric Logical Unit Access).

So what we found with an Areca-based controller is that you need to enable the Unique WWNN option. Once we flipped that switch, all was well.

It should also be noted that you should not mix ALUA and non-ALUA storage. If the non-ALUA storage cannot be switched into ALUA mode, then all of the storage should be made non-ALUA.
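A quick way to see this from the client side is to compare the LUNs a working client sees against what a broken one sees; something like the sketch below (on Xsan 2.x clients the tools are in /Library/Filesystems/Xsan/bin if they're not in your PATH).
[code]
# List every Xsan/StorNext-labeled LUN this client can currently see
sudo cvlabel -l

# The raw disks the OS discovered over fibre channel, for comparison
diskutil list
[/code]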

Hope this helps.

abstractrude:

DigitalDesktop,
Just to expand on what you said: it's more than the OS version; it also has to do with the Apple-branded LSI cards, which also want the Apple-style multipathing. If you are running ATTO and 10.7 or later, all is good.

-Trevor Carlson
THUMBWAR

rlackeyjr:

Hi Guys,

I started this thread and just wanted to say thanks for responding; all is mostly well. I installed ML and ML Server on the MDCs one by one, letting the volumes fail over each time, and only upgraded the clients after the MDCs were done. It all happened without any interruption at all.

I'm just having some trouble with two clients not wanting to mount... they ask me to check the fibre cables, but they see the LUNs, so that's the last niggling thing I have to figure out.
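In case it's useful to anyone else, this is what I'm comparing between a working client and the two stubborn ones; my understanding is that the "check fibre cables" message usually means the client can't see every LUN in the volume, so the lists should match a working client exactly.
[code]
# Does this client see every Xsan LUN? Compare the output against a working client.
sudo cvlabel -l

# Can the client reach the FSMs over the metadata network?
sudo cvadmin -e select
[/code]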

Thanks,

Rich

thomasb:

digitaldesktop wrote:
We have a system that is running 10.8.2 on the MDCs, and a combination of 10.8.2, 10.7.5 and 10.6.8 on the clients. We saw the same issue where LUNs were not being discovered on the 10.6.8 clients.

We tracked it down to how the Macs do LUN discovery. It changed dramatically in 10.7 and 10.8, and it is also related to how Apple changed its multipathing support; more specifically, Apple changed its support of ALUA (Asymmetric Logical Unit Access).

So what we found with an Areca-based controller is that you need to enable the Unique WWNN option. Once we flipped that switch, all was well.

It should also be noted that you should not mix ALUA and non-ALUA storage. If the non-ALUA storage cannot be switched into ALUA mode, then all of the storage should be made non-ALUA.[/quote]
This is very interesting. It sounds similar to the issues I asked about in this other thread:

Some LUNs invisible to some clients but not others?
http://www.xsanity.com/forum/viewtopic.php?t=18688

I would really appreciate any more clues about the LUN visibility issues described in the thread linked above.