NetApp & Powershell – Snapshot Report


Nathan Manzi has created a PowerShell script for reporting snapshots on a NetApp. The script below has been altered for clustered Data ONTAP instead of 7-Mode. Also, a few parameters have been made optional with default settings, so these values do not need to be entered every time the script is run.
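For illustration, such optional parameters with defaults could be declared in a param() block like the sketch below. The parameter names and default values here are assumptions for illustration only, not the actual contents of the script:

```powershell
# Sketch only: parameter names and defaults are assumptions, not the real script.
param (
    [Parameter(Mandatory = $true)]
    [string]$Username,

    [Parameter(Mandatory = $true)]
    [string]$Password,

    # Optional parameters with defaults, so they can be omitted at run time
    [string[]]$Controllers     = @('netapp01', 'netapp02'),
    [int]$WarningDays          = 7,
    [string[]]$ExcludedVolumes = @('vol0'),
    [string]$SmtpServer        = 'smtp.example.com',
    [string]$MailTo            = 'storage-admins@example.com'
)
```

With defaults like these in place, only the credentials have to be supplied on the command line.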

Also, when reporting on multiple NetApps, all volumes with no snapshots appear at the top of the mail. Below that, all old snapshots (older than $WarningDays) are displayed.

In $ExcludedVolumes you can list all volumes for which no results should be displayed.

The script can be run with the following command:

.\NetApp-SnapshotReport.ps1 -Username <user>  -Password <"pass">

The report is delivered by e-mail:


You may see “Media is Write Protected” Error or VDS error 80070013 after bringing SAN disk online via Diskpart in Windows Server 2008

When a LUN is presented from a SAN to Windows Server 2008, the following error may pop up and Event ID: 10 may be logged in the Event log when trying to use the disk for the first time.

Error Message:

“The Media is Write Protected”

System Event Log:

Log Name: System

Source: Virtual Disk Service


Event ID: 10

Task Category: None

Level: Error

Keywords: Classic

User: N/A


VDS fails to write boot code on a disk during clean operation. Error code: 80070013


Windows Server 2008 introduces a new policy related to SAN disks. This “SAN policy” determines whether a newly discovered disk is brought online or remains offline, and whether it is made read/write or remains read-only.

On Windows Server 2008 Enterprise and Windows Server 2008 Datacenter, the default SAN policy is VDS_SP_OFFLINE_SHARED. On all other Windows Server 2008 editions, the default SAN policy is VDS_SP_ONLINE.

SAN Policies:

VDS_SP_ONLINE: All newly discovered disks are brought online and made read-write.

VDS_SP_OFFLINE_SHARED: All newly discovered disks that do not reside on a shared bus are brought online and made read-write.

VDS_SP_OFFLINE: All newly discovered disks remain offline and read-only.
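Which policy is active can be checked, and changed, directly from DiskPart with the SAN command (available in Windows Server 2008 and later). The policy value in the example output below is illustrative:

```text
DISKPART> san

SAN Policy  : Offline Shared

DISKPART> san policy=OnlineAll
```

Note that the SAN policy only applies to newly discovered disks; disks that are already attached keep their current state.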

If the policy is such that newly discovered disks are set to offline and read-only, the administrator can use DiskPart at the command line or Disk Management from Server Manager > Storage to prepare the disks for use.

When using the Disk Management snap-in to bring a disk online, the new disk will be set to online and read-write. When using DiskPart, only the flags you specify are changed. Thus, if you issue the command to bring a disk online, it will only be put into the online state; you must issue a separate command to make the disk read/write. In this way, DiskPart allows finer control than Disk Management.

Bringing a disk online with DiskPart does not change the read-only attribute. This needs to be done manually using the following steps:

1. Run DiskPart

2. List the disks by running LIST DISK, and select the disk that needs to be made available by running SELECT DISK <number>.

3. If the disk is offline, bring it online by running ONLINE DISK

4. View the attributes by running DETAIL DISK

The command DETAIL DISK may give an output similar to the following

DISKPART> detail disk

Disk ID: ########

Type :

Bus : #

Target : #

LUN ID : #

Read-only : Yes

Boot Disk : No

Pagefile Disk : No

Hibernation File Disk : No

Crashdump Disk : No

5. To clear the read-only flag, run ATTRIBUTES DISK CLEAR READONLY

6. Exit DiskPart

You should now be able to write to the disk.
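Put together, a complete DiskPart session for the steps above looks like this (disk number 1 is an example; use the number reported by LIST DISK):

```text
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> detail disk
DISKPART> attributes disk clear readonly
DISKPART> exit
```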


Windows 2008 Multipath I/O Overview

Adding and removing MPIO support
To install Multipath I/O on a computer running Windows Server 2008, complete the following steps.

To install Multipath I/O
1. Open Server Manager.

To open Server Manager, click Start, point to Administrative Tools, and then click Server Manager.

2. In the Features area, click Add Features.

3. On the Select Features page of the Add Features Wizard, select Multipath I/O, and then click Next.

4. On the Confirm Installation Selections page, click Install.

5. When installation has completed, click Close.
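Alternatively, the feature can be installed from an elevated command prompt with ServerManagerCmd, which ships with Windows Server 2008 (the tool was deprecated in later releases; the exact feature name can be verified with ServerManagerCmd -query):

```text
ServerManagerCmd -install Multipath-IO
```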


Adding an EXP810 to an existing DS4700

More information:

Connecting storage expansion enclosures at the end (bottom) of a drive loop 

To add a 16-drive expansion enclosure, for example an EXP810 or EXP420, to the DS4000 subsystem configuration, you basically follow the same procedure as for adding a 14-drive enclosure to a DS4000 subsystem configuration; however, the port connections on the 16-drive enclosure are different, as illustrated in Figure 4 and the steps that follow it.

1. Insert the small form-factor pluggables (SFPs) or gigabit interface converters (GBICs) into only those ports that you intend to use. Do not leave GBICs or SFPs inserted into port connectors without connecting them to other ports using cables.
2. Extend one of the drive loops (that is, drive loop A) in a DS4000 storage subsystem redundant drive loop pair by connecting the OUT port of the last storage expansion enclosure to the IN port of the new storage expansion enclosure, as shown in Figure 1. Note: The port name on an EXP810 is not labeled IN. See Figure 4 for details.

Attention: Carefully reconfigure only one drive loop at a time, making sure that the drive loop that you modify is correctly connected and in Optimal state before you attempt to reconfigure another drive loop. Take this precaution to prevent the arrays from being inadvertently failed by the DS4000 storage subsystem controllers, which happens when two or more drives in the arrays cannot be reached through either drive loop in the redundant drive loop pair.

3. Power on the added storage expansion enclosure unit.
4. Wait a few seconds; verify that the port bypass LEDs of all of the ports in drive loop A, now extended to the storage expansion enclosure, are not lit. Using the DS Storage Manager Client Subsystem Management window, verify that the storage expansion enclosure is added and displayed in the Logical/Physical view of the window. Correct any errors before you proceed to step 5. For port bypass, complete the following steps:

  1. Make sure that the SFPs and GBICs or fiber cables are in good condition.
  2. Remove and reinsert SFPs, GBICs, and fiber cables.
  3. Make sure the drive expansion enclosure speed switch is set to the same speed as the existing drive expansion enclosures and the DS4000 storage subsystem speed setting.
  4. Make sure that the ESM is functioning correctly by removing it and swapping it with the other ESM in the same drive expansion enclosure.
  5. For an enclosure ID conflict, set the drive expansion enclosure ID switch to a value that is unique from the current settings of the existing drive expansion enclosures and the storage server.


If the problem remains, call IBM support for assistance.

5. In the other drive loop (drive loop B) in a DS4000 storage subsystem redundant drive loop pair, remove the connection from the storage subsystem drive loop port to the OUT port of the last storage expansion enclosure and connect it to the OUT port of the new drive enclosure, as shown in Figure 2. Note: The port name on an EXP810 is not labeled OUT. See Figure 4 for details.
6. Wait a few seconds; verify that the port bypass LEDs of the two ports in the connection between the storage subsystem drive loop port and the OUT port of the drive enclosure are not lit. Using the DS Storage Manager Client Subsystem Management window, verify that the drive enclosure does not indicate the Drive enclosure lost redundancy path error. See step 4 for possible corrective actions, as needed. Note: The existing storage expansion enclosures are shown with “Drive enclosure lost redundancy” path errors until you establish the Fibre Channel cabling connection that is described in step 7.
7. In drive loop B, cable the drive enclosure IN port to the OUT port of the last enclosure in the already functioning storage expansion enclosure drive loop, as shown in Figure 3.
8. Wait a few seconds; verify that the port bypass LEDs of all of the ports in drive loop B to which you have added a connection are not lit. Using the DS Storage Manager Client Subsystem Management window, verify that all of the drive enclosures in the DS4000 redundant drive loop pair to which the enclosure was added do not report the “Drive enclosure lost redundancy” path error.


Figure 1: Cabling a single drive enclosure to the end of a functioning drive loop (step 1 of 3)

Figure 2: Cabling a single drive enclosure to the end of a functioning drive loop (step 2 of 3)


Figure 3: Cabling a single drive enclosure to the end of a functioning drive loop (step 3 of 3)


Figure 4: Cabling an EXP810 to the end of a functioning drive loop

Connect a PowerVault 220S to a server

Cabling Your System for Joined-Bus, Split-Bus, or Cluster Mode

How you cable your storage system to your host system(s) depends on the bus configuration you choose: joined-bus, split-bus, or cluster.

  • A joined-bus configuration is one in which two SCSI buses are joined to form one contiguous bus.
  • A split-bus configuration enables you to connect your storage system to either one server with a multichannel RAID controller, or to two servers. However, if one server fails, information controlled by that server is inaccessible.
  • A cluster configuration offers multiple paths to the system, which provides high data availability.

Joined-Bus Configuration


Split-Bus Configuration (One Server)

Cluster Configuration or Split-Bus Configuration (Two Servers)

SCSI ID Assignments

Configuration Cables
Configuration              Cables   Reserved SCSI IDs
Joined-bus                 1        7 = H, 6 = S
Split-bus—primary EMM      1        7 = H, 6 = S
Split-bus—secondary EMM    1        7 = H, 6 = S
Cluster                    2        15 = S, 7 = H, 6 = H
NOTE: The remaining SCSI IDs are available for hard-drive use as indicated for each configuration. The reserved SCSI IDs are used as follows:
H = used by the host system initiator.
S = used by the storage system SES.

SCSI ID Numbers and Associated Hard Drives 


Split-Bus Module Modes

Joined-bus mode (bus configuration switch: top): LVD termination on the split-bus module is disabled, electrically joining the two SCSI buses to form one contiguous bus. In this mode, neither the split-bus nor the cluster LED indicator on the front of the system is illuminated.

Split-bus mode (bus configuration switch: center): LVD termination on the split-bus module is enabled and the two buses are electrically isolated, resulting in two seven-drive SCSI buses. The split-bus LED indicator on the front of the system is illuminated while the system is in split-bus mode.

Cluster mode (bus configuration switch: bottom): LVD termination is disabled and the buses are electrically joined. The cluster LED indicator on the front of the system is illuminated while the system is in cluster mode.
More information:

How to Install IBM DS4000 (FastT) Storage Manager Client on Windows

The IBM Storage Manager Client allows you to control and manage your DS4800, DS4700, and DS4300 disk systems. This guide describes how to install it on a Windows machine.

How to Install or Upgrade Storage Manager Client

1) Download the software.

For Windows XP, 2000, 2003, or 2008, on a 32-bit platform, click here.
For Windows Vista, 2003, or 2008, on a 64-bit platform, click here.
For Windows Vista 32-bit, click here.

If these links don’t work for you, try navigating IBM’s site: Support & downloads > Fixes, updates, and drivers
Category > (under SYSTEMS) system storage
Product Family > disk systems, Product > DS4800 or whichever DS4000
Select Operating System and Click GO
Click Downloads on the Support and Downloads box.

2) Unzip your download. Keep a copy of these install files around.  The scripts folder will likely come in handy.

3) Run your installation executable file.  In my case it’s SMIA-WS32-

4) Select your language, agree to IBM’s legal stuff.

5) Select your Installation type.  To select the right type, answer this question.  What computer are you installing Storage Manager on?

If it’s the server connecting to your DS4000 disks, select Full Installation.
If it’s just a desktop workstation or a laptop, select Management Station.

6) Select Do not Automatically Start the Monitor

You should designate one machine, preferably a server that’s always running and connected to the disks by fibre channel, to act as the monitor.  For me, that’s our AIX machine.

7) Review the disk space requirements and click Install.

Reset DS4700 password

Log on to controller A with a serial or telnet connection (close the Storage Manager client first)

Log on with the following credentials:

User=  shellUsr
Password= wy3oo&w4

Type in the following commands:

isp clearSYMbolPassword
"Unld ffs:Debug"

The password should now be set to the default (infinity); a reset of the controllers is not necessary.

Tested on firmware