Script for Collecting Cisco FLOGI and Active Zone Counts

I bet every storage admin keeps track of the FLOGI database and zones in their SAN. But I really don't! Then one day someone asks: "Hey dude, you screwed up the zoning and my servers are down." Wake-up call!!! I log in to every switch, start checking zones, compare against my pretty old database, and realize the data doesn't match. Ka-Boom!!!! End of story!

Well, if you keep track of your FLOGI DB count and zones on a daily basis, in a huge and busy environment with lots of fabrics, then you are Superman. In reality, after running behind changes, incidents, calls and other junk work, the day is well spent and, oh my gosh, you are out of time. You will keep this "collect FLOGI DB and zone count" task for the next day. And rest assured: tomorrow never dies!!!

So writing a PowerShell script to do this task on a daily or weekly (but not monthly) basis will keep your guard up. Hell yeah! Nobody will ever have to come and tell you about alerts on your SAN switches. The day you see an abnormal change in the counts in the report, you can dig into the respective switches, look for changes, and alert the team or fix it yourself. Yeah man, now we are talking.

You can download the script here

[Screenshot: sample output of the script]

Findings from the output of this script

  1. The FLOGI count on switch SW6 has decreased (a path went down)
  2. A change in the number of zones
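
If you just want the idea behind the report rather than the downloadable script, here is a minimal sketch of how such counts could be collected for one switch. The plink path, credentials and IP below are placeholders, and the FLOGI count is a rough line count, not an exact parse.

$plink  = 'C:\Tools\plink.exe'    # hypothetical plink location, adjust to yours
$switch = '10.10.10.10'           # placeholder switch IP

# Rough FLOGI count: "show flogi database" prints one line per login, each carrying an FCID (0x...)
$flogi = & $plink -ssh admin@$switch -P 22 -pw password 'show flogi database'
$flogiCount = ($flogi | Select-String '0x').Count

# Active zone count: "show zoneset active" prints one "zone name" line per zone
$zones = & $plink -ssh admin@$switch -P 22 -pw password 'show zoneset active'
$zoneCount = ($zones | Select-String 'zone name' -SimpleMatch).Count

# Append a dated row so day-to-day changes stand out
"{0},{1},{2},{3}" -f (Get-Date -Format 'yyyy-MM-dd'), $switch, $flogiCount, $zoneCount |
    Out-File 'D:\cisco\daily_counts.csv' -Encoding UTF8 -Append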

To run the script you need to have Plink on your system.
Update the complete path of plink.exe in \res\Commands\plinkpath.txt

For example:

[Screenshot: example plinkpath.txt entry]
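
If the screenshot does not load: the file is expected to contain a single line holding the full path to plink.exe, something like the illustrative path below (adjust it to wherever your copy lives).

C:\Program Files\PuTTY\plink.exe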


Automating SAN admin tasks on Cisco SAN switches

Introduction

SAN switches are the backbone of every fabric. All of the traffic goes through these special devices over optical cables. FCIDs, pwwns, fpwwns, SFPs and zoning are the most common things a storage admin deals with day to day. For any troubleshooting, finding which targets an initiator is logged in to is one of the primary tasks. On a medium-size fabric the number of zones, logins, aliases and pwwns runs into the thousands. What if we had a single table with the consolidated information from all your switches?

Imagine searching for a particular pwwn and having the query return all the zones it participates in, its FCID, the port and switch it connects to, and its aliases. That would definitely save you a lot of time. Keeping track of ever-changing databases is tedious work; the only way to skip it is to write a piece of code that does it all at a click, forever.

  • Tools

For interacting with SAN switches we have PuTTY and Plink (the command-line utility for PuTTY) on Windows, and SSH on Linux. Here the focus is on automating health checks on Cisco switches using Plink and Windows PowerShell 1.0. In general, every Windows machine comes with PowerShell 1.0 or greater. Plink can be downloaded from here. Here is the link for using Plink in batch files and scripts.


Output of Plink when called by PS

Here PowerShell calls the Plink application installed on your computer. Plink then connects over SSH to the IP 10.10.10.10 on port 22 with the username and password supplied, and issues the command "show zoneset active" on the switch. The command syntax is shown first; below it, for reference, is Plink's built-in usage text (what Plink prints when run without arguments).

PS C:\Users\admin> & 'D:\Installations Files\plink.exe' -ssh username@10.10.10.10 -P 22
-pw password show zoneset active

Plink: command-line connection utility
Release 0.66
Usage: plink [options] [user@]host [command]
(“host” can also be a PuTTY saved session name)
Options:
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-v show verbose messages
-load sessname Load settings from saved session
-ssh -telnet -rlogin -raw -serial
force use of a particular protocol
-P port connect to specified port
-l user connect with specified username
-batch disable all interactive prompts
-sercfg configuration-string (e.g. 19200,8,n,1,X)
Specify the serial configuration (serial only)
The following options only apply to SSH connections:
-pw passw login with specified password
-D [listen-IP:]listen-port
Dynamic SOCKS-based port forwarding
-L [listen-IP:]listen-port:host:port
Forward local port to remote address
-R [listen-IP:]listen-port:host:port
Forward remote port to local address
-X -x enable / disable X11 forwarding
-A -a enable / disable agent forwarding
-t -T enable / disable pty allocation
-1 -2 force use of particular protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for user authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-hostkey aa:bb:cc:…
manually specify a host key (may be repeated)
-m file read remote command(s) from file
-s remote command is an SSH subsystem (SSH-2 only)
-N don’t start a shell/command (SSH-2 only)
-nc host:port
open tunnel in place of session (SSH-2 only)
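
One practical note when scripting: the -batch option listed above disables all interactive prompts, so an unattended run fails fast instead of hanging on something like a host-key confirmation. For example (same placeholder path and credentials as above):

& 'D:\Installations Files\plink.exe' -ssh -batch username@10.10.10.10 -P 22 -pw password show zoneset active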



  • Extracting data using PowerShell

PowerShell is used to extract the data coming from Plink. As the output is not in a standard format, some PowerShell code needs to be written to convert it to a CSV. The CSVs from many switches are then consolidated into one and act as a database for your queries.

How to use this piece of code.

  1. Open your PowerShell as Administrator
  2. Run Set-ExecutionPolicy Unrestricted
  3. Click Yes on the prompt
  4. Create a folder named cisco on your D drive. If you do not have a D drive, you may change 'D:\cisco\' to 'C:\cisco\' in the code
  5. Copy the code into a text file and rename it to filename.ps1 (the extension must be .ps1), then move it to the folder created in step 4
  6. Keep your plink.exe at 'D:\Installations Files\plink.exe', or change the path in the code to point to your plink.exe. For example, if your plink.exe is in C:\folder\plink.exe, change the code to 'C:\folder\plink.exe'
  7. Run the .ps1 file using PowerShell
  8. Voila, your CSV will be created
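
Put together, a first run might look roughly like this (the paths and script name follow the steps above; the last line is the message echoed by the script):

PS C:\> Set-ExecutionPolicy Unrestricted
PS C:\> D:\cisco\filename.ps1
ZoneSet Active data processed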

If you have many IPs, use a foreach loop (a sketch follows the code below). Similarly, the FLOGI database, interface details and FCAlias or device-alias details can be turned into query-friendly databases the same way.

$IP = '10.10.10.10'            # switch IP; also used to tag each CSV row
$defpath = 'D:\cisco\'         # working folder created in step 4

# Capture the active zoneset over SSH and keep the raw output
(& 'D:\Installations Files\plink.exe' -ssh admin@$IP -P 22 -pw password show zoneset active) | Out-File -Encoding UTF8 ("$defpath" + "cisco_zsactive.txt") #-Append

# CSV header (Switch_IP added so the header lines up with the IP prepended to every row below)
"Switch_IP,Zonename,VSAN_ID,Member1,Member2,Member3,Member4,Member5,Member6,Member7,Member8,Member9,Member10" | Out-File ("$defpath" + "cisco_zsactive.csv") -Encoding UTF8

# Keep only the "zone name" and pwwn lines, rejoined into one string
$variable = ((gc ("$defpath" + "cisco_zsactive.txt")) | Select-String "zone name",pwwn -SimpleMatch) -join "`r`n"

# ($variable + "`r`n }") appends a filler token so the last zone block also gets a closing marker
$ZoneContainer = ($variable + "`r`n }") | ForEach-Object {
$_ -replace 'Zone name', "}`r`nZone name"
}

# Collapse each zone block (from "Zone name" to the " }" marker) onto a single line
($ZoneContainer | Select-String '(?s)(?<=Z)(.+?)(?=\ })' -AllMatches).Matches | % {
$ZoneContainer = $ZoneContainer.Replace($_.Value, (($_.Value -split "`r`n" | % { $_.Trim() }) -join ' '))
}
$ZoneContainer | Out-File ("$defpath" + "cisco_temp_zsactive.txt") -Encoding UTF8

$Zone_csv_unformatted = gc ("$defpath" + "cisco_temp_zsactive.txt")

$FCIDpattern = '(?s)(?<=0x)(.+?)(?= )'

# Strip the keywords and FCIDs, then turn the remaining whitespace into commas
$Zone_csv_unformatted | ForEach-Object {
$_ -replace ' }', '' `
 -replace 'Zone name ', '' `
 -replace 'vsan ', '' `
 -replace 'pwwn ', '' `
 -replace '\*', '' `
 -replace ' FCID ', '' `
 -replace "0x+$FCIDpattern", ''
} | ForEach-Object {
$csvmaker = ("$IP" + ' ' + $_ -replace "\s{1,}", ",")
Out-File -InputObject $csvmaker -FilePath ("$defpath" + "cisco_zsactive.csv") -Encoding UTF8 -Append
}

echo "ZoneSet Active data processed"
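
And if you have several switches, one way to extend this (a sketch assuming the same credentials everywhere; the IPs are placeholders) is to wrap the capture in a foreach loop and write one raw file per switch, then run the parsing above against each file:

$defpath  = 'D:\cisco\'
$switches = '10.10.10.10', '10.10.10.11', '10.10.10.12'   # hypothetical switch IPs

foreach ($IP in $switches) {
    # One raw capture per switch; the parsing steps above can then be reused on each file
    (& 'D:\Installations Files\plink.exe' -ssh admin@$IP -P 22 -pw password show zoneset active) |
        Out-File -Encoding UTF8 ("$defpath" + "cisco_zsactive_$IP.txt")
}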

For extracting the information from the CSV, use

$pwwnsearch = Read-Host " Enter the pwwn"

gc ("$defpath" + "cisco_zsactive.csv") | Select-String -SimpleMatch $pwwnsearch | Out-File ("$defpath" + 'pwwnout.csv') -Encoding UTF8 -Width 3000
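
If you end up with several such CSVs (zoneset, FLOGI, device-alias), the same search can be looped across them. A small sketch, assuming $defpath from earlier; the file names here are only illustrative:

$pwwnsearch = Read-Host " Enter the pwwn"
'cisco_zsactive.csv', 'cisco_flogi.csv', 'cisco_devicealias.csv' | ForEach-Object {
    $file = "$defpath" + $_
    if (Test-Path $file) {
        # Append every matching row from each consolidated CSV into one result file
        gc $file | Select-String -SimpleMatch $pwwnsearch |
            Out-File ("$defpath" + 'pwwnout.csv') -Encoding UTF8 -Width 3000 -Append
    }
}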


StorageWinds

Ever wondered what happens when your data center grows larger and larger: more switches, more cabling, bigger storage arrays, thousands of servers? Managing the arrays becomes a challenging job, and keeping records of what is connected to what, and where, becomes unimaginable. Keeping things on track is always a challenge. From a storage admin's perspective, keeping logs, zone databases and connectivity diagrams helps to a great extent. But when things change at a much faster rate, keeping those journals updated requires a lot of manual intervention, and the time needed to handle a change or incident grows. We admins are used to it and somehow run the show. But to stand out and bring the spotlight onto you, I guess we need more firepower. Things need to be automated. But how? With what tools? Is it really worth it?

Where there is a will, there is always a way. Create a flowchart. Break things down into smaller chunks. Write scripts and automate! This blog mostly contains the work and ideas I have used to break down my challenges as a storage admin. Most of it covers operations from an automation and scripting perspective, assuming the reader knows the architecture and terminology.

Let's start by creating a database for your Cisco fabric. Brocade folks, please excuse me; I will be posting one for you real soon.