It has been quite a while since my last post.  I have been busy with work (almost 8 months at my new job already?!) and at home.

iOS 9 is causing quite a stir with its ability to block ads.  Consumers appear to love it.  Content providers are concerned about it.  Ad peddlers hate it.  It appears to be a never-ending game of cat and mouse.  Pop-ups, pop-unders, cookie tracking, etc.  One side devises a way to deliver ads and monetize; the other finds a way to block them and improve its experience.

I’m not dumb enough to think this can’t affect me, so I do care.  My experience here on wordpress.com, along with a lot of the online services I get to enjoy for free, has to be paid for by somebody.  I’m not part of the kumbaya camp where everything has to be free, but I think there are ways to make this a win for all.

To the ad companies and the commercial companies who use them to sell your widgets: How about not making ads so ‘in your face’ and obnoxious?  How about not tracking everything that I do just to make ads seem more relevant to me?  You’re creeping out the public and forcing people to worry about their privacy in the process.  Instead of drawing folks to you because you have something to offer, you’re driving them away because you’re starting to look like the shady salesman that nobody wants to deal with.

To the sites that make their living off of ads: How about promising your visitors not to bombard them with ads all over the place?  We get the fact that you’re delivering something for free.  Don’t make yourself seem so desperate by saddling me with ads inline, ads on the side, ads that pop up, ads that pop under, ads that play videos automatically, etc.  While out making a living, be respectful to the folks who visit you.  Without your viewers (customers), you have no business to run.

To the consumers: We all enjoy having information at our fingertips.  With the advent of the mobile web, we can get to just about anything we want to see in an instant.  If ads aren’t obnoxious and don’t go all Big Brother on you, then allow them.  Understand that some folks out there are making a living by providing content for you to consume and enjoy.  Without folks doing this, free information would disappear and we’d all be forced to pay to see content that we enjoy today.

This issue can be fixed IF folks are willing to come to a happy medium over it.

Hey, it has been a while since I’ve posted.  I’ve been heads down keeping stuff running at work, but it’s time to post again.

Like most of us, I do concalls.  I mean a LOT of concalls.  Scheduled meeting concalls.  Impromptu concalls.  Concalls to plan for other concalls.  You know the drill.  It can be a royal pain to dial or autodial the main number and then have to remember the 5-10 digit conference ID afterwards.  It is hard sitting there staring at the phone screen, trying to remember and enter the conference ID while the automated attendant yammers on in the background!

Commonly called numbers are easy.  You either do it on autopilot or you enter the concall number as an extra number for a contact in the Contacts app.  I try to enter in the whole string so I don’t have to fumble around stabbing at the screen when I’m doing something else (like driving – using hands free, of course!).

One thing that has always eluded me was getting that one-touch autodialing to work with meeting invites in the Calendar app.  Most of the time, Apple’s data detectors would detect the wrong thing.  Commonly, they would only detect the main conference number, not the pauses and the conference ID (plus all of the other prompts) that follow.  I couldn’t put the autodial number in the Location section of the meeting and have it recognized.  I couldn’t put it in the Notes section and have it recognized.  I was relegated to putting the whole string into the Notes section and copying/pasting it into the Phone app.  Not optimal.

This highlights the entire concall string to automatically dial the concall number.

This week I got an invite for a customer troubleshooting meeting from one of my Service Delivery Executives.  I don’t have him in my Contacts app, so I didn’t already have his concall number.  He listed it in his meeting invite along with the answer I’d been searching for for years.  Data detectors will only recognize a long concall number if you don’t put in any parentheses, dashes, or periods.  You can put in commas for pauses and semicolons for prompts to continue, but it will only accept up to the first pound sign.  That last bit is the key.  Something like 8885551212,,123456,#,# would not work, but 8885551212,,123456# will!  In my case, I can automatically dial that and omit the last pause and pound sign.  It will just take an extra 10 seconds to get into my call.  Considering I don’t have to fumble with any additional buttons, I think I can wait the little extra time.
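If you build a lot of invites, you can normalize an existing messy dial string into the data-detector-friendly form with a quick script.  This is just a sketch of the rule above (strip parentheses, dashes, periods, and spaces; keep everything up to the first pound sign); the function name and the sample number are made up:

```shell
#!/bin/sh
# normalize_dialstring: reduce a messy concall string to the form iOS
# data detectors will recognize: digits and commas, ending at the first '#'.
normalize_dialstring() {
    # strip parentheses, periods, spaces, and dashes
    # (the dash goes last so tr doesn't treat it as a range)
    cleaned=$(printf '%s' "$1" | tr -d '(). -')
    # keep everything up to the first pound sign, then re-add the pound
    printf '%s#\n' "${cleaned%%#*}"
}

normalize_dialstring '(888) 555-1212,,123456,#,#'
# prints: 8885551212,,123456,#
```

Paste the result into the extra-number field of a contact or into the invite body and data detectors should pick up the whole string.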

I will be using it with all of my meeting invites going forward.  Now that you know, hopefully you will too.  Want to know the kicker?  I just tested it on an old iPhone 3G running iOS 4.2.1 that I had lying around, and it worked there as well.  The capability has been there all along; I just hadn’t cracked the code.  Judging by searches on the Internet, others haven’t either.  A tip of the hat to David Taylor for showing me the way!

PS: I’m looking to find something similar for my colleagues who run Android.  It is my understanding that the string that works for Apple’s data detectors will open Google Maps on Android instead.  I will continue to research it and will update this post if I find an answer.  If you know, please feel free to leave me a comment about it!

I am giving up my title and responsibilities as a Senior Storage Engineer at SunGard Availability.

I’m not leaving the company and I’m not leaving Enterprise Cloud Services (aka Cloud Ops).  Matter of fact, I’ll still sit at the same desk, with the same email and phone number.  I’ll still report to the same management chain that I have been reporting to for the past year.

So what is different?  I’m getting a new title and with that, a promotion with new responsibilities.  I will become a Cloud Engineer.  I will no longer be responsible for just storage.  I will also be responsible for network, servers, virtualization and security.  I’ve been doing this already for about 8 months (cloud jack-of-all-trades).  It is good to see SunGard decide that what I’m doing is the right thing to do and encourage me to continue.

A while back, Scott Lowe blogged about the evolution of infrastructure engineers.  I agree with him wholeheartedly.  With the advancement of cloud, everything is converging.  It will be very hard to keep your focus solely upon a single skill.  Take FCoE for example.  Who is responsible for it?  The network engineer?  The SAN engineer?  It will get real ‘fun’ trying to troubleshoot an FCoE problem when there are multiple groups that could be responsible for a gray area.  As configurations get more complex, so will the skills necessary to operate and troubleshoot them.

SunGard’s Cloud Operations group is a separate group made up of individuals with different skill sets.  We are operating like a startup.  We still interact with other groups within SunGard but essentially we own the stack from the distribution switches on down.  For SunGard’s Cloud Engineer, not only do you get the opportunity to become multi-disciplined but it is a requirement of the job.  This is a rare opportunity within any large company where folks have a tendency to get put into a silo.

My manager hears about it from me when we do something that I think could have been done better.  He also hears about it when I think we’ve done something right.  I think creating the position of Cloud Engineer to handle the operational care and feeding of Enterprise Cloud was a good idea.  I don’t believe this position exists anywhere else….yet.

PS: SunGard Enterprise Cloud is hiring!  Different positions will be popping up from time to time as we expand and improve our product.  For the latest open positions, peep our jobs site.

This one might seem rudimentary for those of you who have a lot of PowerCLI fu, but it saved this n00b a few hours of time.  I found a condition where our automation software didn’t add a static route on a few Windows VMs.  The fix entails RDP’ing into each one and adding the route to our backup servers.  A not-so-great way of killing an afternoon if you have more than 5 VMs that need this, but necessary, because having backups is a GoodThing(tm).

Enter PowerCLI and Invoke-VMScript.  You supply Invoke-VMScript with the ESX host username/password, the guest VM username/password (and optionally the vCenter Server, or log in to it beforehand with Connect-VIServer), the VM name, and the command you want to run.  You can chain multiple commands by inserting a double ampersand between each command.  On Linux, && works out of the box because the ScriptType is bash by default.  For Windows, you have to specify a ScriptType of bat for batch files, because the default is PowerShell and && will throw an error there.  For example:

Invoke-VMScript -HostUser root -HostPassword qwerty12345 -GuestUser Administrator -GuestPassword yuiop67890 -ScriptType "bat" -ScriptText "route -p add 10.0.0.0 mask 255.255.255.0 10.10.0.1 && route print && ping backup01" -VM dbserver01

In this example, I’m adding a persistent static route, printing the route table to verify it was added and then pinging the backup server to make sure all is well.  One nice side note is that I don’t have to look up the management interface of each VM I want to run this on.  I also don’t have to look up which ESX server my VM is running on.  I’ve listed the VM name as the last parameter so I can easily up arrow and replace the VM name.  At some point, I’ll figure out how to loop through to make this even more hands off.
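Until I work out the full PowerCLI loop, even a dumb generator gets most of the way there: feed it a list of VM names and paste its output into an already-connected PowerCLI session.  This is only a sketch; the VM list file, the masked credentials, and the backup route are placeholders for your own values:

```shell
#!/bin/sh
# gen_route_cmds: print one Invoke-VMScript line per VM name in a file,
# ready to paste into a PowerCLI session. Credentials are placeholders.
gen_route_cmds() {
    script='route -p add 10.0.0.0 mask 255.255.255.0 10.10.0.1 && route print && ping backup01'
    while IFS= read -r vm; do
        [ -n "$vm" ] || continue    # skip blank lines
        printf 'Invoke-VMScript -HostUser root -HostPassword *** -GuestUser Administrator -GuestPassword *** -ScriptType "bat" -ScriptText "%s" -VM %s\n' "$script" "$vm"
    done < "$1"
}

# usage: list VM names one per line, then generate the commands
printf '%s\n' dbserver01 dbserver02 > /tmp/vmlist.txt
gen_route_cmds /tmp/vmlist.txt
```

It isn’t true automation, but it removes the per-VM typing and makes the change easy to review before it runs.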

Doing this has freed me up to go back to other tasks at hand like using backup software documentation to cure insomnia!

A Twitter conversation I had a while ago discussed the merits of learning the NX-OS CLI.  A friend who is a consultant was talking about installing Fabric Manager/Device Manager on his laptop and having to use it with the different versions of switch code his clients were using.  He would not need to use Fabric Manager/Device Manager if he got more comfortable with the CLI.

I pretty much abandoned Fabric Manager/Device Manager a while ago.  For the longest time, I relied upon Fabric Manager’s ease of use.  Fire up the GUI, log in, pop on an alias and zone, commit, done.  I hadn’t really used the CLI since learning about it in class.  I quickly found that Fabric Manager was nice when you had to add a small set of aliases and zones.  It wasn’t so cute when you had to add them en masse, or in a lab environment where you were constantly creating and removing configs.  Recently, we added two new arrays to two 24-node ESX clusters.  4 zones per ESX server per array.  384 zones total.  Had I used Fabric Manager, I’d still be clicking away and adding zones.  When you add that much volume, you are bound to make config errors.  There simply had to be a better way.

That ‘better way’ became mkzone.  You simply fill out a spreadsheet with some basic switch information, device names and pwwn’s, and what you want zoned.  Export to CSV and feed it into a script that spits out the config.  It even gives you a back out plan.  Said configs can then be copied and pasted into Change Management procedures (try that with Fabric Manager).  More importantly, when it is ‘go’ time, you can copy and paste into an ssh session.  A win all around.

As a commitment to helping others, I am releasing mkzone and its config spreadsheet to the public under the beerware license (you grab it and use it; if you like it and we meet someday, feel free to buy me a beer).  That being said, please note that it also comes with no warranty or liability of any kind.  Please use at your own risk!  I ask that when you use it, you double-check the config BEFORE you push it into production.  There will be instances where you will want to remove some items before you copy/paste, such as already-existing aliases.

Some limitations of the script:

  • It configures one fabric.  You have to do two configs and two separate runs to cover redundant fabrics.
  • It currently works only for Cisco SAN switches running NX-OS.  Someday I hope to expand it to include Brocades.
  • It is for one VSAN.  If you use multiple VSANs, you will need to run it multiple times with a separate config per VSAN.
  • It is a Bourne shell script.  It will run on Unix/Linux and their variants (like OS X).  If you are using it on a Windows machine, you will need to install something that provides a Bourne shell.  I recommend Cygwin.
  • It would be nice someday to run this as a CGI script, hosted somewhere where folks can upload their configs and get the results.  If you are interested in hosting it, please contact me.

Here is the explanation for the items in the config spreadsheet:

  • SwitchName and VSAN – Pretty self-explanatory.
  • ZonesetName – The name of the zone set you want to create from the info in the spreadsheet.  I like to use dates in the zone set name because then I know exactly when it was created.  Also, if I have to back out of my change, I can just reactivate the previous zone set.
  • ZonesetClone – If you already have a zone set in use and you are adding zones to that zone set, list that zone set name here.  The script will write out a copy (clone) of that zone set, add your new zones to it, and then activate the new zone set.  If you are setting up a zone set from scratch (like in a lab), you can just leave NOT_IN_USE on this line.
  • ZoneBy – You have the option of zoning by pwwn or device-alias.  Either will work.  Zoning by device-alias is easier to double-check before you commit, although the switch will still internally zone by pwwn.

The next bit needs its own explanation.  This is where you list the devices, their pwwn’s and group them according to how you want them zoned.  In the Type column, you list the kind of device it is: array port, server HBA or another switch port.  Switch port examples include the port on the other end of an ISL or NPV connection (like from a UCS Fabric Interconnect).  You won’t zone switch-to-switch ports but it is nice to have them defined in the device alias database.

The script will match up all of the server ports with their corresponding array ports.  All servers listed as SERVER1 will be zoned to array ports listed as ARRAY1.  SERVER2 servers will be zoned to ARRAY2 ports.  And so on.  The script creates all zones as single-initiator zones.  It will loop through all of the servers marked SERVER1 and zone them to ARRAY1.  In my spreadsheet template, server01_hba0 will be zoned to both clariion01_spa0 and clariion01_spb1, as two separate zones.  server02_hba0 will get zoned to clariion01_spb0 and clariion01_spa1.*  If you need to zone any SERVER1 server to a second array, you would also add that second array’s pwwn’s as ARRAY1.  Every server that needs to reach ARRAY1 ports is listed as SERVER1, even when there is more than one server.
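To make the pairing concrete, here is a stripped-down sketch of that matching logic in Bourne shell.  It is not mkzone itself: the two-column CSV (group,alias) and the zone naming below are simplified stand-ins for the real spreadsheet format.

```shell
#!/bin/sh
# emit_zones: pair every SERVERn alias with every ARRAYn alias and print
# single-initiator zone commands. Args: CSV file (group,alias) and VSAN id.
emit_zones() {
    csv=$1; vsan=$2
    grep '^SERVER' "$csv" | while IFS=, read -r sgrp salias; do
        agrp="ARRAY${sgrp#SERVER}"          # SERVER1 -> ARRAY1, etc.
        grep "^${agrp}," "$csv" | while IFS=, read -r grp aalias; do
            printf 'zone name %s_%s vsan %s\n' "$salias" "$aalias" "$vsan"
            printf '  member device-alias %s\n' "$salias"
            printf '  member device-alias %s\n' "$aalias"
        done
    done
}

# usage: simplified config matching the spreadsheet example above
cat > /tmp/zonecfg.csv <<'EOF'
SERVER1,server01_hba0
ARRAY1,clariion01_spa0
ARRAY1,clariion01_spb1
EOF
emit_zones /tmp/zonecfg.csv 10
```

Each SERVER1 HBA ends up in its own zone with each ARRAY1 port, which is exactly the single-initiator pattern described above.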

Once you’ve completed your spreadsheet, export it as a CSV file.  You will use that CSV file as a config file to the script, mkzone.sh.  The syntax of the script is:

./mkzone.sh ./<configfile.CSV>

e.g.:

./mkzone.sh ./myconfig.csv

The script will send the output right to the screen.  You can redirect the output to a file if you’d like:

./mkzone.sh ./myconfig.csv > zoneoutput.txt


Here is a link to the spreadsheet template and mkzone script.  Feedback is welcomed!


* Some of you may notice that my example zones a server to both SPs of a Clariion.  This follows the EMC best practice of crisscross zoning (aka mesh zoning).  The B-side fabric would zone the servers to clariion01_spa2, clariion01_spa3, clariion01_spb2 and clariion01_spb3.

How many of you out there know the time settings on your Cisco network and SAN switches?  Are they keeping the correct time?  Is NTP set up properly?  Are your timezone settings correct?  You might ask why one should care.  Proper timestamps can help you correlate events as they happen.  Having to play around with offsets because the time isn’t right can be a pain.  “That switch is 3 hours behind, so when it said it lost a power supply at 2am, it was really 11pm….yesterday.”

Figure out whether you want your date/timestamps to be synced to a single timezone (say, UTC) or to local time.  Local time can get interesting if you have devices spread out across multiple timezones.  Then set up NTP to sync time.  The last bit is to make sure you are displaying the proper time.  This goes for SAN switches as well as network switches (both running NX-OS):

clock timezone EST -5 0
clock summer-time EDT 2 Sunday March 02:00 1 Sunday November 02:00 60

The first line sets your timezone to EST, which is 5 hours and 0 minutes behind UTC.  Set your timezone to Standard Time and let the second line adjust it for Daylight Savings.  If your clock timezone line uses Daylight Savings Time instead, the clock won’t change back to Standard Time once Daylight Savings Time is over.

The second line is what actually sets up Daylight Savings Time.  Thanks to the US government fooling around with the timing of Daylight Savings Time, we need to change it from what used to be the default accepted norms.  Most vendors decided to provide a method of customizing it instead of hard coding it, in case the government decides to change it again.  The second line basically says: “Change to Daylight Savings Time on the second Sunday of March at 2am, change back on the first Sunday of November at 2am, and shift the clock by 60 minutes.”
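For the NTP piece mentioned above, the NX-OS side is only a couple of lines.  The server addresses below are placeholders; point them at your own NTP sources, and use the show command to verify the switch is actually syncing:

```
ntp server 10.0.0.10 prefer
ntp server 10.0.0.11
show ntp peer-status
```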

By making time consistent across your switches, you will be able to rely on the actual timestamps when you need them.  That is one less headache when trying to troubleshoot things.