Monthly Archives: June 2011

July 11, my start date at EMC

First of all, it needs to be said that living in Raleigh, NC is great if you work in IT. Within a 30-minute commute I have:

  • The company formerly known as Data General (developer of what is known as the EMC CLARiiON today). Head over to Apex (SW Raleigh) and you will find one of three EMC manufacturing plants in the world (ask your EMC salesperson to get you an invite to “CLARiiON Days” if you want to see it)
  • A very nice NetApp campus that employs more and more people every day
  • Down the street from NetApp is a huge campus that houses an up-and-coming company called Cisco Systems
  • Someone called Red Hat; they employ quite a few people around here.
  • IBM, and bits of IBM that are now bits of Lenovo
  • GFI
  • National IT consulting firms such as Presidio, NWN, and, at a reasonable distance from Raleigh, Varrow
  • Tech-heavy companies such as Cree, Allscripts, Tekelec, RTI, SAS, Siemens Medical, Nortel, and many others that I am excluding only for the sake of space

Getting back on point, I recently accepted an offer to be a Principal Engineer at EMC working on midrange storage solutions (technically my title is Principal Software Engineer, but I think you would agree that is misleading). I’ll be working on integration and testing of current/future virtualization solutions (VDI included) with current/future EMC midrange storage solutions (CLARiiON, VNX, VNXe). I wouldn’t be surprised if I test Microsoft platforms (SharePoint, SQL, and Exchange) as well, given my background working with those products. I’m not sure what I could say that expresses how excited I am about this job. This is about as pure an engineering role as I could ever hope for, and I can only imagine the things I will get to work with and the documentation I will assist in creating. This job will absolutely require everything I have and then some, and I absolutely cannot wait. Credit must be given where credit is due.

  • Mark McCullough (formerly of FHI) for moving me to his global infrastructure team and giving me the opportunity to work on an IT infrastructure that was about as bleeding edge as it can be (example: a 99% virtualized and coloed datacenter in late 2007 that supported multiple sites throughout Africa and Asia). Mark is one of those bosses who doesn’t have to tell you to do good work; you do it because you don’t want to disappoint him and make the IT infrastructure team look bad.
  • Pat Oliva (formerly of FHI) for being one of two people who combined to give me the most difficult technical interview I have ever seen; it has been my frame of reference when preparing for every interview since then. Pat also gets credit for teaching me most of what I know about virtualized datacenters and highly available Microsoft platforms. Pat and Mark (and the next person on my list) worked hard to get FHI all the wonderful tech it has, and truth be told it is the best setup I have ever seen. Pat also has a unique trick I dubbed the Jedi mind trick; we used it many times for things such as a request to purchase an EMC Avamar platform for each of the two primary FHI datacenters. Very cool guy to be around.
  • Ken Rudd (formerly of FHI) for being the other half of my tech interview team and yet another key component of the development of the FHI IT infrastructure as it stands today. Like Pat he is a cool guy to work with and I’d work with them both again in a heartbeat.
  • Jon Rudol (FHI) for showing me what was possible with regard to a global WAN. VoIP over 1000 msec latency links (including VSAT)? No problem. 2000 people coming into our Atlanta colo for Exchange, SharePoint, and so on? Piece of cake. Global Cisco VoIP deployment? Easy, but I’ll need a helper; those CMEs don’t configure themselves. Mix together good Cisco gear and Riverbed Steelhead WAN optimization appliances and suddenly anything is possible.

I truly feel that if you dropped the five of us into any company of (say) less than 20,000 people, we could make anything happen. We’ve all pulled off some major miracles over the years, and when I describe them to people they come away truly impressed. I worked a lot of late nights and long weekends, but I can honestly say that every minute was absolutely worth it and it made me what I am today.

Credit must also be given to Ron Unger and Clay Harris at WorkSmart, a local IT consulting firm that serves companies throughout North Carolina as well as a few other states. The nearly 3 years I spent at WorkSmart exposed me to project after project, and while most of my clients were smaller businesses, I gained experience that enabled me to succeed at FHI. I recommend working for a company like that if you want to gain experience quickly; where else would you be able to do a major implementation or upgrade every couple of weeks and manage the entire project yourself? Learn the interfaces of the products you are deploying, and when it comes time to work with highly available platforms (such as clustered Exchange, SQL, etc.) you will be able to adapt quickly. Consulting isn’t for everyone; if you don’t like being in front of customers (and the pressure that comes along with it) I would recommend another line of (IT) work.

If it weren’t for WorkSmart and FHI I know that I would have had far less to offer EMC and would probably not have even been interviewed; I truly believe that.   – Jason

The VMware lab is (slowly) coming to life

“Slowly” means that Murphy’s Law reared its ugly head; one of the Intel system boards would not POST, so I am awaiting an RMA replacement. Thankfully the other board was fine, and one of the new systems is up and running.

Things I have learned so far:

1) Xeon E5310 and Core 2 Quad Q6600 processors support enough of the same features to allow vMotion between them. I am using my “old” Q6600-based host as the second node in my cluster until the new system board comes in. A quick way to eyeball the feature overlap is sketched below.
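As a rough sanity check (vMotion compatibility is formally governed by the feature mask the hypervisor exposes, via EVC or CPU masking, not the raw flag list, so treat this as a first pass rather than VMware’s official rules), you can diff the CPU feature flags each processor reports. A minimal sketch in Python, assuming you have saved a copy of /proc/cpuinfo from a Linux boot on each box; the filenames are placeholders:

```python
# Compare CPU feature flags from two saved /proc/cpuinfo dumps.
# Rough first-pass check only: vMotion compatibility depends on the
# feature set the hypervisor exposes (EVC/CPU masks), not raw flags.

def cpu_flags(cpuinfo_text):
    """Return the set of feature flags from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Placeholder filenames: saved /proc/cpuinfo from each host.
with open("cpuinfo_e5310.txt") as a, open("cpuinfo_q6600.txt") as b:
    e5310, q6600 = cpu_flags(a.read()), cpu_flags(b.read())

print("Only on E5310:", sorted(e5310 - q6600))
print("Only on Q6600:", sorted(q6600 - e5310))
print("Shared flags:", len(e5310 & q6600))
```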

2) Xeon E5310 processors and the RAM they require generate a lot of heat; I needed to purchase some additional fans to keep the temperature in check. None of this is surprising, but having never run this equipment outside of a prebuilt server (Dell in my case), the amount of heat generated is impressive. In my case an extra $30 worth of fans was a small price to pay considering I already had the processors and RAM needed to build the host.

3) The crawl space of a house in North Carolina (if properly sealed) appears to maintain a very suitable temperature for storing lab servers. The only downside is that I don’t have remote power management, so when I want to bring up my extra nodes to run labs (only one runs at all times) I have to trudge down to my makeshift datacenter. Perhaps now is the time to set up Wake-on-LAN (a minimal sketch follows); I’ll make that a future project.
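Wake-on-LAN only requires broadcasting a “magic packet” to the sleeping host’s NIC: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, typically sent over UDP port 9. A minimal sketch in Python; the MAC address is a placeholder for the lab host’s NIC:

```python
import socket

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Broadcast a Wake-on-LAN magic packet to the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16   # 6x 0xFF, then MAC x16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MAC for the ESXi host's management NIC.
wake_on_lan("00:11:22:33:44:55")
```

The NIC’s WoL support usually has to be enabled in the BIOS and on the adapter itself before the packet will do anything.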

I already had a suitable vCenter instance running so all that was required to bring the cluster up was to:

1) Transition all my existing VMs from my legacy Q6600 host’s local storage to the ix2 datastore and deregister them from vCenter.

2) Rebuild all hosts to vSphere ESXi 4.1 U1.

3) Register the new hosts with vCenter and register all the (now orphaned) VMs; a scripted sketch of the VM registration follows.
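Step 3 can also be scripted. Below is a minimal sketch using pyVmomi, VMware’s open-source Python SDK (which arrived well after this lab was built); the vCenter hostname, credentials, and datastore path are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Connect to vCenter; hostname and credentials are placeholders, and the
# unverified SSL context is only for a lab with a self-signed certificate.
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Grab the first datacenter and the cluster's root resource pool.
dc = content.rootFolder.childEntity[0]
cluster = dc.hostFolder.childEntity[0]
pool = cluster.resourcePool

# Register a VM straight from its .vmx file on the shared datastore;
# the path is a placeholder for one of the orphaned VMs.
task = dc.vmFolder.RegisterVM_Task(
    path="[ix2-datastore] dc01/dc01.vmx",
    asTemplate=False,
    pool=pool)

Disconnect(si)
```

Repeat the RegisterVM_Task call for each .vmx file left behind on the datastore.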

Initial impressions of the environment, in particular the Iomega ix2, are very positive. I keep the following servers running now and performance is more than adequate.

Windows 2008 R2 (5 servers in total):

  • Domain Controller
  • Virtual Center
  • VMware View Controller
  • VMware View Security Server
  • VMware View Transfer Server

Other VMs:

  • Windows 7 x64 – VMware View Desktop (I haven’t set up VMware View Composer yet, so this is a dedicated VM)
  • EMC VNX VSA (VNX simulator that can be used for VMware SRM testing or for general EMC Unisphere/NAS labs)
  • VMware vCMA appliance (vCenter management from mobile browsers or the Apple iPad vSphere client)
  • Astaro Security Gateway appliance (I use this free appliance as a VPN endpoint)

I’ve only just begun to dig into VMware View, but I must admit it is a very cool product. Unfortunately, outside of the work of Mike Laverick or VMware itself, there isn’t a lot of information out there about it, so to date most of what I am doing is trial and error. My next step is to get View Composer up and running so I can use linked clones rather than the dedicated clones I am using today. Once my new Intel board comes in I can complete the lab setup and get the VMware SRM labs up and running. I am familiar enough with the core VMware vSphere features, but unfortunately none of the employers I have worked for has had to leverage FT, let alone SRM, which is why I decided to put this lab together.

I must admit that I am having a lot of fun building the new lab and the VMware View environment. This isn’t a surprise, mind you, but a reminder that no matter what I do I will always be an engineer at heart. To me, architecting and building things is always going to be more exciting than managing them. Once I complete the lab build I will post some pictures of my setup and go into some additional detail about what I am doing with the lab. – Jason