Monday, December 31, 2018

ExploreVM Podcast Short: VMworld 2018 - A Network Expert at a Virtualization Conference

In another ExploreVM Podcast short, I speak with Matt Elliot about his first time at VMworld and what it's like to attend a virtualization conference as a networking expert.

Listen to "VMworld 2018 - A Network Expert at a Virtualization Conference" on Spreaker.


My Guest:
Twitter - Matt Elliot

Links:
vBrownBag VMworld 2018 Tech Talks
VMworld US 2018 Day 1 Keynote
VMworld US 2018 Day 2 Keynote
Troubleshoot and Assess the Health of VMware Environments with Free Tools (VIN3257BU)

Do you have an idea or a topic for the show? Would you like to be a guest on the ExploreVM podcast? Or just keep up the conversation about VMworld 2018? If so, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.



ExploreVM Podcast - HCI Series: HCI Testing with Alan Comstock

On this episode of the podcast, we begin a series on hyperconverged infrastructure. We're starting with a guest who has put a few products through the wringer to decide which HCI vendor worked best for them.


Listen to "HCI Series - HCI Testing with Alan Comstock" on Spreaker.

My Guest:
Alan Comstock

In the future, watch for episodes where I dive into several different HCI technologies.

To continue the conversation on hyperconverged infrastructure, or if you're an HCI vendor and would like to be a guest on the ExploreVM podcast, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.


Thursday, December 27, 2018

ExploreVM Podcast - A VMworld 2018 Conversation with Mike Burkhart

As 2018 comes to an end, I look back at some sessions that haven't been featured on the podcast yet this season. This episode was originally intended to be a video featuring Mike Burkhart live at VMworld 2018. Unfortunately, due to some technical difficulties during the editing process, we can only enjoy it as an audio podcast.


Listen to "A VMworld 2018 Conversation with Mike Burkhart" on Spreaker.



My Guest:
Mike Burkhart

Links:
vBrownBag VMworld 2018 Tech Talks
VMworld US 2018 Day 1 Keynote
VMworld US 2018 Day 2 Keynote
Troubleshoot and Assess the Health of VMware Environments with Free Tools (VIN3257BU)

Do you have an idea or a topic for the show? Would you like to be a guest on the ExploreVM podcast? Or just keep up the conversation about VMworld 2018? If so, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.

Thursday, September 6, 2018

Troubleshoot & Assess the Health of VMware Environments with Free Tools - VMworld 2018

I had the opportunity to present my first VMworld breakout session at this year's conference in Las Vegas. Below are the videos of the demos provided during the session, as well as links to the tools discussed. Please do not hesitate to contact me for further information!

Session Description 


Based on his highly popular VMUG session, Paul Woodward Jr. (@ExploreVM) will review some of the tools he's used in his career to assess the health of and troubleshoot issues with VMware environments. Paul will provide demonstrations and real-world examples of how these tools have helped him solve problems that plague every VMware admin. And the best part? These tools are free!

vCheck


Links

vCheck Website 
vCheck Github

RVTools


Links


RVtools Website 
Yellow Bricks - New Version Available 

ESXTOP



Links

Yellow Bricks 
vFrank - ESXTOP 
Virten - ESXTOP
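One handy trait of esxtop is its batch mode (`esxtop -b`), which writes counters out as CSV for offline analysis. As a hypothetical illustration (the file name and counter label below are placeholders; actual column headers vary by ESXi build and host), a short Python script can pull a specific counter out of a capture:

```python
import csv

def extract_counter(csv_path: str, column_substring: str) -> list[float]:
    """Return every sample of the first column whose header contains
    column_substring, from an esxtop batch-mode (-b) CSV capture."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # esxtop batch headers look like "\\host\Group(Instance)\Counter"
        try:
            idx = next(i for i, name in enumerate(header)
                       if column_substring in name)
        except StopIteration:
            raise ValueError(f"no column matching {column_substring!r}")
        return [float(row[idx]) for row in reader if len(row) > idx and row[idx]]

# Hypothetical usage -- file name and counter label are placeholders:
# samples = extract_counter("esxtop-capture.csv", "% Ready")
# print(f"worst sample: {max(samples)}")
```

This is just a sketch of the post-processing idea; tools like PerfMon or VisualEsxtop can also open these captures directly.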

vRealize Log Insight


Links


VMware - vRealize Log Insight 
vRealize Log Insight - End of Availability

Other Session Links


vDocumentation 
Vester
Veeam One Free Edition
Top 21 Must Have VMware Admin Tools 
101 Free VMware Tools 
Free VMware Tools 

If you have suggestions for tools that should be added to the above list, do not hesitate to contact me via any of the channels provided below.

Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Sunday, August 26, 2018

SuperMicro Build Day Live with vBrownBag

Recently, SuperMicro hosted Alastair Cooke and vBrownBag for another edition of Build Day Live. For those who don't know, "vBrownBag is a community of people who believe in helping other people." They run weekly podcasts and webinars, and also host live Tech Talks at conferences around the country. What makes the Build Day Live event different is that vBrownBag is on site at the vendor, building a production cluster throughout the event, from start to finish.

SuperMicro Does Networking?

Up until Build Day Live, I had no idea that SuperMicro was in the networking space. They offer a wide array of products from 1Gb to 100Gb in a 1U chassis. These switches are bare metal and are compatible with the Cumulus Linux network operating system. SuperMicro also has its own proprietary NOS for the 1Gb switches. Configuration can be completed via CLI or GUI, making management easy for admins of all skill levels.

JBOF Disaggregated Storage

Outside of the server hardware we all know, SuperMicro also has a deep selection of storage hardware. Of the two storage specific segments of SuperMicro Build Day, I was most interested in the JBOF/NVMe storage. During this segment, Alastair spoke to Mike Scriber. To quote Mike, "I design really, really cool storage systems using NVMe. Very high density systems." And when you look at what SuperMicro is up to, he's right.


Utilizing the Intel Ruler NVMe form factor, SuperMicro is quickly closing in on 1 petabyte of storage in a 1U rack chassis. The chassis has slots for 32 "rulers" that connect into 16 lanes of PCIe leading to 4 ports, which allows for 64 GB/s of bandwidth. Another interesting feature of both the ruler form factor and the standard U.2 chassis is the engineering of the backplanes. The backplanes run parallel to the ruler form factor and across the top of the U.2 drives. This design helps keep a densely packed 1U chassis cool with little to no obstruction of the airflow.
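As a quick sanity check on that density figure (assuming, hypothetically, that the ~1 PB target is spread evenly across all 32 ruler slots):

```python
# Back-of-the-envelope density math for the 1U ruler chassis described above.
# Assumption: the ~1 PB figure is divided evenly across the 32 ruler slots.
chassis_capacity_tb = 1000   # ~1 PB, expressed in decimal terabytes
ruler_slots = 32

per_ruler_tb = chassis_capacity_tb / ruler_slots
print(f"Each ruler needs to hold ~{per_ruler_tb:.2f} TB")  # ~31.25 TB
```

In other words, each ruler needs to land in the low-30s of terabytes, which is why the long, thin ruler form factor matters: it packs far more flash per slot than a U.2 drive can.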

There were a lot of aspects of the SuperMicro Build Day Live event worth checking out, far more than can be covered in this post! Check out the links below for all of the videos from the event.

Links:

SuperMicro Build Day - Condensed
vBrownBag YouTube SuperMicro Build Day Live Videos
The CTO Advisor - SuperMicro Build Day Interviews
Anthony Hook's SuperMicro Build Day Blog Post
vBrownBag.com - SuperMicro Build Day Live
vBrownBag on Twitter
SuperMicro's Web Site

If you'd like to continue the conversation about SuperMicro Build Day Live, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Thursday, August 2, 2018

Getting Started with StorMagic SvSAN - A Product Review



Recently, I had the opportunity to try out StorMagic SvSAN in my home lab to see how it stacks up. The following is an introduction to SvSAN and a description of the deployment, the testing process, the results, and my findings.



What is StorMagic SvSAN 6.2?

StorMagic SvSAN provides a hyperconverged solution designed with the remote office/branch office in mind. Two host nodes with onboard storage can be utilized in a shared-storage-style deployment in locations where a traditional three-tier architecture would prove difficult to manage or cost prohibitive. SvSAN is vendor agnostic, so it can be deployed onto existing infrastructure without the need to acquire additional hardware. The two storage nodes can scale out to support up to 64 compute-only nodes. Licensing is straightforward: one perpetual license per pair of clustered storage nodes. Initial pricing is also very accessible, starting at approximately $4,000 for the first 2TB license. Licensing and capacity can scale beyond the initial 2TB.



When asked about their typical customer base, StorMagic provided the following response: "StorMagic SvSAN is designed for large organizations with thousands of sites and companies running small data centers that require a highly available, two-server solution that is simple, cost-effective and flexible. Our typical customers have distributed IT operations in locations like retail stores, branch offices, factories, warehouses and even wind farms and oil rigs. It is also perfect for IoT projects that require a small IT footprint, and the uptime and performance necessary to process large amounts of data at the edge."



Technical Layout of SvSAN

A typical SvSAN deployment consists of the following base components: hypervisor integration, Virtual Storage Appliances (VSAs), and the Neutral Storage Host (NSH). In my lab environment, I used VMware vSphere, but StorMagic also supports Hyper-V. A plugin loaded into the vCenter Server provides the dashboard for management and for deploying the VSAs. Following the wizard, a Virtual Storage Appliance is deployed on each host and the local storage is presented to the VSA. Before creating storage pools, the witness service (the Neutral Storage Host) must be deployed external to the StorMagic cluster. The NSH can be deployed on a server, a Windows PC, or Linux, and it is lightweight enough to run on a Raspberry Pi.



SvSAN 6.2 introduced the ability to encrypt data. A key management server is required for encryption; for this evaluation, I installed Fornetix Key Orchestration as the KMS. Available encryption options include encrypting a new datastore, encrypting an existing datastore, re-keying a datastore, and decrypting a datastore. As I was curious as to what kind of performance hit encryption might have on the environment, I ran my tests against the non-encrypted datastore, then again after encrypting it.



Deployment and Testing

The overall installation process is fairly straightforward. StorMagic provides an Evaluator's Guide which outlines the installation process, and their website has ample documentation for the product. I had to read through the documentation a couple of times to fully understand the nuances of the deployment. I did encounter a few hiccups during deployment: an IP issue, which I resolved, and a timeout on the VSA deployment. I needed to contact support to release the license for the Virtual Storage Appliance whose deployment had timed out, but support was responsive and resolved my issue quickly. The timeout may have been tied to the IP issue, as the VSA deployed successfully on the second attempt.



With the underlying infrastructure in place, a shared datastore was deployed across both host nodes, and the testing could begin. A Windows Server 2012 R2 virtual machine was deployed on the SvSAN datastore to run performance testing against. The provided Evaluation Guide suggests many tests to put the SvSAN environment through its paces. As mentioned previously, I ran the tests against an encrypted datastore, a non-encrypted datastore, and a local datastore.



Following the guidelines set forth by the Evaluation Guide, Iometer was the tool of choice for performance benchmarking. Below is a chart of the metrics used. Outside of the suggested performance testing, I also ran various tests to see what the end-user experience could feel like on an SvSAN-backed server. These tests included an RDP session into the VM, continuous pings to locations internal and external to the network, and running various applications.






The final tests run against the SvSAN cluster covered failure scenarios and how they would impact the virtual machine. Drives were removed, connectivity to the Neutral Storage Host was severed, and iSCSI and cluster networking were disconnected. An interesting aspect of the guide is that it gives you testing options to cause failures that will affect VMs running on the SvSAN datastore, so you can see first-hand how the systems handle the loss of storage.



SvSAN Results & Final Thoughts


Performance testing run against the VM on the SvSAN datastore provided positive results. I was curious as to whether passing through an additional step in the storage path would affect IOPS, but there were only nominal differences between the local storage and the SvSAN datastore. I found the same to be true when comparing an encrypted versus a non-encrypted datastore. IOPS performance held steady across all testing scenarios.



The same was true of the user-experience testing. While running Iometer, Firefox, a popular chat application, and a continuous ping to a website, the following failures were introduced with no impact:



  • hard drives were removed
  • a Virtual Storage Appliance was powered down
  • an ESXi host was shut down
  • connectivity to the Neutral Storage Host was severed
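A lightweight way to reproduce the "continuous ping" portion of this kind of failure testing is a small script that records any gaps in reachability while failures are introduced. This is purely an illustrative sketch (the host, port, interval, and duration are arbitrary choices, and it uses a TCP connect rather than ICMP for portability), not something from StorMagic's guide:

```python
import socket
import time

def check_reachable(host: str, port: int = 443, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(host: str, interval: float = 1.0, duration: float = 60.0) -> list[float]:
    """Poll the host for `duration` seconds and return the length, in
    seconds, of each connectivity outage observed."""
    outages = []
    down_since = None
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        up = check_reachable(host)
        now = time.monotonic()
        if not up and down_since is None:
            down_since = now                  # outage started
        elif up and down_since is not None:
            outages.append(now - down_since)  # outage ended; record its length
            down_since = None
        time.sleep(interval)
    return outages
```

Run against a VM on the datastore while pulling drives or powering off a VSA, the returned list shows how long (if at all) the guest was unreachable during each failover.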



I was impressed with my experience with StorMagic's SvSAN. I went from no prior exposure to running production-ready datastores in approximately an hour, and the solution performed well under duress. Overall, StorMagic SvSAN is an excellent choice for those in need of a solid remote office/branch office solution that is reliable and cost effective.



Lab Technology Specifications:

  • Two Dell R710s
  • 24 GB RAM each
  • 2x X5570 Xeon 2.93 GHz 8M Cache, Turbo, HT, 1333MHz CPU Each
  • One 240 GB SSD drive for caching in each host
    • Presented as a single 240 GB pool from the RAID controller
  • 5 x 600 GB 10k SAS drives configured in RAID 5
    • Presented as two pools: 400 GB & 1.8 TB
  • VMware vCenter Server Appliance 6.5
  • VMware ESXi 6.5 U2 Dell Custom ISO
  • Cisco Meraki MS220 1Gb switching

Further reading on StorMagic:
SvSAN Lets You Go Sans SAN 
 
This blog was originally published at Gestalt IT as a guest blog post. 

If you'd like to continue the conversation about StorMagic SvSAN, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Monday, June 4, 2018

VMware ESXi 5.5 End of Support: What Does That Mean for You?

On September 19th, 2018, VMware vSphere ESXi 5.5 reaches End of General Support. But what exactly does that mean? Well, after the 19th, VMware will no longer provide new security patches, bug fixes, maintenance updates, upgrades, or new hardware support. Also, you may no longer open phone support tickets with Global Support Services (GSS) for severity 1 outage issues.

That's not to say all is lost, but your environment is in a precarious state. VMware will maintain Technical Guidance for vSphere 5.5 until September 19th, 2020. Support requests can only be opened via the self-service portal for severity 2 and lower issues, so if you experience an outage, you are on your own. These tickets apply only to supported configurations as well.

As a VMware Support and Subscription (SnS) customer, you need to upgrade before September 19th to avoid losing the full protection of a supported vSphere platform.

 


"You're right, we need to upgrade... but what does that all entail?" The vSphere platform has a very specific upgrade path you need to follow to ensure no service interruptions during the process. Before we dive into the VMware aspects, there are other variables to consider. Let's start with the host servers: the hardware ESXi calls home. Validate that the hardware is compatible with the version of ESXi you are upgrading to via the VMware Compatibility Guide. Take a look at host BIOS and firmware as well. The host hardware vendor may also provide information about compatible versions of ESXi on their website.

Even if the existing BIOS and firmware are compatible with ESXi 6.0 or higher, now is a great time to upgrade them. This helps keep your hardware secure and at the manufacturer's recommended levels. One of the first things a vendor's tech support engineer will tell you to do when troubleshooting a problem is to upgrade the BIOS. I know this from many calls to *insert hardware vendor here*.

So the host hardware is taken care of; what's next? Think about other systems that interact with vSphere. Is your storage platform compatible? What about your backup solution? Are there any vCenter plugins in use? Many systems could potentially be impacted by a VMware upgrade that's not properly planned.

"Alright, the hardware and peripheral systems are ready to go, time to move on to VMware!" Well, almost. Before diving into upgrading vSphere, take inventory of which vSphere editions and VMware products you have deployed in your environment. Certain versions of vSphere and vCenter Server cannot be upgraded directly to 6.5+, so be sure to check the VMware Product Interoperability Upgrade Matrix first.

vCenter Upgrade Path taken from kb.vmware.com
Calling back to a few paragraphs ago, I pointed out that vSphere has a very specific order of operations for upgrades. Now that you have the full list of VMware products in play in your environment, you can map out your next steps. A quick search of VMware KBs will provide the upgrade sequence for the specific version of ESXi to which you have chosen to upgrade. Here is the KB for ESXi 6.5. You'll notice that services such as vRealize Operations, NSX, and the Platform Services Controller (PSC) must be upgraded before vCenter.

It's worth noting that hosts are upgraded AFTER vCenter. Older versions of ESXi can connect to newer vCenter Servers, but it does not work the other way around. I have encountered many people at VMUG meetings who were unaware of this requirement and upgraded their hosts first.
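One way to reason about this ordering is to treat it as a dependency graph and sort it. The component list below is a simplified, hypothetical sketch loosely based on the KB upgrade sequence, not an exhaustive plan; always pull the real sequence from the KB for your exact versions:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Simplified map: each component lists what the KB sequence places BEFORE it.
# This is an illustrative subset of products, not a complete upgrade plan.
upgrade_deps = {
    "vRealize Operations": set(),
    "NSX": set(),
    "Platform Services Controller": {"vRealize Operations", "NSX"},
    "vCenter Server": {"Platform Services Controller"},
    "ESXi hosts": {"vCenter Server"},
    "VMware Tools / VM hardware": {"ESXi hosts"},
}

order = list(TopologicalSorter(upgrade_deps).static_order())
print(" -> ".join(order))
```

Whatever products are actually in play, the invariant the sort enforces is the one from the paragraphs above: PSC before vCenter, and vCenter before the hosts.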

One more thought on vSphere upgrades: in-place versus a clean install. I'm not going to tell you which way is best; everyone has their own opinions and experiences. What I will say is that, in my experience, in-place upgrades to vSphere 6.5 have been successful. I've encountered some issues going from 5.5 to 6.0, mainly around Site Recovery Manager, but 6.0 to 6.5 appears to be more stable than past in-place upgrades. (Again, this is my experience; yours may vary.)

Finally, be sure to have backups before upgrading your environment and read the release notes for each product before proceeding. Watch out for any known issues that may trip you up during your upgrade or daily operations.

The key to a successful vSphere upgrade is planning. With the clock ticking on ESXi 5.5, you need to start planning as soon as possible to ensure a smooth transition and to stay protected.

Further Reading:
End of General Support for vSphere 5.5 (51491)
VMware Lifecycle Policies
VMware Extended Support


If you'd like to continue the conversation about vSphere upgrades, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.