Wednesday, February 20, 2019

Storage Field Day 18, Here I Come!

For the second time, I'm lucky enough to be selected as a Tech Field Day delegate! This time, it's for Storage Field Day 18 in Silicon Valley February 27th through March 1st.

For those of you unfamiliar with Tech Field Day, it's an event run by Gestalt IT where independent tech influencers are brought together with vendors for deep technical conversations reviewing products and solutions. Each event has a particular technology focus, such as networking, cloud, wireless, or security.

Storage Field Day 18 will feature 7* different storage vendors. There's an asterisk because the anticipated number of vendors can grow (wink wink). Expect to see some traditional players like NetApp and Western Digital, and a few newcomers to the scene such as Cohesity, WekaIO, and StorPool. There's even a secret vendor on the list that we delegates aren't privy to yet.

SFD18 can be viewed live online, with sessions available a short while later on the Tech Field Day YouTube channel. You can also follow along on Twitter with the hashtag #SFD18.

For more info about Storage Field Day, including a full list of delegates, check out the Tech Field Day website.

Monday, December 31, 2018

ExploreVM Podcast Short: VMworld 2018 - A Network Expert at a Virtualization Conference

In another ExploreVM Podcast short, I speak with Matt Elliot about his first time at VMworld, and what it's like to attend a virtualization conference as a networking expert.

Listen to "VMworld 2018 - A Network Expert at a Virtualization Conference" on Spreaker.

My Guest:
Twitter - Matt Elliot

vBrownBag VMworld 2018 Tech Talks
VMworld US 2018 Day 1 Keynote
VMworld US 2018 Day 2 Keynote
Troubleshoot and Assess the Health of VMware Environments with Free Tools (VIN3257BU)

Do you have an idea or a topic for the show? Would you like to be a guest on the ExploreVM podcast? Or just want to keep up the conversation about VMworld 2018? If so, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.

ExploreVM Podcast - HCI Series: HCI Testing with Alan Comstock

On this episode of the podcast, we begin a series on hyperconverged infrastructure. We start with a guest who's put a few products through the wringer to decide which HCI vendor worked best for them.

Listen to "HCI Series - HCI Testing with Alan Comstock" on Spreaker.

My Guest:
Alan Comstock

Watch for future episodes where I dive into several different HCI technologies.

To continue the conversation on hyperconverged infrastructure, or if you're an HCI vendor who would like to be a guest on the ExploreVM podcast, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.

Thursday, December 27, 2018

ExploreVM Podcast - A VMworld 2018 Conversation with Mike Burkhart

As 2018 comes to an end, I look back at some sessions that haven't been featured on the podcast yet this season. This episode was originally intended to be a video featuring Mike Burkhart live at VMworld 2018. Unfortunately, due to some technical difficulties during the editing process, we can only enjoy it as an audio podcast.

Listen to "A VMworld 2018 Conversation with Mike Burkhart" on Spreaker.

My Guest:
Mike Burkhart

vBrownBag VMworld 2018 Tech Talks
VMworld US 2018 Day 1 Keynote
VMworld US 2018 Day 2 Keynote
Troubleshoot and Assess the Health of VMware Environments with Free Tools (VIN3257BU)

Do you have an idea or a topic for the show? Would you like to be a guest on the ExploreVM podcast? Or just keep up the conversation about VMworld 2018? If so, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.

Thursday, September 6, 2018

Troubleshoot & Assess the Health of VMware Environments with Free Tools - VMworld 2018

I had the opportunity to present my first VMworld breakout session at this year's conference in Las Vegas. Below are the videos of the demos given during the session, as well as links to the tools discussed. Please do not hesitate to contact me for further information!

Session Description 

Based on his highly popular VMUG session, Paul Woodward Jr. (@ExploreVM) will review some of the tools he's used in his career to assess the health of and troubleshoot issues in VMware environments. Paul will provide demonstrations and real-world examples of how these tools have helped him solve problems that plague every VMware admin. And the best part: these tools are free!



vCheck

vCheck Website 
vCheck Github

RVtools

RVtools Website 
Yellow Bricks - New Version Available 

ESXTOP

Yellow Bricks 
vFrank - ESXTOP 
Virten - ESXTOP

vRealize Log Insight

VMware - vRealize Log Insight 
vRealize Log Insight - End of Availability

Other Session Links

Veeam One Free Edition
Top 21 Must Have VMware Admin Tools 
101 Free VMware Tools 
Free VMware Tools 

If you have suggestions for tools that should be added to the list above, do not hesitate to contact me via any of the channels provided below.

Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Sunday, August 26, 2018

SuperMicro Build Day Live with vBrownBag

Recently, SuperMicro hosted Alastair Cooke and vBrownBag for another edition of Build Day Live. For those who don't know, "vBrownBag is a community of people who believe in helping other people." They run weekly podcasts and webinars, and also host live Tech Talks at conferences around the country. What makes the Build Day Live event different is that vBrownBag is on site at the vendor, building a production cluster throughout the event, from start to finish.

SuperMicro Does Networking?

Up until Build Day Live, I had no idea that SuperMicro was in the networking space. They offer a wide array of products from 1GbE to 100GbE in a 1U chassis. These switches are bare metal and are compatible with the Cumulus Linux network operating system. SuperMicro also has its own proprietary NOS for the 1GbE switches. Configuration can be completed via CLI or GUI, making management easy for admins of all skill levels.

JBOF Disaggregated Storage

Outside of the server hardware we all know, SuperMicro also has a deep selection of storage hardware. Of the two storage specific segments of SuperMicro Build Day, I was most interested in the JBOF/NVMe storage. During this segment, Alastair spoke to Mike Scriber. To quote Mike, "I design really, really cool storage systems using NVMe. Very high density systems." And when you look at what SuperMicro is up to, he's right.

Utilizing the Intel ruler NVMe form factor, SuperMicro is quickly closing in on 1 petabyte of storage in a 1U rack chassis. The chassis has slots for 32 "rulers" that connect into 16 lanes of PCIe leading to 4 ports, allowing for 64 Gb/s of bandwidth. Another interesting feature of both the ruler form factor and the standard U.2 chassis is the engineering of the backplanes. The backplanes run parallel to the ruler form factor, and across the top of the U.2 drives. This design helps keep the densely packed 1U chassis cool, with little to no obstruction of airflow.

There were many aspects of the SuperMicro Build Day Live event worth checking out, far more than covered in this post! Check out the links below for all of the videos from the event.


SuperMicro Build Day - Condensed
vBrownBag YouTube SuperMicro Build Day Live Videos
The CTO Advisor - SuperMicro Build Day Interviews
Anthony Hook's SuperMicro Build Day Blog Post - SuperMicro Build Day Live
vBrownBag on Twitter
SuperMicro's Web Site

If you'd like to continue the conversation about SuperMicro Build Day Live, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Thursday, August 2, 2018

Getting Started with StorMagic SvSAN - A Product Review

Recently, I had the opportunity to try out StorMagic SvSAN in my home lab to see how it stacks up. What follows is an introduction to SvSAN and a description of the deployment, testing, results, and my findings.

What is StorMagic SvSAN 6.2?

StorMagic SvSAN is a hyperconverged solution designed with the remote office/branch office in mind. Two host nodes with onboard storage can be used as shared storage in locations where a traditional three-tier architecture would be difficult to manage or cost prohibitive. SvSAN is vendor agnostic, so it can be deployed onto existing infrastructure without the need to acquire additional hardware. The two storage nodes can scale out to support up to 64 compute-only nodes. Licensing is straightforward: one perpetual license per pair of clustered storage nodes. Initial pricing is also very accessible, starting at approximately $4,000 for the first 2TB license, and licensing and capacity can scale beyond the initial 2TB.

When asked about their typical customer base, StorMagic provided the following response: "StorMagic SvSAN is designed for large organizations with thousands of sites and companies running small data centers that require a highly available, two-server solution that is simple, cost-effective and flexible. Our typical customers have distributed IT operations in locations like retail stores, branch offices, factories, warehouses and even wind farms and oil rigs. It is also perfect for IoT projects that require a small IT footprint, and the uptime and performance necessary to process large amounts of data at the edge."

Technical Layout of SvSAN

A typical SvSAN deployment consists of three base components: hypervisor integration, Virtual Storage Appliances (VSAs), and the Neutral Storage Host (NSH). In my lab environment I used VMware vSphere, but StorMagic offers support for Hyper-V as well. A plugin loaded into vCenter Server provides the dashboard for management and for deploying the VSAs. Following the wizard, a VSA is deployed on each host and the local storage is presented to it. Before creating storage pools, the witness service (the NSH) must be deployed external to the StorMagic cluster. The NSH can run on a Windows or Linux server or PC, and is lightweight enough to run on a Raspberry Pi.

SvSAN 6.2 introduced the ability to encrypt data, which requires a key management server. For this evaluation, I installed Fornetix Key Orchestration as the KMS. Encryption options include encrypting a new datastore, encrypting an existing datastore, re-keying a datastore, and decrypting a datastore. As I was curious what kind of performance hit encryption might have on the environment, I ran my tests against the non-encrypted datastore, then again after encrypting it.

Deployment and Testing

The overall installation process is fairly straightforward. StorMagic provides an Evaluator's Guide which outlines the installation process, and their website has ample documentation for the product, though I had to read through it a couple of times to fully understand the nuances of the deployment. I did encounter a few hiccups during deployment: an IP issue, which I resolved, and a timeout on the VSA deployment. I needed to contact support to release the license for the Virtual Storage Appliance that timed out, but support was responsive and resolved my issue quickly. The timeout may have been tied to the IP issue, as the VSA deployed successfully on the second attempt.

With the underlying infrastructure in place, a shared datastore was deployed across both host nodes, and testing could begin. A Windows Server 2012 R2 virtual machine was deployed on the SvSAN datastore to run performance tests against. The provided Evaluation Guide suggests many tests to put the SvSAN environment through its paces. As mentioned previously, I ran the tests against an encrypted datastore, a non-encrypted datastore, and a local datastore.

Following the guidelines set forth by the Evaluation Guide, Iometer was the tool of choice for performance benchmarking. Below is a chart of the metrics used. Beyond the suggested performance testing, I also ran various tests to see what the end-user experience could feel like on an SvSAN-backed server. These included RDP sessions into the VM, continuous pings to locations internal and external to the network, and running various applications.

The final tests run against the SvSAN cluster covered failure scenarios and their impact on the virtual machine: drives were removed, connectivity to the Neutral Storage Host was severed, and iSCSI and cluster networking were cut. An interesting aspect of the guide is that it offers testing options that cause failures affecting VMs running on the SvSAN datastore, so you can see first-hand how the systems handle the loss of storage.

SvSAN Results & Final Thoughts

Performance testing against the VM on the SvSAN datastore produced positive results. I was curious whether passing through an additional layer in the storage path would affect IOPS, but there were only nominal differences between the local storage and the SvSAN datastore. I found the same to be true when comparing the encrypted and non-encrypted datastores. IOPS performance held steady across all testing scenarios.
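A quick way to sanity-check comparisons like these is to compute each scenario's percentage difference against the local-storage baseline from the Iometer result exports. A minimal sketch of that comparison (the scenario names and IOPS figures below are purely illustrative, not the actual test numbers):

```python
# Compare average IOPS across test scenarios against a baseline.
# The figures in `results` are hypothetical placeholders, not real data.
def percent_diff(baseline: float, value: float) -> float:
    """Percentage difference of `value` relative to `baseline`."""
    return (value - baseline) / baseline * 100.0

# Hypothetical per-scenario averages pulled from Iometer result files
results = {
    "local": 41250.0,
    "svsan_plain": 40800.0,
    "svsan_encrypted": 40350.0,
}

baseline = results["local"]
for scenario, iops in results.items():
    print(f"{scenario}: {iops:.0f} IOPS ({percent_diff(baseline, iops):+.1f}% vs local)")
```

Differences within a couple of percent, as seen here, are the kind of "nominal" gap the testing showed.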

The same was true of the user-experience testing. While running Iometer, Firefox, a popular chat application, and a continuous ping to a website, the following failures were introduced with no impact:

  • Hard drives were removed
  • A Virtual Storage Appliance was powered down
  • An ESXi host was shut down
  • Connectivity to the Neutral Storage Host was severed
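During these failure tests, the continuous pings were what made any interruption visible. A small helper along these lines could summarize per-second ping outcomes into outage windows (this is an illustrative sketch, not part of the actual test procedure, which simply watched the ping output):

```python
# Given a sequence of per-second ping outcomes (True = reply received),
# report contiguous stretches of lost replies as (start, end) outage windows.
def outage_windows(samples):
    windows, start = [], None
    for i, ok in enumerate(samples):
        if not ok and start is None:
            start = i                       # outage begins
        elif ok and start is not None:
            windows.append((start, i - 1))  # outage ends
            start = None
    if start is not None:                   # outage ran to the end of the capture
        windows.append((start, len(samples) - 1))
    return windows

# Example: replies lost during seconds 3-4 only
print(outage_windows([True, True, True, False, False, True]))  # [(3, 4)]
```

An empty result, which is what the SvSAN failure tests effectively produced, means the VM stayed reachable throughout.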

I was impressed with my experience with StorMagic's SvSAN: I went from no prior exposure to running production-ready datastores in approximately an hour, and the solution performed well under duress. Overall, StorMagic SvSAN is an excellent choice for those in need of a reliable, cost-effective remote office/branch office solution.

Lab Technology Specifications:

  • Two Dell R710s
  • 24 GB RAM each
  • 2x X5570 Xeon 2.93 GHz 8M Cache, Turbo, HT, 1333MHz CPU Each
  • One 240 GB SSD drive for caching in each host
    • Presented as a single 240 GB pool from the RAID controller
  • 5 x 600 GB 10k SAS drives configured in RAID 5
    • Presented as two pools: 400 GB & 1.8 TB
  • VMware vCenter Server Appliance 6.5
  • VMware ESXi 6.5 U2 Dell Custom ISO
  • Cisco Meraki MS220 1GbE switching

Further reading on StorMagic:
SvSAN Lets You Go Sans SAN 
This blog was originally published at Gestalt IT as a guest blog post. 

If you'd like to continue the conversation about StorMagic SvSAN, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.