Just in time for the Holidays… Kevin Kline has come to town.
Kevin’s topic at the Charlotte SQL Server Users Group weighed the pros and cons of SQL Server virtualization. In this post I will note some of the many points that I learned.
Before I dive into the technical details, a few personal comments. This was definitely one of the more enjoyable geek-meets I’ve been to in quite a while. Our chapter president and, by default, program coordinator, Peter Shire, has done a superb job of bringing some top-notch SQL experts to our humble group, but Kevin’s topics were squarely in line with my own interests: not administration, not hard-core programming, but a topic that bridges admin and development. Let me coin it DevelopMin.
Here are the high points of the meeting, from my perspective…
1. If you have a bunch of SQL Servers lying around that are predominantly low utilization, virtualization may make sense. Another application for virtual servers would be a development shop that needs to certify its application against a variety of OSs, SQL Server versions, service packs, etc.
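The certification-matrix idea above can be sketched in a few lines. This is a minimal illustration; the OS, version, and service-pack names are placeholders I made up, not a real support matrix:

```python
from itertools import product

# Hypothetical certification matrix: each combination maps to one VM image.
operating_systems = ["Windows Server 2003", "Windows Server 2008"]
sql_versions = ["SQL Server 2005", "SQL Server 2008"]
service_packs = ["RTM", "SP1", "SP2"]

vm_configs = [
    {"os": os_name, "sql": sql, "sp": sp}
    for os_name, sql, sp in product(operating_systems, sql_versions, service_packs)
]

# 2 OSs x 2 SQL versions x 3 service packs = 12 VMs to certify against,
# which is exactly the kind of sprawl virtualization handles well.
print(len(vm_configs))  # 12
```

The point of the sketch is just that the matrix multiplies out quickly, which is why spinning these up as VMs beats racking twelve physical boxes.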
2. The overhead of virtualization has improved since the initial releases. Much of that improvement comes from reduced CPU utilization, if (and only if) you use current-generation processors. The new processors are VM-aware and will perform better in the virtual environment.
3. If you’re configuring your virtual machines, don’t let the software decide what resources to assign to each machine.
4. Hyper-V was originally named Hyperv.
5. A side discussion arose about formatting the SAN with 64 KB blocks and how performance can degrade if disk partition alignment is out of sorts. Apparently the newer controllers can be configured to buffer the data, so in some cases this may not be an issue.
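For anyone who hasn't run into the alignment issue, a quick back-of-the-envelope check makes it concrete. The sector-63 starting offset below is the classic legacy-partition example from the disk-alignment literature; treat the snippet as an illustration of the arithmetic, not a diagnostic tool:

```python
ALIGNMENT = 64 * 1024  # 64 KB stripe-unit size

def is_aligned(partition_offset_bytes, alignment=ALIGNMENT):
    """True when the partition starts exactly on an alignment boundary."""
    return partition_offset_bytes % alignment == 0

# Legacy partitions often started at sector 63 (63 * 512 = 32,256 bytes),
# so every 64 KB logical I/O straddles two stripe units on the array.
print(is_aligned(63 * 512))    # False: the misaligned legacy offset
print(is_aligned(128 * 512))   # True: 65,536 bytes, a clean 64 KB boundary
```

That off-by-one-sector start is why a misaligned volume can turn one logical read into two physical ones.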
6. In most cases, Microsoft will no longer make a customer reproduce a SQL Server bug on a non-virtualized machine before they will look at it. We’ve come a long way, baby.
7. We got on the topic of how Microsoft clusters are good for availability but add nothing along the lines of scalability. I’m sorry, but I was spoiled in the mid-’80s when I was working with VAX/VMS clusters (remember Digital Equipment, before they were bought by Compaq and HP?). Back then, a node in a cluster shared the disk drives but added memory and CPU resources to the cluster; additional nodes added processing power. How does Microsoft get away with using the same terminology to mean something that is backward technology? Why does the IT community put up with it? Kevin mentioned several products that support the concept of scalable clusters… but I still wonder why it has to be provided by a third party at an additional cost.
8. Several web sites were referenced, including Jimmy May at SQLCAT.com on disk alignment and Linchi Shea’s methodical analysis of disk arrays at sqlblog.com. What’s really amazing is the depth of knowledge that is out there. As soon as you think you know something, you find people like this who are in a completely different league. It is humbling and inspiring, concurrently.
9. Quest’s LiteSpeed database backup tool is used throughout Microsoft. Lots of companies say their products are covertly used in the inner sanctums of Microsoft. And Kevin said it on a Microsoft campus in front of Microsoft techs. It must be true! What I found intriguing about the product is that an errant implicit UPDATE or DELETE transaction can be rolled back from the transaction log. Since we use LiteSpeed at my place of employment, and I have been known to skip the WHERE clause once or twice, I couldn’t wait to bounce this off my local admin. Apparently this only works if the database uses the FULL recovery model; our OLAP dbs are in SIMPLE, and should be.
10. If you’re stuck in a rut, get out to a local user group meeting. You’ll either be uplifted and inspired, or you can just go back to your rut.
I’m off to rate the presentation at speakerrate.com.