In a previous blog post, I walked you through how I built my portable Hyper-V lab using Gigabyte’s GB-BXi5-4570R mini barebone PC. I even had a chance to test it out immediately after getting it all configured. I brought the kit with me to Microsoft’s Big Data Hackathon event in Toronto. And, boy, I surely don’t miss the heavy backpack. Even with my MacBook Pro, a portable router, power adapters and extension cords, my pack is still way lighter than when I carried the Dell Latitude E6520.
As an IT professional, it is important to have a lab environment to play around with – whether you’re a developer writing code or a systems administrator building servers – to test ideas and concepts prior to doing it for real. Time and time again, I’ve heard folks say how expensive it is to build a lab environment. That used to be the case, but not anymore. With cloud providers and virtualization options, there is really no excuse not to build an affordable personal lab environment.
A few days ago, one of my customers reached out to me on Yahoo Messenger (yes, it still exists) and asked how to identify the potential data loss when DBCC CHECKDB reports corruption in a SQL Server database. My common response is the usual “it depends”: there are cases when DBCC CHECKDB may recommend using the REPAIR_ALLOW_DATA_LOSS option. And while you may be fine with doing so, it may not be supported. An example of this is a SharePoint database, where Microsoft KB 841057 specifically mentions that using this option renders the database in an unsupported configuration. But say you have decided to proceed – how do you know what data potentially gets lost? This blog post walks you through the process of identifying potential data loss when DBCC CHECKDB reports corruption in your SQL Server database.
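At a high level, the process boils down to mapping the damaged pages that DBCC CHECKDB reports back to the objects that own them. Here’s a minimal T-SQL sketch of that mapping – the database name, page number and object ID below are made-up values for illustration only:

```sql
-- Run a full consistency check; TABLERESULTS returns the errors as a result set
-- instead of plain messages, which makes them easier to work with
DBCC CHECKDB (N'SampleDB') WITH NO_INFOMSGS, TABLERESULTS;

-- Suppose an error reports damage on page (1:153). Map the page back to an object.
-- DBCC PAGE is undocumented; trace flag 3604 redirects its output to the client.
DBCC TRACEON (3604);
DBCC PAGE (N'SampleDB', 1, 153, 0);  -- look for Metadata: ObjectId in the page header

-- Translate the object ID from the page header into a table name
SELECT OBJECT_NAME(245575913);  -- substitute the ObjectId value from DBCC PAGE
```

Once you know which tables (and, from the slot/record details, roughly which rows) sit on the damaged pages, you can judge what REPAIR_ALLOW_DATA_LOSS would actually throw away.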
This video is a compilation of different database recovery techniques that SQL Server DBAs should be familiar and comfortable with. We will look at recovering a database to a specific point in time, isolating critical objects or using table partitioning as an HA/DR option (more commonly called online piecemeal restore) and performing page-level restores.
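As a quick taste of one of those techniques, a page-level restore in T-SQL looks roughly like this. This is a sketch only – the database name, page ID and backup file paths are placeholders, and restoring a page while the database stays online requires Enterprise Edition:

```sql
-- Restore just the damaged page (file 1, page 153) from the last full backup
RESTORE DATABASE SampleDB
    PAGE = '1:153'
    FROM DISK = N'C:\Backups\SampleDB_full.bak'
    WITH NORECOVERY;

-- Roll the page forward using the existing log backup chain
RESTORE LOG SampleDB
    FROM DISK = N'C:\Backups\SampleDB_log_1.trn'
    WITH NORECOVERY;

-- Back up the tail of the log, then restore it to bring the page current
BACKUP LOG SampleDB TO DISK = N'C:\Backups\SampleDB_tail.trn';
RESTORE LOG SampleDB
    FROM DISK = N'C:\Backups\SampleDB_tail.trn'
    WITH RECOVERY;
```

The video walks through this and the other techniques in more depth.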
I started doing serious database disaster recovery work in 2006 while working for Fujitsu Asia. As the only hard-core SQL Server guy in a team of Wintel engineers, the buck stopped with me for anything related to SQL Server. And since our main offerings all revolved around high availability and disaster recovery, I had to be very familiar and well versed with all the recovery techniques available in the version of SQL Server we were running. On top of that, I needed to make sure that all our backup strategies met each application platform’s recovery objectives and service level agreements.
“The only way to walk on water is to step out of the boat – and focus on Jesus.”
I made it official last week on my LinkedIn profile. It’s been interesting to see the different comments I got after my LinkedIn status change. But what’s really exciting is the journey and the story behind it.
I had worked for Pythian since November 2008 – starting out as a SQL Server DBA and eventually becoming one of their senior principal consultants. They were responsible for helping me and my family migrate from Singapore to Canada. I have had the privilege of working with some of the smartest and brightest data professionals of my entire career, been exposed to some of the most challenging and complex environments, and worked on projects not directly related to SQL Server – Hadoop, SharePoint, System Center, Citrix and VMware, to name a few. I can say that I had been on stable ground in my employment since the day I joined. So, you might ask, “Why leave, then?“
Last year, as part of launching my very first online course, I gave away FREE access to my SQL Server High Availability and Disaster Recovery Deep Dive Course. I’m doing it again this year, but for a totally different reason. Here’s why.
I’ve been very active in the SQL Server community in one way or another. A lot of people ask me why I do what I do. It all started in late 1999 when, fresh out of college with no one wanting to hire me, a potential customer asked me to write an inventory application for their small business. This might sound really exciting for somebody who would consider it their very first consulting opportunity right out of college – especially since my potential customer was willing to pay whatever price I charged. But not for me. You see, I didn’t have a computer science or engineering degree. I even failed my only computer programming course. The only reason I passed the second time I took it was that I asked my best friend to help me write my final project. So, taking this project on was really not a good idea for me. But my then customer really wanted me to do the project because they liked me and trusted that I would do a great job at it. So, I gave in, and that was the beginning of my career in the IT industry. I managed to finish the project in about six months and my customer was happy. End of story.
Well, not quite yet. If you read between the lines, you’ll see that I’m not really good at writing code. Heck, I could barely read code at that time. So, how did I manage to finish the project and make my then customer happy? I started learning how to write code – Visual Basic 4 at the time. I borrowed a book from one of my former classmates and started reading, slowly learning one line of code at a time. This, of course, was before much content was even available on the Internet. But what really got me through was a young guy named Ken (I don’t even know if that was his real name) whom I met on one of the bulletin board systems (BBS) I constantly visited to learn about Visual Basic programming. I would ask questions, he would answer. Patiently. When I didn’t understand a piece of syntax, he would explain further. When a piece of code didn’t work, I would send it over to him and he would look at it, acting as my virtual debugger, with an explanation of why I got the error and how to possibly fix it. I spent an average of 16 hours a day on the computer writing code; almost half of that time was spent with Ken, asking questions and following his advice. That was my routine for almost four months. And that’s the reason I was able to finish my project and ended up with a happy customer.
I never got to meet Ken personally. I don’t know where he is from, what he does, or if he still writes code. But I’m thankful that I met him virtually on that BBS. Since then, I’ve started doing what he did for me – helping online communities by answering questions on forums, presenting at events, mentoring others, etc. I hope I bump into Ken one of these days and personally thank him for what he did for me.
I’m very thankful for communities like that of the SQL Server community. I’ve met folks who have become my friends, extended family members, prayer partners, career advisors, etc. There’s a reason why the #SQLFamily hashtag exists on Twitter.
And this is why I’m doing this again. I owe the SQL Server community big time. And this is my way of saying a big “thank you” to everyone who contributes to make this community even better every day.
Now, in order to be one of the ten lucky individuals who will receive FREE access to the full course, you must take the following actions:
- Leave a comment below. What are the TOP 3 things that you are thankful for about the SQL Server community? Be very specific. If you need to reach out to the folks whom you are thankful for, do it via email or social media and share it with the whole world. That’ll be a great way to put a smile on their face that day.
- Fill out my Contact Form. Provide a valid email address that you check on a regular basis. You want to make sure that my email announcement doesn’t end up in your Spam folder.
- Share this blog post via social media. Use the #SQLHADRRocks hashtag on Twitter, share it on Facebook (I know Facebook now uses hashtags as well), LinkedIn, Google+, Reddit, and anywhere else you can think of. Include at least one of the links in your comment below.
On Saturday, 06-Dec-2014, I will be selecting ten (10) lucky individuals based on my evaluation of their submission. If you have been selected, you will receive a personal email from me on 13-Dec-2014. If you didn’t receive any email from me, you can assume that your submission was not selected.
Thanks for reading this blog post. And if you’re in the United States or an American living elsewhere, Happy Thanksgiving!
[UPDATE: 13-Dec-2014] The winners have been chosen. Expect an email from me and enjoy FREE access to the online course.
Last year, I started writing an article that was supposed to be a series on Installing, Configuring and Managing Windows Server Failover Cluster using Windows PowerShell. The first of the series came out in July 2013 (and ended up being the last article I wrote for MSSQLTips.com in 2013). Since then, I’ve been involved with so many SharePoint and business intelligence (BI) integration projects that I barely had a chance to work with the SQL Server database engine on a regular basis. But since part 1 of the series is already out there, I figured it was worth the time and effort to finish it up. So, here it is – the complete series on Installing, Configuring and Managing Windows Server Failover Cluster using Windows PowerShell.
- Part 1 – from installing the feature to creating the Windows Server Failover Cluster
- Part 2 – retrieving and changing cluster object properties and adding clustered disks
- Part 3 – managing permissions, changing parameter values, moving clustered resources and dependencies
- Part 4 – common cluster troubleshooting tasks
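To give you a flavor of what the series covers, the core of part 1 boils down to a handful of cmdlets from the FailoverClusters module. The node names and IP address below are placeholders – substitute your own:

```powershell
# Install the Failover Clustering feature plus its management tools on each node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate the candidate nodes before creating anything
Test-Cluster -Node SQLNODE1, SQLNODE2

# Create the Windows Server Failover Cluster with a static cluster IP address
New-Cluster -Name SQLCLUSTER -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.100

# Confirm the cluster is up and review its properties
Get-Cluster | Format-List *
```

The series itself goes into much more detail on each of these steps, plus the management and troubleshooting tasks that follow.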
I’ve done two SQL Server webcasts for my friends at MSSQLTips.com. One is regarding security best practices for deploying SQL Server databases in the cloud. As more and more customers are thinking of deploying databases in the cloud, security is one of their main concerns. In the webcast, I talked about principles and concepts on securing databases in the cloud. You can check out the recording from the MSSQLTips.com website.
The second one is about networking best practices for SQL Server high availability and disaster recovery. The premise of the webcast is that SQL Server DBAs are now dependent on things they have no control over. Knowing what SQL Server depends on for high availability and disaster recovery enables DBAs to be better prepared to communicate with the other teams and meet their overall objectives. You can check out the recording from the MSSQLTips.com website.
When I first heard of the Microsoft PowerBI Demo contest, I figured it would be a great opportunity to explore the business intelligence capabilities of Excel 2013. With the preview of Power Maps and Power Query, I thought it would be fun to play around with these features. Besides, I’ve seen some fancy demos at the last PASS Summit.
Being a storyteller and presenter, I can’t help but want to showcase these features with a story around them. So, while having dinner with my family one day, I asked my 11-year-old son (who is into creating stop motion videos) if he might be interested in working on a project with me. I told him about the contest and what it’s about. More importantly, I told him about a story I already had in mind which I was absolutely sure he’d love: enter Spiderman. The story revolves around the fact that Spiderman needs to pay his tuition soon but doesn’t have enough money in his savings. He just got fired from The Daily Bugle, so there isn’t really much he can do. However, he can go back to delivering pizza around New York City, since he’s done it before. But this time, he needs to be strategic and efficient. He needs to identify which boroughs in New York City have the highest mean household income so that he can focus his efforts on that particular area. He also needs to find out which boroughs have had the highest number of complaints and crime incidents over the past few weeks. That way he can avoid bumping into situations where he may need to stop what he is doing just to go on a rescue mission. This is where he starts using his wits and gets working on his computer to search for publicly available New York City data. With the insights he comes up with from the data he gathered, he sets out to pursue his plans until he hits his first roadblock.
I had fun working on this project with my son (he enjoyed it a lot since he was the one who created the Spiderman scenes). I hope you like the video, too. And if I may ask, can you vote for the video by clicking on this link? The final video is embedded below.
And you can try it out for yourself. Download the Power Maps and Power Query preview on the Microsoft Download Center and enable them on your Excel 2013 workbooks. I bet you’ll have fun the way I did.
This post is way overdue. Since I’ve been getting a lot of requests about this specific presentation on SharePoint databases, I decided to do two things. First, I recorded the presentation for all my attendees to use as a reference.

Now, you might be thinking, “If you’ve already recorded your presentation, wouldn’t that affect attendance at your events?” Yes and no. Yes, because those who have seen the video will no longer attend my presentation. For me, this is a great opportunity to help those individuals plan ahead and maximize their time while attending events. As IT professionals, we’re busy, stuck in our day-to-day work, and don’t even have time to look into some of these best practices that need to be applied in our environments. Oftentimes, we are forced into the let’s-do-things-quick-and-fix-it-later corner because of the constant demand for our time. If a drop in attendance at my presentations means helping those individuals maximize their time, then I’m all for that. This also helps those who really wanted to attend my presentations but don’t have the means to do so (different time zones, different countries, no budget, etc.).

And the flip side? No, it wouldn’t affect attendance at my events. Everyone who has seen me deliver a presentation can tell you a thing or two about why they’ve decided to attend even though they’ve already seen me (or the same presentation) at a previous event. I really work hard to prepare my presentations – the proper use of pictures, colors, fonts, and stories is all done with intent – even when it’s the same presentation delivered at a different event. This is my way of saying, “thank you for taking time off your hectic and tight schedule to attend my presentation.” In addition, I want attendees to have a resource they can use as a reference when they go back to work. I want them to stay valuable and continue to grow as IT professionals.
After all, that’s my primary mission statement.
So, here it is – a video recording of my most requested topic at SharePoint conferences and events: Database Configuration for Maximum SharePoint 2010/2013 Performance.
And, you’ve probably seen the corresponding slide deck.
But this is just the first of the two things I mentioned. Here’s the second one. I’ve written a PowerShell script to check the SQL Server instance that you use for your SharePoint databases. This is the PowerShell script I use when delivering my presentation on Windows PowerShell for SharePoint Administrators. It’s also the same script I use when customers request my services to review and evaluate their SharePoint databases. The script checks for the configuration best practices recommended for SharePoint databases – stuff like MAXDOP = 1, disabled auto-update and auto-create statistics, etc. As SQL Server DBAs, we hate some of these configurations. However, they are all documented and supported, which means they have to be applied to the SQL Server instances and databases used by SharePoint. In addition, I have also included checks for what we SQL Server DBAs consider best practices – separation of MDF and LDF files, regular DBCC CHECKDB execution, backup compression enabled, etc. You can download the PowerShell script from here.
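If you want a feel for what the script does, here is a stripped-down sketch of two of the checks. This is not the actual script – it assumes a default instance, and that the `Invoke-Sqlcmd` cmdlet from the SQLPS/SqlServer module is available:

```powershell
# Check SharePoint-recommended settings on a default SQL Server instance
$instance = "localhost"

# MAXDOP should be 1 for instances hosting SharePoint databases
$maxdop = Invoke-Sqlcmd -ServerInstance $instance -Query `
    "SELECT value_in_use AS maxdop FROM sys.configurations WHERE name = 'max degree of parallelism';"
if ($maxdop.maxdop -ne 1) {
    Write-Warning "MAXDOP is $($maxdop.maxdop); SharePoint recommends 1"
}

# Auto-create and auto-update statistics should be disabled on SharePoint databases
Invoke-Sqlcmd -ServerInstance $instance -Query `
    "SELECT name, is_auto_create_stats_on, is_auto_update_stats_on FROM sys.databases;" |
    Where-Object { $_.is_auto_create_stats_on -eq 1 -or $_.is_auto_update_stats_on -eq 1 } |
    ForEach-Object { Write-Warning "$($_.name): auto statistics enabled" }
```

The full script covers many more checks than these, including the DBA-oriented ones mentioned above.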
Keep in mind that this is not the best example of how to write PowerShell scripts. I didn’t apply PowerShell scripting best practices here, so that will probably be my next personal project.
Feel free to use this script as you wish. It has only been tested on default instances of SQL Server 2008 and higher (named instances have not been considered yet) running on Windows Server 2008 and higher. High availability checks like failover clustering, database mirroring and Availability Groups are not included yet in this version. Comment on the script with bugs and fixes you want included, keeping in mind that this is specifically for SharePoint databases. Don’t expect indexing improvements or checks that identify the TOP I/O consumers, because there is no way for us to modify those queries without breaking your SharePoint support contract (and I am in no way a lawyer, so I won’t even argue about the contents of the EULA).