
– Microsoft technologies and what I do for fun –

Category Archives: SQL Server Disaster Recovery

Giving away FREE Access to my SQL Server High Availability and Disaster Recovery Deep Dive Course – Round 2

26 Wednesday Nov 2014

Posted by Edwin M Sarmiento in SQL Server, SQL Server Disaster Recovery, SQL Server learning

≈ 15 Comments

Tags

#SQLFamily, SQL Server, SQL Server community, SQL Server High Availability and Disaster Recovery, SQL Server learning



Last year, as part of launching my very first online course, I gave away FREE access to my SQL Server High Availability and Disaster Recovery Deep Dive Course. I’m doing it again this year but for a totally different reason. Here’s why.

I’ve been very active in the SQL Server community in one way or another, and a lot of people ask me why I do what I do. It all started in late 1999 when, fresh out of college with no one wanting to hire me, a potential customer asked me to write an inventory application for their small business. This might sound really exciting for somebody landing their very first consulting opportunity immediately out of college, especially since the customer was willing to pay me any price I would charge for it. But not for me. You see, I didn’t have a computer science or engineering degree. I even failed my only computer programming course, and the only reason I passed the second time I took it was because I asked my best friend to help me write my final project. So, taking this project on was really not a good idea for me. But my then customer really wanted me to do this project for them because they liked me and trusted that I would do a great job at it. So, I gave in and that was the beginning of my career in the IT industry. I managed to finish the project in about 6 months and my customer was happy. End of story.

Well, not quite. If you read between the lines, you’ll see that I’m not really good at writing code. Heck, I could barely read code at that time. So, how did I manage to finish the project and make my then customer happy? I started learning how to write code – Visual Basic 4 at that time. I borrowed a book from one of my former classmates and started reading, slowly learning one line of code at a time. This, of course, was before a lot of content was even available on the Internet. But what really got me through was a young guy named Ken (I don’t even know if this was his real name) whom I met on one of the bulletin board systems (BBS) that I constantly visited to learn about Visual Basic programming. I would ask questions, he would answer. Patiently. When I didn’t understand a piece of syntax, he would explain further. When a piece of code didn’t work, I would send it over to him and he would look at it, acting as my virtual debugger, explaining why I got the error and how to possibly fix it. I spent an average of 16 hours a day on the computer writing code, and almost half of that time was with Ken, asking questions and following his advice. That was my routine for almost 4 months. And that’s why I was able to finish my project and ended up with a happy customer.

I never got to meet Ken personally. I don’t know where he is from, what he does, or if he still writes code. But I’m thankful that I met him virtually on that BBS. Since then, I’ve been doing what he did for me – helping online communities by answering questions on forums, presenting at events, mentoring others, etc. I hope I bump into Ken one of these days and personally thank him for what he did for me.

I’m very thankful for communities like that of the SQL Server community. I’ve met folks who have become my friends, extended family members, prayer partners, career advisors, etc. There’s a reason why the #SQLFamily hashtag exists on Twitter.

And this is why I’m doing this again. I owe the SQL Server community big time, and this is my way of saying a big “thank you” to everyone who contributes to making this community even better every day.


Now, in order to be one of the ten lucky individuals who will receive FREE access to the full course, you must take the following actions:

  1. Leave a comment below. What are the TOP 3 things that you are thankful for about the SQL Server community? Be very specific. If you need to reach out to the folks whom you are thankful for, do it via email or social media and share it with the whole world. That’ll be a great way to put a smile on their face that day.
  2. Fill out my Contact Form. Provide a valid email address that you check on a regular basis. You want to make sure that my email announcement doesn’t end up in your Spam folder.
  3. Share this blog post via social media. Use the #SQLHADRRocks hashtag on Twitter, share it on Facebook (I know Facebook now uses hashtags as well), LinkedIn, Google+, Reddit, and anything else you can think of. Include at least one of the links in your comment below.

On Saturday, 06-Dec-2014, I will be selecting ten (10) lucky individuals based on my evaluation of their submission. If you have been selected, you will receive a personal email from me on 13-Dec-2014. If you didn’t receive any email from me, you can assume that your submission was not selected.

Thanks for reading this blog post. And if you’re in the United States or an American living elsewhere, Happy Thanksgiving!

[UPDATE: 13-Dec-2014] The winners have been chosen. Expect an email from me and enjoy FREE access to the online course.


Two SQL Server Webcasts from MSSQLTips.com

24 Thursday Jul 2014

Posted by Edwin M Sarmiento in AlwaysOn Availability Groups, presentations, SQL Server, SQL Server Administration, SQL Server Clustering, SQL Server Disaster Recovery, SQL Server learning, SQL Server Security, Windows Cluster

≈ Leave a comment

Tags

SQL Server, webcast


I’ve done two SQL Server webcasts for my friends at MSSQLTips.com. One is about security best practices for deploying SQL Server databases in the cloud. As more and more customers think about deploying databases in the cloud, security is one of their main concerns. In the webcast, I talked about principles and concepts for securing databases in the cloud. You can check out the recording from the MSSQLTips.com website.


The second one is about networking best practices for SQL Server high availability and disaster recovery. The premise of the webcast is that SQL Server DBAs now depend on things that they have no control over. Knowing what SQL Server depends on for high availability and disaster recovery enables DBAs to be better prepared to communicate with the other teams to meet their overall objectives. You can check out the recording from the MSSQLTips.com website.


Giving Away FREE Access to My SQL Server High Availability and Disaster Recovery Deep Dive Course

23 Monday Sep 2013

Posted by Edwin M Sarmiento in SQL Server, SQL Server Disaster Recovery, SQL Server learning

≈ 11 Comments

Tags

contest, e-learning, SQL Server High Availability and Disaster Recovery, Udemy


Yesterday, I tweeted about giving away FREE access to my online course on Udemy (and, yes, it’s a birthday gift from me). If you’ve been following my blog posts, you may already know that I launched my very first learning experiment last week via the online course. I haven’t really promoted the course yet (aside from SQL Server MVP and MCM Brent Ozar mentioning it on his blog), which is kind of unusual for me since I also write about topics on the subject.

When I was preparing the course, I had two things in mind. First, I wanted the course to have an impact on both the ones taking it and those who matter to them. I had several assumptions about those who might be interested in taking it. They’re the ones who really do care about their personal growth – those who invest time and resources to learn something new so that they can improve themselves. These are the folks reading books, blog posts, whitepapers, articles and even someone else’s code during their spare time. They attend conferences, user group meetings and events so long as their time and budget allow. They search the internet for free stuff when their budget doesn’t allow them to invest in additional resources, and they regularly try out something new. They do this not only because they feel the personal satisfaction of improving and developing themselves but also because they want to spend more time on the things that really matter to them – family, friends, loved ones, etc.

Second, I wanted the course to become a part of their career. They say “experience is the best teacher.” I say learned experience is. What good is knowledge if it isn’t applied? How many books have been collecting dust on the bookshelf, waiting for their turn to be opened and read by their owners? How many concepts learned have actually been applied? We don’t need more ideas. What we need is to apply the ideas and lessons that we’ve already learned.

And that’s the story behind why I am giving away FREE access to my online course. I have made the first five lectures of the online course accessible to anyone who has access to the internet – no need to register on Udemy to access them. If you’ve found this blog post, it means you are a SQL Server professional who is serious about personal growth (and I’m pretty sure you’ve also seen the free lectures). The first five lectures cover very important concepts in high availability and disaster recovery, things that we technology professionals don’t even think about sometimes. In fact, they are the foundation behind implementing effective high availability and disaster recovery solutions. Even non-SQL Server professionals will benefit from these free lectures.


In order to be one of the twelve lucky individuals who will receive FREE access to the full course, you must take the following actions:

  1. Leave a comment below. What are the TOP 3 ideas that you have taken away from the first five modules of the course? And how do you intend to apply those 3 ideas in your organization or for your customers? Be creative. You never know whether those ideas will end up being implemented – either by you or by someone else.
  2. Fill out my Contact Form. Provide a valid email address that you check on a regular basis. You want to make sure that my email announcement doesn’t end up in your Spam folder.
  3. Share this blog post via social media. Use the #SQLHADRRocks hashtag on Twitter, share it on Facebook (I know Facebook now uses hashtags as well), LinkedIn, Google+, Reddit, and anything else you can think of. Include at least one of the links in your comment below.

On Tuesday, 01-Oct-2013, I will be selecting twelve (12) lucky individuals based on my evaluation of their submission. If you have been selected, you will receive a personal email from me on 13-Oct-2013. If you didn’t receive any email from me, you can assume that your submission was not selected.

[UPDATE: 01-Oct-2013] I’ve received requests to extend the deadline to 07-Oct-2013 due to very tight schedules. So, you still have a week to go to take advantage of this. I guess I didn’t promote it well enough 🙂

[UPDATE: 08-Oct-2013] The winners have been chosen. Expect an email from me and enjoy FREE access to the online course.

SQL Server High Availability and Disaster Recovery Deep Dive Course Now Available

16 Monday Sep 2013

Posted by Edwin M Sarmiento in SQL Server, SQL Server Disaster Recovery, SQL Server learning, Uncategorized

≈ Leave a comment

Tags

online course, SQL Server High Availability and Disaster Recovery



I’ve been working on this personal project since early this year. If you’ve been following my blog posts, my articles on MSSQLTips.com or even my presentations at various events, you know that my area of expertise is SQL Server high availability and disaster recovery. I’ve compiled years of experience and exposure with SQL Server and related technologies to prepare this online course, some of which was delivered at events and conferences worldwide. One of my personal favourites is the topic on Database Recovery Techniques: I vividly recall delivering that presentation at Microsoft TechEd Southeast Asia back in 2007, in a room full of about 200 attendees, when my demos failed dramatically. Imagine trying to present on the topic of database disaster recovery when the most important thing that you need to do was the very thing that you forgot to do. It was the basis of a previous blog post on delivering presentations.

But this is more than just an online course. It is my commitment to continuous personal growth. It’s also an expression of faith and taking risks. I’ve experienced a lot of failures in my career, one of which was the now-defunct BlogcastRepository.com website where I hosted my very first attempt at creating video lessons on SQL Server 2008 back when it was still in CTP. Part of preparing this online course was realizing that it may or may not work, similar to what happened with the video lessons I recorded for BlogcastRepository.com. But I set aside my fears and decided to work on it anyway – skipping holiday weekends and possible movie nights. I even had to put down my digital camera for a while to focus on this project. This online course contains within it several parts of who I am – the risk-taking, fearful, committed, and dedicated individual who chose to persist despite his failures.

This is just the beginning. I’m still experimenting and trying out a couple of ideas. But I have an offer to make. If you’re a SQL Server DBA who is serious about taking your skills and career to the next level and willing to help someone else in the process, let me know how I can help.

Why Going Back To The Basics Matters

21 Tuesday May 2013

Posted by Edwin M Sarmiento in AlwaysOn Availability Groups, SQL Server, SQL Server 2012, SQL Server Administration, SQL Server Clustering, SQL Server Disaster Recovery

≈ 2 Comments

Tags

DevTeach, SQL Server High Availability and Disaster Recovery


I was thinking of posting this in my non-technical blog but realized that technical professionals will find value in the underlying principles and concepts.

When I look at the different questions posted on the MSDN and TechNet forums, I notice a common thread. Questions are focused on either “how do I do X?” or “what is Y?” Sometimes, I see questions like “why is Y not working and how do I fix it?” If you’ve been to one of my presentations, you may have noticed that I almost always start from the basics and internals. Most people find that boring, especially when the attendees consider themselves senior technical professionals. But what I learn from the attendees of my presentations is interesting: almost everyone finds some sort of appreciation for the basics. It’s the same thing I teach my kids: learning the basics is the key to understanding the complex.

I also deal a lot with high availability and disaster recovery. When customers ask me how a certain technology or feature works, I ask them questions about things that I know they are already familiar with. Oftentimes, they get confused by my approach until I explain that the complex things can best be explained by going back to the basics. It’s like learning how to do complex mathematical calculations or reading a financial statement; both are founded on the basic principles of math that our grade school math teachers taught us. So, when I explain the concepts behind SQL Server AlwaysOn Availability Groups, I go back to the concepts behind failover clustering and database mirroring. Once they understand the concepts behind these two technologies, it becomes easier to discuss more complex architectures like multi-subnet clustering with Availability Groups by building on the concepts that they already know. I use the same approach when answering questions on the MSDN and TechNet forums. So, the next time you’re faced with a technical challenge, approach the complexity of the problem using the filters of the basics that you already know. You’ll be surprised that you actually know a few things.

Side Note: If you’re anywhere near Toronto next week and interested in learning more about SQL Server High Availability and Disaster Recovery, check out the whole-day workshop that I’m doing for DevTeach. I started doing this workshop last year and got excellent feedback from the attendees. The format of the workshop is similar to what I’ve outlined in this blog post: looking at complex high availability and disaster recovery architectures from the lenses of the basic principles and concepts.

To get a $250 discount off of the main conference, use this registration code: TO00MSDNGOLD. Plus, if you register for the pre- or post-conference workshops, you have the opportunity to bring a friend with you – for FREE. How cool is that?

If you’re attending my whole-day workshop, leave a comment below to get a special gift from me.


Are you switching to BULK_LOGGED recovery model? Know your risks.

16 Sunday Dec 2012

Posted by Edwin M Sarmiento in SQL Server, SQL Server Disaster Recovery

≈ Leave a comment

Tags

recovery model, sql server databases


For years I was led to believe that using the bulk-logged recovery model for SQL Server databases was a safe place to be (that was entirely my fault, not MSDN’s nor TechNet’s.) I took the definition of this recovery model at face value – MINIMAL log space is used by bulk operations. My understanding from this definition was that it would only use minimal space in the transaction log while performing transactions in this recovery model. Wasn’t that the definition in the first place? I was wrong – for many years. You see, being in the bulk-logged recovery model may mean using minimal log space for transactions, but that’s for a reason. Being in this recovery model means that the log will not contain all of the changes made by a transaction – only enough changes to recreate the end result.

An analogy for this scenario would be hopping on one of those computerized treadmills. If I wanted to spend half an hour on the treadmill, all I need to do is set it to half an hour. In my mind, I will do a half-hour of treadmill work. While going through my exercise routine, I may bump up the speed to 2 mph for the first 5 minutes to warm myself up, maybe up to 5 mph for the next 5 minutes, up to 7 mph for the next 10 minutes, start to cool down for the next 5 minutes at 3 mph and possibly do deep breathing exercises for the last 5 minutes at 2 mph. At the end of my exercise routine, I would have done half an hour of treadmill work, which was what I set out to accomplish initially. But what if I want to recreate the exact same routine with the same combination of speed and duration on the treadmill? The only way for me to do that is to look at the record in the treadmill and note when I changed the speed, at what time and for how long. The treadmill keeps all of that information; it is, so to speak, in the FULL recovery model. Since I have very limited memory, I am in the BULK_LOGGED recovery model: I don’t keep all of that information in my brain, just enough to recreate what I just did.

Going back to the discussion about recovery models: switching to the bulk-logged recovery model during some high-volume transactions may be a good idea to minimize the amount of log space used. But have we thought about the risks that we are exposing our databases to when we switch to this recovery model? Since the log does not have enough information to recreate a transaction that ran while in this recovery model, we run the risk of not being able to do a point-in-time recovery of the database.

Here’s an example. Let’s say we switch our database to the bulk-logged recovery model prior to running an index maintenance job to minimize its impact on our log shipping configuration. If something happens to the database before the next transaction log backup, we end up running a tail-of-the-log backup that is potentially corrupt. Since the bulk-logged recovery model does not have all of the changes made in the transaction log, a log backup will need to grab the changes in the affected data files in order to keep the database consistent during a restore process. If the log backup only took the transaction log records, restoring that particular backup would render the database inconsistent. However, in the FULL recovery model, all of the changes are already in the transaction log. A log backup no longer needs to access the data files to record those changes. This is one of the reasons why we can still recover the database to a specific point in time prior to a disaster by using a tail-of-the-log backup. To illustrate, let’s create a database with a simple table and a clustered index.

CREATE DATABASE [testDB]
GO
CREATE TABLE testTable (
	c1 INT IDENTITY,
	c2 VARCHAR (100));
GO
CREATE CLUSTERED INDEX testTable_CL
	ON testTable (c1);
GO

Next, I’ll insert a row into the table and take my very first full database backup. The backup will then contain the row that I just inserted.

INSERT INTO testTable
	VALUES ('Row inserted: transaction # 1');
GO
BACKUP DATABASE [testDB] TO
	DISK = 'C:\Demos\testDB.bak'
WITH INIT, STATS;
GO

I will then insert 100 additional rows into the table and take my first log backup. The log backup will contain all of those 100 rows that I just added.

INSERT INTO testTable
	VALUES ('Insert more rows...');
GO 100
BACKUP LOG testDB TO
	DISK = 'C:\Demos\testDB_Log1.trn'
WITH INIT, STATS;
GO

Assume that I will switch the database recovery model to bulk-logged because I will be doing index maintenance.

ALTER DATABASE testDB
	SET RECOVERY BULK_LOGGED;
GO
ALTER INDEX testTable_CL ON testTable REBUILD;
GO

I’ll switch the database back to the FULL recovery model after the index maintenance and add more rows.

ALTER DATABASE testDB
	SET RECOVERY FULL;
GO
INSERT INTO testTable
	VALUES ('Row inserted: transaction # 2');
GO
INSERT INTO testTable
	VALUES ('Row inserted: transaction # 3');
GO

Now, since we haven’t taken any backups after switching to the bulk-logged recovery model and back to FULL, the next log backup will have to look at the data files and grab the changed data pages (and index pages, in this case) to keep the database consistent. If this were in the FULL recovery model, all that the backup process needs is the transaction log file. What if the server crashes and corrupts the data files containing the table? The first thing that we need to do to restore the database to a point in time prior to the crash is to take a tail-of-the-log backup and use that as the last step in our restore process. Let’s try that.

-- Backup the tail-of-the-log so we can keep the transactions that are still in the log but not persisted to the data files
BACKUP LOG [testDB] TO
	DISK = 'C:\Demos\testDB_tail.trn'
WITH INIT, STATS, NO_TRUNCATE;
GO

Notice that while the tail-of-the-log backup may have succeeded, it generates a message that is a bit alarming. Wouldn’t you consider this something to be worried about?

Basically, the tail-of-the-log backup encountered an error in the process but continued anyway. That also means that we can’t really rely that much on this backup. Let’s try restoring this tail-of-the-log backup as part of our restore sequence.

--  Try restoring from backups
RESTORE DATABASE [testDB] FROM
	DISK = 'C:\Demos\testDB.bak'
WITH REPLACE, NORECOVERY;
GO
RESTORE LOG [testDB] FROM
	DISK = 'C:\Demos\testDB_Log1.trn'
WITH REPLACE, NORECOVERY;
GO
--Restore the tail-of-the-log backup
RESTORE LOG [testDB] FROM
	DISK = 'C:\Demos\testDB_tail.trn'
WITH REPLACE;
GO

Because the database was switched to the bulk-logged recovery model and no other backup occurred prior to the disaster, the tail-of-the-log backup that we attempted did not contain enough information to recreate the index maintenance task that we ran. In order to properly recreate that transaction, the backup process needed to access the data files that were changed by the transaction. Since the data file in this case was damaged, there was no way for the tail-of-the-log backup to capture that information, thus rendering it corrupt.

This should give you some insight into the risk that your databases are in when you switch to the bulk-logged recovery model. So, what do you need to do to avoid this risk? Make sure that you run a log backup immediately after the transactions you are running under the bulk-logged recovery model complete. That backup will include all of the data pages that were changed by the minimally logged transactions and will be enough to recover your database should something happen afterwards. A graphic of how that can be done is highlighted in the MSDN article.

Backup sequence when switching from FULL to BULK_LOGGED recovery model and back.
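
To make that sequence concrete, here is a minimal sketch using the same testDB database and demo paths from above. Treat it as an illustration of the recommended order of operations rather than a production script, and adjust names and file locations to your environment.

-- Take a log backup while still in FULL so the log chain has a known, restorable point
BACKUP LOG [testDB] TO
	DISK = 'C:\Demos\testDB_BeforeBulk.trn'
WITH INIT, STATS;
GO
-- Switch to BULK_LOGGED only for the duration of the bulk operation
ALTER DATABASE [testDB] SET RECOVERY BULK_LOGGED;
GO
ALTER INDEX testTable_CL ON testTable REBUILD;
GO
-- Switch back to FULL and immediately take another log backup;
-- this backup picks up the data pages changed by the minimally logged operation
ALTER DATABASE [testDB] SET RECOVERY FULL;
GO
BACKUP LOG [testDB] TO
	DISK = 'C:\Demos\testDB_AfterBulk.trn'
WITH INIT, STATS;
GO

Once that second log backup completes, the changed extents are safely captured in the backup chain, so a later disaster no longer requires reading the (possibly damaged) data files to recover the minimally logged work.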

Don’t say I didn’t warn you.

Deploying a SQL Server 2012 Multi-Subnet Cluster

21 Saturday Jul 2012

Posted by Edwin M Sarmiento in SQL Server, SQL Server 2012, SQL Server Clustering, SQL Server Disaster Recovery, Windows Server 2008 Clustering

≈ 8 Comments

Tags

geoclustering, multi-subnet clusters, SQL Server 2012


I’ve been wanting to write a series of articles on deploying SQL Server 2012 on a multi-subnet cluster for quite some time now. This was driven by the fact that my series of articles on SQL Server 2008 Failover Clustering had been in the Top 10 Tips for more than 2 years since being published three years ago. I guess more and more systems administrators and SQL Server DBAs are being tasked with deploying failover cluster instances. Ever since I got my hands on the beta version of Denali (the codename for SQL Server 2012) last year, I’ve been testing configurations for the multi-subnet clustering feature. I think I built about 3 test environments prior to Denali going RTM just so I could wrap my head around the concepts (plus the fact that Windows Clustering experts like Microsoft MVP Allan Hirt (blog | Twitter) have been gracious enough to answer questions). Check out the first in a series of articles on how to deploy a SQL Server 2012 Multi-Subnet Cluster on MSSQLTips.com.

And if you’re in New York City or the nearby cities and want to see this whole process in action, catch me at SQL Saturday #158 this coming 4-Aug-2012.

[UPDATE:] Part 2 of the series has been published on 26-July-2012. Stay tuned for the rest of the series.

[UPDATE:] Part 3 of the series has been published on 13-Aug-2012. Stay tuned for the rest of the series.

[UPDATE:] Part 4, the last of the series has been published on 06-Sep-2012.

Why People and Processes Matter More Than Technology

19 Thursday Jul 2012

Posted by Edwin M Sarmiento in log shipping, SQL Server, SQL Server Disaster Recovery

≈ Leave a comment


I was thinking twice about posting this to my non-technical blog but thought that it applies to the technology realm.

Almost 5 years ago, I wrote a blog post about what I call the poor man’s SQL Server log shipping. In it, I outlined the process of how log shipping works. This became the basis of the chapter I wrote for the SQL Server MVP Deep Dives Volume 1 book. What’s interesting is that while I wrote the content with SQL Server 2000 in mind, the concepts and the principles behind the process still apply up to SQL Server 2012. I recently had a customer who wanted to move their existing database server from SQL Server 2000 to SQL Server 2008 R2 with minimal downtime. The only option was to implement log shipping because of the size of the database. However, there were a few restrictions.

  • We can’t change the SQL Server service account on the current production environment because it will require a service restart
  • Log shipping between SQL Server 2000 and SQL Server 2008 R2 is not supported out-of-the-box. We do not have the appropriate wizards and stored procedures that we can use to configure log shipping between these two versions

Given the restrictions, it’s easy to just give up on the option to use log shipping. But isn’t log shipping just an automated backup-copy-restore process? As long as the source can run log backups and the destination can copy and restore those generated backups without breaking the log sequence, I don’t see any reason why it can’t be done (there’s a sketch of the idea after the list below). But every once in a while, I get asked about my approach and how it could possibly work.

  • This can’t be a disaster recovery solution. I didn’t say it is. In fact, it’s one-way traffic because the internal database format between the two versions is different. Once you fail over to the higher version, there is no turning back. This approach is ideal for doing version upgrades on different hardware while minimizing downtime. If you are dealing with the same version of SQL Server, I don’t see any reason why you would use this approach because the wizards and the stored procedures are available for you to use.
  • I don’t have a domain account. I used to think that I needed a domain account to implement log shipping. It is recommended as a best practice but not necessary. This means that two SQL Server instances can be on a workgroup and still be configured for log shipping. How? Configure the SQL Server service account (database engine and agent) to use a local Windows account. Create the same local Windows account (with the same password and permissions) on the machines participating as standby servers for a log shipping configuration and use the account for the SQL Server service account. While this poses a challenge in managing credentials and making sure that passwords are modified on all of the machines at the same time, it will still work. Anybody can walk from Manhattan to Boston (about 250 miles and roughly 71 hours) but I doubt it’ll be anybody’s primary option.
  • We don’t have a DBA. That’s where I come in. 🙂
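
To make the backup-copy-restore idea concrete, here is a minimal sketch of one cycle. The database name, file paths and schedule are hypothetical, and the copy step in the middle would be handled by whatever file-transfer mechanism you already have (a scheduled robocopy, for example).

-- On the SQL Server 2000 source: initialize the standby once with a full backup,
-- then back up the log on a schedule
BACKUP DATABASE [ProdDB] TO DISK = 'D:\LogShip\ProdDB_full.bak' WITH INIT;
GO
BACKUP LOG [ProdDB] TO DISK = 'D:\LogShip\ProdDB_0100.trn' WITH INIT;
GO

-- (Copy the backup files to the standby server.)

-- On the SQL Server 2008 R2 standby: restore the full backup once, leaving the
-- database in a restoring state, then apply each log backup in sequence
-- (add MOVE clauses if the data/log file paths differ on the standby)
RESTORE DATABASE [ProdDB] FROM DISK = 'E:\LogShip\ProdDB_full.bak' WITH NORECOVERY;
GO
RESTORE LOG [ProdDB] FROM DISK = 'E:\LogShip\ProdDB_0100.trn' WITH NORECOVERY;
GO

-- At cutover: take a final tail-of-the-log backup on the source, copy it over,
-- then restore it WITH RECOVERY so the database is upgraded and brought online
RESTORE LOG [ProdDB] FROM DISK = 'E:\LogShip\ProdDB_tail.trn' WITH RECOVERY;
GO

In the real scenario these steps were simply wrapped in scheduled jobs and scripts, which is exactly the point: the mechanism is straightforward once the underlying process is understood.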

My point is that the reason why I was able to recommend this solution to customers is that I understand the principles and concepts involved in the process. It’s easy to focus our attention on technology solutions instead of the processes and people involved in getting the job done. And this is the reason I coined the PPT methodology – people, process and technology. I’ve used this methodology a lot in high availability and disaster recovery projects, but it pretty much applies to just about any aspect of life – finding a job, planning a vacation, etc. Technology is just there to complement what we – the people – can accomplish by defining the processes that we need to follow to accomplish a goal. Besides, we need to stop looking at constraints as limitations and start seeing them as opportunities for creativity.

Watch Out for SQL Server 2012 AlwaysOn Webcast

03 Thursday Nov 2011

Posted by Edwin M Sarmiento in SQL Server, SQL Server "Denali", SQL Server Disaster Recovery

≈ 2 Comments


My friends at MSSQLTips.com asked me to do a webcast on SQL Server 2012 AlwaysOn Availability Groups. You’ve probably noticed that I have not written anything about SQL Server 2012 (formerly “Denali”) in either my articles or my previous blog posts, unlike when SQL Server 2008 was being released. That was just my preference, specifically because of all the stuff that I couldn’t talk about back then. Well, now that it is officially out in the public with the new name, I guess I no longer have to worry about mentioning anything that isn’t publicly available.

Being a high availability/disaster recovery (HA/DR) guy, AlwaysOn is one of the features that I like about SQL Server 2012. It provides organizations with more options to consider when implementing an HA/DR solution. Two things being introduced here are AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances.

On the 30th of November 2011 (3PM EDT), join me and the guys from Fusion-io and MSSQLTips.com as we explore this new SQL Server 2012 feature called AlwaysOn Availability Groups. To register for this webcast, simply click on this link. If you have questions about AlwaysOn Availability Groups even before the webcast, you can post them here so we can discuss them further. I will try my very best to make sure that your questions are addressed during the webcast.

On Disaster Recovery and my SQL Rally 2011 Presentation

19 Sunday Jun 2011

Posted by Edwin M Sarmiento in SQL Server Disaster Recovery

≈ 3 Comments


Yesterday, I saw a Twitter post regarding the speaker evaluation results from SQL Rally 2011 in Orlando, FL last May. I was surprised to see that my session was among the top 3 best sessions of the conference. I dug up the Excel spreadsheet containing my session evaluation results and began to read. I found one comment very fascinating (the only evaluation where I got very low scores) as the response pertained to the speaker’s knowledge of the subject. The comment was: “copy and paste coder.” I’ve been doing this specific presentation for almost 5 years now, with a few tweaks every once in a while based on feedback from attendees. Yes, I live and breathe disaster recovery as part of my day-to-day job. However, there are several reasons why I do not type or write code during my presentations. Here are a few of them:
  1. A presentation is a performance: Many will disagree with me on this, especially experts who believe that to demonstrate their expertise, they should be writing code and doing live demos during a presentation. Whenever I go up on stage to deliver a presentation, I always think about the attendee/audience. My goal is not to display my expertise nor to brag about what I can do that the audience cannot. I always remember that my presentations are not about me, but about the audience. Which is why I do a lot of preparation prior to delivery – research, writing an appropriate storyline (you got it right – storyline), selecting the right demos, building test environments, writing demo scripts, rehearsing my presentation, etc. Yes, I rehearse my presentations and I say them out loud. I do the best that I can to make sure that the audience will be entertained, engaged, enlightened, educated and encouraged. If I’m doing a presentation on disaster recovery, I even plan out what type of disaster I will be simulating. Doing this helps me make sure that I don’t go beyond the time limit allotted for my session while covering all of the items that I intend to. I’d be very happy if the audience walks out of my presentation with something that they will do when they get back to their regular routine. I keep in mind what Dr. Nick Morgan, one of America’s top communication theorists and coaches, always says: “The only reason to give a speech is to change the world.” So, if you’ll be attending a presentation I’m delivering in the future, I assure you that you won’t be disappointed.
  2. Presentation time is limited: I hear presenters and speakers apologize for not covering the full content of their presentation. In some cases, you see them breeze through their slides as they get to the summary slide. If the presentation was rehearsed and scripted, they would know how long it takes to cover everything in their slides and add or remove content as necessary. Copying and pasting code is my way of saying, “I value your time so much that I would rather copy and paste code so that I can move on to more important stuff than let you suffer through every typographical error I would make while typing.” As I said, many won’t agree with me on this, but I need to focus on the more important content of the presentation.
  3. Focus on the important: Same as the previous point. Enough said.
(I did a presentation about delivering presentations last December for SQL Saturday 61 entitled Presentation WOW. You can download the slide deck from here.)
But what about disaster recovery? Yes, this is more than just a blog post about improving your presentation skills. The main reason why I copy and paste code, especially when doing a disaster recovery presentation, is to prove a point: you want to accomplish your task with the least amount of time and the least amount of effort. This is because every minute you waste is a minute against your recovery point objective (RPO) and recovery time objective (RTO). Imagine having to recover a SQL Server database by applying the latest FULL database backup and a series of LOG backups. The more LOG backups that you need to restore, the longer it will take. Plus, if somebody is behind your back watching every move you make and asking when the database will be back online, you wouldn’t want that to last longer than it needs to. Remember, in a disaster recovery incident, every second matters. For highly transactional databases that are used for main line-of-business applications, every minute lost is revenue lost. With these in mind, you would do everything you can possibly think of to recover the database as fast as you possibly can – even copy and paste code. In fact, I keep a dozen or so scripts in my repository that work as code generators – scripts that generate scripts. One of them is a script that reads through my backup history stored in the MSDB database and creates a series of RESTORE DATABASE/LOG statements that end up getting executed, so that I don’t have to figure out when the last LOG backup ran and restore the backups in sequence. Would you call this cheating because I copy and paste code? I don’t know about you, but I’d call this being creative when the rubber meets the road.
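
I’m not posting my actual generator scripts here, but a rough sketch of the idea looks something like this, assuming the most recent full backup plus every subsequent log backup for a hypothetical database named testDB (differential backups, multi-file media sets and point-in-time STOPAT handling are left out for brevity):

-- Generate RESTORE statements from the msdb backup history for a given database
DECLARE @dbName sysname, @lastFull datetime;
SET @dbName = 'testDB';

-- Find the most recent full database backup (type 'D')
SELECT @lastFull = MAX(bs.backup_finish_date)
FROM msdb.dbo.backupset bs
WHERE bs.database_name = @dbName AND bs.type = 'D';

-- Generate the RESTORE DATABASE statement for that full backup
SELECT 'RESTORE DATABASE [' + @dbName + '] FROM DISK = ''' +
       bmf.physical_device_name + ''' WITH NORECOVERY;'
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = @dbName AND bs.type = 'D'
  AND bs.backup_finish_date = @lastFull;

-- Generate a RESTORE LOG statement for every log backup (type 'L') taken afterwards
SELECT 'RESTORE LOG [' + @dbName + '] FROM DISK = ''' +
       bmf.physical_device_name + ''' WITH NORECOVERY;'
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = @dbName AND bs.type = 'L'
  AND bs.backup_finish_date > @lastFull
ORDER BY bs.backup_finish_date;

-- Run the generated statements in order, then finish with:
-- RESTORE DATABASE [testDB] WITH RECOVERY;

The point is not the exact script but that the restore sequence gets generated in seconds instead of being pieced together by hand while the clock is running.
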
And one more thing, I will be delivering this presentation but a bit more on the non-technical side of things in the upcoming PASS Community Summit 2011 in Seattle, WA on 11-14 October 2011. If you intend to attend, drop by my session so we can talk about it more.
Let me know your thoughts. Do you copy and paste code when recovering a database?