Clients, team members, and friends often ask me how to set up TEMPDB to use the D:\ drive on an Azure VM running SQL Server. Below are the steps I took to configure it on my VMs.
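The core of those steps is a pair of ALTER DATABASE commands. Here is a minimal sketch; the logical file names are the SQL Server defaults, while the D:\TempDB folder and the sizes are assumptions for illustration:

```sql
-- Move TEMPDB's data and log files to the local (temporary) D:\ drive.
-- The D:\TempDB folder must exist before the service restarts; because
-- the D:\ drive is wiped on redeploy, recreate it with a startup task.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');

-- The new file locations take effect only after the SQL Server service
-- is restarted; TEMPDB is recreated there on startup.
```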
At one of my previous DBA jobs, I encountered performance issues in our Azure PaaS offering. Clients were experiencing 30-40 second login times and 10-15 second save times, which is poor by any standard. Of course, everybody's first thought was, "THE DATABASE IS SLOW!!" After pulling up every chart and metric I could find, I finally proved to them that it was not the database.
Continue reading Azure DBaaS Alerts
One of the sessions I give at PASS events is about configuring your on-premises SQL Server instance. I have also been working with Database-as-a-Service (DBaaS) for a while now and thought it would be helpful to do a session on configuring DBaaS for beginners. The purpose of this blog post is to list some of the things I believe are crucial for a first-time deployment.
Continue reading Configuring AzureDB DBaaS
As database administrators, we are always looking for ways to automate our daily processes. SQL Server Agent has always been a great tool for doing this, whether for scheduling regular maintenance or administrative jobs. For those of you making the leap to the PaaS offering of Azure SQL databases, you will quickly discover that SQL Server Agent is no longer a feature. For those of you starting to panic, thinking you will now have to wake up at 2:00 AM to manually run your weekly maintenance or nightly administrative job, don't worry! This is where Azure Automation comes to save the day. Azure Automation brings a PowerShell workflow execution service to the Azure platform, letting you automate those maintenance and administrative tasks entirely within the Azure portal and fill the role of SQL Server Agent. To demonstrate how you can leverage Azure Automation, I will walk through a common request I have encountered with many clients: scheduling a stored procedure execution.
My previous blog post covered the SSIS Lookup task and how it really works. Now that I have shown that the Lookup task shouldn't be used for one-to-many or many-to-many joins, let's take a look at the Merge Join transformation. If you follow along with this blog, you will learn a little tip that eliminates the need to add a Sort transformation to your data flow task.
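The tip hinges on telling SSIS that the data is already sorted. A hedged sketch of the idea (the table and column names are illustrative): push the sort into the source query itself, then, in the Advanced Editor of the source component, mark the output as sorted so the Merge Join accepts it without a Sort transformation.

```sql
-- Illustrative OLE DB Source query: let SQL Server do the sorting so the
-- data flow does not need a Sort transformation before the Merge Join.
SELECT CustomerID, FirstName, LastName
FROM   dbo.Customer
ORDER  BY CustomerID;
-- In the source's Advanced Editor, set IsSorted = True on the output and
-- SortKeyPosition = 1 on the CustomerID column to match this ORDER BY.
```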
Circa 1988, I entered the technology space, writing SAS/JCL on an IBM 3270 for the Federal Government. There was something missing, though. Countless merges, duplicate-data filtering, and large data sets all seemed like a horrible waste of resources and, more importantly, time. On top of that, SAS user guides filled my entire cubicle, making it nearly impossible to evolve my skill set quickly.
One of the main benefits of an AlwaysOn Availability Group is being able to read from the secondary replicas. However, Read-Only Routing is not configured automatically when you first build your AlwaysOn Availability Group. To take full advantage of read-only connections against your secondary databases, you will have to configure Read-Only Routing yourself.
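At its core, the configuration is a few ALTER AVAILABILITY GROUP statements. A hedged sketch follows; the availability group name, replica names, domain, and port are assumptions for illustration:

```sql
-- 1. Allow read-only connections on the secondary and give it a routing URL.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (
        ALLOW_CONNECTIONS    = READ_ONLY,
        READ_ONLY_ROUTING_URL = N'TCP://SQLNODE2.corp.local:1433'));

-- 2. On the primary, define the routing list used while it holds that role.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = ('SQLNODE2', 'SQLNODE1')));

-- Clients are routed only when they connect to the AG listener with
-- ApplicationIntent=ReadOnly and an explicit database name.
```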
Oftentimes, clients present me with queries containing a myriad of joins and no table aliases. To improve performance, I often have to create temporary tables from pieces of the query, and sometimes they need to be created manually rather than with a SELECT INTO. Manually searching through all of the tables in the GUI to find the proper column information can be quite a pain and a waste of time. To make better use of my time, I created a simple query that returns where a column resides, along with everything you need to know about the column and more.
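A minimal sketch of that kind of query, built on the system catalog views (the column name in the filter is an illustrative placeholder):

```sql
-- Find every table containing a given column, with the details needed
-- to recreate it manually in a temp table.
SELECT s.name        AS schema_name,
       t.name        AS table_name,
       c.name        AS column_name,
       ty.name       AS data_type,
       c.max_length,                 -- bytes; -1 means MAX types
       c.precision,
       c.scale,
       c.is_nullable
FROM   sys.columns c
JOIN   sys.tables  t  ON t.object_id     = c.object_id
JOIN   sys.schemas s  ON s.schema_id     = t.schema_id
JOIN   sys.types   ty ON ty.user_type_id = c.user_type_id
WHERE  c.name LIKE '%CustomerID%'    -- the column you are hunting for
ORDER  BY s.name, t.name, c.column_id;
```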
I was recently tasked with a project to consolidate several SQL Server 2005 database servers down to one existing SQL Server 2012 cluster. While working on this project, I found that one of the servers being consolidated was running SQL Server Reporting Services (SSRS), but the existing cluster was not configured for Reporting Services. SSRS is not cluster aware, so adding this feature to an existing clustered instance is not straightforward and will likely fail the "Existing clustered or cluster-prepared instance" rule check. Well, today is your lucky day: I'm going to show you exactly how to get past this error and on your way to making SSRS cluster aware!
In the world of big data, we are always trying to lighten our storage footprint. Luckily for us, Microsoft has introduced data compression as an enterprise-level feature to aid in conserving storage. Not only can you save on storage, you will also dramatically reduce the number of I/O requests. Because the disk subsystem is the slowest part of most environments, needing fewer I/O requests to retrieve data leads to an increase in performance.
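A hedged sketch of putting this to work: estimate the savings first, then rebuild with compression. The schema and table names are illustrative assumptions:

```sql
-- Estimate how much space PAGE compression would save on a table
-- before committing to the rebuild.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'Sales',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- Apply PAGE compression (ROW is the lighter-weight alternative).
ALTER TABLE dbo.Sales
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

PAGE compression includes ROW compression and typically saves more space at a higher CPU cost, so the estimate step is worth running for both settings.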