First thoughts on PDW from Microsoft

If you think of a traditional SMP Data Warehouse as a big humpback whale and the new Parallel Data Warehouse as a pod of killer whales, I think you get a clear picture of what Microsoft is attempting to achieve. The old humpback is slow and fat; it can only grow to a certain size before it really runs out of steam, and it trundles slowly through the water feeding on the smallest of sea life (krill), which are in fact your data loads, returning result sets at a slow, steady rate. The killer whale pod, on the other hand, can grow and grow; it is very nimble, can take large chunks of new data, and can respond to requests in a very timely manner.

The issue for companies when evaluating PDW against the old single-server Data Warehouse is the hardware cost. Realistically, I don't think this is a real issue. If we look back ten years, only the biggest companies were involved in Data Warehousing; they wanted, and needed, to get answers from the years of information they had collected and stored. Today even small businesses are using data warehouses to gain the same insight from their data. The amount of information that companies collect has increased and will continue to increase, so PDW has a real place in the market as companies want answers faster and faster. MDs no longer want to wait a week, or even a day, to know what is happening.

The beauty of PDW from an administrator's point of view is that Microsoft has built in redundancy at every level, and the management console monitors all the servers in the appliance to alert admins to issues and problems affecting the system.


House fire caused by faulty power supply

This is my house after a fire caused by a faulty/dusty power supply. The CFA were great in getting the fire out before it spread to adjoining sheds and trees. So take the time to vacuum out your power supplies and computers regularly.



Week 2 Spray

I have been doing some consulting for a small company with 60 sites, each site running a system on either MSDE or SQL Express 2005. Now you are not going to believe this, but the login the application uses to connect is SA with a blank password, and worse still, all this information is stored in an ini file. It really worries me. All of these computers are open to the internet, and I have been assisting in setting them up on a version of my basic log shipping.

Now guys, we all know that leaving a blank password around is dangerous, storing it in an ini file is just as bad, and using the SA account is worse still. Security: one day people will wake up and smell the roses, because so far they have been buried in manure.
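If you inherit a fleet like this, the first step is simply finding out how bad it is. Here is a minimal sketch of an audit script that flags SA logins and blank passwords in an ini file; the section and key names ("Database", "User", "Password") are invented for illustration, so adjust them to whatever the application actually writes.

```python
import configparser

def audit_connection_ini(ini_text):
    """Flag risky SQL Server credentials stored in an ini file.

    The section and key names used here are hypothetical; real
    applications will use their own layout.
    """
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    findings = []
    for section in parser.sections():
        user = parser.get(section, "User", fallback="")
        password = parser.get(section, "Password", fallback="")
        if user.lower() == "sa":
            findings.append(f"{section}: application connects as SA")
        if password == "":
            findings.append(f"{section}: blank password stored in plain text")
    return findings

# A sample of the kind of file described above.
sample = """
[Database]
Server=SITE01
User=SA
Password=
"""
print(audit_connection_ini(sample))
```

Run against 60 sites' worth of ini files, something this simple at least tells you where to start.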


New Weekly Series the SPRAY

So what is a spray? In Australia, coaches talk to the team at quarter-time, half-time and three-quarter-time, and sometimes this talk becomes a spray.

Here is a typical one delivered by the coach to a local football team.

“You blokes are jokes! Don’t you want to win? You need to go into the room of mirrors and have a good, hard look at yourselves. You have let the team down! You’ve let the club down! You have let Tigers fans down! You have pissed on the Tigers’ jumper. You have dudded your families and, most importantly, you bludgers have let the whole of the Barossa Valley down.”

So I will be spraying at lots of different issues, all faced by those of us working with SQL Server.

My First Spray is to Microsoft

Why, in all this time, have you not fixed the security around VDI? For those that don't know, all third-party backup tools use the old legacy technology that is VDI (the Virtual Device Interface) to communicate with SQL Server and perform their backups. The problem is that the user/account issuing the backup commands must have sysadmin rights. Under database roles we have db_backupoperator, which any user can be made a member of, but a user without sysadmin rights cannot perform a VDI backup, only a native backup.

This security issue has been around since SQL Server 7, and it's time Microsoft fixed it, possibly by introducing a server role for backup operations that an account can be assigned to, allowing a senior-level DBA to lock down the server even further.

It would then be possible to stop unauthorized database backups, monitor specific accounts doing backups and remove the need for a user to have sysadmin privileges unless required.




The Pitfalls of Log Shipping on SQL Server

Common incorrect assumptions about Log Shipping I’ve seen.

Many people think log shipping is a type of high-availability solution, a much cheaper alternative to clustering. However, log shipping does not replace clustering. Clustering provides fail-over through redundancy of the hardware and the OS, not the SQL data. If you want high availability for the SQL data, look at replication or synchronous mirroring. Log shipping is really disaster recovery: it enables you to recover from a disaster, whether that disaster is external or internal.

Some think log shipping keeps the two servers in sync. Wrong! Log shipping occurs at the database level, not at the SQL Server instance level, so anything changed in other areas of the server, such as configuration changes or OS changes, will be unknown to the database that is being log shipped.

What is Log Shipping?

Log shipping is a means of establishing a warm standby solution by using SQL Server's native tools or third-party utilities to transfer and restore transaction logs between databases.

Log shipping is a Disaster Recovery strategy for databases and nothing more. It does have some side benefits. For instance, the database that is on the standby server can be used for reporting, which is a good thing. If a production database fails, all you have to do is point the users to the standby server, and you are all set. Well, it’s not really that simple, but it comes close if you put enough effort into your log shipping setup.

So why use Log Shipping?

In today’s world, business cannot afford disruptions to applications, web sites or information. Most of the data stored for these areas is found in backend databases. So as good DBAs we look at ways of making our data safer by using standby servers to have a copy of the database either onsite or offsite. If the worst occurs we can swap to the standby server and be back up and running in next to no time.

Log shipping is actually the process of backing up the transaction log for a given database and sending it to another server, where we restore it. If the time between backups is minimal (the default most DBAs tend to use is 15 minutes) then we only risk a limited amount of data loss if something goes wrong.
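The cycle above (back up the log, ship the file, restore the files strictly in sequence) can be sketched with plain files. This is only a simulation of the moving parts, not SQL Server itself; the directory layout and `db_NNNN.trn` naming scheme are invented for illustration.

```python
import os
import shutil
import tempfile

def backup_log(primary_dir, seq, payload):
    """Write a numbered 'log backup' file on the primary."""
    path = os.path.join(primary_dir, f"db_{seq:04d}.trn")
    with open(path, "w") as f:
        f.write(payload)
    return path

def ship(primary_dir, standby_dir):
    """Copy any backup files not yet present on the standby share."""
    for name in sorted(os.listdir(primary_dir)):
        dest = os.path.join(standby_dir, name)
        if not os.path.exists(dest):
            shutil.copy2(os.path.join(primary_dir, name), dest)

def restore(standby_dir):
    """Apply the shipped logs strictly in sequence order."""
    applied = []
    for name in sorted(os.listdir(standby_dir)):
        with open(os.path.join(standby_dir, name)) as f:
            applied.append(f.read())
    return applied

primary = tempfile.mkdtemp()
standby = tempfile.mkdtemp()
for seq, change in enumerate(["insert A", "insert B", "insert C"]):
    backup_log(primary, seq, change)
ship(primary, standby)
print(restore(standby))
```

The point of the sorted sequence numbers is the same as the log chain in SQL Server: skip or reorder one file and every restore after it fails.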

The Pitfalls of Log Shipping that DBAs forget.

Imagine you’ve gone to the trouble to implement log shipping and now it is time to bring the standby server online due to a problem with the primary server. You know you are safe. You’ve gone through the documentation from Microsoft and everything looks sweet, right? I’m afraid to tell you there is a lot more to it than just shipping the logs.

Let’s look at what log shipping does not do. It does not ship any SQL Server objects: the SQL Agent jobs, the maintenance plans, the server security settings and so on. It only ships the DATABASE.

So to bring your standby server online you need to run through everything that has occurred at the server level that will affect that database.

The Problems that DBAs face with Log Shipping.

There are two main problems DBAs face with log shipping.

1. A standby server needs to be managed, updated, and maintained, so you need to allow time for this, and you need to plan for the cost of replacing the standby server as the equipment ages.

2. The process of failover is not automatic. You may even want to try restoring the tail of the log before you bring the database online, thereby reducing data loss to nothing.

We can overcome a lot of these issues by restoring master and msdb on the standby server. But wait: what if you are using the standby server to protect other SQL Servers or other databases? It all starts getting very tricky. Consider this: SQL Server logins will need to be synchronized from the primary server to the standby server.
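That login-synchronization gap can be made concrete: before failover you need to know which logins exist on the primary but not on the standby. A minimal sketch, with invented login lists; in practice you would pull the real names from sys.server_principals, and move them with Microsoft's sp_help_revlogin script so the SIDs and password hashes come across intact.

```python
def missing_logins(primary_logins, standby_logins):
    """Return logins present on the primary but absent on the standby.

    Comparison is case-insensitive, which matches SQL Server's
    default collation behaviour for login names.
    """
    standby_set = {name.lower() for name in standby_logins}
    return [name for name in primary_logins
            if name.lower() not in standby_set]

# Hypothetical login lists for a primary and its standby.
prod_logins = ["sa", "AppUser", "ReportReader", "DOMAIN\\Payroll"]
dr_logins = ["sa", "appuser"]
print(missing_logins(prod_logins, dr_logins))
```

Every name this reports is a user who will be locked out the moment you fail over, so it belongs in your regular standby health check, not just in the failover runbook.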

I love log shipping for what it can do, but with the advent of Virtual Machines, I have been looking at how to restore the entire server or critical parts to virtual.

If you break your SQL Server into parts, you can see how a virtual image of the server can work very easily for you. The server hardware usually includes the system and application drives, since the data and log drives are normally on a SAN. So if my server breaks, I need to identify whether it's just the system and application drives that I need to get back up and running, or the SAN for my data and logs, or both. (Of course, if you are a smaller, non-SAN shop, the data is local, so it really is the restoring of both areas at one time.)

In these times where companies are trying to reduce infrastructure costs and postpone the purchase of new equipment, we as DBAs need to start looking into newer technologies to resolve our protection needs. We need to be more eco-friendly by using less electricity and having a smaller footprint on the environment. One way I have found is using AppAssure's Replay AppImage for disaster recovery. It has the ability to restore from its backup images either to a virtual machine or to new hardware in minutes. It also has the unique ability to continuously restore to a virtual machine that can be powered on when required. So do yourself and the environment a favour and visit their website today.