Software boundaries and limits for SharePoint 2013 – Content database limits

One of the most oft-quoted Microsoft TechNet articles for architecting and designing SharePoint 2013 environments is Software boundaries and limits for SharePoint 2013 (and previously SharePoint Server 2010 capacity management: Software boundaries and limits). This is an excellent article that explicitly details the upper boundaries and thresholds that have been tested and are supported by SharePoint across all environments. Most of the boundaries and thresholds, such as 30,000,000 items per list, will likely never become a problem for most organisations. They do, however, need to be monitored and sized accordingly.

The one section that is almost always included as part of any SharePoint design is the section on Content Database limits, and in particular the general usage content database size scenario. It states:

We strongly recommended limiting the size of content databases to 200 GB, except when the circumstances in the following rows in this table apply.

If you are using Remote BLOB Storage (RBS), the total volume of remote BLOB storage and metadata in the content database must not exceed this limit.

and it goes on to discuss IOPS per GB, high availability, disaster recovery, and so on. There are also references to Remote BLOB Storage, which is of huge interest to me and my company’s product, Stepwise.

Whenever I meet with a client to discuss their architecture, one of the first points I hear is “and of course we are planning to split up our content databases so they don’t get bigger than 200 GB”. And that’s when I ask them “why?”, and they say “because Microsoft”. And if you read the article, it is really quite clear – stick to 200 GB. But here is why I think the article is misinformed. Let me go through the scenario line by line.

Line 1: “We strongly recommended limiting the size of content databases to 200 GB, except when the circumstances in the following rows in this table apply.”

I’ll get to this later. Suffice to say, if you have more than 200 GB of content, you probably meet all the criteria that say you *can* have a content database larger than 200 GB.

Line 2: “If you are using Remote BLOB Storage (RBS), the total volume of remote BLOB storage and metadata in the content database must not exceed this limit.”

Remote BLOB Storage is a fully supported Microsoft solution that enables electronic files (Binary Large OBjects, or BLOBs) to be extracted from the content database and stored on the file system. Stepwise is an example of a Remote BLOB Storage system. BLOBs take up the majority of any content database, so we often find that databases shrink by as much as 95% after implementing Stepwise. So your 200 GB database just became about 10 GB in size.
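The shrinkage is simple to estimate as a back-of-envelope calculation. A minimal sketch – the 95% BLOB fraction is an assumption drawn from our own deployments, and your content mix will vary:

```python
def post_rbs_size_gb(db_size_gb, blob_fraction=0.95):
    """Estimate the content database size remaining after BLOBs are
    externalised via RBS. blob_fraction is the assumed share of the
    database occupied by BLOB data (varies by content mix)."""
    return round(db_size_gb * (1 - blob_fraction), 2)

# A 200 GB database, assuming 95% of it is BLOB data:
print(post_rbs_size_gb(200))  # → 10.0
```

Document-heavy sites tend to sit at the high end of the BLOB fraction; list-heavy sites with little file content will see far less benefit.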

Line 3: “Content databases of up to 4 TB are supported when the following requirements are met: Disk sub-system performance of 0.25 IOPS per GB. 2 IOPS per GB is recommended for optimal performance.”

If you are working with a medium-to-large organisation, and/or have a lot of data, you are probably using enterprise storage such as a SAN, and most SANs offer this level of performance. Check with your enterprise storage team, but you should be able to tick this one.
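Working through the arithmetic for the 4 TB upper bound makes the requirement concrete. A quick sketch of the numbers to take to your storage team:

```python
def required_iops(db_size_gb, iops_per_gb):
    """IOPS the disk sub-system must sustain for a content database
    of the given size, at the stated IOPS-per-GB rate."""
    return db_size_gb * iops_per_gb

db_size_gb = 4096  # 4 TB expressed in GB

print(required_iops(db_size_gb, 0.25))  # supported minimum → 1024.0
print(required_iops(db_size_gb, 2))     # recommended for optimal performance → 8192
```

Even the recommended 8,192 IOPS for a full 4 TB database is well within reach of a mid-range SAN, and trivial for flash-backed storage.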

Line 4: “Content databases of up to 4 TB are supported when the following requirements are met: You must have developed plans for high availability, disaster recovery, future capacity, and performance testing.”

As with Line 3 above, any medium-to-large organisation probably has this well in hand. They will have failover plans, multiple servers to handle load and loss of service, possibly secondary and even tertiary redundant sites, and will be managing their infrastructure on a daily basis. This is usually a tick.

Line 5: “Requirements for backup and restore may not be met by the native SharePoint Server 2013 backup for content databases larger than 200 GB. It is recommended to evaluate and test SharePoint Server 2013 backup and alternative backup solutions to determine the best solution for your specific environment.”

I worked with a team to set up an enterprise backup solution, and this is what I asked them to add to their backups.

  1. Farm configuration database
  2. Content database(s)
  3. Virtual server drives (C:, D:, etc.)

and if using Stepwise:

  1. Stepwise administration database
  2. File system(s) used to store BLOB data

You can include search databases in this list as well, if you have them and they are worthwhile to back up. I have a complete article on backup/restore for Remote BLOB Storage using Stepwise.

Line 6: “It is strongly recommended to have proactive skilled administrator management of the SharePoint Server 2013 and SQL Server installations.”

You need a SQL person and a SharePoint person, or at least someone with skills in these areas if you don’t have a dedicated resource.

Line 7: “The complexity of customizations and configurations on SharePoint Server 2013 may necessitate refactoring (or splitting) of data into multiple content databases. Seek advice from a skilled professional architect and perform testing to determine the optimum content database size for your implementation. Examples of complexity may include custom code deployments, use of more than 20 columns in property promotion, or features listed as not to be used in the over 4 TB section below.”

This is more of a configuration control issue than a content database issue. In plain English: if you install things that will create dependencies and/or impact future upgrades, separate them out into their own content database. That will limit the impact the customisation and configuration can cause.

Line 8: “Refactoring of site collections allows for scale out of a SharePoint Server 2013 implementation across multiple content databases. This permits SharePoint Server 2013 implementations to scale indefinitely. This refactoring will be easier and faster when content databases are less than 200 GB.”

The issue here is the time refactoring will take. If your refactoring takes 3 hours instead of 2, will that be an issue? What about 20 hours instead of 16? For most organisations this is a task that is not performed often. Large content migrations can be done out of hours, with little to no downtime. You will spend more time planning the migration and fixing things like link changes than actually performing the task itself.

Line 9: “It is suggested that for ease of backup and restore that individual site collections within a content database be limited to 100 GB. For more information, see Site collection limits.”

Most enterprise storage and backup solutions will not find this a problem, but it should be included in your disaster recovery calculations. The length of time to restore your SharePoint environment is the sum of restoring all the individual components. If your content database takes 8 hours to restore, then that is how long your environment could be down.

This, of course, is relevant if you only need to restore a single content database because it has become corrupted. But what happens if you need to restore all your site collections? Then it won’t matter whether your content is split across 2 content databases or 200 – the total time to restore them will be the same.
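The point above is just arithmetic: restoring the same total volume of data sequentially takes the same time regardless of how it is partitioned. A minimal sketch, assuming a hypothetical restore rate of 100 GB per hour and sequential restores:

```python
def total_restore_hours(db_sizes_gb, restore_rate_gb_per_hour=100):
    """Total time to restore a set of content databases sequentially.
    The restore rate is an assumption; measure yours with a test restore."""
    return sum(db_sizes_gb) / restore_rate_gb_per_hour

one_large   = [800]                   # a single 800 GB content database
four_small  = [200, 200, 200, 200]    # the same content split four ways

print(total_restore_hours(one_large))   # → 8.0 hours
print(total_restore_hours(four_small))  # → 8.0 hours
```

Splitting only helps when you can restore databases in parallel, or when a single corrupted database lets you restore a 200 GB slice instead of the whole 800 GB – which is the single-database scenario, not the full-farm one.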

So let’s summarise:

  1. Don’t let the 200 GB recommendation control your design.
  2. Content databases can go beyond 200 GB without issue.
  3. You need to manage your environment properly and effectively.
  4. Backup and restore works – make sure you document and test the process.
  5. If your environment can handle it, everything will be OK.



