Archive for the ‘Administration’ Category

Changing SharePoint 2013 Site Collection URLs from Host-Named to Path-Based

September 4, 2015

Huge thank you to Todd Klindt for saving me many, many tears.

I have recently installed the SharePoint 2013 August 2015 Cumulative Update, and one of the features available as of the February 2015 Cumulative Update is the [site].Rename() method. This allows you to change the URL of your site from path-based (http://theweb/sites/myweb) to host-named (http://newweb.ishere). It’s a lot better than having to use Backup-SPSite and Restore-SPSite!

We had some databases that had been dismounted from their web application via Dismount-SPContentDatabase. I attached them to their new web application via:

Mount-SPContentDatabase -WebApplication http://the.webapp -Name DB_SiteCollection

This adds the content database to the web application and makes it available. In my case it became the “root” site collection as it was previously attached to a host-header web application (as opposed to a host-named site collection).

I then accessed the site via:

$db = Get-SPContentDatabase DB_SiteCollection
$site = $db.Sites[0]

My first attempt at renaming the site:

$site.Rename("http://new-hnsc-url")

gave me the following error:

Exception calling “Rename” with “1” argument(s): “Content database does not support renaming to and from host header site collections. Please upgrade content database to version ‘|0’.”

Oops! I forgot to upgrade it! No problem, just running the command:

Get-SPContentDatabase DB_SiteCollection | Upgrade-SPContentDatabase

fixed up the version of the content database. I then tried my rename command again, and this time got:

Exception calling “Rename” with “1” argument(s): “Cannot rename a site collection with recycled items. Empty the site recycle bin of site collection http://new-hnsc-url and retry.”

OK that makes sense. Recycle bin content would be pointing to different URLs after a site rename. Not sure why recycle bin content can’t be updated at the same time but whatever:

$site.RecycleBin.DeleteAll()

and then the previous $site.Rename() magic worked perfectly. That’s days of work saved thanks to a nice new function.
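For reference, the whole sequence ends up looking like this (a minimal sketch using the example names from above; your database name, web application and target URL will differ):

# Attach the dismounted content database to its new web application
Mount-SPContentDatabase -WebApplication http://the.webapp -Name DB_SiteCollection

# Upgrade the database so it supports host-named renames
Get-SPContentDatabase DB_SiteCollection | Upgrade-SPContentDatabase

# Get the site collection, empty its recycle bin, then rename it
$db = Get-SPContentDatabase DB_SiteCollection
$site = $db.Sites[0]
$site.RecycleBin.DeleteAll()
$site.Rename("http://new-hnsc-url")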

HOWTO Fix SharePoint 2013 and Workflow Manager Configuration

August 14, 2015

The configuration and integration between SharePoint 2013 and Workflow Manager is hard. Really hard – when it doesn’t work straight away. Workflow Manager is a stand-alone product, integration between SharePoint and WM is all done via PowerShell, diagnosing issues between WM and SharePoint is problematic, and there doesn’t seem to be a concrete method to guarantee it will work. Many articles say “just rebuild your SP farm and/or Workflow farm”. Ouch.

Let’s work backwards to solve the problem. SharePoint Designer 2013 is looking for two configuration items to be completed before it will let you create SharePoint 2013 workflows.

  1. The Workflow service proxy is configured and working (via Central Admin)
  2. The Workflow Service feature is configured and working (via PowerShell)

Both of these components must be enabled and functioning in order to tick the boxes in SharePoint Designer 2013. If either fails, SharePoint Designer will not allow you to create SharePoint 2013 workflows.

The Workflow service proxy (1) connects to the Workflow services system. The Workflow Service feature (2) is a SharePoint feature that must be enabled and configured via SharePoint PowerShell for the site collection. Let's go through all the components.

Checking Workflow Manager Service Configuration

Use the following Workflow Manager PowerShell command to view the server configuration:

Get-WFFarm | format-list

This should return the following:

RunAsAccount : <domain\runas account>
AdminGroup   : <admin group>
HttpPort     : 12291
HttpsPort    : 12290

Endpoints    : {https://my.workflow.url:12290}

You should be able to use your browser to access the endpoint and get back an XML configuration page.
To get the farm status, use:

Get-WFFarmStatus | format-list

This should return the following:

HostName      : <hostname>
ServiceName   : WorkflowServiceBackend
ServiceStatus : Running

HostName      : <hostname>
ServiceName   : WorkflowServiceFrontEnd
ServiceStatus : Running

The ServiceStatus “Running” is the important part here.
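If either service is not Running, a first step worth trying is to bounce Workflow Manager on the affected host (a sketch; I believe Stop-WFHost/Start-WFHost act on the local machine, run from the Workflow Manager PowerShell console):

# Stop and restart the Workflow Manager services on this host, then re-check
Stop-WFHost
Start-WFHost
Get-WFFarmStatus | format-list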

Checking SharePoint 2013 Workflow Service Configuration

Use the following SharePoint 2013 PowerShell commands to view the Workflow Service configuration:

Get-SPWorkflowConfig -SiteCollection [sitecollectionurl]

Get-SPWorkflowServiceApplicationProxy | format-list

Validate that the endpoints are correct, and that you can indeed navigate to them.
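If you'd rather script that check than click around in a browser, a quick request against the endpoint works (a sketch; assumes PowerShell 3.0+ for Invoke-WebRequest and Windows authentication on the endpoint):

# Confirm the workflow endpoint responds; a 200 with an XML body is healthy
$response = Invoke-WebRequest -Uri "https://my.workflow.url:12290" -UseDefaultCredentials
$response.StatusCode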

Checking SharePoint 2013 Workflow Proxy Application

The workflow proxy application is available through the Central Admin, Manage Service Applications interface. Clicking on the Workflow service application should show you the following:

(Screenshot: Workflow - Service Application Proxy Status)

Fixing SharePoint 2013 Workflow Service Feature

The following SharePoint PowerShell command forces the Workflow Service feature to be enabled:

Enable-SPFeature -Identity WorkflowServiceStore -Url <site collection url> -Force

I received this error when I didn’t use the -Force parameter:

Enable-SPFeature: The field with id {GUID} defined in feature {GUID} was found in the current site collection or in a subsite.
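Once the command succeeds, you can confirm the feature is active (a quick check, assuming the same site collection URL):

# List the WorkflowServiceStore feature if it is enabled on the site collection
Get-SPFeature -Site <site collection url> | Where-Object { $_.DisplayName -eq "WorkflowServiceStore" }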

SharePoint 2013 CU2 Upgrade Issues with Workflow Manager

During a recent upgrade of our SharePoint farm I noted the following warning in the upgrade log file:

WARNING Error UpgradeWorkflowServiceActivities.  
Run Copy-SPLocalActivitiesToWorkflowService manually to complete this action.  
Error: Microsoft.Workflow.Client.WorkflowEndpointNotFoundException: Unable to connect to the remote 
service at http://<servername>:12291/SharePoint/. See InnerException for more details.

It turned out that during the upgrade the IIS service was offline, so the workflow management endpoint was also offline. There doesn’t actually seem to be a Copy-SPLocalActivitiesToWorkflowService PowerShell command, but there is a Copy-SPActivitiesToWorkflowService. I ran that command with the following details (I’m using HTTP in my test environment):

$cred=[System.Net.CredentialCache]::DefaultNetworkCredentials
Copy-SPActivitiesToWorkflowService -WorkflowServiceAddress "http://<servername>:12291/SharePoint" -Credential $cred -Force $true

Addendum: Workflow Manager and Service Bus CU2

Service Bus 1.0 Cumulative Update 2 (CU2)

Workflow Manager 1.0 Cumulative Update 2 (CU2)

Shared Properties Not Available for Selection in Document Set Content Type

May 15, 2015

We are using Document Set-based content types to manage documents in our SharePoint 2013 collaboration environment. Document Sets include the ability to use Shared Properties. Shared Properties allow you to pre-set values that will be pushed down to any document within the Document Set, and are great for discovery and automatic settings.

One of our Document Set-based content types wasn’t letting us select all of our site columns as Shared Columns. Even though the Site Columns were added to the content type, they didn’t all appear. Of the 8 site columns added to the content type, only 3 were available for selection as Shared Columns. You can view Shared Columns within any content type based on a Document Set by selecting the Document Set settings link in the Content Type edit screen.

The issue was identified after some Internet sleuthing. Some of our site columns were set not to display in the Edit form, and being visible in the Edit form is a requirement for a column to be selectable as a Shared Property.

Background information is here:

http://sharepoint.stackexchange.com/questions/70156/sharepoint-2010-document-set-shared-column-settings-arent-displaying-all-colum

The site column property for us was ShowInEditForm=FALSE. Changing this site column property allowed us to select it as a Shared Property. The following PowerShell commands set the property:

$web = Get-SPWeb http://my-web-url
$web.Fields["InternalFieldName"].ShowInEditForm = $true
$web.Fields["InternalFieldName"].Update()

After setting this property the field showed up in the Shared Columns list immediately.
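If you want to find any other columns affected the same way, this sketch lists every field on the web where ShowInEditForm is turned off ($web as in the snippet above):

# List fields hidden from the Edit form, and therefore unavailable as Shared Columns
$web.Fields | Where-Object { $_.ShowInEditForm -eq $false } | Select-Object InternalName, Title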

Page File impacts on SharePoint 2013

April 29, 2015

We were getting very high numbers of errors on our SharePoint 2013 crawl servers, in the 10,000s in some cases. After contacting Microsoft, we were advised that our page files weren’t big enough. We had 4GB allocated in static mode, so we changed it to automatic mode. The general rule of thumb is 1.5x RAM.
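Switching to automatically managed page files can be scripted rather than clicked through (a sketch using WMI; run it elevated, and a reboot is needed for the change to take effect):

# Hand page file sizing over to Windows
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $true
$cs.Put()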

This fixed our search problem.

Reference: http://blogs.msdn.com/b/chaun/archive/2014/07/09/recommendations-for-page-files-on-sharepoint-servers.aspx

HOWTO Check the SharePoint 2013 Crawl Status using SQL Server

April 13, 2015

The Content Sources page in the SharePoint 2013 Search Service Application displays the current status of crawls on your farm. A lot of my crawls were stuck in the Starting or Completing state and were never marked as finished, even after running for over 48 hours on small data sets. I used the following script, run against the Search Service Application administration database (which holds the MSSCrawlHistory table), to find out the exact state of the crawls:

SELECT [CrawlID]
 ,[CrawlType]
 ,[ContentSourceID]
 ,[Status]
 ,[SubStatus]
 ,CASE
 WHEN (Status=0 and SubStatus=0) THEN 'New crawl, requesting start'
 WHEN (Status=1 and SubStatus=1) THEN 'Starting, Add Start Address(es)'
 WHEN (Status=1 and SubStatus=2) THEN 'Starting, waiting on Crawl Component(s)'
 WHEN (Status=4 and SubStatus=1) THEN 'Crawling'
 WHEN (Status=4 and SubStatus=2) THEN 'Crawling, Unvisited to Queue'
 WHEN (Status=4 and SubStatus=3) THEN 'Crawling, Delete Unvisited'
 WHEN (Status=4 and SubStatus=4) THEN 'Crawling, Wait for All Databases'
 WHEN (Status=5 and SubStatus=0) THEN 'Failed to Start (eg Another Crawl Already Running)'
 WHEN (Status=7) THEN 'Resuming'
 WHEN (Status=8 and SubStatus=1) THEN 'Pausing, Waiting on crawl component(s) to pause'
 WHEN (Status=8 and SubStatus=2) THEN 'Pausing, complete pause'
 WHEN (Status=9) THEN 'Paused'
 WHEN (Status=11 and SubStatus=0) THEN 'Completed'
 WHEN (Status=12) THEN 'Stopped'
 WHEN (Status=13 and SubStatus=1) THEN 'Stopped, waiting on crawl component(s) to stop'
 WHEN (Status=13 and SubStatus=2) THEN 'Stopping, complete stop'
 WHEN (Status=14 and SubStatus=1) THEN 'Completing, waiting on crawl component(s) to complete'
 WHEN (Status=14 and SubStatus=2) THEN 'Completing'
 WHEN (Status=14 and SubStatus=4) THEN 'Completing, get deletes pending'
 END AS [StatusAsText]
 ,[Request]
 ,[RequestTime]
 ,[StartTime]
 ,[EndTime]
 ,[DeleteUnvisitedStart]
 ,[IsCompleting]
 ,[StartAddressList]
 FROM [MSSCrawlHistory]
ORDER BY StartTime DESC

I was getting “…waiting on crawl” messages against my content sources. One of my crawl servers had a hung search service. Restarting the search service on that server finished off the crawl, and the status reported Completed straight away. Much better!
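On SharePoint 2013 the restart itself is a one-liner on the affected crawl server (OSearch15 is the SharePoint Server Search 15 Windows service):

# Restart the hung search service
Restart-Service -Name OSearch15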


HOWTO Remove the Lock on a SharePoint File

April 10, 2015

A user on one of our SharePoint sites had a file with an Exclusive lock from their laptop, and couldn’t remove the lock. After checking a few sites for how to unlock it:

http://sharepoint.stackexchange.com/questions/122067/release-client-side-lock-by-javascript

I cobbled together this PowerShell function based on that. By default it displays the lock status of a file, and if you pass -unlock $true it will also release the lock.

Add-PSSnapin microsoft.sharepoint.powershell -ErrorAction SilentlyContinue

function SPUnlockFile()
{
    param(
        [Parameter(Mandatory=$True)][string] $webUrl,
        [Parameter(Mandatory=$True)][string] $fileUrl,
        [Parameter()][bool] $unlock = $false
    )

    # Get web and file objects
    $web = Get-SPWeb $webUrl
    $file = $web.GetFile($fileUrl)

    # Check if the file is checked out
    if ($file.CheckOutType -ne "None")
    {
        Write-Host "File is Checked Out to user: " $file.CheckedOutByUser.LoginName
        Write-Host "Checked Out Type: " $file.CheckOutType
        Write-Host "Checked Out On: " $file.CheckedOutDate
        # To release the checkout, ask the checked-out user to check in
        #$file.Checkin("Checked in by Administrator")
        #Write-Host "File has been Checked-In"
    }

    # Check if the file is locked
    if ($file.LockId -ne $null)
    {
        Write-Host "File is Locked by: " $file.LockedByUser.LoginName
        Write-Host "File Lock Type: " $file.LockType
        Write-Host "File Locked On: " $file.LockedDate
        Write-Host "File Lock Expires on: " $file.LockExpires
        Write-Host "File Lock Id: " $file.LockId

        if ($unlock)
        {
            Write-Host "Releasing lock..."
            # Only the lock owner can release an exclusive lock, so impersonate that user
            $userId = $file.LockedByUser.ID
            $user = $web.AllUsers.GetByID($userId)
            $impSite = New-Object Microsoft.SharePoint.SPSite($web.Url, $user.UserToken)
            $impWeb = $impSite.OpenWeb()
            $impFile = $impWeb.GetFile($fileUrl)
            $impFile.ReleaseLock($impFile.LockId)
        }
    }
    else
    {
        Write-Host "File is unlocked" -ForegroundColor Green
    }
}

#USAGE
#SPUnlockFile -webUrl "http://weburl" -fileUrl "http://fullpathtofileurl" -unlock $false

HOWTO Restart SharePoint Web Services without having to use IISReset

April 11, 2012

We’ve all been there before – a SharePoint component has broken, you have ULS log events flooding in, stuff is broken, and you have to use IISReset to fix the whole thing. It’s a sledgehammer solution for opening your SharePoint walnut.

We had an ongoing issue with the Search and Metadata web services. People search results were failing with an error, and we were receiving ULS errors such as:

SharePoint Web Services Round Robin Service Load Balancer Event: EndpointFailure Process Name: OWSTIMER Process ID: 12264 AppDomain Name: DefaultDomain AppDomain ID: 1 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:id#authority=urn:uuid:anotherid&authority=https://myappsserver:1234/Topology/topology.svc Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint: http://myappsserver:1234/1234567890/MetadataWebService.svc

and:

SharePoint Web Services Round Robin Service Load Balancer Event: EndpointFailure Process Name: w3wp Process ID: 8240 AppDomain Name: /LM/W3SVC/id/ROOT-1-anotheridAppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:anotherid#authority=urn:uuid:eb9a958eaedf480bad3c2beaff70549b&authority=https://myappsserver:1234/Topology/topology.svc Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint: http://myappsserver:1234/5678901234/SearchService.svc

The normal fix for this is an IISReset; however, a less brutal method is to recycle the application pool for the failing web service instead. You can do this via IIS Manager as follows:

  1. Identify the service that is failing via ULS entries (as above) and in particular note the id of the web service. This will be located in the URL, so in the above example the web service id for SearchService.svc is 5678901234
  2. Open the IIS manager and expand the node Sites, SharePoint Web Services. This will display all web services available on that SharePoint server
  3. Locate the service using the id (they will be named the same). You can validate it is the correct service by switching from “Features View” to “Content View” at the bottom of the IIS manager once you have selected the service. Content view will show you (for example) two files, SearchService.svc and web.config
  4. In the “Actions” pane on the right hand-side select “Advanced Settings…” and note the Application Pool id
  5. In IIS Manager, select the “Application Pools” node and select the node with the matching Application Pool id.
  6. In the Actions Pane on the right hand-side, click Stop
  7. Wait about 5 seconds, then click Start

This will force the application pool to unload and effectively restart the web service.  You can then check your ULS logs and test whether the service is working in SharePoint.
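If you'd rather script steps 5-7, the WebAdministration module can do the recycle for you (a sketch; substitute the application pool id you noted in step 4, and it assumes the WebAdministration module is available on the server):

# Recycle just the one application pool instead of resetting all of IIS
Import-Module WebAdministration
Restart-WebAppPool -Name "<application pool id>"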

NOTE: be aware that you will generally get a flood of ULS entries while stopping and starting the service, as SharePoint tries to access the web service but cannot.

HOWTO Resolve CAPI2 Event ID 4107 errors on SharePoint 2010 Servers

March 13, 2012

This problem has plagued me for a very long time. There are a multitude of web pages dedicated to resolving the issue, but because the CAPI2 event can be caused by so many different systems, and can affect so many different systems, it is still tough to pinpoint.

Microsoft Windows CAPI2 is part of the Public Key Infrastructure (PKI) that is used to prove that digital certificates are valid. Microsoft maintains several such certificates and certificate revocation lists on the Microsoft Windows download site (URLs below) and these are used by Microsoft Windows to ensure any certificates you use are valid and have not been created maliciously. With recent hacks of certificate providers, this becomes even more important.

SharePoint uses PKI to authenticate many components, in particular the DLLs that SharePoint uses – they are all signed by Microsoft’s code-signing certificates. SharePoint uses PKI to ensure the certificate used to sign these components is still valid. To do this, it regularly checks the Microsoft site for any changes to the valid certificate list and installs updates as required. The CAPI2 event errors occur when the Windows server has problems checking certificates, and this is particularly problematic when you have web servers that are behind firewalls and cannot connect to the Microsoft sites to get the latest certificate information.

Implications

One of the big complaints I have with this issue is around SharePoint spin-up times. If we need to do an IISRESET for any reason, all sites are offline for 5-10 minutes while SharePoint starts up. The evidence seems to say that it is .NET under the hood trying to validate the code-signing certificate by contacting Microsoft for the latest-and-greatest certificate list. If your .NET components can’t get to the Microsoft site, you get a timeout, a CAPI2 error, and a delay – for each and every DLL it tries to check. Resolving this issue in theory could speed up your SharePoint spin-up time!

Identifying Root Causes

The best way to isolate the actual problem is to enable CAPI2 logging. Using the Windows Event Viewer on Windows Server 2008, locate the CAPI2 application log via:

  1. Applications and Services Logs
  2. Microsoft
  3. Windows
  4. CAPI2
  5. Operational

When you select the “Operational” node, the list will normally be empty. In the Actions area on the right hand side, click “Enable Log”. This will start logging any CAPI2 errors with more information. Data may not appear straight away – it can often take 30 minutes before you get any results, depending on when a CAPI2 error occurs. It took about 20 minutes for me.

If there are any issues with either retrieving the certificate updates, or processing the certificate updates, you should see them here. A common error I got was:

CAPI2 - Event ID 53 - Retrieve Object from Network

Examining the details for this error (I prefer the XML view) showed:

<Security UserID="S-1-[MORE-STUFF-HERE]" />
...
<URL scheme="http">http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab</URL>
...
This network connection does not exist.

The Security UserID is important here because it shows which user account is being used to try to make the connection to the Microsoft website. You can use this PowerShell function to resolve the SID into a readable account name:

function ConvertTo-NtAccount ($sid)
{
    (New-Object System.Security.Principal.SecurityIdentifier($sid)).Translate([System.Security.Principal.NTAccount])
}

which in this case told me the Network Service was unable to access the site.
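For example, feeding it the well-known SID for Network Service:

ConvertTo-NtAccount "S-1-5-20"
# Returns: NT AUTHORITY\NETWORK SERVICE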

In my case, the Network Service account did not have permission to use the proxy server. We added a proxy exception for the server, and the CAPI2 error disappeared.

Proxy Server Configuration

Another issue I’ve had to address is when the proxy settings themselves weren’t valid, so I needed to ensure that the account trying to connect to the Microsoft site had the correct settings. Using the PowerShell above I identified that a FAST Search for SharePoint 2010 service account (“my_fast_service”) could not get to the Microsoft download site to get the certificate revocation list.

To verify, I started Internet Explorer *on the server* using the CTRL-SHIFT-right-click combination so I could use “Run As…” on Internet Explorer. I entered the credentials for my user “my_fast_service” and tried to access the URL:

http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab

which, after several seconds, came back with a network “not found” error. I checked the Internet Explorer proxy settings: there was no proxy set, and “Automatically detect settings” had been selected. So the network wasn’t setting the proxy correctly. I un-checked auto-detect and entered my proxy settings manually, and after trying the link again the CAPI2 error stopped for this user.

It is worth checking the event log for any other user accounts that do not have their proxy settings correct. Or of course you could just fix the auto-detect configuration 🙂

Other Options

  1. You can manually update the certificate list. I haven’t had a great deal of success with this – there are quite a few certificates that need to be installed (the full list will be in the details of the CAPI2 Operational event log)
  2. You can create your own Microsoft download site! Either via DNS entries or using a hosts file change, you can set the update sites to point to 127.0.0.1, which will make them instantly fail and not try to download anything. Or, you could even point them to your own internal website (as long as the URL is the same) and get them to download a pre-created copy of the certificates (which of course you would need to keep up-to-date yourself!). There may be flow-on implications as well, such as the URL www.download.windowsupdate.com could be used by your servers to download Windows updates. Take care with this option!

I wouldn’t recommend either of the above options in preference to fixing the root cause (i.e. network access to the Microsoft sites) but they are options in some cases where there is no connection at all, such as in DMZ or secure/hardened sites.

SharePoint 2010 Content Deployment – System.NullReferenceException: Object reference not set to an instance of an object

November 8, 2011

Content deployment in SharePoint 2010 works fantastically well. Once it is working. Getting to the ‘working’ stage, however, is a path that can cause a lot of pain, not least of which is getting a generic error!
My configuration for our SharePoint publishing farm is as follows:

  1. SharePoint central admin site (PUB01)
  2. 2 x SharePoint web front-ends serving content (WFE01 and WFE02)
  3. Radware load balancer (LOAD01)

I’m mentioning these items because they affected why my content deployment was failing, and also the resolution required.

What You Absolutely Need To Have

There is ample documentation on setting up Content Deployment, but from a high level you must have the following in place:

  1. Your source Central Admin website must be able to access the destination Central Admin website, including setting the destination CA to accept content deployment. This includes the IP address and port of course!
  2. The account you use to deploy content on the destination server must have the appropriate permissions
  3. Your source and destination farms must be the same SharePoint version (right down to the Cumulative Update if applicable!)
  4. The destination CA must have enough room to store the content in a temporary location (check hard drive space on the destination server that hosts the Central Admin site)
  5. The destination CA must have the Microsoft SharePoint Foundation Web Application service “Started” on the server (destination CA uses this to check IIS configuration – more on this later)

Content Deployment Process

Content deployment goes through several high-level stages when it deploys content:

  1. The Source CA packages up the content into one or more CAB files
  2. The Source CA pushes the CAB files to the Destination CA, which stores them in a temporary location
  3. The Destination CA validates the IIS site and destination site collection
  4. The Destination CA unpacks the CAB files and individually pushes the content into the destination site collection

The import log file shows the results of the attempted import and will display any warnings or errors that occurred.

System.NullReferenceException: Object reference not set to an instance of an object

In my environment, this error occurs very quickly after the data has been transferred to the destination farm. I dialed up the ULS diagnostic monitoring by editing the Monitoring, “Configure diagnostic logging”, “Category – Web Content Management/Content Deployment” setting to Verbose. Your ULS log file will then show much more detail about the process, including whether the CAB content files are being copied up properly from the source to the destination CA. Look for entries such as:

Upload file 'C:\SPDeploy\1c8b92e5-975c-4d30-b38e-04c25975e2bb\ExportedFiles9.cab' succeeded

The “System.NullReferenceException” error occurs very soon after the final CAB file has been deployed.

The thing that tripped me up was the load-balanced url I had set up in the Radware load-balancer and DNS settings. My destination URL DNS entry was pointing to the load-balancer virtual IP address, which then load-balanced requests to my two web front-end servers. My suspicion is that the destination Central Admin content deployment process checks IIS to see if it is configured correctly, and because it is traveling via a virtual IP address it doesn’t resolve to a local IIS metabase entry. In the ULS logs I saw entries such as:

Saving CachingSettings for SiteUrl 'http://your.sharepoint.url/'

immediately followed by the dreaded:

Failed import operation for Content Deployment job 'Remote import job for job with sourceID = 409e7be7-7694-496d-8469-eb472e5070f6'. Exception was: 'System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.SharePoint.SPSite.PreinitializeServer(SPRequest request)
at Microsoft.SharePoint.SPWeb.InitializeSPRequest()
at Microsoft.SharePoint.SPWeb.get_AllProperties()
at Microsoft.SharePoint.Publishing.SiteCacheSettings..ctor(SPSite site)
at Microsoft.SharePoint.Publishing.SiteCacheSettingsWriter..ctor(SPSite site)
at Microsoft.SharePoint.Publishing.SiteCacheSettingsWriter.SaveCacheSettingsBeforeImport(String importSiteUrl)
at Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJob.DoImport()'

I checked the “PreInitializeServer” code using ILSpy (an improved, and more importantly free, replacement for Reflector), which wasn’t doing anything particularly complex but does use the SPRequest object. My thinking at this point was that the CA was unable to connect to IIS – it possibly does an IIS metabase lookup to get configuration settings, and because it was being bounced to the load-balancer and then down to the web front-ends, this step was failing. This occurs immediately after the cab files have been uploaded to the destination server.

The fix I put in was to edit the hosts file (C:\Windows\System32\drivers\etc\hosts) and add an entry for the CA destination server IP address against the URL I was publishing to. This forces the destination CA to use the local server instead of trying to go via the load-balancer. This won’t affect the content being served: the web front-ends are configured via the load-balancer, and the destination CA doesn’t actually participate in that process. If your destination CA is also a web front-end, this would still work, as the URL resolves properly anyway.

After I set up my hosts file and re-ran the content deployment, it all worked perfectly! At least, on this occasion 🙂

A second “fix” I have had to use is to create another host-header site collection on my destination CA. I didn’t need to use this new site collection – I just created a blank one (see powershell script below). I’m not sure what happens in the background, but when I next tried my content deployment, it immediately changed from FAILED to SUCCESS. It’s possible that the act of creating another site collection resets Something(tm) so that content deployment can connect correctly.

Additional Content Deployment Troubleshooting Tips

While trying to diagnose the problem I also came up against the following issues (in no particular order):

  1. When creating a destination site collection, do not add a template to the site. In essence, you are creating an empty site collection, not a site collection with the Blank or Blank Internet template – applying a template breaks content deployment. I use the following PowerShell script to auto-create a host-header (multi-tenancy) site collection:
    $contentDbName = "Your.Database.Name"
    $webName = "Your web app name"
    $url = "http://your.url"
    # Get the web application
    $webApp = Get-SPWebApplication -Identity $webName
    # Create content database
    New-SPContentDatabase -Name $contentDbName -WebApplication $webApp
    # Create the site
    $contentDb = Get-SPContentDatabase -Identity $contentDbName
    New-SPSite -Url $url -OwnerAlias "DOMAIN\gavin.mckay" -OwnerEmail "gavin.mckay@domain.local" -ContentDatabase $contentDb -HostHeaderWebApplication $webApp

    The important point here is not to use the “-Template <template name>” parameter in New-SPSite. This will break content deployment and you will get a stack of errors about being unable to overwrite content. Deployment failure!

  2. If you recreate your site collection, you will need to delete your content deployment job and path from the source CA server. The destination site collection id is stored as part of the job/path, and if you recreate your destination site collection it will have a new id. Deployment failure!
  3. If you delete your content database and recreate it, you will have to do an IISRESET on your CA and web front-ends to get them to resolve properly. You will recognise this by trying to connect to the destination URL and getting the following in your browser:
    “HTTP/1.1 200 OK Server: Microsoft-IIS/7.5 Date: Tue, 08 Nov 2011 02:41:04 GMT Connection: close”
    SharePoint is confused. Deployment failure!
  4. If you have Office Web Apps enabled on your source site collection, but not on the destination site collection, you will get: ‘Microsoft.SharePoint.SPException’ : ‘Could not find Feature MobilePowerPointViewer.’
    Deployment failure! Use Site settings, Site collection features, to disable Office Web Apps in the source site collection, then start your deployment job again.
  5. “The exception thrown was ‘Microsoft.SharePoint.SPException’ : ‘Unable to import the folder _catalogs/fpdatasources. There is already an object with the Id <Guid> in the database from another site collection.'”
    Deployment failure! This seems to occur if you have previously had a failed content deployment for a new site collection/content database.
    Using your destination CA remove the content database then re-add it again. This will restore the previous empty site collection ready for content deployment.
  6. Warning “A minor version has been exported with no corresponding major version. It is possible that this item was unpublished then published again. If this item is meant to be published, publish a new version and export/import it again”
    This is reasonably straight-forward. Either individually (or via a batch job) publish a major version, or turn off major/minor version control for the list/library.
  7. Warning “Cannot revert to the site definition version of this file”
    Currently unknown how to fix this…
  8. Error “A file with the name [filename] already exists”
    Currently unknown how to fix this…

More Detail – What Happens in the Database

The Content Deployment jobs are stored in the source farm SharePoint_Config database. You can view the details in a SQL query via:

SELECT TOP 100
    id
    ,classid
    ,parentid
    ,name
    ,status
    ,version
    ,properties
FROM [dbo].[Objects]
WHERE Properties LIKE '%ContentDeploymentJobDefinition%'

The Properties field is an XML data string that defines the job information.
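You can get at the same job definitions from PowerShell on the source farm without querying SQL directly (a quick check; format the output however suits you):

# List the content deployment jobs defined on this farm
Get-SPContentDeploymentJob | Format-List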

More Detail – What Happens in ULS

You can view more detail via ULS logging if you enable Verbose logging for the category “Web Content Management” / “Content Deployment”. The ULS Log Viewer then makes it much easier to follow the process.

After the SharePoint source server packages the deployment files into multiple .cab files, they are individually pushed across to the destination farm. You should see ULS entries similar to:

Name=Request (POST:http://destination.sharepoint.local:8080/_admin/Content%20Deployment/DeploymentUpload.aspx?filename=%22ExportedFiles1.cab%22&remoteJobId=%2240ae2be7-f6ee-44c5-a2f0-f456d272a731%22)


After all the CAB files have been pushed across, the source server creates a deployment job on the destination server.

HOWTO Create a Content Index for a Host Header Site Collection

September 20, 2011

We have an environment with several web applications that have a number of host header site collections attached to them. This reduces the resources required by your server (you have fewer physical IIS web applications) while still allowing you to have 100s/1000s/lots of site collections each with their own URL.

We wanted to be able to search these site collections, but host header site collections cannot be searched on their own. Your content indexer has to access the site collections via the web application itself. As an example, let’s assume you have a web application that resolves to “my.sharepoint.local” and you create a host-header site collection “site.sharepoint.local” using the following PowerShell script:

$formsContentDbName = "WSS_Content_SiteLocal"
$webName = "my.sharepoint.local"
$url = "http://site.sharepoint.local"

# Get the web application
$webApp = Get-SPWebApplication -Identity $webName

# Create content database
New-SPContentDatabase -Name $formsContentDbName -WebApplication $webApp

# Create the site
$contentDb = Get-SPContentDatabase -Identity $formsContentDbName
New-SPSite -Url $url -OwnerAlias "myalias" -OwnerEmail "my.email@somewhere.com" -ContentDatabase $contentDb -HostHeaderWebApplication $webApp -Template "STS#0"

You will now be able to access the host-header site collection via http://site.sharepoint.local.

If, however, you create a content source using the URL “http://site.sharepoint.local” and run a full crawl, you’ll get the following warning in your crawl log if you don’t have a site collection attached to http://my.sharepoint.local:

This URL is part of a host header SharePoint deployment and the search application is not configured to crawl individual host header sites. This will be crawled as a part of the host header Web application if configured as a start address

As the warning states, the trick here is to create a site collection for the web application, in this case my.sharepoint.local. Then in the content source add only the URL http://my.sharepoint.local. Your content source will then happily index the content in your host header site collections, as well as the base site collection.
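Creating the content source can be scripted too (a sketch; the names are examples, and -SharePointCrawlBehavior CrawlVirtualServers is what tells the crawler to include every host header site collection under the web application):

# Create a content source that starts at the web application root
$ssa = Get-SPEnterpriseSearchServiceApplication
New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Name "Host header sites" -Type SharePoint -StartAddresses "http://my.sharepoint.local" -SharePointCrawlBehavior CrawlVirtualServers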

This however has some implications. Let’s assume you have a large site collection (say 100GB) and a smaller site collection (say 10MB), both attached to the same web application. You cannot do a full reindex of the smaller site collection without also reindexing the larger site collection.

I’ve raised a support incident with Microsoft to try and find a resolution. Stay tuned…