Second Life of a Hungarian SharePoint Geek

March 29, 2015

May Merge-SPLogFile Flood the Content Database of the Central Administration Web Application?

Filed under: Content database, PowerShell, SP 2013 — Peter Holpar @ 00:20

In recent weeks we were searching for the cause of a specific error in one of our SharePoint 2013 farms. To get detailed trace information, we often switched the log level to VerboseEx mode. A few days later the admins alerted us that the size of the Central Administration content database had increased enormously (at that time it was about 80 GB!).

Looking for the source of this unexpected amount of data, I found a document library called Diagnostics Log Files (description: "View and analyze merged log files from the all servers in the farm") that contained 1824 items.


Although I consider myself primarily a SharePoint developer, I always try to remain up-to-date in infrastructural topics as well, but to tell the truth I had never seen this document library before. Searching the web didn't help either.

Having a look into the document library I found a set of folders with GUID names.


Each of the folders contained a lot of files: a numbered set for each of the servers in the farm.


The files within the folders are archived logs in .gz file format, each around 26 MB.
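
To get a feel for how much space these files occupy altogether, the file sizes can be summed up via PowerShell. This is a minimal sketch; the Central Administration URL is a placeholder you have to replace:

$caWeb = Get-SPWeb "http://YourCentralAdminUrl"
$library = $caWeb.Lists["Diagnostics Log Files"]
# sum the size of all files in the library (items in subfolders are included) and report it in GB
($library.Items | % { $_.File.Length } | Measure-Object -Sum).Sum / 1GB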


Based on the description of the library I guessed that it is related to the Merge-SPLogFile cmdlet, which collects the ULS log files from all of the servers in the farm and saves the aggregated result in the specified file on the local system, although I have not found any documentation on how it performs this action or whether it has anything to do with the content DB of the Central Administration site.
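
For reference, a typical call of the cmdlet looks something like this (the path and time range are placeholders; run it from the SharePoint Management Shell):

# collect and merge the ULS logs of the last hour from all farm members into a single local file
Merge-SPLogFile -Path "D:\Logs\FarmMergedLog.log" -Overwrite -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)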

 

After a few hours of “reflectioning”, it was obvious how this situation came about. If you are not interested in call chains, feel free to skip the following part.

All of the classes and methods below are defined in the Microsoft.SharePoint assembly, if not specified otherwise.

The InternalProcessRecord method of the Microsoft.SharePoint.PowerShell.SPCmdletMergeLogFile class (Microsoft.SharePoint.PowerShell assembly) creates a new instance of the SPMergeLogFileUtil class based on the path of the aggregated log folder and the filter expression, and queues its Run method on the thread pool:

SPMergeLogFileUtil util = new SPMergeLogFileUtil(this.Path, filter);
ThreadPool.QueueUserWorkItem(new WaitCallback(util.Run));

In the Run method of the Microsoft.SharePoint.Diagnostics.SPMergeLogFileUtil class:

public void Run(object stateInfo)
{
  try
  {
    this.Progress = 0;
    // Creates the diagnostic log files document library in central admin (if does not yet exist) via the GetLogFilesList method
    // Add a new subfolder (having GUID name) to the library. If the folder already exists, deletes its content.
    // Executes child jobs (SPMergeLogFilesJobDefinition) on each farm member, see more details about it later below
    List<string> jobs = this.DispatchCollectingJobs();
    // Waits for all child jobs to be finished on the farm members
    this.MonitorJobs(jobs);
    // Merges the content of the collected files from the central admin document library into the specified local file system folder by calling the MergeFiles method
    // Finally deletes the temporary files one by one from the central admin document library and at the very end deletes their folder as well
    this.RunMerge();
  }
  catch (Exception exception)
  {
    this.Error = exception;
  }
  finally
  {
    this.Progress = 100;
  }
}

Yes, as you can see, if there is any error in the file collection process on any of the farm members, or the merging process fails, the files won’t be deleted from the Diagnostics Log Files document library.

Instead of the RunMerge method, the deletion logic would probably have a better place in the finally block, or at least the catch block should check whether the files were removed successfully.

 

A few words about the Microsoft.SharePoint.Diagnostics.SPMergeLogFilesJobDefinition class, as promised earlier. Its Execute method calls the CollectLogFiles method, which creates a ULSLogFileProcessor instance based on the requested log filter, reads the corresponding ULS entries from the farm member the job is running on, stores the entries in a temporary file in the file system, uploads them to the current subfolder (GUID name) of the Diagnostics Log Files document library (file name pattern: [ServerLocalName] (1).log.gz, or [ServerLocalName].log.gz if a single file is enough to store the log entries from the server), and finally deletes the local temporary file.

 

A few related text values can be read via PowerShell as well:

The getter of the DisplayName property in the SPMergeLogFilesJobDefinition class returns:

SPResource.GetString("CollectLogsJobTitle", new object[] { base.Server.DisplayName });

Reading the value via PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("CollectLogsJobTitle")

returns

Collection of log files from server |0

In the GetLogFilesList method of the SPMergeLogFileUtil class we find the title and description of the Central Administration document library used for log merging:

string title = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsTitle", new object[0]);
string description = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsDescription", new object[0]);

Reading the values via PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsTitle")

returns

Diagnostics Log Files

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsDescription")

returns

View and analyze merged log files from the all servers in the farm

These values support our theory that the library was created and filled by these methods.

 

Next, I used the following PowerShell script to look for failed merge jobs in the time range in which the files in the Central Administration were created:

$farm = Get-SPFarm
$from = "3/21/2015 12:00:00 AM"
$to = "3/21/2015 6:00:00 AM"
$farm.TimerService.JobHistoryEntries | ? {($_.StartTime -gt $from) -and ($_.StartTime -lt $to) -and ($_.Status -eq "Failed") -and ($_.JobDefinitionTitle -like "Collection of log files from server *")}

The result:

[screenshot]

As you can see, in our case there were errors on both farm member servers; for example, there was not enough storage space on one of them. After the jobs failed, the aggregated files were not deleted from the Diagnostics Log Files document library of the Central Administration.

Since even a successful execution of the Merge-SPLogFile cmdlet can temporarily increase the size of the Central Administration content database considerably, and the effect of failed executions is not only temporary (and particularly large if it happens several times and is combined with verbose logging), SharePoint administrators should be aware of these facts, consider them when planning database sizes, and/or perform an extra maintenance step regularly to remove the leftovers of failed merge processes from the Diagnostics Log Files document library. As far as I can see, the issue affects SharePoint 2010 and 2013 as well.
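
As a starting point for such a maintenance step, the leftover folders can be removed with a few lines of PowerShell. This is only a minimal sketch, assuming the library lives in the root web of the Central Administration site collection; test it carefully before scheduling it:

$caWebApp = Get-SPWebApplication -IncludeCentralAdministration | ? { $_.IsAdministrationWebApplication }
$caWeb = (Get-SPSite $caWebApp.Url).RootWeb
$library = $caWeb.Lists["Diagnostics Log Files"]
if ($library -ne $null) {
  # delete the leftover GUID-named folders (including the .gz files they contain)
  @($library.Folders) | % {
    Write-Host Deleting folder $_.Url
    $_.Delete()
  }
}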

March 26, 2015

Strange Localization Issue When Working with List and Site Templates

Filed under: MUI, PowerShell, SP 2013 — Peter Holpar @ 23:44

One of our clients runs a localized version of SharePoint 2013. The operating system is Windows Server 2012 R2 (English), and the SharePoint Server itself is English as well. The German language pack is installed, and sites and site collections were created in German. We are working with various custom site templates. Recently one of these templates had to be extended with a Task list-based custom list (called ToDos). The users prepared the list in a test site, and we saved the list as a template. We created a new site using the site template (we will refer to this site later as the prototype), and next we created a new list based on the custom list template. Finally, we saved the altered web site as a site template, including content, using the following PowerShell commands:

$web = Get-SPWeb $siteTemplateSourceUrl
$web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)

We created a test site using the new site template, and everything seemed to be OK. However, after a while the users started to complain that a menu of the new list contained some English text as well. As it turned out, some of the views of the new list were created with English titles:

[screenshot]

First, we verified the manifest.xml of the list template by downloading the .stp file (which has a CAB file format) and opening it using IZArc. We found that the DisplayName property of the default view (“Alle Vorgänge”, meaning “All Tasks”) and of a custom datasheet view (called “db”, which stands for “Datenblatt”) contains the title as plain text, while the DisplayName property of the other views contains a resource reference (like “$Resources:core,Late_Tasks;”).


Next, we downloaded the site template (the .wsp file also has a CAB file format and can be opened with IZArc) and verified the schema.xml of the ToDos list. We found that the original German texts (“Alle Vorgänge” and “db”) were kept, however all other view names were “localized” to English.


At this point I already guessed that the problem was caused by the locale of the thread the site template exporting code was running in. To verify my assumption, I saved the prototype site from the site settings via the SharePoint web UI (which is German in our case). This time the resulting schema.xml in the new site template .wsp contained the German titles:

[screenshot]

We got the same result (I mean German view titles) if we called our former PowerShell code specifying German as the culture for the running thread. See more info about the Using-Culture helper method, the SharePoint Multilingual User Interface (MUI) and PowerShell here:

Using-Culture de-DE {
  $web = Get-SPWeb $pwsSiteTemplateSourceUrl
  $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
}
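
For completeness, here is a minimal sketch of the Using-Culture helper, along the lines of the commonly published versions (not necessarily identical to the one we used): it switches the culture and UI culture of the current thread for the duration of the script block and restores them afterwards.

function Using-Culture([System.Globalization.CultureInfo] $culture, [ScriptBlock] $script)
{
  $oldCulture = [System.Threading.Thread]::CurrentThread.CurrentCulture
  $oldUICulture = [System.Threading.Thread]::CurrentThread.CurrentUICulture
  try {
    [System.Threading.Thread]::CurrentThread.CurrentCulture = $culture
    [System.Threading.Thread]::CurrentThread.CurrentUICulture = $culture
    & $script
  }
  finally {
    [System.Threading.Thread]::CurrentThread.CurrentCulture = $oldCulture
    [System.Threading.Thread]::CurrentThread.CurrentUICulture = $oldUICulture
  }
}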

We've fixed the existing views via the following PowerShell code (note: using the Using-Culture helper method is important in this case as well; we have only a single level of site hierarchy in this case, so there is no recursion in the code):

$web = Get-SPWeb http://SharePointServer

function Rename-View-IfExists($list, $viewNameOld, $viewNameNew)
{
  $view =  $list.Views[$viewNameOld]
  If ($view -ne $Null) {
      Write-Host Renaming view $viewNameOld to $viewNameNew
      $view.Title = $viewNameNew
      $view.Update()
  }
  Else {
    Write-Host View $viewNameOld not found
  }
}

Using-Culture de-DE {
  $web.Webs | % {
    $list = $_.Lists["ToDos"]
    If ($list -ne $Null) {
      Write-Host ToDo list found in $_.Title
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge"
      Rename-View-IfExists $list "Upcoming" "Anstehend"
      Rename-View-IfExists $list "Completed" "Abgeschlossen"
      Rename-View-IfExists $list "My Tasks" "Meine Aufgaben"
      Rename-View-IfExists $list "Gantt Chart" "Gantt-Diagramm"
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge" 
    }
  }
}

Strangely, we had no problems with field names or other localizable texts when working with the English culture.

March 10, 2015

Automating the Provisioning of a PWA-Instance

Filed under: ALM, PowerShell, PS 2013 — Peter Holpar @ 23:56

When testing our custom Project Server 2013 solutions in the development system, or deploying them to the test system, I found it useful to be able to use a clean environment each time (a new PWA instance having an empty project database and a separate SharePoint content database for the project web access itself and the project web sites).

We wrote a simple PowerShell script that provisions a new PWA instance, including:
- A separate SharePoint content database that should contain only a single site collection: the one for the PWA. If the content DB already exists, we will use the existing one, otherwise we create a new one.
- The managed path for the PWA.
- A new site collection for the PWA using the project web application site template, and the right locale ID (1033 in our case). If the site already exists (in case we re-use a former content DB), it will be dropped before creating the new one.
- A new project database. If a project database with the same name already exists on the SQL server, it will be dropped and re-created.
- The project database will be mounted to the PWA instance, and the admin permissions are set.

Note that we have a prefix (like DEV or TEST) that identifies the system. We set the URL of the PWA and the database names using this prefix. The database server names (one for the SharePoint content DBs and another one for the service application DBs) include the prefix as well, and are configured via aliases in the SQL Server Client Network Utility, making it easier to relocate the databases if needed.

$environmentPrefix = "DEV"

$webAppUrl = [string]::Format("http://ps-{0}.company.com", $environmentPrefix)

$contentDBName = [string]::Format("PS_{0}_Content_PWA", $environmentPrefix)
$contentDBServer = [string]::Format("PS_{0}_Content", $environmentPrefix)

$pwaMgdPathPostFix = "PWA"
$pwaUrl = [string]::Format("{0}/{1}", $webAppUrl, $pwaMgdPathPostFix)
$pwaTitle = "PWA Site"
$pwaSiteTemplate = "PWA#0"
$pwaLcid = 1033
$ownerAlias = "domain\user1"
$secondaryOwnerAlias = "domain\user2"

$projServDBName = [string]::Format("PS_{0}_PS", $environmentPrefix)
$projServDBServer = [string]::Format("PS_{0}_ServiceApp", $environmentPrefix)

Write-Host Getting web application at $webAppUrl
$webApp = Get-SPWebApplication -Identity $webAppUrl

$contentDatabase = Get-SPContentDatabase -Identity $contentDBName -ErrorAction SilentlyContinue

if ($contentDatabase -eq $null) {
  Write-Host Creating content database: $contentDBName
  $contentDatabase = New-SPContentDatabase -Name $contentDBName -WebApplication $webApp -MaxSiteCount 1 -WarningSiteCount 0 -DatabaseServer $contentDBServer
}
else {
  Write-Host Using existing content database: $contentDBName
}

$pwaMgdPath = Get-SPManagedPath -Identity $pwaMgdPathPostFix -WebApplication $webApp -ErrorAction SilentlyContinue
if ($pwaMgdPath -eq $null) {
  Write-Host Creating managed path: $pwaMgdPathPostFix
  $pwaMgdPath = New-SPManagedPath -RelativeURL $pwaMgdPathPostFix -WebApplication $webApp -Explicit
}
else {
  Write-Host Using existing managed path: $pwaMgdPathPostFix
}

$pwaSite = Get-SPSite -Identity $pwaUrl -ErrorAction SilentlyContinue
if ($pwaSite -ne $null) {
  Write-Host Deleting existing PWA site at $pwaUrl
  $pwaSite.Delete()
}

Write-Host Creating PWA site at $pwaUrl
$pwaSite = New-SPSite -Url $pwaUrl -OwnerAlias $ownerAlias -SecondaryOwnerAlias $secondaryOwnerAlias -ContentDatabase $contentDatabase -Template $pwaSiteTemplate -Language $pwaLcid -Name $pwaTitle

$projDBState = Get-SPProjectDatabaseState -Name $projServDBName -DatabaseServer $projServDBServer
if ($projDBState.Exists) {
  Write-Host Removing existing Project DB $projServDBName
  Remove-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -WebApplication $webApp
}
Write-Host Creating Project DB $projServDBName
New-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -Lcid $pwaLcid -WebApplication $webApp

Write-Host Bind Project Service DB to PWA Site
Mount-SPProjectWebInstance -DatabaseName $projServDBName -DatabaseServer $projServDBServer -SiteCollection $pwaSite

#Setting admin permissions on PWA
Grant-SPProjectAdministratorAccess -Url $pwaUrl -UserAccount $ownerAlias
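
As a quick sanity check after the script has finished, it is worth verifying that the new PWA site collection really landed in the dedicated content database (a small sketch, reusing the variables of the script):

# should return the name of the dedicated content DB, e.g. PS_DEV_Content_PWA
(Get-SPSite $pwaUrl).ContentDatabase.Name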

Using this script helps us avoid a lot of manual configuration steps, saves us a lot of time, and makes the result more reproducible.

March 4, 2015

How to find out the real number of Queue Jobs

Filed under: PowerShell, PS 2013, PSI — Peter Holpar @ 00:45

Recently we had an issue with Project Server. Although the Microsoft Project Server Queue Service was running, the items in the queue were not being processed, and the performance of the system degraded severely. At the same time we found a lot of cache cluster failures in the Windows event logs and the ULS logs; it was not clear which one was the source of the problem and which was the result of the other. The "solution" was to install the February 2015 CU SharePoint product updates and restart the server.

However, even after the restart the number of the job entries seemed to be constant when checking via PWA Settings / Manage Queue Jobs (Queue and Database Administration): at the first page load the total number displayed at the bottom left of the grid was 1000, however when we paged through the results or refreshed the status, the total changed to 500 (this seems to be an issue with the product). It means that PWA administrators don't see the real number of entries.

But how could one then get the real number of the queue jobs?

If you have permission to access the performance counters on the server (in the simplest case, if you are a local admin), you can use the Current Unprocessed Jobs counter (ProjectServer:QueueGeneral), which, as its name suggests, gives the total number of currently unprocessed jobs.

You have to find an alternative solution if you need the count of jobs having another status, or need even more granular results, for example, the number of job entries that are ready for processing and are related to publishing a project.

The QueueJobs property of the Project class in the client object model (see the QueueJob and QueueJobCollection classes as well) provides only information related to a given project, and the same is true for the REST interface, where you can access the same information for your project like this (the GUID is the ID of your project):

http://YourProjServer/PWA/_api/ProjectServer/Projects('98138ffd-d0fa-e311-83c6-005056b45654')/QueueJobs

The best solution I've found is based on the GetJobCount method of the QueueSystem object in the PSI interface. Let's see some practical PowerShell examples of how to use it.

To get the reference for the proxy:

$pwaUrl = "http://YourProjServer/PWA&quot;
$svcPSProxy = New-WebServiceProxy -Uri ($pwaUrl + "/_vti_bin/PSI/QueueSystem.asmx?wsdl") –UseDefaultCredential

To get the number of all entries without filtering:

$svcPSProxy.GetJobCount($Null, $Null, $Null)

For filtering, we can use the second and the third parameters of the method: jobStates and messageTypes, which are arrays of the corresponding JobState and QueueMsgType enumerations. Both of these enums are available as nested enums in the Microsoft.Office.Project.Server.Library.QueueConstants class. This class is defined in the Microsoft.Office.Project.Server.Library assembly, so if we would like to use the enums, we should load the assembly first:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")

Note: you can use the integer values corresponding to the enum values like:

$jobStates = (1, 7)

…however, I don't find it very developer-friendly to use such magic constants in code, and you also lose the autocomplete feature of PowerShell that you have when working with the enums, as displayed below:

$jobStates = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ReadyForProcessing,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ProcessingDeferred
)

similarly for message types:

$msgTypes = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectPublish,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectDelete
)

You can then access the count of filtered items like:

$svcPSProxy.GetJobCount($Null, $jobStates, $Null)

or

$svcPSProxy.GetJobCount($Null, $jobStates, $msgTypes)

It is worth knowing that the QueueConstants class has two methods (PendingJobStates and CompletedJobStates) that return a predefined set of the enum values as a generic IEnumerable<QueueConstants.JobState>. We can use these methods from PowerShell as well:

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }

or

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::CompletedJobStates() | % { [int]$_ }
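
Putting the pieces together: the number of pending, publish-related queue jobs can be queried like this (a small sketch, assuming $svcPSProxy and $msgTypes from the snippets above):

$pendingStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }
$svcPSProxy.GetJobCount($Null, $pendingStates, $msgTypes)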

February 19, 2015

How to Programmatically “Enable reporting of offensive content” on a community site

Filed under: PowerShell, SP 2013 — Peter Holpar @ 23:10

Recently a question was posted on sharepoint.stackexchange.com about how to “Enable reporting of offensive content” from code on SharePoint 2013 community sites.


The Community Settings page (CommunitySettings.aspx) has a code-behind class, Microsoft.SharePoint.Portal.CommunitySettingsPage. In its BtnSave_Click method it relies on the static EnableDisableAbuseReports method of the internal FunctionalityEnablers class to perform the actions required for reporting of offensive content. Furthermore, it sets the value of the vti_CommunityEnableReportAbuse web property to indicate that reporting is enabled for the community site.

To perform the same actions from PowerShell I wrote this script:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# this command only affects the check box on the Community Settings page
$web.AllProperties["vti_CommunityEnableReportAbuse"] = "true"
$web.Update()

# the functionality itself is activated by the code below
# get a reference to the Microsoft.SharePoint.Portal assembly
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$functionalityEnablersType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers")
$mi_EnableDisableAbuseReports = $functionalityEnablersType.GetMethod("EnableDisableAbuseReports")
$mi_EnableDisableAbuseReports.Invoke($null, @($web, $True))

Note: If you use “false” instead of “true” when setting the value of vti_CommunityEnableReportAbuse, and $False instead of $True when invoking the static method in the last line of code, you can deactivate the reporting for the site.

The alternative solution is to use the server side API from C#:

web.AllProperties["vti_CommunityEnableReportAbuse"] = "true";
web.Update();

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type functionalityEnablersType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers");
MethodInfo mi_EnableDisableAbuseReports = functionalityEnablersType.GetMethod("EnableDisableAbuseReports");
mi_EnableDisableAbuseReports.Invoke(null, new object[] { web, true });

We can verify the effect of our code by checking the columns in the first view of the Discussions List before enabling the reporting, via this PowerShell script:

$list = $web.Lists["Discussions List"]
$list.Views[0].ViewFields

I found these:

Threading
CategoriesLookup
Popularity
DescendantLikesCount
DescendantRatingsCount
AuthorReputationLookup
AuthorNumOfRepliesLookup
AuthorNumOfPostsLookup
AuthorNumOfBestResponsesLookup
AuthorLastActivityLookup
AuthorMemberSinceLookup
AuthorMemberStatusIntLookup
AuthorGiftedBadgeLookup
LikesCount
LikedBy

After we enable reporting, four further columns should be appended to the list of view fields if the code succeeded:

AbuseReportsCount
AbuseReportsLookup
AbuseReportsReporterLookup
AbuseReportsCommentsLookup
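
A quick way to check for one of these columns from PowerShell (a small sketch, reusing the $web variable from the script above):

$list = $web.Lists["Discussions List"]
$list.Views[0].ViewFields -contains "AbuseReportsCount"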

Or we can invoke the static IsReportAbuseEnabled method of the internal Microsoft.SharePoint.Portal.CommunityUtils class to verify whether reporting is enabled, just as the OnLoad method of the Microsoft.SharePoint.Portal.CommunitySettingsPage does. You should know, however, that this method does no more than simply check the value of the vti_CommunityEnableReportAbuse web property, so even if it returns true, it does not mean for sure that reporting is really enabled. So I prefer checking the columns in the view as shown earlier.

The PowerShell version:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# get a reference to the Microsoft.SharePoint.Portal assembly (as in the script above)
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$communityUtilsType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils")
$mi_IsReportAbuseEnabled = $communityUtilsType.GetMethod("IsReportAbuseEnabled")
$mi_IsReportAbuseEnabled.Invoke($null, @($web))

The C# version:

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type communityUtilsType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils");
MethodInfo mi_IsReportAbuseEnabled = communityUtilsType.GetMethod("IsReportAbuseEnabled");
mi_IsReportAbuseEnabled.Invoke(null, new object[] { web });

February 10, 2015

Further Effects of Running Code in Elevated Privileges Block

Filed under: Permissions, PowerShell, SP 2010, SP 2013 — Peter Holpar @ 23:29

A few days ago I already published a blog post about the effects of running your PowerShell code in an elevated privileges block. In the past days I made some further tests to check what kind of effect the permission elevation might have if the code runs outside the SharePoint web application context, for example, in the case of server-side console applications, Windows services or PowerShell scripts, just to name a few typical cases.

To test the effects, I’ve created a simple console application in C#, and a PowerShell script.

I've tested the application / script with two different permission sets. In both cases, the user running the application was neither the site owner (a.k.a. primary administrator) nor a secondary administrator. In the first case, the user had the Full Control permission level on the root web of the site collection; in the second case the user had no permissions at all.

In C# I’ve defined a CheckIfCurrentUserIsSiteAdmin method as:

private void CheckIfCurrentUserIsSiteAdmin(string url)
{
    using (SPSite site = new SPSite(url))
    {
        using (SPWeb web = site.OpenWeb())
        {
            SPUser currentUser = web.CurrentUser;
            Console.WriteLine("Current user ({0}) is site admin on '{1}': {2}", currentUser.LoginName, url, currentUser.IsSiteAdmin);
            Console.WriteLine("Current user ({0}) is site auditor on '{1}': {2}", currentUser.LoginName, url, currentUser.IsSiteAuditor);
            Console.WriteLine("Effective permissions on web: '{0}'", web.EffectiveBasePermissions);
            try
            {
                Console.WriteLine("web.UserIsWebAdmin: '{0}'", web.UserIsWebAdmin);
            }
            catch (Exception ex)
            {
                Console.WriteLine("'web.UserIsWebAdmin' threw an exception: '{0}'", ex.Message);
            }
            try
            {
                Console.WriteLine("web.UserIsSiteAdmin: '{0}'", web.UserIsSiteAdmin);
            }
            catch (Exception ex)
            {
                Console.WriteLine("'web.UserIsSiteAdmin' threw an exception: '{0}'", ex.Message);
            }
        }
    }
}

Then called it without and with elevated permissions:

string url = "http://YourServer";
Console.WriteLine("Before elevation of privileges");
CheckIfCurrentUserIsSiteAdmin(url);
Console.WriteLine("After elevation of privileges");
SPSecurity.RunWithElevatedPrivileges(
    () =>
        {
            CheckIfCurrentUserIsSiteAdmin(url);
        });

The summary of the result:

If the user executing the application has full permission on the (root) web (Full Control permission level):

Before elevation:
Current user is site admin: False
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: False

After elevation:
Current user is site admin: True
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: True

If the user has no permission on the (root) web:

Before elevation:
Current user is site admin: False
Effective perm.: EmptyMask
web.UserIsWebAdmin: ‘Access denied’ exception when reading the property
web.UserIsSiteAdmin: ‘Access denied’ exception when reading the property

After elevation:
Current user is site admin: True
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: True

In PowerShell I defined a CheckIfCurrentUserIsSiteAdmin function as well, and invoked it without and with elevated rights:

function CheckIfCurrentUserIsSiteAdmin($url) {
  $site = Get-SPSite $url 
  $web = $site.RootWeb
  $currentUser = $web.CurrentUser
  Write-Host Current user $currentUser.LoginName is site admin on $url : $currentUser.IsSiteAdmin
  Write-Host Current user $currentUser.LoginName is site auditor on $url : $currentUser.IsSiteAuditor

  Write-Host Effective permissions on web: $web.EffectiveBasePermissions
  Write-Host web.UserIsWebAdmin: $web.UserIsWebAdmin
  Write-Host web.UserIsSiteAdmin: $web.UserIsSiteAdmin
}

$url = "http://YourServer&quot;

Write-Host Before elevation of privileges
CheckIfCurrentUserIsSiteAdmin $url

Write-Host After elevation of privileges
[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges(
  {
    CheckIfCurrentUserIsSiteAdmin $url
  }
)

The results were the same as in the case of the C# console application, except for web.UserIsWebAdmin and web.UserIsSiteAdmin when calling with no permissions on the web. In that case we don't receive any exception; simply no value (neither True nor False) is returned.

These results show that any code, be it a method in the standard SharePoint API or a custom component, that depends on the properties tested above behaves differently when used with elevated privileges, even if it is executed from an application external to the SharePoint web application context, that is, even if the identity of the process does not change.

February 9, 2015

“Decoding” SharePoint Error Messages using PowerShell

Filed under: PowerShell, SP 2010, Tips & Tricks — Peter Holpar @ 22:26

When working with SharePoint errors in ULS logs, you can find the error message near the stack trace. In the case of simple methods the stack trace may be enough to identify the exact conditions under which the exception was thrown. However, if the method is complex, with a lot of conditions and branches, it is not always trivial to find the source of the error, as we don't see the exception message itself: it is stored in language-specific resource files, and in the code you see only a kind of keyword.

For example, let’s see the GetItemById method of the SPList object with this signature:

internal SPListItem GetItemById(string strId, int id, string strRootFolder, bool cacheRowsetAndId, string strViewFields, bool bDatesInUtc)

There is a condition near to the end of the method:

if (this.IsUserInformationList)
{
    throw new ArgumentException(SPResource.GetString("CannotFindUser", new object[0]));
}
throw new ArgumentException(SPResource.GetString("ItemGone", new object[0]));

How could we “decode” this keyword to the real error message? It is easy to achieve using PowerShell.

For example, to get the error message for the “ItemGone”:

[Microsoft.SharePoint.SPResource]::GetString("ItemGone")

is

Item does not exist. It may have been deleted by another user.

Note that since the second parameter is an empty array, we can simply omit it when invoking the static GetString method.

If you need the language specific error message (for example, the German one):

$ci = New-Object System.Globalization.CultureInfo("de-de")
[Microsoft.SharePoint.SPResource]::GetString($ci, "ItemGone")

it is

Das Element ist nicht vorhanden. Möglicherweise wurde es von einem anderen Benutzer gelöscht.

Having the error message, it is most of the time already obvious at which line of code the exception was thrown.

It can also help to translate the localized message to the English one, and use it to look up a solution for the error on the Internet using your favorite search engine, as there are probably more results when you search for the English text.
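
If you want to compare a message across several languages at once, a small sketch like this one can help (the list of cultures is arbitrary):

"en-US", "de-DE", "fr-FR" | % {
  $ci = New-Object System.Globalization.CultureInfo($_)
  "{0}: {1}" -f $_, [Microsoft.SharePoint.SPResource]::GetString($ci, "ItemGone")
}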

Setting the Value of an URL Field or other Complex Field Types using PowerShell via the Managed Client Object Model

Filed under: Managed Client OM, PowerShell, SP 2013 — Peter Holpar @ 00:30

Assume you have a SharePoint list that includes fields of type Hyperlink or Picture (let's name this field UrlField), Person or Group (field name: User) and Lookup (field name: Lookup), and you need to set their values remotely using PowerShell via the managed client object model.

In C# you would set the Hyperlink field using this code:

var siteUrl = "http://YourServer";

using (var context = new ClientContext(siteUrl))
{
    var web = context.Web;
    var list = web.Lists.GetByTitle("YourTestList");
    var item = list.GetItemById(1);

    var urlValue = new FieldUrlValue();
    urlValue.Url = "http://www.company.com";
    urlValue.Description = "Description of the URL";
    item["UrlField"] = urlValue;

    item.Update();
    context.ExecuteQuery();
}

You may think that translating this code to PowerShell is as easy as:

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"

$siteUrl = "http://YourServer&quot;

$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$web = $context.Web
$list = $web.Lists.GetByTitle("YourTestList")
$item = $list.GetItemById(1)

$urlValue = New-Object Microsoft.SharePoint.Client.FieldUrlValue
$urlValue.Url = "http://www.company.com"
$urlValue.Description = "Description of the URL"
$item["UrlField"] = $urlValue

$item.Update()
$context.ExecuteQuery()

However, that approach simply won’t work, as you receive this error:

"Invalid URL: Microsoft.SharePoint.Client.FieldUrlValue" (ErrorCode: -2130575155)

When capturing the network traffic with Fiddler, we can see that the C# version sends this for the SetFieldValue method:

[screenshot]

For the PowerShell code however:

[screenshot]

You can see that the first parameter, the field name, is the same in both cases; however, the second parameter, which should be the value we assign to the field, is wrong in the second case. In the C# case it has the correct type (FieldUrlValue, represented by the GUID value in the TypeId), however in PowerShell the type is Unspecified, and the actual type name is sent as text in this parameter (I assume PowerShell called the ToString() method on the object).

Solution: You should explicitly cast the object in the $urlValue variable to the FieldUrlValue type, as shown below (the single difference is the cast in the field assignment):

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"

$siteUrl = "http://YourServer&quot;

$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$web = $context.Web
$list = $web.Lists.GetByTitle("YourTestList")
$item = $list.GetItemById(1)

$urlValue = New-Object Microsoft.SharePoint.Client.FieldUrlValue
$urlValue.Url = "http://www.company.com"
$urlValue.Description = "Description of the URL"
$item["UrlField"] = [Microsoft.SharePoint.Client.FieldUrlValue]$urlValue

$item.Update()
$context.ExecuteQuery()

The same applies to the other complex field types (only the relevant code is included below).

For the Person or Group field in C# (assuming 2 is the ID of the user you would like to set in the field):

var userValue = new FieldUserValue();
userValue.LookupId = 2;
item["User"] = userValue;

The network trace:

[screenshot]

PowerShell code that does not work:

$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
$userValue.LookupId = 2
$item["User"] = $userValue

The error you receive:

"Invalid data has been used to update the list item. The field you are trying to update may be read only." (ErrorCode: -2147352571)

The network trace:

[screenshot]

PowerShell code that does work:

$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
$userValue.LookupId = 2
$item["User"] = [Microsoft.SharePoint.Client.FieldUserValue]$userValue

For the Lookup field in C# (assuming 2 is the ID of the related list item you would like to set in the field):

var lookUpValue = new FieldLookupValue();
lookUpValue.LookupId = 2;
item["Lookup"] = lookUpValue;

The network trace:

[screenshot]

PowerShell code that does not work:

$lookUpValue = New-Object Microsoft.SharePoint.Client.FieldLookupValue
$lookUpValue.LookupId = 2
$item["Lookup"] = $lookUpValue

The error you receive:

"Invalid data has been used to update the list item. The field you are trying to update may be read only." (ErrorCode: -2147352571)

The network trace:

[screenshot]

PowerShell code that does work:

$lookUpValue = New-Object Microsoft.SharePoint.Client.FieldLookupValue
$lookUpValue.LookupId = 2
$item["Lookup"] = [Microsoft.SharePoint.Client.FieldLookupValue]$lookUpValue

February 7, 2015

Changing Site Collection Administrators via PowerShell without Elevated Permissions

Filed under: PowerShell, Security, SP 2010 — Peter Holpar @ 22:20

In my recent post I've already illustrated how to read and set the primary and secondary site collection administrators (the Owner and SecondaryContact properties of the corresponding SPSite object) via PowerShell. In those samples I've used elevated privileges to achieve my goals.

Let's see whether there is a way to do this without elevating the privileges.

Before PowerShell and before SharePoint 2010, the standard way to display or change the site owner and the secondary owner from the command line was the stsadm command, which is still available to us.

For example, we can display this information for all site of a web application using the enumsites operation:

stsadm -o enumsites -url http://mysiteroot

To set the site owner, we can use the siteowner operation with the ownerlogin parameter:

stsadm -o siteowner -url http://mysiteroot/users/user1 -ownerlogin "domain\user1"

For the secondary admin, use the secondarylogin parameter instead of ownerlogin.
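
For example, to set the secondary administrator of the same site collection:

stsadm -o siteowner -url http://mysiteroot/users/user1 -secondarylogin "domain\user2"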

Note: You should have local administrator rights on the server where you run these commands, otherwise you receive an “Access denied.” error message. The reason is that for most of the stsadm operations, the following security check is performed in the entry method (public static int Main) of the Microsoft.SharePoint.StsAdmin.SPStsAdmin class:

if (!SPAdministrationServiceUtilities.IsCurrentUserMachineAdmin())
{
  Console.WriteLine(SPResource.GetString("AccessDenied", new object[0]));
  Console.WriteLine();
  return -2147024891;
}

I decided to check what kind of methods stsadm uses to read and change the values. The implementation of the siteowner operation can be found in the Microsoft.SharePoint.StsAdmin.SPSiteOwner class. To access the Owner and SecondaryContact properties of the SPSite object, the OwnerLoginName and SecondaryContactLoginName properties of the SPSiteAdministration class are used. In the constructor of this class there is a security check that verifies whether the calling user is a farm administrator:

internal SPSiteAdministration(SPSite site)
{
    if (site == null)
    {
        throw new ArgumentNullException("site");
    }
    this.m_Site = site;
    if (this.m_Site.WebApplication.Farm.CurrentUserIsAdministrator())
    {
        this.m_Site.AdministratorOperationMode = true;
    }

To display the owner and the secondary contact of the site collection, we can use the following PowerShell script:

$url = "http://mysiteroot/users/user1&quot;
$siteAdmin = New-Object Microsoft.SharePoint.Administration.SPSiteAdministration($url)
$siteAdmin.OwnerLoginName
$siteAdmin.SecondaryContactLoginName

Changing these values is as simple as:

$url = "http://mysiteroot/users/user1&quot;
$siteAdmin = New-Object Microsoft.SharePoint.Administration.SPSiteAdministration($url)
$siteAdmin.OwnerLoginName = "domain\user1"
$siteAdmin.SecondaryContactLoginName = "domain\user2"

Note that we are using the login name as a string, and not an SPUser, when assigning the values, and there is no need for elevated privileges. The caller must be a farm administrator for the current SharePoint farm; however, as we call this code directly, and not via stsadm, where the local admin check is performed, the user does not need to be a local administrator.

I find it a bit inconsistent and disturbing that we can access (read / set) the same properties via various objects and methods of the base SharePoint library that perform different permission checks, so one can avoid the security checks implemented in one of the objects by accessing the very same information via another class.

Changing Site Collection Administrators via PowerShell Using Elevated Permissions

Filed under: PowerShell, Security, SP 2010 — Peter Holpar @ 05:06

Recently we migrated the users of a SharePoint farm into another domain. As part of the migration the MySites of the users had to be migrated as well. For the user migration we used the Move-SPUser cmdlet, however it seems to have no effect on the primary and secondary site collection administrators (that is, the Owner and SecondaryContact properties of the corresponding SPSite object). As you might know, each MySite is a site collection in SharePoint; the user the MySite belongs to is by default the primary site collection administrator, and there is no secondary admin specified. It is possible to change ("migrate") the admins via the Central Administration web UI, however with hundreds of users this was not a viable option for us. But no problem, we can surely change these values via PowerShell as well, can't we? Let's test it first, and use PowerShell to read the values. We have all of the MySites in a separate web application, so we tried to iterate through all its site collections and dump out the information we need.

These samples assume that the URL of the web application (the MySite root) is http://mysiteroot, and the MySites are under the managed path /users, for example, the URL of the MySite of user1 is http://mysiteroot/users/user1.

Our first try was:

$waUrl = "http://mysiteroot&quot;

$wa = Get-SPWebApplication $waUrl
Get-SPSite -WebApplication $wa -Limit ALL | % {
  Write-Host "Url:" $_.Url
  Write-Host "Primary Administrator:" $_.Owner.LoginName
  Write-Host "Secondary Administrator:" $_.SecondaryContact.LoginName
  Write-Host "—————————–"
}

Surprise! The URLs are dumped out; however, the Owner only for the root site and for the MySite of the user who executed the script, and the SecondaryContact only for the root (the MySite of the executing user had no SecondaryContact defined). That means one can access these properties only for the sites where one is defined as a site collection administrator (either primary or secondary). There is no error message (like Access denied) for the other sites, simply no information about the admins is displayed.

However, if we run the code in an elevated privileges block, the Owner and SecondaryContact properties are dumped out as well:

$waUrl = "http://mysiteroot&quot;

[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges (
  {
    $wa = Get-SPWebApplication $waUrl
    Get-SPSite -WebApplication $wa -Limit ALL | % {
      Write-Host "Url:" $_.Url
      Write-Host "Primary Administrator:" $_.Owner.LoginName
      Write-Host "Secondary Administrator:" $_.SecondaryContact.LoginName
      Write-Host "—————————–"
    }
  }
)

Note 1: If you try to dump out the info without Write-Host in a RunWithElevatedPrivileges code block, you will not see the result in the console; even the Url properties disappear.

The following sample displays no result:

$waUrl = "http://mysiteroot&quot;

[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges (
  {
    $wa = Get-SPWebApplication $waUrl
    Get-SPSite -WebApplication $wa -Limit ALL | % {
      $_.Url
      $_.Owner.LoginName
      $_.SecondaryContact.LoginName
    }
  }
)

Note 2: You don’t need to place all of your code in the elevated block. It is enough to place the code that gets the reference to the web application (site, web, etc.) in this block.

The following code works just as well as the original one with elevated privileges:

$waUrl = "http://mysiteroot&quot;

[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges (
  {
    $wa = Get-SPWebApplication $waUrl
  }
)

Get-SPSite -WebApplication $wa -Limit ALL | % {
  Write-Host "Url:" $_.Url
  Write-Host "Primary Administrator:" $_.Owner.LoginName
  Write-Host "Secondary Administrator:" $_.SecondaryContact.LoginName
  Write-Host "—————————–"
}

After we have displayed the information about the site admins, let's see how we can change the configuration. First I tried without elevation, although I was already sure that it wouldn't perform the requested operation. In the following samples I try to set the Owner property, but in the case of SecondaryContact it would have the same outcome.

$url = "http://mysiteroot/users/user1&quot;
$siteOwnerLogin = "domain\user1" 
$web = Get-SPWeb $url
$user = $web.EnsureUser($siteOwnerLogin)
$site = $web.Site
$site.Owner = $user

The code above does not work. We receive an error message (Attempted to perform an unauthorized operation.), and the site owner is not changed.

However, using the elevated code block will solve the problem again:

$url = "http://mysiteroot/users/user1&quot;
$siteOwnerLogin = "domain\user1"

[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges(
     {
         $web = Get-SPWeb $url
         $user = $web.EnsureUser($siteOwnerLogin)
         $site = $web.Site
         $site.Owner = $user
     }
)

You can find opinions on the web, like this one, stating that running PowerShell code in an elevated block has no effect at all. The samples above demonstrate, however, that it really DOES have an effect.

But what effect of the code elevation is it that makes it possible to get and set the Owner and SecondaryContact properties in the former samples?

If you have a look at the source code of the Owner (or SecondaryContact) property, for example using Reflector, you will see that there is a double permission check; both of them can throw an UnauthorizedAccessException. We can ignore the first one (that is, !this.AdministratorOperationMode), as this condition is only checked if there is no CurrentUser for the RootWeb of the site. The second permission check is the important one; it is checked if the CurrentUser is not null, and it is:

(!this.RootWeb.CurrentUser.IsSiteAdmin)

If this condition is true, an UnauthorizedAccessException is thrown. The setter method of the Owner (or SecondaryContact) property first calls the getter, so the same condition is checked in this case as well. Although the call to the getter is in a try-catch block, the catch is valid only for the SPException type, so it has no effect on the UnauthorizedAccessException thrown by the getter.

Remark: I have to admit that it is not yet clear to me why the exception thrown by the getter is not displayed when calling the getter from PowerShell.

Let's see a further example of code elevation from PowerShell to clarify why the former samples with elevation solved the issue of getting / setting the Owner and SecondaryContact properties:

$url = "http://mysiteroot/users/user1&quot;
$web = Get-SPWeb $url

$userName = $web.CurrentUser.LoginName
$userIsAdmin = $web.CurrentUser.IsSiteAdmin

# elevated privileges
[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges(
     {
         $web = Get-SPWeb $url
         $userNameElevated = $web.CurrentUser.LoginName
         $userIsAdminElevated = $web.CurrentUser.IsSiteAdmin
     }
)

Write-Host "Before elevation"
Write-Host "User name: " $userName
Write-Host "User is site admin: " $userIsAdmin
Write-Host "After elevation"
Write-Host "User name: " $userNameElevated
Write-Host "User is site admin: " $userIsAdminElevated

Assuming the executing user is neither the owner nor the secondary contact of the site collection, the code returns the same user name before and after elevation; however, the value of the IsSiteAdmin property is false before the elevation, but true after the elevation. As we have seen from the previous reflectoring, this change is just enough for the getter / setter methods of the Owner and SecondaryContact properties to work.

In my next post I will show you an alternative method that makes it possible to read / change these values from PowerShell without any kind of elevation.
