Second Life of a Hungarian SharePoint Geek

March 29, 2015

May Merge-SPLogFile Flood the Content Database of the Central Administration Web Application?

Filed under: Content database, PowerShell, SP 2013 — Peter Holpar @ 00:20

In recent weeks we were searching for the cause of a specific error in one of our SharePoint 2013 farms. To get detailed trace information, we often switched the log level to VerboseEx mode. A few days later the admins alerted us that the size of the Central Administration content database had increased enormously (at that time it was about 80 GB!).
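For context, switching the farm-wide trace level to VerboseEx and back is done with the standard logging cmdlets; a minimal sketch (run in the SharePoint Management Shell):

```powershell
# Raise all trace categories to the most detailed level
# (use with care: the ULS logs grow very fast at this level)
Set-SPLogLevel -TraceSeverity VerboseEx

# ... reproduce the error ...

# Restore the default trace levels afterwards
Clear-SPLogLevel
```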

Looking for the source of this unexpected amount of data, I found a document library called Diagnostics Log Files (description: "View and analyze merged log files from the all servers in the farm") that contained 1824 items.



Although I consider myself primarily a SharePoint developer, I always try to stay up to date on infrastructure topics as well, but to tell the truth, I had never seen this document library before. Searching the web didn't help either.

Having a look into the document library I found a set of folders with GUID names.


Each of the folders contained a lot of files: a numbered set for each of the servers in the farm.


The files within the folders are archived logs in .gz file format, each around 26 MB.


Based on the description of the library, I guessed that it is related to the Merge-SPLogFile cmdlet, which collects the ULS log files from all of the servers in the farm and saves the aggregation in the specified file on the local system. However, I have not found any documentation on how it performs this action, or whether it has anything to do with the content DB of the Central Administration site.
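For reference, a typical invocation of the cmdlet looks like the sketch below (the target path is arbitrary; -StartTime and -EndTime restrict the collected time range):

```powershell
# Collect the ULS logs of all farm members for a one-hour window
# and merge them into a single file on the local server
Merge-SPLogFile -Path "C:\Temp\MergedLog.log" -Overwrite `
    -StartTime "3/21/2015 1:00 AM" -EndTime "3/21/2015 2:00 AM"
```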


After a few hours of "reflectioning", it was obvious how this situation had come about. If you are not interested in call chains, feel free to skip the following part.

All of the classes and methods below are defined in the Microsoft.SharePoint assembly, if not specified otherwise.

The InternalProcessRecord method of the Microsoft.SharePoint.PowerShell.SPCmdletMergeLogFile class (Microsoft.SharePoint.PowerShell assembly) creates a new instance of the SPMergeLogFileUtil class based on the path of the aggregated log folder and the filter expression, and queues its Run method on a thread-pool thread:

SPMergeLogFileUtil util = new SPMergeLogFileUtil(this.Path, filter);
ThreadPool.QueueUserWorkItem(new WaitCallback(util.Run));

In the Run method of the Microsoft.SharePoint.Diagnostics.SPMergeLogFileUtil class:

public void Run(object stateInfo)
{
    this.Progress = 0;
    try
    {
        // Creates the diagnostic log files document library in central admin
        // (if it does not yet exist) via the GetLogFilesList method.
        // Adds a new subfolder (with a GUID name) to the library; if the folder
        // already exists, deletes its content.
        // Executes child jobs (SPMergeLogFilesJobDefinition) on each farm member,
        // see more details about it later below.
        List<string> jobs = this.DispatchCollectingJobs();
        // Waits for all child jobs to be finished on the farm members.
        // Merges the content of the collected files from the central admin document
        // library into the specified local file system folder by calling MergeFiles.
        // Finally, deletes the temporary files one by one from the central admin
        // document library, and at the very end deletes their folder as well.
    }
    catch (Exception exception)
    {
        this.Error = exception;
    }
    this.Progress = 100;
}

Yes, as you can see: if there is any error in the file collection process on any of the farm members, or the merging process fails, the files won't be deleted from the Diagnostics Log Files document library.

Instead of the RunMerge method, the deletion process would probably have a better place in a finally block, or at the very least the catch block should check whether the files were removed successfully.


A few words about the Microsoft.SharePoint.Diagnostics.SPMergeLogFilesJobDefinition class, as promised earlier. Its Execute method calls the CollectLogFiles method, which creates a ULSLogFileProcessor instance based on the requested log filter and reads the corresponding ULS entries from the farm member the job is running on. It stores the entries in a temporary file in the file system, uploads them to the current subfolder (the one with the GUID name) of the Diagnostics Log Files document library (file name pattern: [ServerLocalName].log.gz if a single file is enough to store the log entries from the server, otherwise [ServerLocalName] (1).log.gz and so on), and finally deletes the local temporary file.


We can read a few related text values via PowerShell as well:

The getter of the DisplayName property of the SPMergeLogFilesJobDefinition class returns:

SPResource.GetString("CollectLogsJobTitle", new object[] { base.Server.DisplayName });

reading the value via PowerShell:
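Assuming the Microsoft.SharePoint assembly is loaded into the PowerShell session, the call is a one-liner; called without a server-name argument it returns the raw resource string with the |0 placeholder, as shown below (sketch, the SPResource helper class is public in the Microsoft.SharePoint assembly):

```powershell
# Read the localized job title resource string directly
[Microsoft.SharePoint.SPResource]::GetString("CollectLogsJobTitle")
```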



Collection of log files from server |0

In the GetLogFilesList method of the SPMergeLogFileUtil class we find the title and description of the central admin document library used for log merging:

string title = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsTitle", new object[0]);
string description = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsDescription", new object[0]);

Reading the values via PowerShell:



Diagnostics Log Files



View and analyze merged log files from the all servers in the farm

These values support our theory that the library was created and filled by these methods.


Next, I used the following PowerShell script to look for failed merge jobs in the time range in which the files in the Central Administration were created:

$farm = Get-SPFarm
$from = "3/21/2015 12:00:00 AM"
$to = "3/21/2015 6:00:00 AM"
$farm.TimerService.JobHistoryEntries | ? {($_.StartTime -gt $from) -and ($_.StartTime -lt $to) -and ($_.Status -eq "Failed") -and ($_.JobDefinitionTitle -like "Collection of log files from server *")}

The result:


As you can see, in our case there were errors on both farm member servers; for example, there was not enough storage space on one of them. After the jobs failed, the aggregated files were not deleted from the Diagnostics Log Files document library of the Central Administration.

Since even a successful execution of the Merge-SPLogFile cmdlet can temporarily increase the size of the Central Administration content database considerably, and the effect of failed executions is not only temporary (and is particularly large if it happens several times and is combined with verbose logging), SharePoint administrators should be aware of these facts, consider them when planning database sizes, and/or take an extra maintenance step to regularly remove the leftovers of failed merge processes from the Diagnostics Log Files document library. As far as I can tell, the issue affects SharePoint 2010 as well as 2013.
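Such a maintenance step might look like the sketch below. It is hedged: it assumes the default library title shown above and simply empties the whole library, so test it in a non-production farm first:

```powershell
# Remove leftover merge files and GUID-named folders from the
# Diagnostics Log Files library of the Central Administration site (sketch)
$caWebApp = Get-SPWebApplication -IncludeCentralAdministration |
    ? { $_.IsAdministrationWebApplication }
$caWeb = (Get-SPSite $caWebApp.Url).RootWeb
$list = $caWeb.Lists["Diagnostics Log Files"]
if ($list -ne $null) {
    # snapshot the collections with @() so we don't enumerate while deleting
    @($list.Items)   | % { $_.Delete() }  # the collected .gz files
    @($list.Folders) | % { $_.Delete() }  # the GUID-named folders
}
```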

March 26, 2015

Strange Localization Issue When Working with List and Site Templates

Filed under: PowerShell, MUI, SP 2013 — Peter Holpar @ 23:44

One of our clients runs a localized version of SharePoint 2013. The operating system is Windows Server 2012 R2 (English), and the SharePoint Server itself is English as well. The German language pack is installed, and sites and site collections were created in German. We are working with various custom site templates. Recently, one of these templates had to be extended with a Task-list-based custom list (called ToDos). The users prepared the list in a test site, and we saved the list as a template. We created a new site using the site template (we will refer to this site later as the prototype), and next we created a new list based on the custom list template. Finally, we saved the altered web site as a site template, including content, using the following PowerShell commands:

$web = Get-SPWeb $siteTemplateSourceUrl
$web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)

We created a test site using the new site template, and everything seemed to be OK. However, after a while the users started to complain that a menu for the new list contained some English text as well. As it turned out, some of the views for the new list were created with English titles:



First, we verified the manifest.xml of the list template by downloading the .stp file (which has a CAB file format) and opening it with IZArc. We found that the DisplayName property of the default view ("Alle Vorgänge", meaning "All Tasks") and of a custom datasheet view (called "db", which stands for "Datenblatt") contains the title as plain text, while the DisplayName property of the other views contains a resource reference (like "$Resources:core,Late_Tasks;").


Next, we downloaded the site template (the .wsp file also has a CAB file format and can be opened with IZArc) and verified the schema.xml of the ToDos list. We found that the original German texts ("Alle Vorgänge" and "db") were kept; however, all other view names were "localized" to English.


At this point I already suspected that the problem was caused by the locale of the thread the site-template-exporting code was running on. To verify my assumption, I saved the prototype site from the site settings via the SharePoint web UI (which is German in our case). This time the resulting schema.xml in the new site template .wsp contained the German titles:


We got the same result (that is, German view titles) if we called our former PowerShell code specifying German as the culture for the running thread. See more info about the Using-Culture helper method, the SharePoint Multilingual User Interface (MUI) and PowerShell here:

Using-Culture de-DE {
  $web = Get-SPWeb $pwsSiteTemplateSourceUrl
  $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
}
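The Using-Culture helper referenced above is a well-known community pattern rather than part of SharePoint; a common implementation looks like this (sketch):

```powershell
function Using-Culture([System.Globalization.CultureInfo] $culture, [ScriptBlock] $script) {
    # save the current culture settings of the thread
    $prevCulture = [System.Threading.Thread]::CurrentThread.CurrentCulture
    $prevUICulture = [System.Threading.Thread]::CurrentThread.CurrentUICulture
    try {
        # switch the thread to the requested culture and run the script block
        [System.Threading.Thread]::CurrentThread.CurrentCulture = $culture
        [System.Threading.Thread]::CurrentThread.CurrentUICulture = $culture
        & $script
    }
    finally {
        # restore the original culture settings even if the script block fails
        [System.Threading.Thread]::CurrentThread.CurrentCulture = $prevCulture
        [System.Threading.Thread]::CurrentThread.CurrentUICulture = $prevUICulture
    }
}
```

PowerShell converts the "de-DE" string argument to a CultureInfo instance automatically, which is why the call syntax above works.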

We fixed the existing views via the following PowerShell code (note: using the Using-Culture helper method is important in this case as well; we have only a single level of site hierarchy here, so there is no recursion in the code):

$web = Get-SPWeb http://SharePointServer

function Rename-View-IfExists($list, $viewNameOld, $viewNameNew) {
  $view = $list.Views[$viewNameOld]
  If ($view -ne $Null) {
    Write-Host Renaming view $viewNameOld to $viewNameNew
    $view.Title = $viewNameNew
    # persist the title change
    $view.Update()
  }
  Else {
    Write-Host View $viewNameOld not found
  }
}

Using-Culture de-DE {
  $web.Webs | % {
    $list = $_.Lists["ToDos"]
    If ($list -ne $Null) {
      Write-Host ToDo list found in $_.Title
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge"
      Rename-View-IfExists $list "Upcoming" "Anstehend"
      Rename-View-IfExists $list "Completed" "Abgeschlossen"
      Rename-View-IfExists $list "My Tasks" "Meine Aufgaben"
      Rename-View-IfExists $list "Gantt Chart" "Gantt-Diagramm"
    }
  }
}

It is strange that we had no problem with field names or other localizable texts when working with the English culture.

March 24, 2015

How to Read the Values of Fields bound to Lookup Tables via REST

Filed under: JavaScript, Project Server, REST — Peter Holpar @ 22:51

In my recent post I illustrated how to read the values of Enterprise Custom Fields (ECF) that are bound to Lookup Tables. I suggest you read that post first, as it can help you better understand the relations between the custom field values and the internal names of the lookup table entries.

In this post I show you how to read such values using the REST interface. Instead of C#, I use JavaScript in this example. The sample code uses version 3.0.3-Beta4 of the LINQ for JavaScript library (the version is important, as this version contains lower-case function names, in contrast to the former stable releases!) and version 1.8.3 of jQuery.

Assuming that these scripts are all located in the PSRESTTest/js subfolder in the Layouts folder, we can inject them via a Script Editor Web Part using this HTML code:

<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/jquery-1.8.3.min.js"></script>
<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/linq.min.js"></script>
<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/GetCustFieldREST.js"></script>

In our GetCustFieldREST.js script we define the String.format helper function first:

String.format = function () {
    // The string containing the format items (e.g. "{0}")
    // will and always has to be the first argument.
    var theString = arguments[0];

    // start with the second argument (i = 1)
    for (var i = 1; i < arguments.length; i++) {
        // "gm" = RegEx options for Global search (more than one instance)
        // and for Multiline search
        var regEx = new RegExp("\\{" + (i - 1) + "\\}", "gm");
        theString = theString.replace(regEx, arguments[i]);
    }

    return theString;
};

Another helper function supports sending fault-tolerant REST queries:

function sendRESTQuery(queryUrl, onSuccess, retryCount) {

    var retryWaitTime = 1000; // 1 sec.
    var retryCountMax = 3;
    // use a default value of 0 if no value for retryCount passed
    retryCount = (retryCount != undefined) ? retryCount : 0;

    //$.support.cors = true; // enable cross-domain query
    $.ajax({
        //beforeSend: function (request) {
        //    request.withCredentials = false;
        //},
        type: 'GET',
        //xhrFields: { withCredentials: false },
        contentType: 'application/json;odata=verbose',
        url: baseUrl + queryUrl,
        headers: {
            'X-RequestDigest': $('#__REQUESTDIGEST').val(),
            "Accept": "application/json; odata=verbose"
        },
        dataType: "json",
        complete: function (result) {
            var response = JSON.parse(result.responseText);
            if (response.error) {
                if (retryCount <= retryCountMax) {
                    // retry with an incremented counter after a short delay
                    window.setTimeout(function () { sendRESTQuery(queryUrl, onSuccess, retryCount + 1); }, retryWaitTime);
                }
                else {
                    alert("Error: " + response.error.code + "\n" + response.error.message.value);
                }
            }
            else {
                // pass the unwrapped response to the callback
                onSuccess(response.d);
            }
        }
    });
}


The baseUrl variable holds the root of the REST endpoint for Project Server. I assume your PWA site is provisioned to the PWA managed path.

var baseUrl = String.format("{0}//{1}/PWA/_api/ProjectServer/", window.location.protocol, window.location.host);

We call the getResCustProp method when the page is loaded:

$(document).ready(getResCustProp);
In the getResCustProp method, I first query the InternalName of the custom field, as well as the InternalName and FullValue properties of the lookup entries of the corresponding lookup table. In a second query I read the custom field value for the specified enterprise resource, and compare the value (or values) stored in the field with the InternalName property of the lookup table entries from the first query. Note that we should escape the underscore in the InternalName property, and should use the eval JavaScript function, as we don't know the name of the custom field (that is, the name of the property) at design time.

function getResCustProp() {
    var resourceName = "Brian Cox";
    var fieldName = "ResField";
    var fieldQueryUrl = String.format("CustomFields?$expand=LookupTable/Entries&$filter=Name eq '{0}'&$select=InternalName,LookupTable/Entries/InternalName,LookupTable/Entries/FullValue", fieldName);
    sendRESTQuery(fieldQueryUrl, function (fieldResponseData) {
        var field = fieldResponseData.results[0];
        var lookupTableEntries = Enumerable.from(field.LookupTable.Entries.results);
        var resourceQuery = String.format("EnterpriseResources?$filter=Name eq '{0}'&$select={1}", resourceName, field.InternalName);
        sendRESTQuery(resourceQuery, function (resourceResponseData) {
            var resource = resourceResponseData.results[0];
            var encodedInternalName = field.InternalName.replace('_', '_x005f_');
            var fieldValue = eval("resource." + encodedInternalName);
            Enumerable.from(fieldValue.results).forEach(function (fv) {
                var entry = lookupTableEntries.first(String.format('$.InternalName == "{0}"', fv));
                alert(String.format("Value: '{0}', Entry: '{1}'", entry.FullValue, fv));
            });
        });
    });
}

Of course, if you would like to use this code in production, you should add further data validation (whether any resource and custom field are returned by the queries, etc.) and error handling to the method.

March 23, 2015

How to Read the Values of Fields bound to Lookup Tables via the Client Object Model

Filed under: Managed Client OM, Project Server — Peter Holpar @ 05:16

Assume you have an Enterprise Custom Field (let's call this ECF 'ResField') defined for project resources, that is bound to a Lookup Table.

How can we read the textual values of the field as they are assigned to your resources? After all, what makes reading such values any different from getting field values without lookup tables?

If we have a look at a resource with a lookup-table-based custom field via an OData / REST query (for example, http://YourProjectServer/pwa/_api/ProjectServer/EnterpriseResources), you can see that the value is stored as a reference, like 'Entry_4d65d905cac9e411940700505634b541'.


If we access the value via the managed client OM, we get it as a string array, even if only a single value can be selected from the lookup table. The reference value in the array corresponds to the value of the InternalName property of the lookup table entry. If we know the IDs of the resource (that we want to read the value from), of the enterprise custom field (which means we know its internal name as well) and of the related lookup table, we can get the result in a single request, as shown below:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var lookupTableId = "4c65d905-cac9-e411-9407-00505634b541";
    var resourceId = "f9497d1d-9145-e411-9407-00505634b541";

    var res = projectContext.EnterpriseResources.GetById(resourceId);
    var lt = projectContext.LookupTables.GetById(lookupTableId);
    var cfInternalName = "Custom_80bd269ecbc9e411940700505634b541";
    projectContext.Load(res, r => r[cfInternalName]);
    projectContext.Load(lt.Entries);
    projectContext.ExecuteQuery();

    var valueEntries = res[cfInternalName] as string[];
    if (valueEntries != null)
    {
        foreach (var valueEntry in valueEntries)
        {
            var lookupText = lt.Entries.FirstOrDefault(e => e.InternalName == valueEntry) as LookupText;
            var ltValue = (lookupText != null) ? lookupText.Value : null;
            Console.WriteLine("Value: '{0}' (Entry was '{1}')", ltValue, valueEntry);
        }
    }
}

However, if these values are unknown and we know only the name of the resource and of the field, we need to submit an extra request to get the IDs for the second step:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var resourceName = "Brian Cox";
    var fieldName = "ResField";

    projectContext.Load(projectContext.EnterpriseResources, ers => ers.Include(r => r.Id, r => r.Name));
    projectContext.Load(projectContext.CustomFields, cfs => cfs.Include(f => f.Name, f => f.InternalName, f => f.LookupTable.Id, f => f.LookupEntries));

    projectContext.ExecuteQuery();

    var resourceId = projectContext.EnterpriseResources.First(er => er.Name == resourceName).Id.ToString();
    var cf = projectContext.CustomFields.First(f => f.Name == fieldName);
    var cfInternalName = cf.InternalName;
    var lookupTableId = cf.LookupTable.Id.ToString();

    var res = projectContext.EnterpriseResources.GetById(resourceId);
    var lt = projectContext.LookupTables.GetById(lookupTableId);

    projectContext.Load(res, r => r[cfInternalName]);
    projectContext.Load(lt.Entries);
    projectContext.ExecuteQuery();

    var valueEntries = res[cfInternalName] as string[];
    if (valueEntries != null)
    {
        foreach (var valueEntry in valueEntries)
        {
            var lookupText = lt.Entries.FirstOrDefault(e => e.InternalName == valueEntry) as LookupText;
            var ltValue = (lookupText != null) ? lookupText.Value : null;
            Console.WriteLine("Value: '{0}' (Entry was '{1}')", ltValue, valueEntry);
        }
    }
}

Note: although this post was about a custom field defined for the resource entity, you can apply the same technique for project and task fields as well.

March 13, 2015

Strange Access Denied Error in Project Server Client OM

Filed under: Managed Client OM, Project Server — Peter Holpar @ 22:42

Recently I worked with the Managed Client Object Model of Project Server 2013. Although I am a Farm Administrator, a Site Collection Administrator (Site Owner) of every site collection, and a member of the Administrators group in Project Server, I received an Access Denied error (UnauthorizedAccessException) when executing even the simplest queries, like this one:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var projects = projectContext.Projects;
    projectContext.Load(projects);
    projectContext.ExecuteQuery();
}

The error message was:

Access denied. You do not have permission to perform this action or access this resource.

Error code: -2147024891


No entries corresponding to the error or the correlation ID of the response were found in the ULS logs.

The real problem was the pwaUrl parameter in the ProjectContext constructor: by mistake I used the URL of the root site (like http://projectserver) instead of the URL of the PWA site (http://projectserver/PWA).

March 10, 2015

Automating the Provisioning of a PWA-Instance

Filed under: ALM, PowerShell, PS 2013 — Peter Holpar @ 23:56

When testing our custom Project Server 2013 solutions in the development system, or deploying them to the test system, I found it useful to be able to use a clean environment each time: a new PWA instance with an empty project database and separate SharePoint content databases for the project web access itself and for the project web sites.

We wrote a simple PowerShell script that provisions a new PWA instance, including:
- A separate SharePoint content database that should contain only a single site collection: the one for the PWA. If the content DB already exists, we use the existing one; otherwise we create a new one.
- The managed path for the PWA.
- A new site collection for the PWA, using the Project Web App site template and the right locale ID (1033 in our case). If the site already exists (in case we re-use a former content DB), it will be dropped before creating the new one.
- A new project database. If a project database with the same name already exists on the SQL Server, it will be dropped and re-created.
- Finally, the project database is mounted to the PWA instance, and the admin permissions are set.

Note that we have a prefix (like DEV or TEST) that identifies the system. We set the URL of the PWA and the database names using this prefix. The database server names (one for the SharePoint content DBs and another one for the service application DBs) include the prefix as well, and are configured via aliases in the SQL Server Client Network Utility, making it easier to relocate the databases if needed.

$environmentPrefix = "DEV"

$webAppUrl = [string]::Format("http://ps-{0}", $environmentPrefix)

$contentDBName = [string]::Format("PS_{0}_Content_PWA", $environmentPrefix)
$contentDBServer = [string]::Format("PS_{0}_Content", $environmentPrefix)

$pwaMgdPathPostFix = "PWA"
$pwaUrl = [string]::Format("{0}/{1}", $webAppUrl, $pwaMgdPathPostFix)
$pwaTitle = "PWA Site"
$pwaSiteTemplate = "PWA#0"
$pwaLcid = 1033
$ownerAlias = "domain\user1"
$secondaryOwnerAlias = "domain\user2"

$projServDBName = [string]::Format("PS_{0}_PS", $environmentPrefix)
$projServDBServer = [string]::Format("PS_{0}_ServiceApp", $environmentPrefix)

Write-Host Getting web application at $webAppUrl
$webApp = Get-SPWebApplication -Identity $webAppUrl

$contentDatabase = Get-SPContentDatabase -Identity $contentDBName -ErrorAction SilentlyContinue

if ($contentDatabase -eq $null) {
  Write-Host Creating content database: $contentDBName
  $contentDatabase = New-SPContentDatabase -Name $contentDBName -WebApplication $webApp -MaxSiteCount 1 -WarningSiteCount 0 -DatabaseServer $contentDBServer
}
else {
  Write-Host Using existing content database: $contentDBName
}

$pwaMgdPath = Get-SPManagedPath -Identity $pwaMgdPathPostFix -WebApplication $webApp -ErrorAction SilentlyContinue
if ($pwaMgdPath -eq $null) {
  Write-Host Creating managed path: $pwaMgdPathPostFix
  $pwaMgdPath = New-SPManagedPath -RelativeURL $pwaMgdPathPostFix -WebApplication $webApp -Explicit
}
else {
  Write-Host Using existing managed path: $pwaMgdPathPostFix
}

$pwaSite = Get-SPSite -Identity $pwaUrl -ErrorAction SilentlyContinue
if ($pwaSite -ne $null) {
  Write-Host Deleting existing PWA site at $pwaUrl
  Remove-SPSite -Identity $pwaUrl -Confirm:$false
}

Write-Host Creating PWA site at $pwaUrl
$pwaSite = New-SPSite -Url $pwaUrl -OwnerAlias $ownerAlias -SecondaryOwnerAlias $secondaryOwnerAlias -ContentDatabase $contentDatabase -Template $pwaSiteTemplate -Language $pwaLcid -Name $pwaTitle

$projDBState = Get-SPProjectDatabaseState -Name $projServDBName -DatabaseServer $projServDBServer
if ($projDBState.Exists) {
  Write-Host Removing existing Project DB $projServDBName
  Remove-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -WebApplication $webApp
}
Write-Host Creating Project DB $projServDBName
New-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -Lcid $pwaLcid -WebApplication $webApp

Write-Host Bind Project Service DB to PWA Site
Mount-SPProjectWebInstance -DatabaseName $projServDBName -DatabaseServer $projServDBServer -SiteCollection $pwaSite

# Setting admin permissions on PWA
Grant-SPProjectAdministratorAccess -Url $pwaUrl -UserAccount $ownerAlias

Using this script helps us avoid a lot of manual configuration steps, saves us a lot of time, and makes the result more reproducible.

March 4, 2015

How to Find Out the Real Number of Queue Jobs

Filed under: PowerShell, PS 2013, PSI — Peter Holpar @ 00:45

Recently we had an issue with Project Server. Although the Microsoft Project Server Queue Service was running, the items in the queue were not being processed, and the performance of the system degraded severely. At the same time we found a lot of cache cluster failures in the Windows event logs and ULS logs; it is not clear which one was the source of the problem and which was the result of the other. The "solution" was to install the February 2015 CU SharePoint product updates and restart the server.

However, even after the restart, the number of job entries seemed to be constant when checked via PWA Settings / Manage Queue Jobs (Queue and Database Administration): at the first page load, the total number displayed at the bottom left of the grid was 1000; however, when we paged through the results or refreshed the status, the total changed to 500 (this seems to be an issue with the product). This means that PWA administrators don't see the real number of entries.

But how could one then get the real number of the queue jobs?

If you have permission to access the performance counters on the server (in the simplest case, if you are a local admin), you can use the Current Unprocessed Jobs counter (in the ProjectServer:QueueGeneral category), which, as its name suggests, gives the total number of currently unprocessed jobs.
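If you prefer PowerShell over Performance Monitor, the counter can also be read with Get-Counter; a sketch (the counter path is assembled from the category and counter names mentioned above, so verify the exact path on your server):

```powershell
# Read the current number of unprocessed queue jobs from the performance counter
(Get-Counter '\ProjectServer:QueueGeneral\Current Unprocessed Jobs').CounterSamples |
    % { $_.CookedValue }
```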

You have to find an alternative solution if you need the count of jobs having another status, or need more granular results, for example, the number of job entries that are ready for processing and are related to publishing a project.

The QueueJobs property of the Project class in the client object model (see the QueueJob and QueueJobCollection classes as well) provides only information related to a given project, and the same is true for the REST interface, where you can access the same information for your project like this (the GUID is the ID of your project):

http://YourProjServer/PWA/_api/ProjectServer/Projects('<project guid>')/QueueJobs
The best solution I've found is based on the GetJobCount method of the QueueSystem object in the PSI interface. Let's see some practical PowerShell examples of how to use it.

To get the reference for the proxy:

$pwaUrl = "http://YourProjServer/PWA"
$svcPSProxy = New-WebServiceProxy -Uri ($pwaUrl + "/_vti_bin/PSI/QueueSystem.asmx?wsdl") -UseDefaultCredential

To get the number of all entries without filtering:

$svcPSProxy.GetJobCount($Null, $Null, $Null)

For filtering, we can use the second and third parameters of the method: jobStates and messageTypes, which are arrays of the corresponding JobState and QueueMsgType enumeration values. Both of these enums are available as nested enums of the Microsoft.Office.Project.Server.Library.QueueConstants class. This class is defined in the Microsoft.Office.Project.Server.Library assembly, so if we would like to use the enums, we should load the assembly first:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")
Note: you can use the integer values corresponding to the enum values, like:

$jobStates = (1, 7)

…however, I don't find it very developer-friendly to use such magic constants in code, and you also lose the autocomplete feature of PowerShell that you have when working with the enums, as displayed below:

$jobStates = ([int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ReadyForProcessing, [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::Failed) # example states
similarly for message types:

$msgTypes = ([int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ProjectPublish) # example message type
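If you are unsure which members the nested enums contain, you can simply list them together with their integer values; a quick sketch (the + sign is the PowerShell/.NET syntax for nested types):

```powershell
# List the names and integer values of the JobState enum
[Enum]::GetValues([Microsoft.Office.Project.Server.Library.QueueConstants+JobState]) |
    % { "{0,-30} {1}" -f $_, [int]$_ }
```

The same works for the QueueMsgType enum by swapping the type name.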

You can then access the count of filtered items like:

$svcPSProxy.GetJobCount($Null, $jobStates, $Null)


$svcPSProxy.GetJobCount($Null, $jobStates, $msgTypes)

It is worth knowing that the QueueConstants class has two methods (PendingJobStates and CompletedJobStates) that return a predefined set of the enum values as a generic IEnumerable<QueueConstants.JobState>. We can use these methods from PowerShell as well:

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::CompletedJobStates() | % { [int]$_ }

February 28, 2015

People Picker does not work when ActiveX Filtering is enabled in Internet Explorer

Filed under: SP 2010 — Peter Holpar @ 18:26

Last week we had a strange issue with one of the users. She used a simple SharePoint application that includes a People Picker control to enable users to assign items to other employees.

A pre-defined user should have been selected as a default value in the field via server-side code; however, in this case it was empty on page load. If the user typed the name or login name of any employee into the text box, the name was not resolved by the control as expected, although when we checked the network traffic with Fiddler, we saw that the request for name resolution was sent to the server and the server responded with the resolved entity. If the user tried to select the assignee by browsing the entities, she was able to select the employee, but as soon as she clicked the OK button in the Select People and Groups webpage dialog, the selection was lost and the text box remained empty.

Other users (using the same browser version, Internet Explorer 9, and having the very same permission on the system) had no problem with the application.

Fortunately, I noticed an unusual icon next to the URL in the browser, saying "Some content is blocked to help protect your privacy".


As I found out, this was the result of the user unconsciously activating the ActiveX Filtering feature in her browser.


The solution was to turn off ActiveX Filtering for the site; alternatively, one can turn off the whole feature if it is not needed.


I was able to reproduce the issue with SharePoint 2010 up to Internet Explorer version 11. In the case of SharePoint 2013 the issue seems to be fixed; at least, I was not able to reproduce it with IE 10.

February 19, 2015

How to Programmatically “Enable reporting of offensive content” on a community site

Filed under: PowerShell, SP 2013 — Peter Holpar @ 23:10

Recently a question was posted about how to “Enable reporting of offensive content” from code on SharePoint 2013 community sites.


The Community Settings page (CommunitySettings.aspx) has a code behind class Microsoft.SharePoint.Portal.CommunitySettingsPage. In its BtnSave_Click method it depends on the static EnableDisableAbuseReports method of the internal FunctionalityEnablers class to perform the actions required for reporting of offensive content. Furthermore, it sets the value of the vti_CommunityEnableReportAbuse web property to indicate that reporting is enabled for the community site.

To perform the same actions from PowerShell I wrote this script:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# this command only affects the check box on the Community Settings page
$web.AllProperties["vti_CommunityEnableReportAbuse"] = "true"

# the functionality itself is activated by the code below
# get a reference for the Microsoft.SharePoint.Portal assembly
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$functionalityEnablersType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers")
$mi_EnableDisableAbuseReports = $functionalityEnablersType.GetMethod("EnableDisableAbuseReports")
$mi_EnableDisableAbuseReports.Invoke($null, @($web, $True))

Note: If you use “false” instead of “true” when setting the value of vti_CommunityEnableReportAbuse, and $False instead of $True when invoking the static method in the last line of code, you can deactivate the reporting for the site.

The alternative solution is to use the server side API from C#:

web.AllProperties["vti_CommunityEnableReportAbuse"] = "true";

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type functionalityEnablersType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers");
MethodInfo mi_EnableDisableAbuseReports = functionalityEnablersType.GetMethod("EnableDisableAbuseReports");
mi_EnableDisableAbuseReports.Invoke(null, new object[] { web, true });

We can verify the effect of our code by checking the columns in the first view of the Discussions List. Before enabling the reporting, I queried the view fields via this PowerShell script:

$list = $web.Lists["Discussions List"]
$list.Views[0].ViewFields

I found these columns:


And after enabling the reporting: four further columns should be appended to the list of view fields if the code succeeded:


Or we can invoke the static IsReportAbuseEnabled method of the internal Microsoft.SharePoint.Portal.CommunityUtils class to verify if reporting is enabled, just as the OnLoad method of the Microsoft.SharePoint.Portal.CommunitySettingsPage does. You should know, however, that this method does no more than check the value of the vti_CommunityEnableReportAbuse web property, so even if it returns true, it does not guarantee that reporting is really enabled. That is why I prefer checking the columns in the view as shown earlier.

The PowerShell version:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# get a reference for the Microsoft.SharePoint.Portal assembly (as in the script above)
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$communityUtilsType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils")
$mi_IsReportAbuseEnabled = $communityUtilsType.GetMethod("IsReportAbuseEnabled")
$mi_IsReportAbuseEnabled.Invoke($null, @($web))

The C# version:

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type communityUtilsType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils");
MethodInfo mi_IsReportAbuseEnabled = communityUtilsType.GetMethod("IsReportAbuseEnabled");
mi_IsReportAbuseEnabled.Invoke(null, new object[] { web });

February 18, 2015

Strange (?) Behavior of the Client Object Model You should be Aware of

Filed under: Managed Client OM, PS 2013, SP 2010, SP 2013 — Tags: , , , — Peter Holpar @ 00:28

Note: The code examples in this post are based on the managed client OM (C#), however, you can expect the same behavior when working with the JavaScript / Silverlight versions of the client OM, both for SharePoint 2010 and 2013, furthermore, the behavior I describe applies to the client OM of Project Server 2013 as well.

Assume you have a SharePoint list including two fields, Title is a text column, and User is a field of type People or Group. You would like to update both of these fields via the client object model.

In the first case we simply set fixed values for the item with Id=1: the text ‘Test’ for Title, and an integer (2) as user Id for the User field.

using (var context = new ClientContext(webUrl))
{
    var web = context.Web;
    var list = web.Lists.GetByTitle("TestList");

    var item = list.GetItemById(1);
    item["Title"] = "Test";

    var userValue = new FieldUserValue();
    userValue.LookupId = 2;
    item["User"] = userValue;

    item.Update();
    context.ExecuteQuery();
}

Everything works as expected, both of the fields are updated by the query.

In the second case, we would like to set the same values for the item with Id=2, but this time we get the user Id from the Id of the current user (which happens to be 2 in this case as well). To get that value, we call ExecuteQuery in the middle of the code: after setting the Title field, but before setting the User field and before invoking the Update method on the item. Finally we invoke the ExecuteQuery method again.

using (var context = new ClientContext(webUrl))
{
    var web = context.Web;
    var list = web.Lists.GetByTitle("TestList");

    var item = list.GetItemById(2);
    item["Title"] = "Test";

    var userValue = new FieldUserValue();
    var cu = web.CurrentUser;
    context.Load(cu, u => u.Id);
    context.ExecuteQuery();

    userValue.LookupId = cu.Id;
    item["User"] = userValue;

    item.Update();
    context.ExecuteQuery();
}

One could expect that both fields would be updated; however, the Title field will in fact not be, and we can understand why after having a look at the requests sent by the client OM (for example, using Fiddler).

In the first case, there is a single request that includes setting both of the fields (see the SetFieldValue methods in the screenshot below, one for each field) and invoking the Update method.


In the second case there are two requests sent by the client OM, one for each ExecuteQuery call in the code.

The first request includes a single SetFieldValue method for the Title field, the query for the Id of the current user is not included in the screenshot for the sake of simplicity:


The second request includes a single SetFieldValue method for the User field and the invocation of the Update method.


You can see that calling ExecuteQuery for the Id of the current user submits the change of the Title field as well, and thereby clears the “dirty” flag of the field (meaning it won’t be re-submitted by the OM in later requests). But because no Update method is invoked in this request, the value change has no effect: it is simply not persisted. In the second request we do call the Update method, but only the change in the User field is submitted, so it has no effect on the Title field.

Lesson learned: the requests sent by the client OM are independent on the server side; no session state or context is persisted between the requests. Only data changed since the last request, or data required to perform the current request on the server side, is sent by the client OM. You should consider the code blocks between (or before) ExecuteQuery calls as if they were separate server side console applications you run one after another. In that case you wouldn’t expect, either, that setting a field without calling the Update method has any effect on the persisted value of the field. If you are aware of the client OM architecture internals, this is just the expected behavior. If you are not, you may be in for similar surprises.
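The practical consequence is that any ExecuteQuery needed only to read data (like the Id of the current user) should happen before you start setting field values, so that all field changes travel in the same batch as the Update call. A minimal sketch of this pattern (the web URL and list title are placeholders from the examples above):

```
using Microsoft.SharePoint.Client;

using (var context = new ClientContext(webUrl))
{
    var web = context.Web;
    var list = web.Lists.GetByTitle("TestList");

    // first round-trip: fetch only the current user's Id,
    // before any field values have been assigned
    var cu = web.CurrentUser;
    context.Load(cu, u => u.Id);
    context.ExecuteQuery();

    // second round-trip: both field changes and the Update call
    // are batched into a single request, so both get persisted
    var item = list.GetItemById(2);
    item["Title"] = "Test";
    item["User"] = new FieldUserValue { LookupId = cu.Id };
    item.Update();
    context.ExecuteQuery();
}
```

The point is simply ordering: reads that require a round-trip come first, then all writes are flushed together with the Update in a single ExecuteQuery.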
