Second Life of a Hungarian SharePoint Geek

April 8, 2015

Automating the Deployment of a Customized Project Web Site Template via PowerShell and the Managed Client Object Model

Filed under: ALM, Managed Client OM, PowerShell, Project Server — Peter Holpar @ 21:45

Assume you have created a customized web site template for your enterprise project type in the development environment as described here, and now you would like to deploy it to the test farm. Of course, you can manually delete the former site template, upload the new one, re-configure it as the associated web site template for your enterprise project type, and finally re-create your test project (that is, check in and delete the existing one, then create it again using the new template), but this procedure is boring, cumbersome and, like any human-based process, rather error-prone.

Why not automate this step as well?

I’ve created a PowerShell script that performs the steps outlined above. The first steps (deleting the former version of the site template and uploading the new one) can be done with native PowerShell cmdlets, but the remaining, Project Server-related tasks require the Managed Client Object Model, so we import the necessary assemblies into the process.

First we get a list of all projects and a list of all enterprise project types, then query for the right ones on the “client side”.

Note: Although PowerShell does not support .NET extension methods (like the Where and Include methods of the client object model) natively, we could restrict the items returned by these queries to include only the item we need (see a solution here), and include only the properties we need (as described here). As the item count of the projects and enterprise project types is not significant, and we have to run the script on the server itself anyway due to the SharePoint cmdlets, it makes little sense in this case to limit the network traffic via these tricks.

Next, we update the web site template setting (the WorkspaceTemplateName property) of the enterprise project type. We need this step because the original value was reset to the default when we deleted the original site template before re-uploading it.

If the test project is found, we delete it (after checking it in, if it was checked out), then create it again using the updated template.

Since these last steps (project check-in, deletion, and creation) are all queue-based operations, we use the WaitForQueue method to make sure the former operation has completed before we start the next one.

$pwaUrl = "http://YourProjectServer/PWA/"
$solutionName = "YourSiteTemplate"
$wspFileName = $solutionName + ".wsp"
$timeoutSeconds = 1000
$projName = "TestProj"

# English
$projType = "Enterprise Project"
$pwaLcid = 1033
# German
#$projType = "Enterprise-Projekt"
#$pwaLcid = 1031

# path of the folder containing the .wsp
$localRootPath = "D:\SiteTemplates\"
$wspLocalPath = $localRootPath + $wspFileName

# uninstall / remove the site template if activated / found
$solution = Get-SPUserSolution -Identity $wspFileName -Site $pwaUrl -ErrorAction SilentlyContinue
If ($solution -ne $Null) {
  If ($solution.Status -eq "Activated") {
    Write-Host Uninstalling web site template
    Uninstall-SPUserSolution -Identity $solutionName -Site $pwaUrl -Confirm:$False
  }
  Write-Host Removing web site template
  Remove-SPUserSolution -Identity $wspFileName -Site $pwaUrl -Confirm:$False
}

# upload and activate the new version
Write-Host Uploading new web site template
Add-SPUserSolution -LiteralPath $wspLocalPath -Site $pwaUrl
Write-Host Installing new web site template
$dummy = Install-SPUserSolution -Identity $solutionName -Site $pwaUrl
 
# set the path according to the location of the assemblies
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll"
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$projectContext = New-Object Microsoft.ProjectServer.Client.ProjectContext($pwaUrl)

# get lists of enterprise project types and projects
$projectTypes = $projectContext.LoadQuery($projectContext.EnterpriseProjectTypes)
$projects = $projectContext.Projects
$projectList = $projectContext.LoadQuery($projectContext.Projects)

$projectContext.ExecuteQuery()

$entProjType = $projectTypes | ? { $_.Name -eq $projType }
$project = $projectList | ? { $_.Name -eq $projName }

Write-Host Updating web site template for the enterprise project type
$web = Get-SPWeb $pwaUrl
$template = $web.GetAvailableWebTemplates($pwaLcid) | ? { $_.Title -eq $solutionName }

$entProjType.WorkspaceTemplateName = $template.Name
$projectContext.EnterpriseProjectTypes.Update()
$projectContext.ExecuteQuery()

If ($project -ne $Null) {
  If ($project.IsCheckedOut) {
    Write-Host "Project $projName is checked out, checking it in before deletion"
    $checkInJob = $project.Draft.CheckIn($True)
    $checkInJobState = $projectContext.WaitForQueue($checkInJob, $timeoutSeconds)
    Write-Host Check-in project job status: $checkInJobState
  }
  Write-Host Deleting existing project $projName
  # we can delete the project either this way
  #$removeProjResult = $projects.Remove($project)
  #$removeJob = $projects.Update()
  # or
  $removeJob = $project.DeleteObject()
  $removeJobState = $projectContext.WaitForQueue($removeJob, $timeoutSeconds)
  Write-Host Remove project job status: $removeJobState
}
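The re-creation of the test project mentioned above is not included in the listing. A minimal sketch of that step, using the ProjectCreationInformation class of the Managed Client Object Model and the variables defined earlier (the exact property set below is an assumption, adjust it to your needs):

```
# sketch: re-create the test project based on the updated enterprise project type
$creationInfo = New-Object Microsoft.ProjectServer.Client.ProjectCreationInformation
$creationInfo.Id = [Guid]::NewGuid()
$creationInfo.Name = $projName
$creationInfo.Start = Get-Date
# associate the project with the enterprise project type we updated above
$creationInfo.EnterpriseProjectTypeId = $entProjType.Id

$newProject = $projectContext.Projects.Add($creationInfo)
$createJob = $projectContext.Projects.Update()
$createJobState = $projectContext.WaitForQueue($createJob, $timeoutSeconds)
Write-Host Create project job status: $createJobState
```

As with the other queue-based operations, we wait for the creation job to complete before using the new project.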

I find the set of Project Server PowerShell cmdlets limited and rather operation-based. You can use it as long as your only task is to administer Project Server instances and databases. However, when it comes to interacting with Project Server entities, you have to involve the Managed Client Object Model. Hopefully this example provides not only a reusable tool, but also helps you understand how to extend your own PowerShell library with methods borrowed from the client-side .NET libraries.

Automating Project Server development tasks via PowerShell and the Client Object Model – Customizing Project Web Site templates

Filed under: ALM, Managed Client OM, PowerShell, Project Server — Peter Holpar @ 21:35

I start with a note this time: even if you are not interested in Project Server itself at all, I suggest you read on, as most of the issues discussed below are not Project Server specific; they apply to SharePoint as well.

Recently I have been working mostly on a Project Server customization project. As I learned on my former development projects, I try to automate as many repetitive tasks as possible (like the PWA provisioning), which leaves more time for the really interesting stuff. I plan to post my results on this blog to share the scripts and to document the experience for myself as well.

One of the very first tasks (and probably a never-ending one) was to create a customized Project Web Site (PWS) site template. New Enterprise Projects created in the Project Web Access (PWA) should have their PWS created based on the custom site template.

The process of customizing a PWS site template is described in this post; however, there are a few issues if we apply this approach alone, just to name a few:

– PWS master pages cannot be edited using SharePoint Designer by default. There is a workaround for this issue.

– If I create a custom master page for the PWA and would like a PWS to refer to the same master page, I can set it, for example, using PowerShell. However, if I create a site template from this PWS, this configuration seems to be lost in the template, and the template refers to the default seattle.master. I have a similar experience with the site image / logo: I can set one, but the setting does not seem to be saved in the template.

– The standard navigation structure of a project site (and of any site template created from it) contains project-specific navigation nodes, like Project Details, that contain the Guid of the current project as a query string parameter. If you create a site template from this site, any project site created based on this template will contain this node twice: one of them is created from the site template (with the wrong Guid, referring to the project the original site belongs to, and thus the wrong URL), and another one is created dynamically as the project web site gets provisioned.
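For the duplicated navigation node, a possible cleanup after provisioning is sketched below using the server object model. The node title "Project Details" and the assumption that the dynamically provisioned node is the last one in the collection are both assumptions; verify them in your environment:

```
# sketch: remove the Project Details node inherited from the site template,
# keeping the dynamically provisioned one (assumed to be the last in the collection)
$web = Get-SPWeb "http://YourProjectServer/PWA/YourProjectWeb"
$nodes = @($web.Navigation.QuickLaunch | ? { $_.Title -eq "Project Details" })
If ($nodes.Count -gt 1) {
  $nodes[0..($nodes.Count - 2)] | % { $_.Delete() }
}
```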

The workflow of my web site template creation and customization process includes four main parts, and two of them – step 2 and step 4 – are automated by our script.

The first part of the process (step 1 and step 2) is optional. If you have changed nothing in your web site prototype, you can start immediately with the manual manipulation of the extracted web site template content (step 3); otherwise, you have to get a fresh version of the template into the local system for the further customizations.

Step 1: Creation and customization of a SharePoint web site that serves as a prototype for the web site template.

A SharePoint web site is customized based on the requirements using the SharePoint web UI, SharePoint Designer (for PWA see this post), or via other tools, like PowerShell scripts (for example, JSLink settings). This is a “manual” task.

Step 2: Creation of the web site template based on the prototype, downloading and extracting the site template.

A site template is created (including content) based on the customized web site. If a former site template with the same name already exists, it will be deleted first.

The site template is downloaded to the local file system (a former file with the same name is deleted first).

The content of the .wsp file (CAB format) is extracted into a local folder (folder having the same name is deleted first, if it exists).

Step 3: Customization of the extracted web site template artifacts.

The script is paused. In this step you have the chance to manually customize the solution files, like ONet.xml.

Step 4: Compressing the customized files into a new site template, and uploading it to SharePoint.

After a key press the script continues.

Files having the same name as our site template and an extension of .cab or .wsp are deleted. The content of the folder is compressed as a .cab, then renamed to .wsp.

In the final step the original web site template is removed and the new version is installed.

Next, a few words about the CAB extraction and compression tools I chose for the automation. Minimal requirements were that the tool must have a command line interface and it should recognize the folder structure to be compressed automatically, without any helper files (like the DDF directive file in case of makecab).

After reading a few comparisons (like this and this one) of the alternative options, I first found IZArc and its command line add-on (including IZARCC for compression and IZARCE for extraction; see their user’s manual for details) to be the best choice. However, after a short test I ran into issues with the folder path depth and file name length supported by IZARCE, so I fell back to extrac32 for the extraction.

Finally, the script itself:

$pwaUrl = "http://YourProjectServer/PWA/"
$pwsSiteTemplateSourceUrl = $pwaUrl + "YourPrototypeWeb"
$solutionName = "YourSiteTemplate"
$wspFileName = $solutionName + ".wsp"
$cabFileName = $solutionName + ".cab"
$siteTemplateTitle = $solutionName
$siteTemplateName = $solutionName
$siteTemplateDescription = "PWS Website Template"

$localRootPath = "D:\SiteTemplates\"
$wspExtractFolderName = $solutionName
$wspExtractFolder = $localRootPath + $wspExtractFolderName
$wspFilePath = $localRootPath + $wspFileName
$wspUrl = $pwaUrl + "_catalogs/solutions/" + $wspFileName

$cabFilePath = $localRootPath + $cabFileName

function Using-Culture (
   [System.Globalization.CultureInfo]   $culture = (throw "USAGE: Using-Culture -Culture culture -Script {…}"),
   [ScriptBlock]
   $script = (throw "USAGE: Using-Culture -Culture culture -Script {…}"))
   {
     $OldCulture = [Threading.Thread]::CurrentThread.CurrentCulture
     $OldUICulture = [Threading.Thread]::CurrentThread.CurrentUICulture
         try {
                 [Threading.Thread]::CurrentThread.CurrentCulture = $culture
                 [Threading.Thread]::CurrentThread.CurrentUICulture = $culture
                 Invoke-Command $script
         }
         finally {
                 [Threading.Thread]::CurrentThread.CurrentCulture = $OldCulture
                 [Threading.Thread]::CurrentThread.CurrentUICulture = $OldUICulture
         }
   }

function Remove-SiteTemplate-IfExists($solutionName, $wspFileName, $pwaUrl) 
{
  $us = Get-SPUserSolution -Identity $solutionName -Site $pwaUrl -ErrorAction SilentlyContinue
  if ($us -ne $Null)
  {
    Write-Host Former version of site template found on the server. It will be removed…
    If ($us.Status -eq "Activated") {
      Uninstall-SPUserSolution -Identity $solutionName -Site $pwaUrl -Confirm:$False
    }
    Remove-SPUserSolution -Identity $wspFileName -Site $pwaUrl -Confirm:$False
  }
}

function Remove-File-IfExists($path)
{
  If (Test-Path $path)
  {
    If (Test-Path $path -PathType Container)
    {
      Write-Host Deleting folder: $path
      Remove-Item $path -Force -Recurse
    }
    Else
    {
      Write-Host Deleting file: $path
      Remove-Item $path -Force
    }
  }
}

Do { $downloadNewTemplate = Read-Host "Would you like to get a new local version of the site template to edit? (y/n)" }
Until ("y","n" -contains $downloadNewTemplate )

If ($downloadNewTemplate -eq "y")
{

    Remove-SiteTemplate-IfExists $solutionName $wspFileName $pwaUrl

  Using-Culture de-DE {
    Write-Host Saving site as site template including content
    $web = Get-SPWeb $pwsSiteTemplateSourceUrl
    $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
  }

  Remove-File-IfExists $cabFilePath

  Write-Host Downloading site template
  $webClient = New-Object System.Net.WebClient
  $webClient.UseDefaultCredentials  = $True 
  $webClient.DownloadFile($wspUrl, $cabFilePath)

  # clean up former version before downloading the new one
  # be sure you do not lock the deletion, for example, by having one of the subfolders opened in File Explorer,
  # or via any file opened in an application
  Remove-File-IfExists $wspExtractFolder

  Write-Host Extracting site template into folder $wspExtractFolder
  # http://updates.boot-land.net/052/Tools/IZArc%20MANUAL.TXT
  # limited file length / folder structure depth! :-(
  #& "C:\Program Files (x86)\IZArc\IZARCE.exe" -d $cabFilePath $wspExtractFolder

  #http://researchbin.blogspot.co.at/2012/05/making-and-extracting-cab-files-in.html
  #expand $cabFilePath $wspExtractFolder -F:*.*
  extrac32 /Y /E $cabFilePath /L $wspExtractFolder
}

Write-Host "Alter the extracted content of the site template, then press any key to upload the template…"
# wait any key press without any output to the console
# http://technet.microsoft.com/en-us/library/ff730938.aspx
$dummy = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")

# clean up former version before creating the new one
# TODO rename it using a date time pattern instead of deletion!
Remove-File-IfExists $cabFilePath
Remove-File-IfExists $wspFilePath

# makecab: we cannot include multiple files directly. To do that, we have to create a directive file called a Diamond Directive File(DDF) and include instructions in it
# http://comptb.cects.com/automate-compression-tasks-cli/
& "C:\Program Files (x86)\IZArc\IZARCC.exe" -a -r -p $cabFilePath $wspExtractFolder

Rename-Item $cabFilePath $wspFileName

# remove former solution before uploading and activating the new one
Remove-SiteTemplate-IfExists $solutionName $wspFileName $pwaUrl

Write-Host Installing the new version of the site template
Add-SPUserSolution -LiteralPath $wspFilePath -Site $pwaUrl
$dummy = Install-SPUserSolution -Identity $solutionName -Site $pwaUrl

Note: If you are working with the English version of the PWA and have an English operating system on the server, you don’t need the Using-Culture function. To learn more about it, see this post.

March 29, 2015

May Merge-SPLogFile Flood the Content Database of the Central Administration Web Application?

Filed under: Content database, PowerShell, SP 2013 — Peter Holpar @ 00:20

In recent weeks we searched for the cause of a specific error in one of our SharePoint 2013 farms. To get detailed trace information, we often switched the log level to VerboseEx mode. A few days later the admins alerted us that the size of the Central Administration content database had increased enormously (at that time it was about 80 GB!).

Looking for the source of this unexpected amount of data, I found a document library called Diagnostics Log Files (description: “View and analyze merged log files from the all servers in the farm”) that contained 1824 items.


Although I consider myself primarily a SharePoint developer, I always try to stay up-to-date on infrastructural topics as well, but to tell the truth, I had never seen this document library before. Searching the web didn’t help either.

Having a look into the document library I found a set of folders with GUID names.


Each of the folders contained a lot of files: a numbered set for each of the servers in the farm.


The files within the folders are archived logs in .gz file format, each around 26 MB.


Based on the description of the library, I guessed that it is related to the Merge-SPLogFile cmdlet, which collects the ULS log files from all of the servers in the farm and saves the aggregation to the specified file on the local system, although I have not found any documentation on how it performs this action or whether it has anything to do with the content DB of the Central Administration site.
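For reference, a typical call of the cmdlet looks like this (the path and time range below are placeholders; restricting the time range keeps the amount of collected data smaller):

```
# merge the ULS logs of all farm members into a single local file,
# restricted to the time range we are interested in
Merge-SPLogFile -Path "D:\Logs\merged.log" -Overwrite `
    -StartTime "3/21/2015 12:00:00 AM" -EndTime "3/21/2015 6:00:00 AM"
```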

 

After a few hours of “reflectioning” it was obvious how this situation came about. If you are not interested in call chains, feel free to skip the following part.

All of the classes and methods below are defined in the Microsoft.SharePoint assembly, if not specified otherwise.

The InternalProcessRecord method of the Microsoft.SharePoint.PowerShell.SPCmdletMergeLogFile class (Microsoft.SharePoint.PowerShell assembly) creates a new instance of the SPMergeLogFileUtil class based on the path of the aggregated log folder and the filter expression, and calls its Run method:

SPMergeLogFileUtil util = new SPMergeLogFileUtil(this.Path, filter);
ThreadPool.QueueUserWorkItem(new WaitCallback(util.Run));

In the Run method of the Microsoft.SharePoint.Diagnostics.SPMergeLogFileUtil class:

public void Run(object stateInfo)
{
  try
  {
    this.Progress = 0;
    // Creates the diagnostics log files document library in central admin (if it does not yet exist) via the GetLogFilesList method
    // Adds a new subfolder (with a GUID name) to the library; if the folder already exists, deletes its content
    // Executes child jobs (SPMergeLogFilesJobDefinition) on each farm member, see more details about them later below
    List<string> jobs = this.DispatchCollectingJobs();
    // Waits for all child jobs to be finished on the farm members
    this.MonitorJobs(jobs);
    // Merges the content of the collected files from the central admin document library into the specified local file system folder by calling the MergeFiles method
    // Finally deletes the temporary files one by one from the central admin document library and at the very end deletes their folder as well
    this.RunMerge();
  }
  catch (Exception exception)
  {
    this.Error = exception;
  }
  finally
  {
    this.Progress = 100;
  }
}

Yes, as you can see, if there is any error in the file collection process on any of the farm members, or the merging process fails, the files won’t be deleted from the Diagnostics Log Files document library.

The deletion process would probably have a better place in the finally block instead of the RunMerge method, or at least the catch block should check whether the files were removed successfully.

 

A few words about the Microsoft.SharePoint.Diagnostics.SPMergeLogFilesJobDefinition class, as promised earlier. Its Execute method calls the CollectLogFiles method, which creates a ULSLogFileProcessor instance based on the requested log filter, gets the corresponding ULS entries from the farm member the job is running on, stores the entries in a temporary file in the file system, uploads them to the current subfolder (with a GUID name) of the Diagnostics Log Files document library (file name pattern: [ServerLocalName] (1).log.gz, or [ServerLocalName].log.gz if a single file is enough to store the log entries from the server), and finally deletes the local temporary file.

 

A few related text values can be read via PowerShell as well:

The getter of the DisplayName property in the SPMergeLogFilesJobDefinition class returns:

SPResource.GetString("CollectLogsJobTitle", new object[] { base.Server.DisplayName });

reading the value via PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("CollectLogsJobTitle")

returns

Collection of log files from server |0

In the GetLogFilesList method of the SPMergeLogFileUtil class we find the title and description of the central admin document library used for log merging:

string title = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsTitle", new object[0]);
string description = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsDescription", new object[0]);

Reading the values via PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsTitle")

returns

Diagnostics Log Files

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsDescription")

returns

View and analyze merged log files from the all servers in the farm

These values support our theory that the library was created and filled by these methods.

 

Next, I used the following PowerShell script to look for failed merge jobs in the time range in which the files in the Central Administration had been created:

$farm = Get-SPFarm
$from = "3/21/2015 12:00:00 AM"
$to = "3/21/2015 6:00:00 AM"
$farm.TimerService.JobHistoryEntries | ? {($_.StartTime -gt $from) -and ($_.StartTime -lt $to) -and ($_.Status -eq "Failed") -and ($_.JobDefinitionTitle -like "Collection of log files from server *")}


In our case there were errors on both farm member servers; for example, there was not enough storage space on one of them. After the jobs failed, the aggregated files were not deleted from the Diagnostics Log Files document library of the Central Administration.

Since even a successful execution of the Merge-SPLogFile cmdlet can temporarily increase the size of the Central Administration content database considerably, and the effect of failed executions is not only temporary (and particularly large if it happens several times and is combined with verbose logging), SharePoint administrators should be aware of these facts, consider them when planning database sizes, and / or take an extra maintenance step to regularly remove the remains of failed merge processes from the Diagnostics Log Files document library. As far as I can tell, the issue affects SharePoint 2010 and 2013 alike.
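Such a maintenance step could be scripted, for example, like this. This is a sketch using the server object model; it assumes the library title is the English "Diagnostics Log Files" and simply deletes all leftover folders, so it should only run when no merge job is in progress:

```
# sketch: remove the leftovers of failed Merge-SPLogFile runs
# from the content database of the Central Administration
$caWebApp = Get-SPWebApplication -IncludeCentralAdministration |
    ? { $_.IsAdministrationWebApplication }
$caWeb = Get-SPWeb $caWebApp.Url
$list = $caWeb.Lists["Diagnostics Log Files"]
If ($list -ne $Null) {
  # snapshot the collection first, as we delete items while iterating
  @($list.Folders) | % { Write-Host Deleting $_.Url; $_.Delete() }
}
```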

March 26, 2015

Strange Localization Issue When Working with List and Site Templates

Filed under: MUI, PowerShell, SP 2013 — Peter Holpar @ 23:44

One of our clients runs a localized version of SharePoint 2013. The operating system is Windows Server 2012 R2 (English), and the SharePoint Server itself is English as well. The German language pack is installed, and the sites and site collections were created in German. We are working with various custom site templates. Recently one of these templates had to be extended with a Task-list-based custom list (called ToDos). The users prepared the list in a test site, and we saved the list as a template. We created a new site using the site template (we will refer to this site later as the prototype), and next we created a new list based on the custom list template. Finally, we saved the altered web site as a site template, including content, using the following PowerShell commands:

$web = Get-SPWeb $siteTemplateSourceUrl
$web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)

We created a test site using the new site template, and everything seemed to be OK. However, after a while the users started to complain that a menu of the new list contained some English text as well. As it turned out, some of the views of the new list had been created with English titles.


First, we verified the manifest.xml of the list template by downloading the .stp file (which has a CAB file format) and opening it with IZArc. We found that the DisplayName property of the default view (“Alle Vorgänge”, meaning “All Tasks”) and of a custom datasheet view (called “db”, which stands for “Datenblatt”) contains the title as plain text, while the DisplayName property of the other views contains a resource reference (like “$Resources:core,Late_Tasks;”).


Next, we downloaded the site template (the .wsp file also has a CAB file format and can be opened with IZArc) and verified the schema.xml of the ToDos list. We found that the original German texts (“Alle Vorgänge” and “db”) were kept, while all the other view names were “localized” to English.


At this point I already suspected that the problem was caused by the locale of the thread in which the site template exporting code ran. To verify my assumption, I saved the prototype site from the site settings via the SharePoint web UI (which is German in our case). This time the resulting schema.xml in the new site template .wsp contained the German titles.


We got the same result (that is, German view titles) if we called our former PowerShell code specifying German as the culture of the running thread. See more info about the Using-Culture helper method, the SharePoint Multilingual User Interface (MUI) and PowerShell here:

Using-Culture de-DE {
  $web = Get-SPWeb $pwsSiteTemplateSourceUrl
  $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
}

We’ve fixed the existing views via the following PowerShell code. (Note: Using the Using-Culture helper method is important in this case as well. We have only a single level of site hierarchy in this case, so there is no recursion in the code!)

$web = Get-SPWeb http://SharePointServer

function Rename-View-IfExists($list, $viewNameOld, $viewNameNew)
{
  $view =  $list.Views[$viewNameOld]
  If ($view -ne $Null) {
      Write-Host Renaming view $viewNameOld to $viewNameNew
      $view.Title = $viewNameNew
      $view.Update()
  }
  Else {
    Write-Host View $viewNameOld not found
  }
}

Using-Culture de-DE {
  $web.Webs | % {
    $list = $_.Lists["ToDos"]
    If ($list -ne $Null) {
      Write-Host ToDo list found in $_.Title
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge"
      Rename-View-IfExists $list "Upcoming" "Anstehend"
      Rename-View-IfExists $list "Completed" "Abgeschlossen"
      Rename-View-IfExists $list "My Tasks" "Meine Aufgaben"
      Rename-View-IfExists $list "Gantt Chart" "Gantt-Diagramm"
    }
  }
}

Strangely, we had no problem with field names or other localizable texts when working with the English culture.

March 10, 2015

Automating the Provisioning of a PWA-Instance

Filed under: ALM, PowerShell, PS 2013 — Peter Holpar @ 23:56

When testing our custom Project Server 2013 solutions in the development system, or deploying them to the test system, I found it useful to be able to use a clean environment each time: a new PWA instance with an empty project database and a separate SharePoint content database for the Project Web Access itself and for the project web sites.

We wrote a simple PowerShell script that provisions a new PWA instance, including:
– A separate SharePoint content database that should contain only a single site collection: the one for the PWA. If the content DB already exists, we will use the existing one, otherwise we create a new one.
– The managed path for the PWA.
– A new site collection for the PWA using the project web application site template, and the right locale ID (1033 in our case). If the site already exists (in case we re-use a former content DB), it will be dropped before creating the new one.
– A new project database. If a project database with the same name already exists on the SQL server, it will be dropped and re-created.
– The project database will be mounted to the PWA instance, and the admin permissions are set.

Note that we use a prefix (like DEV or TEST) to identify the system. We set the URL of the PWA and the database names using this prefix. The database server names (one for the SharePoint content DBs and another one for the service application DBs) include the prefix as well, and are configured as aliases via the SQL Server Client Network Utility, making it easier to relocate the databases if needed.
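The aliases themselves can be configured via cliconfg.exe or scripted through the registry. A sketch follows; the alias and target server names are placeholders, and on a 64-bit system the same value should also be set under the Wow6432Node key for 32-bit clients:

```
# sketch: create a TCP/IP SQL Server client alias ("DBMSSOCN" selects the TCP/IP provider)
$aliasKey = "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"
If (-not (Test-Path $aliasKey)) { $dummy = New-Item -Path $aliasKey }
New-ItemProperty -Path $aliasKey -Name "PS_DEV_Content" `
    -PropertyType String -Value "DBMSSOCN,RealSqlServer.company.com,1433" -Force | Out-Null
```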

$environmentPrefix = "DEV"

$webAppUrl = [string]::Format("http://ps-{0}.company.com", $environmentPrefix)

$contentDBName = [string]::Format("PS_{0}_Content_PWA", $environmentPrefix)
$contentDBServer = [string]::Format("PS_{0}_Content", $environmentPrefix)

$pwaMgdPathPostFix = "PWA"
$pwaUrl = [string]::Format("{0}/{1}", $webAppUrl, $pwaMgdPathPostFix)
$pwaTitle = "PWA Site"
$pwaSiteTemplate = "PWA#0"
$pwaLcid = 1033
$ownerAlias = "domain\user1"
$secondaryOwnerAlias = "domain\user2"

$projServDBName = [string]::Format("PS_{0}_PS", $environmentPrefix)
$projServDBServer = [string]::Format("PS_{0}_ServiceApp", $environmentPrefix)

Write-Host Getting web application at $webAppUrl
$webApp = Get-SPWebApplication -Identity $webAppUrl

$contentDatabase = Get-SPContentDatabase -Identity $contentDBName -ErrorAction SilentlyContinue

if ($contentDatabase -eq $null) {
  Write-Host Creating content database: $contentDBName
  $contentDatabase = New-SPContentDatabase -Name $contentDBName -WebApplication $webApp -MaxSiteCount 1 -WarningSiteCount 0 -DatabaseServer $contentDBServer
}
else {
  Write-Host Using existing content database: $contentDBName
}

$pwaMgdPath = Get-SPManagedPath -Identity $pwaMgdPathPostFix -WebApplication $webApp -ErrorAction SilentlyContinue
if ($pwaMgdPath -eq $null) {
  Write-Host Creating managed path: $pwaMgdPathPostFix
  $pwaMgdPath = New-SPManagedPath -RelativeURL $pwaMgdPathPostFix -WebApplication $webApp -Explicit
}
else {
  Write-Host Using existing managed path: $pwaMgdPathPostFix
}

$pwaSite = Get-SPSite -Identity $pwaUrl -ErrorAction SilentlyContinue
if ($pwaSite -ne $null) {
  Write-Host Deleting existing PWA site at $pwaUrl
  $pwaSite.Delete()
}

Write-Host Creating PWA site at $pwaUrl
$pwaSite = New-SPSite -Url $pwaUrl -OwnerAlias $ownerAlias -SecondaryOwnerAlias $secondaryOwnerAlias -ContentDatabase $contentDatabase -Template $pwaSiteTemplate -Language $pwaLcid -Name $pwaTitle

$projDBState = Get-SPProjectDatabaseState -Name $projServDBName -DatabaseServer $projServDBServer
if ($projDBState.Exists) {
  Write-Host Removing existing Project DB $projServDBName
  Remove-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -WebApplication $webApp
}
Write-Host Creating Project DB $projServDBName
New-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -Lcid $pwaLcid -WebApplication $webApp

Write-Host Bind Project Service DB to PWA Site
Mount-SPProjectWebInstance -DatabaseName $projServDBName -DatabaseServer $projServDBServer -SiteCollection $pwaSite

#Setting admin permissions on PWA
Grant-SPProjectAdministratorAccess -Url $pwaUrl -UserAccount $ownerAlias

Using this script helps us avoid a lot of manual configuration steps, saves us a lot of time, and makes the result more reproducible.

March 4, 2015

How to find out the real number of Queue Jobs

Filed under: PowerShell, PS 2013, PSI — Tags: , , — Peter Holpar @ 00:45

Recently we had an issue with Project Server. Although the Microsoft Project Server Queue Service was running, the items in the queue were not processed, and the performance of the system degraded severely. At the same time we found a lot of cache cluster failures in the Windows Event Logs and ULS logs; it is not clear which one was the source of the problem and which was the result of the other. The “solution” was to install the February 2015 CU SharePoint Product Updates and restart the server.

However, even after the restart the number of the job entries seemed to be constant when checked via PWA Settings / Manage Queue Jobs (Queue and Database Administration): on the first page load the total displayed at the bottom left of the grid was 1000, but when we paged through the results or refreshed the status, the total changed to 500 (this seems to be an issue with the product). This means that PWA administrators don’t see the real number of the entries.

But how could one then get the real number of the queue jobs?

If you have permission to access the performance counters on the server (in the simplest case, if you are a local admin), then you can use the Current Unprocessed Jobs counter (in the ProjectServer:QueueGeneral category), which, as its name suggests, gives the total number of currently unprocessed jobs.
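If you prefer to read the counter from PowerShell instead of Performance Monitor, something like the sketch below should work when run on the Project Server machine; the counter path is assembled from the category and counter names mentioned above, but verify the exact path in your environment:

```powershell
# Read the current number of unprocessed queue jobs from the performance counter
$counterPath = '\ProjectServer:QueueGeneral\Current Unprocessed Jobs'
$sample = Get-Counter -Counter $counterPath
$sample.CounterSamples | ForEach-Object {
  Write-Host ("Unprocessed jobs: {0}" -f $_.CookedValue)
}
```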

You have to find an alternative solution if you need the count of jobs having another status, or need even more granular results, for example, the number of job entries that are ready for processing and are related to publishing a project.

The QueueJobs property of the Project class in the client object model (see the QueueJob and QueueJobCollection classes as well) provides only information related to a given project, and the same is true for the REST interface, where you can access the same information for your project like this (the Guid is the ID of your project):

http://YourProjServer/PWA/_api/ProjectServer/Projects('98138ffd-d0fa-e311-83c6-005056b45654')/QueueJobs
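The same REST call can be issued from PowerShell as well; a sketch, assuming the project Guid above and default (Windows) authentication against the PWA site:

```powershell
# Query the queue jobs of a single project via the REST interface
$projId = '98138ffd-d0fa-e311-83c6-005056b45654'
$url = "http://YourProjServer/PWA/_api/ProjectServer/Projects('$projId')/QueueJobs"
$response = Invoke-RestMethod -Uri $url -UseDefaultCredentials
$response  # inspect the returned queue job entries
```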

The best solution I’ve found is based on the GetJobCount method of the QueueSystem object in the PSI interface. Let’s see some practical PowerShell examples of how to use it.

To get the reference for the proxy:

$pwaUrl = "http://YourProjServer/PWA"
$svcPSProxy = New-WebServiceProxy -Uri ($pwaUrl + "/_vti_bin/PSI/QueueSystem.asmx?wsdl") -UseDefaultCredential

To get the number of all entries without filtering:

$svcPSProxy.GetJobCount($Null, $Null, $Null)

For filtering, we can use the second and the third parameters of the method: jobStates and messageTypes, which are arrays of the corresponding JobState and QueueMsgType enumerations. Both of these enums are available as nested enums of the Microsoft.Office.Project.Server.Library.QueueConstants class. This class is defined in the Microsoft.Office.Project.Server.Library assembly, so if we would like to use the enums, we should load the assembly first:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")

Note: you can use the integer values corresponding to the enum values, like:

$jobStates = (1, 7)

…however, I don’t find it very developer-friendly to use such magic constants in code, and you also lose the autocomplete feature of PowerShell that you have when working with the enums, as displayed below:

$jobStates = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ReadyForProcessing,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ProcessingDeferred
)

similarly for message types:

$msgTypes = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectPublish,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectDelete
)

You can then access the count of filtered items like:

$svcPSProxy.GetJobCount($Null, $jobStates, $Null)

or

$svcPSProxy.GetJobCount($Null, $jobStates, $msgTypes)

It is worth knowing that the QueueConstants class has two methods (PendingJobStates and CompletedJobStates) that return a predefined set of the enum values as a generic IEnumerable&lt;QueueConstants.JobState&gt;. We can use these methods from PowerShell as well:

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }

or

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::CompletedJobStates() | % { [int]$_ }
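Putting the pieces together, a complete sketch that returns the number of pending jobs related to reporting a project publish might look like this (the message type is one of the enum values shown earlier; adjust the filters to your needs):

```powershell
$pwaUrl = "http://YourProjServer/PWA"
$svcPSProxy = New-WebServiceProxy -Uri ($pwaUrl + "/_vti_bin/PSI/QueueSystem.asmx?wsdl") -UseDefaultCredential

# load the assembly that contains the QueueConstants nested enums
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")

# all job states considered "pending" by the product
$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }

# restrict the count to reporting-related project publish messages
$msgTypes = @(
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectPublish
)

$svcPSProxy.GetJobCount($Null, $jobStates, $msgTypes)
```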

February 19, 2015

How to Programmatically “Enable reporting of offensive content” on a community site

Filed under: PowerShell, SP 2013 — Peter Holpar @ 23:10

Recently a question was posted on sharepoint.stackexchange.com about how to “Enable reporting of offensive content” from code on SharePoint 2013 community sites.

[Screenshot: the “Enable reporting of offensive content” check box on the Community Settings page]

The Community Settings page (CommunitySettings.aspx) has a code-behind class, Microsoft.SharePoint.Portal.CommunitySettingsPage. In its BtnSave_Click method it calls the static EnableDisableAbuseReports method of the internal FunctionalityEnablers class to perform the actions required for reporting of offensive content. Furthermore, it sets the value of the vti_CommunityEnableReportAbuse web property to indicate that reporting is enabled for the community site.

To perform the same actions from PowerShell I wrote this script:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# this command only affects the check box on the Community Settings page
$web.AllProperties["vti_CommunityEnableReportAbuse"] = "true"
$web.Update()

# the functionality itself is activated by the code below
# get a reference for the Microsoft.SharePoint.Portal assembly
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$functionalityEnablersType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers")
$mi_EnableDisableAbuseReports = $functionalityEnablersType.GetMethod("EnableDisableAbuseReports")
$mi_EnableDisableAbuseReports.Invoke($null, @($web, $True))

Note: If you use “false” instead of “true” when setting the value of vti_CommunityEnableReportAbuse, and $False instead of $True when invoking the static method in the last line of code, you can deactivate the reporting for the site.

The alternative solution is to use the server side API from C#:

web.AllProperties["vti_CommunityEnableReportAbuse"] = "true";
web.Update();

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type functionalityEnablersType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.FunctionalityEnablers");
MethodInfo mi_EnableDisableAbuseReports = functionalityEnablersType.GetMethod("EnableDisableAbuseReports");
mi_EnableDisableAbuseReports.Invoke(null, new object[] { web, true });

We can verify the effect of our code by checking the columns in the first view of the Discussions List before enabling the reporting via this PowerShell script:

$list = $web.Lists["Discussions List"]
$list.Views[0].ViewFields

I found these:

Threading
CategoriesLookup
Popularity
DescendantLikesCount
DescendantRatingsCount
AuthorReputationLookup
AuthorNumOfRepliesLookup
AuthorNumOfPostsLookup
AuthorNumOfBestResponsesLookup
AuthorLastActivityLookup
AuthorMemberSinceLookup
AuthorMemberStatusIntLookup
AuthorGiftedBadgeLookup
LikesCount
LikedBy

After we enable reporting, four further columns should be appended to the list of view fields if the code succeeded:

AbuseReportsCount
AbuseReportsLookup
AbuseReportsReporterLookup
AbuseReportsCommentsLookup
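To automate this check, a small sketch that tests whether the four abuse-report columns listed above are present in the first view of the Discussions List (it assumes $web already references the community site, as in the earlier script):

```powershell
# Collect the view field internal names of the first view of the Discussions List
$list = $web.Lists["Discussions List"]
$fieldNames = @($list.Views[0].ViewFields)  # the collection enumerates as strings

$abuseColumns = @("AbuseReportsCount", "AbuseReportsLookup",
                  "AbuseReportsReporterLookup", "AbuseReportsCommentsLookup")

# Any abuse-report column missing from the view suggests reporting is not enabled
$missing = $abuseColumns | ? { $fieldNames -notcontains $_ }
if ($missing -eq $null) {
  Write-Host "Reporting of offensive content appears to be enabled"
}
else {
  Write-Host "Missing view fields:" $missing
}
```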

Alternatively, we can invoke the static IsReportAbuseEnabled method of the internal Microsoft.SharePoint.Portal.CommunityUtils class to verify whether reporting is enabled, just as the OnLoad method of the Microsoft.SharePoint.Portal.CommunitySettingsPage does. You should know, however, that this method does no more than check the value of the vti_CommunityEnableReportAbuse web property, so even if it returns true, it does not mean for sure that reporting is really enabled. That is why I prefer checking the columns in the view as shown earlier.

The PowerShell version:

$site = New-Object Microsoft.SharePoint.SPSite("http://YourServer/CommunitySite")
$web = $site.OpenWeb()

# get a reference for the Microsoft.SharePoint.Portal assembly
$spPortalAssembly = [AppDomain]::CurrentDomain.GetAssemblies() | ? { $_.Location -ne $Null -And $_.Location.Split('\\')[-1].Equals('Microsoft.SharePoint.Portal.dll') }
$communityUtilsType = $spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils")
$mi_IsReportAbuseEnabled = $communityUtilsType.GetMethod("IsReportAbuseEnabled")
$mi_IsReportAbuseEnabled.Invoke($null, @($web))

The C# version:

// get an assembly reference to "Microsoft.SharePoint.Portal" via an arbitrary public class from the assembly
Assembly spPortalAssembly = typeof(Microsoft.SharePoint.Portal.PortalContext).Assembly;

Type communityUtilsType = spPortalAssembly.GetType("Microsoft.SharePoint.Portal.CommunityUtils");
MethodInfo mi_IsReportAbuseEnabled = communityUtilsType.GetMethod("IsReportAbuseEnabled");
mi_IsReportAbuseEnabled.Invoke(null, new object[] { web });

February 10, 2015

Further Effects of Running Code in Elevated Privileges Block

Filed under: Permissions, PowerShell, SP 2010, SP 2013 — Tags: , , , — Peter Holpar @ 23:29

A few days ago I published a blog post about the effects of running your PowerShell code in an elevated privileges block. In the past days I made some further tests to check what kind of effect the permission elevation might have if the code runs outside of the SharePoint web application context, for example, in the case of server-side console applications, Windows services or PowerShell scripts, to name a few typical cases.

To test the effects, I’ve created a simple console application in C#, and a PowerShell script.

I’ve tested the application / script with two different permission sets. In both cases, the user running the application was neither the site owner (a.k.a. primary site collection administrator) nor a secondary administrator. In the first case, the user had the Full Control permission level on the root web of the site collection; in the second case the user had no permissions at all.

In C# I’ve defined a CheckIfCurrentUserIsSiteAdmin method as:

private void CheckIfCurrentUserIsSiteAdmin(string url)
{
    using (SPSite site = new SPSite(url))
    {
        using (SPWeb web = site.OpenWeb())
        {
            SPUser currentUser = web.CurrentUser;
            Console.WriteLine("Current user ({0}) is site admin on '{1}': {2}", currentUser.LoginName, url, currentUser.IsSiteAdmin);
            Console.WriteLine("Current user ({0}) is site auditor on '{1}': {2}", currentUser.LoginName, url, currentUser.IsSiteAuditor);
            Console.WriteLine("Effective permissions on web: '{0}'", web.EffectiveBasePermissions);
            try
            {
                Console.WriteLine("web.UserIsWebAdmin: '{0}'", web.UserIsWebAdmin);
            }
            catch (Exception ex)
            {
                Console.WriteLine("'web.UserIsWebAdmin' threw an exception: '{0}'", ex.Message);
            }
            try
            {
                Console.WriteLine("web.UserIsSiteAdmin: '{0}'", web.UserIsSiteAdmin);
            }
            catch (Exception ex)
            {
                Console.WriteLine("'web.UserIsSiteAdmin' threw an exception: '{0}'", ex.Message);
            }
        }
    }
}

Then called it without and with elevated permissions:

string url = "http://YourServer";
Console.WriteLine("Before elevation of privileges");
CheckIfCurrentUserIsSiteAdmin(url);
Console.WriteLine("After elevation of privileges");
SPSecurity.RunWithElevatedPrivileges(
    () =>
        {
            CheckIfCurrentUserIsSiteAdmin(url);
        });

The summary of the result:

If the user executing the application has full permission on the (root) web (Full Control permission level):

Before elevation:
Current user is site admin: False
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: False

After elevation:
Current user is site admin: True
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: True

If the user has no permission on the (root) web:

Before elevation:
Current user is site admin: False
Effective perm.: EmptyMask
web.UserIsWebAdmin: 'Access denied' exception when reading the property
web.UserIsSiteAdmin: 'Access denied' exception when reading the property

After elevation:
Current user is site admin: True
Effective perm.: FullMask
web.UserIsWebAdmin: True
web.UserIsSiteAdmin: True

In PowerShell I defined a CheckIfCurrentUserIsSiteAdmin function as well, and invoked it without and with elevated rights:

function CheckIfCurrentUserIsSiteAdmin($url) {
  $site = Get-SPSite $url 
  $web = $site.RootWeb
  $currentUser = $web.CurrentUser
  Write-Host Current user $currentUser.LoginName is site admin on $url : $currentUser.IsSiteAdmin
  Write-Host Current user $currentUser.LoginName is site auditor on $url : $currentUser.IsSiteAuditor

  Write-Host Effective permissions on web: $web.EffectiveBasePermissions
  Write-Host web.UserIsWebAdmin: $web.UserIsWebAdmin
  Write-Host web.UserIsSiteAdmin: $web.UserIsSiteAdmin
}

$url = "http://YourServer"

Write-Host Before elevation of privileges
CheckIfCurrentUserIsSiteAdmin $url

Write-Host After elevation of privileges
[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges(
  {
    CheckIfCurrentUserIsSiteAdmin $url
  }
)

The results were the same as in the case of the C# console application, except for web.UserIsWebAdmin and web.UserIsSiteAdmin when called with no permissions on the web. In this case we didn’t receive any exception; simply no value (neither True nor False) was returned.

These results show that any code, be it a method in the standard SharePoint API or a custom component, that depends on the above tested properties behaves differently when used with elevated privileges, even if it is executed from an application external to the SharePoint web application context, that is, even if the identity of the process does not change.

February 9, 2015

“Decoding” SharePoint Error Messages using PowerShell

Filed under: PowerShell, SP 2010, Tips & Tricks — Tags: , , — Peter Holpar @ 22:26

When working with SharePoint errors in ULS logs, you can find the error message next to the stack trace. In the case of simple methods the stack trace may be enough to identify the exact conditions under which the exception was thrown. However, if the method is complex, with a lot of conditions and branches, it is not always trivial to find the error source: we don’t see the exception message itself, as it is stored in language-specific resource files, and in the code you see only a kind of keyword.

For example, let’s see the GetItemById method of the SPList object with this signature:

internal SPListItem GetItemById(string strId, int id, string strRootFolder, bool cacheRowsetAndId, string strViewFields, bool bDatesInUtc)

There is a condition near the end of the method:

if (this.IsUserInformationList)
{
    throw new ArgumentException(SPResource.GetString("CannotFindUser", new object[0]));
}
throw new ArgumentException(SPResource.GetString("ItemGone", new object[0]));

How could we “decode” this keyword to the real error message? It is easy to achieve using PowerShell.

For example, to get the error message for the “ItemGone”:

[Microsoft.SharePoint.SPResource]::GetString("ItemGone")

is

Item does not exist. It may have been deleted by another user.

Note that, since the second parameter is an empty array, we can simply omit it when invoking the static GetString method.

If you need the language-specific error message (for example, the German one):

$ci = New-Object System.Globalization.CultureInfo("de-de")
[Microsoft.SharePoint.SPResource]::GetString($ci, "ItemGone")

it is

Das Element ist nicht vorhanden. Möglicherweise wurde es von einem anderen Benutzer gelöscht.

Having the error message, it is most of the time already obvious at which line of code the exception was thrown.

It can also help to translate a localized message to the English one and use that to look up a solution for the error on the Internet using your favorite search engine, as there are probably more results when you search for the English text.
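Since GetString accepts an arbitrary CultureInfo, you can also dump a message in several languages at once; a sketch (the culture list is just an example, and only cultures whose language pack is installed on the farm will return a localized text):

```powershell
# Print the "ItemGone" message in a few cultures (a language pack is required for each)
"en-us", "de-de", "fr-fr" | ForEach-Object {
  $ci = New-Object System.Globalization.CultureInfo($_)
  Write-Host ("{0}: {1}" -f $_, [Microsoft.SharePoint.SPResource]::GetString($ci, "ItemGone"))
}
```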

Setting the Value of a URL Field or other Complex Field Types using PowerShell via the Managed Client Object Model

Filed under: Managed Client OM, PowerShell, SP 2013 — Tags: , , — Peter Holpar @ 00:30

Assume you have a SharePoint list that includes fields of type Hyperlink or Picture (let’s name this field UrlField), Person or Group (field name: User) and Lookup (field name: Lookup), and you need to set their values remotely from PowerShell via the Managed Client Object Model.

In C# you would set the Hyperlink field using this code:

var siteUrl = "http://YourServer";

using (var context = new ClientContext(siteUrl))
{
    var web = context.Web;
    var list = web.Lists.GetByTitle("YourTestList");
    var item = list.GetItemById(1);

    var urlValue = new FieldUrlValue();
    urlValue.Url = "http://www.company.com";
    urlValue.Description = "Description of the URL";
    item["UrlField"] = urlValue;

    item.Update();
    context.ExecuteQuery();
}

You may think that translating this code to PowerShell is as easy as this:

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"

$siteUrl = "http://YourServer"

$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$web = $context.Web
$list = $web.Lists.GetByTitle("YourTestList")
$item = $list.GetItemById(1)

$urlValue = New-Object Microsoft.SharePoint.Client.FieldUrlValue
$urlValue.Url = "http://www.company.com"
$urlValue.Description = "Description of the URL"
$item["UrlField"] = $urlValue

$item.Update()
$context.ExecuteQuery()

However, that approach simply won’t work, as you receive this error:

"Invalid URL: Microsoft.SharePoint.Client.FieldUrlValue" (ErrorCode: -2130575155)

When capturing the network traffic with Fiddler, the C# version sends this for the SetFieldValue method:

[Fiddler screenshot: the SetFieldValue request sent by the C# version]

For the PowerShell code however:

[Fiddler screenshot: the SetFieldValue request sent by the PowerShell version]

You can see that the first parameter, the field name, is the same in both cases; however, the second parameter, which should be the value we assign to the field, is wrong in the second case. In the C# case it has the correct type (FieldUrlValue, represented by the GUID value in the TypeId), while in PowerShell the type is Unspecified, and the actual type name is sent as text in this parameter (I assume PowerShell called the ToString() method on the type).

Solution: you should explicitly cast the object in the $urlValue variable to the FieldUrlValue type as shown below (the single difference is the cast in the line that sets the field value):

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"

$siteUrl = "http://YourServer"

$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$web = $context.Web
$list = $web.Lists.GetByTitle("YourTestList")
$item = $list.GetItemById(1)

$urlValue = New-Object Microsoft.SharePoint.Client.FieldUrlValue
$urlValue.Url = "http://www.company.com"
$urlValue.Description = "Description of the URL"
$item["UrlField"] = [Microsoft.SharePoint.Client.FieldUrlValue]$urlValue

$item.Update()
$context.ExecuteQuery()

The same applies to the other complex field types (only the relevant code is included below).

For the Person or Group field in C# (assuming 2 is the ID of the user you would like to set in the field):

var userValue = new FieldUserValue();
userValue.LookupId = 2;
item["User"] = userValue;

The network trace:

[Fiddler screenshot of the network trace]

PowerShell code that does not work:

$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
$userValue.LookupId = 2
$item["User"] = $userValue

The error you receive:

"Invalid data has been used to update the list item. The field you are trying to update may be read only." (ErrorCode: -2147352571)

The network trace:

[Fiddler screenshot of the network trace]

PowerShell code that does work:

$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
$userValue.LookupId = 2
$item["User"] = [Microsoft.SharePoint.Client.FieldUserValue]$userValue

For the Lookup field in C# (assuming 2 is the ID of the related list item you would like to set in the field):

var lookUpValue = new FieldLookupValue();
lookUpValue.LookupId = 2;
item["Lookup"] = lookUpValue;

The network trace:

[Fiddler screenshot of the network trace]

PowerShell code that does not work:

$lookUpValue = New-Object Microsoft.SharePoint.Client.FieldLookupValue
$lookUpValue.LookupId = 2
$item["Lookup"] = $lookUpValue

The error you receive:

"Invalid data has been used to update the list item. The field you are trying to update may be read only." (ErrorCode: -2147352571)

The network trace:

[Fiddler screenshot of the network trace]

PowerShell code that does work:

$lookUpValue = New-Object Microsoft.SharePoint.Client.FieldLookupValue
$lookUpValue.LookupId = 2
$item["Lookup"] = [Microsoft.SharePoint.Client.FieldLookupValue]$lookUpValue
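To avoid repeating the cast each time, the pattern can be wrapped into a small helper for Lookup fields; a sketch (the function name Set-LookupFieldValue is my own, and it assumes the client object model assemblies are already loaded as shown earlier):

```powershell
# Sets a Lookup field on a client object model list item,
# applying the explicit cast that PowerShell needs for the value to be serialized correctly
function Set-LookupFieldValue($item, $fieldName, $lookupId) {
  $value = New-Object Microsoft.SharePoint.Client.FieldLookupValue
  $value.LookupId = $lookupId
  $item[$fieldName] = [Microsoft.SharePoint.Client.FieldLookupValue]$value
  $item.Update()
}

# usage - the pending changes are sent to the server by ExecuteQuery:
Set-LookupFieldValue $item "Lookup" 2
$context.ExecuteQuery()
```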


The Shocking Blue Green Theme. Blog at WordPress.com.
