Building an ASP.NET MVC site, Part 1

MVC (Model-View-Controller) lets you split your site into separate components, which helps structure its development. Here is a short introduction to some good practices for building a solid application.

Shared MVC views

MVC contains three shared views:

  • _Layout.cshtml (in /Views/Shared)
    This file defines the common layout for "all" views. You can create a custom layout file if required, but I don't recommend it; it's seldom needed and should be kept as a last resort. The layout file defines a body section (not the same as the HTML body tag) and one or more named sections. The default layout file contains one scripts section that lets a developer inject script tags from a view into the rendered page.
  • _ViewImports.cshtml (in /Views)
    This file defines import statements (the same as using statements in C# code). It is hierarchical: each folder can contain one. Think carefully about what you put here. I recommend adding to the common _ViewImports.cshtml file in the /Views folder only those namespaces you really require on "every" page; if you create controller-level files, put only the definitions shared by all of that controller's views into them.
  • _ViewStart.cshtml (in /Views)
    This file defines code executed before the view. It is also hierarchical: each folder can contain one, and if a folder contains a _ViewStart.cshtml file, it is executed after the shared one. The default file sets the layout file the view uses. I personally wouldn't use this for any custom code; use a controller or service to provide the additional logic your page needs, or some other technique that puts the logic somewhere you can manage better (create unit tests, etc.).

You should always remember that even if something is technically possible, it is not necessarily the recommended way of doing things; overusing these three files is a good example.


As described above, the layout is used to create a common layout for all views. There are a couple of things you should know about the placement of different HTML elements in the file.

Your view is rendered before jQuery and other scripts are loaded. This is done for performance reasons, so that the HTML is rendered before the larger JavaScript files are loaded. It can, however, cause problems if you want to add script tags to partial views or view components.
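As a sketch of this ordering, here is a minimal layout that mirrors the ASP.NET Core MVC default template; the jQuery path is a placeholder:

```cshtml
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
</head>
<body>
    @* the view's body renders first... *@
    @RenderBody()

    @* ...scripts load last; views inject their own tags into this section *@
    <script src="~/lib/jquery/dist/jquery.js"></script>
    @RenderSection("scripts", required: false)
</body>
</html>
```

A view then adds its own tags with `@section scripts { <script src="..."></script> }`, and those tags render after the body, below the shared scripts.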


View imports are used mainly for declaring namespaces for views, but the file serves several other purposes as well. As I have already said, you shouldn't use every feature just because it is technically possible, and this applies to the imports file too. If multiple _ViewImports.cshtml files are run for a view, the combined behavior of the directives in those files is as follows:

  • @addTagHelper, @removeTagHelper: all run, in order
    I recommend defining all used tag helpers in the main imports file in the Views folder. Tag helpers should be common to your whole application. If you end up having view-specific tag helpers, consider whether you are creating a maintenance nightmare or whether a single tag helper is doing too much.
  • @tagHelperPrefix: the closest one to the view overrides any others
    I don't recommend using this at all. Your tag helpers should follow HTML syntax, and HTML has no namespaces or prefixes.
  • @model: the closest one to the view overrides any others
    If you want to define a model here, you should be sure that the same model can be reused in all, or almost all, views this imports file affects. In any other case, you shouldn't use this setting either; the preferred way is to define the model in the view.
  • @inherits: the closest one to the view overrides any others
    This allows you to create a custom page class instead of the default RazorPage<T>, where T is the model you defined for the page. It can be useful in some special cases where you want to add custom functionality to the page, but in most cases the same result can be achieved by other means, such as tag helpers or dependency injection. I personally find this method an echo from the old days.
  • @using: all are included; duplicates are ignored
    This is the most common use of the imports file. It lets you import common namespaces into your views. In most cases you will have separate namespaces for view models that are specific to one domain; you can easily add those namespaces to all your domain-specific views by creating an imports file in the same folder as the views.
  • @inject: for each property, the closest one to the view overrides any others with the same property name
    Inject allows you to add additional services to views. I think the most common use case is adding a localizer to your views on a multilingual site. Use inject the same way as tag helpers: add the inject lines to the common imports file and do not use the inject directive in any other imports file unless you have a really good reason for it.
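To make the directive behavior above concrete, here is a hedged sketch of a root-level _ViewImports.cshtml; the MyApp namespaces are placeholders, while the tag helper assembly and localizer service follow the usual ASP.NET Core defaults:

```cshtml
@* /Views/_ViewImports.cshtml (sketch) *@
@using MyApp                @* placeholder: root namespace, needed everywhere *@
@using MyApp.ViewModels     @* placeholder: shared view model namespace *@
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@inject Microsoft.AspNetCore.Mvc.Localization.IViewLocalizer Localizer
```

A controller-level /Views/Home/_ViewImports.cshtml would then add only the namespaces that folder's views need; its @using directives are merged with these, while an @inject of Localizer there would override this one.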


View start allows you to execute common code before views. It can be used in cases similar to the inherits directive. The code is executed for all views in the same folder as the view start file, or in any subfolder beneath it, which means you need to be sure every one of those views can execute the code. I recommend using some other method to achieve similar functionality.

Building an ASP.NET MVC site series:

User-driven Architecture (UDA)

User-driven (Software) Architecture is the next step in software architecture. It uses microservices and cloud-based architectures as components, and its purpose is to give users tools to automate their daily tasks without huge integration projects.

Currently these kinds of activities have been driven by IT departments as various forms of RPA (Robotic Process Automation). The purpose has been to automate simple work processes and help end users focus on more productive tasks.

The biggest problem in these RPA projects has been the lack of real integration between applications and systems. Applications are "integrated" with each other using copy-paste integrations and guess-which-button-to-click automation. In real-world situations, many solutions don't offer any way to communicate with application data or logic, so automations are done by mimicking user actions with the keyboard and mouse. This leads to fragile automations that break easily.

User-driven Software Architecture (UDA) addresses this problematic area. It provides an architecture where users can automate their daily activities themselves, without difficult integration and automation projects, using tools their IT department can support and monitor. User-driven Architecture is based on the following principles:

  1. Interfaces. Each application or system must provide an API (Application Programming Interface) and documentation for that API in OpenAPI Specification (OAS) format.
  2. Security. Each application or system must provide authentication (OAuth 2.0). Each application or system should also provide role-based authorization for the operations and data exposed through the interface. This means that users should only see the data they are entitled to see. In many applications and systems this has been implemented in the user interface, not inside the application logic.
  3. Workflow. Each organization should have a workflow service that users can utilize, for example Microsoft Flow or Amazon Simple Workflow Service. It can even be an EAI product configured to connect the interfaces that each application or system provides.
  4. Federation. This is optional, but recommended. Federation means cross-organization communication: each organization can publish a selected set of interfaces and/or workflows to their clients or other interest groups. For example, a client could order a product automatically from their internal workflow, and the organization's internal workflow could then be started to place an order with the product supplier when the number of products in stock gets too low.
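As an illustration of principles 1 and 2, here is a minimal sketch of what a UDA-style client call looks like; the endpoint URL, resource name, and token are hypothetical placeholders, not part of the original text:

```javascript
// Sketch: every UDA participant exposes an OAS-documented API and accepts
// OAuth 2.0 bearer tokens. This helper only builds the request description;
// the base URL and token are hypothetical.
function buildApiRequest(baseUrl, resource, accessToken) {
    return {
        url: baseUrl + "/" + resource,
        options: {
            method: "GET",
            headers: {
                "Authorization": "Bearer " + accessToken,
                "Accept": "application/json"
            }
        }
    };
}

var req = buildApiRequest("https://erp.example.local/api", "orders", "<token>");
// a workflow engine (or fetch) would then execute the call, e.g.:
// fetch(req.url, req.options).then(function(res) { return res.json(); });
```

Because every system describes its operations the same way (OAS) and authorizes them the same way (bearer tokens), a workflow service can chain such calls without the copy-paste integrations described above.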

User-driven Software Architecture (UDA) focuses on designing and building solutions from the user's perspective. Each application should be easy to automate without using its user interface. This way of thinking allows both users and organizations to automate simple, repeatable tasks and lets users focus on matters that help the organization be more profitable.

Upgrading sp-pnp-js to 1.0.5

You have a working SharePoint project which uses the SharePoint PnP JavaScript Core Component library. You want to update it, or you accidentally updated it from an earlier version to 1.0.5.

In this example I have used Patrick Rodgers' Yeoman sample project.

Before you do anything, everything works fine, and if you execute gulp build you get a successful build.


Updating sp-pnp-js

Before we can update to the new version of the sp-pnp-js package, we need to check how it is installed in the project. This can be done by checking the package.json file, which has two sections for external packages (dependencies and devDependencies).
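For reference, the two dependency sections look roughly like this; the version numbers are illustrative, not taken from the sample project:

```json
{
  "dependencies": {
    "sp-pnp-js": "^1.0.3"
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "typescript": "^1.8.10"
  }
}
```

Which section a package lives in decides whether you update it with --save or --save-dev.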


We will execute

npm install sp-pnp-js@latest --save

to update sp-pnp-js to the latest version. Note that you need to use the --save parameter instead of --save-dev, because sp-pnp-js is in the dependencies section instead of devDependencies.


You can see that we now have the 1.0.5 version we were looking for. Now if we try to build our project, we notice that it fails. At first everything seems to go fine,


but then we see only red error lines, and lots of them.


Updating TypeScript

Now it's time to read the manual, or something like that. In our case the explanation comes from GitHub commit comments: "Updates to TypeScript 2.0". OK. What does this mean? What version do we have? It's easy to find out: you can see it right after gulp build has started. It says our version is 1.8.10.



npm install typescript@latest --save-dev

Note that now you need to use --save-dev, because typescript is in the devDependencies section.


Now everything should work, as we have updated TypeScript to the required version.


Wait!! Why do we still get all those errors even though we have the correct TypeScript version?


Removing extra typings

It seems that we have duplicate type definitions for all those TypeScript files. Previously sp-pnp-js required several external typings before it worked; now those required typings have been added inside the package.


We need to remove the external typings. The extra typings can be found in the project\typings\ folder.

Execute the following commands:

typings uninstall whatwg-fetch --global
typings uninstall es6-promise --global


Now you can execute

gulp build

Everything works now.


Updating your Node.js environment

I have been used to using Windows Update and Visual Studio's Extensions and Updates feature. The problem is that keeping your Node.js environment up to date is not possible with those tools.

There are a couple of package managers available for Windows, but there is no "native" option, for example a PowerShell command that could be executed. I don't like using a separate tool for just a single task.


You can see the Node.js version number when you start the Node.js Command Prompt.


So far the easiest way to update Node.js is to download the installer from the Node.js web site. Note that there are two versions available, LTS and Current. LTS stands for Long Term Support. I'm currently running the latest version of Node.js, so I don't need to update it.


If you select LTS (it's selected by default), you can see that the current version is v4.4.7 with npm version 2.15.8.


If you select Current, you can see that the version number has changed to v6.3.1 with npm version 3.10.3.


Even if I have the latest version of Node.js, it doesn't mean I have the latest version of npm. I found a really excellent script that updates npm for you with a couple of commands. It shows the current version of npm installed on your system, and it can install "any" npm version you want, so you can downgrade if the latest version turns out to be buggy.

Open PowerShell as Administrator


Set-ExecutionPolicy Unrestricted -Scope CurrentUser -Force


npm install --global --production npm-windows-upgrade




Now we can close PowerShell and start the Node.js Command Prompt to verify that the installation was successful.

npm version


Possible error messages

You can get a few error messages while executing npm-windows-upgrade.

The first one, "Scripts cannot be executed on this system.", appears if you haven't changed the execution policy.


The other one, "NPM cannot be upgraded without administrative rights. To run PowerShell as Administrator, right-click PowerShell and select 'Run as Administrator'.", appears if you try to execute npm-windows-upgrade in non-administrative mode.


PnP-JS-Core samples

This is part four of my posts about SharePoint client-side development using Node.js.

You can see the previous parts here:

In this post we will extend the existing PnP Workbench with different common scripts.

Code Template

I have created the following template that can be used to test different pieces of code easily. You just need to write your own code after the // write code here comment. I have used this template for all the example code here unless otherwise noted.

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: Test name</h2>");
        // write code here
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


Here are scripts related to the Web object.

Show title of the web

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: getWebTitle</h2>");
        $pnp.sp.web.get()
            .then(function(web) {
                targetElement.append(web.Title + "<br />");
            })
            .catch(function(error) {
                targetElement.append(error + "<br />");
            });
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


If everything went fine, you should see the title of the web in the test bench area.


Show all properties from web object

This can be useful when you want to see the current state of the web object.

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: getWeb</h2>");
        $pnp.sp.web.get()
            .then(function(web) {
                for (var key in web) {
                    targetElement.append(key + ": " + web[key] + "<br />");
                }
            })
            .catch(function(error) {
                targetElement.append(error + "<br />");
            });
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


If everything went fine, you should see all the properties of the current web in the test bench area.



Enumerate all lists from current web

This enumerates all lists in the current web and shows all their properties.

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: enumLists</h2>");
        $'Id', 'Title').orderBy('Title').get()
            .then(function(listIds) {
                for (var index in listIds) {
                    var l = $pnp.sp.web.lists.getById(listIds[index].Id);
                    l.get()
                        .then(function(list) {
                            for (var key in list) {
                                targetElement.append(key + ": " + list[key] + "<br />");
                            }
                            targetElement.append("<hr />");
                        })
                        .catch(function(error) {
                            targetElement.append(error + "<br />");
                        });
                }
            })
            .catch(function(error) {
                targetElement.append(error + "<br />");
            });
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


If everything went fine, you should see all the lists under the current web, and all the properties of each list, in the test bench area. This is quite a long list, so I have taken just a snapshot of the first list.


Enumerate List Items

This enumerates all items in the selected list.

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: enumListItems</h2>");
        // the list title is an assumption; replace 'Documents' with your own list
        $pnp.sp.web.lists.getByTitle('Documents').items.get()
            .then(function(items) {
                for (var i = 0; i < items.length; i++) {
                    for (var key in items[i]) {
                        targetElement.append(key + ": " + items[i][key] + "<br />");
                    }
                    targetElement.append("<hr />");
                }
            })
            .catch(function(error) {
                targetElement.append(error + "<br />");
            });
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


If everything went fine, you should see all the list items from the selected list, and all the properties of each list item, in the test bench area. This is quite a long list, so I have taken just a snapshot of the first list item.



Search sites

This example uses search to list all site collections in the tenant. It lists only those the current user has rights to.

var testbench = testbench || {};

testbench.tests = function() {
    var targetElement = jQuery("#pnp-test-bench");

    var runCode = function() {
        targetElement.append("<h2>Test: searchSites</h2>");
        // the query text is an assumption: contentclass:STS_Site returns site collections
        $"contentclass:STS_Site")
            .then(function(searchResult) {
                targetElement.append("Search took " + searchResult.ElapsedTime + "ms to complete.<br />");
                targetElement.append("Total rows " + searchResult.TotalRows + ".<br />");
                targetElement.append("Total rows including duplicates " + searchResult.TotalRowsIncludingDuplicates + ".<br />");
                targetElement.append("<hr />");

                for (var index in searchResult.PrimarySearchResults) {
                    var resultItem = searchResult.PrimarySearchResults[index];
                    for (var key in resultItem) {
                        targetElement.append(key + ": " + resultItem[key] + "<br />");
                    }
                    targetElement.append("<hr />");
                }
            })
            .catch(function(error) {
                targetElement.append(error + "<br />");
            });
    };

    return {
        runCode: runCode
    };
};

var t = new testbench.tests();
t.runCode();


If everything went fine, you should see the list of search results, and all the properties of each search result item, in the test bench area. This is quite a long list, so I have taken just a snapshot of the first item.

Creating a PnP-JS-Core Workbench

This is the third part of my posts about SharePoint client-side development using Node.js.

You can see the previous parts here:

In this post we will load the Office PnP client-side scripts and create a simple workbench to test our applications in SharePoint.

Setting up project

Create project

The first thing is to create a new project, called PnPTestBench. I covered these steps in more detail in part 2, so I will just show the commands and the end result here.

Open Node.js Command Prompt

cd \Source
md PnPTestBench
cd PnPTestBench
npm init

name: pnptestbench
version: (Accept default value)
description: PnP-JS-Core Test Bench
entry point: (Accept default value)
test command:
git repository:
author: (Enter your name)
license: (Accept default value)

npm install --save-dev gulp


npm install --save-dev gulp-serve


Copy files from existing project

We have already created a working gulpfile in our previous project, so we can reuse it here.



Copy gulpfile.js, dev_sharepoint_local.crt and dev_sharepoint_local.key from MyFirstProject folder to PnPTestBench folder.

Add required scripts

The Office PnP scripts can be loaded as an npm package. If you are not familiar with Office PnP, you can find more information on their GitHub project page.


npm install jquery --save-dev



npm install typings --save-dev



npm install sp-pnp-js --save-dev


If you get the following error messages, "'typings' is not recognized as an internal or external command" and "Failed at the sp-pnp-js@x.x.x postinstall script 'typings install'.", you didn't install the typings package before installing the sp-pnp-js package.


Building test bench

Creating folders

Now we have installed the packages required to get our project working.

We need to create a new folder called app, and a couple of subfolders under it, because gulpfile.js contains references to them.

md app
md app\scripts
md app\styles


Open Visual Studio Code.


Creating files

Now we can create one HTML file, one style sheet, and one JavaScript file with the following content.


(This is empty file)


var testbench = testbench || {};

testbench.tests = function() {
   var targetElement = jQuery("#pnp-test-bench");

   var writeGreeting = function(name) {
      targetElement.append("<h2>Test: writeGreeting</h2>");
      targetElement.append("Hello " + name + "<br />");
   };

   return {
      writeGreeting: writeGreeting
   };
};

var t = new testbench.tests();
t.writeGreeting("world");



app.css:

body {
   font-family: Verdana, Geneva, sans-serif;
}
.example {
   margin: 10px;
}
.example .code {
   border: 1px solid lightgray;
   padding: 10px;
   font-family: "Lucida Console", Monaco, monospace;
}

index.html:

<html>
    <head>
        <title>SharePoint PnP test bench</title>
        <link rel="stylesheet" type="text/css" href="styles/app.css" />
    </head>
    <body>
        <h1>SharePoint PnP test bench</h1>
        <div class="example">
            <p>Copy following code to your SharePoint site:</p>
            <p class="code">
            &lt;script type="text/javascript" src="https://dev.sharepoint.local/scripts/jquery.min.js"&gt;&lt;/script&gt;<br />
            &lt;script type="text/javascript" src="https://dev.sharepoint.local/scripts/pnp.min.js"&gt;&lt;/script&gt;<br />
            &lt;script type="text/javascript" src="https://dev.sharepoint.local/scripts/app.js"&gt;&lt;/script&gt;<br />
            &lt;link rel="stylesheet" type="text/css" href="https://dev.sharepoint.local/styles/app.css" /&gt;<br />
            &lt;div id="pnp-test-bench"&gt;&lt;/div&gt;<br />
            </p>
        </div>
    </body>
</html>


Copying library files

But wait… what about the PnP JavaScript file? It's located in the node_modules\sp-pnp-js\dist subfolder, but we are not serving that folder. It doesn't make sense to serve content from every single library folder; that would make testing the application much harder.

We can use Gulp to copy the library files from their original location to the app\scripts folder.

Open gulpfile.js and add the following content to it.

gulp.task('copy-files', function() {
  // copy the served library files into app/scripts
  // (the jquery path is an assumption based on the packages installed above)
  return gulp.src([
      'node_modules/sp-pnp-js/dist/pnp.min.js',
      'node_modules/jquery/dist/jquery.min.js'
    ])
    .pipe(gulp.dest('app/scripts'));
});

Because these files don't change that often, we only need to copy the library files to the scripts folder when we add new libraries. This is done by executing the following command in the Node.js Command Prompt.

gulp copy-files


Now we can start our test bench:

gulp serve-https


Then we need to copy the HTML from the example code block to a SharePoint test page. If you are unsure how to do this, check the instructions in the "Getting ready for SharePoint development using node.js" posting.


Our test bench works fine, and now we can continue working with it. I have created a few test cases that use the Office PnP JavaScript Core.

Getting ready for SharePoint development using node.js

There has been a lot of discussion about the new SharePoint Framework, to be released soon. Here are instructions on how to get your development environment ready for the new SharePoint programming model. You can also use these instructions to start working with the Office PnP JavaScript libraries.

Install required / recommended software

Visual Studio Code

You can download the lightweight Visual Studio Code for free from Microsoft.

Installing VSCode is really simple. I have noticed that the "Open with Code" actions help you work with VSCode better, so during the "Select Additional Tasks" step, remember to select both Add "Open with Code"… checkboxes as shown below.



The next thing is to install Node.js to host your new development environment. You can download it from the Node.js web site. Installation is a Next-Next installation that doesn't require any configuration.


The first thing is to install Gulp. Gulp needs to be installed globally so that every project can use it.

Open Node.js command prompt and type

npm install --global gulp-cli

This installs Gulp for you. It won't take long.


After we have installed Gulp, we create a folder for the project.

Create Project

Create project folders

C:\Users\janis>cd \
C:\>md Source
C:\>cd Source
C:\Source>md MyFirstProject
C:\Source>cd MyFirstProject


Execute the init command to set up the project. Project initialization asks several questions about your project; this is almost identical to the Visual Studio new-project window. At this time we don't connect the project to any source code repository. Note that you need to use lowercase characters in the project name.

Configure project

npm init


name: myfirstproject (Note that you need to use lowercase characters here, so you can't accept the default value.)
version: (Accept default value)
description: My First node.js project
entry point: (Accept default value)
test command: (Leave empty)
git repository: (Leave empty)
keywords: (Leave empty)
author: (Enter your name)
license: (Accept default value)


After accepting the license, init will create a package.json file for you, like this. You need to confirm it.


After you have initialized the project, you can install Gulp into it. This adds the required configuration and packages for Gulp to work.

npm install --save-dev gulp


Open Windows Explorer and go to C:\Source.

Select MyFirstProject and select Open with Code from the context menu.

Create a new file called gulpfile.js and copy the following code there.

var gulp = require('gulp');

gulp.task('default', function() {
  // place code for your default task here
});


Then you can execute gulp. If you get an error message that says "No gulpfile found", you may have created gulpfile.js in the wrong folder or made a mistake in its name.


Adding web server to project

Before you can start developing the application, you need to enable a web server for your project. This is done by adding the gulp-serve package to your project.

npm i --save-dev gulp-serve


After we have added the gulp-serve package to our project, we need to edit gulpfile.js to enable the web server functionality. This is done with the following lines of code.

var serve = require('gulp-serve');

gulp.task('serve', serve(['app']));


Now you can execute the following command in the Node.js Command Prompt. The system tells us that it will serve files on localhost port 3000.

gulp serve


Open a web browser and type http://localhost:3000 into the address bar. As you can see from the result, there is nothing to serve to clients yet.


Creating content

Create a new folder called app, and subfolders under it called scripts and styles.

Also create files called app.js, app.css, and index.html with the following contents.

function greet() {
    // the original greeting text was not preserved; alert is an assumption
    alert("Hello world!");
}


body {
    background-color: lightgray;
}


<html>
    <head>
        <title>Hello world</title>
        <script type="text/javascript" src="scripts/app.js"></script>
        <link rel="stylesheet" type="text/css" href="styles/app.css" />
    </head>
    <body>
        <h1>Hello world!</h1>
        <button onclick="greet();">Greet</button>
    </body>
</html>


Refresh your browser page. Now you can check that you can actually see our test page in your browser. If you click the Greet button, you will see a greeting.


Connecting to SharePoint

Creating test site

Now that we have created a working test bench, it is time to connect our code to SharePoint. The first thing is to create a test site. Log into your tenant and go to the Site Contents page.

Select New | Subsite


Enter a site title, description, and URL.

Select Team Site as site template.


Editing page

Edit the page and add a Script Editor Web Part to it.


Edit the Script Editor Web Part and click the EDIT SNIPPET link.


Add the following lines to the Script Editor.

<script type="text/javascript" src="http://localhost:3000/scripts/app.js"></script>

<link rel="stylesheet" type="text/css" href="http://localhost:3000/styles/app.css" />

<button onclick="greet();">Greet</button>

Accept the changes. Now you can try to click the Greet button; as you can see, it won't work. The problem is that our SharePoint site uses an encrypted HTTPS connection while our added scripts use a plain HTTP connection. If you try this with Edge, as I did, you may notice that there is no way to get the button to work.


If you open the same page in Internet Explorer, you will see the following notification. If you click the Show all content button, you will notice that the background changes to light gray and you can click the button.


Setting up HTTPS

We don't want to confirm every time that we actually want to see our development work. This requires that we set up an HTTPS service in our development environment.

Creating certificates

Before you enable the HTTPS service, you need to create certificates. I have created another post to cover this.

Configuring project

We need to copy our certificates to the project folder.

Go to C:\Source\DevCertificate folder.

Select the dev_sharepoint_local.crt and dev_sharepoint_local.key files and copy them to the C:\Source\MyFirstProject folder.



Open gulpfile.js in Visual Studio Code and add the following lines to it.

gulp.task('serve-https', serve({
  root: ['app'],
  port: 443,
  https: {
    key: 'dev_sharepoint_local.key',
    cert: 'dev_sharepoint_local.crt'
  }
}));

Open the Node.js Command Prompt.

Go to the project folder and start gulp with HTTPS:

gulp serve-https


Testing connection

Open your browser and verify that the connection to https://dev.sharepoint.local actually works and that you don't get any certificate errors.


Modifying SharePoint

Now we can modify our scripts from SharePoint.

Go back to your SharePoint test page and replace existing code with this.

<script type="text/javascript" src="https://dev.sharepoint.local/scripts/app.js"></script>
<link rel="stylesheet" type="text/css" href="https://dev.sharepoint.local/styles/app.css" />
<button onclick="greet();">Greet</button>

Click Insert, accept the changes, and save the page.


Now you can see that the background has changed to light gray, the button works, and you don't get any warnings about the page.


We have now finished the first part of SharePoint development with Node.js.


Setting up HTTPS certificates

If we want to do proper Node.js development for our SharePoint applications, we need to enable HTTPS for our project.

Add host name

Before we can create certificates for our development environment, we need to define a host name for the development site.

Click the Search icon and type notepad to list Notepad.

Select “Run as administrator” from context menu.


Open C:\Windows\System32\drivers\etc folder.

Change File type to All Files.

Select the hosts file and click Open.


Add a new entry to the hosts file mapping the development host name to the local machine ( dev.sharepoint.local).


Save file and close Notepad.

Creating certificates

I have used the OpenSSL toolkit to generate the certificates I use for HTTPS. OpenSSL is an open source toolkit, but the project itself only provides source code.

Install OpenSSL client

You can install an OpenSSL client from this location.

I have used the default settings, so my installation is in C:\OpenSSL-Win32. If you have installed it to a different location, you need to adjust the paths in the next commands.

I have used this excellent post from Dieter Stevens as the basis for this part of the post. He shows the step-by-step installation of the client, so I won't cover that here.

Note that the default maximum length for the country name is 2 letters.
If your country has a three-letter country code, you need to change the countryName_max value in the openssl.cfg file before going forward.

Creating Certificates

When you have installed the client, you need to create a folder for the certificates.

Open Command Prompt.

Go to Source folder.

Create DevCertificate folder and go there.

set RANDFILE=C:\Source\DevCertificate\.rnd
set OPENSSL_CONF=C:\OpenSSL-Win32\bin\openssl.cfg


Start the OpenSSL client.



genrsa -out ca.key 4096


req -new -x509 -days 1826 -key ca.key -out ca.crt

Country Name: (Enter 2 letter country code for your country).
State: (Enter your state)
Locality Name: (Enter your city)
Organization Name: (Enter your company name.) I have added "Development" after the company name, so that nobody thinks this is a real certificate.
Organizational Unit Name: (Enter your OU)
Common Name: sharepoint.local
Email address: (Enter your email address)


genrsa -out dev_sharepoint_local.key 4096


req -new -key dev_sharepoint_local.key -out dev_sharepoint_local.csr

Country Name: (Enter 2 letter country code for your country).
State: (Enter your state)
Locality Name: (Enter your city)
Organizational Name: (Enter your company). I have added Development after the company name so that nobody thinks this is a real certificate.
Organizational Unit Name: (Enter your OU)
Common Name: dev.sharepoint.local
Email address: (Enter your email address)
Challenge password: (Enter proper password or leave empty)
Optional company name: (Leave empty)


Sign the CSR with the root CA to create a development certificate that is valid for two years (730 days):

x509 -req -days 730 -in dev_sharepoint_local.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out dev_sharepoint_local.crt


Export the certificate and its private key to a PKCS#12 (.p12) package, including the CA chain:

pkcs12 -export -out dev_sharepoint_local.p12 -inkey dev_sharepoint_local.key -in dev_sharepoint_local.crt -chain -CAfile ca.crt
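For reference, the whole flow above can also be run non-interactively from an ordinary shell. This is only a sketch, assuming the openssl binary is on the PATH; the -subj options stand in for the interactive prompts, and the subject values and the export password "changeit" are placeholders you should replace with your own:

```shell
# Non-interactive sketch of the certificate flow above.
# Assumptions: openssl is on PATH; subject fields and the export
# password "changeit" are placeholders -- substitute your own values.
set -e

# Root CA key and self-signed root certificate (5 years)
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 1826 -key ca.key -out ca.crt \
  -subj "/C=FI/ST=State/L=City/O=Company Development/OU=IT/CN=sharepoint.local"

# Development server key and certificate signing request
openssl genrsa -out dev_sharepoint_local.key 4096
openssl req -new -key dev_sharepoint_local.key -out dev_sharepoint_local.csr \
  -subj "/C=FI/ST=State/L=City/O=Company Development/OU=IT/CN=dev.sharepoint.local"

# Sign the CSR with the CA (2 years), then package key + cert as PKCS#12
openssl x509 -req -days 730 -in dev_sharepoint_local.csr \
  -CA ca.crt -CAkey ca.key -set_serial 01 -out dev_sharepoint_local.crt
openssl pkcs12 -export -out dev_sharepoint_local.p12 \
  -inkey dev_sharepoint_local.key -in dev_sharepoint_local.crt \
  -chain -CAfile ca.crt -passout pass:changeit
```

Running the commands this way is handy if you ever need to recreate the certificates, because there are no prompts to answer.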


Installing certificates

Before you can use the certificates, you need to install them on the local machine.

Go to C:\Source\DevCertificate.

Install Root certificate

Right-click the ca.crt file and select Install Certificate from the context menu.


Select Current User and click Next.


Select Place all certificates in the following store and click Browse…

Select Trusted Root Certification Authorities and click OK.

Click Next and then Finish.



Click Yes on the security warning.


Click OK to close the final dialog.

Install development certificate

Right-click the dev_sharepoint_local.crt file and select Install Certificate from the context menu.


Select Automatically select the certificate store based on the type of certificate.

Click Next.

Click Finish.

Click OK to close the final dialog.


You have now successfully created one root certificate and one certificate for the development server.

Hands-on guide to O365 Planner

Microsoft has released Planner to all O365 users; it has already rolled out, or will soon be rolling out, to your O365 environment as well. You can find its icon in the O365 global navigation. If you can't find it, you can always go to the direct site. Planner is built on top of O365 Groups. It gives the end user a really nice and familiar look and feel, especially for people who have worked in iterative projects, because it uses a similar way of showing and organizing tasks. You can move tasks from one bucket (a group of tasks) to another just by dragging and dropping, and you can assign tasks to users in the same way. Overall, ease of use is the main impression when you work with Planner. It's meant for all kinds of users, and adoption doesn't take long. This can also lead to a problem, because it's really easy to create plans, buckets and tasks.

Planner and Groups

Planner uses Groups behind its task management pages. This probably makes sense, but it's also a big drawback: jumping between Planner and Groups functionality seems to be an afterthought. You can easily go from Planner to Groups (Conversations, Calendar, Members and Files), as we can see from the picture on the right; I have highlighted Planner features in red and Groups features in blue. If I, for example, go to Conversations, I get a new browser window with Outlook, which was not the behavior I expected. The second problem is that there is no way to go back from Conversations to my plan and the tasks I created there.


Image 1 Planner user interface


Image 2 Groups user interface

Creating a new plan

Creating a new plan is a really easy task. All you have to do is give it a name and that's it. Well… not exactly. There are a couple of things you need to decide before creating a new plan.

  1. Do you want a private plan (accessible only to invited users) or a public plan (accessible to all users)?
    • A private plan creates a private group, but private groups are still visible to all users even though the private plan itself is not.
    • If you want to make a public plan private, you need to delete the existing plan and then recreate it.
  2. The plan's email address is global within your O365 tenant.
    • This means that you should consider using some kind of naming scheme, such as country code + department, for example FI_IT_PlanName (Finland, IT department, then the plan name).
    • Group email addresses use the tenant's default domain, so the address is not the same as your company's normal email address.
  3. What kind of plans do you want to create?
    • If you create one plan per project, that makes sense, because a project has an exact end date.
    • If you have a recurring meeting, you need to decide whether to create a plan for each (weekly, monthly) meeting or to create one plan and organize the tasks inside it.
      If you create multiple plans, you need to invite users to the plan every time you create a new one. At the moment there is no way to use one plan as a template for a new one.


Image 3 New plan form

Managing tasks

Now that we have successfully created a new plan, we can start working with tasks. The easiest way is to type a new task name and hit the Enter key. This creates a new task, and you don't need to touch the mouse while entering tasks. After you have created your tasks (task names), you can make changes to them, like assigning them to different people or setting a due date. These steps are all optional; the task name is actually the only required information. The main view you use is the Board, which lets you manage tasks. The default bucket (To do in English) can be renamed, and you can create new buckets for different types of tasks, such as Sales, Marketing, or the stages of your project delivery model. You can easily move a task from one bucket to another by dragging and dropping it onto the target bucket. The same functionality also works when you assign tasks to people in the Assigned to view or change a status in the Progress view.


Image 4 New task form

You can manage task information with its own form, which contains a huge amount of information. You can manage the following:

  • Bucket
  • Status
    The status is shown as a small circle in the bottom right corner.
    If it's not visible, the task hasn't been started.
    If it's half full, the task is in progress.
    If it has a green check mark, the task has been completed.
  • Start date
  • Due date
    The due date is shown next to the task status.
  • Task name
  • You can also assign the task to a user, and the assignment is shown in the task preview.
  • Task description
  • Attachment and Links
    An attachment can be any type of file. Common file types and links can be shown in the task preview.
  • Checklist
    You can create a list of check items that can also be shown in the preview.
    The preview shows uncompleted checklist items and how many items have been completed out of the total.
    This is a really nice feature, because you can mark items done in the preview without opening the task for editing.
  • Labels
    You can define one or more labels for the task.

Here is the complete edit form of the task. Behind … you can find only Delete Task. This is also the place where you can see whether there have been any comments on the task. Note that all status changes and assignments are also listed as comments, so an entry there doesn't necessarily mean that somebody has actually commented on the task.


Image 5 Edit task form


Image 6 Managing tasks

In the Charts view you can easily check the current status of the tasks. It shows you basic metrics about the tasks and also highlights all tasks that haven't been finished by their due date.


Image 7 Charts view




Install Script Generalization

If you have tried to execute the scripts more than once, you have noticed that they are not practical: you need to type your user name and password quite often. It also takes a lot of effort to use the same scripts in different environments, because you would need to do a huge search-and-replace job every time you update something.

Now we will make the scripts more generic so that, with little effort, you can easily use them in different tenants.


Configurations script

The first thing is to create a common PowerShell script that contains all the configuration. This file will be used by all the other scripts, so we only need to make a small change in one file and can then reuse the same scripts over and over again.

Add a new PowerShell Script item named 'Configurations.ps1' to the Scripts project.

# Configurations.ps1
$Global:TenantName = "tenant"
$Global:ContentFolder = Join-Path $PSScriptRoot -ChildPath "..\Project.SharePointApp1.Web" -Resolve

# Do not change these lines
$Global:TenantRootUrl = "https://" + $Global:TenantName + ".sharepoint.com"
$Global:TenantAdminUrl = "https://" + $Global:TenantName + "-admin.sharepoint.com"
$Global:Credential = Get-SPOStoredCredential -Name $Global:TenantName -Type PSCredential

It defines the following common variables for us:

  • TenantName contains the name of the tenant.
  • TenantRootUrl contains the URL of the tenant's SharePoint root site. As Microsoft has deprecated the public site, we only have one root site.
  • TenantAdminUrl contains the URL of the SharePoint admin site, which some of the scripts need.
  • Credential contains the credentials we use to connect to the tenant, so that we don't have to type them over and over again.


Open Windows Credential Manager. The easiest way is to type credential into Windows Search; it gives you two results. Select Manage Windows Credentials.

Add a generic credential and type in your credentials. Make sure that you type your tenant name into the Internet or network address field, because the script looks up the stored credential by that name.

Using configuration

Common Settings

Now that we have created one common configuration file and saved our credentials in a secure place, we can start using this new approach in the other scripts.

Here is an example script that activates our custom JavaScript files.

We can modify it a little bit. After this modification you don't need to enter a user name and password anymore, and if you want to use the same scripts across development, testing and production environments, you just need to change the TenantName variable.

# ActivateScripts.ps1
& .\Configurations.ps1
$siteUrl = $Global:TenantRootUrl + "/sites/app1"
Connect-SPOnline -Url $siteUrl -Credentials $Global:Credential

We use a local siteUrl variable because PowerShell won't let us concatenate the two variables directly in the Connect-SPOnline call.

It would also be possible to use parentheses to combine the two variables: Connect-SPOnline -Url ($Global:TenantRootUrl + "/sites/app1") -Credentials $Global:Credential. Both ways are acceptable.

Upload files

We also had a hard-coded source path for the files. Now we can make it relative to the script location, so we can copy the scripts to a new computer without requiring it to have the same folder structure as our own development machine.

# UpdateFiles.ps1
& .\Configurations.ps1
$siteUrl = $Global:TenantRootUrl + "/sites/app1"
Connect-SPOnline -Url $siteUrl -Credentials $Global:Credential
Add-SPOFile -Path (Join-Path $Global:ContentFolder "Scripts\jquery-2.2.0.min.js") -Folder "SiteAssets/Scripts/jQuery"
Add-SPOFile -Path (Join-Path $Global:ContentFolder "Scripts\Project.js") -Folder "SiteAssets/Scripts"