Cross Domain Communication in SharePoint 2013 Apps

I believe that the most common types of apps people will build will be either provider-hosted or hybrid apps that include important server-side components. These server-side components will manipulate content inside SharePoint based on actions performed in the browser, and on demand based on scheduled jobs or input from other parts of the overall system.

How this communication takes place, and the overall architecture, is of primary importance in any distributed app architecture, and it is the thing I spent the most time figuring out as I built my current reference architecture. I tried many options before settling on a final strategy. This post covers the various options, their strengths and weaknesses, and my current implementation.


I’ve built many distributed systems over the years using a variety of technologies. If the experience has taught me anything it’s that it is critical to minimize dependencies between subsystems – this includes dependencies between user interfaces and application services.

In past projects it has often been the case that the purpose of the distributed architecture is to provide services to multiple UIs built on heterogeneous technologies. This is increasingly the case as we deal with phones, tablets, and other form factors. So, the second goal of this architecture is to support multiple user interfaces. This means that the data services and business logic belong in one or more app servers and not in JavaScript in the client.

In any distributed system there may be operations that take more time than a typical page load. This is especially common in SharePoint solutions because provisioning is comparatively slow. The third goal is that the architecture must support long running transactions.

Sometimes an operation will face problems. When this happens the system must be robust enough to identify the issue and compensate to recover. The fourth goal is that the architecture allows complete control over the manipulation of data and other artifacts.

To sum up the goals, the application services need to support an arbitrary set of UI technologies and ensure data integrity for short running and long running transactions.


Remote Event Receivers

I was very excited by the concept of remote event receivers when I first learned about apps, but I had a very hard time getting them to work because of bugs in the beta and quirks in the early Visual Studio tools. I am happy to report that both problems were addressed and now it is pretty easy to create and handle remote events.

The data flow for a remote event receiver looks like this:

Note the question marks in the diagram: remote event receivers are not reliable. If your listener is not available to handle the event, or the call fails, your app won't handle the event and SharePoint won't retry the notification. Worse yet, there is no direct mechanism available to the client to determine the ultimate success or failure of the event.

Finally, the use of remote event receivers makes the application server reliant on SharePoint for part of what should be atomic operations and creates a dependency between the app services and SharePoint. I still envision scenarios where remote events are helpful, but they will need to be augmented by background jobs that sweep up any missed events.

Verdict: Remote event receivers do not meet the goals.

Others have suggested that workflows are a good alternative because the workflow engine can deal with transient errors in service calls, but that approach fails to minimize dependencies between my tiers and cedes critical functionality of one subsystem to another… not good. That said, I freely admit that I don't know enough about this approach to dismiss it generally; lack of time was the biggest reason I didn't pursue it.

At this point I realized that I needed to put all of my data processing into my application server and orchestrate my transactions from there.

Web Proxy / Remote Endpoints

Clearly what I needed to do was call services on my app server from the browser that could take charge of a complete transaction. The challenge is that the app server is always on a different domain and cross-site scripting restrictions apply. Fortunately SharePoint includes a couple of facilities to enable this sort of interaction. One, the SharePoint Web Proxy, is for scenarios where the page is in SharePoint and the service is in another domain; the second, the cross domain library, is for the inverse and is primarily for communicating with SharePoint from pages contained in App Parts.

In my architecture the pages are stored in SharePoint and so I began working with the web proxy. The web proxy requires RemoteEndpoint elements in your app manifest that identify domains you are allowed to call. Calls are sent using the SP.WebRequestInfo object to SharePoint which forwards the request to the actual service.
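
For reference, registering an allowed domain looks something like this in AppManifest.xml (the service URL below is a placeholder, not an endpoint from my actual app):

```xml
<!-- AppManifest.xml fragment: each RemoteEndpoint names a domain
     the SharePoint Web Proxy is allowed to forward requests to. -->
<RemoteEndpoints>
  <RemoteEndpoint Url="https://services.example.com" />
</RemoteEndpoints>
```

Any call sent through SP.WebRequestInfo to a domain not listed here is rejected by the proxy.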

I had a few problems with this approach. The first is that it is inflexible and creates a dependency between my app manifest and my services. What if I want to move the services around? The location is baked into the manifest. However, the deal-breaker was that the Web Proxy enforces certain timeouts and other settings that you can neither view nor change. It is possible for the Web Proxy to report an error to the client in cases where the application server received and processed the message.

I first encountered this problem with a long transaction before the worker role was in place – the timeout appears to be 30 seconds, but I also had transient issues that I can’t explain. Sometimes it just reported a failure even though the request made it to my server and was processed successfully. The web proxy is a black box and there appears to be no facility for troubleshooting.

Verdict: The Web Proxy does not meet my goals.

Cross Domain Library

The cross domain library uses an HTML5 feature called postMessage via SP.RequestExecutor.js. I quickly realized that this was the mechanism I needed to use, but that I couldn't use the cross domain library itself because it goes from a remote page to a SharePoint service, and I needed to go from a SharePoint page to a remote service.

Verdict: Not applicable

Custom postMessage Implementation and JSONP

In the end I wound up building a custom postMessage implementation for my complex flows. My services are implemented as generic ASP.NET handlers. My pages in SharePoint use a custom function that adds an iframe to the page; the iframe receives a script that ultimately calls the custom service and proxies the reply back to the SharePoint page.

This mechanism meets all of my goals, and it has an added advantage: it allows me to implement additional security. Complete control over the communication lets me easily share my context token cookie, support deep links, and implement a digest to prevent man-in-the-middle attacks.
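
The heart of the implementation is correlating each message posted to the iframe with the reply that eventually comes back. Stripped of the DOM wiring, the correlation logic looks something like this (the function and field names here are illustrative, not my actual code):

```javascript
// Track outstanding requests by correlation id so that replies
// arriving via postMessage can be matched to their callbacks.
var nextId = 0;
var pending = {};

function createEnvelope(payload) {
  var id = ++nextId;
  return { correlationId: id, payload: payload };
}

function sendRequest(payload, callback) {
  var envelope = createEnvelope(payload);
  pending[envelope.correlationId] = callback;
  // In the real flow this is where the message crosses domains:
  // iframe.contentWindow.postMessage(JSON.stringify(envelope), serviceOrigin);
  return envelope;
}

// Invoked from the window's "message" event handler, after verifying
// event.origin against the expected service domain.
function handleReply(data) {
  var envelope = JSON.parse(data);
  var callback = pending[envelope.correlationId];
  if (callback) {
    delete pending[envelope.correlationId];
    callback(envelope.payload);
  }
}
```

The origin check in the message handler is what keeps an arbitrary page from injecting replies into the conversation.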

For simpler interactions, like the service that supports my autocomplete fields, I simply use jQuery with JSONP.
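
JSONP sidesteps the same-origin policy by loading the response as a script: the server wraps the JSON in a call to a callback function named in the request's query string. Conceptually, the server side of the exchange is just this (a sketch, not my actual handler):

```javascript
// Wrap a JSON payload in the callback named by the "callback" query
// string parameter so the browser executes the response as script.
function toJsonpResponse(callbackName, data) {
  return callbackName + "(" + JSON.stringify(data) + ");";
}
```

On the client, jQuery generates the callback name and registers the function automatically when you request `dataType: "jsonp"`.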

Verdict: Winner!

Happy architecting!

Author: Doug Ware

Using the Web and Folder Properties Collections with CSOM (2013)

One annoying thing about CSOM in SharePoint 2010 is that there is no way to add persisted items to Web.AllProperties or Folder.Properties. This deficiency is fixed in 2013, but the way it works is not obvious, because if you look at the objects (as I did) you'd guess that the way to set a property is to manipulate the PropertyValues.FieldValues collection – and you'd be wrong!

Here is a working example in C#:

var props = clientContext.Web.AllProperties;
props["propertyName"] = "Value";
clientContext.Web.Update();
clientContext.ExecuteQuery();

In JavaScript, use props.set_item("propertyName", "Value"), then call update() on the web and executeQueryAsync() to commit the change.

Happy coding!

Author: Doug Ware

SharePoint 2013 App Web versus Host Web Redux

One thing I do not like about apps as they exist today is the reduced functionality of app webs compared to a normal web, e.g. a team site. To review: an app web is created as part of a SharePoint-hosted app and as part of a hybrid provider-hosted app. I wrote about my complaints in more depth here: Building Traditional SharePoint Collaboration Solutions with the App Model. In that post I advocate the use of the host web as a deployment target to create a full-featured app without the restrictions found in the ghetto-web where everything is treated as a second-class citizen.

Not long after I wrote that post fellow MVP Chris O’Brien wrote this post: SharePoint apps: working with the app web, and why you should. In it he briefly discusses reasons why you might not want to deploy to the host web.

Enough time has passed and I’ve learned enough that I feel like the question is worth revisiting.

Why App Webs Exist

The app web concept really exists for one reason only – to protect the host environment from malicious JavaScript while integrating with a site collection in the host. An app web gets its security users and groups plus things like available content types and fields from the host web. If you use a tool like SharePoint Manager you can see that an app web is contained by the host site collection. The key thing is that it is on a different domain.

If the host web's URL is something like https://contoso.sharepoint.com/sites/team, then the app web URL will be something like https://contoso-<appUid>.sharepoint.com/sites/team/MyApp.

This accomplishes two very important things:

  1. The cookies from the host web’s domain are not visible to the app web domain. This is very important because among the cookies for the host web’s domain is the user’s authentication cookie. Scripts can’t create a client context or issue REST messages against the host domain.
  2. The JavaScript code running in the pages from the app web is prohibited by default from issuing POST requests to the host domain.

Manage Web and Full Control

In my original post and in my offline conversations with Chris I pointed out that it was possible to provision to the host web as long as the app had Manage Web permissions to the host. This was true in the beta, but when the RTM bits became available, this was changed to require Full Control for any type of file that can contain JavaScript. So, you can add Word documents and text files with modest app permissions, but a Web Part page or master page requires Full Control.

Consider that an app with manage web permissions can read every bit of content on a site, manage users and permissions, and even delete content without having full control. So, what good is full control!? What does the requirement to have full control achieve?

It protects the tenant from the following scenarios…

An Easy Way to Hack a Farm

If you have the ability to add JavaScript to a SharePoint site, you can hack your environment with a little social engineering and help from an admin. This applies to any site that allows users to add JavaScript and is the scenario that we fear most when discussing cross-site scripting exploits.

  1. Add some JavaScript that tries to do something only an admin can do, e.g. make a user a site collection or even a farm admin.
  2. Call an admin and complain about the site not working and ask them to check it for you.
  3. Use your newly granted permissions to perform evil deeds.

You should understand that this attack doesn’t require anything more than the ability to put script into a content editor web part or to edit pages in SharePoint Designer.

Another Easy Way to Hack a Farm

Install, or get an admin to install, a farm solution that uses RunWithElevatedPrivileges and the application pool identity to perform evil deeds. In many poorly configured environments this identity will be a domain admin.

This might sound far-fetched, but how hard would it be to put a couple cool free tools on a site like CodePlex just to phish for farms?

How the App Web Protects You

The protection and benefit of the App Web concept should now be clear to you – prevention of cross-site scripting exploits.

How you can Protect Yourself

Use separate accounts to administer your farm. Do not browse your sites using a farm or tenant level admin account.

Should I Deploy Solutions to the Host Web?

Clearly you are taking risks if you install an app that requires Full Control and makes significant changes to the host web. Because of this, Microsoft does not allow apps that require this permission into the marketplace. If you are a vendor and you require this permission you will have to sell directly to your clients and use a side-install approach. My plan is to offer a marketplace-compatible app which deploys to the app web and a more advanced version that is sold directly.

However, the risk you take with a Full Control app is the same risk you accept by using SharePoint Designer or Content Editor Web Parts, and it is less than the risk you assume by installing a farm solution. The key decision from a security perspective comes down to a simple question of trust: do you trust the source of the app?

If the app is for internal use and you built it in-house, I see no reason to accept the fact that the app web strips so much functionality found in a host web. I feel the same way about trusted, reputable vendors.

P.S. A Note on Full Trust

You should be aware that under normal circumstances an app's permissions within a given context are constrained by the permissions of the user. To escalate to a higher level of permission the app must either impersonate a known user or create the client context using an app-only policy. Full trust does not equate to constant god-mode processing.

Furthermore, an app principal always has full control over its app web regardless of its permissions on the host web. However, it is always the case that a user’s permissions constrain the app’s permissions when it is acting on behalf of a user. Even in an app web, normal code execution is subject to the user’s rights.

P.P.S. Bugs in RTM

As of this writing, the permissions required to perform many kinds of actions in the on-prem version are defective: many operations that should not require Full Control do require it. Office 365 works as documented. Hopefully the issues will be resolved in a hot-fix soon.


Author: Doug Ware

An Architecture for Provider-Hosted SharePoint 2013 Apps on Azure

For the past several months I’ve been working on a complex app and last month I wrote a post called The SharePoint 2013 App Model is better than Farm Solutions based on my experiences. This post led to much discussion online, in person, and via email. These conversations have made it clear to me that if I want to continue blogging about app architecture I need to provide more detail about my specific architecture.

There is a great deal of confusion and misinformation out there, and I don’t want to get into point-by-point debates based on generalities and assertions. This is especially easy to do when discussing apps because ‘app’ is a meaningless generic term. The architecture discussed here is by no means the only way to build an app. In fact, it has very little in common with a SharePoint hosted app or a provider-hosted app delivered via an app part.

Before I go any further I should remind you that traditional solutions are easier to build and that apps have a learning curve that can’t be discounted. At no time should you take my statement that apps are better as meaning that they are easier to write. Most of the pieces of the architecture I am about to describe are things I won’t have to write again for subsequent apps, however they did take many months of work to design, refine, and build.

Operational Goals

I believe that enterprise architectures should have clear operational goals, independent of user-facing functionality, that the system meets. I've seen first-hand how a good architecture can yield serious competitive advantages. When I first began thinking about sandbox solutions that could be sold to Office 365 users I had a few questions that I wanted the architecture to answer.

  1. Entitlement – how do I deal with a customer who doesn’t pay but has the solution installed?
  2. Updateability – how do I keep (hopefully) thousands of instances of the app consistent with each other and how do I push new functionality?
  3. Protection of IP – if I give out a solution package or if I rely on JavaScript, how do I prevent trivially easy reverse engineering?
  4. Other devices – How can I design my code base to allow clients other than SharePoint? Is SharePoint a good platform for application services to a heterogeneous set of client technologies?

These are concerns of a SaaS vendor more than of in-house solutions, although #2 and #4 could be critical to an in-house app. Either way, if you add these requirements into the mix, a farm or sandbox solution suddenly becomes potentially much harder to build – and in the case of sandbox, you are dependent on a deprecated technology.


To meet these goals I settled on certain strategies and guidelines based on my previous experience with large distributed systems and with SaaS; yes, I had a life before SharePoint!

Embrace web technologies

This one means that I want to align to the current state of the art for web based systems. I want to use the most common libraries like jQuery and I want to exchange data using JSON. I want to avoid putting UI code into any of my services and keep the UI in the browser.

Minimize technology stack dependencies between subsystems

Pieces of the system should be as atomic as possible and they should have absolutely minimal dependencies on the technology used to implement other subsystems. I should be able to move them around and re-platform them if necessary. Most importantly, the architecture should allow major changes in one subsystem (such as a new version of SharePoint) with minimal impact on the other subsystems.


I dream of success and thousands of clients! I want as much centralized control as possible. Therefore my application services should centralize:

  • Entitlement
  • Versioning
  • Business logic
  • Data services
  • Instrumentation

This allows me to know when a client is having problems, to push updates as needed, and to handle many different scenarios like trial accounts, notifications specific to a client, and subscription based services. Furthermore, it protects my IP and makes piracy a non-issue.


Ultimately I want to support a variety of client devices and I also want my clients to keep their data so that their information stays secure and private. Therefore the static user interface components, i.e. web pages, are distributed and the data stays in the client’s environment, i.e. in SharePoint lists and libraries.

High-Level Architecture

The following diagram shows the high-level architecture and data flow. Notice that my pages do not talk directly to SharePoint with the exception of list views and the other functionality gained simply by being a page in a SharePoint site. The data layer and application services are in a web role and a worker role in Windows Azure.

The pages depend on the services for processing, and they do so by exchanging JSON via HTTP GETs and POSTs. There is no JavaScript CSOM or REST in the browser, which protects my IP. It also means I can reuse my HTML and UI JavaScript with other types of client technologies. The list views are an exception: I am not using any list view web parts; instead I am using my list view plugin. This is a dependency I am willing to accept because I want the app to look and feel exactly as it would if I wrote it as a farm solution.

Notice also that the JavaScript and the CSS are not stored in SharePoint, but are served by the application server. They are centralized. I can fix a bug in one spot and it immediately becomes available to every customer even if there are millions of installations. This also allows the system to serve page resources dynamically and enables such things as special prompts for a user whose subscription is about to expire.

It also means I can support any client technology that can consume HTML and JavaScript, such as Windows 8 and iOS.

You might worry that such a system would incur a big performance penalty. In fact the communication overhead is not significant on Azure. There is very little latency between my Azure services and Office 365. I assume they are on the same network.

App Launch Flow

I wrote earlier about Managing Identity and Context in Low-Trust Hybrid SharePoint 2013 Apps. The mechanism I described requires a certain flow when the user launches an app. This flow is also the key to updating the pieces which are distributed to the client’s site. A conceptual version of this flow is shown below.

Whenever the user launches the app the centralized system checks the version of the user’s installation. If necessary, the app server can make changes or additions to the deployment and add new lists, libraries, or pages. Again, this works with one client or with millions of clients.
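
In code, the version check at launch reduces to selecting the provisioning steps newer than the installation's recorded version. A sketch of that selection logic (the names are mine, not from the actual implementation):

```javascript
// Given the version recorded for a customer's installation and an
// ordered list of provisioning steps, return the steps still to run.
function stepsToApply(installedVersion, migrations) {
  return migrations.filter(function (m) {
    return m.version > installedVersion;
  });
}
```

Each selected step would then add or update the lists, libraries, or pages it is responsible for, and the installation's version is advanced once all steps succeed.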


Hopefully this post goes a long way toward framing the discussion about the pros and cons of apps compared to farm solutions. As I get closer to releasing the monster I'll show more concrete examples, but until then, if you need help or training with an app I'd love to hear from you!


Author: Doug Ware

Adding Custom Actions with JavaScript to the Host Web from an App

You can add custom actions to the host web using CAML in an App package, but there is a serious restriction: you can’t invoke any JavaScript. Liam Cleary noticed this limitation on twitter not long ago.

I soon replied…

I had the code lying around and so I made a demo for him using Napa. One of Napa’s cooler features is the ability to publicly share app code. The code also illustrates the use of the cross domain library to access the host web from the app web.

You can view the full sample here.

Happy coding!

Author: Doug Ware

Managing Identity and Context in Low-Trust Hybrid SharePoint 2013 Apps

SharePoint 2013 Apps written for Office 365 and on-premises installations using Windows Azure Active Directory rely on OAuth for authentication of the app and users of the app against SharePoint. Conceptually, the way this works is fairly simple, but in practice it can be complex when you are starting from scratch because there is some infrastructure and related code that you will almost certainly need to build to manage the various pieces of the puzzle. The SharePoint App project templates in Visual Studio include a file named TokenHelper.cs, which contains essential functionality for processing tokens and creating authenticated connections to SharePoint, but there are no out-of-box components for storing and managing the tokens with which TokenHelper helps.

This is a very high-level post that you can use as a basis for your solution to this problem. I included links to Microsoft’s documentation if you need more detailed information.

Initial Authentication Flow

Step 1 – App Redirect

When a user launches an app in SharePoint, the first step is an HTTP GET of a SharePoint page named appredirect.aspx (_layouts/15/appredirect.aspx). The GET includes a single parameter, the instance id. AppRedirect uses the instance id to fetch information from the app's manifest that the page uses to build the URL and form POST for the next step in the process. This information includes the app's ClientId, the app's start page URL, and the query string parameters that should be passed to the start page.

ClientId is a GUID that represents the app’s identity. It is used by tokenhelper.cs along with the ClientSecret to generate the tokens the app will require to talk to SharePoint. These values are created when the app is registered with SharePoint.

Step 2 – App Start Page

The app redirect page issues a POST to your app’s start page. In a hybrid app, this page is on your application server and it redirects to the App Web once it is finished processing and storing the authentication data. The POST contains quite a bit of important information and (for the most part) this is your only chance to grab it!

If your manifest specifies the Visual Studio default tokens, {StandardTokens}, the query string will contain two critical pieces of information: SPHostUrl and SPAppWebUrl.

  • SPHostUrl is the url of the web where the app is installed.
  • SPAppWebUrl is the url of the app web assuming there is one. If this parameter is not present, the app does not include an app web.

The POST’s form body contains the rest of the information. The most important parameter is SPAppToken. SPAppToken is specific to low-trust configurations. If you are using a high-trust configuration the SPAppToken token is missing. This fact causes a great deal of confusion because all of the original SDK samples assume low-trust and break when they attempt to get this value in a high-trust configuration.

SPAppToken is an encrypted string that contains many more parameters with which we are concerned. TokenHelper.cs uses the Microsoft.IdentityModel classes to decrypt SPAppToken with the ClientSecret. The decrypted and parsed SPAppToken contains several important pieces of information:

  • CacheKey – The cache key is an opaque (non-readable) string that is unique to the combination of user, app, and tenant: CacheKey = UserNameId + "," + UserNameIdIssuer + "," + ApplicationId + "," + Realm
  • AccessToken – The token required to create an authenticated connection to SharePoint, it expires in 12 hours.
  • RefreshToken – The token used to request a new AccessToken or RefreshToken, it expires in 6 months.

If you have these values, you can talk to SharePoint on behalf of the user.
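
For illustration, the CacheKey concatenation described above is literally just the following (a sketch using a hypothetical helper name; TokenHelper exposes the four component values after it validates the SPAppToken):

```javascript
// Compose the opaque cache key exactly as specified: user id, issuer,
// application id, and realm, comma-separated.
function buildCacheKey(userNameId, userNameIdIssuer, applicationId, realm) {
  return [userNameId, userNameIdIssuer, applicationId, realm].join(",");
}
```

Because every component is opaque, the resulting key identifies the user/app/tenant combination without revealing who the user is.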

Values you need to Persist and Why

In practice you will always need a mechanism to persist these key values. I use a combination of Windows Azure SQL Database for the tokens, the App Web’s property bag for the host url, and session cookies for my version of the CacheKey. This allows me to accommodate the following scenarios:

  1. Deep links – If a user enters my app via a link instead of from the app’s icon in the host site the traversal of appredirect and the app start page won’t happen. If there is a session cookie for the CacheKey everything is fine because the code can get the tokens from persisted storage which can be used to create a full context. However, if the cookie does not exist the app needs to redirect the user to appredirect to authenticate the app for the user. For the redirect to work the code needs the SPHostUrl and the instance id.
  2. Background processes – I use Azure worker roles for long running processes. To create a CSOM client context, the input to the job must include a valid cache key.
  3. Authentication to the provider hosted web site and services – My apps don’t actually store the cache key from SharePoint. They add additional information to the key and encrypt the result using a private certificate. The creation of this value requires the user to have successfully launched the app from SharePoint, so the user can be considered authenticated. By storing this value in the database and in the session cookie I add a layer of protection that I build on as part of my services infrastructure.
  4. Privacy – My apps never store any information that can be used to identify an individual user without help from the SharePoint site. If someone with malicious intent gets the content of this database it will do them no good without also breaking the encryption, knowing the app identity, client secret, and app web url.
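
For the deep-link case in scenario 1, the app has to reconstruct the appredirect URL itself from the persisted SPHostUrl and instance id. A sketch of that construction (the helper name is mine, and I am assuming the instance id travels in the instance_id query string parameter):

```javascript
// Build the appredirect.aspx URL used to re-launch the app and
// re-acquire tokens when a user deep-links into a page.
function buildAppRedirectUrl(spHostUrl, instanceId) {
  return spHostUrl.replace(/\/$/, "") +
    "/_layouts/15/appredirect.aspx?instance_id=" +
    encodeURIComponent(instanceId);
}
```

Redirecting the browser to this URL replays the normal launch flow, so the start page gets a fresh SPAppToken and can rebuild the session.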

Figuring this all out took me much time and effort. Hopefully this post can spare you from going through that, but if you would like some consulting or training on the subject I would love to hear from you!

Happy coding!

Author: Doug Ware