Global Office 365 Developer Bootcamp Atlanta is Saturday Nov 3, 2018

The Global Office 365 Developer Bootcamp Atlanta is this Saturday! It’s not too late to register as there are a few slots left before we sell out.

You can register here:

The Global Office 365 Developer Bootcamp is a free, one-day, hands-on training event led by Microsoft MVPs with support from Microsoft and local community leaders. Developers worldwide are invited to attend a local bootcamp to learn the latest on the Office 365 platform, with topics ranging from Microsoft Graph and the SharePoint Framework to Microsoft Teams, Office Add-ins, Connectors, and Actionable Messages. Developers can then apply what they learn to their existing products or solutions right away, or begin planning how to apply it to future projects.

Watch the video to hear from Jeff Teper and Microsoft MVPs on 2018 Global Office 365 Developer Bootcamp.

Technologies covered: Microsoft Graph, Microsoft Teams, Office Add-ins, Connectors and Actionable Messages, and more. To be successful in this workshop, you should have a general understanding of Office 365, SharePoint, and Microsoft Teams, and the ability to code in C# or JavaScript.

Come Prepared:

  • You will need to bring your own laptop. We’ll help with any tools you need to download and install to get you up and running.
  • An Office 365 Developer Tenant (free) to make the most out of these sessions.
  • A basic understanding of Office 365 and Microsoft Azure is recommended.

Bootcamp Agenda (each session provides a brief topic intro and Hands on Labs):

8:00am – Doors Open – Registration, Coffee, Networking
8:45am – 9:00am Welcome and Introduction to the Global Office 365 Developer Bootcamp
9:00am – 12:00pm Morning Sessions 
12:00pm – 1:30pm Lunch 
1:30pm – 4:30pm Afternoon Sessions 
4:30pm – 5:00pm Closing

Sessions: Attendees can choose from the following sessions (each session is repeated in the AM and PM slots)

  1. Microsoft Teams Apps – Tabs, Connectors and Bots
  2. Microsoft Graph & Actionable Messages
  3. Building Office Add-ins with Modern JavaScript

Atlanta Code Camp 2018 is Saturday!

This is always a great event and I am proud to be part of it. We would love to see you there at Kennesaw State University, Marietta Campus!

Our website is here – and you should go there right now and register!

Can’t make it?

If you are into Office development, please register for the Global Office 365 Developer Bootcamp (Atlanta Edition) on Nov 3.

–Doug Ware

Understanding Azure Functions with Pictures of Lightning

Consider the Logo

The Azure Functions logo is a lightning bolt.

This is how lightning works in the real world according to this fourth grade science text.

Much like the real world, Azure Functions can be described as an environment. We call this the Azure Functions Runtime.

You can think of the system that creates lightning as a function of the environment.

The function happens in the environment when the right conditions are met, causing an event that triggers the function (such as a file being uploaded to storage).

A binding is what provides the information from the triggering event to a function instance. (OK, this analogy is a bit of a stretch.)

Execution is when it all comes together and the function runs.



How Azure Consumption Pricing Saves Money Compared to Buying Servers

You pay for the lightning bolts, not for the sky. With consumption-based pricing clear skies cost nothing but with infinite* scalability you can simulate Harvey AND Irma if needed.

Five Tips for Organizing Functions in Azure Functions

A function is the primary concept in Azure Functions. However, in terms of configuration and deployment, the primary concept is the Azure Function App. A Function App is the runtime host that contains one or more functions; it provides the runtime configuration and application settings shared by the individual functions that collectively make up the app.

The way you organize your function apps directly affects the performance, maintainability, and cost of the solutions built on them.

There really are no hard and fast rules for deciding how to organize your functions into function apps, but this post covers some of the things I personally take into account when I build my own, in no particular order.

1… First and foremost, aim for strong cohesion and loose coupling

Cohesion means that the functions belong together. This one is easy! A function app with a grab bag of unrelated functions is a bad idea. With consumption-based plans there is a negligible cost difference between one function app with five functions and five function apps each containing a single function. In either case you are charged based on invocations. However, it is possible for two functions to be logically cohesive and still be deployed as separate function apps for reasons related to runtime performance or scalability.

Coupling is the degree to which the functions in a function app depend on each other, and also the degree to which a function depends on the runtime host. In both cases the ideal is none, because that means you can deploy a function wherever you want and move it around easily, not just between function apps but also to other types of runtime environments, including AWS Lambda and traditional servers. However, if functions are highly cohesive, it makes sense to accept some coupling in exchange for shared management and configuration. There is no generic advantage to a grab-bag function app, but there are disadvantages to isolating related functions into different function apps.

2… Organize according to usage patterns and performance profiles

Azure Function invocation is based on triggers: when something happens, the event fires the trigger and the trigger runs the function. That could be a popular application calling a web service while a user eagerly waits for a reply, or it could be an event type that happens frequently but not continuously and runs a function that takes minutes to complete and uses significant resources.

The Scale Controller monitors the rate of events and determines whether to scale in or out. If functions with completely different runtime characteristics like these two share a function app, the scaling decision is unlikely to be optimal for both. Consider that, if the combination of unrelated performance profiles is difficult for software, how bad is it for the poor human!

I feel pretty strongly that Application Insights is a must-have addition to any function app. One thing I like about it is how useful it is out of the box.

Mixing things that are fast with things that are slow makes it much less useful: are those spikes expected because the slow thing is running, or is the thing that should be fast actually slow?

3… Don’t Deploy Functions Together Unless You Can Accept the Requirement to Version Them Together

If you bundle functions together into a function app, always version and test the functions as a set. Build them together and deploy them together.

This is my experience with C# and NuGet. I don’t know whether the same advice applies to JavaScript (npm) or Java hosts, but I imagine they have similar issues.

Imagine that you have multiple functions in a function app and you are using precompiled functions, with each function implemented in its own .NET class library. You use NuGet, and some of the function class libraries have shared indirect dependencies that could be different versions. In a normal .NET executable, such as a console application or a web site, the final program folder contains an app.config or web.config file that specifies how to handle these conflicts via a .NET feature known as Assembly Binding Redirection.

Your function app might be broken from the start or work fine at first, but stop working after updating a NuGet package in one of the function assemblies with an exception –

“Could not load file or assembly ‘…’ or one of its dependencies. The system cannot find the file specified.”

or maybe

“Exception has been thrown by the target of an invocation. : Method not found:

The elided assembly name in both cases is whichever of the conflicting assemblies loads second.

The easiest solution is to not have a conflict at all by resolving to a single version everywhere. If that can’t be done, move the conflicting functions into their own function apps.
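For context, here is roughly what Assembly Binding Redirection looks like in a traditional app’s app.config or web.config; the assembly name, public key token, and version numbers below are illustrative, and part of the problem above is that a function app gives you no equivalent place to put this:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Illustrative shared dependency pulled in at two different versions -->
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <!-- Force every reference to resolve to the single deployed version -->
        <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```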

4… If you require a traditional app service plan, use a traditional app service plan

A consumption plan is not the best fit for every scenario. If you need a job that runs longer than 10 minutes, or a web service that should always have warm, loaded instances (so a user never waits for loading and scaling), use an App Service Plan instead of a consumption plan. In most cases it will not be worth the time to work around the known limits of the consumption plan. If the plan has significant free capacity, consider hosting other compute functionality in it, but remember tip #2.


5… If you need auto-scaling or you don’t have a requirement that prevents it, use a consumption plan

It is probably waaaay cheaper and more likely to be optimal than whatever manual tuning you try.

Remote Debugging Azure Functions from Visual Studio Stops Working

…and what you can learn from it

“The first thing you must keep in mind when you start using a serverless computing platform like Azure Functions is that everything is different, especially when things are obviously the same.”
–Abraham Lincoln, 1863

If you develop Azure Functions with Visual Studio it is easy to debug a function app running in the cloud. All you have to do is right-click the function app in Server Explorer and you are in business.

Although it is possible (and ultimately easier in many cases) to debug locally, remote debugging works well and is very convenient. If you chose a consumption plan to make it basically free (you totally did, you know you did!), eventually you may see the dreaded message “The breakpoint will not currently be hit. No symbols have been loaded for this document.”

You and I both know that your first reaction to this problem will be to open your favorite search engine which might lead you to a site named Stack Overflow where you might find the following article: visual studio 2017 remote debugging azure api app: “The breakpoint will not currently be hit. No symbols have been loaded for this document.”

You will try the suggestion and it might work. You will also see that it was reported as a bug and fixed, until you check and realize you already have that update or a later one and think, “this is a bug and they broke it again!”

It isn’t a bug and it probably won’t be fixed!

A slightly different problem that might happen for the same reason is that the breakpoint will light up in the Visual Studio editor, but execution won’t stop, or it will stop some of the time but not others. Maybe you’ll restart the function app and it will work for a while.

“Ah-hah! I have found a bug,” you will think, and head to github to enter an issue such as this one: Remote debugging woes #538.

The Cause and the Lessons Learned

My experience with my first ‘development’ function app came from the fact that I started with one function and kept adding more. Back then the tools were pretty limited and it was (again) an easy thing to do. I had a variety of triggers in that app, and the one I was working on was long running and used much more capacity than a short-running web service. If during my development testing I put it under load by sending more than a few messages into the queue at a time, Azure Functions would very helpfully scale my function by creating more instances automatically. Once this happens there is no way to attach the debugger to the right instance other than by sheer luck, because in a default configuration there is no way to steer all requests to a single instance. Furthermore, there isn’t any way for the Azure Functions host to identify that a particular message you sent should be treated uniquely and routed to the one instance, out of however many there are, that you are debugging.

There are a few lessons I learned as a result of this experience that I can share. Here are my top 5.

  1. Never forget that in the default state of a consumption plan, there can (and probably will) be multiple instances.
  2. If you have more than one function in a function app, one of them might be causing the app to scale even if the one you want to debug has no or very little traffic.
  3. Good logging plus Application Insights are things you should strongly consider using throughout your application lifecycle: logging, because you can’t assume you will be able to remotely debug all scenarios in development or even locally; and Application Insights, because it makes it possible to understand and visualize complex, heavy-load runtime behaviors.
  4. There are a number of configuration ‘knobs’ you can turn that affect scaling behavior and the number of instances for a given function. I will be writing about this in more depth in a future post.
  5. There are a lot of things to consider when combining functions in a single app. I will also be writing about this in more depth in a future post.
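As a concrete illustration of the “knobs” in lesson 4 (and a handy trick for the debugging problem above): on a v1 consumption plan, the unsupported-but-widely-used app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=1 pins the app to a single instance, which also makes remote debugging predictable, and host.json can limit how greedily each instance pulls work. The names and values below reflect the v1 schema and should be treated as illustrative; check the current documentation before relying on them:

```json
{
  "serviceBus": {
    "maxConcurrentCalls": 16
  },
  "queues": {
    "batchSize": 8
  }
}
```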

Thanks for reading!

–Doug Ware

P.S. Application Insights makes it easy to see how many instances are active. However, if you are not using Application Insights and prefer to bang rocks together, you can use Process Explorer from the Platform Features tab of the function app in the portal. If you click it and it takes a very long time to load, that is a good sign that there are a bunch of instances, but eventually it will usually display a list.



Some Real Performance Numbers from Some Real Azure Functions

I’ve been using Azure Functions in various production systems since late 2016. If you wonder how real apps perform on a consumption plan (pay as you go based on usage) with auto-scaling, I have some data for you!

First up is the most commonly executed function in my busiest production installation of Azure Functions for SharePoint. The EventDispatch function receives event notifications from various SharePoint Online tenants for a few different apps. When something of interest happens in a SharePoint site, Microsoft sends the EventDispatch function an HTTP POST request. EventDispatch processes this message and drops the result into the Service Bus queue for whichever app created the event receiver subscription.

Over the last 8 days or so this function was triggered around 40,000 times a day at an error rate of about 4 per day.

SharePoint waits for a response from the event receiver for some events, and a user experiences this as a delay when that kind of event fires. Therefore, EventDispatch needs to run quickly. One thing people new to Azure Functions sometimes notice with consumption-based function apps is that functions triggered by web requests are slow on the requests that cause Azure to load the function. Fortunately, the traffic to this function is steady. This means the startup penalty happens in a tiny percentage of requests and performance is excellent.

The following charts show what normal traffic looks like. Scout’s honor! I opened Application Insights for this app and took a screenshot of that point in time; sometimes there is much more traffic and other times there is much less.

This function app and its associated storage account cost me around $10 a month. The storage account is around 60% of the total cost, and there is room for some optimization.

Next up is application-specific functionality hosted in a different function app in a different Azure tenant. This particular function app is a bit of a grab bag in terms of individual functions and has a lot of room for improvement, not the least of which is the mingling of a long-running process that introduces a few other issues which are a subject for another day and another post.

The long-running process fires periodically based on scheduled message delivery, and that schedule causes all of the messages to begin delivery at the same time. On top of that, the way I wrote it is dumb and the job takes much longer to execute than it should. So, during the periods when it runs, it forces the function app to scale out onto a lot of servers. The numbers are pretty grim.

That high server count represents instances of the app, and it is wasted consumption that costs money. One thing I have noticed is that instances are slow to unwind.

Nevertheless, this badly written function inside this grab-bag app still only costs around $45 a month. I haven’t fixed it because the cost is too low to justify the effort.

Task Runner Explorer is the Best Visual Studio Feature You Probably Aren’t Using

TL;DR – There is a handy feature in VS 2015 called Task Runner Explorer that you can use to run PowerShell or batch commands to do just about anything. You can also bind these to build events.

A task runner is a program that runs tasks. If you’ve been doing much web development these past couple of years you are probably familiar with this concept and popular task runners like Grunt and Gulp. In fact one or both of these might be essential to your development workflow. And, since many web developers consider these to be essential tools, the Visual Studio team released the Task Runner Explorer extension for Visual Studio 2013 and later made Task Runner Explorer an out of box feature in Visual Studio 2015.

If you aren’t aware that this feature exists, you aren’t alone! I took a poll on twitter.

<sarcasm>I was a bit surprised by this as the feature is prominently available by going to View | Other Windows | Task Runner Explorer.</sarcasm>

JavaScript Task Runner? No Thanks!

If your work isn’t mostly JavaScript, using a JavaScript-based task runner probably sounds pretty unappealing. Happily, there is an extension called Command Task Runner that supports .exe, .cmd, .bat, .ps1, and .psm1 files.

We use this in the Azure Functions for SharePoint project to automate deployment at build time by binding the script to the build event.

The deploy script is complicated, but there are a couple of others that are pretty simple and are not bound to any events. We run them manually, and I think they best illustrate why this tool belongs in your everyday toolkit.

For example, each Azure Functions for SharePoint client relies on a config.json file. It would be an error-prone pain to create them by hand or by copying an existing configuration, and so we have a script that creates a new config and puts it on the clipboard:

$scriptdir = $PSScriptRoot

#Load the assembly that defines ClientConfiguration
#(path assumed to be relative to the script; adjust for your build output)
Add-Type -Path "$scriptdir\AzureFunctionsForSharePoint.Core.dll"

$config = New-Object AzureFunctionsForSharePoint.Core.ClientConfiguration

#Pretty print output to the PowerShell host window
ConvertTo-Json -InputObject $config -Depth 4

#Send to clipboard
ConvertTo-Json -InputObject $config -Depth 4 -Compress | clip

When a new client config.json is needed, all one must do is run the command from Task Runner Explorer.

Pretty cool eh?

–Doug Ware

Introducing Azure Functions for SharePoint

I’m excited to announce the first public release of Azure Functions for SharePoint, a powerful but inexpensive to operate open source backbone for SharePoint add-ins. We’ve been using Azure Functions in production for a while now, and I love it!

I’ll be speaking about Azure Functions next Saturday, January 21, 2017 at Cloud Saturday Atlanta. You should come!

About Azure Functions for SharePoint

AzureFunctionsForSharePoint is a multi-tenant, multi-add-in back-end for SharePoint add-ins built on Azure Functions. The goal of this project is to provide the minimal set of functions necessary to support the common scenarios shared by most SharePoint provider hosted add-ins cheaply and reliably.

Features include:

  • Centralized Identity and ACS token management
  • Installation and provisioning of add-in components to SharePoint
  • Remote event dispatching to add-in specific back-end services via message queues including
    • App installation
    • App launch
    • SharePoint Remote Events

Navigating the Documentation

These documents consist of articles that explain what the functions do, how to set up the hosting environment, and how to use the functions in your add-ins, plus API documentation for .NET developers linked to the source code in GitHub.

A Note on Terminology

These documents use the term client to refer to a given SharePoint add-in. A client is identified by its client ID, the GUID that identifies the add-in to ACS and appears in the add-in’s AppManifest.xml.


There are three functions in this function app.

  1. AppLaunch
  2. EventDispatch
  3. GetAccessToken

Setup Guide

We’re working on full automation with an ARM template, etc.; the Visual Studio solution includes a PowerShell script you can use with Task Runner Explorer and Command Task Runner. Until then, create a function app and copy the contents of this zip file into the function app’s wwwroot folder.

Configuring the Function App

Until the automation is fully baked, you can use this video to guide you through the relatively easy setup of the function app.

Configuring SharePoint Add-ins to use the Function App

Azure Functions for SharePoint is multi-tenant in two senses: it can service add-ins installed broadly across SharePoint Online, and the back-end processes that respond to client-specific events in SharePoint, or that rely on Azure Functions for SharePoint for security token management, can be located anywhere with a connection to the Internet.

See the Client Configuration Guide for more information.

Using the Function App to Support Custom Back-ends

It is possible to use Azure Functions for SharePoint to deliver pure client-side solutions, i.e. HTML/JS. However, many add-ins must support scenarios that are difficult or impossible to achieve through pure JavaScript. Azure Functions for SharePoint supports custom back-ends in two ways:

  1. Notification of add-in and SharePoint events via Azure Service Bus queues via the EventDispatch Function
  2. A REST service that provides security access tokens for registered clients via the GetAccessToken Function

In both cases the client back-end receives all the information it needs to connect to SharePoint as either the user or as an app-only identity with full control. The function app does the actual authorization flow and its client configuration is the only place where the client secret is stored.

Your custom back-ends can live anywhere from the same Function App where you deployed Azure Functions for SharePoint to completely different Azure tenancies or on-premises servers. All that is required is that the back-end can read Azure Service Bus Queues and access the REST services via the Internet. Aside from these requirements, the back-end can run on any platform and be written in any language.

That said, if you are using .NET, this project includes an assembly named AzureFunctionsForSharePoint.Common that you can use to make things even easier!

API Docs

For complete documentation of the Azure Functions for SharePoint API, see the API Guide.

Want to Contribute to this Project?

We’re still working on that too, but please reach out to me if you want to help!


Receiving BrokeredMessages Instead of Strings with Service Bus Queue Triggers

When you create a new Azure Function with a Service Bus queue trigger, the initial run.csx takes a string as input and looks like this:
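I haven’t reproduced the exact generated file here, but the v1 template is essentially this (a string in, a log line out):

```csharp
public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
```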

The benefit of this is that the function infrastructure hides all of the complexity of the Service Bus from you, making everything nice and simple. However, I like to get the real message object instead of just its payload, because the message’s properties support a number of useful scenarios. Among these is the message’s ContentType, which is useful when the bus delivers more than one type of message.

It isn’t obvious from the documentation how to get the brokered message instead of a simple string, and there is a wrinkle involved if, like me, you deliver the bulk of your functionality in compiled assemblies.

Scenario #1 – C# Script Only

Since your function is triggered by a service bus message, it makes sense that Microsoft.ServiceBus.dll is loaded by default. You don’t need to do anything other than reference it and change the method signature, and you can leave your function.json file alone. In this case I changed the binding in function.json so that the name is receivedMessage instead of myQueueItem, but that’s only because I felt like it! :)

You can write your function as follows:
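A minimal sketch of that change, assuming the v1 host where BrokeredMessage lives in the auto-loaded Service Bus assembly and the binding name is receivedMessage:

```csharp
#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage receivedMessage, TraceWriter log)
{
    // The whole message is available now, not just its payload
    log.Info($"ContentType: {receivedMessage.ContentType}");
    log.Info($"Body: {receivedMessage.GetBody<string>()}");
}
```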

Notice that the assembly import line does not include an extension. If you include one, you will get a compile error!

Scenario #2 – Compiled Assemblies

If you are using compiled assemblies, the story is a little more nuanced and also potentially more dangerous because the potential for a version mismatch exists.

Assemblies deployed to the function go into the bin folder below the run.csx file. Generally, this entails copying all of the build output from your project to bin. When you do this, it becomes possible to reference the assembly using the dll file extension as follows:
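For example, with a hypothetical worker assembly named ServiceBusVersionTest.dll sitting in bin, the reference line can name the file directly:

```csharp
#r "ServiceBusVersionTest.dll"  // resolved from the function's bin folder
```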

This compiles because the assembly is in bin. You can still leave the file extension out. But, should you?

To test this out I made a little test assembly that references a very old version of Microsoft.ServiceBus.dll. It returns a string that contains the assembly name along with the message’s content type.

The calling function logs the assembly name and then the output of the compiled function.

Somewhat surprisingly, this works and the output of the function’s compilation and execution looks like this:

As expected, everything is using the version of the assembly that was loaded by Azure Functions and not the older version referenced by the compiled project. I actually expected to get a runtime error or at the very least a warning!

If I remove the file extension, it still works, but at least this time I get a warning!

The Moral of the Story

That this works at all has a lot to do with the fact that the parts of the BrokeredMessage class I am using are compatible between versions 1.8 and 3.0. Had I written the test differently, it would not have worked.

There is clearly some danger here: a function developer has to know when they are using assemblies that will already be loaded by the host and might not match their build. Function compilation does not recompile the deployed assemblies and has no way to know about this potential runtime mismatch, but it can detect that what the C# Script file uses conflicts with what is in the bin folder, as long as you leave the file extension off of the reference.

–Doug Ware

An Azure Functions Design Pattern for Your Consideration

Without question, Azure Functions is my favorite new offering from Microsoft in 2016. I like it so much because it is extremely flexible, scalable, and inexpensive, making it useful in a wide range of scenarios. Part of that flexibility is that you can create functions using your choice of nine different languages (many of them experimental as of this writing). Naturally, each of these languages has its own nuances.

Here at InstantQuick we have several functions in production based on C#. Naturally, the documentation for C# is comparatively good, there is some best practices guidance, and there are tools for Visual Studio in preview. However, in each of these cases the primary focus is C# Script, not C# compiled to assembly dll files. Fortunately, the documentation does describe how to load assemblies in a variety of ways.

Minimizing the Use of CSX

The file extension of a C# Script file is CSX. By default a C#-based function has a single CSX file named run.csx. You can, however, reference other C# Script files and share C# Script files between functions. So you can theoretically build a complex solution using nothing but C# Script, and for very small functions written to augment something like a Logic App, C# Script makes perfect sense. However, in our case, and I suspect in many others, we want to deliver the bulk of our functionality as built assemblies for a few important reasons.

  1. We are moving existing functionality previously developed in a traditional manner to functions
  2. Sometimes, functions aren’t an appropriate delivery vehicle and we want to host the functionality in traditional cloud services or on premises
  3. The tooling for CS files is, at the moment, much better than the tooling for CSX files

Pattern for run.csx

Most of our run.csx files look like this:

#r "AppLaunch.dll"
#r "FunctionsCore.dll"

using System.Net;
using System.Configuration;
using System.Net.Http.Formatting;
using AppLaunch;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    Log(log, $"C# HTTP trigger function processed a request! RequestUri={req.RequestUri}");
    var func = new AppLaunchHandler(req);
    func.FunctionNotify += (sender, args) => Log(log, args.Message);
    var appLauncherFunctionArgs = new AppLauncherFunctionArgs()
    {
        StorageAccount = ConfigurationManager.AppSettings["ConfigurationStorageAccount"],
        StorageAccountKey = ConfigurationManager.AppSettings["ConfigurationStorageAccountKey"]
    };
    return func.Execute(appLauncherFunctionArgs);
}

public static void Log(TraceWriter log, string message)
{
    log.Info(message);
}

The code does only five things:

  1. Loads the assemblies that do the actual work
  2. Receives the input from the function’s trigger via its bindings, including a TraceWriter for logging
  3. Gets the configuration from the Function App’s application settings
  4. Invokes the code that does the real work, passing in the function input and the configuration values and returning its output
  5. Receives logging notifications as events and logs them via the TraceWriter

Avoiding Dependencies on Azure Functions

The very first function I wrote included a dependency on the Web Jobs SDK for logging. Since one of our primary needs is to be able to host the functionality outside of Azure Functions, that wasn’t something I could keep doing. The workers should be delivered as plain old class libraries with minimal dependency on the runtime environment. To that end I wrote a simple little base class that uses an event to raise notifications. This allows the hosting environment to deal with that information in whatever way is needed.

The base class and event delegate look like this:

using System;

namespace FunctionsCore
{
    public delegate void FunctionNotificationEventHandler(object sender, FunctionNotificationEventArgs eventArgs);

    public class FunctionNotificationEventArgs : EventArgs
    {
        public string Message { get; set; }
    }

    public class FunctionBase
    {
        public event FunctionNotificationEventHandler FunctionNotify;

        public void Log(string message)
        {
            FunctionNotify?.Invoke(this, new FunctionNotificationEventArgs { Message = message });
        }
    }
}

A subclass can then simply notify the caller of anything interesting via the Log function, as follows.

Log($"Error creating view {ex}");
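To show the whole pattern end to end, here is a self-contained console sketch. ViewCreator and the console host are hypothetical stand-ins for a worker class library and the function host, and the base class is repeated so the sample compiles on its own:

```csharp
using System;

namespace FunctionsCore
{
    public delegate void FunctionNotificationEventHandler(object sender, FunctionNotificationEventArgs eventArgs);

    public class FunctionNotificationEventArgs : EventArgs
    {
        public string Message { get; set; }
    }

    public class FunctionBase
    {
        public event FunctionNotificationEventHandler FunctionNotify;

        // Raise the event if anyone is listening; otherwise logging is a no-op
        public void Log(string message) =>
            FunctionNotify?.Invoke(this, new FunctionNotificationEventArgs { Message = message });
    }
}

namespace Demo
{
    // Hypothetical worker delivered as a plain class library with no host dependency
    public class ViewCreator : FunctionsCore.FunctionBase
    {
        public void CreateView(string name)
        {
            // ...real work would happen here...
            Log($"Created view {name}");
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var worker = new ViewCreator();
            // The host decides what logging means: run.csx forwards to the TraceWriter,
            // while this console host just writes to stdout
            worker.FunctionNotify += (sender, args) => Console.WriteLine(args.Message);
            worker.CreateView("TestView");
        }
    }
}
```

In run.csx the host wires FunctionNotify to the TraceWriter instead of the console, exactly as the Run method shown earlier does with its Log helper.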

How Much Logging is Too Much Logging?

The SDK’s best practices guidance notes that excessive amounts of logging can slow your function down. I suppose there may be times when that is a concern, but most of our functions are not interactive and still execute in the range of sub-second to a few seconds. The cost savings we get from using a consumption plan instead of dedicated cores is so dramatic that whatever extra CPU time the logging causes is a non-issue. So, my opinion is that minimizing the logging is a premature optimization and not great advice.


The final piece of the puzzle is how the deployment works. The current preview tools for Visual Studio are not especially helpful when it comes to working with regular old C#, and that is being generous. Instead, we use plain old class library projects, Visual Studio’s Task Runners with the Command Task Runner, and PowerShell scripts. The full script that can also create the Function App via Azure Resource Manager is a work in progress, but when it’s done, I’ll write another blog post and publish the project template to GitHub.

–Doug Ware

Powering SharePoint customizations…