Receiving BrokeredMessages Instead of Strings with Service Bus Queue Triggers

When you create a new Azure Function with a Service Bus queue trigger, the initial run.csx takes a string as input and looks like this:
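
A minimal sketch of that default template, assuming the classic TraceWriter-based signature (the exact boilerplate varies a little between runtime versions):

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}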

The benefit of this is that the function infrastructure hides all of the complexity of the Service Bus from you, making everything nice and simple. However, I like to get the real message object instead of its payload, because the message's properties support a number of useful scenarios. Among these properties is the message's ContentType, which is useful when the bus delivers more than one type of message.

It isn't obvious from the documentation how to get the brokered message instead of a simple string, and there is a wrinkle involved if you like to do as I do and deliver the bulk of your functionality in compiled assemblies.

Scenario #1 – C# Script Only

As your function is being triggered by a Service Bus message, it makes sense that Microsoft.ServiceBus.dll is loaded by default. You don't need to do anything other than reference it and change the method signature; you can leave your function.json file alone. In this case I changed the binding in function.json so that the name is receivedMessage instead of myQueueItem, but that's only because I felt like it!

You can write your function as follows:
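
This sketch assumes the receivedMessage binding name mentioned above and does nothing more than log the ContentType:

#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage receivedMessage, TraceWriter log)
{
    // The full BrokeredMessage is available here, not just its string payload
    log.Info($"ContentType: {receivedMessage.ContentType}");
}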

Notice that the assembly import line does not include an extension. If you include one, you will get a compile error!

Scenario #2 – Compiled Assemblies

If you are using compiled assemblies, the story is a little more nuanced and also potentially more dangerous, because a version mismatch becomes possible.

Assemblies deployed to the function go into the bin folder below the run.csx file. Generally, what this entails is copying all of the build output from your project to bin. When you do this, it becomes possible to reference the assembly using the dll file extension as follows:
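
For example, with a hypothetical compiled library named FunctionTest.dll copied to bin, the reference can include the extension:

#r "FunctionTest.dll"

using FunctionTest;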

This compiles because the assembly is in bin. You can still leave the file extension out. But, should you?

To test this out, I made a little test assembly that references a very old version of Microsoft.ServiceBus.dll, version 1.8.0.0. It returns a string that contains the assembly name along with the message's content type.
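
A sketch of what such a test class might look like (the class and method names are illustrative, and the project itself is built against Microsoft.ServiceBus.dll 1.8.0.0):

using Microsoft.ServiceBus.Messaging;

namespace FunctionTest
{
    public class MessageInspector
    {
        public string Describe(BrokeredMessage message)
        {
            // Report which Microsoft.ServiceBus assembly actually got loaded at runtime
            var assemblyName = typeof(BrokeredMessage).Assembly.FullName;
            return $"{assemblyName} : {message.ContentType}";
        }
    }
}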

The calling function logs the assembly name and then the output of the compiled function.
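
Continuing the same sketch, the calling run.csx might look like this:

#r "FunctionTest.dll"
#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;
using FunctionTest;

public static void Run(BrokeredMessage receivedMessage, TraceWriter log)
{
    // The Microsoft.ServiceBus assembly the script itself is bound to...
    log.Info(typeof(BrokeredMessage).Assembly.FullName);
    // ...and the one the compiled assembly ends up using
    log.Info(new MessageInspector().Describe(receivedMessage));
}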

Somewhat surprisingly, this works and the output of the function’s compilation and execution looks like this:

As expected, everything is using the version of the assembly that was loaded by Azure Functions (3.0.0.0) and not the version referenced by the compiled project (1.8.0.0). I actually expected to get a runtime error or at the very least a warning!

If I remove the file extension, it still works, but at least this time I get a warning!

The Moral of the Story

That this works at all has a lot to do with the fact that the parts of the BrokeredMessage class I am using are compatible between versions 1.8 and 3.0. Had I written the test differently, it would not have worked.

There is clearly some danger here: a function developer needs to know when they are using assemblies that the runtime has already loaded and that might not match the build. The function compilation does not recompile the deployed assemblies and has no way to detect this potential runtime mismatch, but it can detect that what the C# Script file uses conflicts with what is in the bin folder, as long as you leave the file extension off of the reference.

–Doug Ware

An Azure Functions Design Pattern for Your Consideration

Without question, Azure Functions is my favorite new offering from Microsoft in 2016. One reason I like it so much is that it is extremely flexible, scalable, and inexpensive, making it useful in a wide range of scenarios. Part of that flexibility comes from the fact that you can create functions using your choice of nine different languages (many of them experimental as of this writing). Naturally, each of these languages has its own nuances.

Here at InstantQuick we have several functions in production based upon C#. Naturally, the documentation for C# is comparatively good, there is some best practices guidance, and there are tools for Visual Studio in preview. However, in each of these cases, the primary focus is C# Script, not C# compiled to assembly dll files. Fortunately, the documentation does describe how to load assemblies in a variety of ways.

Minimizing the Use of CSX

The file extension of a C# Script file is CSX. By default, a C# based function has a single CSX file named run.csx. You can, however, reference other C# Script files with the #load directive and share C# Script files between functions (see the small sketch after the list below). So, you can theoretically build a complex solution using nothing but C# Script, and for very small functions written to augment something like a Logic App, C# Script makes perfect sense. However, in our case, and I suspect in many others, we want to deliver the bulk of our functionality as built assemblies for a few important reasons.

  1. We are moving existing functionality previously developed in a traditional manner to functions
  2. Sometimes, functions aren’t an appropriate delivery vehicle and we want to host the functionality in traditional cloud services or on premises
  3. The tooling for CS files is, at the moment, much better than the tooling for CSX files
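
For completeness, loading a shared script from run.csx looks something like this (the path and file name are illustrative):

#load "..\Shared\helpers.csx"

// Classes and methods defined in helpers.csx are now available to this script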

Pattern for run.csx

Most of our run.csx files look like this:

#r "AppLaunch.dll"
#r "FunctionsCore.dll"
using System.Net;
using System.Configuration;
using System.Net.Http.Formatting;
using AppLaunch;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
   Log(log, $"C# HTTP trigger function processed a request! RequestUri={req.RequestUri}");
   var func = new AppLaunchHandler(req);
   func.FunctionNotify += (sender, args) => Log(log, args.Message);
   var appLauncherFunctionArgs = new AppLauncherFunctionArgs()
   {
      StorageAccount = ConfigurationManager.AppSettings["ConfigurationStorageAccount"],
      StorageAccountKey = ConfigurationManager.AppSettings["ConfigurationStorageAccountKey"]
   };
   return func.Execute(appLauncherFunctionArgs);
}

public static void Log(TraceWriter log, string message)
{
   log.Info(message);
}

The code does only five things:

  1. Loads the assemblies that do the actual work
  2. Receives the input from the function's trigger via its bindings – this includes a TraceWriter for logging
  3. Gets the configuration from the Function App's configuration settings
  4. Invokes the code that does the real work, passing the function input and the configuration values and returning its output
  5. Receives logging notifications as events and logs them via the TraceWriter

Avoiding Dependencies on Azure Functions

The very first function I wrote included a dependency on the Web Jobs SDK for logging. Since one of our primary needs is to be able to host the functionality outside of Azure Functions, that wasn’t something I could keep doing. The workers should be delivered as plain old class libraries with minimal dependency on the runtime environment. To that end I wrote a simple little base class that uses an event to raise notifications. This allows the hosting environment to deal with that information in whatever way is needed.

The base class and event delegate look like this:

using System;

namespace FunctionsCore
{
   public delegate void FunctionNotificationEventHandler(object sender, FunctionNotificationEventArgs eventArgs);
   
   public class FunctionNotificationEventArgs : EventArgs
   {
      public string Message { get; set; }
   }
   
   public class FunctionBase
   {
      public event FunctionNotificationEventHandler FunctionNotify;

      public void Log(string message)
      {
         FunctionNotify?.Invoke(this, new FunctionNotificationEventArgs { Message = message });
      }
   }
}

A subclass can then simply notify the caller of anything interesting via the Log function, as follows:

Log($"Error creating view {ex}");

How Much Logging is Too Much Logging?

The SDK’s best practices guidance notes that excessive amounts of logging can slow your function down. I suppose there may be times when that is a concern, but most of our functions are not interactive and still execute in the range of sub-second to a few seconds. The cost savings we get from using a consumption plan instead of dedicated cores is so dramatic that whatever extra CPU time the logging causes is a non-issue. So, my opinion is that minimizing the logging is a premature optimization and not great advice.

Deployment

The final piece of the puzzle is how the deployment works. The current preview tools for Visual Studio are not especially helpful when it comes to working with regular old C#, and that is being generous. Instead, we use plain old class library projects, Visual Studio's Task Runner Explorer with the Command Task Runner extension, and PowerShell scripts. The full script, which can also create the Function App via Azure Resource Manager, is a work in progress; when it's done, I'll write another blog post and publish the project template to GitHub.

–Doug Ware