Nasty SharePoint Workflow Bug

First off, thanks to my friend Kirk Allen Evans for his help with this problem on a Saturday! Microsoft is lucky to have him. In the process he created a nice screencast that you can watch to set the background. He doesn’t encounter the bug, but as it is a threading issue he might have simply been lucky (or I unlucky; threading bugs are like that), or (I’m hoping) he has a hotfix that I lack.

In some situations, a workflow will enter a non-error but unrecoverable state that looks as if an OnTaskCreated workflow activity did not fire even though the task exists. Sample code that reproduces the problem in my environment on both MOSS SP1 and MOSS SP1 + Infrastructure Update is located here. If the problem occurs, the workflow will not continue processing beyond the OnTaskCreated activity. You can see reports of this issue as experienced by others here. Scroll down to the bottom of the post for a workaround.

I have a moderately complex workflow that starts three different tasks. A simplified version is shown below.

Intermittently, the workflow will stop working with history that makes it look like the workflow failed to handle one or more OnTaskCreated activities as shown below.

Notice that there is no entry for Task 1 Created (nor will there ever be). This workflow is toast and will not complete.

It turns out that the problem is actually in the workflow manager. If you step through the workflow, you will see every OnTaskCreated activity light up in the designer, but the runtime will not invoke the method. If you look at the error log, you will see two exceptions:

RunWorkflow: System.ArgumentException: Item has already been added. Key in dictionary: 'c6923528-0908-4d75-86ed-f58342ea507e' Key being added: 'c6923528-0908-4d75-86ed-f58342ea507e'
   at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)
   at System.Collections.Hashtable.Add(Object key, Object value)
   at System.Collections.Hashtable.SyncHashtable.Add(Object key, Object value)
   at Microsoft.SharePoint.Workflow.SPWorkflowManager.TrackCreatedInstance(Guid trackingId, SPWorkflow workflow)
   at Microsoft.SharePoint.Workflow.SPWorkflowManager.RunWorkflowElev(SPWorkflow originalWorkflow, SPWorkflow workflow, Collection`1 events, SPRunWorkflowOptions runOptions)

RunWorkflow: System.Workflow.Activities.EventDeliveryFailedException: Event "OnTaskCreated" on interface type "Microsoft.SharePoint.Workflow.ITaskService" for instance id "c6923528-0908-4d75-86ed-f58342ea507e" cannot be delivered. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at Microsoft.SharePoint.Workflow.SPWorkflowHostServiceBase.LoadInstanceData(Guid instanceId, Boolean& compressedData)
   at Microsoft.SharePoint.Workflow.SPWinOePersistenceService.LoadWorkflowInstanceState(Guid instanceId)
   at System.Workflow.Runtime.WorkflowRuntime.InitializeExecutor(Guid instanceId, CreationContext context, WorkflowExecutor executor, WorkflowInstance workflowInstance)
   at System.Workflow.Runtime.WorkflowRuntime.Load(Guid key, CreationCont…

Apparently, there is a threading issue in the workflow manager. When it happens, the workflow receives notice that the event is firing, but it can’t load the instance data.

I’m hoping that this is a known issue and that there is a hotfix available. I’ll follow up as I learn more.

In the meantime, you can work around this problem by simply avoiding OnTaskCreated. The requirement that led me down this path is to set up permissions on the tasks beyond what the SpecialPermissions property allows (more on that some other day). I will meet that requirement by using a list event handler and applying the permissions outside the workflow via the handler.
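
To illustrate, here is a minimal sketch of such a list event receiver, assuming it is attached to the workflow’s Tasks list. The class name, the account, and the role definition are placeholders you would replace with your own requirements.

using Microsoft.SharePoint;

public class TaskPermissionReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem task = properties.ListItem;
        SPWeb web = properties.OpenWeb();

        //Break inheritance so the task can carry its own permissions.
        if (!task.HasUniqueRoleAssignments)
        {
            task.BreakRoleInheritance(false);
        }

        //Grant a placeholder account Contribute rights on the task.
        SPUser user = web.EnsureUser(@"DOMAIN\someuser");
        SPRoleAssignment assignment = new SPRoleAssignment(user);
        assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Contribute"]);
        task.RoleAssignments.Add(assignment);
    }
}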

Good luck!

–Doug Ware

Code Signing and Strong Names

This second document, from way back in 2002, talks about code signing and strong names. Visual Studio has much better support for key generation these days, but the discussion of delay signing is kind of interesting. This one is redacted because it was part of a memo I wrote to a client, but I think it still has relevance in 2008, although you may not be as well versed in COM as the people I wrote this for over six years ago.

Edit: 8/12/2008 – How embarrassing, I included some content from MSDN when I put this memo together so long ago. You can read the central portion about the nature of strong names here. I have no idea what other content in this post is unattributed, but if you see something in this post that you think came from elsewhere, it probably did. Please let me know and I’ll give credit where credit is due. I thought about just deleting the post, but it’s an important subject and I think most of the content at the top and the bottom is original. That said, many brain-cells have died in the intervening years. (Truth be told, I’d forgotten about this memo, but found it using desktop search while looking for the one on Dispose.)

–Doug Ware

The list of differences between COM and .NET development is long. .NET is a new generation of technology, not an evolution of COM, and it was designed to resolve many of the frustrating aspects of COM development. One of these frustrations is the situation known colloquially as "DLL Hell".

COM Components are Global to the Machine and have Identity…

COM components are, generally speaking, always global to the machine on which they are installed. When a new program is installed, the installer registers each component in the Windows registry. A typical entry contains the component’s Program ID, for example WinWord for Microsoft Word, and a hexadecimal Globally Unique Identifier, also known as a GUID.

Any application installed on the same machine that knows either the Program ID or the GUID can execute code inside the component. This feature makes it easy for developers to take advantage of common components for user interfaces, database access, and many, many other features.

DLL Hell occurs when a version of a given COM component that is incompatible with a previously installed version is registered, usually by installing a new program, which causes existing applications to malfunction. Over time, this scenario has cost countless dollars and lost hours of productivity.

However, it is also advantageous for COM components to have an identity based on a GUID, because it becomes possible to establish security policies for specific components. Role-based security is based on the intersection of NT users or groups and GUIDs.

By Default, .NET Components are Local to a Project and have no Identity…

Most components do not contain generic capabilities that people want to reuse. Furthermore, the process of registering components in the registry makes installing programs more difficult. .NET supports global registration of components when necessary, but by default, when a program needs to load a given component, it looks for that component in the local path.

The default behavior allows many .NET applications to be deployed by copying the files to the target machine. This is referred to as XCOPY installation in the .NET documentation. The default behavior also allows several applications, or multiple versions of the same application, with different versions of the same component to coexist on the same machine without DLL Hell style conflicts. This is referred to as side-by-side installation in the .NET documentation.

However, because .NET components, properly known as assemblies, do not have an analog to a GUID by default, additional work must be done to provide an identity if they are to be installed globally or if a specific security policy is desired.

.NET Assemblies Get Identity from Strong Names

COM components get their identity from a GUID. .NET assemblies get identity via strong names.

A strong name provides the assembly’s identity — its simple text name, version number, and culture information (if provided) — plus a public key and a digital signature. The strong name is generated from an assembly using the corresponding private key. Visual Studio .NET and other development tools provided in the .NET Framework SDK can assign strong names to an assembly. Assemblies with the same strong name are expected to be identical.

You can ensure that a name is globally unique by signing an assembly with a strong name. In particular, strong names satisfy the following requirements:

  1. Strong names guarantee name uniqueness by relying on unique key pairs. No one can generate the same assembly name that you can, because an assembly generated with one private key has a different name than an assembly generated with another private key.
  2. Strong names protect the version lineage of an assembly. A strong name can ensure that no one can produce a subsequent version of your assembly. Users can be sure that a version of the assembly they are loading comes from the same publisher that created the version the application was built with.
  3. Strong names provide a strong integrity check. Passing the .NET Framework security checks guarantees that the contents of the assembly have not been changed since it was built. This provides a level of protection against certain types of Trojans that work by injecting malicious code into common components: an assembly with a strong name cannot be altered without invalidating the internal hash, and if the hash is not valid, the runtime will refuse to execute the assembly.

Note, however, that strong names in and of themselves do not imply a level of trust, such as that provided by a digital signature and supporting certificate.

By default, two assemblies installed on the same machine have the same level of permissions even if one is strongly named and the other is not.

Once an assembly is provided with a strong name, it becomes possible to register the assembly globally to the machine and to set security policy on it. Note however that, unlike a COM GUID, an assembly’s strong name includes its version number, thus enabling side-by-side execution of different versions of the same assembly.
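
To make this concrete, a fully qualified strong name combines all of these parts into a single display name. The assembly name below is a placeholder and the public key token is shown only to illustrate the format:

MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089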

Do not confuse the signature of a strong name with Authenticode Signatures. Although Authenticode is supported in .NET it is a completely separate topic. The primary differences are:

  • Strong names are lighter weight. The implementation is simpler, the process involved is less complex, and there is no network connection made to a third party for verification.
  • There is no automatic way to associate a strong name with a specific publisher.
  • No means exist to revoke the use of a given strong public key if it is compromised. If such a situation occurs, you must revoke the permission of compromised assemblies, re-sign them, and redeploy them.
  • Strong names never result in a dialog asking the user to make trust decisions. Specific security policies for a given public key must be deployed prior to runtime if needed.

.NET Security

.NET provides a security model that can be configured at the enterprise, machine, and user level. This model provides a finer grain of control than anything previously available on the Windows platform. All .NET applications are executed within the context of the Common Language Runtime (CLR). The CLR is conceptually similar to a Java Virtual Machine.

A primary virtue of this architecture is that code executes within a ‘sandbox’ and because of this cannot access any resources unless the CLR allows it to do so. This is very different from previous generations of Windows development technologies, where executables communicated directly with the operating system.

Permissions are evaluated based on the most-restrictive configuration. In other words, if the machine’s security policy explicitly disallows an action that the user’s security explicitly allows, permission will always be denied for that action.

Furthermore, the CLR is still governed by the underlying set of permissions provided by the local machine and the domain. It is not possible to write an application that will allow the executing user the ability to access resources for which the user does not have permission.

Security Zones

Out of the box, the .NET framework security policies are based exclusively on the "zone" the code comes from. The zones are: Local Machine, Intranet, Internet, Trusted Sites, and Untrusted Sites.

Local Machine Zone

Code originating from the Local Machine zone is assumed to be safe by default and is governed by the user’s normal security.

Intranet Zone

The local intranet zone is used for content located on a company’s intranet. Because the servers and information would be within a company’s firewall, a user or company could assign a higher trust level to the content on the intranet.

Internet Zone

The Internet zone is used for the Web sites on the Internet that do not belong to another zone. The default settings allow code downloaded from these sites only minimal access to resources on the user’s computer. Web sites that are not mapped into other zones automatically fall into this zone.

Trusted Sites Zone

The Trusted sites zone is used for content located on Web sites that are considered more reputable or trustworthy than other sites on the Internet. Users can use this zone to assign a higher level of trust to specific Internet sites. The URLs of these trusted Web sites need to be mapped into this zone by the user. By default, sites in the Trusted sites zone receive no higher trust than those in the Internet zone. A user or company needs to change the level of trust granted to this zone if they want the sites it contains to be given a higher level of trust.

Untrusted Sites Zone

The Restricted sites zone is used for Web sites that contain content that could cause, or could have previously caused, problems when downloaded. This zone could be used to prevent code downloaded from these sites from running on the user’s computer. The URLs of these untrusted Web sites need to be mapped into this zone by the user.

It is possible to adjust the permissions of individual zones. The first release of the .NET Framework granted a similar level of trust to the Local Machine and Intranet zones. However, Service Pack 1 (of .NET 1.0) lowered the trust for the Intranet zone to disallow certain permissions. Specifically, it disallowed COM interop, remoting, and non-isolated file I/O.

Organizational Options for Creating Strong Names

In order to create a strong name, you must have a key pair. The easiest way to generate a key pair is to use the strong name tool included with the .NET framework, Sn.exe. (Note: remember that I wrote this in 2002, this part is easy in VS 2005 and later!)
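
For reference, here is what the process looks like with Sn.exe; the file names are placeholders. The first command generates a new key pair, the second extracts just the public portion, and the third displays the public key and its token:

sn -k KeyPair.snk
sn -p KeyPair.snk PublicKey.snk
sn -tp PublicKey.snk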

Key generation and signing can be done by individual development teams, resulting in a different public key for each application, or by a central authority within an organization.

Security policy can be applied at install time by the installation process, or centrally administered with tools like SMS.

If the key is easily available to all members of a development team, it is increasingly difficult to ensure that the key is not compromised or even to detect a compromise should it occur.

.NET supports a process known as Delay Signing where the development team is provided with the public portion of the key pair. The developers add the public key to the assembly to establish its identity for testing purposes, but the builds simply reserve space for the digital signature.

Because the developers know the public key and space is reserved for the digital signature, it is (relatively) easy for the development team to perform its function and for a different organizational unit to complete the process of signing the code. Because the application is already built, the second organizational unit does not have to recreate the build environment in order to complete the process. This also minimizes the differences between the development and release versions of the assemblies, boosting confidence that tests run against the development version were not invalidated by the signing process.
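
A minimal sketch of the delay-signing steps, with placeholder file and assembly names. The development team embeds the public key and reserves space for the signature using assembly-level attributes:

//In AssemblyInfo.cs: establish identity, but leave the signature blank.
[assembly: System.Reflection.AssemblyDelaySign(true)]
[assembly: System.Reflection.AssemblyKeyFile("PublicKey.snk")]

On development machines, strong name verification is turned off for the unsigned test builds, and the signing authority later completes the signature with the private key:

sn -Vr MyAssembly.dll
sn -R MyAssembly.dll KeyPair.snk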

The Price of a Compromised Key

This is an interesting question. Ultimately, I believe that centralizing the process of code signing is clearly the best practice. However, when weighing the risks, remember that:

  • Code executing in the Local Machine zone has Full Trust by default (just like any COM application).
  • .NET never gives a user more authority than Windows itself.
  • The system in question is not a candidate for distribution to a third-party.

Internally to a company, the only exploits I can see based on a compromised key require social engineering: placing a malicious application signed with the compromised key on the intranet or Internet and convincing someone to launch it via a hyperlink (possibly in an email). Not to downplay this possibility, but a crafty villain engaging in such an attack could get the same results by getting a user to run the code locally as a script, a native .exe, a COM-based .exe, or a .NET .exe. A native or COM .exe would be the easiest way to actually get root on the machine; the .NET version would require breaking two levels of security, the OS and the CLR.

I offer this speculation to help begin weighing the (hypothetical) risks against the (unknown) costs of centralizing code signing.

I hope this makes sense and clarifies the issues.

Author: Doug Ware

On the Use of Dispose()

I was looking through some archives recently and came across a couple of documents I wrote way back in 2002 for a project shortly after .NET 1.0 went RTM. Both provide good background information for some topics I want to cover later. So, I thought I’d go ahead and post them for your enjoyment. There’s nothing too revelatory here, but if you are getting started with ASP.NET and SharePoint, you should take the time to make sure you understand this topic.

Introduction

Developers using C#, or any CLS-compliant language, must take special care when writing code that uses finite system resources or resources that require special work to clean up when no longer needed. This document defines the terms required to discuss the problem and provides a general overview of one way to deal with the issue.

Differences between .NET and COM

Before discussing the details of resource management, consider the differences between .NET components and COM components.

COM manages resources through a process known as reference counting. When a new COM object is created by code running under COM, the operating system makes an internal note that the new object has one other object (the one that did the creation) referencing it. Each time the new object acquires a new reference, for example by being passed to another function, the operating system increases the reference count.

As object references are released, the operating system decreases the reference count. When a COM object’s reference count reaches zero, the object is destroyed and the object frees any resources it was using.

For example:

Sub Foo()
    Dim myObject As SomeObject

    Set myObject = New SomeObject   'The object is created and its reference count is 1.

    Call Foo2(myObject)

    Set myObject = Nothing          'The object is released and its reference count is 0.
End Sub

Sub Foo2(myObject As SomeObject)    'The object is passed to a function
                                    'and its reference count is 2.
    ...
End Sub                             'The function ends and the reference count is 1.

 

By tracing the sample code, you can see exactly when the object’s reference count reaches zero. When this happens, the object is destroyed. If required, the object executes logic at the time of its destruction to release any critical resources it may be using.

The key point is that traditional COM allows the developer to know exactly when an object will be destroyed.

.NET does things a bit differently.

The .NET Common Language Runtime (CLR) manages memory for applications. The CLR provides an automatic service called a garbage collector that periodically searches memory for objects that are no longer used and frees those it finds. By default, a .NET developer has no way to know when this will happen or the order in which objects found by the garbage collector are released.

Fortunately, that does not prevent a .NET developer from writing code that conserves resources or does required cleanup. It simply requires a different approach.

Garbage Collector

The .NET Framework’s garbage collector manages the allocation and release of memory for your application. Each time you use the new operator to create an object, the runtime allocates memory for the object from the managed heap. As long as address space is available in the managed heap, the runtime continues to allocate space for new objects.

However, memory is not infinite.

Eventually the garbage collector must perform a collection in order to free some memory. The garbage collector’s optimizing engine determines the best time to perform a collection, based upon the allocations being made. When the garbage collector performs a collection, it checks for objects in the managed heap that are no longer being used by the application and performs the necessary operations to reclaim their memory.

The algorithm used to decide whether an object should be garbage collected is considerably more sophisticated than the reference-counting scheme used by COM.

Consider this example:

aForm
  |_ simpleProperty
  |_ complexCollectionObject
       |_ parentFormProperty = aForm
       |_ bunchOfCollectionObjects

There is a form object named aForm that contains a complexCollectionObject, which holds an array of other objects AND a parentFormProperty with a reference that points back to aForm.

When some other code, perhaps a different form, opens aForm and the form’s complexCollectionObject is created, the reference count to the aForm object is 2. It’s being referenced by the code that created it and by an object that it contains. When the code that created the form ends or drops its reference to aForm, it is no longer possible for the program to navigate to either the object instance or any of its properties. Unfortunately, the reference count of the form is still equal to 1 because of complexCollectionObject.parentFormProperty.

If this example were a COM program, the result would be a memory leak. Unless the developer took special care to release the second reference, the memory consumed by the objects would not be released until the program ended or leaked enough memory to crash.

The .NET garbage collector is smart enough to know that aForm is no longer reachable by the application and will release these objects when it runs.

Destructors

  • Destructors are used to destruct instances of classes.
  • A class can have only one destructor.
  • Destructors cannot be inherited or overloaded.
  • Destructors cannot be called; they are invoked automatically.

A destructor does not take modifiers or have parameters. For example, the following is a declaration of a destructor for the class MyClass:

~MyClass()
{
    // Cleanup statements.
}

The destructor implicitly calls the Object.Finalize method on the object’s base class. Therefore, the preceding destructor code is implicitly translated to:

protected override void Finalize()
{
    try
    {
        // Cleanup statements.
    }
    finally
    {
        base.Finalize();
    }
}

This means that the Finalize method is called recursively for all of the instances in the inheritance chain, from the most derived to the least derived.

The programmer has no control over when the destructor is called because the garbage collector determines this. The garbage collector checks for objects that are no longer being used by the application. It considers these objects eligible for destruction and reclaims their memory.

Destructors are also called when the program exits.

The Problem with Destructors

There are a couple of problems with destructors that present a challenge to the developer. The first one is that you have no control over when they will be called. This is a problem if you have some resource in use that you want to clean up right this second. It is possible to force garbage collection to occur by using the GC.Collect method. Forcing garbage collection causes every pending destructor to execute. However, doing this can have a serious effect on performance.

It is also serious overkill if you only need to release the resources of a single object.
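
For completeness, forcing a collection and waiting for the pending destructors looks like this. Both calls can stall the application, which is why they should be rare:

GC.Collect();                    //Force a full collection.
GC.WaitForPendingFinalizers();   //Block until pending destructors have run.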

The second problem is more subtle.

When the garbage collector is going through the process of calling destructors, in what order are the destructors called? If there are 50 objects waiting for cleanup, does it destroy them in the order in which they were created? The answer is, "you don’t know."

Because of this, it is not safe to reference any other managed objects in the body or call stack of a destructor. The object you are trying to reference could be garbage collected already. If it was, your destructor will produce an unhandled exception and your program will stop running!

At this point you may be wondering why, given the above, you would ever want to write a destructor. The key term in the above paragraph is managed. Generally speaking, managed resources are other .NET objects that the CLR controls. However, resources created with the Windows API, COM objects, and mainframe queues exist outside of the CLR’s control. Because these types of resources exist outside of the control of .NET you need destructors to ensure they get cleaned up.

Fortunately, there is an agreed upon standard that .NET developers can use to clean up critical managed resources as needed and that allows you to guarantee that unmanaged resources are always disposed of properly.

IDisposable

The .NET framework includes an interface called IDisposable. The IDisposable interface defines a single method, void Dispose(). Classes that implement this interface advertise to the outside world that they contain resources that require special cleanup. Code that creates an object from a class that exposes the interface is expected to call Dispose when the object is no longer needed.

The following function uses a framework class called Graphics. Graphics resources are finite and should be conserved when possible. Therefore, the Graphics class provides a Dispose method.

//Note: BitBlt is a Win32 GDI function and must be declared via P/Invoke, e.g.:
//[DllImport("gdi32.dll")]
//static extern bool BitBlt(IntPtr hdcDest, int x, int y, int width, int height,
//    IntPtr hdcSrc, int xSrc, int ySrc, int rop);

private Bitmap _GetBitmap()
{
    //Make the bitmap by bit-blitting the drawing area's DC to a new DC.
    Graphics gSource = _FormToPrint.CreateGraphics();
    Bitmap bmCopy = new Bitmap(_FormToPrint.ClientSize.Width,
        _FormToPrint.ClientSize.Height, gSource);
    Graphics gDest = Graphics.FromImage(bmCopy);

    IntPtr dc1 = gSource.GetHdc();
    IntPtr dc2 = gDest.GetHdc();
    BitBlt(dc2, 0, 0, _FormToPrint.ClientSize.Width,
        _FormToPrint.ClientSize.Height, dc1, 0, 0, 13369376); //13369376 = SRCCOPY
    gSource.ReleaseHdc(dc1);
    gDest.ReleaseHdc(dc2);

    gSource.Dispose();
    gDest.Dispose();

    return bmCopy;
}

You can test to see if an object supports IDisposable at runtime using a variety of techniques. The most straightforward is to use the "is" operator.

if (someObject is IDisposable)
{
    ((IDisposable)someObject).Dispose();
}

However, you will generally only need to perform this test if you are using something whose specific type is unknown at design time, like the elements of a dictionary collection. If you know a class implements Dispose, make sure you call it when you are done! The easiest way to discover this is with IntelliSense or the Object Browser.
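
As an aside, when you do know at design time that a class implements IDisposable, the C# using statement will call Dispose for you, even if an exception is thrown inside the block. A small sketch, where someForm is a placeholder:

using (Graphics g = someForm.CreateGraphics())
{
    g.DrawLine(Pens.Black, 0, 0, 100, 100);
}   //g.Dispose() is called automatically here.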

The Problem with Dispose

There are two problems with Dispose methods, or any method that cleans up critical resources.

The first is that you expect the people using your class to be responsible and clean up after themselves. If you believe that people can be trusted to do this every time, I know a used-car salesman who would love to meet you!

Luckily you can help those who seem so unwilling to help themselves. The destructor is guaranteed to run eventually. Because of this, you can call Dispose from the destructor like so:

//This class implements IDisposable.
public class DisposeExample : IDisposable
{
    //This is a managed .NET class and the garbage collector supports it.
    public SomeDotNetClass managedObject = new SomeDotNetClass();

    //This is some other, unmanaged resource and the
    //garbage collector DOES NOT support it.
    public SomeUnmanagedResource unmanagedObject = new SomeUnmanagedResource();

    //This is the implementation of IDisposable.Dispose.
    //Do not make this method virtual.
    //A derived class should not be able to override this method.
    public void Dispose()
    {
        unmanagedObject.CleanUpMethod();
    }

    //This is the destructor.
    ~DisposeExample()
    {
        this.Dispose();
    }
}

What if the managed object also includes a Dispose method? Since you are a good developer, you need to call its Dispose method when your Dispose method gets called. However, if you were paying attention earlier, you know that changing our Dispose method to look like this:

    //This is the implementation of IDisposable.Dispose.
    //Do not make this method virtual.
    //A derived class should not be able to override this method.
    public void Dispose()
    {
        unmanagedObject.CleanUpMethod();
        managedObject.Dispose();    //Unhandled exception waiting to happen.
    }

would be bad! Remember, you have no control over the order in which the destructors are called! If Dispose() is called by the destructor, it is possible that managedObject is already gone. If it is, the user will receive a message box, followed by a trip to the desktop, followed by a call to the help desk, etc.

What you need is a way to know when Dispose is being called directly instead of by the destructor. You provide this by overloading Dispose to accept a flag that tells us what we need to know. Change the base implementation of Dispose to call the overloaded method indicating Dispose was called directly and modify the destructor to call the overloaded method indicating the garbage collector is running.

 

//This class implements IDisposable.
public class DisposeExample : IDisposable
{
    //This is a managed .NET class and the garbage collector supports it.
    public SomeDotNetClass managedObject = new SomeDotNetClass();

    //This is some other, unmanaged resource and the
    //garbage collector DOES NOT support it.
    public SomeUnmanagedResource unmanagedObject = new SomeUnmanagedResource();

    //This is the implementation of IDisposable.Dispose.
    public void Dispose()
    {
        //Note that Dispose was called directly.
        this.Dispose(true);
    }

    // Dispose(bool disposing) executes in two distinct scenarios.
    // If disposing equals true, the method has been called directly
    // or indirectly by a user's code. Managed and unmanaged resources
    // can be disposed.
    // If disposing equals false, the method has been called by the
    // runtime from inside the destructor and you should not reference
    // other objects. Only unmanaged resources can be disposed.
    protected virtual void Dispose(bool disposing)
    {
        //Always clean up unmanaged resources.
        unmanagedObject.CleanUpMethod();

        if (disposing)
        {
            //We know that Dispose was called directly and it is safe
            //to dispose other managed objects!
            managedObject.Dispose();
        }
    }

    //This is the destructor.
    ~DisposeExample()
    {
        //Note that Dispose is being called by the destructor.
        this.Dispose(false);
    }
}

The second problem is easier to understand and to solve. The current example doesn’t prevent something from calling Dispose more than once. In fact, it is also a runtime error waiting to happen: if someone calls Dispose, sooner or later the destructor will also run. If unmanagedObject.CleanUpMethod() is the sort of operation that can only be done safely once, it could produce an error.

There is also nothing to protect anybody from accidentally calling Dispose more than once.

The example needs two more features. The first is a way to tell the garbage collector that it shouldn’t call the destructor. The second is to add an internal flag to keep Dispose from doing its job more than once.

//This class implements IDisposable.
public class DisposeExample : IDisposable
{
    //Private internal flag used to keep Dispose from running more than once.
    bool _disposed;

    //This is a managed .NET class and the garbage collector supports it.
    public SomeDotNetClass managedObject = new SomeDotNetClass();

    //This is some other, unmanaged resource and the
    //garbage collector DOES NOT support it.
    public SomeUnmanagedResource unmanagedObject = new SomeUnmanagedResource();

    //This is the implementation of IDisposable.Dispose.
    public void Dispose()
    {
        //Note that Dispose was called directly by some other object.
        this.Dispose(true);

        // Take yourself off of the finalization queue
        // to prevent finalization code for this object
        // from executing a second time.
        GC.SuppressFinalize(this);
    }

    // Dispose(bool disposing) executes in two distinct scenarios.
    // If disposing equals true, the method has been called directly
    // or indirectly by a user's code. Managed and unmanaged resources
    // can be disposed.
    // If disposing equals false, the method has been called by the
    // runtime from inside the destructor and you should not reference
    // other objects. Only unmanaged resources can be disposed.
    protected virtual void Dispose(bool disposing)
    {
        //Check to see if it's already been done.
        if (!_disposed)
        {
            _disposed = true;

            //Always clean up unmanaged resources.
            unmanagedObject.CleanUpMethod();

            if (disposing)
            {
                //We know that Dispose was called directly and it is safe
                //to dispose other managed objects!
                managedObject.Dispose();
            }
        }
    }

    //This is the destructor.
    ~DisposeExample()
    {
        //Note that Dispose is being called by the destructor.
        this.Dispose(false);
    }
}

 

Author: Doug Ware

CodeStock was Awesome Sauce

I was in Knoxville, Tennessee this past Saturday at CodeStock. Hats off to Mike, Alan, and Wally; they put together a heck of an event and everyone had a great time. The show definitely had the best Open Spaces discussion I’ve ever seen anywhere.

If you attended my presentation, or even if you didn’t, you can download the slides here.

–Doug Ware

Code to Associate an Approval Workflow and Make it the Default for Content Approval

Kind of random, but I had this lying around and I figured it might be useful to someone. I got the XML by configuring an Approval workflow and using SharePoint Manager to copy the property value. Note: this assumes you are running MOSS and already have a Tasks list and a Workflow History list.

static string _associationXml = @"
<my:myFields xml:lang='en-us' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:my='http://schemas.microsoft.com/office/infopath/2003/myXSD'>
  <my:Reviewers>
    <my:Person>
      <my:DisplayName>A Site Members</my:DisplayName>
      <my:AccountId>A Site Members</my:AccountId>
      <my:AccountType>SharePointGroup</my:AccountType>
    </my:Person>
  </my:Reviewers>
  <my:CC></my:CC>
  <my:DueDate xsi:nil='true'></my:DueDate>
  <my:Description>Please review this document.</my:Description>
  <my:Title></my:Title>
  <my:DefaultTaskType>1</my:DefaultTaskType>
  <my:CreateTasksInSerial>true</my:CreateTasksInSerial>
  <my:AllowDelegation>true</my:AllowDelegation>
  <my:AllowChangeRequests>true</my:AllowChangeRequests>
  <my:StopOnAnyReject xsi:nil='true'></my:StopOnAnyReject>
  <my:WantedTasks xsi:nil='true'></my:WantedTasks>
  <my:SetMetadataOnSuccess>false</my:SetMetadataOnSuccess>
  <my:MetadataSuccessField></my:MetadataSuccessField>
  <my:MetadataSuccessValue></my:MetadataSuccessValue>
  <my:ApproveWhenComplete>false</my:ApproveWhenComplete>
  <my:TimePerTaskVal xsi:nil='true'></my:TimePerTaskVal>
  <my:TimePerTaskType xsi:nil='true'></my:TimePerTaskType>
  <my:Voting>false</my:Voting>
  <my:MetadataTriggerField></my:MetadataTriggerField>
  <my:MetadataTriggerValue></my:MetadataTriggerValue>
  <my:InitLock>false</my:InitLock>
  <my:MetadataStop>false</my:MetadataStop>
  <my:ItemChangeStop>false</my:ItemChangeStop>
  <my:GroupTasks>false</my:GroupTasks>
</my:myFields>";

 

static void Main(string[] args)
{
    //Get the site, web, and list. Wrapping the SPSite in a using
    //block ensures it is disposed when we are done with it.
    using (SPSite site = new SPSite(@"http://YourUrlHere"))
    {
        SPWeb web = site.RootWeb;
        SPList docs = web.Lists["Your List Here"];

        //Get the workflow template
        SPWorkflowTemplate approvalTemplate =
            web.WorkflowTemplates.GetTemplateByName("Approval",
                System.Globalization.CultureInfo.CurrentCulture);

        //Create the association
        SPWorkflowAssociation assoc =
            SPWorkflowAssociation.CreateListAssociation(
                approvalTemplate,
                "Approval Workflow",
                web.Lists["Tasks"],
                web.Lists["Workflow History"]);

        //Set the startup options
        assoc.AllowManual = true;
        assoc.AutoStartCreate = true;

        //Provide the association data
        assoc.AssociationData = _associationXml;

        //Apply the association to the document library
        docs.AddWorkflowAssociation(assoc);

        //Enable moderation and make the workflow the
        //default approval workflow
        docs.EnableModeration = true;
        docs.EnableVersioning = true;
        docs.EnableMinorVersions = true;
        docs.DefaultContentApprovalWorkflowId = assoc.Id;
        docs.Update();

        Console.WriteLine("Done!");
        Console.ReadLine();
    }
}

Author: Doug Ware