Wednesday, December 09, 2009

Note to Self: MQSeries

I've been off working on another project for a very busy three weeks (more on this perhaps in the next post) and have now returned to the project I had been on since July.

This is a quick note to self for when the password on the service account for IBM MQSeries needs to be changed. When the password expired I got an error message like this when I tried to start a queue manager:

Initialization of resource 'amqmsrvn' failed, rc=0x8000401a
The server process could not be started because the configured identity is incorrect. Check the username and password.
exitvalue = -4


First, change the service account / password on the IBM MQSeries Windows service (from the Services MMC):

Second, go into Component Services (COM+) and find "IBM MQSeries Services" under the DCOM Config node:

And then update the password in the "Log On" tab.


Hopefully I won't waste an hour rebuilding my queues next time!

Thursday, November 05, 2009

Use of generic types in BizTalk and XLANG/s

This is just a quick post to note down something that I have been explaining to people lately. The project I am working on at the moment has several teams of developers working together, and most of the code is being cut in Visual Studio 2008 / .Net 3.5. As such, we in the BizTalk feature team are taking delivery of components coded by other teams and integrating them. We are using BizTalk 2006 R2 and developing the BizTalk orchestrations in Visual Studio 2005, although there are no compatibility issues between binaries developed in 2008 and 2005; generic types were, after all, introduced with .Net 2.0 / Visual Studio 2005.

However, where I have come across an issue is in the use of generic types, and specifically assigning generic types to orchestration variables. In some cases orchestrations and expression shapes support generics and in some cases they do not.

All I am going to do is to illustrate some cases where generics are OK and some where they are not, with a bit of explanation as to why.

Assigning to variables

Imagine that you have a class:

public class MyClass {}

And you want to use a collection of this class. These days you would usually use a generic collection in your code:

Collection<MyClass> collection = new Collection<MyClass>();

However, if you try to use this type for a variable in an orchestration you will find that you can't. The type picker only lets you select concrete types from compiled assemblies, and a constructed generic type such as Collection<MyClass> doesn't appear there. If you want to use a collection like this in an orchestration you will have to create a named type for it. You can define a new class:

public class MyClassCollection : Collection<MyClass> {}

If you do this, you can reference "MyClassCollection" as a variable in your orchestration and everything will be fine. Note that the usual rules about classes being marked as serializable apply.
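As a complete sketch, with the serializable rule from above applied, the wrapper might look like this (the namespace is assumed to match the other samples in this post):

using System;
using System.Collections.ObjectModel;

namespace AndrewGenerics.Components
{
    // A concrete, serializable type that the orchestration type picker can see.
    [Serializable]
    public class MyClassCollection : Collection<MyClass> { }
}

You can then declare an orchestration variable of type AndrewGenerics.Components.MyClassCollection and instantiate it in an expression shape with myCollection = new AndrewGenerics.Components.MyClassCollection();.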

The same applies if you create a class that has a generic interface such as this:

namespace AndrewGenerics.Components
{
    [Serializable]
    public class OrchestrationInstanceHelper<T>
    {
        public void UpdateInstance(T objectInstance)
        {
            // Some code in here
        }
    }
}

If you find this class in the type picker and try to use it for a variable you will be able to pick it:

However, you will not be able to supply the generic type parameter:


As a result if you try to compile you will get a whole heap of errors like these:

Error 1 '` (0x60)': character cannot appear in an identifier C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 62 63
Error 2 '` (0x60)': character cannot appear in an identifier C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 80
Error 3 identifier 'OrchestrationInstanceHelper' does not exist in 'AndrewGenerics.Components'; are you missing an assembly reference? C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 62 36
Error 4 cannot find symbol 'AndrewGenerics.Components.OrchestrationInstanceHelper' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 62 36
Error 5 expected 'identifier' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 62 64
Error 6 unexpected token: 'numeric-literal' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 62 64
Error 7 identifier 'helper' does not exist in 'TestOrchestration'; are you missing an assembly reference? C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 14
Error 8 cannot find symbol 'helper' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 14
Error 9 identifier 'OrchestrationInstanceHelper' does not exist in 'AndrewGenerics.Components'; are you missing an assembly reference? C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 53
Error 10 cannot find symbol 'AndrewGenerics.Components.OrchestrationInstanceHelper' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 53
Error 11 'new OrchestrationInstanceHelper': a new expression requires () after type C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 23
Error 12 expected 'identifier' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 82
Error 13 unexpected token: '(' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 82
Error 14 illegal statement '1' C:\Projects\AndrewGenerics\AndrewGenerics.BizTalk\TestOrchestration.odx 67 81

All those errors come down to one thing: when you add a variable to an orchestration, or add code in an expression shape, the designer generates C# code for you behind the scenes and then compiles it. Because the designer does not emit generic type parameters correctly, the generated C# doesn't compile and you get the build errors above.

Again though, if you create a class that inherits from the above class and supplies the generic type argument, you'd be OK:

public class MyClassInstanceHelper : OrchestrationInstanceHelper<MyClass> {}

Passing parameters to methods

OK. Let's now create a helper component that we will call from an orchestration. [Disclaimer: The code below is for illustration purposes only!]

namespace AndrewGenerics.Components
{
    public static class OrchestrationHelper
    {
        // Note that this uses the collection type
        public static void UpdateItemsInCollection(MyClassCollection collection)
        {
            // Some code in here
        }

        // Note that this uses the generic type
        public static void UpdateItemsInCollection2(IEnumerable<MyClass> collection)
        {
            // Some code in here
        }
    }
}

The first of these calls uses the derived collection type and so goes through with no problem:

AndrewGenerics.Components.OrchestrationHelper.UpdateItemsInCollection(myCollection);

And even if we use the second method, with its IEnumerable<MyClass> parameter, that still works OK:

AndrewGenerics.Components.OrchestrationHelper.UpdateItemsInCollection2(myCollection);

Therefore, even though the method in the helper component has a generic type in its signature, when the class is compiled the generic type gets "baked" into the interface, and so passing our variable works.

Assigning output from methods to variables

Now let's add another method to the helper class that returns a collection of objects, again through a generic type.

public static IEnumerable<MyClass> GetCollection()
{
    Collection<MyClass> coll = new Collection<MyClass>();
    coll.Add(new MyClass());
    return coll;
}

In order to call this I might use a line of code like this in an expression shape:

myCollection = (AndrewGenerics.Components.MyClassCollection)AndrewGenerics.Components.OrchestrationHelper.GetCollection();

Again, this is handled OK by the BizTalk compiler because we are assigning to our collection variable and casting the type of the result. For the same reasons as above, we can't create a variable of type IEnumerable<MyClass>, so we can't receive the output of this method without casting it, but we can at least handle it. Obviously, if at runtime we were handed a different object that implements IEnumerable<MyClass> (such as an array) we would get a runtime error because of the invalid cast.

However, we have been able to declare a method that uses a generic type, and for the same reason we can use it, i.e. when the helper component is compiled the method signature becomes fixed and the orchestration compiler can handle the output.
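If you want to avoid the cast (and the runtime risk that goes with it), an alternative sketch is to have the helper return the concrete wrapper type instead; the GetCollection2 method below is hypothetical and assumes the MyClassCollection class defined earlier:

// Returning the named wrapper type means the expression shape needs no cast.
public static MyClassCollection GetCollection2()
{
    MyClassCollection coll = new MyClassCollection();
    coll.Add(new MyClass());
    return coll;
}

The expression shape then becomes simply myCollection = AndrewGenerics.Components.OrchestrationHelper.GetCollection2();.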

Conclusions

Here are some key points from this blog post:

  1. You cannot declare a variable of a generic type in an orchestration because you can only select compiled, concrete types in the type picker. If you select a generic type without supplying its type parameter you get some crazy build errors.
  2. In order to use a generic type you can create a class that derives from (and closes) the generic type. This will then be usable by BizTalk.
  3. When a helper component uses generic types in the interface the types get baked into the interface and can be used by BizTalk.

Wednesday, November 04, 2009

Am I missing something?

I was shopping on Amazon for my son's Christmas present this afternoon. He wants a laptop and I was looking for an entry-level laptop that is running Windows 7. I found this nice looking Compaq Presario:



However, when I scrolled down on the description I found the following:


That's it. The only thing that other people have bought. I was expecting to see antivirus and MS Office! Just shows that you can't assume what choices the public will make!

Thursday, October 29, 2009

BizTalk Orchestration Designer Crashes Visual Studio

This is just a quick post to note in passing an issue that I encountered recently. In my current project I am again working as a BizTalk architect, and we are using BizTalk Server 2006 R2, and hence the designer is hosted in Visual Studio 2005. After taking over some enormous orchestrations from another member of the team recently I started to find that Visual Studio started crashing often and unpredictably. So much so, in fact, that I had to press ctrl+S after just about every shape change, as I was losing work so often.

Now, I had worked on a project a couple of years ago where we suffered this a lot. I think that one was BizTalk Server 2006 R1. After having done some stints with BizTalk 2009 / VS 2008 it came as a nasty surprise to be getting this issue again.

As you might expect, I decided to search to see who else had had the same issue. I got a couple of relevant hits. This one, http://continuouslyintegrating.blogspot.com/2008/01/orchestration-designer-crashes-visual.html, was interesting as it implied that there was something in your profile that was causing the issue. I didn't really want to have to rebuild a profile in the middle of a critical phase in the project so I kept searching.

I then came across this one, http://www.sabratech.co.uk/blogs/yossidahan/labels/visual%20studio.html, which had more things to try. The suggestion that seemed intuitively right was that the size of the orchestration might be an issue: I had only started experiencing crashes once I started working on huge orchestrations; with modest orchestrations there had been no problem.

I therefore took the suggestion that had the least amount of pain, to decrease the colour depth in my display settings from 32-bit down to 16-bit. I was hoping that this was going to do the trick. I then started working on the offending orchestrations and I could at least get started, but I did experience further crashes.

I then thought of a further step I might take, and it seemed to follow on from reducing the graphics load. I zoomed out. Never had another crash after that!

I think part of the problem is that the orchestration designer renders the orchestration as an image, or rather as a series of overlapping images. Then, depending on where you scroll to, a certain portion of the image is displayed. Therefore, no matter how small your viewing area in the designer, the orchestration designer is still rendering a pretty big image. However, if you zoom out you reduce the overall size of the image that needs to be rendered. And, as mentioned, if you decrease the colour depth you reduce the size of it still further.

Conclusion

Huge orchestrations, from a design point of view, are bad. Huge orchestrations, from the Visual Studio Orchestration Designer point of view, are bad. If you must have them (as in my case where I was handed them and had to make them work), reducing the graphics load on your machine is a quick way to prevent Visual Studio from crashing under the load.

Tuesday, October 20, 2009

Use of interfaces within BizTalk Orchestrations and XLANG/s

Abstract

This blog post discusses the way in which interfaces are handled in BizTalk, and in particular the reason why variables declared with an interface type cannot be saved to orchestration state. This surfaces as the compiler error [a non-serializable object type '{type name} {variable name}' can only be declared within an atomic scope or service].

Introduction

I have recently been working back on BizTalk, and I have come across a strange issue with the way that interfaces are handled by the BizTalk compiler. As usual, this is of interest not only because there are people who may encounter this issue, but also because of the implications it has for understanding how the BizTalk orchestration engine works.

Let's consider a scenario: You want to encapsulate a common sub-process within a single orchestration. You need to invoke some business logic within this sub-process and this may change depending on which orchestration has invoked the sub-process.

In my case I decided to use a factory pattern, so I created an interface for the business logic and then the orchestration invoked the factory to receive the correct business logic instance. I have created a simplified example project to demonstrate the issue.

Example Solution

First, I created an interface that defines how the business logic is to be called:

public interface IOrchestrationHelper
{
    void DoStuff();
}


I then created a base class (more on this later):

[Serializable]
public class OrchestrationHelperBase : IOrchestrationHelper
{
    #region IOrchestrationHelper Members

    public virtual void DoStuff()
    {
        throw new Exception("The method or operation is not implemented.");
    }

    #endregion
}

And then I created 2 implementations of the class:

public class OrchestrationHelperA : OrchestrationHelperBase
{
    public override void DoStuff()
    {
        Debug.WriteLine("This is OrchestrationHelperA");
    }
}

[Serializable]
public class OrchestrationHelperB : OrchestrationHelperBase
{
    public override void DoStuff()
    {
        Debug.WriteLine("This is OrchestrationHelperB");
    }
}

And the factory to instantiate the classes:

public static class OrchestrationHelperFactory
{
    public static IOrchestrationHelper CreateHelper(string helperName)
    {
        switch (helperName)
        {
            case "A":
                return new OrchestrationHelperA();
            case "B":
                return new OrchestrationHelperB();
            default:
                throw new Exception("Could not match a helper to the input specification.");
        }
    }
}

OK, so far so good. Simple stuff, we do this sort of thing every day don't we? This needed to be hooked into the BizTalk processes, so I incorporated the calls to the factory and the business logic into an orchestration, as follows:


If you look at the orchestration, I have a parameter called helperSpecification of type System.String that is passed in by the caller, which defines the piece of business logic to invoke (in practice this would possibly be an enum, but this is just to demonstrate). There is also an orchestration parameter called orchestrationHelper of type IOrchestrationHelper that contains the instance of the business logic component.

In the first expression shape I create the orchestration helper:

orchestrationHelper = Andrew.InterfacesXLANGs.Components.OrchestrationHelperFactory.CreateHelper(helperSpecification);

And in the next expression shape I call the business logic:

orchestrationHelper.DoStuff();

Again, this is almost as simple an orchestration as it is possible to get. However, when I try to compile it I get the following error:

Error 1 a non-serializable object type 'Andrew.InterfacesXLANGs.Components.IOrchestrationHelper orchestrationHelper' can only be declared within an atomic scope or service C:\Documents and Settings\v-anriv\My Documents\Visual Studio 2005\Projects\InterfacesXLANGs\InterfacesXLANGs\SubProcess.odx 46 66

Now, if you look into the cause of this error it is quite simple. BizTalk is a reliable messaging and orchestration server; the mechanism for achieving this reliability is that the state of messages and orchestrations is persisted to the Message Box database at runtime, either at persistence points (send ports, timeouts, exiting atomic scopes) or when the service decides to save the state to manage load. This is where the issue lies. In order to save the state of a variable it must be marked as serializable. When an orchestration hits a persistence point it serializes all of its variables and saves the data into the database. When the orchestration is "rehydrated", the state is deserialized and the processing can continue.

I mentioned scopes just above. Atomic scopes are a special case in BizTalk. These are the scopes in which an atomic (MSDTC) transaction is running. Obviously, in order to marshal the resources for such a transaction the orchestration must remain in memory during the processing of an atomic scope. This means that the scope must complete; if it fails half way through, BizTalk will assume that the work in the atomic scope has not been done and will attempt to re-run it when the orchestration is resumed.

A side-effect of atomic scopes is that variables that are defined in an atomic scope will never be persisted to the database as they will always be in memory until the scope is complete. Because of this, it is possible to define a variable that is a non-serializable class.

As you can imagine, when a BizTalk host is running an orchestration it is just like any process executing code. However, when a persistence point is reached there is a double hit on performance as the state is serialized and then saved into the message box database as a recoverable state point. This increases the latency of the orchestration considerably, so if throughput is an issue then you should minimise the number of persistence points. If you are more concerned with reliability then it's not such a bad thing.

If you look at the classes I defined they were all marked as serializable, so they could all happily exist in BizTalk orchestrations. However, because the variable was defined to use the interface, the compiler did not know if the class that would implement the interface would also be marked as serializable, therefore it generated an error.

The Solution, and some words of warning

In order to get the BizTalk compiler to accept the interface-based design, you need to declare the variable as the base class and mark the base class as serializable. But be careful: if you do anything in your subclasses to make them non-serializable then there will be some unfortunate unintended consequences when the object is instantiated by the factory and loaded into the orchestration state.
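In the example solution that means declaring the orchestrationHelper variable as OrchestrationHelperBase rather than IOrchestrationHelper, and casting the factory result; a sketch of what the first expression shape might then look like:

orchestrationHelper = (Andrew.InterfacesXLANGs.Components.OrchestrationHelperBase)
    Andrew.InterfacesXLANGs.Components.OrchestrationHelperFactory.CreateHelper(helperSpecification);

One further note: SerializableAttribute is not inherited, so each concrete subclass (OrchestrationHelperA above, for example) still needs its own [Serializable] attribute, otherwise dehydration will fail at runtime even though the compiler is now happy.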

Summary

I have used an example where an interface is used in a declaration in BizTalk to illustrate how BizTalk manages orchestration state through serialization of the orchestration variables. If you think about why and where orchestrations serialize and persist / dehydrate / rehydrate you will also start to get a grip on the factors that affect orchestration performance, and you can alter your designs to suit.


Wednesday, October 14, 2009

Just read this somewhere....

"A computer without COBOL or FORTRAN is like a chocolate cake without ketchup or mustard"

Thursday, October 08, 2009

Reminder : Fusion Log Viewer

Just a quick note - been having some errors loading assemblies lately and have called on the little-known tool in the SDK called Fusion Log Viewer. You can load it from the Visual Studio Command Prompt using the command fuslogvw or load it from here:

C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\FUSLOGVW.exe

You'll get a view of all assembly load failures, which is really useful with systems that dynamically load stuff from the GAC at runtime such as BizTalk.



Wednesday, October 07, 2009

BizTalk Error: Failed while delivering a message to a service instance. Message details follow.

I had a strange error in BizTalk this week. I was integrating some changes by the database team on my project, and I got a strange error condition that I had not seen before. The symptoms were as follows:

  • (Normal functionality): Orchestration extracts an entity from the data access layer, and uses a Start Orchestration shape to launch another orchestration, passing the entity as a parameter.
  • (Error functionality): The second orchestration shows as started in the BizTalk Admin Console, it can be viewed in Orchestration Debugger, but it never starts processing even its first shape. (The event log blurb will be at the end of the post, because it's a bit bloaty and will detract from the commentary).
So, in my usual troubleshooting-cum-why-is-it-always-me-to-sort-this-out kind of way, I then had to start looking at why this was happening. The strange thing is that nothing much had changed on these orchestrations for about 2 months. It was a stable and well-tested piece of functionality.

What had changed, however, was the structure of the entity that I was getting back. An additional property had been added to the top-level entity that was itself a class containing 3 properties, two strings and a byte array. Nothing controversial here, I thought. I checked out this new class to see if it was serializable (a common thing for non-BizTalk devs to miss off) and it was OK (in fact if it is not serializable then you get a compile rather than a runtime error).

I then put a temporary hack into the data access code to set the new property to null, to see if the creation of this new object was causing the issue and instantly the orchestration started to work as normal. I was starting to get somewhere.

If you look at the error and examine the stack trace you see that there is an index out of bounds error on an array. Now, you may recall that the new object has a byte array, so I thought that this must be the candidate. I then looked at the values being passed through to the byte array and I found that although the byte array was being instantiated it had zero length.

In the end the error was caused by BizTalk failing to binary serialize / deserialize the state of the object into the orchestration, and it only happened when the byte array had zero length. Just to prove it I put in the following line as a temporary measure:

entity.MyProperty.MyByteArray = new byte[] { 0x00 };

This went through the message box OK. I then put in the following instead:

entity.MyProperty.MyByteArray = null;

This also went through. Therefore there is just an issue with a byte array of zero length. I eventually settled on the following:

if (entity.MyProperty.MyByteArray.Length == 0)
{
entity.MyProperty.MyByteArray = null;
}
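For a slightly more defensive version of the same workaround, here is a sketch with null guards added (property names as in the temporary hacks above):

// Only touch the array if the property chain is populated; a zero-length
// array is the case that fails to make it through the message box.
if (entity.MyProperty != null &&
    entity.MyProperty.MyByteArray != null &&
    entity.MyProperty.MyByteArray.Length == 0)
{
    entity.MyProperty.MyByteArray = null;
}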

In Summary

This error was caused by an exception very low down in the BizTalk engine: a failure to serialize or deserialize the object into the orchestration state. The trigger was the byte array being of zero length (which is perfectly legitimate from a .Net point of view), so this may well be a fault in the BizTalk engine.

As a workaround, setting the byte array to null when it is empty allowed the orchestration to function.

Exception / Stack Trace

Event Type: Error
Event Source: XLANG/s
Event Category: None
Event ID: 10001
Date: 06/10/2009
Time: 10:14:17
User: N/A
Computer: <>
Description:
Failed while delivering a message to a service instance. Message details follow.
Message ID: 24abb7a4-060d-480b-abdd-c22f70118c11
Service Instance ID: c58613fa-3716-48df-9fc9-edd76cae2f13
Service Type ID: afa69d35-1f0f-7665-4264-b4b78f9abfef
Subscription ID: d7676de9-1e27-4dba-ae5d-223e77c64b50
Body part name:
Service type: <>, <>, Version=<>, Culture=neutral, PublicKeyToken=<>
Exception type: BTXMessageDeliveryException
The following is a stack trace that identifies the location where the exception occured

at Microsoft.BizTalk.XLANGs.BTXEngine.BTXSession._receiveOneMessage(Guid& instanceId, Guid& serviceId, IBTMessage currentMsg)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXSession.ReceiveMessages(IBTMessage[] messages, Int32 firstIdx, Int32 count)
at Microsoft.BizTalk.XLANGs.BTXEngine.AppDomains.AppDomainRoot.Microsoft.XLANGs.BizTalk.ProcessInterface.IAppDomainStub.ReceiveMessages(Object objMsg)
at Microsoft.XLANGs.BizTalk.CrossProcess.AppDomainStubProxy.Microsoft.XLANGs.BizTalk.ProcessInterface.IAppDomainStub.ReceiveMessages(Object msgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg)
at System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(Object[] args)
at System.Threading.Thread.CompleteCrossContextCallback(InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Threading.Thread.InternalCrossContextCallback(Context ctx, IntPtr ctxID, Int32 appDomainID, InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Threading.Thread.InternalCrossContextCallback(Context ctx, InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Channels.ChannelServices.SyncDispatchMessage(IMessage msg)
at System.Runtime.Remoting.Channels.CrossAppDomainSink.DoDispatch(Byte[] reqStmBuff, SmuggledMethodCallMessage smuggledMcm, SmuggledMethodReturnMessage& smuggledMrm)
at System.Runtime.Remoting.Channels.CrossAppDomainSink.DoTransitionDispatchCallback(Object[] args)
at System.Threading.Thread.CompleteCrossContextCallback(InternalCrossContextDelegate ftnToCall, Object[] args)

Additional error information:

Failed while delivering a message to a service instance. Message details follow.
Message ID: 24abb7a4-060d-480b-abdd-c22f70118c11
Service Instance ID: c58613fa-3716-48df-9fc9-edd76cae2f13
Service Type ID: afa69d35-1f0f-7665-4264-b4b78f9abfef
Subscription ID: d7676de9-1e27-4dba-ae5d-223e77c64b50
Body part name:
Service type: <>, <>, Version=<>, Culture=neutral, PublicKeyToken=<>
Exception type: BTXMessageDeliveryException
Source: Microsoft.XLANGs.BizTalk.Engine
Target Site: Void DeliverMessage(System.Guid, Microsoft.BizTalk.Agent.Interop.IBTMessage, Boolean ByRef)
The following is a stack trace that identifies the location where the exception occured

at Microsoft.BizTalk.XLANGs.BTXEngine.BTXSession._tryReceiveOneMessage(Boolean& loggedError, Guid& instanceId, IBTMessage currMsg)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXSession._receiveOneMessage(Guid& instanceId, Guid& serviceId, IBTMessage currentMsg)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXSession.ReceiveMessages(IBTMessage[] messages, Int32 firstIdx, Int32 count)
at Microsoft.BizTalk.XLANGs.BTXEngine.AppDomains.AppDomainRoot.Microsoft.XLANGs.BizTalk.ProcessInterface.IAppDomainStub.ReceiveMessages(Object objMsg)
at Microsoft.XLANGs.BizTalk.CrossProcess.AppDomainStubProxy.Microsoft.XLANGs.BizTalk.ProcessInterface.IAppDomainStub.ReceiveMessages(Object msgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg)
at System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(Object[] args)
at System.Threading.Thread.CompleteCrossContextCallback(InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Threading.Thread.InternalCrossContextCallback(Context ctx, IntPtr ctxID, Int32 appDomainID, InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Threading.Thread.InternalCrossContextCallback(Context ctx, InternalCrossContextDelegate ftnToCall, Object[] args)
at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessage(IMessage reqMsg)
at System.Runtime.Remoting.Channels.ChannelServices.SyncDispatchMessage(IMessage msg)
at System.Runtime.Remoting.Channels.CrossAppDomainSink.DoDispatch(Byte[] reqStmBuff, SmuggledMethodCallMessage smuggledMcm, SmuggledMethodReturnMessage& smuggledMrm)
at System.Runtime.Remoting.Channels.CrossAppDomainSink.DoTransitionDispatchCallback(Object[] args)
at System.Threading.Thread.CompleteCrossContextCallback(InternalCrossContextDelegate ftnToCall, Object[] args)

Additional error information:

Index was outside the bounds of the array.
Exception type: IndexOutOfRangeException
Source: Microsoft.BizTalk.Pipeline
Target Site: Int32 Read(Byte[], Int32, Int32)
The following is a stack trace that identifies the location where the exception occured

at Microsoft.BizTalk.Message.Interop.StreamViewOfIStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BinaryReader.ReadBytes(Int32 count)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadArrayAsBytes(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadArray(BinaryHeaderEnum binaryHeaderEnum)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream)
at Microsoft.XLANGs.Core.CustomFormattedPart.ProtectedUnpersist(Stream stm)
at Microsoft.XLANGs.Core.CustomFormattedPart.Unpersist(UnderlyingPart ulPart)
at Microsoft.XLANGs.Core.Part._slowProtectedRegisterWithValueTable()
at Microsoft.XLANGs.Core.Part.ProtectedRegisterWithValueTable()
at Microsoft.XLANGs.Core.Part.RetrieveAs(Type t)
at Microsoft.XLANGs.Core.DotNetPart.get_Object()
at Microsoft.BizTalk.XLANGs.BTXEngine.ExecMessage.GetParam(Int32 i)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXService.ArgsFromExecEnvelope(IBTMessage msg)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXService.DeliverMessageImpl2(Guid subscriptionId, IBTMessage msg, Boolean& receiveCompleted)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXService.DeliverMessageImpl(Guid subscriptionId, IBTMessage msg, Boolean& receiveCompleted)
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXService.DeliverMessage(Guid subscriptionId, IBTMessage msg, Boolean& receiveCompleted)


For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

Saturday, October 03, 2009

BizTalk Multi-Part Messages and Serialization

I've been back working on BizTalk lately so the next few posts are going to be about BizTalk issues that have arisen from some things that have come up lately.

This post is going to cover a defect that I had to resolve lately, and the insight into using BizTalk that comes from the cause / solution.

I am currently working on a banking system that needs to reliably flow payment transactions to people's accounts. The functionality of the system I was working on can be described as follows, without any danger of giving away anything commercially sensitive:

  • The system batch-processes BACS payments onto a ledger. Once per day input files come from a mainframe system and the payments need to be loaded into the ledger.
  • An SSIS process decodes the files into a staging database location.
  • The payments are extracted from the database for transmission to the ledger. This is done using BizTalk.
  • The payments are split into batches and transmitted to the ledger.
  • The ledger responds asynchronously with a message that describes the success / failure of the payment transaction.
  • Payments that ultimately fail get written into a SQL table.
  • Once per day, when processing of the payments is complete, the failed payments are extracted into a BACS file using an SSIS process.
Now, given this, a bug was raised stating that the amount value for the payments in the failed payment output file was being written as zero. Here is a description of the basic fault-finding process:

  • Bug was raised against the SSIS developer who wrote the output package. He unit tested the extraction process and verified that the value was being written correctly.
  • The bug was then assigned to the developer who wrote the data access layer that writes the failed payments into the output staging table. He then verified that the amount values get written correctly into the database.
  • The bug was then assigned to the BizTalk team and that meant that me, being one of the BizTalk architects with an overview of the entire process, was called in to look at the issue.
  • The first thing I did was to attach a debugger onto the BizTalk hosts, so that I could look at the actual values being passed through the system. First, I debugged the orchestration that writes out the failed payments. I verified that the amount being written by BizTalk was zero - thus confirming that there was no issue with the data access code.
  • I then debugged the orchestration that receives the failed payments and verified that the payment amount was non-zero. This meant that somewhere between receiving the payment and writing the payment out the value was being set to zero - but how?
The answer to this lay in the way that BizTalk handles messages. Using good practice, my payment information was held in a multi-part message type (see previous post). Because of the required throughput of the system and the need to access the payment object model, the payment data is held in BizTalk as a .Net object rather than an XML message. Now, this is OK - I can assign a .Net class to a message part as well as an XML schema - as long as the classes are XML serializable. This is because the multi-part message, when sent to the message box, gets XML serialized.

Now, we're getting somewhere. In the process I was looking at, the payments are received (as mentioned), context information is promoted on the multi-part message, and it is written into the message box. Different orchestrations then subscribe to the multi-part message by filtering on the context and performing payment-specific business processing. Through further debugging I narrowed the fault down: before we write to the message box the value is non-zero, and after the message box, in the subscribing orchestration, the value is zero. Baffling.

Now, the answer to this lay in the serialization, as you could probably guess from the title of this post. The payments were originally defined using an XML schema and then the .Net classes were generated using XSD.exe.

OK, so let's strip this back to the essence of the problem. Let's say that I have a payment schema:


And I then create a serializable class for this using XSD.exe:

//------------------------------------------------------------------------------
//
// This code was generated by a tool.
// Runtime Version:2.0.50727.3074
//
// Changes to this file may cause incorrect behavior and will be lost if
// the code is regenerated.
//
//------------------------------------------------------------------------------

//
// This source code was auto-generated by xsd, Version=2.0.50727.3038.
//
namespace Andrew.Blog.MultiPart.Entities {
    using System.Xml.Serialization;

    [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")]
    [System.SerializableAttribute()]
    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType=true, Namespace="http://Andrew.Blog.MultiPart.Schemas.Payment")]
    [System.Xml.Serialization.XmlRootAttribute(Namespace="http://Andrew.Blog.MultiPart.Schemas.Payment", IsNullable=false)]
    public partial class Payment : object, System.ComponentModel.INotifyPropertyChanged {

        private decimal paymentValueField;

        private bool paymentValueFieldSpecified;

        [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
        public decimal PaymentValue {
            get {
                return this.paymentValueField;
            }
            set {
                this.paymentValueField = value;
                this.RaisePropertyChanged("PaymentValue");
            }
        }

        [System.Xml.Serialization.XmlIgnoreAttribute()]
        public bool PaymentValueSpecified {
            get {
                return this.paymentValueFieldSpecified;
            }
            set {
                this.paymentValueFieldSpecified = value;
                this.RaisePropertyChanged("PaymentValueSpecified");
            }
        }

        public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

        protected void RaisePropertyChanged(string propertyName) {
            System.ComponentModel.PropertyChangedEventHandler propertyChanged = this.PropertyChanged;
            if ((propertyChanged != null)) {
                propertyChanged(this, new System.ComponentModel.PropertyChangedEventArgs(propertyName));
            }
        }
    }
}

And you will see that the serialized entity not only has a field for the decimal, but also expects the "PaymentValueSpecified" property to be set to indicate that the decimal has a value. This is because the field is marked as optional in the XML schema, to handle the cases where the field is nullable or unassigned. Unfortunately, in the code the flag to indicate that the payment value had been set had not been changed and was still false. [It would be a good idea to set this flag in the property setter.] Therefore, the XML serializer still thought there was no value, so the payment value element was not present in the XML when the object was serialized (as it was written to the message box). When the message is deserialized the XML element for the payment value is not present, so the decimal field defaults to zero. Hence the bug.
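In code terms, the fix in the data access layer is simply to make sure the Specified flag is set whenever the value is assigned (or, as suggested above, to set it inside the PaymentValue setter of the generated class). A sketch, with the payment variable name being hypothetical:

// Without PaymentValueSpecified = true the XmlSerializer omits the element,
// and the value comes back as zero after the message box round trip.
payment.PaymentValue = 123.45m;
payment.PaymentValueSpecified = true;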

In Summary

  • .Net objects can be used as part of multi-part message types; this improves the maintainability of your orchestrations and also allows context information to be written onto .Net objects.
  • .Net objects, when assigned to a message part, are XML serialized when they are written to the message box. For this purpose, they must be XML serializable. It is easiest to generate these classes from an XML schema.
  • Be careful that your .Net object serializes as you expect, because if it doesn't you can get unexpected and unexplained issues in your solution.


I would point out that I inherited the schemas and entities here, but somehow I was the one who had to sort out this issue. As usual ;)




Sunday, September 27, 2009

Handy BizTalk links

I haven't been blogging much lately because I have had a couple of back-to-back customer engagements that have involved a lot of travel and work hours and the last thing on my mind at the end of it has been to get my laptop out again! However, I have been working on some seriously large stuff and have come across lots of interesting things to blog about. I should soon be starting to detail some of these.

However, to get started, I have been working with Benjy (http://santoshbenjamin.wordpress.com/) lately and he sent me a link to a useful article so I thought I'd post it here. The reason I'm posting it is that one of the items that is detailed in here, multi-part message types in BizTalk, is going to be the subject of my next post.

Tuesday, July 07, 2009

SQL Deployment Strategies

Most line of business applications now maintain their state in a database. Well, most of the ones I have been involved with do anyway. I have recently been parachuted onto the fag end of a project to sort out... well, you guessed it - database deployment. And guess what - after I got involved and did my stuff we got it right. Almost.

It's quite easy to deploy a database from scratch. What always gets in the way is upgrading databases, especially when you have an application with many different versions in service and the data is in different states as well.

As a result of this I have been prompted to write a bit about database deployment strategies and to give some thoughts about what I believe is the way forward.

Clean / first-time deployment

The first time you deploy an application you can go for the "clean deployment" strategy. All you need to do is to create a SQL script (or scripts) that deploy your database and you're in business. A useful tool for this may be the SQL Publishing Wizard (http://www.microsoft.com/downloads/details.aspx?FamilyId=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en) which will script off your entire database so you can deploy it elsewhere.

One of the more difficult aspects of SQL deployment comes where the deployment will change depending on the target environment. A prime example of this is where you need to deploy your .MDF and .LDF files into specific locations (e.g. where your database server has SAN-attached storage) and so your create database script needs to specify this.

Another complication of environment-specific database installation is permissions and logins. Each environment will potentially have different logins.

The way that I usually get around this is as follows: I don't script off my database as one huge script. I break it down into sections (CreateDatabase.sql, CreateLogins.sql etc) and have a batch file run these scripts in order. I then use InstallShield to deploy the scripts onto the target server and run the batch file as a custom action. Easy, you might think. However, before the batch file runs I need to modify the create database script with the file locations and the permissions script with the actual logins. This is again something that InstallShield can do, or you can perform some other sort of environment-specific replacement on your scripts before you deploy them.

Upgrading a database

Strategy #1 - Create a set of "delta" scripts

Using this strategy, you have an install for v1.0 (say). When the time comes to make version 2.0 you create a script that assumes that your database is in the 1.0 state and then applies the changes. This may involve adding tables, adding columns to tables and adding data. You can even create a first step in your script that will raise an error if your database is not at the correct version.

Hint: Create a table in your database that includes the database version number. Your upgrade scripts can then update this value on each deployment, and it's a handy way of making sure that you don't run scripts incorrectly and a means of having a confidence check in your deployment.

The advantage of this method is that everything is above board. If you are installing application v3.0 and your database is at 1.0, you install database v2.0 and then database v3.0 and you're in business. The downside is that if your database structure is out of kilter then everything breaks irretrievably - but then again, is that such a bad thing? Another downside is that you never get a "complete" view of your database objects, as the scripts to create them become fragmented across many SQL scripts as you progress through versions.

Strategy #2 - use a SQL differencing tool!

This is a simpler strategy to implement if you have already got the scripts to create your database from scratch. What you do here is firstly maintain a "reference" installation of each of the databases you have in your live environment. You then create your new database installation by maintaining your database build scripts. When it comes to release, you use some sort of SQL Comparison tool to detect the differences between the database installations and generate a script to upgrade from one to the other.

An advantage of this approach is that your developers only work with scripts to create the new database standard (from scratch) and the upgrade process becomes an issue of release management, which may be controlled in a different way. Also, if you need to support customers who have different database structures that were in service before you got control of the deployment process you can just take a reference version of their database and create a script for them.

The downside of this approach is that it often requires the use of desktop tools and so does not lend itself to automated build processes; there is a human-factors element rather than process automation. From a quality perspective, this is to be avoided.

Strategy #3 - use re-runnable scripts

One of the features of a database is that it is really only the tables (and their data) that have to be retained when you upgrade. This means that, from the point of view of the deployment, you can drop and recreate most of your other database objects - views, stored procedures, indexes and so on. The scripts might look something like this:

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[usp_HelloWorld]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[usp_HelloWorld]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Andrew
-- Create date: 7/7/2009
-- Description: Hello World
-- =============================================
CREATE PROCEDURE usp_HelloWorld
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;

-- Insert statements for procedure here
SELECT 'Hello World!' AS HelloWorld
END
GO

The other thing with tables is that of course you can check to see if they are there, so your script for a table might look similar, but just wouldn't have the DROP statement at the top:

IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') AND type in (N'U'))
BEGIN

CREATE TABLE [dbo].[MyTable](
[ID] [uniqueidentifier] NOT NULL,
[SomeText] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];

PRINT 'Table MyTable added.'
END
ELSE
BEGIN
PRINT 'Table MyTable already exists.'
END

The other issue is where we upgrade tables by adding columns to them. The way that I would do this in my projects is to have a script for each table, with the create table script always creating the table at the latest version. Then for each upgrade I add something like this in the same script, underneath the CREATE statement:

IF NOT EXISTS (SELECT * FROM syscolumns WHERE [name] = 'AnotherColumn' AND id = (SELECT id FROM sysobjects WHERE [name] = N'MyTable'))
BEGIN
ALTER TABLE dbo.MyTable ADD
AnotherColumn int NULL
PRINT 'Column AnotherColumn on table MyTable added.'
END
ELSE
PRINT 'Column AnotherColumn on table MyTable already exists.'
GO

Using this strategy, you can create a set of database installation scripts that can be run and re-run any number of times, and running them brings the database to the latest version whether it is a fresh install or an upgrade. In my opinion, this is the safest way to write scripts for your database as you run the same set of scripts on all databases and can re-run them any number of times. It also makes patching easier, as again you just need to run the full set of scripts.

Summary Thoughts

As I said at the start, we've been doing this for years and yet time and again I see projects where this isn't going right. I think it is key for a project to decide on a database deployment and upgrade strategy and to stick with it. It won't go right if you leave it to chance and hope. Make sure the developers in your project team know what the strategy is and, when creating scripts, make sure that all scripts run OK. Test deployments and upgrades thoroughly.

Happy scripting!


Friday, June 19, 2009

Team Foundation Server: Some thoughts on source control branching strategies

I thought I'd just write a short note on some of the source control branching strategies employed and how these relate to how I might use my preferred source control repository, Team Foundation Server (TFS).

I have been working with a customer who are using the IBM Rational toolset for just about everything and I have been ranting about how bad it is and how great TFS is, but one of the things that has given me cause to think is the way that the customer uses the ClearCase source control for branching and merging.

This has echoes with another customer I worked with last year (very large UK bank) that was using Harvest - now a CA product - for their source control. This was also working in a similar way and I was considering how this contrasted with the way that we usually use TFS.

Streaming Source Control Model

The first thing that I would say is that TFS is tuned for developer productivity whereas IBM Rational ClearCase is tuned for control. This in itself makes for some interesting differences, which I may expound on in a later blog post. The next immediate observation is that they handle changesets differently. This is because Harvest and ClearCase use a hierarchical branching structure. Typically, you might have a series of environments that your code is going to progress through, and you might have different builds of code at each of these stages. You therefore create a branch that represents each of these stages and order them in a hierarchy. At the top of the stack you have your current production code, then you might have QA, then maybe System Integration, then Continuous Integration, and at the bottom of the stack you have a branch where you are actually developing.

The way that Harvest and ClearCase work is that you create packages of changes that essentially ought to relate to features, and which may actually contain multiple check-ins of code. When the feature is deemed to be complete in development it is "promoted" to the next level (i.e. CI), built and tested. You might vary the next bit depending on your project methodology, but essentially you take periodic releases of the software. This is usually done by promoting the tested packages from your CI environment and progressing them through the remaining branches until they become the production release. If bugs occur along the way, fixes can be applied at any of the other levels and added to the build.

One of the sticking points here is that you can only check in changes against a single feature at one time, and so this makes concurrent development more difficult - and this is especially difficult when it comes to a bug-fixing stage of a project iteration when lots of small changes are being made with high frequency. (I know we should only be promoting bug-free code, but get in the real world).

In Team Foundation Server source control it is possible to set up a similar branching strategy and use the merging features as a means of promotion. You can still create all of the associated levels, but there is no hierarchy implicit in the system; the hierarchy is enforced by usage and by the fact that merges can only be made up and down the branch relationships.

One of the major contrasts lies with TFS changesets. These can be associated with one or more work items, but the correlation is much looser, and the changeset is the atomic unit of change rather than the feature. Therefore, if you want to promote a feature you have to merge in all of the changesets up to the last changeset for that feature. This will also usually mean merging in changesets that relate to other features, meaning that code for "unfinished" features may get promoted as well.

This might appear to mean that TFS has got much less control (and there is some merit in that observation) but it also means that you have a more consistent behaviour when you merge. In the feature-based model of ClearCase it is possible that some dependent code, not changed in a feature being promoted but changed in another non-promoted feature, will change the overall behaviour of the solution. If you promote all changesets up to a given point then at least you know that your build will behave the same.

Release-Based Source Control Model

One of the implicit assumptions of the streaming approach is that you have a separate build in each stream and that the source code of the stream constitutes the "release" in many ways. An alternative model that is often used in TFS is to branch based on releases. Let's imagine you are shipping a software product Widget 2009. You also have to support Widget 2008, Widget 2007 and Widget 2006, all of which are based on the same source code but with enhancements and developments in the intervening period. You have to support all of these products and be able to issue service packs against them. You also need to make sure that if you bug fix Widget 2006 that same fix can be merged into the later releases as well.

In this scenario the hierarchical streaming model is not suitable, because each time you promote a new set of features to your production stream and start building a new "production" release of your software you are effectively ending the ability to build your previous versions.

What you might do in this model is to have a branch for each release of your software again with a build process for each branch, but each branch does not overwrite the others and has its own lifecycle. If you need to patch a previous release into "production" you don't have to overwrite the other production release.

When to use each approach

I have discussed a couple of different approaches to branching and maintaining releases, both of which are in use in various organisations and each of which has its merits and demerits. The question is: if you have to put in place a branching strategy, which one would you choose? Which one is most appropriate in which circumstances, and what are the pros and cons of each approach?

What I would say with this is that if you are shipping software product where you need to be able to manage the source on many versions of the software at the same time then you will need the second approach, and have a branch for each release. This scenario involves many different users who use different releases from each other and therefore all need to be supported. Desktop applications definitely fall into this category, as do many other retail packaged software products such as components and server products.

The release-based approach does have its side-effects and these should not be ignored. The main one is that if you have many releases of your software you end up with a large number of branches and build profiles, and these become difficult to manage. That said, an annual release of a software product isn't going to lead to unacceptable overhead in this model.

However, another very common scenario is where an organisation has a software product that needs to be refreshed on a periodic basis. When the upgrade happens all users are affected at the same time and cannot choose whether they participate in the upgrade or not - they are sent the upgrade anyway. This most commonly applies to software teams within a company producing bespoke software, but may also apply to self-updating applications such as iTunes where upgrades are pushed out regularly, downloaded over the Internet and installed. It also applies to .com organisations where you obviously have one current production build of your software.

In these scenarios you tend to have a high number of releases, especially with agile projects, and once a release has made it to production you discard the previous releases as you will never be opening up and servicing the old code. In this scenario it is easier to manage your source code if you have a limited number of streams and promote changes up through them, irrespective of whether you are promoting changesets in TFS or features in Harvest or ClearCase. Once you have got all of your branches building you may find you have a lower project overhead in maintaining your builds.

In conclusion.....

A modern source control repository must support effective branching and merging in order to handle development of new versions of software whilst supporting current versions. The manner in which you branch will depend on your release cycle and the type of software that you produce. Picking the correct branching strategy for your project will have a direct impact on how effectively you can support your software, so take time to think about it and get it right.

Tuesday, June 09, 2009

Enable Unit Testing in BizTalk 2009

BizTalk 2009 has, for the first time, built-in developer support for testing schemas, maps and pipelines that can be used for automated unit testing, rather than being hidden behind Visual Studio features that only allow manual testing.

Anyone interested in finding out more about BizTalk unit testing wouldn't go far wrong looking at Michael Stephenson's series on unit testing http://geekswithblogs.net/michaelstephenson/archive/2008/12/12/127828.aspx

To use or not to use (the unit testing framework)

There has been a debate around the office lately regarding the unit testing features in BizTalk 2009. The debate goes something like this:

1. You don't want to have unit testing enabled on your production assemblies.
2. If you enable unit testing on your assemblies for test and then switch it off for release then you are releasing different (although generated) code.
3. Is the unit testing framework any good anyway?

There seemed to be a consensus that the unit testing features in BizTalk 2009 aren't that good and we should do without them. However, I beg to differ! Premise #1 - that unit testing should not be enabled on production releases - is, I think, a false premise.

For example, let's look at unit testing of schemas in more detail. When you enable unit testing you actually change the base class from which your schemas derive, but this base class in turn inherits from the SchemaBase class anyway. All that is added is a method called ValidateInstance which you hook into for your tests - all the rest of the implementation is the same, so to me there is no issue with using this in production code. It means that you can build in release mode and then run your unit tests against your production assemblies to examine their quality which surely should be a good thing!
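As a rough illustration of what this gives you - assuming a hypothetical schema class called CustomerAccount built with unit testing enabled, and a sample message sitting on disk - a schema unit test ends up looking something like this:

using Microsoft.BizTalk.TestTools.Schema;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerAccountSchemaTests
{
    [TestMethod]
    public void CustomerAccountSchema_ValidatesSampleInstance()
    {
        // CustomerAccount is compiled with unit testing enabled, so it derives
        // from TestableSchemaBase and picks up the ValidateInstance method
        CustomerAccount schema = new CustomerAccount();

        // Validate a sample XML instance file against the compiled schema
        bool isValid = schema.ValidateInstance(@"C:\TestData\CustomerAccount.xml",
                                               OutputInstanceType.XML);

        Assert.IsTrue(isValid, "The sample instance should be valid against the schema");
    }
}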

Remember to set the configurations

When you set your deployment properties on your BizTalk project, you are probably developing in Debug mode, and you might not think to change the deployment settings for release. If you don't explicitly enable unit testing for all configurations or specifically for release then by default your release assemblies will be built without unit testing enabled.

This is something that happened to us today. We are in the first iteration of a project and we successfully put a build process in place right at the start; that build process packages the code into an InstallShield package and installs it onto a Continuous Integration environment. We were just getting the build to run the unit and integration tests and publish the results back so we could examine the build quality, when we started getting an error like this while building the test projects:

Cannot implicitly convert type 'XXXX.YYYY.Transforms.CustomerAccount_to_CustomerDetails' to 'Microsoft.BizTalk.TestTools.Mapper.TestableMapBase'

My first reaction was to check that the references in the unit test project were correct - which they were. I had copied the Microsoft.BizTalk.TestTools.dll assembly into a referenced assemblies folder. That was all OK. I then checked to see whether the referenced assembly was available on the build server. It was. In the end the issue was that we were building in Release mode and I had only enabled unit testing in Debug mode. Because of this, my schemas were inheriting from SchemaBase and not TestableSchemaBase and my maps from MapBase and not TestableMapBase etc. Whilst the schemas and maps projects all built OK the build error appeared in the test project.

As soon as I enabled unit testing for all build configurations, everything built OK and I could go ahead and build my tests.
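For reference, the map tests that were failing to compile are written against TestableMapBase's TestMap method, roughly like this - using the map from the error message above, with placeholder file paths, and with the enum names written from memory so check them against the Microsoft.BizTalk.TestTools assembly:

using Microsoft.BizTalk.TestTools.Schema;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerAccountMapTests
{
    [TestMethod]
    public void CustomerAccount_to_CustomerDetails_ProducesOutput()
    {
        // The map only derives from TestableMapBase when the configuration
        // it was built with has unit testing enabled
        var map = new XXXX.YYYY.Transforms.CustomerAccount_to_CustomerDetails();

        // Run the map over a sample input message and write the result to disk
        map.TestMap(@"C:\TestData\CustomerAccount.xml",
                    InputInstanceType.Xml,
                    @"C:\TestData\CustomerDetails.xml",
                    OutputInstanceType.XML);

        Assert.IsTrue(System.IO.File.Exists(@"C:\TestData\CustomerDetails.xml"));
    }
}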

Test tools not available

This then led quickly onto another issue. The build process always builds in Release mode, but it had originally been set up with unit testing disabled. Once I enabled unit testing the build was OK and all of the unit tests were passing. However, we're doing continuous integration, so after the unit tests the build process includes a deployment step and then a second set of integration tests on the deployed system.

Unfortunately, once the build was making the assemblies testable the deployment script failed (we use InstallShield to put the DLLs onto the file system and then use BTSTask to deploy them into BizTalk as a custom action). BTSTask was failing to deploy the new build because the reference to Microsoft.BizTalk.TestTools could not be resolved. I checked my development machine - it has BTS2009 Developer Edition and Microsoft.BizTalk.TestTools was in the Global Assembly Cache. I then checked the build server, which also uses Developer Edition, and Microsoft.BizTalk.TestTools was in the GAC there too. I then checked the integration server that we automatically deploy to - Microsoft.BizTalk.TestTools was missing from the GAC because the developer tools were not installed.
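If you just want to unblock a target server by hand, and assuming gacutil is available on the box, dropping the assembly into the GAC is a one-liner (the folder below is simply wherever you have copied the DLL to):

gacutil /i "C:\Deploy\ReferencedAssemblies\Microsoft.BizTalk.TestTools.dll"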

I had to modify the build so that the test tools assembly was deployed into the GAC, and then the assemblies would deploy OK. This goes right back to point #1 above: if the core BizTalk 2009 install does not include the test tools, should we be putting them on there anyway, or using a different method of unit testing that does not rely on the test tools? On the other hand, if the test tools are there out-of-the-box, shouldn't we be using them?

The debate rumbles on.......

Wednesday, June 03, 2009

Some funnies with creating and writing to event logs

Today I was working with my project team on tracing and diagnostics, and we made a decision to move all of our event sources into a new, application-specific event log. So, I changed my installer to create the new event log, removed my old event sources and registered my new event sources against the new log.
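For anyone doing the same, the installer change amounted to something along these lines - the source and log names here are made up for the example:

using System.Diagnostics;

public static class EventSourceSetup
{
    public static void MoveSourceToCustomLog()
    {
        // If the source is currently registered against the Application log,
        // remove it so that it can be re-created against the new custom log
        if (EventLog.SourceExists("MyBizTalkService"))
        {
            EventLog.DeleteEventSource("MyBizTalkService");
        }

        // Register the source against the new, application-specific event log
        EventLog.CreateEventSource(
            new EventSourceCreationData("MyBizTalkService", "MyApplication Log"));
    }
}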

I then made sure that the BizUnit tests were changed to look for the event log entries in the correct event log, ran the regression suite and - guess what - they all failed!  I looked for the events and found that my new event log was there but pitifully empty; the events were still being written into the Application log instead.

I then checked my registry keys in HKLM\SYSTEM\CurrentControlSet\Services\EventLog and they were all OK as well, all pointing to the correct event log.

I was also wondering if this was a Windows 2008 thing, because the event log infrastructure has changed quite a bit in there as well.

I did a bit of rooting around and found some good thoughts out there, including a forum post here http://www.eggheadcafe.com/community/aspnet/2/10022041/writing-to-custom-event-l.aspx which points to some support info from Microsoft http://support.microsoft.com/default.aspx/kb/815314 - but that didn't really take me anywhere because I was already doing what seemed to be the right thing.

In the end it was much more mundane.  If you have removed an event source and registered it onto a different event log you need to reboot.  When the machine comes back up again all the events go into the right place.