Thursday, 22 November 2007

Hi

Recently, I ran into a peculiar problem that really had me fooled for a long time.

Basically, I had a BizTalk Server 2006 R2 solution running. It consisted of 10 assemblies, and in 9 of them one orchestration was exposed as a web service - all using complex types, no plain strings, ints and so on.

And then there was the system that was to call my web services. I had no say over that system. We started with the first web service - everything went OK. Then they implemented code to call the second web service - OK as well... all was well (except for some minor things here and there) for the first eight web services.

But at the ninth web service, strange things started to happen.

Basically, the other system would call my web service, IIS would return HTTP 200 OK, but still no data came into BizTalk. I had NOTHING on the group hub page, nothing in HAT, nothing in the event log, NOTHING! The IIS log said: I received a POST on this URL and responded with 200 OK - that's it - nothing more.

Really weird - I mean... where did the XML go? Why were there no errors? So we installed YATT, an HTTP sniffer tool that could tell us exactly what was sent across the wire. What we found was that although the sender might have sent 13000 bytes, the sniffer only reported maybe 10500 bytes. So we started investigating the network. The two servers were on the same subnet, one hop away from each other, so no servers along the route could have meddled with the traffic.

I decided to write my own little C# test program that would call the web service, to see if that failed as well. It didn't. I ended up calling the web service successfully with more than 100 KB (I didn't bother trying anything larger than that).

But it turned out that the sniffer must have a bug - it reported all sorts of different numbers when I used my test program, and none of them were correct. Apparently, it wasn't built to handle large payloads, just a few kilobytes. So we installed Wireshark instead (get it from SourceForge). Now THAT is a nice tool! Totally professional (and free), and it showed us everything that came in and out - no limitations.

So we did a test with my tool and a test with the other system, and compared the HTTP headers, the SOAPAction and so on. It turned out that all the other web services, when called by the other system, returned HTTP 202 Accepted and not HTTP 200 OK. And when my test program called the failing web service, it also got HTTP 202 Accepted.

We ended up discovering what the issue was. The other system (written in .NET) wasn't calling my web services the "right way". They were sending everything as raw HTTP requests. Now, this is a perfectly legal way of doing it, but it really requires that you know what you are doing. I mean: they added a SOAPAction header to the HTTP request, and then they built up the XML themselves - SOAP envelope, SOAP body, elements for the web method and, inside that, the actual payload - and sent it all as one string in the HTTP request.
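
Roughly, a hand-rolled call like theirs looks like the sketch below. The URL, SOAPAction and payload are made up for illustration; the point is that nothing checks the payload against the service's types before it is posted.

// Sketch of a hand-rolled SOAP call (hypothetical URL, SOAPAction and payload).
using System;
using System.IO;
using System.Net;
using System.Text;

class HandRolledSoapCall
{
    static void Main()
    {
        string url = "http://myserver/MyOrchestration_Proxy/MyService.asmx";   // hypothetical
        string soapEnvelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body><MyWebMethod xmlns=\"http://tempuri.org/\">" +
            "<request><SomeDate>2007-13-45</SomeDate></request>" +              // an invalid xsd:date - nothing complains here
            "</MyWebMethod></soap:Body></soap:Envelope>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers.Add("SOAPAction", "\"http://tempuri.org/MyWebMethod\"");

        byte[] body = Encoding.UTF8.GetBytes(soapEnvelope);
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // A one-way web service published from BizTalk normally answers 202 Accepted.
            Console.WriteLine("Status: " + (int)response.StatusCode + " " + response.StatusCode);
        }
    }
}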

The answer turned out to be that the XML the other system was sending me had invalid data in elements of type xsd:date. So the XML couldn't be deserialized into the object that my web method on the web service was expecting. Therefore, the web method was never called, and therefore there was no data in BizTalk, the event log or anywhere else.

So, I have learned two things from this:

1. You should always accept the help your programming environment gives you. If the programmer of the other system had added a web reference to the web service and used the generated proxy - deserializing his XML into the object that was the parameter, or building that object directly - he would have gotten an exception at runtime that he could debug (see the sketch after this list). The way he did it meant that we got NO errors at all - the data just disappeared (which I think .NET shouldn't do; some sort of warning somewhere would have been nice).

2. When debugging, don't blindly trust the tools you download to help you debug :-)
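
For comparison, here is a rough sketch of the web reference approach from lesson 1. The proxy below mimics what "Add Web Reference" generates; all class, method and URL names are hypothetical.

// Sketch of the proxy-based approach - everything below is hypothetical example code.
using System;
using System.Web.Services.Protocols;

public class MyRequestType
{
    public DateTime SomeDate;   // a typed DateTime - you simply cannot put an invalid xsd:date in here
}

[System.Web.Services.WebServiceBinding(Name = "MyService", Namespace = "http://tempuri.org/")]
public class MyServiceProxy : SoapHttpClientProtocol
{
    public MyServiceProxy()
    {
        this.Url = "http://myserver/MyOrchestration_Proxy/MyService.asmx";   // hypothetical
    }

    [SoapDocumentMethod("http://tempuri.org/MyWebMethod")]
    public void MyWebMethod(MyRequestType request)
    {
        this.Invoke("MyWebMethod", new object[] { request });   // the proxy builds the envelope and SOAPAction
    }
}

class Caller
{
    static void Main()
    {
        MyServiceProxy proxy = new MyServiceProxy();
        MyRequestType request = new MyRequestType();
        request.SomeDate = new DateTime(2007, 11, 22);

        try
        {
            proxy.MyWebMethod(request);
        }
        catch (SoapException ex)
        {
            // Faults and serialization problems surface here as exceptions you can actually debug.
            Console.WriteLine(ex.Message);
        }
    }
}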

I hope this can help someone.

--
eliasen

Thursday, 22 November 2007 00:34:20 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 

Hi

I am currently supporting an existing BizTalk 2004 environment, and came across the need to debug an orchestration in the production environment. Yes, I know - "Not in the production environment", you scream... but yes, indeed - in the production environment.

Anyway, I set some breakpoints, waited for the orchestration to hit the breakpoint and tried to attach to the orchestration. I got this error:

Debugging user validation against group '<servername>\BizTalk Server Administrators' failed with error: Debugging Client is not a BizTalk Server Administrator.

This seemed odd, so I investigated a bit further. It turns out that the setup is a multi-server setup, i.e. one server for SQL Server and one for BizTalk 2004. It also turns out that the guy who installed the servers didn't use domain groups. The services were running under domain accounts, but the BizTalk groups were created as local groups on both machines. Not a supported setup, but I am hoping they will upgrade to BizTalk 2006 R2 before long, so we are not going to touch that.

Anyway, it turns out that the user I was logged in as was a member of the "BizTalk Server Administrators" group - but only on the BizTalk server. Once I added the user to the same local group on the SQL Server machine, all was fine.

I googled the error and didn't stumble upon an answer, so I just thought I'd blog about it in case anyone needs the answer some day :-)

--
eliasen

Thursday, 22 November 2007 00:11:37 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Sunday, 21 October 2007

Hi

I just thought I would share my experiences from the first BizTalk 2006 R2 environment I have installed and configured.

It was on two different boxes - one for SQL Server and one for BizTalk. Domain groups were created beforehand, as well as a service account for the services. So everything should be in place.

Installation went fine, naturally, but the configuration wouldn't let me configure Group and Runtime. I checked the logs, of course, and the first error was this one:

[09:14:15 Info ConfigHelper]  is not a local entity.
[09:14:15 Error ConfigHelper] d:\depot2300\mercury\private\common\configwizard\confighelper\service.cpp(729): FAILED hr = 80070421

[09:14:15 Warning ConfigHelper] The account name is invalid or does not exist, or the password is invalid for the account name specified.
[09:14:15 Warning ConfigHelper]  Failed to validate service credentials for account: %1

So it had to be something about the credentials I had specified. So I unconfigured and reconfigured, being very careful to enter the correct credentials - same error. I tried again, with extra, extra focus on not mistyping anything. Same error.

Then I searched some more in the log file, and found this:

2007-09-25 09:16:49:0441 [INFO] WMI Deploying 'C:\Program Files\Microsoft BizTalk Server 2006\Microsoft.BizTalk.GlobalPropertySchemas.dll'
2007-09-25 09:16:49:0723 [WARN] AdminLib GetBTSMessage: hrErr=80070002; Msg=The system cannot find the file specified.;
2007-09-25 09:16:49:0723 [WARN] AdminLib GetBTSMessage: hrErr=c0c02560; Msg=Failed to read "KeepDbDebugKey" from the registry.
The system cannot find the file specified.;

But the file actually existed. Then I searched the log file some more, and found this:

2007-09-25 09:16:49:0863 [INFO] WMI Error occurred during database creation; attempt to rollback and delete the partially created database'hcpr-hd-axa-01\BizTalkMgmtDb'
2007-09-25 09:16:49:0863 [INFO] WMI Calling CDataSource.Open() against hcpr-hd-axa-01\master
2007-09-25 09:16:49:0879 [INFO] WMI CDataSource.Open() returned
2007-09-25 09:17:09:0942 [WARN] WMI Rollback failed.  Could not delete database.
2007-09-25 09:17:09:0942 [ERR] WMI Failed in pAdmInst->Create() in CWMIInstProv::PutInstance(). HR=c0c025b3
2007-09-25 09:17:09:0942 [ERR] WMI WMI error description is generated: Exception of type 'System.EnterpriseServices.TransactionProxyException' was thrown.
2007-09-25 09:17:09:0942 [INFO] WMI CWMIInstProv::PutInstance() finished. HR=c0c025b3
[09:17:09 Error BtsCfg] d:\depot2300\mercury\private\mozart\source\setup\btscfg\btswmi.cpp(358): FAILED hr = c0c025b3

[09:17:09 Error BtsCfg] Exception of type 'System.EnterpriseServices.TransactionProxyException' was thrown.
[09:17:09 Error BtsCfg] d:\depot2300\mercury\private\mozart\source\setup\btscfg\btscfg.cpp(1769): FAILED hr = c0c025b3

This error pointed to some transaction problem, so I downloaded and ran DTCTester, and it turned out my MSDTC settings were not good enough. I spent the better part of a day looking for this. What really had me confused was that the SSODB was created fine, the BRE database was created fine... and the BizTalkMgmtDb database was sometimes created just fine. I mean... sometimes it would create the BizTalkMgmtDb database and fail during creation of the MessageBox. Other times it would fail on the Management database. So seeing as two databases were created just fine, I really didn't think there were any issues with DTC.
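
If you want a quick check of MSDTC without dtctester, a little System.Transactions program like the sketch below provokes the same kind of distributed transaction that the BizTalk configuration needs. The "RemoteSqlBox" server name is a placeholder.

// Minimal MSDTC smoke test: two connections in one TransactionScope force escalation to DTC.
// "RemoteSqlBox" is a placeholder - point it at the SQL Server machine.
using System;
using System.Data.SqlClient;
using System.Transactions;

class DtcSmokeTest
{
    static void Main()
    {
        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                using (SqlConnection local = new SqlConnection("Server=.;Database=master;Integrated Security=SSPI"))
                using (SqlConnection remote = new SqlConnection("Server=RemoteSqlBox;Database=master;Integrated Security=SSPI"))
                {
                    local.Open();
                    remote.Open();   // the second connection escalates the transaction to MSDTC

                    new SqlCommand("SELECT 1", local).ExecuteNonQuery();
                    new SqlCommand("SELECT 1", remote).ExecuteNonQuery();
                }
                scope.Complete();
            }
            Console.WriteLine("Distributed transaction committed - MSDTC looks OK.");
        }
        catch (Exception ex)
        {
            Console.WriteLine("MSDTC problem: " + ex.Message);
        }
    }
}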

BUT, this just goes to show: before starting a multi-box installation of BizTalk, ALWAYS run DTCTester first - just to be sure :-)

--
eliasen

Sunday, 21 October 2007 20:02:05 (Romance Daylight Time, UTC+02:00)  #    Comments [5]  | 
Thursday, 18 October 2007

Hi all

This isn't about BizTalk, .NET or anything else technical. This is about ME!

The other day I mended a fuse in our car.

Really? you might say.. so what? Well, to me this is a big deal :-) I don't like getting dirty hands - I generally never do anything practical around the house... I am lousy at it, I hate it, and I would rather pay someone else to do it.

But then, the back light on the car stopped working. I changed the light bulb, which in itself took me about 2 hours, including driving to the gas station to buy a new bulb... and then it turned out the original bulb wasn't broken. That sucked! Then I decided I had to take the car to the mechanic... but a friend asked me if I had checked the fuses. Well, duh... of course not - how would I do that? So with the manual in one hand and a screwdriver (YES, a screwdriver... me... a screwdriver...) in the other hand, I found the fuse that wasn't working anymore. I drove, once again, to the gas station, bought a new one (approximately one dollar) and put it into place. And now the back lights are working again.

WOW! What an experience, eh? :-)

So after reading this, you might still think: "Is this guy crazy? All this fuss about mending a fuse (thanks to Mads Orbesen Troest for telling me how to say this in English)?" YES! It's a big deal! :-)

--
eliasen

Thursday, 18 October 2007 22:07:20 (Romance Daylight Time, UTC+02:00)  #    Comments [2]  | 
Sunday, 14 October 2007

Hi

The other day I published an orchestration of mine as a web service. Not a big deal. Then, I needed to export the MSI for my application, so I could install it on the test server. Now THAT was a Big Deal! :-)

I got this one:

A really silly restriction on a quite normal Windows Server 2003 R2: the entire path of a file, including the filename, must be less than 260 characters long, and the directory path itself must be less than 248 characters long.

This had me stunned for a moment, until I took a closer look at which file creation was the issue. It turns out that the problem was with creating temporary files in c:\documents and settings\administrator\local settings\temp. Yes, I am logged in as administrator. No, I wouldn't normally do that. Quit asking these questions, and let me finish the post! Right. Then I got clever, if I may say so myself (nobody else is saying it, so I suppose I have to do it myself :-) )... It turns out that the temporary files are created in the %TMP% directory (NOT the %TEMP% one...). So what to do? Simple - change the %TMP% environment variable to point at c:\tmp instead of c:\documents and settings\administrator\local settings\temp. That's what I did, and it worked. I got my MSI file.
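
If you want to verify which directory your process will actually use for temporary files, a quick check like this shows it - GetTempPath() resolves %TMP% before %TEMP%, which is why changing %TMP% is what makes the difference:

// Prints the temp-related environment variables and the directory .NET will actually use.
using System;
using System.IO;

class TempPathCheck
{
    static void Main()
    {
        Console.WriteLine("TMP  = " + Environment.GetEnvironmentVariable("TMP"));
        Console.WriteLine("TEMP = " + Environment.GetEnvironmentVariable("TEMP"));
        Console.WriteLine("Temporary files go to: " + Path.GetTempPath());
    }
}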

BUT... Well... You know that sometimes you do something that is like peeing in your pants? At first it is warm, but then it just gets cold and nasty? Well, this is like that. Because when I then took the MSI file to the test server and tried to install it (not the import part, but the install part), I got the exact same error. The default installation path is C:\Program Files\Generated by BizTalk\ - which is also a rather long path. So in order to install my application, I ended up installing it to c:\biz. Now having to tell your customer that they can't install the application to a path longer than 5 characters really isn't an option.

So my clever and very nice workaround of setting the TMP environment variable to c:\tmp in order to generate the MSI file really wasn't all that clever, since the installation wasn't acceptable at all. Had the issue just been my own developer box, I wouldn't have minded... but now I have to go rename all artefacts anyway. Bugger!

So basically, this post is written for people looking for a workaround for the error they get with long filenames/paths. My suggestion: Rename your artefacts, and don't wet your pants! :-)

--
eliasen

Sunday, 14 October 2007 21:52:42 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Sunday, 07 October 2007

Hi

Well, some people have their BizTalk vNext wishlist on their blog. I'd like to add a couple of requests to the growing list :-)

  1. For development purposes, it would be really nice to be able to right-click a receive location that is disabled and choose "Execute". If, for instance, I have a SQL adapter receive location that is supposed to poll every minute, then I don't want to have to enable it and then quickly disable it again once it has fired. I want to keep it disabled, so data won't go through my system when I am not ready, and then just execute it whenever I am ready.
  2. Deployment of a single assembly from VS.NET. If I have three projects in my solution - Schemas, Maps and Orchestrations - and both Maps and Orchestrations reference Schemas, then I cannot deploy just one of them on its own from VS.NET :-( Deploying Orchestrations will make VS.NET deploy Schemas as well - even if there are no changes to it. To deploy Schemas, the current Schemas assembly must be undeployed, and therefore the Maps assembly must also be undeployed. So VS.NET will undeploy Orchestrations, undeploy Maps, undeploy Schemas, deploy Schemas and deploy Orchestrations. This isn't acceptable, because Maps isn't deployed anymore. If I then deploy Maps, the same thing happens, only this time Orchestrations gets undeployed and isn't redeployed. To me, VS.NET should ONLY interfere like that if I deploy the entire solution. If I deploy just one project, then just let me do so! Right now, I would have to let Orchestrations reference Maps, even though it isn't necessary, and then always deploy Orchestrations.
  3. Restart Host Instances only once. Right now, if I deploy my solution from VS.NET, and this solution has 10 projects that are all set to "Restart Host Instances" on deployment, then the host instances will get restarted 10 times. It would be nice if VS.NET could figure this out and only do it once.
  4. Specify the node that is the body when using enveloping, and not just the parent. It makes great sense that I can specify a node and all child elements are then submitted as separate messages from the receive pipeline. This is how we can receive orders, invoices, etc. in the same XML. BUT, if I receive XML where I only need the orders, then I would like to point at the Orders element so that is all I get. Right now I have to use standard enveloping and implement logic to just delete the invoices, etc. Not really nice, I think.

That's it for now :-)

--
eliasen

Sunday, 07 October 2007 00:40:36 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 

Hi

A guy on the newsgroups recently needed to create exactly 5 elements in the output of his map, no matter how many records appeared in the input.

Well, I am always looking for new things to try out, and frankly, my XSLT coding skills could be better, so I thought I'd give it a shot.

I created a project with the following input schema:

The schema is for an XML document and has a header (1..1), a LoopingRecord (1..1) and a footer (1..1). The LoopingRecord has a Field5 element that can appear at most 5 times.

The output schema looks like this:

This schema is for a flat file. It basically has the same structure as the input schema, the exception being that the Field1 element has minOccurs=5. It MUST be present 5 times - this is a schema for a positional file.

The map is pretty simple:

Header and footer are mapped using regular mapping techniques. But the Detail-element is created using a custom scripting functoid.

The string concatenate functoid only has one input: the string "5". This is because I want to create exactly 5 elements in the output.

The custom scripting functoid is an "Inline XSLT Call Template" scripting functoid with the following code:

<xsl:template name="CreateXElements">
   <xsl:param name="totalCount" />
      <Detail>
         <xsl:for-each select="/*[local-name()='InputRoot']/*[local-name()='LoopingRecord']/*[local-name()='Field5']">
            <DetailLoop>
               <Field1><xsl:value-of select="text()" /></Field1>
            </DetailLoop>
         </xsl:for-each>
         <xsl:variable name="countRecords" select="count(/*[local-name()='InputRoot']/*[local-name()='LoopingRecord']/*[local-name()='Field5'])" />
         <xsl:if test="$countRecords &lt; $totalCount">
            <xsl:call-template name="BuildTheRest">
               <xsl:with-param name="counter"><xsl:value-of select="$countRecords + 1" /></xsl:with-param>
               <xsl:with-param name="totalCount"><xsl:value-of select="$totalCount" /></xsl:with-param>
            </xsl:call-template>
         </xsl:if>
      </Detail>
</xsl:template>
<xsl:template name="BuildTheRest">
   <xsl:param name="counter" />
   <xsl:param name="totalCount" />
      <DetailLoop>
         <Field1></Field1>
      </DetailLoop>
      <xsl:if test="$counter&lt;$totalCount">
         <xsl:call-template name="BuildTheRest">
            <xsl:with-param name="newCounter"><xsl:value-of select="$counter + 1" /></xsl:with-param>
         </xsl:call-template>
      </xsl:if>
</xsl:template>

Basically, I start by copying the existing nodes to the destination. As I have explained in this post, you need to use XSLT for the whole thing - you can't copy the existing nodes using the mapper and create only the new nodes using XSLT. After the existing nodes have been copied, I create the new empty nodes by recursively calling a template that creates a single element for me.
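
Just to illustrate the result: if the input happens to contain three Field5 values, the templates above produce a Detail record along these lines (the values are made up):

<Detail>
   <DetailLoop><Field1>value 1</Field1></DetailLoop>
   <DetailLoop><Field1>value 2</Field1></DetailLoop>
   <DetailLoop><Field1>value 3</Field1></DetailLoop>
   <DetailLoop><Field1></Field1></DetailLoop>
   <DetailLoop><Field1></Field1></DetailLoop>
</Detail>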

You can find my entire project here: CreateXNumberOfElements.zip (23.33 KB)

I hope this will come in handy for someone in the future.

--
eliasen

Sunday, 07 October 2007 00:09:42 (Romance Daylight Time, UTC+02:00)  #    Comments [4]  | 
Tuesday, 18 September 2007

Hi

An old colleague of mine asked me if there wasn't some way he could override the folder of a send port using the FILE adapter, so that he could decide, from within his orchestration, the entire path of the file to be written.

After all, you can override many things, like the SMTP server to be used in a send port or the username and password for an FTP connection. So why not the file folder?

Well, I have investigated it a little, and I can't find a way to make BizTalk do it.

I tried:

  1. Leave the folder on the send port blank, use %SourceFileName% as the filename and set its value (the FILE.ReceivedFileName property of the message) inside the orchestration. This isn't valid, since the folder path cannot be empty.
  2. Set the folder to c:\ and set the FILE.ReceivedFileName property to <directory> + "\\" + <filename>. This didn't work either: when writing the file, BizTalk stripped all folder names from the FILE.ReceivedFileName value and just wrote the file with that filename in c:\.

Of course, you can use a dynamic send port to do it (see the sketch below), but this customer wanted to avoid seeing all those subscriptions in the subscription viewer.
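
For completeness, a dynamic send port is set up with a couple of lines in a message assignment shape, roughly like this (port, message and path names are made up):

// Message assignment shape - the orchestration decides the complete path (names are made up).
OutMessage = InMessage;
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "FILE://C:\\SomeFolder\\SomeSubFolder\\MyFile.xml";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";   // the FILE:// prefix usually implies this, but being explicit doesn't hurt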

So I don't see a way around it. And basically, when you know it works like this, it makes sense: the administrator who has decided which folder files go into also gets to decide that you cannot override it. Oh well, that's life.

Hope this helps someone at some point...

--
eliasen

Tuesday, 18 September 2007 22:29:23 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Sunday, 16 September 2007

Today I discovered that Google Groups can tell me how many posts I have made to the newsgroups.

I switched profile at some point in my newsgroup career, so there are two profiles to watch:

This one (my first) and this one (my current).

Quite a lot of posts, actually - no wonder I get tired every now and then :-)

--
eliasen

Sunday, 16 September 2007 23:26:30 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 

Well, I suppose we have all been there – in order to get the business process running, a specific element from a schema needs to be promoted in order to route on it, correlate on it, and so on.

 

Unfortunately, elements that can occur more than once cannot be promoted. This, of course, makes perfect sense, since the property can only hold one value - how would BizTalk know which of the many occurring elements to take the value from at runtime? So we agree with the limitation, but hope for a nice solution. :-)

 

If you try to promote a reoccurring element, you get this error when adding it to the list of promoted properties:

“This node can occur potentially multiple times in the instance document. Only nodes which are guaranteed to be unique can be promoted.”

 

Right. Now, some people have found the editor for the XPath describing the element that one wants to promote. If you have promoted some element, you can click on it like this:

Then you can click on the dot at the right of the line, and get into the editor like this:

 

Now, wouldn’t it be lovely if you could just change this expression to include, for instance, an index on the reoccurring element? In my example from this screenshot, the “ReoccuringRecord” record can occur multiple times. So it would be nice if I could just change the XPath to be like this:

 

/*[local-name()='ExampleRoot' and namespace-uri()='http://PromotingReoccuringElement.ExampleSchema']/*[local-name()='ReoccuringRecord' and namespace-uri()=''][1]/*[local-name()='ElementWhereNumber1IsPromoted' and namespace-uri()='']

 

By putting the “[1]” into the XPath, I state that I need the first occurrence of ReoccuringRecord, and therefore this XPath expression will always give me exactly one node. Unfortunately, the engine cannot see this, so the error is the same - the only difference being that it doesn’t occur until compile time:

 

Node "ElementWhereNumber1IsPromoted" - The promoted property field or one of its parents has Max Occurs greater than 1. Only nodes that are guaranteed to be unique can be promoted as property fields.

 

Bummer!

 

So how do we get this working? If I really need to promote a value that occurs in an element that might occur multiple times, I see four options:

 

  1. Map to a schema on receive port
  2. Custom pipeline component
  3. Orchestration to do it and then publish to MessageBox
  4. Call pipeline from orchestration

I will go through these options in more detail here:

 

Option 1: Map to a schema on receive port.

When a map is executed on a receive port, some extra magic functionality is performed by BizTalk. After the map has been executed, the message is sent through some code that promotes properties that are specified inside the destination schema. If you execute a map inside an orchestration, this doesn’t happen.

 

So you can create a schema that has an extra field, in which you place the value that needs to be promoted. This element must not be able to occur multiple times. Promote this new field, and after the map on the receive port has been executed, you have your value promoted.

 

Option 2: Custom Pipeline Component.

It isn’t that difficult to create a custom pipeline component that can promote a field for you. Your Execute method might look just like this:

 

public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
{
    pInMsg.Context.Promote("MyProp", "http://ReoccuringElement.PropertySchema", "MyValue");
    return pInMsg;
}

 

Of course, you will probably want to load the body stream of the IBaseMessage somehow, in order to find the value inside the body to promote, and then replace "MyValue" with the value from within the XML.
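
A fuller sketch of such an Execute method could look like this - the XPath and the property schema namespace are just examples, not anything from a real project:

// Sketch: read the value to promote from the message body (example XPath and property namespace).
public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
{
    System.IO.Stream originalStream = pInMsg.BodyPart.GetOriginalDataStream();

    // Buffer the body so it can be read here and still be read by BizTalk afterwards.
    System.IO.MemoryStream bufferedStream = new System.IO.MemoryStream();
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = originalStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        bufferedStream.Write(buffer, 0, bytesRead);
    }
    bufferedStream.Position = 0;

    // Pick the value out of the body - here simply the first occurrence of the element.
    System.Xml.XPath.XPathDocument document = new System.Xml.XPath.XPathDocument(bufferedStream);
    System.Xml.XPath.XPathNavigator node = document.CreateNavigator().SelectSingleNode(
        "/*[local-name()='ExampleRoot']/*[local-name()='ReoccuringRecord'][1]/*[local-name()='ElementWhereNumber1IsPromoted']");

    if (node != null)
    {
        pInMsg.Context.Promote("MyProp", "http://ReoccuringElement.PropertySchema", node.Value);
    }

    // Hand BizTalk a stream that is positioned at the beginning again.
    bufferedStream.Position = 0;
    pInMsg.BodyPart.Data = bufferedStream;

    return pInMsg;
}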

 

Just use the pipeline component inside a custom receive pipeline, and you are all set.

 

Option 3: Orchestration to do it and then publish to MessageBox

Create an intermediate orchestration that receives the input message. Then it should create a new message of the same type in a message assignment shape, like this:

NewMessage = InputMessage;                                          // copy the message
NewMessage(*) = InputMessage(*);                                    // copy all existing context properties
NewMessage(MyNewProperty) = xpath(InputMessage, xpathexpression);   // read the value from the body and set the new property

 

Then, use a direct bound port to publish the message to the MessageBox. In order for the new property to follow the message, you need to initialize a correlation set on the send shape that is based on this new property.

 

Let other orchestrations and send ports subscribe to this message and let them do their work.

 

Option 4: Call pipeline from orchestration

The last option is to call a receive pipeline from within your orchestration. This requires a new schema that has a field for the value to be promoted, just as in option 1. Inside your orchestration, map the input message to this new schema and call a receive pipeline with this new message as a parameter. Remember to promote the field in this new schema. There is an article on MSDN about calling a pipeline from within an orchestration, which can be found at http://msdn2.microsoft.com/en-us/library/aa562035.aspx
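
The expressions involved look roughly like this, following the MSDN article (inside a message assignment shape in an atomic scope; pipeline, message and variable names are made up):

// Atomic scope, message assignment shape - names are made up.
// OutputMessages is a variable of type Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages.
OutputMessages = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(
    typeof(MyProject.Pipelines.PromotingReceivePipeline), MappedMessage);
OutputMessages.MoveNext();
PromotedMessage = null;
OutputMessages.GetCurrent(PromotedMessage);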

 

Upsides and downsides

In order to choose which way to go in a specific solution, several things need to be considered.

 

Basically, I'd go for option 1 almost every time. This is because it is best practice to map anything incoming into a canonical schema anyway. So instead of promoting values inside all your partners' schemas - schemas they might change - you should promote from within your own canonical schema.

 

Reasons not to choose option 1 include: the canonical schema also has a reoccurring element, so it doesn't provide any extra help with getting this specific value promoted; or perhaps we aren't using canonical schemas because there was no time for that when the project was started.

 

If we can't go for option 1, I'd go for option 3. Option 2 requires programming a pipeline component, which can become a bottleneck unless done correctly. Also, the pipeline component is a whole new component to maintain, document and test. Option 4 requires a new schema and therefore also a new map to be built - and if I am ready to do that, I'd go for option 1 instead.

 

If, for some reason, I don't like option 3, I'd go for option 2 - the custom pipeline component. Although it is custom code, and must be done right, and tested and everything... I still feel that creating a new schema and a map just to call the pipeline in option 4 is overkill, since in that case I'd go for option 1 instead, which also requires the new schema and map.

I hope this explains some details about this issue, and that it helps someone in the future.

--
eliasen

Sunday, 16 September 2007 23:22:51 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
