Tuesday, 30 June 2009

Hi all

A friend and colleague of mine has just released SolZip version 1.1 on CodePlex – find it here: http://solzip.codeplex.com/.

Basically, it is a nifty way of zipping a Visual Studio 2008 C# solution. The utility is pointed towards the .sln file and then zips all files in the solution and projects into one zip file.

To me this is really nice, because as of BizTalk 2009, BizTalk projects are just specialized C# projects, and therefore it seems to work just fine for BizTalk 2009 solutions as well. I can use it when blogging to quickly zip up a solution to attach to a blog post, without having to zip the entire folder and then delete the large DLL files and other clutter. SolZip just takes what is necessary.

I was the very first downloader of the 1.1 version – you should go download it now! :-)

--
eliasen

Tuesday, 30 June 2009 22:02:39 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Sunday, 28 June 2009

Hi all

Recently, I wrote a post about mapping to and from a comma separated list. You can find it at http://blog.eliasen.dk/2009/06/22/HandlingCommaSeparatedValuesInsideAMapPartI.aspx

It occurred to me that this was also a great opportunity for me to develop my first cumulative functoid. The functoid takes the values as input and outputs the list of values with commas between them.

The functoid has been developed and added to my functoid library at http://eebiztalkfunctoids.codeplex.com – free to download. Another functoid in the library, which has been there for a while, is the “CSV Extract” functoid, which extracts the value at a specific position in a separated list.
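
For anyone curious about the mechanics, here is a minimal sketch of the runtime pattern a cumulative functoid typically follows. The class name, the method names and the use of a Hashtable are my own illustrative choices, not the actual code in the library: the generated map calls an “init” method once per cumulation scope, an “add” method for every occurrence of the input element, and a “get” method to fetch the final result.

using System.Collections;
using System.Text;

public class CumulativeCommaConcat
{
  // One buffer per cumulation scope, keyed by the index the map passes in.
  private Hashtable buffers = new Hashtable();

  // Called once per scope, before any values are accumulated.
  public string InitCumulativeCommaConcat(int index)
  {
    buffers[index] = new StringBuilder();
    return string.Empty;
  }

  // Called once for every occurrence of the input element.
  public string AddToCumulativeCommaConcat(int index, string value, string scope)
  {
    StringBuilder buffer = (StringBuilder)buffers[index];
    if (buffer.Length > 0)
    {
      buffer.Append(",");
    }
    buffer.Append(value);
    return string.Empty;
  }

  // Called at the end, to retrieve the comma separated result.
  public string GetCumulativeCommaConcat(int index)
  {
    return ((StringBuilder)buffers[index]).ToString();
  }
}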

--
eliasen

Sunday, 28 June 2009 00:58:34 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Monday, 22 June 2009

Hi all

Sometimes people need to convert a list of elements into a comma separated list of values. In the forums this question arises every now and then, so here is a solution.

Suppose this input:

<ns0:MultipleElementsRoot xmlns:ns0="http://MapCommaSeparatedValue.MultipleElementsSchema">
  <MyField>Jan</MyField>
  <MyField>Eliasen</MyField>
  <MyField>BizTalk</MyField>
</ns0:MultipleElementsRoot>

And I want this output:

<ns0:CommaSeparatedRoot xmlns:ns0="http://MapCommaSeparatedValue.CommaSeparatedSchema">
  <CommaField>Jan,Eliasen,BizTalk</CommaField>
</ns0:CommaSeparatedRoot>

Inside a map I would handle it like this (two approaches, which share the first two functoids):

[Image: multipletocomma – the map from the multiple-element schema to the comma separated schema]

The recurring element is used as input to a string concatenate functoid, which takes a comma as its second input. The output of the concatenate functoid is then used as input to a cumulative concatenate functoid, effectively creating a comma separated list of values. The issue with this list is that it has a trailing comma, which needs to be removed.

One way of doing this is to use three functoids to strip that last comma. First, I take the length of the string, then I subtract one from it, and finally I use the string extract functoid to extract the substring beginning at position 1 and ending at (length - 1) and send that to the destination element. For the sample input above, the cumulative result is “Jan,Eliasen,BizTalk,” (20 characters), so extracting positions 1 through 19 gives “Jan,Eliasen,BizTalk”. Works like a charm.

Now, plenty of people out there (Henrik, you know who you are) would have no moral qualms about using a C# scripting functoid to get rid of the extra comma. Like this:

public string RemoveLastComma(string input)
{
  return input.Substring(0, input.Length-1);
}

That one scripting functoid would eliminate the last three functoids I added to remove the trailing comma. But I still prefer the functoid way of doing it, mostly for maintenance reasons. A new BizTalk developer looking at a map with lots of scripting functoids has no idea what is happening, and even after looking at the code in all the scripting functoids, he cannot keep track of what is happening in more than a couple of them. In my opinion: always use the built-in functoids if possible.

Now, just for kicks, I thought that sometimes the opposite might be needed – going from a comma separated list to a list of elements. This I cannot solve using built-in functoids, unfortunately, so the map ended up like this:

[Image: commatomultiple – the map from the comma separated schema to the multiple-element schema]

Just one scripting functoid. The type is an “Inline XSLT Call Template” and the script is:

<xsl:template name="MyXsltSplitTemplate">
  <xsl:param name="param1" />
  <xsl:if test="$param1 != ''">
    <xsl:element name="MyField">
      <xsl:choose>
        <xsl:when test="contains($param1, ',')">
          <xsl:value-of select="substring-before($param1, ',')" />
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="$param1" />
        </xsl:otherwise>
      </xsl:choose>
    </xsl:element>
    <xsl:call-template name="MyXsltSplitTemplate">
      <xsl:with-param name="param1" select="substring-after($param1, ',')" />
    </xsl:call-template>
  </xsl:if>
</xsl:template>

Basically, the value of the input element connected to the scripting functoid ends up in param1. The substring-before function is used to output the part of the string before the first comma; if there is no comma in the string, the whole string is output. The part of the string after the first comma is then passed to a recursive call of the same template. Again: works like a charm :-)

So, with any luck, this question can be answered by googling soon instead of asking in the forums :-)

Edit on 27th June: I have since developed a cumulative functoid that builds the comma separated list – it is available in my functoid library at http://eebiztalkfunctoids.codeplex.com.

--
eliasen

Monday, 22 June 2009 22:55:08 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Saturday, 13 June 2009

Hi all

I have received numerous questions from people getting a compile-time error when compiling an orchestration that assigns a distinguished field of type xs:integer to a variable of type Int32. Without looking into it, you would expect this to work.

So let's say that we have a schema like this:

[Image: schema – the input schema in the schema editor]

with this XSD:

<?xml version="1.0" encoding="utf-16"?>
<xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003" xmlns="http://IntegerDecimal.InputSchema" targetNamespace="http://IntegerDecimal.InputSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="InputRoot">
    <xs:annotation>
      <xs:appinfo>
        <b:properties>
          <b:property distinguished="true" xpath="/*[local-name()='InputRoot' and namespace-uri()='http://IntegerDecimal.InputSchema']/*[local-name()='MyInteger' and namespace-uri()='']" />
        </b:properties>
      </xs:appinfo>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="MyInteger" type="xs:integer" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

As you can see, it is a simple schema and the “MyInteger” field is of type xs:integer and it is marked as a distinguished field.

In my orchestration, I have a variable of type int32 called “MyInt” and in an expression shape I do this:

MyInt = InputMessage.MyInteger;

and it fails, which is quite surprising at first glance. At compile time I just get this: "The expression that you have entered is not valid.". That doesn't help much, but looking at my expression shape and reading the mouse-over text on the error, I get this: “cannot implicitly convert type ‘System.Decimal’ to ‘System.Int32’”. Changing the line in the expression shape to this:

MyInt = System.Convert.ToInt32(InputMessage.MyInteger);

will make it work, because now you are explicitly casting the value to an integer.

The first time I saw this error, I thought the compiler had a bug, because I thought I was assigning the value of one integer to another integer. It turns out, though, that I wasn't really – only sort of…

If you look here: http://www.w3.org/TR/xmlschema-2/#integer you can see that the xs:integer type is actually a decimal type restricted to whole numbers. So the compiler actually thinks the field is a decimal and not an integer.

This is kind of silly, since the data type IS restricted to whole numbers. The actual error should be that xs:integer can contain the number 4567234987623409763546387623476149823497862354845, which really is way too large for an int variable in an orchestration.

Anyway, the way to handle this is, naturally, to avoid the xs:integer type if possible and use xs:int (or one of its variants) instead. Sometimes it cannot be avoided, though, and in that case you just need to declare your variable of a type that can contain all the possible values you may receive. Sometimes an Int16 will do, if the people who created the schema just didn't know what the xs:integer type is. Sometimes the values received will be too large even for an Int64 variable; in that case, consider using a decimal, a string or some other data type.
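
To make the conversion and the overflow risk explicit, a small helper like the following could be called from an expression shape. This is just a hypothetical sketch – the class and method names are my own, not something from the post or from BizTalk – but it shows the point: since the distinguished field surfaces as a System.Decimal, the value may not fit in the target type.

public static class IntegerFieldHelper
{
  // Converts the System.Decimal value of an xs:integer distinguished field
  // to an Int64, falling back to a default value if it is out of range.
  public static long ToInt64OrDefault(decimal value, long fallback)
  {
    if (value < long.MinValue || value > long.MaxValue)
    {
      return fallback;
    }
    return System.Convert.ToInt64(value);
  }
}

In an expression shape this could then be called as MyLong = IntegerFieldHelper.ToInt64OrDefault(InputMessage.MyInteger, 0); assuming the helper is deployed in an assembly referenced by the orchestration project.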

--
eliasen

Saturday, 13 June 2009 21:43:03 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Thursday, 11 June 2009

Hi all

So, the other day, a guy asked a question in the online forums, and another guy tried helping out by stating, among other things, that the maps on receive ports are executed before the receive pipeline. This isn't true, and I posted a reply where I tried to explain how things work. That ended up being slightly wrong, so I posted a correction, but now it seems I need to post yet another correction… so I ended up writing this post to explain how things actually work.

First of all, let me set one thing straight: when a message arrives at a receive location, it is first sent through the receive pipeline that is specified on the receive location. This has to happen before the map for several reasons, including converting the input to XML and promoting the MessageType, so the receive port can choose the correct map to execute. The receive pipeline also promotes all the distinguished fields and promoted properties that are specified in the schema.

Now, after the map has finished executing, the transformation engine looks up the schema for the output and instantiates the XMLDisassembler with this particular schema, so the disassembler doesn't have to find the correct schema itself. It then calls the disassembler, which reads the output from the map and promotes all distinguished fields and promoted properties to the context of the message. Before doing the promotion, it also copies the entire context from the original message, so all the properties from the adapter and so on are copied to the destination message.

Now, there are a couple of issues with this, which most people don't realize – mostly because they will only affect you on very, very rare occasions:

Distinguished fields
I found that if you have an input message with a field marked as a distinguished field and then look at the context of the output from the map, the output message also has the distinguished field from the input message in its context, which really doesn't make sense, since you can't use it in any way. This has NO influence at runtime and NO influence at design time, so you can go through your life without ever noticing it. Also, we usually don't set distinguished fields on the external schemas, because we don't want to change them and because we don't want to use schemas exposed to trading partners in our business processes, which is the only place we can use the distinguished fields.

Promoted Properties
If you have promoted a field from both the input and the destination schema to the same property, the value from the input schema is overwritten by the value from the destination schema after the map. For the same reasons as for the distinguished fields, we rarely have promoted properties on the external schemas, so you will probably never have an issue with this.

Envelopes
If the schema for the destination message is marked as an envelope, the message will fail. This is because the disassembler will recognize that the schema is an envelope and will then debatch the message into several messages. Only the first one is returned, though, since the mapping engine that calls the disassembler assumes this is not an envelope and therefore only calls the GetNext method once, whereas normally the GetNext method is called until it returns null. This first message is then looked at, but in the properties for the disassembler, the transformation engine has already set the only allowed schema, which was the envelope schema. So the disassembler only has the envelope schema as a possible schema, and the instance that comes out after debatching is no longer the output from the map, meaning that it will fail with the standard error message “Details: "Document type "http://MyNameSpace.com#Record" does not match any of the given schemas."”. As with the first two issues about distinguished fields and promoted properties, this should practically never happen, since you are most likely mapping the incoming message to some internal schema, which there is usually no reason to mark as an envelope.

Conclusion
So, basically, when a message arrives, the receive pipeline is executed, then the map is executed and at the end, the XML disassembler is executed by the transformation engine to get all promoted fields in the destination message promoted. There are a couple of known issues with this, but they are either totally unimportant or very unlikely to occur.

I hope this helps someone out there.

--
eliasen

Thursday, 11 June 2009 01:30:19 (Romance Daylight Time, UTC+02:00)  #    Comments [7]  | 
Saturday, 06 June 2009

Hi all

Just a quick book review, since I have just read "Who Moved My Cheese?".

It only takes like an hour to read, and it is the story of two mice and two people who are searching for cheese, where the cheese is a metaphor for something you really want to have. At some point, they run out of cheese, and the four characters take very different approaches to the changes in the environment. The point of the story is, naturally, that you can most likely identify yourself as one of the characters and after reading the book, perhaps you have learnt something about how you react to changes and how you should react – and hopefully improve your life.

An easy read with lots of good points. Read it…

--
eliasen

Saturday, 06 June 2009 18:49:05 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Sunday, 24 May 2009

Hi all

Just to let you all know, I have updated the documentation for my pipeline component library at http://eebiztalkpipelinecom.codeplex.com/ and the documentation for my functoid library at http://eebiztalkfunctoids.codeplex.com/

Happy reading :-)

--
eliasen

Sunday, 24 May 2009 20:48:59 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Saturday, 23 May 2009

Hi all

Saravana Kumar has launched version two of his great site http://www.biztalk247.com – it has loads of nice information about BizTalk, so go check it out.

Also, while you are checking it out, take a look at his other new site: http://blogdoc.biztalk247.com/ which is collecting information from lots of blog entries about BizTalk.

--
eliasen

Saturday, 23 May 2009 21:36:39 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Thursday, 07 May 2009

Hi all

Today I encountered something I had never seen before when creating a map.

The issue occurred because my customer had a schema that imports two other schemas, both of which have an element called “metadata” – but naturally, the two schemas have different target namespaces.

The main schema imports both, and has two records just below the root, and these two records reference each of the two metadata elements in the two imported schemas.

So the first schema could look like this:

[Image: MetadataOne – the first imported schema]

And the second schema could look like this:

[Image: MetadataTwo – the second imported schema]

So both have an element named “metadata” but one is in the namespace “http://TwoElementsDifferentNamespace.MetadataOne” and the other is in the namespace “http://TwoElementsDifferentNamespace.MetadataTwo”.

After that, I created the schema that imports both:

[Image: BigSchema – the schema that imports the two metadata schemas]

As you can see, it imports the first two schemas and has two elements that reference each of the metadata elements from those schemas.

Also, I created an output schema that just has three elements, and then I created this map:

[Image: map – the map from BigSchema to the output schema]

Pretty simple. Now, the issue comes when compiling, because I get this error:

Exception Caught: The map contains a reference to a schema node that is not valid.  Perhaps the schema has changed.  Try reloading the map in the BizTalk Mapper.  The XSD XPath of the node is: /*[local-name()='<Schema>']/*[local-name()='Root']/*[local-name()='metadata']/*[local-name()='Field2']

So the first try was to reload the schema – but that didn't help; it just broke one of my links.

In the end I found out that the issue is that the links are stored like this in the .BTM file:

<Link LinkID="1" LinkFrom="/*[local-name()='&lt;Schema&gt;']/*[local-name()='Root']/*[local-name()='metadata']/*[local-name()='Field1']" LinkTo="/*[local-name()='&lt;Schema&gt;']/*[local-name()='OutputRoot']/*[local-name()='Field1']" Label="" />

<Link LinkID="2" LinkFrom="/*[local-name()='&lt;Schema&gt;']/*[local-name()='Root']/*[local-name()='SomeFields']/*[local-name()='Field3']" LinkTo="/*[local-name()='&lt;Schema&gt;']/*[local-name()='OutputRoot']/*[local-name()='Field2']" Label="" />

<Link LinkID="3" LinkFrom="/*[local-name()='&lt;Schema&gt;']/*[local-name()='Root']/*[local-name()='metadata']/*[local-name()='Field2']" LinkTo="/*[local-name()='&lt;Schema&gt;']/*[local-name()='OutputRoot']/*[local-name()='Field3']" Label="" />

So, basically, the .BTM file saves links as XPath expressions WITHOUT the namespaces. Naturally, this has to go wrong when there are two “metadata” elements at the same level in the schema.

The way to solve this is to open the properties of the map and disable the “Ignore Namespaces for Links” setting, like this:

[Image: property – the “Ignore Namespaces for Links” setting in the map properties]

After setting this property, the links are stored with the namespaces included in the .BTM file, and everything is just fine.

One might wonder why including the namespaces is not the default, since it seems to make the links more robust. Well, the reason is simple: if the namespaces are in all the links and you change, for instance, the namespace of the root node, then ALL links in the map get broken. So in that respect, NOT having the namespaces in the links actually makes the solution more robust.

So… I hope this can help someone.

You can download the solution here:

--
eliasen

Thursday, 07 May 2009 00:36:16 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
Saturday, 25 April 2009

Hi all

Just as I have started developing a functoid library (found at http://eebiztalkfunctoids.codeplex.com/), I have also started developing a pipeline component library. Right now it contains three components:

  • DevNull. This pipeline component is quite simple: it will "swallow" everything that comes in as input. This enables performance testing without having to worry about, for instance, adapter transport time on a send port.
  • SearchAndReplace. This component performs a search and replace on the incoming stream, replacing some string with some other string. Optionally, you can let the search string be a regular expression and replace based on that instead of a normal string search and replace.
  • Promote. This component has three parameters: the name of a property, the namespace of the property, and an XPath expression. At runtime, the component reads the value that the XPath expression points to and promotes it to the property given by the name and namespace. This enables you to promote recurring elements. A rough sketch of the idea is shown below.

You can find it at http://eebiztalkpipelinecom.codeplex.com/ – the URL is weird, I know, but unfortunately there is a limit to the length of URLs at CodePlex.
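
As a rough illustration of the idea behind the Promote component, here is a minimal sketch – the names are made up and it is not the actual library code, and it uses XmlDocument for brevity where a real component would use a streaming approach:

using System.Xml;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class PromoteComponentSketch
{
  // Design-time parameters of the component.
  public string PropertyName { get; set; }
  public string PropertyNamespace { get; set; }
  public string XPath { get; set; }

  // Core of the component: read the value the XPath points to and promote it
  // to the message context so it can be used for routing.
  public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
  {
    XmlDocument document = new XmlDocument();
    document.Load(message.BodyPart.GetOriginalDataStream());

    XmlNode node = document.SelectSingleNode(XPath);
    if (node != null)
    {
      message.Context.Promote(PropertyName, PropertyNamespace, node.InnerText);
    }
    return message;
  }
}

A real component would of course also implement the usual pipeline component interfaces (IBaseComponent, IComponent, IPersistPropertyBag and so on) and would rewind or replace the body stream after reading it.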

--
eliasen

Saturday, 25 April 2009 23:08:40 (Romance Daylight Time, UTC+02:00)  #    Comments [0]  | 
