Tuesday, 02 February 2010

Hi all

Lots of people think that if they use a Parallel Actions shape, they get things done in parallel. Well, rethink that. An orchestration instance executes on just one thread at a time, so there is no chance of anything running in parallel. At runtime, the shapes in the parallel shape are simply serialized.

But what is the algorithm, then?

Well, I did some tests. First of all, I created this simple orchestration:

image

It’s a receive shape to fire up the orchestration and then a parallel shape with four branches and five expression shapes in each. The code in each expression shape is this:

    System.Diagnostics.Trace.WriteLine("X. Y");

where X is a number indicating the branch and Y is a sequence number within the branch. This means that X=2 and Y=3 is the third expression shape in the second branch and X=4 and Y=1 is the first expression shape in the fourth branch.

Running this orchestration I get this result from DebugView:

image

So as you can see, the entire first branch is executed, then the entire second branch, and so on until the fourth branch has finished. Sounds easy enough. But let's try some other scenarios, like this one:

image

In essence I have thrown in a receive shape in branch 2 to see if branches three and four will still have to wait until branch 2 has finished.

The result can be seen here:

image

So as you can see, the second branch stops after the second shape because now it awaits the receive shape. Branches three and four are then executed and after I send in a message for the receive shape, the second branch completes.

So some form of parallelism is actually achieved, but only when a shape has to wait for something. Let's see what happens with a Delay shape instead, like this:

image

I have replaced the Receive shape with a Delay shape, which I have set to wait for 100 milliseconds. The result is the same as with the Receive shape:

image

Then I tried setting the Delay shape to just 1 millisecond, but this gave the same result.

Then I tried having time-consuming shapes in two branches, like this:

image

The Delay is still set at one millisecond. I get the following result:

image

So as you can see, the Receive shape causes branch 2 to stop executing, and the Delay shape causes branch 3 to stop executing, allowing branch 4 to execute. Branch 3 is then executed because the Delay shape has finished and finally once the message for the Receive shape has arrived, branch 2 is executed to its end.

Another thing to note is that the Delay shape doesn't actually make the thread sleep. If it did, we couldn't continue in another branch once a Delay shape runs. This makes perfect sense, since the shapes in one branch are to be seen as a mini-process within the big process, and a delay needed by that mini-process shouldn't affect the other branches. This is exemplified in this process:

image 

The third expression shape in the first branch has been updated to this code:

    System.Diagnostics.Trace.WriteLine("1. 3");
    System.Threading.Thread.Sleep(2000);

 

image

So as you can see, even though the first branch must wait for 2 seconds, it still executes completely before the second branch is started.
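The scheduling behavior observed above can be sketched in plain C#: each branch keeps the single thread until it either finishes or reaches a shape that blocks (a Receive or a Delay), at which point the next branch gets a turn; a Thread.Sleep inside an expression shape never yields, which is why the first branch above still ran to completion. This is only my mental model of the engine, not actual BizTalk code, and the Branch and Run helpers are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

class ParallelShapeDemo
{
    public const string Wait = "<wait>";

    // A branch yields its trace lines in order; yielding Wait simulates a
    // blocking shape (a Receive or a Delay) that gives up the turn.
    // blockAfter = 5 means "never blocks" for a five-shape branch.
    public static IEnumerable<string> Branch(int number, int blockAfter = 5)
    {
        for (int step = 1; step <= 5; step++)
        {
            if (step == blockAfter + 1) yield return Wait;
            yield return number + ". " + step;
        }
    }

    // Run branches the way the engine appears to: the current branch keeps
    // the thread until it finishes or blocks; blocked branches are requeued.
    public static List<string> Run(params IEnumerable<string>[] branches)
    {
        var log = new List<string>();
        var queue = new Queue<IEnumerator<string>>();
        foreach (var b in branches) queue.Enqueue(b.GetEnumerator());

        while (queue.Count > 0)
        {
            var current = queue.Dequeue();
            bool blocked = false;
            while (!blocked && current.MoveNext())
            {
                if (current.Current == Wait) blocked = true;
                else log.Add(current.Current);
            }
            if (blocked) queue.Enqueue(current);
        }
        return log;
    }

    static void Main()
    {
        // Branch 2 blocks after its second shape, like the Receive shape did.
        var log = Run(Branch(1), Branch(2, blockAfter: 2), Branch(3), Branch(4));
        Console.WriteLine(string.Join(Environment.NewLine, log));
    }
}
```

Running this prints the branch-1 lines, then "2. 1" and "2. 2", then all of branches 3 and 4, and finally "2. 3" through "2. 5", which is the same order as the DebugView output.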

So, takeaways:

  1. The Parallel Actions shape does NOT mean you get any multi-threading execution environment.
  2. Think of the Parallel Actions shape as a way of letting multiple business activities happen without knowing in what order they will occur.
  3. The Delay shape does not use Thread.Sleep, but instead handles things internally.

--
eliasen

Tuesday, 02 February 2010 20:27:56 (Romance Standard Time, UTC+01:00)  #    Comments [6]  | 
Wednesday, 27 January 2010

Hi all

A hotfix has been released which is quite poorly described, but which supposedly fixes some of the issues I have described at http://blog.eliasen.dk/2009/07/21/IssuesWithBizTalk2009OnVSNET2008.aspx

The hotfix is available at http://support.microsoft.com/kb/977428/en-us

 

Good luck :-)

--
eliasen

Wednesday, 27 January 2010 08:05:16 (Romance Standard Time, UTC+01:00)  #    Comments [2]  | 
Monday, 18 January 2010

Hi all

Today I started receiving this error in the event log every time I tried to test my custom functoid in a map on the receive port.

A message received by adapter "FILE" on receive location "Receive Location3" with URI "C:\Projects\TestCumulativeFunctoid\TestCumulativeFunctoid\Instances\In\*Copy*.xml" is suspended.
Error details: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
MessageId:  {5C621C74-A873-4E68-84E0-D0621DF9471E}
InstanceID: {21D3DCEC-7C1C-4865-BB46-6D1BF6FAC7AA}

The map worked fine in Visual Studio and I was quite confused and even restarted my machine.

It turned out that I had forgotten to sign the assembly with the functoid, so the script I use to deploy a new functoid failed when adding the assembly to the GAC. I didn't notice, since the script runs so fast I never see the result :-)

But really… why can't an error like that include the name of the file that cannot be found?

--
eliasen

Monday, 18 January 2010 00:39:50 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Wednesday, 13 January 2010

Hi all

I just did a post on developing a custom cumulative functoid. You can find it here: http://blog.eliasen.dk/2010/01/13/DevelopingACustomCumulativeFunctoid.aspx

At the very end of the post I write that you should NEVER develop a custom referenced cumulative functoid but instead develop a custom inline cumulative functoid. Given the title of this blog post, probably by now you know why this is :-)

When I developed my first cumulative functoid, I developed a referenced functoid, since this is what I prefer. I tested it on an input and it worked fine. Then I deployed it and threw 1023 copies of the same message through BizTalk at the same time. My test solution had two very simple schemas:

image Source schema.

image Destination schema.

The field “Field1” in the source schema has a maxOccurs = unbounded and the field “Field1” in the destination schema has maxOccurs = 1.

I then created a simple map between them:

image

The map merely utilizes my “Cumulative Comma” functoid (Yes, I know the screen shot is of another functoid. Sorry about that… :-) ) to get all occurrences of “Field1” in the source schema concatenated into one value separated by commas that is output to the “Field1” node in the output.

My 1023 test instances all have 10 occurrences of "Field1" in the input, so every output XML should have these ten values in a comma-separated list in the "Field1" element of the output schema.

Basically, what I found was that the outcome was quite unpredictable. Some of the output XML had a completely empty "Field1" element. Others had perhaps 42 values in their comma-separated list. About 42% of the output files had the right number of fields in the comma-separated list, but I don't really trust that they are the right values…

Anyway, I looked at my code, and looked again… I couldn't see anything wrong. So I thought I'd try the cumulative functoids that ship with BizTalk. I replaced my functoid with the built-in "Cumulative Concatenate" functoid and did the same test. The output was just fine – nothing wrong.

This baffled me a bit, but then I discovered that the cumulative functoids that ship with BizTalk are actually developed so they can be used as BOTH referenced functoids and inline functoids. Which one is used depends on the value of the "Script Type Precedence" property on the map. By default, inline C# has priority, so the built-in "Cumulative Concatenate" functoid wasn't used as a referenced functoid the way my own functoid was. I changed the property to have "External Assembly" as first priority and checked the generated XSLT to make sure that it was now using the functoid as a referenced functoid. It was. So I deployed and tested… and guess what?

I got the same totally unpredictable output as I did with my own functoid!

So the conclusion is simple: the cumulative functoids that ship with BizTalk are NOT thread safe when used as referenced functoids. As a matter of fact, I claim that it is impossible to write a thread-safe referenced cumulative functoid, for reasons I will now explain.

When using a referenced cumulative functoid, the generated XSLT looks something like this:

    <xsl:template match="/s0:InputRoot">
      <ns0:OutputRoot>
        <xsl:variable name="var:v1" select="ScriptNS0:InitCumulativeConcat(0)" />
        <xsl:for-each select="/s0:InputRoot/Field1">
          <xsl:variable name="var:v2" select="ScriptNS0:AddToCumulativeConcat(0,string(./text()),&quot;1000&quot;)" />
        </xsl:for-each>
        <xsl:variable name="var:v3" select="ScriptNS0:GetCumulativeConcat(0)" />
        <Field1>
          <xsl:value-of select="$var:v3" />
        </Field1>
      </ns0:OutputRoot>
    </xsl:template>

As you can see, "InitCumulativeConcat" is called once, then "AddToCumulativeConcat" is called for each occurrence of "Field1", and finally "GetCumulativeConcat" is called and the value is inserted into the "Field1" node of the output.

In order for the functoid to distinguish between instances of itself, there is an "index" parameter on all three methods, which the documentation states is unique for that instance. The issue is that this is only true for usages within the same map, not across all running instances of the map. As you can see in the XSLT, a value of "0" is hardcoded for the index parameter. If the functoid were used twice in the same map, a value of "1" would be hardcoded for the second usage, and so on.

But if the map runs 1000 times simultaneously, they will all send a value of "0" to the functoid's methods. And since the functoid is not instantiated per map instance, but rather the same object is shared across all of them, there will be a whole lot of method calls with the value "0" for the index parameter, without the functoid having a clue as to which map instance is calling it, basically mixing everything up thoroughly.
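To make the mix-up concrete, here is a small hypothetical repro, with my own type and method names rather than BizTalk's: one shared functoid object and two map instances that both pass index 0, exactly as the generated XSLT does. Even without real threads, simply interleaving the calls shows the state getting clobbered:

```csharp
using System;
using System.Collections;

// Hypothetical stand-in for a referenced cumulative functoid: one object,
// one Hashtable, shared by every map instance in the process.
class SharedCumulativeFunctoid
{
    private Hashtable myCumulativeArray = new Hashtable();

    public string InitializeValue(int index)
    {
        myCumulativeArray[index] = "";
        return "";
    }

    public string AddValue(int index, string value, string scope)
    {
        myCumulativeArray[index] = myCumulativeArray[index].ToString() + value + ",";
        return "";
    }

    public string RetrieveFinalValue(int index)
    {
        string str = myCumulativeArray[index].ToString();
        return str.Length > 0 ? str.Substring(0, str.Length - 1) : "";
    }
}

class Program
{
    static void Main()
    {
        var functoid = new SharedCumulativeFunctoid();

        functoid.InitializeValue(0);         // map instance A starts
        functoid.AddValue(0, "A1", "1000");
        functoid.InitializeValue(0);         // map instance B starts: wipes A's "A1"!
        functoid.AddValue(0, "B1", "1000");
        functoid.AddValue(0, "A2", "1000");  // A continues, lands in B's list

        // Map A reads back "B1,A2": A1 is lost and B1 doesn't even belong to A.
        Console.WriteLine(functoid.RetrieveFinalValue(0));
    }
}
```

With 1000 concurrent map instances the interleavings are arbitrary, which is exactly the unpredictable output I saw in my test.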

The reason it works for inline functoids is, of course, that there is no object shared across map instances – everything is inlined in each map, so there the index parameter actually is unique and things work.

And the reason I cannot find anyone on the internet who has described this before me (the issue must have existed since BizTalk 2004) is probably that the default behavior of maps is to use the inline functionality if present, so presumably no one has ever changed that property while also using a cumulative functoid under high load.

What is really funny is that the only example of developing a custom cumulative functoid I have found online is on MSDN: http://msdn.microsoft.com/en-us/library/aa561338(BTS.10).aspx and the example is actually a custom referenced cumulative functoid… which doesn't work, because it isn't thread safe. Funny, eh?

So, to sum up:

Never ever develop a custom cumulative referenced functoid – use the inline versions instead. I will have to update the one at http://eebiztalkfunctoids.codeplex.com right away :)

Good night…

--
eliasen

Wednesday, 13 January 2010 22:34:55 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 

Hi all

As many of you know, I am currently writing a book alongside some of the best of the community. Currently I am writing about developing functoids, and in doing this I have discovered that there are plenty of blog posts and helpful articles out there about developing functoids, but hardly any of them deal with developing cumulative functoids. So I thought the world might end soon if this wasn’t rectified. :-)

Developing functoids really isn't as hard as it might seem. As I have explained numerous times, I consider creating a good icon the most difficult part of it :-)

When developing custom functoids, you need to choose between developing a referenced functoid or an inline functoid. The difference is that a referenced functoid is a normal .NET assembly that is GAC'ed and called from the map at runtime, requiring it to be deployed on all BizTalk servers that run a map using the functoid. Inline functoids, on the other hand, output a string containing a method, and this method is put inside the XSLT and called from there.

There are ups and downs to both – my preference usually goes toward the referenced functoid… not because of the reasons mentioned on MSDN, but simply because I can't be bothered to create a method that outputs a string that is a method. It just looks ugly :)

So, in this blog post I will develop a custom cumulative functoid that generates a comma delimited string based on a reoccurring node as input.

First, the functionality that is needed for both referenced and inline functoids

All functoids must inherit from the BaseFunctoid class, which is found in the Microsoft.BizTalk.BaseFunctoids namespace, usually located in <InstallationFolder>\Developer Tools\Microsoft.BizTalk.BaseFunctoids.dll.

Usually a custom functoid consists of:

  • A constructor that does almost all the work of setting up the functoid
  • The method that is called at runtime (for a referenced functoid) or a method that returns a string containing that method (for an inline functoid)
  • Resources for name, tooltip, description and icon

A custom cumulative functoid consists of the same, but it also has a data structure to keep the aggregated values in, and it has two more methods to specify. The reason a cumulative functoid has three methods instead of one is that the first is called to initialize the data structure, the second is called once for every occurrence of the input node, and the third is called to retrieve the aggregated value.

To exemplify, I have created two very simple schemas:

image Source schema.

image Destination schema.

The field “Field1” in the source schema has a maxOccurs = unbounded and the field “Field1” in the destination schema has maxOccurs = 1.

I have then created a simple map between them:

image

The map merely utilizes the built-in “Cumulative Concatenate” functoid to get all occurrences of “Field1” in the source schema concatenated into one value that is output to the “Field1” node in the output.

The generated XSLT looks something like this:

    <xsl:template match="/s0:InputRoot">
      <ns0:OutputRoot>
        <xsl:variable name="var:v1" select="ScriptNS0:InitCumulativeConcat(0)" />
        <xsl:for-each select="/s0:InputRoot/Field1">
          <xsl:variable name="var:v2" select="ScriptNS0:AddToCumulativeConcat(0,string(./text()),&quot;1000&quot;)" />
        </xsl:for-each>
        <xsl:variable name="var:v3" select="ScriptNS0:GetCumulativeConcat(0)" />
        <Field1>
          <xsl:value-of select="$var:v3" />
        </Field1>
      </ns0:OutputRoot>
    </xsl:template>

As you can see, "InitCumulativeConcat" is called once, then "AddToCumulativeConcat" is called for each occurrence of "Field1", and finally "GetCumulativeConcat" is called and the value is inserted into the "Field1" node of the output.

So, back to the code needed for all functoids. It is basically the same as normal functoids:

    public class CummulativeComma : BaseFunctoid
    {
        public CummulativeComma() : base()
        {
            this.ID = 7356;

            SetupResourceAssembly(GetType().Namespace + "." + NameOfResourceFile, Assembly.GetExecutingAssembly());

            SetName("Str_CummulativeComma_Name");
            SetTooltip("Str_CummulativeComma_Tooltip");
            SetDescription("Str_CummulativeComma_Description");
            SetBitmap("Bmp_CummulativeComma_Icon");

            this.SetMinParams(1);
            this.SetMaxParams(1);

            this.Category = FunctoidCategory.Cumulative;
            this.OutputConnectionType = ConnectionType.AllExceptRecord;

            AddInputConnectionType(ConnectionType.AllExceptRecord);
        }
    }

Basically, you need to:

  • Set the ID of the functoid to a unique value that is greater than 6000. Values smaller than 6000 are reserved for BizTalk's own functoids.
  • Call SetupResourceAssembly to let the base class know what resource file to get resources from
  • Call SetName, SetTooltip, SetDescription and SetBitmap to let the base class get the appropriate resources from the resource file. Remember to add the appropriate resources to the resource file.
  • Call SetMinParams and SetMaxParams to determine how many parameters the functoid can have. They should be set to 1 and 2 respectively. The first parameter is the recurring node and the second is a scoping input.
  • Set the category of the functoid to “Cumulative”
  • Determine both the type of nodes/functoids the functoid can get input from and what it can output to.

I won't describe these any further right now. They are explained in more detail in the book :) Also, there are plenty of posts out there about these methods and properties.

Now for the functionality needed for a referenced functoid:

Beside what you have seen above, for a referenced functoid, the three methods must be written and referenced. This is done like this:

    SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "InitializeValue");
    SetExternalFunctionName2("AddValue");
    SetExternalFunctionName3("RetrieveFinalValue");

The above code must be in the constructor along with the rest. Now, all that is left is to write the code for those three methods, which can look something like this:

    private Hashtable myCumulativeArray = new Hashtable();

    public string InitializeValue(int index)
    {
        myCumulativeArray[index] = "";
        return "";
    }

    public string AddValue(int index, string value, string scope)
    {
        string str = myCumulativeArray[index].ToString();
        str += value + ",";
        myCumulativeArray[index] = str;
        return "";
    }

    public string RetrieveFinalValue(int index)
    {
        string str = myCumulativeArray[index].ToString();
        if (str.Length > 0)
            return str.Substring(0, str.Length - 1);
        else
            return "";
    }

So, as you can see, a data structure (in this case a Hashtable) is declared to store the aggregated results, and all three methods have an index parameter that is used to index into the data structure, in case the functoid is used multiple times in the same map. The mapper will generate a unique index for each usage.
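To see the three-method lifecycle in action, here is a standalone copy of the logic above, called in the same order the generated XSLT would call it. The LifecycleDemo class and its Main method are mine, made up for illustration; they are not part of the functoid API and leave out the BaseFunctoid plumbing:

```csharp
using System;
using System.Collections;

class LifecycleDemo
{
    static Hashtable myCumulativeArray = new Hashtable();

    // Called once, before the for-each over the recurring node.
    public static string InitializeValue(int index)
    {
        myCumulativeArray[index] = "";
        return "";
    }

    // Called once per occurrence of the input node.
    public static string AddValue(int index, string value, string scope)
    {
        myCumulativeArray[index] = myCumulativeArray[index].ToString() + value + ",";
        return "";
    }

    // Called once at the end; trims the trailing comma.
    public static string RetrieveFinalValue(int index)
    {
        string str = myCumulativeArray[index].ToString();
        return str.Length > 0 ? str.Substring(0, str.Length - 1) : "";
    }

    static void Main()
    {
        InitializeValue(0);
        AddValue(0, "a", "1000");
        AddValue(0, "b", "1000");
        AddValue(0, "c", "1000");
        Console.WriteLine(RetrieveFinalValue(0));   // prints "a,b,c"
    }
}
```

Three occurrences of the input node in, one comma-separated value out, which is exactly what the map needs for the single "Field1" in the destination.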

Compile the project, copy the DLL to "<InstallationFolder>\Developer Tools\Mapper Extensions", GAC the assembly, and you are good to go. Just reset the toolbox to load the functoid.

Now for the functionality needed for an inline functoid:

The idea behind a cumulative inline functoid is the same as for a cumulative referenced functoid. You still need to specify three methods to use. For an inline functoid you need to generate the methods that will be included in the XSLT, though.

For the constructor, add the following lines of code:

    SetScriptGlobalBuffer(ScriptType.CSharp, GetGlobalScript());
    SetScriptBuffer(ScriptType.CSharp, GetInitScript(), 0);
    SetScriptBuffer(ScriptType.CSharp, GetAggScript(), 1);
    SetScriptBuffer(ScriptType.CSharp, GetFinalValueScript(), 2);

The first method call sets a script that will be global for the map. In this script you should initialize the needed data structure.

The second method call sets up the script that will initialize the data structure for a given instance of the functoid.

The third method call sets up the script that will add a value to the aggregated value in the data structure.

The fourth method call sets up the script that is used to retrieve the aggregated value.

As you can see, the second, third and fourth lines all call the same method. The last parameter lets the functoid know whether it is the initialization, aggregating or retrieving method that is being set up.

So, what is left is to implement these four methods. The code for this can look quite ugly, since you need to build a string and output it, but it goes something like this:

    private string GetFinalValueScript()
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("\npublic string RetrieveFinalValue(int index)\n");
        sb.Append("{\n");
        sb.Append("\tstring str = myCumulativeArray[index].ToString();\n");
        sb.Append("\tif (str.Length > 0)\n");
        sb.Append("\t\treturn str.Substring(0, str.Length - 1);\n");
        sb.Append("\telse\n");
        sb.Append("\t\treturn \"\";\n");
        sb.Append("}\n");
        return sb.ToString();
    }

    private string GetAggScript()
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("\npublic string AddValue(int index, string value, string scope)\n");
        sb.Append("{\n");
        sb.Append("\tstring str = myCumulativeArray[index].ToString();\n");
        sb.Append("\tstr += value + \",\";\n");
        sb.Append("\tmyCumulativeArray[index] = str;\n");
        sb.Append("\treturn \"\";\n");
        sb.Append("}\n");
        return sb.ToString();
    }

    private string GetInitScript()
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("\npublic string InitializeValue(int index)\n");
        sb.Append("{\n");
        sb.Append("\tmyCumulativeArray[index] = \"\";\n");
        sb.Append("\treturn \"\";\n");
        sb.Append("}\n");
        return sb.ToString();
    }

    private string GetGlobalScript()
    {
        return "private Hashtable myCumulativeArray = new Hashtable();";
    }

I suppose by now you get why I prefer referenced functoids? :-) You need to write the methods anyway in order to check that they compile; wrapping them in methods that output strings is just plain ugly.

Conclusion

As you can hopefully see, developing a cumulative functoid really isn't that much harder than developing a normal functoid; it is just a couple more methods. I did mention that I usually prefer referenced functoids because of the ugliness of creating inline functoids. For cumulative functoids, however, you should NEVER use referenced functoids; only use inline functoids. The reason for this is quite good, actually, and you can see it in my next blog post, which will come in a day or two.

Thanks

--
eliasen

Wednesday, 13 January 2010 21:57:50 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Sunday, 20 December 2009

Hi all

I just threw my blog through http://typealyzer.com to see what type my blog is, and it turns out my blog is an “ESTP - The Doers”. Description of that:


The active and playful type. They are especially attuned to people and things around them and often full of energy, talking, joking and engaging in physical out-door activities.
The Doers are happiest with action-filled work which craves their full attention and focus. They might be very impulsive and more keen on starting something new than following it through. They might have a problem with sitting still or remaining inactive for any period of time.


The graph shown is this one:

image

I am not going to comment on the accuracy of this description other than well… pretty accurate :-)

What blog type is your blog?

--
eliasen

Sunday, 20 December 2009 11:25:38 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Tuesday, 15 December 2009

Hi all

I had a discussion with Randal van Splunteren (http://biztalkmessages.vansplunteren.net/) today about demotion. Randal has been so kind as to review the first chapter I am writing for the book (http://blog.eliasen.dk/2009/09/18/BizTalkServer2009Unleashed.aspx) and we started chatting about demotion. Specifically we discussed whether existing values in XML would be overwritten when demotion occurs.

As it turns out, it depends.

I did a small sample with two schemas and a map. I used a receive port to receive a message, mapped it to the second schema (which just created empty nodes in the destination schema) and output the result through a send port. The receive location used XMLReceive and the send port used the XMLTransmit pipeline. What happened was that the output had the correct demoted values in it, since the XML assembler had empty elements to demote into. Now, if I changed the map to put a value into the fields, the mapped values were output and not the demoted values. This means that demotion does NOT overwrite existing values.

Randal, however, had a sample where the existing values WERE overwritten. His solution leveraged an orchestration, which seems to be the big difference. As I have blogged about here: http://blog.eliasen.dk/2009/10/16/DemotionDoesNotWorkForAttributesOrDoesIt.aspx orchestrations can demote into attributes, which normal demotion cannot. So apparently there is another difference: demotion in an orchestration actually overwrites existing values.

 

But now for the funny (weird?) part. I set up a solution where I had an XML instance as input and used the passthrough receive pipeline, so no message type was promoted. Even without an orchestration, the XML assembler actually does demotion, which is cool. BUT, it overwrites existing values… Go figure. If the message type is present, existing values are not overwritten, but if it is not present, existing values are overwritten.

Weird!

--
Eliasen

Tuesday, 15 December 2009 21:50:43 (Romance Standard Time, UTC+01:00)  #    Comments [2]  | 
Sunday, 22 November 2009

Hi all

The other day I was given the task of updating an InfoPath template part that was in use on a laptop, because a newer version of this template part was available.

Upon opening the InfoPath client, I saw this:

image

There were two template parts, and in this case they are named “TemplateGroup1” and “TemplateGroup2”. I had a new version for the “TemplateGroup1” template part. I clicked on “Add or Remove Custom Controls” and got this screen:

image

As you can see, the “TemplateGroup1” does not show up, which I thought was weird. So, I tried clicking on “Add” to just add the new version of the “TemplateGroup1”, but that gave me this error:

The custom control, <TemplateGroup1> (urn:schemas-microsoft-com:office:infopath:templategroup1:-myxsd-2009-11-22t19-43-32), is already installed. Remove the existing custom control, and then try installing again.

So, I was at a loss… I couldn’t remove the existing version, and I couldn’t upload a new version.

Finally, I discovered what had happened. The user had an entry in the registry like this:

image

The key “IPCustomControlsFolder” is placed in the “HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\InfoPath\Designer” path of the registry. All template parts you put into this folder are automatically added to the controls of InfoPath.

So I found that path, and deleted the “TemplateGroup1” template part, and everything was fine.

So, what I now know is, that there are two ways of adding new template parts to be used by InfoPath:

  1. Add them manually inside the InfoPath client (or the toolbox in VS.NET)
  2. Add the right registry key to the registry, so you have a repository of template parts. This is especially useful for a repository on a shared network folder that can be used by all employees.

So if you ever have trouble removing a template part from the custom controls section, look for the registry key.

--
eliasen

Sunday, 22 November 2009 21:35:51 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Tuesday, 17 November 2009

Hi all

I KNOW it is old, but I have just today finally taken the time to listen to an interview on Channel 9 with Sean O’Driscoll, who is the general manager for community support and the MVP program. You can find it here.

Sean talks a lot about what the MVP program is, and I'd like to take a couple of really important points from his talk and list them here:

  1. The MVP award is a "Thank you" for your PAST efforts in the communities. There are NO expectations of an MVP regarding what to do for the next 12 months, or even the next day.
  2. The MVP award lasts 12 months. After that you will be reevaluated to see if your past 12 months of contributions to the community have been good enough to warrant a re-award.
  3. A true MVP gets the award not because he wants the award but because he wants to help people. A true MVP would do exactly the same effort in communities even if there was no MVP award.

Especially the third point is important to me. I mean.. the first time I was awarded the MVP title, I got an email from MS stating that I had been nominated, and I had to go search on the internet to find out what the MVP award was, because I had NO idea…

Anyway, it’s a good video – go watch it! :)

Edit: Only 15 minutes after I posted it: Sean is no longer GM of community support. Toby Richards is that now. Big thanks to my very fast MVP lead Gerard Verbrugge for setting me straight! :)

--
eliasen

Tuesday, 17 November 2009 19:16:36 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
Saturday, 14 November 2009

Hi all

Here at Logica in Denmark, we have just been told that we have been chosen as the Danish Microsoft Partner of the Year. We are naturally quite proud of this, and one of the reasons for choosing us is that despite the financial crisis we have gained market share.

You can read Microsoft's press release (only in Danish, I am afraid) here: http://www.logica.dk/file/18133

--
eliasen

Saturday, 14 November 2009 14:59:25 (Romance Standard Time, UTC+01:00)  #    Comments [0]  | 
