Arke Systems Blog | Useful technical and business information straight from Arke.


Use an Inner Tube DIV Instead of Padding

Making sure things are cross-browser compatible is always a challenge.  But a general rule of thumb is that you should just avoid using padding in your CSS.  It’s probably the #1 source of cross-browser layout issues.




Avoid using padding in general.  It especially causes issues when you need to set a width in your CSS, because you’ll get different results in IE and Firefox.  The better method is to put a div inside a div (called an “inner tube”) and set margin on the inner div.
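A sketch of the pattern (class names and sizes are just illustrative):

```html
<!-- The outer div owns the width; it has no padding, so IE and
     Firefox agree on how wide it renders. -->
<div class="outer" style="width: 300px;">
  <!-- The inner "tube" creates the spacing with margin, which
       doesn't change the outer div's computed width. -->
  <div class="inner" style="margin: 10px;">
    Content goes here.
  </div>
</div>
```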

Posted by Eric Stoll on Thursday, November 19, 2009 9:39 AM
Permalink | Comments (0) | Post RSS | RSS comment feed

IIS 6 Compression Review

There’s a lot of information out there on IIS compression, especially for what seems like it should be a simple topic…

Here are a few things I haven’t seen collected well in one spot:

1) Don’t test your compression settings from behind ISA.  Two reasons: ISA can strip the Accept-Encoding header so that it can inspect content, and some of the default IIS settings disable compression for web proxies (apparently some web proxies have trouble with compressed responses).

2) There are two sources of troubles with compression:

  • Proxies serving compressed content to web browsers that don’t support compression.  There are two ways to address this in IIS 6: Either don’t serve compressed content to proxies by setting HcNoCompressionForProxies to true, or else set HcSendCacheHeaders to true and use the default in-the-past HcExpiresHeader & HcCacheControlHeader values so that proxies will consider the file expired and not cache it.
  • Web browsers that claim to support compression but don’t support it properly.  The most common one still in use is certain versions of IE 6.  The safest thing to do is to not serve compressed content to IE 6; unfortunately, there is no simple way to do that in IIS 6.  (The Microsoft AJAX library automatically does this for its files if you use its compression.)
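These Hc* properties live in the IIS 6 metabase under W3SVC/Filters/Compression/Parameters.  From memory (verify the paths against your own metabase before running anything), they can be set with adsutil.vbs:

```
cd %systemdrive%\Inetpub\AdminScripts
cscript.exe adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcNoCompressionForProxies TRUE
cscript.exe adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcSendCacheHeaders TRUE
iisreset
```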

3) If you are using Microsoft’s AJAX (ScriptResource.axd) it can do compression separately from IIS, but if you have both do compression you will break things.  So either configure IIS to NOT compress AXD and set the scriptResourceHandler enableCompression="true" flag, or else configure IIS to compress AXD and set scriptResourceHandler enableCompression to false.  Note that you may have problems with other AXD files depending on what sort of content they produce.
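For the first option, the web.config fragment looks like this (the standard ASP.NET AJAX 3.5 configuration section; pair it with an IIS setting that excludes .axd from compression):

```xml
<system.web.extensions>
  <scripting>
    <scriptResourceHandler enableCompression="true" enableCaching="true" />
  </scripting>
</system.web.extensions>
```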

Posted by David Eison on Tuesday, September 1, 2009 5:28 PM

Log4net and web gardens

If you actually want to use more than one processor for a website on your fancy multi-processor server (IIS calls this a “web garden” because marketers like to brand new terms for existing technologies), and you’re using a log4net file appender, a default configuration will cause problems: your separate web server processes will fight over locks on the same log file.  To avoid this, you’ll need to configure log4net to put the PID in the filename:


<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">

. . .

<file type="log4net.Util.PatternString" value="C:\\IISLogs\\project\\log-[%processid]-" />

. . .

<datePattern value="yyyyMMdd'.txt'" />

Unfortunately, this ruins easy filename-based date sorting; I don’t know of a way around that.

I recommend avoiding the ‘MinimalLock’ configuration that you will see mentioned in Google searches.  It opens and closes the file for every log message; while I haven’t tested this specifically with log4net, I have run into several apps in the past that ruined their performance with excessive file opening and seeking.

Posted by David Eison on Tuesday, September 1, 2009 4:40 PM

StackOverflowException in COM object called from a w3wp.exe process

The w3wp.exe process has a stack reserve size of 256 KB.  I ran into a StackOverflowException while calling a COM object from ASP.NET.  I didn’t have access to the source code of the COM object, but after a lot of reading and help from others, I figured the third-party DLL must be allocating a lot of local variables or making a deeply recursive call.  So I attempted the following, which fixed it.


Note: after step 2 you only have 60 seconds to perform step 3.

1.) Download WfpDeprotect:

2.) Run:  wfpdeprotect.exe “c:\WINDOWS\system32\inetsrv\w3wp.exe”

3.) Run: editbin /STACK:1048576 “c:\WINDOWS\system32\inetsrv\w3wp.exe”

4.) Wait at least one minute

5.) Run: dumpbin /ALL /OUT:”c:\dumpbinresult.txt” “c:\WINDOWS\system32\inetsrv\w3wp.exe”

6.) Check the text file to make sure the size of stack reserve is now 100000 (dumpbin prints it in hex; 0x100000 = 1048576)


This isn’t a permanent fix per se.  Any time w3wp.exe is updated by Windows Update or something similar, the above steps will have to be re-run.

Posted by Trenton Adams on Friday, August 28, 2009 2:03 PM

Loading a client’s CRM data locally

We had a client with a problem in production that the normal try-the-obvious-things via phone or email couldn’t solve.  The problem didn’t show up on their test system with test data, and their production operation restrictions meant we couldn’t work on the production system directly, but they could get us a database export of their CRM environment.

So, I set about getting their database backup to work well enough in a local environment that I could replicate the problem. 

Conveniently, this is exactly what CRM’s Deployment Manager “Import Organization” feature does.  Point it at the newly restored database, tell it how to map users in Active Directory, and it sets up the data for you.

If you need to understand what is happening at a lower level, you can do a similar procedure manually to get the UI working.  NOTE that this approach definitely did not produce a fully working environment; I stopped adjusting things once I had an environment good enough for my testing.  In particular, the CRM async service crashes on a data-not-found error, so clearly at least one GUID isn’t updated as needed for fully correct functioning.  This procedure could not be used in production, but it’s good enough to replicate and test our scenario via the UI and SDK.  It is absolutely not supported, use it at your own risk, stick to a clean environment that you can mangle safely, and try the “Import Organization” feature first!

  1. On a clean working CRM that you can mess up and break if necessary (ideally in a VM with its own Active Directory, just in case), make a new CRM organization with a name/friendly name matching the CRM to restore.  This will create an orgname_MSCRM database.  Rename that orgname_MSCRM database to something else for now (e.g. CLEAN_orgname_MSCRM), then restore the client CRM database backup to the original name.  (Change the filesystem filenames to avoid an error on restore; renaming a database doesn’t rename its files.)
  2. We need to swap out GUIDs, but don’t need to know much about them in detail, so install a searchandreplace sproc in the restored db. I used this one:
    Modified to filter on: AND DATA_TYPE IN ('uniqueidentifier')
    Can’t cut-and-paste it directly here due to author including an all-rights-reserved copyright, but he clearly states on the page to feel free to use it and modify it.
  3. Get the organization guids involved: select organizationid from organizationbase
    in both clean db and restored db
  4. To change GUIDs, we need to temporarily disable constraints.  This script is ‘good enough’; it throws some cannot-alter errors that can be ignored.
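A sketch of the disable step, using SQL Server’s undocumented sp_MSforeachtable helper (for step 6, the same statement with NOCHECK replaced by CHECK re-enables them):

```sql
-- Disable every FK and CHECK constraint in the restored database.
-- Run this in the restored orgname_MSCRM database; expect and ignore
-- 'cannot alter' errors on a few objects.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
```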
  5. Make the restored database use guids from the clean database and clean up some other data

    -- replace organization guids:
    exec SearchAndReplace 'organizationid_restoredguid','organizationid_cleanguid';

    -- Fix system users ids so that an admin matches your own login
    update SystemUserBase set DomainName='[domain\user]' where FullName='[client admin login name]';

    -- systemuserids, for their admin and my admin
    exec SearchAndReplace 'adminsystemuserid_restoredguid','mysystemuserid_cleanguid';

    -- Fix domain logins from CLIENTDOMAIN\ to local domain
    UPDATE SystemUserBase SET DomainName=
      CASE WHEN (CHARINDEX('\',DomainName,0)>0) THEN '[localdomain]' + SUBSTRING(DomainName,CHARINDEX('\',DomainName,0),99999)
           ELSE DomainName
      END;

    -- update guids found in the organization table, on the left are values from client db, on the right values from clean db.
    -- user group id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- privilege user group id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- system user id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- sql access group id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- reporting group id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- Modified by
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- integration userid
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- priv reporting group id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- Base currency id
    exec SearchAndReplace 'restoredguid','cleanguid';
    -- handle a few strings that aren’t guids
    update OrganizationBase set PrivReportingGroupName='[cleanvalue]';
    update OrganizationBase set ReportingGroupName='[cleanvalue]';
    update OrganizationBase set SqlAccessGroupName='[cleanvalue]';
  6. Re-enable constraints, same script as before but replace ‘nocheck’ with ‘check’.
  7. Check your work:
    -- no error data should be printed out
  8. Restart CRM and log in (remember to use the /OrgName URL); you should now have the client’s data running in a local CRM!


Posted by David Eison on Wednesday, July 22, 2009 6:44 PM

jQuery Validation. My first use and modification

I meant to update this post earlier but forgot.  The same can be accomplished using the built-in valid: option with CSS classes.

I’m relatively new to jQuery, but I needed a client-side validation framework.  Enter jQuery.validate: a great little framework and easy to implement.  But I needed one more thing.  When a field WAS valid, I wanted it to display a success message.  Below are the (ugly, hardcoded) additions I made.  They give me a nice little green checkmark when the field’s all nice and valid.


successes: function() {
      return $(this.settings.errorElement + "." + this.settings.successClass, this.errorContext);
},

successesFor: function(element) {
      return this.successes().filter("[for='" + this.idOrName(element) + "']");
},

showSuccess: function(me) {
      for (var i = 0; this.successList[i]; i++) {
            var success = this.successList[i];
            if (success) {
                  // Hardcoded image for the success indicator.
                  var message = "<img src=\"../../Content/Images/greencheckmark.png\" class=\"checkmark\" />";
                  if (!this.settings.successClass) { this.settings.successClass = "field-validation-success"; }
                  var label = this.successesFor(success);
                  if (label.length) {
                        // refresh error/success class
                        label.removeClass(this.settings.errorClass).addClass(this.settings.successClass);
                        // check if we have a generated label, replace the message then
                        label.attr("generated") && label.html(message);
                  } else {
                        // create label
                        label = $("<" + this.settings.errorElement + "/>")
                              .attr({ "for": this.idOrName(success), generated: true })
                              .addClass(this.settings.successClass)
                              .html(message || "");
                        if (this.settings.wrapper) {
                              // make sure the element is visible, even in IE
                              // actually showing the wrapped element is handled elsewhere
                              label = label.hide().show().wrap("<" + this.settings.wrapper + "/>").parent();
                        }
                        if (!this.labelContainer.append(label).length) {
                              this.settings.errorPlacement
                                    ? this.settings.errorPlacement(label, $(success))
                                    : label.insertAfter(success);
                        }
                  }
                  if (this.settings.success) {
                        typeof this.settings.success == "string"
                              ? label.addClass(this.settings.success)
                              : this.settings.success(label);
                  }
            }
      }
      if (this.successList.length) {
            this.toShow = this.toShow.add(this.containers);
      }
      if (this.settings.unhighlight) {
            // clear success labels from any elements that are now invalid
            for (var i = 0, elements = this.invalidElements(); elements[i]; i++) {
                  var label = this.successesFor(elements[i]);
                  if (label.length) {
                        label.attr("generated") && label.html("");
                  }
            }
      }
      this.toHide = this.toHide.not(this.toShow);
},

And I modified the following function:


element: function(element) {
      element = this.clean(element);
      this.lastElement = element;
      this.prepareElement(element);
      this.currentElements = $(element);
      var result = this.check(element);
      if (result) {
            delete this.invalid[element.name];
      } else {
            this.invalid[element.name] = true;
      }
      if (!this.numberOfInvalids()) {
            // Hide error containers on last error
            this.toHide = this.toHide.add(this.containers);
      }
      this.showErrors();
      this.showSuccess();  // added: render the success checkmarks
      return result;
},

Posted by Trenton Adams on Tuesday, June 16, 2009 2:16 PM

LINQ to SQL and return types for dynamic SQL inside sprocs

LINQ to SQL figures out what a method returns by executing it with SET FMTONLY ON, which causes SQL Server not to really run the method but instead just examine the tables and columns used.

Unfortunately, this completely fails for dynamic SQL, so the LINQ designer can’t figure out the return type.  It even goes so far as to gray out the option for you to set the return type, forcing it to (none).

You can manually hack on the .cs file, but that file gets regenerated, so avoid it.  Instead, if you just have a ‘get’-style sproc with no bad side effects, you can tell SQL Server that it’s OK to really run your sproc.

First, verify you have no bad side effects from running your sproc (e.g. that it’s ok to call it whenever Visual Studio thinks it wants to). 

Then, inside of the stored procedure, add:

SET FMTONLY OFF;

Next, make sure your method runs ok with all null parameters. I do this by providing some reasonable values:

IF (@Param1 IS NULL)
  SET @Param1=0;
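Put together, the shape of such a sproc looks roughly like this (a sketch; MySproc, @Param1, and the dynamic query are placeholders, not code from a real project):

```sql
CREATE PROCEDURE MySproc
    @Param1 int = NULL
AS
BEGIN
    -- Let the designer really execute the body so it can see
    -- the columns the dynamic SQL returns.
    SET FMTONLY OFF;

    -- Substitute a reasonable value so a designer-issued call
    -- with all-null parameters still returns columns.
    IF (@Param1 IS NULL)
        SET @Param1 = 0;

    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT ...';  -- dynamic SQL built here
    EXEC sp_executesql @sql;
END
```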

Finally, run it from Visual Studio to make sure it works:

Exec MySproc null, null, null

If you get back columns, you’re great.  If not, check that your reasonable values work ok.

Finally, the LINQ designer does some aggressive caching of method return types.  To change a return type, I have had to delete the method, save my project, close the connection in Server Explorer, exit Visual Studio, re-open the connection, and re-drag the method over before it would get over the (none) return type and let me pick one.  “Refresh” didn’t work.

Note that the return type will stay as (none) in Visual Studio if it encounters a problem running your method, so be sure that SET FMTONLY ON; Exec [methodname] [null parameters] works before trying to fight the cache problem.

DamienG's blog was the best source of info I found while troubleshooting this problem.

Posted by David Eison on Thursday, June 4, 2009 12:16 AM

Google Maps link in CRM

So I've been looking for a good way to get maps of locations directly from Dynamics CRM.  There are lots of neat solutions out there that embed a Google or Live map directly into a tab, but I didn't want the extra tab.  I wanted something simpler.  So I made this.  It's the simplest solution I could think of that effectively solves the issue.  It literally took more time to think of than to implement.

First, go to customize the entity that you want to have the link on.  I chose Account and Address for my implementation, but any entity with an address will do.  Create an attribute called Google Maps and put it on the form.  Make sure you give the attribute a format of URL and increase the maximum length to 500 characters.  This will make it clickable from the UI and ensure you don't cut off the end of the address.

Now go to the Form and go to Form Properties. Open the OnLoad event and paste this JavaScript in:

crmForm.all.new_googlemaps.DataValue = "http://maps.google.com/maps?q=" + crmForm.all.line1.DataValue + "+" + crmForm.all.city.DataValue + "+" + crmForm.all.stateorprovince.DataValue + "+" + crmForm.all.postalcode.DataValue;

A few important things here:  "http://maps.google.com/maps?q=" is the beginning of the Google Maps query string.  You need it.  The attributes after that, such as line1, city, etc., are specific to the Address entity.  If you want to do this on a different entity, you'll have to find out the specific names of its address, city, state and zip code fields.  Also, these instructions are US-oriented; for international addresses, you'll have to add whatever fields are relevant to get Google Maps to return correct addresses.

In the end, you'll have a clickable field that will open up a map to the address in CRM in a new browser window. Very convenient, and also compatible with mobile CRM solutions!

Categories: CRM
Posted by Wayne Walton on Friday, May 29, 2009 2:30 PM

FedEx Integration into CRM 4.0

The FedEx Shipping Manager software has a suite of integration features that let you import and export fields from external data sources, including files and ODBC connections.  After creating an ODBC connection to the CRM database, imports were simple to set up.  But the export was not working: all the fields registered as read-only no matter how I set up the permissions.  I tried a number of workarounds.  First I tried creating an Access database linked to the CRM tables, but to no avail; the FedEx integration assistant would not see the linked tables at all.  After trying a few more things with ODBC, I finally created a new database.  Within it, I set up a single table with two columns (I only wanted to export the tracking number for now): an ID column and the tracking number.


OrderNumber | TrackingNumber


I set up the integration in FedEx to insert a new row into this table every time a shipment completed.  Then, to update the CRM SalesOrder, I set an ‘on insert’ trigger on the table; the trigger updated the appropriate record in CRM.
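The trigger itself is only a few lines.  A hypothetical sketch (the table and column names here are illustrative, not the real CRM 4.0 schema; direct SQL writes to CRM tables are unsupported, which is part of why this is a hack):

```sql
CREATE TRIGGER trg_TrackingNumber_Insert
ON ShipmentExport        -- the two-column staging table FedEx writes to
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the new tracking number onto the matching CRM sales order.
    UPDATE so
    SET so.new_trackingnumber = i.TrackingNumber
    FROM SalesOrderExtensionBase so
    JOIN inserted i ON so.new_ordernumber = i.OrderNumber;
END
```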

It’s a hack, but it worked.

Posted by Trenton Adams on Monday, May 18, 2009 10:11 AM

Microsoft Dynamics CRM 4.0 Update Rollup 4 released

Why, it seems like just yesterday we were posting about Update Rollup 3, and here comes Rollup 4!


You can find the KB article here: 

The actual files are here:  

Don't forget the updated help files!

One quick addendum: make sure you clear your Internet Explorer cache after installing, on both the server and the client side.

Posted by Wayne Walton on Monday, May 11, 2009 2:16 PM