
Arke Systems Blog

Useful technical and business information straight from Arke.

The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2017

Render unto c#

When it comes to CMS development (among other things in life) I’m a big fan of granularity and reusability. I believe that in a good solution architecture, both content and code are managed in discrete, granular pieces, which promotes both consistency and reusability.

That’s why it bothers me that so many Sitecore professionals react to the word “rendering” the same way a horse would. To my mind, a good solution has a geometric ratio of layouts to sublayouts, and sublayouts to renderings. Yet I see projects that are extremely sublayout-centric, with content IA and logic governing page layout.

Sitecore’s arsenal of presentation management tools, such as layout details, placeholder settings and rendering parameters, allows us as developers to truly empower our content owners. When implemented well, these features give the content authors what I like to call “controlled control” over their pages.

Unfortunately, we often see solutions where sublayouts are the dominant presentation component. Sublayouts are far less efficient than renderings, both in terms of performance and management. With sublayouts, there are all the ascx files to manage and deploy. And using sublayouts to present content seems to promote having a single component (sublayout) present multiple fields. This can dramatically reduce the solution’s flexibility – or worse, lead to having multiple components that are slight variations on each other, or have cumbersome logic (often wired to “control” fields in the content) to suppress content or change the presentation behavior.

Worse, some solutions rely almost entirely on the content tree to govern layout, such as by having fields in page templates that change the behavior of the layout, and/or by having sets of child items that code “bubbles up” into the page when they are present.

So why are renderings so often left out of solution architecture? I suspect that the problem lies in Sitecore training. Although Sitecore trainers do point out the different ways that renderings can be created, they tend to use XSL as the example technology in class (probably because it’s quicker to demonstrate – the trainer can show changes to an XSL rendering immediately, without compiling). Regrettably, this leaves many with a linkage in their minds between renderings as a Sitecore artifact and XSL as a technology. The oft-maligned rendering can be implemented using multiple technologies, yet many, many developers believe that renderings can only be developed using XSL.

I’m not going to wade into the great XSL debate here. I personally like XSL, but I rarely use it in Sitecore projects, for a number of reasons that I’ll get into in a separate post. Suffice it to say that many developers, even if they know XSL, want to avoid it if for no other reason than to make their projects sustainable. XSL is a far rarer skill than C#, so it makes sense to ensure that future developers will be able to extend and maintain the project. And since there’s this misconception that “renderings = XSL”, the “baby” of a rendering-based architecture gets thrown out with the “bathwater” of XSL. And that’s a shame.

So let’s set the record straight. Renderings can and should be developed in C#. In fact, as John West points out in his book “Professional Sitecore Development,” there are four types of renderings:

  1. XSL renderings
  2. Method renderings
  3. URL renderings
  4. Web control renderings

Of these, Web Control renderings – controls that are implemented entirely in code and deployed in assemblies – are the least used yet most useful presentation component available. (In my next post, I’ll delve into the anatomy of a web control rendering.)

As a simple example, consider a page with a “core content” (“body”) area consisting of a title, a main image, and some body text. These three fields are defined in the page’s template. One way of handling this is to create a sublayout that renders these fields to the output (hopefully, at least, using field renderers). But what happens when the author does not want a title on a particular page? (This is a simple example, so let’s not quibble over usability or SEO.) Sure, the sublayout’s code-behind could suppress the <h1> when the field is empty or null. Or we could have a checkbox in the template to suppress the title.

What if the content author wants something other than a main image, like a Flash movie or a video? What if they need to insert something between the image and the body? And after the body, they might need a set of “spots” to point to other content?

So we might resort to having different sublayouts with variations on the content. This can lead to an unmanageable mess, creating confusion as to which sublayout does what. It also leads to redundant code, which makes ongoing maintenance and modification to the site much more challenging.

Another solution would be to put fields in the template to suppress content, or add content, or modify the presentation of content. Or we might create templates for child items that, when present, are bubbled up into the page. This puts management of presentation into the data. Sitecore is architected to give us excellent separation of content and presentation, so why fight that and force content to manage presentation?

I much prefer solutions that have very “light” layouts and sublayouts, with lots of renderings bound to placeholders. For the simple example, I would have a separate rendering each for title, main image, and body text, which I would bind to a placeholder in the sublayout for the core content area. Standard values in the item’s template would bind them by default, but the content owner would have the freedom to change the presentation as required. We can use features like thumbnails, rendering parameters, placeholder settings and compatible renderings both to assist them in the layout process, and to enforce brand or visual design requirements.

Most of my layouts and sublayouts are little more than div’s and placeholders. They exist to manage the geometry of the page or of regions of the page, not to present the actual content. Renderings bound to placeholders actually emit the content. Rendering parameter templates allow the editor to influence the source and behavior of the content in the page. Now, control over presentation and layout are managed in the presentation layer, and content is managed in the content layer. The design of the IA can break the content down into more manageable chunks, promoting reusability and avoiding redundancy.

To be fair, there are times when sublayouts are a better choice than renderings. For example, for forms or other situations where postback is required, I prefer to use sublayout (ascx) controls. Also, in cases where parts of the page structure are immutable, it is more efficient to statically bind renderings into sublayouts (by including them in the sublayout markup). This is fine for cases like fixed headers and footers, or when the site design requires an element (like a title) to always be present. There are also rare occasions when I allow a folder of child items to bubble up into a page, but even then, I usually use a placeholder-bound rendering to do the bubbling.

A highly granular architecture, which maintains separation of content from presentation, is hugely empowering to both developers and content editors. It promotes reusability of both code and content, shifts much of the responsibility for page assembly from code to configuration, and empowers editors with more control over the layout of their pages.

Posted by Andy Uzick on Thursday, January 24, 2013 6:52 PM
Permalink | Comments (0) | Post RSS | RSS comment feed

More DNN performance

Some DNN sites spend way too much time running the sproc dbo.GetSchedule.  This is probably worse if DNN is configured with its Scheduled Jobs in the default ‘Request’ mode (instead of ‘Timer’ mode).  Unfortunately that job is both slow and can deadlock on updates. 

The original job we had in our DNN 5.6.1 is doing:

    SELECT S.*, SH.*
    FROM dbo.Schedule S
        LEFT JOIN dbo.ScheduleHistory SH ON S.ScheduleID = SH.ScheduleID
    WHERE (SH.ScheduleHistoryID = (SELECT TOP 1 S1.ScheduleHistoryID
                                        FROM dbo.ScheduleHistory S1
                                        WHERE S1.ScheduleID = S.ScheduleID
                                        ORDER BY S1.NextStart DESC)
                OR SH.ScheduleHistoryID IS NULL)
            AND (@Server IS NULL OR S.Servers LIKE '%,' + @Server + ',%' OR S.Servers IS NULL)

Here’s almost the same thing, but faster and less likely to deadlock:

    SELECT S.*,
        (SELECT TOP 1 NextStart FROM ScheduleHistory S1 with(nolock)
         WHERE S1.ScheduleID = S.ScheduleID
         ORDER BY S1.NextStart DESC) as NextStart
    FROM dbo.Schedule S  with(nolock)
    WHERE (@Server IS NULL OR S.Servers LIKE '%,' + @Server + ',%' OR S.Servers IS NULL)

Replacing this one query dropped one problematic DNN site from 100% SQL Server CPU utilization to around 30%.

Posted by David Eison on Thursday, February 17, 2011 1:33 AM

Gotchas when loading jquery from a CDN

A client wants to take advantage of browser cache and load jquery from a CDN.

There is always a lot of discussion about whether one should load jquery from a CDN.  Pros include:

  • Some users will have the file cached, saving you 20k of bandwidth and them a bit of load time
  • It’s a very cheap (free) way to serve files from geographically distributed close-to-your-user servers
  • Browsers only simultaneously load a few files per domain name, so using an extra domain name can lead to more parallel file loads
  • You can count on Google to get gzip and expires headers right (provided you request the right specific file)

Drawbacks include:

  • There may be an extra DNS resolution to load from a new domain name, and DNS resolution takes time
  • Google might be down or blocked (particularly on intranet sites)
  • You’re serializing loads of your js anyway, so will you see gains from other parallel loads?

Particularly because some of our users are in controlled office environments where they don’t necessarily have access to the entire web, the possibility of Google being blocked is a real concern.  So, the script should fall back to loading a local copy of jquery if the CDN fails.

There are plenty of blog and forum discussions about this, so this is yet another, but I’d like to focus on some of the gotchas that can be encountered in even such an apparently straightforward task.

This is the most obvious potential solution:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript">
if (typeof jQuery === 'undefined') {
   var e = document.createElement('script');
   e.src = '/js/jquery-1.4.2.min.js';
   e.type = 'text/javascript';
   document.getElementsByTagName('head')[0].appendChild(e);
}
</script>
<script type="text/javascript" src="/script-that-needs-jquery-loaded.js"></script>

However, don’t use that ‘solution’! 

Do this instead, even though document.write is on the ‘generally avoid this’ list of things not to do:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript">
if (typeof jQuery === 'undefined')
    document.write(unescape("%3Cscript src='/js/jquery-1.4.2.min.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript" src="/script-that-needs-jquery-loaded.js"></script>


The first ‘obvious’ solution has a few non-obvious problems:

  1. The first problem is a race condition.  When a browser encounters a script tag in a document, it pauses while the script loads.  This is so that you can chain together a bunch of scripts that rely on each other – the jquery library makes some functions available, the next library you load can use those functions.  It’s a pretty essential feature, and we rely on it all the time without even thinking about it.

    However, look at what our failover case is doing – if jquery doesn’t load, it tacks it in as the last element in the document head in the DOM.

    Script elements added dynamically to the DOM *don’t* block like regular script elements – they load asynchronously. 
    This means that script-that-needs-jquery-loaded probably won’t have jquery loaded in time.  But maybe it will, due to caching.  We now have a heisenbug – if Google is accessible, everything works great.  If Google is not accessible, jquery still gets loaded, but without blocking, so things later in the page that need jquery will sometimes work and other times jquery won’t have loaded in time.
    The easiest fix is to switch to document.write to include a script tag in the page.  A script tag written with document.write will run and block once your current script block completes.

  2. Next up, we have a caching problem.  When you request the generic “1.4” copy of jquery from Google’s CDN, look at this little header:

    Cache-Control: public, must-revalidate, proxy-revalidate, max-age=3600

    The odds of getting a useful document from a visitor’s cache with a max-age of one hour are not that great.
    However, when you request the specific 1.4.2 file, the cache time is now:

    Cache-Control: public, max-age=31536000

    Google serves up 1.4 with a 1 hour cache expires time, because a new version might be released and they don’t want the old version stuck in your cache forever.  The solution is to instead request a specific version, which is served up with a year long expires time.

  3. Now, on our document.write change: There’s a gotcha here too. You’re technically not allowed to have the string </whatever> in your javascript, even if it’s inside a string.  Some browsers may run it wrong.  Some AV/firewall software will incorrectly dynamically rewrite this.  So

    document.write("<script src='/codescripts/js/jquery-1-3-2-min.js' type='text/javascript'></script>");

    isn’t legit.  Two solutions here – either split up the /script into two strings added together, or else encode the brackets and use the unescape function to decode.  I think the unescape is more clear:

    document.write(unescape("%3Cscript src='/codescripts/js/jquery-1-3-2-min.js' type='text/javascript'%3E%3C/script%3E"));

  4. Final problem: if you’re using jQuery 1.3, Firefox 3.5 doesn’t support document.readyState, so if you put this somewhere in the body of your page and it runs after the page is ready, your onready events may not fire.  There are fixes for this behavior in jQuery 1.4 and in Firefox 3.6, but to be safest, include your jQuery in the document head so that it runs before onready fires.
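To make the escaping workarounds from gotcha #3 concrete, here’s a standalone sketch comparing the two approaches. The emit() helper is a hypothetical stand-in for document.write so the resulting strings can be inspected outside a browser:

```javascript
// Both workarounds produce a script tag without the literal "</script>"
// sequence ever appearing in the source.  emit() just returns its input
// so we can compare the two results directly.
function emit(html) { return html; }

// Workaround 1: split the closing tag across two concatenated strings.
var viaSplit = emit("<script src='/js/jquery-1.4.2.min.js' type='text/javascript'></scr" + "ipt>");

// Workaround 2: percent-encode the brackets and decode with unescape.
var viaUnescape = emit(unescape("%3Cscript src='/js/jquery-1.4.2.min.js' type='text/javascript'%3E%3C/script%3E"));

console.log(viaSplit === viaUnescape); // true – identical markup
```

Either way, the markup the browser eventually parses is the same; which one reads more clearly is a matter of taste.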

No matter what, test!  On that first script load, temporarily set an invalid filename, temporarily set an invalid domain name, and make sure the fallback works like you’d expect!

Posted by David Eison on Thursday, August 12, 2010 5:38 PM

jQuery Validation. My first use and modification

I meant to update this post earlier but forgot.  The same can be accomplished using the built-in valid: option with CSS classes.

I’m relatively new to jQuery.  But I needed a client-side validation framework.  Enter jQuery.validate.  Great little framework and easy to implement.  But I needed one more thing: when a field WAS valid, I wanted it to display a success message.  Below are the (ugly, hardcoded) additions I made.  They give me a nice little green checkmark when the field’s all nice and valid.


successes: function() {
      return $(this.settings.errorElement + "." + this.settings.successClass, this.errorContext);
},

successesFor: function(element) {
      return this.successes().filter("[for='" + this.idOrName(element) + "']");
},

showSuccess: function() {
      for (var i = 0; this.successList[i]; i++) {
            var success = this.successList[i];
            if (success) {
                  var message = "<img src=\"../../Content/Images/greencheckmark.png\" class=\"checkmark\" />";
                  if (!this.settings.successClass) {
                        this.settings.successClass = "field-validation-success";
                  }
                  var label = this.successesFor(success);
                  if (label.length) {
                        // refresh error/success class
                        label.removeClass(this.settings.errorClass).addClass(this.settings.successClass);
                        // check if we have a generated label, replace the message then
                        label.attr("generated") && label.html(message);
                  } else {
                        // create label
                        label = $("<" + this.settings.errorElement + "/>")
                              .attr({ "for": this.idOrName(success), generated: true })
                              .addClass(this.settings.successClass)
                              .html(message || "");
                        if (this.settings.wrapper) {
                              // make sure the element is visible, even in IE
                              // actually showing the wrapped element is handled elsewhere
                              label = label.hide().show().wrap("<" + this.settings.wrapper + "/>").parent();
                        }
                        if (!this.labelContainer.append(label).length) {
                              this.settings.errorPlacement
                                    ? this.settings.errorPlacement(label, $(success))
                                    : label.insertAfter(success);
                        }
                  }
                  if (this.settings.success) {
                        typeof this.settings.success == "string"
                              ? label.addClass(this.settings.success)
                              : this.settings.success(label);
                  }
                  this.toShow = this.toShow.add(label);
            }
      }
      if (this.successList.length) {
            this.toShow = this.toShow.add(this.containers);
      }
      if (this.settings.unhighlight) {
            // blank out any generated success labels for fields that are now invalid
            for (var i = 0, elements = this.invalidElements(); elements[i]; i++) {
                  var label = this.successesFor(elements[i]);
                  if (label.length) {
                        label.attr("generated") && label.html("");
                  }
            }
      }
      this.toHide = this.toHide.not(this.toShow);
},

And I modified the following function:


element: function(element) {
      element = this.clean(element);
      this.lastElement = element;
      this.prepareElement(element);
      this.currentElements = $(element);
      var result = this.check(element);
      if (result) {
            delete this.invalid[element.name];
      } else {
            this.invalid[element.name] = true;
      }
      if (!this.numberOfInvalids()) {
            // Hide error containers on last error
            this.toHide = this.toHide.add(this.containers);
      }
      this.showErrors();
      this.showSuccess(); // the modification: also render the success indicators
      return result;
},


Posted by Trenton Adams on Tuesday, June 16, 2009 2:16 PM

IE, hover, and transparent PNGs

IE, particularly IE6, has a few quirks that anybody working on websites needs to know about.  Luckily there are workarounds, but unfortunately the workarounds for different problems don't always play nice with each other.

1) Transparent PNGs don't work right by default.  Inside an image tag, the alpha channel of a transparent png will be ignored in IE6, leading in my case to very ugly corners on a rounded window.  To fix this, you need to use CSS to specify a "filter" to have IE display the image with a different image display routine.  The easy approach is to run some javascript to go back and rewrite all your images if you're in IE6; scripts like SuperSleight or Unit PngFix will go through and dynamically change your page to use filters for PNGs.  Unfortunately, I've had bad luck combining these scripts with other fix scripts; on my latest project I ended up doing all of my PNGs with external style sheets, then including an IE6 specific style sheet using conditional comments, and manually setting the PNG filters in the style sheet.  If you're doing it manually, note that you may also run into IE z-index problems, in my case it was on some input boxes, that may be solvable by putting the problem elements inside a position:relative container and manually setting their z-index.  Yes, the position:relative shouldn't be necessary, it's just a bug workaround.
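As a sketch of the conditional-comment approach described above (the file names, class name, and image path are hypothetical), the IE6-only stylesheet with the manual AlphaImageLoader filter is pulled in like this:

```html
<!-- Served to every browser -->
<link rel="stylesheet" type="text/css" href="/css/site.css" />

<!-- Seen only by IE 6 and below; all other browsers treat this as a comment -->
<!--[if lte IE 6]>
<style type="text/css">
  /* Replace the transparent background PNG with IE's filter routine. */
  .rounded-corner {
    background-image: none;
    filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(
      src='/images/corner.png', sizingMethod='crop');
  }
</style>
<![endif]-->
```

Keeping the overrides in their own conditional block means no IE-specific rules leak into the stylesheet the other browsers see.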

2) CSS "Hover" doesn't work right except for links in IE6.  Particularly, it doesn't work on images, so if your client wants an image to light up when the mouse is over it, you're going to have to deal with this.  Adding javascript "onmouseover" and "onmouseout" events can make hover work by dynamically swapping the CSS class when the mouse triggers the javascript.  The easiest way to do this is 'whatever:hover'; it's a .htc file that you add to the CSS for the body of your page via the IE-specific 'behavior' property, and it automatically tracks down your hover classes and adds the necessary Javascript to invoke them.  If doing it manually, what you want are onmouseover and onmouseout events that change the CSS class, coupled with CSS classes that specify the non-hover and hover behavior you want - but you only need this for IE6, so it's probably best not to do it manually because the conditional comments throughout your code will get ugly quick.
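If you do wire it up manually, the mouse handlers only need to swap a class. A minimal sketch, assuming a plain "hover" class name (the helpers are pure string logic so they can be tested anywhere; the commented-out lines show the hypothetical browser wiring):

```javascript
// Compute the className with/without a "hover" class – pure string logic,
// so the only browser-specific code is the two event handlers below.
function hasHover(cls) { return (" " + cls + " ").indexOf(" hover ") !== -1; }
function hoverOn(cls)  { return hasHover(cls) ? cls : (cls ? cls + " hover" : "hover"); }
function hoverOff(cls) { return (" " + cls + " ").replace(" hover ", " ").replace(/^\s+|\s+$/g, ""); }

// Browser wiring (only needed for IE6, e.g. loaded via a conditional comment):
// var el = document.getElementById('navImage');   // hypothetical id
// el.onmouseover = function () { this.className = hoverOn(this.className); };
// el.onmouseout  = function () { this.className = hoverOff(this.className); };
```

The stylesheet then defines both the base class and a standalone `.hover` rule; keep the hover rule to a single class, since IE6 also mishandles chained selectors like `.nav.hover`.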

Unfortunately, whatever:hover and unitpngfix didn't play nice together for me.  Manually doing the PNG fix was an acceptable workaround, and had the added benefit of not waiting for the page to load before swapping the pngs.

3) CSS "Hover" doesn't work right except for links in IE7 unless you set a strict doctype.  So, your first step is to set a strict doctype for IE7 to work right.  The main thing to be aware of when setting a strict doctype is that browsers become less forgiving; in particular, in-line elements cannot have a width and height specified, so your stylesheets will need to explicitly set "display:block" if you were forcing widths and heights on spans.  Also be aware that your box model will change, from the IE-specific quirks model to the actual correct spec, but you were probably dealing with that already due to cross-browser compatibility.

I'm sure there are more troubles you will run into, but these were the main ones that caused an IE6 headache on my latest project.  I've found plenty of info on getting PNG transparency to work, and plenty on getting hover to work, but nothing on both together.  It can be done, but the easy drop-in fixes might not be enough for you and it pays to understand what the root problem they are addressing is.

Posted by David Eison on Tuesday, December 16, 2008 3:52 PM

Firefox 3 Release Tomorrow

Tomorrow, June 17th, is the long-awaited release of Firefox 3.  I have been using version 3 for quite a while now, and I've been using it at work since the Developer Tools were made compatible.  It vastly improves on version 2 in both performance and memory footprint, so the biggest complaints people have had about Firefox should be resolved.

Also, for you Opera users, 9.5 is out as well.

Posted by Wayne Walton on Monday, June 16, 2008 11:41 AM

Mitigate SQL Injection Attacks on Legacy ASP Sites

For those of you who, like me, have to support old sites for your clients, dealing with the vulnerabilities of old code can be quite a hassle – especially now that the best documented and known exploits can be completely automated.  One of our clients was recently subject to such an attack.  Unfortunately, when the site was originally developed, no real security was built into the code.  A single database user ran all SQL requests, whether they came from the public side or the admin side, and all queries were issued directly from the pages, meaning that every page had the code and every one would have to be touched to really fix it.

As we were already redeveloping the modern replacement for the site, the client wanted us to spend as little time as possible on the old one, so a true security audit was out of the question.  An audit, of course, is still the right way to solve the problem, but right isn't always in the budget.  That leads us to a couple of tools to help avoid the problem until we could release the replacement site.

The first is a tool from Microsoft called URLScan.  URLScan has a lot of features, but what we used it for here was to limit the length of query strings.  Since the attack strings were almost always longer than a regular POST or GET, we just had to limit the length of the strings for most of those attacks to fail.  Take a look at it; there are lots of neat tricks URLScan can do.
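For reference, the query-string cap lives in UrlScan.ini under [RequestLimits]. The values below are assumptions – size them to the longest legitimate request your site actually makes:

```ini
; UrlScan.ini fragment (UrlScan 2.5 and later)
; Reject requests whose URL or query string exceeds these lengths.
; Most automated SQL injection payloads are far longer than any
; legitimate request, so a modest cap stops them cold.
[RequestLimits]
MaxUrl=260
MaxQueryString=1024
```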

The big gun we used was an ISAPI filter written by Rodney Viana.  It's designed to scrub GET and POST requests of anything that would look like an attack.  It has been a life saver, especially when the attacks were happening hourly.

Posted by Wayne Walton on Monday, June 16, 2008 10:50 AM

Web Developer Tools now support Firefox 3.0

For those of you like me that use Chris Pederick's Web Developer Tools on the regular, you should be happy to know that 1.1.5 is out and supports Firefox 3.0.  This makes me happy, as I have migrated to 3.0 everywhere except on my work machine, because I was waiting on support for a few extensions.

Apparently I was a bit behind the power curve on this one, as he released it a couple of weeks ago.

Posted by Wayne Walton on Thursday, April 3, 2008 1:14 PM

Asp:LinkButton as an Asp:Panel's Default Button in Firefox

The asp:LinkButton doesn’t work as a panel's DefaultButton in Firefox.

Here’s a link explaining the issue:

I’ve written a custom control similar to the one in the article (Arke:LinkButton), which fixes the Firefox issue.


using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Text;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Arke.Web.UI.WebControls
{
    [ToolboxData("<{0}:LinkButton runat=server></{0}:LinkButton>")]
    public class LinkButton : System.Web.UI.WebControls.LinkButton
    {
        protected override void OnLoad(System.EventArgs e)
        {
            base.OnLoad(e);
            // Register the helper function once per page, then one call per control instance.
            Page.ClientScript.RegisterStartupScript(GetType(), "addClickFunctionScript", _addClickFunctionScript, true);
            string script = String.Format(_addClickScript, ClientID);
            Page.ClientScript.RegisterStartupScript(GetType(), "click_" + ClientID, script, true);
        }

        private const string _addClickScript = "addClickFunction('{0}');";

        private const string _addClickFunctionScript =
            @"function addClickFunction(id) {
                  var b = document.getElementById(id);
                  if (b && typeof(b.click) == 'undefined') {
                      b.click = function() {
                          var result = true;
                          if (b.onclick) result = b.onclick();
                          if (typeof(result) == 'undefined' || result) { eval(b.href); }
                      };
                  }
              }";
    }
}

To use this control...

  1. Add "using Arke.Web.UI.WebControls;" to your code behind.
  2. Register the assembly in the ASP.NET page: "<%@ Register Assembly="Arke.Web" Namespace="Arke.Web.UI.WebControls" TagPrefix="Arke" %>"
  3. Add the control (or change your asp:LinkButtons to Arke:LinkButtons) "<Arke:LinkButton ID="ArkeLoginButton" Text="log in" runat="server" CssClass="login_button" />"

Posted by Trenton Adams on Thursday, March 13, 2008 1:16 PM

Flash talking to Javascript

For the past few versions of Shockwave Flash, loading a flash object from a browser is no longer a one-way street.  Flash objects can interact with the browser, including calling Javascript, and the browser can interact with Flash, including Javascript calling Flash's ActionScript.

There are some security restrictions in place – ActionScript has to be registered via ExternalInterface.addCallback, and by default you can't call scripts across servers.  But we were running into a problem I haven't seen clearly explained anywhere: calls from Flash to Javascript were failing only when the page was displayed in Mozilla.  A simple test with window.parent.doTest() worked in IE, but not in Netscape.

It turns out that Mozilla has wrapped the Javascript object model in XPCNativeWrapper as part of a security fix a while back.  Unfortunately, part of the functionality of this wrapper includes hiding any Javascript functions outside of the 'self' container.

So, for Javascript calls to work right in Netscape, they need to only call functions in the self container (i.e. the javascript needs to be in the head for the current iframe.)  If you try to embed the .swf directly in an iframe, it won't be able to call any Javascript. 

Changing our test to self.doTest(), and moving the definition of the doTest function inside the current iframe's HTML, fixed the problem.  But we had to change our app a bit – previously the iframe directly included the .swf, and there is no way to make the Javascript available that way.

Maybe this is documented clearly somewhere, and if so, I'd love to know the link so that I can have better docs to review next time I'm stuck.  So far About exchanging data with Flex applications has been useful, but everything else I could find about flash javascript problems just pointed to the allowScriptAccess parameter Adobe added to combat cross site scripting.

Posted by David Eison on Friday, February 8, 2008 1:46 PM