Arke Systems Blog

Useful technical and business information straight from Arke.

More DNN performance

Some DNN sites spend far too much time running the sproc dbo.GetSchedule. This is probably worse if DNN is configured with its Scheduled Jobs in the default ‘Request’ mode (instead of ‘Timer’ mode). Unfortunately, the query is both slow and prone to deadlocking against updates.

The original query our DNN 5.6.1 install runs is:

SELECT
    S.*,
    SH.NextStart
FROM dbo.Schedule S
LEFT JOIN dbo.ScheduleHistory SH ON S.ScheduleID = SH.ScheduleID
WHERE (SH.ScheduleHistoryID = (SELECT TOP 1 S1.ScheduleHistoryID
                               FROM dbo.ScheduleHistory S1
                               WHERE S1.ScheduleID = S.ScheduleID
                               ORDER BY S1.NextStart DESC)
        OR SH.ScheduleHistoryID IS NULL)
    AND (@Server IS NULL OR S.Servers LIKE '%,' + @Server + ',%' OR S.Servers IS NULL)

Here’s almost the same thing, but faster and less likely to deadlock:

SELECT
    S.*,
    (SELECT TOP 1 NextStart
     FROM dbo.ScheduleHistory S1 WITH (NOLOCK)
     WHERE S1.ScheduleID = S.ScheduleID
     ORDER BY S1.NextStart DESC) AS NextStart
FROM dbo.Schedule S WITH (NOLOCK)
WHERE (@Server IS NULL OR S.Servers LIKE '%,' + @Server + ',%' OR S.Servers IS NULL)

Replacing this one query dropped one problematic DNN site from 100% SQL Server CPU utilization to around 30%.


Posted by David Eison on Thursday, February 17, 2011 1:33 AM

DotNetNuke PurgeScheduleHistory

DotNetNuke runs a task to purge its schedule history; however, the stored procedure that does this has performance problems that can cause deadlocks on a high-traffic website.

The query that the PurgeScheduleHistory stored procedure ships with is:

DELETE FROM dbo.ScheduleHistory
FROM dbo.Schedule s
WHERE (
  SELECT COUNT(*)
  FROM dbo.ScheduleHistory sh with (nolock)
  WHERE sh.ScheduleID = ScheduleHistory.ScheduleID
  AND sh.StartDate >= ScheduleHistory.StartDate
) > s.RetainHistoryNum
AND s.RetainHistoryNum <> -1
AND s.ScheduleID = ScheduleHistory.ScheduleID

Any time you write a query that deletes based on a select, you run the risk of deadlock. And whenever you do a sweeping delete on a table, the locks can escalate to page locks, interact badly with indexes, and land you in a deadlock scenario again. On the surface this doesn’t sound like a deadlock scenario, but you have to understand how locks affect indexes and how locks escalate in SQL Server – under the right conditions, even a simple SELECT query can deadlock (http://stackoverflow.com/questions/661908/sql-server-deadlocks-between-select-update-or-multiple-selects).

The fix to this stored procedure follows the same approach Microsoft used to fix the DeleteExpiredSessions stored procedure that ships with ASP.NET (http://support.microsoft.com/kb/973849).

The following query should replace the PurgeScheduleHistory stored procedure:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[PurgeScheduleHistory]
AS

SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW

-- Gather the rows to purge in one pass, reading without locks.
create table #T (ID int not null primary key)

insert into #T
select schedulehistoryid from (
    -- Rank each schedule's history newest-first; rows ranked beyond
    -- RetainHistoryNum are the old entries to purge. (Note the DESC:
    -- ranking oldest-first would keep the wrong end of the history.)
    select s.ScheduleID, sh.schedulehistoryid,
        rank() over (partition by s.scheduleid order by sh.startdate desc) rn,
        RetainHistoryNum
    from ScheduleHistory sh WITH (READUNCOMMITTED)
    join Schedule s WITH (READUNCOMMITTED) on s.ScheduleID = sh.ScheduleID
    where s.RetainHistoryNum <> -1) a
where rn > RetainHistoryNum

-- Delete one row at a time so each statement holds only a narrow lock,
-- the same approach Microsoft used for DeleteExpiredSessions.
DECLARE ESC CURSOR LOCAL FORWARD_ONLY READ_ONLY
FOR SELECT ID FROM #T

declare @ID int

open ESC

fetch next from ESC into @ID

WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM ScheduleHistory WHERE ScheduleHistoryID = @ID
    FETCH NEXT FROM ESC INTO @ID
END

CLOSE ESC
DEALLOCATE ESC

drop table #T

Thanks to David Eison for finding this solution.


Categories: SQL Server | ASP.NET | DotNetNuke
Posted by Eric Stoll on Wednesday, February 16, 2011 1:13 AM

Digging through Event logs

I haven’t found a tool that I love that parses Event Viewer well.

What I do these days is use psloglist from Windows Sysinternals to dump the log to a tab-delimited file, then hack on the strings in Excel using IFERROR, SEARCH, RIGHT, and LEFT until I have decently representative strings, and then sort and subtotal.

Also, today I needed to read the DNN error table while DNN was not behaving well. DNN writes XML to the database; the easiest thing to do is cast it to XML with

cast(LogProperties as XML)

Dump that into a temp table, then process the XML using SQL Server’s XML query() method:

-- Pull the exception rows once, casting the stored XML so it can be queried.
select cast(LogProperties as XML) as props, LogTypeKey, LogGUID, LogCreateDate
into #tmptable
from EventLog with(nolock)
where LogTypeKey = 'GENERAL_EXCEPTION'

-- Group the extracted messages to see which exceptions occur most often.
select count(*) as cnt, a.msg
from
(select cast(props.query('LogProperties/LogProperty/PropertyName[text()="Message"]/../PropertyValue/text()') as nvarchar(MAX)) as msg
  from #tmptable) a
group by a.msg
order by count(*) desc

-- Keep the extracted message alongside the full row for ad hoc digging.
select cast(props.query('LogProperties/LogProperty/PropertyName[text()="Message"]/../PropertyValue/text()') as nvarchar(MAX)) as msg, *
  into #tmp2
  from #tmptable

select * from #tmp2 where msg like '%[interesting keyword]%' order by LogCreateDate desc

I test out XPath expressions using XPath Visualizer.


Posted by David Eison on Monday, February 7, 2011 7:12 PM

CRM2011 CrmOnline CrmSvcUtil

This post is based on the CRM 2011 SDK. If you do not have a Windows Live DeviceID and device password, you will need to build and run CreateCRM2011Device. When run, it creates an XML file called LiveDevice.XML in C:\Users\your_username\LiveDeviceID that contains your DeviceID and device password. If you are unsure whether you already have a DeviceID and password, check for this file first. You will also need to install Windows Identity Foundation (WIF).

After you have created your DeviceID and Password, I recommend creating "CrmSvcUtil.exe.config" containing:

<configuration>
  <appSettings>
    <add key="deviceid" value="your_Device_ID" />
    <add key="devicepassword" value="your_Device_Password" />
  </appSettings>
  <system.diagnostics>
    <trace autoflush="false" indentsize="4">
      <listeners>
        <add name="configConsoleListener"
             type="System.Diagnostics.ConsoleTraceListener">
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

This config file will hold your DeviceID and Password and enable tracing for easier troubleshooting, should any errors occur.

If you try to run CrmSvcUtil the way the SDK shows, it will not work. There are switches the SDK says to use that are not valid; remove them: /disco, /partner, /env, and /org. The only switches needed to accomplish the task at hand are /url, /out, /username, and /password. With that in mind, your command should look like:

CrmSvcUtil.exe /out:Code.cs /url:https://Your_CRMOnline_URL/XRMServices/2011/Organization.svc "/username:Windows_Live_Username" "/password:Windows_Live_Password"

NOTE: The URL must look exactly as above. It cannot end with "/" or "/$metadata", and it should point to Organization.svc, not OrganizationData.svc.
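
Once the generated Code.cs is in your project, the early-bound types take the guesswork out of attribute names. Here is a hypothetical sketch of consuming them, assuming you also passed the optional /serviceContextName:XrmContext switch and already have an IOrganizationService connection in hand:

using System;
using System.Linq;
using Microsoft.Xrm.Sdk;

class Example
{
    static void ListAccounts(IOrganizationService service)
    {
        // XrmContext is the OrganizationServiceContext subclass CrmSvcUtil generates.
        using (var context = new XrmContext(service))
        {
            var names = from a in context.AccountSet
                        select a.Name;
            foreach (string name in names)
            {
                Console.WriteLine(name);
            }
        }
    }
}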


Posted by Michael Casciano on Thursday, January 6, 2011 11:33 AM

HTTP: PUT vs POST

PUT doesn’t come up much in plain HTML work; pages and forms pretend that everything is a GET or a POST. But once you are working with web services, your choices are:

SOAP – send everything as an HTTP POST. HTTP is used mainly to dodge firewalls and provides little real benefit. The protocol was intentionally designed to be too complex for humans, leading to dependence on tools.

REST – use verbs applied to nouns.  HTTP has other useful verbs besides GET and POST. 

Unfortunately, PUT is conceptually similar to POST, so the two are easy to confuse. I thought I’d collect the two key differences in one spot:

a) PUT is idempotent. Repeating the same operation should get the same result; side effects are allowed, but they should be the same for repeated requests. POST is not idempotent: repeating the same operation may yield different results.

b) PUT contains the actual item to store at the specified resource. POST provides an item for the specified resource to work with.

For a practical example, POST would be used to tell an account to add a new note to itself, while PUT would be used to edit the contents of a note. PUT can’t do an “append” operation: if you POST your ‘add a note’ twice you have two notes; if you PUT the same note twice you still have just the one note in the end.

I think it’s really important to consider both of these things together – the side effects are different, and the entity you specify is different. PUT says “this resource is this data”; POST says “this resource should work with this data”.
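
To make the contrast concrete, here’s a sketch in C# against a made-up notes API (the URLs and server are hypothetical; HttpWebRequest just stands in for whatever client you use):

using System;
using System.IO;
using System.Net;
using System.Text;

class PutVsPost
{
    static void Send(string method, string url, string body)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = method;
        request.ContentType = "text/plain";
        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("{0} {1} -> {2}", method, url, (int)response.StatusCode);
        }
    }

    static void Main()
    {
        // POST to the collection: run this twice and the account has two notes.
        Send("POST", "http://example.com/accounts/42/notes", "Call back Tuesday");
        Send("POST", "http://example.com/accounts/42/notes", "Call back Tuesday");

        // PUT to one specific resource: run it twice and note 7 still exists
        // exactly once, with the same contents - idempotent.
        Send("PUT", "http://example.com/accounts/42/notes/7", "Call back Wednesday");
        Send("PUT", "http://example.com/accounts/42/notes/7", "Call back Wednesday");
    }
}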

See http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html for a primary source.


Posted by David Eison on Wednesday, December 8, 2010 2:40 AM

CRM entity browser appears unreliable

CRM has a nice entity browser that is supposed to show you information about your entities, such as what fields exist and how long they can be. You go to your server, add /sdk/list.aspx to the URL, and it shows you what entities you have and what their attributes are.

I used to rely on it because, as a programmer, it’s the quickest and most convenient way to get the length information and the ‘valid for update’ information about fields. Unfortunately, it’s apparently unreliable.

The screenshot below is from a client site. They have only one CRM organization configured; yes, I’m pointing at the right server in both places, and yes, their contact entity is published. But the entity browser does not show that the fields have had their sizes reduced (to better fit on mailing labels), and if I relied on it when writing code to work with addresses, I’d get it all wrong.

[Screenshot: the entity browser reporting stale field sizes for the contact entity]


Posted by David Eison on Friday, December 3, 2010 12:16 PM

Loading a CRM page from a post-build event

CRM uses NTLM authentication, so you can’t just pull down a page using any simple thing.

Luckily, “curl” is a nice command-line tool for loading webpages that supports NTLM.

The only tricky part is that not all builds support the “--ntlm” and ”-u :” features you need for NTLM to work. The Win32 Generic 7.21.2 binary build at http://curl.haxx.se/download.html works for me (the MSVC build supports NTLM, but “-u :” silently doesn’t work; you can tell it failed because you get back a 401).

So I now put a copy of curl in my project’s references folder and have added this to the post-build events that bounce IIS:

SETLOCAL ENABLEDELAYEDEXPANSION
set URL=http://localcrm/orgname/loader.aspx

. . .

set loop=0
:TRYCURL
rem Load front page to get app pool started up again
set /a loop=%loop%+1
echo Loading site to initialize app pool %URL%
rem curl should use --ntlm -u : to pull user from the environment.
"$(SolutionDir)\references\curl\curl" --user-agent "Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)" --location --location-trusted --ntlm -u : --silent --show-error -w"%%{http_code}" "%URL%"  > NUL
if !ERRORLEVEL! NEQ 0 GOTO CURLFAIL
goto DONE
:CURLFAIL
if %LOOP% LEQ 4 GOTO TRYCURLSLEEP
goto FAIL
:TRYCURLSLEEP
echo sleeping before retry
sleep 1
goto TRYCURL
:DONE
echo OK at %TIME%
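
If you’d rather not depend on a particular curl build at all, the same warm-up can be done in a few lines of C# (a sketch, assuming the build runs as a Windows account with rights to the CRM site):

using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://localcrm/orgname/loader.aspx");
        // DefaultCredentials sends the current Windows identity, so NTLM just works.
        request.Credentials = CredentialCache.DefaultCredentials;
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Warmed up, status {0}", (int)response.StatusCode);
        }
    }
}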


Posted by David Eison on Sunday, November 28, 2010 2:14 AM

CRM API – Picklist details

Just thought I’d share a bug I ran into, in the hope that it helps you avoid it.

I see plenty of code that sets picklist values. You might do this in JavaScript or through the CRM API: set a new value, submit the update, and the picklist value is changed.

But it’s easy to miss that a picklist has two fields: a name and a value. The value is something boring like 2; the name is something to show the user, like “Critical”.

You would think that writing code like this was great:

Picklist prop = source.Properties[attribute] as Picklist;
if (prop == null || prop.IsNull)
{
    return defaultvalue;
}
return prop.name;

However, a problem crops up when dealing with client code written by people who didn’t realize the name field is important. If you put that code into a plugin, you are handed exactly the name and value that the API client specified – so you can run into JavaScript code that only changed the value, or other CRM API code that only changed the value, and the name can be either completely unset or, worse, set to a stale previous value.

So, when writing server-side code that handles data submitted by the client, it looks like you’ll need to trust only the value and ignore the name. You could try auditing your client code to make sure that everywhere a new value is set, a new name is set too... but one day somebody will copy some code in from another project, and your picklist handling will be wrong.
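
Here is the earlier snippet rewritten defensively: trust only the integer Value, and resolve the label server-side. GetPicklistLabel and entityName are hypothetical stand-ins for however you look labels up (for example, via the metadata service):

// Trust only prop.Value; clients frequently leave prop.name stale or unset.
Picklist prop = source.Properties[attribute] as Picklist;
if (prop == null || prop.IsNull)
{
    return defaultvalue;
}
// GetPicklistLabel is a hypothetical helper that maps a picklist value to
// its display label using the attribute metadata, never the client's name.
return GetPicklistLabel(entityName, attribute, prop.Value);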

Happy Thanksgiving!


Posted by David Eison on Saturday, November 27, 2010 11:57 PM

CRM and ViewState

Microsoft Dynamics CRM 4.0 doesn’t use ViewState or Sessions; indeed, they are disabled in its web.config file. This can be a bit of a surprise to an ASP.NET developer working on a custom page in the /ISV directory.

Three possible approaches to ViewState:

1) Don’t use ViewState. 

This requires re-initializing any data with every postback. One thing you can do is minimize postbacks; you’ll see CRM does this in many places by encouraging the use of JavaScript and by popping extra windows to handle immediate-response tasks like filling in or validating a lookup field. If you are displaying a grid, it will be empty after a postback because it wasn’t repopulated from viewstate, so you need to repopulate it on every request – which means you also need to deal with the possibility that values have changed because another user edited records in the meantime. Some simple strategies to start with: refer to records by GUIDs instead of row numbers, minimize updates to only the fields your user actually changed, and code defensively.

2) You can enable viewstate for a particular page by adding it to the page directive at the top of the page:

<%@ Page . . . EnableViewState="true" . . .

Note that if you have a server cluster and use viewstate, you’ll either need to set a machine key in web.config or else disable viewstate validation with another Page directive:

<%@ Page . . . EnableViewState="true" EnableViewStateMac="false" %>

Disabling ViewStateMac means your users will be able to tamper with the viewstate, so keep that in mind if your app has custom permissioning rules beyond CRM’s built-in permissioning.

3) You can enable viewstate for all of your ISV pages by setting it up as its own app. 

Create a virtual directory under /ISV, point it to your pages, and give yourself a web.config that sets enableviewstate for your pages (and a machine key).  See, for example, this guide at xrmlinq.

Remember that if you’re going to use viewstate, you should keep an eye on viewstate size; you probably don’t want to transfer a megabyte of serialized grid data with every page load. You can see how big viewstate is by viewing source on your page, or by adding some code to your pages (if Request.IsLocal is true and DEBUG is defined, I tack on a label with the size from LosFormatter; see the example code at scottonwriting). In general, if you use viewstate on a grid or listbox, you’re going to be storing a lot of data. If you’d rather repopulate your grid or listbox with every request instead of serializing their data, simply databind them before viewstate starts being tracked – during Init instead of during Load. See the ASP.NET Page Lifecycle.
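
For example, a minimal sketch of binding during Init, assuming a GridView named resultsGrid and a hypothetical LoadRecords() data-access call:

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // Bound here, before viewstate tracking starts, the rows are never
    // serialized into __VIEWSTATE; the grid is simply rebuilt each request.
    resultsGrid.DataSource = LoadRecords();
    resultsGrid.DataBind();
}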

As for Sessions, I recommend avoiding them. You’ll probably have some real headaches if you rely on sessions and then need to support the offline client, and it can be hard to anticipate (and test for) how sessions are affected by one user popping several windows open. In general, if data is ephemeral it can be handled well by viewstate, and if data is not ephemeral, you probably want to store it in a database. So pass on sessions.


Posted by David Eison on Tuesday, October 19, 2010 3:04 PM

Sitecore: Setting and Customizing the Rich Text Field Editor

This is well documented in the Sitecore documentation and elsewhere, but I always forget where, so this post is definitely for my own benefit (and for anyone else like me who doesn’t know where to look right away).

There are two ways of changing the rich text editor that I’ll cover, starting with the easier one:

1. Setting the source property for the rich text field on your template

[Screenshot: setting the Source property on the template’s rich text field]

I was pretty thrilled to learn this: just go into the Content Editor or Template Manager, open the template with the rich text field you want to change, and set one of the following paths as the field’s Source.

Rich Text Default:

/sitecore/system/Settings/Html Editor Profiles/Rich Text Default

This is the default (shocking!), and the control portion looks like the following:
[Screenshot: Rich Text Default toolbar]

Rich Text Full:

/sitecore/system/Settings/Html Editor Profiles/Rich Text Full

This is a much more filled-out editor, shown below:

[Screenshot: Rich Text Full toolbar]

Rich Text Medium:

/sitecore/system/Settings/Html Editor Profiles/Rich Text Medium

This is the middle-of-the-road editor: more than the default and less than the Full version.

[Screenshot: Rich Text Medium toolbar]

Our next method of changing the rich text editor comes in when the profiles above (or the ones not listed: IDE, Mail) do not meet your needs, or when they almost do but need adjusting.

2. Modifying the default rich text editor

To do this you need to switch to the Core database: at the bottom right of the Desktop view is a little grey icon; click it and choose Core from the popup.

Once the screen refreshes, open the Content Editor and browse to the Html Editor Profiles (the paths above are where we are going). If you want to change the properties of the default editor for all rich text fields, copy the Rich Text Default item and rename the copy (just in case!); then you can pick and choose items from the other profiles – just copy them over into the Rich Text Default item.

[Screenshot: profile folders] As seen in the screenshot, there are a number of folders and then sets of toolbars. The toolbars are where the magic is, and the folders contain data that can be displayed and modified.

For example, you can change which inline styles are available by selecting the Inline Styles item and then changing its children (or adding new children for your own custom styles).

If you want to add inline styles to your Rich Text Default, do the following: copy the Inline Styles folder to Rich Text Default, and then add the CSS drop-down (called Css Class, found in Toolbar 3 of the Full profile) to a toolbar in Rich Text Default.

Each toolbar is displayed with a solid line around it, or on a new row of the edit screen, and you can add, copy, modify, or rearrange items as you wish, just like any other set of items. They can be deleted as well, so it’s always a good idea to keep a copy of the profile you’re editing in case something goes wrong.

I hope that helps anyone else who has wanted to make changes to the default rich text editor and wasn’t sure where to begin!


Posted from: Amy’s Sitecore Adventures!


Categories: Sitecore
Posted by Amy Winburn on Wednesday, October 13, 2010 1:11 PM