Arke Systems Blog

Useful technical and business information straight from Arke.
CRM 4.0 Deletion Service

Today I had an issue where I had deleted a number of records from CRM during my load testing through Scribe.  After deleting these records, CRM marked them with deletestatecode = 2 rather than physically removing them, so every subsequent Scribe job failed because the address records still existed in the database.  Those of you used to CRM 3.0 may remember the MSCRMDeletionService: you could restart it from your running services and it would force the cleanup job to run through all your records.  In CRM 4.0 that job is wrapped into the Async Service, which you can likewise restart.  For those of you who don't/can't restart that service for any reason, try this:

USE MSCRM_CONFIG

UPDATE dbo.ScaleGroupOrganizationMaintenanceJobs
SET NextRunTime = getdate() -- now
WHERE OperationType = 14    -- 14 = the deletion service job

This was found on an old archive site, posted by Aaron Elder.  Hope this helps others find it.  Be patient; I fumed for a few minutes thinking it wasn't working, but just give it some time. ;)
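(If restarting the service is an option, that is the simpler route.  Assuming the default CRM 4.0 service name, from an elevated command prompt:

net stop MSCRMAsyncService
net start MSCRMAsyncService

The SQL above is just the fallback for when you can't bounce the service.)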

 

When all else fails...

There are other ways that go directly to the database, but I cannot stress enough how bad an idea that is and how much it should never be done.  Exhaust every option CRM provides before going to the database and modifying it by hand.  'Because it would be faster' should never be a good enough excuse!!  If you have no way to use CRM's own abilities and must do it by hand, something went horribly wrong and needs to be re-evaluated immediately after you clean up your data.

 

 

You're still reading... *sigh* OK, but I warned you.

CRM gurus, correct me if I'm wrong, but as I understand it at this moment, every entity in CRM has two tables:

  • [YourEntity]Base - holds all the default out-of-the-box attributes of your entity
  • [YourEntity]ExtensionBase - holds all your 'new_' custom attributes for your entity

So if you have a 'Contact' with a 'new_foo' attribute and someone adds a new 'Contact' to your CRM instance:

  • 1 row is added to your 'ContactBase' table - holds address1_line1, name, firstname, etc. (all defaults)
  • 1 row is added to your 'ContactExtensionBase' table - holds new_foo (all custom)

They BOTH have the same ContactId because they ARE the same contact.  So if you have to delete at the database level, you MUST delete from the extensionbase table first and foremost.

The smartest thing to do is to first find out exactly which contacts you need to delete.  Whether it's by ID or a certain field state, get a select statement as correct and accurate as possible.  Once you have that select, I like to capture those records in a temp table, though this is purely optional.  The reason: sometimes the field I'm searching on is a 'new_' attribute, say where 'new_foo' is null.  If I run my delete on extensionbase where 'new_foo' is null, I've just shot off my own foot - I deleted all the rows that had any knowledge that 'new_foo' existed, so how do I know which rows to clear out of the base table?  With a temp table that includes the ContactId, I always have a reference to exactly which rows still need to be deleted.

As always, BACK UP before attempting this.  Run your delete on extensionbase and write down the number of rows deleted.  Run your delete on base and make sure the numbers match.  If they don't match: panic, restore your database, and refine your search until those numbers do match.  Then re-evaluate your process and NEVER DO IT AGAIN!
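Putting that together, here is a minimal sketch of the whole process, assuming a Contact cleanup keyed off a hypothetical 'new_foo' attribute (the table names and the search condition are illustrative - adjust them for your entity and your actual criteria):

-- 1. Capture the targets first, WITH their IDs, so you keep a reference
--    even after the extension rows (and new_foo itself) are gone
SELECT ContactId
INTO #ToDelete
FROM ContactExtensionBase
WHERE new_foo IS NULL -- hypothetical search condition

-- 2. Delete from the extension table first; note the rowcount
DELETE FROM ContactExtensionBase
WHERE ContactId IN (SELECT ContactId FROM #ToDelete)
SELECT @@ROWCOUNT AS ExtensionRowsDeleted

-- 3. Then delete from the base table; this count MUST match step 2
DELETE FROM ContactBase
WHERE ContactId IN (SELECT ContactId FROM #ToDelete)
SELECT @@ROWCOUNT AS BaseRowsDeleted

DROP TABLE #ToDelete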

 


Categories: CRM
Posted by Nicole Rodriguez on Thursday, June 16, 2011 2:19 PM

CRM 2011 Claims Based Authentication and CrmSvcUtil.exe

The crmsvcutil program has some lousy code in it that breaks when you are using claims-based authentication (which is used for Internet Facing Deployments - IFD).

This error shows up as:

" Exiting program with exception: The logon attempt failed

Enable tracing and view the trace files for more information."

The fine folks at adxstudio have provided a dll that you can use to work around the problem.  http://community.adxstudio.com/Default.aspx?DN=9a7499fb-4e9a-408c-8096-6d658f9509a2

This is specifically for the 5.0.3 version of the SDK, which is the one currently available today at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=420f0f05-c226-4194-b7e1-f23ceaa83b69

Presumably MS will fix this in the next SDK release.

Put the dll into the bin directory that has your crmsvcutil, and then add to your command line:

/metadataproviderservice:"MetadataProvider.IfdMetadataProviderService, MetadataProvider" /codecustomization:"Microsoft.Xrm.Client.CodeGeneration.CodeCustomization, Microsoft.Xrm.Client.CodeGeneration"

For example, a working command line is:

C:\projects\crm2011\sdk5.0.3\sdk\bin>crmsvcutil /url:https://CRM.YOURCOMPANY.COM/xrmservices/2011/organization.svc /out:xrm.cs /u:[username] /p:"[passwordgoeshere]" /metadataproviderservice:"MetadataProvider.IfdMetadataProviderService, MetadataProvider" /codecustomization:"Microsoft.Xrm.Client.CodeGeneration.CodeCustomization, Microsoft.Xrm.Client.CodeGeneration"

The root cause is that the MS crmsvcutil includes this code:

if (orgServiceConfig.AuthenticationType != AuthenticationProviderType.LiveId)

which doesn't account for IFD using AuthenticationProviderType.Federation.

(bug found in http://social.microsoft.com/Forums/en/crmdevelopment/thread/10bfc1ca-3cac-46d9-99b5-8f997e4b1ec9?prof=required)


Posted by Eric Stoll on Monday, May 23, 2011 1:43 AM

CRM 4.0 Bulk Delete Tool

I found this tool out on the web.  It's a bulk delete tool for CRM 4.0, which doesn't have a built-in bulk delete option.  (As far as I've been told, CRM 4.0 Online does, but on-premise CRM 4.0 does not.)  Anyway, you run the executable and it gives you a very simple UI.  All you need to do is create a view for the entity containing all the records you want to bulk delete.

 

1. Run the executable.

2. The bottom corner shows you're not connected.  Click the dropdown and choose 'Create a new Connection…'.

3. You will get the connection information screen.

4. Give it all your information and click 'OK'.

5. You'll now see you're connected at the bottom of your screen.  In the future, that connection will be available to you in the same small dropdown.

6. NOTE: for this to work as you want it to, you need to create a view on that entity for the records you want to delete.  If this is going to be routine, a CRM job plugin might be better, but we've had several instances where we loaded a CRM instance with test data and wanted to dump it all, so here's a fun tool for it.  Create your view.  (I'm assuming here you know how to do that.)

7. After your view is created, enter your entity in the textbox and click 'Retrieve Views'.  You'll now get a list of those views.

8. You can schedule the job to run in the future or recur but, again, I think a plugin would be more beneficial for that.  You can also have it send you an email after the job is done deleting.  It's not perfect, but it's out there.

 

Enjoy.


Categories: CRM
Posted by Nicole Rodriguez on Tuesday, April 26, 2011 4:54 PM

Sitecore: Getting Started with Breadcrumbs

Posted from Amy’s Sitecore Adventures (a little late)

Breadcrumbs have been covered by just about everyone, so there are lots of examples that all seem to do things a little differently. With that in mind, I'm going to keep this short: my example plus two others I've found that might also meet your needs, which together cover the basics of just about every xslt breadcrumb example you'll find.

The general idea: you're at item c, and in your tree the path is something like /sitecore/content/a/b/c; you want to display a pretty html list for a » b » c anywhere on your site.  You'll always be dealing with the ancestors of your current item, so you'll be making use of $sc_currentitem/ancestor-or-self::… somewhere.

You'll need to go through each ancestor item and display it, probably checking whether you're at the last item so you don't display a '»' after the final one. You also need to make sure not to display unwanted ancestors in your breadcrumb - in our case /sitecore and /content.

So onto the examples!

First is the Sitecore breadcrumb xslt example: http://sdn.sitecore.net/Articles/XSL/Breadcrumb%20Example.aspx
You will need a login to sdn to view this, but the magic is that it does a for-each across ancestor-or-self::item, then uses an if statement to check that position() > 2 (this skips the /sitecore and /content portion) and to avoid folders, plus the requisite check for position() != last() so that we do not get an extra » after c.

Next up is Brian Pedersen's breadcrumb example: http://briancaos.wordpress.com/2009/02/09/breadcrumb-in-sitecore/
Similar to the above; however, in this case he selects only items that have the appropriate template, and adds a check to make sure the items should appear in the navigation at all:
select="$sc_currentitem/ancestor-or-self::item[sc:IsItemOfType('mypages',.) and sc:fld('ShowInMenu',.)='1']"

I am certain there are many, many more breadcrumb examples out there, but these were the first I could easily find to share.

Finally, my contribution to the breadcrumb party:

  <xsl:template match="*" mode="main">
    <div>
      <ul class="breadcrumb">
        <xsl:variable name="ancestors" select="$sc_currentitem/ancestor-or-self::item[ancestor-or-self::item/@template='home page']" />
        <xsl:for-each select="$ancestors">
          <li>
            <xsl:choose>
              <xsl:when test="position()=last()">
                <sc:text field="Breadcrumb Title" />
              </xsl:when>
              <xsl:otherwise>
                <sc:link>
                  <sc:text field="Breadcrumb Title" />
                </sc:link>  »
              </xsl:otherwise>
            </xsl:choose>
          </li>
        </xsl:for-each>
      </ul>
    </div>
  </xsl:template>

To satisfy the condition of not showing /sitecore or /content, I only grab ancestors (or self) that have an ancestor (or are themselves the item) with the special template for the Home item. This excludes anything above the home item, but includes the home item itself so that it can be listed. And we end up with a » b » c.

Hope this can be helpful if you are getting started with breadcrumbs!


Categories: Sitecore
Posted by Amy Winburn on Tuesday, March 22, 2011 4:32 PM

Sitecore: Using the Source Property

For each of the fields in your template you can set a source. This isn't always used, but it can improve the user experience drastically when done throughout a site. The source comes into play whenever you are using any of the following field types: Droplink, Droplist, Droptree, File, Grouped Droplink, Grouped Droplist, Image, Multilist, Treelist, Rich Text, and a number of others.

There are various ways of setting these up to achieve different results, but in general you are using the source to limit the set of items that can be used, and this requirement can also help you determine what kind of field to use. For example, if you have a set of items split into subfolders and want the content editor to make use of the tree, you could use a Treelist or Droptree; but if you just want a flat set of items without showing where those items live, multilists or droplinks are the way to go. For Images you're generally just specifying where in the media library to look for (and put) the images, and for Rich Text fields the source determines the type of editor to use (if not the default).

There are a number of options for setting the source property but not all can work with every field.

Setting the Root Node: just give the full path to the item you want to use as the 'root'. This works with just about every field that pulls options, and you can easily grab the path from the content pane when you select your desired item (look at the Item Path). E.g., if you have a treelist and you only want to show the Categories item and its children, you'd put in the path to that item: /sitecore/content/Data/Categories

Sitecore Query: This applies to a smaller set of fields (the List fields), but gives a lot of power and is what I'll mainly be going over. The fields you can use this with are: Checklist, Droplink, Droplist, Grouped Droplist, Grouped Droplink, and Multilist. You can also use fast query: it has some limitations compared to regular Sitecore query, but offers better performance and lower memory use.

Treelists are also a little special and can use parameters (I think some other fields work with this as well, but I most often end up using queries on other fields, so I don't have a ton of experience with it). You can specify the root item, which templates/items to display or ignore, and which they can select. For example:

DataSource=/Sitecore/Content/Home/Root/Node&IncludeTemplatesForSelection=desiredTemplate1,desiredTemplate2&ExcludeTemplatesForDisplay=secretFolder&ExcludeItemsForDisplay=secretItem&AllowMultipleSelection=true

The above sets the root node to /Sitecore/Content/Home/Root/Node and allows the user to select items using desiredTemplate1 or desiredTemplate2; it also excludes secretFolder and secretItem from showing up in the list, and allows the user to choose the same item more than once.

More on Sitecore Query

Using the query option instead of simply setting a root node improves the experience for your clients and helps keep your data accurate. If you don't want them to be able to pick a certain template or value, or need to select something dependent on a specific axis, Sitecore query will make it possible.

In the source field, your queries need to begin with query: followed by your query. I'm going to go over a few examples I've found useful, but for a more detailed explanation take a look at the Data Definition Reference; there is a Sitecore Query section that explains all the details!

So, a few useful tidbits:

* grabs all the children of a node: query:/sitecore/content/home/dessert/*  <--  the * grabs every child, and / followed by text denotes the exact item name you're looking for. You can also mix this up a bit: query:/sitecore/home/*/pie/* <-- this grabs all the children of any pie item that is a grandchild of home, whatever its parent.

. references the context item, which is handy if you need to find ancestors, children, or anything else along the axes (if you need just the parent, use .. ): query:./pies/* <-- if we were on the dessert item, this would grab all the grandchildren whose parent is pies.

// is the descendant axis. This should be used very sparingly, but can be done the following way: query:/sitecore/content/home//pie/* <-- this grabs all the items with a parent pie anywhere under home, so it includes home/pie/*, home/anything/pie/*, and so on. The fear with the descendant axis is that you'll return a whole section or even the whole tree, which might be thousands of items, so when considering it (or any query) be mindful of the result set you will get.

@ denotes a field, and @@ denotes an xml attribute of the item; you'll probably mainly use @@templatename or @@templateid.

For example: query:/sitecore/content/home/dessert/*[@pastry='1'] <-- this grabs all the items under dessert which have the 'pastry' checkbox checked (so it might return pies).

Sitecore query also supports the xpath axes, allowing you to use things like ancestor-or-self, following and preceding (siblings), and so on:

query:./ancestor-or-self::*[@@templatename='Site']/Data/Touts/*

For a more practical example in a multisite solution: the above takes the current item, finds the ancestor (or self) that represents the top 'site' item, then finds its Data folder and grabs all the children of the Touts folder.

Logical operators can be used to combine options as well, so we could look for query:./*[@@templatename='template1' or @@templatename='template2'], or something more like query:./pies/*|./cakes/* <-- this gives the children of both pies and cakes instead of choosing one or the other.

There are also functions you can use (here's a link with a listing of a number of useful functions); primarily I end up using position(), last(), and contains().

To use them, you’d do something like query:./*[position()=1] <-- grabs the first item

query:./*[position()=last()] <-- grabs the last item

query:./*[contains(@ingredients,'apple')] <-- grabs the items with 'apple' in their ingredients field; this could also be written as query:./*[@ingredients = '%apple%']

To test out any query, you can always open up the Developer Center and then open the XPath Builder. You do not need query: before your query there, but you do need to include fast: if you want to use fast query.

These queries can become pretty complex depending on your needs, but that initial work can leave content editors with a very easy-to-use, understandable set of items and fields. Leave a comment if you know of any good query examples or are wondering how to form a query to meet your needs (I'll try to help)!


Categories: Sitecore
Posted by Amy Winburn on Tuesday, March 22, 2011 4:27 PM

The law of leaky abstractions and Reddit’s experience with the cloud

Reddit had six hours of downtime caused by running a database on Amazon's cloud disk storage product (EBS, the Elastic Block Store); EBS is both unreliable and doesn't flush writes to disk when told to.  This led to corrupted data and disagreements between the master database and slave databases, leaving the slaves unusable while the master was down.

Modern SQL databases are not written to work right on hardware that lacks a reliable flush-to-disk operation. Correct functioning of a modern database absolutely requires that "write back" caching can be shut off, so that if a disk reports a write succeeded, the write actually succeeded.  See for example http://www.postgresql.org/docs/current/static/wal-reliability.html ; this is not at all unique to postgres - I believe every sql database requires committed writes to actually commit to disk.
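For concreteness, these are the sort of settings involved on the database side (a sketch based on the linked postgres reliability page; note they only protect you if the underlying storage actually honors the flush):

# postgresql.conf - durability settings
fsync = on                # flush WAL writes to disk at commit; safe only if the disk honors it
synchronous_commit = on   # don't report success to the client until the WAL flush returns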

The law of leaky abstractions says that as we virtualize more, we build things that appear to work but don't actually work the way we think they do - and Murphy says the difference will hit us with public downtime.

Row ID #770 - Bob submits an article about puppies.  Master says it commits, so the data is sent to the slaves.  Master lied: the data was actually sitting in a cache somewhere, and the write later fails on the master - but succeeded on the slaves.

Row ID #770 - John submits an article about kittens.  Master now has kittens, slaves have puppies; dogs and cats living together, mass hysteria; Reddit is down for 6 hours migrating master data to new hardware and manually hacking up rebuilt slave tables.

I don't know how this works out with nosql systems and eventual consistency.  Is Cassandra OK to run on EBS disks while postgres is not?  Reddit says their solution is to move to local ec2 disks; is that actually a solution, or does it just make the problem less likely because ec2 local disks are more reliable than EBS?  Do they still do write-back caching?

Meanwhile, Netflix has pointed out that they have moved most of their functionality into the cloud.  Specifically, most everything that scales with customers and streaming usage is now served from the cloud (although movies come from CDNs, not Amazon's EC2).

Netflix has posted some really interesting information about the testing they did on EC2: http://perfcap.blogspot.com/2011/03/understanding-and-using-amazon-ebs.html

And their lessons learned are a great place to start when considering working in the cloud at scale: Netflix's "5 Lessons We've Learned Using AWS"

The upshot is, scaling by working in the cloud brings a whole new set of challenges.  You have to invest more in writing your software to handle hardware failure, you have to test failure scenarios more, you may have to go so far as to redesign network protocols to be less chatty because shared systems give you unpredictable latency, and you have to expect problems when abstractions leak as layers of complexity are added to what used to be a simple operation like "write this to disk".  If your hardware costs from scaling exceed your software development costs, or if you truly need to handle rapid customer growth faster than you can expand a traditional data center, it can make a lot of sense to tackle these challenges.  But it's not a no-brainer, no-effort proposition - development and testing get harder as you switch to using a larger quantity of less reliable resources.

Reddit's explanation:

http://blog.reddit.com/2011/03/why-reddit-was-down-for-6-of-last-24.html

Netflix uses simpledb, hadoop, and cassandra.

http://nosql.mypopescu.com/post/2981945438/why-netflix-picked-amazon-simpledb-hadoop-hbase-and

http://techblog.netflix.com/2011/01/nosql-at-netflix.html

-David


Posted by David Eison on Sunday, March 20, 2011 2:16 PM

Windows 7 blocks files from external source

A while back I wrote to ArkeGlobal about Windows 7 blocking my files.  It really belongs here, so here it is with some updates.

 Original Sept 10, 2009

I downloaded an external dll for a web application, and adding the reference to my project yielded this error:

"System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed."

As it turns out, Windows 7 security was blocking my downloaded dlls from running as a trusted source.  Going into the file system itself, I checked the properties of the actual dll file, and a message at the bottom said:

“This file came from another computer and might be blocked to help protect this computer.”

There is an 'Unblock' button right next to it; once I clicked it, my issue was solved.

 

Updated Oct 27, 2009

Since then, I ran into a similar issue, but because of the way I had to build the project, every run re-applied that block to the dlls, so I found this: http://www.petri.co.il/unblock-files-windows-vista.htm

Look at solution 3 and/or 4.
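(For context on what those solutions are doing: the 'block' is just an NTFS alternate data stream named Zone.Identifier attached to each downloaded file.  Assuming you're comfortable with a command-line tool, Sysinternals' streams.exe can strip the streams from a whole directory tree in one shot:

streams.exe -s -d C:\path\to\downloaded\files

where -s recurses subdirectories and -d deletes the streams.)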

 Updated Mar 2011

This last encounter is for those of you with large - and I do mean large - sets of files to copy down.  I had a particular instance where I had to download an upgrade zip file for DNN to lay over a pre-existing install.  When I downloaded it and extracted all of its contents to the client's machine, I had some blocked files.  I grumbled and unblocked one, tried again, and got the same error.  Then I realized Windows had blocked *every* file inside that zip.  I immediately started googling, and every single article basically said 'Windows does not provide a way to unblock in bulk'.  So I stared at over 1700 files I would have to right-click and unblock by hand, with no one providing a solution.  I was about an hour in when I had an idea:

"Right-click -> Properties" on the ZIP folder

It has the same 'Unblock' feature that all of its children inherit.  Once I unblocked the zip and extracted it, all my files played nice again.

 


Posted by Nicole Rodriguez on Thursday, March 17, 2011 2:50 PM

DNN Event Viewer times out

Continuing in our theme of adjusting DNN sprocs…

The DNN Event Viewer first runs a purge sproc, then runs a get log sproc.

If you have a lot of events, the purge sproc is almost certain to time out. 

The get log sproc can benefit from a nolock, but the purge sproc is where we saw most of our trouble.

The purge sproc in DNN 5.5.1:

    ;WITH logcounts AS
    (  
      SELECT 
        LogEventID, 
        LogConfigID, 
        ROW_NUMBER() OVER(PARTITION BY LogConfigID ORDER BY LogCreateDate DESC) AS logEventSequence
      FROM dbo.EventLog with(NOLOCK)
    )
    DELETE dbo.EventLog 
    FROM dbo.EventLog el 
        JOIN logcounts lc ON el.LogEventID = lc.LogEventID
        INNER JOIN dbo.EventLogConfig elc ON elc.ID = lc.LogConfigID
    WHERE elc.KeepMostRecent <> -1
        AND lc.logEventSequence > elc.KeepMostRecent 

 

This was failing for a few of our clients.  They would end up with, say, 65k records in the event log table, and the purge would never complete.

The version below should lock fewer rows, delete in 1000-record chunks, and put an upper bound on how many records it will tackle at once.  This stopped the purge from failing for our client:

ALTER PROCEDURE [dbo].[PurgeEventLog]
AS
SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW

-- temp table of the rows we plan to delete, keyed by LogGUID
create table #TLog (LogGUID uniqueidentifier not null primary key, LogCreateDate datetime)

-- rank events per log config (newest first); everything past KeepMostRecent is a candidate
;WITH logcounts AS
(  
  SELECT 
    LogEventID, 
    LogConfigID, 
    ROW_NUMBER() OVER(PARTITION BY LogConfigID ORDER BY LogCreateDate DESC) AS logEventSequence
  FROM dbo.EventLog with(NOLOCK)
)
insert into #TLog
 SELECT LogGUID, LogCreateDate
    FROM dbo.EventLog el with(NOLOCK)
        JOIN logcounts lc with(NOLOCK) ON el.LogEventID = lc.LogEventID
        INNER JOIN dbo.EventLogConfig elc with(NOLOCK) ON elc.ID = lc.LogConfigID
    WHERE elc.KeepMostRecent <> -1
        AND lc.logEventSequence > elc.KeepMostRecent 

declare @intRowCount int
declare @intErrNo int
declare @commiteveryn int
declare @maxloops int

set @commiteveryn=1000
set @intErrNo=0
set @intRowCount=1 -- force first loop
set @maxloops=20

-- delete in chunks of @commiteveryn until nothing is left (or @maxloops is hit)
WHILE @intRowCount > 0 and @maxloops > 0
    BEGIN
        set @maxloops = @maxloops - 1
        BEGIN TRANSACTION
        BEGIN TRY
        DELETE FROM EventLog WHERE LogGuid IN (select top (@commiteveryn) LogGUID from #TLog order by LogCreateDate DESC)
        SELECT @intErrNo = @@ERROR, @intRowCount = @@ROWCOUNT        
        DELETE FROM #TLog WHERE LogGuid IN (select top (@commiteveryn) LogGUID from #TLog order by LogCreateDate DESC)

        commit
        END TRY
        BEGIN CATCH
         rollback;
         set @maxloops=0
        END CATCH
    END

drop table #TLog





GO



Posted by David Eison on Monday, February 28, 2011 2:26 PM

Sitecore: Links as Items Redux!

Previously I posted on how to set up items in your content tree to act as external links to other pages (mainly for use with navigation - for example, if you have a blog hosted elsewhere but still want it listed in the main navigation). However, Ivan Buzyka pointed out some issues with that simple implementation, so I added building a better redirect to my 'to do' list for the blog. The time has come!

Let's pretend we are modifying an existing site. We don't want to change the navigation, so that won't be covered here - we just want to update our layout to work a little more universally. Our new items need to be able to link reliably to an internal item, an external url, or a Media item for display in our navigation. Our template consists of similar things to last time:

Link: General Link

Nav Title: Text -> standard values: $name

In Navigation: Checkbox -> standard values: checked

Create the template and add in standard values with the settings above; now we can create our Layout, which should be assigned to the standard values of the new template.

In my layout is the following (inside the page load):

// requires: using Sitecore.Data.Items; using Sitecore.Data.Fields;
string url;
Item extItem = Sitecore.Context.Item;
LinkField extLink = (LinkField)(extItem.Fields["Link"]);
if (extLink != null)
{
  if (extLink.IsInternal && extLink.TargetItem != null)
  {
    url = Sitecore.Links.LinkManager.GetItemUrl(extLink.TargetItem);
  }
  else if (extLink.IsMediaLink && extLink.TargetItem != null)
  {
    url = Sitecore.StringUtil.EnsurePrefix('/', Sitecore.Resources.Media.MediaManager.GetMediaUrl(extLink.TargetItem));
  }
  else
  {
    url = extLink.Url;
  }
}
else 
{
  Item homeItem = Sitecore.Context.Database.GetItem(Sitecore.Context.Site.StartPath);
  url = Sitecore.Links.LinkManager.GetItemUrl(homeItem);
}
if (!String.IsNullOrEmpty(url))
{
   Response.Redirect(url);
}

To step through it: we set the default to bring the user back to the home page, just in case something goes wrong. From there, we check whether the Link field exists and what type of link it is.

For an internal link, we grab the url of the item itself; for a media link, we grab the url of the Media item to be displayed (a pdf, etc.); and if it's external, we just redirect to the url they specified.

If we stopped here, everything would work great as long as the content was entered appropriately. However, that doesn't always happen, and we'd like to avoid this going boom. To do that we can add a simple validator: open up the content tree within Sitecore, head to your template, expand out its children, and select the Link field item.

Scroll down to Validation in the Data section.

We want to make sure that the Link field is always one of the following: an Internal link, a Media item, or an External link. Also, we want it to have some value.

Within Validation you put the actual pattern you want to validate against, and ValidationText is the message that will appear if the pattern is not met. It will pop up when the user tries to save the item with an improper value.

(Screenshot: the Validation and ValidationText fields filled in with our linktype check and error message.)

Shown above are the Validation entry and our error message. linktype is how we determine what sort of link it is; it is generated automatically when a user selects their link (unless they are editing raw values). Our validation just makes sure the linktype text contains one of the three options (internal, media, or external); as long as one of them matches the text in the raw value of the field, we have a valid link.
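For reference, here is a hedged sketch of what that pair might look like - the raw value of a General Link field is XML containing a linktype attribute, so a pattern along these lines covers the three cases (the exact regex and wording are up to you):

Validation: linktype="(internal|media|external)"
ValidationText: The Link must be an internal link, a media item, or an external link, and must have a value.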

This helps prevent a scenario where the user has used one of the other link options for what should be an external link, which would stop the page from going anywhere.

You can also add in some of the default validation options – I'd recommend adding the Required field validator as well.


Categories: Sitecore
Posted by Amy Winburn on Thursday, February 24, 2011 6:10 PM

Sitecore: Adding your own Icons

Items can be configured to have an icon – and Sitecore provides an extensive list of them. But you may want to add your own for whatever reason. To do this you'll need an image suitable for making into an icon, and the ability to resize that image to the correct sizes (Paint.net, Photoshop, GIMP).

For our example we're making an Arke icon. We'll say our largest icon will be 128x128, so we have a transparent png called logo.png to work with.

The image you want to use should have a transparent background (unless you want the icon to be a square) and ideally should not be overly detailed, as it will be rendered very small.

Next, we need to resize this image to the various sizes and put logo.png into the appropriately named directories. The structure is as follows (if you have several icons, put each of them into every size folder):

  • ArkeIcon
    • 16x16
      • logo.png
    • 24x24
      • logo.png
    • 32x32
      • logo.png
    • 48x48
      • logo.png
    • 128x128
      • logo.png

Zip this all up with the same name as the containing folder: ArkeIcon.zip.

Upload this new zip file to /sitecore/shell/Themes/Standard/ and make sure the permissions are correct for your installation (check the other files, such as Application.zip, for comparison).

Back within the Content Editor: each item has an Icon field, and you can now use your custom icon by entering ArkeIcon/16x16/logo.png.

Once you do this, the new icon will show up in the list of recently used icons as well.

As for adding the new icon set to the list of usable icons - that's a little more tricky, since the list is specified statically (if you know a good way of changing this, please share).

You can, however, modify the existing sets/zips of images, such as the aforementioned Application.zip - just add your image to the appropriate directories and you can use it just like all the other icons!

Posted from: Amy's Sitecore Adventures


Categories: Sitecore
Posted by Amy Winburn on Friday, February 18, 2011 5:52 PM