Blog

First impressions of Alfresco Community 4.0a

The long-awaited Alfresco Community 4.0a (code-named Project Swift) has just been released, so I thought I would take the chance to download it and see what has changed. As Enterprise subscribers, we're eager to see what lies in wait for us in the future.

Installing

We would normally install Alfresco on a headless Linux box, which seems to add a whole lot more complexity to the install process, although in fairness the bods at Alfresco have gone to some lengths in recent releases to make this an almost Windows-wizard-like experience. Installing the Windows version on my Windows 7 x64 laptop was a relatively painless exercise overall. I chose the advanced route in order to see everything that was available and selected all of the components; the only problem I had during the process was an error saying that the component "alfrescowcm" was not available. Once installed, deploying the servers took a good five minutes. Whilst I did appreciate the progress bar telling me that something was happening, I did start to wonder if it had got stuck in an endless, animated-GIF-like loop. A little patience paid off and the install completed without any problems, even going so far as to open my browser at the sign-in page, ready for me to sign in.

First Impressions

When you first sign in to Share after installing, you are met with the usual dashboard-like interface. The only difference is a new welcome section at the top that highlights how to get started on various tasks. Whilst the interface goes further than previous incarnations in providing a Web 2.0-like experience with lots of nice icons, I couldn't help but feel slightly disappointed. Given that this is a major version upgrade, I had expected a bit more.

All the usual functionality is available in Share (document libraries, blogs, wikis, etc.), and it works more or less the same as before.


Sharing Content Module

Sharing Content

At the moment we have one Drupal install to hold all our sites (see previous article). Whilst this allows for easy content sharing, it isn’t scalable in the grand scheme of things, so we’re planning to move to an Aegir hosting system to deploy multiple different sites in an easily managed way. However, our problem is that we then won’t be able to share content between sites, which is one of our primary requirements. Having looked around, there doesn’t seem to be any module that fits our exact requirements, so a new custom module might be the way to go. What follows is a rough outline of how I see it working, but I’m very happy to hear comments and suggestions from others as to how best to tackle it.

Requirements

  • Provide a centralised content store of information.
  • Make that data available to external sources.
  • Allow external sources to query the content available and download content.
  • Maintain a link between the downloaded content and the content store and keep the two in sync.
    1. We need to be able to recognise when content has been changed manually and not overwrite that change when doing the sync.
  • Information download should be both automatic and manual.
    1. Automatic: set up parameters (taxonomy terms?) to download information automatically.
    2. Manual: search for information and download it.
  • BONUS FEATURE: allow for syncing of content on a field-by-field basis, i.e. including/excluding certain fields as required.

Designing the Custom Module

Content Store

This is perhaps the easiest of the requirements to set up. Using CCK we can build content types for all our different types of information. We set up vocabularies and tag the information as required, thereby creating a searchable database of content. We’d also want to use the UUID module so that our content has unique IDs for the purposes of maintaining a link.

Content Store Services

The content store information needs to be made available externally, and the Services module provides an ideal way to do this. We’d need to set up the following services for use by the sync module.

  1. Retrieve taxonomy terms
    1. We want to be able to allow for the filtering of content based on taxonomy term. This may just be a user-friendly way to cut down content, but we may use it to restrict searching of content to predefined areas.
  2. Retrieve content types
    1. We don’t want to download content for which we have no destination, so the resulting list would be filtered depending on which content types are available on the external install.
  3. Retrieve content summary
    1. UUID, title, *content type, *taxonomy terms, created, *updated.
    2. The items with * would be filter terms based on what had been selected in the GUI.
  4. Retrieve nodes
    1. Send a list of UUIDs and get back the content items.

There may be others that we’ll need, but for the time being that should be enough for our base requirements. As to how the data is presented to the receiver, I’m open to suggestions. I’d like to make them REST calls that return JSON, but that may change further down the development process.
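To give a flavour of what one of these services might return, here is a minimal sketch of a "retrieve content summary" callback on the Drupal 6 side. The function name, the filter parameters and the use of the UUID module's uuid_node table are assumptions for illustration; the Services module would take care of serialising the return value to JSON.

// Hypothetical Services callback for "retrieve content summary": given
// optional content type and taxonomy term filters, return the summary fields
// (UUID, title, type, created, updated) for matching published nodes.
// Assumes the UUID module's {uuid_node} table.
function cls_store_content_summary($content_type = NULL, $tid = NULL) {
  $args = array();
  $sql = "SELECT u.uuid, n.title, n.type, n.created, n.changed
          FROM {node} n INNER JOIN {uuid_node} u ON u.nid = n.nid
          WHERE n.status = 1";
  if ($content_type) {
    $sql .= " AND n.type = '%s'";
    $args[] = $content_type;
  }
  if ($tid) {
    $sql .= " AND n.nid IN (SELECT nid FROM {term_node} WHERE tid = %d)";
    $args[] = $tid;
  }
  $sql .= " ORDER BY n.changed DESC";

  $summaries = array();
  $result = db_query($sql, $args);
  while ($row = db_fetch_object($result)) {
    $summaries[] = array(
      'uuid' => $row->uuid,
      'title' => $row->title,
      'type' => $row->type,
      'created' => $row->created,
      'updated' => $row->changed,
    );
  }
  // The Services module serialises this array to JSON (or whatever format the
  // server endpoint is configured for) before sending it to the receiver.
  return $summaries;
}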

Content Receiver

This is the real meat to this project and what is going to take the most time. I’ll expand on this in the future hopefully, but for the sake of time, here’s a rough idea of how the manual import would work. For the sake of keeping it simple, we’ll assume everyone can see everything.

  1. Retrieve the list of taxonomy terms in order to create a combo box for the filtering of content.
  2. Retrieve the list of available content types and filter according to what we have available locally.
  3. Based on the current taxonomy and content type selection, retrieve a list of nodes from the content store, ordered by last updated date.
  4. Display a form to the user, something along the lines of the ContentSyncMockup screenshot.
  5. User selects the content items that they want and presses “Sync Selected”.
  6. The selected UUIDs are requested from the content store and the information is inserted into the local database (see the sketch after this list).
  7. Additionally, the UUIDs are inserted into a separate table to log which UUIDs need to be checked to see if the content store has been updated.
  8. If a content item is changed locally, the UUID is removed from the table and the sync is broken.
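To make steps 6 and 7 a little more concrete, here is a minimal sketch of what the receiver-side fetch might look like in Drupal 6. The endpoint path, the response fields and the cls_sync log table are all hypothetical, not part of any existing module.

// Hypothetical receiver-side fetch for steps 6 and 7 above: ask the content
// store for full node data for the selected UUIDs, save each item locally,
// and log the UUID so cron can keep it in sync later.
function cls_receiver_fetch_nodes($store_url, array $uuids) {
  // Hypothetical REST endpoint exposed by the content store.
  $response = drupal_http_request($store_url . '/services/getnodes?uuids=' . implode(',', array_map('rawurlencode', $uuids)));
  if ($response->code != 200) {
    watchdog('cls_receiver', 'Content store returned @code', array('@code' => $response->code), WATCHDOG_ERROR);
    return;
  }
  $items = json_decode($response->data);
  foreach ($items as $item) {
    // Map the remote item onto a local node of the matching content type.
    $node = new stdClass();
    $node->type = $item->type;
    $node->title = $item->title;
    $node->body = $item->body;
    $node->uid = 1;
    $node->status = 1;
    node_save($node);

    // Step 7: remember the link between the local node and the remote UUID so
    // the cron sync knows which items to check for updates.
    db_query("INSERT INTO {cls_sync} (nid, uuid, synced) VALUES (%d, '%s', %d)", $node->nid, $item->uuid, time());
  }
}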

For the automatic import, we’d probably need a screen like the following.

[Screenshot: SetupContentImport]

Cron

We’d need to set up a regular cron job that polls the content store to see whether any information needs to be added or updated, for both the manual and automatic versions.
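A rough sketch of how that might look as a Drupal 6 cron hook, reusing the hypothetical cls_sync table and the cls_receiver_fetch_nodes() helper sketched above; the getsummary endpoint and its updated_since parameter are likewise assumptions.

// Hypothetical hook_cron() implementation: ask the content store which items
// have changed since our last sync and re-fetch the ones we are tracking.
function cls_receiver_cron() {
  $store_url = variable_get('cls_receiver_store_url', '');
  $last_run = variable_get('cls_receiver_last_sync', 0);

  // UUIDs we are currently keeping in sync (step 7 of the manual import).
  $tracked = array();
  $result = db_query("SELECT uuid FROM {cls_sync}");
  while ($row = db_fetch_object($result)) {
    $tracked[] = $row->uuid;
  }
  if (empty($tracked)) {
    return;
  }

  // Hypothetical endpoint listing content updated since the given timestamp.
  $response = drupal_http_request($store_url . '/services/getsummary?updated_since=' . $last_run);
  if ($response->code == 200) {
    $stale = array();
    foreach (json_decode($response->data) as $item) {
      if (in_array($item->uuid, $tracked)) {
        $stale[] = $item->uuid;
      }
    }
    if (!empty($stale)) {
      cls_receiver_fetch_nodes($store_url, $stale);
    }
    variable_set('cls_receiver_last_sync', time());
  }
}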

Alternatives

There is a slight alternative to the above: instead of the client having to poll the server, we could use the PubSubHubbub protocol to have information pushed out automatically.

Conclusion

As you can see, it’s a fairly complex module. I’d be more than happy to hear from anyone who thinks there are modules out there that can do this already. With so many modules available it’s easy to overlook the obvious.


Sharing Information

Sharing information is, without a doubt, something that universities are just not good at. We hold large amounts of really useful information in corporate systems, but it tends to sit there, and very little actually gets distributed outwith the bounds of central services. Sometimes it’s down to the technicalities of sharing that information: if the system in question is not capable of making its data available in some kind of usable format, you’re stuck. However, sometimes it is more to do with the politics behind the data. I remember working in one place where getting the data into a format that my scripts could consume took no more than a day or so, but arranging the authorisation to actually get and use that data took over a year of being passed from pillar to post before it was rubber-stamped.

One of the great failings of large organisations is not allowing the right people to access the information that can help them do their job. Very often we’re aware that the information and systems are out there, but in the interests of timescales it is often easier to gather our own information and design our own systems instead of trying to plug into the corporate ones. This invariably leads to duplicate and incorrect information being stored, which in the realm of data protection opens us up to a myriad of potential legal problems.

If we as higher education institutions are to survive the 21st century, we need to use the data we have to its maximum effect. Ten or twenty years ago, the idea of sharing information and letting people outwith central services crunch the data wasn’t the norm. However, with a host of data analysis tools now available, it is to be expected that others should, and do, want to crunch the numbers and analyse the data for themselves.

Therefore, as we move into the future and deploy new systems, data sharing must be considered and built in from the start rather than bolted on later. Failure to do this results, at best, in increased workload for those on the fringes; at worst, in lost income and opportunities.

Rant over; let me outline the ideas we had behind sharing information in a web context.

As I detailed in a previous article, the College of Life Sciences has to be able to cope with potentially over 100 different sites, each with a different look and feel. However one thing we’ve been very aware of for a long time is that a lot of the information presented on these sites is the same (or very similar). We were also aware that the same person could be updating several sites with the same information. Additionally, the information that was being presented was already being maintained centrally for the main College site. It therefore made sense that any CMS implementation had to allow for the sharing of this information across sites. In effect we needed a “create once, share often” methodology.

Sharing information is one thing, but we also wanted to ensure that by making information available we weren’t simply adding to the already large workload of those updating the sites. So it wasn’t enough simply to make the data available; we also had to proactively push that information out to the sites that needed it. The result is that our information is publicised more widely than it ever was. To illustrate this, let me give you an example.

Let’s assume a fictional member of the College and call him Professor A (a very imaginative name, I think you’ll agree). Professor A works in the School of Research, in the fictional Division of Biological Research, and has just been awarded a new multi-million pound grant to continue his research. As is the norm, the College writes a press release for the website and it is placed on the main College site. Previously the news item would have sat there in isolation until either links to it were made manually from other pages, or the text was copied manually onto other sites. You’ll already see not only the wasted effort, but the wasted opportunity in that scenario. With our new system, by tagging the news item correctly, links to the item automatically appear on the School of Research site, the Division of Biological Research site and Professor A’s own personal website. The only additional effort has been a few extra clicks and the job is done. Now, no matter at which level someone looks for information on Professor A, they’ll be able to see exactly what he’s been up to.

So we had a good method of sharing information, but we wanted to take it further still. We knew that information was being held in the College about everyone who worked there. Again, it was centrally held, albeit in a separate database, and contained information that was useful, so we went about integrating it as well. In the above scenario Professor A could potentially have his contact details on any and all of the sites; if those were to change for any reason, we’d have to change them in four different places. Now, because information is created once and used often, any changes made to the central authoritative source are reflected quickly and easily on the websites in question.

We pushed it further still. We not only held contact information on a person, but organisational information too; in other words, where in the College they worked. Using that information means that we can show, at all the different levels, who our Principal Investigators are, who is in their group, and so on. And because this information is updated centrally, anyone moving divisions or groups, or starting or leaving, is reflected automatically without us having to lift a finger.

The above examples are by no means revolutionary, but hopefully they demonstrate the big plan that we have. Whilst we want to give our customers as much control as possible over their own sites, and provide infrastructure and support that starts to push forward rather than catch up, we need to do that in a manageable and cost-effective way. Our mantra is to develop once, deploy often, so as we design new functionality we always try to ensure that the work can be transferred easily to other areas if required.


Overriding how an image displays

One of the big problems we have is that we work on a staging server and then copy that information across to our live server, which is proxied via another server. Unfortunately, because Drupal uses absolute URLs, this breaks things like images. A quick change to the way ImageCache generates its URLs does the trick.

/**
 * Override of theme_imagecache() for the cls2 theme.
 *
 * Strips the site base URL from the generated image URL so that the src is
 * relative and still works when content is copied from staging to live.
 */
function cls2_imagecache($presetname, $path, $alt = '', $title = '', $attributes = NULL, $getsize = TRUE) {
  global $base_url;

  // Check is_null() so people can intentionally pass an empty array of
  // attributes to override the defaults completely.
  if (is_null($attributes)) {
    $attributes = array('class' => 'imagecache imagecache-'. $presetname);
  }
  if ($getsize && ($image = image_get_info(imagecache_create_path($presetname, $path)))) {
    $attributes['width'] = $image['width'];
    $attributes['height'] = $image['height'];
  }
  $attributes = drupal_attributes($attributes);
  $imagecache_url = imagecache_create_url($presetname, $path);
  // Remove the absolute base URL ("^" is just the preg delimiter here),
  // leaving a relative path to the derived image.
  $regex = "^" . $base_url . "^";
  $final_url = preg_replace($regex, "", $imagecache_url);
  return '<img src="'. $final_url .'" alt="'. check_plain($alt) .'" title="'. check_plain($title) .'" '. $attributes .' />';
}

Building a College Website

The College of Life Sciences at the University of Dundee recently launched a new website based on the Drupal Content Management System. What follows is a description of how the site was built and the hurdles that we have had to overcome / are still overcoming in order to promote the College and boost awareness.

Background

The College of Life Sciences has a somewhat complicated organisational structure. The College itself is split into two schools (School of Learning and Teaching, School of Research). The School of Research is then split into twelve different divisions covering a variety of research areas. Each division is then split into a number of different research groups which total approximately 80.

Each of these distinct areas (schools/divisions/groups) can have its own site (for the most part the same domain but different directories, though we also need to be able to cope with different domains). However, there is a lot of information common to each site that needs to be shared between them. This ranges from staff profiles (which are imported from an external source) to news and event items. Much of this information is maintained centrally, but needed to be filtered to each site automatically. Additionally, we also needed to provide individual site administrators with the ability to create their own news/event/content items specific to their own site.

To top it all off, we wanted to provide a staging system that separated the live site from the staging site to reduce the likelihood of problems on the staging site affecting the live site.

How we did it

Architecture

Our Drupal environment is made up of three servers: live, staging and development. In order to synchronise information between these servers we use scripts that make heavy use of drush. At their most basic, these scripts copy the database, then the files, and then turn various modules on or off as required. This allows us to mirror current information onto the development server so we can test new themes and modules in a semi-live environment and be relatively sure they will work when copied onto the staging/live servers. Every hour, everything from the staging site is copied across to the live site.

The live site is isolated from the rest of our infrastructure. Nobody has access to it; information is pushed to it via drush from the staging site. In the event that the site is hacked, it should simply be a case of syncing with the staging site to remove any problems. The live site itself sits behind a proxy server to further speed things up.

Drupal Install

We have a single Drupal 6 installation that manages all our different sites. Using a combination of modules such as Virtual Sites and Organic Groups we are able to deliver the look of different sites (same domain, different directory).

Managing Content

Utilising the CCK module, different content types were created for each distinct kind of content (news, events, profiles, etc.) and a taxonomy was created to represent the structure of the College (schools, divisions, groups, etc.). As each content item is inserted, it is tagged with the area to which it belongs. As each Organic Group is created, it is tagged in a similar way, which allows us to design views that can be used on different sites to show information related to that site. It also allows us to let users create their own content items, which aren’t replicated to other sites.
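Conceptually, each of those per-site views boils down to a query along the following lines; in practice it is all configured through the Views UI rather than hand-written, and the term ID here is just a hypothetical placeholder for the current site's area.

// Roughly what a per-site news listing resolves to: published news nodes
// tagged with the taxonomy term that represents the current site's area.
$site_tid = 42; // hypothetical term ID for, say, one division's site
$result = db_query(
  "SELECT n.nid, n.title, n.created
   FROM {node} n INNER JOIN {term_node} tn ON tn.nid = n.nid
   WHERE n.type = 'news' AND n.status = 1 AND tn.tid = %d
   ORDER BY n.created DESC",
  $site_tid
);
while ($row = db_fetch_object($result)) {
  // Each row becomes a teaser on the site in question.
}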

Performance

In order to boost performance we use the Boost module. This works really well for us as most of our traffic is anonymous. However, it does pose problems when information is being replicated across to the live site. We came across an issue early on where old information wasn’t being expired when it was updated on the staging server and transferred to the live server. After many days of digging into Boost we discovered it was down to the way that it flags stale content, and after much fiddling we managed to fix the problem.

Problems

“Could you do it this way?”

Standardising views is a great idea in principle, but we’re starting to find that each group wants things done slightly differently. Unfortunately this has resulted in a large increase in the number of views we have to look after, which is starting to get a bit unwieldy. Whilst not a huge problem in itself, it goes against our mantra of “develop once, deploy often”.

Modules, modules everywhere…

As we bring more groups onto the system we inevitably find that they want to do “new and exciting things”. The beauty of Drupal is that if you want to do it, there is most likely a module out there that will do it for you. However, it means that our codebase keeps growing, things are starting to slow down, and routine upgrades are becoming a real pain. Whilst drush goes a long way towards making upgrades easier, testing each site after each one is tedious at best.

Performance

As I mentioned above, the Boost module works well for us at the moment because most of our traffic is anonymous. However, we have plans to make the site more interactive and personalised in the future. As Boost won’t cache authenticated content, we are going to have to look at other ways of speeding the site up once we get to that stage.

Staging

Whilst our plan for staging servers seemed sensible at the time, as we move towards a more personalised experience for users it becomes much harder to work with. Exactly how we replicate end-user content from the live site to staging without overwriting anything that has been input since the last sync, and vice versa, is a tricky beast to figure out.

Future Plans

For our first foray into a Drupal site we’re quite happy. Compared to the system that we had, it is infinitely more configurable and we can set up pretty much anything we want. However, with potentially over 100 clients all wanting different things, our current implementation is going to struggle. We could limit what we’ll implement, or we could insist on one design to fit all, but with the web changing so quickly and user requirements changing all the time, we need an infrastructure that can react quickly and easily to those changes.

We’re currently looking into implementing Aegir. If successful, this should give us the ability to spin out Drupal installs in a managed way. What we also need to get better at is putting as much configuration into code as possible in order to realise our “Develop Once, Deploy Often” mantra. Through a combination of Features and upcoming improvements in Drupal core, this should be manageable.
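As a trivial illustration of the “configuration in code” idea (not how Features itself packages things), even a Drupal 6 update hook can carry settings with the codebase so they don’t have to be clicked through the UI on every environment; the module and values below are hypothetical.

// Hypothetical update hook: deploy configuration changes alongside the code
// rather than setting them by hand on dev, staging and live.
function cls_profile_update_6001() {
  variable_set('site_frontpage', 'node/1');
  variable_set('theme_default', 'cls2');
  return array();
}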

The big hurdle to implementing Aegir is that it breaks our ability to share content between sites. How we go about solving that I’m still investigating; any and all suggestions are very welcome!


Drupal at the College of Life Sciences

The College of Life Sciences at the University of Dundee is one of the largest and most productive research institutes in Europe. With over 900 members of staff from over 50 countries, our community is as diverse as our reputation is global.

The College is split into two schools, one focusing on the teaching of students while the other is focused on research. The research school is further split into 12 divisions, each of which is made up of individual groups that total over 100 in all. The College employs one web developer, one web designer and one content editor to work with and manage websites covering all of these groups, divisions and schools, as well as the College itself. A wide variety of information is generated centrally that applies to all these areas, ranging from news items and events to vacancies and awards.

The primary ideal driving the development of our current Drupal-based CMS is to centralise information while allowing that information to appear on the individual sites owned by the schools/divisions/groups.


Drupal 8 Wishlist

As Drupal 8 development kicks off, I thought it would be useful to enumerate what we'd love to see Drupal offer in future versions. Whether this is something that can be implemented in Drupal 8, or whether it needs to wait further into the future, is up for debate.

Background

Firstly, a bit of background information on the type of scenarios we have to deal with on a day to day basis.

The College of Life Sciences at the University of Dundee is one of the largest and most productive research institutes in Europe. With over 900 members of staff from over 50 countries, our community is as diverse as our reputation is global. The College is split into two schools, one focusing on the teaching of students while the other is focused on research. The research school is further split into 12 divisions, each of which is made up of individual groups that total over 100 in all. The College employs one web developer, one web designer and one content editor to work with and manage websites covering all of these groups, divisions and schools, as well as the College itself. A wide variety of information is generated centrally that applies to all these areas, ranging from news items and events to vacancies and awards.

The primary ideal driving the development of our current Drupal-based CMS is to centralise information while allowing that information to appear on the individual sites owned by the schools/divisions/groups.

Development/Testing/Production Servers

One of the big things lacking in Drupal at the moment is a proper way to implement a dev/test/prod setup that allows for easy transfer of work between the three. Drush has done a lot to improve this, especially through the rsync and sql-sync commands, but it would be nice to simply select what you want transferred and have it deploy automatically.

Remote Content Separation

In the institution I work in we have the scenario where there is lots of information and lots of interested parties, each of whom wants the information to appear on their own sites.


Custom Type Example

An example custom type taken from our model.

<type name="cls:events">
  <title>CLS Events</title>
  <parent>cls:www</parent>
  <properties>
    <property name="cls:eventspeaker">
      <type>d:text</type>
      <multiple>true</multiple>
    </property>
    <property name="cls:eventcontact">
      <type>d:text</type>
      <multiple>false</multiple>
    </property>
    <property name="cls:eventlocation">
      <type>d:text</type>
      <multiple>false</multiple>
    </property>
    <property name="cls:eventdate">
      <type>d:datetime</type>
      <multiple>false</multiple>
    </property>
    <property name="cls:eventduration">
      <type>d:text</type>
      <multiple>false</multiple>
    </property>
    <property name="cls:eventtype">
      <type>d:text</type>
      <multiple>false</multiple>
    </property>
  </properties>
</type>

Config I've tried for the Drupal integration

$conf['cmis_sync_map'] = array(
  'page' => array(
    'enabled' => TRUE,
    'cmis_folderPath' => '/WebContent/Website',
  ),
  'cls_event' => array(
    'enabled' => TRUE,
    'cmis_folderPath' => '/WebContent/Repo/cls_events',
    'cmis_type' => 'cls:events',
    'fields' => array(
      'field_cls_eventspeaker' => 'cls:eventspeaker',
      'field_cls_eventcontact' => 'cls:eventcontact',
      'field_cls_eventlocation' => 'cls:eventlocation',
      'field_cls_eventdate' => 'cls:eventdate',
      'field_cls_eventduration' => 'cls:eventduration',
      'field_eventtype' => 'cls:eventtype',
    ),
    'subfolders' => TRUE,
    'full_sync_next_cron' => TRUE,
    'cmis_sync_cron_enabled' => TRUE,
    'cmis_sync_nodeapi_enabled' => TRUE,
  ),
);

Perfect Powerpoints

PowerPoint has become a wonderful tool for those involved not only in kids' work, but in any avenue of Christian service. Whereas those travelling around used to have to carry a huge amount of material in the form of acetates, flannelgraphs and a myriad of other resources, much of it can now be replaced with a small laptop and projector. That said, the older methods are not obsolete, but the real focus of this article is how to get the most out of your PowerPoint slides.

One of PowerPoint's big selling points is that you can do a huge number of things with very little prior knowledge of the technology or the ideas behind it. However, that is also one of its big failings. Let me illustrate this with an example of something most of us use every day: the printer.

When computers first started to rise in popularity and companies began deploying them widely, there needed to be some way (before email) of transferring the information presented on a person's screen to others in the company and further afield. Enter the printer. These came in a wide variety of shapes and sizes, from the typewriter-like daisy wheel through to the dot matrix printer. They were wonderful things, if a little noisy, but their big failing was that they only had one colour. To paraphrase the tagline of the Model T Ford, "You can have any colour you want, as long as it's black". Then came the inkjet printer and suddenly a world of colour opened up in front of us.

Very soon we found documents being printed that used just about every colour under the sun. However nice these might have looked to those producing them, very often the result for the reader was eye strain and a painful headache. The same can be said of PowerPoint: just because you can display 16.7 million colours does not mean that you have to.

So, for the PowerPoint novices out there, here's a quick and dirty guide to getting the most out of your slides.

Do a little research

Researching where it is you are doing your presentation is never a waste of time. Nor is finding out what kind of audience you will be speaking to. Whilst many of the design principles we'll talk about in this article can be applied to most scenarios, there may be times where you'll want to show something that will work in one place but not in another. Here are some of the things to look out for.

  1. Ambient light
    Is it a dark room with natural light blocked out, or does it have large amounts of light streaming in through the windows?
  2. Room size
  3. Where are you in relation to the projected image?
  4. Will you have a screen in front of you, or do you need to rely on the projected screen alone?
  5. Age of your audience

Size Matters!

Although things might be perfectly readable when you are designing your slides, you need to take into account the things you have researched above. If, for example, you are in a large venue, then you need to make sure that even the people sitting right at the back can read what you have on the screen. Very often this means having less on a slide in favour of a larger font. Smaller venues can cope with smaller fonts.

Colour Choice

Make sure that whatever colour you choose for your text contrasts well with your background. For example, if you stick with the default black text, then a white background works well. A great tool you can use to check this is Jonathan Snook's Colour Contrast Check. Simply plug in your colour choice, either by entering the hex number of the colour or by using the sliders, and the information panel on the right lets you know whether it complies with the WCAG guidelines.

[Image: Example of how text disappears when gradients are used.]

Watch your gradients

There has been a big shift in recent years to using gradients as backgrounds. Be very careful of how you use these. The colour you use for the text might show up clearly at the top of the screen, but be illegible at the bottom. The example shows how text can disappear if a poor colour choice is selected.

Content of your slide

This is probably the most important part of designing your slide and is something that will take quite a bit of practice. No matter what kind of audience you are presenting to, you should try to keep your slides short and to the point. The aim of a PowerPoint slide is not to replace you as the presenter; it is to give the audience a little prompt to remind them what you are talking about and to put things in context. Try to avoid copying huge portions of text from your Bible onto the screen. If you are lucky, the audience will read it, but then they will not be able to concentrate on what you are saying; if they try to listen and read, the chances are they will pick up neither.

Bite the bullet

Whilst bullet points work well in a company setting, they very rarely (but not always) work when talking to children. PowerPoint offers you a great chance to illustrate your point visually, with text backing up what you are saying. For example, if you were telling the story of creation, you could just list with bullets what was created each day. However, if you instead show pictures of what was created along with a heading describing it, you encourage your audience to engage with the story rather than just read a list.

Keep animations to a minimum

Animations are a great tool within PowerPoint, but they shouldn't be used a lot. Remember, PowerPoint is there to back up what you are saying, not to take over. In the case of animations, less is definitely more! One final point: unless you are trying to illustrate someone being shot, or how loud it used to be in the typing pools of old, don't add sounds to your animations.

Creating a SiteTree webscript

One of the things we needed to do was create a script that displayed the site hierarchy. I tried in vain to get this working with the built-in webscripts that come with Alfresco, so in the end I decided to build my own. The steps to reproduce it are as follows.

On the Alfresco side

First of all we need a script that will return the site structure. Unfortunately the only script I could find within Alfresco to do this returned information in CMIS format. Whilst that should be useful, for a newbie it was a pit of woe; hopefully once Dave Caruana's CMIS parser arrives this will become a lot easier to handle. So basically, what this script does is read in the site structure and return a JSON stream for processing at the other end.

getsitetree.get.desc.xml

<webscript>
  <shortname>Get Site Tree</shortname>
  <description>Retrieve the site tree from Alfresco</description>
  <url>/alfrescocms/getsitetree</url>
  <format default="json">extension</format>
  <authentication>user</authentication>
</webscript>

getsitetree.get.js

// Grab the folder that holds the site structure and expose its children to the template.
var folder = companyhome.childByNamePath("WebStore/www");
model.www = folder.children;

getsitetree.get.json.ftl

<#-- Recursively render a list of nodes (and any children they have) as JSON objects -->
<#macro parseChild objectModel>
	<#assign loop = 0 />
	<#list objectModel as child>
		<#if (loop > 0) >, </#if>
		{
			"name" : "${child.properties["name"]}",
			"nodeRef" : "${child.nodeRef}",
			"description" : "${child.properties["description"]}",
			"icon" : "${child.icon16}"
			<#if (child.children?size > 0)> 
				, "nodes" : 	[	
				<@parseChild child.children />
			]
			</#if>
		<#assign loop = loop + 1 />
		}
	</#list>
</#macro>
 
<#if (www?size > 0)>
{
	"nodes" : 
	[ 
		<@parseChild www />
	]
}
<#else>
	Empty
</#if>

A quick word about the above FreeMarker template. As there may be any number of children in a folder, I needed some kind of recursive function to keep reading until there were no children left and to output the result in JSON format. The #macro section accomplishes this.

SpringSurf

Once we've completed that, we then need to utilise that in our SpringSurf application. The scripts are as follows.

getsitetree.get.desc.xml

<webscript>
  <shortname>Get Site Tree</shortname>
  <description>Retrieves the top level site tree from Alfresco</description>
  <url>/cms/getsitetree</url>
</webscript>

getsitetree.get.js

// get a connector to the Alfresco repository endpoint
var connector = remote.connect("alfresco"); 
var result = connector.call("/alfrescocms/getsitetree");
 
// parse the JSON string returned by the repository webscript
var sitetree = eval( '(' + result + ')' );
var remoteURL = remote.getEndpointURL("alfresco");
 
model.sitetree = sitetree;
model.remoteURL = remoteURL.substr(0, (remoteURL.length())-1);

getsitetree.get.html.ftl

<#macro parseChild sitetree>
	<#if (sitetree?size > 0)>	
		<ul>
		<#list sitetree as node>
			<li><img src="${remoteURL}${node.icon}" /><a href="${url.context}/?filePath=${node.nodeRef}">${node.name}</a>	
				<#-- Check to see whether child nodes exist -->
				<#if node.nodes??>
					<@parseChild node.nodes />
				</#if>
			</li>
		</#list>
		</ul>
	<#else>
		Empty
	</#if>		
</#macro>
 
<h3><a href="#">Site Tree</a></h3>
<div>
 	<@parseChild sitetree.nodes />
 </div>

Again, the above FreeMarker template recursively reads the site structure and formats it into HTML unordered lists for display within a site.

Gotchas

With everything that you develop, there's always going to be something that you miss that causes you endless hours of heartache.

  1. If there are no child nodes, nothing gets returned.
    As you'll see in the Alfresco-side script, I check whether the current node being processed has any children; if it does, a new element called nodes is inserted and the children are listed within it. If no children are found, however, the element doesn't get created at all. This caused a problem on the SpringSurf end in that the template failed every time it tried to recurse. Enter the ?? operator in FreeMarker, which checks for the existence of a variable: if it exists, the if statement evaluates to true and everything within the tags gets actioned; otherwise it is ignored. A simple enough fix for two days of head scratching!
  2. Make sure you are passing the same type of variable at each recursion.
    All was going well at the top level, but I got confused when passing the next level into the same procedure. Make sure that you are passing what the function is expecting to get!
  3. Authentication
    The SpringSurf script requires user-level authentication in order to run, and the SpringSurf application deals with this nicely (if configured to do so!). However, the template I was working with hid many of the errors I was getting, so I decided to try hitting the script directly via the console. Because no authentication was taking place when trying to retrieve the JSON stream, I was getting XML errors back that the script was complaining about, and since I had no visibility of what was being returned, it made debugging very difficult.