Sharing information is without a doubt something that universities are just not good at. We hold large amounts of really useful information in corporate systems, but it tends to sit there, and very little actually gets distributed outwith the bounds of central services. Sometimes it’s down to the technicalities of sharing that information: if the system in question can’t make its data available in some usable format, you’re stuck. However, sometimes it’s more to do with the politics behind the data. I remember working in one place where getting the data into a format that my scripts could consume took no more than a day or so, while arranging the authorisation to actually get and use that data took over a year of being passed from pillar to post before it was rubber-stamped.
One of the great failings of large organisations is not allowing the right people to access the information that could help them do their jobs. Very often we know that the information and systems are out there, but in the interests of timescales it is often easier to gather our own information or design our own systems than to try to plug into the corporate ones. This invariably leads to duplicate and incorrect information being stored, which in the realm of data protection opens up a myriad of potential legal problems.
If we as higher education institutions are to survive the 21st century, we need to use the data we have to maximum effect. Ten or twenty years ago, sharing information and letting people outwith central services crunch data wasn’t the norm. With a host of data analysis tools now available, however, it’s to be expected that others should, and do, want to crunch the numbers and analyse the data for themselves.
Therefore, as we move into the future and deploy new systems, data sharing must be designed and built in from the start rather than bolted on later. Failure to do this results, at best, in increased workload for those on the fringes; at worst, in lost income and opportunities.
Rant over; let me outline the ideas we had behind sharing information in a web context.
As I detailed in a previous article, the College of Life Sciences has to be able to cope with potentially over 100 different sites, each with a different look and feel. However, one thing we’ve been very aware of for a long time is that much of the information presented on these sites is the same (or very similar). We were also aware that the same person could be updating several sites with the same information. Additionally, the information being presented was already maintained centrally for the main College site. It therefore made sense that any CMS implementation had to allow for the sharing of this information across sites. In effect we needed a “create once, share often” methodology.
Sharing information is one thing, but we also wanted to ensure that in making information available we weren’t simply adding to the already large workload of those updating the sites. So it wasn’t enough just to make the data available; we also had to proactively push that information out to the sites that needed it. The result is that our information is publicised more widely than it ever was. To illustrate this, let me give you an example.
Let’s assume a fictional member of the College and call him Professor A (very imaginative name, I think you’ll agree). Professor A works in the School of Research, in the fictional Division of Biological Research, and has just been awarded a new multi-million-pound grant to continue his research. As is the norm, the College writes a press release for the website and it is placed on the main College site. Previously the news item would have sat there in isolation until either links to it were manually made from other pages, or the text was manually copied onto other sites. You’ll already see not only the wasted effort but also the wasted opportunity in that scenario. With our new system, by tagging the news item correctly, links to it automatically appear on the School of Research site, the Division of Biological Research site and Professor A’s own personal website. The only additional effort is a few extra clicks, and the job is done. Now, no matter at which level someone looks up information on Professor A, they’ll be able to see exactly what he’s been up to.
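The tagging mechanism described above can be sketched roughly as follows. This is a minimal illustration, not our actual CMS code: the item structure, tag names and function are all assumptions made up for this example.

```python
# Hypothetical sketch of tag-based news syndication. A news item is
# created once on the main site; its tags identify every other site
# that should automatically pick up a link to it.

news_items = [
    {
        "title": "Professor A awarded multi-million-pound grant",
        "url": "/news/professor-a-grant",
        # One tag per site that should surface this item.
        "tags": {"school-of-research", "biological-research", "professor-a"},
    },
]

def items_for_site(site_tag, items):
    """Return every news item tagged for the given site."""
    return [item for item in items if site_tag in item["tags"]]

# The division site picks up the story with no manual copying:
for item in items_for_site("biological-research", news_items):
    print(item["title"], "->", item["url"])
```

The key design point is that the relationship lives on the item, not the site: adding one tag at creation time is the “few extra clicks”, and every site that queries by its own tag stays up to date automatically.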
So we had a good method of sharing information, but we wanted to take it further still. We knew the College held information about everyone who worked there. Again, it was centrally held, albeit in a separate database, and it was useful. So we went about integrating it as well. In our scenario above, Professor A could potentially have his contact details on any and all of those sites; if they were to change for any reason, we’d have to update them in four different places. Now, because information is created once and used often, any change made to the central authoritative source is reflected quickly and easily on every website in question.
We pushed it further still. We held not only contact information on a person but also organisational information: in other words, where in the College they worked. Using that information means we can show, at each of the different levels, who our Principal Investigators are, who is in their group, and so on. And because this information is updated centrally, anyone moving between divisions or groups, or starting or leaving, is reflected automatically without us having to lift a finger.
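The “create once, use often” idea behind the last two paragraphs can be sketched like this. The record layout, field names and people here are invented for illustration; the real College database will look nothing like it, but the principle is the same: every site renders from one authoritative record rather than keeping its own copy.

```python
# Hypothetical sketch of a central authoritative person store. Contact
# and organisational details live in one record per person; sites query
# it at render time, so a change here appears everywhere at once.

people = {
    "professor-a": {
        "name": "Professor A",
        "email": "a.professor@example.ac.uk",
        "division": "biological-research",
        "role": "principal-investigator",
    },
    "researcher-b": {
        "name": "Researcher B",
        "email": "b.researcher@example.ac.uk",
        "division": "biological-research",
        "role": "researcher",
    },
}

def members_of(division, role=None):
    """List everyone in a division, optionally filtered by role."""
    return [
        p for p in people.values()
        if p["division"] == division and (role is None or p["role"] == role)
    ]

# The division site lists its Principal Investigators; updating the
# central record above updates every site on the next render.
for person in members_of("biological-research", role="principal-investigator"):
    print(person["name"], person["email"])
```

Because the organisational structure is part of the same record, moves between divisions and starters or leavers need only a change to the central store, never to the individual sites.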
The above examples are by no means revolutionary, but hopefully they demonstrate the bigger plan we have. Whilst we want to give our customers as much control as possible over their own sites, and to provide infrastructure and support that pushes forward rather than plays catch-up, we need to do that in a manageable and cost-effective way. Our mantra is “develop once, deploy often”, so as we design new functionality we always try to ensure that the work can be transferred easily to other areas if required.