Though all resources released by these projects will be deposited into the JorumOpen repository, Core-Materials and EngSCOER opted to disseminate them further using Web 2.0 file-sharing platforms such as YouTube, Vimeo, SlideShare, Scribd, Zoho and Flickr.
The Core-Materials project had large collections of resources to release and therefore needed a facility to batch upload these resources and their associated descriptions to the Web 2.0 platforms, in order to take advantage of the benefits they offer (see the list below). As the metadata for its resources was already held in a database, manual upload would have been an unnecessary drain on time, requiring re-keying or copying and pasting of metadata. The project instead used the APIs provided by each of the Web 2.0 sites to batch upload resources: over 800 images to Flickr, over 100 videos to YouTube and over 150 documents to Scribd and SlideShare. The project is currently trialling RSS as a mechanism for bulk deposit of its resources into JorumOpen. RSS is a simple, effective mechanism for sharing descriptions, and even the resources themselves (think of podcasting).
The Core-Materials project has developed a search facility for the database of OERs it has released. Each resource is available in a variety of locations, so it was essential to keep track of where resources were deposited, both to keep the database up to date and to consolidate duplicate results. Since the project used APIs to batch upload the resources, it was straightforward to capture the unique ID returned for each resource at upload time and store it in the database.
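The batch-upload-and-record pattern can be sketched roughly as below. This is an illustrative sketch, not the project's actual code: the field names are invented, the "database" is an in-memory dict, and the Flickr call is shown only in a comment (with the real flickrapi library, the upload response carries a photo ID that would be stored in place of the placeholder).

```python
# Sketch of the batch-upload-and-record pattern: map database metadata rows
# to API upload arguments, then store the platform's returned ID locally.
# Field names and the "database" here are illustrative, not the real schema.

def rows_to_upload_args(rows):
    """Map database metadata rows to keyword arguments for an upload API."""
    return [
        {
            "filename": row["file_path"],
            "title": row["title"],
            "description": row["description"],
            "tags": " ".join(row.get("keywords", [])),
        }
        for row in rows
    ]

def record_remote_id(db, local_id, platform, remote_id):
    """Store the platform's unique ID against the local record, so the
    database can track where each resource lives."""
    db.setdefault(local_id, {})[platform] = remote_id

# Example run with an in-memory "database":
db = {}
rows = [{"file_path": "micrograph1.jpg", "title": "Pearlite micrograph",
         "description": "Optical micrograph of pearlitic steel",
         "keywords": ["steel"]}]

for local_id, args in enumerate(rows_to_upload_args(rows)):
    # With the real Flickr API this step would look something like:
    #   resp = flickr.upload(**args)
    #   remote_id = resp.findtext("photoid")
    remote_id = f"flickr-{local_id}"   # placeholder for the ID the API returns
    record_remote_id(db, local_id, "flickr", remote_id)

print(db)
```

Capturing the ID at upload time, rather than trying to match titles afterwards, is what makes the later de-duplication of search results straightforward.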
The Engineering Subject Centre’s OER pilot project consisted of many smaller individual partners willing to contribute their content as Open Educational Resources. The project team provided support by suggesting the Web 2.0 platforms best suited to sharing each partner’s resources, and helped some partners get started by creating accounts and uploading some of their content to these sites. This should help with sustainability, as each partner now has the facility to release more resources through these channels in the future.
The EngSCOER project made occasional use of the APIs provided by these Web 2.0 services for batch upload, where metadata was similar or already held in electronic format. However, the project’s main use of APIs was to provide a search interface for these resources, as an alternative to creating yet another database of resources. Before settling on the APIs as a search interface, the project also investigated Google Custom Search and Yahoo! Pipes.
Using Web 2.0 platforms as a repository for educational resources fulfils many of the expectations that both contributors and consumers of content now have. The following list describes some of these features:
The ability to view or preview content online without needing to download it. (How successful would YouTube be if users had to download each video clip before viewing?)
Many services (such as Flickr, Scribd, SlideShare and Zoho) allow contributors to release their content under Creative Commons licences, giving users clear guidelines for reuse.
Resources can easily be embedded in other websites or blogs using the HTML embed codes provided to users.
These services tend to have a high web presence, and their results often rank more visibly in search engines.
Many of these sites provide social networking functionality, allowing users to share or reuse the content more easily.
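The embed codes mentioned above are just HTML snippets each platform hands to contributors, so reproducing a player on another page is a matter of string templating. A minimal sketch, assuming YouTube's iframe-style embed (the exact markup each platform provides varies, and the video ID below is made up):

```python
# Minimal sketch of an "embed code": a templated HTML snippet that
# reproduces a platform's player on any other web page.
# The markup mirrors YouTube's iframe embed; the video ID is illustrative.

def youtube_embed(video_id, width=560, height=315):
    return (f'<iframe width="{width}" height="{height}" '
            f'src="https://www.youtube.com/embed/{video_id}" '
            f'frameborder="0" allowfullscreen></iframe>')

snippet = youtube_embed("abc123XYZ")
print(snippet)
```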
As part of the ticTOCs project I contributed to writing the Recommendations on RSS Feeds for Scholarly Publishers. The guidelines have now been published, and it is expected that industry-wide adoption of these best practices will help drive more traffic to publisher websites.
Improving Your Online Presence - a Netskills Workshop
I recently attended a two-day Netskills workshop, “Improving Your Online Presence”, at the Hilton Hotel, Edinburgh on the 1st and 2nd of July 2009. I thought this workshop would be especially useful for the work I’m doing on the two #UKOER projects I’m currently involved with (one of the key requirements of Open Educational Resources is that they should be easily discoverable).
Although the workshop was perhaps more focused on entire websites (most of the participants were web editors and managers of their institutions’ websites), the basic principles can still be applied to Open Educational Resources. The workshop was split into six main sections:
The Importance of Structure
This section focused on getting the basics right: ensuring that HTML content is marked up appropriately using heading (<h1></h1>, <h2></h2>, …) and paragraph (<p></p>) tags. Search engines give preference to keywords in headings, so do not use styled paragraph tags to make text merely look like a heading.
When writing for the web it is important to consider how users read web pages: they tend to scan, jumping from page to page. It is therefore advisable to front-load content, i.e. give the conclusion first, then explore the content and give further details, so that key information is not missed. This is at odds with traditional print-based writing, which runs introduction, details and then conclusion.
Give your pages, directories and sub-directories descriptive titles to generate meaningful URLs. Ensure the URLs are persistent to allow deep linking to your resources.
Total Accessibility of Content
When the term “accessibility” is discussed on the web, it is most commonly associated with content being usable by people of any ability or disability. This section of the workshop explored and expanded that concept: a website should also be accessible from different browsers and devices, be easy to navigate, and have acceptable download speeds.
Day one was wrapped up with a session on content integrity. Key factors from this session were consistency, spelling and grammar, and branding. The OER projects hope that their content will be reused by others, which may be limited by strong branding (the “not invented here” syndrome); however, knowing the reputation of the content provider through branding may improve the integrity of the content. Integrity is also about the right people getting the right information at the right time (it is pointless having a high hit rate on your site if there is an equally high bounce rate). This session demonstrated tools such as Google Analytics that can analyse the visitors to your site, allowing a better understanding of the target audience.
Day two of the workshop started with a session on metadata. Metadata is data about data, and many search engines use it to rank results. Basic information, such as title and authors, can be put in the HTML meta tags, while richer schemes such as Dublin Core can enhance this information.
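As a rough illustration of the two levels mentioned above, the fragment below shows basic HTML meta tags alongside Dublin Core equivalents. The titles, names and licence URL are invented for the example; real usage would also declare the DC schema link.

```html
<!-- Illustrative only: basic meta tags plus Dublin Core equivalents -->
<head>
  <title>Pearlite micrographs - Core-Materials</title>
  <meta name="description" content="Optical micrographs of pearlitic steel">
  <meta name="author" content="A. Contributor">
  <!-- Dublin Core terms carry richer, standardised semantics -->
  <meta name="DC.title" content="Pearlite micrographs">
  <meta name="DC.creator" content="A. Contributor">
  <meta name="DC.subject" content="materials science; steel; pearlite">
  <meta name="DC.rights" content="http://creativecommons.org/licenses/by/2.0/uk/">
</head>
```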
This session discussed the semantic web and how search engines like Wolfram Alpha use semantic information to compute answers to user-generated questions. A video demonstrating a device that uses semantic information was shown to the workshop. The video could have been taken from a science fiction movie, and although some of the applications could be considered an invasion of privacy, others were quite amazing demonstrations of the potential power of semantics.
The final session of the workshop was delivered by Brian Kelly of UKOLN and was entitled “Pimp Up Your Stuff! Using the Social Web”. It demonstrated examples of using the social web to promote your resources, project, institution or yourself, covering wikis, blogs, Twitter, institutional repositories, RSS feeds, social networks, video-sharing sites and slide-sharing sites. Pick the sites you use carefully, i.e. select those most appropriate for your content; don’t try to have a presence on them all. Quoting from Brian’s slides:
"The social web can be used to enhance access to digital resources, real world resources and ideas and concepts. Ignoring the potential may mean you lose out to your peers, competitors or rivals".
For more information on this workshop, please visit the SCASEO Netvibes page which contains bookmarks, images, videos and tweets from the event.
The Bayesian Feed Filtering project will be trying to identify those articles that are of interest to specific researchers from a set of RSS feeds of journal Tables of Contents, by applying the same approach that is used to filter out junk emails. We had the first project meeting this afternoon, though we’ve each done a little bit of work in the last week or two. We went over our plans for the two main work packages in some detail…
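The junk-mail analogy can be sketched in a few lines of naive Bayes: score each article title by comparing its words' frequencies in "interesting" versus "uninteresting" training examples, exactly as a spam filter scores email tokens. The training titles below are invented; a real filter would train on articles each researcher has marked.

```python
# Toy naive Bayes filter over article titles, in the spirit of a spam filter.
# Training data is invented for illustration.
from collections import Counter

def tokenize(text):
    return [w.strip(".,:;") for w in text.lower().split()]

def train(interesting_titles, other_titles):
    good = Counter(w for t in interesting_titles for w in tokenize(t))
    bad = Counter(w for t in other_titles for w in tokenize(t))
    return good, bad

def p_interesting(title, good, bad):
    """P(interesting | title) under naive Bayes with Laplace smoothing
    and a uniform prior over the two classes."""
    g_total, b_total = sum(good.values()), sum(bad.values())
    pg, pb = 1.0, 1.0
    for w in tokenize(title):
        pg *= (good[w] + 1) / (g_total + 2)   # Laplace smoothing
        pb *= (bad[w] + 1) / (b_total + 2)
    return pg / (pg + pb)

good, bad = train(
    ["Bayesian inference for sensor networks", "Bayesian model selection"],
    ["Annual general meeting minutes", "New staff car park arrangements"],
)
print(p_interesting("A Bayesian approach to filtering", good, bad))
print(p_interesting("Car park closed for meeting", good, bad))
```

A production filter would use log-probabilities to avoid underflow on long texts and would weigh other feed fields (abstract, authors) as well as the title, but the scoring principle is the same.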
This is a demo using the Flickr API, and RSS feeds from YouTube and SlideShare, to display all content added by a particular user. It shows some of the detailed information that can be pulled in from these external sites, which may be useful to UKOER projects.
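Pulling item details out of a platform's per-user RSS feed needs nothing more than an XML parser. A minimal sketch using the standard library, with a made-up sample feed standing in for what would normally be fetched from the platform's feed URL:

```python
# Sketch of extracting item details (title, link) from an RSS 2.0 feed,
# as the demo does for YouTube and SlideShare user feeds.
# The sample XML is invented; a real feed would be fetched over HTTP.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example user uploads</title>
    <item>
      <title>Spherulite growth video</title>
      <link>http://example.org/video/1</link>
    </item>
    <item>
      <title>TALAT lecture slides</title>
      <link>http://example.org/slides/2</link>
    </item>
  </channel>
</rss>"""

def items(feed_xml):
    """Return a list of {title, link} dicts for every <item> in the feed."""
    root = ET.fromstring(feed_xml)
    return [{"title": i.findtext("title"), "link": i.findtext("link")}
            for i in root.iter("item")]

for entry in items(SAMPLE_RSS):
    print(entry["title"], "->", entry["link"])
```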
This is an example of using Yahoo! Pipes to aggregate content with a particular tag from sites such as SlideShare, Flickr and YouTube, and to provide a search across items with those tags. Try searching for keywords such as Talat, Engineering or Spherulite. This could be relevant to UKOER projects wanting to aggregate, and search across, content distributed over various platforms.
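The aggregate-and-search idea behind the Pipes demo can be sketched as below: merge the items from several platform feeds into one list (tagging each with its source), then filter by keyword. The platforms and items here are illustrative stand-ins for parsed feed data.

```python
# Rough sketch of aggregating items from several platforms, then
# searching across them by keyword or tag. Items are illustrative.

def aggregate(*feeds):
    """Flatten per-platform item lists into one list, tagging the source."""
    merged = []
    for platform, platform_items in feeds:
        for item in platform_items:
            merged.append(dict(item, source=platform))
    return merged

def search(items, keyword):
    """Case-insensitive keyword match against each item's title and tags."""
    kw = keyword.lower()
    return [i for i in items
            if kw in i["title"].lower()
            or kw in " ".join(i.get("tags", [])).lower()]

all_items = aggregate(
    ("slideshare", [{"title": "TALAT lecture 1", "tags": ["talat", "aluminium"]}]),
    ("flickr", [{"title": "Spherulite micrograph", "tags": ["spherulite"]}]),
    ("youtube", [{"title": "Engineering OER intro", "tags": ["engineering"]}]),
)
print([i["title"] for i in search(all_items, "spherulite")])
```

Yahoo! Pipes performs the fetch, merge and filter steps graphically, but the underlying operations are just this aggregation and keyword matching.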
Here is a summary of the four different projects I am working on at the moment:
Technical input into two projects, one for Engineering and the other for Materials Science, both creating Open Educational Resources (OERs). These projects are part of a £5.2M HEFCE initiative promoting the release of OERs.
Two JISC-funded Rapid Innovation projects: one to develop and investigate the performance of a tool that will aggregate and personalise RSS alerts using Bayesian filtering; the other to develop a facility for monitoring current journal issues, so that institutional repository managers can identify when papers deposited as drafts are published.
This paper, presented at ELAG09 looks at the current situation with respect to RSS and then reports upon the findings of the ticTOCs and Gold Dust projects. We will look at the lessons learnt from developing the ticTOCs service, and also report on two iterations of the Gold Dust development and use cycles. We will deliver an appraisal of the effectiveness of the raft of techniques being employed by Gold Dust. How effective are current data mining and pattern matching techniques for such an application? How useful is RSS metadata in this context? These findings will be of considerable pertinence both for future services which may use RSS Feeds, and for future research and development in the area of adaptive personalisation using RSS.
I worked as a project officer on the JISC-funded ticTOCs project, which set up a pilot journal tables of contents service aggregating RSS feeds from over 12,000 journals, published by over 430 small, medium and large publishers.
I have worked at the Institute for Computer Based Learning for three and a half years now so it is probably about time I started a work blog. The first few posts will probably be a catch up of things I have been working on recently. I have interests in the sharing of computer based learning resources, but have also worked on projects that investigate current awareness services for researchers.