- Screenshots or diagram of prototype:
Searching for a person
Choosing an individual
Viewing information about them and people who ‘write like them’
- Description of Prototype: Explore people, publications, institutions and themes through OAI metadata
- End User of Prototype: “Jonathan is a researcher in evolutionary linguistics. He has become very interested in possible mathematical mechanisms for describing the nature, growth and adaptation of language, as he has heard that others, such as Partha Niyogi, have done some very interesting work in this area. Unfortunately, Jonathan is not a mathematician and finds that some of the detail is hard to follow. He realises that what he really needs to do is either to go to the right sort of event or the right sort of online forum and find some people who might be interested in exploring links between his specialist area and their own. Both of these are difficult in their own ways. To go to the right sort of event would mean identifying what sort of event that would be, and he does not have enough money to go to very many. So he chooses to look up possible events and web forums, thinking that he can look through the participant lists for names that he recognises. This is greatly simplified by a system that uses information about the papers and authors that he considers most relevant; with this information it is able to parse through lists of participants in events or online communities in order to provide him with a rough classification of how relevant the group is likely to be to his ideas.”
- Link to working prototype: writeslike.us
- Link to end user documentation: http://www.ukoln.ac.uk/projects/writeslike.us
- Link to code repository or API: http://code.google.com/p/writeslikeus/
- Link to technical documentation: http://www.ukoln.ac.uk/projects/writeslike.us (TBA)
- Date prototype was launched: Dec 01 2009
- Project Team Names, Emails and Organisations: Emma Tonkin, email@example.com, UKOLN; Alexey Strelnikov, firstname.lastname@example.org, UKOLN; Andrew Hewson, email@example.com, UKOLN
- Project Website: http://code.google.com/p/writeslikeus/
- PIMS entry: https://pims.jisc.ac.uk/projects/view/1263
- Table of Contents for Project Posts: TBA
Probably the most important thing I discovered in this project was the importance of ‘crowdsourced’ data in filling in the gaps between metadata and common knowledge.
The availability of Wikipedia as a source of general information has been a very important factor for us in improving the metadata, and the data we are putting together to support usage of that metadata, even though much of it lacks the structure needed to search through it with something like DBpedia. It’s not perfect of course – or perhaps it’s better to say that the imperfect and rough ways in which we use the data cannot achieve the sorts of results that one might like – but it seems obvious that it’s an invaluable resource for the future.
Other data sources have been invaluable for us as well, particularly DBLP, despite its strong focus on computer science (which, however, means that for training across domains we should probably be looking elsewhere too 🙂 ).
Finally, social tags have been less effective for our purposes than one might imagine, for one reason: there aren’t an awful lot of them around, and those that do exist must be detected by a relatively complex process of resolving title/author into the most popular mirror URI(s).
We’ll be publishing some of the extracted data shortly – boring but useful stuff like lists of institutions, URLs, coordinates, enhanced metadata, etc. – so hopefully it will come in useful to others!
➢ Getting information such as institution names/URLs from Wikipedia, and widespread use of available web services in general
➢ Extracting names from OAI-DC was easier than expected – although there are still issues with identifying name pair order.
➢ Evidence-based learning methods can be applied successfully to enhance the retrieved data – getting into FixRep territory. The project has been very useful for establishing further use cases for ‘cleaning up’ metadata.
➢ Some interesting work in name / identity disambiguation through statistical clustering analysis. We’re looking at linking extracted info together with formal information such as that made available by the NAMES project.
➢ Storyboards defining the workflow of the system form an effective part of the agile development process, and were very useful for us.
➢ Using an SQL db as the repository was effective once problems with slow queries were addressed by normalising the data, reviewing the db schema design, and adding indexes as necessary.
➢ Natural Language Toolkit (NLTK) – we didn’t use it for its original purpose. Instead, we went back to TreeTagger, although this was not specifically trained for the sort of technical document we were analysing.
➢ The text analysis expertise required for this project didn’t already exist in the team. It would have been a good idea to arrange training to make sure we were all on the same page!
➢ Ensure all related documents, URIs, etc, are contained/linked in the project wiki.
➢ Cultural mismatch between research approach to defining requirements/expectations and development requirements/expectations. e.g. who writes the formal requirements document?
➢ Earlier storyboard scenario development would have been helpful, so a good lesson for next time.
➢ Swine flu had quite a severe effect on this project – our Portuguese collaborators were unavailable for quite some time, first because of the danger of travelling to the UK and contracting the virus, and subsequently (having contracted the illness in Portugal) because of the effects of the illness itself!
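As a sketch of the name-extraction point above: a minimal, hypothetical example (not the project’s actual code) of pulling dc:creator values out of an oai_dc record, using a simple heuristic for the name-pair-order problem – a comma usually signals ‘Surname, Forename’; otherwise we assume the last token is the surname.

```python
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def creators(oai_dc_xml):
    """Return a list of (surname, forenames) pairs from an oai_dc record.

    Heuristic only: 'Surname, Forename' if a comma is present,
    otherwise assume the last whitespace-separated token is the surname.
    """
    root = ET.fromstring(oai_dc_xml)
    names = []
    for el in root.iter(DC + "creator"):
        raw = (el.text or "").strip()
        if not raw:
            continue
        if "," in raw:
            surname, _, forenames = raw.partition(",")
        else:
            parts = raw.rsplit(None, 1)
            forenames, surname = parts if len(parts) == 2 else ("", raw)
        names.append((surname.strip(), forenames.strip()))
    return names

record = """<oai_dc:dc
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:creator>Tonkin, Emma</dc:creator>
  <dc:creator>Andrew Hewson</dc:creator>
</oai_dc:dc>"""
print(creators(record))  # [('Tonkin', 'Emma'), ('Hewson', 'Andrew')]
```

Of course, this heuristic fails for multi-word surnames and for records that put the surname first without a comma – which is exactly why the issue remains open.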
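The slow-query fix mentioned above can be illustrated in miniature. This is an assumed schema, not the project’s actual one, shown with SQLite so the effect of an index is visible via the query plan:

```python
import sqlite3

# Illustrative sketch of the indexing fix described above; the table
# and column names here are assumptions, not the project's schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE document (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("""CREATE TABLE authorship (
    document_id INTEGER REFERENCES document(id),
    author_name TEXT)""")

query = "SELECT document_id FROM authorship WHERE author_name = ?"

# Without an index, looking up an author's documents scans the table.
before = con.execute("EXPLAIN QUERY PLAN " + query,
                     ("Tonkin, Emma",)).fetchall()
print(before[0][-1])  # e.g. 'SCAN authorship'

# Adding an index on the queried column turns the scan into a seek.
con.execute("CREATE INDEX idx_authorship_name ON authorship (author_name)")
after = con.execute("EXPLAIN QUERY PLAN " + query,
                    ("Tonkin, Emma",)).fetchall()
print(after[0][-1])   # e.g. 'SEARCH authorship USING INDEX idx_authorship_name'
```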
Good knowledge of data source aggregation and normalisation
Agile rapid prototype development and evaluation
Links to IEMSR, FixRep, Aggregator: internal resources – strong support network
Staff time is limited
Various programming languages in use
Lots of interest in auto-extraction of community networks data
University of Minho now have funding to take work forward
University of Leiden interest in specific technical tagging
Learning from the NAMES project and enhancing other existing services like it
FixRep stuff: data provenance, evaluation, quality assurance
Establishment of user community
Reliance on external services/data
Quality of source data and availability of source data is variable
User community not firmly established
Strictly, the problem is unsolvable without additional information identifying the person. For the case where this information is unavailable, however, we suggest applying pure statistics: evaluate, on a test (sample) data set, the error level associated with each of the two possibilities – two different people, or one person simply publishing in two places.
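The statistical fallback above can be sketched as follows. This is a hedged illustration rather than the project’s implementation: given a labelled sample of record pairs and some similarity score, measure the two error levels – wrongly merging two people, and wrongly splitting one – for a given decision threshold.

```python
# Hypothetical sketch: each sample item is (similarity_score, same_person),
# where same_person is the ground-truth label for that pair of records.

def error_rates(sample, threshold):
    """For a merge-if-score>=threshold rule, return the pair
    (false_merge_rate, false_split_rate) measured on the sample."""
    merged = [same for score, same in sample if score >= threshold]
    split = [same for score, same in sample if score < threshold]
    false_merge = sum(1 for same in merged if not same)
    false_split = sum(1 for same in split if same)
    fm_rate = false_merge / len(merged) if merged else 0.0
    fs_rate = false_split / len(split) if split else 0.0
    return fm_rate, fs_rate

# Made-up labelled pairs, purely for illustration.
sample = [
    (0.9, True), (0.8, True), (0.7, False),
    (0.6, True), (0.4, False), (0.3, False),
]
print(error_rates(sample, 0.5))  # (0.25, 0.0)
```

Sweeping the threshold over such a sample gives exactly the trade-off described: the error level attached to each of the two possibilities.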
Then we switched to looking at practical methods to use within the Writeslike.us project, to identify individuals who write similarly or about similar topics.
This lovely diagram displays how the system we’re building will do the ‘magic’. Since none of the dimensions available to us can discriminate identity alone, we need to bring several onto the stage. The result will be an approximation incorporating evidence from several sources, which will hopefully make it more precise.
The table is just a representation of the way in which the heap of raw data will be mapped into something useful.
A new staff member (casual staff) has just begun work, specifically on the Writeslike.us project. He is very experienced in designing and implementing web portals, familiar with machine learning concepts and enthusiastic about the project’s research area – constructing a graph of relations between researchers based on analysis of their papers.
Here is his home page
Here is a mind map of system architecture and idea. How to read it:
- From the repository, extract the metadata
- save it to the internal database
- then use a process to determine who else is in this author’s community
- the process may have a variety of interfaces (a gadget for the repository interface showing a list of authors in the same community; or it could be a search engine saying: authors from the same community also wrote these papers)
Documents are analysed along a number of dimensions (which could be weighted), such as keywords used, bibliographical references and social tags.
Here is a data model: documents are related to others along those dimensions (drawn in different colours in the diagram), and the process then calculates a relation ‘value’ by following a particular document’s links.
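The relation ‘value’ calculation above could look something like this. The dimensions, weights and similarity measure are all assumptions for illustration, not the project’s actual choices:

```python
# Hypothetical sketch: combine per-dimension overlap scores between two
# documents into a single weighted relation 'value'. The weights below
# are placeholders, not values from the project.
WEIGHTS = {"keywords": 0.5, "references": 0.3, "tags": 0.2}

def jaccard(a, b):
    """Set overlap, from 0.0 (disjoint) to 1.0 (identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def relation_value(doc_a, doc_b):
    """Weighted sum of per-dimension similarities between two documents,
    each represented as a dict of dimension -> set of features."""
    return sum(w * jaccard(doc_a.get(dim, set()), doc_b.get(dim, set()))
               for dim, w in WEIGHTS.items())

d1 = {"keywords": {"metadata", "repositories"},
      "references": {"r1", "r2"},
      "tags": {"oai"}}
d2 = {"keywords": {"metadata", "linguistics"},
      "references": {"r2"},
      "tags": {"oai"}}
print(round(relation_value(d1, d2), 3))  # 0.517
```

Following a document’s coloured links in the diagram corresponds to evaluating each dimension’s overlap and summing the weighted contributions.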
It’s been a good event; we’ve introduced a prototype for our new metadata consistency improvement system – FixRep. We’ve also talked to a lot of people and identified several opportunities for collaboration.
Additionally, we took the time to visit what may well be Austin’s most famous population-
Last week we also attended ElPub’s Social Tagging workshop, in which we introduced the ideas and theory behind writeslike.us.