Benj Pettit works at Mendeley, where he develops text and data mining tools that help researchers find new articles, collaborators and more. One of the special things about the Mendeley catalogue is that it is built in a crowdsourced way.
Open Science is a new research paradigm that faces many challenges. To improve the uptake of Open Science, four EU projects have joined forces to organise an event that will showcase its critical elements, from infrastructures to policies and new types of activities. Join us for the Open Science FAIR, September 6-8 in Athens, and get inspired.
Last April 26-27 the BioCreative V.5 Challenge Evaluation Workshop took place in Barcelona. The goal of BioCreative V.5 was to address some of the major barriers to the adoption and use of text mining tools, related to assessment, accessibility, interoperability, robustness and integration.
25 years ago, when Laurents Sesink was still a history student, his thesis on political internal relations included a lot of reading and tally marks. Back then he already thought “There must be a better way to do this”, so he built a database and started to get into informatics and digitisation. Now he is the head of the Centre for Digital Scholarship at the library of Leiden University.
The 9th Plenary Meeting of the Research Data Alliance (RDA) took place in Barcelona, Spain, from 5 to 7 April 2017. The RDA Plenary Meetings constitute a major event where more than 4000 members from 100 countries come together to discuss, develop and promote data-sharing and data-driven research infrastructure through Working and Interest Groups. The Interest Group on Agricultural Data (IGAD) pre-meeting took place just a couple of days before the 9th RDA plenary meeting, from 3 to 4 April 2017 and attracted more than 100 participants from all over the world.
Frontiers in Neuroinformatics has just released a new paper by O’Reilly, Iavarone and Hill. It describes a systematic framework to curate neuroscientific literature. This framework provides an easier and more reliable way to integrate published data into neuronal models. The work was done in the context of the OpenMinTeD and Blue Brain projects.
On February 20th 2017, Agroknow had the pleasure to host a workshop at the premises of the Agricultural University of Athens (AUA). The workshop was organized together with colleagues from the Laboratory of Viticulture.
You will not catch Steven Claeyssens carrying a smartphone and he will always prefer a paper book to an e-reader. Yet he is the curator of digital collections at the National Library of the Netherlands. I interviewed him about his job, text and data mining (TDM) in the humanities and the role of libraries in the research landscape.
How is a scientific paper structured and how related is it to other papers? These are some of the things that Iana Atanassova of the University of Bourgogne Franche-Comté (Besançon, France) focuses on in her research. She uses text and data mining (TDM) to study full-text scientific articles. Studying these papers can be a challenge, as they are usually in a format that is hard to process.
Marc Bertin, assistant professor at the University of Toulouse, uses text and data mining to study scientific papers. Text and data mining can help us move from an information society to a knowledge society, but not without open access to research papers.
When scientists need information about the structure, name or properties of small molecules, they often turn to a high-quality database called ChEBI. This database is largely curated manually, a process that takes a lot of time. OpenMinTeD is working on a text mining application that can help speed up the process while maintaining the quality of the database.
Joris van Eijnatten is professor of cultural history at Utrecht University, The Netherlands. He has a fascination for numbers that not many historians have. Last year he was the research fellow for digital humanities at the National Library of The Netherlands, where he applied text and data mining to study the image people have of Europe based on newspapers. I interviewed him about text and data mining in humanities, his work and his personal romance with numbers.
What is the real novelty of a research paper? How do different researchers contribute to innovation? And does this change throughout their career? Shubhanshu Mishra of the University of Illinois uses text mining techniques to study the novelty of biomedical articles.
Systematic review of medical research papers can lead to new knowledge and treatments of diseases. The existing software tools, however, are very limited, and often a lot of manual work is involved. Stephen Gilbert of Iowa State University uses artificial intelligence and machine learning to automate the process of systematic review.
While discussions at the EU on copyright reform and an exception for text and data mining (TDM) are very much alive, FutureTDM, a Commission-funded project of TDM experts, has for the past year been gathering information, mapping the TDM landscape and listening to the wide variety of individuals and organisations involved in data analytics. The project has just produced the first in a series of reports, providing a range of stakeholders with recommendations to improve TDM uptake in the EU. This FutureTDM policy framework document sets out high-level principles and recommendations.
Frederico Nanni was not always a text miner. He actually started out as a historian and then switched to digital humanities. During his PhD, he developed a method to detect interdisciplinary research, based on scientific abstracts. Now, he finds text mining fascinating and thinks more historians should learn how to do it.
It took some time for Drahomira Hermannova to see the value of her research topic, but now she thinks it is the best topic she could have chosen: using text and data mining to evaluate which research can change the world. Not only can this help scientists, it may change the way research is done altogether.
In the OpenMinTeD project, partners from different scientific communities are involved to make sure the OpenMinTeD infrastructure will address their needs. As regards the social sciences, a useful application for text mining is the improvement of literature search and information interlinking. To this end, three main challenges were identified: named entity recognition, automatic keyword assignment to texts and automatic detection of mentions of survey variables. This post gives an overview of these tasks and the progress of work so far.
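One simple way to approach the first of these tasks is a gazetteer lookup. The Python sketch below is a toy illustration only: the entity lists, labels and matching strategy are invented for the example and are far simpler than the techniques actually explored in the project.

```python
import re

# Toy gazetteer-based named entity recognizer. The term lists and labels
# below are made up for illustration; they are not OpenMinTeD resources.
GAZETTEER = {
    "ORG": ["European Social Survey", "Eurobarometer"],
    "VARIABLE": ["political trust", "life satisfaction"],
}

def find_entities(text):
    """Return (label, surface form, start offset) for each gazetteer hit."""
    hits = []
    for label, terms in GAZETTEER.items():
        for term in terms:
            for m in re.finditer(re.escape(term), text, re.IGNORECASE):
                hits.append((label, m.group(0), m.start()))
    # Sort hits by their position in the text.
    return sorted(hits, key=lambda h: h[2])

sentence = "We measure political trust using the European Social Survey."
for label, surface, start in find_entities(sentence):
    print(f"{label}: {surface} @ {start}")
```

Real systems replace the hand-written term lists with large vocabularies or trained statistical models, but the input/output shape (text in, labelled spans out) stays the same.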
Would you like to get more insight into the world of text and data miners? Daniel Duma, a PhD student at the Alan Turing Institute and the University of Edinburgh, shares his story in a short movie. He is working on software that recommends relevant papers to scientists as they write.
If you want to do text and data mining in the EU, you run into a complex legal framework of copyright rules. During the OpenMinTeD webinar of November 23rd, this legal framework and its limits and opportunities were discussed with legal as well as non-legal TDM experts. Recordings of the webinar and the discussion are available online.
There are situations where text miners struggle to get the textual data to perform the mining on in the first place. One problem for us is that most scientific publications – especially in the social sciences and humanities – are only available as PDF, a format that is not suited to being read and processed by computers. The OpenMinTeD social sciences work group accepted the challenge of working on this problem.
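Even once plain text has been pulled out of a PDF, extraction artifacts such as hyphenated line breaks and hard line wraps remain. A minimal Python sketch of that kind of cleanup (the regular expressions are illustrative assumptions, not the work group's actual pipeline):

```python
import re

def reflow(raw: str) -> str:
    """Undo two common PDF-extraction artifacts:
    hyphenated line breaks ('pro-\\ncessing') and hard line wraps."""
    # Rejoin words split by a hyphen at end-of-line.
    text = re.sub(r"-\n(?=\w)", "", raw)
    # Replace remaining single newlines with spaces, but keep blank
    # lines so paragraph breaks survive.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    return text

raw = "Text pro-\ncessing of PDF out-\nput is hard.\n\nNew paragraph."
print(reflow(raw))
```

A real pipeline also has to deal with multi-column layouts, headers and footers, and words that are legitimately hyphenated, which is exactly why PDF is such an unfriendly source format for mining.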
Are you looking for support or training for text and data mining? Then you’re in the right place! OpenMinTeD recently released a Knowledge Base that will host open access support and training material. At the moment we are still in the process of uploading content, but you can already have a look.
Text and data mining is important to different scientific communities, but what do these different user communities need to mine successfully? One of the aims of work package 4 of the OpenMinTeD project is to collect these requirements. This was done using a combination of methods, including online surveys and focus groups. The results are summarised in the ‘White Paper on OpenMinTeD Community Requirements’, which was finished last week.
CORE is an aggregation service that harvests open access journals and repositories, institutional and disciplinary, from around the world. It offers one of the largest collections of scientific content via its Datasets, ready to be text-mined. We encourage everyone to use it as part of OpenMinTeD and beyond.
How the FutureTDM workshop highlighted that the draft exception must be improved for TDM to have a future in Europe
For the legal geeks among us, it is now old news that the European Commission, after promising to modernise copyright, issued a rather unhinged and disappointing copyright review proposal aimed at creating what it claims to be a ‘well-functioning marketplace’.
Let’s take a step into the near future.
A shared global data space for agriculture and food will propel the industry forward. Information will become available to all actors producing innovation.
Hi there, I’m Lucie Guibault, Associate Professor at the Institute for Information Law of the University of Amsterdam.
Over the past few years, I became increasingly aware of TDM as a research method in all fields of science and the humanities. With the increase in computational capacity, the growth of born-digital information and the digitisation of collections, the use of TDM in research is on its way to delivering tremendous societal and economic benefits. Think about all the new insights and cost savings that would otherwise not be possible. This means more scientific breakthroughs and a greater understanding of society.
On 22-23 June 2016, OpenMinTeD organised its third stakeholder workshop at the Joint Conference on Digital Libraries in Newark, just outside of New York City. The workshop, called “the International Workshop on Mining Scientific Publications,” was organised by the Open University for the fifth time (almost every time in conjunction with JCDL) and featured speakers from OpenMinTeD, as well as speakers who presented their text and data mining research results.
Our efforts towards improving interoperability in the communities of Text Mining (TM) and Natural Language Processing (NLP) continue. OpenMinTeD organised a workshop on this subject at the International Conference on Language Resources and Evaluation (LREC) on 23 May 2016. Alessandro Di Bari (IBM) opened the workshop with a keynote on transferring ideas from the model-driven approaches of software engineering to enhance interoperability in TM and NLP.
Conducting TDM activities in the current legal context is very difficult. This is due to the unclear and incoherent legal framework for copyright licences and to the highly fragmented landscape of copyright exceptions and limitations in the EU. In this blogpost, we’ll discuss the current legal context and what needs to change to open the path for TDM in the EU.
On 13 June 2016, the OpenMinTeD project organised its third stakeholder workshop titled “Mining Repositories: How to assist the research and academic community in their text and data mining needs”. The workshop took place in Trinity College Dublin as part of the OpenRepositories Conference, and brought together repository managers from all over the world who are interested in text and data mining.
The seventh Berlin Buzzwords, Germany‘s leading conference on open source big data technologies, was held from 5-7 June 2016 at the Kulturbrauerei in Berlin. The Kulturbrauerei, a heritage-protected former brewery with many courtyards and buildings, is a spacious and very interesting venue for cultural events.
On 22 May 2016, OpenMinTeD held its second stakeholder workshop at the LREC conference in Portorož, Slovenia. The workshop took place in the form of a roundtable, and brought together strategic players and stakeholders from the language technology community and neighboring areas. Stelios Piperidis (Athena Research Center / ILSP) led the discussion. Among the attendees were representatives from CLARIN-CZ, CLARIN-ERIC, OpenAIRE, ELDA and LAPPS Grid.
The use of keywords is crucial for the description, organization, indexing, retrieval and sharing of research in every scientific field, and agriculture is no exception. However, manual annotation of research outcomes is time-consuming and error-prone, so automatic methods for metadata annotation are continually being explored. AgroTagger is one of the tools facilitating the work of information and knowledge managers (among others) in the agri-food sector, by applying text mining on top of agri-food research outcomes.
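To illustrate the general idea behind automatic keyword assignment, here is a toy term-frequency / inverse-document-frequency (TF-IDF) scorer in Python. It is a simplified stand-in, not AgroTagger's actual algorithm, which works against a controlled agricultural vocabulary; the corpus and stopword list below are invented for the example.

```python
import math
from collections import Counter

# Words too common to be useful keywords (a tiny illustrative list).
STOPWORDS = {"and", "in", "on", "the", "for", "of"}

def top_keywords(doc, corpus, k=2):
    """Score each word in `doc` by TF-IDF against `corpus` and
    return the k highest-scoring words."""
    words = [w for w in doc.lower().split() if w not in STOPWORDS]
    tf = Counter(words)
    n_docs = len(corpus)

    def idf(w):
        # Smoothed inverse document frequency: rarer words score higher.
        df = sum(1 for d in corpus if w in d.lower().split())
        return math.log((1 + n_docs) / (1 + df))

    scored = {w: tf[w] * idf(w) for w in tf}
    return sorted(scored, key=scored.get, reverse=True)[:k]

corpus = [
    "wheat yield depends on irrigation",
    "grape harvest and irrigation in vineyards",
    "soil quality affects wheat and grape crops",
]
print(top_keywords(corpus[1], corpus))
```

A vocabulary-driven tagger like AgroTagger goes further: instead of ranking raw words, it maps text to concepts from a thesaurus, so the assigned keywords are consistent across collections.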
Can you text mine agricultural content?
“Absolutely!” is the answer that AgroKnow will give you. And they can prove it! AgroKnow is one of the partners in the OpenMinTeD project, and they are already very active in projects that apply text mining technologies to the agricultural sector.
Are you a researcher in frequent need of searching and accessing textual content? Does your research involve looking for information in repositories of publications, reports, patents, and other textual content archives?
Then we are looking for your input!
Does your company develop text-mining powered applications? Would you benefit from a platform that provides access to a variety of text mining tools and components, along with the possibility to examine their specifications and performance? Are you an application developer in need of integrating text-mining services in your software? Then we are looking for your input!
Does your organisation have tons of data that you want to make available for text and data mining? Would you benefit from an infrastructure that brings your data together with text and data mining tools? Are you a repository manager, a publisher, or do you represent any other type of content collection?
Then we are looking for your input!
Are you a researcher in text and data mining? Would you benefit from making your mining software widely discoverable and interoperable, and would you like to easily explore and evaluate the work of other researchers in your field?
Then we are looking for your input!
In association with the OpenMinTeD project, The Open University organises the 5th International Workshop on Mining Scientific Publications (WOSP) at JCDL 2016.
The workshop is organised by the Open University and aims to give a useful overview of Text and Data Mining (TDM). The topics of the workshop are organised around the following themes:
The 9th GATE training course will be taught this June, at The University of Sheffield, and we are looking for you to join us! GATE, or the General Architecture for Text Engineering, is a mature, comprehensive suite of tools for information extraction, natural language processing and related tasks that has been developed continuously since 1995 at the University of Sheffield. The course is open to industrial and academic participants of any ability or experience level.
On February 29th, researchers from around the world gathered in Tokyo for the data sharing symposium “Data-driven Science – The trigger of Scientific development”. It was a place of vibrant discussion of the opportunities and challenges brought by current trends such as open science, data-driven research and big data. OpenMinTeD, which regards openness as one of its basic principles, participated in this event.
At the end of last year, I presented a webinar to the American Medical Informatics Association on clinical text mining and text engineering – applying text mining to medical records. This is not an area that we are concentrating on in OpenMinTeD, but it is still an area on which we should keep a watchful eye. There is a rapid growth of text mining over medical records, and it exposes issues and problems that we need to be aware of.
The OpenMinTeD project is divided into different tasks. It is the task of Agroknow to carry out the important job of gathering TDM requirements from our stakeholders (OpenMinTeD’s future platform users and contributors), so that OpenMinTeD will build a TDM platform that meets the requirements of our platform stakeholders as well as possible. We focus on gathering requirements from four different scientific domains, represented by the following different communities.
Final Call for Submissions: Cross-Platform Text Mining and Natural Language Processing Interoperability
Recent years have witnessed an upsurge in the quantity of available digital research data, offering new insights and opportunities for improved understanding. Following advances in Natural Language Processing (NLP), text and data mining (TDM) is emerging as an invaluable tool for harnessing the power of structured and unstructured content and data. Hidden and new knowledge can be discovered by using TDM at multiple levels and in multiple dimensions. However, text mining and NLP solutions are not easy to discover and use, nor are they easy to combine for end users.