Supported by the Ministry of Higher Education, Research and Innovation (MESRI) in the framework of the Digital Scientific Library (BSN) and the Committee for Open Science (CoSO), the Visa TM project (Towards Advanced Text-Mining Services Infrastructure), led by INRA, was launched in June 2017 for a two-year period.
This project aims to study the conditions for producing high value-added TDM services based on semantic analysis, by combining the interests and complementary strengths of the various partners: an IST operator (Inist), a research establishment (INRA) and a university (University of Montpellier).
OpenMinTeD was well represented, as three of the project's main partners (Institut National de la Recherche Agronomique, France; Athena Research & Innovation Center, Greece; the Ubiquitous Knowledge Processing Lab (UKP) at Technische Universität Darmstadt, Germany) were invited to give presentations and took part in fruitful sessions and discussions.
Claire Nédellec presented VisaTM, focusing on bridging the gap between needs and solutions.
On the strategic side of TDM, applicable to broad communities, Natalia Manola from the Athena Research & Innovation Center pointed out the need, within the EOSC, to use TDM to make publications and data smart and actionable through OpenAIRE and OpenMinTeD.
From the point of view of research developers working on TDM, Dr.-Ing. Richard Eckart de Castilho gave a very interesting talk about the Open Development Strategy and data mining tools such as DKPro Core (software components for natural language processing) and INCEpTION.
Finally, Sylvain Massip gave a talk imagining future TDM services, collecting the outputs of the workshop dedicated to this topic at the VisaTM Day. The ultimate dream: a service built on top of all the TDM services that can answer any natural-language question!
You may consult and download from the VisaTM blog:
– the eight public reports of the project, which detail the various points discussed during the day,
– feedback from the afternoon workshops – with thanks to the facilitators and contributors
OpenAIRE becomes a fully fledged organisation
An EU organisation to facilitate openness in scholarly communication
October 29, 2018
OpenAIRE is happy to announce today the formation of its legal entity, OpenAIRE A.M.K.E., a non-profit partnership, to ensure a permanent presence and structure for a Europe-wide infrastructure for national policies and open scholarly communication.
“OpenAIRE has reached a milestone: for ten years we have spearheaded the principles of openness, and we have now emerged as a key player in the Open Science landscape in Europe with global ties. Open Science practices are gaining global momentum, and committed players are needed to support this shift. OpenAIRE as an organisation will, from now on, provide a permanent platform to support tomorrow’s research for Europe. We can’t wait to make this work and, to achieve this, we actively invite the contribution of the Open Science and research community.”
Prof. Yannis Ioannidis, OpenAIRE A.M.K.E Interim Head
About OpenAIRE: OpenAIRE (www.openaire.eu), funded by the EC since 2008, has led the shift to open scholarship in Europe and helped alignment with the rest of the world. An e-Infrastructure with a true EU footprint, OpenAIRE promotes open scholarship and improves the discoverability, accessibility, shareability, reusability, reproducibility and monitoring of data-driven research results, across scientific disciplines and thematic domains, cross-border in Europe and beyond.
We democratise the research life-cycle, by assisting the transition of how research is performed and knowledge is shared.
A community-driven organisation at heart, OpenAIRE addresses, via its 34 National Open Access Desks (NOADs) in EU member states and associated countries, accompanied by a service-driven architecture, the “no one-size-fits-all” needs of Europe’s diverse research communities and cultural variety, making this unique infrastructure an integral part of, and a leading force behind, the development of the European Open Science Cloud (EOSC).
Structure: Following a hybrid model of member organisation and member state representation, the OpenAIRE A.M.K.E. aims to become the foundation for national coordination on Open Science in Europe, achieving long-term sustainability and economies of scale.
Becoming a member: OpenAIRE A.M.K.E. sets off with its current base. To accomplish a truly open and participatory modus operandi, it is open for other organisations to join from February 2019 onwards. Members of the organisation will apply their expertise in their national or thematic contexts to:
- Support of reproducible research with technical services
- Alignment of Open Science policies
- Support & Training for Open Science
Our members are expected to actively contribute to shaping the European open scholarly communication infrastructure, capitalising on their collective experience in Open Science. In this new setting, we will continue and strengthen our efforts within the EOSC context to engage all EU and associated member states to commit to the alignment and implementation of Open Science and outreach to other organisations beyond the OpenAIRE project base.
Announcement video by Professor Yannis Ioannidis
Further information on OpenAIRE: https://www.openaire.eu/organization
Who to contact to learn how to join the OpenAIRE organisation: Prodromos Tsiavos at email@example.com
Type of legal entity: OpenAIRE has the legal form of a Non-Profit Partnership (NPP) incorporated under the provisions of Greek Law (articles 741 onwards of the Greek Civil Code) and Law No 4072/2012.
Background: the Open Science era plus the sheer volume of scholarly works (about 2.5 million peer-reviewed publications every year in English alone)
What OpenMinTeD is about: Researchers, Open Access publishers, librarians, repository managers and SMEs can now easily harness the power of text and data mining (TDM) for scientific content. The recently launched OpenMinTeD infrastructure, funded by European Commission H2020 Grant 654021 and a preamble to the European Open Science Cloud, enables the registration and deployment of existing TDM tools and applications and their connection to OA scientific content, allowing researchers to seamlessly discover, share, analyse and re-use knowledge. All of this is well presented and operates on a cloud infrastructure. This is made possible through the OpenMinTeD Interoperability Guidelines, which address interoperability aspects for content and services.
Does your work involve supporting researchers who are interested in Text and Data Mining (TDM)? Do you have an interest in the topic, but no coding or computer skills? Then this course may be interesting to you: OpenMinTeD and the University of Cambridge developed a free online course on text and data mining for ‘non-tech people’.
The OpenMinTeD event titled ‘Paving the way for text and data mining in science’ was successfully organized in Brussels on May 24th, 2018. It was an open invitation to all TDM stakeholders in Europe (publishers as content providers, TDM experts, researchers and SMEs). The event’s agenda was carefully designed so as to provide a full TDM experience while focusing only on OpenMinTeD. After all, OpenMinTeD is a “TDM Hub” of TDM applications and components combined with open access content from open access aggregators.
The event started with a brief welcome and a short introduction of what OpenMinTeD is by the OpenMinTeD coordinator and OpenAIRE Managing Director, Natalia Manola.
Next, the EC perspective on Text and Data Mining and Open Science was presented by two EC officers, Caroline Colin and Jean-François Dechamp. In their presentation, the audience was informed about the main objectives of the new directive on copyright in the Digital Single Market:
- Modernising EU rules on key exceptions and limitations in the areas of research, education, and preservation of cultural heritage
- Facilitating licences in order to ensure wider access to content (out-of-commerce works, negotiation mechanism/VoD platforms)
- Introducing fairer rules for a better functioning copyright marketplace (press publisher’s rights, value gap, remuneration of authors and performers)
Furthermore, it was explained why the EC cares about TDM:
- Science is different: authors usually give away their copyright, and licence-based solutions for scientific papers do not seem to work
- The sheer amount of digital content requires massive analysis with TDM, and almost all scientific journals, like research library collections, are already available online
- Open Science is supported by public funding, is composed of multi-disciplinary sources from public and private owners, and allows reusability of data
Additionally, it was mentioned that the European Commission’s proposal in the European Council on TDM was to set a mandatory exception allowing research organisations to carry out TDM, for scientific research purposes (commercial and non-commercial), on content to which they have lawful access.
The next session presented a storyline on TDM, “Making sense of Science”. Stelios Piperidis (Institute for Language and Speech Processing, Athena Research & Innovation Center) also told the story of OpenMinTeD: how it started, and how you can now process, share and discover TDM tools and content. The presentation pointed to the massive production of content in general, and of scientific content in particular (2.5 million articles per year). It stressed the need to make sense of all that data by using machine learning, understanding entities, relations and structures, and extracting meaningful insights to improve the ability to predict. Even though there are solutions out there, they focus on different text types, domains, tasks and languages, creating a complex landscape. This complexity triggered the initiation of the OpenMinTeD project and its services, which focus on content providers, software providers, researchers and SMEs. The services and the overall operation of OpenMinTeD were explained.
The services of OpenMinTeD platform are briefly the following:
- The OpenMinTeD catalogue of corpora, mainly datasets of open access scholarly publications, registered in the OpenMinTeD platform. Users can view and browse publicly available corpora.
- The OpenMinTeD catalogue of TDM applications. The catalogue targets users with little or no prior text mining experience, who can search for, discover and easily use ready-to-run applications on content registered in the platform.
- The OpenMinTeD catalogue of TDM components, i.e. pieces of software that perform basic tasks and can be reused to build applications, targets mainly TDM developers who know how to combine them together in order to build workflows with the OpenMinTeD workflow editor and finally offer them to end-users in the form of ready-to-use applications.
- The OpenMinTeD catalogue of ancillary knowledge resources includes Machine Learning (ML) models and computational grammars that can be combined with TDM software, as well as annotation resources (lexica, ontologies, etc.) that can be used for annotating content resources. Users can browse through the catalogue or discover resources according to specific criteria.
- The OpenMinTeD TDM application execution service. This service targets primarily researchers with little or no knowledge of text mining who need to find and run TDM applications on content without going through complicated processes.
- The OpenMinTeD corpus builder of scholarly works. This service allows users to form a collection of open access scholarly and scientific content from major content aggregators (i.e. OpenAIRE, CORE) and create a “corpus” to mine.
- The OpenMinTeD builder of TDM applications, where users can build new TDM applications by combining various TDM components. The service is intended for expert TDM developers who know how to configure the TDM components.
- OpenMinTeD TDM Support & Training services that aim to (a) raise awareness about TDM among researchers and instruct them on how to integrate it into their research activities and workflows, and (b) promote the OpenMinTeD platform. The OpenMinTeD services on TDM support & training include FAQs, webinars, tutorials, TDM stories, courses and guidelines. More can be found in the OpenMinTeD Knowledge Base on the FOSTER platform.
- Catering for legal interoperability, OpenMinTeD has elaborated a licence compatibility matrix, a service whose usefulness extends beyond OpenMinTeD. It demonstrates the compatibility among available licences on content, software and services.
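To illustrate the idea behind the last service in the list, here is a minimal, hypothetical sketch of a licence compatibility lookup. It is not the actual OpenMinTeD matrix: the licence names are real Creative Commons licences, but the table entries and function are illustrative only.

```python
# Hypothetical sketch of a licence compatibility matrix (not the OpenMinTeD
# service itself). Each entry states which licence a combined work must
# carry when inputs under the two licences are mixed; None marks an
# incompatible combination.
COMPAT = {
    ("CC BY", "CC BY"): "CC BY",
    ("CC BY", "CC BY-SA"): "CC BY-SA",    # share-alike propagates
    ("CC BY-SA", "CC BY-SA"): "CC BY-SA",
    ("CC BY", "CC BY-NC"): "CC BY-NC",    # non-commercial restriction propagates
    ("CC BY-NC", "CC BY-SA"): None,       # SA forbids adding an NC restriction
}

def combined_licence(a, b):
    """Return the licence of a work combining inputs under `a` and `b`,
    or None if the combination is not permitted; the matrix is symmetric."""
    if (a, b) in COMPAT:
        return COMPAT[(a, b)]
    return COMPAT.get((b, a))

print(combined_licence("CC BY", "CC BY-SA"))    # CC BY-SA
print(combined_licence("CC BY-NC", "CC BY-SA")) # None
```

A real matrix must of course cover many more licences (for content, software and services alike), which is precisely the gap the OpenMinTeD compatibility matrix addresses.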
Lastly, Piperidis demonstrated how OpenMinTeD has been reaching out to scientific communities since the very beginning of the project, in scholarly communication, life sciences, agriculture and social sciences.
The next session was on TDM for scientific literature in practice, starting with the publishers and closing with the success stories of three winners of the second OpenMinTeD Open Call, aimed at software providers. The publishers who kindly accepted our invitation to participate in this discussion were Elizabeth Crossick (RELX Group), Frederick Fenter (Frontiers) and Stuart Taylor (Royal Society). All three publisher representatives explained that the TDM approach of analysing many articles is crucial to assisting research.
The session started with the panelists making brief presentations on the barriers to and opportunities of TDM from their own perspectives and experiences. It was then followed by an open discussion between the panelists and the audience. Several key themes were touched upon, including technical and policy barriers to mine content from scientific publishers, expectations and trust both from the publisher perspective and the miner perspective, opportunities for effective collaboration and mutual benefits, licensing and the role of Open Access publishing in TDM.
Throughout the discussion, the collaborative aspect was emphasised, along with the TDM community’s need to be able to efficiently mine the corpora hosted on publisher platforms without incurring unnecessary technical barriers. Both the panelists and the audience agreed that it is extremely important to lower barriers to TDM as much as possible within the legal framework of copyright, and that only through thoughtful and practical conversations with the community will publishers be able to provide the best services in support of efficient and effective TDM practices.
The session was completed with the following presentations:
Three winners of the Open Calls were invited to present their work. Horacio Saggion (TALN Group, Universitat Pompeu Fabra, Barcelona) showed the “Scientific Summarization Services” tool that his team has integrated into the OpenMinTeD platform. It automatically identifies the most important information in a research article by analyzing, extracting and characterizing several aspects of each sentence. This information is used to compute different scores to rank each sentence of the article.
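As a rough illustration of sentence scoring and ranking (a deliberately simplified sketch, not the actual Scientific Summarization Services algorithm), an extractive summariser can score each sentence by the document-level frequency of its words and return the top-ranked sentences:

```python
import re
from collections import Counter

def rank_sentences(text, top_n=2):
    """Toy extractive-summarisation sketch: score each sentence by the
    average document-level frequency of its words, then return the
    top_n highest-scoring sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:top_n]

article = "Cats eat fish. Dogs bark. Cats and cats like fish."
print(rank_sentences(article, 1))
```

Real systems combine many such per-sentence features (position, cue phrases, similarity to the title, and so on) rather than a single frequency score, which is the kind of multi-aspect characterisation the presentation described.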
Fabio Rinaldi (University of Zurich and Swiss Institute of Bioinformatics, Switzerland), presented the “BTH & OGER for OpenMinTeD” tool integrated in OpenMinTeD. The OntoGene’s Biomedical Entity Recogniser (OGER) allows annotation of a collection of documents, while the Bio Term Hub is a one-stop site for obtaining up-to-date biomedical terminological resources.
Matthew Shardlow (Manchester Metropolitan University) presented a text mining application for journalism, integrated into the OpenMinTeD platform. “A journalist must be a temporary expert in a wide variety of topics.” Starting from this fact, the presentation showed how the five W’s (What, Where, When, Who, Why) a journalist has to answer can be found by searching the scientific literature with this text mining tool.
Continuing, the legal session took over, with Maria Rehbinder (Aalto University) and Prodromos Tsiavos (Athena Research Center) accepting the invitation to join. With the GDPR entering into force across Europe almost on the same day, an open discussion began on its effect on TDM. Would the GDPR signal the death of TDM? Thomas Margoni (University of Glasgow, CREATe) explained how OpenMinTeD managed to overcome legal challenges and barriers and how it informed researchers, TDM experts and content providers. The key element was the “Compatibility Matrix” created within the OpenMinTeD project to guide stakeholders on combining licences on content, software and services.
At the end of this session, the winners of Open Call 2 discussed and commented on the unique features of OpenMinTeD in comparison to other platforms in this area. Unlike other TDM orchestration platforms, OpenMinTeD enables a very flexible way of integrating text and data mining components from widely used TDM frameworks, including UIMA and GATE, as well as the use of custom-built TDM components as Docker images and external web services. Another powerful feature mentioned is the availability of large corpora and text processing tools within the same platform.
The legal session offered an overview of the main results of the project’s legal interoperability working group, led by Thomas Margoni from CREATe – University of Glasgow. It started with a brief overview of the current EU legal framework in the field of TDM and of why the currently proposed text of Art. 3 (the TDM exception for research organisations), while underpinned by the right innovation policy goal, is not satisfactory. Furthermore, in addition to the already mentioned licence compatibility matrix, a set of supporting documents (e.g. the Open Science Fact Sheet and an Open Access FAQ) and a recent analysis of the legal implications of training models for natural language processing (NLP) applications (poster here) were showcased. These results and documents were presented in the format of an open discussion. Maria Rehbinder (Aalto University) kindly accepted to moderate, and Prodromos Tsiavos (Athena Research Center) offered a high-level perspective extending to privacy/data protection (very timely, as the GDPR entered into force the next day!) and Public Sector Information, suggesting that these latter pieces of EU law, which are or have been the object of recent reforms or reform proposals, may offer a better source of inspiration for the future challenges of data governance.
The last session, 3YFN (3 years from now), was a panel discussion focusing on the potential use of TDM technologies, platforms and infrastructures in the near future. How will industry respond and move towards TDM adoption? What do researchers foresee? The panel was composed of: Alfonso Valencia (ELIXIR & Barcelona Supercomputing Center), Laurence El Khouri (ISTEX & National Center for Scientific Research (DIST/CNRS)), Sophia Ananiadou (NaCTeM, National Centre for Text Mining, University of Manchester) and Claire Nédellec (INRA, Institut national de la recherche agronomique).
Presentation materials here:
Ewoud Sanders is best known for his weekly column WoordHoek (‘Word Corner’) in the newspaper NRC Handelsblad where he writes about the history of Dutch words and expressions.
He is on a quest to improve digital access to printed Dutch language resources and his pamphlet Eerste Hulp Bij e-Onderzoek (‘First Aid for e-Research’) has been reprinted 16 times and distributed free of charge to students by several Dutch institutes of higher learning. In 2011, Google gave him a grant of $15,000 to help improve internet searching in the Netherlands.
The Bibliome group at the French National Institute for Agricultural Research (INRA) has developed a text-mining application that extracts fine information about seed development from thousands of texts. It gives scientists better and quicker access to how molecules, genes and proteins interact when a seed starts to grow.
Good Seed Makes a Good Crop
Inside a seed are components such as molecules, genes and proteins. The presence of these components and how they interact determines if a particular seed can be used for human or animal consumption or by industry. A better understanding of seed biology and development is therefore important for both crop breeders and industrial companies. Finding out which genes interact with which protein in which tissue at which stage is a key question for researchers in plant breeding.
Stephane Schneider is IT project manager at the Institute for Scientific and Technical Information (INIST-CNRS). INIST has one of the most important collections of scientific publications in Europe and provides a range of information search services for science and higher education. Stephane tells about his work and what he expects for the future of TDM.
The Proposal for a Directive on Copyright in the Digital Single Market (the Proposal) contains a number of provisions intended to modernise EU copyright law and to make it “fit for the digital age”. Some of these provisions have been the object of a lively scholarly debate in the light of their controversial nature (the proposed adjustment of intermediary liability for copyright purposes contained in Art. 13, see here at p. 7) or because they propose to introduce a new right within the already variegated EU neighbouring-rights landscape (i.e. the protection for press publishers contained in Art. 11).
The event ‘OpenMinTeD: Paving the way for text and data mining in science’ marks the official launch of the OpenMinTeD platform (www.openminted.eu, services.openminted.eu). We would like to invite you to join us for a live discussion on the way forward.
To join the event, a registration via Eventbrite is required here.
As part of the OpenMinTeD project, INRA has been working on a text mining application dedicated to food microbiology. This infographic will tell you the story.
With a tasty bite of cheese necessarily come some microbial strains. Some of them are well known, but the presence of others can puzzle researchers and they might want to investigate why they are there. A better understanding of microorganisms, their interaction and their adaptation to their environment are important issues for research and industry. It could help improve public health or develop innovative products.
To make sense of the huge amount of scientific text and data available, we need text and data mining (TDM). The European project OpenMinTeD has been paving the way for TDM in science by working on an infrastructure for the past three years. We would like to invite you to join us for our event in Brussels on May 24th. Learn about best practices in TDM, perspectives of different stakeholders, the GDPR and the future of TDM and OpenMinTeD.
Dr Jane Reed is Head of Life Science Strategy at Linguamatics, a UK-based company which makes TDM tools to help companies in the healthcare and pharmaceutical industries. She spoke to OpenMinTeD about how TDM is being used to speed up drug discoveries and treat patients, and gave a vision for the future of text and data mining.
Read the full interview below, or download a printable version to share with others.
Dr Alan Akbik is a Research Scientist at Zalando Research. He’s using text and data mining to create tools which can be developed in one language and then applied automatically to other languages. This is valuable for companies such as Zalando, which work in many different countries around the world.
Read the full interview below, or download a printable version to share with others.
Federico Nanni is a researcher who uses TDM to build collections of materials from large archives which can be used to better understand recent, historically critical events such as the rise of Euroscepticism as a consequence of the recent economic crisis.
It’s time for our final episode of this series of ‘Key concepts and areas in TDM explained’. This time Robert Patton of the Oak Ridge National Laboratory introduces Deep Learning and discusses how it can be applied in practice.
Knowledge discovery is the process of discovering new information. In text and data mining this happens for example by finding new connections or trends in a large amount of text and data. Ron Daniel is director at the Elsevier Labs. He explains Knowledge Discovery and Knowledge Representation in three short videos.
It was a great honour and opportunity to interact with the Docker community during the meetup in Athens on November 29th. More than 30 people attended our talk ‘A scalable, virtual, flexible workflow infrastructure in OpenMinTeD stack’. The talk covered the software stack responsible for executing Text and Data Mining (TDM) workflows on a distributed cloud environment. The workflow setup greatly overlaps with (but is not limited to) modern containerization technologies and especially Docker.
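The core idea behind such a workflow stack can be sketched in a few lines: each step is an independent component (in the OpenMinTeD stack, typically packaged as a Docker image and dispatched to a cloud cluster) that consumes the previous step's output. The sketch below is purely illustrative, with plain Python functions standing in for containerised components so the chaining logic is visible without a container runtime; the step names are made up for the example.

```python
# Illustrative sketch of workflow chaining (not the actual OpenMinTeD
# execution engine): run each component in order, feeding its output
# to the next, exactly as a containerised pipeline would pass
# intermediate results between steps.
def run_workflow(document, steps):
    """Execute (name, component) pairs sequentially on `document`."""
    data = document
    for name, component in steps:
        data = component(data)
        print(f"step {name!r} done")
    return data

# Hypothetical three-step TDM pipeline.
steps = [
    ("tokenise", lambda text: text.split()),
    ("lowercase", lambda tokens: [t.lower() for t in tokens]),
    ("count", lambda tokens: len(tokens)),
]

result = run_workflow("Text And Data Mining", steps)
print(result)  # 4
```

In the containerised setting, the main additions are isolation (each step runs in its own image), scheduling across cloud nodes, and serialising the intermediate data between steps, but the control flow is the same sequential hand-off.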
In the old days, if you did a search in a search engine, you would get a lot of irrelevant hits that for some reason contained the keyword you used. Nowadays search engines give you much better results, because they put the keyword into context. This new way of searching is called ‘Semantic Search’. Waleed Ammar of the Allen Institute for Artificial Intelligence explains semantic search, its challenges and the state of the art in a few short video clips.
Thomas Margoni and Giulia Dore of the University of Glasgow have developed a matrix and two fact sheets on open science and licensing. They presented the tools at the IP summer summit in Glasgow last June. The tools can help researchers, repository owners and many others with how to use open access licences in the context of text and data mining. Curious? You can access the tools through the links in this blogpost.
Are you ready to develop and share an application or software component for text and data mining (TDM)? Or do you have knowledge resources that you would like to share and integrate with our platform? OpenMinTeD is looking for service providers, innovators, SMEs and researchers who can join and build on the platform! You can apply for this call until 26 January 2018. Winners of the call will be awarded a sum of money to implement their plans. You will also be part of an online hackathon to help you along the way.
Mads Rydahl is the founder of UNSILO, a Danish start-up that applies machine learning to scientific publishing.
Iana Atanassova, Centre Tesnière – CRIT, University of Bourgogne Franche-Comté, is using Text and Data Mining (TDM) to study full-text scientific articles. Studying these papers can be a challenge, as they are usually in a format that is hard to process.
Daniel Duma is a PhD candidate at Alan Turing Institute and University of Edinburgh. He’s creating software that will plug into your existing word processor or text editor. The software will then use text and data mining to recommend papers that you should be aware of, you should read or that you would want to cite.
One of the things you can do with text mining is discovering conceptually related items within a collection of text and data. Want to know more? Anas Alzogbi is a research assistant and doctoral student at the University of Freiburg. He explains Recommenders and Filtering in four short videos.
It’s time for the second part of ‘Key concepts and areas in TDM explained’. This time, Jevin West tells us more about “Text and Data Mining” and “Knowledge Representation” in three short videos. Jevin West is Assistant Professor at the University of Washington and Co-ordinator of DataLab.
During 25 – 27 October OpenMinTeD participated in the FORCE2017 Research Communication and e-Scholarship conference that brings together a diverse group of people interested in changing the way in which scholarly and scientific information is communicated and shared.
The OpenMinTeD project co-organized, with Agroknow and the AIMS team, a webinar entitled “The Text and Data mining functionalities of the PoolParty Semantic Suite”. The webinar took place on 21 September 2017.
The deadline for submissions for the call for content has been extended by one week, to November 5th. Were you thinking about submitting a proposal, but too busy in recent weeks? This is your chance! All information is available on the OpenMinTeD Open Tenders blog.
What are the benefits of text and data mining (TDM) and how can its practices be applied in science? We asked recognised experts in the field to introduce key areas and concepts in short videos. The videos will be released during the following weeks in a series of blogposts. Today we start with day 1: introduction to text and data mining. The videos will also be part of the TDM Knowledge Base.
From September 6th– September 8th, over 200 people with an interest in open science came together in Athens for the Open Science Fair. OpenMinTeD was one of the co-organisers, and also organised a workshop on text and data mining. The first part of the workshop showcased successful TDM initiatives. The second part was focused on content providers and was more OpenMinTeD specific.
We are happy to announce that the OpenMinTeD platform for text and data mining is now ready to accept content. We invite publishers, repositories, libraries and other holders of scholarly publications to join the open call for content, by submitting a proposal by 29 October 2017 at the latest.
We are pleased to invite you to attend an upcoming Webinar on the Text and Data mining functionalities of the PoolParty Semantic Suite.
In 2016, 30 people from important institutions all over the world came together for the first Open Harvest gathering. The goal was to set the stage for a global data infrastructure for agriculture and food. One year later, Agroknow presented the OpenMinTeD application VITIS at Open Harvest 2017.
Many university and national libraries are exploring the best way to support researchers with text and data mining. That’s why on July 5th 2017, OpenMinTeD and FutureTDM organised a workshop about text and data mining at the LIBER conference in Patras. 4 different speakers guided 16 participants through the various aspects of TDM.
Tom Potok works at the Oak Ridge National Laboratory in Tennessee. He has been in the field of text and data mining for twenty years and worked on a wide variety of things. Some of the biggest challenges are the amounts of information out there, and trying to figure out how the mind works with text.
Benj Pettit works at Mendeley and works on text and data mining tools that help researchers to find new articles, collaborators etc. One of the special things about the Mendeley catalogue is that it is formed in a crowdsourced way.
Open Science is a new research paradigm that is facing many challenges. In order to improve the uptake of Open Science, four EU-projects join forces and organise an event that will showcase critical elements, from infrastructures to policies and new types of activities. Join us for the Open Science FAIR, September 6-8 in Athens, and get inspired.
Last April 26-27 the BioCreative V.5 Challenge Evaluation Workshop took place in Barcelona. The goal of BioCreative V.5 was to address some of the major barriers to the adoption and use of text mining tools, related to assessment, accessibility, interoperability, robustness and integration.
25 years ago, when Laurents Sesink was still a history student, his thesis on political internal relations included a lot of reading and tally marks. Back then he already thought “There must be a better way to do this”, so he built a database and started to get into informatics and digitisation. Now he is the head of the Centre for Digital Scholarship at the library of Leiden University.
The 9th Plenary Meeting of the Research Data Alliance (RDA) took place in Barcelona, Spain, from 5 to 7 April 2017. The RDA Plenary Meetings constitute a major event where more than 4000 members from 100 countries come together to discuss, develop and promote data-sharing and data-driven research infrastructure through Working and Interest Groups. The Interest Group on Agricultural Data (IGAD) pre-meeting took place just a couple of days before the 9th RDA plenary meeting, from 3 to 4 April 2017 and attracted more than 100 participants from all over the world.
Frontiers in Neuroinformatics has just released a new paper by O’Reilly, Iavarone and Hill. It describes a systematic framework to curate neuroscientific literature. This framework provides an easier and more reliable way to integrate published data into neuronal models. The work was done in the context of the OpenMinTeD and Blue Brain projects.
On February 20th 2017, Agroknow had the pleasure to host a workshop at the premises of the Agricultural University of Athens (AUA). The workshop was organized together with colleagues from the Laboratory of Viticulture.
You will not catch Steven Claeyssens carrying a smartphone and he will always prefer a paper book to an e-reader. Yet he is the curator of digital collections at the National Library of the Netherlands. I interviewed him about his job, text and data mining (TDM) in the humanities and the role of libraries in the research landscape.
How is a scientific paper structured and how related is it to other papers? These are some of the things that Iana Atanassova of the University of Bourgogne Franche-Comte (Besancon, France) focuses on in her research. She uses text and data mining (TDM) to study full-text scientific articles. Studying these papers can be a challenge, as they are usually in a format that is hard to process.
Marc Bertin, assistant professor at the University of Toulouse, uses text and data mining to study scientific papers. Text and data mining can help us move from an information society to a knowledge society, but not without open access to research papers.
When scientists need information about the structure, name or properties of small molecules, they often turn to a high-quality database called ChEBI. This database is largely curated manually and this process takes a lot of time. OpenMinTeD is working on a text-mining application that can help to speed up the process, while maintaining the quality of the database.
Joris van Eijnatten is professor of cultural history at Utrecht University, The Netherlands. He has a fascination for numbers that not many historians have. Last year he was the research fellow for digital humanities at the National Library of The Netherlands, where he applied text and data mining to study the image people have of Europe based on newspapers. I interviewed him about text and data mining in humanities, his work and his personal romance with numbers.
What is the real novelty of a research paper? How do different researchers contribute to innovation? And does this change throughout their career? Shubhanshu Mishra of the University of Illinois uses text-mining techniques to study the novelty of biomedical articles.
Systematic review of medical research papers can lead to new knowledge and treatments of diseases. The existing software tools however, are very limited and often a lot of manual work is involved. Stephen Gilbert of Iowa State University uses artificial intelligence and machine learning to automate the process of systematic review.
While discussions at the EU on copyright reform and an exception for text and data mining (TDM) are very much live, FutureTDM, a Commission funded project of TDM experts has, for the past year, already been gathering information, mapping the TDM landscape and listening to the wide variety of individuals and organisations involved in data analytics. The project has just produced the first in a series of reports, providing a range of stakeholders with recommendations to improve TDM uptake in the EU. This FutureTDM policy framework document sets out high-level principles and recommendations.
Frederico Nanni was not always a text miner. He actually started out as a historian and then switched to digital humanities. During his PhD, he developed a method to detect interdisciplinary research, based on scientific abstracts. Now, he finds text mining fascinating and thinks more historians should learn how to do it.
It took some time for Drahomira Hermannova to see the value of her research topic, but now she thinks it is the best topic she could ever choose: using text and data mining to evaluate which research can change the world. Not only can this help scientists, it may change the way research is done altogether.
In the OpenMinTeD project, partners from different scientific communities are involved to make sure the OpenMinTeD infrastructure will address their needs. As regards the social sciences, a useful application for text mining is the improvement of literature search and information interlinking. To this end, three main challenges were identified: named entity recognition, automatic keyword assignment to texts and automatic detection of mentions of survey variables. This post gives an overview of these tasks and the progress of work so far.
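Of the three tasks above, named entity recognition is the most concrete to illustrate. A minimal sketch of one common baseline approach is gazetteer-based lookup: matching a curated list of names against the text. The entity names and labels below are purely illustrative assumptions, not the actual vocabularies used by the OpenMinTeD social sciences work group, which relies on trained models and real survey metadata.

```python
import re

# Hypothetical gazetteer; a production system would use a trained NER
# model or curated survey metadata rather than a hand-written dictionary.
GAZETTEER = {
    "European Union": "ORG",
    "Eurobarometer": "SURVEY",
    "Germany": "LOC",
}

def tag_entities(text):
    """Return (entity, label, offset) tuples for gazetteer matches,
    sorted by position in the text."""
    hits = []
    for name, label in GAZETTEER.items():
        for m in re.finditer(re.escape(name), text):
            hits.append((name, label, m.start()))
    return sorted(hits, key=lambda h: h[2])

sentence = "The Eurobarometer survey covers Germany and the European Union."
for entity, label, pos in tag_entities(sentence):
    print(f"{entity} -> {label}")
```

Dictionary lookup of this kind is fast and transparent, but it misses spelling variants and ambiguous names, which is why statistical NER models are usually layered on top.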
Would you like to get more insight into the world of text and data miners? Daniel Duma is a PhD student at the Alan Turing Institute and the University of Edinburgh, and he shares his story in a short movie. He is working on software that will recommend relevant papers to scientists writing papers.
If you want to do text and data mining in the EU, you run into a complex legal framework of copyright rules. During the OpenMinTeD webinar of November 23rd, this legal framework, its limits and its opportunities were discussed with legal as well as non-legal TDM experts. Recordings of the webinar and the discussion are available online.
Text miners can struggle to obtain the textual data to mine in the first place. One problem for us is that most scientific publications – especially in the social sciences and humanities – are only available in PDF format, which is not suitable for computers to read and process. The OpenMinTeD social sciences work group accepted the challenge to work on this problem.
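One reason extracted PDF text is hard to process is that it preserves the page layout: words are hyphenated across line breaks and paragraphs are chopped into short lines. The sketch below shows a common first cleanup step in pure Python; it is an illustrative assumption about the preprocessing involved, not the work group's actual pipeline.

```python
import re

def clean_pdf_text(raw):
    """Rejoin words hyphenated across line breaks and fold hard line
    wraps into flowing text (a sketch of a typical first cleanup step
    after PDF extraction, not a full layout analysis)."""
    # join e.g. "interop-\nerable" back into "interoperable"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", raw)
    # fold single newlines into spaces; blank lines stay as paragraph breaks
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    return text

raw = "Text mining needs interop-\nerable tools\nand open content."
print(clean_pdf_text(raw))
```

Real PDF extraction also has to deal with multi-column layouts, headers, footers and footnotes, which is why dedicated tools are used rather than regular expressions alone.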
Are you looking for support or training for text and data mining? Then you’re at the right place! OpenMinTeD recently released a Knowledge Base that will host open access support and training material. At the moment we are still in the process of uploading content, but you can already have a look.
Text and data mining is important to different scientific communities, but what do these different user communities need to mine successfully? One of the aims of workpackage 4 of the OpenMinTeD project is to collect these requirements. This was done using a combination of methods, including online surveys and focus groups. The results are summarized in the ‘White Paper on OpenMinTeD Community Requirements’ that was finished last week.
CORE is an aggregation service that harvests open access journals and repositories, institutional and disciplinary, from around the world. It offers one of the largest collections of scientific content via its Datasets, ready to be text-mined. We encourage everyone to use it as part of OpenMinTeD and beyond.
How the FutureTDM workshop highlighted that the draft exception must be improved for TDM to have a future in Europe
For the legal geeks among us, it is now old news that the European Commission, after promising to modernise copyright, issued a rather unhinged and disappointing copyright review proposal aimed at creating what it claims to be a ‘well-functioning marketplace’.
Let’s take a step into the near future.
A shared global data space for agriculture and food will propel the industry forward. Information will become available to all actors producing innovation.
Hi there, I’m Lucie Guibault, Associate Professor at the Institute for Information Law of the University of Amsterdam.
Over the past few years, I became increasingly aware of TDM as a research method in all fields of science and the humanities. With the increase of computational capacity and born-digital information, and the digitisation of collections, the use of TDM in research is on its way towards achieving tremendous societal and economic benefits. Think about all the new insights and cost savings that would otherwise not be possible. This means more scientific breakthroughs and a greater understanding of society.
On 22-23 June 2016, OpenMinTeD organised its third stakeholder workshop at the Joint Conference on Digital Libraries in Newark, just outside of New York City. The workshop, called “the International Workshop on Mining Scientific Publications,” was organised by the Open University for the fifth time (almost every time in conjunction with JCDL) and featured speakers from OpenMinTeD, as well as speakers who presented their text and data mining research results.
Our efforts towards improving interoperability in the communities of Text Mining (TM) and Natural Language Processing (NLP) continue. OpenMinTeD organised a workshop on this subject at the International Conference on Language Resources and Evaluation (LREC) on 23 May 2016. Alessandro Di Bari (IBM) opened the workshop with a keynote on transferring ideas from the model-driven approaches of software engineering to enhance interoperability in TM and NLP.
Conducting TDM activities in the current legal context is very difficult. This is due to the unclear and incoherent legal framework for copyright licences and to the highly fragmented landscape of copyright exceptions and limitations in the EU. In this blog post, we’ll discuss the current legal context and what needs to change to open the path for TDM in the EU.
On 13 June 2016, the OpenMinTeD project organised its third stakeholder workshop titled “Mining Repositories: How to assist the research and academic community in their text and data mining needs”. The workshop took place in Trinity College Dublin as part of the OpenRepositories Conference, and brought together repository managers from all over the world who are interested in text and data mining.
The seventh Berlin Buzzwords 2016, Germany‘s leading Conference on Open Source Big Data technologies, was held from 5-7 June at the Kulturbrauerei in Berlin. A very interesting venue for cultural events, under national trust protection, Kulturbrauerei is a spacious former brewery with a lot of courtyards and buildings.
On 22 May 2016, OpenMinTeD held its second stakeholder workshop at the LREC conference in Portoroz, Slovenia. The workshop took place in the form of a roundtable, and brought together strategic players and stakeholders from the language technology community and neighboring areas. Stelios Piperidis (Athena Research Center / ILSP) led the discussion. Among the attendees were representatives from CLARIN-CZ, CLARIN-ERIC, OpenAire, ELDA and LAPPS Grid.
The use of keywords is crucial for the description, organization, indexing, retrieval and sharing of research in every scientific field and agriculture is not excluded. However, manual annotation of research outcomes is time-consuming and error-prone so automatic methods for metadata annotation are always explored. AgroTagger is one of the tools facilitating the work of information and knowledge managers (among others) in the agri-food sector, by applying text-mining on top of agri-food research outcomes.
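At its core, this kind of automatic metadata annotation can be sketched as matching a controlled vocabulary against a document and emitting the matched terms as keywords. The tiny vocabulary below is an illustrative stand-in for a real thesaurus such as AGROVOC; it is not AgroTagger's actual term list or algorithm.

```python
# Hypothetical mini-vocabulary standing in for a real agri-food
# thesaurus; AgroTagger itself works against much larger term lists.
VOCABULARY = {"wheat", "irrigation", "soil fertility", "pest control"}

def assign_keywords(text):
    """Return the vocabulary terms that occur in the text,
    sorted alphabetically (naive case-insensitive matching)."""
    lowered = text.lower()
    return sorted(term for term in VOCABULARY if term in lowered)

abstract = ("Effects of irrigation scheduling on wheat yield "
            "and soil fertility in semi-arid regions.")
print(assign_keywords(abstract))  # ['irrigation', 'soil fertility', 'wheat']
```

Substring matching like this is only a baseline: real taggers add stemming, multi-word term handling and disambiguation to avoid spurious matches.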
Can you text mine agricultural content?
“Absolutely!” is the answer that AgroKnow will give you. And they can prove it! AgroKnow is one of the partners in the OpenMinTeD project and is already very active in projects which apply text mining technologies to the agricultural sector.
Are you a researcher in frequent need of searching and accessing textual content? Does your research involve looking for information in repositories of publications, reports, patents, and other textual content archives?
Then we are looking for your input!
Does your company develop text-mining powered applications? Would you benefit from a platform that provides access to a variety of text mining tools and components, along with the possibility to examine their specifications and performance? Are you an application developer in need of integrating text-mining services in your software? Then we are looking for your input!
Does your organisation have tons of data that you want to make available for text and data mining? Would you benefit from an infrastructure that brings your data together with text and data mining tools? Are you a repository manager, a publisher, or do you represent any other type of content collection?
Then we are looking for your input!
Are you a researcher in text and data mining? Would you benefit from making your mining software widely discoverable and interoperable, and would you like to easily explore and evaluate the work of other researchers in your field?
Then we are looking for your input!
In association with the OpenMinTeD project, The Open University organises the 5th International Workshop on Mining Scientific Publications (WOSP) at JCDL 2016.
The workshop is organised by the Open University and aims to give a useful overview of Text and Data Mining (TDM). The topics of the workshop are organised around the following themes:
The 9th GATE training course will be taught this June, at The University of Sheffield, and we are looking for you to join us! GATE, or the General Architecture for Text Engineering, is a mature, comprehensive suite of tools for information extraction, natural language processing and related tasks that has been developed continuously since 1995 at the University of Sheffield. The course is open to industrial and academic participants of any ability or experience level.
On February 29th, researchers from around the world gathered in Tokyo for the data sharing symposium “Data-driven Science – The trigger of Scientific development”. It was a venue for vibrant discussion of the opportunities and challenges brought by current trends such as open science, data-driven research and big data. OpenMinTeD, which regards openness as one of its basic principles, participated in this event.
At the end of last year, I presented a webinar to the American Medical Informatics Association on clinical text mining and text engineering – applying text mining to medical records. This is not an area that we are concentrating on in OpenMinTeD, but it is still an area on which we should keep a watchful eye. There is a rapid growth of text mining over medical records, and it exposes issues and problems that we need to be aware of.
The OpenMinTeD project is divided into different tasks. It is the task of Agroknow to carry out the important job of gathering TDM requirements from our stakeholders (OpenMinTeD’s future platform users and contributors), so that OpenMinTeD can build a TDM platform that meets their requirements as well as possible. We focus on gathering requirements from four different scientific domains, represented by the following communities.
Final Call for Submissions: Cross-Platform Text Mining and Natural Language Processing Interoperability
Recent years have witnessed an upsurge in the quantity of available digital research data, offering new insights and opportunities for improved understanding. Following advances in Natural Language Processing (NLP), text and data mining (TDM) is emerging as an invaluable tool for harnessing the power of structured and unstructured content and data. Hidden and new knowledge can be discovered by using TDM at multiple levels and in multiple dimensions. However, text mining and NLP solutions are not easy to discover and use, nor are they easy to combine for end users.
I’m Angus, and I lead the “Platform Integration, Testing and Deployment” workpackage for OpenMinTeD – or WP7 as it is affectionately known in Project-Speak. Our task in WP7 is to take the services that have been designed and created in OpenMinTeD and to deliver these as a whole, so that they can be deployed as a running system. But what tools are needed for this?
On 12 November, OpenMinTeD’s specification Working Groups (WP5; task 5.2) met for the first time in person. This one-day workshop was attended by 30 participants with wide-ranging expertise in the many faces of TDM interoperability (both project-internal participants and invited external experts).
A high level meeting on Open Data in Agriculture took place on 28 September 2015 in Amsterdam, Netherlands. The participants of the event represented organisations like the Global Forum on Agricultural Research (GFAR), the Food and Agriculture Organisation of the UN (FAO), Land Portal Foundation, Wageningen UR, Open Data Institute (ODI) and Institute of Development Studies, UK (IDS).