Introduction - Elliot R. Siegel

An introduction to the purpose of the workshop and its genesis as a technical project of ICSTI on interactive visualizations. A brief account will be given of a 2008 NLM–Elsevier project comparing knowledge gain and satisfaction among medical students reading standard and interactive versions of the same paper.

I. Interactive visualizations

The purpose of this session is twofold:
(1) to showcase examples where journals use interactive (usually three-dimensional) data visualization tools as an integral component of the scientific discourse, and to demonstrate their benefit;
(2) to define the ‘costs’ involved in providing the infrastructure for creating, editing, reviewing and archiving such interactive content.

Interactive Science Publishing: a joint OSA-NLM project - E.R. Siegel

Interactive Science Publishing (ISP) has been developed by the Optical Society of America (OSA) with support from the NIH National Library of Medicine (NLM). The initiative allows authors to publish large 2D and 3D datasets together with the original source data, which readers can view and analyze interactively. ISP provides the software for authors to organize and publish source data, and offers readers the corresponding viewing and analysis tools.

Breaking out of 2D: interactive PDFs - Michelle Borkin

Interactive PDFs have obvious appeal to reader and publisher alike: they need only widely available (and comfortingly familiar) software to view, they require no extra effort from the publisher to create or archive, and they offer advantages to both reader and author, including 3D interactivity. How easy are they to create, and can they be designed for specific domains? What are the implications of relying on closed-source or proprietary software? What improvements do they bring to the scientist?

Accessing the data: going beyond what the author wanted to tell you - Brian McMahon

Structure reports in IUCr journals depend on a standard data file format (CIF). Structure visualization uses both helper applications (e.g. Mercury, though the end-user has complete freedom of choice) and embedded applets (Jmol). Tools are provided for creating and editing dynamic content, and the handling of such content is completely integrated in the editorial and production workflow. Data validation is an integral part of the peer review process. There are concerns about long-term access to functionality that depends on the behaviour of particular software programs, but archiving the underlying data provides more scope for retaining such functionality.
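
As an illustration only of why a standard format helps downstream tools, the sketch below reads simple tag-value pairs from a CIF file in Python. It is a deliberately minimal reading, not a real CIF parser (which must also handle loop_ tables and multi-line values; libraries such as PyCIFRW exist for that), and the file name is hypothetical.

    # Minimal sketch: read simple tag-value pairs from a CIF file.
    # Real CIFs also contain loop_ tables and multi-line values, which
    # a full parser such as PyCIFRW handles; this covers only the
    # simplest case. The file name is hypothetical.

    def read_cif_tags(path):
        tags = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith('_'):        # CIF data names start with '_'
                    parts = line.split(None, 1)
                    if len(parts) == 2:
                        tags[parts[0]] = parts[1].strip("'\"")
        return tags

    tags = read_cif_tags('structure.cif')
    print(tags.get('_cell_length_a'), tags.get('_chemical_formula_sum'))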

II. Adding value with enriched content and semantic links

This session surveys some of the growing number of ways in which value is added to the online publication by exposing its semantic content to computerised applications: hyperglossaries, thesauri, taxonomies, chemical substructure display and search. Sometimes the value is added (expensively) by the publisher through semantic markup; sometimes it can be inferred by dynamic post-processing. How important is it to retain this type of added value in long-term archives?

Project Prospect and the place of primary data - Richard Kidd

Project Prospect is an RSC initiative to add extensive semantic markup to chemistry publications. It improves literature searching, enhances the information content of an article by annotating chemical content, and opens the door to database searching of chemical structures. Reliable tagging of content is an intensive editorial process (for example, many chemicals mentioned in an article are never named explicitly, but appear only under labels such as ‘13b’); but adding accurate machine-readable information adds tremendous value to an article.
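
To make the idea of machine-readable markup concrete, here is a minimal sketch, not RSC's actual pipeline: mentions of known compounds in article text are wrapped in a tag carrying a standard identifier (InChI). The one-entry lookup table and the tag format are assumptions for illustration; the editorial mapping behind Project Prospect is curated and far larger.

    import re

    # One-entry lookup table for illustration; the InChI shown is the
    # standard identifier for benzene. The <compound> tag format is an
    # assumption, not RSC's actual markup schema.
    COMPOUND_IDS = {
        'benzene': 'InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H',
    }

    def annotate(text):
        """Wrap known compound names in a machine-readable tag."""
        for name, inchi in COMPOUND_IDS.items():
            pattern = r'\b%s\b' % re.escape(name)
            repl = lambda m, i=inchi: '<compound inchi="%s">%s</compound>' % (i, m.group(0))
            text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
        return text

    print(annotate('The product was recrystallised from benzene.'))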

Semantic linking in the Concept Web - Jan Velterop

Another approach to semantic markup is explored in applications of the Concept Web Alliance. Instead of relying on at-source markup, readers can stream content through a large knowledge-based annotation server that overlays annotations, glossaries, hyperlinks and the like, deduced from on-the-fly parsing and mapping to discrete concepts organised in a large triple store. The Alliance allows participants to explore many of the new semantic web technologies, maximizing the potential for knowledge discovery while at the same time removing both redundancy and ambiguity from the available knowledge.
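
A minimal sketch of the underlying idea, assuming nothing about the Alliance's actual software: assertions are held as subject-predicate-object triples and queried by pattern matching. Production systems use RDF triple stores and SPARQL; the concepts below are invented for illustration.

    # Toy triple store: assertions as (subject, predicate, object)
    # tuples. The concepts are invented; real systems map text to
    # curated concept identifiers and query an RDF store with SPARQL.
    triples = {
        ('aspirin', 'inhibits', 'cyclooxygenase'),
        ('aspirin', 'is_a', 'analgesic'),
        ('cyclooxygenase', 'produces', 'prostaglandins'),
    }

    def match(s=None, p=None, o=None):
        """Return all triples matching a pattern; None is a wildcard."""
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # Everything asserted about aspirin:
    for t in match(s='aspirin'):
        print(t)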

Visualizing and citing dynamic datasets - Toby Green

OECD is pioneering the citation and visualization of economic and statistical data sets. These are available outside journal publications, but can also be referenced and linked to from scholarly articles. Many of the data sets of interest grow continuously, so the challenge is again to cite a dynamic object properly, and to recreate a particular time slice or other subset of a large and complex data set.
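
As a sketch of the citation problem only (the data values, field names and citation convention below are assumptions, not OECD practice): a reference to a growing data set must pin down both the subset used and the state of the data at the time of access.

    from datetime import date

    # Toy "growing" data set: observations keyed by (country, year).
    # The values are invented for illustration.
    gdp_growth = {
        ('FRA', 2007): 2.4, ('FRA', 2008): 0.3, ('FRA', 2009): -2.9,
        ('DEU', 2007): 3.0, ('DEU', 2008): 1.0, ('DEU', 2009): -5.7,
    }

    def time_slice(data, years):
        """Extract the subset of observations for the given years."""
        return {k: v for k, v in data.items() if k[1] in years}

    def cite(dataset_id, years, accessed):
        """Render a citation recording both the subset and access date."""
        return '%s, years %s-%s, accessed %s' % (
            dataset_id, min(years), max(years), accessed.isoformat())

    subset = {2008, 2009}
    print(time_slice(gdp_growth, subset))
    print(cite('doi:10.1787/EXAMPLE', subset, date(2010, 6, 1)))  # hypothetical DOI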

III. The archival problem and infrastructure for solutions

This session considers what needs to be in place to allow the added value and functionality of interactive publications to be retained and made available into the future. Is there a place (or a need) for emulation of legacy software environments? Do we yet have a consensus on how to package, identify and interlink the independent components of a complex article (article text, figures of various types including animations, movies, audio annotations, data sets, procedural scripts)? Can we handle distributed articles – text on a publisher’s web site, associated data in a subject repository? Can we identify and retrieve slices through large archived data sets? Do we have any idea how to approach changing data sets? What is actually worth keeping for posterity anyway?

Maintaining a persistent scholarly citation record when content is protean and identity is cheap - Geoffrey Bilder

CrossRef brings experience gained with a number of publishers to the problem of providing persistent identifiers for data sets and linking them to related publications. There are different ways of describing compound documents (articles and their supplementary material, data sets, multimedia content), with distinct DOIs linked through appropriate compound-document schemas. Among current CrossRef initiatives in long-term and unambiguous linking, the quest for persistent author identifiers suggests the possibility of granular citation of data or of nano-publications.
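
As a concrete illustration of persistent-identifier resolution: any DOI can be dereferenced through the doi.org proxy, and CrossRef DOIs additionally support HTTP content negotiation for machine-readable metadata. The sketch below assumes network access, and the DOI shown is a documentation placeholder, not a live identifier.

    import urllib.request
    import urllib.error

    # Resolve a DOI via the doi.org proxy, asking for machine-readable
    # metadata through HTTP content negotiation (supported for CrossRef
    # DOIs). The DOI below is a placeholder, so expect the except
    # branch when running this as-is.
    doi = '10.1000/xyz123'
    req = urllib.request.Request(
        'https://doi.org/' + doi,
        headers={'Accept': 'application/vnd.citationstyles.csl+json'})
    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode('utf-8'))
    except urllib.error.HTTPError as err:
        print('Resolution failed with HTTP status', err.code)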

Bridging the gap between data centres and publishers - Jan Brase

TIB has growing experience in assigning persistent identifiers to data sets and linking them to primary publications, and has given some thought to standards for citing data sets with the aim of securing appropriate scholarly credit. With DataCite – the International Data Citation Initiative, founded in December 2009 by TIB, the British Library, the Library of ETH Zurich, the Australian National Data Service (ANDS) and several other information institutions – a global initiative is being established to bridge the gap between data centres and publishers.
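
A hedged sketch of what a formatted data citation might look like, assembled from a minimal metadata record. The record itself is invented; the creator/year/title/publisher/identifier pattern follows the commonly recommended form for citing data sets, and the DOI uses the reserved 10.5072 test prefix.

    # Invented metadata record; the element choice follows the commonly
    # recommended creator/year/title/publisher/identifier pattern for
    # data citation. 10.5072 is a reserved test prefix, not a real DOI.
    record = {
        'creator': 'Example Research Group',
        'year': 2009,
        'title': 'Ocean temperature profiles, North Atlantic',
        'publisher': 'Example Data Centre',
        'identifier': 'doi:10.5072/EXAMPLE-DATASET',
    }

    def format_citation(r):
        return '%(creator)s (%(year)s): %(title)s. %(publisher)s. %(identifier)s' % r

    print(format_citation(record))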

IV. W(h)ither journals?

The final session will explore the direction in which these new developments are taking the whole practice of scholarly communication. Conventional journals have been the mainstay of formal scientific discourse for over 300 years. The technological explosion of the early 21st century offers a bewildering variety of new ways to communicate. How will they complement or supplant the traditional publication?

The nature of scholarly publishing in the new century - Matthew Day

Nature is experimenting energetically with new technologies and techniques of social networking. It is bringing to the scholarly publication elements of interactive review, critique, comment and feedback, and is helping to develop new tools for tracking new literature and sharing scholarly information. Which of these experiments represent passing fads, and which reflect real changes in the communication of scholarly information that will be adopted as reputable practice by the community?

Dumbing down or opening new horizons? - Phil Bourne

Multimedia innovations such as SciVee (‘YouTube for Scientists’) improve the immediacy and impact of scientific reporting, but do they carry the risk of ‘dumbing down’ the information content? Are they a useful adjunct to the written word, or will they replace it in some circumstances? What new challenges of citability and archivability do they raise? What new tools are needed by authors to compose in the new interactive media? Will they be willing to learn? Will the new ways of communicating science change the way scientists work, and indeed how they think?
