Science Hour with Leo Laporte & Dr. Kiki

Recently, Dr. Jason Hoyt of Mendeley and Pete Binfield of PLoS ONE were invited to the podcast Science Hour with Leo Laporte & Dr. Kiki. During this one-hour podcast they discussed scientific publishing on the internet. Some interesting topics came up, and I will summarize them in this blog post.

Why is publishing important in science?

Currently there is an attitude of “publish or perish”: there is pressure to publish in order to get or maintain a career in academia. A consequence is that people race to publish first, which sometimes leads to fraud or publications with false data. To counter this, scientific publishing traditionally consists of the following phases:

  • recording the premise of a discovery (the first to publish gets credit for the idea)
  • certification (peer review, analysis of correctness)
  • dissemination
  • archiving

One would think this would be easy using the internet as a tool; we no longer need a publishing company to publish. But a problem arises: how do we give credibility to a publication? Is peer review the right way to do this?


PLoS ONE tried adding a commenting system, with a surprising result. Readers could comment on an article by annotating a passage of text, giving the article a star rating, and so on.

The problem was that it wasn’t used very much; articles received only a few comments. The discussion instead took place in the broader community, on blogs, in chat, and elsewhere.

Traditional publishing

Another problem is that traditionally a researcher would write a paper, receive comments on it, and then write a new iteration. Given current technology this seems ridiculous. The result or conclusion of your paper isn’t as important any more; the first priority should be getting your data out into the world. Publications often don’t even contain the full data.

Another tradition is peer review itself; current technology seems to call for a quicker way to put a paper online and have it reviewed. What happens now at PLoS ONE is that reviewers check only whether a paper contains sound science. Other aspects, such as impact, are decided by the community.

At the same time, I think this is a problem, because some scientists are skeptical of online communities and might not place much value on a community-determined impact factor. This is why online tools should prove that their systems are just as good at determining values like impact factor as the old system.

How do we get respected scientists online?

At the moment, younger scientists are forced to publish in the traditional way because of the pressure to get a grant, a tenure-track position, and so on. Another reason is social reluctance rooted in the way academic career structures are set up.

To me this seems to be a social problem, which might resolve itself over time, but it could be helped by moving some journals online only. Of course, the issue of how to value an online paper would again have to be addressed.

Too much information

With so many people publishing, will we still be able to find the right information? Traditionally, journals filtered the information flow; the disadvantage was that some good articles were never discovered. Online, filtering is usually done by an algorithm, but Mendeley takes another approach: it allows articles to be discovered at the individual level, so lower-impact articles can still be found.

When data is published online, findability and search speed are much higher than in the traditional way. Another way to navigate the large number of publications is tagging.
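To make the tagging idea concrete, here is a minimal sketch of tag-based discovery. It assumes papers are stored as dictionaries with a set of tags; the function name and data layout are my own illustration, not Mendeley's actual API.

```python
# Hypothetical tag-based filtering: return papers matching every requested tag.
def find_by_tags(papers, wanted):
    """Return papers whose tag set contains all tags in `wanted`."""
    wanted = set(wanted)
    return [p for p in papers if wanted <= set(p["tags"])]

papers = [
    {"title": "Open peer review", "tags": {"publishing", "peer-review"}},
    {"title": "Metrics for science", "tags": {"publishing", "metrics"}},
]

# Narrowing by an extra tag filters out non-matching papers.
matches = find_by_tags(papers, ["publishing", "metrics"])
```

The point of the sketch is that tags let readers slice a large corpus along any dimension the authors chose to label, rather than relying on a journal's editorial filter.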

Impact factor

Another problem with online publication is the impact factor: the users with a high impact factor are those who have already established one. This forces younger people, under academic pressure to publish, to go wherever they can earn a high impact factor, and they end up back in the old model of trying to get published in a journal with a high impact factor.

Mendeley wants to get rid of this by introducing impact at the article level, so that the article itself is examined. Mendeley has developed algorithms to do this.

Article level metrics

An interesting point made in this podcast concerns article-level metrics, which PLoS ONE is pioneering. These are indicators attached to an article: the number of citations, the number of social bookmarks, the number of blog posts about the article, and so on. Recently they also introduced usage data, such as the number of downloads and page views. There seems to be a reluctance to make this level of data open. You can think of these metrics as social media metrics.
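The idea of combining these signals into a single score can be sketched as a weighted sum. The signal names and weights below are assumptions chosen for illustration only; they are not PLoS ONE's or Mendeley's actual formula.

```python
# Hypothetical weights for combining article-level signals into one score.
WEIGHTS = {"citations": 3.0, "blog_posts": 2.0, "bookmarks": 1.0, "downloads": 0.1}

def article_score(metrics):
    """Weighted sum over whatever signals are available for an article.

    `metrics` maps a signal name (e.g. "citations") to its count;
    unknown signal names contribute nothing.
    """
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in metrics.items())

# Example: 10 citations, 200 downloads, 5 bookmarks.
score = article_score({"citations": 10, "downloads": 200, "bookmarks": 5})
```

A per-article score like this is what makes ranking and discovery possible without a journal-level impact factor, though the hard part in practice is choosing and standardizing the weights.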

To me this is very useful data, as it gives you an overview of an article's impact and enables further analysis. In the podcast they also mention that we should focus on standardization, so that in the end we have the same metrics everywhere.


I think this podcast contained some relevant discussion; it made me familiar with the “publish or perish” tradition. Many of the obstacles to getting people to use online publication seem to involve getting them out of this tradition. The solution isn’t clear at the moment, but as the people in the podcast suggest, the focus should shift to the article level. To do this we need a way to give articles a value based on their impact, citations, and so on. As my thesis will involve an open repository, I think it could be a nice feature to add some kind of impact factor or article-level metrics to the papers. This would offer users a good way to discover new content, and it would also let them decide what they consider of great value to science, for example through a ranking system or commenting.

