Archive for the ‘read-articles’ Category

Choosing Colors for Data Visualization

March 2, 2010

This article explains that good use of color can enhance and clarify a presentation, while poor use has a negative effect. The use of color is all about function: what information are you trying to convey, and how can color enhance it? The author uses a lot of examples to illustrate the effects of color. In this summary I’ll pick out the conclusions about the examples that I think could be important for my thesis.

One of the functions of color is to distinguish one element from another, but one should not forget that every visible part of a presentation must be some color, and taken together those colors must be effective. Effective here means making it easy for the viewer to understand the roles of, and relationships between, the elements. To achieve this, one can define categories of information and then group and order the information. Color should group related items and command attention in proportion to importance.

The next step is choosing an effective set of colors; to explain this the author introduces the principles of color design. Contrasting colors are different, analogous colors are similar: contrast draws attention, analogy groups. In color design, a color is specified by three dimensions. The first, hue, is the color’s name; hues are typically drawn as a hue circle, where analogous hues are close together and contrasting hues lie on opposite sides. Next is value, the perceived lightness or darkness of a color. Contrast in value determines legibility and has a powerful effect on attention. Last is chroma, which indicates how bright, saturated, vivid or colorful a color is. High-chroma colors are vivid and bright, but using darker and grayer colors has many benefits: they look less garish, more sophisticated, …

The different dimensions have different applications to information display. Making related items the same color (analogous hue) is a powerful way to label and group. Hue contrast is easy to overuse to the point of visual clutter; a better approach is to use a few high-chroma colors as contrast in a presentation consisting primarily of grays and muted colors.
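
To make this concrete, here is a minimal sketch of such a palette in Python using the standard colorsys module. This is my own illustration, not code from the article, and it uses HLS saturation as a rough stand-in for chroma.

```python
import colorsys

def hls_hex(hue, lightness, saturation):
    """Convert HLS components (each 0..1) to a #rrggbb hex string."""
    r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

# Context colors: analogous hues (close together on the hue circle),
# low saturation as a stand-in for low chroma -- muted, grayish, grouped.
context = [hls_hex(h, 0.55, 0.15) for h in (0.55, 0.60, 0.65)]

# One accent: a contrasting hue from the other side of the circle with
# high saturation (high chroma), so it commands attention.
accent = hls_hex(0.10, 0.50, 0.90)

print("context:", context)
print("accent:", accent)
```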

Legibility

Legibility means that something can be read, deciphered, discovered and understood. The difference in value between a symbol and its background is important for legibility: the higher the luminance contrast (difference in value), the easier it is to see the edge between one shape and another. Variation in luminance can also be used to separate overlaid information into layers, where low-contrast layers can sit behind high-contrast ones without causing visual clutter. A primary rule in many forms of design is “get it right in black and white”, meaning that the important information should remain legible even if chroma were reduced to zero.
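
The “black and white” rule can even be checked numerically. Below is a small sketch of my own (not from the article) that computes the WCAG-style relative-luminance contrast ratio between two sRGB colors; a higher ratio means a more legible edge between symbol and background.

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """Luminance contrast ratio, from 1 (identical) to 21 (black on white)."""
    light, dark = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (light + 0.05) / (dark + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, maximally legible
print(contrast_ratio((120, 120, 120), (140, 140, 140)))  # ~1.3, a background layer
```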

Summarized, these statements tell us to “assign color according to function”:

  • use contrast to highlight
  • use analogy to group
  • control value contrast for legibility

In most design situations, the best results are achieved by limiting hue to a palette of two or three colors, and by using value and chroma variations within these hues to create distinguishably different colors. The article gives some examples that make this clearer, and refers to ColorBrewer, a website that helps choose colors for data display. The paper’s examples always use a white background, with the contextual information in shades of gray. As a general rule, making the background white and the supporting information shades of gray provides the most effective foundation for your color palette.

The paper ends with a few notes on background color, noting that most color palettes are designed to be printed on white paper. White as a background color has the advantage that the human visual system adapts its color perception relative to the local definition of white: a white background gives a stable definition of white, and a stable “surface” to focus on.

Thesis

This paper helped me realize that color is very important in making a visualization easy to understand. I have already applied the contrast rule to all of the tags in my graph. I will most likely change the background of my application to white and give the supporting information appropriate colors.


Toward Measuring Visualization Insight

March 2, 2010

This paper starts by telling us that one of the purposes of visualization is gaining insight. Insight is hard to define when it comes to visualizations, so the article identifies some essential characteristics of it: insight is complex, deep, qualitative, unexpected and relevant, and an insight is more interesting the more of these characteristics it has. Visualizations are often evaluated using controlled experiments, but the benchmark tasks used in these experiments are not proper tools for measuring insight, because the method depends on the assumption that those benchmark tasks and metrics represent insight. According to the author there are four fundamental problems when benchmark tasks are compared to the characteristics above:

  • they must be predefined by test administrators, leaving little room for unexpected insight and even forcing users into a line of thought they might not otherwise take
  • they need definitive completion times
  • they must have definitive answers that measure accuracy
  • they require simple answers

This forces the experimenter toward search-like tasks that don’t represent insight well; such benchmark tasks are far too simplistic and constrained to indicate the insight a visualization provides. A claim often made to generalize results of simple benchmark tasks is that complex tasks are built from simple tasks. The author counters this: first, the efficiency of simple benchmark tasks is often due to specific visualization interface features that don’t generalize to more complex tasks; second, a clear decomposition doesn’t exist yet. Another problem that often arises in interpreting benchmark results is the trade-off between performance and accuracy. Users are often forced to continue until they correctly complete a task, leading to a trial-and-error approach and a misrepresentation of accuracy. It is concluded that controlled experiments on benchmark tasks are not the right method to evaluate insight.

As a first remedy the author suggests including more complex benchmark tasks, though this still involves some uncertainty, because such tasks generally exercise visualization overviews rather than detail views. Another method is to let users interpret the visualization in a textual answer, but this is difficult to score, and offering multiple choice could again bias the user. These methods also lead to longer task times and require a larger group of participants to reach statistically significant results.

A second suggestion is to eliminate benchmark tasks altogether and let researchers observe what insights users gain on their own. An open-ended protocol is one possible method: users are instructed to explore the data and report their insights. Qualitative insight analysis, such as the think-aloud protocol, is another. For each insight, a coding method quantifies various metrics (insight category, complexity, …), and these can be assigned to common clusters like usability, … The coding converts qualitative data into quantitative data; it remains subjective, but it respects the qualitative nature of insight. The advantage of eliminating benchmark tasks is that the results reveal what insights visualization users actually gained. The measures are closely related to the fundamental characteristics of insight mentioned earlier, and the gained insights can be compared to the insights a researcher expected users to gain.
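
As an illustration of what such insight coding might produce, here is a small sketch in Python; the field names and example records are my own guesses based on this summary, not the paper’s actual coding scheme.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Insight:
    # Hypothetical coding fields inspired by the summary above;
    # the paper's real scheme may differ.
    description: str
    category: str      # e.g. "overview", "pattern", "usability"
    complexity: int    # 1 (simple fact) .. 5 (deep, multi-step)
    unexpected: bool

# Two made-up insights reported by a user during an open-ended session.
observations = [
    Insight("Tag X dominates the whole graph", "overview", 1, False),
    Insight("Two tag clusters slowly merge over time", "pattern", 4, True),
]

# The coding step: qualitative records become quantitative summaries.
per_category = Counter(i.category for i in observations)
mean_complexity = sum(i.complexity for i in observations) / len(observations)
print(per_category, mean_complexity)
```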

The author concludes by pointing out that both types of controlled experiment are needed: benchmark tasks for low-level effects, and the elimination of benchmark tasks for broader insight. He notes that when combining both approaches in a single experiment, the benchmark tasks should not precede the open-ended portion, as this could constrain the user.

Thesis

This article helped me understand that I need to pay more attention to the open-ended portion of the evaluation of my visualizations. I will combine both methods to gain more information; in my previous evaluations I allowed the user to explore the visualisation for only a very short time, and this should be extended. I’ll also need to note what kind of insights I’d like users to gain from my visualization and compare these to the insights gained in the evaluation. In my previous evaluation I also noticed how hard it is to find a good benchmark to test the visualization; this article confirms my impression that benchmarks are often too simple and force the user in a certain direction. I’ll also need to pay more attention to how I formulate my questions so that I don’t bias the user.

Should Scientists Be Tweeting?

November 2, 2009

The article tells us there’s a growing number of science Twitterers. They consider Twitter a useful tool for sharing their insights about recently published papers and about science presentations or discussions, as well as information about grants, careers, science policy, …

Why scientists use Twitter

A few scientists explain their reasons for using Twitter. For scientists, Twitter is a single place to go to scan news and papers. Another reason mentioned is that Twitter is an extra source of tips on papers: the people a scientist follows often recommend papers the scientist didn’t come across, which makes the scientist feel more up to date on the scientific literature.
Twitter is also often used to report on interesting (or sometimes dreadful) presentations heard at a scientific conference. Because of some controversy, people must sometimes obtain permission from the presenting author to tweet during a presentation. The main reason is that, unlike regular blogs or news articles, tweets have the potential to spread like wildfire.

Disseminating scientific information is a driving mission for many Twitter users. Twitter gives scientists a way to communicate their work to non-scientists and allows anyone to see science in a way that is more accessible.

Twitter also offers other people a window into the life of a scientist. Scientists can write stories that educate, publicize science, and explain more accurately to lay people what scientists do. Twitter and regular blogging are effective ways of telling people about your work.

Potential

The article also explains that the number of science Twitterers is still small compared to the number of scientists who could join. This is not just the case for Twitter but for online social networking in general. Part of the problem is that Twitter has a reputation as a social venue for friends to tell each other about their daily activities, but as explained above, Twitter is more than that.

Too short?

The 140-character limit forces people to be concise and creative and makes others more likely to read the messages. But it also has limitations, such as the fact that you cannot have a decent, full-blown, high-level scientific debate via Twitter messages. This might be why some scientists aren’t joining Twitter; a possible alternative is FriendFeed, which allows users to post longer messages.

Thesis

Twitter is a great service for spreading new papers, and I’m sure the number of scientists using it will grow. The reasons scientists use Twitter are interesting: not only do they use it to find news and scientific papers, they also use it as a window into their everyday research activities. Twitter also lets them communicate with non-scientific people, and offers search, recommendation, sharing, …

In my opinion Twitter enables people to spread the word about a paper quickly, which is a great advantage compared to publishing in journals, which might only reach a limited group of people. Discussion on Twitter about a paper might be considered an indicator of its positive or negative impact.

Reference tools

October 22, 2009

Researchers used to keep track of their references manually; with the arrival of computers, software tools were developed for managing academic publications. Some of these tools are desktop applications, but most are inspired by Delicious and offer online services. A good comparison can be found on Wikipedia. Looking at the tools that exist, one could say there is a division between tools that make citing papers easy (EndNote, RefWorks, …) and tools that make sharing easy (CiteULike, Connotea, …). In the next few paragraphs I’ll explain some of the features offered by the two kinds of tools.

Desktop applications

Tools like EndNote let users manage their publications on the desktop; as mentioned before, the main task of these tools is to make citing easier. They don’t enable easy ways of sharing: most of the time, the way to share your references with others is to export the database and mail it to a friend or colleague. Often these tools also let users manage the PDFs of publications on their own computer, as Mendeley does. Another advantage is that most of the desktop tools have word processor integration, although some online bookmarking tools also offer this functionality. Another way of achieving a desktop-like feeling is with a Firefox extension, as Zotero does, or with integration into a word processor.

Social bookmarking

A lot of social bookmarking tools exist nowadays, mostly inspired by Delicious. But they differ from Delicious in that they target an academic audience. While Delicious deals with simple URLs, citations are a bit more complex and contain metadata like authors, journals, … The tools are mostly a meld of existing reference management conventions and new social bookmarking concepts.

By moving the reference management online, the tools are able to offer a lot of social features like:

  • commenting
  • tagging, which enables discovery
  • sharing of references
  • recommending

Next to the social features, there is also the advantage of being able to access your references from anywhere. Bookmarks are added to your library using bookmarklets, which let the user quickly add them while doing another task or some research. The social aspect is increased by allowing groups to be created; this way groups of researchers can collaborate and share references, which is a lot harder with desktop applications. The online tools also offer RSS feeds to follow certain tags, users, … As in many science tools, privacy is still an issue: some users might not want to make their bookmarks public, but most tools offer them that choice.
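
As a small illustration of those RSS feeds, the sketch below reads a tag feed with the third-party feedparser library; the feed URL is a placeholder of my own, since every tool exposes its own feed paths.

```python
# Requires the third-party "feedparser" package (pip install feedparser).
import feedparser

# Hypothetical feed of newly bookmarked references for one tag.
FEED_URL = "https://example.org/rss/tag/visualization"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Each entry is one reference someone bookmarked under the tag.
    print(entry.title, "->", entry.link)
```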

Conclusion

It is a good thing to see so many reference tools out there. At the beginning I mentioned the division between tools that make citing easier and those that make sharing easier. This split exists because a desktop application often offers faster response, local management of files and closer integration with a word processor, though the last one can easily be offered by online tools as well.

A good example of combining a strong desktop application with an online profile is Mendeley. Its web-based profile is very limited, however, and doesn’t come close to the discovery and tagging functionality that tools like Zotero, … offer. A social bookmarking tool can offer almost the same functionality as the desktop applications, except for managing your publications on your local disk.

In my opinion, the social features these social bookmarking tools offer make them superior to the desktop applications. Some of these social bookmarking sites should realise they need to offer word processor integration to make it easier for users of desktop applications to switch. A strong social bookmarking tool should offer word processor integration, imports from most of the academic databases and search engines, and support for a wide range of import and export file formats.

CiteULike offers some features that might convince people to join or increase its popularity. The service can show all tags related to a journal, which offers another way to discover content, and journals might be eager to link to this service from their official websites. CiteULike also gives users recommendations and allows them to upload PDFs (and access them from anywhere). BibSonomy can also upload and share a PDF with a group, and a useful feature it offers is viewing the relations between tags. Between BibSonomy and CiteULike, BibSonomy has the better import and export features, for example export in BibTeX format, and seems to be the better tool for the moment.

Did I miss a feature or a tool, or do you want to give feedback? Please let me know in the comments.

Last.fm Explorer

October 15, 2009

This is a summary of the paper I read on Last.fm Explorer, a visualization tool that facilitates sophisticated data exploration with interactive controls to drill down through hierarchical levels of data. The tool is heavily inspired by the “streamgraph” visualization of Lee Byron’s own Last.fm listening history. Byron’s visualization had some shortcomings, and the authors of this paper thought it did not realize the full potential of Last.fm’s data: the experience was less than satisfactory because the graph is very wide and, as it is a fixed rendered image, there is no way of seeing more data than what is immediately visible in the image.

The first shortcoming of that visualization was that it only shows one level of a user’s data: the artists listened to. Music data typically has four levels of hierarchy that users understand:

  • genre
  • artist
  • album
  • track

Last.fm Explorer implementation

Last.fm Explorer attempts to resolve these issues through interaction: filtering the stacked graph of a single user’s tag listening data, viewing different levels of the hierarchy, … It allows a limited amount of social context by supporting comparison of two users’ histories in parallel, and it uses animation heavily to make changes in visualization state clear. Humans are much better at perceiving differences in the position of objects than differences in length; Last.fm Explorer applies this through its multiple visualizations and by letting the user interactively re-arrange the stacked graph.

To solve the problem of the limited overview and navigation in Lee Byron’s visualization, they added a pair of arrowhead-shaped handles on a slider below the main stacked graph, allowing the user to adjust the left and right limits of the graph by date. To make this time slider less confusing, they draw a small version of the graph above it. One example of interactivity is that when a user hovers over an object, the object is highlighted and a tooltip is shown. Another is that clicking on a layer in the stacked graph swaps it with the bottommost layer, so any layer can be viewed with a flat baseline for better perception of changes in value. A double-click makes the visualization filter on the clicked object and switch to the next level down in the hierarchy; e.g. clicking on the tag “rock” makes the visualization show all artists with that tag. Filters applied by double-clicking can be removed again.
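
Last.fm Explorer itself is built in Flash, but the underlying stacked-graph idea is easy to sketch. The snippet below is my own illustration with made-up play counts: matplotlib’s stackplot with the “weighted_wiggle” baseline reproduces the Byron-style streamgraph layout, while the “zero” baseline gives the flat bottom edge that the layer-swap interaction exploits.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake weekly play counts for three tags -- illustrative data only.
weeks = np.arange(52)
rng = np.random.default_rng(42)
tags = {"rock": 30, "electronic": 20, "jazz": 10}
plays = [rng.poisson(mean, weeks.size) for mean in tags.values()]

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
# "weighted_wiggle" is the streamgraph layout popularized by Lee Byron.
top.stackplot(weeks, plays, labels=list(tags), baseline="weighted_wiggle")
# "zero" keeps the lowest layer on a flat baseline, so its changes in
# value are easy to read -- the effect Last.fm Explorer achieves by
# swapping a clicked layer to the bottom of the stack.
bottom.stackplot(weeks, plays, labels=list(tags), baseline="zero")
top.legend(loc="upper left")
bottom.set_xlabel("week")
plt.show()
```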

Results and discussion

In the results and discussion, the authors note that people familiar with Last.fm’s data find the visualization exciting and interesting, and I must agree. Another conclusion is that the stacked graph seems to be a popular and accessible display for this kind of data, but a drawback is the difficulty of comparing two different layers. A line graph doesn’t suffer from this problem but has issues of its own; by allowing repositioning of single layers in the stacked graph they attempt to mitigate the comparison problem. The line graph they implemented ran into several problems which limited its usefulness:

  • clumping of data at the bottom of the graph, which they tried to mitigate with a logarithmic scale
  • the logarithmic scale in turn made the graph very complex and hard to read
  • play counts are integers, so it is not uncommon to have more than one node at the same point, which makes the graph difficult to read

Interactive performance was sometimes difficult to maintain; the main causes were the way the Last.fm API processes requests and the limitations of the Flash platform.

Future work

The paper also suggests some future improvements. One is including track lengths from the MusicBrainz database, which would enable the visualization to combine track length data with Last.fm’s listening data and so visualize the actual time spent listening to specific tracks, artists and tags. At the moment the tool doesn’t use the same colors for the same elements across different visualizations, which is confusing and a significant shortcoming. And as mentioned before, the line graph suffers from a number of shortcomings.

In the future the authors would like to implement the technique used by Sese.us for synchronizing application state with unique arguments appended to the application’s URL. This allows sharing a link to your visualization in a specific state with others, enabling more social exploration and discussion.

Opinion

When I first applied this visualization to my own Last.fm user data I was amazed: there are so many things you can learn and discover from your own data. The visualization is really user-friendly because it offers great ways to interact, like the one-click feature to put a layer at the bottom of the stack. In general this visualization is a great improvement on the one made by Lee Byron.

What I learned from this paper is that a good analysis of the data you’re visualizing, and of the functionality you want to offer, is a must. I like the way this visualization approaches the hierarchy of the music data: starting at the top and letting the user move down the hierarchy by applying filters. The interactivity of a visualization can really improve the user experience, and also the way you analyze your own data.

To me, repositioning a layer doesn’t solve the comparison problem. Maybe it could be solved by allowing users to select two or three layers and switching to another visualization, for example a line graph, that allows better comparison.

Researcher Profile

October 15, 2009

In this post I’ll give a summary of the paper I read about Researcher Profile and describe my opinion of the application in relation to my thesis.

Researcher Profile is a Facebook application for sharing research information within a community of collaborating researchers. The application offers visualizations of how researchers belonging to the same group collaborate. The authors chose to build a Facebook application for several reasons:

  • integration into an existing social network rather than developing a brand-new social system (basic functions like forums, comments, … can be reused)
  • of all the studied social networks, Facebook had the best API: more stable and with more functionality
  • more development tools are available for Facebook

The goal of the application is to let researchers within the same domain (= Facebook group) share and compare their research profiles, consisting of publications, events organised or attended, research projects involved in, … The application also allows importing bibliographic references from a BibTeX file (a rough sketch of such a record follows the list below), and it implements tags to facilitate retrieval of information. The application can visualise all collaborations between different members of the research community in different ways and according to different viewpoints:

  • geographical viewpoint (Google Maps showing where an event takes place)
  • collaborative viewpoint (who is working with whom)
  • viewpoint of a single individual’s collaborations (the collaborations in which one individual is involved)
  • evolutionary viewpoint (enables group members to see whether the activity of their community increases or decreases over time)
  • gender or age viewpoint
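
The paper doesn’t describe how the BibTeX import is implemented, but as a rough illustration, here is what parsing one simple, well-behaved BibTeX record into fields might look like; this regex-based sketch is my own and ignores the nesting and quoting that real BibTeX allows.

```python
import re

# A minimal BibTeX record of the kind such an import has to handle.
record = """@article{doe2009example,
  author  = {Jane Doe and John Smith},
  title   = {An Example Paper},
  journal = {Example Journal},
  year    = {2009}
}"""

entry_type, cite_key = re.match(r"@(\w+)\{([^,]+),", record).groups()
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", record))

print(entry_type)        # article
print(cite_key)          # doe2009example
print(fields["author"])  # Jane Doe and John Smith
```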

The paper also suggests some improvements to the application. One of these is synchronization with digital libraries such as DBLP, the ACM Portal, … in order to facilitate importing new data without too much effort.

Opinion

This is an interesting approach to a research community. The visualizations in the application are nice; especially the collaboration viewpoints are very useful to researchers. Some of the viewpoints don’t seem that useful to me, like the gender viewpoint. This application was developed without an existing community, and that makes it quite different from the starting point of my thesis. But the functionality offered in this application would certainly be of added value to the open repository DSpace.

Another point in this paper I agree with is that it is wise not to develop a new social network but to take advantage of existing ones. Instead of implementing a new social network in DSpace, it might be easier to build Facebook support into DSpace, allowing users of DSpace to integrate their Facebook profile into the open repository. For example, when a user uploads a paper, a message is sent to his or her Facebook profile, Twitter profile, … stating that a paper was uploaded, and comments made on that message would be shown on the researcher’s profile in DSpace. This would let the user publish his profile beyond DSpace alone. Allowing synchronization with digital libraries is also a great idea: it offers more information gathering and a bigger web presence for the user. One should realize that there are a lot of great online tools for science, and working together with the most-used tools will make DSpace a more attractive platform for researchers. Another route one could take is developing a Facebook application for DSpace, which has the disadvantage of being yet another application that needs to be maintained and installed by the user.

Scientific American on Science 2.0

October 15, 2009

In 2008 Scientific American published an article on Science 2.0; in this post I’ll try to recap some of its main points. The article starts off by pointing out that Web 2.0 has influenced institutions like journalism, marketing, … by allowing users to publish, edit and collaborate on online information. Science could be next in line.

Openness

Critiquing, suggesting, and sharing ideas and data are the heart of science, and a powerful tool for correcting errors, building on colleagues’ work and fashioning new knowledge. Classic peer-reviewed papers are important but are not collaborative beyond that; Web 2.0 could open up a much richer dialogue! An example is open notebook science: where a classic paper shows only the results, an open notebook allows people to see the research in much more detail (things someone tried that didn’t work out, …). Some of the advantages of this openness are:

  • more collaborative and therefore more productive
  • efficiency
  • faster development
  • competitiveness

Critique

Critics are afraid of the risks that come with this openness, like people copying or exploiting the work of others, even gaining credit or patents for it. In some fields of science, patents, promotion and tenure can hinge on being the first to publish a new discovery, so putting your work online early might not be a good idea.

Success stories

Next the article tells us about some success stories. The writer tells us that scientists have built up their knowledge about the world by “crowdsourcing” the contributions of many researchers and then refining that knowledge through open debate. Web 2.0 fits perfectly with the way science works; it’s just a matter of time before the transition happens.

OpenWetWare

The first example is a wiki based on the same software as Wikipedia, called OpenWetWare. It is a collaborative website that can be edited by anyone. It started off as a project to keep two labs up to date, but soon they discovered it was also a convenient place to post what they were learning about lab techniques (how-to’s, …). A side effect was that this information became available to the world, and soon people who were searching for information with Google found the website and started contributing. After a while enough people had joined that dynamically evolving class sites were created to share information. Another benefit mentioned in the article is its use in laboratory management, where it is hard to keep up with what your own team members are doing and to organize information; OpenWetWare solves this problem and can be accessed from anywhere. Lately OpenWetWare has hosted a lot of sites offering nice features like posting jobs, meetings, … In May 2007 OpenWetWare got a grant to transform the platform into a self-sustaining community independent of its current base at M.I.T., and to support the creation of a generic version of OpenWetWare.

Trashing

But some fears remain. The article mentions the example of an OpenWetWare user who at first kept all her posts private, afraid someone would trash published pages. But OpenWetWare has built-in safeguards: every user has to be registered and to establish that they belong to a legitimate research organization, and even if your pages do get trashed, the system can perform a rollback.

Getting scooped

Another concern is getting scooped and losing credit. This fear often keeps scientists from even discussing their unpublished work too freely, much less posting it on the Internet. But contrary to what people think, the Web offers better protection than traditional journals: every publication on a wiki gets a time stamp that proves you were first. The article even suggests that this fear factor might drive open science, since in journals your work won’t appear for another six or nine months, while on the web it is published right away. Another benefit of time-stamping every post is being able to track the contributions of every person.

Unsolved problems

Some problems might keep someone from publishing online, like concern for the privacy of the people who took part in a research test. Also, a journal might insist on copyrighting text and visuals, in which case pre-publishing online won’t be allowed. And it still isn’t clear whether a patent office will accept a wiki posting as proof of priority.

The more the better

The article also mentions a case in which a lot of people participated in a research project; because search engines could index what they were doing, they were discovered by people offering help from another part of the world.

Blogophobia

Scientists have been slow to adopt blogging. The whole point of blogging is getting ideas out quickly, even at the risk of being wrong or incomplete. For a scientist this is a tough jump to make, because in the process of publishing a paper, words are chosen carefully, … A benefit of blogging is that it is a good medium for brainstorming and discussion. Yet for young scientists struggling to get tenure this may look dangerous, because making a wrong impression could have consequences; out of fear, pseudonyms are often used.

Credit problem

Some people might not participate in blogs because time spent with the online community is time not spent cranking out the next publication. Scientists don’t blog because they get no credit, and this credit problem is one of the biggest barriers to many aspects of Science 2.0. The article explains that nobody believes a scientist’s only contribution is the papers he or she publishes: a good scientist also gives talks at conferences, shares ideas, takes a leadership role in the community, … Publications used to be the only thing one could measure, but this is changing now that a lot of this information goes online and thus becomes measurable.

The payoff of collaboration

Acceptance of such measures requires a big change in academic culture. The current technologies’ potential should move researchers away from an obsessive focus on priority and publication, and back toward the openness and community that were once the hallmarks of science. Great efforts are already being made, like Nature Network, Connotea, …