Christina's LIS Rant


14 posts · 27,388 views

This is my blog on library and information science. I'm into Sci/Tech libraries, special libraries, personal information management, sci/tech scholarly comms... I'm a librarian in a physics, astronomy, math, computer science, and engineering library. I'm also a doctoral student at Maryland. Any opinions expressed here are strictly my own and do not necessarily reflect those of my employer or graduate school.

Christina Pikas
14 posts


  • December 27, 2010
  • 11:15 AM

Are the old folks holding us back?

by Christina Pikas in Christina's LIS Rant

We’ve been hearing a lot about how hard it is to get a tenure track job – arguably harder even than it was during other economic recessions. We’ve also been hearing about how the age of NIH PIs is going up. I guess the age at first award is going up as well as the [...]... Read more »

  • December 20, 2010
  • 06:47 PM

Has the online search displaced the friend as the preferred first information source?

by Christina Pikas in Christina's LIS Rant

Review of a JASIST article looking at selection of information sources: co-workers or electronic resources.... Read more »

  • July 25, 2010
  • 02:31 PM

Hey maybe scientists should do more than just wait for their journal to issue a press release on their new fabu article

by Christina Pikas in Christina's LIS Rant

The authors' thesis is that the only mandatory communication of results is the peer-reviewed journal article. Scientists aren't required to do other communicating and often leave communication with the public to the media. They ask whether this is adequate given the very low percentage of scientific articles that ever make it into the press, particularly in areas outside of health and medicine, and given the fact that for everyone out of formal education, the media is their primary source of science education.

Recent studies do show that scientists often don't mind talking to reporters and do so more frequently than one might think [1-2]. They do get kind of frustrated when their work is misrepresented - even if that misrepresentation is just failing to include qualifying statements. Newspapers in general covered a lot more science over time (as studied in the period 1951-1971, I know). Fancy journals that issue press releases for papers find that those papers are more likely to be reported in the news media. The authors cite another study finding that some 84% of newspaper stories originated from press releases.

This study was just about how much makes it to the media and whether that percentage is staying steady as the number of papers increases. When they actually did the work, they only looked at parts of two years, 1990 and 2001, and two media outlets, Time and NBC News. They didn't use the WaPo or NYT because better-educated people read them (???). Plus, they found that only 25-50% of news pieces actually mention the article's author and venue, so they probably missed a ton.

So this is quite disappointing, really. The study narrowed the coverage of the search so much that I don't think it's really representative of anything. Of course only a few articles get discussed in the media, but if you want numbers, this paper won't help. Studies like this also need to start discussing outlets like Nova, National Geographic, and the Discovery Channel. We watch that stuff all the time and so do a lot of people we know (of course I'm pretty well educated, I guess). They mention journal press releases, but for big science there are also lab press releases and media officers. There are also scientists talking directly to the public on blogs.

One thing you can probably take away: if you work outside of biomed and/or are not publishing in Science or Nature and have a really cool result, don't wait for the press to come a-knockin' - get it out there another way.

Here's the citation: Suleski, J., & Ibaraki, M. (2010). Scientists are talking, but mostly to each other: a quantitative analysis of research represented in mass media. Public Understanding of Science, 19(1), 115-125. DOI: 10.1177/0963662508096776

[1] Peters, H. P., Brossard, D., de Cheveigne, S., Dunwoody, S., Kallfass, M., Miller, S., & Tsuchida, S. (2008). Science-Media Interface: It's Time to Reconsider. Science Communication, 30(2), 266-276. doi:10.1177/1075547008324809

[2] Dunwoody, S., Brossard, D., & Dudo, A. (2009). Socialization or rewards? Predicting U.S. scientist-media interactions. Journalism and Mass Communication Quarterly, 86(2), 299-314. Retrieved from ... Read more »

  • July 17, 2010
  • 01:47 PM

Across disciplines, what motivates or prevents faculty self-archiving?

by Christina Pikas in Christina's LIS Rant

This article is in early view at JASIST. It looks like it comes from the author's dissertation. It isn't terribly earth-shattering, but it's well done, it provides more evidence, and there are definitely some implications for library/IR manager practice. Here's the citation: Kim, J. (2010). Faculty self-archiving: Motivations and barriers. Journal of the American Society for Information Science and Technology. DOI: 10.1002/asi.21336

The author went through a complicated process to identify 1,500 faculty members at 17 research institutions with DSpace IRs (not immediately clear why only DSpace IRs). The faculty members were at all levels (assistant, associate, full) and from several areas of science (includes math), several areas of engineering (includes CS, hm), several areas of social science, and several areas of the humanities. Some had items in their IR and some didn't. There was a web-based survey with a 45% response rate (sounds good, but the author mailed the people and e-mailed them a bunch of times, so she worked for it). The survey is included in the appendix. It has a bunch of Likert-scale questions, some yes/no, some multiple choice, and some open questions. Forty-one telephone interviews were done with survey respondents to get more in-depth information.

So what did she find?

  • Altruism – but this isn't exactly what you think. It's more like generalized reciprocity, combined with quid pro quo, combined with access for those in less developed countries.
  • Coming from a self-archiving culture. Some actually mentioned peer pressure – if it weren't expected of them, they wouldn't do it.
  • Copyright concerns. Some don't self-archive because they believe they don't have the right. The nice part is that at least a few knew that they could amend the publication agreement. This sort of counteracts the idea that faculty don't know about or get copyright – these folks were pretty clear on it.
  • Technical skills and age. Younger faculty and those who rated their technical skills more highly were more likely to self-archive.
  • Impact on tenure or promotion. They all seemed to think there would be a positive impact, or none, on promotion and tenure.
  • Time and effort. It's too much of a PITA for its priority.

Applications/implications for librarians: if concern about copyright is preventing a lot of self-archiving, then there's real education that can be done. Also, the fact that it's a hassle: if faculty can populate their websites by using a badge or widget from the IR, that would make things easier, eh?

A couple of trivial things about the article: it seems really redundant – it repeats itself a lot, and some good editing would make it a bunch tighter. It has a great reference list – this might be a useful collection for anyone writing or presenting on the topic. ... Read more »

Kim, J. (2010) Faculty self-archiving: Motivations and barriers. Journal of the American Society for Information Science and Technology. DOI: 10.1002/asi.21336  

  • June 5, 2010
  • 10:41 PM

Inappropriate citations?

by Christina Pikas in Christina's LIS Rant

Kevin Zelnio of Deep Sea News tweeted the title of this piece and sent my mind going over the various theories of citation, what citations mean, studies showing how people cite without reading (pdf) (or at least propagate obvious citation errors), and also how people use things but don't cite them in certain fields... I was also thinking, I know what inappropriate touching is, but what's inappropriate citing? So let's take a look at the article: Todd, P., Guest, J., Lu, J., & Chou, L. (2010). One in four citations in marine biology papers is inappropriate. Marine Ecology Progress Series, 408, 299-303. DOI: 10.3354/meps08587

According to the authors, inappropriate citations intentionally or unintentionally misrepresent the meaning of the work cited. Here are some aspects they mention:

  • citing a review article instead of the primary work (hmmm)
  • citing something that asserts the idea based on another citation, not based on the work presented in that paper ("empty" citations)
  • misunderstanding or misinterpreting an article (or citing without reading)

I guess the first author's been on sort of a kick about this; the method they use comes from his earlier paper in ecology. They also reference similar studies in a number of areas in medicine. They selected a couple of articles from recent issues of 33 marine biology journals, and for each article they picked a citation that was provided to support one assertion. They rotated where in the article they got the citation from: the intro, methods, or results/discussion. They retrieved the cited article and coded whether it provided clear support, no support, ambiguous support, or an empty citation for the assertion. Here's an issue: majority ruled, and ties went to the author. The more typical thing is to negotiate disagreements and/or to come up with an inter-rater reliability measure.

You can see how this could be problematic for the ambiguous category, which has the following scope note: "The material (either text or data) in the cited article has been interpreted one way, but could also be interpreted in other ways, including the opposite point. The assertion in the primary article is supported by a portion of the cited article, but that portion runs contrary to the overall thrust of the cited article. The assertion includes 2 or more components, but the cited article only supports one of them."

The assertions were clearly supported 76% of the time, but another 10% were ambiguous. It didn't matter which section the citations were in, the number of authors, the number of references in the list (that would be interesting, really, because it might indicate some sort of padding), the length of the article, or the journal impact factor (again, it would have been interesting if there were some correlation with some proxy for "journal quality").

They suggest that this practice could undermine an entire discipline and that this padding to try to get a paper accepted is dirty business. I'm not really sure it's as widespread as all that, or that it's as pernicious. Based on their methods, we don't know if the percentages found could be accounted for by inter-rater disagreements. How often/well did the raters agree? Particularly in the ambiguous category, that could make a big difference. They use this to make the case that citations might not be the best thing to use to judge people and institutions (well, yeah!). They also rest the responsibility on the authors, and suggest they be more careful, not provide whole lists at the end of sentences, and cite correctly. They suggest journals could require some random audits; seems like most journals are more likely to go the other way and suggest new citations.

The biggest problem with this is the assumption that all citations have to support assertions. Some citations might point to other ways a method was used, or to sources of additional information; a citation is an indication of utility, not quality, really. Also, some mistakes happen - the wrong article by the right author (maybe clicked the wrong button when inserting the citation into the manuscript, or faulty memory) - and I think no one is really suggesting that an article should be retracted or an erratum issued if this is discovered. Dunno, I'm under-impressed by the article and the severity of the issue... you? ... Read more »
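Since the post faults the study for not reporting an inter-rater reliability measure, here is a minimal sketch of what one looks like: Cohen's kappa, computed over a handful of hypothetical citation codings. The categories and ratings below are invented for illustration and are not taken from the paper.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the raters actually agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters coding six citations
a = ["clear", "clear", "ambiguous", "none", "clear", "empty"]
b = ["clear", "ambiguous", "ambiguous", "none", "clear", "clear"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

A kappa around 0.5 is only moderate agreement, which is exactly the kind of number that would tell us whether the 10% "ambiguous" figure reflects the articles or the raters.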

Todd, P., Guest, J., Lu, J., & Chou, L. (2010) One in four citations in marine biology papers is inappropriate. Marine Ecology Progress Series, 408, 299-303. DOI: 10.3354/meps08587

  • April 25, 2010
  • 05:46 PM

Review of an article using bibliometric qual methods to study sub-discipline collaboration behavior

by Christina Pikas in Christina's LIS Rant

Mixed methods are always attractive, but many researchers give up because each method typically requires some epistemology which often conflicts with the epistemology of other methods. When mixed methods are done, they are often done in sequence: for example, qualitative work to understand enough about a phenomenon to develop a survey, or interviewing survey respondents to get richer information about their responses. Network methods are neither quantitative* nor qualitative, and it's not typical to combine them with qualitative methods - hence my interest in this piece. Of course I'm also interested in collaboration in science. The authors combine network analysis of the co-authorship network with qualitative interviews with the scientists to look at intergroup collaboration, migrations, and exchange of services or samples. The citation: Velden, T., Haque, A., & Lagoze, C. (in press). A new approach to analyzing patterns of collaboration in co-authorship networks: mesoscopic analysis and interpretation. Scientometrics. DOI: 10.1007/s11192-010-0224-6 (pre-print available at:

As background, the authors note some of the limitations of doing ethnographic studies - rich information about a very small group of people (transferability, but never generalizability) - and of doing large bibliometric studies (mapping a large crowd, but possibly missing nuances of a sub-discipline research area, and not explaining what the network features mean). The qualitative part of their work is part of a larger ongoing study of chemistry research groups in the US and Europe. The bibliometric information comes from topical queries in Web of Science. The keywords for the queries were selected to represent three sub-discipline research areas in chemistry. They went back 20 years, and kept the co-authors who had more than one paper in the retrieved set. The result sets were reviewed by participants in the qualitative portion to check that they were on topic.

They extracted the largest component (a component is all of the nodes that are somehow connected to each other), and then did some clustering, some discussing with their participants, calculated a bunch of centrality measures, used a method from Guimera to find "hubs", and determined whether links between the clusters are more of a transfer type or a collaboration type depending on their robustness to the removal of one or two author nodes.

Their results: the extracted clusters matched up with the scientists' immediate research groups plus a few external regular collaboration partners. In some fields, the PI-led groups were almost in a perfect star network shape, whereas in others the network was still hierarchical but no one node dominated the cluster. Some of the clusters had only one or two authors connecting them, whereas others had lots of collaborations. The authors asked about these connections, and found that the one- or two-author connections resulted from visiting professorships, career migration, one-off commissioned work, or funded project collaboration of a sub-group leader. The many-to-many connections resulted from large collaborations on methods and the subject. Two of the research specialties had mostly large clusters, whereas the third had many more small clusters. The size of the cluster was correlated with the number of papers published (not surprising). The field with more small clusters also had a lot more single-hub clusters, whereas the other two fields had multiple hubs per cluster. The collaboration-type (vs transfer-type) connections are more likely to be geographically proximate. Field A apparently requires large, expensive equipment, so there is some incentive for stronger integration/collaboration. Apparently collaboration isn't funded in the US for field C, so there aren't as many collaborations in the US as there are in other parts of the world. Field B is some area of synthetic chemistry and fields A and C are some field of physical chemistry - so there do seem to be differences in the ways these authors work. B has more hub-and-spoke with a PI and his or her lab, whereas A & C have more equal distributions with denser connections.

Commentary: Is this combination of methods useful, is the article successful, and did we learn new things from it? I'm not sure the combination of methods is as new as I originally thought, but they certainly did integrate expertise from their participants at many different stages - and that's healthy. So often we just talk about the discipline level, and that's really inadequate when you consider the diversity of research within chemistry, for example. Elsewhere Velden et al. discuss the difference between synthetic-type chemists and other types; that seems to hold in this study, too. This paper was mostly about the methods (appropriate for the venue), but it would be nice to see this integrated with some of their other studies (I guess I'll have to wait for the dissertation). It's unfortunate that more detail can't be given on the precise research areas; that information is omitted to protect the privacy of the participants.

* The authors say that network methods are quantitative - I disagree. For one thing, they are about the connections, not about actor attributes. For another, you can't do regular statistics on them because they violate the independence-of-samples and normal-distribution assumptions... so any statistics have to be done by bootstrapping. ... Read more »
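The pipeline described above (extract the largest component, find hubs, test whether inter-cluster links survive removing a bridging author) can be sketched on a toy network. Everything below is hypothetical: the co-authorship data, the crude degree threshold standing in for Guimera's hub method, and the single-node robustness check are invented stand-ins for the authors' more sophisticated analysis.

```python
from collections import defaultdict, deque

# Toy co-authorship network (hypothetical; the paper's data came from
# Web of Science topic queries). Each edge is a pair of co-authors.
edges = [
    ("PI_1", "student_a"), ("PI_1", "student_b"), ("PI_1", "student_c"),
    ("PI_2", "postdoc_x"), ("PI_2", "postdoc_y"), ("postdoc_x", "postdoc_y"),
    ("student_c", "postdoc_x"),   # lone bridge between the two groups
    ("loner_1", "loner_2"),       # small disconnected pair
]

def adjacency(edge_list):
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def components(adj):
    """All connected components, via breadth-first search."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

adj = adjacency(edges)
largest = max(components(adj), key=len)           # the "largest component"
hubs = [n for n in largest if len(adj[n]) >= 3]   # crude degree-based hub rule

# Robustness: a link carried by one author vanishes when that author is
# removed (a "transfer"-type connection in the paper's terms).
pruned = adjacency([(u, v) for u, v in edges if "student_c" not in (u, v)])
split = all(len(c) < len(largest) for c in components(pruned))
print(len(largest), sorted(hubs), split)  # → 7 ['PI_1', 'postdoc_x'] True
```

Here removing the single bridging author splits the component in two, which is the signature of a transfer-type link rather than a genuine many-to-many collaboration.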

  • April 17, 2010
  • 01:31 AM

Using the fact that sometimes scientists look at the pictures first

by Christina Pikas in Christina's LIS Rant

I was happy to see that the authors published this article in PLoS ONE. I was following their work a while ago, but had lost track (plus, when asked, the last author implied that they had moved on to new projects). So here's the citation, and then I'll summarize and comment: Divoli, A., Wooldridge, M., & Hearst, M. (2010). Full Text and Figure Display Improves Bioscience Literature Search. PLoS ONE, 5(4). DOI: 10.1371/journal.pone.0009619

The authors created a prototype information system that used Lucene to index the metadata for open access biomed articles, the full text, and the captions for images and tables. The interface is set up to let you use one search box and then radio buttons to select full text and abstracts, figure captions, or tables. In the first, the results are sort of like the standard metadata and abstract, with keyword-in-context excerpts and extracted images. For figure captions, you can have either a grid of figures or a list. For tables, you get a citation, the table caption, and the table. The article spends a good deal of time discussing design decisions, providing a tutorial for creating your own. To build the prototype, they got the XML from PubMed Central and pulled out authors, images, captions, abstracts... They made different sizes of the images for quick retrieval later. They then included different fields with different weights depending on what you select to search.

They then got a group of biologists (n=20, although the number isn't really important for qualitative studies) and ran them through a study. The participants provided the query and looked at it in each view, thinking aloud about their reactions and steps. They were then asked a few questions about each interface. The majority of the participants would choose to use this type of interface for at least some of their searching. Seems like they got the full text search, but were not quite as sure about the table search. Some thought it would be useful for getting right to the results, but several didn't think they would use it.

Now for some commentary... I was somewhat critical in my post I linked to above, but I really think this is promising stuff. The authors point out that this is very dependent on access to the full text and also won't be universally useful. There are plenty of search situations in which the images wouldn't be used, but they should be an option. Since my earlier post, CSA has added "deep indexing" to more of their files. It's not the same as their dedicated Illustrata product, which is more like BioText. Publishers have the full text, so some of them are also making the images and tables available outside of the article. For example, both ACS and RSC have added images to their RSS feeds. ScienceDirect has a tables-and-images tab on their articles, which is nice for scanning to see if the article is relevant. PLoS ONE lets you look through a list of the tables and images and download a ppt or high-quality image. Springer Images also lets you search the tables and captions to get pictures. It also indexes the context of the reference to the image in the text, and you get a link to the article and excerpts like on Google Books. My colleague at work pointed out that it is useful for finding phase diagrams.

But more than all of that, there's been a lot of talk recently about disaggregating the journal article, or even doing away with the whole and just using the pieces. If so, maybe this is an intermediate step. ... Read more »
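The "different fields with different weights" idea from the prototype is easy to sketch. The following is a hypothetical pure-Python stand-in for what a Lucene index with per-field boosts does; the field names, weights, and documents are all invented for illustration, not taken from the paper.

```python
import re
from collections import defaultdict

# Assumed field weights: captions boosted over abstracts, as a prototype
# like this might do when the user selects figure-caption search.
FIELD_WEIGHTS = {"abstract": 1.0, "caption": 2.0}

articles = [
    {"id": "A1",
     "abstract": "protein folding pathways in yeast",
     "caption": "Figure 1: fluorescence image of folded protein"},
    {"id": "A2",
     "abstract": "gene expression microarray analysis",
     "caption": "Table 2: expression levels across samples"},
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def build_index(docs):
    """Inverted index: term -> list of (doc id, field) postings."""
    index = defaultdict(list)
    for doc in docs:
        for field in FIELD_WEIGHTS:
            for term in tokenize(doc[field]):
                index[term].append((doc["id"], field))
    return index

def search(index, query):
    """Score each document by summing the weights of matching fields."""
    scores = defaultdict(float)
    for term in tokenize(query):
        for doc_id, field in index[term]:
            scores[doc_id] += FIELD_WEIGHTS[field]
    return sorted(scores, key=scores.get, reverse=True)

idx = build_index(articles)
print(search(idx, "expression image"))  # → ['A2', 'A1']
```

A2 ranks first because "expression" hits both its abstract and its boosted caption, while "image" hits only A1's caption; swapping the weights would reorder the results, which is the whole point of field-weighted search.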

  • March 13, 2010
  • 12:06 PM

The evidence is: status, communication training, and intrinsic rewards are positively associated with scientists communicating with the media

by Christina Pikas in Christina's LIS Rant

Myths abound about how scientists do not talk with the media or communicate with the public, and that if they do so, it is only because they are required to by funders' "broader impact" requirements. The evidence, however, does not support this view. This article is another in a series of communications based on a multi-national study of how scientists in several fields communicate with the media (you might have seen [1] or [2]). This article only uses data from US scientists who were recently corresponding authors on peer-reviewed articles in stem cell research and epidemiology (survey sent to 1,254, with a response rate of 34.5% for n=363). Refer to the article for a detailed description of their research questions, statistical methods, and significance.

Two-thirds of the scientists had interacted with the media in the previous 3 years. More than a quarter interacted with the media six or more times. Status - career level and number of publications - was positively associated with a greater number of media contacts. Respondents who were confident in their ability to interact with the media, and those who had participated in formal communication training, were more likely to interact with the media. The authors found that extrinsic rewards - like funder/sponsor and their own reputations - were not statistically significantly associated with frequency of interaction with the media. Intrinsic rewards - the scientists enjoyed communicating - were associated with more frequent interactions.

Citation: Dunwoody, S., Brossard, D., & Dudo, A. (2009). Socialization or rewards? Predicting U.S. scientist-media interactions. Journalism and Mass Communication Quarterly, 86(2), 299-314. Retrieved March 13, 2010 from

[1] Peters, H. P., Brossard, D., de Cheveigné, S., Dunwoody, S., Kallfass, M., Miller, S., & Tsuchida, S. (2008). Science communication: Interactions with the mass media. Science, 321(5886), 204-205. doi:10.1126/science.1157780

[2] Scheufele, D. A., Brossard, D., Dunwoody, S., Corley, E. A., Guston, D., & Peters, H. P. (2009, August 4). Are scientists really out of touch? The Scientist. Retrieved from ... Read more »

Dunwoody, S., Brossard, D., & Dudo, A. (2009) Socialization or rewards? Predicting U.S. scientist-media interactions. Journalism and Mass Communication Quarterly, 86(2), 299-314.

  • March 6, 2010
  • 02:24 PM

Black men in women’s work do not get to ride the glass escalator

by Christina Pikas in Christina's LIS Rant

This post reviews a fairly recent article that examines the experiences of black men in nursing and asks whether they experience the "glass escalator" effect or if the work is racialized as well as gendered.

As requested by some fellow Sciblings, I recently blogged about an older article* that coined the term glass escalator. In my post I was uncertain about how the findings from the study were viewed by experts familiar with that body of work. In the comments, Kris D, who identifies as a sociologist, said that these findings have been upheld by subsequent research. Kris also recommended the article that is the focus of this post.


Wingfield, A. (2009). Racializing the Glass Escalator: Reconsidering Men's Experiences with Women's Work. Gender & Society, 23(1), 5-26. DOI: 10.1177/0891243208323054

As a reminder, white men in professions typically considered women's work, such as nursing, social work, elementary school teaching, and librarianship, are often promoted earlier, paid better, and network better with management. The women in these professions are welcoming toward the men and push them up the escalator. The white men often distance themselves from the feminine aspects of the work - less caring, more technical (nursing: ER, not bedside; librarianship: systems, not children's/public services).

Wingfield asks whether gendered racism makes black men's experiences different from white men's. Here's what she found:

Black men were not welcomed by the women; they were isolated and treated like they were not wanted.

Black men experienced a great deal of difficulty getting promoted.

While white men were mistaken for doctors, black men were mistaken for janitors regardless of how they presented themselves.

I've even given patients their medicines, explained their care to them, and then they'll say to me, "Well, can you send the nurse in?" (p.18)
The men in the study did not reject the caring aspects of nursing, but rather embraced them: "concern for others is connected to fighting the effects of racial inequality" (p.21). They enjoy patient care, and they provide services to the community to "challenge racial inequalities."

In her conclusion, she speculates that this might be sexualized as well as gendered and raced. That is, there might be an interaction with sexuality as well as the one between race and gender (i.e., homosexual men may not get to ride the escalator either).

* Williams, C.L. (1992). The Glass Escalator: Hidden Advantages for Men in the "Female" Professions. Social Problems, 39, 253-267. ... Read more »

  • January 9, 2010
  • 01:09 PM

Very quick note on things that are used but not cited

by Christina Pikas in Christina's LIS Rant

In most of the discussions of using usage as a metric of scholarly impact, the example of the clinician is given. The example goes that medical articles might be heavily used, and indeed have a huge impact on practice (saving lives), but be uncited. There are other fields that have practitioners who pull from the literature but do not contribute to it. So it was with interest that I read this new article by the MacRoberts: MacRoberts, M., & MacRoberts, B. (2009). Problems of citation analysis: A study of uncited and seldom-cited influences. Journal of the American Society for Information Science and Technology, 61, 1-12. DOI: 10.1002/asi.21228

The article provides great examples from the field of biogeography (the distribution of plants and animals over an area, they tell me). It is typical for researchers in this field, when writing articles in peer-reviewed journals, not to cite their data sources. Some of the data sources are floras - "a list of plant species known to occur within a region of interest." The floras might be books, government reports, notes in journals, or some other sort of gray literature. The authors give a couple of examples - one is their own article - and show how these articles are uncited according to Web of Science, but heavily used and well incorporated into databases, books, and pamphlets. As they say, the purpose of the article has been achieved. Not only are these things used directly, but once their contents are incorporated into databases, the database then goes on to serve maybe thousands of people. The sources are often listed in notes or in an appendix, but with no citation. This content that is sucked up into books or databases provides no traceable usage link, as far as I can tell. If we can't even determine the impact of the article - a container for an idea - how can we understand or evaluate the impact of the author and his or her knowledge contributions?

It's been noted elsewhere (see the article for citations/discussion) that the largest influence on a scientist often comes from informal communication partners - colleagues and co-workers. This is not cited, either. So if we are truly interested in evaluating a scientist on his or her influence, we have to come up with new methods that look at how their ideas have been used; it is not enough to look at article citations or downloads. (As an aside: the authors quote a website that bemoans the difficulty of locating floras. Certainly if they were cited, that would help!) ... Read more »

MacRoberts, M., & MacRoberts, B. (2009) Problems of citation analysis: A study of uncited and seldom-cited influences. Journal of the American Society for Information Science and Technology, 61, 1-12. DOI: 10.1002/asi.21228

  • August 29, 2009
  • 03:59 PM

Understanding urban, low-socioeconomic-status, African-American girls' attitudes towards science

by Christina Pikas in Christina's LIS Rant

So often we hear of large studies like the GSS being used for attitudes towards science. We also hear the results of science achievement metrics and are disappointed. This article provides a great mix between generalizable quantitative understanding gained through use of a validated instrument and more individualized understanding gained through qualitative research using a critical feminist lens. The authors chose this sequential mixed-methods approach to attend to "questioning how to meet the needs of the many while coming to understand the uniqueness of the individuals among the many." The other problem they address is confounding categories: typical studies examine urban/suburban/rural OR majority/minority OR gender OR socioeconomic status, but they seek to understand attitudes in a population that is urban AND low SES AND African American AND female. There's definitely a tension between grouping this category and exploring the heterogeneity within it - and the question of which will be most useful in eventually promoting the participation of this group in science. Attitudes are important because they are predictors of choosing science classes.

The study participants were 4th, 5th, and 6th graders at a school in the Midwest. The school population is 99% African American and 1% multiracial, and 88% qualify for free lunch (a typical measure in the US for the SES of a school). Eighty-nine students completed the questionnaire (the modified Attitudes Toward Science Inventory). Thirty were purposively selected to participate in group interviews. The selected students represented each grade and level of academic achievement as shown by their results on a statewide standardized test. All participants qualified for free lunch and were African American. The questionnaire was administered by an African American teacher who is part of the research team. The group interviews consisted of 3 or 4 participants and were semi-structured. They were conducted by a Caucasian (or shall we say European-American) researcher who is a former science teacher. The authors mitigated the impact of this choice by having her introduce herself and make several site visits prior to the interviews. However, IMO, this is still a problem, particularly with this group of participants.

The girls generally had positive perceptions of science, were confident, were not anxious, and had a desire to do science. They either had content-related definitions of science (it's about plants, the moon, keeping your body healthy) or process-related definitions (a way of learning about..., helps you be a detective..., "an adventure of fun"...) (yay, process girls!). In discussing the importance of science, a third mentioned things like knowing what to eat, how to stay safe from a tornado, and what not to touch on a nature hike. A few mentioned science's importance for doing well in school or for an eventual career, like in forensics or as a teacher or veterinarian. Some girls didn't see science as important for them at all (as in: well, you need to know how to read to get a job, so that's important). Some of the girls experimented with their families at home or even at home on their own. Others saw it as just another thing done in school, where you read the book, do what the teacher tells you to do, and then answer questions; they saw no relationship to things outside of school. Some of the students felt that they were very successful in doing science, and if they ever got stuck, some help from the teacher would be enough to get them past it. Others were very frustrated and didn't understand the questions they got in their labs or the projects they did. From these results the authors created profiles: some girls who, for example, viewed science as a process, did work outside of school, and felt successful profiled as high confidence/anti-anxiety, high desire/value; other profiles were low in one or another of these areas.

What's really interesting is that there were some girls in this group with a positive attitude, high confidence, high desire, and who valued science who were C students in science. Why? The authors are going to try "connected problem based learning": challenging the girls with real-world problems, having them work together in small groups with a teacher as a facilitator, etc. This article is one of what will, I hope, be a series as these authors continue to work in and with this school.

Buck, G., Cook, K., Quigley, C., Eastwood, J., & Lucas, Y. (in press). Profiles of Urban, Low SES, African American Girls' Attitudes Toward Science: A Sequential Explanatory Mixed Methods Study. Journal of Mixed Methods Research. DOI: 10.1177/1558689809341797

Read the comments on this post...... Read more »
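The profile construction step can be pictured as a simple rule over questionnaire subscale scores. A minimal sketch, assuming invented subscale names, scores, and a cutoff - these are illustrative only, not the actual scales or scoring of the modified Attitudes Toward Science Inventory:

```python
# Toy sketch of building attitude "profiles" from questionnaire subscales.
# Subscale names, scores, and the cutoff are invented for illustration --
# they are not the instrument's actual scales or scoring rules.
students = {
    "student_1": {"confidence": 4.5, "anxiety": 1.5, "desire": 4.8, "value": 4.2},
    "student_2": {"confidence": 2.0, "anxiety": 4.0, "desire": 4.5, "value": 4.0},
}

def profile(scores, cutoff=3.0):
    # "High" on a positive subscale means above the cutoff; anxiety is
    # reversed, since low anxiety is the positive direction.
    parts = []
    parts.append("high confidence" if scores["confidence"] > cutoff else "low confidence")
    parts.append("low anxiety" if scores["anxiety"] < cutoff else "high anxiety")
    parts.append("high desire/value"
                 if scores["desire"] > cutoff and scores["value"] > cutoff
                 else "low desire/value")
    return ", ".join(parts)

for name, scores in students.items():
    print(name, "->", profile(scores))
```

The interesting cases in the article are exactly the ones a rule like this surfaces: girls whose profile is high across the board yet whose science grades are low.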

  • June 14, 2009
  • 02:39 AM

Is Taylor's "compromised need" pseudoscience?

by Christina Pikas in Christina's LIS Rant

If you've read my blog at all, you probably know I'm a Taylor (1962, 1968) groupie. In fact, in a recent post I talked about going from a visceral need to a compromised need. This is a central idea in library science. So when I saw this article in my feeds today, I had to pounce on it:

Nicolaisen, J. (in press). Compromised need and the label effect: An examination of claims and evidence. Journal of the American Society for Information Science and Technology, 1-6. DOI: 10.1002/asi.21129

Let's look at this paper, its claims, and discuss it a bit, shall we?

As a reminder, compromised need is what comes out of the information seeker's mouth or is typed by her hands when interfacing with an information retrieval system (here, an information retrieval system can have a librarian as the interface - and that librarian can be there in person or connected via some electronic means - or can be a web search page, a research database search page, or even a book index). The idea is that what actually comes out might be very different from the actual need, because there are labeling problems, you might not know what you need or how to describe what is needed, and you change what you say based on what interface you've got and what you think the system can do with your input (see, for example, my comps reading from Wolfram (2008), in which the searches were different for two systems with similar Google boxes).

Nicolaisen starts by talking about the importance of this concept - the compromised need - and how it wasn't really used for much until the 1980s, when researchers started to use cognitive and psychological research in LIS. Apparently, though, this theory has never been validated as such and tested to see if it holds water. It's basically been taken at face value, and reference training for librarians has changed accordingly. His point in this article is to compare Taylor's claims to empirical studies that track reference questions received, to see if there is support for "compromised need."

In describing the claims, I think Nicolaisen says some things that seem obvious but do not match my experience or what I've seen in articles on evaluating reference service in the public library. The first of these is that this compromising makes sense for areas outside of one's expertise but makes no sense for a known-item search or someone with a "verificative need." He says:

If the information need is a verificative need, the inquirer is in possession of bibliographical data, and if the information need is a conscious topical need, the inquirer is in possession of terms and concepts necessary for expressing the required information. However, when confronting the intermediary, inquirers allegedly tend to specify their needs using other terms and concepts, which mitigate or misrepresent their true information needs. It almost seems like the inquirers deliberately pull the wool over the eyes of the intermediaries, thus making it much harder for them to provide the desired information.

He seems very skeptical (the way this is written) and questions how often this happens. But actually, there are many instances when this is indeed the case - for example, when the information need is on a sensitive subject, or when the patron doesn't have any faith that the information system can respond to the request. He lists a pile of references in which this is taken as a given, and he found none in which the idea is questioned. Indeed, in the literature reviews everyone apparently relies on a study by Ingwersen that essentially had a sample size of 2 - which is OK for qualitative work, but it's not, by definition, generalizable.

Looking through the evidence provided in the studies he reviewed, he found that only a very small proportion of the questions required extensive interviews, and likewise very few of the questions changed from the initial question after the reference interview. He ends the article by describing what's needed for induction - going from some observations to a universal statement - including a large sample size, results from different settings, and no conflicting information. Further, he calls the theory pseudoscientific because it faces unresolved problems and is accepted without question or testing (whoa... them's fightin' words).

I think Nicolaisen's sample of the literature is weak, to be honest. Many, many public libraries have evaluated reference service, and time and again they've shown that failure to do a proper reference interview leads to poor results. No interview -> few questions are answered correctly -> there is something in the interview process that allows the system to make a better match with the need than without it. The patron could have perfectly specified the need - but if it is not understood by the system, then they won't get the answer. How about studies of search engine logs? Clearly the needs are imperfectly specified: the system returns documents matching the terms, and yet the user enters a new search.
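That search-log argument can be made concrete with a toy example. This is a minimal sketch under invented data, not any particular study's method: a session that issues more than one distinct query is treated as a rough signal that the first expression of the need didn't satisfy it.

```python
from collections import defaultdict

# Hypothetical search-log records: (session_id, query) in time order.
# Real transaction logs also carry timestamps, result counts, and clicks;
# this invented log is illustrative only.
log = [
    ("s1", "heart attack"),
    ("s1", "myocardial infarction symptoms"),
    ("s2", "python"),
    ("s3", "tornado safety"),
    ("s3", "tornado shelter basement"),
    ("s3", "storm shelter requirements"),
]

# Group queries by session, preserving order.
sessions = defaultdict(list)
for session_id, query in log:
    sessions[session_id].append(query)

# A session "reformulates" if it issues more than one distinct query.
reformulated = sum(1 for qs in sessions.values() if len(set(qs)) > 1)
rate = reformulated / len(sessions)
print(f"{reformulated} of {len(sessions)} sessions reformulated ({rate:.0%})")
```

Even this crude count shows the pattern: if the first 2-3-word query had perfectly specified the need, there would be nothing left to reformulate.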

I was one of the student investigators on a study by Kaske and Arnold of virtual reference services. We did a typical Hernon and McClure study, and it was the same old thing - librarians who asked what we needed and checked back to see if what they gave us was appropriate were the only ones who successfully answered the question. It's not that we weren't saying what we needed; it's that what we said couldn't be interpreted correctly most of the time in isolation. Just because interviews were not performed does not mean that the questions did not require an interview! It is often the case that someone will come in for a specific book but really have a much bigger problem - and you can only address that bigger issue through an interview.

Of course, my experience and the myriad studies in the state of Maryland do not contradict what Nicolaisen found in the studies he looked at; however, I think he picked the wrong studies. He was only looking for studies that specifically compared before-and-after statements of information need. It's not that the concept is pseudoscience because it isn't studied and questioned; it's that it is part of all of the studies we do in certain areas of our field.

My arguments are somewhat confused, but basically:
1) studies showing the importance of reference interviews to answering patrons' questions are relevant to this topic
2) reports that interviews weren't required do not say if or how the patrons' actual problems were solved, or whether the patrons were satisfied with the service
3) people put 2-3 words into a search engine - that's it - and there's no way that can perfectly specify their information need

And I'm going to stop writing now, as I stopped saying anything new a while ago!


Taylor, R. S. (1962). Process of asking questions. American Documentation, 13(4), 391-396.

Taylor, R. S. (1968). Question-negotiation and information seeking in libraries. College & Research Libraries, 29(3).

Wolfram, D. (2008). Search characteristics in different types of Web-based IR environments: Are they the same? Information Processing & Management, 44.

Read the comments on this post...... Read more »

Nicolaisen, J. (2009) Compromised need and the label effect: An examination of claims and evidence. Journal of the American Society for Information Science and Technology, 1-6. DOI: 10.1002/asi.21129  

  • August 29, 2008
  • 08:16 PM

The meaning of citations

by Christina Pikas in Christina's LIS Rant

What a grand post title! Actually, what I mean is slightly more like: the meaning of citations - what Garfield said he means in a bunch of articles vs. what people say he means, and even worse, what people do with his work - plus some commentary on a review chapter.

Today I read the whole Nicolaisen[*] article which I had just browsed earlier (OK, so it's been A LOT longer than I intended). This is not a review of how to *do* citation analysis; that's included in the several ARIST chapters on bibliometrics and informetrics. Rather, this is a review of two streams of literature about citations: why scientists cite (and theories about that) and, more weakly, one aspect of/model for/theory of how citation patterns "reflect the characteristics of science and scholarship" - how citing patterns can be used to model science/knowledge...**

First, because I always run out of steam at the end, and because it's most important: what Garfield says vs. how his work is used. L.C. Smith (1981, cited in *) provides these assumptions that underlie citation analysis:
1. Citation of a document implies use of that document by the citing author.
2. Citation of a document (author, journal, etc.) reflects the merit (quality, significance, impact) of that document (author, journal, etc.).
3. Citations are made to the best possible works.
4. A cited document is related in content to the citing document.
5. All citations are equal.

So there's this idea that there's a linear relationship between quality and number of citations (as evidenced by the linear regressions used everywhere - also noted in *). More citations mean a better paper, mean a better institution, mean more money. BUT, that's not what Garfield said:

A highly cited work is one that has been found useful by a relatively large number of people, or in a relatively large number of experiments. … The citation count of a particular piece of scientific work does not necessarily say anything about its elegance or its relative importance to the advancement of science or society. … The only responsible claim made for citation counts as an aid in evaluating individuals is that they provide a measure of the utility or impact of scientific work. They say nothing about the nature of the work, nothing about the reason for its utility or impact. (Garfield, 1979, p. 246, cited in *)

In fact, Nicolaisen elsewhere provided evidence for Bornstein's suggested J-shape between quality and citations. Utility could be to illustrate a point, and impact can be negative...

So back to the content of the review article. Why study citation analysis? Because it's used for (as Zunde said and Nicolaisen added to):
1. Qualitative and quantitative evaluation of scientists, publications, and scientific institutions
2. Modeling of the historical development of science and technology
3. Information search and retrieval
and Nicolaisen's addition (here I paraphrase, above I quote):
4. Knowledge organization/mapping through bibliographic coupling and co-citation analysis

So it can be pretty important in the life of an individual scientist as well as in the success of institutions (particularly in certain European countries that allocate research funding this way). But there isn't a cut-and-dried accepted theory of why people cite. Seems pretty obvious, right? Here are the ones the author reviews:

Psychological process - relying on Sperber and Wilson (eww - I do *not* recommend reading this bad boy) and Harter's review of psychological relevance in 1992: you read something, it makes a change in your cognitive state... etc. Unfortunately, this apparently doesn't take into account any kind of social or cultural factors, so it's pretty much dead in the water at this point.

Normative theory of citing - comes from the Mertonian norms (refer to Dr. Free-Ride) - this is a happy theory. Scientists cite because it's part of what they do and it's part of the reward structure of science: you give me info, I give you link love, which you can combine with other link love to get funding. I decide what to cite purely on its own merits and without regard to any particulars about the author (religion, gender, affiliation). I give credit where credit is due. Critics of this say that people don't cite all of their influences (one reason is that some facts become such a part of the field - there exists gravity, Maxwell's equations work - that they are taken for granted).

Social constructivist (?) - name-dropping adds persuasive power. If I base my work on paper A and I clearly draw on it, then to discredit me, you'd have to discredit A. Some of the authors cited in this section go so far as to talk about padding the reference list with irrelevant articles... and that isn't borne out by studies - there are rare actors who do bad things, but in general this isn't supported.

Evolutionary accounts - well... this one is much newer (from the author's dissertation) - I'm certain I won't get it right... but it's sort of an optimization thing - cite enough so that your readers won't mind (?). Pad the references or omit some key citations and you'll be caught in peer review.

As for the symbolic nature of citations - this goes to the heart of using citations to map knowledge. What can we say about paper A because it cites B, or about A and C if they both cite B? Citations as indicators that provide a formal representation of science: Wouters' reflexive citation theory. But look, we don't know why the citation was useful to the author - maybe the context is, "What an idiot Pikas is; see, for example, Pikas (2008)." So, according to the author, Wouters' theory can't handle that.

An interesting (and now on my research questions list) application of all of this is to look at explicit link-love mentions in SCTs used by scientists - or, well, really anyone. This idea is mentioned in Efimova, L., Hendrick, S., & Anjewierden, A. (2005) but not explicitly researched.

[*] Nicolaisen, J. (2007). Citation Analysis. Annual Review of Information Science and Technology, 41, 609-641.

[**] I do appreciate that research blogging is supposed to make articles more clear, not less clear, but hopefully I'll get better with practice ;)... Read more »

Nicolaisen, J. (2007) Citation Analysis. Annual Review of Information Science and Technology, 41, 609-641.
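The bibliographic coupling and co-citation analysis mentioned in the post above are simple to compute from raw citation data. A minimal sketch with made-up paper IDs: two papers are bibliographically coupled when their reference lists overlap, and co-cited when later papers cite them both.

```python
from itertools import combinations
from collections import Counter

# Hypothetical citation data: paper -> set of papers it cites.
refs = {
    "A": {"X", "Y", "Z"},
    "B": {"Y", "Z", "W"},
    "C": {"A", "B"},
    "D": {"A", "B", "X"},
}

# Bibliographic coupling strength: size of the shared reference lists.
def coupling(p, q):
    return len(refs[p] & refs[q])

# Co-citation count: number of papers that cite both p and q.
cocitation = Counter()
for citing, cited in refs.items():
    for p, q in combinations(sorted(cited), 2):
        cocitation[(p, q)] += 1

print(coupling("A", "B"))      # A and B share two references (Y and Z)
print(cocitation[("A", "B")])  # C and D each cite both A and B
```

Note that neither count says anything about *why* a citation was made - exactly the gap the competing theories in the review are trying to fill.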

  • July 11, 2008
  • 03:55 PM

Positivist vs Pragmatic Classification Theory - yes, that's it

by Christina Pikas in Christina's LIS Rant

A long, strange trip: I was reading about citations in Angew. Chem., then tried to find a reference, then got only to the TOC page, so I started browsing and ran across Hjørland's Reader's Digest version of classification theory (thank you! everyone go read the whole thing, it's short, and I'll wait):

Hjørland, B. (2008). Core classification theory: A reply to Szostak. Journal of Documentation, 64(3), 333-342. DOI: 10.1108/00220410810867560

I know next to nothing about classification theory (so Mark can help me!) - but I really hate some of Clay Shirky's throwaway statements regarding LCSH and why it is broken vis-à-vis tagging. One of his arguments (actually confusing LC call numbers with LCSH in some places) is that the system fails in describing all of man's knowledge. Of course it does; that's not what it's intended to do. Anyway - I've lacked the terminology for my point, but I think I've found it in this excellent classification theory for dummies. It builds on current understandings of knowledge representation and social studies of knowledge instead of some of the superficial things some people like to pretend are reality:

Any work on any subject is always made from a point of view, which may be uncovered by analysis (e.g. a feminist point of view or a "traditional" or an eclectic point of view). The same is the case with any classification. Ørom (2003) uncovered underlying points of view in major library classification schemes with regard to arts. Although it is often difficult to uncover the underlying point of view, it is meaningless to claim that it does not exist. "Objectivity" and "neutrality" are not attainable and are also problematic goals from the pragmatic point of view. Any given classification will always be a reflection of a certain view or approach to the objects being classified. The (false) belief that there exist objective criteria for classification may be termed "empiricism" (or "positivism"), while the belief that classifications are always reflecting a purpose may be termed "pragmatism". Classification systems that do not consider the different goals and interests reflected in the literature of a given domain are "positivist". Two documents may "resemble" each other in many different ways, and there is no neutral ground on which to choose, for example, "a proximity measure".

Hjørland argues that there can be no objective or neutral classification system because we always see things through a lens (my words) and choose words based on our purposes - and this is as it should be. Specialized language develops for a particular use, with particular meaning within a discipline (or other grouping of people). I'm all about classifying things for a purpose. While there is some beauty in a perfectly described resource (which is not possible), librarians catalog or index resources to provide access - so that users can answer questions and fill their information needs. There is some aboutness there, but it's also: who can this help? What questions can be answered by this? For what searches should this appear as a result?

Likewise, when we're doing natural language search in free text (like using a web search engine), we try to find and use words from the domain of the information we're seeking. The other morning I happened (finally) upon the "correct" term the people in the domain use, and all of a sudden there were tons of relevant hits, where what had looked like a synonym to me retrieved few decent results (luckily someone had used both my terms and the correct terms).

A slight problem with the author's reference to chemistry: read the nano registration thread on Cheminf-L.... Read more »
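Hjørland's point that there is no neutral ground for choosing "a proximity measure" is easy to demonstrate. A minimal sketch with invented toy documents: two standard similarity measures disagree about which document is "closer" to the query, because each encodes a different idea of resemblance.

```python
import math

# Invented toy documents as term-frequency dicts, for illustration only.
query = {"cat": 1, "dog": 1}
doc1  = {"cat": 1, "dog": 1, "fish": 1}
doc2  = {"cat": 100, "dog": 1}

def cosine(a, b):
    # Cosine similarity over term-frequency vectors: sensitive to weights.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

def jaccard(a, b):
    # Jaccard similarity over term *sets*: ignores frequency entirely.
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# The two proximity measures rank the documents differently:
print(cosine(query, doc1) > cosine(query, doc2))    # cosine prefers doc1
print(jaccard(query, doc1) > jaccard(query, doc2))  # Jaccard prefers doc2
```

Neither ranking is "wrong"; choosing one measure over the other is itself a purposeful, non-neutral classification decision - which is the pragmatist point.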

Hjørland, B. (2008) Core classification theory: A reply to Szostak. Journal of Documentation, 64(3), 333-342. DOI: 10.1108/00220410810867560
