Journal information -> here
1. Volumes 1:1 through three years before the current issue: available through JSTOR, online and full text (includes articles published under the former titles Libraries and Culture and Journal of Library History)
**************** Note: JSTOR also provides the following journals.
Library Science (9 titles), full text:
The American Archivist 1938-2006
Journal of Education for Library and Information Science 1984-2006
Journal of Education for Librarianship 1960-1984
Libraries & the Cultural Record 2006
Libraries & Culture 1988-2006
The Journal of Library History (1974-1987) 1974-1987
Journal of Library History, Philosophy, and Comparative Librarianship 1973
The Journal of Library History (1966-1972) 1966-1972
The Library Quarterly 1931-2004
****************
2. Volumes 36:1 to present: available through Project MUSE, online and full text
--> Available through Project MUSE
Volume 45, Number 3, 2010
E-ISSN: 1932-9555; Print ISSN: 1932-4855
Publisher: University of Texas Press
JOURNAL COVERAGE:
Vol. 36 (2001) through current issue
ABOUT THE JOURNAL:
Formerly Libraries and Culture, through volume 41, no. 2, Spring 2006 (E-ISSN: 1534-7591, Print ISSN: 0894-8631).
L&CR is an interdisciplinary journal that explores the significance of collections of recorded knowledge (their creation, organization, preservation, and utilization) in the context of cultural and social history.
Wednesday, September 29, 2010
Thursday, September 23, 2010
[journal]Information Research
The journal is currently up to Volume 15, No. 3, September 2010.
It is an electronic journal.
Information Research
Selected from the subject index:
cataloguing
Cataloguing in special libraries in the 1990s
What is the title of a Web page? A study of Webography practice
chemical information processing
Extracting variant forms of chemical names for information retrieval
classification
Arguments for 'the bibliographical paradigm'. Some thoughts inspired by the new English edition of the UDC
Converting a controlled vocabulary into an ontology: the case of GEM
The retrievability of a discipline: a domain analytic view of classification [Poster abstract]
Dublin Core
Constructing Web subject gateways using Dublin Core, the Resource Description Framework and Topic Maps
government information
Tracking government Websites for information integration
indexing
An associative index model for the results list based on Vannevar Bush's selection concept
Observing documentary reading by verbal protocol
Trusting tags, terms, and recommendations
information management
The duality of knowledge
information quality
Information quality assessment on the Web: an expression of behaviour
An inside view: credibility in Wikipedia from the perspective of editors
information use
Conceptualizing the personal outcomes of information
Diversity in the conceptions of information use
How are records used in organizations?
Seeking relevance in academic information use
Young people's perceptions and usage of Wikipedia
information work
Human issues of library and information work
Japan
Information sharing between different groups: a qualitative study of information service to business in Japanese public libraries
Satisfaction and perception of usefulness among users of business information services in Japan
JavaScript™
Watch this: artisanal animation
job descriptions
The information professional's profile: an analysis of Brazilian job vacancies on the Internet
metadata
Watch this: Webified markup
name equivalences
On identifying name equivalences in digital libraries
records management
Education and training for records management in the electronic environment - the (re)search for an appropriate model
How are records used in organizations?
Wikipedia
An inside view: credibility in Wikipedia from the perspective of editors
Where does the information come from? Information source use patterns in Wikipedia
Young people's perceptions and usage of Wikipedia
XML
Where is meaning when form is gone? Knowledge representation on the Web
Friday, September 10, 2010
[article]Westmar College archives: how we did it
Westmar College archives: how we did it
Authors: Kang, U H (1)
Source: Christian Librarian; Nov 1989, Vol. 33 Issue 1, p18-23, 6p
Document Type: Article
Subject Terms: ARCHIVES; CLASSIFICATION; RECORDS -- Management
Author-Supplied Keywords: Colleges
Abstract: This article describes how Westmar College established its archives and how a classification system was devised for its records. The first stage in the process of developing an archives program was the inventory of records, as well as appraisal of records for archival value. The classification outline is included in an appendix.
Notes: Update Code: 2500
Author Affiliations: (1) CBN Univ., Virginia Beach, VA
ISSN: 0412-3131
Accession Number: ISTA2501059
Database: Library, Information Science & Technology Abstracts with Full Text
[proceeding]Evaluating a Metadata-based Term Suggestion Interface for PubMed with Real Users with Real Requests
from http://www.asis.org/Conferences/AM09/open-proceedings/papers/38.xml
2009 ASIST Annual Meeting
Thriving on Diversity - Information Opportunities in a Pluralistic World
November 6-11, 2009
Vancouver, British Columbia, Canada
Evaluating a Metadata-based Term Suggestion Interface for PubMed with Real Users with Real Requests
Authors
Muh-Chyun Tang
Department of Library and Information Science, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: mctang@ntu.edu.tw
Wan-Ching Wu
Department of Library and Information Science, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: reneemorrisb67@gmail.com
Bang-Woei Hung
Department of Information management, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: nennenpow@gmail.com
This paper reports the results of an evaluation study of MAP (Multi-faceted Access to PubMed), a metadata-induced query suggestion interface for PubMed bibliographic search.
A novel evaluation methodology was used to address the challenges involved in evaluating an IIR (Interactive Information Retrieval) system such as the MAP interface. The most significant aspect of this methodology is that, instead of using the assigned tasks common in traditional IR evaluation, it asks real users with real search requests to search real systems in an experimental setting. Several performance measures were created, based on which comparisons were made between MAP and the PubMed baseline. MAP was shown to perform better on several of these measures, especially when the search requests had not been attempted before.
The findings point to search characteristics as an important intervening variable in IIR evaluation. The advantages of, and potential threats to, our methodology are also discussed.
Introduction
Arguably one of the most difficult tasks in IR is the representation of users' requests (Belkin, 1980). Search engine users are known to submit very short and ambiguous queries (Jansen, Spink and Saracevic, 2000). The shallowness of the representation of the user request stands in direct contrast to the thoroughness of document representation. This disparity often results in unmanageable search results for users of a heterogeneous and massive bibliographic database such as PubMed. Several system features of PubMed (e.g. the default "explode" function and free-text indexing in the title and abstract fields) that aim at facilitating end-user searching tend to increase indexing exhaustivity and therefore favor recall at the expense of precision. Faced with an unmanageable number of returned results, users are often left with few options other than to hastily browse the first few returned pages.
The enormous size of the returned set creates at least two barriers to successful user-system communication. Firstly, there is no telling whether documents relevant to users' needs lie buried deep in the returned set. Secondly, skimming the surface of the returned set gives inadequate feedback for users to meaningfully judge query performance.
The breakdown of user-system communication is especially severe in search situations where users are searching for unfamiliar topics or have only vague information needs. Without timely feedback from the system, users are unlikely to be able to interact effectively with it and refine their searches.
To address this communication breakdown, an interface was created for PubMed search that uses MeSH co-occurrence information to support users' query construction. This article reports the functionality of the interface and the results of an experiment designed to assess its effectiveness. A novel approach to the evaluation of interactive information retrieval (IIR) systems such as ours is also proposed. In the following sections we first elaborate on the functionality of the interface, then describe the research design and procedures, and conclude with the results of the quantitative analyses.
Query augmentation
To alleviate the mismatch in representational exhaustivity at the two ends of the IR process, various techniques have been proposed to expand users' queries, either automatically or interactively. An approach that has recently gained wide adoption is "metadata-guided search", which involves dynamically extracting metadata from the initial returned set for users to augment their queries (Hearst, Elliott, English, Sinha, Swearingen, and Yee, 2002; Lin, 1999; Pollitt, 1998). For literature searches in the health sciences specifically, several attempts have been made to exploit term co-occurrence relationships for term suggestion, with terms extracted either from a controlled vocabulary (Doms and Schroeder, 2005) or from free text in article abstracts (Goetz and von der Lieth, 2005; Perez-Iratxeta, Bork, and Andrade, 2001; Plikus, Zhang and Chuong, 2006).
Multi-faceted Access to PubMed
Our approach involves extracting faceted metadata to guide user exploration of the information space, similar to the approach proposed by Hearst et al. (2002), with a few modifications made for the specific circumstances of PubMed search. An interface (MAP) was built that delivers the submitted query to PubMed and generates MeSH terms for users to refine their queries. For a massive database like PubMed, the browse-and-select method proposed in Hearst's Flamenco system becomes less feasible. Instead of relying solely on browsing, the proposed interface preserves the search mode of access while providing browsable faceted categories in support of searching.
At the implementation level, dynamically extracting metadata at search time could be problematic, as the computing time needed might cause a delay. Therefore, instead of generating a term co-occurrence matrix on the fly from the search results, a database of term co-occurrence data was built beforehand. This term co-occurrence table can be updated regularly to better represent the conceptual relationships in the published literature. The database includes descriptors from the MeSH term, author, and journal title fields of all PubMed bibliographic records from 2006 and 2007.
When the user submits a query through the MAP interface, the two hundred terms that co-occur most frequently with the query term are identified and displayed for browsing. As users are likely to submit non-MeSH terms as their queries, users' queries must first be mapped to an appropriate MeSH term in the prebuilt database. This is done by using PubMed's automatic translation table (see "PubMed Help" for more on automatic term mapping). Thus, as the user submits a query, the MeSH terms produced by PubMed's automatic translation table are retrieved and used to identify co-occurring terms in our local database. In cases where the translation table returns no MeSH term, a fallback mechanism is activated in which descriptors are extracted directly from the top two hundred returned MEDLINE records.
Two approaches to term display were attempted: one simply ranks the terms without categorization (List); the other organizes them by MeSH top-level category (Faceted-category). In both methods, terms are ranked by their co-occurrence frequency multiplied by their inverse term frequency, as sketched below.
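As a minimal sketch of this term-suggestion pipeline: the fragment below ranks co-occurring descriptors by co-occurrence frequency weighted by inverse term frequency and groups them by facet. The data structures, function names, and numbers are illustrative assumptions, not the actual MAP implementation (which, per the footnotes, is a PHP application backed by a prebuilt database).

```python
import math
from collections import Counter

# Illustrative stand-in for the prebuilt co-occurrence database: each MeSH
# descriptor maps to a Counter of co-occurring descriptors and frequencies.
COOCCURRENCE = {
    "Morphine": Counter({"Pain": 950, "Analgesics, Opioid": 800,
                         "Treatment Outcome": 400}),
}
DOC_FREQ = {"Pain": 120_000, "Analgesics, Opioid": 30_000,
            "Treatment Outcome": 250_000}
N_DOCS = 1_200_000  # records indexed in the local database (made-up figure)

def suggest_terms(mesh_term, k=200):
    """Rank the descriptors co-occurring with mesh_term by
    co-occurrence frequency x inverse term frequency; keep the top k."""
    cooc = COOCCURRENCE.get(mesh_term)
    if cooc is None:
        # Fallback described in the paper: extract descriptors from the
        # top 200 returned MEDLINE records instead (not shown here).
        return []
    scored = {term: freq * math.log(N_DOCS / DOC_FREQ.get(term, 1))
              for term, freq in cooc.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:k]

def group_by_facet(suggestions, facet_of):
    """Faceted-category display: bucket the ranked terms by the MeSH
    top-level category returned by facet_of (an assumed lookup function)."""
    facets = {}
    for term, score in suggestions:
        facets.setdefault(facet_of(term), []).append((term, score))
    return facets
```

The List display corresponds to the ranked output of suggest_terms alone; the Faceted-category display passes that output through group_by_facet.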
An empirical study was conducted to investigate the usefulness of the faceted-category version of the interface. Specifically, we wanted to know, first, how users with genuine information needs would interact with the proposed interface and, second, whether the MAP interface performs favorably compared to the regular PubMed interface. Of particular interest is under what search situations MAP is most effective.
Evaluation of IIR systems
The evaluation of an IIR (interactive IR) system such as MAP poses a serious challenge to evaluation methodology. In traditional IR evaluation, everything other than the system components being compared (searchers, search topics, and their interaction effects with the systems) is regularly treated as random variance that experimenters strive to systematically control and minimize. The "Laboratory Model" of evaluation (Kekäläinen and Järvelin, 2002) is very efficient for comparing the effectiveness of algorithms. However, it becomes inadequate for today's interactive systems, whose effectiveness depends largely on active user engagement (Belkin et al. 2004). The TREC interactive track represents an early effort to include human subjects in the modeling of IR performance (Dumais and Belkin, 2005). Including users in the loop, however, also increases the difficulty of experimental design and analysis. TREC interactive track data have shown that the "topic effect" accounts for the greatest variance in models that include searchers, search topics, systems, and their interactions. To make the main system effects comparable, a replicated Latin square design has been adopted in which all treatment levels (i.e. different systems) have an equal chance of being "crossed" by the same searchers and search topics. Yet the threat of topic-system interactions remains. In the non-interactive test environment, the issue of topic effects and topic-system interactions biasing system comparisons has been addressed by averaging performance criteria over a sufficient number of topics (Lagergren and Over, 1998), which is infeasible where human searchers are involved.
Another inherent constraint of the traditional IR evaluation paradigm is that the systems compared are conceptualized as general-purpose tools, without consideration of the kinds of search requests for which they might be more effective. Assigned tasks are therefore created mostly in an ad hoc manner, without theorizing task characteristics and how they might interact with system features.
To better understand how real users with real search requests might interact with MAP, and its effectiveness under different search situations, a novel approach to IIR evaluation was adopted in our study, most notably the sampling of real users' genuine search requests. Instead of assigning the participants a uniform set of search requests, we allowed the participants to search their own requests in a controlled experimental setting. This was done for two reasons. Firstly, it was feared that, had we used pre-constructed requests, the participants would simply grab terms from the task narrative as query terms, which would render an interface designed to facilitate query construction less useful, if not entirely useless. Secondly, as pointed out earlier, one of our research questions concerns the usefulness of the interface under different search situations. Addressing it with pre-constructed requests would entail operationalizing search characteristics through topic narratives, which is difficult to accomplish in a highly specialized domain such as the health sciences without help from domain experts. Asking the participants to characterize their own search requests on chosen attributes, such as domain familiarity and whether the search is new or revisited, therefore affords us a rare opportunity to investigate the interactions between these attributes and the interface on various performance criteria. Specifically, it was hypothesized that the MAP interface, because of the vocabulary support it provides, would be more effective when users are new to a research area and lack the domain knowledge and terminology needed to conduct effective searches.
Research design
The decision to use real users searching for real information needs on real systems entails several thorny methodological issues. First of all, without a set of predefined tasks we do not have the benefit of the objective relevance judgments that serve as the benchmark for traditional IR evaluations. It is therefore important to establish valid performance criteria other than recall and precision on which system comparisons can be based. Secondly, the use of real user requests poses further challenges to creating a research design capable of controlling confounding factors.
Table 1. Graphic representation of the experimental design
In our design, participants were asked to search their requests with both interfaces (the regular PubMed interface and MAP), which makes it a repeated-measures design in which each request serves as its own control. The repeated-measures design has the advantage of reducing the error term, thereby giving more power to the statistical tests. However, it also comes with its own risks, most significantly carry-over effects that might confound the results. To control for possible carry-over effects, the requests were randomly assigned to alternating treatment orders, so that any given request had an equal chance of being searched first with either interface.
Table 1 shows the eventual mixed factorial design adopted in this study, with interface as the within-subject factor and type of search request as the between-subject factor. Participants were randomly assigned to one of four groups in which they searched their own requests, first on one interface and then on the other (see Table 1). Notice that the two treatments for the same request were never administered in immediate sequence; this was done in the hope of further minimizing carry-over effects, as there was always a "wash-out" period between the two treatments of the same request.
It is recognized that, despite the alternation of treatment order, carry-over effects might still persist, as terms picked up from the preceding interface could unduly advantage the present one. The risk seems larger in groups 3 and 4, where the MAP interface was used first, as participants were likely to be exposed to more terms in those sessions. It was therefore crucial to instruct participants to start their search with the same query and not to use terms learned from a previous session when the request was searched a second time. A sketch of one possible counterbalancing scheme of this kind follows.
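As a rough illustration only: the exact group orderings of Table 1 are not reproduced in this post, so the four session orders below are an assumed reconstruction consistent with the constraints described above (requests alternate so the same request is never searched twice in a row, and groups 3 and 4 start on MAP).

```python
import random

# Hypothetical reconstruction of the four counterbalanced groups; the actual
# orderings appear in Table 1 of the paper. Each participant brings two
# requests (R1, R2) and searches them alternately, so the two sessions for
# the same request are separated by a "wash-out" session.
SESSION_ORDERS = {
    1: [("R1", "PubMed"), ("R2", "PubMed"), ("R1", "MAP"), ("R2", "MAP")],
    2: [("R1", "PubMed"), ("R2", "MAP"), ("R1", "MAP"), ("R2", "PubMed")],
    3: [("R1", "MAP"), ("R2", "MAP"), ("R1", "PubMed"), ("R2", "PubMed")],
    4: [("R1", "MAP"), ("R2", "PubMed"), ("R1", "PubMed"), ("R2", "MAP")],
}

def assign_group(rng=random):
    """Randomly assign a participant to one of the four session orders."""
    return SESSION_ORDERS[rng.randint(1, 4)]
```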
Research procedures
A total of 44 regular PubMed/Medline users were recruited to participate in the study, all of whom were graduate students in health and biomedical sciences. They were told to prepare two search requests of their own prior to coming to the laboratory.
The research procedure was as follows: upon arrival, the participants were asked to give consent for the study and fill out a brief entry questionnaire collecting data on their subject expertise, educational status, and PubMed/Medline search experience. An online video tutorial was given to explain how to operate the interface.
Before the search for each request began, a pre-search questionnaire was administered in which the participants were asked to write down a search statement for the request they were about to search. They were also asked to provide what they believed, at that point, to be the most ideal query for the request. Scaled data were also elicited on the characteristics of the request, such as their familiarity with the problem area, the thoroughness needed for the request, whether the request had been searched before, and so on.
As noted in the methodology section, each participant searched for two requests of their own, alternately on the two interfaces, resulting in four search sessions. As each search session began, the participants first input the original query they had given in the pre-search questionnaire and were then asked to retrieve ten useful records using the interface assigned to that particular session. After the initial input, they were allowed to revise their queries based on feedback from the interface and the search results. In other words, they were able to interact with the interfaces as they would normally do when conducting PubMed searches.
The participants were told they had 15 minutes for the task, but could stop whenever they had finished. After each session, they again wrote down what they considered to be the best query terms at that time. They were then asked to evaluate the goodness of their pre- and post-search ideal queries on a 0-6 scale. This information allowed us to compare, for each search request, the participant's perceived goodness of his or her query before and after interacting with either interface. Scaled data were also elicited in the post-search questionnaire on satisfaction with the results and perceived usefulness of the interface. All interactions with the interfaces were logged and recorded by screen-capture software. Of particular interest are the number of iterations, the number of terms selected, and the number of records viewed, which should give a clearer idea of how users interact differently with different interfaces (see Table 2 for a summary of the research procedures).
Table 2. Research procedure
After both search sessions for the same request had finished, the participants were asked to indicate the degree of relevance of each of the 20 records (10 from each session) on a 0-6 scale and to single out records they had seen before. This allowed us to compare the quality of the search results retrieved by the two interfaces. Other performance criteria collected in the post-questionnaire include users' assessment of how well their requests were represented by their queries (the "goodness" of the query) and their satisfaction with the search results. The participants were also asked to comment on the usefulness of the term suggestion function and the search situations in which they thought it might be helpful. The initial analysis of the results is reported in the following section; as the qualitative analysis of the query formulations is still underway, the results reported here derive mostly from quantitative measurements.
Results
Search characteristics
A total of 88 (44 x 2) search requests were sampled, resulting in 176 search sessions, as each request was searched with both MAP and the PubMed baseline. Among the 88 search requests, 60 had been searched by the participants before (revisited searches) and 28 were searched for the first time (new searches). Sampling genuine search requests affords us an opportunity to examine the relationships among different characteristics of information needs and how they might affect the effectiveness of the interface. Table 3 shows the correlations among different attributes of the search requests. Not surprisingly, the original goodness of the query was found to be highly correlated with the participant's familiarity with the search topic, which in turn is highly correlated with the completeness needed for the search.
Table 3. Search characteristics correlations (N=88)
** p<.01, *p<.05
Querying behaviors
As the interface was designed with a view to facilitating query reformulation, naturally we would like to see whether users’ querying behaviors differ between the two interfaces. Specifically, four measures were compared to help us get a better sense of how querying behaviors differ: number of terms added and deleted per session, number of query submissions, and the similarity between users’ original query and finalized query. The original-final queries similarity can be seen as an indication of the effect of the interfaces on users’ queries. The higher the similarity, the less users’ final queries diverge from the original. Jaccard’s coefficient was used as the similarity measure:
J(A, B) = |A ∩ B| / |A ∪ B|
where A stands for the set of terms in the original query and B stands for the set of terms in the finalized query. A paired-samples t-test was conducted to evaluate whether original-final query similarity differed between the two interfaces. The results indicated that the mean similarity for PubMed (M = .54, SD = .31) was significantly greater than the mean similarity for MAP (M = .39, SD = .27), t(87) = 3.77, p < .001. In other words, when using MAP, users' final queries diverged from their original queries to a significantly greater degree than when using the PubMed baseline.
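A minimal sketch of this comparison, with made-up similarity values standing in for the 88 per-request scores:

```python
from scipy import stats

def jaccard(a, b):
    """Jaccard coefficient between two term sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Example: original vs. finalized query terms for one request.
sim = jaccard("cartilage repair tissue engineering".split(),
              "cartilage repair articular cartilage biomaterials".split())

# One similarity score per request and interface, paired by request.
# Placeholder values; the study had 88 such pairs.
sim_pubmed = [0.50, 0.67, 0.40, 0.60, 0.55, 0.45]
sim_map    = [0.33, 0.50, 0.25, 0.50, 0.40, 0.30]

# Paired-samples t-test: does original-final similarity differ by interface?
t_stat, p_value = stats.ttest_rel(sim_pubmed, sim_map)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```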
Significant differences were also found between PubMed and MAP with respect to the number of terms added, t(88) = 4.03, p < .001, and deleted, t(87) = 2.06, p < .05, during user interactions, as well as the number of terms in the finalized query, t(87) = 3.00, p < .005 (see Table 4).
The participants were also found to make more query submissions (i.e. each click of the search button) when searching with MAP (M = 5.26, SD = 2.78) than with PubMed (M = 3.74, SD = 2.74); the difference is significant, t(87) = 20.81, p < .001. Despite the great disparity in the number of search iterations, an equivalent number of records was viewed on the two interfaces, measured by summing the number of surrogate records contained in the results pages (20 surrogate records per page) brought up by the user during a search session. This indicates that, on average, fewer records were viewed per submission when MAP was used.
Table 4. Comparison of querying behaviors
Search performance
A one-way multivariate analysis of variance (MANOVA) was conducted to determine the effect of the two interfaces (PubMed baseline vs. MAP) on four performance criteria: perceived usefulness, self-assessed goodness of the query, average relevance score of the ten records saved, and user satisfaction with the results. Significant differences were found between the interfaces on the performance measures, Wilks' Λ = .92, F(4, 167) = 3.79, p < .01.
As the MANOVA test was significant, individual ANOVAs were then conducted. In the following we report the results of factorial ANOVAs testing the effects of two factors, the interface and the type of search request, on the aforementioned performance criteria. Here search requests were classified by whether the search was new or revisited, based on our hypothesis that the experimental interface is more effective for users who are new to a research area.
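For illustration, a 2x2 factorial ANOVA of this kind could be run as follows; this is a sketch with placeholder data, assuming one row per search session (statsmodels also offers a MANOVA class in statsmodels.multivariate.manova for the omnibus test):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data: one row per session, with the performance measure
# ("goodness" of the final query, 0-6) and the two factors.
df = pd.DataFrame({
    "goodness":    [5, 4, 3, 5, 4, 3, 2, 4, 5, 3, 4, 2],
    "interface":   ["MAP", "MAP", "PubMed", "MAP", "PubMed", "PubMed",
                    "PubMed", "MAP", "MAP", "PubMed", "MAP", "PubMed"],
    "search_type": ["new", "revisited", "new", "new", "revisited", "new",
                    "revisited", "revisited", "new", "new", "revisited",
                    "revisited"],
})

# Main effects of interface and search type, plus their interaction.
model = ols("goodness ~ C(interface) * C(search_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```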
The means and standard deviations for participants' assessment of the goodness of their queries as a function of the two factors are presented in Table 5. The MAP interface was shown to do significantly better in terms of the final "goodness" of the query, F(1,86) = 7.88, p = .006, especially in cases where the request had not been searched before by the participant (see Table 5).
Table 5. “Goodness” of final query (n=88)
A similar pattern manifests itself with respect to the usefulness of the interface, F(1,86) = 13.13, p < .001. The MAP interface did better, especially when new searches were attempted (Table 6).
Table 6. Usefulness of interface (n= 88)
In terms of average relevance scores, MAP was found to be only slightly better than the PubMed baseline in both new and revisited searches (see Table 7), and the difference is not significant, F(1,86) = 2.38, p = .126. The type of search, however, had a somewhat stronger impact on the quality of search results, F(1,86) = 3.84, p = .053: revisited searches produced better search results than new searches.
Table 7. Average relevance scores (n=88)
Similarly, a 2x2 ANOVA was conducted to evaluate the effects of the interfaces and users' previous topic search experience on their satisfaction with the search results. The results indicated a non-significant effect for interface, F(1,82) = 1.82, p = .18, and a non-significant effect for search type, F(1,82) = .61, p = .44. To further investigate whether there was an interaction between interface and type of search, we chose to ignore the main effect of interface and instead examined its simple main effects, that is, the differences between interfaces for new and revisited searches separately. There was no significant difference in users' satisfaction with the results between the interfaces for revisited searches, F(1,116) = .75, p = .39, but there was a significant difference for new searches, F(1,52) = 5.71, p = .021, indicating that MAP did better in this regard only when a new search was attempted.
Table 8. Satisfaction with results (n=84)
Users' satisfaction with the terms suggested by MAP was also slightly higher for requests that had not been searched before (Table 9), though the difference is not significant.
Table 9. Satisfaction with suggested terms (n= 88)
The analyses conducted so far suggest that MAP was beneficial to users' searches, particularly when the searches had not been attempted before. The remaining question is: in what respect does it help? In the post-questionnaire, the participants were asked specifically about their perception of MAP with several Likert-scale (0-6) questions: if the MAP interface helped their search at all, did it (1) help generate new ideas and concepts for future research, (2) help clarify their search questions, (3) help by showing the structure of the literature in the database, or (4) help manage the enormous amount of search results? ANOVA tests were conducted to evaluate whether the participants' answers differed significantly between new and revisited searches. Among the four dimensions, only one significant difference, "helps me generate new concepts", was found between new and revisited searches (Table 10).
Table 10. Help generate new concepts (n= 88)
Types of query reformulation
Detailed analysis of the characteristics of terms added and deleted during user interactions is currently underway and reserved for future discussion. The analysis focuses on how the expanded terms relate to the users' original query. Here we present a few examples showing how initial-expanded term relationships will be categorized. Our current coding scheme largely concerns two relationship categories: terms semantically related to the initial terms, and terms that represent new ideas not included in the initial queries. Semantically related terms stand in either a hierarchical (i.e. broader or narrower terms) or synonymous relationship with the initial terms (see Table 11). For example, in the search conducted by subject #19 with "tolerance of morphine dosage" as the original query, the added MeSH terms "pain" and "treatment outcome" were coded as new ideas generated by MAP, as they have no clear hierarchical or synonymous relationship with terms in the original query. Another example of new ideas can be found in the MAP session conducted by subject #24, in which two new MeSH terms, "environment exposure" and "flame retardants", were added to the final query. An instance of a hierarchical relationship can be found in the search conducted by subject #48 with the original query "cartilage repair AND tissue engineering". Two MeSH terms were added during interaction with MAP: "articular cartilage" and "biomaterials". "Articular cartilage" was identified as a narrower term for the initial term "cartilage", and "biomaterials" as a broader term for "tissue engineering". For the same search request, another term, "osteoarthritis", was added in the PubMed session and coded as a new concept. With this categorization of term relationships, we will be able to examine the distribution of the aforementioned relationships in MAP vs. PubMed and new vs. revisited searches, which should help us infer whether and how the two factors, interface and previous search experience, influence the characteristics of expanded terms.
Table 11. Original-expanded terms relationships
Acknowledgements
References
Bates, M. J. (1989). The Design of Browsing and Berrypicking Techniques for the Online Search Interface. Online Review. 13 (5), 407-424.
Belkin, N. J. (1980). Anomalous states of knowledge as a basis for information retrieval. Canadian Journal of Information Science, 5(1980), 133-143.
Borlund, P. (2000). Experimental components for the evaluation for interactive information retrieval systems. Journal of Documentation, 56(1), 71-90.
Doms, A., and Schroeder, M. (2005). GoPubMed: exploring PubMed with the Gene Ontology. Nucleic Acids Research. 33 (Web Server issue), 783-786.
Dumais, S. T., and Belkin, N. J. (2005). The TREC interactive tracks: Putting the user into search. In E. M. Voorhees and D. K. Harman (Eds.), TREC: Experiment and evaluation in information retrieval (pp. 123-153). Cambridge, MA: MIT Press.
Efthimiadis, E. N. (2000). Interactive query expansion: A user-based evaluation in a relevance feedback environment. Journal of the American Society for Information Science, 51(11), 989-1003.
Goetz, T., and von der Lieth, C. W. (2005). PubFinder: a tool for improving retrieval rate of relevant PubMed abstracts. Nucleic Acids Research, 33 (Web Server issue), 774-778.
Hearst, M., Elliott, A., English, J., Sinha, R., Swearingen, K., and Yee, K.-P. (2002). The Consumer Side of Search - Finding the Flow in Web Site Search. Communications of the ACM. 45 (9), 42.
Hersh, W., Turpin, A., Price, S., Kraemer, D., Chan, B., Sacherek, L., et al. (2000). Do Batch and User Evaluations Give the Same Results? An Analysis from the TREC-8 Interactive Track. NIST Special Publication. (500), 531-540.
Jansen, B. J., Spink, A., and Saracevic, T. (2000). Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web. Information Processing and Management. 36 (2), 207-227.
Joho, H., and Jose, J. M. (2006). Slicing and dicing the information space using local contexts. In IIiX: Proceedings of the 1st international conference on Information interaction in context , (pp. 66-74). New York, NY, USA: ACM Press.
Kekäläinen, J., and Järvelin, K. (2002). Evaluating information retrieval systems under the challenges of interaction and multidimensional dynamic relevance. Paper presented at the Proceedings of the CoLIS 4 Conference, Greenwood Village, Colo.
Lagergren, E., and Over, P. (1998). Comparing interactive information retrieval systems across sites: the TREC-6 interactive track matrix experiment. Paper presented at The 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Melbourne, Australia.
Lin, X. (1999). Visual MeSH. Paper presented at The 22nd International Conferences on Research and Development in Information Retrieval, Berkeley, CA.
Plikus, M., Zhang, Z., and Chuong, C. -M. (2006). PubFocus: semantic MEDLINE/PubMed citations analytics through integration of controlled biomedical dictionaries and ranking algorithm. BMC Bioinformatics, 7(1), 424.
Perez-Iratxeta, C., Bork, P., and Andrade, M. A. (2001). XplorMed: a tool for exploring MEDLINE abstracts. Trends in Biochemical Sciences, 26(9), 573-575.
Pollitt, A. S. (1998). The key role of classification and indexing in view-based searching. International cataloguing and bibliographic control, 27(2), 37-40.
Qu, Y., and Furnas, G. W. (2008). Model-driven formative evaluation of exploratory search: A study under a sensemaking framework. Information Processing and Management, 44(2), 534-555.
Robertson, S. E., and Hancock-Beaulieu, M. M. (1992). On the Evaluation of IR Systems. Information Processing and Management. 28 (4), 457-66.
Shiri, A., and Revie, C. (2006). Query expansion behavior within a thesaurus-enhanced search environment: A user-centered evaluation. Journal of the American Society for Information Science and Technology, 57(4), 462-478.
Tang, M.-C. (2007). Browsing and searching in a faceted information space: a naturalistic study of PubMed users' interaction with a display tool. Journal of the American Society for Information Science and Technology, 58(13), 1998-2006.
Taylor, R. S. (1968). Question-negotiation and information seeking in libraries. College and Research Libraries, 29(3), 178-194.
Vakkari, P. (2003). Task-based information searching. In B. Cronin (Ed.), Annual Review of Information Science and Technology, 37, 413-464. Medford, NJ: Information Today.
White, R. W., Muresan, G., and Marchionini, G. (2006). Report on the ACM SIGIR 2006 workshop on evaluating exploratory search systems. ACM SIGIR Forum, 40(2).
White, R. W., Drucker, S., Kules, B. and Schraefel, m. c. (2006). Supporting exploratory search. Communications of the ACM (Special Section), 49(4), 36-39.
White, R. W., Ruthven, I., and Jose, J. M. (2005). A study of factors affecting the utility of implicit relevance feedback. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval (pp.35-42). Salvador, Brazil: ACM.
Appendix
Appendix I: Screenshot of MAP
Footnotes
1. The interface can be accessed at http://morris.lis.ntu.edu.tw/map/new/medlineFre2.php (see Appendix I for a screenshot of the interface).
2. http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=helpPubMed.chapter.PubMedhelp
2009 ASIST Annual Meeting
Thriving on Diversity - Information Opportunities in a Pluralistic World
November 6-11, 2009
Vancouver, British Columbia, Canada
Evaluating a Metadata-based Term Suggestion Interface for PubMed with Real Users with Real Requests
Authors
Muh-Chyun Tang
Department of Library and Information Science, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: mctang@ntu.edu.tw
Wan-Ching Wu
Department of Library and Information Science, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: reneemorrisb67@gmail.com
Bang-Woei Hung
Department of Information management, National Taiwan University
No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan (R.O.C)
Email: nennenpow@gmail.com
This paper reports results of an evaluation study of MAP (Multi-faceted Access to PubMed), a metadata induced query suggestion interface for PubMed bibliographic search.
A novel evaluation methodology was used to address the challenges involved in evaluating an IIR (Interactive Information Retrieval) system such as the MAP interface. The most significant aspect of this methodology is that, instead of using assigned tasks common in traditional IR evaluation, it asks real users with real search requests to search with real systems in an experimental setting. Several performance measures were created based on which comparisons were made between MAP and PubMed baseline. MAP was shown to perform better in several of these measures, especially when the search requests had not been attempted before.
The finding pointed to search characteristics as an important intervening variable in IIR evaluation. The advantages of and potential threats to our methodology were also discussed.
Introduction
Arguably one of the most difficult tasks in IR is the representation of users’ requests (Belkin, 1980). Search engine users are known to submit very short and ambiguous queries (Jansen, Sprink and Saracevic, 2000). The shallowness of the representation of user request stands in direct contrast to the thoroughness of document representation. This disparity often results in unmanageable search results for users of a heterogeneous and massive bibliographic database such as PubMed. Several system features of PubMed (e.g. default “explode” function and free-text indexing in title and abstract fields) that aim at facilitating end-user searching tend to increase indexing exhaustivity and therefore favor search recall at the expense of precision. Faced with the unmanageable amount of returned results, users are often left with few options but to hastily browse the first few returned pages.
The enormous size of the returned set creates at least two barriers to a successful user-system communication. Firstly, there is no telling whether there might be documents relevant to users’ needs buried deep down in the returned set. Secondly, the skimming of the surface of the returned set gives inadequate feedback to meaningfully help users’ judgment about the query performance.
The breakdown of user-system communication is especially severe in search situations where users are searching for unfamiliar topics or only have vague information needs. Without timely feedback from the system, users are unlikely to be able to interact effectively with the system, and refine their searches.
To address the communication breakdown, an interface was created for PubMed search that utilizes MeSH co-occurrence information to provide support for users’ query construction. The article reports the functionality of the interface and results of an experiment designed to assess its effectiveness. A novel approach was proposed to the evaluation of interactive information retrieval (IIR) systems such as ours. In the following sections we first elaborate on the functionality of the interface, followed by the research design and procedures, and conclude with the results of the quantitative analyses.
Query augmentation
To alleviate the mismatch of representational exhaustivity at the two ends of IR process, various techniques have been proposed to expand users’ queries, either automatically or interactively. An approach that has recently gained wide adoption is “metadata guided search”, which involves dynamically extracting metadata from the initial returned set for users to augment their queries (Hearst, Elliott, English, Sinha, Swearingen, and Yee, 2002; Lin, 1999; Pollitt, 1998). For literature searches in health sciences specifically, several attempts have been made to exploit term co-occurrence relationship for term suggestion purpose, with terms extracted either from a controlled vocabulary (Doms and Schroeder, 2005), or free-text in the article abstracts (Goetz and von der Lieth, 2005; Perez-Iratxeta, Bork, Andrade, 2001; Plikus, Zhang and Chuong, 2006).
Multi-faceted Access to PubMed
Our approach involves extracting faceted metadata to guide user exploration of the information space similar to that proposed in Hearst et al. (Hearst et al, 2002), with a few modifications made for the specific circumstances of PubMed search. An interface (MAP) was built that delivers the query submitted to PubMed and generates MeSH terms for users to refine their queries. For a massive database like PubMed, it become less feasible to adopt the browse and select method proposed in Hearst’s Flamenco system. Instead of relying solely on browsing, the proposed interface preserves the search mode of access while providing the browsable faceted category in support of searching.
At the implementation level, dynamically extracting metadata at search time could be problematic as the amount of computing time needed might cause a delay. Therefore instead of generating a term co-occurrence matrix on the fly during searching time from the search results, a database of term occurrence data was built beforehand. The term co-occurrence table can be updated regularly to better represent the conceptual relationships in the published literature. The database included descriptors from MeSH term, author and journal title fields extracted from all the PubMed bibliographic records in 2006 and 2007.
As the user submits her/his query though the MAP interface, the top two hundreds terms that co-occurs most frequently with the query term will be indentified and display for browsing. As users are likely to submit non-MeSH terms as their queries, users’ queries have to be first mapped to an appropriate MeSH term in the prebuilt database. This is done by utilizing PubMed’s automatic translation table function (See more on automatic term mapping in “PubMed Help” ). Thus as the user submit her/his query, proper MeSH terms interpreted by PubMed’s automatic translation table are also retrieved in order to identify terms co-occurring with the initial term from our local database. In cases where no MeSH term is returned by the translation table, another search mechanism will be activated where descriptors are extracted directly from the top two hundreds returned Medline records.
Two approaches to term display have been attempted, one simply ranks the terms without categorization (List); the other organizes the terms by the MeSH top-level categories (Faceted-category). Terms are ranked in both methods by their co-occurrence frequency multiplied by its inverse terms frequency.
An empirical study was conducted to investigate the usefulness of the faceted-category version of the interface. Specifically, we would like to know, firstly, how users with genuine information needs will interact with the proposed interface. Secondly, whether the MAP interface performs favorably, compared to the regular PubMed interface. Of particular interest is under what search situations will MAP be most effective.
Evaluation of IIR systems
The evaluation of an IIR (Interactive IR) system such as MAP poses a serious challenge to evaluation methodology. In traditional IR evaluation, other than the system components being compared, searchers, search topics, and their interaction effects with the systems are regularly treated as random variance the experimenters strive to systematically control and minimize. The “Laboratory Model” (Kekäläinen, and Jäärvelin, 2002) of evaluation is very efficient for comparing the effectiveness of algorithms. However, it becomes inadequate for today’s interactive systems whose effectiveness depends largely on active users’ engagement (Belkin et al. 2004). TREC interactive track signifies the early effort to include human subject into the modeling of IR performance (Dumais, and Belkin, 2005). The inclusion of the users in the loop, however, also increases the difficulty in evaluation experimental design and analysis. It has been shown in TREC interactive track data that “topic effect” accounts for the greatest variance in models that includes searchers, search topics, systems and their interactions. To make the main system effects comparable, replicated Latin square design has been adopted where all the treatment levels (i.e. different systems) have equal chance of been “crossed” by the same searchers and search topics. Yet the threat of topic-system interactions remains. In the non-interactive test environment, the issue of topic effects and topic-system interactions biasing the systems comparison has been addressed by averaging performance criteria over a sufficient number of topics (Lagergren and Over, 1998), which is unfeasible where human searchers are involved.
Another inherit constrain in the traditional IR evaluation paradigm is that the systems compared are conceptualized as general purposes tools, without considering for what kinds of search requests it might be more effective. Therefore the assigned tasks are created mostly in an ad hoc manner, without theorizing task characteristics and how these characteristics might interact with the system features.
To better understand how real users with real search requests might interact with MAP, and its effectiveness under different search situations, a novel approach to IIR evaluation was adopted in our study, most notably the sampling of real users’ genuine search requests. Instead of assigning the participants a uniform set of search requests, it was decided to allow the participants to conduct their own search requests in a controlled experimental setting. This was done for the following reasons: firstly, it was feared that, had we used pre-constructed requests, the participants would simply grab terms from the task narrative as query terms, which renders the interface, which is designed for facilitating query construction, less useful, if not entirely useless. Secondly, as pointed out earlier, one of our research questions is to look into the usefulness of the interface under different search situations. To do so using pre-constructed requests entails operationalizing search characteristics with topic narratives, which is difficult to pull through in a highly specialized domain such as health sciences without help from domain experts. Asking the participants to characterize their search requests with chosen attributes such as domain familiarity and whether it is a new or revisited search therefore affords us a rare opportunity to investigate the interactions between these attributes and interface on various performance criteria. Specifically, it hypothesized that, the MAP interface, because of the vocabulary support it provides, is more effective when the users are new to a research area and lack the necessary domain knowledge and terminology to conduct effective searches.
Research design
The decision to use real users searching for real information needs on real systems entails several thorny methodological issues that need to be addressed. First of all, without a set of predefined tasks we do not have the benefit of objective relevant judgment that serves as the benchmark
Table 1. Graphic representation of the experimental design
for traditional IR evaluations. Therefore it is important to come up with valid performance criteria other than recall and precision based on which system performance can be compared. Secondly, the use of real user requests poses further challenges to creating a research design capable of controlling the confounding factors.
In our design, participants were asked to search their requests with both interfaces (the regular PubMed interface and MAP), which makes it a repeated measure design where each request serves as its own control. The repeated measure has the advantage of reducing the error term thereby giving more power to the statistical test. However, it also comes with its own risks, most significantly carry-over effects that might confound the results. To control for possible carry-over effect, the requests were randomly assigned to alternate treatment order, so that any given request would have an equal chance of being searched first with either interface.
Table1 shows the eventual mixed factorial design adopted in this study with the interface as the within-subject factor and type of search requests as the between-subject factor. Participants were randomly assigned to one of the four groups in which they would search their own requests, first on either of the interface, then move on to the other (See Table1). Notice that the treatments for the same request were never received in immediate sequence; this is done so in the hope of further minimizing the carryover effect as the there was always a “wash-out” period between the two treatments to the same request.
It is recognized that, despite the alternation of the treatment order, carry-over effect might still persist as terms picked up from the preceding interface would wind up unduly advantaging the present one. The risk seems larger in group 3 and 4, when the MAP interface was used first, as the participants were likely to be exposed to more terms in these sessions. It is therefore crucial to instruct the participants to start their search with the same query and not to use terms they have learned from a previous session when the request was searched the second time.
Research procedures
A total of 44 regular PubMed/Medline users were recruited to participate in the study, all of whom were graduate students in health and biomedical sciences. They were told to prepare two search requests of their own prior to coming to the laboratory.
The research procedure was as follow: Upon their arrivals, the participants were asked to give consent for the study and fill out a brief entry questionnaire where data regarding their subject expertise, educational status and PubMed/Medline search experiences were collected. An online video tutorial was given to explain how to operate the interface.
Before the search for each request began, a pre-search questionnaire was first administered in which the participants were asked to write down a search statement for the quest they were about to search. They were also asked to provide what they believed to be the most ideal query for the request at this point. Scaled data were also elicited on the characteristics of the request such as their familiarity with the problem area, thoroughness needed for the request, whether the request has been searched before and so on.
As noted in the methodology session, each participant would search for two requests of their own alternately on two interfaces, resulting in four search sessions. As the search session began, the participants first input the original query they had given in the pre-search questionnaire and then were asked to retrieve ten useful records using the interface they were assigned in that particular session. After the initial input, they were allowed to revise their queries based on feedback from the interfaces and the search results. In other words, they were able to interact with the interfaces as they would normally do when conducting PubMed searches.
Participants were told they had 15 minutes for the task but could stop whenever they had finished. After each session, they again wrote down what they considered to be the best query terms at that time. They were then asked to rate the goodness of their pre- and post-search ideal queries on a 0-6 scale. This information allowed us to compare, for each search request, a participant's perceived goodness of his or her query before and after interacting with either interface. Scaled data on satisfaction with the results and on the perceived usefulness of the interface were also elicited in a post-search questionnaire. All interactions with the interfaces were logged and recorded by screen-capture software. Of particular interest are the number of iterations, the number of terms selected, and the number of records viewed, which should give a clearer idea of how users interact differently with the two interfaces (see Table 2 for a summary of the research procedures).
Table 2. Research procedure
After both search sessions for the same request had finished, participants were asked to indicate the degree of relevance of each of the 20 records (10 from each session) on a 0-6 scale and to single out records they had seen before. This allowed us to compare the quality of the search results retrieved by the two interfaces. Other performance criteria collected in the post-search questionnaire include users' assessment of how well their requests were represented by their queries (the "goodness" of the query) and their satisfaction with the search results. Participants were also asked to comment on the usefulness of the term-suggestion function and on the search situations in which they thought it might be helpful. The initial analysis of the results is reported in the following section; as the qualitative analysis of the query formulations is still underway, the results reported here are derived mostly from quantitative measurements.
Results
Search characteristics
A total of 88 (44 × 2) search requests were sampled, resulting in 176 search sessions, as each request was searched with both MAP and the PubMed baseline. Among the 88 requests, 60 had been searched by the participants before (revisited searches) and 28 were searched for the first time (new searches). Sampling genuine search requests afforded us an opportunity to examine the relationships among different characteristics of information needs and how they might affect the effectiveness of the interface. Table 3 shows the correlations among different attributes of the search requests. Not surprisingly, the original goodness of the query was highly correlated with the participant's familiarity with the search topic, which in turn was highly correlated with the completeness needed for the search.
Table 3. Search characteristics correlations (N=88)
** p<.01, *p<.05
Querying behaviors
As the interface was designed with a view to facilitating query reformulation, we naturally wanted to see whether users' querying behaviors differed between the two interfaces. Specifically, four measures were compared: the number of terms added and deleted per session, the number of query submissions, and the similarity between users' original and finalized queries. The original-final query similarity can be seen as an indication of the effect of the interface on users' queries: the higher the similarity, the less the final query diverges from the original. Jaccard's coefficient was used as the similarity measure:

sim(A, B) = |A ∩ B| / |A ∪ B|

where A stands for the set of terms in the original query and B for the set of terms in the finalized query. A paired-samples t-test was conducted to evaluate whether the original-final query similarity differed between the two interfaces. The mean similarity for PubMed (M = .54, SD = .31) was significantly greater than that for MAP (M = .39, SD = .27), t(87) = 3.77, p < .001. In other words, when using MAP, users' final queries diverged from their original queries to a significantly larger degree than when using the PubMed baseline.
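As an illustration, the similarity computation and the paired test can be sketched in a few lines of Python; the query term sets below are made-up examples, not data from the study.

```python
from scipy.stats import ttest_rel

def jaccard(a, b):
    """Jaccard's coefficient |A ∩ B| / |A ∪ B| between two term sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Made-up (original, final) query term sets, one pair per request:
pubmed_pairs = [({"morphine", "tolerance"}, {"morphine", "tolerance", "dosage"}),
                ({"cartilage", "repair"}, {"cartilage", "repair"})]
map_pairs = [({"morphine", "tolerance"}, {"pain", "treatment outcome"}),
             ({"cartilage", "repair"}, {"articular cartilage", "biomaterials"})]

sim_pubmed = [jaccard(o, f) for o, f in pubmed_pairs]
sim_map = [jaccard(o, f) for o, f in map_pairs]

# Paired-samples t-test across requests (df = N - 1; N = 88 in the study)
t, p = ttest_rel(sim_pubmed, sim_map)
print(sim_pubmed, sim_map, t, p)
```

A lower similarity means the finalized query retained fewer of the original terms, which is how the divergence reported above was measured.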
Significant differences were also found between PubMed and MAP with respect to the number of terms added, t(87) = 4.03, p < .001, and deleted, t(87) = 2.06, p < .05, during user interactions, as well as the number of terms in the finalized query, t(87) = 3.00, p < .005 (see Table 4).
Participants also made more query submissions (i.e., clicks of the search button) when searching with MAP (M = 5.26, SD = 2.78) than with PubMed (M = 3.74, SD = 2.74); the difference is significant, t(87) = 20.81, p < .001. Despite the great disparity in the number of search iterations, an equivalent number of records was viewed with the two interfaces, measured by summing the number of surrogate records contained in the results pages (20 surrogate records per page) brought up by the user during a search session. This indicates that, on average, fewer records were viewed per submission when MAP was used.
Table 4. Comparison of querying behaviors
Search performance
A one-way multivariate analysis of variance (MANOVA) was conducted to determine the effect of the two interfaces (PubMed baseline vs. MAP) on four performance criteria: perceived usefulness, self-assessed goodness of the query, average relevance score of the ten records saved, and user satisfaction with the results. Significant differences were found between the interfaces on the performance measures, Wilks' Λ = .92, F(4, 167) = 3.79, p < .01.
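A MANOVA of this shape can be run with statsmodels; the sketch below uses simulated placeholder numbers (random draws, not the study's data), and only the structure mirrors the study's sessions and four dependent variables.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 88  # sessions per interface (176 in total, as in the study)
df = pd.DataFrame({
    "interface": ["PubMed"] * n + ["MAP"] * n,
    "usefulness":   rng.normal(4.0, 1.0, 2 * n),
    "goodness":     rng.normal(4.0, 1.0, 2 * n),
    "relevance":    rng.normal(4.0, 1.0, 2 * n),
    "satisfaction": rng.normal(4.0, 1.0, 2 * n),
})

# Four dependent variables, one grouping factor; mv_test() reports
# Wilks' lambda among the multivariate test statistics.
fit = MANOVA.from_formula(
    "usefulness + goodness + relevance + satisfaction ~ interface", data=df)
print(fit.mv_test())
```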
As the MANOVA was significant, individual ANOVAs were then conducted. In the following we report the results of factorial ANOVAs testing the effects of two factors, the interface and the type of search request, on the aforementioned performance criteria. Here search requests were classified by whether they were new or revisited searches, based on our hypothesis that the experimental interface is more effective for users who are new to a research area.
The means and standard deviations for participants' assessment of the goodness of their queries as a function of the two factors are presented in Table 5. The MAP interface did significantly better in terms of the final "goodness" of the query, F(1, 86) = 7.88, p = .006, especially for requests that had not been searched before by the participants (see Table 5).
Table 5. “Goodness” of final query (n=88)
A similar pattern manifests itself with respect to the usefulness of the interface, F(1, 86) = 13.13, p < .001: the MAP interface did better, especially when new searches were attempted (Table 6).
Table 6. Usefulness of interface (n= 88)
In terms of average relevance scores, MAP was found to be only slightly better than the PubMed baseline in both new and revisited searches (see Table 7), though the difference is not significant, F(1, 86) = 2.38, p = .126. The type of search, however, had a somewhat stronger impact on the quality of the search results, F(1, 86) = 3.84, p = .053: revisited searches produced better results than new searches.
Table 7. Average relevance scores (n=88)
Similarly, a 2 × 2 ANOVA was conducted to evaluate the effect of the interface and users' previous topic-search experience on their satisfaction with the search results. The ANOVA indicated a non-significant effect for interface, F(1, 82) = 1.82, p = .18, and a non-significant effect for search type, F(1, 82) = .61, p = .44. To investigate whether there was an interaction between interface and type of search, we set aside the main effect of interface and examined its simple main effects, that is, the differences between interfaces for new and revisited searches separately. There was no significant difference in users' satisfaction with the results between the interfaces for revisited searches, F(1, 116) = .75, p = .39, but there was a significant difference for new searches, F(1, 52) = 5.71, p = .021, indicating that MAP did better in this regard only when a new search was attempted.
Table 8. Satisfaction with results (n=84)
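The simple-main-effects analysis amounts to testing the interface difference separately within each level of search type. A minimal sketch follows, again on simulated placeholder data rather than the study's records.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "interface":    ["PubMed", "MAP"] * 60,
    "search_type":  ["new"] * 56 + ["revisited"] * 64,
    "satisfaction": rng.normal(4.0, 1.0, 120),
})

# Simple main effect of interface, computed within each search type
for stype, sub in df.groupby("search_type"):
    F, p = f_oneway(sub.loc[sub.interface == "PubMed", "satisfaction"],
                    sub.loc[sub.interface == "MAP", "satisfaction"])
    print(f"{stype}: F = {F:.2f}, p = {p:.3f}")
```

With only two groups per test, each one-way F here is equivalent to a squared two-sample t statistic, which is why these follow-up tests are reported as F ratios above.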
Users' satisfaction with the terms suggested by MAP was also slightly higher for requests that had not been searched before (Table 9), though the difference is not significant.
Table 9. Satisfaction with suggested terms (n= 88)
The analyses so far suggest that MAP was beneficial to users' searches, particularly for searches that had not been attempted before. The remaining question is: in what respect does it help? In the post-search questionnaire, participants were asked specifically about their perception of MAP with several Likert-type (0-6) questions: if the MAP interface helped their search at all, did it help because (1) it helps me generate new ideas and concepts for future research, (2) it helps me clarify my search questions, (3) it helps by showing the structure of the literature in the database, or (4) it helps me manage the enormous amount of search results. ANOVA tests were conducted to evaluate whether participants' answers differed significantly between new and revisited searches. Among the four dimensions, only one significant difference, "helps me generate new concepts", was found between new and revisited searches (Table 10).
Table 10. Help generate new concepts (n= 88)
Types of query reformulation
A detailed analysis of the characteristics of the terms added and deleted during user interaction is currently underway and reserved for future discussion. The analysis focuses on how the expanded terms relate to the user's original query. Here we present a few examples showing how the initial-expanded term relationships will be categorized. Our current coding scheme largely concerns two categories: terms semantically related to the initial terms, and terms that represent new ideas not included in the initial query. Semantically related terms stand in either a hierarchical (i.e., broader or narrower) or a synonymous relationship with the initial terms (see Table 11). For example, in the search conducted by subject #19 with "tolerance of morphine dosage" as the original query, the added MeSH terms "pain" and "treatment outcome" were coded as new ideas generated by MAP, as they have no clear hierarchical or synonymous relationship with terms in the original query. Another example of new ideas can be found in the MAP session conducted by subject #24, in which two new MeSH terms, "environment exposure" and "flame retardants", were added to the final query. An instance of a hierarchical relationship can be found in the search conducted by subject #48 with the original query "cartilage repair AND tissue engineering". Two MeSH terms were added during interaction with MAP: "articular cartilage" and "biomaterials". "Articular cartilage" was identified as a narrower term for the initial term "cartilage", and "biomaterials" as a broader term for "tissue engineering". For the same request, another term, "osteoarthritis", was added in the PubMed session and coded as a new concept. With this categorization of term relationships, we will be able to examine the distribution of the relationships in MAP vs. PubMed and in new vs. revisited searches, which should help us infer whether and how the two factors, interface and previous search experience, influence the characteristics of the expanded terms.
Table 11. Original-expanded terms relationships
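A minimal sketch of how such a coding scheme could be operationalized is given below. The relation table is a toy stand-in for MeSH's actual hierarchy, and the function is our illustration rather than the authors' coding procedure.

```python
# Toy broader-term table standing in for the MeSH hierarchy (hypothetical):
# each key maps a term to its broader term.
BROADER = {
    "articular cartilage": "cartilage",
    "tissue engineering": "biomaterials",
}
SYNONYMS = {"neoplasms": "cancer"}  # hypothetical synonym pairs

def code_relation(added, original_terms):
    """Code an added term against the original query terms."""
    for orig in original_terms:
        if BROADER.get(added) == orig:
            return "narrower"   # added term sits below an original term
        if BROADER.get(orig) == added:
            return "broader"    # added term sits above an original term
        if SYNONYMS.get(added) == orig or SYNONYMS.get(orig) == added:
            return "synonym"
    return "new idea"           # no hierarchical or synonymous link found

# The subject #48 example from the text:
original = ["cartilage", "tissue engineering"]
for term in ["articular cartilage", "biomaterials", "osteoarthritis"]:
    print(term, "->", code_relation(term, original))
```

Run on the subject #48 example, this reproduces the codes described above: "articular cartilage" as a narrower term, "biomaterials" as a broader term, and "osteoarthritis" as a new idea.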
Acknowledgements
Appendix
Appendix I: Screenshot of MAP
Footnotes
1. The interface can be accessed at http://morris.lis.ntu.edu.tw/map/new/medlineFre2.php
See Appendix I for a screenshot of the interface
2. http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=helpPubMed.chapter.PubMedhelp
Wednesday, September 8, 2010
[article]Facet analysis by Kathryn La Barre
ARIST vol.44: 243-284.
overview
Facet theory: Language and Definition
Primer for Facet Analysis and Faceted Classification
Heritage of Facet Analysis
Facets or Facet-Like Structures in Cognate Areas
Appendix 1: Glossary of Terms
Appendix 2: Canonical Literature
Tuesday, September 7, 2010
[journal]Journal of Research on Libraries and Young Adults
[from jESSE]
The Journal of Research on Libraries and Young Adults is an online
open access peer reviewed journal (http://www.yalsa.ala.org/jrlya)
launching November 2010. The purpose of Journal of Research on
Libraries and Young Adults is to enhance the development of theory,
research, and practices to support young adult library services.
Journal of Research on Libraries and Young Adults promotes and
publishes high quality original research concerning the informational
and developmental needs of young adults; the management,
implementation, and evaluation of library services for young adults;
and other critical issues relevant to librarians who work with young
adults. The journal also includes literary and cultural analysis of
classic and contemporary writing for young adults.
Journal of Research on Libraries and Young Adults invites manuscripts
based on original qualitative or quantitative research, an innovative
conceptual framework, or a substantial literature review that opens
new areas of inquiry and investigation. Case studies and works of
literary analysis are also welcome. The journal recognizes the
contributions other disciplines make to expanding and enriching
theory, research and practice in young adult library services and
encourages submissions from researchers and practitioners in all
fields.
The Journal of Research on Libraries and Young Adults uses Chicago
Manual of Style endnotes. For complete author guidelines, including
example citations, please visit the author guidelines website at
http://www.ala.org/ala/mgrps/divs/yalsa/yalsapubs/research/authorguidelines.cfm.
While submissions average 4,000 to 7,000 words, manuscripts of all
lengths will be considered. Full color images, photos, and other
media are all accepted.
Submissions
Please contact editor Jessica Moyer at yalsaresearch@gmail.com or
jessicaemilymoyer@gmail.com to discuss submissions and/or author
guidelines. All completed manuscripts should be submitted as email
attachments to Jessica Moyer at yalsaresearch@gmail.com or
jessicaemilymoyer@gmail.com. Please attach each figure or graphic as
a separate file.
The first issue of the Journal of Research on Libraries and Young
Adults will be available online at http://www.yalsa.ala.org/jrlya/
on Monday, November 1, 2010, and will feature the papers to be
presented at the 2010 YALSA Symposium on Young Adult Literature.
Manuscripts are currently being accepted for the Winter 2011 and
Spring 2011 issues.
Jessica E. Moyer, M.S., C.A.S.
Doctoral Candidate, Literacy Education, College of Education and Human
Development, University of Minnesota Twin Cities
Doctoral Dissertation Fellow, 2010-2011 University of Minnesota
Member Editor, Journal of Research on Libraries and Young Adults,
yalsaresearch@gmail.com
Email: jessicaemilymoyer@gmail.com, http://jessicaemilymoyer.pbworks.com/