
Tuesday, March 31, 2015

Faculty Perceptions of a Library: Paneling for Assessment

I presented "Faculty Perceptions of a Library: Paneling for Assessment" at the Association for College and Research Libraries 2015 conference in Portland, Oregon on Friday. What follows is my presentation of that paper. Should you like to read the real thing, it's available here, and feedback is most welcome.
Abstract: This paper introduces librarians and library staff to “paneling,” a technique employed here to analyze the discourse around and within how faculty perceive an academic library at a small university. The concept of panels comes to librarianship from anthropology, and shows great promise as not only an assessment tool, but also one that informs library practices and behaviors. 
How the assessment sausage gets made. 
In both the paper and presentation I discuss the who, what, how, and why of panels. Paneling is a methodological tool, and when assessing, one should use multiple tools.

This was for one person who I knew would attend. They know who they are. 
Panels are interpretive. What I mean is that we, as people, create meaning when we observe, participate, and interact with one another. Panels are explicitly subjective; that is, my interpretation and understanding may differ from another person's.


Paneling involves editing and coding the testimony of what anthropologists call informants into narratives, which gives a great deal of power, of editorial discretion, to a researcher. 


What I was looking for here are faculty narratives about the library. How do they perceive it, understand it, and tell each other stories about it? So in the spring of 2014, the then Associate Provost (now Provost) and I convened a series of faculty focus groups to assess and understand how faculty view the library. With a major accreditation regime acting as a proverbial sword of Damocles, or a "buy-in," if you prefer, we were able to interview all of our seventy-plus full-time faculty in seven semi-structured focus groups. We organized these groups by major, program, and school wherever possible, asking faculty what they thought of library services, collections, staff, the website, and more.

There were challenges to working with faculty in what are often their "natural" groups. Focus groups composed of colleagues, some junior, some senior, are subject to the same kinds of group dynamics that occur in faculty offices, lounges, and hallways. It's a weakness of panels, one that I was very aware of. Some faculty may have felt silenced, for example, and in editing and coding faculty responses, dissent can be excluded when it doesn't fit the dominant narratives.

Nonetheless, there were very clear faculty narratives present, across majors, programs, and schools. I hand-edited and -coded these, and the Provost then reviewed them for some measure of interrater reliability (a check sketched below). We were able to organize these narratives into five panels, stories about the library.
  1. Physical library space
  2. Library website
  3. Library instruction
  4. Print and online collections
  5. Customer service
Note that it is impossible to cleanly separate these panels. It is difficult, if not impossible, for example, to talk about online collections without talking about the library website.
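As a concrete, if toy, illustration of the reliability check mentioned above: imagine two coders, me and the Provost, independently assigning each faculty quote to one of the five panels, then comparing assignments. Here is a minimal sketch in Python, with invented codes (this is not our actual data):

```python
# Toy illustration of an interrater reliability check: two coders each
# assign every faculty quote to one of the five panels, and we compute
# simple percent agreement plus Cohen's kappa. All codes are invented.
from collections import Counter

PANELS = ["space", "website", "instruction", "collections", "service"]

coder_a = ["space", "website", "space", "service", "collections",
           "instruction", "website", "space", "collections", "service"]
coder_b = ["space", "website", "space", "service", "website",
           "instruction", "website", "space", "collections", "service"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[p] / n) * (freq_b[p] / n) for p in PANELS)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Kappa discounts the agreement two coders would reach by guessing alone, which is why it is a sterner check than raw percent agreement.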

With regard to the library's physical space, the dominant narrative among faculty, across majors, programs, and schools, was that the library should have more flexible learning spaces. We've been able to carve out some spaces for mixed use, but they have come with mixed results. For example, multiple faculty referred to one of the library's newer mixed-use spaces as "scary."

Creating these kinds of spaces can be difficult, and will involve weeding, deaccessioning older materials, and shifting stacks. When I mentioned this to faculty, the response was positive; being in focus groups, having a conversation with them, allowed me to make that process more transparent, which will hopefully minimize problems down the road.


At the time of the focus groups, we were transitioning to a new library website built around a discovery service, details here, and those faculty more familiar with the changes liked it. But other faculty members were frustrated with the site, and mentioned going to other institutions' websites to conduct research, or even requesting papers on social media via hashtags such as "#icanhazpdf."


The overwhelming narrative regarding library instruction was "more." More one-shots; more for-credit courses, as one of our schools has; and more learning objects both on the library website and on our learning management system. 


Two narratives emerged from the Print and online collections panel. First, that our collections are out of date. Second, that the policies and procedures by which we develop and grow collections are unclear. Here, as in other panels, faculty are giving us clear feedback. If we act on it, and we are, we as library staff will be better able to earn their trust. 


With regard to customer service, one faculty member referred to our reference librarian, at the reference desk, as "the nice lady at the reception desk." Overall, faculty asked for more events at the library, and some even volunteered their services, offering to talk about their research or current events, which I take as a sign that faculty are reaching out to library staff, interested in partnering with us.


What we as library staff want to do is act on these faculty narratives, approaching them from multiple angles. Faculty are telling stories about the library, and we don't have to be passive here. By listening to faculty and acting on their perceptions, we can participate in those narratives and reshape them.

There are, of course, alternatives to panels. We could have used surveys, as many librarians are wont to do. However, a survey never would have told us how scary one of our rooms is, and with these focus groups we were able to have all full-time faculty participate. Surveys have more of a problem with representation, because not everyone, or even most faculty, would fill them out, and the questions one asks in a survey often shape the outcome, how people answer.

On the other hand, individual interviews would be too time-consuming, as would be the case with an ethnographic study of how faculty use the library. 

Again, we were able to leverage accreditation to get full faculty participation in focus groups, but paneling is just one piece of the puzzle; yes, you should use these other methods as well. Lots of kinds of meat go into a hot dog, and assessment, likewise, should be multi-method.


In addition, at a time when higher education seems obsessed with numbers, with statistical data, we shouldn't lose sight of other methods. There is more out there, and if we ignore it, we ignore interesting and useful questions and answers alike.
Higher education is quantitative in part because of a policy orientation where evaluation is seen as equivalent to counting and measuring. - Donna Lanclos
Panels helped us uncover stories about the library, and stories have power. We're able to act on those stories, those narratives, and that too is power. And that's why I used panels here.

We might use them again, for adjunct faculty, for university staff who don't use the library for whatever reason or reasons, and maybe for students as well. They're a tool in a toolkit for assessment, and as you can tell, I think this method is more organic, and useful, than most.

I'd like to find out more about what many different groups think of our library, and I think that interpretive methods have a role in getting us there. Thank you.

----------------------
I had about 13 minutes to discuss what turned out to be more of a 15-minute presentation, so I had to gloss over issues of epistemology in discussing interpretivism, and some of the nuts and bolts of editing and coding faculty testimony, but again, the paper goes into these in a bit more depth, and I welcome your thoughts, comments, and questions.

Elsewhere on the site:
Explore the presentations and conferences tags.


Presentation image credits:
Hot dog Venn diagram via Woot Shirt, 3/19/15, http://derbyimages.woot.com/73175/7a1aad0d-9545-4a52-84a6-8aeff6266cdf.jpg
Dancing squirrel via Imgur, 3/19/15, http://i.imgur.com/op3mwqQ.gif
Snow, “Informer,” via EastWest Records, 1993, 3/19/15, giffed by Back2th90s, http://www.back2the90s.com/upload/9/6/5/back2the90s/informer-snow.large.gif
Prime Directive slide from @anthrotweets
Sword, maybe of Damocles, via MS Clip Art
Frye Meme, Futurama, Fox Network, 1999, 3/19/15, https://imgflip.com/readImage?iid=176908
Parker Posey, “Party Girl,” via Sony Pictures 1995, giffed by cryinanddrivin http://33.media.tumblr.com/tumblr_lx8xazJ6J61qzq62xo1_400.gif
Loading Page gif via http://www.dotnetfox.com/Terms-and-Conditions.aspx
Puppy! via Imgur, http://i.imgur.com/Zhr7yNY.gif
Counting money, via Yahoo! Money, 3/19/15, http://l.yimg.com/os/publish-images/news/2013-10-16/3d716da5-9448-4fef-b15e-5e5bc58fb975_counting-money.gif


Thursday, November 7, 2013

The Subtle Joys of Selecting on the Dependent Variable

Academic research in the social sciences has a variety of aims, but much of it seeks to explain or elucidate a phenomenon or condition and the relationships therein. In research parlance, this phenomenon or condition is the dependent variable. One should not select cases because they satisfy the criteria of the dependent variable; doing so is a form of selection bias and can lead to incorrect conclusions.

To wit, here is an example of selection bias from my former field of study, political science.
Analysts trying to explain why some developing countries have grown so much more rapidly than others regularly select a few successful new industrializing countries (NICs) for study, most often Taiwan, South Korea, Singapore, Brazil, and Mexico. In all these countries, during the periods of most rapid growth, governments exerted extensive controls over labor and prevented most expressions of worker discontent. Having noted this similarity, analysts argue that the repression, cooptation, discipline, or quiescence of labor contributes to high growth. (Geddes, 134)
If one were to make policy recommendations based on this research, one might advocate that developing countries repress labor unions in order to achieve economic growth, the dependent variable.

Reaction Gifs, as always. And Clueless. 
As it turns out, Alicia Silverstone is right to be skeptical about this claim.
In order to establish the plausibility of the claim that labor repression contributes to development, it is necessary to select a sample of cases without reference to their position on the dependent variable, rate each on its level of labor repression, and show that, on average, countries with higher levels of repression grow faster. 
The two tasks crucial to testing any hypothesis are to identify the universe of cases to which the hypothesis should apply, and to find or develop measures of the variables. A sample of cases to examine then needs to be selected from the universe in such a way as to insure that the criteria for selecting cases are uncorrelated with the placement of cases on the dependent variable. (Geddes, 134-5)
A random sample from the universe of cases is one way to test a hypothesis or a relationship; selecting on the dependent variable is anything but random, and when one does it, the research findings may be biased.
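To make the quoted point concrete, here is a minimal simulation in Python. Every number below is invented for illustration, none come from Geddes's data: labor repression is constructed to have no true effect on growth, yet an analyst who studies only the fastest-growing cases will still find repression present across all of them.

```python
# Toy simulation of selection on the dependent variable.
# Assumption baked in: repression has NO true effect on growth.
import numpy as np

rng = np.random.default_rng(42)

n = 200                                   # universe: 200 hypothetical countries
repression = rng.uniform(0, 10, n)        # labor repression score, 0-10
growth = rng.normal(3.0, 2.0, n)          # annual growth (%), independent of repression

# Biased design: study only the "success stories" (top 10% by growth).
# Repression is just as common here as everywhere else, but with no
# low-growth cases to compare against, it looks like a shared cause.
success = growth >= np.quantile(growth, 0.9)
print(f"mean repression, high-growth cases: {repression[success].mean():.2f}")
print(f"mean repression, whole universe:    {repression.mean():.2f}")

# Correct design: draw a sample without reference to growth, then ask
# whether repression actually predicts it.
idx = rng.choice(n, size=50, replace=False)
r = np.corrcoef(repression[idx], growth[idx])[0, 1]
print(f"repression-growth correlation in a random sample: {r:.2f}")
```

The correlation in the random sample hovers near zero, which is exactly what the biased design can never reveal.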

However, there is a flip-side to selecting on the dependent variable: the results are often not only relevant, but highly entertaining.

To wit, James Scott's Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is, in my mind, a towering achievement and an immensely absorbing piece of research. Of course, he selects on the schemes that have failed.

Via Google Books
And that brings us to library and information science.

Stanford University's Jacqueline Hettel and Chris Bourg are conducting research on "assessing library impact by text mining acknowledgements" from Google Books (Source). It is an impressive and creative way to measure how libraries can positively affect scholars, and at present it is at the "proof of concept" stage, so it is still early. Information and early data on the project are available at the following links.

http://www.linguabrarian.com/measuring-thanks/
http://www.linguabrarian.com/thanks-method-1/
http://www.linguabrarian.com/a-method-for-measuring-thanks-part-2-scraping-query-results-for-analysis-in-a-collaborative-project/

It seems that these scholars have a dependent variable robustly defined and measured: acknowledgements that thank libraries and librarians for their help with research. While they have the acknowledgements, proof of the impact of libraries, the dependent variable, they do not have the causes of those acknowledgements, and as a fellow librarian, the causes are what I am after. Those causes could yield a new metric of academic library success in scholarly communication. As of now, this work appears to be called "Measuring Thanks," a title that may hint at possible selection bias. I look forward to hearing more about the project, and I hope that they have not selected on the dependent variable by focusing on it at this early stage. As was the case above, a random sample of books, and the acknowledgements therein, is one way to avoid this bias.
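To sketch what that random sample might look like, not as a description of Hettel and Bourg's actual pipeline but as a hypothetical illustration: draw books without reference to whether they thank anyone, then record for each whether its acknowledgements mention a library, so that the base rate, and not just the positive cases, gets measured.

```python
# Hypothetical sketch: estimating how often books thank libraries from a
# RANDOM sample of acknowledgement sections, rather than collecting only
# the acknowledgements that already mention libraries. The corpus below
# is invented; a real study would sample from something like Google Books.
import random
import re

THANKS_LIBRARY = re.compile(r"\blibrar(?:y|ies|ian|ians)\b", re.IGNORECASE)

corpus = [
    "I thank the staff of the university library for locating rare pamphlets.",
    "Thanks to my editor and my family for their patience.",
    "Grateful acknowledgement to the interlibrary loan librarians.",
    "This book owes everything to my graduate advisor.",
    "No acknowledgements.",  # books that thank no one belong in the sample too
]

random.seed(1)
sample = random.sample(corpus, k=4)  # drawn without reference to content
hits = [text for text in sample if THANKS_LIBRARY.search(text)]

print(f"{len(hits)} of {len(sample)} sampled books thank a library")
print(f"estimated base rate: {len(hits) / len(sample):.0%}")
```

With a base rate in hand, one can start asking what distinguishes the books that give thanks from those that don't, which is where the causes live.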

Academic researchers are not supposed to select on the dependent variable, but doing so can lead to interesting and entertaining finds. More research that satisfies these latter conditions, please.