Monthly Archives: July 2004

P2P as a Function of Democracy

One of the things that has long bothered me about being a student at Western, using Western’s otherwise fantastic T3 connection, is the fact that P2P networks are verboten.

P2P: peer to peer. Peer-to-peer technology allows two computers to connect without a central server; two users can connect their systems directly and trade files. Examples of famous P2P networks: Napster, Kazaa, Limewire, etc. From Kazaa’s P2P philosophy page: “The most valuable contribution you can make to peer-to-peer is to provide original content for others to enjoy. You can also collect works in the public domain, that are licensed for public distribution (e.g. Creative Commons licenses), or open source software and become a resource for others.”
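Just to make the mechanics concrete: “no central server” really does mean one machine talking straight to another. Here is a minimal sketch of a direct peer-to-peer file transfer; the port number, host name, and file names are made up for illustration, and a real network like Kazaa layers searching and swarming on top of something like this.

    import socket

    PORT = 9090  # an arbitrary example port; exactly the kind of thing campus firewalls block

    def serve_file(path):
        """Peer A: wait for one incoming connection and send the file."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind(("", PORT))
            server.listen(1)
            conn, _addr = server.accept()
            with conn, open(path, "rb") as f:
                conn.sendall(f.read())

    def fetch_file(peer_host, save_as):
        """Peer B: connect straight to Peer A and save whatever arrives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
            client.connect((peer_host, PORT))
            with open(save_as, "wb") as f:
                while True:
                    chunk = client.recv(4096)
                    if not chunk:
                        break
                    f.write(chunk)

    # On one machine:  serve_file("chapter_three.doc")
    # On the other:    fetch_file("jasons-laptop.example.org", "chapter_three.doc")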

But what P2P means to most people is the quick and dirty ability to steal music.

See, I use P2P systems all the time. Mostly this is because I do a lot of collaborative work and use multiple, difficult-to-network computers. I use P2P networks to trade Word files back and forth. To trade links. Software. .php and .html files. The most annoying thing of all time to me when I first arrived at Western was that they had shut down the ports that permit P2P sharing. Because, you know, trading mp3s is bad.

The number of assumptions involved in that decision is truly boggling. First, the file extension .mp3 isn’t limited to illegally ripped music files. It also includes recordings of public domain lectures. It includes music files that are owned by, say, me, or my friend Jason. If you’re paying attention to things like Wired or even MuchMusic you’ll know that musicians themselves use P2P networks to collaborate on creating the music we’re not supposed to be sharing over the internet.

So this is just one of those things that ticks me off about internet security. From a campus location, I’m not allowed to receive or send .zip files (which, for someone like me with 92,000 words of manuscript to fire off, is extremely annoying) or open up my iChat file transfer system and send that zipped-up manuscript to my friends in New York for a read-through. No, I need to trust the wilds of email, which, by the way, is notoriously insecure and is owned and monitored by the University of Western Ontario. Argh, don’t even get me started.

But here is a good use of P2P: outragedmoderates.org is using P2P network technology to create a Government Document library. From Download for Democracy:

Peer-to-peer file sharing, or “P2P,” is best known for the role it has had in transforming the music industry. But what about using P2P to provide people with a way to rapidly transmit large amounts of political information? This isn’t a new idea – other groups, including the Libertarian Party, have used P2P to transmit political information before. But P2P hasn’t realized its full political potential until it has had a significant effect on a state or national election.

I think the time is right. The Download For Democracy campaign is currently offering PDF’s of over 600 government memos, communications, and reports, all of which were obtained from mainstream media sources, respected legal or academic groups, or the federal government itself.

Now, how about access to P2P gov docs libraries in, you know, libraries? I can feel the shiver starting, can’t you? [via metafilter.]

Social Software

My friend Jen sent me this link about social software, groups of people online, and some general guidelines for creating and maintaining social space on the internet. I can’t decide which part of my life this article feeds more: the librarian side, where I’m looking at social software for academic purposes, or the true geek side, which is/was part of several of the communities mentioned in this article. (I mean, how many people can say they know exactly what that Lambda reference meant to that community?) But for the moment, the part that jumps out at me most echoes my own comments about the v-ref article from a few days back:

Now, when I say these are three things you have to accept, I mean you have to accept them. Because if you don’t accept them upfront, they’ll happen to you anyway. And then you’ll end up writing one of those documents that says “Oh, we launched this and we tried it, and then the users came along and did all these weird things. And now we’re documenting it so future ages won’t make this mistake.” Even though you didn’t read the thing that was written in 1978.

Word, yo. I feel like this is just what the v-ref people are doing: not so much getting upset about unruly users as explaining away their failure by blaming it on them. There’s been a lot of research on this sort of thing; I could tell you right off that there were problems with v-ref implementation. But no one listens to me, do they. Noooooooo.

Virtual Reference

I’m going to have a go at this. I’ve been poring over this article most of the morning. The lead author is a very important v-ref guy; he works at LSSI, the people who brought us the most expensive virtual reference software package ever. It can do it all: multiple seats (i.e., multiple librarians on at once), pushing URLs, co-browsing (which is a fancy way of saying that the librarian can remotely control the user’s computer), and other fancy things. I will even leave aside my ethical problems with some of these features for the moment.

This article is so negative, and it misses some key points. The argument is based on faulty logic and on a desire to blame the user rather than looking at a) the technology, b) the developers, and c) the people behind the desk answering the questions.

You can’t pick at v-ref without looking at reference services in general. The numbers are going down everywhere. People are less willing than they used to be to ask a librarian a question, whether they’re coming in on foot, picking up the phone, or using the v-ref service. Why is that? You can’t blame the technology for a decline that’s happening across every channel.

There are lots of possible reasons for the decline in reference stats. The one I like to harp on most is reputational: why would a member of the community come to a librarian when most people believe that librarianship is a trade? We laugh about the way people assume we have no education, like that girl who commented that she wasn’t doing so well in school, so maybe she would drop out of undergrad and go to library school instead. If that’s the level people think we’re at, why would they come to us in the first place? You can’t blame a service for not enticing users if your product is lacklustre. Are we lacklustre? No. But people don’t know who we are, what we are, and what we can do. Before reference services can get a boost, we need to explain ourselves.

In this article, Coffman and Arret claim that “More important, the underlying chat technology that powered many live commercial reference services has also failed to find broad acceptance on the Web.” This is really interesting. Please, tell the millions of users of AIM, MSN Messenger, ICQ, Yahoo Messenger, Trillian, Jabber, and my personal favourite, iChat, that they are part of such a tiny niche market that they can be overlooked. Coffman and Arret are using the business world as their base of users to interpret “broad acceptance”. This feels like the arguments around open source software; the fact is that chat services don’t produce income, so businesses find themselves less interested in them. Letting people talk to each other about whatever they want is not something that generates revenue. In fact, technical support doesn’t generate revenue either. That businesses aren’t interested in supporting customer questions the way they probably should be doesn’t seem like a good argument for or against chat technology to me.

And in the end, what is a library transaction? Coffman and Arret claim that “the general public has yet to accept chat as a means of communications for business dealings and other more formal transactions.” Is reference a category of business dealings? Or a more formal transaction? Is it more like casual chat, or more like online banking? As Jennifer says, know what business you’re in. What business are we in? What model are we emulating here?

While Coffman and Arret make the grand claim that the corporate world isn’t into chat, even that’s not true. Every major free chat service provider (AIM, Yahoo, MSN, etc.) has a profitable corporate arm that builds business chat solutions for interoffice communication. If chat is so unpopular in general, why do these services make so much money? Perhaps the problem isn’t the technology but its implementation when it comes to customer service. How much buy-in do we have? How prepared are we to actually do this right?

V-ref isn’t difficult, but what librarians tend not to understand is that chatting online is not the same as writing an email. Chatting is chatting, and v-ref is more like verbal communication written down than it is like composing a dissertation on a question. Conversation is an easy back and forth, with frequent interjections. Chat communications should take the form of short sentences, not paragraphs. When librarians get trained on v-ref, they learn the software but not the tricks that make it really work. If we treated our phone questions the way we treat v-ref, I’m sure those numbers would go down too. Would we take 10 minutes to consider an answer on the phone, not saying anything, just holding the receiver while flipping through a source, waiting to come up with the perfect answer before opening our mouths? Probably not. Could this lack of understanding about how to conduct a v-ref interview have an impact on our numbers? I wouldn’t be surprised.

I think the problems are rife in this v-ref business, from attitudes to marketing and even the technology. Too much time has been spent on creating features like the (highly unethical) co-browsing and not enough on integrating the system into the real life of a librarian. If I had my way I’d re-write the whole thing from top to bottom. I would integrate an in-house messenger system with an external one, so that everyone is always on the v-ref software. It’s there when you log on, and if you have a quick question for, say, the music librarian, you can contact her directly that way. You can do that from your desk when you’re doing collections work, or you can do it from the reference desk when someone has a quick question. Virtual reference could have the effect of linking service points, opening up our points of contact both to the public and to ourselves. I would then have a point person who lets their IM go “live”, becoming visible on the internet at large instead of just the intranet, and let that person field the questions, with the ability to easily ask other librarians across the entire system, or transfer a patron to someone else. That way, even if the v-ref is totally dead on a given day, the software is still fulfilling a need.
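To make that idea a little more concrete, here is a minimal sketch of just the routing piece: everyone signed in, questions handed to whoever is free or best suited, and the ability to pass a patron along. The class names, people, and subject tags are all invented; this is a thought experiment, not a description of any existing v-ref package.

    class Librarian:
        def __init__(self, name, subjects):
            self.name = name
            self.subjects = subjects   # hypothetical subject tags, e.g. ["music"]
            self.available = True

    class ReferenceDesk:
        """Toy model: one pool of signed-in librarians sharing a v-ref queue."""

        def __init__(self, staff):
            self.staff = staff

        def route(self, question, subject=None):
            # Prefer an available subject specialist, fall back to anyone free.
            candidates = [p for p in self.staff if p.available]
            if subject:
                specialists = [p for p in candidates if subject in p.subjects]
                candidates = specialists or candidates
            if not candidates:
                return None            # nobody free: the question waits in the queue
            librarian = candidates[0]
            librarian.available = False
            print(f"{question!r} -> {librarian.name}")
            return librarian

        def transfer(self, question, from_librarian, subject):
            # Hand a patron along to a better-suited colleague.
            from_librarian.available = True
            return self.route(question, subject)

    # desk = ReferenceDesk([Librarian("Anna", ["music"]), Librarian("Raj", ["reference"])])
    # who = desk.route("Where are the Beethoven scores?", subject="music")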

I could go on. And on and on. But this is probably enough for now.

That First Mistake

The first mistake they made, back in the day, was deciding to stop cataloguing at the monograph level. I understand why they decided to do this. It’s a lot of work. Tons of work. They’d never be able to cut tech services if they had them cataloguing individual journal articles, or individual chapters of books. If they had decided to catalogue the contents of compilations and conference proceedings, to list every contribution in any scholarly oeuvre as a separate record in the catalogue, the workload would have been staggering.

At one point the sheer size of such a database must have seemed too overwhelming for the poor systems. So big and sprawling it would be impossible to complete and too slow to sort through. But at this point you can probably store most of the sum of human knowledge on a laptop, so that concern has gone. Space is cheap these days, too. Google is giving gigs away.
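For a rough sense of scale (every figure and field name below is invented for illustration, not taken from any real catalogue), an article-level record is just a small bundle of metadata, and even tens of millions of them only add up to a few gigabytes:

    import json

    # One made-up article-level record; the fields are guesses at what you'd want.
    article_record = {
        "title": "Some Article Title",
        "authors": ["A. Scholar", "B. Colleague"],
        "container": "Journal of Examples",   # the journal, proceedings, or collection
        "volume": 12, "issue": 3, "pages": "45-67",
        "year": 2003,
        "subjects": ["example studies", "cataloguing"],
    }

    record_size = len(json.dumps(article_record))   # a few hundred bytes of metadata

    # Even 20 million such records is only a few gigabytes,
    # smaller than a single 2004-era laptop hard drive.
    total_bytes = 20_000_000 * record_size
    print(f"{record_size} bytes per record, roughly {total_bytes / 1e9:.1f} GB for 20 million")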

But no. Someone must have seen the end coming when that decision was made. Maybe it was made before anyone even got their hands on it; maybe it was one person’s decision at the very beginning, back before one clear head could have prevailed.

If the libraries had at any point seen the error of their ways, thrown some support behind technical services, and done a proper cataloguing job on their collections, journals most importantly, those leeches who make up the third-party profiteering journal indexers and database vendors wouldn’t have had such an easy job getting a foothold. Think of the thousands that would have been saved. Millions! Wouldn’t it be better to employ more cataloguers in tech services than to line some third party’s pockets with university funds?

Eventually the necessity of full-text access would have reared its ugly head. But if we already had records and could search them, I think it would have been a fairly minor thing to get access to scanned versions. They probably wouldn’t have been cheap, but they would have had to fit into our interface, not the other way around.

Liz and I were talking about the revolution, you see. That’s when we get all the scholars on the continent to say, okay, that’s it. We’re done. We’re not submitting articles to these bloodsuckers anymore. We’re not going to peer review anything. We don’t get paid to do this, we do it because it’s a service to our profession. Why are universities paying for access to scholarship they pay faculty to produce? So what if one day all the faculty say, that’s it. We’re going open source. Our research is going to be open to everyone. We’re founding our own journals. We’ll charge a bare minimum for pdfs or print versions. We’ll form our own publishing divisions. Instead of funnelling thousands to the third parties, we’ll fund a new department in our academic libraries to handle journal publications. We’ll submit to each other, peer review each other. And the sun will rise the next day on a better world.

Me and Liz are going to take over the world.