Monthly Archives: October 2010

2515 Futurology: The Role of the Library in 500 years (according to me)


I’ve recently returned from Internet Librarian in Monterey, where one of the evening sessions had a series of prominent thinkers in librarianship considering what libraries will look like in 2515. Of course, it’s notoriously difficult to predict the future 5 years out, let alone 500. Predicting 500 years out is actually impossible. But I want to give it a try anyway.

500 years ago, the printing press was still revolutionizing Europe; we’re 7 years from the 500th anniversary of the year Martin Luther nailed his theses to the chapel door in Wittenberg, which started the first ideological revolution fueled by the European printing press. If we imagine we are at just such a point, where a piece of technology is going to start a cascade that impacts every element of our society and culture, it’s clear just how impossible it is to predict 500 years into the future. The people complaining about information overload because of the volume of new books being produced were probably not anticipating the internet. But I think we can nod in their direction and remember that the moral crises of the moment will probably sound ridiculous in 500 years.

The key part I felt was missing from the predictions at Internet Librarian was the impact of the inevitable environmental and economic apocalypse that will likely occur between now and then. In 500 years we will have no more fossil fuels and will probably have exhausted all the key precious metals currently employed in high-tech manufacturing, so a computing culture based on better plastics and faster chips is probably unlikely. In a world where the fossil fuel economy crashes, there are two options: either we swap out fossil fuels for something cheaper and better and life continues on largely the same way (which seems unlikely, given how little progress anyone has made thus far), or lifestyles change radically.

In the crash, I would expect that we largely cease to travel nearly as much as we currently do. There is a movement now toward rehabilitating large suburban areas built around a car culture into walkable, livable spaces. If we lose access to cheap fossil fuels, and then lose fossil fuels altogether, I would expect that movement to grow and fundamentally alter the way we relate to our neighbourhoods and regions. Perhaps our highways would turn into long strips of farms. More walking, more biking, better public transit, more focus on local communities and getting what you need within a smaller radius. I would expect (relatively) cheap and accessible air travel to end, at least temporarily. Potentially, the costs of personal communication technology might rise as well. Remember: no more easy plastics. Our current disposable computing culture is based on cheap, easy plastic. Presumably computing devices could be made from recycled materials, but I would expect the cost of the devices to rise in any case. Would this create a second digital divide?

Of course, if we go through any length of time fighting over the remaining fossil fuels, there will be bloodshed. I would expect key civilizations to fall. I don’t imagine the world 500 years from now would be dominated by the English and the Americans. England would probably be mostly if not entirely underwater, as would large swaths of the United States. Potentially, the west’s focus on desktop/laptop computing would make it less agile. Europe, Africa and Asia are far better placed with their mobile technologies. Most of Africa skipped the hardwired internet infrastructure and went straight to wireless mobile; does that make them better suited when the crash inevitably comes and our hardwired infrastructure fails? Along with a shift in the dominant global cultures will come new metaphors and means of making sense of the world. This will also alter the way we think about and use technologies, and libraries.

The increasing resistance of bacteria to antibiotics, not to mention the rise of nonsensical distrust of vaccinations in the western world, will likely mean the return of certain terrible illnesses. In the next 500 years, I would expect to see first world countries contend with diseases long thought cured, managed or gone. We are a culture obsessed with cleanliness to such a level that we have encouraged a number of autoimmune disorders that I expect will only get worse. Doubtless we will discover a way to create stronger antibiotics, but I think the turn in that tide will come when we return to a more symbiotic relationship with our internal parasites and stop thinking of ourselves as so set apart from the natural world. I probably don’t need to mention our frankly terrible food culture and heartless animal/fish farming. Hopefully the radical drop in our populations due to illness, war, famine and infection will allow us the space to rethink how we manage our food resources.

In spite of all this devastation in the picture, I expect it will all be history in 500 years. 500 years is a pretty long time for humans, so I would expect that we’d have found a way to work through it in that time, primarily through altered expectations, cultural shift, and technological advances.

If this is the backdrop, where do libraries fit in? Clearly, a changed focus on the local over the global, or neighbourhoods over suburbs, means the library becomes, as it once was, a staple in the community. What are they offering? Is it books? It might be. Paper books are a more renewable resource than the technology that supports ebooks, certainly. I don’t have any doubt that the book (the novel, let’s say, even the monograph) will still be around as a concept, regardless of its form. We like long form content, we have always liked long form content. I would expect to see books get a lot longer, too. 1000 page books might be the norm, easily. If you look around you’ll see precious few thin books on shelves at the bookstore. Word processing has made it easier to write really long books and still edit and share them prior to publication. I expect our interfaces will only get easier and easier to use, resulting in longer and longer novels. Human minds seem to be nourished on stories; that’s been true for longer than 500 years (more like 3000, identifiably), so I don’t imagine our need for stories will vanish. Maybe the need will only increase, particularly because I think a turn will come where the values built into the humanities will rise.

That sounds crazy, because right now no one wants to fund the humanities anymore. The sciences are where it’s at, right? The more seamless our technologies get, the more story and metaphor will become the crucial factor in adoption and use. That’s something I learned more about at Internet Librarian, and it’s something I’ve been thinking about for some time as well. You can have all the technology in the world, but until the imaginative capacity is there to alter your culture to account for its use, you don’t progress. Technology can be a kind of driver, but you need story-inclined minds to make sense of it enough for the culture to absorb it in a rational way. That tendency is getting more obvious now, but it will probably become even more clear. Thought-leaders (sense-makers) can have technical skills, of course, in the same way that dancers need to know how to walk. In 500 years, I would expect that we live in a more deliberately metaphorical and ideas-driven world. This science focus is, in my opinion, a short-term blip spurred on by the long industrial revolution. It will have tapered off long before we reach 2515. That doesn’t mean that technological innovation stops; only that its dominance as the only thing worth funding will end.

If we see internet content creation as a base, I would expect in 500 years that everyone creates and consumes content. Everyone is an artist, a novelist, a creator of some kind. As the need for specific technical skills vanishes, more and more people can enter into artistic realms. For instance: it takes tremendous knowledge, skill and finances to build, say, a bridge or a building. With virtual tools, I can build a building or a bridge any time I want, using building blocks and expanses of virtual space. Even with no training in architecture, I can still spend hours creating buildings.

But the virtual is different, right? I would expect that divide to crumble, especially once we determine how to manage a sustainable computing culture. Without the plastics, maybe we would be forced to take the next step and leave these clunky interfaces altogether. At IL someone said we would be curators of screens; either we provide them or people bring their own. I think in 500 years there will be no more screens. There will only be content.

I don’t know how this happens. It’s hard to imagine and it sounds impossible, but put yourself in Gutenberg’s shoes or in Martin Luther’s and try to imagine the 21st century. I don’t know how we’ll do it, but I feel sure we probably will. I doubt it’s an implant, but I wouldn’t be surprised if we use our biological knowledge to create some sort of virus or bacteria that will turn our optical/sensory nervous system into a dumb terminal for the scads of data we create. I would expect our future to be overlaid not just with the kinds of information we now have access to, but with whole other levels of creation piled into the physical space itself. The volumes of data we now think of as “information overload” will seem laughably quaint. You can see the beginnings of how this would happen with collaborative geotagging and GPS and crowdsourcing. I think what we have now is the thinnest base level of where it will go.

There will be no more sitting at a desk in front of a computer. The idea of a computer will be something kids go to museums to understand. In the distant future, I would expect what we think of as the internet to be just part of the walls, the furniture, ourselves. It informs us instantly all the time. We communicate with it by shifting ourselves through the space, not by forcing ourselves into a place to use a set of tools. I imagine we would return to physical objects (like a telephone handset or a typewriter) as cues and indicators; touch this, hold it this way, and the system responds. The difference between software and physical objects becomes invisible; demonstrated intent is the way you launch an application. We have touchscreens; the future would have gestures writ large.

I would expect the publishing industry to collapse. There is too much creation, independent novels and artists, to sustain it in its present form. We don’t need a publishing industry to publish and distribute, so these would cease to be key elements of the industry. However, in spite of the fact that our cities are coated with graffiti, we still go to art galleries; the publishing industry will die but be reborn, potentially as a refining service, or as a referral and recommendation service. With all creation and all information so easy to generate and find, the value third parties like “publishers” can bring is with advice, support, networks, and recommendation. These are crucial and probably lucrative roles. It would be the publishing industry, but with the actual publishing part excised. It will probably become one of the strongest industries, employing a far larger proportion of the population than it does currently.

And this brings us back to libraries. If I’m right and we endure a series of technological and economic blows, the library may indeed become the only place where most people can access information and communication tools. We would need to expand our roles as access providers, growing our computing services rather than restricting them. If computers suddenly become insanely expensive, public libraries in particular will need to fill a need we haven’t had to fill in the west since the 80s and early 90s. If it goes the other way and computing becomes insanely cheap, possibly with biodegradable computers, we might do the opposite; reduce computing facilities, but provide tons of free and fast access with a focus on supporting user generated content. Maybe we provide tools for creation and storage of digital media, and in general support the voices of our patrons on the world stage.

By the time we adjust to a world with different energy sources and radically different economic models, with massive user-generated content of all kinds, search is long dead; context is what people need. Libraries, then, can act as filters. In some ways I think libraries can become totally ubiquitous civic services, providing support to neighbourhoods by filtering content based on where their patrons are at any given time. Rather than typing a search term, you would shift your physical self into a space that indicates the content you’re looking for. For instance: in a university, you might go to the history department to begin a “search” for history material. You might move toward a particular office to get closer to certain kinds of information, and that information is constantly refined by the person behind that door. Better yet, we take that universe of information and put it in a room where your shifting motion helps flip between topics and refine your process. Once again the university library is a global playhouse granting patrons access to the entire world of information from its beginnings to the present. We often talk about the value of place, the library as a place. I think, built well, the library as place would become the entryway to all information in one room. It would remain a valuable location not because it’s the only place to access that information, but because it’s the easiest way to do so.

Keyword searches are the mode of the day currently; but they aren’t easy, and with increasing forms of content, they will become rapidly unwieldy. This isn’t how people think, no matter how popular Google is; everyone still prefers to get recommendations from people they know and trust than to perform a cold-call search. Right now classification systems are antiquated and increasingly not useful; I would expect their usefulness to return in the future, because where you go in the “stacks” would filter the content you see, would make it resolve into its context. Merge your catalogue with your physical space; we already do this. This is exactly what the library was meant to be; a single repository of all information, ordered in a way that makes sense. In this world, that single repository would be dynamic, fundamentally cloud-based and terminal-free. Book-free, probably. Virtual but highly physical.

Libraries would retain their role not because they’re the only place to find the content, but because they’re the only way it appears in a rational, browseable, clear way, with exactly the context each individual patron needs. A librarian would be a refiner of context rather than a content selector. We would create roles for our patrons the way we do for usability studies, refine those roles based on a huge series of factors, and help individuals to create the library in the image they need, based on who they are today, and help that role to shift and change along with the patron. And we help take information about others (anonymized, or not; there is prestige in being a good filter) and apply it in interesting ways. We would be a constantly-shifting collection of contexts, switchable, alterable, reorderable on the fly. There would be much more information, created by many more people; but it would be so easy to sift through it that the idea of searching would be almost unthinkable. Searching would again become a highly-prized skill, because most people will not be required to do it. At that point, librarians can make a bid to win back search and be the go-to people when someone needs to actually search for something rather than just find it through thoughtful context. Reference desks would be beehives of activity. (Though: possibly bees would be extinct. Pity, that.)

In the days when long-distance travel becomes prohibitively expensive, we might make ourselves communication hubs, creating spaces where our patrons can interact with others at a distance but still have the feeling that they are sharing physical space. Maybe we will transform our spaces into conference areas. In an academic context, this makes perfect sense: we would become local venues for every academic conference of interest to our faculty and students. We would foster community both locally and globally. Our faculty would probably attend more conferences than ever before, interact more, share more. We would archive not just the papers but the experience of the conference for others to revisit. Hopefully we would take over the “publishing” side of academic communication altogether, fostering academic sharing in more ways than one. Once long-distance travel becomes easy again, many might prefer the library as a communication hub, and attending virtually might be the new old-school way to interact with a conference. Conferences would become far more frequent. Conference papers might overtake journal articles, but we would present each of them as though they are individual items in our collection, with peer review and context (and a thousand other variables) modifying how prominent they are.

At Internet Librarian, most people seemed to despair a bit at the idea of a long-term future for the library. I’m really not in that camp, obviously. I believe that the traditional role of the library is still very much up for grabs in the future, more so than in this blip of time we’re currently occupying. But as long as librarians think of libraries primarily as information storehouses rather than context-generators, and of themselves as “human search engines” rather than as personal, thoughtful and tech-savvy guides through a sea of available information, we will struggle to remain relevant. If we consider our true mission, underneath the formats and methods, I think we’ll find that the world always needs libraries. We just need to keep altering ourselves so that we keep meeting the same needs as the world changes.

iPhone: week one


For all my tech-geekery, I’ve never had a smartphone. There hasn’t been a really good reason for this, aside from a vague attempt at fiscal responsibility and the reality that I spend my life essentially in one of two wifi zones (home, work). I figured I didn’t really need a truly mobile device that connected to the internet. Couldn’t I have my (short) commute time away from it? It just never seemed that important. I’ve been following the developments, and while never anti-smartphone, I’ve just never been a good phone person. (At least: not since I was 16 and on the phone constantly.) There are so many other interesting ways to communicate: talking on the phone just seemed like the least imaginative. I don’t have a home phone, and my work voicemail is something I have to remind myself to check.

The internet is, largely, my passion in life: communication, productivity, creative thinking with internet tech, that’s what I do for a living. It’s also something I enjoy in my off-time; I’m genuinely interested in web innovation, and my explorations and thinking don’t stop when I leave the office. I understand the app revolution, and while I’m on the side that believes the apps are probably only temporarily in power and the mobile web will probably take over, I’m intrigued by the apps and the interesting things developers and users are doing with them. So you’d think I’d have been on this smartphone thing ages ago, but no.

In spite of my obvious interest in all things online, it wouldn’t be fair to classify my web experiences as addictive or compulsive. I’m absolutely okay with pulling the plug at pretty much any time. I can take a long road trip without the internet, and I don’t miss it. I love to read, I love to talk to people, I love to sit and think and muse. Contrary to the “information overload” debate (which I think is code for “I procrastinate and the internet makes it too easy”), I don’t find my connection to the internet either overwhelming or demanding. It’s a give and take. If I don’t want to pay attention, I don’t. When I want it to entertain me, or confuse me, or engage me and make me think in new ways, it does. So while I thought the smartphone thing was pretty cool and clearly an intriguing and useful development, I didn’t actually have one of my own.

Until last week, that is. I finally got on the bandwagon. And I’ve been diving in head first. No holds barred, no panic about the 3G usage. Not in the first week, at least. I gave myself permission to be gluttonous with it, to roll around in it and see how it felt.

The only times prior to now that I thought I’d like to have a smartphone were when I was out to dinner. Not because my dining companions have been sub par, but because I have an ongoing fascination with food history. I like to know how the composition on my plate came to be, and what historical events I can credit for it. This is easy with things like potatoes and tomatoes (“New World”, obviously), but garlic, carrots (did you know medieval Europeans ate not the orange root, but only the green tops of carrots?), bean sprouts, onions, cows, pigs, chickens, saffron, pepper, etc. are much harder to place. It’s really the only time I’ve felt the lack of the internet. I want to look up some historical details at very odd times. I figured a smartphone would be helpful for that. (I can’t really carry around a comprehensive food history book everywhere I go, can I.) Filling specific information needs: in spite of my own certainty that search is basically dead, in the back of my head I figured this is how I would use a smartphone. I was not right.

But it’s been different than I expected. First, and most obviously, I suddenly always know when I have email. I bet people hate that. Email is my second least favourite means of communication, so putting it at the front of the line has mixed results. As I said, I’m reasonably good at not feeling pressure to look at anything when I don’t want to, but the thing pings when I get new email, and it makes me curious. But even in the first week, I don’t look every time. I didn’t stop my conversation with my mother when I heard it ping. I did, however, answer a question from an instructor while on the GO train back home on Saturday. If you want to be distracted, access to the internet via smartphone will certainly act as a decent distraction.

My best experience with it so far has been a trip to my home town, Guelph. It’s early October, and suddenly this week autumn appeared in full colour. If you’ve never experienced a southern Ontario fall, you’re missing something great. The cool temperatures at night, mixed with the remaining warm days, turn out a crazy quilt of colour across the landscape. It’s only when there’s enough cold that you get the fiery reds and deep oranges. We’re in a banner year here, and on the bus on the way to Guelph I saw this awe-inspiring riot of colour out the window. Purple brush along the side of the road, a scintillating blue sky, red, orange, yellow and green leaves on the trees; this is the kind of thing that makes me happy to be living. The kind of thing I want to share, just out of the sheer unbelievability of it. These fall colours are incredibly ephemeral, so capturing them and sharing them has additional appeal.

So this phone I had in my hand, it has a camera. This was actually my first experience using it. And I discovered quite by accident that I could snap a picture and then post it to Twitter with a few swipes of a finger. So there I was, first on the bus, then walking down Gordon St. in Guelph in 22-degree weather, the sun warm on my skin, and while I was away from home, away from my computer, I was sharing my delight in the beauty around me, capturing it and sharing it effortlessly. It was one of those days when I felt like I could hardly believe the intensity of what I was seeing, but I was able to share it, record it, all as part of the experience. I’m not a great photographer: mostly I leave the camera alone and just experience my life without documenting it. But sometimes, documenting it is part of the experience, adds to it. So, in my 30-minute walk from the University of Guelph to my sister’s house, I shared the colours around me and saw the responses from my friends and colleagues far and wide. I was no less on the street, no less engaged. But I was also interacting with the world via the internet. I loved it. I was in two places at once. I had voices in my head. I was connected in two places. It reminded me of Snow Crash.

I’m sure this is no revelation for anyone who’s already had a smartphone all this time, so mea culpa. I was aware of this sort of ambient/ubiquitous computing; I just hadn’t had the chance to experiment with it myself yet, to see what it really feels like. I think the interface is still a bit clunky, too limiting, but the touch screen is getting closer to effortless. What’s wonderful about it is its seamlessness; picture to Twitter, responses, all so easy to see and engage with. And engaging online isn’t even really drawing me away from my real life experience. It’s just a part of it. I’m not thinking about cables or connections or keyboards. Technology is getting to be close to invisible, just present and available.

As I sat on the train, reading fiction online, leaving comments, checking out links on Twitter, reading educause research, answering work email, I realized that I would never be bored again.

I read someone’s response to the iPad a few months ago where he returned his iPad for this very reason: the threat of never feeling bored again. Boredom as critical experience, necessary experience. I can understand that, but of course it’s all in the decisions that you opt to make. We are invariably drawn to the shininess of instant gratification via the internet, of course. But even that can get boring, eventually. You do reach a point where you’ve read it all for the moment, and you’ll have to wait for more to appear in the little niche of reading that you do. Does that force you to branch out, find more and more interesting things? That’s not necessarily a terrible thing. Does it allow you to avoid reflecting, being with yourself in a place?

One of the very early criticisms directed at the iPad was that it was a device for consumers, on which information is merely consumed, not created. That jarred me, as it felt untrue and frankly a bit elitist. Creation doesn’t just mean writing software or hacks. Creation can be writing, or drawing, or singing, or sharing reactions and thoughts. But I see now, with both the iPhone and the iPad, that this criticism is both true and false. It’s true that these devices make it very easy to consume content created by others; it’s easier to browse and read than it is to write, for instance. The keyboard is pretty great, but it’s not as easy to use as the one attached to my laptop. But what I choose to browse/read/consume is still my choice; just because it’s on an iPad doesn’t mean that it’s all commercial content, not while the web is as relatively free and easy to access as it is. Most of my reading on these devices is not sponsored and not created by mainstream media. I’m not just reading the New York Times. I’m reading blogs and archives, primarily. And why are we so anti “consumer”? We need to consume the creations of others as part of a healthy dialogue, after all; there is a level of pop consumption that’s a good thing. Neither of these devices is as simple as a TV or a radio where there is a clear creator and a clear consumer. I am also a creator on these devices, a sharer of experiences, of thoughts and ideas. My experience walking down the street in Guelph on a beautiful day was a case in point; I was clearly a creator, sharing what I saw, engaging with others. That’s not a passive experience. Sitting on the train reading someone’s review of a movie, or a fictional take on an old idea; I’m consuming as well. In places where I couldn’t do so before.

It feels like there are fewer spaces in my life. The level of connection I’m currently experiencing seems to make my days blend together into one long back-and-forth with any number of people. Is this less downtime? Downtime transformed into time spent in this otherworld of communication and information? Am I reflecting less?

I started with a bang, so I guess it remains to be seen how much I keep at it. Will it get old? Will I return to my former habits, with less time testing the limits of my devices? It remains to be seen.

Adventures in Public Domain Reading


My acquisition of an iPad resulted in me reading my first ever ebook (Cassandra Clare’s Clockwork Angel) followed promptly by my second (Holly Black’s White Cat). Having learned that I enjoy reading ebooks via iBooks, I discovered the collection of free ebooks available on the platform via Project Gutenberg. So, I finally read through a few Arthur Conan Doyle books, some Daniel Defoe, and others. Now, reading books written prior to the 20th century isn’t exactly a novel experience for me. My first degree is in English. I took Renaissance literature, I’ve read Paradise Lost and Pilgrim’s Progress and Canterbury Tales and Pride and Prejudice and all those books you read when you do a degree in English. I discovered my love of Daniel Defoe reading Roxana and Moll Flanders. I know very well how many great books are out there.

But this time around, reading them next to modern books on a hypermodern platform, I’m noticing something odd about them. They seem slightly flat. That seems unfair; why would these books feel flat? I thought maybe it had something to do with current expectations of character building. I thought, maybe I’ve just become accustomed to reaching a particular level of intimacy with a character that wasn’t the fashion before now. But then, unpacking that a bit more, I thought it was actually just the illusion of intimacy with a character.

In a 19th century novel, we are fairly intimately enmeshed in the life of the protagonist. We follow them everywhere. We know almost everything that they do. But somehow that didn’t feel like enough to me. Following them around, hearing all their conversations, accompanying them to meals, it just doesn’t feel like enough.

So then I started to think about all the current fiction I’ve been reading, and what’s going on in it that’s so different.

For a start, current novels stick to a structure far more tightly. I read a lot of YA, fantasy and science fiction, and these genres all adhere to a pretty strict narrative structure. A protagonist with a mission, a story with a powerful beginning, lots of action in the middle to hold your attention, enemies that have at least some life breathed into them, a crashing, satisfying conclusion. I can’t read anything written in the last 10 years without being hyperaware of how word processing has shaped it. Easy editing, storage, searching, sharing, the relative ease of writing incredible volume that still hangs together as a complete story arc; I don’t imagine any of this would have been so easy and routine without access to a simple word processor. I think about J.R.R. Tolkien and how there’s just no way the detour with Tom Bombadil would have made it past an editor today. And I know he edited a lot, but I don’t think The Lord of the Rings would have been quite the same book if he had had access to a MacBook and a copy of Scrivener. If anything, it would have been even longer.

But it isn’t only that. I also realized, reading Conan Doyle and Cassie Clare at roughly the same time, that we have very few stories without a Sixth Sense sort of twist to them. I’m hard pressed to think of a single story I’ve read in the past 10 years that doesn’t have some kind of twist rooted in the past in the latter middle or end of the story. Not just a twist, see, actually a secret hidden in the past of the character that makes everything they’ve done all along suddenly appear in a different light. It’s not enough anymore to just have a plot; I also need this huge, revealing understory to cast a pall over everything else. I’m used to getting two stories for every story I read. And somehow this dual story surprise is what makes the characters feel more open to me. We don’t just go through a series of events together, which I think has largely been enough to make a good, immersive novel until relatively recently. I also expect to be let into a whole other internal drama, with secrets, betrayals, alternate identities, and shifts so massive there is no going back.

In Harry Potter, we have the relatively simple story of the boy who is a wizard, off to wizarding school; but of course there is the understory about his dead parents and all of their choices and relationships, all of which is in the past but coexists and underscores the progression of the narrative. Couldn’t we have done without it? Would it have seemed even thinner if it had just been a story about the here and now, like Holmes and Watson? Moriarty isn’t revealed to be Sherlock Holmes’ long lost twin brother, tangled in feelings of rejection and jealousy of his brother’s familial support and ability to avoid turning to the dark side. Nor is Moriarty Holmes’ father.

I think my expectation of this deeper explanation, revealed fairly late into a narrative but hinted at along the way, is what makes stories without them feel thin, more surface. I have no idea really why Sauron is so evil in The Lord of the Rings. He just is. Just like Moriarty. Defoe’s Roxana is a sexy criminal, for no apparent reason other than that is simply who she is. Without the big reveal and subsequent rethinking of the entire sequence of events toward the end of the story, I feel as though there’s a sizable chunk of the story left to the imagination. No wonder everyone questions Watson’s devotion to the confirmed bachelor Holmes; we’re used to the other shoe eventually dropping, and if it doesn’t, we’re left to find it and reveal it ourselves.

I love these stories with the twists in them. They’re extremely satisfying. I’ve just never noticed until now that the twists have the effect of simulating a new level of intimacy with the characters and the story, perhaps because I the reader learn something alongside the protagonist. We become confidants rather than merely storyteller and audience. But I think it is an illusion, and a powerful one. Can’t an old school narrative filled with descriptions of actions and decisions tell you just as much about a character as learning an old family secret? By all rights, shouldn’t it tell us more?