
Look Who's Talking!

Meet Dr. Mary Shaw
Alan J. Perlis Professor of Computer Science
School of Computer Science, Carnegie Mellon University

Interview by CS Undergraduates: Dixie Kee, Lucy Li, and Young Jae Park

Tell us about your background. Where are you from? And how did you end up at Carnegie Mellon?

I grew up around Washington, D.C. I went to college at Rice, in Houston, Texas. Why Rice? I knew that I wanted to be at a first-rate school at least 200 miles from home, so that I didn't have the issue of, "Why don't you come home this weekend?" My mother's people are Texan, so Texas seemed like the natural place. While I was there (this was the early 1960s, and computers were relatively scarce resources), there was a machine that they had built, and I managed to start working part time on the computer project there. After my junior year, they sent me to a summer conference at Michigan, which was one of the three or four powerhouses in computing at the time. I met Alan Perlis there, and that pretty much settled it. So I came back and applied to several graduate schools, but the real first choice was to come to Pittsburgh and work with Alan Perlis, who was working on programming and programming languages.

When you mentioned the computer program at Rice, was it called computer science at the time?

Oh heavens, no. There was no computer science department of any kind. There was a computer that had been built by the engineers, which was used by chemical engineers and geologists and electrical engineers for computing big numerical models. I was a math major, and very few people in the math department even talked about computers. There was one numerical analyst, but for all the rest, computers were just irrelevant. So I didn't get a whole lot of sympathy in my home department, but I got a lot of interesting contacts and discussions with engineers and geologists and some other computer users. There were some physicists there also. Chemists, physicists, chemical engineers, electrical engineers. I did some programming for a mechanical engineer, and then there was the geologist. Subsequently, of course, they started a computer science department, but that was after I left. When I applied here (to CMU), there was not a computer science department yet either. There was an interdisciplinary program between math, GSIA (which hadn't yet been named Tepper), and electrical engineering. I applied to that program, and then the CS department and Ph.D. program were founded after I was accepted but before I arrived, so I actually enrolled in the first class of the Ph.D. program in computer science here.

What are you currently researching, and what’s your favorite project thus far?

Oh, picking one is hard; you can't have a favorite child [laughs]. Well, programming languages when I started were Algol-class languages like Ada and C, straightforward imperative languages with declarations and nested scope and things like that. My history has been noticing shortcomings of what we were doing and what we were saying, and then attacking some of those shortcomings. So, let me give a little history first.

In the 1960s, people organized programs as collections of functions, and the data was incidental. In the early 1970s, it was a new idea that you should focus on the data and the representation of the data, then let the functions follow from that. This was one of the precursors of object-oriented design and object-oriented languages. So think about an object-oriented language in which the object has no special status; it's just collections of functions. That's what it was like. Some of my early work was on programming languages that addressed this idea that the representation and the data should be first class, and that the data should be manipulated only by the select functions designed with it, not by any function that happens to be floating around the system.

One of the things that happens in the programming languages and programming systems part of computer science is that people tend to feel like they have to claim they have solved all the problems in the world with their language, rather than saying, "I've solved this specific problem with my language." So there was a lot of hype about how these new languages—they're called abstract data type languages—that focused on the representation and the related functions were going to solve all the problems of programming because this new organization was the solution. The problem was that we kept finding examples where these languages didn't offer any particular advantage. So then I started thinking about why that might be.

At the time it didn't show up in programming languages, but when you talked to programmers, they had some program organizations in their heads. You could sit down over a coffee or over other recreational beverages and talk about programmed systems, and they would say, "Well, my system has a database and six distributed functions." They all had a vocabulary for talking about it, and they knew that some organizations worked for some problems and not for others, but it wasn't part of the official doctrine. It wasn't built into the languages; it wasn't something that we taught. I started trying to reconcile the things that programmers really talked about with the things that we were saying in languages (that was probably the mid-1980s), and that's what led to my work in software architecture.

Software architectural styles really are about pulling ideas out of those over-coffee discussions into the front office, where you show the design and think about the system. It's about making system-level decisions. Do you organize your program as a Unix-style data flow? Do you organize it like a database, with a data-centered repository and little functions for the transactions standing around the outside? Do you organize it as a collection of objects? Is it a collection of processes that send messages to each other? Or is it a publish-subscribe system, in which all the components are more or less ignorant of all the others, but they all know that you can send signals, and they all assume that some other process out there will do something about a signal? Those are all different organizations, they work in different ways, and they're suitable for different problems. Until we started bringing that knowledge out in the open, sometime around 1990, it was all just folklore. There's been a natural progression through this research, addressing progressively larger elements.
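To make one of those organizations concrete, here is a minimal sketch of the publish-subscribe style in Python; the event bus, topic name, and components are hypothetical, invented for illustration rather than taken from any system discussed in the interview. The point is only that each component registers interest in a signal and never learns who else is attached.

    # Minimal publish-subscribe sketch (hypothetical names, illustration only).
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._subscribers = defaultdict(list)   # topic name -> list of handler callables

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, payload):
            # Deliver to every subscriber; the publisher is ignorant of them all.
            for handler in self._subscribers[topic]:
                handler(payload)

    if __name__ == "__main__":
        bus = EventBus()
        # Two components that know nothing about each other:
        bus.subscribe("order-placed", lambda order: print("billing sees", order))
        bus.subscribe("order-placed", lambda order: print("shipping sees", order))
        bus.publish("order-placed", {"id": 42})

The other styles listed above differ mainly in what the connecting glue is: a pipeline of data, a shared repository, calls on objects, or messages between processes.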

What I'm working on now, well, there are a couple of pieces. One is that for several years I've been involved in a consortium with an emphasis on end-user software engineering. The insight there is that although computer scientists tend to think that programs are produced by people who are trained as programmers, in fact there are tens of millions of people in the world who are not trained as software developers but who develop software, broadly construed, as part of their hobbies, as part of their jobs, or for some other reason. This includes people who build spreadsheets; nonprogrammers regularly create quite elaborate financial analysis spreadsheets. It includes people who do scripting, sophisticated workflows, Photoshop batch scripts, and things like that. It includes web development; more and more web development requires some fairly sophisticated understanding of timing, sequence, state, synchronization, and things like that. There are probably close to two million trained professional programmers in the United States, and there are probably close to thirty or forty million people who do programming-like things in the course of their own jobs, not to mention many more who do so for personal reasons. If we concentrate on the two million who are professionally trained, we're not doing much good for the thirty million, give or take, who aren't. They also need the help of tools and techniques and systematic ideas and ways to reason about whether what you've done is actually going to do what you think it will. They need all of that even more than we do. So there's a growing community of people who are interested in end-user programmers and end-user software engineers, with the motivation of trying to make their world better, mostly by providing better interfaces, better tools, better scripting languages. Sometimes technical people approach this by saying, "Well, why don't we go in and 'fix' the users? The problem is that they don't understand distributed state. Let's go teach them about distributed state. They don't understand why they should do backups. Let's go show them why they should do backups." All of these amount to saying, "Let's take those people who aren't like us and 'fix' them so they are like us," which, as you might suspect, I think is a non-starter; it's a fool's errand. Converting the rest of the world to look just like us is not the solution. We have an obligation to try to package computing in such a way that people can actually deal with it, rather than expecting people to become programmers. So that's the end-user software engineering activity.

I'm also interested in what happens when systems are not just isolated, standalone systems, but rather are embedded in a social context. So think about the Internet. A lot of people think about the Internet as stringing wires, that is, as a communication system. We have TCP/IP. We can ship packets from anywhere to anywhere. But that doesn't begin to capture the effect of the Internet and the web over the last twenty years, because the Internet as a whole, as an increasingly integrated part of the fabric of society, is really not just about stringing wires. It's about all the people who are doing things. eBay isn't just a protocol for buying and selling things; eBay wouldn't be eBay if it didn't have a critical mass of people with things to sell, so that buyers would come, so that more sellers would come. I've been to flea markets, and you've probably been to flea markets. A flea market is a little parking lot with a bunch of card tables and a small smattering of stuff, most of which is junk, and with no index. eBay originally was a huge flea market with an index. It was qualitatively different, though, because it transcended geography, and because you could do comparison shopping and bid rather than barter. eBay originally had the kinds of used goods that you find in a flea market. It has certainly changed since then; now it's an outlet for seconds and overruns and even first-line merchandise. The thing that was magic about it was that it provided visibility into what was available: it transcended geography, and it let you find what was actually out there. So eBay is not just a technical thing; it's a technical thing with a community, and the actions of the community are what make it unique, what make it eBay. So I'm increasingly interested in systems that are embedded in social structures of that kind.

I continue to be active in software architecture because, more and more, I think that the way the system is organized overall is at least as important as the details of the code in each of the components. Most recently, I've been getting involved in the national health care medical records problem. It would be nice if, when you showed up at an emergency room in the middle of nowhere in Kansas, they could get your complete medical records from Pittsburgh. Right now, though, the best they can do is attempt to find every place that has medical records for you and have the records faxed. There's a national effort to move medical record keeping into an online form in which it's not just photocopies of handwritten notes but rather structured and tagged, so that you can say, "Show me all the cholesterol readings for the last fifteen years," or "Show me the changes in this measure that happened when we changed the dose of this medication." You can select out parts of the record, search through it, draw inferences, or even merge records from different places. Well, there is, as you might imagine, a huge interoperability problem, because the records are kept in one form over here and a different form over there. Even if you could read the handwriting and the items were individually isolated and tagged, you'd still have to worry about whether all the tags meant the same thing and whether they carry the same amount of detail. There are around a hundred different kinds of arthritis, and one doctor may say "arthritis" because that's all that's relevant, but a doctor who's trying to address a particular problem that a patient with arthritis has will pick the particular kind of arthritis. Now what happens when you put those together? And suppose there's a third doctor who has identified a different kind of arthritis. How do you understand that the one who just said "arthritis" wasn't trying to make a finer distinction, as opposed to the others whose records tag particular kinds of arthritis? So it's a very, very messy situation, but the new health care legislation has provided incentives that are trying to drive medical practices to get things online and interchangeable, so that records can be sent from place to place. Now the question is, what's a good architecture for doing that, one that will actually scale up to the national level? I got involved in that last fall because some colleagues recruited me to it.
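As a hedged illustration of what "structured and tagged" buys you, here is a small Python sketch; the record layout, tag names, dates, and values are hypothetical and do not follow any real medical-records standard. The point is that a query like "all the cholesterol readings" becomes a simple filter, something a scanned photocopy can never support.

    # Hypothetical structured, tagged observations (not any real medical-records standard).
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Observation:
        tag: str      # e.g. "cholesterol-total"; tag vocabularies differ between systems
        value: float
        unit: str
        when: date

    record = [
        Observation("cholesterol-total", 212.0, "mg/dL", date(1998, 3, 14)),
        Observation("blood-pressure-systolic", 128.0, "mmHg", date(2004, 6, 2)),
        Observation("cholesterol-total", 189.0, "mg/dL", date(2010, 9, 21)),
    ]

    # "Show me all the cholesterol readings for the last fifteen years" becomes a filter,
    # provided every source tags the reading the same way -- the interoperability problem.
    cutoff = date(1996, 1, 1)
    readings = [o for o in record if o.tag == "cholesterol-total" and o.when >= cutoff]
    for obs in sorted(readings, key=lambda o: o.when):
        print(obs.when, obs.value, obs.unit)

Even with tags, merging records from different places still depends on both systems meaning the same thing by the same tag, which is the arthritis problem described above.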

You would think they would have some standard by now, wouldn't they?

Well, there are standards, several of them. There are different versions of the standards, and within the tagging standards there are still different ways of managing the databases. Getting the records out of the databases and getting them transmitted is also a problem. So yes, there are standards. What happened with insurance billing codes is that several years ago, although much later than you would expect, the morass of different billing codes finally settled out, and everybody just picked up Medicare's, because Medicare was the big player and had a usable set of billing codes. Somebody told me recently how it works: you go to a doctor's office and they have a sheet with four columns of numbers on it, for things like short office visit, long office visit, or initial office visit, and diagnosis or no diagnosis, and they tick off several of those options. Then they send in the billing based on the four-digit numbers next to whatever actually happened. Apparently they have all now settled on one set of these billing codes, which encode procedures, types of visits, diagnoses, and things like that. So there's progress along that line, but it's not just a matter of getting your medical records all consolidated. There's also the potential, assuming it can be figured out, of letting you set privacy permissions, so that you give permission for different parts of your information to be used in only specific ways. At the same time, though, people who worry about epidemics must still be able to collect specific types of information in order to get early warnings about epidemics starting up. There's also potential for a different kind of medical research. Right now the gold standard is the prospective, placebo-controlled study: a controlled experiment in which the experimenter recruits people and watches what happens. There's potential for a kind of research in which the experimenter gets access to large amounts of anonymized data. Suppose you could do that: you could look back through historical records at all the patients who presented with certain symptoms, see what was done for them, sort out the responses, and then do retrospective analyses to try to understand the relationship between the treatments and the responses, instead of running prospective studies. So electronic medical records offer an opportunity for a different kind of research, based on retrospective analysis of data mined out of medical histories. That would let you get results now, based on documented histories, rather than waiting a couple of years for a study to run. It doesn't tell you about new treatments, but it does let you look at the efficacy of old treatments. So there's a lot of potential for extracting information from the records, assuming that patient privacy can be maintained. It's a really rich area for work.

How does it feel to be at the top of your field and to be recognized with so many awards?

It's kind of cool. It's seriously cool. It's a kick when someone calls and says, "We have chosen you for this award." Does it change my life in a big way? It makes me feel good, and it recognizes the work my colleagues and I have done. It probably gets me on a better class of committees; it gets me invited to these medical discussions, where the problems and the people are interesting and where maybe I can actually help. The visibility is good for the university and for all of us in the university. What's not to like?

Software research has a lot of direct benefits for industry. How does academia communicate with industry in this field?

I spent five years at the Software Engineering Institute (SEI) right after the university set it up. The Software Engineering Institute was started because the government (the Department of Defense) thought that it was not getting the best academic ideas into practice fast enough. So it was set up largely as a mechanism for getting technology transitioned out of research into practice. What we used to say there was that the best technology transition vehicle is the moving van that takes the Ph.D. to his new job in industry. (Being mostly a military culture, they said "his" rather than "his or hers.") But the underlying idea holds: the really effective way of moving ideas into practice is either for someone with the idea to go work in a company that's putting it into a product, or at least to work hand in hand with that company.

There are a lot of specific instances of this. Intel has had a lab here for a while; Google was next door and is still close. The companies that set up next to the university do it so that the research people and the best development people in the company can talk to each other, breathe the same air, and chat over coffee. It's very much intended to take advantage of being right next to each other.

We try other things, too. We have industrial visitors from time to time. There was a period, when Digital Equipment was still a corporation, when Digital sent somebody every year, and Intel usually had somebody visiting. So industrial visitors are one way to do it. Many conferences try to attract industrial participation one way or another. These are two-way streets, by the way; it is not just about getting ideas from the university out into industry. Half the task of setting up a good research project is getting a good problem, and getting good problems often involves talking to people who have real problems. One of the direct benefits to me of spending five years at the Software Engineering Institute was that I came away with a different view of the kinds of problems that mattered in the world. The moving van model doesn't help bring new problems back in, but having enduring, ongoing collaborations and discussions does.

Where do you see software engineering and research heading in the next 5–10 years?

I think it has to respond to the two things that I mentioned earlier: (1) the number of individuals who aren't professionally trained to develop software, and how we can help them develop software that's good enough for their needs, and (2) thinking about systems that are embedded in society, rather than systems that are just standalone technical systems with nice specifications at the edges. I think software engineering has to deal with both of these. A number of specific technical areas are also thriving. One is self-adapting systems, which monitor their own execution or the environment in which they are executing and then change their operating parameters to respond to changes in the environment. There is a lot of work in that area.
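A minimal sketch of that self-adapting idea, in Python; the latency measurement, thresholds, and worker-pool parameter are hypothetical stand-ins rather than a description of any particular system. The program observes a property of its own execution and adjusts an operating parameter in response.

    # Hypothetical monitor-and-adapt loop: observe request latency, adjust a worker-pool size.
    import random

    def observed_latency_ms():
        # Stand-in for real instrumentation of the running system or its environment.
        return random.uniform(50, 400)

    def adapt(workers, latency_ms, target_ms=200.0):
        # Simple rule: scale up when the target is missed, scale down when it is beaten easily.
        if latency_ms > target_ms and workers < 32:
            return workers + 1
        if latency_ms < target_ms / 2 and workers > 1:
            return workers - 1
        return workers

    if __name__ == "__main__":
        workers = 4
        for _ in range(5):              # a real system would run this loop indefinitely
            latency = observed_latency_ms()
            workers = adapt(workers, latency)
            print(f"latency={latency:.0f} ms -> workers={workers}")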

There's work to do in software architecture in really studying and building good theories for the architectures that exist on the web. There are opportunities at two levels: one is the enterprise architectures that businesses use to interact with each other, sometimes called services, and the other is the particular kinds of things that happen on the consumer side of the web. Facebook is a total hodgepodge. It's got a little bit of this, a little bit of that, and conceptually the pieces do not make sense together. I think people have trouble using it for that reason; it would be a lot easier if there were a clearer organization, so that you could anticipate what a new feature would do and whether it fits in. Maybe I totally fail to understand Facebook, but that is the impression I get. Also, Facebook has different ways to do the same thing.

The iTunes connection to the iPad is the same way. There are at least four different concepts that amount to files, and they won't show you the file system. There's music, which is synchronized through your music list; there are the applications, which are synchronized through the App Store; and there are photographs, which are synchronized because you happen to identify a folder and it synchronizes exactly that folder and its subfolders. And there are files with data used by specific applications, which are really handled badly. That does not exhaust the possibilities. But I know there's a file system down there, and I keep having to remind myself that this is a consumer appliance and Apple has a good reason to hide it, but it drives me crazy. So I think some architectural cleanliness would actually help people understand how these systems work.

Historically, companies have had their own databases, and they have sent messages back and forth to place orders and check inventories and things like that. But increasingly, companies are making partnerships in which their data is more tightly linked. It has been 15 to 20 years since I first talked to people from the major shipping companies, and they were talking about how they were working with their big shippers, actually getting into the warehouses and helping them do inventory control and pack the trucks in such a way that, when they unload the trucks, the goods come out in a way that is easy to deal with at the next step. Amazon is now providing this as a commodity service. I can rent some cubic feet in an Amazon warehouse and sell things on amazon.com, and they will fulfill orders out of the stuff I've stored there in my cubic feet. They will sell you commodity cloud computing at any volume you want. Increasingly, the companies are getting together, so that the shipping company's specialists work with the production company's inventory specialists, and the inventory is handled in such a way that the packing is done in boxes that stack and sort neatly and come out of the truck in a way that minimizes the handling involved in resorting. That's just one example. So we need architectures that support that kind of tight integration, rather than the "here's my company, here's your company, we have a short wire across" kind of cooperation. Those things are not just coming; they're here now and growing.

You're involved with the Institute for Software Research, the Computer Science Department, and the Human-Computer Interaction Institute here at Carnegie Mellon. What are some of the similarities and differences between the departments?

I am mostly in the Institute for Software Research. My interests in HCI are related to end-user software engineering. I've also co-advised students with Brad Myers, for example; he's on the HCI faculty. I talk to people occasionally and co-advise students. But I have those interests, and those colleagues are there when I want to talk to them. My interests in software development and software organizations and programming languages create ties to the Computer Science Department. My recent students include software engineering Ph.D. students and computer science Ph.D. students, because many computer science Ph.D. students have research interests that actually lie in software engineering.

So the answer is that I am principally in the Institute for Software Research, but I have collaborations and colleagues in the other departments. Unlike many schools, the School of Computer Science has very, very low barriers to collaboration between departments. It is not hard to have engagements in multiple departments.  

All the departments have their own styles. The departmental styles are loose enough that individuals have their own styles. There tends to be a tone in each department, but it doesn’t really affect very much how I choose to do what I do.

How did you become involved in developing the curricula at SCS (in the early 80s)? What do you think were some of the most important changes?

We were still in the Mellon College of Science at the time. We were a department standing alongside the math, chemistry, and biology departments. It was kind of weird to go to college council meetings because we had different concerns. We were principally a graduate program; they weren't exclusively undergraduate programs, but their big concerns were about their undergraduate programs. For close to ten years, there had been a cycle. Roughly speaking, the dean would come to the computer science department and say, "Isn't it time for you to start an undergraduate CS program?" The computer science head would say, "We'll think about it." Then CS would think about it and decide that the field wasn't ripe enough yet, that there wasn't enough coherent material to be a separate body of undergraduate content. And we'd go back and say, "It's not ready yet." Then a few months later, the dean would say, "It's been a couple of years since I've asked you." And we'd go around again.

Let's distinguish a degree program from courses: we were offering courses in computer science in the mid-1960s when the department was formed. There had always been courses, an increasingly rich collection of them, but we still thought that someone who wanted to do computer science should get a good grounding in math and engineering. At that time, there was not a sufficient collection of distinctly computer science material to justify a separate program, so we were happy with just a computer science concentration.

Finally, at one point in the early 1980s, we said, "Maybe it's time to think hard about it." And I said, "Okay, I will take a collection of people off and think about what ought to be in the program." We said, suppose that it's ripe, that the field is mature enough for an undergraduate degree. If it's ripe, then we ought to be able to design an instance; we ought to be able to design an undergraduate program and defend it as having long-term value and real substance and intellectual quality and rigor and all the things you expect from a Carnegie Mellon-quality undergraduate program.

So I took a collection of people off, and we looked at other programs and textbooks, talked about topics, and did all the things that are involved in a design. What came out was a book, a design for a program that showed by example, "Yes, it is now feasible to have an undergraduate program." We didn't start one at the time, but the character of the departmental discussion changed from "the field is not mature enough yet" to "we have an existence proof" that the field is mature enough. Then the question became, "It's going to cost this much; can we afford it?" It became a money problem, and I bowed out. That curriculum design was never implemented in its entirety, but it served as a model, so people could say, "What if we did this? Or what if we did that? Or I disagree with that." We presented the design as a topic for discussion; there was no intention among the six or eight of us who produced it to believe that we were somehow holding the magic baton and could design something that would stand alone without refinement from our colleagues. So we presented it as, "Here's a design for a curriculum. Let's talk about it." By the time we actually started the undergraduate program, we had a collection of courses that had grown up in various ways, not necessarily or solely from that model. So you can trace influences, but it was not intended as a final plan.

One of the themes of the design was to organize courses around ideas rather than around artifacts, and I'm not sure that's entirely happened yet. Of course, there's a compiler course and an operating systems course, and so on, but the world does not need thirty students per university per semester who think they are compiler builders. That's not what the compiler course is about. It's really about understanding a programming language and what goes on behind the scenes to make it work. It's also about more formalism and more advanced data structures than were presented in previous courses, and about what happens to those formalisms and data structures when they meet the needs of a real application. It's about programs that are bigger than a single module. And it's about compilers because compilers were the best-understood medium-sized systems available when the compilers course was invented. So it's not that compilers are magic; it's just that compilers have a long history. If you wanted to analyze the structure of a system with more than one component in it, that was, at the time, the natural place to reach. One of the things we tried to do in the curriculum design was shift the emphasis from particular artifacts to ideas. I think we had some success in that.

We also reinforced the university tradition centered on the Carnegie Plan, though it has been diluted again, I think. In the 1930s or 1940s, engineering curricula were plug-and-chug—here are the equations and here's how to put numbers into them. The pendulum swung to an emphasis on fundamental principles, but it kind of overreached, so the engineers became more like scientists and less concerned with the actual practicality of what they were doing. CMU tried to push the pendulum back to the center with a plan that was originally meant for engineering education and later restated for education in general. This was the Carnegie Plan for Engineering Education (or the Carnegie Plan for Undergraduate Education, or the Carnegie Plan for Education). It used to be in every CMU catalog. When we were designing this curriculum, I found probably half a dozen statements of the Plan. They're all pretty similar; it runs about a page. It basically says that at Carnegie Mellon we teach students ideas that will last a long time, and we teach them to apply those ideas in practice. In an engineering program, that means we teach you to think like an engineer, we teach you some current technology, we teach you an engineering attitude, we teach you to learn new things as engineering evolves, and we teach you to be a functioning citizen.

The enduring theme is that we picked up the Carnegie Plan and tried to build this curriculum to be consistent with it. The Carnegie Plan shows up again in a much more recent statement about software engineering education. It didn’t turn into a book this time, but there’s an essay instead on software engineering of the 21st century. [View publication - Carnegie Plan as Appendix A.] The Carnegie Plan infuses that also. I think it’s a bit of a shame that it has slipped from view.

The other thing I should mention is the continuing interest in design as a way of choosing what product to build to satisfy the client. This is not the same thing as drawing UML diagrams. Some people say UML diagrams are design, but design is really more about “how do you decide what types of UML diagrams to draw?”

What do you like best about Carnegie Mellon?

The open, supportive environment. Great colleagues, students and faculty both, who are more interested in doing great science and great engineering than in squabbling with each other over how to do it. There are a few exceptions, but out of all the places I've visited and all the colleagues I've talked to, this is the environment that is the most open and collegial.

What was the female community like when you were getting your Ph.D?

What female community?

Let's see, there was me, one other full-time student, and one part-time student. No faculty. That was the female community. I used to have a list in which I kept track of what happened to the women we admitted—it got out of hand; I can't keep track anymore. But from 1965 to about 1980, the number of women admitted to the CSD program each year was 1, 0, 1, 1, 0, 2, 0, 1, 0, 2, 1, 0, 2, 1, 0—like that. I was the first woman on the CS faculty, in 1972 or 1973. Anita Jones joined me shortly after that. Somebody did a study around that time about the representation of women on science and engineering faculties, and he reported to our department head that he needed to fire half of one of us to get back to the national average [laughs].

So yeah, what community of women?

I can’t remember when exactly, but there came a time when there were enough women to get together and commiserate. We started having potluck suppers. But I think at the time, the women selected into computer science were really committed because they wanted to do computer science and were pretty thick-skinned. And it was only after there got to be more women that people started to worry about these issues.

When I got married in 1973, half the banks in town wouldn’t consider my income when deciding whether my husband and I could afford our house. I was only able to find one place that would give a married woman a credit card in her own name—so before I got married, I ran around and collected all the credit cards I thought I was ever going to want and just didn’t bother to tell them I got married. The PA Department of Motor Vehicles was a problem, too—if I filed a change of address and they noticed my marital status changed from single to married, they would change my name to my husband’s name whether I liked it or not. So, I have been in an environment where I have been clearly at a disadvantage because I was a woman. It was a relief to come back to campus, where whatever was happening—and maybe it wasn’t perfect—was still so much better than what was going on in the world at large that it was like a haven. So for me, it was like that. Luckily, a lot of things like not considering my income when purchasing a house are illegal now.

We noticed you’re really involved with your hobbies, like cycling, canoeing, and photography. You’ve even published several books on these activities. Could you tell us more about how this came about?

Yeah, like I have a life [laughs]. Well, I've carried a camera around since high school, I guess. I take pictures mostly outdoors, mostly of natural things, and largely to document things like the trail systems. I've got huge volumes of pictures from trips that I don't refer to very often, and I have volumes of pictures of, for example, the local bicycle trail systems, which are resources for trail developers, not just for me. There's somebody developing a new website for one of the trails; I just sent them two pictures because I said, "Go look here, see what you want, and I'll send you what you can use for your website." So, it's a resource like that. I do send pictures off to photo contests, and sometimes they win. They don't win because I'm shooting with expensive equipment—I don't carry expensive equipment on a bicycle because you beat it up, so I carry a subcompact camera on the grounds that the camera in your pocket is worth infinitely more than the one that's safe at home. I think composition and a little Photoshop tuning go a long way. When I was a kid, I was taught to look at what's at the edges of the picture, so you don't have stray trees, elbows, or automobiles sticking into the sides of pictures—sort of just thinking about the picture as a whole. There's a lot of advice that follows from that; I also casually read some photography magazines. So, I take pictures.

My husband probably got me into bicycling sometime before we married. Well, bicycling, and canoeing, and cross-country skiing, and hiking. Western Pennsylvania’s really very nice. It’s pretty easy to get out of town. An hour or an hour and a half’s drive will get you into some really fine woods. The canoeing in the creeks that run west off of Laurel Ridge is just great. There’s cross-country skiing for a fair share of the season up on Laurel Ridge about an hour and a half from town. Occasionally we ski in the city parks too. We’ve skied Boyce, Schenley and Frick this year (the parks that I can think of offhand). So when it snows during the day, we can just go off and bash around the golf course at night. Usually, there’s a cloud cover and there’s enough city light bouncing back and forth that you can see where you are going. Am I worried about getting mugged in Schenley Park in a foot of snow? Well, not exactly. I don’t think any mugger is going to spring out from the bush and steal my ski poles [laughs].

Before I was married, I was sharing an apartment with a woman who was, at the time, editing a canoe guide. I edited it with her, then she went off to other things, and my husband and I edited a couple of editions. We inherited the text, and every so often it has to be cleaned up and put out again. Usually, we bring together the canoeing community and say, "Okay guys, let's start rewriting this stuff, clean it up, update the parts that have changed, update whatever needs to be updated." Sometime around the mid-1990s, my husband told me I needed a new hobby—false [wryly]—and said we ought to write a guidebook on the emerging system of trails being built on railroad grades, which we did. That was pretty successful. We're trying to get it out again, but there are so many trails, and they're changing so fast, that it's been hard. We decided that rather than doing all of Western Pennsylvania at once, we'd do it in segments. We're starting with the segment that goes up the Allegheny River, which is the one most in need of a guidebook, because there are a lot of little individual trails that people don't know about; they don't realize how the trails connect and how much more connected they could be. By publishing a book that shows this, we're hoping to get more support for making connections, so that someday we can bike to Erie without having to ride in traffic. You can bike to Washington now, traffic-free: if you can ride 30 miles a day on successive days, you can do it, and at that pace you should plan on about a week and a half. Right now, you can take bus 61C to the Park and Ride lot in Duquesne and bike from there to Washington with almost no time on streets. By June the trail will connect to the Waterfront shopping area in Homestead, and by this time next year, I hope you can ride all the way to right here on campus, because those last pieces are under construction.

It turns out that knowing you’re going to take photographs and knowing you’re going to write about it makes you pay more attention to things, and so you experience them in a richer way. You can’t just slide by. You gotta say, “Oh, there’s a railroad mile post, and there’s a foundation. I wonder what was sitting on that foundation. Wow, look at that rock! Is that geologically interesting?”

You’re very busy, but you get a lot done. Do you have any time management tips or any other advice for current students?

Manage your time better than I do? I'm not that good at it. The advice that I have found most helpful is to tag all the things you need to do with two attributes: how hard it is and how important it is. Or, how urgent it is and how important it is. Then make sure that you're working on the ones that are important, not the ones that are urgent but not important.

More useful advice is, if you want to do great science, find good questions, work on them, and be faithful to the science. Don’t get distracted by politics or deadlines for things that don’t matter. Figure out what it is that you care about, and go in—dive in, and get completely involved in that. Don’t let people distract you from it.

There's another piece of advice that Herb Simon used to give to people: in addition, understand the thing that you are particularly good at. You might be particularly good at a tricky kind of chemical synthesis, you might be particularly good at a tricky kind of design, or you might have a special insight into the statistical basis of a machine learning problem. Understand what it is that you have a little bit of an edge with, your special talent, and look for problems where you can exploit that.
