Monday, September 01, 2008


At long last, I am pursuing this long-held idea of starting my own software company focused on software solutions for environmental problems. I'll start out with consulting, add staff and customers, and move to independent product development as opportunities arise. And for this venture, I've secured the domain name "PlanetWare.Net".

The mission of PlanetWare.Net will be to bring the power of software technology to the problems of the environment. We build software for a healthy planet. (Of course, at the moment, “we” refers to the “royal we.” ;) We'll combine software development expertise and experience in natural resources conservation and other environmental issues with a proven ability to deliver world-class solutions for diverse customers.

Our services include consulting in technology strategy, system architecture, product definition, project management, software development and user experience. Over time, the company will independently offer software products, perhaps in the areas of water resources management, ecosystem services valuation (that is, systematic economic valuation of clean air, clean and plentiful water and other stuff intact ecosystems give us “for free”) and environmental systems data integration. Or somethin’ like that.

I've got my first contract and am building the foundation of a small business. What? Am I Scared? Nah! I'm terrified. But thanks to great friends, family, Pema Chödrön and The Buddha, it's all good.

By the way, the image to the left is an early prototype for a PlanetWare.Net logo. Let me know what you think!

Labels: , ,

Thursday, May 15, 2008

The Case for Conservation Information Systems

As promised, I'm beginning a series of posts based on components of my master's research. I'll begin by answering this basic question: why should anyone care? That is, why would we in conservation who have so few dollars to address the enormous need we face spend any money or even time on information systems?

Here's my answer: The game is changing. Conservation of natural systems has moved from a personal value to a requirement for living. Our historical focus on species loss and sprawl must broaden dramatically to include issues as complex and integrated as climate change and water. The Nature Conservancy's marketing department is already on the job: "Last great places" has been replaced by "Protecting nature. Preserving life." On the information side of the house, the conservation community must recognize and respond to the need for a new level of scientifically rigorous, defensible information to guide, improve and account for our actions.

Environmental conservation has always had to answer the question, what and how do we conserve? After all, limited resources and compelling alternatives are nothing new. But the more sophisticated answers mean higher demands on information.

Conservation 1.0. In its early form, human conservation of natural systems focused on the protection of beautiful, "wild" places and the species that compel us. The criteria for conservation were thus subjective, even intuitive. Who needs an information system for that? We see it, we love it, we protect it. Simple. Protecting a place is straightforward. Protecting a species usually means protecting habitat. So we'll need good ecology and habitat maps. Not so simple, but usually doable given that clear focus.

Conservation 2.0. We've known for a while now that the sixth great extinction is underway, anthropogenic in origin. Translation: massive critter die-off, our fault. The Endangered Species Act moved us beyond the "charismatic" criteria and instead asked questions about species' rarity. Now we're studying a lot more ecology and mapping a lot more habitat in the context of government accountability for the recovery of nature's rare elements. Then E.O. Wilson popularized the term “biodiversity,” we recognized the need to preserve the full bounty of nature's creatures and habitats, and conservation took up the charge. With more sophisticated analyses of critical species habitat, richness, rarity or irreplaceability, we identified "hot spots" that gave us big biodiversity bang for our buck. Uh oh. We might need some serious information systems to help us understand a whole slew of species habitats and their conditions. Who has this data, and how do we get it, integrate it (so easy to say, so hard to do), and analyze it?

Conservation 3.0. Present day. By necessity, the conservation agenda must respond to at least five new realities, all with significant ramifications for our information systems requirements:

  1. People really do need nature
    Awareness of the human dependency on functioning natural systems is on the rise and with it the need to explicitly value the services provided by intact, functioning natural systems. This view recognizes conservation’s role in informing tradeoffs in the ongoing human domestication of nature. Valuation of ecosystem services depends on highly quantitative, spatially-explicit, multi-scaled analyses based on both biophysical and socioeconomic datasets.

  2. Global Climate Change
    Suddenly, effective conservation depends on forecasting nature's response to a changing climate. Tall freckin' order! We must develop models of biodiversity response to changing conditions at a scale that can inform natural resource management and landscape planning. These analyses must themselves accommodate improvements in prediction algorithms and in the granularity and accuracy of input datasets.

  3. No more “go it alone”
    Both assessments and action increasingly require of conservation organizations deep collaborations with each other, with partners in government and with the private sector. To inform decision making, assessments across the spectrum of conservation subjects, from the condition of individual species to integrated regional land-use planning, increasingly require contributions from multiple organizations and disciplines. Similarly, implementation of conservation projects more and more often involves the active participation of cooperating organizations. Effective collaboration depends on information sharing and integration.

  4. Scale Matters
    Conservation biology increasingly recognizes that the geographic scale at which analyses are performed changes the questions asked and answered. As a result, multi-scale assessments are required to effectively inform decision making within a given region.

  5. Account and Adapt
    Finally, the business of conservation is under increased pressure from donors and the public to account for its spending and to objectively measure the outcomes of its strategies. Adaptive management specifically requires that we not “wait for science”; rather, we must measure and respond to the outcomes of our actions themselves.

All of the changes above translate to growing, not shrinking, demands on effective information systems. Conservation must adapt to the changes underway in its core business. The era of intuitive valuation of conservation priorities has ended. Donors and societies must know that their investments in conservation are based on rigorous and informed analysis. We may not be particularly good at this stuff today, but we had better get good at it, and quick. We risk fidelity to our mission as well as relevance to society if we do not invest in the information systems capacity required to protect nature and preserve life.

Labels: , ,


At 2:02 PM, Blogger Seamus Abshere said...


You say:

Both assessments and action increasingly require of conservation organizations deep collaborations with each other, partners in government and the private sector.

I'd like to point out that social networking technology makes meaningful action by individuals possible by aggregating it. I work for a company that was founded on the idea of allowing individuals to both assess and take action on climate change (admittedly only one part of the conservation equation); see a blog post on our early attempts here:

Looking forward to reading more. Best,

PS. You've got a blog, why not sign up for our 350 Challenge?


Friday, April 25, 2008

Blue Devil Milestone

It's been a long, great ride. With my adviser's approval, I have submitted my master's project to Duke. Grateful to all of those who have supported my efforts not just during this project but throughout this degree, I am finished and awash in relief. Only the ritual and celebration remain (okay, some of the celebrations have already started).

When I graduated from the University of Texas at Austin with my computer science degree (way back when), I couldn't have given a hoot about graduation ceremonies. While challenging, the completion of that degree was never really in question. On May 11th, 2008, however, I'll be donning that blue cap and gown with pride and receiving this master's diploma grinning wide. For this degree was in question many times: it was through grit and grace that this moment has arrived.

For your loyalty, I'm going to punish all of you readers (both of you?) by selecting portions of my project and posting them here. I mightily encourage your comments: ideas, suggestions, criticisms, questions or just musings. So get ready, here comes An Information Systems Strategy for the Conservation Community.

Labels: , ,

Tuesday, December 04, 2007

Joel on the Tech-Culture War

Here's another great post I ran across on the blog "Joel on Software." I love reading this guy, Joel Spolsky, because, while I don't always agree with his points, he is consistently insightful, practical and sometimes eloquent about the strange new business of writing software. I found this particular post somewhat relevant to some (surprisingly sane) discussions we've been having at the Nature Conservancy on technology stacks/application patterns and deeply relevant to the hundred years' technology war (or so it seems) that all of us in the software industry have been living through.

If you enjoy this, check out his recent posts recounting a talk he gave at Yale, his alma mater.

Labels: ,

Tuesday, November 20, 2007

Harvesting Data for Conservation
Part 2: The Solution

In 2002, The Heinz Center released its report on ecosystem conditions in the United States, “The State of the Nation’s Ecosystems.” The authors declared the assessment incomplete due to the lack of data collection, reporting and systems infrastructure needed to sufficiently assess ecosystem condition. Yet the reality is that between the efforts of academic researchers, local and national governments and conservation organizations, an enormous amount of information deeply relevant to conservation is being collected, even in digital form. However, variation in syntax (e.g., file format) and semantics (e.g., terminology) prevents practical aggregation and analysis of the collected data.

In my last post, I described this fundamental problem in conservation information systems, that of data model variability. I described how variation in the schemas of common conservation information systems entities, such as observations, protected areas, conservation projects and conservation activities, frustrates our ability to provide rich data entry/management applications as well as aggregation and analysis capabilities, capabilities that would significantly inform assessments like "The State of the Nation's Ecosystems."

The solution is to develop a system that supports rich data entry/management/reporting and yet is independent of a specific data model. The system would treat the definition of entities such as observations or protected areas as data itself. Each of these entities has a core schema, the set of attributes that makes it what it is. A species observation, for instance, consists of an observer (person), a location, a date and time, and a species identifier. This core can then be augmented with observation attributes from a library (e.g., egg count, nest height, stratum, life stage). When needed, more advanced users can build their own attributes (solarization, acidity) and submit them to the common attribute library. Finally, the entity core schema and a selected set of extended attributes can be combined into an extended entity schema. Extended entity schemas are useful for repeated use, potentially reflecting and enforcing a standard or protocol. The library, shared across the natural resources management community, can thus contain core and extended entity schemas and their component entity attributes.
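The core-plus-library idea above can be sketched in a few lines of code. This is purely illustrative: the names (CORE_OBSERVATION, ATTRIBUTE_LIBRARY, extend_schema) and types are my own, not taken from any real system.

```python
# Core schema: the attributes that make a species observation what it is.
CORE_OBSERVATION = {
    "observer": "text",
    "species_id": "text",
    "observed_at": "datetime",
    "location": "coordinates",
}

# Shared attribute library: reusable extended attributes contributed
# by and for the community.
ATTRIBUTE_LIBRARY = {
    "egg_count": "integer",
    "nest_height_m": "float",
    "stratum": "text",
    "life_stage": "text",
}

def extend_schema(core, library, selected):
    """Combine a core entity schema with selected library attributes
    into an extended entity schema (e.g., for a survey protocol)."""
    extended = dict(core)
    for name in selected:
        extended[name] = library[name]
    return extended

# An extended schema capturing a nest-survey protocol:
nest_survey_schema = extend_schema(
    CORE_OBSERVATION, ATTRIBUTE_LIBRARY, ["egg_count", "nest_height_m"]
)
```

The point of the sketch is that the schema itself is just data, so it can be stored, shared and searched in the community library like any other record.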


For instance, a standard field protocol for surveying invasive species can be captured in an invasive-species observation schema and applied in numerous invasive species surveys. The schema can be extended to support surveys of specific invasives. (For instance, researchers determined that the length of the hind legs of the invasive cane toad was correlated with the geographic front edge of the invasion. "Hind Leg Length" would then be a key attribute of a survey of cane toads.)

The data management application parses the entity schema and component attributes as the definition of data types and behaviors. It then provides rich data entry, management, mapping, reporting, and spatial and statistical analysis on the entered data. We can afford to invest heavily in the development of this system because the functionality is not specific to a given conservation data model. Attribute definitions, including labels, help text and error messages, are localizable into other languages, a feature critical for global conservation. Thus ends the tyranny of the software engineer. No longer are users beholden to software developers to create custom applications with rich functionality to support their data models, data models that can evolve with the needs of conservation and basic scientific understanding.
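To make "parses the schema as the definition of data types and behaviors" concrete, here is a hedged sketch of schema-driven validation: the same generic code serves any schema the community defines, with no hard-coded data model. All names and type labels are hypothetical.

```python
# Map each schema type label to a check; the application derives entry
# validation from the schema instead of hard-coding a data model.
TYPE_CHECKS = {
    "text": lambda v: isinstance(v, str),
    "integer": lambda v: isinstance(v, int) and not isinstance(v, bool),
    "float": lambda v: isinstance(v, (int, float)),
    "coordinates": lambda v: isinstance(v, tuple) and len(v) == 2,
}

def validate(record, schema):
    """Return a list of validation errors for a record against a schema."""
    errors = []
    for field, ftype in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not TYPE_CHECKS[ftype](record[field]):
            errors.append(f"bad value for {field}: expected {ftype}")
    return errors

# The same code validates any community-defined schema:
nest_schema = {"observer": "text", "location": "coordinates",
               "egg_count": "integer"}
ok_errors = validate({"observer": "K. Lee", "location": (36.0, -78.9),
                      "egg_count": 4}, nest_schema)
bad_errors = validate({"observer": "K. Lee", "location": (36.0, -78.9),
                       "egg_count": "four"}, nest_schema)
```

A real system would drive forms, labels and help text from the same definitions, but the principle is the same: invest once in the engine, not once per data model.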

The other win for conservation is the ability to aggregate and analyze the resulting datasets arising from conservation organizations, the academic research community, even state agencies. When users populate datasets based on shared entity schemas and extended attribute definitions, these datasets are inherently standardized and available for rich analysis. For instance, species observation data can be harvested and mapped across all species surveys (using secure web services) for their common core (observer, observed species, date/time, location). Even this basic map would constitute a major breakthrough for conservation. Analysis of invasive species observations, based on an invasive species observation schema, would similarly bring insights to patterns in invasions. Population reductions or migrations over time associated with climate change can be mapped and analyzed based on surveys where climate change, per se, was not the primary focus. Again, while this approach would enable conservation to make use of an enormous wealth of basic observation data, these concepts apply equally well to information about lands managed for conservation (protected areas), stewardship activities, conservation projects themselves, etc.
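The harvesting step can be illustrated with a small sketch: two surveys carry different extended attributes, yet because they share the observation core, their records project cleanly onto that core for cross-survey mapping. Field names are hypothetical, and a real harvest would run over secure web services as described above.

```python
# The shared observation core every survey record carries.
CORE_FIELDS = ("observer", "species_id", "observed_at", "location")

def harvest_core(datasets):
    """Project records from many different surveys onto the common
    observation core, ready for a cross-survey map or analysis."""
    for records in datasets:
        for rec in records:
            yield {f: rec[f] for f in CORE_FIELDS}

# Two surveys with different extended attributes...
nest_survey = [{"observer": "A", "species_id": "Setophaga cerulea",
                "observed_at": "2007-05-12", "location": (36.0, -78.9),
                "egg_count": 4}]
toad_survey = [{"observer": "B", "species_id": "Rhinella marina",
                "observed_at": "2007-06-01", "location": (-16.9, 145.7),
                "hind_leg_length_mm": 81.5}]

# ...still aggregate cleanly on the shared core:
core_records = list(harvest_core([nest_survey, toad_survey]))
```

The extended attributes remain available for deeper, survey-specific analysis; the core projection is simply the lowest common denominator that makes the basic all-species map possible.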

We developed such a system with a small team at NatureServe to support observations. The data entry/management/reporting tool is rich in functionality and yet supports any data model we could throw at it. Parks Canada is the first customer and is already excited about the ability to support users conducting specialized surveys within parks as well as those carrying out high-level analysis of observation data across parks.

There is nothing specific to conservation in this technology. Indeed, I see examples of related systems existing and emerging on the web. Freebase is the closest I've seen yet to supporting what we need. But I'm not sure Metaweb is going where we need to go.

For instance, the support for combinations of attribute definitions into entity schema, corresponding to basic entities in conservation like observations and protected areas, is critical to support user-driven standards and protocols. We must have the ability to search and browse a community repository for core entity schema and their associated attributes. This open-source style resource would allow schema authors to post their submissions for use by the conservation community, solicit feedback, post modifications, and report on usage. In this way, subject-matter experts in various areas of conservation and biodiversity can directly share their expertise with the community in the form of widely-used entity and attribute definitions.

By separating the data model from the data management application functionality, we can provide conservation practitioners on the ground with powerful and usable tools to capture and manage their information. This same approach, to the extent we succeed in building a rich, common library of data model components, will enable unprecedented aggregation and analysis of similar, though not identical, data sets. The efficacy and efficiency of conservation at the project level can thus be improved as well as our overall understanding of the status and dynamics of nature.

Labels: ,


At 2:52 PM, Blogger Jamie said...

Kristin -

I've read your past few posts on data technologies for ecology and conservation and find your analysis very insightful.

Providing flexible, collaborative data structures while maintaining some semblance of shared "semantics" is a real challenge.

In an area like conservation, the importance of sharing distributed observations is even more critical than in other domains. As an economist, I look at this as a design question: how do you develop a collaborative repository which makes the utility of the shared repository, for any individual researcher, greater than that of a private "spreadsheet" collection method?

I work at Metaweb and think these are the important questions which will determine whether shared information systems will be able to improve our world in meaningful ways.

I have been working with a small, but growing community of biologists who have been developing schemas on Freebase for taxonomy, ecological models and bioinformatics. I would welcome the opportunity to work with you on the problems you outline here and talk with you about the direction Metaweb is taking.


(my email is my first name at

At 11:42 AM, Blogger Kristin said...

Hi Jamie,

Thanks for your comment.

Utility to the data producers is indeed the critical component. To achieve our goals, we simply have to beat Excel, and that includes usability, performance and powerful functionality for this problem space. The very good news is that, when focused on the specific problem domain of managing conservation datasets, we definitely can beat Excel.

If we beat Excel and systems like Excel, the conservation data producer not only wins with a system more suitable to his/her problem, the conservation data consumer (aggregator/analyzer) also wins. Standards conformance, like armies in a Trojan horse, is embedded in the data management system. The data producer is producing conforming datasets without paying any additional costs. His/her data is available for downstream usage not because he's succumbed to altruistic arguments about data sharing and then taken extra time and effort to cross-walk his/her data to standards, but because sharing amounts to checking a box, literally, checking a box.

It's possible and exciting. I very much believe that, as you put it, shared information systems can improve our world in meaningful ways.

I welcome the opportunity to collaborate. I'll be emailing you shortly.


At 6:07 PM, Anonymous Paul A said...

Kristin, we've already discussed these ideas in some detail and I'm totally on board. I just wanted to make one comment in response to yours: there is still some cost to the data producer, in that there must still be compliance with the attribute library for the full goals to be achieved. So maybe consider it a watered-down conformance to standards. In the best case scenario, everything you need is already in place because somebody else did the work. In the worst case you may have to research attributes and semantics already in the library and establish your extended schema. No argument about benefits to both producers and consumers, and still a dramatically lower cost than developing a completely new system to handle the different context.

At 7:14 PM, Blogger Kristin said...

Excellent point, Paul. You are quite right that if a required attribute is truly missing from the library (or, worst of all, hard to find), the data producer either abandons the system (reverting to Excel) or pays the non-trivial cost of describing a new attribute and template.

The hope is that costs to each individual data producer converge to zero over time only because of the contributions of data producers before him or her.

In practical terms, we know what's required here to make this work for conservation: organizations like The Nature Conservancy, NatureServe, Cornell Lab of Ornithology and others can lend their expertise and capacity to the "seeding" effort. We take our existing conservation data standards and describe them in the library, thus at least reducing the costs for data producers who follow by giving them a good head start.

Thanks again for your comment!


At 7:27 PM, Blogger Kristin said...

Three more thoughts on the costs to data producers.

First, there will always be a cost when the data producer is collecting truly novel information. For instance, if "soil acidity" was only recently measurable, the associated attribute would have to be described, potentially by the first researcher to measure it in the field.

Second, we can mitigate the costs by providing powerful and user friendly functionality for defining, even localizing, new attributes and submitting them back to the community repository.

Third, besides the "seeding" idea mentioned in my previous comment, the open source approach to the shared repository might help us as well by enlisting the power of ego. Is it hard to imagine egomaniac biologists investing themselves in the task of creating attributes and templates based on their expertise? We'll create attribute and entity template usage reports (sharing the usage counts without sharing the data) that have the effect of esteeming their authors.

- Kristin


Thursday, November 15, 2007

Harvesting Data for Conservation
Part 1: The Problem

Biodiversity conservation, as an "industry," has not only a common interest but a critical need to improve a) the productivity and assessment of conservation activities and b) downstream aggregation and analysis across conservation activities. This need has been formally expressed, yet its practical realization has thus far eluded us.

As I see it, the fundamental barrier to the development of rich applications to manage conservation projects, as well as aggregation and analysis tools, is this: data model variability. For instance, whereas a field observation consists of a fundamental core of information (observer, observed species, date/time and location), meaningful observations almost always describe more than just this core in order to support the purpose of the observation activity. For example, if we're trying to understand reproduction rates among migrating bird species, an observation record will document not only the date/time, location, observer and bird species, but also the fact that this is a nest observation, how many total eggs are in the nest, and how many of those appear to be intact. An observation management system that only allows users to capture the core attributes would be useless for almost all specific observation activities.

The same is true for tracking and managing information on protected areas, stewardship activities (e.g. prescribed burns, reforestation) and other datasets critical to conservation. While there is a common core of attributes to describe these entities in conservation, users must be able to extend beyond this core in practical application.
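The variability can be made concrete with a pair of illustrative records. All field names here are my own invention: the two surveys share the observation core but extend it in entirely different directions for entirely different purposes.

```python
# A nest survey record: core fields plus reproduction-specific extensions.
nest_record = {
    # shared observation core
    "observer": "K. Smith", "species_id": "Setophaga cerulea",
    "observed_at": "2007-05-12T06:30", "location": (36.0, -78.9),
    # survey-specific extensions
    "is_nest": True, "egg_count": 4, "eggs_intact": 3,
}

# An invasive-species survey record: same core, different extensions.
invasives_record = {
    "observer": "J. Doe", "species_id": "Rhinella marina",
    "observed_at": "2007-06-01T14:00", "location": (-16.9, 145.7),
    "hind_leg_length_mm": 81.5,
}

# Only the core keys are common across surveys. A core-only system drops
# what each survey is actually about; fully custom systems capture the
# extensions but cannot be aggregated at all.
shared_fields = set(nest_record) & set(invasives_record)
```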

Because of this data model variability, users end up pursuing one of two approaches to capture and manage their conservation datasets. The first and most prevalent option is to employ technologies that are completely generic (e.g. spreadsheets or simple databases). These systems meet immediate needs fairly well. However, their resulting datasets are completely nonstandard and therefore unavailable for aggregation with similar datasets. In this case, the needs of the data producers may be met, but data consumers are frustrated (see Mismatched Incentives).

Where aggregation of large datasets and/or specialized functionality is required, users pursue the other option: procuring the development of custom systems. Custom systems of course are developed at considerable cost and, because they are hard-bound to a static data model, these systems are suitable only for a single application or, at best, a similar class of applications. How unfortunate that our investments in conservation data management systems must be repeated for each new dataset. We can ill-afford to enrich the functionality (e.g. usability, mapping, reporting, feeds, import/export, wizards) or performance of any given system because this investment is specific to users of only a specific dataset. It is as if each dissertation, because of its unique content, required the development of a new word processor.

Neither the completely generic nor the custom approach supports our need for leveraged investment into rich data entry/management applications at the conservation activity level nor aggregation and analysis tools operating on standardized datasets.

Labels: ,


At 2:03 PM, Blogger frank said...

Hey Kristin. I couldn't agree more. We need minimum data models for the core conservation entities, including conservation projects, protected areas, and species and ecosystem occurrences. How do we get there?


Thursday, November 01, 2007

Changing Jerseys

As many of you know, I was enjoying my work at NatureServe. At the same time, starting last June, I began to see through various connections signs of positive change in The Nature Conservancy's approach to information systems development, including improved governance, deeper integration with science, accountability and software development practices. When my friend Dennis Fuze, who is in charge of all systems development there, told me he was hiring for a Director of Conservation Systems Development, it seemed like just the right role. So I jumped into the interview process (intense!) and was offered the job, reporting to same-friend Dennis.

My technology background suited the systems development folks, and the Nicholas School credentials turned out to be instrumental in swaying the conservation scientists and practitioners… those who will be the customers of the systems development I oversee.

So I got the job and now I’m terrified. Okay, not completely, but I am “excited.” The Conservancy is a BIG organization with all of the accompanying challenges plus a few more. It’s also one that I respect immensely, has wonderful people and unique opportunities. I am thrilled to have the chance to contribute.

Tree frog, Canaima National Park, Venezuela© Ana Garcia/The Nature Conservancy

I have a great team (spread all over the country!) and a challenging portfolio of existing and new projects. I'll continue to work with folks from NatureServe, now as a tough customer. I'm still in the Conservation Information Technology league, I've just switched jerseys.

Labels: , , , ,

Wednesday, September 12, 2007

Mismatched Incentives

In Conservation IT, there's a lot of discussion about the "need for standards." But something about this rings authoritarian and ultimately limited.

Most aggregators and analysts of observation information agree about the imperative for data standards. They are united by their common interest in synthesis, analysis and decision support to address critical questions in natural resources management and conservation. This group recognizes both the unmet need and the missed opportunity in each observation dataset that remains in simple spreadsheet form, digitized but nonetheless disconnected. They are thus motivated to convene and deliberate, producing standards that reflect their interests in the data. They then cajole and/or coerce another group, the observation data collectors and on-the-ground researchers, to go out of their way to conform, convert, reformat, translate, crosswalk and describe their data and then upload it to shared data servers. Yet the benefits to this second group, the producers, are abstract, realized in another place and time and by someone else. Not surprisingly, observation data that is un-described, disconnected and highly variable in format and semantics continues to accumulate, the unfortunate outcome of mismatched incentives. The result is a wealth of information whose potential to inform and direct the understanding, effective management and conservation of natural systems is never realized.

Instead we need to meet the needs of information producers with great data entry, management and analysis tools that embed standards conformance, like the pollen of neighboring flowers, in the data they produce. That is, only by meeting the needs of the data producers will we achieve standardization at the scale required by data consumers.

Labels: ,

Sunday, July 30, 2006

Conclusions from the Course

Cabo Blanco was mostly about wrapping up papers and having some great end-of-course parties, and it gave me the opportunity to reflect on all that I had learned over this month-long experience.

I have shared with you so far much of the beauty and intrigue, large and small, of the sites we visited. But in my work and education in conservation, there was much to contemplate beyond an introduction to and appreciation of natural Costa Rica. So, along those lines, here's a smattering of impressions and ideas that struck me...

From a technology perspective, some obvious conclusions emerged. GPS: not there yet, and not clear how we get there any time soon. The devices are fine; it's the canopy that's the problem. So unless you can afford an antenna to get your reading above the trees, GPS is just a nice idea. Devices in general, including field text, audio, image and video collection, are coming down in price, but they are underused by researchers because those with sufficient durability are still prohibitively expensive. It rains a ton, and if your device isn't waterproof, it's rather useless.

Moreover, the degree to which technology determines the scope of research projects was clarified. That is, researchers use the tools they know and answer the questions their tools can illuminate. It's easy to see how specialized technology is playing an increasingly important role in biology: gene research being the most striking example. But GIS was rarely used in our course, not because there weren't innumerable fascinating spatial questions to ask and answer with GIS, but because almost all the students lacked familiarity with the toolset. That will change as the tools improve (dramatic improvements await!), but I was struck by the lack of use even among these young and sophisticated computer users. It's nice to be needed.

Finally, I continue to see an opportunity for dramatic new organization and infrastructure in bioinformatics. The combination of incredible information overlap, in real data not just metadata, with powerful scientific and conservation implications for effective sharing means that there are huge untapped efficiencies. I continue to be inspired by the idea of a sub-internet, a "Bioinformatics Web," where conservation and scientific data, especially conservation biology data, is entered, analyzed and leveraged in dramatically improved ways. The bottom line on this topic is that my work at NatureServe seems just the right thing.

As I said in my Conservation Economics post, I was moved by the irony of Costa Rica's acclaimed conservation programs and the realities of continued deforestation. Amphibian declines and ongoing hunting issues clarified the possibility of "empty forests". Secondary forests may indeed be the forests of the future, but if biodiversity declines continue, they will be characteristically quite different.

But it was the economic realities of conservation that impressed me most, specifically the need to develop the economic rewards of conservation and reforestation, or be prepared for continued degradation. For instance, I was compelled by our discussion with Dan Janzen and his bias against species-based conservation when it comes at the cost of simple acquisition and effective protection of intact landscapes, or of lands suitable for reforestation that can connect intact landscapes. His views were characteristically harsh, but pragmatic. Dan also spoke with us about the accessibility that DNA barcoding may bring to non-scientists. I happen to agree that taxonomy need not be guarded by the high priests of today's taxonomy, and that there is enormous educational, and conservation, potential in making species identification increasingly cheap and easy.

Here again the theme of integrating local people into conservation and restoration came into focus. Dan's work under INBio to train and employ local experts in taxonomy is a powerful alternative to the traditional first-world-academia approach used by many conservation organizations. We saw another example of the power of local knowledge in Cuerici, where Swiss foresters directed "sustainable forestry" practices. Secondary tree mortality resulting from the selective logging was much higher than the experts anticipated, but completely consistent with the predictions of local farmers who know the forest.

In getting to know my fellow students, I was impressed and heartened by their talents and passion for their areas of expertise. I look forward to watching many of them progress to successful careers in science. I was surprised, I must say, by how some of the brightest minds among them seemed relatively unconcerned with conservation issues. If nothing else, I would think self-interest in the preservation of their own study systems would motivate some concern. The course coordinators' inclusion of conservation and social science issues was thus all the more valuable to me.

I was also struck by the need for environmental education to encourage pragmatism and a systems approach. Even some faculty members took what I considered to be idealistic and unrealistic positions on the case for conservation, positions that simply ignored historical, sociological and economic forces that must be respectfully confronted. In others I witnessed extreme pessimism about those same forces and a sense of helplessness. Both classes of response were recognizable to me from my own history. My experience in Costa Rica has clarified for me at a deeper level the need to stay positive, constructive and holistic in my own thinking and communication. Neither narrow idealism nor pessimism can be afforded.

There were some side lessons on leadership, too. My course coordinators struggled a bit to maintain good science content and keep logistics in order. Without going into too much detail, I can say that the course provided a microcosm for the study of leadership and group dynamics. I didn't envy the coordinators' leadership challenges, especially logistics in a foreign country and a bunch of opinionated students with sometimes conflicting needs. The experience gave me countless examples of the idea that leadership is best thought of as service ... and hard work. I particularly empathized with the loneliness that leadership can sometimes bring, specifically the need to let go of being understood and even liked.

In these blog entries, I can see an evolution in my understanding of Costa Rica, natural systems and tropical biology research. What I will remember about Costa Rica is the combination of its natural beauty and conservation realities. What a fantastic and fun (!) experience.

Labels: , ,