KM4Dev discussion synthesis – ‘Is it actually possible to measure Knowledge Sharing?’

“Not everything that can be counted counts, and not everything that counts can be counted.” – Einstein

In the summer of 2010, Stefano Barale posted to the KM4Dev discussion group asking ‘Is it actually possible to measure Knowledge Sharing?’ This question stimulated an animated discussion, with 76 responses from members of the community keen to share their experiences and perspectives. This synthesis picks up on the themes identified in that conversation and draws out some of the key lessons that emerged. The conversation started with a discussion of tools and approaches and developed into multiple threads that explored complexity theory, subjectivity and possible indicators, before arriving at a reflection on the very nature and value of scientific exploration.

The original question focused on how traditional tools such as questionnaires, interviews and expeditions could be made more rigorous in showing whether online knowledge repositories actually support knowledge sharing, using indicators related to shared authorship and updating of documents, as well as frequency of access and the number of citations to other documents in the organizational knowledge base.

Responses were quick to point out that “Sharing knowledge doesn’t occur through the uploading and downloading of documents”; rather, it is how knowledge is used, and whether its application leads to concrete development results, that we need to capture. This highlights the lack of clarity between knowledge as a product or output that we can count, and the behavior change resulting from the application of new knowledge, which is a process and therefore much harder to measure. The ambiguity is exacerbated by our need for rigor, which is usually associated with quantitative methods, forcing us into a corner of counting knowledge products. Stefano then confessed that he had been playing devil’s advocate with the original suggestion of indicators, but challenged the group to think about whether and how the output indicators we are able to measure could serve as proxy measures for the behavior change resulting from access to and use of new knowledge, and so provide an entry point into measuring outcomes.

A number of responses explored different interpretations of behavior change, with one post making the point that this is not just about knowing more but about using that knowledge to do things differently. Another highlighted how behavior change has both positive and negative dimensions, so we need to be clear on the wider context of this change to assess how specific behaviors bring us closer to our desired development goals – or to the even more elusive concept of social change. We were reminded that systematic approaches for measuring behavioral change do exist; ethnographic and anthropological research have a rich tradition of using activity metrics, but the challenge is not so much gathering this data as knowing how to use it. Thus, we need to think not only about the data we must collect to demonstrate our contribution to behavior change, but also about what we plan to do with that data and how we aggregate indicators to turn data into compelling evidence.

Knowledge sharing environments

Another thread moved away from specific knowledge products towards a wider focus on the environment that supports knowledge sharing in organizational contexts. As KM practitioners we are aware of the symbiotic relationship through which informal channels support official databases or intranets to enable knowledge sharing, with one comment noting that making visible the knowledge held by people was the "most important" element of the repository, not the documents themselves. From this perspective the key question becomes how to create professional environments that are conducive to knowledge exchange, combining formal technological solutions with social spaces that encourage discussion and exchange. We should be thinking about what this environment might look like and the conditions that help staff make the best use of the knowledge available to them. Answers to these questions might give us insight both into how to create such an environment and into its effects on project outcomes and organizational performance. There are factors that facilitate (practical experience, teamwork, motivation, peer exchange) and factors that restrict (bullying, lack of teamwork, lack of motivation, lack of trust, vanity, competition) our ability to use knowledge to do things differently than we did them before. Moreover, people need information that is available at the right time in suitable formats, as well as access to skills training so they can learn from people who have overcome the challenges and problems they currently face.

One comment in this thread challenged us to be clear about whether our unit of measurement is the individual or the organisation, which has clear implications for how we measure the efficacy of our professional environments in promoting knowledge sharing. Another asked us to consider whether we actually need to bother trying to measure the social change from KM. Is our concern with measuring an artificial concept from the development sector that leads us to chase our tails? What can we learn from private sector companies such as Google, Facebook and Twitter, which have changed the way we work and think and measure their success in terms of revenue?

Products-processes / outputs-outcomes / tangibles-intangibles

This self-critique of the development sector’s obsession with measurement challenges us to enter into evaluation parlance and explore the well-known development dilemmas of whether to focus on products or processes, outputs or outcomes – questions that promote discussion and debate across the entire sector and are by no means restricted to knowledge management and sharing. We need to embrace the fact that the most valuable lessons are often found not in what we can measure but in what we can’t. Another way of thinking about this is exploring tangibles vs. intangibles, which roughly transposes onto the outputs/outcomes question. Examples of intangibles are “things like increased levels of knowledge, a greater capacity to innovate, the capability to create more ideas, relationships, trust and active networks”. While intangibles are harder to measure, they are “also so much more valuable for what they contribute to the distillation of experience into understanding, lessons, etc.” That said, we cannot entirely reject the importance of measuring and counting tangibles, as they are likely to be central elements of our implementation plans and activities. A nice analogy was used of a man searching for his lost keys under a streetlight despite the fact that he had dropped them in a dark corner of the street. We shouldn’t restrict ourselves to measuring what we are able to see; we need to shed light on the lessons that are not always immediately apparent.

Beyond this outputs-outcomes / tangibles-intangibles debate, our greater challenge is to gauge whether we have made any lasting impact and a real difference to people’s lives:

“In order for the intangibles to have real value they must eventually translate into tangible benefits as well (unless we are interested in supporting knowledge for knowledge sake). It may be very difficult to measure the "ripple effect" of long term benefits derived from these intangibles, but in order for KM to be properly valued and supported by decision makers surely we must try to do so”.

So another entry point is to consider the anticipated outcomes of knowledge sharing; we share information with a purpose so we need to provide evidence that the information we share is fit for purpose and supports the desired outcomes.

Complexity and causality – shooting at butterflies

Another theme throughout the discussions was the issue of complexity and how to determine causality when we can’t necessarily identify all the complex factors at work. Both the Cynefin framework and Outcome Mapping were suggested as resources that could help us to maneuver our way around this complexity.

An excellent analogy sums up the challenge we face:

If you think the world is complex, interconnected, dynamic and emergent, then measurement of knowledge may be akin to using a machine gun to catch a butterfly

And yet plenty of us are still shooting at butterflies. Several posts captured frustration at our own inability to move beyond linear mechanisms for measuring our work. We are the ones who promote complexity theory and explore the emergence of social processes to support learning, and yet:

“Paradoxically, the majority of our effort still goes on over-designed, top-down systems which have an overly mechanistic view of human beings, how they interact and how they learn.”

Thus there is an inherent tension between embracing complexity as knowledge professionals and meeting the results and reporting requirements of donors and senior managers, creating a temptation to simplify the complexity in order to satisfy this thirst for answers. Can we find the middle ground between the complex and the linear?

Towards a formula or framework

We cannot afford to ignore the question of accountability and the need for evidence, which means it is time for some radical out-of-the-box thinking to identify a framework that brings these different elements of our experience together. I really loved the suggestion of a formula as food for thought on how we actually address our need for evidence on complex, intangible questions such as improvement in quality of life, farm productivity and food security/insecurity levels, the ability to innovate and adapt to different situations and circumstances, and the ever-elusive change in perceptions, thoughts and ideas. The following formula was suggested by Damas Ogwe:

So … “Measurement could be argued out in the following equation:
PI + CI = (K2 + R2) – (K1 + R1)

Where
PI = Personal Impact after sharing
CI = Community Impact after sharing
K1 = Knowledge before sharing
K2 = Knowledge after sharing
R1 = Resource utilization before sharing
R2 = Resource utilization after sharing

Thus the difference between the sum of knowledge (K2) and resource use (R2) after knowledge sharing on the one hand, and the sum of knowledge (K1) and resource use (R1) before knowledge sharing on the other, can help us understand both personal (PI) and community (CI) impact.”

This suggestion certainly captured our collective imagination and led to another intense exchange over whether the minus should in fact be a plus. The answer, of course, is that it depends on the context: in some cases new knowledge must be built upon existing knowledge and experience, while in others it will replace previous ways of doing things. Knowledge is also constantly being lost, something we rarely acknowledge and certainly don’t know how to measure. But the very essence of knowledge makes it hard to capture in this type of equation: it is something that flows rather than a fixed asset. So we need to identify mechanisms to measure knowledge as an asset that can be accumulated (and depreciated), whilst also exploring the flows and exchanges that support these processes.
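As a purely illustrative sketch, the suggested equation can be worked through as a small calculation. The Python snippet below applies it to hypothetical before-and-after scores; the function name, the 0-10 scoring scale and the example values are all invented for illustration and are not part of the original suggestion.

# Minimal sketch of the suggested equation: PI + CI = (K2 + R2) - (K1 + R1).
# All scores are hypothetical, e.g. 0-10 self-assessment survey ratings.

def combined_impact(k1: float, r1: float, k2: float, r2: float) -> float:
    """Return the combined personal and community impact (PI + CI) of a knowledge-sharing activity."""
    return (k2 + r2) - (k1 + r1)

# Hypothetical baseline (before sharing) and follow-up (after sharing) scores.
impact = combined_impact(k1=4, r1=3, k2=7, r2=5)
print(impact)  # (7 + 5) - (4 + 3) = 5

Note that, as written, the equation only yields the total PI + CI; separating personal from community impact, or deciding whether the minus should in fact be a plus, requires further assumptions about the context, which is exactly the debate described above.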

Putting lessons into perspective

An essential piece of this puzzle is accepting that what works in one context or situation is not necessarily suitable or advisable in another, so we need to embrace both the complex and the context-specific nature of our work and be prepared to adapt to our circumstances. As we were reminded in an impassioned argument against ‘best practice’, we have no idea of the criteria or review process by which some practices were decided to be better than others, which highlights our inherent subjectivity. We need to step back and ask whether we sufficiently recognize this subjectivity and its influence on our work. We are generally aware of the many divergent and incoherent views and perspectives that different stakeholders have of a development situation, but are less conscious of our own perceptions and of how they influence our efforts to extract solid lessons from our experiences that have value for others. From this perspective it is extremely difficult to determine what constitutes a lesson or a valid point of view, so we need to start thinking more critically about lessons and encourage more than one interpretation of how lessons from a project or experience emerged and of their potential applications in other contexts and projects.

As knowledge management professionals we should be more aware than anyone of the dynamic interactions between multiple knowledges in the development process, and we need to move beyond our obsession with tools and methods to focus on the synergy of different types of knowledge, each with its own measurement logic. We work with a range of evidence – individuals’ self-reports, community groups’ stories, experts’ objective measures and organizations’ performance indicators – and it is the sharing of patterns across these approaches that builds up the rich picture of any change (or lack of change), leading to collective learning among all the actors and ultimately supporting collective action.

So maybe we should try to find better mechanisms to reconcile the juxtaposition in our thinking between local and scientific knowledge? We know that statistical knowledge can be manipulated, but this doesn’t reduce its perceived value, and we favor this type of evidence because it requires proof and is therefore associated with rigor. Although local knowledge and community experience are about interpretations and perceptions, this knowledge is equally valid when it comes to decision making, so we need to find better mechanisms to incorporate it into decision-making processes. By putting different types of knowledge on a more equal footing we can combine elements and learn from each other to develop solutions to common problems.

This line of thinking led to reflection on the very nature of scientific enquiry:

“For science advancement, disproving hypotheses is the engine, but for professional life, disproving assumptions is the engine. We advance when we uncover and disprove the assumptions we were using when making sense of our work or designing strategies. We should not oppose local and scientific knowledge. More than ever we will need good science to understand the emerging global problems.”

Science is about challenging accepted truths; doubt and questions are usually the precursors of a paradigm change. This conversation highlighted the level of doubt and questioning around how we measure the impacts of our knowledge sharing activities, and suggests the time is right for a shift in our thinking. We need creative, out-of-the-box thinking that combines qualitative and quantitative parameters, captures tangible and intangible outcomes, and enables exploration of lessons from multiple perspectives.

Continuing the Conversation


The Knowledge Management Impact Challenge aims to continue the exploration of this topic by gathering and exchanging stories of what works and what doesn’t. It also features a growing collection of recommended resources in the online library—new contributions are encouraged. Visit http://kdid.org/kmic to learn more and share a story.

KM Impact Challenge Team

The KM Impact Challenge is an initiative of USAID’s Knowledge-Driven Microenterprise Development (KDMD) Project.

Suggested Resources

Below is the list of resources that were suggested throughout the conversation.

Books and Chapters

  • DiBella, Anthony. Learning Practices: Assessment and Action for Organizational Improvement.
  • Kitching, Gavin. The Trouble with Theory: The Educational Cost of Post-Modernism.
  • Brown, Valerie A., Harris, John A. & Russell, Jacqueline Y. Tackling Wicked Problems.
  • Davenport, Tom. Thinking for a Living (Chapter 6).

Articles and manuals

  • IIRR (1996). Recording and Using Indigenous Knowledge: A Manual. International Institute of Rural Reconstruction, Silang, Cavite, Philippines. www.mamud.com/ikmanual.htm
  • Snowden, D. J. & Boone, M. (2007). A Leader’s Framework for Decision Making. Harvard Business Review, November 2007, pp. 69-76. http://www.mpiweb.org/CMS/uploadedFiles/Article%20for%20Marketing%2...
  • Kurtz, C. F. & Snowden, D. J. (2003). The New Dynamics of Strategy: Sense-making in a Complex and Complicated World. IBM Systems Journal, 42(3), p. 462.
  • Hugenholtz, N. I., de Croon, E. M., Smits, P. B., van Dijk, F. J. & Nieuwenhuijsen, K. (2008). Effectiveness of E-learning in Continuing Medical Education for Occupational Physicians. Occup Med (Lond), 58(5), 370-372.



Comment by Louise Clark on December 17, 2010 at 9:39am

Hi Carl - Thanks for your comment. I think one of the main problems with control groups is that they contradict the very nature of knowledge sharing! If we do our jobs well then knowledge should flow beyond our immediate audiences...

 

Would be great to know if anybody has any experience of using control groups to measure the impact of their work.  Would make a great case story for the KM Impact Challenge

Comment by Carl Jackson on December 16, 2010 at 3:28pm

Thanks for a great summary of a thread that I had not followed but is really valuable. The points made me wonder whether control groups would be necessary or helpful for measuring knowledge sharing in a robust way. Without controls, evidence of any kind is at risk of being disputed, as it's harder to say whether the change reported would have happened anyway without the intervention. With controls it may be easier to bolster qualitative evidence by contrasting anecdotal / opinion evidence of change with that in a similar group not involved in the KS intervention.

 

I'll try to sign up for the KM Impact Challenge now: http://kdid.org/kmic
