A New Approach to Measuring Impact for Digitised Resources: do they change people’s lives?


This is a work in progress - more my notes and queries than a proper paper; things will change and references will be added. Above all, I wanted to get this out there and to gather your views, your inputs and your insights. Please comment; your thoughts are valued!

My recent research with Marilyn Deegan into the value and impact of digitised collections has shown that there is a serious lack of adequate means to assess impact in this sector, and thus a lack of significant evidence beyond the anecdotal, the number crunching of webometrics, or evaluations of outputs rather than outcomes (http://www.kdcs.kcl.ac.uk/innovation/inspiring.html). In short, we need better evidence of impact. How has the digital resource delivered a positive change in a defined group of people’s lives or life opportunities?

In this Arcadia-funded research, we are addressing some fundamental questions about assessing the impact of digitised resources on changing lives or life opportunities. We plan to report a synthesis of methods and techniques, resolving these into a cohesive, achievable methodology for impact assessment of digitised resources. To assist and clarify our thinking and research goals, we would like to offer our description of impact.

Our conception of Impact for this research can thus be described as:
the measurable outcomes arising from the existence of a digital resource that demonstrate a change in the life or life opportunities of the community for which the resource is intended.

There is a well-established, professional and mature field of Impact Assessment (IA) in sectors not normally associated with memory institutions’ methods of evaluation, such as Environmental, Health, Economic or Social Impact Assessment. These provide some scope and lessons for our distinctive view of impact.

Impact Assessment (IA) is often predictive in nature. Environmental IA, in particular, focuses upon identifying the future consequences of a current or proposed action. As such it becomes a technical tool to predict the likely change created by a specific planned intervention. The European Commission’s definition of IA relates to a process that prepares evidence for political decision-makers on the advantages and disadvantages of possible policy options by assessing their potential impacts. In this latter case, impact is often thought of in both political and economic terms. Clearly the most important aspect of this mode of IA is to influence and inform decision-makers on future interventions and potential policy pathways.


Other IA relates to measuring the change in a person’s well-being through a specific intervention. Health IA generally considers a range of evidence about the health effects of a proposal using a structured framework. This can be used to determine population health outcomes or to defend policy decisions. The UK National Health Service uses a tool called the QALY (Quality Adjusted Life Year). This system assesses not only how much longer a treatment will allow a person to live, but also how it improves the life of that person#. The QALY is a measure of the value of health outcomes and as such is somewhat more limited than other methods used in Health IA, particularly in palliative care. King’s College London has developed the Palliative care Outcome Scale (POS)#, a tool to measure patients’ physical symptoms; psychological, emotional and spiritual needs; and provision of information and support at the end of life. POS is a validated instrument that can be used in clinical care, audit, research and training. These forms of IA are effective at measuring interventions but generally need clear baselines and large comparable populations to achieve significance.
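
To make the QALY idea concrete, here is a minimal sketch in Python; the treatment scenario and all figures are invented for illustration, not drawn from NHS data:

    # Illustrative QALY calculation (scenario and figures invented).
    # A QALY weights each year of life by a quality score between
    # 0 (death) and 1 (full health).

    def qalys(years, quality):
        """Quality-adjusted life years for a period lived at a given quality."""
        return years * quality

    # Without treatment: 2 more years at quality 0.5 = 1.0 QALY.
    # With treatment:    4 more years at quality 0.7 = 2.8 QALYs.
    gain = qalys(4, 0.7) - qalys(2, 0.5)
    print(f"QALYs gained by the treatment: {gain:.1f}")  # 1.8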


Other IA methods focus upon the wealth or level of economic activity in a given geographic area or zone of influence. They may be viewed in terms of: (1) business output, (2) value added, (3) wealth (including property values), (4) personal income (including wages), or (5) jobs#. Economic IA has the benefit of usually being able to identify baselines and significant indicators to measure improvement in the economic well-being of an area. However, these measures are less satisfactory for intangible assets or for the assessment of digital domain resources. Contingent valuation assessments seek to address those intangible asset measures. There is also some very interesting assessment work supporting new business investment opportunities, as described by the Triple Bottom Line, also known as the three pillars: people, planet, profit. An impact investor seeks to enhance social structure or environmental health as well as achieve financial returns, and the modes of measurement are proving interesting to this study.
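
As an illustration of how contingent valuation turns stated preferences into a value for an intangible asset, here is a minimal sketch; the survey answers and community size are invented for the example:

    # Minimal contingent valuation sketch (all figures invented).
    # Respondents state their willingness to pay (WTP) to retain an
    # intangible benefit, e.g. free access to a digitised archive.

    willingness_to_pay = [0.0, 2.0, 5.0, 5.0, 10.0, 3.0, 0.0, 8.0]  # pounds per year

    mean_wtp = sum(willingness_to_pay) / len(willingness_to_pay)
    population = 50_000  # assumed size of the benefiting community

    print(f"Mean WTP: £{mean_wtp:.2f} per year")
    print(f"Estimated annual value of the asset: £{mean_wtp * population:,.0f}")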


Finally, Social IA looks more closely at individuals, organisations and social macro-systems. The International Principles for Social Impact Assessment (Vanclay, 2003) define Social IA as including “the processes of analysing, monitoring and managing the intended and unintended social consequences, both positive and negative, of planned interventions and any social change processes invoked by those interventions”. Social IA has a predictive element, but successive tools such as Theory of Change have made it more participatory and part of the process of managing the social issues. For our purposes we are happy to include so-called social return on investment within the conception of Social IA, although others within the profession would disagree with this broad-church approach. There are many methods and tools for Social IA, and we think these may prove especially helpful in considering the life-opportunities questions and indicators we need to establish.
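
Social return on investment is typically expressed as a ratio of the discounted value of social outcomes to the investment made. A minimal sketch follows; the figures, the five-year horizon and the 3.5% discount rate are all invented assumptions for illustration:

    # Minimal SROI sketch (figures invented).
    # SROI ratio = present value of social outcomes / investment.

    def present_value(annual_value, years, discount_rate):
        """Discount a constant annual social value back to today."""
        return sum(annual_value / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    investment = 100_000          # cost of the intervention
    annual_social_value = 40_000  # proxy value of outcomes for stakeholders
    pv = present_value(annual_social_value, years=5, discount_rate=0.035)

    print(f"SROI ratio: {pv / investment:.2f} : 1")  # about 1.81 : 1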


What is increasingly clear, though, is that all these modes of IA have something to offer and that our inclusive approach to this research is a good one. Our challenge now is to rationalise these into a cohesive set of methods and guidance that is most useful to memory organisations and the sector.


Balanced Scorecard
One way to organise the thinking might be to use the Balanced Scorecard approach to considering the Impact of a digitised resource upon lives. This allows us to balance the Impacts being assessed and to ensure that multiple perspectives are represented. By combining economic, social and other non-financial measures in a single approach, the Balanced Scorecard should provide richer and more relevant information.
In the Balanced Scorecard approach we would suggest the following core headings:

  • Social and audience Impacts
  • Economic Impacts
  • Innovation Impacts
  • Internal process Impacts

In this way we can assess how Impact occurs both externally and internally to the organisation delivering the digital resource, giving a balanced perspective on changes to the lives of the people who use the resource and changes to the organisation through the existence of the resource.
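
To make the structure concrete, here is a minimal sketch of how such a scorecard might be recorded; the four headings are those listed above, while every indicator and figure is an invented placeholder rather than a recommendation:

    # Minimal sketch of a Balanced Scorecard record for a digitised
    # resource. Headings come from the post; all indicators and values
    # are invented placeholders.

    scorecard = {
        "Social and audience Impacts": {
            "repeat-visit rate": 0.42,                     # proportion of returning users
            "user-reported learning gain (1-5 survey)": 3.8,
        },
        "Economic Impacts": {
            "researcher travel costs avoided (GBP)": 12_500,
        },
        "Innovation Impacts": {
            "new scholarly outputs citing the resource": 17,
        },
        "Internal process Impacts": {
            "staff hours saved per enquiry": 1.5,
        },
    }

    for perspective, indicators in scorecard.items():
        print(perspective)
        for name, value in indicators.items():
            print(f"  {name}: {value}")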

Challenges

There are a number of challenges that many Impact Assessments have sought to address with varying success. These include:

  • timescales in which measurements may take place are often constrained by project timescales and remain too short for outcomes to become visible;
  • a lack of suitable benchmarks or baselines from which to measure change;
  • a diverse evidence base;
  • the wide range of possible beneficial stakeholders;
  • the difficulty in establishing useful indicators for the sector;
  • the need to make recommendations to decision-makers based on strong evidence;
  • the lack of skills and training in the community of practice;
  • and the need to review relevant qualitative as well as quantitative evidence to discover significant measurable outcomes.

These challenges are all within the scope of this study. The need to establish useful indicators for this sector is currently a major concern of this research. Indicators drive the questions and the methods which may be applied to gain significant information to support impact assessment.
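
One way to picture how indicators work: each is measured at a baseline and again at follow-up, and the reported impact is the change between the two. A minimal sketch, with invented indicator names and figures:

    # Minimal sketch of baseline-versus-follow-up measurement
    # (indicator names and figures invented for illustration).

    baseline  = {"weekly active users": 1_200, "scholarly citations": 4}
    follow_up = {"weekly active users": 1_950, "scholarly citations": 11}

    for indicator, before in baseline.items():
        after = follow_up[indicator]
        change = (after - before) / before * 100
        print(f"{indicator}: {before} -> {after} ({change:+.0f}%)")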

What are your thoughts? What are the significant indicators? What are the exemplars out there that you have seen or methods you have tried?

Comments

  1. Benchmarks pose the largest issue, especially when institutions are so different. Public vs private, for example: the private sector is always ahead of the public sector with technology, even though the public sector has the most to gain from digitisation due to its vast archives. Are there different sets of benchmarks or one industry standard?

    1. Hey Phill, thanks for the comment. For me the issue of benchmarks and baselines is an area needing more consideration. If I am assessing new benefits then I have to measure the change from something - but what is that something? How do we create a baseline of currently received benefits to measure the change from?

      So I am looking at the options for this. I want to get a baseline - i.e. a snapshot at a specific time of the economic, cultural, social, demographic, and geographic environment within which the project will take place. It could then be the foundation used to measure and assess the positive and negative impacts of the project over time. This of course implies that each indicator should be measured more than once in the life of the project in order to verify the change, and to verify whether or not proposed objectives were accomplished.

  2. Hi Simon. A few thoughts come to mind reading this. First of all, I agree there have got to be valuable lessons from other domains in terms of measuring impact, and the areas you identify here are useful and make sense. The real challenge, as you say, is how you apply these lenses to the impact of particular digital resources or projects. We are so often having to retrofit impact analysis these days, and what your points drive home is the need for a more longitudinal and embedded approach to impact assessment (very hard in the world of the short-lived, rapid development project).

    Impact measures (and the methodologies for collecting evidence over the longer-term) need to be determined at the very inception of a digital resources project or service. This will be tied up in the business case for the service itself, the benefits to be realised. Even in this arguably ideal scenario, we'll have to agree that we can only rely on a number of different, imperfect measures (both quantitative and qualitative) but collectively they can help tell us much more about immediate and hopefully downstream impact.

    1. Hi Joy, thank you for commenting!

      I strongly agree on the longitudinal issue. Many projects are evaluated over very short periods, often at the end of the project in the rush to finish and before the user community has actually got to grips with the resource. The measures here tend to be of outputs - how much was digitised, for instance. Outcomes tend to be measured in how many users there are or in growth of usership - worthy measures but not very informative over short periods.

      My previous research suggests that benefits and significant change (especially for scholarly resources) may not become apparent for a number of years. So unless we have a more longitudinal approach we are not going to capture these changes or benefits.

      There are some tricky aspects of longitudinal studies. Note my comment above about baselines - how do we establish these and measure more than once? Then there is the thing we are measuring: is the change in a community of use (if so, how do we establish that and retain it over time)? Or is this a change to a set of individual lives, and thus how do we assess the change that has happened through time to those persons (and what do we do if individuals withdraw from the study)?

      I am hoping we can at least address some of these questions but would very much welcome insights from the community on ways this has been addressed before.

      I am also expecting to use the research as a lobbying tool towards funders. What I want is for funders to allow projects to build in IA from the beginning, to allow the project to extend assessment beyond the end of the delivery phase. There is huge value for funders in doing good IA and I want to encourage a move from the 'fund and forget' model we have sometimes seen in the past.

  3. Hello Simon, I just wrote some comments and then lost them all in the attempt to log in, so here's a reduced re-written version of some of my thoughts, hope this time it works :-):

    1) Timescales: I agree that impact needs to be seen longitudinally; it might be useful to "phase" the measurement of impact as in short-medium-long term. IA is a process rather than something that can be measured at any fixed point in time (see 5 below).

    2) Some types of impact might happen immediately after a project has finished (e.g. skills, innovation, infrastructure within an institution) and some at a later stage, e.g. impact on wider communities; it might be worth taking this into consideration in the methodology.

    3) There is also a "snowball effect" characteristic of impact: the more a digital resource is out there, the more impact it will usually generate, if sustained...

    4) Indicators: this will depend on what type of impact we want to measure but also on the nature of the digital resource, i.e. if a resource was created to address a "narrow and deep" need of a particular community, indicators that look for impact on wider society and wider audiences might not be so useful. So we might need different indicators not just for different areas of impact measured, but also for different types of projects.

    5) Embedding impact at project start: this can be done and funders can ask projects to do that. It is not unproblematic, though, but that's another story. I think institutions and content creators too need to understand, and be convinced of, the value of measuring impact beyond the duration of a project, and embed it in their own processes for content creation, delivery and sustainability. In this way we could perhaps have a better chance of gaining evidence in time, in a longitudinal way, of the impact of a resource. Otherwise, yes, projects will probably be able to provide a snapshot of impact at a later date than the end-of-project date for the funders, but this will still be just that: a snapshot of impact at a moment in time.

    Very interesting can of worms :-)
    paola

  4. Hi Simon,

    These are all very good questions. Having worked on several large-scale digitization projects, I was never happy with the time we had to evaluate them. Often evaluations were underway before the resource was fully rolled out, with only a short time to get feedback. Having a longer view would be interesting, especially since there are some long-lived projects out there.

    One question I have is about the "thing" which creates impact. Are resources individual digital cultural resources (i.e. a file that represents a cultural object), or are they the services that deliver those kinds of resources to a user community? I suspect that both have some kind of impact, but that they may require different kinds of measures. It would be interesting to know whether it is worth the investment in simple services (like the ability to embed a resource in a blog/social media site), or if two collections, otherwise equal in the number/quality of objects, have different impacts because of the services they wrap around those resources. It also seems like there would be characteristics of both objects and services that influence impact that would be interesting to know about (image quality/resolution, accessibility, openness, etc.).

    This might be more complicated than you were imagining for this study - but it would be interesting to understand how similar kinds of metrics are used in existing IA research.

  5. Simon

    Given the stress that is being placed on the measurement of impact in a variety of contexts, further research into the intellectual frameworks used for the assessment of impact is clearly very urgent and timely, so this work is to be welcomed. I like very much the approach of looking at methods from other domains, but it seems to me there are still some very fundamental issues to be addressed.

    - The issue of digitization is a distraction here. We actually don’t seem to have any very good frameworks for assessing the impact of cultural activities altogether. Consider the example of the Prom concerts. They are clearly very successful and, if only measured by the strong bonds between the regular ‘prommers’, they help build a community and have an impact on that community. But how do we measure the impact of the Proms? We have a number of measures: ticket sales; the numbers of those who listen to and watch the concerts, both in person and at a distance; the number of musicians participating; the amount of tourist money generated; critical reviews. Yet none of these measures gives the least indication of the cultural importance of this concert series. Perhaps elements of any balanced scorecard should be qualitative measures as well as quantitative assessments.

    - There is a distinction between cultural and scholarly activity, which we don’t stress enough. There is a difference between the impact of the Proms and the impact of musical scholarship. Musical scholarship may discover new pieces of music, which then have an impact through their performance at concerts like the Proms. The measures for scholarly impact are different to those for cultural impact – the measures one might use for the impact of public libraries are different to those that one might use for scholarly books contained in the libraries.

    - I think it is wrong to assume that humanities-oriented digitization has a cultural focus. Packages with primarily a scholarly focus, such as EEBO, are very different in their character from services with a clear cultural focus, such as BFI Online or many of the music digitization packages. In assessing impact, the purpose and intended audience need to be taken into account. There is a difference between, say, Spotify and EEBO.

    - When packages are developed primarily for scholarly use, it is entirely possible that their impact is invariably indirect and thus difficult to measure. The importance of printed collections of primary sources (such as the old calendar series produced by the Public Record Office) was apparent from the extent to which they were used in scholarly works – it was difficult to demonstrate that they had much direct impact outside the scholarly community – their wider impact was invariably mediated through scholarly publications. The same may be the case with digital publications of primary resources.

  6. Hi Simon,
    It is interesting what Andrew Prescott is saying about assessing impact: that the purpose and intended audience need to be taken into account. Perhaps what we need is to get away from IA as a means to justify future funding and focus more on selling the vision of a digital society.
    In your own words, what we need is “Critical Mass”, and I am not convinced that we are there yet. Surely we need many more projects to even begin to benchmark one project against another, and then we will have the problem of comparing chalk with cheese, as no two projects are ever the same or intended for the same purpose.
    Perhaps one measure of “impact” could be to measure the readership of the “unintended” audience rather than the target audience? The problem with this, as I see it, is that if UK-funded projects have a positive impact on readers in, say, China or India, then further funding may become problematic. One way around this may be to offer our services to institutions in countries with more money and potential for digitization?

  7. My name is Matt Gigg, and I am a student in a digital humanities seminar called “Hamlet in the Humanities Lab” at the University of Calgary. In my final paper for the course, I would like to base my argument on your blog post. You can read my paper after April 25th on the course blog:

    1. Sorry, the link didn't show up: http://engl203.ucalgaryblogs.ca/

    2. Matt, I would be pleased if you based your paper on this blog. Let me know if I can provide further background. And please do post the direct link to the paper when you are done. Thank you.

  8. I meant to post a couple of weeks ago when I read the paper. This is very important and complex work, and I agree with the need for longitudinal information. I've been looking at criteria for archiving digital resources, and measurable criteria for impact assessment would be really helpful. Are you going to publish some sort of grid describing what questions or metrics could provide information on the various modes?

    Very interesting work.

  9. Hi Sheila - thanks for the comment and the support. The grid idea is very much in my thinking. Maybe we could share ideas in more detail as work progresses?

  10. Hi Simon - I would definitely like to discuss this more. I also wanted to add that I hope it didn't seem that I was implying that your work only has value as criteria for archiving! I think the need for measuring impact goes well beyond that. I have just recently been looking at archiving criteria, and it struck me that some of the issues I was looking at align nicely with this work (deciding if a resource is 'relevant to users' could use some of the same criteria as trying to determine if people 'value a resource').

    I would definitely enjoy some further discussion on your work.

  11. Simon,
    You've certainly taken on a challenging project. I appreciate the way you have begun to frame it in the context of how other sectors have approached the problem. You provide a reasonable approach through a balanced scorecard and raise tough challenges.

    In regard to the challenges, I wonder if it would be possible to try to "break down" the components of the technical environment in order to isolate them from the specific measures. This is to say that the technical infrastructure serving the content, the websites contextualizing the content, the tools developed to explore and use the content, the development of new devices capable of accessing the content and the policy driving the use of content along with the depth and breadth of the content itself will be players in the measure. These need to, in some way, be isolated in order to understand the impact of the digitization as distinct from the technology ecosystem that facilitates access and use.

    Each of the examples above has its own lifecycle and trajectory of advancement. While it may be difficult to envision and accurately assess how the current and future state of these technologies plays into the overall impact, one could recognise them in current assessments and, over time, look back to earlier assessments and retrospectively adjust the measure of impact with the clearer understanding that only a retrospective view into the role of technology can provide.

    Good luck. I'm interested in learning about your progress.

