Measuring and reporting

Measurement of Public Sector Innovation

This section of the Public Sector Innovation Toolkit advises the reader on the following[1]:

  • Why it is important to measure innovation in the public sector
  • How you might go about measuring the level of innovation and the innovation climate in your workplace
  • Some things to watch out for if you decide to conduct a public sector innovation survey.

This section also provides information on studies of public sector innovation in other countries and identifies some of the conceptual frameworks that underpin measurement.

It is important to note that the measurement of innovation in the public sector is an evolving field which is receiving increasing interest.  Therefore, this section of the toolkit will be updated from time to time as new information becomes available.

Part 2 of this section covers:

  • How do we report on public sector innovation?
  • About the APSII project
  • Case studies on public sector innovation (measurement)
  • References and suggested reading

 

What is known about public sector innovation measurement?

The concept that innovation can lead to improved productivity and service delivery in the public sector, and thereby benefit the wider economy, is not a new one. While the public sector accounts for more than a third (~35%) of GDP in Australia (AIS Report 2011), the literature on public sector innovation is relatively new: it is only since 2003 that there has been a sharp rise in the number of academic publications in this area.

Furthermore, most research is based on small case studies, extrapolations from the private sector, or interviews with senior management. Few studies are based on rigorous large-scale surveys. In the words of a researcher in this field, "one could characterise our current knowledge of public sector innovation as based on a number of untested assumptions on how public sector innovation occurs combined with insights from case studies and a few small-scale surveys" (Arundel, 2011)[2].

Regular assessments of innovation in the Australian Public Service are undertaken by the Australian Public Service Commission (APSC), which conducts two annual surveys – an APS agency questionnaire and a voluntary APS employee questionnaire – for its annual State of the Service Report[3]. Both surveys have included innovation in recent years.

The 2010-11 survey is the most comprehensive innovation survey undertaken by the Commission to date. It showed that while innovation was a consideration for many Australian public service agencies, barriers remained to achieving an innovative workplace culture. Interestingly, employees' desire to innovate outweighed the ability of agencies to provide an environment conducive to innovation. There was also a discrepancy between agencies' and employees' views on the degree of innovation in the workplace, with agencies, for example, reporting a higher level of innovation than employees.

The most recent survey, as reported in the State of the Service Report 2010-11, provided the following insights:

  • 50% of employees agreed their workgroup had implemented an innovation in the preceding 12 months (31% disagreed and 19% were 'not sure')
  • Almost 90% of employees were always looking for better ways to do things
  • 50% of employees agreed that their current agency encourages innovation and the development of new ideas
  • 53% of employees believed that there are barriers to innovation in their workplace—the greatest barrier being budget restrictions followed by unwillingness of managers to take risks
  • 71% of agencies reported that there are strategies in place to encourage innovation
  • 84% of agencies reported that they had significant innovations in areas that included human resources, policy development, program design and service delivery.

It is interesting to compare these results with earlier State of the Service Reports and with the 2011 Research Note on Measuring Public Sector Innovation. While innovation is occurring across the APS, most agencies acknowledge that they need to improve their performance, and there is recognition of the complex nature of public sector innovation. In the words of the APSC, "public sector innovation remains a complicated and multi-dimensional issue".

Other countries, including members of the Organisation for Economic Co-operation and Development (OECD), are also grappling with public sector innovation and how to measure and report on it. Internationally there are a few rigorous large-scale studies (see below) that can be used as a guide on how to approach this topic. However, given some fundamental differences in the Australian political system and public sector administration, caution should be exercised in directly adopting their methodologies or extrapolating their results to the Australian context.

Measuring and reporting public sector innovation is a growing area of interest, in Australia and around the world. International projects on the measurement of public sector innovation include:

  • the Government Innovation Index (GII) [South Korea]
  • the Public Sector Innovation Index [the United Kingdom]
  • the European Public Sector Innovation Scoreboard (EPSIS)[4] [European Union]
  • the MEPIN Project[5] [Denmark and other Nordic countries]
  • the NESTI Taskforce on the measurement of public sector innovation[6] [OECD].

Refer to the attached Literature Review for further information on these international projects.

In Australia, the Australian Public Sector Innovation Indicators (APSII) Project commenced in late 2010. Its aim is to provide data and metrics on the innovation performance and capacities of Australian public service agencies and, in the longer term, to provide internationally comparable information that allows benchmarking against overseas counterparts. The APSII Project seeks to supplement the information routinely collected by the annual State of the Service surveys.


Why do we need to measure public sector innovation?

What is the policy background to public sector innovation in Australia?

The Australian Government has recognised that an innovative public service is integral to a healthy national innovation system. Any improvements made in this area will flow to other areas of the economy, as public sector activities drive and diffuse innovation for societal benefit. The importance of this was highlighted at the 2020 Summit (April 2008) and in a number of reports, including Venturous Australia: building strength in innovation (August 2008); Powering Ideas: An Innovation Agenda for the 21st Century (May 2009); and Engage: getting on with Government 2.0 (December 2009).

Following a call for submissions and wide-ranging consultation, the Australian Government accepted the report Ahead of the Game: Blueprint for the Reform of Australian Government Administration and its 28 recommendations in March 2010. Improved innovation became a cornerstone of the APS reform agenda Building the Future Together. Two important reports assisted with the innovation component of the reform agenda – a Better Practice Guide by the Australian National Audit Office (ANAO) entitled Innovation in the Public Sector: Enabling Better Performance, Driving New Directions (December 2009), and the Management Advisory Committee Report entitled Empowering Change: Fostering Innovation in the Australian Public Service (May 2010).

In short, the Australian Government has given a clear directive to the Australian Public Service to adopt an innovative culture within Australian Government agencies to increase productivity and efficiency in service delivery. Similar initiatives can be seen at the State/Territory and Local Government levels.

How can measurement improve public sector innovation?

Reflecting this policy direction, an increasing number of public sector agencies refer to innovation in their organisational planning and output statements. Finding meaningful ways of measuring public sector innovation through innovation metrics and robust performance indicators will assist agencies in identifying the strengths and weaknesses of their innovation capacity and performance.  A robust measurement framework for public sector innovation will assist with organisational performance review and improvement, and will also enable agencies to better report on their innovation progress to government.

Identifying, developing, translating or adopting innovative solutions can be associated with identifiable stages (Empowering Change, p. 17). Measuring and reporting on public sector innovation can be applied to each of these stages (an illustrative sketch follows the list):

  1. Idea generation: Innovation metrics may provide data on the extent to which an organisation's environment and practices contribute to idea generation – for example, the value that management places on new ideas (how readily they accept ideas from their staff, customers, partners and external agencies), the resources directed to creative thinking, the inclusion of innovative thinking in performance agreements, and the risk management culture can all impact on the level of idea generation
  2. Idea selection: In picking which ideas to use, innovation metrics may provide data on a range of issues, such as: whether organisations have processes and incentive systems in place for soliciting and harnessing new ideas; whether organisations have criteria in place for idea selection; how effectively an organisation selects which ideas progress to implementation and which are abandoned; and whether there are feedback mechanisms for proponents of innovative ideas
  3. Idea implementation: Innovation metrics may provide data that allow organisations to assess how successfully and effectively they translate ideas from concepts into practice – for example, data can be obtained on the level of resources provided to support the implementation of new ideas, whether systems are in place to evaluate the success of a new idea or the problems faced, whether there are processes to ensure the necessary level of support (from management and innovation champions), and whether specific innovation roles have been assigned to staff
  4. Sustaining ideas: To maintain the momentum of innovative ideas, innovation metrics can inform agencies about the positive impact that innovation can have on their activities. This refers to both the short-term outputs and the long-term outcomes of the innovation. Routine cost-benefit analysis can assist agencies in evaluating the impact of innovative changes over time and in deciding whether these changes should be continued, modified or abandoned. Metrics may also indicate whether there are strategies in place for engagement and collaboration on innovation, whether innovation has become part of an organisation's strategic and business planning with practices embedded to promote and foster innovation, and whether there is recognition and celebration of innovative achievements
  5. Idea diffusion: In spreading ideas and insights about innovation, metrics may provide information on the degree of adoption of new ideas across and within agencies. They may indicate the extent to which lessons learned (positive and negative) are disseminated throughout the public service and the degree to which the outcomes of innovation are communicated to stakeholders and the wider public. Innovation metrics also provide data on which to publicly report on innovation (such as in annual reports and organisational reviews).
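
To make the stage-based view concrete, the sketch below shows how an agency might record a handful of metrics against each stage and derive a simple roll-up measure. It is illustrative only: the indicator names and figures are hypothetical, not APSII metrics.

```python
from dataclasses import dataclass, field

@dataclass
class StageMetrics:
    """Metrics an agency might record for one stage of the innovation process."""
    stage: str
    indicators: dict = field(default_factory=dict)

# Hypothetical figures for a single reporting period.
pipeline = [
    StageMetrics("Idea generation", {"ideas_submitted": 120}),
    StageMetrics("Idea selection", {"ideas_progressed": 18, "proponents_given_feedback": 96}),
    StageMetrics("Idea implementation", {"projects_resourced": 12, "evaluations_completed": 9}),
    StageMetrics("Sustaining ideas", {"cost_benefit_reviews": 6}),
    StageMetrics("Idea diffusion", {"lessons_learned_published": 3, "adopting_agencies": 4}),
]

# Simple roll-up: the share of submitted ideas that reach implementation.
submitted = pipeline[0].indicators["ideas_submitted"]
implemented = pipeline[2].indicators["projects_resourced"]
print(f"Idea-to-implementation conversion: {implemented / submitted:.0%}")  # 10%
```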

Each of the above stages is impacted by the "innovation climate" of the agency, which may be conducive or inhibitory to innovation. The Empowering Change report recommended that agencies identify and address any systemic barriers to innovation, irrespective of which stage of the innovation process they are at. Innovation metrics may help agencies to identify barriers and assess where resources should be most appropriately directed to improve organisational performance. Conversely, identifying areas that are conducive to adoption of innovation may help agencies to identify areas that should continue to be supported.

Reporting public sector innovation metrics, especially in terms of relative performance, can be an important motivational tool. Results from the implementation of innovations can be a driving force for others to either take on or avoid risks. Reporting also sustains the innovation culture by providing data on what is currently working and what is not. It assists with understanding possible enablers of and barriers to innovation and gives agencies the opportunity to learn from each other's experiences.

Effective oversight and monitoring of public sector innovation requires access to accurate, timely, comprehensive and meaningful information. Measuring public sector innovation can inform organisational planning by providing information on:

  • existing innovation strategies
  • organisational strengths and weaknesses on various aspects of innovation (including capabilities and constraints)
  • improvements made over time
  • links between activities and impacts
  • systemic barriers to innovation (for example, management practices, culture and leadership)
  • strategies to identify and manage risk.


What are the problems with measuring public sector innovation?

As acknowledged in the Empowering Change report and the ANAO's Better Practice Guide, the drivers for improved innovation are as strong in the public sector as they are in the private sector. In a tight fiscal environment, government agencies are seeking to cut costs, deliver services more efficiently, and find new solutions to old, intractable problems, while at all times balancing political risk.

However, there are also several functions carried out by government that have no equivalent in the private sector – these include the maintenance of national security and sovereign rights, the making and amending of legislation, engagement in foreign affairs, the development and implementation of national policies and strategies, the raising of revenue and national disaster management, to name but a few. The profit and competitive pressures that drive innovation in the private sector are likely to be less significant in the public sector, where the focus is on the provision of services to the populace.

In measuring public sector innovation, these intrinsic differences between the private and public sectors need to be recognised. Further, it is important to understand the framework conditions that promote or inhibit innovation, as well as to measure the outputs and outcomes of innovation, which are frequently difficult to assess. The Empowering Change report identified the following specific challenges faced by today's governments:

  • Fiscal pressures to do more with less, heightened in recent times by the global financial crisis
  • Global competition and international markets
  • Impacts of information and communications technology (ICT)
  • Rising public expectations regarding service delivery, efficiency, openness, flexibility, accountability and participation in policy development
  • Addressing complex problems such as an ageing population, sustainability, affordable health care, and social inclusion.

The public sector has a complex decision-making and organisational structure that can shape the conditions for, and impact on, innovation. Innovations originate at every level of the APS (sometimes in collaboration with the private sector, non-government sector or universities) and capturing the level of innovation in the APS can be difficult. Many potential innovations are abandoned and consequently are difficult to find out about or measure.

Because of the stringent accounting requirements of government and the transparency of government processes, it is feasible to estimate some of the resources (hence costs) that have gone into an innovation activity (such as the purchase of ICT or other equipment). However, innovation is only a part (frequently only a small part) of an organisation's overall work program, and staff are generally involved in many other activities. Even with a strict timekeeping system it is difficult to estimate the percentage of staff time spent on innovation versus other tasks, as innovation is not always a discrete task that can be separated out from core activities.

It is also difficult to measure and value the outputs and outcomes of public sector innovation on the Australian economy, especially its wider socio-economic flow-on effects. A major proportion of public sector work is directed towards the provision of services, which in some cases have long-term impacts. Developing baseline metrics for government innovation programs can become complex and costly, and isolating the specific results can take years. Government may also choose to conduct external surveys amongst the beneficiaries of such innovation programs (such as the general public or industry) on their perception of the outcomes of these programs, to ascertain the long-term outcomes and value.

Furthermore, the public sector is heterogeneous in terms of size and functionality – this includes the types of activities that agencies engage in such as service delivery (e.g. in administration, education and health), the provision of policy advice and the enforcement of rights and national sovereignty.  It is therefore difficult to develop metrics that take account of service-wide comparisons of innovation activities. For example, some measures are likely to be more appropriate to service-delivery agencies than they are to regulatory or policy-making agencies.

The scale at which public sector innovation is measured is a further issue. Ideally, innovation ought to be measured at all levels of government – Australian, State/Territory and local government. Work is currently being undertaken by all three levels but it varies in approach, coverage and integration. The APSII Project aims to examine all three levels of government in the longer term but in the first instance it will concentrate on establishing innovation metrics at the Australian Government level.


How do we measure public sector innovation?

International Approaches

There is currently no standard approach to measuring public sector innovation; developing one is a goal of the OECD's NESTI Taskforce. The following public sector innovation surveys have been undertaken internationally:

  • The MEPIN Pilot Survey (2009) was conducted in the Nordic countries; it measured public sector innovation at central and regional levels, with 1,970 organisations interviewed on a voluntary basis
  • The 9th Innobarometer Survey[7] (2010) was conducted by the European Commission; it measured innovation in the public and private sectors at national, regional and local levels, and interviewed 4,063 organisations on a voluntary basis
  • The NESTA Pilot Survey[8] (2010) was conducted in the United Kingdom; it measured innovation in the National Health Service and local government sectors, and interviewed 175 organisations on a voluntary basis.

One Asian country has conducted annual surveys of public sector innovation since the mid-2000s:

  • The Government Innovation Index (GII) Survey and Reporting System has been conducted by South Korea since 2005, measuring and automatically evaluating innovation of 498 public institutions on a mandatory basis.

As these pilots show, there is no consistent methodology for measuring public sector innovation. The surveys vary in scope, size, level of detail and the type of questions posed. While it is possible to draw some broad comparisons, a detailed comparative analysis is not possible because of the differing methodologies.

Conceptual Frameworks

Public sector innovation is viewed in a variety of ways. Some researchers approach it as a modified version of private sector innovation. For example, the approach adopted by the MEPIN Project is primarily to apply the Oslo Manual framework for innovation metrics[9], with some changes.

Others adopt a different approach on the basis that the public sector operates under different constraints, which prevent the use of many private sector innovation metrics.

Further issues arise from the definition of innovation. Innovation is a complex construct and is studied from multiple perspectives at different levels of analysis by scholars from a variety of academic disciplines. Public sector innovation does not have an internationally accepted definition at this time. Elements that public sector innovation may share with business innovation include instigating change in process, product, service or organisational methods with the objective of improving efficiency or quality.

Empowering Change defines 'innovation' as fundamentally the generation and application of new ideas. It adopts the six categories of innovation developed by Windrum (Windrum & Koch, 2008), namely:

  1. Services innovation
  2. Service delivery innovation
  3. Administrative or organisational innovation
  4. Conceptual innovation
  5. Policy innovation
  6. Systemic innovation

While the first three categories of innovation are also found in the private sector, the last three are specific to the public sector (particularly numbers 5 and 6). Given that a variety of definitions are being applied to public sector innovation, the APSII Project aims to test, through its proposed pilot survey, which definition is most appropriate for the APS. The following is an outline of two commonly used conceptual frameworks. Each has strengths and weaknesses in how it represents public sector innovation, and that in turn influences the type of questions posed in innovation surveys.

The DAMVAD Model

This model is an adaptation of the earlier chain-link model of innovation developed by Kline and Rosenberg in the 1980s, and it underpins the MEPIN survey approach used by the Nordic countries. The original chain-link model, like the Oslo Manual approach on which DAMVAD's adaptation largely draws, was developed for private sector innovation. The adapted model sets out a sequential presentation of the stages of the innovation process and has the following main themes (or elements) for measuring public sector innovation:

  • Objectives of innovation
  • Inputs to innovation
  • Innovation processes within the organisation
  • Outputs of the innovation process
  • General outcomes of the innovation
  • Environmental conditions (affecting innovation in public sector organisations)

The model can be depicted as a schematic diagram (Bloch, 2010), the elements of which are explained below.

Please Note:

  • "Inputs" refers to any contributions that support the innovation process and it includes all types of investments (i.e. time and resources), training and acquisition of competencies
  • "Innovation Process" refers to the implementation of innovation (i.e. the generation, selection, implementation, sustaining and diffusion of ideas) and it includes organisational processes, work culture, linkages and collaborations
  • "Outputs" refers to the more tangible products arising from the innovation process and it includes any product, service, process and policy innovations; these may impact on the organisation itself and/or its external actors
  • "Outcome" refers to the sometimes less tangible and longer term effects of innovation and it includes the societal, environmental and economic benefits of the innovation; outcomes are frequently difficult to quantify
  • The arrows indicate linkages and the direction of exchange of information between the various elements of the innovation process
  • "Framework Conditions" refers to things that may affect the innovation process and includes budgets, policies, rules and organisational and government structures
  • "External Actors" refers to entities that are affected by, and which can affect, the innovation process.  They include users / clients, business, academia, other government organisations, non-government organisations and the wider stakeholders
  • "Framework Conditions" and "External Actors" make up the "Environmental Conditions" of an innovation process. Both exist irrespective of whether or not innovation takes place. However, an innovation process can impact on External Actors, whereas it cannot impact on Framework conditions.

The NESTA Model

The NESTA model is an adaptation (and simplification) of an earlier model developed by Ernst and Young, and emerged from one of six scoping projects commissioned by NESTA in 2008-09 to develop a measurement framework for public sector innovation in the UK.

Unlike the DAMVAD model, it is not a sequential approach to specifying the stages of innovation. Rather, it is a more integrated or systemic approach which has the following four components:

  • Impact of innovation on organisational performance
  • Innovation Activity through its stages from idea generation to idea diffusion
  • Innovation Capability as part of the internal innovation framework
  • Wider sector conditions as part of the external innovation framework

The European surveys described above (MEPIN, Innobarometer and NESTA) have each used one of the two conceptual approaches outlined above. However, none has completely followed the DAMVAD model, as many of the fields of interest apply to more than one stage of the innovation process.

An Assessment Framework

Given the above two frameworks and the experience of the survey questionnaires trialled in Europe, the APSII Project has developed a list of indicators to measure public sector innovation in both a quantitative and qualitative manner. The data for the indicators can be collected through surveys.

Conceptually, this assessment framework is based more on the DAMVAD than the NESTA model, in that it structures the indicators under the themes of "Input", "Process", "Output", "Outcome" and "Environmental Conditions". However, these indicators could also be employed under a NESTA-type approach. For the purposes of international benchmarking, it is important that there is sufficient level of detail in the data collected to enable its assembly into whichever framework is ultimately adopted.

As the APSII Project develops a definitive survey methodology for measuring innovation in the APS and trials a pilot survey in 2012, it is likely that the thinking on these innovation indicators will be refined and that the list (below) will be reviewed.

Themes and indicators
Input (including objectives)
  • Investment in innovation (e.g. cost of staff working on innovations, direct innovation funding, consulting expenditures, R&D costs and other knowledge purchases)
  • Investment in human resources and skills for innovation (e.g. cost of staff training, workshops, upgrading qualifications, staff exchange/secondment)
  • Sources of innovation (e.g. management/senior staff, employees/frontline staff)
  • Technological infrastructure (incl. ICT) for innovation (e.g. role of technology in enabling innovation, cost of technology, use of specialist services)
Process
  • Explicit innovation strategy and targets (e.g. clearly articulated innovation objectives, specific targets/goals, strategic planning for innovation, dedicated innovation units/cells, dedicated structures/resources for innovation, incentive and reward structures, opinion polling of stakeholder expectations)
  • Management of innovation process (e.g. risk assessment and management, internal review and evaluation of innovation, provision of resources, capacity building)
  • Innovation collaboration and alliances (e.g. outsourcing of service provision, number of external partnership arrangements, use of consultancy services, staff exchange/secondment)
  • Role of Management in innovation (e.g. degree of involvement, risk management, support for and commitment to innovation, innovation champions)
  • Procurement (e.g. requirement for innovative behaviour by suppliers/clients, usage of procurement to promote innovation)
  • Diffusion of innovation (e.g. internal and external reporting of innovation/lessons learnt, degree of internal/external cooperation, communication)
Output
  • Types of innovations (e.g. products, processes, technology, ideas, policy)
  • Effects of innovations (e.g. effects on organisational processes / practices, management systems, organisational communication/promotion/outreach, organisational (re)structure)
  • Related, intangible outputs (e.g. commercialisation, patents, copyright, trademarks)
  • Degree of novelty and scope of innovations (e.g. incremental versus radical innovation, autonomous versus systemic innovation)
Outcome
  • Organisational performance (e.g. productivity and efficiency gains, reduction in operating costs, service delivery, administrative / process improvements)
  • Employee satisfaction (e.g. rating in staff surveys, improved work conditions, level of staff engagement)
  • User / Client satisfaction (e.g. rating in stakeholder surveys, level of complaints, level of innovation adoption by stakeholders)
  • Other intangible effects (e.g. increased trust and legitimacy, socio-economic benefits to society, environmental impacts)
Environmental Conditions (made up of Framework Conditions and External Actors)
  • Collaboration with external actors (e.g. users/clients, suppliers, researchers/academia, conferences/workshops)
  • Demands and expectations by external actors (e.g. clients, users, other government agencies, non-government agencies, universities, the wider public)
  • Policy and legislative constraints (e.g. political directives, whole-of-government policies, legislative frameworks, government structures)
  • Fiscal constraints (e.g. budget and other resource limitations, contractual and procurement rules)
  • Technical constraints (e.g. supplier capacity, technology limitations)
  • Political constraints (e.g. limitations on external collaborations)
  • Organisational culture (e.g. risk tolerance, drivers/barriers to innovation, innovation values)

It should be noted that the above indicators are likely to apply to more than one stage of the development of an innovation (e.g. the initial objectives, management's attitude towards innovation and the way that innovation is managed have an impact on all stages of the innovation process). Also, "Environmental Conditions" refers to both internal and external factors, both of which have the potential to affect the innovation process.
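
As a minimal sketch of the benchmarking point made above, the example below shows how survey items collected at a sufficient level of detail could be tagged and then re-assembled under either a DAMVAD-style or a NESTA-style framework. The item names and theme tags are hypothetical, not the actual APSII instrument.

```python
# Each survey item carries a tag for both frameworks, so the same responses
# can be grouped under whichever framework is ultimately adopted.
survey_items = [
    {"id": "q1", "text": "direct innovation funding",    "damvad": "Input",   "nesta": "Innovation Capability"},
    {"id": "q2", "text": "risk assessment of new ideas", "damvad": "Process", "nesta": "Innovation Activity"},
    {"id": "q3", "text": "new services introduced",      "damvad": "Output",  "nesta": "Innovation Activity"},
    {"id": "q4", "text": "reduction in operating costs", "damvad": "Outcome", "nesta": "Impact"},
]

def assemble(items, framework):
    """Group item ids under the chosen framework's themes."""
    grouped = {}
    for item in items:
        grouped.setdefault(item[framework], []).append(item["id"])
    return grouped

print(assemble(survey_items, "damvad"))  # {'Input': ['q1'], 'Process': ['q2'], ...}
print(assemble(survey_items, "nesta"))   # {'Innovation Capability': ['q1'], ...}
```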

General Survey Considerations

There are several issues to keep in mind when embarking on a survey to collect information on public sector innovation within an organisation.

Firstly, surveys are generally resource intensive. Depending on the extent of the sample size, a survey may take several months to complete and may be quite costly. Costs not only accrue to the agency conducting the survey but also to the responding participants in terms of their time invested.

Surveys have the following stages:

  1. design and development of questionnaire(s)
  2. pre-testing of draft questionnaire(s)
  3. conduct of survey
  4. analysis/evaluation of results and report preparation.

These stages require specific expertise and the application of questionnaire design guidelines if they are to be conducted appropriately. For example, the way questions are phrased in a questionnaire can impact on how well questions are answered and hence the quality of the survey output. Assistance from knowledgeable and experienced professionals is advisable during these stages.

For example, the initial design requires a clear understanding of what the survey is trying to achieve. The pre-testing of the draft questionnaire should be conducted by trained behaviourists who can assess how respondents react to the questions and whether they are answering them from the correct frame of reference.

The actual survey must be conducted in a manner that is appropriate for the sample population chosen and consistent with the relevant data protection laws and Privacy Act provisions.

Finally, the analysis stage requires statistical expertise in the evaluation of the data.

Agencies like the Australian Bureau of Statistics (ABS) and APSC, which undertake regular surveys, have the infrastructure and expertise to conduct such studies. However, if agencies wish to undertake their own surveys, it is recommended that they seek external advice on questionnaire design and on the execution of the survey if such in-house expertise is not readily available.

Courses, such as those offered by the ABS, can teach basic questionnaire design principles such as:

  • Context of questionnaire design
  • Defining survey objectives
  • Developing content based on survey objectives
  • Determining most appropriate collection methodology
  • Understanding how respondents answer questions
  • Developing questionnaire wording
  • Ordering of questions
  • Format and layout
  • Accompanying information
  • Testing questionnaires
  • Evaluating questionnaires

Specific Survey Considerations

In addition to the above general considerations about how an innovation survey should be conducted, there are also specific survey considerations that need to be taken into account before embarking on such a task. Even if the survey is outsourced, the organisation conducting it will require instructions on these issues.

Questionnaire Design

The types of questions posed, the way they are phrased and who they are directed at all influence the survey responses received, and hence the reliability[10] and validity[11] of the survey data. As mentioned above, adherence to survey design principles is essential and the use of relevant expertise is strongly recommended. The following are some issues that should be considered before conducting an innovation survey within an organisation:

  • Purpose of Survey: Before embarking on a survey, there needs to be clarity about its purpose and what it seeks to measure, as this determines both the scope of the survey and the nature of the questions posed. For example, it should be identified whether the goal of a public sector innovation survey is to provide a quantitative assessment of all the innovation that occurs within the agency, to measure the inputs and outputs of innovation, or to detect the framework conditions (drivers and barriers) underlying innovation and/or organisational attitudes towards innovation. Agreement on these goals is integral to any questionnaire design
  • Type of Survey: Surveys can be conducted as personal interviews, telephone surveys, mail surveys, or email or internet surveys. Each approach has advantages and disadvantages in terms of cost, response rate and quality of answers. For example, personal interviews tend to be expensive but yield a high response rate and generally high quality answers; this approach is used for comprehensive and in-depth coverage of questions. At the other end of the spectrum, email and internet surveys are the cheapest form of survey and the fastest way of reaching a large sample. However, response rates can be poor or of unknown quality, there may be a bias in the responses received, and there is little scope to delve into questions in any depth. Written questionnaires, where the respondent provides the answer, become increasingly cost effective as the sample size and/or the number of questions increases. The data from such surveys are generally easier to analyse, and questionnaires tend to be less intrusive than face-to-face or phone interviews. They also remove any influence that a "middle man" or interviewer may have on the respondent
  • Scope of Survey: The scope of the survey, as defined by the sample size, affects both the survey cost and data quality. When embarking on an innovation survey, a decision needs to be made about who should be surveyed within the organisation. Research has shown that perceptions of innovation differ between employees and management, across levels of seniority, and between areas/divisions within an organisation. Ideally a survey should take account of these varying factors when determining its scope. Surveying only one group runs the risk of bias; ideally all groups should be surveyed and the results analysed in an integrated manner
  • Size of Survey: Survey size has implications for funding and feasibility. However, surveys need to be based on an appropriate sample size, depending on the information sought and the type of statistical analysis to be undertaken. Specifically, the sample size needs to be sufficiently large to show contrast in the data and allow for any necessary stratification. For example, research has shown a correlation between the level of awareness of innovation within an organisation and the degree of knowledge/education and seniority. If the analysis of the survey data seeks to tease out the correlation between these factors, then the survey design needs to stratify for them. It is advisable to seek statistical input on the optimal sample size for any given survey before embarking on this task (a minimal sample-size sketch is given after this list).
  • Frequency of Survey: The timing of the survey is also a consideration, including how it coincides with other innovation surveys (such as those for the APSC's SoS Report). Given the resource implications of running a survey and the delay before innovations come to fruition, it may not be necessary to survey annually. The European Union, for example, conducts Community Innovation Surveys (CIS) on business innovation every three years. The ABS Business Characteristics Survey collects full data on business innovation every two years, with only basic data collected in the alternate years. Any survey questionnaire will need to specify a reference period over which the questions apply and a reference point against which change is measured
  • Length of survey: The number of questions and the reading material accompanying these questions will determine the length of a survey. As a general rule, long and complex questionnaires get fewer responses than short, clear and concise questionnaires. Response rate may be an indicator of the quality of the information provided. A poor response rate casts serious doubt over the validity of the survey results and steps should be taken to maximise the response rate. This includes the shortening of questionnaires and asking only questions that accord with the aims of the study and for which answers are readily available. The use of filter questions (i.e. markers that refer the respondent to another part of the survey if the answer is "No") is recommended in longer surveys
  • Layout and design of survey: The covering letter and presentation of the questionnaire should impart the importance of the survey. Questions should flow from the general to the specific, from the impersonal to the personal and from the easy to the difficult. However, questions also need to be logical and flow smoothly from one question to the next. It is important to be mindful of the data availability and quality when designing a survey questionnaire. Asking detailed quantitative questions on topics where information is not (readily) available leads to inaccurate answers
  • Types of survey questions: Survey questions may be open-ended (i.e. no pre-determined answer is provided in the questionnaire and the respondent is asked to respond in their own words) or close-ended (i.e. a pre-determined response is provided, such as Yes/No, True/False, or a spectrum of choices ranging from strongly agree to strongly disagree). Open-ended questions are useful for exploratory research and the generation of ideas. However, as the answers lack uniformity, they may be difficult to analyse and require skill in interpreting the results. In contrast, close-ended questions are easy to collate, analyse and interpret. However, they may not cater for all possible answers and, unless thoroughly cognitively tested, may not be the best way of soliciting the information sought. Leading and value-laden questions should be avoided as they may result in bias. Vague or double-barrelled questions are likely to confuse the respondent and lead to inaccurate answers. As a general rule, asking respondents to comment on specific aspects that they are familiar with results in more focused and higher quality answers than asking all-encompassing questions.
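
The sample-size point under "Size of Survey" can be illustrated with a standard calculation. The sketch below uses Cochran's formula for estimating a proportion, with a finite population correction and proportional allocation across strata; the agency size, strata and parameters are hypothetical, and any real design should be confirmed with a statistician.

```python
import math

def required_sample_size(population, p=0.5, margin=0.05, z=1.96):
    """Cochran's formula for a proportion, with finite population correction.
    p=0.5 is the most conservative assumption; z=1.96 gives 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Hypothetical agency of 3,000 staff, stratified by classification level.
strata = {"APS 1-6": 2100, "EL 1-2": 750, "SES": 150}
total = sum(strata.values())

n = required_sample_size(total)  # ~341 responses for +/-5% at 95% confidence
# Proportional allocation; note the SES stratum gets only ~17 respondents,
# so it may need deliberate oversampling if SES/non-SES contrasts matter.
allocation = {name: round(n * size / total) for name, size in strata.items()}
print(n, allocation)  # 341 {'APS 1-6': 239, 'EL 1-2': 85, 'SES': 17}
```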

Statistical Analysis of Survey Data

Once a survey has been conducted, there are several issues that should be considered when undertaking an analysis of the survey data and reporting on the findings:

  • Limitations of Questionnaires: Surveys generally capture snapshots of individual perceptions on a given topic at the time of the survey. Frequently the data are qualitative (rather than quantitative). Surveys cannot deal well with context, as information is collected in isolation from the environment. Furthermore, even though correlations can generally be established through analysis, causal relationships cannot be attributed on the basis of the information reported
  • Survey Effects and Data Bias: Several survey effects are known to impact on data quality (accuracy and reliability) and account needs to be taken of these in the analysis
    • When individuals are asked to self-report, they may describe actions/behaviours that differ from what others would describe (or what may in fact be the reality of the situation). For example, in the employee innovation survey conducted by the APSC in 2011, respondents may have overstated their role in the innovation system and downplayed the organisation's achievements in this regard
    • When asked to evaluate others, judgements are often influenced by the overall impression that the respondent has of that person. For example, when asked to rate the role of a manager in supporting innovation a well-liked manager may receive a higher rating than a less well-liked manager even if their role has been identical
    • An aversion to recording answers at the extreme ends of a scale has been demonstrated (i.e. respondents tend to avoid absolutes like "never" and "always"). Caution must therefore be used in ascribing values to such answers
  • Analytical Challenges: The statistical analysis of survey data can be quite complex. Here are some examples of the difficulties that may be encountered (a brief worked sketch follows this list):
    • Large differences in the interpretation of response scales (such as "1 to 5" or "very important to not important") by different respondents can reduce the reliability of the results
    • "Confounding" can occur when factors are causally related. For example, the level of education can be highly correlated with job classification because a specific level of education is often a requirement for many job positions. This causal relationship can make it difficult to determine if job classification has an effect on innovation that is independent of the level of education. Complex covariate analysis is sometimes required to manage confounding
    • Data screening may be required to ensure that only answers of the relevant respondents are included in the analysis, as this could otherwise distort the findings. For example, surveying employees who have no exposure to innovation is likely to yield different results to surveying employees who have exposure to innovation. Including all respondents may be appropriate when asking questions about barriers to innovation but not if asking questions about the benefits of innovation
    • If the survey is undertaken at both an agency (i.e. senior management) and employee level, the integration of the two data sets presents a unique statistical challenge. This can require hierarchical regression models and survey questions that allow for the integration of the two sets of data
    • When analysing survey data, several types of statistical analysis may be necessary to examine the trends in a statistically valid manner. This can range from simple chi-square tests to more complex factor analysis and/or multivariate regression techniques
  • Review of Innovation Indicators: As mentioned under questionnaire design, it is critical that the purpose of a survey is specified before the questionnaire is designed. Surveys should be designed with innovation indicators in mind, to facilitate identification of appropriate questions and collection of appropriate data. However, once the survey data have been analysed, it is worthwhile reviewing the outcomes to ensure that the correct indicators have been constructed from the findings, in terms of their utility as indicators of public sector innovation.
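
As a brief illustration of a first analytical step, the sketch below runs a chi-square test of independence on a hypothetical cross-tabulation of responses (the counts are invented for the example, and the scipy library is assumed). A significant result would show that reported innovation varies with seniority, but, as noted above, separating seniority from confounders such as education would require a covariate model.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: "my workgroup implemented an innovation" (yes/no)
# cross-tabulated against three seniority bands.
#            APS 1-6  EL 1-2  SES
observed = [[310,     145,    38],   # yes
            [290,      85,    12]]   # no

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# A small p-value indicates the yes/no split is not independent of seniority.
# It does not say why: seniority is confounded with education, so a model
# with both factors (e.g. logistic regression) would be needed to separate them.
```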

APSC State of the Service Surveys

The Australian Public Service Commissioner has a statutory requirement to report to Parliament on the state of the APS each year. This requirement is met through the State of the Service (SoS) Report, which draws on a range of information sources including a survey sent to all APS agencies employing 20 or more staff under the Public Service Act 1999, and a voluntary survey sent to a random, stratified sample of APS employees (about 18,000 in 2010-11) from all APS agencies employing 100 or more staff under the Act. Under the Act, Agency Heads are required to provide information to the Commissioner for the purpose of the SoS Report.

Information on innovation within the APS has become more prominent in the State of the Service Reports since 2004-05, reflecting the government's growing focus on innovation as part of its APS reform agenda. The SoS reports provide some insight into employee perceptions of the role that innovation plays in contributing to aspects of working life, such as culture, leadership, recruitment, retention and productivity.

The APSC expanded the questions relating to innovation in its 2010-11 surveys. The relevant sections were:

  1. Section K (pp. 40 – 42) of the Employee Survey (May/June 2011)
  2. Section G (p. 19) of the Agency Survey (June/July 2011).

The APSC provides larger APS agencies (200 or more employees in 2010-11) with their own agency-specific report after the SoS Report has been tabled in Parliament. This report summarises data for each agency and compares it against the APS average. Smaller agencies (less than 200 staff in 2010-11) are currently provided with an amalgamated report, benchmarked against the wider APS, for all agencies of a similar size.

Alternative Data Sources on Innovation

Before embarking on a new and extensive agency survey, it is worthwhile examining all existing innovation data sources within a given agency to obtain a comprehensive overview of what is known about public sector innovation within that organisation. Aside from the APSC surveys discussed above, there exists in Australia a range of other relevant information on public sector innovation. This may include, but is not limited to, the following potential sources:

  1. information on public sector R&D expenditure
  2. data on employee skills, education and training
  3. accounting or budgetary data on investments in and outcomes of innovation
  4. procurement data relating to innovation
  5. costing of technological infrastructure underpinning innovation (including ICT)
  6. performance assessment data on agencies and SES where it relates to innovation
  7. client data on satisfaction with public service delivery (including client surveys and service charter complaints)
  8. general employer-employee surveys (including 360° feedback and performance evaluation)
  9. organisational reviews by external consultants
  10. government finance statistics.

In addition, data may be contained in survey reports, organisational reports (such as Annual Reports, Strategic Reports, Operational Plans and Performance Reports) or held in raw form. However, in some cases extracting the relevant innovation information from these data sets and analysing the innovation component in a statistically robust manner may be challenging.

Templates for Agency Surveys

Should public sector organisations want to conduct surveys on innovation within their workplace, there are several approaches available. The examples listed under "Case Studies of Public Sector Innovation" illustrate some approaches adopted by countries and the Literature Review explains this further.

The two types of questionnaires outlined below are provided as potential templates, if agencies wish to explore workplace innovation outside the regular surveys conducted by the APSC.

CIS-Style Agency Surveys

The Community Innovation Survey (CIS) is a series of harmonised surveys[12] implemented by national statistical offices throughout the European Union and in Norway and Iceland. They are primarily surveys of private industry, designed to give information on the innovativeness of different sectors and regions. Results from these surveys are used for the annual European Innovation Scoreboard.

The CIS is based on the Oslo Manual and examines product, process, organisational and marketing innovation, with more questions on product and process innovation than on the other two types. It looks at the effects of innovation, innovation activities and innovation expenditure, and it also examines the barriers to innovation. The CIS adopts a three-year reference period and a fixed reference point. A revised version of the CIS is the MEPIN questionnaire used by the Nordic countries; this questionnaire was specifically developed as part of a pilot study to measure public sector innovation in those countries.

A CIS-style survey has been developed by the APSII Team as a template for public sector agencies wishing to measure workplace innovation. This survey approach is considered appropriate for agencies that are output/outcome focused and whose innovation benefits are directly measurable. The CIS-style survey is comprehensive and requires detailed fiscal information.

SoS-Style Agency Survey

The agency questionnaire that APS agencies are required to complete as part of the SoS Report is wide-ranging in coverage. For example, the 2010-11 agency survey posed 150 questions in a 58-page document; the innovation component covered six questions and one page.

A modified and expanded version of a SoS-style survey, which expands on the innovation questions, is provided as a template. While covering all five themes outlined in the conceptual framework for innovation indicators, this is a simpler survey to complete and requires no quantitative data on innovation inputs and outputs/outcomes. This approach is better suited to agencies with less measurable functions. The Queensland state government has adopted this style of survey to measure public sector innovation at the state level.


Continued in Measuring and reporting part 2

 

  1. Please note that all quotations in this section are not covered by the Creative Commons licence or Commonwealth Copyright. ↩
  2. Examples of the latter are Borins' survey published in 2001 and the Statistics Canada survey conducted by Louise Earl, published in 2002. ↩
  3. For further information, see http://www.apsc.gov.au ↩
  4. EPSIS was established in 2011 and seeks to develop a conceptual framework for the measurement of public sector innovation. For more information refer to: http://i3s.ec.europa.eu/commitment/32.html ↩
  5. MEPIN stands for Measuring Public Innovation in the Nordic Countries. It is a collaborative project spearheaded by the Nordic countries of Europe (i.e. Denmark, Finland, Iceland, Norway and Sweden). Its main objective is to develop guidelines and a questionnaire for collecting internationally comparable data on innovation in the public sector. The project was initiated by the Danish Ministry of Science, Technology and Innovation and is led by DAMVAD. The project involves a consortium of research and statistics institutions from the Nordic countries. For more information refer to: www.mepin.eu ↩
  6. The NESTI Taskforce is a Working Party of National Experts on Science and Technology Indicators set up by the OECD's Committee for Scientific and Technological Policy. The group is involved in measuring innovation in science and technology (S&T). It aims to improve the methodology for collecting internationally comparable data and their timely availability, and it is involved in the development of S&T indicators. For more information about the NESTI Taskforce refer to: http://www.uis.unesco.org/ScienceTechnology/Documents/38235147.pdf ↩
  7. The Innobarometer is an annual opinion poll of businesses or the general public on attitudes and activities related to innovation policy, conducted by the European Commission. Launched in September 2000, it complements the statistical analysis in the European Innovation Scoreboard. The Innobarometer is conducted as part of the Eurobarometer series. The objective of the 2010 Innobarometer survey was to study the innovation strategies of the European public administration sector in response to changing constraints and opportunities. Participating countries included the 27 EU member countries, Norway and Switzerland. ↩
  8. NESTA is the "National Endowment for Science, Technology and the Arts" – an independent body charged with promoting innovation in the UK and influencing policy makers in their thinking. NESTA acts through a combination of practical programs, early-stage investment, research and policy, and the formation of partnerships to foster innovation and deliver radical new ideas. Funded by a £250 million endowment from the UK National Lottery, NESTA uses the interest from that endowment to fund and support its projects. ↩
  9. "The Measurement of Scientific and Technological Activities, Proposed Guidelines for Collecting and Interpreting Technological Innovation Data", also known as the Oslo Manual, was developed by the OECD. It contains guidelines for collecting and using data on industrial innovation and it applies primarily to the private sector. ↩
  10. Reliability refers to the ability to achieve survey results that are reproducible and consistent with similar groups of respondents, over time and when other people administer the questionnaire. ↩
  11. Validity refers to the ability of questionnaires to measure what they claim to measure. ↩
  12.  Five Community Innovation Surveys were conducted for the following periods: 1992 (CIS1), 1996 (CIS2), 2001 (CIS3), 2002-2004 (CIS4) and 2004-2006 (CIS6).  For more information refer to http://epp.eurostat.ec.europa.eu/portal/page/portal/microdata/cis ↩