Issues concerning intellectual capital metrics and measurement of intellectual capital.

Abstract

At present, intellectual capital reporting appears to have reached a plateau at which questions should be asked about whether these reports can be further improved in terms of information content. This paper contends that significant measurement problems remain to be addressed with respect to intellectual capital reports. Further measurement work is needed to validate the information produced by intellectual capital metrics, so that meaningful analysis of the data can be performed rather than the data merely being accepted at face value. A validation framework has been developed with which to assess current intellectual capital metrics or to guide the development of new ones. Validation is performed using a hierarchical framework containing four validation levels, although it is not considered that every intellectual capital metric published in an intellectual capital report must achieve the highest level of validation.

Keywords: measuring intellectual capital, intellectual capital measures, intellectual capital metrics

Introduction

Did interest in intellectual capital begin with a measurement problem or a management problem? A distinction clearly needs to be made between knowledge management and knowledge measurement; without it, the interaction between the two is not recognised. Yet one is related to the other: if we have a poor understanding of what an intellectual capital metric measures, then a reduced capacity to provide effective knowledge management can be expected. A sound approach to knowledge management should therefore begin by investigating measurement issues with respect to intellectual capital. By completing this exercise, knowledge management, through a deeper understanding of the strengths and weaknesses of intellectual capital metrics, would be expected to improve and become more effective.

The measurement problem arose because many publicly listed companies showed a large difference between their historic cost net asset values and their corresponding market values. One effort to address this measurement 'gap' was to develop new accounting standards, such as Australian accounting standard AAS 38 Revaluation of Non-current Assets, but these solutions had their limitations. For example, AAS 38 can only really work for organisations with significant tangible assets to revalue, i.e. property, plant, and equipment. The information technology revolution gave rise to companies, such as Microsoft Corporation and Cisco Systems, Inc., for which no amount of revaluation of tangible assets was going to bridge the 'gap' between historic cost net asset value and market value. Furthermore, this 'gap' problem was not restricted to IT companies but extended to companies in many other industries--such as chemical manufacture, pharmaceuticals, and biotechnology--that relied heavily on intangibles such as organisation knowledge for their successful operation. Obviously something else was needed, and so there was increasing interest in attempts to measure companies' intangibles: one such attempt focussed on measuring 'intellectual capital'.

While progress has been made in intellectual capital reporting, there is still some way to go regarding measurement issues. For instance, there is still debate over what intellectual capital is. Petty and Guthrie (2000: 155) claim the term is often too all-encompassing, "with the risk that in time the identity of the object will become unclear". For example, Saint-Onge (1996) claimed that intellectual capital had three components--human capital, structural capital, and customer capital. On the other hand, Rothberg and Erickson (2001) claim that there is yet another component, namely 'competitive capital'. However, this may be only a short-term problem. In terms of setting boundaries around what to measure as 'intellectual capital', there appears to be an emerging consensus that intellectual capital has three fundamental components, namely external (customer-related) capital, internal (structural) capital, and human capital (Edvinsson and Stenfelt, 1999; Edvinsson and Malone, 1997; Roos et al, 1997; Sveiby, 1997). The MERITUM project, in developing guidelines for defining intellectual capital, has adopted a similar framework, viz. human capital, structural capital, and relational capital (Bukh, 2001: 2-3).

Non-Theory-Based Intellectual Capital Metrics

An equally important measurement issue is concerned not with setting boundaries on what is and what is not intellectual capital, but with how intellectual capital metrics are constructed. What does the metric measure? This measurement issue is the focus of this paper. A good example of the problems associated with intellectual capital measurements was the claim, often made in the mid-to-late 1990s (for example by Stewart (1997) and others), that a measure of intellectual capital could be derived from the following equation:

MV = NA + IC or IC = MV - NA (1)

Equation (1) implies that the firm's market value (MV) is composed of the net assets value (NA) or the firm's tangible asset value, plus the value of its intellectual capital (IC). Intellectual capital is then calculated from a simple manipulation of this equation as indicated above. From a first principles point of view this measure of IC would appear to have some merit. Obviously the firm's capacity to generate future revenues and dividends, which is then reflected in its share price, lies in the utilisation of assets currently owned and the effectiveness with which these assets are used. Effective utilisation of these assets is obviously dependent on the ability or knowledge contained within the firm. However, this rather simplistic derivation of intellectual capital should not be considered an appropriate measure. The main problem of deriving an intellectual capital measure on the basis of equation (1) is that measures used to derive other measures must themselves be measures from the same measurement space. This is surely not the case with 'MV' being a measure derived essentially from perceptions of future revenue by the firm's owners, and 'NA' being a measure derived from identified and recorded past transactions made by the firm. The above equation is the same as saying:

{Three apples} + {Two train wrecks} = {One broken refrigerator}
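The arithmetic ease of equation (1), and the measurement-space objection to it, can be made concrete with a short sketch. All figures below are invented for illustration only:

```python
# A naive application of equation (1), IC = MV - NA, to a fictitious firm.
# Hypothetical figures for illustration only.
market_value = 50_000_000.0   # driven by investor perceptions of future revenue
net_assets = 12_000_000.0     # derived from recorded historic-cost transactions

intellectual_capital = market_value - net_assets
print(intellectual_capital)   # 38000000.0

# The subtraction is trivial, which explains the metric's popularity; but the
# two operands come from different measurement spaces (perceptions of the
# future vs. records of the past), so the difference has no stable meaning.
```

The ease of calculation is precisely what makes the metric tempting, and precisely why its output should not be accepted at face value.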

Mouritsen (2000) offers a slightly different, but equally compelling, argument against equation (1). Unfortunately, references to the gap between market and book values still appear, possibly due to the ease with which this metric can be calculated--for instance see Leadbeater (1999: 12). Leadbeater (1999: 7) provides another example of an inappropriate measurement, in the extract:

When I was young I was very impressed when my father crushed a Coca-Cola can. In those days Coke cans had to be opened with a can opener. Crushing one was a feat of strength. These days an empty can of Coke can be crushed in an instant. Cans are made from paper-thin metal. The Coke can's dramatic weight loss has been made possible by technologists and manufacturers working out smarter and smarter ways to make cans. Modern drinks cans are 80 per cent lighter than when I was a child. Put it another way: the modern Coke can is 80 per cent technology and 20 per cent metal.

As a metaphor the above quotation is magic, but as a measure it is unfortunately meaningless. First, there is the unsustainable presumption that no technology or knowledge was used to create the old-fashioned can--the one that weighed five times as much as the modern can. Did the process by which steel cans were made not rely on any technology and knowledge? Second, the can is still 100 per cent metal: now 100 per cent aluminium instead of 100 per cent steel. That is, the weight reduction is due mainly to the replacement of the former material with a lighter one. How much of the total loss of weight should be attributed to knowledge or technology, rather than to the fact that a lighter material was used? This measure suffers from the measurement problem of confounding.

Confounding is a mixing of effects due to different variables, which can make it look as if there is a direct association--loss of weight of the soft drink can due only to an infusion of 'technology'--when at best there is an indirect relationship. Consider another example of confounding, in which a study is undertaken of accident levels among men and women drivers. The study finds that a random sample of men is involved in more motor vehicle accidents than an equivalent random sample of women. On the face of this evidence one could conclude that women are safer drivers than men. However, further analysis finds that the men drove a lot more kilometres than the women did. The extra kilometres driven by the men are a confounding factor for this study. In the Coke can example, there is no doubt that technology played a part in making cans lighter, but the major factor was the replacement of the material. A better measure of the impact of knowledge in this situation would be to compare per unit costs of development and manufacture of steel cans with per unit costs of producing the modern aluminium can. Analysis of these data would then provide a more accurate insight into the impact that knowledge and technology have had on the manufacture of soft drink cans.
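The drivers example can be sketched in a few lines (all figures are invented): normalising by the exposure variable, kilometres driven, removes the confounder and here reverses the conclusion drawn from raw counts.

```python
# Hypothetical accident data illustrating confounding by exposure (km driven).
men_accidents, men_km = 120, 2_400_000       # assumed figures
women_accidents, women_km = 80, 1_000_000    # assumed figures

# Raw counts suggest men have more accidents...
print(men_accidents > women_accidents)       # True

# ...but normalising by kilometres driven adjusts for the confounder.
men_rate = men_accidents / men_km * 100_000      # accidents per 100,000 km
women_rate = women_accidents / women_km * 100_000
print(men_rate, women_rate)                      # 5.0 8.0

# Per kilometre driven, the women in this invented sample actually have the
# higher accident rate: the raw comparison was driven by exposure, not safety.
```

The same normalisation logic underlies the suggested per-unit-cost comparison for the steel and aluminium cans.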

These sorts of simple intellectual capital metrics, such as market-to-book ratios and the Coke 'knowledge' metric mentioned above, are attractive due to the ease with which they are constructed. One argument could be: "Well, it is better to have something than nothing". However, this temptation should be strongly resisted: metrics that provide incorrect measures, which organisations then rely upon, are worse than no metric at all. Relying on the known fact that we have zero knowledge is far better than relying on knowledge that is assumed to be good but is in fact bad. This argument has been recognised by many organisations, which have assigned little value to these simple intellectual capital metrics. They provided no information value, either in terms of correctly understanding what made up the organisation's intellectual capital or how much intellectual capital it had at any one point in time. Accordingly, efforts were directed towards finding better and more sophisticated metrics that could more accurately and completely measure an organisation's intellectual capital, or the changes that an organisation experiences in its intellectual capital base across time.

Theory-Based Intellectual Capital Metrics and Reports: Some Measurement Issues

Given the fundamental flaw of the most prominent non-theory-based intellectual capital metric mentioned above, viz. market-to-book ratios, the response was to develop more sophisticated models such as Skandia's Navigator (Edvinsson, 1997), Sveiby's Intangible Asset Monitor (Sveiby, 1998), as well as other intellectual capital models or frameworks developed by Roos and Roos (1997), Brooking et al, (1998), and Mouritsen et al, (2001). These models formed the basis for the development of theory-based intellectual capital metrics. That is, the major difference between these two groups of metrics was a linkage between an intellectual capital metric and some form of theoretical foundation. Although these metrics represented an advance, there are still problems concerning the measures they produce. One problem is the differences that exist between the theoretical models, even if these differences are often minor. That is, there is no generally accepted theoretical model on which to base reporting of intellectual capital (Mouritsen, 2000; Larsen et al, 1999). A more important problem from a measurement perspective for these theory-based intellectual capital metrics can be demonstrated using the 'Human capital' section of the 1999 intellectual capital report of the Austrian Research Centers, Seibersdorf (Leitner et al, 2001: Appendix), which is shown in Table 1.

The first problem is that these reports lack integration between metrics. For instance, the measures shown above for 'Human resources' and 'Training' are obtained from different measurement spaces and so cannot be used to derive further intellectual capital measures, either through some form of direct manipulation or indirectly, say in constructing a 'Human capital index'. They are restricted to interpretation within themselves across time, or at a high level of comparison between organisations. Larsen et al, (1999) do not see a real problem with this, claiming that the purpose of these statements is not to have a common theme or an overriding logic that allows deeper analysis of the data reported. But even if this view is accepted, there are still measurement issues to be considered for individual metrics. The data shown in these types of intellectual capital reports, such as the extract above, do not provide clear answers, from a knowledge management perspective, to questions such as:

What does it mean that more of our senior research staff left last year than did in previous years?

Did the firm's intellectual capital increase, remain the same, or decrease? Arguments can be put forward to support all three conclusions. Human intellectual capital increased because more senior and less enthusiastic staff, who now rest on efforts made in their younger years, have been replaced by younger and more enthusiastic researchers. Intellectual capital decreased because quite effective senior staff were replaced by less experienced staff who require a lot of supervision of their research activities. Intellectual capital remained the same because these effects balanced each other out. Accordingly, in order to progress research in this area, the inability to perform further analysis on information provided in current intellectual capital reports needs to be addressed. Indeed, interest by organisations in developing intellectual capital reports appears to be on the wane. Even Skandia is not promoting its intellectual capital reports as vigorously as it did during the late 1990s, and has ceased publishing them. If intellectual capital is to continue as a viable research field, and intellectual capital statements are to gain general acceptance across a broad range of organisations, industries, and countries, then these measurement issues need to be addressed.

Validation of Intellectual Capital Reports

One improvement that could be made to current intellectual capital reports is to subject the metrics to validation procedures, perhaps using the Balanced Scorecard method as a model: relationships between metrics contained within intellectual capital reports would be investigated and made explicit. In addition to more disclosure, which should increase the information value of intellectual capital reports, this paper proposes two other initiatives. First, there must be greater standardisation of intellectual capital reports. The components of intellectual capital must be agreed upon, and the metrics associated with measuring those components must also be agreed upon. This would add to the information value of intellectual capital reports because valid comparisons of data could be made between organisations for similar time periods, as well as for the same organisation across different time periods.

Secondly, the actual measurements reported should be validated. This would assist in further analysis and interpretation of the metric's measures, and in understanding the metric's underlying raison d'être. A validation framework would play a role similar to that which the audit performs for an organisation's financial accounting reports. One outcome of validation is that the value of the data contained in these reports will increase, because people using the reports will have greater confidence in deriving meaning from the data. Depending upon the validation level an intellectual capital metric achieves, the metric may allow a correct answer to questions such as:

If there was greater retention of existing staff by the organisation then has this contributed to an increase in the organisation's human intellectual capital or not?

What sort of validation should be performed on intellectual capital metrics? The fundamental principle underlying the validation framework below is that intellectual capital metrics can be analysed to determine their 'validation level'. Validation levels are hierarchical: in other words, a potential 'Level 2' metric must first achieve 'Level 1' validation before being assessed to determine whether it satisfies 'Level 2' validation criteria. Obviously, higher validation levels for an intellectual capital metric mean a greater capacity for that metric to be used in meaningful analysis and interpretation of the measures produced. Furthermore, more work is necessary to develop a broader range of 'tests of goodness' to determine whether a metric has actually achieved a certain validation level. Incorporating the valuable work completed recently by the Meritum Project (Meritum, 2001: 86-88) is seen as a possible starting point. In addition, value is seen in applying the validation framework to the development of new intellectual capital metrics as well as to assessing existing metrics.

The framework's validation levels are as follows:

a. Level 1: The metric is valid from a first principles basis. To achieve Level 1 validation there must be an underlying logic between the metric's structure and what the metric is attempting to measure: the metric justifies its existence. One example of how Level 1 validation of an intellectual capital metric should occur is provided by the year 2000 intellectual capital report for Systematic Software Engineering A/S, a Danish software house (Systematic, 2001). This report contains a metric called the 'Cola Index', which measures the number of bottles of Coca-Cola drunk per employee during the year. The Cola Index shows a drop from 104 bottles in 1997/98 to 102 bottles in 1998/99. What sort of justification could be put forward to demonstrate that this metric achieves Level 1 validation? Presumably more Coca-Cola drunk (and there is possibly a confounding issue here, in that more Coca-Cola may have been drunk in 1998/99 because employees purchased fewer but bigger bottles) means that employees are more alert and so develop better software, thereby increasing human capital. However, this relationship would seem at best a tenuous one, and so the Cola Index should not be judged as having achieved Level 1 validation.

As a general rule, intellectual capital metrics should have a link to either increasing the organisation's ability to gain knowledge, increasing the organisation's ability to improve utilisation of its existing knowledge, or decreasing the chances of the organisation losing knowledge it should not lose. In most cases Level 1 validity could be established as simply as writing a plain-English statement such as the following:

Higher levels of staff training will mean that, on average, staff are better trained and so will be able to solve problems in less time and more effectively than they could prior to being trained.

Note that the above only ensures the metric passes Level 1 validation, rather than establishing that the metric is a 'good' metric. Obviously, those metrics that pass higher levels of validation would be considered to be better metrics than those that do not.

b. Level 2: The metric's measurement scale is well understood. There should be no automatic presumption of a linear scale. Any intellectual capital metric to which the law of increasing marginal returns applies should be measured against an exponential scale rather than a linear one. For example, the rate of new knowledge diffusion within an organisation would be expected to follow an exponential scale, as one person tells two others, who in turn each tell two others, and so on. Another non-linear scale is a logarithmic one. Any metric to which the law of diminishing marginal returns applies should be measured against a logarithmic scale rather than either a linear or an exponential one. If an organisation spent 1 per cent of total expenditure on training employees, and the training was done well, it would expect to achieve a particular knowledge increment. But increasing this percentage to, say, 2 per cent of total expenditure may not double this knowledge increment; anything less would mean that the measurement scale is logarithmic rather than linear. In terms of relational capital, Ittner and Larcker (1998) claim there are diminishing returns to an organisation's investment in customer satisfaction made in an attempt to increase customer retention rates. Finally, measurement scales should also work in the same way for both increasing and decreasing values of the metric.
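The diminishing-returns point can be illustrated with a small sketch under an assumed logarithmic response; the functional form and constant below are illustrative assumptions, not empirical estimates:

```python
import math

def knowledge_increment(spend_pct, k=1.0):
    """Assumed logarithmic response of the knowledge increment to training
    spend (as a percentage of total expenditure); k is an arbitrary scale."""
    return k * math.log(1 + spend_pct)

inc_1 = knowledge_increment(1.0)   # increment at 1% of expenditure
inc_2 = knowledge_increment(2.0)   # increment at 2% of expenditure

# Under a logarithmic scale, doubling the spend yields less than double
# the knowledge increment, exactly as the text describes.
print(inc_2 < 2 * inc_1)           # True
```

Interpreting the 2 per cent measurement against a linear scale would therefore overstate the expected knowledge gain.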

c. Level 3: The metric should allow valid comparisons with other internal measurement data. Level 3 validation of intellectual capital data could be performed using objectively based time measures (Kannegieter, 2000), but should not be restricted to earlier measures of the same metric. Wherever possible, validation against other organisation data should be performed. One simple mechanism, although not the only one available, would be to correlate specific intellectual capital metrics with related financial data. Correlations could be either positive or negative; in the case of positive correlations, increases in values of the intellectual capital metric can be presumed to relate to increases in the related financial data. This sort of validation would address the measurement problem discussed above for staff retention rates in the 1999 intellectual capital report for the Austrian Research Centers, Seibersdorf (Austrian Research Centers, Seibersdorf, 2000). Establishing a relationship between intellectual capital and financial data allows a deeper level of understanding to emerge from the information disclosed within current intellectual capital reports, such as being able to determine the contribution of staff to overall organisation performance. However, care needs to be taken in assessing the type of relationship between an intellectual capital metric and a financial metric. For instance, the problem of confounding should be investigated, to ensure that no relationship with a different intellectual capital metric exists that affects, either positively or negatively, the same financial metric.

Another issue that exists when looking at time series data is best explained by looking at the impact that staff training programmes have on organisation knowledge as well as on organisation performance. Given that a recent training programme had a significant positive outcome, then presumably there is an increment to human capital (and possibly, indirectly, to structural capital). However, there may be some delay between the improvement in human capital and its translation into improved financial performance. That is, the financial metric lags the intellectual capital metric (or, looked at in reverse, the intellectual capital metric leads the financial metric). The issue is one of making sure that metrics derive their values from related time periods. If they do not, then any simplistic analysis, such as a direct comparison of two measures, may not be relevant and will therefore have reduced meaning. The work of Bassi et al, (2001) represents an important step in recognising the effects of leads and lags on human intellectual capital metrics; and Johanson et al, (2001a; 2001b) have also provided valuable input in their discussion of using correlation theory with respect to human intellectual capital metrics.

d. Level 4: The metric should allow valid comparisons with data for other organisations. Level 4 validation would allow organisations to assess their knowledge management effectiveness against other companies, most particularly their immediate competitors. Data from these metrics could be used in benchmarking: depending on the level of disclosure this could include process benchmarking as well as competitive or generic benchmarking. For this level to be achieved, similar data gathering and data calculation procedures must be present. Again confounding can rear its ugly head. Keeping to the example of staff training, one intellectual capital metric that may be derived is the percentage of staff that attended training courses during the current financial year. For one organisation these courses may include a significant number of computer-assisted self-learning courses as well as more formal face-to-face courses; for another organisation all staff training is conducted face-to-face. Valid comparisons of staff training efforts between these organisations cannot be made. Knowing when such comparisons can be made requires higher levels of disclosure than is currently shown in intellectual capital reports such as the 2000 report of the Austrian Research Centers, Seibersdorf (Austrian Research Centers, Seibersdorf, 2001), the 2000 report for the Carl Bro organisation (Carl Bro, 2001), or the 2000 report for Systematic Software A/S (Systematic, 2001). An alternative strategy to increased disclosure (which some organisations may be reluctant to provide) would be to produce commonly agreed standards on how particular important intellectual capital metrics are to be calculated, similar to international or national accounting standards. Adherence to the standard for one or more intellectual capital metrics would mean that the data are comparable between organisations, and so meaningful analysis could be performed, either by the organisation itself or by independent third parties such as financial analysts. A question to be considered is which organisation should assume responsibility for the creation and maintenance of these intellectual capital metric standards. Blair and Wallman (2001) argue for this responsibility to be shared by both private sector and public sector organisations as well as governments.
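The comparability requirement implied above can be sketched as a simple check; the metric name and method fields are hypothetical stand-ins for whatever a disclosure standard would actually specify:

```python
# Hypothetical disclosures of the staff-training metric by two organisations.
org_a = {"metric": "staff_trained_pct", "value": 72.0,
         "method": {"includes_self_learning": True, "period": "FY2000"}}
org_b = {"metric": "staff_trained_pct", "value": 65.0,
         "method": {"includes_self_learning": False, "period": "FY2000"}}

def comparable(a, b):
    """Metrics are comparable only if both the metric name and the
    disclosed calculation method match."""
    return a["metric"] == b["metric"] and a["method"] == b["method"]

# The values look comparable, but the calculation methods differ
# (self-learning courses counted by one organisation but not the other).
print(comparable(org_a, org_b))   # False
```

In effect, an agreed metric standard would fix the method fields in advance, making the check pass by construction for any two adhering organisations.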

It is not expected that organisations would necessarily limit disclosure of intellectual capital to Level 4 metrics alone. Rather, these reports would consist of a mixture of metrics (or indicators), and each metric would have an indication as to what validation level that metric achieved. Indeed, the application of this validation framework would go a long way to addressing the call made by Blair and Wallman (2001: 63) to develop 'a coherent framework of value indicators'. A good point to begin would be with the IC metrics shown in Appendix 2 of 'A Guideline for Intellectual Capital Statements--A Key to Knowledge Management' (Danish Agency for Trade and Industry, 2000: 72-93), or in Appendix 6.2 of 'Intellectual Capital Managing and Reporting' (Nordic Industrial Fund, 2001:76-77).
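One way such a per-metric validation tag could be derived, assuming each level's 'tests of goodness' reduce to a pass/fail judgement, is sketched below:

```python
def validation_level(checks):
    """Return the highest validation level achieved, given ordered pass/fail
    results for Levels 1..4. Levels are hierarchical, so counting stops at
    the first failure."""
    level = 0
    for passed in checks:
        if not passed:
            break
        level += 1
    return level

# e.g. a metric that is sound in principle (Level 1) with a well-understood
# scale (Level 2), but not yet validated against internal (Level 3) or
# external (Level 4) comparison data:
print(validation_level([True, True, False, False]))   # 2
```

Publishing this tag alongside each metric would let readers see at a glance how much analytical weight a given measure can bear.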

Greater insight would therefore be available when intellectual capital reports are interpreted by employees, owners, auditors, and other interested third parties such as financial analysts. Furthermore, while the above has focussed attention on how metrics are constructed, attention also needs to be directed towards the data used by the metric. Well-constructed metrics are a necessary but not a sufficient condition for improving the value of current intellectual capital reports. If processes are not in place to ensure high levels of data quality, then the measures will not perform as well as they should. For example, assessments of customer satisfaction could be based upon data obtained through a customer survey in which many customers have written slightly incorrect or completely incorrect answers to survey questions. In this situation any further downstream processing and analysis using these contaminated data can only itself become contaminated. Accordingly, processes are needed to ensure high information quality as well as high validity of the metric. It is interesting to note that the year 2000 report for Systematic Software A/S (Systematic, 2001: 17) was audited, and so a beginning has been made in addressing this data quality issue.

Conclusion

This paper has indicated some new directions for the on-going development of intellectual capital reports. Although progress has been made, and these reports now contain more and better information than was apparent from the non-theory-based intellectual capital metrics, it is considered that intellectual capital measurement and metrics need to continue to evolve and improve. As indicated in the discussion, efforts should be focussed upon investigating current metrics to determine their overall validity. The issue of confounding in the data produced by intellectual capital metrics should be a high-priority area of investigation. Validation would improve the ability of organisations to draw meaningful conclusions about themselves from their own intellectual capital data, as well as to make meaningful assessments of how well or otherwise they are doing with respect to their knowledge management processes when compared with other similar organisations. Finally, it should also be recognised that improved organisation performance is not merely about good knowledge management alone. While organisations obviously strive to maximise opportunities to create new knowledge, make maximum use of existing knowledge, and minimise situations in which valuable knowledge is lost, in many cases improved organisation performance could be due mainly to a random fortuitous synergy of circumstances, otherwise known as luck.
Table 1

Extract from 1999 Intellectual Capital Report, Austrian Research
Centers, Seibersdorf


Human capital
 Human resources
 New staff total
 Research staff
 Total staff fluctuation
 Total staff leaving
 Research staff, total
 Of whom aged 25-35
 Of whom aged 25-35 within two years
 Of whom aged 35-45
 Of whom aged 45-59
 Of whom retired
 Total retirement
 Average seniority (in years)
 Percentage of research staff
 Number of awards

 Training
 Days training per employee, total
 Days training per employee: communication & management
 Days training per employee: computer literacy
 Days training per employee: technical
 Training cost in per cent of salary, per employee


References

Austrian Research Centers, Seibersdorf. 2000. Intellectual Capital Report 1999, Town Office: A-1010, 1 Kramergasse, Vienna, Austria.

_____. 2001. Intellectual Capital Report 2000, Town Office: A-1010, 1 Kramergasse, Vienna, Austria.

Bassi, L.J., Harrison P., Ludwig J. and McMurrer, D.P. 2001. Human Capital Investments and Firm Performance. Unpublished, available at http://www.knowledgeam.com/aa.pdf.

Blair, M.M. and Wallman S.M.H. 2001. Unseen Wealth: Report of the Brookings Task Force on Intangibles, Brookings Institution Press, Washington D.C.

Brooking, A., Board, P. and Jones, S. 1998. "The predictive potential of intellectual capital", International Journal of Technology Management, 16(1-3): 115-125.

Bukh, N. 2001. "Making the Intangible Tangible: Entrepreneurship for the Future", MERITUM Workshop, European Union, Sweden, 19-20 March.

Carl Bro. 2001. Intellectual Capital Accounts, 1999-00, http://www.carlbro.com/.

Danish Agency for Trade and Industry. 2000. A Guideline for Intellectual Capital Statements--A Key to Knowledge Management, Ministry of Trade and Industry, Copenhagen, www.efs.dk/icaccounts.

Edvinsson, L. 1997. "Developing intellectual capital at Skandia", Long Range Planning, 30(3): 366-373.

Edvinsson, L. and Malone, M., 1997. Intellectual capital: Realising your company's true value by finding its hidden brainpower, Harper and Collins, New York.

Edvinsson, L. and Stenfelt, C., 1999. "Intellectual capital of nations--for future wealth creation", Journal of Human Resource Costing and Accounting, 4(1): 21-33.

Ittner, C.D. and Larcker, D.F. 1998. "Are nonfinancial measures leading indicators of financial performance? An analysis of customer satisfaction", Journal of Accounting Research, 36: 1-35.

Johanson, U., Martensson, M. and Skoog, M. 2001a. "Mobilising change through the management control of intangibles", Accounting, Organizations and Society, 26: 715-733.

Johanson, U., Martensson, M. and Skoog, M. 2001b. "Measuring to understand intangible performance drivers", The European Accounting Review, 10(3): 407-437.

Kannegieter, T. 2000. National Knowledge Management Framework: Preliminary Draft, Standards Australia International Ltd., Sydney, Australia.

Larsen, H.T., Bukh, P.N. and Mouritsen, J. 1999. "Intellectual Capital Statements and Knowledge Management: 'Measuring', 'Reporting' and 'Acting'", Australian Accounting Review Special Issue--Knowledge Management: How to Corner an Elusive Quarry, 9(3): 15-26.

Leadbeater, C., 1999. "New Measures for the New Economy", International Symposium on Measuring and Reporting Intellectual Capital: Experiences, Issues, and Prospects, OECD, Amsterdam, June.

Leitner, K.H., Bornemann, M. and Schneider, U. 2001. "The making of the first enterprise wide intellectual capital report for a European Research Technology Organisation", Proceedings of 4th World Congress on Intellectual Capital, Hamilton, Ontario, 17-19 January.

Meritum Project. 2001. Measuring Intangibles to Understand and Improve Innovation Management: Final Report, Spain, www.kunne.no/meritum.

Mouritsen, J. 2000. "Valuing Expressive Organisations: Intellectual capital and the Visualisation of Value Creation", in Schulz, M., Hatch, M.J. and Larsen, M.H. (eds.), The Expressive Organisation: Linking Identity, Reputation, and the Corporate Brand, Oxford University Press, Oxford.

Mouritsen, J., Larsen, H.T., Bukh, P.N. and Johansen, M.R. 2001. "Reading an intellectual capital statement: Describing and prescribing knowledge management strategies", Journal of Intellectual Capital, 2(4): 359-383.

Nordic Industrial Fund. 2001. Intellectual Capital Managing and Reporting, A Nordika Project, www.nordicinnovation.net.

Petty, R. and Guthrie, J. 2000. "Intellectual Capital Literature Review: Measurement, Reporting and Management", Journal of Intellectual Capital, 1(2): 155-176.

Roos, G. and Roos, J. 1997. "Measuring your company's intellectual performance", Long Range Planning, 30(3): 413-426.

Roos, J., Roos, G., Dragonetti, N. and Edvinsson, L., 1997. Intellectual Capital: Navigating in the new business landscape, MacMillan Business, London.

Rothberg, H.N. and Erickson, G.S. 2001. "Competitive Capital: A Fourth Pillar of Intellectual Capital?", Proceedings of 4th World Congress on Intellectual Capital, Hamilton, Ontario, 17-19 January.

Saint-Onge, H. 1996. "Tacit knowledge: The key to the strategic alignment of intellectual capital", Planning Review, 24(2): 10-14.

Stewart, T.A. 1997. Intellectual capital: The new wealth of organisations, Doubleday, New York.

Sveiby, K.E., 1997. The New Organisational Wealth: Managing and Measuring Knowledge Based Assets, Berrett-Koehler Publishing, Inc. San Francisco.

_____. 1998. "Intellectual capital: Thinking ahead", Australian CPA, 68(5): 18-21.

Systematic Software A/S. 2001. Intellectual Capital Report 2000, Gothersgade 14, 1 Copenhagen, Denmark.

Ian Caddy

Ian Caddy is a lecturer in technology management in the School of Management, University of Western Sydney. He has held academic positions at the University of New South Wales and the University of Technology, Sydney, and has written over 10 books and several journal articles and conference papers on the practical application of information technology within the business environment. His research interests include the application of information technology to businesses and the impact that information technology can have on business processes.
COPYRIGHT 2002 Singapore Institute of Management

Article Details
Author: Caddy, Ian
Publication: Singapore Management Review
Date: Dec 15, 2002
Words: 5,410


