April 9-12, 2008
Montréal, Québec, Canada

Towards New Metrics Of Success For On-line Museum Projects

Sebastian Chan, Powerhouse Museum, Sydney, Australia


All over the world museum Web site visitation is growing as better access and faster speeds increase Internet traffic to all categories of Web sites. Museums are increasingly experimenting with and implementing more interactive services on their own Web sites, and also decentralising their on-line 'brand' into a multiplicity of social networking sites and services, as well as virtual worlds.

This increased environmental complexity makes the traditional Web analytics and metrics that museums have used to measure and track success on the Web for the past decade increasingly inadequate. Occasional user surveys and server-side log analysis can no longer be relied upon by Web teams to guide them towards making museum sites more user-centric and effective. This is complicated further by the greater and greater proportion of on-line museum visitors entering the sites via a search engine. Whilst basic reporting currently satisfies government and sometimes corporate benefactors, far more complex analysis is required for museums themselves to more effectively evaluate and refine their on-line offerings for their users.

This paper argues that museums must take another look at their analytics tools and methods. It calls for a new approach and examines new ways of measuring the use of on-line museum projects and Web sites. It looks at the new range of analysis tools available to Web teams and, referencing the broader segmentation work of Peacock and Brownbill (2007), proposes practical ways a segmented approach can work for museums of all sizes. It proposes that museums need to take stock of their comparative positioning in each of these segments, rather than use raw figures.

Drawing upon search engine optimisation techniques and demand-side competitive ISP-level intelligence, it combines these with new site-specific techniques to allow museums to better learn how their existing users behave on their Web sites, as well as to identify the potential audience for their offerings, one that is currently untapped.

(This paper is intended to accompany a practical workshop which demonstrates, with examples, many of the concepts and techniques described within.)

Keywords: evaluation, measurement, strategy, programming, usability, analytics


The Council of Australasian Museum Directors announced that in 2005/6, 37 million user visits were made to its member museums (Council of Australasian Museum Directors, 2007). This was compared to 12 million physical visitors. It was a simple comparison that generated significant media interest, raising the profile of museums in the community, and importantly, in political and philanthropic funding circles.

What do these figures actually mean? Are they as impressive as they sound?

There are three immediate problems with these figures.

  • First, each museum uses different tools to measure on-line visitation, and each of these tools counts 'visitors', 'unique visitors' and 'user sessions' in different ways.
  • Secondly, there is no detail on where these visitors were from or what their visit intentions were.
  • Thirdly, there was no mention of the unmeasured ways museum content was being used once it had 'leaked' beyond the museums' own Web sites.

These problems are not unique to the on-line environment. Of the 12 million physical visitors, we do not know how many were visiting a museum because of its collection or to see a blockbuster or niche exhibition, how many were unwillingly dragged along by their parents or teachers, or how many were just attending a corporate seminar that happened to be onsite. The figure also tells us nothing about their 'experience' of the visit, nor how long they spent onsite.

Traditional Analytics

It sounded so simple - everyone who visited your Web site would be recorded in the server logfile. It sounded so 'accurate', precise and quantitative. It promised to answer all those questions that are hard to answer about a physical visit - time spent, what was looked at, and how they got there. Unfortunately, it was a false promise.

Web statistics are a good example of the easy abuse of numerical data. Web analytics tools have been developed and refined largely to serve the needs of the on-line advertising industry. Outside the cultural sector, traffic figures allow Web sites to charge more for advertising and promise ever more 'eyeballs'. Traffic figures have been used to justify impression-based Web advertising (known as CPM, cost per thousand impressions) - the most lucrative on-line revenue generator, but are reliant on trust in traditional Web metrics. In contrast, Google's pay per click advertising model (known as CPC, cost per click) radically changes the way on-line advertising works by only incurring a cost when a visitor actually clicks on an advertisement. Within the cultural sector, traffic figures are used to justify funding and sponsorship, and to 'prove' the success of an on-line project.
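The difference between the two pricing models is simple arithmetic. A minimal sketch, with illustrative rates (the $5 CPM and $0.50 CPC figures are invented for the example):

```python
def cpm_cost(impressions, rate_per_thousand):
    # Impression-based pricing: the advertiser pays for every thousand
    # impressions served, whether or not anyone clicks.
    return impressions / 1000 * rate_per_thousand

def cpc_cost(clicks, rate_per_click):
    # Click-based pricing: a cost is incurred only when a visitor
    # actually clicks on the advertisement.
    return clicks * rate_per_click

# 100,000 impressions at a $5 CPM, versus a 1% click-through
# rate (1,000 clicks) at $0.50 per click.
print(cpm_cost(100_000, 5.0))  # 500.0
print(cpc_cost(1_000, 0.50))   # 500.0
```

The CPM model charges the same whether zero or a thousand visitors click, which is why it depends entirely on trust in the underlying traffic figures.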

Metrics Help Improve Design, But Don’t Measure Success

Museums usually undertake extensive research, conduct elaborate user surveys, and trawl through years of usage data when they are redesigning their Web presence (Thomas and Carey, 2005, Peacock and Brownbill, 2007). Whilst this is important when re-designing, often, after launching the new presence, the specialist resources brought in to undertake this work are jettisoned, and ongoing 'Web site evaluation' becomes a euphemism for just reporting 'monthly visitor figures'. One of the factors driving this is the 'research it, build it, launch it, move on' model that museum Web teams have inherited organisationally from the exhibition focus of museums in the physical world. Unfortunately, this model prevents museum Web teams from developing the necessary agility required to respond to rapidly changing on-line audience behaviours and expectations, and quickly leads to a sub-optimal on-line experience for users.

If they are not a key part of your revenue generation strategy, then Web metrics should be used primarily to help you improve the usability, accessibility and performance of your Web site - on an ongoing basis. Raw usage figures are virtually meaningless and divert important attention away from the very things that statistics can help with. In fact, using a range of different tools makes it possible to monitor your museum's Web presence and keep track of trends that indicate its effectiveness over time, so that ongoing improvements can be made.

A Summary Of Old Problems

The Problem With Log File Data

Server log files record every file requested by another computer. Nowadays, though, this rarely represents a real human user at the end of the line. A well built, somewhat standards-compliant museum Web site will be crawled almost daily by Google, Yahoo, MSN and all manner of minor search engine bots. Sometimes these bots will behave themselves and operate as a single user session, but it is not uncommon to have thousands of separate bot sessions recorded in a logfile. Add to this the swarms of content scrapers that feed spamblogs, and the unknown hack attempts that try to inject malicious code into your dynamically served page URLs, and an average museum Web site log might be contaminated by as much as 50-60% non-human traffic. Established log analysis packages will usually be able to identify at least the known search engine bot traffic, but each package counts this traffic differently, and some will inevitably slip through - especially the site scrapers.
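Filtering known bots out of a raw logfile can be sketched as below. The user-agent signatures and the log format (Apache 'combined' format, with the user agent as the last quoted field) are assumptions for the example; a production bot list is far longer and needs constant updating:

```python
import re

# Illustrative signatures only; real bot lists run to hundreds of entries.
BOT_SIGNATURES = ("googlebot", "slurp", "msnbot", "bot", "crawler", "spider")

def is_bot(user_agent):
    # Crude check: does the user-agent string contain a known bot signature?
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

# In the Apache combined log format, the user agent is the final quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def human_lines(log_lines):
    # Yield only the log lines that do not look like bot traffic.
    for line in log_lines:
        match = UA_PATTERN.search(line)
        if match and not is_bot(match.group(1)):
            yield line

sample = [
    '1.2.3.4 - - [10/Apr/2008] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows)"',
    '5.6.7.8 - - [10/Apr/2008] "GET / HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
]
filtered = list(human_lines(sample))  # only the first line survives
```

Even this crude pass will not catch scrapers that spoof a browser user agent, which is exactly why some non-human traffic always slips through.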

The Problem With Page Tagging Data

The alternative to log file analysis from the Web analytics industry was page tagging. With page tagging, each time a Web browser loads a page in full, it runs a script or loads a small file from a 'collection' server. Page tagging is good for picking up use of your Web site when it is served from a cache or proxy. It also means that non-human traffic is eliminated; at the same time, without complex programming, page tagging does not pick up all those non-HTML files that museums are notable for producing in quantity: PDFs and Flash.

The Problem With 'Unique Visitors'

'Unique visitors' sounds like a good metric. The problem is that it is calculated in very different ways by different analytics packages. To separate different visitors from each other, visitors are usually identified by a combination of IP address and a cookie. Some packages calculate uniqueness on a daily basis, meaning that if the same computer and browser visits your Web site on two different days, it will be counted as two unique visitors. If your users delete their cookies at the end of each session or have a dynamically allocated IP address, then the same user visiting from the same browser and computer multiple times will also count as multiple unique visitors. There is no easy solution to this; the best approach is just to count 'visits'.

The Problem With 'Visits' And 'Time Spent On Site'

However, visits are also problematic because different analysis packages apply different 'timeout' periods to inactive user sessions. Generally the industry has set a 30-minute timeout period after which the same browser and computer accessing the same Web site is determined to have started a new visit. This timeout period is necessary because it is impossible for log-based solutions to determine when a user leaves a Web site. Only a very few packages effectively track exit points - this has to be done by a Javascript hack - and even this is ineffective if a visitor's last page on your Web site is a PDF or Flash interactive.

'Time spent on site' measures break for the same reason. Because it is hard to determine when a visitor leaves a site, almost all Web analysis packages count a single page visit as zero (0) seconds. Likewise, if a visit spends 15 seconds on your home page, then goes to a second page, spends 2 minutes there and then leaves, the visit length will almost certainly be recorded as just 15 seconds. A better measure of the average time spent on site (or average visit length) needs to be manually calculated by removing all the single page visits from the calculation.
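The 30-minute timeout rule and the adjusted average can be sketched as follows; the data structures are illustrative, not those of any particular analytics package:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # the common industry default

def sessionize(hits):
    # Group (visitor_id, timestamp) page hits into visits: a gap longer
    # than the timeout closes the current visit and opens a new one.
    open_sessions = {}   # visitor_id -> timestamps in the open visit
    closed = []
    for visitor, ts in sorted(hits, key=lambda h: h[1]):
        session = open_sessions.get(visitor)
        if session and ts - session[-1] > SESSION_TIMEOUT:
            closed.append(open_sessions.pop(visitor))
            session = None
        if session is None:
            open_sessions[visitor] = [ts]
        else:
            session.append(ts)
    return closed + list(open_sessions.values())

def average_visit_length(sessions, exclude_single_page=True):
    # Last hit minus first hit; a single-page visit records 0 seconds,
    # so it can be excluded to stop it dragging the average down.
    lengths = [(s[-1] - s[0]).total_seconds() for s in sessions]
    if exclude_single_page:
        lengths = [l for l in lengths if l > 0]
    return sum(lengths) / len(lengths) if lengths else 0.0

hits = [
    ("a", datetime(2008, 4, 10, 12, 0)),  # visitor a: two pages, 5 min apart
    ("a", datetime(2008, 4, 10, 12, 5)),
    ("b", datetime(2008, 4, 10, 12, 1)),  # visitor b: a single-page visit
    ("a", datetime(2008, 4, 10, 13, 0)),  # visitor a again, after the timeout
]
sessions = sessionize(hits)
```

With this data, the naive average over all three visits is 100 seconds, whilst excluding the two single-page visits gives the truer 300 seconds.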

Newer browsers with tabbed Web browsing options (such as Firefox, Internet Explorer 7, Safari, Opera) also make these calculations problematic. Much like users walking away from their computers to get a coffee, if a user opens a new tab and browses to another site whilst keeping your site open, are they really 'on your site' any more?

The Problem With 'Page Views'

Page views are increasingly problematic with the popularity of AJAX and the rise of video content. Page views work quite well for hierarchical content where content on each page is fixed and users must traverse several pages to complete a task, but on sites which have switched to AJAX methods, the same task can now usually be completed on a single page with dynamic content updating. In these situations, time spent and individual sub-task tracking provide much better insights into user behaviour. This should not be new to museums as we have had to deal with tracking and analysing user behaviour in Flash content on Web sites and Director-based interactives in museum galleries.

Comparative Analytics

Whilst your internal metrics cannot be used to compare your museum's performance against another museum’s because of different measurement techniques, there are ways to assess your competitiveness. The most effective 'overall' measure is in fact to sample users beyond your own Web site – capture the entirety of a user’s behaviour before and after visiting your Web site, as well as that of non-users. There are two types of firms that do this type of measurement.

The first uses a method derived from television and radio ratings - surveying a group of users on their monthly behaviour, often using a Web tracking application installed on the survey participants’ computers to keep track of everything they visit during that period. This is known as panel-based measurement. Firms that do this sort of measurement include comScore and Nielsen NetRatings. The bigger their sample size, the better the results. But in most cases even the largest museums score poorly: this sort of measurement is well suited to only the most popular and largest generalist Web sites – not museums.

A slightly modified form of this type of measurement occurs with 'toolbar trackers' like Alexa, which gather data from the relatively small base of users who choose to install the toolbar. This is easily ‘gamed’ and inaccurate - at the Powerhouse Museum, one of our development computers had a toolbar feeding data to Alexa; we noticed that Alexa was reporting our 'development Web site' as one of the most popular areas of the Museum's domains.

The second uses the proxy logfiles generated by Internet service providers (ISPs). ISP logs provide a much larger and more accurate pool of data for analysis and, assuming that the firms providing the analysis have signed agreements with the major ISPs in your country, this can give the best comparative analysis of any solution. Free providers of this sort of analysis include Quantcast (US-only) and Compete (US-only). The best-known commercial provider is Hitwise (US, UK, Australia/NZ, Hong Kong, Singapore).

Panel-based and ISP proxy-based Web analytics can provide a much better picture of metrics such as 'time spent on site' because they capture an entire user's browsing profile - the sites they visit both before and after your site, as well as all their searches and, in panel-based services, detailed demographic data. These services are region-specific and can only estimate total traffic, focusing more on proportional metrics and trends.

Through these services it becomes possible to track the performance of your site against those of your competitors in your local area. This is particularly useful for examining physical visitation traffic as well as brand awareness by drilling down into search terms used to find particular sites.

Importantly, these services also offer the unique ability to examine what your ‘non-users’ are doing, and the alternative Web sites they are visiting to meet their information and other needs. Museums have been slow to engage with this sort of research, and yet these tools are readily available and used extensively in the commercial world.

Behavioural Snapshot Tools And Usability Tests

In response to the problems with these traditional blunt measurement tools, highly specialised user tracking tools like Clickdensity and Reinvigorate are emerging as ways of taking high volume ‘snapshots’ of user behaviour in a short period of time, an alternative to undertaking small and time-consuming focus group sessions and surveys.

Clickdensity is discussed at length by Haynes and Zambonini (2007) and offers the ability to track the coordinates of users’ mouse clicks on site pages. Aggregating this data over a month can quickly reveal shortfalls in user interface design.

Reinvigorate takes a different approach to traditional page tagging metrics by revealing what is happening ‘right now’ on a site. This allows the analyst to follow users as they move through the site, watching in real time as they navigate and explore. Reinvigorate gives ‘time spent’ a level of granularity which is often hidden by aggregated data.

These tools are joined by many other startups offering similar services, through to the generation of ‘user videos’ which record user movements over a number of pages.

A Simple Segmented Web Metrics Methodology

A better approach to Web metrics on your Web site is to segment your users and deploy appropriate measurement tools to reveal trends and behaviours (Carey and Jeffrey, 2006). Most good analytics packages will allow you to implement segmentation in the report generation process to filter users and sessions - if your package doesn't, then switch.

First, segment your content. Peacock and Brownbill (2007) propose four broad audience segments – visitors, searchers, browsers and transactors – a useful starting point. Within each category you will need to segment further. Visitors should be further segmented by exhibition and primary interest; searchers and browsers by discovered content; and transactors by transaction type. Each part of your Web site has a core purpose. The information about your location, opening hours, charges, exhibits and events is there to attract real-world visitors. You will want to create different segments for each exhibition and event so you can track comparative popularity. Other parts of your Web site, such as your collection and interactive learning resources, may be intended to operate independently of a real-world visit experience (Peacock, 2002). The more diverse your content - particularly with more usable collection databases - the greater the need to segment, as visitors will arrive at your site with a multiplicity of needs and intentions (Chan, 2007).
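A segmentation pass of this kind can be as simple as mapping each visit's entry path onto a segment. The paths and rules below are purely illustrative; real rules depend entirely on your own site's information architecture:

```python
# Hypothetical entry-path rules for the Peacock and Brownbill segments.
SEGMENT_RULES = [
    ("/visit",       "visitors"),    # location, hours, charges
    ("/exhibitions", "visitors"),    # one segment per exhibition in practice
    ("/collection",  "searchers"),
    ("/learn",       "browsers"),
    ("/shop",        "transactors"),
]

def segment_for(entry_path):
    # Assign a visit to a segment by the page it entered on;
    # unmatched paths fall into a catch-all bucket.
    for prefix, segment in SEGMENT_RULES:
        if entry_path.startswith(prefix):
            return segment
    return "other"

print(segment_for("/exhibitions/steam-engines"))  # visitors
print(segment_for("/collection/object/12345"))    # searchers
```

In practice each exhibition would get its own sub-segment so that comparative popularity can be tracked over time.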

Second, set user filters. If it is not done automatically, filter out all non-human traffic (bots, spiders, crawlers), as well as scripts and RSS feed traffic. You may also want to remove all visits with a length of under 5 or 10 seconds (understanding that this may remove all your single page visitors as well). This will likely halve your total traffic but bring you closer to understanding what your actual human visitors are doing on your site.
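These filters might be sketched as a single predicate applied to each visit record; the field names below are invented for the example:

```python
def keep_visit(visit, min_seconds=10):
    # Hypothetical visit record fields: is_bot, entry_path, seconds.
    if visit.get("is_bot"):
        return False  # bots, spiders and crawlers
    if visit["entry_path"].startswith("/rss") or visit["entry_path"].endswith(".xml"):
        return False  # feed polling and scripted requests, not page reading
    if visit["seconds"] < min_seconds:
        return False  # very short (and zero-length single page) visits
    return True

visits = [
    {"is_bot": False, "entry_path": "/exhibitions", "seconds": 95},
    {"is_bot": True,  "entry_path": "/collection",  "seconds": 4},
    {"is_bot": False, "entry_path": "/rss/news",    "seconds": 120},
    {"is_bot": False, "entry_path": "/visit",       "seconds": 0},
]
human_visits = [v for v in visits if keep_visit(v)]  # one visit survives
```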

Third, set targets for your segmented data. Targets should be set in terms of growth and refined over time. Your real-world visitation section (visitors) will probably show shorter average visit lengths than other sections of your site, indicating effective communication design: users should be able to quickly find what is on and where you are located, and be able to act on this information. Lack of clarity in these parts of your site will directly impact upon your real-world visitation. Education resources and collection areas accessed by browsers and searchers should show longer visit lengths.

Fourth, keep an eye on traffic sources, geographic location (calculated through geo-IP lookups) and search phrases. Each segment will arrive at your site through different means. Your real-world visitation traffic should predominantly originate from geographical locations close enough to your location to make a real-world visit possible. If the majority of your traffic to these areas of your site is not from your local region, then your museum is disengaged from the local community and/or attracts mainly tourist audiences. Likewise, by keeping an eye on the top search phrases you will probably find your museum's name at the top of the list. If it isn't, then most of your search-originating traffic is likely to be more interested in your content than your real world brand; this has implications for your revenue generation, visitation, and marketing strategies.
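Both of these checks reduce to simple aggregation over the segment's visits. A sketch with invented visit records (the regions and search phrases are illustrative):

```python
from collections import Counter

def local_share(visits, local_regions):
    # Fraction of visits to the real-world visitation segment coming
    # from regions close enough for a physical visit.
    regions = [v["region"] for v in visits]
    return (sum(1 for r in regions if r in local_regions) / len(regions)
            if regions else 0.0)

def top_search_phrases(visits, n=3):
    # Most common search phrases bringing visitors to the segment; your
    # museum's own name should normally top this list.
    counts = Counter(v["search_phrase"] for v in visits if v.get("search_phrase"))
    return counts.most_common(n)

visits = [
    {"region": "NSW", "search_phrase": "powerhouse museum"},
    {"region": "NSW", "search_phrase": "powerhouse museum"},
    {"region": "NSW", "search_phrase": "steam engine history"},
    {"region": "UK",  "search_phrase": "powerhouse museum"},
]
print(local_share(visits, {"NSW"}))  # 0.75
```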

These steps should bring your Web metrics to a level at which you will be able to make better decisions about your content, its layout and information architecture. For the museum as a whole, you will be able to better understand your on-line audience, its potential growth areas, and the weakest areas of your site that need redesigning or reconsideration.

New Problems For Web Analytics

Distributed Visitation And The Social Web

As Ellis and Kelly (2007) have noted, Web 2.0 and the distributed content model it brings with it make solely site-specific analytics obsolete. Museums that have presences in Second Life, MySpace and Facebook, content in YouTube and Flickr, hosted blogs, and footholds in hundreds of other distributed places now need to know how 'effective' each of these is. Even if there is no official presence in these environments, museum fans and visitors will have already created unofficial presences, ranging from complete 3D models (such as the Second Louvre) through to semi-private blog entries about their experience visiting a museum on holidays or as part of a school excursion.

These presences are based on interactions in the form of comments, ‘friending’ and conversation, and so 'visitation' as a metric is misleading. Instead, what matters is, just like in the physical museum, the rather nebulous concept of 'engagement'.

Complicating matters further, because the sites and services on which this content exists are not operated by the museums themselves, metrics such as time spent or paths through content - which would be available if the museums operated the servers - are available only to the site owner (YouTube, MySpace, Facebook, etc.). Further, the valuable 'social graph' that links users to each other is opaque and takes significant investment to extract.

With some small galleries already replacing their investment in their 'own Web site' with a managed presence in one or more of these environments, the concept of a 'visit' is changing quickly. How these galleries will report visitation is now unknown, although on balance it is likely that their Web presence will be considerably more 'effective' if measured in terms of 'conversions' and 'friends'.

As Bearman and Geber (2007) remind us, the ambient museum is getting closer to reality. In this version of the future, museum content is pushed out to 'users' based upon their personal preferences and especially their geographic location. A simple example is a historic tour of a suburb that pulls in dynamic content from a plethora of content suppliers, based entirely on where the tourist is walking. In these coming realities, the concept of 'visiting a museum Web site' changes entirely, and the required metrics change too.

Investment in new social platforms for museum content necessitates a calculation of a return on investment (ROI) but this is not so easily measured. It requires a clear understanding of the objectives of organisational forays into the social Web, and then the combined use of different tools from different vendors.

Organisational Objectives

At this stage most museums are still in the experimental stage of social Web initiatives, and rarely have clearly defined strategic objectives. Some, for example Facebook pages and groups or MySpace profiles, are marketing driven. Others, like the Steve social tagging project, the Powerhouse Museum's OPAC, and the Library of Congress' participation in Flickr Commons, are intended to explore how users might be able to contribute to object classification and discovery mechanisms as much as they are about wider exposure. Still other experiments in Second Life and Facebook application development, such as the Brooklyn Museum of Art's ArtShare, are more open technical experiments.

For marketing driven social media, the return needs to be measured in terms of 'meaningful use' and traditional 'conversions', or calls to action. 'Meaningful use' requires qualitative assessment of user interactions rather than simply quantitative data, whilst measuring 'conversions' usually requires the measurement of special offer take up rates (print and present this free pass deals, etc), and task completion tracking.

Technical experiments such as presences in Second Life or adding content to Facebook where usage will initially be limited are best assessed by how they affect staff understanding of user behaviours and preferences, and the impact of these changed perceptions on working methods and goals. This creates positive organisational change which filters up through the organisation.

The Powerhouse Museum's OPAC is an example of significant organisational change that was effected because curatorial staff gained a better understanding of how our on-line visitors discover and navigate the collection information. This prompted changes in internal organisation policies, procedures and processes around collection documentation and classification as a result of examining and gaining an understanding of user interactions (Chan, 2007).

It is not so much how many people are accessing your content that matters, but what they are doing with it.

For all social media in general, a good measure of overall effectiveness is the level of self-management generated by an active community. If the community self-manages and polices content, acting for the organisation, that is a good sign of a healthy project.

Dealing With Embedded Content

Most museum Web sites show a high level of traffic generated by Google Image Search and a high proportion of Web site visitation resulting from ‘image hijacking’. Image hijacking describes the practice whereby another Web site – now usually social networking sites like MySpace and Bebo, on-line forums, and blogs – embeds an image from your Web site in a profile or blog post, or uses it as a forum avatar. Rather than download the image and host it on their own server, image hijackers simply embed the URL of the image on your site in their content. For those using log file analysis tools, this can generate a very large number of single page (single file) ‘visits’ very quickly.
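For those doing their own log analysis, hotlinked image requests can be flagged by comparing the host in the HTTP referer against your own. The domain below is a hypothetical placeholder:

```python
from urllib.parse import urlparse

OWN_HOSTS = {"www.example-museum.org"}  # hypothetical museum domain
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".gif", ".png")

def is_hotlinked(request_path, referer):
    # An image request whose referer is a page on somebody else's host
    # is a likely 'image hijack': embedded elsewhere, not visited here.
    if not request_path.lower().endswith(IMAGE_EXTENSIONS):
        return False
    if not referer or referer == "-":
        return False  # no referer header: direct request, cannot classify
    return urlparse(referer).netloc not in OWN_HOSTS
```

Counting these separately lets you decide whether to report them as engagement, exclude them as noise, or both.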

There are two main opinions on image hijacking. Those in favour of counting this as a legitimate use of museum content argue that it demonstrates a type of engagement with the museum, especially for younger audiences who are seen to be traditionally under-represented in museums. Those against the counting of this traffic argue that the majority of this usage is a result of image discovery through Google Image Search and that embedding of this museum content rarely involves an actual visit to the museum’s Web site, but simply a copying of a decontextualised image URL. Further, once the image is presented in its new context, it is rarely acknowledged as coming from the host museum or linked back to, and thus has no benefit to the museum.

Image hijacking is spreading to other forms of content, and so a consistent policy on dealing with this sort of ‘content usage’ needs to be developed. Elsewhere other Web sites, and especially social content portals like YouTube, Flickr, Photobucket, Slideshare, Vimeo, as well as traditional media from newspapers to television broadcasters, are dealing with this by adding ‘embed this’ widgets to their content. This encourages users to ‘take away’ content to use on their own private sites but ensures that such content is linked back to its source, branded with the logos of its host and owner, as well as controlled so that it can be removed, replaced or updated at any time.

New Measurement Tools

General Social Media Health Checks

Most organisations now perform what are known as ‘ego searches’ for their brand name, event or exhibition name. An ego search is effectively the Internet equivalent of a traditional media monitoring service. A search is performed with traditional Web search as well as across multiple social media platforms for blog posts, linkbacks, discussions, photos and videos about your brand name.

Several simple free tools allow this to be performed quickly and regularly. Most can also deliver the results as an RSS feed to your RSS aggregator or e-mail client, and most free SEO sites can quickly return lists of linkbacks from Google, Yahoo and MSN searches. As a daily health check of your brand performance, a social media ego search is even more valuable: it can reveal unsolicited visitor opinions of your organisation as they happen - information unlikely to be formally submitted via guestbooks or feedback forms.

Technorati is an easy way to receive regular updates on blog posts about your organisation and other topics; searches in Technorati can quickly be turned into RSS feeds. Because of changes in the way that Technorati counts, stores and searches, it is worth using Bloglines to perform the same ego search and compare the results.

Flickr and YouTube searches are also easy ways to track your presence and can help reveal the most popular parts (or most photographed parts) of your physical museum. Any social media site can be searched in this way – choosing the right ones to focus on will depend upon your target audiences and campaigns.

Another interesting measure of reputation for museums is the number of Wikipedia articles that reference content on their Web sites. Wikipedia provides a simple tool for performing this type of search; it returns a list of Wikipedia articles by title, complete with the URL from your site that is referenced by each article.

Measuring RSS Subscriptions And Podcasts

One of the great benefits of RSS is that content is easily syndicated to other Web sites and service providers. Once syndicated, though, there are no easy ways to measure actual usage of this content. Log file data will report back how many times an RSS feed is requested from your servers, but this only indicates the polling frequency of various RSS clients and readers. A single RSS client (user) may request the feed from your server every 30 minutes to check for changes, regardless of whether a user at the other end is reading the feed, and this will greatly inflate your visit figures unless specifically filtered out.
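One rough corrective is to count distinct polling clients per day rather than raw feed requests. A sketch, assuming each client can be identified (by IP address and user agent, say):

```python
from collections import defaultdict

def feed_stats(requests):
    # requests: (day, client_id) pairs for hits on the feed URL. A reader
    # polling every 30 minutes makes ~48 requests a day but is still only
    # one subscriber, so count distinct clients per day instead.
    clients_per_day = defaultdict(set)
    for day, client in requests:
        clients_per_day[day].add(client)
    daily_subscribers = {day: len(c) for day, c in clients_per_day.items()}
    return len(requests), daily_subscribers

# One aggressive poller plus one occasional reader on the same day.
requests = [("2008-04-10", "reader-1")] * 48 + [("2008-04-10", "reader-2")] * 2
total, subscribers = feed_stats(requests)
```

Here 50 raw requests collapse to two subscribers: the figure that actually matters.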

Feedburner offers a useful free service to measure subscriptions to RSS feeds; its statistics provide details on item-by-item usage, and it also offers a useful RSS-to-email function. Subscribers can easily be tracked by time period. Feedburner is also well suited to monitoring podcast feeds, especially if your podcasts are listed in the iTunes directory. Without Feedburner, monitoring the source of traffic to your podcasts relies entirely on server logfile data.

It should be emphasised that there is no current free solution for measuring how many podcasts are actually listened to after being downloaded. Many startups are developing technology in this space, driven by demand from advertisers to ensure that podcast advertising is reaching its intended audience. Most of these use a similar methodology to Feedburner, tracking RSS and/or using a special embedded player application. For very popular podcasts it is occasionally possible to see them listed in user playlists on music trackers like Last.FM, but this is rare.

In many ways the best measure of the success of a podcast is how much feedback and discussion it generates. This is far more valuable than the total number of downloads.

Measuring Blog Performance

Blogs are difficult to measure using traditional Web analytic methods. The problems with RSS feed measurement are compounded by the design of most blog ‘home pages’ which list multiple posts on one page. Whilst this makes it easy for the reader to read multiple posts without loading another page, it makes tracking very difficult. Blogs will typically have very high numbers of users who visit only one page, and for the reasons discussed earlier, will have a zero visit length recorded against them. Further, if users are reading multiple posts on the one page, there is no reliable way of determining which posts are the most popular.

Technorati rankings, which are based on linkbacks, are frequently used to measure blog performance; however, as the volume of ‘splogs’ (spam blogs) grows exponentially, this is not always a reliable measure. As a generic measure it is a good starting point.

Again, it is far better to measure interactions – comments, trackbacks – and then qualitatively assess them. Blogs should ideally be generating conversation and discussion, and blogs will rank differently depending upon your choice of what to measure (Chan & Spadaccini, 2007).

On-Line Communities And Social Networking Sites

Measuring presence in social networking sites like Facebook and MySpace has usually been done at a basic level by counting ‘friends’ or ‘fans’ and profile comments. Increasingly, though, organisations are spreading their presence within these networks across profiles, groups, pages and events. Because the way users of social networking services use and interact with these presences is heavily reliant on situational relevance, some profile fans may not also exist in an organisational group, and vice versa.

With Facebook and MySpace both now offering developers the ability to create applications/widgets, more detailed information can be gathered in this way. Indeed, the real value to organisations lies in the ability to mine some of the data held on fan profiles, although there are obvious privacy and ethical concerns that need to be considered carefully. A Facebook application developer, for instance, can find out not only how many profiles are using an application, but also how that application is being discovered, and can also map individual users to their interactions with the application. Those that can devise ways of using the resulting data will be the most successful, in much the same way that the Powerhouse Museum’s OPAC project has gained more insights from tracking user behaviour than from the social tagging that can be done on the site (Chan, 2007).

Whilst YouTube provides only very rudimentary statistics for uploaders and content creators, Flickr now provides uploaders with a statistics page revealing aggregate views, comments and favourites, along with the more useful data about referrers – how people discovered your photos. Tracking photo referrers, especially from other Flickr groups, along with comments, allows organisations to build stronger community bonds around their content in these services.
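Aggregating those referrers – by site, and by Flickr group – points to where community-building effort should be directed. A sketch using hypothetical referrer URLs, assuming they have been copied down from the statistics page:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical referrer URLs noted from a photo's statistics page
referrers = [
    "http://www.flickr.com/groups/museums/discuss/",
    "http://www.flickr.com/groups/museums/pool/",
    "http://images.google.com/imgres?q=example",
    "http://someblog.example.com/2008/03/photo-roundup/",
]

# Which sites send the most traffic?
by_site = Counter(urlparse(r).netloc for r in referrers)

# Which Flickr groups in particular? (path looks like /groups/<name>/...)
flickr_groups = Counter(
    urlparse(r).path.split("/")[2]
    for r in referrers
    if urlparse(r).path.startswith("/groups/")
)
print(by_site.most_common(1), flickr_groups)
```

Here the ‘museums’ group would stand out as the community most worth engaging with directly.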


Conclusion

Social media metrics are still very much in their infancy, and traditional metrics are increasingly problematic and irrelevant. Tracking user behaviour is far more invasive, and privacy concerns are amplified as users present far more data about themselves and their identity.

Whilst organisations are still struggling with inadequate traditional Web metrics, and are only beginning to devise social media strategies and objectives, measuring their effectiveness is difficult.

When combined with better site-specific metrics, however, there are great opportunities to build better, more engaging, more interactive content for more diverse museum audiences and to deliver them directly to the users where they already communicate and socialise on-line.

User metrics, when properly segmented, can greatly improve usability and reveal otherwise unseen usage patterns and trends. The smartest museums will be the ones that can leverage their real-world trust and reputation in the social media environment, and build unobtrusive tools to translate learning from user behaviour into organisational action and change – rather than just counting the numbers.


References

Bearman, D. and K. Geber. “Enhancing the Role of Cultural Heritage Institutions through New Media: Transformational Agendas and Projects”. In J. Trant and D. Bearman (eds). International Cultural Heritage Informatics Meeting (ICHIM07): Proceedings. Toronto: Archives & Museum Informatics, published September 30, 2007 at

Carey, S. and R. Jeffrey. “Audience Analysis in the Age of Engagement”. In J. Trant and D. Bearman (eds). Museums and the Web 2006: Proceedings. Toronto: Archives & Museum Informatics, published March 1, 2006 at

Chan, S. “Tagging and Searching – Serendipity and Museum Collection Databases”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2007 at

Council of Australasian Museum Directors (2007). Council of Australasian Museum Directors annual survey highlights, Media Release, published 2 August 2007 at

Ellis, M., and B. Kelly. “Web 2.0: How to Stop Thinking and Start Doing: Addressing Organisational Barriers”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2007 at

Haynes, J., and D. Zambonini. “Why Are They Doing That!? How Users Interact With Museum Web sites”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2007 at

Peacock, D. (2002). “Statistics, Structures and Satisfied Customers: Using Web Log Data to Improve Site Performance”. In D. Bearman and J. Trant (eds). Museums and the Web 2002: Proceedings. Pittsburgh: Archives and Museum Informatics, 2002.

Peacock, D. and J. Brownbill. “Audiences, Visitors, Users: Reconceptualising Users Of Museum On-line Content and Services”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2007 at

Spadaccini, J. and S. Chan. “Radical Trust: The State of the Museum Blogosphere”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2007 at

Thomas, W. and S. Carey. “Actual/Virtual Visits: What Are The Links?” In J. Trant and D. Bearman (eds). Museums and the Web 2005: Proceedings. Toronto: Archives & Museum Informatics, published March 31, 2005 at

Cite as:

Chan, S., Towards New Metrics Of Success For On-line Museum Projects, in J. Trant and D. Bearman (eds.). Museums and the Web 2008: Proceedings, Toronto: Archives & Museum Informatics. Published March 31, 2008. Consulted