Creating a Safety Net - A Proposed Rating Form for Assessing the Quality of Legal Information in Websites
Alan Robinson
Legit
legit@shoal.net.au
Delivered at the 2nd AustLII Conference on Computerisation of Law via the Internet, Australasian Legal Information Institute (AustLII), University of Technology, Sydney, Australia, 21-23 July 1999.
Abstract
This paper examines ratings as a means of assessing the quality of information on World Wide Web sites. It summarises several schemes used in rating medical and health Websites, then adapts these to a form which might be used to rate Websites containing legal information for the wider community. This form is trialled and the results analysed.
1. Introduction
Legal information has been available electronically for over two decades, but it is only with the burgeoning of the Internet, and especially the World Wide Web, that information providers have started to consider legal consumers as well as professionals. In the last two years bodies from government, the judiciary, the community and private sectors have set up Websites to disseminate legal information to the general public. This considerable investment has required a vast leap of faith that the legal system's clients will be willing and able to use such resources. To a large extent that in turn will depend on the quality of sites and the information they contain.
The Internet provides an easy and accessible forum to share, disseminate, and use legal information. However, anyone can post information on the Internet regardless of his or her background, legal qualifications, professional stature, or intention. Inaccurate, incomplete, or biased information can be damaging; for example, people seeking online information may be convinced to ignore their legal problems or rely on bad advice in lieu of professional legal assistance.
Other forms of public communication, such as print and broadcast media, have the same potential as the Internet to disseminate false or misleading information. However, while sections of the public have learnt to distrust the mass media, the glamour of the new media may lead them to adopt a less critical view of information on the Internet. Naive viewers in particular may be lulled by technological brilliance into placing more value on the content than it deserves, simply because they get it from the Net.
Although the variability of information quality on the Internet is generally recognized as a problem (U.S. General Accounting Office, 1996), there is no agreement on how to resolve it. Rating schemes have been suggested as one possibility (Tillman, 1995), but they have not yet been used by any legal information Websites, at least in Australia. This paper looks at ratings as a means of assessing quality of Website information and design. It suggests a rating form to be used in making such assessments, and analyses its use in assessing the quality of seven Websites that provide legal information to the public of New South Wales, Australia. The limitations of this methodology are then examined and ways to overcome those limitations are considered.
As was said by Wyatt (1997):
'unless we evaluate the quality of … sites and their effects on users, we risk drowning in a sea of poor quality information'.
The proposed ratings form will assist consumers who want to confidently rely on legal information on Websites, and will give content providers criteria so they can improve their ability to provide quality information on the Internet.
2. Rating Schemes
Ratings are sometimes posted as a seal on the reviewed site, or sites that meet the quality criteria may be listed in a clearinghouse of 'quality' sites. Rating tools vary from general pointers to detailed checklists. Techniques that lay people can use to find 'good' information on the Internet often focus on the currency of the information, the author, and the sponsorship. These recommended tools for the lay person are quite general and may not really help in making an informed decision about the information available on a site.
More detailed evaluation tools tend to be targeted to the developer, librarian, or educator. They are often checklist guides to determine how good a Website is. One checklist targeting the lay person asks questions relating to the user's knowledge of the materials, authority, time, scope, form, clarity, recommendations from friends relating to the site, validity (how true the user thinks the information is), and importance. Another system devised by Reeves and Harmon (1993) for evaluating educational software and Websites makes effective use of scales of one to ten on such matters as ease of use, navigation, cognitive load, mapping, screen design, knowledge space compatibility, information presentation, aesthetics, and overall functionality. These concepts are more relevant to a multimedia expert than a legal consumer.
In addition, there are some ongoing activities related to the development and assessment of Website quality evaluation tools. For example, a project within the University of Georgia is establishing information quality criteria for Internet resources (Bennett, Wilkinson and Oliver, 1997). The audience for this endeavor consists primarily of educators. The project has developed a list of 125 indicators, grouped within the following 11 categories:
- Site access and usability (18 indicators);
- Resource identification and documentation (13 indicators);
- Author identification (9 indicators);
- Authority of author (5 indicators);
- Information structure and design (19 indicators);
- Relevance and scope of content (8 indicators);
- Validity of content (9 indicators);
- Accuracy and balance of content (8 indicators);
- Navigation within the document (12 indicators);
- Quality of links (13 indicators);
- Aesthetic and affective aspects (13 indicators).
3. Rating Health Information on the Internet
The issue of assessing quality of online information seems to have received a great deal more attention by the medical than the legal community, perhaps because there are more people setting up Websites with medical and health information of dubious veracity or value. It would probably be easier to sell a medical or health product over the Internet than a legal remedy.
For whatever reason, there are a number of sites that consider how to assess the quality of online medical information. These sites vary in depth of analysis and conclusions. Some of them are outlined below.
3.1 Mental Health Network
The Mental Health Network (MHN) rates the quality of mental health Web resources' content and presentation. They review Web resources and give them an appropriate rating based upon four main rating categories: Content, Presentation, Ease-of-Use, and Overall Experience.
Content - When rating content they ask:
- Does the site add new and unique material to the online world, or does it simply list or link to existing material well covered by other sites already?
- Does it offer new insights into a disorder, or offer information not found elsewhere?
- Does it offer refreshing perspectives, and/or regularly updated new content in the form of news articles, opinions, or other forms of communication?
- Does it include interactive features?
- Is it just an advertisement for an organization or company or individual?
Presentation rates how good a site looks and how well organized the information is. The MHN asks:
- Is the information laid out in a logical and well-organized manner?
- Are graphics designed appropriately for the site and do they load quickly?
- Do the graphics overwhelm the user and the content?
- Is the site advertiser-sponsored (and hence, adding to the 'busy-ness' of each page which sports a banner advertisement)?
- Is it arranged so the best material the site has to offer is clearly delineated from other, less important material?
Ease-of-Use denotes how easily the user can move around the site and find specific information, and above all how quickly. All sites should be logically arranged and organized. The MHN often picks a specific piece of information one would hypothetically be searching for and asks:
- Can we find our way to X content quickly using the navigation aids provided, and then get back to the home page just as painlessly?
- Is a search engine or utility provided to help us find information more quickly (when appropriate)?
- Can we easily find contact information on the page to provide feedback about broken links?
Overall Experience takes into account all three previous categories and allows MHN to evaluate their overall experience of, and feelings about, the site. Although this is a relatively subjective rating, they attempt to make it less subjective by simply taking into account the three previous categories and asking:
- What kind of feelings did we have when we came away from the site?
- Was it enjoyable to read through or was it painful?
- Were we expecting more, only to be disappointed by the content, presentation, or difficulty in navigating through the site?
- Would we bookmark the site ourselves or find ourselves wanting to return on a regular basis?
The MHN then uses the following star rating system:
[* * * *] - Excellent, one of the best sites on the Web for this resource;
[* * *] - Very good and worth your time to check out;
[* *] - Average, a good site filled with basic information;
[*] - Lacking important content or ease-of-use;
[1/2] - Poor, not worth your time.
3.2 Tufts University Nutrition Navigator
Tufts University Nutrition Navigator ranks nutrition Websites using an overall score based on a 25-point scale that is the sum of content and usability scores. The criteria for evaluating sites were developed by their Advisory Board, a panel of six leading nutrition experts. It uses the following measures for assessing content:
- Accuracy (1-10) has two parts: it evaluates the scientific accuracy of empirical data, how well the information is referenced and whether the information is current (1-5); and it assesses how information is placed in the context of generally accepted dietary advice, looking closely at any conclusions the site may draw and any guidance it may provide, and evaluates how well the site provides balanced coverage of nutrition issues, particularly those that are complex and multifaceted (1-5). Because accuracy is so important:
  - if a site receives an accuracy score of 6 points or less, the site is automatically rated 'Not Recommended';
  - if the site receives an accuracy score of 7, it cannot receive an overall rating higher than 'Average';
- Depth of information (1-7) evaluates the depth (amount) of nutrition information provided relative to entire site content, and how well the site's objectives are met;
- Site last updated (1-3) rates whether the site has been updated within the past month (3), within the past 4 months (2), or longer than 4 months ago or with no information given on updates (1);
- Usability is a measure of the user experience (1-5), which assesses the site's use of effective and clear navigation tools, accessibility of information, download time and timeliness of information.
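To make the arithmetic of this scheme concrete, the sketch below combines the four component scores and applies the accuracy override rules described above. The component ranges and the two accuracy rules come from the description; the cut-off points used to label a total as 'Excellent', 'Better than average' or 'Average' are not given by Tufts and are assumed here purely for illustration.

```python
# A minimal sketch of the Tufts Nutrition Navigator scoring logic summarised
# above. Component ranges (accuracy 1-10, depth 1-7, currency 1-3,
# usability 1-5, summing to a 25-point scale) and the accuracy override rules
# come from the text; the numeric band thresholds are illustrative assumptions.

def tufts_rating(accuracy: int, depth: int, updated: int, usability: int) -> tuple:
    """Return (total score, rating label) under the rules summarised above."""
    assert 1 <= accuracy <= 10 and 1 <= depth <= 7
    assert 1 <= updated <= 3 and 1 <= usability <= 5

    total = accuracy + depth + updated + usability  # 25-point scale

    # Accuracy overrides the arithmetic total.
    if accuracy <= 6:
        return total, "Not Recommended"

    # Hypothetical band thresholds, chosen only for illustration.
    if total >= 22:
        label = "Excellent"
    elif total >= 18:
        label = "Better than average"
    else:
        label = "Average"

    # A site with an accuracy score of 7 cannot rate higher than "Average".
    if accuracy == 7:
        label = "Average"

    return total, label


print(tufts_rating(accuracy=9, depth=6, updated=3, usability=4))  # (22, 'Excellent')
print(tufts_rating(accuracy=7, depth=7, updated=3, usability=5))  # (22, 'Average')
print(tufts_rating(accuracy=5, depth=7, updated=3, usability=5))  # (20, 'Not Recommended')
```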
3.3 American Medical Association
Silberg, Lundberg and Musacchio (1997) from the American Medical Association state that 'when it comes to medical information, the Internet too often resembles a cocktail conversation rather than a tool for effective health care communication and decision making.' It is a medium in which anyone with a computer can serve simultaneously as author, editor, and publisher and can fill any or all of these roles anonymously if he or she so chooses. In such an environment, novices and savvy Internet users alike can have trouble distinguishing the wheat from the chaff, the useful from the harmful.
Silberg et al. suggest that the core standards to overcome these problems are:
- Authorship: Authors and contributors, their affiliations, and relevant credentials should be provided;
- Attribution: References and sources for all content should be listed clearly, and all relevant copyright information noted;
- Disclosure: Web site 'ownership' should be prominently and fully disclosed, as should any sponsorship, advertising, underwriting, commercial funding arrangements or support, or potential conflicts of interest. This includes arrangements in which links to other sites are posted as a result of financial considerations. Similar standards should hold in discussion forums;
- Currency: Dates that content was posted and updated should be indicated.
Silberg et al. recognise that the benchmarks they propose are no guarantee of quality in and of themselves. Nor do they wish to see centralised control of medical Websites:
'Web 'publishers' of all stripes - ourselves included - should be free to post whatever they like and live with the consequences. Let a thousand flowers bloom. We just want those cruising the information superhighway to be able to tell them from the weeds.'
Wyatt (1997) agrees that, by allowing anonymous authors to conceal commercial or other conflicts of interest, the Web does not help readers to discriminate between genuine insight and deliberate invention. He believes that checking whether a web site passes the criteria of Silberg et al is not enough.
'[F]or many purposes, evaluation of web sites needs to go beyond mere accountability to assessing the quality of their content, functions, and likely impact'
He proposes the methodology set out in Table 1 below.
Table 1 - Aspects of a web site which need to be considered when evaluating its reliability (from Wyatt, 1997)

Credibility, conflicts of interest
- Web site owner or sponsor, conflicts of interest: inspect site (Silberg et al's criteria)
- Web site author, credentials: inspect site (Silberg et al's criteria)

Structure and content of web site
- References to sources: inspect site (Silberg et al's criteria)
- Coverage, accuracy of content material: inspect site (Silberg et al's criteria; compare with current best evidence)
- Currency of content material: inspect site (Silberg et al's criteria; compare with current best evidence)
- Readability of material: calculate reading age, readability indices (word processor grammar checker)
- Quality of links to other sites: inspect site, judge if appropriate
- Media used to communicate material: inspect site, judge if appropriate

Functions of web site
- Accessibility of site via search engines: laboratory test with users
- Use of site, profile of users: web server statistics, online questionnaires
- Navigation through material: laboratory test with users

Impact of web site
- Educational impact on users: laboratory test, field trial
- Impact on clinical practice, patient outcome: laboratory test, field trial
3.4 Health Information Technology Institute (HITI)
The Health Information Technology Institute provides a set of criteria that they say can be used accurately and reliably by the general public (consumer) to assess the quality of health information on the Internet. They recommend the following criteria as necessary for assessing the quality of that information:
- Credibility: Source, Context, Currency, Relevance/Utility, Editorial Review Process;
- Content: Accuracy, Hierarchy of Evidence, Original Sources Stated, Disclaimer, Omissions Noted;
- Disclosure: Purpose of Site, Profiling;
- Links: Selection, Architecture, Content, Back Linkages and Descriptions;
- Design: Accessibility, Logical Organization, Internal Search Engine;
- Interactivity: Mechanism for Feedback, Chat Rooms, Tailoring;
- Caveats: Alerts.
HITI arrived at the criteria by convening a one-day Summit Meeting attended by a diverse group of individuals, including representatives of major professional, consumer, and government organizations. This was followed up through Internet and email communication, and through presentations at other conferences and meetings with individuals and associations, to ensure a broad input base. The form in Appendix A was used to find out how medical and health practitioners ranked the importance of the various criteria. Not surprisingly, accuracy was regarded as the most essential element (92%), followed by credibility (86%).
3.5 Health On the Net Foundation
Another approach for ensuring the quality of sites was developed by the Health On the Net Foundation, a non-profit organisation headquartered in Geneva, Switzerland. Its mission is to build and support the international health and medical community on the Internet and WWW so that the potential benefits of this new communications medium may be realised by individuals, medical professionals and healthcare providers.
The Foundation established a Health on the Net Code of Conduct (HONcode) for medical and health Websites. These Principles evolved from discussions with Webmasters, patient support groups and medical professionals in several countries. This is a self-policing approach by which groups that wish to abide by the HONcode principles can display the HONcode logo on their site. The principles are summarized as follows:
- Health care advice provided by qualified professionals;
- Site is intended to support (not replace) physician-visitor relationship;
- Confidentiality respected;
- Information referenced to source data;
- Claims supported by evidence;
- Information provided in the clearest manner, with e-mail support;
- Disclosure of external support;
- Advertising policy disclosed and advertising clearly distinguished from editorial content.
If the principles are violated and not corrected, the HONcode symbol is removed.
4. Criteria for Assessing Legal Information Websites
From the discussion above, it seems that accuracy, credibility, currency and usability are four common criteria used in evaluating health and medical Websites. These criteria are just as important when evaluating legal information Websites, and are expanded on below.
The accuracy of information is perhaps the most obvious criterion for quality of content. Accurate content is based on evidence and its verification. There is a risk that, through ignorance or bias, the content of the site may not be correct even if the original information sources were reliable. If the content is not original information, its source should be clearly indicated.
It is common for Websites to append a disclaimer at the bottom of each page. The disclaimer should describe the limitations, purpose, scope, authority, and currency of the information. It should also emphasize that the content is general information and not legal advice, thus, addressing liability concerns. The disclaimer should also clearly define the relationship, in terms of the scope of responsibility and control, between the original Website content and links to other sites.
Completeness is important to the quality of legal information. A comprehensive review of a topic should be presented, not a one-sided view with critical information missing. If the author or source of the information does not have all the facts to present, this should be noted.
More than any other criterion, accuracy can best be judged by a subject matter expert. Even a criminal lawyer will not be able to authoritatively comment on the accuracy of family law material. Although some rating systems permit non-professionals to comment on how accurate they 'think' the material may be, that could well be a case of the ignorant leading the blind.
A site should display the institution's or organization's name and logo as well as the names and titles of the authors. The user will then be able to form an opinion on how reliable the material should be. If individual lawyers or an organized group of lawyers provides it, then it will usually be more authoritative than if it is provided by someone without legal qualifications. The exception might be an organization which has a particular long-term interest in a subject (e.g., the Tenants Union could be a good source of information on tenants' rights), although the user should be aware of the possibility of bias.
The consumer must be wary of sound-alike names or names that seem prestigious. Impressive names, conceived by shrewd marketing strategists, can be quite misleading. The user should always be skeptical of information posted by an anonymous source.
An author's affiliation with a sponsor or the author's personal viewpoint/opinions should be noted to indicate possible bias, lack of objectivity or potential conflicts of interest. Similarly, when information is provided as part of an advertisement or endorsement relating to a law firm, it needs to be labeled as such so the consumer can tell that the information is given in the context of selling a product.
Having an editorial review process is generally discounted as an impossible undertaking given the large number of Internet nodes or server computers and the ever-changing nature of the content of electronic documents (HITI, 1997). Sites that do have an editorial process should say so and describe the process and the individuals involved.
Increasingly, Websites are requesting and using information for purposes that users may be unaware of. It is critical that consumers are alerted to the collection, use, and dissemination of their information so that they can make an informed decision about whether to provide it and whether to approve of its eventual use.
Currency can be defined as keeping up to date with the present state of legal knowledge. An important advantage of publishing on the Internet is that it allows regular, even hourly, updating, so that consumers and professionals using the WWW expect material to be more up to date than paper sources.
The initial burst of enthusiasm that prompts a legal author to produce a Website may soon be tempered by the realization of the time and effort involved in keeping the site up to date. The easiest way to assess timeliness is to check the date on web pages, but, since the material may not have been current even then, independent comparison with the most up to date facts obtained elsewhere is preferable. Currency is again more likely to be within the province of the legal expert than the consumer.
Even if the content is correct and up to date, people must be able to read and understand it. For web sites intended for the general public, it is useful to decide a minimum reading age for the material; a word processor's grammar checker can then be used to assess the text's readability and reading age. However, such measures are less revealing than asking subjects to answer questions based on the material (Wyatt, 1997).
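As a rough illustration of the kind of check a grammar checker performs, the sketch below applies the standard Flesch-Kincaid grade-level formula to a passage of text. The syllable counter is a crude vowel-group heuristic and the sample passage is invented, so the output should be treated as indicative only.

```python
# A rough sketch of an automated readability check of the sort Wyatt suggests
# delegating to a word processor's grammar checker. The Flesch-Kincaid grade
# formula is standard; the syllable counter is a deliberately crude heuristic.

import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (minimum of one per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("You can ask the tribunal to review the decision. "
          "Bring every letter the agency has sent you.")
grade = flesch_kincaid_grade(sample)
print(f"Grade level: {grade:.1f}  (approximate reading age: {grade + 5:.0f})")
```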
Since some web sites are complex, a major concern is how easily users can locate relevant material within the site. It is useful to compare users' ease of navigating through the site with the ease of using a printout of the material or the paper documents from which the web site is derived, to judge if the electronic medium makes information easier, or more difficult, to locate (Wyatt, 1997).
The best Websites are clearly focused on their purpose and target audience, are logically structured, and are simple, consistent, clear, and easy to use. They reflect an awareness of reading level, language, labeling, listing, cross-referencing, and comparison/contrast. A balance of words, pictures, colors, sounds, and motion may enhance absorption of the information. The best person to comment about this balance would be an educator with multimedia experience.
Relevance and utility are attributes that will benefit the user of a site. Relevance relates to how closely the actual content of a site corresponds to the information it purports to provide. Utility denotes the usefulness of a site. For example, suppose a person wishes to defend a claim but does not know how to go about it. A site intended to help defendants would not have much utility if it only discussed legislation and did not provide tools that would help in actually applying those laws.
Websites should be accessible by the lowest common denominator of current browser technology. Any search engine a site uses should be capable of searching specified content by keyword or search string and retrieving only relevant materials. Users should have the option of easily manipulating the search strategy to search only a section of a Website or the entire site.
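A minimal sketch of that search behaviour follows: a keyword search over an indexed site, optionally restricted to one section. The tiny in-memory 'index' of paths and page text is purely illustrative.

```python
# A minimal sketch of section-restricted keyword search. The index below is a
# made-up example mapping page paths to their text; a real site would build it
# from crawled or published content.

site_index = {
    "/tenancy/bond.html": "Getting your bond back after the tenancy ends",
    "/tenancy/repairs.html": "Repairs and maintenance during a tenancy",
    "/family/divorce.html": "Applying for divorce and parenting orders",
}

def search(keyword: str, section: str = "/") -> list:
    """Return paths whose text contains the keyword, limited to one section."""
    keyword = keyword.lower()
    return [path for path, text in site_index.items()
            if path.startswith(section) and keyword in text.lower()]

print(search("bond"))                         # whole-site search
print(search("tenancy", section="/tenancy"))  # restricted to one section
```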
The capability for interaction is a unique benefit of the Internet. For example, a link to send criticism and comments to the site's sources should always be included with the original information. Users should be able to comment on the validity and value of the information, and possibly point out areas of omission or obvious bias. A professionally operated Website will endeavor to respond to user feedback within a reasonable amount of time.
Chat rooms and Bulletin Boards allow information to be exchanged among many individuals, often anonymously. Whether a moderator is present should be posted, along with a warning that the information may not be accurate. If a moderator is present, the individual should be identified, together with his/her expertise and affiliations, and the source of his/her compensation.
In cases where a Website provides an interactive service, such as tailoring information to the user based on expert-system algorithms, the algorithm used should be stated, including its developer and the site's affiliation with the developer.
It is important that users be alerted when they are moving to an external site. Providing information about the linked source before the user clicks through, or placing transition screens so that movement to a new site is apparent, have been suggested as solutions to this problem. In addition, sources could be identified in a similar manner to print journals, where the name appears in the header or footer of the 'page'.
Especially critical to the quality of an Internet site are its external links: their selection, architecture, and content. Issues relating to link selection include whether the person or group linking to an external site has the authority, expertise, and credentials to do so. Also relevant is the level of the intended audience: the original and linked sites should target a set of readers with similar characteristics.
A site that offers limited original content and few links will not be as useful as one that has identified, structured, and authenticated lists of relevant sites. A brief description of the site to be linked helps the user decide whether to pursue the link.
5. Proposed Online Legal Information Rating Form
Figure 1 is a form that summarises the above criteria by providing prompts for an evaluator when rating online legal information. The prompts are framed in such a way that a 'yes' answer equates to one mark, unless a range of marks is indicated.
Figure 1. Online Legal Information Rating Form
(For each criterion the evaluator records a Mark and a Comment.)

Accuracy
- Does the information appear to be accurate?
- Are there links to relevant legislation and case reports?
- Is the original source stated?
- Does the disclaimer describe the limitations, purpose, scope, authority, and currency of the information? (0-5)
- Is it made clear that the information provided is not a substitute for professional advice?
- Are any omissions noted?

Source
- How credible is the source? (see scale below)
- Is the name of the author listed?
- Are his/her credentials listed?
- How well do the credentials match the text? (5 = perfect match, 0 = unrelated)
- Is the content provided in the public's interest?
- Is any possible conflict of interest noted?
- Does the information appear to be balanced?
- Does the source appear to be unbiased?
- Is the site not selling a product?
- Is the site's purpose disclosed?
- Is no user information captured (apart from feedback)?
- Is privacy of personal information assured?
- Is there an editorial review process?
- Is the editorial review process explained?

Currency
- Is there a date stamp at the bottom of each page?
- How current is the material? (5 = less than one month old, 0 = more than two years old)

Usability
- How useful is the information? (0-5)
- Are hyperlinks useful?
- Are hyperlinks properly identified, structured and authenticated? (0-3)
- Is there a description of linked sites?
- Is a graphical browser not required?
- Are plugins not required?
- Which browser version is required? (v2 = 3, v3 = 2, v4 = 1)
- Is the site logically organised?
- Search facility (generic search engine = 2, Javascript engine = 1)
- Quality of search responses (0-2)
- Feedback mechanism (email = 1, form = 2)
- Is there a chat room? If so, is a moderator present?
- Is any information-tailoring algorithm disclosed?
- Are users alerted when they move to an external site?

Source credibility scale: 5 = University Law School; 4 = Legal organisation specialising in the area; 3 = Individual specialist/community organisation; 2 = Individual general lawyer; 1 = Unqualified individual; 0 = Anonymous.
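One way the form could be captured for scoring is sketched below: each criterion holds a set of prompts with a maximum mark (1 for a plain yes/no prompt, higher where the form indicates a range), and totals are computed per criterion and overall. Only a few prompts per criterion are shown, and the data structure itself is an illustration rather than part of the published form.

```python
# A minimal sketch of the Figure 1 form as a scorable data structure. The
# prompt list is abbreviated; the structure and scoring helper are assumptions
# made for illustration, not part of the proposed form itself.

from dataclasses import dataclass, field

@dataclass
class Prompt:
    text: str
    max_mark: int = 1      # yes = 1 unless a range is indicated
    awarded: int = 0

@dataclass
class Criterion:
    name: str
    prompts: list = field(default_factory=list)

    def score(self) -> int:
        return sum(min(p.awarded, p.max_mark) for p in self.prompts)

form = [
    Criterion("Accuracy", [
        Prompt("Does the information appear to be accurate?"),
        Prompt("Does the disclaimer describe limitations, purpose, scope, "
               "authority and currency of information?", max_mark=5),
    ]),
    Criterion("Currency", [
        Prompt("Is there a date stamp at the bottom of each page?"),
        Prompt("How current is the material?", max_mark=5),
    ]),
]

# Example: an evaluator fills in the marks, then totals by criterion.
form[0].prompts[0].awarded = 1
form[0].prompts[1].awarded = 3
for criterion in form:
    print(criterion.name, criterion.score())
print("Total:", sum(c.score() for c in form))
```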
6. Analysis of Proposed Rating Form
The ratings form was used by the author to assess the quality of the following sites that provide legal information for the citizens of N.S.W. (although some are aimed at a wider audience):
Table 2. Websites assessed using the proposed ratings form

Acronym    Name                                     Description
FCA        Family Court of Australia                Family law services, forms & information
LAC        NSW Legal Aid Commission                 Legal aid policy, services & information
Law4U      Law for You                              Factsheets prepared by private lawyers
LawSoc     Law Society of NSW                       Solicitors' body
Lawstuff   National Children's & Youth Law Centre   Children's rights - ask questions by email
RLC        Redfern Legal Centre                     Factsheets about tenancy
The results of the assessment are attached as Appendix B and summarised below.
Table 3. Results of assessment of legal Websites

Acronym    Accuracy   Source   Currency   Usability   Total
LawSoc         9        17        5          18         49
RLC            7        16        6          17         46
FCA            8        15        5          16         44
LAC            4        16        5          19         44
Lawstuff       6        16        4          10         36
Law4U          6         8        5          14         33
The results should be seen more as a formative analysis of the ratings form than as a guide to the quality of the sites. They are undoubtedly influenced by the background knowledge that the author gained from working on two of the sites. Nonetheless, the following observations can be made:
- Source credibility and usability clearly have a greater impact on the overall rating than accuracy and currency. In view of HITI's findings (section 3.4, above) that the health profession regarded accuracy as the most important attribute of a site, it may be necessary to give it greater weight in the rating form;
- None of the sites indicate who authored the content. Unless the user knew that there were only two lawyers actively involved in one of the sites, s/he might have given that site the same credibility as another site which has about three hundred employed lawyers it can call upon, or a third site which has some of the finest legal minds in the country at its disposal;
- Lawstuff received a surprisingly low rating for accuracy, considering that it is one of the most eminent sources of information about children's legal matters. It is not really likely to be less accurate than the other sites, although it may not be as current (according to the indicators used in the form) as it was one of the first websites created for the general public. If it had an editorial review policy that demanded the checking of the accuracy of each page regularly, and noted the date the information was last checked on the bottom of the page, its currency and accuracy rating would improve;
- Lawstuff's ratings also suffered as a result of one of its innovations: the use of 'LawToons' to graphically depict case scenarios, apparently to better engage its target audience. The increased waiting time makes the benefit dubious, as many children will expect the near-instantaneous responses they are accustomed to from video games;
- One of the sites did not alert users when they left it, still displaying its own logo while part of another site was shown. Perhaps this sort of duplicity should result in marks being deducted.
7. Limitations of the Proposed Rating Form
It is apparent from the above discussion that the criteria will need to be weighted according to their importance. A consensus as to each element's importance could be obtained by sending a form such as that used by HITI (see Appendix A) to organisations that provide or use legal information. Alternatively, adding extra questions to the sections deemed most important would increase their relative contribution to the overall score.
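The sketch below shows one simple way such weights could be applied once agreed: raw criterion scores are normalised by their maximum possible marks and combined with weights. The maxima and weights shown are hypothetical values chosen for illustration (with accuracy weighted highest, echoing the HITI survey result), not figures proposed in this paper.

```python
# A brief sketch of weighting the four criteria. The raw scores are LawSoc's
# from Table 3; the maximum marks and the weights are illustrative assumptions.

def weighted_rating(scores: dict, maxima: dict, weights: dict) -> float:
    """Return a weighted rating on a 0-100 scale."""
    total_weight = sum(weights.values())
    weighted = sum(weights[c] * (scores[c] / maxima[c]) for c in scores)
    return 100 * weighted / total_weight

scores  = {"Accuracy": 9,    "Source": 17,   "Currency": 5,   "Usability": 18}
maxima  = {"Accuracy": 10,   "Source": 20,   "Currency": 6,   "Usability": 20}   # assumed
weights = {"Accuracy": 0.92, "Source": 0.86, "Currency": 0.6, "Usability": 0.7}  # assumed

print(f"{weighted_rating(scores, maxima, weights):.1f}")
```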
Even if an accurate rating is given, the quality of that rating degrades over time unless there is some way to ensure that the site is kept current. Thus a seal of approval should itself be given a date stamp so that the user will know when the site was last checked.
The difficulty of having non-experts rate a website is most acute when they are asked to comment on the accuracy of content. Similarly, they may not be able to give an informed opinion as to some of the other criteria. Although it may be appropriate for a subject matter expert to rate the quality of a Website, there is clearly scope for the target audience themselves to comment on how useful they found the site. However, simply asking users to record their satisfaction with the material is unlikely to reveal problems with comprehension, as they may not realise that they have misunderstood or may blame themselves.
A fundamental issue is whether the website is actually used, and by whom. Since most server logs do not distinguish repeated visits to a page by the same individual, visits to a page cannot be equated with visitors. To collect more information, users can be asked to fill in web forms, but, as with paper questionnaires, most usually fail to do this, casting serious doubt on the generality of the data (Oppenheim, 1991). Even if data on use are genuine, comparison of rates of use between different sites needs to be simultaneous rather than historical, given the exponential growth in the use of the Internet.
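The gap between page visits and visitors can be seen even in a trivially simplified log, as in the sketch below, which assumes each log line records a timestamp, a client address and a path. Counting distinct addresses is itself only a rough proxy for visitors, since proxies and shared machines blur the picture, which underlines how little raw hit counts reveal about actual users.

```python
# A small sketch of the visits-versus-visitors distinction. The log lines are
# invented and the "timestamp ip path" format is an assumption for illustration.

log_lines = [
    "1999-07-21T10:01 203.0.113.5 /tenancy/bond.html",
    "1999-07-21T10:04 203.0.113.5 /tenancy/bond.html",
    "1999-07-21T11:30 198.51.100.7 /tenancy/bond.html",
]

visits = len(log_lines)                                   # every request counts
visitors = len({line.split()[1] for line in log_lines})   # distinct client addresses

print(f"Page visits: {visits}, distinct addresses: {visitors}")
# Page visits: 3, distinct addresses: 2
```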
The National Children's and Youth Legal Centre (Lawstuff) site allows users to ask legal questions by email. This can be a very useful feedback mechanism, as the user will receive a benefit as well as providing the Webmaster with demographic data and information about what is misunderstood, not found or missing on the site. The promised response time of ten days seems too long, though.
For those investing resources in a web site, a key question is its likely impact on client outcomes and its cost effectiveness compared with other methods for delivering the same information. Attempting to assess the impact of information, education and programmes is always made difficult by the intervention of confounding variables, so that one can rarely be certain that the results are due to an intervention or externalities.
Kirkpatrick (1979, p. 89), in discussing the difficulty of measuring the results of training, points out that 'there are ... so many complicating factors that it is extremely difficult if not impossible to evaluate certain kinds of programs in terms of results'. Instead of offering a specific formula, Kirkpatrick simply reports anecdotal efforts to measure results. He does applaud attempts by researchers such as Likert to use qualitative data in measuring results, but he laments the fact that current research techniques are essentially inadequate and that progress in this area is slow.
Tentative answers may be obtained by studying the impact of the site on the knowledge of sample users in laboratory settings, but its real impact on the legal system can be studied only in the field. Case studies of people who have relied on online legal information might show whether they have made 'correct' decisions (e.g. whether to commence litigation, whether to seek legal advice, whether to defend an indefensible case) and whether they felt they had sufficient information to make these decisions. These studies could be augmented with interviews with lawyers and judicial officers to find out whether the user seemed to have a better grasp of legal concepts than the 'average' client does. This would require double-blind trials so that neither the interviewer nor the interviewee knew whether or not the client had obtained online legal information.
Randomised trials comparing the effects of providing the same information in two different ways raise problems familiar to evaluators of other kinds of information resource, such as Hawthorne effects. Measuring complex human attributes such as ease of navigating a web site requires systematic testing, refinement of pilot questions, the use of reliable and valid instruments, the anchoring of measurement results, and the elimination of ambiguity and vagueness. Studies of information technology often use poorly selected subjects, typically enthusiasts for the technology in question (Wyatt, 1996). The reported details about the users or setting may be insufficient to know if they are representative of all clients who might use the information resource.
There are problems associated with developing a rating form that the general public can use to test a legal information site. Perhaps the form should be designed for subject matter experts, and then the ratings posted as a seal on the reviewed site or as a listing of quality sites included in a clearinghouse. That information would be valuable for both the legal consumer and the site developer.
If consumers do choose to use the Internet to find legal information, it may be best for them to approach law-related sites as places to gather background data and information on personal legal matters. They may then use this information as the basis for a more informed approach to their lawyer and the legal system.
References

Bennett L, Wilkinson G and Oliver K 'The Development and Validation of Instruments to Assess the Quality of Internet Information: A Progress Report', 1997.
Health Information Technology Institute 'Criteria for Assessing the Quality of Health Information on the Internet', 1997.
Health On the Net Foundation 'Code of Conduct for medical and health web sites', 1998.
Kirkpatrick D L 'Techniques for evaluating training programs', 1979.
Mental Health Network 'Web Resource Ratings', 1997.
Oppenheim A N 'Questionnaire design, interviewing & attitude measurement', 1991.
Reeves T C and Harmon S W 'User Interface Rating Tool for Interactive Multimedia', 1993.
Silberg W M, Lundberg G D and Musacchio R A 'Assessing, controlling and assuring the quality of medical information on the internet', 1997.
Tillman H N 'Evaluating the Quality of Information on the Internet or Finding a Needle in a Haystack', presentation delivered at the John F Kennedy School of Government, Harvard University, Cambridge, Massachusetts, 6 September 1995.
Tufts University Nutrition Navigator <http://navigator.tufts.edu/ratings.html>.
US General Accounting Office 'Consumer Health Informatics - Emerging Issues', 1996.
Wyatt J C 'Commentary: Telemedicine trials - clinical pull or technology push?', 1996.
Wyatt J C 'Commentary: Measuring quality and impact of the world wide web', 1997.
Appendix A - Ranking and Implementation of HITI Criteria
Appendix B - Pilot use of Legal Information Rating Form
This is a Conference Paper published on 29 February 2000.
Citation: Robinson A, 'Creating a Safety Net - A Proposed Rating Form for Assessing the Quality of Legal Information in Websites', Conference Paper, 2000 (1) The Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/jilt/00-1/robinson.html>. New citation as at 1/1/04: <http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/2000_1/austlii/robinson/>