
JILT 1996 (3) - Pitt & Levine

Electronic Publishing and Standards - Academic Opportunity or Opportunism?


Douglas Pitt and Niall Levine
University of Strathclyde
n.levine@strath.ac.uk


Contents
Abstract
1. Introduction
2. Probability and Publication
3. Paper Publication's Purpose
4. Definition of Electronic Publishing
5. The Publishing House Reinvented?
6. Quality Mechanisms
7. Conventional Publishing: the Production Process and Standards
8. Growth of the Quality Movement
9. Application of Quality Metrics in an Electronic Publication Era
10. Pre-prints, Databases and Scholarship
11. The Role of Peer Review in Traditional Academic Publishing
12. Electronically Enhanced Peer Review Mechanisms
13. Electronically Assisted Editorial Moderation
14. Consequences of a Failure to Adopt Appropriate Standards
15. Liability of the Knowledge Professional
16. Conclusion
17. References

Abstract:

Paper-based journals provide an arena for the academic dissemination of knowledge. They provide a record of developments in particular disciplines. Produced at regular intervals, classifiable and physical, they find an ideal repository in libraries. Conventionally published material has a 'fixity' that facilitates scholarship; the remotely accessed and ultimately deletable electronically published equivalent does not.

Dual electronic and conventional paper-based distribution ensures a wider dissemination. However, electronic publication, whether of pre-prints or of a full article, raises concerns about originality, accreditation and liability. The relationship between electronic publishing and scholarly standards, measured in terms of relevance, timeousness, efficacy and cost, requires investigation.

This paper considers the contribution electronic publishing can make to the prevailing publishing paradigm, together with the future safeguards that stakeholders - publishers, authors, referees and users - will require to address if electronic publishing is to evolve.


Keywords: Electronic, Publishing, Standards, Scholarship, Quality, Liability


Date of publication: 30 September 1996

Citation: Pitt D, and Levine N (1996) 'Electronic Publishing and Standards - Academic Opportunism or Opportunity?', BILETA '96 Conference Proceedings, 1996 (3) The Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/elj/jilt/bileta/1996/3pitt/>. New citation as at 1/1/04: <http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/1996_3/special/pitt/>


"£8 million pound cyber library - '25% of space for books'"
(Glasgow Herald, December, 1995)

This is not illusory: the future has already arrived.

1. Introduction

With less physical capacity for material and people, the library of the new millennium, the way its users retrieve its contents, and hence the type of media it contains, will all require to be re-designed. Strategists at publishing and software houses like McGraw-Hill (Snoddy, 1995), Bertelsmann (Dempsey, 1995) and Microsoft (Kehoe, 1995) apparently share a similar view of the future.

Their vision is actually far from revolutionary. Who can honestly say they have remained immune from electronically retrievable databases, or the magnetic digitalisation of data? This is consistent with Drott's (Drott, 1995) contention that "new communication channels have, in fact, been emerging for decades."

Digitalisation presents a dilemma. On the one hand, there are existing media - stable, familiar, retrievable and recognisable - and on the other, forms of communication that are instant, alien and transitory. But unlike the scholarly prestige already accorded to printed journals, insist Case and Gilmore (Case and Gilmore, 1992) "computer files are not yet [1992] so regarded."

2. Probability and Publication

"Ninety-two coins spun consecutively have come down heads ninety-two consecutive times."

Guildenstern, Act 1

Rosencrantz & Guildenstern Are Dead, Tom Stoppard.


To ensure that electronic publishing does not become the first documented exception to Lotka's Law and Price's Urn (Koenig, 1995) (the observation that a few authors make significant contributions, but that most authors only make modest ones, and that the odds of successful publication increase in direct proportion to the ratio of published output and its readers), it is vital that the unique capacity of electronic publishing - its speed, accessibility and interactive intelligence - is not attenuated by any mystique regarding its future role, reputation or permanence.

It is incumbent upon those privileged to disseminate scholarly material to consider two purposes of publishing knowledge.

2.1 Knowledge and Publishing

The education process is forged at the confluence of research, erudition and teaching, and is therefore informed by scholarly publishing. Consequently, the standards of that material will also be academically influential.

2.2 Accountability and Publishing

The expedient of enhanced efficiency has permitted proponents of accountability to justify their attempts to quantify, measure and assess seemingly any form of output ad infinitum (Johnson and Kaplan, 1987). However artificial, the need to assess research in terms of output and quality has even insinuated itself into the arena of competing higher education institutions. Output is analysed to such an extent that today, notes Valimma (Valimma, 1994), "all actors in higher education understand assessment as a way of producing comparative data."

Investigations of the actual utility of publication are less prevalent, although a recent study (Stolte-Heiskanen, 1992) concluded that innovativeness and societal utility were of greater concern to a community of Finnish scientists interested in exploring new boundaries of knowledge than seeking outright personal recognition.

These considerations beg the question: how well does conventional journal publishing respond to issues of quality standards like innovation, timeliness and utility?

3. Paper Publication's Purpose

Paper-based publications have what Gilmore (Gilmore, 1992) terms a 'fixity' that facilitates retrieval and citation, and hence ultimate scholarship. Investigating the future of book publishing, Landoni and Catenazzi (Landoni and Catenazzi, 1993) contrast the portability, format familiarity and resolution that characterise paper-based publications with their demerits (rapid obsolescence, linearity, and one-dimensional nature). Even utilising the latest technology, conventionally published academic material is still often 3-18 months in its gestation.

Other, less formal channels of communication - committees, symposia and seminars - also play a crucial role in informing, and provide a forum where unconventional and unproven ideas can be exchanged, free from the fear of rejection by officially acknowledged experts who may also happen to serve as journal referees. Such material (often dubbed 'grey literature') is usually so prohibitively expensive and voluminous that its research necessitates the use of a dedicated electronic database (such as SIGLE) before it is possible to borrow a hard copy from a centralised lender.

Enhanced communications, cheaper technology, pressure of space and cost have therefore precipitated the advent of the type of "cyber library" described above. In this model, the publication that refuses to distribute by electronic or interactive means will simply atrophy from audience apathy, its place in scholarly communication displaced by one that will - a sort of virtual by-pass.

4. Definition of Electronic Publishing

Although it is difficult to be precise about what does, and does not, constitute electronic publishing, it is a prerequisite that it will always be assisted by computer technology ('electronically assisted publishing' - EAP). Hawkins et al (Hawkins, Smith, Dietlrin, Joseph and Rindfuss, 1994) have defined electronic publishing as "the use of electronic media, computers and telecommunications to deliver information to users in electronic form or from electronic sources" - what Harnad (Harnad, 1995) has termed "scholarly skywriting". Within this domain lie electronic bulletin boards, online catalogues, newspapers, books, mail and journals, as well as real-time downloaded information services, software and even remote conferencing. The potential benefits in terms of enhanced peer participation, quality, review, navigational design, production costs and instant access are such that even those who are sceptical will be forced to acknowledge its contribution.

Although it is not the intention of this paper to explore an optimum design strategy for electronic publications, the unique 'language' of hypertext (sound, animation and "navigability"), first described by Nelson (Nelson, 1980, 1990), raises concerns about the compromise between a publication's functionality (Laurillard, 1994) and complexity (Gygi, 1990). Since a 'reader's' ability to navigate a hypertext document depends on their individual I.T. dexterity, the successful conveyance of the message will depend as much on the quality of the publishing medium's design as on the author's content itself.

A consequence of technology that both produces and receives electronic publications is an erosion of traditional author/producer/reviewer boundaries that suggests a future reconfiguration of stakeholder roles. Indeed Stodolsky (Stodolsky, 1994, 1985) argues that computer mediated communication (CMC) - an automated discussion arena that helps preserve anonymity and facilitates open debate - even decreases the dominance of 'establishment' didacticism, and enhances the capacity for free 'speech.'

But the electronic fluidity of CMC and electronic publishing raises concomitant risks. Copyright breach, plagiarism, defamation - attendant in conventional publishing - all become normless in cyberspace, and possibly indiscernible until everyone is connected to the electronic universe.

Given Case and Gilmore's contention that the current academic reward system is "constructed around the prestige of publishing in certain places" and that, as noted previously, "computer files are not yet [1992] so regarded" (Case & Gilmore, 1992 supra), electronic publishing still requires to reconcile its inherent informality, economy and ease of access with the accreditation, review and authenticity currently accorded to conventional print publishing. Without further evolution, its future remains as endangered as the dodo. Its possible extinction would be a consequence of the vagaries of its archiving, the invisibility of its citation, apparent reviewer disinterest and the intransigence of reference abstracters. Put simply, the future of electronically disseminated scholarship depends on winning a recognition that its intangibility currently denies it.

Academic attitudes to electronic publishing remain stubbornly negative, a result in part of techno-phobia and each individual's own particular presentational requirements. Although discipline-specific attitudes to electronic publishing have been largely unstudied, McEldowney (McEldowney, 1995) conjectures that the attitude of science staff may, for instance, be more positive than those in the humanities, accounting for the higher proportion of life science electronic publications. Whether this represents the inherent conservatism of the learned professions or the particular suitability of the hypertext/database medium to scientific applications remains open to debate, although it is equally apparent that the linear progression and nature of academic argument is as discernible, for example, in legal debate as it is in, say, scientific discovery.

What may be needed therefore is an electronic journal design format appropriate to the 'ergonomic needs' of specific classes of users. This could be discipline-related, as between those, for example, searching a database (of physics findings), navigating complex text (cross-referencing expert testimony) or validating fine detail (by direct comparison). For some professions, like lawyers or "historians, who put a premium on the veracity of a record" (Case & Gilmore, supra 1992), any doubts regarding authenticity may constitute a critical flaw. Indeed, as Teresa Harrison (1991) has argued, the success or failure of electronic publications may depend on the consistency of a particular journal's design with the "social practices of the discipline it serves," its ultimate success determined by the extent to which it reflects that discipline's communication needs. The need for rapid publication, for instance, is not universal - and apparently "simply does not exist in history" (Case & Gilmore, supra).

5. The Publishing House Reinvented?

Economies of scale, marketing expertise and production excellence - the essential ingredients of conventional publishing - have, despite a few renowned exceptions, resulted in the transference of conventional scholarly writing from its formative residence - the academic press - to its current home, the prestigious commercial publishing house (Floridi, 1995).

The as yet undefined presentational and production processes associated with electronic publishing have meant that its 'residence' still remains normless. One explanation is the 'fuzzy boundary' induced by the technology itself, which enables publishers to download "streams of digits" (content) for heterogeneous consumers (readers, libraries or learning institutions) to "use them the way they want" - 'customization' (Snoddy, 1995).

This uncertainty has at least stirred debate as to who the gatekeepers of output and standards should be. Electronic publishing presents opportunities for those on both sides of the publishing divide. Laura Fillmore (Fillmore, 1995), 'proprietor' of the 'Online Bookstore', exhorts "today's publishers to maintain a decisive position in the ongoing and electronic evolution of our culture," while Guedeon (Guedeon, 1994) reasons that a (re)-convergence between the publishing world and libraries could lead to a redistribution of roles as momentous "as the modern publishing house after Gutenberg." Arnold (Arnold, 1994) advocates a participative venture of academics, librarians and publishers to obviate the economic excesses attendant on the "commercialisation of the electronic future" that otherwise threatens to engulf the reputation of all the existing stakeholders, but pessimistically laments that the precedent of the paper-orientated journal has already "become the paradigm for electronic publishing". Any assumption that academia might not yet be compelled to cede tentative guardianship of its embryonic electronic offspring underestimates the expansionist aspirations of conventional publishers like McGraw-Hill (Snoddy, supra 1995) or Bertelsmann (Dempsey, supra 1995) to become global purveyors of multimedia, or of software developers, like Microsoft (Kehoe, supra 1995), to become publishers - for it is apparent (Carey, 1995) that within "the communications industry, generally no one knows where they are going."

6. Quality Mechanisms

Within this flux lies the contentious and opaque issue of ensuring quality. The instantaneous availability of electronic publishing on the ubiquitous 'Infobahn' (which, like any consumer addiction, engenders 'need' charged at an appropriately premium rate) threatens to exacerbate tensions between the ponderous production process of conventional publishing and the structureless velocity of electronic publishing (Sharpe, 1994). Scholarly standards, traditionally the responsibility of academics, reviewers and publishers - the gatekeepers of quality - are here at their most vulnerable.

7. Conventional Publishing: the Production Process and Standards

[Figure: The Paradigm of Publishing Orthodoxy]

The paradigm depicted above reflects the "time tested tradition that helps to ensure the quality of [medical] literature" (New England Journal of Medicine, 1995) typical of conventional publishing. There are, however, numerous flaws and contradictions apparent within this model, relating to remoteness, expertise, objectivity and efficacy, all aspects that electronic publishing can do much to remedy.

Consider, for a moment, the individual stages in Conventional Publishing:-

AUTHORING: A publication's content, whether written individually or collaboratively, suffers (probably in direct proportion) from the physical remoteness of its contributors from each other and from their respective research sources. However, given that electronic networks are increasingly the norm, such isolation is now unnecessary.

SUBMISSION: Although not necessarily a subject specialist, an experienced editor's initial scan is often sufficient to determine a submission's audience relevance. If the article's content fails this first test, the submission is 'spiked' and returned to the authors with a polite note directing them to the journal's objectives and, on a good day, urging them to try elsewhere. Clearly these selection and acceptance criteria are less than systematic.

REVIEW: The next step in the conventional publication process, is validation. Review is normally conducted by 'expert' peers, drawn from a pool of respected specialists, familiar to the publication's editor.

To maintain the author's and reviewers' anonymity, such review is often conducted 'double blind' by an invariably unpaid forum of expert volunteers. It is during this phase in the publication process that contentions are first questioned, evidence used in support of arguments examined, data verified, and hypotheses tested, to ensure that acceptable standards of academic rigour, appropriate format, and relevance are sustained. An additional device to help systematize the consistency of referee appraisal is the pro-forma guide, enumerating assessment criteria like clarity, originality, and societal contribution.

Although the role of review in the electronic publishing process will be discussed in more detail later, consider, for a moment, the following assumption. It is only really once ideas have entered the wider environment, notably through exposure to an external audience, that they can actually be rigorously tested. The wider the dissemination, the greater the scrutiny. As these writers will argue later, electronic publishing offers the possibility of enhancing not only the breadth of spectrum and speed of the review process, but also its very sequence within the conventional publishing paradigm.

EDITORIAL PREPARATION: In due course (anything from 6 days to 6 months) a submission is returned to the editor, who, taking account of the reviewers' proclivities, may immediately accept the article for publication (unlikely), return it to the author recommending revisions (frequent), or reject it (ready for it to be resubmitted to another unsuspecting, and likely less renowned, journal, or consigned to the author's virtual dustbin). More general public scrutiny, suggestions, support or disagreement will require to wait until after the appearance of the article (constrained as it is by limits of page format, size and space) in the next available print-run.

8. Growth of the Quality Movement

In the conventional publishing paradigm described above, it is striking that 'quality' control is largely post-facto.

Inspired by accounts of the post-war industrial experiences of Deming and Juran, recent decades have, however, witnessed an almost messianic zeal in the quest for quality in other industries. Initially production driven, the focus shifted from quality control (final output) to quality assurance (prescribed tolerances), employing quantitative tools like statistical analysis and product scrutiny.

Latterly an holistic 'systems'-based perspective, breaking down each process into its individual constituents (cause and effect analysis (Ishikawa, 1984)), has culminated in the TQM movement and the restructuring of entire organisations (BPR). This idealised concept of manufacturing industry has subsequently been applied to technologically transitional environments (Venkatraman, 1994), and ultimately translated to the service sector (SERVQUAL, Parasuraman et al).

Modern industrial quality initiatives therefore attempt to incorporate quality assessments during each stage in the production process, rather than grafting them on after completion - or, as depicted in the paradigm of the publishing 'review cycle' described above, 'post-facto' - only after material has been written and submitted for review.

9. Application of Quality Metrics in an Electronic Publication Era

By conceiving a 'route map' of the conventional publishing paradigm that 'signposts' each process into its constituent 'hardpoints,' it becomes possible to see precisely how specific elements in each stage inter-relate to each other, highlighting any 'fail points' that might prejudice the eventual outcome.

The first such sign-post can be described as 'enhanced author - audience research'.

By conducting research amongst existing users - and this includes the consumer, whether a specialist academic, or merely an interested browser - a publication can ascertain not only how well it fulfils their expectations, but also obtain a scientific audience profile that enables it to adapt or respond in accordance with those user expectations. If the publication is electronic, either the publisher, or the user, can customize it individually to match their particular needs.

For putative writers, electronic publishing also provides an opportunity to improve quality prior to authoring. First, an electronic search can systematise the selection of an appropriate journal. Secondly, and more importantly, an electronic journal provides authors with direct access to reader feedback, in turn providing a more systematic basis on which to evaluate future contributions for their audience relevance and applicability.

10. Pre-prints, Databases and Scholarship

'Pre-prints' (non-published draft manuscripts used, for example, by physicists) provide another instrument towards the goal of improving scholarly standards. Though not officially peer reviewed as a pre-requisite to their collation in a respectable repository, they provide a recognised platform on which to propound radical but unproven phenomena. Communication is open, and the gates of knowledge wider.

If the pre-print repository is electronic, these draft scholarly manuscripts can then be downloaded or accessed for subsequent scrutiny by others. A significant impediment to the use of pre-prints as a quality mechanism is a subsequent editorial policy that deems their previous distribution or electronic 'posting' as constituting 'prior-publication' and blackballs them. Recently, the Editors of the New England Journal of Medicine (N.E.J.M., 1995, supra) stated that "we have decided that electronic publishing should not be regarded differently to pre-publication". Thus posting a manuscript, table or figure on a host computer, "to which anyone in the world can gain access," stated the journal's editorial board, "will constitute prior publication".

This latter case clearly illustrates the vested traditions of conventional publishers. The justification put forward is broadly one of exclusive copyright. Unless negotiated otherwise, copyright is surrendered when a paper is submitted for commercial publication, restricting (Xerox machine permitting) any subsequent unauthorised reproduction, whilst imbuing the publication with a marketing niche - a sort of scholarly scoop.

Although specific concerns relating to the permanence and authenticity of electronic pre-prints are also enumerated by the N.E.J.M., raising more general quality concerns that still require to be addressed in any debate about electronic publishing, it is the opinion of these writers that such issues can, and should, be addressed by appropriately designed mechanisms that will not detract from the immediacy of access inherent in this developing medium.

11. The Role of Peer Review in Traditional Academic Publishing

For authors to "entrust" their contributions to electronic dissemination, such publication needs to attain the prestige normally accorded to paper-based media.

Even the most optimistic of the technologically orientated concede that "the one sector of the net that will have to be traditional.... is the validation of scholarly ideas and findings by peer review" (Harnad, S. 1995, supra).

A pre-requisite to attain such prestige is the application of peer review. This first rose to prominence during the mid-19th century, when both the Royal Society in London, and the National Academy in America, began distributing manuscripts to fellow academics as ad hoc committees of experts in order to improve the accountability of grants allocated for research.

Although criticised for the inconsistency, innate conservatism and ethical risks inherent in this 'self-policing' quality assurance system (Green, 1994), there is still little doubt of peer review's pre-eminence in academia, whatever the discipline: "the persons most qualified to judge the worth of a [scientist's] grant proposal or the merit of a submitted research paper are precisely those who are the scientist's closest competitors" (Judson, 1994).

Indeed, the General Accounting Office recently concluded that despite shortcomings "the peer review process appeared to be working reasonably well.... virtually no one suggested replacing it."

12. Electronically Enhanced Peer Review Mechanisms

Brown (Brown, 1994) describes an "Action Learning Set" involving a multi-disciplinary, collegiate, face-to-face review process. In this architecture, non discipline-specific reviewers are deployed since "they lack preconceptions about the content of a paper or its context." Although nothing prevents Brown's suggested model, that substitutes collective synergy for individual criticism, from operating in a conventional publishing environment, this is precisely the type of refereeing mechanism that can be facilitated in electronic publishing.

Additionally, the difficulty of co-ordinating the simultaneous availability of such reviewers (particularly given an international constituency) is significantly reduced by deploying the available technology. Reviewers who may be otherwise busy, absent or dilatory can be substituted by a readily available network of interested and accredited on-line volunteers for comment, criticism and acclaim, as appropriate.

Where electronic publishing scores over its conventional counterpart is in systematizing the criteria for the peer review/referee selection process itself. Electronic citation searches and up-to-date literature surveys can improve the basis on which referees are selected, while 'real time' directories can detail their availability and any specific professional accreditation or expertise. Electronic publishing also allows general postings simply calling for reviewers.
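
As a purely illustrative sketch of how such referee selection might be systematised - the reviewer directory, keyword matching and availability flag below are these writers' own assumptions, not a system described elsewhere in this paper - candidate referees could be ranked by the overlap between a submission's keywords and each reviewer's declared expertise:

    # Hypothetical referee-matching sketch: rank available reviewers by the
    # overlap between a submission's keywords and each reviewer's declared
    # expertise. All names, keywords and flags are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        name: str
        expertise: set     # keywords drawn from citation searches and directories
        available: bool    # 'real time' availability, e.g. from an online directory

    def rank_referees(submission_keywords, pool):
        """Return available reviewers ordered by the number of matching keywords."""
        scored = [(r.name, len(submission_keywords & r.expertise))
                  for r in pool if r.available]
        return sorted([s for s in scored if s[1] > 0], key=lambda s: s[1], reverse=True)

    pool = [
        Reviewer("Reviewer A", {"electronic publishing", "peer review", "copyright"}, True),
        Reviewer("Reviewer B", {"hypertext", "interface design"}, True),
        Reviewer("Reviewer C", {"peer review", "scholarly communication"}, False),  # unavailable
    ]
    print(rank_referees({"peer review", "electronic publishing"}, pool))
    # -> [('Reviewer A', 2)]: Reviewer C also matches, but is filtered out as unavailable

Such a mechanism would supplement, not replace, editorial judgement; the ranking merely narrows the pool from which an editor (or the open call described above) draws.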

In doing so however, the electronic medium must counter criticisms that it is in any way censoring information (not posting it until it is reviewed) or diluting the existing review pool of academic expertise by condoning an interactive free-for-all.

Heller (Heller, 1995) suggests that author accountability can be assessed by relating it to the number of times a document is downloaded for printing. Appraising a manuscript's 'value' in this way poses quality and ethical concerns. Equating the number of electronic access/downloads to 'quality' is a dangerous precedent.

By making review accessible to the peer community at-large, (including those from different disciplines) both Heller (Heller, 1995 supra) and Harnad (Harnad, 1994 supra) are attempting to minimise the prejudice of a particular profession's prevailing ideological consensus, thereby democratising peer review.

But the actual number of times a publication is electronically accessed should not be made analogous to either its (paid-for) subscription base or its contents abstraction in respected disciplinary indices. A document can be accessed automatically using an electronic bookmark without either being fully downloaded or read. Once retrieved, just like its paper counterparts, electronic files can easily be ignored or forgotten. Thus assessment of its relevance or applicability based upon quantifying the number of times it is 'accessed' raises ominous concerns about gerrymandering by an unscrupulous author's acolytes - something that is difficult to prove in an electronic-only environment. Clearly there is a risk of reducing quality to the lowest common denominator - sheer number of reviewers.
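
To make the point concrete, the following sketch (with wholly invented access data) contrasts a raw access count with a crude filter for distinct, non-automated readers who fetched most of a document; the 80% threshold and the 'automated' flag are assumptions for illustration only:

    # Invented access log: raw accesses versus distinct, substantially
    # downloaded copies fetched by non-automated readers.
    from collections import namedtuple

    Access = namedtuple("Access", ["reader", "bytes_fetched", "document_bytes", "automated"])

    log = [
        Access("reader-1", 120000, 120000, False),  # full download by a person
        Access("reader-2", 2000,   120000, False),  # abandoned after the abstract
        Access("robot-1",  0,      120000, True),   # automated bookmark check
        Access("robot-1",  0,      120000, True),   # the same robot, again
        Access("reader-1", 120000, 120000, False),  # repeat download, same reader
    ]

    raw_count = len(log)
    engaged_readers = {a.reader for a in log
                       if not a.automated and a.bytes_fetched >= 0.8 * a.document_bytes}

    print(raw_count, len(engaged_readers))  # 5 accesses, but only 1 engaged reader

Even a filter of this kind says nothing about whether the document was read or valued; it simply illustrates how far a raw access count can overstate genuine readership.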

Conversely though, electronic publishing can enhance the reputation of academic publishing, by acquainting browsers with the refereeing process ex facie the accessed document, thereby displacing the suspicion associated (Gorson 1980) with its paper-based counterpart, that the editorial process is a procedurally opaque and 'closed' system. However there is a clear middle ground between the mystique of editorial policy and 'majority approval' by the vox populi.

The referee anonymity preserved by CMC may obviate the 'pecking order' syndrome identified by Lawley (Lawley, 1995) of traditional publications, a hierarchy perpetuated in any analyses of the prestige of referee/reader affiliations, be they professional, intellectually curious, or electronically connected, (to Compuserve or Colloquium). The desirability of adopting pseudonyms, or the comparative anonymity of code numbers is also debatable, but may be integral to the operation of an automated computer mediated publication, described below.

13. Electronically Assisted Editorial Moderation

The conventional publication process and its associated organisational structure - comprising contributors, editorial review, and print functions - displays negative facets such as delay, bureaucracy, fixed production costs and necessarily rigid schedules. As identified previously, any 'hard point' failure in the communication or 'hand-off' between each process has unintended consequences for successive stages, representing a potentially weak link that in part explains the vogue for 'concurrent engineering' in other industries. As argued previously, the traditional review process, ironically a tool to help validate quality, itself contributes to the inflexibility (and maybe conservatism), associated with the publishing process.

The interactive capabilities of electronically linked stakeholders - authors, referees, editors, reviewers and readers - encourage improved participation, lend themselves to integrating formerly separate tasks, and reduce centralised control. Instead of determination there is, perhaps, democracy, so that both editorial and referee dependence are substituted by increased participation, inhibiting any one individual from controlling or dominating the process. No one is suggesting a totally leaderless process - evidence (Hoffman and Maier, 1961) suggests that where there is, in fact, no decision 'casting' mechanism, a group will tend to adopt positively received suggestions, regardless of their merit or quality, on about 85% of occasions. Thus Maier (Maier, 1967) has argued that group interaction, consisting of individual influence but centralised dissemination or moderation, can improve both the quality and acceptance of decisions.

The possibility of transplanting such 'brainstorming' exercises (recently evident in the service and manufacturing sectors as a quality improvement tool) to an academic electronic publishing environment is not as fantastical as it sounds. Although initially designed to trigger innovative advertising ideas (Osborn, 1953), the central tenets - a non-judgemental 'facilitator', plurality of participants, continuous suggestion of ideas and linear progression of contribution - designed to promote thorough 'discussion' of a specific subject, are all realisable using remote interactive electronic communications. For instance, Wood (Wood, 1995) describes the implementation of an educational software system intended to facilitate academic debate during distance 'learning' sessions. At Monash, online 'speakers' were able to address electronically-linked, but spatially disparate, 'virtual' seminar groups, who could respond via intermediaries ('facilitators') to panel discussions. In Monash's model of remote scholarship, 'keynote speakers' combined the roles of moderator (participant induction, system administration), social host (introduction, encouragement or suggestion) and chairperson (setting agendas, controlling discussion etiquette, subject duration, and summarising group consensus).

In the publishing paradigm, while the editor may wish to retain the final say as to what is or is not published, a more collaborative model is envisaged, where (accredited) referees can either be selected, or volunteer (accredited and non-accredited), once material is electronically posted for them to review. The wider 'referee' catchment arena, and scope for subsequent discussion are all conducive to improving standards of content quality, whilst diminishing the negative philosophical or personality clashes and plagiarism, that are prevalent in the conventional closed publication referee system.

Instead of a forum of open-ended debate where personalities are identified, Stodolsky (Stodolsky, 1994 supra) has argued that the anonymity and automation available in his concept of a telematic journal offer enhanced content quality and self-regulation. Utilizing encryption-key technology, Stodolsky asserts that an electronically mediated journal not only provides scope for greater democracy in the way in which material is published, but also safeguards the integrity of that material and, if need be, its privacy (important for in-house or professional journals or with sensitive material).

Although hardly new (the Rand Corporation promoted the Delphi decision-making process in the 1950s to improve prediction success without risk of contributor embarrassment (Luthans, 1977)), author anonymity provides assurance that submissions are evaluated on content, rather than sources.

Electronically automated mediation preserves author anonymity both throughout the rejection/acceptance and publication stages, thereby protecting self-expression and liberalising information exchange, so that as Stodolsky (Stodolsky, 1994 supra) claims, "the integrity of the entire structure need not rest on one person." In order to prevent plagiarism or abuse, Stodolsky's telematic journal also envisages a virtual archive, where messages, articles and reviews can be time stamped ("signed") and version controlled. In this view of a fully automated publishing 'office', not only can referees and reviewers be selected more scientifically, but by electronically filtering "source reputations" their contributions and comments can be evaluated, thus systematising a collation of commonly voiced views, and ultimately reducing editorial overload. Although Stodolsky was evaluating the role of CMC in a corporate environment, its combination of "quality control and security" is a clear indication of the potential for enhanced efficiency in the conventional publishing paradigm. It also helps differentiate electronic publishing, at this early stage in its development, from its paper-based relation, at a moment when both its acceptance and quality are of paramount importance if the medium is to fulfil its true promise.
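
A minimal sketch may make the time-stamped, version-controlled archive concrete. Stodolsky's proposal rests on encryption-key technology; in the illustration below a simple keyed hash (HMAC) stands in for the 'signing' step, and the archive key, record layout and function names are assumptions of these writers, not part of the telematic journal design itself:

    # Illustrative archive: each manuscript version is hashed, 'signed' with a
    # keyed hash and time stamped, so later tampering (or a new version) is detectable.
    import hashlib
    import hmac
    from datetime import datetime, timezone

    ARCHIVE_KEY = b"journal-archive-key"  # hypothetical key held by the journal

    def archive_version(log, manuscript):
        """Append a signed, time-stamped record of this manuscript version to the log."""
        digest = hashlib.sha256(manuscript.encode("utf-8")).hexdigest()
        signature = hmac.new(ARCHIVE_KEY, digest.encode("utf-8"), hashlib.sha256).hexdigest()
        record = {"version": len(log) + 1,
                  "timestamp": datetime.now(timezone.utc).isoformat(),
                  "sha256": digest,
                  "signature": signature}
        log.append(record)
        return record

    def verify(record, manuscript):
        """Check that a manuscript still matches its archived, signed record."""
        digest = hashlib.sha256(manuscript.encode("utf-8")).hexdigest()
        expected = hmac.new(ARCHIVE_KEY, digest.encode("utf-8"), hashlib.sha256).hexdigest()
        return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

    archive = []
    draft = "Electronic publishing and standards - first draft"
    rec = archive_version(archive, draft)
    print(verify(rec, draft))                # True: the archived version is intact
    print(verify(rec, draft + " (edited)"))  # False: an altered text no longer matches

A production system would use public-key signatures so that authorship, as well as integrity, could be verified; the sketch shows only the principle of binding a version to a time and a key.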

14. Consequences of a Failure to Adopt Appropriate Standards

It is not the purpose of this paper to detail the full implications of a failure to attain acceptable standards in publication, rather it is to highlight the ways in which electronic publishing can be accorded the prestige that it deserves. However as a final justification of the need to achieve appropriate scholarly standards, consider the implications of failing so to do.

15. Liability of the Knowledge Professional

We live in litigious times. The moment one disseminates information for others to read, the potential for legal liability arises. In particular, anyone - professionals and non-professionals alike - who writes or gives advice upon which others rely should consider some form of indemnity insurance, as ought their publishers. In this respect it is important to differentiate the individual legal personae and their respective liabilities.

Those who write about, comment upon and provide advice on specific subjects, whether acting in the capacity of an individual, agent, employee or partner, may make themselves (or their organisation) liable for negligent mis-statement, defamation, or other action based on plagiarism or breach of copyright. The situation for publishers of remotely accessed databases is more complex - and new legislation is imminent.

An elementary question is the choice of the governing law if the countries of storage and retrieval vary; it may yet become necessary to create a convention to standardise 'virtual' rights and liabilities of all those countries who 'sign up'.

In questions regarding copyright, or defamation for instance, it is important to differentiate the legal implications of 'transmission' (in the capacity of a conduit, like, say, a telephone company) from those of broadcast, storage and reproduction. Thus an electronic 'moderator' editing a portfolio of titles or posting a defamatory statement might be liable in the same way as a magazine publisher or book shop, who, it is legally presumed, ought to have known the entire contents of the works that are retailed. The situation is further complicated given that in an interactive medium like the internet, a 3rd (or 133rd) party - even an anonymous one - may have been responsible for 'posting' the offending material, which the editor or publisher may have had no technical way or legal authority to exclude, but for which they may nevertheless find themselves legally liable.

16. Conclusion

This paper has briefly attempted to explore the virtual landscape of electronic publishing. It presents an image of a 'disorganised' topography, in which the conventional signage of academic publication orthodoxy provides at best a series of unhelpful guides.

In contrast to the somewhat mechanistic image of conventional linear publishing paths, the future (and present) world of electronic publishing is multi-dimensional and organic.

Ironically, many of the canons of conventional publishing, such as peer review, may not be under threat as critics have suggested, but may actually be enhanced through electronic means.

Where the critics' argument needs to be given serious consideration is in its suggestion that an inherent danger of electronic publishing is that it offers the potential for the opportunistic and less than scrupulous to lay claim to quality publication untested by the venerable quality mechanisms described previously.

More positively, the increased participation made available by electronic publishing, if linked to an architecture of novel but rigorous quality assurance procedures, may presage new and exciting opportunities within academic publishing. Any failure to devise appropriate standards may undermine the credibility of electronic publishing as an academic medium. The market for academic ideas, however virtual or virtuous, must be regulated. A central dilemma, though, is whether such intervention will enhance or destroy the specific raison d'etre of open exchange at this formative stage in the evolution of electronic publishing.

17. References

Gorson, M. Running a Referee System Primary Communications Research Centre University of Leicester, 1980

Abate,Tom 'Ethics in research,' Interpersonal Computing and technology: An electronic Journal for the 21st Century

Arnold, K. 'The Body in the Virtual Library: Rethinking Scholarly Communications' (arnold.bio.html)

Brown, Robert 'Write First Time' Literati Newsline 1994/5

Dempsey, Judy 'Bertelsmann's Pulse Races to America On Lines's Tune' Financial Times 4 December, 1995

Drott, D.C.M., 'Reexamining the role of conference papers in scholarly communication' Journal of the American Society for Information Science, 1995, 46(4), 299-305.

Fillmore, Laura 'Internet Publishing: How we must Think,' 1993 (laura@obs-us.com)

Floridi, Luciano 'Internet, which future for organised knowledge, Frankenstein or Pygmalion?', International Journal of Human-Computer Studies, 1995, (43), 261-274

Gilmore, Matthew B. and Case, Donald, O. 'Historians, Books, Computers and the Library' Library Trends Vol. 40 No. 4 Spring 1992, 667-86.

Green, Diana in Craft (ed), International developments in assuring quality in higher education,The Falmer Press London, 1994, 168-177.

Guedeon, Jean Claude 'Electronic Journals, Libraries, University Presses' in Scholarly Publishing on the Electronic Networks: Filling the Pipeline and paying the Piper, Association of Research Libraries, 4th Symposium 1994, 67-75 and 'Why are Electronic Publications Difficult to Classify' guedon@ere.montreal.ca.

Gygi, K. 'Recognizing the Symptoms of Hypertext', in The Art of Computer Interface Design, ed Laurel, B, Apple/Addison-Wesley, 1990, p 279.

Harrison et al 'On Line Journals, Disciplinary Designs for Electronic Scholarship', Public Access Computer Systems Review (PACS@uhupvm1) 2 (1) 1991, 25-38

Hawkins, D., Smith, F., Dietlrin, B., Joseph, E and Rindfuss, R. 'Forces Shaping the Electronic Publishing Industry' in The Electronic Publishing Business and Its Market, Blunden & Blunden, PIRA, 1994

Harnad, S. Implementing peer review on the 'net,' in Peek R. and Newby, G. (eds) Electronic Publishing Confronts Academia, the Agenda for the Year 2000, Cambridge, MIT and also harnad@ecs.soton.ac.uk. and Electronic Scholarly Publications : Quo Vadis, Managing Information, March, 1995, 31-33.

Heller, Stephen, R. 'Chemistry on the Internet - The Road to Everywhere and No Where', Journal of Chemical Information and Computer Science, 1995 and US Department of Agriculture (srheller@nalusda.gov)

Hoffman, L.R. and Maier, NRF., Quality and the acceptance of problem solving solutions by members of heterogeneous and homogeneous groups, Journal of Abnormal and Social Psychology 1961 pp401-7

Ishikawa, K 'Quality Control in Japan', in Sasaki, The Japanese Approach to Production Quality, Oxford, Pergamon, 1984.

Johnson, T.H. and Kaplan, R.S. Relevance Lost: The Rise and Fall of Management Accounting, Boston, Harvard University Press 1987

Judson, Horace Freeland 'Structural Transformations of the Sciences and the End of Peer Review,' Journal of the American Medical Association, 272 July 13, 1994 92-4

Kehoe, Louise The Battle for Cyberspace Financial Times August 17, 1995.

Koenig, M. and Harrell, T., 'Lotka's Law, Price's Urn and Electronic Publishing,' Journal of the American Society for Information Science, June 1995, 386-388

Landoni, M. Catenazzi, N 'Hyper-books and Visual-books in an electronic library' The Electronic Library, Vol. 11, No.3 June 1993 175-186

Laurillard, D. 'How can learning technologies improve learning', Law Technology Journal, Vol. 3, No. 2, May 1994, 46-49

Lawley, Elizabeth Lane, 'Computer Mediated Communication: An Initial Exploration' http://www.itcs.com/elawley/

Luthans, Fred, Organisational Behaviour McGraw-Hill, Tokyo, 1977

Maier, NRF 'Assets and liabilities in group problem solving: the need for an integrative function', Psychological Review, 1967, 239-49

McEldowney, P.F. 'Scholarly Electronic Journals - Trends and Academic Attitudes: A Research Proposal' Masters Project, University of North Carolina, Spring 1995 (philipmc@virginia.edu)

Nelson, Ted 'The right way to think about software design', in The Art of Computer Interface Design, ed Laurel, B, Apple/Addison Wesley, 1990, and Literary Machines, Mindful Press, 1980

New England Journal of Medicine 'The Internet and the Journal', Vol. 332, No. 25, June 22 1995, p 1709.

Osborn, A.F., Applied Imagination, Scribner's Sons, NY, 1953, p 297.

Peer Review: Reform Needed to Ensure Fairness in Federal Grant Agency Selection, General Accounting Office, June 1994 (GAO/PEMD).

Professor James Carey "Paper Dinosaurs Refuse to Fold" in Tony Jackson, Financial Times, 12 December, 1995

Sharpe, Liz Knowledge: Information, Publishing, in The Electronic Publishing Business and Its Market, Blunden & Blunden, PIRA, 1994

Snoddy R: 'A Publisher who had a global electronic dream', Financial Times, Media Futures, Interview with Joseph Dinnone, CEO McGraw-Hill 16 October 1995.

Stodolsky, D.S. 'Telematic Journals and Organisational Control: Integrity, Authority and Self-regulation,' Interpersonal Computing and technology: An electronic Journal for the 21st Century, 1994 Vol. 2 No. 1, 50-63 (david@adromeda.rutgers.edu) and 'Information Systems for Management' Human Systems Management, 1985 Vol. 5, 39-45

Stolte-Heiskanen, Veronica ' Research performance evaluation in the higher education sector', Higher Education Management, Vol 4, July 1992, 90-95.

Valimma, J. 'Academics on assessment and peer review, the Finnish experience' Higher Education Management, Vol. 6, November 1994, 391-407.

Venkatraman, N. I.T.-Enabled Business Transformation Sloan Management Review, Winter, 1994, 73-87

Wood, Jean, 'Work progress at the virtual Monash' Active Learning92, July 1995,13-18
