
198 - Models for X-11 and 'X-11-Forecast' Procedures for Preliminary and Revised Seasonal Adjustments

K.F. Wallis

Procedures for the seasonal adjustment of economic time series have typically been evaluated by studying their effect on a sample of actual time series. Recent proposals for amendments and extensions to existing methods have also been evaluated in the same way. Perhaps this approach is thought to be inevitable given that "there seems to be no ideal process of evaluating a method of adjustment" (Granger, 1978, p.55). In contrast, however, this paper continues a line of research in which the properties of the procedures themselves are studied, in the abstract. It is hoped that this will improve our general understanding of the performance of the existing methods and their extensions, and help to explain the results of the previous empirical studies. The particular procedure considered is the U.S. Bureau of the Census Method II Variant X-11 (Shiskin et al., 1967), which is widely used and is generally held to give satisfactory results in the seasonal adjustment of historical data. Our analysis proceeds by linear filter methods. The basic framework of a set of "time-varying" linear filters is presented by Wallis (1982), and further properties of these filters and their components are considered in the present paper. The use of linear methods implies that attention is restricted to the performance of X-11 in additive mode (in which seasonal components are estimated as average differences from, not average ratios to, the trend-cycle), neglecting the option of graduating extreme irregular values.

Date
Thursday, 28 June 2001
Tags
1978-1988, Active

197 - On Utilitarianism and Horizontal Equity as Welfare-Symmetry in Incomes

J. Seade

The principle of horizontal equity, that people with equal (full) incomes should be treated alike by the taxman, is widely regarded in the traditional public-finance literature as one of the central guiding lights of good tax design, "called for by the principle of equal justice under the law" (Musgrave (1976, p.4)). Yet very little attention has been given to the study of this principle, in particular its necessity or even admissibility within a more general welfare-theoretic framework. The first question that arises is: why horizontal equity? Some might answer that there is something to be said for the distribution of welfare in the absence of tax, whose ranking is to be preserved. This line of thought would go along with historic notions of justice and deserts, held by some economists and philosophers (e.g. Nozick) but, I hope, not by too many; such a stand seems, to me, gratuitous and unjustified. Welfare, or at any rate the bulk of formal welfare economics, has to do with actual allocations, with (end-) results. Perhaps changes in these results should in some cases be made to matter, or comparisons with other groups or countries. But I see no reason why the comparison with the "primitive state" should be given a central role in choosing policy. All potential distributions of income and of tax burdens should be treated symmetrically, just as the possibilities in the menu that they all are. Accordingly, if horizontal equity is to arise as desirable, it must be as a result of other underlying criteria which we may agree upon as constituting the social good.

Date
Wednesday, 27 June 2001
Tags
1978-1988, Active

196 - Reflections on the Invisible Hand (Text of the Fred Hirsch Memorial Lecture 5.11.81)

Frank Hahn

That a society of greedy and self-seeking people constrained only by the criminal law and the law of property and contract should be capable of an orderly and coherent disposition of its economic resources is very surprising. Marx called such a society anarchic, and so it is. Yet ever since Adam Smith economists have been concerned to show that such anarchy is consistent with order and indeed with certain desirable outcomes. Smith proposed that the market system acted like a guiding - an invisible - hand. It was invisible since in fact there was no actual hand on the rudder. The metaphor which he chose was exactly apposite. Two hundred years on, the basic theory has been much refined, and we know a good deal more about those instances where the hand trembles or fails. Yet there is no agreement on some of the fundamental ingredients of the story, and there is also much which we simply do not understand. In this lecture I shall give my evaluation of our present theoretical state in this matter and draw a number of lessons of a somewhat practical kind.

Date
Tuesday, 26 June 2001
Tags
1978-1988, Active

195 - American Innovation Abroad

John Cable & Manfred J. Dirrheimer

Multidivisional firms account for a clear majority of the hundred leading companies in America and Britain, and significant proportions elsewhere in continental Europe.(1) Thus a sizeable proportion of total economic activity in Western, developed economies now takes place within the quasi-autonomous operating divisions of large industrial organizations, co-ordinated via a network of general offices. The multidivisional organization is widely accepted to be an American innovation (Chandler, 1966).(2) Its international diffusion at first sight appears to conform to the 'culture free' theory of organization: the idea that organizational and institutional patterns are converging under a common logic of work and administration, in response to technological imperatives, and increasingly independent of cultural factors (e.g. Kerr, 1960; Harbison and Myers, 1959). In line with this view the European multidivisional development may be seen in the context of Servan-Schreiber's "American Challenge" (1968): the revitalization of European economies with the aid of superlative American organisational methods. But as Child and Kieser (1979) point out, an alternative school of thought believes socio-cultural influences will be strongly manifested in patterns of organisational behaviour (e.g. Crozier, 1964; Malinowski, 1960). And on closer inspection non-trivial differences are discernible in the institutional frameworks within which multidivisionals have developed in different countries, and arguably in the nature of the multidivisionals themselves.

Date
Monday, 25 June 2001
Tags
Active, 1978-1988

194 - Rational Forecasts from Non-Rational Models

A. Snell

Determined schemes are considered because of their overwhelming popularity. An unbiased and optimal (in the sense of minimum forecast error variance) extrapolative predictor is also described and used in the analysis because it provides a useful benchmark as the 'best' extrapolative proxy available. Section 4 examines the implications for estimation of using these three extrapolative proxies in the context of a simple two-equation macro model due to Wallis (1980), and section 5 extends the model to include dynamics. The simple two-equation model used bears a very close resemblance to the two main equations of the condensed St. Louis model described by Anderson, and this, together with its linearity and simplicity, makes it an ideal structure in which to house the analysis. Section 6 gives some numerical comparisons of multiplier error using the method under feasible values for the parameters. Section 7 discusses further problems that arise in using the algorithm for policy analysis even when a set of consistent estimates is used. Focus here is on the seriousness of ignoring mistakes during simulation by substituting actual outcomes for expectations, and on problems raised by Lucas' critique of policy evaluation (Lucas, 1976). Section 8 provides a summary and conclusion.

Date
Sunday, 24 June 2001
Tags
Active, 1978-1988

193 - Synopsis: Pricing and Quality Control under Goodwill Loss, I

C.D. Fraser

Extant theoretical studies of goodwill have, with one exception, explored the relationship between advertising and sales. We construct a simple model of the firm's pricing and quality control when it loses goodwill, hence future sales, should it produce defective commodities. Given a deterministic relationship between defectives and the future demand schedule, the impact of the firm's time preference and record of producing defectives upon its current pricing and quality control is examined together with the conditions for the firm to be driven from the market. Risk is then introduced into the relationship between defectives produced and future goodwill loss and the consequences for price and quality of increased risk are examined.

Date
Saturday, 23 June 2001
Tags
1978-1988, Active

192 - Bayesian Learning and the Optimal Investment Decision of the Firm

I. Tonks

This paper is about learning. It illustrates how in a two period allocation problem with uncertainty in each period, an economic agent's decisions are influenced by the knowledge that he is able to learn about the uncertainty. The time periods are linked through the learning process of the economic agent. The problem to be analysed is that faced by a firm deciding whether or not to invest in a new technology or production process, whose returns are not known with certainty. Because of the two period environment, the firm is able to experiment with the new process in the first period, and observe the results before making another investment decision at the beginning of the second. Given the opportunity for learning, how will this affect the decision of the firm in the first period?

Date
Friday, 22 June 2001
Tags
1978-1988, Active

191 - Short-Run Employment Behaviour of the Labour-Managed Firm: Evidence from Yugoslavia

G. Stewart

In a recent survey, Estrin and Bartlett argue that one of the weaknesses of the existing empirical literature on the Yugoslav labour market is its failure to test directly the central predictions of theoretical models of labour-managed firms (LMFs). Rather, the focus has been on indirect issues such as income dispersion and labour mobility. This paper considers one of the direct theoretical predictions, namely how the enterprise adjusts its employment level in response to short-run variations in demand. Short-run employment functions are derived and then estimated at the industry and aggregate levels using Yugoslav quarterly data. The results are used first of all to examine competing LMF models, secondly to look for any effects of the institutional reforms that took place in 1972 and, finally, to make comparisons with a capitalist economy, the U.K.

Date
Thursday, 21 June 2001
Tags
1978-1988, Active

190 - Technology, Diffusion, Wages and Employment

P. Stoneman

The debate on the impact of technological change on employment has a long history. The current concern with the topic has been prompted by the realisation of the potential of microelectronics. This realisation has generated a number of commentaries (e.g. Freeman (1978), Barron and Curnow (1979), Jenkins and Sherman (1979)), the major predictions of which are that the introduction of new technology will lead to unemployment. A rider to this is that the faster new technology is introduced (ignoring international competitive aspects), the higher will be the resulting unemployment. Unfortunately much of this literature is either devoid of theory, or if theory is included it is implicit rather than explicit. The outcome of this is that much of the literature ignores what Heertje (1977) calls compensation effects. In essence, if a labour-saving technology is introduced in one sector of the economy there may well be automatic responses in the economy that lead to increased employment elsewhere. The purpose of this paper is twofold: (1) to investigate the time path of employment after the introduction of new technology, taking account of compensation effects; and (2) to investigate whether, once these compensation effects are allowed for, a higher speed of diffusion does mean lower employment.

Date
Wednesday, 20 June 2001
Tags
1978-1988, Active

189 - Uncertainty, Adjustment Costs and Expected Keynesian Unemployment

C.J. Ellis

Most macroeconomic models with quantity rationing regard agents as very naive, for despite successive periods of rationing they continue to behave as if they have had no such experience. This behaviour is logical since quantity adjustment is assumed costless and frictionless. However if we are to challenge the validity of the price tatonnement it is perhaps strange to allow free frictionless quantity adjustment. In this model quantity adjustment costs are introduced in the form of resources consumed in the adjustment process. Agents are aware of adjustment costs, but have to state initial transaction demands before the state of the world is known, consequently they base initial trades upon the maximisation of Von Neumann-Morgenstern objective functions. On learning the true state of the world agents then adjust optimally away from their initial trade vector.

Date
Tuesday, 19 June 2001
Tags
1978-1988, Active

188 - Wage Share, Concentration and Unionism

K.G. Cowling & I. Molho

Whilst a considerable amount of empirical work has been done on the relationship between wage levels, the degree of industrial concentration and unionism, very little has been done on wage share. This is a rather surprising state of affairs, given that a positive relationship between wage levels and both concentration and unionism need not imply that the functional distribution of income between labour and capital is affected by either variable. Indeed such observations are quite consistent with the view that greater concentration implies a lower share of value-added going to workers and that this outcome cannot easily be averted by union action. Whether or not such a view is tenable requires a direct examination of wage shares, and this provides the motivation for this paper.

Date
Monday, 18 June 2001
Tags
1978-1988, Active

187 - Dynamic Models and Expectations Hypotheses

K.F. Wallis

Dynamic models of the relations between economic variables often rest on theories about how unobservable expectations are formed. In this paper the "adaptive expectations" and "rational expectations" hypotheses are compared and contrasted. The main distinctions concern the size of the information set on which expectations are based, and the optimality or otherwise of expectations, given that information set. Expectations that are optimal with respect to the given information may be defined as rational. With this definition, some bivariate dynamic rational expectations models are presented, incorporating the appropriate time series forecasting rules for either a single forecast or an infinite sum of discounted forecasts. Model identification problems are discussed, and it is shown how they may be resolved by joint estimation of the bivariate process.
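In the standard textbook notation (assumed here; the abstract itself states the hypotheses only verbally), the two schemes being compared can be written as:

```latex
% Adaptive expectations: revise last period's forecast by a fraction
% \lambda of the most recent forecast error, using only the past
% history of the variable itself (0 < \lambda \le 1):
\hat{y}_{t+1} = \hat{y}_t + \lambda \left( y_t - \hat{y}_t \right)

% Rational expectations: the mathematical expectation of y_{t+1}
% conditional on the full information set I_t available at time t:
\hat{y}_{t+1} = \mathrm{E} \left[ \, y_{t+1} \mid I_t \, \right]
```

The two distinctions drawn in the abstract map directly onto these expressions: the adaptive scheme conditions only on the variable's own history, while the rational scheme conditions on the larger information set I_t and is, by construction, optimal with respect to it.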

Date
Sunday, 17 June 2001
Tags
Active, 1978-1988

186 - A Model of Bilateral Bargaining in the Labour Market

C.J. Ellis

In this paper we combine several strands of basic economic theory to provide a structure for analysing bilateral bargaining in the labour market. Using a quadratic revenue function and a Cobb-Douglas utility function, largely for expositional ease, we derive isoprofit and iso-utility curves for the firm and union respectively. These curves are used to derive Pareto-efficient bargaining loci after the manner of Cartter (1959). We then introduce the concept of a bargaining core and demonstrate how different cores deriving from different preferences may be categorized. We then consider four particular solutions to the bargaining process, including the monopolist, Nash and "market power" outcomes. The comparative static properties of these solutions are then examined. Finally, in an appendix, we consider the general case of the problem and demonstrate which aspects of our earlier analysis are not dependent upon the particular functional forms.

Date
Saturday, 16 June 2001
Tags
1978-1988, Active

185 - Oil, Disinflation and Export Competitiveness: A Model of the "Dutch Disease"

W.H. Buiter & D.D. Purvis

This paper examines three possible sources of "de-industrialization" in an open economy: monetary disinflation, an increase in the international price of oil, and a domestic oil discovery. The analysis is conducted using a model which incorporates different speeds of adjustment in goods and asset markets; domestic goods prices respond only sluggishly to excess demand while the exchange rate (and hence the price of imported goods) adjusts quickly. Monetary disinflation leads to reduced real balances, higher interest rates, and a lower nominal exchange rate. In the short run this causes a real appreciation and a decline in domestic manufacturing output. Perhaps surprisingly, an increase in world oil prices can create similar effects even for a country which is a net exporter of oil. Although the direct effect of an oil price increase for such a country is an increase in the demand for the domestic manufacturing good, that effect may be swamped by a real appreciation created by the increased demand for the home currency. This corresponds rather closely to the recent experiences of several oil and gas exporting countries, and is commonly referred to as the "Dutch Disease". In our analysis, however, this is only a transitional phenomenon. Domestic oil discoveries, though necessarily finite in nature, generate permanent income effects in demand which last beyond the productive life of the new oil reserve. Initially, current income is above permanent income, leading to an improvement in the trade account; this is eventually reversed when permanent income exceeds current income. A wide variety of output response patterns is possible.

Date
Friday, 15 June 2001
Tags
1978-1988, Active

184 - Model Validation and Forecast Comparisons: Theoretical and Practical Considerations

M. Salmon & K.F. Wallis

Most macroeconometric models are built with the object, wholly or partly, of providing forecasts. The term "forecast" covers three rather distinct types of exercise: (a) genuine "ex-ante" forecasts, in which the model user predicts the actual future development of the economy, and for which projected future values of input variables must be supplied; (b) "ex-post" forecasts, in which the model user eliminates the effects of error in the projections of the input variables by calculating "forecasts" over some period in the recent past, given the actual observed values of the input variables; (c) hypothetical forecasting or policy analysis exercises, in which the model user estimates the response of the economy to alternative scenarios, that is, to alternative values of policy instruments or to different kinds of exogenous shock. In each case there is interest in evaluating the results of the forecasting exercise, not only for its own sake but also to provide information that is useful in model validation, that is, in checking the specification of the model. Of course the various forecasting exercises and their respective evaluations are not necessarily independent of one another; for example, it is often said that in order to be useful in policy analysis a model should have a good real-world forecasting record over a period that was not part of the estimation period, so that it might also be expected to provide "good" estimates of responses to policy changes.

Date
Thursday, 14 June 2001
Tags
1978-1988, Active

183 - Monetary Policy and International Competitiveness

W.H. Buiter & M.H. Miller

A model due to Dornbusch is adapted to analyse the consequences for output and competitiveness of certain aspects of the U.K. government's medium term financial strategy and some other policy actions. These include the announcement of a sequence of reductions in the target rate of monetary growth, an increase in VAT and a move to make the U.K. banking system more competitive. The impact of a discovery of domestic oil is also modelled. We consider the consequences of varying the degree of inertia in the underlying rate of inflation and of different rates of international capital mobility. A real interest rate equalization tax stabilizes the real exchange rate, but not the level of output. Once-and-for-all changes in the level of the nominal money stock to accommodate changes in the demand for real money balances prevent 'overshooting' of the real exchange rate and fluctuations in output. Such accommodation may, however, undermine the credibility of an announced policy of monetary disinflation.

Date
Wednesday, 13 June 2001
Tags
Active, 1978-1988

182 - Alternative Exchange Rate Regimes and the Transmission of Disturbances: A General Equilibrium Approach

T. Persson

Using a general equilibrium model with maximising agents, the transmission of disturbances under alternative exchange rate regimes is analysed and compared. Expectations of future prices and monetary policies are crucial to the analysis. Apart from specific results, the paper also provides an illustration of two more general points. First, conclusions from the analysis of globally fixed exchange rates or general floating do not necessarily translate to the intermediate case of pegging in a world of otherwise floating rates. Second, the assumptions about the precise way that a fixed exchange rate is stabilized are critical for the results.

Date
Tuesday, 12 June 2001
Tags
1978-1988, Active

181 - On Random Preferences, Future Flexibility and the Demand for Illiquid Assets (Revised)

C.D. Fraser

Goldman's model of flexibility and portfolio choice with random preferences is extended to the case of two assets alternative to money. The asset with the highest yield to maturity, but the least flexibility, is ex ante divisible but ex post indivisible. This asset is always chosen alongside money initially, irrespective of whether or not it is redeemable prematurely. When it is not so redeemable, transparent restrictions on asset prices are found to ensure that the other non-money asset is also held initially, and with a positive probability of premature redemption. When it is prematurely redeemable, these same restrictions ensure a positive probability of this occurring.

Date
Monday, 11 June 2001
Tags
1978-1988, Active

180 - Public Enterprise Pricing, Taxation and Market Structure

P.A. Weller

It is possible to identify two distinct approaches to the problem of how to set pricing rules for public enterprise. The first, typified by the paper of Baumol and Bradford (1970), regards the problem as identical to that of setting optimal taxes. The second, first expounded in the paper of Boiteux (1956), recognises that in practice, and for a variety of reasons, governments impose budget constraints upon public enterprises, and that pricing of public sector outputs is quite likely to be determined independently of the tax structure. Our aim is to focus on the latter approach, paying particular attention to an aspect of the problem which has been somewhat neglected. We wish to take account of the fact that when a public enterprise adjusts prices, there are general equilibrium repercussions which take place in the private sector. These effects can be neglected by resorting to the expedient of assuming constant producer prices. This assumption would be inconsistent with the existence of pure profit in a competitive private sector, which is one of the cases we analyse. We consider also the implications of having a monopolistically competitive private sector, and as a polar case the situation where a single many-output monopolist operates in the private sector.

Date
Sunday, 10 June 2001
Tags
1978-1988, Active

179 - A Note on the Speed of Convergence of Prices in Random Exchange Economies

P.A. Weller

This paper is a continuation of work by Hildenbrand (1971) and Bhattacharya and Majumdar (1973) (henceforth B-M). They consider pure exchange economies in which both preferences and endowments are random. Hildenbrand examines the convergence behaviour of price vectors for which total expected excess demand is zero. He shows that as an economy increases in size, if agents are stochastically independent, then the limit of such a sequence of price vectors is an equilibrium price vector in a suitably defined limit economy. B-M consider the case where prices guarantee equilibrium in almost all states of the world. In other words, market-clearing prices are treated as random vectors. In particular, B-M show that under suitable assumptions there will exist a sequence of such random price vectors displaying almost sure convergence to any equilibrium price vector in a deterministic limit economy. As in the case of any convergence result, the speed of convergence is a natural question to investigate, and we will be concerned in this note with establishing a result on the speed of convergence of the random price vectors as the economy increases in size. In order to provide a characterization of the result in a simple case, let us suppose that the deterministic limit economy has a unique equilibrium p*. Assume also that the random economy consists of individuals with the same independently distributed random preferences and endowments. Then our result states that the probability that the random equilibrium price vector is further from p* than some distance which converges to zero slower than N^(-1/2) (where N is the number of agents in the random economy) is less than a term which converges to zero faster than N^(-1/2). In the more general case we consider, both distance and probability will depend upon the proportions of different types of agent in the economy. In addition, the distance will also depend upon the rate at which the proportions of agents of different types approach their limiting values.
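In symbols (notation assumed here for concreteness, not taken from the paper), the simple-case result asserts the existence of sequences c_N and d_N such that:

```latex
% p_N : random market-clearing price vector of the N-agent economy
% p^* : unique equilibrium price vector of the deterministic limit
%       economy
\Pr \left( \, \lVert p_N - p^* \rVert > c_N \, \right) < d_N ,
\qquad N^{1/2} c_N \to \infty , \quad N^{1/2} d_N \to 0
% i.e. the distance c_N shrinks more slowly than N^{-1/2}, while the
% exceedance probability d_N shrinks faster than N^{-1/2}.
```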

Date
Saturday, 09 June 2001
Tags
1978-1988, Active

178 - Advertising and Hours of Work in U.S. Manufacturing, 1919 - 75

J.A. Brack & K.G. Cowling

Over this century, the average level of weekly hours worked by the labour force as a whole has shown a fairly consistent decline. For example, from 1900 to 1979 weekly hours for the whole labour force fell from 53.2 to 39.6. However, closer examination shows that this decline is largely due to changes in the composition of the labour force, in particular to increases in the ratio of female to male workers, and in the ratio of white-collar to blue-collar workers. Upon examining a group which has remained relatively homogeneous through time, such as production workers in manufacturing, one finds a different pattern. Since 1945, weekly hours for this group have scarcely changed; in a period of steadily rising real wages it is the failure to decline that must be explained. We attempt to provide such an explanation in this paper.

Date
Friday, 08 June 2001
Tags
Active, 1978-1988

177 - A Simple World Model of Monetary Union - A Note

J.M. Ellis

The economic debate about European monetary union is often conducted as if the problem were one of choosing between a fixed or a flexible exchange rate system (for example, see Sirc (1977)). This approach has been largely unsatisfactory, because it fails to get to grips with the main characteristic of monetary union, which is neither of these two extremes but a combination of both, with pegged exchange rates within the union, and with the union jointly floating against the rest of the world. It is clear that most models analysed in the literature, whether small-open-economy models or even two-country world models, are unable to get a handle on this special feature.

Date
Thursday, 07 June 2001
Tags
1978-1988, Active

176 - Optimal Public Policy in Open Economies

A. Smith

This paper applies simple duality theory to integrate some elements of the theory of public finance into the theory of economic policy in open distorted economies. Results on the effect of reductions in distortions are derived which generalize the results of Dixit (Journal of Public Economics, 1975). Extensions of the theory of immiserising growth are made, focussing on the role of shadow factor prices, following Bhagwati, Srinivasan and Wan (Economic Journal, 1978). The results most relevant for piecemeal policy making (Dasgupta and Stiglitz, 1974; Findlay and Wellisz, 1976; Srinivasan and Bhagwati, 1978; all Journal of Political Economy) are shown to hold strictly only in small open economies in which all goods are internationally traded.

Date
Wednesday, 06 June 2001
Tags
Active, 1978-1988

175 - 'Direct' vs 'Indirect' Taxation of Externalities: A General Treatment

B. Lockwood

It is well-known that it may be possible to attain a Pareto-efficient allocation in an economy with consumption externalities by the imposition of suitable excise taxes and subsidies, although the imposition of such taxes may not be sufficient. In addition, the structure of these Pigovian excise taxes is familiar; they are levied only on the externality-causing goods, and in general will differ across individuals. In practice, however, it is usually prohibitively costly to attempt to distinguish perfectly between individual externality creators, whereas it is often feasible to impose ordinary commodity taxes on externality-creating goods as a 'second-best' corrective measure. An example that is often cited is the case of pollution from the internal combustion engine; it is infeasible to monitor the pollution emission of each separate car owner and tax him accordingly, but it is quite possible to tax petrol at a rate which is uniform across consumers.

Date
Tuesday, 05 June 2001
Tags
1978-1988, Active

174 - Savings Propensities from Wage and Non-Wage Income

A.J. Murfin

In econometrics, the consumption function has been subject to continual debate and respecification. This paper reinvestigates some aspects of the relation between incomes and consumption behaviour in the UK 1963-76, and, in particular, focuses on the study by Klein, Ball et al (1961) which attempts to take account of the role of income distribution. The questions of parameter stability and misspecification are considered in detail, and the conclusion is reached that a correct specification of the models presented below has yet to be found.

Date
Monday, 04 June 2001
Tags
1978-1988, Active

173 - The Gains from Free Trade

A.K. Dixit & V. Norman

Most propositions on the gains from trade with many consumers consider only lump-sum transfers as redistributive tools. It is widely believed that nothing can be said unless such transfers are possible. In this note we show that such a belief, and the consequent pessimism concerning the applicability of welfare propositions in trade theory, are groundless. Indirect consumer taxes are sufficient to make free trade Pareto-superior to restricted trade under the same conditions that would make free trade better than restricted trade in the one-consumer case, i.e. that there be no terms-of-trade gain from trade restrictions. This extends some earlier work in Dixit and Norman (1980, ch. 3). The essence of our argument is very simple. Suppose the government can levy taxes on all commodities entering individual consumers' utility functions, i.e. on all goods and all factors. Then, by appropriate choices of tax rates, the government can leave all consumer prices unchanged when moving from restricted trade to free trade. That will leave all consumer demands and utility levels unchanged. If such a scheme is feasible, in the sense that it gives non-negative government revenue, it means that all consumers can be made as well off with free trade as with restricted trade, thus proving weak Pareto-superiority of free trade. If government revenue is strictly positive under such a scheme, free trade can be made strongly superior simply by lowering the consumption tax rate on some good that all consumers demand in a positive quantity.
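A minimal schematic of the revenue argument, in notation assumed here rather than taken from the paper: let q be the (unchanged) consumer price vector, p the world price vector under free trade, x the (unchanged) aggregate consumption bundle, and y the free-trade production vector.

```latex
% With consumer prices held at q by the taxes, demand stays at x;
% schematically, the net revenue of the scheme is the value of
% free-trade production over consumption at world prices:
R = p \cdot \left( y - x \right)
% Profit maximization under free trade gives p \cdot y \ge p \cdot y'
% for any feasible output y', in particular the restricted-trade
% output that supported x; absent a terms-of-trade gain from the
% restriction, this delivers R \ge 0, i.e. the scheme is feasible.
```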

Date
Sunday, 03 June 2001
Tags
Active, 1978-1988

172 - Policy Decentralisation and Exchange Rate Management in Interdependent Economies

W.H. Buiter & J. Eaton

The demise of Bretton Woods and of the short-lived Smithsonian agreement has raised questions about exchange rate management by monetary authorities acting in isolation from one another. For instance, will individual monetary authorities have an incentive to stabilise the exchange rate? To what extent will monetary actions abroad disrupt domestic monetary policy? What are the gains from co-ordinating monetary policy? The problems that arise when different agents pursue independent policies in interdependent economies have been explored by a number of authors. Aoki (1976), Cooper (1969), Hamada (1976), Allen and Kenen (1980), McFadden (1967), Patrick (1973), Kydland (1976) and Pindyck (1976), among others, have made significant contributions. Different authors have focused on different aspects of decentralized policy formation. One purpose of this paper is to provide a general discussion of decentralization. In part 2 we provide a theoretical framework for analysing policy formation among independent authorities operating in an interdependent environment. We distinguish three dimensions of the problem and discuss, by way of example, the Mundell (1962) assignment problem in terms of our typology. We show that instability in Mundell's context does not arise because different authorities are assigned different and inappropriate targets, but because they fail to formulate strategies in a co-operative way.
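The classical assignment result that the paper reappraises is easy to sketch numerically. The following is a generic textbook illustration, not the Buiter-Eaton model: the linear dynamics and the matrices `A_wrong` and `A_right` are assumptions chosen for the example. Each authority adjusts its own instrument toward its assigned target, so with targets T = A x the dynamics are dx/dt = T* - A x, stable iff every eigenvalue of -A has negative real part.

```python
import numpy as np

# Stylised decentralised adjustment (illustrative only): targets T = A @ x,
# and authority i moves its instrument x_i toward its assigned target i,
#     dx/dt = T_star - A @ x,
# which is stable iff all eigenvalues of -A have negative real parts.
def stable(A):
    return bool(np.all(np.linalg.eigvals(-A).real < 0))

# Instrument 1 has its stronger effect on target 2 and vice versa,
# so assigning instrument i to target i is the "wrong" pairing.
A_wrong = np.array([[1.0, 2.0],
                    [2.0, 1.0]])   # eigenvalues 3 and -1: divergent
A_right = A_wrong[::-1]            # swap the assignment (reverse the rows)

print(stable(A_wrong), stable(A_right))  # the pairing decides stability
```

The sketch shows only the mechanical assignment dependence; the paper's point is that the deeper source of instability is the non-cooperative formulation of strategies, not the pairing itself.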

Date
Saturday, 02 June 2001
Tags
1978-1988, Active

171 - Anti-Inflationary Monetary Policy and the Capital Import Tax

N. Liviatan

Anti-inflationary monetary policy faces special problems under flexible exchange rates and free capital movements. While this policy might be quite effective in reducing inflation it is also likely to create changes in relative prices which can be undesirable. In particular the short run capital imports which are induced by the restrictive policy may bias the deflationary effect towards the exchange rate and thus lead to its appreciation in real terms. While this phenomenon may be temporary it may cause sufficient concern in an export oriented economy. A situation of this sort arose in the Israeli economy in the second part of 1978 when restrictive monetary policies led to a considerable (real) appreciation of the exchange rate. This has been followed by various restrictive measures on the movements of short run capital imports in order to protect the interests of the exporting industries which are given top priority in the Israeli economy.

Date
Friday, 01 June 2001
Tags
Active, 1978-1988

170 - Proto-Industry, Political Economy and the Division of Labour

Maxine Berg

The putting out or domestic system, once a traditional subject of research among students of the origins of the Industrial Revolution, has recently been revitalised and transformed into a supposedly new subject with the new name of 'proto-industrialisation.' Detached from its earlier mercantile and urban associations and its traditional place in the historians' analysis of the breakdown of guild restrictions, the phenomenon has recently been placed in the context of the study of demographic and agrarian change. Proto-industry, or rural industry practised in conjunction with agricultural pursuits, has by its very name been identified as the source of industrialisation, and has been described as the great organisational innovation of the pre-industrial period. Great marvels of industrial organisation might have been achieved in the large urban and state enterprises of the Seventeenth and Eighteenth Centuries. And certainly the naval shipyards and arsenals, royal textile and tapestry works, glass and paper works became known for their size, division of labour and industrial discipline. But the increases in productivity and mass production in these exemplary preindustrial works were still, it is claimed, as nothing beside the remarkable effects of the modest but all-pervasive domestic industries.

Date
Thursday, 31 May 2001
Tags
1978-1988, Active

169 - A Feldman-Type Model of War Economy

M. Harrison

In trying to sort out some of the economic issues raised by Soviet experience before and during the last war, I thought that some clarification might be obtained from the Feldman model of expanded reproduction. The Feldman model has often been used to address the issues of rapid Soviet industrialisation and of priority to heavy industry. It can be developed to illustrate the choice of priorities in resource mobilisation for war. In my view the significance of the results is historical and, for reasons given below, I do not attribute to them any contemporary or future significance whatsoever.

Date
Wednesday, 30 May 2001
Tags
1978-1988, Active

168 - On the Size of a Controlling Shareholding

D. Leech & J. Cubbin

A large number of empirical studies have examined the question of the divorce between ownership and control as to either its extent (following Berle and Means (1932)) or its implications for behaviour (following Marris (1964)). These studies have used different samples, different variables and have employed very different criteria to decide on the location of control within each corporation (see Table 1). In addition there appears to be considerable confusion over nomenclature, in particular the meaning of the phrase "owner-controlled". It is therefore not surprising that they come to different conclusions on both questions.

Date
Tuesday, 29 May 2001
Tags
1978-1988, Active

167 - A Tentative Analysis of the Stability of a Competitive Economy with Externalities

M. Homma

It has long been recognised that a consumer's preference or a firm's production possibility is itself affected by the allocation of resources among other consumers and firms. The presence of such interdependent effects, usually referred to as "externalities", shows that not all economic behaviour is mediated through the market price system. While the problem of externalities has received increasing attention, analysis of its effects on the stability of a competitive economy appears to have suffered comparative neglect.

Date
Monday, 28 May 2001
Tags
1978-1988, Active

166 - On the Sign of the Optimal Marginal Income Tax

J. Seade

A well-known result of Mirrlees (1971; proposition 3) says that the optimal marginal rate of income tax is non-negative throughout the scale, for the model he considers and given only a mild regularity condition on preferences. That is, the burden of taxation unambiguously increases with earnings. This result is very useful. The model to which it applies is admittedly special (identical leisure/consumption preferences), but it is reassuring to know that no further specialisation of assumptions is required to reach such a basic conclusion: incentive effects from taxation will never turn the desired pattern of redistribution on its head, at any level of income.

Date
Sunday, 27 May 2001
Tags
1978-1988, Active

165 - Temporary Equilibrium, Expectations and Notional Spillovers

C.J. Ellis

In most temporary equilibrium models the market operates as follows. At the start of the market period the relative price vector is announced. Agents (consumers and producers) then compute and announce their initial market offers based upon Walrasian supply and demand curves. If the relative price vector is not the Walrasian constellation then some market offers go unsatisfied and markets clear on the "short side". Agents who face a quantity constraint in one market adjust their behaviour in others in an attempt to achieve levels of transactions consistent with the solution to their constrained utility maximisation problems. However, this approach assumes sufficient flexibility within each market period to allow behaviour in each market to adjust completely to the quantity constraints actually experienced in other markets in the same period. It can be argued that such flexibility is less than perfect, particularly in the upward direction. In that case, agents will have to base their initial offers not merely on the fixed prices, but also upon their expectations of quantity constraints in other markets. It is these offers that are confronted in each market within a market period. They can be revised downwards, but not upwards, if the actual quantity constraints in other markets turn out to be different from those expected. With this mechanism, each period's markets clear by the familiar quantity adjustment. This will be shown to generate three new types of temporary equilibria, termed "expectational" Keynesian, Classical, and Repressed Inflation. These new temporary equilibria will be shown to have interesting intra-period adjustment properties.

Date
Saturday, 26 May 2001
Tags
1978-1988, Active

164 - Keynesian Equilibrium and Fix Price Equilibria

P. Michel

The recent development of macroeconomic models with fixed prices has shown the importance of considering the type of unemployment in the economy, since this determines the effects of economic policy decisions. These models differ from the Keynesian approach by the hypothesis of rigidity of all prices, by the fact that equilibrium results from a tatonnement on quantities, and by the behaviour of firms whose role is as passive as that of households: firms and households are symmetrically treated. The most active role is that of the auctioneer who, instead of adjusting prices, adjusts quantities. Keynesian equilibrium is defined by "Aggregate Demand and Aggregate Supply Functions" and, given an aggregate production function, it corresponds to a determined price level. But given fixed prices, there is generally no equality between production and aggregate supply and demand.

Date
Friday, 25 May 2001
Tags
1978-1988, Active

163 - A Dynamic Analysis of Differential Incidence in a Two-Class Economy with Public Capital

M. Homma

Within a two-class growing economy with public capital, a comparative dynamic analysis of differential shift from a wage income to a corporation profit tax is carried out to appreciate the distributional effects of tax substitution on capitalists and workers, and to set out the conditions which determine the magnitude of the tax shifting. It is also shown that the differential tax substitution induces decreased saving and capital shallowing, lowering the private capital/labour ratio, and that a higher rate of corporation profit tax increases the tax shifting, and vice versa, in the steady state equilibrium with both classes existing.

Date
Thursday, 24 May 2001
Tags
1978-1988, Active

162 - Exogeneity

Robert F. Engle, David F. Hendry & Jean-Francois Richard

In spite of the importance of exogeneity in econometric modelling, an unambiguous definition does not seem to have been proposed to date. This lack has not only hindered systematic discussion, it has served to confuse the connections between "causality" and "exogeneity". Moreover, many existing definitions have been formulated in terms of disturbances from relationships which contain unknown parameters, yet whether or not such disturbances satisfy certain orthogonality conditions with other observables may be a matter of construction or may be a testable hypothesis: a clear distinction between these situations is essential. To achieve such an objective, we formulate definitions in terms of the distributions of the observable variables, distinguishing between exogeneity assumptions and causality assumptions, where causality is used in the sense of Granger (1969). Following in particular Koopmans' pioneering article (1950), exogeneity will be related to the statistical completeness of a model. In short, a variable will be considered exogenous for a given purpose if a statistical analysis can be conducted conditionally on that variable without loss of relevant sample information.

Date
Wednesday, 23 May 2001
Tags
1978-1988, Active

161 - Spurious Periodicity in Inappropriately Detrended Time Series

C.R. Nelson & Hejoon Kang

Econometric analysis of time series data is frequently preceded by regression on time to remove a trend component in the data. The resulting residuals are then treated as a stationary series to which procedures requiring stationarity, such as spectral analysis, can be applied. The objective is often to investigate the dynamics of transitory movements in the system; for example, in econometric models of the business cycle. When the data consist of a deterministic function of time plus a stationary error, regression residuals will clearly be unbiased estimates of the stationary component. However, if the data are generated by (possibly repeated) summation of a stationary and invertible process then the series cannot be expressed as a deterministic function of time plus a stationary deviation, even though a least squares trend line and the associated residuals can always be calculated for any given finite sample. In a recent paper, Chan, Hayya, and Ord (1977) (hereafter CHO) showed that residuals from linear regression of a realization of a random walk (the summation of a purely random series) on time have autocovariances which for given lag are a function of time, and therefore that the residuals are not stationary. Further, CHO established that the expected sample autocovariance function (the expected autocovariances for given lag averaged over the time interval of the sample) is a function of sample size as well as lag, and therefore an artifact of the detrending procedure. This function is characterized by CHO in their Figure 1 as being effectively linear in lag (although the exact function is a fifth degree polynomial), with the rate of decay from unity at the origin depending inversely on sample size. The first differences of a random walk are, of course, stationary with zero autocovariance at all lags.
They concluded that "the low frequency portion of the spectrum will be exaggerated and the high frequency portion attenuated" relative to the appropriate first difference transformation.
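The CHO phenomenon summarised above is straightforward to reproduce by simulation. The sketch below is my own illustration, not code from the paper: it detrends a simulated random walk by OLS regression on time and contrasts the strong residual autocorrelation (the exaggerated low-frequency component) with the near-zero autocorrelation of the appropriate first-difference transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
walk = np.cumsum(rng.standard_normal(n))    # random walk: summed white noise

# "Detrend" by OLS regression on an intercept and time, as CHO analyse
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, walk, rcond=None)
resid = walk - X @ beta

def acf(series, lag):
    """Sample autocorrelation at the given lag."""
    s = series - series.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

# Residuals from the spurious trend regression remain heavily positively
# autocorrelated; first differences of the walk show essentially none.
print(acf(resid, 1), acf(np.diff(walk), 1))
```

With any reasonable sample size the detrended residuals show lag-one autocorrelation close to one, an artifact of the detrending procedure rather than a property of the underlying differenced process.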

Date
Tuesday, 22 May 2001
Tags
1978-1988, Active

160 - Factor Price Rigidities in an Open Economy

R. Cornes

The objective of this paper is to analyse some issues raised by the exogenous determination of factor prices in an open economy. To do this, we exploit the tools of duality analysis, particularly the restricted profit function. Some of these issues have been discussed by Brecher (1974) and Schweinberger (1978). Our treatment is closer in spirit to that of Schweinberger, which restricts attention to the small country case and which has n produced commodities. The present analysis considerably simplifies derivation of existing results, extends the discussion to consider the effects of exogenous commodity price changes on the system, and looks at an instructive special case.

Date
Monday, 21 May 2001
Tags
1978-1988, Active

159 - External Effects: An Alternative Formulation

R. Cornes

Economists have long recognised that the actions of individual agents may affect the decisions or the well-being of others in important ways without necessarily being mediated through the market. This recognition has spawned a large and ever-growing literature. When modelling the behaviour of consumers, this literature has generally used the direct utility function as the basic tool of analysis. In this paper, we discuss the use of alternative, dual formulations of consumer behaviour. In Section III we discuss the use of the minimum expenditure function and of the indirect utility function in modelling the behaviour of an individual consumer who acts as a price taker in markets for tradeable commodities and as a quantity taker in his consumption of certain environmental commodities. Subsequently, we look at a simple model involving reciprocal externalities which has recently been the subject of discussion by Diamond and Mirrlees (1973), Sandmo (1978), Sadka (1978) and Sheshinski (1978). Part of this literature deliberately abstracts from real income effects, and in discussing this part we find the minimum expenditure function and its associated compensated demand functions particularly fruitful. It is our belief that the dual approach serves to clarify a number of issues in this area.

Date
Sunday, 20 May 2001
Tags
1978-1988, Active

158 - Population, Internal Migration and Economic Growth: An Empirical Analysis

R.S. Morland

During the last fifteen years or so, the role of population growth in the development process has received increasing attention. This has been manifested in the literature in three broad areas. In the first, the effects of rapid population growth on the growth of income have been studied with the use of simulation models (Enke (1971) (1974), Simon (1976)) which sometimes include endogenous population growth (Suits et al (1975), Hazledine and Moreland (1977)). In general these models show that per capita income could be increased by reducing the birth rate. However, they are often either unsophisticated in terms of the demographic structure (Suits et al (1975)) or the coefficients are imposed a priori (Enke (1971)) or key demographic rates are imposed exogenously (Simon (1976)) so that no feedback exists between the economy and demographic variables.

Date
Saturday, 19 May 2001
Tags
1978-1988, Active

157 - Optimum Taxation with Errors in Administration

N.H. Stern

The basic theorem of welfare economics tells us that, under standard assumptions, the first best can be achieved as a competitive equilibrium with zero taxes on commodities and the appropriate lump sum tax for each individual. The calculation of the appropriate set of lump sum taxes requires information on individuals which they have an incentive not to reveal - for example Mirrlees (1974) has shown that, where individuals differ in skills it is likely that the first best will require utility to decrease with skill. It is then natural to ask how well one can do with a tax system which does not discriminate between individuals. This has led to the theory of optimum income taxation where we assume that only income is observed and all individuals face the same income tax schedule. This schedule is then chosen to maximise welfare.

Date
Friday, 18 May 2001
Tags
1978-1988, Active

156 - A General Approach to the Construction of Model Diagnostics Based upon the Lagrange Multiplier Principle

R.F. Engle

In order to assess the validity of the specification of an econometric model, it is useful to have a variety of diagnostic statistics which might provide evidence on the existence and possibly the type of misspecification involved. One source of diagnostics is hypothesis tests where the model under consideration is taken to be the null and the alternative is some generalisation. A particularly attractive approach is to construct optimal test statistics against a variety of specific alternatives. In this way it is possible to have reasonable power against a collection of interesting alternatives, although when looking at sets of non-independent statistics, one must be cautious about interpretations of the overall size of the test.
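The standard way to operationalise an LM diagnostic of this kind is the n·R² auxiliary-regression form: estimate under the null, then regress the null residuals on the full (null plus alternative) regressor set. The sketch below is an illustration on made-up data, not the paper's procedure; the function name `lm_stat` and all variables are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.standard_normal(n)
z = rng.standard_normal(n)              # genuinely omitted regressor
w = rng.standard_normal(n)              # irrelevant candidate regressor
y = 1.0 + 2.0 * x + 0.5 * z + rng.standard_normal(n)

def lm_stat(y, X_null, Z):
    """Score/LM statistic for H0: Z excluded, computed as n * R^2 from
    the auxiliary regression of the null residuals on [X_null, Z]."""
    b, *_ = np.linalg.lstsq(X_null, y, rcond=None)
    e = y - X_null @ b                  # residuals under the null
    A = np.column_stack([X_null, Z])
    g, *_ = np.linalg.lstsq(A, e, rcond=None)
    r2 = ((A @ g) ** 2).sum() / (e ** 2).sum()
    return len(y) * r2                  # asymptotically chi-squared(q) under H0

X0 = np.column_stack([np.ones(n), x])   # null model: intercept and x only
print(lm_stat(y, X0, z[:, None]))       # large: misspecification detected
print(lm_stat(y, X0, w[:, None]))       # small: null looks adequate
```

Comparing each statistic with the relevant chi-squared critical value gives a battery of diagnostics, though, as the abstract warns, the overall size of such a battery of non-independent tests needs careful interpretation.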

Date
Thursday, 17 May 2001
Tags
1978-1988, Active

155 - Dynamic Optimal Taxation, Rational Expectations and Control Theory

F.E. Kydland & E.C. Prescott

Within a rational expectations framework, policy has effect if it alters relative prices and policy evaluations are exercises in modern public finance theory. The time inconsistency of an optimal taxation plan precludes the use of standard control theory for its determination. In this article recursive methods are developed that overcome this difficulty. The technique is novel in that the constraint set as well as the value function are determined recursively. Even though there is little hope of the optimal plan being implemented - because of its time inconsistency - we think the exercise is of more than pedagogical interest. The optimal plan's return is a benchmark with which to compare the time consistent solution under alternative institutional constraints which society might choose to impose upon itself.

Date
Wednesday, 16 May 2001
Tags
1978-1988, Active

154 - Testing Recursiveness in a Triangular Simultaneous Equation Model

A. Holly

In applied work in macroeconomics using simultaneous equation systems, relationships between variables are sometimes described by means of a triangular model. However, in a simultaneous equation spirit the a priori assumption of full recursivity is typically not made. The purpose of this paper is to suggest a recursiveness test for models which are already written in triangular form. It is a score test (Rao (1948)) applied to the concentrated likelihood function, which is equivalent to Neyman's C(a) test (1959). As indicated in Holly (1978) this type of procedure is quite general and can be applied to a large variety of tests of model specification. It is based on estimators of the model under the null hypothesis, which are, in the particular case of the recursiveness test, the O.L.S. estimators of each structural equation.

Date
Tuesday, 15 May 2001
Tags
1978-1988, Active

153 - Co-operation and Productivity: Some Evidence from West German Experience

J. Cable & F.R. Fitzroy

Somewhere between traditional entrepreneurial firms and worker-cooperatives on the spectrum of alternative firm types lies a range of industrial partnership models, involving varying degrees of worker participation in decision-making and/or profit-sharing. In West Germany there are known to be more than seven hundred firms in this category. Many belong to Arbeitsgemeinschaft zur Forderung der Partnerschaft in der Wirtschaft e.V. (AGP), headed by Michael Lezius. Guski and Schneider have recently published a register of these firms in collaboration with Lezius. Their analysis reveals a variety of legal configurations heavily influenced by tax and company law. The size of employee profit and stock shares also varies greatly, most being relatively small. About half the firms in the sample have instituted some form of employee participation in what is normally regarded as managerial decision making. The schemes introduced by AGP members range from employee control in a few worker-managed co-operatives among the many small firms to minimal consultative and informative practice in the more sparsely represented larger firms.

Date
Monday, 14 May 2001
Tags
1978-1988, Active

152 - Soviet Primary Accumulation Processes: Some Unresolved Problems

M. Harrison

What is primary socialist accumulation? The idea of 'primary' (sometimes called 'primitive') socialist accumulation was first developed by Preobrazhensky, the Bolshevik economist and spokesman for the Trotskyist opposition in the USSR in the 1920s. The idea was based on an analogy with Marx's writing on primary capitalist accumulation. Primary capitalist accumulation meant the initial phase of growth in which the capitalist elements of the economy developed at the expense of the pre-capitalist sector. With 'capitalist' changed to 'socialist', Preobrazhensky had the idea of primary socialist accumulation, clearly relevant to the USSR in the 1920s.

Date
Sunday, 13 May 2001
Tags
1978-1988, Active

151 - Economic Causes and Effects of Mergers in West Germany

J. Cable

Merger activity in West Germany remained at a low level in comparison with most industrialised countries until the late nineteen-sixties, but has increased very rapidly since then. Mergers notified to the Federal Cartel Office under §23 of the 1958 Act Against Restraint of Competition (GWB) averaged around 40 per year up to 1968. The number then rose to a peak of 305 during a merger wave which occurred in 1969-71, contemporaneously with similar waves in a number of other countries and especially the USA. Thereafter, despite the introduction of merger controls under GWB from 1974, the merger rate has grown dramatically, bringing the annual total to 554 in 1977, nearly twice the level of the previous (1970) peak. Admittedly these statistics exaggerate the true increase in merger activity for two reasons. First, the coverage of the Cartel Office series is complete only after 1973, and was most incomplete before 1967, when serious discussion of merger controls began. Secondly, there has been a marked increase in the acquisition of smaller companies since 1973, due to the existence of a size threshold for immunity from control in the merger policy implemented by the 1973 amendment to GWB. Nevertheless it is clear that a significant increase in merger activity has occurred. This was viewed with some concern in the second report of the Monopolies Commission (MK), and a revision of merger controls is expected in the forthcoming (fourth) amendment to GWB.

Date
Saturday, 12 May 2001
Tags
1978-1988, Active

150 - Merger Development and Policy in West Germany since 1958

J. Cable

West Germany's recovery and growth to a position of economic leadership over most other industrialised nations appears to have been achieved without the aid of widespread merger activity, at least up to the late nineteen-sixties. Although the official statistics understate the true number of mergers in the period, there is no evidence of significant merger waves in the fifties and sixties comparable with US and UK experience. However, Germany did share with almost all developed market economies the experience of a merger wave between 1969 and 1971, when there was a distinct surge in the level of merger activity. Thereafter, German experience diverged once again from that of most other countries, with a continuing rapid growth of mergers throughout the 1970s.

Date
Friday, 11 May 2001
Tags
Active, 1978-1988

149 - Duality, Quantity Constraints and Consumer Behaviour

R. Cornes

Until the recent revival of interest among macroeconomists whose work is surveyed by Malinvaud (1977), the systematic analysis of quantity-constrained behaviour in a multimarket setting has attracted surprisingly little attention since the classic survey by Tobin (1952). The few exceptions, in addition to the works cited by Malinvaud, are the discussion by Gould and Henry (1967) of price control, the systematic analysis by Pollak (1969) of conditional demand functions, and a recent attempt by Howard (1977), not entirely successful, to extend the scope of the earlier treatment of consumer choice under quantity rationing by Tobin and Houthakker (1951). This paper exploits the minimum expenditure function approach to simplify the analysis of quantity-constrained consumer choice. Section I introduces the "restricted" minimum expenditure function and "restricted" compensated demand functions, which provide the basis of our approach. While our treatment of the formal rationing problem is similar to that of Neary and Roberts, we are also concerned to stress alternative applications of these functions, particularly to situations in which the quantity constraints are interpreted as externalities or public goods. Section II discusses applications of the analysis to a generalisation of the Tobin-Houthakker analysis and to the price control problem raised by Gould and Henry. Section III comments on the "virtual price system" used by Neary and Roberts, and draws attention to the formal similarity between the rationing and externality problems. Finally, section IV takes up the problem of price control in a general equilibrium context.

Date
Thursday, 10 May 2001
Tags
Active, 1978-1988

148 - Limit Theorems on the Core of a Many Good Economy with Individual Risks

P.A. Weller

The relationship between the core of an exchange economy and competitive equilibrium is well known from the work of Debreu and Scarf (1963). If we define a group of individuals of the same type to be a group with the same preferences and endowments, Debreu and Scarf have shown that if there are a fixed number of types in an economy, and if the economy is expanded by increasing equal numbers of each type, then the set of core allocations converges to the set of competitive allocations. There are a number of different ways in which we can introduce uncertainty into an exchange economy. The most straightforward is to assume that there is a given set S of possible states of the world. We introduce state-contingent markets for each good and simply reinterpret the Debreu-Scarf theorem. Individuals of the same type now have the same preferences over certain outcomes, the same probability distribution over states of the world, and the same distribution of endowments. In addition they are assumed to maximise expected utility. A coalition is said to block an allocation if a redistribution of endowments within the coalition leaves at least one member of the coalition with higher expected utility, and none with lower. The set of core allocations again converges to the set of competitive allocations.

Date
Wednesday, 09 May 2001
Tags
1978-1988, Active

147 - Generalising from Case Studies: The First 46 Reports of the U.K. Price Commission

T. Hazledine

I. Introduction. This paper is an attempt to assess the implications, for the theory of markets and the firm, of a body of information culled from the first forty-six Reports published by the United Kingdom Price Commission since its reconstitution in August 1977. By now the usual method for testing theories empirically in Industrial Organisation Economics is to specify a simple mathematical 'model' of the determinants of one variable by a set of others, and to estimate the parameters of this model by applying econometric techniques (mostly ordinary least squares) to a set of numerical data. The results have been notably poor, at least by the standards of success achieved by econometrics in other fields of economics, which are themselves modest enough. At least two possible reasons suggest themselves (and appear to be supported by the findings of this paper): simple mathematical models may be too 'blunt' to capture the nuances of market behaviour, and the data used (most of it from standard official sources) may not match the variables that are theoretically appropriate. As an example of the former, a firm with a dominant market share may thereby gain monopoly power to raise its prices, but it may have achieved its position by producing at lower cost than its competitors and passing on the cost differential in lower prices. As an example of data failings, there are plausible theoretical grounds for expecting market structure to be related to the levels of both price and cost, but from official data (such as Censuses of Production) we can do no more than approximately infer the difference, or margin, between price and costs through data on profitability. If both price and cost are positively related, say, to concentration, the relationship of concentration to the margin between them is likely to be blurred, at best, and impossible to interpret unambiguously.

Date
Tuesday, 08 May 2001
Tags
1978-1988, Active
