
Feedback on WEB-EM-7 submissions

The provisional marks for the CS405 coursework have now been drawn up and distributed. In one or two cases there are outstanding issues still to be addressed, and I have been unable to mark a submission because the model was not properly recorded or transmitted. Whilst I took a great deal of trouble to try to arrive at a fair evaluation of all your submissions, it is possible that I have overlooked features of your models and nuances in your papers (especially where you provided no Readme, or had difficulty in communicating your ideas). In addition to these general comments, I have prepared, and will distribute within the next week, feedback on each individual submission, giving a breakdown of the paper and model marks. This feedback was written primarily with auditing my marking process in mind (as, say, an external examiner may wish to do), and is not as sensitive to your likely reaction as it would be if it had been written just for you. Please make allowance for its directness - I do not mean to be negative in my criticism. If, in the light of my feedback, you think I have made a serious error of judgement, please feel free to come and discuss your submission with me so that, if appropriate, I can adjust your mark accordingly.

I was generally very pleased with the quality of submissions for WEB-EM-7. They reflected a high level of commitment and interest in EM themes, and some of the work was of an exceptionally high standard. I particularly appreciated the submissions that introduced new topics and those that engaged with the research-oriented aspects of the module (such as the use and development of Cadence). It was also good to see notations and techniques that were not featured in the labs being used: these included Eddi, Angel and the AOP.

In selecting a modelling exercise that fits well with the standard picture of a construal in EM, the emphasis is on establishing a correspondence in your present experience between the state of the construal (on the computer) and the state of its referent. In setting up such a "state-based" scenario for observation, you focus on specifying a network of observables and dependencies between them, so that the effects you see when interacting with the construal mimic the basic interactions you can observe with its referent.
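To make the idea concrete, here is a minimal sketch - written in Python rather than in EDEN, Scout or Donald, with purely invented names - of observables whose values are maintained by dependencies, so that redefining one observable is automatically reflected in everything defined in terms of it.

```python
# A minimal illustrative sketch (Python, not an EM notation; all names are
# invented for this example) of observables linked by dependencies: redefining
# one observable is automatically reflected in everything defined in terms of it.

class Model:
    def __init__(self):
        self.defs = {}                     # observable name -> defining formula

    def define(self, name, formula):
        self.defs[name] = formula          # redefinition is the only state change

    def value(self, name):
        return self.defs[name](self)       # evaluate on demand via dependencies


m = Model()
m.define("width",  lambda m: 4)
m.define("height", lambda m: 3)
m.define("area",   lambda m: m.value("width") * m.value("height"))

print(m.value("area"))                     # 12
m.define("width", lambda m: 10)            # an agent redefines an observable ...
print(m.value("area"))                     # 30 - 'area' stays consistent by dependency
```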

In many of your modelling studies, this raw direct interaction with state was either absent or marginalised (e.g. all state-changing activity had been engineered from the outset with specific agents and purposes in mind), or it was manifest in the somewhat closed and predictable way that is evident in a system or phenomenon with a well-established construal (as in "what is the trajectory of this projectile when launched at this speed in this direction?"). When testing your models, I tried to identify underlying dependencies that could be exposed by direct agent interaction, but these did not always exist! The effect was often rather as if you had written a program to make a robot climb a specific staircase, and I was reconfiguring the staircase as the robot was climbing. A helpful principle to keep in mind here is to ensure that your model admits and integrates as many different kinds of agency as possible.
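By way of illustration only (Python, with an invented staircase scenario rather than any submitted model), the contrast is between a plan fixed in advance and an action that is re-derived from the current state of the relevant observables at every step:

```python
# An illustrative contrast (Python; the scenario and names are invented):
# a scripted climb fixes its plan in advance, whereas a responsive climb
# re-derives its next action from the staircase observable as it is *now*.

def scripted_climb(steps):
    return ["climb"] * steps               # plan baked in; later changes go unnoticed

def next_action(staircase, position):
    return "climb" if position < len(staircase) else "stop"

staircase = [1, 1, 1]                       # three steps to begin with
pos = 0
while next_action(staircase, pos) == "climb":
    pos += 1
    if pos == 2:
        staircase.append(1)                 # another agent reconfigures the staircase
print(pos)                                  # 4 - the extra step is accommodated
```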

For a basic level of performance, I generally wanted to see that at least the external state of your model (i.e. what appeared on the display) was linked to its internal state by dependencies (a state of affairs that the use of Scout and Donald readily affords). I appreciate that when raw interaction with state is engineered through ritualised interaction into process-like activities (e.g. as in the heapsort construal I introduced in the lectures), the relationship between the visualisation and the internal state may become more subtle, but in general it's a bad sign if you are not using scripts to maintain consistent visual state. And of course, I would ideally expect to see other kinds of dependencies represented in your model, mediated by definitions that don't simply assign explicit values to observables.
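As a rough illustration (in Python, with hypothetical names - in the tools themselves this is the kind of linkage that Scout and Donald definitions readily afford), the displayed text below is defined in terms of the internal observable rather than updated as a side effect:

```python
# A rough sketch (Python, hypothetical names): the external, displayed state is
# *defined in terms of* the internal observable, so it cannot drift out of step
# with it, however the internal state comes to change.

internal = {"score": 0}

def display_text():
    return f"Score: {internal['score']}"    # the display depends on internal state

internal["score"] = 7
print(display_text())                       # Score: 7
internal["score"] = 11
print(display_text())                       # Score: 11
```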

Unfortunately, there are many cultural influences that encourage you to use agency in preference to dependency, and to think primarily in process-like and triggered-action terms. When we already have a good construal for a phenomenon, we can often understand its behaviour as a process, and don't need to explore our construal in an experimental fashion. Many of the models submitted were like this: based on a standard construal, and affording only such changes to the values of observables as were motivated by that construal. Another way to back into stereotyped agency is to make use of the (deprecated) VB-like features in EDEN (e.g. TEXTBOX windows), which are not very well supported, and tend to promote the state-by-side-effect paradigm that underlies conventional programming. If you consider that the aspiration in EM is for all change of state (as observed from no matter what viewpoint, and as effected by no matter what agent) to correspond to the redefinition of an observable, then one telltale sign that you have deviated from EM principles is that undo becomes difficult. This is particularly topical in models of activities that are goal-oriented in character - e.g. game playing, where the whole state model is liable to collapse into degeneracy when the game finishes.
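One way to see the connection with undo (a sketch in Python, illustrative only, with invented names): if the history of redefinitions is the complete record of state change, stepping the whole model back is straightforward; state changed by ad hoc side effects leaves no such record.

```python
# Illustrative sketch (Python, invented names): when every change of state is the
# redefinition of an observable, keeping the history of redefinitions is enough
# to undo - there is no hidden side-effect state to reconstruct.

history = []
state = {}

def redefine(name, value):
    history.append((name, name in state, state.get(name)))  # record old definition
    state[name] = value

def undo():
    name, existed, old = history.pop()
    if existed:
        state[name] = old
    else:
        del state[name]

redefine("x", 1)
redefine("x", 2)
undo()
print(state["x"])                            # 1 - the earlier definition is restored
```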

Making the primary target of your modelling study something abstract, like a specific algorithm or algorithmic problem, makes it very challenging to maintain an EM focus (though this can be done, as the heapsort modelling activity illustrates). In getting to understand EM principles, it's helpful to have a concrete external referent, and difficult - until you have some experience - to develop a good modelling study without such a referent. It's also helpful to have in mind a referent that has an ambiguous construal and where the potential agency is not too circumscribed.

Writing a conference-style paper about your model is a very challenging task, and it is probably unreasonable to expect every student to be able to do it, given the level of language skill it requires. In retrospect, it might have been helpful to stress that a paper that carefully motivates, describes and documents your modelling study is easier to write than a more high-level paper, and (though it may not be a way to achieve a really high mark) is quite enough to earn a respectable mark. When I indicated that you might wish to shift the weighting from model to paper if you had technical difficulties, I really intended you to stay closely engaged with the theme of your modelling study. Some students instead used the paper in a rather different way, resorting to generalities drawn from tangentially related EM papers when model-building proved too difficult, and this did not lead to such a coherent result.

In some respects the paper-writing exercise for WEB-EM-7 is preparation for research publication. An important element in this is learning to be as careful and precise as possible in your use of language. Statements like "EM is a new kind of software model" or "EM is a new programming language" conflate concepts from distinct categories (how can a style of modelling be a new kind of model?). A succession of errors of this kind can make it virtually impossible to understand what is being said. You should also pay attention to other stylistic issues - for instance, the term "human-centred" is to be preferred to "man-centred".