Wednesday, December 28, 2011

Software Quality

I do not regard myself as a software engineer. Sure, I 'do' programming (usually with some C-based language, if you're interested), but such programming is to implement some functionality that I want or need for some purpose - to implement a model or a data processing tool, for instance - and it has been for my own use. As such, there has been minimal effort put into the inherent extensibility or modularisation of my code. I've always paid relatively good attention to code commenting and some flexibility of use (personal, that is), but that mainly stems from my inability to remember implementation details rather than from some grand vision of code reuse. I've also never really been bothered about efficiency or speed. I think this stems from the fact that as an undergrad I did some programming with some fairly limited processors, on which resources (particularly memory) were limited and had to be at least considered, if not actively managed - moving from those to a proper desktop led me to stop worrying about resources, to the extent that I stopped even considering them. I would, and do, implement code in the way it worked in my head, not in a way that would be particularly efficient in terms of either computational resources or (my) time. This had another side-effect given my lack of memory: I could look back on my code and almost reconstruct my line of thought. In summary, I was more a hacker than a crafter of code - it was more important what the stuff did and how it corresponded to the things swirling in my head than how it looked on the screen, or how it actually worked. And I thought that this was all that was required, and was, to be honest, a little smug in thinking so.

Sometimes I feel that this is what programming actually is...

However, in more recent times, my programming has had a number of additional constraints imposed upon it. These are probably so fundamental to most normal people (by which I mean most programmers and software engineers) that my mentioning them may border on the ridiculous, but I think they are probably not so to your bog-standard academic type, like me. It basically boils down to a simple fact: there are other people out there, and under certain circumstances they might actually need to use or modify your code, or collaborate with you on it. This process has only relatively recently begun to directly affect me and my work, starting a little way into the ALIZ-E project, but it's one that I am increasingly having to take into account, and one that I seemingly remain a little reluctant to embrace. Basically, the idea is that putting effort into those aspects of software development that are not directly related to the desired functionality, but rather to more general infrastructure and usability from the perspective of the programmer (as well as the notional user, of course), is very beneficial in the long run. Put like that, it seems sensible. But it's not particularly obvious when you're actually trying to throw together something that works in the face of a deadline. At least it wasn't to me.

Anyway, this issue was raised in my mind over the past week or two for two reasons: firstly, I noticed that a project colleague has put up (or updated?) a page on his website about software quality; secondly, I just happen to be reading a book (given to me by some very dear friends) about programming (which is far more interesting than it may sound, but that is, I think, for another post when I've finished reading the book...).

This project colleague is Marc Schroder, at DFKI in Germany, and this is the page I am referring to. The first time I met him was at the first ALIZ-E software integration meeting, at which he kept talking about good programming practice, and good practice regarding implementation structures and methods. To be perfectly honest, I viewed a lot of this as programming idealism, distracting from the task at hand: implementing a (complex) system that would do what we wanted. Speaking to him during one of the meeting intervals, I made the point that a lot of academic programmers are hackers like me - good at making code that does what we want, but not necessarily good software engineers. I have no doubt that he'd heard and experienced this before, and indeed he was probably rather exasperated at the situation with academic-hacker types like me. He, on the other hand, has a good deal of experience not just with good quality programming, but also with multi-party project management, of which I had none. So, he knows what is involved in making software actually work in a collaborative environment in which the participants interact remotely.

From the description on his software quality page, and the various evangelist-style talks he's given us in the project on good coding practice (and I don't mean this in a negative manner - it's just descriptive of the style in which academic types speak to each other on subjects that deeply interest them...), I have subsequently expanded my list of coding requirements. Or at least, I've added his recommendations to my desiderata, and am trying to actually incorporate them into the stuff I normally do. The list below is ordered roughly by importance to me at the moment, and misses things out (from Marc's list at least), probably because I don't (yet!?) understand their necessity.

  1. Error handling - as in proper error handling, returning descriptions of what actually went wrong so that you can figure it out, not just some way of not crashing the entire computer when something doesn't go to plan... (there's a small sketch of what I mean below this list)
  2. Test-driven development - I think I understand the main principles involved, but to be honest, the practical details of how this should actually be approached are still tantalisingly out of reach... The idea of the tests as a living specification that keeps up to date with the code, serves as a genuinely useful tool for verifying updates, and replaces (to a certain extent) an external body of documentation seems like a good idea all round. (Again, a toy sketch follows below.)
  3. Refactoring - now this is something I have actually been doing for a while, though not for the efficiency aspects: more for matching the code's operation to my internal cognitive machinations, and for some (limited) future flexibility, so I can easily change parameters and rerun, for example (see the last sketch below).
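
To illustrate the first point, here's a minimal sketch in C of what I mean by returning descriptions of what actually went wrong, rather than just avoiding a crash. All the names here (cfg_load, the error codes, model.cfg) are made up purely for illustration, not from any real library:

```c
#include <stdio.h>

/* Illustrative error codes - these names are my own invention */
typedef enum {
    CFG_OK = 0,
    CFG_ERR_NOT_FOUND,
    CFG_ERR_BAD_VALUE
} cfg_status;

/* Map each code to a human-readable description */
static const char *cfg_describe(cfg_status s)
{
    switch (s) {
    case CFG_OK:            return "no error";
    case CFG_ERR_NOT_FOUND: return "config file not found";
    case CFG_ERR_BAD_VALUE: return "config value missing or out of range";
    default:                return "unknown error";
    }
}

/* A stand-in loader: returns a status rather than crashing
   or silently carrying on with garbage */
static cfg_status cfg_load(const char *path, int *value_out)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return CFG_ERR_NOT_FOUND;
    if (fscanf(f, "%d", value_out) != 1 || *value_out < 0) {
        fclose(f);
        return CFG_ERR_BAD_VALUE;
    }
    fclose(f);
    return CFG_OK;
}

int main(void)
{
    int value;
    cfg_status s = cfg_load("model.cfg", &value);
    if (s != CFG_OK) {
        /* The caller gets enough information to figure out what failed */
        fprintf(stderr, "cfg_load failed: %s\n", cfg_describe(s));
        return 1;
    }
    printf("loaded value: %d\n", value);
    return 0;
}
```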
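
For the second point, a toy illustration of the idea as I understand it: the asserts below are (notionally) written before the function, act as a living specification of what it should do, and can be rerun to verify any later change. The clamp() function itself is just a simple example of mine, not anything from Marc's material:

```c
#include <assert.h>
#include <stdio.h>

/* The function under test: written *after* the tests below were
   sketched out. A deliberately trivial example. */
static double clamp(double x, double lo, double hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* The tests double as a living specification of clamp()'s behaviour */
static void test_clamp(void)
{
    assert(clamp(5.0, 0.0, 10.0) == 5.0);   /* in range: unchanged */
    assert(clamp(-1.0, 0.0, 10.0) == 0.0);  /* below: pinned to lo */
    assert(clamp(99.0, 0.0, 10.0) == 10.0); /* above: pinned to hi */
    assert(clamp(0.0, 0.0, 10.0) == 0.0);   /* boundary: unchanged */
}

int main(void)
{
    test_clamp();
    printf("all tests passed\n");
    return 0;
}
```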
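
And for the third, a small example of the sort of refactoring I mean by limited future flexibility: pulling a magic number out into a parameter, so the behaviour is unchanged but it's easy to change the value and rerun. Again, entirely made up for illustration:

```c
#include <stdio.h>

/* Before refactoring, the threshold was buried in the loop body as a
   magic number (0.75), so rerunning with a different value meant
   editing the function itself. Pulled out as a parameter, the
   behaviour is identical but the experiment is easy to rerun. */
static int count_above(const double *data, int n, double threshold)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (data[i] > threshold)
            count++;
    }
    return count;
}

int main(void)
{
    double samples[] = { 0.2, 0.8, 0.9, 0.5, 0.77 };
    int n = sizeof(samples) / sizeof(samples[0]);

    /* Easy to change the parameter and rerun */
    printf("above 0.75: %d\n", count_above(samples, n, 0.75));
    printf("above 0.50: %d\n", count_above(samples, n, 0.50));
    return 0;
}
```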

I'd say that I now understand the necessity of these three things at least, and I know that I need to apply them to my work as fundamental features rather than mere after-thoughts. But I am also aware that this process has only just begun for me, and that there is far more I need to learn about testing regimes, interface definitions, etc., that is as yet just too unfamiliar to me. And yet there remains this vestigial resistance to such practices, in favour of the hack-together-for-application methodology...

Tuesday, December 20, 2011

CFP: AISB Symposium on Computing and Philosophy

An upcoming event with which I have a minor involvement is the 4th incarnation of the AISB Symposium on Computing and Philosophy, which is due to take place in Birmingham, U.K., between the 2nd and 6th of July 2012. The AISB convention is this year being held in conjunction with the International Association for Computing and Philosophy (IACAP), and will mark the 100th anniversary of Alan Turing's birth. There are 16 different symposia at this convention in all, each with a different emphasis on the interaction between AI/computing and philosophy. For those of a bent in that particular direction, there will be plenty to attract your attention!

An overview of the Symposium on Computing and Philosophy, from the website:

Turing’s famous question ‘can machines think?’ raises parallel questions about what it means to say of us humans that we think. More broadly, what does it mean to say that we are thinking beings? In this way we can see that Turing’s question about the potential of machines raises substantial questions about the nature of human identity. ‘If’, we might ask, ‘intelligent human behaviour could be successfully imitated, then what is there about our flesh and blood embodiment that need be regarded as exclusively essential to either intelligence or human identity?’. This and related questions come to the fore when we consider the way in which our involvement with and use of machines and technologies, as well as their involvement in us, is increasing and evolving. This is true of few more than those technologies that have a more intimate and developing role in our lives, such as implants and prosthetics (e.g. neuroprosthetics).

The Symposium will cover key areas relating to developments in implants and prosthetics, including:
  • How new developments in artificial intelligence (AI) / computational intelligence (CI) look set to develop implant technology (e.g. swarm intelligence for the control of smaller and smaller components)
  • Developments of implants and prosthetics for use in human, primate and non-primate animals
  • The nature of human identity and how implants may impact on it (involving both conceptual and ethical questions)
  • The identification of, and debate surrounding, distinctions drawn between improvement or repair (e.g. for medical reasons), and enhancement or “upgrading” (e.g. to improve performance) using implants/prosthetics
  • What role other emerging, and converging, technologies may have on the development of implants (e.g. nanotechnology or biotechnology)
But the story of identity does not end with human implants and neuroprosthetics. In the last decade, huge strides have been made in ‘animat’ devices. These are robotic machines with both active biological and artificial (e.g. electronic, mechanical or robotic) components. Recently one of the organisers of this symposium, Slawomir Nasuto, in partnership with colleagues Victor Becerra, Kevin Warwick and Ben Whalley, developed an autonomous robot (an animat) controlled by cultures of living neural cells, which in turn were directly coupled to the robot's actuators and sensory inputs. This work raises the question of whether such ‘animat’ devices (devices, for example, with all the flexibility and insight of intelligent natural systems) are constrained by the limits (e.g. those of Turing Machines) identified in classical a priori arguments regarding standard ‘computational systems’. 
Both neuroprosthetic augmentation and animats may be considered as biotechnological hybrid systems. Although seemingly starting from very different sentient positions, the potential convergence in the relative amount and importance of biological and technological components in such systems raises the question of whether such convergence would be accompanied by a corresponding convergence of their respective teleological capacities; and what indeed the limits noted above could be.

For more information, see the symposium website. For those interested in submitting a paper, the deadline for submissions is the 1st of February 2012.

Monday, December 19, 2011

Me: films and robots

My first post in over a year, so let's start with something silly :-)

I'm sure that anyone who is working, or has worked, with robots has been influenced in some way (if not inspired) by some depiction in a work of fiction - most likely film - whether they choose to admit it or not (those who don't admit it are probably lying). I'm quite happy to admit to this - and can point to two such intelligent robotic devices. What precisely about them gave rise to this influence I don't know - and I don't really want to deconstruct it in case it turns out to be ridiculous and/or trivial - but here they are nonetheless for you to assess.

Johnny 5 is alive!
The first one is the amazing - and actually fairly realistic (in terms of achievable mechanical complexity) - Johnny 5 from Short Circuit. I can't really say enough about this dude - I did really want one of the little mini-me's from the second film though! I can't really remember the first time I watched this, but I do know that over the many occasions I've watched the films I'm still drawn to it, despite the occasionally dodgy special effects (I'm thinking of the dancing)... 

The second one is the intelligent spaceship/robot-arm thing in Flight of the Navigator: 'Max'. I'm not entirely sure if this is supposed to be an AI robot, or an alien-being-controlling-a-robot, but any device that can fly a spaceship, go manic, and time-travel is alright in my book. The single eye-on-an-arm thing was a bit strange, though even with such a fairly simple setup, the array of emotional expression was really quite impressive.

(I've only just realised that both of these films were released in '86 - this is just coincidence, as I watched both on TV a number of years afterwards - I didn't watch them in the cinema or anything.) I'm not sure these would be the choices of most people - and I'm not going to bring age into it - but they're mine :-)


/rant
Having said all that though, there is a bit of a cautionary note, I think. As much as the portrayal of the robot in science fiction is of course hugely beneficial in terms of building and maintaining interest in these synthetic devices, I do wonder sometimes whether it actually has the reverse effect in the long term: building expectations of what such devices can do, not just beyond what is currently possible, but beyond what is even plausibly possible. In the end, would this not just turn people off when they realise that the real state of the art is actually fairly mundane? Or that what people like me think of as really quite exciting developments just pale in comparison with the vividly recreated imaginations of script writers and graphic designers? In the end, surely such levels of unfulfilled expectation will serve as a damper on funding initiatives (I'm thinking of potential career prospects here...!) - "but what you are trying to do isn't really exciting, they were talking about it in the '70s/'80s/'90s/etc...". Either that, or the reality drifts so far from expectation that most people don't understand what's going on, and you end up in the same place. But that is perhaps for another discussion, on public engagement with science...

Or maybe I'm reading far too much into all of this, and should really just sit back, relax, and enjoy the view...

/endrant