Abstracts


Toward contrastive explanations in GeoAI

Ben Adams, University of Canterbury

In the last few years interest in GeoAI has grown as newer machine-learning techniques have shown success when applied to geographic problems. For the most part, this work has focused on training predictive deep-learning models using large data sets. However, these models can be opaque: the reasoning behind their predictions will not be clear to a human who wants to make informed decisions based on them. I will introduce some recent research on explainable AI, and then discuss how we can build geographic AI systems that better explain their reasoning. In particular, I will focus on contrastive explanations and show how they might work for common cases of GeoAI use, including crime analysis, travel behaviour modelling and population projection.
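As a rough illustration of the contrastive idea (not taken from the talk), an explanation of ‘why P rather than Q?’ can be built by finding a small change to the input that flips a model’s prediction. The Python sketch below assumes a generic scikit-learn classifier; the crime-analysis framing, feature names, and data are all invented for illustration.

```python
# A toy contrastive ("why hotspot rather than not?") explanation sketch.
# The features, data, and model are placeholders, not from the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["distance_to_cbd_km", "streetlight_density", "foot_traffic"]
X = np.array([[1.0, 0.2, 0.9],
              [5.0, 0.8, 0.1],
              [1.5, 0.3, 0.8],
              [6.0, 0.9, 0.2]])
y = np.array([1, 0, 1, 0])  # 1 = predicted hotspot, 0 = not
model = LogisticRegression().fit(X, y)

def contrastive(x, model, target=0, step=0.1, max_iter=200):
    """Nudge the most influential feature until the predicted class
    flips; the changed features form the contrastive explanation."""
    x = x.copy()
    w = model.coef_[0]            # linear model: weights as sensitivities
    i = np.argmax(np.abs(w))      # most influential feature
    for _ in range(max_iter):
        if model.predict(x.reshape(1, -1))[0] == target:
            break
        x[i] -= step * np.sign(w[i])  # push the logit toward the contrast
    return x

x0 = np.array([1.2, 0.25, 0.85])
x_cf = contrastive(x0, model)
for name, a, b in zip(features, x0, x_cf):
    if abs(a - b) > 1e-9:
        print(f"{name}: {a:.2f} -> {b:.2f}")
```

The features that differ between the original input and the flipped one supply the contrast, e.g. ‘this cell is predicted to be a hotspot rather than not chiefly because of its distance to the CBD’.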

Protocol and sensor software development for fracture healing

James Atlas, University of Canterbury

The Mechanical Engineering Department at UC has developed a microelectronic strain sensor designed for use with a rod attached to the bone in fracture patients. Research and development is being carried out with the aim of tracking fracture-healing progress. When a fracture occurs, a rod is attached to the bone to hold the pieces together. As the fracture heals, the bone becomes stronger, placing less strain on the rod. Patients are put through periodic tests of walking, standing, etc. to obtain strain measurements from the rod. A machine-learning model is needed to use the strain data from the rod to classify the activities a patient is performing, enabling comparison of the strain experienced during activities over time to track healing progress. We have developed an initial model for a basic drill-press setup designed to emulate strain on a bone. The model successfully classified activities in a drill-press protocol covering many possible activities, achieving a cross-validated accuracy of 0.80952. This success demonstrates the applicability of the selected machine-learning method, Time Series Forest, in a strain-sensor context, and suggests that similar models are likely to contribute to the end goal of tracking healing progress for fractured bones.
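For readers unfamiliar with the method, the following minimal sketch shows the core of Time Series Forest (Deng et al., 2013): summarise random intervals of each series by their mean, standard deviation, and slope, then train a tree ensemble on those features. The synthetic strain traces, activity labels, and interval settings below are placeholders, not the authors’ drill-press data or protocol.

```python
# Minimal Time Series Forest sketch on synthetic stand-in data;
# not the authors' sensor pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def interval_features(X, intervals):
    """Summarise each interval of each series by mean, std, and slope."""
    feats = []
    for lo, hi in intervals:
        seg = X[:, lo:hi]
        t = np.arange(hi - lo)
        slope = np.polyfit(t, seg.T, 1)[0]  # per-series linear trend
        feats += [seg.mean(axis=1), seg.std(axis=1), slope]
    return np.column_stack(feats)

# Synthetic stand-in: 120 strain traces of 200 samples, 3 'activities'.
n, T = 120, 200
y = rng.integers(0, 3, n)
X = rng.normal(0, 1, (n, T)) + y[:, None] * np.linspace(0, 1, T)

# Random intervals shared across the ensemble.
intervals = [(lo, lo + w) for lo, w in
             zip(rng.integers(0, T - 20, 15), rng.integers(10, 20, 15))]

F = interval_features(X, intervals)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, F, y, cv=5).mean())  # cross-validated accuracy
```

The original algorithm draws fresh intervals for each tree; sharing one interval set across a random forest, as here, is a simplification that keeps the sketch short.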

Same same but different 

Christoph Bartneck, New Zealand Human Interface Technology Lab

The idea of robots has inspired humans for generations. The Bank of Asia, for example, commissioned a building that looks like a robot to house its headquarters in Bangkok. This profound interest in creating artificial entities is a blessing and a curse for the study of human-robot interaction. On the one hand it almost guarantees a headline in newspapers, but on the other hand it biases all participants in the study. Still, almost all robots that made it out of the research labs and into the market have failed. This talk will try to shed some light on why robots are so (un)popular.

Building a computer that thinks like the brain

Simon Brown, University of Canterbury

Recent progress in artificial intelligence and machine learning means that humans are now far inferior to computers at playing games like chess and Go. However, the brain is still far more efficient than even the largest supercomputers at performing some types of tasks, such as pattern or image recognition. This has motivated a worldwide effort to build brain-like or ‘neuromorphic’ computers, using a number of different approaches. The focus of neuromorphic computing is on hardware, in contrast to the usual software approaches to AI. I will review some of those approaches, which include the use of traditional silicon transistors to emulate neurons and synapses, and new solid-state devices which have synaptic and neuronal functionality. I will explain how my group has attacked one of the key remaining challenges: achieving truly brain-like networks using self-assembled nano-components. Not only have we been able to build highly complex networks, but the dynamical signals within those networks are remarkably similar to those of the brain.

Robots in Nozickland: a cautionary fairytale for our times

Doug Campbell, University of Canterbury

Minarchism is the theory, famously advocated by Robert Nozick, that a national state can be legitimate only if it is a minimal state—i.e., a state that confines itself to protecting its citizens from assault, theft, fraud and breach of contract. It remains a very influential theory on the economic right. In this talk I consider what would happen in a minimal state if inexpensive but highly capable artificially intelligent robots were invented, able to match or exceed human performance in most arenas. I argue that Nozick’s theory explodes under the weight of its own contradictions when the possibility of such machines being created is taken into account.

Quagmire & botheration for table & diagram: when numbers mean something more than just that

Giulio Dalla Riva, University of Canterbury

In this talk I’m going to share and analyse my experiences in teaching ethics and data science. As examples, I pick two courses—one at UC and the other at the University of British Columbia—which invited data science students to reflect on the ethical dimension of their work. I claim to have learned some lessons.

Explaining explainable AI

Tim Dare, University of Auckland, and Justine Kingsbury, University of Waikato

There is near consensus in the emerging field of data ethics that processes and systems must be explainable to a wide range of stakeholders. Europe’s General Data Protection Regulation (GDPR) guarantees individuals a ‘right of access’ to ‘knowledge of the logic involved’ in automated decision-making. New Zealand’s Algorithm Charter requires signatories to ‘maintain transparency by clearly explaining how decisions are informed by algorithms’. Are these two statements of the same requirement, or do they differ? What level of explanation of an automated decision is required, and why is it required? If a machine-learning algorithm reliably produces good outcomes, even though no-one can explain exactly how, mightn’t reliability trump explainability? In this paper we clarify the explainability requirement and examine the justification for it.

The strange phenomenon of Turing denial 

Zhao Fan and Jack Copeland, University of Canterbury 

Shortly before the Second World War, Alan Turing invented the fundamental logical principles of the modern digital computer. Turing was, however, a misunderstood and relatively isolated figure, who made little or no attempt to communicate with the main centres of postwar computer development in the United States. He generally received at best a minor mention in histories of the computer written in the 20th and early 21st centuries. All that changed in 2012, Turing’s centenary year, and he is now popularly regarded as the founding father of computer science. But an academic backlash has developed. ‘Turing deniers’ seek to show that Turing made no very significant contribution to the development of computers and computing. We examine the arguments of some leading Turing deniers. 

Autonomous futures: Positioning lethal autonomous weapons in the landscape of future warfare

Amy L. Fletcher, University of Canterbury

The emergence of lethal autonomous weapons (LAWs) will disrupt military strategy and war-fighting in an already tumultuous geopolitical era characterized by a cranky America, an assertive China, a rising India, and a recalcitrant Russia. Already, thirty countries have called for a global ban on LAWs, citing both the humanitarian consequences of ‘robot warfare’ and the need to have a human ‘in the loop’ of any final decision to use lethal force. However, the four countries noted above, though each has a different position on the nuanced specifics of using LAWs, nevertheless do not intend to sign such a ban and are committed to the autonomous war-fighting paradigm in the pursuit of geopolitical dominance. To begin to parse this extraordinarily complex policy domain, this paper asks: how do elite US stakeholders harness particular ideas of the future of warfare to position and legitimize LAWs? The underlying premise of this research is that, while LAWs are tangible technologies that exist in real time and space, concepts such as ‘autonomous warfare’ or ‘robot warfare,’ and the rules and ethics governing them, must be brought into being via elite-level discourse. This project, drawing upon issue-mapping analysis of over 1,000 pertinent mass media articles and policy reports, seeks to determine how elite stakeholders deploy cultural tropes (including popular culture) and future projections to justify ongoing investment in autonomous weapons.

Minds, Brains, and the Puzzle of Implicit Computation

Randolph Grace, University of Canterbury

Many behavioural and perceptual phenomena, such as spatial navigation and object recognition, appear to require implicit computation—that is, the equivalent of mathematical or algebraic calculation. This capacity is found across a wide range of species, from insects to humans. Why can minds and brains do this? Shepard (1994) gave a possible reason: because the world is described by Euclidean geometry and physical laws with algebraic structure, natural selection would favour perceptual systems that successfully adapted to those principles; thus, the algebraic and geometric invariants that characterize the external world have been internalized by evolution. Another possibility is suggested by our recent experiments with a novel ‘artificial algebra’ task. Participants learn by feedback, and without explicit instruction, to make an analogue response based on an algebraic combination of nonsymbolic magnitudes, such as line lengths or brightnesses. Results show ‘footprints’ of mathematical structure—response patterning that is not trained, implying that the participants have generated it themselves. These results suggest that algebraic structure is intrinsic to the mind, and offer an alternative explanation for implicit computation. According to our mathematical mind hypothesis, computation is what the brain is, not what the brain does. I conclude by exploring some implications of this view for artificial intelligence, numerical cognition, computational neuroscience, and philosophy of mathematics.

Not thinking like a young white western secular man—whose intelligence and what intelligence is being artificialized?

Mike Grimshaw, University of Canterbury

This paper takes the form of a thought piece raising the question of diversity in AI. Not only is there a noted lack of diversity in the tech industry; questions also need to be raised as to what constitutes the ‘intelligence’ in AI. We could—or rather need to—say: non-white, non-male, non-western minds matter.

What is it like to be a bot?

James Maclaurin, University of Otago

New Zealand’s animal welfare legislation reflects the fact that we, like many other countries, accord moral status to a wide variety of non-human animals. This raises the question of whether we might at some point have to accord weak artificial intelligence some sort of limited moral status. A recent proposal from John Danaher and Rob Sparrow suggests we deploy an ethical equivalent of the Turing test. This paper analyses the idea of ethical behaviour and argues that the proposed test is fundamentally ill-suited to detecting moral status in entities with simple mental and emotional lives.

Using AI to support student engagement in video-based learning

Tanja Mitrovic, University of Canterbury

Video-based learning is widely used in both formal education and informal learning in a variety of contexts. With the ubiquity of video content, video-based learning is seen as one of the main strategies for providing engaging learning environments. However, numerous studies show that to learn effectively while watching videos, students need to engage deeply with the video content. We have developed an active video watching platform (AVW-Space) to facilitate engagement with video content by providing means for constructive learning. Initial studies with AVW-Space on presentation skills showed that only students who commented on videos and rated comments written by their peers improved their understanding of the target soft skill. In order to foster deeper engagement, we designed a choice architecture and a set of nudges to encourage students to write more and to reflect on their past experience. The nudges are implemented using AI techniques and are generated automatically based on the student’s behaviour while watching videos. We conducted three studies investigating the effect of nudges. The results provide evidence that nudges are effective: students who received nudges wrote more comments, of more varied types and better quality.
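Purely as an illustrative sketch, and not the AVW-Space implementation, a behaviour-based nudge can be driven by simple rules over the watch-session state. All field names, thresholds, and messages below are invented.

```python
# An invented sketch of a behaviour-triggered nudge rule; this is not
# the AVW-Space implementation. All thresholds and messages are
# placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WatchState:
    seconds_watched: float  # total video time viewed so far
    comments_written: int   # comments the student has written
    pauses: int             # pause events, a rough proxy for reflection

def select_nudge(s: WatchState) -> Optional[str]:
    """Return a nudge message when viewing behaviour looks passive."""
    if s.seconds_watched > 300 and s.comments_written == 0:
        return "Try writing a comment: what stood out in the last section?"
    if s.seconds_watched > 600 and s.pauses == 0:
        return "Pause for a moment: does this match your own experience?"
    return None

# Example: seven minutes watched, no comments yet -> first nudge fires.
print(select_nudge(WatchState(seconds_watched=420, comments_written=0, pauses=3)))
```

More sophisticated triggers could replace the fixed thresholds with a learned model of engagement, but the rule form shows where the automation sits.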

Philosophical prototyping

Jonathan Pengelly, Victoria University of Wellington

Wallach and Allen argue that as artificial moral agents become more sophisticated, their similarities to and differences from humans will tell us much about who and what we are. This development, they claim, will be crucial to humanity’s understanding of ethics. This paper agrees that AI technologies have the potential to generate new philosophical insights. To do this, however, we must be open to new research methods which effectively utilise the power of these technologies. I propose one such method, philosophical prototyping, and show how it can be used to explore the limits, false and real, of moral theory.

Did Turing endorse the computational theory of mind?

Diane Proudfoot, University of Canterbury

Many, if not most, theorists assume that Turing anticipated the computational theory of mind. I argue that his account of intelligence and free will leads to a new objection to computationalism.

How to make a conscious robot

Justin Sytsma, Victoria University of Wellington

In one sense, robots are strikingly different from us. More different from us than mosquitos, or ferns, or even viruses. They are non-biological, non-living. They are artifacts. In another sense, however, robots can be strikingly similar to us. They can do many of the things that we do. In fact, they’re often created for just that purpose—to take over jobs previously done by humans. Not surprisingly, robots are a common comparison case for thinking about the mind, mental states, and cognitive capacities: they are both different from and similar to us, offering the hope of bringing into focus the role of behavioral cues in our beliefs that something has a mind or mental attributes. But this helpful tool is not without drawbacks. The very fact that robots are seen as different from us, often as radically other, carries the risk of bias. There is evidence that people are generally disinclined to attribute a range of mental capacities, such as consciousness, free will, feeling pain, and having emotions, to even extremely sophisticated humanoid robots described as being behaviorally indistinguishable from humans. Does this reflect bias—that even human-like robots are treated as other—or does it reflect something deeper about how we think about minds? Might the same tendencies that lead us to dehumanize members of human outgroups lead us to dehumanize robots, whatever their behavioral abilities? In this paper, we expand on previous work testing judgments about human-like robots, increasing the closeness between the participant and a robot, rather than simply the similarity between the robot and other humans. Across three large-scale studies (total N=3624) we find a large effect: when a robot is described as implementing a simulation of the participant’s brain, mental capacities typically denied to robots are ascribed at levels similar to self-attributions, and at much higher levels than when a robot is described as implementing a simulation of a stranger’s brain. Further, the same effect is found when comparing a robot running a simulation of a close friend’s brain versus a stranger’s brain. The results suggest that making a robot that people judge to have the full range of human mental capacities depends not so much on what the robot is capable of doing, but on people taking the robot to be part of their ingroup.
