Invited Speakers

Dr Christine Mangelsdorf
School of Mathematics and Statistics, University of Melbourne

Lessons Learned in Teaching Engineering Mathematics
Teaching mathematics to a large class of engineering students poses many challenges.  Most of these are encountered in any large class, namely the great diversity in students’ abilities, backgrounds, English language skills and work ethic.  However, teaching mathematics to engineering students has its own unique challenges, not least the generally low student motivation and negative attitude towards a subject that many regard as irrelevant to their degree.

Indeed, in the past, engineering mathematics subjects at Melbourne regularly had high failure rates, with substandard performance on assignments, tests and exams. Students, Departments and Faculties placed great pressure on teaching staff to heavily scale results, leading to low staff morale.  In recent years, however, we have employed a number of approaches that have significantly improved student engagement and performance. In this presentation, I will share what has worked and what has not.  The lessons learned cover many facets, including syllabus design, assessment practices, administrative matters, tutorial formats, lecture presentation, controlling large classes, making mathematics relevant to engineering students, and providing extra support materials online and in person.

Christine is a teaching specialist in the School of Mathematics and Statistics at the University of Melbourne. She has been coordinating, lecturing and tutoring large first and second year level mathematics subjects for 20 years. Christine was awarded the 1999 Dean’s Award for Excellence in Teaching in the Faculty of Science.

Christine has been heavily involved in promoting mathematics to secondary school and university students. She is Chair of the School’s Recruitment and Publicity Committee, which is responsible for organising mathematical events and developing careers-related materials, websites and displays. She has been a committee member of the Victorian Branch of ANZIAM for 16 years.

Associate Professor Kylie Catchpole

College of Engineering and Computer Science, ANU

The bright future of solar energy
Solar electricity is continually dropping in price, and globally solar is a hundred-billion-dollar industry.  The price of solar electricity is now lower than the retail price of conventional electricity in many parts of Australia, and solar panels have been installed on over one million roofs across the country.  This presentation will give an overview of the astonishing growth in solar we’ve seen to date, and what we can expect in the future, including the latest research on using nanotechnology and new materials to improve the efficiency and reduce the cost of solar cells.  It will also look at how mathematics and statistics can contribute to furthering our understanding of solar energy conversion.

Kylie Catchpole is an Associate Professor at the Centre for Sustainable Energy Systems at the Australian National University.  A/Prof. Catchpole’s research focuses on using nanotechnology and new materials in solar cells to make them cheaper and more efficient.  Her work on plasmonic solar cells was named one of the top 10 emerging technologies of 2010 by MIT Technology Review, and she was awarded the John Booker Medal for Engineering Science in 2015.  She is the author of over 90 publications, and her work has been featured in the news sections of Science magazine and The Economist.

Professor Martyn Nash

Auckland Bioengineering Institute, and Department of Engineering Science, University of Auckland

Multi-scale modelling of cardiac electro-mechanics, arrhythmias and heart failure
The pumping function of the heart is controlled by the interactive effects of cardiac muscle mechanics and electrophysiology. The electro-mechanical properties of the heart can be investigated using multi-scale mathematical modelling, by coupling a finite difference scheme representing cardiac electrophysiology with a nonlinear finite element method describing tissue mechanics. This seminar will focus on the development of integrative computational models, based on structural and functional medical images of the heart, as well as the continuum modelling methods for representing cardiac excitation-contraction coupling and mechano-electrical feedback (MEF).

Mechanisms of cardiac arrhythmias and the cellular basis of MEF-driven ventricular fibrillation will be discussed. Biophysical heart models can also be used to investigate heart failure in patients by characterising the passive and contractile properties of the ventricular musculature.

Individualised models of this kind can help to stratify the different forms of heart abnormalities, and thus have the potential to inform patient management and therapy.
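The electrophysiology half of the coupling described in this abstract can be illustrated, at a drastically reduced scale, with a one-dimensional monodomain (cable) model solved by explicit finite differences. The sketch below uses FitzHugh–Nagumo kinetics with parameter values chosen purely for illustration; it is not the speaker's model, only a minimal instance of the finite-difference electrophysiology component.

```python
import numpy as np

def fitzhugh_nagumo_cable(n=200, dx=0.5, dt=0.05, steps=3000,
                          D=1.0, a=0.1, eps=0.005, gamma=0.5):
    """Explicit finite-difference solution of the 1D monodomain
    (cable) equation with FitzHugh-Nagumo kinetics:
        dv/dt = D v_xx + v (v - a)(1 - v) - w
        dw/dt = eps (v - gamma w)
    Returns the peak voltage observed at each node over the run.
    All parameter values are illustrative only."""
    v = np.zeros(n)
    w = np.zeros(n)
    v[:10] = 1.0                      # stimulate the left end of the fibre
    peak = v.copy()
    for _ in range(steps):
        lap = np.empty(n)             # Laplacian with no-flux boundaries
        lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        lap[0] = 2 * (v[1] - v[0]) / dx**2
        lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
        dv = D * lap + v * (v - a) * (1 - v) - w
        dw = eps * (v - gamma * w)
        v += dt * dv
        w += dt * dw
        np.maximum(peak, v, out=peak)  # record wavefront passage
    return peak

peak = fitzhugh_nagumo_cable()
# A depolarisation wave launched at the left end should have passed
# the midpoint of the fibre by the end of the simulation.
print(peak[100])
```

In a full electro-mechanical model, the voltage field computed this way would drive active tension in a nonlinear finite element mechanics problem, with deformation feeding back on the electrophysiology (MEF).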

Martyn completed a B.Eng. and a PhD in cardiac mechanics at the University of Auckland. He then spent six years at the University of Oxford engaged in post-doctoral research on cardiac electrophysiology.

In 2003, Martyn joined the academic staff at the University of Auckland, where he is a Professor of Biomedical Engineering in the Department of Engineering Science, and a Principal Investigator at the Auckland Bioengineering Institute. The primary focus of Martyn’s research is on bioengineering analyses of soft tissues (such as the heart, breast, skin, tongue and pelvic floor), for which he applies integrative model-based methods to interpret the variety of biological recordings available from laboratory and clinical studies. He is also engaged in the design of experimental and imaging techniques, and instrumentation, to generate data of sufficient quality to further inform the mathematical models. This continual interplay between biological observation, bioinstrumentation and mathematical modelling provides mechanistic insight into physiological function in health and disease.

Dr Darryn Reid

Defence Science and Technology Group

Dr Darryn J Reid has been with DSTO since 1995, and has worked in distributed systems, distributed databases, machine learning and artificial intelligence, interoperability, formal reasoning and logic, modelling of C2, simulation, optimisation and optimal control, electronic warfare, intelligence analysis tools, missile targeting and control, command support systems, operations research, parallel and distributed computation, hardware design, mathematical control theory, mathematical complexity and nonlinear dynamics, advanced web-based technologies, languages, model theory and computation, stochastic modelling, formal ontology, object-oriented and functional programming, crowd modelling and military theory. He holds the degrees of Bachelor of Science in Mathematics and Computer Science, Bachelor of Science with First Class Honours in Mathematics and Computer Science, and Doctor of Philosophy in Theoretical Computer Science from the University of Queensland. He has strong research interests in pure and applied mathematics, theoretical and applied computer science, philosophy, military theory and economics. In other words, he knows just enough to realise how ignorant he is. He is currently trying to age as disgracefully as possible, with the support of his beautiful wife Julie and their son Tyler.

The Imperative of Mathematical Uncertainty
People tend to tacitly believe in the obtainability, at least in principle, of certainty, and to look to science as the provider of that certainty; arguably, mathematics is often taken to be the pinnacle of this programme. Yet for all its surface appeal, this view simply does not withstand scrutiny: indeed, it is mathematics itself that refutes it in the strongest terms possible. Far from the smooth mechanistic universe of tidy analytic solutions and strong predictions, emerging post-modern mathematics is a vibrant world bubbling with uncertainty and alive with creativity.

Rejecting the presumption that strong prediction is a necessary and sufficient condition for control, I propose to shift problem choices towards creating models and controllers that can operate in inescapably unpredictable contexts. This entails that controllers – which are really situated models – contribute towards generating environmental uncertainty. Order parameters from existing theory might be applied to investigate the properties of such systems; we have a new kind of test, analogous to the famous Turing Test, for models and controllers in unpredictable environments.

Mathematical modelling usually assumes a strong separation between problem and solution; complexity is inherently limited, and this, in turn, limits both the potential for the growth of knowledge and the real-world viability of the solution. For example, modelling and simulation problems are typically oriented around a high-fidelity environment that is tightly defined to try to eliminate the unexpected. Significant advances will instead come from the development of models that are capable of dealing with genuinely complex problems comprising environments that manifest future states that are unpredictable, within acceptable limitations. In other words, fidelity should be traded for unpredictability, rather than the other way around.

I argue that mathematics should tackle problems in which the interactions between the solutions, within their operational setting, are actually what create the problem. This implies defining the problem on top of a base world of an environment and resources, with the interactions creating the complexity that produces unpredictable future states of the system while they are in the process of solving the problem of just surviving in the very environment they collectively create.

This also raises the intriguing possibility of new simulations in which scenarios are discovered rather than modelled. Any time we have a system that is not stationary or not regular – such as a K-system or a Bernoulli system – we have a system in which more data does not give more information, and which manifests unique transient states. Unique transient states are scenarios, and they may be much more useful for testing sensitivities to failure than the preconceived scenarios we utilise today.
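The point that more data gives no more information in such systems can be seen in miniature with the logistic map at r = 4, a smooth conjugate of the Bernoulli (doubling) shift. The map choice and numbers below are mine, purely to illustrate the sensitivity involved: two histories that agree to ten decimal places become macroscopically different within a few dozen steps.

```python
# Sensitive dependence in the logistic map x -> 4x(1 - x),
# which is conjugate to the Bernoulli (doubling) shift.
def logistic_orbit(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 100)
b = logistic_orbit(0.3 + 1e-10, 100)   # perturb in the 10th decimal place
gaps = [abs(x - y) for x, y in zip(a, b)]

# The Lyapunov exponent is ln 2, so the gap roughly doubles each step:
# invisible for the first ten or so iterations, order-one soon after.
print(max(gaps[:10]), max(gaps[40:]))
```

Longer observation of one trajectory therefore pins down essentially nothing about the other, which is the sense in which each transient is unique.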

Irreducible uncertainty is as inherent in mathematics as it is in the phenomena that we use mathematics to understand and explore. The development of mathematics is not so much about solving problems – most problems have no solution, and the solutions to those that do are rarely as interesting as we imagine beforehand – but about creating new problem choices.

Dr Ben Rubinstein
Department of Computing and Information Systems, University of Melbourne

Ben has been a Senior Lecturer in Computing and Information Systems at the University of Melbourne since 2013, and will also be an ARC DECRA Fellow starting in 2016. Previously he gained four years of US industry experience in the research divisions of Microsoft, Google, Intel and Yahoo!, followed by a short stint at IBM Research Australia. As a full-time Researcher at Microsoft Research, Silicon Valley, Ben shipped production systems for record linkage in Bing and the Xbox 360. He actively researches topics in machine learning, security, privacy and databases, areas he also engages with outside academia. Ben earned his PhD (2010) in Computer Science from UC Berkeley.

Data Integration through the Lens of Statistical Learning
Data integration is a staple in statistics (record linkage), databases (entity resolution, deduplication), data mining (nearest neighbours) and natural language processing (co-reference resolution). While it has enjoyed significant academic attention since the 1940s, and delivers great value in industry, as a technology data integration remains unsolved. After briefly overviewing the standard (databases) entity resolution pipeline, with examples drawn from product applications, I’ll cover three separate but related research projects: (1) how to scale similarity scoring between two sources to many sources, under small judgment budgets (spoiler: using transfer learning); (2) how relatively homogeneous attributes of matched records can be automatically merged (topic model-like Bayesian networks); (3) how more heterogeneous attribute values can be systematically normalized (using canonical correlation analysis). The punchline of these case studies is one of machine learning helping scale an important database problem to the dirty, multiple-source datasets that are common to modern, industrial data integration. Based on work with Negahban (Yale), Zhao (LinkedIn), Han (UIUC), Gemmell (Trov) and Lim (University of Melbourne).
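The similarity-scoring step at the heart of the standard entity resolution pipeline can be sketched in a few lines. The records, tokenizer and threshold below are invented for illustration and are not from the talk; real pipelines add blocking, learned weights and human judgments on top of exactly this kind of pairwise score.

```python
import re

def tokens(s):
    """Lowercase word/number tokens of a record string."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def jaccard(a, b):
    """Jaccard similarity between the token sets of two records."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Two toy product catalogues describing overlapping entities.
source_a = ["Apple iPhone 12 Pro (128 GB)", "Samsung Galaxy S21 5G"]
source_b = ["iPhone 12 Pro 128 GB", "Galaxy Tab S7"]

# Score all cross-source pairs and keep those above a threshold.
THRESHOLD = 0.5
matches = [(x, y, round(jaccard(x, y), 2))
           for x in source_a for y in source_b
           if jaccard(x, y) >= THRESHOLD]
print(matches)
```

Scaling this scoring from two sources to many, with only a small budget of human match/non-match judgments, is precisely where the transfer-learning project in (1) comes in.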

Professor Bill Blyth
School of Mathematical and Geospatial Sciences, RMIT University
with Dr Asim Ghous
Director, Australian Scientific and Engineering Solutions (Maplesoft)

Research Applications of Maple
We present some details of our unpublished research work, focusing on two problems that illustrate different aspects of Maple, which we use as a universal environment for all teaching, assessment and research.

  1. All Exact Solution(s) of Nonlinear Fredholm Integral Equations, using Maple and an “Experimental Mathematics – Digital Discovery” approach.

  2. Generalization of the Ujevic-Roberts corrected Simpson’s rule.

We use Maple to provide a much simpler derivation of the UR rule and to correct its error term. We also generalize to higher-order rules.
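To give the flavour of a derivative-corrected Simpson rule (this is a classical corrected variant stated for illustration, not necessarily the UR rule itself, and sketched in Python rather than Maple): adding endpoint-derivative terms raises the degree of precision from 3 to 5.

```python
import math

def simpson(f, a, b):
    """Plain Simpson's rule on [a, b] (degree of precision 3)."""
    m = 0.5 * (a + b)
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def corrected_simpson(f, df, a, b):
    """A classical derivative-corrected Simpson rule:
       (b-a)/30 [7 f(a) + 16 f(m) + 7 f(b)] + (b-a)^2/60 [f'(a) - f'(b)],
    which has degree of precision 5."""
    m = 0.5 * (a + b)
    return ((b - a) / 30 * (7 * f(a) + 16 * f(m) + 7 * f(b))
            + (b - a) ** 2 / 60 * (df(a) - df(b)))

# Exact on x^5 over [0, 1], where plain Simpson is not:
exact = 1 / 6
print(abs(simpson(lambda x: x**5, 0, 1) - exact))
print(abs(corrected_simpson(lambda x: x**5, lambda x: 5 * x**4, 0, 1) - exact))

# Markedly more accurate on exp(x) over [0, 1]:
true = math.e - 1
err_s = abs(simpson(math.exp, 0, 1) - true)
err_c = abs(corrected_simpson(math.exp, math.exp, 0, 1) - true)
print(err_s, err_c)
```

Correcting the error term of such a rule, and generalizing to higher-order analogues, is the kind of symbolic bookkeeping that Maple handles well.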

If time permits, we will briefly mention another two consultancy research problems, which we solved following requests for help from Maple users (at no cost to the researchers who requested help).

C1. Find all self intersection points of a given plane curve.

C2. Implement the solution of chemotherapy treatment of cancer which is modelled as a coupled nonlinear system of DEs (with a dynamic carrying capacity).
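To indicate what C2 involves, here is a forward-Euler sketch (in Python rather than Maple) of a logistic tumour with a dynamic, Hahnfeldt-style carrying capacity and a constant-rate chemotherapy kill term. The model form and every parameter value are illustrative placeholders of my own, not the actual consultancy model.

```python
def tumour(treated, t_end=100.0, dt=0.01):
    """Forward-Euler integration of an illustrative tumour model
    with a dynamic carrying capacity:
        dN/dt = r N (1 - N/K) - c u(t) N      (tumour volume)
        dK/dt = b N - d K N^(2/3)             (carrying capacity)
    where u(t) = 1 during treatment (from t = 10) and 0 otherwise.
    All parameter values are invented for illustration."""
    r, b, d, c = 0.3, 1.0, 0.1, 0.15
    N, K, t = 1.0, 10.0, 0.0
    while t < t_end:
        u = 1.0 if (treated and t >= 10.0) else 0.0
        dN = r * N * (1 - N / K) - c * u * N
        dK = b * N - d * K * N ** (2 / 3)
        N = max(N + dt * dN, 1e-9)   # keep state positive
        K = max(K + dt * dK, 1e-9)
        t += dt
    return N

untreated, treated = tumour(False), tumour(True)
print(round(untreated, 1), round(treated, 1))
```

The coupling matters: treatment lowers N, which in turn suppresses the growth of K, so the drug acts on the carrying capacity indirectly as well as on the tumour directly.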
