Dissertation pivot


To align my dissertation with the strategic and tactical needs of my employer, Spektron Systems (which has been incredibly supportive), I must pivot my efforts. It is fortunate that I am working with a company that directly uses machine learning and has readily available problems addressed by my research. I should only need a small pivot that narrows my research to something relevant.

Narrowing my research

The direction of my research has been the relationship between computational creativity and transfer learning. In particular, I was looking at transfer learning as the mechanism for computational creativity. This is a vast problem and unlikely to be useful to Spektron in the short term.

Computational creativity, as a concept, may fit the strategic activities of the company, i.e., new molecular design; however, this may be a matter of application and interpretation. Further, it is an ambitious undertaking that may not fit within the bounds of a dissertation. On the other hand, we can quickly align transfer learning with the tactical needs of Spektron, and I will present the case for this here.

What is Transfer Learning?

Briefly, transfer learning (within the machine learning context) is the use of techniques to transfer knowledge from one learning agent to another. The objective of the transfer is to improve the learning performance of the target agent. Researchers measure this improvement on three metrics: 1) the initial performance on the target task, 2) the time/effort it takes to learn the task thoroughly, and 3) the final performance level.
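The three metrics above are easy to make concrete. A minimal sketch, using two hypothetical learning curves (the accuracy numbers and the 0.75 threshold are illustrative, not real experimental data):

```python
# Two hypothetical learning curves: accuracy per training episode.
baseline = [0.50, 0.60, 0.68, 0.74, 0.78, 0.80]   # learning without transfer
transfer = [0.62, 0.70, 0.75, 0.79, 0.81, 0.82]   # learning with transfer

def episodes_to_reach(curve, threshold):
    """Index of the first episode at or above the threshold accuracy."""
    return next(i for i, acc in enumerate(curve) if acc >= threshold)

# 1) Initial ("jumpstart") performance improvement.
jumpstart = transfer[0] - baseline[0]

# 2) Time/effort to learn: how many fewer episodes to reach a target accuracy.
speedup = episodes_to_reach(baseline, 0.75) - episodes_to_reach(transfer, 0.75)

# 3) Final ("asymptotic") performance improvement.
asymptote = transfer[-1] - baseline[-1]

print(jumpstart, speedup, asymptote)
```

A successful transfer improves at least one of these three numbers without degrading the others; negative transfer shows up as any of them going negative.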

Transfer learning comes in many forms and addresses a wide array of problems. There are applications in the supervised, unsupervised, and reinforcement learning paradigms.

How does this relate to Spektron’s efforts?

An introduction to our operations at Spektron is needed to describe how transfer learning may be a useful tactic. In short, Spektron Systems is a drug-discovery company. Our goal is to create New Chemical Entities (NCEs) that have better efficacy, lower toxicity, and fewer side effects. The key strategy is to use in-silico modeling to guide the development of NCEs towards an Investigational New Drug (IND) application with the FDA, then sell the NCE to a pharmaceutical company.

Several time-intensive and costly milestones exist between the design of a new drug and the IND submission, and traversing them incurs increasingly higher costs as an NCE approaches submission. Our strategy is to build highly predictive in-silico models so that we can advance NCEs with a high probability of success.

Tactically, we use computational models to filter out NCEs before they move forward to the costly activities. We build regression and classification models on the relationship between a compound's structural descriptors and its activity. The idea is to create models that generalize well enough to predict behavior on NCEs that are novel enough to sit slightly outside the domain of the training data.
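The descriptor-to-activity modeling described above can be sketched with scikit-learn. This is a minimal illustration, not our production pipeline: the "descriptors" are random synthetic vectors standing in for real structural descriptors (e.g., fingerprints), and the activity is a toy function of them.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # 200 "compounds", 32 synthetic descriptors
# Toy activity: depends on two descriptors plus noise (a stand-in for assay data).
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A model that generalizes should score well on held-out "compounds".
r2 = model.score(X_test, y_test)
print(round(r2, 3))
```

The held-out score is the proxy for the generalization we care about: whether the model remains predictive on NCEs slightly outside the training domain.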

It seems well established that such modeling is effective when the dependent variables are biological activities. However, we have an interest in modeling therapeutic effects or macro-level endpoints, such as those involved in cognitive and executive functioning.

We have noticed that modeling these relationships yields worse performance than modeling biological activities. Presumably, this is because multiple, nonlinear pathways can lead to the macro-level endpoint, introducing confounding factors that machine learning algorithms must work through.

Further, the number of training examples available for macro-level endpoints tends to be small: fewer than 100, and sometimes far fewer. The idea behind transfer learning is that such a classification task could be learned with better performance by transferring some knowledge from another, well-learned classification task that is related through some underlying conceptual space.
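A minimal sketch of that idea: pre-train a linear classifier on a large, well-learned source task, then fine-tune it on a scarce, related target task. Everything here is a synthetic stand-in (the shared concept vector, the 0.1 perturbation relating the tasks, and the 40-example target set), chosen only to mimic the small-data regime described above.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
w = rng.normal(size=16)                          # shared underlying concept
X_src = rng.normal(size=(2000, 16))
y_src = (X_src @ w > 0).astype(int)              # large, well-learned source task

w_tgt = w + 0.1 * rng.normal(size=16)            # related target concept
X_tgt = rng.normal(size=(40, 16))
y_tgt = (X_tgt @ w_tgt > 0).astype(int)          # only 40 target examples
X_test = rng.normal(size=(1000, 16))
y_test = (X_test @ w_tgt > 0).astype(int)

# With transfer: initialize from the source task, then fine-tune on the target.
clf = SGDClassifier(random_state=0)
clf.partial_fit(X_src, y_src, classes=[0, 1])    # learn the source task well
clf.partial_fit(X_tgt, y_tgt)                    # refine on scarce target data
transfer_acc = clf.score(X_test, y_test)

# Without transfer: learn the target task from scratch on 40 examples.
baseline = SGDClassifier(random_state=0)
baseline.partial_fit(X_tgt, y_tgt, classes=[0, 1])
baseline_acc = baseline.score(X_test, y_test)

print(round(baseline_acc, 3), round(transfer_acc, 3))
```

Because the source and target concepts share an underlying structure, the pre-trained weights give the target learner a strong starting point that 40 examples alone cannot provide.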

Doing this raises some questions and presents several challenges.  These are general to transfer learning scenarios, but may be solvable within this particular application.  These are the areas of opportunities in which I can forward the science.

What are the opportunities?

When we consider transferring knowledge from one or more source tasks to a target task, we face three challenges, all while steering away from negative transfer (when transferring actually hurts performance):

  1. which source tasks to choose for the transfer,
  2. how to map from one task to the other, and
  3. how to execute the transfer.
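One way challenges 1 and 3 might be probed in practice: try each candidate source task, execute the transfer by pooling its down-weighted examples with the target data, and keep a transfer only if it beats the no-transfer baseline on held-out target data. That comparison is a crude guard against negative transfer. The tasks, sizes, and the 0.2 source weight below are illustrative synthetic stand-ins, not a recommended recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_task(direction, n):
    """A synthetic linear classification task along a concept direction."""
    X = rng.normal(size=(n, 8))
    return X, (X @ direction > 0).astype(int)

w = rng.normal(size=8)                       # the target concept
X_tr, y_tr = make_task(w, 40)                # scarce target training data
X_val, y_val = make_task(w, 500)             # held-out target data

candidates = {
    "related": make_task(w + 0.1 * rng.normal(size=8), 1000),
    "unrelated": make_task(rng.normal(size=8), 1000),  # should be rejected
}

def validation_score(X, y, sample_weight=None):
    clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_weight)
    return clf.score(X_val, y_val)

best_source, best_score = "no transfer", validation_score(X_tr, y_tr)
for name, (X_s, y_s) in candidates.items():
    # Execute the transfer: pool source examples, down-weighted, with target data.
    X_pool = np.vstack([X_s, X_tr])
    y_pool = np.concatenate([y_s, y_tr])
    sw = np.concatenate([np.full(len(y_s), 0.2), np.full(len(y_tr), 1.0)])
    score = validation_score(X_pool, y_pool, sw)
    if score > best_score:                   # a harmful transfer never wins here
        best_source, best_score = name, score

print(best_source, round(best_score, 3))
```

Selecting the source by held-out performance is only one answer to challenge 1, and down-weighted pooling is only one answer to challenge 3; part of the research question is whether more principled mappings (challenge 2) can do better.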

These challenge areas will provide the backdrop for my dissertation research.  A novel and useful approach would be beneficial to the field of transfer learning in general.  Any work in this area is also likely to improve the computational modeling efforts at Spektron.

Next Steps

The next objective is to narrow down to a particular problem. Putting current methodology into practice at Spektron should help illuminate where these challenges are most relevant and provide the waypoints for the remainder of this dissertation.

The first step would be to identify some low-hanging-fruit opportunities to apply transfer learning to Spektron's modeling efforts. This would provide not only a testing ground for new ideas but also help shape the particular problem(s) to be addressed.

