Norton, David's Publications (detailed list) - NNML Laboratory - BYU CS Department

THIS PAGE IS NO LONGER MAINTAINED. See our new publications list for a more up-to-date version.


This page contains the titles and abstracts of papers written by David Norton, a member of the BYU Neural Networks and Machine Learning (NNML) Research Group. PDF files are available for most papers. A more concise list is also available.



Improving the Separability of a Reservoir Facilitates Learning Transfer

  • Authors: David Norton and Dan Ventura
  • Abstract: We use a type of reservoir computing called the liquid state machine (LSM) to explore learning transfer. The LSM is a neural network model that uses a reservoir of recurrent spiking neurons as a filter for a readout function. We develop a method of training the reservoir, or liquid, that is not driven by residual error. Instead, the liquid is evaluated on its ability to separate different classes of input into different spatial patterns of neural activity (a sketch of one such separation measure follows this entry). Using this method, we train liquids on two qualitatively different types of artificial problems. The resulting liquids substantially improve performance on either problem regardless of which problem was used to train the liquid, demonstrating a significant level of learning transfer.
  • Reference: Proceedings of the International Joint Conference on Neural Networks, pages 2288–2293, 2009.
  • BibTeX
  • Download the file: pdf
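
As a rough illustration of the separation idea in the abstract above, the following Python sketch scores a set of liquid state vectors by how far apart the class centroids sit relative to each class's internal spread. This is a minimal sketch under stated assumptions: liquid states are summarized as firing-rate vectors, at least two classes are present, and the function name and normalization are illustrative rather than the paper's exact definition.

    import numpy as np

    def separation(states, labels):
        """Score how well a liquid separates input classes.

        states: (n_samples, n_neurons) array of liquid state vectors,
                e.g. firing-rate summaries of the reservoir's response.
        labels: (n_samples,) class label for each sample (>= 2 classes).
        """
        states, labels = np.asarray(states), np.asarray(labels)
        classes = np.unique(labels)
        centroids = np.array([states[labels == c].mean(axis=0) for c in classes])

        # Inter-class term: mean distance between all pairs of class centroids.
        inter = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                         for i in range(len(classes))
                         for j in range(i + 1, len(classes))])

        # Intra-class term: mean distance of samples to their own centroid.
        intra = np.mean([np.linalg.norm(states[labels == c] - centroids[k], axis=1).mean()
                         for k, c in enumerate(classes)])

        # Larger values mean classes occupy more distinct regions of state space.
        return inter / (1.0 + intra)

A liquid could then be trained without residual error by perturbing its synapses and keeping only the changes that raise this score, matching the error-independent evaluation the abstract describes.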

Preparing More Effective Liquid State Machines Using Hebbian Learning

  • Authors: David Norton and Dan Ventura
  • Abstract: In Liquid State Machines, separation is a critical attribute of the liquid, which is traditionally not trained. This paper investigates the effects of using Hebbian learning in the liquid to improve separation (a basic Hebbian update is sketched after this entry). When presented with random input, Hebbian learning does not dramatically change separation. However, Hebbian learning does improve separation when presented with real-world speech data.
  • Reference: Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN’06), pages 8359–8364, 2006.
  • BibTeX
  • Download the file: pdf
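
To make the Hebbian idea in the abstract above concrete, here is a minimal rate-based sketch of a Hebbian update for the liquid's recurrent synapses. The paper works with spiking neurons, so the rate-based form, the parameter names, and the clipping scheme here are illustrative assumptions, not the paper's exact rule.

    import numpy as np

    def hebbian_update(weights, pre_rates, post_rates, eta=0.01, w_max=1.0):
        """One Hebbian step on a liquid's recurrent weight matrix.

        weights:    (n_post, n_pre) synaptic weights.
        pre_rates:  (n_pre,) recent presynaptic firing rates.
        post_rates: (n_post,) recent postsynaptic firing rates.
        """
        # Correlated pre/post activity strengthens the connecting synapse
        # ("neurons that fire together, wire together").
        dw = eta * np.outer(post_rates, pre_rates)
        # Clip so repeated potentiation cannot saturate the reservoir dynamics.
        return np.clip(weights + dw, -w_max, w_max)

Applied between input presentations, an update like this strengthens synapses between co-active neurons, which is one plausible route to the improved separation on structured (speech) input reported above.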
