Evolving DIVA: Applying Evolution to a Model of Classification
Advisor: Dr. Kenneth Kurtz
What is DIVA?
DIVA = DIVergent Autoencoder
• Model of Psychological Classification
• Machine Learning Approach
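Per Kurtz (2007) [1], DIVA is an autoencoder with a shared hidden layer and one decoding channel per category: a stimulus is assigned to whichever channel reconstructs it best. A minimal sketch of that idea (the logistic activation, array shapes, and untrained random weights are illustrative assumptions, not taken from the poster):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def diva_classify(x, w_encode, w_decode_per_category):
    """Classify a stimulus by relative reconstruction error.

    x: input feature vector
    w_encode: weights from input to the shared hidden layer
    w_decode_per_category: one decoding weight matrix per category
    """
    hidden = sigmoid(x @ w_encode)                # shared encoding of the stimulus
    errors = [np.sum((x - hidden @ w_k) ** 2)     # reconstruction error per channel
              for w_k in w_decode_per_category]
    return int(np.argmin(errors))                 # best-reconstructing channel wins

# Tiny demo with random, untrained weights (shapes only; two categories)
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w_enc = rng.normal(size=(4, 3))
w_dec = [rng.normal(size=(3, 4)) for _ in range(2)]
print(diva_classify(x, w_enc, w_dec))  # prints 0 or 1
```

The full model converts these reconstruction errors into response probabilities; argmin is the simplest stand-in for that choice rule.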
Designing an evolutionary process to facilitate classification learning in DIVA.

After generating an initial population of DIVA networks:
1. Evaluate each network's performance on a classification task and assign a fitness value to that network.
2. Replicate networks with higher fitness; remove networks with lower fitness.
3. Create "mutations" in some networks by altering their weights.

Future Directions
1. Learning on real-life data sets.
2. Identifying successful "traits" within the population.
"Categories are represented as coordinated statistical models of the properties of the members" [1]
DIVA learns through a supervised algorithm that optimizes weights via error-driven learning by gradient descent [2]. However, backpropagation is sometimes slow, too computationally demanding with larger nets, and can get stuck in local minima.
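The error-driven gradient-descent learning the poster refers to can be sketched with a single linear layer and a squared-error loss; the learning rate and the toy one-example problem are illustrative assumptions:

```python
import numpy as np

def gradient_descent_step(w, x, target, lr=0.1):
    """One error-driven weight update for a linear layer.

    Prediction: y = x @ w; loss: 0.5 * ||y - target||^2.
    The gradient of the loss with respect to w is outer(x, y - target).
    """
    y = x @ w
    grad = np.outer(x, y - target)
    return w - lr * grad

# Repeated updates drive the error toward zero on a single example.
w = np.zeros((2, 1))
x = np.array([1.0, 2.0])
t = np.array([3.0])
for _ in range(200):
    w = gradient_descent_step(w, x, t)
print(float(x @ w))  # approaches 3.0
```

With a fixed, well-chosen learning rate this converges on a single example; the poster's point is that on larger nets the same procedure can be slow or can stall in a local minimum, which motivates the evolutionary alternative.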
Cognitive Science to Inform Machine Learning Development
Machine learning is a growing field with expanding applications. Psychological models and theories can be an essential tool for this research field, which is predominantly made up of computer scientists.
4. Repeat the evolutionary cycle.
A population of 10 individual networks
was randomly generated. Fitness was
assigned based on the summed
classification probabilities of the correct
category in 8 examples of a four-variable
categorization problem. At each
generation, the five least fit individuals
were removed from the population. The
most fit individual was replicated twice,
and the second and third most fit
individuals were replicated once. Each
replicated network was mutated by adding
random variation to each weight in
its matrix. Additionally, a new random
individual was introduced at each
generation. This was repeated for 10 trials
of 50 generations. The software was
written in Python 3.3 using the NumPy
package for matrix operations.
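The procedure described above can be sketched as follows. The population size, selection rule (drop the five least fit), replication scheme (best twice, second and third once each), per-weight mutation, and the fresh random individual per generation follow the description; the network representation and the fitness function are simplifying stand-ins (a single 4x4 weight matrix and a smooth score scaled so its maximum is 8, matching the summed probability over 8 examples), not the actual DIVA evaluation:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_network():
    # Stand-in for a DIVA network: a single 4x4 weight matrix.
    return rng.normal(size=(4, 4))

def fitness(net):
    # Stand-in for the summed probability of the correct category over
    # 8 examples (so the maximum is 8); here a smooth function of the weights.
    return float(np.sum(1.0 / (1.0 + np.exp(-net)))) / 2.0

def mutate(net, scale=0.1):
    # Random variation added to each weight in the matrix.
    return net + rng.normal(scale=scale, size=net.shape)

population = [random_network() for _ in range(10)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)    # rank by fitness, best first
    survivors = population[:5]                    # remove the five least fit
    offspring = [mutate(survivors[0]), mutate(survivors[0]),  # best replicated twice
                 mutate(survivors[1]), mutate(survivors[2])]  # 2nd and 3rd once each
    population = survivors + offspring + [random_network()]   # plus one newcomer

print(round(fitness(max(population, key=fitness)), 2))
```

Each generation restores the population to 10 individuals (5 survivors + 4 mutated offspring + 1 newcomer), matching the described cycle.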
[Figure: Stimulus and categories used]

Upon initialization of the population, summed fitness ranged from 3.8-4.2. In most trials, the summed fitness of the most fit individual rose to 5.2 by the twentieth generation, and within a few generations had spread to the whole population. However, in no trial did this statistic rise above 5.2. I believe that this is because random mutations stopped improving network performance. Additionally, it was found that the randomly introduced individuals at each generation sometimes had high relative fitness and consequently played a large part in overall population fitness.
3. "Sexual reproduction": crossover between networks.
4. Integrating individual learning with population learning: backpropagation for local search and evolution for global search.
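"Sexual reproduction" in this setting would mean combining the weights of two parent networks. One simple scheme is uniform crossover; the per-weight coin flip below is an assumed design choice, since the poster lists this only as future work:

```python
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Uniform crossover: each weight is taken from one parent at random."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

rng = np.random.default_rng(0)
a = np.zeros((4, 4))   # parent A's weight matrix
b = np.ones((4, 4))    # parent B's weight matrix
child = crossover(a, b, rng)
# Every weight in the child comes from one of the two parents.
```

Other schemes (single-point crossover over flattened weights, or swapping whole layers) would slot into the same evolutionary loop in place of, or alongside, mutation.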
Professor Ken Kurtz
Doctoral Candidate Nolan Conaway
The EvoS Program at Binghamton
The University of California Irvine Machine Learning Repository
[1] Kurtz, K. J. (2007). The divergent autoencoder (DIVA) model of category learning. Psychonomic Bulletin & Review, 14(4), 560-567.
[2] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536.