Huang, Yunsong and Jenkins, B. Keith
(2004)
*Fast and Biologically Plausible Computation with Perturbed Gaussian Markov Processes.*
In: 11th Joint Symposium on Neural Computation, May 15 2004, University of Southern California.
(Unpublished)
https://resolver.caltech.edu/CaltechJSNC:2004.poster009

Full text not available from this repository.


## Abstract

In recent years a wide range of statistical models has been applied to vision-related problems and has enjoyed much success. In a generative probabilistic model, the probability distributions of the observed images, together with hidden variables describing the images, are formulated (in a top-down fashion); visual perception and learning can then be understood as an inference (bottom-up) operation that computes the posterior probabilities over the hidden variables, based on which model selection and parameter tuning, for example, can be carried out. A 'good' model requires a realistic probabilistic formulation that closely matches the statistics of the input data, and requires that the computation resulting from such a formulation be tractable, and hopefully also biologically plausible. These two requirements are not trivial. In factor analysis, for example, the observed image is expressed as a linear superposition of many basis functions. While the generation or synthesis of the image is immediate, the inference operation would typically require iterations if a non-Gaussian prior is assumed or if direct matrix inversion is not allowed. If, on the other hand, the image is simply projected onto a set of filters, e.g., Gabor functions, then the probabilistic formulation is confounded; that is, it is not immediately clear how confidently a certain filter's response can be interpreted as the detection of a feature, e.g., an edge.

In this talk, we present a generative probabilistic model that consists of a mixture of perturbed 2D Gaussian Markov processes. (Because of this mixing, the resulting model is non-Gaussian.) In each Gaussian Markov process, adjacent hidden nodes on a 2D grid are coupled by a bond energy that resembles the energy prescribed in the "plate" model. This bond energy, however, can be subject to perturbation: specifically, a 'bond' can be 'broken' or weakened.
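The bond perturbation described above acts on the precision (inverse covariance) matrix directly. The following is a minimal illustrative sketch, not the authors' code: it uses a 1D chain rather than a 2D grid, and the bond strength and noise values are arbitrary. It shows how weakening one bond edits the precision matrix, and how fully breaking a bond decouples the nodes on either side.

```python
import numpy as np

def chain_precision(n, w=1.0, noise=0.1):
    """Precision matrix J of a 1D chain Gaussian Markov process.

    Each bond contributes an energy w*(x_i - x_{i+1})^2, so J is
    noise*I plus w times the graph Laplacian of the chain.
    """
    J = noise * np.eye(n)
    for i in range(n - 1):
        J[i, i] += w
        J[i + 1, i + 1] += w
        J[i, i + 1] -= w
        J[i + 1, i] -= w
    return J

def weaken_bond(J, i, w_old, w_new):
    """Return a copy of J with the bond between nodes i and i+1 set to w_new."""
    d = w_new - w_old
    J = J.copy()
    J[i, i] += d
    J[i + 1, i + 1] += d
    J[i, i + 1] -= d
    J[i + 1, i] -= d
    return J

J = chain_precision(6, w=1.0, noise=0.1)
Jb = weaken_bond(J, 2, w_old=1.0, w_new=0.0)  # fully 'break' the bond 2-3
C, Cb = np.linalg.inv(J), np.linalg.inv(Jb)
# Breaking the bond makes the precision block-diagonal, so the two halves
# of the chain become statistically independent: the cross-covariance
# between nodes 0 and 5 vanishes, while it is positive in the intact chain.
print(abs(C[0, 5]), abs(Cb[0, 5]))
```

Note the contrast with factor analysis drawn in the abstract: adding or removing a basis function is a rank-one edit to the covariance matrix, whereas a bond perturbation is a sparse, local edit to its inverse.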
This is a manipulation of the inverse of the covariance matrix of the Gaussian process, rather than a constant addition to or subtraction from the covariance matrix itself, as when adding or removing a basis in factor analysis. We show that inferring the posterior probability of such a perturbation amounts to the following computation: the input image is projected onto several receptive fields, and their outputs then pass through a quadratic nonlinearity, have a threshold (controlled by the prior) subtracted, and finally pass through a sigmoid function. Low-level features such as edges and bars of different scales and orientations can be obtained by suitable perturbations. The outputs of those feature detectors therefore correspond to the data likelihoods given the components of our mixture model. We demonstrate how different features interact with each other: specifically, lateral inhibition and collinear facilitation. We also show that a contour can 'gate', or modify, the extent of other feature detectors in its vicinity. Note that these phenomena follow directly from our probabilistic formulation; no heuristics are involved. When we move beyond individual feature detectors and try to infer the posterior probability of contours, we encounter a computation involving matrix inversion. We then show that there exists a family of effective preconditioners for different configurations of contours. In fact, those preconditioners are so good that the matrix inversion can be obtained in a single step! The posterior mean and covariance of the hidden nodes can therefore be obtained easily (in negligible time on a PC). In contrast, algorithms such as anisotropic diffusion or Graduated Non-Convexity would typically need many iterations of lateral propagation of information.
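The feature-detector computation described above can be sketched in a few lines. This is a hedged, single-filter toy (the filter, gain, and threshold values below are illustrative, not from the poster): project the image onto a receptive field, apply a quadratic nonlinearity, subtract a prior-controlled threshold, and pass the result through a sigmoid to obtain the posterior probability that the corresponding perturbation is present.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def perturbation_posterior(image, receptive_field, gain=1.0, threshold=2.0):
    """Posterior probability of a perturbation (toy single-filter version).

    Pipeline: projection -> quadratic nonlinearity -> subtract a
    threshold controlled by the prior -> sigmoid.
    """
    r = np.sum(image * receptive_field)      # projection onto the filter
    return sigmoid(gain * r**2 - threshold)  # quadratic, threshold, sigmoid

rf = np.array([[-1.0, 1.0],                  # toy vertical-edge filter
               [-1.0, 1.0]])
edge = np.array([[0.0, 1.0],                 # image containing that edge
                 [0.0, 1.0]])
flat = np.zeros((2, 2))                      # featureless image

p_edge = perturbation_posterior(edge, rf)
p_flat = perturbation_posterior(flat, rf)
# The edge input drives the detector well above the featureless input.
print(p_edge, p_flat)
```

Because the output is a calibrated probability rather than a raw filter response, it can serve directly as the data likelihood of a mixture component, which is what allows the lateral-interaction effects mentioned above to emerge from the probabilistic formulation rather than from heuristics.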
In summary, apart from adapting a few parameters (e.g., the noise level), the inference in our model can be carried out by predominantly feedforward, fan-in/fan-out computation, and appears biologically plausible.

| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Poster) |
| Additional Information | Poster will be added |
| Record Number | CaltechJSNC:2004.poster009 |
| Persistent URL | https://resolver.caltech.edu/CaltechJSNC:2004.poster009 |
| Usage Policy | You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format |
| ID Code | 9 |
| Collection | CaltechCONF |
| Deposited By | Imported from CaltechJSNC |
| Deposited On | 07 Jun 2004 |
| Last Modified | 03 Oct 2019 22:49 |
