Adding an Extra Constraint To Help Recover Articulator Positions

Michelle Heard, Department of Mathematics and Computer Science, Grambling, LA 71245

Abstract: Speech is the primary means of communication among humans, and with today's technology it may be possible to recover physical information about a speaker through computer analysis of the speech signal alone. The goal of this project is to recover the positions of the lips, jaw, tongue, and other articulators from speech sounds alone. Several algorithms were used to develop this procedure: the speech signals are first vector quantized, and multidimensional scaling is then applied to represent the quantized codes in a topographic map. The basic framework was developed by the group of Dr. John Hogan at Los Alamos National Laboratory. An extra constraint is being developed by implementing a program that smooths the estimated positions on the topographic map and produces a codebook of those positions. Multiple regression is used to determine the correlation between the estimated and the recovered positions. This work is in progress.

Key words: speech, articulators, topographic map, correlation, multiple regression

Note: The above work was completed as part of the co-op Science and Engineering Research Semester at Los Alamos National Laboratory. The author is an undergraduate student at GSU and is presently supported by the Office of Naval Research.
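The pipeline described above, quantizing speech frames into a codebook and then regressing articulator positions on the quantized codes, can be illustrated with a minimal sketch. This is not the authors' implementation: the data below is synthetic, the k-means quantizer and the number of codes are assumptions, and real speech features and measured articulator positions would replace the random arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for speech feature frames: 500 frames x 8 dimensions.
frames = rng.normal(size=(500, 8))

def vector_quantize(data, k=16, iters=20):
    """Build a simple k-means codebook; return it with a code index per frame."""
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each frame to its nearest codebook vector.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        codes = dists.argmin(axis=1)
        # Move each codebook vector to the mean of its assigned frames.
        for j in range(k):
            members = data[codes == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, codes

codebook, codes = vector_quantize(frames)

# Synthetic articulator positions (3 coordinates) linearly related to the
# frames plus noise; in the real project these would be measured positions.
true_weights = rng.normal(size=(8, 3))
positions = frames @ true_weights + 0.1 * rng.normal(size=(500, 3))

# Multiple regression: predict positions from the quantized frames
# (each frame replaced by its codebook vector), with an intercept column.
quantized = codebook[codes]
X = np.column_stack([quantized, np.ones(len(quantized))])
coef, *_ = np.linalg.lstsq(X, positions, rcond=None)
predicted = X @ coef

# Report the correlation between estimated and recovered positions.
for d in range(positions.shape[1]):
    r = np.corrcoef(positions[:, d], predicted[:, d])[0, 1]
    print(f"articulator coordinate {d}: r = {r:.2f}")
```

Quantization discards within-code detail, so the correlations reflect how much articulatory information survives the codebook; smoothing the mapped positions, as the abstract proposes, is intended to improve that recovery.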