Proceedings available: http://www.aaai.org/Press/Reports/Conferences/cf-00-01.php

An Implementation of a Parallel Machine Learner

Lee A. Adams, Kevin M. Livingston, and Jennifer Seitzer

We present an implementation of speculative parallelism in the realm of deductive and inductive reasoning systems. Machine learning, a form of induction, can be thought of as a search for a generalizing rule that summarizes a collection of data. In this work, we approach the search for such a rule by traversing the search space from ten different starting points at once: ten ranking algorithms are executed simultaneously on a distributed architecture. Additionally, we present a data parallel design in which each learning algorithm is itself distributed among many processors. We exemplify these concepts of speculative and data parallelism by transforming a sequential knowledge-based deduction and induction system, INDED, into a speculatively parallel system in which several processors simultaneously search for an accurate rule in a supervised learning environment. Moreover, the fundamental operation of each learning algorithm, ranking, is parallelized across the cluster nodes. We present algorithms for work delegation as well as for the final rule assessment used to select the superior learned rule.
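The speculative strategy summarized above amounts to launching several independent rule searches from different starting points and keeping only the best-scoring result. The following is a minimal sketch of that idea under our own assumptions; the function names (rank_and_learn), the stand-in scoring, and the use of a Python process pool are illustrative and are not INDED's actual interface or ranking algorithm.

    # Sketch of speculative parallelism for rule learning (hypothetical names;
    # not INDED's actual interface). Each worker runs the same search from a
    # different starting point; the master keeps the superior learned rule.
    from multiprocessing import Pool
    import random

    def rank_and_learn(seed):
        """Hypothetical learner: search for a rule from one starting point."""
        rng = random.Random(seed)
        best_rule, best_score = None, float("-inf")
        # Stand-in for a ranking-based search over candidate rule bodies.
        for _ in range(1000):
            rule = tuple(sorted(rng.sample(range(20), 3)))   # candidate rule body
            score = -sum(rule) + rng.random()                # stand-in accuracy score
            if score > best_score:
                best_rule, best_score = rule, score
        return best_rule, best_score

    if __name__ == "__main__":
        starting_points = range(10)          # ten speculative searches, as in the paper
        with Pool(processes=10) as pool:
            results = pool.map(rank_and_learn, starting_points)
        # Final rule assessment: select the best rule among the speculative results.
        best_rule, best_score = max(results, key=lambda r: r[1])
        print("selected rule:", best_rule, "score:", round(best_score, 3))

In this sketch the selection step plays the role of the final rule assessment: each speculative search reports its best rule and score, and the coordinating process retains the highest-scoring one.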