Parallel polynomial operations on SMPs: an overview
PS Wang - Journal of Symbolic Computation, 1996 - researchgate.net
Research at Kent (Wang 1991) and elsewhere (Dora and Fitch 1988, Zippel 1990) applies parallelism to key symbolic computation algorithms for higher performance and implements software to take advantage of advances in parallel computers. One particular focus at Kent is parallel polynomial factoring and GCD computations. In this area, the research at Kent has been conducted mainly on symmetric multiprocessors (SMP), where all processing elements (pe) access a global shared memory in a symmetric fashion. Early implementations and performance measurements were carried out on a 12-pe Encore Multimax and a 26-pe Sequent Balance.

The work paves the way for constructing a parallel computer algebra system kernel that takes real advantage of the multiprocessor workstations becoming increasingly available to scientists and engineers. Among the many competing parallel architectures, the shared memory model seems to have the best chance of becoming commonly adopted and widely available. Newer parallel machines offer faster CPUs, quicker memory access, and better scalability (the ability to increase the number of pe's) by supporting a distributed global store at the hardware level.
Investigations conducted at Kent include polynomial factoring modulo small primes, univariate p-adic lifting, detection of true factors, reformulation of the lift basis, multivariate p-adic lifting, and sparse multivariate GCD. Highlights of this work are reviewed here. Parallel procedures take an input parameter, np, the number of parallel tasks to be used. It is of course important to fully utilize the specified number of tasks in the parallel procedures; thus, load balancing is an important consideration in these parallel algorithms.
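The paper does not include code here, but the np-parameter pattern it describes can be sketched in a few lines of Python. The sketch below is purely illustrative (not Wang's implementation): np workers pull jobs from a shared queue, so an idle worker immediately takes the next pending job and the load stays balanced dynamically. The `content` helper (integer content of a polynomial) is a deliberately lightweight stand-in for the heavier factoring and GCD tasks discussed in the paper.

```python
# Minimal sketch of running independent polynomial jobs on np tasks.
# All names (content, parallel_contents, np_tasks) are illustrative.
from concurrent.futures import ThreadPoolExecutor
from math import gcd  # integer gcd stands in for polynomial GCD work

def content(coeffs):
    """Integer content of a polynomial: gcd of its coefficients.
    A cheap stand-in for a real factoring/GCD subtask."""
    result = 0
    for c in coeffs:
        result = gcd(result, c)
    return result

def parallel_contents(polys, np_tasks):
    """Process each polynomial using np_tasks parallel workers.
    The executor's internal work queue gives dynamic load balancing:
    whichever worker finishes first picks up the next polynomial."""
    with ThreadPoolExecutor(max_workers=np_tasks) as pool:
        return list(pool.map(content, polys))

polys = [[6, 12, 18], [10, 25], [7, 14, 21, 28]]
print(parallel_contents(polys, np_tasks=2))  # -> [6, 5, 7]
```

On a shared-memory SMP this queue-based scheme matches the paper's setting: all tasks read the same global data, and the only coordination needed is the synchronized job queue.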