BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook MIMEDIR//EN
VERSION:1.0
BEGIN:VEVENT
DTSTART:20121115T233000Z
DTEND:20121116T000000Z
LOCATION:355-D
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:ABSTRACT: Direct methods for solving sparse linear systems are robust and typically=0Aexhibit good performance, but often require large amounts of memory=0Adue to fill-in. Many industrial applications use out-of-core techniques=0Ato mitigate this problem. However, parallelizing sparse out-of-core=0Asolvers poses unique challenges because accessing secondary storage=0Aintroduces serialization and I/O overhead. We analyze the data-movement=0Acosts and memory versus parallelism trade-offs in a shared-memory=0Aparallel out-of-core linear solver for sparse symmetric systems. We=0Apropose an algorithm that uses a novel memory management scheme and=0Aadaptive task parallelism to reduce the data-movement=0Acosts. We present experiments to show that our solver is faster=0Athan existing out-of-core sparse solvers on a single core, and is=0Amore scalable than the only other known shared-memory parallel out-of-core solver. This work is also directly applicable at the node level in=0Aa distributed-memory parallel scenario.
SUMMARY:Managing Data-Movement for Effective Shared-Memory Parallelization of Out-of-Core Sparse Solvers
PRIORITY:3
END:VEVENT
END:VCALENDAR