Arizona State University, the Ira A. Fulton Schools of Engineering, and the Intel Academic Community are working together to integrate parallel computing into the computer science and engineering, mathematics, and statistical sciences programs and the High Performance Computing Initiative at the undergraduate and master’s degree levels.
Servers, PCs, and handheld devices are moving from faster and faster processors toward more and more cores. Developers are challenged to program these devices to take advantage of latent processing power. To utilize multiple processors, developers must delve into the world of parallel computation.
As multicore processors become more common, industry demand has grown for technologists who can create advanced software that leverages parallelism to take advantage of the available processing power. This increased applicability of parallelism necessitates an update of undergraduate and master’s degree curricula in computer science, computer engineering, and electrical engineering.
Intel’s Academic Program team members, Rowena Turner, Amit Jindal and Lauren Dankiewicz, together with ASU Professors Yann-Hang Lee, Yinong Chen, Partha Dasgupta, Eric Kostelich, Violet Syrotiuk, Gil Speyer, Aviral Shrivastava, and Alex Mahalov, have worked together to target nine courses at both the undergraduate and master’s levels where parallelism will be taught. These courses, from the freshman level to the senior level, will give students better understanding, experience, and sophistication in writing and debugging multi-threaded code.
In the introductory courses, gaming and multimedia applications will be used to teach the basic concepts of parallelism (data and task decomposition, scheduling techniques, and software architecture) at various granularities to audiences ranging from the freshman to the senior level, while retaining the emphasis on laboratory instruction. For example, in the Operating Systems class, students discuss the mechanisms supporting parallel and multi-threaded execution, and material on Intel’s Math Kernel Library and the Intel Integrated Performance Primitives is included. Additional course modules teach approaches for efficient parallelism in high-performance computing and in server applications. Brief descriptions of the courses that teach parallel computation are listed below.
CSE 101: Introduction to Computer Science and Engineering
CSE 101 consists of a sequence of modules that introduce the discipline of computer science and engineering. To illustrate multi-core architecture and parallel computing, a demo called “Horsepower” will be shown to students, in which the performance of a multi-threaded ambient animation process improves when run on a multi-core CPU.
A graphical multimedia and gaming application called “Destroy the Castle” will be used to show a specified number of bricks falling and insects crawling at a fixed frame rate with a specified number of threads. The CPU utilization history can also show the balanced loads across the multiple cores when the application is parallelized.
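The demos themselves are not reproduced here, but the data decomposition they illustrate, dividing the objects in a scene evenly among worker threads so that each core carries a balanced share of the load, can be sketched in a few lines of Python (the function names below are illustrative, not part of the actual demo):

```python
import threading

def update_bricks(bricks, results, tid):
    # Each thread advances its own partition of falling bricks by one frame.
    results[tid] = [y + 1 for y in bricks]  # move each brick down one unit

def simulate_frame(brick_heights, num_threads):
    """Split the bricks evenly across worker threads (data decomposition)."""
    chunk = (len(brick_heights) + num_threads - 1) // num_threads
    parts = [brick_heights[i * chunk:(i + 1) * chunk] for i in range(num_threads)]
    results = [None] * num_threads
    threads = [threading.Thread(target=update_bricks, args=(p, results, t))
               for t, p in enumerate(parts)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    # Reassemble the partitions in order so the frame is unchanged by threading.
    return [y for part in results for y in part]
```

Because each thread touches only its own slice of the data, no locking is needed, which is exactly the property that lets the demo scale across cores.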
CSE 420: Computer Architecture I
CSE 420 discusses performance-versus-cost tradeoffs in the design of computer systems. Students learn the basic concepts of processor design with quantitative comparisons. The course starts with an understanding of an instruction set architecture; students then design a basic, single-cycle implementation of the processor, pipeline it, and add a memory hierarchy. The course also covers instruction-level parallelism and its exploitation, as well as multi-threaded and multi-core processors.
There will be a strong multi-core architecture and programming component in the course. A survey of multi-core architectures and basic techniques for parallelization will be covered. An introduction to the basic paradigms of parallel programming, including task-level parallelism and data-level parallelism, will be discussed.
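The distinction between the two paradigms can be made concrete with a small sketch (this is an illustration in Python’s standard library, not course material): data-level parallelism applies the same operation to disjoint chunks of data, while task-level parallelism runs different, independent tasks concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-level parallelism: the same operation on disjoint chunks of data.
def square_chunk(chunk):
    return [x * x for x in chunk]

# Task-level parallelism: different, independent tasks run concurrently.
def task_sum(data):
    return sum(data)

def task_maximum(data):
    return max(data)

def demo(data, workers=2):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Data parallelism: split the list in half, square each half concurrently.
        mid = len(data) // 2
        halves = pool.map(square_chunk, [data[:mid], data[mid:]])
        squared = [x for half in halves for x in half]
        # Task parallelism: run two different reductions at the same time.
        s = pool.submit(task_sum, data)
        m = pool.submit(task_maximum, data)
        return squared, s.result(), m.result()
```

For example, `demo([1, 2, 3, 4])` squares the two halves in parallel and simultaneously computes the sum and maximum of the original list.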
CSE 430: Operating Systems
CSE 430 gives an overview of operating system structures and services. The students will learn processor scheduling, concurrent processes, synchronization techniques, memory management, virtual memory, input/output, storage management, and file systems.
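The synchronization techniques the course covers can be illustrated with the classic bounded-buffer (producer/consumer) problem, here sketched in Python with a lock and condition variables (an illustrative example, not the course’s own code):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Producer/consumer buffer guarded by a lock and two condition variables,
    a standard synchronization exercise in an operating systems course."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:
            while len(self.items) >= self.capacity:  # wait while buffer is full
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()                  # wake a waiting consumer

    def get(self):
        with self.not_empty:
            while not self.items:                    # wait while buffer is empty
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()                   # wake a waiting producer
            return item
```

The `while` loops (rather than `if`) guard against spurious wakeups, a detail that distinguishes correct condition-variable code from code that merely appears to work.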
CSE 445/598: Distributed Software Development
Both CSE 445 and CSE 598 cover system architectures and design, service-oriented computing, and frameworks for development of distributed applications and software components.
In service-oriented distributed systems, server applications may be invoked by multiple clients simultaneously. Such applications can utilize multi-core architecture and threading techniques to improve service response time and to reduce resource contention. Intel’s Threading Building Blocks (TBB) library is discussed in the class as a way to better utilize multiple cores. TBB provides straightforward ways of enhancing performance, such as turning synchronous calls into asynchronous calls and converting large methods (threads) into smaller ones. A multithreaded program is used to validate the Collatz conjecture and to evaluate its performance in terms of execution time, speedup, and efficiency. Students write complete parallel applications and services and test them on a cloud server. The syllabus and other information are available at: CSE 445/598 course material.
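The course’s actual program is not reproduced here, but the exercise can be sketched in Python: split the range of integers to check across worker threads, time the run, and report whether every number reached 1. (Note that CPython’s global interpreter lock limits the speedup a pure-Python version can achieve, so treat the timing as illustrative of the methodology, not of real multicore gains.)

```python
import threading
import time

def reaches_one(n):
    # Iterate n -> n/2 (even) or 3n+1 (odd); the conjecture says we reach 1.
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

def worker(lo, hi, results, tid):
    # Each thread verifies its own sub-range.
    results[tid] = all(reaches_one(n) for n in range(lo, hi))

def validate(limit, num_threads):
    """Check the Collatz conjecture for 1..limit, split across threads."""
    bounds = [1 + i * limit // num_threads for i in range(num_threads + 1)]
    results = [False] * num_threads
    threads = [threading.Thread(target=worker,
                                args=(bounds[i], bounds[i + 1], results, i))
               for i in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    # Speedup = T(1 thread) / T(p threads); efficiency = speedup / p.
    return all(results), elapsed
```

Timing the same `validate` call with 1, 2, and 4 threads gives the execution-time, speedup, and efficiency numbers the assignment asks students to report.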
MAT 420: Scientific Computing
The emphasis of this course is on languages and software design for numerical computing in a Linux/POSIX environment. Topics include: the POSIX (bash, Korn) shell; Unix/Linux command-line utilities (grep, sort); an introduction to a scripting language (e.g., Awk, Python, Perl); make; IEEE floating point; Fortran 95; C++ (with emphasis on the Standard Template Library); and a survey of parallel programming paradigms (vectorization, OpenMP, MPI, Co-array Fortran). Programming assignments emphasize use of libraries like LAPACK and the Intel MKL. Students also do a project, which may highlight the use of a parallel programming methodology or the application of a numerical library (e.g., an ODE solver) for a simulation of a mathematical model.
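One of the listed topics, IEEE floating point, lends itself to a short self-contained exercise of the kind such a course might assign (this sketch is illustrative, not from the syllabus): compute machine epsilon by repeated halving and confirm it matches the IEEE 754 double-precision value.

```python
import sys

# IEEE 754 double precision has a 52-bit fraction, so machine epsilon
# (the gap between 1.0 and the next representable double) is 2**-52.
eps = 1.0
while 1.0 + eps / 2 > 1.0:   # halve until adding eps/2 no longer changes 1.0
    eps /= 2

print(eps == 2.0 ** -52)           # True
print(eps == sys.float_info.epsilon)  # True: matches the platform's epsilon
```

The same loop written in Fortran 95 or C++ gives identical results, which is the point: IEEE arithmetic behaves the same across the languages the course surveys.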
CSE 494/598 and ME 494/598: Introduction to Parallel Computing
These courses aim to introduce students to the fundamentals of parallel computing and to important techniques and practices in parallel programming. Various models and applications will be discussed. In addition, a sampling of current topics in high performance computing will be covered.
By the end of these courses, students will have skills in OpenMP, MPI, and parallel I/O, with some exposure to GPGPU programming through CUDA. Students will develop codes implementing various paradigms and learn how to debug, benchmark, and tune them on a parallel system using software tools. Finally, students will become familiar with the latest approaches and strategies used in research and industry through a research project, guest lectures, and application-specific discussions.
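The single-program, multiple-data decomposition that underlies MPI codes can be previewed in plain Python with separate processes, each computing a partial result over its own sub-range, with the partial results then combined (an illustrative sketch, not course material; real MPI would use ranks and `MPI_Reduce`):

```python
from multiprocessing import Process, Queue

def partial_sum(lo, hi, out):
    # Each worker computes the sum of squares over its own sub-range,
    # the same decomposition an MPI rank would own.
    out.put(sum(i * i for i in range(lo, hi)))

def parallel_sum_squares(n, workers):
    """Sum of squares 0..n-1, decomposed across separate processes."""
    step = (n + workers - 1) // workers
    out = Queue()
    procs = [Process(target=partial_sum, args=(lo, min(lo + step, n), out))
             for lo in range(0, n, step)]
    for p in procs:
        p.start()
    total = sum(out.get() for _ in procs)  # combine partial results
    for p in procs:
        p.join()
    return total
```

Unlike the threading examples above, separate processes share no memory, so all communication goes through explicit messages, which is the programming model MPI makes systematic.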
CSE 531: Distributed and Multiprocessor Operating Systems
CSE 531 is a graduate-level course that covers advanced topics in operating systems. The course covers the concepts, policies, and mechanisms used in the construction of operating systems that handle multiple processors, from closely coupled systems (multicores) to loosely coupled systems (distributed systems).
This course is a comprehensive overview of the state of the technology in constructing multiprocessor and distributed operating systems. It includes material on threading, concurrent programming and parallelism. Multicore systems include symmetric (mainly Intel CMP) and asymmetric multiprocessing as well as the use of virtual processors for parallel applications such as in MPI. Distributed systems include message passing, RPC and DSM systems. In addition, the course covers distributed systems theory that lays the foundations of time, state and agreement in distributed systems.
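The message-passing and RPC styles the course examines can be sketched in miniature with two threads exchanging messages over queues, a stand-in for the sockets or RPC runtime a real system would use (the names and protocol here are illustrative only):

```python
import queue
import threading

def rpc_server(requests, replies):
    """Toy RPC-style server: receives (op, args) messages, sends back results."""
    while True:
        msg = requests.get()
        if msg is None:          # shutdown sentinel
            break
        op, args = msg
        if op == "add":
            replies.put(sum(args))

requests, replies = queue.Queue(), queue.Queue()
server = threading.Thread(target=rpc_server, args=(requests, replies))
server.start()

requests.put(("add", (2, 3)))    # client sends a request message...
result = replies.get()           # ...and blocks until the reply arrives

requests.put(None)               # tell the server to shut down
server.join()
```

The blocking `replies.get()` is what makes this an RPC-like call: the client's send and receive are paired so the remote operation looks like a local function call.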
CSE 591: Low-Power Computing
CSE 591 will begin with a short tutorial on computer architecture, followed by power estimation and modeling at various levels of the design hierarchy. Students will learn the importance of temperature effects, as well as circuit-level, microarchitecture-level, software-level, system-level, and hybrid techniques of power optimization. The course will delve into modern architectures (current and future multi-core architectures) and techniques for addressing the power challenge. CMP architectures (e.g., Intel), distributed memory architectures (e.g., IBM Cell), GPU architectures (e.g., Nvidia Fermi), and coarse-grained reconfigurable arrays (CGRAs) will be covered throughout the semester. In addition, hardware and software aspects of these architectures and software development tools will be discussed.
Yinong Chen received his Ph.D. from the University of Karlsruhe (TH), Germany, in 1993. He did postdoctoral research at Karlsruhe and at LAAS-CNRS in France. From 1994 to 2000, he had teaching appointments with the School of Computer Science at the University of the Witwatersrand, Johannesburg. Chen joined Computer Science and Engineering at Arizona State University in 2001. He received teacher of the year awards in 2008 and 2009 from the School of Computing and Informatics. Chen’s research areas are in software engineering, distributed computing, robotics, and dependable computing. He has authored or co-authored four textbooks and more than 120 research papers. He is on the editorial boards of several journals, including Journal of Systems and Software and Simulation Modeling Practice and Theory.
Professor Chen will be teaching CSE 101, Introduction to Computer Science; CSE 445, Distributed Systems Development, for undergraduate students; and CSE 598, Distributed Systems Development, for graduate students in spring 2011.
Partha Dasgupta joined ASU in 1991. Prior to ASU, he had teaching appointments at Georgia Tech and New York University. He received his Ph.D. in Computer Science from Stony Brook University. Dasgupta’s core areas of expertise are in operating systems, distributed computing, and computer security. He has been involved with concurrent and parallel programming research and teaching for most of his career. He has significant prior research results and publications in the construction of distributed operating systems, high performance systems, and secure computing infrastructures. He has 20 years of experience with operating systems and 8 years of experience with security systems. He is an accomplished teacher and researcher of topics in computer security and distributed computing.
Professor Dasgupta will be teaching CSE 430, Operating Systems, and CSE 531, Distributed and Multiprocessor Operating Systems, in spring 2011.
Eric Kostelich is a professor in the School of Mathematical and Statistical Sciences. His research interests are in applications of nonlinear dynamical systems, data assimilation, and high performance computing. He directs the NSF-funded program, Computational Science Training for Undergraduates in the Mathematical Sciences, which provides extended research experiences for a dozen students each year. In 2008, he received a Special Recognition award for excellence in teaching from the ASU Parents Association.
Professor Kostelich will be teaching MAT 420, Scientific Computing, in fall 2011.
Aviral Shrivastava is an assistant professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. He established and heads the Compiler Micro-architecture Lab at ASU. He received his Master’s and Ph.D. degrees in Computer Systems Engineering from the University of California, Irvine. His bachelor’s degree is in Computer Science and Engineering from the Indian Institute of Technology, Delhi. Shrivastava’s interests are in compilers, computer architecture, and VLSI CAD, with a particular focus on embedded multi-core architectures and compilers (e.g., the Cell processor in the Sony PlayStation 3) and GPU computing. He has developed several compiler and micro-architectural techniques to optimize the power, performance, code size, and reliability of embedded systems.
Professor Shrivastava will be teaching CSE 420, Computer Architecture I, and CSE 591, Low-Power Computing, in spring 2011.
Violet Syrotiuk is an associate professor of Computer Science and Engineering at Arizona State University. Her research interests include autonomous adaptation of protocols in networks including ad hoc and cognitive radio networks. She serves on the editorial boards of three journals (Computer Networks, Computer Communications, and the International Journal of Communications Systems) and serves/served on the Technical Program Committee of several leading conferences. Syrotiuk’s research has been supported by grants from NSF, ONR, LANL, DSTO (Australia), and contracts with ATC, General Dynamics, and Raytheon.
Professor Syrotiuk will teach CSE 430, Operating Systems, in spring 2011.
Gil Speyer received his B.S. in electrical engineering from MIT. He worked on programmable logic chips at Xilinx, Inc. in San Jose, CA. Speyer earned his M.S. and Ph.D. in electrical engineering at ASU. He works for the High Performance Computing Initiative at ASU. He is a researcher on quantum transport, developing parallel programs with various research groups while teaching courses and workshops in parallel computing.
Speyer will teach CSE 494, Introduction to Parallel Programming, and ME 598 to Engineering students in spring 2011.
Alex Mahalov earned a Ph.D. in applied mathematics from Cornell University in 1991. After a postdoc in the Department of Mechanical Engineering at UC Berkeley, he joined Arizona State University, where he was promoted to full professor. Currently the Wilhoit Foundation Dean’s Distinguished Professor, Mahalov holds a joint appointment in the School of Mathematical and Statistical Sciences and the Center for Environmental Fluid Dynamics, School of Mechanical, Aerospace, Chemical and Materials Engineering. He has over one hundred peer-reviewed publications. Most recently, his research efforts have focused on problems at the interface of scientific computing, computational fluid dynamics, and high performance computing. Mahalov’s research has been supported by grants from NSF, AFOSR, NASA, LANL, and contracts from industry.
Professor Mahalov will be teaching MAT 420, Scientific Computing, in fall 2011 and fall 2012.
Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.