The following tools and software are used specifically by Doctorate in Computational Sciences (CSDR) students at Harrisburg University. At any point you can return to the Overview - Supported Software and Tools page.



Each entry below lists the tool, a brief description, and where to get support.
Parallel Computing and Hardware-Optimized AI

CUDA (Compute Unified Device Architecture)
CUDA is NVIDIA's parallel computing platform for GPU programming. It is used for neural network acceleration, HPC workloads, and AI/ML training and inference.



Internal: Your class Faculty

Other: NVIDIA developer forums, internal DevOps
OpenCL (Open Computing Language)
OpenCL is an open standard for parallel programming across heterogeneous hardware (CPUs, GPUs, FPGAs). It is used for AI acceleration on non-NVIDIA hardware.



Internal: Your class Faculty

Other: Khronos Group, vendor documentation
MPI (Message Passing Interface)
MPI is a standard for message passing between processes; it enables multi-node parallelism for distributed training of deep learning models on HPC clusters.
Internal: Your class Faculty

Other: MPI open-source community, internal cluster team
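The message-passing model MPI provides can be illustrated without an HPC cluster. The sketch below uses Python's standard-library multiprocessing module as a stand-in for a real MPI library (such as mpi4py's comm.send/comm.recv); it is a conceptual illustration of ranks exchanging partial results, not MPI itself.

```python
# Conceptual sketch of MPI-style point-to-point message passing.
# multiprocessing.Pipe stands in for MPI_Send/MPI_Recv; the worker
# "ranks" each compute a partial sum and send it back to "rank 0".
from multiprocessing import Process, Pipe

def worker(conn, rank):
    # Each rank computes a partial result over its own slice of the
    # data, analogous to one node of an HPC job.
    partial = sum(range(rank * 10, (rank + 1) * 10))
    conn.send((rank, partial))  # analogous to MPI_Send
    conn.close()

def main():
    parent_conns, procs = [], []
    for rank in range(1, 4):  # ranks 1..3 act as compute nodes
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        parent_conns.append(parent)
        procs.append(p)
    # "Rank 0" gathers the partial sums, analogous to MPI_Recv / MPI_Gather.
    total = sum(conn.recv()[1] for conn in parent_conns)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    main()
```

In real MPI code the same pattern would run as separate processes across cluster nodes, launched with mpirun, with the communicator handling the transport.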
Eigen (C++ Linear Algebra Library)
Eigen is a C++ template library optimized for matrix operations; it is integrated into TensorFlow's C++ core for fast inference.
Internal: Your class Faculty

Other: Eigen GitHub, TensorFlow docs
AI Model Parallelization

Horovod (Uber's Distributed Training Framework)
Horovod is a framework for distributed deep learning training; it is used for training large language models (LLMs) across multiple GPUs.
Internal: Your class Faculty

Other: Uber Horovod GitHub, OpenAI docs
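The core operation Horovod (with NCCL underneath) performs each training step is an allreduce that averages gradients across workers. The sketch below illustrates only that averaging step in plain Python, with lists standing in for per-GPU gradient tensors; it is not the Horovod API, which instead wraps an existing optimizer (e.g. via hvd.DistributedOptimizer).

```python
# Conceptual sketch of the gradient-averaging step of allreduce.
# Each inner list stands in for the gradient tensor held by one GPU.

def allreduce_average(per_worker_grads):
    """Average the gradient vectors held by each worker.

    per_worker_grads: list of equal-length gradient lists, one per worker.
    Returns the averaged gradient every worker ends up holding.
    """
    n_workers = len(per_worker_grads)
    length = len(per_worker_grads[0])
    return [
        sum(g[i] for g in per_worker_grads) / n_workers
        for i in range(length)
    ]

# Example: 3 workers, each holding a 4-element gradient.
grads = [
    [1.0, 2.0, 3.0, 4.0],
    [3.0, 2.0, 1.0, 0.0],
    [2.0, 2.0, 2.0, 2.0],
]
avg = allreduce_average(grads)  # every worker receives the same average
```

In production, Horovod performs this reduction with a bandwidth-efficient ring algorithm over NCCL or MPI rather than gathering all gradients in one place.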
NCCL (NVIDIA Collective Communications Library)

NCCL accelerates multi-GPU training using optimized interconnects for deep learning.
Internal: Your class Faculty

Other: NVIDIA dev forums, internal DevOps
Ray Train (Ray.io for Distributed Training)
Ray Train is a distributed training library supporting scalable deep learning; it is used for large-scale model and reinforcement learning training.

Internal: Your class Faculty

Other: Ray community forums, open-source contributors
FPGA-Based AI

Intel OpenVINO
Intel OpenVINO is a toolkit that optimizes and accelerates AI inference workloads, including real-time inference on edge devices.
Internal: Your class Faculty

Other: Intel DevCloud, OpenVINO community
Xilinx Vitis AI

Xilinx Vitis AI is a hardware acceleration platform for AI inference on Xilinx FPGAs. It is used in low-power edge AI applications.
Internal: Your class Faculty

Other: Xilinx forums, FPGA optimization guides
Edge AI & AI Inference

TensorRT (NVIDIA)
TensorRT optimizes AI inference performance for low-latency, real-time applications.
Internal: Your class Faculty

Other: NVIDIA Dev Forums