Software

A collection of Software (GitHub) datasets I've created (3 items).
Columns: github_repo_link · repo_name · repo_description · homepage_link · repo_tag · category (13 classes) · repo_tags (list); `null` marks a missing value.

| github_repo_link | repo_name | repo_description | homepage_link | repo_tag | category | repo_tags |
|---|---|---|---|---|---|---|
| https://github.com/pytorch/pytorch | pytorch | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning | machine learning framework | null |
| https://github.com/ggml-org/llama.cpp | llama.cpp | LLM inference in C/C++ | null | ggml | inference engine | null |
| https://github.com/onnx/onnx | onnx | Open standard for machine learning interoperability | https://onnx.ai/ | deep-learning | null | null |
| https://github.com/ray-project/ray | ray | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | deep-learning | null | null |
| https://github.com/vllm-project/vllm | vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | inference engine | null |
| https://github.com/ollama/ollama | ollama | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | llms | inference engine | null |
| https://github.com/sgl-project/sglang | sglang | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai/ | inference | inference engine | null |
| https://github.com/modular/modular | modular | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com/ | mojo | null | null |
| https://github.com/pytorch/ao | ao | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao/stable/index.html | quantization | null | null |
| https://github.com/triton-lang/triton | triton | Development repository for the Triton language and compiler | https://triton-lang.org/ | null | dsl | null |
| https://github.com/HazyResearch/ThunderKittens | ThunderKittens | Tile primitives for speedy kernels | null | null | null | null |
| https://github.com/gpu-mode/reference-kernels | reference-kernels | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | gpu | kernels | null |
| https://github.com/pytorch/executorch | executorch | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | mobile | model compiler | null |
| https://github.com/guandeh17/Self-Forcing | Self-Forcing | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | null | null | null | null |
| https://github.com/cumulo-autumn/StreamDiffusion | StreamDiffusion | StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation | null | null | null | null |
| https://github.com/comfyanonymous/ComfyUI | ComfyUI | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://www.comfy.org/ | stable-diffusion | null | null |
| https://github.com/Jeff-LiangF/streamv2v | streamv2v | Official Pytorch implementation of StreamV2V. | https://jeff-liangf.github.io/projects/streamv2v/ | null | null | null |
| https://github.com/letta-ai/letta | letta | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com/ | ai-agents | null | null |
| https://github.com/jupyterlab/jupyterlab | jupyterlab | JupyterLab computational environment. | https://jupyterlab.readthedocs.io/ | jupyter | ui | null |
| https://github.com/ROCm/rocm-systems | rocm-systems | super repo for rocm systems projects | null | null | null | null |
| https://github.com/NVIDIA/cutlass | cutlass | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | cuda | null | null |
| https://github.com/pytorch/helion | helion | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | null | null | dsl | null |
| https://github.com/jax-ml/jax | jax | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | jax | null | null |
| https://github.com/tensorflow/tensorflow | tensorflow | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | deep-learning | machine learning framework | null |
| https://github.com/deepspeedai/DeepSpeed | DeepSpeed | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://www.deepspeed.ai/ | gpu | null | null |
| https://github.com/triton-inference-server/server | server | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference | null | null |
| https://github.com/ROCm/ROCm | ROCm | AMD ROCm™ Software - GitHub Home | https://rocm.docs.amd.com | documentation | null | null |
| https://github.com/llvm/llvm-project | llvm-project | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | null | compiler | null |
| https://github.com/cwpearson/cupti | cupti | Profile how CUDA applications create and modify data in memory. | null | null | profiler | null |
| https://github.com/LLNL/hatchet | hatchet | Graph-indexed Pandas DataFrames for analyzing hierarchical performance data | https://llnl-hatchet.readthedocs.io | performance | profiler | null |
| https://github.com/toyaix/triton-runner | triton-runner | Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. | https://triton-runner.org | triton | null | null |
| https://github.com/ByteDance-Seed/Triton-distributed | Triton-distributed | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io/en/latest/ | null | model compiler | null |
| https://github.com/linkedin/Liger-Kernel | Liger-Kernel | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton | kernels | null |
| https://github.com/thunlp/TritonBench | TritonBench | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | null | null | benchmark | null |
| https://github.com/meta-pytorch/tritonparse | tritonparse | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse/ | triton | null | null |
| https://github.com/elastic/elasticsearch | elasticsearch | Free and Open Source, Distributed, RESTful Search Engine | https://www.elastic.co/products/elasticsearch | search-engine | search engine | null |
| https://github.com/kubernetes/kubernetes | kubernetes | Production-Grade Container Scheduling and Management | https://kubernetes.io | containers | null | null |
| https://github.com/modelcontextprotocol/modelcontextprotocol | modelcontextprotocol | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | null | null | null |
| https://github.com/lastmile-ai/mcp-agent | mcp-agent | Build effective agents using Model Context Protocol and simple workflow patterns | null | ai-agents | null | null |
| https://github.com/milvus-io/milvus | milvus | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search | vector database | null |
| https://github.com/gaoj0017/RaBitQ | RaBitQ | [SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search | https://github.com/VectorDB-NTU/RaBitQ-Library | nearest-neighbor-search | null | null |
| https://github.com/Airtable/airtable.js | airtable.js | Airtable javascript client | null | null | null | null |
| https://github.com/mistralai/mistral-inference | mistral-inference | Official inference library for Mistral models | https://mistral.ai/ | llm-inference | inference engine | null |
| https://github.com/dstackai/dstack | dstack | dstack is an open-source control plane for running development, training, and inference jobs on GPUs—across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | null | null | ["amd", "cloud", "containers", "docker", "fine-tuning", "gpu", "inference", "k8s", "kubernetes", "llms", "machine-learning", "nvidia", "orchestration", "python", "slurm", "training"] |
| https://github.com/numpy/numpy | numpy | The fundamental package for scientific computing with Python. | https://numpy.org | python | python library | null |
| https://github.com/scipy/scipy | scipy | SciPy library main repository | https://scipy.org | python | python library | null |
| https://github.com/numba/numba | numba | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org/ | compiler | null | null |
| https://github.com/sandialabs/torchdendrite | torchdendrite | Dendrites for PyTorch and SNNTorch neural networks | null | scr-3078 | machine learning framework | null |
| https://github.com/Lightning-AI/lightning-thunder | lightning-thunder | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | null | null |
| https://github.com/pytorch/torchdynamo | torchdynamo | A Python-level JIT compiler designed to make unmodified PyTorch programs faster. | null | null | null | null |
| https://github.com/microsoft/TileIR | TileIR | null | null | null | dsl | null |
| https://github.com/pytorch/torchtitan | torchtitan | A PyTorch native platform for training generative AI models | null | null | null | null |
| https://github.com/NVIDIA/cudnn-frontend | cudnn-frontend | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | null | null | null | null |
| https://github.com/pytorch/ort | ort | Accelerate PyTorch models with ONNX Runtime | null | null | null | null |
| https://github.com/NVIDIA/nccl | nccl | Optimized primitives for collective multi-GPU communication | https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html | cuda | null | null |
| https://github.com/sgl-project/ome | ome | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome/ | null | null | ["deepseek", "k8s", "kimi-k2", "llama", "llm", "llm-inference", "model-as-a-service", "model-serving", "multi-node-kubernetes", "oracle-cloud", "sgalng", "sglang"] |
| https://github.com/volcengine/verl | verl | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io/en/latest/index.html | null | null | null |
| https://github.com/aws-neuron/neuronx-distributed-inference | neuronx-distributed-inference | null | null | null | inference engine | null |
| https://github.com/meta-pytorch/monarch | monarch | PyTorch Single Controller | https://meta-pytorch.org/monarch | null | null | null |
| https://github.com/ai-dynamo/nixl | nixl | NVIDIA Inference Xfer Library (NIXL) | null | null | null | null |
| https://github.com/LMCache/LMCache | LMCache | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai/ | inference | null | null |
| https://github.com/linux-rdma/rdma-core | rdma-core | RDMA core userspace libraries and daemons | null | null | null | ["infiniband", "iwarp", "kernel-rdma-drivers", "linux-kernel", "rdma", "roce", "userspace-libraries"] |
| https://github.com/NVIDIA/TensorRT | TensorRT | NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | inference | null | null |
| https://github.com/Cambridge-ICCS/FTorch | FTorch | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch/ | deep-learning | null | null |
| https://github.com/facebook/hhvm | hhvm | A virtual machine for executing programs written in Hack. | https://hhvm.com | hack | null | null |
| https://github.com/vosen/ZLUDA | ZLUDA | CUDA on non-NVIDIA GPUs | https://vosen.github.io/ZLUDA/ | cuda | null | null |
| https://github.com/vtsynergy/CU2CL | CU2CL | A prototype CUDA-to-OpenCL source-to-source translator, built on the Clang compiler framework | http://chrec.cs.vt.edu/cu2cl | null | null | null |
| https://github.com/pocl/pocl | pocl | pocl - Portable Computing Language | https://portablecl.org | opencl | null | null |
| https://github.com/apache/spark | spark | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org/ | big-data | null | null |
| https://github.com/codelion/openevolve | openevolve | Open-source implementation of AlphaEvolve | null | null | null | ["alpha-evolve", "alphacode", "alphaevolve", "coding-agent", "deepmind", "deepmind-lab", "discovery", "distributed-evolutionary-algorithms", "evolutionary-algorithms", "evolutionary-computation", "genetic-algorithm", "genetic-algorithms", "iterative-methods", "iterative-refinement", "llm-engineering", "llm-ensemble", "llm-inference", "openevolve", "optimize"] |
| https://github.com/ROCm/hipBLAS | hipBLAS | [DEPRECATED] Moved to ROCm/rocm-libraries repo | https://github.com/ROCm/rocm-libraries | hip | null | null |
| https://github.com/ROCm/roctracer | roctracer | [DEPRECATED] Moved to ROCm/rocm-systems repo | https://github.com/ROCm/rocm-systems | null | null | null |
| https://github.com/huggingface/peft | peft | 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | null | null | ["adapter", "diffusion", "fine-tuning", "llm", "lora", "parameter-efficient-learning", "peft", "python", "pytorch", "transformers"] |
| https://github.com/ROCm/hip | hip | HIP: C++ Heterogeneous-Compute Interface for Portability | https://rocmdocs.amd.com/projects/HIP/ | hip | null | null |
| https://github.com/ROCm/composable_kernel | composable_kernel | Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators | https://rocm.docs.amd.com/projects/composable_kernel/en/latest/ | null | null | null |
| https://github.com/ROCm/aiter | aiter | AI Tensor Engine for ROCm | null | null | null | null |
| https://github.com/AMDResearch/intelliperf | intelliperf | Automated bottleneck detection and solution orchestration | null | null | null | ["amd", "genai", "gpu", "hip", "instinct", "llm", "performance", "rocm"] |
| https://github.com/AMD-AGI/GEAK-agent | GEAK-agent | An LLM-based AI agent that can write correct and efficient GPU kernels automatically. | null | null | null | null |
| https://github.com/AMD-AGI/torchtitan | torchtitan | A PyTorch native platform for training generative AI models | null | null | null | null |
| https://github.com/AMD-AGI/hipBLASLt | hipBLASLt | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/index.html | null | null | null |
| https://github.com/AMD-AGI/rocm-torchtitan | rocm-torchtitan | null | null | null | null | null |
| https://github.com/HazyResearch/Megakernels | Megakernels | kernels, of the mega variety | null | null | null | null |
| https://github.com/huggingface/kernels | kernels | Load compute kernels from the Hub | null | null | kernels | null |
| https://github.com/tile-ai/tilelang | tilelang | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com/ | null | dsl | null |
| https://github.com/opencv/opencv | opencv | Open Source Computer Vision Library | https://opencv.org | image-processing | null | null |
| https://github.com/tracel-ai/burn | burn | Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. | https://burn.dev | null | null | ["autodiff", "cross-platform", "cuda", "deep-learning", "kernel-fusion", "machine-learning", "metal", "ndarray", "neural-network", "onnx", "pytorch", "rocm", "rust", "scientific-computing", "tensor", "vulkan", "wasm", "webgpu"] |
| https://github.com/huggingface/kernels-community | kernels-community | Kernel sources for https://huggingface.co/kernels-community | null | null | kernels | null |
| https://github.com/flashinfer-ai/flashinfer-bench | flashinfer-bench | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | null | benchmark | null |
| https://github.com/OSC/ondemand | ondemand | Supercomputing. Seamlessly. Open, Interactive HPC Via the Web | https://openondemand.org/ | null | null | ["gateway", "hacktoberfest", "hpc", "hpc-applications"] |
| https://github.com/flashinfer-ai/flashinfer | flashinfer | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | null | null | ["attention", "cuda", "distributed-inference", "gpu", "jit", "large-large-models", "llm-inference", "moe", "nvidia", "pytorch"] |
| https://github.com/ScalingIntelligence/KernelBench | KernelBench | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench/ | benchmark | benchmark | null |
| https://github.com/AutomataLab/cuJSON | cuJSON | cuJSON: A Highly Parallel JSON Parser for GPUs | null | null | null | null |
| https://github.com/Netflix/metaflow | metaflow | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | null | null | ["agents", "ai", "aws", "azure", "cost-optimization", "datascience", "distributed-training", "gcp", "generative-ai", "high-performance-computing", "kubernetes", "llm", "llmops", "machine-learning", "ml", "ml-infrastructure", "ml-platform", "mlops", "model-management", "python"] |
| https://github.com/harmonic-ai/IMO2025 | IMO2025 | null | null | null | null | null |
| https://github.com/leanprover/lean4 | lean4 | Lean 4 programming language and theorem prover | https://lean-lang.org | lean | null | null |
| https://github.com/NVIDIA/warp | warp | A Python framework for accelerated simulation, data generation and spatial computing. | https://nvidia.github.io/warp/ | null | null | ["cuda", "differentiable-programming", "gpu", "gpu-acceleration", "nvidia", "nvidia-warp", "python"] |
| https://github.com/NVIDIA/cuda-python | cuda-python | CUDA Python: Performance meets Productivity | https://nvidia.github.io/cuda-python/ | null | null | null |
| https://github.com/basetenlabs/truss | truss | The simplest way to serve AI/ML models in production | https://truss.baseten.co | null | null | ["artificial-intelligence", "easy-to-use", "falcon", "inference-api", "inference-server", "machine-learning", "model-serving", "open-source", "packaging", "stable-diffusion", "whisper", "wizardlm"] |
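Rows like the ones above can be turned back into structured records with a few lines of Python. This is a minimal illustrative sketch, not an official loader for this dataset: the `parse_row` helper and the two sample rows are assumptions for demonstration, and it only handles cells that don't themselves contain a pipe character.

```python
# Hypothetical helper for the pipe-delimited rows above (illustrative only).
# Each row has seven cells, in the dataset's column order; the literal string
# "null" marks a missing value and is mapped to None.

COLUMNS = [
    "github_repo_link", "repo_name", "repo_description",
    "homepage_link", "repo_tag", "category", "repo_tags",
]

def parse_row(line: str) -> dict:
    """Split one markdown table row into a dict keyed by column name."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    return {col: (None if val == "null" else val) for col, val in zip(COLUMNS, cells)}

# Two sample rows copied from the table above.
rows = [
    "| https://github.com/vllm-project/vllm | vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | inference engine | null |",
    "| https://github.com/triton-lang/triton | triton | Development repository for the Triton language and compiler | https://triton-lang.org/ | null | dsl | null |",
]
records = [parse_row(r) for r in rows]

# Filter by the `category` column, e.g. to list the inference engines.
inference_engines = [r["repo_name"] for r in records if r["category"] == "inference engine"]
```

Cells holding JSON-style tag lists (the `repo_tags` column) could additionally be decoded with `json.loads` when the cell starts with `[`.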