
Contact

Let's connect! I'm always interested in discussions about GPU optimization, LLM systems, and making AI tools more accessible.


Get in Touch

Email

waqasm86@gmail.com

Best for:

  • Project inquiries
  • Collaboration opportunities
  • Technical discussions
  • Feedback on llcuda

Response time: Usually within 24-48 hours


Social & Professional

GitHub

github.com/waqasm86

Follow my open-source work:

  • llcuda development
  • Ubuntu-Cuda-Llama.cpp-Executable
  • Bug reports and feature requests
  • Pull requests and contributions

PyPI

pypi.org/project/llcuda

Official llcuda package:

  • Latest releases
  • Version history
  • Package statistics
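
To install or upgrade to the latest release, the standard pip command for the package linked above is all you need:

pip install --upgrade llcuda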


Project-Specific

llcuda

Issues & Bugs: github.com/waqasm86/llcuda/issues

  • Report bugs
  • Request features
  • Ask for help

Discussions: github.com/waqasm86/llcuda/discussions

  • General questions
  • Share use cases
  • Community support

Documentation: waqasm86.github.io/llcuda

  • Quick start guide
  • Installation help
  • Examples and tutorials

Ubuntu-Cuda-Llama.cpp-Executable

Issues: github.com/waqasm86/Ubuntu-Cuda-Llama.cpp-Executable/issues

  • Build problems
  • Compatibility issues
  • Platform support requests


What I'm Interested In

I'm particularly interested in hearing from you if you're:

Using llcuda

  • Share your use case: How are you using llcuda?
  • Benchmark your GPU: What performance are you seeing? (A minimal timing sketch follows this list.)
  • Found a bug? Please report it!
  • Built something cool? I'd love to hear about it!
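
For sharing load-time numbers, here is a minimal timing sketch. It assumes only the InferenceEngine and load_model calls shown in the bug-report example further down this page; the model name is illustrative:

import time

import llcuda  # assumes llcuda is installed and an NVIDIA GPU is available

engine = llcuda.InferenceEngine()

# Time model loading; the model name is taken from the bug-report example below.
t0 = time.perf_counter()
engine.load_model("phi-3-mini-Q4_K_M")
print(f"Load time: {time.perf_counter() - t0:.2f}s")

Wall-clock load time plus your GPU model and VRAM (from nvidia-smi) already makes a useful data point.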

Have Legacy Hardware

  • Testing on different GPUs: Help expand hardware support
  • Different Linux distros: Help verify compatibility
  • Performance data: Share your benchmarks

Want to Collaborate

  • Windows/macOS support: Port llcuda to new platforms
  • AMD GPU support: ROCm integration
  • Documentation improvements: Make guides even better
  • New features: Implement advanced capabilities

Learning or Teaching

  • Student projects: Using llcuda for coursework
  • Tutorials or blog posts: Share your knowledge
  • Workshops: Teaching with llcuda
  • Research: Academic applications

Response Times

GitHub Issues: 24-48 hours (usually faster)
Email: 24-48 hours for project inquiries
Pull Requests: Reviewed within a week

Note: I'm a solo maintainer, so please be patient. I respond to everything!


Reporting Bugs

When reporting bugs, please include:

System Information:

# Python version
python3 --version

# llcuda version
python3 -c "import llcuda; print(llcuda.__version__)"

# GPU information
nvidia-smi

# OS information
cat /etc/os-release
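
If it's easier, here is a small Python sketch that gathers the same details in one run. It uses only the standard library plus the llcuda import; the nvidia-smi query flags are the stock NVIDIA options:

# collect_info.py - gather system details for a bug report in one run
import platform
import subprocess
import sys

print("Python:", sys.version.split()[0])
print("OS:", platform.platform())

try:
    import llcuda
    print("llcuda:", llcuda.__version__)
except ImportError:
    print("llcuda: not installed")

# GPU name and total VRAM via nvidia-smi (requires the NVIDIA driver)
try:
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("GPU:", gpu.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("GPU: nvidia-smi not available")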

Error Details:

  • Full error message
  • Steps to reproduce
  • Expected vs. actual behavior
  • Minimal code example

Example Bug Report:

**Title**: CUDA out of memory with Phi-3 Mini on GTX 1050

**System**:
- llcuda version: 1.0.0
- Python: 3.11.0
- GPU: GeForce GTX 1050 (2GB VRAM)
- OS: Ubuntu 22.04

**Issue**:
Getting CUDA OOM error when loading Phi-3 Mini model.

**Code**:
import llcuda
engine = llcuda.InferenceEngine()
engine.load_model("phi-3-mini-Q4_K_M")  # Fails here

**Error**:
RuntimeError: CUDA out of memory...

**Expected**: Should work on 2GB VRAM
**Actual**: OOM error


Feature Requests

I welcome feature requests! Please provide:

Use Case: Why do you need this feature?
Description: What should it do?
Example: How would you use it?
Priority: Is this blocking your work?

Example Feature Request:

**Feature**: Support for Q2_K quantization

**Use Case**: Need to run larger models on 1GB VRAM GPU

**Description**: Add support for Q2_K quantization to fit
larger models in limited VRAM.

**Example**:
engine = llcuda.InferenceEngine()
engine.load_model("mistral-7b-Q2_K")  # Hypothetical Q2_K model

**Priority**: Nice to have, not blocking


Collaboration Opportunities

Interested in collaborating? I'm looking for:

Code Contributors

  • Windows/macOS support
  • AMD GPU integration (ROCm)
  • Performance optimizations
  • New features

Technical Writers

  • Tutorial creation
  • Documentation improvements
  • Translation to other languages

Testers

  • Different GPU models
  • Various Linux distributions
  • Edge cases and stress testing

Researchers

  • Academic use cases
  • Performance studies
  • Novel applications

Commercial Inquiries

For commercial use, consulting, or custom development:

Email: waqasm86@gmail.com

Services I offer:

  • Custom llcuda integrations
  • Performance optimization for specific hardware
  • Training and workshops
  • Technical consulting on GPU computing

Note: llcuda is MIT licensed and free to use commercially. No license fees required.


Office Hours

I don't have formal office hours, but I'm most responsive:

Timezone: UTC+5 (Pakistan Standard Time)
Best times: Weekdays, 9 AM - 6 PM PKT

For urgent issues, GitHub issues are usually faster than email.


Community Guidelines

When reaching out:

Do:

  • Be respectful and professional
  • Provide context and details
  • Search existing issues first
  • Share your GPU/system specs
  • Include error messages

Don't:

  • Demand immediate responses
  • Send duplicate messages across channels
  • Report security issues publicly (email instead)
  • Expect free consulting (unless it's a bug)


Stay Updated

GitHub: Watch or star the repository to get updates
GitHub Discussions: Join the community
Release Notes: Check PyPI for new versions

I announce major updates through:

  • GitHub releases
  • PyPI version updates
  • README updates


llcuda Documentation: waqasm86.github.io/llcuda

  • Quick Start: Get running in 5 minutes
  • Installation Guide: Comprehensive setup
  • Examples: Production code samples

GitHub: github.com/waqasm86
PyPI: pypi.org/project/llcuda
Resume: Download PDF


Frequently Asked Questions

Before reaching out, check if your question is answered in:

Installation Issues: Installation Guide
Performance Questions: Performance Guide
Usage Examples: Examples
General Info: About Me


Thank You

Thank you for your interest in my work! I built llcuda to make LLM development accessible on hardware people already own, and your feedback helps make it better.

Whether you're:

  • A student learning AI
  • A developer building applications
  • A researcher exploring LLMs
  • Someone with an old GPU wanting to experiment

I'm here to help.

Looking forward to hearing from you!

— Waqas Muhammad


Contact Summary

Primary Contact: waqasm86@gmail.com

GitHub: github.com/waqasm86
PyPI: pypi.org/project/llcuda

For Bugs: GitHub Issues
For Discussions: GitHub Discussions
For Everything Else: waqasm86@gmail.com

Response Time: 24-48 hours
Timezone: UTC+5 (PKT)