AI Blog

Insights, tutorials, and benchmarks from the world of AI

Benchmarks · February 9, 2026

Qwen3-Coder-Next: IQ2 vs IQ3 Benchmarks

IQ2_XXS achieves 22 t/s on RTX 4090 at 200K context — 85% faster than IQ3 with no measurable quality loss. Full benchmark data and configuration.

Tags: Local AI · Benchmarks · IQ2 Quantization · RTX 4090
Read Article →
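
The teaser above names a concrete configuration (IQ2_XXS quant, RTX 4090, 200K context), but the actual setup lives in the linked article, not on this index page. Purely as a hedged sketch, the snippet below shows one way such a throughput number could be measured locally with llama-cpp-python; the model filename, context length, prompt, and token budget are illustrative placeholders, not the article's benchmark configuration.

```python
# Rough sketch only: timing tokens/sec for a local IQ2_XXS GGUF via llama-cpp-python.
# Every value below (file name, context length, prompt) is a placeholder, not the
# configuration used in the linked benchmark article.
import time

from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-coder-next-IQ2_XXS.gguf",  # hypothetical local GGUF file
    n_ctx=200_000,      # long-context window, matching the teaser's 200K figure
    n_gpu_layers=-1,    # offload all layers to the GPU
    verbose=False,
)

prompt = "Write a Python function that reverses a linked list."
start = time.perf_counter()
result = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

n_generated = result["usage"]["completion_tokens"]
print(f"{n_generated} tokens in {elapsed:.1f}s ≈ {n_generated / elapsed:.1f} t/s")
```

A single timed completion like this is only a smoke test; headline figures of the kind quoted in the teaser would come from repeated runs at varying context depths, as covered in the full article.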
Tutorial · February 2, 2026

Running Uncensored AI Locally: My PRISM Setup

How I set up GLM-4.7-Flash locally with web search and vision capabilities. No content filters, no API subscription, just my hardware doing what I tell it to.

Tags: Local AI · GLM-4.7 · Uncensored · Claude Code
Read Article →

Get AI Tips in Your Inbox

Subscribe for tutorials on local AI, Stable Diffusion, LoRA training, and Claude Code workflows.

More Articles Coming Soon

We're preparing additional tutorials and guides:

  • Stable Diffusion Mastery — Advanced prompting and workflow optimization
  • LoRA Training from Scratch — Create custom models for any subject
  • LLM Fine-tuning Guide — Adapt open-source models for your domain
  • AI Automation Pipelines — Build end-to-end workflows with n8n
  • Claude Code Pro Tips — Advanced usage and MCP development

Have a topic you'd like us to cover? Let us know.