Pongpanoch Chongpatiyutt

Hi, I'm Pongpanoch Chongpatiyutt.

AI Engineer

I build practical AI systems for real teams: ingest, retrieve, evaluate, and ship with confidence.

I'm an AI Engineer focused on reliable retrieval and practical AI agent systems.

I started in machine learning through university coursework, then moved deeper into the engineering side: ingestion quality, retrieval grounding, and stable behavior under real operating constraints.

At Saifa AI, I worked on production-oriented RAG and worker architecture, including PDF and web ingestion, Qdrant indexing, event-driven processing, and evaluation workflows for repeatability. I also contributed to multi-agent reliability improvements with safer fallbacks, clearer routing, and better replay and debug workflows.

I enjoy building end-to-end AI workflows that teams can operate confidently, from data preparation and retrieval to response generation and regression checks.

Most company projects are NDA-bound, so I share architecture patterns, tooling, and quality safeguards rather than confidential product details.

Experience

AI Engineering Intern, Saifa AI (Sep 2025 - Present)

  • Built and improved RAG pipelines for PDF and website ingestion, including extraction, chunking, metadata design, and Qdrant indexing and filtering.
  • Implemented repeatable evaluation workflows with auditable artifacts (JSONL logs, replay payloads, extraction and retrieval comparisons) for regression checks.
  • Stabilized the core event-driven worker flow (message in to reply out) in a RabbitMQ and FastAPI architecture with defensive fallback handling.
  • Improved reliability through payload validation, idempotent handling for duplicate events, structured logging, and clearer failure classification.
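The payload-validation and idempotency points above can be sketched as a small, self-contained example. All names here (`handle_event`, `seen_ids`) are illustrative, not the actual Saifa AI implementation, and a production worker would back deduplication with a durable store rather than an in-memory set:

```python
import json

# In-memory dedup store; a real worker would use Redis or a database
# so duplicates are caught across restarts and replicas.
seen_ids: set[str] = set()

def handle_event(raw: str) -> str:
    """Validate the payload, skip duplicate deliveries, and return a
    classification string suitable for structured logging."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return "rejected:malformed_json"

    event_id = event.get("id")
    if not event_id or "body" not in event:
        return "rejected:missing_fields"

    if event_id in seen_ids:
        # Message brokers deliver at-least-once; a repeat id is a no-op.
        return "skipped:duplicate"

    seen_ids.add(event_id)
    return "processed"
```

The single return string doubles as a failure classification, which keeps log filtering and alerting simple.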

Tooling

Python, FastAPI, RabbitMQ, Qdrant, LangChain, OpenAI API, Anthropic API, Gemini API, RAG, Vector Search, JSONL Evaluation Logs, Prompt Engineering, Docker, Git

Student Assistant, TU Berlin - Faculty V (Aug 2023 - May 2025)

  • Managed CNC-based fabrication workflows for open-source hardware projects, from CAD and CAM preparation to machine operation.
  • Supported machine setup and handoff quality checks to keep fabrication runs repeatable for student and research teams.
  • Helped build and operate makerspace infrastructure for student and research projects.
  • Produced practical documentation and structured handoffs to support reproducible technical work.

Tooling

Fusion 360, CAD/CAM, CNC Programming, Workshop Tooling, 3D Printing, Technical Documentation

Project: Reusable RAG Delivery Foundation

RAG delivery is complex and usually starts with specialist setup before teams can validate real value. I built this reusable foundation so ingestion, retrieval, and cited responses are ready from day one.

This cuts the heavy setup work and shifts effort toward higher-value tasks: use-case design, corpus-specific tuning, and edge-case reliability with real data.

Result: teams can build on top of a stable base and iterate toward production behavior far more efficiently.

Problem

New agent initiatives frequently stall at stack selection, ingestion reliability, and retrieval quality setup.

System

A production-style RAG foundation with PDF/URL ingestion, vector indexing, grounded chat, and citation traces.

Benefit

Teams can move from concept to domain testing faster, with less regression risk and less rework.

Core Stack

Next.js, TypeScript, OpenAI API, Pinecone, Browserless, puppeteer-core, pdf-parse, React Markdown, Tailwind CSS

System Design

Architecture at a glance: one Next.js app handles ingest and chat APIs, indexes evidence in Pinecone, and returns citation-grounded answers with built-in guardrails.
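The citation-grounded flow can be illustrated with a minimal sketch. This is a toy: a keyword scorer stands in for Pinecone vector search, and all names (`retrieve`, `answer_with_citations`) are hypothetical, not the project's actual API:

```python
def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank chunks by naive keyword overlap; a stand-in for vector search."""
    words = query.lower().split()
    return sorted(
        corpus,
        key=lambda c: sum(w in c["text"].lower() for w in words),
        reverse=True,
    )[:k]

def answer_with_citations(query: str, corpus: list[dict]) -> dict:
    """Return the retrieved context plus a citation trace, so every
    answer can point back to its evidence."""
    hits = retrieve(query, corpus)
    return {
        "context": "\n".join(h["text"] for h in hits),
        "citations": [{"source": h["source"], "chunk_id": h["id"]} for h in hits],
    }
```

The key design choice is that citations are assembled from chunk metadata at retrieval time, not parsed back out of the model's answer, which keeps the evidence trace reliable.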

Pong AI Demo

Upload PDF or ingest URL, then chat with grounded responses.


Companion Tools I Built

The demo is the baseline layer. These companion tools make RAG quality measurable, tunable, and repeatable across different business use cases.

RAG Evaluation Harness

  • Runs golden-set queries with expected evidence and explicit pass/fail criteria.
  • Tracks citation coverage, groundedness, hit-rate, latency, and per-query cost across versions.
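A golden-set regression check of this kind can be sketched as follows. The field and function names are illustrative, and the pass criterion here (any expected evidence cited) is a simplified stand-in for the harness's full metric set:

```python
def evaluate(golden_set: list[dict], run_fn) -> tuple[float, list[dict]]:
    """Run each golden query through run_fn (which returns cited
    evidence ids) and score hit-rate and citation coverage."""
    results = []
    for case in golden_set:
        expected = set(case["expected_evidence"])
        cited = set(run_fn(case["query"]))
        overlap = expected & cited
        results.append({
            "query": case["query"],
            "pass": bool(overlap),                      # any expected evidence cited
            "citation_coverage": len(overlap) / len(expected),
        })
    hit_rate = sum(r["pass"] for r in results) / len(results)
    return hit_rate, results
```

Persisting `results` as JSONL per version is what makes regressions visible: a drop in hit-rate or coverage between runs points directly at the queries that broke.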

Stack

Python, JSONL Traces, Pandas, Jupyter Notebooks, Regression Baselines

Chunking and Retrieval Comparator

  • Compares chunk size/overlap, top-k, thresholds, and reranking strategies on the same corpus.
  • Produces side-by-side retrieval and answer quality reports to guide tuning decisions.
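The parameter sweep can be sketched with a naive character-based splitter standing in for LangChain's RecursiveCharacterTextSplitter; the structure of the sweep is the point, not the splitter itself:

```python
def chunk(text: str, size: int, overlap: int) -> list[str]:
    """Split text into fixed-size windows that overlap by `overlap` chars."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def sweep(text: str, sizes: list[int], overlaps: list[int]) -> list[dict]:
    """Try every valid size/overlap pair on the same corpus and report
    the resulting chunk counts for side-by-side comparison."""
    report = []
    for size in sizes:
        for overlap in overlaps:
            if overlap >= size:  # degenerate pair: would never advance
                continue
            chunks = chunk(text, size, overlap)
            report.append({"size": size, "overlap": overlap,
                           "n_chunks": len(chunks)})
    return report
```

In the real comparator, each parameter combination would also be indexed (e.g. into its own Pinecone namespace) and scored on retrieval quality, not just chunk counts.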

Stack

LangChain, RecursiveCharacterTextSplitter, OpenAI Embeddings, Pinecone Namespaces, Retrieval Parameter Sweeps, Pandas, Jupyter Notebooks

Edge-Case Replay and Failure Analyzer

  • Replays problematic queries and classifies misses: no-evidence, wrong-source, and partial-answer.
  • Links each failure to trace artifacts so prompt, retrieval, and ingestion fixes are faster.
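The three miss categories above can be expressed as a small classifier. This is a hedged sketch with illustrative names, comparing expected against retrieved evidence sources:

```python
def classify_failure(expected: list[str], retrieved: list[str]) -> str:
    """Map a replayed query's retrieval result to one of the miss
    categories: no-evidence, wrong-source, or partial-answer."""
    if not retrieved:
        return "no-evidence"          # retrieval returned nothing
    overlap = set(expected) & set(retrieved)
    if not overlap:
        return "wrong-source"         # evidence found, but all irrelevant
    if overlap != set(expected):
        return "partial-answer"       # some required evidence missing
    return "pass"
```

Tagging each replayed failure with one of these labels is what lets fixes be routed: no-evidence points at ingestion, wrong-source at retrieval tuning, partial-answer often at chunking or prompts.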

Stack

FastAPI, RabbitMQ, Structured Logging, Replay Payloads, Git

Contact

For AI/ML/Software opportunities or project collaboration, feel free to reach out.

p.chongpatiyutt@gmail.com
LinkedIn · GitHub