03
Operate
See what the robot sees. Trust the machine.
The operator looks at the HMI. The vision overlay shows keypoints on the seam, the pool boundary, and correction arrows. The operator sees that the machine understands the process. It's not a black box. When something looks wrong, it's visible on screen before the weld goes bad. When something does go wrong, every frame is recorded — scrub back, find the moment, understand what happened.

Step 1
The operator sees what the machine sees
The HMI shows the live camera feed with the vision overlay. Keypoints on the seam edges. Pool boundary. Correction arrows showing where the robot is adjusting. The operator doesn't need to trust a black box — they can see that the machine understands the process.
Before: the robot runs a taught program. The operator watches the weld and hopes. If something drifts, they notice too late — after the defect.
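Conceptually, the overlay is just the vision output turned into draw primitives for the HMI. A minimal sketch in plain Python; the names (`Detection`, `to_draw_commands`) and the schema are illustrative assumptions, not the product's actual API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical vision-model output for one frame."""
    seam_keypoints: list   # [(x, y), ...] in image pixels
    pool_boundary: list    # polygon [(x, y), ...] around the weld pool
    correction: tuple      # (dx, dy) correction the robot will apply

def to_draw_commands(det: Detection) -> list:
    """Convert one detection into primitive draw commands for the overlay."""
    cmds = [("circle", x, y) for x, y in det.seam_keypoints]
    cmds.append(("polygon", det.pool_boundary))
    dx, dy = det.correction
    cmds.append(("arrow", dx, dy))  # the correction arrow the operator sees
    return cmds

det = Detection(seam_keypoints=[(120, 240), (125, 260)],
                pool_boundary=[(110, 250), (140, 250), (140, 280), (110, 280)],
                correction=(0.4, -0.1))
commands = to_draw_commands(det)
```

The point: every overlay element maps one-to-one to something the model actually computed, which is why the operator can see what the machine sees.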

Step 2
Monitor system health
The dashboard shows: camera FPS, vision model latency, robot communication status, PLC tag values. If throughput drops or a device disconnects, you see it immediately. Accessible from any browser — the office, the shop floor, home.
Before: "walk to the cell and look." No metrics. When throughput drops, nobody knows if it's the camera, the robot, or the PLC.
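The health checks behind such a dashboard reduce to comparing each metric against a rule. A hedged sketch; the metric names and thresholds here are assumptions, not the product's actual configuration:

```python
# Hypothetical health snapshot: metric name -> current value.
health = {
    "camera_fps": 28.5,
    "vision_latency_ms": 45.0,
    "robot_connected": True,
    "plc_connected": True,
}

# Alert rules: metric name -> predicate that must hold for a healthy cell.
rules = {
    "camera_fps": lambda v: v >= 25.0,          # camera should sustain ~30 FPS
    "vision_latency_ms": lambda v: v <= 100.0,  # model must keep up with the feed
    "robot_connected": lambda v: v is True,
    "plc_connected": lambda v: v is True,
}

def failing_metrics(snapshot, rules):
    """Return the metrics that violate their rule (these turn red on the dashboard)."""
    return [name for name, ok in rules.items() if not ok(snapshot[name])]

alerts = failing_metrics(health, rules)  # empty list when everything is healthy
```

The same check answers the "is it the camera, the robot, or the PLC?" question directly: the failing metric names the failing device.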

Step 3
Every frame is recorded
Camera frames, vision model output, robot commands, PLC tag values — all recorded with timestamps. Full traceability chain from raw image to robot action. When the auditor asks "show me what happened on part #4721" — you scrub to that timestamp.
Before: no video record. No vision model history. Quality issues investigated by guesswork. ISO traceability filled in by hand, after the fact.
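The traceability chain is essentially one timestamped record per frame, linking image, model output, and actuator state. A minimal sketch of what such a record might hold; the field names are assumptions for illustration:

```python
import time

def make_record(frame_id, part_id, detection, robot_cmd, plc_tags, ts=None):
    """One entry in the recording: everything needed to reconstruct this moment."""
    return {
        "ts": ts if ts is not None else time.time(),
        "frame_id": frame_id,    # index into the stored camera frames
        "part_id": part_id,      # e.g. "#4721", for audit queries
        "detection": detection,  # raw vision model output for this frame
        "robot_cmd": robot_cmd,  # command sent to the robot this cycle
        "plc_tags": plc_tags,    # PLC tag values at the same instant
    }

log = [
    make_record(0, "#4721", {"seam_offset_mm": 0.1}, {"dy": -0.1}, {"arc_on": 1}, ts=100.0),
    make_record(1, "#4721", {"seam_offset_mm": 0.6}, {"dy": -0.6}, {"arc_on": 1}, ts=100.1),
]

def records_for_part(log, part_id):
    """Answer the auditor's question: show me what happened on this part."""
    return [r for r in log if r["part_id"] == part_id]
```

Because every record carries the same timestamp across camera, model, robot, and PLC, the chain from raw image to robot action never has to be reconstructed by hand.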

Step 4
Diagnose in minutes, not days
A defect found in inspection? Open the recording. Scrub to the timestamp. See the exact frame: what the camera captured, what the vision model detected, what the robot did. Root cause — visible. Fix — testable on the same recording.
Before: the welding engineer examines the bad weld, hypothesizes what went wrong, adjusts parameters, runs another part. Trial and error. Days of guesswork.
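"Scrub to the timestamp" is, under the hood, a binary search over the sorted recording. A sketch assuming records sorted by a `ts` field, as any append-only log would be:

```python
import bisect

def frame_at(log, t):
    """Return the record closest in time to t (log must be sorted by 'ts')."""
    ts = [r["ts"] for r in log]
    i = bisect.bisect_left(ts, t)
    if i == 0:
        return log[0]
    if i == len(log):
        return log[-1]
    before, after = log[i - 1], log[i]
    return before if t - before["ts"] <= after["ts"] - t else after

# Hypothetical recording: one record per camera frame at ~10 FPS.
log = [{"ts": 100.0 + k * 0.1, "frame": k} for k in range(50)]
hit = frame_at(log, 102.34)  # the frame nearest the defect's timestamp
```

That one lookup replaces the hypothesize-adjust-reweld loop: the exact frame, detection, and robot command for the defect are a single query away.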

Step 5
Improve without stopping production
Trained a better vision model? Replay last week's production recordings through it. See exactly where it differs from the current model. If the new model is better on 99% of frames but worse on 1% — you see that 1% before deploying. Zero downtime for validation.
Before: stop production to test a new model? Trust training metrics that don't transfer to your specific parts? Nobody upgrades because it's too risky.
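Validating on recordings amounts to replaying the same frames through both models and flagging where they disagree. A hedged sketch; `old_model` and `new_model` are stand-ins for real inference functions, and the 0.05 mm tolerance is an assumed review threshold:

```python
def diff_models(frames, old_model, new_model, tol=0.05):
    """Replay recorded frames through both models; return the frames where
    predictions differ by more than tol. These get reviewed before deploying."""
    divergent = []
    for i, frame in enumerate(frames):
        a, b = old_model(frame), new_model(frame)
        if abs(a - b) > tol:
            divergent.append((i, a, b))
    return divergent

# Stand-in models predicting, say, seam offset in mm from a recorded frame.
old_model = lambda f: f["offset"]
new_model = lambda f: f["offset"] + (0.2 if f["offset"] > 0.5 else 0.0)

frames = [{"offset": x / 10} for x in range(10)]  # "last week's recording"
review = diff_models(frames, old_model, new_model)  # only the divergent frames
```

The output is exactly the short list the text promises: not aggregate training metrics, but the specific recorded frames where the new model behaves differently, inspected before anything touches production.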