Nebula Nibblers WinForms: When Codex Should Have Asked

Codex helped build and package a real C# WinForms arcade game as a Windows download. Then the experiment showed why fast AI coding still needs a human willing to test, question, and say no.

Introduction

Nebula Nibblers began with a practical question: could Codex help create a complete C# WinForms arcade game, build it into a working Windows .exe, and package it so readers could install and try it?

Yes. That is the immediate hook for anyone interested in the tech stack. This was not a mock-up, a code snippet, or a theoretical exercise. Codex helped produce a real WinForms desktop game and a downloadable Windows build, with the main cost being time, testing, and human review.

But not cleanly, and not without human correction.

Codex produced a playable WinForms game with menus, scoring, enemies, shields, waves, resizing, bonus cruisers, a Windows form icon, documentation, and a published executable. That represented a massive reduction in the time needed to move from idea to playable Windows release. It also shipped bugs, left menu buttons unwired, overcorrected design details, and mishandled a logo edit because it guessed instead of asking.

That is the useful story. AI-assisted development is not magic, and it is not useless. It is acceleration with risk attached. Codex behaved like a fast junior-to-mid developer: productive, tireless, and occasionally too confident about the wrong thing.

Nebula Nibblers became real because Codex moved quickly. It became better because a human stayed involved.

The Experiment

The goal was to build a Windows C# arcade game inspired by the fixed-screen alien-shooter pattern of classic arcade games, without copying copyrighted names, sprites, sounds, or assets.

The brief asked for:

  • a complete C# Windows game that compiles into an .exe
  • original cosmic enemies
  • player movement, shooting, lives, scoring, waves, enemy bullets, and destructible shields
  • menu, pause, game over, and wave-clear states
  • generated visuals and a clean retro arcade style
  • build and publish instructions
  • a Windows installer or executable suitable for download

Codex was told to start with a minimal playable version, then add menu, shields, scoring, waves, and polish.

That structure mattered. The AI had a clear target and a sensible build order, but the design decisions still had to be judged as the game took shape.

[Image: comparison of human and AI project contributions]

What Codex Built Well

The first major result was genuinely impressive.

Codex created the project structure, source files, build script, documentation, and a published Windows executable. The practical result was important: a C# WinForms desktop game that could be compiled, published, packaged, and shared with readers as a Windows download.

The game included a Windows Forms game loop, menu states, player and enemy bullets, destructible shields, scoring, high score tracking, generated visuals, starfield effects, wave scaling, and sound hooks.
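
The article does not reproduce the project's source, but the loop it describes follows a standard WinForms pattern: a form-bound timer advances game state and asks the form to repaint. A minimal sketch, with all names hypothetical:

    using System;
    using System.Windows.Forms;

    public class GameForm : Form
    {
        // A Windows.Forms.Timer drives fixed-interval updates on the UI thread.
        private readonly Timer _loopTimer = new Timer();

        public GameForm()
        {
            DoubleBuffered = true;          // avoid flicker when repainting
            _loopTimer.Interval = 16;       // roughly 60 frames per second
            _loopTimer.Tick += (s, e) =>
            {
                UpdateGame();               // move player, enemies, bullets
                Invalidate();               // request a repaint via OnPaint
            };
            _loopTimer.Start();
        }

        private void UpdateGame() { /* game state updates go here */ }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // draw starfield, enemies, shields, HUD here
        }

        [STAThread]
        static void Main() => Application.Run(new GameForm());
    }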

That is not trivial output. In a manual workflow, much of the time goes into scaffolding, wiring, naming, refactoring, testing, documentation, build scripts, publish settings, and small glue decisions. Codex reduced that friction dramatically.

It did not merely explain how to build the game. It created files, changed code, ran build checks, responded to bug reports, and republished the executable after fixes. That kind of fast iteration is one of the strongest practical uses of AI coding tools.

Hard Lesson One: A Game Can Build and Still Be Broken

The first published build was not the finish line.

During gameplay, the game threw a runtime error:

System.InvalidOperationException: Collection was modified; enumeration operation may not execute.

This is a common C# failure pattern: code loops through a collection while something else changes it. In this case, enemy bullet collision handling triggered a game-over path. EndGame() cleared _enemyBullets while the collision loop was still enumerating that same collection.

The project compiled. The executable ran. The game was playable. But a real gameplay path still crashed it.

That distinction matters. Compilation proves the code is syntactically valid. It does not prove the game loop survives real play.

Codex fixed the issue quickly once the crash was reported. It stopped collision handling after a state change and changed the enemy bullet/player collision path to a safer index-based loop.
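
The article names _enemyBullets and EndGame(); everything else in this sketch is a hypothetical stand-in. It shows the failure pattern and the style of fix described above, a reverse index loop that stops as soon as the game state changes:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    class CollisionSketch
    {
        // Hypothetical stand-ins for the game's fields.
        static List<Rectangle> _enemyBullets = new List<Rectangle>
        {
            new Rectangle(10, 10, 4, 8),
            new Rectangle(50, 50, 4, 8),
        };
        static Rectangle _player = new Rectangle(8, 8, 20, 20);
        static bool _gameOver;

        static void EndGame()
        {
            _gameOver = true;
            _enemyBullets.Clear(); // this is what invalidated the foreach
        }

        static void Main()
        {
            // Broken pattern: a foreach throws InvalidOperationException
            // because EndGame() clears the list mid-enumeration.
            //
            // foreach (var bullet in _enemyBullets)
            //     if (bullet.IntersectsWith(_player)) EndGame();

            // Safer pattern: reverse index loop, and stop as soon as the
            // game state changes so the cleared list is never re-read.
            for (int i = _enemyBullets.Count - 1; i >= 0; i--)
            {
                if (_enemyBullets[i].IntersectsWith(_player))
                {
                    EndGame();
                    break;
                }
            }

            Console.WriteLine(_gameOver ? "game over" : "still playing");
        }
    }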

Good fix. Bad original miss. The bug was found by playing the game, not by trusting the build output.

Hard Lesson Two: A Button That Looks Finished May Do Nothing

The next problem was simpler but just as revealing.

The Start Game and Quit buttons appeared on the menu, but clicking them did nothing.

That is not a deep architecture problem. It is a basic usability failure. The buttons were drawn, but the mouse interaction was not fully wired.

Codex fixed it quickly by adding hover behaviour, left-click handling, and cursor changes while keeping keyboard selection intact.
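
The menu code itself was not published, but in WinForms the shape of such a fix is usually to track button rectangles and handle the form's mouse events against them. A hypothetical sketch:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class MenuForm : Form
    {
        // Hypothetical owner-drawn menu buttons: rectangles, not Button controls.
        private readonly Rectangle _startButton = new Rectangle(350, 300, 200, 50);
        private readonly Rectangle _quitButton  = new Rectangle(350, 370, 200, 50);

        public MenuForm()
        {
            // Hover feedback: change the cursor when over a button.
            MouseMove += (s, e) =>
                Cursor = (_startButton.Contains(e.Location) ||
                          _quitButton.Contains(e.Location))
                         ? Cursors.Hand : Cursors.Default;

            // The missing piece in the first build: actually handling clicks.
            MouseClick += (s, e) =>
            {
                if (e.Button != MouseButtons.Left) return;
                if (_startButton.Contains(e.Location)) StartGame();
                else if (_quitButton.Contains(e.Location)) Close();
            };
        }

        private void StartGame() { /* switch from menu state to playing state */ }
    }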

The lesson is blunt: users do not care that a button is rendered. They care that it works.

This is exactly where human testing matters. Click the buttons. Resize the window. Die in the game. Clear a wave. Try the ordinary paths, because the bugs often hide in places that look too obvious to question.

The Resize Decision Showed Good Human Direction

The game used a fixed logical playfield. That was the right base for an arcade game because enemy movement, bullet lanes, shield placement, and collision timing all depend on predictable coordinates.

The question was whether the window should be resizable or maximizable.

Codex gave a good recommendation: keep gameplay at 900 x 720, render to an offscreen buffer, scale that buffer uniformly into the actual window, and use letterboxing or pillarboxing when needed.

That preserved the playfield while making the game more comfortable on modern screens. Codex did not try to make every coordinate responsive, which could have changed difficulty and collision behaviour.
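
The article gives the approach rather than the code. A uniform-scale letterbox in WinForms reduces to one scale factor and a centred destination rectangle, sketched here under those assumptions:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class ScaledForm : Form
    {
        // Fixed logical playfield, as described above.
        private const int GameWidth = 900, GameHeight = 720;
        private readonly Bitmap _buffer = new Bitmap(GameWidth, GameHeight);

        protected override void OnPaint(PaintEventArgs e)
        {
            // 1. Render the game at its fixed logical size.
            using (var g = Graphics.FromImage(_buffer))
            {
                g.Clear(Color.Black);
                // draw player, enemies, shields, HUD at logical coordinates
            }

            // 2. Scale uniformly: the smaller ratio wins, so nothing stretches.
            float scale = Math.Min((float)ClientSize.Width / GameWidth,
                                   (float)ClientSize.Height / GameHeight);
            int w = (int)(GameWidth * scale), h = (int)(GameHeight * scale);

            // 3. Centre the scaled image; the leftover margins become the
            //    letterbox (top/bottom) or pillarbox (left/right) bars.
            int x = (ClientSize.Width - w) / 2, y = (ClientSize.Height - h) / 2;
            e.Graphics.Clear(Color.Black);
            e.Graphics.DrawImage(_buffer, new Rectangle(x, y, w, h));
        }
    }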

This was a strong AI moment, but it still came from a human question. A human noticed the limitation, asked for the best approach, judged the answer, and then told Codex to implement it.

The Logo Mistake: When Codex Should Have Asked

One of the clearest failures was not a gameplay bug. It was an asset-handling mistake.

The Llamavision logo PNG was already transparent. Codex assumed every white pixel needed to disappear. The actual problem was the white square box that Codex had placed behind the logo.

Some white pixels were part of the logo and needed to stay. Codex treated the issue too broadly, as if the solution involved special white-pixel handling or image reformatting. That was the wrong interpretation.

The correct fix was narrow: keep the transparent logo intact and remove the unwanted white box behind it. This is exactly where Codex should have paused.

A logo is not just an image file. It is a brand asset. When the requested change is narrow, the AI should make a narrow change or ask first. It did not ask, and that was the mistake.
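
In GDI+ terms the narrow fix is small, because DrawImage already honours PNG alpha; nothing needs to be painted behind the image. A hypothetical sketch of the rendering call:

    using System.Drawing;

    static class LogoSketch
    {
        // Hypothetical: draw the menu logo onto the game's back buffer.
        public static void DrawLogo(Graphics g, Image logoImage, Rectangle logoRect)
        {
            // What caused the problem: a backing box painted first.
            // g.FillRectangle(Brushes.White, logoRect);   // the white square

            // The narrow fix: DrawImage honours the PNG's alpha channel, so
            // the transparent logo can be drawn directly with nothing behind it.
            g.DrawImage(logoImage, logoRect);
        }
    }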

Iteration Turned the Prototype Into a Game

Once the basics worked, the experiment moved into actual game design.

The start screen needed a points legend. The first wave needed fewer enemies. Later waves needed to add rows, then cycle back with higher speed.

Codex implemented the structure:

  • Wave 1: 3 rows
  • Wave 2: 4 rows
  • Wave 3: 5 rows
  • Wave 4: 6 rows
  • Wave 5: back to 3 rows, but faster
  • later waves repeat with higher speed tiers
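
That progression reduces to arithmetic on the wave number. A minimal sketch, using the row counts above and an invented speed multiplier for the tiers:

    using System;

    static class WaveSketch
    {
        // Waves cycle through 3, 4, 5, 6 rows; each full cycle raises speed.
        public static int RowsForWave(int wave) => 3 + (wave - 1) % 4;

        // Hypothetical speed tiers: +25% enemy speed per completed cycle.
        public static double SpeedForWave(int wave) => 1.0 + 0.25 * ((wave - 1) / 4);

        static void Main()
        {
            for (int wave = 1; wave <= 9; wave++)
                Console.WriteLine($"Wave {wave}: {RowsForWave(wave)} rows, " +
                                  $"speed x{SpeedForWave(wave):0.00}");
        }
    }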

That was a good division of labour. The human provided the design intention. Codex handled the implementation.

Then the classic bonus UFO idea was adapted into original bonus cruisers:

  • Extra Life
  • Rapid Fire
  • Shield Patch
  • Starburst

Codex’s first rapid-fire version used a timer. That worked mechanically, but it did not match the desired feel. The correction was clear: no timer, and rapid fire is lost on death.
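
The corrected mechanic needs no timer at all: rapid fire becomes a flag on the player that the death path clears. A sketch, with cooldown values invented for illustration:

    class PlayerSketch
    {
        // Hypothetical fire cooldowns, in game ticks.
        private const int NormalCooldown = 20;
        private const int RapidCooldown  = 6;

        public bool RapidFire { get; private set; }

        // Picking up a Rapid Fire cruiser: no timer is started anywhere.
        public void CollectRapidFire() => RapidFire = true;

        // The power-up persists until death, then it is simply cleared.
        public void OnDeath() => RapidFire = false;

        public int CurrentCooldown => RapidFire ? RapidCooldown : NormalCooldown;
    }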

Codex also changed the shield layout to three centred banks to create more side space. That solved one issue but damaged the intended arcade layout. The better design was four shields creating five firing lanes: three central lanes and two narrower side lanes. The side lanes only needed to be wide enough for the player turret to shoot cleanly past the shields.

The human corrected the design, and Codex restored the four-shield/five-lane structure.
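
The lane arithmetic is straightforward once the side lanes are fixed. A sketch with hypothetical dimensions inside the 900-wide logical playfield:

    using System.Drawing;

    static class ShieldLayoutSketch
    {
        // Hypothetical dimensions within the 900-wide logical playfield.
        const int FieldWidth = 900, ShieldWidth = 140, ShieldY = 560;
        const int SideLane = 60; // narrow outer lanes, just wide enough to fire past

        public static Rectangle[] FourShields()
        {
            // Remaining width after the two side lanes and four shields is
            // split evenly into the three central firing lanes.
            int innerGap = (FieldWidth - 2 * SideLane - 4 * ShieldWidth) / 3;
            var shields = new Rectangle[4];
            for (int i = 0; i < 4; i++)
            {
                int x = SideLane + i * (ShieldWidth + innerGap);
                shields[i] = new Rectangle(x, ShieldY, ShieldWidth, 48);
            }
            return shields;
        }
    }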

The Starburst cruiser then pushed the game beyond a simple homage. When hit, it gives points and releases shard-style bullets that can damage enemies, shields, or the player. That added risk, reward, and chaos without copying the original arcade game.
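
The article describes the behaviour rather than the code, but a shard release is typically a radial spread of ordinary bullets that the existing collision pass then checks against enemies, shields, and the player alike. A sketch with invented counts and speeds:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class StarburstSketch
    {
        // Hypothetical shard: a position plus a velocity, handled afterwards
        // by the same collision code as any other projectile.
        public readonly struct Shard
        {
            public readonly PointF Position, Velocity;
            public Shard(PointF pos, PointF vel) { Position = pos; Velocity = vel; }
        }

        // Release shards in an even radial spread from the destroyed cruiser.
        public static List<Shard> Release(PointF origin, int count = 8, float speed = 4f)
        {
            var shards = new List<Shard>();
            for (int i = 0; i < count; i++)
            {
                double angle = 2 * Math.PI * i / count;
                shards.Add(new Shard(origin, new PointF(
                    (float)(speed * Math.Cos(angle)),
                    (float)(speed * Math.Sin(angle)))));
            }
            return shards;
        }
    }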

This is how AI-assisted game development works when it is useful. The AI is not the designer of record. It is the fast implementer. The human supplies taste, balance, play feel, and the final call on what the game is becoming.

Human Oversight Was Not Optional

This project worked because the human stayed in the loop.

The human wrote the scope, tested the executable, reported the runtime crash, noticed the dead menu buttons, questioned the resize behaviour, directed the wave structure, corrected the rapid-fire mechanic, protected the four-shield/five-lane layout, added the Starburst cruiser idea, and caught the logo handling mistake.

That is not passive supervision. That is product ownership.

The lesson is not “do not use Codex.” The lesson is “do not abdicate judgment to Codex.”

A sensible AI-assisted workflow still needs:

  • clear initial instructions
  • small testable milestones
  • real gameplay testing
  • manual review of generated files
  • extra caution around assets and branding
  • human approval before packaging and release
  • willingness to challenge confident output when the result feels wrong

What This Experiment Actually Proved

Nebula Nibblers proved that Codex can help produce a working Windows game from a structured brief and iterate on it quickly.

It also proved that a working build is not the same as a finished product.

The game needed runtime testing, usability testing, design correction, asset protection, and human taste.

Codex was fast. Codex was useful. Codex was also wrong in ordinary, practical ways.

That is the real lesson. AI-assisted coding is not only about whether AI can generate code. It is about whether a human can guide, test, correct, and contain that output until it becomes something safe and usable.

Nebula Nibblers became real because Codex accelerated the work.

It became better because a human did not stop watching.

Where Codex Drifted

Codex also drifted.

It shipped a runtime crash. It drew menu buttons before wiring mouse clicks. It made rapid fire timed when the desired mechanic was death-based. It overcorrected the shield layout. It mishandled the logo by guessing at the wrong interpretation.

None of that means Codex was useless.

It means Codex needed supervision.

That is the middle ground people often skip. AI coding tools are not worthless toys, and they are not dependable autonomous developers. They are powerful accelerators that can make wrong decisions quickly.

The danger is not only that AI makes mistakes. Humans do too. The danger is that AI can produce something polished enough to lower your guard.

Where Codex Helped

Codex was genuinely useful.

It turned a structured idea into a working Windows game. It generated scaffolding, organised source files, added documentation, created build and publish steps, fixed reported bugs, and handled repeated feature changes quickly.

The most valuable part was momentum. The project did not stall at “I should make a game one day.” It became something that could be played, tested, fixed, packaged, and shared.

For small experimental software projects, that momentum is often the difference between a folder of notes and a working release.

Try the Game

A Windows setup download will be provided here for readers who want to try the finished WinForms build themselves.

Download for Windows: [download link]

This is not being presented as a commercial-grade arcade release. It is an experiment in AI-assisted software creation, packaging, testing, and correction.

The download lets readers experience the result, but the article is really about the process: what Codex made easier, what it got wrong, and why the human role did not disappear.

Play Nebula Nibblers

Try Nebula Nibblers for free: a Windows arcade game built through human and AI collaboration, tested, corrected, and packaged for download.

Requires Windows. 33 MB download.