If you’ve used Cursor to help you code, you’ve probably noticed a pattern: sometimes it saves your day, and other times it leaves you cleaning up its mess.

When I started using Cursor in our enterprise projects (large teams, rigid coding standards, and endless internal libraries), it felt like rolling dice every time. Sometimes the AI-generated code was perfect, exactly how I'd imagined it. Other times, it felt like someone unfamiliar with our codebase had jumped into the project, written some code, and left us to deal with it.

I realized that AI assistants worked extremely well when we were very specific: explicitly naming every file that needed to change, describing exactly what each function should do, what the tests should cover, and so forth. But let's be honest: we don't always have the luxury of providing such detailed instructions. Often, our tasks arrive as high-level tickets with broad descriptions and little context, requiring the AI assistant to handle many changes at once.

In these situations, getting the AI to produce useful, consistent code in one shot is rare. More often, we end up spending extra time reviewing, adjusting, and aligning its output with our team's coding style and patterns.

Another big issue appears in large-scale enterprise environments, especially those with service-oriented architectures, where you often have 20, 30, or even 100 different services. Many of these services share the same programming language and coding patterns. Keeping an identical copy of the Cursor coding rules in each of these repositories quickly becomes impractical. Any small change means manually updating dozens of repositories, creating needless overhead and frustration.

That’s when I started thinking about a simpler approach: what if we had a central place to define these coding standards once and share them seamlessly across multiple teams and projects via Cursor Project Rules? The goal was straightforward—no duplication, easy maintenance, and consistent AI-generated code everywhere.
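To make that concrete: in Cursor, a Project Rule is just a small file the assistant loads alongside your prompt, typically stored under `.cursor/rules/` as an `.mdc` file with a short metadata header. Here is a minimal sketch of what one such shared rule could look like; the guidelines and the `@acme/http-client` package name are invented for illustration, and the exact frontmatter fields may differ in current Cursor versions.

```md
---
description: Backend TypeScript service conventions shared across our repositories
globs: ["src/**/*.ts"]
alwaysApply: false
---

<!-- Illustrative guidelines only; replace with your team's actual standards. -->

- Use our internal `@acme/http-client` wrapper instead of calling `fetch` directly.
- Every exported function must declare an explicit return type.
- New endpoints require an integration test under `tests/integration/`.
- Log through the shared structured logger; never use `console.log` in services.
```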

This led me to build Cursor Project Rules, a VS Code extension that connects to Git repositories containing your team’s coding standards. Once linked, the AI assistant automatically aligns its output to your established practices, even for tasks described at a higher level.
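Under the hood, the idea is simple. The sketch below shows the general mechanics in plain Node/TypeScript (not the extension's actual code): pull a shared Git repository of rules and copy its rule files into the project's `.cursor/rules` directory, where Cursor picks them up. The repository URL and the `rules/` subdirectory layout are placeholders.

```ts
import { execSync } from "node:child_process";
import { cpSync, existsSync, mkdirSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Placeholder URL for the central rules repository; use your own.
const RULES_REPO = "git@github.com:acme/engineering-rules.git";

function syncRules(rulesRepo: string, projectRoot: string): void {
  const checkout = join(tmpdir(), "shared-cursor-rules");

  // Fetch the latest rules: shallow clone on first run, fast-forward pull afterwards.
  if (existsSync(checkout)) {
    execSync("git pull --ff-only", { cwd: checkout, stdio: "inherit" });
  } else {
    execSync(`git clone --depth 1 ${rulesRepo} "${checkout}"`, { stdio: "inherit" });
  }

  // Replace the project's .cursor/rules directory with the shared rule files.
  // Assumes the rules repo keeps its .mdc files in a top-level "rules/" folder.
  const target = join(projectRoot, ".cursor", "rules");
  rmSync(target, { recursive: true, force: true });
  mkdirSync(target, { recursive: true });
  cpSync(join(checkout, "rules"), target, { recursive: true });
}

syncRules(RULES_REPO, process.cwd());
```

Running something like this by hand in every repository is exactly the chore the extension is meant to remove: the rules live in one place, and each project simply stays subscribed to them.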

But there’s an important reality here: the quality of your AI-generated output will always depend heavily on the context you provide. While a tool like Cursor Project Rules simplifies context-sharing and avoids duplication across repositories, the real work—and the real challenge—is defining these rules in the first place. It requires engineering teams to come together, discuss, and agree on shared coding standards. Ideally, a smaller group of engineers will lead these discussions, document emerging standards, and ensure they’re shared broadly.

This process not only solidifies coding practices but also improves communication across teams. Often, individual teams naturally create internal standards without sharing them, leading to fragmented practices. Consolidating these standards into shared rules helps unify the entire technical ecosystem, significantly easing tasks beyond coding, such as transferring ownership of projects and services between teams.

Ultimately, creating and maintaining a central repository of coding standards boosts productivity, fosters consistency across projects, and makes your engineering organization more robust and adaptable.

If you’re experiencing similar frustrations—spending too much time cleaning up after your AI assistant—give Cursor Project Rules a try.

Have your own AI-coding horror stories or success stories? I’d love to hear them—I know I’m not alone!