AI Coding Guidelines

These are team-wide guidelines for using AI tools at Aliz. They apply regardless of which tool you're using.

Core Principles

  • Treat output as a draft, not a deliverable. AI-generated code needs the same review as a PR from a new team member.
  • You own the code. If AI writes it and you accept it, it's yours — bugs, security holes, and all.
  • Context quality determines output quality. Vague prompts produce vague code. The more specific the input, the more useful the output.
  • Don't let AI bypass your process. AI-generated code still goes through code review, testing, and security review.

Context Management

Give the AI what it needs to help you well:

  • Keep the relevant files open in the editor's context window.
  • Use workspace instruction files for persistent, repo-level context — see Prompt Engineering.
  • Break large tasks into focused sub-tasks — "refactor this one function" beats "rewrite the whole module."
  • Provide concrete inputs: types, interfaces, error messages, test cases — not vague descriptions.
tip

The more specific the context you give, the less the model has to guess, and the fewer hallucinations you'll get.
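As an illustration of "provide concrete inputs": instead of describing a bug in prose, paste the actual type and the exact input that reproduces it. The `Invoice` type below is a made-up example, not a real interface from our codebase:

```python
from dataclasses import dataclass

# Paste the real type into the prompt, not a prose description of it.
@dataclass
class Invoice:
    total_cents: int  # integer number of cents, never a float
    currency: str     # ISO 4217 code such as "EUR"

# And paste the exact input that reproduces the problem,
# along with the real error message or traceback it triggers.
failing_input = Invoice(total_cents=-100, currency="EUR")
```

A prompt built from concrete artifacts like these leaves far less room for the model to guess wrong than "my invoice code breaks on some values."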

Reviewing AI Output

  • Read every line. Skimming is how bugs get through.
  • Check specifically for: correctness, edge cases, error handling, security, and style consistency with the rest of the codebase.
  • Run the existing test suite after accepting changes.
  • For larger AI-driven changes, run git diff before staging to review the full set of changes.

Security Considerations

Protecting Sensitive Information

danger

Never paste secrets, API keys, credentials, or PII into any AI prompt — not even "just for context." This applies to all third-party AI tools, whether browser-based or in-editor.

Be cautious with NDA-bound client code and proprietary business logic. Check with your manager before using any AI tools on client projects or with proprietary code.

AI-Generated Code Security Risks

AI models are trained on vast amounts of public code, including insecure code. Common pitfalls to watch for:

  • SQL injection via string interpolation
  • Missing input validation and sanitization
  • Hardcoded credentials or tokens
  • Overly permissive CORS configuration
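
The first pitfall is worth seeing concretely. This sketch uses an in-memory SQLite database with a hypothetical `users` table; the same principle applies to any SQL driver:

```python
import sqlite3

# Hypothetical users table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Unsafe: string interpolation lets the input become part of the SQL itself.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())  # every row comes back

# Safe: a parameterized query treats the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe_rows)  # []: no user is literally named "alice' OR '1'='1"
```

If AI-generated code builds a query with an f-string, `+`, or `.format()`, flag it in review and switch to placeholders.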

Watch for hallucinated package names: AI may suggest installing an npm package that doesn't exist or, worse, a typosquatted malicious one. Always verify package names on npmjs.com before installing.

caution

AI training data has a cutoff date. Code for rapidly changing APIs (cloud SDKs, third-party integrations) may reference outdated patterns. Cross-check generated code against the official docs of whatever library or API is involved.

When AI Helps Most

  • Boilerplate and scaffolding — CRUD routes, form components, test stubs
  • Writing and expanding tests, especially generating edge-case inputs
  • Explaining unfamiliar code or third-party libraries
  • Writing and improving documentation and code comments
  • Refactoring well-understood, well-tested code
  • One-off scripts — data migrations, file processing, CI helpers
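
For test expansion, the value is in edge-case inputs you might not think of yourself; you still verify every expected value by hand. A minimal sketch, with a hypothetical `slugify` helper standing in for real code under test:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical helper under test: lowercase, dash-separated slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Edge cases an assistant might propose; check each expectation yourself
# before accepting it into the suite.
cases = {
    "Hello World": "hello-world",
    "  leading spaces": "leading-spaces",
    "": "",
    "---": "",
}

for raw, expected in cases.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
```

Empty strings, whitespace-only input, and punctuation-only input are exactly the cases human-written suites tend to skip, and exactly where AI suggestions are cheap to generate and easy to verify.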

When to Be More Careful

  • Complex domain logic with subtle business invariants
  • Security-sensitive code — auth, authorization, cryptography
  • Performance-critical paths where algorithmic choices matter
  • Anything that touches PII, payments, or compliance-regulated data