“Pull requests tied to AI-generated code should always be reviewed by experienced engineers who understand the code, the business logic, and the compliance context,” i-GENTIC AI’s Timsah says. “Organizations should also prioritize transparency and lineage by treating AI-authored code like any other third-party dependency.”
Timsah adds: “They need full traceability into who wrote it, what model generated it, and under what parameters, which makes it easier to audit and remediate issues later.”
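One lightweight way to capture that lineage is to attach a provenance record to every AI-assisted change. The sketch below is a minimal Python illustration; the field names and the commit-trailer format are assumptions rather than a standard schema, and real implementations might store the same data in an artifact registry or audit log instead.

```python
# Minimal sketch of a provenance record for AI-assisted changes.
# Field names and the trailer format are illustrative, not a standard schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIProvenance:
    human_author: str      # engineer who reviewed and owns the change
    model_name: str        # assistant/model that generated the code
    model_version: str
    temperature: float     # generation parameters relevant to reproducibility
    prompt_reference: str  # pointer to the stored prompt/session, not the prompt itself
    generated_at: str

def as_commit_trailers(p: AIProvenance) -> str:
    """Render the record as git commit trailers so it travels with the code."""
    return "\n".join(f"AI-{k.replace('_', '-')}: {v}" for k, v in asdict(p).items())

record = AIProvenance(
    human_author="jane.doe",
    model_name="example-code-model",
    model_version="2025-01",
    temperature=0.2,
    prompt_reference="prompt-store://sessions/1234",
    generated_at=datetime.now(timezone.utc).isoformat(),
)
print(as_commit_trailers(record))
print(json.dumps(asdict(record), indent=2))  # or ship to an audit system
```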
Mitigation strategies
AI coding assistants can be a force multiplier for development teams, but only if enterprises build guardrails to manage the associated risks.
“With strong governance, automated oversight, and human accountability, organizations can harness the speed of AI without multiplying vulnerabilities,” i-GENTIC AI’s Timsah advises.
Other experts offer recommendations for mitigating the risks associated with AI coding assistants:
- Integrate security tooling with AI coding assistants, for example by taking advantage of MCP (Model Context Protocol) servers.
- Limit the volume of AI-generated changes on a per-project basis so that pull requests remain small enough to review properly.
- Enforce automated checks in CI/CD, including secret scanning, static analysis, and cloud configuration controls; a rough sketch of such a gate follows this list.
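As an illustration of that last point, the sketch below shows a pre-merge gate that runs a secret scanner and a static analyzer over the files changed in a pull request. The tools named (gitleaks, bandit) and their flags are examples only, not a recommendation from the experts quoted here; substitute whatever scanners the pipeline already standardizes on.

```python
# Illustrative pre-merge gate: fail the build if scanners flag issues in the changes.
# Tool names and flags are examples; adapt to the scanners your CI/CD already uses.
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a command, stream its output, and return its exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed relative to the target branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line.strip()]

def main() -> int:
    files = changed_files()
    if not files:
        print("No changes to scan.")
        return 0

    failures = 0
    # Secret scanning across the repository (example: gitleaks).
    failures += 1 if run(["gitleaks", "detect", "--source", "."]) != 0 else 0
    # Static analysis limited to the changed Python files (example: bandit).
    py_files = [f for f in files if f.endswith(".py")]
    if py_files:
        failures += 1 if run(["bandit", "-q", *py_files]) != 0 else 0

    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```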
Mitigating the flaws created by AI coding assistants requires a different mindset, i-GENTIC AI’s Timsah says.
“Enterprises should use AI to watch AI by deploying agentic AI solutions that automatically scan AI-generated code against policies, security standards, and regulatory requirements before code is merged,” he argues.
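In outline, such an “AI watching AI” gate might collect a pull request’s diff, ask a review step to check it against a short policy checklist, and block the merge on any violation. In the sketch below, the call_policy_reviewer function and the POLICIES list are hypothetical placeholders for whatever agentic review service and policy set an organization actually deploys; only the overall flow is the point.

```python
# Hedged sketch of an "AI reviews AI" pre-merge policy gate.
# call_policy_reviewer and POLICIES are hypothetical placeholders.
import subprocess
import sys

POLICIES = [
    "No hard-coded credentials or secrets.",
    "Input from external sources must be validated.",
    "Personal data handling must follow the data-retention policy.",
]

def pull_request_diff(base: str = "origin/main") -> str:
    """Collect the diff that the reviewer will be asked to evaluate."""
    return subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def call_policy_reviewer(diff: str, policies: list[str]) -> list[str]:
    """Hypothetical placeholder for an agentic review service.
    Here it only does a trivial keyword check so the sketch runs end to end;
    a real gate would send the diff and checklist to the review model."""
    findings = []
    if "password =" in diff or "api_key =" in diff:
        findings.append(policies[0])
    return findings

def main() -> int:
    diff = pull_request_diff()
    if not diff:
        print("Nothing to review.")
        return 0
    violations = call_policy_reviewer(diff, POLICIES)
    for v in violations:
        print(f"POLICY VIOLATION: {v}")
    # A non-zero exit code blocks the merge in most CI/CD systems.
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main())
```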
Enterprises should also adopt shift-left security and continuous monitoring.
“Security checks cannot be bolted on at the end of the pipeline,” Timsah says. “They must be integrated directly into CI/CD processes so that AI-generated code receives the same scrutiny as open-source contributions.”
Pynest’s Rylko adds: “We treat AI assistants as ‘junior developers’ — their code is always checked by seniors.”