How Genio is using AI in software engineering


3 min read · Published: 9 Jul 2025 · Mala Benn
From hype to help 

The buzz around AI in software engineering is loud, and often confusing. With bold predictions from tech CEOs and a rapidly growing market of tools, it is easy to get caught up in the excitement. At Genio, we are taking a more grounded approach. We are not replacing engineers with AI. Instead, we are giving engineers useful tools to work smarter.

This blog shares how we are using AI in Engineering at Genio, what is working well, where the limitations still are, and how we are thinking about AI as a valuable part of our long-term toolkit.

AI to support productivity, not replace it

You might have heard the bold claims that AI tools will replace mid-level engineers by 2025. From what we've seen, that is still far from reality. On the Technology Adoption Curve, Genio aims to sit in the Early Majority, open to AI’s real value but not swept up by hype. Our engineers are finding practical benefits when AI tools are used as assistants, not substitutes.

One of our most-used tools is Cursor, a fork of VS Code that adds smart autocomplete and contextual code chat. It speeds up common tasks like writing boilerplate code or applying simple refactors. However, it still needs a human to guide it, evaluate its output, and make the final call.
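To make "boilerplate" concrete, here is a minimal sketch of the kind of repetitive mapping code where tab-completion shines. The `ApiUser` and `User` types are hypothetical, invented purely for illustration; typically, once an engineer writes the first field or two, a tool like Cursor can complete the rest of the pattern for a human to review.

```typescript
// Hypothetical API response shape and domain type.
interface ApiUser {
  user_id: string;
  display_name: string;
  created_at: string;
}

interface User {
  id: string;
  name: string;
  createdAt: Date;
}

// A boilerplate mapper: mechanical and predictable, which is exactly
// the kind of code an AI assistant drafts well and a human verifies.
function toUser(raw: ApiUser): User {
  return {
    id: raw.user_id,
    name: raw.display_name,
    createdAt: new Date(raw.created_at),
  };
}
```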

We also have a budget available for engineers to trial any AI tools they'd like. The only requirements are that the trial is timeboxed and that they write up their thoughts, specifically on whether the tool is worth rolling out more widely.

Tools we’re using vs exploring

Here are some of the tools we actively use:

  • GitHub Copilot and Cursor offer smart autocomplete and in-editor suggestions that help speed up development and cut down on repetitive work. Both tools also include built-in chat features that can explain code, suggest improvements, and support small edits. While they show real promise, they still require a fair amount of human guidance and verification to be truly effective. From our experience so far, Cursor tends to deliver more accurate and helpful results.
  • ChatGPT and Gemini are great for asking "how do I do this" questions, generating shell scripts, or brainstorming ideas.

We also have some AI tools we're using, but still evaluating:

  • Claude Code: We’re exploring Claude Code, Anthropic’s command-line coding agent, as another option alongside our IDE tools. Unlike Copilot or Cursor, it runs from the terminal instead of inside your editor. Paired with MCP servers, it could automate tasks like schema checks, query optimisations, research, and generating boilerplate code or documentation (see the sketch after this list).
  • CodeRabbit: Offers conversational pull request reviews. It helps reviewers get quick context on a change and highlights potential issues, although we still rely on human reviewers to make the final decision.
  • Jules: An AI coding agent from Google that runs asynchronously in a secure VM. It can read your entire codebase and handle tasks like bug fixes, writing tests, and updating dependencies, all while keeping you in control through plan approvals and pull requests.
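To make the MCP idea concrete, here is a minimal sketch of a custom MCP server exposing a single tool, using the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, the `check_schema` tool, and its stubbed behaviour are hypothetical placeholders; a real schema check would inspect your actual database.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal MCP server exposing one tool. "check_schema" is a
// hypothetical placeholder for a real schema-drift check.
const server = new McpServer({ name: "schema-checks", version: "0.1.0" });

server.tool(
  "check_schema",
  { table: z.string() }, // parameters the agent must supply
  async ({ table }) => ({
    // A real implementation would query the database and report drift.
    content: [{ type: "text", text: `Schema for ${table}: no issues found (stub).` }],
  })
);

// Communicate with the agent (e.g. Claude Code) over stdio.
await server.connect(new StdioServerTransport());
```

Once registered in the agent's MCP configuration, the agent can call `check_schema` on its own while working through a task, which is what makes this pairing interesting for automation.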

Finally, there are tools or approaches we are not using yet. The AI coding space is evolving fast and that means we have to reassess tools on a regular basis to stay ahead.

  • AI for complex migrations often ends up introducing bugs or half-finished changes that take longer to fix than if we had done the work manually.
  • Auto-documentation can be helpful for summaries but lacks the depth and context to replace human-written documentation.
  • Exploratory testing using AI does not yet come close to a human tester’s ability to simulate real scenarios and uncover edge cases.

AI risks we're paying attention to

“The risk is not just that AI might be wrong, but that it looks so right you don’t question it. That can burn hours in debugging.”

AI tools are helpful, but they are not without problems. We are keeping a close eye on some common pitfalls:

  • Hallucinations happen when AI confidently suggests something that is completely wrong. The problem is that these suggestions often look convincing at a glance.
  • Over-reliance on generated answers can limit learning and growth, especially for junior engineers. If you skip the learning step and copy-paste an answer, you miss the chance to build your own understanding.
  • Tool fatigue is also a real risk. With new tools appearing all the time, it is easy to jump between them and end up with no consistent process.

How we are measuring the value of AI

Internally, we have had a lot of discussion about what metrics to track. Some of the ideas that have come up include:

  • Percentage of PRs that are AI-assisted (see the sketch after this list)
  • Tracking license usage for tools like GitHub Copilot or Cursor
  • Qualitative data: feedback from engineers on what worked and what did not
  • Measuring engineer sentiment and perceived productivity improvements
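As a sketch of how the first metric could be collected, here is a small script that counts merged PRs carrying an `ai-assisted` label, using GitHub's REST API via Octokit. The label name, the placeholder repository, and the labelling convention itself are assumptions for illustration, not an established Genio process.

```typescript
import { Octokit } from "octokit";

// Hypothetical convention: engineers add an "ai-assisted" label to PRs
// where an AI tool made a meaningful contribution.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Fetch the most recent 100 closed PRs (paginate for a fuller picture).
const { data: prs } = await octokit.rest.pulls.list({
  owner: "your-org", // placeholder
  repo: "your-repo", // placeholder
  state: "closed",
  per_page: 100,
});

const merged = prs.filter((pr) => pr.merged_at !== null);
const assisted = merged.filter((pr) =>
  pr.labels.some((label) => label.name === "ai-assisted")
);

const pct = merged.length
  ? ((assisted.length / merged.length) * 100).toFixed(1)
  : "0.0";
console.log(`${assisted.length}/${merged.length} merged PRs (${pct}%) were AI-assisted.`);
```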

We are also remaining cautious. A high number of licenses does not automatically mean better outcomes, and tracking the wrong thing can encourage unhelpful behaviour.

To conclude

The most important shift is not just in the tools we use but in how we think about engineering work. Engineers will spend less time physically typing code and more time thinking, problem solving, and designing solutions. AI is not taking over engineering. It is helping us do it better.
