
AI Can Write Your Automation Test Scripts — An Experiment with Gemini

A hands-on GenAI experiment that converts natural language test cases into automation scripts using Gemini

Mobile app testing is a necessary evil.
It ensures quality, but slows teams down—especially when QA teams document test cases but can’t automate them easily.

In this post, I’ll walk you through a practical GenAI-powered solution I built that translates natural language test cases into executable Maestro test scripts.

Whether you're a developer, tester, or just AI-curious, this is a glimpse of how AI can accelerate modern QA workflows.

🔍 Why This Matters

QA documentation and automation don’t speak the same language.

Teams often:

  • Document clear test cases in plain English

  • Lack time or skills to convert them into working automation

Maestro helps by offering a simple YAML-based syntax for mobile test automation, but still requires some coding comfort.

What if we could automate that conversion using AI?

That’s exactly what this project does.

🧠 The Notebook in Action

Here’s how the GenAI-powered pipeline works:

1. 🧾 Input: Natural Language Test Case

You write a simple test like:

"Open the app, tap on login, enter username and password, and verify you land on the dashboard."

2. ⚙️ Process: AI Converts to Maestro Script

Using Google’s Gemini model + prompt engineering, the system:

  • Understands the test intent

  • Extracts user actions and checks

  • Generates valid YAML syntax for Maestro
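To make this step concrete, here is a minimal sketch of how a few-shot prompt for Gemini can be assembled. The example pair, helper names, and instruction text are illustrative assumptions, not the notebook's exact prompt; the commented-out generation call shows the rough shape of the `google-generativeai` SDK usage.

```python
# Sketch of the few-shot prompt that steers Gemini toward Maestro YAML.
# The example pair and instruction wording are illustrative, not the
# notebook's exact prompt.

FEW_SHOT_EXAMPLES = [
    (
        "Open the app and verify the home screen is visible.",
        'appId: com.myapp\n---\n- launchApp\n- assertVisible: "Home"',
    ),
]

SYSTEM_INSTRUCTION = (
    "You are a mobile QA assistant. Convert the test case into a valid "
    "Maestro YAML flow. Use only Maestro commands such as launchApp, "
    "tapOn, inputText, and assertVisible. Output YAML only."
)

def build_prompt(test_case: str) -> str:
    """Assemble the prompt: instruction, worked examples, then the new case."""
    parts = [SYSTEM_INSTRUCTION]
    for nl_case, yaml_flow in FEW_SHOT_EXAMPLES:
        parts.append(f"Test case:\n{nl_case}\nMaestro flow:\n{yaml_flow}")
    parts.append(f"Test case:\n{test_case}\nMaestro flow:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Open the app, tap on login, enter username and password, "
    "and verify you land on the dashboard."
)

# The actual generation call would look roughly like:
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-2.0-flash")
#   yaml_script = model.generate_content(prompt).text
```

The few-shot example anchors the output format, so the model imitates the `appId` header and step list rather than inventing its own structure.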

3. 📤 Output: Executable YAML

You get a ready-to-run Maestro test like:

```yaml
appId: com.myapp
---
- launchApp
- tapOn: "Login"
- inputText: "username"
- inputText: "password123"
- tapOn: "Submit"
- assertVisible: "Dashboard"
```
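Saved as a flow file, the generated script runs directly with the Maestro CLI (the file name here is just an example):

```shell
# Save the generated YAML to a file, then run it against a connected
# device or emulator with the Maestro CLI.
maestro test login_flow.yaml
```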

🧱 Key Technologies Used

| Tool | Role |
| --- | --- |
| Google Generative AI (Gemini 2.0 Flash) | Understands and generates test steps |
| Gradio | Simple UI to upload test cases and view output |
| Pydantic + YAML | Validates and formats output scripts |
| Maestro | Test framework for executing scripts |

The entire experience is wrapped in a notebook interface for fast iteration and debugging.

🧩 Architecture Overview

Key Components:

  • Input Sources: Natural language steps or QA docs

  • GenAI Engine: Gemini-powered extraction, few-shot learning, grounding

  • Validation Layer: Ensures syntax correctness and intent match

  • Output: Executable YAML scripts for Maestro

🔁 Data Flow Explained

  1. Test Case Input: Either typed or extracted from QA docs

  2. Prompting + Generation: Few-shot prompt guides Gemini to output YAML

  3. Validation + Enhancement: Adds missing steps, resolves ambiguities

  4. Final Output: Clean YAML script, ready to drop into your test suite

🛠️ What Makes This Work

Some AI techniques used:

  • Few-shot prompting: shows the model examples to mimic

  • Structured output generation: enforces YAML syntax for Maestro

  • Document understanding: parses multiple-step instructions

  • Grounding: ensures only valid Maestro commands are used
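The grounding idea above can be approximated with a simple allow-list check on the generated steps. This is a stdlib-only sketch (the actual notebook uses Pydantic + YAML for validation), and the command list is deliberately partial, covering only a few common Maestro commands:

```python
# Stdlib-only sketch of the grounding/validation layer: flag any step that
# is not a known Maestro command. The allow-list here is illustrative and
# covers only a handful of commands.

ALLOWED_COMMANDS = {"launchApp", "tapOn", "inputText", "assertVisible", "swipe", "back"}

def validate_flow(yaml_text: str) -> list:
    """Return the unknown commands found in a generated Maestro flow."""
    errors = []
    for line in yaml_text.splitlines():
        line = line.strip()
        if not line.startswith("- "):
            continue  # skip the appId header, the '---' separator, blank lines
        command = line[2:].split(":", 1)[0].strip()
        if command not in ALLOWED_COMMANDS:
            errors.append(command)
    return errors

flow = 'appId: com.myapp\n---\n- launchApp\n- tapOn: "Login"\n- assertVisible: "Dashboard"'
assert validate_flow(flow) == []
assert validate_flow('- teleportTo: "Mars"') == ["teleportTo"]
```

A check like this catches the most common failure mode of LLM-generated scripts: plausible-looking commands that the target framework does not actually support.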

🚀 Why It’s Exciting

  • ⚡ Save engineering time converting test ideas to code

  • 🤖 Bring AI into QA workflows, not just dev tools

  • 🧠 Let testers focus on test logic, not scripting syntax

📥 Want to Try It?

If you're working in mobile QA, product engineering, or AI productivity, this project is a blueprint for:

  • Building internal tools

  • Accelerating automation pipelines

  • Bridging the gap between QA and Dev

Check out the notebook + setup guide here.

Stay curious,
Abhishek Sisodia
🔗 Follow me on X/Twitter | Connect on LinkedIn