Reliable AI Development Workflows
No todo.md. No finger-crossing. No ambiguity.
Just consistent, reliable PRs every time.
🎯 Dreams vs Reality
The Dream: "AI agent, please build my app" → magic happens → perfect code
The Reality: Coding agents are fun but too free-thinking for reliable, consistent results
AndAI's Approach: Narrow scope, add guardrails, ensure predictable outcomes
Why AndAI works where others don't
🔒 Guardrailed Execution
Unlike free-thinking agents, AndAI constrains AI within defined boundaries. You know exactly what's being done, why, and when.
🏗️ State-Driven Architecture
Built on a proper ticketing system that serves as state storage and state machine. Issue dependencies, issue types, sub-tasks - you know the drill. The ticketing system provides the context, and each task gets its own git branch.
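Concretely, each ticket status doubles as a state in the machine, and each state declares which issue types AndAI is allowed to pick up there. A minimal sketch using the same keys as the full config further down (the state names here are illustrative):
workflow:
  states:
    Backlog: { ai: ["Story", "Task"], description: "Issue is ready to be worked on." }
    QA: { ai: ["Task"], description: "Run linters and tests." }
    Done: { ai: [], is_closed: true }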
🛡️ Failure Handling
Fails gracefully when it has to: failure paths are part of the workflow, and a human is pulled into the loop when a ticket gets stuck. Because the happy path isn't the only path.
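In the workflow config this is simply a pair of outgoing transitions from the same state, one for success and one for failure, so a failed QA run routes the ticket back to a human-visible status instead of pretending everything passed. A minimal sketch using the same keys as the full config below:
workflow:
  transitions:
    - { source: QA, target: Deployment, success: true } # happy path
    - { source: QA, target: Backlog, fail: true }       # failure path: hand the ticket back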
🔄 Washing Machine Mode
Load up your Story tasks and let AndAI crank away at them overnight on cheap tokens. Come to work for a review session and prep the next batch.
🌐 Any Model Support
Works with any AI model - cloud or local. You can even assign a specific model to a specific command step. Your choice.
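For example, you can keep a cheap default model and reserve a larger one for a single command step. A minimal sketch mirroring the llm_models section of the full config below (model names and URLs are placeholders):
llm_models:
  - { name: "normal", model: "gemini-2.5-pro", provider: custom, base_url: "http://localhost:4000/v1/chat/completions", api_key: "sk-1234" }
  - { name: "large", commands: ["summarize-task"], model: "gemini-2.5-flash", provider: custom, base_url: "http://localhost:4000/v1/chat/completions", api_key: "sk-1234" }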
🔌 Local first
Everything is configurable and runs locally. Adjust the workflow config, add your projects, and start creating tickets.
"Go grab a coffee" mode workflow
AndAI is not an AI agent or an IDE integration. It is a long-running process - a reliable one, because it wraps the AI heavily in code. Pre-process the input, gather info and summarize, code, test, fix, auto-review, evaluate - loopty loop. Reliability takes time. More expensive upfront? Yes. But with a good workflow it will spit out the code you asked for, over and over again.
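That loop is expressed as command steps attached to a ticket status. A stripped-down sketch of one such status (the full, working version is in the config further down; the exact steps here are illustrative):
In Progress:
  steps:
    - { command: context-files, context: ["ticket", "comments"], remember: true }     # gather info
    - { command: aider, action: architect-code, prompt: "Implement the Task issue." } # code
    - { command: project-cmd, action: test, remember: true }                          # test
    - { command: evaluate, context: ["ticket", "comments"] }                          # evaluate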
Define & Queue
Prepare structured tickets with clear acceptance criteria, then move them from Init to Backlog.
Isolated Processing
AndAI picks up the ticket, moves it to the next status, and works in a dedicated branch using pre-defined command steps.
Quality Gates
Code is validated before any human review: AndAI tries to auto-fix issues, or moves the ticket to a status where a human takes over.
Review & Deploy
AndAI's deployment command merges the branch into its parent and moves the ticket to Done, then picks the next unblocked ticket.
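Those four steps map directly onto workflow states and transitions. A condensed sketch of the lifecycle, annotated with the step each transition belongs to (the complete version is in the config below):
workflow:
  transitions:
    - { source: Init, target: Backlog }                  # Define & Queue
    - { source: Backlog, target: In Progress }           # Isolated Processing
    - { source: In Progress, target: QA }                # Quality Gates
    - { source: QA, target: Deployment, success: true }  # Review & Deploy
    - { source: QA, target: Backlog, fail: true }        # auto-fix failed: hand back
    - { source: Deployment, target: Done }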
Configure Your Perfect Workflow
From simple 2-step processes to complex multi-stage pipelines - AndAI adapts to your needs. It's all YAML.
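At the small end, a workflow can be just one AI-worked status and a closed status. A sketch of such a 2-step setup, reusing the schema of the full config below (this minimal variant is an assumption, not taken from the docs, so treat it as an approximation):
workflow:
  transitions:
    - { source: In Progress, target: Done }
  states:
    In Progress: { ai: ["Task"], is_first: true, is_default: true }
    Done: { ai: [], is_closed: true }
  issue_types:
    Task:
      description: A single coding task.
      jobs:
        In Progress:
          steps:
            - { command: aider, action: architect-code, prompt: "Implement the ticket." }
            - { command: merge-into-parent }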
⚡ Full valid config
workflow:
  transitions:
    - { source: Init, target: Backlog }
    - { source: Backlog, target: In Progress }
    - { source: In Progress, target: QA }
    - { source: QA, target: Deployment, success: true }
    - { source: QA, target: Backlog, fail: true }
    - { source: Deployment, target: Done }
  issue_types:
    Story:
      description: |
        Represents a significant feature or component that requires multiple Tasks to implement.
        Scope: Entire feature or major component, potentially spanning multiple files and modules.
      jobs:
        Backlog: { steps: [{ command: next }] } # <-- Pick it up for work
        In Progress: # <-- Start working
          steps:
            - { command: context-files, context: ["wiki", "ticket", "comments"], remember: true }
            - command: summarize-task
              comment: true
              context: ["issue_types", "project", "wiki", "parents", "ticket", "comments", "children"]
              prompt: |
                Analyze all available information and prepare a detailed, improved description of the current issue.
                Suggest how to split the current Story issue into smaller-scope Task issues.
                Make sure each Task issue is small enough to be implemented in a single code file.
            - command: create-issues # <-- Create sub-tasks, move to QA and stay there until they are Done
              action: Task
              prompt: Split the current issue into Task issues.
              context: ["issue_types", "project", "wiki", "parents", "ticket", "last-5-comments", "children"]
        Deployment: { steps: [{ command: merge-into-parent }] }
    Task:
      description: |
        A unit of work focused on a single coding task. Scope: Only one code file or a specific part of it.
      jobs:
        Init: { steps: [{ command: next }] } # <-- Prio from Init (defined in "priorities:")
        Backlog: { steps: [{ command: next }] }
        In Progress:
          steps:
            - command: aider
              action: architect-code
              summarize: true
              comment-summary: false
              context: ["project", "wiki", "ticket", "last-5-comments"]
              prompt: Implement (code) the given Task issue based on the ticket description and last comments.
            - { command: project-cmd, action: reformat }
            - { command: commit, prompt: "linter changes" }
        QA:
          steps:
            - { command: project-cmd, action: lint, remember: true }
            - { command: project-cmd, action: test, remember: true }
            - { command: context-files, context: ["ticket", "last-3-comments"], remember: true }
            - { command: context-commits, context: ["ticket", "comments", "parent-comments"], remember: true }
            - command: ai
              prompt: |
                If there are any linter or test errors then pinpoint the exact files and places at fault.
                If there are no errors then answer with "Linter and tests are OK".
              remember: true
            - { command: evaluate, context: ["ticket", "comments"] }
        Deployment: { steps: [{ command: merge-into-parent }] }
  states: # <-- Define ticket states (statuses) and whether AndAI works in them.
    Init: { ai: ["Task"], is_first: true, is_default: true }
    Backlog: { ai: ["Story", "Task"], description: "Issue is ready to be worked on." }
    In Progress: { ai: ["Story", "Task"], description: "Analyze Issue and plan how to work on it." }
    QA: { ai: ["Task"], description: "Test Task issues; a human tests the Story." }
    Deployment: { ai: ["Story", "Task"], description: "Merge code into the parent Issue branch." }
    Done: { ai: [], is_closed: true, description: "Issue is completed." }
  priorities:
    - { type: Task, state: Deployment }
    - { type: Task, state: QA }
    - { type: Task, state: In Progress }
    - { type: Task, state: Backlog }
    - { type: Task, state: Init }
    - { type: Story, state: Deployment }
    - { type: Story, state: QA }
    - { type: Story, state: In Progress }
    - { type: Story, state: Backlog }
    - { type: Story, state: Init }
projects:
  - identifier: "andai"
    name: "Andai"
    description: "Golang project"
    git_path: "/andai/.git"
    git_local_dir: "/home/ubuntu/andai/.git"
    final_branch: "main"
    commands:
      - { name: "test", command: ["make", "test"], ignore_err: true, ignore_stdout_if_no_stderr: true }
      - { name: "lint", command: ["make", "run-lint"], ignore_err: true, success_if_no_output: true }
      - { name: "reformat", command: ["gofmt", "-s", "-w", "."], ignore_err: true, success_if_no_output: true }
    wiki: Coding assistant. More text about the project here.
redmine:
  db: redmine:redmine@tcp(localhost:3306)/redmine # things from docker-compose
  url: "http://localhost:8080"
  api_key: "2159cef2fb6c82c4f66981f199798781e161c694" # chill, it's not a secret that needs hiding
  repositories: "/var/repositories/"
llm_models:
  - { name: "normal", temperature: 0.2, model: "gemini-2.5-pro", provider: custom, base_url: "http://localhost:4000/v1/chat/completions", api_key: "sk-1234" }
  - { name: "large", commands: ["summarize-task"], temperature: 0.5, model: "gemini-2.5-flash", provider: custom, base_url: "http://localhost:4000/v1/chat/completions", api_key: "sk-1234" }
coding_agents:
  aider: # <-- other coding agents coming...
    config: "/home/ubuntu/www/aiwork/.andai.aider.yaml"
    config_fallback: "/home/ubuntu/www/aiwork/.andai.aider.fallback.yaml"
    model_metadata_file: "/home/ubuntu/www/aiwork/.andai.aider.model.json"
    timeout: "180m"
    map_tokens: 512
    task_summary_prompt: |
      I need you to REFORMAT the technical information above into a structured developer task.
      DO NOT implement any technical solution - your role is ONLY to organize and present the information.
# This is valid, but not fully tested. Please check out /docs/examples/ if you want something battle-tested.
$ andai validate config
Is valid
$ andai go
Ready to Build Reliable AI Workflows?
Stop crossing your fingers. Start shipping predictable, quality code with AI assistance that actually works.