Agentic Services & Emerging Technology

AI and ML as delivery tools and embedded in solutions. DOGRAG methodology for grounded AI outputs.

Overview

GoSource has been adopting emerging technologies since the company was founded. We were among the first Australian firms to build on AWS when it offered only S3 and EC2. We used blockchain early enough to learn where it works and where it doesn’t. We embedded machine learning into production government systems years before the current AI wave. That pattern of early adoption followed by honest assessment is how we approach AI and agentic services today.

What We Offer

AI presents two opportunities for our clients.

As a delivery tool. Our developers use AI coding assistants daily (Claude Code, Aider, Cursor, Cline). The same coding standards, testing, and review practices apply to AI-assisted code as to hand-written code. AI-assisted development is systematised across the organisation with consistent quality expectations, not left to individual experimentation.

Embedded in delivered solutions. We build AI capabilities into client systems where they deliver measurable results:

| Application | Example | Evidence |
| --- | --- | --- |
| Predictive compliance | ML models targeting audit resources at highest-risk entities | DAFF Export Assurance |
| Automated classification | ML-driven auto-classification against regulatory taxonomies | Finance RMaaS, 500M+ records |
| Process mapping | Generative AI for business process analysis | ABF Trade Modernisation |
| Document processing | AI-accelerated compliance assessment and entity resolution | RBA Online, 600+ member companies |
| Conversational interfaces | LLM-powered access to complex data for non-technical users | DoH Ask MCF |

We deliver on Microsoft Azure AI Foundry and AWS AI/ML services, selecting the platform that fits the client’s existing cloud ecosystem.

Bring the model to the data

Government clients with “no cloud” policies on sensitive data can’t send documents to cloud-hosted AI services. We deploy open-weight LLMs (70B+ parameter class) on hardware within the organisation’s security boundary instead. System inventories, capability maps, strategy papers, and architecture documentation never leave the network.
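As a rough sketch of how this looks in practice, the snippet below builds a request for a locally hosted open-weight model served through Ollama's `/api/generate` endpoint. The model name, prompt shape, and helper function are illustrative assumptions, not the production configuration; the point is that the HTTP call targets localhost, so documents never cross the security boundary.

```python
import json

def build_local_llm_request(document_text: str, task: str,
                            model: str = "llama3.3:70b") -> dict:
    """Hypothetical helper: package an extraction task for a local model.

    The payload fields (model, prompt, stream, options) follow Ollama's
    /api/generate request format; temperature 0 keeps extraction
    as repeatable as possible.
    """
    return {
        "model": model,
        "prompt": f"{task}\n\n---\n{document_text}",
        "stream": False,
        "options": {"temperature": 0.0},
    }

payload = build_local_llm_request(
    "System X depends on Service Y.",
    "List the system dependencies in this document.",
)
body = json.dumps(payload)

# The actual call stays inside the network boundary, e.g.:
# requests.post("http://localhost:11434/api/generate", data=body)
```

Because the endpoint is on-premises hardware, the same pattern works for system inventories, strategy papers, and architecture documentation alike.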

The key architectural insight is to break complex AI tasks into many small, focused extraction steps that build a structured knowledge graph incrementally, then run deterministic queries over the graph. This produces more reliable and auditable results than single-pass processing by a frontier cloud model.
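A minimal sketch of that pattern, with invented system names and with plain functions standing in for the narrow LLM extraction calls: each focused step adds facts to a simple graph, and the final answer comes from a deterministic query over the graph rather than a single large model pass.

```python
from collections import defaultdict

# Simple knowledge graph: subject -> {(predicate, object), ...}
graph = defaultdict(set)

def add_fact(subject: str, predicate: str, obj: str) -> None:
    graph[subject].add((predicate, obj))

# Step 1: one focused extraction pass finds system-to-system dependencies.
add_fact("PaymentsAPI", "depends_on", "LedgerService")
add_fact("ReportingUI", "depends_on", "PaymentsAPI")

# Step 2: a separate, equally narrow pass extracts ownership.
add_fact("LedgerService", "owned_by", "Finance")

def dependents_of(target: str) -> set:
    """Deterministic query: all systems that transitively depend on target."""
    found = set()
    for subject, facts in graph.items():
        if ("depends_on", target) in facts:
            found.add(subject)
            found |= dependents_of(subject)
    return found

print(sorted(dependents_of("LedgerService")))  # ['PaymentsAPI', 'ReportingUI']
```

Each extraction step is small enough to review and re-run in isolation, and the query layer is ordinary code, which is what makes the overall result auditable.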

DOGRAG

Standard RAG retrieves relevant context and feeds it to an LLM. Our Domain Ontology Grounded RAG (DOGRAG) methodology adds an ontological grounding layer that constrains AI outputs to established domain frameworks. This prevents the common failure mode where LLMs produce plausible but architecturally inconsistent outputs.

  1. Define the domain ontology. Requirements analysis uses user-centric design (epics, features, stories, personas). Architecture uses C4 (context, containers, components, code). Decomposition uses domain-driven design (bounded contexts, aggregates, services).
  2. Ground the retrieval. Context is filtered and structured according to the ontology, giving the AI the right framework, not just the right data.
  3. Constrain the generation. Output validation enforces ontological compliance. Outputs that drift are rejected and regenerated.
  4. Validate within bounded context. Each AI-generated component is validated against its defined scope, interfaces, and responsibilities.
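The constrain-and-validate loop in steps 3 and 4 can be sketched as follows. This is an illustrative toy, not the production DOGRAG implementation: the ontology is a single set of bounded contexts, and `generate` is a stand-in for an LLM call whose first attempt deliberately drifts outside the ontology so the rejection path is visible.

```python
# Toy ontology: the bounded contexts a generated component may belong to.
ONTOLOGY = {"bounded_contexts": {"Licensing", "Compliance", "Reporting"}}

def validate(output: dict) -> bool:
    """Ontological compliance check: the component must sit inside a known
    bounded context and declare at least one interface."""
    return (output.get("context") in ONTOLOGY["bounded_contexts"]
            and isinstance(output.get("interfaces"), list)
            and len(output["interfaces"]) > 0)

def generate(attempt: int) -> dict:
    # Stand-in for an LLM call; attempt 0 drifts outside the ontology.
    if attempt == 0:
        return {"context": "Billing", "interfaces": []}
    return {"context": "Licensing", "interfaces": ["POST /licences"]}

def constrained_generate(max_attempts: int = 3) -> dict:
    """Reject-and-regenerate loop: only ontologically valid output escapes."""
    for attempt in range(max_attempts):
        candidate = generate(attempt)
        if validate(candidate):
            return candidate
    raise RuntimeError("no ontologically valid output produced")

print(constrained_generate()["context"])  # Licensing
```

The essential property is that validation is deterministic code, so a drifting output is rejected mechanically rather than relying on a reviewer to notice it.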

Principles

  • Guardrails before acceleration. We establish architectural boundaries (bounded contexts, interface contracts, test coverage) before adopting AI-assisted workflows. AI amplifies good practices and bad ones equally.
  • Developer ownership. The developer is responsible for every line of code regardless of how it was written. If an AI-generated component can’t be understood, reviewed, tested, and maintained by the team, it shouldn’t be accepted.
  • Honest about the frontier. We are transparent about what works, what is experimental, and where reliable AI-assisted delivery ends today.

Policy Alignment

  • Australian Government Policy for the Responsible Use of AI in Government (v2.0) — human oversight, transparency, accountability, and impact assessment.
  • Australian Government AI Technical Standard (v1) — 42 technical standard statements covering the AI system lifecycle.
  • ACSC Essential Eight / ISM — AI-assisted development operates within the same security framework as all our work.

Evidence

  • Case Study: Export Assurance — ML for predictive compliance across the $80B Australian agricultural export market.
  • Case Study: Records Management as a Service — ML-powered auto-classification; 500M+ records; whole-of-government platform.
  • Case Study: ABF Trade Modernisation — Generative AI for business process mapping and future-state cargo architecture.
  • Current: RBA Online Platform — AI for entity resolution, document assessment, and compliance automation across 600+ member companies.
  • Staff: Michael Nelson — Published author on AI-assisted development with LLM agents; Staff Engineer.
  • Staff: Luke Tankey — AI-assisted coding practices at DEWR; project-specific guidelines and tooling.
  • Staff: Niko Kresic — LLM proof-of-concept applications using Langchain.

Tools & Technologies

  • AI Development: Claude Code, Aider, Cursor, Cline, Zed
  • LLM Frameworks: Langchain, custom DOGRAG implementation, RAG pipelines
  • Machine Learning: Python ML libraries, predictive modelling, classification models
  • Cloud AI: Azure AI Foundry, Copilot Studio; AWS SageMaker, Comprehend, Rekognition
  • Local Deployment: Llama 3.3 70B, Qwen 2.5 72B, DeepSeek-R1 70B; constrained decoding (Outlines, llama.cpp grammars); vLLM, Ollama