Everyone wants the speed of AI without the stomach-drop of getting the numbers wrong. Large language models (LLMs) can absolutely make Business Intelligence (BI) faster: drafting SQL, cleaning data, and writing clear summaries. But they only work reliably when they’re guided by clear definitions and tidy data. In short: AI is the engine, but your data model is the steering wheel.
Think of a typical analysis as a six-step journey:
Agree the question. Humans shine here. You translate the business need (“Why did margin dip last quarter?”) into something answerable.
Find the data. People pick the right sources and know the caveats; AI can suggest where to look.
Prepare the data. This is AI’s sweet spot. With sensible column names and clean tables, LLMs can speed up joins, cleaning, and transformations.
Analyse. AI drafts queries and quick checks; humans choose methods and sense-check the logic.
Interpret. Turning numbers into decisions, risks, and trade-offs is still a human job.
Communicate. AI drafts the first pass; you tailor for audience and stakes.
When things go wrong, it’s usually because the model guessed what “revenue,” “active user,” or “fiscal year” means—and guessed badly. If your definitions are fuzzy, AI will multiply the fuzziness at high speed.
The only route to quality outcomes: governed context
LLMs can’t read your mind. Give them a simple “context bundle” so they stop guessing:
Plain-English glossary. Write down what key terms mean (“Net Revenue excludes returns, includes shipping fees,” etc.).
Simple, well-named tables. Clear relationships (what links to what), consistent date logic, and correct levels of detail.
Traceability. Know where each number comes from so you can check it when something looks off.
Access rules. Who can see what, and which version is the truth.
Short notes on edge cases. Fiscal calendar quirks, common exclusions, and “gotchas” that often trip people up.
When this context is available to the AI (even as short notes it can “look up” while answering), quality jumps and rework drops.
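To make this concrete, here is a minimal sketch of such a bundle as short notes prepended to a prompt. Every glossary entry, table name, and fiscal date in it is an illustrative assumption, not a recommended schema; in practice the notes might live in your semantic layer or a retrieval store.

```python
# A minimal sketch of a governed "context bundle". All glossary entries,
# table names, and fiscal dates below are illustrative assumptions.

CONTEXT_BUNDLE = """\
GLOSSARY
- Net Revenue: gross sales minus returns; includes shipping fees.
- Active Customer: at least one completed order in the trailing 90 days.

MODEL NOTES
- fact_sales: one row per order line (daily grain); joins to dim_customer on customer_id.
- Fiscal year starts 1 March, so FY24 Q3 covers Sep-Nov 2023.

EDGE CASES
- Returns processed up to T+7 are included in Net Revenue.
"""

def build_prompt(question: str) -> str:
    """Prepend the governed context so the model answers from definitions, not guesses."""
    return (
        "Use only the definitions and notes below. "
        "If a term is not defined, say so rather than guessing.\n\n"
        + CONTEXT_BUNDLE
        + "\nQUESTION: " + question
    )

print(build_prompt("Monthly margin by product family for FY24 Q3, excluding returns"))
```

The same notes can be served through retrieval rather than pasted verbatim; what matters is that the model reads your definitions before it writes SQL.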
A real-life example
In a recent retail pilot, we connected an AI assistant to a Power BI model and a one-page business glossary. Before that, the assistant kept inventing joins and misreading the fiscal year start, so analysts spent time debugging “plausible” but wrong results. After we locked down definitions (“net revenue,” “active customer,” promotional periods) and added short lineage notes (“this measure includes returns processed up to T+7”), the same prompts produced dependable SQL and clearer commentary. Cycle time fell, and—more importantly—trust went up.
Working patterns that keep you safe
Name things on purpose. Use human-readable column names and consistent grains (daily, monthly, customer level, etc.).
Write the one-pager. Capture 10–15 definitions and three common edge cases. That alone prevents most AI mistakes.
Ask better questions. Include grain, filters, and time windows in your prompt (“Monthly margin by product family for FY24 Q3, excluding returns”).
Check the basics automatically. Row counts, nulls, duplicates, and a couple of known-good comparisons (e.g., last month’s board pack); see the sketch after this list.
Keep an audit trail. Log prompts, queries, and outputs so you can explain decisions later.
Train the team to critique AI. Teach the common failure modes: hallucinated joins, silent type casts, time zone/fiscal drift.
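Here is a minimal pandas sketch of what “check the basics” can look like on an AI-drafted query result. The key columns, the net_revenue column, and the 0.5% tolerance are assumptions for illustration, not standards; wire in whatever your known-good source is (such as last month’s board pack totals).

```python
# A minimal sketch of automated sanity checks on an AI-drafted query result.
# Column names and the 0.5% tolerance are illustrative assumptions.
import pandas as pd

def basic_checks(df: pd.DataFrame, key_cols: list[str], known_good_total: float) -> list[str]:
    """Return a list of failed checks; an empty list means the basics pass."""
    failures = []

    # 1. Row count: an empty result usually means a bad join or filter.
    if df.empty:
        failures.append("row count is zero")

    # 2. Nulls in key columns: silent join misses often surface here.
    for col in key_cols:
        if df[col].isna().any():
            failures.append(f"nulls in key column '{col}'")

    # 3. Duplicates at the declared grain: a classic hallucinated-join symptom.
    if df.duplicated(subset=key_cols).any():
        failures.append("duplicate rows at the declared grain")

    # 4. Known-good comparison: flag drift beyond 0.5% of a trusted total.
    total = df["net_revenue"].sum()
    if abs(total - known_good_total) > 0.005 * known_good_total:
        failures.append(f"net revenue {total:,.2f} drifts from known-good {known_good_total:,.2f}")

    return failures

# Example: fail the run (and log it to the audit trail) if anything trips.
result = pd.DataFrame({
    "month": ["2023-09", "2023-10", "2023-11"],
    "product_family": ["A", "B", "C"],
    "net_revenue": [120_000.0, 95_000.0, 110_000.0],
})
print(basic_checks(result, key_cols=["month", "product_family"], known_good_total=325_000.0))
```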
A skeptic’s view, and why it helps
The danger isn’t wild nonsense—it’s confidently wrong answers that look tidy. If your model or definitions are sloppy, AI will produce tidy-looking nonsense faster than ever. If they’re tight, AI will scale your good practice. That’s why “do the boring basics” is not old-fashioned; it’s how you make AI safe and useful.
Bottom line
LLMs make analyses faster, especially in the middle of the workflow. But lasting value comes from the foundations: clear definitions, simple models, and light-touch governance. Treat your data model as the real prompt; the clearer it is, the better the AI performs.
How Keyrus can help
Keyrus brings the pieces together so AI speeds you up and keeps you right. We help you:
Define a practical business glossary and “source of truth” metrics.
Design a clean semantic layer (Power BI/Tabular, dbt, or Microsoft Fabric) with the right grains and relationships.
Add lineage, tests, and observability so errors are caught early.
Build lightweight context retrieval so AI answers use your definitions, not guesses.
Deploy secure AI assistants (cloud, VPC, or on-prem) with audit trails and access controls.
Coach teams to prompt well and review smartly, reducing rework while keeping decisions defensible.
If you want the speed of AI without the surprises, start by sharpening the prompt that matters most: your data model. Our team at Keyrus can help you optimise productivity, automate routine tasks, and focus on innovation through AI. Contact us at sales@keyrus.co.za.