Twenty-six years on the front line. Not all of it went well.

How I got here

My first work on software development programmes was at Orange — testing software by following scripts, and seeing first-hand how the changes affected people's job roles and the automation they relied on. Six years there, moving from user-tester into combined service delivery and project management roles, picking up PRINCE2 Practitioner along the way. CRM project management came next — still software, still not ERP.

The move into ERP came via a Client-side transformation in industrial distribution; from there I spent eleven years SI-side, running ERP and CRM programmes for PLC clients across multiple platforms, including one of the largest private-sector ERP programmes in Europe at the time. More recently I've moved back Client-side on contract — diagnosing failing programmes, rebuilding governance, getting them across the line.

Most methodology authors have only seen one phase of that arc. The role distinctions in Keystone — Process Owners and SMEs, Programme Managers and Project Managers, Benefit Owners — exist because both sides of the table use the same words to mean different things, and most of the friction in delivery happens in the gap between them.

The method leans heavily on getting things right at the start. Pre-Programme alone has six stages, each one building on the last — because I've seen, too many times and from too close, what a poorly thought-out strategy does to everything that follows, from design right through to testing and on to benefits realisation. Or how execs who weren't engaged at the start end up treating the whole thing as the IT project developing over there somewhere. I once sat through a steering committee on a recovery programme where an executive sponsor — re-introduced after months of avoiding every prior touchpoint — opened with "why am I here, what's this got to do with me?" Self-preservation ranking above co-operation, in a forum that doesn't exist for any other reason.

The test cycles are thorough too — eight levels of them. To some that'll look excessive. But I've seen first-hand how software changes actually land on the people doing the work, and how badly that goes when the testing put in front of the user has been done by the wrong people, with scripts that don't match the real job. It isn't just about zero P1s and P2s. It's about whether the data and the process work for the people who've got to live with the system afterwards. The role distinctions in the testing framework — including the deliberate position that Process Owners shouldn't run UAT, and the refusal to treat NFT as a footnote — are what I wish someone had codified for me at the time.


What didn't go well

I have watched testing get compressed because the build slipped. I have watched boards treat the initial funding envelope as a final number, then act surprised when the Full Business Case comes in higher at the end of design — at which point the sponsor needs the CEO to sign off the delta, asks Procurement to review it for top cover, and £350k of stall accumulates while the SI team and contractors hired to push the programme forward sit on the bench, billable and idle, somehow ending up the ones blamed for the overrun. I have watched a Programme Manager be replaced at month four because the steering committee had lost confidence weeks before and hadn't said so. None of these were unrecoverable. All of them cost more time, money or trust than they needed to.

What I learned from each of those — slowly, and not always the first time — is in Keystone.

The eight-level testing structure with explicit role distinctions exists because managing the test cycles — and the defects they surface — is a demanding skill, best placed in the hands of a battle-hardened Client Test Manager who will fight to keep progress moving, knows how to test forward of blockers, and won't let the SI fix defects at the SI's own pace.

The two real board gates — and the deliberate refusal to label every checkpoint along the way as a "gate" — exist because gate-inflation is what board sponsors do when they want to feel in control of a programme they don't understand.

The four business case checkpoints exist because committing to build at the end of Discovery, with no firm SI pricing, is the most reliably expensive moment in an ERP programme.


What's in Keystone — and why

Keystone is not a methodology I invented. It's one I have refined for twenty-six years across thirty-plus programmes and projects — keeping what works under load, discarding what doesn't, codifying the role distinctions, governance gates and testing structure that most SI methodologies blur.

The pre-drafted artefacts philosophy came from running too many workshops where the conversation was "what should we do?" instead of "is this draft right?".

I've made sure the Change Management workstream is properly embedded in the method. Like the often-forgotten Client Test Manager, the Change Lead is a specialist role that no mid-to-large ERP implementation should go without. They see the change through to the end alongside the Benefit Owners, and plan and co-ordinate training and communications with the surrounding team. Far too often I've watched transformation leads ask for change management, then — when the change person arrives — have no clear sense of what "good" looks like.

The same years also taught me how methodology gets used as cover. Senior SI people have told me openly that they won't test the system because doing so "puts them on the hook." Others run their own methodologies dense enough that only they can navigate them — owned by them, understood by them alone. On more than one programme, getting the system properly tested meant paying a testing premium high enough to be off-putting, at which point many clients quietly settled for less. Keystone is opinionated because the alternatives are unhelpful. It is open because the patterns shouldn't be locked behind a day rate.

A note on how the site was built. The methodology is mine, codified across the years and programmes above. The written method pages, the artefacts, the decks and the Command Centre were drafted with AI as a writing partner, then checked against experience. I have used AI increasingly over the last two years, starting with a Client-side contract in 2024 where it accelerated requirements gathering, document generation and meeting synthesis. The judgement on what holds and what doesn't remains mine. The tool helped me get the pages out at a pace a solo consultant couldn't otherwise match.


Why publish it openly

I'm not selling Keystone. The methodology is free to use, adapt, or teach. I sell my time, on the programmes that benefit from someone who has seen this shape of problem before and has a structured way of working through it. None of that makes me the knight on a white horse. I'm only ever as good as the teams I work with — but what's on this site gives anyone who wants to use it a better chance of success, even if only partly adopted.

Publishing the method openly means the conversation with a prospective client starts with "is this the right framework for what I'm trying to do?" rather than "what is your methodology?". That's a more honest place to start.

If the method is useful to someone who isn't a client of mine, that's a fair outcome. If it's useful to a partner organisation that wants to adopt it, also fair. The asset I trade on is judgement; the methodology is a way of demonstrating that the judgement is real before you've spent any money on it.


If you'd like to talk through where your programme is and what the right next move looks like, the discovery call is a thirty-minute conversation. No pitch, no slide deck — a structured diagnostic grounded in the method.