Flintstones example: University international student applications

A university transformed its convoluted, 20-year-old international student application process by adopting a Flintstones pilot, dramatically reducing agents' support workload and securing buy-in for further development. The approach saved millions and delivered in 18 months what previous attempts had failed to achieve over a decade.

Let's make the Flintstones approach more tangible with another example from the charity sector, this time at one of Australia's largest universities. The university had been considering replacing its 20-year-old international student application process, which was powered by vendor customisations to the student administration system. It's a complex area: hundreds of agencies across different countries, many edge cases, and intricate rules for some courses.

Previous attempts to size the project had indicated a multi-year program, including the feasibility phase. Several attempts to get the project off the ground over the previous decade had been halted. The original implementation was poorly adopted and resulted in manual and phone-based workarounds. In some cases, the redevelopment project was cancelled after the initial stages of delivery; in most cases, the program stalled during the feasibility phase because the projected budgets ran into the tens of millions. Each false start wasted investment in analysis, solution design, architecture, and project management, with most of the cost paid to external consultants. Various solutions were considered, including purchasing new software, paying the vendor to develop new customisations, and using the university's IT Service Management system as a workflow solution.

I'm guessing this example is already bringing to mind some non-projects you've experienced. Most people have witnessed at least one, usually several. I started with two simple questions to understand "why are we doing this?", covering both the problem and the solution: "What is the problem?" and "How will we know when we've solved it?" The answers were not readily at hand. However, over three or four one-hour meetings, we were able to frame both the problem and the solution more effectively.

In rare cases, I have been able to achieve this in a single meeting. However, it's difficult to get the right people to the first meeting because of a catch-22. The right people to answer these questions are those experiencing the primary pain from today's problems. But until we've had a few conversations, we're not clear on what the problem is. So, as we narrow down and discuss our problems in more detail, we need to introduce different parties to the conversation.

Through this process, many different types of problems were raised as candidates for "the one problem we need to solve first." We needed to narrow this down because a program with more than one goal has more than one sponsor, and those goals and sponsors usually pull in different directions. This is something I learned from my project rescue days: solve one big problem at a time, then loop back and clean up the smaller problems in later projects with different goals and sponsors.

People described over 1,000 change requests against the current application process. Students were frustrated, agents were frustrated, and internal staff were overrun with work. We discussed at least 20 significant problems. Through a structured Q&A process, we settled on the main problem: "we have to do too much support work with the agents just to get one valid student application." Stated as a success question, this became: "Ask the Customer Service representative: have we dramatically reduced the inbound contacts from an agent needed to get their valid student applications?"

This question captures two important things. First, we are clear about which internal people we need to ask. Second, the answer cannot be grey; it is yes or no. This definition is more effective for two reasons. One, the problem is stated concisely enough that the entire project team, including stakeholders and the wider audience, can hear it once and internalise it. This is crucial, because only a project team where everyone understands the problem has any hope of solving it. Two, an unambiguous measure of success, phrased as a question, means we don't have to settle technology architecture or functional scope before testing one (or several) solutions on a small scale.

At this point, we have broken the back of our feasibility work, regardless of whether we run the program traditionally or leverage this new statement as a governance model for a different type of feasibility phase, which I refer to as a Flintstones pilot. We chose the Flintstones option because it allowed us to start immediately, without needing to decide on vendors, technology, or scope, and without asking for major funding. We could achieve immediate results by running a cost-effective pilot. I will write separately about the process, criteria, and logic for defining a Flintstones pilot.

The result was the selection of a single Customer Service representative from a pool of around 100, together with one of the agencies he dealt with. Within this agency, there were several named users of the existing solution, and the project team had direct access to these people. During the pilot, we started with only one of those users, but by the end of the pilot, the whole agency was using the system. Even within this user group, we narrowed the scope further to exclude the complex programs: rule complexity was determined not to be the primary driver of the current issues, so there was no value in building for those rules until we had assessed the feasibility of the solution.

The pilot lasted three weeks, and we could answer "yes" to the success question by Tuesday of the second week. This follows the same pattern I've seen in over 250 projects. As is almost always the case, news of the pilot, despite its small size, quickly spread throughout the Customer Service community. The one agent involved was amazed by the solution's effectiveness, despite its tiny and temporary nature.

The agent played an integral part in the presentation to the steering committee to secure funding for the next phase: building out and integrating the solution so it could be rolled out to around a third of the university's Customer Service representatives. The justification for this funding was self-sufficient, meaning the investment would be repaid in operational savings within a year, even if we didn't proceed with future project phases. This meant we never had to ask for any multi-million-dollar investments, unlike all previous failed attempts. The total elapsed duration of this program ended up being 18 months, including the pilot and some pauses between phases to regroup and establish funding for the next one. The total cost of this program was less than the budget of the failed attempts and less than 20% of any previous estimate presented to the steering committee.

More examples are coming. Hit me up if you want them faster.

What part of the article raised the most doubts for you?


Andrew Walker
Technology consulting for charities
https://www.linkedin.com/in/andrew-walker-the-impatient-futurist/


