    9 Jan 20

    Digital Transformation and your dead horses

    Digital Transformation is dead? Wait…What? We’ve only just started!

    Or perhaps you haven’t started! Don’t worry. You might be one of the myriad small or medium-sized enterprises that haven’t yet considered public cloud as an alternative to your on-premises technology suite, never mind considered it as part of a wider digital transformation effort.

    Perhaps, like some of your larger brethren, you will soon start the journey along the path of digital transformation. By the end of your effort it will likely be called something different. Those who remember the aspirations of the paperless office remember what might have been digital transformation’s forebear. Digital transformation comes in many forms depending on your viewpoint – caused by the confusion in the use of the term – as illustrated in the picture.

    The term itself is losing its buzz. It has become part of the everyday vernacular, and like so many manufactured terms before it, its meaning has been contorted, twisted and misunderstood to mean a multitude of different things, each of which probably deserves its own term. Perhaps you can remember the plethora of definitions for ‘cloud’ before we settled on what it has now become.

    What we do know is that, historically, 70% of complex transformations fail (McKinsey), 84% of digital transformations fail (Forbes) and 75% of IT projects fail to create adequate ROI (Gartner), according to those initiating them.

    Better put – and the above may be misquotes from some text – 70% of complex transformations do not achieve their stated goals. But there is no mention of what these goals are, just binary failure [or success] and the potential causes of it. Even if this is not entirely true, and is different from overall satisfaction with a transformation, there is enough truth here to make a good baseline.

    Who are you…and why are you doing that?

    Whilst this is a useful picture to show the scope of transformation within an organisation, there are a couple of dimensions that are not shown.

    The owner of the initiative:

    1. Enabling technologies: the CIO – obvious enough
    2. The COO is responsible for the outlined blue trapezoid (maybe a rhombus?) and is predominantly internal-facing operational processes
    3. Either the CMO or the CTO for the outward-looking aspect
    4. The CEO, with the CFO at their right hand, should be on the hook for industry-facing business model changes

    The reason for the change:

    Though it is less important in the context of this post, it will be covered in a future post. This is most simply put as Save Money, Grow Money, Make New Money by Roland Dieser in an interview with Rob Llewellyn:

    1. Changing the technology is about saving money and efficiency; mostly about automating the mundane and repeatable, and releasing people to do more valuable work
    2. Corporate execution is about growing money, by improving products and/or experiences, and making better decisions both within the business and about how to go to market
    3. Business strategy is about making new money, or making money in different ways; potentially disrupting the [assumed] norms in your market.

    What is failure?

    No one ever just fails. You must fail to do something – and that something is not to ‘digitally transform’ – that is not the end goal! What is it that the digital transformation is expected to achieve? The only example of this above is the mention of ROI.

    Goals must be set by the instigating sponsor for the initiative, and they must be FAST. Steer away from SMART goals, as they are too static in terms of scope and timescale; this is discussed in more detail in 'Are SMART goals really useful'. Everyone’s goals should be transparent and communicated, clearly linked to the strategy of the organisation. Make sure people know what they are going to be doing, how it contributes to the goal, and ideally how it benefits the customers of your business – I haven’t yet met a developer who isn’t turned off by ROI, but who doesn’t have a view on how their organisation is viewed in comparison with its competitors.

    How long will it take?

    We move slower than we think we do. Optimism bias means that we assume everything done is done perfectly: that there will be no outages or defects, that solutions will not be used or administered incorrectly, and that work will be well organised. We think, on the whole, it will all go well. The best place to look for affirmation that this is not the case is your peers. Ask people in your own industry how long it took them to undertake similar programmes, and how much it cost. Do not be tempted to say ‘we can do it better’. You might, but take it as a bonus, not a given!

    Act FAST and STOP!


    Failure rates as high as 70% can also be caused by rigid goals and timescales. Goals should be adaptive as markets evolve over multi-year timescales. Forcing a goal to be ‘realistic’ within a short timescale leads to sandbagging, and to transformation goals that are not ambitious enough. Conversely, does a goal that would be achieved in the quarter after an [arbitrary] time boundary has passed really mean failure?

    One way to reduce the amount of time that initiatives take is to not guarantee them a huge pool of money up front. Transformation funding might be ring-fenced, but that shouldn’t mean that each constituent initiative will be funded until it completes. The F in FAST facilitates this. Part of the frequent discussion should be:

    • Are we still doing the right thing?
    • How has your work contributed to the goal to this point?
    • Is the case for continuation still valid?
    • Can we have some more money now please?

    If any of the answers indicate that there is risk, then take appropriate action. And that might mean canning the initiative and moving on. At the risk of mixing my metaphors; stop throwing money at dead horses.
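    The checkpoint questions above can be sketched as a simple funding gate. This is a hypothetical illustration only – the `Checkpoint` structure, field names and threshold are invented for the example, not a prescribed model:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        still_right_thing: bool      # Are we still doing the right thing?
        contribution_to_goal: float  # How has the work contributed so far? (0.0 to 1.0)
        case_still_valid: bool       # Is the case for continuation still valid?

    def release_next_tranche(cp: Checkpoint, min_contribution: float = 0.1) -> bool:
        """Release more money only if every answer indicates low risk."""
        if not (cp.still_right_thing and cp.case_still_valid):
            return False  # 'can' the initiative: stop throwing money at dead horses
        return cp.contribution_to_goal >= min_contribution

    print(release_next_tranche(Checkpoint(True, 0.25, True)))   # prints True
    print(release_next_tranche(Checkpoint(True, 0.25, False)))  # prints False
    ```

    The point is not the code but the cadence: the gate runs at every frequent review, rather than once at initiation.
    
    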

    If you want help in defining the purpose of your transformation; determining, measuring and communicating your goals; or perhaps rooting out the dead wood in your IT portfolio, contact UnlokQ for a discussion.


    2 Jan 20

    5 things that IT conveniently forgets…then remembers again…and again! Part 2

    Continuing my look back at why IT has never been able to make good on its charter from part 1 of this post. Points 3 to 5 below.

    It is worth re-iterating the context that ‘Systems and Procedures’ departments precede computers by some decades. Their role was to design, simplify and measure business processes. Clerks (actors) manually recorded [vast amounts of] transactions in ledgers (databases), or journals (logs) and stored on shelves and cabinets (filesystems) under lock and key (security) – collectively systems. Procedures were meticulously recorded as repeatable playscripts (scripts) for the setup, recovery and operation of business processes.

    Systems are big.

    The frequency of transformation within industry is increasing; so-called industrial revolutions are happening at a faster rate. However, the amplitude of these changes is decreasing. Read this in the context of Peter Thiel’s manifesto. But what does this mean for your business? There are many companies that espouse the virtues of ‘Think Big, Start Small’, but very few appear to stick to the axiom. Execution looks more like ‘Start Small, Stay Small!’. In deference to the agile methodology, teams break up work into small components. They then release each incremental change at speed. But this is hardly transformative!

    Agile approaches to software development are quite evidently beneficial when done right. They improve time to market, recognise the value of the product early and create a fast feedback loop to learn from. As an aside, they may not always have a positive effect on the morale of employees.

    Zone to Win Triangle Model

    These approaches focus on the small. Business and IT leaders have forgotten how to think ‘BIG’. They assume that a large number of incremental improvements equates to transformative change. What is missing is an understanding of how the interconnectedness of the infrastructure layer (see image) affects operational practice and ultimately the customer ecosystem (business model) – and, more importantly, how layer 3 behaviours affect business processes and the technology required to operate them. The ‘small’ nature of agile products can lead to a micro-focus on products that make ineffective practices efficient, rather than creating an effective ecosystem. It’s a 2-way street. More of this in the scopes of agile architecture blog, coming soon.

    Customer experience expertise must combine with business process expertise and product knowledge within the Product Owner, Product Management or Portfolio Owner layers – depending on the scale of your business - in order to retain BIG thinking.

    Data is the problem

    Data is the new oil; or so the current slogan goes. More like data is the old oil! Just ask those in the EDM functions of the 60s. They knew that data was the key to success; they were also aware that they had a data redundancy problem – one which is still apparent today. Their goal was to consolidate data and make it available to the business. In reality, all they did was provide a streamlined method of accessing the same sets of redundant data.

    The additional problem that they caused was that the MIS teams – responsible for the DBMS – ignored data that was not compatible with the technology [in use]. In the 2000s technology moved on to being able to capture this unstructured data. But the fact that data was now accessible did not mean it was new. A can of Castrol GTX on the garage shelf that you knew was there in 1984 does not become new oil when you put it in your engine in 2020.

    Data has always been at the heart of business processes; the fact that IT has remembered this, and can [now] handle it, does not mean it is new data.

    Security is important


    Of course it is; IT has never forgotten that! What IT has forgotten is that cybersecurity is a subset of security. Its effectiveness is dependent on security within the business processes and with people – Gene Kim pointed this out in the well-read Phoenix Project, but many IT practitioners failed to understand the message. You need to address both; you can’t just stop at cyber. This means addressing physical security, and the behavioural traits of people. No amount of cybersecurity will stop people using default passwords, or at least it hasn’t until now. No amount of cybersecurity will prevent people disposing of half a million user records in a skip at the back of the building. Or holding the door open for people with a nondescript white badge, allowing them into the building.

    Cybersecurity is important, obviously! But, concentrating on the technology alone misses the system wide context in which it sits. Penetration testers such as those experts at secarma understand that testing encompasses the physical and behavioural as well as the technological configuration.

    Read Next...

    The first and second points of this post are in part 1

    Read next about the unsolved problems that have persisted within IT for decades

    How long will digital transformation exist?

    Human experience and Purpose driven organisations

    For a complete map of this series of blogs see here


    2 Jan 20

    5 things that IT conveniently forgets…then remembers again…and again! Part 1

    Once in a while it is worth looking back beyond the last 10 or 20 years of technology. There are some [depressing] themes that lead to the conclusion that IT has never been able to make good on its charter. Rebadged solutions with improved technology disguise the inconvenient truth that IT avoids its biggest problems. This is the first of a 2-parter, with items 3 to 5 in part 2.

    It is worth remembering that ‘Systems and Procedures’ departments - the forerunners of IT - precede computers by some decades. Their role was to design, simplify and measure business processes. Clerks (actors) recorded [vast amounts of] transactions in ledgers (databases), or journals (logs) and stored on shelves and cabinets (filesystems) under lock and key (security) – collectively systems. Procedures were meticulously recorded as repeatable playscripts (scripts) for the setup, recovery and operation of business processes.

    The 5 things most often forgotten are:

    IT enables business processes

    Can you describe how the thing that you are working on right now benefits your business?

    The reason that business bought into the use of computers, back in the 60s, was to automate business processes. Commonly called the [Electronic] Data Management function (EDM), its aim was to consolidate [redundant] data. Throughout the 70s the goals were to support quality objectives during the burgeoning era of Total Quality Management (TQM), in response to the Toyota Production System, and subsequently the increases in speed to market with the 80s consumer boom and globalisation in the 90s.

    Unfortunately the EDM function, later known as the Management Information Systems (MIS) function of the 70s and 80s, became fixated on exploring the bells and whistles of technology, forgetting its purpose. Large technology projects were not tied to any business strategy or product, led on by a fledgling software vendor community intent on selling business process in a box, with monolithic platforms.

    In the 90’s there was [some] realisation that IT needed to realign technology with business process as Business Process Re-engineering Programmes were initiated, but Y2K got in the way of these efforts. By the 2000’s service-oriented architecture (SOA) and ITIL purported to address this, but most IT organisations applied it to what they could control - their own department. The hard part – understanding the business – was seldom done in any meaningful way.

    By the 2010s the agile revolution had taken effect in the enterprise, and the concept of a Product Owner is now well known – even when the product is a service. The product-centric approach puts emphasis on customer engagement during development. There will be a future post on the negatives of scaling agile in the coming weeks.

    Everyone needs constant reminders to review the context of their work. Teams and products must have a justifiable purpose, and not just be job protection. Employers also need to recognise the need to dynamically re-skill as part of their strategy.

    Systems are not technology

    ‘Systems and Procedures’ departments, as were, clearly understood that the system was the combination of how people used processes and data.

    The view of these 'systems' has been obfuscated by the changing name of the department responsible for them; moving from [Electronic] Data Management, through [Management] Information Systems and Information Services, to Information Technology. With these name changes the function distanced itself from the system of which it was a part. It had an effect on the people working within the function – they became programmers, developers, software engineers. Even when IT uses the term ‘system’ – systems programmer or system administrator – it refers to a technology component or software program, not the business system. Whilst ‘Systems and Procedures’ looked towards business operations, IT has mostly been looking in on itself.


    Business Process Re-engineering remembered the system in the 90s, and in the early 2010s ‘systems thinking’ was in vogue for the enterprise. But, similarly to the way software vendors of the 80s viewed their ERP offerings, IT proposed a single [point] solution without really understanding the problem.

    Transformation necessarily includes people, processes and things. Some of those things might include technology, but it is not limited to it. We mustn't forget how technological change impacts processes and people, and vice versa. For example, testing does not finish with developers' functional tests and release to production; it continues in the form of feedback from real users, continuously.

    Read Next...

    The second part of this post continues with points 3 to 5

    Read next about the unsolved problems that have persisted within IT for decades

    How long will digital transformation exist?

    Human experience and Purpose driven organisations

    For a complete map of this series of blogs see here


    5 Dec 19

    5 reasons agility is latent in your business


    I recently attended 2 events that left me thinking that there is latent agility in SMEs; one hosted by a small service provider for manufacturing SMEs, the other by PwC, on the use of virtual reality in manufacturing. The latter presented research that the manufacturing sector was growing by only 1% in the UK. They attributed this to 6 reasons – which I will share when I get a copy of the deck (terrible memory).

    Surprisingly for me they were exactly the same reasons that were plain in the SME event, related to lack of agility and resistance to change. Surely the SME market was not exhibiting the same traits as its bigger enterprise counterparts in the supply chain, was it?

    Over years of dealings with large enterprise organisations, had I perhaps misunderstood that smaller businesses were infinitely more agile and responsive than their bigger brothers? Why did this not appear to be the case? Or was this assumption reserved only for start-ups rather than SMEs at large?

    I have some first thoughts on why that latent agility exists.


    1. Over-delegation

    SMEs can be too focused on achieving. They delegate too much responsibility to suppliers. The implication is that they do not stand back and ask ‘what should we be doing?’ rather than ‘what are we doing?’.

    When a business says ‘we do it this way, can you handle that?’, suppliers are all too keen to say ‘yes, we can do that, it’ll cost you £xx because we'll have to customise some things’.

    Selling customised solutions is 'sticky'. Whilst it may [initially] appear good, since organisations do not need to change processes, support and upgrades can soon become expensive, and migration away from the custom solution even more so. Ultimately it removes the potential for real improvement and exacerbates latent agility, by diverting budget from initiatives that can move your business forward into the money pit that is the status quo.

    Use an external consultant with an objective viewpoint to perform due diligence. They can help you:

    • Investigate which suppliers operationally and commercially match your needs
    • Explore which solution most closely matches your existing processes. Or understand whether you are willing to undergo more radical change
    • Understand where software vendors are willing to accommodate changes to their core product to meet your needs. After all, a solution supported and testable by a software vendor is preferable to a supplier supported one
    • Identify where it is possible to match business processes to software capabilities.

    2. Hero-worship

    Everyone knows Jack. Jack is the guy who knows how it works, and he knows what to do to make it do that thing that you need it to do. We’ll go ask Jack and it will get done.

    But being beholden to a single point of expertise means modifications to the system are slow. Your business cannot be nimble if it is constantly waiting for work to pass through a bottleneck; it slows down the entire machine that is your business – the epitome of latent agility. Jack is in high demand, and often this can’t be seen without looking at your business with the eyes of an ‘outsider’. And Jack has valuable information that he can take with him to other places of work, be they your competition or otherwise.


    Worse still there might be Diane, who also knows how to get stuff done. Sometimes it's even the same stuff. Now you have 2 areas of the business doing the same thing in 2 different ways.

    Make sure you do not have a Jack, or a Diane, in your organisation; otherwise you will waste time discovering what you should already know, and uncovering inefficiencies that are hard to remediate. Ensure that knowledge is shared, not only in documentation but in practice.

    Have a succession plan in place, and recognise that Jack and Diane could also be suppliers.

    3. Lack of knowledge

    There is no desire to adopt new technology. Less than 50% of SMEs are actively looking to move to cloud or exploring new technology that will make them more agile. This could be due to a number of factors, but they stem from a lack of understanding:

    There is little knowledge within the company of how existing capabilities can technically be moved to other platforms, be that IaaS, PaaS or SaaS. [Possibly because Jack hasn’t had time to look at it yet; he’s inundated by his backlog].

    There is little incentive for incumbent suppliers to move to the cloud – especially if they have no offering there – they have a vested interest in the status quo, and the cash-cow that customisation provides.

    It is also assumed that data privacy is covered if data only exists within the confines of the internal network. The implications of cloud are seen as too complex to even contemplate. Data sovereignty and other data governance concerns are avoided, ostrich-like. Then there are the intricacies of cost.

    Talk to other companies, even competitors, who have made similar bold decisions. Work with those who specialise in new technology, particularly those that provide [on-the-job] training as part of their remit.

    4. Illusions of Control

    There is a predominance of technology that gives an illusion of control, but it can also thwart progress. Well-meaning staff subvert processes that are over-restrictive. This provides a false view of process efficiency, and of your ability to choose the right tools as a result. Process discovery can only be based on the real process, not the documented one.

    Restricting access to Facebook, as an example, is wasted expenditure on technology. The fact that it is perceived to be unproductive is not the same as it being a security concern. Staff will circumvent this restriction by using their phones anyway. If people are using those same phones to work around other restrictions that are part of the business process, then you have a real problem.

    Which would you choose?

    • A. A sales rep who is knowledgeable of the consequences of accessing corporate data over an insecure mobile device, but gets the sale
    • B. A sales rep who is denied access to any client information whilst on the road, other than by phone call
    • C. A sales rep who uses a laptop with a VPN connection to the office and accesses client information over the secure connection
    • D. A sales rep who uses Office 365 on a company-controlled phone to view client information over the internet

    Security of your, and your clients’, data is – or should be – of paramount concern to you. Speak to experts who can keep your data as secure as it can be whilst keeping your employees happy, by making security as frictionless as possible; turning what could be a real cause of latent agility into a positive experience for your workforce. Train your staff on the reasons why things are the way they are, and the implications of not following the rules. Above all, trust them…until it’s time not to.


    5. Fear

    Things break…so 1. Understand the potential impact and 2. Have a plan.

    Resistance to change is often due to the fear that you’ll break something. ‘If it ain’t broke then don’t fix it’ is the mantra of laggards. Doing nothing because of fear is the biggest single cause of latent agility. Lack of any movement is by definition not agile, or anything else for that matter. Things NEED to change all the time, even if only to stay still.

    • New vulnerabilities require patching; avoiding this opens you up to a world of hurt.
    • Hardware goes out of support and needs to be replaced.
    • Operating systems and applications need to be upgraded because of new regulatory controls and supportability

    This is Business As Usual (BAU) stuff; if you’re not doing it, then why not?


    Building confidence with BAU change is an important step towards being able to introduce new, innovative change. With new change, whether it be the integration of a new business through M&A or the introduction of a new technology, the impact needs to be known so that it isn’t a surprise. Ensure that you have engaged someone with a broad architectural perspective, whether they are from inside or external to your business. They will be able to let you know the order in which things can be achieved and what dependencies there are, creating a malleable roadmap for the various stages of any initiative that incorporates BAU change with new change.


    What is critical in both cases is the ability to recover your operations if a change goes bad. If you can do this then recovery after a serious failure, corruption or breach is just a variation on a theme. But this capability is critical to the health of your business.

    Don’t do anything without a proven recovery plan; make it part of your operating principles, and make sure your suppliers are aware.

    The last thing you want is to leave the responsibility for patching and upgrades to a supplier, where each party assumes the other has recoverability covered.

    The backup plans for individual components coalesce into a disaster recovery plan for all technology services. But business continuity relies on people, and paper. Disasters are typically not short-term events; the business needs to be able to function. What are the critical systems that need to be restored first, what are the pieces of information that need to be available to whom, who is responsible for doing what, and who needs to be informed?

    Test it…regularly, not just at ‘go-live’. Wargame breach and failure scenarios. Make it routine.
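    The ‘what comes back first, and who is responsible’ questions above can be captured as a minimal, sortable runbook. This is an illustrative sketch only – the system names, tiers and owners are entirely made up:

    ```python
    # Hypothetical recovery runbook: order restores by criticality tier so the
    # plan answers "what comes back first, and who is responsible for it".
    systems = [
        {"name": "order-processing", "tier": 1, "owner": "Ops"},
        {"name": "reporting",        "tier": 3, "owner": "Finance"},
        {"name": "customer-portal",  "tier": 2, "owner": "Marketing"},
    ]

    restore_order = sorted(systems, key=lambda s: s["tier"])
    for s in restore_order:
        print(f"Restore {s['name']} (owner: {s['owner']})")
    ```

    Even a table this simple forces the conversation about priorities and ownership before a disaster, rather than during one.
    
    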

    Architectural oversight can provide all of the above. If you want to know more about how UnlokQ consulting can turn your latent agility into real potential, then get in touch below.


    18 Oct 19

    The evolution of the CIO

    This post was initially prompted by the last entry in a list of 6 by @CIOnline. I never got around to publishing it (2 years ago) but I read another post in CIO just this week. Seems I wasn’t far off current thinking 😊

    The word ‘adapt’ in the first post seems a little too small in scale to capture the potential change required in a central IT organisation. And note the use of the word central! Adaptation is to take something and make it fit a new purpose, in this case IT. But most will constrain their thinking by making assumptions. Since IT is a centralised function, most would assume that it must remain central.

    Align with business strategy

    As MuleSoft says, [in a rather different context], “Let’s break IT” – or, better put, let’s decentralise it. Rather than central IT working for the lines of business, often spreading themselves too thinly or in the wrong proportions, they need to become part of the lines of business. What Jim calls BusOps – I prefer BizOps. This can only happen if the things that IT produces are relevant to the business. We know that this is often not the case, as it is one of the cited causes of ‘shadow IT’.

    IT as a cost centre

    The most efficient model for IT is a central structure, with resources and skills shared. This has long been the mantra of proponents of efficiencies of scale. But, with shrinking budgets for traditional IT, those efficiencies are offset by inadequate effectiveness. If IT staff move into the LoB, the LoB then has dedicated skills that are directly relevant to its needs, on tap. The IT staff become part of a profit centre.


    Gig Economy?

    So we hive off parts of IT - or specific people with relevant skills - into the LoBs from where they are funded. But what if what IT [can] produce is not the right thing for the business?

    Any consumer choice should be based on the ability of the supplier to fulfil the need. They should not be forced down a single [internal] route. Simplistically, this is what leads to ‘shadow IT’. This may lead to an IT function that is larger than the work that it has to do.

    What should you do with those people who are not able to produce the right things? Let them languish in the vestiges of central IT – a ‘talent’ pool where they could have no work for a long time, and where they are a cost that the business has to withstand, and a severe one at that? Reskill them at a cost to the business?

    Or fire them! It sounds harsh, but standing still is often not an option for employers OR employees. Potential results include:

    1. Passing the responsibility to the employees themselves. It is in their interest to re-skill to allow themselves to find work, either with the same organisation [in the LoB] or with another supplier.
    2. Contracting, or the ‘gig economy’, as increasingly niche – so-called legacy – skills are marketable over a broader employer spectrum.

    The evolving CIO

    This change may not, at first glance, be very appealing to a CIO with a large fiefdom to oversee, with top-line erosion as good people move out of IT into the business. Their role becomes, in conjunction with enterprise architects and Chief Data Officers, the corralling of working practices into a workable set of guidelines, and giving structure to the technology approach. For example: shared services platforms (mostly around security), supplier selection, and how lines of business communicate with each other – the arterial data flows that allow the business to operate.

    After all, IT is Information Technology – the technology that allows information to be stored and shared.

    The Evolved CIO

    Relieved of the low-level drudge of trying to balance the books, the CIO now has a much smaller, but more impactful, role:

    • Understanding of the now and future needs of the business
    • Advertising upcoming needs to the wider market (gig economy included), such that it can more effectively serve your business
    • Protecting the business against risk

    Comment below, or come talk to me about creating and executing a strategy to make your IT more effective.

    14 Oct 19

    Are SMART goals really useful


    Recently I found myself, like many others, in the position of trying to define my goals for the upcoming year. Albeit this was different from other organisations, where they were set for me and my job was to accept them. The requisite delay in goal setting was a secondary concern – as it has been every year, and in every company before.

    In the meantime I found myself asking questions like:

    • Is setting my own goals what I should be doing? Are they genuinely my goals, or the goals of the company?
    • Do SMART goals work for me? If so, why does it feel like they don’t?
    • What is the difference between a goal and an objective, and do they both need to be SMART?
    • Why does it feel like they are cast in stone? And why is it so common, and acceptable, to set them after they are complete?
    • Why should there be short-, medium- and long-term objectives when the review cycle is yearly?

    SMART Goals?

    I couldn’t be the only one asking myself these questions [every year]. Turns out that I am not! Hurrah! But I appear to be part of a small minority that DOES question the validity of SMART goals.

    My first question I will explore at some later time. My observation is that employees’ personal aspirations are not always in line with those of their employers.

    My second question was addressed directly by an MIT paper on when SMART goals are not so smart:

    Simply put, for non-predictable or creative roles, SMART is not as applicable.

    BOOM! SMART is less applicable where organisational or customer demand could fundamentally change. A world where the funnel of work is dynamic, and may or may not change, is not conducive to SMART goal setting, since:


    1. The work may stop or change course suddenly as priorities change. To achieve an objective that serves a goal that no longer exists is futile.
    2. Objectives that are at a level that is achievable and realistic are not specific enough to be measurable.
    3. Measurable outcomes of work can lag the actual work by some significant, and unknown, time. Outcomes may negate, or aggregate with, other initiatives. These initiatives may also have outcomes that lag the work by varying amounts of time. Measurements, therefore, cannot be isolated to any specific contributor.

    What does this mean? Well, timeliness is clearly important in the review cycle. Reviews need to happen either at a frequency shorter than a typical initiative (an agile release, or a sprint?), or at the point at which the work is completed or a decision is made to stop (Kanban). Definitely not the (bi-)annual cycles that are commonplace. [I touch on possible reasons for this frequency later].

    Increased feedback frequency creates the opportunity to change objectives. When objectives change, new measurements must be put in place. However, the question remains: what measurements are truly valuable over these shorter timescales?

    It was some weeks later that I happened across an article from MIT. It proposes FAST as an alternative to SMART. BUT, aside from one key point, they are interchangeable [in my view].

    Frequently discussed. – OK, this addresses one of the concerns above. But it does not distinguish FAST from SMART; it is possible to do reviews at whatever frequency you like. FAST simply makes it explicit.

    Ambitious. – Combines achievable and realistic in a single term. It has connotations of a little more stretch in its meaning. Both variants would need to be time-bound to be meaningful. In this case SMART is more explicit.

    Specific. – Shared between FAST and SMART. The association with measurement is implicit in both SMART and FAST. Although SMART does make specific mention of it. In both cases the problems with measurement in the shorter review cycles remain the same.

    Transparent. – The game-changer, and worthy of further discussion.

    With SMART everyone has goals, defined by either the individual or the organisation. But no-one else gets to see what they are:

    • Are they comparable to peers?
    • Do they correlate, or conflict, with team or corporate goals?
    • Does everyone agree that they are achievable, given the current backlog of work?

    Open Goals

    FAST allows people to talk about the goals. As a result it facilitates the increased frequency of feedback.

    With hidden goals, people are reliant on their own [lack of] ingenuity to come up with a plan to achieve them. When objectives are open, others can contribute to how to achieve them, or assess whether they are realistic. Goal setting becomes a team sport. Ultimately, everyone ends up facing in the same direction, all vested in the progress of everyone else, as individuals or as part of a team.

    If transparency facilitates frequency, then what else is required? Relevancy! This too is a side effect of the conversations borne of transparency. Relevancy is the alignment of individual to team goals. Of team goals to business unit goals. Of business unit goals to corporate goals. It affects, and is affected by, the knowledge and capabilities of the individual and/or team, in turn affecting the quality of their (or their team’s) contribution to a higher goal. This higher goal will have a well-defined and persistent metric for measurement. The problem then becomes how to quantify the contribution, presumably as some percentage. I have no empirical evidence here.

    Next: why does the review process currently happen on yearly timescales? Simple, it’s a result of corporate reporting on yearly timescales (annual reports). Organisational objectives change (or persist) on yearly cycles. This becomes an arbitrary trigger for ‘goal setting’ conversations. It needs to STOP! The corporate goals are whatever they are at whatever time you refer to them. The relevancy of individual goals is on the scale of the individual review period, NOT the organisational one.

    Annual reports also drive investment cycles, and the budgeting for the company. Part of this is the budget allocated to staff promotions. The historical legacy is that [in most organisations] this budget is spent all at one time, in promotion rounds, when the chosen few are promoted. Performance review and promotion should not be linked with financial cycles, but with quality. [People] managers should be aware that the pot of money exists throughout the year, and that it is available to remunerate employees capable of stepping up the ladder whenever they are able. If you have high-quality staff that should be promoted, then a pot of money should not be a constraint. The only thing to control is that the fund is not depleted too soon. If it is, then there is a budget setting problem: those in the people department have not recognised the quality of their own staff, and have budgeted incorrectly.


    Forget SMART, forget FAST. Focus on Transparency and Relevance

    Break the link between budget (cost) and promotions (quality).

    Slow-moving organisational and financial cycles are only of benefit to shareholders, or the taxman. NOT your employees.

    If you have concerns with SMART goals, or problems setting effective goals for your employees, come and speak to me about changing your approach to being Transparent and Relevant, and see UnlokQ’s Fractional CIO offering for SMEs.

    14 Feb 18

    Acceptable Imperfection, Algorithms and the Quantum Mechanics of Life

    Only the quantum processes of the very small can produce high-fidelity copies of DNA, the building blocks of life. But even though there is an extremely low error rate of around one in one billion, it is not perfect; errors DO happen.

    These errors are what has allowed life to adapt over hundreds of millions of years. Historical knowledge is codified in DNA - human and otherwise - and forms the basis for life on Earth, as we know it.

    But what does this have to do with algorithms?

    Biases defined in the past will be unavoidably coded into algorithms; it cannot happen any other way, without a time machine. The algorithm must learn, but if left unchecked the biases will continue ad infinitum. They must also unlearn, as posited in this MIT post. Algorithm authors must facilitate this learning by injecting errors to replicate adaptability, in the same way as enzymatic replication of DNA does.
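    The idea of injecting errors to replicate adaptability can be sketched as a minimal evolutionary loop. This is purely an illustration, not anything from the post: the bitstring target, mutation rate and hill-climbing acceptance rule are all my assumptions, but they show how imperfect copying, rather than perfect replication, is what lets a candidate adapt.

    ```python
    import random

    def evolve(target, generations=300, mutation_rate=0.02, seed=0):
        """Copy a candidate repeatedly, injecting rare 'errors' (mutations).

        Perfect copying would freeze the initial (biased) state forever;
        the occasional error is what allows adaptation toward the target.
        Returns (best_candidate, starting_fitness, final_fitness).
        """
        rng = random.Random(seed)
        candidate = [rng.randint(0, 1) for _ in target]

        def fitness(c):
            # Number of positions matching the (possibly changing) target.
            return sum(1 for a, b in zip(c, target) if a == b)

        start = fitness(candidate)
        best = candidate
        for _ in range(generations):
            # Imperfect replication: each bit flips with small probability.
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in best]
            # Keep the child if it is no worse (neutral drift allowed).
            if fitness(child) >= fitness(best):
                best = child
        return best, start, fitness(best)
    ```

    With a zero mutation rate the loop never moves off its random start; with too high a rate most copies are worse and are discarded, echoing the point below that adapting too fast is mutation, and often death.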

    Unfortunately, the indicative timescales that nature has provided us are not very human-friendly:

    Opponents of an algorithmic world cite these biases of the past as abhorrent, insisting they must be removed at all costs, NOW! The binary worlds of computing and mathematics evoke ideals of perfection, immediately. This is particularly notable when current diversity aspirations are using data from 20th-century society. Simply not tenable!

    Proponents assert that mathematics and computing can improve on the error rates found in the squishy world of life. But this misses the point. Adaptation takes a long time to test, and exists on evolutionary timescales. Adapting too fast is mutation, and often leads to death.

    From either viewpoint above the notion of perfection is wrong. Whatever algorithms humans (or machines) produce will be imperfect. Things will improve, given more time and more data.

    We need to strive for acceptable imperfection.

    9 Feb 18

    IT Operations, but not as we know it

    There are those who think that IT is all about IT; the same folks who think that shadow IT is (or was) something that needed to be wiped out (as ‘rogue’ IT).

    Usually these are people in IT operations who measure success by stability and uptime, and who see it as a WAR against those that advocate speed.
    If you are one of these people, and you haven’t already realised: the war is over! Shadow IT has won! In fact, from the other side of the fence it never looked like a war at all.

    The only reason IT operations has existed historically is poor application development practices and the ‘split’ of application and infrastructure ownership, the latter being the purview of IT operations. Improved development practices (DevOps removing silos, and Agile practices such as Extreme Programming (XP) improving quality), coupled with infrastructure as code providing the ability to create infrastructure as part of the application, all facilitated by highly automated [cloud] providers, spell the ultimate demise of IT operations as we know it.

    So why is there a sensation in central IT, predominantly IT operations, that they have won? I see two main reasons for this:

    1. A misunderstanding of the new infrastructure paradigm. The belief that infrastructure (as code) is still separate from the applications, and that requests are still made for infrastructure (as a service) from IT operations. WRONG*!
    2. Confusing responsibility with ownership. Integration of applications, purchased or developed, by, or on behalf of, the Line of Business is hard. LoBs have passed back the responsibility for these integrations to ‘central IT’ from an architecture and governance perspective. IT operations has not taken ownership of the applications.

    *[Unless you are a provider of IaaS using all the automated techniques and with virtually limitless capacity. But this is often out of reach of even the largest enterprises]

    The takeaway is: if there is an application involved, it takes precedence over infrastructure; it always has, and always will. The war was over before it even started.

    Read more about what is required to manage the integration of unowned services in the paper below.

    Seeing through the clouds: Oversight at multiple levels

    Find out more about what management of XaaS means to the enterprise

    8 Feb 18

    Don’t know your IaaS from your elbow?

    Although it may seem obvious to some, it is not obvious to others what public cloud (IaaS) providers actually provide. This was evident in conversations with some of my peers.

    I can only think that this stems from the fact that they see the ‘cloud’ as an extension of the on-premise virtual [machine] infrastructure that they know and love; it is not! Amazon Web Services (less so MS Azure) attempted to prevent this confusion by calling its service Elastic Compute Cloud, but since EC2 is a nice abbreviation the nuance is lost.

    Virtual Machines would be a PaaS service, since they have an operating system installed. EC2 and its peer equivalents from Azure and GCP, however, are not; they are just a bunch of compute resources from various ‘blocks’ of infrastructure in a datacentre that only come together as a server at the time of request.

    Cloud providers, in their benevolence, have attempted to make life easy for consumers by allowing them to choose an operating system to put on top (to make the compute useful), the cost of which is included in the pricing. Though this may make it appear that they are providing a VM [with an operating system], they are not. The operating system is the choice of the requester and remains the responsibility of the requester during operation. Patching and compliance, therefore, are the responsibility of the consumer. Although this removes hardware ownership, it still means non-value-add activities are being performed by staff.
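    The split of responsibility is visible in the request itself. Below is a minimal sketch using the parameter names of the AWS EC2 RunInstances call (as exposed by the boto3 library); the helper function and the example AMI ID are my own illustrations, not anything from a provider. The point is simply that the image (i.e. the operating system), and everything that follows from it, is supplied by the requester, not the provider.

    ```python
    def build_instance_request(image_id, instance_type="t3.micro"):
        """Assemble parameters for an EC2 RunInstances call (boto3 naming).

        The provider supplies only the raw compute shape (InstanceType);
        the ImageId, and therefore the operating system, its patching and
        its compliance, is the requester's choice and responsibility.
        """
        return {
            "ImageId": image_id,            # requester-selected OS image
            "InstanceType": instance_type,  # provider-supplied compute shape
            "MinCount": 1,
            "MaxCount": 1,
        }

    # In real use (assuming configured credentials and region):
    #   import boto3
    #   ec2 = boto3.client("ec2")
    #   ec2.run_instances(**build_instance_request("ami-0123456789abcdef0"))
    ```

    Nothing in the provider's side of this call knows or cares what operating system the AMI contains; that is precisely why the maintenance burden stays with the consumer.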

    As a norm, IaaS services should be avoided; higher-level PaaS and FaaS (serverless) options should be the preference of developers, who can then be absolved of maintenance effort. Even then, applications should be developed only by exception; they should be consumed as SaaS wherever possible.

    Seeing through the clouds: Oversight at multiple levels

    Find out more about what management of XaaS means to the enterprise

    31 Dec 17

    Multi-cloud fallacy

    Keeping your cloud options open appears to be an admirable thing to do, since we have been burned by vendor lock-in many times before. Moving to public cloud appears, at first, to be the ideal time to take stock and avoid the same pitfalls. But is it really the same problem?

    Apples vs. Oranges

    Historically, in enterprise organisations, architectural decisions have been made on [falsely] elongated timescales. This has allowed vendors to sell licenses for longer periods by offering heavy term discounts; in a self-fulfilling cycle, this has exacerbated the tendency to lengthen the period between architectural reviews. In the consumption model introduced by public cloud, we are able to change architectures on a far more frequent timeframe, dictated by the movement of the business in response to customer choices. Beware the enterprise architects and subordinates who are willing to rest on their laurels and defend the old-school position.

    We are dealing with apples and oranges with the ‘so-called’ lock-in, since we have moved from a financial lock-in to a functional one.

    Unnecessary Complexity

    To be able to take advantage of any multi-cloud system, a configuration (application or otherwise) must be able to run equally well on any provider’s environment. For this to be true, the underlying environment must be an ‘absolute’ commodity, or be very close to it. If this is not the case, then the effort (read: cost) of maintaining the configuration such that it can support non-identical environments may be more overhead than what is saved by moving to a cloud provider. The only scenario in which multi-cloud may be viable is when you run a containerised application, where the orchestration layer is not provided by the cloud vendor.

    Spiting your face

    There is a certain futility in choosing multi-cloud, as it necessarily means removing your ability to use any chosen vendor’s higher-level functions, limiting yourself to close-to-commoditised infrastructure services (possibly up to database). Higher-level functions are certainly not commodities, and the ability to shift workloads between providers would require significant effort. There is a reason cloud providers are spending far more effort on their higher-level offerings: they ARE the functional lock-in mentioned earlier. If you don’t believe me, perhaps you remember the Oracle-on-Itanium spat between HP (when it was one company) and Oracle. Very few customers were willing to give up their Oracle database because they only wanted to run Itanium servers, though many would trash the server platform to keep the Oracle database; higher-level services ALWAYS win out.

    Choose wisely

    Do not be tempted to replicate your on-prem systems and architectures for cloud-based compute. This will not be a good use of your time and money. Follow these guidelines:

    • Consider your data, how it is used, stored and transferred, and with this knowledge make an informed choice of the highest-level services that will provide the functionality that you need
    • Make decisions on a per (business) service basis in conjunction with the owner of the service (Line of Business). They, after all, have a vested interest in the choices made
    • Keep pace with the changes made by your chosen vendor and optimise accordingly
    • Importantly! Keep abreast of competitive offers. You CAN still change your mind, but this becomes a cost exercise: development cost vs. savings