Once a change request is approved, the implementation process begins. This phase involves the technical and organizational stakeholders identified and assigned during the previous stages of the IT change management process, and it will likely trigger a detailed workflow with many moving parts.

At this point, the change request has gone through a thorough vetting process, including close scrutiny, review, and approval. The time has come to build the solution, test it, and deploy it into the wild. Easier said than done, as each of these steps involves a lot of people and, consequently, a fair amount of risk. How organizations manage implementation and post-implementation often makes all the difference.

Mitigating risk during build, testing, and staging

Though every step in the IT change management process should help regulate and reduce risk, the implementation phase is especially important. During this phase, technical teams move closer to taking action on the change request, including making actual changes to the IT infrastructure. It’s important that organizations mitigate risk during this phase to avoid releasing bugs and other vulnerabilities out “into the wild,” common pitfalls that arise during implementation. To avoid these pitfalls, here are some common approaches, usually used together in some combination:

  • Unit Testing: A common fundamental testing procedure, unit testing evaluates a specific part of an IT change, such as a function or process. If new software is being deployed, for example, the team may test a specific procedure carried out by that software. Is every “unit” performing as planned? A detailed unit testing plan and procedure can help teams automate this process to the extent possible, so that unit testing is never overlooked.
  • Static Code Analysis: Call it a dry run. Call it a developer’s best friend. Essentially, static code analysis evaluates code before it is executed, which can help identify shortcomings, vulnerabilities, and other risk factors that need to be addressed before moving to the next stage in testing.
  • Dynamic Code Analysis: As opposed to static code analysis, dynamic code analysis looks for issues after code is run. Each approach can reveal vulnerabilities the other cannot, and both are often automated using dedicated tooling, such as SonarQube for static analysis.
  • Integration Testing: Usually, updates to software or additions of new software to the IT infrastructure will require new interfaces and integrations with existing systems. Integration testing is used to ensure that all of these systems play together seamlessly, and to identify any problems or potential show-stoppers. 
  • Quality Assurance Testing: QA is a broad category of testing that ensures the quality of the application is upheld and implements processes so that future rollouts will meet established standards.
  • User Acceptance Testing: User acceptance testing (UAT) puts the results of a change request in the hands of the end-user for “real-world” testing. How does it hold up against the actual way that end-users interact with it, and does it perform as specified (and expected) during the change request drafting and review phase? UAT is sometimes referred to as beta testing.
  • Regression Testing: Before moving new code or software or software updates to production, it’s essential to understand how this change impacts the live application. Regression testing typically includes automated tests for critical processes and previously documented issues that confirm the system operates as intended following a change.  
  • Vulnerability Assessment: Organizations must scrutinize changes to the application and analyze any vulnerabilities that a change creates. These might include the identification and prioritization of any “attack vectors” so that the appropriate remediation action is taken before the update is deployed to production.  
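To make a couple of these test types concrete, here is a minimal sketch of a unit test and a regression test using Python's built-in unittest module. The `apply_discount` function and its expected behaviour are hypothetical stand-ins for whatever "unit" your change request actually touches.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    # Unit tests: is this one "unit" performing as planned?
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

    # Regression test: pin down a previously documented issue
    # (here, a hypothetical rounding bug) so a future change
    # cannot silently reintroduce it.
    def test_regression_rounding(self):
        self.assertEqual(apply_discount(10.0, 33), 6.70)
```

Run with `python -m unittest`. Regression cases like the rounding check above are good candidates for the automated suite that runs before every migration to production.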

Migrating changes to production  

Upon completion of the analysis and testing activities, the change request is approved and ready for release.

Generally, you don't want the same person who developed a change to be the one who tests it or migrates it to production. A "Separation of Duties" (SOD) policy is enforced to ensure that no one can implement changes without other people signing off, preventing bad changes from being deployed maliciously or by accident.
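As an illustration, a change-ticket workflow could enforce an SOD policy with a simple guard like the one below. This is a hedged sketch, not any particular tool's API; the field names (`developer`, `tester`, `deployer`, `approvers`) are hypothetical.

```python
def sod_violations(change):
    """Return a list of Separation-of-Duties violations for a change ticket.

    `change` is a dict with hypothetical fields: 'developer', 'tester',
    'deployer', and 'approvers' (a list of sign-off user IDs).
    """
    violations = []
    if change["tester"] == change["developer"]:
        violations.append("developer cannot test their own change")
    if change["deployer"] == change["developer"]:
        violations.append("developer cannot migrate their own change")
    # Require at least one sign-off from someone other than the developer.
    if not any(a != change["developer"] for a in change["approvers"]):
        violations.append("change needs sign-off from someone other than the developer")
    return violations


# Example: the same person developed the change and plans to deploy it.
ticket = {"developer": "alice", "tester": "bob",
          "deployer": "alice", "approvers": ["carol"]}
print(sod_violations(ticket))  # one violation: deployer == developer
```

A workflow engine would run a check like this before allowing a ticket to advance to the migration stage, blocking the release until the violations list is empty.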

Tips for a successful migration: 

  • Prepare written deployment steps, including activities that can be done in advance.
  • Prepare written rollback/backout steps including contact information for key people.
  • Establish a deployment time-frame or "window" during which users can anticipate downtime or restricted access.
  • Inform users/stakeholders about an upcoming release or maintenance period. Notify users/stakeholders upon completion.
  • Where possible, implement changes that are not visible to users first for a quicker, less-impactful release.
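The tips above can be sketched as a scripted deployment with an explicit backout path. This is a minimal illustration under stated assumptions, not a real deployment tool: the step names and callables are placeholders for your own documented deployment and rollback actions.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("deploy")


def deploy(steps, rollback_steps):
    """Run written deployment steps in order; on any failure, run the
    written rollback/backout steps for whatever completed, in reverse.

    `steps` is a list of (name, callable) pairs; callables raise on
    failure. `rollback_steps` maps step names to backout callables.
    """
    completed = []
    try:
        for name, action in steps:
            log.info("running step: %s", name)
            action()
            completed.append(name)
        log.info("deployment complete; notify users/stakeholders")
        return True
    except Exception as exc:
        log.error("step failed: %s", exc)
        for name in reversed(completed):
            backout = rollback_steps.get(name)
            if backout:
                log.info("rolling back: %s", name)
                backout()
        log.info("rollback complete; contact key people")
        return False


# Example: each step and its backout is written down in advance,
# as the tips suggest. These placeholder steps only log.
steps = [
    ("back up database", lambda: log.info("backup taken")),
    ("apply migration", lambda: log.info("migration applied")),
]
rollbacks = {"back up database": lambda: log.info("backup restored")}
deploy(steps, rollbacks)
```

Keeping the step list and rollback map in version control gives you the "written deployment steps" and "written backout steps" artifacts the checklist calls for, and the notification hooks can be wired to the same logger.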

 

Automating the change management workflow is about more than efficiency – it’s about effectiveness. Myndbend Process Manager can guide your team through the change management process from request submission to the post-implementation review, ensuring that critical steps aren’t partially completed or lost in a backlog.

MICHAEL SCHRAEPFER

AWS-SAA, MCP, MCITP, MCSE, MCSA, ITIL

Essendis

The post-implementation phase

Just because a change has been implemented does not mean that the work is done. Far from it. What is likely to follow is a thorough post-implementation review, which examines how well the change request solved the problems or issues it was designed to address. Stakeholders usually test the change immediately upon deployment to production; however, 90 days is a typical timeline for the full post-implementation review.

To give you an idea of how this might look, we’ve included sample questions that you can use as a reference:

Post-implementation review sample questions

  • Is the implemented change functioning in production as expected?
  • Do reports, such as Google Analytics, show expected user behaviour?
  • Has the change been documented and communicated to users?
  • Has the support team been trained and given anticipated questions?
  • Has a plan been put into place for deferred features or bugs?
  • Has the team conducted a retrospective meeting? 
  • What went well? What can be improved upon?

 

Unsuccessful implementations do happen, and it is important for organizations to have contingency workflows in place to determine what caused the implementation to fail, as well as how to avoid a similar outcome during future rollouts. Depending on the magnitude of the problem, this could also trigger Business Continuity, Disaster Recovery, or Incident Response procedures. An organization might also have in place rollback or reverting procedures or steps for applying hotfixes to address post-implementation bugs or vulnerabilities. Whereas successful change requests can be closed—including all child tickets—unsuccessful implementations should trigger workflows for unsuccessful, backed-out, or cancelled RFCs. 

All of this can be detailed and automated using IT change management software such as Myndbend Process Manager:

  • Myndbend Process Manager's templates can be used to create tickets in Zendesk with implementation and post-implementation steps. Multiple tickets can be created at once or sequentially.
  • Schedule "Change Requests" for the future.
  • Automatically create recurring "Change Requests".
  • Automatically (or manually) add approvers based on the change requested.
  • Advance a change request based on the status of approvals or related tickets.