Personal Property Management Systems (PPMS) Lessons Learned

Purpose

The purpose of this document is to capture what worked well and what did not on the PPMS Project. Lessons learned define what the team would do differently if it performed this project again. This document will assist similar teams through shared knowledge and enhance best practices. A successful Lessons-Learned session will help project teams repeat desirable outcomes and avoid undesirable ones.

PPMS Project Lessons Learned

Biggest Successes

Description | Factors that Promoted this Success
Cloud Adoption | At PPMS commencement, FAS and FAS-IT did not have significant experience with cloud technology, in particular the AWS-based cloud technology to be used by this essentially ‘green field’ application. Through the PPMS team’s decision to utilize a Virtual Private Cloud (VPC) and the experience REI Systems (REI) brought to the program, there were few, if any, issues with utilizing this technology / infrastructure.
Intuitive UI Design | Users who interacted with the PPMS system were impressed with the user interface, the look-and-feel of the screens / interaction points, and the methods used to automate many of the business processes.
Agile Ceremonies were routinely held and attended by relevant stakeholders. | To enhance communication and ensure team members were on the same page, agile ceremonies were routinely planned and well attended. These included daily standups, Sprint Planning Meetings, Sprint Demos, and Retrospectives. Planning meetings provided time to discuss in-scope requirements (user stories) to ensure the agile team understood the intent of the requirement and associated acceptance criteria. Notably, the retrospectives provided opportunities to reflect, provide feedback, and adopt processes for continuous improvement.
Weekly Meetings for IT and Business Stakeholders | Each week, the Business Line and IT teams met to discuss progress of the PPMS program. This was an opportunity for candid discussion relating to impediments, priorities, and performance. These meetings improved transparency and collaboration between the business and IT teams and sharpened the focus toward critical activities required for the Go-Live release.
Hands-on Testing by the Business Lines | Prior to the project restart (May 2022), Product Owners were not able to conduct hands-on testing until User Acceptance Testing (UAT). This was cited as an area for improvement. After the project restart, testing practices were improved: Product Owners participated in the sprint demos and were also able to conduct hands-on testing in the Test environment prior to accepting a user story. Product demos were provided; however, they were not a substitute for customer acceptance.
Coding Practices | Coding practices demonstrated that the PPMS code met critical criteria, including:
  • Functional - The code delivers expected functionality, and it works.
  • Consistent - The code is written with a consistent style.
  • Maintainable - The code is easy to understand.
  • Testable - The code is testable (although gaps in automated test coverage were evident; a brief unit-test sketch follows this table).
UAT planning and Test scripts | When issues or problems occurred, the appropriate team resolved or addressed them right away, which kept the UAT sessions moving forward smoothly.
Soft launch was deployed prior to the first substantial production release. | Deploying a ‘soft launch’ to production gave business and end users an opportunity to exercise the system in a live environment before the URL was released to the full user community, and it helped avoid major configuration or other changes that could have impacted the production timeline.
Training / User Guides | Effective training, along with detailed and informative user guides, was a key to success, ensuring that end users were well trained and familiar with PPMS.
Creation and maintenance of the PPMS Quality Assurance Surveillance Plan (QASP) improved the timing of delivery of project deliverables. | After the project was restarted in May 2022, a QASP was introduced as a means of improving the timeliness of project deliverables. This was successful in that it established a consolidated list to track all project documents, when they were due, and when they were delivered. The QASP was considered a huge success in ensuring that all documents were delivered, reviewed, updated with comments, and finalized.
Communication and knowledge on the requirements | Communication was a key to success, as was having the appropriate resources in the requirements workshop / refinement sessions.
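
The "Testable" criterion in the Coding Practices row above is easiest to see with a small example. The Python sketch below is purely illustrative and assumes nothing about the actual PPMS code base: the function name, the validation rule, and the test are invented for this example. It shows the kind of small, dependency-free unit plus automated unit test that would help close the automated-testing gap noted in that row.

```
# Hypothetical illustration of "testable" code: a small, pure function with no
# hidden dependencies, plus a unit test that exercises it. The function name and
# validation rule are invented for this sketch and are not taken from PPMS.

import re


def validate_asset_tag(tag: str) -> bool:
    """Return True if the tag is six alphanumeric characters (an illustrative rule only)."""
    return bool(re.fullmatch(r"[A-Z0-9]{6}", tag.strip().upper()))


def test_validate_asset_tag():
    # Happy path: a well-formed tag is accepted.
    assert validate_asset_tag("ab12cd")
    # Negative cases: wrong length and invalid characters are rejected.
    assert not validate_asset_tag("ab12")
    assert not validate_asset_tag("ab12c!")


if __name__ == "__main__":
    test_validate_asset_tag()
    print("All checks passed.")
```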

Biggest Challenges

Description | Net Effect on Project
The project schedule did not account for functional requirements decomposition, and new requirements were accepted without being fully vetted, which caused scope creep. | At the start of the project, requirements were not sufficiently analyzed and decomposed in a number of areas, leading to incomplete or, in some cases, incorrect automation. Additional requirements were introduced, resulting in scope creep and ultimately impacting the schedule and cost. Allowing time for the business analysts to hold facilitated requirements sessions well ahead of sprint assignment would have ensured the backlog was refined and user stories met the Definition of Ready. Missing requirements and changing priorities resulted in cost escalations and schedule issues.
Data Management | Legacy data management and migration were addressed late in the project. Due to time constraints, data profiling and cleansing were minimized, and data migration efforts resulted in multiple issues with data quality. A more organized and structured approach should have been defined early in the project, as all legacy data sources were known.
Unrealistic Approved Budget | The initial approved budget was wholly insufficient to meet the program’s MVP requirements ($8M provided vs. a ROM of $20M). The program, IT, and vendor struggled to “make this funding work” while addressing statutory and business requirements. This predictably led to cost overruns that subsequently had to be addressed. Justifying and obtaining approval for additional funding distracted leaders and took time away from managing the project.
Unrealistic Project Timeline | The initial project timeline was insufficient to custom-develop, test, and deploy a system of this magnitude. This may have been partly due to the insufficient decomposition of requirements and the unrealistic budget. The breakneck pace of the project led to SME burnout, sacrificed critical project management and control activities, and caused errors in communication.
Improved testing practices are required to reduce bugs discovered by Product Owners and avoid post-production defects reported by customers. | Overall, some testing practices were curtailed due to time and resource constraints. PPMS should implement regular regression testing and begin automated testing where appropriate.
Communication and knowledge on the requirements | Communication was a key to success, as was having the right technical individuals, with adequate knowledge, in the requirements workshop / refinement sessions.
Resource Bandwidth | More resources with a depth of business knowledge were needed to align to the schedule demands. The Product Owners participated in the project as “other duties as assigned”; they had never participated in a development effort of this size before and did not have IT expertise. Additional, dedicated PO resources should be obtained in order to successfully manage the business priorities and activities.
Ensure decision makers actively attend the appropriate meetings in order to avoid delays in progress. | Clearly identify roles for decision making and strengthen the role of the Product Owners.
Limited testing by Agencies (UAT) for Batch / Web Service | Even though sufficient time was given, many Agency users failed to connect, and the few that did performed little detailed testing.
Establish a Project Management Office | Initially, two separate GSA business lines were part of the project in addition to PPM (three total). PPM was not sufficiently resourced to manage the requirements of the other business lines, yet it had to assist in managing requirements for business processes and customer needs it was not familiar with. A dedicated PMO should be established to manage the development of enterprise applications and ensure appropriate resources are available to meet the needs of all stakeholders.

Potential Areas of Improvement

Description | Possible Mitigation
Requirements need to be clearly defined and as detailed as possible to avoid cost overruns | Functional and non-functional requirements need to be thoroughly reviewed and vetted with the integrated team.
Good use of resources | Attendance can be limited to those who can contribute to the project.
List of Potential Requirements for each story | There are several factors you need to keep in mind when writing the requirements for a story. Not only do you need to write about what you WANT to accomplish, but also what should NOT happen (and other “negative” results). It’s challenging to track all of that and keep it in mind for every story; for this reason, we often missed requirements. I wish I had had a template to work from that would at least capture basic things to consider. For instance:
  1. What do you need to accomplish?
  2. Does this affect only one screen, or multiple screens? (e.g., sorting capabilities on various search pages).
  3. What users will be affected? Who can use the functionality? Are any users exempt from the functionality?
  4. Is an info tip needed?
  5. Is an action history needed?
  6. Do you need to print the screen / report / search results / etc.?
And so forth. I think if I had had a general set of questions to consider for each story, I would have missed fewer requirements that I subconsciously assumed would be included (like action history). A minimal sketch of such a checklist follows.
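
As a concrete illustration of the template wished for above, here is one possible shape for a per-story checklist, written in Python as a small data structure so it could be versioned alongside other project artifacts. It is only a sketch: the class, field names, and example story are invented for this illustration and are not an established PPMS or GSA template.

```
# One possible shape for a per-story requirements checklist, mirroring the
# questions listed above. Illustrative only; not an official PPMS or GSA template.

from dataclasses import dataclass, field
from typing import List


@dataclass
class StoryRequirementsChecklist:
    goal: str                                                   # 1. What do you need to accomplish?
    screens_affected: List[str] = field(default_factory=list)   # 2. One screen or several?
    users_affected: List[str] = field(default_factory=list)     # 3. Who can use it? Who is exempt?
    negative_cases: List[str] = field(default_factory=list)     # What should NOT happen?
    needs_info_tip: bool = False                                # 4. Is an info tip needed?
    needs_action_history: bool = False                          # 5. Is an action history needed?
    needs_print_or_export: bool = False                         # 6. Print the screen / report / results?


# Example usage for a hypothetical sorting story.
story = StoryRequirementsChecklist(
    goal="Add column sorting to search results",
    screens_affected=["asset search", "report search"],
    users_affected=["all authenticated users"],
    negative_cases=["sorting must not reset active filters"],
    needs_action_history=False,
    needs_print_or_export=True,
)
```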
Conduct post-sprint testing | During phases 1-6 of the project, the vendor was unable to deploy working code to a GSA environment for Product Owners to test functionality at the end of each sprint. Instead, testing was conducted at the end of each phase (of six sprints). This inhibited the project team from testing functionality and identifying and correcting issues earlier in the development process. It also resulted in many defects being discovered during UAT that could have been remedied by the vendor earlier.
Require automated regression testing | The vendor was unable to automate regression testing due to cost and schedule constraints, so regression testing was conducted manually. The manual regression testing often failed to uncover all issues, and the Product Owners felt that their UAT testing also served as regression testing because of the large number of defects discovered. Not automating regression testing also dramatically increased the level of manual effort required to test DM&E enhancements, which consumed an exorbitant amount of DM&E resources.
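
To make the recommendation above concrete, the sketch below shows what a minimal automated regression check might look like in Python using pytest conventions and the requests library. The base URL, endpoint path, query parameter, and response fields are placeholders invented for this example; they do not describe the actual PPMS batch or web service interfaces.

```
# Minimal sketch of an automated regression check for a web service endpoint,
# runnable with pytest. The URL, path, and response fields are placeholders
# for illustration only and do not describe the real PPMS interfaces.

import requests

BASE_URL = "https://test.example.gov/ppms-api"   # placeholder test environment


def test_search_endpoint_contract():
    """Regression check: the search endpoint still responds and keeps its basic contract."""
    resp = requests.get(f"{BASE_URL}/assets", params={"status": "available"}, timeout=30)
    assert resp.status_code == 200

    body = resp.json()
    # Fields assumed for this sketch; a real suite would assert the agreed schema.
    assert isinstance(body.get("results"), list)
    for item in body["results"]:
        assert "assetId" in item and "status" in item
```

Run automatically on every build, even a small suite of such checks would catch contract breaks before they reach UAT or production, reducing the manual regression burden described above.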