
Gathering Requirements

What requirements do we need to gather, in what format, and how? FAS-IT shares its experience, lessons learned and other practical advice. 

Overview of Requirements

Requirements dictate the scope and breakdown of work within a project. A project’s requirements can come in a variety of categories and formats, as you will see in the sections below. This page will help you better understand how to gather requirements, organize them into understandable and manageable components of work, and ensure that they are sufficiently detailed to capture the full scope of the work. Some examples of requirements gathering best practices are provided further down on this page by the GSAFleet.gov and PPMS teams. At the end of this page, there are several resources that can help you and your team know when your requirements are complete enough to begin the work.

Categories of Requirements

In most cases, requirements need to be separated into business, functional, and non-functional requirements. Business requirements are focused on the overarching business objectives that the system aims to support. Functional requirements are more focused on user needs, whereas non-functional requirements are more focused on the system’s needs from a compliance and performance perspective. These categories of business, functional, and non-functional requirements are explained in greater detail below.

Business Requirements

Business requirements describe the overarching goals and objectives that the system modernization project aims to achieve. They include alignment with business strategies, cost considerations, ROI expectations, and any specific business process improvements that are expected from the modernization effort.

Functional Requirements

Functional requirements specify the functions, features, and capabilities from a user's perspective and how the system responds to inputs. Typically, these are released at the task order level, are very high level, and need to be broken down further. If the requirements are not sufficiently detailed, this poses a significant risk to the vendor’s ability to deliver on the project.

  • User requirements are closely related and include, for example, usability concerns, UI requirements, accessibility requirements (for users with disabilities), and user training and support needs.

Non-Functional Requirements

    Non-functional requirements generally fall into two types: persistent system qualities and attributes, and design constraints. They define the needs in the following aspects:

    • Performance: How fast a system should respond to requests 
    • Scalability: How well a system can handle an increase in users or workload 
    • Security: How well a system protects against unauthorized access and data breaches
    • Usability: How easy a system is to use 
    • Maintainability: How easy it is to update and modify the system 
    • Reliability: How consistently the system performs without failure, including during periods of significant activity
    • Regulatory compliance: Adherence to applicable regulations and standards, including accessibility
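
    For illustration only, non-functional requirements are usually phrased in measurable terms. The figures below are hypothetical examples, not FAS standards:

    • Performance: 95% of search requests return results within 3 seconds.
    • Scalability: the system supports 2,000 concurrent users with no degradation in response time.
    • Reliability: the system maintains 99.9% availability during business hours.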

    Non-Functional Requirements are usually revisited as part of the Definition of Done (DoD) for each Program Increment (PI) and each Sprint or release.

    Specific focus areas within non-functional requirements include:

     

    • Regulatory and Compliance Requirements: e.g. data protection laws, industry standards
      • Accessibility requirements (Section 508, etc.)
    • Technical Requirements: These outline the technical aspects and constraints of the system. They include hardware specifications (if applicable), software environments, integration requirements with existing systems, APIs to be used, and any technical standards that must be followed.
    • Operational Requirements: e.g. system administration procedures, backup and recovery requirements, disaster recovery plans, monitoring needs, operational constraints.
      • Transition (Migration) Requirements e.g. migration strategies, data migration requirements, training and change management needs, interim solutions during the transition period.
      • Support and Maintenance Requirements: e.g. software updates, bug fixes, help desk support, and service level agreements (SLAs).
    • Environmental Requirements: e.g. operating conditions (temperature, humidity), physical space requirements, or energy consumption limitations.

    Example: Every project is a little different, and the definition of a “complete set of requirements” varies accordingly, but here is an example of categories of non-functional requirements that one recent FAS-IT project defined.

    Categories
    • Browser Verification
      • Performance
      • Compatibility
      • Portability
      • Security
    • Logging & Auditing
      • Access Authentication
      • Secure Audit
    • Reliability, Maintainability & Availability
      • System Security
      • Monitoring and Alerting
      • Performance
      • Compliance
      • Product Support
      • Vendor Onboarding Process
      • Disaster Recovery
      • System Hosting
    • Backup Methods
      • Data Retention
      • Backup Authentication
      • Failover & Redundancy
    • Usability
      • User Interface
      • Reporting

    Methods for Gathering Requirements

    Below are some types of activities that teams can carry out to create the requirements documents described above:

    • Understand the product vision: The product owner should first articulate a compelling vision and strategy for their product, and this should focus on desired outcomes rather than requirements.
    • Interviews, questionnaires, surveys - Read more at 18F
    • Analysis
      • Document Analysis (review documentation for the as-is system, if one exists)
      • Interface Analysis (review the interface of the as-is system, if one exists)
      • Reverse engineer processes
    • Workshops
      • Brainstorming
      • Role-Play
      • Focus Groups
    • Demos of legacy applications (review functionality, perform a gap analysis, understand what users do and don’t like, and identify features that are irrelevant today)
    • User Observation
    • Prototyping

    Formats of Requirements

    Parts of the requirements may require just “plain” text, written in a descriptive, narrative form using natural language to describe what the system should do. Beyond that, here are some formats that requirements are typically captured in:

    • User Stories: User stories are at the heart of Agile development methodologies. They are concise, informal descriptions of a feature told from the perspective of the end user. They capture functional requirements and are used to prioritize development efforts. 
    • User scenarios / Use Cases describe interactions between users and/or systems to achieve defined goals, usually as patterns of actions and expected outcomes. A use case may be a collection of user stories. Read more at 18F
    • Personas are user archetypes based on conversations with real people. Read more at 18F
    • Storyboards are a visual sequence of a specific use case or scenario, coupled with a narrative. Read more at 18F.
    • Wireframes, prototypes, and mockups are other artifacts that help stakeholders visualize the user interface and interactions, capturing both functional and usability requirements.
    • Journey maps, task flow diagrams, etc. Read more at 18F
    • Data models: e.g. entity-relationship diagrams (ERDs), data flow diagrams (DFDs), or data dictionaries, may help define the structure, relationships, and attributes of data entities.
    • Tables and matrices to show relationships, dependencies, and attributes of requirements
    • Constraints and assumptions that impact the system design and implementation
    • Traceability matrices link requirements to their sources and to the system components that satisfy them. They ensure that each requirement is met through the design, development, testing, and implementation phases.
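
    As an illustration (the identifiers below are hypothetical, not from an actual FAS project), a single row of a traceability matrix might look like:

    Requirement ID | Source                        | Design Component | Test Case
    FR-012         | Business need: vehicle search | Search service   | TC-045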

    Where to Document Requirements: FAS Jira Project Template

    Requirements are documented as issues in the corresponding fields of the FAS Jira Project Template (FJPT). The materials linked below will help you learn how to best leverage the FJPT.

    Learn more about Jira and Confluence at FAS-IT

    Learn more about the FAS Jira Project Template (FJPT)

    User Stories

    At FAS-IT, our initiatives each use one of three methodologies to develop User Stories that are well-defined, detailed, and comprehensive, and from which estimates can be derived easily.
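
    Whatever methodology is used, stories generally follow the familiar “As a <role>, I want <capability>, so that <benefit>” pattern. A hypothetical example (not drawn from an actual FAS system): “As a fleet manager, I want to filter the vehicle list by fuel type, so that I can quickly identify electric vehicles for reporting.”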


    Gherkin

    With Gherkin, teams define scenarios and acceptance criteria in a format that both technical and non-technical stakeholders can understand. Each story begins with a context (Given) describing the initial state, followed by an action (When) representing the user's interaction, and concludes with an outcome (Then) specifying the expected result. 
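
    A minimal illustration of the format (the feature and data are hypothetical, not taken from an actual FAS system):

    Scenario: Filter the vehicle list by fuel type
      Given a fleet manager is signed in and viewing the vehicle list
      When the manager filters the list by fuel type "Electric"
      Then only electric vehicles are displayed and the result count is updated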

    3 C’s (Card, Conversation, Confirmation)

    The 3 C’s (Card, Conversation, Confirmation) framework involves writing user stories on index cards (Card), engaging in conversations between stakeholders to elaborate on the details and reach a shared understanding (Conversation), and finally defining acceptance criteria to ensure that the story meets the user's needs (Confirmation).

    Stories Cheat Sheet

    INVEST

    INVEST is an acronym for six characteristics of a great user story. Independent stories can be developed and tested in isolation. Negotiable stories let us collaborate on and refine them. They should deliver Value to the user and be Estimable, so the team can gauge effort and prioritize effectively. Small stories promote incremental delivery and minimize risk, and Testable stories let us validate against acceptance criteria.

    Outcomes and Estimates

    Additional guidance on developing outcomes and estimates for user stories is outlined below.


    Outcomes

    Outcomes are the desired results or impacts that a project or a specific piece of work aims to achieve. They are broader than deliverables and focus on the value created for the end-users or the organization. Outcomes are considered during the definition of epics and user stories. They guide the team's efforts toward delivering meaningful value and help stakeholders understand the broader impact of the work being done.

    A circular diagram showing the 3 outcomes to expect within an Agile project. The first outcome is predictability. Establishing a plan every sprint and committing to the work as a team will drive execution and transparency, making the work more predictable over time. The second outcome is consistency. Sprint throughput will become consistent, allowing teams to conduct capacity planning and commit to the right amount of work based on the team's historical velocity. The last outcome is efficiency. Measuring cycle-time within a sprint and authoring higher-quality requirements will help eliminate process bottlenecks and drive organizational efficiency within the project.

    Estimates

    Estimates represent the team's assessment of the effort required to complete a task, user story, or epic. Estimation helps in planning and prioritizing work effectively. Estimates are an essential part of sprint planning. Teams assign estimates to user stories or tasks, which aids in capacity planning and ensures that the team commits to a realistic amount of work for the upcoming sprint.

    A graph showing how points are estimated for epics, user stories, etc. The point estimate can be calculated by adding the amount of work, the complexity of the work, and the risk and uncertainty associated with the work. As these 3 elements of the work become larger, more points are assigned to that story or epic. Point estimates usually follow a modified version of the Fibonacci sequence, with the smallest possible estimate being 1 point and the largest being 13 points. This encourages stories that have high amounts of work or complexity to be broken down into smaller, more manageable chunks of work.
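
    A minimal sketch of that idea, for illustration only (the 1-5 rating scale and the function below are assumptions made for this example, not an official FAS-IT tool):

    # Rate work, complexity, and risk on a rough 1-5 scale, add them,
    # and snap the sum up to the next value on a modified Fibonacci scale capped at 13.
    FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13]

    def estimate_points(work: int, complexity: int, risk: int) -> int:
        raw = work + complexity + risk
        for points in FIBONACCI_SCALE:
            if raw <= points:
                return points
        # Anything beyond 13 is capped and is a candidate for splitting into smaller stories
        return FIBONACCI_SCALE[-1]

    # Example: moderate work (3), low complexity (1), some uncertainty (2) -> 8 points
    print(estimate_points(3, 1, 2))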

    FAS Examples, Case Studies and Lessons Learned

    The teams below have been tasked with significant requirements gathering efforts within their modernizations. Their insights are shared below for the benefit of other teams that need guidance, now or in the future, on how best to gather requirements in a FAS context.

    GSAFleet.gov

    Modernization Context

    The Fleet team prioritized the Data Architecture and migrating applications off the Mainframe. The team quickly learned that, due to the complexity of migrating 19 legacy systems, a primary focus had to be on data and on redesigning business rules to meet the business needs and address the constraints imposed by the data in the legacy applications.

    This may sound different from many of the theoretical recommendations for requirements gathering, which often focus on starting with the customer experience. But in this case, the data was a prerequisite and needed to be completely restructured, understood, and explained first.

    Capturing Requirements

    To address the challenges with the system data and business rules, the team developed the AIC (as-is consolidated) database to capture and reorganize the data into a format that could be put into the Cloud. In conjunction with the AIC database, the team built a data dictionary over the first two years of the project. This saved a lot of development time and made moving the data into the cloud much easier. Once fundamental aspects of the Data (availability, structure, architecture) were clear, the team focused on the User Experience, “back-adjusting” the other requirements as necessary to optimize customer experience. 

    GSAFleet.gov captured the requirements using many of the standard issue types from the FAS Jira Project Template (FJPT), including Portfolio Epics, Epics, and User Stories. The team learned that it is important for GSA staff to write the user stories because they have the expertise; contractors lack prolonged familiarity with the systems and may write stories that resemble test scripts more than user stories.

    The team also used a separate artifact maintained outside of Jira called an Experience: a custom document that the team used to capture business needs that crossed multiple Portfolio Epics, such as “Manage a Vehicle”. Specifically, the Experience Plan includes an overview of the customer experience involved, the epics associated with it, the business and operational changes being implemented as part of the experience, the anticipated impact, the training required to carry out the changes, and a centralized location for stakeholders to sign off on the changes.

    The primary value of the Experience document is that it promotes a business-focused mindset instead of a “legacy mindset,” where applications are developed based on how they have historically been built rather than how they can best serve the business needs. Additionally, an Experience document enables the development teams to adopt a customer-centric approach that considers business needs at every step of development, ensuring full alignment and understanding among stakeholders on the business needs and the changes needed to meet them.

    Once the Experiences were completed, the team transferred the contents into the FJPT in Jira and evaluated their progress using Earned Value Management, a technique that measures project performance against planned and actual values and enables project managers to adjust accordingly.
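
    For reference, standard Earned Value Management compares Earned Value (EV) with Planned Value (PV) and Actual Cost (AC); for example, a Schedule Performance Index of SPI = EV / PV and a Cost Performance Index of CPI = EV / AC, where values below 1 indicate work that is behind schedule or over budget. (This is general EVM practice, not a detail reported by the GSAFleet.gov team.)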

    Lesson Learned: Focus on the Customer Experience, but be flexible if, for foundational reasons, you need to consider data or other parts of the architecture first.

    Formats

    GSAFleet.gov gathered requirements in the following formats – the Experience Plan and the Feature Document being two that the GSAFleet.gov team has tailored to their project’s specific needs:

    • Roadmap — this is driven by Product Management
    • Product Definition Documents
    • User stories
    • “Experience Plan” (for multiple Experiences, esp. if a business process is changing)
      • Includes the experience of the customer, the epics associated with it, the business and operational changes involved (including changes in data governance), anticipated impact, training required from a change management perspective, and stakeholder sign-off.
      • Stakeholders detail what is changing from a business standpoint, and thus ensure understanding across the teams. All major stakeholders must sign off before development starts.
      • Also links back to Data Dictionary to show what data is involved in the Experience
    • Data dictionary – an Excel document with tables, data domains, data elements, and full descriptions of all the data and tables (a small illustrative entry follows this list)
      • Serves as the source of truth for the modernization, since all Experience Plans and Issue Types in Jira trace back to it.
    • Traceability matrices
    • “Feature Document”: A document describing what is in a feature; it defines scope (e.g., that a feature doesn’t include Europe) so that stakeholders can sign off on it
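
    As an illustration of the data dictionary format mentioned above (the table and element names are hypothetical, not taken from the GSAFleet.gov data dictionary), an entry might look like:

    Table   | Data Element | Domain    | Description
    VEHICLE | fuel_type    | Code list | Fuel category of the vehicle (e.g., gasoline, diesel, electric)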

    Stakeholder roles

    • The Business Owner owns the product vision, modernizing systems and processes.
    • The Product Manager is assigned the development of strategy to execute the vision.
    • The Product Owner is responsible for the day-to-day execution of the strategy.

    In Summary

    • For this project, there was a big focus on data first before the system functionality - understanding the data and mapping every story to the data dictionary and the AIC (as-is consolidated) database.
    • Encourage development teams to move away from a legacy mindset and toward a product mindset that focuses on business needs rather than on how things have historically been done.
    • Put documentation and mechanisms in place to help with stakeholder buy-in, especially for big changes; ensure that all stakeholders agree and sign off before beginning development.
    • Experience documents are helpful for understanding the user experience and how changes provide business value to users.

    Personal Property Management System (PPMS)

    Modernization Context

    The GSA Office of Personal Property Management (PPM) uses multiple systems that reside on Unisys ClearPath mainframe architecture. PPM depends on 8 disparate applications to manage the asset disposal life cycle of federal personal property assets. The applications were originally architected on legacy mainframe-server technology and built in Common Business Oriented Language (COBOL). The 8 legacy systems are:

    • CFL
    • GovSales
    • GSA Auctions
    • GSAXcess
    • SASy – Sales Automation System
    • MySales
    • ePay
    • Warehouse Inventory

    Personal Property Management is currently transforming the applications using emerging open-source technologies and a cloud platform suited to the needs of all PPM functions. These systems and tools are integrated internally and externally with each other and with several other applications, including GSA systems, third-party payment systems (TPPSs), and private and other federal agency systems.

    Capturing Requirements

    The team needs realistic time frames to collect requirements of sufficient depth.

    • At the start of the project, it was decided to develop only the Minimum Viable Product (MVP) and it was challenging to know what level to decompose the requirements to based on the MVP. This ambiguity also left room for additional requirements to be introduced in each phase of the development, resulting in scope creep and ultimately impacting the team’s ability to meet the original schedule and cost projections. A legacy requirement traceability matrix (RTM) was also not started and finalized before the beginning of the project.
    • The team learned a lot at the beginning of the project from the high-level requirements captured, which were missing the level of detail needed for the development team to hit the ground running.

    Review the legacy system and any documentation that exists for it.

    • Started with 50,000 foot view documentation and then drilled down story by story in phases.
    • The Challenge: There was very limited formal documentation of legacy systems and of processes.

    Lesson Learned: Create a requirements checklist.

    There are several factors to keep in mind when writing the requirements for a story. Not only do you need to capture what you want to accomplish, but also what should not happen (and other “negative” results). It’s challenging to capture both of those elements and keep them in mind for every story. In retrospect, the team determined it would have been helpful to have a list of questions like the examples below to help define the user stories.

    • What do you need to accomplish? Clearly define the project scope without ambiguity by documenting requirements and related business processes in as much detail as possible.
    • Does this affect only one screen, multiple screens, a batch process, or internal and external integrations with other systems? (e.g., sorting capabilities on various search pages)
    • Is a comprehensive data migration process needed, including mapping the legacy data and cleansing it for the new system?
    • What users will be affected? Who can use the functionality? Are any users exempt from the functionality?
    • Is an infotip needed?
    • Is an action history needed?
    • Do you need to print the screen / report / search results / etc?

    Formats

    • Master Requirements Sheet: The PPMS business line has a living master requirements sheet mapped to each MVP.
    • Epics and Stories: Requirements were converted to epics and stories. All of that was done in Google Sheets. Once the award came, everything was bulk loaded into Jira. Any story 8 points or more was broken down further. 
    • Traceability Matrix: The team ultimately did not rely on one due to the complexity of moving from 8 legacy systems to 1 modernized system.
    • Wireframes and prototypes are still created today, even post-modernization, to manage changes to the user experience during requirements gathering. The team has also leveraged PoCs, flow diagrams, and data models.

    Stakeholder Roles

    The PPMS business line and IT team worked collaboratively to build and capture requirements.

    • In several aspects of the project, this proved to be helpful because the business line and the IT team were able to collaborate to identify improvements to the processes and business logic that could be incorporated into the modernized system. 
    • This approach was also helpful because it helped the teams focus on customer needs and address customer pain points that were identified through the business line’s interviews with PPMS customers. 
    • Defining the MVP also helped the team have a clearer approach to the testing process. Anything that did not go into the MVP was scoped for the “MVP Plus” and moved to the backlog.

    In Summary

    • Overall, the project depended on the completeness of requirements that were based on experience, without any formal legacy documentation. It is hard to know when requirements are complete enough; they are always fluid. That is the case for pretty much every application that lacks legacy documentation.
    • Setting expectations is the number one thing that helps to create the Definition of Done and be able to have a good go-live.
    • Lesson learned about scope creep: Having a hard deadline and a clear definition of done limits the risk of scope creep for the initial MVP. Additional requested changes and other constraints the team faced could be reviewed and applied to the “MVP Plus”. This enabled the team to focus on the most important functionality and process changes in the initial go-live. (Note: The team planned a big-bang release, combining 8 systems into 1, due to the complications of maintaining the legacy and modernized systems in parallel, along with the cost and time constraints.)
    • Lesson learned: The team should err on the side of being more specific with the requirements in the beginning. The team started with very high-level requirements, but as early-stage development began, it realized that more detail was needed.
    • For a more comprehensive description of the PPMS modernization initiative, please click here.

    Templates


    Cloud Infrastructure Shared Services (CISS) provides a choice of Requirements Gathering templates for developing Cloud Strategies that help potential new tenant applications move through the Cloud Smart Journey. Each CISS customer will receive guidance about which requirements document templates to use for their project. The templates are a great way to understand what a requirements set should include. Links to some of the templates used by CISS are provided below. For more information, visit the Cloud Smart Journey page.


    NASA Checklist

    NASA publishes an excellent, comprehensive checklist for requirements for projects (including non-IT projects). Areas covered include the following. Access the checklist here.

    A series of attributes included in NASA's comprehensive requirements checklist. The attributes are: General, Clarity, Completeness, Compliance, Consistency, Traceability, Correctness, Functionality, Performance, Interfaces, Maintainability, Reliability, Testability, and Data Usage.

    Other Templates

    A template for Definition of Done and Definition of Ready has recently been developed for various levels of work within a project: roadmaps, PIs, sprints, and stories / tasks. Check it out here.